Q: Keep CSS3 hover state when unhovered

I'm trying to keep the magnified lens in the hovered state when the user clicks the input field, instead of having it return to the question mark. I've tried the following jQuery, but it doesn't seem to work:

$(".srchbtn").on("mouseover", function() {
  $(".srchbtn").addClass('hover');
});

Here's a link to the CodePen: http://cdpn.io/hcadj

A:

*You mention you want your action to happen "when the user clicks", but you bind the handler to the mouseover event instead of the click.
*You add a class named hover to the element, but you have no CSS rule using that class.

So try changing your CSS rule to

.srchbtn:hover, .srchbtn.hover { /* add this line and the comma in the previous one */
  /* your properties here */
}

(you might even want to change the first selector to .search:hover .srchbtn) and change your script to

$(".srchbtn").one("click", function() {
  $(".srchbtn").addClass('hover');
});

Demo at http://codepen.io/gpetrioli/pen/IeEmd

A:

*The problem is the CSS. You use :hover, which is only active while the mouse is over the element. With jQuery you can use .mouseenter() to apply the styling when the pointer enters, without it going away when the pointer leaves:

$('.srchbtn').mouseenter(function(){
  $(this).css({"display":"block","height":"10px","width":"10px"});
});

The display, height, and width values are just examples of CSS you could apply only on mouseenter without it reverting (as it would if you also used mouseleave()).

*Alternatively, you can make the question field a child of the question-mark div. That way, when you hover over the question field you are technically still hovering over the question mark itself, so the :hover state still applies.

Pick either!

Edit: after re-reading your question, you might want to go for option 2, since when the user focuses on neither the question mark nor the question field, it will revert to its non-:hover state.
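The accepted technique (persisting the hover styling by adding a class on the first click) can also be sketched without jQuery. This is a minimal illustration, assuming an element matching .srchbtn and a .srchbtn.hover CSS rule as described above; the function name makePersistHover is mine, not from the answers:

```javascript
// Sketch: keep the "hovered" styling after a click by adding a persistent
// class. Requires a CSS rule like `.srchbtn:hover, .srchbtn.hover { ... }`
// so the added class mirrors the :hover styling.
function makePersistHover(el) {
  // { once: true } removes the listener after the first click,
  // mirroring jQuery's .one("click", ...)
  el.addEventListener("click", () => el.classList.add("hover"), { once: true });
}

// In a browser you would call:
//   makePersistHover(document.querySelector(".srchbtn"));
```

Because the class is never removed, the styling survives after the pointer leaves, which is exactly what :hover alone cannot do.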
Fort Ticonderoga will present the Eleventh Annual Scots Day on Saturday, June 16. The celebration of Scottish history, heritage and culture runs from 9:30 am to 5 pm. Tour the Scottish Clan tents to discover more about your own Scottish connection and explore centuries of stories, based on Scottish soldiers in the British Army, through a military timeline offered throughout the day. Also, be sure to check out Border Collie demonstrations, special tours, Carillon boat cruises, pipe band performances, and march to the Carillon Battlefield for a remembrance service. To learn more about the event, participating vendors and clans, and the full schedule, visit www.fortticonderoga.org or call 518-585-2821. A special memorial ceremony honoring the 42nd Highland Regiment, also known as the Black Watch, will take place at the Scottish Cairn on the Carillon Battlefield located at Fort Ticonderoga. The procession to the Cairn will begin at 11:20 am. The Memorial Ceremony will take place at 11:30 am and will remember the incredible bravery and discipline of the Black Watch against insurmountable odds at the 1758 Battle of Carillon. As the highlight of the 1781: A War Not Yet Won exhibit in the Mars Education Center, this presentation led by Curator Matthew Keagle will focus on the fascinating story of the Royal Highland Emigrants and the rare example of their camp color, on display for 2018. This program and the flag conservation was made possible, in part, by the Essex County Arts Council's Cultural Assistance Program Grant with funding provided by Essex County and a 2018 Making of Nations Grant from the Champlain Valley National Heritage Partnership. Hear the sounds of Scottish bagpipe music throughout the day as the Leatherstocking District Pipe Band perform lively concerts on the fort's historic Parade Ground. Join Jim McRae of Green Acres Farm and his Border Collies at 12:00 pm and 1:45 pm as they demonstrate sheep herding. 
See these real working dogs show their skills to move herds of sheep from pasture to pasture! Black Watch Military Living History Programs! Discover the history of the Black Watch Regiment through living history programs presented throughout the day. Highlighted programs include a living history timeline of the Regiment. The re-enacting group depicts its history from the 18th century through the early 21st century, with various members representing different significant points in the unit's history. Learn about the incredible bravery and discipline of the Black Watch against insurmountable odds at the 1758 Battle of Carillon. Photo: Celebrate the Scot in you Saturday, June 16, 2018 at Fort Ticonderoga with special programs and ongoing activities.
Beatus Apocalypses, also called Beatus manuscripts or simply Beatus, are illuminated manuscripts containing a commentary on the Apocalypse attributed to Beatus of Liébana. Most Beatus manuscripts were produced between the 10th and 12th centuries in northern Spain. Twenty-six illuminated manuscripts, along with some text-only copies, survive wholly or in part. These extensive, large-format codices with their numerous richly colored miniatures (a complete Beatus contains more than 100) rank among the most important masterpieces of Spanish book illumination and have been inscribed in the Memory of the World register since 2015.

Origin and use

The older surviving Beatus codices all come from the Christian north of the Iberian Peninsula, above all from the Kingdom of León. Both makers and patrons were almost exclusively monastic. The Beatus Apocalypse had no place in the liturgy; it served chiefly the private devotion and edification of monks. The scratched-out eyes and faces of the devil or the Whore of Babylon found in some manuscripts testify to the profound impression the work made on its pious readers. Some of the splendid codices probably also served as prestige objects and status symbols and were little read: they show few signs of use (glosses, marginal notes) and are excellently preserved.
Contents

With slight variations, the Beatus manuscripts contain the following collection of texts: genealogical tables; a preface (Praefatio) with a dedication to Etherius of El Burgo de Osma; two prologues attributed to St. Jerome; the Interpretatio, a brief commentary on selected passages of the Apocalypse; the main part, the Commentarius in Apocalypsin, divided into twelve books; the Explicit, which enumerates the components of a book (codex) (after Isidore of Seville, Etymologiae VI 13, 1-2; 14, 6); the treatises De adfinitates et gradibus and De agnatis et cognatis, with definitions of degrees of kinship (Etymologiae IX 5-6); and Jerome's Commentarius on the Book of Daniel. How this particular assemblage of texts, whose internal coherence is not always evident, came about is unknown.

The Apocalypse commentary of Beatus of Liébana

The attribution of the anonymously transmitted commentary to Beatus of Liébana is undisputed: it follows from the dedication to Beatus's pupil Etherius and from the stylistic and substantive closeness to the polemical treatise Adversus Elipandum. In the preface Beatus names his sources: Jerome, Augustine, Ambrose, Fulgentius, Gregory, Tyconius, Irenaeus, Apringius of Beja, and Isidore. Beatus's own contribution is small; his work is essentially a compilation from the works of the authors named. The largest share, nearly half, comes from the Apocalypse commentary of Tyconius, a Donatist who lived in North Africa in the 4th century. The division into twelve books probably goes back to Tyconius as well. The text of the Apocalypse is divided into 66 sections called storiae. Each storia is followed by an explanatio, introduced with the words incipit explanatio suprascriptae storiae, which comments on the text of Revelation verse by verse. The prolix work runs to more than 500 pages in a modern printed edition.
The work is thought to have been composed at the end of the 8th century (around 786).

Style

In the isolated Christian kingdoms of the northern Iberian Peninsula, art forms developed that stand out sharply from the other European styles, even though various influences can be traced: some go back to antiquity, while ideas were also taken over from the neighboring Frankish realm and above all from the Arab world, the latter probably transmitted by Mozarabic refugees. Characteristic are the lack of naturalism and the extreme stylization: mountains, for example, are rendered as simple geometric figures, often mere semicircles drawn with a compass. Any sense of space is absent; the representations are purely two-dimensional. This is offset by rich ornamentation, above all the elaborate interlace. The colors are usually strong and luminous. A distinctive script also developed in the northern Iberian Peninsula, the Visigothic minuscule, in which all the older Beatus codices are written.

Iconography

The Apocalypse commentary of Beatus was presumably conceived as an illustrated work from the outset. The picture program underwent various additions and expansions over the centuries but did not change substantially. On the basis of iconographic, stylistic, and textual features, several scholars have proposed classifications and stemmata of the Beatus codices, among them Wilhelm Neuß, Henry A. Sanders, Peter K. Klein, and John Williams. Their results differ in some details; common to all, however, is the division into two families, with Family II distinguished by richer illustration with more and larger miniatures. Within Family II the branches IIa and IIb are further distinguished.
Pictures accompanying the introduction

The manuscripts differ most in the illustrations preceding the Beatus Apocalypse proper; quite a few pictures occur in only a single manuscript. The following miniatures are common to several codices:

Labyrinth: letter labyrinths are found in numerous Spanish manuscripts of the High Middle Ages, not only in Beatus codices. Boxes containing letters are arranged into a carpet, usually rectangular and richly decorated. A short message is hidden in the frequently repeated letters, such as the place of production, the scribe, or the patron (e.g. S(an)c(t)i Micaeli lib(er), "book of St. Michael", in the Morgan Beatus).
Maiestas Domini, Alpha and Omega.
Adoration of the Lamb and of the Cross.
Evangelist cycle: this cycle of eight miniatures occurs only in Family II. Within a horseshoe arch stand two figures in each case: on the left the evangelist with a witness, on the right two angels presenting the Gospel. Above them the evangelist symbols are depicted. Various models for this cycle can be traced back to early Christianity or even to antiquity (the portrait of the poet with his muse). Why an evangelist cycle was included in an Apocalypse commentary is unknown.
Genealogical tables: these are found only in the Beatus manuscripts of Family II. The genealogy of Christ is laid out over a total of seven double pages. The names stand in circular medallions; further texts are framed by rectangles, circles, and above all horseshoe arches.
Several small miniatures illustrate the genealogy: Adam and Eve; Noah (Noe filus Lamech), showing Noah's burnt offering and marking the beginning of the second age of the world (incipit secunda aetas mundi); the sacrifice of Isaac; a world map, a simple Isidorian map (see below) illustrating the division of the world among Noah's three sons; Mary with the Child; and the rare bird and the serpent: the bird cannot defeat the serpent with its beak alone but wins with a blow of its tail, a fight that is given a Christological interpretation.

Pictures accompanying the Apocalypse of John

The majority of the miniatures illustrate the Apocalypse itself. They follow the text very literally: sun and moon, for example, are divided into three roughly equal sectors to illustrate the darkening of a third of them. There is also some carelessness, however; the symbolic numbers (seven heads, ten horns, 24 elders, etc.) are not always rendered correctly. The persons, objects, and scenes depicted are almost always labeled, for example Iohannes, angelus (angel), lapis (millstone), ubi Iohannes librum accepit (here John receives the book), and so on. The illustrations are usually placed between storia and explanatio. In the codices of Family II most are full-page and framed. Smaller formats also occur, above all in the cycles (the seven letters, the seven trumpets, the seven bowls); some miniatures are double-page. Also characteristic of Family II are the monochrome horizontal bands that form the background. The following is a complete list of the storiae illustrated in the Beatus manuscripts:

Book 1: The revelation of God to John. The Lord appears in the clouds. Vision of the Ancient One with the seven lampstands.
Book 2: The letters to the seven churches: a cycle of seven similar, mostly smaller miniatures.
Book 3: Vision of God on the throne; the twenty-four elders and the sea of glass. Vision of the Lamb, the four living creatures, and the twenty-four elders.
Book 4: The four horsemen of the Apocalypse. The altar and the souls of the departed. The opening of the sixth seal. The angels of the four winds. The adoration of the Lamb and the 144,000 sealed (usually double-page). The opening of the seventh seal.
Book 5: The first five trumpets: a cycle of five smaller miniatures. The locusts and the angel of the abyss. The sixth trumpet. The vision of the horsemen. The mighty angel; John devours the book and measures the temple (John and the sea lie outside the frame, so the miniature is larger than a page). The two witnesses. The Antichrist kills the two witnesses. Resurrection of the two witnesses; earthquake. The seventh trumpet.
Book 6: The opened temple and the beast from the abyss. The woman clothed with the sun and the dragon (double-page). The adoration of the beast and the dragon. The beast that rises up out of the earth. The Lamb on Mount Zion.
Book 7: The angel with the eternal gospel and the fall of Babylon. Harvest and vintage; the winepress of the wrath of God. The seven angels with the seven plagues and the Lamb. The seven angels with the seven bowls and the temple.
Book 8: The seven angels with the seven bowls. The first six angels pour out their bowls: a cycle of six smaller miniatures. Unclean spirits like frogs come out of the mouths of the beast, the dragon, and the false prophet. The seventh angel pours out his bowl.
Book 9: The great harlot and the kings of the earth. The woman on the scarlet beast. The Lamb defeats the beast, the dragon, and the false prophet.
Book 10: The fall of Babylon; the lament of the kings and merchants (unframed, a page and a half). An angel throws the millstone into the sea. Appearance of God; message to John.
Book 11: The rider Faithful and True. The angel in the sun and the birds. The killing of the beast and the false prophet. An angel binds the dragon for a thousand years. The souls of the just before God. The reign of Satan. The beast, the devil, and the false prophet are cast into the lake of burning sulfur.
Book 12: The Last Judgment (double-page). The heavenly Jerusalem (unframed). The water of life and the trees of life. The angel refuses John's adoration.

Pictures accompanying Beatus's commentary

Only a few miniatures illustrate Beatus's explanationes:

World map: following the vision of the seven lampstands comes the extensive prologue to the second book (De ecclesia et synagoga), in which an excursus deals with the twelve apostles and the lands and parts of the world they evangelized. In contrast to the small sketch of the world map found in the genealogy of some codices, this miniature fills two pages. The earth, encircled by the ocean, is oval, ranging from nearly circular (Turin Beatus) to almost rectangular (Silos Beatus, Beatus of Girona). East (oriens) is at the top and contains a depiction of the earthly paradise with the Fall. On the right is the Red Sea, painted in red; to the south (right) of it lies the terra incognita, sometimes populated with fabulous creatures such as sciapods (e.g. Beatus of Burgo de Osma). At the lower left lies Europe, north of the stylized Mediterranean; to its right lies Africa. Together with the so-called Vatican Isidore map (B.A.V., Vat. Lat. 6018, f. 64v-65r), the Beatus maps are the oldest surviving examples of the detailed medieval mappae mundi.
The four beasts of Daniel: a section of the prologue De ecclesia et synagoga deals with the beast (de bestia) and quotes Jerome's commentary on the Book of Daniel, in which the prophet's vision of the four beasts is interpreted.
The statue with the golden head and feet of clay: immediately afterwards Nebuchadnezzar's dream is interpreted.
The woman on the beast: the same motif recurs in Book 9 at the corresponding storia.
Noah's ark: it illustrates the explanatio on the seven letters, which addresses the question Qualiter una ecclesia sit cum septem dicantur apertissime per arca Noe declaratur (how there is only one church although seven are spoken of is most clearly shown by Noah's ark). The text is based on the treatise De arca Noe of Gregory of Elvira. The ark is shown with four or five storeys; the animals occupy the lower levels, among them sometimes fabulous creatures such as unicorns. On the top level stands Noah, holding the dove with the olive branch. Beside him are his wife, his three sons, and three daughters-in-law. A raven pecks at a drowned man.
The palm of the just: the vision of the 144,000 sealed (they stood before the throne and before the Lamb, clothed in white robes, with palm branches in their hands) occasions a lengthy digression on the palm as a symbol of the just (after Gregory the Great, Moralia in Iob 6, 49).
The vixen and the cock: the explanatio on the beast that rises from the abyss compares heretics to foxes that steal hens. This small miniature is found only in Family II.

Pictures accompanying the Book of Daniel

The illustrations of the Book of Daniel are unframed in all manuscripts, including those of Family II.
Frontispiece: Babylon surrounded by serpents. The siege of Jerusalem. Nebuchadnezzar's dream and the statue with feet of clay. The adoration of the golden image and the young men in the fiery furnace. Belshazzar's feast. Daniel in the lions' den. Vision of the Ancient of Days and the four beasts from the sea. The citadel of Susa; the duel between the ram and the he-goat. Gabriel explains Daniel's vision. Daniel and the angel on the bank of the Tigris.

Beatus manuscripts up to 1100

The older Beatus manuscripts all come from the Christian kingdoms of the Iberian Peninsula and are written in Visigothic minuscule; the sole exception is the Beatus of Saint-Sever. No uniform nomenclature has established itself in the literature: Beatus manuscripts are named after their scribe or illuminator, after the patron, or after the place of production or preservation. Neuß proposed a system of sigla, given below in square brackets.

Single leaf from Silos [Fc]

The oldest surviving fragment of a Beatus manuscript is kept today as Fragment 4 in the monastery of Santo Domingo de Silos, where it arrived in the 18th century from Cirueña. Its original provenance is unknown. The fragment is dated to the end of the 9th century. The small-format miniature, of modest craftsmanship, shows the altar and the souls of the departed.

Manuscripts of Family I

Beatus of Madrid [A1] (Biblioteca Nacional de España, Vitr. 14-1), mid-10th century.
Beatus of San Millán de la Cogolla [A2] (Madrid, Real Academia de la Historia, Cod. 33): this codex dates from the last quarter of the 10th century and remained unfinished; it was completed in the 12th century.
Escorial Beatus [E] (El Escorial, Biblioteca del Monasterio San Lorenzo el Real, Cod. & II. 5): the manuscript was produced around the year 1000 in San Millán de la Cogolla.
Beatus of Burgo de Osma [O] (archive of the cathedral of Burgo de Osma, Cod. 1): the codex was written in 1086 in Sahagún.
Beatus of Geneva (Bibliothèque de Genève, Ms. lat. 357): this codex was produced in the 11th century in southern Italy, in the region of Benevento. It was unknown until recently and was first published after it came to the Library of Geneva by donation in 2007.

Morgan Beatus [M]

The codex, MS 644 of the Pierpont Morgan Library in New York, is named after its place of preservation. In an extensive colophon on f. 293 the scribe gives his name in a wordplay (Maius quippe pusillus, "Maius, the little one"), the place (cenobii summi Dei nuntii Micaelis arcangeli, the monastery of San Miguel de Escalada near León), his patron, the abbot Victor, and the year (duo gemina ter terna centiese ter dena bina = 2 × 2 + 3 × 300 + 3 × 10 × 2 = 964). Since the era reckoning then customary in Spain runs 38 years ahead of the one used today, this would mean a production date of 926. Current scholarship all but rules out so early a date on stylistic grounds; production around the middle of the 10th century is generally assumed, and various interpretations of Maius's date have been proposed. The Morgan Beatus is the oldest surviving codex of Family II (branch IIa). It is possible that the innovations of Family II go back to Maius himself.

Beatus of Tábara [T]

In the colophon of the Beatus of Tábara [T] (Madrid, Archivo Histórico Nacional, Cod. 1097B), its scribe Emeterius mentions the death of his teacher Magius in 968. It is considered certain that this Magius is identical with Maius.

Beatus of Valladolid [V]

After its presumed place of production, the monastery of Valcavado near Saldaña in the province of Palencia, this codex is also called the "Beatus of Valcavado".
It is kept today in the Biblioteca Santa Cruz in Valladolid (MS 433) and belongs to Family IIa. An inscription on f. 3v reports that the manuscript was begun on 8 June 970 and completed on 8 September 970. The monk Obeco was entrusted with the work by the abbot Sempronius.

Beatus of Girona [G]

This Beatus codex dates from the second half of the 10th century and is kept today in the archive of the cathedral of Girona (Archivo Capitular de Girona ms. 7 olim 41). It belongs to Family IIb but shows a number of peculiarities. The scribe Senior is named on f. 283v (Senior presbiter scripsit). The colophon on f. 284 names the patron, the abbot Dominicus (Dominicus abba liber fieri precipit), and as illuminators the nun En and the presbyter Emeterius (En depintrix et Dei aiutrix frater Emeterius et presbiter, "En, paintress and helper of God; Emeterius, brother and presbyter"). The painter's name is occasionally given as Ende, after the probably incorrect word division Ende pintrix. The colophon also gives the year (era millesima XIII = 1013), which corresponds to 975 in today's reckoning. At the beginning the manuscript contains a series of miniatures that occur in no other Beatus (apart from the Turin Beatus): on ff. 3v-4r there is a double-page depiction of heaven in six concentric circles, and the genealogical tables are followed by a cycle of six full-page miniatures of the life of Christ, including the Crucifixion and the Harrowing of Hell. The cycle of the seven letters is also remarkable: the miniatures are full-page and unframed, and the depictions of the six churches (the leaf with the church of Pergamon is missing) are varied and imaginative. Compared with the strong palette of most Family II manuscripts, the colors appear pale and subdued. Numerous miniatures are adorned with gold and silver.
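The colophon dates quoted above use the Spanish (Hispanic) Era, which runs a fixed 38 years ahead of the Anno Domini reckoning, so conversion is a single subtraction. A tiny illustrative helper (the function name eraToAD is mine, not from the sources):

```javascript
// The Spanish (Hispanic) Era runs 38 years ahead of Anno Domini,
// so: AD year = era year - 38.
function eraToAD(eraYear) {
  return eraYear - 38;
}

console.log(eraToAD(964));  // Morgan Beatus colophon -> 926
console.log(eraToAD(1013)); // Girona Beatus colophon -> 975
console.log(eraToAD(1085)); // Facundus Beatus colophon -> 1047
```

This is why the literal reading of the Morgan Beatus colophon (era 964) yields the stylistically implausible date of 926, while the Girona colophon's era 1013 corresponds to 975.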
The Turin Beatus [Tu] (Turin, Biblioteca Nazionale Universitaria, ms. lat. 93, sgn I.II,1), produced around 1100, is a direct copy of the Beatus of Girona.

Beatus of Urgell [U]

This codex is kept in the Diocesan Museum of La Seu d'Urgell (Num. Inv. 501). It contains no indication of patron, place of production, scribe, or illuminator. It was probably produced in the last quarter of the 10th century in the Kingdom of León and is assigned to Family IIa. It already appears in a 1147 inventory of the cathedral of La Seu d'Urgell.

Facundus Beatus [J]

The Facundus Beatus is one of the best-known and most splendid Beatus manuscripts. The letter labyrinth at the beginning names the patrons, King Ferdinand I and Queen Sancha, after whom the codex is sometimes also called the "Beatus of Ferdinand and Sancha". It is the only Beatus demonstrably not made for a monastery; a royal scriptorium in León is assumed as the place of production. The colophon (f. 317) names the scribe Facundus (Facundus scripsit) and the year (bis quadragies et V post millesima, 2 × 40 + 5 + 1000 = 1085, corresponding to 1047 in our reckoning). The manuscript is kept today in the Biblioteca Nacional de España (Vitr. 14-2) and is assigned to Family IIa. Umberto Eco published a monograph on the codex; it was an important source of inspiration for The Name of the Rose, in which the miniature of the woman clothed with the sun and the dragon plays a significant role.

Fanlo Beatus [FL]

Of this Beatus only seven pages survive, in a 17th-century copy. The copy was painted in watercolors on paper by Vicente Juan de Lastanosa (1607-84); the original was then in the monastery of Montearagón in the province of Toledo.
Together with other papers from the estate of Juan Francisco Andrés de Uztarroz (1606-53), an Aragonese historian and friend of Lastanosa, the copy was acquired in 1988 by the Pierpont Morgan Library, where it is kept today as M. 1079. Lastanosa's copies are so faithful in detail that numerous conclusions about the original are possible: it was most probably a direct copy of the Escorial Beatus [E], made in the mid-11th century. The scribe Sancius (Sancho) names himself in an acrostic (Sancius notarius presbiter mementote). The letter labyrinth names the abbot Pantio as patron. This Pantio is identified with the Bancio or Banzo known from other sources, abbot of the Aragonese monastery of San Andrés de Fanlo from 1035 to 1070.

Beatus of Saint-Sever [S]

This codex was probably written around 1060 for the abbey of Saint-Sever in Gascony, where it may also have been produced. The letter labyrinth on f. 1 names the patron Gregorius abba[s] nobil[is], most probably Gregorius Muntaner, abbot of Saint-Sever from 1028 to 1072. In a column of the genealogical tables (f. 6) one Stephanus Garsia Placidus has immortalized himself; he may have been one of the illuminators. The manuscript is kept in the Bibliothèque nationale in Paris (MS lat. 8878). This Beatus codex is remarkable in many respects: it is the only Beatus before 1100 produced beyond the Pyrenees, and it is written not in Visigothic but in Carolingian minuscule. It belongs to Family I but also contains miniatures otherwise found only in Family II. Stylistically and in content it departs considerably from its Spanish models: the depiction of people and animals is far more naturalistic, and it contains several miniatures that do not belong to the Beatus picture program, e.g. the depiction of two bald men pulling each other's beards while a woman looks on (f. 184).

Silos Beatus [D]

The Silos Beatus is the latest of the Beatus codices in the traditional style and Visigothic minuscule. As several colophons (f. 275v, 276, 277v) record, the text was completed by the scribes Dominicus and Munnius in April 1091, while the miniatures were finished by the illuminator Petrus only on 1 July 1109. The work was begun under abbot Fortunius of Santo Domingo de Silos and, after his death, continued under the abbots Nunnus and Johannes. The manuscript belongs to Family IIa. Its first miniature is an extraordinary depiction of Hell, in which a rich man (dives) and an unchaste couple are tormented by several demons (Atimos, Radamas, Beelzebub, Barabbas). Joseph Bonaparte appropriated the manuscript while he was King of Spain; in 1840 he sold it to the British Museum, where it is kept today as Add. MS 11695.

Later Beatus manuscripts

Towards the end of the eleventh century, profound changes took place in the ecclesiastical and cultural life of the Christian kingdoms of the Iberian Peninsula. The Visigothic liturgy in force until then was replaced by the Roman rite, and after the council of 1090 the Visigothic minuscule gave way to the Carolingian minuscule. Art and architecture, too, drew ever closer to the Romanesque forms current in France. With this, the heyday of Beatus manuscripts in the distinctively Spanish style (often, though not quite correctly, called "Mozarabic") was over. Up to the middle of the 13th century a number of Beatus codices were still written in Carolingian minuscule; their miniatures mostly build on the picture program of earlier Beatus manuscripts but show numerous stylistic features of Romanesque book illumination.
Wichtige dieser Codices sind: Rylands-Beatus [R], auch Manchester-Beatus genannt (Manchester, John Rylands University Library Latin MS 8), ca. 1175, Cardeña-Beatus [Pc]: Die Handschrift entstand ca. 1180 und ist auf Sammlungen in Madrid (Museo Arqueológico Nacional und Colección Francisco de Zabálburu y Basabe), New York (Metropolitan Museum of Art) und Girona (Museu d'Art de Girona) verstreut. Der Beatus von Lorvão [L] wurde 1189 im Kloster S. Mammas in Lorvão (Portugal) geschrieben und wird heute im Arquivo Nacional da Torre do Tombo in Lissabon aufbewahrt. Der Arroyo-Beatus [Ar] (benannt nach der Zisterzienserinnenabtei San Andrés de Arroyo) wurde in der ersten Hälfte des 13. Jahrhunderts in der Region von Burgos hergestellt, möglicherweise im Kloster San Pedro de Cardeña. Heute befindet er sich zum Teil in Paris (Bibliothèque nationale) und New York (Bernard H. Breslauer Collection). Nachwirkungen Nach 1250 scheint die Produktion von Beatus-Handschriften ganz aufzuhören. Auch die Nachwirkungen und Einflüsse auf andere Kunstwerke waren gering. Beispielsweise lassen sich in den zahlreichen prächtigen gotischen Apokalypsen-Kommentaren, die im dritten Viertel des 13. Jahrhunderts in England hergestellt wurden, nur ganz vereinzelt – wenn überhaupt – Einflüsse der Beatus-Tradition nachweisen. Im 16. Jahrhundert zeigten einzelne Gelehrte historisches und antiquarisches Interesse an Beatus-Handschriften, so der Humanist Ambrosio de Morales (1513–91) aus Córdoba. Beginnend mit den Arbeiten von Neuß und Montague Rhodes James wurden die Beatus-Codices im 20. Jahrhundert sorgfältig wissenschaftlich erforscht. Die Darstellung der in der Sintflut Ertrunkenen im Beatus von Saint-Sever hat Picassos berühmtes Gemälde Guernica beeinflusst. Das gegen Ende des 20. Jahrhunderts weit verbreitete Interesse an Apokalyptik führte auch zu einer großen Popularität des Beatus und zahlreichen Publikationen und Faksimile-Ausgaben. Zitat Literatur Claus Bernet: Beatus-Apokalypsen. 
Norderstedt 2016, ISBN 978-3-7392-4692-5. Brigitte Englisch: Ordo orbis terrae. Die Weltsicht in den Mappae mundi des frühen und hohen Mittelalters. Berlin 2002, ISBN 3-05-003635-4, pp. 171 ff. and 259 ff. John Williams and Barbara A. Shailor: Beatus-Apokalypse der Pierpont Morgan Library. Ein Hauptwerk der spanischen Buchmalerei des 10. Jahrhunderts. Belser Verlag, Stuttgart and Zurich 1991, ISBN 3-7630-1213-3. Mireille Mentré: Spanische Buchmalerei des Mittelalters. Wiesbaden 2006, ISBN 3-89500-196-1. Joaquín Yarza Luaces: Beato de Liébana. Manuscritos iluminados. Moleiro, Barcelona 1998, ISBN 84-88526-39-3 (Spanish). John Williams: The Illustrated Beatus. A Corpus of Illustrations of the Commentary on the Apocalypse. Miller, London (English): Vol. 1: Introduction. 1994, ISBN 0-905203-91-7. Vol. 2: The ninth and tenth centuries. 1994, ISBN 0-905203-92-5. Vol. 3: The tenth and eleventh centuries. 1998, ISBN 0-905203-93-3. Vol. 4: The eleventh and twelfth centuries. 2002, ISBN 0-905203-94-1. Vol. 5: The twelfth and thirteenth centuries. 2003, ISBN 0-905203-95-X. Visionen vom Weltende. Apokalypse-Faksimiles aus der Sammlung Detlef M. Noack, ed. Caroline Zöhl, Berlin 2010, ISBN 978-3-929619-59-1. Wilhelm Neuss: . Veröffentlichungen des romanistischen Auslandsinstituts der rheinischen Friedrich Wilhelms-Universität Bonn. Vol. 3. Bonn and Leipzig 1922. Wilhelm Neuss: Die Apokalypse des Hl. Johannes in der altspanischen und altchristlichen Bibel-Illustration. Münster 1931. Web links Ms. 33 Beato de San Millan de la Cogolla Ms. 33 Ms. Cod. & II.5 Escorial Beatus of San Millán Ms 644 Morgan Beatus Ms 1097 B Beatus of San Salvador de Távara Ms. 433 Beatus of Valcavado Ms. 26 Urgell Beatus Ms. 7 Gerona Beatus (Girona Beatus) Vit. 14-1 Beati in Apocalipsin libri duodecim (Emilianenses Codice) VITR 14.2 (pdf) Beato of Liébana: Codice of Fernando I and Dña. Sancha (Facundo/Facundus) Ms. Add. 11695 Beatus of Santo Domingo de Silos Ms. lat.
357 Geneva Beatus MS 8 Rylands Beatus Notes Codicology Book illumination Manuscript (Christianity) Revelation of John Religious work (New Testament) Memory of the World Register (Portugal) Memory of the World Register (Spain)
Carex densa is a species of sedge first described by Liberty Hyde Bailey, who also gave it its currently accepted name. Carex densa belongs to the genus Carex (the sedges) and the family Cyperaceae. No subspecies are listed in the Catalogue of Life. Sources External links Sedges densa
Extruded Black and White POM Rod. POM, formally known as polyacetal (polyoxymethylene), has excellent physical properties: high hardness, good wear and fatigue resistance, good chemical stability and electrical insulation, dimensional stability, and especially excellent rigidity (elastic modulus) and mechanical strength. It can be used for lubricated parts, decorative parts, precision instruments, bearings, gears, pumps and insulating housings, replacing bronze, copper alloy, zinc, aluminium, steel and other metals. Round virgin POM rod is a highly crystalline thermoplastic characterised by high tensile strength, stiffness, a low coefficient of friction and excellent dimensional stability. Shenzhen Xiongyihua Plastic Insulation Limited, established in 2006 in Shenzhen, is a manufacturer and trader specialising in the research, development and production of engineering and insulation plastics, such as phenolic laminate, epoxy fibreglass, nylon PA6, POM, PE, PVC, PU, PTFE, ABS, PETG, PEEK, PMMA, HDPE and other related plastic products. Looking for an ideal plastic POM rod manufacturer and supplier? We have a wide selection at great prices to help you get creative. All our black and white POM rods are quality guaranteed. We are a China-origin factory of extruded POM rod. If you have any questions, please feel free to contact us.
Tipton Five Ways railway station was a station built by the Oxford, Worcester and Wolverhampton Railway, serving the town of Tipton in the western section near the border with Coseley for 88 years from 1853. The 'Five Ways' tag was only added in 1950 – to avoid confusion with Tipton Owen Street. It was situated on the Oxford-Worcester-Wolverhampton Line. The station eventually closed in 1962, though the line remained open until 22 September 1968. The station buildings were demolished soon after closure. The station site was developed in 2001–02 with new housing, which made use of most of the track bed between Sedgley Road West and Birmingham New Road. The overbridges at both ends of this section of the railway were demolished at this time. References Further reading Disused railway stations in Sandwell Railway stations in Great Britain opened in 1853 Railway stations in Great Britain closed in 1962 Former Great Western Railway stations
Dividing money

Vilem, Cenek, and Edita divided the money they earned by distributing leaflets. Vilem got 240 CZK more than Cenek and twice as much as Edita. Edita got 400 CZK less than Vilem.

Result:
C = 560 Kc
V = 800 Kc
x = 400 Kc
s = 1760 Kc

Solution:
V = 240 + C
V = 2x
x = V - 400

Rearranged as a linear system:
C - V = -240
V - 2x = 0
V - x = 400

which gives C = 560, V = 800, x = 400.

s = V + C + x = 800 + 560 + 400 = 1760 Kc
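The three conditions in the word problem reduce to a small linear system that can be solved by substitution. The short Python sketch below is an illustrative check of the arithmetic (the variable names V, C and x simply mirror the ones used in the solution):

```python
# Hypothetical check of the word problem's linear system, solved by substitution.
# Unknowns (all in CZK): V = Vilem, C = Cenek, x = Edita.
# Conditions from the problem statement:
#   V = C + 240      (Vilem got 240 CZK more than Cenek)
#   V = 2 * x        (Vilem got twice as much as Edita)
#   x = V - 400      (Edita got 400 CZK less than Vilem)

# Substituting x = V - 400 into V = 2x gives V = 2V - 800, hence:
V = 800
x = V - 400          # 400
C = V - 240          # 560
total = V + C + x    # 1760

print(V, C, x, total)  # → 800 560 400 1760
```

Substituting x = V - 400 into V = 2x eliminates x immediately, which is why no general-purpose solver is needed here.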
Q: Prove $E(|X-\mu|) = \sqrt{\frac{2}{\pi}}\sigma$ Prove $$E(|X-\mu|) = \sqrt{\frac{2}{\pi}}\sigma,$$ if $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$. I don't know where to begin with this problem and would like help! Thanks A: To simplify a bit, standardize (i.e. take $Z = (X - \mu)/\sigma$). Write the expected value as an integral. Use symmetry to get rid of the absolute value. A: This is an elegant problem of symmetry of the normal probability distribution. First of all, let $Y = X - \mu$. So $D(Y) = D(X) = \sigma$. Notice that the figure of the normal probability distribution which Y follows is symmetrical with respect to its expectation $E(Y)=0$. The probability distribution of $|Y|$ can be viewed as the result of "adding" the probability of $+y$ and $-y$ together. Intuitively, the figure of probability distribution of $|Y|$ is made by folding the left half into the right and they plus together so that each point of the right half doubles its original value. $$E(|X-\mu|) = E(|Y|) = \int_{0}^{\infty}2{\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^2}{2\sigma^2}}}xdx $$ let $t=\frac{x}{\sigma}$. So we have $$E(|Y|)=\sigma\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}e^{-\frac{t^2}{2}}tdt \\ =\sigma\sqrt{\frac{2}{\pi}}$$
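The closed-form result derived in both answers can be sanity-checked numerically. The following Python sketch compares a Monte Carlo estimate of $E|X-\mu|$ against $\sigma\sqrt{2/\pi}$; the choices of mu, sigma and the sample size are arbitrary illustrations, not part of the original question:

```python
import math
import random

# Monte Carlo sanity check of E|X - mu| = sigma * sqrt(2/pi) for normal X.
# Parameters and sample size below are arbitrary choices for illustration.
random.seed(0)
mu, sigma, n = 3.0, 2.0, 200_000

# Average of |X - mu| over n independent normal draws.
estimate = sum(abs(random.gauss(mu, sigma) - mu) for _ in range(n)) / n

exact = sigma * math.sqrt(2 / math.pi)
print(round(exact, 4))                 # → 1.5958
assert abs(estimate - exact) < 0.02    # agrees to within sampling error
```

With 200,000 draws the standard error of the estimate is roughly 0.003, so the 0.02 tolerance leaves a wide margin.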
William Brooke Thomas Trego (September 15, 1858 – June 24, 1909) was an American painter best known for his historical military subjects, in particular scenes of the American Revolution and Civil War. Biography William B. T. Trego was born in Yardley, Bucks County, Pennsylvania in 1858, the son of the artist Jonathan Kirkbridge Trego and Emily Roberts née Thomas. At the age of two William's hands and feet became nearly paralyzed, either from polio, or from a doctor administering a dose of calomel (mercurous chloride). Trego's family moved to Detroit in 1874 where William was enrolled in the local school, but an incident where 16-year-old William burned off all his hair with a gas jet made his father decide to teach William in his studio from then on. Despite his crippled hands, young William showed an aptitude for art, learning to paint with a brush jammed in his right hand while he guided it with his left. William Trego first received public attention when he exhibited a painting titled The Charge of Custer at Winchester in 1879 at the Michigan State Fair. His depiction of George Armstrong Custer's charge at the Third Battle of Winchester was described by the Cleveland Press as "one of the best historical paintings of the kind that has ever been produced by an American artist." Pennsylvania Academy years Later that year, Trego used the proceeds from the sale of The Charge of Custer at Winchester to enroll himself at the Pennsylvania Academy of the Fine Arts in Philadelphia, Pennsylvania. He studied at PAFA for three years under Thomas Eakins, in courses that included instruction on aspects of the human figure, including anatomical study of the human and animal body and surgical dissection. 
Trego did not appreciate Eakins' rigorous, terse teaching style, and would later remark: "Fortunately for myself I was drilled in the principles of drawing in my father's studio before I went to the Academy, so that I was able to some extent to brave the sarcasm and neglect of Eakins." In an 1882 Academy exhibition, Trego won the first Toppan Prize for his work, Battery of Light Artillery en Route, and the painting was subsequently purchased for the Academy by Fairman Rogers. In 1883, Trego received what he thought was a snub from the Academy when the art jury for the Temple Competition of Historical Paintings, a competition intended to help revive historical painting by limiting entries to depictions of the American War of Independence, decided there were no paintings of sufficient quality to merit a first or second place, and awarded Trego third place for his painting of George Washington and his troops called The March to Valley Forge. Trego sued the Academy on the grounds that if his painting was the best overall, it should receive first place (and he should get the $3,000 prize money). In 1886, he lost the case, with the Pennsylvania Supreme Court ruling the jury was well within their rights under the contract of the exhibit to award prizes as they saw fit. North Wales studio After leaving the Academy, Trego lived in North Wales, Pennsylvania, with his mother and father. Except for trips abroad, Trego would live in North Wales for the rest of his life, working in a studio behind his house. He used the town residents, their horses, and the surrounding landscape as models and backdrops for his paintings. Trego was becoming well known for the accuracy of his military depictions as well as the honest, sometimes brutal realism, especially in his Civil War subjects. The Civil War works were well received and Trego had much success selling paintings during that time.
Paris In 1887, he went to Paris to study at the Académie Julian under the French academic painters Tony Robert-Fleury and William-Adolphe Bouguereau. Trego studied at the French museums while he was there and enjoyed the Paris night life with other Pennsylvania Academy alumni such as Robert Henri, Augustus B. Koopman, Henry McCarter, and Frederick J. Waugh. Trego also participated in the Paris Salons of 1889 and 1890, gaining some recognition for his 1889 submission, a military painting titled The End of the Charge of von Bredow's Brigade at Rezonville depicting a charge of German cavalry against the French during the Franco-Prussian War. One French writer thought this work put Trego on par with the famous French academic military artist, Édouard Detaille. On his ocean voyage home from Paris in 1890, Trego returned to America not only with newfound knowledge of French academic painting but also with a French fiancée. But in a sad and very public event on board ship, the "handsome French girl" (as reported in the newspapers of the time) switched her affections to fellow Académie Julian student James R. Fisher. When they arrived in Philadelphia the newspapers reported the two artists as parting "bitter enemies". Later years After his return to the States, Trego's work received much acclaim from critics. In 1891, noted American art collector Thomas Benedict Clarke wrote of Trego: "In the accomplishment of his work, which is marked by strength, firmness, and force, he has had to overcome physical infirmities that would have made a less brave and earnest character halt at the threshold." Despite these accolades and the prestige of exhibiting in the Paris Salon, Trego found it hard to sell paintings due to the declining popularity of realistic military artwork. He painted portraits and genre paintings to make money and took on work doing book and magazine illustration.
He also tried unsuccessfully to become an instructor at The Pennsylvania Academy of the Fine Arts. He lived with, and was supported by, his parents during the 1890s. Trego's father died in 1901 and his stepmother died six years later. Trego's increasing financial problems during this time made him take on students including Walter Emerson Baum and his wife, Flora. Trego tried to revive his career by basing a painting on the popular novel Ben Hur with one of his last works, The Chariot Race from Ben Hur (1908). He sent it to the 1909 National Academy of Design exhibition in New York but it failed to spark any interest. William Trego was found unconscious in his studio on June 24, 1909 and was dead by the time the doctor arrived. His obituary in The New York Times reported that he died of "overexertion" due to "excessive heat". The cause of death specified on his death certificate was a supposed suicide by the administration of some unknown poison. The contents of his North Wales studio were left to Walter Emerson Baum. Legacy During his lifetime, Trego had painted over 200 historical and military paintings. These would become so widely published after his death that writer Edwin Augustus Peeples commented: "There is probably not an American History book which doesn't have (a) Trego picture in it". In 1976, Trego's The March to Valley Forge had become such an iconic image of that event that it was reproduced as a souvenir postage sheet issued by the United States Postal Service as part of the observance of the United States Bicentennial. It is currently on loan from the Museum of the American Revolution to Valley Forge National Historical Park. A book was published about Trego's life, So Bravely and So Well: The Life and Art of William T. Trego, by Joseph P. Eckhardt, in 2011. 
Collections Trego's work is represented in many permanent collections including: Illustration for the Century - Smithsonian Institution, Cooper-Hewitt, National Design Museum Horse Artillery Going into Battery, Petersburg, Va. and A Mortar Battery Firing - United States Department of the Army, United States Military Academy, West Point Museum Battery of Light Artillery En Route - Pennsylvania Academy of the Fine Arts The March to Valley Forge (1883) - The American Revolution Center The Chariot Race (1908) and Civil War Battle Scene (1887) - James A. Michener Art Museum Hancock's Corps Assaulting the Works at the "Bloody Angle" (1887) - Cooper-Hewitt, National Design Museum Jonathan K. Trego (1817–1901) and The Rescue of the Colors - Bucks County Historical Society Exhibitions and awards Michigan State Fair, 1879 - The Charge of Custer at Winchester Pennsylvania Academy of the Fine Arts, 1882 - Toppan Prize, Pennsylvania Academy of the Fine Arts, 1883 - Temple Silver Medal, The March to Valley Forge Paris Salon, 1889 Paris Salon, 1890 World's Columbian Exposition, 1893 Cotton States and International Exposition, Atlanta, Georgia American Art Society, Silver Medal, 1902 National Academy of Design exhibition, New York, 1909 - The Chariot Race from Ben Hur James A. Michener Art Museum exhibition, Doylestown, PA, 2011 - Various works Gallery See also Artists of stamps of the United States References Further reading Gemmill, Helen Hartman. William B. T. Trego: the artist with paralyzed hands, Antiques, November 1983, pp. 994–999. Sozanski, Edward J. A forgotten painter of military history: William T. Trego's life and career unfold first as inspiration, then as tragedy, which makes the exhibition of his art at the James A. Michener Art Museum alternately fascinating and sad, The Philadelphia Inquirer, June 12, 2011 Archived External links James A. Michener Art Museum: Bucks County Artists - William B. T. 
Trego SIRIS (Smithsonian Institution Research Information System) - Trego, William Brooke Thomas, 1859-1909, painter Smithsonian - American Bicentennial Issues: Souvenir Sheets 31c Washington Reviewing Army at Valley Forge sheet of 5 The Reporter - Recognizing fame - Sunday February 10, 2008 The William B.T. Trego Centenary Project William B. T. Trego Website 1859 births 1909 deaths 19th-century American painters 19th-century American male artists American male painters 20th-century American painters Realist painters Military art People from Yardley, Pennsylvania Pennsylvania Academy of the Fine Arts alumni Académie Julian alumni American history painters Students of Thomas Eakins 20th-century American male artists
Götz Baur (1 August 1917, Shanghai – 21 March 2012, Radolfzell) was a German submarine officer and Kapitänleutnant of the Kriegsmarine. Biography He joined the navy on 5 March 1935. From September 1938 he served as clerk and watch officer on the destroyer Hans Lody. From November 1940 to May 1941 he completed U-boat training. From May to October 1941 he was first watch officer on the submarine U-552, after which he took the commanding officer's course. From 8 January 1942 he commanded U-660, on which he made 3 patrols (77 days at sea in total). On 12 November 1942 the boat was sunk in the Mediterranean north of Oran (36°07′ N, 01°00′ W) by depth charges from the British corvettes Lotus and Starwort. 2 crew members were killed; 45 (including Baur) were rescued and taken prisoner. In total during combat operations he sank 2 ships with a combined tonnage of 10,066 tons and damaged 2 ships with a combined tonnage of 10,447 tons. Ranks Officer candidate (5 April 1935) Naval cadet (25 September 1935) Fähnrich zur See (1 July 1936) Oberfähnrich zur See (1 January 1938) Leutnant zur See (1 April 1938) Oberleutnant zur See (1 October 1939) Kapitänleutnant (1 November 1942) Awards Wehrmacht Long Service Award, 4th class (4 years) Iron Cross 2nd class (1939) 1st class (1941) Destroyer War Badge (1940) U-boat War Badge (27 August 1941) Notes References Biographical data. Baur at uboat.net U-boat commanders Kriegsmarine Kapitänleutnants
"The French Mistake" is the fifteenth episode of the sixth season of the paranormal drama television series Supernatural. It was first broadcast on The CW on February 25, 2011. In this episode, Sam and Dean are sent to an alternate reality by the angel Balthazar, where they are actors named "Jared Padalecki" and "Jensen Ackles" who play Sam and Dean in a television show named Supernatural that follows their lives. Furthermore, in this reality, nothing supernatural exists. Sam and Dean attempt to return to their reality, but are hampered by their lives as actors as well as the crew of their TV show. Plot Sam (Jared Padalecki) and Dean Winchester (Jensen Ackles) are doing research at the house of Bobby Singer (Jim Beaver) when the rogue angel Balthazar (Sebastian Roché) appears. He says that the archangel Raphael is aiming to kill all the allies of the angel Castiel (Misha Collins), and so he gives them a key and sends them to an alternate reality to keep them out of harm's way. According to Balthazar, the key opens a door where he has stored stolen angelic weapons, which will help Castiel gain control in Heaven. When they appear in the alternate reality, Sam and Dean find themselves on the set of Supernatural, a fantasy horror television series that is being filmed in Vancouver, Canada. In this dimension, Sam and Dean are actors known as "Jared Padalecki" and "Jensen Ackles", and they star in the aforementioned show, which follows the adventures of the fictitious Winchester brothers. In order to figure out what is going on, they try to contact Castiel, but instead encounter "Misha Collins", an actor who simply plays Castiel on the show. Eventually, Dean suggests they try doing the same spell Balthazar did in order to get them back to their own reality.
Discouraged that the ingredients for Balthazar's spell cannot be located on the set of Bobby's house, Sam and Dean have the actors' driver, Clif Kosterman (Phil Hayes), take them to Jared's place, which turns out to be a large ostentatious mansion. Sam and Dean are shocked when Ruby, or rather "Genevieve Padalecki", the actress who plays her, appears and kisses Sam; eventually, Sam and Dean realize that in this dimension, she is Jared's wife. After Genevieve heads out to a charity function for the International Otter Adoption Fund, Sam and Dean go online and purchase bona fide saints' bones for rush delivery using Jared's credit cards. Dean then spends the night on Jared's couch, while Sam alternates between surfing the internet for signs of supernatural activity and having sex with Genevieve. The next morning, Clif drives them to the airport to pick up the rush delivery before taking them back to the set. Before they are able to work the spell, however, the show's director Bob Singer (portrayed in the episode by Brian Doyle-Murray) forces them to act in a scene, which ends badly after many takes. Eventually, they are able to try the spell, but nothing happens. This leads them to the conclusion that in this reality there is no real magic and that the supernatural simply does not exist. Back on set, Raphael's angelic hitman Virgil (Carlos Sanz) appears from the Winchesters' reality and tries to vanquish Dean with his angel powers. However, he, too, is devoid of powers in this reality, so Sam and Dean are able to physically attack and subdue him. Eventually, however, the rest of the crew intervenes, and during the resulting scuffle Virgil pickpockets the key Balthazar gave Sam before running away.
After this incident, the crew of Supernatural is suspicious because "Jared" and "Jensen" are behaving extremely out of character: Normally they do not speak to each other, they came in with a mysterious package (which, in reality, holds the saints' bones) that the crew believes contains either drugs or black market organs, and now they have beaten up what everyone thinks is an extra. Bob Singer calls showrunner Sera Gamble and suggests that they get Eric Kripke (portrayed in the episode by Micah A. Hauptman) to come to Vancouver and talk to the actors, to which Sera reluctantly agrees. Meanwhile, Virgil takes Misha hostage and kills him so he can use his blood to contact Raphael. Bob confronts Sam and Dean, and they drop the pretense of being Jared and Jensen and tell him that they quit the show. They then go back to Jared's house and learn from Genevieve that Misha was killed. They go to the scene of the crime to investigate, and learn from a witness that the killer was speaking to a "Raphael". They also learn from the relayed conversation that Virgil will return to the set of Supernatural, where he will be pulled back into the Winchesters' reality by Raphael himself. Eventually, Virgil returns to the set just as Kripke arrives to speak to them, and the angel goes on a killing spree, murdering Eric Kripke, Bob Singer, and many other crew members before Sam and Dean manage to knock him out and retrieve the stolen key. Just as Sam has the key in hand, Raphael activates the nearby gate between the worlds, and they land back in their own reality. Back in their original reality, Sam and Dean come face-to-face with Raphael (Lanette Ware). He demands that they turn over the key and begins to torture the brothers when they refuse. Just then, Balthazar arrives and reveals that the key which Sam and Dean had the entire time was a fake and that the sojourn through the alternate reality was a diversion to throw off Raphael and his minions.
Angered that he has been tricked, Raphael threatens to kill them all, but Castiel arrives on the scene; he orders Raphael to let his friends go and also reveals to the archangel that he is now in possession of Heaven's weapons. Raphael, out-maneuvered and out-gunned, flees, and Castiel returns Sam and Dean to Bobby's house. Sam and Dean demand to know what exactly is going on, to which Castiel makes a vague promise that he will tell them in time what is happening in Heaven. Production The title of "The French Mistake" is a reference to the climax of the 1974 American satirical western film Blazing Saddles: At the end of said movie, a fight between the heroes and villains breaks out that literally breaks the fourth wall and spills over into an adjacent movie set wherein a musical entitled The French Mistake is being filmed. Reception "The French Mistake" aired on The CW on February 25, 2011. The episode was watched by 2.18 million viewers with a 1.0/4 share among adults aged 18 to 49. This means that 1.0 percent of all households with televisions watched the episode, while 4 percent of all households watching television at that time watched it. Supernatural ranked as the second most-watched program on The CW that day, behind Smallville. Zack Handlen of The A.V. Club gave "The French Mistake" an A, calling it "Supernatural at its most gloriously self-referential". While noting that the episode was not perfect, Handlen nevertheless found the entry to be humorous and "smart" in a way that prevented him from "really want[ing] to poke holes in it". Diana Steenbergen of IGN gave "The French Mistake" a 9.5 score out of 10 and applauded the show writers for taking "an insane idea and turn it into gold". In particular, Steenbergen cited the episode's willingness to playfully lampoon the show's stars and producers as one of its strongest elements.
The episode has become a large point of discussion among the show's fans, as well as the cast and crew, due to its in-jokes and meta plot. In particular, creator Eric Kripke and actor Jared Padalecki have cited "The French Mistake" as one of their all-time favorite episodes of the show. References Bibliography External links Supernatural (season 6) episodes 2011 American television episodes Television episodes set in California
Socialist Action News Magazine of Socialist Action in Australia Lenin: The State A lecture delivered 100 years ago today Editor, July 11, 2019 A lecture delivered at the Sverdlov University – July 11, 1919 Comrades, according to the plan you have adopted and which has been conveyed to me, the subject of today's talk is the state. I do not know how familiar you are already with this subject. If I am not mistaken your courses have only just begun and this is the first time you will be tackling this subject systematically. If that is so, then it may very well happen that in the first lecture on this difficult subject I may not succeed in making my exposition sufficiently clear and comprehensible to many of my listeners. And if this should prove to be the case, I would request you not to be perturbed by the fact, because the question of the state is a most complex and difficult one, perhaps one that more than any other has been confused by bourgeois scholars, writers and philosophers. It should not therefore be expected that a thorough understanding of this subject can be obtained from one brief talk, at a first sitting. After the first talk on this subject you should make a note of the passages which you have not understood or which are not clear to you, and return to them a second, a third and a fourth time, so that what you have not understood may be further supplemented and elucidated later, both by reading and by various lectures and talks. I hope that we may manage to meet once again and that we shall then be able to exchange opinions on all supplementary questions and see what has remained most unclear. I also hope that in addition to talks and lectures you will devote some time to reading at least a few of the most important works of Marx and Engels.
I have no doubt that these most important works are to be found in the lists of books and in the handbooks which are available in your library for the students of the Soviet and Party school; and although, again, some of you may at first be dismayed by the difficulty of the exposition, I must again warn you that you should not let this worry you; what is unclear at a first reading will become clear at a second reading, or when you subsequently approach the question from a somewhat different angle. For I once more repeat that the question is so complex and has been so confused by bourgeois scholars and writers that anybody who desires to study it seriously and master it independently must attack it several times, return to it again and again and consider it from various angles in order to attain a clear, sound understanding of it. Because it is such a fundamental, such a basic question in all politics, and because not only in such stormy and revolutionary times as the present, but even in the most peaceful times, you will come across it every day in any newspaper in connection with any economic or political question, it will be all the easier to return to it. Every day, in one context or another, you will be returning to the question: what is the state, what is its nature, what is its significance and what is the attitude of our Party, the party that is fighting for the overthrow of capitalism, the Communist Party—what is its attitude to the state? And the chief thing is that you should acquire, as a result of your reading, as a result of the talks and lectures you will hear on the state, the ability to approach this question independently, since you will be meeting with it on the most diverse occasions, in connection with the most trifling questions, in the most unexpected contexts and in discussions and disputes with opponents.
Only when you learn to find your way about independently in this question may you consider yourself sufficiently confirmed in your convictions and able with sufficient success to defend them against anybody and at any time. After these brief remarks, I shall proceed to deal with the question itself—what is the state, how did it arise and fundamentally what attitude to the state should be displayed by the party of the working class, which is fighting for the complete overthrow of capitalism—the Communist Party? I have already said that you are not likely to find another question which has been so confused, deliberately and unwittingly, by representatives of bourgeois science, philosophy, jurisprudence, political economy and journalism, as the question of the state. To this day it is very often confused with religious questions; not only those professing religious doctrines (it is quite natural to expect it of them), but even people who consider themselves free from religious prejudice, very often confuse the specific question of the state with questions of religion and endeavour to build up a doctrine—very often a complex one, with an ideological, philosophical approach and argumentation—which claims that the state is something divine, something supernatural, that it is a certain force by virtue of which mankind has lived, that it is a force of divine origin which confers on people, or can confer on people, or which brings with it something that is not of man, but is given him from without. 
And it must be said that this doctrine is so closely bound up with the interests of the exploiting classes—the landowners and the capitalists—so serves their interests, has so deeply permeated all the customs, views and science of the gentlemen who represent the bourgeoisie, that you will meet with vestiges of it on every hand, even in the view of the state held by the Mensheviks and Socialist-Revolutionaries, although they are convinced that they can regard the state with sober eyes and reject indignantly the suggestion that they are under the sway of religious prejudices. This question has been so confused and complicated because it affects the interests of the ruling classes more than any other question (yielding place in this respect only to the foundations of economic science). The doctrine of the state serves to justify social privilege, the existence of exploitation, the existence of capitalism—and that is why it would be the greatest mistake to expect impartiality on this question, to approach it in the belief that people who claim to be scientific can give you a purely scientific view on the subject. In the question of the state, in the doctrine of the state, in the theory of the state, when you have become familiar with it and have gone into it deeply enough, you will always discern the struggle between different classes, a struggle which is reflected or expressed in a conflict of views on the state, in the estimate of the role and significance of the state. To approach this question as scientifically as possible we must cast at least a fleeting glance back on the history of the state, its emergence and development. 
The most reliable thing in a question of social science, and one that is most necessary in order really to acquire the habit of approaching this question correctly and not allowing oneself to get lost in the mass of detail or in the immense variety of conflicting opinion—the most important thing if one is to approach this question scientifically is not to forget the underlying historical connection, to examine every question from the standpoint of how the given phenomenon arose in history and what were the principal stages in its development, and, from the standpoint of its development, to examine what it has become today. I hope that in studying this question of the state you will acquaint yourselves with Engels's book The Origin of the Family, Private Property and the State. This is one of the fundamental works of modern socialism, every sentence of which can be accepted with confidence, in the assurance that it has not been said at random but is based on immense historical and political material. Undoubtedly, not all the parts of this work have been expounded in an equally popular and comprehensible way; some of them presume a reader who already possesses a certain knowledge of history and economics. But I again repeat that you should not be perturbed if on reading this work you do not understand it at once. Very few people do. But returning to it later, when your interest has been aroused, you will succeed in understanding the greater part, if not the whole of it. I refer to this book because it gives the correct approach to the question in the sense mentioned. It begins with a historical sketch of the origin of the state. This question, like every other—for example, that of the origin of capitalism, the exploitation of man by man, socialism, how socialism arose, what conditions gave rise to it—can be approached soundly and confidently only if we cast a glance back on the history of its development as a whole.
In connection with this problem it should first of all be noted that the state has not always existed. There was a time when there was no state. It appears wherever and whenever a division of society into classes appears, whenever exploiters and exploited appear. Before the first form of exploitation of man by man arose, the first form of division into classes—slave-owners and slaves—there existed the patriarchal family, or, as it is sometimes called, the clan family. (Clan-tribe; at the time people of one kin lived together.) Fairly definite traces of these primitive times have survived in the life of many primitive peoples; and if you take any work whatsoever on primitive civilisation, you will always come across more or less definite descriptions, indications and recollections of the fact that there was a time, more or less similar to primitive communism, when the division of society into slave-owners and slaves did not exist. And in those times there was no state, no special apparatus for the systematic application of force and the subjugation of people by force. It is such an apparatus that is called the state. In primitive society, when people lived in small family groups and were still at the lowest stages of development, in a condition approximating to savagery—an epoch from which modern, civilised human society is separated by several thousand years—there were as yet no signs of the existence of a state.
We find the predominance of custom, authority, respect, the power enjoyed by the elders of the clan; we find this power sometimes accorded to women (the position of women then was not like the downtrodden and oppressed condition of women today), but nowhere do we find a special category of people set apart to rule others and who, for the sake and purpose of rule, systematically and permanently have at their disposal a certain apparatus of coercion, an apparatus of violence, such as is represented at the present time, as you all realise, by armed contingents of troops, prisons and other means of subjugating the will of others by force—all that which constitutes the essence of the state. If we get away from what are known as religious teachings, from the subtleties, philosophical arguments and various opinions advanced by bourgeois scholars, if we get away from these and try to get at the real core of the matter, we shall find that the state really does amount to such an apparatus of rule which stands outside society as a whole. When there appears such a special group of men occupied solely with government, and who in order to rule need a special apparatus of coercion to subjugate the will of others by force—prisons, special contingents of men, armies, etc.—then there appears the state. But there was a time when there was no state, when general ties, the community itself, discipline and the ordering of work were maintained by force of custom and tradition, by the authority or the respect enjoyed by the elders of the clan or by women—who in those times not only frequently enjoyed a status equal to that of men, but not infrequently enjoyed an even higher status—and when there was no special category of persons who were specialists in ruling.
History shows that the state as a special apparatus for coercing people arose wherever and whenever there appeared a division of society into classes, that is, a division into groups of people some of which were permanently in a position to appropriate the labour of others, where some people exploited others. And this division of society into classes must always be clearly borne in mind as a fundamental fact of history. The development of all human societies for thousands of years, in all countries without exception, reveals a general conformity to law, a regularity and consistency; so that at first we had a society without classes—the original patriarchal, primitive society, in which there were no aristocrats; then we had a society based on slavery—a slaveowning society. The whole of modern, civilised Europe has passed through this stage—slavery ruled supreme two thousand years ago. The vast majority of peoples of the other parts of the world also passed through this stage. Traces of slavery survive to this day among the less developed peoples; you will find the institution of slavery in Africa, for example, at the present time. The division into slaveowners and slaves was the first important class division. The former group not only owned all the means of production—the land and the implements, however poor and primitive they may have been in those times—but also owned people. This group was known as slave-owners, while those who laboured and supplied labour for others were known as slaves. This form was followed in history by another—feudalism. In the great majority of countries slavery in the course of its development evolved into serfdom. The fundamental division of society was now into feudal lords and peasant serfs. The form of relations between people changed. The slave-owners had regarded the slaves as their property; the law had confirmed this view and regarded the slave as a chattel completely owned by the slave-owner. 
As far as the peasant serf was concerned, class oppression and dependence remained, but it was not considered that the feudal lord owned the peasants as chattels, but that he was only entitled to their labour, to the obligatory performance of certain services. In practice, as you know, serfdom, especially in Russia where it survived longest of all and assumed the crudest forms, in no way differed from slavery. Further, with the development of trade, the appearance of the world market and the development of money circulation, a new class arose within feudal society—the capitalist class. From the commodity, the exchange of commodities and the rise of the power of money, there derived the power of capital. During the eighteenth century, or rather, from the end of the eighteenth century and during the nineteenth century, revolutions took place all over the world. Feudalism was abolished in all the countries of Western Europe. Russia was the last country in which this took place. In 1861 a radical change took place in Russia as well; as a consequence of this one form of society was replaced by another—feudalism was replaced by capitalism, under which division into classes remained, as well as various traces and remnants of serfdom, but fundamentally the division into classes assumed a different form. The owners of capital, the owners of the land and the owners of the factories in all capitalist countries constituted and still constitute an insignificant minority of the population who have complete command of the labour of the whole people, and, consequently, command, oppress and exploit the whole mass of labourers, the majority of whom are proletarians, wage-workers, who procure their livelihood in the process of production only by the sale of their own worker's hands, their labour-power. 
With the transition to capitalism, the peasants, who had been disunited and downtrodden in feudal times, were converted partly (the majority) into proletarians, and partly (the minority) into wealthy peasants who themselves hired labourers and who constituted a rural bourgeoisie. This fundamental fact—the transition of society from primitive forms of slavery to serfdom and finally to capitalism—you must always bear in mind, for only by remembering this fundamental fact, only by examining all political doctrines placed in this fundamental scheme, will you be able properly to appraise these doctrines and understand what they refer to; for each of these great periods in the history of mankind, slave-owning, feudal and capitalist, embraces scores and hundreds of centuries and presents such a mass of political forms, such a variety of political doctrines, opinions and revolutions, that this extreme diversity and immense variety (especially in connection with the political, philosophical and other doctrines of bourgeois scholars and politicians) can be understood only by firmly holding, as to a guiding thread, to this division of society into classes, this change in the forms of class rule, and from this standpoint examining all social questions—economic, political, spiritual, religious, etc. If you examine the state from the standpoint of this fundamental division, you will find that before the division of society into classes, as I have already said, no state existed. But as the social division into classes arose and took firm root, as class society arose, the state also arose and took firm root. The history of mankind knows scores and hundreds of countries that have passed or are still passing through slavery, feudalism and capitalism. 
In each of these countries, despite the immense historical changes that have taken place, despite all the political vicissitudes and all the revolutions due to this development of mankind, to the transition from slavery through feudalism to capitalism and to the present world-wide struggle against capitalism, you will always discern the emergence of the state. It has always been a certain apparatus which stood outside society and consisted of a group of people engaged solely, or almost solely, or mainly, in ruling. People are divided into the ruled, and into specialists in ruling, those who rise above society and are called rulers, statesmen. This apparatus, this group of people who rule others, always possesses certain means of coercion, of physical force, irrespective of whether this violence over people is expressed in the primitive club, or in more perfected types of weapons in the epoch of slavery, or in the firearms which appeared in the Middle Ages, or, finally, in modern weapons, which in the twentieth century are technical marvels and are based entirely on the latest achievements of modern technology. The methods of violence changed, but whenever there was a state there existed in every society a group of persons who ruled, who commanded, who dominated and who in order to maintain their power possessed an apparatus of physical coercion, an apparatus of violence, with those weapons which corresponded to the technical level of the given epoch. And by examining these general phenomena, by asking ourselves why no state existed when there were no classes, when there were no exploiters and exploited, and why it appeared when classes appeared—only in this way shall we find a definite answer to the question of what is the nature and significance of the state. The state is a machine for maintaining the rule of one class over another. 
When there were no classes in society, when, before the epoch of slavery, people laboured in primitive conditions of greater equality, in conditions when the productivity of labour was still at its lowest, and when primitive man could barely procure the wherewithal for the crudest and most primitive existence, a special group of people whose function is to rule and to dominate the rest of society, had not and could not yet have emerged. Only when the first form of the division of society into classes appeared, only when slavery appeared, when a certain class of people, by concentrating on the crudest forms of agricultural labour, could produce a certain surplus, when this surplus was not absolutely essential for the most wretched existence of the slave and passed into the hands of the slave-owner, when in this way the existence of this class of slave-owners was secure—then in order that it might take firm root it was necessary for a state to appear. And it did appear—the slave-owning state, an apparatus which gave the slave-owners power and enabled them to rule over the slaves. Both society and the state were then on a much smaller scale than they are now, they possessed incomparably poorer means of communication—the modern means of communication did not then exist. Mountains, rivers and seas were immeasurably greater obstacles than they are now, and the state took shape within far narrower geographical boundaries. A technically weak state apparatus served a state confined within relatively narrow boundaries and with a narrow range of action. Nevertheless, there did exist an apparatus which compelled the slaves to remain in slavery, which kept one part of society subjugated to and oppressed by another. It is impossible to compel the greater part of society to work systematically for the other part of society without a permanent apparatus of coercion. So long as there were no classes, there was no apparatus of this sort.
When classes appeared, everywhere and always, as the division grew and took firmer hold, there also appeared a special institution—the state. The forms of state were extremely varied. As early as the period of slavery we find diverse forms of the state in the countries that were the most advanced, cultured and civilised according to the standards of the time—for example, in ancient Greece and Rome which were based entirely on slavery. At that time there was already a difference between monarchy and republic, between aristocracy and democracy. A monarchy is the power of a single person, a republic is the absence of any non-elected authority; an aristocracy is the power of a relatively small minority, a democracy is the power of the people (democracy in Greek literally means the power of the people). All these differences arose in the epoch of slavery. Despite these differences, the state of the slave-owning epoch was a slave-owning state, irrespective of whether it was a monarchy or a republic, aristocratic or democratic. In every course on the history of ancient times, in any lecture on this subject, you will hear about the struggle which was waged between the monarchical and republican states. But the fundamental fact is that the slaves were not regarded as human beings—not only were they not regarded as citizens, they were not even regarded as human beings. Roman law regarded them as chattels. The law of manslaughter, not to mention the other laws for the protection of the person, did not extend to slaves. It defended only the slaveowners, who were alone recognised as citizens with full rights. But whether a monarchy was instituted or a republic, it was a monarchy of the slave-owners or a republic of the slave-owners. All rights were enjoyed by the slave-owners, while the slave was a chattel in the eyes of the law; and not only could any sort of violence be perpetrated against a slave, but even the killing of a slave was not considered a crime. 
Slave-owning republics differed in their internal organisation, there were aristocratic republics and democratic republics. In an aristocratic republic only a small number of privileged persons took part in the elections; in a democratic republic everybody took part but everybody meant only the slave-owners, that is, everybody except the slaves. This fundamental fact must be borne in mind, because it throws more light than any other on the question of the state and clearly demonstrates the nature of the state. The state is a machine for the oppression of one class by another, a machine for holding in obedience to one class other, subordinated classes. There are various forms of this machine. The slave-owning state could be a monarchy, an aristocratic republic or even a democratic republic. In fact the forms of government varied extremely, but their essence was always the same: the slaves enjoyed no rights and constituted an oppressed class; they were not regarded as human beings. We find the same thing in the feudal state. The change in the form of exploitation transformed the slave-owning state into the feudal state. This was of immense importance. In slave-owning society the slave enjoyed no rights whatever and was not regarded as a human being; in feudal society the peasant was bound to the soil. The chief distinguishing feature of serfdom was that the peasants (and at that time the peasants constituted the majority; the urban population was still very small) were considered bound to the land—this is the very basis of "serfdom". The peasant might work a definite number of days for himself on the plot assigned to him by the landlord; on the other days the peasant serf worked for his lord. The essence of class society remained—society was based on class exploitation. Only the owners of the land could enjoy full rights; the peasants had no rights at all. In practice their condition differed very little from the condition of slaves in the slave-owning state. 
Nevertheless, a wider road was opened for their emancipation, for the emancipation of the peasants, since the peasant serf was not regarded as the direct property of the lord. He could work part of his time on his own plot, could, so to speak, belong to himself to some extent; and with the wider opportunities for the development of exchange and trade relations the feudal system steadily disintegrated and the scope of emancipation of the peasantry steadily widened. Feudal society was always more complex than slave society. There was a greater development of trade and industry, which even in those days led to capitalism. In the Middle Ages feudalism predominated. And here too the forms of state varied, here too we find both the monarchy and the republic, although the latter was much more weakly expressed. But always the feudal lord was regarded as the only ruler. The peasant serfs were deprived of absolutely all political rights. Neither under slavery nor under the feudal system could a small minority of people dominate over the vast majority without coercion. History is full of the constant attempts of the oppressed classes to throw off oppression. The history of slavery contains records of wars of emancipation from slavery which lasted for decades. Incidentally, the name "Spartacist" now adopted by the German Communists—the only German party which is really fighting against the yoke of capitalism—was adopted by them because Spartacus was one of the most prominent heroes of one of the greatest revolts of slaves, which took place about two thousand years ago. For many years the seemingly omnipotent Roman Empire, which rested entirely on slavery, experienced the shocks and blows of a widespread uprising of slaves who armed and united to form a vast army under the leadership of Spartacus. In the end they were defeated, captured and put to torture by the slave-owners. Such civil wars mark the whole history of the existence of class society. 
I have just mentioned an example of the greatest of these civil wars in the epoch of slavery. The whole epoch of feudalism is likewise marked by constant uprisings of the peasants. For example, in Germany in the Middle Ages the struggle between the two classes—the landlords and the serfs—assumed wide proportions and was transformed into a civil war of the peasants against the landowners. You are all familiar with similar examples of repeated uprisings of the peasants against the feudal landowners in Russia. In order to maintain their rule and to preserve their power, the feudal lords had to have an apparatus by which they could unite under their subjugation a vast number of people and subordinate them to certain laws and regulations; and all these laws fundamentally amounted to one thing—the maintenance of the power of the lords over the peasant serfs. And this was the feudal state, which in Russia, for example, or in quite backward Asiatic countries (where feudalism prevails to this day) differed in form—it was either a republic or a monarchy. When the state was a monarchy, the rule of one person was recognised; when it was a republic, the participation of the elected representatives of landowning society was in one degree or another recognised—this was in feudal society. Feudal society represented a division of classes under which the vast majority—the peasant serfs—were completely subjected to an insignificant minority—the owners of the land. The development of trade, the development of commodity exchange, led to the emergence of a new class—the capitalists. Capital took shape at the close of the Middle Ages, when, after the discovery of America, world trade developed enormously, when the quantity of precious metals increased, when silver and gold became the medium of exchange, when money circulation made it possible for individuals to possess tremendous wealth. Silver and gold were recognised as wealth all over the world.
The economic power of the landowning class declined and the power of the new class—the representatives of capital—developed. The reconstruction of society was such that all citizens seemed to be equal, the old division into slave-owners and slaves disappeared, all were regarded as equal before the law irrespective of what capital each owned; whether he owned land as private property, or was a poor man who owned nothing but his labour-power—all were equal before the law. The law protects everybody equally; it protects the property of those who have it from attack by the masses who, possessing no property, possessing nothing but their labour-power, grow steadily impoverished and ruined and become converted into proletarians. Such is capitalist society. I cannot dwell on it in detail. You will return to this when you come to discuss the Programme of the Party; you will then hear a description of capitalist society. This society advanced against serfdom, against the old feudal system, under the slogan of liberty. But it was liberty for those who owned property. And when feudalism was shattered, which occurred at the end of the eighteenth century and the beginning of the nineteenth century—in Russia it occurred later than in other countries, in 1861—the feudal state was then superseded by the capitalist state, which proclaims liberty for the whole people as its slogan, which declares that it expresses the will of the whole people and denies that it is a class state. And here there developed a struggle between the socialists, who are fighting for the liberty of the whole people, and the capitalist state—a struggle which has led to the creation of the Soviet Socialist Republic and which is spreading all over the world. To understand the struggle that has been started against world capital, to understand the nature of the capitalist state, we must remember that when the capitalist state advanced against the feudal state it entered the fight under the slogan of liberty.
The abolition of feudalism meant liberty for the representatives of the capitalist state and served their purpose, inasmuch as serfdom was breaking down and the peasants had acquired the opportunity of owning as their full property the land which they had purchased for compensation or in part by quit-rent—this did not concern the state: it protected property irrespective of its origin, because the state was founded on private property. The peasants became private owners in all the modern, civilised states. Even when the landowner surrendered part of his land to the peasant, the state protected private property, rewarding the landowner by compensation, by letting him take money for the land. The state as it were declared that it would fully preserve private property, and the state accorded it every support and protection. The state recognised the property rights of every merchant, industrialist and manufacturer. And this society, based on private property, on the power of capital, on the complete subjection of the propertyless workers and labouring masses of the peasantry, proclaimed that its rule was based on liberty. Combating feudalism, it proclaimed freedom of property and was particularly proud of the fact that the state had ceased, supposedly, to be a class state. Yet the state continued to be a machine which helped the capitalists to hold the poor peasants and the working class in subjection. But in outward appearance it was free. It proclaimed universal suffrage, and declared through its champions, preachers, scholars and philosophers, that it was not a class state. Even now, when the Soviet Socialist Republics have begun to fight the state, they accuse us of violating liberty, of building a state based on coercion, on the suppression of some by others, whereas they represent a popular, democratic state. 
And now, when the world socialist revolution has begun, and when the revolution has succeeded in some countries, when the fight against world capital has grown particularly acute, this question of the state has acquired the greatest importance and has become, one might say, the most burning one, the focus of all present-day political questions and political disputes. Whichever party we take in Russia or in any of the more civilised countries, we find that nearly all political disputes, disagreements and opinions now centre around the conception of the state. Is the state in a capitalist country, in a democratic republic—especially one like Switzerland or the U.S.A.—in the freest democratic republics, an expression of the popular will, the sum total of the general decision of the people, the expression of the national will, and so forth; or is the state a machine that enables the capitalists of those countries to maintain their power over the working class and the peasantry? That is the fundamental question around which all political disputes all over the world now centre. What do they say about Bolshevism? The bourgeois press abuses the Bolsheviks. You will not find a single newspaper that does not repeat the hackneyed accusation that the Bolsheviks violate popular rule. If our Mensheviks and Socialist-Revolutionaries in their simplicity of heart (perhaps it is not simplicity, or perhaps it is the simplicity which the proverb says is worse than robbery) think that they discovered and invented the accusation that the Bolsheviks have violated liberty and popular rule, they are ludicrously mistaken. 
Today every one of the richest newspapers in the richest countries, which spend tens of millions on their distribution and disseminate bourgeois lies and imperialist policy in tens of millions of copies—every one of these newspapers repeats these basic arguments and accusations against Bolshevism, namely, that the U.S.A., Britain and Switzerland are advanced states based on popular rule, whereas the Bolshevik republic is a state of bandits in which liberty is unknown, and that the Bolsheviks have violated the idea of popular rule and have even gone so far as to disperse the Constituent Assembly. These terrible accusations against the Bolsheviks are repeated all over the world. These accusations lead us directly to the question—what is the state? In order to understand these accusations, in order to study them and have a fully intelligent attitude towards them, and not to examine them on hearsay but with a firm opinion of our own, we must have a clear idea of what the state is. We have before us capitalist states of every kind and all the theories in defence of them which were created before the war. In order to answer the question properly we must critically examine all these theories and views. I have already advised you to turn for help to Engels's book The Origin of the Family, Private Property and the State. This book says that every state in which private ownership of the land and means of production exists, in which capital dominates, however democratic it may be, is a capitalist state, a machine used by the capitalists to keep the working class and the poor peasants in subjection, while universal suffrage, a Constituent Assembly, a parliament are merely a form, a sort of promissory note, which does not change the real state of affairs.
The forms of domination of the state may vary: capital manifests its power in one way where one form exists, and in another way where another form exists—but essentially the power is in the hands of capital, whether there are voting qualifications or some other rights or not, or whether the republic is a democratic one or not—in fact, the more democratic it is the cruder and more cynical is the rule of capitalism. One of the most democratic republics in the world is the United States of America, yet nowhere (and those who have been there since 1905 probably know it) is the power of capital, the power of a handful of multimillionaires over the whole of society, so crude and so openly corrupt as in America. Once capital exists, it dominates the whole of society, and no democratic republic, no franchise can change its nature. The democratic republic and universal suffrage were an immense progressive advance as compared with feudalism; they have enabled the proletariat to achieve its present unity and solidarity, to form those firm and disciplined ranks which are waging a systematic struggle against capital. There was nothing even approximately resembling this among the peasant serfs, not to speak of the slaves. The slaves, as we know, revolted, rioted, started civil wars, but they could never create a class-conscious majority and parties to lead the struggle, they could not clearly realise what their aims were, and even in the most revolutionary moments of history they were always pawns in the hands of the ruling classes. The bourgeois republic, parliament, universal suffrage—all represent great progress from the standpoint of the world development of society. 
Mankind moved towards capitalism, and it was capitalism alone which, thanks to urban culture, enabled the oppressed proletarian class to become conscious of itself and to create the world working-class movement, the millions of workers organised all over the world in parties—the socialist parties which are consciously leading the struggle of the masses. Without parliamentarism, without an electoral system, this development of the working class would have been impossible. That is why all these things have acquired such great importance in the eyes of the broad masses of people. That is why a radical change seems to be so difficult. It is not only the conscious hypocrites, scientists and priests that uphold and defend the bourgeois lie that the state is free and that it is its mission to defend the interests of all; so also do a large number of people who sincerely adhere to the old prejudices and who cannot understand the transition from the old, capitalist society to socialism. Not only people who are directly dependent on the bourgeoisie, not only those who live under the yoke of capital or who have been bribed by capital (there are a large number of all sorts of scientists, artists, priests, etc., in the service of capital), but even people who are simply under the sway of the prejudice of bourgeois liberty, have taken up arms against Bolshevism all over the world because when the Soviet Republic was founded it rejected these bourgeois lies and openly declared: you say your state is free, whereas in reality, as long as there is private property, your state, even if it is a democratic republic, is nothing but a machine used by the capitalists to suppress the workers, and the freer the state, the more clearly is this expressed. Examples of this are Switzerland in Europe and the United States in America.
Nowhere does capital rule so cynically and ruthlessly, and nowhere is it so clearly apparent, as in these countries, although they are democratic republics, no matter how prettily they are painted and notwithstanding all the talk about labour democracy and the equality of all citizens. The fact is that in Switzerland and the United States capital dominates, and every attempt of the workers to achieve the slightest real improvement in their condition is immediately met by civil war. There are fewer soldiers, a smaller standing army, in these countries—Switzerland has a militia and every Swiss has a gun at home, while in America there was no standing army until quite recently and so when there is a strike the bourgeoisie arms, hires soldiery and suppresses the strike; and nowhere is this suppression of the working-class movement accompanied by such ruthless severity as in Switzerland and the U.S.A., and nowhere does the influence of capital in parliament manifest itself as powerfully as in these countries. The power of capital is everything, the stock exchange is everything, while parliament and elections are marionettes, puppets…. But the eyes of the workers are being opened more and more, and the idea of Soviet government is spreading farther and farther afield, especially after the bloody carnage we have just experienced. The necessity for a relentless war on the capitalists is becoming clearer and clearer to the working class. Whatever guise a republic may assume, however democratic it may be, if it is a bourgeois republic, if it retains private ownership of the land and factories, and if private capital keeps the whole of society in wage-slavery, that is, if the republic does not carry out what is proclaimed in the Programme of our Party and in the Soviet Constitution, then this state is a machine for the suppression of some people by others. And we shall place this machine in the hands of the class that is to overthrow the power of capital.
We shall reject all the old prejudices about the state meaning universal equality—for that is a fraud: as long as there is exploitation there cannot be equality. The landowner cannot be the equal of the worker, or the hungry man the equal of the full man. This machine called the state, before which people bowed in superstitious awe, believing the old tales that it means popular rule, tales which the proletariat declares to be a bourgeois lie—this machine the proletariat will smash. So far we have deprived the capitalists of this machine and have taken it over. We shall use this machine, or bludgeon, to destroy all exploitation. And when the possibility of exploitation no longer exists anywhere in the world, when there are no longer owners of land and owners of factories, and when there is no longer a situation in which some gorge while others starve, only when the possibility of this no longer exists shall we consign this machine to the scrap-heap. Then there will be no state and no exploitation. Such is the view of our Communist Party. I hope that we shall return to this subject in subsequent lectures, return to it again and again.

By V. I. Lenin
The Socialist - Magazine of Socialist Action - ISSN 2206-3218 (Online) | © Copyright 2020
"Mister Red" we call him, just Red for short. Red is known for keeping up his dapper appearance. We comb through piles of upcycled sweaters, selecting berry to brick reds, adding contrast to his unique design. The textural pursuit of this gent's wardrobe shows on his big fluffy tail! He's also a wee bit demanding, so only poly-fill will do for his hypo-allergenic stuffing. Red is 16.5 inches tall with a 9-inch tail, and about 16 inches wide. He likes the idea that he can be washed occasionally for appearance's sake, you know? Cold water, gentle wash. Line dry. Mister Red feels honored to be a part of the environmentally friendly line of friends, but he prefers having one close buddy. Maybe it's you?
Simple stud earrings of open circle design with dot texture. These circular stud earrings are made in 14k yellow gold. The circle detail features raised dots giving these earrings a lovely texture. These are the perfect everyday stud earrings. Earring Diameter: 9mm. Closure: Stud. Note: Matching Ring, Necklace available.
Federal court rules in favor of Gov. DeSantis in vaccine passport skirmish

An appeals court reversed a prior ruling which favored the CDC's incentivization of vaccine passports for cruise ships.

Photo: Florida Gov. Ron DeSantis signs bill banning transgender girls from school sports (10 Tampa Bay / YouTube)

By Ashley Sadler | Tue Jul 27, 2021 - 3:59 pm EDT | Updated Fri Jul 30, 2021 - 3:46 pm EDT

July 27, 2021 (LifeSiteNews) — A federal appeals court handed Florida Gov. Ron DeSantis (R) a big win Friday by reversing a prior decision which had favored COVID-19 mandates imposed by the Centers for Disease Control and Prevention (CDC). In its reversal, the court instead sided with the state of Florida and its ban on vaccine passports. The reversal came just hours after DeSantis submitted an emergency application to the U.S. Supreme Court, CNBC reported, and marks a crucial victory in the national legal battle against vaccine mandates. The decision by the 11th Circuit Court of Appeals is the latest move in a tense confrontation between the CDC and the state of Florida going back as far as April, with the CDC imposing crushing COVID-19 restrictions on the cruise line industry, a major income-generator in the state. "I'm glad to see the 11th Circuit Court of Appeals reverse its prior decision and free the cruise lines from unlawful CDC mandates, which effectively mothballed the industry for more than a year," DeSantis said in a statement July 23. Florida's five most popular ports, including Port Canaveral, Port Miami, and Port Everglades, account for "60 percent of embarkations at all U.S. ports," according to a recent report by the Cruise Lines International Association (CLIA), as well as "60 percent of the total employment of all cruise lines throughout the United States." But DeSantis emphasized that the victory in the case is not isolated to one industry or one state. "The importance of this case extends beyond the cruise industry," DeSantis said.
"From here on out a federal bureau will be on thin legal and constitutional ice if and when it attempts to exercise such sweeping authority that is not explicitly delineated by law." As reported by Breitbart News in a July 24 article, "[t]he saga between DeSantis and the CDC began in April after the governor announced the lawsuit over the federal health agency's restrictions on the cruise industry." DeSantis' first victory in the skirmish was scored in June when a Tampa federal district court issued a preliminary injunction in the Republican governor's favor, saying the CDC's COVID-19 rules were "likely unconstitutional and overstepping their legal authority." That victory was threatened, however, when just days before the Tampa court's ruling was set to take effect last weekend, the three-judge panel on the 11th Circuit Court of Appeals voted 2-1 to issue a temporary stay of the injunction July 17. On July 23, Florida moved aggressively in response to the stay, filing an Emergency Application with the U.S. Supreme Court to vacate the stay on the preliminary injunction. In the application, Florida argued that the vital cruise line industry has been crippled for 16 months under unlawful CDC mandates. According to CLIA, the industry was responsible for the creation of roughly 159,000 jobs and $8.1 billion in revenue throughout Florida in 2019. Amid crushing lockdowns and restrictions last year, however, an estimated $1 billion in income has been lost by the U.S. as a whole for every month that cruises remain grounded. According to a BizJournals report late last year, the "suspension of voyages" in Florida and throughout the U.S. would cost "about $14.1 billion in direct expenditures and $32.7 billion in total expenditures" by the end of 2020. Meanwhile, national cruise line job losses by the end of the year totaled "96,500 in direct employment and 254,400 in total employment."
Florida's application to the Supreme Court noted that "[f]rom March to October 2020, the CDC categorically banned cruising." "In October 2020, the agency supplanted its ban with a 'Conditional Sailing Order,'" the application continues, saying the CDC order "purported to reserve to the CDC the power to issue 'technical instructions' — which the CDC has wielded by posting an ever-changing array of requirements on its website, some of which purport to modify even central provisions of that Order, all without notice and comment." One such requirement imposed by the CDC forced cruise ships to "run self-funded experiments called 'test voyages.'" The CDC then offered a loophole to the expensive imposition, unilaterally updating its order after COVID shots became available to "allow cruise ships to avoid the test-sail requirement and many of the Order's other requirements by agreeing to vaccinate 95% of their crew and sail with 95% vaccinated passengers." The CDC loophole which incentivized the cruise line industry to require vaccine passports contradicts Florida law. In SB 2006, which took effect July 1, DeSantis explicitly outlawed vaccine passports in his state for industries in both the public and private sector.
On the same day Florida's application to the U.S. Supreme Court was filed, the 11th Circuit reversed its prior decision, giving the green light for the cruise line industry to "resume operations without adhering to the CDC's restrictions," as reported by Breitbart. In a 2-1 decision, the judges vacated their July 17 order, denying the CDC's "time sensitive motion for stay pending appeal," on the grounds that the CDC "failed to demonstrate an entitlement to a stay." Major cruise lines are now making announcements about upcoming cruise launches, with Disney announcing its cruises will start up in early August. In light of DeSantis's recent win against CDC orders, Disney says it is "not requiring vaccinations for guests on sailings departing from Florida" but will require COVID-19 tests prior to embarking. Guests can skip COVID-19 testing, however, if they elect to provide proof of COVID-19 vaccination. Florida's victory will be an important precedent for current and future lawsuits related to vaccine passports.
Many institutions and industries have begun aggressively incentivizing vaccination or outright requiring it, threatening the unvaccinated with termination or loss of privileges. In Indiana, college students are appealing a federal judge's ruling which denied their request to put Indiana University's vaccine mandate on hold pending the outcome of their federal lawsuit filed in June. The federal judge upheld the university's vaccine mandate which requires students, faculty, and staff to become "fully vaccinated" with an experimental COVID-19 jab as a prerequisite for attending classes or maintaining employment at the institution. The Trump-appointed judge said the mandate was in the "legitimate interest of public health." On Monday, California and New York both moved to force health care workers and state employees to get the COVID-19 shots or be subject to daily testing as a condition of employment. The Department of Veterans' Affairs will also mandate COVID-19 injections for its health workers, becoming the first federal department to require the jabs. The U.K., France, and Greece, among others, have already begun imposing vaccine passports and requiring health care employees to get the jab. The mandates come in response to allegedly surging cases of the "Delta variant" of the coronavirus. However, research suggests that COVID-19 injections fail to prevent "breakthrough COVID" and are not very effective against the Delta variant. The mandates also fail to recognize natural immunity or to reference the increasing number of serious adverse reactions or deaths associated with the shots. 
In May, after reports of breakthrough cases exceeded 10,000 and deaths among the "fully vaccinated" reportedly hit 535, (though 16% of those fatalities were allegedly reported as asymptomatic or not related to COVID) the Centers for Disease Control and Prevention announced its transition "from monitoring all reported vaccine breakthrough cases" to focusing "on identifying and investigating only hospitalized or fatal cases due to any cause." Meanwhile, a British government "variant" report found that 68% of the recent COVID deaths with the Delta variant in the U.K. were among those who had gotten the shot, with over half having been "fully vaccinated." Likewise, Israel reported late last month that most new cases of COVID were in the vaccinated, with the vaccines "significantly less effective" against the so-called Delta variant. In Israel about 60% of patients in serious condition had gotten the jab, while around 90% of those over the age of 50 testing positive for the virus have gotten full doses of the injection.
# What is Capacitance | Definition & Formula

Definition: Capacitance is a property that opposes any change in voltage. A capacitor is a device that temporarily stores an electric charge.

A capacitor accepts or returns this charge in order to maintain a constant voltage. Schematic symbols used to represent a capacitor are shown in Figure 1.

Figure 1. Schematic symbols for the capacitor.

The capacitor is made of two plates of conductive material, separated by insulation. This insulation is called a dielectric (Figure 2).

In the figure, the plates are connected to a dc voltage source. The circuit appears to be an open circuit because the plates do not contact each other. However, the meter in the circuit will show some current flow for a brief period after the switch is closed.

Figure 2. A basic form of a capacitor.

In Figure 3, as the switch is closed, electrons from the negative terminal of the source flow to one plate of the capacitor. These electrons repel electrons from the second plate (like charges repel), which are then drawn to the positive terminal of the source. The capacitor is now charged to the same potential as the source and is opposing the source voltage.

If the capacitor is removed from the circuit, it will remain charged. The energy is stored within the electric field of the capacitor. Once the capacitor is fully charged, current ceases to flow in the circuit.

Figure 3. The capacitor charges to the source voltage.

It is important to remember that in the circuit in Figure 3, no electrons flowed through the capacitor. This is because a capacitor blocks direct current. However, one plate did become negatively charged and the other positively charged. A strong electric field exists between them.

Insulating or dielectric materials vary in their ability to support the electric field. This ability is known as the dielectric constant of the material.

The constants of various materials are shown in Figure 4. These numbers are based on comparison with the dielectric constant of dry air, which has been assigned a value of 1.

Figure 4. Dielectric constants. Larger numbers are better able to support electric fields.

## Capacitor Working Voltage

The dielectrics used for capacitors can only withstand certain voltages. If this voltage is exceeded, the dielectric will break down and arcing will result. This maximum voltage is known as the working voltage (WV).

Exceeding the working voltage can cause a short circuit and can ruin other parts of the circuit connected to the dielectric.

Increased voltage ratings require special materials and thicker dielectrics. When a capacitor is replaced, check its capacitance value and dc working voltage.

When a capacitor is used in an ac circuit, the working voltage should safely exceed the peak ac voltage. For example, a 120-volt effective ac voltage has a peak voltage of 120 V × 1.414 = 169.7 volts. Any capacitors used must be able to handle 169.7 volts.

## Capacitance Calculation Formula

Capacitance is determined by the number of electrons that can be stored in the capacitor for each volt of applied voltage. Capacitance is measured in farads (F). A farad represents a charge of one coulomb that raises the potential one volt. This equation is written:

$C=\frac{Q}{E}$

where C is the capacitance in farads, Q is the charge in coulombs, and E is the voltage in volts.

Capacitors used in electronic work have capacities measured in microfarads (1/1,000,000 F) and picofarads (1/1,000,000 of 1/1,000,000 F). Microfarad is commonly written as μF or sometimes as mfd. Picofarad is written as pF. Nanofarad is not a common measurement of capacitance.

A conversion chart for these units is shown in Figure 5.

Figure 5. Prefixes used with the farad. Take special note that the prefix nano is missing; nanofarad is not a standard rating size for a capacitor.

Capacitance is determined by:

- The material used as a dielectric. (The larger the dielectric constant, the greater the capacitance.)
- The area of the plates. (The larger the plate area, the greater the capacitance.)
- The distance between the plates. (The smaller the distance, the greater the capacitance.)

These factors are related in the mathematical formula:

$C=0.225\times \frac{KA\left( n-1 \right)}{d}$

where C is the capacitance in picofarads, K is the dielectric constant, A is the area of one side of one plate in square inches, d is the distance between plates in inches, and n is the number of plates.

This formula illustrates the following facts:

1. Capacitance increases as the area of the plates increases, or as the dielectric constant increases.
2. Capacitance decreases as the distance between the plates increases.

A lesson in Safety

Many large capacitors in TVs and other electronic equipment retain their charge for a long time after power is turned off. Discharge these capacitors by shorting terminals to the equipment's chassis with an insulated screwdriver.

If capacitors are not discharged, the voltages can destroy test equipment, and persons working on the equipment can receive a severe shock!
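The two formulas above, together with the working-voltage example, can be checked numerically. The following is an illustrative sketch only: the function names are made up, and the dielectric constant K = 6 used for mica is an assumed mid-range value (Figure 4 is not reproduced here).

```python
# Illustrative sketch of the capacitance formulas from the text.
# Function names are invented; K = 6 for mica is an assumed value.

def capacitance_farads(charge_coulombs: float, volts: float) -> float:
    """C = Q / E: capacitance in farads from coulombs per volt."""
    return charge_coulombs / volts

def plate_capacitance_pf(k: float, area_sq_in: float,
                         n_plates: int, spacing_in: float) -> float:
    """C (pF) = 0.225 * K * A * (n - 1) / d."""
    return 0.225 * k * area_sq_in * (n_plates - 1) / spacing_in

# One coulomb raising the potential one volt is one farad:
print(capacitance_farads(1.0, 1.0))  # 1.0

# Two 1-sq-in plates 0.01 in apart with an assumed mica dielectric (K = 6):
print(round(plate_capacitance_pf(6, 1.0, 2, 0.01), 1))  # 135.0 pF

# Working-voltage check for the 120 V effective ac example:
print(round(120 * 1.414, 1))  # 169.7
```

The last line reproduces the article's 169.7 V figure: any capacitor used on a 120 V effective ac line must be rated for at least the peak, not the effective, voltage.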
docker run --name postgres -d -p 5432:5432 -e POSTGRES_USER="postgres" -e POSTGRES_PASSWORD="postgres" -e POSTGRES_DB="go_restful" postgres

# normal usage
docker run -d -p 8080:8080 --name restful --link postgres:postgresql zhanat87/golang

# quick test
#docker run -it --rm -p 8080:8080 --name restful --link postgres:postgres zhanat87/golang

# debugging
#docker run -it -p 8080:8080 --name restful --link postgres:postgres zhanat87/golang bash

docker ps -a
echo "start success"
Greatest Hits of 2011
By Clay Hamilton

The top Santa Clara Magazine stories from last year, as well as from the vaults, keep online readers coming back.

One of the advantages of the online magazine is that readers are able to interact with our stories. And in 2011, interact they did. Whether it was to add a memory of life in Graham Hall, provide a thoughtful comment on a feature article, share a story on Facebook, or view photo galleries of the Peace Corps volunteers, readers came to santaclaramagazine.com more often and in higher numbers than ever before. So what were the favorites? Drum roll, please …

10. MAN IN MOTION: When it comes to football, Rich McGuinness '89 is the force behind The Ride and the U.S. Army All-American Bowl.

9. SERIAL START-UP SENSATION: Diane Keng '14 — a veteran entrepreneur at 19.

8. LAW AT 100: A century of legal education at SCU. See snapshots from across the years and look at the big picture of how the legal landscape has changed.

7. LIFE CYCLE: A photo essay by Susan Middleton '70. Luminous beauty drawn from two remarkable projects—Evidence of Evolution and Spineless. And a sneak peek at a show by this Guggenheim fellow opening in April at SCU's de Saisset Museum.

6. HOW CAN YOU DEFEND THOSE PEOPLE? As public defenders on the Homicide Task Force, Robert Strunck '76 and Crystal Marchigiani '78 have some 40 years between them representing accused murderers—many of whom faced the death penalty.

5. TRADITION SHATTERED: Fifty years ago, Santa Clara admitted the first class of women to its undergraduate program. Gerri Beasley '65 shares some memories.

4. SATELLITE HEART: For the first part of her life, Anya Marina '96 found her voice a source of embarrassment and ridicule. Now, with her third album on the way, it's her bread and butter.

3. CHANGE THE WORLD: The U.S. Peace Corps turned 50 this year and a few Santa Clara grads (and faculty and staff) recount their time as volunteers—and where it's taken them.

2. REMEMBRANCE OF THINGS GRAHAM: Thousands of SCU students called Graham Hall home over the past half century. The first residence hall built for women, it boasted a pool, the Pipestage club, and campus hijinks. The old buildings are gone to make way for the new but the memories live on.

1. REVEALED! THE TRUTH BEHIND NO NAME! On today's Rock Report: the story (and real identity) of a legendary bad boy disc jockey. It's none other than Mike Nelson '96, whose freshman thrash band was once booed off the stage at the Leavey Center.

Our online readers also came in search of articles from previous years. So much so that we are working to make it even easier to find favorite stories and related content. What were the most sought after blasts from the past? Take a look—maybe you'll discover a gem you've missed.

10. BUILT BY IMMIGRANTS: Gerald McKevitt, S.J., looks at the lives of the early Californian Jesuits and the impact they had on the West.

10. A CENTURY OF BRONCO BASKETBALL: The first basketball player ever to make the cover of Sports Illustrated, 11 NCAA tourney invites, a dozen All-Americans, a No. 2 national ranking, and an alumnus who's changing the way the game is played in the NBA. Not a bad first hundred seasons.

9. FILIPINO ANGELENOS: Mae Respicio Koerner's Filipinos in Los Angeles offers a remarkable glimpse of a century of Filipinos in Los Angeles.

8. GOING GLOBAL: They hail from around the world, but it's Bronco red and white that brings them together.

7. BE WHO YOU IS: James Martin, S.J., reminds us that our own vocations lead to true happiness, not trying to lead someone else's life.

6. TRUTH, LEGEND, AND JESSE JAMES: Jesse James' exploits made him a legend even in his own time. Now the author of the novel The Assassination of Jesse James by the Coward Robert Ford reveals what it takes to get beyond coloring book heroes and villains to understanding a charming psychopath and his killer.

5. JUSTICE DELAYED: Late last fall, the FBI concluded an 18-month investigation into the case of the 1955 murder of Emmett Till, a 14-year-old African-American boy. What have we learned (and not learned) about civil rights in the 50 years since?

4. BREAKING THROUGH: Francisco Jiménez has faced many challenges since entering the United States from Mexico. Through work in the fields, to deportation, to struggles in English class, he persevered. And now he's a professor at SCU.

3. WHAT DO WE SEE WHEN WE LOOK? PHOTOGRAPHY, LYNCHING, AND MORAL CHANGE: An ethical examination of art exhibits featuring images of lynchings.

2. A PUZZLING PROFESSOR: Byron Walden, an assistant professor of mathematics at SCU, draws on his knowledge of numerical analysis to create crossword puzzles for The New York Times.

1. SPIRITUAL EXERCISES: Iñigo de Loyola kept a notebook of the consolations, graces, and inner wrenchings he experienced while meditating on scripture. It became a practical manual for others.
Q: Case in where clause with null value

I want to check if the value exists in the column; if it does, return that record, else return the record which has a NULL value in it. Here is the sample query, but it is not working for me as the syntax is not correct.

SELECT *
FROM t_config_rule
WHERE rule = (CASE WHEN rule = 'IND' THEN 'IND' ELSE NULL END)

A: A CASE expression in the WHERE clause makes no sense, as you can always say the same with simple AND and/or OR. Your condition translates to:

WHERE rule = 'IND' OR rule = NULL

As rule = NULL is never true (it would have to be rule IS NULL for this to work), the condition is essentially a mere WHERE rule = 'IND'.

You seem to want something entirely different anyway. You seem to want to look for one single record. It shall be the one with rule = 'IND'. If that record does not exist, you want the record with rule IS NULL instead. So select both records, the one with 'IND' and the one with NULL, and then keep the preferred one:

SELECT *
FROM t_config_rule
WHERE rule = 'IND' OR rule IS NULL
ORDER BY rule NULLS LAST
FETCH FIRST ROW ONLY;

The FETCH FIRST clause is available only as of Oracle 12c, though. For older versions use:

SELECT *
FROM
(
  SELECT *
  FROM t_config_rule
  WHERE rule = 'IND' OR rule IS NULL
  ORDER BY rule NULLS LAST
)
WHERE ROWNUM = 1;

or number the rows with ROW_NUMBER and keep the first row:

SELECT *
FROM
(
  SELECT r.*, ROW_NUMBER() OVER (ORDER BY rule NULLS LAST) AS rn
  FROM t_config_rule r
  WHERE rule = 'IND' OR rule IS NULL
)
WHERE rn = 1;

FETCH FIRST and ROW_NUMBER are standard SQL, whereas ROWNUM is Oracle-only (and not part of the standard at all).
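If you want to sanity-check the fallback behaviour outside Oracle, the same pattern can be reproduced in any engine. The sketch below uses SQLite via Python's sqlite3 module; the table and column names mirror the question, but the extra val column and the sample rows are made up for the demo. LIMIT replaces FETCH FIRST, and ORDER BY (rule IS NULL) emulates NULLS LAST, which older SQLite versions lack:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_config_rule (rule TEXT, val TEXT)")
conn.executemany("INSERT INTO t_config_rule VALUES (?, ?)",
                 [(None, "default"), ("FRA", "france")])

# prefer the matching row; fall back to the NULL row if no match exists
query = """
    SELECT val FROM t_config_rule
    WHERE rule = ? OR rule IS NULL
    ORDER BY (rule IS NULL)   -- 0 sorts before 1, so a real match wins
    LIMIT 1
"""

print(conn.execute(query, ("IND",)).fetchone()[0])  # no IND row yet -> default

conn.execute("INSERT INTO t_config_rule VALUES ('IND', 'india')")
print(conn.execute(query, ("IND",)).fetchone()[0])  # now -> india
```

The same two-step behaviour is visible in both calls: the NULL row only wins when no exact match is present.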
Fritz Gartz (born 3 February 1883 in Berlin; died 1 September 1960 in Söcking) was a German painter.

Fritz Gartz is counted among the painters of "Expressive Realism", a term coined by the art historian Rainer Zimmermann. Born the seventh of twelve children in a baker's family, he attended a private school for painting in Berlin from September 1905. From October 1906 he was a pupil at Walter Thor's painting school in Munich, before being taught at the Academy of Fine Arts under Hugo von Habermann. In 1906 he met Jorgos Busianis and Giorgio de Chirico in Munich, and in 1908 he drew an early portrait of de Chirico. In 1907 Fritz Gartz traveled to Egypt. In March 1908 he entered into his first marriage. From 1911 to 1913 he was a member of the German artists' association "Die Juryfreien" in Munich. In 1914 he became a member of the Freie Münchner Künstler. His style was shaped by Impressionism and quickly found an expressive form. His talent for portraiture showed early, as did, later, his gift for landscape painting. In 1915 he was conscripted into war service, from which he returned in 1917 after being wounded. Several journeys took him to Italy, to the Bavarian mountains, and to the Chiemsee, where he devoted himself to landscape painting. In 1928 he married for the second time and built a house in Söcking near Starnberg. In 1932 he received the Albrecht Dürer Prize of the city of Nuremberg. From 1934 he withdrew into his family life; various commissioned works contributed to his livelihood. In 1954 his work was shown for the first time in a solo exhibition at the Städtische Galerie München. After a prolonged heart condition, Fritz Gartz died in Söcking in 1960. In Starnberg, the "Fritz-Gartz-Weg" is named after him.

Literature
- Georg Jacob Wolf: Zu den Aquarellen von Fritz Gartz. In: Die Kunst für Alle, Heft 7, München 1932, pp. 218–220.
- Karl B. Berthold, Goldschmiedearbeiten; Joachim Berthold, Plastiken; Fritz Gartz, Gemälde und Graphik; Helmut Hölzler, Graphik; Willy Reue, Gemälde: Ausstellung vom 12. Juni – 11. Juli 1954.
- Gerd Roos: Giorgio de Chirico und seine Malerfreunde Fritz Gartz – Georgios Busianis – Dimitros Pikinos in München 1906–1909. In: Schmied, Wieland; Roos, Gerd: Giorgio de Chirico München 1906–1909. Akademie der Künste, München 1994, pp. 55–182.
- Ingrid von der Dollen: Die Sammlung Joseph Hierling – Expressiver Realismus. Schweinfurter Museumsschriften 166/2009, Schweinfurt 2009, ISBN 978-3-936042-49-8.
- Ingrid von der Dollen: Fritz Gartz 1883–1960. Malerei und Grafik. Edition Joseph Hierling, Tutzing 2012, ISBN 978-3-925435-25-6.
Guest Post: Using foreign language in your story – the balancing act!
by Juliette Wade

I love to use non-English languages in a story. To me, a foreign language behaves like a form of music – it creates mood and atmosphere for the readers who don't understand it, and it further creates a layer of meaning for those readers who do understand it. I take much the same philosophy toward foreign language as I do toward alien language: the more you use, the more alien the story feels; the less you use, the more familiar the language feels. With a story like "Suteta Mono de wa Nai (Not Easily Thrown Away)," it's really important to me to have a point of view that suggests a person who is inside the Japanese perspective. That means using a lot of English, which might seem ironic. Making a mostly English narrative feel Japanese is all about exploring the other aspects of the Japanese point of view, like metaphors, attitudes, culturally grounded phrases and opinions, all of which can be accomplished in English. That English narrative creates a scaffold into which Japanese phrases can be embedded without having to be defined. When the context surrounding a word is strong, the word does not need translation. But what constitutes a strong context differs for different people. I, personally, could totally fail to provide any scaffolding context at all and still appreciate what those words are doing, which means I have to guess in my first draft which ones will be easily deduced by others and which ones won't. Also, it depends on who is reading it. Take, for example, the identity of Naoko's grandmother, also known as Obaa-chan. A lot of people do know that phrase; a lot don't. In this instance I relied on Grice's Cooperative Principle of Communication, the Maxim of Quantity, which says "say as much as, and no more than, is required." At the start of the story, there is one character hearing voices. The voices mention she has a grandmother.
Then the single character refers to someone named Obaa-chan. Conversational logic tells us that Obaa-chan has to be her grandmother, because only one other flesh-and-blood character has been mentioned, and that's her grandmother. Right? Right. But it's not that obvious for everyone (at least one of my readers mentioned feeling this way). Just because something makes sense to you, that doesn't mean you can count on every reader to put together the same logic. Conversational logic depends a lot on our previous knowledge and experience! It's just like when I hear someone saying a sf/f character name that I wrote down, and saying it in a way I consider "wrong." Who's to say that's wrong? It's what was on the page, filtered through their own phonological interpretation of the spelling. I decided long ago that it was up to them how to say it and I could explain the "real" pronunciation if asked, but didn't really have to. When you are working with an alien language that nobody knows, you can be certain that nobody knows it. But when you are working with a foreign language, it's a different kind of balancing act. Some people know it, and some don't. More importantly, these people come to a story with different attitudes. Some people are content to let foreign language act like musical accompaniment, and some people feel like they want to know the meaning of every word. The writer is usually standing somewhere in between. In my own early drafts of "Suteta Mono de wa Nai (Not Easily Thrown Away)," I discovered something I hadn't expected. A reader with no knowledge of Japanese at all was perfectly happy to let the Japanese function as musical accompaniment so long as the English parts of the story made sense, and deduction could reconstruct the meanings in critical spots. A reader with lots of knowledge of Japanese was content to see a level of redundancy in the piece because they knew that not everyone would know all the words they did. 
A reader with some knowledge of Japanese, but not enough to understand all the words used, was discontent with not knowing all those words and wanted to see stronger scaffolding. Once you move out into the world where a large number of people are reading, this selection of reactions diversifies. I've heard the no-Japanese reaction a lot, I've heard the lots-of-Japanese reaction a lot, and I've heard multiple different versions of the some-Japanese reaction. My sense is that a person's reaction will be very individual, not only dependent on how much Japanese they know, but how much they want to know, how much they personally expect others to know, etc. This will be similar (with some variations) no matter what language you choose to use. This is all part of sending a story out into the world. Every reader reads a story differently. Every reader brings different expectations. So how much Japanese is too much? How much scaffolding is too much? It's a tricky balancing act. As a writer, all you can do is respond to critique to a level that seems reasonable based on what you know about your beta readers, and then trust that later readers will read it in their own way. Because they will – that's what reading is. It's something to think about.

Juliette Wade has lived in Japan three times, and has turned her studies in linguistics, anthropology and Japanese language and culture into tools for writing fantasy and science fiction. She lives in the Bay Area of Northern California with her husband and two children, who support and inspire her. She blogs at TalkToYoUniverse, where this post first appeared, and she runs the "Dive into Worldbuilding!" hangout series on Google+. Her fiction has appeared several times in Analog Science Fiction and Fact, and in several anthologies.

Sabrina Vourvoulias, May 17, 2014 at 10:45 am:
This is an issue I grapple with all the time.
As a bilingual Latina whose day job requires editing and writing both Spanish and English, as well as translating from English to Spanish (and vice versa), I habitually sprinkle Spanish into the English-language fiction I write. It is habit, yes (half of the editorial team at our newspaper are fully bilingual, the other half are variously English-only or Spanish-only, so conversations hop languages out of habit and necessity) but also because it reflects the Latino experience in the U.S. (and Latin America) which is almost inevitably a bilingual one. I rarely provide an outright translation in the story because, like you, I trust that context and a bit of deductive reading is usually all that is needed. Your comments about adding music I found a bit more problematic, even as I was nodding along to it. It made me think of a poem I wrote relatively recently (for the In Other Words anthology to benefit Con or Bust) which uses a newish Spanish syllabic form. It was really delightful for me as writer to challenge myself to conform both English and Spanish words to the syllable count and keep to cadences this set up. At the same time, I realized that for many readers, the effect would be no different than what is provided by the sort of linguistic tourism that sprinkles "foreign" words (seemingly at random) through a piece. And this distresses me because I don't have a high tolerance for either linguistic or cultural tourism in writing. How can the non-bilingual reader tell the difference? Does the music of language used in a real way come through? Or does it lead to the further Skippyjohn-ing (http://www.amazon.com/Skippyjon-Jones-Doghouse-Judy-Schachner/dp/0142407496) and Cinco de Mayoing of a language I love? 
And can the unsuspecting reader tell linguistic tourism from the genuine use of Spanglish which — despite the Real Academia Española's prissy and disrespectful definition of it as a "deformed" version of Spanish — is actually an incredibly dynamic, politically intriguing and distinct homegrown U.S. language? I want to trust that the ear (bilingual or not) knows the difference, but does it? Obviously, I choose to do it anyway (almost all of my published stories, and many of my poems, have some Spanish phrases or words in them), but it does give me pause.
House Beats December 1 Deadline, Grants One Month SGR Reprieve

Earlier this week, the U.S. House approved a one-month delay in Medicare SGR driven payment cuts, giving a short-term reprieve to a looming crisis over treatment of the nation's elderly. The Senate acted on this reprieve just before the Thanksgiving break.

Opening Salvo on Preventing 2011 SGR Cut Launched by AARP

The Washington State chapter of AARP announced a campaign to urge Congress to prevent the pending imposition of the 25% Medicare SGR cut, now set for January 1, 2011. Our state's Congressional Delegation has repeatedly affirmed its commitment to stopping a cut. AARP's campaign—initially aimed at our two US Senators—urges them to lean on their colleagues to take action before the end of the year to defer any cuts through December 31, 2011. The effort is to engage an estimated 15,000 seniors. Here are key points made in AARP's announcement:

"An overwhelming number of AARP members across Washington say they are concerned they could lose their doctor if Congress fails to stop a 25 percent pay cut facing Medicare doctors, according to a recent survey. The survey also found that AARP members fear it could be very difficult for them to find a new doctor that would accept Medicare patients if Congress fails to act.

"'We constantly hear from seniors that, after their children and grandchildren, the person they trust most is their doctor,' said AARP Advocacy Director Ingrid McDonald. 'This survey is a message to lawmakers that seniors will be watching to see whether they vote to prevent people in Medicare from losing their doctors.'

"The majority of survey respondents (84%) said they'd be either 'very concerned' or 'somewhat concerned' that doctors may stop treating Medicare patients because of this cut. In addition, eighty percent (80%) said they'd have trouble finding a new doctor that would take Medicare should the looming cut take effect.
"Physician payment has been an ongoing issue facing seniors and the doctors who care for them. More than 10 years ago, Congress created a flawed system to pay doctors who treat Medicare patients. Because lawmakers have been unable to fix this system, Medicare can no longer pay doctors what it costs to care for seniors. Now, unless Congress acts by January 1 the scheduled (…2011...) 25 percent cut will take place, which could result in many seniors losing their doctors or having trouble finding a new one.

"'Washington's seniors count on the security and peace of mind they get from seeing the doctor they trust,' McDonald added. 'We urge Senators Cantwell and Murray to stop this cut and work to provide doctors with a stable payment system so they'll continue to treat Medicare patients.'

"The survey also found that seventy-nine percent (79%) of AARP members would be more favorable to their Senators if they fought to preserve access to physicians by protecting Medicare payments to doctors.

"Finally, in response to some lawmakers who proposed to stop the cut for three months, sixty-seven percent of AARP members said they would prefer a long-term solution. AARP is urging Congress to stop cuts now for one year while the new Congress works towards a permanent solution that will give seniors the peace of mind that they can keep seeing their doctors."

Hospital Applications for the Emergency Cardiac and Stroke System are Now Available

The Department of Health is now accepting applications to participate in the Emergency Cardiac and Stroke (ECS) System http://www.doh.wa.gov/hsqa/hdsp/default.htm. Applying to participate in the ECS System is easy. There are three categorization levels for stroke centers and two for cardiac centers. Most hospitals already meet the criteria to participate at one of the levels.

Steps to apply:
1) Determine which level of cardiac and stroke center categorization best fits your hospital's resources and capabilities.
See the participation criteria here http://www.doh.wa.gov/hsqa/hdsp/hospital.htm [NOTE: we have provided clarification and made minor modifications to the criteria on the application. Please request an application even if you think your hospital doesn't meet the participation criteria exactly as stated on the website.]
2) Request an application for the categorization level you wish to apply for by sending an email to Kim Kelley, Cardiac/Stroke Systems Coordinator, kim.kelley@doh.wa.gov. We will begin sending applications out Friday, December 3, 2011.
3) Complete and submit the application according to the instructions on the application.

You'll hear back from the DOH within 60 days of your application. The sooner we can let EMS know which hospitals are participating, the faster we can get the system in place and start saving more lives and reducing disability for the people in our communities. If you have questions, contact Kim Kelley, 360-236-3613, kim.kelley@doh.wa.gov.

Know Your Options Re: Medicare

From mid-November through December 31, physicians will have their annual opportunity to review and perhaps change their Medicare participation status. Given the severe Medicare payment disruptions and uncertainty going forward, the WSMA encourages you to review your options carefully. To help you choose the direction that is right for your practice, the AMA has developed the "Know your options: Medicare participation guide." This kit contains a detailed explanation of the three available options: participation (PAR), non-participation (non-PAR), and private contracting. It also includes a helpful revenue calculator and various sample materials to help physicians share information with current, new, and prospective patients. The Medicare options kit is accessible at www.ama-assn.org/go/medicareoptions. Also, please continue to urge your patients to get involved by directing them to our online petition www.wsma.org/melt-down.cfm. See item below.
Interpreter Services Payment Cut Delayed

The Washington Medicaid program has delayed until March 1, 2011 its proposed elimination of coverage for interpreter services. As part of the state's budget crisis, the cut was scheduled to take effect on January 1. The WSMA, in cooperation with Physicians Insurance A Mutual Company, has prepared updated guidance on interpreters' services, available on www.wsma.org (Practice Resource Center, Practice Management Operations).

Hole Deeper, Hill Steeper: The State Budget

The state budget problems worsen. As the governor wrote to state agency heads, "Like you, I can't stomach more bad fiscal news. Today's forecast took an already difficult situation and made it worse. The projection takes $385 million from this year's budget and $809 million from the 2011–13 budget. While the projection has immediate consequences, it also creates a $5.7 billion deficit for the next biennium." The steep decline in revenues announced by the state economist was not anticipated, and just to get through June 30, 2011 would add another 4.6 percent to the 6.3 percent across the board reductions ordered last month. Rather than do that, the governor asked legislative leaders of both parties to provide their options on how to address the shortfall. The drama that is unfolding, now that revenues continue to drop and voters have tied the hands of the legislature to increase revenues, brings to mind the axiom that "one (in this case, the voters) should be careful what they wish for; he or she might get what they want." When the cuts come, we'll see how far into the middle class they go – and whether state employees and state retirement plans are affected – and to what degree voters reconsider their positions. In the meantime, our Olympia staff is in steady contact with legislators and the governor's office regarding health care programs and issues.

Study: 40% of ED patients have low health literacy

U.S. researchers reported about 40% of emergency department patients have limited health literacy, which may influence their reasons for seeking ED care and their outcomes. The study found that the low health literacy group had skills assessed at or below the eighth-grade level, while ED patient materials usually are written at or above the ninth-grade level.

CDC sees drop in cold medication-related ED visits for toddlers

CDC researchers said the number of children below age 2 who were taken to the emergency department for adverse reactions from cough and cold medications decreased by more than 50% between the 14 months before and the 14 months after the products were pulled from the market. However, researchers noted that two-thirds of the ED visits before and after withdrawal involved unsupervised ingestion of the medications.

Joint Commission Alert: Suicides a risk in the ER, hospital

A new Joint Commission Sentinel Event Alert urges greater attention to the risk of suicide for non-psychiatric patients in emergency departments and medical/surgical inpatient units and recommends education for caregivers about warning signs that may indicate when these patients are contemplating harming themselves. The Alert cautions that many patients who kill themselves in general hospital units do not have a psychiatric history or a history of suicidal attempts.

Washington Needs a Statewide Drug Take-Back Program
Access to Safe Disposal Can Prevent Accidental Poisonings and Abuse

Each year about one-third of the medicines sold to Washington households, about 33 million containers of drugs, go unused. That's a big problem. Last year, the Washington Poison Center received more than 17,000 calls regarding young children who were poisoned by prescription or over-the-counter medicines. In addition to poisonings, kids are now being tempted by these same drugs because they perceive them as "safe". A staggering 12 percent of Washington high school seniors abuse medicines.
And more than half of teens abusing medicines get them from a family member or friend, often without their knowledge. The best solution to curbing this epidemic is a permanent drug take-back program where Washington residents can bring their unwanted medicine to be properly destroyed. The Washington Chapter American College of Emergency Physicians is part of Take Back Your Meds, a group of organizations that are advocating for legislation that requires drug companies to pay for a take-back program to collect and destroy these medicines. For about a penny per container of medicine sold in Washington state, drug manufacturers could provide a convenient, safe program and relieve the financial burden on local governments, retailers and taxpayers. To find out more about Take Back Your Meds and help support efforts to create a statewide drug take-back program go to www.takebackyourmeds.org.

Medicaid expands "Generics First" initiative to include mental health drugs (anti-psychotics)

The Washington State Medicaid program will expand its successful Generics First initiative to mental health drugs on November 1, encouraging prescribers to use lower-cost generic drugs. A similar program for children has been up and running for a year without issues. The initiative will not apply to Medicaid clients who are already on a drug regimen involving a brand-name drug or a combination of drugs that includes a brand. Although brands are considerably more expensive than generics, the program will continue to pay for those drugs. Dr. Jeffery Thompson, Medicaid's chief medical officer, said the Generics First initiative has been successful in other categories of drugs – saving the state approximately $2 million a year for every 1 percent increase in the generics fill rate. For More Information: Jim Stevenson, Communications, 360-725-1915 (Pager: 360-971-4067).
Online Death Filing Starts Early 2011

The Washington State Department of Health is releasing a new online Electronic Death Registration System (EDRS) to Pierce, Thurston, Mason, Benton, Franklin and Spokane counties in early 2011, with a statewide release to follow. Those who file death records in Washington State are encouraged to enroll in the new system. EDRS will streamline the death registration process, improve the quality of the death data collected, improve communication among those who file, and use the internet to make filing faster. To enroll or request information, contact Field Services at 800-525-0127 or EDRS@doh.wa.gov.

Everyone Benefits with EDRS

Physicians will quickly complete a death record from any computer with internet access and file it with a single click. This paperless system does not require extensive computer knowledge. It will streamline communication between funeral directors and physicians and eliminate the need to fax or sign paper records. It will offer a fast, easy, more accurate way to file. Families will get death certificates faster and will be able to do so from any local health jurisdiction across the state. EDRS will deliver better service to families because delays with paper processing will be reduced. Funeral homes will save time and money by collecting physicians' signatures electronically. They will view cases online and get death certificates faster. The people of Washington will benefit by having immediate and accurate death data used to combat public health threats.

E-Update Contributions and Suggestions

The WA-ACEP NewsWatch is your newsletter! All member contributions are welcome! We encourage you to submit articles, letters, practice tips to share in the newsletter, or send us a question you would like answered or your ideas for future articles. Email your contribution and suggestions to the WA-ACEP office at smc@wsma.org.
California Chapter ACEP 34th Annual Emergency Medicine in Yosemite
4th Annual Forensic Investigations, March 30 – April 1, 2011, Kansas City, MI
Summit to Sound – NW Emergency Medicine Assembly

To have your job posting or classified advertising included in the JobWatch, submit your copy to smc@wsma.org. Your ad will appear in our electronically distributed NewsWatch. Ads for job postings will also be placed in the Job Posting section of the website, until the position has been filled.
\section{Introduction}
The term cosmological black hole is used to describe a collapsing structure within an otherwise expanding universe. Since the early days of the discovery of the expansion of the universe, people have been looking for models describing an overdense region in a cosmological background (\cite{McVittie}, see also \cite{cosmological black hole}). The initial expansion of such an overdense region will finally decouple from the background and collapse to a dynamical black hole. The resulting structure and its central black hole, despite the very weak gravitational field outside it, differ from the familiar Schwarzschild one in being neither static nor asymptotically flat \cite{man}. Therefore, such cosmological structures, if based on exact solutions of general relativity and not produced by a cut-and-paste technology, are very interesting laboratories to study not only general relativistic structures and their quasi-local features such as mass and horizons \cite{man}, but also the validity of the weak field approximation in the presence of the very weak gravitational fields relevant to non-local concepts within general relativity \cite{mojahed}. After all, the universe is evolving and asymptotically not Minkowskian. Therefore, one needs to have a dynamical model for a black hole to be compared with the familiar results in the literature on black holes within a static and asymptotically flat space-time \cite{waldbook}, where global concepts such as the event horizon are not defined. The need for a local definition of black holes and their horizons has led us to concepts such as Hayward's trapping horizon \cite{Hayward94}, the isolated horizon \cite{ashtekar99}, Ashtekar and Krishnan's dynamical horizon (DH) \cite{ashtekar02}, and Booth and Fairhurst's slowly evolving horizon \cite{booth04}. \\
Now, a widely used metric to describe the gravitational collapse of a spherically symmetric dust cloud is the so-called Lema\^{\i}tre-Tolman-Bondi (LTB) metric \cite{LTB}.
It was pointed out in \cite{man, mangeneral} that the model admits cosmological black holes. These models may be extended to a perfect fluid with a non-zero pressure, the so-called Lema\^{\i}tre models \cite{hellaby}. Our interest now is in using these cosmological solutions to construct dynamical black holes within an otherwise expanding FRW universe, and to study their characteristics. To do this, we have to avoid any cut-and-paste method of finding the solution. Section II is an introduction to these inhomogeneous perfect fluid cosmological models and to how the field equations are integrated numerically, leading to a dynamical structure within an otherwise FRW universe. In section III the results of the numerical integration are reported, expressing the main characteristics of our model assuming different pressure profiles. We will then discuss the results in section IV. Throughout the paper we assume $8\pi G = c = 1$.
\section{General spherically symmetric solution}
Consider a general inhomogeneous spherically symmetric spacetime \cite{hellaby} filled with a perfect fluid, with a metric expressed in the comoving coordinates, $x^{\mu} = (t,\,r,\,\theta,\,\phi)$:
\begin{equation}\label{metric}
ds^2 = -e^{2\sigma} \, dt^2 + e^{\lambda} \, dr^2 + R^2 \, d\Omega^2 \;,
\end{equation}
where $\sigma = \sigma(t,r)$ and $\lambda = \lambda(t,r)$ are functions to be determined, $R = R(t,r)$ is the physical radius, and $d\Omega^2 = d\theta^2 + \sin^2 \theta \, d\phi^2$ is the metric of the unit 2-sphere. The energy momentum tensor of the perfect fluid is given by
\begin{equation}\label{mT}
T^{\mu\nu} = (\rho + p) \, u^{\mu} \, u^{\nu} + g^{\mu\nu}p \;,
\end{equation}
where $\rho = \rho(t,r)$ is the mass-energy density, $p = p(t,r)$ is the pressure, and $u^{\mu} = (e^{-\sigma}, 0, 0, 0)$ is the perfect fluid four-velocity.
\subsection{Field Equations} \label{FEq} In addition to the Einstein field equations, $G^{\mu\nu} = \kappa \,T^{\mu\nu} - g^{\mu\nu}\Lambda$, we will use the conservation equations in the form \newline \begin{equation} \frac{2 e^{2\sigma}}{(\rho + p)} \, \nabla_\mu T^{t\mu} = \dot{\lambda} + \frac{2 \dot{\rho}}{(\rho + p)} + \frac{4 \dot{R}}{R} \; = 0, \label{gtt} \end{equation} \begin{equation} \frac{e^{\lambda}}{(\rho + p)} \, \nabla_\mu T^{r\mu} = \sigma' + \frac{p'}{p + \rho} = 0, \label{grr} \end{equation} where the dot denotes the derivative with respect to $t$, and the prime the derivative with respect to $r$. The Einstein equations finally lead to the following equations: \begin{equation}\label{ev} \frac{\partial}{\partial r} \left[ R + R \dot{R}^2 e^{-2\sigma} - R R'^2 e^{-\lambda} - \frac{1}{3} \Lambda R^3 \right] = \kappa \rho R^2 R', \; \end{equation} and \begin{equation}\label{Pr} \frac{\partial}{\partial t}\left[ R + R \dot{R}^2 e^{-2\sigma}-RR'^2 e^{-\lambda}-\frac{1}{3}\Lambda R^3\right] = -\kappa p R^2\dot{ R}. \; \end{equation} The term in the brackets is related to the Misner-Sharp mass, $M$, defined by \begin{equation} \frac{2M}{R} = \dot{R}^2 e^{-2\sigma} - R'^2 e^{-\lambda} + 1 - \frac{1}{3} \Lambda R^2 \;. \label{mR} \end{equation} Eqs. (\ref{ev}) and (\ref{Pr}) may now be written as \begin{equation} \kappa \rho = \frac{2M'}{R^2 R'} ~, \qquad \kappa p = -\frac{2\dot{M}}{R^2 \dot{R}} ~. \label{pressure} \end{equation} We may write Eq. (\ref{mR}) in the form of an evolution equation of the model: \begin{equation} \label{Ev} \dot{R} = \pm e^{\sigma} \sqrt{\frac{2M}{R} + f + \frac{\Lambda R^2}{3}} ~, \end{equation} where \begin{equation} \label{f} f(t, r) = R'^2 e^{-\lambda} - 1~ \end{equation} is the curvature term, i.e. twice the total energy of a test particle at $r$ (analogous to $f(r)$ in the LTB model). Note that $R(t,r)$ cannot be directly obtained from this equation because of the unknown functions $\lambda$, $\sigma$, and $M$.
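As a concrete numerical illustration of how the evolution equation (\ref{Ev}) is used, the following sketch evaluates $\dot{R}$ for given $R$, $M$, $f$, $\sigma$ and $\Lambda$. The function name, the `collapsing` sign flag, and the error handling are our own choices, not part of the model:

```python
import math

def r_dot(R, M, f, sigma, Lambda=0.0, collapsing=False):
    """Right-hand side of the evolution equation (Ev):
    R_dot = +/- e^sigma * sqrt(2M/R + f + Lambda*R^2/3).
    The sign selects the expanding (+) or collapsing (-) phase."""
    arg = 2.0 * M / R + f + Lambda * R ** 2 / 3.0
    if arg < 0.0:
        # beyond turnaround the expression under the root becomes negative
        raise ValueError("R outside the allowed range for these M, f, Lambda")
    v = math.exp(sigma) * math.sqrt(arg)
    return -v if collapsing else v
```

For $\sigma=0$, $f=0$ and $\Lambda=0$ this reduces to the familiar $\dot{R} = \pm\sqrt{2M/R}$ of marginally bound LTB dust.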
The metric functions $g_{tt}$ and $g_{rr}$ may be obtained by integrating (\ref{gtt}) and (\ref{grr}): \begin{equation} \label{si} \sigma = c(t) - \int_{r_0}^r \frac{ p' \, dr}{(\rho + p)}\Big|_{t=const} = \sigma_0 -\int_{\rho_0}^\rho \frac{(\frac{\partial p}{\partial\rho})}{(\rho + p(\rho))} \, d\rho~\Big|_{t=const}, \end{equation} and \begin{equation} \label{lamb} \lambda = \lambda_0(r) - 2 \int_{\rho_0}^\rho \frac{d\rho}{(\rho + p(\rho))} - 4 \ln \left( \frac{R}{R_0} \right)~\Big|_{r=const}, \end{equation} where $c(t)$ and $\lambda_{0}(r)$ are arbitrary functions of integration (see \cite{hellaby} for more details). In the case of $c(t)$, requiring our coordinates to reduce to the synchronous LTB ones for $p = 0$ immediately gives $c(t)=0$. We notice also that, according to (\ref{f}) and the LTB coordinate conditions, the choice of $\lambda_{0}(r)$ is equivalent to the choice of $f(t_0,r)=f_0(r)$. One may prefer to choose $f_0(r)$ and then calculate $\lambda_0(r)$ from $e^{\lambda_0} = R_0'^2/(1 + f_0)$.\\ We therefore have 5 unknowns $p, \rho, \sigma, \lambda$, and $R$; four dynamical equations for $\dot{\rho}$, $\dot{M}$, $\dot{\lambda}$, and $\dot{R}$; an equation of state $p = p(\rho)$; and the definition of the mass $M$ (\ref{mR}). This defines a numerical algorithm to find solutions for the dynamics of our spherical structure once the initial conditions are specified. \subsection{Construction of the \L\ Model} \label{alg} To generate a \L\ model in the general case we need a numerical procedure. We first specify the arbitrary initial functions, $R_0(r)=R(t_0,r)$, $\rho_0(r)= \rho(t_0,r)$, $\lambda_0(r)= \lambda(t_0,r)$, $\sigma_0(r)= \sigma(t_0,r)$, and the equation of state $p(\rho)$.
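The radial integration in Eq. (\ref{si}) can be sketched as a simple trapezoidal quadrature over tabulated profiles. The discretisation below is our own choice and is only meant to illustrate how $\sigma$ is built up at fixed $t$:

```python
def sigma_profile(p, rho, sigma0=0.0):
    """Integrate Eq. (si) along a constant-t slice:
    sigma(r) = sigma0 - int p' dr / (rho + p),
    with p and rho tabulated on a common radial grid; p' dr is simply
    the increment of p, and 1/(rho + p) is averaged trapezoidally."""
    sigma = [sigma0]
    for i in range(1, len(p)):
        dp = p[i] - p[i - 1]
        w = 0.5 * (1.0 / (rho[i] + p[i]) + 1.0 / (rho[i - 1] + p[i - 1]))
        sigma.append(sigma[-1] - w * dp)
    return sigma
```

For constant $p$ the integral vanishes and $\sigma$ stays at $\sigma_0$; for $p=w\rho$ it approximates the closed form $\sigma = \sigma_0 - \frac{w}{1+w}\ln(\rho/\rho_0)$.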
We then integrate the dynamical equations for constant $t$ or $r$ in the following order: \begin{enumerate} \item Choose an initial time $t_0$, and specify $R(t_0, r) = R_0(r)$; $R_0'$ is then also known from the derivative of $R_0(r)$ with respect to $r$ at $t=t_0$; \item Specify $M(t_0, r) = M_0(r)$ at $t=t_0$; \item Once $M_0(r)$ and $R_0(r)$ are specified, $\rho_0(r)$ may be determined from (\ref{pressure}); \item Select an equation of state, $p = p(\rho)$; \item Choose $\lambda(t_0, r)$, or choose first $f(t_0,r)=f_0(r)$ and then calculate $\lambda_0(r)$ from $e^{\lambda_0} = R_0'^2/(1 + f_0)$; \item $\sigma(t_0, r)$ can then be obtained by integrating (\ref{si}) along $t = t_0$. \end{enumerate} We have now specified how to determine all the needed initial functions at the time $t_0$. Therefore, their $r$-derivatives are also known. The time evolution of the metric functions along the worldlines of constant $r$ may then be calculated. This is done in the following way: \begin{enumerate} \item From equation (\ref{Ev}) we obtain $\dot{R}$; \item Eq. (\ref{pressure}) then gives us $\dot{M}$: \begin{equation} \dot{M} = \frac{- \kappa p \, \dot{R} R^2}{2} ~; \end{equation} \item Combining Eq. (\ref{grr}) and the $G_{01}$ Einstein equation allows us to eliminate $\sigma'$.
Then, substituting the resulting $\dot{\lambda}$ into (\ref{gtt}), we arrive at $\dot{\rho}$: \begin{equation} \dot{\rho} = - p' \, \frac{\dot{R}}{R'} - (\rho + p) \left[ \frac{\dot{R}'}{R'} + \frac{2 \dot{R}}{R} \right] ~; \label{rhodot} \end{equation} \item From the equation of state $p = p(\rho)$ we obtain $\dot{p}$: \begin{equation} \dot{p} = \frac{dp}{d\rho} \, \dot{\rho} ~; \end{equation} \item Eq. (\ref{gtt}) may be combined with (\ref{rhodot}) to give \begin{equation} \dot{\lambda} = \frac{2}{R'} \left( \frac{ p' \dot{R}}{(\rho + p)} + \dot{R}' \right)~; \end{equation} \item Using the initial values at $t = t_0$, we are then in a position to solve the above 5 differential equations numerically to obtain $R(t,r)$, $M(t,r)$, $\rho(t,r)$, $p(t,r)$ and $\lambda(t, r)$ for every $t$ and $r$. Note that in each step the spatial derivatives $\dot{R}'$, $p'$ and $M'$ need to be determined; \item Finally, $\sigma(t,r)$ is obtained from Eq. (\ref{si}) by integrating along constant $t$. \end{enumerate} Notice that we have chosen 4 initial functions, $R_0(r)$, $\rho_0(r)$, $\lambda_0(r)$ and $\sigma_0(r)$, as well as the equation of state $p(\rho)$. \section{Equation of state and the results} We are now ready to specify the equation of state and integrate the model to study its characteristics. To allow a comparative discussion of the results, we consider two types of equation of state: a perfect fluid with a constant state function, $p=w\rho$, and a more general case with the equation of state $p=w s(r)\rho$, matching our need for a structure with pressure inside it within a pressure-less, matter dominated universe far from the structure. We may then choose the function $s(r)$ in such a way that the pressure becomes zero at infinity, i.e. at $r \gg r_0$. A suitable choice is $s(r)=e^{-\frac{r}{r_0}}$, where the order of magnitude of $r_0$ is the distance of the void from the center (the boundary between the expanding and collapsing phases).
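The time-stepping loop above can be condensed into a single explicit Euler update on a radial grid. The sketch below assumes the barotropic case $p=w\rho$ with $c(t)=0$, so that Eq. (\ref{si}) integrates to $\sigma=-\frac{w}{1+w}\ln(\rho/\rho_{\rm out})$, normalised to vanish at the outer grid point, and sets $\kappa=1$ in accordance with $8\pi G=c=1$. The grid, the finite-difference stencil and the first-order time integrator are illustrative choices, not the scheme used to produce the figures:

```python
import math

def evolve_step(dt, r, R, M, lam, rho, w, Lambda=0.0, sign=+1.0):
    """One explicit Euler step of the algorithm of Sect. II for p = w*rho.
    R, M, lambda, rho are tabulated on the radial grid r; sign selects the
    expanding (+1) or collapsing (-1) branch of Eq. (Ev)."""
    n = len(r)
    p = [w * x for x in rho]

    def ddr(y):
        # central differences in the interior, one-sided at the ends
        d = [0.0] * n
        d[0] = (y[1] - y[0]) / (r[1] - r[0])
        d[-1] = (y[-1] - y[-2]) / (r[-1] - r[-2])
        for i in range(1, n - 1):
            d[i] = (y[i + 1] - y[i - 1]) / (r[i + 1] - r[i - 1])
        return d

    Rp = ddr(R)
    # sigma from Eq. (si) with c(t) = 0, normalised at the outer boundary
    sig = [-(w / (1.0 + w)) * math.log(rho[i] / rho[-1]) for i in range(n)]
    # curvature term, Eq. (f)
    f = [Rp[i] ** 2 * math.exp(-lam[i]) - 1.0 for i in range(n)]

    # step 1: R_dot from the evolution equation (Ev)
    Rdot = [sign * math.exp(sig[i])
            * math.sqrt(2.0 * M[i] / R[i] + f[i] + Lambda * R[i] ** 2 / 3.0)
            for i in range(n)]
    Rdotp, pp = ddr(Rdot), ddr(p)

    # step 2: M_dot from Eq. (pressure), with kappa = 1
    Mdot = [-0.5 * p[i] * Rdot[i] * R[i] ** 2 for i in range(n)]
    # step 3: rho_dot from Eq. (rhodot)
    rhodot = [-pp[i] * Rdot[i] / Rp[i]
              - (rho[i] + p[i]) * (Rdotp[i] / Rp[i] + 2.0 * Rdot[i] / R[i])
              for i in range(n)]
    # step 5: lambda_dot
    lamdot = [2.0 / Rp[i] * (pp[i] * Rdot[i] / (rho[i] + p[i]) + Rdotp[i])
              for i in range(n)]

    return ([R[i] + dt * Rdot[i] for i in range(n)],
            [M[i] + dt * Mdot[i] for i in range(n)],
            [lam[i] + dt * lamdot[i] for i in range(n)],
            [rho[i] + dt * rhodot[i] for i in range(n)])
```

As a consistency check, flat dust-FRW initial data ($R=r$, $M=r^3/6$, $\lambda=0$, $\rho=1$, $w=0$) reproduce $\dot{\rho}=-3H\rho$ with $H=\dot{R}/R=1/\sqrt{3}$.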
This is a more realistic model to describe a black hole collapse within the FRW universe and to see the effect of the inside pressure while the universe outside is matter dominated with no pressure.\newline{} The model we envisage starts from a small inhomogeneity within a FRW universe. The density profile should be such that the metric outside the structure tends to FRW independently of time, while the central overdensity region undergoes a collapse after some initial expansion. At the initial time, where the density contrast of the overdensity region is still small, we may assume that the metric is almost FRW or LTB; the density contrast and the pressure do not yet play a significant role. The dynamics of the \L\ universe will in any case give us the expected structure at late times. To choose the initial conditions at the time $t_0$, we will therefore use an LTB solution with a negative curvature function. We have in fact tried both LTB and FRW initial data and found no significant difference between the final \L\ solutions. \\ Now, let us choose the two initial functions $f(r)=f(t_0,r)$ and $M(r)=M(t_0,r)$ in the following way to achieve an asymptotically FRW final solution: \begin{equation} f(r)=-\frac{1}{b}re^{-r}, \end{equation} \begin{equation} M(r)=\frac{1}{a}r^{3/2}(1+r^{3/2}). \end{equation} Far from the central overdensity region we have \begin{equation} \lim_{r\rightarrow\infty}f(r)=0, \end{equation} \begin{equation} \lim_{r\rightarrow\infty}M(r)=\frac{r^3}{a}, \end{equation} showing the asymptotically FRW behavior of the initial conditions. The corresponding LTB solution of the Einstein equations now gives us $R(r)=R_0(r)$ at the initial time $t_0$. Assuming an equation of state is now enough to numerically calculate the necessary dynamical functions of the model.
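The chosen initial profiles and their asymptotics are easily coded and verified; $a$ and $b$ are free normalisation constants here:

```python
import math

def f_init(r, b):
    """Initial curvature function f(r) = -(1/b) r e^{-r}."""
    return -(r / b) * math.exp(-r)

def M_init(r, a):
    """Initial mass function M(r) = (1/a) r^{3/2} (1 + r^{3/2})."""
    return r ** 1.5 * (1.0 + r ** 1.5) / a
```

Far from the centre, $f\to 0$ exponentially and $M\to r^3/a$, so the initial data approach a flat, matter dominated FRW universe; near the centre $f<0$, the overdensity that later collapses.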
Specifically, by looking at $\dot{R}(t,r)$ and $\rho(t,r)$ we may extract information about how the central region starts collapsing after the initial expansion and how a black hole with distinct apparent and event horizons develops while the outer region expands as a familiar FRW universe. We may also find out the differences to the pressure-less case. It will also show if and how the very weak gravity outside the collapsed structure affects the dynamics of the central structure in comparison to the familiar Schwarzschild model. The results of the numerical calculation for both equations of state are given in the following sections.\\ \subsection{The density behavior} The density profiles for both equations of state as functions of $t$ and $r$ are given in Figs.~(\ref{den1}) and (\ref{den2}). A comparison of these figures shows the effect of the pressure on the development of the central black hole. Obviously, in the case of non-vanishing pressure outside the structure the collapse is more pronounced, with a steeper density profile. The over-density region in the collapsing phase is always separated from the expanding under-density region by a void, not resolvable in these figures. We will consider the deepest point of the void as the boundary of the structure. This boundary is always near the boundary between the contracting and the expanding regions of the model structure.\newline{} \begin{figure}[h] \includegraphics[width = 8cm]{den1.eps} \caption{ \label{den1} Density evolution of our cosmological black hole for the perfect fluid with the equation of state $p=w\rho$.} \end{figure} \begin{figure}[h] \includegraphics[width = 8cm]{den2.eps} \caption{ \label{den2} Density evolution of our cosmological black hole for the perfect fluid with the equation of state $p=w\rho s(r)$.
Note the lower central density and the flatter density profile near the center of the structure.} \end{figure} \subsection{The pressure effect} Figs.~(\ref{2kol}) and (\ref{22kol}) show the behavior of the collapsing and the expanding regions for the equation of state $p = w\rho$ by depicting the corresponding \L\ Hubble parameter $\dot R/R$ versus the physical radius $R$. Figs.~(\ref{1kol}) and (\ref{11kol}) show similar data for the equation of state $p = ws(r)\rho$. Note that the function $s(r)$, defined so as to obtain a matter dominated FRW universe at large distances, has no significant effect, and the qualitative behavior of the dynamics of the physical radius is independent of it. Therefore, as far as we are interested in the qualitative features of the model, we will just use the simple equation of state with $s(r)=1$.\\ The separation between the expanding and collapsing regions, defined by $\dot{R}>0$ and $\dot{R}<0$, almost coincides with the location of the void, which we have defined as the boundary of the structure. Now, from the figures we realize that the effect of the pressure in the different regions of the model, and its comparison to the homogeneous FRW model, is an intriguing one. As we know already from the Friedmann equations in FRW models, the pressure adds to the density and has an attractive effect, slowing down the expansion and leading to a more negative acceleration ($\frac{\ddot{a}}{a}=-\frac{1}{6}(\rho+3 p)$). This is evident from the figures at distances far from the center, where our model tends to an FRW one. Within the structure, however, where we have a contracting overdensity region, the behavior is counter-intuitive. Except for the case of vanishing pressure, in all other cases the pressure at some point begins to act classically like a repulsive force, opposing the collapse of the structure. To see this more clearly, we have also depicted the acceleration in Fig.~(\ref{accelaration}).
As we approach distances near the center, the negative acceleration of the FRW limit, still present inside the void, gradually increases to positive values, meaning that somewhere within the structure the contraction slows down due to the pressure, as in a classical fluid. Therefore, the pressure begins somewhere within the structure to act like a repulsive force, in contrast to the outer regions where its attractive nature dominates. Note that the central black hole and its horizon have a much smaller radius than the region of the repulsive pressure effect we are discussing. \begin{figure} \centering \mbox{ \subfigure[The overall scheme: $\dot{R}>0$ and $\dot{R}<0$ show the expanding and collapsing regions, respectively.\label{2kol}]{\includegraphics[width=0.5\linewidth]{rdot1.eps}} \quad \subfigure[A magnified view of the regions near the center.\label{1kol}]{\includegraphics[width=0.5\linewidth]{rdot12.eps}} } \caption{The behavior of the Hubble parameter $\dot{R}/R$ in the case of $p=w\rho$. Evidently the pressure slows down the collapse velocity near the center of the structure.} \label{main figure label} \end{figure} \begin{figure} \centering \mbox{ \subfigure[The overall scheme: $\dot{R}>0$ and $\dot{R}<0$ show the expanding and collapsing regions, respectively. \label{22kol}]{\includegraphics[width=0.5\linewidth]{rdot2.eps}}\quad \subfigure[A magnified view of the regions near the center.\label{11kol}]{\includegraphics[width=0.5\linewidth]{rdot22.eps}} } \caption{The behavior of the Hubble parameter $\dot{R}/R$ in the case of $p=w\rho s(r)$. The features are qualitatively as in Fig.~(\ref{main figure label}).} \label{main figure label1} \end{figure} \begin{figure} \centering \mbox{ \subfigure[The overall scheme: $\dot{R}>0$ and $\dot{R}<0$ show the expanding and collapsing regions, respectively.
\label{acc}]{\includegraphics[width=0.5\linewidth]{acc.eps}}\quad \subfigure[The overall scheme of the acceleration.\label{acc1}]{\includegraphics[width=0.5\linewidth]{acc1.eps}} } \caption{The behavior of the Hubble parameter $\dot{R}/R$ and the acceleration $\ddot{R}/R$ in the case of $p=w\rho$. } \label{accelaration} \end{figure} \subsection{The Apparent and Event Horizon} The boundary of a dynamical black hole, where the area law and the black hole temperature are defined, is a non-trivial concept (see for example \cite{man} and \cite{manradiation}). Our model is again a good example to study the behavior of both the apparent and the event horizon of a dynamical structure within an expanding universe. It is easily seen that the apparent horizon for our cosmological black hole is located at $R=2M$ \cite{mangeneral}. This apparent horizon is calculated numerically in the $(t, r)$ coordinates. It is always space-like, tending to become light-like at late times. This is best seen by comparing the slope of the apparent horizon with that of the light cone at every point of it. This is in contrast to the Schwarzschild black hole horizon, which is always light-like. At late times, however, we expect the apparent horizon to become approximately light-like and to approach the event horizon. This is reflected in Figs.~(\ref{horizon1}) and (\ref{horizon2}). It is evident that $ \frac{dt}{dr}|_{AH}< \frac{dt}{dr}|_{null}$ at all times on the apparent horizon, the difference tending to zero at late times. Therefore, the apparent horizon is always a space-like \emph{dynamical horizon}, tending to a \emph{slowly evolving horizon} at late times \cite{ashtekar02, man}. Note that this qualitative result is independent of the equation of state. \begin{figure}[h] \includegraphics[width = 7.5cm]{horizon1.eps} \caption{ \label{horizon1} The $p=w\rho$ case: $ \frac{dt}{dr}|_{AH}< \frac{dt}{dr}|_{null}$ on the apparent horizon.
Therefore, the apparent horizon is always a space-like dynamical horizon, tending to a slowly evolving horizon at late times.} \end{figure} \begin{figure}[h] \includegraphics[width = 8cm]{horizon2.eps} \caption{ \label{horizon2} The $p=ws(r)\rho$ case: $ \frac{dt}{dr}|_{AH}< \frac{dt}{dr}|_{null}$ on the apparent horizon. Qualitatively, there is no difference to Fig.~(\ref{horizon1}). } \end{figure} We now show how the dynamical horizon of our cosmological black hole becomes a \emph{slowly evolving horizon} at late times. Let us first define the evolution parameter $c$ such that the tangent vector to the dynamical horizon, $V$, is given by \begin{equation} V^\mu=\ell^\mu-c\, n^\mu, \end{equation} where $\ell^\mu$ and $n^\mu$ are the two null vectors normal to a space-like two-surface $S$ in the $(t,r)$ plane (see \cite{ashtekar02}). We expect $c$ to go to zero at late times in order for our dynamical horizon to become a \emph{slowly evolving horizon}. In the case of our \L\ model, $c$ is calculated to be \begin{eqnarray} c=2\,\frac{M'+w M'}{M'-w M'-R'}\Big|_{AH}. \label{black hole} \end{eqnarray} The results of the numerical calculation for the different equations of state and state functions are given in Figs.~(\ref{slowly1}) and (\ref{slowly2}). The decrease of the function $c$ in the course of time, independent of the equation of state, is evident. We may then conclude that the dynamical horizon of the cosmological black hole tends to a slowly evolving horizon. \begin{figure}[h] \includegraphics[width = 8cm]{slowly1.eps} \caption{The $p=w\rho$ case: the more pressure, the sooner the dynamical horizon becomes a slowly evolving horizon. } \label{slowly1} \end{figure} \begin{figure}[h] \includegraphics[width = 8cm]{slowly2.eps} \caption{The $p=w\rho s(r)$ case: qualitatively, the same behavior as in Fig.~(\ref{slowly1}).
} \label{slowly2} \end{figure} \subsection{Mass and matter flux} Due to the expanding background, we expect the matter flux into the dynamical black hole to decrease and the dynamical horizon to become a slowly evolving horizon in the course of time \cite{mangeneral}. We know already that there is no unique concept of mass in general relativity corresponding to the Newtonian one. The question of what general relativity tells us about the mass of a cosmological structure in a dynamical setting was discussed recently \cite{manmass}. It was shown \cite{razbin} that the Misner-Sharp quasi-local mass, $M$, is very close to the Newtonian mass. Let us then take the Misner-Sharp mass for this black hole and calculate the corresponding matter flux into it. In the case of the \L\ model, the matter flux is given by \begin{eqnarray} \frac{dM(r,t)}{dt}\Big|_{AH}=\frac{\partial{M(r,t)}}{ \partial{t}}\Big|_{AH} +\frac{\partial{M(r,t)}}{\partial{r}}\frac{\partial{r}}{\partial{t}}\Big|_{AH}=\dot{M}\Big|_{AH}+M'\frac{\partial{r}}{\partial{t}}\Big|_{AH}. \end{eqnarray} The results of the numerical calculation are depicted in Figs.~(\ref{mass}) and (\ref{flux2}). Note how the pressure decreases the rate of matter flux into the black hole. \begin{figure}[h] \includegraphics[width = 8cm]{flux1.eps} \caption{The $p=w\rho$ case: the rate of matter flux into the black hole decreases with the pressure. } \label{mass} \end{figure} \begin{figure}[h] \includegraphics[width = 8cm]{flux2.eps} \caption{The $p=ws(r)\rho$ case: qualitatively the same behavior as in Fig.~(\ref{mass}). } \label{flux2} \end{figure} \section{DISCUSSION} We have studied the evolution of a structure made of a perfect fluid with non-vanishing pressure, as an exact solution of the Einstein equations, within an otherwise expanding FRW universe. The structure boundary is separated by a void from the expanding part of the model, which is already very much like a FRW universe near the void.
We have noticed a counter-intuitive pressure effect inside the structure, where the existence of the pressure slows down the collapse like a classical fluid, in contrast to distances far from the structure. The collapsed region develops into a dynamical black hole with a space-like apparent horizon, in contrast to the Schwarzschild black hole. This apparent horizon tends to a slowly evolving horizon, becoming light-like at late times, with a decreasing matter flux into the black hole. We therefore have to conclude that the mere existence of cosmological matter, even dust, may have a significant effect on the central black hole, differentiating it from a Schwarzschild one irrespective of how small the density outside the structure is. Hence, despite the very weak gravity, we may not be allowed to use the Newtonian approximation for non-local or quasi-local quantities such as the horizon and the mass.
\section{Introduction\label{sec:intro}} The intra-cluster medium is magnetised. Direct evidence for cluster-wide magnetic fields is provided by the large-scale diffuse radio sources of synchrotron origin. There is growing evidence that these fields are of the order of $\sim \mu$G and are ordered on kiloparsec scales \citep[see e.g. recent reviews][]{2002ARA&A..40..319C, 2002RvMP...74..775W, 2004astro.ph.10182G}. One method to investigate magnetic field structure and strength is the detection of the Faraday rotation effect. This effect is observed whenever linearly polarised radio emission passes through a magnetised medium. A linearly polarised wave can be described by two circularly polarised waves. Their motion along magnetic field lines in a plasma introduces a phase difference between the two waves, resulting in a rotation of the plane of polarisation. If the Faraday active medium is external to the source of the polarised emission, one expects the change in polarisation angle to be proportional to the squared wavelength. The proportionality factor is called the rotation measure ($RM$). This quantity can be evaluated in terms of the line-of-sight integral over the product of the electron density and the magnetic field component along the line of sight. Observed $RM$ maps of extended extragalactic radio sources are especially valuable for studying intra-cluster magnetic fields. Simple analytical approaches based on the patchy structure of the $RM$ maps to measure the characteristic length scale of the magnetic fields, which is necessary to translate $RM$ values into field strengths, result in magnetic field strengths of $\sim$ 5 $\mu$G up to $\sim$ 30 $\mu$G for cooling flow clusters, e.g. Cygnus A \citep{1987ApJ...316..611D}, Hydra A \citep{1993ApJ...416..554T}, A1795 \citep{1993AJ....105..778G}, 3C295 \citep{2001MNRAS.324..842A}. The same arguments have led to estimates of a cluster magnetic field strength of 2...8 $\mu$G for non-cooling flow clusters, e.g.
Coma \citep{1995A&A...302..680F}, A119 \citep{1999A&A...344..472F}, 3C129 \citep{2001MNRAS.326....2T}, A2634 \& A400 \citep{2002ApJ...567..202E}. Observations of a sample of polarised radio point sources seen through a cluster atmosphere were presented by \citet{1991ApJ...379...80K}. They detected an $RM$ broadening towards the cluster centre, implying a magnetic field strength of 1 $\mu$G. More recently, \citet{2001ApJ...547L.111C} analysed a statistical sample of 16 cluster sources against a control sample. They also detect a broadening of the $RM$ distribution for sources towards the cluster centre. They find a cluster magnetic field strength of \hbox{4...8 $\mu$G}. These high magnetic field values derived using $RM$ methods seem to be in contrast to the lower values of 0.1...0.3 $\mu$G estimated from inverse Compton (IC) measurements, which are possible for clusters with observed diffuse radio haloes \citep{1987ApJ...320..139R, 1994ApJ...429..554R, 1999ApJ...511L..21R, 1998PASJ...50..389H, 2000ApJ...534L...7F, 2001ApJ...552L..97F, 2004ApJ...602L..73F, 1998A&A...330...90E}. Cosmic microwave background photons are expected to inverse Compton scatter off the relativistic electrons, thereby emitting non-thermal X-ray emission. Upper limits on this non-thermal X-ray emission, together with radio observations of the synchrotron radiation emitted by the relativistic electron population, can then be used to set lower limits on the average magnetic field strength. There is an order of magnitude difference between the field strengths derived from these methods. Several arguments can be given to reconcile the different results. First, except for a very small number of clusters (including the Coma cluster), at best one of the methods can be applied, so that the difference could be a difference between clusters.
Second, the Faraday rotation method measures a volume-averaged magnetic field weighted by the thermal electron density, whereas the inverse Compton results give volume-averaged field strengths weighted with the relativistic electron distribution. Since the relativistic electron population is easily diminished in regions with strong magnetic fields due to the enhanced synchrotron cooling, the inverse Compton method is expected to provide smaller estimates. Thus, a medium that is inhomogeneously magnetised on scales small compared to the observational spatial resolution might possibly resolve the contradiction \citep{1999A&A...344..409E}. Furthermore, since the observed IC flux could originate from other sources, it is an upper limit. Hence, the IC measurements give only lower limits on the magnetic field strength. For a more detailed discussion, we refer to \citet{2002ARA&A..40..319C, 2004astro.ph.10182G}. \citet{2003A&A...401..835E} proposed a method to determine magnetic power spectra by Fourier transforming $RM$ maps. Based on these considerations, \citet{2003A&A...412..373V} applied this method and determined the magnetic power spectrum of three clusters (Abell 400, Abell 2634 and Hydra~A) from $RM$ maps of radio sources located in these clusters. Furthermore, they determined field strengths of $\sim 12\,\mu$G for the cooling flow cluster Hydra~A, and $3\,\mu$G and $6\,\mu$G for the non-cooling flow clusters Abell 2634 and Abell 400, respectively. Their analysis revealed power spectra with spectral indices of $-2.0\ldots-1.6$. However, it was realised that with the proposed analysis it is difficult to reliably determine differential quantities such as spectral indices, owing to the complicated shapes of the emission regions used, which lead to a redistribution of magnetic power within the spectra. Recently, \citet{2004A&A...424..429M} proposed a numerical method to determine the magnetic power spectrum in clusters.
They infer the magnetic field strength and structure by comparing simulations of $RM$ maps caused by multi-scale magnetic fields with the observed polarisation properties of extended cluster radio sources such as radio galaxies and haloes. They argue that field strengths derived in the literature using analytical expressions have been overestimated by a factor of $\sim$ 2. In order to determine a power spectrum from observational data, maximum likelihood estimators are widely used in astronomy. These methods and algorithms have been greatly improved, especially by the Cosmic Microwave Background (CMB) community, which tackles the problem of determining the power spectrum from large CMB maps. \citet{1998ApJ...495..564K} proposed such an estimator to determine the power spectrum of a primordial magnetic field from the distribution of $RM$ measurements of distant radio galaxies. Based on the initial idea of \citet{1998ApJ...495..564K}, the methods developed by the CMB community \citep[especially][]{1998PhRvD..57.2117B}, and our understanding of the magnetic power spectrum of cluster gas \citep{2003A&A...401..835E}, we derive here a Bayesian maximum likelihood approach to calculate the magnetic power spectrum of cluster gas given observed Faraday rotation maps of extended extragalactic radio sources. The power spectrum also enables us to determine characteristic field length scales and strengths. After testing our method on artificially generated $RM$ maps with known power spectra, we apply our analysis to a Faraday rotation map of Hydra~A. The data were kindly provided by Greg Taylor. In addition, this method allows us to determine the uncertainties of our measurement and, thus, to give errors on the calculated quantities. Based on these calculations, we investigate the nature of the turbulence of the magnetised cluster gas. This paper is structured as follows.
In Sect.~\ref{sec:theory}, a method employing a maximum likelihood estimator as suggested by \citet{1998PhRvD..57.2117B} to determine the magnetic power spectrum from $RM$ maps is introduced. Special requirements for the analysis of $RM$ maps with such a method are discussed. In Sect.~\ref{sec:test}, we apply our maximum likelihood estimator to generated $RM$ maps with known power spectra to test our algorithm. In Sect.~\ref{sec:app}, the application of our method to data of Hydra~A is described. In Sect.~\ref{sec:discussion}, the derived power spectra are presented and the results are discussed. In Sect.~\ref{sec:conclusion}, conclusions are drawn. We assume a Hubble constant of H$_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m} = 0.3$ and $\Omega_{\Lambda} = 0.7$ in a flat universe. All equations follow the notation of \citet{2003A&A...401..835E}. \section{Maximum likelihood analysis\label{sec:theory}} \subsection{The covariance matrix $C_{RM}$\label{sec:crm}} One of the most commonly used methods of Bayesian statistics is the maximum likelihood method. The likelihood function for a model characterised by $p$ parameters $a_p$ is equivalent to the probability of the data $\vec{\Delta}$ given a particular set of $a_p$ and can be expressed in the case of (near) Gaussian statistics of $\vec{\Delta}$ as \begin{equation} \label{eq:likely} \mathcal{L}_{\vec{\Delta}}(a_p) = \frac{1}{(2\pi)^{n/2}|C|^{1/2}}\cdot \exp\left(-\frac{1}{2}\vec{\Delta}^{T}C ^{-1}\vec{\Delta}\right), \end{equation} where $|C|$ indicates the determinant of a matrix, $\Delta_i = RM_i$ are the actual observed data, $n$ indicates the number of observationally independent points and $C = C(a_p)$ is the covariance matrix. 
This covariance matrix can be defined as \begin{equation} C_{ij}(a_p) = \langle \Delta_i^{obs}\Delta_j^{obs} \rangle = \langle RM_i^{obs}\,RM_j^{obs} \rangle, \end{equation} where the brackets $\langle \rangle$ denote the expectation value and, thus, $C_{ij}(a_p)$ describes our expectation based on the proposed model characterised by a particular set of $a_p$s. Now, the likelihood function $\mathcal{L}_{\vec{\Delta}}(a_p)$ has to be maximised with respect to the parameters $a_p$. Although the magnetic fields might be non-Gaussian, the $RM$ should be close to Gaussian due to the central limit theorem. Observationally, $RM$ distributions are known to be close to Gaussian \citep[e.g.][]{1993ApJ...416..554T, 1999A&A...344..472F, 1999A&A...341...29F, 2001MNRAS.326....2T}. Ideally, the covariance matrix is the sum of a signal and a noise matrix term, which results if the errors are uncorrelated with the true values. Writing $RM^{obs} = RM^{true} + \delta RM$ results in \begin{eqnarray} C_{ij}(a_p) & = & \langle RM_i^{true} RM_j^{true} \rangle + \langle \delta RM_i \, \delta RM_j \rangle \nonumber \\ & = & C_{RM}(\vec{x}_{\perp i},\, \vec{x}_{\perp j}) + \langle \delta RM_i \, \delta RM_j \rangle, \end{eqnarray} where $\vec{x}_{\perp i}$ is the displacement of point $i$ from the $z$-axis and $\langle \delta RM_i \, \delta RM_j \rangle$ indicates the expectation for the uncertainty in our measurement. Unfortunately, while the noise term is studied extremely carefully in the analysis of CMB power spectrum measurements, a comparably detailed treatment for $RM$ data does not exist and is beyond the scope of this paper. Thus, we will neglect this term. However, \citet{1995MNRAS.273..877J} discuss uncertainties involved in the data reduction process, which could be used to obtain a model for $\langle \delta RM_i\, \delta RM_j \rangle$.
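To make Eq.~(\ref{eq:likely}) concrete, the log-likelihood can be evaluated numerically via the log-determinant and a linear solve instead of an explicit matrix inverse. The following Python sketch is purely illustrative and not part of the original analysis code:

```python
import numpy as np

def log_likelihood(delta, C):
    """Gaussian log-likelihood ln L of the RM data vector 'delta'
    given a covariance matrix C, following Eq. (1) of the text."""
    n = delta.size
    sign, logdet = np.linalg.slogdet(C)           # ln |C| without overflow
    chi2 = delta @ np.linalg.solve(C, delta)      # Delta^T C^{-1} Delta via a solve
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + chi2)
```

Using a solve rather than `np.linalg.inv` is numerically preferable for the large, nearly singular covariance matrices that arise from strongly correlated $RM$ pixels.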
Since we are interested in the magnetic power spectrum, we have to find an expression for the covariance matrix $C_{ij}(a_p) = C_{RM}(\vec{x}_{\perp i},\,\vec{x}_{\perp j})$ which can be identified as the $RM$ autocorrelation $\langle RM(\vec{x}_{\perp i})\,RM(\vec{x}_{\perp j}) \rangle$. This then has to be related to the magnetic power spectrum. The observable in any Faraday experiment is the rotation measure \textit{RM}. For a line of sight parallel to the $z$-axis and displaced by $\vec{x}_{\perp}$ from it, the $RM$ arising from polarised emission passing from the source $z_s(\vec{x}_{\perp})$ through a magnetised medium to the observer located at infinity is expressed by \begin{equation} RM(\vec{x}_{\perp})= a_0 \int_{z_s(\vec{x}_{\perp})}^{\infty} \!\!\! {\rm d} z\, n_{{\rm e}}(\vec{x}) \, B_z (\vec{x}), \end{equation} where $a_0 = e^3/(2\pi m_e^2c^4)$, $\vec{x} = (\vec{x}_\perp, z)$, $n_e(\vec{x})$ is the electron density and $B_z(\vec{x})$ is the magnetic field component along the line of sight. In the following, we will assume that the magnetic fields in galaxy clusters are isotropically distributed throughout the Faraday screen. If one samples such a field distribution over a large enough volume, it can be treated as statistically homogeneous and statistically isotropic. Therefore, any statistical average over a field quantity will not be influenced by the geometry or the exact location of the volume sampled. Following \citet{2003A&A...401..835E}, we can define the elements of the $RM$ covariance matrix using the $RM$ autocorrelation function $C_{RM}(\vec{x}_{\perp i}, \vec{x}_{\perp j}) = \left< RM(\vec{x}_{\perp i})RM(\vec{x}_{\perp j}) \right>$ and introduce a window function $f(\vec{x})$ which describes the properties of the sampling volume \begin{equation} \label{eq:correl} C_{RM}(\vec{x}_{\perp}, \vec{x}'_{\perp}) = \tilde{a_0}^2 \!\!\! \int_{z_s} ^\infty \!\!\!\!\!\! {\rm d} z \int_{z'_s} ^ \infty \!\!\!\!\!\!
{\rm d} z' f(\vec{x})f(\vec{x}')\left< B_z(\vec{x}_{\perp}, z) B_z(\vec{x}'_{\perp}, z') \right>, \end{equation} where $\tilde{a_0} = a_0n_{e0}$, the central electron density is $n_{e0}$ and the window function is defined by \begin{equation} \label{eq:window} f(\vec{x}) = \mathbf{1}_{\{\vec{x}_{\perp} \in \Omega\} }\,\mathbf{1}_{\{z \geq z_{\rm s}(\vec{x}_{\perp})\}} \, \,g(\vec{x}) \,n_e(\vec{x})/n_{e0}, \end{equation} where $\mathbf{1}_{\{condition\}}$ is equal to unity if the condition is true and zero if not and $\Omega$ defines the region for which $RM$s were actually measured. The electron density distribution $n_e(\vec{x})$ is chosen with respect to a reference point $\vec{x}_{ref}$ (usually the cluster centre) such that $n_{e0} = n_e(\vec{x}_{ref})$, e.g. the central density, and $B_0 = \langle \vec{B}^2 (\vec{x}_{ref}) \rangle ^{1/2}$. The dimensionless average magnetic field profile $g(\vec{x}) = \langle \vec{B} ^2 (\vec{x}) \rangle ^{1 / 2} / B_{0}$ is assumed to scale with the density profile such that $g(\vec{x}) = (n_e(\vec{x})/n_{e0})^{\alpha_{B}}$. Setting $\vec{x}' = \vec{x} + \vec{r}$ and assuming that the correlation length of the magnetic field is much smaller than the scale of characteristic changes in the electron density distribution, we can separate the two integrals in Eq.~(\ref{eq:correl}). Furthermore, we can introduce the magnetic field autocorrelation tensor \hbox{$M_{ij} = \langle B_i(\vec{x}) \cdot B_j(\vec{x}+\vec{r}) \rangle$} \citep[see e.g.][]{ 1999PhRvL..83.2957S, 2003A&A...401..835E}. Taking this into account, the $RM$ autocorrelation function can be described by \begin{equation} \label{eq:sep_int} C_{RM}(\vec{x}_{\perp}, \vec{x}_{\perp} + \vec{r}_{\perp}) = \tilde{a_0}^2 \int_{z_s} ^\infty \!\!\!\! {\rm d} z \, f(\vec{x})f(\vec{x}+\vec{r}) \int_{(z'_s - z) \to -\infty} ^ \infty \!\!\!\!\!\!\!\!\!\!\!\!\!
{\rm d} r_z M_{zz}(\vec{r}). \end{equation} Here, the approximation $(z'_s - z) \to -\infty$ is valid for Faraday screens which are much thicker than the magnetic autocorrelation length. This will turn out to be the case in the application at hand. The Fourier transformed $zz$-component of the autocorrelation tensor $M_{zz}(\vec{k})$ can be expressed by the Fourier transformed scalar magnetic autocorrelation function $w(k) = \sum_i M_{ii}(k)$ and a $k$ dependent term (see Eq.~(31) in \citealt{2003A&A...401..835E}) leading to \begin{equation} \label{eq:mzz_r} M_{zz}(\vec{r}) = \frac{1}{(2\pi)^3} \int ^\infty _{-\infty} \!\!\! {\rm d} ^3k \,\frac{w(k)}{2}\,\left( 1 - \frac{k_z^2}{k^2} \right) \, {\rm e}^{-i\vec{k} \cdot \vec{r}}. \end{equation} Furthermore, the one dimensional magnetic energy power spectrum $\varepsilon_B(k)$ can be expressed in terms of the magnetic autocorrelation function $w(k)$ such that \begin{equation} \label{eq:wk_ebk} \varepsilon_B(k)\, {\rm d} k = \frac{k^2w(k)}{2\,(2\pi)^3}\, {\rm d} k. \end{equation} As stated in \citet{2003A&A...401..835E}, the $k_z = 0$ plane of $M_{zz}(\vec{k})$ is all that is required to reconstruct the magnetic autocorrelation function $w(k)$. Thus, inserting Eq.~(\ref{eq:mzz_r}) into Eq.~(\ref{eq:sep_int}) and using Eq.~(\ref{eq:wk_ebk}) leads to \begin{eqnarray} C_{RM}(\vec{x}_{\perp}, \vec{x}_{\perp} + \vec{r}_{\perp}) & = & 4\pi^2 \tilde{a_0}^2 \int_{z_s} ^\infty \!\!\!\!\! {\rm d} z\, f(\vec{x})f(\vec{x}+\vec{r}) \times \nonumber \\ & & \int_{0} ^ \infty \!\!\!\!\! {\rm d} k\, \varepsilon_B(k) \frac{J_0(kr_{\perp})}{k}, \end{eqnarray} where $J_0(kr_{\perp})$ is the Bessel function of the first kind of order zero. This equation gives an expression for the $RM$ autocorrelation function in terms of the magnetic power spectrum of the Faraday-producing medium.
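For numerical work, the $k$-integral above is conveniently evaluated over a finite band $[k_1, k_2]$, which is exactly the building block needed once the spectrum is binned. A minimal sketch using SciPy (the helper name is ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def bin_kernel(r_perp, k_lo, k_hi):
    """Band-limited k-integral of J_0(k r_perp)/k, the per-bin kernel
    appearing in the binned RM covariance (illustrative helper)."""
    val, _ = quad(lambda k: j0(k * r_perp) / k, k_lo, k_hi)
    return val
```

For $r_\perp \to 0$ the kernel approaches $\ln(k_2/k_1)$, since $J_0 \to 1$, which provides a convenient sanity check.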
Since the magnetic power spectrum is the function of interest, we parametrise $\varepsilon_B(k) = \sum_p \varepsilon_{B_p} \mathbf{1}_{\{ k \, \in \, [k_p, k_{p+1}] \}}$, where $\varepsilon_{B_p}$ is constant in the interval $[k_p, k_{p+1}]$, leading to \begin{equation} \label{eq:cfinal} C_{RM}(\varepsilon_{B_p}) = 4\pi^2 \tilde{a_0}^2 \int_{z_s} ^\infty \!\!\!\!\!\!dz\, f(\vec{x})f(\vec{x}+\vec{r}) \sum_p\! \varepsilon_{B_p} \int_{k_p} ^ {k_{p+1}} \!\!\!\!\!\!\!\! dk\, \frac{J_0(kr_{\perp})}{k}, \end{equation} where the $\varepsilon_{B_p}$ are to be understood as the model parameters $a_p$ for which the likelihood function $\mathcal{L}_{\vec{\Delta}}(a_p)$ has to be maximised given the Faraday data $\vec{\Delta}$. \subsection{Evaluation of the likelihood function\label{sec:likely}} In order to maximise the likelihood function, \citet{1998PhRvD..57.2117B} approximate the likelihood function as a Gaussian of the parameters in regions close to the maximum $\vec{a} = \{ a \}_{{\rm max}}$, where $\{ a \}_{{\rm max}}$ is the set of model parameters which maximise the likelihood function. In this case, one can perform a Taylor expansion of $\ln\mathcal{L}_{\vec{\Delta}}(\vec{a}+\delta \vec{a})$ about $\vec{a}$ and truncate it at second order in $\delta a_p$ without making a large error.
\begin{eqnarray} \ln \mathcal{L}_{\vec{\Delta}}(\vec{a}+\delta \vec{a}) & = & \ln \mathcal{L}_{\vec{\Delta}}(\vec{a}) + \sum_p \frac{\partial \ln\mathcal{L}_{\vec{\Delta}}(\vec{a})}{\partial a_{p}} \delta a_p + \nonumber \\ & & \frac{1}{2} \sum_{pp'} \frac{\partial^2 \ln\mathcal{L}_{\vec{\Delta}}(\vec{a})} {\partial a_p\, \partial a_{p'}} \delta a_p \, \delta a_{p'}. \end{eqnarray} With this approximation, one can directly solve for the $\delta a_p$ that maximise the likelihood function $\mathcal{L}$ \begin{equation} \label{eq:delta} \delta a_p = - \sum_{p'} \left( \frac{\partial^2 \ln\mathcal{L}_{\vec{\Delta}}(\vec{a})} {\partial a_p\, \partial a_{p'}} \right)^{-1}\, \frac{\partial \ln\mathcal{L}_{\vec{\Delta}}(\vec{a})}{\partial a_{p'}}, \end{equation} where the first derivative is given by \begin{equation} \label{eq:first} \frac{\partial \ln\mathcal{L}_{\vec{\Delta}}(\vec{a})}{\partial a_{p'}} = \frac{1}{2} \mathrm{Tr} \left[ \left( \vec{\Delta} \vec{\Delta}^T - C \right) \left( C^{-1} \frac{\partial C}{\partial a_{p'}} C^{-1} \right) \right] \end{equation} and the second derivative is expressed by \begin{eqnarray} \label{eq:second} \mathcal{F}^{(a)} _{pp'} & = & - \left( \frac{\partial^2 \ln\mathcal{L}_{\vec{\Delta}}(\vec{a})} {\partial a_p\, \partial a_{p'}} \right) = \mathrm{Tr} \left[ \left( \vec{\Delta} \vec{\Delta}^T - C \right) \left( C^{-1} \frac{\partial C}{\partial a_{p}} C^{-1}\frac{\partial C}{\partial a_{p'}} C^{-1} \right. \right. \nonumber \\ & & \left. \left. - \frac{1}{2} C^{-1}\frac{\partial^2 C}{\partial a_p \partial a_{p'}} C^{-1} \right) \right] + \frac{1}{2} \mathrm{Tr} \left( C^{-1} \frac{\partial C}{\partial a_{p}} C^{-1} \frac{\partial C}{\partial a_{p'}} \right), \end{eqnarray} where Tr indicates the trace of a matrix. The second derivative is called the curvature matrix. If the covariance matrix is linear in the parameters $a_p$ then the second derivatives of the covariance matrix $\partial^2 C/(\partial a_p \partial a_{p'})$ vanish.
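As an illustration, Eq.~(\ref{eq:first}) translates directly into matrix operations. The sketch below uses an explicit inverse for clarity rather than speed and is not part of the original analysis code:

```python
import numpy as np

def dlnL_dap(delta, C, dC_dap):
    """First derivative of ln L with respect to one parameter a_p,
    0.5 * Tr[(Delta Delta^T - C)(C^{-1} dC/da_p C^{-1})] (Eq. 15)."""
    Cinv = np.linalg.inv(C)
    D = np.outer(delta, delta) - C
    return 0.5 * np.trace(D @ Cinv @ dC_dap @ Cinv)
```

A quick analytic check: for $C = a\,\mathbf{1}$ and $\vec{\Delta} = (1, 0)$, differentiating $\ln\mathcal{L}$ by hand at $a = 1$ gives $-1/2$, which the trace expression reproduces.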
Note that for the calculation of the $\delta a_p$, the inverse curvature matrix $(\mathcal{F}^{(a)} _{pp'})^{-1}$ has to be calculated. The diagonal elements $(\mathcal{F}^{(a)})^{-1} _{pp}$ of the inverse curvature matrix can be regarded as the variances $\sigma^2 _{a_p}$ of the parameters $a_p$. A suitable iterative algorithm to determine the power spectrum is to start with an initial guess of a parameter set $a_p$. Using this initial guess, the $\delta a_p$s have to be calculated using Eq.~(\ref{eq:delta}). If the $\delta a_p$s are not sufficiently close to zero, a new parameter set $a' _p = a_p + \delta a_p$ is used, the $\delta a' _p$ are calculated again, and so on. This process can be stopped when $\delta a_p / \sigma_{a_p} \le \epsilon$, where $\epsilon$ describes the required accuracy. \subsection{Binning and rebinning} In our parametrisation of the model given by Eq.~(\ref{eq:cfinal}) the bin size, i.e.~the size of the interval $[k_p, k_{p+1}]$, is important. Since we are measuring the power spectrum, we chose equal bins on a logarithmic scale as the initial binning. However, if the bins are too small then the cross correlation between two bins could be very high and the two bins cannot be regarded as independent anymore. Furthermore, the errors might become very large, possibly one order of magnitude larger than the actual values. In order to avoid such situations, it is preferable to choose either fewer bins or to rebin by adding two bins together. Note that this oversampling is not a real problem, since the model parameter covariance matrix takes care of the redundancy between data points. However, for computational efficiency and for a better display of the data, a smaller set of mostly independent data points is preferable.
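The iteration just described can be sketched schematically as follows; the functions \texttt{grad} and \texttt{fisher} stand in for Eqs.~(\ref{eq:first}) and (\ref{eq:second}) and are supplied by the caller, so this is an illustration of the control flow only:

```python
import numpy as np

def iterate_ml(a0, grad, fisher, eps=0.01, max_iter=100):
    """Quadratic-estimator iteration: a <- a + F^{-1} dlnL/da.
    Stops once every |delta a_p| <= eps * sigma_p, with
    sigma_p^2 = (F^{-1})_pp, mirroring the convergence criterion in the text."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        Finv = np.linalg.inv(fisher(a))
        da = Finv @ grad(a)
        sigma = np.sqrt(np.diag(Finv))
        a = a + da
        if np.all(np.abs(da) <= eps * sigma):
            break
    return a, sigma
```

For a toy quadratic log-likelihood $\ln\mathcal{L} = -\frac{1}{2}|\vec{a}-\vec{a}_0|^2$ the loop converges in one step, as the Gaussian approximation is then exact.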
To find a criterion for rebinning, an expression for the cross correlation of two parameters $a_p$ and $a_{p'}$ can be defined by \begin{equation} \label{eq:cross} \delta_{pp'} = \frac{\langle \sigma_p \sigma_{p'} \rangle}{\langle \sigma_p\rangle \, \langle \sigma_{p'} \rangle} = \frac{\mathcal{F}^{-1} _{pp'}}{\sqrt{\mathcal{F}^{-1} _{pp} \mathcal{F}^{-1} _{p'p'}}}, \end{equation} where the full range, $-1 \le \delta_{pp'} \le 1$, is possible but usually the correlation will be negative, indicating anti-correlation. Our criterion for rebinning is to require that if the absolute value of the cross-correlation $| \delta_{pp'} |$ is larger than $\delta_{pp'} ^{\rm max}$ for two bins $p$ and $p'$, then these two bins are added together in such a way that the magnetic energy $\sum_p \varepsilon_{B_p} \Delta k_{p}$ is conserved. After rebinning, the algorithm starts to iterate again and finds the maximum with the new binning. This is repeated as long as the cross-correlation of any two bins is larger than required. \subsection{The algorithm} As a first guess for a set of model parameters $\varepsilon_{B_p}$, we used the results from a Fourier analysis of the original $RM$ map employing the algorithms as described in \citet{2003A&A...412..373V}. However, we also employed a simple power law $\varepsilon_{B_p} \propto k_p^{\alpha}$, where $\alpha$ is the spectral index, as a first guess. The results and the shape of the power spectrum did not change. If not stated otherwise, the iteration is stopped for $\epsilon = 0.01$, i.e. when the change in each parameter $\varepsilon_{B_p}$ is smaller than 1\% of the error in the parameter $\varepsilon_{B_p}$ itself. Once the iteration converges to a final set of model parameters, the cross-correlation between the bins is checked and, if necessary, the algorithm will start a new iteration after rebinning. Throughout the rest of the paper, we require $| \delta_{pp'} | < 0.5$ for $p \neq p'$.
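The rebinning bookkeeping can be sketched as follows; \texttt{cross\_correlation} implements Eq.~(\ref{eq:cross}) from the inverse curvature matrix and \texttt{merge\_bins} joins two adjacent bins while conserving $\sum_p \varepsilon_{B_p} \Delta k_p$ (both function names are ours):

```python
import numpy as np

def cross_correlation(Finv):
    """Normalised parameter cross-correlation delta_pp' from F^{-1} (Eq. 18)."""
    s = np.sqrt(np.diag(Finv))
    return Finv / np.outer(s, s)

def merge_bins(eps_B, k_edges, p):
    """Merge bins p and p+1; the new amplitude is the dk-weighted mean,
    so the magnetic energy sum_p eps_Bp * dk_p is conserved."""
    dk = np.diff(k_edges)
    e_new = (eps_B[p] * dk[p] + eps_B[p + 1] * dk[p + 1]) / (dk[p] + dk[p + 1])
    eps_B = np.concatenate([eps_B[:p], [e_new], eps_B[p + 2:]])
    k_edges = np.concatenate([k_edges[:p + 1], k_edges[p + 2:]])
    return eps_B, k_edges
```

In an outer loop one would merge the pair with the largest $|\delta_{pp'}|$ and re-run the maximisation until all off-diagonal correlations fall below the chosen threshold.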
Once the power spectrum in terms of $\varepsilon_B(k) = \sum _p \varepsilon_{B_p} \mathbf{1}_{\{k \, \in \, [k_p, k_{p+1}]\}}$ is determined, we can calculate the magnetic energy density $\varepsilon_B$ by integration of the power spectrum \begin{equation} \varepsilon_B(a_p) = \int _0 ^{\infty} {\rm d} k\, \varepsilon_B(k) = \sum_p \varepsilon_{B_p} \Delta k_p, \end{equation} where $\Delta k_p = k_{p+1} - k_p$ is the bin size. The magnetic field and $RM$ autocorrelation lengths, $\lambda_B$ and $\lambda_{RM}$, are also accessible by integration of the power spectrum \citep{2003A&A...401..835E}: \begin{eqnarray} \lambda_B & = & \pi \frac{\int_0 ^{\infty} {\rm d} k \, \varepsilon_B(k)/k}{\int _0 ^{\infty} {\rm d} k \, \varepsilon_B(k)} = \pi \frac{\sum_p \varepsilon_{B_p} \ln(k_{p+1}/k_{p})} {\sum_p \varepsilon_{B_p} \Delta k_p} \\ \lambda_{RM} & = & 2 \frac{\int_0 ^{\infty} {\rm d} k \, \varepsilon_B(k)/k^2}{\int_0 ^{\infty} {\rm d} k \, \varepsilon_B(k)/k} = 2 \frac{\sum_p \varepsilon_{B_p} \left( 1/k_{p} - 1/k_{p+1} \right)}{\sum_p \varepsilon_{B_p} \ln(k_{p+1}/k_{p})}. \end{eqnarray} Since the method allows us to calculate the errors $\sigma_{\varepsilon_{B_p}}$, one can also determine errors for these integrated quantities. However, the cross-correlations $\delta_{pp'}$, which are non-zero as already mentioned, have to be taken into account. The probability distribution $P(\vec{a})$ of a parameter set can often be described by a Gaussian \begin{equation} \label{eq:prob} P(\vec{a}) \sim e^{-\frac{1}{2} \delta \vec{a} ^T X^{-1} \delta \vec{a}}, \end{equation} where $X$ is the covariance matrix of the parameters, $\delta \vec{a} = \vec{a} - \vec{a}_{{\rm peak}}$, $\vec{a}=\{a\}_{{\rm max}}$ is the determined maximum value for the probability distribution and $\vec{a}_{{\rm peak}}$ is the actual maximum of the probability function.
The standard deviation is defined as \begin{equation} \label{eq:deltaeb} \langle \delta \varepsilon_B^2 \rangle = \langle (\varepsilon_B(a)-\varepsilon_B)^2 \rangle = \int\, {\rm d}^n a\, P(a)\,(\varepsilon_B(a) - \varepsilon_B)^2. \end{equation} Assuming that $P(\vec{a})$ follows a Gaussian distribution (as done above in Eq.~(\ref{eq:prob})) and using the fact that $\varepsilon_B(a)$ is linear in the $a_p = \varepsilon_{B_p}$, Eq.~(\ref{eq:deltaeb}) becomes \begin{eqnarray} \langle \delta \varepsilon_B^2 \rangle & = & \int {\rm d}^n a\, P(a) \, \left[ \sum_p \delta a_p \, \frac{\partial \varepsilon_B}{\partial a_p} \right]^2\\ & = & \int {\rm d}^n a\, P(a) \, \sum_p \delta a_p \, \frac{\partial \varepsilon_B}{\partial a_p} \, \sum_{p'} \delta a_{p'} \, \frac{\partial \varepsilon_B}{\partial a_{p'}}. \end{eqnarray} Rearranging this equation and realising that the partial derivatives are independent of the $a_p$ since $\varepsilon_B$ is linear in the $a_p$s, this leads to \begin{equation} \langle \delta \varepsilon_B^2 \rangle = \sum_{pp'} \frac{\partial \varepsilon_B}{\partial a_p}\, \frac{\partial \varepsilon_B}{\partial a_{p'}} \int {\rm d}^n a \, P(a) \, \delta a_p \, \delta a_{p'} \end{equation} and finally, using Eq.~(\ref{eq:cross}), \begin{equation} \langle \delta \varepsilon_B^2 \rangle = \sum_{pp'} \frac{\partial \varepsilon_B}{\partial a_p}\, \frac{\partial \varepsilon_B}{\partial a_{p'}} \,\langle \sigma_{p} \sigma_{p'} \rangle, \end{equation} where $\langle \sigma_{p} \, \sigma_{p'} \rangle = \mathcal{F}_{pp'} ^{-1}$. A similar argument can be applied to the error derivation for the correlation lengths $\lambda_{RM}$ and $\lambda_B$, although the correlation lengths are not linear in the coefficients $a_p$. If one uses the partial derivatives at the determined maximum, one is still able to separate them approximately from the integral.
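Since $\partial \varepsilon_B / \partial a_p = \Delta k_p$ for the total energy density, the propagated variance reduces to a quadratic form in the inverse curvature matrix; a minimal numerical sketch (not part of the original code):

```python
import numpy as np

def eps_B_variance(Finv, k_edges):
    """Propagated variance of eps_B = sum_p eps_Bp * dk_p,
    i.e. sum_pp' dk_p dk_p' (F^{-1})_pp' with d eps_B / d a_p = dk_p."""
    dk = np.diff(k_edges)       # bin widths Delta k_p
    return dk @ Finv @ dk       # quadratic form in the parameter covariance
```

The off-diagonal (usually negative) elements of $\mathcal{F}^{-1}$ thus directly reduce the quoted error on the integrated energy density compared to naive quadrature addition.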
This leads to the following expressions for their errors \begin{equation} \langle \delta \lambda_B^2 \rangle \approx \sum_{pp'} \frac{\partial \lambda_B}{\partial a_p} \bigg\arrowvert_{a _p ^{{\rm max}}} \frac{\partial \lambda_B}{\partial a_{p'}} \bigg\arrowvert_{a _{p'} ^{{\rm max}}} \langle \sigma_{p} \sigma_{p'} \rangle \end{equation} and \begin{equation} \langle \delta \lambda_{RM}^2 \rangle \approx \sum_{pp'} \frac{\partial \lambda_{RM}}{\partial a_p} \bigg\arrowvert_{a _p ^{{\rm max}}} \frac{\partial \lambda_{RM}}{\partial a_{p'}} \bigg\arrowvert_{a _{p'} ^{{\rm max}}} \langle \sigma_{p} \sigma_{p'} \rangle. \end{equation} \section{Testing the algorithm\label{sec:test}} \begin{figure*}[htb] \resizebox{\hsize}{!}{\includegraphics{fig1.ps}} \caption[]{\label{fig:testrm} Right panel, a small part ($37 \times 37$ kpc) of a typical realisation of a $RM$ map which is produced by a Kolmogorov-like magnetic field power spectrum for $k \geq k_c = 0.8$ kpc$^{-1}$ and a magnetic field strength of 5 $\mu$G. Left panel, the $RM$ data used for the data matrix $\Delta_i$ is shown where we averaged arbitrary neighbouring points in order to reduce the number of independent points in a similar way as done later with the observational data.} \end{figure*} In order to test our algorithm, we applied our maximum likelihood estimator to generated $RM$ maps with a known magnetic power spectrum $\varepsilon_B(k)$. 
\citet{2003A&A...401..835E} give a prescription (their Eq.~(37)) for the relation between the amplitude of $RM$, $|\hat{RM}(k_{\perp})|^2$, and the magnetic power spectrum in Fourier space \begin{equation} \varepsilon_B^{{\rm obs}}(k) = \frac{k^2}{a_1\,A_{\Omega}(2\pi)^4} \int_{0}^{2\pi} {\rm d} \phi \,\, |\hat{RM}(\vec{k}_{\perp})|^2 \end{equation} or \begin{equation} \label{eq:rmk} |\hat{RM}(k_{\perp})|^2 = \frac{a_1\,A_{\Omega}(2\pi)^3}{k^2} \varepsilon_B^{{\rm obs}}(k), \end{equation} where $A_{\Omega}$ is the area of the region $\Omega$ for which $RM$s are actually measured and $a_1 = a_0^2\,n_{{\rm e0}}^2\,L$, where $L$ is the characteristic depth of the Faraday screen. As the Faraday screen, we assumed a box with sides 150 kpc long and a depth of $L = 300$ kpc. For simplicity, we assumed a uniform electron density profile with a density of $n_{{\rm e0}} = 0.001$ cm$^{-3}$. For the magnetic field power spectrum, we used \begin{equation} \label{eq:k0norm} \varepsilon_B^{{\rm obs}}(k) = \left\{ \begin{array}{ll} \frac{\varepsilon_B}{k_0^{1-\alpha} \, k_c^{2+\alpha}} \, k^2 & \forall k \leq k_c \\ \frac{\varepsilon_B}{k_0}\left( \frac{k}{k_0} \right)^{-5/3} & \forall k \geq k_c \end{array} \right. , \end{equation} where the spectral index $\alpha = -5/3$ was chosen to mimic Kolmogorov turbulence with energy injection at $k = k_{c}$, and \begin{equation} \varepsilon_B = \frac{\langle B^2 \rangle}{8\pi} = \int _{0} ^{k_{{\rm max}}} \!\!\! {\rm d} k \, \varepsilon_B ^{{\rm obs}}(k), \end{equation} where $k_{{\rm max}} = \pi/\Delta r$ is determined by the pixel size ($\Delta r$) of the $RM$ map used. The latter equation combined with Eq.~(\ref{eq:k0norm}) fixes the normalisation $k_0$ in such a way that the integration over the accessible power spectrum results in a magnetic field strength $B$, for which we used 5 $\mu$G. We used $k_{c} = 0.8$ kpc$^{-1}$.
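A literal transcription of Eq.~(\ref{eq:k0norm}), with the normalisation $k_0$ left as an input (in the text it is fixed by the total-energy condition); this is an illustration only:

```python
import numpy as np

def eps_B_obs(k, k_c, k0, eps_tot, alpha=-5.0 / 3.0):
    """Model spectrum of Eq. (26): a k^2 rise below the injection scale k_c
    and a Kolmogorov-like k^(-5/3) tail above it."""
    k = np.asarray(k, dtype=float)
    low = eps_tot * k**2 / (k0**(1.0 - alpha) * k_c**(2.0 + alpha))
    high = (eps_tot / k0) * (k / k0)**alpha
    return np.where(k <= k_c, low, high)
```

The two branches reproduce the expected scalings, $\varepsilon_B^{\rm obs} \propto k^2$ below $k_c$ and $\propto k^{-5/3}$ above it, which is the property used in the tests below.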
In order to generate a $RM$ map with the magnetic power spectrum $\varepsilon_B(k)$ for the chosen Faraday screen, we filled the real and imaginary parts of the Fourier space independently with Gaussian deviates. Then these values were multiplied by the appropriate values given by Eq.~(\ref{eq:rmk}) corresponding to their place in $k$-space. As a last step, an inverse Fourier transformation was performed. A typical realisation of such a generated $RM$ map is shown in Fig.~\ref{fig:testrm}. For the analysis of the resulting $RM$ map, only a small part of the initial map was used in order to reproduce the influence of the limited emission region of a radio source. We applied the Fourier analysis as described in \citet{2003A&A...401..835E} to this part. The resulting power spectrum is shown in Fig.~\ref{fig:test} as a dashed line in comparison with the input power spectrum as a dotted line. The maximum likelihood method is limited by computational power since it involves matrix multiplication and inversion, where the latter is an $N^3$ process. Thus, not all of the many points defined in our maps can be used. However, it is desirable to use as much information as possible from the original map. Therefore, we chose to randomly average neighbouring points with a scheme which led to a map with spatially inhomogeneously resolved cells. The resulting map is most finely resolved at the top and most coarsely at the bottom, with some random deviations, which makes it similar to the error weighting of the observed data. We used $N$ = 1500 independent points for the analysis. In the left panel of Fig.~\ref{fig:testrm}, the averaged $RM$ map which was used for the test is shown. As a first guess for the maximum likelihood estimation, we used the power spectrum derived by the Fourier analysis. The resulting power spectrum is shown as filled circles with 1-$\sigma$ error bars in Fig.~\ref{fig:test}.
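The map-generation step described above, i.e.~filling Fourier space with Gaussian deviates scaled according to Eq.~(\ref{eq:rmk}) and transforming back, can be sketched as follows. Units are schematic, and \texttt{amp\_of\_k} is the caller-supplied $|\hat{RM}(k_{\perp})|^2$ profile, so this is an illustration rather than the original generator:

```python
import numpy as np

def make_rm_map(n, pixel_kpc, amp_of_k, seed=0):
    """Draw a Gaussian random n x n RM map whose Fourier amplitudes follow
    |RM(k)|^2 = amp_of_k(k); real and imaginary parts are filled independently."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n, d=pixel_kpc) * 2.0 * np.pi
    ky = np.fft.rfftfreq(n, d=pixel_kpc) * 2.0 * np.pi
    k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    noise = rng.normal(size=k.shape) + 1j * rng.normal(size=k.shape)
    # scale by sqrt of the power; the k = 0 (mean RM) mode is set to zero
    power = np.where(k > 0, amp_of_k(np.where(k > 0, k, 1.0)), 0.0)
    noise *= np.sqrt(power / 2.0)
    return np.fft.irfft2(noise, s=(n, n))
```

Using the real-input inverse transform `irfft2` enforces the Hermitian symmetry needed for a real-valued map.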
The input power spectrum and the power spectrum derived by the maximum likelihood estimator agree well within the one-$\sigma$ level. Integration over this power spectrum results in a field strength of \hbox{$(4.7 \pm 0.3) \mu$G}, in agreement with the input magnetic field strength of \hbox{$5 \mu$G}. \begin{figure}[htb] \resizebox{\hsize}{!}{\includegraphics{fig2.ps}} \caption[]{\label{fig:test} Power spectra for a simulated $RM$ map as shown in Fig.~\ref{fig:testrm}. The input power spectrum is shown in comparison to the one found by the Fourier analysis as described in \citet{2003A&A...401..835E} and the one which was derived by our maximum likelihood estimator. One can see the good agreement within one $\sigma$ between the input power spectrum and the power spectrum derived by the maximum likelihood method.} \end{figure} \begin{figure*}[hbt] \resizebox{\hsize}{!}{\includegraphics{fig3.ps}} \caption[]{\label{fig:rmav} The final $RM$ map from the north lobe of Hydra~A which was analysed with the maximum likelihood estimator; left: error weighted map. The dots indicate the coordinates which correspond to the appropriate error weighted $RM$ value, which resulted from averaging over the indicated area; right: original \textit{Pacman} map. Note that the small scale noise for the diffuse part of the lobe is averaged out and only the large scale information carried by this region is maintained.} \end{figure*} \section{Application to Hydra~A\label{sec:app}} \subsection{The data $\vec{\Delta}$ \label{sec:data}} We applied the maximum likelihood estimator introduced and tested in the previous sections to the Faraday rotation map of the north lobe of the radio source Hydra~A \citep{1993ApJ...416..554T}. The data were kindly provided by Greg Taylor. For this purpose, we used a high fidelity $RM$ map presented in \citet{2004astro.ph..1216V} which was generated by the newly developed algorithm \textit{Pacman} \citep{2004astro.ph..1214D} from the original polarisation data.
\textit{Pacman} also provides error maps $\sigma_i$ by error propagation of the instrumental uncertainties of the polarisation angles. The \textit{Pacman} map which was used is shown in the right panel of Fig.~\ref{fig:rmav}. For the same reasons as mentioned in Sect.~\ref{sec:test}, we averaged the data. An appropriate averaging procedure using error weighting was applied such that \begin{equation} \overline{RM}_i = \frac{\sum_j {RM_j}/{\sigma^2_j}} {\sum_{j} {1}/{\sigma_j^2}}, \end{equation} and the error is calculated as \begin{equation} \sigma ^2 _{\overline{RM}_i} = \frac{\sum_j \left( {1}/ {\sigma^2_j} \right) } { \left( \sum_{j} {1}/{\sigma_j^2}\right)^2 } = \frac{1}{\sum_{j} {1}/{\sigma_j^2}}. \end{equation} Here, the sum goes over the set of old pixels $\{ j \}$ which form the new pixel $i$. The corresponding pixel coordinates were also determined by applying an error weighting scheme \begin{equation} \overline{x}_i = \frac{\sum_j {x_j}/{\sigma^2 _j}}{\sum_j 1/\sigma^2_j} \;\;\mbox{and}\;\; \overline{y}_i = \frac{\sum_j {y_j}/{\sigma^2 _j}}{\sum_j 1/\sigma^2_j}. \end{equation} The analysed $RM$ map was determined by a gridding procedure. The original $RM$ map was divided into four equally sized cells. In each of these, the original data were averaged as described above. Then the cell with the smallest error was chosen and again divided into four equally sized cells, and the original data contained in each such cell were averaged. The last step was repeated until the number of cells reached a defined value $N$. We decided to use $N = 1500$. This is partly due to the limitation of computational power, but also partly because of the desired suppression of small-scale noise by strongly averaging the noisy regions. The final $RM$ map which was analysed is shown in Fig.~\ref{fig:rmav}. The noisiest regions in Hydra~A are located in the coarsely resolved northernmost part of the lobe.
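The error-weighted averaging above translates directly into code; a minimal sketch, applied per new cell over its constituent pixels:

```python
import numpy as np

def weighted_average(rm, sigma):
    """Inverse-variance weighted RM average and its propagated uncertainty."""
    w = 1.0 / sigma**2
    rm_bar = np.sum(w * rm) / np.sum(w)
    sigma_bar = np.sqrt(1.0 / np.sum(w))   # 1 / sum_j (1/sigma_j^2)
    return rm_bar, sigma_bar
```

The same weights applied to the pixel coordinates yield the cell positions $\overline{x}_i$ and $\overline{y}_i$.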
We chose not to resolve this region any further but to keep the large-scale information which is carried by this region. \subsection{The window function\label{sec:window}} As mentioned in Sect.~\ref{sec:crm}, the window function describes the sampling volume and, thus, we have to find a suitable description for it based on Eq.~(\ref{eq:window}). Hydra A (or 3C218) is located at a redshift of 0.0538 \citep{1991trcb.book.....D}. For the derivation of the electron density profile parameters, we relied on the work by \citet{1999ApJ...517..627M} on ROSAT PSPC data, using the deprojection of X-ray surface brightness profiles as described in Appendix A of \citet{2004A&A...413...17P}. Since Hydra A is known from X-ray studies to exhibit a strong cooling flow, we assumed a double $\beta$-profile \footnote { defined as \begin{math} n_{{\rm e}}(r) = [n_{{\rm e1}}^2 (0)(1+(r/r_{{\rm c1}})^2)^{-3\beta}+n_{{\rm e2}}^2 (0) (1+(r/r_{{\rm c2}})^2)^{-3\beta}]^{1/2}. \end{math} } and used for the inner profile $n_{{\rm e1}}(0) = 0.056$ cm$^{-3}$ and $r_{c1} = 0.53$ arcmin; for the outer profile we used $n_{{\rm e2}}(0) = 0.0063$ cm$^{-3}$ and $r_{c2} = 2.7$ arcmin, and we used $\beta = 0.77$. \begin{figure*}[hbt] \resizebox{\hsize}{!}{\includegraphics[angle=-90]{fig4.ps}} \vspace{-1.0cm} \caption[]{\label{fig:radial} The comparison of the integrated squared window function $f^2(r)$ (lines) with the $RM$ dispersion function $\langle RM^2(r) \rangle$ (open circles) and $\langle RM^2 \rangle - \langle RM(r) \rangle^2$ (filled circles). Different models for the window function were assumed. In (a) $\alpha_B = 1.0$, in (b) $\alpha_B = 0.5$ and in (c) $\alpha_B = 0.1$ were used, where the inclination angle $\theta$ of the source was varied.
It can be seen that models for the window function with $\alpha_B = 0.1\ldots0.5$ and $\theta = 10\degr \ldots 50\degr$ match the shape of the dispersion function very well.} \end{figure*} Assuming this electron density profile to be accurately determined, there are two other parameters which enter the window function. The first one is related to the source geometry. For Hydra~A, a clear depolarisation asymmetry between the two lobes is observed, known as the Laing-Garrington effect \citep{1988Natur.331..147G, 1988Natur.331..149L}, suggesting that the source is tilted with respect to the $xy$-plane \citep{1993ApJ...416..554T}. In fact, the north lobe points towards the observer. In order to take this into account, we introduced the angle $\theta$ between the source axis and the $xy$-plane, oriented such that the north lobe points towards the observer. \citet{1993ApJ...416..554T} determine an inclination angle of $\theta = 45\degr$. The other parameter is related to the global magnetic field distribution, which is assumed to scale with the electron density profile as $B(r) \propto n_{{\rm e}}(r) ^ {\alpha_B}$. In a scenario in which an originally statistically homogeneous magnetic energy density gets adiabatically compressed, one expects $\alpha_B = 2/3$. If the ratio of magnetic and thermal pressure is constant throughout the cluster then $\alpha_B = 0.5$. However, $\alpha_B$ might have any other value. \citet{2001A&A...378..777D} determined $\alpha_B=0.9$ for the outer regions of the cluster Abell 119. In order to constrain the applicable ranges of these quantities, one can compare the integrated squared window function with the $RM$ dispersion function $\langle RM(r_{\perp})^2 \rangle$ of the $RM$ map used, since \begin{equation} \langle RM^2 (r_{\perp}) \rangle \propto \int _{-\infty} ^{\infty} {\rm d} z\, f^2(r_{\perp}, z), \end{equation} as stated by Eq.~(24) of \citet{2003A&A...401..835E}. Therefore, we compared the shape of the two functions.
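This comparison can be reproduced schematically. With the double $\beta$-profile above and $f \propto (n_e/n_{e0})^{1+\alpha_B}$, the line-of-sight integral of $f^2$ gives the expected shape of $\langle RM^2(r_\perp) \rangle$; the inclination is ignored here for simplicity, and the core radii in kpc are illustrative conversions (roughly 64 kpc per arcmin at $z = 0.0538$):

```python
import numpy as np
from scipy.integrate import quad

# Double beta-profile of Mohr et al. (1999) as quoted in the text; core radii
# converted from arcmin to kpc are approximate and for illustration only.
def n_e(r, ne1=0.056, rc1=34.0, ne2=0.0063, rc2=172.0, beta=0.77):
    return np.sqrt(ne1**2 * (1.0 + (r / rc1)**2)**(-3.0 * beta)
                   + ne2**2 * (1.0 + (r / rc2)**2)**(-3.0 * beta))

def rm_dispersion_shape(r_perp, alpha_B, z_max=1500.0):
    """Shape of <RM^2>(r_perp) up to a constant: line-of-sight integral of
    f^2 with f = (n_e/n_e0)^(1 + alpha_B), i.e. neglecting the inclination."""
    ne0 = n_e(0.0)
    integrand = lambda z: (n_e(np.hypot(r_perp, z)) / ne0) ** (2.0 * (1.0 + alpha_B))
    val, _ = quad(integrand, -z_max, z_max)
    return val
```

Larger $\alpha_B$ steepens the predicted fall-off of $\langle RM^2(r_\perp) \rangle$ with projected radius, which is what drives the model discrimination in Fig.~\ref{fig:radial}.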
The result is shown in Fig.~\ref{fig:radial}. For the window function, we used three different values $\alpha_B = 0.1, 0.5, 1.0$ and for each of these, five different inclination angles $\theta = 0\degr, 10\degr, 30\degr, 45\degr$ and $60\degr$ were employed, although $\theta = 0\degr$ is not very likely considering the observational evidence of the Laing-Garrington effect as observed in Hydra~A by \citet{1993ApJ...416..554T}. The different results are plotted as lines of different style in Fig.~\ref{fig:radial}. The filled and open dots represent the $RM$ dispersion function. The solid circles indicate the binned $\langle RM^2 \rangle$ function. The open circles represent the binned $\langle RM^2 \rangle - \langle RM \rangle^2$ function, which is cleaned of any foreground $RM$ signals. From Fig.~\ref{fig:radial}, it can be seen that models with $\alpha_B = 1.0$ or $\theta > 50\degr$ are not able to recover the shape of the $RM$ dispersion function and, thus, we expect $\alpha_B < 1.0$ and $\theta < 50\degr$ to be more likely. \section{Results and discussion\label{sec:discussion}} Based on the described treatment of the data and the description of the window function, we first calculated power spectra for various scaling exponents $\alpha_B$ while keeping the inclination angle at $\theta = 45\degr$. For this investigation, we used $n_l = 5$ bins, which proved to be sufficient. For these calculations, we used $\epsilon < 0.1$. The resulting power spectra are plotted in Fig.~\ref{fig:power_alpha}. \begin{figure}[htb] \resizebox{\hsize}{!}{\includegraphics{fig5.ps}} \caption[]{\label{fig:power_alpha} Power spectra for $N = 1500$ and $n_l = 5$. Different exponents $\alpha_B$ in the relation $B(r) \sim n_e(r)^{\alpha_B}$ of the window function were used. 
The inclination angle of the source was chosen to be $\theta = 45\degr$.} \end{figure} In Fig.~\ref{fig:power_alpha}, one can see that the power spectrum derived for $\alpha_B = 1.0$ has a completely different shape, whereas the other power spectra show only slight deviations from each other and are vertically displaced, implying different normalisation factors, i.e. central magnetic field strengths which increase with increasing $\alpha_B$. The straight dashed line which is also plotted in Fig.~\ref{fig:power_alpha} indicates a Kolmogorov-like power spectrum, the slope of which is equal to $5/3$ in our prescription. The power spectra follow this slope over at least one order of magnitude. In Sect.~\ref{sec:window}, we were not able to distinguish between the various scenarios for $\alpha_B$, although we found that $\alpha_B = 1$ does not properly reproduce the measured $RM$ dispersion. However, the likelihood function offers the possibility of calculating the actual probability of a set of parameters given the data (see Eq.~(\ref{eq:likely})). Thus, we calculated the log likelihood $\ln {\mathcal L}_{\vec{\Delta}}(\vec{a})$ value for various power spectra derived for the different window functions varying in the scaling exponent $\alpha_B$, assuming for all geometries the inclination angle of the source to be $\theta = 45\degr$. In Fig.~\ref{fig:alpha_lnL}, the log likelihood is shown as a function of the scaling parameter $\alpha_B$ used. \begin{figure}[htb] \resizebox{\hsize}{!}{\includegraphics{fig6.ps}} \caption[]{\label{fig:alpha_lnL} The log likelihood $\ln \mathcal {L}_{\vec{\Delta}} (\vec{a})$ of various power spectra assuming different $\alpha_B$ while using a constant inclination angle $\theta = 45\degr$. $\alpha_B = 0.1\ldots0.8$ are in the plateau of maximum likelihood. 
The sudden decrease for $\alpha_B < 0.1$ in the likelihood might be due to non-Gaussian effects becoming too strong.} \end{figure} As can be seen from Fig.~\ref{fig:alpha_lnL}, there is a plateau of most likely scaling exponents $\alpha_B$ ranging from 0.1 to 0.8. An $\alpha_B = 1$ seems to be very unlikely for our model as already deduced in Sect.~\ref{sec:window}. The sudden decrease for $\alpha_B < 0.1$ might be due to non-Gaussian effects. The magnetic field strength derived for this plateau region ranges from 9 $\mu$G to 5 $\mu$G. The correlation length of the magnetic field $\lambda_B$ was determined to range between $2.5$ kpc and $3.0$ kpc whereas the $RM$ correlation length was determined to be in the range of $4.5\ldots5.0$ kpc. These ranges have to be considered as a systematic uncertainty since we are not yet able to distinguish between these scenarios observationally. Another systematic effect might be given by uncertainties in the electron density itself. Varying the electron density normalisation parameters ($n_{{\rm e1}}(0)$ and $n_{{\rm e2}}(0)$) leads to a vertical displacement of the power spectrum while keeping the same shape. In order to study the influence of the inclination angle on the power spectrum, we used an $\alpha_B = 0.5$, being in the middle of the most likely region derived. For this calculation, we used smaller bins and thus increased the number of bins to $n_l = 8$. We calculated the power spectrum for two different inclination angles $\theta = 30\degr$ and $\theta = 45\degr$. The results are shown in Fig.~\ref{fig:power_theta} in comparison with a Kolmogorov-like power spectrum. \begin{figure}[htb] \resizebox{\hsize}{!}{\includegraphics{fig7.ps}} \caption[]{\label{fig:power_theta} Power spectra for two different inclination angles $\theta = 30\degr$ and $\theta = 45\degr$ and an $\alpha_B = 0.5$. For comparison a Kolmogorov-like power spectrum is plotted as a straight dashed line. 
One can see that the calculated power spectra follow such a power spectrum over at least one order of magnitude. Note that the error bars are larger than in Fig.~\ref{fig:power_alpha} because smaller bin sizes were used.} \end{figure} As can be seen from Fig.~\ref{fig:power_theta}, the power spectra derived agree well with a Kolmogorov-like power spectrum over at least one order of magnitude. For the inclination angle of $\theta = 30\degr$, we derived the following field and map properties: \hbox{$B = 5.7 \pm 0.1 \, \mu$G}, \hbox{$\lambda_B = 3.1\pm0.3$ kpc} and \hbox{$\lambda_{RM} = 6.7 \pm 0.7$ kpc}. For $\theta = 45\degr$, we calculated \hbox{$B = 7.3 \pm 0.2 \, \mu$G}, \hbox{$\lambda_B = 2.8\pm0.2$ kpc} and \hbox{$\lambda_{RM} = 5.2 \pm 0.5$ kpc}. The value of the log likelihood $\ln \mathcal{L}$ was determined to be slightly higher for the inclination angle of $\theta = 30\degr$. The flattening of the power spectra at large $k$ can be explained by small-scale noise which we did not model separately. Although the central magnetic field strength decreases with decreasing scaling parameter $\alpha_B$, the volume-integrated magnetic field energy $E_B$ within the cluster core radius $r_{{\rm c2}}$ increases. The volume-integrated magnetic field energy $E_B$ is calculated as follows \begin{equation} E_B = 4 \pi \int _0 ^{r_{{\rm c2}}} {\rm d} r\, r^2 \, \frac{B^2(r)}{8\pi} = \frac{B_0^2}{2} \int _0 ^{r_{{\rm c2}}} {\rm d} r \, r^2 \, \left( \frac{n_{{\rm e}}(r)}{n_{{\rm e0}}} \right) ^{2 \alpha_B}, \end{equation} where we integrate from the cluster centre to the core radius $r_{{\rm c2}}$ of the second, non-cooling flow, component of the electron density distribution. We integrated the magnetic field profile for the various scaling parameters and the corresponding field strengths which we determined by our maximum likelihood estimator. The result is plotted in Fig.~\ref{fig:bradial}. 
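The behaviour just described — a larger integrated energy for a smaller $\alpha_B$ despite the lower central field strength — can be checked numerically. The sketch below is hypothetical and not part of the original analysis; it estimates the radial shape factor $\int_0^{r_{\rm c2}} {\rm d}r\, r^2 (n_{\rm e}(r)/n_{\rm e0})^{2\alpha_B}$ of the $E_B$ integral with a simple trapezoidal rule, using the double $\beta$-profile parameters quoted earlier (radii in arcmin).

```python
import math

def n_e(r, n1=0.056, rc1=0.53, n2=0.0063, rc2=2.7, beta=0.77):
    # Double beta-profile electron density (parameter values from the text).
    inner = n1 ** 2 * (1.0 + (r / rc1) ** 2) ** (-3.0 * beta)
    outer = n2 ** 2 * (1.0 + (r / rc2) ** 2) ** (-3.0 * beta)
    return math.sqrt(inner + outer)

def shape_integral(alpha_B, r_max=2.7, steps=4000):
    """Trapezoidal estimate of int_0^{r_max} r^2 (n_e(r)/n_e(0))^(2*alpha_B) dr,

    i.e. the radial shape factor of E_B up to the overall B_0^2/2 prefactor.
    """
    n0 = n_e(0.0)
    h = r_max / steps
    total = 0.0
    for i in range(steps):
        a, b = i * h, (i + 1) * h
        fa = a * a * (n_e(a) / n0) ** (2.0 * alpha_B)
        fb = b * b * (n_e(b) / n0) ** (2.0 * alpha_B)
        total += 0.5 * (fa + fb) * h
    return total
```

Since $n_{\rm e}(r)/n_{\rm e0} < 1$ away from the centre, a smaller exponent $\alpha_B$ gives a flatter field profile and hence a larger shape factor, which is exactly the trend shown in Fig.~8.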
The higher magnetic energies for the smaller scaling parameters which correspond to a lower central magnetic field strength are due to the higher field strength in the outer parts of the cool cluster core. This effect would be much more drastic if we had extrapolated the scaling $B(r) \propto n_{\rm e}(r)^{\alpha_B}$ to larger cluster radii and integrated over a larger volume. \begin{figure}[htb] \resizebox{\hsize}{!}{\includegraphics{fig8.ps}} \caption[]{\label{fig:bradial} The integrated magnetic field energy $E_B$ within the cluster core radius $r_{\rm c2}$ for the various scaling parameters $\alpha_B$ also used in Fig.~\ref{fig:alpha_lnL} and the corresponding central magnetic field strength $B_0$ as determined by our maximum likelihood estimator.} \end{figure} \section{Conclusions\label{sec:conclusion}} We presented a maximum likelihood estimator for the determination of cluster magnetic field power spectra from $RM$ maps of extended polarised radio sources. We introduced the covariance matrix for $RM$ under the assumption of statistically homogeneously-distributed magnetic fields throughout the Faraday screen. We successfully tested our approach on simulated $RM$ maps with known power spectra. We applied our approach to the $RM$ map of the north lobe of Hydra A. We calculated different power spectra for various window functions being especially influenced by the scaling parameter between electron density profile and global magnetic field distribution and the inclination angle of the emission region. The scaling parameter $\alpha_B$ was determined to be most likely in the range of $0.1\ldots0.8$. We realised that there is a systematic uncertainty in the values calculated due to the uncertainty in the window parameter itself. Taking this into account, we deduced for the central magnetic field strength in the Hydra A cluster $B = (7 \pm 2)\,\mu$G and for the magnetic field correlation length $\lambda_B = (3.0 \pm 0.5)$ kpc. 
If the geometry uncertainties could be removed, the remaining statistical errors would be an order of magnitude smaller. The difference between these values and those found in an earlier analysis of the same dataset of Hydra~A, which yielded $B = 12 \mu$G and \hbox{$\lambda_B = 1$ kpc} \citep{2003A&A...412..373V}, is a result of the improved $RM$ map using the \textit{Pacman} algorithm \citep{2004astro.ph..1214D, 2004astro.ph..1216V} and a better knowledge of the magnetic cluster profile, i.e. here $\alpha_B \approx 0.5$ \citep[instead of $\alpha_B = 1.0$ in ][]{2003A&A...412..373V}. However, the magnetic field strength found in Hydra A supports the trend of relatively large magnetic fields derived for cooling flow clusters from RM measurements reported in the literature. The cluster magnetic field power spectrum of Hydra A follows a Kolmogorov-like power spectrum over at least one order of magnitude. However, from our analysis it seems that there is a dominant scale of $\sim 3$ kpc at which the magnetic power is concentrated. \begin{acknowledgements} We would like to thank Greg Taylor for providing the polarisation data of the radio source Hydra A and Klaus Dolag for the calculation of the $RM$ map using \textit{Pacman}. We would like to thank Greg Taylor and Marat Gilfanov for useful comments on the manuscript. \end{acknowledgements} \bibliographystyle{aa}
Huos is a French commune located in the west of the Haute-Garonne department, in the Occitanie region. In historical and cultural terms, the commune lies in the Comminges country, corresponding to the former county of Comminges, a district of the province of Gascony spread over the present-day departments of Gers, Haute-Garonne, Hautes-Pyrénées and Ariège. Exposed to an altered oceanic climate, it is drained by the Garonne and by one other watercourse. The commune has a remarkable natural heritage: a Natura 2000 site (« Garonne, Ariège, Hers, Salat, Pique et Neste »), a protected area (« la Garonne, l'Ariège, l'Hers Vif et le Salat ») and three natural zones of ecological, faunal and floral interest. Huos is a rural commune which counts inhabitants in , after a strong rise in population since 1975. It belongs to the urban unit of Montréjeau and forms part of the attraction area of Saint-Gaudens. Its inhabitants are called Huossais or Huossaises. Geography Location The commune of Huos lies in the Haute-Garonne department, in the Occitanie region. It is located as the crow flies from Toulouse, the department's prefecture, from Saint-Gaudens, the sub-prefecture, and from Bagnères-de-Luchon, the centralising bureau of the canton of Bagnères-de-Luchon, on which the commune has depended since 2015 for departmental elections. The commune also belongs to the living basin of Montréjeau. The nearest communes are: Ausson (), Pointis-de-Rivière (), Gourdan-Polignan (), Montréjeau (), Clarac (), Seilhan (), Cier-de-Rivière (), Ponlat-Taillebourg (). In historical and cultural terms, Huos forms part of the Comminges country, corresponding to the former county of Comminges, a district of the province of Gascony spread over the present-day departments of Gers, Haute-Garonne, Hautes-Pyrénées and Ariège. Huos borders six other communes. 
Geology and relief The area of the commune is ; its altitude varies from . Hydrography The commune lies in the Garonne basin, within the Adour-Garonne hydrographic basin. It is drained by the Garonne and an arm of the Garonne, forming a hydrographic network of total length. The Garonne is a mainly French river that rises in Spain and flows for before emptying into the Atlantic Ocean. Climate The climate characterising the commune was classed in 2010 as an « altered oceanic climate », according to the typology of the climates of France, which then counted eight major climate types in metropolitan France. In 2020, the commune falls under the same climate type in the classification established by Météo-France, which now counts, to a first approximation, only five major climate types in metropolitan France. This is a transition zone between the oceanic climate and the mountain and semi-continental climates. Temperature differences between winter and summer increase with distance from the sea. Rainfall is lower than on the coast, except near the relief. The climatic parameters used to establish the 2010 typology comprise six variables for temperature and eight for precipitation, whose values correspond to the 1971-2000 normal. The seven main variables characterising the commune are presented in the box hereafter. With climate change, these variables have evolved. A study carried out in 2014 by the Direction générale de l'Énergie et du Climat, supplemented by regional studies, indeed projects that the mean temperature should rise and the mean rainfall fall, though with strong regional variations. 
These changes can be observed at the Météo-France weather station closest to the commune, « Clarac », in the commune of Clarac, commissioned in 1994 and located as the crow flies, where the mean annual temperature is and the rainfall for the 1981-2010 period. At the nearest historical weather station, « Saint-Girons », in the commune of Lorp-Sentaraille in the Ariège department, commissioned in 1949 and at , the mean annual temperature changes from for the 1971-2000 period, to for 1981-2010, and then to for 1991-2020. Natural environments and biodiversity Protected areas Regulatory protection is the strongest form of intervention for preserving remarkable natural areas and their associated biodiversity. One protected area is present in the commune: « la Garonne, l'Ariège, l'Hers Vif et le Salat », the subject of a biotope protection order, with an area of . Natura 2000 network The Natura 2000 network is a European ecological network of natural sites of ecological interest drawn up under the Habitats and Birds directives, made up of special areas of conservation (SAC) and special protection areas (SPA). One Natura 2000 site has been designated in the commune under the Habitats directive: « Garonne, Ariège, Hers, Salat, Pique et Neste », with an area of , a hydrographic network for migratory fish (active and potential spawning grounds important in particular for the salmon, which is regularly restocked and whose adults already reach Foix on the Ariège). 
Natural zones of ecological, faunal and floral interest The inventory of natural zones of ecological, faunal and floral interest (ZNIEFF) aims to cover the most ecologically interesting areas, essentially with a view to improving knowledge of the national natural heritage and providing decision-makers with a tool to help take the environment into account in regional planning. One ZNIEFF of is recorded in the commune: « la Garonne de Montréjeau jusqu'à Lamagistère » (), covering of which 63 in Haute-Garonne, three in Lot-et-Garonne and 26 in Tarn-et-Garonne, and two ZNIEFF of : « la Garonne et milieux riverains, en aval de Montréjeau » (), covering of which 64 in Haute-Garonne, three in Lot-et-Garonne and 26 in Tarn-et-Garonne; the « piémont calcaire commingeois et bassin de Sauveterre » (), covering of the department. Urbanism Typology Huos is a rural commune, as it is one of the sparsely or very sparsely populated communes within the meaning of the INSEE communal density grid. It belongs to the urban unit of Montréjeau, an inter-departmental agglomeration of and in , of which it is a suburban commune. The commune is also part of the attraction area of Saint-Gaudens, of which it is a commune of the outer ring. This area, which comprises , is categorised among areas of fewer than . Land use The land use of the commune, as shown by the European biophysical land-use database Corine Land Cover (CLC), is marked by the importance of agricultural land (75.2% in 2018), nevertheless down from 1990 (76.9%). The detailed breakdown in 2018 is as follows: heterogeneous agricultural areas (42.2%), arable land (33%), forests (13.2%), urbanised areas (11.3%), industrial or commercial areas and communication networks (0.2%). 
The IGN also provides an online tool for comparing changes over time in the land use of the commune (or of territories at different scales). Several periods are accessible as maps or aerial photographs: the Cassini map (), the état-major map (1820-1866) and the present period (1950 to today). Roads and transport Access via the A64 motorway, exit , then the A645 and the route nationale 125, as well as via the Arc-en-ciel network. Major risks The territory of the commune of Huos is vulnerable to various natural hazards: meteorological (storm, thunderstorm, snow, extreme cold, heatwave or drought), floods, forest fires and earthquakes (moderate seismicity). It is also exposed to a technological risk, dam failure. A website published by the BRGM makes it possible to assess simply and quickly the risks to a property, located either by its address or by its parcel number. Natural risks Certain parts of the communal territory are liable to be affected by the risk of flooding through the overflowing of watercourses, notably the Garonne. The commune was recognised as being in a state of natural disaster for damage caused by the floods and mudslides of 1982, 1999, 2009 and 2013. A departmental plan for the protection of forests against fire was approved by prefectoral order of 25 September 2006. Huos is exposed to forest-fire risk owing to the presence on its territory of the Pyrenean piedmont massif. Property owners of the commune and their assigns are thus forbidden to carry or light fire within, and within a distance of from, woods, forests, plantations and reforested areas as well as moorland. 
Pasture burning (écobuage) is also prohibited, as are fires of the méchoui and barbecue type, except those provided for in fixed installations (not situated under tree cover) constituting an outbuilding of a dwelling. The shrinking and swelling of clay soils can cause significant damage to buildings when periods of drought and rain alternate. 90.9% of the communal area is at medium or high risk (88.8% at departmental level and 48.5% at national level). Of the counted in the commune in 2019, are at medium or high risk, i.e. 100%, compared with 98% at departmental level and 54% at national level. A map of the exposure of the national territory to clay soil shrink-swell is available on the BRGM website. In addition, in order to better grasp the risk of ground subsidence, the national inventory of underground cavities makes it possible to locate those situated in the commune. As regards ground movements, the commune was recognised as being in a state of natural disaster for damage caused by drought in 1989 and by ground movements in 1999. Technological risks The commune is moreover situated downstream of the Naguilhes dam on the Gnoles (a tributary of the Ariège, in the Ariège department). As such, it could be affected by the flood wave following the failure of one of these structures. Toponymy History Until the Revolution, the parish of Martres-de-Rivière formed, with those of Pointis-de-Rivière and Cier-de-Rivière, one of the Languedoc enclaves of the « Petit-Comminges », one of the 24 civil dioceses of the province of Languedoc. The neighbouring parishes, for their part, belonged to the County of Comminges, which came under Gascony. 
Heraldry Politics and administration Municipal administration As the number of inhabitants at the 2011 census was between 100 and 499, the number of members of the municipal council for the 2014 election is eleven. Administrative and electoral attachments The commune forms part of the eighth constituency of Haute-Garonne, of the communauté de communes du Haut Comminges and of the canton of Bagnères-de-Luchon (before the departmental redistricting of 2014, Huos belonged to the former canton of Barbazan). Political trends and results List of mayors Population and society Demography Education Huos belongs to the Toulouse academy. In 2010, the schools counted around a hundred children together with pupils from the neighbouring villages of Martres-de-Rivière, Ardiège and Cier-de-Rivière, grouped in an intercommunal teaching group. Health Culture and festivities village hall, Sporting activities hiking, hunting, pétanque, Ecology and recycling Economy Income In 2018 (INSEE data published in ), the commune counted tax households, comprising . The median disposable income per consumption unit is ( in the department). Employment In 2018, the population aged amounted to , of whom 74.9% were economically active (66.4% in employment and 8.5% unemployed) and 25.1% inactive. In 2018, the communal unemployment rate (within the meaning of the census) of was lower than those of France and the department, whereas in 2008 the situation was the reverse. The commune forms part of the outer ring of the attraction area of Saint-Gaudens, since at least 15% of its working residents work in the pole. It counted in 2018, against 62 in 2013 and 42 in 2008. The number of employed working residents in the commune is 189, giving an employment concentration indicator of 28.8% and an activity rate among those aged 15 or over of 51.5%. Of these 189 working residents aged 15 or over in employment, 22 work in the commune, i.e. 12% of the inhabitants. 
To get to work, 90.4% of the inhabitants use a personal or company four-wheeled vehicle, 1.6% public transport, 5.9% travel by two-wheeler, bicycle or on foot, and 2.1% need no transport (working from home). Activities outside agriculture Sectors of activity 26 establishments were based in Huos as of . The table below details their number by sector of activity and compares the ratios with those of the department. The wholesale and retail trade, transport, accommodation and catering sector is predominant in the commune, since it accounts for 30.8% of the total number of establishments in the commune (8 of the 26 businesses based in Huos), against 25.9% at departmental level. Businesses and shops Les jardins de Comminges, a cooperative of collective interest, is an association created in 2006 by Afidel; the Jardins du Comminges offer two social and occupational integration workshops: market gardening in organic agriculture on 6 hectares, and environmental works. Agriculture The commune lies in « La Rivière », a small agricultural region in the south of the Haute-Garonne department, forming the piedmont part, with gentler relief than the central Pyrenees bordering it to the south, where the Garonne valley widens. In 2020, the technical-economic orientation of agriculture in the commune is the growing of vegetables or mushrooms. Three farms with their headquarters in the commune were counted in the 2020 agricultural census (18 in 1988). The utilised agricultural area is . Local culture and heritage Places and monuments Church of Saint-Saturnin. The war memorial recalls that the soldiers were almost all farmers. 
Representation of the Grotto of Massabielle with Our Lady of Lourdes and Saint Bernadette Personalities linked to the commune Jean-Étienne Bartier, baron de Saint-Hilaire (1766-1835), general of the armies of the Republic and the Empire, born in Aspet, buried in the commune's cemetery. Christian Piquemal (1940), corps general. News item On , Fernando Rodrigues, 32, killed his wife Joëlle (née Soubie), 31, in their house, as well as his sister-in-law Fabienne Jacomet, 21, with sabre and axe blows, before killing himself with a gunshot. Henri-Jean Jacomet, an employee of the Cellulose d'Aquitaine in Saint-Gaudens and Fabienne's husband, having discovered the massacre, was accused and then cleared in 1995. He was the subject of one of the episodes of Faites entrer l'accusé, of Accusé à tort rebroadcast in Enquêtes criminelles : le magazine des faits divers, of Sept à huit and of Crimes. Further reading Bibliography Related articles List of communes of Haute-Garonne Pyrenean pantheon External links Huos on the website of the Communauté de Communes du Haut-Comminges Notes and references Notes and maps Notes Maps References INSEE website Other sources Commune in Haute-Garonne Commune in the arrondissement of Saint-Gaudens Urban unit of Montréjeau Urban area of Montréjeau Attraction area of Saint-Gaudens
Chilocardamum is a genus of flowering plants belonging to the family Brassicaceae. It comprises 4 described and accepted species. Taxonomy The genus was described by Otto Eugen Schulz and published in Das Pflanzenreich IV. 105(Heft 86): 179. 1924. Species Listed below are the species of the genus Chilocardamum accepted as of July 2012, ordered alphabetically. For each one the binomial name is given, followed by the author, abbreviated according to conventions and usage. Chilocardamum castellanosii (O.E.Schulz) Al-Shehbaz Chilocardamum longistylum (Romanczuk) Al-Shehbaz Chilocardamum onuridifolium (Ravenna) Al-Shehbaz Chilocardamum patagonicum (Speg.) O.E.Schulz References Thelypodieae
\section{Preliminaries} Throughout this paper, the phrase FRBSU monoidal category stands for a finite, rigid, braided monoidal category whose unit object $I$ is simple.\\ A category is small if the collection of all objects forms a set. If $A$ is an object in a category $\mathcal{A}$, then a subobject $B$ of $A$ is an object with a monomorphism $B\rightarrow A$.\\ \cite{baki} An object $A$ is simple in an abelian category $\mathcal{A}$ if for any injection $B\rightarrow A$, we get $B=0$ or $B\cong A$.\\ A cover for an object $A$ in a category $\mathcal{A}$ is an object $P$ with an epimorphism $f:~P\rightarrow A$. This cover is projective if $P$ is a projective object.\\ An object $A$ in a category is of finite length if there exists a finite sequence of monomorphisms $\xymatrix{ 0\ar[r] & A_n\ar[r] & A_{n-1}\ar[r] & ...\ar[r] & A_0=A}$ such that the cokernels of these monomorphisms are simple objects.\\ A $k$ linear abelian category is semisimple if every object is isomorphic to a direct sum of simple objects. \begin{lem} \label{22} $[Schur's~Lemma]$ If $k$ is an algebraically closed field of characteristic zero, then $End(X)=k$ whenever $X$ is a simple object in an abelian $k$ linear category $\mathcal{A}$. \end{lem} \begin{lem} \label{12} If $X$ and $Y$ are nonzero simple objects with $X\not\cong Y$ in a $k$ linear abelian category $\mathcal{A}$, where $k$ is a perfect field, then $Hom(X,~Y)=0$. \end{lem} \begin{proof} Assume that $\mathcal{A}$ is a $k$ linear abelian category and $X$, $Y$ are nonzero simple objects with $X\not\cong Y$. Let $f:~X\rightarrow Y$ be a nonzero morphism in $\mathcal{A}$. $Ker(f)\cong 0$ since $X$ is simple and $f\neq 0$, so $f$ is a monomorphism and exhibits $X$ as a nonzero subobject of $Y$. Since $Y$ is simple, we get $X\cong Y$, which is a contradiction; hence $f=0$. 
\end{proof} A $k$ linear abelian category $\mathcal{A}$, where $k$ is a perfect field, is finite if for all objects $X$, $Y$ in $\mathcal{A}$, $Hom_{\mathcal{A}}(X,~Y)$ is a finite dimensional vector space over $k$, all objects $A\in \mathcal{A}$ have finite length, every simple object in $\mathcal{A}$ has a projective cover and the category has finitely many isomorphism classes of simple objects.\\ For example, $Vec_f(k)$ is a finite category: for any two given finite dimensional vector spaces $V$ and $W$ with $dim(V)=m$ and $dim(W)=n$, $Hom_{Vec_f(k)}(V,~W)$ is isomorphic to the vector space $M_{m\times n}(k)$ of $m\times n$ matrices whose entries are elements of the field $k$, which is finite dimensional with dimension $mn$. The only simple object is $k$, and every object in that category is free; as a result, every object is projective and, for example, $k^2\rightarrow k$ is a surjection.\\ \subsection{Monoidal Category and Braiding} We use \cite{joro} as a reference for the following definitions and example. 
\begin{defn} $(\mathcal{A},~\otimes,~I,~a,~l,~r)$ is a monoidal category if for all objects $X$, $Y$, $Z$ and $W$ in $\mathcal{A}$, the associativity pentagon and the unit triangle commute.\\ Here $\mathcal{A}$ is a category, $\otimes:~\mathcal{A} \times \mathcal{A} \rightarrow \mathcal{A}$ is a functor, $I$ is a unit object in $\mathcal{A}$, $a$ is the associativity constraint which is a family of natural isomorphisms \begin{align} \begin{tikzpicture} \node (A) at (0, 0) {$a_{XYZ}:~(X\otimes Y) \otimes Z$}; \node (B) at (4, 0) {$X\otimes (Y\otimes Z)$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align} $l$ is a left unit constraint which is a family of natural isomorphisms \begin{align} \begin{tikzpicture} \node (A) at (0, 0) {$l_X:~I\otimes X$}; \node (B) at (3, 0) {$X$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align} and $r$ is a right unit constraint which is a family of natural isomorphisms \begin{align} \begin{tikzpicture} \node (A) at (0, 0) {$r_X:~X\otimes I$}; \node (B) at (3, 0) {$X.$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align} \end{defn} \begin{lem} If $(\mathcal{A},~\otimes,~I,~a,~l,~r)$ is a monoidal category, then $\mathcal{A}^{op}$ is a monoidal category. \end{lem} \begin{proof} We define the tensor product as $X\otimes^{op} Y=Y\otimes X$ and associativity constraint $a^{op}$ as a family of natural isomorphisms \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$a^{op}_{XYZ}:~(X\otimes^{op} Y)\otimes^{op} Z$}; \node (B) at (5.5, 0) {$X\otimes^{op}(Y\otimes^{op} Z)$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align*} in $\mathcal{A}^{op}$ for all objects $X$, $Y$ and $Z$ in $\mathcal{A}$. 
This is the same as the family of natural isomorphisms \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$Z\otimes (Y\otimes X)$}; \node (B) at (4, 0) {$(Z\otimes Y) \otimes X$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align*} in $\mathcal{A}^{op}$ which can be obtained by inverting the arrows \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$(Z\otimes Y) \otimes X$}; \node (B) at (4, 0) {$Z\otimes (Y\otimes X)$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align*} in $\mathcal{A}$ and as a result, we get $a^{op}_{XYZ}=a^{-1}_{ZYX}$ for all objects $X$, $Y$ and $Z$ in $\mathcal{A}$.\\ Here, $I^{op}=I$. $l^{op}$ is a left unit constraint which is a family of natural isomorphisms \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$l^{op}_X:~I\otimes^{op} X$}; \node (B) at (3, 0) {$X$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align*} in $\mathcal{A}^{op}$ which is the same as \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$ l^{op}_X:~X\otimes I$}; \node (B) at (3, 0) {$X$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align*} for all objects $X$ in $\mathcal{A}$. So, we take $l^{op}_X=r_X$ and $l^{op}=r$ in $\mathcal{A}$. Similarly, we can take $r^{op}=l$. \end{proof} Also, we define a category $\mathcal{A}^{rev}$ for a given monoidal category $\mathcal{A}$ in which the objects and the arrows are the same as in $\mathcal{A}$ and the tensor product is defined as $X\otimes^{rev} Y=Y\otimes X$.\\ A strictly full subcategory $\mathcal{B}$ of a monoidal category $\mathcal{A}$ is monoidal if it contains the unit object $I$ in $\mathcal{A}$ and $A\otimes B$ for all objects $A$ and $B$ in $\mathcal{B}$.\\ $(\mathcal{A},~\otimes)$ is an additive monoidal category if $\mathcal{A}$ is an additive category and $\otimes$ is a biadditive functor.
It is abelian if $\mathcal{A}$ is an abelian category.\\ A monoidal category is strict if all $a$, $l$ and $r$ are identity arrows. For example, the category of all $k$ vector spaces $Vec(k)$ is not a strict monoidal category for a given field $k$: $U\otimes (V\otimes W)\neq (U\otimes V)\otimes W$ in general for vector spaces $U,~V,~W$ in $Vec(k)$, even in $Vec_f(k)$, but we can obtain a family of natural isomorphisms between those products as an associativity constraint. \begin{thm} $[MacLane]$ Every monoidal category is equivalent to a strict monoidal category. \end{thm} \begin{defn} An object $A$ is invertible in a monoidal category $\mathcal{A}$ if there exists an object $B$ in $\mathcal{A}$ such that $A\otimes B\cong B\otimes A\cong I$ where $I$ is the unit object. \end{defn} \begin{remark} Invertible objects in a monoidal category $\mathcal{A}$ form a monoidal subcategory of that category. If every simple object in $\mathcal{A}$ is invertible, then we say that the category is pointed. \end{remark} \begin{defn} A braiding $c$ for a monoidal category $\mathcal{A}$ is a natural family of isomorphisms $\xymatrix{c_{XY}:~X\otimes Y \ar[rr]^{\cong} & & Y\otimes X}$ for all objects $X$, $Y$ in $\mathcal{A}$ such that the two hexagon diagrams commute in $\mathcal{A}$. \end{defn} \begin{note} If $\mathcal{A}$ is a braided monoidal category, then $\mathcal{A}^{rev}$ is a braided monoidal category with the braiding $c^{rev}$ that is a family of natural isomorphisms $c^{rev}_{XY}=c_{YX}$ for all objects $X$ and $Y$ in $\mathcal{A}$. Similarly, $\mathcal{A}^{op}$ is a braided monoidal category with the braiding $c^{op}$ that is a family of natural isomorphisms $c^{op}_{XY}=c^{-1}_{XY}$ for all objects $X$ and $Y$ in $\mathcal{A}$. $\mathcal{A}^{op}\simeq \mathcal{A}^{rev}$ in that situation.
\end{note} \begin{defn} A monoidal category $\mathcal{A}$ with a braiding $c$ is called symmetric if the composition \begin{align} \xymatrix{ X\otimes Y \ar[rr]^{c_{XY}} & & Y\otimes X \ar[rr]^{c_{YX}} & & X\otimes Y} \end{align} is $id_{X\otimes Y}$ for all objects $X$, $Y$ in $\mathcal{A}$. \end{defn} \begin{exmp} The category of all $k$ vector spaces $Vec(k)$ is a braided, symmetric monoidal category for a field $k$. \end{exmp} \begin{proof} The braiding is given by $c_{XY}(x\otimes y)=y\otimes x$, so $(c_{YX}\circ c_{XY})(x\otimes y)=c_{YX}(y\otimes x)=x\otimes y$ for all objects $X$ and $Y$ in $Vec(k)$ and for all elements $x\in X,~y\in Y$. As a result, the composition is the identity. \end{proof} \begin{exmp} Assume that $G$ is an abelian group, $k$ is a field and $(f,~h)$ is an abelian 3 cocycle on $G$ with coefficients in $k$. That is, $f:~G\times G\times G\rightarrow k$ is a normalized 3 cocycle such that \begin{align} f(x,~0,~y)=0, \end{align} \begin{align} f(x,~y,~z)+f(w,~x+y,~z)+f(w,~x,~y)=f(w,~x,~y+z)+f(w+x,~y,~z) \end{align} and $h:~G\times G\rightarrow k$ is a function such that \begin{align} f(y,~z,~x)+h(x,~y+z)+f(x,~y,~z)=h(x,~z)+f(y,~x,~z)+h(x,~y), \end{align} \begin{align} -f(z,~x,~y)+h(x+y,~z)-f(x,~y,~z)=h(x,~z)-f(x,~z,~y)+h(y,~z).
\end{align} Let $\mathcal{A}$ be a category such that the objects are families of $k$ modules $X=\{X_g~|~g\in G\}$ and the arrow between two families $X$, $Y$ is a family $\xymatrix{ \theta=\{X_{g_1}\ar[r]^{\theta_{g_1g_2}} & Y_{g_2}\}}$ where $\theta_{g_1g_2}$ is a $k$ module homomorphism for all $g_1$ and $g_2$ in $G$, \begin{align} (X\otimes Y)_g=\underset{g_1+g_2=g}{\Sigma}(X_{g_1}\otimes Y_{g_2}) \end{align} is the tensor product, \begin{align} a_{XYZ}:~(X\otimes Y)\otimes Z\rightarrow X\otimes (Y\otimes Z),\\ a_{XYZ}((x\otimes y)\otimes z)=f(g_1,~g_2,~g_3)x\otimes (y\otimes z) \end{align} is the associativity constraint and \begin{align} c_{XY}:~X\otimes Y\rightarrow Y\otimes X,\\ c_{XY}(x\otimes y)=h(g_1,~g_2)y\otimes x \end{align} is the braiding for $x\in X_{g_1}$, $y\in Y_{g_2}$, $z\in Z_{g_3}$. So, this category is a braided monoidal category. \end{exmp} \subsection{The Category of Monoidal Functors} The following material is found in \cite{joro}. \begin{defn} For two monoidal categories $\mathcal{A}$ and $\mathcal{B}$, assume that $\xymatrix{ \mathcal{F}:~\mathcal{A} \ar[r] & \mathcal{B}}$ is a functor, $\gamma$ is a family of natural isomorphisms \begin{align} \begin{tikzpicture} \node (A) at (0, 0) {$\gamma_{XY}:~\mathcal{F}(X) \otimes \mathcal{F}(Y)$}; \node (B) at (4, 0) {$\mathcal{F}(X\otimes Y)$}; \path[->] (A) edge node [above] {$\cong$} (B); \end{tikzpicture} \end{align} for all objects $X$, $Y$ in $\mathcal{A}$ and $\xymatrix{ \varphi:~I\ar[r]^{\cong} & \mathcal{F}(I)}$ is an isomorphism for the unit object $I$. Then, $(\mathcal{F},~\gamma,~\varphi)$ is a monoidal functor if it satisfies the standard compatibility conditions with the associativity and unit constraints. \end{defn} \begin{note} $(\mathcal{F},~\gamma,~\varphi)$ is strict if $\gamma$ and $\varphi$ are identities. \end{note} \begin{defn} A monoidal functor $\mathcal{F}:~\mathcal{A} \rightarrow \mathcal{B}$ between braided monoidal categories $\mathcal{A}$ and $\mathcal{B}$ is braided if the following diagram is commutative.
\begin{equation} \xymatrix{ \mathcal{F}(X) \otimes \mathcal{F}(Y) \ar[d]^c \ar[r]^{\gamma} & \mathcal{F}(X\otimes Y) \ar[d]^{\mathcal{F}(c)}\\ \mathcal{F}(Y) \otimes \mathcal{F}(X) \ar[r]^{\gamma} & \mathcal{F}(Y\otimes X) } \end{equation} \end{defn} \begin{defn} If $\mathcal{F},~\mathcal{G}:~\mathcal{A} \rightarrow \mathcal{B}$ are two monoidal functors, then a map $\theta:~\mathcal{F} \rightarrow \mathcal{G}$ is a natural transformation if the following two diagrams commute. \begin{equation} \xymatrix@C+1em { \mathcal{F}(X) \otimes \mathcal{F}(Y) \ar[d]^{\theta(X) \otimes \theta(Y)} \ar[r]^{\gamma} & \mathcal{F}(X\otimes Y) \ar[d]^{\theta(X\otimes Y)}\\ \mathcal{G}(X) \otimes \mathcal{G}(Y) \ar[r]^{\gamma} & \mathcal{G}(X\otimes Y) } \quad \xymatrix{ & I \ar[ld]_{\varphi} \ar[rd]^{\varphi}\\ \mathcal{F}(I) \ar[rr]^{\theta(I)} & & \mathcal{G}(I) } \end{equation} \end{defn} \begin{prop} The collection $Hom(\mathcal{A},~\mathcal{B})$ in which the objects are monoidal functors $\mathcal{F}:~\mathcal{A} \rightarrow \mathcal{B}$ and morphisms are natural transformations between monoidal functors for given monoidal categories $\mathcal{A}$ and $\mathcal{B}$ forms a category. \end{prop} \begin{lem} $Hom(\mathcal{A},~\mathcal{A})$ is a monoidal category in which the tensor product is the composition of functors for a given monoidal category $\mathcal{A}$. \end{lem} We denote the category of right exact monoidal functors by $Hom^{re}(\mathcal{A},~\mathcal{B})$, the category of left exact monoidal functors by $Hom^{le}(\mathcal{A},~\mathcal{B})$ and the category of exact monoidal functors by $Hom^{e}(\mathcal{A},~\mathcal{B})$. \begin{remark} A monoidal functor $(\mathcal{F},~\gamma,~\varphi):~(\mathcal{A},~\otimes_{\mathcal{A}}) \rightarrow (\mathcal{B},~\otimes_{\mathcal{B}})$ is a monoidal equivalence if $\mathcal{F}:~\mathcal{A}\rightarrow \mathcal{B}$ is an equivalence of categories. 
In that situation, there exists a monoidal functor $(\mathcal{G},~\gamma',~\varphi'):~\mathcal{B} \rightarrow \mathcal{A}$ and isomorphisms of monoidal functors $\mathcal{G}\circ \mathcal{F} \rightarrow id_{\mathcal{A}}$, $\mathcal{F}\circ \mathcal{G} \rightarrow id_{\mathcal{B}}$. \end{remark} \begin{prop} \label{32} \cite{joro} If $\mathcal{A}$ is a braided monoidal category, then we get a monoidal equivalence $\mathcal{A} \rightarrow \mathcal{A}^{rev}$. \end{prop} \begin{proof} We define a monoidal functor $\mathcal{F}:~\mathcal{A} \rightarrow \mathcal{A}^{rev}$ by sending an object $A$ in $\mathcal{A}$ to itself, $\gamma$ as a family of natural isomorphisms $\gamma_{XY}:~X\otimes^{rev} Y=Y\otimes X \rightarrow X\otimes Y$ for all objects $X$, $Y$ in $\mathcal{A}$ and also $\varphi=id_I$. We define $\gamma_{XY}=c^{rev}_{XY}$. Then, we need to show that the following diagram commutes in $\mathcal{A}^{rev}$. \begin{align} \label{354} \xymatrix{ (X\otimes^{rev} Y)\otimes^{rev} Z \ar[d]_{\gamma \otimes^{rev} id} \ar[r]^{a_{XYZ}^{rev}} & X\otimes^{rev} (Y\otimes^{rev} Z) \ar[d]^{id\otimes^{rev} \gamma} \\ (X\otimes Y)\otimes^{rev} Z\ar[d]_{\gamma} & X\otimes^{rev} (Y\otimes Z) \ar[d]^{\gamma} & &\\ (X\otimes Y)\otimes Z\ar[r]_{a_{XYZ}} & X\otimes (Y\otimes Z)} \end{align} This diagram is the same as the following diagram in $\mathcal{A}$. \begin{align} \label{400} \xymatrix{ Z\otimes (Y\otimes X) \ar[d]_{id_Z\otimes c_{YX}} \ar[r]^{a^{-1}_{ZYX}} & (Z\otimes Y) \otimes X\ar[d]^{c_{ZY}\otimes id_X} \\ Z\otimes (X\otimes Y) \ar[d]_{c_{Z(XY)}} & (Y\otimes Z) \otimes X \ar[d]^{c_{(YZ)X}} & & \\ (X\otimes Y)\otimes Z \ar[r]_{a_{XYZ}} & X\otimes (Y\otimes Z)} \end{align} The first and third squares commute by definition and the middle one commutes by the naturality of the braiding in the following diagram.
\begin{align} \label{353} \begin{tikzpicture} \node (A) at (0, 0) {$(Z\otimes Y)\otimes X$}; \node (B) at (4, 0) {$(Y\otimes Z)\otimes X$}; \node (C) at (8, 0) {$Y\otimes (Z\otimes X)$}; \node (D) at (12, 0) {$Y\otimes (X\otimes Z)$}; \node (E) at (0, -2) {$Z\otimes (Y\otimes X)$}; \node (F) at (12, -2) {$(Y\otimes X)\otimes Z$}; \node (G) at (0, -4) {$Z\otimes (X\otimes Y)$}; \node (H) at (12, -4) {$(X\otimes Y)\otimes Z$}; \node (K) at (0, -6) {$(Z\otimes X)\otimes Y$}; \node (L) at (4, -6) {$(X\otimes Z)\otimes Y$}; \node (M) at (8, -6) {$X\otimes (Z\otimes Y)$}; \node (N) at (12, -6) {$X\otimes (Y\otimes Z)$}; \path[->] (A) edge node [above] {$c_{ZY}\otimes id_X$} (B); \path[->] (B) edge node [above] {$a_{YZX}$} (C); \path[->] (C) edge node [above] {$id_Y\otimes c_{ZX}$} (D); \path[->] (E) edge node [above] {$c_{Z(YX)}$} (F); \path[->] (G) edge node [above] {$c_{Z(XY)}$} (H); \path[->] (K) edge node [below] {$c_{ZX}\otimes id_Y$} (L); \path[->] (L) edge node [below] {$a_{XZY}$} (M); \path[->] (M) edge node [below] {$id_X\otimes c_{ZY}$} (N); \path[->] (A) edge node [left] {$a_{ZYX}$} (E); \path[->] (E) edge node [left] {$id_Z\otimes c_{YX}$} (G); \path[->] (G) edge node [left] {$a^{-1}_{ZXY}$} (K); \path[->] (D) edge node [right] {$a^{-1}_{YXZ}$} (F); \path[->] (F) edge node [right] {$c_{YX}\otimes id_Z$} (H); \path[->] (H) edge node [right] {$a_{XYZ}$} (N); \end{tikzpicture} \end{align} As a result, that diagram commutes.\\ $a_{XYZ}\circ c_{Z(XY)}\circ (id_Z\otimes c_{YX})=a_{XYZ}\circ a^{-1}_{XYZ}\circ (id_X\otimes c_{ZY})\circ a_{XZY}\circ (c_{ZX}\otimes id_Y)\circ a^{-1}_{ZXY}\circ (id_Z\otimes c_{YX})=(id_X\otimes c_{ZY})\circ a_{XZY}\circ (c_{ZX}\otimes id_Y)\circ a^{-1}_{ZXY}\circ (id_Z\otimes c_{YX})$.\\ $c_{(YZ)X}\circ (c_{ZY}\otimes id_X)\circ a^{-1}_{ZYX}=a_{XYZ}\circ (c_{YX}\otimes id_Z)\circ a^{-1}_{YXZ}\circ (id_Y\otimes c_{ZX})\circ a_{YZX}\circ (c_{ZY}\otimes id_X)\circ a^{-1}_{ZYX}$.\\ These two equations are the same by the commutativity of Diagram
\ref{353}. As a result, Diagram \ref{354} commutes. The commutativity of the other diagrams is easy to show; moreover, $\mathcal{F}$ is a braided monoidal functor. The reader can check that the conditions for an equivalence are satisfied. \end{proof} As a corollary of this proposition, $\mathcal{A} \simeq \mathcal{A}^{op}$. \subsection{Rigid Monoidal Categories} \cite{baki} An object $Y$ in a monoidal category $\mathcal{A}$ is a right dual for a given object $X$ in $\mathcal{A}$ if there are morphisms $ev_{rX}:~Y\otimes X \rightarrow I$ and $coev_{rX}:~I \rightarrow X\otimes Y$ such that the following compositions are the identities. \begin{align} \begin{tikzpicture} \node (A) at (0, 0) {$Y=Y\otimes I$}; \node (B) at (5, 0) {$Y\otimes X \otimes Y$}; \node (C) at (10, 0) {$I\otimes Y=Y$}; \path[->] (A) edge node [above] {$id_Y\otimes coev_{rX}$} (B); \path[->] (B) edge node [above] {$ev_{rX}\otimes id_Y$} (C); \end{tikzpicture} \\ \label{97} \begin{tikzpicture} \node (A) at (0, 0) {$X=I\otimes X$}; \node (B) at (5, 0) {$X\otimes Y \otimes X$}; \node (C) at (10, 0) {$X\otimes I=X$}; \path[->] (A) edge node [above] {$coev_{rX}\otimes id_X$} (B); \path[->] (B) edge node [above] {$id_X\otimes ev_{rX}$} (C); \end{tikzpicture} \end{align} Similarly, an object $Z$ is a left dual object for the object $X$ in that category if there are morphisms $ev_{lX}:~X\otimes Z\rightarrow I$ and $coev_{lX}:~I\rightarrow Z\otimes X$ such that the following compositions are the identities.
\begin{align} \label{98} \begin{tikzpicture} \node (A) at (0, 0) {$Z=I\otimes Z$}; \node (B) at (5, 0) {$Z\otimes X\otimes Z$}; \node (C) at (10, 0) {$Z\otimes I=Z$}; \path[->] (A) edge node [above] {$coev_{lX}\otimes id_Z$} (B); \path[->] (B) edge node [above] {$id_Z\otimes ev_{lX}$} (C); \end{tikzpicture} \\ \label{99} \begin{tikzpicture} \node (A) at (0, 0) {$X=X\otimes I$}; \node (B) at (5, 0) {$X\otimes Z\otimes X$}; \node (C) at (10, 0) {$I\otimes X=X$}; \path[->] (A) edge node [above] {$id_X\otimes coev_{lX}$} (B); \path[->] (B) edge node [above] {$ev_{lX}\otimes id_X$} (C); \end{tikzpicture} \end{align} We denote the left dual with $^+X$ and the right dual with $X^+$. \begin{lem} \label{96} A left dual $^+X$ and a right dual $X^+$ of an object $X$ in a monoidal category $\mathcal{A}$ are unique up to a unique isomorphism. \end{lem} \begin{proof} See \cite{baki} for the proof. \end{proof} \begin{defn} A monoidal category is rigid if every object $X$ in that category has both a right and a left dual object. \end{defn} \begin{exmp} The category of finite dimensional vector spaces $Vec_f(k)$ over a field $k$ is rigid. The category $Vec(k)$ of all $k$ vector spaces, without the finiteness assumption, is not rigid. \end{exmp} \begin{proof} For a given finite dimensional $k$ vector space $V$, both the right and the left dual object for $V$ are given by the dual space $Hom_k(V,~k)$ with the evaluation map \begin{align*} ev_{rV}:~Hom_k(V,~k)\otimes V\rightarrow k,~f\otimes v\mapsto f(v) \end{align*} and the coevaluation map $coev_{rV}:~k\rightarrow V\otimes Hom_k(V,~k)$ sending $1$ to $\underset{i}{\Sigma}~e_i\otimes e^*_i$ for a basis $\{e_i\}$ of $V$ with dual basis $\{e^*_i\}$. We may see that the following compositions are the identities.
\begin{align*} \begin{tikzpicture} \node (A) at (0,0) {$V=k\otimes V$}; \node (B) at (6,0) {$V\otimes Hom_k(V,~k)\otimes V$}; \node (C) at (12,0) {$V\otimes k=V$}; \path[->] (A) edge [left] node [above] {$coev_{rV}\otimes id_V$} (B); \path[->] (B) edge [right] node [above] {$id_V\otimes ev_{rV}$} (C); \end{tikzpicture} \end{align*} \begin{align*} \begin{tikzpicture} \node (A) at (0,0) {$Hom_k(V,~k)\otimes k$}; \node (B) at (6,0) {$Hom_k(V,~k)\otimes V\otimes Hom_k(V,~k)$}; \node (C) at (12,0) {$k\otimes Hom_k(V,~k)$}; \node (D) at (0,-2) {$Hom_k(V,~k)$}; \node (E) at (6, -2) {$Hom_k(V,~k)\otimes V\otimes Hom_k(V,~k)$}; \node (F) at (12,-2) {$Hom_k(V,~k)$}; \draw[thick, double] (0, -0.2)--(0, -1.8) [xshift=5pt]; \draw[thick, double] (12, -0.2)--(12, -1.8) [xshift=5pt]; \path[->] (A) edge [left] node [above] {$id\otimes coev_{rV}$} (B); \path[->] (B) edge [right] node [above] {$ev_{rV}\otimes id$} (C); \path[->] (D) edge [left] node [above] {$id\otimes coev_{rV}$} (E); \path[->] (E) edge [right] node [above] {$ev_{rV}\otimes id$} (F); \end{tikzpicture} \end{align*} Similarly, we may show that the compositions \ref{98} and \ref{99} are the identities, which shows that $Hom_k(V,~k)$ is a left dual for the object $V$.\\ The second part follows since an infinite dimensional vector space does not admit a coevaluation map. \end{proof} \begin{remark} \label{82} \cite{baki} The tensor product functor $\otimes:~\mathcal{A} \times \mathcal{A} \rightarrow \mathcal{A}$ is exact in each variable in an abelian, rigid monoidal category $\mathcal{A}$. \end{remark} \begin{lem} If a monoidal category $\mathcal{A}$ is rigid, then $\mathcal{A}^{op}$ is rigid, too.
\end{lem} \begin{proof} If $X$ is an object in a rigid monoidal category $\mathcal{A}$, then $X$ has both a left dual $^+X$ and a right dual $X^+$, unique up to a unique isomorphism by Lemma \ref{96}, such that the following compositions are the identities by definition, where $ev_{rX}:~X^+\otimes X \rightarrow I$, $coev_{rX}:~I \rightarrow X\otimes X^+$, $ev_{lX}:~X\otimes {^+X}\rightarrow I$ and $coev_{lX}:~I \rightarrow {^+X}\otimes X$ are morphisms in $\mathcal{A}$. \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$X^+=X^+\otimes I$}; \node (B) at (6, 0) {$X^+\otimes X\otimes X^+$}; \node (C) at (12, 0) {$I\otimes X^+=X^+$}; \path[->] (A) edge node [above] {$id_{X^+}\otimes coev_{rX}$} (B); \path[->] (B) edge node [above] {$ev_{rX}\otimes id_{X^+}$} (C); \end{tikzpicture} \\ \begin{tikzpicture} \node(A) at (0, 0) {$X=I\otimes X$}; \node (B) at (6, 0) {$X\otimes X^+\otimes X$}; \node (C) at (12, 0) {$X\otimes I=X$}; \path[->] (A) edge node [above] {$coev_{rX}\otimes id_X$} (B); \path[->] (B) edge node [above]{$id_X\otimes ev_{rX}$} (C); \end{tikzpicture} \\ \begin{tikzpicture} \node (A) at (0, 0) {$^+X=I\otimes {^+X}$}; \node (B) at (6, 0) {$^+X\otimes X \otimes {^+X}$}; \node (C) at (12, 0) {$ ^+X\otimes I={^+X}$}; \path[->] (A) edge node [above] {$coev_{lX}\otimes id_{^+X}$} (B); \path[->] (B) edge node [above] {$id_{^+X}\otimes ev_{lX}$} (C); \end{tikzpicture} \\ \begin{tikzpicture} \node (A) at (0, 0) {$X=X\otimes I$}; \node (B) at (6, 0) {$X\otimes {^+X} \otimes X$}; \node (C) at (12, 0) {$I\otimes X=X$}; \path[->] (A) edge node [above] {$id_X\otimes coev_{lX}$} (B); \path[->] (B) edge node [above] {$ev_{lX}\otimes id_X$} (C); \end{tikzpicture} \end{align*} We know that $(ev_{rX}\otimes id_{X^+}) \circ (id_{X^+}\otimes coev_{rX})=id_{X^+}$ in $\mathcal{A}$ by the first composition, so $((ev_{rX}\otimes id_{X^+}) \circ (id_{X^+}\otimes coev_{rX}))^{op}=(id_{X^+})^{op}=id_{X^+}$.
This implies that \begin{align*} (id_{X^+}\otimes coev_{rX})^{op}\circ (ev_{rX}\otimes id_{X^+})^{op}=(coev_{rX}^{op}\otimes^{op} id_{X^+})\circ (id_{X^+}\otimes^{op} ev_{rX}^{op})=id_{X^+}. \end{align*} As a result, the following composition is the identity of $X^+$ in $\mathcal{A}^{op}$ \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$X^+=I\otimes X^+$}; \node (B) at (6, 0) {$X^+\otimes X \otimes X^+$}; \node (C) at (12, 0) {$X^+\otimes I=X^+$}; \path[->] (A) edge node [above] {$id_{X^+}\otimes^{op} ev_{rX}^{op}$} (B); \path[->] (B) edge node [above] {$coev_{rX}^{op}\otimes^{op} id_{X^+}$} (C); \end{tikzpicture} \end{align*} which is the same as the following one \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$X^+=X^+\otimes^{op} I $}; \node (B) at (6, 0) {$ X^+\otimes^{op} X \otimes^{op} X^+$}; \node (C) at (12, 0) {$I\otimes^{op} X^+=X^+.$}; \path[->] (A) edge node [above] {$id_{X^+}\otimes^{op} ev_{rX}^{op}$} (B); \path[->] (B) edge node [above] {$coev_{rX}^{op}\otimes^{op} id_{X^+}$} (C); \end{tikzpicture} \end{align*} Also, we get the identity of $X$ by the composition \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$X=X\otimes I$}; \node (B) at (6, 0) {$X\otimes X^+\otimes X$}; \node (C) at (12, 0) {$ I\otimes X=X$}; \path[->] (A) edge node [above] {$(id_X\otimes ev_{rX})^{op}$} (B); \path[->] (B) edge node [above] {$(coev_{rX}\otimes id_X)^{op}$} (C); \end{tikzpicture} \end{align*} which is the same as the following one \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$X=I\otimes^{op} X$}; \node (B) at (6, 0) {$X\otimes^{op} X^+\otimes^{op} X$}; \node (C) at (12, 0) {$X\otimes^{op} I=X.$}; \path[->] (A) edge node [above] {$ev_{rX}^{op}\otimes^{op} id_X$} (B); \path[->] (B) edge node [above] {$id_X\otimes^{op} coev_{rX}^{op}$} (C); \end{tikzpicture} \end{align*} These two identities show that $X^+$ is a right dual for $X$ in $\mathcal{A}^{op}$. By using the same technique, we may show that $^+X$ is a left dual for $X$ in $\mathcal{A}^{op}$.
These duals are unique up to a unique isomorphism in $\mathcal{A}$ by Lemma \ref{96}, hence also in $\mathcal{A}^{op}$. This shows that $\mathcal{A}^{op}$ is a rigid category. \end{proof} \begin{lem} If $I$ is a unit object in a rigid monoidal category $\mathcal{A}$, then $I^+=I$. \end{lem} \begin{proof} Taking the evaluation $ev_{rI}=l_I:~I\otimes I\rightarrow I$ and the coevaluation $coev_{rI}=r_I^{-1}:~I\rightarrow I\otimes I$, the compositions in \ref{97} are the identities, so $I$ itself is a right dual of $I$. Hence, $I^+=I$ by uniqueness of a right dual. \end{proof} \begin{lem} \label{71} \cite{baki} If a monoidal category $\mathcal{A}$ is rigid, then for all objects $X$ and $Y$ in $\mathcal{A}$, $(X\otimes_{\mathcal{A}} Y)^+=Y^+\otimes_{\mathcal{A}} X^+$. \end{lem} \begin{exmp} Assume that $\mathcal{A}$ and $\mathcal{B}$ are two monoidal categories and $(\mathcal{F},~\gamma,~\varphi)$ is a monoidal functor between those categories. If $X$ is an object in $\mathcal{A}$ with a right dual $X^+$, then $\mathcal{F}(X^+)$ is a right dual of $\mathcal{F}(X)$. \end{exmp} \begin{proof} We define the evaluation map as $ev_{r\mathcal{F}(X)}=\varphi^{-1}\circ \mathcal{F}(ev_{rX}) \circ \gamma$, shown with the following diagram \begin{align*} ev_{r\mathcal{F}(X)}:~\mathcal{F}(X^+)\otimes \mathcal{F}(X)\rightarrow \mathcal{F}(X^+\otimes X) \rightarrow \mathcal{F}(I)\rightarrow I \end{align*} and the coevaluation map as $coev_{r\mathcal{F}(X)}=\gamma^{-1} \circ \mathcal{F}(coev_{rX})\circ \varphi$, shown with the following diagram \begin{align*} coev_{r\mathcal{F}(X)}:~I\rightarrow \mathcal{F}(I) \rightarrow \mathcal{F}(X\otimes X^+) \rightarrow \mathcal{F}(X) \otimes \mathcal{F}(X^+), \end{align*} by using $\gamma$ and $\varphi$. It is obvious that the following compositions are the identities since $\mathcal{F}$ is a monoidal functor and $X^+$ is a right dual.
\begin{align*} \xymatrix{ \mathcal{F}(X^+)=\mathcal{F}(X^+) \otimes \mathcal{F}(I) \ar[r] & \mathcal{F}(X^+) \otimes \mathcal{F}(X) \otimes \mathcal{F}(X^+) \ar[r] & \mathcal{F}(I) \otimes \mathcal{F}(X^+)=\mathcal{F}(X^+)}\\ \xymatrix{ \mathcal{F}(X)=\mathcal{F}(I) \otimes \mathcal{F}(X) \ar[r] & \mathcal{F}(X) \otimes \mathcal{F}(X^+) \otimes \mathcal{F}(X) \ar[r] & \mathcal{F}(X) \otimes \mathcal{F}(I)=\mathcal{F}(X)} \end{align*} As a result, $\mathcal{F}(X^+)$ is a right dual for $\mathcal{F}(X)$. \end{proof} \begin{defn} A monoidal subcategory of a monoidal category $\mathcal{A}$ is a monoidal category under the induced monoidal structure of $\mathcal{A}$ and it is a rigid monoidal subcategory of a rigid monoidal category $\mathcal{A}$ if it contains $X^+$ and $^+X$ whenever it contains an object $X$. \end{defn} \begin{prop} \label{66} If $\mathcal{A}$ is a rigid monoidal category, then an object $X$ in $\mathcal{A}$ is invertible if and only if $ev_{rX}:~X^+\otimes X\rightarrow I$ and $coev_{rX}:~I\rightarrow X\otimes X^+$ are isomorphisms. In that situation, $^+X\cong X^+$. If $Y$ is another invertible object, then $X\otimes Y$ is invertible. \end{prop} \begin{proof} If the above maps are isomorphisms, then we get $X^+\otimes X\cong I\cong X\otimes X^+$, so $X^+$ is the required object in the definition of an invertible object. Thus, $X$ is invertible. Similarly, we see that $X^+$ is invertible.\\ Conversely, if $X$ is invertible, then there exists an object $Z$ such that $X\otimes Z\cong Z\otimes X\cong I$, so we can use $Z$ as a right dual, hence $Z\cong X^+$ by uniqueness of a right dual. Then the above maps are isomorphisms. With the same idea, $Z$ also serves as a left dual for $X$, so $Z\cong {^+X}$ by uniqueness of a left dual. As a result, $X^+\cong {^+X}$.\\ Now, assume that $X$ and $Y$ are two invertible objects in the category. Then, $X^+\otimes X\cong I\cong X\otimes X^+$ and $Y^+\otimes Y\cong I\cong Y\otimes Y^+$.
So, we get $Y^+\otimes X^+\otimes X\otimes Y\cong I\cong X\otimes Y\otimes Y^+\otimes X^+$. This shows that $Y^+\otimes X^+$ is the inverse of $X\otimes Y$. \end{proof} \begin{prop} Assume that $(\mathcal{F},~\gamma,~\varphi)$ and $(\mathcal{G},~\gamma',~\varphi')$ are two monoidal functors from $\mathcal{A} \rightarrow \mathcal{B}$. If $\mathcal{A}$ and $\mathcal{B}$ are rigid monoidal categories, then every morphism of monoidal functors from $\mathcal{F}$ to $\mathcal{G}$ is an isomorphism. \end{prop} \subsection{ Drinfeld Center of A Monoidal Category} Assume that $\mathcal{A}$ is a monoidal category. We denote the Drinfeld center of $\mathcal{A}$ by $\mathcal{Z}(\mathcal{A})$.\\ Objects in $\mathcal{Z}(\mathcal{A})$ are $(Z,~\gamma_Z)$ where $Z$ is an object in $\mathcal{A}$ and $\gamma_Z$ is a family of natural isomorphisms $\xymatrix{ \gamma_{ZX}:~Z\otimes X \ar[rr]^{\sim} & & X\otimes Z}$ for all objects $X$ in $\mathcal{A}$ such that the following diagrams commute. \begin{align} \begin{tikzpicture} \node (A) at (1, 0) {$Z\otimes (X\otimes Y)$}; \node (B) at (-2, -1.5) {$(Z\otimes X)\otimes Y$}; \node (C) at (1, -3) {$(X\otimes Z)\otimes Y$}; \node (D) at (5, -3) {$X\otimes (Z\otimes Y)$}; \node (E) at (8, -1.5) {$X\otimes(Y\otimes Z)$}; \node (F) at (5, 0) {$(X\otimes Y)\otimes Z$}; \draw [->, thick] (4, -1.3) arc (360:20:20pt); \path[->] (A) edge node [above] {$\gamma_{Z(X\otimes Y)}$} (F); \path[->] (B) edge node [left, midway] {$a$} (A); \path[->] (B) edge node [left, midway] {$\gamma_{ZX} \otimes id_Y$} (C); \path[->] (C) edge node [below] {$a$} (D); \path[->] (D) edge node [right, midway] {$id_X\otimes \gamma_{ZY}$} (E); \path[->] (F) edge node [right, midway] {$a$} (E); \end{tikzpicture} \end{align} \begin{align} \begin{tikzpicture} \node (A) at (0, 0) {$Z\otimes I$}; \node (B) at (4, 0) {$I\otimes Z$}; \node (C) at (2,-2) {$Z$}; \path[->] (A) edge node [above] {$\gamma_{ZI}$} (B); \path[->] (A) edge node [left] {$r$} (C); \path[->] (B) edge node 
[right] {$l$} (C); \end{tikzpicture} \end{align} A morphism $f:~(Z,~\gamma_Z) \rightarrow (W,~\gamma_W)$ in $\mathcal{Z}(\mathcal{A})$ is an arrow $f:~Z \rightarrow W$ such that the following diagram commutes. \begin{equation} \xymatrix{ Z\otimes X\ar[d]_{f\otimes id_X} \ar[rr]^{\gamma_{ZX}} & & X\otimes Z \ar[d]^{id_X\otimes f}\\ W\otimes X \ar[rr]_{\gamma_{WX}} & & X\otimes W} \end{equation} \begin{lem} $\mathcal{Z}(\mathcal{A})$ is a braided monoidal category. \end{lem} \begin{proof} $(Z,~\gamma_Z)\otimes (W,~\gamma_W)=(Z\otimes W,~(\gamma_Z \otimes id_W)\circ (id_Z\otimes \gamma_W))$ is a tensor product for all objects $Z$ and $W$ in $\mathcal{A}$, and the braiding is given by $c_{(Z,~\gamma_Z)(W,~\gamma_W)}=\gamma_{ZW}:~(Z,~\gamma_Z)\otimes (W,~\gamma_W) \rightarrow (W,~\gamma_W)\otimes (Z,~\gamma_Z)$, which is the same as a morphism \begin{align*} c:~(Z\otimes W,~(\gamma_Z \otimes id_W) \circ (id_Z\otimes \gamma_W)) \rightarrow (W\otimes Z,~(\gamma_W \otimes id_Z)\circ (id_W\otimes \gamma_Z)). \end{align*} \end{proof} \subsection{Module Categories} \begin{defn} \cite{os} A category $\mathcal{M}$ is a left module category on a finite monoidal category $\mathcal{A}$ if there exists an exact bifunctor \begin{align} \label{74} \otimes_{l\mathcal{M}}:~\mathcal{A} \times \mathcal{M} \rightarrow \mathcal{M},~(X,~M)\mapsto X\otimes_{l\mathcal{M}} M \end{align} for all objects $X$ in $\mathcal{A}$, $M$ in $\mathcal{M}$, associativity constraint $a_{l\mathcal{M}}$ consisting of associativity isomorphisms $a_{XYM}: (X\otimes_{\mathcal{A}} Y)\otimes_{l\mathcal{M}} M\rightarrow X\otimes_{l\mathcal{M}}(Y\otimes_{l\mathcal{M}} M)$ and left unit constraint $l_{\mathcal{M}}$ which is a family of left unit isomorphisms $l_M:~I\otimes_{l\mathcal{M}} M\rightarrow M$ such that for any $X$, $Y$, $Z$ in $\mathcal{A}$, $M$ in $\mathcal{M}$, the following diagrams commute.
\begin{align} \begin{tikzpicture} \node (A) at (0, 0) {$((X\otimes_{\mathcal{A}} Y)\otimes_{\mathcal{A}} Z)\otimes_{l\mathcal{M}} M$}; \node (B) at (8, 0) {$(X\otimes_{\mathcal{A}} (Y\otimes_{\mathcal{A}} Z))\otimes_{l\mathcal{M}} M$}; \node (C) at (-1, -2) {$(X\otimes_{\mathcal{A}} Y)\otimes_{l\mathcal{M}} (Z\otimes_{l\mathcal{M}} M) $}; \node (D) at (9, -2) {$X\otimes_{l\mathcal{M}} ((Y\otimes_{\mathcal{A}} Z)\otimes_{l\mathcal{M}} M)$}; \node (E) at (4, -4) {$X\otimes_{l\mathcal{M}} (Y\otimes_{l\mathcal{M}} (Z\otimes_{l\mathcal{M}} M))$}; \draw [->, thick] (4.8, -1.5) arc (360:30:20pt); \path[->] (A) edge node [above] {$a_{XYZ}\otimes_{l\mathcal{M}} id_M$} (B); \path[->] (A) edge node [left, midway] {$a_{(XY)ZM}$} (C); \path[->] (B) edge node [right, midway] {$a_{X(YZ)M}$} (D); \path[->] (C) edge node [left, midway] {$a_{XY(ZM)}$} (E); \path[->] (D) edge node [right, midway] {$id_X\otimes_{l\mathcal{M}} a_{YZM}$} (E); \end{tikzpicture} \end{align} \begin{align} \begin{tikzpicture} \node (A) at (0, 0) {$(X\otimes_{\mathcal{A}} I)\otimes_{l\mathcal{M}} M$}; \node (B) at (6, 0) {$ X\otimes_{l\mathcal{M}} (I\otimes_{l\mathcal{M}} M)$}; \node (C) at (3, -2) {$X\otimes_{l\mathcal{M}} M$}; \path[->] (A) edge node [above] {$a_{XIM}$} (B); \path[->] (A) edge node [left] {$r_X\otimes_{l\mathcal{M}} id_M$} (C); \path[->] (B) edge node [right] {$id_X\otimes_{l\mathcal{M}} l_M$} (C); \end{tikzpicture} \end{align} \end{defn} \begin{defn} A category $\mathcal{M}$ is right module category on a finite monoidal category $\mathcal{A}$ if there exists an exact bifunctor \begin{align*} \otimes_{r\mathcal{M}}:~\mathcal{M} \times \mathcal{A} \rightarrow \mathcal{M},~(M,~X)\mapsto M\otimes_{r\mathcal{M}} X \end{align*} for all objects $X\in \mathcal{A}$, $M\in \mathcal{M}$, associativity constraint $a_{r\mathcal{M}}$ consisting of associativity isomorphisms $a_{MXY}:~M\otimes_{r\mathcal{M}} (X\otimes_{\mathcal{A}} Y) \rightarrow (M\otimes_{r\mathcal{M}} X)\otimes_{r\mathcal{M}} Y$ and 
right unit constraint $r_{\mathcal{M}}$ which is a family of right unit isomorphisms $r_M:~M\otimes_{r\mathcal{M}} I\rightarrow M$ such that for any $X$, $Y$, $Z$ in $\mathcal{A}$, $M$ in $\mathcal{M}$, the following diagrams commute. \begin{align} \label{69} \begin{tikzpicture} \node (A) at (0, 0) {$M\otimes_{r\mathcal{M}} ((X\otimes_{\mathcal{A}} Y)\otimes_{\mathcal{A}} Z)$}; \node (B) at (8, 0) {$M\otimes_{r\mathcal{M}} (X\otimes_{\mathcal{A}} (Y\otimes_{\mathcal{A}} Z))$}; \node (C) at (-1, -2) {$(M\otimes_{r\mathcal{M}} X) \otimes_{r\mathcal{M}}(Y\otimes_{\mathcal{A}} Z)$}; \node (D) at (9, -2) {$(M\otimes_{r\mathcal{M}} (X\otimes_{\mathcal{A}} Y))\otimes_{r\mathcal{M}} Z$}; \node (E) at (4, -4) {$((M\otimes_{r\mathcal{M}} X) \otimes_{r\mathcal{M}} Y)\otimes_{r\mathcal{M}} Z$}; \draw [->, thick] (4.8, -1.5) arc (360:30:20pt); \path[->] (A) edge node [above] {$_{id_M\otimes_{r\mathcal{M}} a_{XYZ}}$} (B); \path[->] (A) edge node [left, midway] {$a_{M(XY)Z}$} (C); \path[->] (B) edge node [right, midway] {$a_{MX(YZ)}$} (D); \path[->] (C) edge node [left, midway] {$a_{(MX)YZ}$} (E); \path[->] (D) edge node [right, midway] {$a_{MXY}\otimes_{r\mathcal{M}} id_Z$} (E); \end{tikzpicture} \end{align} \begin{align} \label{70} \begin{tikzpicture} \node (A) at (0, 0) {$M\otimes_{r\mathcal{M}} (I\otimes_{\mathcal{A}} X)$}; \node (B) at (6, 0) {$(M\otimes_{r\mathcal{M}} I)\otimes_{r\mathcal{M}} X$}; \node (C) at (3, -2) {$M\otimes_{r\mathcal{M}} X$}; \path[->] (A) edge node [above] {$a_{MIX}$} (B); \path[->] (A) edge node [left] {$id_M\otimes_{r\mathcal{M}} l_X$} (C); \path[->] (B) edge node [right] {$r_M\otimes_{r\mathcal{M}} id_X$} (C); \end{tikzpicture} \end{align} \end{defn} \begin{prop} \label{1} For a left $\mathcal{A}$ module category $\mathcal{M}$ where $\mathcal{A}$ is a finite, rigid monoidal category, $\mathcal{M}^{op}$ is a right $\mathcal{A}$ module category obtained from $\mathcal{M}$ by reversing the arrows. 
\end{prop} \begin{proof} We define the action as \begin{align*} \otimes_{r\mathcal{M}^{op}}:~\mathcal{M}^{op} \times \mathcal{A} \rightarrow \mathcal{M}^{op},~(M,~X)\mapsto M\otimes_{r\mathcal{M}^{op}} X \end{align*} for all objects $M\in \mathcal{M}^{op}$ and all objects $X\in \mathcal{A}$, where $M\otimes_{r\mathcal{M}^{op}} X=X^+\otimes_{l\mathcal{M}} M$. $\otimes_{r\mathcal{M}^{op}}$ is an exact bifunctor since $\otimes_{l\mathcal{M}}$ is an exact bifunctor.\\ Can we find an associativity constraint $a_{r\mathcal{M}^{op}}$ consisting of associativity isomorphisms $a_{MXY}:~M\otimes_{r\mathcal{M}^{op}} (X\otimes_{\mathcal{A}} Y) \rightarrow (M\otimes_{r\mathcal{M}^{op}} X)\otimes_{r\mathcal{M}^{op}} Y$ for all objects $X$, $Y$ in $\mathcal{A}$, $M$ in $\mathcal{M}$ and a right unit constraint $r_{\mathcal{M}^{op}}$ which is a family of right unit isomorphisms $r_M:~M\otimes_{r\mathcal{M}^{op}} I\rightarrow M$ such that for any $X$, $Y$, $Z$ in $\mathcal{A}$, $M$ in $\mathcal{M}$, Diagrams \ref{69} and \ref{70} commute? By Lemma \ref{71}, \begin{align*} M\otimes_{r\mathcal{M}^{op}}(X\otimes_{\mathcal{A}} Y)=(X\otimes_{\mathcal{A}} Y)^+\otimes_{l\mathcal{M}} M=(Y^+\otimes_{\mathcal{A}} X^+)\otimes_{l\mathcal{M}} M, \end{align*} \begin{align*} (M\otimes_{r\mathcal{M}^{op}} X)\otimes_{r\mathcal{M}^{op}} Y=Y^+\otimes_{l\mathcal{M}}(X^+\otimes_{l\mathcal{M}} M). \end{align*} We know that there is an isomorphism \begin{align*} a_{Y^+X^+M}:~(Y^+\otimes_{\mathcal{A}} X^+)\otimes_{l\mathcal{M}} M\rightarrow Y^+\otimes_{l\mathcal{M}}(X^+\otimes_{l\mathcal{M}} M) \end{align*} since $\mathcal{M}$ is a left $\mathcal{A}$ module category, so we can take $a_{MXY}=a_{Y^+X^+M}$.\\ $M\otimes_{r\mathcal{M}^{op}} I=I^+\otimes_{l\mathcal{M}} M$ and there is an isomorphism $l_M:~I^+\otimes_{l\mathcal{M}} M\rightarrow M$, so we get the family of right unit constraints $r_M=l_M$ since $I^+=I$.
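For instance, under these identifications the triangle in Diagram \ref{70} for $\mathcal{M}^{op}$ becomes the triangle
\begin{align*}
(X^+\otimes_{\mathcal{A}} I)\otimes_{l\mathcal{M}} M\rightarrow X^+\otimes_{l\mathcal{M}}(I\otimes_{l\mathcal{M}} M)
\end{align*}
given by $a_{X^+IM}$, with legs $r_{X^+}\otimes_{l\mathcal{M}} id_M$ and $id_{X^+}\otimes_{l\mathcal{M}} l_M$, which is exactly the unit triangle of the left $\mathcal{A}$ module category $\mathcal{M}$; similarly, the pentagon in Diagram \ref{69} reduces to the pentagon of $\mathcal{M}$ applied to the objects $Z^+$, $Y^+$, $X^+$ and $M$.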
The reader can check that Diagrams \ref{69} and \ref{70} commute. \end{proof} Similarly, for a right $\mathcal{A}$ module category $\mathcal{M}$, $\mathcal{M}^{op}$ is the category obtained from $\mathcal{M}$ by reversing the arrows, which is a left $\mathcal{A}$ module category with $X\otimes M=M\otimes {^+X}$ for all objects $M\in \mathcal{M}$, $X\in \mathcal{A}$. \begin{lem} Assume that $\mathcal{A}$ is a fusion category and $\mathcal{M}$ is a module category over $\mathcal{A}$. $(\mathcal{M}^{op})^{op}\simeq \mathcal{M}$ canonically as an $\mathcal{A}$ module category. \end{lem} \begin{lem} \label{43} Assume that $\mathcal{A}$ is a finite monoidal category and $A$ is an algebra in $\mathcal{A}$. Then, the category of right $A$ modules $\mathcal{A}A$ is a left $\mathcal{A}$ module category and the category of left $A$ modules $A\mathcal{A}$ is a right $\mathcal{A}$ module category. \end{lem} \begin{note} A tensor category $\mathcal{A}$ is a $Z(\mathcal{A})$ module category with the action \begin{align*} Z(\mathcal{A})\times \mathcal{A} \rightarrow \mathcal{A},~((Z,~\gamma),~X)\mapsto Z\otimes X. \end{align*} \end{note} \subsection{Indecomposable, Exact and Semisimple Module Category} \begin{prop} Assume that $\mathcal{M}$ and $\mathcal{N}$ are module categories over a finite monoidal category $\mathcal{A}$. Then, the direct sum $\mathcal{P}=\mathcal{M} \oplus \mathcal{N}$ of $\mathcal{M}$ and $\mathcal{N}$ is a module category over $\mathcal{A}$ with $\otimes_{\mathcal{P}}=\otimes_{\mathcal{M}} \oplus \otimes_{\mathcal{N}}, \quad a_{\mathcal{P}}=a_{\mathcal{M}} \oplus a_{\mathcal{N}}, \quad l_{\mathcal{P}}=l_{\mathcal{M}} \oplus l_{\mathcal{N}}$. \end{prop} \begin{defn} A module category $\mathcal{P}$ over a finite monoidal category $\mathcal{A}$ is indecomposable if $\mathcal{M}=0$ or $\mathcal{N}=0$ whenever $\mathcal{P} \simeq \mathcal{M} \oplus \mathcal{N}$.
\end{defn} \begin{defn} A module category $\mathcal{M}$ over a monoidal category $\mathcal{A}$ is exact if for all projective objects $P$ in $\mathcal{A}$ and all objects $M$ in $\mathcal{M}$, $P\otimes M$ is a projective object in $\mathcal{M}$. \end{defn} \begin{lem} \label{72} Every finite monoidal category $\mathcal{A}$ is an exact module category over itself. \end{lem} \begin{exmp} Every object in an exact module category $\mathcal{M}$ over $Vec_f(k)$ is projective. \end{exmp} \begin{proof} $k$ is free, hence a projective object in $Vec_f(k)$. For all objects $M$ in $\mathcal{M}$, we have $M\cong k\otimes M$, which is projective by exactness. As a result, every object is projective in $\mathcal{M}$. \end{proof} \begin{lem} If $\mathcal{M}$ is a module category over a rigid monoidal category $\mathcal{A}$, then for all objects $A\in \mathcal{A}$ and projective objects $P\in \mathcal{M}$, $A\otimes P$ is a projective object in $\mathcal{M}$. \end{lem} \begin{proof} Assume that $\mathcal{M}$ is a module category over $\mathcal{A}$ given as above. Let $A$ be an object in $\mathcal{A}$ and $P$ be projective in $\mathcal{M}$. For all epimorphisms $f:~X\rightarrow Y$ and morphisms $g:~A\otimes P\rightarrow Y$ in $\mathcal{M}$, can we find a morphism $k:~A\otimes P\rightarrow X$ such that $g=f\circ k$? \begin{align*} Hom_{\mathcal{M}}(A\otimes P,~Y)\cong Hom_{\mathcal{M}}(P,~^+A\otimes Y),\\ Hom_{\mathcal{M}}(A\otimes P,~X)\cong Hom_{\mathcal{M}}(P,~^+A\otimes X). \end{align*} Since the functor $^+A\otimes -$ is exact, we find an epimorphism $f'=id_{^+A}\otimes f:~{^+A}\otimes X\rightarrow {^+A}\otimes Y$ and a morphism $g':~P\rightarrow {^+A}\otimes Y$ corresponding to $g$. Since $P$ is projective, we get a morphism $k':~P\rightarrow {^+A}\otimes X$ such that $g'=f'\circ k'$. After that, we get a morphism $k:~A\otimes P\rightarrow X$ corresponding to $k'$ such that $f\circ k=g$. As a result, $A\otimes P$ is a projective object in $\mathcal{M}$.
\end{proof} \begin{lem} \label{77} If $\mathcal{A}$ is a finite semisimple monoidal category, then the unit object $I$ is projective in that category. Moreover, every object in such a category is projective. \end{lem} \begin{proof} We want to show that $I$ is projective in $\mathcal{A}$. Assume that we are given an epimorphism $f:~A\rightarrow B$ and a map $g:~I\rightarrow B$. Can we find a map $h:~I\rightarrow A$ such that $f\circ h=g$?\\ Since $\mathcal{A}$ is semisimple, the epimorphism $f$ splits: there is a morphism $s:~B\rightarrow A$ with $f\circ s=id_B$. Taking $h=s\circ g$, we get $f\circ h=f\circ s\circ g=g$, so $I$ is projective. The same argument shows that every object of $\mathcal{A}$ is projective; alternatively, $A\cong I\otimes A$ for all objects $A$ in $\mathcal{A}$ by the left unit constraint, $\mathcal{A}$ is an exact module category over itself by Lemma \ref{72} and $I$ is projective, thus $I\otimes A$ is projective by exactness. \end{proof} \begin{cor} If a module category $\mathcal{M}$ over a finite monoidal category $\mathcal{A}$ is semisimple, then it is exact. \end{cor} \begin{proof} Assume that $\mathcal{M}$ is a semisimple module category over a finite monoidal category $\mathcal{A}$. Any object in a semisimple category is projective, so $\mathcal{M}$ is exact. \end{proof} \begin{cor} \cite{etos} A module category $\mathcal{M}$ over a fusion category $\mathcal{A}$ is exact if and only if it is semisimple. \end{cor} \begin{lem} Assume that $\mathcal{A}$ is a finite, rigid monoidal category with simple unit object $I$. Then, any exact module category $\mathcal{M}$ over $\mathcal{A}$ has enough projectives.
\end{lem} \begin{proof} Assume that $\mathcal{A}$ is given as above and $\mathcal{M}$ is an exact left module category over $\mathcal{A}$. Then, we find a projective object $P$ in $\mathcal{A}$ with an epimorphism $P\rightarrow I$ since $\mathcal{A}$ is a finite category. Hence, for all objects $M$ in $\mathcal{M}$, we get an epimorphism $P\otimes M\rightarrow I\otimes M\cong M$ in $\mathcal{M}$. $\mathcal{M}$ is exact, so $P\otimes M$ is projective by exactness. As a result, we see that $\mathcal{M}$ has enough projectives and there exists a projective cover for every simple object in $\mathcal{M}$. \end{proof} \subsection{The Category of Module Functors} \begin{defn} \cite{os} Assume that $\mathcal{M}$ and $\mathcal{N}$ are two left module categories over a finite monoidal category $\mathcal{A}$. A module functor between them is a pair $(\mathcal{F},~f)$ where $\mathcal{F}:~\mathcal{M} \rightarrow \mathcal{N}$ is a functor and $f$ is a family of natural isomorphisms \begin{align} \label{13} f_{XM}:~\mathcal{F}(X\otimes M)\rightarrow X\otimes \mathcal{F}(M) \end{align} for all objects $X$ in $\mathcal{A}$ and $M$ in $\mathcal{M}$ such that for any $X$, $Y$ in $\mathcal{A}$, $M$ in $\mathcal{M}$, the following diagrams are commutative. 
\begin{equation} \xymatrix{ \mathcal{F}((X\otimes Y)\otimes M) \ar[d]^{f_{(X\otimes Y)M}} \ar[rr]^{\mathcal{F}(a_{XYM})} & & \mathcal{F}(X\otimes (Y\otimes M)) \ar[rr]^{f_{X(Y\otimes M)}} & & X\otimes \mathcal{F}(Y\otimes M) \ar[d]^{id_X\otimes f_{YM}}\\ (X\otimes Y)\otimes \mathcal{F}(M) \ar[rrrr]_{a_{XY\mathcal{F}(M)}} & & & & X\otimes(Y\otimes \mathcal{F}(M)) } \end{equation} \begin{equation} \xymatrix{ \mathcal{F}(I\otimes M) \ar[rd]_{\mathcal{F}(l_{M})} \ar[rr]^{f_{IM}} & & I\otimes \mathcal{F}(M) \ar[ld]^{l_{\mathcal{F}(M)}} \\ & \mathcal{F}(M) } \end{equation} \end{defn} The collection of all module functors $(\mathcal{F},~f):~\mathcal{M} \rightarrow \mathcal{N}$ between two module categories $\mathcal{M}$ and $\mathcal{N}$ over a finite monoidal category $\mathcal{A}$ is denoted by $Hom_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$. \begin{lem} If $(\mathcal{F},~f):~\mathcal{M} \rightarrow \mathcal{N}$ and $(\mathcal{G},~g):~\mathcal{N} \rightarrow \mathcal{K}$ are two module functors, then $(\mathcal{G} \circ \mathcal{F},~e):~\mathcal{M} \rightarrow \mathcal{K}$ is a module functor where $e=g\circ \mathcal{G}(f)$. \end{lem} A morphism between $(\mathcal{F},~f)$ and $(\mathcal{G},~g)$ is a natural transformation $h:~\mathcal{F} \rightarrow \mathcal{G}$ such that for any $X$ in $\mathcal{A}$, $M$ in $\mathcal{M}$, the following diagram commutes. \begin{equation} \xymatrix{ \mathcal{F}(X\otimes M) \ar[d]_{f_{XM}} \ar[rr]^{h(X\otimes M)} & & \mathcal{G}(X\otimes M) \ar[d]^{g_{XM}}\\ X\otimes \mathcal{F}(M) \ar[rr]^{id_X\otimes h(M)} & & X\otimes \mathcal{G}(M) } \end{equation} \begin{lem} $Hom_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$ is a category of module functors $(\mathcal{F},~f):~\mathcal{M} \rightarrow \mathcal{N}$ for all module categories $\mathcal{M}$ and $\mathcal{N}$ over a given finite monoidal category $\mathcal{A}$.
\end{lem} Two module categories $\mathcal{M}$ and $\mathcal{N}$ over a finite monoidal category $\mathcal{A}$ are equivalent if there exist module functors $(\mathcal{F},~f):~\mathcal{M} \rightarrow \mathcal{N}$, $(\mathcal{G},~g):~\mathcal{N} \rightarrow \mathcal{M}$ and natural isomorphisms \begin{align} h:~id_{\mathcal{N}} \rightarrow (\mathcal{F} \circ \mathcal{G}),~\quad k:~id_{\mathcal{M}} \rightarrow (\mathcal{G} \circ \mathcal{F}). \end{align} We denote the full subcategory of $Hom_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$ consisting of right exact $\mathcal{A}$ module functors by $Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$. Similarly, we use $Hom^{le}_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$ to denote the full subcategory of left exact $\mathcal{A}$ module functors and $Hom^e_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$ to denote the full subcategory of exact $\mathcal{A}$ module functors. \begin{thm} \label{250} \cite{etos} Every additive module functor $\mathcal{F}:~\mathcal{M} \rightarrow \mathcal{N}$ between two module categories $\mathcal{M}$ and $\mathcal{N}$ over an FRBSU monoidal category $\mathcal{A}$ is exact if $\mathcal{M}$ is exact. \end{thm} \subsection{Bimodule Category and Some Properties} \cite{dani} Assume that $\mathcal{A}$ and $\mathcal{B}$ are two finite monoidal categories. A category $\mathcal{M}$ is an $(\mathcal{A}- \mathcal{B})$ bimodule category if it is a left $\mathcal{A}$ module category and right $\mathcal{B}$ module category such that there exists a middle associativity constraint $a$ consisting of a collection of isomorphisms \begin{align} \label{3} a_{XMY}:~X\otimes (M\otimes Y)\rightarrow (X\otimes M)\otimes Y \end{align} natural in $X\in \mathcal{A}$, $Y\in \mathcal{B}$, $M\in \mathcal{M}$ which satisfies the commutativity of two pentagons. \begin{lem} If $\mathcal{M}$ is an $(\mathcal{A}-\mathcal{B})$ bimodule category, then $\mathcal{M}^{op}$ is a $(\mathcal{B}-\mathcal{A})$ bimodule category. 
\end{lem} \begin{proof} Assume that $\mathcal{M}$ is an $(\mathcal{A}-\mathcal{B})$ bimodule category. In that situation $\mathcal{M}$ is a left $\mathcal{A}$ module category and a right $\mathcal{B}$ module category, so $\mathcal{M}^{op}$ is a left $\mathcal{B}$ module category and right $\mathcal{A}$ module category by Proposition \ref{1}.\\ We have an associativity constraint $a$ consisting of a family of isomorphisms \begin{align*} a_{XMY}:~X\otimes (M\otimes Y)\rightarrow (X\otimes M)\otimes Y \end{align*} natural in $X\in \mathcal{A}$, $Y\in \mathcal{B}$, $M\in \mathcal{M}$ as in \ref{3} which satisfies the commutativity of the required diagrams for all $X,~Y\in \mathcal{A}$, $Z,~W\in \mathcal{B}$ and $M\in \mathcal{M}$ to be an $(\mathcal{A}-\mathcal{B})$ bimodule. We need to define $a^{op}$ consisting of associativity constraints \begin{align} \label{4} a^{op}_{XMY}:~X\otimes^{op}(M\otimes^{op} Y)\rightarrow (X\otimes^{op} M)\otimes^{op} Y. \end{align} The isomorphism \ref{4} is obtained by reversing the morphism $Y\otimes (M\otimes X)\rightarrow (Y\otimes M)\otimes X$ in $\mathcal{M}$, which is the same as $a_{YMX}$. We can prove the compatibility conditions without difficulty. \end{proof} \begin{lem} Every finite, rigid monoidal category $\mathcal{A}$ is a bimodule category over itself. \end{lem} \begin{proof} Assume that $\mathcal{A}$ is a finite, rigid monoidal category. We can take $\mathcal{M}=\mathcal{A}$. We have a bifunctor $\mathcal{F}:~\mathcal{A} \times \mathcal{A} \rightarrow \mathcal{A}$ taking $(X,~Y)$ to $X\otimes Y$. $\mathcal{F}$ is exact in each variable by Remark \ref{82}.\\ We can use the associativity constraint $a$ and left unit constraint $l$ in the definition of a monoidal category. We can see that these satisfy the commutativity of the required diagrams to be a left $\mathcal{A}$ module category.
Similarly, it is a right $\mathcal{A}$ module category with the same associativity constraint and right unit constraint $r$ by the definition of a monoidal category. Also, we use the same associativity constraint as the middle associativity constraint. These satisfy the compatibility conditions. \end{proof} \begin{lem} If $\mathcal{A}$ and $\mathcal{B}$ are finite monoidal categories, then every exact $(\mathcal{A}-\mathcal{B})$ bimodule category $\mathcal{M}$ is finite. \end{lem} \begin{lem} \cite{dani} If $\mathcal{A}$ is a braided monoidal category, then any left $\mathcal{A}$ module category $\mathcal{M}$ is an $(\mathcal{A}-\mathcal{A})$ bimodule category. \end{lem} \begin{remark} $A\mathcal{A}=A\mathcal{A} I$ and $\mathcal{A}B=I\mathcal{A} B$. \end{remark} \begin{proof} Assume that $M$ is an object in $A\mathcal{A}$, so it is a left $A$ module. $M\otimes I\cong M$, so it is a right $I$ module at the same time. As a result, it is an object in $A\mathcal{A} I$. The same argument works for $I\mathcal{A} B$. \end{proof} \begin{lem} The category $A\mathcal{A} B$ consisting of $(A-B)$ bimodules is an $(\mathcal{A}-\mathcal{A})$ bimodule category. \end{lem} \begin{proof} Every $(A-B)$ bimodule $M$ is a left $A$ module and a right $B$ module in $\mathcal{A}$ that satisfies the compatibility conditions, so $M$ is an object in $A\mathcal{A}$ and an object in $\mathcal{A}B$. This means that $A\mathcal{A} B$ is a subcategory of $A\mathcal{A}$ and a subcategory of $\mathcal{A} B$. $A\mathcal{A}$ is a right $\mathcal{A}$ module category and $\mathcal{A} B$ is a left $\mathcal{A}$ module category. As a result, $A\mathcal{A} B$ is both a left $\mathcal{A}$ and right $\mathcal{A}$ module category.
We need to define an associativity constraint $a$ consisting of associativity isomorphisms as in \ref{3} for all objects $X$, $Y$ in $\mathcal{A}$, $M$ in $A\mathcal{A} B$ that satisfies the required conditions.\\ We have two actions $\mathcal{A} \times A\mathcal{A} B\rightarrow A\mathcal{A} B$ taking $(X,~M)$ to $X\otimes_{l(\mathcal{A} B)} M$ and $A\mathcal{A} B\times \mathcal{A} \rightarrow A\mathcal{A} B$ taking $(M,~Y)$ to $M\otimes_{r(A\mathcal{A})} Y$.\\ We know that $X\otimes_{l(\mathcal{A} B)} M$ is a right $B$ module, and one can show that it is a left $A$ module, which means that it is an $(A-B)$ bimodule. Similarly, $M\otimes_{r(A\mathcal{A})} Y$ is an $(A-B)$ bimodule.\\ $X\otimes_{l(\mathcal{A}B)}(M\otimes_{r(A\mathcal{A})} Y)\rightarrow (X\otimes_{l(\mathcal{A}B)} M)\otimes_{r(A\mathcal{A})} Y$ is an isomorphism since $M$ is an object in $\mathcal{A}$ and the above actions are exactly the same as the tensor product in $\mathcal{A}$; hence we can use the associativity constraint of $\mathcal{A}$ as a middle associativity constraint. It is clear that this gives the commutativity of the diagrams in the definition. \end{proof} The following proposition and its proof are found in \cite{dani}; we repeat the proof here. \begin{prop} \label{10} The functor $\mathfrak{F}:~A\mathcal{A} B\rightarrow Hom^{re}_{\mathcal{A}}(\mathcal{A}A,~\mathcal{A}B)$ taking $M$ to $-\otimes_A M$ is an equivalence of categories for all algebras $A$ and $B$ in a finite monoidal category $\mathcal{A}$.
\end{prop} \begin{proof} We must show that \begin{align*} f:~Hom_{A\mathcal{A} B}(M,~N)\rightarrow Hom_{Hom^{re}_{\mathcal{A}}(\mathcal{A}A,~\mathcal{A}B)}(-\otimes_A M,~-\otimes_A N) \end{align*} is an isomorphism and $\mathfrak{F}$ is essentially surjective for all $(A-B)$ bimodules $M$ and $N$.\\ We send each morphism $\xymatrix{M\ar[r]^a & N}$ in the category of $(A-B)$ bimodules to a natural transformation $\xymatrix{-\otimes_A M\ar[r]^{f(a)} & -\otimes_A N}$ in the category of module functors $Hom^{re}_{\mathcal{A}}(\mathcal{A}A,~\mathcal{A}B)$. Here, $-\otimes_A M,~-\otimes_A N:~\mathcal{A}A \rightarrow \mathcal{A}B$ are $\mathcal{A}$ module functors for all $(A-B)$ bimodules $M$ and $N$ in $\mathcal{A}$.\\ To show that they are module functors, we first show that $K\otimes N$ is a right $B$ module in the category $\mathcal{A}$ for all right $A$ modules $K$ in $\mathcal{A}$; this structure descends to the quotient $K\otimes_A N$. We define the action $a_{(K\otimes N)}:~(K\otimes N)\otimes B\rightarrow K\otimes N$ by sending the pair $((k\otimes n),~b)$ to $k\otimes (n\otimes b)$ for all elements $k\in K$, $n\in N$, $b\in B$. As a result, the following diagrams commute since $N$ is a right $B$ module. \begin{equation*} \xymatrix@C+1em{ (K\otimes N)\otimes B\otimes B \ar[d]_{a_{(K\otimes N)}\otimes id} \ar[rr]^{id_{(K\otimes N)}\otimes m} & & (K\otimes N)\otimes B \ar[d]^{a_{(K\otimes N)}}\\ (K\otimes N)\otimes B \ar[rr]^{a_{(K\otimes N)}} & & (K\otimes N)} \quad \xymatrix{ (K\otimes N)\otimes B \ar[rr]^{a_{(K\otimes N)}} & & K\otimes N\\ (K\otimes N)\otimes I \ar[u]^{id\otimes u} \ar[urr]_{r_{(K\otimes N)}}} \end{equation*} We want to show that $f$ is an injection.
Let $a,~b:~M\rightarrow N$ be two $(A-B)$ bimodule homomorphisms and $f(a)=f(b)$.\\ $f(a),~f(b):~-\otimes_A M\rightarrow -\otimes_A N$ are natural transformations and for all objects $K$ in $\mathcal{A}A$, we get $f(a)(K)=f(b)(K):~K\otimes_A M\rightarrow K\otimes_A N$.\\ For all elements $k$ in $K$ and $m$ in $M$, we have $k\otimes a(m)=k\otimes b(m)$, so $k\otimes (a(m)-b(m))=0$ and $a(m)=b(m)$. This says that $a=b$. Surjectivity is clear. As a result, $f$ is an isomorphism.\\ For all right exact $\mathcal{A}$ module functors $\mathcal{G}:~\mathcal{A}A \rightarrow \mathcal{A}B$, can we find an $(A-B)$ bimodule $M$ such that $\mathfrak{F}(M) \cong \mathcal{G}$? We take $M=\mathcal{G}(A)$. It is an $(A-B)$ bimodule.\\ Now we want to show the commutativity of the required diagram for the natural isomorphism. For all morphisms $\alpha:~T\rightarrow S$ in $\mathcal{A}A$, we get the following commutative diagram. \begin{equation*} \xymatrix{ T\otimes_A \mathcal{G}(A) \ar[rr]^{\cong}_{\varphi} \ar[d]_{\alpha \otimes_A id_{\mathcal{G}(A)}} & & \mathcal{G}(T) \ar[d]^{\mathcal{G}(\alpha)}\\ S\otimes_A \mathcal{G}(A) \ar[rr]^{\cong}_{\theta} & & \mathcal{G}(S)} \end{equation*} Here, $\varphi$ is the composite $T\otimes_A \mathcal{G}(A)\cong \mathcal{G}(T\otimes_A A)\cong \mathcal{G}(T)$, which is an isomorphism since $\mathcal{G}$ is a right exact $\mathcal{A}$ module functor and $T\otimes_A A\cong T$. Similarly, $\theta$ is an isomorphism, and the diagram commutes by naturality. \end{proof} As a result, we see that $Hom^{re}_{\mathcal{A}}(\mathcal{A}A,~\mathcal{A}B)$ is a finite category since the category $A\mathcal{A} B$ is finite.\\ Similarly, the functor $A\mathcal{A}\rightarrow Hom^{le}_{\mathcal{A}}(\mathcal{A}A,~\mathcal{A}B)$ taking $M$ to $Hom_{\mathcal{A}A}(-,~M):~\mathcal{A}A \rightarrow \mathcal{A}B$ is an equivalence of categories for all given algebras $A$ and $B$ in $\mathcal{A}$. So, $Hom^{le}_{\mathcal{A}}(\mathcal{A}A,~\mathcal{A}B)$ is a finite category.
\begin{lem} $Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$ is finite if $\mathcal{M}$ and $\mathcal{N}$ are exact module categories over $\mathcal{A}$ and satisfy the required conditions in Proposition \ref{9}. \end{lem} \begin{proof} $\mathcal{M} \simeq \mathcal{A}A$ and $\mathcal{N} \simeq \mathcal{A}B$ for some algebras $A$ and $B$ in $\mathcal{A}$. So, the category $Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$ is equivalent to the category $A\mathcal{A} B$ and $A\mathcal{A} B$ is finite. \end{proof} \begin{lem} $Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$ is a monoidal category of endofunctors of $\mathcal{M}$, where $\mathcal{M}$ is a left module category over a finite monoidal category $\mathcal{A}$. \end{lem} \begin{proof} For two given module functors $\mathcal{F},~\mathcal{G}:~\mathcal{M} \rightarrow \mathcal{M}$, we define their tensor product as the composition $\mathcal{G} \circ \mathcal{F}:~\mathcal{M} \rightarrow \mathcal{M}$ and the unit functor as the identity of $\mathcal{M}$ which is $\mathcal{I}=id_{\mathcal{M}}:~\mathcal{M} \rightarrow \mathcal{M}$. We get an associativity constraint $a$ which is a family of associativity isomorphisms ${a_{\mathcal{F} \mathcal{G} \mathcal{H}}:~(\mathcal{F} \circ \mathcal{G}) \circ \mathcal{H} \rightarrow \mathcal{F} \circ (\mathcal{G} \circ \mathcal{H})}$, left and right unit constraints satisfying the required compatibility conditions. \end{proof} \begin{prop} The monoidal category $Hom^{re}_{\mathcal{A}}(\mathcal{A}A,~\mathcal{A} A)$ is strict and rigid. If $\mathcal{A}=Vec(k)$, then it is not a rigid category. \end{prop} \begin{proof} The right and left duals are the right and left adjoint functors.
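More explicitly (a sketch; which adjoint realizes which dual depends on the duality conventions chosen): if $\mathcal{F}^{ra}$ is a right adjoint of a module functor $\mathcal{F}$, the unit and counit of the adjunction
\begin{align*}
\eta:~id\rightarrow \mathcal{F}^{ra}\circ \mathcal{F},\qquad \epsilon:~\mathcal{F}\circ \mathcal{F}^{ra}\rightarrow id
\end{align*}
supply the evaluation and coevaluation morphisms once composition of functors is taken as the tensor product, and the triangle identities of the adjunction are exactly the rigidity axioms.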
\end{proof} \begin{thm} If $\mathcal{A}$ is a multifusion category, $\mathcal{M}$ and $\mathcal{N}$ are module categories over $\mathcal{A}$, then the category $Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$ is a semisimple module category over $Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$ with action given by composition of functors. It is exact if $\mathcal{M}$ and $\mathcal{N}$ are exact module categories over $\mathcal{A}$. \end{thm} \begin{proof} We define the action as $Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})\times Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{N})\rightarrow Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{N})$ with $(\mathcal{F},~\mathcal{G})\mapsto \mathcal{G} \circ \mathcal{F}$. \end{proof} \subsection{The Center of A Bimodule Category} The center $Z_{\mathcal{A}}(\mathcal{M})$ of an $\mathcal{A}$ bimodule category $\mathcal{M}$ is defined in \cite{gr} as follows. Here, $\mathcal{A}$ is a finite, rigid monoidal category whose unit object is simple.\\ The objects are $(M,~\gamma_M)$ where $M$ is an object in $\mathcal{M}$ and $\gamma_M$ is a family of natural isomorphisms $\gamma_{MX}:~X\otimes M\rightarrow M\otimes X$ which satisfy the commutativity of the following diagram where $X,~Y$ are objects in $\mathcal{A}$ and $M$ is an object in $\mathcal{M}$. \begin{equation} \xymatrix{ (X\otimes Y) \otimes M \ar[d]_{a^{-1}_{XYM}} \ar[r]^{\gamma_{M(XY)}} & M\otimes (X\otimes Y) \ar[d]^{a^{-1}_{MXY}}\\ X\otimes (Y\otimes M) \ar[d]_{X\otimes \gamma_{MY}} & (M\otimes X)\otimes Y\\ X\otimes (M\otimes Y)\ar[r]_{a^{-1}_{XMY}} & (X\otimes M)\otimes Y \ar[u]_{\gamma_{MX}\otimes Y}} \end{equation} A morphism between $(M,~\gamma_M)$ and $(N,~\gamma_N)$ in $Z_{\mathcal{A}}(\mathcal{M})$ is a morphism $f:~M\rightarrow N$ in $\mathcal{M}$ satisfying the condition $\gamma_{NX}\circ (id_X\otimes f)=(f\otimes id_X)\circ \gamma_{MX}$ for all objects $X$ in $\mathcal{A}$.
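When $\mathcal{M}=\mathcal{A}$ with the regular bimodule structure, this construction recovers a familiar category; we record this standard fact as an example, without proof.
\begin{exmp}
For $\mathcal{M}=\mathcal{A}$ regarded as a bimodule category over itself, an object of $Z_{\mathcal{A}}(\mathcal{A})$ is an object $M$ of $\mathcal{A}$ together with natural isomorphisms $\gamma_{MX}:~X\otimes M\rightarrow M\otimes X$ satisfying the diagram above, that is, a half braiding on $M$. Hence $Z_{\mathcal{A}}(\mathcal{A})$ is the center $Z(\mathcal{A})$.
\end{exmp}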
\subsection{Definition of A Bicategory} The following definitions are found in \cite{le1} and \cite{le2} in detail.\\ A collection $\mathfrak{X}$ consisting of the objects $A$, $B$, ... is a bicategory if the following conditions are satisfied. \begin{enumerate} \item $\mathfrak{X}(A,~B)$ is a category whose objects are 1 arrows $f:~A\rightarrow B$, $g:~A\rightarrow B$, ... and morphisms are 2 arrows $\gamma:~f\Rightarrow g$, $\theta:~f\Rightarrow g$ , ... as shown in the following diagram. \begin{align*} \begin{tikzpicture}[out=145, in=145, relative] \node (A) at (0,0) {A}; \node (B) at (3,0) {B}; \draw[->, thick, double] (1.5,0.25) -- (1.5,-0.25) [xshift=5pt] node[right, midway] {$\gamma$} (B); \path[->] (A) edge [bend left] node [above] {f} (B); \path[->] (A) edge [bend right] node [below] {g} (B); \end{tikzpicture} \end{align*} \item $\mathcal{F}_{ABC}:~\mathfrak{X}(B,~C)\times \mathfrak{X}(A,~B) \rightarrow \mathfrak{X}(A,~C)$ is a functor taking the pairs $(g,~f)$ to $g\circ f=gf$ and $(\theta,~\gamma)$ to $\theta \star \gamma$. $\theta \star \gamma$ is shown as in the following diagram. 
\begin{align*} \begin{tikzpicture}[out=145, in=145, relative] \node (A) at (0,0) {A}; \node (B) at (3,0) {B}; \node (C) at (6, 0) {C~~=}; \draw[->, thick, double] (1.5,0.25) -- (1.5,-0.25) [xshift=5pt] node[right, midway] {$\gamma$} (B); \draw[->, thick, double] (4.5,0.25) -- (4.5,-0.25) [xshift=5pt] node[right, midway] {$\theta$} (C); \path[->] (A) edge [bend left] node [above] {$f$} (B); \path[->] (A) edge [bend right] node [below] {$g$} (B); \path[->] (B) edge [bend left] node [above] {$h$} (C); \path[->] (B) edge [bend right] node [below] {$k$} (C); \end{tikzpicture} \begin{tikzpicture}[out=145, in=145, relative] \node (A) at (0,0) {A}; \node (C) at (3,0) {C}; \draw[->, thick, double] (1.5,0.25) -- (1.5,-0.25) [xshift=5pt] node[right, midway] {$\theta \star \gamma$} (C); \path[->] (A) edge [bend left] node [above] {$h\circ f$} (C); \path[->] (A) edge [bend right] node [below] {$k\circ g$} (C); \end{tikzpicture} \end{align*} \item $\mathcal{F}_A:~1\rightarrow \mathfrak{X}(A,~A)$ is a functor sending the object $\star$ in $1$ to the arrow $id_A$ where $1$ is a category with one object. \item $a_{ABCD}:~\mathcal{F}_{ABD} \circ (\mathcal{F}_{BCD} \times 1) \rightarrow \mathcal{F}_{ACD} \circ (1\times \mathcal{F}_{ABC})$ \begin{align} \label{45} \begin{tikzpicture} \node (A) at (0, 0) {$\mathfrak{X}(C, D) \times \mathfrak{X}(B, C) \times \mathfrak{X}(A, B)$}; \node (C) at (7, 0) {$\mathfrak{X}(B, D) \times \mathfrak{X}(A, B)$}; \node (B) at (0,-2) {$\mathfrak{X}(C, D) \times \mathfrak{X}(A, C)$}; \node (D) at (7, -2) {$\mathfrak{X}(A,~D)$}; \draw[->, thick, double] (4,-0.75)--(3, -1.25) [xshift=5pt] node [right, midway] {$a_{ABCD}$} (B); \path[->] (A) edge node [above] {$\mathcal{F}_{BCD} \times 1$} (C); \path[->] (A) edge node [right, midway] {$1\times \mathcal{F}_{ABC}$} (B); \path[->] (B) edge node [below] {$\mathcal{F}_{ACD}$} (D); \path[->] (C) edge node [right, midway] {$\mathcal{F}_{ABD}$} (D); \end{tikzpicture} \end{align} is a natural isomorphism. 
$\xymatrix{a_{ABCD}(f,~g,~h):~(fg)h\ar[rr]^{\sim} & & f(gh)}$ are 2 arrows for all 1 arrows $f:~C\rightarrow D$, $g:~B\rightarrow C$ and $h:~A\rightarrow B$ such that the following pentagon commutes for all 1 arrows $f,~g,~h,~k$. \begin{align} \label{46} \begin{tikzpicture} \node (A) at (0, 0) {$((fg)h)k$}; \node (B) at (3, 0) {$(f(gh))k$}; \node (C) at (-1, -2) {$(fg)(hk)$}; \node (D) at (4, -2) {$f((gh)k)$}; \node (E) at (1.5, -3.5) {$f(g(hk))$}; \draw [->, thick] (2, -1.5) arc (360:30:10pt); \path[->] (A) edge node [above] {$a\star id_k$} (B); \path[->] (A) edge node [right, midway] {$a$} (C); \path[->] (B) edge node [right, midway] {$a$} (D); \path[->] (D) edge node [right, midway] {$id_f\star a$} (E); \path[->] (C) edge node [right, midway] {$a$} (E); \end{tikzpicture} \end{align} \item $r_{AB}:~\mathcal{F}_{AAB} \circ (1\times \mathcal{F}_A) \rightarrow \mathcal{G}$ and $l_{AB}:~\mathcal{F}_{ABB} \circ (\mathcal{F}_B \times 1) \rightarrow \mathcal{H}$ \begin{align} \begin{tikzpicture} \node (A) at (1, 0) {$\mathfrak{X}(A, B) \times \mathfrak{X}(A, A)$}; \node (B) at (-1, -2) {$\mathfrak{X}(A, B)$}; \node (C) at (3,-2) {$\mathfrak{X}(A, B) \times 1$}; \draw[->, thick, double] (1, -0.5) -- (1, -1.25) [xshift=2pt] node[right] {$r_{AB}$} (B); \path[->] (A) edge node [left, midway] {$\mathcal{F}_{AAB}$} (B); \path[->] (C) edge node [right, midway] {$1\times \mathcal{F}_A$} (A); \path[->] (C) edge node [below] {$\sim$} node [above] {$\mathcal{G}$} (B); \end{tikzpicture} \quad \begin{tikzpicture} \node (A) at (1, 0) {$\mathfrak{X}(B, B) \times \mathfrak{X}(A, B)$}; \node (B) at (-1, -2) {$\mathfrak{X}(A, B)$}; \node (C) at (3,-2) {$1\times \mathfrak{X}(A, B)$}; \draw[->, thick, double] (1, -0.5) -- (1, -1.25) [xshift=2pt] node[right] {$l_{AB}$} (B); \path[->] (A) edge node [left, midway] {$\mathcal{F}_{ABB}$} (B); \path[->] (C) edge node [right, midway] {$\mathcal{F}_B \times 1$} (A); \path[->] (C) edge node [below] {$\sim$} node [above] {$\mathcal{H}$} (B); 
\end{tikzpicture} \end{align} are natural isomorphisms. $\xymatrix{r_{AB}(f,~\star):~f\circ id_A \ar[r] & f}$ and $\xymatrix{ l_{AB}(\star,~f):~id_B\circ f\ar[r] & f}$ are 2 arrows for all 1 arrows $f:~A\rightarrow B$ such that the following triangle commutes. \begin{align} \xymatrix{ (fid_B)g\ar[rr]^a \ar[rd]_{r\star id_g} & & f(id_Bg)\ar[ld]^{id_f\star l} \\ & fg} \end{align} \end{enumerate} \begin{remark} If all natural isomorphisms $a$, $r$, $l$ are identities such that $(fg)h=f(gh)$, $1f=f=f1$, and the same conditions hold for the composition of 2 arrows, then $\mathfrak{X}$ is called a 2-category. \end{remark} \section{Internal Hom of Two Objects in A Module Category} In this section, we are assuming that $\mathcal{M}$ is an exact module category over a finite, rigid monoidal category $\mathcal{A}$ whose unit object $I$ is simple and that we are given objects $M$, $N$ in $\mathcal{M}$. \begin{lem} \label{75} The functor $Hom_{\mathcal{M}}(-\otimes M,~N):~\mathcal{A} \rightarrow Set$ is left exact. \end{lem} \begin{proof} Assume that $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ is an exact sequence of objects $A$, $B$ and $C$ in $\mathcal{A}$. Then, the sequence $A\otimes M\rightarrow B\otimes M\rightarrow C\otimes M\rightarrow 0$ is exact since $-\otimes M$ is an exact functor by \ref{74}. So, the sequence \begin{align*} 0\rightarrow Hom_{\mathcal{M}}(C\otimes M,~N)\rightarrow Hom_{\mathcal{M}}(B\otimes M,~N)\rightarrow Hom_{\mathcal{M}}(A\otimes M,~N) \end{align*} is exact since $Hom_{\mathcal{M}}(-,~N)$ is a left exact contravariant functor by Example \ref{76}. This proves the left exactness. \end{proof} \begin{defn} The internal hom of $M$ and $N$ is an object $\underline{Hom}_{\mathcal{M}}(M,~N)$ in $\mathcal{A}$ which represents the functor $Hom_{\mathcal{M}}(-\otimes M,~N):~\mathcal{A} \rightarrow Set$ whenever it is representable.
\end{defn} This means that there exists a natural isomorphism between the functors $Hom_{\mathcal{M}}(-\otimes M,~N)$ and $Hom_{\mathcal{A}}(-,~\underline{Hom}_{\mathcal{M}}(M,~N))$. \begin{lem} The functor $Hom_{\mathcal{M}}(-\otimes M,~N):~\mathcal{A} \rightarrow Set$ is exact if the internal hom $\underline{Hom}_{\mathcal{M}}(M,~N)$ exists and is projective in $\mathcal{A}$. \end{lem} \begin{proof} $Hom_{\mathcal{M}}(X\otimes M,~N)\cong Hom_{\mathcal{A}}(X,~\underline{Hom}_{\mathcal{M}}(M,~N))$ for all objects $X$ in $\mathcal{A}$ by existence of the representing object. We know that this functor is a left exact contravariant functor by Lemma \ref{75}. We want to show that it is right exact. For this, we need to show that the contravariant functor $Hom_{\mathcal{A}}(-,~\underline{Hom}_{\mathcal{M}}(M,~N)):~\mathcal{A} \rightarrow Set$ is right exact, which means that the covariant functor $Hom_{\mathcal{A}^{op}}(\underline{Hom}_{\mathcal{M}}(M,~N),~-):~\mathcal{A}^{op} \rightarrow Set$ is right exact.\\ Assume that $\xymatrix{ 0\ar[r] & X\ar[r]^f & Y\ar[r]^g & Z\ar[r] & 0}$ is an exact sequence in $\mathcal{A}^{op}$. We want to show that the sequence \begin{align*} 0\rightarrow Hom_{\mathcal{A}^{op}}(\underline{Hom}_{\mathcal{M}}(M,~N),~X)&\xrightarrow{F} Hom_{\mathcal{A}^{op}}(\underline{Hom}_{\mathcal{M}}(M,~N),~Y)\\ &\xrightarrow{G} Hom_{\mathcal{A}^{op}}(\underline{Hom}_{\mathcal{M}}(M,~N),~Z)\rightarrow 0 \end{align*} is exact in $Set$. We just need to show that $G$ is an epimorphism since that sequence is left exact. This follows since the internal hom is projective. \end{proof} This functor is always exact if $\mathcal{A}$ is semisimple by Lemma \ref{77}. \begin{lem} $Hom_{\mathcal{M}}(X\otimes M,~N)\cong Hom_{\mathcal{A}}(X,~\underline{Hom}_{\mathcal{M}}(M,~N))$ canonically for all objects $X$ in $\mathcal{A}$ by definition since the functor $Hom_{\mathcal{M}}(-\otimes M,~N)$ is contravariant.
\end{lem} \begin{lem} $Hom_{\mathcal{M}}(M,~X\otimes N)\cong Hom_{\mathcal{A}}(I,~X\otimes \underline{Hom}_{\mathcal{M}}(M,~N))$ canonically for all objects $X$ in $\mathcal{A}$. \end{lem} \begin{proof} For all morphisms $M\rightarrow X\otimes N$ in $\mathcal{M}$, we find a morphism \begin{align} X^+\otimes M\rightarrow X^+\otimes X\otimes N\rightarrow I\otimes N\cong N \end{align} by using the rigidity of $\mathcal{A}$. Here, we use the evaluation map $ev_{rX}:~X^+\otimes X\rightarrow I$. So, \begin{align} Hom_{\mathcal{M}}(M,~X\otimes N)\cong Hom_{\mathcal{M}}(X^+\otimes M,~N)\cong Hom_{\mathcal{A}}(X^+,~\underline{Hom}_{\mathcal{M}}(M,~N)). \end{align} For all morphisms $X^+\rightarrow \underline{Hom}_{\mathcal{M}}(M,~N)$, we get a morphism \begin{align} I\rightarrow X\otimes X^+\rightarrow X\otimes \underline{Hom}_{\mathcal{M}}(M,~N) \end{align} by using the coevaluation map. As a result, we get an isomorphism \begin{align} Hom_{\mathcal{A}}(X^+,~\underline{Hom}_{\mathcal{M}}(M,~N))\cong Hom_{\mathcal{A}}(I,~X\otimes \underline{Hom}_{\mathcal{M}}(M,~N)). \end{align} \end{proof} \begin{lem} $\underline{Hom}_{\mathcal{M}}(X\otimes M,~N) \cong \underline{Hom}_{\mathcal{M}}(M,~N) \otimes X^+$ canonically. \end{lem} \begin{proof} \cite{os} We have\\ $Hom_{\mathcal{A}}(K,~\underline{Hom}_{\mathcal{M}}(M,~N)\otimes X^+) \cong Hom_{\mathcal{A}}(K\otimes X,~\underline{Hom}_{\mathcal{M}}(M,~N))\cong \\ Hom_{\mathcal{M}}((K\otimes X)\otimes M,~N)\cong Hom_{\mathcal{M}}(K\otimes (X\otimes M),~N)\cong Hom_{\mathcal{A}}(K,~\underline{Hom}_{\mathcal{M}}(X\otimes M,~N))$\\ for all $K$ in $\mathcal{A}$, so $\underline{Hom}_{\mathcal{M}}(M,~N)\otimes X^+\cong \underline{Hom}_{\mathcal{M}}(X\otimes M,~N)$ canonically. \end{proof} \begin{lem} $\underline{Hom}_{\mathcal{M}}(M,~X\otimes N) \cong X\otimes \underline{Hom}_{\mathcal{M}}(M,~N)$ canonically. \end{lem} \begin{proof} See \cite{os}.
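A sketch of the argument (our outline of the standard steps; see \cite{os} for the full proof): for all objects $K$ in $\mathcal{A}$, we have natural isomorphisms
\begin{align*}
Hom_{\mathcal{A}}(K,~X\otimes \underline{Hom}_{\mathcal{M}}(M,~N))&\cong Hom_{\mathcal{A}}(X^+\otimes K,~\underline{Hom}_{\mathcal{M}}(M,~N))\\
&\cong Hom_{\mathcal{M}}((X^+\otimes K)\otimes M,~N)\\
&\cong Hom_{\mathcal{M}}(K\otimes M,~X\otimes N)\\
&\cong Hom_{\mathcal{A}}(K,~\underline{Hom}_{\mathcal{M}}(M,~X\otimes N)),
\end{align*}
where the first and third isomorphisms use the evaluation and coevaluation maps for $X$ together with the module associativity constraint, as in the previous lemmas, and the second and fourth use the definition of the internal hom. The claim then follows from the Yoneda lemma.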
\end{proof} \begin{lem} \cite{etos} If we assume that $\mathcal{A}$ is a braided monoidal category, then $\underline{Hom}_{\mathcal{M}}(M,~M)$ is an algebra for a given object $M$ in $\mathcal{M}$ if it exists as a representing object of the functor $Hom_{\mathcal{M}}(-\otimes M,~M)$. \end{lem} \begin{proof} We need to define a multiplication morphism \begin{align*} m:~\underline{Hom}_{\mathcal{M}}(M,~M)\otimes \underline{Hom}_{\mathcal{M}}(M,~M)\rightarrow \underline{Hom}_{\mathcal{M}}(M,~M) \end{align*} and a unit morphism $u:~I\rightarrow \underline{Hom}_{\mathcal{M}}(M,~M)$ satisfying the required compatibility conditions.\\ Following \cite{os}, we construct the multiplication morphism $m$ as below.\\ $id_{\underline{Hom}_{\mathcal{M}}(M,~M)}$ is in $Hom_{\mathcal{A}}(\underline{Hom}_{\mathcal{M}}(M,~M),~\underline{Hom}_{\mathcal{M}}(M,~M))$ since $\underline{Hom}_{\mathcal{M}}(M,~M)$ is an object in $\mathcal{A}$. \begin{align*} Hom_{\mathcal{A}}(\underline{Hom}_{\mathcal{M}}(M,~M),~\underline{Hom}_{\mathcal{M}}(M,~M))\cong Hom_{\mathcal{M}}(\underline{Hom}_{\mathcal{M}}(M,~M) \otimes M,~M) \end{align*} by definition. So, we get a unique morphism $f:~\underline{Hom}_{\mathcal{M}}(M,~M)\otimes M\rightarrow M$ corresponding to $id_{\underline{Hom}_{\mathcal{M}}(M,~M)}$.
Using this morphism, we get a composition \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes (\underline{Hom}_{\mathcal{M}}(M,~M) \otimes M)$}; \node (B) at (7, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M)\otimes M$}; \node (C) at (11, 0) {$M$}; \path[->] (A) edge node [above] {$id\otimes f$} (B); \path[->] (B) edge node [above] {$f$} (C); \end{tikzpicture} \end{align*} This is the same as the morphism $\xymatrix{ (\underline{Hom}_{\mathcal{M}}(M,~M)\otimes \underline{Hom}_{\mathcal{M}}(M,~M))\otimes M\rightarrow M}$ since \begin{align*} \underline{Hom}_{\mathcal{M}}(M,~M)\otimes (\underline{Hom}_{\mathcal{M}}(M,~M)\otimes M) \cong (\underline{Hom}_{\mathcal{M}}(M,~M)\otimes \underline{Hom}_{\mathcal{M}}(M,~M))\otimes M. \end{align*} This defines a multiplication morphism\\ $\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M) \rightarrow \underline{Hom}_{\mathcal{M}}(M,~M)$ as shown in \cite{os}.\\ This multiplication is associative since $\underline{Hom}_{\mathcal{M}}(M,~M)$ is an object in $\mathcal{A}$ and we have the associativity constraint.\\ For the compatibility conditions, we need to show that the following diagrams commute.
\begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (B) at (-4, -2.5) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (C) at (4, -2.5) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (D) at (0, -5) {$\underline{Hom}_{\mathcal{M}}(M,~M)$}; \path[->] (A) edge node [left, midway] {$m\otimes id$} (B); \path[->] (A) edge node [right, midway] {$id\otimes m$} (C); \path[->] (B) edge node [left, midway] {$m$} (D); \path[->] (C) edge node [right, midway] {$m$} (D); \end{tikzpicture} \end{align*} \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (B) at (7, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (C) at (0, -2) {$ I\otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \path[->] (A) edge node [above] {$m$} (B); \path[->] (C) edge node [left, midway] {$u\otimes id$} (A); \path[->] (C) edge node [below] {$l$} (B); \end{tikzpicture} \end{align*} \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (B) at (7, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (C) at (0, -2) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes I$}; \path[->] (A) edge node [above] {$m$} (B); \path[->] (C) edge node [left, midway] {$id\otimes u$} (A); \path[->] (C) edge node [below] {$r$} (B); \end{tikzpicture} \end{align*} We have the isomorphisms \begin{align*} Hom_{\mathcal{A}}(\underline{Hom}_{\mathcal{M}}(M,~M)\otimes \underline{Hom}_{\mathcal{M}}(M,~M),~\underline{Hom}_{\mathcal{M}}(M,~M)) \cong \\ Hom_{\mathcal{M}}(\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes M,~M), \end{align*} \begin{align*} 
Hom_{\mathcal{A}}(\underline{Hom}_{\mathcal{M}}(M,~M),~\underline{Hom}_{\mathcal{M}}(M,~M))\cong Hom_{\mathcal{M}}(\underline{Hom}_{\mathcal{M}}(M,~M)\otimes M,~M). \end{align*} $m$ is a morphism in the first hom set and $id$ is a morphism in the second one.\\ Also, we have a composition map of hom sets\\ $Hom_{\mathcal{A}}(A\otimes A\otimes A,~A\otimes A)\times Hom_{\mathcal{A}}(A\otimes A,~A)\rightarrow Hom_{\mathcal{A}}(A \otimes A\otimes A,~A)$\\ for $A=\underline{Hom}_{\mathcal{M}}(M,~M)$ taking $(m\otimes id,~m)$ to $m\circ (m\otimes id)$ and $(id\otimes m,~m)$ to $m\circ (id\otimes m)$. One can prove that $m\circ (m\otimes id)=m\circ (id\otimes m)$, which gives the commutativity of the first diagram.\\ Now, we want to find a unit morphism $u:~I\rightarrow \underline{Hom}_{\mathcal{M}}(M,~M)$ satisfying the commutativity of the required diagrams.\\ $Hom_{\mathcal{M}}(M,~X\otimes N)\cong Hom_{\mathcal{A}}(I,~X\otimes \underline{Hom}_{\mathcal{M}}(M,~N))$ for all objects $X$ in $\mathcal{A}$. Taking $X=I$ and $N=M$, we get an isomorphism $Hom_{\mathcal{M}}(M,~M)\cong Hom_{\mathcal{A}}(I,~\underline{Hom}_{\mathcal{M}}(M,~M))$. So, for the identity morphism $id_M:~M\rightarrow M$, we get a unique morphism $u:~I\rightarrow \underline{Hom}_{\mathcal{M}}(M,~M)$.\\ We have an isomorphism \begin{align*} Hom_{\mathcal{A}}(I\otimes \underline{Hom}_{\mathcal{M}}(M,~M),~\underline{Hom}_{\mathcal{M}}(M,~M)) \cong Hom_{\mathcal{M}}(I\otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes M,~M).
\end{align*} The diagram \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (B) at (7, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (C) at (0, -2) {$ I\otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \path[->] (A) edge node [above] {$m$} (B); \path[->] (C) edge node [left, midway] {$u\otimes id$} (A); \path[->] (C) edge node [below] {$l$} (B); \end{tikzpicture} \end{align*} corresponds to the diagram \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)\otimes M$}; \node (B) at (7, 0) {$M$}; \node (C) at (0, -2) {$ I\otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes M$}; \path[->] (A) edge (B); \path[->] (C) edge node [left, midway] {$u\otimes id$} (A); \path[->] (C) edge (B); \end{tikzpicture} \end{align*} And we get another diagram \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$M\otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes M$}; \node (B) at (7, 0) {$M$}; \node (C) at (0, -2) {$M\otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes M$}; \path[->] (A) edge (B); \path[->] (C) edge node [left, midway] {$id_M\otimes id$} (A); \path[->] (C) edge (B); \end{tikzpicture} \end{align*} Obviously, this diagram commutes. As a result, the first one commutes and we get the result. We follow a similar argument for the right unit constraint. \end{proof} \begin{lem} \cite{baki} $(^+A)^+=A$ for all objects $A$ in a rigid monoidal category $\mathcal{A}$. \end{lem} \begin{lem} \label{222} \cite{etos} $\underline{Hom}_{\mathcal{A} A}(M,~M)=(M\otimes ^+M)^+=(^+M)^+\otimes M^+= M\otimes M^+\cong M^+\otimes M$ for all right $A$ modules $M$ in a left module category $\mathcal{A} A$ over an FRBSU monoidal category $\mathcal{A}$. \end{lem} \begin{lem} $\underline{Hom}_{\mathcal{M}}(M,~N)$ is a right $\underline{Hom}_{\mathcal{M}}(M,~M)$ module in $\mathcal{A}$.
\end{lem} \begin{proof} We need to define an action morphism \begin{align*} a:~\underline{Hom}_{\mathcal{M}}(M,~N)\otimes \underline{Hom}_{\mathcal{M}}(M,~M) \rightarrow \underline{Hom}_{\mathcal{M}}(M,~N) \end{align*} satisfying the commutativity of the following diagrams. \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{Hom}_{\mathcal{M}}(M,~N) \otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (B) at (-4, -2.5) {$\underline{Hom}_{\mathcal{M}}(M,~N) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (C) at (4, -2.5) {$\underline{Hom}_{\mathcal{M}}( M,~N) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (D) at (0, -5) {$\underline{Hom}_{\mathcal{M}}(M,~N)$}; \path[->] (A) edge node [left, midway] {$id\otimes m$} (B); \path[->] (A) edge node [right, midway] {$a\otimes id$} (C); \path[->] (B) edge node [left, midway] {$a$} (D); \path[->] (C) edge node [right, midway] {$a$} (D); \end{tikzpicture} \end{align*} \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{Hom}_{\mathcal{M}}(M,~N) \otimes \underline{Hom}_{\mathcal{M}}(M,~M)$}; \node (B) at (7, 0) {$\underline{Hom}_{\mathcal{M}}(M,~N)$}; \node (C) at (0, -2) {$\underline{Hom}_{\mathcal{M}}(M,~N)\otimes I$}; \path[->] (A) edge node [above] {$a$} (B); \path[->] (C) edge node [left, midway] {$id\otimes u$} (A); \path[->] (C) edge node [below] {$r$} (B); \end{tikzpicture} \end{align*} \begin{align*} Hom_{\mathcal{A}}(\underline{Hom}_{\mathcal{M}}(M,~N)\otimes \underline{Hom}_{\mathcal{M}}(M,~M),~\underline{Hom}_{\mathcal{M}}(M,~N))\\ \cong Hom_{\mathcal{M}}(\underline{Hom}_{\mathcal{M}}(M,~N)\otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes M,~N), \end{align*} \begin{align*} Hom_{\mathcal{A}}(\underline{Hom}_{\mathcal{M}}(M,~N),~\underline{Hom}_{\mathcal{M}}(M,~N))\cong Hom_{\mathcal{M}}(\underline{Hom}_{\mathcal{M}}(M,~N)\otimes M,~N).
\end{align*} So, the identity morphism $id_{\underline{Hom}_{\mathcal{M}}(M,~N)}$ corresponds to a morphism \begin{align*} k:~\underline{Hom}_{\mathcal{M}}(M,~N) \otimes M\rightarrow N \end{align*} and $a$ corresponds to a composition \begin{align*} \underline{Hom}_{\mathcal{M}}(M,~N) \otimes \underline{Hom}_{\mathcal{M}}(M,~M) \otimes M \rightarrow \underline{Hom}_{\mathcal{M}}(M,~N) \otimes M\rightarrow N \end{align*} which is $k\circ (id_{\underline{Hom}_{\mathcal{M}}(M,~N)}\otimes f)$, where $f$ is the morphism $\underline{Hom}_{\mathcal{M}}(M,~M) \otimes M\rightarrow M$ that corresponds to $id_{\underline{Hom}_{\mathcal{M}}(M,~M)}$.\\ It is easy to show that those diagrams commute. \end{proof} \begin{lem} If $A=\underline{Hom}_{\mathcal{M}}(M,~M)$ is the algebra defined as above, then $\mathcal{A} A$ is an exact left module category over $\mathcal{A}$. \end{lem} \begin{prop} \label{81} \cite{etos} The functor $\underline{Hom}_{\mathcal{M}}(M,~-):~\mathcal{M} \rightarrow \mathcal{A}A$ is an exact module functor. \end{prop} \begin{proof} The module functor structure is given by a family $f$ of natural isomorphisms $f_{XN}:~\underline{Hom}_{\mathcal{M}}(M,~X\otimes N) \rightarrow X\otimes \underline{Hom}_{\mathcal{M}}(M,~N)$ for all objects $X$ in $\mathcal{A}$, $N$ in $\mathcal{M}$ such that for all $X$, $Y$ in $\mathcal{A}$, $N$ in $\mathcal{M}$, the following diagrams commute.
\begin{equation*} \xymatrix{ \underline{Hom}_{\mathcal{M}}(M,~(X\otimes Y)\otimes N) \ar[d] \ar[r] & \underline{Hom}_{\mathcal{M}}(M,~X\otimes (Y\otimes N)) \ar[r] & X\otimes \underline{Hom}_{\mathcal{M}}(M,~Y\otimes N) \ar[d]\\ (X\otimes Y)\otimes \underline{Hom}_{\mathcal{M}}(M,~N) \ar[rr] & & X\otimes(Y\otimes \underline{Hom}_{\mathcal{M}}(M,~N)) } \end{equation*} \begin{equation*} \xymatrix{ \underline{Hom}_{\mathcal{M}}(M,~I\otimes N) \ar[rd]_{\mathcal{F}(l(M))} \ar[rr] & & I\otimes \underline{Hom}_{\mathcal{M}}(M,~N) \ar[ld] \\ & \underline{Hom}_{\mathcal{M}}(M,~N)} \end{equation*} Exactness comes from \ref{250} and this proves the proposition. \end{proof} \begin{thm} \label{9} \cite{etos} Assume that $\mathcal{M}$ is an exact module category over a finite, rigid monoidal category $\mathcal{A}$ such that the unit object in $\mathcal{A}$ is simple. Let $A=\underline{Hom}_{\mathcal{M}}(M,~M)$ be the algebra defined as above. Assume further that for every nonzero object $N\in \mathcal{M}$ there exists an object $X\in \mathcal{A}$ such that $Hom_{\mathcal{M}}(X\otimes M,~N)\neq 0$. Then, the functor $\underline{Hom}_{\mathcal{M}}(M,~-):~\mathcal{M} \rightarrow \mathcal{A}A$ is an equivalence of module categories. \end{thm} \begin{proof} We need to show that \begin{align*} Hom_{\mathcal{M}}(N,~K)\rightarrow Hom_{\mathcal{A}A}(\underline{Hom}_{\mathcal{M}}(M,~N),~\underline{Hom}_{\mathcal{M}}(M,~K)) \end{align*} is an isomorphism for all objects $N$ and $K$ in $\mathcal{M}$ and that $\underline{Hom}_{\mathcal{M}}(M,~-)$ is essentially surjective for the equivalence. \cite{os} proves the isomorphism for all objects $N$ of the form $X\otimes M$ and for all objects $K$ in $\mathcal{M}$ first, then the author proves it for all objects $N$ and $K$ in $\mathcal{M}$ by using the exactness of the functor $\underline{Hom}_{\mathcal{M}}(M,~-)$. After that, \cite{os} shows that $\underline{Hom}_{\mathcal{M}}(M,~-)$ is essentially surjective.\\ We may apply a similar method to prove the theorem here.
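For instance, for objects of the form $N=X\otimes M$, the map can be written explicitly (a sketch of the first step of \cite{os}, using the isomorphism $\underline{Hom}_{\mathcal{M}}(M,~X\otimes M)\cong X\otimes \underline{Hom}_{\mathcal{M}}(M,~M)=X\otimes A$ from the previous section): for all objects $K$ in $\mathcal{M}$,
\begin{align*}
Hom_{\mathcal{M}}(X\otimes M,~K)&\cong Hom_{\mathcal{A}}(X,~\underline{Hom}_{\mathcal{M}}(M,~K))\\
&\cong Hom_{\mathcal{A}A}(X\otimes A,~\underline{Hom}_{\mathcal{M}}(M,~K))\\
&\cong Hom_{\mathcal{A}A}(\underline{Hom}_{\mathcal{M}}(M,~X\otimes M),~\underline{Hom}_{\mathcal{M}}(M,~K)),
\end{align*}
where the first isomorphism is the definition of the internal hom and the second holds since $X\otimes A$ is a free right $A$ module.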
In \cite{os}, the theorem is stated in the semisimple setting, where exactness coincides with semisimplicity; there, $\mathcal{M}$ is assumed to be an indecomposable module category instead of the assumptions on $\mathcal{M}$ made here. \end{proof} \begin{exmp} $Vec_f(k)$ is an FRBSU monoidal category and it is an exact left module category over itself with the tensor multiplication. For an object $M$ in $Vec_f(k)$, $A=\underline{Hom}_{Vec_f(k)}(M,~M)$ is an algebra over $k$ and $M$ is a right $A$ module in $Vec_f(k)$. For all right $A$ modules $N$ in $Vec_f(k)$, we get a surjection $N\otimes M\rightarrow N$, hence $M$ generates $Vec_f(k)$ and as a result, $Vec_f(k)\simeq Vec_f(k)A$ by Theorem \ref{9}. \end{exmp} \begin{lem} \label{79} The left module category $\mathcal{A} A$ for $A=\underline{Hom}_{\mathcal{M}}(M,~M)$ for some object $M$ in $\mathcal{M}$ is a finite category, hence $\mathcal{M}$ is a finite category. \end{lem} \section{ Invertible Bimodule Categories} Most of the following information is taken from \cite{gr} and \cite{etnios}. \begin{lem} \label{36} Assume that $\mathcal{M}$ is an $(\mathcal{A}-\mathcal{B})$ bimodule category and $\mathcal{N}$ is a $(\mathcal{B}-\mathcal{C})$ bimodule category for given finite, rigid monoidal categories $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$. Then, $Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N})$ is an $(\mathcal{A}-\mathcal{C})$ bimodule category. It is abelian. If $\mathcal{A}=\mathcal{B}$ and $\mathcal{M}$, $\mathcal{N}$ are exact, then it is exact.
\end{lem} \begin{proof} To prove that $Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N})$ is an $(\mathcal{A}-\mathcal{C})$ bimodule category, we define the left action of $\mathcal{A}$ on $Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N})$ by \begin{align*} \mathfrak{F}:~\mathcal{A} \times Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N})\rightarrow Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N}),~(A,~\mathcal{F})\rightarrow A\otimes \mathcal{F} \end{align*} for all objects $A$ in $\mathcal{A}$ and right exact module functors $\mathcal{F}:~\mathcal{M}^{op} \rightarrow \mathcal{N}$. Here, we have $(A\otimes \mathcal{F})(M)=\mathcal{F}(M\otimes A)$ for all objects $M$ in $\mathcal{M}^{op}$. $\mathcal{M}^{op}$ is a right $\mathcal{A}$ module category, so $M\otimes A$ is an object in $\mathcal{M}^{op}$. We need to show that $\mathfrak{F}$ is a biexact bifunctor, that is, $\mathfrak{F}(-,~\mathcal{F})$ is exact for all right exact module functors $\mathcal{F}:~\mathcal{M}^{op} \rightarrow \mathcal{N}$ (exactness in the first variable) and $\mathfrak{F}(A,~-)$ is exact for all objects $A$ in $\mathcal{A}$ (exactness in the second variable).\\ To prove that $\mathfrak{F}(-,~\mathcal{F})$ is an exact functor, we need to show that \begin{align*} 0\rightarrow A\otimes \mathcal{F} \rightarrow B\otimes \mathcal{F} \rightarrow C\otimes \mathcal{F} \rightarrow 0 \end{align*} is an exact sequence of natural transformations of right exact module functors from $\mathcal{M}^{op}$ to $\mathcal{N}$ whenever the sequence $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ is exact. That sequence of natural transformations is a sequence of morphisms \begin{align} 0\rightarrow (A\otimes \mathcal{F})(M) \rightarrow (B\otimes \mathcal{F})(M) \rightarrow (C\otimes \mathcal{F})(M) \rightarrow 0 \end{align} for all objects $M$ in $\mathcal{M}^{op}$, satisfying the compatibility conditions for all morphisms $M\rightarrow N$ in $\mathcal{M}^{op}$.
This sequence is the same as the sequence \begin{align} \label{17} 0\rightarrow \mathcal{F}(M\otimes A) \rightarrow \mathcal{F}(M\otimes B) \rightarrow \mathcal{F}(M\otimes C) \rightarrow 0 \end{align} by the above action. The sequence $0\rightarrow M\otimes A\rightarrow M\otimes B\rightarrow M\otimes C\rightarrow 0$ is exact since the action is an exact functor by definition of module category. As a result, \ref{17} is an exact sequence by Lemma \ref{250}.\\ Assume that $0\rightarrow \mathcal{F} \rightarrow \mathcal{G} \rightarrow \mathcal{H} \rightarrow 0$ is an exact sequence of module functors in $Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N})$. It is clear that the sequence $0\rightarrow A\otimes \mathcal{F} \rightarrow A\otimes \mathcal{G} \rightarrow A\otimes \mathcal{H}\rightarrow 0$ is exact. As a result, $\mathfrak{F}(A,~-)$ is an exact functor.\\ We now construct an associativity constraint $a$, consisting of a family of associativity isomorphisms $a_{AB\mathcal{F}}:~(A\otimes B)\otimes \mathcal{F} \rightarrow A\otimes (B\otimes \mathcal{F})$ for all objects $A$, $B$ in $\mathcal{A}$ and right exact module functors $\mathcal{F}:~\mathcal{M}^{op} \rightarrow \mathcal{N}$, and a unit constraint $l$, consisting of a family of unit isomorphisms $l_{\mathcal{F}}:~I\otimes \mathcal{F} \rightarrow \mathcal{F}$ for the unit object $I$ in $\mathcal{A}$ and module functors $\mathcal{F}:~\mathcal{M}^{op} \rightarrow \mathcal{N}$, making the required diagrams commute.\\ For all objects $A$, $B$ in $\mathcal{A}$, module functors $\mathcal{F}:~\mathcal{M}^{op} \rightarrow \mathcal{N}$, we get \begin{align*} ((A\otimes B) \otimes \mathcal{F})(M)=\mathcal{F}(M\otimes (A\otimes B)),\\ (A\otimes (B\otimes \mathcal{F}))(M)=(B\otimes \mathcal{F})(M\otimes A)=\mathcal{F}((M\otimes A)\otimes B). \end{align*} $\mathcal{M}^{op}$ is a right $\mathcal{A}$ module category, so $M\otimes (A\otimes B)\cong (M\otimes A)\otimes B$.
This gives a short exact sequence $0\rightarrow M\otimes (A\otimes B)\rightarrow (M\otimes A)\otimes B\rightarrow 0$. $\mathcal{F}$ is exact by \ref{250}, so the sequence $0\rightarrow \mathcal{F}(M\otimes (A\otimes B))\rightarrow \mathcal{F}((M\otimes A)\otimes B)\rightarrow 0$ is exact, hence we get an isomorphism $\mathcal{F}(M\otimes (A\otimes B)) \cong \mathcal{F}((M\otimes A)\otimes B)$ and use this isomorphism as the associativity constraint. Similarly, we define $l$.\\ $\mathfrak{G}:~Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N})\times \mathcal{C} \rightarrow Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N})$ is the right action of $\mathcal{C}$ on $Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{N})$ with $(\mathcal{F},~C)\rightarrow \mathcal{F} \otimes C$ for all objects $C$ in $\mathcal{C}$ and right exact module functors $\mathcal{F}:~\mathcal{M}^{op} \rightarrow \mathcal{N}$.\\ Here, $\mathcal{F} \otimes C:~\mathcal{M}^{op} \rightarrow \mathcal{N}$ such that $(\mathcal{F} \otimes C)(M)=\mathcal{F}(M) \otimes C$ for all objects $M$ in $\mathcal{M}^{op}$. $\mathcal{N}$ is a right $\mathcal{C}$ module category, so $\mathcal{F}(M) \otimes C$ is an object in $\mathcal{N}$. We can show that $\mathfrak{G}$ is a biexact bifunctor in a similar way. After that, we find a right associativity constraint and a right unit constraint satisfying the required conditions.\\ Finally, we find a middle associativity constraint satisfying two commutative diagrams, hence the lemma is proved. \end{proof} \begin{cor} \label{35} If $\mathcal{M}$ is an $(\mathcal{A}-\mathcal{B})$ bimodule category, then \begin{align} Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{B}) \simeq \mathcal{M} \simeq Hom^{re}_{\mathcal{A}}(\mathcal{A}^{op},~\mathcal{M}) \end{align} canonically as $(\mathcal{A}-\mathcal{B})$ bimodule categories.
\end{cor} \begin{lem} \label{40} \cite{os} $\mathcal{F}_{\mathcal{M}}:~\mathcal{A} \rightarrow Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$ is a monoidal functor that takes any object $A$ in $\mathcal{A}$ to $A\otimes_{l\mathcal{M}} -$, where $\mathcal{A}$ is a finite, braided monoidal category and $\mathcal{M}$ is a left $\mathcal{A}$ module category. \end{lem} \begin{proof} First, we want to show that $A\otimes -$ is a right exact $\mathcal{A}$ module functor for all objects $A$ in $\mathcal{A}$. The right exactness is clear by definition of module category.\\ Hom sets are vector spaces, because $\mathcal{M}$ is finite by Lemma \ref{79}. The assignment is $k$ linear since all maps $f:~Hom_{\mathcal{A}}(A,~B)\rightarrow Hom(A\otimes -,~B\otimes -)$ are linear for all objects $A$ and $B$ in $\mathcal{A}$.\\ Indeed, for morphisms $g_1,~g_2:~A\rightarrow B$ in $\mathcal{A}$ and scalars $k_1,~k_2\in k$, the natural transformation $f(k_1g_1+k_2g_2)$ has components $(k_1g_1+k_2g_2)\otimes id$, which equal $k_1(g_1\otimes id)+k_2(g_2\otimes id)$ since the tensor product is bilinear on morphisms, so $f(k_1g_1+k_2g_2)=k_1f(g_1)+k_2f(g_2)$.\\ For all objects $B$ in $\mathcal{A}$ and for all objects $M$ in $\mathcal{M}$, we obtain \begin{align*} \xymatrix{f_{BM}=a_{BAM}\circ (c_{AB}\otimes id_M)\circ a^{-1}_{ABM}:~A\otimes (B\otimes M)\ar[rr] & & B\otimes (A\otimes M)} \end{align*} by using the associativity constraint and the braiding as in the following diagram. \begin{align*} \xymatrix{ A\otimes (B\otimes M) \ar[rr]^{\cong}_{a_{ABM}^{-1}} & & (A\otimes B)\otimes M\ar[rr]^{\cong}_{c_{AB}\otimes id_M} & & (B\otimes A)\otimes M\ar[rr]^{\cong}_{a_{BAM}} & & B\otimes (A\otimes M)}.
\end{align*} It is easy to see that the compatibility conditions are satisfied, hence it is a module functor.\\ Now, we want to prove that the assignment $\mathcal{F}_{\mathcal{M}}:~\mathcal{A} \rightarrow Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$ taking any object $A$ in $\mathcal{A}$ to the right exact module functor $\mathcal{F}_{\mathcal{M}}(A):~\mathcal{M} \rightarrow \mathcal{M}$ defined by $\mathcal{F}_{\mathcal{M}}(A)(M)=A\otimes M$ for all objects $M$ in $\mathcal{M}$ is a monoidal functor.\\ There exists a natural transformation $\gamma_{AB}:~\mathcal{F}_{\mathcal{M}}(A) \circ \mathcal{F}_{\mathcal{M}}(B) \rightarrow \mathcal{F}_{\mathcal{M}}(A\otimes B)$.\\ $(\mathcal{F}_{\mathcal{M}}(A)\circ \mathcal{F}_{\mathcal{M}}(B))(M)=\mathcal{F}_{\mathcal{M}}(A)(B\otimes M)=A\otimes (B\otimes M)$ and $\mathcal{F}_{\mathcal{M}}(A\otimes B)(M)=(A\otimes B)\otimes M$ for all $M$ in $\mathcal{M}$.\\ We have an isomorphism $a^{-1}:~A\otimes (B\otimes M)\rightarrow (A\otimes B)\otimes M$. This says that \begin{align*} a^{-1}:~(\mathcal{F}_{\mathcal{M}}(A) \circ \mathcal{F}_{\mathcal{M}}(B))(M)\rightarrow \mathcal{F}_{\mathcal{M}}(A\otimes B)(M) \end{align*} is an isomorphism for all $M$ in $\mathcal{M}$, so $\gamma_{AB}$ is a natural isomorphism.\\ Also, $\mathcal{F}_{\mathcal{M}}(I)(M)=I\otimes M\cong M$ by the left unit constraint, so $id_{\mathcal{M}}\rightarrow \mathcal{F}_{\mathcal{M}}(I)$ is a natural isomorphism. We may show that the diagrams commute. So, we get the result. \end{proof} \begin{defn} An $(\mathcal{A}-\mathcal{B})$ bimodule category $\mathcal{M}$, where $\mathcal{A}$ and $\mathcal{B}$ are finite monoidal categories, is invertible if the monoidal functors $\mathcal{B} \rightarrow Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$ taking all objects $B$ in $\mathcal{B}$ to $-\otimes B$ and $\mathcal{A} \rightarrow Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{M}^{op})=Hom_{\mathcal{B}}(\mathcal{M},~\mathcal{M})$ taking all objects $A$ in $\mathcal{A}$ to $A\otimes -$ are equivalences as bimodule categories.
\end{defn} The following proposition is found in \cite{etnios} for fusion categories. \begin{prop} \label{41} An $(\mathcal{A}-\mathcal{B})$ bimodule category $\mathcal{M}$ for given finite monoidal categories $\mathcal{A}$ and $\mathcal{B}$ is invertible if and only if the monoidal functor \begin{align*} \mathcal{R}:~\mathcal{B}^{rev} \rightarrow Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M}),~\mathcal{R}(B)(M)=M\otimes B \end{align*} for all objects $B$ in $\mathcal{B}$ and $M$ in $\mathcal{M}$ is an equivalence of $(\mathcal{B}-\mathcal{B})$ bimodule categories, if and only if the monoidal functor \begin{align*} \mathcal{L}:~\mathcal{A} \rightarrow Hom^{re}_{\mathcal{B}}(\mathcal{M},~\mathcal{M}),~\mathcal{L}(A)(M)=A\otimes M \end{align*} for all objects $A$ in $\mathcal{A}$ and $M$ in $\mathcal{M}$ is an equivalence of $(\mathcal{A}-\mathcal{A})$ bimodule categories. \end{prop} \begin{proof} If $\mathcal{M}$ is invertible, then $\mathcal{B} \simeq Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$ as $(\mathcal{B}-\mathcal{B})$ bimodule categories. So, $\mathcal{B}^{rev} \simeq Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$.\\ Similarly, $\mathcal{A} \simeq Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{M}^{op})=Hom^{re}_{\mathcal{B}}(\mathcal{M},~\mathcal{M})$ as $(\mathcal{A}-\mathcal{A})$ bimodule categories.\\ Conversely, if $\mathcal{R}$ is an equivalence, then $\mathcal{B}^{rev} \simeq Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$ as $(\mathcal{B}-\mathcal{B})$ bimodule categories, so $\mathcal{B} \simeq Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M})$ and if $\mathcal{L}$ is an equivalence, then \begin{align*} \mathcal{A} \simeq Hom^{re}_{\mathcal{B}}(\mathcal{M},~\mathcal{M})=Hom^{re}_{\mathcal{B}}(\mathcal{M}^{op},~\mathcal{M}^{op}) \end{align*} as $(\mathcal{A}-\mathcal{A})$ bimodule categories. If $\mathcal{R}$ and $\mathcal{L}$ are equivalences, then $\mathcal{M}$ is invertible.\\ See \cite{etnios} for the rest of the proof.
\end{proof} \begin{note} If $\mathcal{A}$ is a finite braided monoidal category and $\mathcal{M}$ is an invertible left $\mathcal{A}$ module category such that $\mathcal{M} \simeq \mathcal{A} A$, then $\mathcal{M}$ is an $(\mathcal{A}-\mathcal{A})$ bimodule category with right action given by $M\otimes_{r\mathcal{M}} A=A\otimes_{l\mathcal{M}} M$ for all objects $A$ in $\mathcal{A}$ and $M$ in $\mathcal{M}$. \end{note} \begin{remark} \label{38} If $\mathcal{A}$ is a finite braided monoidal category, $\mathcal{M}$ is an invertible $(\mathcal{A}-\mathcal{A})$ bimodule category and $A$ is an object in $\mathcal{A}$, then we obtain two monoidal equivalences \begin{align*} \mathcal{R}:~\mathcal{A}^{rev} \rightarrow Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M}),~\mathcal{R}(A)(M)=M\otimes A\\ \mathcal{L}:~\mathcal{A} \rightarrow Hom^{re}_{\mathcal{A}}(\mathcal{M},~\mathcal{M}),~\mathcal{L}(A)(M)=A\otimes M \end{align*} by Proposition \ref{41} for all objects $M$ in $\mathcal{M}$. \end{remark} \begin{cor} \cite{etnios} Assume that the left module category $\mathcal{M}$ over a fusion category $\mathcal{A}$ is invertible. Then, it is an indecomposable left module category over $\mathcal{A}$. \end{cor} \begin{proof} Assume for contradiction that $\mathcal{M}$ is a decomposable module category over the fusion category $\mathcal{A}$ under the given conditions. Then, $\mathcal{M} \simeq \mathcal{P} \oplus \mathcal{Q}$ for indecomposable module categories $\mathcal{P}$ and $\mathcal{Q}$ over $\mathcal{A}$, and every object $A$ in $\mathcal{A}$ decomposes as $A=\underset{i}{\oplus} A_i$ for simple objects $A_i$ in $\mathcal{A}$, since $\mathcal{A}$ is a semisimple monoidal category.
The monoidal functor $\mathcal{L}:~\mathcal{A} \rightarrow Hom^{re}(\mathcal{M},~\mathcal{M})$, $\mathcal{L}(A)(M)=A\otimes M$ for all objects $A$ in $\mathcal{A}$ and $M$ in $\mathcal{M}$, is an equivalence of monoidal categories under these conditions.\\ $A\otimes M=(\underset{i}{\oplus} A_i)\otimes_{l\mathcal{M}} M\simeq (\underset{i}{\oplus} A_i)\otimes_{l\mathcal{M}} (P\oplus Q)=((\underset{i}{\oplus} A_i)\otimes_{l\mathcal{P}} P)\oplus ((\underset{i}{\oplus} A_i)\otimes_{l\mathcal{Q}} Q)\\ =\underset{i}{\oplus}(A_i\otimes_{l\mathcal{P}} P) \oplus \underset{i}{\oplus}(A_i\otimes_{l\mathcal{Q}} Q)$, where $M\simeq P\oplus Q$ for objects $P$ in $\mathcal{P}$ and $Q$ in $\mathcal{Q}$.\\ However, $\underset{i}{\oplus}(A_i\otimes_{l\mathcal{P}} P) \oplus \underset{i}{\oplus}(A_i\otimes_{l\mathcal{Q}} Q)$ is an object in $\mathcal{P} \oplus \mathcal{Q}$. This means that $\underset{i}{\oplus}(A_i\otimes_{l\mathcal{P}} P)$ is an object in $\mathcal{P}$ and $\underset{i}{\oplus}(A_i\otimes_{l\mathcal{Q}} Q)$ is an object in $\mathcal{Q}$. So, $\mathcal{P}=\underset{i}{\oplus}(A_i\otimes \mathcal{P})$ and $\mathcal{Q}=\underset{i}{\oplus}(A_i\otimes \mathcal{Q})$. This is a contradiction since $\mathcal{P}$ and $\mathcal{Q}$ are indecomposable module categories over $\mathcal{A}$. As a result, $\mathcal{M}$ is indecomposable. \end{proof} \section{2-Category of FRBSU Monoidal Categories} \begin{lem} \label{47} The composition of two monoidal functors is again a monoidal functor.
\end{lem} \begin{proof} There exists a functor $Hom(\mathcal{B},~\mathcal{C})\times Hom(\mathcal{A},~\mathcal{B})\rightarrow Hom(\mathcal{A},~\mathcal{C})$ taking any pair $(\mathcal{F},~\mathcal{G})$ to $\mathcal{F} \circ \mathcal{G}$ for given monoidal functors $(\mathcal{F},~\beta,~\varphi_1):~\mathcal{B} \rightarrow \mathcal{C}$ in $Hom(\mathcal{B},~\mathcal{C})$ and $(\mathcal{G},~\psi,~\varphi_2):~\mathcal{A} \rightarrow \mathcal{B}$ in $Hom(\mathcal{A},~\mathcal{B})$.\\ $\beta$ is a family of natural isomorphisms $\beta_{XY}:~\mathcal{F}(X) \otimes \mathcal{F}(Y) \rightarrow \mathcal{F}(X\otimes Y)$ for all objects $X$, $Y$ in $\mathcal{B}$ and $\psi$ is a family of natural isomorphisms $\psi_{AB}:~\mathcal{G}(A) \otimes \mathcal{G}(B) \rightarrow \mathcal{G}(A\otimes B)$ for all objects $A$ and $B$ in $\mathcal{A}$.\\ We want to show that $\mathcal{F} \circ \mathcal{G}$ is a monoidal functor. We define $\gamma$ as a family of natural isomorphisms $\gamma_{AB}=\mathcal{F}(\psi_{AB}) \circ \beta_{\mathcal{G}(A) \mathcal{G}(B)}$ for all objects $A$ and $B$ in $\mathcal{A}$ as in the following diagram.
\begin{align*} \begin{tikzpicture} \node (A) at (0,0) {$\mathcal{F}(\mathcal{G}(A)) \otimes\mathcal{F}(\mathcal{G}(B))$}; \node (B) at (6,0) {$\mathcal{F}(\mathcal{G}(A) \otimes \mathcal{G}(B))$}; \node (C) at (12,0) {$\mathcal{F}(\mathcal{G}(A\otimes B))$}; \node (D) at (0,-2) {$(\mathcal{F} \circ \mathcal{G})(A)\otimes (\mathcal{F} \circ \mathcal{G})(B)$}; \node (E) at (6, -2) {$\mathcal{F}(\mathcal{G}(A) \otimes \mathcal{G}(B))$}; \node (F) at (12,-2) {$(\mathcal{F} \circ \mathcal{G})(A\otimes B)$}; \draw[thick, double] (0, -0.2)--(0, -1.8) [xshift=5pt]; \draw[thick, double] (12, -0.2)--(12, -1.8) [xshift=5pt]; \path[->] (A) edge [right] node [above] {$\beta_{\mathcal{G}(A) \mathcal{G}(B)}$} (B); \path[->] (B) edge [right] node [above] {$\mathcal{F}(\psi_{AB})$} (C); \path[->] (D) edge [right] node [above] {$\beta_{\mathcal{G}(A) \mathcal{G}(B)}$} (E); \path[->] (E) edge [right] node [above] {$\mathcal{F}(\psi_{AB})$} (F); \end{tikzpicture} \end{align*} $I\cong \mathcal{G}(I)$, so $I\cong \mathcal{F}(I) \cong (\mathcal{F} \circ \mathcal{G})(I)$ for the unit object $I$ in $\mathcal{A}$. It is easy to prove the commutativity of the required diagrams.\\ If we have natural transformations $\theta_1:~\mathcal{F}_1\Rightarrow \mathcal{F}_2$ in $Hom(\mathcal{B},~\mathcal{C})$ and $\theta_2:~\mathcal{G}_1 \Rightarrow \mathcal{G}_2$ in $Hom(\mathcal{A},~\mathcal{B})$, then the composition is again a natural transformation. This gives a morphism in $Hom(\mathcal{A},~\mathcal{C})$ corresponding to the pair $(\theta_1,~\theta_2)$ in $Hom(\mathcal{B},~\mathcal{C})\times Hom(\mathcal{A},~\mathcal{B})$. \end{proof} \begin{prop} The collection of all FRBSU monoidal categories forms a 2-category $\mathcal{MCT}$ in which 1 arrows are braided monoidal functors of those categories and 2 arrows are natural isomorphisms between those monoidal functors. \end{prop} \begin{proof} $\mathfrak{X}=\mathcal{MCT}$. 
Objects are FRBSU monoidal categories.\\ $\mathcal{MCT}(\mathcal{A},~\mathcal{B})$ is a category in which 1 arrows are braided monoidal functors $\mathcal{K}:~\mathcal{A} \rightarrow \mathcal{B}$ and 2 arrows are natural isomorphisms $\theta:~\mathcal{K} \Rightarrow \mathcal{L}$ for all monoidal functors $\mathcal{K},~\mathcal{L}:~\mathcal{A} \rightarrow \mathcal{B}$. We show 2 arrows as in the following diagram. \begin{align*} \begin{tikzpicture}[out=145, in=145, relative] \node (A) at (0,0) {$\mathcal{A}$}; \node (B) at (3,0) {$\mathcal{B}$}; \draw[->, thick, double] (1.5,0.25) -- (1.5,-0.25) [xshift=5pt] node[right, midway] {$\theta$} (B); \path[->] (A) edge [bend left] node [above] {$\mathcal{K}$} (B); \path[->] (A) edge [bend right] node [below] {$\mathcal{L}$} (B); \end{tikzpicture} \end{align*} $\mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{C}}:~\mathcal{MCT}(\mathcal{B},~\mathcal{C}) \times \mathcal{MCT}(\mathcal{A},~\mathcal{B}) \rightarrow \mathcal{MCT}(\mathcal{A},~\mathcal{C})$ is a functor taking the pair $(\mathcal{L},~\mathcal{K})$ to $\mathcal{L} \circ \mathcal{K}$ and the pair $(\theta,~\gamma)$ to $\theta \star \gamma$ by Lemma \ref{47}.\\ $\mathcal{F}_{\mathcal{A}}:~1\rightarrow \mathcal{MCT}(\mathcal{A},~\mathcal{A})$ is a functor sending the object $\star$ in $1$ to 1 arrow $id_{\mathcal{A}}:~\mathcal{A}\rightarrow \mathcal{A}$. 
\begin{align*} a_{\mathcal{A} \mathcal{B} \mathcal{C} \mathcal{D}}:~\mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{D}} \circ (\mathcal{F}_{\mathcal{B} \mathcal{C} \mathcal{D}} \times 1) \rightarrow \mathcal{F}_{\mathcal{A} \mathcal{C} \mathcal{D}} \circ (1\times \mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{C}}) \end{align*} \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\mathcal{MCT}(\mathcal{C},~\mathcal{D}) \times \mathcal{MCT}(\mathcal{B},~\mathcal{C}) \times \mathcal{MCT}(\mathcal{A},~\mathcal{B})$}; \node (C) at (8, 0) {$\mathcal{MCT}(\mathcal{B},~\mathcal{D}) \times \mathcal{MCT}(\mathcal{A},~\mathcal{B})$}; \node (B) at (0,-2) {$\mathcal{MCT}(\mathcal{C},~\mathcal{D})\times \mathcal{MCT}(\mathcal{A},~\mathcal{C})$}; \node (D) at (8, -2) {$\mathcal{MCT}(\mathcal{A},~\mathcal{D})$}; \draw[->, thick, double] (4,-0.75)--(3, -1.25) [xshift=5pt] node [right, midway] {$a_{\mathcal{A} \mathcal{B} \mathcal{C} \mathcal{D}}$} (B); \path[->] (A) edge node [above] {$\mathcal{F}_{\mathcal{B} \mathcal{C} \mathcal{D}} \times 1$} (C); \path[->] (A) edge node [right, midway] {$1\times \mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{C}}$} (B); \path[->] (B) edge node [below] {$\mathcal{F}_{\mathcal{A} \mathcal{C} \mathcal{D}}$} (D); \path[->] (C) edge node [right, midway] {$\mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{D}}$} (D); \end{tikzpicture} \end{align*} is a natural isomorphism. 
\begin{align*} (\mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{D}} \circ (\mathcal{F}_{\mathcal{B} \mathcal{C} \mathcal{D}} \times 1))(\mathcal{K},~\mathcal{L},~\mathcal{M})=\mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{D}}(\mathcal{K} \circ \mathcal{L},~\mathcal{M})=(\mathcal{K}\circ \mathcal{L})\circ \mathcal{M}, \end{align*} \begin{align*} (\mathcal{F}_{\mathcal{A} \mathcal{C} \mathcal{D}} \circ (1\times \mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{C}}))(\mathcal{K},~\mathcal{L},~\mathcal{M})=\mathcal{F}_{\mathcal{A} \mathcal{C} \mathcal{D}}(\mathcal{K},~\mathcal{L}\circ \mathcal{M})=\mathcal{K}\circ (\mathcal{L}\circ \mathcal{M}) \end{align*} for all 1 arrows $\mathcal{K},~\mathcal{L},~\mathcal{M}$. $(\mathcal{K}\circ \mathcal{L})\circ \mathcal{M}=\mathcal{K}\circ (\mathcal{L}\circ \mathcal{M})$ and $a_{\mathcal{A} \mathcal{B} \mathcal{C} \mathcal{D}}(\mathcal{K},~\mathcal{L},~\mathcal{M})=id_{\mathcal{K} \circ \mathcal{L}\circ \mathcal{M}}$.\\ For all morphisms $\alpha:~(\mathcal{K}_1,~\mathcal{L}_1,~\mathcal{M}_1)\rightarrow (\mathcal{K}_2,~\mathcal{L}_2,~\mathcal{M}_2)$ in \begin{align*} \mathcal{MCT}(\mathcal{C},~\mathcal{D}) \times \mathcal{MCT}(\mathcal{B},~\mathcal{C}) \times \mathcal{MCT}(\mathcal{A},~\mathcal{B}), \end{align*} the following diagram \begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$(\mathcal{K}_1 \circ \mathcal{L}_1) \circ \mathcal{M}_1$}; \node (C) at (8, 0) {$(\mathcal{K}_2 \circ \mathcal{L}_2) \circ \mathcal{M}_2$}; \node (B) at (0,-2) {$\mathcal{K}_1 \circ (\mathcal{L}_1 \circ \mathcal{M}_1)$}; \node (D) at (8, -2) {$\mathcal{K}_2 \circ (\mathcal{L}_2 \circ \mathcal{M}_2)$}; \path[->] (A) edge node [above] {$(\mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{D}} \circ (\mathcal{F}_{\mathcal{B} \mathcal{C} \mathcal{D}} \times 1))(\alpha)$} (C); \path[->] (A) edge node [left, midway] {$a_{\mathcal{A} \mathcal{B} \mathcal{C} \mathcal{D}}(\mathcal{K}_1,~\mathcal{L}_1,~\mathcal{M}_1)$} (B); \path[->] (B) edge node [below] 
{$(\mathcal{F}_{\mathcal{A} \mathcal{C} \mathcal{D}} \circ (1\times \mathcal{F}_{\mathcal{A} \mathcal{B} \mathcal{C}}))(\alpha)$} (D); \path[->] (C) edge node [right, midway] {$a_{\mathcal{A} \mathcal{B} \mathcal{C} \mathcal{D}}(\mathcal{K}_2,~\mathcal{L}_2,~\mathcal{M}_2)$} (D); \end{tikzpicture} \end{align*} commutes. As a result, each component $a_{\mathcal{A} \mathcal{B} \mathcal{C} \mathcal{D}}(\mathcal{K},~\mathcal{L},~\mathcal{M})$ is an isomorphism for all 1 arrows $\mathcal{K}$, $\mathcal{L}$ and $\mathcal{M}$, so $a_{\mathcal{A} \mathcal{B} \mathcal{C} \mathcal{D}}$ is a natural isomorphism. Similarly, we show $r$ and $l$ are natural isomorphisms. Hence, $\mathcal{MCT}$ is a 2-category. \end{proof} \section{Crossed Modules and Morphisms of Crossed Modules} A crossed module $\xymatrix{\mathfrak{C}=[N\ar[r]^{h_{\mathfrak{C}}} & M]}$ is a pair of groups $(M,~N)$ such that $M$ acts on $N$ by $M\times N\rightarrow N$ taking $(m,~n)$ to $^mn$ and $h_{\mathfrak{C}}:~N \rightarrow M$ is a group homomorphism satisfying the conditions $h_{\mathfrak{C}}(^m n)=mh_{\mathfrak{C}}(n)m^{-1}$ and $^{h_{\mathfrak{C}}(n)} n'=nn'n^{-1}$ for all $n,~n'\in N$ and $m\in M$. \subsection{Strict Morphisms and Butterflies Between Crossed Modules} We use \cite{no} and \cite{alno} as references here. \begin{defn} For given crossed modules $\xymatrix{\mathfrak{C_1}=[N_1\ar[r]^{h_{\mathfrak{C_1}}} & M_1]}$ and $\xymatrix{\mathfrak{C_2}=[N_2\ar[r]^{h_{\mathfrak{C_2}}} & M_2]}$, a strict morphism $F=(f_1,~f_2)$ between them is a pair of group homomorphisms $f_1:~M_1 \rightarrow M_2$ and $f_2:~N_1\rightarrow N_2$ such that $h_{\mathfrak{C_2}} \circ f_2=f_1\circ h_{\mathfrak{C_1}}$ and $f_2(^mn)=^{f_1(m)}f_2(n)$ for all $n\in N_1$ and $m\in M_1$. We depict this morphism in the following diagram.
\begin{align} \label{24} \begin{tikzpicture} \node (A) at (0, 0) {$N_1$}; \node (B) at (2, 0) {$N_2$}; \node (C) at (0, -2) {$M_1$}; \node (D) at (2, -2) {$M_2$}; \path[->] (A) edge node [above] {$f_2$} (B); \path[->] (A) edge node [left] {$h_{\mathfrak{C_1}}$} (C); \path[->] (B) edge node [right] {$h_{\mathfrak{C_2}}$} (D); \path[->] (C) edge node [below] {$f_1$} (D); \end{tikzpicture} \end{align} \end{defn} \begin{defn} The strict morphism in Diagram \ref{24} is an equivalence of crossed modules if it induces isomorphisms $\pi_2(\mathfrak{C_1})\cong \pi_2(\mathfrak{C_2})$ and $\pi_1(\mathfrak{C_1})\cong \pi_1(\mathfrak{C_2})$. \end{defn} \begin{defn} The commutative diagram of group homomorphisms \begin{equation} \label{23} \xymatrix{N_1\ar[rd]^f \ar[dd]_{h_{\mathfrak{C_1}}} & & N_2\ar[ld]_k \ar[dd]^{h_{\mathfrak{C_2}}} \\ & E\ar[rd]_t \ar[ld]^g & \\ M_1 & & M_2} \end{equation} is a butterfly between two crossed modules $\xymatrix{\mathfrak{C_1}=[N_1\ar[r]^{h_{\mathfrak{C_1}}} & M_1]}$ and $\xymatrix{\mathfrak{C_2}=[N_2\ar[r]^{h_{\mathfrak{C_2}}} & M_2]}$ if it satisfies the following axioms. \begin{enumerate} \item Both diagonal sequences are complexes, \item The NE-SW sequence is a group extension, \item $k(^{t(x)}n_2)=xk(n_2)x^{-1}$ and $f(^{g(x)}n_1)=xf(n_1)x^{-1}$ for all $x\in E,~n_1\in N_1,~n_2\in N_2$. \end{enumerate} \end{defn} We denote the above butterfly by $(E,~t,~g,~k,~f)$, or by $P:~\mathfrak{C_1} \rightarrow \mathfrak{C}_2$ for crossed modules $\mathfrak{C_1}$ and $\mathfrak{C}_2$. \begin{defn} A butterfly is reversible (an equivalence) if both diagonals are extensions, that is, if the NW-SE sequence is also short exact. It is splittable if there exists a splitting homomorphism $s:~M_1\rightarrow E$ such that $g\circ s=id_{M_1}$, which is the same as the condition that the NE-SW sequence is a split extension. \end{defn} The inverse of the reversible butterfly in Diagram \ref{23} is the butterfly shown in the following diagram.
\begin{equation} \xymatrix{N_2\ar[rd]^k \ar[dd]_{h_{\mathfrak{C_2}}} & & N_1\ar[ld]_f \ar[dd]^{h_{\mathfrak{C_1}}} \\ & E\ar[rd]_g \ar[ld]^t & \\ M_2 & & M_1} \end{equation} \begin{prop} \label{29} Every split butterfly corresponds to a unique strict morphism $(f_1,~f_2)$ between two crossed modules. \end{prop} \begin{proof} Assume that $(f_1,~f_2)$ is a strict morphism as in Diagram \ref{24}. We get a commutative diagram \begin{equation} \label{25} \xymatrix{ N_1 \ar[rd]^f \ar[dd]_{h_{\mathfrak{C}_1}} & & N_2 \ar[ld]_k \ar[dd]^{h_{\mathfrak{C}_2}}\\ & N_2\ltimes M_1\ar[rd]_t \ar[ld]^g & \\ M_1 & & M_2} \end{equation} in which the NE-SW sequence is a split extension, that is, $E=N_2\ltimes M_1$ with the product law $(n_1,~m_1).(n_2,~m_2)=(n_1.^{f_1(m_1)}n_2,~m_1.m_2)$ for all $m_1,~m_2\in M_1$ and $n_1,~n_2\in N_2$.\\ Here, we define $g$ as the projection, $k(n)=(n,~1)$ for all $n\in N_2$, $f(n)=(f_2(n^{-1}),~h_{\mathfrak{C_1}}(n))$ for all $n\in N_1$ and $t(n,~m)=h_{\mathfrak{C_2}}(n).f_1(m)$ for all $n\in N_2$ and $m\in M_1$.\\ Conversely, if we are given a split butterfly as in Diagram \ref{25}, we can find a canonical splitting homomorphism $s:~M_1\rightarrow N_2\ltimes M_1$ taking $m$ to $(1,~m)$ for all $m\in M_1$. We define $f_1=t\circ s$. $f_2$ can be defined from the equation $s\circ h_{\mathfrak{C_1}}=f.(k\circ f_2)$. We can see that these group homomorphisms satisfy the required conditions. \end{proof} \subsection{Strict Morphisms and 2-Category of Crossed Modules} \begin{lem} \cite{no} The collection $\mathcal{XM}$ consisting of crossed modules forms a category whose morphisms are strict morphisms of crossed modules as defined in Diagram \ref{24}.
\end{lem} \begin{defn} A pointed natural transformation $PNT:~G\Rightarrow F$ between two strict morphisms $F=(f_1,~f_2)$ and $G=(g_1,~g_2)$ for the crossed modules $\xymatrix{\mathfrak{C_1}=[N_1\ar[r]^{h_{\mathfrak{C_1}}} & M_1]}$ and $\xymatrix{\mathfrak{C_2}=[N_2\ar[r]^{h_{\mathfrak{C_2}}} & M_2]}$ is a crossed homomorphism $\gamma:~M_1\rightarrow N_2$ such that for all $a,~a'\in M_1$, \begin{align} \gamma(aa')=(^{f_1(a')}\gamma(a)).\gamma(a') \end{align} and the following conditions are satisfied. \begin{enumerate} \item $g_1(a)=f_1(a)h_{\mathfrak{C_2}}(\gamma(a^{-1}))$ for all $a\in M_1$ \item $g_2(b)=f_2(b)\gamma(h_{\mathfrak{C_1}}(b^{-1}))$ for all $b\in N_1$ \end{enumerate} \end{defn} \begin{remark} A pointed natural transformation $PNT:~G\Rightarrow F$ between strict morphisms of the crossed modules $\xymatrix{\mathfrak{C_1}=[N_1\ar[r]^{h_{\mathfrak{C_1}}} & M_1]}$ and $\xymatrix{\mathfrak{C_2}=[N_2\ar[r]^{h_{\mathfrak{C_2}}} & M_2]}$, given by a crossed homomorphism $\gamma:~M_1\rightarrow N_2$, is an isomorphism if there exists a pointed natural transformation $PNT':~F\Rightarrow G$ given by the crossed homomorphism $\gamma':~M_1\rightarrow N_2$ defined by $\gamma'(m)=\gamma(m)^{-1}$ for all $m\in M_1$. \end{remark} \begin{lem} \label{600} There exists a 2-category $\underline{\mathcal{XM}}$ whose objects are crossed modules, whose 1 arrows are strict morphisms between those crossed modules and whose 2 arrows are pointed natural transformations between those strict morphisms, such that the identity 2 arrows $PNT:~G\Rightarrow G$ are the trivial pointed natural transformations for any strict morphism $G$ between crossed modules $\xymatrix{\mathfrak{C_1}=[N_1\ar[r]^{h_{\mathfrak{C_1}}} & M_1]}$ and $\xymatrix{\mathfrak{C_2}=[N_2\ar[r]^{h_{\mathfrak{C_2}}} & M_2]}$ in $\underline{\mathcal{XM}}$. \end{lem} \begin{proof} We take $\mathfrak{X}=\underline{\mathcal{XM}}$.
First, we need to show that $\underline{\mathcal{XM}}(\mathfrak{C_1},~\mathfrak{C_2})$ is a category whose objects are 1 arrows and morphisms are 2 arrows for the crossed modules $\xymatrix{\mathfrak{C_1}=[N_1\ar[r]^{h_{\mathfrak{C_1}}} & M_1]}$ and $\xymatrix{\mathfrak{C_2}=[N_2\ar[r]^{h_{\mathfrak{C_2}}} & M_2]}$ in $\underline{\mathcal{XM}}$.\\ The identity morphism is the trivial pointed natural transformation.\\ We define the composition of two pointed natural transformations $PNT_1:~G\Rightarrow F$ which is a crossed homomorphism $\gamma_1$ and $PNT_2:~F\Rightarrow E$ which is a crossed homomorphism $\gamma_2$ between the strict morphisms as $\gamma=\gamma_2. \gamma_1:~M_1\rightarrow N_2$. For all elements $a,~a'$ in $M_1$, we get\\ $\gamma(a.a')=\gamma_2(a.a') . \gamma_1(a.a')=(^{e_1(a')}\gamma_2(a))\gamma_2(a') .(^{f_1(a')}\gamma_1(a))\gamma_1(a')$\\ $=(^{e_1(a')}\gamma_2(a))\gamma_2(a')(^{e_1(a')h_{\mathfrak{C_2}}(\gamma_2((a')^{-1}))}\gamma_1(a))\gamma_1(a')$ since $f_1(a')=e_1(a')h_{\mathfrak{C_2}}(\gamma_2((a')^{-1}))$\\ $=(^{e_1(a')}\gamma_2(a))\gamma_2(a')(^{e_1(a')}\gamma_2((a')^{-1})\gamma_1(a)\gamma_2(a'))\gamma_1(a')$ since\\ $^{h_{\mathfrak{C_2}}(\gamma_2((a')^{-1}))}\gamma_1(a)=\gamma_2((a')^{-1}) . \gamma_1(a) . \gamma_2(a')$ by definition of crossed module\\ $=(^{e_1(a')}\gamma_2(a)). \gamma_2(a'). (^{e_1(a')}\gamma_2((a')^{-1})). (^{e_1(a')}\gamma_1(a)). (^{e_1(a')}\gamma_2(a')) . \gamma_1(a')$\\ $=(^{e_1(a')}\gamma_2(a)). \gamma_2(a').(^{e_1(a')}\gamma_2((a')^{-1})). \gamma_2(a') . \gamma_2((a')^{-1}) . (^{e_1(a')}\gamma_1(a)) . (^{e_1(a')}\gamma_2(a')) . \gamma_1(a')$\\ $=(^{e_1(a')}\gamma_2(a)). \gamma_2(a') . \gamma_2((a')^{-1} . a') . \gamma_2((a')^{-1}) . (^{e_1(a')}\gamma_1(a)) . (^{e_1(a')}\gamma_2(a')) . \gamma_2(a') . \gamma_2((a')^{-1}) . \gamma_1(a')$\\ $=(^{e_1(a')}\gamma_2(a)). (^{e_1(a')}\gamma_1(a)). \gamma_2(a' a'). \gamma_2((a')^{-1}). \gamma_1(a')$\\ $=(^{e_1(a')}\gamma_2(a). \gamma_1(a)). \gamma_2(a'). 
\gamma_1(a')$\\ $=(^{e_1(a')}\gamma(a)). \gamma(a')$.\\ $g_1(a)=f_1(a). h_{\mathfrak{C_2}}(\gamma_1(a^{-1}))$ since $PNT_1:~G\Rightarrow F$ is a pointed natural transformation which is a crossed homomorphism $\gamma_1$ and $f_1(a)=e_1(a). h_{\mathfrak{C_2}}(\gamma_2(a^{-1}))$ since $PNT_2:~F\Rightarrow E$ is a pointed natural transformation which is a crossed homomorphism $\gamma_2$. Hence, \begin{align*} g_1(a)=e_1(a). h_{\mathfrak{C_2}}(\gamma_2(a^{-1})). h_{\mathfrak{C_2}}(\gamma_1(a^{-1}))=e_1(a). h_{\mathfrak{C_2}}((\gamma_2. \gamma_1)(a^{-1}))=e_1(a). h_{\mathfrak{C_2}}(\gamma(a^{-1})) \end{align*} as desired. Similarly, we may show the other part. As a result, $\gamma$ is a crossed homomorphism and gives a pointed natural transformation $PNT_3:~G\Rightarrow E$.\\ It is clear that the composition is associative.\\ For every pointed natural transformation $PNT:~G\Rightarrow F$ with crossed homomorphism $\gamma$ and the identity $id:~F\Rightarrow F$, the composition $id\star PNT:~G\Rightarrow F$ is equal to $PNT$. Similarly, we show the other part.\\ As a result, $\underline{\mathcal{XM}}(\mathfrak{C_1},~\mathfrak{C_2})$ is a category.\\ The mapping $\mathcal{F}_{\mathfrak{C_1} \mathfrak{C_2} \mathfrak{C_3}}:~\underline{\mathcal{XM}}(\mathfrak{C_2},~\mathfrak{C_3})\times \underline{\mathcal{XM}}(\mathfrak{C_1},~\mathfrak{C_2}) \rightarrow \underline{\mathcal{XM}}(\mathfrak{C_1},~\mathfrak{C_3})$ is a functor. We send each pair $(G,~F)$ to $G\circ F$ where $G=(g_1,~g_2)$, $F=(f_1,~f_2)$ and $G\circ F=(g_1\circ f_1,~g_2\circ f_2)$ are strict morphisms between the corresponding crossed modules.\\ We send each pair of pointed natural transformations $PNT_2:~E\Rightarrow K$ which is a crossed homomorphism $\gamma_2$ and $PNT_1:~G\Rightarrow F$ which is a crossed homomorphism $\gamma_1$ to their composition $PNT_3:~E\circ G\Rightarrow K\circ F$. Here, $G=(g_1,~g_2)$, $F=(f_1,~f_2)$, $E=(e_1,~e_2)$ and $K=(k_1,~k_2)$ are strict morphisms.
We need to define a crossed homomorphism $\gamma_3$.\\ We take $\gamma_3=(k_2\circ \gamma_1) . (\gamma_2 \circ g_1)$ and see it satisfies the required conditions to be a crossed homomorphism. We draw the following diagrams to see the relation between the strict morphisms. \begin{align*} \begin{tikzpicture}[out=145, in=145, relative] \node (A) at (0,0) {$\mathfrak{C_1}$}; \node (B) at (3,0) {$\mathfrak{C_2}$}; \node (C) at (6, 0) {$\mathfrak{C_3}~~=$}; \draw[->, thick, double] (1.5,0.25) -- (1.5,-0.25) [xshift=5pt] node[right, midway] {$\gamma_1$} (B); \draw[->, thick, double] (4.5,0.25) -- (4.5,-0.25) [xshift=5pt] node[right, midway] {$\gamma_2$} (C); \path[->] (A) edge [bend left] node [above] {$G=(g_1,~g_2)$} (B); \path[->] (A) edge [bend right] node [below] {$F=(f_1,~f_2)$} (B); \path[->] (B) edge [bend left] node [above] {$E=(e_1,~e_2)$} (C); \path[->] (B) edge [bend right] node [below] {$K=(k_1,k_2)$} (C); \end{tikzpicture} \begin{tikzpicture}[out=145, in=145, relative] \node (A) at (0,0) {$\mathfrak{C_1}$}; \node (C) at (3,0) {$\mathfrak{C_3}$}; \draw[->, thick, double] (1.5,0.25) -- (1.5,-0.25) [xshift=5pt] node[right, midway] {$\gamma_3$} (C); \path[->] (A) edge [bend left] node [above] {$E\circ G=(e_1\circ g_1,~e_2\circ g_2)$} (C); \path[->] (A) edge [bend right] node [below] {$K\circ F=(k_1\circ f_1,~k_2\circ f_2)$} (C); \end{tikzpicture} \end{align*} We also draw the following diagrams to understand the group homomorphisms better. 
\begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$N_1$}; \node (B) at (0, -2) {$M_1$}; \node (C) at (2, -2) {$M_2$}; \node (D) at (2, 0) {$N_2$}; \node (E) at (4, 0) {$N_3$}; \node (F) at (4, -2) {$M_3$}; \path[->] (A) edge node [left] {$h_{\mathfrak{C_1}}$} (B); \path[->] (A) edge node [above] {$g_2$} node [below] {$f_2$} (D); \path[->] (B) edge node [above] {$g_1$} node [below] {$f_1$} (C); \path[->] (D) edge node [right] {$h_{\mathfrak{C_2}}$} (C); \path[->] (D) edge node [above] {$e_2$} node [below] {$k_2$} (E); \path[->] (C) edge node [above] {$e_1$} node [below] {$k_1$} (F); \path[->] (E) edge node [right] {$h_{\mathfrak{C_3}}~~\Rightarrow$} (F); \path[->] (B) edge node [right] {$\gamma_1$} (D); \path[->] (C) edge node [right] {$\gamma_2$} (E); \end{tikzpicture} \begin{tikzpicture} \node (A) at (0, 0) {$N_1$}; \node (B) at (0, -2) {$M_1$}; \node (C) at (3, -2) {$M_3$}; \node (D) at (3, 0) {$N_3$}; \path[->] (A) edge node [left] {$h_{\mathfrak{C_1}}$} (B); \path[->] (A) edge node [above] {$e_2\circ g_2$} node [below] {$k_2\circ f_2$} (D); \path[->] (B) edge node [above] {$e_1\circ g_1$} node [below] {$k_1\circ f_1$} (C); \path[->] (D) edge node [right] {$h_{\mathfrak{C_3}}$} (C); \path[->] (B) edge node [right] {$\gamma_3$} (D); \end{tikzpicture} \end{align*} For all $a,~a'\in M_1$, $b\in N_1$ and $c\in N_2$, $d,~d'\in M_2$, we get the following equalities by definition of $\gamma_1$ and $\gamma_2$. \begin{enumerate} \item \label{60} $\gamma_1(a.a')=(^{f_1(a')}\gamma_1(a)). \gamma_1(a')$ \item $g_1(a)=f_1(a). h_{\mathfrak{C_2}}(\gamma_1(a^{-1}))$ \item $g_2(b)=f_2(b). \gamma_1(h_{\mathfrak{C_1}}(b^{-1}))$ \item \label{61} $\gamma_2(d.d')=(^{k_1(d')}\gamma_2(d)). \gamma_2(d')$ \item $e_1(d)=k_1(d). h_{\mathfrak{C_3}}(\gamma_2(d^{-1}))$ \item \label{65} $e_2(c)=k_2(c). \gamma_2(h_{\mathfrak{C_2}}(c^{-1}))$ \end{enumerate} For all $a'\in M_1$, we have \begin{align} \label{63} (k_1\circ g_1)(a')=k_1(f_1(a'). 
h_{\mathfrak{C_2}}(\gamma_1((a')^{-1})))=(k_1\circ f_1)(a'). (k_1\circ h_{\mathfrak{C_2}})(\gamma_1((a')^{-1})), \end{align} \begin{align} (^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)((a')^{-1}))=k_2(^{f_1(a')}\gamma_1((a')^{-1}))\\ =k_2((^{f_1(a')}\gamma_1((a')^{-1})). \gamma_1(a'). \gamma_1((a')^{-1}))\\ \label{62} =k_2((a')^{-1}. a' . \gamma_1((a')^{-1}))=(k_2\circ \gamma_1)((a')^{-1}). \end{align} For all $a,~a'\in M_1$, we have\\ $\gamma_3(a . a')=(k_2\circ \gamma_1)(a.a') . (\gamma_2 \circ g_1)(a. a')=k_2((^{f_1(a')}\gamma_1(a)). \gamma_1(a')) . \gamma_2(g_1(a) . g_1(a'))$ by \ref{60}\\ $=k_2(^{f_1(a')}\gamma_1(a)). k_2(\gamma_1(a')). (^{(k_1\circ g_1)(a')}(\gamma_2 \circ g_1)(a)) . (\gamma_2 \circ g_1)(a')$ by \ref{61}\\ $=k_2(^{f_1(a')}\gamma_1(a)). k_2(\gamma_1(a')). (^{(k_1\circ f_1)(a'). k_1(h_{\mathfrak{C_2}}(\gamma_1((a')^{-1})))}(\gamma_2\circ g_1)(a)) . (\gamma_2 \circ g_1)(a')$ by \ref{63}\\ $=k_2(^{f_1(a')}\gamma_1(a)). k_2(\gamma_1(a')). (^{(k_1\circ f_1)(a'). (h_{\mathfrak{C_3}} (k_2\circ \gamma_1)((a')^{-1}))}(\gamma_2\circ g_1)(a)) . (\gamma_2 \circ g_1)(a')$ since $K$ is a strict morphism\\ $=k_2(^{f_1(a')}\gamma_1(a)). k_2(\gamma_1(a')). (^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)((a')^{-1}).(\gamma_2\circ g_1)(a). (k_2\circ \gamma_1)(a')). (\gamma_2 \circ g_1)(a')$ by definition of $\mathfrak{C_3}$\\ $=k_2(^{f_1(a')}\gamma_1(a)). k_2(\gamma_1(a')). (^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)((a')^{-1})) .(^{(k_1\circ f_1)(a')}(\gamma_2\circ g_1)(a)).\\ (^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a')). (\gamma_2 \circ g_1)(a')$\\ $=(^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a)).k_2(\gamma_1(a')). (^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)((a')^{-1})) .(^{(k_1\circ f_1)(a')}(\gamma_2\circ g_1)(a)).\\ (^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a')). (\gamma_2 \circ g_1)(a')$ since $K$ is a strict morphism\\ $=(^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a)). k_2(\gamma_1(a')). (k_2\circ \gamma_1)((a')^{-1}). 
(^{(k_1\circ f_1)(a')}(\gamma_2\circ g_1)(a)).\\ (^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a')). (\gamma_2 \circ g_1)(a')$ by \ref{62}\\ $=(^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a)). (^{(k_1\circ f_1)(a')}(\gamma_2\circ g_1)(a)). (^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a')). (\gamma_2 \circ g_1)(a')$\\ $=(^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a)). (^{(k_1\circ f_1)(a')}(\gamma_2\circ g_1)(a)). (k_2\circ \gamma_1)(a'). (\gamma_2 \circ g_1)(a')$ by \ref{62}\\ $=(^{(k_1\circ f_1)(a')}(k_2\circ \gamma_1)(a). (\gamma_2\circ g_1)(a)) . (k_2\circ \gamma_1)(a'). (\gamma_2 \circ g_1)(a')$\\ $=(^{(k_1\circ f_1)(a')}((k_2\circ \gamma_1). (\gamma_2\circ g_1))(a)). ((k_2\circ \gamma_1) . (\gamma_2 \circ g_1))(a')$\\ $=(^{(k_1\circ f_1)(a')}\gamma_3(a)). \gamma_3(a')$ as required.\\ The other conditions are verified similarly.\\ For a crossed module $\xymatrix{\mathfrak{C}=[N\ar[r]^{h_{\mathfrak{C}}} & M]}$, the mapping $\mathcal{F}_{\mathfrak{C}}:~1\rightarrow \underline{\mathcal{XM}}(\mathfrak{C},~\mathfrak{C})$ is a functor taking the object $\star$ in $1$ to $id_{\mathfrak{C}}=(id,~id)$ and morphisms $\star \rightarrow \star$ to the trivial pointed natural transformation $PNT:~(id,~id)\Rightarrow (id,~id)$.
\begin{align*} \begin{tikzpicture} \node (A) at (0, 0) {$\underline{\mathcal{XM}}(\mathfrak{C_3},~\mathfrak{C_4}) \times \underline{\mathcal{XM}}(\mathfrak{C_2},~\mathfrak{C_3}) \times \underline{\mathcal{XM}}(\mathfrak{C_1},~\mathfrak{C_2})$}; \node (B) at (9, 0) {$\underline{\mathcal{XM}}(\mathfrak{C_2},~\mathfrak{C_4}) \times \underline{\mathcal{XM}}(\mathfrak{C_1},~\mathfrak{C_2})$}; \node (C) at (0,-2) {$\underline{\mathcal{XM}}(\mathfrak{C_3},~\mathfrak{C_4}) \times \underline{\mathcal{XM}}(\mathfrak{C_1},~\mathfrak{C_3})$}; \node (D) at (9, -2) {$\underline{\mathcal{XM}}(\mathfrak{C_1},~\mathfrak{C_4})$}; \draw[->, thick, double] (6,-0.75)--(3, -1.25) [xshift=5pt] node [right, midway] {$a_{\mathfrak{C_1} \mathfrak{C_2} \mathfrak{C_3} \mathfrak{C_4}}$} (B); \path[->] (A) edge node [above] {$\mathcal{F}_{\mathfrak{C_2} \mathfrak{C_3} \mathfrak{C_4}} \times 1$} (B); \path[->] (A) edge node [left, midway] {$1\times \mathcal{F}_{\mathfrak{C_1} \mathfrak{C_2} \mathfrak{C_3}}$} (C); \path[->] (B) edge node [right] {$\mathcal{F}_{\mathfrak{C_1} \mathfrak{C_3} \mathfrak{C_4}}$} (D); \path[->] (C) edge node [below] {$\mathcal{F}_{\mathfrak{C_1} \mathfrak{C_2} \mathfrak{C_4}}$} (D); \end{tikzpicture} \end{align*} $a_{\mathfrak{C_1} \mathfrak{C_2} \mathfrak{C_3} \mathfrak{C_4}}(F,~G,~H):~(F\circ G)\circ H\Rightarrow F\circ (G\circ H)$ is a trivial pointed natural transformation and $(F\circ G)\circ H=F\circ (G\circ H)$ for all strict morphisms $F$, $G$ and $H$. $a_{\mathfrak{C_1} \mathfrak{C_2} \mathfrak{C_3} \mathfrak{C_4}}(F,~G,~H)$ is the identity morphism by assumption, hence it is an isomorphism and the required diagram is commutative. As a result, $a_{\mathfrak{C_1} \mathfrak{C_2} \mathfrak{C_3} \mathfrak{C_4}}$ is a natural isomorphism.\\ We show the other conditions in a similar way and see $\underline{\mathcal{XM}}$ is a 2-category. \end{proof}
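The crossed module axioms used throughout this section can be checked directly in two well-known examples; the following remark is a supplementary illustration (the facts are standard and not part of the development above), written with the document's own \texttt{remark} environment and xymatrix notation.

```latex
\begin{remark}
Two standard examples of crossed modules may help fix ideas.
First, if $N$ is a normal subgroup of a group $M$, then the inclusion
$\xymatrix{[N\ar[r]^{i} & M]}$ is a crossed module, where $M$ acts on $N$ by
conjugation, $^mn=mnm^{-1}$. Indeed, $i(^mn)=mnm^{-1}=mi(n)m^{-1}$ and
$^{i(n)}n'=nn'n^{-1}$ for all $n,~n'\in N$ and $m\in M$.
Second, for any group $G$, the homomorphism
$\xymatrix{[G\ar[r]^{c} & Aut(G)]}$ sending $g$ to the inner automorphism
$c_g$, $c_g(x)=gxg^{-1}$, is a crossed module, where $Aut(G)$ acts on $G$ by
evaluation, $^{\varphi}g=\varphi(g)$. Here
$c(^{\varphi}g)=c_{\varphi(g)}=\varphi\circ c_g\circ \varphi^{-1}$ and
$^{c(g)}g'=c_g(g')=gg'g^{-1}$ for all $g,~g'\in G$ and $\varphi \in Aut(G)$.
\end{remark}
```

In both cases the two displayed identities are exactly the two conditions in the definition of a crossed module given at the start of this section.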
\\end{align*}\n\nOnly top voted, non community-wiki answers of a minimum length are eligible","date":"2016-06-26 11:49:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 3, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.999729573726654, \"perplexity\": 1482.7400814539155}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-26\/segments\/1466783395166.84\/warc\/CC-MAIN-20160624154955-00190-ip-10-164-35-72.ec2.internal.warc.gz\"}"}
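The Itô-integral identity $\int_0^T W_t\,dW_t=(W_T^2-T)/2$ derived in one of the answers above is easy to check numerically. The sketch below (ours, not from the answers) simulates a single Brownian path with an Euler discretization and compares the left-endpoint (Itô) Riemann sum against the closed form; step count and seed are arbitrary choices.

```python
import random

def check_ito_identity(T=1.0, n=100_000, seed=7):
    """Compare the discretized Ito integral of W dW with (W_T^2 - T)/2
    along one simulated Brownian path (left-endpoint Riemann sum)."""
    random.seed(seed)
    dt = T / n
    W = 0.0          # current value of the Brownian path
    integral = 0.0   # running Ito sum
    for _ in range(n):
        dW = random.gauss(0.0, dt ** 0.5)
        integral += W * dW   # integrand evaluated at the left endpoint
        W += dW
    return integral, (W * W - T) / 2.0

lhs, rhs = check_ito_identity()
# the gap shrinks like O(n^{-1/2}) as the discretization is refined
```

Algebraically, the discrete sum equals $(W_T^2-\sum_i \Delta W_i^2)/2$, so the gap is exactly half the deviation of the realized quadratic variation from $T$.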
using System;
using System.Collections.Generic;
using SBF = SchemaBuilderFramework;
using UML = TSF.UmlToolingFramework.UML;
using UTF_EA = TSF.UmlToolingFramework.Wrappers.EA;

namespace EAAddinFramework.SchemaBuilder
{
	/// <summary>
	/// Description of EASchemaBuilderFactory.
	/// </summary>
	public class EASchemaBuilderFactory : SBF.SchemaBuilderFactory
	{
		private UTF_EA.Model EAModel
		{
			get { return (UTF_EA.Model)this.model; }
		}

		private EASchema currentSchema;

		/// returns the singleton instance for the given model.
		public static new EASchemaBuilderFactory getInstance(UML.Extended.UMLModel model)
		{
			EASchemaBuilderFactory factory = SBF.SchemaBuilderFactory.getInstance(model) as EASchemaBuilderFactory;
			if (factory == null)
			{
				factory = new EASchemaBuilderFactory((UTF_EA.Model)model);
			}
			return factory;
		}

		/// returns the singleton instance for a new model
		public static new EASchemaBuilderFactory getInstance()
		{
			return SBF.SchemaBuilderFactory.getInstance() as EASchemaBuilderFactory;
		}

		protected EASchemaBuilderFactory(UTF_EA.Model model) : base(model)
		{
		}

		public override SBF.Schema createSchema(object objectToWrap, SBF.SchemaSettings settings)
		{
			this.currentSchema = new EASchema(this.EAModel, (EA.SchemaComposer)objectToWrap, settings);
			return this.currentSchema;
		}

		public EASchemaPropertyWrapper createSchemaPropertyWrapper(SBF.SchemaElement owner, EA.SchemaProperty objectToWrap)
		{
			if (objectToWrap.UMLType == "Attribute")
			{
				var sourceObject = this.EAModel.getElementByGUID(objectToWrap.GUID);
				if (sourceObject is UTF_EA.EnumerationLiteral)
				{
					return (EASchemaLiteral)this.createSchemaLiteral(owner, objectToWrap);
				}
				else
				{
					return (EASchemaProperty)this.createSchemaProperty(owner, objectToWrap);
				}
			}
			else
			{
				return (EASchemaAssociation)this.createSchemaAssociation(owner, objectToWrap);
			}
		}

		public override SBF.SchemaProperty createSchemaProperty(SBF.SchemaElement owner, object objectToWrap)
		{
			return new EASchemaProperty(this.EAModel, (EASchemaElement)owner, (EA.SchemaProperty)objectToWrap);
		}

		public override SBF.SchemaElement createSchemaElement(SBF.Schema owner, object objectToWrap)
		{
			return new EASchemaElement(this.EAModel, (EASchema)owner, (EA.SchemaType)objectToWrap);
		}

		public override SBF.SchemaAssociation createSchemaAssociation(SBF.SchemaElement owner, object objectToWrap)
		{
			return new EASchemaAssociation(this.EAModel, (EASchemaElement)owner, (EA.SchemaProperty)objectToWrap);
		}

		public override SBF.SchemaLiteral createSchemaLiteral(SBF.SchemaElement owner, object objectToWrap)
		{
			return new EASchemaLiteral(this.EAModel, (EASchemaElement)owner, (EA.SchemaProperty)objectToWrap);
		}
	}
}
<!DOCTYPE html>
<!--
To change this license header, choose License Headers in Project Properties.
To change this template file, choose Tools | Templates
and open the template in the editor.
-->
<h1>Mis compras</h1>
<div class="col-md-12">
    <table class="table table-bordered table-striped">
        <thead>
            <tr>
                <th></th>
                <th>Referencia</th>
                <th>Fecha</th>
                <th>Total</th>
            </tr>
        </thead>
        <tbody ng-repeat="purchase in purchases | startFrom:currentPage*pageSize | limitTo:pageSize">
            <tr ng-if="purchase.items.length > 0">
                <td><a ng-click="changeState($index)"><i ng-class="{'glyphicon glyphicon-plus-sign': !states[$index], 'glyphicon glyphicon-minus-sign': states[$index]}"></i></a></td>
                <td>#000000{{purchase.id}}</td>
                <td>{{purchase.date}}</td>
                <td>{{getTotal(purchase) | currency}}</td>
            </tr>
            <tr ng-show="states[$parent.$index]" ng-repeat="item in purchase.items">
                <td>{{$index}}</td>
                <td><i>{{item.name}}</i></td>
                <td><i>Cantidad : {{item.qty}}</i></td>
                <td><i>{{item.artwork.price | currency}}</i></td>
            </tr>
        </tbody>
    </table>
    <div style="float:right;">
        <button ng-disabled="currentPage == 0" ng-click="currentPage=currentPage-1" class="btn btn-default">Previous</button>
        {{currentPage+1}}/{{numberOfPages()}}
        <button ng-disabled="currentPage >= purchases.length/pageSize - 1" ng-click="currentPage=currentPage+1" class="btn btn-default">Next</button>
    </div>
</div>
\subsection{Summary of Results} In this work, we show how to combine these two main positive results of VCG and single-dimensional mechanisms into a single mechanism, which we call the {\em Hybrid Mechanism}. This new mechanism applies to domains in which some players are multidimensional and some players are single-dimensional. A typical example is to schedule $m$ tasks, such that task $i$ can only be executed by player 0 and player $i$. In this case, player 0 is multidimensional and the other $m$ players are single-dimensional. We call this the {\em star balancing} problem. This is a multidimensional mechanism design problem for which the VCG mechanism, as well as every other known mechanism, performs very poorly. However, as we show in Section~\ref{sec:stars}, the Hybrid Mechanism has approximation ratio 2, optimal among all truthful mechanisms. We generalize the star balancing problem in three directions: \emph{hyperstars}, \emph{graphs/multigraphs}, and objectives other than makespan minimization. \subparagraph{Hyperstars.} In the hyperstar version, there are $k$ multidimensional players/machines, and every task can be executed by any one of these $k$ players or by a task-specific single-dimensional player. Specifically, there are $k$ different \emph{root players} (players $1,2,\ldots ,k$ with bids $(r_{ij})_{k\times m}$), each of which is allowed to process all tasks. In addition, for each task there is one \emph{leaf player}, which can process only this single task (players $k+1,k+2,\ldots,k+m$ with bids $(\ell_1,\ell_2,\ldots, \ell_m)$). Note that the root players without the leaves form a classic input for unrelated scheduling mechanisms with $k$ players and $m$ tasks. We can now state the Hybrid Mechanism for this case.
\begin{definition}[Hybrid Mechanism] \label{def:hybrid} The Hybrid Mechanism minimizes $$\min_T\left\{\left(\min_{x^T}\sum_{i=1}^k\lambda_i r_i\cdot x^T_i\right) + g_{T}(\ell)\right\},$$ where the $\lambda_i$ can be arbitrary non-negative real numbers and $(g_T)_{T\subseteq M}$ can be any functions that guarantee that the leaf players are truthful\footnote{In Section~\ref{sec:hybrid} we provide general definitions as well as necessary and sufficient conditions for truthfulness of the Hybrid Mechanism. As a prominent example, think of $g_T(\ell)=\max_{j\notin T} \ell_j$.}. The output of the mechanism is the subset of tasks $T$ that are allocated to the multidimensional root players, together with their allocation $x^T$ (i.e., its characteristic matrix $(x^T_{ij})_{k\times m}$, with exactly one $1$ in each column $j\in T$). The remaining tasks, $M\setminus T$, are allocated to the leaf players. \end{definition} VCG fares poorly in this domain, yielding approximation ratio $m$, but the Hybrid Mechanism has approximation ratio $k+1$, as we show in the next theorem. \begin{theorem} \label{thm:hyperstars} For the hyperstar scheduling problem, the Hybrid Mechanism with $g_T(\ell)=\max_{j\notin T} \ell_j$ and with $\lambda_i=1$, for every $i$, is $(k+1)$-approximate. \end{theorem} \subparagraph*{(Multi)Graphs.} The other generalization of the star balancing problem, to graphs and multigraphs, is the Unrelated Graph Balancing problem (Section~\ref{sec:scheduling}). This is a special case of unrelated machines scheduling in which there is a (multi)graph whose nodes represent the machines and whose edges represent tasks that can be executed only by the incident nodes. For general graphs, all machines are multiparameter, but we can still apply the Hybrid Mechanism if we first decompose the graph into stars and then apply the Hybrid Mechanism to each one of them.
The combined mechanism, which we call the {\em Star-Cover Mechanism}, has a surprisingly good approximation ratio for certain classes of graphs --- ratio 4 for trees, 8 for planar graphs, and $2k+2$ for $k$-degenerate graphs (Corollary~\ref{cor:planar}). These results use as an ingredient the analysis of star graphs, for which the Hybrid Mechanism has approximation ratio 2 (Section~\ref{sec:scheduling}). \subparagraph*{Mechanisms for ${L^p}$-norm optimization.} In Section~\ref{sec:lp-norm}, we consider the much more general objective of minimizing or maximizing the $L^p$-norm of the values of the players, for $p>0$. The scheduling problem is the special case of minimizing the $L^{\infty}$-norm. We show that the Hybrid Mechanism performs very well for this much more general problem, and in some cases it has the optimal approximation ratio among all truthful mechanisms. This illustrates the applicability and usefulness of the Hybrid Mechanism in applications with various domains and objectives. We emphasize that for all these cases, even for stars, all known mechanisms, such as VCG and affine maximizers, have very poor performance. \subparagraph{Relation to the Nisan-Ronen conjecture.} Our results on (multi)graphs show that this domain may provide an easier way to attack the Nisan-Ronen conjecture. In a recent work~\cite{CKK20}, we showed an $\Omega(\sqrt{n})$ lower bound for multistars with edge multiplicity only 2, when the root player has submodular or supermodular valuations. In contrast, our results in this work show that for additive valuations, the Star-Cover Mechanism has approximation ratio 4 on the very same multigraphs. However, the Hybrid and the Star-Cover Mechanisms have high approximation ratios for multistars with high edge multiplicity or for simple clique graphs. It is natural to ask whether there are other, better mechanisms for these cases.
Recently we have proved an $\Omega(\sqrt{n})$ lower bound for the former case, which is the first super-constant lower bound for the Nisan-Ronen problem~\cite{CKK20b}, and we conjecture that the latter case similarly admits a high, perhaps even linear, lower bound. We remark that all previous lower bound proofs use inherently either (multi)graphs \cite{ChrKouVid09,KV07,CKK20b} or, recently, hypergraphs with hyperedges of small size~\cite{GiannakopoulosH20, DS20}. Our work provides new methodological tools to study these objects, which can help to identify certain (hyper)graph structures as good candidates for high lower bounds and to avoid those where low upper bounds exist. For example, the 2.755 lower bound construction of \cite{GiannakopoulosH20} uses a hyperstar with $k=2$, for which the Hybrid Mechanism achieves an upper bound of 3 (Thm~\ref{thm:hyperstars}). All our lower bounds are information theoretic and hold independently of the computational time of the mechanisms. Conversely, all upper bounds are achieved by polynomial-time algorithms when the star decomposition is given. We leave it open whether computing an optimal star decomposition of a graph is in $P$, although it follows from our results that it can be approximated within an additive term of $1$ in polynomial time (actually in linear time). \section{Hypergraphs} \label{sec:hypergraphs} In this section we consider the generalization of some results of the previous section from stars to hyperstars. A hyperstar is a hypergraph with a center consisting of $k$ different $r$-players (players $1,2,\ldots ,k$ with bids $(r_{ij})_{k\times m}$), where each of them can process all tasks, and a set of leaves (players $k+1,k+2,\ldots,k+m$ with bids $(\ell_1,\ell_2,\ldots, \ell_m)$), one leaf per task. Each task can be allocated to any $r$-player or to its associated leaf player. Note that the $r$-players without the leaves form a classic input for unrelated scheduling mechanisms with $k$ players and $m$ tasks.
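To make the allocation rule concrete, the following Python sketch (ours, not part of the paper) evaluates the Hybrid Mechanism of Definition~\ref{def:hybrid} on a hyperstar, with $\lambda_i=1$ and $g_T(\ell)=\max_{j\notin T}\ell_j$, by brute force over all subsets $T$ (exponential in $m$, for illustration only), and compares its makespan against the exact optimum, as a check of the $(k+1)$ guarantee of Theorem~\ref{thm:hyperstars}.

```python
from itertools import product

def hybrid_hyperstar(r, ell):
    """Hybrid Mechanism on a hyperstar with lambda_i = 1 and
    g_T(ell) = max_{j not in T} ell_j, by brute force over subsets T.
    r: k x m processing times of the root players; ell: leaf times.
    Returns (T, assign) where assign maps each task in T to a root."""
    k, m = len(r), len(ell)
    best = None
    for mask in range(1 << m):
        T = [j for j in range(m) if mask >> j & 1]
        vcg = sum(min(r[i][j] for i in range(k)) for j in T)  # VCG on the roots
        g = max((ell[j] for j in range(m) if j not in T), default=0.0)
        if best is None or vcg + g < best[0]:  # strict '<': fixed tie-breaking
            assign = {j: min(range(k), key=lambda i: r[i][j]) for j in T}
            best = (vcg + g, set(T), assign)
    return best[1], best[2]

def makespan(r, ell, T, assign):
    loads = [0.0] * len(r)
    for j, i in assign.items():
        loads[i] += r[i][j]
    leaf = max((ell[j] for j in range(len(ell)) if j not in T), default=0.0)
    return max(max(loads), leaf)

def opt_makespan(r, ell):
    """Optimal makespan by exhausting all allocations: k + 1 choices per
    task, where choice k means 'give the task to its leaf player'."""
    k, m = len(r), len(ell)
    best = float('inf')
    for choice in product(range(k + 1), repeat=m):
        loads, leaf = [0.0] * k, 0.0
        for j, c in enumerate(choice):
            if c == k:
                leaf = max(leaf, ell[j])
            else:
                loads[c] += r[c][j]
        best = min(best, max(max(loads), leaf))
    return best

# toy instance with k = 2 roots and m = 3 tasks (values are ours)
r = [[0.4, 0.9, 0.5], [0.6, 0.2, 0.7]]
ell = [0.3, 0.8, 0.1]
T, assign = hybrid_hyperstar(r, ell)  # only task 1 goes to the roots
```

On this toy instance the mechanism happens to be optimal (makespan $0.3$); in general the theorem only guarantees a factor $k+1$.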
The Hybrid Mechanism for this setting is defined in Definition~\ref{def:hybrid}. Here we give the definition again for the special case when $g_T(\ell)=\max_{j\notin T} \ell_j$, and prove that in this case the Hybrid Mechanism is $(k+1)$-approximate. In Section~\ref{sec:hybrid} we prove the truthfulness of the Hybrid Mechanism for stars and hyperstars for more general $g_T(\ell)$ functions, implying truthfulness in this special case (see Corollary~\ref{cor:sufficient}). \begin{definition}[Hybrid (Max-)Mechanism for hyperstars] Let $$S\in \arg\min_T\bigg\{\bigg(\min_{x^T}\sum_{i=1}^k\lambda_i \,r_i\cdot x^T_i\bigg) +\max_{j\notin T} \ell_j\bigg\},$$ where the $\lambda_i$ are arbitrary non-negative real numbers, the $x^T$ range over all possible characteristic matrices for allocations of the tasks in $T$, and ties are broken by a fixed order over the allocations. Assign $S$ to the root players, and the rest of the tasks to the leaves. \end{definition} The next theorem provides a general approximation ratio for the Hybrid Mechanism that generalizes some results of the previous subsection. \begin{theorem}[Theorem 2] For the hyperstar scheduling problem, the Hybrid Mechanism with $g_T(\ell)=\max_{j\notin T} \ell_j$ and with $\lambda_i=1$, for every $i$, is $(k+1)$-approximate. \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:hyperstars}] Note that with all $\lambda_i=1$, we obtain the VCG mechanism on the root players and on the tasks of $T$ in the first summand of the definition, that is, the mechanism giving every task (of $T$) to the player with minimum processing time. It is known that VCG is $k$-approximate: VCG yields as makespan at most the sum of the minimum processing times over all tasks, and OPT is at least a $1/k$ fraction of this sum. Now, assume that an optimal allocation gives the set $S^*$ to the root players and $M\setminus S^*$ to the leaves.
Then we have \begin{eqnarray*} ALG &\leq & \min_{T\subseteq M}\{VCG(T)+\max_{i\not \in T}\ell_{i}\}\\ &\leq & VCG(S^*)+\max_{i\not \in S^*}\ell_{i}\\ &\leq & k\cdot OPT(S^*)+\max_{i\not \in S^*}\ell_{i}\\ &\leq & (k+1)\cdot\max\{ OPT(S^*),\max_{i\not \in S^*}\ell_{i}\}\\ &=& (k+1) OPT, \end{eqnarray*} where $VCG(S^*)$ and $OPT(S^*)$ refer only to the root players. \end{proof} \subsubsection{Further Related Work} \label{sec:further} \noindent\textbf{Graph Balancing.} As already mentioned, for the pure graph balancing problem, the best approximation ratio achieved by classical polynomial-time algorithms is $1.75$, due to \cite{EKS14}. Wang and Sitters~\cite{WangS16} showed a different LP-based algorithm with a higher ratio of $11/6\approx 1.83$, while Huang and Ott~\cite{HuangO16} designed a purely combinatorial approximation algorithm, but also with a higher guarantee of $1.857$. Jansen and Rohwedder~\cite{JansenR19} studied the so-called {\em configuration LP}, which was introduced by Bansal and Sviridenko~\cite{BansalS06}. They showed that it has an integrality gap of at most 1.749, breaking the 1.75 barrier of the integrality gaps of the previous LP formulations. This leaves open the possibility of using this LP to produce an approximation algorithm with a ratio better than 1.75. Verschae and Wiese~\cite{VerschaeW14} studied the {\em unrelated} version of graph balancing (whose strategic variant we consider in this paper) and showed that the integrality gap of the configuration LP is equal to $2$, which is much higher compared to graph balancing. They also showed a 2-approximation algorithm for the problem of maximizing the minimum load, which is the best possible unless P=NP. The problem has been studied for various special graph classes.
For the case of simple graphs (also known as Graph Orientation), Asahiro et al.~\cite{AsahiroJMO11-trees} showed that the problem is in P for the case of trees, while Asahiro, Miyano and Ono~\cite{AsahiroMO11-planar} showed that it becomes strongly NP-hard for planar and bipartite graphs. Finally, Lee, Leung and Pinedo~\cite{LeeLP09a} settled the case of trees with multiple edges, showing an FPTAS, which is the best possible, given that the problem on multigraphs is immediately NP-hard even for the simple case of two vertices (via a reduction from Subset Sum). \noindent\textbf{Truthful Scheduling.} The lack of progress on the original unrelated machines problem led to the study of special cases, where progress has been made. Ashlagi et al.~\cite{ADL09} resolved a restricted version of the Nisan-Ronen conjecture for the special but natural class of {\em anonymous} mechanisms. Lavi and Swamy~\cite{LaviS09} studied a restricted input domain which nevertheless retains the multi-dimensional flavour of the setting. They considered inputs with only two possible values, ``low'' and ``high'', that are publicly known to the designer. For this case they showed an elegant deterministic mechanism with an approximation factor of 2. They also showed that even in this setting achieving the optimal makespan is not possible under truthfulness, and provided a lower bound of $11/10$. Yu~\cite{Yu09} extended the results to a range of values, and Auletta et al.~\cite{Auletta0P15} studied multi-dimensional domains where the private information of the machines is a single bit. Randomization has led to mildly improved guarantees. There are two extensions of truthfulness for randomized mechanisms: {\em universal truthfulness}, if the mechanism can be described as a probability distribution over deterministic truthful mechanisms, and {\em truthfulness-in-expectation}, if in expectation no player can benefit by lying.
The former notion was first considered in \cite{NR01} for two machines; it was later extended to $n$ machines by Mu'alem and Schapira~\cite{MualemS18}, and finally Lu and Yu~\cite{LuYu08} showed a $0.837n$-approximate mechanism, which is currently the best known. Lu and Yu~\cite{LuY08a} showed a truthful-in-expectation mechanism with an approximation guarantee of $(m+5)/2$. Mu'alem and Schapira~\cite{MualemS18} showed a lower bound of $2-1/m$ for both notions of randomization. Christodoulou, Koutsoupias and Kov{\'a}cs~\cite{CKK10} extended this lower bound to fractional mechanisms, where each task can be split across multiple machines, and they also showed a fractional mechanism with a guarantee of $(m+1)/2$. The special case of two machines~\cite{Lu09, LuY08a} is still unresolved; currently, the best upper bound is $1.587$ due to Chen, Du, and Zuluaga~\cite{ChenDZ15}. The case of {\em related} machines is well understood. It falls into the so-called {\em single-dimensional} mechanism design, in which the valuations of a player are linear expressions of a single parameter. In this case, the cost of each machine is expressed via a single parameter, its {\em (inverse) speed}, multiplied by the workload allocated to the machine, instead of an $m$-valued vector, as is the case for unrelated machines and the Graph Balancing setting. Archer and Tardos~\cite{AT01} showed that, in contrast to the unrelated machines version, the optimal makespan can be achieved by an (exponential-time) truthful algorithm, while \cite{CK13} gave a deterministic truthful PTAS, which is the best possible even for the pure algorithmic problem (unless P=NP). Truthful implementation of other objectives was considered by Mu'alem and Schapira~\cite{MualemS18} for multi-dimensional problems and by Epstein, Levin and van Stee~\cite{EpsteinLS13} for single-dimensional ones.
Leucci, Mamageishvili and Penna~\cite{LeucciMP18} demonstrated high lower bounds for other min-max objectives on some combinatorial optimization problems on graphs, showing essentially that VCG is the best mechanism for these problems. Minooei and Swamy~\cite{MS12} considered a multi-dimensional vertex cover problem and approached it by decomposition into single-parameter problems. The Bayesian setting, where the players' costs are drawn from a probability distribution, has also been studied. Daskalakis and Weinberg~\cite{DaskalakisW15} showed a mechanism that is within a factor of 2 of the {\em optimal truthful mechanism}, but not with respect to the optimal makespan. Chawla et al.~\cite{ChawlaHMS13} provided bounds for prior-independent mechanisms (where the input distribution is unknown to the mechanism), while Giannakopoulos and Kyropoulou~\cite{GiannakopoulosK17} showed that the VCG mechanism achieves a factor of $O( \log n/\log \log n )$ under some distributional and symmetry assumptions. Recently, Christodoulou, Koutsoupias, and Kov{\'a}cs~\cite{CKK20} showed a lower bound of $\sqrt{n-1}$ for all deterministic truthful mechanisms when the cost of processing a subset of tasks is given by a submodular (or supermodular) set function, instead of the additive function assumed in the standard scheduling setting. \section{Graph Balancing} \label{sec:scheduling} In this section we focus on the (Unrelated) Graph Balancing problem, which is a special case of makespan minimization for scheduling unrelated machines. Graph Balancing is a multi-parameter mechanism design problem that retains most of the difficulty of the Nisan-Ronen conjecture, yet has certain features that make it more amenable. One of the difficulties in dealing with truthful mechanisms is that while truthfulness is a local property (i.e., independent truthfulness conditions, one per player), the allocation algorithm is a global function (that involves all players).
Local algorithms attempt to reconcile this tension by insisting that the allocation is also ``local'', but they take this notion too far. The results of this work show that locality in mechanisms is very restrictive in some domains, where the Hybrid Mechanism outperforms every local mechanism. The Graph Balancing problem is more amenable than the general scheduling problem because it exhibits another kind of locality, \emph{domain locality}: when a machine does not get a task, we know which machine gets it. Yet, this locality is not very restrictive and the problem retains most of its original difficulty. In this section, we take advantage of domain locality to obtain an optimal mechanism for stars. It turns out that this mechanism, the Hybrid Mechanism, is a special case of a more general mechanism (see Section~\ref{sec:hybrid}). But since the Hybrid Mechanism does not apply to general graphs, here we also propose the Star-Cover mechanism for general graphs: decompose the graph into stars and apply the Hybrid Mechanism independently to each star. In this way, we obtain a 4-approximation algorithm for trees and similar positive results for other types of graphs. Makespan minimization is the special case, with $p=\infty$, of minimizing the $L^p$-norm of the values of the players. Other special cases of $L^p$-norm optimization are the case $p=1$, which corresponds to welfare maximization, and the case $p=0$, which is related to Nash Social Welfare~\cite{Cole_2018}. We deal with this more general problem in Section~\ref{sec:lp-norm}. Most of the results and proofs of this section generalize to any $p\geq 1$. \subsection{Stars and the Hybrid Mechanism} \label{sec:stars} In this subsection, we focus on star graphs, where there are $n=m+1$ players and $m$ tasks. Player $0$ is the root of the star, and has processing times given by a vector $r=(r_1,r_2,\ldots, r_m)$. We also refer to this player as the \emph{root player} or \emph{$r$-player}.
For given bids $r$ of the root player, and task set $T\subseteq M$ we use the short notation $r(T)=\sum_{j\in T} r_j$. There are also $m$ \emph{leaf-players}, one for each leaf of the star with processing times $\ell=(\ell_1,\ldots, \ell_m)$ respectively. Each task $j$ can only be assigned to two players; either to the root, with processing time $r_j,$ or to the leaf with processing time $\ell_j$. As usual, we denote by $r_{-i}$ the vector of bids of the root player except for the bid for task $i,$ and similarly $\ell_{-i}$ denotes the bids of all leaf-players, except for player $i.$ The vector of all input bids is given by $t=(r,\ell).$ As we show later in the Lower Bound section (Section~\ref{sec:lower-bounds}), all previously known mechanisms for the Unrelated Graph Balancing problem, e.g. affine minimizers and task independent mechanisms, have approximation ratio at least $\sqrt{n-1}$ for graphs, even for stars. In contrast, we now show that the Hybrid Mechanism has constant approximation ratio for stars. \begin{definition}[Hybrid Mechanism for Graph Balancing]\label{def:maxmech} Consider an instance of the Unrelated Graph Balancing problem on a star of $n$ nodes and set of tasks $M$. Let \begin{align} \label{eq:max1} S\in \argmin_{T\subseteq M}\{r(T)+\max_{i\not \in T} \ell_i\}. \end{align} The mechanism assigns a set of tasks $S$ to the root and the remaining tasks to leaves. Ties are broken in a deterministic way (e.g., lexicographically). \end{definition} Figure~\ref{fig:max-mechanism} shows the partition of the space of the root player induced by the Hybrid Mechanism for a star of two leaves. 
\begin{figure} \centering \begin{tikzpicture}[scale=0.65] \draw[->] (0,0) -- (7,0) node[anchor=north] {$r_1$}; \draw[->] (0,0) -- (0,7) node[anchor=east] {$r_2$}; \draw[very thick, blue] (0, 4) node[anchor=east,black] {$\ell_2$} -- (2, 4) -- (2, 7); \draw[very thick, blue ] (6, 0) node[anchor=north,black] {$\ell_1$}-- (2, 4); \draw[dashed, gray] (2,4) -- (2,0) node[anchor=north,black] {$\ell_1-\ell_2$}; \end{tikzpicture} \begin{tikzpicture}[scale=0.65] \draw[->] (0,0) -- (7,0) node[anchor=north] {$r_1$}; \draw[->] (0,0) -- (0,7) node[anchor=east] {$r_2$}; \draw[very thick, blue] (4, 0) node[anchor=north,black] {$\ell_1$} -- (4, 2) -- (6, 2); \draw[very thick, blue ] (0, 6) node[anchor=east,black] {$\ell_2$}-- (4, 2); \draw[dashed, gray] (4,2) -- (0,2) node[anchor=east,black] {$\ell_2-\ell_1$}; \end{tikzpicture} \caption{An instance of the Hybrid Mechanism, for the star of $m=2$ leaves. It shows the partition of bid-space of the root player induced by the allocation of the Hybrid Mechanism when $\ell_1 \geq \ell_2$ (left) and when $\ell_2 \geq \ell_1$ (right). In the left case, the root gets both tasks in the area near $(0,0)$, it gets only task $1$ when $r_1\leq \ell_1-\ell_2$ and $r_2\geq \ell_2$, and it gets neither task otherwise. Note that, in contrast to VCG, for every vector of fixed values for the leaves, only three allocations are possible.} \label{fig:max-mechanism} \end{figure} The argmin expression that defines the Hybrid Mechanism and a corresponding expression that defines the VCG mechanism are similar: in the definition of VCG, instead of $\max_{i\not \in T} \ell_i$, we have $\sum_{i\not \in T} \ell_i$. It is a happy coincidence that replacing the operator sum with max preserves the truthfulness of the mechanism, a fact that rarely holds. \begin{lemma} The Hybrid Mechanism for Graph Balancing on stars is truthful and has approximation ratio 2. 
\end{lemma} \begin{proof} The root player has no incentive to lie since $-\max_{i\not \in T} \ell_i$ can be interpreted as its payments. The reason that leaf players have no incentive to lie comes essentially from the fact that the expression in \eqref{eq:max1} is monotone in $\ell_i$ (see Section~\ref{sec:hybrid}, for a more rigorous and extensive treatment of the truthfulness of the general Hybrid Mechanism). Let $S^*=\argmin_{T\subseteq M}\max\{r(T),\, \max_{i\not \in T} \ell_i\}$ be the subset assigned to the root in an optimal allocation, $OPT$ be the optimal makespan, and $ALG$ be the makespan achieved by the Hybrid Mechanism. Then we have \begin{align*} ALG \leq \min_{T\subseteq M}\{r(T)+\max_{i\not \in T} \ell_i\} \leq r(S^*)+\max_{i\not \in S^*} \ell_i\leq 2\max \{r(S^*),\,\max_{i\not \in S^*} \ell_i \}= 2 OPT. \end{align*} \end{proof} \subsection{Upper bound for general graphs and multigraphs} \label{sec:gener-graphs-mult} We now turn our attention to positive (upper bound) results for general graphs and multigraphs. We will need a few definitions first. \begin{definition}[Star decomposition] A \emph{star decomposition} of a (multi)graph $G(V,E)$ is a partition $T=\{T_1,\ldots,T_k\}$ of its edges into stars (see Figure~\ref{fig:star-decomposition} for an example). Let $V(T_i)$ denote the vertex set of the star spanned by $T_i.$ The star contention number of a star decomposition is the maximum number of stars that include a node either as a root or as a leaf: $c(T)=\max_{v\in V}|\{i\,:\, v\in V(T_i), i=1,\ldots,k\}|$. The \emph{star contention number of a (multi)graph} is the minimum star contention number among all its star decompositions. \end{definition} In an optimal star decomposition of a graph (but not multigraph), we can assume that every node is the root of at most one star, otherwise we can merge stars with common root without changing the star contention number. 
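Returning briefly to the star mechanism: the argmin in Definition~\ref{def:maxmech} need not be evaluated over all $2^m$ subsets. If the max term takes the value $v$, every task with $\ell_j>v$ must be in $T$, and dropping any other task from $T$ only lowers $r(T)$ (for nonnegative processing times), so it suffices to try $v\in\{0\}\cup\{\ell_j\}$. This threshold observation is ours, not a claim of the text; the sketch below implements it and cross-checks it against brute force, together with the 2-approximation of the lemma above.

```python
def hybrid_star(r, ell):
    """Hybrid Mechanism on a star: minimize r(T) + max_{i not in T} ell_i.
    For a given value v of the max term, the cheapest root set is
    {j : ell_j > v}, so only v in {0} union {ell_j} need be tried
    (assumes nonnegative processing times).  Returns (cost, T)."""
    m = len(r)
    best = None
    for v in [0.0] + sorted(set(ell)):
        T = {j for j in range(m) if ell[j] > v}
        outside = [ell[j] for j in range(m) if j not in T]
        cost = sum(r[j] for j in T) + (max(outside) if outside else 0.0)
        if best is None or cost < best[0]:
            best = (cost, T)
    return best

def star_makespan(r, ell, T):
    """Makespan of giving set T to the root and the rest to the leaves."""
    return max(sum(r[j] for j in T),
               max((ell[j] for j in range(len(ell)) if j not in T), default=0.0))

def opt_star(r, ell):
    """Optimal makespan on the star, by brute force over root sets."""
    m = len(r)
    return min(star_makespan(r, ell, {j for j in range(m) if mask >> j & 1})
               for mask in range(1 << m))
```

On the toy instance $r=(0.5,0.2,0.4)$, $\ell=(0.6,0.1,0.3)$ (our values), the mechanism keeps all tasks on the leaves, at makespan $0.6$ against an optimum of $0.5$.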
A related notion to star decomposition that has been studied extensively is the notion of edge orientation of a multigraph (or of load balancing when we consider multigraphs). \begin{definition}[Edge orientation number] Define the orientation number of a given orientation of the edges of a multigraph $G$ as its maximum in-degree. The \emph{edge orientation number} $o(G)$ of a multigraph $G$ is the minimum orientation number among all its possible orientations. \end{definition} Indeed, the two notions are closely related: every star decomposition corresponds to a graph orientation by orienting the edges in all stars from roots to leaves, and vice versa, a graph orientation gives rise to a star decomposition in which every node with its outgoing edges defines a star. Given that in an optimal star decomposition of a graph, each node is the root of at most one star, we get that for \emph{every graph} $G$: \begin{align*} o(G)\leq c(G) \leq o(G)+1. \end{align*} The relation for \emph{multigraphs} is similar, except that on the right-hand side we add the maximum edge multiplicity $w$ instead of $1$, i.e., $o(G)\leq c(G) \leq o(G)+w$. The following definition utilizes the Hybrid Mechanism on stars to obtain a general mechanism for arbitrary graphs (and multigraphs). \begin{definition}[Star-Cover Mechanism] \label{def:star-cover} Let $G=(V,E)$ be a multigraph and let $T=\{T_1, \ldots, T_k\}$ be a fixed star decomposition. The Star-Cover mechanism runs the Hybrid Mechanism on every star of $T$ independently. That is, if $S_{i,h}$ is the subset of tasks allocated to player $i$ by the Hybrid Mechanism when applied to a star $T_h$, the set of tasks allocated to player $i$ is $S_i=\cup_{h = 1}^k S_{i,h}$. \end{definition} We can now state and prove the general positive theorem of this section.
\begin{theorem} The Star-Cover mechanism for a given multigraph $G$ that uses the Hybrid Mechanism on every star of a fixed star decomposition $T=\{T_1,\ldots,T_k\}$ is truthful and has an approximation ratio at most $2c(T)$. \end{theorem} \begin{proof} Fix some player $i$ and let $S_{i,h}$ be the subset of tasks allocated to player $i$ by the Hybrid Mechanism when applied to a star $T_h$, $h=1,\ldots,k$. Truthfulness is an immediate consequence of the following two observations. First, since the fixed star decomposition is independent of player $i$'s processing times, player $i$ cannot affect it by lying. Second, $S_{i,h}$ is independent of player $i$'s processing times $t_{i}(e)$ for all edges $e\not\in T_h,$ therefore player $i$ cannot alter the assignment on $T_h$ by changing its values outside $T_h$. To see the approximation guarantee, let $OPT$, $OPT(T_h)$ be the optimal makespan on $G$ and $T_h$ respectively, and let $ALG$ and $ALG(T_h)$ be the makespan achieved by the Star-Cover mechanism on $G$ and $T_h.$ Since every player appears in at most $c(T)$ stars, \begin{align*} ALG \leq c(T)\cdot \max_{h=1,\ldots,k} ALG(T_h)\leq 2c(T)\cdot \max_{h=1,\ldots,k} OPT(T_h)\leq 2 c(T)\cdot OPT. \end{align*} \end{proof} Due to the close connection between star decompositions and edge orientations in graphs, we get \begin{corollary} The approximation ratio for graphs with edge orientation number $o(G)$ is at most $2o(G)+2$. \end{corollary} In the sequel, we consider particular bounds for certain classes of graphs. It is known that the edge orientation number of a given graph can be computed in polynomial time~\cite{AsahiroJMO11-trees}.
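To make the correspondence between orientations and star decompositions concrete, here is a small illustrative script (our sketch, not from the paper): it computes $o(G)$ by brute force over all $2^{|E|}$ orientations, turns a best orientation into its induced star decomposition, and evaluates the contention number. This is only feasible for tiny graphs; the polynomial-time algorithm cited above is flow-based.

```python
from itertools import product

def orientation_number(edges, n):
    """Exact o(G) by brute force: try all 2^|E| orientations of a
    (multi)graph on n nodes and minimize the maximum in-degree."""
    best_val, best_arcs = len(edges), None
    for choice in product((0, 1), repeat=len(edges)):
        indeg = [0] * n
        arcs = []
        for (u, v), c in zip(edges, choice):
            a = (u, v) if c == 0 else (v, u)   # c selects the direction
            arcs.append(a)
            indeg[a[1]] += 1
        if max(indeg) < best_val:
            best_val, best_arcs = max(indeg), arcs
    return best_val, best_arcs

def stars_of(arcs):
    """Star decomposition induced by an orientation: every node together
    with its outgoing arcs forms one star (node = root, heads = leaves)."""
    stars = {}
    for u, v in arcs:
        stars.setdefault(u, []).append(v)
    return stars

def contention(stars):
    """Maximum number of stars containing a node as root or as leaf."""
    count = {}
    for root, leaves in stars.items():
        for v in {root, *leaves}:
            count[v] = count.get(v, 0) + 1
    return max(count.values())
```

On a triangle, for example, every cyclic orientation gives $o=1$ and the induced decomposition has contention number $2=o+1$, matching the inequality $o(G)\leq c(G)\leq o(G)+1$ above.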
In fact, by an application of the max-flow-min-cut theorem it can be shown that $o(G)\leq \gamma$ iff for every subgraph $H$ of $G$ it holds that $|E(H)|\leq \gamma|V(H)|.$ Since this equivalent condition\footnote{This characterization of the orientation number $o(G)$ implies that a truthful mechanism with constant approximation ratio exists for any minor-closed class of graphs, because for every class of graphs with forbidden minors, there exists some constant $\gamma$ that satisfies the property (see Theorems 7.2.3, 7.2.4 and Lemma 12.6.1 in \cite{Dies12}). We are grateful to an anonymous referee for pointing this out.} holds for planar graphs with $\gamma=3,$ we immediately obtain: \begin{theorem} For every planar graph, there exists a truthful mechanism with approximation ratio $8.$ \end{theorem} A natural class of graphs fulfilling this property (with $\gamma=k$) is that of $k$-degenerate graphs. A graph $G(V,E)$ is called \emph{$k$-degenerate}~\cite{erdHos1966chromatic} (or \emph{$k$-inductive}) if there is an ordering $v_1,\ldots,v_n$ of its nodes such that the number of neighbors of $v_i$ in $\{v_{i+1},\ldots,v_n\}$ is at most $k$. Many interesting classes of graphs are $k$-degenerate for some small $k$. Besides planar graphs (with $k=5$), another example is given by $k$-trees~\cite{rose1974simple}: by definition, a $k$-tree is a $k$-degenerate graph with an ordering such that every $v_i$ (except for the last $k$ nodes of the ordering) has exactly $k$ neighbors in $\{v_{i+1},\ldots,v_n\}$ and these $k$ neighbors form a clique. Since graphs of treewidth $k$ are subgraphs of $k$-trees~\cite{rose1974simple}, they are also $k$-degenerate. In particular, trees are $1$-degenerate. We give here a direct proof and illustration of a star decomposition for $k$-degenerate graphs: \begin{theorem} \label{thm:degenerate} For every $k$-degenerate graph, there is a truthful mechanism with approximation ratio $2k+2$. \end{theorem} \begin{proof} Consider a $k$-degenerate graph $G$.
It suffices to show that it admits a star decomposition with contention number $k+1$. Let $v_1,\ldots,v_n$ be an inductive ordering of the nodes of $G$. We consider the star covering $\{T_2,\ldots,T_n\}$ where $T_i$ is the star with root $v_i$ and leaves all its neighbors in $\{v_1,\ldots,v_{i-1}\}$. Note that stars are created in the opposite direction of the inductive order; see Figure~\ref{fig:star-decomposition} for an example. This star decomposition has contention number $k+1$ since every node belongs to at most one star as a root and to at most $k$ stars as a leaf. \begin{figure} \centering \begin{tikzpicture}[scale=0.75] \node[circle, draw, fill=blue!20] (1) at (0,1) {\tiny $1$}; \node[circle, draw, fill=blue!20] (2) at (0,2) {\tiny $2$}; \node[circle, draw, fill=blue!20] (3) at (0,3) {\tiny $3$}; \node[circle, draw, fill=blue!20] (4) at (0,4) {\tiny $4$}; \node[circle, draw, fill=blue!20] (5) at (0,5) {\tiny $5$}; \node[circle, draw, fill=blue!20] (6) at (0,6) {\tiny $6$}; \path (6) edge (5) edge[bend left] (4) edge[bend right] (3) edge[bend left] (1) (5) edge (4) edge[bend left] (3) (4) edge[bend left] (2) (3) edge (2) (2) edge (1) ; \node at (1.2,3.5) {$=$}; \node[circle, draw, fill=blue!20] (61) at (2,1) {\tiny $1$}; \node[circle, draw, fill=blue!20] (63) at (2,3) {\tiny $3$}; \node[circle, draw, fill=blue!20] (64) at (2,4) {\tiny $4$}; \node[circle, draw, fill=blue!20] (65) at (2,5) {\tiny $5$}; \node[circle, draw, fill=blue!20] (66) at (2,6) {\tiny $6$}; \path (66) edge (65) edge[bend left] (64) edge[bend right] (63) edge[bend left] (61); \node at (3.2,3.5) {$+$}; \node[circle, draw, fill=blue!20] (53) at (4,3) {\tiny $3$}; \node[circle, draw, fill=blue!20] (54) at (4,4) {\tiny $4$}; \node[circle, draw, fill=blue!20] (55) at (4,5) {\tiny $5$}; \path (55) edge (54) edge[bend left] (53); \node at (5.2,3.5) {$+$}; \node[circle, draw, fill=blue!20] (42) at (6,2) {\tiny $2$}; \node[circle, draw, fill=blue!20] (44) at (6,4) {\tiny $4$}; \path (44)
edge[bend left] (42); \node at (7.2,3.5) {$+$}; \node[circle, draw, fill=blue!20] (32) at (8,2) {\tiny $2$}; \node[circle, draw, fill=blue!20] (33) at (8,3) {\tiny $3$}; \path (33) edge (32); \node at (9.2,3.5) {$+$}; \node[circle, draw, fill=blue!20] (21) at (10,1) {\tiny $1$}; \node[circle, draw, fill=blue!20] (22) at (10,2) {\tiny $2$}; \path (22) edge (21); \end{tikzpicture} \caption{The star decomposition of a 2-degenerate graph used in Theorem~\ref{thm:degenerate}. The inductive order is upwards, while the stars are ``pointing'' downwards.} \label{fig:star-decomposition} \end{figure} \end{proof} \begin{corollary} \label{cor:planar} There exist truthful mechanisms with approximation ratio at most 4 for trees, and, more generally, of ratio at most $2k+2$ for graphs of treewidth $k.$ \end{corollary} \section{Hybrid Mechanisms} \label{sec:hybrid} Here we provide the general definitions related to Hybrid Mechanisms, and show necessary and sufficient conditions for truthfulness on stars and hyper-stars. We emphasize that this is a multi-dimensional mechanism design setting: each leaf $j$ has a single-dimensional valuation, given by the scalar $\ell_j$, but a root has multi-dimensional preferences, given by its vector of values. % For the sake of convenience, we call non-decreasing real functions \emph{increasing}, and non-increasing functions \emph{decreasing}. We say \emph{strictly increasing/decreasing} if we want to emphasize strict monotonicity. It is known that an allocation rule can be equipped with a truthful payment scheme iff it is \emph{weakly monotone} (Definition~\ref{def:wmon}).
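Weak monotonicity of a leaf's allocation can also be checked numerically. The sketch below is our illustration (not part of the paper): it takes the graph-balancing hybrid rule $\arg\min_T\{r(T)+\max_{i\notin T}\ell_i\}$ as the allocation under test and verifies that, along an increasing grid of bids, the set of bids at which a leaf wins its task is a prefix, i.e., once the leaf loses it keeps losing at higher bids.

```python
from itertools import chain, combinations

def hybrid_root_set(r, ell):
    """Set of tasks the root gets under argmin_T { r(T) + max_{i not in T} ell_i },
    ties broken by the fixed enumeration order of subsets."""
    m = len(r)
    all_T = chain.from_iterable(combinations(range(m), k) for k in range(m + 1))
    return set(min(all_T, key=lambda T: sum(r[i] for i in T)
               + max((ell[i] for i in range(m) if i not in T), default=0)))

def leaf_allocation_monotone(r, ell, i, grid):
    """True iff leaf i's winning bids form a prefix of the (increasing)
    grid: winning at a bid implies winning at every smaller bid."""
    lost = False
    for x in grid:
        bids = ell[:i] + [x] + ell[i + 1:]
        wins = i not in hybrid_root_set(r, bids)
        if lost and wins:
            return False   # lost at a smaller bid, wins at a larger one
        lost = lost or not wins
    return True
```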
The next two propositions give a characterization of the weak monotonicity property in our case, for the leaf-players, and for the root player (in case of stars), respectively: \begin{proposition} \label{prop:leafwmon} An allocation rule is weakly monotone for a leaf-player $i,$ iff for every $r$ and every $\ell_{-i},$ whenever leaf-player $i$ gets task $i$ with bid $\ell_i,$ then he also gets the task with every smaller bid $\ell_i'<\ell_i.$ \end{proposition} \begin{proposition} \label{prop:rootwmon} An allocation rule on stars is weakly monotone for the root player if and only if for every fixed bid vector $\ell$ of the other players, and every $T\subseteq M$ a constant $g_T(\ell)$ (i.e., independent of $r$) exists, such that for every $r$ the root player is allocated a set $S\in \arg\min_T\,\{r(T)+g_{T}(\ell)\}.$ The canonical choice for truthful payments to the $r$-player is then $P^0_S(\ell)=g_\emptyset(\ell)-g_S(\ell),$ and all other truthful payments can be obtained by an additive shift by an arbitrary $c(\ell).$ \end{proposition} Since the $r$-player (player $0$) wants to maximize his profit, and the costs for the tasks are nonnegative, it will be convenient to assume w.l.o.g.
that for every fixed $\ell$ the payments $P^0_S$ correspond to an increasing set-function of $S,$\footnote{We call a setfunction $P$ \emph{increasing}, if $P(S')\leq P(S)$ whenever $S'\subset S,$ and \emph{strictly increasing} if the inequality is strict.} because a set of tasks with higher cost and lower payments cannot be allocated to him by a truthful mechanism.\footnote{See also the \emph{virtual payments} in \cite{CKK20}.} Motivated by Proposition~\ref{prop:rootwmon} we restrict our search for truthful mechanisms on star graphs as follows: \begin{definition}[Hybrid Mechanism]\label{def:star} Assume that an $m$-variate function $g_T:\mathbb{R}^{m}\rightarrow \mathbb{R}$ is given for every $T\subseteq M,$ so that for every fixed vector $\ell\geq 0$ the values $\{g_T(\ell)\}_{T\subseteq M}$ correspond to a decreasing setfunction of $T.$ For any input $(r,\ell),$ a \emph{Hybrid Mechanism (for the functions $\{g_T\}_{T\subseteq M}$)} allocates a set $S$ to the root player such that $$S\in \arg\min_T\,\{r(T)+g_{T}(\ell)\};$$ if there is more than one such set $S,$ the mechanism breaks ties according to the lexicographic order over all subsets of $M.$ The items in $M\setminus S$ are assigned to the leaves. \end{definition} The more general definition for hyperstars is Definition~\ref{def:hybrid}. For the sake of simplicity, the next two lemmas are formulated for star graphs. In the proof of the subsequent main lemma (of Lemma~\ref{lem:charstar}) we discuss the necessary changes for hyperstars. Consider a Hybrid Mechanism on a star. For any $i\in M$ fix all bids in the input except for $r_i,$ i.e., fix the vectors $r_{-i}$ and $\ell.$ \begin{definition}\label{def:critvalue} The following function defines the so called \emph{critical value} for the bid $r_i$: $$\psi_i=\psi_i[r_{-i},\ell]=\min_{T:i\notin T}\{r(T)+g_T(\ell)\}-\min_{T:i\in T}\{r(T\setminus \{i\})+g_T(\ell)\}$$ We omit the arguments $r_{-i},\ell$ whenever they are obvious from the context.
\end{definition} \begin{lemma}\label{lem:psiinc} $\psi_i[r_{-i},\ell]\geq 0$ for every $r_{-i},\ell.$ \end{lemma} \begin{proof} The proof follows from the fact that for every fixed $\ell$ the function $g_T(\ell)$ is a decreasing setfunction of $T.$ % This implies that if $i\notin T,$ then $g_T(\ell)\geq g_{T\cup \{i\}}(\ell),$ and so $r(T)+g_T(\ell)\geq r(T)+g_{T\cup \{i\}}(\ell).$ The same inequality must therefore hold for the minimum values $$\min_{T:i\notin T}\{r(T)+g_T(\ell)\}\geq \min_{T:i\notin T}\{r(T)+g_{T\cup \{i\}}(\ell)\}.$$ Observe that the first and the second expressions in the definition of $\psi_i$ correspond precisely to these two expressions, which concludes the proof. \end{proof} We show next that $\psi_i$ is, indeed, a critical value function: \begin{lemma} \label{lem:critvalue} Let $i\in M,$ and let arbitrary nonnegative bid vectors $r_{-i}$ and $\ell$ be fixed; then for every $r_i<\psi_i$ the root player receives task $i,$ and for every $r_i>\psi_i$ the leaf player with bid $\ell_i$ receives task $i.$ \end{lemma} \begin{proof} If $r_i<\psi_i,$ then $$r_i+\min_{T:i\in T}\{r(T\setminus \{i\})+g_T(\ell)\}<\min_{T:i\notin T}\{r(T)+g_T(\ell)\}$$ or equivalently $$\min_{T:i\in T}\{r(T)+g_T(\ell)\}<\min_{T:i\notin T}\{r(T)+g_T(\ell)\}.$$ Therefore, no set $S$ for which $i\notin S,$ can be in $\arg\min_T \{r(T)+g_T(\ell)\},$ and since the Hybrid Mechanism minimizes the expression $r(T)+g_T(\ell),$ a set $S\subseteq M$ with $i\in S$ will be allocated to the root player. On the other hand, if $r_i>\psi_i,$ then all inequalities in the above argument get flipped, and a set with $i\notin S$ will be given to the root player. \end{proof} The following lemma provides various necessary or sufficient conditions for the truthfulness of Hybrid Mechanisms in stars in terms of monotonicity of the critical value function $\psi_i$ as a function of $\ell_i.$ For hyperstars we (can) only give sufficient conditions.
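The definition of the critical value translates directly into code. The sketch below is illustrative (not from the paper); it computes $\psi_i$ for arbitrary functions $g_T$, with the graph-balancing choice $g_T(\ell)=\max_{i\notin T}\ell_i$ supplied as an example, and lets one check the two lemmas above numerically on small instances.

```python
from itertools import chain, combinations

def subsets(m):
    """All subsets of {0,...,m-1} as tuples, by increasing size."""
    return chain.from_iterable(combinations(range(m), k) for k in range(m + 1))

def psi(i, r, ell, g):
    """Critical value for bid r_i:
    min_{T: i not in T} (r(T)+g_T(ell)) - min_{T: i in T} (r(T\\{i})+g_T(ell))."""
    m = len(r)
    without_i = min(sum(r[j] for j in T) + g(set(T), ell)
                    for T in subsets(m) if i not in T)
    with_i = min(sum(r[j] for j in T if j != i) + g(set(T), ell)
                 for T in subsets(m) if i in T)
    return without_i - with_i

def g_gb(T, ell):
    """Graph-balancing choice: g_T(ell) = max_{i not in T} ell_i."""
    return max((ell[i] for i in range(len(ell)) if i not in T), default=0)
```

For example, with $r=(1,10)$ and $\ell=(5,2)$ one gets $\psi_1=3$: the root wins task $1$ exactly when $r_1<3$, and $\psi_i\geq 0$ as the first lemma asserts.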
% \begin{lemma} \label{lem:charstar} For the truthfulness of the Hybrid Mechanism with given $\{g_T\}_{T\subseteq M}$ functions (i.e., for a truthful payment scheme to exist), \begin{itemize} \item[(a)] in stars it is necessary that for every $i\in M$ and every fixed $(r_{-i},\ell_{-i})$ the function $\psi_i(\ell_i)=\psi_i[r_{-i},\ell_{-i}](\ell_i)$ is an increasing function of $\ell_i;$ \item[(b)] in (hyper-)stars it is sufficient that for every $i\in M$ and every fixed $(r_{-i},\ell_{-i})$ the function $\psi_i(\ell_i)=\psi_i[r_{-i},\ell_{-i}](\ell_i)$ is a \emph{strictly} increasing function of $\ell_i;$ \item[(c)] in (hyper-)stars it is sufficient that for every $i$ and $\ell_{-i}$ the $g_T(\ell_i,\ell_{-i})$ is an increasing function of $\ell_i$ whenever $i\notin T,$ and decreasing function of $\ell_i$ whenever $i\in T.$ \end{itemize} \end{lemma} \begin{proof} We prove the lemma for stars first. By Proposition~\ref{prop:rootwmon}, an allocation rule is weakly monotone for the root player of a mechanism iff it is a minimizer with the additive values $g_T(\ell).$ Furthermore it is without loss of generality to admit only $g_T(\ell)$ so that for fixed $\ell$ the payments $P^0_T(\ell)=g_\emptyset(\ell)-g_T(\ell),$ make a (normalized) increasing setfunction of $T $~ \cite{CKK20}. This is the case iff the $g_T(\ell)$ is a decreasing setfunction of $T,$ altogether iff the mechanism is a Hybrid Mechanism by Definition~\ref{def:star}. The conditions (a)--(c) are necessary or sufficient for weak monotonicity for the \emph{leaf} players, as we show next. For (a) we need to prove that the monotonicity of the $\psi_i$ functions is necessary. 
Assume that there exist $i,\, r_{-i},$ and $\ell_{-i}$ so that $\psi_i$ is not monotone increasing, i.e., there are $\ell'_i<\ell_i$ values so that $\psi_i(\ell'_i)>\psi_i(\ell_i).$ We take an $r_i$ between these values, i.e., let $\psi_i(\ell_i)<r_i< \psi_i(\ell'_i).$ Now consider the leaf-player $i$ for the bids $(r_i,r_{-i}),$ and $\ell_{-i}$ of all other players. According to Lemma~\ref{lem:critvalue}, with bid $\ell_i$ this leaf-player gets task $i$ because $\psi_i(\ell_i)<r_i,$ but with the smaller bid $\ell_i'$ he does not get task $i$ because $r_i< \psi_i(\ell_i').$ Thus the mechanism cannot be truthful for the leaf-player $i$ by Proposition~\ref{prop:leafwmon}. Next we prove that (b) is sufficient for weak monotonicity of the allocation to an arbitrary leaf player $i.$ Assume that a mechanism with strictly increasing $\psi_i$ functions is not weakly monotone; then by Proposition~\ref{prop:leafwmon} there exist $r, \ell_{-i},$ and $\ell'_i<\ell_i$ so that player $i$ gets task $i$ with bid $\ell_i,$ but does not get it with bid $\ell_i'.$ Then, by Lemma~\ref{lem:critvalue}, $r_i$ must be such that $\psi_i[r_{-i},\ell_i,\ell_{-i}]\leq r_i\leq \psi_i[r_{-i},\ell_i',\ell_{-i}].$ Since $\psi_i$ is a strictly increasing function of $\ell_i,$ it must be the case that $\psi_i(\ell_i)>\psi_i(\ell_i'),$ a contradiction.
In order to show that (c) is sufficient, let again $i$ be a leaf-player, and assume for contradiction that there exist $r, \ell_{-i},$ and $\ell'_i<\ell_i''$ so that leaf player $i$ gets task $i$ with bid $\ell_i'',$ but does not get it with bid $\ell'_i.$ Then $r_i$ must be such that $\psi_i[r_{-i},\ell_i'',\ell_{-i}]\leq r_i\leq \psi_i[r_{-i},\ell_i',\ell_{-i}].$ Since $\psi_i$ is increasing in $\ell_i$ (by the assumptions of (c)), it must be the case that $\psi_i(\ell_i'')=\psi_i(\ell_i').$ Let $T^*$ be the set with highest priority among the sets $T$ with $i\in T$ that minimize the expression $r(T)+g_T(\ell_i'').$ % Since the leaf player gets task $i$ for bid $\ell_i'',$ there must be a set $S$ with $i\not\in S$ such that $r(S)+g_{S}(\ell_i'')\leq r(T^*)+g_{T^*}(\ell_i'')$ (with higher priority than $T^*$ in case equality holds). Given that $g_S$ is an increasing and $g_{T^*}$ a decreasing function of $\ell_i,$ this $S$ must beat $T^*$ for $\ell_i'$ as well. Assume that another $T^{**}$ would beat $S$ at $\ell_i';$ then this $T^{**}$ would necessarily beat $S$ and $T^{*}$ also at $\ell_i'',$ a contradiction. This concludes the proof for stars.% We discuss the necessary changes in the above proof for hyperstars. We need to prove weak-monotonicity for an arbitrary root-player, and for an arbitrary leaf-player in cases (b) and (c).
For a root-player $h,$ weak monotonicity holds, if (but not only if) we replace $g_T(\ell)$ in Proposition~\ref{prop:rootwmon} by $$\tilde g_T(\ell,r_{-h}):=\min_{R\subseteq M\setminus T}\{\min_{x^R}\sum_{j\leq k\, | \,j\neq h}\lambda_j r_j\cdot x^R_j +g_{R\cup T}(\ell)\}.$$ Here, $r$ is not a vector but a $k\times m$ matrix, and the notation $\,r_{-h}\,$ refers to the bids of the root-players other than player $h.$ Note that this expression corresponds again to the minimum sum in the allocation rule that all players except for $h$ can have, given that player $h$ receives $T.$ As required, the $\tilde g_T(\ell,r_{-h})$ are increasing set-functions of $M\setminus T$ for fixed $\ell$ and $r_{-h}:$ assume that $M\setminus T'\subset M\setminus T,$ and let $R\subset M\setminus T$ provide the minimum for $\tilde g_T.$ Then $R\cap(M\setminus T')$ yields a not larger value over $M\setminus T'$ which, in turn, is an upper bound for $\tilde g_{T'}.$ (Here we exploit that $g$ is an increasing set-function.) For the leaf-players, in Definition~\ref{def:critvalue}, Lemmas~\ref{lem:psiinc} and \ref{lem:critvalue} we need to replace $r(T)$ by $\min_{x^T}\sum_{j=1}^k\lambda_j r_j\cdot x^T_j$ for hyperstars. In general, this is the smallest sum that the root players can achieve on the tasks in $T.$ If now $\min_{1\leq j\leq k} \lambda_j\cdot r_{ji}$ (instead of $r_i$ in the single-root case) is below the critical value $\psi_i$ then the corresponding root player gets task $i,$ otherwise the leaf-player $k+i.$ The rest of the proof of (b) and (c) is analogous% . \end{proof} The following example shows that conditions (b) and (c) are both \emph{not} necessary for the Hybrid Mechanism to be truthful. 
\begin{example} Let $m=2,$ and consider the following functions: $g_{\{1,2\}}=0,\,\, g_{\{1\}}=\ell_2+1/\ell_1,\,\,g_{\{2\}}=1,\,\,g_\emptyset=\ell_2+1/\ell_1+1,\,$ that is, we minimize over $$r_1+r_2,\quad r_1+\ell_2+1/\ell_1,\quad r_2+1,\quad \ell_2+1/\ell_1+1.\,$$ For every fixed $\ell,$ $g_T(\ell)$ is a decreasing setfunction of $T$ (in fact, additive function of $M\setminus T$). The critical value functions are $\psi_1\equiv 1$ for every $r_2$ and every $\ell,$ and $\psi_2=\ell_2+1/\ell_1$ for every $r_1.$ Observe that $\psi_1$ is not strictly increasing, and $g_\emptyset=1+\ell_2+1/\ell_1$ is not increasing in $\ell_1,$ although $1\notin \emptyset.$ Thus, this mechanism fulfils neither (b) nor (c). Still, for example with a tie-breaking that prefers giving the tasks (independently) to the root player, it is truthful. \end{example} \begin{corollary}\label{cor:sufficient} The Hybrid Mechanism for Graph Balancing, the Hybrid Max-Mechanism for Hypergraphs, and the Hybrid $L^p$ Mechanism on stars are truthful. \end{corollary} \begin{proof} The first statement follows from the fact that the Hybrid Mechanism for Graph Balancing fulfils (c). Clearly, $g_T(\ell)=\max_{i \notin T}\{\ell_i\}=\max_{i \in M\setminus T}\{\ell_i\}$ is an increasing setfunction of the sets $M\setminus T,$ and therefore a decreasing setfunction of the sets $T,$ for fixed $\ell.$ For fixed $T,$ $\max_{i \notin T}\{\ell_i\}$ is an increasing function of $\ell_i$ for every $i\notin T,$ and it is independent of $\ell_i$ (constant function) if $i\in T.$ Finally, it is easy to see that the Hybrid $L^p$ Mechanism of Section~\ref{sec:lp-norm} fulfils (b) as well as (c). \end{proof} \section{Introduction} This work belongs to the area of mechanism design, one of the most researched branches of Game Theory and Microeconomics, with numerous applications in environments where a protocol of conduct for selfish participants is required.
The goal is to design an algorithm, called a mechanism, which is robust under selfish behavior and produces a social outcome with a certain guaranteed quality. The mechanism solicits the preferences of the participants over the outcomes, in the form of bids, and then selects one of the outcomes. The challenge stems from the fact that the real preferences of the participants are private, and the participants care only about maximizing their private utilities; hence they will lie if a false report is profitable. A {\em truthful} mechanism provides incentives such that a truthful bid is the best action for each participant. Despite the importance of the problem, the only general positive result for multi-dimensional domains is the celebrated Vickrey-Clarke-Groves (VCG) mechanism~\cite{Vic61,Cla71,Gro73} and its affine extensions, known as affine maximizers. % In their seminal paper on algorithmic mechanism design, Nisan and Ronen~\cite{NR01} proposed the scheduling problem on unrelated machines as a central problem to understand the algorithmic aspects of mechanism design. The objective is to incentivize $n$ machines to execute $m$ tasks, so that the maximum completion time of the machines, i.e., the makespan, is minimized. Scheduling, a problem that has been extensively studied from the classical algorithmic perspective, proved to be the perfect ground to study the limitations that truthfulness imposes on algorithm design. Nisan and Ronen applied the VCG mechanism, the most successful generic machinery in mechanism design, which truthfully implements the outcome that maximizes the social welfare. In the case of scheduling, the allocation of the VCG is the greedy allocation in which each task is assigned to the machine with minimum processing time. This mechanism is truthful, but has a poor approximation ratio of $n$ for the makespan.
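The greedy VCG allocation just described, and its poor makespan guarantee, are easy to reproduce. A small illustrative sketch (ours, not from~\cite{NR01}; ties are broken toward the lowest machine index, one of several valid tie-breakings):

```python
def vcg_allocation(t):
    """t[i][j] = processing time of machine i on task j.
    VCG for makespan assigns each task to a machine of minimum time
    (ties to the lowest index)."""
    n, m = len(t), len(t[0])
    return [min(range(n), key=lambda i: t[i][j]) for j in range(m)]

def makespan(t, alloc):
    """Maximum load under the allocation alloc[j] = machine of task j."""
    loads = [0.0] * len(t)
    for j, i in enumerate(alloc):
        loads[i] += t[i][j]
    return max(loads)
```

On the all-ones $n\times n$ instance this tie-breaking sends every task to machine $0$, giving makespan $n$ while the optimum (one task per machine) is $1$: exactly the ratio-$n$ behavior mentioned above.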
They conjectured that this is the best guarantee that can be achieved by any deterministic (polynomial-time or not) truthful mechanism and this conjecture, known as the Nisan-Ronen conjecture, is widely perceived as the holy grail in algorithmic mechanism design. An interesting special case of the scheduling problem, which is well-understood, is the single-dimensional mechanism design in which the values of each player are linear expressions of a single parameter. The principal representative is the problem of scheduling {\em related} machines, where the cost of each machine can be expressed via a single parameter, its {\em speed}. This was first studied by Archer and Tardos~\cite{AT01}, who showed that in contrast to the unrelated machines version, an algorithm that minimizes the makespan can be truthfully implemented --- albeit in exponential time. It was subsequently shown that truthfulness has essentially no impact on the computational complexity of the problem. Specifically, a randomized truthful-in-expectation\footnote{This is one of the two main definitions of truthfulness for randomized mechanisms, where truth-telling maximizes the expected utility of each player.} PTAS was given in~\cite{DDDR11} and a deterministic PTAS was given in~\cite{CK13}; a PTAS is the best possible algorithm even for the pure algorithmic problem (unless $P=NP$). \subsection{Lower Bounds for Graph Balancing } \label{sec:lower-bounds} In this subsection, we show corresponding negative results for the positive results of the previous subsection. We first observe that the natural candidate mechanisms for the Graph Balancing problem have very poor performance, in stark contrast to the Hybrid Mechanism. 
\begin{theorem} \label{thm:local} All local mechanisms for stars, including VCG, affine minimizers and task-independent mechanisms, have approximation ratio at least $\sqrt{m}=\sqrt{n-1}.$ % \end{theorem} \begin{proof} Consider the following input $$t=\left( \begin{array}{c c c c} \frac{1}{\sqrt{m}} & \frac{1}{\sqrt{m}} & \cdots & \frac{1}{\sqrt{m}}\\ 1 & \infty & \cdots & \infty\\ \infty & 1 & \cdots & \infty\\ \infty & \infty & \cdots & 1\\ \end{array}\right).$$ If, in the allocation of the mechanism, the root player takes all the tasks, then this allocation has approximation $\sqrt{m}$, as the optimal allocation is to assign the tasks to the leaves with makespan equal to 1. Otherwise, assume that (at least) one of the tasks is given to some other player, say w.l.o.g. task $1$ is given to player $1.$ By a series of applications of Lemma~\ref{lemma:tool}, and by exploiting the locality of the mechanism, we set the value of the owner of task $j$ to 0 for every $j\neq 1.$ In particular, let $S$ be the set of tasks assigned to the root player, and $M\setminus S$ be the tasks assigned to their respective leaf-player. % Let $t^1=(r',\ell_1,\ldots, \ell_m)$, with $r'$ defined as follows for some arbitrarily small $\epsilon$. $$r'_j=\left\{ \begin{array}{c c} 0 & j\in S \\ \frac{1}{\sqrt{m}}+\epsilon & \text{ otherwise. } \\ \end{array}\right. $$ By applying Lemma~\ref{lemma:tool}, the root player again receives the set $S,$ and therefore, the set $M\setminus S$ is assigned to the leaves. We proceed by changing the bids of the leaf-players for the tasks in $M\setminus S$ to 0, i.e., defining a sequence $t^j$ for $j\in M\setminus S$, with $t^j=(r',\ell'_j=0,\ell^{j-1}_{-j}).$ % Again, by Lemma~\ref{lemma:tool} and by locality, we get that the allocation of the tasks remains the same for the leaf $j,$ \emph{and} for all the other players as well.
We end up with an instance $t'$ where player 1 still takes the first task, while the rest of the tasks are assigned to a player with 0 processing time. For $t'$, the optimal makespan is $1/\sqrt{m}$, while the mechanism achieves makespan equal to 1. We illustrate the case when $S=\emptyset,$ that is, the allocation gives all the tasks to the leaves of the star. % $$t=\left( \begin{array}{c c c c} \frac{1}{\sqrt{m}} & \frac{1}{\sqrt{m}} & \cdots & \frac{1}{\sqrt{m}}\\ \circled{1} & \infty & \cdots & \infty\\ \infty & \circled{1} & \cdots & \infty\\ \infty & \infty & \cdots & \circled{1}\\ \end{array}\right) \rightarrow t'=\left( \begin{array}{c c c c} \frac{1}{\sqrt{m}} & \frac{1}{\sqrt{m}} & \cdots & \frac{1}{\sqrt{m}}\\ \circled{1} & \infty & \cdots & \infty\\ \infty & \circled{0} & \cdots & \infty\\ \infty & \infty & \cdots & \circled{0}\\ \end{array}\right)$$ \end{proof} In the previous subsection, we showed that the Hybrid Mechanism outperforms all known mechanisms and has approximation ratio at most 2. The next theorem shows that this ratio is the best possible among all mechanisms for stars. \begin{theorem} \label{thm:lower-bound-stars} There is no deterministic mechanism for stars that can achieve an approximation ratio better than 2. \end{theorem} This is a special case of a more general lower bound for the $L^p$-norm objective (Theorem~\ref{thm:lower-bound-stars-lp}), but we give the proof here anyway, since it will be an ingredient of the proof of the following theorem (Theorem~\ref{thm:phi+1}). \begin{proof} Consider an input where the processing time of the root player is $r_j=a^{j-1}$ for each task $j$, where $a>1$ is a parameter, and the processing time of the corresponding leaf player for task $j$ is $\ell_j=a^j$, as shown in the following table.
$$t=\left( \begin{array}{c c c c c} 1 & a & \cdots & a^{m-2} & a^{m-1}\\ a & \infty & \cdots & \infty & \infty\\ \infty & a^2 & \cdots & \infty & \infty\\ \vdots & \vdots & \vdots & \vdots & \vdots\\ \infty & \infty & \cdots & a^{m-1} & \infty\\ \infty & \infty & \cdots & \infty & a^m \\ \end{array}\right)$$ If the mechanism assigns all tasks to the root player, then the makespan for this input is $(a^m-1)/(a-1)$, while the optimal makespan is $a^{m-1}$, which yields a ratio of $(a^m-1)/((a-1)a^{m-1})$. Otherwise, let $X$ be the nonempty set of tasks assigned to the leaf players. Let $k$ be the task with the maximum index in $X$. Since it is processed by the leaf player, its processing time is $a^k$. Now consider the input in which we change the processing times of the root player to $$r'_j= \begin{cases} 0 & j\not\in X \\ r_j+\epsilon & \text{ otherwise} \end{cases} $$ for some arbitrarily small $\epsilon>0$. By weak monotonicity (Lemma~\ref{lemma:tool}), the set of tasks assigned to the root player remains the same, and as a result the whole allocation stays the same. Therefore task $k$ is still assigned to the leaf player $k$ and the makespan of the mechanism is at least $a^k$. Notice that the optimal makespan for this input is $a^{k-1}+\epsilon$, which yields an approximation ratio of $a$, as $\epsilon$ tends to $0$. In conclusion, the approximation ratio is $\min\{(a^m-1)/((a-1)a^{m-1}),a\}$, for every $a>1$. By choosing $a=2$, we see that the ratio is $2-1/2^{m-1}$, which shows that for the class of stars no mechanism can have approximation ratio better than $2$. For fixed $m$, the lower bound is slightly better than $2-1/2^{m-1}$, by selecting $a$ to be the positive root of the equation $(a^m-1)/((a-1)a^{m-1})=a$. \end{proof} We now show how to extend the previous result to get a lower bound of $1+\varphi\approx 2.618$ for trees, and thus for graphs.
This matches the best lower bound for the Nisan-Ronen setting~\cite{KV07} that was known until the recent improvements~\cite{GiannakopoulosH20, DS20, CKK20b}, suggesting that studying the special case of scheduling in graphs may be useful in attacking the Nisan-Ronen conjecture. \begin{figure} \centering \begin{minipage}[c]{0.4\linewidth} \begin{tikzpicture}[scale=0.8,% cnode/.style = {circle, draw, text centered, node distance=3cm, fill=blue!20}, dummy/.style = {distance=6cm}] \node [cnode] {\small $0$} child { node [cnode] {\small $1$} } child { node [cnode] (2) {\small $2$} } child { node [dummy] {\small $\cdots$} } child { node [cnode] (n) {\small $m$} } ; \end{tikzpicture} \end{minipage} \begin{minipage}[c]{0.4\linewidth} \begin{tikzpicture}[scale=0.8,% cnode/.style = {circle, draw, text centered, node distance=3cm, fill=blue!20}, dummy/.style = {distance=3cm}] \node [cnode] {\small $\overline 0$} child { node [cnode] {\small $0$} child { node [cnode] {\small $1$} child { node [cnode] {\small $\overline 1$} }} child { node [cnode] {\small $2$} child { node [cnode] {\small $\overline 2$} }} child { node [dummy] {\small $\cdots$}} child { node [cnode] {\small $k$} child { node [cnode] {\small $\overline k$} }} } ; \end{tikzpicture} \end{minipage} \caption{A star with root $0$ and leaves $1,\ldots,m$ and its extension to a tree with dummy nodes} \label{fig:star} \end{figure} \begin{theorem} \label{thm:phi+1} No mechanism for trees can achieve an approximation ratio better than $1+\varphi \approx 2.618$. \end{theorem} \begin{proof} The proof mimics the proof of Theorem~\ref{thm:lower-bound-stars} on the tree shown in Figure~\ref{fig:star}. The tree consists of a star with root $0$ and leaves $1,\ldots, k$ in which we add a new node $\overline v$ for each node $v$ of the star and connect it to $v$. These new nodes (players), which we call dummy, will not be assigned any task by any efficient mechanism, since we set their processing times to an arbitrarily high value $H$.
The processing times of the edges of the star are exactly the same as in the proof of Theorem~\ref{thm:lower-bound-stars}: $r_j=a^{j-1}$ and $\ell_j=a^{j}$, for some $a>1$. The processing times for all edges are given below: \begin{align*} r_j&=a^{j-1} & \ell_j&=a^j&j&=1,\ldots,k \\ \overline r&=0 & \overline \ell_j&=0 & & \end{align*} where $\overline r$ and $\overline \ell_j$ are the processing times of the star vertices for their respective dummy tasks. The dummy nodes themselves have a very large processing time $H\gg 1\,$ on these tasks. We consider two cases. In the first case, all tasks of the star are assigned to the root player $0.$ We then consider a new instance in which we slightly lower the processing time of the root on the tasks of the star (i.e., $r_j=a^{j-1}-\epsilon$ for some $\epsilon>0$) and increase the processing time of its dummy task to $\overline r=a^k$. By weak monotonicity (Lemma~\ref{lemma:tool}), the $r$-player will take this task and all tasks of the star, with a total processing time slightly less than $1+a+\ldots+a^k=(a^{k+1}-1)/(a-1)$. It is easy to see that the optimal makespan for this instance is $a^k$, and the approximation ratio is $(a^{k+1}-1)/((a-1)a^k)$. In the second case, at least one task of the star is allocated to a leaf. Let $p$ be the star task allocated to a leaf with the maximum index (that is, task $p$ of the star is allocated to leaf-player $p$ and tasks $p+1,\ldots,k$ are allocated to the root). We consider the instance in which we change the processing times of the root player as follows: all processing times of the tasks allocated to the root become $0$ and all processing times of the root player for the remaining tasks increase slightly. By weak monotonicity (Lemma~\ref{lemma:tool}), the $r$-player will still get the same set of tasks.
We now create a new instance by increasing the processing time of the $p$-th dummy task: $\overline \ell_p=a^{p-1}$ and slightly decreasing the processing time of the leaf $p$ for its task in the star: $\ell_p=a^p-\epsilon$, for some $\epsilon>0$. Then again by weak monotonicity (Lemma~\ref{lemma:tool}), player $p$ will get these two tasks. Although the allocation of the other tasks may change, the cost for the mechanism is at least $a^p+a^{p-1}-\epsilon$, while the optimal allocation has cost $a^{p-1}$. Therefore, in this case the mechanism has approximation ratio $(a^p+a^{p-1})/a^{p-1}=a+1$, as $\epsilon\rightarrow 0$. In any case, the mechanism has approximation ratio $\min\{(a^{k+1}-1)/((a-1)a^k),\,a+1\}$. By selecting $a=\varphi$, we get a ratio of at least $1+\varphi$ (as $k\rightarrow \infty$). \end{proof} Closing the gap between the lower bound of $2.618$ (Theorem~\ref{thm:phi+1}) and the upper bound of $4$ (Corollary~\ref{cor:planar}) for mechanisms on trees is an intriguing open question. \section{Mechanisms for $L^p$-norm optimization} \label{sec:lp-norm} In this section we generalize some of the results of Section~\ref{sec:scheduling} to the objective of minimizing the $L^p$-norm of the values of the agents, i.e., \begin{align} \min_X \bigg(\sum_{i=1}^n t_i(X)^p\bigg)^{1/p}. \end{align} The makespan scheduling problem is the special case of $p=\infty$. We consider all positive values of $p$, but we treat separately the case $p\geq 1$, in which $L^p$ is a proper norm, and the case $p\in(0,1)$, in which the $L^p$ function is not subadditive (i.e., the triangle inequality does not hold). We also consider the \emph{maximization} version, which for $p=1$ corresponds to auctions. Some of the proofs in this section are more general than the corresponding proofs in Section~\ref{sec:scheduling}, but similar. Nevertheless, we provide them for completeness.
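Before moving on to the $L^p$ objectives, we note that the two lower-bound trade-offs established above are easy to verify numerically. The following Python sketch (our own illustration; the function names are not part of the constructions) evaluates the case ratios from the proofs of Theorem~\ref{thm:lower-bound-stars} and Theorem~\ref{thm:phi+1}.

```python
# Case ratios from the star lower bound (Theorem thm:lower-bound-stars):
#   all tasks on the root:  (a^m - 1) / ((a - 1) a^{m-1})
#   some task on a leaf:    a
# and from the tree lower bound (Theorem thm:phi+1):
#   all star tasks on the root:  (a^{k+1} - 1) / ((a - 1) a^k)
#   some star task on a leaf:    a + 1

def star_bound(a: float, m: int) -> float:
    return min((a**m - 1) / ((a - 1) * a**(m - 1)), a)

def tree_bound(a: float, k: int) -> float:
    return min((a**(k + 1) - 1) / ((a - 1) * a**k), a + 1)

PHI = (1 + 5**0.5) / 2  # golden ratio, satisfies PHI^2 = PHI + 1

# a = 2 gives the star bound 2 - 1/2^{m-1}, which tends to 2 as m grows.
for m in (2, 5, 10, 20):
    assert abs(star_bound(2.0, m) - (2 - 2 ** (-(m - 1)))) < 1e-12

# a = PHI balances the two tree ratios: both tend to 1 + PHI (about 2.618).
assert abs(tree_bound(PHI, 60) - (1 + PHI)) < 1e-9
print(star_bound(2.0, 10), tree_bound(PHI, 60))
```

The assertions simply re-evaluate the closed-form expressions from the proofs; they are a sanity check, not an independent argument.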
\subsection{The Hybrid Mechanism for Minimizing the $L^p$-Norm Objective} \label{sec:stars-lp} Consider an instance of the Unrelated Graph Balancing problem on a star of $n$ nodes and set of tasks $M$. Notice that for stars the objective of minimizing the $L^p$-norm corresponds to minimizing $(r(T)^p+\sum_{i\not\in T} \ell_i^p)^\frac{1}{p}$ over all task sets $T\subseteq M$ given to the $r$-player. \begin{definition}[Hybrid $L^p$ Mechanism for stars] For a given $0< p\leq \infty,$ and an instance of the Unrelated Graph Balancing problem on a star of $n$ nodes and set $\,M\,$ of tasks, let \begin{align} \label{eq:max2} S\in \argmin_{T\subseteq M}\bigg\{r(T)+\bigg(\sum_{i\not \in T} \ell^p_i\bigg)^{1/p}\bigg\}. \end{align} The mechanism assigns $S$ to the root and the remaining tasks to leaves. Ties are broken in a deterministic way (e.g., lexicographically). \end{definition} The argmin expression that defines the Hybrid $L^p$ Mechanism coincides with the VCG mechanism for $p=1$ and with the Hybrid Mechanism of Section~\ref{sec:scheduling} for $p\rightarrow\infty$. As it is shown in Section~\ref{sec:hypergraphs} (Corollary~\ref{cor:sufficient}), the Hybrid $L^p$ mechanism is truthful. The reason that leaf players have no incentive to lie comes from the fact that the expression in \eqref{eq:max2} is monotone in $\ell_i$. And the reason that the root player has no incentive to lie comes from interpreting $-(\sum_{i\not \in T} \ell^p_i)^{1/p}$ as its payments. \begin{lemma} The Hybrid $L^p$ Mechanism for stars is truthful. \end{lemma} We now consider the approximation ratio achieved by the mechanism. We summarize here the inequalities that we will use: \begin{lemma}\label{lem:jensen} For any $p\geq 1$ it holds \begin{align} \label{eq:ineqs} \sum_{i=1}^k x_i^p \leq \bigg(\sum_{i=1}^k {x_i}\bigg)^p \leq k^{p-1}\sum_{i=1}^k x_i^p. 
\end{align} Similarly, for any $0<p\leq 1$ it holds that \begin{align} \sum_{i=1}^k x_i^{1/p} \leq \bigg(\sum_{i=1}^k {x_i}\bigg)^{1/p} \leq k^{1/p-1}\sum_{i=1}^k x_i^{1/p}. \end{align} \end{lemma} \begin{proof} The left inequality of~\eqref{eq:ineqs} is essentially the triangle inequality of the $L^p$-norm. The right inequality is an immediate application of Jensen's inequality $$\varphi\left(\frac{\sum_{i=1}^k{x_i}}{k}\right)\leq \frac{\sum_{i=1}^k{\varphi(x_i)}}{k},$$ for the convex function $\varphi(x)=x^p$. The second set of inequalities follows by replacing $p$ with $1/p \geq 1$. \end{proof} Next we give upper bounds on the approximation ratio for the $L^p$-norm objective, separately for the case $p\geq 1$ and the case $0< p\leq 1$. \begin{theorem} \label{thm:upper-bound-min-lp} For the problem of minimizing the $L^p$-norm, the Hybrid $L^p$ Mechanism for stars has approximation ratio of at most $2^{(p-1)/p}$, when $p\geq 1$, and $2^{(1-p)/p}$, when $0<p<1$. \end{theorem} \begin{proof} Let $S^*=\argmin_{T\subseteq M}(r(T)^p+\sum_{i\not\in T} \ell_i^p)^\frac{1}{p}$ be the subset assigned to the root in the optimal allocation, $S$ be the subset assigned to the root by the Hybrid $L^p$ Mechanism, $OPT$ be the optimal $L^p$-norm, and $ALG$ be the $L^p$-norm achieved by the Hybrid $L^p$ Mechanism. We first consider the case $p\geq 1$.
We have \begin{align*} ALG = \bigg(r(S)^p+\sum_{i\not \in S} \ell^p_i\bigg)^{1/p} &\leq r(S)+\bigg(\sum_{i\not \in S} \ell^p_i\bigg)^{1/p} \\ &\leq r(S^*)+\bigg(\sum_{i\not \in S^*} \ell^p_i\bigg)^{1/p} \\ & \leq \, 2^{(p-1)/p}\bigg(r(S^*)^p+\sum_{i\not \in S^*} \ell^p_i\bigg)^{1/p} = 2^{(p-1)/p} OPT, \end{align*} where the first inequality follows from the triangle inequality, the second from the definition of the Hybrid $L^p$ Mechanism, and the last one from Jensen's inequality (Lemma~\ref{lem:jensen}, Equation~\eqref{eq:ineqs}) for $k=2,\, x_1=r(S^*),$ and $x_2=(\sum_{i\not \in S^*} \ell^p_i)^{1/p}.$ The case of $p<1$ is essentially the same, but the proof is slightly different: we first apply Jensen's inequality and then the ``triangle inequality''. \begin{align*} ALG = \bigg(r(S)^p+\sum_{i\not \in S} \ell^p_i\bigg)^{1/p} &\leq 2^{\frac{1}{p}-1}\bigg(r(S)+\big(\sum_{i\not \in S} \ell^p_i\big)^{1/p}\bigg) \\ &\leq 2^{\frac{1}{p}-1}\bigg(r(S^*)+\big(\sum_{i\not \in S^*} \ell^p_i\big)^{1/p}\bigg) \\ &\leq 2^{\frac{1}{p}-1}\bigg(r(S^*)^p+\sum_{i\not \in S^*} \ell^p_i\bigg)^{1/p} =\, 2^{\frac{1}{p}-1} OPT. \end{align*} The first inequality follows from Jensen's inequality (Lemma~\ref{lem:jensen}) for $x_1=(r(S))^p$ and $x_2=\sum_{i\not \in S} \ell^p_i;$ the second from the definition of the Hybrid $L^p$ Mechanism; and the last one from the fact that $(\alpha+\beta)^p\leq \alpha^p+\beta^p$, when $0<p\leq 1.$ \end{proof} As in the case of makespan, we can apply the mechanism to other domains by decomposing them: the Star-Cover mechanism (Definition~\ref{def:star-cover}) yields good approximation ratios for general domains.
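For small stars, the guarantee of Theorem~\ref{thm:upper-bound-min-lp} can also be checked by brute force over all subsets. The Python sketch below (a minimal illustration of ours; the variable names are not from the paper) implements the argmin rule of the Hybrid $L^p$ Mechanism directly and compares its $L^p$-cost with the optimum on random instances.

```python
import itertools
import random

def lp_norm_cost(S, r, l, p):
    """L^p-norm when subset S goes to the root: (r(S)^p + sum_{i not in S} l_i^p)^{1/p}."""
    root = sum(r[i] for i in S)
    leaves = sum(l[i] ** p for i in range(len(r)) if i not in S)
    return (root ** p + leaves) ** (1 / p)

def hybrid_lp(r, l, p):
    """Subset chosen by the Hybrid L^p Mechanism:
    argmin_T  r(T) + (sum_{i not in T} l_i^p)^{1/p}."""
    m = len(r)
    return min(
        (frozenset(T) for k in range(m + 1) for T in itertools.combinations(range(m), k)),
        key=lambda T: sum(r[i] for i in T)
        + sum(l[i] ** p for i in range(m) if i not in T) ** (1 / p),
    )

# Random small instances: the mechanism never exceeds 2^{(p-1)/p} times the optimum.
random.seed(0)
p = 3.0
for _ in range(200):
    m = random.randint(1, 6)
    r = [random.uniform(0.1, 5) for _ in range(m)]
    l = [random.uniform(0.1, 5) for _ in range(m)]
    subsets = [frozenset(T) for k in range(m + 1) for T in itertools.combinations(range(m), k)]
    opt = min(lp_norm_cost(S, r, l, p) for S in subsets)
    alg = lp_norm_cost(hybrid_lp(r, l, p), r, l, p)
    assert alg <= 2 ** ((p - 1) / p) * opt + 1e-9
```

The assertion encodes exactly the guarantee of the theorem; the exhaustive search over all $2^m$ subsets restricts this check to small $m$.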
\begin{theorem} For $p\geq 1,$ the Star-Cover mechanism for a given multigraph $G$ that uses the Hybrid $L^p$ Mechanism on every star of a fixed star decomposition $T=\{T_1,\ldots,T_k\}$ is truthful and has an approximation ratio at most $(2c(T))^{(p-1)/p}$ of the $L^p$-norm of the machines' costs, where $c(T)$ is the star contention number of the decomposition. \end{theorem} \begin{proof} Fix some player $i$ and let $S_{i,h}$ be the subset of tasks allocated to player $i$ by the $L^p$ Mechanism when applied to a star $T_h$, $h=1,\ldots,k$. Like in the case of Max Mechanism (Definition~\ref{def:maxmech}), truthfulness follows from two observations. First, since the fixed star decomposition is independent of player $i$'s processing times, player $i$ cannot affect it by lying. Second, $S_{i,h}$ is independent of player $i$'s processing times $t_{i}(e)$ for all edges $e\not\in T_{h}$, therefore player $i$ cannot alter the assignment on $T_h$ by changing its values outside $T_h$. Now we show the approximation guarantee. Let $OPT$, $OPT(T_h)$ be the optimal $L^p$-norm on $G$ and $T_h$ respectively, and let $ALG$ and $ALG(T_h)$ be the $L^p$-norm achieved by the $L^p$-Cover mechanism on $G$ and $T_h.$ Let $c=c(T)$ be the star contention number of $T.$ We prove that $(ALG)^p\leq (2c)^{p-1} (OPT)^p.$ \begin{align*} (ALG)^p &= \sum_i \bigg(\sum_h t(S_{i,h})\bigg)^p\\ & \leq \sum_i c^{p-1}\sum_h t(S_{i,h})^p = c^{p-1}\sum_h \sum_i t(S_{i,h})^p = c^{p-1}\sum_h (ALG(T_h))^p \\ &\leq c^{p-1}\sum_h 2^{p-1}(OPT(T_h))^p = (2c)^{p-1}\sum_h (OPT(T_h))^p = (2c)^{p-1}\sum_h \sum_i (t(\tilde S_{i,h}))^p \\ &\leq (2c)^{p-1}\sum_h \sum_i (t(S^*_{i,h}))^p = (2c)^{p-1}\sum_i \sum_h (t(S^*_{i,h}))^p \\ &\leq (2c)^{p-1}\sum_i (t(S^*_{i}))^p = (2c)^{p-1} ( OPT)^p. 
\end{align*} The first inequality follows from Lemma~\ref{lem:jensen}, because for every machine $i$ the value $t(S_{i,h})$ is nonzero for at most $c$ stars $h\in \{1,2,\ldots, k\}.$ The second holds by the approximation ratio of the Hybrid $L^p$ Mechanism for stars; here $\tilde S_{i,h}$ denotes the set given to machine $i$ in the optimal allocation \emph{of star} $h,$ whereas $S^*_{i,h}$ is the restriction to the tasks of star $h$ of the task set allocated to player $i$ by $OPT.$ The third inequality holds by the optimality of the $\tilde S_{i,h}$ on each star $h.$ Finally, the last inequality holds by the superadditivity inequality $\sum x_i^p\leq (\sum x_i)^p$ for $p\geq 1.$ \end{proof} \subsection{Lower Bounds for Minimizing the $L^p$-norm} \label{sec:lower-bounds-lp} We now provide corresponding negative results for mechanisms. For the case of $p\geq 1$, the next theorem shows that the Hybrid $L^p$ Mechanism has optimal approximation ratio. The case of $p<1$ is treated separately below, and the lower bound that we give does not match the upper bound exactly, which leaves open the possibility that there exists a mechanism with a better approximation ratio than the Hybrid $L^p$ Mechanism. \begin{theorem} \label{thm:lower-bound-stars-lp} For any $p\geq 1$, there is no deterministic mechanism for stars that can achieve an approximation ratio better than $2^{1-1/p}$ for the $L^p$-objective. \end{theorem} \begin{proof} This is a generalization of the corresponding proof in Section~\ref{sec:scheduling}. Assume that the mechanism is given the input where the processing times of the root player are $r_j =2^{j-1}$, for each task $j$, and the processing time of the corresponding leaf player for task $j$ is $\ell_j=\alpha\cdot 2^{j-1}$, for some fixed $\alpha>0$ to be determined.
If the mechanism assigns all tasks to the root player, then the cost for this input is $$\sum_{j=1}^mr_j=2^m-1,$$ while the optimal cost is at most $$\bigg(\sum_{j=1}^{m-1}\ell^p_j+r^p_m\bigg)^{1/p}= \left(\frac{\alpha^p(2^{p(m-1)}-1)+2^{p(m-1)}(2^p-1)}{2^p-1}\right)^{1/p}.$$ Therefore the approximation ratio is at least $$\left(\frac{(2^m-1)^p(2^p-1)}{\alpha^p(2^{p(m-1)}-1)+2^{p(m-1)}(2^p-1)}\right)^{1/p},$$ which tends to the following value as $m$ tends to infinity: \begin{equation} \label{eq:1}\left(\frac{2^{p}\left(2^{p}-1\right)}{\alpha^{p}+2^{p}-1}\right)^{1/p}. \end{equation} Otherwise, let $X$ be the nonempty set of tasks assigned to the leaf players. Let $k$ be the task with the maximum index in $X$. Since it is processed by the leaf player, its processing time is $\ell_k=\alpha\cdot 2^{k-1}$. Now consider the input in which we change the processing times of the root player to $$r'_j= \begin{cases} 0 & j\not\in X \\ r_j+\epsilon & \text{ otherwise} \end{cases} $$ for some arbitrarily small $\epsilon>0$. By weak monotonicity (Lemma~\ref{lemma:tool}), the set of tasks assigned to the root player remains the same, and as a result the whole allocation stays the same. Therefore task $k$ is still assigned to the leaf player $k$ and the cost of the mechanism is at least $\left(\sum_{j\in X}\ell_j^p\right)^{1/p}$, while the optimal cost for this input is at most $\left(\sum_{j\in X\setminus\{k\}}\ell_j^p+r_k^p\right)^{1/p}$. Therefore the approximation ratio is at least $$\frac{\left(\sum_{j\in X}\ell_j^p\right)^{1/p}}{\left(\sum_{j\in X\setminus\{k\}}\ell_j^p+r_k^p\right)^{1/p}}\geq \frac{\left(\sum_{j=1}^k\ell_j^p\right)^{1/p}}{\left(\sum_{j=1}^{k-1}\ell_j^p+r_k^p\right)^{1/p}}=\left(\frac{\alpha^{p}\left(2^{pk}-1\right)}{\alpha^{p}\left(2^{p(k-1)}-1\right)+2^{p(k-1)}\left(2^{p}-1\right)}\right)^{1/p}. $$ For large $k$ this tends to \begin{equation} \label{eq:2} \frac{2\alpha}{\left(\alpha^p+2^p-1\right)^{1/p}}.
\end{equation} By setting $\alpha=(2^{p}-1)^{1/p}$, both (\ref{eq:1}) and (\ref{eq:2}) become equal to $2^{1-1/p}$ and the theorem follows. \end{proof} Before we proceed to give a lower bound for the case of $p<1$, we point out that all known mechanisms perform much worse than the Hybrid Mechanism. \begin{theorem} \label{thm:local-norm} For minimizing the $L^p$-norm on stars, all local mechanisms, including affine minimizers and task-independent mechanisms, have approximation ratio of at least $m^{\frac{1}{2}(1-1/p)}=(n-1)^{\frac{1}{2}(1-1/p)},$ when $p\geq 1$. \end{theorem} Observe that for $p=1$ the VCG mechanism is optimal, but for large $p$ the inefficiency of all local mechanisms grows and tends to $\sqrt{m}$. The proof is essentially the same as that of the corresponding lower bound in Section~\ref{sec:scheduling} and we omit it. We now give a lower bound for all mechanisms for the case of $p<1$. Notice that the approximation ratio tends to infinity as $p$ tends to 0. \begin{theorem} \label{thm:lower-bound-stars-small-p} For any $0<p\leq 1$ and every $a> 1$, there is no deterministic mechanism for stars that can achieve an approximation ratio better than \begin{align} \min \bigg\{a, \frac{(a+1)^{1/p}}{a^{1/p}+a}\bigg\}. \end{align} By selecting an appropriate $a$, this is $\Omega(p^{-1} / \ln(p^{-1}))$. \end{theorem} \begin{proof} Consider an instance with two tasks, where the root has costs $r_1=a^{1/p}, r_2=a$, and the leaves have costs $\ell_1=\infty,\ell_2=1$. Observe that in this instance, the optimum is to assign both tasks to the root player, with a total cost of $a^{1/p}+a$. Any other allocation has total cost at least $(a+1)^{1/p}$, which gives an approximation ratio of at least $(a+1)^{1/p}/(a^{1/p}+a)$. However, if the mechanism assigns both tasks to the root, then consider the instance produced by setting $r'_1=0,r'_2=a-\epsilon$.
By applying the monotonicity lemma (Lemma~\ref{lemma:tool}), the allocation remains the same, with a cost of $a-\epsilon$, while the optimum is 1. Letting $\epsilon$ go to 0, we get a ratio of $a$. To show that this bound is almost proportional to $1/p$, we show that for $a>1$ and $q= 1/p \geq e^2$, we have \begin{align*} \min\{(a+1)^q/(a^q+a), a\} = \Omega(q / \ln q ). \end{align*} Indeed we have \begin{align*} (a+1)^q/(a^q+a) & \geq (a+1)^q/(2a^q) \\ &\approx \frac{1}{2} e^{q/a}. \end{align*} If $a \leq q / \ln q$, the above is at least $\frac{1}{2} e^{q/a} \geq \frac{1}{2} q \geq q / \ln q$. \end{proof} The lower bound given by the above theorem does not match the upper bound of Theorem~\ref{thm:upper-bound-min-lp}. For example, for $p=1/2$, the theorem above gives a lower bound of $\varphi\approx 1.618$, while the upper bound is equal to 2. \subsection{Maximizing the $L^p$-norm (Auctions)} In this subsection, we illustrate that the Hybrid Mechanism has good performance for the maximization problem as well. \begin{definition}[Hybrid Mechanism for the $L^p$-Norm (maximization version)] Consider an instance of the Unrelated Graph Balancing problem on a star of $n$ nodes and set of tasks $M$. Let \begin{align} \label{eq:max3} S\in \arg\max_{T\subseteq M}\bigg\{r(T)+\bigg(\sum_{i\not \in T} \ell^p_i\bigg)^{1/p}\bigg\}. \end{align} The mechanism assigns $S$ to the root and the remaining tasks to the leaves. Ties are broken in a deterministic way (e.g., lexicographically). \end{definition} Note that for $p=1$, the Hybrid Mechanism coincides with the VCG mechanism. \begin{lemma} The Hybrid Mechanism for maximizing the $L^p$-norm on stars has approximation ratio of at most $2^{(p-1)/p}$, when $p\geq 1$. \end{lemma} We omit the proof, which is similar to the proof for the minimization version and is based on applying the triangle and Jensen inequalities. However, unlike the minimization version, there is an even better mechanism for the star when $p\geq 2$.
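For the maximization version, the $2^{(p-1)/p}$ guarantee of the lemma above can likewise be checked by exhaustive search on small stars. The following Python sketch (ours; purely illustrative, with hypothetical names) implements the argmax rule of \eqref{eq:max3} and compares the optimum against the mechanism's value.

```python
import itertools
import random

def lp_value(S, r, l, p):
    """Objective (r(S)^p + sum_{i not in S} l_i^p)^{1/p} when subset S goes to the root."""
    root = sum(r[i] for i in S)
    leaves = sum(l[i] ** p for i in range(len(r)) if i not in S)
    return (root ** p + leaves) ** (1 / p)

def hybrid_max(r, l, p):
    """Subset chosen by the maximization Hybrid Mechanism:
    argmax_T  r(T) + (sum_{i not in T} l_i^p)^{1/p}."""
    m = len(r)
    return max(
        (frozenset(T) for k in range(m + 1) for T in itertools.combinations(range(m), k)),
        key=lambda T: sum(r[i] for i in T)
        + sum(l[i] ** p for i in range(m) if i not in T) ** (1 / p),
    )

# Random small instances: the optimum never exceeds 2^{(p-1)/p} times the mechanism's value.
random.seed(1)
p = 4.0
for _ in range(200):
    m = random.randint(1, 6)
    r = [random.uniform(0.1, 5) for _ in range(m)]
    l = [random.uniform(0.1, 5) for _ in range(m)]
    subsets = [frozenset(T) for k in range(m + 1) for T in itertools.combinations(range(m), k)]
    opt = max(lp_value(S, r, l, p) for S in subsets)
    alg = lp_value(hybrid_max(r, l, p), r, l, p)
    assert opt <= 2 ** ((p - 1) / p) * alg + 1e-9
```

As before, this only restates the proven guarantee on random instances; it is not an independent verification of the lemma.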
\begin{definition}[All-or-Nothing Mechanism] Consider an instance of the Unrelated Graph Balancing problem on a star of $n$ nodes and set of tasks $M$. If \begin{align} \label{eq:max4} r(M) \geq \bigg(\sum_{i\in M} \ell^p_i\bigg)^{1/p}, \end{align} then the mechanism assigns $M$ to the root; otherwise it assigns all tasks to the leaves. \end{definition} \begin{lemma} The All-or-Nothing Mechanism for the star is truthful and has approximation ratio of at most $2^{1/p}$. \end{lemma} \begin{proof} Let $S^*$ be the subset assigned to the root in the optimal allocation, let $OPT$ be the optimal objective value, and let $ALG$ be the value achieved by the All-or-Nothing Mechanism. Then we have \begin{align*} OPT & = \bigg(r(S^*)^p+\sum_{i\not \in S^*} \ell^p_i\bigg)^{1/p} \leq \bigg(2\max\bigg\{r(S^*)^p,\sum_{i\not\in S^*} \ell^p_i\bigg\}\bigg)^{1/p} \leq \bigg(2\max\bigg\{r(M)^p,\sum_{i\in M} \ell^p_i\bigg\}\bigg)^{1/p} = 2^{1/p}ALG. \end{align*} \end{proof} If we use the Hybrid Mechanism for $1\leq p\leq 2$ and the All-or-Nothing Mechanism for $p>2$, we get a mechanism with approximation ratio $\min\{2^{1/p},2^{(p-1)/p}\}\leq \sqrt{2}$. We do not know whether this bound is tight, but below we show a weaker lower bound for the case of $p=2$. \begin{theorem} \label{thm:lower-bound-stars-lp-max} For $p=2$, there is no deterministic mechanism for stars that can achieve an approximation ratio better than $1.05$. \end{theorem} \begin{proof} For the sake of exposition, we use simple values; optimizing them could lead to a slightly higher ratio. Consider an instance with two tasks, where the root has costs $r_1=1, r_2=1/3$, and the leaves have costs $\ell_1=0,\ell_2=1$. Observe that in this instance, the optimum is to assign the first task to the root and the second to its leaf, for a total cost of $\sqrt{2}$. If the mechanism assigns both tasks to the root, its cost is $4/3$ and the ratio is higher than $1.05$. Therefore it must be the case that the root takes only the first task.
In that case, consider the instance produced by setting $r'_1=3,\, r'_2=1/3-\epsilon$. By applying the weak monotonicity lemma (Lemma~\ref{lemma:tool}), the allocation remains the same, with a cost of $\sqrt{10}$. The optimum is $10/3 - \epsilon$, which yields a ratio higher than $1.05$ as $\epsilon$ goes to 0. \end{proof} \section{Preliminaries} \label{sec:preliminaries} \noindent\textbf{Scheduling.} In the classical \emph{unrelated machines scheduling} problem there is a set $N$ of $n$ machines and a set $M$ of $m$ tasks that need to be scheduled on the machines. The input is given by a nonnegative matrix $t=(t_{ij})_{n\times m}$: machine $i$ needs time $t_{ij}\in \mathbb R_{\geq 0}$ to process task $j,$ and her \emph{costs} are additive, i.e., the processing time of machine $i$ for a set of tasks $X_i\subseteq M$ is $t_i(X_i):=\sum_{j\in X_i} t_{ij}.$ The objective is to minimize the makespan (min-max objective). An allocation to all machines $X=(X_1,X_2,\ldots ,X_n)$ (which is a partition of $M$) can also be denoted by the characteristic matrix $x=(x_{ij})$, where $x_{ij}=1$ if $j\in X_i,$ and $x_{ij}=0$ otherwise. The current work essentially considers a special case of unrelated scheduling, in which every task can be processed by two designated machines. The tasks can thus be modelled by the edges of a graph, and the associated problem is also known as \emph{Unrelated Graph Balancing}. More formally, in the Unrelated Graph Balancing problem, there is a given undirected graph $G=(V,E)$; the vertices correspond to a set of machines $N=V$ and the edges to a set of tasks $M=E.$ Each job $e\in E$ can be processed only by its two incident vertices, which in general have different processing times $t_i(e)$ and $t_{i'}(e)$ for it. The goal is to assign (direct) each edge $e=(i,i')$ of the graph, i.e., to allocate the corresponding task, to one of its incident vertices (machines).
The \emph{completion time} of each vertex $i$ is then the total processing time of the jobs $X_i$ assigned to it $t_i(X_i)=\sum_{e\in X_i}t_{i}(e)$. The objective is to find an allocation that minimizes the {\em makespan}, i.e. the maximum completion time over all vertices. \noindent\textbf{Mechanism design setting.} We assume that each machine $i\in N$ is controlled by a selfish agent that is reluctant to process the tasks and the cost function $t_i$ is private information (also called the {\em type} of agent $i$). A \emph{mechanism} asks the agents to report \emph{(bid)} their types $t_i,$ and based on the collected bids it allocates the jobs, and gives payments to the agents. A player may report a false cost function $b_i\neq t_i$, if this serves her interests. Formally, a mechanism $(X,P)$ consists of two parts: \begin{description} \item[An allocation algorithm:] The allocation algorithm $X$ allocates the tasks to the machines depending on the players' bids $b=(b_1,\ldots ,b_n)$. We denote by $X_i(b)$ the subset of tasks assigned to machine $i$ in the bid profile $b.$ \item[A payment scheme:] The payment scheme $P=(P_1,\ldots,P_n)$ determines the payments also depending on the bid values $b.$ The functions $P_1,\ldots,P_n$ stand for the payments that the mechanism hands to each agent. \end{description} The {\em utility} $u_i$ of a player $i$ is the payment that she gets minus the {\em actual} time that she needs to process the set of tasks assigned to her, $u_i(b)=P_i(b)-t_i(X_i(b))$. We are interested in \emph{truthful} mechanisms. A mechanism is truthful, if for every player, reporting his true type is a \emph{dominant strategy}. 
Formally, $$u_i(t_i,b_{-i})\geq u_i(t'_i,b_{-i}),\qquad \forall i\in N,\;\; t_i,t'_i\in \mathbb R_{\geq 0}^m, \;\; b_{-i}\in \mathbb R_{\geq 0}^{(n-1)\times m},$$ where $b_{-i}$ denotes the reported bid vectors of all players other than $i.$ We look for \emph{truthful} mechanisms whose allocation algorithm has a \emph{low approximation ratio} for the makespan, irrespective of the running time needed to compute $X$ and $P.$ In other words, our lower bounds are information-theoretic and do not take into account computational issues. A useful characterization of truthful mechanisms in terms of the following monotonicity condition helps us get rid of the payments and focus on the properties of the allocation algorithm. \begin{definition}[Weak Monotonicity] \label{def:wmon} An allocation algorithm $X$ is called {\em weakly monotone (WMON)} if it satisfies the following property: for every two inputs $t=(t_i,t_{-i})$ and $t'=(t'_i,t_{-i})$, the associated allocations $X$ and $X'$ satisfy $t_i(X_i)-t_i(X'_i)\leq t'_i(X_i)-t'_i(X'_i).$ \end{definition} It is well known that the allocation function of every truthful mechanism is WMON~\cite{BCR+06}, and also that this condition is sufficient for truthfulness in convex domains~\cite{SY05}. The following lemma was essentially shown in \cite{NR01} and has been a useful tool for proving lower bounds for truthful mechanisms in several variants (see for example \cite{ChrKouVid09,MS07}). \begin{lemma}\label{lemma:tool} Let $t$ be a bid vector, and let $S=X_i(t)$ be the subset assigned to player $i$ by a weakly monotone allocation $X.$ Let $t'=(t'_i,t_{-i})$ be a bid vector such that only the bid of machine $i$ has changed, and in such a way that the bid for every task in $S$ has decreased (i.e., $t'_{ij}< t_{ij}, j\in S$) and the bid for every other task has increased (i.e., $t'_{ij}> t_{ij}, j\in M\setminus S$). Then the mechanism does not change the allocation to machine $i$, i.e., $X_i(t')=X_i(t)=S$.
\end{lemma} In general, when the values of a machine change, the allocation of the other machines may change, this issue being the pivotal difficulty of truthful unrelated scheduling. Allocation algorithms that ``promise'' not to change the allocation of other machines as long as changing (only) $t_i$ does not affect the set $X_i,$ are less problematic. These allocation rules are called \emph{local} in \cite{NR01}, where it is shown that local truthful mechanisms cannot have a better than $n$ approximation. \begin{definition}[Local mechanisms]\label{def:local} A mechanism is {\em local} if for every $i\in N$, for every $t_{-i}$, and $t_i,t'_i$ for which $X_i(t_i,t_{-i})=X_i(t'_i,t_{-i})$ also holds that $X_j(t_i,t_{-i})=X_j(t'_i,t_{-i})\quad (\forall j\in N).$ \end{definition} There are several special classes of mechanisms that satisfy this property, perhaps the most prominent one is the class of \emph{affine minimizers} (see, e.g., \cite{ChrKouVid09}). \subsection{Related Work} The Nisan-Ronen conjecture~\cite{NR01} has become one of the central problems in Algorithmic Game Theory, and despite intensive efforts it remains open. The original paper showed that no truthful deterministic mechanism can achieve an approximation ratio better than $2$ for two machines, which was later improved to $2.41$~\cite{ChrKouVid09} for three machines, and finally to $2.618$ \cite{KV07} which was the best known bound for over a decade. Recent progress improved this bound to $2.755$~\cite{GiannakopoulosH20}, to $3$ \cite{DS20} and finally to the first non-constant lower bound of $1+\sqrt{n-1}$~\cite{CKK20b}. The best known upper bound is $n$~\cite{NR01}. The purely algorithmic problem of makespan minimization on unrelated machines is one of the most important scheduling problems. The seminal paper of Lenstra, Shmoys and Tardos~\cite{lenstra1990approximation}, gave a $2$-approximation algorithm, and also showed that it is NP-hard to approximate within a factor of $3/2$. 
Closing this gap has remained open for 30 years, and is considered one of the most important open questions in scheduling. In this work we consider the design of truthful mechanisms for the {\em Unrelated Graph Balancing} problem, a special but quite rich case of the unrelated machines problem in which each task can be assigned to only two machines; it was previously studied by Verschae and Wiese~\cite{VerschaeW14}. This can be formulated as a graph problem: given an undirected (multi)graph $G=(V,E)$, each vertex corresponds to a machine and each edge corresponds to a task. The goal is to allocate (direct) each edge to one of its nodes, in a way that minimizes the maximum (weighted) in-degree. The special case of this problem where both directions of an edge correspond to the same processing time $t(e)$ is known as Graph Balancing, and was introduced by Ebenlendr, Krc\'al, and Sgall~\cite{EKS14}, who gave a $1.75$-approximation algorithm and also demonstrated that the problem retains the hardness of the unrelated machines problem by showing that it is NP-hard to approximate within a factor better than $3/2$. \input{further}
Q: Touchpad starts working with delay I'm using Ubuntu 17.10 on an Asus ZenBook UX430. I'm having a problem with the touchpad. On startup, or more generally after I log in, the touchpad won't work for about 5 to 10 seconds. After that delay, it starts working normally. (I just noticed that if I wait on the login screen without typing the password, the touchpad starts working after 5 to 10 seconds, and keeps working after logging in.)
Volker Beck, born in December 1960 in Stuttgart, is a German politician and a member of Alliance 90/The Greens. Biography Education He obtained his secondary school diploma in 1980 and studied art history, history and literature at the University of Stuttgart. Activism Volker Beck became involved in the peace movement in the 1980s. He is also one of Germany's best-known gay rights activists, serving from 1991 to 2004 as spokesman of the Lesben- und Schwulenverband in Deutschland (LSVD, the Lesbian and Gay Federation in Germany). Political career From 1987 to 1990, he was a parliamentary assistant to the Greens' group in the Bundestag for gay and lesbian affairs. In the 1994 federal elections he was elected to the Bundestag, representing the state of North Rhine-Westphalia, and was re-elected in 1998, 2002, 2005, 2009 and 2013. From 2002 to 2013 he was first parliamentary secretary of the Alliance 90/The Greens parliamentary group. He initiated the law on registered partnerships in Germany. Between 2001 and 2004 he was also the lead negotiator of the immigration law. He strongly influenced anti-terrorism legislation after September 11, 2001. In 2006 he travelled to Moscow to support the holding of the banned Gay Pride march; there he was assaulted by opponents of the demonstration. His last day in parliament was the day of the vote in favour of same-sex marriage, for which he had fought all his life. Since then he has been a lecturer at the Center for Religious Studies (CERES) at the Ruhr University Bochum. Personal life From 1992 he lived with Jacques Teyssier, a French LGBT activist, between Cologne, Paris and Berlin. In 2008 they entered into a registered partnership. Jacques Teyssier died in 2009. In 2017 he married the architect Adrian Petkov.
Honours He was made a Knight of the Order of Merit of the Federal Republic of Germany by Federal President Johannes Rau for his work on compensation for the victims of Nazism. This distinction had been suggested by the Jewish organizations the Jewish Claims Conference and the Central Council of Jews in Germany. He has received several awards from gay rights movements in Germany, Poland and the United States. In 2015 the Central Council of Jews in Germany awarded him the Leo Baeck Prize for defending Judaism in German society. References External links
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,966
\section{Introduction} Over the decades, research on dynamic optimization for stochastic large-population systems has received extensive attention. In contrast to single-agent models, a large-population system comprises a considerable number of agents and arises broadly in engineering, finance, social science and other fields. In this context, the effect of any single agent is negligible, while the collective behavior of the whole population cannot be ignored. All the agents are weakly coupled via the state average or empirical distribution in their dynamics and/or cost functionals. For these reasons, the centralized strategies of a given agent, which are based on the information of all peers, are infeasible. An efficient approach is to study the associated mean-field game (MFG) and determine an approximate equilibrium by analyzing the related limiting behavior. Roughly speaking, the main idea is to let each agent solve a control problem involving only its own individual state and some off-line quantities. The introduction of such frozen quantities reduces the highly coupled problem to a decoupled one, which is in essence a stochastic optimal control problem. Consequently, by well-known results such as the stochastic maximum principle and the dynamic programming principle, decentralized strategies are obtained that rely only on local information. To complete the procedure, the off-line quantities are computed from consistency condition (CC) systems. MFG problems were originally formulated by Lasry and Lions \cite{LarsyLion20061,LarsyLion20062,LarsyLion2007} and independently by Huang, Malham\'e and Caines \cite{Huang2006,Huang2007}.
Indeed, motivated by phenomena in statistical mechanics and physics, Lasry and Lions \cite{LarsyLion20061,LarsyLion20062,LarsyLion2007} (see also Cardaliaguet \cite{Cardaliaguet2010}) were concerned with situations that involved a large number of rational players and obtained distributed closed-loop strategies, which were represented by the forward-backward systems of a Hamilton-Jacobi-Bellman (HJB) equation and a Fokker-Planck equation. Huang, Malham\'e and Caines \cite{Huang2006,Huang2007} introduced the Nash Certainty Equivalence (NCE) methodology to handle MFG problems from the perspective of engineering applications, which led to decentralized control synthesis. For further research on MFG problems and related topics, the readers are referred to the papers, e.g., \cite{Buckdahn2009,Huang2012,Cardaliaguet2013,Carmona2013, Bardi2014,Moon2017,Hu2018,Nie2018,Li2020,Wang2020} and the monographs by Bensoussan \cite{Bensoussan2013} and by Carmona and Delarue \cite{Carmona2018}. We mention that almost all of the works listed above are built on the assumption that agents have access to full information. However, in many real applications, only partial information can be acquired by agents. The research on partial information has far-reaching theoretical significance and extensive application value, and thus large-population problems with partial information have attracted researchers' intensive attention. For example, Huang, Caines and Malham\'e \cite{Caines2006} considered dynamic games in a large population of stochastic agents, where agents had local noisy measurements of their own states. Huang and Wang \cite{Huang2016} studied a class of dynamic optimization problems for large-population systems with partial information. \c{S}en and Caines \cite{Caines2016} investigated MFG theory with a partially observed major agent in both linear and nonlinear formulations. Huang, Wang and Wu \cite{Wang2016} considered the MFG problem for backward stochastic systems with partial information.
\c{S}en and Caines \cite{Caines2019} studied a partially observed stochastic large-population problem with nonlinear dynamics and nonlinear cost functionals. Bensoussan, Feng and Huang \cite{Bensoussan2019} focused on a class of linear-quadratic-Gaussian (LQG) MFGs with partial observation and common noise. For comparison, another relevant but essentially different topic is the mean-field type control problem, and some associated works on partial-information mean-field control are described below. For example, Wang, Zhang and Zhang \cite{Wang2013} established a stochastic maximum principle for mean-field type optimal control problems with partial information. Buckdahn, Li and Ma \cite{Buckdahn2017} explored a new type of mean-field, non-Markovian stochastic control problems with partial observations. For further studies, the readers can see \cite{Hafayed2015,Ma2016,Fu2019,NieYan2022,Wang2017} and the references therein. It is widely acknowledged that control constrained problems appear frequently in the fields of finance and economics. For example, in many financial models, control variables have some restrictions, such as taking only nonnegative values. A typical example is the mean-variance portfolio selection problem with no-shorting, see, e.g., Li, Zhou and Lim \cite{Li2002}. Moreover, control constrained problems have been applied to many other fields, such as aeronautics, artificial intelligence and network communication. As for the control constrained stochastic LQ control problem with random coefficients, Hu and Zhou \cite{Hu2005} derived the explicit optimal control and optimal cost through two extended stochastic Riccati equations. Pu and Zhang \cite{Pu2019} generalized it to the infinite time horizon case. Recently, Hu, Shi and Xu \cite{Hu2020} explored a control constrained stochastic LQ control problem with regime-switching, which can better reflect a random environment.
Concerning the stochastic large-population problem with control constraints, to the best of our knowledge, it is a relatively new topic and few works give explicit decentralized strategies. Indeed, Hu, Huang and Li \cite{Hu2018} studied a class of control constrained stochastic large-population problems with uniform minor agents, where the individual control was constrained in a closed convex set. Hu, Huang and Nie \cite{Nie2018} investigated a class of LQG mixed MFGs with heterogeneous input constraints, which involved a major agent and numerous heterogeneous minor agents. In \cite{Hu2018,Nie2018}, the decentralized strategies are given explicitly through projection operators and a CC system, which is a kind of nonlinear mean-field forward-backward stochastic differential equation (MF-FBSDE) with projection operator. See also \cite{ZhangLi2019} for extended works. The current paper focuses on a class of general stochastic large-population problems with partial information, where the diffusion term of the dynamics of each agent can depend on both the state and the control. To illustrate our motivations, we present the following example, which generalizes the model of Carmona, Fouque and Sun \cite{Carmona2015}. \begin{example}\label{example}$($General Inter-Bank Borrowing and Lending Problem$)$ Consider a model consisting of $N$ banks, which can lend to and borrow from each other. Meanwhile, there exist cash flows between each bank and the central bank. We denote by $x_i(\cdot)$ the log-monetary reserve of bank $i$, which lends to and borrows from the other banks, and by $u_i(\cdot)$ the corresponding rate of borrowing from or lending to the central bank.
Suppose the evolution of $x_i(\cdot)$ is described by $($see \cite{Carmona2015} for some simple cases$)$ \begin{equation*} \left\{ \begin{aligned} dx_i(t)=&[Ax_i(t)+\frac{a}{N}\underset{j=1}{\overset{N}{\sum}}(x_j(t)-x_i(t))+Bu_i(t)+b]dt\\ &+[Cx_i(t)+Du_i(t)+\sigma]dW_i(t)+[\widetilde{D}u_i(t)+\widetilde{\sigma}]d\widetilde{W}_i(t),\\ x_i(0)=&~x, \end{aligned} \right. \end{equation*} where $\{W_i(\cdot), 1\leq i \leq N\}$ and $\{\widetilde{W}_i(\cdot), 1\leq i \leq N\}$ are two $N$-dimensional mutually independent standard Brownian motions. Here, $W_i$ and $\widetilde{W}_i$, $1\leq i \leq N$, are introduced to represent the noises of bank $i$. Due to practical phenomena such as the physical inaccessibility of some parameters, inaccuracies in measurement and the discreteness of account information, bank $i$ can only observe $W_i$. The average term $\frac{a}{N}\sum_{j=1}^N(x_j(\cdot)-x_i(\cdot))$ with $a\geq 0$ characterizes the interaction, which represents the rate of bank $i$ borrowing from or lending to other banks. Let $x^{(N)}(\cdot)=\frac{1}{N}\sum_{j=1}^N x_{j}(\cdot)$, then we have \begin{equation} \left\{ \begin{aligned}\label{exstate} dx_i(t)=&[(A-a)x_i(t)+Bu_i(t)+ax^{(N)}(t)+b]dt\\ &+[Cx_i(t)+Du_i(t)+\sigma]dW_i(t)+[\widetilde{D}u_i(t)+\widetilde{\sigma}]d\widetilde{W}_i(t),\\ x_i(0)=&~x. \end{aligned} \right. \end{equation} For $1\leq i \leq N$, bank $i$ controls its rate of borrowing from or lending to the central bank by choosing $u_i(\cdot)$ to minimize \begin{equation}\label{excost} \mathcal{J}_i(u(\cdot))=\frac{1}{2}\mathbb{E}\Big\{\int_0^T[\epsilon(x_i(t)-x^{(N)}(t))^2+ru_i(t)^2]dt +c(x_i(T)-x^{(N)}(T))^2\Big\}, \end{equation} where $u(\cdot)=(u_1(\cdot),\ldots,u_N(\cdot))$ and $\epsilon$, $r$ and $c$ are constants with $\epsilon,c\geq 0$, $r>0$. The quadratic terms $(x_i(\cdot)-x^{(N)}(\cdot))^2$ in the running and terminal costs penalize departure from the average, and the quadratic term $u_i(\cdot)^2$ represents the cost of the control process.
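To see the mean-field coupling in \eqref{exstate} concretely, the dynamics can be simulated with an Euler--Maruyama scheme. The following Python sketch is an illustration only: the constant borrowing rule $u_i\equiv\bar{u}$ and all parameter values are hypothetical choices, not part of the model.

```python
import numpy as np

def simulate_banks(N=50, T=1.0, steps=200, x0=1.0, A=0.1, a=0.5, B=1.0, b=0.0,
                   C=0.1, D=0.2, sigma=0.3, Dt=0.2, sigmat=0.3,
                   u_bar=0.0, seed=0):
    """Euler-Maruyama simulation of the inter-bank dynamics (exstate).

    Each bank i follows
      dx_i = [(A - a) x_i + B u_i + a x^(N) + b] dt
             + [C x_i + D u_i + sigma] dW_i + [Dt u_i + sigmat] dWt_i,
    with the constant (hypothetical) control u_i = u_bar.
    Returns an (N, steps + 1) array of sample paths.
    """
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = np.full((N, steps + 1), x0)
    for k in range(steps):
        xbar = x[:, k].mean()                      # state average x^(N)
        drift = (A - a) * x[:, k] + B * u_bar + a * xbar + b
        dW = rng.normal(0.0, np.sqrt(dt), N)       # observable noise W_i
        dWt = rng.normal(0.0, np.sqrt(dt), N)      # unobservable noise W~_i
        x[:, k + 1] = (x[:, k] + drift * dt
                       + (C * x[:, k] + D * u_bar + sigma) * dW
                       + (Dt * u_bar + sigmat) * dWt)
    return x
```

When all noise loadings vanish, the banks remain identical, so the cross-sectional spread is exactly zero and each path reduces to the deterministic mean dynamics; with noise, the mean-reversion rate $a$ pulls reserves toward the average.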
Here, $u_i(\cdot)>0$ $($resp. $u_i(\cdot)<0$$)$ means that bank $i$ borrows from $($resp. lends to$)$ the central bank. Generally, the central bank does not need to absorb funding from local banks, which suggests that $u_i(\cdot)$ often takes positive values. Consequently, the general inter-bank borrowing and lending problem gives rise to a class of control constrained LQ large-population problems with partial information. \end{example} Inspired by the above discussions, we consider a class of LQ large-population problems with partial information, where both the control constrained case and the control unconstrained case are studied. It should be noted that Carmona, Fouque and Sun \cite{Carmona2015} studied an inter-bank borrowing and lending model, which was by nature an LQ large-population problem without control constraints, and they obtained the decentralized strategies in the form of state feedback via a Riccati equation. As a generalization, we consider a general problem as in Example \ref{example}, where $x_i(\cdot)$ and $u_i(\cdot)$ both affect the drift and diffusion terms. Moreover, the partial information structure and control constraints are also taken into consideration. Using the Hamiltonian approach, we can give the explicit decentralized strategies in the control constrained case. When the control constraint is removed as in \cite{Carmona2015,Huang2016}, by using the Riccati approach, we can represent the decentralized strategies as feedback of the filtered state. To make the readers quickly grasp the spirit of large-population problems, we begin with comparisons between our large-population problem and the classical stochastic differential games considered in many works, see, e.g., \cite{Moon2020, NieWangYu2021,Yu2015} and the references therein. From the framework setting, all players in a classical differential game control a common state system, while in our large-population problem each player possesses its own individual state.
Furthermore, one usually looks for the Nash equilibrium in a classical differential game, while the large-population problem results in approximate ones, since it is neither implementable nor efficient to solve the large-population problem directly due to the highly complex interactions among individuals. Alternatively, the MFG provides one effective approach to determine an approximate equilibrium. In fact, the corresponding limiting problem is essentially a class of LQ stochastic optimal control problems. We mention that even for the case without control constraints, our results are not a trivial generalization of \cite{Carmona2015, Huang2016}. In fact, compared with \cite{Carmona2015}, our stochastic large-population problems are in the framework of partial information. Although \cite{Huang2016} gave some results for stochastic large-population problems with partial information, they cannot be applied to our case, since our diffusion term can depend on both the state and the control. The structure of partial information and the dependence of our diffusion term on the state and control require a subtle Riccati approach (as explained in the main contributions of the paper listed below). The main contributions of this paper can be summarized as follows. \begin{itemize} \item A general stochastic large-population problem with partial information is introduced. In the control constrained case, by using the Hamiltonian approach and convex analysis, the decentralized strategies can be given explicitly. The Hamiltonian type CC system turns out to be a new kind of MF-FBSDE with projection operator, where both the expectation term and the conditional expectation term appear. The well-posedness of such an equation is proved by the discounting method, which is used to construct a contraction mapping. \item In the control unconstrained case, by using the Riccati approach, the decentralized strategies can be further represented explicitly as feedback of the filtered state.
The optimal filtering equation, meanwhile, is established. \item A new Riccati type CC system is given (see \eqref{RCC}), which is not of the classical form. The well-posedness of this Riccati type CC system is obtained through some subtle analysis. In fact, the presence of $\widetilde{D}(\cdot)u_i(\cdot)$ in the unobservable diffusion term brings essential difficulties to the solvability of the Riccati type CC system. The first difficulty is that the additional term $\widetilde{D}^{\top}(\cdot)P(\cdot)\widetilde{D}(\cdot)$ cannot be combined with the original term $D^{\top}(\cdot)P(\cdot)D(\cdot)$ into a quadratic form (see \eqref{r1}), thus the first equation (the equation for $P(\cdot)$) in system \eqref{RCC} is not a standard Riccati equation and the existing results cannot be applied. We use a modified iterative method (see \eqref{P0} and \eqref{iterative}) and mathematical induction to prove the existence of a solution. The second difficulty is that the equation for $\Lambda(\cdot)$ in system \eqref{RCC} is not a standard Riccati equation either, since the inequality $\delta P(\cdot)-Q(\cdot)\geq 0$ usually fails. To overcome this difficulty, inspired by Yong \cite{Yong2013}, we transform the solvability of $\Lambda(\cdot)$ into the solvability of another Riccati equation, which looks quite different from the transformed Riccati equation in \cite{Yong2013} due to the additional term $\widetilde{D}^{\top}(\cdot)P(\cdot)\widetilde{D}(\cdot)$. It is interesting that we can use some algebraic inequalities (see, e.g., inequality \eqref{positive}) to obtain the well-posedness of our new transformed Riccati equation. \end{itemize} The rest of this paper is organized as follows. We formulate the LQ large-population problems with partial information in section \ref{sec:pre}. In section \ref{sec:cc}, we obtain the explicit decentralized strategies. Subsection \ref{subsec:1} is devoted to the study of the control constrained case.
By using the Hamiltonian approach, we obtain the explicit decentralized strategies through the Hamiltonian type CC system, which is a new kind of MF-FBSDE. The well-posedness result for this MF-FBSDE is given and the corresponding $\varepsilon$-Nash equilibrium property is also verified. In subsection \ref{subsec:2}, we study the case without control constraints; by the Riccati approach, the decentralized strategies can be further represented explicitly as feedback of the filtered state. Subsection \ref{subsec:3} is devoted to showing the existence and uniqueness of a solution to the Riccati type CC system. In section \ref{sec:app}, the motivating example (Example \ref{example}) is solved. Section \ref{sec:con} concludes the paper. The well-posedness of a general kind of MF-FBSDEs is presented in Appendix A. \section{Problem Formulation}\label{sec:pre} Consider a large-population system which is composed of $N$ agents $\mathcal{A}_i$, $1\leq i \leq N$. For a fixed $T>0$, let $(\Omega, \mathcal{F},\{\mathcal{F}_t\}_{0\leq t \leq T},\mathbb{P})$ be a complete filtered probability space satisfying the usual conditions, on which we define two $N$-dimensional mutually independent standard Brownian motions $\{W_i(t), 1\leq i \leq N\}_{0\leq t\leq T}$ and $\{\widetilde{W}_i(t), 1\leq i \leq N\}_{0\leq t\leq T}$. Assume that $\mathcal{F}_t$ is the natural filtration generated by $\{W_i(s),\widetilde{W}_i(s),0\leq s \leq t, 1\leq i \leq N\}$ augmented by all $\mathbb{P}$-null sets $\mathcal{N}$. Moreover, let $\mathcal{F}_t^i=\sigma\{W_i(s),\widetilde{W}_i(s),0\leq s \leq t\}\vee \mathcal{N}$, $\mathcal{G}_t=\sigma\{W_i(s),0\leq s \leq t, 1\leq i \leq N\}\vee \mathcal{N}$ and $\mathcal{G}_t^i=\sigma\{W_i(s),0\leq s \leq t\}\vee \mathcal{N}$. Throughout the paper, we denote by $\mathbb{R}^n$ the $n$-dimensional Euclidean space, with the usual norm and the usual inner product given by $|\cdot|$ and $\langle\cdot,\cdot\rangle$, respectively. For any vector or matrix, the superscript $\top$ denotes its transpose.
$\mathcal{S}^d$ represents the set of $d\times d$-dimensional symmetric matrices, $\mathcal{S}_{+}^d$ represents the set of $d\times d$-dimensional symmetric matrices which are semi-positive definite, and $I$ denotes the identity matrix. For any matrix $M\in \mathbb{R}^{n\times d}$, let $|M|=\sqrt{\textup{tr}(M^{\top}M)}$ denote its norm, where $\textup{tr}(M^{\top}M)$ stands for the trace of $M^{\top}M$. If $M\in\mathcal{S}^d$ and $M\geq(>) 0$, we say that $M$ is semi-positive (positive) definite. If $M\in\mathcal{S}^d$ and $M>kI$ for some positive constant $k$, we write $M\gg0$. For any Euclidean space $\mathbb{M}$ and any filtration $\mathcal{V}$, if $h(\cdot):[0,T]\rightarrow \mathbb{M}$ is continuous, we denote $h(\cdot)\in C([0,T];\mathbb{M})$; if $h(\cdot):[0,T]\rightarrow \mathbb{M}$ is uniformly bounded, we denote $h(\cdot)\in L^\infty(0,T;\mathbb{M})$; if $h(\cdot):\Omega\times[0,T]\rightarrow \mathbb{M}$ is a $\mathcal{V}_t$-adapted process s.t. $\mathbb{E}\int_0^T|h(t)|^2dt<\infty$, we denote $h(\cdot)\in L_{\mathcal{V}_t}^2(0,T;\mathbb{M})$. Suppose the state of the $i$-th agent $\mathcal{A}_i$ satisfies the following linear stochastic differential equation (SDE) \begin{equation}\label{state} \left\{ \begin{aligned} dx_{i}(t)=&~[A(t)x_{i}(t)+B(t)u_{i}(t)+F(t)x^{(N)}(t)+b(t)]dt \\ &+[C(t)x_{i}(t)+D(t)u_{i}(t)+H(t)x^{(N)}(t)+\sigma (t)]dW_{i}(t) \\ &+[\widetilde{C}(t)x_{i}(t)+\widetilde{D}(t)u_{i}(t)+\widetilde{H}(t)x^{(N)}(t)+\widetilde{\sigma}(t)]d\widetilde{W}_{i}(t), \\ x_{i}(0)=&~x, \end{aligned} \right. \end{equation} where $x\in\mathbb{R}^n$ is the initial value, and $x_i(\cdot)$ and $u_i(\cdot)$ denote the state process and the control process, respectively. Moreover, we denote $x^{(N)}(\cdot):=\frac{1}{N}\sum_{i=1}^N x_{i}(\cdot)$, which characterizes the state average of all agents.
The corresponding coefficients $A(\cdot),B(\cdot),F(\cdot),b(\cdot)$,\\$C(\cdot),D(\cdot),H(\cdot),\sigma(\cdot), \widetilde{C}(\cdot),\widetilde{D}(\cdot),\widetilde{H}(\cdot),\widetilde{\sigma}(\cdot)$ satisfy appropriate assumptions given later. Let $\Gamma\subseteq \mathbb{R}^m$ be a nonempty closed convex set. We define the centralized strategy set as $\mathcal{U}_{ad}^{c}=\{u_{i}(\cdot )~|~u_{i}(\cdot )\in L_{\mathcal{G}_{t}}^{2}(0,T;\Gamma )\}$ and the decentralized strategy set as $\mathcal{U}_{ad}^{d,i}=\{u_{i}(\cdot )~|~u_{i}(\cdot )\in L_{\mathcal{G}_{t}^{i}}^{2}(0,T;\Gamma )\},~1\leq i \leq N$. Obviously, for $1\leq i \leq N$, $\mathcal{G}_t\subseteq\mathcal{F}_t$ and $\mathcal{G}_t^i\subseteq\mathcal{F}_t^i$, which means that our problem is in the setting of partial information. For simplicity, let $u(\cdot)=(u_1(\cdot),\ldots,u_N(\cdot))$ be the set of strategies of all agents and $u_{-i}(\cdot)=(u_1(\cdot),\ldots,u_{i-1}(\cdot),u_{i+1}(\cdot),\ldots,u_N(\cdot))$ be the set of strategies except for the $i$-th agent. The cost functional of $\mathcal{A}_i$ is given by \begin{equation} \begin{aligned} &\mathcal{J}_{i}(u_{i}(\cdot ),u_{-i}(\cdot ))\\ =&\frac{1}{2}\mathbb{E}\Big\{\int_{0}^{T}\big[\langle Q(t)(x_{i}(t)-x^{(N)}(t)),x_{i}(t)-x^{(N)}(t)\rangle +\langle R(t)u_{i}(t),u_{i}(t)\rangle \big]dt \\ &\qquad\qquad+\langle G(x_{i}(T)-x^{(N)}(T)),x_{i}(T)-x^{(N)}(T)\rangle \Big\}\label{cost}. \end{aligned} \end{equation} Now, we aim to formulate the large-population problem with partial information.
\textbf{Problem (LP)} To choose a strategy profile $\bar{u}(\cdot)=(\bar{u}_{1}(\cdot),\ldots ,\bar{u}_{N}(\cdot))$, where $\bar{u}_i(\cdot)\in \mathcal{U}_{ad}^{c} $, such that \begin{equation*} \mathcal{J}_{i}(\bar{u}_{i}(\cdot ),\bar{u}_{-i}(\cdot ))=\underset{u_{i}\left( \cdot \right) \in \mathcal{U}_{ad}^{c}}{\inf }\mathcal{J}_{i}(u_{i}(\cdot ),\bar{u}_{-i}(\cdot )),~~~1\leq i\leq N, \text{~~subject to \eqref{state} and \eqref{cost}.} \end{equation*} For $1\leq i\leq N$, if there exists such a $\bar{u}_i(\cdot)$ satisfying the above relationship, it is called a Nash equilibrium of Problem (LP). We denote the corresponding optimal state by $\bar{x}_i(\cdot)$. For further study, we provide the definition of an $\varepsilon$-Nash equilibrium. \begin{definition} The strategy profile $\bar{u}(\cdot)=(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$, where $\bar{u}_i(\cdot)\in \mathcal{U}_{ad}^c$, $1\leq i \leq N$, is called an $\varepsilon$-Nash equilibrium with respect to the costs $\mathcal{J}_i$, if there exists an $\varepsilon\geq 0$ such that for any $1\leq i \leq N$, \begin{equation*} \mathcal{J}_{i}(\bar{u}_{i}(\cdot ),\bar{u}_{-i}(\cdot ))\leq \mathcal{J}_{i}(u_{i}(\cdot ),\bar{u}_{-i}(\cdot ))+\varepsilon, \end{equation*} where $u_{i}(\cdot )\in \mathcal{U}_{ad}^c$ is any alternative strategy applied by $\mathcal{A}_i$. \end{definition} \begin{remark} Due to the coupling structure, the centralized strategies $(i.e.~u_i(\cdot)\in \mathcal{U}_{ad}^c )$, which need the information of other agents, are difficult to obtain in the noncooperative game. In this paper, we use the MFG method to derive the decentralized strategies $(i.e.~u_i(\cdot)\in \mathcal{U}_{ad}^{d,i})$, which turn out to satisfy the $\varepsilon$-Nash equilibrium property. \end{remark} We introduce the following assumptions on the coefficients. When no confusion arises, we will omit the dependence on time $t$.
\textup{(H1)} The coefficients of the state equation satisfy \begin{equation*} A,C,\widetilde{C},F,H,\widetilde{H}\in L^{\infty }(0,T;\mathbb{R}^{n\times n}),\quad b,\sigma ,\widetilde{\sigma }\in L^{\infty }(0,T;\mathbb{R}^{n}),\quad B,D,\widetilde{D}\in L^{\infty }(0,T;\mathbb{R}^{n\times m}). \end{equation*} \textup{(H2)} The coefficients of the cost functional satisfy \begin{equation*} Q\in L^{\infty }(0,T;\mathcal{S}^{n}),\quad R\in L^{\infty }(0,T;\mathcal{S}^{m}),\quad G\in \mathcal{S}^{n},\quad Q\geq 0,\quad R\gg0,\quad G\geq 0. \end{equation*} \begin{remark} We mention that (H1) and (H2) are not enough to solve an LQ stochastic classical differential game problem. For example, \cite{NieWangYu2021} used the stochastic maximum principle to study the corresponding Nash equilibrium, where a complex fully coupled FBSDE including one SDE and two BSDEs was involved. To guarantee the existence and uniqueness of a solution for this FBSDE, in addition to (H1)-(H2), some other assumptions (see Assumption 3 in \cite{NieWangYu2021}) were needed. However, the large-population problem is essentially different from the classical differential game, as explained in the introduction. The MFG provides one effective approach to determine an approximate equilibrium for the large-population problem and the corresponding limiting problem is an LQ stochastic optimal control problem (see Section \ref{sec:cc}). Based on these observations, assumptions (H1) and (H2) are enough to solve our problem in some sense. \end{remark} \section{$\varepsilon$-Nash Equilibrium for Problem (LP)}\label{sec:cc} As analyzed earlier, due to the highly complicated coupling structure, it is difficult to solve the large-population problem directly. In other words, it is intractable and inefficient to obtain the centralized strategies of Problem (LP). One possible approach is to adopt the MFG method to discuss the corresponding decentralized strategies.
In subsection \ref{subsec:1}, we use the Hamiltonian approach to obtain the explicit decentralized strategies in the control constrained case. The Hamiltonian type CC system is derived and its well-posedness is given. Moreover, we show that the decentralized strategies satisfy the $\varepsilon$-Nash equilibrium property. In subsection \ref{subsec:2}, we show that in the control unconstrained case, the decentralized strategies can be further represented explicitly as feedback of the filtered state through the Riccati approach. The well-posedness of the associated Riccati type CC system is explored. \subsection{Control Constrained Case: Hamiltonian Approach}\label{subsec:1} Recall that $\Gamma\subseteq\mathbb{R}^m$ is a nonempty closed convex set. When $\Gamma\neq\mathbb{R}^m$, Problem (LP) turns out to be a class of control constrained large-population problems with partial information. We will analyze the asymptotic behavior of the large-population system when the number of agents tends to infinity. To start with, we suppose that the coupling term $\bar{x}^{(N)}(\cdot)=\frac{1}{N}\sum_{i=1}^N \bar{x}_{i}(\cdot)$ is approximated by $l(\cdot)$, which is a frozen limiting term and will be determined later. We introduce an auxiliary limiting system defined as \begin{equation} \left\{ \begin{aligned} dz_{i}(t)=&~[A(t)z_{i}(t)+B(t)u_{i}(t)+F(t)l(t)+b(t)]dt \\ &+[C(t)z_{i}(t)+D(t)u_{i}(t)+H(t)l(t)+\sigma (t)]dW_{i}(t) \\ &+[\widetilde{C}(t)z_{i}(t)+\widetilde{D}(t)u_{i}(t)+\widetilde{H}(t)l(t)+\widetilde{\sigma }(t)]d\widetilde{W}_{i}(t), \\ z_{i}(0)=&~x,\label{lstate} \end{aligned} \right. \end{equation} and the limiting cost functional is given by \begin{equation} \begin{aligned} J_i(u_{i}(\cdot )) =&\frac{1}{2}\mathbb{E}\Big\{\int_{0}^{T}\big[\langle Q(t)(z_{i}(t)-l(t)),z_{i}(t)-l(t)\rangle +\langle R(t)u_{i}(t),u_{i}(t)\rangle \big]dt \\ &\qquad\qquad+\langle G(z_{i}(T)-l(T)),z_{i}(T)-l(T)\rangle \Big\}\label{lcost}.
\end{aligned} \end{equation} Then, we formulate the limiting large-population problem with partial information as follows. \textbf{Problem (LLP)} For each agent $\mathcal{A}_i$, $1\leq i \leq N$, to find $\bar{u}_{i}(\cdot )\in \mathcal{U}_{ad}^{d,i}$ such that \begin{equation*} J_{i}(\bar{u}_{i}(\cdot ))=\underset{u_{i}(\cdot )\in \mathcal{U}_{ad}^{d,i}}{\inf }J_{i}(u_{i}(\cdot )), \text{\qquad subject to \eqref{lstate} and \eqref{lcost}.} \end{equation*} If there exists $\bar{u}_{i}(\cdot )\in \mathcal{U}_{ad}^{d,i}$ satisfying the above relationship, it is called the decentralized strategy, and the corresponding $\bar{z}_i(\cdot)$ denotes the optimal decentralized trajectory. With the help of the frozen limiting term, the original large-population problem (LP) is transformed into a decoupled stochastic LQ control problem with partial information, which can be addressed with classical methods of optimal control and filtering techniques.\\ Indeed, we define the Hamiltonian function $\mathcal{H}_i:\Omega\times[0,T]\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n\times\Gamma\rightarrow\mathbb{R}$ by \begin{equation*} \begin{aligned} &\mathcal{H}_{i}(t,p_{i},k_{i},\widetilde{k}_{i},z_{i},u_{i}) =\langle p_{i},Az_{i}+Bu_{i}+Fl+b\rangle +\langle k_{i},Cz_{i}+Du_{i}+Hl+\sigma \rangle \\ &\qquad\qquad \qquad+\langle \widetilde{k}_{i},\widetilde{C}z_{i}+\widetilde{D}u_{i}+\widetilde{H}l+\widetilde{\sigma }\rangle -\frac{1}{2}\langle Q(z_{i}-l),z_{i}-l\rangle -\frac{1}{2}\langle Ru_{i},u_{i}\rangle, \end{aligned} \end{equation*} and we introduce the following adjoint equation \begin{equation} \left\{ \begin{aligned} dp_{i}(t)=&-[A^{\top }(t)p_{i}(t)+C^{\top }(t)k_{i}\left( t\right) +\widetilde{C}^{\top }(t)\widetilde{k}_{i}\left( t\right) -Q(t)(\bar{z}_i\left( t\right) -l(t))]dt\\ & +k_{i}\left( t\right) dW_{i}(t)+\widetilde{k}_{i}\left( t\right) d\widetilde{W}_{i}(t), \\ p_{i}(T)=&-G(\bar{z}_{i}\left( T\right) -l(T)).\label{adjoint} \end{aligned}
\right. \end{equation} Then, we have the following maximum principle for Problem (LLP), whose proof is standard and thus omitted. \begin{theorem} Let \textup{(H1)} and \textup{(H2)} hold. For $1\leq i \leq N$ and any fixed $l(\cdot)$, suppose $\bar{u}_{i}(\cdot )$ is the decentralized strategy of Problem (LLP), $\bar{z}_{i}(\cdot )$ is the optimal decentralized trajectory, and $(\bar{p}_i(\cdot),\bar{k}_i(\cdot),\bar{\widetilde{k}}_i(\cdot))$ solves \eqref{adjoint}, then \begin{equation*} \mathbb{E}[\langle \frac{\partial \mathcal{H}_{i}}{\partial u_{i}}(t,\bar{p}_{i},\bar{k}_{i},\bar{\widetilde{k}}_{i},\bar{z}_{i},\bar{u}_{i}),u_{i}-\bar{u}_{i}(t)\rangle |\mathcal{G}_{t}^{i}]\leq 0,~~\text{for any}~u_{i}\in \Gamma,~\text{a.e.} ~t\in[0,T], ~\mathbb{P}\text{-a.s.} \end{equation*} \end{theorem} By the adaptedness of the strategies, we have \begin{equation*} \langle \mathbb{E}[B^{\top }(t)\bar{p}_{i}(t)+D^{\top }(t)\bar{k}_{i}(t)+\widetilde{D}^{\top }(t)\bar{\widetilde{k}}_{i}(t)-R(t)\bar{u}_{i}(t)|\mathcal{G}_{t}^{i}],u_{i}-\bar{u}_{i}(t)\rangle \leq 0, \end{equation*} and by using convex analysis, the decentralized strategies read as \begin{equation}\label{gcontrol} \bar{u}_{i}(t)=\mathbf{P}_{\Gamma }[R(t)^{-1}(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}]+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])], \end{equation} where $\mathbf{P}_{\Gamma }[\cdot]$ is the projection operator from $\mathbb{R}^m$ onto $\Gamma$ under the norm $||x||_{R}^{2}=\langle R^{\frac{1}{2}}x,R^{\frac{1}{2}}x\rangle$.
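For a concrete special case of \eqref{gcontrol}: when $\Gamma$ is a coordinate box and $R(t)$ is diagonal, the minimization defining $\mathbf{P}_\Gamma$ under $||\cdot||_R$ is separable across components, so the projection reduces to componentwise clipping regardless of the diagonal weights. The Python sketch below illustrates this case only; the vectors named p_hat, k_hat, kt_hat stand in for the conditional expectations (they would come from a filter in practice), and a general $\Gamma$ or non-diagonal $R$ would require a quadratic-programming solver instead.

```python
import numpy as np

def project_box_weighted(v, lo, hi):
    """Projection of v onto the box [lo, hi]^m under the diagonal weighted
    norm ||x||_R^2 = sum_j R_jj x_j^2.  The objective sum_j R_jj (u_j - v_j)^2
    is separable, so the minimizer is componentwise clipping, and the
    positive diagonal weights drop out."""
    return np.clip(v, lo, hi)

def decentralized_control(R_diag, B, D, Dt, p_hat, k_hat, kt_hat,
                          lo=0.0, hi=np.inf):
    """Sketch of the feedback rule (gcontrol) in the diagonal-R / box case:
    u = P_Gamma[ R^{-1} (B^T p_hat + D^T k_hat + Dt^T kt_hat) ],
    where p_hat, k_hat, kt_hat play the role of E[p|G^i], E[k|G^i], E[k~|G^i]."""
    v = (B.T @ p_hat + D.T @ k_hat + Dt.T @ kt_hat) / R_diag
    return project_box_weighted(v, lo, hi)
```

With $\Gamma$ the nonnegative orthant (lo = 0, hi = $\infty$), the projection is simply the positive part, which matches the one-sided borrowing constraint of Example \ref{example}.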
Moreover, the stochastic Hamiltonian system can be rewritten as the following nonlinear MF-FBSDE with projection operator: \begin{equation} \left\{ \begin{aligned} d\bar{z}_{i}(t)=&\{A(t)\bar{z}_{i}(t)+B(t)\mathbf{P}_{\Gamma }[R(t)^{-1}(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}] \\ &+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])]+F(t)l(t)+b(t)\}dt\\ &+\{C(t)\bar{z}_{i}(t)+D(t)\mathbf{P}_{\Gamma }[R(t)^{-1}(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}] \\ &+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])]+H(t)l(t)+\sigma (t)\}dW_{i}(t)\\ &+\{\widetilde{C}(t)\bar{z}_{i}(t)+\widetilde{D}(t)\mathbf{P}_{\Gamma }[R(t)^{-1}(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}] \\ &+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])]+\widetilde{H}(t)l(t)+\widetilde{\sigma }(t)\}d\widetilde{W}_{i}(t), \\ d\bar{p}_{i}(t)=&-[A^{\top }(t)\bar{p}_{i}(t)+C^{\top }(t)\bar{k}_{i}\left( t\right) +\widetilde{C}^{\top }(t)\bar{\widetilde{k}}_{i}\left( t\right) -Q(t)(\bar{z}_{i}\left( t\right) -l(t))]dt\\ &+\bar{k}_{i}\left( t\right) dW_{i}(t)+\bar{\widetilde{k}}_{i}\left( t\right) d\widetilde{W}_{i}(t), \\ \bar{z}_{i}(0)=&~x,~\bar{p}_{i}(T)=-G(\bar{z}_{i}\left( T\right) -l(T)).\label{Hamiltion} \end{aligned} \right. \end{equation} When $N\rightarrow\infty$, we would like to approximate $x_i(\cdot)$ by $z_i(\cdot)$, and thus $\frac{1}{N}\sum_{i=1}^N x_{i}(\cdot)$ is approximated by $\frac{1}{N}\sum_{i=1}^N z_{i}(\cdot)$. Recall that $z_i(\cdot)$ and $z_j(\cdot)$ are independent and identically distributed (i.i.d.) for $1\leq i,j\leq N$, $i\neq j$.
Consequently, by the strong law of large numbers, it should follow that \begin{equation}\label{limit} l(\cdot)=\underset{N\rightarrow \infty }{\lim }\frac{1}{N}\overset{N}{\underset{i=1}{\sum }}\bar{z}_{i}(\cdot )=\mathbb{E}[\bar{z}_{i}(\cdot )]. \end{equation} Thus, by substituting \eqref{limit} into \eqref{Hamiltion}, we derive the following Hamiltonian type CC system, which is also a nonlinear MF-FBSDE with projection operator: \begin{equation} \left\{ \begin{aligned}\label{CC} d\bar{z}_{i}(t)=&\{A(t)\bar{z}_{i}(t)+B(t)\mathbf{P}_{\Gamma }[R(t)^{-1}(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}] \\ &+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])]+F(t)\mathbb{E}[\bar{z}_{i}(t)]+b(t)\}dt\\ &+\{C(t)\bar{z}_{i}(t)+D(t)\mathbf{P}_{\Gamma }[R(t)^{-1}(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}] \\ &+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])]+H(t)\mathbb{E}[\bar{z}_{i}(t)]+\sigma (t)\}dW_{i}(t)\\ &+\{\widetilde{C}(t)\bar{z}_{i}(t)+\widetilde{D}(t)\mathbf{P}_{\Gamma }[R(t)^{-1}(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}] \\ &+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])]+\widetilde{H}(t)\mathbb{E}[\bar{z}_{i}(t)]+\widetilde{\sigma }(t)\}d\widetilde{W}_{i}(t), \\ d\bar{p}_{i}(t)=&-\{A^{\top }(t)\bar{p}_{i}(t)+C^{\top }(t)\bar{k}_{i}\left( t\right) +\widetilde{C}^{\top }(t)\bar{\widetilde{k}}_{i}\left( t\right) -Q(t)(\bar{z}_{i}\left( t\right)-\mathbb{E}[\bar{z}_{i}(t)])\}dt\\ &+\bar{k}_{i}\left( t\right) dW_{i}(t)+\bar{\widetilde{k}}_{i}\left( t\right) d\widetilde{W}_{i}(t), \\ \bar{z}_{i}(0)=&~x,~\bar{p}_{i}(T)=-G(\bar{z}_{i}\left( T\right) -\mathbb{E}[\bar{z}_{i}(T)]). \end{aligned} \right.
\end{equation}
\begin{remark}
The well-posedness of FBSDEs is of great significance in stochastic optimal control problems. The reader is referred to \cite{Antonelli1993} for the local case and to \cite{Ma1994,Hu1995,Peng1999,Pardoux1999,Ma1999,Ma2015}, etc., for global results. There is also a sizable literature on the well-posedness of MF-FBSDEs; see, for example, \cite{Bensoussan2015,Ahuja2019,Carmona2018,Hu2018,Nie2018}. We emphasize that our MF-FBSDE \eqref{CC} is not covered by the works listed above, since it involves the expectation term $\mathbb{E}[\cdot]$, the conditional expectation term $\mathbb{E}[\cdot|\mathcal{G}_t^i]$ and the projection operator; thus its well-posedness in the global case is not obvious. Let us compare our MF-FBSDE with its counterparts in the most relevant works \cite{Hu2018,Nie2018}. In \cite{Hu2018}, the authors studied a class of control constrained LQ MFGs where the diffusion term of each agent contains only the control variable and a nonhomogeneous part; hence, for the well-posedness of their MF-FBSDE with projection operator, the monotonicity condition holds and the continuation method works. In contrast, in our work the diffusion term of each individual agent depends on both the state and the control, so the monotonicity condition fails and the continuation method does not apply. Moreover, we mention that \cite{Hu2018} and \cite{Nie2018} are both concerned with control constrained large-population problems in the full information case. By comparison, we place ourselves in the framework of partial information, which causes some essential differences between our MF-FBSDE for the CC condition and the previous ones in \cite{Hu2018} and \cite{Nie2018}. In fact, our MF-FBSDE simultaneously involves the expectation term, the conditional expectation term and the projection operator, so additional effort is needed to handle the simultaneous presence of expectation and conditional expectation.
\end{remark}
However, inspired by \cite{Nie2018}, we can still use the discounting method proposed by \cite{Pardoux1999} to establish the well-posedness of a class of general MF-FBSDEs $($see \eqref{MF} below$)$ which includes \eqref{CC} as a special case. To keep the paper self-contained, we study this class of general MF-FBSDEs in the following. Let $(\Omega, \mathcal{F},\mathbb{P})$ be a complete filtered probability space satisfying the usual conditions, and let $W$ and $\widetilde{W}$ be two $d$-dimensional mutually independent standard Brownian motions. Denote by $\mathcal{F}_t^{W,\widetilde{W}}$ (resp. $\mathcal{F}_t^W$) the natural filtration generated by $\{W(s), \widetilde{W}(s), 0\leq s \leq t \}$ (resp. $\{W(s),0\leq s \leq t \}$) and augmented by all $\mathbb{P}$-null sets. Let us consider the following MF-FBSDE
\begin{equation}\label{MF}
\left\{
\begin{aligned}
dX(t)=&~b(t,\Theta(t))dt+\sigma(t,\Theta(t)) dW(t) +\widetilde{\sigma}(t,\Theta(t))d\widetilde{W}(t), \\
dY(t)=&-f(t,\Theta(t))dt+Z(t)dW(t)+\widetilde{Z}(t)d\widetilde{W}(t),\\
X(0)=&~x,~Y(T)=~g(X(T),\mathbb{E}[X(T)]),
\end{aligned}
\right.
\end{equation}
where $\Theta(t)=(X(t),\mathbb{E}[X(t)],Y(t),\mathbb{E}[Y(t)|\mathcal{F}^W_t],Z(t),\mathbb{E}[Z(t)|\mathcal{F}^W_t], \widetilde{Z}(t),\mathbb{E}[\widetilde{Z}(t)|\mathcal{F}^W_t])$. Here, $X(\cdot)$, $Y(\cdot)$, $Z(\cdot)$, $\widetilde{Z}(\cdot)$ take values in $\mathbb{R}^n$, $\mathbb{R}^m$, $\mathbb{R}^{m\times d}$, $\mathbb{R}^{m\times d}$, respectively.
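As a purely illustrative aside, the mean-field coupling in \eqref{MF} (the terms of the form $\mathbb{E}[X(t)]$ entering the coefficients) can be mimicked numerically by a particle system in which the empirical average replaces the expectation, in the spirit of \eqref{limit}. The following toy scalar sketch is not the model above; the coefficients `a`, `c`, `b`, `s` and all numeric values are illustrative assumptions:

```python
# Particle approximation of a toy scalar mean-field SDE
#   dX = (a*X + c*E[X] + b) dt + s dW,
# where the particle average stands in for E[X(t)], as in (limit).
# All coefficient values are illustrative assumptions.
import math
import random

def simulate_particles(n, a=-1.0, c=0.3, b=0.5, s=0.3,
                       x0=1.0, T=1.0, steps=100, seed=0):
    """Euler-Maruyama for n interacting particles; returns terminal states."""
    rng = random.Random(seed)
    dt = T / steps
    x = [x0] * n
    for _ in range(steps):
        mean = sum(x) / n                      # empirical proxy for E[X(t)]
        x = [xi + (a * xi + c * mean + b) * dt
             + s * rng.gauss(0.0, math.sqrt(dt)) for xi in x]
    return x

def particle_mean(n):
    """Average of the terminal particle states, approximating E[X(T)]."""
    x = simulate_particles(n)
    return sum(x) / n

def scheme_mean(a=-1.0, c=0.3, b=0.5, x0=1.0, T=1.0, steps=100):
    """Euler iterates of the exact mean dynamics m' = (a + c)*m + b."""
    dt, m = T / steps, x0
    for _ in range(steps):
        m += ((a + c) * m + b) * dt
    return m
```

For a few thousand particles, `particle_mean(n)` stays within Monte Carlo error of `scheme_mean()`, and the gap shrinks like $1/\sqrt{n}$, which is the heuristic behind replacing the state average by its expectation.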
The coefficients $b,\sigma,\widetilde{\sigma}$ and $f$ are defined on $\Omega\times[0,T]\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^m\times \mathbb{R}^{m\times d}\times \mathbb{R}^{m\times d}\times \mathbb{R}^{m\times d}\times \mathbb{R}^{m\times d}$, such that $b(\cdot,\cdot,x,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})$, $\sigma(\cdot,\cdot,x,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})$, $\widetilde{\sigma}(\cdot,\cdot,x,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})$ and $f(\cdot,\cdot,x,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})$ are all $\mathcal{F}_t^{W,\widetilde{W}}$-progressively measurable processes, for any fixed $(x,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})$. The coefficient $g$ is defined on $\Omega\times \mathbb{R}^n \times \mathbb{R}^n$ and $g(\cdot,x,\alpha)$ is $\mathcal{F}_T^{W,\widetilde{W}}$-measurable, for any fixed $(x,\alpha)$. Suppose that the coefficients $b,\sigma,\widetilde{\sigma},f$ and $g$ are all continuous with respect to $(x,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})$. To guarantee the existence and uniqueness of a solution to \eqref{MF}, we impose the following assumptions on the coefficients. For any $t,x,x_{i},y,y_{i},\alpha,\alpha_i,\beta,\beta_i,z,z_i,\gamma,\gamma_i,\tilde{z},\tilde{z}_i,\tilde{\gamma},\tilde{\gamma}_i$, $i=1,2$, we denote $\Delta \phi=\phi_1-\phi_2$, where $\phi=x,y,\alpha,\beta,\gamma,z,\tilde{z},\tilde{\gamma}$.
\textup{(A1)} There exist $\lambda _{1},\lambda _{2}\in \mathbb{R}$ such that
\begin{equation*}
\begin{aligned}
\langle b(t,x_{1},\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})-b(t,x_{2},\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma}),\Delta x\rangle \leq& \lambda _{1}|\Delta x|^{2},\\
\langle f(t,x,\alpha,y_1,\beta,z,\gamma,\tilde{z},\tilde{\gamma})-f(t,x,\alpha,y_2,\beta,z,\gamma,\tilde{z},\tilde{\gamma}),\Delta y\rangle \leq &\lambda _{2}|\Delta y|^{2}.
\end{aligned} \end{equation*} \textup(A2) There exist positive constants $\rho$ and $\rho_i,\mu_i$, $i=1,2,\ldots,7$, such that \begin{equation*} \begin{aligned} |b(t,x,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})|\leq& |b(t,0,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})|+\rho (1+|x|),\\ |f(t,x,\alpha,y,\beta,z,\gamma,\tilde{z},\tilde{\gamma})|\leq& |f(t,x,\alpha,0,\beta,z,\gamma,\tilde{z},\tilde{\gamma})|+\rho (1+|y|), \end{aligned} \end{equation*}% \qquad\qquad and \begin{equation*} \begin{aligned} &|b(t,x,\alpha_1,y_1,\beta_1,z_1,\gamma_1,\tilde{z}_1,\tilde{\gamma}_1)-b(t,x,\alpha_2,y_2,\beta_2,z_2,\gamma_2,\tilde{z}_2,\tilde{\gamma}_2)|\\ \leq &~\rho _{1}|\Delta\alpha|+\rho _{2}|\Delta y|+\rho _{3}|\Delta\beta|+\rho _{4}|\Delta z|+\rho _{5}|\Delta \gamma|+\rho _{6}|\Delta\tilde{z}|+\rho _{7}|\Delta\tilde{\gamma}|,\\ &|f(t,x_1,\alpha_1,y,\beta_1,z_1,\gamma_1,\tilde{z}_1,\tilde{\gamma}_1)-f(t,x_2,\alpha_2,y,\beta_2,z_2,\gamma_2,\tilde{z}_2,\tilde{\gamma}_2)|\\ \leq &~\mu_1|\Delta x|+\mu_{2}|\Delta \alpha|+\mu _{3}|\Delta\beta|+\mu _{4}|\Delta z| +\mu_{5}|\Delta\gamma|+\mu _{6}|\Delta\tilde{z}|+\mu _{7}|\Delta\tilde{\gamma}|. 
\end{aligned} \end{equation*}% \textup(A3) There exist positive constants $w_i$ and $\kappa_i$, $i=1,2,\ldots,8$, such that \begin{equation*} \begin{aligned} &|\sigma (t,x_1,\alpha_1,y_1,\beta_1,z_1,\gamma_1,\tilde{z}_1,\tilde{\gamma}_1)-\sigma (t,x_2,\alpha_2,y_2,\beta_2,z_2,\gamma_2,\tilde{z}_2,\tilde{\gamma}_2)|^{2}\\ \leq &~ w_1^{2}|\Delta x|^{2}+w_2^{2}|\Delta\alpha|^2+w_3^2|\Delta y|^2+w_4^2|\Delta\beta|^2+w_5^2|\Delta z|^2+w_6^2|\Delta \gamma|^2+w_7^2|\Delta\tilde{z}|^2+w_8^2|\Delta\tilde{\gamma}|^2,\\ &|\widetilde{\sigma} (t,x_1,\alpha_1,y_1,\beta_1,z_1,\gamma_1,\tilde{z}_1,\tilde{\gamma}_1)-\widetilde{\sigma} (t,x_2,\alpha_2,y_2,\beta_2,z_2,\gamma_2,\tilde{z}_2,\tilde{\gamma}_2)|^{2}\\ \leq &~ \kappa_1^{2}|\Delta x|^{2}+\kappa_2^{2}|\Delta\alpha|^2+\kappa_3^2|\Delta y|^2+\kappa_4^2|\Delta \beta|^2+\kappa_5^2|\Delta z|^2+\kappa_6^2|\Delta\gamma|^2+\kappa_7^2|\Delta\tilde{z}|^2+\kappa_8^2|\Delta\tilde{\gamma}|^2. \end{aligned} \end{equation*}% \textup(A4) There exist positive constants $\rho_8$ and $\rho_9$ such that \begin{equation*} |g(x_1,\alpha_1)-g(x_2,\alpha_2)|^2\leq \rho_8^2 |\Delta x|^2+ \rho_9^2|\Delta\alpha|^2. \end{equation*} \textup{(A5)} For $\textbf{0}=(0,0,0,0,0,0,0,0)$, it holds $ \mathbb{E}\int_{0}^{T}(|b(t,\textbf{0})|^{2}+|\sigma (t,\textbf{0})|^{2}+|\widetilde{\sigma} (t,\textbf{0})|^{2}+|f(t,\textbf{0})|^{2})dt+\mathbb{E}|g(0,0)|^{2}<+\infty. $ \smallskip Then, we can formulate the well-posedness result for \eqref{MF}, whose proof is given in Appendix \ref{appendix}. \begin{theorem}\label{wellposeness} Let \textup{(A1)}-\textup{(A5)} hold. 
Then there exists a constant $\theta_0$, depending on $\mu_i$ $(i=1,\ldots,7)$, $\rho_1,\rho_8,\rho_9$, $w_1,w_2$, $\kappa_1,\kappa_2$, $\lambda_1,\lambda_2$ and $T$, such that if $\rho_j,w_{\tau},\kappa_{\tau}\in[0,\theta_0)$ for $j=2,\ldots,7$ and $\tau=3,\ldots,8$, then there exists a unique adapted solution $(X(\cdot),Y(\cdot),Z(\cdot),\widetilde{Z}(\cdot))\in L_{\mathcal{F }_{t}^{W,\widetilde{W}}}^{2}(0,T;\mathbb{R}^{n}\times \mathbb{R}^{m}\times \mathbb{R}^{m\times d}\times \mathbb{R}^{m\times d})$ to MF-FBSDE \eqref{MF}. Moreover, if $2(\lambda_1+\lambda_2)<-2\rho_1-2\mu_3-(\mu_4+\mu_5)^2-(\mu_6+\mu_7)^2-w_1^2-w_2^2-\kappa_1^2-\kappa_2^2$, then there exists a constant $\theta_1$, depending on $\mu_i$ $(i=1,\ldots,7)$, $\rho_1,\rho_8,\rho_9$, $w_1,w_2$, $\kappa_1,\kappa_2$ and $\lambda_1,\lambda_2$, but not on $T$, such that if $\rho_j,w_{\tau},\kappa_{\tau}\in[0,\theta_1)$ for $j=2,\ldots,7$ and $\tau=3,\ldots,8$, then there exists a unique adapted solution $(X(\cdot),Y(\cdot),Z(\cdot),\widetilde{Z}(\cdot))\in L_{\mathcal{F }_{t}^{W,\widetilde{W}}}^{2}(0,T;\mathbb{R}^{n}\times \mathbb{R}^{m}\times \mathbb{R}^{m\times d}\times \mathbb{R}^{m\times d})$ to MF-FBSDE \eqref{MF}.
\end{theorem}
By applying Theorem \ref{wellposeness}, we obtain the following well-posedness result for MF-FBSDE \eqref{CC}.
\begin{theorem}\label{Hwellposedness}
Let $\lambda ^{\ast }$ be the maximum eigenvalue of the matrix $\frac{A+A^{\top}}{2}$ and assume that $4\lambda ^{\ast }<-2|F|-6|C|^{2}-6|\widetilde{C}|^{2}-5|H|^{2}-5|\widetilde{H}|^{2}$. Then there exists a constant $\theta_1>0$ independent of $T$, which may depend on $\lambda^{\ast}$, $|C|$, $|\widetilde{C}|$, $|F|$, $|H|$, $|\widetilde{H}|$, $|Q|$, $|G|$, such that if $|B|$, $|D|$, $|\widetilde{D}|$, $|R^{-1}|\in[0,\theta_1)$, then there exists a unique adapted solution $(\bar{z}_i(\cdot),\bar{p}_i(\cdot),\bar{k}_i(\cdot),\bar{\widetilde{k}}_i(\cdot))\in L_{\mathcal{F }_{t}^{W,\widetilde{W}}}^{2}(0,T;\mathbb{R}^{n}\times \mathbb{R}^{m}\times \mathbb{R}^{m\times d}\times \mathbb{R}^{m\times d})$ to MF-FBSDE \eqref{CC}.
\end{theorem}
\begin{proof}
Note that MF-FBSDE \eqref{CC} is a special case of \eqref{MF}. Indeed, by choosing
\begin{equation*}
\begin{aligned}
&\lambda _{1} =\lambda _{2}=\lambda ^{\ast },\rho =|A|,\rho _{1}=|F|,\rho _{2}=\rho _{4}=\rho_6=0,\rho_3=|B|^2|R^{-1}|,\\
&\rho_5=|B||R^{-1}||D|,\rho_7=|B||R^{-1}||\widetilde{D}|,\rho_8^2=\rho_9^2=2|G|^2,\mu_1=\mu_2=|Q|, \\
&\mu_3=\mu_5=\mu_7=0,\mu_4=|C|,\mu_6=|\widetilde{C}|, w_1^2=5|C|^2,w_2^2=5|H|^2,\\
&w_3=w_5=w_7=0,w_4^2=5|D|^2|R^{-1}|^2|B|^2,w_6^2=5|D|^2|R^{-1}|^2|D|^2,\\
&w_8^2=5|D|^2|R^{-1}|^2|\widetilde{D}|^2,\kappa_1^2=5|\widetilde{C}|^2, \kappa_2^2=5|\widetilde{H}|^2,\kappa_3=\kappa_5=\kappa_7=0,\\
&\kappa_4^2=5|\widetilde{D}|^2|R^{-1}|^2|B|^2,\kappa_6^2=5|\widetilde{D}|^2|R^{-1}|^2|D|^2,\kappa_8^2=5|\widetilde{D}|^2|R^{-1}|^2|\widetilde{D}|^2,
\end{aligned}
\end{equation*}
the assumptions \textup{(A1)}-\textup{(A5)} are satisfied. By applying Theorem \ref{wellposeness}, the well-posedness of \eqref{CC} follows.
\hfill$\square$
\end{proof}
So far, we have discussed Problem (LLP), the limiting problem associated with Problem (LP), and we have obtained the candidate decentralized strategy profile $\bar{u}(\cdot)=(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$, where $\bar{u}_i(\cdot)$ is given by \eqref{gcontrol} and $(\bar{z}_i(\cdot),\bar{p}_i(\cdot),\bar{k}_i(\cdot),\bar{\tilde{k}}_i(\cdot))$ solves \eqref{CC}. From Theorem \ref{Hwellposedness}, we know that $\bar{u}(\cdot)$ is well defined. Next, we verify that $\bar{u}(\cdot)$ is indeed an $\varepsilon$-Nash equilibrium of Problem (LP). To this end, suppose that $\bar{x}_i(\cdot)$ is the centralized state w.r.t. $\bar{u}_i(\cdot)$, and $\bar{z}_i(\cdot)$ is the corresponding decentralized state. Then, we have the following estimates for the states (see Lemma \ref{averageerror}) and the cost functionals (see Lemma \ref{errorcost1}). In what follows, $K$ denotes a constant independent of $N$ and $i$ ($1\leq i\leq N$), which may vary from line to line.
\begin{lemma}\label{averageerror}
Let \textup{(H1)} and \textup{(H2)} hold. Then it follows that $($recall $\bar{x}^{(N)}(t)=\frac{1}{N}\overset{N}{\underset{i=1}{\sum }}\bar{x}_{i}(t)$$)$
\begin{equation}\label{statee}
\mathbb{E}\underset{0\leq t\leq T}{\sup }|\bar{x}^{(N)}(t)-l(t)|^{2}+\underset{1\leq i\leq N}{\sup }\mathbb{E}\underset{0\leq t\leq T}{\sup }|\bar{x}_{i}(t)-\bar{z}_{i}(t)|^{2}=O(\frac{1}{N}).
\end{equation} \end{lemma} \begin{proof} From \eqref{lstate} and \eqref{limit}, we have \begin{equation} \label{lequation} dl(t)=\{(A(t)+F(t))l(t)+B(t)\mathbb{E}[\bar{u}_i(t)]+b(t)\}dt, \qquad l(0)=x, \end{equation} and by recalling \eqref{state}, it holds that \begin{equation*} \left\{ \begin{aligned} d(\bar{x}^{(N)}(t)-l(t)) &=~\big\{(A(t)+F(t))(\bar{x}^{(N)}(t)-l(t))+B(t)(\frac{1% }{N}\overset{N}{\underset{i=1}{\sum }}\bar{u}_{i}(t)-\mathbb{E}[\bar{u}_{i}(t)])\big\}dt \\ &+\frac{1}{N}\overset{N}{\underset{i=1}{\sum }}[C(t)\bar{x}_{i}(t)+D(t)\bar{% u}_{i}(t)+H(t)\bar{x}^{(N)}(t)+\sigma (t)]dW_{i}(t) \\ &+\frac{1}{N}\overset{N}{\underset{i=1}{\sum }}[\widetilde{C}(t)\bar{x}_{i}(t)+\widetilde{D}(t)\bar{% u}_{i}(t)+\widetilde{H}(t)\bar{x}^{(N)}(t)+\widetilde{\sigma }(t)]d\widetilde{W}_{i}(t), \\ \bar{x}^{(N)}(0)-l(0)=&~0. \end{aligned}% \right. \end{equation*}% Since $\bar{u}_i(\cdot)$ and $\bar{u}_j(\cdot)$ are i.i.d (note $(\bar{p}_i(\cdot),\bar{k}_i(\cdot),\bar{\widetilde{k}}_i(\cdot))$ and $(\bar{p}_j(\cdot),\bar{k}_j(\cdot),\bar{\widetilde{k}}_j(\cdot))$ are i.i.d, for $i\neq j$), we have \begin{equation}\label{firstestimate} \mathbb{E}\int_{0}^{T}|\frac{1}{N}\overset{N}{\underset{i=1}{\sum }}\bar{u}% _{i}(t)-\mathbb{E}[\bar{u}_i(t)]|^{2}dt=\frac{1}{N^{2}}\overset{N}{\underset{i=1}{\sum }}\mathbb{E}\int_{0}^{T}|\bar{u}% _{i}(t)-\mathbb{E}[\bar{u}_i(t)]|^{2}dt \leq \frac{K}{N}=O(\frac{1}{N}). 
\end{equation}% Then by Burkholder-Davis-Gundy (BDG) inequality, it follows that \begin{equation} \begin{aligned} \mathbb{E}\underset{0\leq t\leq T}{\sup }|\bar{x}^{(N)}(t)-l(t)|^{2} &\leq K\mathbb{E}\int_{0}^{T}[|\bar{x}^{(N)}(t)-l(t)|^{2}+|\frac{1}{N}\overset{N% }{\underset{i=1}{\sum }}\bar{u}_{i}(t)-\mathbb{E}[\bar{u}_i(t)]|^{2}]dt \\ &+\frac{K}{N^{2}}\mathbb{E}\overset{N}{\underset{i=1}{\sum }}% \int_{0}^{T}|C(t)\bar{x}_{i}(t)+D(t)\bar{u}_{i}(t)+H(t)(\bar{x}% ^{(N)}(t)-l(t))+H(t)l(t)+\sigma (t)|^{2}dt \\ &+\frac{K}{N^{2}}\mathbb{E}\overset{N}{\underset{i=1}{\sum }}\int_{0}^{T}|\widetilde{C}(t)\bar{x}_{i}(t)+\widetilde{D}(t)\bar{% u}_{i}(t)+\widetilde{H}(t)(\bar{x}^{(N)}(t)-l(t))+\widetilde{H}(t)l(t)+\widetilde{% \sigma }(t)|^{2}dt. \end{aligned} \end{equation} From \eqref{state}, by applying Gronwall's inequality, one can prove $\mathbb{E}\underset{0\leq t\leq T}{\sup }\overset{N}{\underset{i=1}{\sum }} |\bar{x}_{i}(t)|^2=O(N) $, and thus $\mathbb{E}\underset{0\leq t\leq T}{\sup } |\bar{x}_{i}(t)|^2\leq K $. We can also show $\mathbb{E}\underset{0\leq t \leq T}{\sup}|l(t)|^2\leq K$ by noticing \eqref{lequation}. Then, we have \begin{equation}\label{secondest} \begin{aligned} &\frac{K}{N^{2}}\mathbb{E}\overset{N}{\underset{i=1}{\sum }}% \int_{0}^{T}|C(t)\bar{x}_{i}(t)+D(t)\bar{u}_{i}(t)+H(t)(\bar{x}% ^{(N)}(t)-l(t))+H(t)l(t)+\sigma (t)|^{2}dt \\ &+\frac{K}{N^{2}}\mathbb{E}\overset{N}{\underset{i=1}{\sum }}\int_{0}^{T}|\widetilde{C}(t)\bar{x}_{i}(t)+\widetilde{D}(t)\bar{% u}_{i}(t)+\widetilde{H}(t)(\bar{x}^{(N)}(t)-l(t))+\widetilde{H}(t)l(t)+\widetilde{% \sigma }(t)|^{2}dt\\ \leq &\frac{K}{N}\Big(1+\mathbb{E}\int_{0}^{T}|\bar{x}^{(N)}(t)-l(t)|^{2}dt\Big). \end{aligned} \end{equation}% Combining \eqref{firstestimate}-\eqref{secondest}, and applying Gronwall's inequality, we have $ \mathbb{E}\underset{0\leq t\leq T}{\sup }|\bar{x}^{(N)}(t)-l(t)|^{2}=O(\frac{1}{N}). 
$ Moreover, by recalling \eqref{state} and \eqref{lstate}, from standard estimates of SDE, we can obtain $\underset{1\leq i\leq N}{\sup }\mathbb{E}\underset{0\leq t\leq T}{\sup }|% \bar{x}_{i}(t)-\bar{z}_{i}(t)|^{2}=O(\frac{1}{N})$. \hfill$\square$ \end{proof} \begin{lemma}\label{errorcost1} Let \textup{(H1)} and \textup{(H2)} hold, then we have $ |\mathcal{J}_{i}(\bar{u}_{i}(\cdot ),\bar{u}_{-i}(\cdot ))-J_{i}(\bar{u}% _{i}(\cdot ))|=O(\frac{1}{\sqrt{N}})\label{costerror} $, for $1\leq i \leq N$. \end{lemma} \begin{proof} According to \eqref{cost} and \eqref{lcost}, we have \begin{equation*} \begin{aligned} \mathcal{J}_{i}(\bar{u}_{i}(\cdot ),\bar{u}_{-i}(\cdot ))-J_{i}(\bar{u}% _{i}(\cdot )) =&\frac{1}{2}\mathbb{E\{}\int_{0}^{T}[\langle Q(t)(\bar{x}% _{i}(t)-\bar{x}^{(N)}(t)),\bar{x}_{i}(t)-\bar{x}^{(N)}(t)\rangle-\langle Q(t)(\bar{z}_{i}(t)-l(t)),\bar{z}_{i}(t)-l(t)\rangle ]dt\\ &\qquad+\langle G(\bar{x}_{i}(T)-\bar{x}^{(N)}(T)),\bar{x}_{i}(T)-\bar{x}% ^{(N)}(T)\rangle-\langle G(\bar{z}_{i}(T)-l(T)),\bar{z}_{i}(T)-l(T)\rangle\}. 
\end{aligned}
\end{equation*}
By noticing that $\langle Qa,a\rangle-\langle Qb,b\rangle=\langle Q(a-b),a-b\rangle+2\langle Q(a-b),b\rangle$, we obtain
\begin{equation*}
\begin{aligned}
&\mathbb{E}\int_{0}^{T}[\langle Q(t)(\bar{x}_{i}(t)-\bar{x}^{(N)}(t)),\bar{x}_{i}(t)-\bar{x}^{(N)}(t)\rangle -\langle Q(t)(\bar{z}_{i}(t)-l(t)),\bar{z}_{i}(t)-l(t)\rangle ]dt \\
\leq &K\int_{0}^{T}\mathbb{E}|\bar{x}_{i}(t)-\bar{z}_{i}(t)|^{2}dt+K\int_{0}^{T}\mathbb{E}|\bar{x}^{(N)}(t)-l(t)|^{2}dt\\
&+K\int_{0}^{T}(\mathbb{E}|\bar{x}_{i}(t)-\bar{x}^{(N)}(t)-(\bar{z}_{i}(t)-l(t))|^{2})^{\frac{1}{2}}(\mathbb{E}|\bar{z}_{i}(t)-l(t)|^{2})^{\frac{1}{2}}dt \\
\leq &K\int_{0}^{T}\mathbb{E}|\bar{x}_{i}(t)-\bar{z}_{i}(t)|^{2}dt+K\int_{0}^{T}\mathbb{E}|\bar{x}^{(N)}(t)-l(t)|^{2}dt\\
&+K\int_{0}^{T}(\mathbb{E}|\bar{x}_{i}(t)-\bar{z}_{i}(t)|^{2}+\mathbb{E}|\bar{x}^{(N)}(t)-l(t)|^{2})^{\frac{1}{2}}dt=O(\frac{1}{\sqrt{N}}),
\end{aligned}
\end{equation*}
where the last equality is due to Lemma \ref{averageerror} and $\mathbb{E}\underset{0\leq t\leq T}{\sup}(|\bar{z}_i(t)|^2+|l(t)|^2)\leq K$. Similarly, we can prove that the difference of the terminal terms is also of order $\frac{1}{\sqrt{N}}$. The proof is complete.
\hfill$\square$
\end{proof}
Now, we consider a perturbation of the $i$-th agent, i.e., the agent $\mathcal{A}_i$ chooses an alternative control $u_i(\cdot)$, while the other agents $\mathcal{A}_j$, $j\neq i$, still take the decentralized strategies $\bar{u}_j(\cdot)$. Then the perturbed centralized state of $\mathcal{A}_i$ is given by
\begin{equation}\label{perturbed centralized state}
\left\{
\begin{aligned}
dy_{i}(t)=&~[A(t)y_{i}(t)+B(t)u_{i}(t)+F(t)y^{(N)}(t)+b(t)]dt \\
&+[C(t)y_{i}(t)+D(t)u_{i}(t)+H(t)y^{(N)}(t)+\sigma (t)]dW_{i}(t) \\
&+[\widetilde{C}(t)y_{i}(t)+\widetilde{D}(t)u_{i}(t)+\widetilde{H}(t)y^{(N)}(t)+\widetilde{\sigma }(t)]d\widetilde{W}_{i}(t), \\
y_{i}(0)=&~x,
\end{aligned}
\right.
\end{equation} and the perturbed centralized state of $\mathcal{A}_j$ is given by \begin{equation}\label{perturbed centralized state yj} \left\{ \begin{aligned} dy_{j}(t)=&~[A(t)y_{j}(t)+B(t)\bar{u}_{j}(t)+F(t)y^{(N)}(t)+b(t)]dt \\ &+[C(t)y_{j}(t)+D(t)\bar{u}_{j}(t)+H(t)y^{(N)}(t)+\sigma (t)]dW_{j}(t) \\ &+[\widetilde{C}(t)y_{j}(t)+\widetilde{D}(t)\bar{u}_{j}(t)+\widetilde{H}(t)y^{(N)}(t)+\widetilde{\sigma }(t)]d\widetilde{W}_{j}(t), \\ y_{j}(0)=&~x,% \end{aligned}% \right. \end{equation} where $y^{(N)}(t)=\frac{1}{N}\sum_{i=1}^N y_{i}(t)$. Moreover, the corresponding decentralized states with perturbation satisfy \begin{equation} \left\{ \begin{aligned}\label{pyi} d\bar{y}_{i}(t)=&~[A(t)\bar{y}_{i}(t)+B(t)u_{i}(t)+F(t)l(t)+b(t)]dt \\ &+[C(t)\bar{y}_{i}(t)+D(t)u_{i}(t)+H(t)l(t)+\sigma (t)]dW_{i}(t) \\ &+[\widetilde{C}(t)\bar{y}_{i}(t)+\widetilde{D}(t)u_{i}(t)+\widetilde{H}(t)l(t)+\widetilde{\sigma }(t)]d\widetilde{W}_{i}(t), \\ \bar{y}_{i}(0)=&~x,% \end{aligned}% \right. \end{equation} and \begin{equation} \left\{ \begin{aligned}\label{pyj} d\bar{y}_{j}(t)=&~[A(t)\bar{y}_{j}(t)+B(t)\bar{u}_{j}(t)+F(t)l(t)+b(t)]dt \\ &+[C(t)\bar{y}_{j}(t)+D(t)\bar{u}_{j}(t)+H(t)l(t)+\sigma (t)]dW_{j}(t) \\ &+[\widetilde{C}(t)\bar{y}_{j}(t)+\widetilde{D}(t)\bar{u}_{j}(t)+\widetilde{H}(t)l(t)+\widetilde{\sigma }(t)]d\widetilde{W}_{j}(t), \\ \bar{y}_{j}(0)=&~x.% \end{aligned}% \right. \end{equation}% To show that $\bar{u}(\cdot)=(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$ is an $\varepsilon$-Nash equilibrium, we need to prove \begin{equation*} \mathcal{J}_{i}(\bar{u}_{i}(\cdot ),\bar{u}_{-i}(\cdot ))-\varepsilon \leq \underset{u_{i}\left( \cdot \right) \in \mathcal{U}_{ad}^{c}}{\inf }\mathcal{J% }_{i}(u_{i}(\cdot ),\bar{u}_{-i}(\cdot )),~\text{for any}~u_{i}\left( \cdot \right) \in \mathcal{U}_{ad}^{c}. \end{equation*}% Therefore, it only needs to consider the alternative control $u_i(\cdot)\in \mathcal{U}_{ad}^{c}$ s.t. 
$\mathcal{J}_{i}(\bar{u}_{i}(\cdot ),\bar{u}_{-i}(\cdot ))\geq\mathcal{J}_{i}(u_{i}(\cdot ),\bar{u}_{-i}(\cdot ))$. Thus,
\begin{equation*}
\mathbb{E}\int_{0}^{T}\langle R(t)u_{i}(t),u_{i}(t)\rangle dt\leq \mathcal{J}_{i}(u_{i}(\cdot ),\bar{u}_{-i}(\cdot ))\leq \mathcal{J}_{i}(\bar{u}_{i}(\cdot ),\bar{u}_{-i}(\cdot ))\leq J_{i}(\bar{u}_{i}(\cdot ))+O(\frac{1}{\sqrt{N}}),
\end{equation*}
which implies $\mathbb{E}\int_{0}^{T}|u_{i}(t)|^{2}dt\leq K$. Then, we have the following estimates for the perturbed state and cost functional.
\begin{lemma}\label{yestimate}
Let \textup{(H1)} and \textup{(H2)} hold. Then the following estimate holds:
\begin{equation*}
\mathbb{E}\underset{0\leq t\leq T}{\sup }|y^{(N)}(t)-l(t)|^{2}+ \underset{1\leq i\leq N}{\sup }\mathbb{E}\underset{0\leq t\leq T}{\sup }|y_{i}(t)-\bar{y}_{i}(t)|^{2}=O(\frac{1}{N}).
\end{equation*}
\end{lemma}
\begin{proof}
By recalling \eqref{lequation}, \eqref{perturbed centralized state} and \eqref{perturbed centralized state yj}, we have
\begin{equation*}
\begin{aligned}
&\mathbb{E}\underset{0\leq t\leq T}{\sup }|y^{(N)}(t)-l(t)|^{2}\\
\leq& K\mathbb{E}\int_0^T(|y^{(N)}(t)-l(t)|^{2}+\frac{1}{N^2}|u_i(t)|^2)dt+K\mathbb{E}\int_0^T|\frac{1}{N}\overset{N}{\underset{j=1,j\neq i}{\sum }}\bar{u}_{j}(t)-\mathbb{E}[\bar{u}_i(t)]|^2dt\\
&+\frac{K}{N^2}\mathbb{E}\int_0^T|u_i(t)|^2dt+\frac{K}{N^2}\mathbb{E}\overset{N}{\underset{j=1}{\sum}}\int_0^T|C(t)y_j(t)+H(t)(y^{(N)}(t)-l(t))+H(t)l(t)+\sigma(t)|^2dt\\
&+\frac{K}{N^2}\mathbb{E}\overset{N}{\underset{j=1}{\sum}}\int_0^T|\widetilde{C}(t)y_j(t)+\widetilde{H}(t)(y^{(N)}(t)-l(t))+\widetilde{H}(t)l(t)+\widetilde{\sigma}(t)|^2dt+\frac{K}{N^2}\mathbb{E}\overset{N}{\underset{j=1,j\neq i}{\sum }}\int_0^T|\bar{u}_j(t)|^2dt.
\end{aligned}
\end{equation*}
Since $\{\bar{u}_i(\cdot)\}$ are i.i.d, it follows that $\mathbb{E}[\bar{u}_i(\cdot)]=\mathbb{E}[\bar{u}_j(\cdot)]$, for $1\leq i,j \leq N$ and $j\neq i$.
Denoting $\mu(t)=\mathbb{E}[\bar{u}_i(t)]$, we have
\begin{equation*}
\begin{aligned}
\int_0^T\mathbb{E}|\frac{1}{N}\overset{N}{\underset{j=1,j\neq i}{\sum }}\bar{u}_{j}(t)-\mu(t)|^2dt \leq&\frac{2(N-1)^2}{N^2}\int_0^T\mathbb{E}|\frac{1}{N-1}\overset{N}{\underset{j=1,j\neq i}{\sum }}\bar{u}_{j}(t)-\mu(t)|^2dt+\frac{2}{N^2}\int_0^T\mathbb{E}|\mu(t)|^2dt\\
=&\frac{2(N-1)}{N^2}\int_0^T\mathbb{E}|\bar{u}_{j}(t)-\mu(t)|^2dt+\frac{2}{N^2}\int_0^T\mathbb{E}|\mu(t)|^2dt=O(\frac{1}{N}).
\end{aligned}
\end{equation*}
Similar to \eqref{secondest}, by using the fact that $\mathbb{E}\underset{0\leq t\leq T}{\sup}|y_i(t)|^2\leq K$ and recalling $\mathbb{E}\int_{0}^{T}|u_{i}(t)|^{2}dt\leq K$, we have
\begin{equation*}
\begin{aligned}
&\frac{K}{N^2}\mathbb{E}\int_0^T|u_i(t)|^2dt+\frac{K}{N^2}\mathbb{E}\overset{N}{\underset{j=1}{\sum}}\int_0^T|C(t)y_j(t)+H(t)(y^{(N)}(t)-l(t))+H(t)l(t)+\sigma(t)|^2dt\\
&+\frac{K}{N^2}\mathbb{E}\overset{N}{\underset{j=1}{\sum}}\int_0^T|\widetilde{C}(t)y_j(t)+\widetilde{H}(t)(y^{(N)}(t)-l(t))+\widetilde{H}(t)l(t)+\widetilde{\sigma}(t)|^2dt\leq \frac{K}{N}\Big(1+\mathbb{E}\int_0^T|y^{(N)}(t)-l(t)|^{2}dt\Big).
\end{aligned}
\end{equation*}
Moreover, by the i.i.d property of $\bar{u}_i(\cdot)$, we get $\frac{K}{N^2}\mathbb{E}\overset{N}{\underset{j=1,j\neq i}{\sum }}\int_0^T|\bar{u}_j(t)|^2dt=O(\frac{1}{N})$. Combining the above estimates, we have
\begin{equation*}
\mathbb{E}\underset{0\leq t\leq T}{\sup }|y^{(N)}(t)-l(t)|^{2}\leq K\mathbb{E}\int_{0}^{T}|y^{(N)}(t)-l(t)|^{2}dt+O(\frac{1}{N}).
\end{equation*}
Finally, by recalling \eqref{perturbed centralized state}-\eqref{pyj}, with the help of standard SDE estimates, we can complete the proof.
\hfill$\square$
\end{proof}
\smallskip
By using Lemma \ref{yestimate} and arguing as in the proof of Lemma \ref{errorcost1}, we obtain the following result, whose proof is omitted.
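The $O(\frac{1}{N})$ rate used repeatedly above is simply the variance scaling of i.i.d. averages. A minimal numerical sketch, with hypothetical Gaussian samples standing in for $\bar{u}_j(t)$ (the values of `mu` and `sigma` are illustrative assumptions):

```python
# Mean squared deviation of an i.i.d. sample average from its mean:
# E|(1/n) sum_j u_j - mu|^2 = Var(u)/n, i.e. the O(1/n) estimate above.
# mu and sigma are made-up illustrative values.
import random

def mean_sq_deviation(n, trials=4000, mu=0.7, sigma=1.0, seed=1):
    """Monte Carlo estimate of E|(1/n) sum_{j=1}^n u_j - mu|^2, u_j ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        m = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        total += (m - mu) ** 2
    return total / trials
```

Here `n * mean_sq_deviation(n)` stays close to `sigma**2` as `n` grows, mirroring the $O(\frac{1}{N})$ bound obtained for $\frac{1}{N}\sum_{j\neq i}\bar{u}_j(t)-\mu(t)$.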
\begin{lemma}\label{errorcost2}
Let \textup{(H1)} and \textup{(H2)} hold. Then we have
\begin{equation*}
|\mathcal{J}_{i}(u_{i}(\cdot ),\bar{u}_{-i}(\cdot ))-J_{i}(u_{i}(\cdot ))|=O(\frac{1}{\sqrt{N}}) , \text{ for }1\leq i \leq N.
\end{equation*}
\end{lemma}
Based on the above lemmas, we can give the following main result of this section.
\begin{theorem}\label{generalnash}
Let \textup{(H1)} and \textup{(H2)} hold, and assume that $4\lambda ^{\ast }<-2|F|-6|C|^{2}-6|\widetilde{C}|^{2}-5|H|^{2}-5|\widetilde{H}|^{2}$. Then there exists a constant $\theta_1>0$ independent of $T$, which may depend on $\lambda^{\ast}$, $|C|$, $|\widetilde{C}|$, $|F|$, $|H|$, $|\widetilde{H}|$, $|Q|$, $|G|$, such that if $|B|$, $|D|$, $|\widetilde{D}|$, $|R^{-1}|\in[0,\theta_1)$, then the strategy profile $(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$ with $\bar{u}_i(\cdot)$ given by \eqref{gcontrol} is an $\varepsilon$-Nash equilibrium of Problem (LP), where $(\bar{z}_i(\cdot),\bar{p}_i(\cdot),\bar{k}_i(\cdot),\bar{\tilde{k}}_i(\cdot))$ is the unique solution to the FBSDE \eqref{CC}.
\end{theorem}
\begin{proof}
From Lemmas \ref{errorcost1} and \ref{errorcost2}, we obtain that for $1\leq i \leq N$,
$
\mathcal{J}_{i}(\bar{u}_{i}(\cdot ),\bar{u}_{-i}(\cdot ))=J_{i}(\bar{u}_{i}(\cdot ))+O(\frac{1}{\sqrt{N}}) \leq J_{i}(u_{i}(\cdot ))+O(\frac{1}{\sqrt{N}}) =\mathcal{J}_{i}(u_{i}(\cdot ),\bar{u}_{-i}(\cdot ))+O(\frac{1}{\sqrt{N}}),
$
which yields that $(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$ is an $\varepsilon$-Nash equilibrium.
\hfill$\square$
\end{proof}
\subsection{Control Unconstrained Case: Riccati Approach}\label{subsec:2}
In this subsection, we consider the control unconstrained case, i.e. $\Gamma=\mathbb{R}^m$. We will use the Riccati approach to represent the decentralized strategies as feedback of the filtered state. Moreover, we introduce the following assumption.
\textup{(H3)} It holds that
\[F=\delta I ~\text{and}~ H=\widetilde{H}=\widetilde{C}=0,~\text{where}~\delta~\text{is a constant.}\]
For simplicity, we denote by $\hat{f}_i(t)=\mathbb{E}[f_i(t)|\mathcal{G}_t^i]$ the filtering of $f_i(t)$ w.r.t. $\mathcal{G}_t^i$, for $1\leq i \leq N$. With the above setting, the decentralized strategies given by \eqref{gcontrol} reduce to, for $1\leq i \leq N$,
\begin{equation}\label{open-loop}
\begin{aligned}
\bar{u}_{i}(t)=&R(t)^{-1}(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}]+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])\\
=&R(t)^{-1}(B^{\top }(t)\hat{\bar{p}}_{i}(t)+D^{\top }(t)\hat{\bar{k}}_{i}(t)+\widetilde{D}^{\top }(t)\hat{\bar{\widetilde{k}}}_{i}(t)),
\end{aligned}
\end{equation}
where $(\bar{z}_i(\cdot),\bar{p}_i(\cdot),\bar{k}_i(\cdot),\bar{\widetilde{k}}_i(\cdot))$ solves the following Hamiltonian type CC system, which is a MF-FBSDE:
\begin{equation}
\left\{
\begin{aligned}\label{sCC}
d\bar{z}_{i}(t)=&\{A(t)\bar{z}_{i}(t)+B(t)R^{-1}(t)(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}] \\
&+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])+\delta \mathbb{E}[\bar{z}_{i}(t)]+b(t)\}dt\\
&+\{C(t)\bar{z}_{i}(t)+D(t)R^{-1}(t)(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}]\\
&+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])+\sigma (t)\}dW_{i}(t)\\
&+\{\widetilde{D}(t)R^{-1}(t)(B^{\top }(t)\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}_{t}^{i}]+D^{\top }(t)\mathbb{E}[\bar{k}_{i} (t)|\mathcal{G}_{t}^{i}]\\
&+\widetilde{D}^{\top }(t)\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}_{t}^{i}])+\widetilde{\sigma }(t)\}d\widetilde{W}_{i}(t), \\
d\bar{p}_{i}(t)=&-\{A^{\top }(t)\bar{p}_{i}(t)+C^{\top }(t)\bar{k}_{i}\left( t\right) -Q(t)(\bar{z}_{i}\left( t\right)-\mathbb{E}[\bar{z}_{i}(t)])\}dt\\
&+\bar{k}_{i}\left( t\right) dW_{i}(t)+\bar{\widetilde{k}}_{i}\left( t\right) d\widetilde{W}_{i}(t), \\
\bar{z}_{i}(0)=&~x,~\bar{p}_{i}(T)=-G(\bar{z}_{i}\left( T\right) -\mathbb{E}[\bar{z}_{i}(T)]).
\end{aligned}
\right.
\end{equation}
Moreover, in the framework of this subsection, the above decentralized strategies can be further represented as feedback of the filtered state by the Riccati approach, as given in the following theorem.
\begin{theorem}\label{specialu}
Let \textup{(H1)}-\textup{(H3)} hold and suppose $\Gamma=\mathbb{R}^m$. Then the decentralized strategies can be represented as
\begin{equation}
\begin{aligned}\label{scontrol}
\bar{u}_i(t)=&-\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t)\hat{\bar{z}}_i(t)-\widetilde{R}(t)^{-1}B^{\top}(t)\Lambda(t)l(t)\\
&-\widetilde{R}(t)^{-1}(B^{\top}(t)\Phi(t)+D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t)), ~~~1\leq i \leq N,
\end{aligned}
\end{equation}
with
\begin{equation}\label{tildeR}
\widetilde{R}(t)=R(t)+D^{\top}(t)P(t)D(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{D}(t),
\end{equation}
\begin{equation}\label{tildeP}
\widetilde{P}(t)=P(t)B(t)+C^{\top}(t)P(t)D(t),
\end{equation}
where $P(\cdot)$ and $\Lambda(\cdot)$ solve the following Riccati equations, respectively,
\begin{equation}
\left\{
\begin{aligned}\label{P}
&\dot{P}(t)+P(t)A(t)+A^{\top}(t)P(t)+C^{\top}(t)P(t)C(t)+Q(t)-\widetilde{P}(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t)=0,\\
&P(T)=~G,
\end{aligned}
\right.
\end{equation}
\begin{equation}
\left\{
\begin{aligned}\label{Lambda}
&\dot{\Lambda}(t)+\Lambda(t)(A(t)-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))+(A(t)-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))^{\top}\Lambda(t)\\
&\qquad+(P(t)+\Lambda(t))\delta -\Lambda(t)B(t)\widetilde{R}(t)^{-1}B^{\top}(t)\Lambda(t)-Q(t)=0,\\
&\Lambda(T)=-G,
\end{aligned}
\right.
\end{equation} $\Phi(\cdot)$ solves the following standard ordinary differential equation (ODE) \begin{equation} \left\{ \begin{aligned}\label{Phi} &\dot{\Phi}(t)+(A^{\top}(t)-\widetilde{P}(t)\widetilde{R}(t)^{-1}B^{\top}(t)-\Lambda(t)B(t)\widetilde{R}(t)^{-1}B^{\top}(t))\Phi(t)+(C^{\top}(t)\\ &\qquad-\widetilde{P}(t)\widetilde{R}(t)^{-1}D^{\top}(t)-\Lambda(t)B(t)\widetilde{R}(t)^{-1}D^{\top}(t))P(t)\sigma(t)-(\widetilde{P}(t)\widetilde{R}(t)^{-1}\widetilde{D}^{\top}(t)\\ &\qquad+\Lambda(t)B(t)\widetilde{R}(t)^{-1}\widetilde{D}^{\top}(t))P(t)\widetilde{\sigma}(t)+(P(t)+\Lambda(t))b(t)=0,\\ &\Phi(T)=~0, \end{aligned} \right. \end{equation} $l(\cdot)$ representing the limit value of the state average solves \begin{equation} \left\{ \begin{aligned}\label{l} dl(t)=&\{[A(t)+\delta -B(t)\widetilde{R}(t)^{-1}(\widetilde{P}^{\top}(t)+B^{\top}(t)\Lambda(t))]l(t)+b(t)\\ &-B(t)\widetilde{R}(t)^{-1}(B^{\top}(t)\Phi(t)+D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t))\}dt,\\ l(0)=&~x, \end{aligned} \right. \end{equation} and the optimal filtering $\hat{\bar{z}}_i(\cdot)$ solves the following SDE \begin{equation} \left\{ \begin{aligned}\label{z} &d\hat{\bar{z}}_i(t)=\{(A(t)-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))\hat{\bar{z}}_i(t)+(\delta-B(t)\widetilde{R}(t)^{-1}B^{\top}(t)\Lambda(t))l(t)\\ &\qquad-B(t)\widetilde{R}(t)^{-1}(B^{\top}(t)\Phi(t)+D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t))+b(t)\}dt\\ &\qquad+\{(C(t)-D(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))\hat{\bar{z}}_i(t)-D(t)\widetilde{R}(t)^{-1}B^{\top}(t)\Lambda(t)l(t)\\ &\qquad-D(t)\widetilde{R}(t)^{-1}(B^{\top}(t)\Phi(t)+D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t))+\sigma(t)\}dW_i(t),\\ &\hat{\bar{z}}_i(0)=~x. \end{aligned} \right. 
\end{equation} \end{theorem} \begin{proof} Due to the coupling structure of MF-FBSDE \eqref{sCC}, we conjecture that \begin{equation}\label{pdecompose} \bar{p}_i(t)=-P(t)\bar{z}_i(t)-\Lambda(t)\mathbb{E}[\bar{z}_i(t)]-\Phi(t), \end{equation} with $P(T)=G$, $\Lambda(T)=-G$ and $\Phi(T)=0$. Here, $P(\cdot)$, $\Lambda(\cdot)$ and $\Phi(\cdot)$ satisfy deterministic equations, which will be specified later. Let us first show \eqref{scontrol}. By applying It\^o's formula to $\bar{p}_i(\cdot)$, we can derive \begin{equation} \begin{aligned}\label{barp} d\bar{p}_i(t)=&\{-(\dot{P}(t)+P(t)A(t))\bar{z}_i(t)-[\dot{\Lambda}(t)+\Lambda(t)(A(t)+\delta)+P(t)\delta] \mathbb{E}[\bar{z}_i(t)]\\ &-P(t)B(t)\bar{u}_i(t)-\Lambda(t)B(t)\mathbb{E}[\bar{u}_i(t)]-(P(t)+\Lambda(t))b(t)-\dot{\Phi}(t)\}dt\\ &-P(t)[C(t)\bar{z}_{i}(t) +D(t)\bar{u}_i(t)+\sigma (t)]dW_i(t)-P(t)[\widetilde{D}(t)\bar{u}_i(t)+\widetilde{\sigma }(t)]d\widetilde{W}_i(t). \end{aligned} \end{equation} Comparing this with the diffusion terms in the second equation of \eqref{sCC}, we get \begin{equation} \begin{aligned} \bar{k}_i(t)=&-P(t)[C(t)\bar{z}_{i}(t) +D(t)\bar{u}_i(t)+\sigma (t)],\\ \bar{\widetilde{k}}_i(t)=&-P(t)[\widetilde{D}(t)\bar{u}_i(t)+\widetilde{\sigma }(t)].\label{krelation} \end{aligned} \end{equation} Taking the conditional expectation with respect to $\mathcal{G}_t^i$ on both sides of \eqref{pdecompose} and \eqref{krelation}, and substituting them into \eqref{open-loop}, we have \begin{equation} \begin{aligned}\label{u} \bar{u}_i(t)=&-\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t)\hat{\bar{z}}_i(t)-\widetilde{R}(t)^{-1}B^{\top}(t)\Lambda(t)\mathbb{E}[\bar{z}_i(t)]\\ &-\widetilde{R}(t)^{-1}(B^{\top}(t)\Phi(t)+D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t)), ~~~1\leq i \leq N, \end{aligned} \end{equation} then \eqref{scontrol} follows by recalling that $l(\cdot)=\mathbb{E}[\bar{z}_i(\cdot)]$ (see \eqref{limit}). 
Moreover, we have \begin{equation} \begin{aligned}\label{Eu} \mathbb{E}[\bar{u}_i(t)]=&-\widetilde{R}(t)^{-1}(\widetilde{P}^{\top}(t)+B^{\top}(t)\Lambda(t))\mathbb{E}[\bar{z}_i(t)]\\ &-\widetilde{R}(t)^{-1}(B^{\top}(t)\Phi(t)+D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t)), ~~~1\leq i \leq N. \end{aligned} \end{equation} Now, let us deduce the equations for $P(\cdot)$, $\Lambda(\cdot)$, $\Phi(\cdot)$ and $l(\cdot)$. From the drift term of \eqref{barp} and the second equation in \eqref{sCC}, by noting \eqref{pdecompose} and \eqref{krelation}, one can obtain that \begin{equation} \begin{aligned}\label{drift} &(\dot{P}(t)+P(t)A(t)+A^{\top}(t)P(t)+C^{\top}(t)P(t)C(t)+Q(t))\bar{z}_i(t)\\ &+[\dot{\Lambda}(t)+\Lambda(t)(A(t)+\delta)+P(t)\delta+A^{\top}(t)\Lambda(t)-Q(t)]\mathbb{E}[\bar{z}_i(t)]\\ &+(P(t)B(t)+C^{\top}(t)P(t)D(t))\bar{u}_i(t)+\Lambda(t)B(t)\mathbb{E}[\bar{u}_i(t)]\\ &+\dot{\Phi}(t)+P(t)b(t)+\Lambda(t)b(t)+A^{\top}(t)\Phi(t)+C^{\top}(t)P(t)\sigma(t)=0. \end{aligned} \end{equation} By taking the conditional expectation in \eqref{drift} and by virtue of \eqref{u}-\eqref{Eu}, we have \begin{equation} \begin{aligned} &(\dot{P}(t)+P(t)A(t)+A^{\top}(t)P(t)+C^{\top}(t)P(t)C(t)+Q(t)-\widetilde{P}(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))\hat{\bar{z}}_i(t)\\ &+[\dot{\Lambda}(t)+\Lambda(t)(A(t)-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))+(A(t)-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))^{\top}\Lambda(t)\\ &+(P(t)+\Lambda(t))\delta-\Lambda(t)B(t)\widetilde{R}(t)^{-1}B^{\top}(t)\Lambda(t)-Q(t)]\mathbb{E}[\bar{z}_i(t)]\\ &+\dot{\Phi}(t)+(A^{\top}(t)-\widetilde{P}(t)\widetilde{R}(t)^{-1}B^{\top}(t)-\Lambda(t)B(t)\widetilde{R}(t)^{-1}B^{\top}(t))\Phi(t)\\ &+(C^{\top}(t)-\widetilde{P}(t)\widetilde{R}(t)^{-1}D^{\top}(t)-\Lambda(t)B(t)\widetilde{R}(t)^{-1}D^{\top}(t))P(t)\sigma(t)\\ 
&-(\widetilde{P}(t)\widetilde{R}(t)^{-1}\widetilde{D}^{\top}(t)+\Lambda(t)B(t)\widetilde{R}(t)^{-1}\widetilde{D}^{\top}(t))P(t)\widetilde{\sigma}(t)+(P(t)+\Lambda(t))b(t)=0,\\ \end{aligned} \end{equation} which suggests that $P(\cdot)$, $\Lambda(\cdot)$ and $\Phi(\cdot)$ solve \eqref{P}, \eqref{Lambda} and \eqref{Phi}, respectively. In addition, by taking expectation on both sides of the first equation in \eqref{sCC} and by noting \eqref{open-loop}, we have \begin{equation}\label{Ez} d\mathbb{E}[\bar{z}_i(t)]=\{(A(t)+\delta)\mathbb{E}[\bar{z}_i(t)]+B(t)\mathbb{E}[\bar{u}_i(t)]+b(t)\}dt, \end{equation} and by substituting \eqref{Eu} into \eqref{Ez} and by recalling $l(\cdot)=\mathbb{E}[\bar{z}_i(\cdot)]$, it is easy to show that $l(\cdot)$ solves \eqref{l}. Moreover, from \eqref{sCC} and \eqref{scontrol}, the optimal filtering $\hat{\bar{z}}_i(\cdot)$ can be expressed as in \eqref{z}. \hfill$\square$ \end{proof} To summarize, we obtain that $(P(\cdot),\Lambda(\cdot),\Phi(\cdot),l(\cdot))$ solves the following Riccati type CC system \begin{equation} \left\{ \begin{aligned}\label{RCC} &\dot{P}(t)+P(t)A(t)+A^{\top}(t)P(t)+C^{\top}(t)P(t)C(t)+Q(t)-\widetilde{P}(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t)=0,\\ &\dot{\Lambda}(t)+\Lambda(t)(A(t)-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))+(A(t)-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))^{\top}\Lambda(t)\\ &\qquad+(P(t)+\Lambda(t))\delta -\Lambda(t)B(t)\widetilde{R}(t)^{-1}B^{\top}(t)\Lambda(t)-Q(t)=0,\\ &\dot{\Phi}(t)+(A^{\top}(t)-\widetilde{P}(t)\widetilde{R}(t)^{-1}B^{\top}(t)-\Lambda(t)B(t)\widetilde{R}(t)^{-1}B^{\top}(t))\Phi(t)+(C^{\top}(t)\\ &\qquad-\widetilde{P}(t)\widetilde{R}(t)^{-1}D^{\top}(t)-\Lambda(t)B(t)\widetilde{R}(t)^{-1}D^{\top}(t))P(t)\sigma(t)-(\widetilde{P}(t)\widetilde{R}(t)^{-1}\widetilde{D}^{\top}(t)\\ &\qquad+\Lambda(t)B(t)\widetilde{R}(t)^{-1}\widetilde{D}^{\top}(t))P(t)\widetilde{\sigma}(t)+(P(t)+\Lambda(t))b(t)=0,\\ &\dot{l}(t)-[A(t)+\delta 
-B(t)\widetilde{R}(t)^{-1}(\widetilde{P}^{\top}(t)+B^{\top}(t)\Lambda(t))]l(t)-b(t)\\ &\qquad+B(t)\widetilde{R}(t)^{-1}(B^{\top}(t)\Phi(t)+D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t))=0,\\ &P(T)=~G,\Lambda(T)=-G,\Phi(T)=0,l(0)=~x, \end{aligned} \right. \end{equation} where we recall that $\widetilde{R}(t)=R(t)+D^{\top}(t)P(t)D(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{D}(t)$ and $\widetilde{P}(t)=P(t)B(t)+C^{\top}(t)P(t)D(t)$. \smallskip By applying Theorem \ref{generalnash}, we have \begin{theorem} Let \textup{(H1)}-\textup{(H3)} hold. Then the strategy profile $\bar{u}(\cdot)=(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$, where $\bar{u}_i(\cdot)$ is given by \eqref{scontrol} and $(P(\cdot),\Lambda(\cdot),\Phi(\cdot),l(\cdot),\hat{\bar{z}}_i(\cdot))$ solves systems \eqref{P}-\eqref{z}, is an $\varepsilon$-Nash equilibrium of Problem (LP) with $\Gamma=\mathbb{R}^m$. \end{theorem} \begin{remark} \textup{(i)} When $\Gamma=\mathbb{R}^m$, if we further assume that $\widetilde{D}=\widetilde{\sigma}=0$ and only one Brownian motion $W_i$ is involved, then the problem reduces to the control-unconstrained large-population problem with full information, which serves as a special case of our problem. We emphasize that in this situation $\widetilde{R}(\cdot)=R(\cdot)+D^{\top}(\cdot)P(\cdot)D(\cdot)$, so the Riccati equation for $P(\cdot)$ in the full information case is standard and its solvability is guaranteed by \cite{Yong1999}. By contrast, we will see that the introduction of partial information makes it very difficult to solve the corresponding Riccati equations, in particular for $P(\cdot)$. 
In fact, the $\widetilde{D}(\cdot)u_i(\cdot)$ term will result in $\widetilde{R}(\cdot)=R(\cdot)+D^{\top}(\cdot)P(\cdot)D(\cdot)+\widetilde{D}^{\top}(\cdot)P(\cdot)\widetilde{D}(\cdot)$, where the additional term $\widetilde{D}^{\top}(\cdot)P(\cdot)\widetilde{D}(\cdot)$ cannot be combined with the original term $D^{\top}(\cdot)P(\cdot)D(\cdot)$ into a quadratic form (see \eqref{r1}); thus the equation for $P(\cdot)$ is not a standard Riccati equation and the existing results cannot be applied. Therefore, we will focus on the well-posedness of system \eqref{RCC} in subsection \ref{subsec:3}. \textup{(ii)} Suppose that $\bar{p}_i(\cdot)$ has the following decomposition $($which is different from \eqref{pdecompose}$)$ \begin{equation} \bar{p}_i(t)=-P(t)\bar{z}_i(t)-\varphi(t), \label{prelation} \end{equation} with $P(T)=G$ and $\varphi(T)=-G\widetilde{l}(T)$. Then, by a procedure similar to the above proof, it can be verified that $P(\cdot)$ still solves \eqref{P}, and $\varphi(\cdot)$ solves the following ODE \begin{equation} \left\{ \begin{aligned}\label{vari} &\dot{\varphi}(t)+(A(t)-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))^{\top}\varphi(t)+(C(t)-D(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))^{\top}P(t)\sigma(t)\\ &\qquad-\widetilde{P}(t)\widetilde{R}(t)^{-1}\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t)+P(t)(\delta \widetilde{l}(t)+b(t))-Q(t)\widetilde{l}(t)=0,\\ &\varphi(T)=-G\widetilde{l}(T), \end{aligned} \right. \end{equation} and $\widetilde{l}(\cdot)$ solves \begin{equation} \left\{ \begin{aligned}\label{newl} d\widetilde{l}(t)=&\{(A(t)+\delta-B(t)\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t))\widetilde{l}(t)+b(t)-B(t)\widetilde{R}(t)^{-1}(B^{\top}(t)\varphi(t)\\ &+D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t))\}dt,\\ \widetilde{l}(0)=&~x. \end{aligned} \right. 
\end{equation} In this case, for any $1\leq i \leq N$ the decentralized strategies can be represented as \begin{equation} \begin{aligned} \bar{u}_i(t)=-\widetilde{R}(t)^{-1}\widetilde{P}^{\top}(t)\hat{\bar{z}}_i(t) -\widetilde{R}(t)^{-1}(B^{\top}(t)\varphi(t) +D^{\top}(t)P(t)\sigma(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{\sigma}(t)).\end{aligned} \end{equation} According to the above analysis, it is not surprising that \eqref{vari} and \eqref{newl} form a system of coupled forward-backward ODEs, whose well-posedness is not easy to check. For this reason, we introduce the new decomposition \eqref{pdecompose}, which allows us to solve for $P(\cdot),\Lambda(\cdot),\Phi(\cdot)$ and $l(\cdot)$ one by one. We emphasize that by comparing \eqref{pdecompose} with \eqref{prelation}, one can check that $\varphi(\cdot)=\Lambda(\cdot)\widetilde{l}(\cdot)+\Phi(\cdot)$. Moreover, if system \eqref{RCC} is uniquely solvable, then systems \eqref{vari} and \eqref{newl} are also uniquely solvable. The existence is obvious, so we only address the uniqueness here. In fact, let $(\varphi^\prime(\cdot),\widetilde{l}^\prime(\cdot))$ be another solution to \eqref{vari} and \eqref{newl}. After a simple calculation, one can verify that $\Phi(\cdot)=\varphi(\cdot)-\Lambda(\cdot)\widetilde{l}(\cdot)$ and $\Phi^\prime(\cdot)=\varphi^\prime(\cdot)-\Lambda(\cdot)\widetilde{l}^\prime(\cdot)$ both satisfy the third equation of system \eqref{RCC}. Due to the uniqueness of the solution to \eqref{RCC}, we have $\Phi(\cdot)=\Phi^\prime(\cdot)$. By substituting the relationships $\varphi(\cdot)=\Lambda(\cdot)\widetilde{l}(\cdot)+\Phi(\cdot)$ and $\varphi^\prime(\cdot)=\Lambda(\cdot)\widetilde{l}^\prime(\cdot)+\Phi(\cdot)$ into \eqref{newl}, we see that $\widetilde{l}(\cdot)$ and $\widetilde{l}^\prime(\cdot)$ satisfy the same ODE, which implies $\widetilde{l}(\cdot)=\widetilde{l}^\prime(\cdot)$ by classical ODE theory, and then further $\varphi(\cdot)=\varphi^\prime(\cdot)$. 
Thus, we will focus on the well-posedness of system \eqref{RCC} in the next subsection. \end{remark} \begin{remark}\label{remark for partial information structure} Huang and Wang \cite{Huang2016} also considered a class of unconstrained LQ large-population problems with partial information via decoupling methods. However, the diffusion coefficient of the individual agent's dynamics in \cite{Huang2016} takes a simple form: it depends neither on the control nor on the state. In our work, we study a general partial information stochastic large-population problem. Moreover, the well-posedness of a new Riccati type CC system \eqref{RCC} is also provided in the next subsection, which is of independent interest. \end{remark} \subsection{Well-posedness of Riccati Type CC System}\label{subsec:3} In this subsection, we study the well-posedness of the general Riccati type CC system \eqref{RCC}, which consists of four equations (see \eqref{P}-\eqref{l}). We emphasize that due to our general partial information structure, especially the fact that the diffusion term of the state contains the term $\widetilde{D}(\cdot)u_i(\cdot)$, the equations for $P(\cdot)$ and $\Lambda(\cdot)$ are no longer standard Riccati equations, which raises essential difficulties in establishing the well-posedness of system \eqref{RCC}. Indeed, first, it is obvious that we have \begin{equation}\label{r1} D^{\top}(t)P(t)D(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{D}(t)\neq (D(t)+\widetilde{D}(t))^{\top}P(t)(D(t)+\widetilde{D}(t)), \end{equation} which means that equation \eqref{P} for $P(\cdot)$ is not a standard Riccati equation. Second, since the inequality $\delta P(\cdot)-Q(\cdot)\geq 0$ usually fails, the well-posedness of equation \eqref{Lambda} for $\Lambda(\cdot)$ is not obvious at all. To study the well-posedness of system \eqref{RCC}, we first recall the following useful results, which are classical in linear algebra. 
It is worth noting that the second assertion in the following lemma is a corollary of Weyl's inequality. To save space, we omit the proof. \begin{lemma}\label{alegbra1} Let $\mathbb{A},\mathbb{B}\in \mathcal{S}^n$, then the following results hold:\\ \textup{(i)} if $\mathbb{A}\geq\mathbb{B}>0$, then $\mathbb{B}^{-1}\geq\mathbb{A}^{-1}>0$.\\ \textup{(ii)} if $\mathbb{A}\geq \mathbb{B}$, then $\lambda_k(\mathbb{A})\geq\lambda_k(\mathbb{B})$, $k=1,\ldots,n$, where $\lambda_k(\mathbb{A})$ $(resp.~\lambda_k(\mathbb{B}))$ is the $k$-th eigenvalue of $\mathbb{A}$ $(resp.~\mathbb{B})$, i.e. $\lambda_1(\mathbb{A})\ge\lambda_2(\mathbb{A})\ge\ldots\ge\lambda_n(\mathbb{A})$ and $\lambda_1(\mathbb{B})\ge\lambda_2(\mathbb{B})\ge\ldots\ge\lambda_n(\mathbb{B})$. \end{lemma} Let us first focus on the well-posedness of equation \eqref{P}. As mentioned above, equation \eqref{P} is not a standard Riccati equation due to the additional term involving $\widetilde{D}$. In the following lemma, we show the uniqueness of the solution to equation \eqref{P} by Lemma \ref{alegbra1} and assumptions \textup{(H1)}-\textup{(H3)}. Moreover, we use a modified iterative method and mathematical induction (see the modified terms $\widehat{Q}(\cdot)$ and $\Psi(\cdot)$ in the following proof) to obtain the existence of a solution to equation \eqref{P}. \begin{lemma}\label{Plemma} Let \textup{(H1)}-\textup{(H3)} hold. Then Riccati equation \eqref{P} admits a unique solution $P(\cdot)\in C([0,T];\mathcal{S}_+^n)$. \end{lemma} \begin{proof} First, we prove that \eqref{P} admits at most one solution $P(\cdot)\in C([0,T];\mathcal{S}_+^n)$. 
Suppose that $P_1(\cdot)$ and $P_2(\cdot)$ are two solutions of \eqref{P} and denote $\widehat{P}(\cdot)=P_1(\cdot)-P_2(\cdot)$; then we have \begin{equation*} \left\{ \begin{aligned} &\dot{\widehat{P}}(t)+\widehat{P}(t)A(t)+A^{\top}(t)\widehat{P}(t)+C^{\top}(t)\widehat{P}(t)C(t)\\ &\qquad-(\widehat{P}(t)B(t)+C^{\top}(t)\widehat{P}(t)D(t))R_1(t)^{-1}(P_1(t)B(t)+C^{\top}(t)P_1(t)D(t))^{\top}\\ &\qquad-(P_2(t)B(t)+C^{\top}(t)P_2(t)D(t))R_2(t)^{-1}(\widehat{P}(t)B(t)+C^{\top}(t)\widehat{P}(t)D(t))^\top\\ &\qquad+(P_2(t)B(t)+C^{\top}(t)P_2(t)D(t))R_2(t)^{-1}(D^{\top}(t)\widehat{P}(t)D(t)\\ &\qquad+\widetilde{D}^{\top}(t)\widehat{P}(t)\widetilde{D}(t))R_1(t)^{-1}(P_1(t)B(t)+C^{\top}(t)P_1(t)D(t))^{\top}=0,\\ &\widehat{P}(T)=0, \end{aligned} \right. \end{equation*} where $R_i(t)=R(t)+D^{\top}(t)P_i(t)D(t)+\widetilde{D}^{\top}(t)P_i(t)\widetilde{D}(t)$, $i=1,2$. From Lemma \ref{alegbra1}, we have $R_i(t)^{-1}\leq R(t)^{-1}$, $\forall t\in [0,T]$, $i=1,2$, and \begin{equation*} |R_i(t)^{-1}|=\sqrt{\underset{k=1}{\overset{m}{\sum}}\lambda_k^2(R_i(t)^{-1})} \leq \sqrt{\underset{k=1}{\overset{m}{\sum}}\lambda_k^2(R(t)^{-1})}=|R(t)^{-1}|<\infty, \end{equation*} where the last inequality is due to $R\gg0$ (which implies $R^{-1}\in L^{\infty }(0,T;\mathcal{S}^{m})$). Consequently, $|R_1(t)^{-1}|$ and $|R_2(t)^{-1}|$ are uniformly bounded. From Gronwall's inequality, we have $\widehat{P}(t)=0$, which yields the uniqueness of $P(\cdot)$. Second, let us focus on the existence of a solution to equation \eqref{P}. Motivated by \cite{Yong1999}, we set \begin{equation*} \left\{ \begin{aligned} &\widehat{A}(t)=A(t)-B(t)\Psi(t),\widehat{C}(t)=C(t)-D(t)\Psi(t),\\ &\widehat{Q}(t)=Q(t)+\Psi^{\top}(t)(R(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{D}(t))\Psi(t),\\ &\Psi(t)=(R(t)\!+\!D^{\top}(t)P(t)D(t)\!+\!\widetilde{D}^{\top}(t)P(t)\widetilde{D}(t))^{-1} (P(t)B(t)\!+\!C^\top(t)P(t)D(t))^\top.\\ \end{aligned} \right. 
\end{equation*} It is easy to verify that equation \eqref{P} is equivalent to the following equation \begin{equation} \left\{ \begin{aligned}\label{Pe} &\dot{P}(t)+P(t)\widehat{A}(t)+\widehat{A}^\top(t)P(t)+\widehat{C}^\top(t)P(t)\widehat{C}(t)+\widehat{Q}(t)=0,\\ &P(T)=G. \end{aligned} \right. \end{equation} Now, we use a modified iterative method and mathematical induction to prove the existence of a solution to \eqref{Pe}; thus \eqref{P} also has a solution. To do this, we set \begin{equation} \left\{ \begin{aligned}\label{P0} &\dot{P}_0(t)+P_0(t)A(t)+A^\top(t)P_0(t)+C^\top(t)P_0(t)C(t)+Q(t)=0,\\ &P_0(T)=G, \end{aligned} \right. \end{equation} which admits a unique solution $P_0(\cdot)\in C([0,T];\mathcal{S}_+^n)$ by Lemma 7.3 of Chapter 6 in \cite{Yong1999}. For $i\ge0$, we define \begin{equation} \left\{ \begin{aligned}\label{iterative} &\Psi_i(t)=(R(t)+D^{\top}(t)P_i(t)D(t)+\widetilde{D}^{\top}(t)P_i(t)\widetilde{D}(t))^{-1} (P_i(t)B(t)\\&\qquad\qquad+C^\top(t)P_i(t)D(t))^\top,\\ &\widehat{A}_i(t)=A(t)-B(t)\Psi_i(t),\quad \widehat{C}_i(t)=C(t)-D(t)\Psi_i(t),\\ &\widehat{Q}_i(t)=Q(t)+\Psi_i^{\top}(t)(R(t)+\widetilde{D}^{\top}(t)P_i(t)\widetilde{D}(t))\Psi_i(t),\\ \end{aligned} \right. \end{equation} and let $P_{i+1}(\cdot)$ be defined by the following equation \begin{equation} \left\{ \begin{aligned}\label{Pie} &\dot{P}_{i+1}(t)+P_{i+1}(t)\widehat{A}_i(t)+\widehat{A}_i^\top(t)P_{i+1}(t)+\widehat{C}_i^\top(t)P_{i+1}(t)\widehat{C}_i(t)+\widehat{Q}_i(t)=0,\\ &P_{i+1}(T)=G. \end{aligned} \right. \end{equation} Noticing that $R\gg0$, $Q\ge0$ and $G\geq0$, by using Lemma 7.3 of Chapter 6 in \cite{Yong1999} and mathematical induction, one can check that for $i\ge 0$, $P_i(\cdot)$ is well defined and, moreover, $P_i(\cdot)\in C([0,T];\mathcal{S}_+^n)$. We claim that $\{P_i(\cdot)\}_{i\ge0}$ is a decreasing sequence in $C([0,T];\mathcal{S}_+^n)$. For simplicity, we set $\Psi_{-1}(t)=0$ and denote $\Delta_i(t)=P_i(t)-P_{i+1}(t)$ and $\Upsilon_i(t)=\Psi_{i-1}(t)-\Psi_i(t)$. 
Indeed, when $i=0$, by \eqref{P0}-\eqref{Pie}, we have \begin{equation} \begin{aligned} &-[\dot{\Delta}_0(t)+\Delta_0(t)\widehat{A}_0(t)+\widehat{A}_0^\top(t)\Delta_0(t)+\widehat{C}_0^\top(t)\Delta_0(t)\widehat{C}_0(t)]\\ =&P_0(t)(A(t)-\widehat{A}_0(t))+(A(t)-\widehat{A}_0(t))^\top P_0(t)+C^\top(t)P_0(t)C(t)\\ &-\widehat{C}_0^\top(t)P_0(t)\widehat{C}_0(t)+Q(t)-\widehat{Q}_0(t)\\ =&\Upsilon^\top_0(t)(R(t)+D^\top(t)P_0(t)D(t)+\widetilde{D}^\top(t)P_0(t)\widetilde{D}(t))\Upsilon_0(t)\\ &-[P_0(t)B(t)+\widehat{C}_0^\top(t)P_0(t)D (t)+\Upsilon^\top_0(t)(R(t)+\widetilde{D}^\top(t)P_0(t)\widetilde{D}(t))]\Upsilon_0(t)\\ &-\Upsilon^\top_0(t)[B^\top(t)P_0(t)+D^\top(t)P_0(t)\widehat{C}_0(t)+(R(t)+\widetilde{D}^\top(t)P_0(t)\widetilde{D}(t))\Upsilon_0(t)]\\ =&\Upsilon^\top_0(t)(R(t)+D^\top(t)P_0(t)D(t)+\widetilde{D}^\top(t)P_0(t)\widetilde{D}(t))\Upsilon_0(t)\geq 0. \end{aligned} \end{equation} Using $\Delta_0(T)=0$ and Lemma 7.3 of \cite{Yong1999}, we get $P_0(t)\geq P_1(t)$ for all $t\in [0,T]$. For $i\ge1$, assuming that $P_{i-1}(t)\geq P_i(t)$, $t\in[0,T]$, it suffices to prove $P_{i}(t)\geq P_{i+1}(t)$, $t\in[0,T]$. By using \eqref{Pie}, we have that $\Delta_i(t)$ satisfies \begin{equation} \begin{aligned}\label{deltak} -\dot{\Delta}_i(t)=&\Delta_i(t)\widehat{A}_i(t)+\widehat{A}_i^\top(t)\Delta_i(t)+\widehat{C}_i^\top(t)\Delta_i(t)\widehat{C}_i(t) +P_i(t)(\widehat{A}_{i-1}(t)-\widehat{A}_i(t))\\ &+(\widehat{A}_{i-1}(t)-\widehat{A}_i(t))^\top P_i(t)+\widehat{C}_{i-1}^\top(t)P_i(t)\widehat{C}_{i-1}(t)\\ &-\widehat{C}_i^\top(t)P_i(t)\widehat{C}_i(t)+\widehat{Q}_{i-1}(t)-\widehat{Q}_i(t). 
\end{aligned} \end{equation} According to \eqref{iterative}, we have \begin{equation*} \begin{aligned} &\widehat{A}_{i-1}(t)-\widehat{A}_i(t)=-B(t)\Upsilon_i(t),~~\widehat{C}_{i-1}(t)-\widehat{C}_{i}(t)=-D(t)\Upsilon_i(t),\\ &\widehat{C}_{i-1}^\top(t)P_i(t)\widehat{C}_{i-1}(t)-\widehat{C}_i^\top(t)P_i(t)\widehat{C}_i(t)=\Upsilon_i^\top(t)D^\top(t)P_i(t)D(t)\Upsilon_i(t)\\ &-\widehat{C}_i^\top(t)P_i(t)D(t)\Upsilon_i(t)-\Upsilon_i^\top(t)D^\top(t)P_i(t)\widehat{C}_i(t),\\ &\widehat{Q}_{i-1}(t)-\widehat{Q}_i(t)=\Upsilon_i^\top(t)(R(t)+\widetilde{D}^\top(t)P_i(t)\widetilde{D}(t))\Upsilon_i(t)\\ &+\Psi_i^\top(t)(R(t)+\widetilde{D}^\top(t)P_i(t)\widetilde{D}(t))\Upsilon_i(t)+\Upsilon_i^\top(t)(R(t)+\widetilde{D}^\top(t)P_i(t)\widetilde{D}(t))\Psi_i(t)\\ &+\Psi_{i-1}^\top(t)\widetilde{D}^\top(t)(P_{i-1}(t)-P_i(t))\widetilde{D}(t)\Psi_{i-1}(t). \end{aligned} \end{equation*} From \eqref{deltak}, the above identities and $P_{i-1}(t)\geq P_i(t)$, we obtain \begin{equation} \begin{aligned}\label{di} &-[\dot{\Delta}_i(t)+\Delta_i(t)\widehat{A}_i(t)+\widehat{A}_i^\top(t)\Delta_i(t)+\widehat{C}_i^\top(t)\Delta_i(t)\widehat{C}_i(t)] =-P_i(t)B(t)\Upsilon_i(t)\\ &-\Upsilon_i^\top(t)B^\top(t)P_i(t)+\Upsilon_i^\top(t)D^\top(t)P_i(t)D(t)\Upsilon_i(t)-\widehat{C}_i^\top(t)P_i(t)D(t)\Upsilon_i(t)\\ &-\Upsilon_i^\top(t)D^\top(t)P_i(t)\widehat{C}_i(t)+\Upsilon_i^\top(t)(R(t)+\widetilde{D}^\top(t)P_i(t)\widetilde{D}(t))\Upsilon_i(t)\\ &+\Psi_i^\top(t)(R(t)+\widetilde{D}^\top(t)P_i(t)\widetilde{D}(t))\Upsilon_i(t)+\Upsilon_i^\top(t)(R(t)+\widetilde{D}^\top(t)P_i(t)\widetilde{D}(t))\Psi_i(t)\\ &+\Psi_{i-1}^\top(t)\widetilde{D}^\top(t)(P_{i-1}(t)-P_i(t))\widetilde{D}(t)\Psi_{i-1}(t)\\ =&\Upsilon_i^\top(t)(R(t)+D^\top(t)P_i(t)D(t)+\widetilde{D}^\top(t)P_i(t)\widetilde{D}(t))\Upsilon_i(t)\\ &+\Psi_{i-1}^\top(t)\widetilde{D}^\top(t)(P_{i-1}(t)-P_i(t))\widetilde{D}(t)\Psi_{i-1}(t)\geq 0. 
\end{aligned} \end{equation} Using $\Delta_i(T)=0$ and Lemma 7.3 of \cite{Yong1999} again, we have $P_{i}(t)\geq P_{i+1}(t)$, $t\in[0,T]$. Therefore, $\{P_i(\cdot)\}$ is a decreasing sequence in $C([0,T];\mathcal{S}_+^n)$, and thus has a limit, denoted by $P(\cdot)$. To show that $P(\cdot)$ solves \eqref{Pe} (and hence \eqref{P}), it remains to prove that $\{P_i(\cdot)\}$ is a Cauchy sequence in $C([0,T];\mathcal{S}_+^n)$ and $\{\dot{P}_i(\cdot)\}$ is a Cauchy sequence in $C([0,T];\mathcal{S}^n)$, which also ensures that $P(\cdot)\in C([0,T];\mathcal{S}_+^n)$ and $\dot{P}(\cdot)\in C([0,T];\mathcal{S}^n)$. In fact, setting $\widetilde{R}_{i}(t)=R(t)+D^{\top}(t)P_i(t)D(t)+\widetilde{D}^{\top}(t)P_i(t)\widetilde{D}(t)$, one can get \begin{equation} \begin{aligned}\label{Upsilon} &\Upsilon_i(t)=\Psi_{i-1}(t)-\Psi_i(t)\\ =&\widetilde{R}_{i-1}(t)^{-1}(B^\top(t)\Delta_{i-1}(t)+D^\top(t)\Delta_{i-1}(t)C(t))-\widetilde{R}_{i-1}(t)^{-1}(D^\top(t)\Delta_{i-1}(t)D(t)\\ &+\widetilde{D}^\top(t) \Delta_{i-1}(t)\widetilde{D}(t))\widetilde{R}_{i}(t)^{-1}(B^\top(t)P_{i}(t)+D^\top(t)P_{i}(t)C(t)). \end{aligned} \end{equation} Noting $\Delta_i(T)=0$ and integrating both sides of \eqref{di}, we get \begin{equation} \begin{aligned}\label{Delta} \Delta_i(t)=&\int_t^T [\Delta_i(s)\widehat{A}_i(s)+\widehat{A}_i^\top(s)\Delta_i(s)+\widehat{C}_i^\top(s)\Delta_i(s)\widehat{C}_i(s)\\ &+\Upsilon_i^\top(s)\widetilde{R}_{i}(s)\Upsilon_i(s)-\Psi_{i-1}^\top(s)\widetilde{D}^\top(s)\Delta_{i-1}(s)\widetilde{D}(s)\Psi_{i-1}(s)]ds. \end{aligned} \end{equation} Substituting \eqref{Upsilon} into \eqref{Delta}, and by noticing assumptions \textup{(H1)}-\textup{(H3)} and the uniform boundedness of $|P_i(\cdot)|$, $|\widetilde{R}_{i}(\cdot)|$, $|\widetilde{R}_{i}(\cdot)^{-1}|$, we obtain \begin{equation*} |\Delta_i(t)|\leq K\int_t^T [|\Delta_{i-1}(s)|+|\Delta_i(s)|]ds. \end{equation*} Using Gronwall's inequality, we get $|\Delta_i(t)|\leq K\int_t^T |\Delta_{i-1}(s)|ds$. 
By iteration, we deduce \begin{equation}\label{Deltaiestimate} |\Delta_i(t)|\leq \frac{K^{i}}{(i-1)!}(T-t)^{i-1}|v_1(0)|, \text { \quad where } v_1(0)=\int_0^T |\Delta_{0}(s)|ds. \end{equation} For any $m>i\ge1$, we have \begin{equation*} \begin{aligned} |P_i(t)-P_m(t)| \leq|\Delta_i(t)|+|\Delta_{i+1}(t)|+\ldots+|\Delta_{m-1}(t)|\leq \underset{j=i}{\overset{m-1}{\sum}} \frac{K^{j}}{(j-1)!}(T-t)^{j-1}|v_1(0)|, \end{aligned} \end{equation*} and thus $\underset{0\leq t \leq T}{\sup}|P_i(t)-P_m(t)|\leq \underset{j=i}{\overset{m-1}{\sum}} \frac{K^{j}}{(j-1)!}T^{j-1}|v_1(0)|$. Hence, $\{P_i(\cdot)\}$ is a Cauchy sequence in $C([0,T];\mathcal{S}_+^n)$. Moreover, from \eqref{Delta}, we have \begin{equation} \begin{aligned} -\dot{\Delta}_i(t)=& \Delta_i(t)\widehat{A}_i(t)+\widehat{A}_i^\top(t)\Delta_i(t)+\widehat{C}_i^\top(t)\Delta_i(t) \widehat{C}_i(t)\\ &+\Upsilon_i^\top(t)\widetilde{R}_{i}(t)\Upsilon_i(t)-\Psi_{i-1}^\top(t)\widetilde{D}^\top(t) \Delta_{i-1}(t)\widetilde{D}(t)\Psi_{i-1}(t), \end{aligned} \end{equation} and by using \eqref{Upsilon}, assumptions \textup{(H1)}-\textup{(H3)} and the uniform boundedness of $|P_i(\cdot)|$, $|\widetilde{R}_{i}(\cdot)|$, $|\widetilde{R}_{i}(\cdot)^{-1}|$, we have $|\dot{\Delta}_i(t)|\leq K(|\Delta_i(t)|+|\Delta_{i-1}(t)|). $ Then from \eqref{Deltaiestimate} and similar arguments as above, we obtain that $\{\dot{P}_i(\cdot)\}$ is a Cauchy sequence in $C([0,T];\mathcal{S}^n)$. The proof is complete. \hfill$\square$ \end{proof} Now, we are going to give the well-posedness of system \eqref{RCC}. As discussed at the beginning of this subsection, equation \eqref{Lambda} for $\Lambda(\cdot)$ is not a standard Riccati equation either, since the inequality $\delta P(\cdot)-Q(\cdot)\geq 0$ usually fails. Inspired by Yong \cite{Yong2013}, we transform the solvability of $\Lambda(\cdot)$ into the solvability of another Riccati equation, whose well-posedness can be guaranteed through some algebraic inequalities. 
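As a purely illustrative aside (not part of the proof), the monotone iteration \eqref{P0}-\eqref{Pie} used in the proof of Lemma \ref{Plemma} can be simulated in the scalar case. All coefficients below are hypothetical constants chosen only to satisfy the standing assumptions ($R\gg0$, $Q\ge0$, $G\ge0$); the sketch solves each linear backward equation by backward Euler and checks, numerically, that the iterates decrease and converge to a solution of the nonlinear equation \eqref{P}:

```python
# Scalar sketch of the iteration (P0)->(Pie); Dt plays the role of widetilde-D.
# All coefficient values are hypothetical, for illustration only.
A, B, C, D, Dt = 0.3, 0.5, 0.2, 0.4, 0.3
Q, R, G, T, N = 1.0, 1.0, 0.5, 1.0, 2000
h = T / N

def solve_linear_backward(a, c, q):
    """Backward Euler for dP/dt + 2*a(t)P + c(t)^2 P + q(t) = 0, P(T) = G."""
    P = [0.0] * (N + 1)
    P[N] = G
    for k in range(N - 1, -1, -1):
        P[k] = P[k + 1] + h * (2 * a[k + 1] * P[k + 1]
                               + c[k + 1] ** 2 * P[k + 1] + q[k + 1])
    return P

# P_0 solves the Lyapunov-type equation (P0), i.e. no quadratic term.
P_cur = solve_linear_backward([A] * (N + 1), [C] * (N + 1), [Q] * (N + 1))
iterates = [P_cur]
for _ in range(10):                      # a few Picard steps of (iterative)/(Pie)
    Psi = [(Pv * B + C * Pv * D) / (R + Pv * D ** 2 + Pv * Dt ** 2)
           for Pv in P_cur]
    Ahat = [A - B * ps for ps in Psi]
    Chat = [C - D * ps for ps in Psi]
    Qhat = [Q + ps ** 2 * (R + Pv * Dt ** 2) for ps, Pv in zip(Psi, P_cur)]
    P_cur = solve_linear_backward(Ahat, Chat, Qhat)
    iterates.append(P_cur)

# Direct backward Euler for the nonlinear Riccati equation (P) itself.
P_dir = [0.0] * (N + 1)
P_dir[N] = G
for k in range(N - 1, -1, -1):
    Pv = P_dir[k + 1]
    quad = (Pv * B + C * Pv * D) ** 2 / (R + Pv * D ** 2 + Pv * Dt ** 2)
    P_dir[k] = Pv + h * (2 * A * Pv + C ** 2 * Pv + Q - quad)

# Numerical counterparts of the two claims: monotone decrease and convergence.
monotone = all(
    all(hi >= lo - 1e-9 for hi, lo in zip(iterates[i], iterates[i + 1]))
    for i in range(len(iterates) - 1)
)
gap = max(abs(a - b) for a, b in zip(iterates[-1], P_dir))
print(monotone, gap)
```

At the discrete level the fixed point of the iteration coincides with the direct backward Euler solution of \eqref{P}, so `gap` shrinks factorially in the number of Picard steps, mirroring estimate \eqref{Deltaiestimate}.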
In fact, we have the following theorem, which is the main result of this subsection. \begin{theorem}\label{RCCtheorem} Let assumptions \textup{(H1)}-\textup{(H3)} hold. Then the Riccati type CC system \eqref{RCC} admits a unique solution $(P(\cdot),\Lambda(\cdot),\Phi(\cdot),l(\cdot))\in $ $C([0,T];\mathcal{S}_+^n\times \mathcal{S}^n\times \mathbb{R}^n\times\mathbb{R}^n)$. \end{theorem} \begin{proof} From Lemma \ref{Plemma}, we know that equation \eqref{P} admits a unique solution in $C([0,T];\mathcal{S}_+^n)$. Now let us study the well-posedness of $\Lambda(\cdot)$. Motivated by \cite{Yong2013}, we set $\Pi(\cdot):=P(\cdot)+\Lambda(\cdot)$; then, from system \eqref{RCC}, we know that $\Pi(\cdot)$ solves the following equation \begin{equation} \left\{ \begin{aligned}\label{Pi} &\dot{\Pi}(t)+\Pi(t)(A(t)-B(t)\widetilde{R}(t)^{-1}D^{\top}(t)P(t)C(t))+(A(t)\\ &\qquad-B(t)\widetilde{R}(t)^{-1}D^{\top}(t)P(t)C(t))^{\top}\Pi(t)+\delta\Pi(t)+C^{\top}(t)(P(t)\\ &\qquad-P(t)D(t)\widetilde{R}(t)^{-1}D^{\top}(t)P(t))C(t)-\Pi(t)B(t)\widetilde{R}(t)^{-1}B^{\top}(t)\Pi(t)=0,\\ &\Pi(T)=0. \end{aligned} \right. 
\end{equation} By recalling that $\widetilde{R}(t)=R(t)+D^{\top}(t)P(t)D(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{D}(t)$, one can get \begin{equation*} \begin{aligned} &P(t)-P(t)D(t)\widetilde{R}(t)^{-1}D^{\top}(t)P(t)\\ =&P(t)-P(t)D(t)(R(t)+D^{\top}(t)P(t)D(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{D}(t))^{-1}D^{\top}(t)P(t)\\ =&P(t)^{\frac{1}{2}}[I-P(t)^{\frac{1}{2}}D(t)R(t)^{-\frac{1}{2}}(I+R(t)^{-\frac{1}{2}}D^{\top}(t)P(t)^{\frac{1}{2}}P(t)^{\frac{1}{2}}D(t)R(t)^{-\frac{1}{2}}\\ &+R(t)^{-\frac{1}{2}}\widetilde{D}^{\top}(t)P(t)^{\frac{1}{2}}P(t)^{\frac{1}{2}}\widetilde{D}(t)R(t)^{-\frac{1}{2}})^{-1}R(t)^{-\frac{1}{2}}D^{\top}(t)P(t)^{\frac{1}{2}}]P(t)^{\frac{1}{2}}\\ =&P(t)^{\frac{1}{2}}[I-\Sigma(t)(I+\Sigma^{\top}(t)\Sigma(t)+\widetilde{\Sigma}^{\top}(t)\widetilde{\Sigma}(t))^{-1}\Sigma^{\top}(t)]P(t)^{\frac{1}{2}}, \end{aligned} \end{equation*} where $ \Sigma(t)=P(t)^{\frac{1}{2}}D(t)R(t)^{-\frac{1}{2}}$ and $\widetilde{\Sigma}(t)=P(t)^{\frac{1}{2}}\widetilde{D}(t)R(t)^{-\frac{1}{2}}.$ From Lemma \ref{alegbra1}, we have \[ (I+\Sigma^{\top}(t)\Sigma(t)+\widetilde{\Sigma}^{\top}(t)\widetilde{\Sigma}(t))^{-1}\leq (I+\Sigma^{\top}(t)\Sigma(t))^{-1}, \] and further \begin{equation} \begin{aligned}\label{inequality} I-\Sigma(t)(I+\Sigma^{\top}(t)\Sigma(t)+\widetilde{\Sigma}^{\top}(t)\widetilde{\Sigma}(t))^{-1} \Sigma^{\top}(t) \geq I-\Sigma(t)(I+\Sigma^{\top}(t)\Sigma(t))^{-1}\Sigma^{\top}(t). \end{aligned} \end{equation} Noting that $I-\Sigma(t)(I+\Sigma^{\top}(t)\Sigma(t))^{-1}\Sigma^{\top}(t)=(I+\Sigma(t)\Sigma^{\top}(t))^{-1}$, we have \begin{equation*} \begin{aligned} &P(t)-P(t)D(t)\widetilde{R}(t)^{-1}D^{\top}(t)P(t)\geq P(t)^{\frac{1}{2}}(I+\Sigma(t)\Sigma^{\top}(t))^{-1}P(t)^{\frac{1}{2}}\\ =&P(t)^{\frac{1}{2}}(I+P(t)^{\frac{1}{2}}D(t)R(t)^{-1}D^{\top}(t)P(t)^{\frac{1}{2}})^{-1}P(t)^{\frac{1}{2}}\geq 0. 
\end{aligned} \end{equation*} Therefore, the following inequality holds \begin{equation}\label{positive} C^{\top}(t)(P(t)-P(t)D(t)\widetilde{R}(t)^{-1}D^{\top}(t)P(t))C(t)\geq 0 \end{equation} and, noticing that $R(t)+D^{\top}(t)P(t)D(t)+\widetilde{D}^{\top}(t)P(t)\widetilde{D}(t)>0$, we obtain that equation \eqref{Pi} admits a unique solution $\Pi(\cdot)\in C([0,T];\mathcal{S}_+^n)$. By recalling $\Pi(\cdot)=P(\cdot)+\Lambda(\cdot)$ and Lemma \ref{Plemma}, we get the well-posedness of equation \eqref{Lambda}. Once $P(\cdot)$ and $\Lambda(\cdot)$ are uniquely solved, the well-posedness of $\Phi(\cdot)$ follows, since the equation for $\Phi(\cdot)$ is just an ODE. The well-posedness of $l(\cdot)$ then follows similarly. \hfill$\square$ \end{proof} \begin{remark}\label{Riccatiemph} In this subsection, the well-posedness of a new Riccati type CC system \eqref{RCC} is given, which generalizes the results of \cite{Huang2016,Yong2013}. To overcome the difficulties arising from our general partial information structure, we introduce a modified iterative method and an equivalent transformation method motivated by \cite{Yong1999,Yong2013}. We mention that our Riccati equations are quite different from the ones in \cite{Huang2016,Yong2013} due to the additional term $\widetilde{D}^{\top}(\cdot)P(\cdot)\widetilde{D}(\cdot)$. However, it is interesting that some algebraic inequalities, combined with the modified iterative method and the equivalent transformation method, yield the well-posedness of our new Riccati type CC system. \end{remark} \section{Applications}\label{sec:app} In this section, we apply our theoretical results to solve Example \ref{example}. Let $\mathbb{R}_+$ be the set of all positive real numbers. For any real number $x$, we denote $x^+=\max\{x,0 \}$. Let the admissible control set be $\mathcal{U}_{ad}^{c}=\{u_{i}(\cdot )~|~u_{i}(\cdot )\in L_{\mathcal{G}_{t}}^{2}(0,T;\Gamma)\}$, where $\Gamma\subseteq\mathbb{R}$. 
Then the general inter-bank borrowing and lending problem can be formulated as follows. \textbf{Problem (IBL)} For $1\leq i \leq N$, find a strategy profile $\bar{u}(\cdot)=(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$, where $\bar{u}_i(\cdot)\in\mathcal{U}_{ad}^{c}$, such that $ \mathcal{J}_i(\bar{u}_i(\cdot),\bar{u}_{-i}(\cdot))=\underset{% u_{i}\left( \cdot \right) \in \mathcal{U}_{ad}^{c}}{\inf }\mathcal{J}_i(u_i(\cdot),\bar{u}_{-i}(\cdot))$, subject to \eqref{exstate} and \eqref{excost}. \begin{remark} Note that the cost functional in \cite{Carmona2015} contains a cross term between $u_i(\cdot)$ and $(x_i(\cdot)-x^{(N)}(\cdot))$. We claim that the following conclusions can be extended similarly to such a case with no essential difficulty. \end{remark} When $\Gamma=\mathbb{R}_+$, the decentralized strategies are given by \begin{equation}\label{exampleu} \bar{u}_{i}(t)=\Big\{\frac{B\mathbb{E}[\bar{p}% _{i}(t)|\mathcal{G}_{t}^{i}]+D\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G% }_{t}^{i}]+\widetilde{D}\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|% \mathcal{G}_{t}^{i}]}{r}\Big\}^+, \end{equation}% where $(\bar{z}_{i}(\cdot),\bar{p}_{i}(\cdot),\bar{k}_{i}(\cdot),\bar{\widetilde{k}}_{i}(\cdot))$ solves the following MF-FBSDE \begin{equation} \left\{ \begin{aligned}\label{exaHCC} d\bar{z}_{i}(t)=&\{(A-a)\bar{z}_{i}(t)+B [r^{-1}(B\mathbb{E}[\bar{p}_{i}(t)|\mathcal{G}% _{t}^{i}]+D\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}_{t}^{i}] \\ &+\widetilde{D}\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}% _{t}^{i}])]^++a\mathbb{E}[\bar{z}_{i}(t)]+b\}dt\\ &+\{C\bar{z}_{i}(t) +D[r^{-1}(B\mathbb{E}[\bar{p}_{i}(t)|% \mathcal{G}_{t}^{i}]+D\mathbb{E}[\bar{k}_{i}(t)|\mathcal{G}% _{t}^{i}]\\ &+\widetilde{D}\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}% _{t}^{i}])]^++\sigma \}dW_{i}(t)\\ &+\{\widetilde{D}[r^{-1}(B\mathbb{E}[\bar{p% }_{i}(t)|\mathcal{G}_{t}^{i}]+D\mathbb{E}[\bar{k}_{i} (t)|% \mathcal{G}_{t}^{i}]\\ &+\widetilde{D}\mathbb{E}[\bar{\widetilde{k}}_{i}(t)|\mathcal{G}% 
_{t}^{i}])]^++\widetilde{\sigma }\}d\widetilde{W}% _{i}(t), \\ d\bar{p}_{i}(t)=&-[(A-a)\bar{p}_{i}(t)+C\bar{k}_{i} \left( t\right) -\epsilon(\bar{z}_{i}\left( t\right) -\mathbb{E}[\bar{z}_{i}(t)])]dt\\ &+\bar{k}_{i}\left( t\right) dW_{i}(t)+\bar{\widetilde{k}}_{i} \left( t\right) d\widetilde{W}_{i}(t), \\ \bar{z}_{i}(0)=&x,~\bar{p}_{i}(T)=-c(\bar{z}_{i}\left( T\right) -\mathbb{E}[\bar{z}_{i}(T)]). \end{aligned} \right. \end{equation}% From Theorems \ref{Hwellposedness} and \ref{generalnash}, the following results hold. \begin{theorem} Assume that $2A<a-3C^2$. Then there exists a constant $\theta_1>0$ independent of $T$, which may depend on $A$, $a$, $C$, $\epsilon$ and $c$, such that whenever $B$, $D$, $\widetilde{D}$ and $r^{-1}\in[0,\theta_1)$, there exists a unique adapted solution $(\bar{z}_i(\cdot),\bar{p}_i(\cdot),\bar{k}_i(\cdot),\bar{\widetilde{k}}_i(\cdot))\in L_{\mathcal{F }_{t}^{W,\widetilde{W}}}^{2}(0,T;\mathbb{R}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R})$ to MF-FBSDE \eqref{exaHCC}. Moreover, the strategy profile $\bar{u}(\cdot)=(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$, where $\bar{u}_i(\cdot)$ is given by \eqref{exampleu}, is an $\varepsilon$-Nash equilibrium of Problem (IBL). \end{theorem} If $\Gamma=\mathbb{R}$, we have $\mathcal{U}_{ad}^{c}=\{u_{i}(\cdot )~|~u_{i}(\cdot )\in L_{\mathcal{G}_{t}}^{2}(0,T;\mathbb{R})\}$.
By applying Theorem \ref{specialu}, we can represent the decentralized strategies as the following feedback of filtered state \begin{equation}\label{exafeed} \bar{u}_i(t)=-\frac{(P(t)B+CP(t)D)\hat{\bar{z}}_i(t)+B\Lambda(t) l(t)+B\Phi(t)+DP(t)\sigma+DP(t)\widetilde{\sigma}}{r+D^2P(t)+\widetilde{D}^2P(t)}, \end{equation} where $P(\cdot),\Lambda(\cdot),\Phi(\cdot),l(\cdot)$ solve, respectively, the following equations \begin{equation} \label{exaP} \dot{P}(t)+2(A-a)P(t)+C^2P(t)+\epsilon-\frac{(P(t)B+CP(t)D)^2}{r+D^2P(t)+\widetilde{D}^2P(t)} =0,\qquad P(T)=c, \end{equation} and \begin{equation}\label{exaL} \begin{aligned} &\dot{\Lambda}(t)+2\left(A-a-\frac{B(P(t)B+CP(t)D)}{r+D^2P(t)+\widetilde{D}^2P(t)}\right)\Lambda(t)\\ &\qquad+(P(t)+\Lambda(t))a -\frac{\Lambda(t)^2B^2}{r+D^2P(t)+\widetilde{D}^2P(t)}-\epsilon=0,\qquad\Lambda(T)=-c, \end{aligned} \end{equation} and \begin{equation}\label{exaPhi} \begin{aligned} &\dot{\Phi}(t)+\left(A-a-\frac{B(P(t)B+CP(t)D)}{r+D^2P(t)+\widetilde{D}^2P(t)}-\frac{\Lambda(t)B^2}{r+D^2P(t)+\widetilde{D}^2P(t)}\right)\Phi(t)\\ &+\Bigg(C-\frac{D(P(t)B+CP(t)D)}{r+D^2P(t)+\widetilde{D}^2P(t)}-\frac{\Lambda(t)BD}{r+D^2P(t)+\widetilde{D}^2P(t)}\Bigg)P(t)\sigma\\ &-\Bigg(\frac{\widetilde{D}(P(t)B+CP(t)D)}{r+D^2P(t)+\widetilde{D}^2P(t)}+\frac{\Lambda(t)B\widetilde{D}}{r+D^2P(t)+\widetilde{D}^2P(t)}\Bigg)P(t)\widetilde{\sigma}\\ &+(P(t)+\Lambda(t))b=0, \qquad \Phi(T)=0, \end{aligned} \end{equation} and \begin{equation} \begin{aligned}\label{exal} dl(t)=&\Bigg\{\left[A-\frac{B(P(t)B+CP(t)D+B\Lambda(t))}{r+D^2P(t)+\widetilde{D}^2P(t)}\right]l(t)+b\\ &-\frac{B(B\Phi(t)+DP(t)\sigma+\widetilde{D}P(t)\widetilde{\sigma})}{r+D^2P(t)+\widetilde{D}^2P(t)} \Bigg\}dt,\qquad\qquad\qquad l(0)=x. 
\end{aligned} \end{equation} The optimal filtering $\hat{\bar{z}}_i(\cdot)$ is determined by \begin{equation} \left\{ \begin{aligned}\label{exaz} d\hat{\bar{z}}_i(t)=&~\Bigg\{\left(A-a-\frac{B(P(t)B+CP(t)D)}{r+D^2P(t)+\widetilde{D}^2P(t)}\right)\hat{\bar{z}}_i(t)-\frac{B^2\Lambda(t)}{r+D^2P(t)+\widetilde{D}^2P(t)}l(t)\\ &-\frac{B(B\Phi(t)+DP(t)\sigma+\widetilde{D}P(t)\widetilde{\sigma})}{r+D^2P(t)+\widetilde{D}^2P(t)}+b+al(t)\Bigg\}dt\\ &+\Bigg\{\left(C-\frac{D(P(t)B+CP(t)D)}{r+D^2P(t)+\widetilde{D}^2P(t)}\right)\hat{\bar{z}}_i(t)-\frac{DB\Lambda(t)}{r+D^2P(t)+\widetilde{D}^2P(t)}l(t)\\ &-\frac{D(B\Phi(t)+DP(t)\sigma+\widetilde{D}P(t)\widetilde{\sigma})}{r+D^2P(t)+\widetilde{D}^2P(t)}+\sigma\Bigg\}dW_i(t),\\ \hat{\bar{z}}_i(0)=&~x. \end{aligned} \right. \end{equation} Moreover, we have \begin{theorem} The strategy profile $\bar{u}(\cdot)=(\bar{u}_1(\cdot),\ldots,\bar{u}_N(\cdot))$, where $\bar{u}_i(\cdot)$ is given by \eqref{exafeed} and $(P(\cdot),\Lambda(\cdot),\Phi(\cdot),l(\cdot),\hat{\bar{z}}_i(\cdot))$ solves systems \eqref{exaP}-\eqref{exaz}, is an $\varepsilon$-Nash equilibrium of Problem (IBL) with $\Gamma=\mathbb{R}$. \end{theorem} Finally, for comparing our results with the results of \cite{Carmona2015}, and also for further clarifications about the financial implication, we consider some special cases and present corresponding numerical results. To do this, let $C=D=\widetilde{D}=0$, then we can derive a similar result as \cite{Carmona2015}. Indeed, in this setting, we have $P(t)=-\Lambda(t)$, $\Phi(t)\equiv0$. Moreover, equation \eqref{exaP} has the following explicit solution \begin{equation*}\label{Psolution} P(t)=\frac{r}{B^2}\frac{A-a+\theta+[-(A-a)+\theta]\frac{B^2c/r-(A-a+\theta)}{B^2c/r-(A-a)+\theta}e^{2\theta (t-T)}}{1-\frac{B^2c/r-(A-a+\theta)}{B^2c/r-(A-a)+\theta}e^{2\theta (t-T)}}, \end{equation*} where $\theta=\sqrt{(A-a)^2+\frac{B^2\epsilon}{r}}$. 
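As a quick sanity check on the explicit formula for $P(t)$ (our own addition, with illustrative parameter values that do not come from the paper), the following sketch verifies the terminal condition $P(T)=c$ and the reduced Riccati equation $\dot{P}(t)+2(A-a)P(t)+\epsilon-B^2P(t)^2/r=0$ (equation \eqref{exaP} with $C=D=\widetilde{D}=0$) by central finite differences:

```python
import math

# Illustrative parameters (our own choice); C = D = Dtilde = 0 as in the text.
A, a, B, r, eps, c, T = 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0

theta = math.sqrt((A - a) ** 2 + B ** 2 * eps / r)
q = (B ** 2 * c / r - (A - a + theta)) / (B ** 2 * c / r - (A - a) + theta)

def P(t):
    """Closed-form solution of (exaP) in the special case C = D = Dtilde = 0."""
    e = q * math.exp(2 * theta * (t - T))
    return (r / B ** 2) * (A - a + theta + (-(A - a) + theta) * e) / (1 - e)

# Terminal condition P(T) = c
assert abs(P(T) - c) < 1e-10

# ODE residual dP/dt + 2(A - a)P + eps - B^2 P^2 / r = 0 via central differences
h = 1e-6
for t in (0.1, 0.5, 0.9):
    dP = (P(t + h) - P(t - h)) / (2 * h)
    assert abs(dP + 2 * (A - a) * P(t) + eps - B ** 2 * P(t) ** 2 / r) < 1e-4

print("P(0) =", P(0.0))
```

Both checks pass, confirming that the displayed expression indeed solves the scalar Riccati equation with the stated terminal value.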
At this moment, the decentralized strategies of Problem (IBL) read as \begin{equation*} \bar{u}_i(t)=\frac{1}{B}\frac{A-a+\theta+[-(A-a)+\theta]\frac{B^2c/r-(A-a+\theta)}{B^2c/r-(A-a)+\theta}e^{2\theta (t-T)}}{1-\frac{B^2c/r-(A-a+\theta)}{B^2c/r-(A-a)+\theta}e^{2\theta (t-T)}}(\hat{\bar{z}}_i(t)-l(t)), \end{equation*} where $l(t)=xe^{At}+\frac{b}{A}(e^{At}-1)$ and \begin{equation*} \hat{\bar{z}}_i(t)=\Theta(t)x+\Theta(t)\int_0^t\frac{(a+\frac{B^2P(s)}{r})l(s)+b }{\Theta(s)}ds+\Theta(t)\int_0^t\frac{\sigma}{\Theta(s)} dW_i(s), \end{equation*} with $ \Theta(t)=e^{\int_0^t(A-a-\frac{B^2P(s)}{r})ds}$. Moreover, if $A=0$ and $r=B=1$, our results coincide with the results of \cite{Carmona2015} (see (5.8) in \cite{Carmona2015}). In most instances, however, it is intractable to find an explicit solution to Problem (IBL). To better illustrate our results, we give some numerical simulations below. Suppose that $N=20$, $T=1$. Set the initial data as $x=1$, $A=3.2$, $a=1.5$, $B=2.8$, $b=2$, $C=0.6$, $\sigma=0.8$, $D=0$, $\widetilde{D}=2$, $\tilde{\sigma}=0.3$, $\epsilon=3.3$, $r=2.5$, $c=5$. Let $\Pi=P+\Lambda$; then Fig \ref{fig1} gives the numerical solutions of $P$, $\Lambda$, $\Pi$, $\Phi$ and $l$. In our numerical simulations, we will illustrate the influence of the partial information structure. For the case $D\neq0$, similar numerical simulations can be carried out, and we omit them here. \begin{figure}[H] \centering \includegraphics[width=5.4in,height=2.4in]{P.eps} \caption{The numerical solutions of $P$, $\Lambda$, $\Pi$, $\Phi$ and $l$.} \label{fig1} \end{figure} Fig \ref{fig2} and Fig \ref{fig3} show the optimal filtering of decentralized states and the decentralized control strategies of $20$ banks, which characterize the corresponding dynamics of the log-monetary reserves and the control rates of borrowing from or lending to a central bank.
Fig \ref{fig6} and Fig \ref{fig7} show the optimal filtering of decentralized states and decentralized control strategies of $20$ banks when $\widetilde{D}$ becomes larger. By comparing Fig \ref{fig2} and Fig \ref{fig6} (resp. Fig \ref{fig3} and Fig \ref{fig7}), we find that the optimal filtering of the decentralized states goes up (resp. the decentralized strategies go down) when $\widetilde{D}$ is larger. In fact, a larger $\widetilde{D}$ causes the unknown information (e.g., the situation of the central bank) to fluctuate more strongly. Consequently, each bank communicates more frequently with the others to diversify the unknown risk, and reduces its cash flow with the central bank. \begin{figure}[H] \centering \begin{minipage}[c]{0.48\textwidth} \centering \includegraphics[width=\hsize]{state.eps} \end{minipage} \hspace{0.01\textwidth} \begin{minipage}[c]{0.48\textwidth} \centering \includegraphics[width=\hsize]{control.eps} \end{minipage}\\[3mm] \begin{minipage}[t]{0.48\textwidth} \centering \caption{Optimal filtering of decentralized states when $C=0.6,\widetilde{D}=2$.} \label{fig2} \end{minipage} \hspace{0.01\textwidth} \begin{minipage}[t]{0.48\textwidth} \centering \caption{Decentralized control strategies when $C=0.6,\widetilde{D}=2$.} \label{fig3} \end{minipage} \end{figure} \begin{figure}[H] \centering \begin{minipage}[c]{0.48\textwidth} \centering \includegraphics[width=\hsize]{state2.eps} \end{minipage} \hspace{0.01\textwidth} \begin{minipage}[c]{0.48\textwidth} \centering \includegraphics[width=\hsize]{control2.eps} \end{minipage}\\[3mm] \begin{minipage}[t]{0.48\textwidth} \centering \caption{Optimal filtering of decentralized states when $C=0.6,\widetilde{D}=6$.} \label{fig6} \end{minipage} \hspace{0.01\textwidth} \begin{minipage}[t]{0.48\textwidth} \centering \caption{Decentralized control strategies when $C=0.6,\widetilde{D}=6$.} \label{fig7} \end{minipage} \end{figure} \section{Conclusions}\label{sec:con} In this paper, a general
stochastic large-population problem with partial information has been considered, where the diffusion of the dynamics of each agent can depend on both the state and the control. In the control constrained case, by using the Hamiltonian approach, we have obtained the decentralized strategies through a mixed nonlinear MF-FBSDE with a projection operator, whose well-posedness has also been studied. Moreover, the corresponding $\varepsilon$-Nash equilibrium property has been verified. In the control unconstrained case, by using the Riccati approach, the decentralized strategies can be further represented as the feedback of the filtered state through a new Riccati type CC system, which is quite different from the classical ones due to the additional term generated by the partial information structure. We have used some algebraic inequalities as well as a modified iterative method and an equivalent transform method to obtain the well-posedness of our Riccati type CC system. As an application, a general inter-bank borrowing and lending problem has been studied. \newpage
Q: "Find Selected Text in Workspace..." disabled in Xcode 4 context menu This has been bugging me for a while. In Xcode 4, sometimes this menu item is enabled, sometimes it is disabled. I cannot figure out why it is ever disabled, and there seems to be nothing at all on Google about this. A: I have this same problem. If I click on the "Show assistant editor" button (the middle button in the list of Editor buttons located in the upper-right hand corner) and then back again to "Standard Editor" (the left-most button in the list of Editor buttons) then the "Find selected text in workspace..." function is enabled. I have to do this often, but only in the projects I created before Xcode 4. So I think some setting in the project was not created properly when Xcode 4 converted it over. A: I have found that if you just right-click on the word, without selecting it first, then the menu selection will be enabled. This seems to be more prevalent in Xcode 4.4.1. I have also noticed that selecting other words will "trigger" the menu to become enabled as well. Hope this helps.
Single-story, coveted Marinwood cul-de-sac street on .5+ acre lot, backing to open space. Expanded, remodeled 3BR/2BA, laundry room, office/homework area, AC, Fantastic landscaped backyard w/gazebo & Turf flat yard for warm summer evenings, spa, great enclosed front yard. 1-car garage. Surrounded by nature and quiet peaceful hills, only 1 mile to highway 101. Coveted, award-winning Dixie School District. Don't miss this wonderful Marinwood house!
\section{Introduction} Compressive sensing \cite{do06-2,carota06,FoucartRauhut} is a recent field that has seen enormous research activity in recent years. It predicts that certain signals (vectors) can be recovered from what was previously believed to be incomplete information using efficient reconstruction methods. Applications of this principle range from magnetic resonance imaging over radar and remote sensing to astronomical signal processing and more. The key assumption of the theory is that the signal to be recovered is sparse or can at least be well-approximated by a sparse one. Most research activity so far has been dedicated to the synthesis sparsity model where one assumes that the signal can be written as a linear combination of only a small number of elements from a basis, or more generally an overcomplete frame. In certain situations, however, it turns out to be more efficient to work with an analysis-based sparsity model. Here, one rather assumes that the application of a linear map yields a vector with a large number of zero entries. While the synthesis and the analysis model are equivalent in special cases, they are very distinct in an overcomplete case. By now, comparatively few investigations have been dedicated to the analysis sparsity model and its rigorous understanding is still in its infancy. The analysis based sparsity model and corresponding reconstruction methods were introduced systematically in recent work of Nam et al.~\cite{NamDaviesEladGribonval}. Nevertheless we note that it appeared also in earlier works, see e.g.~\cite{CandesEldarNeedellRandall}. In particular, the popular method of total variation minimization \cite{ChanShen,NeedellWard} in image processing is closely related to analysis based sparsity with respect to a difference operator. An estimate of the number of Gaussian measurements for successful recovery via total variation minimization has been recently obtained in \cite{KabanavaRauhutZhang}.
In this paper we consider the analysis based sparsity model for the important case that the analysis transform is given by inner products with respect to a possibly redundant frame. As reconstruction method we study a corresponding analysis $\ell_1$-minimization approach. Furthermore, we assume that the linear measurements are obtained via an application of a Gaussian random matrix. The main results of this paper provide precise estimates of the number of measurements required for the reconstruction of a signal whose analysis representation has a given number of zero elements. Moreover, stability estimates are given. An alternative bound on the number of measurements can be found in \cite{KabanavaRauhutZhang}. \subsection{Problem statement and main results} We consider the task of reconstructing a signal $\mathbf{x}\in\mathbb{R}^d$ from incomplete and possibly corrupted measurements given by \begin{equation}\label{eqNoisyMeasurements} \mathbf{y}=\mathbf{M}\mathbf{x}+\mathbf{w}, \end{equation} where $\mathbf{M}\in\mathbb{R}^{m\times d}$ with $m\ll d$ is the measurement matrix and $\mathbf{w}$ corresponds to noise. Since this system is underdetermined it is impossible to recover $\mathbf{x}$ from $\mathbf{y}$ without additional information, even when $\mathbf{w} = \mathbf{0}$. As already mentioned, the underlying assumption in compressive sensing is sparsity. The \emph{synthesis sparsity prior} assumes that $\mathbf{x}$ can be represented as a linear combination of a small number of elements of a dictionary $\mathbf{D}\in\mathbb{R}^{d\times n}$, i.e., \[ \mathbf{x}=\mathbf{D\alpha},\;\;\mathbf{\alpha}\in\mathbb{R}^n, \] where the number of non-zero elements of $\mathbf{\alpha}$, denoted by $\norm{\alpha}_0$, is considerably less than $n$. Often $\mathbf{D}$ is chosen as a unitary matrix, which refers to sparsity of $\mathbf{x}$ in an orthonormal basis. 
Unfortunately, the approach to recover $\alpha$, or $\mathbf{x}$ respectively, from $\mathbf{y} = \mathbf{M} \mathbf{x} = \mathbf{M D \alpha}$ (assuming the noiseless case for simplicity) via $\ell_0$-minimization, i.e., \[ \underset{\alpha\in\mathbb{R}^n}\min\; \|\mathbf{\alpha}\|_0 \quad \mbox{ subject to } \mathbf{M D \alpha} = \mathbf{y}, \] is NP-hard in general. A by-now well-studied tractable alternative is the $\ell_1$-minimization approach of finding the minimizer $\alpha^*$ of \begin{equation}\label{eqBP} \underset{\mathbf{\alpha}\in\mathbb{R}^n}\min \; \|\mathbf{\alpha}\|_1 \quad \mbox{ subject to } \mathbf{M D \alpha} = \mathbf{y} \end{equation} The restored signal is then given by $\mathbf{x}^* = \mathbf{D\alpha^*}$. This optimization problem is referred to as basis pursuit \cite{ChenDonohoSaunders}. In the noisy case, one passes to \begin{equation}\label{eqBPDN} \underset{\mathbf{\alpha}\in\mathbb{R}^n}\min \; \|\mathbf{\alpha}\|_1 \quad \mbox{ subject to } \| \mathbf{M D \alpha} - \mathbf{y} \|_2 \leq \eta, \end{equation} where $\eta$ corresponds to an estimate of the noise level. The \emph{analysis sparsity prior} assumes that $\mathbf{x}$ is sparse in some transform domain, that is, given a matrix $\mathbf{\Omega}\in\mathbb{R}^{p\times d}$ -- the so-called {\em analysis operator} -- the vector $\mathbf{\mathbf{\Omega x}}$ is sparse. For instance, such operators can be generated by the discrete Fourier transform, the finite difference operator (related to total variation), wavelet \cite{Mallat,RonShen,SelesnickFigueiredo}, curvelet \cite{CandesDonoho} or Gabor transforms \cite{gr01}. 
Analogously to \eqref{eqBP}, a possible strategy for the reconstruction of analysis-sparse vectors (or cosparse vectors, see below) is to solve the analysis $\ell_1$-minimization problem \begin{equation}\label{eqProblemP1} \underset{\mathbf{z}\in\mathbb{R}^d}\min\;\norm{\mathbf{\Omega z}}_1\;\;\mbox{subject to}\;\;\; \mathbf{M}\mathbf{z}=\mathbf{y}, \end{equation} or, in the noisy case, \begin{equation}\label{eqProblemP1Noise} \underset{\mathbf{z}\in\mathbb{R}^d}\min\;\norm{\mathbf{\Omega z}}_1\;\;\mbox{subject to}\;\;\norm{\mathbf{M}\mathbf{z}-\mathbf{y}}_2\leq\eta. \end{equation} Both optimization problems can be solved efficiently using convex optimization techniques, see e.g.~\cite{BoydVandenberghe}. If $\mathbf\Omega$ is an invertible matrix, then these analysis $\ell_1$-minimization problems are equivalent to \eqref{eqBP} and \eqref{eqBPDN}. However, in general the analysis $\ell_1$-minimization problems cannot be reduced to the standard $\ell_1$-minimization problems. We note that one may also pursue greedy or other iterative approaches for recovery, see e.g.~\cite{FoucartRauhut} for an overview in the standard synthesis sparsity case and see e.g.~\cite{NamDaviesEladGribonval} for the analysis sparsity case. However, we will concentrate on the above optimization approaches here. In the remainder of this paper, we assume that the analysis operator is given by a frame. Put formally, let $\{\mathbf{\mathbf{\mathbf{\omega}}}_i\}_{i=1}^p$, $\mathbf{\omega}_i\in\mathbb{R}^d$, $p\geq d$, be a frame, i.e., there exist positive constants $A$, $B>0$ such that for all $\mathbf{x}\in\mathbb{R}^d$ \[ A\norm{\mathbf{x}}_2^2\leq\sum\limits_{i=1}^p\abs{\abrac{\mathbf{\omega}_i,\mathbf{x}}}^2\leq B\norm{\mathbf{x}}_2^2. \] Its elements are collected as rows of the matrix $\mathbf{\Omega}\in\mathbb{R}^{p\times d}$.
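For the reader's convenience we make the equivalence in the invertible case explicit (a routine change of variables, not spelled out in the original): with $\mathbf{\alpha}=\mathbf{\Omega z}$, i.e., $\mathbf{z}=\mathbf{\Omega}^{-1}\mathbf{\alpha}$,

```latex
\begin{equation*}
\underset{\mathbf{z}\in\mathbb{R}^d}\min\;\norm{\mathbf{\Omega z}}_1
\;\;\mbox{subject to}\;\;\mathbf{M}\mathbf{z}=\mathbf{y}
\qquad\Longleftrightarrow\qquad
\underset{\mathbf{\alpha}\in\mathbb{R}^d}\min\;\norm{\mathbf{\alpha}}_1
\;\;\mbox{subject to}\;\;\mathbf{M}\mathbf{\Omega}^{-1}\mathbf{\alpha}=\mathbf{y},
\end{equation*}
```

which is \eqref{eqBP} with dictionary $\mathbf{D}=\mathbf{\Omega}^{-1}$; the same substitution turns \eqref{eqProblemP1Noise} into \eqref{eqBPDN}.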
The analysis representation of a signal $\mathbf{x}$ is given by the vector $\mathbf{\mathbf{\Omega x}}=\fbrac{\abrac{\mathbf{\omega}_i,\mathbf{x}}}_{i=1}^p\in\mathbb{R}^p$. (We note that in the literature it is often common to collect the elements of a frame rather as columns of a matrix. However, for our purposes it is more convenient to collect them as rows.) The frame is called tight if the frame bounds coincide, i.e., $A = B$. Cosparsity is now defined as follows. \begin{definition}\label{defCosparsity} Given an analysis operator $\mathbf{\Omega}\in\mathbb{R}^{p\times d}$, the cosparsity of $\mathbf{x}\in\mathbb{R}^d$ is defined as \[ l:= p - \norm{\mathbf{\Omega x}}_0. \] The index set of the zero entries of $\mathbf{\Omega x}$ is called the cosupport of $\mathbf{x}$. If $\mathbf{x}$ is $l$-cosparse, then $\mathbf{\Omega x}$ is $s$-sparse with $s = p - l$. \end{definition} The motivation to work with the cosupport rather than the support in the context of analysis sparsity is that in contrast to synthesis sparsity, it is the location of the {\it zero}-elements which defines a corresponding subspace. In fact, if $\Lambda$ is the cosupport of $\mathbf{x}$, then it follows from Definition~\ref{defCosparsity} that \[ \abrac{\mathbf{\omega}_j,\mathbf{x}}=0,\quad \mbox{ for all } j\in\Lambda. \] Hence, the set of $l$-cosparse signals can be written as \[ \bigcup_{\Lambda \subset [p]: \# \Lambda = l} W_\Lambda, \] where $W_\Lambda$ denotes the orthogonal complement of the linear span of $\{\mathbf{\omega}_j : j \in \Lambda\}$. In contrast to standard sparsity, there are often certain restrictions on the values that the cosparsity can take. In fact, in the generic case that the frame elements $\mathbf{\omega}_j$ are in general position in $\mathbb{R}^d$, any $d$ rows of $\mathbf{\Omega}$ are linearly independent.
Then the maximal number of zeros that can be achieved for a nontrivial vector $\mathbf{x}$ in the analysis representation $\mathbf{\Omega x}$ is less than $d$, since otherwise $\mathbf{x}=0$. Thus, for the cosparsity $l$ of any non-zero vector $\mathbf{x}$ it holds $l< d$ in this case. Nevertheless, if there are linear dependencies among the frame elements $\mathbf{\omega}_j$, then larger values of $l$ are possible. This applies to certain redundant frames as well as to the difference operator (related to total variation). Our main results concern the minimal number $m$ of measurements that are necessary to recover an $l$-cosparse vector $\mathbf{x}$ from $\mathbf{y} = \mathbf{M x}$ with $\mathbf{M} \in \mathbb{R}^{m \times d}$. As it is hard to come up with theoretical guarantees for deterministic matrices $\mathbf{M}$, we pass to random matrices. As common in compressive sensing, we work with Gaussian random matrices, that is, with matrices having independent standard normal distributed entries. Gaussian random matrices have already proven to provide accurate theoretical guarantees in the context of standard synthesis sparsity, see e.g.~\cite{ChandrasekaranRechtParriloWillsky,dota09}. Moreover, empirical tests indicate that also other types of random matrices behave very similar to Gaussian random matrices in terms of recovery performance \cite{dota09-1}, although a theoretical justification may be much harder than for Gaussian matrices. We both provide so-called nonuniform and uniform recovery guarantees. The nonuniform result states that a given fixed cosparse vector $\mathbf{x}$ is recovered via analysis $\ell_1$-minimization from $\mathbf{y} = \mathbf{M x}$ with high probability using a random choice of a Gaussian measurement matrix $\mathbf{M}$ under a suitable condition on the number of measurements. 
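Definition \ref{defCosparsity} is straightforward to evaluate numerically; the following sketch (with a small hand-picked redundant frame, our own toy example rather than one from the paper) computes the cosparsity and cosupport of a vector:

```python
def cosparsity(Omega, x, tol=1e-12):
    """Return (l, cosupport) for the analysis operator Omega (rows = frame
    vectors omega_j) and the vector x, following Definition defCosparsity."""
    inner = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in Omega]
    cosupport = [j for j, v in enumerate(inner) if abs(v) <= tol]
    return len(cosupport), cosupport

# Toy redundant frame in R^2 (p = 3 > d = 2).
Omega = [[1.0, 0.0],
         [0.0, 1.0],
         [1.0, 1.0]]
x = [1.0, -1.0]             # <omega_3, x> = 0, so x is 1-cosparse
l, cosupp = cosparsity(Omega, x)
print(l, cosupp)            # -> 1 [2]
```

Here $\mathbf{\Omega x}$ is $s$-sparse with $s=p-l=2$, consistent with the definition.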
In contrast, the uniform result states that a single random draw of a Gaussian matrix $\mathbf{M}$ is able to recover {\em all} cosparse signals $\mathbf{x}$ simultaneously with high probability. Clearly, the uniform statement is stronger than the nonuniform one, however, as we will see, the uniform statement requires more measurements. We start with the nonuniform guarantee for recovery of cosparse signals with respect to frames using Gaussian measurement matrices. \begin{theorem}\label{thMainResultForFrame} Let $\mathbf{\Omega} \in \mathbb{R}^{p \times d}$ be a frame with frame bounds $A,B>0$ and let $\mathbf{x}$ be $l$-cosparse, that is, $\mathbf{\Omega x}$ is $s$-sparse with $s = p-l$. For a Gaussian random matrix $\mathbf{M}\in\mathbb{R}^{m\times d}$ and $0<\varepsilon<1$, if \begin{equation}\label{eqNumberOfMeasurementsForFrame} \frac{m^2}{m+1}\geq \frac{2Bs}{A}\brac{\sqrt{\ln\left(\frac{ep}{s}\right)}+\sqrt{\frac{A\ln(\varepsilon^{-1})}{Bs}}}^2, \end{equation} then with probability at least $1-\varepsilon$, the vector $\mathbf{x}$ is the unique minimizer of $\norm{\mathbf{\Omega z}}_1$ subject to $\mathbf{Mz}=\mathbf{Mx}$. \end{theorem} Roughly speaking, that is, ignoring terms of lower order, a fixed $l$-cosparse vector is recovered with high probability from \[ m > 2(B/A) s \ln(ep/s) \] Gaussian measurements where $s = p-l$. Note that the number of measurements increases with increasing frame ratio $B/A$, and the optimal behavior occurs for tight frames. For $\mathbf{\Omega} =\Id$, this bound slightly strengthens the main result for sparse recovery in \cite{ChandrasekaranRechtParriloWillsky}. We will also show stability of the reconstruction with respect to noise on the measurements, see Theorem~\ref{thNoisyMeasurements} below. 
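To get a feel for the bound \eqref{eqNumberOfMeasurementsForFrame}, one can solve it numerically for the smallest admissible $m$. The following sketch uses illustrative values of $p$, $s$, the frame bounds and the failure probability $\varepsilon$ (our own choice, not from the paper):

```python
import math

def min_measurements(A, B, p, s, eps_fail):
    """Smallest integer m with m^2/(m+1) >= (2Bs/A) * (sqrt(ln(e p / s))
    + sqrt(A ln(1/eps_fail) / (B s)))^2, cf. the nonuniform bound
    (eqNumberOfMeasurementsForFrame)."""
    rhs = (2 * B * s / A) * (math.sqrt(math.log(math.e * p / s))
                             + math.sqrt(A * math.log(1 / eps_fail) / (B * s))) ** 2
    m = 1
    while m * m / (m + 1) < rhs:
        m += 1
    return m

# Tight frame (A = B = 1), p = 1000, s = 20, failure probability 1e-3
m = min_measurements(1.0, 1.0, 1000, 20, 1e-3)
print(m)
```

For these values the bound is of the order $2s\ln(ep/s)$, as the discussion following the theorem suggests.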
The proof of the above result (given in Section~\ref{sec:nonuniform}) relies on a characterization of the minimizer via tangent cones (Theorems~\ref{thRecoveryViaTangentCones} and \ref{thRecoveryViaTangentConesWithNoise}) which is similar to corresponding conditions stated in \cite{ChandrasekaranRechtParriloWillsky,MendelsonPajorTomczakJaegermann}. Moreover, our proof uses an extension of Gordon's escape through a mesh theorem (Theorem~\ref{thModifiedGordonsEscapeThroughTheMesh}). \medskip We now pass to the uniform recovery result which additionally takes into account that in practice the signals are often only approximately cosparse. The quantity \[ \sigma_{s}(\mathbf{\Omega x})_1:=\inf\fbrac{\norm{\mathbf{\Omega x}-\mathbf{u}}_1: \mathbf{u} \;\mbox{is $s$-sparse}} \] describes the $\ell_1$-best approximation error to $\mathbf{\Omega x}$ by $s$-sparse vectors. \begin{theorem}\label{thUniformRecoveryWithFrame} Let $\mathbf{M}\in\mathbb{R}^{m\times d}$ be a Gaussian random matrix, $0<\rho<1$ and $0<\varepsilon<1$. If \begin{equation}\label{eqNumberOfMeasurementsForFrameUniformRecovery} \frac{m^2}{m+1}\geq \frac{2Bs\brac{1+(1+\rho^{-1})^2}}{A}\brac{\!\!\sqrt{\ln\frac{ep}{s}}+\frac{1}{\sqrt 2}+\sqrt{\frac{A\ln(\varepsilon^{-1})}{Bs\brac{1+(1+\rho^{-1})^2}}}}^{\!\!2}, \end{equation} then with probability at least $1-\varepsilon$ for every vector $\mathbf{x}\in\mathbb{R}^d$ a minimizer $\mathbf{\hat x}$ of $\norm{\mathbf{\Omega z}}_1$ subject to $\mathbf{Mz}=\mathbf{Mx}$ approximates $\mathbf{x}$ with $\ell_2$-error \[ \norm{\mathbf{x}-\mathbf{\hat{x}}}_2\leq\frac{2(1+\rho)^2}{\sqrt{A}(1-\rho)}\frac{\sigma_{s}(\mathbf{\Omega x})_1}{\sqrt{s}}. \] \end{theorem} Roughly speaking, with high probability every $l$-cosparse vector can be recovered via analysis $\ell_1$-minimization using a single random draw of a Gaussian matrix if \begin{equation}\label{eqApprNumberOfMeasurementsUniformRecovery} m > 10 (B/A)s \ln(ep/s).
\end{equation} Moreover, the recovery is stable under passing to approximately cosparse vectors when adding slightly more measurements. The proof of this theorem relies on an extension of the null space property, which is well known in the synthesis sparsity case \cite{codade09,FoucartRauhut,grni03} and was adapted to the analysis sparsity setting in \cite{AldroubiChenPowell,Foucart}. In fact, for the standard case $\mathbf{\Omega} = \Id$, we improve a result of Rudelson and Vershynin \cite{RudelsonVershynin} (also relying on the null space property) with respect to the constants in \eqref{eqNumberOfMeasurementsForFrameUniformRecovery} and add stability in $\ell_2$. We further show that recovery is robust under perturbations of the measurements in Theorem \ref{thRobustUniformRecoveryWithFrame}. We note that in the standard exact sparse case with no noise, the constant in \eqref{eqApprNumberOfMeasurementsUniformRecovery} can be replaced by $2e$, see the contribution by Donoho and Tanner in \cite{dota09}. Their methods, however, are completely different from ours, and it is not clear whether they can be extended to analysis sparsity. \subsection{Related work} Let us discuss briefly related theoretical studies on recovery of analysis sparse vectors and compare them with our main results. An earlier version of Theorem~\ref{thUniformRecoveryWithFrame} was shown by Cand{\`e}s and Needell in \cite{CandesEldarNeedellRandall}. However, they were only able to treat the case that the analysis operator is given by a tight frame, that is, when $A=B$. Moreover, their analysis is based on a version of the restricted isometry property and does not provide explicit constants in the corresponding bound on the required number of measurements. To be fair, we note, however, that their analysis applies to general subgaussian random matrices.
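The error quantity $\sigma_s(\mathbf{\Omega x})_1$ above is simply the $\ell_1$-tail of $\mathbf{\Omega x}$: the best $s$-sparse approximation keeps the $s$ largest entries in magnitude, so the error equals the sum of the $p-s$ smallest magnitudes. A minimal sketch (toy vector of our own choosing):

```python
def sigma_s(v, s):
    """l1-error of the best s-sparse approximation of v:
    the sum of the len(v) - s smallest magnitudes."""
    return sum(sorted(abs(t) for t in v)[:max(len(v) - s, 0)])

v = [3.0, -0.5, 0.1, 2.0, -0.05]
print(sigma_s(v, 2))   # keeps 3.0 and 2.0; error 0.5 + 0.1 + 0.05 (up to rounding)
```

In particular $\sigma_s(\mathbf{\Omega x})_1=0$ exactly when $\mathbf{x}$ is $l$-cosparse with $l\geq p-s$, so the error bound in Theorem \ref{thUniformRecoveryWithFrame} reduces to exact recovery in that case.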
The results of \cite{CandesEldarNeedellRandall} were extended to the case of non-tight frames and Weibull matrices in the work of Foucart \cite{Foucart}. The analysis in \cite{Foucart} incorporates the robust null space property, the verification of which for the Weibull matrices relies on a variant of the classical restricted isometry property. In our work we prove that Gaussian random matrices satisfy the robust null space property by referring to a modification of Gordon's escape through a mesh theorem. A recent contribution by Needell and Ward \cite{NeedellWard} provides theoretical recovery guarantees for the special case of total variation minimization, which corresponds to analysis $\ell_1$-minimization with a certain difference operator. Unfortunately, we cannot cover this situation with our main results because the difference operator is not a frame. Nevertheless, it would be interesting to pursue theoretical recovery guarantees for total variation minimization and Gaussian random matrices using the approach of this paper. Nam et al.'s work \cite{NamDaviesEladGribonval} provides a systematic introduction to the analysis sparsity model and also treats greedy recovery methods, see also \cite{ginaelgrda13}. Further contributions are contained in \cite{LiuMiLi,VaiterPeyreDossalFadili}. Our nonuniform recovery guarantees rely on a geometric characterization of the successful recovery. We obtain quantitative estimates by bounding a certain Gaussian width which can be thought of as an intrinsic complexity measure. The authors of \cite{AmelunxenLotzMcCoyTropp} exploit the geometry of optimality conditions to study phase transition phenomena in random linear inverse problems and random demixing problems. They express their results in terms of the statistical dimension which is essentially equivalent to the Gaussian width, see Section 10.3 of \cite{AmelunxenLotzMcCoyTropp} for further details.
Also, we note that the optimization problems (\ref{eqProblemP1}) and (\ref{eqProblemP1Noise}) often appear in image processing \cite{CaiOsherShen,ChanShen}. \subsection{Notation} We use the notation $\mathbf{\Omega}_{\Lambda}$ to refer to a submatrix of $\mathbf{\Omega}$ with the rows indexed by $\Lambda$; $\brac{\mathbf{\Omega x}}_S$ stands for the vector whose entries indexed by $S$ coincide with the entries of $\mathbf{\Omega x}$ and the rest are filled by zeros. As we have already mentioned, the $\ell_0$-norm $\|\cdot\|_0$ of a vector corresponds to the number of non-zero elements in it. The unit ball in $\mathbb{R}^d$ with respect to the $\ell_q$-norm is denoted by $B_q^d$. The operator norm of a matrix $\mathbf{A}$ is defined by $\norm{\mathbf{A}}_{2\to2}:=\underset{\norm{x}_2\leq 1}\sup\norm{\mathbf{Ax}}_2$ and the Frobenius norm is given by \[ \norm{\mathbf{A}}_F:=\brac{\sum_{i=1}^m\sum_{j=1}^d \abs{A_{ij}}^2}^{1/2}. \] It is well known that the Frobenius norm dominates the operator norm, $\norm{\mathbf{A}}_{2\to2} \leq \norm{\mathbf{A}}_F$. Finally, $[p]$ is the set of all natural numbers not exceeding $p$, i.e., $[p]=\{1,2,\ldots,p\}$. \section{Nonuniform recovery} \label{sec:nonuniform} In this section we prove Theorem~\ref{thMainResultForFrame} and extend it to robust recovery in Theorem~\ref{thNoisyMeasurements}. The proof strategy is similar to that in \cite{ChandrasekaranRechtParriloWillsky}. We rely on conditions on the measurement matrix $\mathbf{M}$ involving tangent cones, which should be of independent interest. In order to check these conditions for a Gaussian random matrix we rely on an extension of Gordon's escape through a mesh theorem. (In contrast to \cite{ChandrasekaranRechtParriloWillsky}, the standard version of Gordon's result is not sufficient for our purposes.) \subsection{Recovery via tangent cones} Our conditions for successful recovery of cosparse signals are formulated via tangent cones.
For fixed $\mathbf{x}\in\mathbb{R}^d$ we define the convex cone \[ T(\mathbf{x})=\cone\{\mathbf{z}-\mathbf{x}: \mathbf{z}\in\mathbb{R}^d,\;\norm{\mathbf{\Omega z}}_1\leq\norm{\mathbf{\Omega x}}_1\}, \] where the notation ``cone'' stands for the conic hull of the indicated set. The following result is analogous to Proposition~2.1 in \cite{ChandrasekaranRechtParriloWillsky}. \begin{theorem}\label{thRecoveryViaTangentCones} Let $\mathbf{M}\in\mathbb{R}^{m\times d}$. A vector $\mathbf{x}\in\mathbb{R}^d$ is the unique minimizer of $\norm{\mathbf{\Omega z}}_1$ subject to $\mathbf{M}\mathbf{z}=\mathbf{Mx}$ if and only if $\ker \mathbf{M}\cap T(\mathbf{x})=\{\mathbf{0}\}$. \end{theorem} \begin{proof} First assume that $\ker \mathbf{M}\cap T(\mathbf{x})=\{\mathbf{0}\}$. Let $\mathbf{z}\in\mathbb{R}^d$ be a vector that satisfies $\mathbf{M}\mathbf{z}=\mathbf{Mx}$ and $\norm{\mathbf{\Omega z}}_1\leq\norm{\mathbf{\Omega x}}_1$. This means that $\mathbf{z}-\mathbf{x}\in T(\mathbf{x})$ and $\mathbf{z}-\mathbf{x}\in\ker \mathbf{M}$. According to our assumption we conclude that $\mathbf{z}-\mathbf{x}=\mathbf{0}$, so that $\mathbf{x}$ is the unique minimizer. The other direction is proved by contradiction. Let $\mathbf x$ be the unique minimizer of (\ref{eqProblemP1}). Take any $\mathbf v\in T(\mathbf x)\setminus\{\mathbf 0\}$. Then $\mathbf v$ can be written as \[ \mathbf v=\sum_{j}t_j(\mathbf{z}_j-\mathbf x),\quad t_j\geq 0,\quad \norm{\mathbf{\Omega}\mathbf z_j}_1\leq\norm{\mathbf{\Omega x}}_1. \] Since $\mathbf v\neq \mathbf 0$, it holds $\sum\limits_{j}t_j>0$ and we can define $t_j':=\frac{t_j}{\sum\limits_{j}t_j}$. Suppose $\mathbf v\in\ker\mathbf M$. Then \[ \mathbf 0=\mathbf M\brac{\frac{\mathbf v}{\sum\limits_{j}t_j}}=\mathbf M\brac{\sum_jt_j'\mathbf z_j}-\mathbf{Mx}, \] so that $\mathbf M\brac{\sum_jt_j'\mathbf z_j}=\mathbf{Mx}$. 
Together with the estimate \[ \norm{\mathbf{\Omega}\sum\limits_jt_j'\mathbf{z}_j}_1\leq\sum_jt_j'\norm{\mathbf{\Omega}\mathbf z_j}_1\leq\norm{\mathbf{\Omega x}}_1 \] and the uniqueness of the minimizer, this implies $\sum\limits_jt_j'\mathbf{z}_j=\mathbf x$. Hence $\mathbf v=\mathbf 0$, which leads to a contradiction. Thus, $\ker\mathbf M\cap T(\mathbf x)=\{\mathbf 0\}$. \end{proof} When the measurements are noisy, we use the following condition for successful recovery \cite{ChandrasekaranRechtParriloWillsky}. \begin{theorem}\label{thRecoveryViaTangentConesWithNoise} Let $\mathbf{x}\in\mathbb{R}^d$, $\mathbf{M}\in\mathbb{R}^{m\times d}$ and $\mathbf y = \mathbf{M}\mathbf{x}+\mathbf{w}$ with $\norm{\mathbf{w}}_2\leq\eta$. If \begin{equation}\label{eqInfOverCone} \underset{\begin{subarray}{c} \mathbf{v}\in T(\mathbf{x})\\ \norm{\mathbf{v}}_2=1 \end{subarray}}\inf\norm{\mathbf{Mv}}_2\geq\tau \end{equation} for some $\tau>0$, then a minimizer $\mathbf{\hat{x}}$ of (\ref{eqProblemP1Noise}) satisfies \[ \norm{\mathbf{x}-\mathbf{\hat{x}}}_2\leq\frac{2\eta}{\tau}. \] \end{theorem} \begin{proof} Since $\mathbf{\hat x}$ is a minimizer of (\ref{eqProblemP1Noise}), we have $\norm{\mathbf{\Omega\hat x}}_1\leq\norm{\mathbf{\Omega x}}_1$ and $\mathbf{\hat x}-\mathbf{x}\in T(\mathbf{x})$. Our assumption (\ref{eqInfOverCone}) implies \begin{equation}\label{eqMeasurementOfDifferenceInCone} \norm{\mathbf{M}(\mathbf{\hat{x}}-\mathbf{x})}_2\geq\tau\norm{\mathbf{\hat{x}}-\mathbf{x}}_2. \end{equation} On the other hand, we can bound $\norm{\mathbf{M\hat{x}}-\mathbf{M}\mathbf{x}}_2$ from above by \begin{equation}\label{eqMeasurementOfDIfferenceInitialCondition} \norm{\mathbf{M\hat x}-\mathbf{M}\mathbf{x}}_2\leq\norm{\mathbf{M\hat x}-\mathbf{y}}_2+\norm{\mathbf{M}\mathbf{x}-\mathbf{y}}_2\leq 2\eta. \end{equation} Combining (\ref{eqMeasurementOfDifferenceInCone}) and (\ref{eqMeasurementOfDIfferenceInitialCondition}) we obtain the desired estimate.
\end{proof}
\subsection{Nonuniform recovery with Gaussian measurements}
\label{secNonUniformRecovery}
To prove the nonuniform recovery result for Gaussian random measurements (Theorem \ref{thMainResultForFrame}) we rely on the condition stated in Theorem \ref{thRecoveryViaTangentCones}, which requires that the null space of the measurement matrix $\mathbf{M}$ misses the set $T(\mathbf{x})$. The next ingredient of the proof is a variation of Gordon's escape through a mesh theorem, which was first used in the context of compressed sensing in \cite{RudelsonVershynin}. To state this theorem, we introduce some notation and formulate auxiliary lemmas. Let $\mathbf{g}\in\mathbb{R}^m$ be a standard Gaussian random vector, that is, a vector of independent, mean-zero, variance-one normally distributed random variables. Then for \[ E_m:=\mean\norm{\mathbf{g}}_2=\sqrt{2}\;\frac{\Gamma\brac{(m+1)/2}}{\Gamma\brac{m/2}} \] we have \[ \frac{m}{\sqrt{m+1}}\leq E_m\leq\sqrt{m}, \] see \cite{Gordon,FoucartRauhut}. For a set $T\subset\mathbb{R}^d$ we define its Gaussian width by \[ \ell(T):=\mean\underset{\mathbf{x}\in T}\sup\abrac{\mathbf{x},\mathbf{g}}, \] where $\mathbf{g}\in\mathbb{R}^d$ is a standard Gaussian random vector. \begin{lemma}[Gordon \cite{Gordon}]\label{lmGordon} Let $X_{i,j}$ and $Y_{i,j}$, $i=1,\ldots,m$, $j=1,\ldots,n$, be two families of mean-zero Gaussian random variables. If \[ \begin{aligned} \mean\abs{X_{i,j}-X_{k,l}}^2&\leq\mean\abs{Y_{i,j}-Y_{k,l}}^2\;\;\mbox{for all}\;\;i\neq k\;\mbox{and all}\;j,l,\\ \mean\abs{X_{i,j}-X_{i,l}}^2&\geq\mean\abs{Y_{i,j}-Y_{i,l}}^2\;\;\mbox{for all}\;i,j,l, \end{aligned} \] then \[ \mean\underset{i}\min\,\underset{j}\max\,X_{i,j}\geq\mean\underset{i}\min\,\underset{j}\max\,Y_{i,j}.
\] \end{lemma} \begin{remark}\label{Gordon:inf} Gordon's lemma extends to the case of Gaussian processes indexed by possibly infinite sets, where the expected maxima or minima are replaced by the corresponding lattice suprema or infima, see for instance \cite[Remark 8.28]{FoucartRauhut} or \cite{leta91}. \end{remark} We further exploit the concentration of measure phenomenon, which asserts that Lipschitz functions concentrate well around their expectation \cite{Ledoux,Massart}. \begin{lemma}[Concentration of measure]\label{lmConcentrationOfMeasure} Let $f:\mathbb{R}^n\to\mathbb{R}$ be an $L$-Lipschitz function: \[ \abs{f(\mathbf{x})-f(\mathbf{y})}\leq L\norm{\mathbf{x}-\mathbf{y}}_2\;\;\mbox{for all}\;\mathbf{x},\mathbf{y}\in\mathbb{R}^n. \] Let $\mathbf{g}=(g_1,g_2,\ldots,g_n)$ be a vector of independent standard normal random variables. Then, for all $t>0$, \[ \mathbb{P}\brac{\mean[f(\mathbf{g})]-f(\mathbf{g})\geq t}\leq e^{-\frac{t^2}{2L^2}}. \] \end{lemma} Next, we state our modification of Gordon's escape through a mesh theorem, see \cite{Gordon} for the original version. Below, $\mathbf{\Omega}(T)$ denotes the image of the set $T$ under $\mathbf{\Omega}$. \begin{theorem}\label{thModifiedGordonsEscapeThroughTheMesh} Let $\mathbf{\Omega}\in\mathbb{R}^{p\times d}$ be a frame with frame bounds $A, B>0$. Let $\mathbf{M}\in\mathbb{R}^{m\times d}$ be a Gaussian random matrix and $T$ be a subset of the unit sphere $\mathbb{S}^{d-1}=\{\mathbf{x}\in\mathbb{R}^d: \norm{\mathbf{x}}_2=1\}$. Then, for $t>0$, it holds \begin{equation}\label{eqGordonsEscapeThroughTheMesh} \mathbb{P}\brac{\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2> E_m-\frac{1}{\sqrt{A}}\ell\brac{\mathbf{\Omega}(T)}-t}\geq 1-e^{-\frac{t^2}{2}}. \end{equation} \end{theorem} \begin{proof} Recall that \[ \norm{\mathbf{M}\mathbf{x}}_2=\underset{\mathbf{y}\in S^{m-1}}\max\abrac{\mathbf{M}\mathbf{x},\mathbf{y}}.
\] For $\mathbf{x}\in T$ and $\mathbf{y}\in S^{m-1}$ we compare the two Gaussian processes \[ X_{\mathbf{x},\mathbf{y}}:=\abrac{\mathbf{M}\mathbf{x},\mathbf{y}}\qquad \mbox{and}\qquad Y_{\mathbf{x},\mathbf{y}}:=\frac{1}{\sqrt{A}}\abrac{\mathbf{g},\mathbf{\Omega x}}+\abrac{\mathbf{h},\mathbf{y}}, \] where $\mathbf{g}\in\mathbb{R}^{p}$ and $\mathbf{h}\in\mathbb{R}^m$ are independent standard Gaussian random vectors. Let $\mathbf{x},\mathbf{x'}\in \mathbb{S}^{d-1}$ and $\mathbf{y},\mathbf{y'}\in S^{m-1}$. Since the entries $M_{ij}$ are independent with $\mean M_{ij} = 0$, $\mean M_{ij}^2=1$, we have \begin{align} &\mean\abs{X_{\mathbf{x},\mathbf{y}}-X_{\mathbf{x'},\mathbf{y'}}}^2=\mean\abs{\sum_{i=1}^m\sum_{j=1}^dM_{ij}(x_jy_i-x'_jy'_i)}^2=\sum_{i=1}^m\sum_{j=1}^d(x_jy_i-x'_jy'_i)^2\notag\\ &=\sum_{i=1}^m\sum_{j=1}^d(x_j^2y_i^2+x'^2_jy'^2_i-2x_jx'_jy_iy'_i)=\norm{\mathbf{x}}_2^2\norm{\mathbf{y}}_2^2+\norm{\mathbf{x'}}_2^2\norm{\mathbf{y'}}_2^2-2\abrac{\mathbf{x},\mathbf{x'}}\abrac{\mathbf{y},\mathbf{y'}}\notag\\ &=2-2\abrac{\mathbf{x},\mathbf{x'}}\abrac{\mathbf{y},\mathbf{y'}}\label{eqEstimateForXprocess}. \end{align} Independence and the isotropy of the Gaussian vectors $\mathbf{g}$ and $\mathbf{h}$ together with the fact that $\mathbf{\Omega}$ is a frame with lower frame bound $A$ imply \begin{align} \mean\abs{Y_{\mathbf{x},\mathbf{y}}-Y_{\mathbf{x'},\mathbf{y'}}}^2 &=\mean\abs{\frac{1}{\sqrt{A}}\abrac{\mathbf{g},\mathbf{\Omega x} -\mathbf{\Omega x}'}}^2+\mean\abs{\abrac{\mathbf{h},\mathbf{y}-\mathbf{y}'}}^2\notag\\ &=\frac{1}{A}\norm{\mathbf{\Omega x} -\mathbf{\Omega x'}}_2^2+\norm{\mathbf{y}-\mathbf{y'}}_2^2\geq \norm{\mathbf{x}-\mathbf{x'}}_2^2+\norm{\mathbf{y}-\mathbf{y}'}_2^2\notag\\ &=\norm{\mathbf{x}}_2^2+\norm{\mathbf{x'}}_2^2-2\abrac{\mathbf{x},\mathbf{x'}}+\norm{\mathbf{y}}_2^2+\norm{\mathbf{y'}}_2^2-2\abrac{\mathbf{y},\mathbf{y'}}\notag\\ \label{eqEstimateForYProcess} &=4-2\abrac{\mathbf{x},\mathbf{x'}}-2\abrac{\mathbf{y},\mathbf{y'}}.
\end{align} When $\mathbf{x}=\mathbf{x'}$, we have \[ \mean\abs{Y_{\mathbf{x},\mathbf{y}}-Y_{\mathbf{x},\mathbf{y'}}}^2=\norm{\mathbf{y}-\mathbf{y}'}_2^2=2-2\abrac{\mathbf{y},\mathbf{y'}}. \] Combining (\ref{eqEstimateForXprocess}) and (\ref{eqEstimateForYProcess}), we obtain \[ \mean\abs{Y_{\mathbf{x},\mathbf{y}}-Y_{\mathbf{x'},\mathbf{y'}}}^2-\mean\abs{X_{\mathbf{x},\mathbf{y}}-X_{\mathbf{x'},\mathbf{y'}}}^2\geq 2(1-\abrac{\mathbf{x},\mathbf{x'}})(1-\abrac{\mathbf{y},\mathbf{y'}}) \] and since $\abrac{\mathbf{x},\mathbf{x'}}\leq\norm{\mathbf{x}}_2\norm{\mathbf{x'}}_2\leq 1$ and similarly for $\mathbf{y},\mathbf{y}'$, it follows that \[ \mean\abs{Y_{\mathbf{x},\mathbf{y}}-Y_{\mathbf{x'},\mathbf{y'}}}^2-\mean\abs{X_{\mathbf{x},\mathbf{y}}-X_{\mathbf{x'},\mathbf{y'}}}^2\geq 0. \] Moreover, we have \[ \mean\abs{Y_{\mathbf{x},\mathbf{y}}-Y_{\mathbf{x},\mathbf{y'}}}^2=\mean\abs{X_{\mathbf{x},\mathbf{y}}-X_{\mathbf{x},\mathbf{y'}}}^2. \] Due to Gordon's lemma (Lemma \ref{lmGordon}) and Remark \ref{Gordon:inf}, \begin{align} \mean&\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2=\mean\underset{\mathbf{x}\in T}\inf\underset{\mathbf{y}\in S^{m-1}}\max X_{\mathbf{x},\mathbf{y}}\geq\mean\underset{\mathbf{x}\in T}\inf\underset{\mathbf{y}\in S^{m-1}}\max Y_{\mathbf{x},\mathbf{y}}\notag\\ &=\mean\underset{\mathbf{x}\in T}\inf\underset{\mathbf{y}\in S^{m-1}}\max\fbrac{\frac{1}{\sqrt{A}}\abrac{\mathbf{g},\mathbf{\Omega x}}+\abrac{\mathbf{h},\mathbf{y}}}=\mean\underset{\mathbf{x}\in T}\inf\fbrac{\frac{1}{\sqrt{A}}\abrac{\mathbf{g},\mathbf{\Omega x}}+\norm{\mathbf{h}}_2}\notag\\ &=\mean\norm{\mathbf{h}}_2-\frac{1}{\sqrt{A}}\mean\underset{\mathbf{x}\in T}\sup\abrac{\mathbf{g},\mathbf{\Omega x}}=E_m-\frac{1}{\sqrt{A}}\mean\underset{\mathbf{z}\in\mathbf{\Omega}(T)}\sup\abrac{\mathbf{g},\mathbf{z}}=E_m-\frac{1}{\sqrt{A}}\ell(\mathbf{\Omega}(T))\label{eqExpectationOfLipschitzFunctionEstimate}. 
\end{align} Let $F(\mathbf{M}):=\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2$. For any $\mathbf{A},\mathbf{B}\in\mathbb{R}^{m\times d}$, \begin{align*} \underset{\mathbf{x}\in T}\inf\norm{\mathbf{A}\mathbf{x}}_2&\leq\underset{\mathbf{x}\in T}\inf\brac{\norm{\mathbf{B}\mathbf{x}}_2+\norm{\brac{\mathbf{A}-\mathbf{B}}\mathbf{x}}_2}\\ &\leq\underset{\mathbf{x}\in T}\inf\norm{\mathbf{B}\mathbf{x}}_2+\norm{\mathbf{A}-\mathbf{B}}_{2\to2} \leq \underset{\mathbf{x}\in T}\inf\norm{\mathbf{B}\mathbf{x}}_2+\norm{\mathbf{A}-\mathbf{B}}_{F}. \end{align*} The second inequality follows from the fact that $T\subset \mathbb{S}^{d-1}$. By interchanging $\mathbf{A}$ and $\mathbf{B}$ we conclude that \[ \abs{F(\mathbf{A})-F(\mathbf{B})}\leq\norm{\mathbf{A}-\mathbf{B}}_{F}. \] This means that $F$ is $1$-Lipschitz with respect to the Frobenius norm (which corresponds to the $\ell_2$-norm when interpreting a matrix as a vector), and due to concentration of measure (Lemma~\ref{lmConcentrationOfMeasure}) \[ \mathbb{P}\brac{\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2\leq\mean\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2-t}\leq e^{-t^2/2}. \] Applying the estimate (\ref{eqExpectationOfLipschitzFunctionEstimate}) to the previous inequality gives \[ \mathbb{P}\!\brac{\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2\leq E_m\!-\!\frac{\ell(\mathbf{\Omega}(T))}{\sqrt{A}}-t}\leq\mathbb{P}\!\brac{\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2\leq\mean\!\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2-t}\leq e^{-\frac{t^2}{2}}, \] which concludes the proof. \end{proof} The previous result suggests estimating the Gaussian width of $\mathbf{\Omega}(T)$ with $T := T(\mathbf{x})\cap \mathbb{S}^{d-1}$.
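As a quick numerical sanity check (outside the formal argument), the closed-form expression $E_m=\sqrt{2}\,\Gamma((m+1)/2)/\Gamma(m/2)$ and the bounds $m/\sqrt{m+1}\leq E_m\leq\sqrt{m}$ used above can be verified in a few lines of Python with the standard library:

```python
# Hedged numerical check, not part of the proofs: the expected norm
# E_m = E ||g||_2 of a standard Gaussian vector g in R^m has the
# closed form sqrt(2) * Gamma((m+1)/2) / Gamma(m/2), and satisfies
# m / sqrt(m+1) <= E_m <= sqrt(m).

import math

def expected_gaussian_norm(m):
    """E ||g||_2 for g ~ N(0, I_m), via the Gamma-function formula."""
    return math.sqrt(2.0) * math.gamma((m + 1) / 2.0) / math.gamma(m / 2.0)

for m in (1, 2, 5, 10, 100):
    em = expected_gaussian_norm(m)
    # The two-sided bound quoted in the text.
    assert m / math.sqrt(m + 1) <= em <= math.sqrt(m)
    print(m, round(em, 4))
```

The check confirms numerically that $E_m$ sits between the two bounds, with $E_m\approx\sqrt{m}$ already for moderate $m$.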
Since $\mathbf{\Omega}$ is a frame with upper frame constant $B$, we have \[ \mathbf{\Omega}(T)\subset\mathbf{\Omega}(T(\mathbf{x}))\cap\mathbf{\Omega}(\mathbb{S}^{d-1})\subset K(\mathbf{\Omega x})\cap\brac{\sqrt{B}\mathbb{B}_2^p}, \] where \[ K(\mathbf{\Omega x})=\cone\fbrac{\mathbf{y}-\mathbf{\Omega x}:\mathbf{y}\in\mathbb{R}^p,\; \norm{\mathbf{y}}_1\leq\norm{\mathbf{\Omega x}}_1}. \] The supremum over a larger set can only increase, hence \begin{equation}\label{eqGWidthByKoneAndSphere} \ell(\mathbf{\Omega}(T))\leq\sqrt B\ell\brac{K(\mathbf{\Omega x})\cap \mathbb{B}_2^{p}}. \end{equation} We next recall an upper bound for the Gaussian width $\ell\brac{K(\mathbf{\Omega x})\cap \mathbb{B}_2^p}$ from \cite{ChandrasekaranRechtParriloWillsky} involving the polar cone $\mathcal{N}(\mathbf{\Omega x})=K(\mathbf{\Omega x})^{\circ}$ defined by \[ \mathcal{N}(\mathbf{\Omega x})=\left\{\mathbf{z}\in\mathbb{R}^{p}:\abrac{\mathbf{z},\mathbf{y}-\mathbf{\Omega x}}\leq 0\;\mbox{for all}\;\mathbf{y}\in\mathbb{R}^p\;\;\mbox{such that}\;\norm{\mathbf{y}}_1\leq\norm{\mathbf{\Omega x}}_1\right\}. \] \begin{proposition}\label{prGaussianWidthsByPolarCone} Let $\mathbf{g}\in\mathbb{R}^{p}$ be a standard Gaussian random vector. Then \[ \ell\brac{K(\mathbf{\Omega x})\cap \mathbb{B}_2^p}\leq\mean\underset{\mathbf{z}\in\mathcal{N}(\mathbf{\Omega x})}\min\norm{\mathbf{g}-\mathbf{z}}_2. \] \end{proposition} The proof relies on tools from convex analysis, see \cite{ChandrasekaranRechtParriloWillsky,FoucartRauhut}, \cite[Ch.\ 5.9]{BoydVandenberghe}. \begin{proposition} Let $s$ be the sparsity of the vector $\mathbf{\Omega x}\in\mathbb{R}^p$. Then \begin{equation}\label{eqGaussianWidthOfCone} \ell\brac{K(\mathbf{\Omega x})\cap \mathbb{B}_2^p}^2\leq 2s\ln\frac{ep}{s}. 
\end{equation} \end{proposition} \begin{proof} By Proposition \ref{prGaussianWidthsByPolarCone} and H\"older's inequality \begin{equation} \ell\brac{K(\mathbf{\Omega x})\cap \mathbb{B}_2^p}^2\leq\brac{\mean\underset{\mathbf{z}\in\mathcal{N}(\mathbf{\Omega x})}\min\norm{\mathbf{g}-\mathbf{z}}_2}^2\leq\mean\underset{\mathbf{z}\in \mathcal{N}(\mathbf{\Omega x})}\min\norm{\mathbf{g}-\mathbf{z}}_2^2. \end{equation} Let $S$ denote the support of $\mathbf{\Omega x}$. Then one can verify that \begin{equation}\label{eqPolarConeAsUnionOverT} \mathcal{N}(\mathbf{\Omega x}) = \bigcup_{t\geq 0} \fbrac{\mathbf{z}\in\mathbb{R}^p:\,z_i=t\sgn(\mathbf{\Omega x})_i,\;i\in S,\;\abs{z_i}\leq t,\;i\in S^c}, \end{equation} see \cite[Lemma 9.23]{FoucartRauhut} for a proof. To proceed, we fix $t$, minimize $\norm{\mathbf{g}-\mathbf{z}}_2^2$ over all possible entries $z_j$, take the expectation of the obtained expression and finally optimize over $t$. According to (\ref{eqPolarConeAsUnionOverT}), we have \[ \begin{aligned} \underset{\mathbf{z}\in\mathcal{N}(\mathbf{\Omega x})}\min\norm{\mathbf{g}-\mathbf{z}}_2^2 &= \underset{\begin{subarray}{c} t\geq 0\\ \abs{z_i}\leq t,\,i\in S^c \end{subarray}}\min\sum_{i\in S}\brac{g_i-t\sgn(\mathbf{\Omega x})_i}^2+\sum_{i\in S^c}\brac{g_i-z_i}^2\\ &=\underset{\begin{subarray}{c} t\geq 0 \end{subarray}}\min\sum_{i\in S}\brac{g_i-t\sgn(\mathbf{\Omega x})_i}^2+\sum_{i\in S^c}S_t(g_i)^2, \end{aligned} \] where $S_t$ is the soft-thresholding operator given by \[ S_t(x)=\left\{\begin{array}{ll} x+t, & x<-t,\\ 0, & -t\leq x\leq t,\\ x-t, & x>t. \end{array}\right. \] Taking expectation we arrive at \begin{align} \mean\underset{\mathbf{z}\in\mathcal{N}(\mathbf{\Omega x})}\min\norm{\mathbf{g}-\mathbf{z}}_2^2 &\leq \mean\sbrac{\sum_{i\in S}\brac{g_i-t\sgn(\mathbf{\Omega x})_i}^2}+\mean\sbrac{\sum_{i\in S^c}S_t(g_i)^2}\notag\\ &=s(1+t^2)+(p-s)\mean S_t(g)^2\label{eqExpectationEstimateWithT}, \end{align} where $g$ is a univariate standard Gaussian random variable. 
To calculate the expectation of $S_t(g)^2$, we integrate directly: \begin{align} \mean S_t(g)^2 & =\frac{1}{\sqrt{2\pi}}\sbrac{\int\limits_{-\infty}^{-t}(x+t)^2e^{-\frac{x^2}{2}}\,dx+\int\limits_t^{\infty}(x-t)^2e^{-\frac{x^2}{2}}\,dx}\notag\\ & = \frac{2}{\sqrt{2\pi}}\int\limits_{0}^{\infty} x^2e^{-\frac{(x+t)^2}{2}}\,dx =\frac{2e^{-\frac{t^2}{2}}}{\sqrt{2\pi}}\int\limits_{0}^{\infty} x^2e^{-\frac{x^2}{2}}e^{-xt}\,dx\notag\\ &\leq e^{-\frac{t^2}{2}}\sqrt{\frac{2}{\pi}}\int\limits_{0}^{\infty} x^2e^{-\frac{x^2}{2}}\,dx=e^{-\frac{t^2}{2}}.\label{eqExpectationOfSoftThreshold} \end{align} Substituting the estimate (\ref{eqExpectationOfSoftThreshold}) into (\ref{eqExpectationEstimateWithT}) gives \[ \mean\underset{\mathbf{z}\in\mathcal{N}(\mathbf{\Omega x})}\min\norm{\mathbf{g}-\mathbf{z}}_2^2\leq s(1+t^2)+(p-s)e^{-\frac{t^2}{2}}. \] Setting $t=\sqrt{2\ln(p/s)}$ finally leads to \[ \ell\brac{K(\mathbf{\Omega x})\cap \mathbb{B}_2^p}^2\leq s\brac{1+2\ln(p/s)}+s=2s\ln(ep/s). \] This concludes the proof. \end{proof} Combining inequalities (\ref{eqGWidthByKoneAndSphere}) and (\ref{eqGaussianWidthOfCone}) yields \[ \ell(\mathbf{\Omega}(T))^2\leq 2Bs\ln\frac{ep}{s}. \] \begin{proof}[of Theorem \ref{thMainResultForFrame}] Set $t=\sqrt{2\ln(\varepsilon^{-1})}$. The fact that $E_m\geq m/\sqrt{m+1}$ together with condition (\ref{eqNumberOfMeasurementsForFrame}) yields \[ E_m\geq \frac{1}{\sqrt{A}}\ell(\mathbf{\Omega}(T))+t. \] Theorem~\ref{thModifiedGordonsEscapeThroughTheMesh} implies \[ \mathbb{P}\brac{\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2> 0}\geq\mathbb{P}\brac{\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2> E_m-\frac{1}{\sqrt{A}}\ell\brac{\mathbf{\Omega}(T)}-t}\geq 1-e^{-\frac{t^2}{2}}=1-\varepsilon, \] which guarantees that $\ker\mathbf{M}\cap T(\mathbf{x})=\{\mathbf 0\}$ with probability at least $1-\varepsilon$. As the final step we apply Theorem \ref{thRecoveryViaTangentCones}.
\end{proof} We now extend Theorem~\ref{thMainResultForFrame} to robust recovery. \begin{theorem}\label{thNoisyMeasurements} Let $\mathbf{\Omega} \in \mathbb{R}^{p \times d}$ be a frame with frame bounds $A,B > 0$ and let $\mathbf{x}$ be $l$-cosparse and $s = p -l$. For a random draw $\mathbf{M}\in\mathbb{R}^{m\times d}$ of a Gaussian random matrix, let noisy measurements $\mathbf{y}=\mathbf{M}\mathbf{x}+\mathbf{w}$ be given with $\norm{\mathbf{w}}_2\leq\eta$. If for $0<\varepsilon<1$ and some $\tau>0$ \begin{equation}\label{eqNumberOfMeasurementsForFrameNoise} \frac{m^2}{m+1}\geq \frac{2Bs}{A}\brac{\sqrt{\ln\frac{ep}{s}}+\sqrt{\frac{A\ln(\varepsilon^{-1})}{Bs}}+\tau\sqrt{\frac{A}{2sB}}}^2, \end{equation} then with probability at least $1-\varepsilon$, any minimizer $\mathbf{\hat x}$ of (\ref{eqProblemP1Noise}) satisfies \[ \norm{\mathbf{x}-\mathbf{\hat{x}}}_2\leq\frac{2\eta}{\tau}. \] \end{theorem} \begin{proof} We use the recovery condition stated in Theorem \ref{thRecoveryViaTangentConesWithNoise}. Set $t=\sqrt{2\ln(\varepsilon^{-1})}$. Our previous considerations and the choice of $m$ in (\ref{eqNumberOfMeasurementsForFrameNoise}) guarantee that \[ E_m-\frac{1}{\sqrt{A}}\ell(\mathbf{\Omega}(T))-t\geq\frac{m}{\sqrt{m+1}}-\sqrt{\frac{2Bs}{A}\ln\frac{ep}{s}}-\sqrt{2\ln(\varepsilon^{-1})}\geq\tau. \] The monotonicity of probability and Theorem \ref{thModifiedGordonsEscapeThroughTheMesh} yield \[ \mathbb{P}\brac{\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2\geq\tau}\geq\mathbb{P}\brac{\underset{\mathbf{x}\in T}\inf\norm{\mathbf{M}\mathbf{x}}_2\geq E_m-\frac{1}{\sqrt{A}}\ell(\mathbf{\Omega}(T))-t}\geq 1-\varepsilon. \] \end{proof} \section{Uniform recovery} This section is dedicated to the proof of the uniform recovery result in Theorem~\ref{thUniformRecoveryWithFrame}. 
It relies on the $\mathbf{\Omega}$-null space property, which extends the null space property known from the standard synthesis sparsity case, see e.g.~\cite{codade09,FoucartRauhut,grni03}. We analyze this property directly for Gaussian random matrices, using techniques similar to those of the previous section. \subsection{$\mathbf{\Omega}$-null space property} We first introduce the $\mathbf{\Omega}$-null space property, a sufficient condition for the exact reconstruction of every cosparse vector. \begin{definition}\label{defNSP} A matrix $\mathbf{M}\in\mathbb{R}^{m\times d}$ is said to satisfy the $\mathbf{\Omega}$-null space property of order $s$ with constant $0<\rho<1$, if for any set $\Lambda\subset [p]$ with $\# \Lambda\geq p-s$ it holds \begin{equation}\label{eqNSP} \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{v}}_1\leq\rho\norm{\mathbf{\Omega}_{\Lambda}\mathbf{v}}_1\;\;\;\mbox{for all}\;\;\mathbf{v}\in\ker{\mathbf M}. \end{equation} \end{definition} If $\mathbf{\Omega}$ is the identity map $\Id:\mathbb{R}^d\to\mathbb{R}^d$, then (\ref{eqNSP}) is the standard null space property. We start with a result on exact recovery of cosparse vectors. \begin{theorem}\label{thRecoveryWithOmegaNSP} If $\mathbf{M}\in\mathbb{R}^{m\times d}$ satisfies the $\mathbf{\Omega}$-null space property of order $s$ with $0<\rho<1$, then every $l$-cosparse vector $\mathbf{x}\in\mathbb{R}^d$ with $l=p-s$ is the unique solution of (\ref{eqProblemP1}) with $\mathbf{y}=\mathbf{Mx}$. \end{theorem} This theorem follows immediately from the next result, which also implies a certain stability estimate in $\ell_1$. \begin{theorem}\label{thConeCostraint} Let $\mathbf{x}\in\mathbb{R}^d$ be an arbitrary vector and $\mathbf{\hat x}$ be a solution of (\ref{eqProblemP1}) with $\mathbf{y}=\mathbf{M}\mathbf{x}$, where $\mathbf{M}\in\mathbb{R}^{m\times d}$ satisfies the $\mathbf{\Omega}$-null space property of order $s$ with constant $\rho \in (0,1)$.
Then \begin{equation}\label{eqConeConstraint} \norm{\mathbf{\Omega}\brac{\mathbf{x}-\mathbf{\hat{x}}}}_1\leq\frac{2(1+\rho)}{1-\rho} \sigma_s(\mathbf{\Omega x})_1. \end{equation} \end{theorem} \begin{proof} Since $\mathbf{\hat x}$ is a solution of (\ref{eqProblemP1}), we must have $\norm{\mathbf{\Omega\hat x}}_1\leq\norm{\mathbf{\Omega x}}_1$. Take any $\Lambda\subset [p]$ with $\# \Lambda\geq p-s$. Then \[ \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{\hat x}}_1+\norm{\mathbf{\Omega}_{\Lambda}\mathbf{\hat x}}_1\leq\norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{x}}_1+\norm{\mathbf{\Omega}_{\Lambda}\mathbf{x}}_1. \] By the triangle inequality, the vector $\mathbf v := \mathbf{x}-\mathbf{\hat{x}}$ satisfies \[ \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{x}}_1-\norm{\mathbf{\Omega}_{\Lambda^c} \mathbf{v}}_1+\norm{\mathbf{\Omega}_{\Lambda} \mathbf{v}}_1-\norm{\mathbf{\Omega}_{\Lambda}\mathbf{x}}_1\leq\norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{x}}_1+\norm{\mathbf{\Omega}_{\Lambda}\mathbf{x}}_1, \] which implies \[ \norm{\mathbf{\Omega}_{\Lambda} \mathbf{v}}_1\leq\norm{\mathbf{\Omega}_{\Lambda^c} \mathbf{v}}_1+2\norm{\mathbf{\Omega}_{\Lambda}\mathbf{x}}_1 \leq \rho \norm{\mathbf{\Omega}_{\Lambda}\mathbf{v}}_1 + 2\norm{\mathbf{\Omega}_{\Lambda}\mathbf{x}}_1, \] where in the last step we applied the $\mathbf{\Omega}$-null space property \eqref{eqNSP}. Rearranging and choosing a set $\Lambda$ of size $p-s$ which minimizes $ \norm{\mathbf{\Omega}_{\Lambda}\mathbf{x}}_1$ yields \[ \norm{\mathbf{\Omega}_{\Lambda}\mathbf{v}}_1\leq\frac{2}{1-\rho} \sigma_s(\mathbf{\Omega x})_1. \] Furthermore, another application of the $\mathbf{\Omega}$-null space property gives \[ \norm{\mathbf{\Omega}\mathbf{v}}_1 = \norm{\mathbf{\Omega}_{\Lambda}\mathbf{v}}_1 + \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{v}}_1 \leq (1+\rho) \norm{\mathbf{\Omega}_{\Lambda}\mathbf{v}}_1 \leq \frac{2(1+\rho)}{1-\rho} \sigma_s(\mathbf{\Omega x})_1. \] This completes the proof.
\end{proof} In order to provide a suitable stability estimate in $\ell_2$ we require a slightly stronger version of the $\mathbf\Omega$-null space property. \begin{definition}\label{defL2StableNSP} A matrix $\mathbf{M}\in\mathbb{R}^{m\times d}$ is said to satisfy the $\ell_2$-stable $\mathbf{\Omega}$-null space property of order $s$ with constant $0<\rho<1$, if, for any set $\Lambda\subset [p]$ with $\# \Lambda\geq p-s$, it holds \begin{equation}\label{eqL2NSP} \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{v}}_2\leq\frac{\rho}{\sqrt{s}}\norm{\mathbf{\Omega}_{\Lambda}\mathbf{v}}_1\;\;\;\mbox{for all}\;\;\mathbf{v}\in\ker{\mathbf M}. \end{equation} \end{definition} \begin{remark}\label{rem:l1l2} H\"older's inequality implies $\norm{\mathbf\Omega_{\Lambda^c}\mathbf v}_1\leq \sqrt s\norm{\mathbf\Omega_{\Lambda^c}\mathbf v}_2$ for any set $\Lambda \subset [p]$ with $\#(\Lambda^c) = s$. This means that if $\mathbf M\in\mathbb{R}^{m\times d}$ satisfies the $\ell_2$-stable $\mathbf\Omega$-null space property of order $s$ with constant $0<\rho<1$, then it satisfies the $\mathbf\Omega$-null space property of the same order and with the same constant. \end{remark} \begin{theorem}\label{thRecoveryWithL2NSP} Let $\mathbf{M}\in\mathbb{R}^{m\times d}$ satisfy the $\ell_2$-stable $\mathbf{\Omega}$-null space property of order $s$ with constant $0<\rho<1$. Then for any $\mathbf{x}\in\mathbb{R}^d$ the solution $\mathbf{\hat x}$ of (\ref{eqProblemP1}) with $\mathbf{y}=\mathbf{M}\mathbf{x}$ approximates the vector $\mathbf{x}$ with $\ell_2$-error \begin{equation}\label{eqL2StableRecovery} \norm{\mathbf{x}-\mathbf{\hat{x}}}_2\leq\frac{2(1+\rho)^2}{\sqrt{A}(1-\rho)}\frac{\sigma_{s}(\mathbf{\Omega x})_1}{\sqrt{s}}.
\end{equation} \end{theorem} Inequality (\ref{eqL2StableRecovery}) means that $l$-cosparse vectors are exactly recovered by (\ref{eqProblemP1}), and that vectors $\mathbf{x}\in\mathbb{R}^d$ for which $\mathbf{\Omega x}$ is close in $\ell_1$ to an $s$-sparse vector are well approximated in $\ell_2$ by a solution of (\ref{eqProblemP1}). The proof goes along the same lines as in the standard case in \cite{FoucartRauhut}. The novelty here is that we exploit the sparsity not of the signal itself, but of its analysis representation. Thus, we first extend the $\ell_1$-error estimate above to an $\ell_2$-error estimate for $\mathbf{\Omega}(\mathbf x-\mathbf{\hat x})$ and then use the fact that $\mathbf\Omega$ is a frame to bound the $\ell_2$-error $\norm{\mathbf x-\mathbf{\hat x}}_2$. The statement of Theorem \ref{thRecoveryWithL2NSP} was generalized to the setting of a perturbed frame and imprecise knowledge of the measurement matrix in \cite[Theorem 3.1]{AldroubiChenPowell}. \begin{proof}[of Theorem \ref{thRecoveryWithL2NSP}] We define the vector $\mathbf{v}:=\mathbf{\hat x}-\mathbf{x}\in\ker\mathbf{M}$ and denote by $S_0\subset [p]$ an index set of the $s$ largest absolute entries of $\mathbf{\Omega v}$. Since $\# S_0^c=p-s$ and $\mathbf{M}\in\mathbb{R}^{m\times d}$ satisfies the $\ell_2$-stable $\mathbf{\Omega}$-null space property, it follows that \begin{equation}\label{eqEstimateBySNSP} \norm{(\mathbf{\Omega v})_{S_0}}_2\leq\frac{\rho}{\sqrt{s}}\norm{\mathbf{\Omega}_{S_0^c} \mathbf{v}}_1\leq\frac{\rho}{\sqrt{s}}\norm{\mathbf{\Omega v}}_1. \end{equation} We partition the indices of $S_0^c$ into subsets $S_1$, $S_2$, $\ldots$ of size $s$ in order of decreasing magnitude of the entries $(\mathbf{\Omega v})_i$. Then for each $k\in S_{i+1}$, $i\geq 0$, \[ \abs{(\mathbf{\Omega v})_k}\leq\frac{1}{s}\sum_{j\in S_i}\abs{(\mathbf{\Omega v})_j} \qquad \mbox{ and } \qquad \norm{(\mathbf{\Omega v})_{S_{i+1}}}_2\leq\frac{1}{\sqrt s}\norm{(\mathbf{\Omega v})_{S_i}}_1.
\] Along with the triangle inequality this gives \begin{equation}\label{eqEstimateByDescendingIndexes} \norm{(\mathbf{\Omega v})_{S_0^c}}_2\leq\sum_{i\geq 1}\norm{(\mathbf{\Omega v})_{S_i}}_2\leq\frac{1}{\sqrt s}\sum_{i\geq 0}\norm{(\mathbf{\Omega v})_{S_i}}_1=\frac{1}{\sqrt s}\norm{\mathbf{\Omega v}}_1. \end{equation} Inequalities (\ref{eqEstimateBySNSP}) and (\ref{eqEstimateByDescendingIndexes}) together with Remark~\ref{rem:l1l2} and Theorem~\ref{thConeCostraint} yield \begin{equation}\label{eqEstimateOfL2NormBySNSP} \norm{ \mathbf{\Omega v}}_2\leq\norm{(\mathbf{\Omega v})_{S_0}}_2+\norm{(\mathbf{\Omega v})_{S_0^c}}_2 \leq \frac{1+\rho}{\sqrt s}\norm{ \mathbf{\Omega v}}_1 \leq \frac{2(1+\rho)^2}{(1-\rho)\sqrt s} \sigma_s(\mathbf{\Omega x})_1. \end{equation} Finally, we use that $\mathbf{\Omega}$ is a frame with lower frame bound $A$ to conclude that \[ \norm{\mathbf{x}-\mathbf{\hat{x}}}_2\leq\frac{1}{\sqrt A}\norm{\mathbf{\Omega x}-\mathbf{\Omega\hat{x}}}_2 \leq \frac{2(1+\rho)^2}{\sqrt{A}(1-\rho) \sqrt{s}} \sigma_s(\mathbf{\Omega x})_1. \] This completes the proof. \end{proof} When the measurements are noisy, the following extension of the $\mathbf{\Omega}$-null space property, introduced in \cite{Foucart}, guarantees robustness of the recovery. \begin{definition}\label{defL2RobustStableNSP} A matrix $\mathbf{M}\in\mathbb{R}^{m\times d}$ is said to satisfy the robust $\ell_2$-stable $\mathbf{\Omega}$-null space property of order $s$ with constants $0<\rho<1$ and $\tau>0$, if for any set $\Lambda\subset [p]$ with $\# \Lambda\geq p-s$ it holds \begin{equation}\label{eqRobustL2NSP} \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{v}}_2\leq\frac{\rho}{\sqrt{s}}\norm{\mathbf{\Omega}_{\Lambda}\mathbf{v}}_1+\tau\norm{\mathbf{Mv}}_2\;\;\;\mbox{for all}\;\;\mathbf{v}\in\mathbb{R}^d.
\end{equation} \end{definition} If $\mathbf v\in\ker\mathbf M$, the term $\norm{\mathbf{Mv}}_2$ vanishes, and we see that the robust $\ell_2$-stable $\mathbf{\Omega}$-null space property implies the $\ell_2$-stable $\mathbf{\Omega}$-null space property. The robust $\ell_2$-stable $\mathbf{\Omega}$-null space property guarantees the stability and robustness of the $\ell_1$-minimization (\ref{eqProblemP1Noise}). \begin{theorem}\label{thRecoveryWithRobustL2NSP} Let $\mathbf{M}\in\mathbb{R}^{m\times d}$ satisfy the robust $\ell_2$-stable $\mathbf{\Omega}$-null space property of order $s$ with constants $0<\rho<1$ and $\tau>0$. Then for any $\mathbf{x}\in\mathbb{R}^d$ the solution $\mathbf{\hat x}$ of (\ref{eqProblemP1Noise}) with $\mathbf{y}=\mathbf{M}\mathbf{x}+\mathbf{w}$, $\norm{\mathbf w}_2\leq\eta$, approximates the vector $\mathbf{x}$ with $\ell_2$-error \begin{equation}\label{eqRobustL2StableRecovery} \norm{\mathbf{x}-\mathbf{\hat{x}}}_2\leq\frac{2(1+\rho)^2}{\sqrt{A}(1-\rho)}\frac{\sigma_{s}(\mathbf{\Omega x})_1}{\sqrt{s}}+\frac{2\tau(3+\rho)}{\sqrt A(1-\rho)}\eta. \end{equation} \end{theorem} \begin{proof} Theorem 5 in \cite{Foucart} with $q=p=2$ provides the corresponding bound for $\norm{\mathbf\Omega(\mathbf x-\mathbf{\hat x})}_2$. Taking into account that $\mathbf\Omega$ is a frame with lower frame bound $A$, we obtain the estimate (\ref{eqRobustL2StableRecovery}). \end{proof} \subsection{Uniform recovery from Gaussian measurements} We now prove Theorem~\ref{thUniformRecoveryWithFrame} by establishing the $\ell_2$-stable $\mathbf{\Omega}$-null space property of order $s$ for a Gaussian measurement matrix $\mathbf{M}$, following a strategy similar to that of Section~\ref{sec:nonuniform}. To this end we introduce the set \[ W_{\rho,s}:=\left\{\mathbf{w}\in\mathbb{R}^d: \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{w}}_2>\rho/\sqrt s\norm{\mathbf{\Omega}_{\Lambda}\mathbf{w}}_1\;\mbox{for some}\;\Lambda\subset[p],\; \#\Lambda=p-s\right\}.
\] In fact, if \begin{equation}\label{eqMinNormPositivity} \inf\fbrac{\norm{\mathbf{Mw}}_2:\mathbf{w}\in W_{\rho,s}\cap \mathbb{S}^{d-1}}>0, \end{equation} then for all $\mathbf{w}\in\ker\mathbf{M}$ and any $\Lambda\subset [p]$ with $\#\Lambda=p-s$ we have \[ \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{w}}_2\leq\frac{\rho}{\sqrt s}\norm{\mathbf{\Omega}_{\Lambda}\mathbf{w}}_1, \] which means that $\mathbf M$ satisfies the $\ell_2$-stable $\mathbf{\Omega}$-null space property of order $s$. To show (\ref{eqMinNormPositivity}) we apply Theorem \ref{thModifiedGordonsEscapeThroughTheMesh}, which requires studying the Gaussian width of the set $\mathbf{\Omega}\brac{W_{\rho,s}\cap \mathbb{S}^{d-1}}$. Since $\mathbf{\Omega}$ is a frame with upper frame bound $B$, we have \begin{equation}\label{eqInclusionDueToFrame} \mathbf{\Omega}\brac{W_{\rho,s}\cap \mathbb{S}^{d-1}}\subset\mathbf{\Omega}\brac{W_{\rho,s}}\cap\brac{\sqrt B \mathbb{B}_2^p}\subset T_{\rho,s}\cap\brac{\sqrt B \mathbb{B}_2^p}=\sqrt B\brac{T_{\rho,s}\cap \mathbb{B}_2^p}, \end{equation} with \[ T_{\rho,s}=\left\{\mathbf{u}\in\mathbb{R}^p: \norm{\mathbf{u}_S}_2\geq\rho/\sqrt s\norm{\mathbf{u}_{S^c}}_1\;\mbox{for some}\;S\subset[p],\; \# S=s\right\}. \] Then \[ T_{\rho,s}\cap \mathbb{B}_2^p=\bigcup\limits_{\#S=s}\left\{ \mathbf{u}\in\mathbb{R}^p: \norm{\mathbf{u}}_2\leq 1,\;\norm{\mathbf{u}_S}_2\geq\frac{\rho}{\sqrt s}\norm{\mathbf{u}_{S^c}}_1\right\}. \] \begin{lemma}\label{lmInclusionInUniversalSet} Let $D$ be the set defined by \begin{equation}\label{eqDefinitionOfD} D:=\conv\fbrac{\mathbf{x}\in \mathbb S^{p-1}:\#\supp \mathbf{x}\leq s}. \end{equation} \begin{enumerate}[(a)] \item\label{itUnitBall} Then $D$ is the unit ball with respect to the norm \[ \norm{\mathbf{x}}_D:=\sum_{l=1}^L\sbrac{\sum_{i\in I_l}\brac{x_i^*}^2}^{1/2}, \] where $L=\lceil\frac{p}{s}\rceil$, \[ I_l=\left\{\begin{array}{ll} \fbrac{s(l-1)+1,\ldots,sl}, & l=1,\ldots, L-1,\\ \fbrac{s(L-1)+1,\ldots, p}, & l=L, \end{array}\right.
\] and $\mathbf{x^*}$ is the non-increasing rearrangement of $\mathbf{x}$. \item\label{itInclusion} It holds \begin{equation}\label{eqInclusionInUniversalSet} T_{\rho,s}\cap \mathbb{B}_2^p\subset \sqrt{1+(1+\rho^{-1})^2}D. \end{equation} \end{enumerate} \end{lemma} A similar result was stated as Lemma 4.5 in \cite{RudelsonVershynin}. For the sake of completeness we present the proof. \begin{proof} \ref{itUnitBall} Suppose $\mathbf{x}\in D$. It can be represented as $\mathbf x=\sum\limits_{i}\alpha_i\mathbf x_i$ with $\alpha_i\geq0$, $\sum\limits_i\alpha_i=1$ and $\mathbf x_i\in S^{p-1}$, $\#\supp\mathbf x_i\leq s$. Then $\norm{\mathbf x_i}_D=\norm{\mathbf x_i}_2=1$. By the triangle inequality \[ \norm{\mathbf x}_D\leq\sum_i\alpha_i\norm{\mathbf x_i}_D=\sum_i\alpha_i=1. \] This proves that $D$ is a subset of the unit ball with respect to the $\norm{\cdot}_D$-norm. On the other hand, let $\norm{\mathbf x}_D\leq 1$. We partition the index set $[p]$ into subsets $S_1$, $S_2$, \ldots of size $s$ in order of decreasing magnitude of entries $x_k$. Set $\alpha_i=\norm{\mathbf x_{S_i}}_2$. Then $\mathbf x$ can be written as \[ \mathbf x=\sum_{i:\alpha_i\neq 0}\alpha_i\brac{\frac{1}{\alpha_i}\mathbf x_{S_i}}, \qquad \mbox{ where } \qquad \sum_{i:\alpha_i\neq 0}\alpha_i=\sum\limits_{i}\norm{\mathbf x_{S_i}}_2=\norm{\mathbf x}_D\leq 1 \] and, for $\alpha_i\neq 0$, $\norm{\frac{1}{\alpha_i}\mathbf x_{S_i}}_2=\frac{1}{\alpha_i}\norm{\mathbf x_{S_i}}_2=1$. Thus $\mathbf x\in D$. \ref{itInclusion} Take an arbitrary $\mathbf{x}\in T_{\rho,s}\cap \mathbb{B}_2^p$. To show (\ref{eqInclusionInUniversalSet}) we estimate $\norm{\mathbf x}_D$. 
According to the definition of $\norm{\cdot}_D$ in Lemma \ref{lmInclusionInUniversalSet} \ref{itUnitBall}, \begin{align} \norm{\mathbf x}_D&=\sum_{l=1}^L\sbrac{\sum_{i\in I_l}\brac{x_i^*}^2}^{\frac{1}{2}}\notag\\ &=\sbrac{\sum_{i=1}^s\brac{x_i^*}^2}^{\frac{1}{2}}+\sbrac{\sum_{i=s+1}^{2s}\brac{x_i^*}^2}^{\frac{1}{2}}+\sum_{l\geq 3}^L\sbrac{\sum_{i\in I_l}\brac{x_i^*}^2}^{\frac{1}{2}}\label{eq:EstimateForDNormInBlocks}. \end{align} To bound the last term in (\ref{eq:EstimateForDNormInBlocks}), we first note that for each $i\in I_{l}$, $l\geq 3$, \[ x^*_i\leq\frac{1}{s}\sum_{j\in I_{l-1}}x^*_j\quad\text{and}\quad \sbrac{\sum_{i\in I_{l}}(x^*_i)^2}^{1/2}\leq\frac{1}{\sqrt s}\sum_{j\in I_{l-1}}x^*_j. \] Summing up over $l\geq 3$ yields \[ \sum_{l\geq 3}^L\sbrac{\sum_{i\in I_l}\brac{x_i^*}^2}^{\frac{1}{2}}\leq\frac{1}{\sqrt s}\sum_{l\geq 2}\sum_{j\in I_l}x^*_j. \] Since $\mathbf{x}\in T_{\rho,s}\cap \mathbb{B}_2^p$, it holds $\norm{\mathbf{x}}_2\leq 1$ and there is $S\subset [p]$, $\# S=s$, such that $\norm{\mathbf{x}_S}_2>\rho/\sqrt s\norm{\mathbf{x}_{S^c}}_1$. Then \[ \sum_{l\geq 2}\sum_{i\in I_l}x^*_i\leq\norm{\mathbf x_{S^c}}_1<\frac{\sqrt s}{\rho}\norm{\mathbf x_S}_2\leq\frac{\sqrt s}{\rho}\sbrac{\sum_{i=1}^s(x_i^*)^2}^{\frac{1}{2}} \] and \[ \sum_{l\geq 3}^L\sbrac{\sum_{i\in I_l}\brac{x_i^*}^2}^{\frac{1}{2}}\leq\rho^{-1}\sbrac{\sum_{i=1}^s(x_i^*)^2}^{\frac{1}{2}}. \] Applying the last estimate to (\ref{eq:EstimateForDNormInBlocks}) and taking into account that $\norm{\mathbf x}_2\leq 1$ we derive that \[ \begin{aligned} \norm{\mathbf x}_D&\leq(1+\rho^{-1})\sbrac{\sum_{i=1}^s(x_i^*)^2}^{\frac{1}{2}}+\sbrac{\sum_{i=s+1}^{2s}\brac{x_i^*}^2}^{\frac{1}{2}}\\ &\leq(1+\rho^{-1})\sbrac{\sum_{i=1}^s(x_i^*)^2}^{\frac{1}{2}}+\sbrac{1-\sum_{i=1}^{s}\brac{x_i^*}^2}^{\frac{1}{2}}. \end{aligned} \] Set $a=\sbrac{\sum_{i=1}^s(x_i^*)^2}^{\frac{1}{2}}$.
The maximum of the function \[ f(a):=(1+\rho^{-1})a+\sqrt{1-a^2}, \quad 0\leq a\leq 1, \] is attained where $f'(a)=1+\rho^{-1}-a/\sqrt{1-a^2}$ vanishes, that is, at the point \[ a = \frac{1+\rho^{-1}}{\sqrt{1+(1+\rho^{-1})^2}}, \] and is equal to $\sqrt{1+(1+\rho^{-1})^2}$. Thus for any $\mathbf{x}\in T_{\rho,s}\cap \mathbb{B}_2^p$ it holds \[ \norm{\mathbf x}_D\leq\sqrt{1+(1+\rho^{-1})^2}, \] which proves (\ref{eqInclusionInUniversalSet}). \end{proof} Lemma \ref{lmInclusionInUniversalSet} \ref{itInclusion} implies \begin{equation}\label{eqGaussianWidthOfConeAndD} \ell\brac{T_{\rho,s}\cap \mathbb{B}_2^p}\leq \sqrt{1+(1+\rho^{-1})^2}\ell(D). \end{equation} \begin{lemma}\label{lmEstimateGaussianWidthOfD} The Gaussian width of the set $D$ defined by (\ref{eqDefinitionOfD}) satisfies \[ \ell(D)\leq\sqrt{2s\ln\frac{ep}{s}}+\sqrt s. \] \end{lemma} \begin{proof} The supremum of the linear functional $\abrac{\mathbf{g},\mathbf{x}}$ over $D$ is achieved at an extreme point, i.e., at an $\mathbf{x}\in S^{p-1}$ with $\#\supp \mathbf{x}\leq s$. Hence, by H\"older's inequality \[ \ell(D)=\mean\underset{\mathbf{x}\in D}\sup\abrac{\mathbf{g},\mathbf{x}}=\mean\underset{\begin{subarray}{c} \norm{\mathbf{x}}_2=1,\\ \#\supp \mathbf{x}\leq s \end{subarray}}\sup\abrac{\mathbf{g},\mathbf{x}}\leq\mean\underset{S\subset [p], \#S=s}\max\norm{\mathbf{g}_S}_2. \] An estimate on the maximum squared $\ell_2$-norm of a sequence of standard Gaussian random vectors (see e.g.\ \cite[Lemma 3.2]{RaoRechtNowak} or \cite[Proposition 8.2]{FoucartRauhut}) gives \[ \ell(D)\leq\sqrt{\mean\underset{S\subset [p], \#S=s}\max\norm{\mathbf{g}_S}_2^2} \leq\sqrt{2\ln\binom{p}{s}}+\sqrt s\leq\sqrt{2s\ln\frac{ep}{s}}+\sqrt s. \] The last inequality follows from the fact that $ \binom{p}{s}\leq\brac{\frac{ep}{s}}^s$, see e.g.~\cite[Lemma C.5]{FoucartRauhut}.
\end{proof} \begin{proof}[of Theorem \ref{thUniformRecoveryWithFrame}] Expressions (\ref{eqInclusionDueToFrame}), (\ref{eqGaussianWidthOfConeAndD}) and Lemma \ref{lmEstimateGaussianWidthOfD} show that \begin{align} \ell\brac{\mathbf{\Omega}\brac{W_{\rho,s}\cap \mathbb{S}^{d-1}}}&\leq\sqrt{B\sbrac{1+(1+\rho^{-1})^2}}\ell(D)\notag\\ &\leq\sqrt{B\sbrac{1+(1+\rho^{-1})^2}}\brac{\sqrt{2s\ln\frac{ep}{s}}+\sqrt s}\label{eq:EstimateGWImageOfTheSetUniformRecovery}. \end{align} Set $t=\sqrt{2\ln(\varepsilon^{-1})}$. The fact that $E_m\geq m/\sqrt{m+1}$ along with condition (\ref{eqNumberOfMeasurementsForFrameUniformRecovery}) yields \[ E_m\geq \frac{1}{\sqrt{A}}\ell \brac{\mathbf{\Omega}\brac{W_{\rho,s}\cap \mathbb{S}^{d-1}}}+t. \] The monotonicity of probability and Theorem \ref{thModifiedGordonsEscapeThroughTheMesh} imply \[ \mathbb{P}\brac{\inf\norm{\mathbf{Mw}}_2> 0:\mathbf{w}\in W_{\rho,s}\cap \mathbb{S}^{d-1}}\geq 1-e^{-\frac{t^2}{2}}=1-\varepsilon, \] which guarantees that with probability at least $1-\varepsilon$ \[ \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{w}}_2<\frac{\rho}{\sqrt s}\norm{\mathbf{\Omega}_{\Lambda}\mathbf{w}}_1 \] for all $\mathbf{w}\in\ker\mathbf{M}\setminus\{\mathbf 0\}$ and any $\Lambda\subset [p]$ with $\#\Lambda=p-s$, see (\ref{eqMinNormPositivity}). This means that $\mathbf{M}$ satisfies the $\ell_2$-stable $\mathbf{\Omega}$-null space property of order $s$. Finally, we apply Theorem \ref{thRecoveryWithL2NSP}. \end{proof} Finally, we extend the result to show that the recovery is robust with respect to perturbations of the measurements. \begin{theorem}\label{thRobustUniformRecoveryWithFrame} Let $\mathbf{M}\in\mathbb{R}^{m\times d}$ be a Gaussian random matrix, $0<\rho<1$, $0<\varepsilon<1$ and $\tau> 1$.
If \begin{equation}\label{eqNumberOfMeasurementsForFrameRobustUniformRecovery} \frac{m^2}{m+1}\geq \frac{2\brac{1+(1+\rho^{-1})^2}\tau^2B}{(\tau-1)^2A}\, s\, \brac{\!\!\sqrt{\ln\frac{ep}{s}}+\frac{1}{\sqrt 2}+\sqrt{\frac{A\ln(\varepsilon^{-1})}{Bs\brac{1+(1+\rho^{-1})^2}}}}^{\!\!2}\!\!, \end{equation} then with probability at least $1-\varepsilon$ for every vector $\mathbf{x}\in\mathbb{R}^d$ and perturbed measurements $\mathbf{y} = \mathbf{Mx} + \mathbf{w}$ with $\|\mathbf{w}\|_2 \leq \eta$ a minimizer $\mathbf{\hat x}$ of (\ref{eqProblemP1Noise}) approximates $\mathbf{x}$ with $\ell_2$-error \[ \norm{\mathbf{x}-\mathbf{\hat{x}}}_2\leq\frac{2(1+\rho)^2}{\sqrt{A}(1-\rho)}\frac{\sigma_{s}(\mathbf{\Omega x})_1}{\sqrt{s}}+\frac{2\tau\sqrt{2B}(3+\rho)}{\sqrt m\sqrt A(1-\rho)}\eta. \] \end{theorem} \begin{proof} Condition (\ref{eqNumberOfMeasurementsForFrameRobustUniformRecovery}) together with $E_m\geq m/\sqrt{m+1}$ implies \[ E_m\brac{1-\frac{1}{\tau}}\geq\sqrt{\frac{B}{A}(1+(1+\rho^{-1})^2)}\brac{\sqrt{2s\ln\frac{ep}{s}}+\sqrt s}+\sqrt{2\ln(\varepsilon^{-1})}, \] which is equivalent to \[ E_m- \sqrt{\frac{B}{A}(1+(1+\rho^{-1})^2)}\brac{\sqrt{2s\ln\frac{ep}{s}}+\sqrt s}-\sqrt{2\ln(\varepsilon^{-1})}\geq\frac{E_m}{\tau}. \] Taking into account (\ref{eq:EstimateGWImageOfTheSetUniformRecovery}) we may conclude \[ E_m-\frac{1}{\sqrt A}\ell\brac{\mathbf\Omega\brac{W_{\rho,s}\cap\mathbb S^{d-1}}}-\sqrt{2\ln(\varepsilon^{-1})}\geq\frac{E_m}{\tau}\geq\frac{1}{\tau}\sqrt\frac{m}{2}. \] Then according to Theorem \ref{thModifiedGordonsEscapeThroughTheMesh} \[ \mathbb{P}\brac{\inf\norm{\mathbf{Mw}}_2> \frac{\sqrt m}{\tau\sqrt 2}:\mathbf{w}\in W_{\rho,s}\cap \mathbb{S}^{d-1}} \geq 1-\varepsilon.
\] This means that for any $\mathbf w\in\mathbb{R}^d$ such that $\norm{\mathbf{Mw}}_2\leq\frac{\sqrt m}{\tau\sqrt 2}\norm{\mathbf w}_2$ and any set $\Lambda\subset [p]$ with $\# \Lambda\geq p-s$ it holds with probability at least $1-\varepsilon$ \[ \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{w}}_2<\frac{\rho}{\sqrt{s}}\norm{\mathbf{\Omega}_{\Lambda}\mathbf{w}}_1. \] For the remaining vectors $\mathbf w\in\mathbb{R}^d$, we have $\norm{\mathbf{Mw}}_2>\frac{\sqrt m}{\tau\sqrt 2}\norm{\mathbf w}_2$, which together with the fact that $\mathbf\Omega$ is a frame with upper frame bound $B$ leads to \[ \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{w}}_2\leq\norm{\mathbf{\Omega w}}_2\leq\sqrt B\norm{\mathbf w}_2<\frac{\tau\sqrt {2B}}{\sqrt m}\norm{\mathbf{Mw}}_2. \] Thus, for any $\mathbf w\in\mathbb{R}^d$, \[ \norm{\mathbf{\Omega}_{\Lambda^c}\mathbf{w}}_2<\frac{\rho}{\sqrt{s}}\norm{\mathbf{\Omega}_{\Lambda}\mathbf{w}}_1+\frac{\tau\sqrt{2B}}{\sqrt m}\norm{\mathbf{Mw}}_2. \] Finally, we apply Theorem \ref{thRecoveryWithRobustL2NSP}. \end{proof} \section{Numerical experiments} In this section we present the results of numerical experiments on synthetic data performed in Matlab using the \texttt{cvx} package. For the first set of experiments we constructed tight frames $\mathbf{\Omega}$ as an orthonormal basis of the range of the matrix the rows of which were drawn randomly and independently from $\mathbb{S}^{d-1}$. In order to obtain also non-tight frames we simply varied the norms of the rows of $\mathbf{\Omega}$. As dimensions for the analysis operator, we have chosen $d=200$ and $p=250$. The maximal number of zeros that can be achieved in the analysis representation $\mathbf{\Omega x}$ is less than $d$, since otherwise $\mathbf{x}=0$. Therefore, the sparsity level of $\mathbf{\Omega x}$ was always greater than $50$. 
For each trial we fixed a cosparsity $l$ (resulting in the sparsity $s=p-l$) and selected at random $l$ rows of an analysis operator $\mathbf{\Omega}$ that constitute the cosupport $\Lambda$ of the signal. To produce a signal $\mathbf{x}$ we constructed a basis $\mathbf{B}$ of $\ker \mathbf{\Omega}_{\Lambda}$, drew a coefficient vector $\mathbf{c}$ from a normalized standard Gaussian distribution and set $\mathbf{x}=\mathbf{Bc}$. We ran the algorithm and counted the number of times the signal was recovered correctly out of $70$ trials. A reconstruction error of less than $10^{-5}$ was considered a successful recovery. The curves in Figure~\ref{figDifFramesMeasurementsVsSparsity} depict the relation between the number of measurements and the sparsity level such that the recovery was successful at least $98\%$ of the time. Each point on the line corresponds to the maximal sparsity level that could be achieved for the given number of measurements. \begin{figure}[hbt] \centering \includegraphics[scale=0.5 ]{Gaussian_3_Frames_70_Trials} \caption{Recovery for different analysis operators. The red curve corresponds to a tight frame, the black one has frame bound ratio $B/A$ of $13.1254$ and for the blue one $B/A=45.7716$.}\label{figDifFramesMeasurementsVsSparsity} \end{figure} The experiments clearly show that analysis $\ell_1$-minimization works very well for recovering cosparse signals from Gaussian measurements. (Note that only a comparison of the experiments with the nonuniform recovery guarantees makes sense.) The frame bound ratio $B/A$ indeed influences the performance of the recovery algorithm (\ref{eqProblemP1}) -- although the degradation with increasing value of $B/A$ is less dramatic than indicated by our theorems. The reason for this may be that the theorems give estimates for the worst case, while the experiments can only reflect the typical behavior.
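As an illustration, the signal-generation step described above can be sketched in a few lines of Python. This is only a toy instance with hypothetical sizes $d=2$, $p=3$ and cosparsity $l=1$ (the experiments used $d=200$, $p=250$, and the $\ell_1$-minimization step, performed with \texttt{cvx}, is omitted here):

```python
import random

# Toy cosparse signal model: Gaussian analysis operator Omega of size p x d.
d, p = 2, 3
Omega = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(p)]

# Pick a cosupport Lambda of size l = 1, so the sparsity is s = p - l = 2.
w0, w1 = Omega[0]  # the single row indexed by Lambda

# For one row (w0, w1), ker(Omega_Lambda) is spanned by B = (-w1, w0).
# Draw a Gaussian coefficient c and set x = B c.
c = random.gauss(0.0, 1.0)
x = [-w1 * c, w0 * c]

# The analysis representation Omega x vanishes on the cosupport Lambda
# (up to floating-point roundoff) and is generically nonzero elsewhere.
omega_x = [row[0] * x[0] + row[1] * x[1] for row in Omega]
assert abs(omega_x[0]) < 1e-9
```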
\section*{Acknowledgements} M.~Kabanava and H.~Rauhut acknowledge support by the Hausdorff Center for Mathematics, University of Bonn, and by the European Research Council through the grant StG 258926.
{ "redpajama_set_name": "RedPajamaArXiv" }
921
Q: Extracting url links within a downloaded txt file Currently working on a url extractor for work. I'm trying to extract all http links/href links from a downloaded html file and print the links on their own in a separate txt file. So far I've managed to get the entire html of a page downloaded; it's just that extracting the links from it and printing them using Regex is a problem. Wondering if anyone could help me with this? private void button2_Click(object sender, EventArgs e) { Uri fileURI = new Uri(URLbox2.Text); WebRequest request = WebRequest.Create(fileURI); request.Credentials = CredentialCache.DefaultCredentials; WebResponse response = request.GetResponse(); Console.WriteLine(((HttpWebResponse)response).StatusDescription); Stream dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); string responseFromServer = reader.ReadToEnd(); SW = File.CreateText("C:\\Users\\Conal_Curran\\OneDrive\\C#\\MyProjects\\Web Crawler\\URLTester\\response1.htm"); SW.WriteLine(responseFromServer); SW.Close(); string text = System.IO.File.ReadAllText(@"C:\\Users\\Conal_Curran\\OneDrive\\C#\\MyProjects\\Web Crawler\\URLTester\\response1.htm"); string[] links = System.IO.File.ReadAllLines(@"C:\\Users\\Conal_Curran\\OneDrive\\C#\\MyProjects\\Web Crawler\\URLTester\\response1.htm"); Regex regx = new Regex(links, @"http://([\\w+?\\.\\w+])+([a-zA-Z0-9\\~\\!\\@\\#\\$\\%\\^\\&\\*\\(\\)_\\-\\=\\+\\\\\\/\\?\\.\\:\\;\\'\\,]*)?", RegexOptions.IgnoreCase); MatchCollection mactches = regx.Matches(text); foreach (Match match in mactches) { text = text.Replace(match.Value, "<a href='" + match.Value + "'>" + match.Value + "</a>"); } SW = File.CreateText("C:\\Users\\Conal_Curran\\OneDrive\\C#\\MyProjects\\Web Crawler\\URLTester\\Links.htm"); SW.WriteLine(links); } A: In case you do not know, this can be achieved (pretty easily) using one of the html parser nuget packages available.
I personally use HtmlAgilityPack (along with ScrapySharp, another package) and AngleSharp. With only the few lines below, you have all the hrefs in the document, using HtmlAgilityPack: /* do not forget to include the usings: using System.Linq; using HtmlAgilityPack; using ScrapySharp.Extensions; */ HtmlWeb w = new HtmlWeb(); //since you have your html locally stored, load the file into an HtmlDocument //(note: HtmlDocument.LoadHtml expects an html string, not a file path, so use Load instead) //P.S: By prefixing file path strings with @, you are rid of having to escape slashes and other fluffs. var doc = new HtmlDocument(); doc.Load(@"C:\Users\Conal_Curran\OneDrive\C#\MyProjects\Web Crawler\URLTester\response1.htm"); //for an http get request //var doc = w.Load("yourAddressHere"); var hrefs = doc.DocumentNode.CssSelect("a").Select(a => a.GetAttributeValue("href"));
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,014
Q: Reload Scripts Assemblies (busy for long time) Unity Freezes I'm using Unity version 2020.3.32f1 & Rider as editor. Whenever I make any changes in scripts, even small changes, Unity's script assemblies take too much time to compile and load. Sometimes it freezes Unity entirely and in the Task Manager I can see "application not responding" for the Unity session. Does anyone know how I can solve this issue? A: *Edit > Project Settings > Editor > check Enter Play Mode Options *Uncheck Reload Domain & Reload Scene. That worked for me .. unity 2021.3.4f1 A: After much research, I found one solution that worked for me: in the Unity Package Manager, search for Rider and get the update for it. After updating the Rider version from the Package Manager, my script-reloading issue got fixed. A: Disable Domain Reloading. To disable Domain Reloading: *Go to Edit > Project Settings > Editor *Make sure Enter Play Mode Options is enabled. *Disable Reload Domain. Source: https://docs.unity3d.com/Manual/DomainReloading.html A: The Windows version of the Unity editor also has this problem. Weirdly, if you have this problem, just save any file in your Unity project folder without changing anything and the progress bar will start progressing. A: Open Window->Package Manager. Select Packages: In Project to see the packages currently installed. Remove everything you are not using. A: Not being logged in in the Unity editor (upper left corner) certainly seems to increase the chances of this issue happening. If you open the Package Manager without being logged in, you get some errors in the console. I suspect this is related, but in the script reload case it just hangs.
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,021
using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Harness.Settings; using Harness.Tests.Integration.DataProviders; using MongoDB.Bson; using MongoDB.Bson.Serialization.Conventions; using MongoDB.Driver; using Xunit; namespace Harness.Tests.Integration { public class MongoSessionManagerTests : IDisposable { private IMongoClient Client { get; set; } private List<string> DbNames { get; } = new List<string>(); [Fact] public async Task Build_DropEverything_BuildsDatabase() { // Arrange var settings = new SettingsBuilder() .AddConvention(new CamelCaseElementNameConvention(), x => true) .AddDatabase("test1") .WithConnectionString("mongodb://localhost:27017") .DropDatabaseFirst() .AddCollection("col1", true, "Collection1.json") .AddCollection<Person>("people", true, new PersonDataProvider()) .AddDatabase("test2") .WithConnectionString("mongodb://localhost:27017") .DropDatabaseFirst() .AddCollection("col2", true, "Collection2.json") .Build(); var sut = new MongoSessionManager(settings); this.DbNames.Add("test1"); this.DbNames.Add("test2"); // Act var connections = sut.Build(); // Assert Assert.Single(connections); this.Client = connections["mongodb://localhost:27017"]; var test1 = this.Client.GetDatabase("test1"); var col1 = test1.GetCollection<BsonDocument>("col1"); var results1 = (await col1.FindAsync<BsonDocument>(new BsonDocument())).ToList(); Assert.Equal(2, results1.Count); Assert.Equal("Value1b", results1[0].GetElement("Col1b").Value); Assert.Equal("Value2b", results1[0].GetElement("Col2b").Value); Assert.Equal("Value3b", results1[1].GetElement("Col1b").Value); Assert.Equal("Value4b", results1[1].GetElement("Col2b").Value); var peopleCol = test1.GetCollection<BsonDocument>("people"); var people = (await peopleCol.FindAsync<Person>(new BsonDocument())).ToList(); Assert.Equal(3, people.Count); var data = new PersonDataProvider().GetData().ToList(); Assert.True(people[0].IsEqual(data[0] as Person)); 
Assert.True(people[1].IsEqual(data[1] as Person)); Assert.True(people[2].IsEqual(data[2] as Person)); var test2 = this.Client.GetDatabase("test2"); var col2 = test2.GetCollection<BsonDocument>("col2"); var results2 = (await col2.FindAsync<BsonDocument>(new BsonDocument())).ToList(); Assert.Equal(2, results2.Count); } [Fact] public async Task Build_DontDrop_BuildsDatabase() { IMongoDatabase database = null; try { // Arrange var mongo = new MongoClient(); database = mongo.GetDatabase("testExisting"); var collection = database.GetCollection<Person>("people"); await collection.InsertOneAsync(new Person { FirstName = "John", LastName = "Smith", Age = 33 }); var settings = new SettingsBuilder() .AddConvention(new List<IConvention> { new CamelCaseElementNameConvention()}, x => true) .AddDatabase("testExisting") .WithConnectionString("mongodb://localhost:27017") .AddCollection<Person>("people", false, new PersonDataProvider()) .Build(); var sut = new MongoSessionManager(settings); // Act sut.Build(); // Assert var people = await collection.Find(new BsonDocument()).ToListAsync(); Assert.Equal(4, people.Count); } finally { database?.DropCollection("people"); } } public void Dispose() { this.DbNames?.ForEach(x => this.Client?.DropDatabase(x)); } } }
{ "redpajama_set_name": "RedPajamaGithub" }
8,552
The R&D informatics group of a global biotechnology firm sought to become a world-class service provider. As such, it realized that it must be aware of potential issues and unmet customer needs on an ongoing basis. With this knowledge in hand, the group could properly calibrate plans and resources to address these gaps and ensure customer requirements were met. We led the design, development, and rollout of a comprehensive Voice of Customer program. This program defined the structure, processes, and governance to elicit direct feedback from internal customers regarding the capabilities, informatics, and services the group provided. Furthermore, it guided the analysis of feedback, the generation of insights, the operationalization of solutions, and communication back to customers. The program was successfully rolled out and very well received by the customer organizations. The feedback collected and insight obtained have helped the informatics group align more closely with customers by improving its understanding of customer needs/priorities, conceptualizing/prioritizing initiatives, and highlighting critical risks/issues. It also helped to improve the group's credibility, earning it a reputation as a proactive and action-oriented team.
{ "redpajama_set_name": "RedPajamaC4" }
6,489
Q: Passing variables from MEL to Expression MAYA Hi, I have a slight confusion as to why what I'm writing is not working. In my MEL script editor I'm writing string $Adistance = ("distanceDimensionShape1"+".distance"); expression -s (" $Bdistance = $Adistance; joint2.scaleX = $distance"); but I get this error // Warning: line 1: Converting string "distanceDimensionShape1.distance" to a float value of 0. // I want joint2.scaleX to copy the distanceDimensionShape1.distance float A: Ok, so first I'm not a Mel programmer, I usually do my script in Python so there might be syntax errors in my code. Your problem: Correct me if I'm wrong, but I think you are trying to get the distance attribute of distanceDimensionShape1 into a variable and set it to the scaleX attribute of joint2. Your code: string $Adistance = ("distanceDimensionShape1"+".distance"); expression -s (" $Bdistance = $Adistance; joint2.scaleX = $distance"); What you are doing in your 1st line: You are declaring a string variable containing "distanceDimensionShape1.distance", not getting the distance attribute of distanceDimensionShape1. What you should be doing in your 1st line: Use the getAttr command provided in the Maya docs to retrieve the attribute of your shape. What you are doing in your second line: You are trying to set joint2.scaleX, which is a float value, with a string value. I guess... because I don't know what $distance is, as it appears only here in your code. What you should be doing in your second line: Use setAttr to properly set your attribute. My solution: I hope this will help as we have only a little information on your current problem: float $Adistance = `getAttr distanceDimensionShape1.distance`; setAttr joint2.scaleX $Adistance; The 1st line properly retrieves the selected attribute and stores it in a float. The 2nd line sets your attribute with the retrieved value.
Note: *You can watch which methods are called when doing manipulation in Maya by opening your script editor and checking History > Echo All Commands. This way you will be able to reproduce Maya's behaviour. *Always have an internet browser pointing to Maya's Mel/Python doc when scripting: link *Try to elaborate more when posting a question on SO: *What are you trying to achieve and how you plan to do it *A commented block of code (approx 10-15 lines is fine and gives a good overview of your mel script) *What's your error message Hope this will help. A: If you want to create an expression once then change what connects to it, create it the following way: expression -s ("joint2.scaleX = .I[0]") Then you can connect a specific attribute to that plug like this: connectAttr distanceDimensionShape1.distance expression1.input[0] This is assuming there's a legitimate reason you couldn't just write the expression once directly: expression -s ("joint2.scaleX = distanceDimensionShape1.distance")
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,749
Barattoli In Plastica Con Coperchio (plastic jars with lids): this 3072 × 2048 px image is one of the pictures related to the previous picture in the collection gallery. If you would like the high-resolution version, right-click the picture and choose the "Save as Image" option. You can also browse pictures by collection below this picture, and find other pictures and articles about "Idee Per Barattoli In Plastica Con Coperchio" (ideas for plastic jars with lids) here. We hope it can help you to get information about the picture.
{ "redpajama_set_name": "RedPajamaC4" }
6,584
\section{Introduction} Poonen recently showed that, for a global field $k$ of characteristic different from $2$, the \'etale-Brauer obstruction is insufficient to explain failures of the Hasse principle~\cite{poonen-insufficiency}. This result relied on the existence of a Ch\^atelet surface over $k$ that violates the Hasse principle~\cite[Prop 5.1 and \S 11]{poonen-chatelet}. Poonen's construction fails in characteristic $2$ due to the inseparability of $y^2-az^2$. Classically, Ch\^atelet surfaces have only been studied over fields of characteristic different from $2$. In this paper, we define a Ch\^atelet surface over fields of characteristic $2$ and obtain a result analogous to~\cite[Prop 5.1]{poonen-chatelet}. \begin{thm}\label{thm:HPcounterex} Let $k$ be any global field of characteristic $2$. There exists a Ch\^atelet surface $X$ over $k$ that violates the Hasse principle. \end{thm} The only assumption on characteristic in~\cite{poonen-insufficiency} is in using~\cite[Prop 5.1]{poonen-chatelet} (all other arguments go through exactly as stated after replacing any polynomial of the form $by^2 +az^2$ by its Artin-Schreier analogue, $by^2+byz+az^2$). Therefore, Theorem~\ref{thm:HPcounterex} extends the main result of~\cite{poonen-insufficiency} to global fields of characteristic $2$, thereby showing that the \'etale-Brauer obstruction is insufficient to explain all failures of the Hasse principle over a global field of any characteristic. The proof of Theorem~\ref{thm:HPcounterex} is constructive. The difficulty in the proof lies in finding suitable equations so that the Brauer set is easy to compute and empty. \section{Background}% \subsection{Brauer-Manin obstructions}% The counterexamples to the Hasse principle referred to in Theorem~\ref{thm:HPcounterex} are all explained by the Brauer-Manin obstruction, which we recall here~\cite[Thm. 1]{manin-BMobs}. Let $k$ be a global field and let $\mathbb{A}_k$ be the ad\`ele ring of $k$. 
Recall that for a projective variety $X$, we have the equality $X(\mathbb A_k) = \prod _v X(k_v)$, where $v$ runs over all nontrivial places of $k$. The Brauer group of $X$, denoted $\Br X$, is the group of equivalence classes of Azumaya algebras on $X$. Let $\inv_v$ denote the map $\Br {\mathbb Q}_v \to {\mathbb Q}/{\mathbb Z}$. Define \[ X(\mathbb A_k)^{\Br} := \left \{ (P_v)_v \in X(\mathbb A_k) \colon \sum_v \inv_v\left(\ev_{\mathcal A}(P_v)\right) = 0 \textrm{ for all } \mathcal A \in \Br X\right\} \] By class field theory we have \[ X(k) \subseteq X(\mathbb A_k)^{\Br} \subseteq X(\mathbb A_k). \] Thus, if $X(\mathbb A_k)^{\Br}=\emptyset$, then $X$ has no $k$-points. We say there is a \defi{Brauer-Manin obstruction to the Hasse principle} if $X(\mathbb A_k) \neq \emptyset$ but $X(\mathbb A_k)^{\Br} = \emptyset$. See~\cite[\S 5.2]{skorobogatov-torsors} for more details. \subsection{Ch\^atelet surfaces in characteristic $2$}% A conic bundle $X$ over $\mathbb P^1$ is the zero-locus of a nowhere-vanishing global section $s$ of $\Sym^2(\mathcal E)$ in $\mathbb P\mathcal E$, for some rank $3$ vector sheaf $\mathcal E$ on $\mathbb P^1$. Consider the special case where $\mathcal E = \mathcal O \oplus \mathcal O\oplus \mathcal O(2)$ and $s = s_1 -s_2$ where $s_1$ is a global section of $\Sym^2(\mathcal O \oplus\mathcal O)$ and $s_2$ is a global section of $\mathcal O(2)^{\otimes 2} = \mathcal O(4)$. Take $a\in k^{\times}$ and $P(x)$ a separable polynomial over $k$ of degree $3$ or $4$. If $s_1 = y^2+yz+az^2$ and $s_2 = w^4P(x/w)$, then $X$ contains the affine variety defined by $y^2+yz+az^2 = P(x)$ as an open subset. In this case we say $X$ is the Ch\^atelet surface defined by \[ y^2+yz+az^2=P(x). \] By the same basic argument used in~\cite[Lemma 3.1]{poonen-chatelet}, we can show that $X$ is smooth. See~\cite[\S 3 and \S 5]{poonen-chatelet} for the construction of a Ch\^atelet surface in the case where the characteristic is different from $2$. 
\section{Proof of Theorem~\ref{thm:HPcounterex}} Let $k$ denote a global field of characteristic $2$. Let $\mathbb F$ denote its constant field and let $n$ denote the order of $\mathbb F^{\times}$. Fix a prime ${\mathfrak p}$ of $k$ of odd degree and let $S = \{{\mathfrak p}\}$. Let ${\mathcal O}_{k,S}$ denote the ring of $S$-integers. Let $\gamma \in \mathbb{F}$ be such that $T^2+T+\gamma$ is irreducible in $\mathbb{F}[T]$. By the Chebotarev density theorem~\cite[Thm 13.4, p. 545]{neukirch-ant}, we can find elements $a,b \in \mathcal{O}_{k,S}$ that generate prime ideals of even and odd degree, respectively, such that $a \equiv \gamma \pmod{b^2\mathcal O_{k,S}}$. These conditions imply that $v_{\mathfrak p}(a)$ is even and negative and that $v_{\mathfrak p}(b)$ is odd and negative. Define \begin{eqnarray*} f(x) & = & a^{-4n}bx^2 + x + ab^{-1},\\ g(x) & = & a^{-8n}b^2x^2+ a^{-4n}bx + a^{1-4n} + \gamma. \end{eqnarray*} Note that $g(x) = a^{-4n}bf(x) + \gamma.$ Let $X$ be the Ch\^atelet surface given by \begin{equation*}\tag{$*$}\label{eq:chatelet} y^2 + yz + \gamma z^2 = f(x)g(x). \end{equation*} In Lemma~\ref{lem:local-solvability} we show $X(\mathbb{A}_k) \neq \emptyset$, and in Lemma~\ref{lem:inv-map} we show $X(\mathbb A_k)^{\Br} = \emptyset$. Together, these show that $X$ has a Brauer-Manin obstruction to the Hasse principle. \begin{lemma}\label{lem:local-solvability} The Ch\^atelet surface $X$ has a $k_v$-point for every place $v$. \end{lemma} \begin{proof} Suppose that $v = v_a$. Since $a$ generates a prime of even degree, the left-hand side of~(\ref{eq:chatelet}) factors in $k_v[y,z]$. Therefore, there is a solution over $k_v$. Now suppose that $v\neq v_a$. Since $y^2+yz+\gamma z^2$ is a norm form for an unramified extension of $k_v$ for all $v$, in order to prove the existence of a $k_v$-point, it suffices to find an $x \in k_v$ such that the valuation of the right-hand side of~(\ref{eq:chatelet}) is even. Suppose further that $v \neq v_{\mathfrak p}, v_b$.
Choose $x$ such that $v(x) = -1.$ Then the right-hand side of~(\ref{eq:chatelet}) has valuation $-4$ so there exists a $k_v$-point. Suppose that $v = v_{{\mathfrak p}}$. Let $\pi$ be a uniformizer for $v$ and take $x = \pi a^2/b.$ Then \begin{eqnarray*} f(x) & = & b^{-1}a^{4-4n}\pi^2 + a^2b^{-1}\pi + ab^{-1}. \end{eqnarray*} Since $a$ has negative even valuation and $n \ge 1$, we have $v(f(x)) = v(a^2b^{-1}\pi)$ which is even. Now let us consider \begin{eqnarray*} g(x) & = & a^{4-8n}\pi^2 + a^{2-4n}\pi + a^{1-4n}+\gamma. \end{eqnarray*} By the same conditions mentioned above, all terms except for $\gamma$ have positive valuation. Therefore $v(g(x)) = 0$. Finally suppose that $v = v_{b}$. Take $x = \frac{1}{b} +1$. Then \[ f(x) = \frac{1}{b}\left( a^{-4n}+a+1 + b + a^{-4n}b^2\right). \] Note that by the conditions imposed on $a$, $\left( a^{-4n}+a+1 + b + a^{-4n}b^2\right) \equiv \gamma + b \pmod{b^2 \mathcal O_{k,S}}$. Thus $v(f(x))=-1.$ Now consider \[ g(x) = a^{-8n} + a^{-8n}b^2+ a^{-4n} + a^{-4n}b + a^{1-4n} + \gamma \] modulo $b^2\mathcal O_{k,S}$. By the conditions imposed on $a$, we have \[ g(x) \equiv 1+1+b+\gamma+\gamma \equiv b\pmod{b^2\mathcal O_{k,S}}. \] Thus $v(g(x))=1$, so $v\left(f(x)g(x)\right)$ is even. \end{proof} Let $L = k[T]/(T^2+T+\gamma)$ and let $\mathcal A$ denote the class of the cyclic algebra $\left(L/k, f(x)\right)_2$ in $\Br k(X)$. Using the defining equation of the surface, we can show that $\left(L/k, g(x)\right)_2$ is also a representative for $\mathcal A$. Since $g(x) + a^{-4n}bf(x) = \gamma$ is a nonzero constant, $g(x)$ and $f(x)$ have no common zeroes. Since $\mathcal A$ is the class of a cyclic algebra of order $2$, the algebra $\left(L/k, f(x)/x^2\right)_2$ is another representative for $\mathcal A$. Note that for any point $P$ of $X$, there exists an open neighborhood $U$ containing $P$ such that either $f(x)$, $g(x)$, or $f(x)/x^2$ is a nowhere vanishing regular function on $U$. Therefore, $\mathcal A$ is an element of $\Br X$.
To show that $X(\mathbb A_k)^{\Br} = \emptyset$, we use the continuity of the map $\ev_{\mathcal A}$. While the result is well known, it is difficult to find in the literature, so we give a proof for the reader's convenience. \begin{lemma}\label{lem:ev-cont} Let $k_v$ be a local field and let $V$ be a smooth projective scheme over $k_v$. For any $[\mathcal B] \in \Br_{\Az} V$, \[ \ev_{\mathcal B} : V(k_v) \to \Br k_v \] is continuous for the discrete topology on $\Br k_v$. \end{lemma} \begin{proof} To prove continuity, it suffices to show that $\ev_{\mathcal B}^{-1}(\mathcal B')$ is open for any $\mathcal B'$ in the image of $\ev_{\mathcal B}$. By replacing $[\mathcal B]$ with $[\mathcal B] - [\mathcal B']$, we reduce to showing that $\ev_{\mathcal B}^{-1}(0)$ is open. Fix a representative $\mathcal B$ of the element $[\mathcal B]\in \Br_{\Az} V$. Let $n^2$ denote the rank of $\mathcal B$ and let $f_{\mathcal B}\colon Y_{\mathcal B} \to V$ be the $\PGL_n$-torsor associated to $\mathcal B$. Then we observe that the set $\ev_{\mathcal B}^{-1}(0)$ is equal to $f_{\mathcal B}(Y_{\mathcal B}(k_v)) \subset V(k_v)$. This set is open by the implicit function theorem. \end{proof} \begin{lemma}\label{lem:inv-map} Let $P_v \in X(k_v)$. Then \[ \inv_v(\ev_{\mathcal A}(P_v)) = \begin{cases} 1/2 & \text{if $v=v_b$},\\ 0 & \text{otherwise}. \end{cases} \] Therefore $X(\mathbb A_k)^{\Br} = \emptyset$. \end{lemma} \begin{proof} The surface $X$ contains an open affine subset that can be identified with \[ V(y^2+yz+\gamma z^2 -f(x)g(x)) \subseteq \mathbb A^3. \] Let $X_0$ denote this open subset. Since $\ev_{\mathcal A}$ is continuous by Lemma~\ref{lem:ev-cont} and $\inv_v$ is an isomorphism onto its image, it suffices to prove that $\inv_v$ takes the desired value on the $v$-adically dense subset $X_0(k_v)\subset X(k_v)$. Since $L/k$ is unramified at every place $v$, evaluating the invariant map reduces to computing the parity of the valuation of $f(x)$ or $g(x)$.
Suppose that $v\neq v_a,v_b, v_{\mathfrak p}$. If $v(x_0) < 0$, then by the strong triangle inequality $v(f(x_0))= v(x_0^2)$, which is even. Now suppose that $v(x_0) \geq 0$. Then both $f(x_0)$ and $g(x_0)$ are $v$-adic integers, but since $g(x) - a^{-4n}bf(x) = \gamma$, either $f(x_0)$ or $g(x_0)$ is a $v$-adic unit. Thus, for all $P_v \in X_0(k_v)$, $\inv_v(\mathcal A(P_v))=0$. Suppose that $v = v_a$. Since $a$ generates a prime of even degree, $T^2+T+\gamma$ splits in $k_a$. Therefore, $(L/k, h)_2$ is trivial for any $h \in k_a(X)^\times$, and so $\inv_v(\mathcal A(P_v))=0$ for all $P_v \in X_0(k_v)$. Suppose that $v = v_{\mathfrak p}$. We will use the representative $\left(L/k, g(x)\right)_2$ of $\mathcal A$. If $v(x_0) < v(a^{4n}b^{-1})$ then the quadratic term of $g(x_0)$ has even valuation and dominates the other terms. If $v(x_0) > v(a^{4n}b^{-1})$ then the constant term of $g(x_0)$ has even valuation and dominates the other terms. Now assume that $x_0 = a^{4n}b^{-1}u$, where $u$ is a $v$-adic unit. Then we have \[ g(x_0) = u^2 + u + \gamma + a^{1-4n}. \] Since $\gamma$ was chosen such that $T^2+T+\gamma$ is irreducible in $\mathbb{F}[T]$ and ${\mathfrak p}$ is a prime of odd degree, $T^2 + T + \gamma$ is irreducible in $\mathbb{F}_{\mathfrak p}[T]$. Thus, for any $v$-adic unit $u$, $u^2+u+\gamma \not\equiv 0 \pmod {\mathfrak p}.$ Since $a \equiv 0 \pmod{\mathfrak p}$, this shows $g(x_0)$ is a $v$-adic unit. Hence $\inv_v(\mathcal A(P_v))=0$ for all $P_v \in X_0(k_v)$. Finally suppose that $v = v_b$. We will use the representative $\left(L/k, f(x)\right)_2$ of $\mathcal A$. If $v(x_0) < -1$ then the quadratic term has odd valuation and dominates the other terms in $f(x_0)$. If $v(x_0) > -1$ then the constant term has odd valuation and dominates the other terms in $f(x_0)$. Now assume $x_0 = b^{-1}u$ where $u$ is any $v$-adic unit. Then we have \[ f(x_0) = \frac{1}{b}\left( a^{-4n}u^2 + u + a\right).
\] It suffices to show that $a^{-4n}u^2 + u + a \not \equiv 0 \pmod{b\mathcal O_{k,S}}.$ Since $a \equiv \gamma \pmod{b\mathcal O_{k,S}}$ and $a^{-4n} \equiv \gamma^{-4n} = 1 \pmod{b\mathcal O_{k,S}}$, we have \[ a^{-4n}u^2 + u + a \equiv \overline{u}^2 + \overline{u} +\gamma. \] Using the same argument as in the previous case, we see that $a^{-4n}u^2 + u + a \not \equiv 0 \pmod{b\mathcal O_{k,S}}$ and thus $v(f(x_0))=-1$. Therefore $\inv_v(\mathcal A(P_v))=\frac{1}{2}$ for all $P_v \in X_0(k_v)$. \end{proof} \section*{Acknowledgements} I thank my advisor, Bjorn Poonen, for suggesting the problem and for many helpful conversations. I thank Olivier Wittenberg for a sketch of the proof of Lemma~\ref{lem:ev-cont} and Daniel Erman for comments improving the exposition. \begin{bibdiv} \begin{biblist} \bib{manin-BMobs}{article}{ author={Manin, Y. I.}, title={Le groupe de Brauer-Grothendieck en g\'eom\'etrie diophantienne}, conference={ title={Actes du Congr\`es International des Math\'ematiciens}, address={Nice}, date={1970}, }, book={ publisher={Gauthier-Villars}, place={Paris}, }, date={1971}, pages={401--411}, review={\MR{0427322 (55 \#356)}}, } \bib{neukirch-ant}{book}{ author={Neukirch, J{\"u}rgen}, title={Algebraic number theory}, series={Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, volume={322}, note={Translated from the 1992 German original and with a note by Norbert Schappacher; With a foreword by G.
Harder}, publisher={Springer-Verlag}, place={Berlin}, date={1999}, pages={xviii+571}, isbn={3-540-65399-6}, review={\MR{1697859 (2000m:11104)}}, } \bib{poonen-chatelet}{misc}{ author={Poonen, Bjorn}, title={Existence of rational points on smooth projective varieties}, date={2008-06-04}, note={Preprint, to appear in {\em J.\ Europ.\ Math.\ Soc}}, } \bib{poonen-insufficiency}{misc}{ author={Poonen, Bjorn}, title={Insufficiency of the Brauer-Manin obstruction applied to \'etale covers}, date={2008-08-10}, note={Preprint}, } \bib{skorobogatov-torsors}{book}{ author={Skorobogatov, Alexei}, title={Torsors and rational points}, series={Cambridge Tracts in Mathematics}, volume={144}, publisher={Cambridge University Press}, place={Cambridge}, date={2001}, pages={viii+187}, isbn={0-521-80237-7}, review={\MR{1845760 (2002d:14032)}}, } \end{biblist} \end{bibdiv} \end{document}
Can Congress trust Trump to implement immigration reform? By David Bier, opinion contributor — 06/02/19 02:00 PM EDT As President Donald Trump recently rolled out yet another plan to reform the immigration system, new data reveal that his administration is still ramping up denials of applications by legal immigrants. Congress has not passed any legislation, but the denial rate for immigration applications — ranging from family reunification to travel authorization — has increased in all but one of the eight fiscal quarters under President Trump. This raises an important question for Congress. Even if it adopted the reforms that the president wants, would he implement them? Overall, the denial rate in the first quarter of Fiscal Year (FY) 2019 — the most recent quarter with available numbers — was 80 percent higher than in the last quarter under President Obama, the first quarter of FY 2017 (October to December 2016). The higher denial rate means that U.S. Citizenship and Immigration Services (USCIS) turned down more than 72,000 additional applicants for benefits than it would have under the prior denial rate. More than 13 percent of applicants were rejected in the first quarter of 2019, compared to just 7 percent in the first quarter of 2017. In no other quarter since the government began publishing quarterly denial data (2013) has the denial rate gone much past 10 percent. But under Trump, it has exceeded that level in every quarter since the first quarter of FY 2018. These figures include USCIS applications for all types of immigration benefits except for citizenship and DACA or TPS — the two programs that the president has tried to close almost completely.
They also don't count denials of visa applications made with the State Department. Denial rates increased dramatically for asylum applications as well as for victims of trafficking. In addition, the government is now turning down family-based green card applications at a 30 percent higher clip than during President Obama's final quarter. Employers requesting foreign temporary workers saw their denial rate increase from 18 percent to 28 percent. Immigrants seeking advance parole — which allows them to travel and reenter — saw their denial rate spike 75 percent. The higher denials come even as applications have fallen 22 percent. Nearly 400,000 fewer applications were filed with the government in the first quarter of FY 2019 than in the first quarter of FY 2017. It could be that fewer people want to apply right now: the government is driving immigrants away. The causes of the increased denials vary depending on the category, but virtually nothing has escaped the attention of this administration. In general, U.S. Citizenship and Immigration Services, the agency responsible for immigration benefits applications, has made those forms much longer and more complicated. It has introduced complex, vague new "vetting" questions designed to trip up applicants and lead to rejections. USCIS head Francis Cissna has made it clear that he expects more denials from his employees. He has repeatedly stated he wants to focus on "restoring the integrity of the immigration system" — as if it didn't have integrity before he arrived. To that end, he has adopted a "look-over-the-shoulders" policy for adjudicators that makes them more fearful of approvals than denials. As a result, employers have seen a 45 percent increase in the number of requests for additional evidence to support a request for foreign workers. These evidence demands are basically an initial denial of the employer's request that can then be overcome with more evidence.
But then last year, the agency made it easier to just issue a denial without allowing for more evidence. Obviously, these trends are worrying for immigrants as well as their U.S. families and employers. They should also worry U.S. workers and consumers who, the weight of the academic research shows, benefit from immigration. Workers are one of the primary fuels of economic growth. With unemployment already near record lows and the tax cuts already implemented, the administration has few other ways to grow the economy quickly. President Trump told Congress in his State of the Union address that he wanted legal immigrants in "the largest numbers ever." But his administration is not carrying out that mission. The fact that immigration denials keep rising even without congressional action should make legislators wary of how the administration would enforce any reforms that Congress enacts. It is making its goal clear every day: fewer immigrants — legal or otherwise. David Bier is an immigration policy analyst at the Cato Institute.
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <title>UltraSolvers | Dashboard</title> <!-- Tell the browser to be responsive to screen width --> <meta content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no" name="viewport"> <!-- Bootstrap 3.3.6 --> <link rel="stylesheet" href="../../bootstrap/css/bootstrap.min.css"> <!-- Font Awesome --> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.5.0/css/font-awesome.min.css"> <!-- Ionicons --> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/ionicons/2.0.1/css/ionicons.min.css"> <!-- Theme style --> <link rel="stylesheet" href="../../dist/css/AdminLTE.min.css"> <!-- AdminLTE Skins. Choose a skin from the css/skins folder instead of downloading all of them to reduce the load. --> <link rel="stylesheet" href="../../dist/css/skins/all-skins.min.css"> <!-- iCheck --> <link rel="stylesheet" href="../../plugins/iCheck/flat/blue.css"> <!-- Morris chart --> <link rel="stylesheet" href="../../plugins/morris/morris.css"> <!-- jvectormap --> <link rel="stylesheet" href="../../plugins/jvectormap/jquery-jvectormap-1.2.2.css"> <!-- Date Picker --> <link rel="stylesheet" href="../../plugins/datepicker/datepicker3.css"> <!-- Daterange picker --> <link rel="stylesheet" href="../../plugins/daterangepicker/daterangepicker.css"> <!-- bootstrap wysihtml5 - text editor --> <link rel="stylesheet" href="../../plugins/bootstrap-wysihtml5/bootstrap3-wysihtml5.min.css"> <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries --> <!-- WARNING: Respond.js doesn't work if you view the page via file:// --> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script> <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script> <![endif]--> </head> <body class="hold-transition skin-blue sidebar-mini"> <div class="wrapper"> <header 
class="main-header"> <!-- Logo --> <a href="index2.html" class="logo"> <!-- mini logo for sidebar mini 50x50 pixels --> <span class="logo-mini"><b>U</b>SV</span> <!-- logo for regular state and mobile devices --> <span class="logo-lg"><b>ULTRA</b>Solvers</span> </a> <!-- Header Navbar: style can be found in header.less --> <nav class="navbar navbar-static-top"> <!-- Sidebar toggle button--> <a href="#" class="sidebar-toggle" data-toggle="offcanvas" role="button"> <span class="sr-only">Toggle navigation</span> </a> <div class="navbar-custom-menu"> <ul class="nav navbar-nav"> <!-- Control Sidebar Toggle Button --> <li> <a href="#" data-toggle="control-sidebar"><i class="fa fa-gears"></i></a> </li> </ul> </div> </nav> </header> <!-- Left side column. contains the logo and sidebar --> <aside class="main-sidebar"> <!-- sidebar: style can be found in sidebar.less --> <section class="sidebar"> <!-- Sidebar user panel --> <!-- search form --> <form action="#" method="get" class="sidebar-form"> <div class="input-group"> <input type="text" name="q" class="form-control" placeholder="Search..."> <span class="input-group-btn"> <button type="submit" name="search" id="search-btn" class="btn btn-flat"><i class="fa fa-search"></i> </button> </span> </div> </form> <!-- /.search form --> <!-- sidebar menu: : style can be found in sidebar.less --> <ul class="sidebar-menu"> <li class="header">Menu</li> <li class="active treeview"> <a href="#"> <i class="fa fa-dashboard"></i> <span>Categorias</span> <span class="pull-right-container"> <i class="fa fa-angle-left pull-right"></i> </span> </a> <ul class="treeview-menu"> <li class="active"><a href="../../index.html"><i class="fa fa-circle-o"></i> Todas</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Química</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Engenharia</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Finanças</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Contabilidade</a></li> <li><a href="#"><i 
class="fa fa-circle-o"></i> Matemática</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Administração</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Vendas</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Marketing</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Informatica</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Lógica</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Outros</a></li> <li><a href="#"><i class="fa fa-circle-o"></i> Dashboard v2</a></li> </ul> </li> </ul> </section> <!-- /.sidebar --> </aside> <!-- Content Wrapper. Contains page content --> <div class="content-wrapper"> <!-- Content Header (Page header) --> <section class="content-header"> <h1> General Form Elements <small>Preview</small> </h1> <ol class="breadcrumb"> <li><a href="#"><i class="fa fa-dashboard"></i> Home</a></li> <li><a href="#">Forms</a></li> <li class="active">General Elements</li> </ol> </section> <!-- Main content --> <section class="content"> <div class="row"> <!-- left column --> <div class="col-md-12"> <!-- general form elements --> <div class="box box-primary"> <div class="box-header with-border"> <h3 class="box-title">Quick Example</h3> </div> <!-- /.box-header --> <!-- form start --> <form role="form"> <div class="box-body"> <div class="form-group"> <label for="exampleInputEmail1">Email address</label> <input type="email" class="form-control" id="exampleInputEmail1" placeholder="Enter email"> </div> <div class="form-group"> <label for="exampleInputPassword1">Name</label> <input type="password" class="form-control" id="exampleInputPassword1" placeholder="Your Name"> </div> </div> <div class="box"> <div class="box-header"> <h3 class="box-title">Bootstrap WYSIHTML5 <small>Simple and fast</small> </h3> <!-- tools box --> <div class="pull-right box-tools"> <button type="button" class="btn btn-default btn-sm" data-widget="collapse" data-toggle="tooltip" title="Collapse"> <i class="fa fa-minus"></i></button> <button type="button" 
class="btn btn-default btn-sm" data-widget="remove" data-toggle="tooltip" title="Remove"> <i class="fa fa-times"></i></button> </div> <!-- /. tools --> </div> <!-- /.box-header --> <div class="box-body pad"> <textarea class="textarea" placeholder="Place some text here" style="width: 100%; height: 200px; font-size: 14px; line-height: 18px; border: 1px solid #dddddd; padding: 10px;"></textarea> </div> </div> <!-- /.box-body --> <div class="box-footer"> <button type="submit" class="btn btn-primary">Submit</button> </div> </form> </div> <!-- /.box --> </div> <!--/.col (left) --> <!-- right column --> <!-- <div class="col-md-6"> </div> --> <!--/.col (right) --> </div> <!-- /.row --> </section> <!-- /.content --> </div> <!-- /.content-wrapper --> <footer class="main-footer"> <div class="pull-right hidden-xs"> <b>Version</b> 2.3.8 </div> <strong>Copyright &copy; 2014-2016 <a href="http://almsaeedstudio.com">Almsaeed Studio</a>.</strong> All rights reserved. </footer> <!-- Control Sidebar --> <aside class="control-sidebar control-sidebar-dark"> <!-- Create the tabs --> <ul class="nav nav-tabs nav-justified control-sidebar-tabs"> <li><a href="#control-sidebar-home-tab" data-toggle="tab"><i class="fa fa-home"></i></a></li> <li><a href="#control-sidebar-settings-tab" data-toggle="tab"><i class="fa fa-gears"></i></a></li> </ul> <!-- Tab panes --> <div class="tab-content"> <!-- Home tab content --> <div class="tab-pane" id="control-sidebar-home-tab"> <h3 class="control-sidebar-heading">Recent Activity</h3> <ul class="control-sidebar-menu"> <li> <a href="javascript:void(0)"> <i class="menu-icon fa fa-birthday-cake bg-red"></i> <div class="menu-info"> <h4 class="control-sidebar-subheading">Langdon's Birthday</h4> <p>Will be 23 on April 24th</p> </div> </a> </li> <li> <a href="javascript:void(0)"> <i class="menu-icon fa fa-user bg-yellow"></i> <div class="menu-info"> <h4 class="control-sidebar-subheading">Frodo Updated His Profile</h4> <p>New phone
+1(800)555-1234</p> </div> </a> </li> <li> <a href="javascript:void(0)"> <i class="menu-icon fa fa-envelope-o bg-light-blue"></i> <div class="menu-info"> <h4 class="control-sidebar-subheading">Nora Joined Mailing List</h4> <p>nora@example.com</p> </div> </a> </li> <li> <a href="javascript:void(0)"> <i class="menu-icon fa fa-file-code-o bg-green"></i> <div class="menu-info"> <h4 class="control-sidebar-subheading">Cron Job 254 Executed</h4> <p>Execution time 5 seconds</p> </div> </a> </li> </ul> <!-- /.control-sidebar-menu --> <h3 class="control-sidebar-heading">Tasks Progress</h3> <ul class="control-sidebar-menu"> <li> <a href="javascript:void(0)"> <h4 class="control-sidebar-subheading"> Custom Template Design <span class="label label-danger pull-right">70%</span> </h4> <div class="progress progress-xxs"> <div class="progress-bar progress-bar-danger" style="width: 70%"></div> </div> </a> </li> <li> <a href="javascript:void(0)"> <h4 class="control-sidebar-subheading"> Update Resume <span class="label label-success pull-right">95%</span> </h4> <div class="progress progress-xxs"> <div class="progress-bar progress-bar-success" style="width: 95%"></div> </div> </a> </li> <li> <a href="javascript:void(0)"> <h4 class="control-sidebar-subheading"> Laravel Integration <span class="label label-warning pull-right">50%</span> </h4> <div class="progress progress-xxs"> <div class="progress-bar progress-bar-warning" style="width: 50%"></div> </div> </a> </li> <li> <a href="javascript:void(0)"> <h4 class="control-sidebar-subheading"> Back End Framework <span class="label label-primary pull-right">68%</span> </h4> <div class="progress progress-xxs"> <div class="progress-bar progress-bar-primary" style="width: 68%"></div> </div> </a> </li> </ul> <!-- /.control-sidebar-menu --> </div> <!-- /.tab-pane --> <!-- Stats tab content --> <div class="tab-pane" id="control-sidebar-stats-tab">Stats Tab Content</div> <!-- /.tab-pane --> <!-- Settings tab content --> <div class="tab-pane" 
id="control-sidebar-settings-tab"> <form method="post"> <h3 class="control-sidebar-heading">General Settings</h3> <div class="form-group"> <label class="control-sidebar-subheading"> Report panel usage <input type="checkbox" class="pull-right" checked> </label> <p> Some information about this general settings option </p> </div> <!-- /.form-group --> <div class="form-group"> <label class="control-sidebar-subheading"> Allow mail redirect <input type="checkbox" class="pull-right" checked> </label> <p> Other sets of options are available </p> </div> <!-- /.form-group --> <div class="form-group"> <label class="control-sidebar-subheading"> Expose author name in posts <input type="checkbox" class="pull-right" checked> </label> <p> Allow the user to show his name in blog posts </p> </div> <!-- /.form-group --> <h3 class="control-sidebar-heading">Chat Settings</h3> <div class="form-group"> <label class="control-sidebar-subheading"> Show me as online <input type="checkbox" class="pull-right" checked> </label> </div> <!-- /.form-group --> <div class="form-group"> <label class="control-sidebar-subheading"> Turn off notifications <input type="checkbox" class="pull-right"> </label> </div> <!-- /.form-group --> <div class="form-group"> <label class="control-sidebar-subheading"> Delete chat history <a href="javascript:void(0)" class="text-red pull-right"><i class="fa fa-trash-o"></i></a> </label> </div> <!-- /.form-group --> </form> </div> <!-- /.tab-pane --> </div> </aside> <!-- /.control-sidebar --> <!-- Add the sidebar's background. 
This div must be placed immediately after the control sidebar --> <div class="control-sidebar-bg"></div> </div> <!-- ./wrapper --> <!-- jQuery 2.2.3 --> <script src="../../plugins/jQuery/jquery-2.2.3.min.js"></script> <!-- Bootstrap 3.3.6 --> <script src="../../bootstrap/js/bootstrap.min.js"></script> <!-- FastClick --> <script src="../../plugins/fastclick/fastclick.js"></script> <!-- AdminLTE App --> <script src="../../dist/js/app.min.js"></script> <!-- AdminLTE for demo purposes --> <script src="../../dist/js/demo.js"></script> <!-- Bootstrap WYSIHTML5 --> <script src="../../plugins/bootstrap-wysihtml5/bootstrap3-wysihtml5.all.min.js"></script> <script> $(function () { // Initialize the Bootstrap WYSIHTML5 text editor on the .textarea element $(".textarea").wysihtml5(); }); </script> </body> </html>
package org.apache.cassandra.io.util; import java.io.File; import org.apache.cassandra.db.Directories; import org.apache.cassandra.utils.WrappedRunnable; public abstract class DiskAwareRunnable extends WrappedRunnable { /** * Run this task after selecting the optimal disk for it */ protected void runMayThrow() throws Exception { long writeSize; Directories.DataDirectory directory; while (true) { writeSize = getExpectedWriteSize(); directory = getWriteableLocation(); if (directory != null || !reduceScopeForLimitedSpace()) break; } if (directory == null) throw new RuntimeException("Insufficient disk space to write " + writeSize + " bytes"); directory.currentTasks.incrementAndGet(); directory.estimatedWorkingSize.addAndGet(writeSize); try { runWith(getDirectories().getLocationForDisk(directory)); } finally { directory.estimatedWorkingSize.addAndGet(-1 * writeSize); directory.currentTasks.decrementAndGet(); } } protected abstract Directories.DataDirectory getWriteableLocation(); /** * Get sstable directories for the CF. * @return Directories instance for the CF. */ protected abstract Directories getDirectories(); /** * Executes this task on given {@code sstableDirectory}. * @param sstableDirectory sstable directory to work on */ protected abstract void runWith(File sstableDirectory) throws Exception; /** * Get expected write size to determine which disk to use for this task. * @return expected size in bytes this task will write to disk. */ public abstract long getExpectedWriteSize(); /** * Called if no disk is available with free space for the full write size. * @return true if the scope of the task was successfully reduced. */ public boolean reduceScopeForLimitedSpace() { return false; } }
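The selection-and-accounting pattern that `DiskAwareRunnable` encodes can be illustrated with a small self-contained sketch. This is not Cassandra's actual implementation: the class, field, and method names below (`DiskAwareSketch`, `DataDir`, `runWithAccounting`) are hypothetical stand-ins for `Directories.DataDirectory` and its atomic counters, and the thread-safe `AtomicLong`/`AtomicInteger` bookkeeping is simplified to plain fields for readability.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch of the disk-aware task pattern: pick a data directory
 * that can absorb the expected write, account for the in-flight work, and
 * release the accounting when the task finishes (even if it throws).
 */
class DiskAwareSketch {
    /** Per-directory bookkeeping, loosely mirroring Directories.DataDirectory. */
    static final class DataDir {
        final String path;
        long freeBytes;
        long estimatedWorkingSize; // bytes promised to in-flight tasks
        int currentTasks;

        DataDir(String path, long freeBytes) {
            this.path = path;
            this.freeBytes = freeBytes;
        }

        /** Free space minus what in-flight tasks have already reserved. */
        long available() {
            return freeBytes - estimatedWorkingSize;
        }
    }

    /**
     * Choose the directory with the most head-room that still fits writeSize;
     * a null return plays the role of "insufficient disk space" above.
     */
    static DataDir selectDirectory(List<DataDir> dirs, long writeSize) {
        DataDir best = null;
        for (DataDir d : dirs) {
            if (d.available() >= writeSize
                    && (best == null || d.available() > best.available())) {
                best = d;
            }
        }
        return best;
    }

    /** Run a task with the same acquire / try / finally-release accounting. */
    static void runWithAccounting(List<DataDir> dirs, long writeSize, Runnable task) {
        DataDir dir = selectDirectory(dirs, writeSize);
        if (dir == null)
            throw new RuntimeException("Insufficient disk space to write " + writeSize + " bytes");
        dir.currentTasks++;
        dir.estimatedWorkingSize += writeSize;
        try {
            task.run();
        } finally {
            // Always undo the accounting, mirroring the finally block above.
            dir.estimatedWorkingSize -= writeSize;
            dir.currentTasks--;
        }
    }
}
```

The essential points carried over from the original are the `null` return signalling insufficient space and the `try`/`finally` pair that guarantees the estimated working size is released even when the task throws.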
\section{INTRODUCTION} Theories with massive gravitons provide a natural modification of the general relativity (GR) in the infrared regime and can be used to explain the current acceleration of our Universe \cite{1538-3881-116-3-1009,0004-637X-517-2-565}. Such theories have a long history pioneered by the work of Fierz and Pauli \cite{Fierz:1939ix} and marked by subsequent discoveries of many interesting features, such as the vDVZ discontinuity \cite{vanDam:1970vg,Zakharov:1970cc}, the Vainshtein mechanism \cite{Vainshtein:1972sx}, the Boulware-Deser ghost \cite{Boulware:1973my}, culminating in the discovery of the ghost-free massive gravity \cite{deRham:2010kj} and ghost-free bigravity \cite{Hassan2012} theories. The ghost-free bigravity theory is the most interesting physically. It contains two dynamical metrics, usually called $g_{\mu\nu}$ and $f_{\mu\nu}$, describing together two gravitons, one of which is massive and the other is massless. The theory admits self-accelerating cosmological solutions \cite{Volkov:2011an, vonStrauss:2011mq, Comelli:2011zm} whose properties can agree with the observations \cite{Akrami:2015qga, Mortsell:2015exa, Aoki:2015xqa, Luben:2018ekw, Hogas:2019ywm}, with the $\Lambda$ term mimicked by the graviton mass. The theory also admits solutions describing black holes \cite{Volkov2012}, wormholes \cite{Sushkov:2015fma}, and other interesting solutions (see \cite{Volkov:2013roa} for a review). In what follows we shall be discussing black holes. The bigravity black holes can be either ``bald" or ``hairy". The bald black holes are described by the known GR metrics. Such solutions were first discovered long ago \cite{Isham:1977rj, Gurses:1979qb, Gurses:1981an} within the old bigravity theory inspired by physics of strong interactions \cite{Isham:1971gm}. 
In the simplest case, their two metrics are both Schwarzschild-(anti-)de Sitter and can be conveniently represented in the Eddington-Finkelstein coordinates as \cite{Babichev:2014oua,Volkov:2014ooa} \begin{equation} \label{b1} g_{\mu\nu}dx^\mu dx^\nu=-\Sigma_g\, dv^2+2dv dr +r^2 d\Omega^2,~~f_{\mu\nu}dx^\mu dx^\nu=C^2\left(-\Sigma_f\, dv^2+2dv dr +r^2 d\Omega^2\right), \end{equation} with $\Sigma_g=1-2M_g/r-\Lambda_g r^2/3$ and $\Sigma_f=1-2M_f/r-\Lambda_f r^2/3$, where the values of the constants $C,\Lambda_g,\Lambda_f$ are fixed by the field equations. Passing to the Schwarzschild coordinates, one can diagonalize one of the two metrics, but not both of them simultaneously. Such solutions have been much studied \cite{Berezhiani:2008nr}, they exist also within the ghost-free bigravity \cite{Comelli:2011wq}, and they admit the charged \cite{Babichev:2014fka} and spinning \cite{Babichev:2014tfa} generalizations. These solutions also admit the massive gravity limit where $M_f=\Lambda_f=0$, hence the $f$ metric is flat while the $g$ metric remains nontrivial, and this yields all possible static black holes in the ghost-free massive gravity theory (it seems there can also be time-dependent black holes in this theory \cite{Rosen:2018lki}). Next, it was noticed in \cite{Volkov2012} that if the parameters of the potential are suitably adjusted, then the ghost-free bigravity reduces to vacuum GR when the two metrics coincide, $g_{\mu\nu}=f_{\mu\nu}$. Therefore, all vacuum black holes, as for example the Schwarzschild solution \textcolor{black}{(to be called bald to distinguish it from the ``hairy Schwarzschild'' to be described below)}, \begin{equation} \label{b2} g_{\mu\nu}dx^\mu dx^\nu=f_{\mu\nu}dx^\mu dx^\nu=-\left(1-\frac{2M}{r}\right)\, dt^2+ \frac{dr^2}{1-{2M}/{r} }+r^2 d\Omega^2, \end{equation} or its spinning generalization can be embedded into the ghost-free bigravity.
A $\Lambda$ term can be included by assuming the two metrics to be proportional to each other \cite{Volkov2012,Volkov:2013roa,Volkov:2014ooa}. Such solutions are different from solutions of type \eqref{b1}; for example, they do not admit the massive gravity limit. In addition, the solution \eqref{b1} is linearly stable \cite{Babichev:2015zub}, whereas \eqref{b2} is unstable (for small $M$) with respect to fluctuations which do not respect the condition $g_{\mu\nu}=f_{\mu\nu}$ \cite{Babichev:2013una}. These facts essentially exhaust the available knowledge of the bald black holes in the bigravity theory. At the same time, more general hairy black holes not described by the classical GR metrics can exist as well. The first example of hairy black holes in physics was found long ago \cite{Volkov:1989fi}, followed by many other examples (see \cite{Volkov:1998cc,Volkov:2016ehx} for a review), so that nowadays hairy black holes are considered as something usual. One may therefore wonder if they exist in the ghost-free bigravity theory as well. A systematic analysis of hairy black holes in the ghost-free massive bigravity was first carried out by one of the authors \cite{Volkov2012}, but none of the solutions found were asymptotically flat. In that analysis both metrics were assumed to be static and spherically symmetric. If they are not simultaneously diagonal, then the most general solution is given by \eqref{b1}. If they are simultaneously diagonal, then one of the solutions is given by \eqref{b2}, but other more general black hole solutions exist as well. Such solutions possess an event horizon -- a hypersurface that is null simultaneously with respect to $g_{\mu\nu}$ and $f_{\mu\nu}$. Therefore, both metrics share the horizon \cite{Deffayet:2011rh, Banados:2011hk}, but its radius $r_H^g$ measured by $g_{\mu\nu}$ can be different from the radius $r_H^f$ measured by $f_{\mu\nu}$.
One can set $r_H\equiv r^g_H$ to unity by rescaling the system (rescaling at the same time the graviton mass), but the ratio $u=r_H^g/r_H^f$ is scale invariant. Choosing a value of $u$ completely determines the boundary conditions at the horizon, which allows one to integrate the equations starting from the horizon toward large values of the radial coordinate $r$. As a result, the set of all black hole solutions can be labeled by just one parameter $u$, and integrating the equations for different values of $u$ gives all possible black holes. Choosing $u=1$ yields the Schwarzschild solution \eqref{b2}. For $u\neq 1$ one finds more general black holes supporting a massive graviton ``hair'' outside the horizon, but in the asymptotic region their two geometries do not become flat \cite{Volkov2012}. The latter property is generic: trying different values of $u$ always gives either solutions with a curvature singularity somewhere outside the horizon, or solutions which exist for all values of $r$ but show nonflat asymptotics. At the same time, these facts do not completely exclude the possibility of some other asymptotically flat black hole solutions different from \eqref{b2}, which would correspond to some special values of $u$ different from $u=1$. However, even if such solutions exist, one does not find them by brute force, simply by trying many different values of $u$, and the reason is the following. The field equations reduce to three coupled first order ordinary differential equations (ODEs) \cite{Volkov2012}, whose {\it local} solution at large $r$, linearized around flat space, has schematically the following structure ($A,B,C$ being integration constants): \begin{equation} \label{large} \frac{A}{r}+B e^{-r}+C e^{+r}.
\end{equation} Here $r={\rm mr}$ is the dimensionless radial coordinate, with ${\rm m}$ and ${\rm r}$ being the graviton mass and the dimensionful radial coordinate (we assume the graviton mass to have the dimension of inverse length, so that this is rather the inverse Compton wavelength ${\rm m}c/\hbar$). The Newtonian mode $A/r$ in \eqref{large} arises due to the massless graviton present in the theory, while the decaying mode $B e^{-r}$ and the growing mode $C e^{+r}$ are due to the massive graviton. Now, when integrating from the horizon, the growing mode $C e^{+r}$ will inevitably be present in the numerical solution at large $r$ and will drive the solution away from flat space. This is why one does not find asymptotically flat solutions in this way. To get them, one should suppress the growing mode by setting $C=0$, hence the local solutions at large $r$ will comprise a two-parameter set labeled by $A$ and $B$. The next step is to numerically extend this local solution toward small $r$, extending at the same time the local solution at the horizon labeled by $u$ toward large $r$, until the two solutions meet at some intermediate point. For these solutions to agree, three (the number of the ODEs) matching conditions should be satisfied by adjusting the three parameters $u,A,B$. In practice, this can be done within the numerical multiple-shooting method \cite{Press:2007:NRE:1403886}. Once $u,A,B$ are adjusted, this yields global asymptotically flat solutions. The difficulty, however, is that the numerical scheme requires some input values for $u,A,B$, which should be close to the ``true values", otherwise the iterations do not converge. It was {\it a priori} unclear how to choose these input values, whereas choosing them randomly does not yield convergence. Some additional information was needed to properly choose these input values, but at the time of writing the article \cite{Volkov2012} such information was not available.
As a result, the conclusion of that work was that asymptotically flat hairy black holes may exist, but they should be parametrically isolated from the Schwarzschild solution \eqref{b2}. It is interesting that if one adds an extra matter source to obtain not a black hole but a regular object like a star, then asymptotically flat solutions can easily be constructed, as was shown first in \cite{Volkov2012} and later in \cite{Enander:2015kda, Aoki:2016eov}. The black hole case is more difficult. Fortunately, the additional information was later obtained within the analysis of perturbations of the Schwarzschild solution \eqref{b2} \cite{Babichev:2013una,Brito:2013wya}. Denoting by $g^S_{\mu\nu}$ the Schwarzschild metric, the two perturbed metrics are $g_{\mu\nu}=g^S_{\mu\nu}+\delta g_{\mu\nu}$ and $f_{\mu\nu}=g^S_{\mu\nu}+\delta f_{\mu\nu}$. Linearizing the field equations with respect to $\delta g_{\mu\nu}$ and $\delta f_{\mu\nu}$, one finds that perturbations grow in time and hence the background Schwarzschild black hole is unstable if $r_H\equiv {\rm mr_H}\leq 0.86$. On the other hand, for $r_H>0.86$ the perturbations are bounded in time so that the background is stable \cite{Babichev:2013una}. Curiously, the mathematical structure of the perturbation equations is identical \cite{Babichev:2013una} to that previously discovered by Gregory and Laflamme (GL) in their analysis of black strings in $D=5$ GR \cite{Gregory:1993vy}. We shall therefore refer to the Schwarzschild solution with $r_H=0.86$ as the GL point. This change of stability at the GL point suggests that for $r_H$ close to $0.86$ there could be two different asymptotically flat solutions: the Schwarzschild solution \eqref{b2} and also some other solution which can be approximated by the zero perturbation mode that exists at the GL point. This new solution is different from Schwarzschild although close to it, hence it describes an asymptotically flat hairy black hole.
To get this solution within the numerical scheme outlined above, one should choose the input parameters $u,A,B$ to be close to the GL point, $u\approx 1$, $r_H\approx 0.86$, $A\approx -r_H/2$, $B\approx 0$, and it is this essential piece of information that was missing when writing Ref.\cite{Volkov2012}. As soon as the solution is obtained, one can change the value of $r_H$ iteratively, thus obtaining ``fully fledged" hairy black holes which may deviate considerably from the parent Schwarzschild solution. Remarkably, this program was accomplished by the Portuguese group \cite{Brito:2013xaa} by explicitly constructing asymptotically flat hairy black holes in the theory in the region {\it below} the GL point, for $r_H< 0.86$. However, some time later spherically symmetric bigravity solutions were analyzed by the Swedish group \cite{Torsello:2017cmz}, and it was claimed that the Schwarzschild solution \eqref{b2} represents the unique asymptotically flat black hole in the theory. As a result, a controversy emerged and it was unclear if asymptotically flat hairy black holes exist or not. We have therefore reconsidered the issue ourselves and below are our results. In brief, we were able to construct asymptotically flat hairy black holes in the theory, thereby confirming the finding of \cite{Brito:2013xaa}. We apply a very carefully designed numerical scheme to exclude any ambiguities and to take into account the arguments of \cite{Torsello:2017cmz}. In fact, these arguments correctly point to some drawbacks of the numerical analysis commonly present in many publications. From the mathematical viewpoint, one has to solve a nonlinear boundary value problem where the boundaries are singular points of the differential equations (horizon and infinity).
Since it is difficult to approach such points numerically, various approximations are used in practice, which may give reasonable results in some cases but inevitably increase the numerical errors and lead to numerical instability. Only very rarely does one find in the literature a correct treatment of the problem (apart from the relaxation approach), as for example in \cite{Breitenlohner:1991aa,BFM,Breitenlohner:1993es}. We therefore pay special attention to the details of our numerical scheme and describe them in a very explicit way. From the methodological viewpoint, our paper gives an example of how one should properly tackle a nonlinear boundary value problem with singular endpoints. We cross-check our results with two different numerical codes written independently by two of us. Our results strongly suggest that the hairy solutions exist and are indeed asymptotically flat and regular. We discover many new features of these solutions; for example, we obtain hairy black holes also {\it above} the GL point for $r_H>0.86$, and we study for the first time the perturbative stability of the solutions. We were able to identify regions in the parameter space which correspond to stable solutions, and we determined subsets of these regions which agree with the constraints imposed by the cosmological observations. We find that the viable hairy black holes have a g metric that is very close to Schwarzschild, while their f metric is different. \textcolor{black}{ Therefore, if the bigravity theory indeed describes physics, the astrophysical black holes should hide the hair in their f-metric. We find masses of such black holes to range from $\sim 0.2\,{\rm M}_\odot$ to $\sim 0.3\times 10^6\,{\rm M}_\odot$. } We have also carefully considered the arguments of Ref. \cite{Torsello:2017cmz}. In brief, this work seems to agree that the hairy solutions exist but judges them physically unacceptable.
\textcolor{black}{We analyze the arguments and we think some of them are interesting and should be taken into consideration, but none of them is decisive, so that they should rather be viewed as a conjecture. To understand its origin, we notice that the numerical procedure adopted in that work is not suitable for suppressing the mode growing at infinity, which generates artificial numerical singularities. This must be the reason why the solutions were judged unacceptable in that work.} However, no singularities appear within the properly chosen numerical scheme, and we specially adapt our scheme to be able to cope with the arguments of \cite{Torsello:2017cmz}. We shall postpone a more detailed discussion of Ref. \cite{Torsello:2017cmz} until the end of this text to be able to make a comparison with our results. The rest of the text is organized as follows. In Sec. \ref{sec2} we introduce the massive bigravity theory of Hassan and Rosen \cite{Hassan2012}. The field equations, their reduction to the static and spherically symmetric sector, and the simplest solutions are described in Secs. \ref{sec3}--\ref{simp}. In Secs. \ref{BC} and \ref{BI} we describe in detail our analysis of boundary conditions at the horizon and at infinity, and then summarize in Sec. \ref{hairybh} the structure of our numerical procedure. In Sec. \ref{NR} we show our solutions for asymptotically flat hairy black holes and also describe the duality relation yielding the solutions above the GL point. After that, we discuss in Sec. \ref{pert} the perturbations of the hairy backgrounds and the analysis of the negative perturbation modes. Our discussion culminates in Sec. \ref{par} where we describe various limits and identify regions in the parameter space where the solutions exist and where they are stable. In Sec. \ref{CR} we give a brief summary of our results and discuss the arguments of Ref.\cite{Torsello:2017cmz}.
The two appendixes contain the description of the desingularization of the equations at the horizon, as well as the complete set of the field equations in the time-dependent case. \section{THE GHOST-FREE BIGRAVITY \label{sec2}} \setcounter{equation}{0} The theory is defined on a four-dimensional spacetime manifold endowed with two Lorentzian metrics ${\rm g}_{\mu\nu}$ and ${\rm f}_{\mu\nu}$ with the signature $(-,+,+,+)$. The action is \cite{Hassan2012} \begin{eqnarray} \label{1} &&S[{\rm g},{\rm f}] =\frac{1}{2{\boldsymbol\kappa}_1}\int \, R({\rm g})\sqrt{-{\rm g}}\, d^4{\rm x} +\frac{1}{2{\boldsymbol\kappa}_2}\,\int R({\rm f})\sqrt{-{\rm f}}\, d^4{\rm x} -\frac{{\rm m}^2}{\boldsymbol\kappa }\int {\cal U}\sqrt{-{\rm g}} \,d^4{\rm x} \,,~~~~~~~~~~ \end{eqnarray} where ${\boldsymbol\kappa}_1$ and ${\boldsymbol\kappa}_2$ are the gravitational couplings, ${\boldsymbol\kappa}$ is a parameter with the same dimension, and ${\rm m}$ is a mass parameter. The interaction between the two metrics is expressed by a scalar function of the tensor (the hat denotes matrices) \begin{equation} \label{gam} \hat{\gamma}=\sqrt{\hat{{\rm g}}^{-1}\hat{{\rm f}}}. \end{equation} Here the matrix square root is understood in the sense that $\hat{\gamma}^2=\hat{{\rm g}}^{-1}\hat{{\rm f}}$, which can be written in components as \begin{equation} \label{gamgam} (\gamma^2)^\mu_{~\nu}\equiv \gamma^\mu_{~\alpha}\gamma^\alpha_{~\nu}= {\rm g}^{\mu\alpha} {\rm f}_{\alpha\nu}. 
\end{equation} If $\lambda_a$ ($a=1,2,3,4$) are the eigenvalues of $\gamma^\mu_{~\nu}$ then the interaction potential is \begin{equation} \label{2} {\cal U}=\sum_{k=0}^4 b_k\,{\cal U}_k, \end{equation} where $b_k$ are dimensionless parameters while ${\cal U}_k$ are defined by the relations \begin{eqnarray} \label{4} {\cal U}_0&=&1,~~~~ {\cal U}_1= \sum_{a}\lambda_a=[\gamma],\nonumber \\ {\cal U}_2&=& \sum_{a<b}\lambda_a\lambda_b =\frac{1}{2!}([\gamma]^2-[\gamma^2]),\nonumber \\ {\cal U}_3&=& \sum_{a<b<c}\lambda_a\lambda_b\lambda_c = \frac{1}{3!}([\gamma]^3-3[\gamma][\gamma^2]+2[\gamma^3]),\nonumber \\ {\cal U}_4&=& \lambda_1\lambda_2\lambda_3\lambda_4 =\det(\hat{\gamma}) \,. \nonumber \end{eqnarray} Here $[\gamma]={\rm tr}(\hat{\gamma})\equiv \gamma^\mu_{~\mu}$ and $[\gamma^k]={\rm tr}(\hat{\gamma}^k)\equiv (\gamma^k)^\mu_{~\mu}$. The two metrics actually enter the action in a completely symmetric way, since the action is invariant under \begin{equation} \label{sym} {\rm g}_{\mu\nu}\leftrightarrow {\rm f}_{\mu\nu},~~~~ {\boldsymbol\kappa}_1 \leftrightarrow {\boldsymbol\kappa}_2,~~~~ b_k \leftrightarrow b_{4-k}\,. \end{equation} The action is also invariant under rescalings ${\boldsymbol\kappa}\to \pm \lambda^2 {\boldsymbol\kappa}$, $b_k\to \pm b_k,$ ${\rm m}\to\lambda\,{\rm m}$, and this allows one to impose, without any loss of generality, the normalization condition ${\boldsymbol\kappa}={\boldsymbol\kappa}_1+{\boldsymbol\kappa}_2$. Varying the action with respect to the two metrics gives two sets of Einstein equations, \begin{eqnarray} \label{Enst0} G_{\mu\nu}({\rm g})&=&{\rm m}^2\,\kappa_1\, T_{\mu\nu},~~~~~~~~~~~~ G_{\mu\nu}({\rm f})= {\rm m}^2\,\kappa_2\, {\cal T}_{\mu\nu}, \end{eqnarray} where $\kappa_1\equiv {\boldsymbol\kappa}_1/{\boldsymbol\kappa}$ and $\kappa_2\equiv {\boldsymbol\kappa}_2/{\boldsymbol\kappa}$, and the normalization of $\boldsymbol\kappa$ implies that $\kappa_1+\kappa_2=1$.
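The trace formulas in \eqref{4} are the standard expressions of the elementary symmetric polynomials of the eigenvalues, i.e. the coefficients of the characteristic polynomial $\det(\lambda\,\hat{1}-\hat\gamma)=\lambda^4-{\cal U}_1\lambda^3+{\cal U}_2\lambda^2-{\cal U}_3\lambda+{\cal U}_4$. They are easy to verify numerically; a minimal sketch, with a random matrix standing in for $\hat\gamma$:

```python
import numpy as np

rng = np.random.default_rng(0)
gam = rng.standard_normal((4, 4))     # stand-in for the matrix γ^μ_ν

t1 = np.trace(gam)
t2 = np.trace(gam @ gam)
t3 = np.trace(gam @ gam @ gam)

U1 = t1                               # [γ]
U2 = (t1**2 - t2) / 2                 # ([γ]² − [γ²]) / 2!
U3 = (t1**3 - 3*t1*t2 + 2*t3) / 6     # ([γ]³ − 3[γ][γ²] + 2[γ³]) / 3!
U4 = np.linalg.det(gam)               # det γ̂

# np.poly returns the characteristic polynomial coefficients
# [1, -U1, U2, -U3, U4] of a square matrix.
assert np.allclose(np.poly(gam), [1.0, -U1, U2, -U3, U4])
```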
The source terms in \eqref{Enst0} are obtained by varying the interaction potential ${\cal U}$, \begin{eqnarray} \label{T} && T^{\mu}_{~\nu}={\rm g}^{\mu\alpha}T_{\alpha\nu}= \,\tau^\mu_{~\nu}-{\cal U}\,\delta^\mu_\nu,~~~~~ {\cal T}^{\mu}_{~\nu}= {\rm f}^{\mu\alpha}{\cal T}_{\alpha\nu}= -\frac{\sqrt{-{\rm g}}}{\sqrt{-{\rm f}}}\,\tau^\mu_{~\nu}\,, \end{eqnarray} where ${\rm f}^{\mu\alpha}$ is the inverse of ${\rm f}_{\mu\alpha}$ and \begin{eqnarray} \label{tau1} \tau^\mu_{~\nu}&=& \{b_1\,{\cal U}_0+b_2\,{\cal U}_1+b_3\,{\cal U}_2 +b_4\,{\cal U}_3\}\gamma^\mu_{~\nu} \nonumber \\ &-&\{b_2\,{\cal U}_0+b_3\,{\cal U}_1+b_4\,{\cal U}_2\}(\gamma^2)^\mu_{~\nu} \nonumber \\ &+&\{b_3\,{\cal U}_0+b_4\,{\cal U}_1\}(\gamma^3)^\mu_{~\nu} -b_4\,{\cal U}_0\,(\gamma^4)^\mu_{~\nu}\,. \end{eqnarray} There is an identity relation following from the diffeomorphism invariance of the interaction term in the action, \begin{equation} \label{id} \sqrt{-g}\stackrel{(g)}{\nabla}_\mu T^{\mu}_{~\nu}+ \sqrt{-f}\stackrel{(f)}{\nabla}_\mu {\cal T}^{\mu}_{~\nu}\equiv 0\,, \end{equation} where $\stackrel{(g)}{\nabla}_\rho$ and $\stackrel{(f)}{\nabla}_\rho$ are the covariant derivatives with respect to g$_{\mu\nu}$ and f$_{\mu\nu}$. Equations \eqref{Enst0} describe two interacting gravitons, one massive and one massless. This can be easily seen in the flat space limit. Setting ${\rm g}_{\mu\nu}={\rm f}_{\mu\nu}=\eta_{\mu\nu}$ (the Minkowski metric), Eqs.\eqref{Enst0} reduce to \begin{eqnarray} \label{Enst000} 0&=&-{\rm m}^2\,\kappa_1\,(P_0+P_1)\, \eta_{\mu\nu},~~~~~~~~~~~~ 0= -{\rm m}^2\,\kappa_2\,(P_1+P_2)\, \eta_{\mu\nu}, \end{eqnarray} with $P_m\equiv b_m+2b_{m+1}+b_{m+2}$. Therefore, flat space will be a solution only if the parameters $b_k$ fulfil the conditions $P_1=-P_0=-P_2$.
Assuming this to be the case, let us set ${\rm g}_{\mu\nu}=\eta_{\mu\nu}+\delta {\rm g}_{\mu\nu}$ and ${\rm f}_{\mu\nu}=\eta_{\mu\nu}+\delta {\rm f}_{\mu\nu}$ where the deviations $\delta {\rm g}_{\mu\nu}$ and $\delta {\rm f}_{\mu\nu}$ are small. Linearizing the equations \eqref{Enst0} with respect to the deviations yields \begin{eqnarray} &&\hat{{\cal E}}_{\mu\nu}^{\alpha\beta}h^{(0)}_{\alpha\beta}=0,~~~\label{m=0}\\ &&\hat{{\cal E}}_{\mu\nu}^{\alpha\beta}h_{\alpha\beta}+\frac{\rm m^2_{\rm FP}}{2} \label{FP}\, (h_{\mu\nu}-\eta_{\mu\nu}h)=0,~ \end{eqnarray} where $\hat{{\cal E}}_{\mu\nu}^{\alpha\beta}$ denotes the linear part of the Einstein operator, and where $h_{\mu\nu}^{(0)}=\kappa_1\delta {\rm f}_{\mu\nu}+\kappa_2\delta {\rm g}_{\mu\nu}$ and $h_{\mu\nu}=\delta {\rm f}_{\mu\nu}-\delta {\rm g}_{\mu\nu}$ with $h=\eta^{\alpha\beta}h_{\alpha\beta}$. The $h_{\mu\nu}^{(0)}$ equations are the linearized Einstein equations describing a massless graviton with two dynamical polarizations. The $h_{\mu\nu}$ field fulfills the Fierz-Pauli equations for massive gravitons with five polarizations and with the mass \begin{equation} \label{FP-mass} {\rm m^2_{\rm FP}}=P_1\, {\rm m}^2. \end{equation} Therefore, one will have ${\rm m}_{\rm FP}={\rm m}$ if \begin{equation} \label{P1} P_1=1. \end{equation} This condition can be solved together with the conditions $P_0=P_2=-1$ implied by \eqref{Enst000} to express the five $b_k$ in terms of two independent parameters, sometimes called $c_3$ and $c_4$, \begin{equation} \label{bbb} {b}_0=4c_3+c_4-6,~~ {b}_1=3-3c_3-c_4,~~ {b}_2=2c_3+c_4-1,~~ {b}_3=-(c_3+c_4),~~ {b}_4=c_4. \end{equation} At the same time, the theory has exactly 7 propagating degrees of freedom also away from the flat space limit and for arbitrary $b_k$ (see \cite{Hassan:2011ea,Alexandrov:2013rxa,Soloviev:2020zht} for its Hamiltonian formulation). 
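The parametrization \eqref{bbb} can be checked symbolically: the $b_k$ must satisfy the flat-space conditions $P_0=P_2=-1$ and the mass normalization $P_1=1$ identically in $c_3,c_4$. A short sympy sketch:

```python
import sympy as sp

c3, c4 = sp.symbols('c3 c4')

# b_k from Eq. (bbb)
b = [4*c3 + c4 - 6,
     3 - 3*c3 - c4,
     2*c3 + c4 - 1,
     -(c3 + c4),
     c4]

# P_m = b_m + 2 b_{m+1} + b_{m+2}
P = [sp.expand(b[m] + 2*b[m+1] + b[m+2]) for m in range(3)]

assert P == [-1, 1, -1]      # P0 = P2 = -1 and P1 = 1, identically in c3, c4
assert P[0] + P[1] == 0      # hence Λ(C=1) = κ1 (P0 + P1) = 0 in Eq. (sgm)
```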
Let us finally pass from the dimensionful spacetime coordinates ${\rm x}^\mu$ to the dimensionless ones, \begin{equation} \label{conf} x^\mu = {\rm m}\, {\rm x}^\mu. \end{equation} This is equivalent to the conformal rescaling of the metrics, \begin{equation} {\rm g}_{\mu\nu}=\frac{1}{{\rm m}^2}\,g_{\mu\nu},~~~~~ {\rm f}_{\mu\nu}=\frac{1}{{\rm m}^2}\,f_{\mu\nu}, \end{equation} after which the field equations \eqref{Enst0} reduce to \begin{eqnarray} \label{Enst-eq} G^\mu_{~\nu}({g})&=&\kappa_1\, T^\mu_{~\nu},~~~~~~~~~~~~ G^\mu_{~\nu}({f})= \kappa_2\, {\cal T}^\mu_{~\nu}, \end{eqnarray} where $T^\mu_{~\nu}$ and ${\cal T}^\mu_{~\nu}$ are still given by \eqref{T},\eqref{tau1} with $\hat{\gamma}=\sqrt{\hat{{g}}^{-1}\hat{{f}}}$. The Bianchi identities for these equations imply that \begin{equation} \label{T1} \stackrel{(g)}{\nabla}_\rho T^\rho_\lambda=0\,,~~~~~ \stackrel{(f)}{\nabla}_\rho {\cal T}^\rho_\lambda=0\,,~~~~~ \end{equation} which is consistent with \eqref{id}. All fields and coordinates are now dimensionless and no trace of the mass parameter ${\rm m}$ is left in the equations. However, one has to remember that the unit of length corresponds to the dimensionful $1/{\rm m}$, which is the physical length scale. In what follows we shall be analyzing equations \eqref{Enst-eq} without making any assumptions about the values of $\kappa_1$, $\kappa_2$ and $b_k$. However, when integrating the equations numerically, we shall assume that $\kappa_1+\kappa_2=1$ and choose $b_k$ according to \eqref{bbb}. Therefore, our solutions depend on three parameters of the theory, $c_3,c_4$ and $\eta$, where \begin{equation} \label{eta} \kappa_1=\cos^2\eta,~~~~~~\kappa_2=\sin^2\eta. \end{equation} \textcolor{black}{We shall assume in what follows that if the theory is extended to include extra matter variables denoted by $\Psi$, then the action \eqref{1} becomes $S[{\rm g},{\rm f}]\to S[{\rm g},{\rm f}]+S_{\rm mat}[{\rm g},\Psi]$, so that the matter couples only to the g metric.
The g-geometry is therefore physically measurable as test particles follow its geodesics. The f-geometry is not directly coupled to matter, hence it cannot be directly seen and remains hidden.} \section{SPHERICAL SYMMETRY \label{sec3}} \setcounter{equation}{0} Let us introduce coordinates $(x^0,x^1,x^2,x^3)=(t,r,\vartheta,\varphi)$ and choose both metrics to be static, spherically symmetric, and diagonal, \begin{eqnarray} \label{ansatz0} ds_g^2&=&g_{\mu\nu}dx^\mu dx^\nu=-Q^2dt^2+\frac{dr^2}{\Delta^2}+R^2d\Omega^2\,, \nonumber \\ ds_f^2&=&f_{\mu\nu}dx^\mu dx^\nu=-q^2 dt^2+\frac{dr^2}{W^2}+U^2d\Omega^2, \end{eqnarray} where $d\Omega^2=d\vartheta^2+\sin^2\vartheta d\varphi^2$ while $Q,\Delta,R,q,W,U$ are functions of the radial coordinate $r={\rm mr}$. In fact, this is not the most general form of the spherically symmetric fields, since one could also include the off-diagonal metric element $f_{01}$ as shown by Eq.\eqref{anz} in Appendix \ref{time}. However, in the {\it static} case this would imply that \eqref{b1} is the only possible solution \cite{Volkov2012} (the situation changes in the time-dependent case). Therefore, we choose the static metrics to be both diagonal, which leads to nontrivial solutions. 
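For the diagonal ansatz \eqref{ansatz0} the matrix square root in \eqref{gam} can be taken entrywise. This is easy to confirm with sympy; a sketch (the common $\sin^2\vartheta$ factor of the $\varphi\varphi$ components cancels in ${\rm g}^{-1}{\rm f}$ and is omitted here):

```python
import sympy as sp

Q, Dlt, R, q, W, U = sp.symbols('Q Delta R q W U', positive=True)

g = sp.diag(-Q**2, 1/Dlt**2, R**2, R**2)   # g_{μν} of Eq. (ansatz0)
f = sp.diag(-q**2, 1/W**2, U**2, U**2)     # f_{μν} of Eq. (ansatz0)

gamma2 = g.inv() * f                       # (γ²)^μ_ν = g^{μα} f_{αν}, diagonal
gamma = sp.diag(*[sp.sqrt(gamma2[i, i]) for i in range(4)])

# entrywise square root reproduces γ^μ_ν = diag[q/Q, Δ/W, U/R, U/R]
assert sp.simplify(gamma - sp.diag(q/Q, Dlt/W, U/R, U/R)) == sp.zeros(4)
assert sp.simplify(gamma**2 - gamma2) == sp.zeros(4)
```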
The tensor $\gamma^\mu_{~\nu}$ in \eqref{gam} then reads \begin{equation} \gamma^\mu_{~\nu}={\rm diag}\left[ \frac{q}{Q},\frac{\Delta}{W},\frac{U}{R},\frac{U}{R} \right], \end{equation} and one obtains from \eqref{T} \begin{eqnarray} T^\mu_{~\nu}&=&{\rm diag}\left[ T^0_{~0},T^1_{~1},T^2_{~2},T^2_{~2} \right],\nonumber \\ {\cal T}^\mu_{~\nu}&=&{\rm diag}\left[ {\cal T}^0_{~0},{\cal T}^1_{~1},{\cal T}^2_{~2},{\cal T}^2_{~2} \right], \end{eqnarray} where \begin{eqnarray} T^0_{~0}&=&-{\cal P}_0-{\cal P}_1\,\frac{\Delta}{W}, \nonumber \\ T^1_{~1}&=&-{\cal P}_0-{\cal P}_1\,\frac{q}{Q},~~\nonumber \\ T^2_{~2}&=&-{\cal D}_0-{\cal D}_1\left(\frac{q}{Q} +\frac{\Delta}{W}\right)-{\cal D}_2\, \frac{q\Delta}{QW},\nonumber \\ {\bf u}^2{\cal T}^0_{~0}&=&-{\cal P}_2-{\cal P}_1\,\frac{W}{\Delta}, \nonumber \\ {\bf u}^2{\cal T}^1_{~1}&=&-{\cal P}_2-{\cal P}_1\,\frac{Q}{q},~\nonumber \\ {\bf u}{\cal T}^2_{~2}&=&-{\cal D}_3-{\cal D}_2\left(\frac{Q}{q} +\frac{W}{\Delta}\right)-{\cal D}_1\, \frac{QW}{q\Delta}. \end{eqnarray} Here ${\bf u}={U}/{R}$ and \begin{eqnarray} {\cal P}_m&=&b_m+2b_{m+1}{\bf u}+b_{m+2}{\bf u}^2\,, \nonumber \\ {\cal D}_m&=&b_m+b_{m+1}{\bf u}\,,~~~~(m=0,\dots,3). \label{e5} \end{eqnarray} The independent field equations are \begin{eqnarray} \label{Ein} G^0_0(g)&=&\kappa_1\, T^{0}_{~0}, ~~~~ G^1_1(g)=\kappa_1\, T^{1}_{~1}, ~~~~\nonumber \\ G^0_0(f)&=&\kappa_2\, {\cal T}^{0}_{~0},~~~~ G^1_1(f)=\kappa_2\, {\cal T}^{1}_{~1},~ \end{eqnarray} plus the conservation condition $ \stackrel{(g)}{\nabla}_\mu T^\mu_\nu=0\,, $ which has only one nontrivial component, \begin{equation} \label{CONS} \stackrel{(g)}{\nabla}_\mu T^\mu_{~1}= \left(T^1_{~1}\right)^\prime +\frac{Q^\prime}{Q}\left(T^1_{~1}-T^0_{~0}\right) +2\,\frac{R^\prime}{R}\left(T^1_{~1}-T^2_{~2}\right)=0, \end{equation} where the prime denotes differentiation with respect to $r$.
The conservation condition for the second energy-momentum tensor also has only one nontrivial component, \begin{equation} \label{CONSf} \stackrel{(f)}{\nabla}_\mu {\cal T}^\mu_{~1}= \left({\cal T}^1_{~1}\right)^\prime +\frac{q^\prime}{q}\left({\cal T}^1_{~1}-{\cal T}^0_{~0}\right) +2\,\frac{U^\prime}{U}\left({\cal T}^1_{~1}-{\cal T}^2_{~2}\right)=0, \end{equation} but this follows from \eqref{CONS} due to the identity relation \eqref{id}. As a result, there are 5 independent equations in \eqref{Ein}, \eqref{CONS}, which is enough to determine the 6 field amplitudes $Q,\Delta,R,q,W,U$, because the freedom of reparametrization of the radial coordinate $ r\to \tilde{r}(r) $ allows one to fix one of the amplitudes. \section{FIELD EQUATIONS \label{sec4}} \setcounter{equation}{0} Let us introduce new functions \begin{equation} \label{NY} N=\Delta R^\prime\,,~~~~Y=WU^\prime\,, \end{equation} in terms of which the two metrics read \begin{eqnarray} \label{ansatz000} ds_g^2&=&-Q^2dt^2+\frac{dR^2}{N^2}+R^2d\Omega^2\,, \nonumber \\ ds_f^2&=&-q^2 dt^2+\frac{dU^2}{Y^2}+U^2d\Omega^2. \end{eqnarray} The advantage of this parametrization is that the second derivatives disappear from the Einstein tensor and the four Einstein equations \eqref{Ein} become \begin{eqnarray} N^\prime&=&-\frac{\kappa_1}{2}\frac{R}{NY}\left(R^\prime Y{\cal P}_0 +U^\prime N {\cal P}_1 \right)+\frac{(1-N^2)R^\prime}{2RN}\,, \label{e1} \\ Y^\prime&=&-\frac{\kappa_2}{2}\frac{ R^2}{UNY} \left(R^\prime Y {\cal P}_1+U^\prime N {\cal P}_2 \right)+\frac{(1-Y^2)U^\prime}{2UY}\,, \label{e2} \\ Q^\prime&=&-\left( \kappa_1(Q{\cal P}_0+q{\cal P}_1)+\frac{Q(N^2-1)}{R^2} \right) \frac{RR^\prime}{2N^2}\,, \label{e3} \\ q^\prime&=&-\left( \kappa_2(Q{\cal P}_1+q{\cal P}_2)+\frac{q(Y^2-1)}{R^2} \right) \frac{R^2U^\prime}{2Y^2U}\,.
\label{e4} \end{eqnarray} The conservation condition \eqref{CONS} reads \begin{eqnarray} \label{CONSg} \stackrel{(g)}{\nabla}_\mu T^\mu_{~1}&=& \frac{U^\prime}{R}\left(1-\frac{N}{Y}\right)\left(d{\cal P}_0+ \frac{q}{Q}\,d{\cal P}_1\right) +\left( \frac{q^\prime}{Q}-\frac{NQ^\prime U^\prime}{YQR^\prime} \right){\cal P}_1=0, \end{eqnarray} and using Eqs.\eqref{e3} and \eqref{e4}, this reduces to \begin{eqnarray} \label{CONSgg} R^2Q\stackrel{(g)}{\nabla}_\mu T^\mu_{~1}&= & \frac{U^\prime}{Y}\,{\bf C}=0\,, \end{eqnarray} where \begin{eqnarray} \label{C} {\bf C}&=&\left( \kappa_2\,\frac{R^4{\cal P}_1^2}{2UY} -\kappa_1\,\frac{R^3\,{\cal P}_0{\cal P}_1}{2N} -\frac{(N^2-1)\,R{\cal P}_1}{2N}+(N-Y)Rd{\cal P}_0 \right)Q \nonumber \\ &+&\left( \kappa_2\,\frac{R^4{\cal P}_1{\cal P}_2}{2UY} -\kappa_1\,\frac{R^3\,{\cal P}_1^2 }{2N} +\frac{(Y^2-1)\,R^2{\cal P}_1}{2UY}+(N-Y)Rd{\cal P}_1 \right)q\,, \end{eqnarray} with \begin{equation} d{\cal P}_m= 2\,(b_{m+1}+b_{m+2}{\bf u})~~~(m=0,1). \label{e5a} \end{equation} The conservation condition \eqref{CONSf} becomes \begin{equation} \label{CONSff} -U^2q\stackrel{(f)}{\nabla}_\mu {\cal T}^\mu_{~1}= \frac{R^\prime }{N}\,{\bf C}=0\,. \end{equation} The two conditions \eqref{CONSgg} and \eqref{CONSff} will be fulfilled if $U^\prime=R^\prime=0$, in which case both metrics are degenerate. If the metrics are not degenerate, then conditions \eqref{CONSgg} and \eqref{CONSff} reduce to the algebraic constraint \begin{equation} \label{CC} {\bf C}=0. \end{equation} This constraint can be resolved with respect to $q$ to give \begin{equation} \label{q} q=\Sigma(R,U,N,Y)\,Q, \end{equation} where $\Sigma(N,Y,R,U)$ is the (negative) ratio of the coefficients in front of $Q$ and $q$ in \eqref{C}. As a result, we obtain four differential equations \eqref{e1}--\eqref{e4} plus one algebraic constraint \eqref{CC}. 
The same equations can be obtained by inserting the metrics \eqref{ansatz000} directly into the action \eqref{1}, which gives \begin{eqnarray} \label{1a} &&S =\frac{4\pi}{{\rm m}^2\boldsymbol\kappa}\int L\, dt dr \,, \end{eqnarray} where, dropping a total derivative, \begin{eqnarray} L&=&\frac{1}{\kappa_1}\left( \frac{(1-N^2)\, R^\prime}{N}-2RN^\prime \right)Q +\frac{1}{\kappa_2}\left( \frac{(1-Y^2)\,U^\prime }{Y}-2UY^\prime \right)q \nonumber \\ &-&\frac{QR^2 R^\prime }{N}\,{\cal P}_0 -\left( \frac{QR^2 U^\prime}{Y}+\frac{qR^2R^\prime}{N} \right){\cal P}_1 -\frac{qR^2 U^\prime }{Y}\,{\cal P}_2\,. \end{eqnarray} Varying $L$ with respect to $N,Y,Q,q$ gives Eqs.\eqref{e1}--\eqref{e4}, while varying it with respect to $R,U$ reproduces conditions \eqref{CONSgg} and \eqref{CONSff}. The equations and the Lagrangian $L$ are invariant under the interchange symmetry \eqref{sym}, which now reads \begin{equation} \label{change} \kappa_1\leftrightarrow\kappa_2,~~Q\leftrightarrow q,~~~ N\leftrightarrow Y,~~~R \leftrightarrow U,~~~ b_m\leftrightarrow b_{4-m}\,. \end{equation} Equations \eqref{e1}--\eqref{e4} contain $R^\prime$ and $U^\prime$ which are not yet known. One of these two amplitudes can be fixed by imposing a gauge condition, but the other one should be determined dynamically. We therefore need one more condition, and the only way to get it is to differentiate the constraint. Since the constraint should be preserved, differentiating it gives the secondary constraint: \begin{equation} {\bf C}^\prime= \frac{\partial \bf C}{\partial N}\,N^\prime+ \frac{\partial \bf C}{\partial Y}\,Y^\prime+ \frac{\partial \bf C}{\partial Q}\,Q^\prime+ \frac{\partial \bf C}{\partial q}\,q^\prime+ \frac{\partial \bf C}{\partial R}\,R^\prime+ \frac{\partial \bf C}{\partial U}\,U^\prime=0.
\end{equation} Expressing here the derivatives $N^\prime$, $Y^\prime$, $Q^\prime$, $q^\prime$ by Eqs.\eqref{e1}--\eqref{e4} and using the relation \eqref{q}, this condition reduces to \begin{equation} \label{C1} {\bf C}^\prime={\cal A}(R,U,N,Y)\, R^\prime+{\cal B}(R,U,N,Y)\, U^\prime=0, \end{equation} where the functions ${\cal A}(R,U,N,Y)$ and ${\cal B}(R,U,N,Y)$ are rather complicated and we do not show them explicitly. When the radial coordinate changes, both $R^\prime$ and $U^\prime$ change, \begin{equation} \label{rr} r\to \tilde{r}(r),~~~~~~R^\prime\to \tilde{R}^\prime=R^\prime \,\frac{dr}{d\tilde{r}}, ~~~~~~U^\prime\to \tilde{U}^\prime=U^\prime \,\frac{dr}{d\tilde{r}}, \end{equation} but the relation \eqref{C1} between $R^\prime$ and $U^\prime$ remains the same. The secondary constraint can be resolved with respect to $U^\prime$, \begin{equation} \label{U1} U^\prime =-\frac{{\cal A}(R,U,N,Y)}{{\cal B}(R,U,N,Y) }\,R^\prime\equiv {\cal D}_U(R,U,N,Y)\,R^\prime\,. \end{equation} We can now use the gauge symmetry \eqref{rr} to impose the coordinate condition \begin{equation} R^\prime=1~~~~\Rightarrow~~~~~R=r, \end{equation} and then \eqref{U1} reduces to \begin{equation} \label{UU1} U^\prime = {\cal D}_U(r,U,N,Y)\,. \end{equation} Now, $U^\prime$ appears in the right-hand sides of Eqs.\eqref{e1} and \eqref{e2}, and replacing it there by the value \eqref{UU1}, these two equations together with \eqref{UU1} form a closed system of three equations \begin{eqnarray} \label{eqs} N^\prime&=&{\cal D}_N(r,U,N,Y),\nonumber \\ Y^\prime&=&{\cal D}_Y(r,U,N,Y), \nonumber \\ U^\prime&=&{\cal D}_U(r,U,N,Y). \end{eqnarray} The amplitudes $Q,q$ are determined as follows.
Injecting \eqref{q} into \eqref{e3} yields the equation \begin{equation} \label{Q1} Q^\prime=-\frac{r}{2N^2}\left( \kappa_1({\cal P}_0+\Sigma(r,U,N,Y){\cal P}_1)+\frac{N^2-1}{r^2} \right) \, Q\equiv {\cal F}(r,U,N,Y) Q, \end{equation} which determines $Q$, and when its solution is known, $q$ is determined algebraically from \eqref{q}. \textcolor{black}{Solutions of \eqref{eqs},\eqref{Q1} and \eqref{q} are automatically compatible with \eqref{e1}--\eqref{e4} and with the constraint \eqref{CC}. For example, the algebraic solution for $q$ given by \eqref{q} is compatible with its differential equation \eqref{e4} because the latter contains $U^\prime$, which is not defined by \eqref{e1}--\eqref{e4}. To determine $U^\prime$ one needs to differentiate the constraint \eqref{CC}, whose algebraic solution is \eqref{q}, and to use \eqref{e1}--\eqref{e4}. This completes the procedure in a consistent way. } In what follows we shall mainly focus on the three coupled equations \eqref{eqs} determining $N,Y,U$. As soon as their solution is obtained, the amplitudes $Q,q$ are determined from \eqref{Q1},\eqref{q}. \section{ANALYTICAL SOLUTIONS \label{simp}} \setcounter{equation}{0} Some simple solutions of the equations can be obtained analytically \cite{Volkov2012,Hassan:2012wr}, for which it is convenient to use the equations in the form \eqref{e1}--\eqref{e4}. \subsection{Proportional backgrounds} Choosing the two metrics to be proportional \cite{Volkov2012,Hassan:2012wr}, \begin{equation} \label{prop1} ds_f^2=C^2 ds_g^2\,, \end{equation} with a constant $C$, the solution is given by \begin{equation} \label{prop2} Q^2=N^2=Y^2= 1-\frac{2M}{r}-\frac{\Lambda(C)}{3}\,r^2\,,~~~~R=r,~~~q=C Q,~~~U=C R, \end{equation} which describes two proportional Schwarzschild-(anti-)de Sitter geometries. The constant $C$ and the cosmological constant $\Lambda(C)$ are determined by \begin{equation} \label{sgm} \kappa_1({\cal P}_0+C{\cal P}_1)= \frac{\kappa_2}{C}({\cal P}_1+C{\cal P}_2)\equiv \Lambda(C).
\end{equation} Since ${\cal P}_m$ defined by \eqref{e5} are polynomials in ${\bf u}=U/R=C$, this yields an algebraic equation for $C$ that can have up to four real roots. If the parameters $b_k$ are chosen according to \eqref{bbb}, then one of the roots is $C=1$, in which case $\Lambda=0$. The value of the dimensionful cosmological constant ${\bm \Lambda}$ should agree with observations, hence one should have \begin{equation} {\bm \Lambda}={\rm m}^2 \Lambda\sim 1/R_{\rm Hub}^2 \end{equation} where $R_{\rm Hub}$ is the Hubble radius of our Universe. One way to fulfill this relation is to assume that the graviton mass is extremely small such that the Compton length is of the order of the Hubble radius, \begin{equation} \label{Hubble} 1/{\rm m}\sim R_{\rm Hub}. \end{equation} However, the relation can also be fulfilled by assuming that $\Lambda$ is very small, which is possible if there is a hierarchy between the two couplings: $\kappa_1\ll \kappa_2=1-\kappa_1\sim 1$. Equation \eqref{sgm} then implies that $\Lambda\sim \kappa_1$ and that $C$ should be very close to a root of ${\cal P}_1+C{\cal P}_2$. \textcolor{black}{The hierarchy between the two couplings is in fact necessary to reconcile the perturbation spectrum of massive bigravity cosmology with observations, because the spectrum contains an instability in the scalar sector \cite{Comelli:2012db,Konnig:2014xva,Lagos:2014lca}. For this one should assume that \cite{Akrami:2015qga, Mortsell:2015exa, Aoki:2015xqa, Luben:2018ekw, Hogas:2019ywm} \begin{equation} \label{m0} \frac{\kappa_1}{\kappa_2}\approx \kappa_1\,\textcolor{black}{ \leq}\, \left(\frac{{\rm M}_{\rm ew}}{{\rm M}_{\rm Pl}} \right)^2\sim 10^{-34}\ll 1, \end{equation} where ${\rm M}_{\rm ew}\sim 100$~GeV is the electroweak energy scale and ${\rm M}_{\rm Pl}\sim 10^{19}$~GeV is the Planck mass. Here $10^{-34}$ is the {\it upper bound} for $\kappa_1$; imposing it shifts the instability toward early times, making it unobservable.
However, $\kappa_1$ can also be less than this bound \cite{Akrami:2015qga}, hence \begin{equation} \kappa_1=\gamma^2\times 10^{-34}~~~~~~\mbox{with}~~~~~\gamma\in[0,1]. \end{equation} As a result, \begin{equation} \label{m} 1/{\rm m}\sim \sqrt{\Lambda}\,R_{\rm Hub}=\sqrt{\kappa_1}\,R_{\rm Hub}\,=\gamma\times \left(\frac{{\rm M}_{\rm ew}}{{\rm M}_{\rm Pl}} \right)\,R_{\rm Hub}\sim \gamma\times 10^6~\mbox{ km}, \end{equation} which is of the order of the solar size if $\gamma\sim 1$. However, in what follows we shall not always assume $\kappa_1$ to be small and shall present our results for arbitrary $\kappa_1\in[0,1]$. } \subsection{Deformed AdS background} Choosing $U,q$ to be constant, \begin{equation} \label{Uq} U=U_0,~~~~q=q_0, \end{equation} solves Eqs.\eqref{e4} and \eqref{CONSgg}, while Eqs.\eqref{e1}--\eqref{e3} can then be integrated in quadratures \cite{Volkov2012}. However, such a solution is unacceptable, since the f metric degenerates if $U^\prime=0$. At the same time, there are other, more general solutions which approach \eqref{Uq} for $r\to\infty$, and for these solutions $U^\prime$ vanishes only asymptotically, hence they are acceptable. The leading large-$r$ terms of such solutions are \begin{eqnarray} \label{NQQQ} N^2&=& -\kappa_1\frac{b_0}{3}\,r^2 -\kappa_1 b_1U_0\, r +{\cal O}(1) \,, ~~~~~ Y=-\frac{\sqrt{3}\kappa_2 b_1}{4U_0\sqrt{-\kappa_1 b_0} }\,r^2+{\cal O}(r) , \nonumber \\ Q&=&\frac{q_0 }{4U_0}\, r+{\cal O}(1), ~~~~ U=U_0+{\cal O}\left(\frac{1}{r}\right), ~~~~~ q=q_0+{\cal O}\left(\frac{1}{r}\right). \end{eqnarray} The g metric approaches the AdS metric in the leading ${\cal O}(r^2)$ order, but the subleading terms do not have the AdS structure. It turns out that solutions of Eqs.\eqref{e1}--\eqref{e4} generically approach for $r\to\infty$ either \eqref{prop2} or \eqref{NQQQ} (or they show a curvature singularity at a finite $r$), hence they are not asymptotically flat \cite{Volkov2012}.
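The order-of-magnitude estimates \eqref{m0} and \eqref{m} are straightforward to check numerically. The short sketch below (Python) uses approximate input values, ${\rm M}_{\rm ew}\sim 100$ GeV, ${\rm M}_{\rm Pl}\sim 10^{19}$ GeV and $R_{\rm Hub}\approx 1.3\times 10^{23}$ km; these rounded numbers are assumptions of the sketch rather than precise data:

```python
# Sanity check of the estimates (m0) and (m); all input values are rough
# assumptions: M_ew ~ 100 GeV, M_Pl ~ 1e19 GeV, R_Hub ~ 1.3e23 km.
M_ew = 1.0e2        # electroweak scale in GeV
M_Pl = 1.0e19       # Planck mass in GeV
R_Hub_km = 1.3e23   # Hubble radius c/H_0 in km (H_0 ~ 70 km/s/Mpc)

kappa1_max = (M_ew / M_Pl) ** 2          # upper bound on kappa_1, ~ 1e-34
compton_km = (M_ew / M_Pl) * R_Hub_km    # Compton length 1/m for gamma = 1

print(f"kappa_1 bound ~ {kappa1_max:.0e}")   # 1e-34
print(f"1/m ~ {compton_km:.1e} km")          # ~ 1.3e+06 km, of the order of the solar size
```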
\section{BOUNDARY CONDITIONS AT THE HORIZON \label{BC}} \setcounter{equation}{0} Let us require the g metric to have a regular event horizon at some $r=r_H$ by demanding that the metric components $g_{00}=Q^2$ and $g^{rr}=N^2$ show simple zeroes at this point. Therefore, we demand that close to this point one has $Q^2\sim N^2\sim r-r_H$ and we consider only the exterior region $r\geq r_H$ where $Q^2>0$ and $N^2>0$. Such a behavior is compatible with the field equations only if the f metric also shows a regular horizon at the same place, hence $q^2\sim Y^2\sim r-r_H$. As a result, both metrics share a horizon at the same place $r=r_H$, in agreement with \cite{Deffayet:2011rh, Banados:2011hk}. However, the horizon radius measured by the g metric, $r_H$, can be different from the radius measured by the second metric, $U(r_H)$. We therefore introduce the parameter $u\equiv {\bf u}(r_H)=U(r_H)/r_H$. As a result, the local solutions close to the horizon are expected to have the form \begin{align} \label{l1} N^2&=\sum_{n\geq 1}a_n(r-r_H)^n,~~~~~~Y^2=\sum_{n\geq 1}b_n(r-r_H)^n,\nonumber \\ U&=u\,r_H+\sum_{n\geq 1}c_n(r-r_H)^n, \end{align} the two other amplitudes being \begin{align} \label{l2} Q^2&=\sum_{n\geq 1}d_n(r-r_H)^n,~~~~~~~~ q^{2}=\sum_{n\geq 1}e_n(r-r_H)^n. \end{align} The equations then allow one to recurrently determine the coefficients $a_n,b_n,c_n,d_n,e_n$. It turns out that they can all be expressed in terms of $a_1$, which should fulfil a quadratic equation \begin{equation} \label{qqq} {\cal A} a_1^2+{\cal B} a_1+{\cal C}=0~~~~~\Rightarrow~~~~ a_1=\frac{1}{2{\cal A}}\,\left(-{\cal B}+\sigma\sqrt{{\cal B}^2-4{\cal AC}} \right),~~~~~~~\sigma=\pm 1, \end{equation} where ${\cal A,B,C}$ are functions of $u,r_H$ and of the theory parameters $b_k,\kappa_1,\kappa_2$. It turns out that one should choose $\sigma=+1$, since choosing $\sigma=-1$ always yields singular solutions.
Therefore, for a chosen value of the horizon size $r_H$, the local solutions \eqref{l1},\eqref{l2} comprise a set labeled by a continuous parameter $u$. These local solutions determine the boundary conditions at the horizon, and they can be numerically extended to the region $r>r_H$. The surface gravity for each metric is \cite{Volkov2012} \begin{equation} \kappa_g^2=\lim_{r\to r_H} Q^2N^{\prime 2}=\frac14 d_1 a_1,~~~~~ \kappa_f^2=\lim_{r\to r_H} q^2\left(\frac{Y}{U^\prime} \right)^{\prime 2}= \frac{e_1 b_1}{4\,c_1^2}, \end{equation} and using the values of the expansion coefficients determined by the equations yields the relation $\kappa_g=\kappa_f$, hence the two surface gravities coincide, as do the Hawking temperatures, \begin{equation} \label{TT} T=\frac{\kappa_g}{2\pi}=\frac{\kappa_f}{2\pi}. \end{equation} {Close to the horizon one has $N(r)\sim Y(r)\sim \sqrt{r-r_H}$, hence the derivatives $N^\prime$ and $Y^\prime$ are not defined at the horizon. The usual practice would then be to start the numerical integration not at $r=r_H$ but at a nearby point $r=r_H+\epsilon$. However, although the dependence on $\epsilon$ is expected to be small, its presence in the procedure may still lead to numerical instabilities. This point was emphasized in \cite{Torsello:2017cmz}. This difficulty can be resolved as follows. Setting \begin{equation} N(r)={S(r)}\,\nu(r),~~~~~Y(r)=S(r)\,y(r)~~~~~\mbox{with}~~S(r)=\sqrt{1-\frac{r_H}{r}}, \end{equation} the functions $\nu(r),y(r)$ and all their derivatives assume finite values at $r=r_H$. Making this change of variables in \eqref{eqs} gives a ``desingularized" version of the equations that allows us to start the numerical integration exactly at $r=r_H$. This form of the equations is described in Appendix \ref{Des}. } To recapitulate, all black holes for a given $r_H$ can be labeled by only one parameter $u$. If $u=1$ then the two metrics coincide everywhere and the solution is Schwarzschild \eqref{b2}.
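As a concrete illustration of the surface-gravity formulas, for the Schwarzschild case $N^2=Q^2=1-r_H/r$ the horizon expansion gives $a_1=d_1=1/r_H$, so $\kappa_g=\frac12\sqrt{d_1 a_1}=1/(2r_H)$ and $T=1/(4\pi r_H)$. A short numerical sketch (Python; the value of $r_H$ is an arbitrary illustrative choice) confirms the limit $\lim_{r\to r_H}Q^2 N^{\prime 2}=\frac14 d_1 a_1$:

```python
import math

r_H = 2.0  # illustrative horizon radius (in units of 1/m)

def N(r):
    # Schwarzschild seed: N^2 = Q^2 = 1 - r_H/r, so a_1 = d_1 = 1/r_H
    return math.sqrt(1.0 - r_H / r)

def QsqNpsq(r, h=1.0e-6):
    # the quantity Q^2 N'^2 whose horizon limit gives kappa_g^2
    Nprime = (N(r + h) - N(r - h)) / (2.0 * h)  # central difference
    return N(r) ** 2 * Nprime ** 2              # here Q = N

# approaching the horizon, Q^2 N'^2 -> d_1 a_1 / 4 = 1/(4 r_H^2)
limit = 1.0 / (4.0 * r_H ** 2)
vals = [QsqNpsq(r_H * (1.0 + eps)) for eps in (1e-2, 1e-3, 1e-4)]
print(vals, limit)

T = math.sqrt(limit) / (2.0 * math.pi)  # Hawking temperature 1/(4 pi r_H)
print(T)
```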
If $u=C$ where $C$ is a root of the algebraic equation \eqref{sgm}, then the solution is Schwarzschild-(anti-)de Sitter and is described by \eqref{prop1} and \eqref{prop2}. For other values of $u$ the numerical integration produces more general solutions which describe hairy black holes and which can be of the following three qualitative types, depending on their asymptotic behavior \cite{Volkov2012}. {\bf a)} Solutions extending up to arbitrarily large values of $r$ and asymptotically approaching a proportional AdS background \eqref{prop1}, \eqref{prop2}. At large $r$ one has $N=N_0\,(1+\delta N)$, $Y=Y_0\,(1+\delta Y)$, $U=U_0\,(1+\delta U)$ where $N_0,Y_0,U_0$ are given by \eqref{prop2}, while the deviations $\delta N,\delta Y,\delta U$ approach zero. In the linear approximation, the latter are described by \begin{equation} \delta N=\frac{A}{r^3}, ~~~~~~\delta U=B_1 e^{\lambda_1 r}+B_2 e^{\lambda_2 r},~~~~~~\delta Y={\cal O}(\delta U), \end{equation} where $A,B_1,B_2$ are integration constants and the real parts of $\lambda_1$ and $\lambda_2$ are {\it negative}. All three of these perturbation modes vanish for $r\to\infty$, and since the number of equations \eqref{eqs} is also three, it follows that the AdS background is an {\it attractor} at large $r$. {\bf b)} Solutions extending up to arbitrarily large values of $r$ and asymptotically approaching a deformed AdS background \eqref{NQQQ}. The latter is also an attractor at large $r$. {\bf c)} Solutions extending only up to $r=r_s<\infty$ where derivatives of some metric functions diverge, which corresponds to a curvature singularity. This exhausts the possible types of {\it generic} solutions. If one integrates the equations for many different values of $u$, one always obtains solutions of the above three types and one does not find asymptotically flat solutions other than Schwarzschild.
For example, choosing $u=1+\epsilon$ yields solutions which are almost Schwarzschild in a region close to the horizon, but for larger values of $r$ they deviate from the Schwarzschild metric more and more \cite{Volkov2012} (this means the Schwarzschild solution is Lyapunov unstable \cite{Torsello:2017cmz}). All of this does not mean that the Schwarzschild metric is the only asymptotically flat black hole solution. There may be others, but they are not parametrically close to the Schwarzschild solution and should correspond to some discrete values of $u$ which are difficult to detect by a ``brute force" method. \section{BOUNDARY CONDITIONS AT INFINITY \label{BI}} \setcounter{equation}{0} Let us suppose that the solutions approach flat space with $g_{\mu\nu}=f_{\mu\nu}=\eta_{\mu\nu}$ at large $r$ and set \begin{equation} \label{inf} N=1+\delta N,~~~~Y=1+\delta Y,~~~~~U=r+\delta U. \end{equation} In fact, a more general possibility would be to require the g metric to approach the flat Minkowski metric ${\rm diag}(-1,1,1,1)$ and the f metric to approach just a flat metric, as for example ${\rm diag}(-a^2,b^2,b^2,b^2)$ with constant $a,b$. This would lead to solutions whose Lorentz invariance is broken in the asymptotic region \cite{Berezhiani:2008nr,Comelli:2011wq}. However, we shall not analyze this option. Inserting \eqref{inf} into \eqref{eqs} yields \begin{eqnarray} \label{eqinf} \delta N^\prime&=&-\frac{1}{r}(\kappa_2\,\delta N+\kappa_1\, \delta Y)-\kappa_1 \delta U+{\cal N}_N, \nonumber \\ \delta Y^\prime&=&-\frac{1}{r}(\kappa_2\,\delta N+\kappa_1\,\delta Y )+\kappa_2\, \delta U+{\cal N}_Y, \nonumber \\ \delta U^\prime&=&\left(1+\frac{2}{r^2} \right)\,\left(\delta Y-\delta N\right)+{\cal N}_U, \end{eqnarray} where ${\cal N}_N,{\cal N}_Y,{\cal N}_U$ are the parts of the right-hand sides ${\cal D}_N,{\cal D}_Y,{\cal D}_U$ in \eqref{eqs} that are nonlinear in $\delta N,\delta Y,\delta U$.
Neglecting the nonlinear terms, the solution of these equations is \begin{eqnarray} \label{infXX} \delta N=\frac{A}{r}+B\,\kappa_1\,\frac{1+r}{r}\,e^{-r}+C\,\kappa_1\,\frac{1-r}{r}\,e^{+r}, \nonumber \\ \delta Y=\frac{A}{r}-B\,\kappa_2\,\frac{1+r}{r}\,e^{-r}-C\,\kappa_2\,\frac{1-r}{r}\,e^{+r}, \nonumber \\ \delta U=B\,\frac{r^2+r+1}{r^2}\,e^{-r}+C\,\frac{r^2-r+1}{r^2}\,e^{+r}, \end{eqnarray} where $A,B,C$ are integration constants. The part of this solution proportional to $A$ is the Newtonian mode describing the massless graviton subject to the linearized Einstein equations \eqref{m=0}. The other two modes proportional to $B$ and $C$ fulfill the Fierz-Pauli equations \eqref{FP} and describe the massive graviton, hence they contain the Yukawa exponents (remember that $r={\rm mr}$). As one can see, among the three modes only two are stable for $r\to \infty$ while the third one diverges in this limit, hence {\it flat space is not an attractor}. This is why one cannot get asymptotically flat solutions by simply integrating from the horizon -- trying to approach flat space in this way, the unstable mode $e^{+r}$ rapidly wins and drives the solution away from flat space. The only way to proceed is to suppress the unstable mode from the very beginning by requiring the solution at large $r$ to be \begin{eqnarray} \label{nnn} \delta N&=&\frac{A}{r}+B\,\kappa_1\,\frac{1+r}{r}\,e^{-r}+\ldots, \nonumber \\ \delta Y&=&\frac{A}{r}-B\,\kappa_2\,\frac{1+r}{r}\,e^{-r}+\ldots, \nonumber \\ \delta U&=&B\,\frac{r^2+r+1}{r^2}\,e^{-r}+\ldots, \end{eqnarray} where the dots denote nonlinear corrections. The usual practice would be to neglect the dots and assume that the linear terms approximate the solution everywhere for $r>r_\star$, where $r_\star$ is some large value. However, one can check that already the quadratic correction contains an additional factor of $\ln(r)$ and hence dominates the linear part for $r\to\infty$.
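One can verify directly that the modes \eqref{infXX} solve the linearized system \eqref{eqinf}. The sketch below (Python) evaluates the residuals of the three equations with derivatives taken by central finite differences; the values chosen for $\kappa_1,\kappa_2$ and for the sample radius are arbitrary illustrative assumptions:

```python
import math

k1, k2 = 0.3, 0.7  # illustrative values with kappa_1 + kappa_2 = 1

def modes(r, A=0.0, B=0.0, C=0.0):
    # the three modes of the linear solution (infXX)
    dN = A/r + B*k1*(1+r)/r*math.exp(-r) + C*k1*(1-r)/r*math.exp(r)
    dY = A/r - B*k2*(1+r)/r*math.exp(-r) - C*k2*(1-r)/r*math.exp(r)
    dU = B*(r*r+r+1)/r**2*math.exp(-r) + C*(r*r-r+1)/r**2*math.exp(r)
    return dN, dY, dU

def residuals(r, h=1.0e-6, **amps):
    # left-hand side minus right-hand side of the linearized system (eqinf)
    d = lambda i: (modes(r+h, **amps)[i] - modes(r-h, **amps)[i]) / (2*h)
    dN, dY, dU = modes(r, **amps)
    return (d(0) + (k2*dN + k1*dY)/r + k1*dU,
            d(1) + (k2*dN + k1*dY)/r - k2*dU,
            d(2) - (1 + 2/r**2)*(dY - dN))

for amps in ({"A": 1.0}, {"B": 1.0}, {"C": 1.0}):
    print(amps, [abs(x) for x in residuals(3.0, **amps)])  # all at round-off level
```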
Therefore, nonlinear corrections are important, but if all of them are taken into account, it is not obvious that the solution will remain asymptotically flat. Fortunately, problems of this kind have been studied -- see, e.g., \cite{BFM}. To take the nonlinear corrections into account, the procedure is as follows. Let us express $\delta N,\delta Y,\delta U$ in terms of three functions $Z_0,Z_{+},Z_{-}$: \begin{eqnarray} \label{ZZZ} \delta N&=&Z_0+\kappa_1\,\frac{1+r}{r}\,Z_{+}+\kappa_1\,\frac{1-r}{r}\,Z_{-}\,, \nonumber \\ \delta Y&=&Z_0-\kappa_2\,\frac{1+r}{r}\,Z_{+}-\kappa_2\,\frac{1-r}{r}\,Z_{-}\,, \nonumber \\ \delta U&=&\frac{1+r+r^2}{r^2}\,Z_{+}+\frac{1-r+r^2}{r^2}\,Z_{-}\, . \end{eqnarray} Equations \eqref{eqinf} then assume the form \begin{eqnarray} \label{Z} Z_0^\prime+\frac{Z_0}{r}&=&{\cal S}_{0}(r,Z_0,Z_\pm)\equiv \kappa_1\, {\cal N}_Y+ \kappa_2\, {\cal N}_N\,, \nonumber \\ Z_{+}^\prime+Z_{+}&=&{\cal S}_{+}(r,Z_0,Z_\pm)\equiv \frac{r^2-r+1}{2r^2}\,\left({\cal N}_N-{\cal N}_Y \right) +\frac{r-1}{2r}\,{\cal N}_U\,, \nonumber \\ Z_{-}^\prime-Z_{-}&=&{\cal S}_{-}(r,Z_0,Z_\pm)\equiv \frac{r^2+r+1}{2r^2}\,\left({\cal N}_Y-{\cal N}_N \right) +\frac{r+1}{2r}\,{\cal N}_U\, . \end{eqnarray} Terms on the left in these equations are linear in $Z_0,Z_\pm$, while those on the right are nonlinear. Neglecting the nonlinear terms, the solution is $Z_0=1/r$, $Z_{+} =e^{-r}$, $Z_{-}=e^{+r}$, and if we set \begin{equation} \label{0} Z_0=\frac{A}{r},~~~~~Z_{+}=B\,e^{-r},~~~~~~Z_{-}=0, \end{equation} this reproduces the linear part of \eqref{nnn}.
Now, to take the nonlinear terms into account, one converts Eqs.\eqref{Z} into the equivalent set of integral equations, \begin{eqnarray} \label{integral} Z_{0}(r)&=&\frac{A}{r}-\int_{r}^\infty \frac{\bar{r}}{r}\,{\cal S}_{0}(\bar{r},Z_0(\bar{r}),Z_\pm(\bar{r}))\, d\bar{r}\,, \nonumber \\ Z_{+}(r)&=&B\,e^{-r}+\int_{r_\star}^r e^{\bar{r}-r}\,{\cal S}_{+}(\bar{r},Z_0(\bar{r}),Z_\pm(\bar{r}))\, d\bar{r}\,,\nonumber \\ Z_{-}(r)&=&-\int_{r}^\infty e^{r-\bar{r}}\,{\cal S}_{-}(\bar{r},Z_0(\bar{r}),Z_\pm(\bar{r}))\, d\bar{r}\,, \end{eqnarray} where $r_\star$ is some large value. These equations determine the solution for $r>r_\star$, and they are solved by iterations. To start the iterations, one neglects the nonlinear terms, which gives the configuration \eqref{0}. The next step is to inject this configuration into the integrals, which gives the corrected configuration, and so on. In practice, one introduces variables $x=r_\star/r$ and $\bar{x}=r_\star/\bar{r}$ assuming values in the interval $[0,1]$, and then one discretizes the interval to compute the integrals. \begin{figure} \centering \includegraphics[scale=0.65]{plot_err.eps} \includegraphics[scale=0.65]{Z.eps} \caption{\textcolor{black}{Left: convergence of the iterations of the integral equations \eqref{integral}. Right: the amplitudes $Z_0$, $Z_\pm$ against $x=r_\star/r$, where the inset shows a close-up of $Z_{-}$.}} \label{Fig0} \end{figure} To see the convergence of the iterations, we compute for each $Z$ and for each discretization point the difference $\Delta Z_i=Z_{i+1}-Z_{i}$ of the results of the consecutive $(i+1)$th and $i$th iterations, and then we take the average $\overline{{\Delta {Z}}}_i$ over all discretization points. Computing similarly the average $\bar{Z}_i$ of $Z_i$, the ratios $\overline{\Delta Z}_i/\bar{Z}_i$ decrease with $i$ exponentially fast, as seen on the left panel in Fig.\ref{Fig0}, hence the iterations converge.
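The iteration of integral equations of this type can be illustrated on a solvable toy model. The sketch below (Python) applies the same Picard scheme to the single equation $Z^\prime+Z=\epsilon Z^2$ recast as $Z(r)=B\,e^{r_\star-r}+\int_{r_\star}^r e^{\bar r-r}\,\epsilon Z(\bar r)^2\, d\bar r$, whose exact solution $Z=1/(Ce^r+\epsilon)$ is known; all numerical values here are illustrative assumptions, and the true sources ${\cal S}_0,{\cal S}_\pm$ of \eqref{integral} are of course much more involved:

```python
import math

# toy model: Z' + Z = eps*Z^2 with Z(r_star) = B, recast as an integral
# equation; exact solution Z = 1/(C e^r + eps), C = (1/B - eps) e^{-r_star}
eps, B, r_star, r_max, n = 0.5, 1.0, 1.0, 6.0, 400
r = [r_star + (r_max - r_star) * i / (n - 1) for i in range(n)]

def picard_step(Z):
    # one iteration: inject the current Z into the integral (trapezoid rule)
    S = [eps * z * z for z in Z]
    Znew, acc = [], 0.0
    for i in range(n):
        if i > 0:
            h = r[i] - r[i - 1]
            acc += 0.5 * h * (math.exp(r[i - 1]) * S[i - 1] + math.exp(r[i]) * S[i])
        Znew.append(B * math.exp(r_star - r[i]) + math.exp(-r[i]) * acc)
    return Znew

Z = [B * math.exp(r_star - x) for x in r]  # start from the linear solution
for it in range(100):
    Znew = picard_step(Z)
    diff = sum(abs(a - b) for a, b in zip(Znew, Z)) / n  # averaged |Delta Z|
    Z = Znew
    if diff < 1e-12:
        break

C = (1.0 / B - eps) * math.exp(-r_star)
err = max(abs(z - 1.0 / (C * math.exp(x) + eps)) for z, x in zip(Z, r))
print(it, err)  # converges in a few dozen iterations; err is at the discretization level
```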
\textcolor{black}{The solution of the integral equations is shown on the right panel in Fig.\ref{Fig0}: the amplitudes $Z_0$ and $Z_\pm$ against $x=r_\star/r$ (for $r_\star =25\, r_H$). One can see that the amplitude $Z_{-}$ is always small but nonvanishing, and that all the three amplitudes vanish for $x=0$, hence the solution is indeed asymptotically flat. } This yields an asymptotically flat solution in the region $r>r_\star$. To extend this solution to the region $r_H<r<r_\star$ one only needs its values at $r=r_\star$, \begin{eqnarray} \label{0a} Z_0(r_\star)&=&\frac{A}{r_\star}-\int_{r_\star}^\infty \frac{\bar{r}}{r_\star}\,{\cal S}_{0}(\bar{r},Z_0(\bar{r}),Z_\pm(\bar{r}))\, d\bar{r}\,, ~~~~~Z_{+}(r_\star)=B\,e^{-r_\star},~~~~~\nonumber\\ Z_{-}(r_\star)&=&-\int_{r_\star}^\infty e^{r_\star-\bar{r}}\,{\cal S}_{-}(\bar{r},Z_0,Z_\pm)\, d\bar{r}\,. \end{eqnarray} To recapitulate, the procedure described above yields the boundary values for the fields at a large $r_\star$ and makes sure that the solution for $r>r_\star$ exists and is indeed asymptotically flat. It is worth noting that the parameter $A$ determines the ADM mass, \begin{equation} \label{ADM} M=-A. \end{equation} \section{NUMERICAL PROCEDURE \label{hairybh}} \setcounter{equation}{0} Summarizing the above discussion, the asymptotically flat black holes are described by solutions of the three coupled first order ODEs \eqref{eqs} for the three functions $N(r),Y(r),U(r)$ [which also determine $Q(r),q(r)$] with the following boundary conditions. At the horizon $r=r_H$ one has \begin{equation} N(r)=\sqrt{1-\frac{r_H}{r}}\,\nu(r), ~~~~~~Y(r)=\sqrt{1-\frac{r_H}{r}}\,y(r), \end{equation} where the horizon values $\nu(r_H)\equiv \nu_H$ and $y(r_H)\equiv y_H$ are finite and determined by the Eqs. \eqref{alg}, \eqref{yh} in Appendix \ref{Des}, while $U(r_H)\equiv U_H\equiv u\,r_H$ can be arbitrary.
Therefore, all possible boundary conditions at the horizon are labeled by just one free parameter $u$, and choosing some value for it, the equations can be integrated directly from the horizon, as explained in Appendix \ref{Des}, to the outer region $r>r_H$. Far from the horizon, at $r=r_\star\gg r_H$, one has \begin{eqnarray} \label{ZZZ1} N(r_\star)&=&1+Z_0(r_\star)+\kappa_1\,B\,\frac{1+r_\star}{r_\star}\,e^{-r_\star}+\kappa_1\,\frac{1-r_\star}{r_\star}\,Z_{-}(r_\star),\, \nonumber \\ Y(r_\star)&=&1+Z_0(r_\star)-\kappa_2\,B\,\frac{1+r_\star}{r_\star}\,e^{-r_\star}-\kappa_2\,\frac{1-r_\star}{r_\star}\,Z_{-}(r_\star),\, \nonumber \\ U(r_\star)&=&r_\star+B\,\frac{1+r_\star+r_\star^2}{r_\star^2}\,e^{-r_\star}+\frac{1-r_\star+r_\star^2}{r^2_\star}\,Z_{-}(r_\star)\, , \end{eqnarray} where $Z_0(r_\star)$ and $Z_{-}(r_\star)$ are functions of $A,B$ determined by \eqref{0a} via iterating the integral equations \eqref{integral}. As a result, we have the boundary conditions at $r=r_H$ labeled by $u$ and the boundary conditions at $r=r_\star$ labeled by $A,B$. We use them to construct solutions in the region $r_H\leq r\leq r_\star$. To this end, we choose some value of $u$ and integrate numerically the equations starting from $r=r_H$ as far as some $r=r_0<r_\star$ and we obtain at this point some values which will depend on $r_H$ and $u$: \begin{equation} N(r_0)\equiv N_{\rm hor}(r_H,u),~~~~~~~Y(r_0)\equiv Y_{\rm hor}(r_H,u),~~~~~~U(r_0)\equiv U_{\rm hor}(r_H,u). \end{equation} Then we choose $A,B$ and numerically extend the large $r$ data \eqref{ZZZ1} from $r=r_\star$ down to $r=r_0$, thereby obtaining \begin{equation} N(r_0)\equiv N_{\rm inf}(A,B),~~~~~~~Y(r_0)\equiv Y_{\rm inf}(A,B),~~~~~~U(r_0)\equiv U_{\rm inf}(A,B). 
\end{equation} If the two sets of values agree, hence if \begin{eqnarray} \label{match} \Delta N(r_H,u,A,B)&\equiv& N_{\rm hor}(r_H,u)-N_{\rm inf}(A,B)=0, ~~\nonumber \\ \Delta Y(r_H,u,A,B)&\equiv& Y_{\rm hor}(r_H,u)-Y_{\rm inf}(A,B)=0, ~~\nonumber \\ \Delta U(r_H,u,A,B)&\equiv& U_{\rm hor}(r_H,u)-U_{\rm inf}(A,B)=0, \end{eqnarray} then the solution in the interval $r\in [r_H,r_0]$ merges smoothly with the solution in the interval $r\in [r_0,r_\star]$ to represent a single solution in the interval $r\in [r_H,r_\star]$. The extension to the region $r>r_\star$ is then provided by the integral equations \eqref{integral}, finally yielding an asymptotically flat black hole solution in the region $r\in [r_H,\infty)$. It is worth noting that these solutions depend neither on $r_0$ nor on $r_\star$; these values can be varied without affecting the global solution (which is a good consistency check). In some cases using just two zones $[r_H,r_0]$ and $[r_0,r_\star]$ produces numerical errors that are too large. To keep the numerical instability under control, one should then integrate through many smaller zones $[r_H,r_0]$, $[r_0,r_1],[r_1,r_2]\ldots [r_k,r_\star]$ and perform matchings at $r_0,r_1,\ldots r_k$ (see Sec.7.3.5 in \cite{stoer}). This yields numerically stable results. In the case of just two zones, the problem reduces to solving the matching conditions \eqref{match} by adjusting the values $u,A,B$. At least one solution to these three conditions certainly exists and corresponds to the Schwarzschild solution, for which \begin{equation} \label{Sw} u=1,~~~~~ A=-\frac{r_H}{2},~~~~~ B=0. \end{equation} Are there other solutions? Since there are three matching conditions for the three variables, their solutions must constitute a {\it discrete} set of points $(u_k,A_k,B_k)$ in the 3-space spanned by $u,A,B$. This implies that different black hole solutions with the same $r_H$ are {\it parametrically isolated} from each other.
This creates a problem, since in order to solve the algebraic equations \eqref{match} numerically, an input configuration ${u},{A},{B}$ is needed to start the numerical iterations within the Newton-Raphson procedure \cite{Press:2007:NRE:1403886}. However, unless the input configuration is close to the solution, the numerical iterations do not converge, hence some additional information is necessary to specify where to start the iterations. As explained in the Introduction, the additional information is provided by the stability analysis of the Schwarzschild solution \eqref{b2} \cite{Babichev:2013una,Brito:2013wya}. In this analysis one considers the two metrics of the form \eqref{ansatz000} with \begin{eqnarray} Q&=&{S}+\delta Q,~~~N={S}+\delta N,~~~R=r\nonumber \\ q&=&{S}+\delta q,~~~Y={S}+\delta Y,~~~U=r+\delta U,~~~~~f_{01}=\delta \alpha, \end{eqnarray} where $S=\sqrt{1-\frac{r_H}{r}}$ while the perturbations $\delta Q,\delta N,\delta q,\delta Y,\delta U,\delta\alpha$ are assumed to be small and depend on $t,r$. It turns out that at the GL point, for $r_H=0.86$, the perturbation equations admit a {\it static} solution (zero mode) for which $\delta Q,\delta N,\delta q,\delta Y,\delta U$ depend only on $r$ and are bounded everywhere in the region $r\geq r_H$ while $\delta\alpha=0$. This solution can be viewed as a perturbative approximation of a new solution that merges with the Schwarzschild solution for $r_H=0.86$. This suggests that to get new solutions of the matching conditions \eqref{match}, one should choose the event horizon size to be close to $r_H=0.86$ and choose the input configuration ${u},{A},{B}$ to be close to \eqref{Sw}. Then the numerical iterations should converge to values $u,A,B$ which are slightly different from \eqref{Sw} and correspond to an almost Schwarzschild black hole slightly distorted by the massive graviton hair.
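The structure of this root search can be sketched as follows. The code below (Python) implements a basic Newton-Raphson iteration with a finite-difference Jacobian; since the true mismatch functions \eqref{match} require the two numerical integrations described above, they are replaced here by a mock algebraic map whose isolated root mimics the Schwarzschild values \eqref{Sw} for $r_H=1$. All function names and numbers are illustrative assumptions, not the actual equations:

```python
import math

def solve3(J, f):
    # solve the 3x3 linear system J dx = f by Gaussian elimination with pivoting
    M = [row[:] + [v] for row, v in zip(J, f)]
    for c in range(3):
        p = max(range(c, 3), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, 3):
            m = M[i][c] / M[c][c]
            for k in range(c, 4):
                M[i][k] -= m * M[c][k]
    dx = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        dx[i] = (M[i][3] - sum(M[i][k] * dx[k] for k in range(i + 1, 3))) / M[i][i]
    return dx

def newton(F, x, tol=1e-10, h=1e-7, itmax=40):
    # Newton-Raphson with a finite-difference Jacobian; needs a good input x
    for _ in range(itmax):
        f = F(x)
        if max(abs(v) for v in f) < tol:
            return x
        J = [[(F([x[j] + (h if j == k else 0.0) for j in range(3)])[i] - f[i]) / h
              for k in range(3)] for i in range(3)]
        dx = solve3(J, f)
        x = [x[i] - dx[i] for i in range(3)]
    raise RuntimeError("input configuration too far from a root")

def mismatch(x):
    # mock stand-in for (Delta N, Delta Y, Delta U); its isolated root is
    # (u, A, B) = (1, -1/2, 0), mimicking the Schwarzschild values for r_H = 1
    u, A, B = x
    return [u + 2.0 * A - u * B, B * (u + A) + math.sin(B), u ** 3 - 1.0 + B]

root = newton(mismatch, [1.05, -0.45, 0.02])  # start near the Schwarzschild values
print(root)  # close to u = 1, A = -0.5, B = 0; distant starting points typically fail
```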
Iteratively changing the value of $r_H$ then yields solutions which deviate considerably from the Schwarzschild metric close to the horizon, but always approach the flat metric in the asymptotic region. \begin{figure} \centering \includegraphics[scale=0.65]{coeff_2m1.eps} \includegraphics[scale=0.65]{coeff_3m3.eps} \centering \includegraphics[scale=0.65]{coeff_15m15.eps} \includegraphics[scale=0.65]{up_bis.eps} \caption{Profiles of $N/S$, $Y/S$, $Q/S$, $q/S$ with $S=\sqrt{1-r_H/r}$ and that of $U^\prime$ for solutions with $\eta=\pi/4$ but for various values of $r_H,c_3,c_4$. The solution with $c_3=-c_4=3/2$ shown on the two lower panels is singular because the amplitudes $q,Y,U^\prime$ develop zeros outside the horizon. } \label{Fig1} \end{figure} \section{ASYMPTOTICALLY FLAT HAIRY BLACK HOLES \label{NR}} \setcounter{equation}{0} Applying the procedure outlined above, we were able to construct asymptotically flat hairy solutions. We confirm the results of Ref. \cite{Brito:2013xaa} and obtain many new results. First of all, we find that for $r_H$ approaching the GL value $r_H\approx 0.86$ from below, there are asymptotically flat hairy black hole solutions for any $c_3,c_4,\eta$. They are very close to the Schwarzschild solution: one has $u=U_H/r_H\approx 1$ and the ADM mass $M\approx r_H/2$. However, for smaller values of $r_H$ the solutions deviate more and more from Schwarzschild. To illustrate this, we plot in Fig.~\ref{Fig1} the functions $N/S$, $Q/S$, $Y/S$, $q/S$, and $U^\prime$. If these functions are all equal to one, then the solution is Schwarzschild. As one can see, they indeed approach unity far away from the horizon, but close to the horizon they deviate considerably from unity, hence the massive graviton hair is concentrated in this region.
\begin{figure} \centering \includegraphics[scale=0.65]{coeff25m25_pio8_compact.eps} \includegraphics[scale=0.65]{coeff25m25_pio4_compact.eps} \centering \includegraphics[scale=0.65]{coeff25m25_3pio8_compact.eps} \includegraphics[scale=0.65]{coeff25m25_pio2_compact.eps} \caption{Profiles of $N/S$, $Y/S$, $Q/S$, $q/S$ with $S=\sqrt{1-r_H/r}$ \textcolor{black}{against $\xi=(r-r_H)/(r+r_H)$} for solutions with the same $c_3,c_4,r_H$ but for different values of $\eta$. When $\eta$ approaches $\pi/2$ then the amplitudes $Y,q$ start showing zeroes. For $\eta=\pi/2$ the g metric is Schwarzschild with $N/S=Q/S=1$. } \label{Fig-eta} \end{figure} Solutions are regular for $r_H$ close to $0.86$; however, for smaller $r_H$ and depending on values of $c_3,c_4,\eta$, the amplitudes $Y,q,U^\prime$ may show additional zeros outside the horizon, whereas $Q,N$ always remain positive. This implies that the f metric is singular, because the invariants of its Riemann tensor diverge where the zeros are located. \textcolor{black}{An example of this is shown on the lower two panels in Fig.~\ref{Fig1}, and also on the lower two panels in Fig.~\ref{Fig-eta} where one can see that the phenomenon occurs when $\eta$ approaches $\pi/2$. The fact that the f metric becomes singular does not invalidate the solutions because the f-geometry is not directly measurable and its singularities are not seen, while the g metric, which can be probed by test particles, remains always regular. We shall therefore keep such solutions in our consideration. } \textcolor{black}{Solutions in Fig.~\ref{Fig1} are shown up to large but still finite values of the radial coordinate, $r/r_H\leq 100$ or $r/r_H\leq 1000$. What is shown is the combination of the solutions of differential equations \eqref{eqs} in the region $r_H\leq r\leq r_\star$ and of the integral equations \eqref{integral} for $r>r_\star$ where $r_\star/r_H=25$. At the same time, our procedure yields solutions in the whole region $r\in [r_H,\infty)$. 
Introducing the compactified radial variable \begin{equation} \label{comp} \xi=\frac{r-r_H}{r+r_H}\in [0,1], \end{equation} we plot in Fig.~\ref{Fig-eta} the amplitudes $N,Y,Q,q$ against $\xi$. As seen, the amplitudes approach unity as $\xi\to 1$ (the same is true for $U^\prime$), hence the solutions are indeed asymptotically flat. The disadvantage of this parametrization is that the slope of the functions does not vanish for $\xi\to 1$. Indeed, for large $r$ one has $N=1-M/r+\ldots$ and $\xi=1-2r_H/r+\ldots$ hence at infinity $dN/d\xi=M/(2r_H)$. } \begin{figure} \centering \includegraphics[scale=0.65]{coeff25m25_0_compact.eps} \includegraphics[scale=0.65]{u_eta_25m25.eps} \centering \includegraphics[scale=0.65]{M_eta_25m25.eps} \includegraphics[scale=0.65]{T_eta_25m25.eps} \caption{Upper left: $N/S$, $Y/S$, $Q/S$, $q/S$ with $S=\sqrt{1-U_H/U}$ against the compact variable $\xi_U=(U-U_H)/(U+U_H)$ for $\eta=0$. One has $Y/S=q/S=1$ hence the f metric is Schwarzschild. The other three panels show $u=U_H/r_H$, the ADM mass $M$, and the temperature $T$ against $\eta$.} \label{Fig-eta0} \end{figure} If $\eta=\pi/2$ then $\kappa_1=0$ and the g metric becomes Schwarzschild. The theory then reduces to the massive gravity for the dynamical f metric on a fixed Schwarzschild background. The solution for the f metric is shown on the lower right panel in Fig.~\ref{Fig-eta}. Similarly, for $\eta=0$ one has $\kappa_2=0$ and the f metric is Schwarzschild, while the g metric is a solution of the massive gravity on the Schwarzschild background shown on the upper left panel in Fig.~\ref{Fig-eta0}. One should emphasize that the radii of the background Schwarzschild black holes for $\eta=0$ and for $\eta=\pi/2$ are not the same.
For example, for $\eta=\pi/2$ the Schwarzschild black hole has $r_H=0.18$ for the solution shown in Fig.~\ref{Fig-eta}, while for $\eta=0$ the event horizon size is determined not by $r_H$ but by $U_H=ur_H$ where $u\approx 5$ (as seen in Fig.\ref{Fig-eta0}) hence this time the Schwarzschild black hole is much larger. As a result, solutions on these different backgrounds look quite different -- the solution for the f metric on the lower right panel in Fig.~\ref{Fig-eta} shows zeros, hence it is singular, while the solution for the g metric on the upper left panel in Fig.~\ref{Fig-eta0} is regular. \textcolor{black}{Solutions for $\eta=\pi/2$ will play an important role below. We shall call them ``hairy Schwarzschild" because their g metric is Schwarzschild but their f metric supports hair.} Figure \ref{Fig-eta0} shows the $\eta$ dependence of $u=U_H/r_H$ and of the ADM mass $M$ expressed in units of the Schwarzschild mass $M_S=r_H/2$, as well as the temperature $T$ expressed in units of the Schwarzschild temperature $T_S=1/(4\pi r_H)$. As one can see, the dependence is rather strong for small $r_H$, in particular for $u$. The decrease of the mass $M$ with $\eta$ can be understood by noting that the mass is the same with respect to each metric (the same is true for the temperature). If $\eta=\pi/2$ then the g metric is Schwarzschild hence $M=M_S$ and $T=T_S$. If $\eta=0$ then the f metric is Schwarzschild with a larger radius $U_H=u r_H$, hence the mass is larger, $M=U_H/2=u M_S$, while the temperature is smaller, $T=T_S/u$. Therefore, if $\eta=0$ then $M/M_S=u$ so that, for example, $u\approx 5$ for $r_H=0.18$, as seen in Fig.~\ref{Fig-eta0}. Figure \ref{Fig4} shows the dependence of $u$ and $M$ on $c_3$ in the case where $c_3=-c_4$. One can see that the solutions exist only if the value of $c_3=-c_4$ is not too small. Similarly, hairy black holes do not exist for arbitrarily small values of $r_H$.
As was noticed in \cite{Brito:2013xaa}, small $r_H$ black holes exist if the coefficient $b_3$ in the potential \eqref{2} vanishes so that the cubic part of the potential is absent. In view of \eqref{bbb}, this requires that $c_3=-c_4$, but this is not the only condition. Depending on the parameter values, one can distinguish the following two cases: \begin{eqnarray} \label{typeI} \mbox{I}: ~~~~ c_3\neq -c_4~~\mbox{or}~~c_3= -c_4< 1, ~~~~~~~ \mbox{II}: ~~~c_3= -c_4\geq 1. \end{eqnarray} In case I asymptotically flat hairy black holes exist only if $0<r_H^{\rm min}\leq r_H<0.86$ hence they cannot be arbitrarily small. In case II they exist for any $0<r_H<0.86$, although their f metric may be singular for small $r_H$. We shall see below in Sec. \ref{par} what happens when $r_H$ approaches the lower bound. \begin{figure} \centering \includegraphics[scale=0.65]{M_c3c4.eps} \includegraphics[scale=0.65]{u_c3c4.eps} \caption{The $M/M_S$ (left) and $u=U_H/r_H$ (right) against $c_3=-c_4$. } \label{Fig4} \end{figure} \subsection{Duality relation} The results described above in this section essentially reproduce those of Ref.~\cite{Brito:2013xaa}, the only important difference being that we show solutions for different values of $\eta$, whereas Ref.~\cite{Brito:2013xaa} shows them only for $\eta=\pi/4$. However, starting from this moment and in the following two Sections we shall be describing new results. Reference \cite{Brito:2013xaa} finds solutions only below the GL point, for $r_H\leq 0.86$. At the same time, the consistency of the procedure requires that there should be asymptotically flat hairy black holes also for $r_H>0.86$. This follows from the symmetry \eqref{change} of the equations, which now reads \begin{equation} \label{dual} \eta\to\frac{\pi}{2}-\eta,~~~Q\leftrightarrow q,~~~N\leftrightarrow Y,~~~U\leftrightarrow r, ~~~c_3\to 3-c_3,~~~c_4\to 4c_3+c_4-6. 
\end{equation} More precisely, this means that if for some values of $\eta,c_3,c_4$ there is a solution \begin{equation} \label{QQ1} Q(r), q(r), N(r), Y(r), U(r), \end{equation} then for $\tilde{\eta}=\pi/2-\eta$, $\tilde{c}_3=3-c_3$, $\tilde{c}_4= 4c_3+c_4-6$ there should be the ``dual" solution described by \begin{equation} \label{Q2} \tilde{Q}(r)=q(w(r)),~~~\tilde{q}(r)=Q(w(r)),~~~\tilde{N}(r)=Y(w(r)),~~~\tilde{Y}(r)=N(w(r)),~~~\tilde{U}(r)=w(r), \end{equation} where $w(r)$ is the inverse function of $U(r)$, such that $U(w(r))=r$. This duality relates black holes of different sizes to one another, because \eqref{QQ1} has the horizon at $r=r_H$ while the horizon of \eqref{Q2} is located where $w(r)=r_H$, that is at $r=\tilde{r}_H=U(r_H)$. One has \begin{equation} \tilde{u}=\frac{\tilde{U}(\tilde{r}_H)}{\tilde{r}_H}=\frac{r_H}{U(r_H)}=\frac{1}{u}. \end{equation} Now, for hairy solutions with $r_H<0.86$ one always has $U(r_H)>0.86$ and $u=U(r_H)/r_H>1$. It follows that their duals are characterized by $\tilde{r}_H>0.86$ and by $\tilde{u}_H<1$. An explicit example of the duality relation is shown in Fig.~\ref{Fig2}, which presents on the left panel the solution for $c_3=-c_4=2$, $\eta=\pi/4$, $r_H=0.15$ for which $U(r_H)=1.364$, hence $u=1.364/0.15=2.42$. The duality implies that for $c_3=1$, $c_4=0$, $\eta=\pi/4$ there must be the dual solution with $r_H=1.364$ and $u=0.15/1.364=0.41$, which is indeed confirmed by our numerics. Plotting the first solution against $U/U_H$ and the second one against $r/r_H$, as shown in Fig. \ref{Fig2}, yields exactly the same curves, up to the interchange $N\leftrightarrow Y$, $Q\leftrightarrow q$. It is unclear why solutions with $r_H>0.86$ were not found in \cite{Brito:2013xaa}. The duality is in fact a powerful tool for studying the solutions, because sometimes their properties may look puzzling in one description but become obvious within the dual description.
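The relation $\tilde u=1/u$ is easy to illustrate numerically with any monotonically increasing profile standing in for $U(r)$: one computes $\tilde r_H=U(r_H)$, inverts $U$ (here by bisection), and checks that $\tilde u=\tilde U(\tilde r_H)/\tilde r_H=w(\tilde r_H)/\tilde r_H=1/u$. The profile and the numbers in the sketch below (Python) are purely illustrative assumptions:

```python
def U(r):
    # illustrative monotone profile standing in for U(r); U'(r) > 0 for r > 0
    return r + 0.5 + 0.5 / (1.0 + r)

def w(x, lo=0.0, hi=1.0e6):
    # inverse function of U, found by bisection: U(w(x)) = x
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if U(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_H = 0.15
u = U(r_H) / r_H           # parameter u of the original solution
rt_H = U(r_H)              # horizon radius of the dual solution
ut = w(rt_H) / rt_H        # parameter u-tilde of the dual solution
print(u, ut, u * ut)       # u * ut = 1 up to the bisection accuracy
```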
\begin{figure} \centering \includegraphics[scale=0.65]{coeff2m2sym10.eps} \includegraphics[scale=0.65]{coeff10sym2m2.eps} \caption{The solution with $c_3=-c_4=2$, $\eta=\pi/4$, $r_H=0.15$ (left) and the dual solution with $c_3=1$, $c_4=0$, $\eta=\pi/4$, $r_H=1.364$ (right). The curves on the two panels are exactly the same, up to the interchange $N\leftrightarrow Y$, $Q\leftrightarrow q$, $r/r_H\leftrightarrow U/U_H$. } \label{Fig2} \end{figure} \section{STABILITY ANALYSIS\label{pert}} \setcounter{equation}{0} In this section we analyze the stability of the hairy solutions by studying their perturbations within the ansatz described in Appendix \ref{time}, \begin{align*} \label{anz0} ds^2_g&=-Q^2 dt^2+\frac{dr^2}{N^2}+r^2 d\Omega^2,\\ ds^2_f&=-\big(q^2-\alpha^2Q^2N^2\big)dt^2-2\alpha\bigg(q+\frac{QNU^\prime}{Y}\bigg)dtdr+\bigg(\frac{U^{\prime 2}}{Y^2}-\alpha^2\bigg)dr^2+U^2 d\Omega^2,\numberthis{} \end{align*} where $Q$, $q$, $N$, $Y$, $\alpha$, $U$ are functions of $r$ and $t$. The full set of the field equations in this case is shown in Appendix \ref{time}. If we set $\alpha=0$ and assume that nothing depends on time, then we return to the static case studied above. Therefore, small deviations from the static solutions are described by \eqref{anz0} with \begin{align*} \label{ptb} Q(r,t)&=\accentset{(0)}Q(r)+\delta Q(r,t),\\ q(r,t)&=\accentset{(0)}q(r)+\delta q(r,t),\\ N(r,t)&=\accentset{(0)}N(r)+\delta N(r,t),\numberthis{}\\ Y(r,t)&=\accentset{(0)}Y(r)+\delta Y(r,t),\\ U(r,t)&=\accentset{(0)}U(r)+\delta U(r,t),\\ \alpha(r,t)&=\delta\alpha(r,t), \end{align*} where the functions $\accentset{(0)}Q(r)$, $\accentset{(0)}q(r)$, $\accentset{(0)}N(r)$, $\accentset{(0)}Y(r)$, $\accentset{(0)}U(r)$ correspond to the background black hole solution while the perturbations $\delta Q, \delta q, \delta N, \delta Y, \delta U, \delta\alpha $ are small. We therefore insert \eqref{ptb} into Eqs.~\eqref{Ein00} and \eqref{cons00} and linearize with respect to the perturbations.
Linearizing the $G^0_{~1}(g)=\kappa_1 T^0_{~1}$ equation yields \begin{equation} \label{XXX} \frac{2}{rNQ^2}\,\delta \dot{N}=\kappa_1 \frac{{\cal P}_1}{Q}\,\delta{\alpha}, \end{equation} where $N,Q,{\cal P}_1$ refer to the static background, and for simplicity we omit their superscript ``$(0)$''. \textcolor{black}{In the linear perturbation theory one can consistently separate the time variable by assuming the harmonic time dependence for all amplitudes (this would no longer be possible if nonlinear corrections are taken into account), so that we choose} \begin{equation} \delta N(t,r)=e^{i\omega t}\, \delta N(r),~~~~~\delta \alpha(t,r)=e^{i\omega t}\, \delta \alpha(r), \end{equation} and similarly for $\delta Y,\delta Q,\delta q,\delta U$. Injecting this into \eqref{XXX} yields the algebraic relation \begin{equation} \delta \alpha(r)=\frac{2i\omega}{rNQ{\cal P}_1}\,\delta N(r). \end{equation} Linearizing similarly the $G^0_{~1}(f)=\kappa_2 {\cal T}^0_{~1}$ equation yields a linear relation between $\delta\alpha(r)$, $\delta Y(r)$, $\delta U(r)$. Using these two algebraic relations one finds that the three equations $G^0_{~0}(g)=\kappa_1 T^0_{~0}$, $G^0_{~0}(f)=\kappa_2 {\cal T}^0_{~0}$ and $ \stackrel{(g)}{\nabla}_\mu T^\mu_{~0}=0$ yield upon linearization three mutually equivalent relations. Therefore, among the 8 equations \eqref{Ein00}, \eqref{cons00} only 6 are independent (at least at the linearized level). Taking all of this into account and linearizing similarly the remaining 3 equations $G^1_{~1}(g)=\kappa_1 T^1_{~1}$, $G^1_{~1}(f)=\kappa_2 {\cal T}^1_{~1}$ and $ \stackrel{(g)}{\nabla}_\mu T^\mu_{~1}=0$, one finds that all 6 perturbation amplitudes $\delta Q(r)$, $\delta q(r)$, $\delta N(r)$, $\delta Y(r)$, $\delta U(r)$ and $\delta\alpha(r)$ can be expressed in terms of a single master amplitude $\Psi(r)$ subject to the Schrödinger-type equation, \begin{equation} \label{eqpert} \frac{d^2\Psi}{dr_*^2}+\big(\omega^2-V(r)\big)\Psi=0.
\end{equation} \textcolor{black}{The master amplitude $\Psi(r)$ is a linear combination of $\delta N(r)$ and $\delta Y(r)$ with rather complicated coefficients whose explicit expression is not particularly illuminating, hence we do not show it explicitly. The potential $V(r)$ is also a complicated function of the background amplitudes that we do not show. } The tortoise radial coordinate $r_\ast\in (-\infty,+\infty)$ is defined by the relation \begin{equation} dr_*=\frac{1}{{a(r)}}\, dr, \end{equation} where the function $a(r)$ (also complicated) varies from $0$ to $1$ as $r$ changes from $r_H$ to $\infty$. The potential $V$ always tends to zero at the horizon, for $r_*\to -\infty$, and it approaches unity at infinity, for $r_*\to +\infty$. One should remember that our dimensionless variables are related to the dimensionful ones via $r={\rm m}\,{\rm r}$, $r_H={\rm m}\,{\rm r}_H$, $V={\rm V}/{\rm m}^2$, $\omega={\bm \omega}/{\rm m}$. For the bald Schwarzschild background with $Q=q=N=Y=\sqrt{1-r_H/r}$ and $U=r$, one has $a(r)=Q^2(r)$ and the potential reduces to \begin{equation} \label{potsch} V(r)=\bigg(1-\frac{r_H}{r}\bigg)\bigg(1+\frac{r_H}{r^3}+6\,\frac{r_H(r_H-2r)+r^3(r-2r_H)}{(r_H+r^3)^2}\bigg), \end{equation} in agreement with Ref. \cite{Brito:2013wya}. In the flat space limit $r_H\to 0$ this reduces to $V(r)=1+6/r^2$, which is the potential of a massive particle of unit mass (in units of the graviton mass) with spin $s=2$. Equation \eqref{eqpert} defines the eigenvalue problem on the line $r_\ast\in (-\infty,+\infty)$. Solutions of this problem with $\omega^2>0$ describe scattering states of gravitons. In addition, there can be bound states with purely imaginary frequency $\omega =i\sigma$ and hence with $\omega^2=-\sigma^2<0$. For such solutions the wave function $\Psi$ is everywhere bounded and square-integrable, because one has $ e^{+\sigma r_\ast}\leftarrow \Psi \rightarrow e^{-\sqrt{1+\sigma^2}\,r_\ast} $ as $-\infty \leftarrow r_\ast \rightarrow +\infty$, respectively.
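For the bald background the potential \eqref{potsch} can be evaluated directly. The following sketch (our illustration, not part of the original computation) verifies its stated properties: it vanishes at the horizon, dips below zero just outside it, approaches unity at infinity, and reduces to $1+6/r^2$ in the flat-space limit:

```python
def V_schw(r, rH):
    """Radial perturbation potential of the bald Schwarzschild background,
    Eq. (potsch), in units of the graviton mass."""
    return (1 - rH / r) * (1 + rH / r**3
                           + 6 * (rH * (rH - 2 * r) + r**3 * (r - 2 * rH))
                           / (rH + r**3)**2)

rH = 0.78
assert V_schw(rH, rH) == 0.0              # vanishes at the horizon
assert V_schw(1.05 * rH, rH) < 0          # negative just outside it
assert abs(V_schw(1e6, rH) - 1) < 1e-5    # tends to unity at infinity
r = 2.0                                   # flat-space limit: V -> 1 + 6/r^2
assert abs(V_schw(r, 1e-12) - (1 + 6 / r**2)) < 1e-6
```

The region of negative $V$ near the horizon is what makes bound states, and hence the instability discussed below, possible.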
Such bound state solutions grow in time as $e^{i\omega t}=e^{\pm \sigma t}$. Therefore, they correspond to unstable modes of the background black holes. \subsection{Computing the eigenfrequencies} \label{eigenfrequencies} Our aim is to investigate the possible existence of negative modes with $\omega^2<0$ in the spectrum of the eigenvalue problem \eqref{eqpert}. If such modes exist, then the background black holes are unstable. If they do not exist, then the black holes are stable with respect to spherically symmetric perturbations, which would strongly suggest that they should be stable with respect to all perturbations. Indeed, in most known cases the $S$-channel is the only place where an instability can reside (of course, this should be proven case by case). \begin{figure} \centering \includegraphics[scale=0.65]{pot_036.eps} \includegraphics[scale=0.65]{pot_078.eps} \caption{Potential $V(r)$ for $r_H=0.36$ (left) and for $r_H=0.78$ (right) for different $c_3,c_4$ with $\eta=\pi/4$.} \label{potplots} \end{figure} The first thing to check is the shape of the potential $V(r)$, because if it is everywhere positive, then there are no bound states. We therefore show in Fig.~\ref{potplots} the potential $V(r)$ for the hairy backgrounds for several values of the event horizon size $r_H$ and for different $c_3,c_4$, and we also show $V(r)$ for the bald Schwarzschild solution with the same $r_H$ (it does not depend on $c_3,c_4$). We observe that in each case the potential vanishes at the horizon, then shows negative values in its vicinity, and then approaches unity as $r\to\infty$. Since the potential is not positive definite, bound states {\it may} exist, but their existence is not yet guaranteed. We know that a bound state certainly exists for the bald Schwarzschild background with $r_H<0.86$ \cite{Babichev:2013una,Brito:2013wya}.
When looking at the potentials for the hairy solution with $r_H=0.78$ in Fig.~\ref{potplots}, we notice that they are close to the Schwarzschild potential, hence a bound state could exist for these potentials as well. In order to know whether bound states exist or not, we use the well-known Jacobi criterion \cite{gelfand2000calculus} and construct the solution of the Schrödinger equation \eqref{eqpert} with $\omega=0$. If this solution $\Psi(r)$ crosses zero somewhere, then there are bound states. We start in the asymptotic region where the tortoise coordinate $r_\ast$ becomes identical to the usual $r$, hence Eq.\eqref{eqpert} reduces simply to $\Psi^{\prime\prime}=\Psi$ so that the bounded solution is $\Psi=e^{-r}$. Then we extend this solution numerically toward small values of $r$, and we find that, depending on values of $r_H,\eta,c_3,c_4$, it may indeed show a zero as $r$ approaches $r_H$. Therefore, there exists a bound state. \begin{figure} \centering \includegraphics[scale=0.65]{psi_1m1.eps} \includegraphics[scale=0.65]{psi_2m2.eps} \caption{Negative mode eigenfunctions $\Psi(r)$ for $\eta=\pi/4$ and different $r_H$, with $c_3=-c_4=1$ (left) and $c_3=-c_4=2$ (right). They vanish at the horizon and at infinity.} \label{psi} \end{figure} The next step is to actually find the bound state by solving the eigenvalue problem \eqref{eqpert} with the potential $V(r)$ obtained by numerically solving the background equations. For this we set $\omega^2=-\sigma^2$ and determine the local solutions at infinity and close to the horizon, \begin{equation} \label{BBB} B\,(r-r_H)^{\sigma r_H}\leftarrow \Psi(r) \rightarrow e^{-\sqrt{1+\sigma^2}r}~~~~\mbox{as}~~~~r_H\leftarrow r\to \infty, \end{equation} where $B$ is an integration constant. Then we apply the multiple shooting method and numerically extend the horizon solution toward large $r$, extending at the same time the large $r$ solution toward small $r$. 
The two solutions meet at some intermediate point $r=r_0$, where the values of $\Psi(r_0)$ and $\Psi^\prime(r_0)$ should agree. This gives two conditions to be fulfilled by adjusting the two parameters $B$ and $\sigma$ in \eqref{BBB}, which finally yields the bound state solution on the whole line (see \cite{Pani:2013pma,Berti:2009kk} for reviews of black hole perturbation theory and the tools that can be used to solve the perturbation equation). The eigenfunctions $\Psi$, plotted against the ordinary radial coordinate $r$, are shown in Fig.~\ref{psi}. They vanish at the horizon, then show a maximum, sometimes very close to the horizon, and then approach zero for $r\to \infty$. As a result, we find negative eigenvalues $\omega^2<0$ for all hairy black holes obtained in \cite{Brito:2013xaa}. Therefore, all these solutions are unstable. It is worth emphasizing that all of them correspond to the particular choice $\eta=\pi/4$, hence $\kappa_1=\kappa_2=1/{2}$. In order to test our method, we have also computed the negative mode for the bald Schwarzschild solution as in \cite{Brito:2013wya}. As seen in Fig.~\ref{omega}, the absolute value of the negative mode eigenvalue for the Schwarzschild solution is always larger than that for the hairy solutions. Therefore, the instability growth rate for the hairy black holes is not as large as for the Schwarzschild solution. In all cases, since one has $\omega={\bm\omega}/{\rm m}$ where ${\bm\omega}$ is the dimensionful physical frequency, the instability growth time is $1/{\bm\omega}=1/(\omega {\rm m})$. If we assume the graviton mass ${\rm m}$ to be very small and given by \eqref{Hubble}, then the instability growth time will be cosmologically large, hence the instability will not play any role.
However, as we shall see below, it is preferable to assume that $1/{\rm m}\leq 10^6$~km according to \eqref{m}, in which case the instability growth time will be less than $10^3$ seconds, hence the instability is dangerous and should be avoided. \begin{figure} \centering \includegraphics[scale=0.9]{omega.eps} \caption{The negative mode eigenvalue $\omega^2(r_H)$ for the hairy and for the bald Schwarzschild black holes against $r_H$ for different values of $c_3$, $c_4$. In all cases $\eta=\pi/4$. } \label{omega} \end{figure} As seen in Fig.~\ref{omega}, the eigenvalue $\omega^2(r_H)<0$ approaches zero when $r_H\to 0.86$; therefore, all hairy black holes then become stable. However, they are no longer hairy in this limit, because they ``lose their hair'' and merge with the bald Schwarzschild solution. Near $r_H=0.86$ all solutions are close to each other and $\omega^2$ is close to zero for any $c_3,c_4,\eta$, while for smaller $r_H$ the backgrounds and $\omega^2$ become parameter dependent. \textcolor{black}{The eigenvalue $\omega^2(r_H)<0$ may approach zero also for type I solutions at a small $r_H\neq 0$ where they cease to exist. } For example, for $c_3=1$, $c_4=0$, the hairy solution disappears at $r_H\sim 0.58$, and at the same time the eigenvalue $\omega^2$ approaches zero, as seen in the inset of Fig.~\ref{omega}. The instability of hairy black holes is in fact a somewhat puzzling phenomenon, since it is unclear what they may decay into. Since the hairy solutions with $r_H<0.86$ are more energetic than the bald Schwarzschild solution, they may presumably approach the latter by absorbing and/or radiating away their hair during their decay. However, the bald Schwarzschild solution is also unstable for $r_H<0.86$ and should decay into something.
The perturbative instability of the Schwarzschild solution in the massive bigravity theory is mathematically equivalent \cite{Babichev:2013una} to the Gregory-Laflamme instability of the vacuum black string in $D=5$ \cite{Gregory:1993vy}. It is known that the nonlinear development of the latter leads to the formation of an infinite string of ``black hole beads'' in $D=5$, but the event horizon topology does not change \cite{Lehner:2011wc}. Since this result was established within $D=5$ vacuum GR, a similar scenario is not possible in the $D=4$ bigravity theory, hence the fate of the bigravity black holes should be different. One possibility is that the black hole radiates away all of its energy within the S-channel (some radiative solutions are known explicitly \cite{Kocic:2017hve, Hogas:2019cpg}), but it is unclear what happens to the horizon, whether it disappears or not. In GR the horizon cannot disappear via a classical process \cite{Hawking:1973uf}, but in the bimetric theory the situation might be different. Remarkably, we find that these puzzling issues are not omnipresent and the black holes can be stable if $\eta$ is different from $\pi/4$. In Fig.~\ref{et} we show $\omega^2$ against $\eta$ for several values of $r_H$ for solutions with $c_3=-c_4=2$. One can see that $\omega^2(\eta)<0$ approaches zero and the negative mode disappears in the hairy Schwarzschild limit when $\eta$ approaches $\pi/2$. At the same time, the bald Schwarzschild solutions for the same $r_H$ are certainly unstable. This is a very encouraging fact -- we see that adding hair to the black hole has a stabilizing effect. As seen in Fig.~\ref{et}, the eigenvalue also approaches zero when $\eta$ becomes small, provided that $r_H$ is small as well. Summarizing the above discussion, for some parameter values the hairy black holes are unstable, but for other parameter values they can be stable.
Below we shall describe a parameter choice leading to a large set of stable solutions. \begin{figure} \centering \includegraphics[scale=0.9]{omega_eta.eps} \caption{The negative mode eigenvalue $\omega^2(\eta)$ for the hairy black holes with $c_3=-c_4=2$. } \label{et} \end{figure} \section{PARAMETER SPACE AND THE PHYSICAL SOLUTIONS\label{par}} \setcounter{equation}{0} In this section we give a detailed description of particular subsets of solutions. Providing a complete classification of solutions depending on the 4 parameters $r_H,\eta,c_3,c_4$ would be a very difficult task. We therefore adopt the following strategy: choosing the particular values \begin{equation} \label{ch1} c_3=-c_4=5/2 \end{equation} which fulfill condition II in \eqref{typeI}, we study the solutions for all possible $r_H,\eta$. Performing next the duality transformation gives us all possible solutions for \begin{equation} \label{ch2} c_3=1/2,~~~c_4=3/2, \end{equation} whose values fulfill condition I in \eqref{typeI}. This approach reveals interesting and rather complex features which are presumably generic for any $c_3,c_4$. Figure \ref{FA} shows the ADM mass $M(r_H)$ and the function $U_H(r_H)$ for several values of $\eta\in[0,\pi/2]$. As one can see, all curves $M(r_H)$ intersect at the GL point, $(r_H,M)=(0.86,0.43)$, where all solutions bifurcate from the bald Schwarzschild solution, \begin{equation} N^2=Q^2=Y^2=q^2=1-\frac{0.86}{r},~~~~U=r, \end{equation} whereas all curves $U_H(r_H)$ pass through the point $(r_H,U_H)=(0.86,0.86)$. Away from the bifurcation point, the g metric still remains Schwarzschild if $\eta=\pi/2$, in which case $M(r_H)$ is a linear function, \begin{equation} \eta=\frac{\pi}{2}:~~~~~N^2=Q^2=1-\frac{r_H}{r}~~~\Rightarrow M=\frac{r_H}{2}, \end{equation} but the f metric for these solutions is not Schwarzschild, even though both metrics have the same mass; \textcolor{black}{as explained above, we call such solutions hairy Schwarzschild}.
For $\eta\neq \pi/2$ the mass depends nonlinearly on $r_H$. \begin{figure} \centering \includegraphics[scale=0.92]{M-r_new.eps} \includegraphics[scale=0.92]{U-r_new.eps} \caption{The mass $M(r_H)$ (left) and the functions $U_H(r_H)$ (right) for the hairy solutions with $ c_3=-c_4=5/2$. \textcolor{black}{The crosses mark the points on the left of which the f metric becomes singular. The hollow circles mark the termination points beyond which the solutions would become complex valued. When $\kappa_1=\cos^2\eta \to 0$, the mass $M(r_H)$ develops a more and more profound minimum, while the values of $M(0)$ and $U_H(0)$ grow without bounds. } } \label{FA} \end{figure} Introducing the mass function $M(r)$ via $N^2(r)=1-2M(r)/r$, Eq.\eqref{e1} assumes the form \begin{equation} \label{ADM1} \textcolor{black}{M^\prime(r)=\kappa_1\,\frac{r^2}{2}\left({\cal P}_0+U^\prime{\cal P}_1\frac{N}{Y}\right) \equiv \kappa_1\,\rho,} \end{equation} from which the ADM mass follows, \begin{equation} \label{ADM2} M=M(\infty)=\frac{r_H}{2}+\kappa_1\int_{r_H}^\infty \rho\, dr\equiv M_{\rm bare}+M_{\rm hair}. \end{equation} Here the ``bare'' mass $M_{\rm bare}=r_H/2$ is determined only by the horizon radius and coincides with the mass of the Schwarzschild solution of radius $r_H$, whereas the mass $M_{\rm hair}$ expressed by the integral is the contribution of the massive hair distributed outside the horizon. As one can see in Fig.~\ref{FA}, one has $M>r_H/2$ if $r_H<0.86$, hence the ``hair mass'' is positive and the hairy solutions are more energetic than the bare Schwarzschild black hole. However, the mass of the hair becomes negative above the GL point, where $r_H>0.86$, and the hairy solutions are then less energetic than the bare one. Therefore the energy density $\rho(r)$ can be negative. In fact, there is no reason why the standard energy conditions should be respected within the bigravity theory.
\textcolor{black}{Each curve in Fig.~\ref{FA} is defined only in a finite interval $r_H\in[0,r_H^{\rm max}(\eta)]$. It is very instructive to understand what happens at the boundaries of this interval. } \subsection{ The lower limit $r_H\to 0$} \textcolor{black}{All the solutions extend down to arbitrarily small values of $r_H$.} Remarkably, as seen in Fig.~\ref{FA}, except for $\eta=\pi/2$ the mass $M$ does not vanish when $r_H\to 0$ but approaches a finite value, even though the bare mass $M_{\rm bare}=r_H/2\to 0$. Therefore, in this limit all of the mass resides in the hair, hence something remains even when the horizon size $r_H$ shrinks to zero. A similar phenomenon is actually well known, since in many nonlinear field theories there are solutions describing a small black hole inside a soliton (for example, inside the magnetic monopole) \cite{Volkov:1998cc}. As the horizon size is sent to zero, the black hole disappears, but its external nonlinear matter fields remain and become a gravitating soliton with a regular origin at its center instead of the horizon. Therefore, the $r_H\to 0$ limit of a hairy black hole may correspond to a regular soliton. One may expect the situation to be similar in our case as well, with a limiting configuration to which the black hole solutions converge pointwise when $r_H\to 0$. Such a limiting configuration indeed exists; however, it seems to be singular and not of the regular soliton type. First, as seen in Fig.~\ref{FA}, the value of $U_H$ which determines the size of the f horizon remains finite when $r_H\to 0$, hence the f geometry remains a black hole even in the limit. Secondly, as seen in Fig.~\ref{lim}, one has $N^2/S^2\sim r$ for $r\leq 0.5$ for a solution with a very small $r_H$. However, one has $S=\sqrt{1-r_H/r}\to 1$ as $r_H\to 0$, hence in this limit $N^2\sim r$ and the limiting form of the g metric is something like a ``zero size black hole''.
The numerical profiles shown in Fig.~\ref{lim} suggest this limiting configuration to have the following structure at small $r$: \begin{equation} N^2\sim Y^2\sim Q^2\sim q^2\sim r,~~~~~~U=U_{\rm min}+{\cal O}(r). \end{equation} The g geometry is singular since its Ricci invariant $R(g)=2/r^2+{\cal O}(1/r)$ at small $r$, but the f geometry remains of the regular black hole type because $U$ does not vanish. Curiously, the temperature remains finite for $r_H\to 0$ and is always the same for both metrics. The limiting g temperature can be formally computed by assuming $N^2=\alpha r$, $Q^2=\beta r$ with $\alpha\approx 0.7$ and $\beta\approx 6$ from Fig.~\ref{lim}. Equation \eqref{TT} then yields $T=\sqrt{\alpha\beta}/(4\pi)\approx 0.163$, which is very close to the value $T=0.16$ for the solution with $r_H\sim 10^{-5}$ shown in Fig.~\ref{lim}. However, these considerations are of course purely formal since the zero size black hole cannot evaporate and further reduce its size, and the standard WKB arguments for the black hole evaporation do not apply because the geometry is singular at the horizon. \begin{figure} \centering \includegraphics[scale=0.82]{lim.eps} \includegraphics[scale=0.82]{tach.eps} \caption{Profiles of the solution with $r_H\sim 10^{-5}$ that is close to the zero size black hole (left), and of that close to the tachyon limit, with $D\sim 10^{-6}$ (right). One has $S^2={1-r_H/r}$. The amplitude ${\cal P}_1$ determines the graviton mass via \eqref{FP-mass-1} and the gravitons behave as tachyons if ${\cal P}_1<0$. } \label{lim} \end{figure} \textcolor{black}{One should say that the f metric can become singular for small $r_H$ because the $q,Y$ amplitudes develop additional zeros outside the horizon. This happens along the parts of the curves on the left of the points marked by the crosses in Fig.\ref{FA}. 
We have already discussed this phenomenon and said that we do not exclude such solutions from consideration because the f geometry is not observable and its singularities are invisible, while the g geometry, which can be directly probed, always remains regular. The physical parameters of the solutions such as the ADM mass also do not show anything special when the $q,Y$ amplitudes start to oscillate. The potential $V$ in the perturbation equation \eqref{eqpert} also remains regular. We therefore have no reason to exclude such solutions from consideration; in fact, they are necessary for the theory to describe black holes within a broad mass spectrum. } \subsection{ The upper ``tachyon'' limit $r_H\to r_H^{\rm max}(\eta)$} In this limit the solutions always remain regular and disappear when roots of the algebraic equation \eqref{qqq} [or \eqref{alg}] merge. As explained above, this equation determines the horizon values of the solutions. Its two roots determine two solution branches, but only the root with $\sigma=+1$ gives rise to asymptotically flat solutions, the other branch showing a singularity of the g metric outside the horizon. When $r_H$ increases, the determinant of \eqref{qqq} decreases and vanishes for some $r_H=r_H^{\rm tach}(\eta)$; it then becomes positive again, decreases again, and vanishes for the second time at $r_H=r_H^{\rm max}(\eta)>r_H^{\rm tach}(\eta)$, after which it becomes negative and the procedure stops. Specifically, it turns out that the determinant of \eqref{qqq} factorizes, \begin{equation} \label{D} {\cal D}\equiv {\cal B}^2-4{\cal AC}={\cal P}^2_1(r_H)\, D~~~~\Rightarrow~~~~\sqrt{\cal D}={\cal P}_1(r_H) \sqrt{D}, \end{equation} where ${\cal P}_1(r_H)$ is defined by \eqref{e5} with ${\bf u}=U/r$ replaced by $u=U_H/r_H$ while $D$ is a complicated function of $r_H,U_H,\eta,c_3,c_4$.
As $r_H$ increases, ${\cal P}_1(r_H)$ crosses zero at some $r_H=r_H^{\rm tach}(\eta)$ while $D$ remains positive, hence the square root $\sqrt{\cal D}$ changes sign. As $r_H$ continues to increase, $D$ approaches zero and vanishes as $r_H\to r_H^{\rm max}(\eta)$. No further increase of $r_H$ is possible since $D$ would then be negative, thus rendering the solutions complex valued. Although the determinant ${\cal D}$ vanishes for $r_H=r_H^{\rm tach}(\eta)$ when ${\cal P}_1(r_H)=0$ and also for $r_H=r_H^{\rm max}(\eta)$ when $D=0$, the two solution branches never merge. Specifically, the two horizon values $\nu_H$ determined by \eqref{alg} merge when ${\cal D}=0$, but a careful inspection reveals that $y_H,U_H$ in \eqref{Up} and \eqref{yh} remain different for the two branches when ${\cal P}_1(r_H)=0$. If $D=0$ then all horizon values $\nu_H,y_H,U_H$ coincide for the two branches, but the derivatives $y_H^\prime$ defined by \eqref{fin} remain different. This is a consequence of the fact that the existence and uniqueness theorem applies only to regular points of the differential equations, whereas the event horizon $r=r_H$ is a singular point. In the interval $r_H^{\rm tach}(\eta)<r_H<r_H^{\rm max}(\eta)$ the solutions show a ``tachyon zone'' near the horizon where the function ${\cal P}_1(r)$ defined by \eqref{e5} is negative, as shown in the right panel in Fig.~\ref{lim}. \textcolor{black}{Let us recall relation \eqref{FP-mass} for the Fierz-Pauli mass of gravitons obtained by linearizing the field equations around the flat background. This relation can be written as ${\rm m^2_{\rm FP}}={\cal P}_1(\infty)\, {\rm m}^2. $ However, the equations can be similarly linearized around an arbitrary background solution, which yields in the spherically symmetric case the position-dependent mass term \cite{Mazuet:2018ysa} \begin{equation} \label{FP-mass-1} {\rm m^2_{\rm FP}}={\cal P}_1(r)\, {\rm m}^2.
\end{equation} Therefore, if ${\cal P}_1(r)<0$ then the mass effectively becomes imaginary. } As a result, solutions for $r_H>r_H^{\rm tach}$ show unphysical features, hence we call $r_H\to r_H^{\rm max}(\eta)$ the ``tachyon limit''. The horizon value $y^\prime_H$ diverges in this limit, but this seems to be an integrable divergence similar to $y^\prime(r)\sim 1/\sqrt{r-r_H}$ and the limiting solution itself stays regular. We were able to approach this solution rather closely, as shown in Fig.~\ref{lim} (right panel), which presents an ``almost limiting'' solution with the horizon value of the determinant $D\sim 10^{-6}$. To recapitulate, hairy solutions exist only for $0<r_H\leq r_H^{\rm max}(\eta)$. \subsection{The ADM mass} \textcolor{black}{It is important that, unless $\kappa_1=\cos^2\eta$ is very small, the ADM mass of all hairy solutions always varies within a finite range and can be neither very large nor very small, as seen in Fig.~\ref{FA}. It seems this fact was not recognized in Ref.~\cite{Brito:2013xaa}, which shows only the ratio $M/r_H$, a quantity that diverges as $r_H\to 0$. However, the mass $M$ itself remains finite for $r_{H}\to 0$. As seen in Fig.~\ref{FA}, the mass actually does not change much when $r_H$ changes and always remains close to the GL value, which is the mass of the Schwarzschild solution with $r_H=0.86$,} \begin{equation} M\sim \frac{0.86}{2}=0.43. \end{equation} This means that the dimensionful mass (restoring for the moment the speed of light $c$ and Newton's constant $G$) \begin{equation} {\rm M}=\frac{c^2\, M}{G\, \rm m} \end{equation} is always close to that of the Schwarzschild black hole of size ${\rm r}_H=0.86/{\rm m}$, which is close to the Compton length of massive gravitons. As a result, one cannot assume the graviton mass ${\rm m}$ to be very small and of the order of the inverse Hubble radius as in \eqref{Hubble}.
Indeed, this would imply the hairy black holes to be as heavy as the Schwarzschild black hole of a cosmological size -- a physically meaningless result. \textcolor{black}{However, assuming instead that $1/{\rm m}=\gamma\times 10^6$~km with $\gamma\in[0,1]$ as in \eqref{m}, which is consistent with the cosmological observations if $\kappa_1$ is parametrically small as expressed by \eqref{m0}, yields a physically acceptable result. The masses of the hairy black holes are then close to the mass of the Schwarzschild black hole of radius $\gamma\times 10^6$~km, that is ${\rm M}\sim 0.3\times 10^6 \,\gamma\times {\rm M}_\odot$. If $\gamma\sim 1$ this gives a value typical for supermassive astrophysical black holes observed in the centers of many galaxies. } \textcolor{black}{If $\kappa_1$ is very small then the mass can deviate considerably from the GL value and can become very small or very large. As seen in Fig.~\ref{FA}, for small $\kappa_1$ the mass $M(r_H)$ shows a minimum: as $r_H$ decreases, the mass first decreases, then reaches a minimal value $M_{\rm min}$, and then increases up to some $M(r_H=0)$. For smaller values of $\kappa_1$ the minimum becomes more and more profound and the value $M_{\rm min}$ approaches zero while $M(r_H=0)$ becomes larger and larger. If $\kappa_1$ is extremely small as in \eqref{m0}, $\kappa_1\leq 10^{-34}$, then the minimum value $M_{\rm min}$ is extremely close to zero. One has then $M(r_H)\approx r_H/2$ for $r_H>(r_H)_{\rm min}$ where $(r_H)_{\rm min}$ is very small, but in the region $r_H<(r_H)_{\rm min}$ the mass grows rapidly when $r_H\to 0$ up to a very large value $M(r_H=0)$.
} \begin{figure} \centering \includegraphics[scale=0.65]{U_pio2.eps} \includegraphics[scale=0.65]{y_pio2.eps} \includegraphics[scale=0.65]{rho_pio2.eps} \includegraphics[scale=0.65]{E_pio2.eps} \caption{\textcolor{black}{The amplitudes $U(r),Y(r)$, the ``hair energy density" $\rho(r)$ and its integral $E(r)$ whose asymptotic value $E(\infty)$ is the ``hair energy" for the hairy Schwarzschild solutions with $\kappa_1=0$ and $r_H\ll 1$.}} \label{pi_2} \end{figure} \textcolor{black}{ To get an approximation for $M(r_H)$ for very small $\kappa_1$, we consider the hairy Schwarzschild solutions with $\kappa_1=0$. Their g metric is Schwarzschild with all the hair contained in the f metric. It turns out that the $Y,U$ amplitudes of the f metric depend very strongly on $r_H$ if the latter is small. As seen in Fig.~\ref{pi_2}, in the horizon vicinity these amplitudes show very large values which apparently grow without bounds when $r_H\to 0$, although one always has $Y(r)\to 1$ and $U(r)\to r$ far away from the horizon. We inject these solutions to \eqref{ADM1} and \eqref{ADM2} to obtain the radial energy density $\rho(r)$ and $E(r)=\int_{r_H}^r \rho\, dr$. They also become very large when $r_H$ decreases, as seen in Fig.~\ref{pi_2}. The asymptotic value $E(\infty)$ is the ``hair energy". As seen in Fig.~\ref{pi_2}, the hair energy is large for small $r_H$, but it does not backreact and the g metric remains Schwarzschild if $\kappa_1=0$. However, the hair energy starts to backreact if $\kappa_1\neq 0$. If $\kappa_1\ll 1$ then one can deduce from Eq.\eqref{ADM2} that \begin{equation} M= \frac{r_H}{2}+\kappa_1 E(\infty)+{\cal O}(\kappa_1^2), \end{equation} where $E(\infty)$ is computed for $\kappa_1=0$. We evaluate numerically $E(\infty)$ for various values of $r_H$ and obtain the following best fit approximation: \begin{equation} \label{MMM} M\approx \frac{r_H}{2}+\kappa_1\,\frac{a}{(r_H)^s}, \end{equation} where $a=0.0056$ and $s=4.61$. 
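The minimum of the fit \eqref{MMM} can be located in closed form: setting $dM/dr_H=0$ gives $(r_H)_{\rm min}=(2as\kappa_1)^{1/(s+1)}$ and $M_{\rm min}=\frac{s+1}{2s}\,(r_H)_{\rm min}$. A short numerical cross-check of these formulas against the values quoted below (our sketch, not part of the original computation, for $\gamma=1$, i.e. $\kappa_1=10^{-34}$):

```python
a, s = 0.0056, 4.61        # best-fit parameters of Eq. (MMM)
kappa1 = 1e-34             # kappa_1 = gamma^2 x 10^-34 with gamma = 1

def M(rH):
    """Dimensionless ADM mass approximation of Eq. (MMM)."""
    return rH / 2 + kappa1 * a / rH**s

# dM/drH = 1/2 - kappa1*a*s/rH^(s+1) = 0 at the minimum:
rH_min = (2 * a * s * kappa1) ** (1 / (s + 1))
M_min = (s + 1) / (2 * s) * rH_min

# Agreement (to a few percent) with the values quoted in the text:
assert abs(rH_min / 5.2e-7 - 1) < 0.03
assert abs(M_min / 3.1e-7 - 1) < 0.03
# The analytic point is indeed a local minimum of M(rH):
assert M(rH_min) < M(0.9 * rH_min) and M(rH_min) < M(1.1 * rH_min)
```

Restoring a general $\gamma$ amounts to rescaling $\kappa_1$, which reproduces the $\gamma^{2/(s+1)}\approx\gamma^{0.35}$ scaling of the minimum.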
Assuming that $\kappa_1=\gamma^2\times 10^{-34}$, this function shows an absolute minimum at \begin{equation} (r_H)_{\rm min}\approx 5.2 \,\gamma^{0.35}\times 10^{-7},~~~~~~~M_{\rm min}\approx 3.1\,\gamma^{0.35}\times 10^{-7}\,, \end{equation} whose dimensionful versions are obtained by multiplying by ${\rm 1/m}=\gamma\times 10^6$~km (restoring again the speed of light and Newton's constant) \begin{eqnarray} \label{min} ({\rm r}_H)_{\rm min}=\frac{(r_H)_{\rm min}}{\rm m} \approx 0.52\,\gamma^{1.35}~{\rm km},~~~~~~~ {\rm M}_{\rm min}=\frac{c^2\, M_{\rm min}}{G\,\rm m} \approx 0.2\,\gamma^{1.35}\times {\rm M}_\odot\,. \end{eqnarray} This determines the minimum mass for the hairy black holes. When $r_H$ decreases further, the mass starts to grow, but only up to a finite, although very large, value as $r_H\to 0$, because the approximation \eqref{MMM} ceases to be valid for arbitrarily small $r_H$.} \subsection{Parameter regions for solutions with $c_3=-c_4=5/2$ } Let us now collect all the facts together. The diagram in Fig.~\ref{FB} shows the region in the $(r_H,\eta)$ plane within which there are hairy black hole solutions. The lower boundary of this region at $\eta=0$ corresponds to solutions whose f metric is Schwarzschild, while the upper boundary at $\eta=\pi/2$ corresponds to solutions whose g metric is Schwarzschild. \textcolor{black}{The left boundary corresponds to the limiting solutions with $r_H=0$: zero-size black holes. The right boundary marks the ``tachyon limit'' beyond which the solutions would become complex-valued. The upper-left corner of the diagram contains solutions with a singular f geometry, but their g geometry, which is physically measurable, is regular. } The diagram also shows lines corresponding to the zero modes, $\omega^2=0$, of the perturbative eigenvalue problem \eqref{eqpert}. The vertical line corresponds to the GL value $r_H=0.86$. 
The eigenvalue $\omega^2$ changes sign when crossing these lines; therefore, the lines separate sectors where $\omega^2>0$ and hence the solutions are stable, from sectors where $\omega^2<0$ and the solutions are unstable. There are altogether two stable and two unstable sectors. It is worth noting that the stability region is now much larger than for solutions with $c_3=-c_4=2$ considered in the previous section. One also notices that the tachyonic solutions are in the unstable sector. \begin{figure} \centering \includegraphics[scale=1.2]{range1.eps} \caption{The parameter region in the $(r_H,\eta)$ plane corresponding to regular hairy black hole solutions with $c_3=-c_4=5/2$. The dashed black $\omega^2=0$ lines separate stable and unstable sectors. \textcolor{black}{The upper left corner contains solutions with a singular f metric; however, their g geometry is regular.}} \label{FB} \end{figure} Finally, the diagram shows the ``physical region'' corresponding to physically acceptable solutions. As explained above, for such solutions the coupling $\kappa_1=\cos^2(\eta)$ should be very small for their mass not to be too large, hence $\eta$ should be very close to $\pi/2$. The solutions should be stable, hence they should correspond to the sector where $\omega^2>0$. These conditions specify the physical region to be the thick (green online) line at the top of the diagram. Physical solutions are therefore described by a g metric which is extremely close to Schwarzschild, since \begin{eqnarray} \label{Enst-eq-1} G_{\mu\nu}({g})&=&\kappa_1\, T_{\mu\nu}(g,f),~~~~~~~~\mbox{where}~~~~\kappa_1~\textcolor{black}{\leq}~ 10^{-34}. \end{eqnarray} The ``hairy features'' of the solutions hidden in the f metric should be difficult to observe, except in violent processes like black hole collisions, which may produce a large enough $T_{\mu\nu}(g,f)$ to overcome the $10^{-34}$ suppression. 
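The origin of the $10^{-34}$ figure is the squared ratio of the electroweak and Planck scales, $\kappa_1=\gamma^2({\rm M}_{\rm ew}/{\rm M}_{\rm Pl})^2$, stated later in the text. A one-line order-of-magnitude check (the numerical scale values are standard but are inserted here only for illustration):

```python
# Order-of-magnitude check of kappa_1 = gamma^2 * (M_ew / M_Pl)^2 for gamma = 1,
# with M_ew ~ 100 GeV (electroweak scale) and M_Pl ~ 1.2e19 GeV (Planck mass).
M_ew = 1.0e2     # GeV, assumed value for the estimate
M_Pl = 1.22e19   # GeV, assumed value for the estimate
kappa1 = (M_ew / M_Pl) ** 2
print(kappa1)    # ~ 7e-35, consistent with the quoted bound of 1e-34
```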
Summarizing, the static bigravity black holes should be extremely similar to the GR black holes, but their strong-field dynamics is expected to be different. \textcolor{black}{As explained above, the physical region contains stable hairy black holes whose masses range from the minimal value $\sim 0.2\,\,\gamma^{1.35}\times {\rm M}_\odot$ up to the maximal value $\sim 0.3\times 10^6\,\gamma\times {\rm M}_\odot$ with $\gamma\in [0,1]$. Yet heavier black holes also exist in the theory but they cannot be hairy and should be described by the ``bald'' Schwarzschild solution \eqref{b2}, which is stable for $r_H>0.86$. Stable black holes with ${\rm M}\leq 0.2\,\gamma^{1.35}\times {\rm M}_\odot$ can only be of the type \eqref{b1}. } \subsection{Parameter regions for dual solutions with $c_3=1/2$, $c_4=3/2$ } Let us now see how the solutions described above look after the duality transformation \eqref{dual}. This transformation converts the parameter values \eqref{ch1} into \eqref{ch2}, flips the sign of $\eta-\pi/4$ and swaps the $Q,N,r$ with $q,Y,U$. Graphically, this amounts to relabelling the functions and plotting them against $U$ instead of $r$. The ADM mass and temperature are invariant under duality. The stability property also does not change since, for example, if a solution is unstable and admits perturbations growing in time, then its dual version will contain the same growing modes and hence will be unstable as well. \begin{figure} \centering \includegraphics[scale=0.92]{M-U_new.eps} \includegraphics[scale=0.92]{r-U_new.eps} \caption{The mass $M(r_H)$ (left) and the functions $U_H(r_H)$ (right) for the hairy black hole solutions with $c_3=1/2$, $c_4=3/2$. \textcolor{black}{ The crosses mark points to the right of which the g metric becomes singular, hence these parts of the curves correspond to unphysical solutions that should be excluded from consideration.}} \label{FAA} \end{figure} Figure \ref{FAA} shows the dual version of Fig.~\ref{FA}. 
The mass curves $M(r_H)$ still intersect at the GL point but they look quite different as compared to those in Fig.~\ref{FA}. In particular, not all of them are single-valued. The reason is that the functions $U_H(r_H)$ in Fig.~\ref{FA} are not always monotone, hence their inverses shown in Fig.~\ref{FAA} are not single-valued. As a result, for each $\eta$ such that $0\leq \cos^2\eta\leq 0.6$ there are two different solutions with the same $r_H$ but with different $U_H$, hence the curves $M(r_H)$ are not always single-valued. The solutions now exist for $r_H\in[r_H^{\rm min}(\eta),r_H^{\rm max}(\eta)]$. The lower limit $r_H^{\rm min}(\eta)$ corresponds to what used to be the upper limit before the duality -- the tachyon solutions with vanishingly small horizon determinant $D$. \textcolor{black}{The upper limit $r_H^{\rm max}(\eta)$ corresponds, for small $\eta$, to solutions whose g metric becomes singular. Before the duality these were solutions whose f metric became singular while their g metric was regular. After the duality their g metric becomes singular, hence such solutions are no longer allowed and should be excluded. } For larger values of $\eta$ the right boundary $r_H^{\rm max}(\eta)$ corresponds to points where the two different solutions with the same $r_H$ but with different $U_H$ merge with each other. The solutions below the GL point, for $r_H<0.86$, are still more energetic than the solution with $\eta=\pi/2$, hence their hair mass $M_{\rm hair}$ is positive, whereas above the GL point it becomes negative. Finally, Fig.~\ref{FC} shows the existence diagram in the $(r_H,\eta)$ plane, together with the stability regions. The diagram now looks quite different as compared to that in Fig.~\ref{FB}, although it corresponds to essentially the same solutions, up to the duality transformation. Although the duality does not change stability, it interchanges the positions of the stability sectors. 
Therefore, the physical region corresponding to stable solutions with $\eta$ close to $\pi/2$ is now above the GL point, where the hair mass is negative. The physical solutions are again characterized by a g metric that is extremely close to Schwarzschild, but the novel feature is that now for each value of $r_H$ from the physical region there are two different solutions whose g metrics are almost the same but the f metrics are different. \textcolor{black}{ As one can see, the physical region in Fig.~\ref{FAA} is rather short and corresponds only to supermassive black holes with $0.86<r_H<r_{H}^{\rm max}$. All black holes of smaller masses are unstable. Therefore, the parameter choice $c_3=1/2$, $c_4=3/2$ is not physically interesting. } \begin{figure} \centering \includegraphics[scale=1.2]{range2.eps} \caption{The parameter region in the $(r_H,\eta)$ plane corresponding to regular hairy black hole solutions with $c_3=1/2$, $c_4=3/2$. The dashed black $\omega^2=0$ lines separate stable and unstable sectors.} \label{FC} \end{figure} \section{CONCLUDING REMARKS \label{CR}} To recapitulate, we presented above a detailed analysis of static and asymptotically flat black holes in the ghost-free massive bigravity theory. Extending the earlier result of \cite{Brito:2013xaa}, we find that for given values of the theory parameters $c_3,c_4,\eta$ and for a given event horizon size varying within a finite range, $r_H\in [r_H^{\rm min}, r_H^{\rm max}]$, there exist one or sometimes two different black holes supporting a nonlinear massive graviton hair, in addition to the ``bald Schwarzschild'' solution with $g_{\mu\nu}=f_{\mu\nu}$ described by \eqref{b2}. The hairy solutions are more energetic than the Schwarzschild one if $r_H<0.86$ and they are less energetic otherwise. When $r_H$ approaches the limiting values $r_H^{\rm min}$ or $r_H^{\rm max}$, the solutions either become complex-valued or merge with each other. 
For some values of $c_3,c_4$ zero-size black holes exist for which $r_H^{\rm min}=0$ but the corresponding $U_H$ remains finite. Depending on the values of $r_H,c_3,c_4,\eta$, the hairy solutions can be either stable or unstable. \textcolor{black}{To avoid the hairy black holes being unphysically heavy, one is bound to assume the massive graviton Compton length to be ${\rm 1/m}= \gamma \times \, 10^6$ km, where the parameter $\gamma$ may take values in the interval $[0,1]$. The agreement with the cosmological data is then achieved by assuming that $\kappa_1=\cos^2\eta=\gamma^2\times ({\rm M}_{\rm ew}/{\rm M}_{\rm Pl})^2=\gamma^2\,\times 10^{-34}$. The stable hairy black holes are described by a g metric which is extremely close to Schwarzschild, but their f metric is quite different. These black holes have the mass and size close to those of ordinary black holes, with the masses ranging from $\sim 0.2\,\gamma^{1.35}\times {\rm M}_\odot$ to $\sim 0.3\times 10^6\,\gamma\times {\rm M}_\odot$, the latter being the value typical for supermassive astrophysical black holes if $\gamma\sim 1$. Yet heavier black holes in the theory should be bald. As a result, if the bigravity theory indeed applies to describe physics, the astrophysical black holes should support the hair hidden in the f metric. } \textcolor{black}{ Since the f metric is not coupled to matter and cannot be directly probed, while the deviation of the ``visible'' g metric from Schwarzschild is suppressed by the factor of $\kappa_1= \gamma^2\times 10^{-34}$, the hairy black holes should normally be indistinguishable from the usual GR black holes. However, in violent processes like black hole coalescences the interaction between the two metrics may produce an energy-momentum tensor strong enough to overcome the $10^{-34}$ suppression in $G_{\mu\nu}({g})=\kappa_1\, T_{\mu\nu}(g,f)$. In this case the deviation from GR should become visible. 
Therefore, it is possible that signals from black hole mergers detected by LIGO/VIRGO \cite{Abbott:2016blz} may carry information about the hairy structure of the black holes. One could expect a ``hair imprint'' in the signal to be stronger for {\it small} black holes, since we know that for small black holes the amplitudes $U,Y$ of the f metric become very large, which should influence the $T_{\mu\nu}(g,f)$ of the merger. It is therefore possible that the hair imprints will be visible when smaller-mass mergers (see \cite{Abbott:2020niy} for a recent review) are detected. However, to actually determine the hair imprint in the signal would require calculations going beyond the scope of the present paper. We therefore leave this problem for a separate project and for the time being simply refer to the recent preprint \cite{Dong} where calculations of this type are performed within the context of the ghost-free massive gravity \cite{deRham:2010kj} [where the only static black holes are those described by \eqref{b1}]. } Finally, we should discuss the paper \cite{Torsello:2017cmz} that also considers black holes in the ghost-free massive bigravity theory. \textcolor{black}{This paper presents essentially the same classification of different types of black holes as the one previously given in \cite{Volkov2012}, but in a more refined way, extending it and paying attention to some subtle points. } The paper addresses in particular the issue of convergence of the solutions to the flat background in the asymptotic region. Among other things, it claims that the Schwarzschild solution is the only asymptotically flat black hole in the theory. \textcolor{black}{At the same time, the paper does not contain a rigorous proof of this statement but only gives a number of plausibility arguments, so that the claim should rather be viewed as a conjecture, as is actually explicitly stated in some places of \cite{Torsello:2017cmz}. } These arguments are as follows. 
First of all, it was emphasized in \cite{Torsello:2017cmz} that the usual practice of starting the numerical integration not at the horizon $r=r_H$, which is a singular point of the differential equations, but at a regular nearby point $r=r_H+\epsilon$, as was done in \cite{Brito:2013xaa}, could in principle lead to numerical instabilities. We agree with this, and it is for this reason that we use the desingularization procedure (described in Appendix \ref{Des} below) which allows us to start the numerical integration exactly at $r=r_H$ \textcolor{black}{(initial conditions exactly at $r=r_H$ were also described in \cite{Torsello:2017cmz})}. The paper \cite{Torsello:2017cmz} also makes another remark concerning the behavior at the horizon. It is known that in order to be able to {\it cross} the horizon, for example when studying geodesics, one cannot use the Schwarzschild coordinates and one should instead introduce coordinates regular at the horizon. These can be, for example, Eddington-Finkelstein (EF) coordinates in which $g_{00}=g_{11}=0$, $g_{01}=g_{10}\neq 0$. \textcolor{black}{It was noticed in \cite{Torsello:2017cmz} that the f metric, when expressed in the same coordinates, generically does not have the same form, since it has $f_{11}\neq 0$, hence the two metrics cannot be simultaneously EF. We understand this, but this does not invalidate the background solutions (Ref.~\cite{Torsello:2017cmz} agrees on this). } The horizon geometries are regular, and if one wishes, one can use the same boundary conditions at the horizon to integrate inside the horizon to recover the interior solutions. Within the parametrization described in Appendix \ref{Des}, this is achieved by simply changing the sign of the numerical integration step. Next, small initial deviations from the Schwarzschild solution, obtained by setting $u=U_H/r_H=1+\epsilon$ at the horizon, were considered in \cite{Torsello:2017cmz}. 
Integrating the equations toward large $r$ then yields metrics whose components diverge as $r\to\infty$ instead of approaching finite values. \textcolor{black}{This observation, made already in \cite{Volkov2012}, shows that there are no regular and asymptotically flat solutions {\it in a small vicinity} of the Schwarzschild solution. However, there can be regular solutions corresponding to $u$ deviating considerably from unity. } \textcolor{black}{Finally, the paper \cite{Torsello:2017cmz} reproduces and analyzes (in Appendix A) one of the asymptotically flat solutions (with a singular f metric) found in \cite{Brito:2013xaa}. It obtains a pathological result, and the reason is the following. Appendix D of \cite{Torsello:2017cmz} describes the numerical method used -- a straightforward integration starting from the horizon with the standard routine of {\it Mathematica}. This adequately produces the solution with a given precision, but only within a finite range of the radial coordinate $r$. If one integrates farther on, trying to approach flat space, then the growing $C e^{+r}$ mode generically present in the solution leads to a rapid accumulation of numerical errors, triggering a numerical instability. Trying to suppress this mode by adjusting the horizon boundary conditions, one typically observes the derivatives of some functions in the solution growing without bound at some finite $r$. Precisely this type of behavior at the end of the integration interval is seen in Fig.~11 in \cite{Torsello:2017cmz}. } \textcolor{black}{ One cannot get asymptotically flat solutions within the numerical scheme adopted in \cite{Torsello:2017cmz}, since pathological features inevitably arise in this way. This must be the reason behind the conviction that such solutions do not exist. However, all pathologies can be eliminated within the more elaborate numerical scheme described above -- by suppressing the growing $C e^{+r}$ mode from the very beginning. 
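The mechanism can be illustrated with a toy linear problem (a sketch, not the actual field equations): the model equation $u''=u$ has exactly one decaying mode $e^{-r}$ and one growing mode $C e^{+r}$, and a tiny admixture $\epsilon$ of the growing mode in the initial data dominates the solution long before the asymptotic region is reached.

```python
# Toy illustration of the growing-mode problem: u'' = u has modes e^{-r} and
# e^{+r}.  Starting from almost-decaying data u=1, u'=-1+eps at r=0, the tiny
# eps is amplified to roughly eps*e^r/2 by the time r is large.
def integrate(eps, r_end=30.0, h=1e-3):
    u, v = 1.0, -1.0 + eps           # exact decaying data would be u' = -u
    rhs = lambda u, v: (v, u)        # first-order system for u'' = u
    r = 0.0
    while r < r_end - 1e-12:         # classic RK4 march
        k1 = rhs(u, v)
        k2 = rhs(u + h/2*k1[0], v + h/2*k1[1])
        k3 = rhs(u + h/2*k2[0], v + h/2*k2[1])
        k4 = rhs(u + h*k3[0], v + h*k3[1])
        u += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        r += h
    return u

u30 = integrate(1e-10)
print(u30)   # ~ 5e2: the growing mode dominates, while the true decaying
             # solution would be e^{-30} ~ 1e-13
```

This is why a shooting-type suppression of the growing mode is needed from the outset rather than a straightforward march toward large $r$.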
} \section*{ACKNOWLEDGEMENTS} \textcolor{black}{We thank Francesco Torsello, Mikica Kocic and Edvard Mörtsell for clarifying discussions and useful remarks. } The work of M.S.V. was partly supported by the French National Center of Scientific Research within the collaborative French-Russian research program, Grant No. 289860, as well as by the Institute of Theoretical and Mathematical Physics at the Moscow University during the visit in early 2020, and also by the Russian Government Program of Competitive Growth of the Kazan Federal University. \section*{APPENDIX A: DESINGULARIZATION AT THE HORIZON \label{Des}} \renewcommand{\theequation}{A.\arabic{equation}} The horizon $r=r_H$ is a singular point of the differential equations -- the derivatives $N^\prime$ and $Y^\prime$ expressed by Eqs.\eqref{eqs} are not defined at this point. The usual practice to handle this difficulty is to use the local power series expansions \eqref{l1} and \eqref{l2} to start the numerical integration not exactly at $r=r_H$ but at a nearby point with $r=r_H+\epsilon$ where $\epsilon$ is a small number. One may then hope that the results will not be very sensitive to the value of $\epsilon$. However, in such an approach $\epsilon$ remains an arbitrary parameter not defined by any prescription. This inevitably affects the stability of the numerical procedure, which becomes evident when one studies the dependence of the solutions on the parameters. At the same time, it is possible to reformulate the problem in such a way that the numerical integration starts exactly at $r=r_H$. Let us make the change of variables \begin{equation} N={S}\,\nu,~~~~~Y={S}\,y~~~~~\mbox{with}~~S=\sqrt{1-\frac{r_H}{r}}. \end{equation} The functions $\nu,y$ and their derivatives are defined also at $r=r_H$. 
Equations \eqref{e1} and \eqref{e2} then yield \begin{equation} \label{desing} \nu^\prime=-\frac{\nu}{2r}+\frac{{\cal C}_1}{2\nu y\, r^2 S^2},~~~~~~~y^\prime=-\frac{y U^\prime}{2U}+\frac{{\cal C}_2}{2\nu y\, r^2 U S^2}, \end{equation} where \begin{eqnarray} \label{CCC} {\cal C}_1&=&(r-r_H\nu^2 -\kappa_1\,r^3 {\cal P}_0)\,y-\kappa_1\,r^3\,{\cal P}_1 U^\prime \,\nu\,, \nonumber \\ {\cal C}_2&=&\nu\,r^2 (1-\kappa_2\,r^2 {\cal P}_2)\,U^\prime -\kappa_2\,r^4\,{\cal P}_1\,y-r_H U\nu\,y^2\,. \end{eqnarray} At the horizon the derivatives $\nu^\prime$ and $y^\prime$ are finite, which requires that \begin{equation} {\cal C}_{1|_{r_H}}=0,~~~~~{\cal C}_{2|_{r_H}}=0, \end{equation} from which one obtains the horizon values \begin{eqnarray} U^\prime_H&=&\frac{(1-\nu^2-\kappa_1\,r^2{\cal P}_0)\,y }{\kappa_1\, r^2 {\cal P}_1\,\nu}_{|_{r_H}}\,, \label{Up} \\ y_H&=&\left.\frac{1+(\kappa_2\,r^2{\cal P}_2-1)\nu^2 +\kappa_1\kappa_2({\cal P}_0{\cal P}_2-{\cal P}_1^2) \,r^4-(\kappa_1{\cal P}_0+\kappa_2{\cal P}_2)\,r^2}{\kappa_1 r{\cal P}_1 U\nu}\right|_{_{r_H}}. \label{yh} \end{eqnarray} At the same time, the horizon value of $U^\prime$ can be obtained from \eqref{eqs}, \begin{eqnarray} \label{Up1} U^\prime_H=\lim_{r\to r_H}{\cal D}_U(r,U,{S}\nu,{S}y)\equiv {\cal D}_{UH}(r_H,U_H,\nu_H,y_H). \end{eqnarray} This value must agree with the one given by \eqref{Up}, which yields a condition on $\nu_H$, and using \eqref{yh}, this condition reduces [if $b_k$ are chosen according to \eqref{bbb}] to a biquadratic equation \begin{equation} \label{alg} {\cal A}\,(\nu_H^2)^2+{\cal B}\,\nu_H^2+{\cal C}=0, \end{equation} where the coefficients ${\cal A}$, ${\cal B}$, ${\cal C}$ are (rather complicated) functions of $r_H,U_H$. As a result, for given $r_H,U_H$ there are two possible horizon values $\nu_H^{(1)}$ and $\nu_H^{(2)}$. Injecting these into \eqref{Up} and \eqref{yh} then determines the horizon values $y_H$ and $U^\prime_H$. 
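This horizon-start strategy can be illustrated on a toy problem where everything is known in closed form (a sketch added here, not the actual system): in the vacuum limit the amplitude $N$ obeys $N'=(1-N^2)/(2Nr)$, solved by the Schwarzschild $N=S$. Setting $N=S\nu$ gives $\nu'=(1-\nu^2)/(2\nu(r-r_H))$, with the regular horizon value $\nu_H=1$ and, by l'H\^opital's rule, $\nu'_H=0$, so the integration can start exactly at $r=r_H$:

```python
import math

# Toy version of the desingularized integration: the vacuum equation
# N' = (1 - N^2)/(2 N r) with N = S*nu, S = sqrt(1 - r_H/r), becomes
# nu' = (1 - nu^2)/(2 nu (r - r_H)), whose regular solution is nu = 1,
# i.e. N = S, the Schwarzschild amplitude.
r_H = 1.0

def nu_prime(r, nu):
    if r == r_H:
        return 0.0                   # l'Hopital limit of the 0/0 expression
    return (1.0 - nu*nu) / (2.0 * nu * (r - r_H))

# classic RK4, started exactly at the horizon with the regular value nu_H = 1
r, nu, h = r_H, 1.0, 1e-3
while r < 10.0 - 1e-12:
    k1 = nu_prime(r, nu)
    k2 = nu_prime(r + h/2, nu + h/2*k1)
    k3 = nu_prime(r + h/2, nu + h/2*k2)
    k4 = nu_prime(r + h, nu + h*k3)
    nu += h/6*(k1 + 2*k2 + 2*k3 + k4)
    r += h

N = math.sqrt(1.0 - r_H/r) * nu      # recovers the Schwarzschild N(r)
print(nu, N)
```

No arbitrary offset $\epsilon$ appears anywhere; the first RK4 stage uses the l'H\^opital limit and all later stages evaluate the regular formula.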
Finally, the horizon values of $\nu^\prime$ and $y^\prime$ are obtained from \eqref{desing} by taking the $S\to 0$ limit and using l'H\^opital's rule, which yields \begin{equation} \label{des} \nu^\prime_H=-\frac{\nu_H}{2r_H}+\frac{{\cal C}_{1|_{r_H}}^\prime }{2r_H\nu_H y_H },~~~~~~~ y^\prime_H=-\frac{y_H U^\prime_H}{2U_H}+\frac{{\cal C}^\prime_{2|_{r_H}}}{2r_H \nu_H y_H U_H }. \end{equation} It remains to compute the derivatives here. One has, for example, \begin{equation} {\cal C}_{1|_{r_H}}^\prime=\left.\left(\frac{\partial}{\partial r} +\nu^\prime_H\frac{\partial}{\partial \nu} +y_H^\prime\frac{\partial}{\partial y} + U'_H \frac{\partial}{\partial U} +U^{\prime\prime}_H\frac{\partial}{\partial U^\prime}\right){{\cal C}_1}(r,U,\nu,y,U^\prime)\right|_{r=r_H,U=U_H,\nu=\nu_H,y=y_H} \end{equation} where the second derivative is similarly obtained from \eqref{Up1}, \begin{equation} U^{\prime\prime}_H=\left.\left(\frac{\partial}{\partial r}+\nu^\prime_H\frac{\partial}{\partial \nu} +y_H^\prime\frac{\partial}{\partial y} + U'_H \frac{\partial}{\partial U} \right){\cal D}_U(r,U,{S}\nu,{S} y)\right|_{r=r_H,U=U_H,\nu=\nu_H,y=y_H}, \end{equation} with similar expressions for ${\cal C}_{2|_{r_H}}^\prime$. Injecting this into \eqref{des} yields relations {\it linear} in $\nu_H^\prime$ and $y_H^\prime$, which can be resolved to give (we do not show explicit formulas in view of their complexity) \begin{equation} \label{fin} \nu_H^\prime=\nu_H^\prime(r_H,U_H,\nu_H,y_H),~~~~~y_H^\prime=y_H^\prime(r_H,U_H,\nu_H,y_H). 
\end{equation} Summarizing the above discussion, the equations in the desingularized form read \begin{eqnarray} \label{desint} \nu^\prime&=&-\frac{\nu}{2r}+\frac{{\cal C}_1}{2\nu y r^2 S^2}\equiv {\cal F}_\nu(r,U,\nu,y), \nonumber \\ y^\prime&=&-\frac{y U^\prime}{2U}+\frac{{\cal C}_2}{2\nu y r^2 U S^2}\equiv {\cal F}_y(r,U,\nu,y),\nonumber \\ U^\prime&=&{\cal D}_U(r,U,{S}\nu,{S}y)\equiv {\cal F}_U(r,U,\nu,y), \end{eqnarray} where ${\cal C}_1$ and ${\cal C}_2$ are defined by \eqref{CCC} while ${\cal D}_U$ is the same as in \eqref{eqs}. These equations apply for $r>r_H$, while at $r=r_H$ they should be replaced by \begin{eqnarray} \nu^\prime&=&\nu_H^\prime(r_H,U_H,\nu_H,y_H),~\nonumber \\ y^\prime&=&y_H^\prime(r_H,U_H,\nu_H,y_H), \nonumber \\ U^\prime&=&U_H^\prime(r_H,U_H,\nu_H,y_H), \end{eqnarray} where $\nu^\prime_H$, $y^\prime_H$, $U^\prime_H$ are defined by Eqs.\eqref{Up} and \eqref{fin}. The horizon values $r_H$ and $U_H\equiv ur_H$ can be arbitrary, while $\nu_H$ is not arbitrary but must fulfil the algebraic equation \eqref{alg}, whereas $y_H$ is determined by \eqref{yh}. This formulation allows one to start the integration exactly at the horizon $r=r_H$ and then continue to the $r>r_H$ region. \section*{APPENDIX B: FIELD EQUATIONS WITH TIME DEPENDENCE \label{time}} \renewcommand{\theequation}{B.\arabic{equation}} \label{completefieldeq} Let us allow both metrics to depend on time, assuming that they are still spherically symmetric. The gauge freedom of reparametrizations of the $t,r$ coordinates can be used to make the g metric diagonal, but the f metric will in general contain an off-diagonal term. 
The two metrics can be written as \cite{Volkov:2011an} \begin{align*} ds^2_g&=-Q^2 dt^2+\frac{dr^2}{\Delta^2}+R^2 d\Omega^2,\\ \label{anz} ds^2_f&=-\big(q^2-\alpha^2Q^2\Delta^2\big)dt^2-2\alpha\bigg(q+\frac{Q\Delta}{W}\bigg)dtdr+\bigg(\frac{1}{W^2}-\alpha^2\bigg)dr^2+U^2 d\Omega^2,\numberthis{} \end{align*} where $d\Omega^2=d\theta^2+\sin^2\theta\;d\phi^2$ and $Q$, $q$, $\Delta$, $W$, $\alpha$, $U$, $R$ are functions of $r$ and $t$. One can check that the tensor \begin{equation} \tensor{\gamma}{^\mu_\nu}=\begin{pmatrix} q/Q & \alpha/Q & 0 & 0\\ -\alpha Q \Delta^2 & \Delta/W & 0 & 0\\ 0 & 0 & U/R & 0\\ 0 & 0 & 0 & U/R \end{pmatrix} \end{equation} has the property $\gamma^\mu_{~\sigma}\gamma^\sigma_{~\nu}=g^{\mu\sigma}f_{\sigma\nu}$. This tensor is used to compute the energy-momentum tensors $T^\mu_{~\nu}$ and ${\cal T}^\mu_{~\nu}$ in \eqref{T}. One can redefine the two amplitudes similarly to \eqref{NY} \begin{equation} \label{NYa} N=\Delta R^\prime\,,~~~~Y=WU^\prime\,, \end{equation} where the prime denotes the derivative with respect to $r$, and one can impose the gauge condition \begin{equation} R=r. \end{equation} As a result, the independent field equations \eqref{Enst-eq} become \begin{eqnarray} \label{Ein00} G^0_0(g)&=&\kappa_1\, T^{0}_{~0}, ~~~~ G^1_1(g)=\kappa_1\, T^{1}_{~1}, ~~~~ G^0_1(g)=\kappa_1\, T^{0}_{~1}, ~~~~ \nonumber \\ G^0_0(f)&=&\kappa_2\, {\cal T}^{0}_{~0},~~~~ G^1_1(f)=\kappa_2\, {\cal T}^{1}_{~1},~~~~~ G^0_1(f)=\kappa_2\, {\cal T}^{0}_{~1}, \end{eqnarray} plus two nontrivial components of the conservation condition $\stackrel{(g)}{\nabla}_\mu T^\mu_{~\nu}=0\,$, \begin{equation} \label{cons00} \stackrel{(g)}{\nabla}_\mu T^\mu_{~0}=0\,,~~~~~~~~\stackrel{(g)}{\nabla}_\mu T^\mu_{~1}=0. 
\end{equation} Here one has explicitly \begin{align*} G(g)^0_0=\frac{N^2-1}{r^2}+\frac{2NN'}{r},~~~~~~~ G^1_1(g)=\frac{N^2-1}{r^2}+\frac{2N^2Q'}{rQ},~~~~~~~~~ G^0_1(g)=\frac{2\dot{N}}{rNQ^2},~~ \numberthis \end{align*} where the dot denotes the partial derivative with respect to $t$, while \begin{align} T^0_{~0}=-\mathcal{P}_0-\mathcal{P}_1\frac{NU'}{Y},~~~~~~~~ T^1_{~1}=-\mathcal{P}_0-\mathcal{P}_1\frac{q}{Q},~~~~~~~~ T^0_{~1}=\mathcal{P}_1\frac{\alpha}{Q},~ \end{align} where $\mathcal{P}_m$ are defined in \eqref{e5}. The components of the second stress-energy tensor are \begin{align*} {\cal T}^0_{~0}=&-\frac{r^2}{N U^2\mathcal{A}}\bigg(\mathcal{P}_1 q Y+\mathcal{P}_2\big(\alpha^2 N^2 QY+qNU'\big)\bigg),\nonumber \\ {\cal T}^1_{~1}=&-\frac{r^2}{U^2\mathcal{A}}\bigg(\mathcal{P}_1 QU'+\mathcal{P}_2\big(\alpha^2NQY+qU'\big)\bigg),\nonumber\\ {\cal T}^0_{~1}=&-\frac{r^2}{N U^2\mathcal{A}}\mathcal{P}_1 Y\alpha, \numberthis \end{align*} where $\mathcal{A}=N Q Y \alpha^2+qU'$. The components of the Einstein tensor for $f_{\mu\nu}$, are complicated: \begin{align*} \tensor{G(f)}{^0_0}=&-\frac{1}{U^2 Y \mathcal{A}^3}\bigg(N^3 Q^3 Y^4 \alpha^6+\big(-N Q \dot{U}^2 Y^4+N^3 Q^3 U'^2 Y^4+2 N^3 Q^3 U U''Y^4\\ &+3 N^2 q Q^2 U' Y^3\big) \alpha^4+\big(-2 N Q U \dot{U} \dot{\alpha} Y^4-2 q Q U \dot{U} N' Y^4+2 N Q U \dot{U} q' Y^4\\ &-2 N q U \dot{U} Q' Y^4+2 N q Q \dot{U} U' Y^4-2 N^3 Q^3 U U' \alpha ' Y^4+2 N q Q U \dot{U}' Y^4\\ &+2 N^2 Q^2 \dot{U} U'^2 Y^3+2 N^2 Q^2 U U' \dot{U}' Y^3+2 N^2 Q^2 U \dot{U} U'' Y^3-2 N^2 Q^2 U \dot{U} U' Y' Y^2\big) \alpha^3\\ &+\big(-N q^2 Q U'^2 Y^4+2 q^2 Q U N' U' Y^4-2 N q Q U q' U' Y^4+2 N q^2 U Q' U' Y^4\\ &-2 N q Q U \dot{U} \alpha ' Y^4-2 N q^2 Q U U'' Y^4+N^2 q Q^2 U'^3 Y^3+2 N q Q^2 U N' U'^2 Y^3\\ &-2 N^2 Q^2 U q' U'^2 Y^3+2 N^2 q Q U Q' U'^2 Y^3-q \dot{U}^2 U' Y^3-2 N^2 Q^2 U \dot{U} U' \alpha ' Y^3\\ &+N Q \dot{U}^2 U'^2 Y^2+3 N q^2 Q U'^2 Y^2+2 N^2 q Q^2 U U'^2 Y' Y^2+2 N Q U \dot{U} U' \dot{U}' Y^2\\ &-2 N Q U \dot{U} \dot{Y} U'^2 
Y\big) \alpha^2+\big(4 N q^2 Q U U' \alpha ' Y^4+2 q^2 \dot{U} U'^2 Y^3-2 q U \dot{U} \dot{\alpha} U' Y^3\\ &+2 N^2 q Q^2 U U'^2 \alpha ' Y^3+2 q^2 U U' \dot{U}' Y^3-2 q^2 U \dot{U} U'' Y^3+2 N q Q \dot{U} U'^3 Y^2\\ &+2 q Q U \dot{U} N' U'^2 Y^2-2 N Q U \dot{U} q' U'^2 Y^2+2 N q U \dot{U} Q' U'^2 Y^2+2 q^2 U \dot{U} U' Y' Y^2\\ &+2 N q Q U U'^2 \dot{U}' Y^2\big) \alpha-q^3 Y^3 U'^3+q Y \dot{U}^2 U'^3+q^3 Y U'^3-2 q U \dot{U} \dot{Y} U'^3\\ &-2 q^3 U Y^2 U'^2 Y'+2 N q Q U Y^2 \dot{U} U'^2 \alpha '+2 q^2 U Y^3 \dot{U} U' \alpha '+2 q U Y \dot{U} U'^2 \dot{U}'\bigg), \end{align*} \begin{align*} \tensor{G(f)}{^0_1}=&-\frac{2}{U Y\mathcal{A}^3}\bigg(\big(-Q \dot{U} N' Y^4-N \dot{U} Q' Y^4+N Q \dot{U}' Y^4\big) \alpha^4+\big(-N Q\dot{\alpha} U' Y^4\\ &+q Q N' U' Y^4+N q Q' U' Y^4-N Q \dot{U} \alpha ' Y^4-N q Q U'' Y^4+N Q^2 N' U'^2 Y^3\\ &+N^2 Q Q' U'^2 Y^3-N^2 Q^2 U' U'' Y^3\big) \alpha^3+\big(2 N q Q U' \alpha ' Y^4-\dot{U} q' U' Y^3+q U' \dot{U}' Y^3\\ &+2 N^2 Q^2 U'^2 \alpha ' Y^3-q \dot{U} U'' Y^3+Q \dot{U} N' U'^2 Y^2+N \dot{U} Q' U'^2 Y^2+q \dot{U} U' Y' Y^2\\ &-N Q \dot{U} U' U'' Y^2-N Q \dot{Y} U'^3 Y+N Q \dot{U} U'^2 Y' Y\big) \alpha^2+\big(-q \dot{\alpha} U'^2 Y^3+q q' U'^2 Y^3\\ &+q \dot{U} U' \alpha ' Y^3+N Q q' U'^3 Y^2-q^2 U'^2 Y' Y^2+2 N Q \dot{U} U'^2 \alpha ' Y^2-N q Q U'^3 Y' Y\big) \alpha\\ &-q \dot{Y} U'^4+Y \dot{U} q' U'^3\bigg), \end{align*} \begin{align*} \tensor{G(f)}{^1_1}=&-\frac{1}{U^2\mathcal{A}^3}\bigg(N^3 Q^3 Y^3 \alpha^6+\big(-N Q \dot{U}^2 Y^3+N^3 Q^3 U'^2 Y^3+2 Q U \dot{N} \dot{U} Y^3+2 N U \dot{Q} \dot{U} Y^3\\ &-2 N Q U \ddot{U} Y^3+2 N^2 Q^3 U N' U' Y^3+2 N^3 Q^2 U Q' U' Y^3+3 N^2 q Q^2 U' Y^2\big) \alpha^4\\ &+\big(2 N Q U \dot{U} \dot{\alpha} Y^3-2 q Q U \dot{N} U' Y^3+2 N Q U \dot{q} U' Y^3-2 N q U \dot{Q} U' Y^3\\ &+2 N q Q \dot{U} U' Y^3+2 N^3 Q^3 U U' \alpha ' Y^3+2 N q Q U \dot{U}' Y^3+2 N^2 Q^2 \dot{U} U'^2 Y^2\\ &+4 N^2 Q^2 U U' \dot{U}' Y^2-2 N^2 Q^2 U \dot{Y} U'^2 Y\big) \alpha^3+\big(-N q^2 Q U'^2 Y^3-2 N q Q U 
\dot{\alpha} U' Y^3\\ &-2 N q Q U q' U' Y^3+N^2 q Q^2 U'^3 Y^2-2 N^2 Q^2 U \dot{\alpha} U'^2 Y^2+2 N q Q^2 U N' U'^2 Y^2\\ &+2 N^2 q Q U Q' U'^2 Y^2-q \dot{U}^2 U' Y^2+2 U \dot{q} \dot{U} U' Y^2-2 q U \ddot{U} U' Y^2+2 q U \dot{U} \dot{U}' Y^2\\ &+N Q \dot{U}^2 U'^2 Y+3 N q^2 Q U'^2 Y-2 Q U \dot{N} \dot{U} U'^2 Y-2 N U \dot{Q} \dot{U} U'^2 Y+2 N Q U \ddot{U} U'^2 Y\\ &-2 q U \dot{U} \dot{Y} U' Y+2 N Q U \dot{U} U' \dot{U}' Y-2 N Q U \dot{U} \dot{Y} U'^2\big) \alpha^2+\big(2 q Q U Y \dot{N} U'^3\\ &-2 N Q U Y \dot{q} U'^3+2 N q U Y \dot{Q} U'^3+2 N q Q Y \dot{U} U'^3+2 q^2 Y^2 \dot{U} U'^2+2 q^2 U Y \dot{Y} U'^2\\ &-4 N Q U Y \dot{U} \dot{\alpha} U'^2+2 N^2 q Q^2 U Y^2 \alpha ' U'^2+2 N q Q U Y \dot{U}' U'^2-2 q U Y^2 \dot{U} \dot{\alpha} U'\big) \alpha\\ &+q^3 U'^3-q^3 Y^2 U'^3+q \dot{U}^2 U'^3-2 U \dot{q} \dot{U} U'^3+2 N q Q U Y \dot{\alpha} U'^3+2 q U \ddot{U} U'^3\\ &+2 q^2 U Y^2 \dot{\alpha} U'^2-2 q^2 U Y^2 q' U'^2\bigg). \numberthis \end{align*} Finally, there are two nontrivial components of the conservation law, \begin{align*} \stackrel{(g)}{\nabla}_\mu T^\mu_{~0}=&-\mathcal{P}_1\bigg(\alpha N'NQ+2\alpha N^2Q+\alpha'N^2Q+\frac{q\dot{N}}{NQ}+\frac{N\dot{U}'}{Y}-\frac{NU'\dot{Y}}{Y^2}\bigg)\\ &-\frac{d\mathcal{P}_0}{r}\Big(\alpha N^2Q+\dot{U}\Big)-\frac{d\mathcal{P}_1}{r}\bigg(\alpha N^2QU'+\frac{N\dot{U}U'}{Y}\bigg), \\ \stackrel{(g)}{\nabla}_\mu T^\mu_{~1}=&\mathcal{P}_1\bigg(\frac{\dot{\alpha}}{Q}-\frac{\alpha\dot{N}}{NQ}-\frac{q'}{Q}+\frac{NQ'U'}{QY}\bigg)+\frac{d\mathcal{P}_1}{r}\bigg(\alpha^2N^2+\frac{\alpha\dot{U}}{Q}+\frac{qNU'}{QY}-\frac{qU'}{Q}\bigg)\\ &+\frac{d\mathcal{P}_0}{r}\bigg(\frac{NU'}{Y}-U'\bigg),\numberthis \end{align*} where $d\mathcal{P}_m$ are defined in \eqref{e5a}. Equations \eqref{Ein00}, \eqref{cons00} comprise a system of 8 equations for 6 functions $Q$, $q$, $\Delta$, $W$, $\alpha$, $U$. For this system not to be overdetermined, only 6 equations out of 8 should be independent. As shown in Sec. 
\ref{pert}, this indeed happens at least for small $\alpha$, when the perturbative analysis of the equations shows that some of them coincide.
{ "redpajama_set_name": "RedPajamaArXiv" }
1,623
Q: Weird output when printing loop variable Why does this output this weird thing? And what do the parentheses around (function(i) { onImageLoad(i); }) do? It seems like they "materialize" the function but I would like to know the real term. function startGame() { for (var i = 0; i < assets.length; i++) { frames.push(new Image()); // correct //frames[i].onload = (function(i) { onImageLoad(i); })(i); // wrong frames[i].onload = function(i) { onImageLoad(i); }; frames[i].src = assets[i]; } setInterval(animate, frameRate); } onImageLoad = function(n) { console.log("image number", n, "loaded"); } image number Event {clipboardData: undefined, cancelBubble: false, returnValue: true, srcElement: img, defaultPrevented: false…} loaded
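A runnable sketch of the difference (not from the original post; `makePlainHandler`, `makeIifeHandler` and the fake event are illustrative names, and `onImageLoad` returns a string instead of logging so the behaviour is easy to check). The browser invokes an `onload` handler with an `Event` as its first argument, so a handler parameter named `i` receives that Event rather than the loop index; an immediately-invoked function expression (IIFE) that *returns* the real handler captures the index instead:

```javascript
function onImageLoad(n) { return "image number " + n + " loaded"; }

// Plain assignment: the handler's parameter `i` shadows the loop variable,
// and the caller (the browser) passes an Event object as that argument.
function makePlainHandler() {
  return function (i) { return onImageLoad(i); };
}

// IIFE: the wrapper runs immediately with the current loop value and must
// *return* the real handler, which closes over that value.
function makeIifeHandler(i) {
  return (function (captured) {
    return function () { return onImageLoad(captured); };
  })(i);
}

// Stand-in for the Event the browser would pass to onload.
var fakeEvent = { toString: function () { return "Event {...}"; } };
console.log(makePlainHandler()(fakeEvent)); // "image number Event {...} loaded"
console.log(makeIifeHandler(3)(fakeEvent)); // "image number 3 loaded"
```

The parentheses turn the function declaration into an expression that can be called on the spot; "immediately-invoked function expression" is the usual term for the pattern.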
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,978
// // PhotoDetailViewController.m // UICategories // // Created by xiekw on 10/22/14. // Copyright (c) 2014 xiekw. All rights reserved. // #import "PhotoDetailViewController.h" #import "DXPhoto.h" #import "TransitionCollectionViewController.h" #import "DXNavPopPhotoTransition.h" @interface PhotoDetailViewController ()<UINavigationControllerDelegate> @property (nonatomic, strong) DXNavPopPhotoTransition *popTransitation; @end @implementation PhotoDetailViewController - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; self.navigationController.delegate = self; self.popTransitation = [DXNavPopPhotoTransition new]; [self.popTransitation attachGesturePopToNavigationViewController:self]; //make some custom operation to the self.imageview } - (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; [self.popTransitation detachGesturePop]; } - (id<UIViewControllerAnimatedTransitioning>)navigationController:(UINavigationController *)navigationController animationControllerForOperation:(UINavigationControllerOperation)operation fromViewController:(UIViewController *)fromVC toViewController:(UIViewController *)toVC { if (operation == UINavigationControllerOperationPop) { return self.popTransitation; } return nil; } - (id<UIViewControllerInteractiveTransitioning>)navigationController:(UINavigationController *)navigationController interactionControllerForAnimationController:(id<UIViewControllerAnimatedTransitioning>)animationController { if (animationController == self.popTransitation) { return self.popTransitation.pctInteractive; } return nil; } - (void)viewDidLoad { [super viewDidLoad]; self.view.backgroundColor = [UIColor whiteColor]; self.title = @"photo"; if (self.presentingViewController) { self.navigationItem.rightBarButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"Done" style:UIBarButtonItemStyleBordered target:self action:@selector(dismiss)]; } NSLog(@"self.navigationViewController is %@ and delegate is %@", 
self.navigationController, self.navigationController.delegate); } - (void)dismiss { [self.presentingViewController dismissViewControllerAnimated:YES completion:nil]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } /* #pragma mark - Navigation // In a storyboard-based application, you will often want to do a little preparation before navigation - (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender { // Get the new view controller using [segue destinationViewController]. // Pass the selected object to the new view controller. } */ @end
{ "redpajama_set_name": "RedPajamaGithub" }
4,407
{"url":"https:\/\/www.sparrho.com\/item\/transient-x-ray-sources-in-the-magellanic-type-galaxy-ngc-4449\/b3eafa\/","text":"# Transient X-ray Sources in the Magellanic-type Galaxy NGC 4449\n\nResearch paper by V. Jithesh, Zhongxiang Wang\n\nIndexed on: 15 Dec '16Published on: 15 Dec '16Published in: arXiv - Astrophysics - High Energy Astrophysical Phenomena\n\n#### Abstract\n\nWe report the identification of seven transient X-ray sources in the nearby Magellanic-type galaxy NGC 4449 using the archival multi-epoch X-ray observations conducted with {\\it Chandra}, {\\it XMM-Newton} and {\\it Swift} telescopes over year 2001--2013. Among them, two sources are classified as supersoft X-ray sources (SSSs) because of their soft X-ray color and rest of the sources are X-ray binaries (XRBs). Transient SSSs spectra can be fitted with a blackbody of effective temperature $\\sim 80-105$ eV and luminosities were $\\simeq 10^{37} - 10^{38} {\\rm~erg\\ s}^{-1}$ in 0.3--8 keV. These properties are consistent with the widely accepted model for SSSs, an accreting white dwarf with the steady nuclear burning on its surface, while the SSS emission has also been observed in many post-nova systems. Detailed analysis of one sufficiently bright SSS revealed the strong short-term variability, possibly showing a 2.3 hour periodic modulation, and long-term variability, detectable over 23 years with different X-ray telescopes before year 2003. The X-ray properties of four other transients are consistent with neutron star or black hole binaries in their hard state, while the remaining source is most likely an XRB with a quasi-soft X-ray spectrum. Analysis of archival {\\it Hubble Space Telescope} image data was also conducted, and multiple massive stars were found as possible counterparts. 
We conclude that the X-ray transient properties in NGC 4449 are similar to those in other Magellanic-type galaxies.","date":"2021-04-21 11:49:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8583412170410156, \"perplexity\": 4492.46944736826}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618039536858.83\/warc\/CC-MAIN-20210421100029-20210421130029-00142.warc.gz\"}"}
null
null
<?php namespace Symfony\Component\Locale\Stub\DateFormat; use Symfony\Component\Intl\DateFormatter\DateFormat\HourTransformer as BaseHourTransformer; /** * Alias of {@link \Symfony\Component\Intl\DateFormatter\DateFormat\HourTransformer}. * * @author Bernhard Schussek <bschussek@gmail.com> * * @deprecated Deprecated since version 2.3, to be removed in 3.0. Use * {@link \Symfony\Component\Intl\DateFormatter\DateFormat\HourTransformer} * instead. */ abstract class HourTransformer extends BaseHourTransformer { }
{ "redpajama_set_name": "RedPajamaGithub" }
3,044
Q: solve two term equation with different fractional exponents Suppose: $$a = bw^f + cw^g $$ where $a,b$ and $c$ are known, and $f$ and $g$ are known fractional exponents Ex. $50000 = 200w^{0.72} + 4000w^{0.19}$ How can one solve for the value of w? A: Except in very particular cases, equations such as $$f(w)=b w^f+c w^g-a$$ do not have analytical solutions and numerical methods should be used. One of the simplest root-finding methods is Newton's, which, starting from a reasonable guess $w_0$, will update it according to $$w_{n+1}=w_n-\frac{f(w_n)}{f'(w_n)}$$ For your example, a quick look at the graph of the function shows that (let me be very lazy) there is a root between $1000$ and $2000$. So, let us start Newton with $w_0=1000$; the method then generates the following iterates: $1263.55$, $1275.06$, $1275.08$, which is the solution to six significant figures.
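A short script reproducing the iteration for the example equation (the helper names are mine, not part of the answer); starting from the same guess $w_0=1000$ it lands on the same root:

```javascript
// f(w) = 200 w^0.72 + 4000 w^0.19 - 50000 and its derivative.
function f(w)  { return 200 * Math.pow(w, 0.72) + 4000 * Math.pow(w, 0.19) - 50000; }
function fp(w) { return 200 * 0.72 * Math.pow(w, -0.28) + 4000 * 0.19 * Math.pow(w, -0.81); }

// Newton iteration: w_{n+1} = w_n - f(w_n)/f'(w_n).
function newton(f, fp, w0, tol, maxIter) {
  var w = w0;
  for (var n = 0; n < maxIter; n++) {
    var step = f(w) / fp(w);
    w -= step;
    if (Math.abs(step) < tol) break;
  }
  return w;
}

var root = newton(f, fp, 1000, 1e-9, 50);
console.log(root.toFixed(2)); // 1275.08, the answer's six-figure root
```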
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,671
> Start, stop, test, manage concurrent Django servers. ## Getting Started This plugin requires Grunt `~0.4.5` If you haven't used [Grunt](http://gruntjs.com/) before, be sure to check out the [Getting Started](http://gruntjs.com/getting-started) guide, as it explains how to create a [Gruntfile](http://gruntjs.com/sample-gruntfile) as well as install and use Grunt plugins. Once you're familiar with that process, you may install this plugin with this command: ```shell npm install grunt-control-django --save-dev ``` Once the plugin has been installed, it may be enabled inside your Gruntfile with this line of JavaScript: ```js grunt.loadNpmTasks('grunt-control-django'); ``` ## The "control_django" task ### Overview In your project's Gruntfile, add a section named `control_django` to the data object passed into `grunt.initConfig()`. ```js grunt.initConfig({ control_django: { dev_server_up: { options: { host: '127.0.0.1', port: 8000, // Starts the server always_restart: true, }, }, test_server_up: { options: { logfile: './django.log', host: host, port: testport, always_restart: true, }, }, test_server_down: { options: { host: host, port: testport, // Kills the server always_kill: true, }, }, }, }); ``` ### Options #### options.host Type: `String` Default value: `undefined` This is the host IP from which you will serve Django, for example: `127.0.0.1`. #### options.port Type: `number` Default value: `undefined` This is the port you are serving your Django server from, e.g. `8000`. #### options.logfile Type: `filename` Default value: `undefined` The logfile that will capture server output (Django outputs to stderr). Relative to the root of your Gruntfile. ### Usage Examples #### Default Options In this example, I want to run a Django test server for my end-to-end tests on port `8001`, without bringing down my default Django dev server on port `8000`. The code below shows how to start and stop the server from the Grunt config.
```js grunt.initConfig({ control_django: { test_server_up: { options: { host: '127.0.0.1', port: '8001', always_restart: true, }, }, test_server_down: { options: { host: '127.0.0.1', port: '8001', always_kill: true, }, }, } }); grunt.registerTask('django-start', 'control_django:test_server_up'); grunt.registerTask('django-stop', 'control_django:test_server_down'); ``` ## Contributing In lieu of a formal styleguide, take care to maintain the existing coding style. Add unit tests for any new or changed functionality. Lint and test your code using [Grunt](http://gruntjs.com/). ## Release History _(Nothing yet)_
{ "redpajama_set_name": "RedPajamaGithub" }
5,552
Q: Using Dirichlet's theorem to show existence of number coprime to $n$ I have the following question: Let $n$ be a positive integer and $d$ be a divisor of $n$. Use Dirichlet's theorem to show that there exists an integer $k$, where $1\le k\le d-1$, such that the number $m:=1+\frac{nk}{d}$ is coprime to $n$. My idea is to show that if we can find such an $m$ that is prime, then $m$ is necessarily coprime to $n$. Let us suppose on the contrary that for all $k=1,\cdots, d-1$, the number $m$ is not prime. Then (perhaps?) this will show that there exist finitely many numbers of the form $1+\frac{nk}{d}$ that are not prime, contradicting Dirichlet's theorem. I am stuck here. Any ideas how to proceed? A: Using a sledgehammer to kill a flea... Hint: by Dirichlet's theorem, there is some $k$ (not necessarily $\le d-1$) such that $p = 1 + nk/d$ is prime. Note that if $k' \equiv k\; (\bmod d)$, $1 + n k'/d \equiv p\; (\bmod n)$.
Another option is that I believe just simply requiring $d \gt 2$ is sufficient for your statement to then always be true. However, there would then still remain the issue of Dirichlet's Theorem not necessarily guaranteeing (as far as I know) any primes apart from those where $k \equiv 0 \pmod d$. Please check your source to see if you made a mistake in your question text.
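A quick numerical check of the second answer's counterexample (the function names are mine, not from the posts): with $n=6,\ d=2$ no admissible $k$ exists, while for, say, $n=12,\ d=4$ the search succeeds.

```javascript
// Euclidean algorithm for the greatest common divisor.
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

// Search 1 <= k <= d-1 for m = 1 + (n/d)k coprime to n; null if none exists.
function coprimeK(n, d) {
  for (var k = 1; k < d; k++) {
    var m = 1 + (n / d) * k;
    if (gcd(m, n) === 1) return { k: k, m: m };
  }
  return null;
}

console.log(coprimeK(6, 2));  // null: the counterexample (k = 1 gives m = 4)
console.log(coprimeK(12, 4)); // { k: 2, m: 7 }
```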
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,997
Q: Qt check for valid URL I am trying to create a Qt application which checks if a URL entered by the user into a text edit is valid. This is what I have so far but it only ever says the URL entered is valid, even when I enter one which is not. bool checkUrl(const QUrl &url) { if (!url.isValid()) { //qDebug(QString("Invalid URL: %1").arg(url.toString())); return false; } return true; } void MainWindow::on_pushButton_clicked() { QString usertext = ui->plainTextEdit->toPlainText(); QUrl url = QUrl::fromUserInput(usertext); if (checkUrl(url)) ui->textEdit->setPlainText("Valid URL."); else ui->textEdit->setPlainText("Invalid URL."); } Also on the qDebug line there is an error: /home/user/HTML/mainwindow.cpp:32: error: no matching function for call to 'qDebug(QString)' Does anyone know what the problem is as it keeps returning true? A: You should use qDebug like this: qDebug() << QString("Invalid URL: %1").arg(url.toString()); also note that QUrl::isValid() does not check syntax of url. You may want to use regular expressions to validate urls. A: QUrl::isValid() only basically checks if the character encoding is right. What are you considering a wrong url? Re qDebug, the form you use basically encapsulates printf, so it doesn't work with QString. You want to do: qDebug() << QString("Invalid URL: %1").arg(url.toString());
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,789
package thrift // Autogenerated by Thrift Compiler (FIXME) // DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING /* THE FOLLOWING THRIFT FILE WAS USED TO CREATE THIS enum MyTestEnum { FIRST = 1, SECOND = 2, THIRD = 3, FOURTH = 4, } struct MyTestStruct { 1: bool on, 2: byte b, 3: i16 int16, 4: i32 int32, 5: i64 int64, 6: double d, 7: string st, 8: binary bin, 9: map<string, string> stringMap, 10: list<string> stringList, 11: set<string> stringSet, 12: MyTestEnum e, } */ import ( "context" "fmt" ) // (needed to ensure safety because of naive import list construction.) var _ = ZERO var _ = fmt.Printf var GoUnusedProtection__ int type MyTestEnum int64 const ( MyTestEnum_FIRST MyTestEnum = 1 MyTestEnum_SECOND MyTestEnum = 2 MyTestEnum_THIRD MyTestEnum = 3 MyTestEnum_FOURTH MyTestEnum = 4 ) func (p MyTestEnum) String() string { switch p { case MyTestEnum_FIRST: return "FIRST" case MyTestEnum_SECOND: return "SECOND" case MyTestEnum_THIRD: return "THIRD" case MyTestEnum_FOURTH: return "FOURTH" } return "<UNSET>" } func MyTestEnumFromString(s string) (MyTestEnum, error) { switch s { case "FIRST": return MyTestEnum_FIRST, nil case "SECOND": return MyTestEnum_SECOND, nil case "THIRD": return MyTestEnum_THIRD, nil case "FOURTH": return MyTestEnum_FOURTH, nil } return MyTestEnum(0), fmt.Errorf("not a valid MyTestEnum string") } func MyTestEnumPtr(v MyTestEnum) *MyTestEnum { return &v } type MyTestStruct struct { On bool `thrift:"on,1" json:"on"` B int8 `thrift:"b,2" json:"b"` Int16 int16 `thrift:"int16,3" json:"int16"` Int32 int32 `thrift:"int32,4" json:"int32"` Int64 int64 `thrift:"int64,5" json:"int64"` D float64 `thrift:"d,6" json:"d"` St string `thrift:"st,7" json:"st"` Bin []byte `thrift:"bin,8" json:"bin"` StringMap map[string]string `thrift:"stringMap,9" json:"stringMap"` StringList []string `thrift:"stringList,10" json:"stringList"` StringSet map[string]struct{} `thrift:"stringSet,11" json:"stringSet"` E MyTestEnum `thrift:"e,12" json:"e"` } func 
NewMyTestStruct() *MyTestStruct { return &MyTestStruct{} } func (p *MyTestStruct) GetOn() bool { return p.On } func (p *MyTestStruct) GetB() int8 { return p.B } func (p *MyTestStruct) GetInt16() int16 { return p.Int16 } func (p *MyTestStruct) GetInt32() int32 { return p.Int32 } func (p *MyTestStruct) GetInt64() int64 { return p.Int64 } func (p *MyTestStruct) GetD() float64 { return p.D } func (p *MyTestStruct) GetSt() string { return p.St } func (p *MyTestStruct) GetBin() []byte { return p.Bin } func (p *MyTestStruct) GetStringMap() map[string]string { return p.StringMap } func (p *MyTestStruct) GetStringList() []string { return p.StringList } func (p *MyTestStruct) GetStringSet() map[string]struct{} { return p.StringSet } func (p *MyTestStruct) GetE() MyTestEnum { return p.E } func (p *MyTestStruct) Read(ctx context.Context, iprot TProtocol) error { if _, err := iprot.ReadStructBegin(ctx); err != nil { return PrependError(fmt.Sprintf("%T read error: ", p), err) } for { _, fieldTypeId, fieldId, err := iprot.ReadFieldBegin(ctx) if err != nil { return PrependError(fmt.Sprintf("%T field %d read error: ", p, fieldId), err) } if fieldTypeId == STOP { break } switch fieldId { case 1: if err := p.readField1(ctx, iprot); err != nil { return err } case 2: if err := p.readField2(ctx, iprot); err != nil { return err } case 3: if err := p.readField3(ctx, iprot); err != nil { return err } case 4: if err := p.readField4(ctx, iprot); err != nil { return err } case 5: if err := p.readField5(ctx, iprot); err != nil { return err } case 6: if err := p.readField6(ctx, iprot); err != nil { return err } case 7: if err := p.readField7(ctx, iprot); err != nil { return err } case 8: if err := p.readField8(ctx, iprot); err != nil { return err } case 9: if err := p.readField9(ctx, iprot); err != nil { return err } case 10: if err := p.readField10(ctx, iprot); err != nil { return err } case 11: if err := p.readField11(ctx, iprot); err != nil { return err } case 12: if err := 
p.readField12(ctx, iprot); err != nil { return err } default: if err := iprot.Skip(ctx, fieldTypeId); err != nil { return err } } if err := iprot.ReadFieldEnd(ctx); err != nil { return err } } if err := iprot.ReadStructEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T read struct end error: ", p), err) } return nil } func (p *MyTestStruct) readField1(ctx context.Context, iprot TProtocol) error { if v, err := iprot.ReadBool(ctx); err != nil { return PrependError("error reading field 1: ", err) } else { p.On = v } return nil } func (p *MyTestStruct) readField2(ctx context.Context, iprot TProtocol) error { if v, err := iprot.ReadByte(ctx); err != nil { return PrependError("error reading field 2: ", err) } else { temp := int8(v) p.B = temp } return nil } func (p *MyTestStruct) readField3(ctx context.Context, iprot TProtocol) error { if v, err := iprot.ReadI16(ctx); err != nil { return PrependError("error reading field 3: ", err) } else { p.Int16 = v } return nil } func (p *MyTestStruct) readField4(ctx context.Context, iprot TProtocol) error { if v, err := iprot.ReadI32(ctx); err != nil { return PrependError("error reading field 4: ", err) } else { p.Int32 = v } return nil } func (p *MyTestStruct) readField5(ctx context.Context, iprot TProtocol) error { if v, err := iprot.ReadI64(ctx); err != nil { return PrependError("error reading field 5: ", err) } else { p.Int64 = v } return nil } func (p *MyTestStruct) readField6(ctx context.Context, iprot TProtocol) error { if v, err := iprot.ReadDouble(ctx); err != nil { return PrependError("error reading field 6: ", err) } else { p.D = v } return nil } func (p *MyTestStruct) readField7(ctx context.Context, iprot TProtocol) error { if v, err := iprot.ReadString(ctx); err != nil { return PrependError("error reading field 7: ", err) } else { p.St = v } return nil } func (p *MyTestStruct) readField8(ctx context.Context, iprot TProtocol) error { if v, err := iprot.ReadBinary(ctx); err != nil { return PrependError("error 
reading field 8: ", err) } else { p.Bin = v } return nil } func (p *MyTestStruct) readField9(ctx context.Context, iprot TProtocol) error { _, _, size, err := iprot.ReadMapBegin(ctx) if err != nil { return PrependError("error reading map begin: ", err) } tMap := make(map[string]string, size) p.StringMap = tMap for i := 0; i < size; i++ { var _key0 string if v, err := iprot.ReadString(ctx); err != nil { return PrependError("error reading field 0: ", err) } else { _key0 = v } var _val1 string if v, err := iprot.ReadString(ctx); err != nil { return PrependError("error reading field 0: ", err) } else { _val1 = v } p.StringMap[_key0] = _val1 } if err := iprot.ReadMapEnd(ctx); err != nil { return PrependError("error reading map end: ", err) } return nil } func (p *MyTestStruct) readField10(ctx context.Context, iprot TProtocol) error { _, size, err := iprot.ReadListBegin(ctx) if err != nil { return PrependError("error reading list begin: ", err) } tSlice := make([]string, 0, size) p.StringList = tSlice for i := 0; i < size; i++ { var _elem2 string if v, err := iprot.ReadString(ctx); err != nil { return PrependError("error reading field 0: ", err) } else { _elem2 = v } p.StringList = append(p.StringList, _elem2) } if err := iprot.ReadListEnd(ctx); err != nil { return PrependError("error reading list end: ", err) } return nil } func (p *MyTestStruct) readField11(ctx context.Context, iprot TProtocol) error { _, size, err := iprot.ReadSetBegin(ctx) if err != nil { return PrependError("error reading set begin: ", err) } tSet := make(map[string]struct{}, size) p.StringSet = tSet for i := 0; i < size; i++ { var _elem3 string if v, err := iprot.ReadString(ctx); err != nil { return PrependError("error reading field 0: ", err) } else { _elem3 = v } p.StringSet[_elem3] = struct{}{} } if err := iprot.ReadSetEnd(ctx); err != nil { return PrependError("error reading set end: ", err) } return nil } func (p *MyTestStruct) readField12(ctx context.Context, iprot TProtocol) error { if v, err 
:= iprot.ReadI32(ctx); err != nil { return PrependError("error reading field 12: ", err) } else { temp := MyTestEnum(v) p.E = temp } return nil } func (p *MyTestStruct) Write(ctx context.Context, oprot TProtocol) error { if err := oprot.WriteStructBegin(ctx, "MyTestStruct"); err != nil { return PrependError(fmt.Sprintf("%T write struct begin error: ", p), err) } if err := p.writeField1(ctx, oprot); err != nil { return err } if err := p.writeField2(ctx, oprot); err != nil { return err } if err := p.writeField3(ctx, oprot); err != nil { return err } if err := p.writeField4(ctx, oprot); err != nil { return err } if err := p.writeField5(ctx, oprot); err != nil { return err } if err := p.writeField6(ctx, oprot); err != nil { return err } if err := p.writeField7(ctx, oprot); err != nil { return err } if err := p.writeField8(ctx, oprot); err != nil { return err } if err := p.writeField9(ctx, oprot); err != nil { return err } if err := p.writeField10(ctx, oprot); err != nil { return err } if err := p.writeField11(ctx, oprot); err != nil { return err } if err := p.writeField12(ctx, oprot); err != nil { return err } if err := oprot.WriteFieldStop(ctx); err != nil { return PrependError("write field stop error: ", err) } if err := oprot.WriteStructEnd(ctx); err != nil { return PrependError("write struct stop error: ", err) } return nil } func (p *MyTestStruct) writeField1(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "on", BOOL, 1); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 1:on: ", p), err) } if err := oprot.WriteBool(ctx, bool(p.On)); err != nil { return PrependError(fmt.Sprintf("%T.on (1) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 1:on: ", p), err) } return err } func (p *MyTestStruct) writeField2(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "b", BYTE, 2); err != 
nil { return PrependError(fmt.Sprintf("%T write field begin error 2:b: ", p), err) } if err := oprot.WriteByte(ctx, int8(p.B)); err != nil { return PrependError(fmt.Sprintf("%T.b (2) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 2:b: ", p), err) } return err } func (p *MyTestStruct) writeField3(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "int16", I16, 3); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 3:int16: ", p), err) } if err := oprot.WriteI16(ctx, int16(p.Int16)); err != nil { return PrependError(fmt.Sprintf("%T.int16 (3) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 3:int16: ", p), err) } return err } func (p *MyTestStruct) writeField4(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "int32", I32, 4); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 4:int32: ", p), err) } if err := oprot.WriteI32(ctx, int32(p.Int32)); err != nil { return PrependError(fmt.Sprintf("%T.int32 (4) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 4:int32: ", p), err) } return err } func (p *MyTestStruct) writeField5(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "int64", I64, 5); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 5:int64: ", p), err) } if err := oprot.WriteI64(ctx, int64(p.Int64)); err != nil { return PrependError(fmt.Sprintf("%T.int64 (5) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 5:int64: ", p), err) } return err } func (p *MyTestStruct) writeField6(ctx context.Context, oprot TProtocol) (err error) 
{ if err := oprot.WriteFieldBegin(ctx, "d", DOUBLE, 6); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 6:d: ", p), err) } if err := oprot.WriteDouble(ctx, float64(p.D)); err != nil { return PrependError(fmt.Sprintf("%T.d (6) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 6:d: ", p), err) } return err } func (p *MyTestStruct) writeField7(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "st", STRING, 7); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 7:st: ", p), err) } if err := oprot.WriteString(ctx, string(p.St)); err != nil { return PrependError(fmt.Sprintf("%T.st (7) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 7:st: ", p), err) } return err } func (p *MyTestStruct) writeField8(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "bin", STRING, 8); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 8:bin: ", p), err) } if err := oprot.WriteBinary(ctx, p.Bin); err != nil { return PrependError(fmt.Sprintf("%T.bin (8) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 8:bin: ", p), err) } return err } func (p *MyTestStruct) writeField9(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "stringMap", MAP, 9); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 9:stringMap: ", p), err) } if err := oprot.WriteMapBegin(ctx, STRING, STRING, len(p.StringMap)); err != nil { return PrependError("error writing map begin: ", err) } for k, v := range p.StringMap { if err := oprot.WriteString(ctx, string(k)); err != nil { return PrependError(fmt.Sprintf("%T. 
(0) field write error: ", p), err) } if err := oprot.WriteString(ctx, string(v)); err != nil { return PrependError(fmt.Sprintf("%T. (0) field write error: ", p), err) } } if err := oprot.WriteMapEnd(ctx); err != nil { return PrependError("error writing map end: ", err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 9:stringMap: ", p), err) } return err } func (p *MyTestStruct) writeField10(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "stringList", LIST, 10); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 10:stringList: ", p), err) } if err := oprot.WriteListBegin(ctx, STRING, len(p.StringList)); err != nil { return PrependError("error writing list begin: ", err) } for _, v := range p.StringList { if err := oprot.WriteString(ctx, string(v)); err != nil { return PrependError(fmt.Sprintf("%T. (0) field write error: ", p), err) } } if err := oprot.WriteListEnd(ctx); err != nil { return PrependError("error writing list end: ", err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 10:stringList: ", p), err) } return err } func (p *MyTestStruct) writeField11(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "stringSet", SET, 11); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 11:stringSet: ", p), err) } if err := oprot.WriteSetBegin(ctx, STRING, len(p.StringSet)); err != nil { return PrependError("error writing set begin: ", err) } for v := range p.StringSet { if err := oprot.WriteString(ctx, string(v)); err != nil { return PrependError(fmt.Sprintf("%T. 
(0) field write error: ", p), err) } } if err := oprot.WriteSetEnd(ctx); err != nil { return PrependError("error writing set end: ", err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 11:stringSet: ", p), err) } return err } func (p *MyTestStruct) writeField12(ctx context.Context, oprot TProtocol) (err error) { if err := oprot.WriteFieldBegin(ctx, "e", I32, 12); err != nil { return PrependError(fmt.Sprintf("%T write field begin error 12:e: ", p), err) } if err := oprot.WriteI32(ctx, int32(p.E)); err != nil { return PrependError(fmt.Sprintf("%T.e (12) field write error: ", p), err) } if err := oprot.WriteFieldEnd(ctx); err != nil { return PrependError(fmt.Sprintf("%T write field end error 12:e: ", p), err) } return err } func (p *MyTestStruct) String() string { if p == nil { return "<nil>" } return fmt.Sprintf("MyTestStruct(%+v)", *p) }
{ "redpajama_set_name": "RedPajamaGithub" }
1,496
{"url":"https:\/\/computers.tutsplus.com\/tutorials\/using-spreadsheets-for-finance-how-to-calculate-depreciation--cms-19409?ec_unit=translation-info-language","text":"## Declining Balance method\n\nLike straight line depreciation, the declining balance method is a constant amount each year, but at a higher rate. To calculate it, the worksheet multiplies a rate by the asset's declining book value. The book value is the initial cost, minus accumulated depreciation, and is sometimes called the carrying value. Don't confuse book value with market value, which is how much you can sell the asset for.\n\nWhat is the rate for declining balance? Excel and Google both use this formula:\n\nThe declining balance function (DB)\u00a0has the same 3 parameters as the straight line method\u2014cost, salvage value, and life\u2014plus two more:\n\n\u2022 Period: we run the calculation several times. When we run it this time, which period do we want to see? For example, we run the function for the second year, the third year, etc.\n\u2022 Month: assuming we're depreciating over several years, how many months are in the first year of depreciation? This value is optional. If we omit it, the function assumes it's 12.\n\nThe syntax of this function is:\n= DB(cost, salvage, life, period, [month])\n\nLook at the declining balance worksheet of Depreciation Worksheets.xlxs, or in the screenshot below.\u00a0The initial cost, salvage value, number of years and months in the first year are contained in cells B3, B4, B5 and B6, respectively. The years are listed down column A.\n\nSo click in cell B9 and enter the function:\n=DB(B3,B4,B5,A9,B6)\n\nDon't press Enter, yet! To save typing, we'll use the AutoFill feature to drag the formula down the column. To avoid errors, we need to make all the cell references, except the years, absolute.\n\nSelect each cell reference except A9, then press the F4 key to insert dollar signs. This prevents the reference from changing. 
The function should now be:\n=DB($B$3,$B$4,$B$5,A9,$B$6)\n\nPut the mouse pointer on the AutoFill handle so the mouse pointer becomes a cross-hair:\n\nDrag the AutoFill cursor down to the bottom of the column to get these results:\n\nNote that the Year column\u00a0goes to 11 because the first year had only 6 months.\n\n## Double-declining balance\n\nWhen we use the declining balance method to accelerate straight-line depreciation by twice as much as we ordinarily would, this is a special case known as double-declining balance.\u00a0This function is actually more flexible than the name sounds. By default, it will double the rate, but we can optionally use any rate we want.\n\nThe parameters for the double-declining balance function (DDB) are similar to the regular declining balance method:\n\n\u2022 Cost: initial cost of the asset\n\u2022 Salvage: the asset's value when it's fully depreciated\n\u2022 Life: the number of periods (typically years) over which we depreciate the asset\n\u2022 Period: we run the calculation several times. When we run it this time, which period do we want to see? For example, we can run the function for the second year, the third year, etc.\n\u2022 Rate: optional. If we don't specify, the rate is double, i.e. 2. But we can set this parameter to any rate we want.\n\nThe syntax of this function is:\n=DDB(cost, salvage, life, period, [rate])\n\nNote that unlike the \"regular\" fixed declining balance method, for double-declining balance, the number of months in the first year doesn't matter.\n\nLook at the double declining balance worksheet of Depreciation Worksheets.xlxs, or in the screenshot below.\u00a0The values are the same and in the same cells as in the regular declining balance, so click in cell B8 and enter the function:\n=DDB(B3,B4,B5,A8)\n\nAlso like in the previous example, we want to AutoFill down the column, so select all cell references except A8 and press the F4 key to make them absolute. 
The formula you enter should now be:\n=DDB($B$3,$B$4,$B$5,A8)\n\nPut the mouse pointer on the AutoFill handle so the mouse pointer becomes a cross-hair:\n\nThen drag the AutoFill cursor down to the bottom of the column to get these results:\n\nLet's say we want a rate of 150% instead of the default rate of 200%. To do this, add a 5th parameter of 1.5, as follows.\n\nClick in B21, and enter the function below, remembering to make the first 3 parameters absolute, as before:\n=DDB($B$3,$B$4,$B$5,A21,1.5)\n\nAutoFill down to the bottom to get these results:\n\n## Sum of Year's Digits\n\nYou might want depreciation to accelerate faster in the early years and slower in later years, perhaps for an asset that loses value quickly, or where you want to take a charge-off sooner. For this, you can use the Sum of Year's Digits method.\n\nIt's best to explain this method with the example of an asset that you expect to use for 5 years. In the first year, you add the year's digits: 5 + 4 + 3 + 2 + 1, which is 15. Then you multiply the cost less the salvage by 5\/15 (which is 1\/3). In the second year, you add the remaining digits 4 + 3 + 2 + 1, which is 10. Then you multiply the cost less the salvage by 4\/10 (which is 2\/5). And so it goes.\n\nThe Sum of Year's function (SYD)\u00a0parameters are similar to the previous methods:\n\n\u2022 Cost: initial cost of the asset\n\u2022 Salvage: the asset's value when it's fully depreciated\n\u2022 Life: the number of periods (typically years) over which we depreciate the asset\n\u2022 Period: we run the calculation several times. When we run it this time, which period do we want to see? 
For example, we run the function for the second year, the third year, etc.\n\nThe syntax of the function is:\n=SYD(cost, salvage, life, period)\n\nThe function does the calculation using this formula:\n\nLook at the\u00a0SOYD\u00a0worksheet of\u00a0Depreciation Worksheets.xlxs, or at the screenshot below.\u00a0The same values are in the same cells as in the previous examples, except the life is now 5 years. Click in B8 and enter the formula:\n=SYD(B3,B4,B5,A8)\n\nUse the F4 key to make the first 3 parameters absolute, so the formula becomes:\n=SYD($B$3,$B$4,$B$5,A8)\n\nAutoFill to the bottom to get these results:\n\n## Conclusion\n\nNow you know how to calculate four types of depreciation for capital purchases: Straight line for general, all-purpose use, declining balance for a faster rate, double declining balance for an even faster or more flexible rate, and sum of year's digits to get more depreciation in earlier years. Best of all, you know how to calculate each of those in any spreadsheet app you have.\n\nSo open a new spreadsheet, plug in your own numbers, and see what you get!\n\nPlease note: This tutorial is not intended to give you financial advice, but only to explain how to use spreadsheets for depreciation calculations. 
Please consult a qualified financial advisor before making any financial decisions.","date":"2021-10-16 18:41:58","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4237689971923828, \"perplexity\": 1677.439497219024}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323584913.24\/warc\/CC-MAIN-20211016170013-20211016200013-00122.warc.gz\"}"}
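A quick way to sanity-check the accelerated-depreciation rules described above is to code them up. The C++ sketch below models the sum-of-years'-digits and double-declining-balance calculations (a simplified model of the spreadsheet functions, not Excel's exact implementation; for instance, Excel's fixed-declining-balance DB also rounds its rate, 1 - (salvage/cost)^(1/life), to three decimal places and takes a months-in-first-year argument, both omitted here). The sample figures used below (cost 10000, salvage 1000, life 5) are illustrative, not the worksheet's actual cell values. Note that, as Excel computes SYD, every period divides by the same digit sum, so year two of a five-year asset gets 4/15 of (cost - salvage) rather than the 4/10 a literal reading of the tutorial paragraph might suggest.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Sum-of-years'-digits depreciation for one period (1-based), like =SYD(...).
// Period p receives the fraction (life - p + 1) / (1 + 2 + ... + life) of
// (cost - salvage), e.g. 5/15, 4/15, 3/15, 2/15, 1/15 over a five-year life.
double syd(double cost, double salvage, int life, int period) {
    double digit_sum = life * (life + 1) / 2.0;
    return (cost - salvage) * (life - period + 1) / digit_sum;
}

// Double-declining-balance depreciation for one period, like =DDB(...).
// Each period charges rate = factor / life of the remaining book value,
// but never depreciates the book value below the salvage value.
double ddb(double cost, double salvage, int life, int period, double factor = 2.0) {
    double rate = factor / life;
    double book = cost;
    double dep = 0.0;
    for (int p = 1; p <= period; ++p) {
        dep = std::min(book * rate, book - salvage);
        book -= dep;
    }
    return dep;
}
```

With the illustrative inputs (10000, 1000, 5), ddb charges 4000, 2400, 1440 and 864 in the first four years, but only 296 in year five, because a full 40% charge would push the book value below the salvage value.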
null
null
Lieutenant Ernest Hardcastle (31 December 1898 – November 1973) was an English World War I flying ace observer/gunner credited with twelve aerial victories. He would return to military service during World War II.

Early life and service

Ernest Hardcastle was born on 31 December 1898 in Dudley Hill, Bradford, England. Hardcastle worked for the Bradford Chamber of Commerce until World War I began. He enlisted in the Yorkshire Regiment, but transferred to the Royal Flying Corps in August 1917, and after initial training as a cadet, was commissioned as a temporary second lieutenant (on probation) on 30 January 1918. He was assigned to No. 20 Squadron RAF as an observer/gunner on 18 April 1918.

World War I aerial service

Hardcastle's winning streak began on 8 May 1918 and ended on 30 July 1918, with all but one victory being over an enemy fighter aircraft. His final claim tally was nine destroyed and three 'driven down out of control'. The pilots aiding him included fellow aces Lieutenants Victor Groom and August Iaccaci, as well as Captains Douglas Graham Cooke and Horace Percy Lale. Hardcastle was awarded the Distinguished Flying Cross, which was gazetted on 2 November 1918. On 20 December 1918 he relinquished his commission for reasons of ill health resulting from military service. Hardcastle was transferred to the unemployed list by the Royal Air Force on 13 February 1919.

World War II

Hardcastle returned to military service in World War II, being commissioned as a pilot officer (on probation) in the Royal Air Force Volunteer Reserve on 30 September 1940. On 30 September 1941 he was confirmed in his appointment as a flying officer. On 1 January 1943 he was promoted to flight lieutenant. On 9 June 1945, he again relinquished his commission on account of medical unfitness.

Honours and awards

Distinguished Flying Cross

Lieutenant Ernest Hardcastle

This officer displayed great courage and skill on two occasions when he was observer in company with Lieut. Groom.
While on patrol their formation of eight attacked twenty-five hostile scouts; he and Lieut. Groom accounted for two. On another occasion, when with the same officer, they were attacked by twelve scouts, two of these they shot down.
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,574
{"url":"https:\/\/stacks.math.columbia.edu\/tag\/0CQ8","text":"This is a result due to Ofer Gabber, see [Theorem 1.1, olsson_proper]\n\nTheorem 105.10.3 (Chow's lemma). Let $f : \\mathcal{X} \\to Y$ be a morphism from an algebraic stack to an algebraic space. Assume\n\n1. $Y$ is quasi-compact and quasi-separated,\n\n2. $f$ is separated of finite type.\n\nThen there exists a commutative diagram\n\n$\\xymatrix{ \\mathcal{X} \\ar[rd] & X \\ar[l] \\ar[d] \\ar[r] & \\overline{X} \\ar[ld] \\\\ & Y }$\n\nwhere $X \\to \\mathcal{X}$ is proper surjective, $X \\to \\overline{X}$ is an open immersion, and $\\overline{X} \\to Y$ is proper morphism of algebraic spaces.\n\nProof. The rough idea is to use that $\\mathcal{X}$ has a dense open which is a gerbe (Morphisms of Stacks, Proposition 100.29.1) and appeal to Lemma 105.10.2. The reason this does not work is that the open may not be quasi-compact and one runs into technical problems. Thus we first do a (standard) reduction to the Noetherian case.\n\nFirst we choose a closed immersion $\\mathcal{X} \\to \\mathcal{X}'$ where $\\mathcal{X}'$ is an algebraic stack separated and of finite type over $Y$. See Limits of Stacks, Lemma 101.6.2. Clearly it suffices to prove the theorem for $\\mathcal{X}'$, hence we may assume $\\mathcal{X} \\to Y$ is separated and of finite presentation.\n\nAssume $\\mathcal{X} \\to Y$ is separated and of finite presentation. By Limits of Spaces, Proposition 69.8.1 we can write $Y = \\mathop{\\mathrm{lim}}\\nolimits Y_ i$ as the directed limit of a system of Noetherian algebraic spaces with affine transition morphisms. By Limits of Stacks, Lemma 101.5.1 there is an $i$ and a morphism $\\mathcal{X}_ i \\to Y_ i$ of finite presentation from an algebraic stack to $Y_ i$ such that $\\mathcal{X} = Y \\times _{Y_ i} \\mathcal{X}_ i$. After increasing $i$ we may assume that $\\mathcal{X}_ i \\to Y_ i$ is separated, see Limits of Stacks, Lemma 101.4.2. 
Then it suffices to prove the theorem for $\\mathcal{X}_ i \\to Y_ i$. This reduces us to the case discussed in the next paragraph.\n\nAssume $Y$ is Noetherian. We may replace $\\mathcal{X}$ by its reduction (Properties of Stacks, Definition 99.10.4). This reduces us to the case discussed in the next paragraph.\n\nAssume $Y$ is Noetherian and $\\mathcal{X}$ is reduced. Since $\\mathcal{X} \\to Y$ is separated and $Y$ quasi-separated, we see that $\\mathcal{X}$ is quasi-separated as an algebraic stack. Hence the inertia $\\mathcal{I}_\\mathcal {X} \\to \\mathcal{X}$ is quasi-compact. Thus by Morphisms of Stacks, Proposition 100.29.1 there exists a dense open substack $\\mathcal{V} \\subset \\mathcal{X}$ which is a gerbe. Let $\\mathcal{V} \\to V$ be the morphism which expresses $\\mathcal{V}$ as a gerbe over the algebraic space $V$. See Morphisms of Stacks, Lemma 100.28.2 for a construction of $\\mathcal{V} \\to V$. This construction in particular shows that the morphism $\\mathcal{V} \\to Y$ factors as $\\mathcal{V} \\to V \\to Y$. Picture\n\n$\\xymatrix{ \\mathcal{V} \\ar[r] \\ar[d] & \\mathcal{X} \\ar[d] \\\\ V \\ar[r] & Y }$\n\nSince the morphism $\\mathcal{V} \\to V$ is surjective, flat, and of finite presentation (Morphisms of Stacks, Lemma 100.28.8) and since $\\mathcal{V} \\to Y$ is locally of finite presentation, it follows that $V \\to Y$ is locally of finite presentation (Morphisms of Stacks, Lemma 100.27.12). Note that $\\mathcal{V} \\to V$ is a universal homeomorphism (Morphisms of Stacks, Lemma 100.28.13). Since $\\mathcal{V}$ is quasi-compact (see Morphisms of Stacks, Lemma 100.8.2) we see that $V$ is quasi-compact. Finally, since $\\mathcal{V} \\to Y$ is separated the same is true for $V \\to Y$ by Morphisms of Stacks, Lemma 100.27.17 applied to $\\mathcal{V} \\to V \\to Y$ (whose assumptions are satisfied as we've already seen).\n\nAll of the above means that the assumptions of Limits of Spaces, Lemma 69.13.3 apply to the morphism $V \\to Y$. 
Thus we can find a dense open subspace $V' \\subset V$ and an immersion $V' \\to \\mathbf{P}^ n_ Y$ over $Y$. Clearly we may replace $V$ by $V'$ and $\\mathcal{V}$ by the inverse image of $V'$ in $\\mathcal{V}$ (recall that $|\\mathcal{V}| = |V|$ as we've seen above). Thus we may assume we have a diagram\n\n$\\xymatrix{ \\mathcal{V} \\ar[rr] \\ar[d] & & \\mathcal{X} \\ar[d] \\\\ V \\ar[r] & \\mathbf{P}^ n_ Y \\ar[r] & Y }$\n\nwhere the arrow $V \\to \\mathbf{P}^ n_ Y$ is an immersion. Let $\\mathcal{X}'$ be the scheme theoretic image of the morphism\n\n$j : \\mathcal{V} \\longrightarrow \\mathbf{P}^ n_ Y \\times _ Y \\mathcal{X}$\n\nand let $Y'$ be the scheme theoretic image of the morphism $V \\to \\mathbf{P}^ n_ Y$. We obtain a commutative diagram\n\n$\\xymatrix{ \\mathcal{V} \\ar[r] \\ar[d] & \\mathcal{X}' \\ar[r] \\ar[d] & \\mathbf{P}^ n_ Y \\times _ Y \\mathcal{X} \\ar[d] \\ar[r] & \\mathcal{X} \\ar[d] \\\\ V \\ar[r] & Y' \\ar[r] & \\mathbf{P}^ n_ Y \\ar[r] & Y }$\n\n(See Morphisms of Stacks, Lemma 100.38.4). We claim that $\\mathcal{V} = V \\times _{Y'} \\mathcal{X}'$ and that Lemma 105.10.2 applies to the morphism $\\mathcal{X}' \\to Y'$ and the open subspace $V \\subset Y'$. If the claim is true, then we obtain\n\n$\\xymatrix{ \\overline{X} \\ar[rd]_{\\overline{g}} & X \\ar[l] \\ar[d]_ g \\ar[r]_ h & \\mathcal{X}' \\ar[ld]^ f \\\\ & Y' }$\n\nwith $X \\to \\overline{X}$ an open immersion, $\\overline{g}$ and $h$ proper, and such that $|V|$ is contained in the image of $|g|$. Then the composition $X \\to \\mathcal{X}' \\to \\mathcal{X}$ is proper (as a composition of proper morphisms) and its image contains $|\\mathcal{V}|$, hence this composition is surjective. As well, $\\overline{X} \\to Y' \\to Y$ is proper as a composition of proper morphisms.\n\nThe last step is to prove the claim. 
Observe that $\\mathcal{X}' \\to Y'$ is separated and of finite type, that $Y'$ is quasi-compact and quasi-separated, and that $V$ is quasi-compact (we omit checking all the details completely). Next, we observe that $b : \\mathcal{X}' \\to \\mathcal{X}$ is an isomorphism over $\\mathcal{V}$ by Morphisms of Stacks, Lemma 100.38.7. In particular $\\mathcal{V}$ is identified with an open substack of $\\mathcal{X}'$. The morphism $j$ is quasi-compact (source is quasi-compact and target is quasi-separated), so formation of the scheme theoretic image of $j$ commutes with flat base change by Morphisms of Stacks, Lemma 100.38.5. In particular we see that $V \\times _{Y'} \\mathcal{X}'$ is the scheme theoretic image of $\\mathcal{V} \\to V \\times _{Y'} \\mathcal{X}'$. However, by Morphisms of Stacks, Lemma 100.37.5 the image of $|\\mathcal{V}| \\to |V \\times _{Y'} \\mathcal{X}'|$ is closed (use that $\\mathcal{V} \\to V$ is a universal homeomorphism as we've seen above and hence is universally closed). Also the image is dense (combine what we just said with Morphisms of Stacks, Lemma 100.38.6) we conclude $|\\mathcal{V}| = |V \\times _{Y'} \\mathcal{X}'|$. Thus $\\mathcal{V} \\to V \\times _{Y'} \\mathcal{X}'$ is an isomorphism and the proof of the claim is complete. $\\square$\n\nIn your comment you can use Markdown and LaTeX style mathematics (enclose it like $\\pi$). 
A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).","date":"2022-05-27 21:59:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 2, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9875977635383606, \"perplexity\": 172.95633046250813}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652663006341.98\/warc\/CC-MAIN-20220527205437-20220527235437-00577.warc.gz\"}"}
null
null
Q: Compilation of abstract class type pointer is successful? The following code compiles successfully but does not run. I think the pointer p might have a vptr that doesn't point to anything, which is why it compiles but can't run. Or is it that no vtable is created at all, since the only class present here is abstract? class one { int a; public: one(){a=0;}; virtual void get()=0; }; int main() { one *p; p->get(); } A: You don't initialise p, which means it's pointing to a random memory location. Dereferencing it is undefined behaviour, most likely a crash. Declaring a pointer to an abstract class is perfectly legal; the problem is the dereference, not the vtable. A: The value of your p is not a valid pointer, since it's not the address of any object. Dereferencing p is undefined behaviour.
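A minimal corrected sketch of the situation discussed in the answers (the concrete subclass two, the helper call_get, the int return type, and the value 42 are illustrative additions, not part of the original post). Declaring a pointer to an abstract class compiles fine; the call is well defined once the pointer refers to an object of a concrete derived class:

```cpp
#include <cassert>

class one {
    int a;
public:
    one() : a(0) {}
    virtual ~one() = default;  // virtual destructor for safe polymorphic use
    virtual int get() = 0;     // pure virtual: 'one' itself cannot be instantiated
};

// A concrete subclass implementing the pure virtual function.
class two : public one {
public:
    int get() override { return 42; }
};

// Calling through a base-class pointer is well defined once the pointer
// holds the address of a live derived object; the vptr is set up by the
// derived class's constructor, not by the pointer declaration.
int call_get(one *p) {
    return p->get();
}
```

The crash in the question comes from dereferencing an uninitialized pointer, not from any missing vtable; once p holds the address of a concrete object, p->get() works as expected.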
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,365
Countersinking (Ukrainian: зенкування; from the German senken, "to sink, to deepen (a mine shaft)") is a type of machining by cutting used to create conical recesses (chamfers) in holes for the conical heads of countersunk screws, when cutting threads, for deburring sharp edges, and the like. Countersinking is performed with a special tool called a countersink (зенківка), or with large-diameter drills ground to the required angle.

See also

Countersink (Зенківка)
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,760
Q: Google Tag Manager causing page form to submit twice I have a webpage - which i guess is fairly standard. Basically something like: <form action="/mywebsite/mypage" method="post" novalidate="novalidate"> <div class="blah"> <input class="form-control" id="name" maxlength="80" name="name" tabindex="1" type="text" value=""> <!-- tonnes more inputs and labels and stuff --> <input type="submit" class="btn" value="Submit" tabindex="2"> </div> </form> But when the submit button is pressed, the form is submitted twice (depending on browser). Firefox - works fine - submits once consistantly Chrome - intermittent, mostly works, sometimes submits twice. IE (Edge) - submits twice 100% of the time Managed to narrow it down to google tag manager. When this is removed it works fine. So I have this script in my _layout page (master page, template page, whatever you call it) <!-- Google Tag Manager --> <noscript> <iframe src="//www.googletagmanager.com/ns.html?id=GTM-ABCDEF1" height="0" width="0" style="display: none; visibility: hidden"> </iframe> </noscript> <script> (function(w, d, s, l, i) { w[l] = w[l] || []; w[l].push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' }); var f = d.getElementsByTagName(s)[0], j = d.createElement(s), dl = l != 'dataLayer' ? '&l=' + l : ''; j.async = true; j.src = '//www.googletagmanager.com/gtm.js?id=' + i + dl; f.parentNode.insertBefore(j, f); })(window, document, 'script', 'dataLayer', 'GTM-ABCDEF1'); </script> <!-- End Google Tag Manager --> So I'm trying to understand what's going on - why it would be causing it - and how to stop it. I can work around it - but I don't want to. Would rather fix it properly. Notes: yes for some reason the url starts // rather than https:// - but changing it makes no difference. Any ideas? I have a very basic understanding of google analytics - but 0 knowledge about google tag manager. 
A: This is a bug in GTM; see https://productforums.google.com/forum/?nomobile=true#!topic/tag-manager/QVb2sNyvp5k;context-place=forum/tag-manager for more information. The selected answer isn't really the correct answer.
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,638
Sunyaev Lab

We are a computational genetics and genomics lab. Our main research is on genetic variation, including mechanisms of spontaneous mutagenesis, functional effects of mutations and allelic variants, population genetics, and the relationship between genotype and phenotype. As part of our research we develop new computational and statistical methods to assist DNA sequencing studies.

Understanding mutations from sequencing data

Mutations are the source of population genetic variation; they fuel evolution and cause disease. Data on de novo germ-line mutations are now available from whole genome sequencing of parent-child trios. Cancer genomics provides data on somatic cancer mutations. We analyze statistical properties of germ-line and somatic cancer mutations alongside epigenomic datasets. We believe that this analysis has the potential to generate biologically relevant hypotheses on leading mechanisms of spontaneous mutations in humans. From an evolutionary viewpoint, it can be informative about the evolution of mutation rate. On the practical side, accurate models of mutation rate will enhance statistical methods of cancer genomics and neuropsychiatric genetics aimed at mapping genes using recurrent de novo mutations. Some of our findings include the demonstrated association between mutation rate and replication timing; elevated mutation rate in functional regions due to maintenance of hypermutable sites by natural selection; and a unique spectrum of clustered mutations suggesting a specific mechanism generating clustered mutations. For somatic cancer mutations, we demonstrated that the relationship between chromatin accessibility and modification and mutation rate is highly cell-type specific. We also showed that the somatic mutation rate is decreased in regulatory regions marked by accessible chromatin, and linked this observation to the action of nucleotide excision repair.

Functional effect of allelic variants

It is essential to identify, among a myriad of allelic variants, those with an effect on molecular function. For predicting the functional effect of sequence variants in protein coding regions we rely on comparative sequence analysis and analysis of protein structure. We are continuously developing and maintaining PolyPhen-2, a computational method for predicting the effect of missense mutations and SNPs. We are interested in the dependence of the functional effect of coding variants on genetic background, and are using comparative genomics to identify suppressors of coding mutations. In non-coding regions of the genome, the effects of regulatory variants can also be analyzed using a combination of functional and comparative genomics data. Here, we are interested in using whole genome sequencing to identify regulatory variants of larger effects in humans and animals.

We are interested in population genetics as a lens through which we can study microevolution. Dynamics of allele propagation in populations depends on a number of evolutionary forces. The development of theoretical models is now enhanced by the availability of massive sequencing datasets. Our recent results include the demonstration that deleterious alleles are younger than neutral alleles at the same population frequency. We studied the effect of population bottlenecks and expansions on the burden of deleterious mutations under an arbitrary dominance coefficient. We are currently interested in the inference of complex natural selection in the form of balancing selection, genetic dominance, epistasis, and pleiotropy from population sequencing data.

Evolution, maintenance and allelic architecture of complex traits

Despite widespread interest in the genetics of complex traits (including common human diseases), basic principles of complex trait genetics are still poorly understood. We attack the problem from three directions. First, we develop theoretical models of evolution and maintenance of complex trait variation under various allelic architectures. Second, we are involved in a large zebrafish screen aiming at identification of key parameters of allelic architecture of complex traits. Third, we work on statistical methods for the analysis of available genomic data in phenotyped human populations. This includes methods for predicting complex phenotypes from genotypes.

Computational and statistical methods for sequencing studies

We develop computational and statistical methods for sequencing studies. VT-test is designed to detect combined association of rare variants with a complex phenotype. SNPTrack has been developed for gene mapping in model organisms. We continue developing new methods, including methods that benefit from pedigree collection and functional genomic data. We actively participate in collaborative projects devoted to sequencing of populations with common diseases.

Brigham Genomic Medicine (BGM)

The lab is intertwined with the computational component of the Brigham Genomic Medicine (BGM) program. This service aims at discovering genes underlying previously uncharacterized human Mendelian diseases, that is, rare diseases with unknown genetic etiology. We use genomic data from individual pedigrees to identify mutations potentially causing the phenotypes.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,265
Joelia spina is a species of mite first described by Kulijev in 1979. Joelia spina belongs to the genus Joelia and the family Oribatellidae. No subspecies are listed in the Catalogue of Life.
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,719
\section{Monochromatic components}

An easy exercise in an introductory graph theory course -- a remark by Erd\H{o}s and Rado, see~\cite{Gy2} -- states that any $2$-coloring of the edges of $K_n$ has a monochromatic spanning component. In general, Gy\'{a}rf\'{a}s \cite{Gy1} proved that the largest monochromatic component in an $r$-edge-coloring of $K_n$ has order at least $n/(r-1)$ and equality holds if an affine plane of order $r-1$ exists and $(r-1)^2$ divides $n$. F\"uredi \cite{F} proved the significantly larger lower bound $n/(r-1-(r-1)^{-1})$ in the case that there is no affine plane of order $r-1$. This connection to the existence of affine planes suggests that determining the exact maximum size of a monochromatic component is extremely difficult in general.

A double [triple] star is the tree obtained by joining the centers of two [three] stars by a path of length one [two]. Clearly, double or triple stars have diameter 3 or 4, respectively. Additional structure on large monochromatic components has been conjectured by Gy\'arf\'as~\cite{Gy2}.

\begin{conj}[Gy\'arf\'as, Problem 4.2 in \cite{Gy2}] \label{P1}
For $r\ge 3$, is there a monochromatic double star on at least $n/(r-1)-o(n)$ vertices in every $r$-coloring of $K_n$?
\end{conj}

A weaker version of the problem reads as follows.

\begin{conj}[Gy\'arf\'as, Problem 4.3 in \cite{Gy2}] \label{P}
Given positive numbers $n$, $r$. Is there a constant $d$ (perhaps $d=3$) such that in every $r$-coloring of $K_n$ there is a monochromatic subgraph of diameter at most $d$ with at least $n/(r-1)$ vertices?
\end{conj}

The assumption $r\ge 3$ in Conjecture \ref{P1} is necessary, since a random two-coloring will give a monochromatic double star of size $\approx 3n/4$ only. The best result for double stars is due to Gy\'arf\'as and S\'ark\"ozy.

\begin{thm}[Gy\'arf\'as, S\'ark\"ozy~\cite{GS}] \label{GS}
Every $r$-edge-coloring of $K_n$ contains a monochromatic double star on at least $\frac{n(r+1)+r-1}{r^2}$ vertices.
\end{thm}

The bipartite Ramsey number of the double star has been determined by Mubayi \cite{M}. The result of Theorem~\ref{bip} is tight if each color class is biregular.

\begin{thm}[Mubayi~\cite{M}]\label{bip}
In every $r$-edge-coloring of the complete bipartite graph $K_{k,\ell}$ there is a monochromatic double star of order $\frac{k+\ell}{r}$.
\end{thm}

The weaker Conjecture~\ref{P} was later shown to be true by Ruszink\'o~\cite{R} with $d=5$.

\begin{thm}[Ruszink\'o~\cite{R}] \label{fo1}
In every $r$-edge-coloring of $K_n$ there is a monochromatic subgraph of diameter at most $5$ on at least $n/(r-1)$ vertices.
\end{thm}

This was further improved and shown to be true for $d=4$ by Letzter~\cite{L}.

\begin{thm}[Letzter~\cite{L}] \label{fo2}
In every $r$-edge-coloring of $K_n$ there is a monochromatic triple star on at least $n/(r-1)$ vertices.
\end{thm}

For the case of $d=r=2$, the following tight bound was proved by Erd\H{o}s and Fowler~\cite{EF}.

\begin{thm}[Erd\H{o}s, Fowler \cite{EF}] \label{EF1}
Every $2$-edge-coloring of $K_n$ contains a monochromatic connected subgraph of diameter at most $2$ on at least $3n/4$ vertices.
\end{thm}

Moreover, for $r=3,4,5,6$, Ruszink\'o, Song and Szab\'o~\cite{RSS} constructed colorings where the maximum size of a monochromatic, diameter 2 subgraph is strictly less than $n/(r-1)$, suggesting that $d=3$ is best possible for the diameter in Conjecture~\ref{P}.\\

In this note we further improve (in terms of diameter) Theorem \ref{fo2} for three colors. Let $G_\alpha$, $G_\beta$, and $G_\gamma$ be the subgraphs of $K_n$ induced by the edges that have color $\alpha$, $\beta$, and $\gamma$, respectively.

\begin{thm}\label{main}
In every $3$-edge-coloring of $K_n$ either there is a monochromatic connected subgraph of diameter at most $3$ on at least $n/2$ vertices or each of $G_\alpha$, $G_\beta$, and $G_\gamma$ is spanning and has diameter at most $4$.
\end{thm}

\begin{proof}
By Theorem~\ref{bip}, we may assume that each of $G_\alpha$, $G_\beta$, and $G_\gamma$ is both spanning and connected, because if one is not, then the union of the other two color classes is a complete bipartite graph on $n$ vertices. Suppose, towards a contradiction and without loss of generality, that the distance between $w_1$ and $w_2$ is at least $5$ in $G_\alpha$ and $w_1w_2\in E(G_\beta)$. The set $U$ of vertices of the double star centered at $w_1$ and $w_2$ in $G_\beta$ must contain fewer than $n/2$ vertices, otherwise the theorem is proven. Note that there are no $\beta$-colored edges from $\{w_1,w_2\}$ to $V\setminus U$ by definition. Split the remaining vertices of $V\setminus U$ into $3$ parts:
\begin{align*}
X &= \{v\in V\setminus U:~vw_1\in E(G_\gamma),~vw_2\in E(G_\alpha)\} , \\
Y &= \{v\in V\setminus U:~vw_1\in E(G_\gamma),~vw_2\in E(G_\gamma)\} , \\
Z &= \{v\in V\setminus U:~vw_1\in E(G_\alpha),~vw_2\in E(G_\gamma)\} .
\end{align*}
Note that there are no vertices $v$ such that $vw_1\in E(G_\alpha)$ and $vw_2\in E(G_\alpha)$, or else the distance between $w_1$ and $w_2$ in $G_\alpha$ would be $2<5$. Clearly, neither $X$ nor $Z$ is empty, or else there is a star in $G_\gamma$ (centered at either $w_1$ or $w_2$) of order greater than $n/2$. Furthermore, no edge between $X$ and $Z$ is colored $\alpha$, or else we have a path of length $3$ in $G_\alpha$ between $w_1$ and $w_2$. In addition, there is a length $2$ path in color $\gamma$ between each pair of vertices in $X$ (through $w_1$), between each vertex in $X$ and each vertex in $Y$ (through $w_1$), between each pair of vertices in $Y$ (through either $w_1$ or $w_2$), between each vertex in $Y$ and each vertex in $Z$ (through $w_2$), and between each pair of vertices in $Z$ (through $w_2$).
Since $X\cup Y\cup Z\cup\{w_1,w_2\}$ contains more than $n/2$ vertices, there must exist vertices $v_X\in X$ and $v_Z\in Z$ whose distance in color $\gamma$ within the vertex set $X\cup Y\cup Z\cup\{w_1,w_2\}$ is at least $4$; otherwise we have found a vertex set of diameter at most $3$ in color $\gamma$ of size larger than $n/2$. For this to be the case, neither $v_X$ nor $v_Z$ can have an edge of color $\gamma$ to any vertex in $Y$, otherwise there would be a path of length $3$ connecting $v_X$ and $v_Z$ in $G_\gamma$. Furthermore, since there is no edge of color $\alpha$ between $X$ and $Z$, all edges between $v_X$ and $Z$, and all edges between $v_Z$ and $X$, must have color $\beta$; otherwise $v_X$ and $v_Z$ would again have distance at most $3$ in $G_\gamma$.

Now we have a double star in color $\beta$, anchored at $v_X$ and $v_Z$, containing all of $X\cup Z$. If $Y$ is empty, the theorem is proved. Therefore, there must be some $v_Y\in Y$ such that neither $v_Yv_X$ nor $v_Yv_Z$ has color $\beta$; otherwise there is a double star in color $\beta$, anchored at $v_X$ and $v_Z$, containing $X\cup Y\cup Z$, which is a double star on at least $n/2$ vertices. So the edges $v_Yv_X$ and $v_Yv_Z$ have neither color $\beta$ nor color $\gamma$, and hence both have color $\alpha$. This produces a path in $G_\alpha$, namely $w_1v_Zv_Yv_Xw_2$, of length $4$, contradicting the assumption that $w_1$ and $w_2$ are at distance at least $5$ in $G_\alpha$.
\end{proof}

\medskip
\noindent {\bf Conclusion}. Though Theorem~\ref{main} does not prove Conjecture~\ref{P} for $d=3$ in the case of three colors, it lends support to this very natural and surprisingly difficult question.

\medskip
\noindent {\bf Acknowledgements}. This research was part of a class in the Budapest Semesters in Mathematics program in the Fall of 2019.
The authors also wish to acknowledge the R\'enyi Institute of Mathematics for the use of its facilities. The authors would also like to thank G\'abor S\'ark\"ozy and Andr\'as Gy\'arf\'as for fruitful comments and discussions.
Q: 25-point FFT equations?

I was going to study papers like "One Million-Point FFT" by Hans Kanders and Tobias Mellqvist (https://liu.diva-portal.org/smash/get/diva2:1184623/FULLTEXT01.pdf) to try to work out the equations for a 25-point FFT algorithm. But before doing that, I figured I'd ask here whether anyone already has the equations for a 25-point FFT, or better yet for a general N-point FFT, that I could use to build an algorithm. It seems like I just have to plug in 25 for N in the paper I linked (section 2.1.3), but I'm not sure.
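A: Since 25 = 5 × 5, the standard route is a mixed-radix (Cooley–Tukey) decomposition: five 5-point DFTs on the stride-5 subsequences, a twiddle-factor multiply, then five more 5-point DFTs across the results. As a sketch (not taken from the linked paper; the function names and the unoptimized O(N²) inner `dft` are my own, chosen for clarity rather than speed):

```python
import cmath

def dft(x):
    """Direct O(N^2) DFT from the definition: X[k] = sum_n x[n] * exp(-2j*pi*n*k/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def fft25(x):
    """25-point FFT via a 5x5 mixed-radix Cooley-Tukey decomposition.

    With n = 5*n1 + n2 and k = k1 + 5*k2 (all indices 0..4) and WN = exp(-2j*pi/N):
      X[k1 + 5*k2] = sum_{n2} W25^(n2*k1) * W5^(n2*k2)
                      * ( sum_{n1} x[5*n1 + n2] * W5^(n1*k1) )
    """
    assert len(x) == 25
    w25 = lambda e: cmath.exp(-2j * cmath.pi * e / 25)
    # Stage 1: five 5-point DFTs over the stride-5 subsequences (inner sum over n1).
    a = [dft(x[n2::5]) for n2 in range(5)]                       # a[n2][k1]
    # Stage 2: twiddle factors W25^(n2*k1).
    b = [[a[n2][k1] * w25(n2 * k1) for k1 in range(5)] for n2 in range(5)]
    # Stage 3: five 5-point DFTs across the subsequences (outer sum over n2).
    X = [0j] * 25
    for k1 in range(5):
        col = dft([b[n2][k1] for n2 in range(5)])                # indexed by k2
        for k2 in range(5):
            X[k1 + 5 * k2] = col[k2]
    return X
```

Replacing each `dft` call with a hand-optimized 5-point butterfly gives the usual reduced operation count; you can sanity-check the sketch against `numpy.fft.fft` on random input.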
## Chapter One

"WHEN ARE WE going to be there, Nan?" Nancy Drew looked in the rearview mirror at her friend Bess Marvin, who was sitting in the back seat. Bess's blond hair was blowing back from her face in the warm summer breeze. "We've still got another hour or so," Nancy said. She tucked a flyaway strand of reddish blond hair behind her ear. The two front windows were down all the way, and with the wind gusting, Nancy was glad she'd thought to pull her hair back in a French braid. Her friend, George Fayne, in the seat next to her, wore her hair short and had no trouble. "Right," Bess said with a sigh. "It's just that—well, you guys know what I'm like around one chocolate dessert. The thought that I'm about to eat thousands of them is driving me crazy!" "Not thousands, Bess," Nancy replied. "The Oakwood Inn brochure just said there'd be—" "You don't have to tell me," interrupted Bess. "I've memorized the whole thing. 'Dozens of delectable chocolate creations for breakfast, lunch, and dinner, prepared by one of the state's most renowned pastry chefs,' " she parroted. "And cooking classes. And free samples." Bess's blue eyes sparkled at the thought. "It's a dream vacation, you guys. I just wish it would hurry up and start." Almost a year had passed since Nancy had received the Oakwood Inn brochure advertising the Chocolate Festival. It sounded like so much fun that she had made reservations for the three of them right away. They were all pretty different, Nancy reflected. Bess was curvy and blond and hated anything athletic and loved everything having to do with cute guys. George, with her short dark curls and lithe, athletic figure, liked guys, of course, but sports were high on her list of priorities, too. Nancy guessed that she fell somewhere in between, with her main loves being Ned Nickerson and a good mystery. The one thing that they all loved was chocolate, though. "I'm amazed we were able to take this trip at all, Nancy," George observed.
"I was sure you'd get called away on a case." "I was sure she'd get called away by Ned," Bess added with a giggle. Nancy was a detective, and the past couple of months had been unusually busy ones for her. Not only had she been practically blown up during her last case, which she called Poison Pen, but Ned had also been home from college on vacation. "It's only July, so I'll still be able to spend lots of time with Ned before he goes back to Emerson for the fall semester. Anyway, I'm sure the Chocolate Festival will take my mind off missing him for a few days." "Chocolate has a way of doing that," Bess agreed. "But I hope there are at least one or two cute guys at the inn to take my mind off the desserts. I mean, I don't want to turn into a total blimp while I'm there." • • • "Okay, Bess, you can unpack your fork now," Nancy teased, turning the car into a long, sweeping driveway. "We're here!" "This is an inn?" Bess asked in surprise. "I was expecting a little cottage. This place is huge!" Bess was right, Nancy thought. The Oakwood Inn was a rambling four-story stone estate whose central building was flanked by two wings. Carefully tended flower beds lined the building, and the banner hanging over the door had the image of a slice of chocolate layer cake on it. Nancy steered the Mustang into the parking lot, which was almost empty, and the three girls climbed out and began lugging their suitcases up the front walk toward the inn's wide front door. Just as Nancy was reaching for the door handle, she heard a voice behind her. "Hey, wait! Let me give you a hand!" Turning, Nancy saw a young man racing toward them. He looked about twenty years old. His sandy hair was falling into his hazel eyes, and he was dressed casually in a blue workshirt, jeans, and heavy workboots. "I wonder where my stepsister is? She's supposed to be meeting the guests!" the guy said in a slightly annoyed tone. "I just came over from the other wing, where I was working, and happened to see you." 
"Well, it's great of you to help us," Bess said quickly. She gave him a dazzling smile and tossed her long blond hair back over her shoulder. "We appreciate it." Behind Bess, Nancy exchanged an amused look with George. Three minutes was the average time it took Bess to get a crush on someone. She was ahead of schedule that day. The young man smiled back at Bess as he reached for her suitcase. "My name's Jake Tagley," he said. "I take it you three have come for the Chocolate Festival." He used his foot to push the front door open. "You're a little early, but—" Jake didn't get to finish his sentence. Just as he was ushering Nancy, Bess, and George through the door, a beautiful girl rushed up to them and spoke. She was petite, with a waist-length mane of wavy black hair and huge dark eyes. Nancy guessed that she wasn't more than twenty-two or twenty-three, but she was dressed in a conservative navy skirt and blazer that made her look older. "Jake! What are you doing here?" she asked angrily, trying without success to shuffle an armful of papers into order as she spoke to him. "I thought you were supposed to be putting down that new subfloor in the east wing!" Jake set the suitcases down. "And I thought you were supposed to be meeting our guests," he snapped back. "If I hadn't noticed them, they'd probably still be trying to get through the door." "Oh, no!" The girl's expression changed instantly from one of anger to dismay. Turning to Nancy and her friends, she said, "Oh, I'm so sorry! I was in my office! I didn't think anyone would get here until eleven!" Nancy, Bess, and George introduced themselves, and the dark-haired girl said, "I'm Samantha Patton. I run the inn. I'm Jake's stepsister, and I'm also the director of the Chocolate Festival." She sighed ruefully. "Sorry I bit your head off like that, Jake. I guess I'm a little flustered." "No problem," said Jake. "I know things are tense—and speaking of tension, where's Brock?" 
Was it just Nancy's imagination, or did Samantha blush at her stepbrother's words? "He's—he's in my office, actually," Samantha said. She looked back over her shoulder at an open door down a hallway off the lobby. "We've been going over his schedule." "His schedule," Jake echoed in a dubious tone. "I see." Facing her stepbrother, Samantha said defiantly, "Oh, stop. It's not like that, and you know it." "Let's just hope Tim knows it," was Jake's curt answer. "Now, as long as I'm here, why don't I show these young ladies to their rooms? Then I'll go back to work. Let me just get your keys. Marvin, Drew, and Fayne, right?" "Right," Nancy confirmed. "But really, we can find our rooms ourselves if you're busy—" "Oh, it's no trouble at all!" Jake called, smiling at Bess again. "I'm happy to help." He crossed the carpeted lobby to the front desk, a gleaming oak counter that curved in a semicircle against the far wall. Nancy noted that hallways stretched back on either side of the desk, and open double doors led to a living room that ran along one-half of the front of the house. Damask armchairs and a sofa were set up by a fireplace on the wall opposite the living room. A wide hall opened up next to them, going back into what looked like a dining room. The place was homey and comfortable and just a little shabby. The sofa and chairs around the fireplace were frayed, and the lobby walls could have used a fresh coat of paint. As though she had read Nancy's mind, Samantha said quickly, "We're doing a lot of renovations. We've got a long way to go, but the place is really going to be spectacular when it's finished. Oh—here's my mother!" A tall, stately woman with steel gray hair swept back in a chignon was walking down the hall toward them. "Samantha, have you checked that purchase order for— Oh, excuse me." Samantha's mother broke off. "I didn't realize you had company." "Yes, our first guests have arrived," said Samantha, a trifle nervously, Nancy thought. 
"Girls, this is my mother, Mrs. Tagley." "It's a pleasure to meet you," said Mrs. Tagley. Her voice, like her smile, was formal and a bit frosty. "My husband is around somewhere. Pete?" she called down the hall she had just entered from. After a second a shy-looking man about fifty-five emerged from a room down the hall and wandered out to the lobby. From his resemblance to Jake, it was clear that he was Jake's father. Like his son, he was wearing work clothes and heavy boots. He wiped his palm on his shirt before extending his hand to the girls. "Sawdust," he explained with an apologetic smile. But Mrs. Tagley wasn't smiling. "Shouldn't you change into a coat and tie, dear?" she asked. Her tone of voice made it sound like an order, not a question. Samantha said quickly, "Here's Jake with your keys, girls." Obviously she was trying to distract them from the exchange between her mother and stepfather. "I'll see you soon and show you around!" Then she dashed off down the hall. Jake slung the girls' suitcases onto a cart, which he pushed down a hall lined with faded old portraits. "You're on the third floor," he said as Nancy, Bess, and George followed along. As he pushed the elevator button for them, Bess asked, "Could that Brock you were talking about back there possibly be Brock Sawyer?" "Oh, you heard?" Jake said. "Yes, Brock Sawyer is definitely here." "I thought that was who you might mean, but I couldn't believe it!" Bess marveled. "Wow!" Even George was impressed. "He's just about the most famous TV actor in the country!" "And definitely the cutest," Bess chimed in excitedly. "I usually hate cop shows, but 'City Heat' is the best show I've ever seen—and it's all because of Brock! Oh, I can't believe he's here!" "How did Brock hear about the Chocolate Festival?" Nancy asked as the wooden elevator door creaked open. The Oakwood Inn didn't seem like the place a major TV star would spend his time. "He's a—well, you might say, a friend of Samantha's," Jake said. 
He pressed the button for the third floor, and the elevator began its creaky ascent. Bess's face fell. "How did she meet him?" "It was a couple of years ago, before Brock made it big. He was in summer stock at a theater near Oakwood, and he and Samantha met at a cast party here at the inn. They hit it off right away." "So I guess he's off-limits, right?" Bess asked. "Not necessarily," Jake replied after a brief pause. "He and Samantha broke up at the end of that summer. She's got a new boyfriend now. His name's Tim Krueger, who's working for us as an accountant. But just between you and me," Jake went on, glancing meaningfully at the girls, "I don't think the flame between Sam and Brock is completely out." "Well, don't give up hope," Bess said cheerfully, but it was clear to Nancy that what Bess was hoping for wasn't for Brock and Samantha to get back together. When the elevator door slid open on the third floor, Jake showed the girls to their suite and deposited their bags inside the door. "The schedule of events for the festival is on the coffee table. See you later—I hope," he added, and closed the door quietly behind him. The girls' rooms were comfortable but a bit threadbare. The three bedrooms adjoining the living room were too small to hold more than a bed and a dresser each. The living room carpet was worn in patches, and some of the tiles on the floor of the tiny bathroom were missing. But the gilt-wrapped box of handmade chocolates on the coffee table in the living room was certainly elegant, and the truffles were the best Nancy had ever tasted. "This is going to be fun," George said as she stretched out in one of the living room's easy chairs and surveyed the box of chocolates with interest. "Of course it's going to be fun," said Bess, her mouth already full of chocolate. "Hey, do you think I look good enough to meet Brock Sawyer?" "Hang on a minute, Bess," George said cautiously. 
"If you're planning to act like some kind of crazed fan all weekend, I don't want to be seen in public with you." "Of course I'm not!" Bess told her indignantly. "That would be totally uncool. Famous people hate it if you gush all over them. I just want to look nice in case we happen to bump into Brock, that's all. Nothing obvious." Nancy picked up her suitcase and carried it into her bedroom. "Come on, then," she called back to Bess. "Let's get out of here before you eat that whole box of candy and can't get into any of your clothes." The girls unpacked quickly, then headed back downstairs. The lobby was almost crowded now. Festival guests had obviously started to arrive, and they were milling around, waiting for information or the keys to their rooms. "I bet Samantha's too busy to show us around now," said Nancy. "Maybe we could—" "Nancy! George! Look over there!" Bess's blue eyes were wide, and she was pointing a shaky finger down the wide hall. Samantha Patton was standing in a doorway. Next to her was a man all three girls recognized instantly. Brock Sawyer was even more handsome in person than on TV, Nancy decided. Tall and slim, he had craggy features, wavy brown hair, and amazingly blue eyes—eyes that were fixed on Samantha. "It's him! It's him!" Bess's whisper was more like a scream, and several people in the lobby turned to look. "I've got to go get his autograph! I've got to meet him! I've got to— Wait here!" Before the girls could stop her, Bess was dashing toward Samantha and Brock Sawyer, jostling other guests as she raced by. "Nothing obvious, eh?" George murmured, rolling her eyes. "Brother!" Just as Bess rushed over to Samantha and Brock, two things happened. The first was that Brock leaned down and slipped his arm around Samantha's shoulders. Nancy's head swiveled automatically toward Bess. That was when she saw the second thing. 
From out of nowhere a young man with dark brown hair and icy green eyes raced up behind Bess, shoved her out of the way, and aimed a vicious punch at Brock's jaw.

## Chapter Two

"YOU KEEP AWAY from Samantha!" the young man yelled as he threw the punch. Brock Sawyer ducked just in time, and the young man's fist crashed into the doorjamb. Nancy noticed that Bess had halted a few feet from them, a startled expression on her face. "Tim, what's your problem?" Samantha shouted angrily. "Stop it!" So that angry blond guy was Tim Krueger, Samantha's boyfriend, Nancy realized. He was obviously very upset about Brock. Ignoring Samantha, he reeled backward and began swinging at Brock again. "No, Tim!" Samantha cried. "Please, somebody stop him!" "I'll stop him," Brock growled between clenched teeth—and he slugged Tim right in the stomach. Nancy grimaced as Tim doubled over. Behind her a woman let out a frightened gasp. What can I do to stop this? Nancy thought. She scanned the crowd, and for a second her gaze landed on a short, heavyset man standing beside the fireplace. He had a camera up to his face and was busily snapping picture after picture of the fight. Ugh! Nancy thought. What a nasty way to behave! A grunt from Tim brought her thoughts back to the fight. Tim was lurching unsteadily forward, ready to throw another punch. Nancy started forward to grab him. Then, from somewhere in the crowd, Jake Tagley stepped around Nancy. "That's enough, guys," he said, stepping firmly between Tim and Brock. With a swift movement of his arms he pushed the two men apart. Gasping for breath, Brock and Tim glared murderously at each other. A hush had fallen on the guests in the lobby. Then the silence was broken. "Are—are you all right, Mr. Sawyer?" came Bess's hesitant voice. "I'm fine," Brock Sawyer answered, scowling. "It would take more than this to—" "Then may I have your autograph?" Bess interrupted. Everyone in the lobby burst into laughter.
A reluctant grin spread across Brock Sawyer's face, too. "Let me just ask the boss," he answered with a nod toward Samantha. "You're the one who's been keeping track of my schedule, Sam. Do I have time?" "Uh, sure, Brock. This would be a good time for me to check in with the kitchen. I want to see how everything's coming along." Nancy noticed that Samantha was shaking slightly. "I can check for you," offered Tim. From his expression Nancy guessed he wanted to smooth things over. But Samantha wasn't about to let him off easy. Giving him a frosty glare, she snapped, "No, thanks. You've done enough already." Then suddenly she seemed to remember the guests and turned to face them. "Sorry about the disturbance, folks!" she called, her voice full of forced cheer. "Why don't you come over and meet our celebrity guest? And don't forget to be at the Round Room at twelve-thirty for the first chocolate event of the day. Lunch will be at one-thirty in the dining room." Samantha's words broke the last of the tension in the room. More than a dozen people crowded around Brock, all talking at once. Brock, too, had become the professional once again. He was smiling and chatting easily as he signed the scraps of paper people were holding out to him. In the commotion the people in the crowd seemed to have forgotten the fight. All but Tim, who was leaning against the wall, scrutinizing Brock with flashing green eyes. Next to Tim, Jake was bending down to pick up his toolbox. "Want to give me a hand in the east wing?" Nancy heard him ask Tim quietly. Tim opened his mouth to say something, then seemed to think better of it. Raking a hand through his hair, he shrugged and then followed Jake out the door. "Looks like we've got all the makings of a good soap opera here," George said into Nancy's ear. "Apparently, some of the other guests think so, too," Nancy whispered back. She flicked a thumb toward the fireplace. "See that guy with the camera over there? 
He was taking pictures during the whole fight." George followed Nancy's gaze. "Press, probably," she suggested. "Maybe. He's not wearing a press badge, though. I'm going to head over there to see what he's up to." "Always the detective," said George, laughing. "I'll wait here for Bess." Nancy was frowning when she came back ten minutes later. "So, what's his story?" George asked. "The guy's name is Dan Avery. Apparently, he's just a nut for chocolate, like the rest of us," Nancy explained. "But—I don't know. All that camera equipment he's got looks a lot more expensive than most people would carry around, and—" "Hey," said Bess, rushing up to them and waving a cocktail napkin. "Look at my autograph. Let's hang around until the crowd thins out a little. Maybe we'll get a chance to really talk to Brock." "Come on, Bess," George said with a groan. "He'll be here all weekend," Nancy added. "All we've seen so far is the lobby, and I'd like to check out the inn a little." Reluctantly Bess followed her friends. "Some people don't recognize real scenery when they see it," she grumbled under her breath. • • • "Hey, George! I found an antique!" Bess called from a corner of the torn-up room that the girls were exploring. She held up a creased wall map. "What do you think it's worth?" The girls had made their way up a flight of stairs into the east wing. This was the part of the inn being worked on, and the girls had gotten dirt and sawdust all over their clothes. "Ten cents, probably," George told her cousin with a grin. "But look at this!" She showed Nancy and Bess a tiny porcelain figurine she had found on the dusty mantelpiece. "If someone cleaned this up, it would be really pretty." Nancy glanced around the room. It had a forlorn, abandoned quality, as did the other east wing rooms they'd been in. They had poked through bedrooms with four-poster beds wearing canopies of cobwebs, and bathrooms with shelves lined with long-forgotten brands of shampoo and soap. 
Except for the areas under construction, the east wing looked as though it hadn't been visited in about fifty years. "There's lots of stuff here that would look nice if someone cleaned it up," Nancy commented. She brushed her hands together to get rid of some dust. "They must have left the whole east wing pretty much the way it was when they closed it off." Bess shivered nervously. "I feel as though we're surrounded by ghosts, don't you?" "Nope," said George cheerfully. "Let's go check out some more rooms." Suddenly Bess froze. "Wait!" she whispered. "What's that bumping sound?" Nancy stuck her head out into the hallway. "It's just Jake," she said, catching sight of his sandy hair and jeans. "Hi, Jake!" He was walking down the hallway toward them, lugging a power saw. "Find any skeletons yet?" he asked. "Not yet," Nancy told him, "but this sure looks like the kind of place where we could." "You're right," Jake agreed. "In a few months, though, you won't recognize this place. If you can believe it, we're actually about to finish one of the rooms in this wing—the new conference room. It should be done today." "You must have been working hard getting ready for this weekend," Bess observed. "We've been going pretty much nonstop for the past few months," Jake replied, nodding. "Tim helps when he can, but he's pretty busy with his own job." "Your dad's been helping, too, right?" asked George, wiping her hands on her shorts. "Yes. But mostly he works in the basement," Jake told her. "He's building bookcases in his workshop. My dad's really a cabinetmaker, not a carpenter. "In fact, that's how he met my stepmother," Jake went on. "He was hired to do some restoration work a few years ago. Samantha's mother was running the inn then, too. She and my father hit it off, and they got married about six months later." "That's so romantic!" Bess exclaimed. "Love at first sight!" "Well, maybe," said Jake slowly. "I'm not sure it was the greatest match, but—" Suddenly he broke off. 
"It's none of my business as long as my dad's happy, I guess." Bess seemed not to notice his doubtful tone. "This inn would be a great setting for a romance," she said. "Though not the east wing, of course." "I guess not," Jake agreed ruefully. "So Mrs. Tagley was running the inn alone before?" Nancy asked, half to herself. "Right," Jake said with a nod. "She took it over after her first husband died. I don't know much about him, but there are people on the staff who were here even before my stepmother came. I hear her first husband was a nice enough guy, but she was really the one in charge. Kind of like now." "But I thought Samantha ran the inn now," George put in. "Well, she's certainly on her way," Jake said proudly. "Sam graduated from hotel school last year at the top of her class. The Chocolate Festival was her idea. Her mother has always been a great pastry chef and candy maker, so Samantha decided to use those talents to promote the inn. We've been trying to come up with ways to bring in more people, and—" He stopped again, and a deep blush crept over his face. "Guess I've been working by myself too long," he said awkwardly. "I'm really rambling on. Sorry." "Hey, we don't mind," Nancy said quickly. "This place looks like it has so much history. It's nice to learn some of it. Do you work here full time?" "Oh, no. I'm in hotel school, too. Once I saw the possibilities for this place, I got bitten by the same bug as Samantha and my stepmother. I'm still in my first year, though. What about you three?" he went on. "No fair for me to answer all the questions. Are you students or chefs or what? What brings you to our little chocolate paradise?" "Love of chocolate, plain and simple," said Nancy with a smile. "That's right," Bess echoed. "Nancy's a detective. But the only mystery she's going to be solving this weekend is how I'm going to fit into my clothes after all the chocolate I plan to eat." 
"Speaking of chocolate," George put in, checking her watch, "weren't we supposed to be somewhere at twelve-thirty? It's twelve twenty-five now." "That's right!" said Bess. "The Round Room, our schedule said. Jake, could you tell us how to get there?" A wide smile spread over Jake's face. "I'll do better than that. I'll take you there myself," he told the girls. "Just give me a second to brush off all the sawdust." • • • "So this is the Round Room," commented a woman walking through the door ahead of Nancy, Bess, and George. They all stopped to take the pieces of paper and pencils being handed to them. "Well, it fits its name." She was right. The Round Room was certainly round—a white, windowless room that made Nancy feel as though she were standing inside a huge drum. It was filled with expectant and hungry people. "Mmm." Bess gave a rapturous sniff and grabbed George's arm as she made it into the room. "What's that incredible smell?" "That," said Samantha, who happened to be standing nearby, "is melted chocolate. Pure, rich Creamfield's milk chocolate. Three hundred pounds of it. See that vat over there?" She pointed to an immense copper kettle standing on a platform at one end of the room, a dark green silk curtain behind it, setting the copper off perfectly. "It's full of melted chocolate." "Great, but what's it for?" asked George. Samantha laughed, and Nancy was glad to see she'd shaken off her bad mood after the fight between Tim and Brock. "It's designed to tempt you into buying Creamfield's milk chocolate, of course. They're bringing out a new line of super deluxe candy bars that weigh a pound apiece. Some Creamfield's executive got the idea that the best way to promote the new chocolate was to let people smell it. And since melted chocolate smells even better than unmelted chocolate, they send that dipping vat filled with chocolate around to festivals like this one." Samantha gave them a tempting smile. "Hurry on up front so you'll get a good view. 
You might get a chance to win a couple hundred Creamfield's chocolate bars, if you're lucky." "How?" Bess asked. Samantha just raised her eyebrows mysteriously. "You'll find out." The girls followed Samantha as she threaded her way through the crowd toward the platform that held the vat of melted chocolate. They found a place at the front and watched Samantha jump up onto the platform, pick up a microphone, and move over to the vat of chocolate. "Attention, please, chocolate lovers!" Samantha said brightly. "Welcome to Oakwood Inn's first annual Chocolate Festival!" Loud applause rang out from around the room. "We'd like to kick off the festival with a little contest," Samantha announced. "To help us, please welcome the festival's celebrity taster, Mr. Brock Sawyer!" Beaming, Brock strode up onto the stage and put his arm around Samantha. Nancy couldn't help but notice Tim, who was slouching against the curved wall. He looked pretty miserable. His fists were slightly clenched, and the expression on his face was drawn and tight. "And the prize," Samantha went on, "is the next best thing to Brock himself—Brock's weight in Creamfield chocolate bars!" With a flourish she pulled open the green curtain behind them to reveal what looked like a mountain of chunky chocolate bars. They were piled high next to a huge, old-fashioned scale that hung suspended from the ceiling. "We're going to ask Brock to climb onto one side of this scale," explained Samantha. "And then we're going to ask you to guess how many delicious Creamfield's chocolate bars it will take to equal Brock's weight. Whoever guesses correctly wins all the chocolate. Please fill out your papers and put them in this box on the stage." When the last guess was tucked inside, she said, "Okay, here goes!" There was a buzz of excitement from the audience as Brock, with a wave of mock farewell at the audience, climbed carefully onto one side of the huge scale and perched gingerly in the pan. 
"It's kind of an unsteady perch," Samantha said, "so I'm going to use these straps to keep Brock from sliding off." "Oh, I wish I'd brought my camera," moaned Bess softly. Lots of other guests had brought theirs. They were crowding forward now to snap the comical picture Brock made wobbling around on the huge scale. Nancy noticed that Dan Avery had made his way to the front with his camera. He was kneeling on the ground just in front of Nancy, snapping away. "Now let's hear what some of your guesses were!" called Samantha. She pointed to a woman across the room from Nancy. "Yes, ma'am?" "A hundred and sixty-two!" "Okay," said Samantha. "Let's try it." She picked up handfuls of the large candy bars and piled them on the scale. "Hey, wait a second. The scale's tipping!" Brock exclaimed. Nancy's attention snapped to the actor. "It's doing more than that!" she added. "It's shaking!" Brock grabbed at the edge of the pan to steady himself, but the shaking only grew worse. "I've got to get off this thing!" he shouted, pulling at the straps tying him to the scale. But it was too late. With a tremendous crash the scale tipped forward. Brock soared through the air—and landed headfirst in the vat of steaming melted chocolate! ## Chapter ## Three OH, NO! He'll be boiled to death!" shrieked Bess. "That's impossible," said a thin, bespectacled man near her. "Fine chocolate is never heated to the boiling point. The cocoa butter would—" Nancy didn't bother listening to the rest. She leapt onto the stage just as Brock's head appeared above the edge of the tempering pot. He was coughing, sputtering, and trying unsuccessfully to wipe chocolate from his face with a chocolate-drenched hand. Samantha reached the pot at the same time as Nancy, and both girls held down their hands to Brock. "Are you all right?" Samantha cried. "I—I think I am," sputtered Brock. By this time both Tim and Jake had also rushed forward to help. The four of them tugged on Brock's hands and arms. 
As they were yanking him over the edge of the vat, it unexpectedly tipped. Nancy jumped back, but there was no avoiding the wave of hot melted chocolate that cascaded onto her feet, covering the stage and dripping down to the floor. Brock, Samantha, Tim, and Jake—all of them as chocolate-covered as Nancy—were unable to move. The guests who were closest to the stage yelped and stepped hastily back. Except for one—Dan Avery. He pushed his way through the crowd so eagerly that for a second Nancy almost thought he wanted to lap up some of the melted chocolate. Once he was near the stage, though, he held up his camera and began taking pictures. "What is that guy's problem?" Nancy heard George say to Bess. And next to her, a disgusted-looking Brock was trying to wipe chocolate from his face. "I've got to get to a shower," he muttered. "Right away," Samantha agreed. She turned to Tim and Jake. "Could you guys do me a big favor and start cleaning up this mess while I help Brock to his room? I'll help later." "Sure thing," said Jake. "I really appreciate it," said Samantha. Then she turned toward the guests. "Sorry, folks," she said with a strained smile. "We didn't mean to go quite so far to get your attention. I guess we'll have to postpone this contest—but don't despair. In an hour we'll be serving the first of our spectacular Chocolate Festival meals—complete with a surprise dessert." "Haven't we had enough surprises for one day?" a woman remarked tartly. "First we get ringside seats at a boxing match. Then we get dipped in chocolate! This isn't the most festive festival I've ever been to." Samantha's mouth was set in a straight line, Nancy noticed. "Well, I'm sure things will go smoothly from now on," Samantha assured the crowd. "Now I'd better help poor Mr. Sawyer. Don't worry about getting chocolate on the floor, Brock. We can clean it up later." 
As the guests began to trickle out of the room after Samantha, Nancy said, "I'm going to have to clean up, but first I'd like to get a closer look at that vat of chocolate." "Fine," said George, who had remained untouched by the chocolate. "Maybe Bess and I will check out the grounds. We'll see you at one-thirty." Stepping around Tim and Jake, who were scraping sticky chocolate off the platform, Nancy went over to the vat and scale. She was grateful that they didn't seem to notice her. They both seemed preoccupied with something Tim was muttering angrily about. "Why did Samantha say she'd help us clean up the chocolate?" Tim was saying. "Because you know with one thing and another, she won't be able to help us. She'll have to check something in her office or make a phone call. Or talk to Brock," he finished in disgust. "I know what you mean," Jake said sympathetically. Nancy was listening with only half an ear. Her attention was mainly concentrated on the scale that had tipped Brock into the chocolate. "What's so interesting about that scale?" Suddenly Nancy realized that Jake's question was directed at her. "Oh, nothing," she replied casually. "I've just never seen one of these up close." That was true, and up close Nancy could see that there was something wrong with it. The two pans on either side of the scale were held up by chains that, in turn, were attached to one central chain. There seemed to be some kind of crack in one link of the chain leading to the pan Brock had been sitting on. Bending in to examine it even more carefully, she saw that the link had been filed almost all the way through! Someone had meant that chain to loosen and stretch, which would dump the contents—Brock—into the vat. But who? And why? There was no way to answer those questions before lunch, Nancy realized. She might as well get cleaned up. Saying nothing about her discovery, Nancy murmured a quick "See you guys later," then walked out of the Round Room. 
The inn was so big and rambling, she decided to try a new route back to the room. The hallway she chose was dimly lit and empty—except for Dan Avery, who was talking on a pay phone in a little alcove. He was speaking so venomously that he didn't even notice Nancy as she walked past. "Absolutely. I'm in total control. Believe me, I'll take care of him for you. I'll get that actor if it's the last thing I do."

• • •

"Chocolate rice? I can't believe it!" Nancy exclaimed at lunch. The Chocolate Festival's first lunch had just begun—and chocolate had made its way into every course. The rice served with the shrimp main course had unsweetened cocoa in it, though not enough to make it taste strange, Nancy was relieved to note. The butter served with the chocolate whole-wheat rolls was chocolate flavored. There was even chocolate salad dressing on the fruit salad. "I can't imagine what dessert will be," Nancy said, scooping some rice onto her fork. Brock Sawyer smiled down at her, his blue eyes sparkling. "I don't know, but you'd better save some room for it. It's bound to be delicious." Before the meal Samantha had spotted Nancy, Bess, and George hesitating at one end of the dining room, trying to decide where to sit. She had asked them to join her family, Tim, and—to Bess's delight—Brock. Now Nancy was feeling a little uncomfortable, though. Brock had spent most of the meal talking to her. She was seated between him and Samantha. Nancy kept trying to steer the conversation toward Bess, who was seated on Brock's other side, but it wasn't working. Bess kept trying to steer the conversation in her direction, too. "Want some of this chocolate butter, Brock?" she asked eagerly. "It's great!" "No, thanks, Bess. I'm watching my weight. With all the chocolate I have to taste here, I need to be careful the rest of the time. I had a long session with my nutritionist before I came, and she told me what I could and couldn't eat. I'm going to stick to her rules if it kills me."
He turned back to Nancy and went on with what he had been saying before. "I'm just glad that Samantha had the good sense to keep the tabloid reporters away from this festival. If there's anything I can't stand, it's those junky supermarket newspapers. They make up the worst lies I've ever read." "Do they write about you a lot?" asked Nancy. Brock grimaced. "Oh, yeah. The more successful 'City Heat' has gotten, the more they've picked on me. Especially the Midnight Examiner. The last time I saw an issue, they were claiming that I was married to my thirteen-year-old cousin. I don't even have any cousins. My fiasco in that chocolate vat would have been right up the Examiner's alley." Suddenly Mrs. Tagley leaned across the table. "And just what caused that disaster, Samantha?" she asked sharply. "I don't know yet, Mom," Samantha replied in a tight voice. "I'll get on it, don't worry. There are a lot of other things at the inn that need my attention besides that." Mrs. Tagley briefly patted her gray hair. "Well, if you're going to be running this place, as you insist on doing, you have to be concerned with everything," she said evenly. "A good innkeeper keeps track of the details and the big picture, you know." A sugary-sweet smile spread across Samantha's face. "All right, Mom," she cooed. "I'll just follow your good example, okay?" Uh-oh, thought Nancy. That sounded like a direct jab at Mrs. Tagley's own innkeeping skills. From what Jake had said earlier, Oakwood had been having trouble attracting customers. Was Samantha implying that that had been her mother's fault? Now Mrs. Tagley seemed as though she was about to explode, but her husband intervened. "Let's leave this for another time, all right?" Mr. Tagley said quietly. He looked stiff and uncomfortable in his suit and tie. "The festival's driving us all crazy enough as it is. No need to bother our company with it, too." "Oh, all right," snapped Mrs. Tagley. 
This family certainly didn't seem to be self-conscious about arguing in front of total strangers! Nancy thought. She decided it was time to try to get people back into a good mood. "This is a fantastic meal," she told Mrs. Tagley. "I can't believe your chef could prepare chocolate in so many interesting ways." "Wait till you taste dessert," Jake volunteered. He sounded relieved at the change of subject. "My stepmother's chocolate desserts are out of this world. They're the thing that's kept this inn going for the past couple of years." Once again he broke off, embarrassed, and nervously brushed his sandy hair back. Nancy guessed he hadn't meant to blurt out yet another reminder that the inn was in trouble. "What is for dessert?" she asked swiftly. "Brock Sawyer—the chocolate version, that is," Mrs. Tagley said mysteriously. "What do you mean?" asked Bess. "You'll have to see for yourself," Samantha put in. Glancing around at the other tables, she asked, "Do you think people are ready for dessert yet?" "Definitely!" Bess and George said in unison. "Well, then, I'll go get it!" Samantha jumped up and walked across the dining room toward a cart by the kitchen door, where Nancy could see there was something covered with a white cloth on the cart. As Samantha wheeled the cart to the front of the room, conversation at the other tables began to die down. "Did everyone have a nice lunch?" Samantha asked, smiling as the guests burst into applause. "You couldn't possibly find room for more chocolate, could you?" "Yes! Yes!" people called out. "Then I guess we're just going to have to give you what you want. As some of you may know, my mother is a real artist with chocolate." Once again the room filled with applause. "And for dessert today, she's made what I think is her finest creation ever. "I'm going to ask our special guest to unveil this spectacular dessert for us," Samantha went on. She glanced toward Nancy's table. "Ready, Brock?" 
Smiling broadly, Brock stood and walked over to her. "Here goes!" he said. With a flourish he picked up a corner of the white cloth and whisked it off the dessert. Then his smile turned into a shudder of disgust. "What is this?" he shouted. Everyone craned their necks to see what he was talking about—and a confused murmur filled the room. On the table was a spectacular white-chocolate cake—a replica of Brock's face. It was stunning, except for one thing. The whole surface of the cake was pulsating with a living blanket of ants!

## Chapter Four

"WHAT IS THIS, Samantha?" Rage and horror were mixed in Brock's voice, and Nancy couldn't blame him. She had seen few sights as bizarre and sickening. Samantha drew a shaky breath and staggered backward a few steps. She looked as if she was about to faint, but her voice was steady as she summoned a waiter. "Please take this back to the kitchen and dispose of it immediately." As the waiter gripped the cart and wheeled it away, Samantha returned to her seat, motioning for Brock to do the same. Once there, she beckoned to another waiter. "Could you ask the chefs to put together another dessert immediately?" "Another—another dessert?" the waiter faltered. "What kind, Miss Patton?" "There's plenty of ice cream in the freezers, isn't there?" Jake suggested, coming to the rescue. Samantha gratefully turned to her stepbrother. "Yes, and lots of fudge sauce. We can have sundaes." Nodding, Jake jumped to his feet. "I'll go help in the kitchen. I'm sure they could use an extra hand." The whole conversation had taken about thirty seconds. Glancing around, Nancy could tell that only the guests closest to the cake saw what had happened. But the people who had seen the ants had disgusted expressions on their faces. "Darling, let's get out of here," Nancy heard a wan-looking woman at the next table say to her husband. "I feel sick." Her husband helped her to her feet, and they hurried out of the room. Samantha stared bleakly at Nancy.
"I sure am getting a lot of practice calming down guests," she commented. "I'd better fix things up." She stood up to address the crowd. "They say bad things come in threes," she called cheerfully. Nancy and George exchanged an admiring glance. Samantha sounded unbelievably poised. "So I'm sure we'll have no more trouble from now on! "I think you'll find that our replacement dessert will take your minds off anything unpleasant. You're just about to taste a good old-fashioned sundae made with homemade vanilla ice cream and my mother's fabulous ultra-fudge sauce. Here come the waiters now!" She gestured toward some waiters carrying trays of sundaes through the kitchen door. Several "oohs" rose up from the diners. "Let's hope that works," Samantha said under her breath, sinking back into her chair. "I'm not sure how much longer I can continue to smooth things over." "I'm not sure, either," Brock Sawyer told her flatly. "I'm a pretty good actor, but it's getting hard to act as if I'm having a good time. I think it might be time for me to head back to California." At Brock's words Nancy darted a quick glance around the table. Jake and Mr. Tagley seemed to be concerned, but to Nancy's surprise, Mrs. Tagley looked oddly happy. Why would she want to lose the festival's star? Brock's participation was a definite plus for the inn. If he left, the festival's reputation could suffer. Why would Mrs. Tagley be happy about something that might hurt the inn? On the other hand, Nancy wasn't at all surprised to see that Tim was also pleased at Brock's words. He was eyeing Brock with an expression that seemed to say, So you can't handle it, huh? "Brock, you can't go!" Samantha pleaded quietly, grasping his arm. "We need you here! Please promise you'll stay." "Well—" Brock paused. "I really don't know—" Then he smiled at Samantha and put his hand over hers. "Maybe for a little longer—just to help a friend in need." 
A waiter was hovering over his shoulder with a sundae in his hand, but Brock waved him away. "Can't waste the calories," he explained. "I'd love some coffee, though." As the waiter moved on, Brock explained to his dinner companions, "I brought my own low-cal sweetener—my nutritionist recommended it." He pulled out a small glass jar to show them. "Conscience, it's called. Great stuff." "As he's told everyone in this inn since he got here," Tim grumbled under his breath. "Waiters included." As the conversation began to pick up at their table, George leaned forward and spoke to Nancy in a low voice. "Aren't these accidents getting a little suspicious?" "Definitely," Nancy whispered back. "As soon as lunch is over, I'm going to look around a little. Those ants didn't just find that cake. Someone put them there. If I'm lucky, I'll find a clue or two to tell me what happened." • • • "Can I help you, miss?" Nancy looked up with a start from where she had been peering behind the refrigerator. A bus-boy had paused in the kitchen doorway, his arms full of dishes and a questioning expression on his face. "Have you had problems with ants before today?" Nancy asked him. The busboy shook his head. "You can't believe how clean this kitchen is," he said, stepping over to the counter and setting the dishes down with a clatter. "Mrs. Tagley is a real— I mean, everyone at the inn keeps an eye on the kitchen. The trash is taken out six times a day just so we don't attract any pests. Besides, how could ants crawl through tile walls and a tile floor?" he asked, then seemed to forget she was there. No way that Nancy could think of. That made her more certain that someone had brought the ants into the kitchen. But in what? She'd already checked under the steam tables and behind the huge glass-doored refrigerators. The shelves, with their neat rows of kitchen supplies, had turned up nothing. 
Nancy had even stirred through the industrial-size garbage cans at one end of the kitchen with no success. And now she was starting to worry that the kitchen staff would kick her out soon. Nancy let out a sigh, brushed back her reddish blond hair, and started to leave, bumping into a stainless-steel worktable on the way. Then it occurred to her that she hadn't examined the rows of pots and pans under the huge worktables. In the bottom of a two-gallon double boiler, Nancy found what she'd been looking for.

• • •

"An empty jar wrapped in an apron? Why are you showing me that, Nancy?" Samantha asked. She was staring blankly at the bundle Nancy had plopped down on the desk in her office. "Look more closely," Nancy urged. "This is what held the ants we saw on the cake." It was a large half-gallon glass jar. It had probably been a mayonnaise jar, Nancy thought, but there was no mayonnaise in it now. There were only ants—a few sluggish ones crawling sleepily around the bottom of the jar. "I found it hidden in the kitchen," Nancy explained. "I think whoever put those ants on the cake brought them into the kitchen in this." "But—but where would someone get ants?" Samantha asked, confusion in her dark eyes. "That wouldn't be too hard," Nancy answered. "Some pet stores sell ants for ant farms. All anyone would have to do is put them in the refrigerator for a few minutes to make them sluggish enough to pour onto the—" "Stop!" Samantha was turning slightly green. "I believe you," she said quickly. "But who would do something like that?" "I don't know," Nancy admitted. "Maybe the same person who set up the scale so it would tip while Brock was being weighed." Now Samantha was even more confused. "Set up the—the scale?" "I forgot to tell you about that," Nancy said gravely. Quickly she filled Samantha in. "I don't know whether these pranks are being aimed at Brock or the festival in general," she finished.
"But I'm a detective, and if you'd like me to investigate, I'd be happy to." "No!" Samantha said emphatically. Then, as if to calm herself, she began rubbing her temples. "No, thank you, I mean. I'm sure these were just isolated incidents. The scale was probably already broken." "But if someone's out to sabotage the festival or hurt Brock—" "No, Nancy," Samantha said firmly. "That's impossible. It's—it's just an old mayonnaise jar, after all. I'll tell the kitchen staff to do a better job cleaning up from now on." She seemed so determined not to hear Nancy's message that Nancy didn't bother pointing out that someone had already done an excellent cleanup job—on the jar. There wasn't a fingerprint on it.

• • •

"Pure cocoa butter," a woman with a round face and bouffant hairdo was telling Nancy. "That's the only way you can get it to melt properly. I buy mine from a mail-order place in Switzerland. Would you like the address?" "It sounds wonderful, but I don't think so," Nancy said politely. Dinner had just ended—a fabulous buffet that included everything from melon in white-chocolate sauce to turkey with chocolate stuffing to chocolate-raspberry mousse torte. Now Nancy, Bess, and George were in the living room—a cozy room with a flagstone fireplace at one end, and sofas and chairs scattered throughout—as they waited for the final chocolate event of the day. The woman headed off to find someone else to trade recipes with, and Nancy turned to Bess. "I think the people here take chocolate even more seriously than you do, Bess." "Impossible," Bess said promptly. "I mean, I've eaten about four thousand chocolate things today, and I still can't wait to try those new chocolate creams Samantha was telling us about." "They did sound pretty scrumptious," George agreed. At dinner Samantha had announced that Oakwood Inn was planning to launch a line of its own homemade chocolates. And Brock Sawyer was going to give the new chocolates their first official taste.
"I wonder how famous you have to be before you're asked to be a taste tester at one of these things," Bess said longingly. "Oh, look, there's Brock! I'll ask him." She bolted across the room toward the actor. Nancy chuckled. "Bess doesn't believe in playing hard to get, does she?" she said to George. "Let's go see how her tactics are working." When they reached Bess, she was saying, "But don't you get full? I can't eat more than a bite of these rich desserts, myself." George elbowed Nancy in the ribs, and Nancy had to bite her tongue to keep from laughing. But Brock seemed to buy Bess's act. The actor shook some of his special artificial sweetener into a glass of iced tea he was holding and took a big swig. "I'm just grateful I've got such a great nutritionist," he told Bess. "If she hadn't told me about this sweetener, I'd be a total blimp, with all the sampling I'm supposed to be doing. But I admit I'm looking forward to sampling these chocolates." Brock set his glass down on a coffee table. "At least I would be, if I were hungrier. That dinner did me in. I wonder if—" "Ready, Brock?" Samantha spoke up from behind him. She was wearing a red dress that set off her dark hair, which fell in pretty waves down her back. In her hands was a gleaming red box tied with a silver ribbon. "I sure am." Brock was a little pale, but his voice was resolute as he followed Samantha to the fireplace. "Nice talking to you, Bess." As the girls squeezed into a love seat near the fireplace, Samantha held the red box aloft. "Ladies and gentlemen, your attention, please. I'd like you to meet my mother's newest candy—Silk and Cream Chocolates! They're really something—made with pure Wisconsin cream and imported Belgian chocolate, from a recipe formulated by my mother. Tomorrow Silk and Cream Chocolates will hit the stores, but tonight our star taster will enjoy the very first bite!" Samantha handed the red box to Brock. "These are my own personal favorites—Lemon Mousse Truffles." 
Brock pulled the ribbon open with a long, sweeping dramatic gesture. He reached in and lifted out a dark chocolate shaped like a heart. "These look fabulous," he said, then popped it into his mouth. Nancy saw a look of surprise cross Brock's face, but all he said was "And they—uh—they taste fabulous, too!" He reached for another truffle, but Nancy noticed that he was wincing as he tried to chew it. "I don't think he likes them," George commented, sounding puzzled. Brock kept chewing. "I'll hate myself tomorrow," he said, "but—" Abruptly he stopped. His face registered shock, not surprise any longer, Nancy noticed. Clutching his stomach, he moaned. "Sam, there's something wrong with these," he managed to get out. There was a nervous titter from some of the guests. "Oh, Brock, stop it," Samantha laughed. Giving him a jovial punch in the shoulder, she added, "He's such a kidder. Aren't you, Brock?" Brock didn't answer. He just fell forward, bent over double. Then, twisting in agony, he collapsed to the floor.

## Chapter Five

NANCY LEAPT to her feet and raced to him. "Brock, are you all right?" she asked urgently. The only answer was a dreadful moan. "Brock?" Samantha shouted in a panicked voice. "Brock?" Bending over, Nancy shook his shoulder. At her touch Brock fell onto his back. Bess screamed, and a chorus of gasps and cries rose from the other guests. "What's the matter?" shouted a woman wearing a press pass. She rushed forward—but stopped short when she saw Brock. His face was gray and flecked with sweat, and his lips were drawn back into a shocking grimace of agony. His blue eyes were bulging and staring, as if he couldn't focus. "Help me. Please!" he managed to gasp out. "Is there a doctor here?" someone called. "I'll call an ambulance," Jake Tagley spoke up in a take-charge tone. He dashed out the door. "Brock, can you hear me?" Nancy asked as calmly as she could. "We're getting help." Brock didn't respond. Nancy checked his pulse.
It was shallow, and his wrist was icy. Sobbing, Samantha threw herself down next to Brock. "Say something, Brock," she begged. "I can't believe this is happening!" Gripping both of his hands tightly, she stared up at Nancy. "He's—he's not going to die, is he?" But Nancy couldn't answer.

• • •

Half an hour later an ambulance pulled away from Oakwood Inn, carrying Brock to the hospital. Samantha had gone with him, so Mrs. Tagley was now frantically trying to put together an activity for the horrified guests. Nancy, Bess, and George were still in the living room. A police car had arrived, and a gangling young officer named Steve Ullman was taking statements from the guests. "You know, Nancy's a detective. She's incredible," Bess said proudly when it was her turn to be questioned. "You should let her work with you on this case." Officer Ullman smiled politely. "We don't know if it's a case yet," he said, flipping to a new page of his notebook. "But, of course, I'd be grateful for any help any of you can give me." "You should know about a couple of strange things that happened earlier." Nancy told him about the "accidents" that had taken place that day and about her suspicion that Brock was poisoned. "His attack came right after he'd eaten the first piece of chocolate." "Seems hard to believe it could work so fast without killing him," Officer Ullman mused. He was eyeing Nancy with more respect now. "I'm not a poison expert, but I'll definitely take those chocolates back to the lab. You say you found the jar of ants hidden in the kitchen? Do you think any of the kitchen staff could be responsible?" "It's hard to think of a motive, but I can check it out for you," Nancy replied. "I may be back myself, depending on what the lab boys turn up," said Officer Ullman. "In the meantime, let me know if you find anything." Unfortunately, most of the staff had left for the evening by the time Nancy and her friends reached the kitchen.
The lone waitress putting away some leftover chocolate-raspberry mousse torte had nothing to add to what the girls had seen for themselves. "We'll have to try again tomorrow," Nancy said, pushing through the kitchen doors into the deserted dining room. "Let's go up to the suite. I'd like to go over what we know so far"—she frowned—"which isn't much." "Oh, let's not go back up yet," said George. "Couldn't we find some other room where we could talk in private? Our suite's so small it makes me feel claustrophobic." "Fine with me," said Nancy. "But nothing on the first floor where anyone could interrupt us or listen in." They settled on a small lounge in the basement that smelled as if it was a smoking room, probably for the staff. "Well, Nan," George said, settling into a battered armchair. "It looks as though you have another case on your hands." Nancy and Bess sat down on a couch covered with an Indian-print spread. "Whoever put the ants on that cake and tampered with the chocolate scale may have been playing a prank. But poisoning's no joke. Someone's definitely out to get Brock." "But who would want to hurt him?" Bess asked. "I mean, an actor might have enemies, but you'd expect them to be—oh, I don't know, rivals for acting parts or something. Who would try to attack an actor at a chocolate festival?" Three names popped into Nancy's head immediately. "Tim might," she said. "He's obviously jealous of Brock. And, maybe, Mrs. Tagley. She made the chocolates, so she had a perfect opportunity to poison them. I'm not sure what her motive would be, but I get the feeling that she's not crazy about Brock. I also don't trust Dan Avery. I have no idea what he's up to, but I did overhear him say he'd get Brock. Other than those three, I—" Thwack! Thwack! "What's that?" Bess asked. Nancy was the first to get up and walk to the doorway and out into the hall. A light was on in an adjoining room. 
A pool table stood in the middle of the room, and board games were scattered on card tables with fold-out chairs around them. On the far wall was a dart board, at which Jake Tagley was just aiming his third dart. "It's Jake," she called back to Bess and George, who were still in the hall. "Hi," Jake said to the girls as they came over to join him. "I couldn't think of anything else to do. This is supposed to relax me, but I'm not sure it'll work." Nancy noticed that Jake was speaking mostly to Bess. "I don't blame you," Bess said sympathetically. "We feel terrible about Brock, too. What a nightmare!" She shivered, and Jake stepped closer as if to protect her. "It's just so spooky thinking that there's someone out there who could . . ." Bess's voice trailed off, and she shuddered again. "Don't worry," said Jake, putting a hand on her shoulder. "I'll watch out for you." He tried to sound as if he was half joking, but his admiring gaze made Nancy sure he meant what he said. From the glazed expression in Bess's blue eyes, however, Nancy realized that Jake's concern for her friend wasn't even registering. Jake seemed to notice Bess's indifference as well. To change course, he checked his watch and sighed. "So much for my dart game. I guess I should go help my dad a little before I call it a night. He's a night owl and loves to work late. He's nailing down baseboards in the east wing." "And we might as well head up to the lobby and see what's going on," Nancy said. "Maybe Mrs. Tagley has some news about Brock." They found Samantha's mother sitting at the front desk going over some flow charts. Seeing the girls, she put down her pen wearily. "Were you looking for something to do?" she asked the girls. "I'm afraid our evening plans have fizzled out." "We just wondered if there was any news about Brock," Bess said. "He's not doing well at all." Nancy thought she detected a strange note of satisfaction in Mrs. Tagley's voice. But why? 
Could it be that she disliked Brock so much that she was actually happy he was sick? "Samantha called a little while ago," Mrs. Tagley went on. "Brock's in intensive care, unconscious. The doctors suggested that Samantha come home because there's nothing she can do for him right now." Just then Nancy heard the front door open. She turned to see Samantha walking wearily up to the front desk, her face chalk white. "M-Mother?" "Hello, dear," said Mrs. Tagley worriedly. "How are you doing?" Before Samantha could answer, Bess spoke up. "How's Brock doing?" "No change from the last time I checked in. But I can tell that the police think"—Samantha's dark eyes filled with tears—"that they feel Brock was poisoned," she choked out. "They were asking all kinds of questions about the chocolates. And about the guests here. And—and about Tim." "What about Tim?" Mrs. Tagley asked quickly. "Things like where he was when Brock got sick," Samantha said miserably, tears falling down her pale cheeks. "And whether he had access to the scale that dumped Brock into the chocolate. And if he had any reason to be jealous of Brock. Mother, I know they suspect Tim of poisoning Brock!" "Oh, that's ridiculous," said Mrs. Tagley, but Nancy didn't think she sounded convinced. "B-but if it wasn't Tim, then who was it?" There was an awful silence. "I can't believe this is happening," Samantha finally said, wiping her tears away. "I couldn't even face Tim right now. Things are awkward enough between us." Brushing back a wayward strand of long hair, she glanced at the grandfather clock by the front door. "I should go up to bed, but first I need a glass of milk." "We should head upstairs, too," said Nancy to Bess and George. "I'm sure we've got a big day ahead of us." And not just tasting chocolate, she added wearily to herself. Once they were in the elevator and out of earshot, Bess muttered, "I don't blame Tim for being upset. Samantha's just running away from her problems. 
I mean, why is she so cozy with Brock if she's going with someone else?" "Maybe she's just confused and needs some time to work out her feelings," Nancy suggested. George nudged Bess teasingly. "Come on. Admit it. Aren't you really just jealous that Sam is stringing two guys along?" "No way," Bess said defensively. Then, giggling, she admitted, "Maybe just a little."

• • •

Five minutes later Bess was already in her nightgown, flipping through a magazine in the living room of the girls' suite. "I'm too wide awake even to think about getting ready for bed," Nancy said. "George, want to help me search the downstairs living room one more time?" "Sure. We'll let Bess get her beauty sleep." There was a mischievous twinkle in her brown eyes as she added, "She needs it after tiring herself out eating all that chocolate today." The pillow Bess hurled just missed them as they slipped out the door. Downstairs only a couple of lights were burning. Most of the rooms were shrouded in shadow, and there were no guests anywhere. "Will you keep out of this?" an angry voice ripped through the darkness, causing Nancy and George to jump. The voice was rising high and shrill, and Nancy realized it was coming from behind the door to a lighted office. "You're not my father, you know!" "I'm not trying to be your father!" It was Mr. Tagley's voice, and he sounded just as angry as Samantha. "I'm just suggesting that we bring someone in to give you a hand running this place until the festival is over! That doesn't seem like much, considering the strain you're under. The strain you're putting us all under." "I'm not under any strain!" Samantha insisted in a tone that contradicted her words. "Stop trying to take the festival away from me!" "Sam, maybe there is too much for one person to do—at the moment, anyway." This voice was Jake's, and he sounded much calmer than the other two. "Why don't you let me help you out?
Dad can handle the construction in the east wing by himself, and I could give you a hand with the day-to-day stuff." "No. No," repeated Samantha in a cracked voice. "I'll do fine with the day-to-day stuff if you guys will just get off my back!" "Well, you didn't exactly do a great job with those guests in Room two fourteen," Samantha's mother put in tartly. "If I hadn't been on hand to persuade them to stay, they would be long gone. And if you don't watch out, you're going to lose all the guests. People don't like wondering whether they're about to be poisoned, you know." "It's not my fault they got the creeps!" Samantha shot back. "And how can you talk about losing guests? This inn was losing guests and money before you let me take over!" "The only thing the guests care about are my desserts," her mother retorted. "My chocolate concoctions are the only reason people have come to this festival." "Now, wait a minute, guys," Jake said mildly. "Why don't we all try to—" Samantha wouldn't let him finish. "Oh, so you're the reason this festival got started, Mother?" she asked sarcastically. "I had nothing to do with it—is that what you're saying? After all, I only came up with the whole idea and handled all the publicity and convinced Brock to come and—" "Brock Sawyer has brought us nothing but problems so far," Mrs. Tagley snapped. "He was your first mistake." "You're all against me!" Samantha yelled. She sounded beside herself. In the darkened hallway Nancy and George exchanged an uncomfortable glance. Nancy had been so shocked by all they were hearing that she hadn't even realized they were eavesdropping. With a tilt of her head Nancy suggested that they should start back to the lobby. "No one's against you, Sam," came Mr. Tagley's faint voice. "Can't you see we're on your side? It's just that you can't be expected to work as hard as you have been." "Oh, so you think I can't handle the work?" 
Even from down the hall, Samantha's voice was louder and shriller than before. Then Nancy heard the sound of a door being yanked open. "I don't want to hear any more!" Samantha shouted. "You all deserve to have this festival fall apart!"

## Chapter Six

"SHE'S COMING THIS WAY! Quick, get back!" Nancy whispered. She swiftly pulled George into a shadowed doorway. Samantha swept by without appearing to notice the girls at all. Then she was gone. There was nothing but silence coming from the office she had just left. Finally Nancy heard the sound of a chair scraping on the floor, as if someone was standing up. "Let's get back upstairs before the rest of them come out," she whispered to George. They tiptoed the few steps to the elevator, and Nancy punched the button. Thankfully, the doors slid open quickly, and Nancy and George ducked inside. "This is even more of a soap opera than I thought," George commented. Bess was still reading her magazine when Nancy and George got back to the suite. "Wow!" she exclaimed softly after hearing what had happened. "I didn't realize that Samantha and her mother were that mad at each other. You don't think this festival is making Samantha a little crazy, do you?" Nancy had been wondering about that herself. "She seemed ready to come unhinged tonight," she answered soberly as she sat down on the couch next to Bess. "Unhinged enough to poison Brock, though?" George called from her little bedroom. She emerged a moment later in an oversize red T-shirt and plopped down in the worn armchair. "I'm not sure," Nancy began thoughtfully, propping her long legs up on the coffee table. "She asked me not to investigate this case after I found that ant jar. I guess she might be trying to sabotage the festival herself—both to take the pressure off herself and to teach her family a lesson. Except that everything that's happened so far has been aimed at Brock, not the festival in general. Can you see Samantha trying to hurt Brock in that way?"
Both Bess and George shook their heads. "I can't see anyone trying to hurt him," Bess put in emphatically. "Poor Brock! I called the hospital while you were downstairs. They said he's in stable condition. I'm glad he's okay, but I'm sick from worrying about him." "I notice you polished off the rest of that candy while Nancy and I were downstairs," George pointed out, grinning. "Maybe that's what's making you feel sick. Anyway, what about Jake Tagley? Did you forget about him?" "Forget about Jake? What do you mean?" Bess sounded puzzled. George's brown eyes were twinkling. "Well, Bess, you've certainly had a busy day. Jake's got a major crush on you, and you're so in love with Brock you haven't noticed!" "I'm not in love with Brock or Jake," Bess said stiffly. "Besides, it's mean of you to joke about Brock when he's in the hospital." "Brock's in good hands," Nancy reassured her. She got up from the couch and stretched. "Anyway, you guys should get some rest. Tomorrow's going to be busy." "What about you?" asked George. "Aren't you going to bed?" "Not yet. We were interrupted before we got to check out the living room, remember? I won't be able to fall asleep if I don't do it." Bess's blue eyes opened wide. "But, Nan, you can't go down there alone. It's so late! Can't it wait until tomorrow morning?" "Too risky," Nancy told her. "I don't want the cleaning staff to get the chance to clean the place up. They probably start early. I've got to check the room for clues tonight." "Do you want me to come?" George asked. "No, you're all ready for bed. I'll just hurry down and be right back up." After saying good night, Nancy stepped quietly out into the hall and moved toward the elevator. Now every creak the elevator made seemed loud enough to wake the whole inn. Nancy held her breath when the door clanged open in the lobby—but the first floor was dark and deserted. No one was there to see her tiptoe across the lobby and into the living room. Did she dare switch on a light? 
It was so dark that Nancy knew she had no choice. Feeling along the wall inside the doorway, she clicked on the light switch, blinking in the sudden brightness. As she made her way across the room, she saw that the end tables were littered with glasses, crumpled napkins, festival schedules, and ashtrays. There was a faint trail on the Oriental carpet of what appeared to be sawdust footprints leading from the fireplace to a side door. Sawdust? Nancy suddenly asked herself. What was sawdust doing in the living room? There wasn't any construction there! She gently pushed open the side door. The tracks led into a narrow hallway that Nancy hadn't been down before. Leaving the side door ajar so she could see by the light from the living room, she stepped out and followed the yellow footprints to— "The kitchen," Nancy murmured aloud. "Another entrance to the kitchen!" She held her breath as she switched on a kitchen light—then let out a huge sigh of disappointment. The trail of sawdust ended right at the kitchen door. The busboy Nancy had spoken to earlier obviously hadn't been exaggerating when he described how clean the kitchen was kept. The floor was gleaming brightly enough to be used in a floor wax commercial! Then Nancy's gaze landed on something else. On the counter right next to the light switch, within easy reach of the door, was a huge pile of Silk and Cream chocolate boxes. They were stacked in neat rows against the wall. Nancy noted that the stack closest to the door had one less box than the others. She was willing to bet that it had been the box that poisoned Brock. Had the poisoner gotten into the kitchen to alter the chocolates before Samantha brought them out? Was the sawdust a clue? If it was, whoever had tracked it in had probably come from the east wing of the inn. That pointed to someone who had been working there—probably Jake, Tim, or Mr. Tagley. 
A visitor to the east wing might have picked up a little sawdust on his or her shoes, but not enough to leave an actual trail. "Hmmm," Nancy murmured aloud. It wasn't much of a clue—more of a hint really. Just then her mouth stretched open in an enormous yawn. You've done enough for one day, Drew. The case will have to wait until morning. Yawning again, she tiptoed back toward the elevator. • • • "I think I'm going to skip the brownie workshop," Nancy told Bess and George the following morning after breakfast. "I want to head over to the police lab. They may have figured out what poisoned Brock by now." "Do you want us to come?" Bess asked reluctantly, twisting her blond hair in her fingers. "I'd hate to miss trying the ultimate brownie, but—" "Go on," said Nancy, laughing. "I'll be fine. The workshop sounds like a lot of fun." George groaned and tugged at the waist of her jeans. "After those chocolate-chip pancakes we just had, I may never eat again. It seems kind of soon to be making brownies." "Speak for yourself!" Bess sounded shocked. • • • The police station and lab was about a half-hour's drive from the Oakwood Inn. The technician on duty in the lab, a woman in her thirties named Officer Sherbinski, greeted Nancy coolly but politely. "Officer Ullman told me you were coming," she said. "He said it was fine to answer any questions you might have." She directed Nancy to a small table, and they sat down. "Any word on how Brock's doing?" "Mr. Sawyer is conscious, but he's feeling too weak to talk," Officer Sherbinski replied. "A detective went to the hospital to question him, but he wasn't up to it." "I see," said Nancy. "Do you know yet what kind of poison was used?" Officer Sherbinski nodded. "Yes. Mercurous chloride. Its common name is calomel." "Calomel? I don't think I've heard of it." "It's a white, tasteless powder that was once used as a purgative. People took it to clean out their systems," the officer explained. "It's not used much nowadays. 
It can do a lot of damage—especially to the liver and kidneys. Mr. Sawyer is lucky to be alive." "He certainly is," Nancy agreed. "Were all the chocolates in the box poisoned?" Giving Nancy a meaningful look, the officer said, "Well, that's where it gets complicated. There wasn't any mercurous chloride in the chocolates. None at all. They were clean." "What?" Nancy said, leaning forward over the table. She didn't suppose there was any way the poisoner could have tainted only the chocolates Brock ate, since there was no way of knowing which ones he would pick. "That means you have no idea how Brock was poisoned," she said at last. "Exactly. According to the report"—Officer Sherbinski tapped a manila folder resting on the table—"dinner was served buffet-style. There's no way the culprit could have singled out Brock's food." Nancy sighed. "We're totally in the dark then." • • • Nancy stepped out of her car and walked slowly across the parking lot toward the inn, her shoes crunching on the gravel. She wasn't sure how to proceed with the case. At least Brock was safe in the hospital for the time being. But with him out of the picture, the culprit would probably lie low. Nancy would have to work with the few clues that she already had. Deep in thought, Nancy pushed open the door and stepped into the lobby. The scene there brought her sharply back to the present. A tearful Samantha was standing by the front desk, her arms around Tim Krueger. Next to her were two police officers—Officer Ullman and another young man. A cluster of guests had gathered, too. From their expressions, Nancy guessed the officers weren't there to join in the Chocolate Festival. "You can't take him away!" Samantha was sobbing. "I won't let you!" Jolted into action, Nancy stepped forward to join Samantha. "What's going on?" she asked. "Nothing for you to be concerned about, Miss Drew," Officer Ullman told her calmly. "We're just taking Mr. Krueger in for questioning." "Questioning?" 
Nancy repeated. "That's right. He's our main suspect in the attempted murder of Brock Sawyer."

## Chapter Seven

SEEING NANCY, Samantha turned to her. "Nancy, I know Tim didn't do it. You've got to find out what really happened," she begged. Tears were streaming down her cheeks. "I don't have anyone else to turn to!" Samantha let out a little moan as the police led Tim toward the door. She, Nancy, Bess, and George followed them outside and stood watching as the police car sped away with Tim inside. "I'll be happy to take on the case," Nancy told Samantha quietly. She didn't mention that she'd already begun investigating. "Should we talk in your office?" "I guess that would be better than broadcasting all my problems to the guests," Samantha said with a wan smile. "They know more than enough already." Straightening up with determination, she led the way through the lobby and down the hall to her office. Samantha sat at her desk, motioning the girls toward chairs. "The whole inn must know Tim was arrested," Samantha groaned. "Maybe that's best right now," Nancy suggested. "If your guests think the problem's been taken care of, they might start to relax again. And if the real culprit is someone else—and if he or she thinks that Tim is the only suspect—then that person might start getting careless." "So you don't suspect Tim?" Samantha asked, brightening. "Oh, I'm so glad!" "Well, I certainly don't think the case against him is airtight," Nancy replied carefully. "It's the fight Tim picked with Brock that makes him the most likely suspect to the police. But picking a fight with someone is a long way from poisoning him." "That's right," George put in hopefully. "I know Tim didn't poison Brock," Samantha said firmly. "He—he certainly had a motive. But I've known Tim for a long time. There's no way he'd be so vicious." "I hope you're right," said Nancy. "And if he didn't, then we need to find out if anyone else has a bone to pick with Brock Sawyer." She got to her feet. 
"Why don't you go back to your guests now, Samantha, while we get to work. The minute we turn something up, we'll let you know." A sheepish expression came over Samantha's face as she asked, "How would you feel about keeping me company for lunch? I don't feel as if I can face eating in the dining room with all those people around." She plucked nervously at one of the combs in her hair. "I've got a few phone calls to make, but then would you like to grab a sandwich in here with me?" "Sounds great," said Nancy. They agreed to meet in forty-five minutes. "I didn't want to say this in front of Samantha," Nancy told her friends when they were out in the lobby, "but we definitely can't rule Tim out. There's no way I can question him when he's in police custody, though. Let's start our questioning with Dan Avery, since I did hear him make a threat about getting some actor. It's about time we found out what he was talking about. George, you want to come along?" "What about me? Should I question Brock?" Bess asked hopefully. "Not quite," Nancy told her, smiling. "You can do the next best thing and spend time getting to know Jake Tagley better." Bess's blue eyes widened. "You think Jake's a suspect? What could he have against Brock?" "I have no idea," said Nancy, "but he may be able to shed some light on what the people around the inn think of Brock. Just turn on the charm, Bess, and see what you come up with." "You've got it!" Bess said brightly. Then, gesturing to her stained T-shirt and shorts, she added, "But first I'd better change. I can't get to know Jake in clothes that are covered with brownie batter!" • • • Nancy and George knocked on Dan Avery's door, but there was no answer. He wasn't in the basement playing Chocolate Trivia or taking a chocolate pastry class with Mrs. Tagley or participating in an auction of chocolate-related cooking supplies. In fact, he was nowhere to be found. "Okay, on to plan B," Nancy said, running a hand through her hair. 
"Let's try some members of the staff instead. The waiters who work in the dining room might have something to tell us." Most of the waiting staff were too busy setting up for lunch to talk to Nancy and George, but two waitresses named Karen and Liz agreed to spare the girls a few minutes. After a brief explanation of her involvement in the case, Nancy asked, "Has either of you noticed anything in the kitchen that seems out of the ordinary? Even the smallest discrepancy could be a clue." The two waitresses considered the question. "Well, Mrs. Tagley's been in the kitchen more than usual," said Liz slowly. She had a round face and curly dark hair. "It's not exactly unusual, since so many of her desserts are being prepared for the festival. But it seems as though she's in there constantly." "Is she cooking or just checking up on things?" asked Nancy. "She's definitely not cooking," said Karen immediately. "Mrs. Tagley hates to cook where people can watch her. Says she can't concentrate if she feels like people are looking over her shoulder. She usually makes everything in the family's private kitchen upstairs—you know, where her apartment is. Then she has the stuff brought down to the main kitchen." Nancy was intrigued by this detail. She didn't know what Mrs. Tagley's motive might be. But Samantha's mother certainly had the perfect opportunity to poison anything she made. "But what was she doing in the kitchen if she wasn't cooking?" Both waitresses shrugged. "Beats me," said Karen. "Maybe it makes her feel more in control." She stared uneasily over her shoulder. "Uh, we should really go back to work." Nancy thanked the waitresses for their time, then she and George headed back to Samantha's office. "It's nice of you to keep me company," Samantha said gratefully when they arrived. "All those guests staring at me is a little hard to take." She reached for the telephone on her desk. "Let me just give a quick call down to room service and order us something." 
"You have room service here?" asked George. "You can just get sandwiches, coffee, things like that." "Sounds good to me," said Nancy cheerfully. The girls chatted until their order arrived, and then Nancy said, "I've been wanting to learn a little more about your background, Samantha. When did you meet Brock?" Samantha took a sip of iced tea. "Let me see, the summer after my freshman year. Brock was in summer stock here." She smiled, remembering. "He was the lead in Brigadoon. It's kind of hard to imagine when you see him playing a cop on TV, but he was great." She took a bite of her turkey sandwich before continuing. "Oakwood's got a pretty good summer stock company, considering we're way out in the sticks like this. Actually, Brock grew up in an even tinier town than Oakwood, about thirty miles from here. I used to wonder if that was why my mother disapproved of him so much. Maybe she didn't want me hanging around with someone who was from an even smaller town than I was." "Your mother disapproved of Brock?" Nancy asked, glancing up alertly. "I didn't realize that." Samantha grimaced. "She practically bit my head off when I first mentioned his name. You can't believe how hard she made my life the whole time I was going out with him." She shook her head. "Mothers. I swear, there's no way to keep them happy." George had been munching on her roast beef sandwich while she listened. Now she asked, "But your mother doesn't dislike Brock anymore, does she?" "No. She wasn't crazy about having him come for the festival, but at least she didn't throw a fit about it. I mean, it's good publicity for the inn and for her new line of chocolates. Or it was. They're certainly getting bad publicity now," Samantha added bitterly. Then she glanced at her watch. "I've really got to get back to work," she said regretfully. "We're setting up a chocolate fondue demonstration, and I have to go track down some chairs. Thanks for keeping me company." 
The three girls left the office at the same time, but Nancy was careful to head in the opposite direction from Samantha. "Hmm," she said to George when they were far enough away. "So Mrs. Tagley doesn't like Brock. It fits, in a way. She seemed happy when he was talking about leaving. Now I just have to find out why." "Why don't you ask her?" suggested George. "Good idea," said Nancy. "She's going to be giving chocolate classes all afternoon, though—I checked the schedule. So let's talk to some of the employees first. I remember Jake saying that some of them have been here even longer than Mrs. Tagley has." • • • "I can't discuss my boss," the gray-haired gardener explained gruffly. "That would be unprofessional." As the older man turned back to clipping the azalea bushes by the front entrance, Nancy looked at George and shrugged. "That's the fourth person we've tried," George said as they stepped out of the heat and back into the cool lobby. "It doesn't seem like any of the old-timers want to talk to us." Nancy tucked her hands into the pockets of her shorts. "That's for sure. 'In a close-knit place like this,' " Nancy went on, mimicking the gravelly voice of one of the older chefs, " 'you never know what's going to get back to people.' " George laughed at Nancy's imitation. Nancy sighed and said, "I'm starting to think we've wasted the whole afternoon." "What about trying the person in charge of room service, or whatever they call it here?" George suggested. "We haven't been there yet." "Good idea." Nancy smiled as she added, "You know, I think this will be the first time I've ever seen anyone who works in room service. I've always just thought of those people as voices on the phone before." The woman they met in the small basement service kitchen was a lot more than a voice on the phone. She was a wiry woman named Mrs. Reames, with curly gray hair and glasses. 
She seemed to be in her seventies and was very happy to get the chance to talk—a lot—to Nancy and George. "I spend all day listening to people order hamburgers," she said, once Nancy explained why they were there. "It would be a pleasure to get to talk for once. I've seen this place go through a lot of changes. Oh, they've tried to retire me a couple of times, but I tell them I'm not leaving until they drag me out. So what if I get the orders mixed up once in a while? It's not as if—" "I bet you have some fascinating stories to tell about the old days," Nancy said quickly. She hated to interrupt, but she didn't want to spend all afternoon listening to stories about mixed-up orders. "You must have been here for nearly as long as Mrs. Tagley—is that right?" "Longer! I was here before she and her first husband ever bought the place. 'Course, Mrs. Patton—I mean Mrs. Tagley—was the real power behind the throne, you might say. Samantha's father never did have the gumption she did. But then, I guess that's why they moved out to the country in the first place." "Because Mr. Patton didn't—didn't have enough gumption?" George asked, leaning against the counter. "Well, because his business had failed, I mean," clarified Mrs. Reames. "He'd had some kind of nervous collapse after that businessman got through with him, and he and the missus bought the inn to give him more quiet surroundings. Ha! More quiet, my foot! Why, I can remember—" "You said 'after that businessman got through with him,' " Nancy gently reminded Mrs. Reames. "Who do you mean?" "Why, he was—he was—the name escapes me now," said Mrs. Reames. The room service telephone rang just then, but she ignored it. "Let them wait! They won't starve! Well, whatever his name was, it was a real scandal, what he did to Mr. Patton. Said he was going into partnership with him. Got him to sign a lot of bad checks—and just cleaned him out. Mr. Patton never could hold his head up after that—" Mrs. 
Reames snapped her fingers so suddenly that her glasses nearly fell off the end of her nose. "Sawyer! That was it, Mr. Sawyer!" Nancy just stared at Mrs. Reames for a moment before asking, "Mr. Sawyer? Any relation to Brock Sawyer?" "That actor? Yup. That's the one. He's sitting pretty high in the saddle now, isn't he? But his background is nothing to be proud of. I'm not surprised Mrs. Patton—I mean Mrs. Tagley—can't stand the sight of him." Mrs. Reames shot Nancy and George a knowing look. "After all, the boy's own father as good as murdered her husband."

## Chapter Eight

SUDDENLY Mrs. Reames's face froze. "You won't tell Mrs. Tagley I've been blabbing on about her like this, will you?" she begged, twisting her apron. "I'd probably get in all kinds of trouble!" "I promise we'll keep your secret," Nancy told her. She stood up to leave. "Thank you so much. Come on, George." "You were right, George," said Nancy as she and George took the stairs up to the main floor. "This really is a soap opera!" George asked, "Do you think Mrs. Tagley could have been trying to get revenge on Brock for what his father did to her first husband?" "It's possible," said Nancy. "I hope not, though. I like Samantha. It would be terrible for her if her mother had done something like that. And that reminds me of something else. Does Samantha know about the way Brock's father treated her father? I mean, does she know she fell in love with the son of the man who destroyed her father's spirit?" George shrugged. "I guess that's one of the things you can find out when you talk to Mrs. Tagley. You are going to talk to her, aren't you?" "You bet—very, very carefully. I really don't want to get Mrs. Reames in trouble, so I'll have to tiptoe around the whole thing. Let's see, what time is it?" Nancy checked her watch. "Four-thirty. I don't think there are any more activities scheduled for this afternoon. This is probably as good a time as any to talk to Mrs. Tagley." "Want me to come?" 
asked George. "Or do you think she might say more if I'm not around?" "I guess I should try a personal approach," said Nancy. "Maybe you can track down Bess and Jake." "Will do. Good luck!" George headed for the elevator. Nancy found Mrs. Tagley's office next to Samantha's—the door open. When Nancy peeked in, she saw that Samantha's mother was talking on the telephone. "You'll be done working by dinnertime, won't you?" she snapped into the receiver. There was a short pause, then she said, "Well, what about my dessert demonstration tonight? It would be nice to see you once in a while, instead of having you work every second." She paused again, then sighed. "Oh. Well, okay. Listen, put on a tie if you get a chance, won't you?" She hung up. Seeing Nancy, Mrs. Tagley smiled ruefully and said, "I'm married to a workaholic. He'd rather finish a staircase than eat. Can you imagine?" Nancy wasn't sure what to say, but fortunately Mrs. Tagley didn't wait for her to answer. "Well, I'm sure you didn't come to listen to me complain about my husband. Have a seat, Nancy. My daughter's told me you're looking into Brock's poisoning. Is that what you've come to talk to me about?" "That's right," Nancy said, sitting down. "I'm trying to find out who might have had a motive." "I should have thought that was easy," said Mrs. Tagley. Was there a wary look in her eyes now? Nancy wondered. "Poor Tim has a pretty good reason." "He does," Nancy agreed, "but so do some other people." She took a deep breath before asking, "How did you feel about Brock?" Now Mrs. Tagley was definitely on the alert. "How did I feel?" she repeated, staring at Nancy. "You sound as if he's dead or something. I like him fine." "Samantha told me that you objected when she first started dating Brock." That, at least, wouldn't get Mrs. Reames into trouble. "Well, they did get serious awfully fast." Mrs. Tagley gave an uneasy laugh. "Besides, I have to admit I have an old-fashioned prejudice against actors. 
You never know if they're suddenly going to be out of work." "That's hardly a problem for Brock now." "No. It's not." Mrs. Tagley fell silent. "I've heard a rumor," Nancy said carefully, "that Brock's father and your first husband had a falling-out." In an instant all pretense of cordiality vanished from Mrs. Tagley's face. "Where did you hear that?" "It seems to be common knowledge." Nancy knew she was stretching the truth a little—but surely more of the staff than Mrs. Reames knew about Brock's father. "I don't mean to pry, Mrs. Tagley, but I'm sure you can see that this information has a bearing on the case." For a long moment Mrs. Tagley merely stared at Nancy. Then, letting out a long breath, she said, "My first husband, Lloyd Patton, was a very successful realtor. He was a brilliant businessman, but he was also very temperamental—you could almost say unstable." Samantha seemed to have inherited some of his temperament, Nancy thought. Mrs. Tagley's eyes focused far off as she explained. "As long as things were going right for him, he got through his days all right. But whenever he was disappointed or worried about something, he seemed to feel it ten times more than the average person. "So along came Brock's father—who was also named Brock, by the way." Mrs. Tagley's eyes flicked to Nancy. "I bet you thought Brock's name was made up, didn't you?" "I did, now that you mention it," Nancy admitted with a smile. "It's so perfect for TV." "Anyway, Brock Sawyer senior had all the charm of his son and then some. He told Lloyd he had a great idea for doubling their incomes. They would go into partnership to develop a retirement community in Arizona. Lloyd would put up the capital while Brock senior handled the actual developing. He told Lloyd he didn't want to bother him with the day-to-day stuff." Mrs. Tagley shuddered at the memory. "Well, my husband loved the idea. He found investors. He found potential buyers. 
He put everything he had into that business—his assets and his good name. And—well, I guess you know what happened next." "Mr. Sawyer didn't hold up his end of the bargain," Nancy said quietly. "Hah! That's a mild way of putting it!" Nancy thought Mrs. Tagley was about to launch into an angry tirade, but she just took a deep breath, as if to calm herself. She continued, "Lloyd lost all his money—and his investors' money. He never forgave himself for that. He was never a happy man again. Brock Sawyer destroyed him." Nancy felt terrible about bringing up such painful memories, but she knew it was the only way to get at the truth of Brock's poisoning. "That's when you moved out here to the inn?" she asked. Mrs. Tagley nodded. "We thought we could make a go of it—that it would be a pleasant and maybe relaxing way to support ourselves. Shows how much we knew about innkeeping," she said with a snort. "I liked our new life, but Lloyd just couldn't make the adjustment. His health started failing after a couple of months, and he went downhill very fast. "The doctor said it was heart failure. I'd call it heartbreak. My husband died of grief." Tears suddenly sprang to Mrs. Tagley's eyes. "So now you know." Nancy still had one more question. "But Samantha doesn't, does she?" she asked gently. "No. I've kept all this from her. I hope you will, too." Mrs. Tagley leaned forward, gripping the edge of her desk intently. "But whatever problems I had with Brock Sawyer senior are in the past. All of this really has no bearing on what's happened to his son." Unless Mrs. Tagley had poisoned Brock, of course. • • • "And that was pretty much all she'd say," Nancy told Bess and George as they were sipping glasses of iced tea in the living room. The spacious room was once again immaculate, and the girls were sitting on a window seat in a bay window that overlooked the inn's front lawn and flower beds. "If she is the one who poisoned Brock, it'll be hard to prove it." 
"I feel sorry for her," said George. "I thought she was just kind of stern, but now I can see why." "I can, too," Nancy said, "not that being sorry for her means she's not a suspect." Turning to Bess, Nancy asked, "Did Jake tell you anything that might be useful?" "Not exactly," said Bess, her mouth curving into a hint of a smile. "He did mention that he couldn't figure out what was going on between Samantha and Brock. But I'm afraid we didn't get around to discussing the case much. . . ." Her voice trailed off, and a furious blush spread up her face. Crossing her arms over her chest, Nancy asked, "Just what did you do?" Bess grinned at the memory. "Well, Jake took me out for pizza for lunch—said he was getting sick of chocolate. Then we drove around the countryside and just talked. He's really a nice guy, Nancy! Funny, considerate—and he's a great listener." George's expression indicated she was dubious. "Is he nicer than Brock?" "Well, I don't really know Brock," said Bess with a dismissive wave. She made it sound as if she'd never even glanced in Brock's direction before. "Do you guys think we could extend our visit a little?" she went on in an excited rush. Nancy sighed. "We may have to, if I don't come any closer to solving this case than I did today." "Oh, give yourself a break," Bess told her. "You're doing a great job." Jumping up, she started in the direction of the elevator. "Let's go up and get dressed for dinner. Jake asked us to sit with them." An hour later Nancy, Bess, and George entered the dining room and headed for the Tagleys' table. The girls had all changed into dresses, but Jake seemed to notice only Bess. Nancy had to admit Bess looked terrific in her flowered minidress. Jake wasn't bad himself, in his white pants, blue shirt, and navy blazer. The girls said hello to Jake and Samantha. Mr. and Mrs. Tagley weren't around and Nancy wondered if Mrs. 
Tagley—like her daughter at lunch—hadn't been able to face eating in public after the emotional scene in her office. "Don't tell me there's chocolate in this!" Nancy exclaimed as a waiter set a plate of chicken with dark, spicy-looking sauce in front of her a few minutes later. "Well, there is," Samantha told her, laughing. "That's a mole sauce. It's a Mexican recipe that uses unsweetened chocolate. You can't really taste the chocolate, but it adds wonderfully to the flavor." "It's delicious," said Nancy after she'd taken a bite. "I can't wait to tell Ned I ate chicken with chocolate!" "Save some room for dessert," Samantha cautioned. "After my mom's cooking demonstration, we're going to pass around a big selection—all chocolate, of course." "That sounds wonderful," said Nancy. "I'm in." "Well, I hope you all have a great time," Samantha said. Jake shot his sister a startled look. "You're not coming?" "I can't. I'm going to visit Tim. I—I can't just forget about him while he's in police custody, can I?" "Of course not," George said warmly. "Make sure you notice Jake's handiwork, too. This will be the first time we're using the conference room in the east wing. He's done a great job restoring it." The work showed, Nancy thought when she walked into the new conference room after dinner. The room was on the second floor, with windows running all along one side. The other three walls were papered in a woven fabric Nancy thought was cheerful and businesslike at the same time. The ceiling's acoustical tiles kept the room from echoing even though it was full of people. A big oval table was set up at one end of the long room. At the other end rows of chairs were lined up facing the new wooden stage. A demonstration table was set in the center of it. Jake was inside and directing people to sit. Going over to him, Nancy said, "This room looks great, Jake. Did you and your father do all the work yourselves?" "Yeah, but my dad was the real mastermind. 
I just held nails for him." "I bet you did more than that," Bess piped up, moving up beside them. "Anyway, it's beautiful." Jake nervously brushed his sandy hair back, as if embarrassed. But his hazel eyes were filled with pride. After finding the girls chairs at the end of one row, he continued seating the other guests. Glancing around, Nancy saw that the room was already almost full, and a couple of photographers had stationed themselves near the stage. It seemed odd that Dan Avery wasn't among them, she thought, considering how he'd practically stomped all over everyone trying to get shots of some of the other events. Come to think of it, Nancy hadn't seen him all day. Where could he be? Before she could wonder any further, Mrs. Tagley strode briskly onto the stage, and everyone started clapping. Then, with a quick bow, Mrs. Tagley announced, "Tonight I'll be preparing a dessert I call Chocolate Volcano. It's a spectacular finale to any meal, and it's a fun dessert to demonstrate because it's so dramatic. I'm going to need an audience volunteer. Who'd like to help me?" A forest of hands popped into the air. "How would you like to help?" she asked, pointing straight at Nancy. "Me?" Nancy asked in surprise. She hadn't even had her hand up. "But I—" "Go ahead. It'll be fun," whispered Bess eagerly. "How can you resist?" George chimed in, grinning. "Well, okay—why not?" Nancy stood up and began making her way toward the stage. Bess was right. Working on the Chocolate Volcano was fun. Together Nancy and Mrs. Tagley shaped a mound of chocolate mousse into the shape of a mountain. Then Mrs. Tagley showed Nancy how to roll out a sheet of chocolate "leather"—made of chocolate mixed with corn syrup—and how to fit the leather neatly over the "mountain," leaving a hole at the top. "Why do I suddenly feel all thumbs?" asked Nancy with a laugh. 
She happened to glance out at the audience as she spoke and saw that Jake had taken her chair and was whispering something in Bess's ear. Bess smiled at him, and then Jake slipped quietly out of the conference room. "And now comes the most realistic touch," Mrs. Tagley was saying cheerfully. She seemed more relaxed than Nancy had ever seen her. Obviously, cooking and chocolate brought out the best in her. "I'm talking about the molten lava." "Lava?" Nancy repeated in mock alarm. "Sounds dangerous!" "Not at all. First we pour a half-cup of rum into the hole at the top of the mountain." Mrs. Tagley handed over the rum, and Nancy poured it in carefully. "Next, we light the flame. The rum will begin to burn, and that's what makes the 'lava.' There won't be any alcohol left after it burns off, so you younger people will be able to sample this. Now to light the flame!" From a drawer under the table, Mrs. Tagley pulled out a miniature acetylene torch. "Now, that really looks dangerous!" someone in the crowd called out. There was a little nervous laughter before Mrs. Tagley said reassuringly, "It's not. Honestly, it's one of the most important cooking tools I own." Mrs. Tagley pressed a button on the torch, and a thin spurt of blue flame leapt out. Carefully she aimed the flame at the rum that was now streaming down the sides of the cake. In an instant the rum was ablaze. "There's your volcano!" she announced triumphantly—and the audience burst into spontaneous applause. Just then Nancy was distracted by a thin cloud of white powder drifting down from the ceiling and settling in the air around her. "Hey!" said Nancy, waving to try to clear the air. "Where did that—" A muffled explosion cut off the rest of her sentence. Before anyone had time to move, Nancy was surrounded by a sheet of flame!

## Chapter Nine

FOR THE REST of her life Nancy would be grateful that she'd so often rehearsed what to do in a fire emergency.
Without conscious thought she dropped to the ground and rolled rapidly over and over until the flames licking at her clothes were out. Then, panting, she jumped to her feet. She had acted so swiftly that she hadn't been burned at all. One sleeve of her blouse was slightly charred, but that was the extent of the burns. Checking Mrs. Tagley, Nancy saw that the woman, though white and shaking, was also unharmed. The fire had spread by then to the stage curtains behind them and the audience had panicked and was screaming and running for the exit. "Where's a fire extinguisher?" Nancy called loudly to Mrs. Tagley, above the din. Mrs. Tagley's mouth opened, but she didn't make a sound. She was swaying and gripping the edge of the table as though she were about to pass out. Nancy peered out at the audience, frantically checking the room for a fire extinguisher. There had to be one— Yes! There it was, hanging on the wall next to a circuit breaker. She ran to it, yanked it from the wall, and raced back to the fire. As she reached the stage, the curtain ripped from its metal frame and tumbled to the floor, a mass of flames that spread across the entire width of the platform. Nancy yanked back the pin on the fire extinguisher and aimed it at the flames. Foam shot out, dousing the fire. In seconds the flames were out, and the curtain was a black, smoldering tangle on the floor. Drawing a deep breath, Nancy checked on Mrs. Tagley again. She was still leaning against the table for support and was quite obviously in shock. There were only a few people left in the conference room now, and one hysterical voice kept calling out over and over. "Nancy! Nancy! Are you all right?" It was Bess. "I'm fine," Nancy called back shakily. "But I don't feel like making another Chocolate Volcano for a long, long time." • • • "Now, explain to me just why you turned off the sprinkler system," the fire chief was saying patiently to Samantha.
"I knew that the dessert was going to be flambéed," Samantha said shakily. "I was afraid that the flames would set off the sprinkler, so—so I switched off the system. I'll never do it again," she added in a small voice. The fire chief's expression softened. "Okay. I'm holding you to that." Nancy was standing a little to the side with Bess and George, frowning. Something seemed wrong to her. She wasn't an expert, of course, but she was pretty sure flambéing a dessert wouldn't set off a sprinkler system. Was Samantha lying? Had she turned off the sprinkler in preparation for what was to come? With a start, Nancy realized the fire chief was now talking to her. "You're a lucky girl. If you hadn't been so quick on your feet, that flour could have burned the whole room down in a matter of minutes." "That powder was just flour?" said George incredulously. "I thought it was some kind of explosive." "It was, in a way. Flour's just like any fine powder. It can be an explosive if the individual particles have lots of air around them," the chief explained. "All it takes is a spark and—well, you saw what happened." Nancy shuddered. "I certainly did." Turning back to Samantha, the fire chief said, "I know this was a cooking demonstration, but can you tell me one more thing? How did that flour happen to fall?" "I don't know, Chief," Samantha said, shaking her head. "Maybe my mother does, but I—I don't think she should be disturbed tonight." Samantha had missed the visiting hours to see Tim, so she had arrived back just after the fire department—or, at least, that was her claim. She had taken one look at her mother and called their family doctor, who had prescribed a sedative and sent Mrs. Tagley to bed. "Well, I was just asking," said the chief, shrugging. "Probably there's a simple explanation." But that wasn't what Nancy thought. The Chocolate Volcano didn't contain any flour. Nancy had seen the recipe. 
And even if the dessert had needed flour, the flour would have been in a canister on the table—not drifting down from the ceiling. One thing was for sure—once everything settled down and the conference room was empty again, Nancy was going to find out how flour got up to the ceiling. It was another two hours before Nancy could go back to the room to investigate. She'd found a ladder in a nearby room and dragged it up onto the stage. She set it up and climbed up to get a close look at the ceiling. Nancy's lips tightened. A small hole had been drilled in the ceiling tile directly above the spot where the demonstration table had stood. So her suspicions had been right! She saw that the tile could be lifted from its frame. She pushed it up carefully and peered into the gloomy crawlspace above the ceiling. There, lying on its side on a beam, was a five-pound bag of flour. Nancy tested the frame that the tiles were set into. It didn't seem strong enough to support a person's weight, but the beam definitely would be. Someone had probably perched on the beam and poured flour through the hole. Someone who knew it would burst into flames in Nancy's face! • • • "It was Mrs. Tagley, Nancy," Bess said decisively, squeezing toothpaste onto her toothbrush. The three girls had crowded into the single bathroom and were going over the case as they washed up before bed. "It had to be. That fainting act of hers was just that—an act. She was out to stop you because you were getting too close to the truth about her poisoning Brock!" "I don't know about that," said George. She finished splashing water on her face and reached past Bess for a towel. "If she was faking it, she's a pretty good actress. But we do know it couldn't have been Tim. He's still in police custody." "You're right," Nancy said from her perch on the edge of the bathtub. "Whoever poured the flour down obviously wanted to stop my investigation. Mrs. Tagley's definitely my strongest suspect right now.
I remember thinking it was odd that she picked me out of the crowd like that. Maybe she had the whole thing planned. Of course, she couldn't have poured the flour while she was standing right next to me. But she could have rigged the bag so that some of the flour would spill during the demonstration. "I'm still wondering about Dan Avery, too," Nancy went on. "He wasn't around during the demonstration. He could have hidden above the ceiling and waited. I went down to the front desk and asked about him after checking out the conference room. The clerk said she hadn't seen him since yesterday. Have either of you?" Bess and George shook their heads. "He's so gross I almost hope he's the culprit," said Bess. "Actually, though," Nancy went on, "there's another person who wasn't at the demonstration tonight—Samantha. And if there's anyone who could have slipped into the conference room with a bag of flour, it's her. No one would question what she was doing." Bess pulled her blond hair off her face with a terry headband and bent over the sink to wash her face. "Yes, but why would she want to attack you, Nancy?" she said, frowning. "Samantha wouldn't have done anything to hurt Brock, so she wouldn't have any reason to stop your investigation. Besides, she asked you to investigate this case—remember?" Nancy nodded. "She doesn't seem to have any kind of motive, either. All I'm saying is that she had the opportunity to rig the conference room. She turned off the sprinkler system, too." "Wait—there's one other person who had the same opportunity as Sam and who wasn't at the demonstration tonight," said George slowly. She'd retreated to the doorway to give the others more room. "Not at it most of the time, anyway—Jake." "Jake!" Bess gasped sharply. "George, you've got to be kidding! Jake wouldn't hurt a fly! Besides, he had a perfectly good reason for leaving the demonstration. He told me he had to finish varnishing a section of floor in the east wing." 
"Hmm," said Nancy, considering. "Jake would have had plenty of chances to rig the conference room. But I can't think of any reason he'd want to poison Brock. So why would he want me out of the way? We certainly can't rule him out, but—" "Yes, you can," Bess cut in, still obviously distressed. "Rule him out right now." "Actually, I think that what I should do now—what we all should do—is get some sleep," said Nancy. "Good idea," George agreed. "But I hope I don't dream about chocolate waffles—or whatever chocolatey breakfast they have in store for us tomorrow morning." • • • "He's doing much better! He's doing much better!" Bess's shriek reached Nancy through a fog of sleep. The next thing Nancy knew, someone was bouncing at the foot of her bed. "Oof!" Groaning, Nancy sat up and rubbed her eyes. "Bess, what's going on?" she mumbled groggily. "Did you just win a million dollars or something?" "No, but listen to this! I woke up early and couldn't get back to sleep. So after I took a shower, I decided to call the hospital and see how Brock's doing. He's off the critical list! He can even have visitors today! So what are we waiting for?" George came stumbling into the room in her red T-shirt. "Only one thing could make you so happy, Bess," she said, yawning and ruffling a hand through her short brown curls. "Jake's asked you to marry him." "Jake? Who cares about Jake?" said Bess, waving away the notion. "I'm talking about Brock, George! He's well enough to have visitors! Nancy was just saying we should get over there right away," she added. "I even got Brock's room number. Four twenty-four." With a resigned sigh, Nancy threw off the covers and got out of bed. "Actually, I never said that, but I do think we should head over there," said Nancy. "After breakfast." • • • Oakwood Hospital turned out to be tiny—so tiny that when Nancy mentioned the purpose of their visit at the reception desk, the receptionist asked, "Are you Nancy Drew?" 
"Uh, yes, I am," she answered, a bit taken aback. "How did you know?" "One of the police officers who was here earlier—Ullman, I think his name was—said it would be okay for you to visit Brock even though you're not a member of the family." The young woman glanced sternly at Bess and George. "He didn't say anything about your friends, though." "Oh, but we've got to see him!" Bess wailed. "My associates usually accompany me for every facet of an investigation," Nancy said quickly in her most official-sounding voice. The receptionist wouldn't bend the rules, though. Taking the pass the young woman gave her, Nancy took the elevator up to the fourth floor. "Let's see," Nancy murmured aloud, scanning the room numbers as she went down the hall. "Four eighteen—four twenty— There it is." Brock's room was at the end of the hall. To Nancy's surprise, there was no police officer standing guard outside. Someone was fumbling with the door handle, though—a heavyset man in a lab coat. He half turned at Nancy's approach. Nancy gasped. It was Dan Avery! I've got to stop him! an inner voice screeched. He's sneaking in to finish Brock off!

## Chapter Ten

"MR. AVERY! What are you doing here?" Nancy demanded. Horror filled Dan Avery's face as he turned and recognized her, but he didn't stop to answer. Whirling around, he fled rapidly down the corridor. Nancy dashed after him. Farther down the hall she glimpsed a burly police officer ambling toward Brock's room with a cup of steaming coffee in his hand. "Stop that man!" Nancy shouted, pointing at Dan Avery. "He was breaking into Brock Sawyer's room!" Startled, the officer halted in his tracks—and in that split-second of indecision, Dan Avery scrambled left down a staircase and disappeared. Biting off a cry of frustration, Nancy raced down the hall herself. At the top of the stairs she slid on a slippery patch of floor, nearly colliding with the police officer. "Hey!" he yelped in pain as scalding coffee spilled onto his hand.
Nancy didn't stop—she continued her race down the steps after Avery. Over the thudding of her heart, she could hear his footsteps pounding down the staircase below her. Then she heard a woman's voice shouting, "No! That's an emergency exit!" Too late. Avery had already crashed the emergency door open and gotten away. The shrill beeping of the security system started instantly. A moment later Nancy could hear the door slamming shut. When she reached the bottom of the stairs, Avery was gone. "Oh, no!" Nancy groaned aloud. "I can't believe it!" "I saw him, miss! I got a good look at him!" A middle-aged woman wearing a pale blue uniform and carrying a can of disinfectant came rushing up to stand at the second-floor landing. "He was a heavyset man, kind of balding," she called to Nancy. "He seemed to be in an awful hurry." Just then the police officer came skidding into view, panting from exertion. "Just what do you think you're up to, young lady?" he gasped. Then he yelled over his shoulder, "Can't someone please switch off this ridiculous noise?" A couple of seconds later the security system fell silent. "Now," the officer began again, glaring at Nancy. "Tell me what's going on." "I caught that man trying to break into Brock Sawyer's room," she explained. "I think he may be the person who poisoned him." The police officer—his name tag read Officer Webley, Nancy noticed—gave her a long, dubious look. "And what's your connection with Mr. Sawyer?" he asked skeptically. "Fan of his, are you?" "I'm a private detective." Nancy quickly filled the officer in on her involvement with the case so far. "I haven't seen Dan Avery in the inn since yesterday," she finished. "Whatever he's up to now, it couldn't possibly be good for Brock." "Well, let's not jump to conclusions," said Officer Webley in a patronizing voice. "Maybe you got mixed up. Whoever you saw going into Mr. Sawyer's room probably works here at the hospital."
"A hospital employee wouldn't run away," Nancy pointed out, trying not to lose her patience. "Anyway, why wasn't there a guard stationed at Brock Sawyer's door? That's pretty loose security for a celebrity like him, isn't it?" Officer Webley was suddenly uncomfortable. "Uh, I'm supposed to be the guard at the door," he admitted. "I just stepped away for a second to get a cup of coffee. I—uh—I'll check into your story, miss, okay?" "Okay. But I need to speak to Brock Sawyer, before any more time goes by," she said, showing him her pass. "Well, I guess a short visit couldn't hurt," the officer said reluctantly after examining the slip of paper. "Great," said Nancy. "Thank you very much. Oh, and my two associates will be joining me," she added. "Your associates? Where are they?" Officer Webley looked around as though he expected to see them in the stairwell. "Down in the lobby waiting for me," Nancy replied. "Why don't you come with me, so you can clear our visit with the receptionist there?" They found Bess and George looking immensely bored as they scrutinized the gift-shop window in the reception area. "Is Brock really going to see us?" Bess asked a few minutes later. Officer Webley had spoken with the young woman at reception, and the four of them were riding the elevator back up to the fourth floor. "This is so cool!" Behind Officer Webley's back, Nancy gave Bess's arm a warning squeeze. "I'm sure he'll be glad to help us with our investigation," she said meaningfully. "And I've promised the officer that we won't stay long." Nancy was relieved that Officer Webley decided to station himself outside the door rather than join them in the room. Otherwise he might have started wondering exactly what kind of a detective Bess was. "Oh, you poor thing!" Bess cooed, practically flying over to Brock's bedside. "Do you still hurt anywhere? Gosh, it's great to see you again!" Brock was a little pale, but other than that he seemed to be back to normal. 
He grinned at Bess from his pile of pillows. Then he waved at Nancy and George, who were pulling over some chairs. "With such a charming cheering squad, it's impossible not to feel better. How are you all doing? And how's Samantha?" Bess's smile flickered a little. "She's fine. Worried about you—but of course she's got a lot on her mind." Bess's tone somehow managed to convey the suggestion that Samantha was too busy to be thinking much about Brock. "What with this fire and all, she's really—" "What fire?" Brock cut in. He propped himself up on his elbows, concern making his features look even more rugged than usual. Nancy filled him in. "Samantha has asked me to investigate the case," she finished. "That's why I wanted to talk to you as soon as I could. We've got to figure out who might want to kill you, Brock." He leaned back dejectedly against the pillows again. "It sounds so weird to hear you say that," he said. "Until a couple of days ago I didn't know I had any enemies, let alone one who wants me dead! I mean, what could I have done to make anyone so angry?" "Well, there's something your father did that might have made Mrs. Tagley very angry," Nancy said hesitantly. "Do you know about that?" "I do, and believe me, I'll never forgive my father for treating another human being that way," he said sincerely. "But I've already talked about my father with Mrs. Tagley," Brock went on. "About two years ago—at the end of the summer I was dating Samantha—Sam's mother and I hashed the whole thing out." "You did?" Nancy asked, arching a brow. "She didn't mention that to me." Brock shrugged. "Maybe that's because she and I agreed to put the whole business out of our minds. It was a terrible thing, but it's over now. I may not be Mrs. Tagley's favorite person, but I'm sure she doesn't hate me enough to poison me." Nancy mentally flipped through her list of suspects. "What about Tim?" she asked. A dark look came into Brock's blue eyes. 
"If I had to put money on anyone, I'd pick Tim as the culprit," he said slowly. "You saw that fight we had, but you haven't seen all the little ways he's tried to provoke me. Making fun of me under his breath, intercepting my phone messages, sending room service to my bedroom at four in the morning. Nothing you can really get mad about, but it's been a real drag. I don't want to sound paranoid, Nancy, but Tim's been against me all along." "And Jake?" asked Nancy. "There's some evidence that points to him." Brock was startled. "I thought he was on my side. He's been really nice and polite." "What about Dan Avery?" asked George. "Who?" asked Brock blankly. "A guest named Dan Avery," Nancy explained. "I caught him trying to break into this room half an hour ago. You must have seen him around the inn." "He's hard to miss," Bess added. "Stumpy-looking, with greasy hair and beady little eyes. Sort of like a sleazy woodchuck." "Sounds charming," said Brock, chuckling. "I can't wait to meet him. But I don't think I have met him yet. Never even heard of him. And I certainly have no idea why he'd want to kill me." "Well, thanks for your help. We're glad you're better, at least," said Nancy, straightening up. She'd been hoping to come up with more leads, but Brock hadn't added much to what they already knew. "Did you eat or drink anything unusual the night you were poisoned? The chocolates came up clean, you know, so you must have taken the poison in some other food." "I can't really think of anything," Brock said, shaking his head. "I had exactly what Sam had. In fact, she brought me my plate of food from the buffet line. I was afraid that I'd pig out if I went up there myself." Nancy, Bess, and George exchanged a quick glance. What Brock had just told them was more important than he realized. If Samantha was the last person to handle Brock's food before he ate it, the finger of suspicion pointed very strongly in her direction now. 
But Nancy didn't think she should mention this detail aloud—not until she had more to go on, at least. There was no point in upsetting Brock unnecessarily. "Thanks again for your help" was all she said. "And we'll come to see you again very soon," Bess added eagerly. • • • "This case is turning out to be tricky," said Nancy as she and her friends walked out to the parking lot. "I haven't been able to narrow down our list of suspects at all." "You've never blown a case yet, Nancy," George reminded her. "I'm sure you'll turn up something. You always do." "Well, I wish I knew where to turn next," Nancy said, half to herself. The three girls had almost reached the row where Nancy's Mustang was parked. "It all seems so—" Nancy stopped in her tracks. "Look!" she gasped, pointing across the parking lot. "There's Dan Avery!" He was just unlocking a car door. "We've got to catch him!" Nancy cried. She and George took off across the lot at the same time. "This time, he's not going to get away!"

## Chapter Eleven

AT THE SOUND of Nancy's voice, Dan Avery dropped his keys and bolted. This time, though, Nancy had started running before he had—and it was two against one. While Nancy was making a beeline for Avery, George dashed around the other side of the parking lot to head him off. It took only a couple of minutes before they had him trapped. For a minute it looked as if Avery was going to fight. But while aiming a totally ineffectual punch at Nancy, he slipped and fell flat on his back, and lay there panting from the exertion. "Help me hold him, George," Nancy gasped, struggling to pin Avery's feet down. Bess had just caught up to them. "I'll get his arm," she called, panting. In a few seconds the three girls had Avery totally immobilized. "And now," Nancy said, "now you're going to tell us what's going on." "No way," said Dan Avery sullenly. "I'm not wasting my time explaining myself to a bunch of hysterical teenage girls. Let me up or I'll report you to the police."
"The police know about you already," said Nancy grimly. "The officer guarding Brock is on the lookout for you right now." It wasn't exactly true, but Avery wasn't in much of a position to question her. "So you might as well talk to us." Glaring at her, Avery said defiantly, "There's nothing to talk about. I'm just doing my job. And you'll be sorry if you get in the way." "What are you talking about?" asked Bess incredulously. "Murdering people is your job?" "Murdering people?" Dan Avery stared back at the girls just as incredulously. "What are you talking about?" "Your attempted murder of Brock Sawyer," Nancy answered flatly. "And of me." Suddenly all the color drained from Dan Avery's face. "Attempted—you suspect me?" he sputtered. "You think I . . ." His voice trailed off, and he shook his head wordlessly. "I'm a reporter," he said at last. "I'm just trying to get a story. I-I'm not a murderer!" He sounded sincere, but Nancy wasn't convinced. "Maybe you'd better tell us about it, Mr. Avery." "Sure, sure." Now Dan Avery seemed pathetically eager to comply. "But could you let me up? It's hard to talk when the three of you are pressing me into the asphalt." Nancy, Bess, and George cautiously took their hands off him and stood up, brushing the dust from their clothes. Rubbing a shaky hand over his sweaty face, their captive got slowly to his feet. "I'm a reporter with the Midnight Examiner," Avery began. "Well, the Examiner is probably the only newspaper in the country that wasn't invited to the Chocolate Festival." "Wait a minute," said Nancy. "Didn't Brock say something about the Examiner—about the stories you've been running on him?" The stocky reporter nodded. "That's right. We've been giving him a hard time, I guess—but, hey, he's famous. It's the price you pay when you become a star. Anyway, everyone knows the Examiner's not some big, serious paper like the Chicago Tribune. It's just a fun read!"
"I guess Brock didn't feel that way about it, though," said George dryly. "Uh-huh. That's why he warned us to stay away. But my editor thought it would be a great scoop if we could sneak in anyway. A great scoop." His mouth widened into a big smile. "Get it? Like a scoop of chocolate ice cream? It was going to be the headline." He looked from face to face, but none of the three girls was smiling. "Uh, anyway, I knew I'd never get a legit invitation, so I—well, I kind of got one from someone who owed me a favor." "What do you mean?" Nancy asked warily. Avery scratched his balding head before answering. "I sort of—persuaded another reporter to give me his invitation. He works for another paper, see, and one time I slipped him a couple of celebrity photos from the Examiner's files when he was in a tight spot. So he owed me. "So anyway, I sneaked in, using this other guy's invitation. And I've been waiting for a big—er—scoop ever since. His getting dipped in the chocolate was good," he recalled, "but what I was really looking for was a big juicy story about Brock being poisoned. Boy, would our readers go for that!" Nancy was completely disgusted. It sounded as if he actually enjoyed ruining people's reputations. "So what are you doing here at the hospital?" Nancy asked icily. "Wasn't the story back at the inn juicy enough?" "Yeah, but I didn't have any pictures. I wasn't there when Brock got zapped—took the poison, I mean. I was—well, to tell you the truth, I was feeling sick to my stomach. Too much chocolate, I guess," said Avery sheepishly. "Since I didn't have any photos from the actual poisoning, I decided that a few shots of Brock in his hospital bed would be the next best thing. "I've been camping out here for the past day," Avery went on, "waiting for a chance to sneak in. That's what you caught me trying to do a little while ago." He gave a little shrug. "So there we are. A man's got to earn a living, you know." "One more question," Nancy told him. 
"Did you touch any of Brock's food before he got sick? Or his utensils?" "Uh, actually I did more than touch the guy's food. I helped myself to a little of it. Not really his food," the reporter added hastily. "Just some of that weird artificial sweetener he took around with him. I put some into my coffee at lunch-time when he was busy signing an autograph. I know it sounds dumb, but I wanted to cut a few calories. "Well, if that's it I guess I'll be going." Dan Avery started off in the direction of his car, but Nancy grabbed his arm. "Wait a minute," she said suddenly. "You say you took the sweetener at lunchtime, Mr. Avery?" He nodded. "And you were feeling sick at dinnertime?" "More than sick!" Avery said emphatically. "I mean, I was—ah—really indisposed all afternoon." "Then it might be the sweetener!" said Nancy excitedly. "If it made you sick, it could have poisoned Brock. This could be the break I've been looking for!" George was looking at her curiously. "So someone put the poison in the sweetener?" "That's got to be it!" Nancy exclaimed. "Let's get going, guys. Bess and George, could you head back to the inn and see if you can track down that jar of sweetener? Maybe Mr. Avery could give you a ride back—" "Be delighted to," said the reporter with a big grin. "It's the least I can do." Behind him Bess was giving Nancy a disgusted look that said "thanks for nothing." "Where are you going, Nan?" George asked. "To the police lab," she replied. "The lab technicians and I are going to have a little chat. About poison." • • • "I think I can get you in to see Dr. Demado," said a young man at the reception desk. He led Nancy down the hall to an office. Dr. Demado turned out to be a calm, gray-haired woman in a business suit. "Of course I've heard of the Oakwood case," she said when Nancy explained why she'd come. "Calomel poisoning, right? As far as I know, we haven't traced the source yet." "But I've just found something out." Nancy went on to tell Dr. 
Demado what she'd learned from Dan Avery. The chemist whistled. "No wonder he felt so sick! A dose of calomel could really lay a person flat." "But how could one poison have caused two such different reactions?" Nancy inquired. "Calomel definitely could," Dr. Demado said with a firm nod. "Do you remember what Brock was using the sweetener for?" "Iced tea," Nancy told her. "Iced tea with lemon. And coffee. I saw him use the sweetener in that, too." "Calomel breaks down into a poison when it comes into contact with acid," Dr. Demado explained. "Acid like the lemon in Brock Sawyer's tea." "And in the chocolates he tasted," Nancy suddenly remembered, growing more excited. "They were lemon truffles. And he ate two of them before he collapsed." "So he got a double dose of acid," Dr. Demado mused, shaking her head. "Wait a minute," said Nancy. "Let me catch up to you." Rapidly she summarized what she'd heard so far. "Someone dumped calomel into Brock's artificial sweetener. Brock and Mr. Avery both used the sweetener, but neither of them noticed that it had been poisoned because calomel is tasteless. It made Mr. Avery fall sick because that's what calomel does. But it poisoned Brock because he took it with the acid in his tea and in those lemon truffles. Is that right?" "Right." There was still one piece missing from the puzzle, Nancy realized. "But where would someone get calomel?" she asked. "Now, that's something I can't answer," said the chemist. "It was taken off the market as an internal medicine years ago—precisely because it was so unstable. Possibly your poisoner found it in an old medicine cabinet somewhere?" Nancy nodded, remembering the walk she, Bess, and George had taken through the inn's east wing. Some of the rooms there had looked as if they'd been left untouched for years—including a couple of bathrooms. It wasn't uncommon for people to hold on to old medicines they should have thrown out. 
So the poisoner might have been able to dig up calomel pretty easily— Abruptly Nancy thought of something else. "Wait," she said aloud. "Would the poisoner have known Brock was going to be eating something with acid in it? Those truffles were kept secret until Samantha unveiled them. Besides, is there anyone at the inn who knows that calomel turns into a poison when it reacts with acid? That seems a little hard to believe. . . ." Nancy slumped down in her chair as her excitement drained away. "Whoever put calomel into Brock's sweetener may not have meant to kill him at all," she said in despair. Dr. Demado eyed her curiously. "Why is that bad?" she asked. "Oh, in terms of the poisoner's guilt, it's not bad at all," Nancy said quickly. "But if it's true, it means I've got to start looking for a different motive. "I've been on the wrong track all along!"

## Chapter
## Twelve

AS NANCY DROVE BACK to the inn, she hardly noticed the scenery. Her mind was circling around the newest development in the case. From the poisoner's point of view, putting calomel in Brock's artificial sweetener made a lot of sense. No one else would take it. He or she could be guaranteed that at some point Brock would use it. But Nancy was sure that even the poisoner didn't know that calomel would turn into a poison when it reacted with the lemon juice in the tea and the truffles. After all, Nancy knew a fair amount about poisons—more than the average person, at least. And she had never even heard of calomel, much less that it could turn poisonous in the presence of acid! No, whoever had used the calomel had probably intended to make Brock feel sick—and to ruin the truffle-tasting event. If that was true, that person's goal might be to sabotage the festival—not to kill Brock. So I'm back to square one, Nancy thought, banging the steering wheel in frustration. She had to figure out who would want the Chocolate Festival to fail. Quickly she ran down her list of suspects again.
Perhaps Samantha had found the stress of running the festival to be too much. She might have decided to end it any way she could. Mrs. Tagley had a motive for wanting the festival to end, too. She seemed to feel that she and Samantha were in direct competition for control of her inn. Ruining the festival would be a good way to make Samantha look as if she couldn't handle things without her mother. Then there was Tim. He had every reason to resent the demands the festival was making on Samantha's time. "On top of that," Nancy said aloud, "there may be suspects I haven't started suspecting yet—a whole inn full of them."

• • •

"No sign of Brock's sweetener," George announced when Nancy let herself into the girls' suite a short while later. She had spread her lean frame out on the couch and had a book propped up on her stomach. "We hunted through the kitchen until the chef was ready to wring our necks. But it's gone." "Jake even pitched in and helped for a while," Bess called from her room, where she was lying on her bed with a magazine. "You know, he's really a sweet guy. I wonder if I'm making a mistake concentrating on Brock so much." "It probably doesn't make much difference, considering that your relationship with Brock is completely in your head," said George. "We've got to find that jar of sweetener," said Nancy. "It could be the key to everything." She recounted what Dr. Demado had told her. "So we're not dealing with poison, we're dealing with sabotage," said George, her brown eyes wide. "That's right," answered Nancy. "We've got to determine who hates the Chocolate Festival enough to ruin it." She let out a sigh. "We don't know whether the culprit used the calomel because it was the first thing he or she came across, or whether he or she chose it on purpose. "We don't even know for sure that the calomel was in the sweetener," she added. "That's why we've got to find that jar." Nancy started pacing around the little room.
"While everyone's busy with the festival, I'm going to check all the Tagleys' rooms for it." "What if someone walks in on you? What are you going to say?" Bess asked nervously, getting off her bed and joining Nancy and George in the living room. "That's not going to happen," Nancy told her with a grin. "Because you and George are going to be my lookouts. I know you'll be great at fending people off while I'm poking around under the Tagleys' beds." "We'd better come up with some kind of excuse, don't you think?" Bess whispered a few minutes later as the three girls headed toward the stairs that led up a flight to the Tagleys' suite of rooms. The fourth-floor hall was hushed and shadowy. Nancy felt as if they were in the middle of a ghost story. "Maybe I can say I dropped an earring—" Bess suggested. "And it just rolled up four flights into the Tagleys' wing?" George finished for her. "I doubt they'll go for that. If anyone comes up here, let's just try to distract them." Nancy held her breath as she twisted the knob of the first door they came to. It swung open easily. "Thank heaven for friendly family inns like this one," said George with a chuckle. "What a pretty bedroom!" Bess commented. It was furnished entirely in antique cherry furniture, and on the floor was a faded but still handsome Oriental rug. From the framed pictures of Samantha and Jake that lined the walls, Nancy guessed this was Mr. and Mrs. Tagley's room. "Look, this must have been taken when Jake was about four years old," said Bess, pointing to a picture of a sunny-faced little boy in a cowboy suit. "What a cutie!" "Hey, get outside," scolded Nancy with a laugh. "You guys are supposed to be standing guard, remember?" "Oops, sorry!" Bess scooted out of the room to stand with George. Nancy quickly searched the room. No sweetener in the closet or any of the bureau drawers. None under the bed or any of the furniture, nor in the medicine cabinet of the bathroom adjoining the bedroom. 
After a few minutes she decided she was wasting her time. "No luck," she said, closing the door carefully behind her. "Let's try another room." To her relief, the next door they tried was also unlocked. This room was obviously Jake's. "What a mess!" George marveled, staring at the piles of books and magazines on the floor. The desk was cluttered with papers, and the unmade bed was piled high with laundry. "Anyone who wanted to break in here would give up and leave, thinking someone had already beat him to it." Nancy gave George a friendly jab on the shoulder. Nancy sifted through piles of wadded-up shirts, peered cautiously around precariously balanced stacks of books, and dug mountains of debris out from under the bed before shoving them back again. The whole search would have been a lot easier if she had dared to clean up the room, but, of course, that was impossible. She had just decided to give up when Bess leaned into the room. "Nancy, hurry!" she begged. "You've been in there for ten minutes!" "Okay. I'm done—at least, I think I am. There may still be a pile of laundry I didn't paw through, but I don't think so." "Couldn't you just skip Samantha's room?" urged Bess. "I'm sure she didn't take the jar. I just know someone's going to discover us any minute. And besides, it's lunchtime!" "I can't quit now." Leading the way, Nancy rounded a corner onto a sunny corridor lined with windows. Seeing another door there, Nancy tried it. Unlike Jake's, Samantha's room was in pristine order, with a dainty canopy bed and white-painted furniture. "This won't take long, anyway," Nancy muttered to herself as she began pulling out bureau drawers. "Come on, Nancy!" Bess urged. She was dancing up and down with impatience. "You're taking forever!" "It's only been about three minutes," Nancy protested as she pulled open the closet door. "Just give me a chance to—" Nancy froze. "Oh, no," she whispered. 
Tucked into the back of the closet, behind a pair of leather tennis shoes, was the jar of sweetener.

## Chapter
## Thirteen

NANCY PICKED UP the small jar and hurried back into the hall. "Guys, I hit the jackpot!" Bess's mouth fell open. "In Samantha's room? I don't believe it." In her surprise she seemed to forget they could be found out any minute. But George grabbed her arm and dragged her down the hall. A minute later they were back in their suite with the door safely shut. Bess plopped down on the couch. "I just don't believe she put that jar there, Nan," she said again. "Someone's framing her. Why would she hold on to something so incriminating? Besides, Samantha wouldn't poison a guy she used to be in love with." Nancy went to get her purse from her room and tucked the jar of sweetener safely inside. "I hope you're right, Bess." "I hope so, too," said George. "But how are you going to prove it, Nan?" "I'm not sure. Right after lunch I'll take the jar to the police lab. I want them to tell me whether it's actually got calomel in it before I start talking to Samantha." She checked her watch. "We're already late. We'd better get down to the dining room."

• • •

"You were right, Nancy," said Officer Sherbinski, coming into the waiting area of the police lab. She held up the jar of sweetener Nancy had given her. "This sweetener has been laced with calomel. I'm afraid this may implicate Samantha." Nancy nodded. She'd been waiting for the better part of the afternoon, but it had been worth it. "By the way, Tim Krueger has been released," the technician added. "Why? Lack of evidence?" Nancy asked. The officer nodded. "This jar of poisoned sweetener is our best evidence." "It's not enough to arrest Samantha, though," Nancy put in quickly. "No. But it gives us a very good reason to question her further." After thanking Officer Sherbinski, Nancy went back outside. If I could only find the jar of calomel itself!
she thought as she climbed into her car and switched on the ignition. But whoever had found the calomel originally had surely gotten rid of it by now. Then again, Nancy would have expected the poisoner to throw the sweetener away, too. It was almost six-thirty by the time Nancy reached the inn again. When she opened the door to her suite, Bess and George weren't there, but something else was. Nancy saw that a note with her name on it had been slipped under the door. It was printed on cheap stationery that had been folded in half. Nancy unfolded it and read the message inside.

If you want to know more about the poison, meet me in the east wing at 7:00 P.M. tonight.

It was signed "A Friend." The handwriting was utterly without character. Nancy couldn't begin to guess whether it had been written by a man or a woman. Note in hand, Nancy walked toward her bedroom. What the— She hurried over to her bed and snatched up a second piece of paper. This note was from Bess and George.

Nan,
We're down in the dining room. Don't skip dinner just for a case! Meet us there!
B&G

Nancy grinned to herself. She was going to skip dinner. But she'd slip down to the dining room first to ask them to cover for her. But Nancy didn't get to leave the dining room as fast as she'd planned. "Tim!" Nancy exclaimed. He was walking toward the Tagleys' table arm in arm with Samantha. "It's great to be back," Tim said warmly. "I never would have thought I'd miss the Chocolate Festival. But it only takes a second or two of being in police custody to make you appreciate what you've got." "When did they release you?" Nancy asked. It was Samantha who answered. "Last night," she said happily, giving his arm a squeeze. "He's had a twenty-four-hour vacation from the festival, so now he's extra-ready to help me out again." She leaned teasingly against her boyfriend and ruffled his dark hair. "Aren't you? Now can we please eat? I'm starving."
They continued on to the table, but Nancy stayed rooted where she was. A horrible thought had just struck her. "Nancy! The appetizers are on their way!" Bess's voice shook her back to reality—sort of. She started toward her friends, who were sitting at a table by themselves. "And they'll be a lot more appetizing if they're not made of chocolate," said George as Nancy reached the table and pulled out a chair. "I'm getting a little sick of eating dessert before my main course—" Suddenly she stopped speaking and peered more closely at Nancy. "What's the matter, Nan?" "I just talked to Tim," Nancy said. "But—but it's great that he's not being held anymore, isn't it?" Bess asked. "Yes," Nancy said slowly, "but he was released last night—not today." "So?" Bess bit into a chocolate-iced roll. "So he could have been the one who poured that flour all over me. He had plenty of time." "Oh, no," said George. "I was really hoping Tim wasn't a suspect." "Me, too," Nancy agreed. "But I guess we can't rule him out yet." She sighed. "This is terrible. I have too many suspects!" "Maybe eating something will make you feel better," suggested Bess. Nancy shook her head. "No, thanks. I'm not eating. I've got a rendezvous instead." "A rendezvous?" echoed George. In a low voice Nancy explained. "Nan, you can't meet a total stranger," Bess protested in a worried voice. "It could be the murderer!" "What murderer?" asked Nancy. "No one's dead that I know of. It's probably just someone who wants to tell me something about the case in a place where we're not likely to be overheard. Now, come on. Can you guys really see me not meeting this person, whoever it is?" Both of her friends shook their heads. "Would you guys cover for me again?" Nancy asked. "If anyone asks for me, don't tell them what I'm doing, okay?" "No problem," George assured her. "Want us to save you something to eat?" "No, thanks. I can call room service later." Nancy chuckled. "Maybe Mrs. 
Reames will have some more gossip for me."

• • •

If the Tagleys' living quarters had been a little creepy, the east wing at night was positively frightening. The moonlight streaming through the windows gave the only light. It shone down onto the huge, empty rooms filled with signs of construction—ladders, scaffolding, cans of joint compound and paint, and ghostly white tarpaulins draped here and there. Nancy shuddered. Don't get nervous, she reminded herself. You're here for a reason. But she still didn't know exactly where in the east wing she was to meet her invisible "friend." Nancy squared her shoulders resolutely as she walked from one dark, deserted room to another. She was so determined to stay calm that when the noise first sounded, she told herself she hadn't heard a thing. But then it happened again. And this time Nancy knew she wasn't imagining things. It was a soft, gentle tapping. In broad daylight it would have sounded like nothing more than a child knocking on a door. In the darkness it sounded like a ghostly summons beckoning Nancy forward. "Stop it," Nancy scolded herself aloud. "It's just the floors settling or something." Tap tap . . . tap tap . . . tap tap . . . No, that was too regular to be the floors. Someone was making that sound. Could it possibly be a signal luring her to the meeting? Nancy tiptoed toward the doorway of the vast, empty room she was standing in and poked her head out into the corridor. There was no doubt about it—the sound was coming from down the hall. Moving as silently as she could, Nancy slipped down the hall. Always the tapping seemed just a few feet ahead of her, but she couldn't seem to reach it. She followed the sound down the hall, around the corner, and into yet another shadowy room. Then, abruptly, there was silence. Nancy took a tentative step forward—and froze in fear. Just in front of her the room's floor had been ripped out. The space was like a bottomless black pit.
A few more steps, and she would have plunged into it! Nancy's heart was pounding. I've been led into a trap! Someone lured me here to— Just then there was a scrabbling sound behind her. Nancy whirled around—and screamed. From out of the darkness a razor-sharp wood chisel was hurtling straight at her!

## Chapter
## Fourteen

"NO!" NANCY SCREAMED. She jumped out of the chisel's deadly path and felt as if someone had pulled the floor out from under her. She was plummeting through the gaping hole! It all happened in a flash. Almost before she realized she was falling, Nancy's flailing arms had grabbed a beam and she jerked still. Gasping for breath, she clung to the beam. She didn't dare look down. Below her, she knew, yawned the cavernous space of the subbasement. The only thing that would keep her from smashing to the stone floor below was her own strength—and already her muscles were shrieking with agony. As the panic subsided, she realized something was stabbing into her hand—probably a nail sticking out from the beam. Carefully Nancy moved her hand a fraction to the right. Better. Then, warily, she raised her eyes. She couldn't hear her assailant anywhere. Was he—or she—lurking above her, waiting for her to drop? Waiting to kick her hands off the beam if she made a move? Nancy suddenly remembered something else. The chisel! She desperately searched her memory to see whether she had heard it drop through the floor, but she couldn't remember. Had her attacker found it? Was the chisel poised to strike again? She listened again—and heard no sign of anyone else nearby. I can't hang here forever, Nancy told herself. Even if someone was up there, the risks of climbing back up were a lot better than what would happen if she dropped into the subbasement. But getting up on the beam was easier said than done. It took three agonizing tries before Nancy was able to hoist herself onto it.
Precariously balanced, and feeling as if she might fall with every movement, she began to creep to safety. The hand that had been pierced by the nail was throbbing now, and her muscles ached. Nancy felt sick with pain and fear. But inch by dreadful inch she moved along until at last she had reached the edge of the hole. Trembling with relief, she crawled onto solid ground and collapsed onto the floor. For a minute all she could do was lie sprawled against the floor, breathing deeply. Then she pulled herself together and sat up. She peered down the shadowy hallway. There was neither sight nor sound of her attacker. Whoever had set this trap was gone. That had been a close call. But if her assailant thought she'd back off now, he—or she—had another think coming! Trying to kill her that way had been a desperate move. And now that the culprit was desperate, it was time for Nancy to make her own move—one that would send the criminal over the edge.

• • •

"Nancy, you missed the best mocha sorbet—" Bess's smile turned to a look of shock. "What happened to you?" Nancy had come directly to the dining room from the east wing. Luckily the dining room was nearly empty now. But a few late diners were staring at Nancy's dusty clothes and bloody hands. At least the Tagleys, Samantha, and Tim were gone, Nancy noted with relief. "You look as though you've been crawling through construction or something," George added. "That's pretty much what I have been doing," said Nancy wearily as she dropped into a chair. "I'll fill you in in a second. Just let me catch my breath." "Did you meet whoever sent you that note?" asked George. "Well, yes and no. I think that now it's time to get tough." "How?" Bess asked. Just then a waiter walked up to the table. "Can I get you anything?" he asked Nancy politely. "Just some information. Were you working here when Brock Sawyer was poisoned?" "No, but another waiter on duty tonight was. Do you want me to get him?" "That would be great."
A young waiter with dark, spiky hair appeared shortly. "Do you have time to talk to me for a second?" Nancy asked him. "I guess so," he answered with a quick glance around the dining room. "Things seem to be winding down here." "Thanks a lot. This won't take long," Nancy assured him. "I understand you were on duty when Brock Sawyer was poisoned?" "I was. What a horrible thing!" "It looks now as though an artificial sweetener that Brock used in his tea and coffee was what poisoned him," Nancy went on. The waiter's eyes grew wide. "You mean that powdery stuff? Boy, I'm glad I didn't try any! I never trust health food." Nancy couldn't help laughing a little. "Actually, someone added the poison to the sweetener," she explained. "That's what I'm trying to find out. You didn't notice anyone besides Brock handling the jar of sweetener, did you?" The waiter thought for a moment. "Besides Mr. Tagley, you mean?" "Mr. Tagley?" Nancy asked incredulously. "Jake, that is. He likes the staff to call him Mr. Tagley." "I didn't realize that," Nancy said, half to herself. "Well, anyway, Mr. Tagley took it out to the kitchen. Day before yesterday, I'm pretty sure. He said it needed a refill." Nancy shot her friends a meaningful glance. That was the day Brock was poisoned. "Do they keep refills of Brock's sweetener in the kitchen?" asked George in surprise. "I've never seen any," the waiter said, shrugging, "but Jake must know the kitchen a lot better than I do. He came back with the refill right away." The waiter glanced at the clock on the opposite wall. "I should really get back," he said. With a quick smile, he walked away. "Let's go upstairs," Nancy said to Bess and George as soon as the waiter was out of earshot. "We'll be able to talk a lot more easily without people leaning over our shoulders." "So Jake refilled the sweetener," Nancy mused thoughtfully when the girls were in the elevator heading upstairs. "I wonder if—" "You're not accusing him of being the poisoner, are you?" 
Bess cut in. "Because I just know he's not." Nancy smiled slightly. "If it were up to you, Bess, no one would be the culprit." The elevator door slid open, and the girls started down the third-floor hallway. As soon as they reached their suite, Nancy began peeling off her grimy clothes. "Anyway, I'm not accusing Jake of anything," Nancy continued. "It does seem significant that he handled the sweetener just before Brock was poisoned. But I've got to take a shower and put some disinfectant on my hand before I even think about this case." Fifteen minutes later—showered, dressed in clean shorts and a T-shirt, and feeling a hundred percent better—Nancy sat down in the suite's living room with Bess and George and told them what had just happened to her. Bess's blue eyes were full of tears when Nancy finished. "Nan, you could have been killed!" George didn't seem to even hear her cousin. Her brow was furrowed as she asked, "Remember the night we saw Jake playing darts?" Nancy nodded. "So?" "So someone aimed that chisel pretty well, that's all," George answered. "You're right!" Nancy exclaimed. "I didn't even think of that, George!" Bess was angry. "You don't have a shred of proof, either of you!" she stated emphatically. "That's right, we don't," agreed Nancy. "That's why I came up with a new plan while I was in the shower. It ought to help us even if we don't have any proof." "Well, what is this plan?" asked George. "How can we help?" "First, did you guys notice anyone in the Tagley family leave the dining room during dinner?" said Nancy. "Let's see," George said thoughtfully. "They were all in and out. Weren't they, Bess?" "Except Jake," Bess said with a triumphant smile. "He came in a couple of minutes late, then stayed for the whole meal. But both Mr. and Mrs. Tagley left a couple of times, and so did Samantha. Come to think of it, Tim did, too. He was bringing in some kind of speaker system for the dance tonight. I think we should definitely go to that, by the way. 
It'll probably be fun." "I doubt we'll have time to get to the dance," Nancy said apologetically. "I have a feeling we're going to be busy this evening. "But I've got to make one call to set things up for my plan," Nancy went on. She opened the local directory that lay next to the phone on the coffee table and looked up the number of the hospital. Then she picked up the receiver and began to dial. "Hello, may I please speak to Brock Sawyer?" she said when the hospital switchboard answered. After a short pause the actor's voice came on the line. "Hello, Brock? This is Nancy Drew. . . . Fine, thank you. And you? . . . Oh, that's good. Listen, Brock. I've come up with a plan to trap the person who poisoned you, but I'm going to need your help. And I think I'm going to need a doctor's permission, too." A few minutes later Nancy hung up and turned excitedly to her friends. "Now we'll run through my plan—and then we start rehearsing."

• • •

The living room clock was just striking nine as Nancy walked gravely into the room and closed the double doors behind her. Waiting for her were Bess, George, and the group of people who had assembled at Nancy's request. Mrs. Tagley was there, sitting on the faded love seat by the fireplace. Samantha leaned against one wall, worriedly fingering the silk of her blue dress. Tim sat in a chair next to her, his head in his hands. And Jake made a determined effort to flip through a magazine despite the tension in the room. From far down the hall the lilting strains of ballroom music could be heard. Samantha checked her watch. "The dance has already started," she said. "I hope you can let me get back soon, Nancy. I don't want to leave my stepfather to run things there too long without me. "Would you mind telling me what's going on, Nancy?" Mrs. Tagley asked angrily. "I've got a lot of work to do, too, you know!" She began tapping her high heel impatiently on the floor.
"I'm sure Nancy's got a good reason for bringing us all together," said Jake. "Well, I hope we're not in for some kind of interrogation," Tim muttered. Samantha shot him a warning glance, but he ignored it. "I've answered enough questions in the past couple of days." "I haven't come to interrogate you. I've come with some news," Nancy told them. "News from the hospital." Her voice was so somber that the group fell still instantly. Every pair of eyes in the room was watching Nancy intently. Nancy made her voice tremble as she spoke again. "I just talked to Brock's doctor," she said. "He told me that Brock has had a relapse." Samantha let out a little gasp. "But I—I spoke to him on the phone earlier. He was fine!" "It happened very quickly." Nancy bit her lip and stared at the floor as though she were fighting to keep from crying. Then she took a deep breath and said the hardest thing of all. "An hour ago Brock Sawyer died." ## Chapter ## Fifteen THERE WAS A GASP of horror from Nancy's listeners. "Oh, no! Oh, no!" Samantha cried sharply. "It's my fault! If only I hadn't asked Brock to come here!" Burying her head in her hands, she burst into tears. Tim patted her shoulder awkwardly, but his green eyes showed no emotion. Nancy wondered fleetingly what he was thinking. What would it feel like to console your girlfriend over another guy's death? Mrs. Tagley was sitting as if paralyzed, her face so pale that Nancy was afraid she was going to faint. And Jake was biting his lip as if he, too, feared that he might cry. There were tears in Bess's eyes as well. "I—I can't believe it," she said in a trembling voice. "I thought he was doing so much better!" Great acting, Bess! Nancy cheered silently. In a sober voice she said aloud, "He was. But the doctor says his system was so weak that when he ran a fever, his body couldn't hold out against it." "Then that makes it murder we're dealing with, doesn't it?" asked George, her brown eyes wide as she looked around the room. 
"Now that Brock is dead, one of these people is a murderer," she said in a hushed tone. "That's right," said Nancy. She, too, eyed the roomful of people. "One of you is Brock's killer." Mrs. Tagley shook her head in disgust. "This is all a little melodramatic, isn't it?" she asked harshly. "Do you suspect one of us in particular, or did you just bring us together for the fun of it?" "You're all suspects," Nancy replied. "And since you started this conversation, Mrs. Tagley, I'll start with you." Taking a few steps toward Samantha's mother, Nancy said, "From the very beginning there seemed to be two different ways to read this case. It was possible that someone was out to sabotage the Chocolate Festival. It was also possible that someone was out to get Brock. In your case, Mrs. Tagley, sabotage was unlikely. But there was a good reason you might be out to get Brock." Nancy met the older woman's glare steadily. "In fact, you probably had the strongest motive of anyone in this room," she said. "Brock's father ruined your first husband's life. You could even say he killed him." Samantha turned to stare at her mother. "You never told me that!" she breathed. "It wasn't worth telling," Mrs. Tagley answered in a strained voice. "It was all in the past." "But was it?" Nancy continued. "Your life was very difficult for a long time after Mr. Patton's death. Any sane person would feel a grudge toward the son of someone who'd inflicted such a terrible wound." "But he and I talked that whole mess over," Mrs. Tagley burst out, her face red. "Brock wasn't my favorite person, but I would never have poisoned him!" "That's what you say now," said Nancy. "But I'm not sure I believe you. "You had a strong motive, too, Tim," she went on, turning to face him. "Jealousy is one of the most common motives for murder. You could see that Brock's feelings for Samantha hadn't disappeared—and that her feelings for him might be stronger than she thought." 
Tim just stared sullenly at the floor, but Samantha cried, "No! I was just being polite!" Nancy paid no attention. "You also had reasons for wanting to sabotage the Chocolate Festival," she told Tim. "It was eating up a huge amount of Samantha's time. Maybe you were jealous of the festival instead of being jealous of Brock. Maybe you poisoned Brock without actually wanting him to die." Tim raised his head to glare at her. "You're being ridiculous," he growled. "I thought you were a lot smarter than this, Nancy. Anyone who would come up with such a stupid solution has to be pretty dumb." "I didn't say it was the solution," Nancy reminded him. "I just said it might be." Now Nancy turned to Jake. "Jealousy might be your motive, too. I couldn't help noticing that even though you've been very helpful all week, it's Samantha who gets most of the attention in your family." Samantha and her mother flinched guiltily at that, Nancy noticed. "You've had some good ideas over the past few days—ideas everyone has ignored," Nancy continued. "Has it been too hard for you being around a stepsister whose rank at the inn is so much higher than yours? Did you feel left out in the cold?" Jake was stunned. "I didn't think I did," he said at last. "I mean, sure Samantha's done a lot better than I have—but she's already graduated from hotel school. When it's my turn, I'm sure I'll do just as well. And as for Sam getting more of the attention"—he smiled crookedly—"well, that's just the way families are. Dad gets less attention than my stepmother. He and I are just background people, I guess." Glancing toward the love seat, Nancy thought she saw Mrs. Tagley's stern veneer crack once more. "You're not background people to me," Mrs. Tagley said, dabbing at her eyes. "And, Samantha—" Nancy wanted to be professional, but she couldn't help speaking more gently to Samantha than she had to the other suspects. "It's hard to believe that you would try to hurt Brock or sabotage your own festival. 
But I've been wondering whether you might have cracked under all the pressure. Was it too much for you? Did you decide you had to put a stop to the whole thing—without losing face?" Samantha's expression was more hurt than angry. "I—I can see why you'd think that, Nancy," she faltered, staring down at her clasped hands. "What you say—what everyone has been saying—is true. Running the festival has been too much for me." Then, as if she remembered the reason they had all been brought to the library, she stared defiantly up at Nancy. "Still, I'm not guilty of those dumb, vicious pranks, and—and I'm especially not guilty of killing Brock. You'll just have to believe me." "I wish I could believe all of you," Nancy said quietly. "Unfortunately, I can't. One of you is lying. "Luckily someone has offered to help the liar come forward with the truth," Nancy continued. She turned to the living room doors. "Here he is now." The handle turned, and the doors pushed slowly open. Brock Sawyer stepped into the room. "I've come to see justice done," he announced in a solemn voice. Never in her life had Nancy heard a sound like the eerie, shrieking wail that rose from Jake Tagley's lips at that moment. Jake jumped to his feet, staring wild-eyed at Brock. His cheek was twitching uncontrollably, and sweat was pouring down his face. "No! No!" he screamed. "Don't come near me! Or I-I'll kill you again!" Still making that unearthly noise, he stumbled across the library and out the door. "Well," Brock said. "That has to be the best acting I've ever done." "You're—you're not dead!" Samantha rushed over to hug Brock, laughing and crying at the same time. Mrs. Tagley rose shakily to her feet. "Then it was Jake who—who—" "I'm afraid so," said Nancy urgently. "And now we have to find him because I think he may be dangerous." Bess, George, and Tim were already on their feet racing out the door. "There he is!" Bess cried, pointing down the hall. 
Jake was just disappearing down the stairs to the basement. Nancy and Tim thundered down the hall after him, shooting past the dining room. The ballroom music that floated out into the hallway sounded horribly out of place. When they reached the stairs, Nancy took them two at a time. "He went that way!" Tim shouted, pointing right. "Toward his father's workroom!" That's strange, Nancy thought. Why run to a place where we can corner him? But there was no time to think about that. In a flash they had reached the doorway to the storeroom. "Don't come any closer!" Jake screamed. His four pursuers froze just inside the room. Jake was just yanking his father's circular saw off its stand, the long electric cord still plugged into the wall outlet. He pressed the On switch and held the saw, whirring ominously, up in the air. Then—with a taunting smile on his face—he moved it up to a pipe on the wall. "That looks like a gas pipe!" Tim shouted hoarsely. "Right you are." Jake gave a mirthless laugh and inched the saw closer to the pipe. "It's the main gas line, and I'm going to saw through it now," he growled. "But you can't!" George cried. "The sparks will ignite the gas!" "Right again. The sparks will ignite the gas." The whirring blade was only a fraction of an inch from the pipe now. "And then," Jake went on, "this whole building will go up in a fireball."

## Chapter Sixteen

I'VE GOT TO stall him! Nancy thought desperately. It's our only hope! Forcing a light tone into her voice, she said, "I hope you're not planning to kill us before you explain how you pulled this off." She had to talk loudly to be heard over the whirring of the saw. "That would be a little unfair, don't you think?" Jake gave her an icy stare. "The old stall-the-bad-guy ploy, huh?" he said, to Nancy's dismay. "Well, it won't work. I've seen too many detective movies. Besides," he added bitterly, "I didn't pull it off. You tricked me into confessing. Old Jake messed up yet again."
"Oh, stop feeling sorry for yourself," Bess said behind Nancy. "You did a fantastic job. Anyone would have freaked out when Brock walked in like that. I practically had a heart attack myself." Good, Bess! Nancy thought. Keep it up! But Jake wasn't going to fall for that trick either. Scowling, he said, "You're the last person I'd listen to, you traitor. I thought you liked me, not Brock! I should have known I was only your second choice— Well, I'm used to second place now. After all, I'm always second to Samantha." "I'm surprised to hear you say that," George spoke up. "It seemed to me you were doing as much to keep the inn going as she was. I mean, look at the way you met us at the door when we first got here." Out of the corner of her eye, Nancy could see Tim edging slowly toward the door. "Yeah, but did I get any credit for meeting you?" Jake spat out. "No! Samantha acts like she doesn't even want me around!" Nancy could see Jake had gotten even more worked up. "Boy, when I think of the times she's insulted me—and I've just smiled and pretended not to care—Well, I'll pay her back now." "You certainly will," said Nancy—and she meant it. "I've got to congratulate you, Jake. I thought you really didn't care. You always seemed to be so reasonable about everything. You were always calm when everyone else was going crazy." Tim was standing in the doorway now, poised to slip out into the hall. "It's not hard to stay calm when you know you're about to get even in a big way," said Jake. "I've been planning this a long time. It doesn't even matter that you caught me, Nancy. I'll die in this fire, but so will everybody else. I think that's a pretty fair trade-off." He lifted the saw toward the pipe again. "Oh, come on," Bess coaxed. "You've got to tell us how you did all this. I already knew you were smart, but don't you want everyone else to know?" To Nancy's astonishment, that seemed to do the trick. 
Jake kept one hand poised on the handle of the saw—but he let go of the On button and lowered the saw to its stand. He didn't seem to notice that Tim was gone. "Okay, okay," he said. "What do you want to hear before I torch you?" "Everything," said Nancy promptly. "Start at the beginning. You rigged the scale, didn't you?" Jake chuckled. "Of course I did. That was hilarious. Seeing Mr. Beautiful chocolate-coated really made my day. Plus I knew my stepmother would give Samantha a lot of grief for it—which is mainly what I wanted. Messing up Brock was secondary to wrecking the festival." "Well, that was a good start," George said approvingly. "You grossed out a lot of people." "Yeah, but the ants were even better, don't you think?" said Jake. Nancy shuddered. "They really were. Where did you ever find so many?" "I just bought a few ant farms," said Jake offhandedly. "I poured all the ants into a jar—they came in these little packets—and hid them in the back of the refrigerator. You know that old joke about how no one ever knows what's back there? Well, that's even more true in a big restaurant refrigerator. "I went out to the kitchen to help bring in some dishes," Jake went on. "When no one was looking, I opened the jar and dumped the ants all over the cake. That wasn't too hard. The cake was already set up on that rolling table, with the cloth over it. So I knew my surprise wouldn't be ruined. Pretty slick, huh?" Nancy nodded. "Very. But the sweetener was your biggest project of all, of course. You must have found the calomel when you were working in the east wing—is that right?" "Right," Jake said proudly. "It was in an old medicine cabinet. I read the label and thought, What a weird thing—medicine that's supposed to make you sick to your stomach! Then I realized it might be kind of funny to make Brock sick. Especially when he kept blabbing on about that stupid nutritionist with her stupid sweetener. 
I thought it would really serve him right when his sweetener made him sick!" "So you didn't mean to poison him?" said Bess. "Oh, I'm so relieved!" "No, I didn't. When I added the calomel to his sweetener in the kitchen, I had no idea he'd react that way. To tell you the truth, I was pretty freaked out. I mean, I wanted to play a few tricks—not poison someone." Was it Nancy's imagination, or was Jake actually acting sorry? Maybe they could reason with him. It might be their only hope. Even if Tim had already called the police, they wouldn't arrive for another ten minutes or so. "I didn't mean to set you on fire, either, Nancy," Jake went on in the same contrite tone. "I just thought it would be funny to scare you. In fact, I didn't know you were going to be up there. I was just planning to dump the flour on my stepmother. But you came up on stage, and the fire started, and—and all of a sudden I wasn't a prankster anymore. I mean, who would believe I hadn't known that either the poisoning or the fire was going to happen? Everything kind of—you know—snowballed. I suddenly realized that if I got caught, I was going to go to jail!" He looked appalled. "That's when I stashed the jar of sweetener in Sam's closet. If the police started looking for evidence, I figured they could have fun trying to pin it on Miss Goody Two-shoes." "No wonder you wanted me out of the way," Nancy said as sympathetically as she could. "It must have seemed like I was the only person standing between you and your freedom." He shot Nancy a glance that seemed almost apologetic. "That's right. I didn't want to hurt you. I even kind of liked you. But, of course, I couldn't let you ruin my life, could I?" "And that's why you lured her to that hole in the floor?" asked George. "Yup. I hoped that either that or the chisel would finish her off. I have pretty good aim. I play a lot of darts." Jake stared down at the saw. "This should be pretty foolproof, though," he mused. Then he looked back up at Nancy. 
"I've got to hand it to you," he told her. "You caught me fair and square. I wasn't even a strong suspect, was I?" "No, you—" "Wait a minute." Jake's voice was suddenly electric with menace. "Speaking of suspects—where's Tim?" Glaring at the girls, he whipped the saw back off its stand. Uh-oh! thought Nancy hopelessly. "He sneaked out of here, didn't he? He's going to call the police!" Jake was beginning to scream. "Well, he's not going to get the chance! None of you will ever have a chance again! Say goodbye to one another because—here goes!" Nancy didn't have time to react before he leapt toward the gas pipe. In the next instant, though, the room was plunged into darkness. "Hey!" came Jake's furious voice. "What the—" For a split second Nancy thought that somehow the lights had gone out because Jake had sawed the pipe in two. Then she realized that that couldn't be it. Tim must have found the circuit breaker! He must have switched off the lights—and the electric saw. That meant all Nancy had to do was— She hurled herself toward the darkness and smashed full force into Jake. "George! Bess! Over here!" she screamed. "Help me!" "I'm right behind you," George called back. The three girls yelled that they had pinned him down, and Tim switched the lights back on. Looking up, Nancy saw Tim was standing in the doorway. "The police are on their way," he said grimly. • • • "I know we ought to thank you, Nancy—but I can't make myself feel grateful yet," said Samantha. It had been an hour since they'd subdued Jake. Samantha and Tim were sitting in the living room with Nancy, Bess, and George. The police had taken Jake away, and Mr. and Mrs. Tagley left immediately after that to speak with their lawyer. Amazingly, the other guests were still enjoying the dance. Samantha had made up some excuse for the blackout. And with the music and dancing to distract everyone, they had been able to keep Jake's arrest fairly quiet.
"I can't make myself feel much of anything except sick," Samantha continued. "This is a terrible, terrible tragedy for my family. I—I know Jake isn't my real brother, but he seems like one. I can't forgive myself for thinking that I somehow pushed him into doing this." "I know you must feel that way," said Nancy sympathetically. "But I don't think anything you did or didn't do made any difference. Once Jake started to realize the consequences of what he had done, I think he went over the edge." "Yes, but he would never have had to play those tricks in the first place if he hadn't felt so jealous of me!" Samantha's voice was trembling. "You did nothing wrong, Sam," Tim said, slipping an arm around her shoulders. "Jake just has very serious problems." "Oh, Tim, I'm sorry if I've been taking you for granted," Samantha said sadly. She looked over at Nancy, Bess, and George. "Would you mind leaving us alone for a little while? Tim and I have a lot to talk about. We've had a lot to talk about for a while now." • • • "Nancy, wait!" Samantha's voice rang out as Nancy, Bess, and George were leaving the dining room the next morning. Samantha rushed toward them, her white skirt flapping around her legs. She looked much happier. In fact, she looked radiant. And Tim, who was standing behind her in shorts and a polo shirt, was grinning broadly. "I wanted you three to be the first to know," she said breathlessly. "Tim and I are engaged!" "What wonderful news!" Nancy said, giving her a warm hug. "When did all this happen?" "Right after you left us last night," Samantha told her. "We got everything out in the open. My feelings about Brock and my role in the inn, and Tim's role in the inn—" "And my feelings about Brock," said Tim with a chuckle, "which, as you may have guessed, aren't love filled. But Sam convinced me that I really had nothing to worry about." "I did drag Brock out here all the way from Hollywood," said Samantha.
"I didn't think it would be very nice to keep reminding him that I was going out with someone else. But it's all straightened out now." "I also got Sam to promise not to work so hard," said Tim. "If we're going to get married, I want a wife I can spend time with once in a while." "Well, you'll get to see me at work," Samantha pointed out, taking his hand and squeezing it. "We're going to run the inn together," she explained. "I finally realized that it really is too big a job for me. There's plenty of room for two people at the top—especially two people in love. "I had a big talk with Mom last night, too," Samantha went on in a rush. "You can't believe how upset she is about what happened. I feel guilty—but she feels even worse." For an instant her smile wavered. "Mom's not great at leaving things to other people," Samantha added. "But I think that this time she's really going to let me be in charge. She says this whole thing with Jake has changed her priorities." "Well, that's great," said Bess. "What about Brock?" she added, trying to be casual. "What does he think about all this?" "I think it's great, too," a deep voice spoke up. None of them had noticed Brock walking up behind Samantha and Tim. He put an arm around Samantha's shoulders and slapped Tim on the back. "The news about Samantha and Tim, I mean," he clarified. "No one could wish them more happiness than I do." Then his expression became serious. "And no one could feel sorrier for Jake than I do. I've decided not to press charges, Sam." "You—you have?" Samantha stammered. Brock nodded. "Jake needs a lot of help. As long as he gets it, I'm not going to hold a grudge. There's been too much of that already. It's time to move on." "Speaking of moving on," asked George, "what's happening with the festival?" Samantha giggled. "I'll have to make sure not to forget about it, won't I? Today should be pretty easy. Five guest chefs are giving workshops. Do any of you want to come?" 
Nancy and her friends looked at one another for a few seconds. "You know, I don't think so," Nancy said at last. "I feel as though it's time to head home." Her friends nodded in agreement. "Oh, that will never do," said Brock promptly. A mischievous light was dancing in his dazzling blue eyes. "I'm leaving for home tomorrow myself. We can't let everything fizzle out this way." He smiled down at Bess. "So, Bess, would you do me the honor of accompanying me to a movie this afternoon?" • • • "He's coming in five minutes! Five more minutes!" caroled Bess, dancing around the living room of the girls' suite. From where she lay stretched out on the sofa, Nancy looked up at her friend. Bess was wearing a pink minidress that showed off her curvy figure perfectly. And her excitement had brought a flush to her cheeks that only added to her appeal. Brock Sawyer, look out! "Calm down," said George. "You'll fall and break your leg, and then Brock will have to take you to the hospital instead of the movies." Bess flopped down into a chair. "Wouldn't you guys like to finish my packing for me? When I come back, I know I won't feel like it." "Forget it," Nancy told her. "I've got enough problems with my own suitcase. Why is it that the stuff you take home always takes up three times more space than the stuff you brought with you?" "Oh, all right. I'll do it myself later." Suddenly Bess brightened. "After all, I may have to tuck in some little present. Brock might be feeling sentimental—you never know." "Maybe he'll even give you a nice, big box of chocolates," said George. "Something that will really make you remember this trip." "No way," said Bess immediately. "You guys aren't going to believe this, but I've had enough chocolate to last me the rest of my life." Nancy grinned. "You know, Bess, I do believe you," she said. "I don't want to go near chocolate for a long, long time. But I'm glad this case had a sweet ending after all." This book is a work of fiction. 
Any references to historical events, real people, or real places are used fictitiously. Other names, characters, places, and events are products of the author's imagination, and any resemblance to actual events or places or persons, living or dead, is entirely coincidental.

Simon Pulse
An imprint of Simon & Schuster Children's Publishing Division
1230 Avenue of the Americas, New York, 10020
www.SimonandSchuster.com

Copyright © 1991 by Simon & Schuster, Inc. All rights reserved, including the right to reproduce this book or portions thereof in any form whatsoever.

ISBN: 978-0-6717-3065-9 (pbk)
ISBN: 978-1-4814-2850-7 (eBook)

NANCY DREW and colophon are registered trademarks of Simon & Schuster, Inc. THE NANCY DREW FILES is a trademark of Simon & Schuster, Inc.

## Contents

Chapter One
Chapter Two
Chapter Three
Chapter Four
Chapter Five
Chapter Six
Chapter Seven
Chapter Eight
Chapter Nine
Chapter Ten
Chapter Eleven
Chapter Twelve
Chapter Thirteen
Chapter Fourteen
Chapter Fifteen
Chapter Sixteen
<?php

namespace ProdeMundial\WebBundle\Twig;

use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Twig extension exposing a flag() helper that maps a country code
 * to the URL of the corresponding flag image asset.
 */
class FlagExtension extends \Twig_Extension
{
    private $container;

    public function __construct(ContainerInterface $container)
    {
        $this->container = $container;
    }

    public function getFunctions()
    {
        return array(
            new \Twig_SimpleFunction('flag', array($this, 'renderFlag')),
        );
    }

    public function renderFlag($flag)
    {
        // Flag images are stored lowercase, e.g. .../img/flags/ar.png
        $asset = sprintf('bundles/prodemundialweb/img/flags/%s.png', strtolower($flag));

        return $this->container->get('templating.helper.assets')->getUrl($asset);
    }

    public function getName()
    {
        return 'prodemundial_flag_extension';
    }
}
package com.github.metaprgmr.gapi_flat.rt;

import java.io.File;
import java.util.HashMap;
import java.util.Map;

import com.github.metaprgmr.util.JavaJSONSerializer;
import com.google.api.client.http.FileContent;
import com.google.api.client.util.DateTime;

/**
 * Handles a few Google API classes, such as DateTime and
 * AbstractInputStreamContent, during JSON (de)serialization.
 *
 * @author jhuang
 */
public class GAPIJavaJSONSerializer extends JavaJSONSerializer {

  @Override
  public Map<String, Object> toJSON(Object val) {
    if (val instanceof FileContent) {
      FileContent fc = (FileContent) val;
      Map<String, Object> map = new HashMap<String, Object>();
      map.put("jclsName", "com.google.api.client.http.AbstractInputStreamContent");
      map.put("type", fc.getType());
      map.put("file", fc.getFile().getPath());
      return map;
    }
    return super.toJSON(val);
  }

  @Override
  public Object fromJSON(Object jsonValue) throws Exception {
    if (jsonValue instanceof Map) {
      Map<?, ?> map = (Map<?, ?>) jsonValue;
      String clsName = (String) map.get("jclsName");
      if ("com.google.api.client.http.AbstractInputStreamContent".equals(clsName)) {
        String type = (String) map.get("type");
        String file = (String) map.get("file");
        return new FileContent(type, new File(file));
      }
      if ("com.google.api.client.util.DateTime".equals(clsName)) {
        long value = ((Number) map.get("value")).longValue();
        Boolean b = (Boolean) map.get("dateOnly");
        boolean dateOnly = (b != null) && b;
        Number tzShift = (Number) map.get("tzShift");
        if (tzShift == null) {
          return new DateTime(value);
        }
        return new DateTime(dateOnly, value, tzShift.intValue());
      }
    }
    return super.fromJSON(jsonValue);
  }
} // end of class.
Coach Sancomb comes to FC Baltimore after spectacular success with Christos FC. The team made a deep run in the Lamar Hunt US Open Cup. They became a national darling by scoring the first goal against DC United of Major League Soccer. In addition to his success with Christos FC, Larry has been heavily involved with Olympic Development, High School, Youth and College Soccer in Baltimore since 1994.

Coach Gonzaga has been a successful coach at the Sao Caetano Youth Academy in Brazil, Maryland United, Premier SC, Loyola Blakefield and St. Timothy School. In his youth days, Gonzaga grew up playing with Thiago Mota in Sao Paulo, Brazil. Coach Gonzaga has a reputation for technical and attacking mastery. His playing career includes pro contracts with Racing Club of Montevideo (Uruguay), Sao Caetano (Brazil), Chivas (Mexico), Real Maryland, Crystal Palace Baltimore and a recent over-30 National Championship with Christos FC. Gonzaga also played pro indoor soccer for the Baltimore Blast, Harrisburg Heat, Ontario Fury and New Jersey Ironmen.

Coach Lookingland played at Bucknell University, earning Third Team All-American honors. In 2005, Real Salt Lake selected Lookingland in the second round of the MLS Supplemental Draft. He spent two seasons in Salt Lake. In addition to his outdoor career, Lookingland had an extensive indoor career. In 2005, he signed with the Baltimore Blast, being named to the 2005–2006 All-Rookie team, and in 2012 he was named MISL Defender of the Year. Mike's passion has been coaching. He started Dynasty Sports Academy, has coached for Celtic and Pipeline, and currently coaches with the Baltimore Armour Academy. Mike holds his United States Soccer Federation "A" license.

Brought up through the Baltimore area soccer system, Coach Fendryk understands the local winning traditions better than anyone. Having graduated from Calvert Hall in 2001, he played youth soccer for the Soccer Club of Baltimore Bays (where he was also a teammate of FC Baltimore assistant coach Mike Lookingland), and then attended and starred at UMBC. After graduation he joined Christos FC, first as a player and later as a coach. He has coached for Christos FC, the Baltimore Bays, CCBC Essex, and the John Carroll School. He has won at each of his stops, with multiple national championships with Christos FC and NJCAA National Tournament appearances with CCBC Essex.

Coach Saunders is currently the head coach at CCBC Catonsville. He is also the Director of Goalkeeping at Baltimore Union. Phil played professionally for two years in the Icelandic First Division for Bi/Bolungarvik. He also featured for the Baltimore Blast. Before his pro career, Phil played for UMBC. Phil is tied for the second-most shutouts in program history. He played youth soccer for the Baltimore Bays, where he won the Golden Glove award for best keeper in the nation.
Temperament is habitually recognized in our daily affairs but overlooked in regard to society and history. How can the natural-born rebel communicate his fervor and dissatisfaction to the timid and meek? How can the good-natured and jovial persuade the militant to moderation? Or, as Elsa Morante says in the dedication of her novel La Storia, a message sent from one of these to the other is "por el analfabeto a quien escribo" ("for the illiterate to whom I write"). We share over 98% of our genes with every other human in the world, yet almost all the means by which we can effectually communicate our emotions lie in the other 2%. One could not, then, expect a single reaction to events from all of humanity even if the feelings they provoked were substantially the same. And what to make of ideas themselves, limitlessly reproducible and communicable, at least in theory? Richard Dawkins even goes so far as to claim that human ideas are self-replicating evolutionary units like genes. He calls them memes. These are the real ghosts in the machine, the real spiritual entities if there are such, much more than the mind as a whole, which is irrevocably physical in nature at least in part, if nothing else because it cannot be liberated from a particular piece of matter. Ideas, on the other hand, can transmit from person to person like beacon signals, even though they cannot exist, seemingly, without some mind to receive or create them. Yet there's the rub. While Dawkins would have it that the power of genes to replicate themselves arises from the fact that they are in a way information as opposed to matter, and hence universally transmissible, a property they share with ideas, at a further look it seems clear that their drive originates in a different engine, namely from the fact that they design the very organic systems which perpetuate them. This system is also the medium of transmission of ideas, i.e. a living organism, but it is designed by genes.
It may well be that ideas can exert an effect on organisms in turn through feedback, but since genes establish the system which exists at the outset of every generation according to their own needs, one might expect that the receptivity to and ability to create certain ideas will ultimately be determined by its amenability to the perpetuation of genes, so that the tendency across time will be for ideas to subordinate themselves to genes (this is of course a trend, not an absolute reality at any given time). This entry was posted on Saturday, August 26th, 2006 at 2:54 am and is filed under Ramblings, Science. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
"Red rover, red rover send Stacy right over!" You remember that game, don't you? Two lines of children would face one another and when your name is called, your job is to run at full speed and attempt to break through the human chain that opposed you. Some would timidly trot only to be captured…others would use this opportunity to take out some aggression. These are the types of things we did at recess as children. Recess was an incredible time. Next to lunch, it was my personal favorite period of time during school. If we grew up with this as a common occurrence we can close our eyes and remember times when we made terrible decisions, and also lifelong friendships in the midst of a kickball game or on the "time out wall" when we pushed Emily Stevens down the curly slide and she had to be a stinkin' little tattletale and rat us out…umm…hypothetically.

As I was reflecting on my experiences during this unstructured time of play, I realized that there were lessons I learned from the playground that can translate into my spiritual life. It may be a stretch, but sometimes it is fun to see how things like this can link our thoughts to wisdom.

1. Tetherball is important– The game of tetherball can be a lesson in the way we handle our pain. The point of tetherball is to hit the ball and draw it closer to the center pole. Your opponent's purpose is to drive the ball farther away from the pole in the opposite direction and oppose your progress. Each day we are tested in the same way. Life abuses us, sends us through trials, and we walk through suffering unwillingly. If we are not careful, we can walk through these struggles and, if not looked at redemptively, they can oppose our growth and draw us further from the center focus…God. God does not want any pain to be wasted. We can use these instances to draw closer to Him.

2. Bullies are everywhere– Whether it be in the workplace, in the grocery store, or the bank, we know that bullies still exist. We encounter people on a regular basis who seem to desire the worst for us and our well-being. Our first impulse may be to put them in their place or get angry, but often we do not see below the surface. Many times the people who are this way are dealing with such deep pain. Pray for them…they need an encounter with God. They are not your enemy.

3. People will chase you– I noticed that, when people chased you at recess, that meant they really liked you. Personally, I was a fan of being chased by girls even though I didn't admit it. Obviously our main goal in life is not to be liked by all, but we must ask ourselves what type of influence we are to others. What kind of life are you living that deserves to be influential? Hopefully it is one that is chasing after Christ.

4. The point is to get sweaty– As kids we knew what recess was really about. Teachers wanted to get us sweaty so we would "get the wiggles out" for the rest of the school day. It was smart because we were more productive in the second half. Many of us limit our Christian experience to reading and silent reflection. These are absolutely essential, but there is also more to it. When we get closer to God we realize that there must be a point at which we bring action and service to our faith. "Getting sweaty" is the point of a life that draws us closer to Him. We are being formed in His image to be a light to the dark world.

5. Red rover, red rover– Remember this game? I think we can learn something from it. God wants us to live our lives with purpose and focus. With His resourcing we can break through to freedom. The enemy will try to prevent us from growing and being a part of God's mission, but He has plans for us. Plans that can give us hope and a future…we just need to close our eyes, put our heads down, and run towards Him.

This is just what I have reflected on…it is no comprehensive exposition of scripture, nor is it meant to be used in a university setting.
All I know is that God wants us to learn things through every experience we encounter. Know you are loved. Know you have a purpose. I've heard that they don't do recess anymore. Seems silly to me to quit it, and a bit counterproductive. It seems unfortunate that kids can't have the unstructured (and sometimes unsupervised) play that we used to have. We played hard, we ran hard, and most times we had bandaids on our knees and elbows. I usually had broken glasses from playing tether ball. The rules of the games were always adjusted for the young or the short or the ones who didn't have a good skill set. I wonder if today's children are missing those negotiation, compromise, and "I can't win at everything" lessons.
# Ampere's law problem

1. The problem statement, all variables and given/known data

A conductor carrying current ##I## is in the form of a semicircle AB of radius ##R##, lying in the xy-plane with its centre at the origin as shown in the figure. Find the magnitude of ##\oint \vec{B}\cdot \vec{dl}## for the circle ##3x^2+3z^2=R^2## in the xz-plane due to curve AB.

(Ans: ##(2-\sqrt{3})\frac{\mu_0 I}{2}##)

2. Relevant equations

3. The attempt at a solution

From Ampere's law:
$$\oint \vec{B}\cdot \vec{dl}=\mu_0 I_{enclosed}$$
There is no current passing through the loop ##3x^2+3z^2=R^2##, so for this loop the right-hand side of Ampere's law is zero; hence the answer should be zero, but it isn't.

Any help is appreciated. Thanks!

Attached: bdl.png

Source: http://physicsinventions.com/amperes-law-problem/
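The paradox resolves because Ampere's law only constrains the circulation of the *total* field of a closed circuit; an open semicircle by itself can give a nonzero circulation around a loop that encloses no current. This can be checked numerically with Biot–Savart. A sketch under stated assumptions: the figure is not available here, so the endpoints A and B are placed on the y-axis (the placement consistent with the quoted answer), and units are normalized so that ##\mu_0 I = 1##.

```python
import numpy as np

mu0_I = 1.0          # work in units where mu0*I = 1 (normalization assumed)
R = 1.0              # semicircle radius
a = R / np.sqrt(3)   # Amperian circle radius, from 3x^2 + 3z^2 = R^2

# Current-carrying semicircle AB in the xy-plane; endpoints assumed on the
# y-axis: A = (0,-R,0), B = (0,R,0), passing through (R,0,0).  Midpoint rule.
N = 2000
phi = -np.pi/2 + (np.arange(N) + 0.5) * (np.pi / N)
src = np.stack([R*np.cos(phi), R*np.sin(phi), np.zeros(N)], axis=1)
dl_src = np.stack([-R*np.sin(phi), R*np.cos(phi), np.zeros(N)], axis=1) * (np.pi / N)

def B_field(p):
    """Biot-Savart field of the semicircle at point p (prefactor mu0*I/4pi)."""
    r = p - src                              # (N,3) separation vectors
    r3 = np.sum(r * r, axis=1) ** 1.5
    integrand = np.cross(dl_src, r) / r3[:, None]
    return (mu0_I / (4 * np.pi)) * integrand.sum(axis=0)

# Amperian loop: circle of radius a in the xz-plane, centred at the origin
M = 2000
theta = (np.arange(M) + 0.5) * (2 * np.pi / M)
pts = np.stack([a*np.cos(theta), np.zeros(M), a*np.sin(theta)], axis=1)
dl_loop = np.stack([-a*np.sin(theta), np.zeros(M), a*np.cos(theta)], axis=1) * (2 * np.pi / M)

circulation = sum(np.dot(B_field(p), dl) for p, dl in zip(pts, dl_loop))
print(abs(circulation), (2 - np.sqrt(3)) / 2)
```

With this geometry the magnitude converges to ##(2-\sqrt{3})\mu_0 I/2 \approx 0.134\,\mu_0 I##, matching the quoted answer even though no current pierces the loop: Ampere's law is simply not applicable to an open current segment.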
The 'Ellie Light' Scandal (January 27, 2010, 4:38 AM EST)
The declining (or is it dying?) newspaper industry has suffered another blow to its image as punctilious skeptic. So much for the motto, "If your mother says she loves you, check it out." It turns out a pile of American newspapers can't manage to check out the most basic information about people who are flat-out using their pages to push political agendas.

VH-1 on Virginity: Cynicism and Censorship
When the cable network VH1 planned a news special called "The New Virginity," an abstinence backer might have felt optimistic that teenagers and young adults were going to get a refreshing jolt of publicity about the option of premarital celibacy. That is, unless you looked at the network's promotional fine print.

The Meanness of Martha Coakley
In recent years, the network news shows have glossed over any political campaigns below the level of president, stopping only if the candidate is named "Clinton" or "Schwarzenegger." That principle held true for the U.S. Senate race in Massachusetts.

Europe's Decadent Education
When people think of the public morals of Europe, the word "decadence" comes to mind. Sex, drugs, and the decline and fall of the churches all define the trend. Amsterdam, for example, is celebrated as "San Francisco times ten."

The Media's Democrat Dialect
Mark Halperin and John Heilemann are laughing all the way to the bank at the mess Harry Reid is facing. The hottest backstage tidbit of their new campaign chronicle "Game Change" is that Reid praised Barack Obama's political appeal as a "light-skinned" black man with "no Negro dialect, unless he wanted to have one."

The Soul of Tiger Woods (January 8, 2010, 5:00 AM EST)
The first rule of dinner-table conversation is no hot talk about politics or religion. Apparently there's a rule regarding the discussion of religion during political talk shows, too.

Kicking Rush When He's Down
The news that Rush Limbaugh had entered a Hawaii hospital over the New Year's weekend complaining of chest pains triggered a volcanic internet eruption for the hard left, the likes of which we've never seen before. If Mt. Vesuvius could vomit in a literal sense, this would be it. This time these radicals let their guard down and showed their true colors. The Twitter lines were ablaze as liberals celebrated the news, news that suggested Mr. Limbaugh was at the very least very ill, and quite possibly dying or maybe already dead.

The 'Stimulus' Picture Crumbled (December 30, 2009, 4:44 AM EST)
On December 22, the networks calmly, briefly, and quietly acknowledged the news that the government revised its economic-growth number for the third quarter downward, from 3.5 percent to a less impressive 2.2 percent. As 2009 comes to a close, the media elite are showing enormous patience with the pace of a recovery, without any troublesome talk of whether Barack Obama's dramatic expansion of government is helping or hurting the economy.

A Year of Obama Love
The year 2009 might be classified as the year Barack Obama came down to Earth. The latest NBC-Wall Street Journal poll found that 47 percent approve of the job Obama is doing, and 46 percent disapprove. Those are not exactly Messiah numbers. And that's the big difference between the public and the press. The media do believe he's God.

TV and The Soft Eshoo Bill
Lest any citizen think the U.S. Congress is absorbed only in the weightiest matters like nationalizing the health care system, the House just passed another piece of legislation – a bill urging that TV commercials be no louder than the shows in which they appear. The Commercial Advertisement Loudness Mitigation Act (CALM) passed Tuesday on a voice vote, "presumably expressed at a comfortable level," joked a USA Today writer. It now goes to the Senate, which is considering an identical bill.

Preferring Liberals in Both Parties
Liberal newspaper people are so predictable when it comes to internal party fights. If it's inside the Republican Party, it's the conservative Republicans who are wrong. If inside the Democratic Party, it's the conservative Democrats who are wrong.

Frosty the Pervert? Climate Skeptics Need Mental Help? (December 9, 2009, 9:51 AM EST)
Talk about an inconvenient truth. In ever-increasing numbers, Americans are becoming skeptical about the scientific argument that there's a man-made global-warming crisis that requires immediate and drastic government action. The media's enablers of the radical environmental left have a response: maybe America just isn't smart or curious enough to save the planet. In fact, they say our growing denial is making us nationally irrational.

Degrading Degrassi
On Sunday morning, November 22, Nickelodeon's cable channel Teen Nick was running a series of promos during a rerun of its junior-high sitcom "Ned's Declassified School Survival Guide." Which of these ads isn't quite like the others? 1. A promo for a themed "Attack of the Little Sisters Thanksgiving Weekend," with reruns of child-friendly shows such as "Full House" and "Drake and Josh."

Clubbing Navy Seals
Last week, Fox News reported a jaw-dropping story about how our war on terror has now become a war on ourselves. In September, a team of Navy SEALs captured terrorist Ahmed Hashim Abed, a man known to the U.S. military as "Objective Amber," the architect of the vicious and deadly attack on four American contractors in the summer of 2004. These poor men were shot, burned, and then their bodies were desecrated, hung from a bridge over the Euphrates River.

Words for Potent Jerks (November 20, 2009, 5:09 AM EST)
It is amazing how a phrase can emerge seemingly out of nowhere to become the statement du jour – used, overused, and ultimately abused. Last year there was "low hanging fruit" everywhere. Today everyone's being "thrown under the bus." Sometimes, it's just one word.

Secular Saboteurs (August 18, 2009, 8:43 AM EDT)
There are an awful lot of people I know in the world of public policy, many of whom I respect and admire. But beyond respecting his wisdom and admiring his courage, I just plain like Bill Donohue, president of the Catholic League. I like his Irish feistiness. I like his sense of loyalty. I like his sense of humor. Most of all, I like how he drives his opponents mad.

The Quiet War Movie (August 6, 2009, 3:13 PM EDT)
There have been a couple of constants where Iraq War cinematography is concerned. One, movie makers ignore the public appetite for movies supporting the anti-terror war message in favor of drab, depressing, preachy anti-war politicking featuring marquee names and little else. Two, those movies, which predictably bomb at the box office, are the rage of the film critics who levitate in ecstasy at the opportunity to praise that which trashes Bush, the war on terror and the military all at once.

A Kidnapped 'Fetus?'
Darlene Haynes was only 23 years old when another woman brutally slashed her open and removed her eight-month-old baby girl from her womb. Her decomposing body was found July 27 wrapped in a blanket and dumped in a closet inside her apartment in Worcester, Massachusetts. The body was so mutilated that when they found it, the police said they couldn't immediately determine its gender.

All-Access Obama (July 29, 2009, 12:55 PM EDT)
Martha Joynt Kumar, a scholar of presidential communications strategies at Towson University, reports that President Obama is almost everywhere in the media. In their first four months, Bill Clinton gave 11 interviews and George W. Bush gave 18, compared with 43 from Obama. He has offered his eloquence to ABC News at least six times, seven times on CBS and nine times on NBC.
# Decoherence - why does the fringe disappear?

1. Feb 4, 2006

### zekise

Recent double-slit experiments with massive molecules, such as fluoro-fullerene consisting of 108 atoms and atomic mass 1632 (see the heavy matterwave experiment), show that the interference fringe will disappear if the FF molecule emits a thermal photon or collides with a gas particle, where the mass of the gas particle is immaterial. I am having a hard time understanding some of the explanations given for this phenomenon in different places, which I think are as follows:

1- The emission or collision produces an entangled pair. Therefore, as shown in entanglement experiments (Scully, Walborn), the fringe gets obscured by the other entangled member, and can only be revealed if we capture the entangled particle and do a coincidence selection. The original fringe cannot be revived.

2- Upon emission or collision, the wavefunction collapses and the molecule gets localized; the partial wavefunction due to the other slit disappears, and so does the fringe. No need to explain this in terms of entanglement. The fringe cannot be revived.

3- The emission or collision passes which-path information to the emitted or collided particle. Although this information has not been measured, in principle it can be obtained. Thus the Principle of Complementarity dictates that the fringe would disappear as the which-path becomes known. Furthermore, if we erase this information, the fringe can be revived.

These are different and contradictory explanations. Any and all insights into this phenomenon are highly appreciated. Thanks.

2. Feb 5, 2006

### vanesch

Staff Emeritus

Decoherence is best understood if one does NOT consider collapse (but takes on an MWI viewpoint - it is not for nothing that I push this view here; it is for its explanatory power in exactly these situations).

The fundamental idea of decoherence is that, when a system 1 gets entangled with a system 2 (mostly through an interaction between both), the potential interference that was possible by observing system 1 alone is now displaced to the overall system, and will only be observed when doing CORRELATION MEASUREMENTS acting on both systems.

Imagine that system 1 was in a state |a> + |b>, and that an experiment was designed to test the interference between |a> and |b> (in other words, an observable I1 that gives 1 for the eigenstate |a>+|b> and gives 0 for the eigenstate |a>-|b>). Clearly, if we look at system 1 with this observable (with this experiment), we will see interference (always a click, always result "1").

Even if we consider the composed system of system 1 and system 2, as long as both are in a PRODUCT state (|a>+|b>)|u>, the observable I1 (which is now in fact the observable I1 x 1 on the product Hilbert space) will show "interference" (that is, always the same outcome).

But when we now have the systems entangle themselves into |a>|v> + |b>|w>, and we NOW apply the observable I1 x 1, we will NOT find interference anymore. We will find 50% "1" and 50% "0". The reason for this is that the observable I1 operates ONLY on system 1, and that its results are hence determined by the reduced density matrix of system 1, which changed, due to the entanglement, from the pure state |a> + |b> into the mixture of 50% |a> and 50% |b>.

So, if you only look at system 1, IT LOOKS AS IF COLLAPSE OCCURRED. It looks as if the state |a> + |b>, due to the entanglement with system 2, changed into a probabilistic mixture of 50% |a> and 50% |b>. So did collapse occur or not (and hence, was system 2 a "measurement device")?

Unfortunately, no. Because if this were the case, we wouldn't be able TO RESTORE INTERFERENCE by looking also at system 2, as is shown in several quantum erasure experiments. In order to explain THOSE results, one cannot accept that the interaction collapsed the wavefunction of system 1; one needs the entire entangled state |a>|v> + |b>|w> and NOT the statistical mixture of 50% |a>|v> and 50% |b>|w>. So collapse didn't occur physically; it only APPEARED to be so when we only looked at system 1, because then the REDUCED density matrix is sufficient to explain all results (while the TOTAL density matrix is needed for the results on both systems, and all correlations - a total density matrix which has not been changed from a pure state to a mixture, which is the density-matrix form of collapse).

That's why the MWI view takes it that this collapse NEVER occurs, and that all collapse is only apparent because we limited ourselves to a part of the system that got entangled with something else. Of course, once you get entangled with *the environment*, you are totally lost, and you'll probably never be able to do coincidence measurements that take all entanglements into account to restore interference. One can call this a kind of practically irreversible entanglement, and this will result in your always obtaining correct results through collapse for feasible experiments (because the result that would be wrong, namely the restoration of interference, is an experiment which is practically impossible to perform).
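The reduced-density-matrix argument above can be sketched numerically. A minimal two-qubit example (the basis labels and the erasure measurement are illustrative choices, not taken from any specific experiment): the I1 observable shows full "interference" on the product state, drops to 50/50 once the systems entangle, and is restored on the subsample selected by a measurement of system 2 in the complementary basis.

```python
import numpy as np

ket_a = np.array([1.0, 0.0])              # |a>
ket_b = np.array([0.0, 1.0])              # |b>
plus = (ket_a + ket_b) / np.sqrt(2)

I1 = np.outer(plus, plus)                 # observable: 1 on |a>+|b>, 0 on |a>-|b>

def ptrace_sys2(rho):
    """Reduced density matrix of system 1 (trace out system 2, both qubits)."""
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Product state (|a>+|b>)|u>: full interference, <I1> = 1
psi_prod = np.kron(plus, ket_a)
rho1 = ptrace_sys2(np.outer(psi_prod, psi_prod))
p_prod = np.trace(rho1 @ I1)

# Entangled state (|a>|v> + |b>|w>)/sqrt(2): reduced state is mixed, <I1> = 1/2
psi_ent = (np.kron(ket_a, ket_a) + np.kron(ket_b, ket_b)) / np.sqrt(2)
rho1_ent = ptrace_sys2(np.outer(psi_ent, psi_ent))
p_ent = np.trace(rho1_ent @ I1)

# "Erasure": condition on system 2 found in the (|v>+|w>) state; the selected
# subsample of system 1 shows interference again
proj2 = np.kron(np.eye(2), np.outer(plus, plus))
post = proj2 @ psi_ent
prob = post @ post                        # probability of that outcome (= 1/2)
rho1_cond = ptrace_sys2(np.outer(post, post)) / prob
p_erased = np.trace(rho1_cond @ I1)

print(p_prod, p_ent, p_erased)            # approximately 1.0, 0.5, 1.0
```

Note that the erasure only works *per outcome*: without the result of the system-2 measurement there is no way to pick out the interfering subsample, which is exactly the point made about quantum-erasure experiments.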
As such, the *apparent* collapse in measurement has been explained, because measurement devices entangle the entire system to the environment, in an intractable way.\n\nYes, this is the \"decoherence\" view (part of the MWI view), and is in my opinion the cleanest one.\n\nOne has then to postulate some \"special physics\" for this process, which cannot be described by the usual unitary interaction operators, and one runs into trouble when one IS going to restore the fringes using the second system - as long as this is experimentally feasible.\n\nThat's the same idea as 1, but there's confusion about \"erasing this information\". \"Erasing the information\" comes down to performing a measurement that EXTRACTS THE COMPLEMENTARY INFORMATION. For instance, a measurement that will look at the (|u>+|v>) state of system 2, and will give you the right TAGGING to extract the subsample of a\/b results that will show interference.\n\nQuantum erasure has been too often presented as: when you \"erase\" the information on the remote system, magically, the interference fringes appear at the first system. That's NOT the case. You need the RESULT of the remote measurement which \"erased\" the which-path information (the a or b information, hence the u or v information) in order to find the SUBSAMPLE of the a\/b data in which interference can be seen.\n\n3. Feb 8, 2006\n\n### pavsic\n\nThis is really a very clear explanation showing the merits of many worlds interpretation (or relative state interpretation) first proposed by Everett. Unfortunately Everett could not publish his long PhD thesis, but only a rather short paper. This has caused a lot of confusion and misunderstanding regarding the MWI.\n\nIn 2002 I discussed this with B. De Witt at a conference in Washington and he said that most problems that people find in MWI had already been clarified in Everett's PhD thesis. 
Had it been published right at the beginning (and not only with a great delay in a book), all such misunderstanding and confusion (and reluctance) concerning MWI would have been very probably avoided. Few people that criticize MWI have read his thesis.\n\nHowever, I think that there are still issues that are not completely clarified in Everett's PhD thesis and subsequent works, including the modern ones. I discuss this in Part IV of my book The Landscape of Theoretical Physics: A Global View (Kluwer, 2001). A link to the contents of the book and sample pages can be found on my Home Page http:\/\/www-f1.ijs.si\/~pavsic\/ .\nIn my opinion quantum mechanics cannot be fully understood without employing some radical views that are discussed in my book. However, I do not claim to understand QM (Nobody really understands quantum mechanics\" [Feynman]).\n\nMatej Pavsic\n\nLast edited: Feb 8, 2006\n4. Feb 9, 2006\n\n### setAI\n\nI like the cut of your jib- vanesch!\n\n\"decoherence is just a matter of degree. There is never a moment after which an object's invisible counterparts cannot affect it any longer. It just gets too expensive to set up the apparatus that would demonstrate their existence. \" ~David Deutsch\n\n5. Feb 12, 2006\n\n### zekise\n\nHi Patrick \u2013 many thanks for your wonderful explanation. I was delayed in getting back as I was for a while schlossed up with Schlosshauer. Not that I pretend to fathom the formalism.\n\nA good answer generates more questions. So here they come if you don\u2019t mind :\n\nQuestion 4) What do you mean by the DIFFERENCE of two states |a> - |b> ? Is this simply a phase shift of the \u2018b\u2019 component of the wavefunction resulting in an anti-fringe when interfered?\n\nQuestion 5) Is there such a thing as a \u201cquantum coherent state\u201d for a single particle? Now in the heavy molecule experiment, Zeilinger et al used a Talbot-Lau grating to collimate the FF beam. This would be a necessary step to observe interference. 
However, if the experiment can be practically done on a single molecule, would this be unnecessary? So is it correct to say that when the FF solid sublimates, and a single molecule flies at the interferometer, that it is already in a coherent state? That is, if the molecule is NOT entangled in any manner and therefore IS isolated, it will be in a coherent state?\n\nQ6) Assume a single FF molecule is coherent and flies through the interferometer. Assume that we somehow are able to figure out (calculate) the sublimation ejection trajectory (without perturbation of the molecule) before the slits, and therefore calculate which slit the molecule will pass through - would this not put it in a mixed state and destroy the IP? Would this, if possible, be an example of non-entangled decoherence? Is there such a thing?\n\nQ7) Now the FF molecule consists of 3,264 particles bound together by electric forces. I assume this entangles the components of the molecule with each other, as they are continuously interacting with one another. Why is it that this entanglement does not decohere the molecule, in the manner that an emitted thermal photon does - and result in IP loss?\n\nQ8) If the FF emits an ionization electron instead of a thermal photon, I understand this electron will be entangled with the molecule. Is this always the case, or only if it is emitted in a certain manner? Now assuming the electron before ionization was entangled with the molecule, and remains entangled after emission, then what is now different that results in the IP disappearing?\n\nQ9) Do all interactions result in a degree of entanglement? If this is the case a particle must always be in a state of perpetual entanglement with a huge number of other particles as force fields can travel quite a distance. Is it fair to say most particles are entangled with most other particles in the universe?\n\nI will be polite, and take my seat. 
But there are more I need to ask \u2013 on degrees of freedom, superselection, disentanglement, reversal, and information. TIA\n\nLast edited: Feb 12, 2006\n6. Feb 13, 2006\n\n### zekise\n\nHelp Help\n\nbump\n\n7. Feb 14, 2006\n\n### vanesch\n\nStaff Emeritus\nYes. I addressed this in another post on a parallel thread somewhere. It's a matter of choice of phases, initially, but you are entirely right that (whatever your phase choice), |a>+|b> and |a>-|b> will give you the fringe and anti-fringe (where you arbitrary can call one \"fringe\" and the other 'anti' of course).\n\nThere's a terminology issue here. I think you're mixing up the terminology for \"coherent state\", \"pure state\", entangled state, and mixture, no?\nWe can go into details if you want to. They will even be, at a certain point, interpretation-dependent.\n\nWhen I look at your next question, it seems that what you call \"coherent\" state, seems to be a pure state that is not entangled, and that can give rise to interference when applied to a certain measurement. So I'll take it as such, knowing that I'm guessing here...\n\nIf we were to be able to know the ejection trajectory to such detail that we knew which slit it went through, then the measurement of the IP would simply not be such a measurement. Let's not forget that \"interference\" is such a measurement on a state, that we can consider the initial state as |a>+|b>, and that we apply a measurement that distinguishes |a>+|b> from |a>-|b> (fringe from anti-fringe). Interference shows when there is a high probability to get the first, and a low probability to have the second (or vice versa). If you now have an initial state which is NOT |a> + |b>, but simply |a>, then the IP will not show (the probabilities will be equal for the two outcomes). 
In all 2-slit interference, |a> stands for "beam through the first slit" and |b> stands for "beam through the second slit" (and the relative sign gives you the relative phase between both paths - which is up to a point conventional).

Because they all undergo the SAME interference measurement; in other words, we do an interference experiment on the TOTAL state, and not on some substate (usually we lose the thermal photon, and we only look at "one leg" of the entangled pair (molecule-photon)).

Because the interference experiment didn't apply to the electron when it was emitted, while it took part in the experiment when it was still part of the molecule.

Here we enter interpretation-dependent issues. In the von Neumann view (with projection), a measurement UNDOES entanglement (as the state is now an eigenvector of the measurement operator, and hence again in a product state). In the MWI view, each measurement ENTANGLES the observer with the system, but as the observer is going to see only one branch, this entanglement becomes unobservable. Interactions usually result in entanglement according to the Schroedinger equation (it doesn't have to be so, but there are chances that things go that way).

8. Feb 18, 2006

### zekise

Patrick - I am pretty sure you do not mean that if we only went through the motions of calculating the FF molecule ejection trajectory, then the system would somehow become a mixed state |a> or |b>.

Therefore it must be one of the following 2 cases (?):

1- The sublimated FF molecule is entangled with the rest of the FF solid (and is not coherent to itself). In this case, there will be no interference pattern observed, because the molecule is entangled and the fringe is obscured by the anti-fringe. But in reality we do see a vivid interference pattern in matterwave experiments (without correlation selection).

2- There is no entanglement between the molecule and the rest of the FF solid (and the molecule is singularly coherent). In this case, in theory, we can obtain the recoil and calculate the trajectory and figure out |a> or |b>, and also get the IP. Surely it is not a matter of practicality. But it would be incorrect to say, IMO, that the IP survives because in practice we cannot calculate the trajectory, even though in principle the information is there in the solid.

What gives?

(I think this is closely related to the issue raised in the other Decoherence thread.)

My own hunch is that if there is a large number of degrees of freedom in the solid, then the information IS SIMPLY NOT THERE. As we reduce the degrees of freedom, the information BECOMES THERE, but the molecule starts becoming entangled with the solid. Once entanglement starts to set in, we will start losing the IP.

If this is the case, then this points to three things: 1- there is no information without entanglement; 2- entanglement is not a binary yes-no thing and is graduated (or is it quantized?); and 3- entanglement can be broken and destroyed in the Copenhagen (non-MWI) model, like when a member of an entangled pair collides with a system with large degrees of freedom (iow, it decoheres and superselection results).

Last edited: Feb 19, 2006

9. Feb 19, 2006

### vanesch

Staff Emeritus

Ah, you mean that by the simple act of working out the calculation, we'd change the state of the system? No, of course not.

But you think already of "trajectory"; I'm talking about "quantum state". The result of your most fine-grained calculation will be a *quantum state* of the ejected particle, and that can be a quantum state that "looks like a trajectory" (state |a> or state |b>, say, that go through slit 1 or slit 2), OR (most probably) it will be some kind of spherical wave state, which will be something of the kind |a> + |b>. If you say "trajectory" you take it for granted that the quantum state will be of the |a> or |b> flavor, but that doesn't need to be so (I'd even say that it almost certainly will NOT be the case, by the very fact that interference is observed!). It's a bit as if you were asking about the "precise trajectory calculation" of a particle in a momentum state. A pure momentum state (which could be the result of a precise calculation) is NOT to be seen as a "trajectory", but is a superposition of "position states" (trajectories, if you want).

Yes, so that's probably NOT the case, as you point out.

No. IF we are able to distinguish between the two "recoil states", THEN we're back in your case 1. So you can only get an IP if the two different recoil states are, in fact, almost identical (see the thread with the mirror).

That the information is NOT there in the solid! Not even in principle. That the quantum state of the solid is (almost) the same after emission |a> or |b>, and that the (biggest part of the) state of the solid can be factored out.

YES! You've got it...

Yes. The only problem with this view is that you have to select arbitrarily when this thing becomes "irreversible".

Concerning the fact that entanglement is gradual, yes, of course.
Consider this:

Initial state: (|a> + |b>) |c>

After interaction, |a>|c> could evolve into something like |a>(|d> + eps |k>), while |b>|c> could evolve into something like |b>(|d> + eps |l>), where |k> and |l> are orthogonal states.

We now have the final state:

|a>(|d> + eps |k>) + |b>(|d> + eps |l>) = (|a>+|b>) |d> + eps (|a>|k> + |b>|l>)

If eps is a small number, we've still essentially got the product state, with a "tiny bit" of entanglement. That's what my "almost" statements referred to earlier.

If you do an interference experiment on |a>+|b>, you'll find a slightly decreased IP (because of the eps part). If you do a correlation experiment to test entanglement between a/b and k/l, you'll find a very, very small correlation.

10. Feb 22, 2006

### zekise

Very interesting, and thanks for the excellent post. So we are saying:

A1) If the substrate consists of a large number of degrees of freedom (LDF), then the emitted (sublimated) molecule will NOT be entangled with it, and the substrate will not enter into a new state after emission. Therefore, there is no information in the substrate.

A2) Now consider the other side of the experiment, where the superposed molecule hits the screen and gets "absorbed". Now I understand the decoherence folks are saying (pls. correct if necessary) that if the screen consists of a large number of degrees of freedom, then the molecule will collapse, and statistically we will have a fringe, which implies that there will again be no entanglement with the screen. However, in this case there is residual information left at the screen (aka the measurement), the screen will enter a different state, and this will also be in principle irreversible ("indelible").

A3) So here we have an example of information transfer without entanglement. What is the nature of this information, and how is it different from entangled information? Is this information the energy of the molecule, its momentum, or its position or spin, or what?

What really bothers me here is that A1) and A2) are the time reverse of each other. But A2 does result in information transfer, and A1 does not. What gives?

But I understand decoherence theory says the pointer states that emerge are superselected for stability or some other criterion ("Quantum Darwinism" as per Zurek). So they are not arbitrary (indeterminate). And this is why classicality is not arbitrary.

A4) What is it about the "large degrees of freedom" that inhibits entanglement (and can cause disentanglement)? Is it the density matrix quickly losing its off-diagonal entries when there are more degrees of freedom? How many degrees of freedom does a single isolated atom have, what does its density matrix look like, and why do the off-diagonals NOT approach zero?

A5) Now is it fair to say that when an entangled particle meets a system with large degrees of freedom (LDF), it will disentangle and classicality will emerge? If it did not disentangle, then the particle would put the system into a superposed entanglement, and the entanglement would spread throughout the system and the environment. Soon you would be seeing things vanishing before your own eyes! (Schro's cat). Then the universe would become one big gigantic entangled mess, and weird things would be happening, such as particles refusing to collide, and there would be no ability to sustain life or consciousness. So we are saved by this LDF. But I do not understand how this LDF causes disentanglement.

bruce2g, thanks for the article on Penrose. I wonder if there have been any results from this proposed experiment? I am not sure of the significance of demonstrating a macroscopic superposition. I am not familiar with Penrose's theses (except that he claims a new sort of physical phenomenon, and that the brain is loaded with it, resulting in consciousness). I think someone wrote a paper that there can't be anything coherent in the brain for too long (heh heh, no pun intended), that it will decohere, and that the brain is a classical device. I think decoherence theory has really taken the steam out of the mystics and the consciousness people.

Last edited: Feb 22, 2006

11. Feb 23, 2006

### vanesch

Staff Emeritus

The point is not so much the large number of degrees of freedom, but whether the particle interacts coherently with the entire system, which then ends up in the same state after the interaction or not. As a simplistic classical example, consider the difference between, say, an elastic collision (where the colliding objects keep no "souvenir" of their collision) vs an inelastic collision (where some internal degrees of freedom got changed due to the collision, and can hence serve as "memory" of a collision). I'd even say that having a lot of degrees of freedom would *increase* the chance of the inelastic collision; it is only in special cases that this doesn't happen (such as with a mirror). The mirror is rather the exception, where no "remnant" is left. Most stuff doesn't act as a perfect mirror, and some remnant is left.

Well, decoherence doesn't make much sense outside of an MWI perspective. And then things don't collapse, they just entangle! The screen just *entangles* with the molecule:

initial:
(|moleculepos1> + |moleculepos2> + ...) |virginscreen>

becomes:
(|moleculepos1>|screenflash1> + |moleculepos2>|screenflash2> + ...)

As we'll get entangled also in this way, we'll only observe one of the branches.

Yes, it is quite hopeless (although in principle possible) to evolve the final state back to the initial state, where the screen information is "erased" (every branch gets back to the |virginscreen> state). At this point, you can just as well call this "a collapse" because of the quasi-impossibility of merging the different terms again, and forget about all the others.
(at least, that's how one can introduce a \"practical\" collapse in the MWI view - IMO that's how decoherence is to be seen).\n\nNo, on the contrary: it is because the screen DID get entangled with the molecule state that it got the information. And it is because WE got entangled with the screen that we knew about it. (in the MWI - decoherence view).\n\nWhat decoherence says is that the Schmidt decomposition in the observer x rest of universe basis will result in classically-looking states for each term. But you still have to pick out one of them.\n\nIn the decoherence view, this is rather irreversible entanglement, and not disentanglement! But IF YOU APPLY A COLLAPSE, of course ,this irreversible entanglement (resulting in different terms) followed by the picking out of one term (collapse) will then result in a product state (= non-entangled system). In fact, more degrees of freedom you have, more chance you have that this happens. It is only in special circumstances that all those degrees of freedom act coherently (such as is the case with a mirror), and then you do not have entanglement.\n\nThat depends entirely on the interactions that are going on.\n\nIt depends whether these DoF interact coherently or not. If they do NOT interact coherently, you will get irreversible entanglement, which can also be seen (after projection) as disentanglement (although it was in fact entanglement from a decoherence PoV). If they interact coherently (as do the electrons in a mirror), you can avoid extra entanglement, and hence preserve the *original* interference which was a proof of the simple entanglement.\n\nYes, that's in fact the real decoherence-MWI view. And once YOU get entangled too, you'll only observe \"one branch\" (one term). 
It is then up to you to decide whether or not you want to drag along the entire wavefunction (and keep the entanglement) or whether you will limit your attention only to the term you are observing (= collapse), at which point you arbitrarily decided to "throw away" the other terms, and hence disentangle the entire circus.

No, on the contrary (that's the MWI view). Because of this entangled mess, we WON'T often be seeing quantum interference effects and we'd have the impression of living in only one branch, which looks fairly classical.
So there is no *real* disentanglement in this view (as you say, we're living in a huge entangled mess) but things are so hopelessly entangled with each other that almost ALL OBSERVATIONAL CONSEQUENCES (which are quantum interference and correlation results) are gone. Which is exactly what we observe! Remember that we cannot observe entanglement, we can only observe *interference*. And entanglement KILLS interference of the subsystems, making it observable only for the overall system if you do a careful correlation study. Now, if we get entangled with the entire universe, we won't ever be able to do all the necessary measurements to find out the overall correlation; we'll always be limited to measurements on the subsystem, and as such, NOT find any correlation (= NO proof of quantum interference, and hence "local" entanglement).

cheers,
Patrick.

12. Feb 23, 2006

### zekise

A2) Now I am a bit confused - in this case (A2), the experiment is just a standard double-hole matter-wave experiment and it shows (self-)interference on the screen, due to wavefunction components from each hole. So would this not indicate that there is NO entanglement between the molecule and the screen, because if it were entangled we should get a Gaussian? (MWI really confuses me, so let us first discuss the decoherence view in the non-MWI interpretation, if that is OK with you.)
Since we see an interference pattern emerge from statistical registrations of the molecules on the screen, there cannot be entanglement between the molecule and the screen. And there is some information passed to the screen, such as momentum transfer (or, in the case of a photon, the absorption of the photon energy). If the screen were a detector array, then it could say, "molecule #n was registered at (x, y)".

A2.1) So if indeed there is no entanglement in this non-MWI view, then what caused that? Collisions between the molecule and an isolated gas atom have been shown to produce entanglement. But collision between the molecule and the screen in a vacuum has been shown to produce interference, so presumably there is no molecule-screen entanglement.

A4) Now imagine in the same experiment we place in front of the (macro) screen (which is really an array detector) an array of suspended and isolated atoms (micro), one atom for each cell. Then if the molecule first collides with one of these atoms, we WILL get entanglement, and let's say the molecule (but not the atom) then registers itself on the screen array detector. We will not observe interference anymore, and instead will see a bell-shaped clumping statistically. So again, a (micro) "screen" made of isolated atoms will show a Gaussian, while a regular (macro) screen will show interference. What is the qualitative difference?

A5) Now if it is the case of no entanglement with a regular macro screen, then would this not indicate that it is possible to break entanglement? Imagine the molecule first colliding with an isolated atom on its way to the screen, and getting entangled with this atom. The molecule then proceeds to hit the screen and essentially becomes part of the screen. We will observe a Gaussian statistically.
Now, since the screen did not generate an entanglement with the molecule as per A2), it would be reasonable to assume that the entanglement of the molecule with the isolated atom would NOT be able to survive, and will get broken or decohered. Otherwise you would have an entanglement between the atom and the macro screen (where the molecule is now "integral" to the screen), which as per A2) was shown not to be possible.

I also believe this is what Scully has shown. The 2nd photon is not entangled with the detector that absorbed the 1st photon. There is no mention of this.

So in this non-MWI view, it is possible to disentangle, and the universe would be generally disentangled, because most of the universe acts like a decohering macro screen.

A6) BTW, in addition to lack of interference, can't we use violation of the Bell inequality as proof of entanglement between 2 particles?

regards

Last edited: Feb 23, 2006
Alain Lavalle is a French film director and screenwriter.

Assistant director
1957: Le Naïf aux quarante enfants, by Philippe Agostini
1966: L'Âge heureux, by Philippe Agostini (TV series)
1967: Poker d'as pour Django (Le due facce del dollaro), by Roberto Bianchi Montero
1968: Bérénice, by Pierre-Alain Jolivet
1968: Tête de pont pour huit implacables (Testa di sbarco per otto implacabili), by Alfonso Brescia
1969: Paulina s'en va, by André Téchiné
1970: Trop petit mon ami, by Eddy Matalon
1972: La Nuit bulgare, by Michel Mitrani

Director
1967: Quand la liberté venait du ciel
1973: La Révélation

Screenwriter
1986: Juste une histoire, by Jean-Claude Longin

Technical advisor
1976: Les Petits dessous des grands ensembles, by Christian Chevreuse and Michel Caputo
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,107
Metro Parks to buy 80 acres of land at Clear Creek
Jeff Barron, Lancaster Eagle Gazette

ROCKBRIDGE - In order to preserve a good-sized portion of Clear Creek Metro Park from development, Columbus Metro Parks is in the process of buying 80 acres of the park, which lies in both Fairfield and Hocking counties. The 80 acres are in the southern part of the park in Hocking County. "It's the largest state nature preserve in Ohio," Metro Parks Executive Director Tim Moloney said. "With 5,000 acres of land, we say that a park like that is second to none."

The purchase price is $400,000 and comes from a $200,000 grant from Clean Ohio and a $200,000 grant from the Conservation Fund. Metro Parks is buying the property from Arc of Appalachia, which bought the property from a private owner. That group is a non-profit dedicated to forest preservation. Moloney said the sale could close in 30 days or less. He said the sale will prevent any future commercial development at the park. "It's important to preserve it," Moloney said. "With the valley and the streams that feed into the Hocking River, it is spectacular."

Columbus Metro Parks has owned Clear Creek since 1975. The agency dates back to 1945, when it bought Blacklick Woods Metro Park in Reynoldsburg. There are several other metro parks around Fairfield County: Slate Run and Pickerington Ponds in Canal Winchester, and Chestnut Ridge in Carroll.

The Metro Parks website touts Clear Creek as being home to 2,200 species of plants and animals. It also says the park's forested areas range from Canadian hemlocks and ferns to oak and hickory. The site also says Clear Creek is home to the state's last remaining colonies of the rhododendron plant. Visit www.metroparks.net/parks-and-trails/clear-creek/ to learn more about Clear Creek Metro Park.

jbarron@gannett.com
Twitter: @JeffDBarron
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
31
For decades, the West's discarded winter wear has come here to be ripped apart and reborn as cheap, warm blankets. Now, a new type of Chinese fleece is threatening to make them obsolete. The streets of Panipat are lined with cloth; trucks bursting with the stuff lumber along them. Step closer and the cloth turns out to be shredded woollen scrap, ready to be turned back into a type of yarn called shoddy. This is the castoff capital of the world. Woollen clothing from the US, Canada, the UK, Western Europe, Japan and Korea is brought to this town in Haryana to be resold, recycled and reused. Most of it is turned into shoddy yarn, and used to make coarse blankets that are sold very cheap but are exceptionally warm. At one point, Panipat accounted for 90% of the shoddy blankets used worldwide — most of it in disaster relief, some of it sold domestically, for use in rural UP, Bihar and Rajasthan, where the winters are fierce. It's a fair point. The factories have dwindled in number from about 400 in the early 2000s to 100 today. Production is now at 300 tonnes of yarn a day, down 25% from 2008. The falling numbers can be traced all the way to China, where a new type of artificial fleece is being used to churn out soft, new blankets from virgin polyester yarns. Where the blankets made from shoddy are rough, coarse and usually brown or grey — in addition to disaster relief, they are used by the Indian Railway and in government hospitals — the Chinese blankets are light, soft and come in a variety of colours. Worst of all, for Panipat, the Chinese blankets are cheaper. Fleece blankets cost between Rs 80 and Rs 250 each; shoddy between Rs 70 and Rs 300. The hundred factories are holding on because there is some demand from the disaster-relief segment. "But even aid organisations have started to reject our shoddy, saying it's not soft enough or light enough," says Garg. He believes the end is near. 
It's been more than 50 years since Panipat acquired the status of castoff capital. It was a tag held, before that, by Prato, an industrial city in Italy. Ironically, it was low-cost alternatives in the Indian subcontinent that led to Prato's decline. Recycling units here began to use the same machines to offer a competitive product at prices that were far lower, because of lower labour costs here. Woollen clothes previously exported to Prato began to find their way to India, Pakistan and Bangladesh. "In India, as Panipat was a spinning centre with a history of working with woollen yarn, so the recycled clothes started coming here, via the Kandla port in Gujarat," says Garg. That's the route they still take today, but with production falling, only half as many castoffs are being imported. A walk through the 30-year-old Shankar Woollen Mill is a reminder of how things used to be. Piles of mutilated woollen clothes stand 10 ft high across the 20,000-sq-metre space. Most have already been sorted into the basic colour families of grey, red, blue, camel and green — the colours of most winter clothing in the West. At each pile, five or six workers are ripping out zips, buttons and linings. Others are shredding the clothes with special slicers. "The clothes are then put into rag-pulling machines, from where they emerge as uniformly coloured fluff; then soaked in oil for conditioning; re-conditioned; pulled apart in a carding machine, and finally twisted into yarn," says Surender Gupta, 64, owner of the mill. The constant activity, the hum of the machines, can give you a false sense of success, he says. "The reality is that in the last two years, our shoddy yarn business has fallen by half. Two years ago, we were importing 100 tonnes of clothes a month. Today, it has reduced to 50. The demand is less. We have also sold one of our three machine sets," Gupta says. Some factories are staying afloat by switching to the Chinese fleece.
Birmi International, set up in 1992, was one of the first to make the move, in 2010. Where the firm once bought used clothes and turned them into yarn, it now buys virgin polyester yarns and, using machines bought in China, turns them into fleece blankets. Other units are trying to survive by diversifying into cotton yarn. The association had submitted a representation to the Centre on February 1, regarding the issue of fleece blankets affecting the business. "I plan to take it up with the union minister and ensure something is done to make the environment feasible for the continued existence of the industry," says Vipul Goel, state minister in charge of industries and commerce. Meanwhile, it doesn't help that production costs are rising even as demand tanks. "Transportation and storage are becoming more expensive, so is power supply and labour," says Kumar.
{ "redpajama_set_name": "RedPajamaC4" }
4,633
If you have multiple audio interfaces, synthesizers, or even iPad interfaces, you can get them to all work as one virtual interface on the Mac. Here's how to combine them as an aggregate device.

In many DAWs, including Logic Pro and Live, you can only choose a single input and output device when using Apple's Core Audio. Because of this, it makes for a bit of a challenge when the need for more inputs/outputs arises. In this article, I'll show you how to create a virtual device that you can select from in your DAW as you would any physical interface. That single device contains all the audio devices in your studio combined.

In addition to your main audio interface, there are many devices today that have built-in audio interfaces, like microphones, iPad interfaces, synthesizers, and guitar cable/interfaces. You might have an extra audio interface you no longer use. Getting all these working together in your DAW can be challenging, but worth it. Let's take a look at how to combine these types of devices into a single device for use in your DAW (Core Audio users only).

AMS (Audio MIDI Setup) is a built-in Apple application found in your Mac's Utilities folder. This application is updated with OS X and cannot be updated any other way. Basically, if you're running an old version of OS X, you're running an old AMS. To use the most current version, you'll need to be on the latest OS X (currently 10.11.4). All the functionality shown in this article is available on older Mac OS/AMS versions, but if you're experiencing difficulties with AMS, I'd suggest updating to the latest OS.

As its name implies, there really are two sides to Audio MIDI Setup: Audio and MIDI. We'll only be dealing with the audio page. You can access the Audio Devices window (if it's closed) by choosing it from AMS's Window menu. Along the left side you'll see your various devices listed. At the very bottom left corner you'll see a small plus button.
Press that and choose to create a new "Aggregate Device". This new virtual device appears in the list along with all your other devices. Once created, double-click it to give it a meaningful name (so it's easy to spot when switching between audio inputs/devices in your DAW). From the display area on the right, you can configure AMS. Choose any or all of the devices you want as part of your one monster interface, but pay attention to the order in which you select them. Take a photo of the "sub-devices" display area, or write down which devices are going where and to what inputs/outputs. You'll need this when configuring your input/output labels in your DAW (if it has the capability of doing that, as Logic Pro does).

All devices that convert data from analog to digital and back have what is called a word clock. Some of these devices can sync their word clock (flow of data) with other devices to eliminate clicks/pops and artifacts. If any of your devices can send word clock, you'll see one of several types of digital connectors on the back. If all of your devices have word clock connectors, cable them together in the physical world and set one as the master and the other(s) as slave. My Apogee Quartet interface, for example, has one of these ports, but my iPad interface and the JX-03 synth I'm wanting to combine do not. Since I won't be able to tie these guys together physically, I'll need to use AMS's built-in "Drift Control." Choose the most reliable device from the Clock Source drop-down menu. In my example I chose my Apogee Quartet, since Apogee are known for a very stable word clock. When you do this, AMS automatically checks off the Drift Control buttons for the other devices to follow the main selected clock source. They're not actually syncing in the way a physical word clock connection would, but AMS regulates the data and will correct the "drift" of the other slave devices.
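The idea behind drift correction can be sketched in plain code. This is an illustrative toy model only: the function name is mine, and Apple's actual resampler is certainly more sophisticated than linear interpolation. The point is simply that a slave device whose clock runs slightly fast delivers more samples per second than the master expects, and reading through its buffer at a corrected rate brings the streams back into alignment.

```python
# Minimal sketch of software "drift control": resample a slave device's
# stream so its effective sample rate tracks the master clock.

def resample_linear(samples, ratio):
    """Resample `samples` by `ratio` (slave_rate / master_rate) using
    linear interpolation. ratio > 1 means the slave clock runs fast,
    so we read through its buffer faster to stay aligned."""
    if not samples:
        return []
    out = []
    pos = 0.0
    n = len(samples)
    while pos < n - 1:
        i = int(pos)
        frac = pos - i
        # Linearly interpolate between the two neighbouring samples.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

# Example: a slave clock drifting 1% fast yields 101 samples where the
# master expects 100; resampling with ratio=1.01 realigns the stream.
drifted = [float(i) for i in range(101)]
corrected = resample_linear(drifted, 1.01)
```

A real implementation would estimate the ratio continuously (e.g., from buffer fill levels) rather than assuming a fixed drift, which is presumably why AMS handles this for you.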
Once created, your new virtual device will be selectable via the DAW's interface setup window. Simply select it as you would any single audio interface. You can now choose from any of the inputs/outputs on your DAW tracks as you normally would, but with more choices now! In my setup, for example, I can choose inputs 13/14 when I want to record my Roland JX-03 desktop synth.

Not all DAWs can do this, but in Logic Pro, for example, you can name the various ins and outs via the Mix menu under "I/O Labels." AMS does actually have the ability to name the specific channels (you can single-click the specific channels in AMS and name away), but that data doesn't seem to carry over yet. Reference the photo/paper you wrote your AMS setup down on and start writing them in. Enter in the long and short names and the "user" buttons will enable. Once configured, you can easily select from any input/output menu in Logic and you'll see your custom names!

Check AMS periodically before opening your DAW to make sure all your sub-devices are enabled in the created Aggregate Device. Making sure all devices are plugged in/on before powering on your Mac can help ensure they will be available. If one or more devices are not available in AMS, try powering them on, and watch in AMS to see if they come up. If one or more does not, you may have to reselect the device within the Aggregate Device before opening your DAW. I'm not entirely sure technically why this happens, but it does on occasion. If the devices end up in the wrong order, the labels you set in your DAW will not match up… simply uncheck all sub-devices, and re-check them in the correct order. If the devices have different sample rate ceilings, stay at a rate that all the devices can do.

Darren started making music on computers when he was a teenager in 1987. His first computer was an Amiga, and when he realized the power of computer-based production, his addiction for making electronic music began.
Darren switched to Mac in 1994 and started using Logic Pro. He's been involved in many music projects over the years including Psychoid. For two years Darren travelled with Apple showing Logic Pro to visitors of Macworld, NAMM, Remix Hotel and NAB. Currently, he teaches a class in L.A. on electronic music production using Logic for Logic Pro Help. Darren also runs two small businesses on Mixing, Mastering and Logic Pro training and support.
{ "redpajama_set_name": "RedPajamaC4" }
2,870
\section{Introduction} \subsection{Background} Recently, \emph{physical-layer service integration} (PHY-SI), a technique of combining multicast service and confidential service into one integrated service for one-time transmission at the physical layer, has received much attention in wireless communications. For one thing, PHY-SI caters to the demand for high transmission rate and secure communication, which has been identified as the key targets that need to be effectively addressed by fifth generation (5G) wireless systems \cite{andrew2014what}. Besides, compared with the conventional upper-layer-based approach, PHY-SI enables coexisting services to share the same resources by solely exploiting the physical characteristics of wireless channels, thereby significantly increasing the spectral efficiency. This property makes PHY-SI a prominent approach to satisfy the ever-increasing need for radio spectrum. The technique of PHY-SI could also find a wide range of applications in the commercial and military areas. For example, many commercial applications, e.g., advertisement, digital television, Internet telephony, and so on, are supposed to provide personalized service customization. As a consequence, confidential service and public service are collectively provided to satisfy the demand of different user groups. In battlefield scenarios, it is essential to propagate commands with different security levels to the frontline. The public information should be distributed to all soldiers, while the confidential information can only be accessed by specific soldiers. Such emerging applications lead to a crucial problem: \emph{how to establish the security of confidential service while not compromising the quality of public service}? \subsection{Related Works} Let us first have a very brief review on physical-layer security, a technique that lays foundation for the research on PHY-SI. The broadcast nature of wireless medium makes privacy an inherent concern. 
Physical-layer security has been playing an increasingly important role in wireless communications recently. It can secure communications information-theoretically at the physical layer without using secret keys, whose distribution or management may become difficult in, e.g., ad-hoc wireless networks. Different strategies against eavesdropping have been developed with various levels of channel state information (CSI) available to the transmitter (see the comprehensive overviews in \cite{shiu2011physical,he2013wireless,hong2013enhancing,mukherjee2014principles,liu2016physical}). Liu and Poor first coined the term \emph{confidential broadcasting} in \cite{liu2009secrecy,liu2010multiple} and established the corresponding secrecy capacity region. In confidential broadcasting, a transmitter broadcasts multiple confidential messages to all receivers. Each confidential message is intended for one specified receiver but required to be perfectly secret from the others. Some efforts have been made in, e.g., \cite{fakoorian2013on,park2016weighted} to maximize the sum secrecy rate in the scenario of confidential broadcasting. The study of PHY-SI can be traced back to Csisz{\'a}r and K{\"o}rner's seminal work in \cite{csiszar1978broadcast}. In the basic model of PHY-SI, a transmitter sends a common message to two receivers, and simultaneously sends a confidential message intended only for one receiver and kept perfectly secure from the other one. Under the discrete memoryless broadcast channel (DMBC) setup, Csisz{\'a}r and K{\"o}rner gave a closed-form expression of the maximum rate region that can be achieved reliably under the secrecy constraint (i.e., the secrecy capacity region).
In recent years, this kind of approach has gained renewed interest, especially in various multi-antenna scenarios, such as Gaussian broadcast channels \cite{Hung2010Multiple, ekrem2012capacity, liu2010mimo, liu2013new} and bidirectional relay channels \cite{wyrembelski2011service, Wyrembelski2012Physical}. In \cite{Hung2010Multiple}, the authors extended the results in \cite{csiszar1978broadcast} to a general MIMO Gaussian case by adopting the channel-enhancement argument. Further, the works \cite{ekrem2012capacity, liu2010mimo} considered the case with two confidential messages intended for two different receivers. The resulting secrecy capacity region is proved to be attainable by combining secret dirty-paper coding (S-DPC) with Gaussian superposition coding. Furthermore, in \cite{wyrembelski2011service} and \cite{Wyrembelski2012Physical}, Wyrembelski and Boche amalgamated broadcast service, multicast service and confidential service in bidirectional relay networks, in which a relay adds an additional multicast message for all nodes and a confidential message for only one node besides establishing the conventional bidirectional communication. Nonetheless, the main goal of the aforementioned papers is just to obtain capacity results or to characterize coding strategies that lead to certain rate regions \cite{Schaefer2014Physical}. For implementation efficiency, it is also important to treat physical-layer service integration from a signal processing point of view. In particular, optimal or complexity-efficient transmit strategies have to be characterized, so that the achieved performance can reach/approach the boundary of the secrecy capacity region. Such strategies are usually given by optimization problems, which generally turn out to be nonconvex. As a result, most works on PHY-SI stop once a certain characterization of a rate region has been derived.
Recently, to fill in the gap between the previous information-theoretic results and practical implementation, there has been growing interest in analyzing PHY-SI from a signal processing point of view. In \cite{Hung2010Multiple}, the authors proposed a \emph{re-parameterizing} method to devise transmit strategies for achieving the secrecy boundary performance. However, this method is only applicable to a very simple two-user MISO scenario. Besides, it involves solving a sequence of convex feasibility problems, which is computationally expensive. To improve it, the work \cite{mei2016secrecy} proposed a \emph{quality-of-service (QoS)-based} method to seek the boundary-achieving transmit strategies. Its basic idea is to establish the tradeoff between the secrecy rate and the multicast rate by maximizing the secrecy rate while keeping the multicast rate above a given threshold. This method is demonstrated to be effective in characterizing the secrecy boundary, and thus triggered research endeavors on extending the result to more general and realistic settings. Notable results include the extension to the multi-user \cite{mei1512} and imperfect CSI \cite{mei2016robust, mei2016artificial} settings. Even so, relatively little work has focused on the MIMO channel setup, due to the intractability of the associated optimization problems. In \cite{mei2016GSVD}, the authors circumvented the intractability by proposing a generalized singular value decomposition (GSVD) based transmission scheme. Using GSVD, the multicast message and confidential message can be perfectly decoupled and the resulting problem is easier to handle. However, this result is not applicable to the general multi-user MIMO case. In addition, it is also interesting to incorporate artificial noise (AN) into consideration, as such a technique has been shown to be effective in enhancing transmission security \cite{li2013transmit,li2013spatially,chu2015robust,chu2016secrecy,zheng2015multi}.
Specifically, the authors in \cite{li2013transmit}, \cite{li2013spatially,chu2015robust,chu2016secrecy} and \cite{zheng2015multi} respectively showed that AN is of paramount importance to physical-layer security when there exist multiple eavesdroppers in the network, when the CSI of eavesdropper(s) is imperfectly known at the transmitter, and/or when eavesdroppers are randomly located in the network. \subsection{Main Contributions} In this paper, we delve into the AN-aided transmit precoding design in PHY-SI under a general multi-user MIMO case. Specifically, two sorts of service messages are combined and promulgated at the same time: a multicast message intended for all receivers, and a confidential message intended for merely one authorized receiver. The confidential message must be kept perfectly secure from all other unauthorized receivers. Meanwhile, AN is employed to degrade the potential eavesdropping of the unauthorized receivers. This paper aims to jointly optimize the input covariance matrices of the multicast message, confidential message and AN, to maximize the achievable secrecy and multicast rates simultaneously, or equivalently, to maximize the achievable secrecy rate region. This secrecy rate region maximization (SRRM) problem turns out to be a biobjective optimization problem. Since the re-parameterizing method is invalid in a general MIMO case, we develop two scalarization methods to convert it into an easier-to-handle scalar version for characterizing its Pareto boundary. \begin{enumerate} \item In the first method, we propose to fix the multicast rate as a constant. Through varying the value of the constant, this method could yield different secrecy boundary points. Since the Pareto optimal points must reside on the boundary of the achievable rate region, this method is bound to provide a complete set of the Pareto optimal points. 
Though the resultant secrecy rate maximization (SRM) problem is nonconvex by nature, we show this problem actually falls into the context of difference-of-concave (DC) programming \cite{NIPS2009}. Hence, it can be handled by the classical DC algorithm with a convergence guarantee. \item As for the second method, a weighted sum-based scalarization is introduced. The crux of this scalarization method is to optimize the weighted sum of the two objectives with different weight vectors. By varying the weight vector, this method gives rise to different Pareto optimal solutions. To solve this weighted sum rate maximization (WSRM) problem, we reveal its hidden decomposability by recasting it in an equivalent form amenable to alternating optimization (AO). The AO algorithm is naturally employed to solve the WSRM problem. It can be proved that this AO algorithm must converge to a stationary point of the WSRM problem. \item It is particularly worth mentioning that though the DC and AO algorithms have been applied to address the issue of physical-layer security before in, e.g., \cite{li2013transmit,fang2016precoding,chu2015secrecy}, none of these works considered integrating an additional multicast message. Our paper is an initial attempt to study the application of DC and AO to the emerging PHY-SI system, which turns out to be a harder task than its counterpart in physical-layer security due to the coexisting multicast service. \end{enumerate} Then we compare these two sorts of scalarization methods in terms of their overall performance and computational complexity. The comparison results reveal that the first method is more efficacious in finding all Pareto optimal points than the second one. The advantage of the second method lies in its problem structure, which provides the service provider with a solution for maximizing the overall revenue. Besides, we show that the DC algorithm is more time-efficient at low transmit power than the AO algorithm.
Interestingly, the numerical results indicate that at high transmit power, the AO algorithm becomes the more time-efficient one. \subsection{Organization and Notations} This paper is organized as follows. Section \Rmnum{2} provides the system model description and problem formulation. The optimization aspects of our formulated problems are addressed in Sections \Rmnum{3} and \Rmnum{4}, corresponding to the first and the second scalarization methods, respectively. The comparison results are given in Section \Rmnum{5}. Section \Rmnum{6} presents simulation results to show the efficacy of our proposed methods. Finally, conclusions are drawn in Section \Rmnum{7}. The notation of this paper is as follows. Bold symbols in capital letters and small letters denote matrices and vectors, respectively. ${{(\cdot)}^{H}}$, $\rm{rank}(\cdot)$ and $\text{Tr}(\cdot )$ represent the conjugate transpose, rank and trace of a matrix, respectively. ${\mathbb{R}}_{+}$ and ${\mathbb{H}}_{+}^n$ denote the set of nonnegative real numbers and the set of $n$-by-$n$ Hermitian positive semidefinite (PSD) matrices, respectively. The $n \times n$ identity matrix is denoted by ${\mathbf{I}}_n$. $\mathbf{x}\sim \mathcal{CN}(\mathbf{\mu },\mathbf{\Omega })$ denotes that $\mathbf{x}$ is a complex circular Gaussian random vector with mean $\mathbf{\mu}$ and covariance $\mathbf{\Omega}$. $\mathbf{A}\succeq \mathbf{0}$ $(\mathbf{A}\succ \mathbf{0})$ implies that $\mathbf{A}$ is a Hermitian positive semidefinite (definite) matrix. ${\left\| \cdot \right\|}$ represents the vector Euclidean norm. $K$ represents a proper cone, and $K^*$ represents the dual cone associated with $K$. \section{System Model and Problem Formulation} We consider the downlink of a multiuser system in which a multi-antenna transmitter serves $K$ receivers, and each receiver is equipped with multiple antennas.
Assume that all receivers have ordered the multicast service and that receiver 1 has additionally ordered the confidential service. To enhance the security performance, the transmitter utilizes a fraction of its transmit power to send artificially generated noise to interfere with the unauthorized receivers (eavesdroppers), i.e., receiver 2 to receiver $K$. We assume in this paper that all receivers are static and that all the communication links undergo slow frequency-flat fading. \begin{remark} In this paper, we assume that only one receiver orders the confidential service within a single time slot. In practice, this assumption is valid in cases where the confidential service is provided to all receivers in a \emph{round-robin} manner, i.e., the time slots are assigned to each subscriber of the confidential service in equal portions and in circular order, so as to strengthen the security of confidential messages and to reduce the operational complexity at the transmitter. \end{remark} The received signal at receiver $k$ is modeled as \begin{equation}\label{yk} {{\mathbf{y}}_k} = \;{{\bf{H}}_k}{\bf{x}} + {{\mathbf{z}}_k}, k=1,2,\cdots,K \end{equation} where ${{\mathbf{H}}_k}\in {{\mathbb{C}}^{{{N}_{r,k} \times {N}_{t}}}}$ is the channel response between the transmitter and receiver $k$; and ${N}_{t}$ and ${N}_{r,k}$ are the numbers of antennas employed by the transmitter and the $k$th receiver, respectively. ${{\mathbf{z}}_k}$ is independent and identically distributed (i.i.d.) complex Gaussian noise with zero mean and unit variance. 
${{\mathbf{x}}}\in {{\mathbb{C}}^{{{N}_{t}}}}$ is the coded transmit message, which consists of three independent components, i.e., \begin{equation}\label{x3c} {\bf{x}} = {{\bf{x}}_0} + {{\bf{x}}_c} + {{\bf{x}}_a}, \end{equation} where ${\bf{x}}_{0}$ is the multicast message intended for all receivers, ${\bf{x}}_{c}$ is the confidential message intended for receiver 1, and ${\bf{x}}_{a}$ is the artificial noise. We assume $\mathbf{x}_{0} \sim \mathcal{CN}(\mathbf{0},\mathbf{Q}_0)$ and $\mathbf{x}_{c} \sim \mathcal{CN}(\mathbf{0},\mathbf{Q}_c)$ \cite{Hung2010Multiple}, where $\mathbf{Q}_0$ and $\mathbf{Q}_c$ are the transmit covariance matrices. The AN ${\bf{x}}_{a}$ follows the distribution $\mathbf{x}_{a} \sim \mathcal{CN}(\mathbf{0},\mathbf{Q}_a)$, where $\mathbf{Q}_a$ is the AN covariance. The CSI on all links is assumed to be perfectly known at the transmitter and the corresponding receivers, since all receivers must register with the network to subscribe to the multicast service. In practice, the CSI at the receivers can be obtained from channel estimation of the downlink pilots, and the CSI at the transmitter can be acquired via uplink channel estimation in time division duplex (TDD) systems. The design of a high-quality channel estimation scheme is beyond the scope of this paper. Note that the full CSI assumption is commonly adopted in the area of physical-layer security/multicasting, especially for MIMO channels \cite{li2013transmit,fang2016precoding,yang2013optimal,park2016weighted,wu2013physical,zhu2012precoder,lee2013a,du2013optimum}. For ease of exposition, let us define ${\cal K} \buildrel \Delta \over = \{1,2,...,K\}$ and ${{\cal K}_e} \buildrel \Delta \over = {\cal K}\backslash \{ 1\}$, which denote the indices of all receivers and of all unauthorized receivers, respectively. Denote $R_0$ and $R_c$ as the achievable rates associated with the multicast and confidential messages, respectively. 
Then an achievable secrecy rate region $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$ is given as the set of nonnegative rate pairs $(R_0,R_c)$ satisfying \cite{Hung2010Multiple} \begin{equation}\label{Region1} \begin{split} &{R_0} \le \mathop {\min }\limits_{k \in {\cal K}} C_{m,k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})\\ &{R_c} \le C_b({{\bf{Q}}_c},{{\bf{Q}}_a}) - \mathop {\max}\limits_{k \in {{\cal K}_e}} C_{e,k}({{\bf{Q}}_c},{{\bf{Q}}_a}), \end{split} \end{equation} where \begin{subequations}\label{Region2} \begin{align} \nonumber &C_{m,k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})=\\ &\qquad{\log | {{\bf{I}} + {{({\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_c} + {{\bf{Q}}_a}){\bf{H}}_k^H)}^{ - 1}}{{\bf{H}}_k}{{\bf{Q}}_0}{\bf{H}}_k^H} |},\\ &C_b({{\bf{Q}}_c},{{\bf{Q}}_a}) = \log | {{\bf{I}} + {{({\bf{I}} + {{\bf{H}}_1}{{\bf{Q}}_a}{\bf{H}}_1^H)}^{ - 1}}{{\bf{H}}_1}{{\bf{Q}}_c}{\bf{H}}_1^H} |,\\ &C_{e,k}({{\bf{Q}}_c},{{\bf{Q}}_a})= \log | {{\bf{I}} + {{({\bf{I}} + {{\bf{H}}_k}{{\bf{Q}}_a}{\bf{H}}_k^H)}^{ - 1}}{{\bf{H}}_k}{{\bf{Q}}_c}{\bf{H}}_k^H} |, \end{align} \end{subequations} and $\text{Tr}(\mathbf{Q}_0+\mathbf{Q}_c+\mathbf{Q}_a) \le P$ with $P$ being the total transmit power budget at the transmitter. The secrecy rate region (\ref{Region1}) implies that all receivers first decode their common multicast message by treating the confidential message as noise, and then receiver 1 acquires a clean link for the transmission of its exclusive confidential message, free of interference from the multicast message. This can be achieved by utilizing the encoding schemes proposed in \cite{csiszar1978broadcast}. To maximize this achievable secrecy rate region, our goal is to find the boundary-achieving $\mathbf{Q}_0$, $\mathbf{Q}_a$ and $\mathbf{Q}_c$, which are also known as Pareto optimal solutions to this SRRM problem. 
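To make the rate expressions in (\ref{Region1})--(\ref{Region2}) concrete, the following sketch evaluates an achievable rate pair $(R_0,R_c)$ in the single-antenna special case ($N_t=N_{r,k}=1$), where each covariance matrix reduces to a nonnegative power and each log-det reduces to $\log_2(1+\mathrm{SINR})$. The channel gains and the power split below are purely illustrative assumptions, not values taken from this paper.

```python
import math

def rate(sinr):
    """Shannon rate log2(1 + SINR) in bits per channel use."""
    return math.log2(1.0 + sinr)

def rate_pair(gains, q0, qc, qa):
    """Scalar (single-antenna) evaluation of the achievable region:
    gains[0] is the authorized receiver's channel power |h_1|^2,
    gains[1:] belong to the unauthorized receivers, and
    (q0, qc, qa) are the multicast / confidential / AN powers."""
    # Multicast rate: every receiver decodes x0, treating xc and the AN as noise.
    R0 = min(rate(g * q0 / (1.0 + g * (qc + qa))) for g in gains)
    # Secrecy rate: receiver 1's rate minus the strongest eavesdropper's rate.
    Cb = rate(gains[0] * qc / (1.0 + gains[0] * qa))
    Ce = max(rate(g * qc / (1.0 + g * qa)) for g in gains[1:])
    Rc = max(0.0, Cb - Ce)
    return R0, Rc

# Illustrative gains and a power split satisfying q0 + qc + qa <= P = 10.
R0, Rc = rate_pair([2.0, 0.3, 0.5], q0=5.0, qc=3.0, qa=2.0)
```

Sweeping the power split over the simplex $\{q_0+q_c+q_a \le P\}$ traces out the corresponding scalar-case rate region.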
Specifically, with perfect CSI available at the transmitter, we must first solve the following biobjective maximization problem with cone $K=K^*={\mathbb {R}}_ + ^2$. \begin{subequations}\label{op1} \begin{align} \nonumber&\mathop{\max}\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}} \left({\text{w.r.t.}}\; {\mathbb {R}}_ + ^2 \right)\; ( {\mathop {\min }\limits_{k \in {\cal K}} C_{m,k},C_b - \mathop {\max}\limits_{k \in {{\cal K}_e}} C_{e,k}} )\\ \text{s.t.}\quad&\text{Tr}({{\bf{Q}}_0} + {{\bf{Q}}_c} + {\bf{Q}}_a) \le P,\label{op1a}\\ &{{\bf{Q}}_0} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}, {\bf{Q}}_a \succeq {\bf{0}},\label{op1b} \end{align} \end{subequations} where, with a slight abuse of notation but for notational simplicity, the explicit dependence of $C_{m,k}$, $C_b$ and $C_{e,k}$ on $({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})$ is omitted. Since the SRRM problem is a biobjective maximization problem, it is necessary to employ a scalarization method to convert it into an easier-to-handle scalar version. \begin{remark} It is also viable to consider the scenario where all receivers order the confidential service and all confidential messages are propagated concurrently by the transmitter, i.e., the integration of multicasting and confidential broadcasting. The merit of this scheme lies in its higher spectral efficiency and lower latency. However, this comes at the expense of much higher operational complexity at the transmitter, especially when the number of users increases. Thus, our considered PHY-SI scheme is particularly desirable in delay-tolerant applications or when the transmitter possesses limited computational capacity for security-related computations. \end{remark} \section{A DC-Based Approach to the SRRM Problem} In this section, we develop our first scalarization method to solve (\ref{op1}). 
The basic problem formulation is a secrecy rate maximization (SRM) problem with imposed quality of multicast service (QoMS) constraints. \subsection{Scalarization} In particular, our method is to move the multicast rate maximization part into the constraints, i.e., we fix, for the time being, the multicast rate at a constant $\tau_{ms} \ge 0$. As a result, the biobjective SRRM problem (\ref{op1}) reduces to the scalar maximization problem shown in (\ref{op2}). \begin{subequations}\label{op2} \begin{align} \nonumber R&(\tau _{ms}) =\mathop{\max}\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}} C_b({{\bf{Q}}_c},{{\bf{Q}}_a})-\mathop {\max }\limits_{k \in {\cal K}_e}C_{e,k}({{\bf{Q}}_c},{{\bf{Q}}_a})\\ \text{s.t.}\; &\mathop {\min }\limits_{k \in {\cal K}}C_{m,k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) = {\tau_{ms}}, \label{op2a}\\ &\text{Tr}({{\bf{Q}}_0} + {{\bf{Q}}_c}+{{\bf{Q}}_a}) \le P, \\ &{{\bf{Q}}_0} \succeq {\bf{0}}, {{\bf{Q}}_a} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}. \end{align} \end{subequations} In (\ref{op2}), $R(\tau _{ms})$ is the optimal objective value, $\tau _{ms}$ can be interpreted as the preset requirement on the multicast rate, and accordingly, the constraint (\ref{op2a}) can be interpreted as a QoMS constraint. To guarantee the feasibility of problem (\ref{op2}), $\tau _{ms}$ cannot exceed the threshold $\tau _{\max}$ given by \begin{equation}\label{max_tau} {\tau _{\max }} = \mathop {\max }\limits_{{{\bf{Q}}_0} \succeq {\bf{0}}, \text{Tr}({{\bf{Q}}_0}) \le P} \mathop {\min }\limits_{k \in {\cal K}} \log \left| {{\bf{I}} + {{\bf{H}}_k}{{\bf{Q}}_0}{\bf{H}}_k^H} \right|. \end{equation} The value of $\tau _{\max}$ can be obtained numerically by solving (\ref{max_tau}) via the convex optimization solver \texttt{CVX} \cite{Boyd2011CVX}. 
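Problem (\ref{max_tau}) needs a numerical solver in general, but in the single-antenna special case it admits a closed form: with $N_t=1$, transmitting the multicast message at full power is optimal and the worst receiver dictates the common rate. The sketch below uses this closed form; the gains are illustrative assumptions of ours.

```python
import math

def tau_max_scalar(gains, P):
    """Single-antenna special case of the multicast-rate problem:
    with one transmit antenna, using the full power budget P is optimal,
    so the weakest channel determines the maximum common (multicast) rate."""
    return min(math.log2(1.0 + g * P) for g in gains)

tau = tau_max_scalar([2.0, 0.3, 0.5], P=10.0)  # limited by the g = 0.3 receiver
```

In the general MIMO case no such closed form exists, and (\ref{max_tau}) is solved numerically, e.g., with \texttt{CVX} as noted above.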
This sort of problem formulation, in fact, enables us to find one boundary point $(\tau_{ms},R(\tau _{ms}))$ of the secrecy rate region $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$ by solving (\ref{op2}). All boundary points of $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$ can be found if we traverse all possible ${\tau_{ms}}$'s lying within $[0,{\tau _{\max }}]$ and store the corresponding optimal objective values. Since the Pareto optimal solutions to (\ref{op1}) must reside on the boundary of $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$, i.e., the Pareto optimal set of (\ref{op1}) is a \emph{subset} of the boundary set of $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$, all Pareto optimal solutions to (\ref{op1}) can also be found by this means. However, problem (\ref{op2}) is nonconvex. In particular, the determinant equality constraint (\ref{op2a}) is very difficult to handle. To circumvent this difficulty, we turn our attention to the following relaxed version of (\ref{op2}), in which the equality constraint (\ref{op2a}) is replaced by the inequality constraint (\ref{relax.a}). \begin{subequations}\label{relax} \begin{align} \nonumber \tilde R&(\tau _{ms}) =\mathop{\max}\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}} C_b({{\bf{Q}}_c},{{\bf{Q}}_a})-\mathop {\max }\limits_{k \in {\cal K}_e}C_{e,k}({{\bf{Q}}_c},{{\bf{Q}}_a})\\ \text{s.t.}\; &\mathop {\min }\limits_{k \in {\cal K}}C_{m,k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) \ge {\tau_{ms}}, \label{relax.a}\\ &\text{Tr}({{\bf{Q}}_0} + {{\bf{Q}}_c}+{{\bf{Q}}_a}) \le P, \label{relax.b}\\ &{{\bf{Q}}_0} \succeq {\bf{0}}, {{\bf{Q}}_a} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}. \end{align} \end{subequations} Clearly, any optimal solution to (\ref{op2}) is feasible for (\ref{relax}), since replacing (\ref{op2a}) with (\ref{relax.a}) yields a larger feasible set. 
Hence, $R(\tau _{ms}) \le \tilde R(\tau _{ms})$ holds in general. Interestingly, we show that $R(\tau _{ms}) = \tilde R(\tau _{ms})$ can always be achieved without loss of optimality to (\ref{relax}). \begin{lemma}\label{equivalent} Problem (\ref{relax}) is a tight relaxation of problem (\ref{op2}). In other words, the rate pair $({\tau_{ms}},\tilde R(\tau _{ms}))$ must be a boundary point of $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$. \end{lemma} \begin{IEEEproof} The proof follows by construction. Suppose that the constraint (\ref{relax.a}) is satisfied with strict inequality. Then we can always multiply ${{\bf{Q}}_0}$ by a scalar $\nu\; (\nu < 1)$ to make (\ref{relax.a}) active, without decreasing the objective value of (\ref{relax}) or violating the total power constraint (\ref{relax.b}). This implies that there always exists an optimal solution to (\ref{relax}) such that the constraint (\ref{relax.a}) is satisfied with equality, which completes the proof. \end{IEEEproof} Lemma \ref{equivalent} implies that problem (\ref{relax}) admits an optimal $({{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})$ with $\mathop {\min }\limits_{k \in {\cal K}}C_{m,k}({{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})={\tau_{ms}}$. Hence, $({{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})$ is also optimal to (\ref{op2}). The proof of Lemma \ref{equivalent} reveals that such an optimal $({{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})$ can always be constructed algorithmically via the following procedure: \begin{corollary} Suppose that $({{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})$ is an optimal solution returned by solving problem (\ref{relax}). If $\mathop {\min }\limits_{k \in {\cal K}}C_{m,k}({{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})={\tau_{ms}}$, then output $({{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})$ as an optimal solution of problem (\ref{op2}). 
Otherwise, solve the equation $\mathop {\min }\limits_{k \in {\cal K}}C_{m,k}(\nu{{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})={\tau_{ms}}$ with regard to $\nu$ via bisection search within the unit interval $[0,1]$, and output $(\nu{{\bf{Q}}^*_0},{{\bf{Q}}^*_c},{{\bf{Q}}^*_a})$ as an optimal solution of problem (\ref{op2}). \end{corollary} Next, we will point out two special cases under which problem (\ref{relax}) is \emph{equivalent} to problem (\ref{op2}); or equivalently, under which any optimal solution to (\ref{relax}) is achieved with constraint (\ref{relax.a}) active. This is described in the following proposition. \begin{proposition}\label{P1} Suppose that the system configuration satisfies either one of the following conditions: \begin{condition}\label{C1} The number of antennas at the transmitter is larger than that at the authorized receiver, i.e., $N_t > N_{r,1}$. \end{condition} \begin{condition}\label{C2} The number of antennas at the transmitter is larger than the total number of antennas at the unauthorized receivers, i.e., $N_t > \sum\nolimits_{k \in {{\cal K}_e}} N_{r,k}$. \end{condition} Then the rate pair $({\tau_{ms}},\tilde R(\tau _{ms}))$ must be a \emph{Pareto optimal} point of (\ref{op1}), and all Pareto optimal points of (\ref{op1}) can be obtained by solving (\ref{op2}) with different $\tau_{ms}$'s lying within the interval $[0,\tau_{\max}]$. \end{proposition} \begin{IEEEproof} The proof can be found in Appendix \ref{DC_Appendix}. \end{IEEEproof} \begin{remark} Proposition \ref{P1} bridges the Pareto optimal points of (\ref{op1}) to the boundary points of $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$. When either Condition \ref{C1} or Condition \ref{C2} is satisfied, all Pareto optimal points of (\ref{op1}) are also boundary points of $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$, and vice versa. 
\end{remark} \subsection{DC Iterative Algorithm} We now focus on solving the relaxed problem (\ref{relax}) derived in the last subsection. Problem (\ref{relax}) remains nonconvex due to its objective function and constraint (\ref{relax.a}). To deal with this, we first transform it equivalently into its epigraph form by introducing a slack variable $\eta$, i.e., \begin{subequations}\label{op3} \begin{align} \nonumber R(\tau _{ms})& =\mathop{\max}\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},\eta} C_b({{\bf{Q}}_c},{{\bf{Q}}_a})-\eta\\ \text{s.t.}\; &C_{e,k}({{\bf{Q}}_c},{{\bf{Q}}_a}) \le \eta, \forall k \in {\cal{K}}_e\\ &C_{m,k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) \ge {\tau_{ms}}, \forall k \in {\cal{K}}\label{op3a}\\ &\text{Tr}({{\bf{Q}}_0} + {{\bf{Q}}_c}+{{\bf{Q}}_a}) \le P, \\ &{{\bf{Q}}_0} \succeq {\bf{0}}, {{\bf{Q}}_a} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}. \end{align} \end{subequations} Next, we will show that problem (\ref{op3}) constitutes a DC-type programming problem, which can be solved iteratively by employing the DC algorithm. 
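As a concrete illustration of the corollary in the previous subsection, the sketch below performs the $\nu$-bisection in the scalar (single-antenna) case: $\min_k C_{m,k}(\nu \mathbf{Q}_0^*, \mathbf{Q}_c^*, \mathbf{Q}_a^*)$ is continuous and increasing in $\nu$, so a bisection over $[0,1]$ locates the scaling that makes the QoMS constraint active. The gains, powers, and function names are our illustrative assumptions.

```python
import math

def min_multicast_rate(nu, gains, q0, qc, qa):
    """min_k C_{m,k}(nu*q0, qc, qa) in the scalar case; increasing in nu."""
    return min(math.log2(1.0 + g * nu * q0 / (1.0 + g * (qc + qa)))
               for g in gains)

def bisect_nu(gains, q0, qc, qa, tau, tol=1e-10):
    """Find nu in [0, 1] with min_k C_{m,k}(nu*q0, qc, qa) = tau,
    assuming tau is achievable, i.e., no larger than the rate at nu = 1."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if min_multicast_rate(mid, gains, q0, qc, qa) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gains, q0, qc, qa = [2.0, 0.3, 0.5], 5.0, 3.0, 2.0
tau = 0.5 * min_multicast_rate(1.0, gains, q0, qc, qa)  # an achievable target
nu = bisect_nu(gains, q0, qc, qa, tau)
```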
To begin with, we reformulate the capacity functions $C_b({{\bf{Q}}_c},{{\bf{Q}}_a})$, $C_{e,k}({{\bf{Q}}_c},{{\bf{Q}}_a})$ and $C_{m,k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})$ in a DC-type form, given by \begin{align}\label{DC1} &C_b({{\bf{Q}}_c},{{\bf{Q}}_a})= {\phi _1}({{\bf{Q}}_c},{{\bf{Q}}_a}) - {\varphi _1}({{\bf{Q}}_a}),\nonumber\\ &C_{e,k}({{\bf{Q}}_c},{{\bf{Q}}_a})={\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a}) - {\varphi _k}({{\bf{Q}}_a}),\forall k \in {\cal{K}}_e \\ &C_{m,k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})={\eta _k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) - {\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a}),\forall k \in {\cal{K}}\nonumber \end{align} in which we define \begin{align} &{\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a}) = \log \left| {{\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_c} + {{\bf{Q}}_a}){\bf{H}}_k^H} \right|,\forall k \in {\cal{K}}\nonumber\\ &{\varphi _k}({{\bf{Q}}_a}) = \log \left| {{\bf{I}} + {{\bf{H}}_k}{{\bf{Q}}_a}{\bf{H}}_k^H} \right|,\forall k \in {\cal{K}}\label{DC2}\\ &{\eta _k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) = \log \left| {{\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_c} + {{\bf{Q}}_a} + {{\bf{Q}}_0}){\bf{H}}_k^H} \right|, \forall k \in {\cal{K}}.\nonumber \end{align} Substituting (\ref{DC1}) into problem (\ref{op3}), we obtain \begin{subequations}\label{op4} \begin{align} \nonumber R&(\tau _{ms}) =\mathop{\max}\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},\eta} {\phi _1}({{\bf{Q}}_c},{{\bf{Q}}_a}) - {\varphi _1}({{\bf{Q}}_a})-\eta\\ \text{s.t.}\; &{\varphi _k}({{\bf{Q}}_a})-{\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a})+ \eta \ge 0, \forall k \in {\cal{K}}_e\label{op4a}\\ &{\eta _k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) - {\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a}) \ge {\tau_{ms}}, \forall k \in {\cal{K}}\label{op4b}\\ &\text{Tr}({{\bf{Q}}_0} + {{\bf{Q}}_c}+{{\bf{Q}}_a}) \le P, \\ &{{\bf{Q}}_0} \succeq {\bf{0}}, {{\bf{Q}}_a} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}. 
\end{align} \end{subequations} Since ${\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a})$, ${\varphi _k}({{\bf{Q}}_a})$ and ${\eta _k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})$ are all concave w.r.t. $({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})$, one can easily see that the objective function of (\ref{op4}) and the constraints (\ref{op4a}) and (\ref{op4b}) are all in difference-of-concave form. This property makes problem (\ref{op4}) fall into the context of DC programming \cite{NIPS2009}, so it can be solved iteratively via the DC algorithm. Our next endeavor is to present the DC approach to (\ref{op4}) mathematically. Its basic idea is to locally linearize the nonconcave parts in (\ref{op4}) at some feasible point via Taylor series expansion (TSE), and then iteratively solve the linearized problem. To this end, we introduce the TSE via the following lemma. \begin{lemma}[\cite{chu2015secrecy}]\label{TSE} The affine (first-order) Taylor series approximation of a function $f({\mathbf{X}}):{{\mathbb R}^{M \times N}} \to {\mathbb R}$ around ${\bf{\tilde X}}$ is given by \begin{equation} f\left( {\bf{X}} \right) \approx f( {{\bf{\tilde X}}}) + {\rm{vec}}\left( {f'( {{\bf{\tilde X}}})} \right)^H{\rm{vec}}({{\bf{X}} - {\bf{\tilde X}}} ). \end{equation} \end{lemma} The TSE above enables us to rewrite the original nonconcave parts of (\ref{op4}) in linear form. 
In particular, by applying Lemma \ref{TSE} and the fact $\partial \left( {\log \left| {\bf{X}} \right|} \right) = {\rm{Tr}}\left( {{{\bf{X}}^{ - 1}}\partial {\bf{X}}} \right)$, ${\varphi _1}({{\bf{Q}}_a})$ can be approximated as \begin{align} \nonumber{\varphi _1}({{\bf{Q}}_a})&=\log \left| {{\bf{I}} + {{\bf{H}}_1}{{\bf{Q}}_a}{\bf{H}}_1^H} \right|\\ \nonumber&\approx {\varphi _1}({{\bf{\tilde Q}}_a})+ ({\rm{vec}}\left({\bf{S}}\right))^H{\rm{vec}}\left( {{{\bf{Q}}_a} - {{{\bf{\tilde Q}}}_a}} \right)\\ \nonumber&\mathop= \limits^{(a)} {\varphi _1}({{\bf{\tilde Q}}_a})+ {\rm{Tr}}\left[{\bf{S}}({{\bf{Q}}_a}-{{{\bf{\tilde Q}}}_a})\right],\\ &\buildrel \Delta \over = {\tilde \varphi _1}({{\bf{Q}}_a}) \label{approx1} \end{align} in the objective function of (\ref{op4}), where ${{{\bf{\tilde Q}}}_a}$ is a given transmit covariance matrix, ${\bf{S}} \buildrel \Delta \over = {{{\bf{H}}_1^H}{\left( {{\bf{I}} + {{\bf{H}}_1}{{{\bf{\tilde Q}}}_a}{\bf{H}}_1^H} \right)^{ - 1}}{\bf{H}}_1}$ and the equality $(a)$ is due to the fact that ${\rm{Tr}}({{\bf{A}}^H}{\bf{B}}) = {({\rm{vec}}({\bf{A}}))^H}{\rm{vec}}({\bf{B}})$ for appropriate dimensions of ${\bf{A}}$ and ${\bf{B}}$. Likewise, ${\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a})$, appearing in the constraints (\ref{op4a}) and (\ref{op4b}), can be approximated as \begin{align} &\nonumber{\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a})=\log \left| {{\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_c} + {{\bf{Q}}_a}){\bf{H}}_k^H} \right| \\ \nonumber&\approx {\phi _k}({{\bf{\tilde Q}}_c},{{\bf{\tilde Q}}_a})+{\rm{Tr}}\left[{\bf{U}}({{\bf{Q}}_c}-{{{\bf{\tilde Q}}}_c})\right]+{\rm{Tr}}\left[{\bf{U}}({{\bf{Q}}_a}-{{{\bf{\tilde Q}}}_a})\right]\\ &\buildrel \Delta \over = {\tilde \phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a}),\label{approx2} \end{align} in which ${\bf{U}}$ is determined by \begin{equation} {\bf{U}} = {\bf{H}}_k^H{({\bf{I}} + {{\bf{H}}_k}({{{\bf{\tilde Q}}}_c} + {{{\bf{\tilde Q}}}_a}){\bf{H}}_k^H)^{ - 1}}{{\bf{H}}_k}. 
\end{equation} Based on the approximations above, the original QoMS-constrained SRM problem (\ref{op4}) can be reformulated as \begin{subequations}\label{op5} \begin{align} \nonumber &{\bar{R}}(\tau _{ms}) =\mathop{\max}\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},\eta} {\phi _1}({{\bf{Q}}_c},{{\bf{Q}}_a}) - {\tilde \varphi _1}({{\bf{Q}}_a})-\eta\\ \text{s.t.}\; &{\varphi _k}({{\bf{Q}}_a})-{\tilde \phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a})+ \eta \ge 0, \forall k \in {\cal{K}}_e\label{op5a}\\ &{\eta _k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) - {\tilde \phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a}) \ge {\tau_{ms}}, \forall k \in {\cal{K}}\label{op5b}\\ &\text{Tr}({{\bf{Q}}_0} + {{\bf{Q}}_c}+{{\bf{Q}}_a}) \le P, \\ &{{\bf{Q}}_0} \succeq {\bf{0}}, {{\bf{Q}}_a} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}, \end{align} \end{subequations} where ${\bar{R}}(\tau _{ms})$ is the optimal objective value of (\ref{op5}), serving as an approximation to $R(\tau _{ms})$. According to the relationship between a concave function and its first-order Taylor series expansion, i.e., a concave function is globally upper-bounded by its tangent, we immediately obtain \begin{align} &{\varphi _1}({{\bf{Q}}_a}) \le {\tilde \varphi _1}({{\bf{Q}}_a}), \forall {{\bf{Q}}_a} \succeq {\bf{0}},\nonumber\\ &{\phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a}) \le {\tilde \phi _k}({{\bf{Q}}_c},{{\bf{Q}}_a}), \forall {{\bf{Q}}_a} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}.\label{relation} \end{align} As a consequence, any feasible solution to (\ref{op5}) is also feasible for (\ref{op4}), and ${\bar{R}}(\tau _{ms}) \le {R}(\tau _{ms})$ must hold. The approximate problem (\ref{op5}) is convex with regard to (w.r.t.) $\left({{\bf{Q}}_0},{{{\bf{Q}}_c},{{\bf{Q}}_a}} \right)$, and hence $\left({{\bf{Q}}_0},{{{\bf{Q}}_c},{{\bf{Q}}_a}} \right)$ can be iteratively obtained by solving problem (\ref{op5}) via an off-the-shelf solver, e.g., \texttt{CVX}. We summarize our proposed iterative algorithm for solving (\ref{op4}) in Algorithm 1. 
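The bounds in (\ref{relation}) follow from concavity: a concave function lies below its first-order Taylor expansion everywhere, with equality at the expansion point. A scalar numerical check for ${\varphi _1}$ (natural logarithm, and an illustrative gain and expansion point of our choosing):

```python
import math

g, qa_tilde = 2.0, 1.5  # illustrative channel power and expansion point

def phi(qa):
    """phi_1 in the scalar case: log|I + H*qa*H^H| -> ln(1 + g*qa)."""
    return math.log(1.0 + g * qa)

def phi_lin(qa):
    """First-order Taylor expansion of phi around qa_tilde; the matrix S
    of the linearization collapses to the scalar g / (1 + g*qa_tilde)."""
    S = g / (1.0 + g * qa_tilde)
    return phi(qa_tilde) + S * (qa - qa_tilde)

# Concavity => the tangent upper-bounds phi everywhere on a grid of points.
gap = [phi_lin(q / 10.0) - phi(q / 10.0) for q in range(100)]
```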
To acquire the secrecy rate region, we need to traverse ${\tau_{ms}}$ lying within the interval $[0,\tau_{\max}]$ and store the corresponding objective values of (\ref{op5}). \begin{algorithm} \caption{Iterative method for solving (\ref{op4})} \begin{algorithmic}[1]\label{DC.Alg} \State Initialize $n=0$ and choose an arbitrary starting point $({{\bf{\tilde Q}}_{c,n}},{{\bf{\tilde Q}}_{a,n}})$ feasible for (\ref{op5}) \State \textbf{Repeat} \State \quad Solve (\ref{op5}) with ${\bf{\tilde Q}}_c={{\bf{\tilde Q}}_{c,n}}$ and ${\bf{\tilde Q}}_a={{\bf{\tilde Q}}_{a,n}}$, and obtain $({{\bf{Q}}_c^*},{{\bf{Q}}_a^*})$, the optimal solution of (\ref{op5});\label{sub} \State \quad Update ${{\bf{\tilde Q}}_{c,n+1}}={{\bf{Q}}_c^*}$, ${{\bf{\tilde Q}}_{a,n+1}}={{\bf{Q}}_a^*}$; \State \quad Update $n=n+1$; \State \textbf{Until the convergence conditions are satisfied.} \State Output ${{\bf{\tilde Q}}_{c,n}}$ and ${{\bf{\tilde Q}}_{a,n}}$. \end{algorithmic} \end{algorithm} \begin{remark}\label{initial} In Algorithm 1, the initialization of $({{\bf{\tilde Q}}_{c,0}},{{\bf{\tilde Q}}_{a,0}})$ strongly influences the total number of iterations. Let us define $\left({{\bf{Q}}^{i}_c},{{\bf{Q}}^{i}_a}\right)$ as the output solution in the $i$th traversal of ${\tau_{ms}}$. The following ``warmstart operation'' can be adopted to initialize $({{\bf{\tilde Q}}_{c,0}},{{\bf{\tilde Q}}_{a,0}})$ so as to achieve a fast convergence rate: \textbf{Warmstart Operation}: We start the traversal of ${\tau_{ms}}$ from ${\tau_{ms}}={\tau_{\max}}$. In the first traversal, ${{\bf{\tilde Q}}_{c,0}}$ and ${{\bf{\tilde Q}}_{a,0}}$ are both initialized as $\bf{0}$. In the $i$th ($i>1$) traversal, $({{\bf{\tilde Q}}_{c,0}},{{\bf{\tilde Q}}_{a,0}})$ is initialized as the solution output by Algorithm 1 in the $(i-1)$th traversal. 
\end{remark} \subsection{Convergence Analysis} As one can see, the basic merit of the DC approach lies in its tractability, which caters to numerical optimization using a parser-solver. As an additional merit, the proposed DC approach has a theoretically provable guarantee on its solution convergence, as demonstrated in the following proposition. \begin{proposition} Every limit point of the iterates generated by Algorithm 1 is a stationary point of problem (\ref{op2}). \end{proposition} \begin{IEEEproof} The proof is a direct application of \cite[Th.~10]{NIPS2009}, and is thus omitted here for brevity. \end{IEEEproof} \section{An AO-Based Approach to the SRRM Problem} In this section, we develop another scalarization method, referred to as the weighted-sum method, to solve (\ref{op1}). The basic problem formulation is a WSRM problem, which can be solved via an AO-based approach. Here we should point out that the application of AO to SRM problems has appeared in some existing papers, e.g., \cite{li2013transmit}. Nonetheless, the AO algorithm used in this section is a nontrivial extension of that in \cite{li2013transmit}. Specifically, the objective function in \cite{li2013transmit} contains only a single secrecy rate term, while in our considered scenario an extra multicast rate term is incorporated, which brings new issues, e.g., in the convergence proof, that must be tackled. \subsection{Scalarization} The basic idea of the weighted-sum method is to introduce a so-called weight vector \cite{boyd2009convex} with positive entries lying in the dual cone $K^*={\mathbb {R}}_ + ^2$, and then to transform the primal vector optimization problem into a scalar optimization problem. By varying the weight vector, we can obtain different Pareto optimal solutions of (\ref{op1}). 
To put this into context, the Pareto boundary of (\ref{Region1}) can be characterized by the solutions of \begin{equation}\label{op6} \begin{split} &\mathop{\max}\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},R_0,R_c} R_0 + \lambda_c R_c\\ \text{s.t.}\quad &{R_0} \le \mathop {\min }\limits_{k \in {\cal K}} C_{m,k}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})\\ &{R_c} \le C_b({{\bf{Q}}_c},{{\bf{Q}}_a}) - \mathop {\max}\limits_{k \in {{\cal K}_e}} C_{e,k}({{\bf{Q}}_c},{{\bf{Q}}_a})\\ &\text{(\ref{op1a})-(\ref{op1b}) satisfied}, \end{split} \end{equation} in which $\lambda_c \in [0,+\infty)$ and ${\bm{\lambda }} = [1,\lambda_c]$ is our introduced weight vector. In general, the optimal $\left(R_0,R_c\right)$ of (\ref{op6}) is the point where a straight line with slope $-1/{\lambda_c}$ is tangent to the Pareto boundary. Before proceeding, let us first point out some special cases of problem (\ref{op6}). \begin{enumerate} \item When ${\bm{\lambda }} = [1,1]$, the optimal $\left(R_0,R_c\right)$ turns out to be the so-called utilitarian point, also referred to as the ``sum-rate'' point in communications. \item The single-service points are the two points where $R_0 = 0$ and where $R_c = 0$, respectively. When $R_0 = 0$, problem (\ref{op6}) reduces to a conventional AN-aided SRM problem in the MIMO wiretap channel. When $R_c = 0$, the maximum $R_0$ can be derived by solving the same convex optimization problem as (\ref{max_tau}). \end{enumerate} \subsection{AO Iterative Algorithm} We are now in a position to develop a tractable approach to the WSRM problem (\ref{op6}). First, one can notice that by eliminating the slack variables $R_0$ and $R_c$, problem (\ref{op6}) is equivalent to the following optimization problem. 
\begin{subequations}\label{op7} \begin{align} \nonumber R(\lambda_c)=&\mathop{\max}\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}} \lambda_c (C_b - \mathop {\max}\limits_{k \in {{\cal K}_e}} C_{e,k}) + \mathop {\min }\limits_{k \in {\cal K}} C_{m,k}\\ \text{s.t.}\quad &\text{Tr}({{\bf{Q}}_0} + {{\bf{Q}}_c} + {\bf{Q}}_a) \le P,\label{op7a}\\ &{{\bf{Q}}_0} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}, {\bf{Q}}_a \succeq {\bf{0}}.\label{op7b} \end{align} \end{subequations} The main obstacle in solving (\ref{op7}) lies in the nonsmoothness of its objective function, which precludes the use of many derivative-based iterative algorithms. As a result, we next develop a derivative-free AO iterative algorithm to solve (\ref{op7}). To this end, we first transform the WSRM problem (\ref{op7}) into a form amenable to AO. \begin{lemma}[\cite{li2013transmit}]\label{lem1} Let ${\bf{E}} \in {{\mathbb{C}}^{N \times N}}$ be any matrix satisfying ${\bf{E}} \succ \mathbf{0}$. Define the function $f({\bf{S}}) = - {\rm{Tr}}({\bf{SE}}) + \log \left| {\bf{S}} \right| + N$. Then \begin{equation}\label{lemma1} \log \left| {{{\bf{E}}^{ - 1}}} \right| = \mathop {\max }\limits_{{\bf{S}} \in {{\mathbb{C}}^{N \times N}},{\bf{S}} \succeq 0} f({\bf{S}}), \end{equation} and the optimal solution to the right-hand side (RHS) of (\ref{lemma1}) is ${{\bf{S}}^ * } = {{\bf{E}}^{ - 1}}$. 
\end{lemma} Applying Lemma \ref{lem1} to ${C_b}$, $C_{e,k}$ and $C_{m,k}$, one can obtain \begin{subequations}\label{eq1} \begin{align} {C_b}&=\mathop {\max }\limits_{{{\bf{S}}_1} \succeq {\bf{0}}} {\varphi _b}({{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{S}}_1),\label{eq1a}\\ C_{e,k}&=\mathop {\min }\limits_{{{\bf{S}}_k} \succeq {\bf{0}}} {\varphi _{e,k}}({{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{S}}_k), \forall k \in {\cal{K}}_e,\label{eq1b}\\ C_{m,k}&=\mathop {\max }\limits_{{{\bf{U}}_k} \succeq {\bf{0}}} {\varphi _{m,k}}({{\bf{Q}}_0}, {{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{U}}_k), \forall k \in {\cal{K}},\label{eq1c} \end{align} \end{subequations} where we define \begin{subequations}\label{eq2} \begin{align} \nonumber&{\varphi _b}({{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{S}}_1)= - {\rm{Tr}}({{\bf{S}}_1}({\bf{I}} + {{\bf{H}}_1}{{\bf{Q}}_a}{\bf{H}}_1^H))+ \log\left| {{{\bf{S}}_1}} \right|+N_{r,1} \\ &+ \log \left| {{\bf{I}} + {{\bf{H}}_1}({{\bf{Q}}_a} + {{\bf{Q}}_c}){\bf{H}}_1^H} \right|,\\ \nonumber&{\varphi _{e,k}}({{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{S}}_k)= - \log \left| {{{\bf{S}}_k}} \right| - \log \left| {{\bf{I}}+{{\bf{H}}_k}{{\bf{Q}}_a}{\bf{H}}_{k}^H} \right|-N_{r,k} \\ &+{\rm{Tr}}({{\bf{S}}_k}({\bf{I}} + {{\bf{H}}_{k}}({{\bf{Q}}_a} + {{\bf{Q}}_c}){\bf{H}}_{k}^H)),\\ \nonumber&{\varphi _{m,k}}({{\bf{Q}}_0}, {{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{U}}_k)=- {\rm{Tr}}({{\bf{U}}_k}({\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_c}+{{\bf{Q}}_a}){\bf{H}}_k^H)) \\ &+ \log\left| {{{\bf{U}}_k}} \right|+ \log \left| {{\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_0} + {{\bf{Q}}_c} + {{\bf{Q}}_a}){\bf{H}}_k^H} \right|+ N_{r,k}, \end{align} \end{subequations} in which $\{{\bf{S}}_k\}_{k \in {\cal{K}}}$ and $\{{\bf{U}}_k\}_{k \in {\cal{K}}}$ are slack variables satisfying ${\bf{S}}_k \succeq \mathbf{0}$ and ${\bf{U}}_k \succeq \mathbf{0}$ for all $k \in {\cal{K}}$. 
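Lemma \ref{lem1} can be verified numerically in the scalar case ($N=1$), where $f(S) = -SE + \ln S + 1$ for a scalar $E > 0$: the maximizer is $S^* = 1/E$ and the maximum equals $\ln(1/E)$. The value of $E$ below is an arbitrary illustrative choice.

```python
import math

E = 2.5  # any positive scalar plays the role of the matrix E (N = 1)

def f(S):
    """f(S) = -Tr(SE) + log|S| + N from the lemma, specialized to scalars."""
    return -S * E + math.log(S) + 1.0

S_star = 1.0 / E              # the claimed maximizer E^{-1}
peak = f(S_star)              # should equal log|E^{-1}| = -ln(E)
perturbed = [f(S_star * t) for t in (0.25, 0.5, 2.0, 4.0)]
```

Since $f$ is strictly concave in $S$, every perturbation of $S^*$ strictly decreases $f$, which is what makes the inner maximizations in (\ref{eq1a})--(\ref{eq1c}) tight.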
Following the matrix manipulations in \cite{li2013transmit}, we have \begin{equation}\label{eq3} \begin{split} &\mathop {\max }\limits_{k \in {\cal{K}}_e} \mathop {\min }\limits_{{{\bf{S}}_k}} {\varphi _{e,k}}({{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{S}}_k)\\ =&\mathop {\min }\limits_{\{ {{\bf{S}}_k}\} _{k \in {\cal{K}}_e}} \mathop {\max }\limits_{k \in {\cal{K}}_e} {\varphi _{e,k}}({{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{S}}_k), \end{split} \end{equation} and \begin{equation}\label{eq4} \begin{split} &\mathop {\min }\limits_{k \in {\cal{K}}} \mathop {\max }\limits_{{{\bf{U}}_k}} {\varphi _{m,k}}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{U}}_k)\\ =& \mathop {\max }\limits_{\{ {{\bf{U}}_k}\} _{k \in {\cal{K}}}} \mathop {\min }\limits_{k \in {\cal{K}}} {\varphi _{m,k}}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{U}}_k). \end{split} \end{equation} Substituting (\ref{eq1a})-(\ref{eq1c}) into (\ref{op7}) and making use of (\ref{eq3}) and (\ref{eq4}), one can check that problem (\ref{op7}) is equivalent to the following optimization problem. 
\begin{subequations}\label{op8} \begin{align} \nonumber R(\lambda_c)=&\mathop {\max }\limits_{{{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},\atop \left\{ {\bf{S}}_k \right\}_{k \in {\cal{K}}}, \left\{ {\bf{U}}_k \right\}_{k \in {\cal{K}}}}f({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},\left\{ {\bf{S}}_k \right\}_{k \in {\cal{K}}},\left\{ {\bf{U}}_k \right\}_{k \in {\cal{K}}})\\ \text{s.t.}\quad&\text{Tr}({{\bf{Q}}_0} + {{\bf{Q}}_c} + {\bf{Q}}_a) \le P,\label{op8a}\\ &{{\bf{Q}}_0} \succeq {\bf{0}}, {{\bf{Q}}_c} \succeq {\bf{0}}, {\bf{Q}}_a \succeq {\bf{0}},\label{op8b} \end{align} \end{subequations} in which we define \begin{equation}\label{eq5} \begin{split} &f({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},\left\{ {\bf{S}}_k \right\}_{k \in {\cal{K}}},\left\{ {\bf{U}}_k \right\}_{k \in {\cal{K}}}) = \\ &\lambda_c ({\varphi _b}({{\bf{Q}}_c},{{\bf{Q}}_a},{\bf{S}}_1)- \mathop {\max }\limits_{k \in {\cal{K}}_e}{\varphi _{e,k}}({{\bf{Q}}_c},{{\bf{Q}}_a},{{\bf{S}}_k}))\\ &+ \mathop {\min }\limits_{k \in {\cal{K}}}{\varphi _{m,k}}({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},{{\bf{U}}_k}). \end{split} \end{equation} The upshot of this reformulation is that problem (\ref{op7}) becomes primal decomposable. Specifically, problem (\ref{op8}) is convex w.r.t. \emph{either} $({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})$ \emph{or} $(\left\{ {\bf{S}}_k \right\}_{k \in {\cal{K}}},\left\{ {\bf{U}}_k \right\}_{k \in {\cal{K}}})$. Hence, AO is naturally employed to solve (\ref{op8}). 
With $({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})$ fixed, the optimal solution of $(\left\{ {\bf{S}}_k \right\}_{k \in {\cal{K}}},\left\{ {\bf{U}}_k \right\}_{k \in {\cal{K}}})$ admits an analytical expression, according to Lemma \ref{lem1}, given by \begin{subequations}\label{cls} \begin{align} &{\bf{S}}_1^ * = {({\bf{I}} + {{\bf{H}}_1}{{\bf{Q}}_a}{\bf{H}}_1^H)^{ - 1}},\\ &{\bf{S}}_k^ * = {({\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_a} + {{\bf{Q}}_c}){\bf{H}}_k^H)^{ - 1}}, \forall k \in {\cal{K}}_e,\\ &{\bf{U}}_k^ * = {({\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_a} + {{\bf{Q}}_c}){\bf{H}}_k^H)^{ - 1}}, \forall k \in {\cal{K}}, \end{align} \end{subequations} in which we utilize the fact that $\left\{ {\bf{S}}_k \right\}_{k \in {\cal{K}}}$ and $\left\{ {\bf{U}}_k \right\}_{k \in {\cal{K}}}$ are decoupled among ${\varphi _b}$, ${\varphi _{e,k}}$ and ${\varphi _{m,k}}$. Conversely, with $(\left\{ {\bf{S}}_k \right\}_{k \in {\cal{K}}},\left\{ {\bf{U}}_k \right\}_{k \in {\cal{K}}})$ fixed, the optimal solution of $({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a})$ can be obtained by solving the following convex optimization problem: \begin{equation}\label{op9} \begin{split} ({\bf{Q}}_0^ * &,{\bf{Q}}_c^ * ,{\bf{Q}}_a^ *) = \\ &\arg \mathop {\max }\limits_{({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) \in {\cal{F}}} f({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},\left\{ {\bf{S}}_k \right\}_{k \in {\cal{K}}},\left\{ {\bf{U}}_k \right\}_{k \in {\cal{K}}}), \end{split} \end{equation} where ${\cal{F}}$ denotes the feasible set of (\ref{op7}), which is convex. The whole AO process for solving (\ref{op8}) is given in Algorithm 2. In line \ref{cp} of Algorithm 2, the convex subproblem can be solved via \texttt{CVX}. Following a warm-start operation similar to that introduced in Remark \ref{initial}, the number of iterations of Algorithm 2 can be significantly reduced. 
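The closed-form updates in (\ref{cls}) are straightforward to implement. A hypothetical Python helper (the container layout and names below are our own and purely illustrative) could look as follows:

```python
import numpy as np

def slack_updates(H, Qa, Qc):
    """Closed-form slack updates following (cls).

    H  -- dict {k: channel matrix of receiver k}, with k = 1 the intended
          receiver and k >= 2 the unauthorized receivers (assumed layout).
    Returns (S, U): dicts with the optimal slack matrices per receiver.
    """
    S, U = {}, {}
    for k, Hk in H.items():
        I = np.eye(Hk.shape[0])
        Ek = I + Hk @ (Qa + Qc) @ Hk.conj().T
        if k == 1:
            S[k] = np.linalg.inv(I + Hk @ Qa @ Hk.conj().T)  # S_1^*
        else:
            S[k] = np.linalg.inv(Ek)   # S_k^*, k in K_e
        U[k] = np.linalg.inv(Ek)       # U_k^*, all k in K
    return S, U

rng = np.random.default_rng(1)
H = {k: rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
     for k in (1, 2, 3)}
Qa, Qc = 0.5 * np.eye(4), 0.2 * np.eye(4)
S, U = slack_updates(H, Qa, Qc)
# S_1^* inverts I + H_1 Q_a H_1^H up to machine precision.
err = np.linalg.norm(S[1] @ (np.eye(3) + H[1] @ Qa @ H[1].conj().T) - np.eye(3))
assert err < 1e-10
```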
\begin{algorithm} \caption{AO algorithm for solving (\ref{op8})} \begin{algorithmic}[1]\label{AO.Alg} \State Initialize $n=1$, and $({\bf{Q}}_c^0,{\bf{Q}}_a^0) \in {\cal{F}}$; \State \textbf{Repeat} \State \quad ${\bf{S}}_1^ {n} = {({\bf{I}} + {{\bf{H}}_1}{{\bf{Q}}_a^{n-1}}{\bf{H}}_1^H)^{ - 1}}$; \State \quad ${\bf{S}}_k^ {n} = ({\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_a^{n-1}} + {{\bf{Q}}_c^{n-1}}){\bf{H}}_k^H)^{ - 1}, \forall k \in {\cal{K}}_e$; \State \quad ${\bf{U}}_k^ {n} = ({\bf{I}} + {{\bf{H}}_k}({{\bf{Q}}_a^{n-1}} + {{\bf{Q}}_c^{n-1}}){\bf{H}}_k^H)^{ - 1}, \forall k \in {\cal{K}}$; \State \quad $({\bf{Q}}_0^ {n},{\bf{Q}}_c^ {n},{\bf{Q}}_a^ {n}) = \arg \mathop {\max }\limits_{({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a}) \in {\cal{F}}} f({{\bf{Q}}_0},{{\bf{Q}}_c},{{\bf{Q}}_a},$ \label{cp} $\left\{ {{{\bf{S}}_k^{n}}} \right\}_{k \in {\cal{K}}}, \left\{ {{{\bf{U}}_k^{n}}} \right\}_{k \in {\cal{K}}})$; \State \quad $n=n+1$; \State \textbf{Until the convergence conditions are satisfied.} \State Output $({\bf{Q}}_0^n,{\bf{Q}}_c^n,{\bf{Q}}_a^n)$. \end{algorithmic} \end{algorithm} \subsection{Convergence Analysis} It can be verified that the AO algorithm produces a nondecreasing objective value of (\ref{op8}). Moreover, the following convergence result is always guaranteed. \begin{proposition}\label{KKTConv} Suppose that $({\bf{Q}}_0^ n ,{\bf{Q}}_c^ n ,{\bf{Q}}_a^ n)$ is the solution generated by the AO algorithm in the $n^{\text{th}}$ iteration. Then the sequence $\{({\bf{Q}}_0^ n ,{\bf{Q}}_c^ n ,{\bf{Q}}_a^ n)\}_n$ must converge to a stationary point (i.e., Karush-Kuhn-Tucker (KKT) point) of the primal WSRM problem (\ref{op7}). \end{proposition} \begin{IEEEproof} The proof can be found in Appendix \ref{AO_Appendix}. \end{IEEEproof} Since the globally optimal solution to problem (\ref{op7}) remains inaccessible so far, our achieved secrecy rate region serves as a lower bound on $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$ that attains KKT optimality. 
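The mechanism behind the monotonicity claim can be illustrated on a much simpler biconvex problem. The Python sketch below is only an analogy (it is not the algorithm of this paper): alternating least squares for the best rank-1 approximation of a matrix shares the two key ingredients of Algorithm 2, namely exact closed-form block updates and an objective that never worsens, together with a successive-difference stopping rule:

```python
import numpy as np

# AO analogy: minimize g(u, v) = ||M - u v^T||_F^2 by alternating exact
# block updates. g is monotonically non-increasing, mirroring the
# non-decreasing objective of Algorithm 2.
rng = np.random.default_rng(2)
M = rng.standard_normal((6, 5))

u = rng.standard_normal(6)
v = rng.standard_normal(5)
g_prev = np.inf
for _ in range(1000):
    u = M @ v / (v @ v)            # optimal u with v fixed
    v = M.T @ u / (u @ u)          # optimal v with u fixed
    g = np.linalg.norm(M - np.outer(u, v)) ** 2
    assert g <= g_prev + 1e-12     # objective never worsens
    if g_prev - g <= 1e-10:        # stopping rule, as in Algorithm 2
        break
    g_prev = g

# The stationary point reached recovers the top singular component of M.
sigma1 = np.linalg.svd(M, compute_uv=False)[0]
assert abs(np.linalg.norm(np.outer(u, v), 2) - sigma1) < 1e-3
```

As in Proposition \ref{KKTConv}, AO here converges to a stationary point of the block-structured problem; for this toy objective, the stationary points are the singular pairs of $M$.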
\section{Comparison of the Proposed Methods} In the previous sections, we presented two tractable convex formulations of the SRRM problem (\ref{op1}). This naturally leads to the question about the relative performance of the two formulations. In the following subsections, we address this question by comparing their performance and computational complexity in solving (\ref{op1}). \subsection{Performance Analysis} As introduced in the preceding sections, the QoMS-based scalarization can yield a complete set of boundary points of $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$, which contains all Pareto optimal points of (\ref{op1}). The resulting scalar problem (\ref{relax}) aims to maximize the secrecy rate and meanwhile maintain the QoMS above a given threshold. Predictably, the use of AN should be effective only in the low-QoMS region, since AN exerts a negative effect on the multicasting performance. To meet a high QoMS demand, AN has to be prohibited in the high-QoMS region. This QoMS-constrained SRM is a generalization of traditional SRM in physical-layer security, and provides the transmitter with insight into how to trade off the security performance against the multicasting performance. As for the weighted-sum scalarization method, the necessary condition for it to find all Pareto optimal points is that the secrecy rate region should be convex. Besides, its performance is also dependent on the precision of $\lambda_c$. The traversal of $\lambda_c$ should span from zero to an extremely large number with an appropriate step size, so that every Pareto optimal point can be detected. Nonetheless, the weighted-sum problem structure has an interesting pricing interpretation from the field of economics. To elaborate a little further, let us define $p_c$ and $p_0$ as the unit prices for the secrecy rate and the multicast rate, respectively, charged by the service provider. 
To maximize its revenue, the service provider should solve the WSRM problem in (\ref{op6}) by setting $\lambda_c=p_c/p_0$. The use of AN could also be explained in this context. It is evident that when $p_0 \gg p_c$, the revenue from multicasting transmission would dominate the objective function of (\ref{op6}), and thus, eliminating AN would be helpful in increasing the overall revenue. In summary, these two scalarization methods are suitable for different application scenarios and provide different insights. Nonetheless, the QoMS-based scalarization could yield all Pareto optimal points, while the weighted-sum scalarization might only yield some of them, depending on the shape of the secrecy rate region. \begin{remark} Besides the QoMS-based and weighted-sum scalarization methods, some other scalarization methods have been proposed in the literature to find the complete Pareto set for biobjective optimization, e.g., the weighted Tchebycheff method \cite{marler2004survey}. However, to implement this method, one has to first obtain the single-service point of the confidential message (cf. (\ref{single1})) and then solve the following highly nonconvex max-min optimization problem: \begin{align} {R_c^{\max}} =& \mathop {\max }\limits_{{{\bf{Q}}_c} \succeq {\bf{0}},{\rm{Tr}}({{\bf{Q}}_c}) \le P} \log \left| {{\bf{I}} + {{\bf{H}}_1}{{\bf{Q}}_c}{\bf{H}}_1^H} \right| \nonumber\\ &- \mathop {\max }\limits_{k \in {\cal K}} \log \left| {{\bf{I}} + {{\bf{H}}_k}{{\bf{Q}}_c}{\bf{H}}_k^H} \right|.\label{single1} \end{align} Unfortunately, problem (\ref{single1}) is nonconvex, and so the optimal solution to (\ref{single1}) may not be obtained, which invalidates the use of the weighted Tchebycheff method. \end{remark} \subsection{Complexity Analysis} The major computational complexity of the two scalarization methods comes from solving problems (\ref{op5}) and (\ref{op9}). 
While both problems (\ref{op5}) and (\ref{op9}) are convex, they are not in a standard semidefinite programming (SDP) form, owing to the logarithm functions therein. To solve them, a successive approximation method embedded with a primal-dual interior-point method (IPM) is employed, e.g., by \texttt{CVX}. As is known, the arithmetic complexity for the generic primal-dual IPM to solve a standard SDP is ${\cal O}(\max {\{ m,n\} ^4}{n^{1/2}}\log (1/\varepsilon ))$\cite{luo2010semi}, in which $m$, $n$ and $\varepsilon$ represent the number of linear constraints, the dimension of the positive semidefinite cone and the solution accuracy, respectively. Therefore, the complexity of solving (\ref{op5}) or (\ref{op9}) is ${\cal O}({L_{SA}}\max {\{ 2K,{N_t}\} ^4}N_t^{1/2}\log (1/\varepsilon ))$, where $L_{SA}$ denotes the number of successive approximations used. Since we are not aware of the relation between $L_{SA}$ and $N_t$, this complexity expression is rather loose. However, by utilizing the following approximation \cite{cumanan2014secrecy}: \begin{equation}\label{approx3} \log \left| {{\bf{I}} + {\bf{HQ}}{{\bf{H}}^H}} \right| = {\rm{Tr}}({\bf{HQ}}{{\bf{H}}^H}) + {\cal O}(\left\| {{\bf{HQ}}{{\bf{H}}^H}} \right\|), \end{equation} all logarithm terms in problems (\ref{op5}) and (\ref{op9}) can be approximated by a trace function at \emph{low} transmit power. This approximation further converts the convex problems (\ref{op5}) and (\ref{op9}) into SDP ones, which makes it possible to acquire a more accurate big-O expression of the computational complexity for low transmit power. Specifically, consider (\ref{op5}), which has three linear matrix inequality (LMI) constraints of size $N_t$, and $2K$ LMI constraints of size 1 after introducing the approximation (\ref{approx3}). Moreover, for (\ref{op5}), the number of decision variables is on the order of $n_1=3N_t^2+1$. 
Then, when a generic path-following IPM is used to solve problem (\ref{op5}), the total arithmetic computation cost is on the order of \cite{ben2001lectures} \begin{equation}\label{order1} \begin{split} &{T_1} = \sqrt {2K + 3{N_t}}\phi(n_1),\\ &\phi(n_1)={{n_1}(2K + 3N_t^3) + n_1^2(2K + 3N_t^2) + n_1^3} \end{split} \end{equation} with $n_1={\cal O}(3N_t^2+1)$. On the other hand, for solving (\ref{op9}), we need to introduce two additional slack variables to move the maximum and minimum terms in the objective function of (\ref{op9}) to the constraints. Hence, the number of decision variables is on the order of $n_2=3N_t^2+2$, and (\ref{op9}) also has three LMI constraints of size $N_t$, and $2K$ LMI constraints of size 1. The total arithmetic computation cost for solving (\ref{op9}) is on the order of \begin{equation}\label{order2} \begin{split} &{T_2} = \sqrt {2K + 3{N_t}}\phi(n_2),\\ &\phi(n_2)={{n_2}(2K + 3N_t^3) + n_2^2(2K + 3N_t^2) + n_2^3} \end{split} \end{equation} with $n_2={\cal O}(3N_t^2+2)$. Comparing (\ref{order1}) and (\ref{order2}), one can note that the total arithmetic computation cost of solving the two problems is comparable, with $T_2$ slightly greater than $T_1$ due to $n_2 > n_1$. This observation implies that the QoMS-based scalarization is more time-efficient at low transmit power. This is also consistent with the simulation results presented in Section \Rmnum{6}. \section{Numerical Results} In this section, we provide numerical results to illustrate the secrecy rate region derived from the two proposed methods, compared with two other existing strategies. The first one is the no-AN transmission, i.e., fixing ${{\bf{Q}}_a} = \bf{0}$ in problem (\ref{op1}). Thus, its achieved secrecy rate region can also be derived via the DC and AO algorithms. 
The second one is the traditional service integration using time division multiple access (TDMA), which assigns the confidential message and multicast message to two orthogonal time slots. Its maximum secrecy rate and multicast rate can be obtained by seeking the single-service points of $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$. For fairness of comparison, the secrecy rate and multicast rate achieved by this TDMA-based strategy should be \textbf{halved}\cite{Wyrembelski2012Physical}. In the first subsection, the convergence results of both algorithms are presented. The second subsection gives the comparison between these two algorithms in terms of achievable performance and computational complexity. \subsection{Convergence Results} In this subsection, we assume $N_t=5$, $N_{r,k}=3$ for all $k \in \cal{K}$, and $K=4$. The channel matrices are randomly generated from an i.i.d. complex Gaussian distribution with zero mean and unit variance. According to Proposition \ref{P1}, since $N_t > N_{r,1}$, the optimal solution to (\ref{relax}) is attained when the constraint (\ref{relax.a}) holds with equality. \begin{figure}[!t] \begin{center} \includegraphics[width=3in]{MultiConv.eps} \caption{DC algorithm: Convergence of the multicast rate}\label{Convergence_Multi} \end{center} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=3in]{ConfConv.eps} \caption{DC algorithm: Convergence of the secrecy rate}\label{Convergence_Conf} \end{center} \end{figure} First, we evaluate the convergence of the DC algorithm. Especially, we are concerned about whether the primal constraint (\ref{relax.a}) is violated by our approximation. Setting $\tau_{ms}$ to 2 bps/Hz, Fig.\,\ref{Convergence_Multi} shows the convergence of the multicast rate over the iterations for different transmit powers. ${{\bf{\tilde Q}}_{c,0}}$ and ${{\bf{\tilde Q}}_{a,0}}$ are both initialized as $\bf{0}$. 
The algorithm stops iterating when the difference between two successive values of ${\bar{R}}(\tau _{ms})$ returned by the algorithm is less than or equal to $10^{-4}$. One can observe that the multicast rates ultimately converge to our predefined multicast rate within a limited number of iterations for all tested transmit powers. This observation indicates the efficacy of TSE in approximating the multicast rate. We also plot the achieved secrecy rates and the approximated secrecy rates in Fig.\,\ref{Convergence_Conf}. The general observation of Fig.\,\ref{Convergence_Multi} is also applicable to Fig.\,\ref{Convergence_Conf}. \begin{figure}[!t] \begin{center} \includegraphics[width=3in]{AOConv.eps} \caption{AO algorithm: Convergence of the weighted sum rate}\label{Convergence_AO} \end{center} \end{figure} The convergence results of the AO algorithm are presented in Fig.\,\ref{Convergence_AO}. In Fig.\,\ref{Convergence_AO}, we set $\lambda_c=1$ to seek the sum-rate point. ${\bf{Q}}_c^0$ and ${\bf{Q}}_a^0$ are both initialized as $(P/(2N_t)){\bf{I}}_{N_t}$. The algorithm stops iterating when the difference between two successive values of ${\bar{R}}(\lambda _{c})$ is less than or equal to $10^{-4}$. As one can observe from Fig.\,\ref{Convergence_AO}, the achieved weighted sum rate is monotonically increasing and finally converges within a limited number of iterations for all tested transmit powers. In addition, we find that the AN covariance matrix $\mathbf{Q}_a$ output by AO is no longer diagonal. This implies that the associated AN design is spatially selective rather than isotropic, which blocks the eavesdroppers much more effectively. One can also note that the increase in the weighted sum rate is particularly remarkable when the transmit power is high. After all, higher transmit power means that the transmitter can allocate more power to the confidential message transmission, without compromising the multicast performance. 
The extra power allocated to the confidential message can be used to generate more interference at the eavesdropper and/or strengthen the signal reception at the intended receiver, whereby more remarkable improvement is observed. \subsection{Performance Comparison} In this subsection, we focus on two system configurations. The first one is the same as that in the last subsection. The second one is: $N_t=N_{r,1}=4$, $N_{r,k}=5$ for all $k \in {\cal{K}}_e$, and $K=4$. Under the second system configuration, \emph{neither} Condition \ref{C1} \emph{nor} Condition \ref{C2} is satisfied. \begin{figure}[!t] \begin{center} \includegraphics[width=3in]{AO_DC1.eps} \caption{Secrecy rate regions with and without AN.}\label{AO_DC1} \end{center} \end{figure} First, we show the secrecy rate regions achieved under the first system configuration. Overall results are shown in Fig.\,\ref{AO_DC1}, with $P$ set to $10$dB and $20$dB, respectively. Fig.\,\ref{AO_DC1} reveals two general trends. First, our AN-aided scheme achieves a secrecy rate region larger than the no-AN one. The striking gap indicates the efficacy of AN in expanding the secrecy rate region. However, the gap between these two strategies shrinks dramatically as $R_0$ increases. This phenomenon agrees with our conjecture in Section \Rmnum{5}-A. The second observation is that our proposed strategies, though they only attain a lower bound on $R_s({\left\{ {{\bf{H}}_k} \right\}_{k \in {\cal{K}}}},P)$, are sufficient to achieve significantly larger secrecy rate regions than the TDMA-based one. This observation also implies that PHY-SI is an effective approach to improve the spectral efficiency. Then let us compare the achievable performance of the two proposed scalarization methods. One can notice that the performance gap between these two methods is negligible in the tested system configuration, especially when $P=10$dB. 
\begin{figure}[!t] \begin{center} \includegraphics[width=3in]{AO_DC2.eps} \caption{Secrecy rate regions with and without AN.}\label{AO_DC2} \end{center} \end{figure} Fig. \ref{AO_DC2} plots the secrecy rate regions achieved under the second system configuration. Still, the secrecy rate region with AN is larger than the one without AN and the one achieved by TDMA. Besides, we can observe two very interesting phenomena. First, when we increase the transmit power from 10dB to 20dB, the secrecy rate regions practically expand in the horizontal direction. That is, under the second system configuration, the increasing transmit power mainly contributes to the multicast message transmission, rather than the confidential message transmission. This can be interpreted in terms of the transmit degrees of freedom (d.o.f.). The total d.o.f. of the unauthorized receivers is $\sum\nolimits_{k=2}^{K}{N_{r,k}}=15$, much higher than the transmit d.o.f. $N_t = 4$. The high d.o.f. at the unauthorized receivers leads to the d.o.f. bottleneck at the transmitter and thus compromises the overall secrecy performance. Second, one can notice that when $P=20$dB: 1) there exist some boundary points residing on a line, marked by the red dashed lines, that are not Pareto optimal to (\ref{op1}). Apparently, these points cannot be detected by the weighted-sum scalarization, but can be easily detected by the QoMS-based scalarization; 2) the QoMS-based scalarization detects more Pareto optimal points than the weighted-sum scalarization. This is attributed to the insensitivity of the weighted-sum scalarization to the points residing on an approximately horizontal boundary. To detect these boundary points, one has to precisely adjust the value of $\lambda_c$ to get different tangent points. 
\subsection{Complexity Comparison} \begin{table}[htbp] \centering \caption{Average running times (in secs.)}\label{running} \begin{tabular}{ccccccc} \toprule \multirow{2}[4]{*}{Method} & \multicolumn{6}{c}{Power (dB)} \\ & 0 & 4 & 8 & 12 & 16 & 20 \\ \midrule DC algorithm & 6.07 & 8.89 & 12.91 & 17.35 & 21.18 & 30.84 \\ AO algorithm & 7.57 & 11.58 &11.04 & 12.61 & 13.61 & 17.11 \\ \bottomrule \end{tabular} \end{table} Finally, we tabulate the average running times of DC and AO for obtaining a boundary point in Table \ref{running}, under the same setting as Fig.\,\ref{AO_DC1}. As seen, the DC algorithm runs faster than the AO algorithm when the transmit power is low. This phenomenon is consistent with our preceding analysis in Section \Rmnum{5}-B. However, at high transmit power, the running time of the DC algorithm scales nearly exponentially with $P$, and the DC algorithm gradually spends more time converging than the AO algorithm. This observation indicates that the two proposed scalarization methods might exhibit a performance-complexity tradeoff at high transmit power. \section{Conclusion} In this paper, we considered the AN-aided transmit design for the multiuser MIMO broadcast channel with confidential and multicast services. The transmit covariance matrices of the confidential message, the multicast message and the AN were designed to maximize the achievable secrecy rate and achievable multicast rate simultaneously. To deal with this biobjective optimization problem, two different scalarization approaches were introduced to transform this SRRM problem into a scalar optimization problem. In the QoMS-based scalarization, the scalar problem is an SRM problem with QoMS constraints, while in the weighted-sum scalarization, the scalar problem is a WSRM problem. DC and AO algorithms were utilized to solve the QoMS-constrained SRM problem and the WSRM problem, respectively. Both algorithms can converge to a stationary point of the respective primal problems. 
Further, we gave a detailed comparison between the two proposed scalarization methods. The comparison results indicated that at low transmit power, the QoMS-based scalarization is superior to the weighted-sum one in terms of achievable performance and computational complexity. On the other hand, at high transmit power, these two methods exhibit a tradeoff between achievable performance and computational complexity. Numerical results also confirmed the effectiveness of AN in expanding the secrecy rate region. As a future direction, it would be interesting to analyze robust service integration schemes that combat the possible CSI uncertainties caused by channel aging, and to take into account some application-specific requirements in 5G wireless communication systems, e.g., the mobility of terminals and the overhead in CSI acquisition.
Academic personnel, also known as faculty member or member of the faculty (in North American usage) or academics or academic staff (in British, Australia, and New Zealand usage), are vague terms that describe teachers or research staff of a school, college, university or research institute. In British and Australian/New Zealand English "faculty" usually refers to a sub-division of a university (usually a group of departments), not to the employees, as it can also do in North America. Universities, community colleges and even some secondary and primary schools use the terms faculty and professor. Other institutions (e.g., teaching hospitals or not-for-profit research institutes) may likewise use the term faculty. The higher education regulatory body of India, University Grants Commission, defines academic staff as teachers, librarians, and physical education personnel. In countries like the Philippines, faculty is used more broadly to refer to teaching staff of either a basic or higher education institution. Overview In many universities, the members of the administration (e.g., department chairs, deans, vice presidents, presidents, and librarians) are also faculty members; many of them begin (and remain) as professors. At some universities, the distinction between "academic faculty" and "administrative faculty" is made explicit by the former being contracted for nine months per year, meaning that they can devote their time to research (and possibly be absent from the campus) during the summer months, while the latter are contracted for twelve months per year. These two types of faculty members are sometimes known as "nine-month faculty" and "twelve-month faculty". Faculty who are paid a nine-month salary are typically allowed to seek external funds from grant agencies to partially or fully support their research activities during the summer months. 
Librarians are a special case in that they are educators like faculty who belong to degree granting departments, not necessarily administrators who have management responsibilities like Deans, Presidents, and Vice Presidents. Most university faculty members hold a Ph.D. or equivalent highest-level degree in their field. Some professionals or instructors from other institutions who are associated with a particular university (e.g., by teaching some courses or supervising graduate students) but do not hold professorships may be appointed as adjunct faculty. In North America, faculty is a distinct category from staff, although members of both groups are employees of the institution in question. This is distinct from, for example, the British (and European, Australia, and New Zealand) usage, in which all employees of the institution are staff either on academic or professional (i.e. non-academic) contracts. See also List of academic ranks Tenure References Academic terminology
\section{Introduction} It is well known that, when solving an optimization problem, different stages of the process require different search behavior. For example, while exploration is needed in the initial phases, the algorithm needs to eventually converge to a solution (exploitation). State-of-the-art optimization algorithms therefore often incorporate mechanisms to adjust their search behavior \emph{while optimizing}, by taking into account the information obtained during the run. These techniques are studied under many different umbrellas, such as \emph{parameter control}~\cite{EibenHM99}, \emph{meta-heuristics}~\cite{metaheuristics_survey}, adaptive operator selection~\cite{AOS}, or \emph{hyper-heuristics}~\cite{BurkeGHKOOQ13}. Probably the best-known and most widely used techniques for achieving a dynamic search behavior are the \emph{one-fifth success rule}~\cite{Rechenberg73,Devroye72,SchumerS68} and the \emph{covariance matrix adaptation technique} that the family of CMA-ES algorithms~\cite{hansen_adapting_1996,hansen2001self_adaptation_es} is built upon. While each of these two control mechanisms tackles the problem of balancing performance in different phases of the search in its own way, they mostly work within a specific algorithm, aiming to tune its performance by changing internal parameters or algorithm modules. This inherently limits the potential of these methods, since different algorithms can have widely varying performances during different phases of the optimization process. By switching between these algorithms during the search, these differences could potentially be exploited to get even better performance. We coin the problem of choosing which algorithms to switch between, and under which circumstances, the \emph{Dynamic Algorithm Selection}~(dynAS) problem. 
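For readers unfamiliar with the one-fifth success rule, a minimal (1+1)-ES sketch in Python is given below. The decrease factor, budget, and test function are arbitrary choices of ours, not prescribed by the rule itself; the scheme multiplies the step size by $a<1$ on failure and by $a^{-4}$ on success, which keeps the step size constant exactly at a success rate of $1/5$:

```python
import random

def one_plus_one_es(f, x, sigma=1.0, budget=2000, a=0.85):
    """(1+1)-ES with a one-fifth success rule (illustrative sketch)."""
    fx = f(x)
    for _ in range(budget):
        y = [xi + sigma * random.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:              # success: accept and enlarge the step
            x, fx = y, fy
            sigma /= a ** 4       # i.e., sigma *= a^(-4)
        else:                     # failure: shrink the step
            sigma *= a
    return x, fx

random.seed(3)
sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best = one_plus_one_es(sphere, [5.0] * 5)
assert f_best < 1e-2              # converged far below the start value of 125
```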
Solving the dynAS problem would be an important milestone towards tackling the more general \emph{dynamic Algorithm Configuration (dynAC)} problem, which also addresses the problem of selecting (and possibly adjusting) suitable algorithm configurations. Specifically, dynAS is limited to switching between algorithms from a discrete portfolio of pre-configured heuristics, whereas for dynAC, the algorithms come with (possibly several) parameters whose settings can have significant influence on the performance. We do not solve dynAS here, but aim to show its potential for numerical optimization. We then aim to develop suitable environments to encourage and enable future research into achieving the identified potential of dynAS and, in the longer run, to extend this to the dynAC problem. As a first step, we need to identify a meaningful collection of algorithms and benchmark problems, which together cover the main characteristics and challenges of the dynAS problem, without imposing too many additional challenges. The Black-Box Optimization Benchmarking (BBOB) environment~\cite{hansen_coco:_2016} with its rich data sets available at~\cite{bbob-data} suggests itself as a natural starting point for such considerations, since the community has already acquired a quite solid understanding of the problems and solvers in this test-bed over the last decade. We perform a first assessment of the performance that one could expect to see when applying dynAS to the algorithms in the BBOB data sets, to understand whether the gains would justify further exploration of the dynAS paradigm on this test-bed. We find that -- even when restricting the dynAS problem further to allowing only a single switch between algorithms in the portfolio -- promising improvements over the best static solvers can be expected, in particular for the more complex problems (functions 19-24). 
Our considerations are purely based on a theoretical investigation of the potential, which might be too optimistic for the single-switch dynAS case -- most importantly, because of the problem of \emph{warm-starting} the algorithms: since the heuristics are adaptive themselves, their states need to be initialized appropriately at the switch. This may be a difficult problem when changing between algorithms of very different structure. We do not consider, on the other hand, the possibility of switching more than once, so that our bounds may be too pessimistic for the full dynAS setting, in which an arbitrary number of switches is allowed. Given the above limitations, we therefore also provide a critical assessment of our approach, and highlight ideas for addressing the main challenges in dynAS. \subsection{Related Work} The idea that dynamic configuration and/or selection of algorithms can be beneficial in the context of iterative optimization heuristics is almost as old as evolutionary computation itself, in particular in the context of solving numerical optimization problems, see~\cite{LoboLM07} for an entire book focusing mostly on dynamic algorithm configuration techniques. However, as mentioned above, existing works almost exclusively focus on changing parameters of selected components of an otherwise stable algorithmic framework. This includes most works on hyper-heuristics~\cite{BurkeGHKOOQ13} and related concepts such as adaptive operator selection~\cite{AOS}, and parameter control~\cite{EibenHM99}. To the best of our knowledge, the full dynAC problem as described above was only recently formalized~\cite{BiedenkappBHL19}. 
Biedenkapp \emph{et al.} introduce dynAC as a Contextual Markov Decision Process (CMDP), where a policy can be learned to switch hyperparameters of a meta-algorithm, with some of these hyperparameters possibly encoding the choice between different algorithms.\footnote{Note here that there is a long-standing debate about the classification of algorithm configuration vs. algorithm selection. That is, while some consider a parametrized algorithm framework an algorithm with different configurations, others argue that each such configuration is an algorithm by itself. We omit this discussion here, and use the convention that an algorithm can have possibly different configurations. Note, though, that -- in the context of this work -- this only makes a difference in the terminology. All concepts and ideas can be equivalently described using the other, possibly mathematically more stringent, convention.} They also show that artificial CMDPs can be solved effectively by using reinforcement learning techniques, providing a promising direction for future research on dynAC. In the context of evolutionary computation, the concept of switching between different algorithms during the optimization process was recently investigated in~\cite{van_rijn_ppns_2018_adpative}, through a theoretical assessment similar to the one in this work. The approach was then tested in~\cite{research_project}, where it was shown that the predicted gains can indeed materialize, with the caveat that one has to ensure a sufficiently accurate estimate for the median anytime performances of each algorithm. These two works, however, focus on a single family of numerical black-box optimization techniques, the modular CMA-ES framework suggested in~\cite{van_rijn_evolving_2016}. 
In this work, in contrast, we explicitly want to go one step further and study combinations of heuristics that are potentially of very different structure, such as, for example, combining a Differential Evolution (DE) algorithm for the global exploration with a CMA-ES for the final convergence. While the dynAC problem is solved by an unsupervised reinforcement learning approach in~\cite{BiedenkappBHL19}, we observe that dynAC in evolutionary computation is more frequently based on supervised learning approaches; see~\cite{MalanM19,MunozS17footprint,jankovic2019adaptive} for examples. These techniques combine exploratory landscape analysis~\cite{mersmann2011exploratory} and/or fitness landscape analysis~\cite{Pitzer2012fitnesslandscape} with supervised learning techniques, such as random forests, support vector machines, etc. While still in its infancy, even in the static algorithm configuration case~\cite{munoz2015algorithm, kerschke2017automated,KerschkeT19,BelkhirDSS17}, this line of work may provide an interesting alternative to reinforcement learning, as it may more directly provide insight into (and make use of) the correlation between fitness landscapes and algorithm performance. \section{Preliminaries} \subsection{Dynamic Algorithm Selection} Classically, algorithm selection attempts to find the best algorithm $A$ from a portfolio $\mathcal{A}$ to solve a specific function $f$ from a set of functions $\mathcal{F}$. Specifically, this static version of algorithm selection can be defined as follows: \begin{definition}[Static Algorithm Selection] Given an algorithm portfolio $\mathcal{A}$ and a function $f\in\mathcal{F}$, we aim to find: $$\argmin_{A\in\mathcal{A}} \text{PERF}(A, f) \; ,$$ where PERF is a performance measure (which assigns lower values to better performing algorithms). \end{definition} To extend algorithm selection to the dynamic case, we need to define a function which switches between algorithms. 
We use techniques from~\cite{BiedenkappBHL19} to represent this as a policy function, and modify it as follows: \begin{definition}[Dynamic Algorithm Selection (dynAS)] Given an algorithm portfolio $\mathcal{A}$, a function $f\in\mathcal{F}$, and a state description $s_t\in\mathds{S}$ at time step $t$ of an algorithm run, we want to find a policy $\pi: \mathds{S} \to \mathcal{A}$ which minimizes $\text{PERF}(A_{\pi}, f)$. \end{definition} Note that this definition can be extended to dynamic algorithm configuration by changing the policy to $\pi: \mathds{S} \to (\mathcal{A}\times\Theta_A)$, where $\Theta_A$ is the configuration space of algorithm $A$. \subsection{The BBOB Benchmark} The Black Box Optimization Benchmark (BBOB) is widely accepted as the go-to benchmarking framework within the field of optimization. While BBOB has grown a lot over the years, the functions within its noiseless suite have remained stable. This suite contains 24 noiseless optimization functions, each of which is defined for an arbitrary number of dimensions. In practice, however, the commonly used dimension set is $\mathcal{D}=\{2,3,5,10,20,40\}$. For each function, several transformation methods are defined, both in the variable and in the objective space. These transformations are fixed, and different combinations lead to different versions of the function, called instances. Since these functions are defined mathematically, the optimal values are known in advance. Because of this, we can define the target values we wish to reach in terms of closeness to this optimal value, instead of as absolute values. This has the advantage of comparability between instances, which would not be possible when using raw target values. The 24 noiseless functions have been studied in detail, not just from a performance perspective. 
Especially within the landscape analysis community, extensive analysis of the BBOB-functions has been performed, leading to many useful insights about their properties. These properties are ideal to use when implementing dynAS in practice, as they strongly influence the local performance of algorithms. Generally, it is agreed that the 24 BBOB functions cover a broad range of potential challenges for different optimization algorithms~\cite{mersmann2011exploratory}, even though certain aspects, e.g., discontinuities or plateaus, are not very well represented~\cite{lacroix2019}. The popularity of BBOB means that many researchers have benchmarked their algorithms on the BBOB-functions. Most of these have then submitted versions of their algorithms to competitions or workshops organized by the BBOB-team. Between the first competition in 2009~\cite{hansen2010comparing} and the latest workshop in 2019, a total of 226 algorithms have been submitted and their data made available to the public~\cite{bbob-data}. Because of this large amount of available data, there are plenty of baselines to compare algorithms against and gain inspiration from. These algorithms have often been well justified and rigorously tested. However, the implementations used are generally not freely available, and even if they are, they might be hard to combine into a single dynAS framework, since BBOB is available in many different languages. Fortunately, the majority of the algorithms are either directly available online or well documented, making their reimplementation feasible. Additionally, the large number of algorithms which have been run on BBOB provides a good way to select sets of algorithms from which to build initial dynAS portfolios. 
However, since the BBOB-repository is largely the result of competitions, many of the submitted algorithms are highly tuned, making them hard to beat and raising the question of how well dynAS results generalize to other functions. Eventually, a move to true dynAC would resolve this issue, but these techniques will require substantial further study to implement. Since the BBOB-framework provides the functions, algorithms and performance baselines, it is an ideal candidate for initial experiments related to dynAS. \subsection{Performance Measures} To measure the performance of the algorithms on the BBOB-dataset, several approaches are possible. These usually fall into two categories: fixed-budget and fixed-target. The fixed-budget approach asks the question: ``What target value is reached after $x$ function evaluations?'', while the fixed-target question can be phrased as: ``How many function evaluations are needed to reach target $y$?''. In this paper, we will use the fixed-target approach. Since most algorithms in our data set are stochastic in nature, the number of function evaluations needed to reach a certain target is a random variable. For a certain function instance $f_i\in\mathcal{F}$ and dimension $d\in\mathcal{D}$, we let $t_j(A, f_i^{(d)}, \phi)$ denote the number of evaluations that algorithm $A\in\mathcal{A}$ needed in the $j$-th run to evaluate for the first time a point of target precision at least $\phi$. Note that $t_j(A, f_i^{(d)}, \phi)$ is a random variable, which is commonly referred to as the \emph{Hitting Time (HT)}. If run $j$ did not manage to hit target $\phi$ within its allocated budget, we say that $t_j(A, f_i^{(d)}, \phi) = \infty$. While just taking the average of the observed hitting times gives some estimate of the true mean, previous work~\cite{auger_restart_2005} has shown that it is not a consistent, unbiased estimator of the mean of the hitting time distribution. 
Instead, the Expected Running Time (ERT) is used. This is defined as follows: \begin{definition}[Expected Running Time (ERT)] $$\ERT(A, f^{(d)}, \phi) = \frac{\sum_{i=1}^K\sum_{j=1}^n \min\{ t_{j}(A, f_i^{(d)}, \phi), B\}}{\sum_{i=1}^K\sum_{j=1}^n \mathds{1}\{t_{j}(A, f_i^{(d)}, \phi) <\infty\}}.$$ Here, $n$ is the number of runs of the algorithm per instance, $K$ the number of instances of function $f$, and $B$ the maximum budget for algorithm $A$ on function $f_i^{(d)}$. \end{definition} To allow for a fair comparison between instances, the BBOB-benchmark uses target ``precisions'' for its analysis, instead of the raw target values seen by the algorithm. The precision is simply defined as the difference between the best-so-far function value and the value of the global optimum. This makes runtime comparisons between different instances and even different functions possible. \section{Methods} \subsection{Analysis of Available Data}\label{sec:preselection} Since the set of available algorithms from the BBOB-competitions is quite large, several issues in terms of data consistency arise. When processing the algorithms, we found that a small subset has issues such as incomplete files or missing data. We decided to ignore these algorithms, and work only with the ones which were made available within the IOHanalyzer tool~\cite{iohprofiler}. This leaves us with a set of 182 out of 226 possible algorithms for our analysis. There are some caveats to this data, mostly related to the lack of a consistent submission policy for the competitions over the years. For example, the 2009 competition required submission of 3 runs on 5 instances each, while the 2010 version changed this to 1 run on 15 instances. In theory, the instances should have very little impact on the performance of the algorithms, as they are selected in such a way as to preserve the characteristics of the functions. 
However, in practice there has been some debate about the impact of instances on algorithm performance, with some claiming that the landscapes of different instances of the same function can look significantly different to an algorithm~\cite{Munoz18instances,MunozKH15InformationContent,KerschkeT19}. In the following, we ignore this discussion and assume that performance is not significantly impacted by the instances. Another issue with the dataset is the widely inconsistent budgets of the different algorithms. These can be as low as $50D$ and as large as $10^7D$. However, since we use a fixed-target perspective to study the performance of the algorithms, these differences have little impact. Since the BBOB-competitions consider an optimizer to have ``solved'' an optimization problem when reaching a target precision of $10^{-8}$, many of the algorithms stop their runs after reaching this point to avoid unnecessary computation. Because of this, we will use the same target value in our computations. However, for some of the more difficult functions, this target can be challenging to reach within the allocated budget. To avoid the problem of dealing with algorithms without any finished runs, we only consider an algorithm in our analysis when it has at least 15 runs on the function, of which at least one managed to reach the target $10^{-8}$. Figure~\ref{fig:nr_finished_algs} plots the number of algorithms for each function/dimension pair that satisfy all the requirements mentioned above. We observe large discrepancies between functions and dimensions, with the number of admissible algorithms ranging from 4 to 155, and note that there are no algorithms which are admissible on all functions in all dimensions. 
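The admissibility criterion described above can be sketched as a simple filter over per-run hitting times. The following Python sketch uses invented algorithm names and hitting times; it illustrates the criterion and is not part of the actual data-processing pipeline:

```python
import math

# Toy per-run hitting times (evaluations needed to reach the final
# target phi = 1e-8); math.inf marks a run that never hit the target.
# All names and values are invented for this illustration.
runs = {
    "algA": [120, 150, math.inf] + [200] * 13,  # 16 runs, most successful
    "algB": [math.inf] * 20,                    # 20 runs, none successful
    "algC": [90, 110],                          # only 2 runs: too few
}

def is_admissible(hitting_times, min_runs=15):
    """Keep an algorithm only if it has at least `min_runs` runs on the
    function and at least one run reached the final target."""
    return (len(hitting_times) >= min_runs
            and any(t != math.inf for t in hitting_times))

admissible = sorted(a for a, ts in runs.items() if is_admissible(ts))
```

Applied to the real data, a filter of this kind is what reduces the per-(function, dimension) portfolios to the counts shown in Figure~\ref{fig:nr_finished_algs}.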
\begin{figure} \centering \includegraphics[width=0.5\textwidth, trim={0, 5, 0, 3},clip]{Images/Scatter_nr_finished_algs_v2.pdf} \caption{Number of algorithms with at least 15 independent runs, at least one of them reaching the target $\phi=10^{-8}$.}\vspace{-10pt} \label{fig:nr_finished_algs} \end{figure} \subsection{DynAS for BBOB-Functions}\label{sec:dynas_bbob} In this work, we restrict the dynAS problem on BBOB-functions to policies which switch algorithms based on the target precisions hit. To get an indication of the improvement which can be gained by dynAS over static algorithm selection, we use the BBOB-data to theoretically simulate a simple policy which implements only a single algorithm switch. We can define this as follows: \begin{definition}[Single-Switch dynAS] Let $f^{(d)}$ be a BBOB-function in dimension $d$ and $\mathcal{A}$ the corresponding portfolio of admissible algorithms. A single-switch policy is defined as the triple $(A_1, A_2, \tau) \in \mathcal{A}\times\mathcal{A}\times\Phi$, where $\Phi = \left\{10^{2-0.2i} \mid i\in \{0,\dots, 50\}\right\}$ is the set of admissible splitpoints. This corresponds to the policy which starts the optimization procedure with algorithm $A_1$ and runs it until target $\tau$ is reached, after which the algorithm is changed to $A_2$. \end{definition} The performance of this single-switch method can then be calculated as follows: \begin{align*} T(f^{(d)},A_1,A_2, \tau, \phi) &= \ERT(A_1, f^{(d)}, \tau) \\ &+ \ERT(A_2, f^{(d)},\phi) - \ERT(A_2,f^{(d)},\tau), \end{align*} where $\phi$ is the final target precision we want to reach. For the BBOB-functions, we set $\phi = 10^{-8}$, as noted in Section~\ref{sec:preselection}. 
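Both the ERT and the combined runtime $T$ can be computed in a few lines of code. The following Python sketch uses invented hitting times and ERT tables; the helper names are our own:

```python
import math

def ert(hitting_times, budget):
    """ERT: total evaluations spent (each run capped at the budget),
    divided by the number of runs that reached the target."""
    successes = sum(1 for t in hitting_times if t != math.inf)
    spent = sum(min(t, budget) for t in hitting_times)
    return spent / successes if successes else math.inf

def switch_ert(ert_a1, ert_a2, tau, phi):
    """Combined runtime of the single-switch policy:
    T = ERT(A1, tau) + ERT(A2, phi) - ERT(A2, tau).
    `ert_a1`/`ert_a2` map target precisions to ERT values."""
    return ert_a1[tau] + ert_a2[phi] - ert_a2[tau]

# Invented ERT tables for two hypothetical algorithms.
ert_a1 = {1e-2: 100.0, 1e-8: 900.0}  # fast early, slow to finish
ert_a2 = {1e-2: 400.0, 1e-8: 600.0}  # slow early, fast to finish
t = switch_ert(ert_a1, ert_a2, tau=1e-2, phi=1e-8)
```

In this invented example, switching at $\tau = 10^{-2}$ yields $T = 100 + 600 - 400 = 300$, better than either algorithm on its own.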
Generally, to assess the performance of an \textit{algorithm selection} method, its performance can be compared to that of the \textit{Single Best Solver (SBS)}, which can be defined as follows: \begin{definition}[Single Best Solver] For each dimension $d\in\mathcal{D}$, we have: $$\text{SBS}_{\text{static}} (\mathcal{F}^{(d)})=\arg\min_{A\in\mathcal{A}}\sum_{f\in\mathcal{F}} \text{PERF}(A, f^{(d)}, \phi)$$ Often, ERT is used as the performance function, but its values can differ widely between functions, leading to a biased weighting. To avoid this, we can instead use the per-function ranking of ERT, giving equal importance to every function. Note that we use final target precision $\phi=10^{-8}$. \end{definition} While the SBS has a good average performance, it can easily be beaten by a decent \textit{algorithm selection} technique. As such, a better performance baseline is needed. This is the theoretically best algorithm selection method, called the Virtual Best Solver, which can be defined as follows: \begin{definition}[Static Virtual Best Solver (VBS$_{\text{static}}$\xspace)] For each function $f\in\mathcal{F}$ and dimension $d\in\mathcal{D}$, we have: $$\text{VBS}_{\text{static}} (f^{(d)})=\arg\min_{A\in\mathcal{A}}\text{PERF}(A, f^{(d)})$$ For the BBOB functions, we use $\text{PERF}(A, f^{(d)}) = \ERT(A, f^{(d)}, \phi)$ with $\phi = 10^{-8}$. \end{definition} Note that the VBS$_{\text{static}}$\xspace always performs at least as well as the SBS, and theoretically gives an upper bound for the performance of any real implementation of algorithm selection techniques. Thus, the difference between SBS and VBS$_{\text{static}}$\xspace gives an indication of the maximal performance gain possible through algorithm selection. For the BBOB-data, the relative ERT between these two methods is visualized in Figure~\ref{fig:sbs_vs_vbs}. From this, we see that the differences can be extremely large, highlighting the importance of algorithm selection. 
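A minimal sketch of the two baselines, using invented ERT values; the rank-based SBS follows the per-function ERT ranking mentioned in the definition above:

```python
# Invented ERT values: erts[algorithm][function] for one dimension.
erts = {
    "alg1": {"f1": 10.0, "f2": 500.0, "f3": 320.0},
    "alg2": {"f1": 40.0, "f2": 100.0, "f3": 310.0},
    "alg3": {"f1": 20.0, "f2": 200.0, "f3": 300.0},
}
functions = ["f1", "f2", "f3"]

def vbs_static(erts, f):
    """Virtual best solver: per-function argmin of ERT."""
    return min(erts, key=lambda a: erts[a][f])

def sbs(erts, functions):
    """Single best solver: lowest sum of per-function ERT ranks,
    which weights all functions equally."""
    def rank_sum(a):
        return sum(sorted(erts[b][f] for b in erts).index(erts[a][f])
                   for f in functions)
    return min(erts, key=rank_sum)
```

In this toy example each algorithm is the VBS$_{\text{static}}$\xspace on exactly one function, while the SBS is the algorithm with the best overall rank sum.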
Similar to the way we defined VBS$_{\text{static}}$\xspace, we can define a Dynamic Virtual Best Solver, VBS$_{\text{dyn}}$\xspace, as follows: \begin{definition}[Dynamic Virtual Best Solver] For each BBOB-function $f\in\mathcal{F}$ and dimension $d\in\mathcal{D}$, we have: $$\text{VBS}_{\text{dyn}} (f^{(d)})=\argmin_{(A_1,A_2, \tau)\in(\mathcal{A}\times\mathcal{A}\times\Phi)} T(f^{(d)}, A_1, A_2, \tau, \phi) $$ \end{definition} \begin{figure} \centering \includegraphics[width=0.5\textwidth, trim={0, 15, 0, 10},clip]{Images/Rel_ERT_SBS_over_VBS.pdf} \caption{Relative ERT of the SBS over the VBS$_{\text{static}}$\xspace. The selected SBS are: Nelder-Doerr (2D), HCMA (3, 10 and 20D) and BIPOP-aCMA-STEP (5D). Dimension 40 was removed because no algorithm hit the final target on all functions in this dimension.\label{fig:sbs_vs_vbs}}\vspace{-5pt} \end{figure} \begin{table*}[!bht] \tiny \centering \begin{tabular}{rlrllrrr} \hline FID & VBS$_{\text{static}}$\xspace & ERT of VBS$_{\text{static}}$\xspace & $A_1$ & $A_2$ & $\log_{10}(\tau)$ & ERT of VBS$_{\text{dyn}}$\xspace & speedup \\ \hline 1 & fminunc & 13.0 & HMLSL & HCMA & 1.2 & 6.6 & 1.97 \\ 2 & LSfminbnd & 94.7 & BrentSTEPrr & LSfminbnd & 2.0 & 52.4 & 1.81 \\ 3 & BrentSTEPrr & 315.5 & STEPrr & BrentSTEPif & -0.2 & 246.8 & 1.28 \\ 4 & BrentSTEPif & 763.9 & STEPrr & BrentSTEPif & -0.2 & 578.1 & 1.32 \\ 5 & MCS & 10.8 & ALPS & MCS & 1.8 & 6.0 & 1.80 \\ 6 & MLSL & 1050.9 & fmincon & GLOBAL & -7.0 & 928.2 & 1.13 \\ 7 & PSA-CMA-ES & 1129.8 & GP5-CMAES & PSA-CMA-ES & 0.0 & 792.3 & 1.43 \\ 8 & fminunc & 399.1 & OQNLP & DE-BFGS & 0.6 & 304.7 & 1.31 \\ 9 & fminunc & 188.3 & fminunc & DE-AUTO & 0.0 & 152.3 & 1.24 \\ 10 & DTS-CMA-ES & 262.4 & fmincon & DTS-CMA-ES & -2.0 & 199.8 & 1.31 \\ 11 & DTS-CMA-ES & 268.3 & HMLSL & DTS-CMA-ES & -2.2 & 153.6 & 1.75 \\ 12 & NELDERDOERR & 1909.7 & HMLSL & BFGS-P-StPt & -3.2 & 1041.5 & 1.83 \\ 13 & IPOPsaACM & 835.1 & DE-AUTO & IPOPsaACM & -3.6 & 661.7 & 1.26 \\ 14 & DTS-CMA-ES & 546.6 & 
DE-BFGS & DE-SIMPLEX & -6.0 & 348.6 & 1.57 \\ 15 & PSA-CMA-ES & 10029.7 & LHD-10xDefault-MATSuMoTo & PSA-CMA-ES & 0.4 & 6982.4 & 1.44 \\ 16 & IPOPsaACM & 6767.1 & GLOBAL & CMA-ES-TPA & -0.4 & 5115.0 & 1.32 \\ 17 & PSA-CMA-ES & 4862.3 & PSA-CMA-ES & IPOP400D & -5.8 & 4201.8 & 1.16 \\ 18 & PSA-CMA-ES & 6717.4 & PSA-CMA-ES & CMA-ES multistart & -5.2 & 5687.3 & 1.18 \\ 19 & DTS-CMA-ES & 18768.0 & OQNLP & DTS-CMA-ES & -1.6 & 463.0 & 40.54 \\ 20 & DEctpb & 10670.3 & DEctpb & OQNLP & -0.4 & 3360.7 & 3.18 \\ 21 & GLOBAL & 2095.5 & MLSL & NELDERDOERR & 0.0 & 1209.8 & 1.73 \\ 22 & GLOBAL & 1079.9 & RAND-2xDefault-MATSuMoTo & GLOBAL & 0.4 & 844.1 & 1.28 \\ 23 & CMA-ES-MSR & 18971.4 & DTS-CMA-ES & SSEABC & -2.6 & 10295.0 & 1.84 \\ 24 & OQNLP & 285173.0 & GP5-CMAES & CMAES-APOP-Var2 & 0.0 & 52387.0 & 5.44 \\ \hline \end{tabular} \caption{Relative gain of the optimal single-switch dynamic algorithm combination VBS$_{\text{dyn}}$\xspace over the best static algorithm VBS$_{\text{static}}$\xspace for all 24 BBOB functions in dimension 5. ERT values are computed from data available at \url{https://coco.gforge.inria.fr/doku.php?id=algorithms-bbob}. We only consider algorithms with at least 15 runs, one of which reached target precision $\phi=10^{-8}$, which is also the target used for the ERT calculations. The full version of this table, also for other dimensions, is available at~\cite{data-bbob-link}. Abbreviations: FID = function ID (as in~\cite{hansen_coco:_2016}), $\tau$ = splitpoint target, speedup = $\text{ERT}_{\text{stat}}/\text{ERT}_{\text{dyn}}$. We also shortened DTS-CMA-ES\_005-2pop\_v26\_1model to DTS-CMA-ES for readability.} \label{Dim5_overview_table}\vspace{-20pt} \end{table*} \section{Results} Since the number of algorithms considered in this paper is relatively large, many of the results are only shown for a subset of functions, dimensions or algorithms. The complete data is made available at~\cite{data-bbob-link}. An example of the available data is also shown in Table~\ref{Dim5_overview_table}. 
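Given precomputed ERT tables at each admissible split point, the VBS$_{\text{dyn}}$\xspace can be found by exhaustively enumerating all $(A_1, A_2, \tau)$ triples; with $|\mathcal{A}|$ algorithms and $|\Phi|$ split points, this amounts to $|\mathcal{A}|^2 \cdot |\Phi|$ candidate evaluations. A Python sketch with invented ERT tables:

```python
import itertools

# Invented ERT tables: ert[alg][target] for one (function, dimension)
# pair, tabulated at the admissible split points and the final target.
phi = 1e-8
split_points = [1e2, 1e0, 1e-2]
ert = {
    "explorer": {1e2: 50.0, 1e0: 200.0, 1e-2: 800.0, phi: 5000.0},
    "exploiter": {1e2: 400.0, 1e0: 700.0, 1e-2: 900.0, phi: 1500.0},
}

def vbs_dyn(ert, split_points, phi):
    """Brute-force search over all (A1, A2, tau) triples for the
    single-switch combination minimizing the combined runtime T."""
    best = None
    for a1, a2 in itertools.product(ert, repeat=2):
        for tau in split_points:
            t = ert[a1][tau] + ert[a2][phi] - ert[a2][tau]
            if best is None or t < best[0]:
                best = (t, a1, a2, tau)
    return best

best = vbs_dyn(ert, split_points, phi)
```

Note that choosing $A_1 = A_2$ recovers the static runtime, so the search result is never worse than the VBS$_{\text{static}}$\xspace of the same portfolio.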
\subsection{Overall Gain of Single-Switch DynAS} Before investigating the possible improvements to be gained by dynamic algorithm selection, we examine the performance of the static algorithms from the BBOB-dataset. To achieve this, we look at the distribution of ERTs among the BBOB-functions. For dimension 5, this is visualized in Figure~\ref{fig:violin_static_ert_dim5}.\footnote{Note that for function F05, the linear slope, most algorithms simply move outside the search space to find an optimal solution, which is accepted by the BBOB-competitions, but puts those algorithms which respect the bounds at a disadvantage.} This figure shows the large differences in performance, both between the algorithms and between the different functions. We marked the performance of the VBS$_{\text{static}}$\xspace and VBS$_{\text{dyn}}$\xspace, and see that their differences also vary considerably between functions. \begin{figure} \centering \includegraphics[width=0.48\textwidth, trim={0, 5, 0, 3},clip]{Images/Violins_dim5.pdf} \caption{Distribution of ERTs among all algorithms for all 24 BBOB-functions in dimension 5. Please recall from Fig.~\ref{fig:nr_finished_algs} that the number of data points varies between functions. Also shown are the ERTs of the VBS$_{\text{static}}$\xspace and VBS$_{\text{dyn}}$\xspace.\label{fig:violin_static_ert_dim5}}\vspace{-10pt} \end{figure} To zoom in on the differences between the VBS$_{\text{static}}$\xspace and VBS$_{\text{dyn}}$\xspace seen in Figure~\ref{fig:violin_static_ert_dim5}, we can compute, for each function, dimension and corresponding algorithm portfolio, the relative ERT of the VBS$_{\text{static}}$\xspace over the single-switch VBS$_{\text{dyn}}$\xspace. Specifically, this is calculated as $\frac{\ERT(\text{VBS}_{\text{static}} (f^{(d)}))}{\ERT(\text{VBS}_{\text{dyn}} (f^{(d)}))}$. This value is shown for each (function, dimension)-pair in Figure~\ref{fig:heatmap_impr_dim_func}. 
From this figure, we can see that for most functions the improvement from even a single algorithm switch is quite large. Especially for the functions which are traditionally considered harder for black-box optimization algorithms to solve, the possible improvement is massive. In terms of the median over all (function, dimension)-pairs, the VBS$_{\text{dyn}}$\xspace is $1.49$ times faster than the VBS$_{\text{static}}$\xspace. \begin{figure} \centering \includegraphics[width=0.45\textwidth, trim={0, 15, 0, 15},clip]{Images/Rel_ERT_per_funcid_dim_times_v6.pdf} \caption{Heatmap of the ratio of ERTs between the Virtual Best Static Solver and the Virtual Best Dynamic Solver, for each (function, dimension)-pair.\label{fig:heatmap_impr_dim_func}}\vspace{-10pt} \end{figure} \subsection{Selected Algorithm Combinations} Since the VBS$_{\text{dyn}}$\xspace shows a lot of potential improvement over the classical VBS$_{\text{static}}$\xspace, it makes sense to study its behaviour in more detail. To achieve this, we can zoom in on a single (function, dimension)-pair and study the behaviour of the VBS$_{\text{dyn}}$\xspace, and of switch configurations in general. In Figure~\ref{fig:F21_10_dim}, we show the ERT of the best possible switch between any combination of algorithms in our portfolio $\mathcal{A}$, on function $21$ in dimension $10$. This figure shows some clear patterns in the form of horizontal and vertical lines. A horizontal line, such as the one for the MLSL algorithm~\cite{MLSL1999}, indicates an algorithm which improves the performance of most algorithms when used as the $A_1$-algorithm. This can be interpreted as having a good exploratory search behaviour, but poor exploitation. There are also vertical lines present, which indicate the algorithms which perform well as $A_2$-algorithms. These are less pronounced than the horizontal lines, which might indicate that the choice of $A_2$ has less impact on the performance than the choice of $A_1$. 
\begin{figure} \centering \includegraphics[width=0.47\textwidth, trim={0, 10, 0, 15},clip]{Images/F21_10D_v2.pdf} \caption{Relative ERT of configuration switches relative to VBS$_{\text{static}}$\xspace, for function 21 in 10 dimensions. The X- and Y-axes indicate algorithms selected as $A_2$ and $A_1$ respectively. Larger values (red) indicate better algorithm combinations.\label{fig:F21_10_dim}}\vspace{-10pt} \end{figure} We see that there are different algorithms which perform well as either the first or the second part of the search. This gives rise to the question of how to quantify these differences, and more generally, how to quantify the benefit which can be gained by selecting an algorithm as $A_1$ or $A_2$. We quantify this benefit as follows: \begin{definition}[Improvement-values] The initial performance value $I_1$ and finishing performance value $I_2$ of algorithm $A$ on function $f^{(d)}$ are defined as: $$I_1(A) = \frac{\min_{A_2\in\mathcal{A}, \tau \in \Phi} T(f^{(d)},A,A_2,\tau, \phi)}{\min_{A_1,A_2\in\mathcal{A}, \tau \in \Phi} T(f^{(d)},A_1,A_2,\tau, \phi)}$$ $$I_2(A) = \frac{\min_{A_1\in\mathcal{A}, \tau \in \Phi} T(f^{(d)},A_1,A,\tau, \phi)}{\min_{A_1,A_2\in\mathcal{A}, \tau \in \Phi} T(f^{(d)},A_1,A_2,\tau, \phi)}$$ \end{definition} Note that for the VBS$_{\text{dyn}}$\xspace$=(A_1,A_2,\tau)$, we always have $I_1(A_1)=1=I_2(A_2)$, and values cannot be below $1$. Intuitively, the larger the value of $I_1$, the worse the algorithm performs in the first part of the search, and similarly for $I_2$. The values of $I_1$ and $I_2$ for dimension $5$ are shown in Figures~\ref{fig:I1_values_subset} and~\ref{fig:I2_values_subset} respectively. To ensure the readability of the figures, only a subset of algorithms is chosen. 
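Given a table of combined runtimes $T$ over all $(A_1, A_2, \tau)$ triples, the $I_1$- and $I_2$-values reduce to constrained minima divided by the global minimum. A small Python sketch with invented runtimes:

```python
# Invented combined runtimes T[(a1, a2, tau)] for a single function,
# precomputed as ERT(a1, tau) + ERT(a2, phi) - ERT(a2, tau).
T = {
    ("a", "a", 0.1): 900.0, ("a", "b", 0.1): 500.0,
    ("b", "a", 0.1): 1200.0, ("b", "b", 0.1): 700.0,
}

def i1(alg, T):
    """I1: best runtime when forced to start with `alg`, divided
    by the best runtime over all single-switch triples."""
    best_with = min(t for (a1, _, _), t in T.items() if a1 == alg)
    return best_with / min(T.values())

def i2(alg, T):
    """I2: as I1, but forcing `alg` to be the finishing algorithm."""
    best_with = min(t for (_, a2, _), t in T.items() if a2 == alg)
    return best_with / min(T.values())
```

In this invented example, algorithm ``a'' is the best starter ($I_1 = 1$) and ``b'' the best finisher ($I_2 = 1$), mirroring the asymmetry visible in Figures~\ref{fig:I1_values_subset} and~\ref{fig:I2_values_subset}.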
The subset is chosen by selecting, for each function, the algorithm with the best value, and then adding the set of algorithms which have the best average value over all functions\footnote{Missing values and values larger than $3$ are set to $3$ to reduce the large impact of outliers on the average.}. From these figures, we see clear differences, both between functions and between algorithms. While some algorithms occur in both Figures~\ref{fig:I1_values_subset} and~\ref{fig:I2_values_subset}, many are included only once, indicating that they are relatively good choices for one part of the search, but not for the other. The clearest example of this is HMLSL~\cite{HMLSL2013}, which performs very well as $A_1$, but has relatively high $I_2$-values. This is caused by the fact that this algorithm typically converges quickly to a value close to the optimum, but has issues in the final exploitation phase, and is thus only beneficial at the start of the search. We also notice that, in general, the $I_2$-values are much lower across all algorithms, indicating that the choice of starting algorithm is the most important one for dynAS, while most good algorithms provide similar benefits in the final part of the search. \begin{figure} \centering \includegraphics[width=0.5\textwidth, trim={0, 5, 0, 15},clip]{Images/Heatmap_Mean_Impr_dim5_asa1_subgroup_v4.pdf} \caption{$I_1$-values for a group of 15 selected algorithms in dimension $5$. Darker colors correspond to better values.} \label{fig:I1_values_subset}\vspace{-5pt} \end{figure} \begin{figure} \centering \includegraphics[width=0.47\textwidth, trim={0, 5, 0, 15},clip]{Images/Heatmap_Mean_Impr_dim5_asa2_subgroup_v4.pdf} \caption{$I_2$-values for a group of 15 selected algorithms in dimension $5$. Darker colors correspond to better values. 
\label{fig:I2_values_subset}}\vspace{-5pt} \end{figure} \subsection{Small Portfolio: Case Study}\label{sec:small_portfolio} Since the algorithm space we consider is quite large, it can be challenging to gain insights into the individual algorithms. To show that dynamic algorithm selection is also applicable to smaller portfolios, we limit ourselves to 5 algorithms. These are representative of some widely used algorithm families: Nelder-Doerr~\cite{nelderdoerr}, DE-Auto~\cite{DE_AUTO}, Bipop-aCMA-Step~\cite{bipop_acma_step}, HMLSL~\cite{HMLSL2013} and PSO-BFGS~\cite{PSO_BFGS}. With this reduced algorithm portfolio, we can study the improvements over the corresponding VBS$_{\text{static}}$\xspace in more detail, and find interesting algorithm combinations to explore further. In Figure~\ref{fig:matrix_of_bars}, we show the relative improvement in ERT over VBS$_{\text{static}}$\xspace of the best combination of two algorithms. In each subplot, all 24 functions are represented. Note that the diagonal represents the static algorithms, which can never lead to an improvement over the VBS$_{\text{static}}$\xspace. We notice some clear trends in this figure. Specifically, we notice that using HMLSL as $A_2$ is rarely effective, while it provides large benefits when used in the initial part of the search. We also note that Nelder-Doerr shows the reverse behaviour, seemingly performing much better in the final exploitation phase. To illustrate the configuration switches which can be considered in this algorithm portfolio, we can zoom in on function 12 in dimension 3 and look at the fixed-target ERT curve. This is done in Figure~\ref{fig:ert_curve}, where we also indicate the best switching points between algorithms. This figure highlights the different behaviors of the algorithms in the portfolio, and thus indicates where switching algorithms would be beneficial. 
The best possible switch for this function occurs from PSO-BFGS to Nelder-Doerr, at target $10^{-6.4}$, leading to a relative speedup of $1.76$ over VBS$_{\text{static}}$\xspace. To decide which algorithms to use in an algorithm portfolio such as the one used here, two main approaches are possible. The first is to use some knowledge about the algorithms to determine which are important. This is useful for initial exploration, but might lead to useful algorithms being ignored. Instead, one can use performance information, such as the $I_1$- and $I_2$-values, to provide an initial indication of the usefulness of algorithms to the portfolio. This approach is much more generic; however, the choice of measures can be challenging. For example, the $I_1$ and $I_2$ measures are hard to extend to more general $k$-switch dynAS methods. Instead, an extension of marginal contributions~\cite{XuHHL12} and related concepts such as measures building on Shapley values (like those suggested in~\cite{FrechetteKMRHL16}) would capture an algorithm's contribution to a portfolio in a much more robust sense, and would thus be useful additions to the dynAS setting. \begin{figure} \centering \includegraphics[width=0.48\textwidth, trim={0, 7, 0, 15},clip]{Images/Matrix_of_bars_3D_v3.pdf} \caption{Overview of the best possible ERTs of combinations of algorithms $A_1$ and $A_2$ relative to VBS$_{\text{static}}$\xspace. Each plot represents a single $A_1$ (X-axis), $A_2$ (Y-axis) combination, where each bar represents a single function, in dimension 3. Values are capped at 2.} \label{fig:matrix_of_bars}\vspace{0pt} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth, trim={0, 15, 0, 10},clip]{Images/fixed_target_curve_with_switchmarks_F12_3D.pdf} \caption{ERT-curves for a selected algorithm portfolio of size 5 on F12 in 3D. Markers indicate optimal switch points between algorithms. 
Their color and symbol indicate the starting and finishing algorithms respectively (star = Nelder-Doerr, triangle = DE-AUTO, cross = BIPOP-aCMA-STEP, square = HMLSL and pentagon = PSO-BFGS).}\vspace{-5pt} \label{fig:ert_curve} \end{figure} \section{Discussion and Future Work} \subsubsection*{Summary} The previous results have shown that a large improvement over the VBS$_{\text{static}}$\xspace is still possible by using dynamic algorithm selection. We have shown several methods to gain insights into the differences between algorithms and functions. However, the results shown in the previous sections rely on an underlying assumption about the feasibility of algorithm switching. For many algorithms, this switching mechanism can be implemented in a relatively straightforward manner, e.g., between different population-based algorithms, such as different CMA-ES variants, for which algorithm switching has already been implemented~\cite{research_project}. \subsubsection*{Warm-start} For other algorithm combinations, a dynamic switch during the optimization procedure might be more challenging. For example, a switch from a single-solution algorithm to a population-based one gives rise to an information deficit, which needs to be dealt with to properly initialize the new population. Because of this, the gains indicated by simply combining the ERT values might be tough to achieve in practice. More generally, internal parameters differ between algorithms, so the first challenge to overcome is deciding how to ``warm-start'' the algorithms, to ensure an optimal internal state for the required phase of the optimization process. To achieve the performance of the VBS$_{\text{dyn}}$\xspace, such warm-start techniques will need to be implemented without requiring additional function evaluations, which could be a big challenge. 
We consider reinforcement learning approaches to be a promising first step for this task, but since these are quite expensive in terms of computational cost, we hope to see other approaches emerge in the near future. \subsubsection*{Stochasticity} Assuming such warm-start mechanisms are implemented, as has previously been done within CMA-ES for example, it has been shown that the theoretical improvements can still be hard to achieve in practice~\cite{research_project}. This is largely caused by the fact that hitting times are stochastic with relatively large variances, which can make the ERT unstable. When selecting the $(A_1,A_2,\tau)$-triple, differences in ERT might be obscured by the variance of the hitting times, leading to worse performance than expected. These effects might become even more important when dealing with larger algorithm spaces, or when incorporating hyperparameters into the search (see paragraph \emph{Hyperparameter tuning}). Analyzing the robustness of common solvers therefore seems to be an essential building block for the development of reliable dynAC approaches. \subsubsection*{Switch point} Another challenge which needs to be overcome to achieve effective dynamic algorithm selection is the question of how to identify suitable switching points. In this work we used target precision, which is usually not applicable in practice, since the algorithm has no knowledge of the precise value of the optimum. Because of this, we would need to find some other way to use the knowledge available to the algorithm to determine when to switch, e.g., the state of internal parameters, landscape features computed from additionally or previously evaluated points, the evolution of fitness values, population diversity, etc. \subsubsection*{True dynamic switching} While improving the way a switching point is detected is a big challenge to overcome, it also provides new opportunities to improve performance. 
The estimates shown in this paper consider only a single algorithm switch, whereas a truly dynamic approach could benefit from switching more often, to fully exploit the differences in search behaviour of the different algorithms. \subsubsection*{Hyperparameter tuning} A second factor of improvement can come from adding hyperparameter tuning into the dynamic process; i.e., moving from the algorithm selection setting to a dynamic variant of \emph{Combined Algorithm Selection and Hyperparameter optimization} (CASH~\cite{thornton2013autoweka,Vermetten20CASH}). A dynamic CASH approach would allow the algorithms to specialize further, focusing on performing as well as possible on their specific part of the optimization process. \subsubsection*{Extensions} As with any benchmark study, our results are -- for the time being -- limited to the 24 noiseless BBOB functions. Extending them to other classes of numerical black-box optimization problems forms another important avenue for future research. In this context, we consider supervised learning approaches building on exploratory landscape analysis~\cite{mersmann2011exploratory} as particularly promising. Exploratory landscape analysis has previously been shown to yield promising results for the task of configuring the hyper-parameters of CMA-ES~\cite{BelkhirDSS17}. Note, though, that all existing studies concentrate on static algorithm configuration and/or selection. We would therefore need to extend exploratory landscape analysis to the dynamic setting. First steps in this direction have been made in~\cite{jankovic2019adaptive}, where it is shown that the fitness landscapes, as seen by the algorithm, can change quite drastically during the run. \subsubsection*{Short-term} All the objectives listed above are quite ambitious. We therefore also formulate a few short-term goals for our research. 
Building on the techniques used to select interesting algorithms in Section~\ref{sec:small_portfolio}, we aim to create smaller algorithm portfolios for initial implementations of dynAS. This could be done based on the techniques studied in this paper, or using measures like the Shapley value~\cite{FrechetteKMRHL16}, allowing for much smaller portfolios which nonetheless capture the performance differences between the algorithms. With such a portfolio we can then more efficiently carry out research on the problems mentioned above, i.e., how to warm-start the algorithms and how to decide when to switch from one algorithm to another. \begin{acks} This work has been supported by the Paris Ile-de-France region. \end{acks} \bibliographystyle{ACM-Reference-Format}
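The single-switch estimates discussed above can be illustrated with a short sketch (Python, with hypothetical ERT tables rather than the BBOB data used in the paper; it assumes the usual dynAS combination rule in which the remaining effort of $A_2$ after switching at target $\tau$ is $\mathrm{ERT}_{A_2}(\phi) - \mathrm{ERT}_{A_2}(\tau)$ for final target $\phi$):

```python
# Sketch of the single-switch ERT combination. The ERT tables below are
# hypothetical (real values would come from benchmark data such as BBOB);
# targets run from easy (1e-1) to hard (1e-8).
TARGETS = [10 ** -p for p in range(1, 9)]

def dyn_ert(ert1, ert2, tau):
    """ERT of running A1 until target tau is hit, then A2 to the end.

    ert1 and ert2 map target -> ERT (in function evaluations); the
    remaining effort of A2 is taken as ERT_A2(final) - ERT_A2(tau).
    """
    final = min(ert2)  # hardest target = smallest precision value
    return ert1[tau] + ert2[final] - ert2[tau]

def best_switch(ert1, ert2):
    """Return (combined ERT, switch target) minimizing the combination."""
    return min((dyn_ert(ert1, ert2, t), t) for t in TARGETS)

# Hypothetical tables: A1 is cheap on easy targets, A2 on hard ones.
ert_a1 = {t: 50 * (i + 1) ** 2 for i, t in enumerate(TARGETS)}
ert_a2 = {t: 400 + 60 * (i + 1) for i, t in enumerate(TARGETS)}

ert, tau = best_switch(ert_a1, ert_a2)
print(f"switch at target {tau:g}: combined ERT = {ert:g}")
```

With these made-up tables the best switch combines the fast early phase of $A_1$ with the fast late phase of $A_2$; on real ERT data the same scan recovers the optimal $(A_1, A_2, \tau)$-triples behind plots like the ERT-curve figure.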
Czeslaw Viktorovich Znamierowski (23 May 1890 – 9 August 1977) was a renowned Soviet Lithuanian painter, a member of the USSR Union of Artists, known for his large artworks and his love of nature. Znamierowski combined these two passions to create some of the most notable paintings of the Soviet Union, earning the prestigious title of Honored Artist of the Lithuanian SSR in 1965. His works can be found in the Lithuanian Art Museum, the Šiauliai "Aušros" Museum, the National Art Gallery in Warsaw and the Tretyakov Gallery, as well as in well-known private collections and art funds. Znamierowski was born in Zatišje, Ludza, in eastern Latvia, to a father who worked as a land surveyor and a mother who was a music teacher. He attended the Imperial Academy of Arts on two occasions between 1912 and 1917, and later attended Vilnius University from 1926 to 1929. He studied under Ferdynand Ruszczyc, Arkady Rylov, Isaac Levitan and Nicholas Roerich. Znamierowski lived in Vilnius for the rest of his life and often painted its urban landscapes. By 1965 he had painted around 1,400 landscapes and made 800 sketches, producing more than 3,000 artworks over his 50-year career as an artist. Znamierowski's art was popular at home and abroad. He was one of the few artists in the Soviet Union whose art was allowed to leave the Iron Curtain. His artworks were sold to museums, state institutions, galleries and private collections in Lithuania, Latvia, Poland and Russia (inside the Soviet bloc), as well as in the USA, Canada, Germany, Sweden, Spain and France (outside the Soviet bloc). It is very likely that his art was sold to many other countries as well: "His works [Czeslaw Znamierowski's] have been acquired by Lithuanian and foreign museums."

Biography

Znamierowski was born on 23 May 1890 in Latvia, in the small village of Zatishye (Lithuanian: Zatišje; Polish: Zacisze). 
The village itself was part of the Pilden rural district in the Ludza region, bordering Latvia and Belarus. He was born into a poor but very artistic working-class Polish family. His father was a land surveyor and his mother was a music and singing teacher who occasionally painted as well. His maternal grandfather was a sculptor, and his aunt (A. Bobrowicz) was a painter. As a child in rural Latvia, Czeslaw was surrounded by nature. The country house where he lived was always full of flowers. His mother adored art, and it was under her influence that Czeslaw first discovered painting. From that moment on he never stopped. When he entered secondary school in Daugavpils (Polish: Dzwinsk), his aunt further guided his artistic education. While in secondary school he met a student (A. Pliszko) of the St. Petersburg Academy of Arts. Pliszko saw talent in Czeslaw's artwork and invited him to St. Petersburg. After finishing secondary school, Czeslaw gathered all the money he could from selling his belongings and from savings from various jobs in order to move to St. Petersburg. In 1911 Znamierowski arrived in St. Petersburg to continue his artistic education. In Znamierowski's words: "I did not hesitate long; I sold my bicycle, bought a train ticket with the proceeds, and in 1911 found myself in a building on the Moika..." He was admitted to the Society for the Encouragement of the Arts, located on Bolshaya Morskaya Street, not far from the Moika River. The society was headed by Nicholas Roerich (1874-1947), who guided Znamierowski's artistic education in St. Petersburg. Shortly after Czeslaw began his studies, he had to withdraw from the school and return home due to a difficult family situation. After an interruption of more than a year, Czeslaw returned to continue his studies. 
In 1915 Znamierowski was accepted into the St. Petersburg Academy of Arts, where the teachings of Arkady Rylov (1870-1939) and the works of Isaac Levitan (1860-1900) had a strong impact on the young artist. His studies were cut short once again, however, this time by the Great October Revolution of 1917. In Znamierowski's words: "I could not be indifferent to what was happening. In Zatishye I spread propaganda among the peasants and was appointed secretary of the Peasants' Public Committee. In January 1918 I was arrested by the White Guards; I was in danger of being executed and barely escaped with my life. I faced possible imprisonment, yet this did not stop me from continuing to take action." After the October Revolution, Znamierowski returned to Latvia. There he took an active part in establishing the Soviet government and for some time worked as chairman of the Proletarian Culture organization of the Liucen region. He also returned to artistic life, showing his landscapes at the Riga Art Gallery. In 1920 he began to take part in more exhibitions and became a member of the Latvian Society of Independent Artists and the Latvian Union of Artists. Znamierowski left Latvia in 1926 and moved to neighbouring Poland so that he could resume his artistic education once again. This time he applied to and was admitted to the Faculty of Art of Vilnius University, which at that time was headed by the well-known artist Ferdynand Ruszczyc (1870-1936). To move to Vilnius he had to sell everything he owned, including his family's house in Latvia. The reason Znamierowski entered Vilnius University was not that he lacked knowledge or skill, but that he specifically wanted to study under Prof. Ferdynand Ruszczyc and Prof. Aleksander Szturman. He finished his studies in 1929; the move to Poland, however, turned out to be permanent. 
He fell in love with the country and, although he travelled widely throughout his life, Vilnius was where he lived until his final days. After finishing his studies and becoming an accomplished full-time artist, Znamierowski gained great popularity in Poland. Many of his paintings were acquired by the Zacheta National Gallery of Art in Warsaw. In 1931 Czeslaw received an honorary award for his art, and in 1932, in Krakow, he received an Honorary Diploma for his landscape painting "Before the Rain". In 1933 the same painting, "Before the Rain", earned him a bronze medal at the Zacheta National Gallery of Art exhibition called "Incentive". Znamierowski often took part in organizational and social activities. In 1933 he organized the first Vilnius Society of Independent Artists, which played an important role in the history of Lithuanian art. He continued to take part in exhibitions, social events and activities until 1941, when war broke out between Nazi Germany and the Soviet Union. His last pre-war exhibition was in 1939 in Warsaw, just before the city was invaded by German forces. Znamierowski survived the Second World War and even continued to paint during those difficult years while helping with the war effort. This is what Znamierowski told a reporter in a 1970 interview about that period: "CZ: During the war I was still painting. When the bombardment of the city began, we lay here, in this room, on top of one another on the floor. Reporter: You didn't hide the paintings? CZ: No. Reporter: And after the war? CZ: I painted, painted, painted..." In 1947 Znamierowski became a member of the LSSR Union of Artists, and in 1965 he was awarded the prestigious title of Honored Artist of the LSSR. 
An active supporter of socialism, he said the following about ideology and how it related to his art during a 1970 interview: "My motto has always been Lenin's principle that art is for the people and must be broadly understandable to all." Throughout his life he travelled widely, especially in the Caucasus, Crimea and Zakarpattia regions. From each trip he brought back 20 to 25 canvases. On 9 August 1977, at the age of 87, the painter Znamierowski passed away. He was a professional artist his entire life. Czeslaw was buried in the cemetery of the Church of St. Peter and St. Paul in the city of Vilnius.

Znamierowski's house

"This house is a work of art. That is why it withstood fashions and reconstructions, and let itself be enjoyed by all who walked and drove past. We should mention, however, that besides art, Czeslaw Znamierowski devotes himself to floriculture and receives an annual distinction from the City Council." In Vilnius, Czeslaw lived and worked his whole life in the same house, located at 15 Antakalnis Street. That house was a representation of the artist and brought him a certain fame in the city. As a painter and flower lover, Znamierowski redecorated the front yard, turning it into one of the most beautiful gardens in the city and attracting much attention in the summer months. His garden contained different varieties of roses, gerberas, dahlias and many other flowers. Although the house no longer exists, during Czeslaw's lifetime, thanks to his gardening efforts, it was considered a work of art and one of the city's unofficial tourist spots. Some local residents and tourists took a trolleybus or bus to see the charming garden on the hill, with a lively old man bustling among the flowers. Because of this, the house withstood all the reconstruction trends that were popular in the Soviet Union, especially after the Second World War. 
Znamierowski's commitment to floriculture earned him an annual distinction from the city council. He continued to maintain his garden until he died.

Achievements

In 1930 Znamierowski established the first Society of Independent Artists in Vilnius, a significant event in the history of Lithuanian art. Znamierowski was one of the first artists to begin painting monumental panoramic landscapes of Soviet Lithuania, many of which decorated the interiors of state buildings not only in Soviet Lithuania but in the USSR as a whole. Znamierowski created more than 3,000 artworks in his 50-year career as an artist. Znamierowski's largest canvas was called "Panorama of the City of Vilnius". It measured 8 metres wide by 2.5 metres high and was exhibited in Moscow in the pavilion of the Lithuanian Soviet Socialist Republic at the Exhibition of National Economic Achievements. For his artistic achievements, Znamierowski received two honorary diplomas from the Committee of the Supreme Presidium of the Lithuanian SSR. In 1965 Znamierowski became an Honored Artist of the LSSR. From 1970 to 1977 Znamierowski was the oldest living artist in the city of Vilnius.

Exhibitions

Znamierowski received diplomas and medals many times. He had exhibitions in Warsaw, Moscow, London and beyond, in Europe and America. His paintings sold readily. Exhibitions were very personal for Znamierowski. He felt that an exhibition of his art was a "confession" of his whole life. After almost 50 years as a professional artist, Znamierowski had held a substantial number of exhibitions, small and large, in some of the most prominent national museums and galleries of the Soviet Union. He had seven major solo exhibitions in Vilnius and Warsaw. Znamierowski's artworks were exhibited in Riga, Ludza, Vilnius, Krakow, Warsaw, Moscow, St. Petersburg and many other cities of the USSR. 
A large painting of the city of Vilnius created by Znamierowski took centre stage in the LSSR pavilion during the Exhibition of National Economic Achievements in Moscow. Other prominent venues where Znamierowski's paintings were exhibited include: the Riga National Museum, Vilnius University, the Lithuanian National Museum of Art, the Lithuanian National Art Gallery, the Vilnius Exhibition Hall, the Krakow State Gallery, the Warsaw Art Gallery and the Zacheta National Gallery of Art, among others. An incomplete list of Znamierowski's exhibitions confirmed through public records:

1920 – Riga National Museum, Latvia. "In 1920 I took part in an exhibition in Riga for the first time. I remember that the Swedish Association of Independent Artists bought two paintings." – C. Znamierowski

1929 – Vilnius University, Lithuania. In 1929, after graduating from Vilnius University, Znamierowski organized an exhibition of his artworks. "The dearest things are always those that happen for the first time in one's life. In 1929 I organized my first solo exhibition in Vilnius and exhibited some 60 artworks." – C. Znamierowski

1931 – Krakow State Gallery, Poland. In 1931, during an exhibition organized by the Krakow State Gallery, Znamierowski received an honorary award for his paintings.

1932 – Krakow State Gallery, Poland. In 1932, at an art competition in Krakow, he received an art diploma for his landscape painting "Before the Rain".

1933 – Zacheta National Gallery of Art, Warsaw, Poland. In 1933, at an exhibition called "Incentive" organized by the Zacheta National Gallery of Art, C. Znamierowski won a bronze medal, once again for his painting "Before the Rain".

1936 – Vilnius State Gallery, Lithuania. In 1936 Znamierowski organized another solo exhibition. 
1939 – Zacheta National Gallery of Art, Warsaw, Poland. "In 1939 my last pre-war exhibition took place." – C. Znamierowski

1947 – The State Tretyakov Gallery. "1947 ALL-UNION ART EXHIBITION – Fine arts, sculpture, graphic arts". C. Znamierowski's first participation in a major post-war exhibition, alongside other well-known Soviet artists and sculptors.

1954 – Vilnius Art Museum, Lithuania. In 1954 Znamierowski held another one-man exhibition in Vilnius, Lithuania.

1960 – Exhibition of National Economic Achievements, Moscow, Russia. Znamierowski's largest canvas, "Panorama of the City of Vilnius", 8 metres wide by 2.5 metres high, was exhibited in Moscow in the pavilion of the Lithuanian Soviet Socialist Republic.

1962 – Vilnius Exhibition Palace, Lithuania. In 1962 Znamierowski held his largest solo exhibition to date.

1970 – Vilnius Art Museum, Lithuania. In August 1970 Znamierowski held a solo anniversary exhibition celebrating his 80th birthday and 50 years as a professional artist. The exhibition again proved the largest to date: more than 300 paintings were shown, representing over 40 years of work.

1975 – Vilnius State Gallery. In celebration of his 85th birthday, Znamierowski held a solo exhibition in the city of Vilnius.

1976 – Warsaw Art Gallery, Poland. In September 1976 Znamierowski's last major exhibition took place at the Warsaw Art Gallery. Fifty artworks were presented. Znamierowski gave an interview to a reporter shortly before that exhibition, saying: "I am preparing an exhibition sale at the Warsaw Art Gallery that will take place in mid-September. I organize this kind of exhibition very often. This year I will exhibit 50 canvases." 
Art

As an artist, Znamierowski was best known for two things: social realism and nature. Although he was an active communist, around 90 per cent of his artworks had nothing to do with the Soviet agenda and focused mainly on the beauty of the natural environment around him. Znamierowski strove to depict, and succeeded in depicting, the ever-changing moods of nature, showing many impressionist tendencies in his art. Znamierowski excelled at large landscapes and cityscapes, which occupied an important place in his artistic career. Many of these artworks were of monumental, panoramic size and often depicted people. Art critics stated that landscapes were where the artist revealed himself most. A love of nature was instilled in him from an early age by his birthplace and by his mother's love of flowers and gardening. Growing up in a small Latvian village, from a very young age he began to admire the changing colours of the seasons and the beauty of the fields, forests, lakes and rivers. Thanks to his mother's flower garden he became an avid flower lover, creating many still lifes and sketches, as if always striving to reveal their natural beauty to the world. He was also very fond of painting quiet corners of nature, especially early spring, when the snow melted and nature awoke. In such artworks he did his utmost to capture the impression of space, the limpidity of water, the firmness of ice and the delicacy of new grass. A favourite motif of the painter was a winding river in the landscape, sometimes depicted in a wide panoramic view and sometimes in an intimate, romantic manner. 
Above all, the theme of the countryside occupies an important place in his art, especially the large panoramic paintings such as "Panorama of the City of Vilnius" and "The Green Lakes", in which the painter delicately reveals the beauty of nature. Znamierowski described his love of nature and his desire to recreate it on canvas as follows: "I want to get as close to nature as possible, so that one could take a walk inside my painting. I want my art to convey the mood, so that one could tell the season or even the time of day."

Vilnius

He was very fond of depicting Vilnius, its surroundings, rivers and lakes. He was greatly drawn to the hills, valleys and evergreen pine forests of the Lithuanian capital. The theme of the Vilnius countryside constitutes an important part of his artworks. Almost all of his major paintings are connected with the city of Vilnius.

Discipline

After being a professional artist for more than 50 years and creating more than 3,000 artworks, when asked about discipline Znamierowski said: "That is not for me. If it clicks, I can paint for hours without stopping. If not, I may not work for a couple of weeks."

Painting

He painted mainly in three styles, often intermingling them: realism, romanticism and impressionism. From very early on, Znamierowski was strongly influenced by the academic style of Russian art and by realism in general, instilled in him by his teachers (A. Rylov, A. Dubovskoy) at the St. Petersburg Academy of Arts. When working, he would apply the paint gently, then step back a little, look at it from a different perspective, and then apply another brushstroke. He tried to absorb the mystery of creation and never hurried to finish his work. 
While his meticulous approach to art never changed, by the mid-1960s his technique began to show significant changes: his stroke became broader and quicker, his colours brighter and his palette more complex. The impressionist within the artist began to show more than ever. His artworks also became more emotional, colourful and romantic. A well-known Lithuanian art writer, critic and artist, Augustinas Savickas, said the following about Znamierowski in his book (LANDSCAPE IN LITHUANIAN PAINTING / 1965): "By his attitude towards nature, his style is close to the 'hedonistic' tendency. ... In the sketches painted directly from nature he is more sensitive and tender, while in the large canvases he idealizes nature and tries to embellish it."

Colour and mood

He was highly sensitive to the beauty of nature, subtly capturing its details and feeling the rhythm of composition. He never limited himself to one type of landscape, state of weather or time of year. Some of his artworks are dark and sombre (e.g. "Storm Clouds"), while others are joyful and bright, especially when he painted the coast of Palanga and the Black Sea (e.g. "Sea Wind"). "I like winter, water and particularly the sea; it is so colourful..." – C. Znamierowski

Genres

He worked in different genres, such as portraits, still lifes, cityscapes, architectural sites and other compositions. He was considered a great master of his profession and received many commissions. When he went out to paint, he often brought back not one but three or four sketches, which he then used in his studio to carefully develop, improve and recreate every detail. He was drawn to large, panoramic, even epic artworks, and painted several of them during his lifetime. 
He was one of the first artists to begin painting panoramic landscapes of Soviet Lithuania, intended to decorate the interiors of public buildings. The large artworks created in the 1950s and 1960s can be considered among the most characteristic paintings of this kind. The most notable of them are "Panorama of the City of Vilnius", "Salute in Vilnius" and "The Green Lakes".

Other talents

He had many other talents. He was well versed in wood carving, architecture, carpentry and floriculture. He made his own frames and even carved wooden sculptures for his garden. He managed to approach everything independently and achieve perfect quality in almost everything he attempted. Whatever art form he engaged in, he deeply believed that art had to be beautiful and reflect the environment of a specific period of time.

Artworks

"He devoted all his vigour and talent to art. You will find many of his paintings not only in the museums and institutions of Lithuania, but also in Latvia, the USA, Sweden and Germany, as well as in private homes." "Czeslaw Znamierowski's art covers thousands of landscapes, portraits and architectural sites. His paintings are held in many museums and art galleries; they can also be found abroad. Sir Czeslaw did not yield to vanity and did not seek fame, yet he was a painter of uncommon reputation." As noted above, his art was popular at home and abroad, and was sold to museums, state institutions, galleries and private collections both inside and outside the Soviet bloc. By 1965 he had created between 2,000 and 2,400 artworks, and by 1976 that number had reached more than 3,000 (paintings and sketches). "C. Znamierovskis's harvest, gathered over half a century, is abundant: some 2,300 artworks. They are widely dispersed among different museums in our country and abroad; we will find his paintings in institutions, schools, cafés and hotels." "During his long years of creative activity, the artist produced more than 2,000 paintings and sketches, many of which were acquired by museums and art lovers."

Personality

He was known for his cheerful personality, his outward calm and the great inner emotions he expressed through his art. He was always a caring and generous person. People who knew him said he was always in a good mood, full of energy and creativity. Even after turning 80 he was young at heart and full of optimism. He was also an idealist who saw the good in the world around him. Some art critics saw hedonistic tendencies in his art. He had an excellent work ethic, dedication and commitment to art, demonstrated by the more than 3,000 paintings he created during his lifetime, many of them of large panoramic size. He managed to approach everything independently and achieve perfect quality in almost everything he tried. He was remarkable for his unusual diligence; his short sleep was his only rest. Outwardly he was very much a public figure: he acted in theatre, took part in exhibitions, gave interviews and was a published art critic. In private life he lived and worked in the seclusion of his home, hardly noticed by anyone. 
Despite this, his works remain a model of sincerity and constancy. Besides painting, he had two other passions: flowers and pigeons. They were lifelong hobbies that he never abandoned. Wherever he lived, he always found a way to keep pigeons. The beautiful birds helped him preserve his mental balance, serenity and peace of mind. The birds gave him harmony in life and stimulated his imagination. Flowers also accompanied him throughout his life. Roses, asters, peonies, orchids, dahlias and dozens of other flowers were an inseparable part of the surroundings of the painter's house in Antakalnis. He never drank alcohol. He said that when, in his childhood, he saw the ugliness of drunkards, he, a man inseparably tied to beauty, decisively rejected this aspect of ugliness.

Multiculturalism

"For him [Czeslaw Znamierowski] there were no borders between nationalities. He easily made friends with natives of any country... Latvians, Lithuanians, Jews, Tatars, Karaites and Russians were not foreign to him. He was ready to help everyone whenever possible." Because of his intentional and unintentional multiculturalism, Znamierowski is considered a national artist of four countries: Latvia, Lithuania, Poland and Russia.

Latvia's claim: he was born, educated and devoted himself to art in Latvia. His whole family lived there, and it is the country where his career as an artist began. Znamierowski spoke fluent Latvian.

Lithuania's claim: from 1926 until his death in 1977 he resided in Vilnius, Lithuania. Most of his artworks were dedicated to, and about, this country. He was awarded the title of Honored Artist of the LSSR. He learned, read, wrote and spoke Lithuanian fluently, and above all he accepted Lithuania, and especially its capital Vilnius, as his home. 
Poland's claim: although he resided in Latvia and later Lithuania, Znamierowski's family was ethnically Polish, so his nationality was also Polish. "Czeslaw Znamierowski" is a Polish name, and he clearly made a point of writing it in the Polish form (the Lithuanian spelling would be Česlovas Znamierovskis) on many of his artworks, including those painted in, and dedicated to, Lithuania. The artist also maintained close ties with Poland and exhibited there on numerous occasions (see the Exhibitions section). He spoke fluent Polish.

Russia's claim: technically, he was born in Imperial Russia (of which Latvia was then a part). He spoke fluent Russian, and for several years he lived in St. Petersburg, Russia, where he received his highest artistic education. He was most influenced by, and painted in, what may be called Russian realism and the Russian academic style. In addition, Znamierowski was a supporter of communism, a vivid connection with Russia, where the ideology was strongest. Because of these factors, to this day many perceive him as a Soviet Russian artist.
Blues License is the sixth studio album by Australian musician Renée Geyer. The album was released in June 1979 and peaked at number 41 on the Kent Music Report.

Track listing
Vinyl/cassette (VPL1–0214)

Side One
"The Thrill Is Gone" (Rick Darnell, Roy Hawkins) – 6.55
"That Did It Babe" (Pearl Woods) – 5.15
"Set Me Free" (Deadric Malone) – 4.08
"Bellhop Blues" (Kevin Borich) – 3.23

Side Two
"Won't Be Long" (J. Leslie McFarland) – 3.48
"Stormy Monday" (Aaron "T-Bone" Walker) – 6.43
"Dust My Blues" (Elmore James) – 3.03
"Feeling Is Believing" (Willie Henderson, Richard Parker) – 7.01

Credits
Renée Geyer: vocals, backing vocals
Mal Logan: keyboards
Kevin Borich: guitar ("The Thrill Is Gone", "Set Me Free", "Bellhop Blues", "Stormy Monday", "Feeling Is Believing")
Tim Partridge: bass guitar (all tracks)
John Annas: drums (all except "Won't Be Long")
Kerrie Biddell: backing vocals ("Won't Be Long", "Feeling Is Believing")
Tim Piper: guitar ("That Did It Babe", "Dust My Blues")
Mark Punch: guitar ("Won't Be Long")
Steve Hopes: drums ("Won't Be Long")
Ron King: harmonica ("Dust My Blues")

Charts

References

1979 albums
Renée Geyer albums
Blues rock albums by Australian artists
Mushroom Records albums
RCA Records albums
# Math Help - Do these matrices represent linear transformations?

1. ## Do these matrices represent linear transformations?

Hello,

I have a question to do which I'm a bit stuck on. Any and all help would be much appreciated!

Assume V is an inner product space. If dim(V) = 3, for which of the following matrices does there exist a self-adjoint linear transformation $T_{i}$ on V for which the matrix $A_{i}$ represents $T_{i}$ with respect to some basis?

$A_{0}=\begin{pmatrix} 2 & 0 & 0\\ 0 & 1 & 1\\ 0 & 0 & 0 \end{pmatrix} \quad A_{1}=\begin{pmatrix} 2 & 0 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1 \end{pmatrix} \quad A_{2}=\begin{pmatrix} 2 & 0 & 0\\ 0 & 1 & 1\\ 0 & 0 & 2 \end{pmatrix}$

For the first one I'm thinking there's not, because of the row of 0's at the bottom. However, I'm still not 100% sure.

(I understand what a self-adjoint linear transformation is, and I also know that the eigenvectors of V will form an orthonormal basis of V.)

2. Originally Posted by amy99 [quoted post trimmed]

Eigenvectors I obtained for $A_0$:
$\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ -1\\ 1 \end{bmatrix}$

Eigenvectors for $A_1$:
$\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}$

Eigenvectors for $A_2$:
$\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 1 \end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}$

3. Thank you, dwsmith. From what I can tell, then, $A_{1}$ cannot represent a linear transformation, because it only has two eigenvectors, and so they cannot form an orthonormal basis of V (2 vectors cannot span V). Does it remain, then, to check whether the eigenvectors of $A_{0}$ and $A_{2}$ form an orthonormal set?

4. Originally Posted by amy99 [quoted post trimmed]

Be careful, here: every matrix can represent a linear transformation. But in your post you said self-adjoint linear transformation. Every self-adjoint linear transformation has a "complete set" of eigenvectors - a set of independent eigenvectors equal to the size of the matrix. That means that the eigenvectors could be used as a basis for the space.

You also said "Does it remain, then, to check whether the eigenvectors of $A_0$ and $A_2$ form an orthonormal set?"

An "orthonormal" set consists of vectors that all have length 1 and are orthogonal. In fact, here, both sets contain a vector that does not have length 1, so these are NOT "orthonormal". All you need is that the vectors are independent.
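The criterion stated in the last reply - a real matrix represents a self-adjoint transformation with respect to some basis exactly when it has a full set of independent eigenvectors with real eigenvalues - is easy to check numerically. A minimal NumPy sketch (not part of the original thread):

```python
import numpy as np

# The three matrices from the thread.
A0 = np.array([[2., 0., 0.], [0., 1., 1.], [0., 0., 0.]])
A1 = np.array([[2., 0., 0.], [0., 1., 1.], [0., 0., 1.]])
A2 = np.array([[2., 0., 0.], [0., 1., 1.], [0., 0., 2.]])

def has_full_eigenbasis(A, tol=1e-8):
    """True iff A is diagonalizable over the reals: all eigenvalues
    are real and the eigenvectors span the whole space."""
    eigvals, eigvecs = np.linalg.eig(A)
    if np.max(np.abs(eigvals.imag)) > tol:
        return False
    # Diagonalizable iff the matrix whose columns are eigenvectors has full rank.
    return np.linalg.matrix_rank(eigvecs, tol=tol) == A.shape[0]

for name, A in (("A0", A0), ("A1", A1), ("A2", A2)):
    print(name, has_full_eigenbasis(A))
```

This agrees with the hand computation in post #2: `A0` and `A2` have three independent eigenvectors, while `A1` (a Jordan block on the eigenvalue 1) has only two.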
## Will irrational parameters make a problem not well-defined on complexity

Given a set $N=\{a_1,\cdots,a_n\}$ where all $a_i$s are rational positive numbers and $\sum_{i\in N}a_i=1$, find a subset $S\subseteq N$ such that $(\sqrt{2\sum_{i\in S}a_i}-1)^2$ is minimized. Does the appearance of the square root make the problem ill-defined with regard to complexity? If well-defined, it is NP-hard, right?

## For which tempered distributions is the fractional derivative well-defined?

Let $\gamma \geq 0$ and consider the fractional derivative operator defined in the Fourier domain by $$\mathcal{F} \{\mathrm{D}^{\gamma} \varphi \} (\omega) = (\mathrm{i} \omega)^{\gamma} \mathcal{F}\{\varphi\} (\omega),$$ where $\varphi \in \mathcal{S}(\mathbb{R})$ is a smooth and rapidly decaying function.

Of course, the definition can be extended to many more functions than $\varphi \in \mathcal{S}(\mathbb{R})$, including some, but not all, tempered distributions. It is for instance possible to extend $\mathrm{D}^{\gamma}$ to any compactly supported distribution (as for any convolution operator from $\mathcal{S}(\mathbb{R})$ to $\mathcal{S}'(\mathbb{R})$).

My question is the following: Is there a good notion of the "domain of definition" of the operator $\mathrm{D}^{\gamma}$, understood as the largest topological vector space $\mathcal{S}(\mathbb{R}) \subseteq \mathcal{X} \subseteq \mathcal{S}'(\mathbb{R})$ such that $\mathrm{D}^{\gamma} : \mathcal{X} \rightarrow \mathcal{S}'(\mathbb{R})$ is well-defined and continuous? Or at least, if the question is somehow meaningless, any natural construction that will include many tempered distributions in a satisfactory* manner?

*To give a bit of context, I am especially interested in the fractional case where $\gamma \notin \mathbb{N}$. The question is obvious for $\gamma = n \in \mathbb{N}$, since one can select $\mathcal{X} = \mathcal{S}'(\mathbb{R})$. However, when $\gamma$ is purely fractional, there is no hope of defining the product $(\mathrm{i} \omega)^{\gamma} \mathcal{F}\{u\} (\omega)$ when $u \in \mathcal{S}'(\mathbb{R})$ is too irregular around the origin, which means morally that $u$ grows too fast at infinity. "In a satisfactory manner" would be a way of specifying properly a good "growth property" of $u \in \mathcal{X}$.
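The Fourier-domain definition quoted above is easy to experiment with numerically on a rapidly decaying function. The following is only a sketch; the grid, window and test function are my own choices, not part of the question:

```python
import numpy as np

def frac_deriv(f, x, gamma):
    """Spectral fractional derivative: multiply the FFT of f by
    (i*omega)**gamma (principal branch) and transform back.
    Assumes f is sampled on a uniform grid and decays to ~0 well
    inside the window, so the implicit periodization is harmless."""
    omega = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    return np.fft.ifft((1j * omega) ** gamma * np.fft.fft(f)).real

# Gaussian test function (a Schwartz function, as in the question).
x = np.linspace(-20.0, 20.0, 2048, endpoint=False)
f = np.exp(-x ** 2)

d1 = frac_deriv(f, x, 1.0)                 # should equal f'(x) = -2 x exp(-x^2)
d_half = frac_deriv(f, x, 0.5)             # half-derivative of the Gaussian
d_half_twice = frac_deriv(d_half, x, 0.5)  # semigroup: D^{1/2} D^{1/2} = D^1
```

For $\gamma = 1$ the result matches the exact derivative to near machine precision, and applying the half-derivative twice reproduces it, illustrating the semigroup property $\mathrm{D}^{\gamma_1}\mathrm{D}^{\gamma_2}=\mathrm{D}^{\gamma_1+\gamma_2}$ on Schwartz functions.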
<?php
require_once('tcpdf_include.php');
require_once('../tcpdf.php');
include('../../protected/config/db_config.php');

$db = new db_config();
$connect = $db->connect();

$pdf = new TCPDF(PDF_PAGE_ORIENTATION, PDF_UNIT, PDF_PAGE_FORMAT, true, 'UTF-8', false);
$pdf->SetCreator(PDF_CREATOR);
//$pdf->SetAuthor('Nicola Asuni');
$pdf->SetTitle('Tenant Sales Report');
$pdf->SetSubject('');
$pdf->SetKeywords('');
$pdf->SetHeaderData(PDF_HEADER_LOGO, PDF_HEADER_LOGO_WIDTH, PDF_HEADER_TITLE.'ZÁRIL lifestyle store', PDF_HEADER_STRING, array(0,64,255), array(0,64,128));
$pdf->setFooterData(array(0,64,0), array(0,64,128));
$pdf->setHeaderFont(array(PDF_FONT_NAME_MAIN, '', PDF_FONT_SIZE_MAIN));
$pdf->setFooterFont(array(PDF_FONT_NAME_DATA, '', PDF_FONT_SIZE_DATA));
$pdf->SetDefaultMonospacedFont(PDF_FONT_MONOSPACED);
$pdf->SetMargins(PDF_MARGIN_LEFT, PDF_MARGIN_TOP, PDF_MARGIN_RIGHT);
$pdf->SetHeaderMargin(PDF_MARGIN_HEADER);
$pdf->SetFooterMargin(PDF_MARGIN_FOOTER);
$pdf->SetAutoPageBreak(TRUE, PDF_MARGIN_BOTTOM);
$pdf->setImageScale(PDF_IMAGE_SCALE_RATIO);

if (@file_exists(dirname(__FILE__).'/lang/eng.php')) {
    require_once(dirname(__FILE__).'/lang/eng.php');
    $pdf->setLanguageArray($l);
}

$pdf->SetFont('', '', 8, '', true);
$pdf->AddPage();

// Report parameters. Note: brand_name, from and to are displayed below
// but are not yet used to filter the SQL query.
$brand_name = $_GET['brand_name'];
$from = $_GET['from'];
$to = $_GET['to'];

$tbl_header = '<style>
    tr.info td { height:60px; text-align:left; }
    tr.headings td { text-align:center; height:30px; background-color:#ccc; font-weight:bold; }
    tr.details td { text-align:center; height:15px; }
</style>';

//$sql = "SELECT * FROM tbl_sales_trans ORDER BY transaction_date DESC";
$sqlGetSalesReport = "SELECT * FROM tbl_sales_trans_report ORDER BY transaction_date DESC";

$tbl_header .= '<table border="0">';
$tbl_header .= '<tr><td colspan="7"><strong>Sales Report</strong></td></tr>';
$tbl_header .= '<tr><td colspan="7"><strong>From:</strong>' . $from . '</td></tr>';
$tbl_header .= '<tr><td colspan="7"><strong>To:</strong>' . $to . '</td></tr>';
$tbl_header .= '<tr><td colspan="7">&nbsp;</td></tr>';
$tbl_header .= '</table>';
$tbl_header .= '<table border="1">';

$tbl_header_titles = '<tr class="headings">';
$tbl_header_titles .= '<td>Transaction #</td>';
$tbl_header_titles .= '<td>DateTime of Purchase</td>';
$tbl_header_titles .= '<td>Price</td>';
$tbl_header_titles .= '<td>Sales Tax</td>';
$tbl_header_titles .= '<td>Total</td>';
$tbl_header_titles .= '</tr>';

$tbl_footer = '</table>';

$tbl = '';
$sales_report_results = mysqli_query($connect, $sqlGetSalesReport);
while ($row = mysqli_fetch_array($sales_report_results)) {
    $sales_transaction_id = $row['sales_transaction_id'];
    $subtotal = $row['subtotal'];
    $sales_tax_amount = $row['sales_tax_amount'];
    $total_amount = $row['total_amount'];
    $new_date = date("d/m/Y h:i:s", strtotime($row['transaction_date']));

    $tbl .= '<tr class="details">';
    $tbl .= '<td>' . $sales_transaction_id . '</td>';
    $tbl .= '<td>' . $new_date . '</td>';
    $tbl .= '<td>' . $subtotal . '</td>';
    $tbl .= '<td>' . $sales_tax_amount . '</td>';
    $tbl .= '<td>' . $total_amount . '</td>';
    $tbl .= '</tr>';
}

$pdf->writeHTML($tbl_header . $tbl_header_titles . $tbl . $tbl_footer, true, false, false, false, '');
$pdf->Output('user-sales-report.pdf', 'I');
The 2009–10 Louisiana–Lafayette Ragin' Cajuns women's basketball team represented the University of Louisiana at Lafayette during the 2009–10 NCAA Division I women's basketball season. The Ragin' Cajuns were led by third-year head coach Errol Rogers; they played their double-header home games at the Cajundome, with other games at the Earl K. Long Gymnasium, which is located on campus. They were members of the Sun Belt Conference. They finished the season 10–22, 4–14 in Sun Belt play, to finish in a three-way tie for fifth place in the West Division. They were eliminated in the first round of the Sun Belt women's tournament.

Previous season
The Ragin' Cajuns finished the 2008–09 season 4–27, 0–18 in Sun Belt play, to finish in seventh place in the West Division. They made it to the 2009 Sun Belt Conference women's basketball tournament, losing their first-round game by a score of 65–68 to the Troy Trojans. They were not invited to any other postseason tournament.

Roster

Schedule and results
[The schedule tables (non-conference regular season, Sun Belt regular season, and Sun Belt Women's Tournament) did not survive extraction.]

See also
2009–10 Louisiana–Lafayette Ragin' Cajuns men's basketball team

References

Louisiana Ragin' Cajuns women's basketball seasons
package org.hswebframework.web.crud.configuration;

import org.hswebframework.ezorm.rdb.metadata.RDBColumnMetadata;
import org.hswebframework.ezorm.rdb.metadata.RDBTableMetadata;

import java.beans.PropertyDescriptor;
import java.lang.annotation.Annotation;
import java.lang.reflect.Field;
import java.util.Set;

/**
 * Table metadata customizer. Implement this interface to customize table structures.
 *
 * @author zhouhao
 * @since 4.0.14
 */
public interface TableMetadataCustomizer {

    /**
     * Customize a column; invoked after the column has been parsed.
     *
     * @param entityType  the entity type
     * @param descriptor  the property descriptor
     * @param field       the field
     * @param annotations the annotations present on the field
     * @param column      the column definition
     */
    void customColumn(Class<?> entityType,
                      PropertyDescriptor descriptor,
                      Field field,
                      Set<Annotation> annotations,
                      RDBColumnMetadata column);

    /**
     * Customize a table; invoked after the entity class has been fully parsed.
     *
     * @param entityType the entity type
     * @param table      the table metadata
     */
    void customTable(Class<?> entityType, RDBTableMetadata table);
}
Anselm Kratochwil (born 5 February 1951 in Heidelberg) is a German biologist, retired university professor and proponent of biocoenology.

Biography
After taking his Abitur at the Bunsen-Gymnasium in Heidelberg, Anselm Kratochwil studied zoology, geobotany, limnology and soil science at the University of Freiburg from 1972 to 1978, completing his diploma in 1978. He wrote his dissertation under Hannes F. Paulus at the Zoological Institute of the University of Freiburg on "Blumen-Insekten-Gemeinschaften eines nicht mehr bewirtschafteten Halbtrockenrasens im Kaiserstuhl: Aspekte der Co-Phänologie, der Biogeographie und der Co-Evolution. Ein Beitrag zur Blütenökologie auf pflanzensoziologischer Grundlage". For this work, completed summa cum laude, he received the Goedecke Research Prize of the University of Freiburg in 1983, one of the highest distinctions in the natural sciences in Freiburg. From 1982 to 1986 Kratochwil was a research associate at the Biological Institute II of the University of Freiburg, at the chair of geobotany held by Otti Wilmanns. From 1987 to 1989 he held a fellowship of the German Research Foundation (DFG). From 1989 to 1992 Kratochwil held a teaching appointment at the Biological Institute I (Zoology) of the University of Freiburg in the fields of biocoenology and animal ecology, and in 1990 he habilitated there in ecology. The Senate of the University of Freiburg conferred the venia legendi on him, and he became a Privatdozent at the Faculty of Biology. In 1990 and 1991 Kratochwil was a visiting professor at the University of Natural Resources and Life Sciences, Vienna, and at Utrecht University, respectively. In 1992 he was appointed full professor of ecology at the University of Osnabrück as the successor of Helmut Lieth. There he heads the working group "Ecological and Socio-Economic Systems Research" of the University of Osnabrück. He is a board member of the Institute of Environmental Systems Research of the University of Osnabrück and deputy director of the university's Botanical Garden. From 1998 to 1999 Kratochwil was dean of the Department of Biology/Chemistry of the University of Osnabrück. He retired on 31 March 2013. Anselm Kratochwil is married to the botanist Angelika Schwabe-Kratochwil, with whom he has a son (Claudius, born 1983).

Work
Kratochwil's group at the University of Osnabrück deals primarily with biocoenology, animal ecology, vegetation ecology, and geoecology/environmental analysis. Kratochwil is editor or co-editor of several scientific journals, including Plant Ecology, Phytocoenologia and Tasks for Vegetation Science. In 2002 he received the Transfer Prize of the University of Osnabrück for knowledge transfer and cooperation.

Memberships
Board of the Hochschulverband, Osnabrück chapter, since 1996 (chair 1999–2003)
Scientific advisory board: Global zukunftsfähige Entwicklung - Perspektiven für Deutschland (research centres of the Hermann von Helmholtz Association) (2000–2003)
Environmental advisory board of the Protestant Church in Baden, since 1991
Working group "Global Change" of the RTG, deputy chair (1998–2004)
Working group "Environment, Peace, Development" of the University of Osnabrück (since 1998)
Speaker of the working group "Biocoenology" in the Gesellschaft für Ökologie (1987–1998)
Advisory board of the Gesellschaft für Ökologie (1992–1995)
Working group "Wildbienenkataster" at the State Museum of Natural History Stuttgart (since 2004)

Selected works
By his own account, Kratochwil's publication list comprises 89 items; a selection is presented here:
A. Schwabe, A. Kratochwil: Weidbuchen im Schwarzwald und ihre Entstehung durch Verbiß des Wälderviehs: Verbreitung, Geschichte und Möglichkeiten der Verjüngung. Beih. Veröff. Naturschutz Landschaftspflege 49, Karlsruhe 1987.
A. Kratochwil, A. Schwabe: Ökologie der Lebensgemeinschaften: Biozönologie. UTB series, Ulmer Verlag 2001, ISBN 3-8252-8199-X.
A. Kratochwil (ed.): Biodiversity in ecosystems. Principles and case studies of different complexity levels. Tasks for Vegetation Science 34, Kluwer Academic Publishers, Dordrecht 1999.
C. Burga, A. Kratochwil (eds.): Biomonitoring - General and Applied Aspects on a Regional and Global Scale. Tasks for Vegetation Science 35, Kluwer Academic Publishers, Dordrecht 2001.
A. Kratochwil (ed.): Interactions between animals, plant species and vegetation. Special issue, Phytocoenologia 32 (4), Berlin, Stuttgart 2002, pp. 515–676.
He also edited the German translation of Elements of Ecology: Thomas M. Smith, Robert L. Smith: Ökologie (6th, updated edition). Pearson Studium, Munich 2009, ISBN 3-8273-7313-1.

References

External links
Division of Ecology in the Department of Biology/Chemistry of the University of Osnabrück (working group of Prof. Kratochwil)

Zoologists
Ecologists
University of Osnabrück faculty
Germans
1951 births
Men
\section{Introduction} Granular materials cover a large variety of natural and industrial matter as diverse as soil, rock, building materials, cereals, or drug capsules. Controlling their density, defined as the volume fraction occupied by grains (also called \emph{solid fraction}), is critical to study their mechanical behavior - which may be gaslike, liquidlike or solidlike - and reduce their manufacturing, storage, or packaging costs. Over the centuries, scientists and engineers have attempted to predict the maximum solid fraction achievable by solidlike assemblies - \emph{or packings} - of hard particles knowing their individual geometrical characteristics. Due to their apparent simplicity, monodisperse hard sphere packings have received wide interest and various outstanding solid fraction values have been reported, ranging from $\phi=0.555\pm0.005$ for random loose packings~\cite{onoda_90} up to $\phi=\pi/\sqrt{18}\approx0.74$ for crystal-ordered packings~\cite{hales_05}. In the former case, the packing structure is often stabilised by interparticle friction in a hypostatic state~\cite{makse_00}, that is with fewer particle constraints than their number of degrees of freedom, whereas the latter case seems to be favored by long term cyclic shear~\cite{Panaitescu_12} and refers to highly hyperstatic structures. In between these two extremes lies the random close packing (RCP) state, equivalently defined as the maximally randomly jammed state~\cite{torquato_00} or stable equilibrium state under isotropic pressure of frictionless particles devoid of crystal nucleus~\cite{roux_04}. Such a state may be repeatedly achieved following experimental protocols, e.g.
gently kneading waxed balls enclosed in a rubber membrane~\cite{bernal_60}, pouring particles at a controlled rate into a cylinder~\cite{macrae_61}, or vertically shaking them within a container~\cite{scott_69}, as well as numerically using purely geometrical~\cite{lubachevsky_90} or mechanically-based protocols~\cite{makse_00,silbert_02}. Whatever the protocol, the generally agreed solid fraction of sphere packings in RCP state is $\phi_{RCP}=0.6366\pm0.0005$~\cite{scott_69,cumberland_87,ohern_02,silbert_02} and their structure is known to be isostatic with a mean number of contacts per sphere\textemdash or $coordination$ $number$\textemdash equal to $z=6$. However, real granular media are very seldom made of spheres. In particular, natural $aggregates$\textemdash i.e., granular materials for construction and civil engineering uses\textemdash may at best be sphere-like (rounded) when extracted from alluvial deposit, but most of them are more or less convex polyhedron-like (angular) as a consequence of their processing from the crushing of massive rock deposits. Unfortunately, literature on assessing the packing solid fraction of convex nonspherical particles is far less abundant. Yet such particles would have higher maximum solid fraction values than spheres according to Ulam's conjecture~\cite{gardner_01}. Indeed, Ulam's conjecture was verified numerically for dense crystal packings of spheroids of axes ratio larger than $\sqrt{3}$~\cite{donev_3_04}, regular tetrahedra dimers~\cite{chen_10}, Platonic and Archimedean solids~\cite{betke_00,degraaf_11}, whose reported maximum solid fraction values are respectively $\phi_{spheroids}\approx0.7707$, $\phi_{tetra}\approx0.8663$ and $\phi_{Pl\&Ar}$ in the range from $0.784$ for truncated icosahedra to $1$ for cubes. Ulam's conjecture was also investigated experimentally for rounded frictionless particles such as ellipsoids of axes ratio $1.25:1:0.8$ and angular grains such as tetrahedral dice.
Surprisingly, both particle shapes were found able to pack randomly as densely as crystal-ordered spheres, with solid fraction values of $\phi=0.74\pm0.005$ for the former~\cite{man_05}, and an even denser $\phi=0.76\pm0.02$ for the latter~\cite{jaoshvili_10} in the limit of infinite system size. In fact, additional simulations revealed that dense amorphous packings of frictionless non-cubical Platonic or axisymmetric-low-aspect-ratio particles would achieve solid fraction values ranging between those of spheres in the RCP and the dense crystal states~\cite{donev_04,baule_13,jiao_11}. This conclusion led Jiao and co-workers to suggest that, among convex particles with moderate asphericity (low aspect ratio), spheres possess the lowest solid fraction in RCP state, which is known as the analogue of Ulam's conjecture for random packings~\cite{jiao_11}. According to microstructural investigations of numerical packings, the high level of solid fraction observed with nonspherical particles is caused by the higher coordination number needed to constrain their additional rotational degrees of freedom and achieve jamming~\cite{donev_04}. Note however that amorphous packings of frictionless Platonic solids are isostatic~\cite{jaoshvili_10,jiao_11}, whereas axisymmetric particles were found hypostatic at least for aspect ratio smaller than $1.5$~\cite{donev_04,donev_07,baule_13}. In order to increase further the representativeness of real granular media in packing solid fraction studies, one shall account for the particle size distribution which is seldom monodisperse, and consider higher particle aspect ratios. Indeed, from field experience with aggregates, polydispersity increases the packing solid fraction~\cite{caquot_37,powers_68} and bidisperse rounded particles tend to pack denser than angular ones~\cite{sedran_94}. More or less empirical models have been designed to mimic these phenomena~\cite{stovall_86,cumberland_87,ouchiyama_89,delarrard_99,liu_02}. 
A few numerical studies have been published~\cite{roux_07,farr_10}, which confirm higher solid fraction values of bidisperse sphere packings compared to monodisperse ones (up to $0.827$ for a particle diameter ratio of $10$), but little to no similar study dealing with polydisperse polyhedron packing was identified. More literature examines the role played by large particle aspect ratio on packing density. Indeed, frictionless axisymmetric particles with length to diameter ratio larger than $2$ to $3$ were reported to exhibit smaller RCP solid fractions than spheres~\cite{kyrylyuk_11,baule_13} as a consequence of increased excluded volume effects. For both sphere and convex nonspherical particle packings, numerical approaches proved essential to investigate their microstructure and confirm their amorphous nature. Indeed, the presence of translational order may easily be checked using common statistical tools such as the $pair$ $correlation$ $function$ (or $radial$ $distribution$ $function$), which describes the probability of finding a pair of particles a distance $r$ apart relative to the probability expected for a completely random distribution at the same density~\cite{allen_89}. According to this definition, the pair correlation function of a perfectly amorphous packing is $1$ regardless of $r$ value. The pair correlation function of $dense$ amorphous sphere packings obtained from numerical simulations classically tends towards $1$ beyond a few particle diameters~\cite{silbert_02,agnolin_07}, thus evidencing the absence of long range translational order. Similar results evidencing even less translational order were reported from nonspherical particle packings~\cite{williams_03,jaoshvili_10}, as a consequence of their loss of rotational symmetry. For the latter, further checking is needed to ensure the absence of orientational order, which may be achieved using the $nematic$ $order$ $parameter$ and the $biaxial$ $parameter$~\cite{john_04,camp_97}. 
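For reference, the two descriptors just mentioned can be written down explicitly (the notations below are common textbook forms and may differ slightly from those of the cited works). For a packing of $N$ particles with centers $\roarrow r_{i}$ and mean number density $\rho_{N}$, the pair correlation function reads
\begin{equation}
g(r)=\frac{1}{4\pi r^{2}\rho_{N}N}\sum_{i=1}^{N}\sum_{j\neq i}\delta\left(r-\left|\roarrow r_{i}-\roarrow r_{j}\right|\right),
\end{equation}
while the nematic order parameter is commonly obtained as the largest eigenvalue of the tensor
\begin{equation}
Q_{\alpha\beta}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{3}{2}\,u_{i\alpha}u_{i\beta}-\frac{1}{2}\,\delta_{\alpha\beta}\right),
\end{equation}
where $\roarrow u_{i}$ denotes the unit vector carried by the chosen inertia axis of particle $i$.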
These parameters assess the level of alignment of respectively one and two of the particle inertia axes, and they vary between $0$ (no alignment) and $1$ (perfect alignment). Several authors scrutinizing the onset of orientational order in initially random convex nonspherical particle packings subjected to a geometrical densification process have reported an amorphous-nematic transition characterized by a nematic order parameter jump in the range $[0.2;0.5]$~\cite{allen_93,camp_97}, whereas the nematic-biaxial transition occurred for values of the biaxial parameter in excess of $0.2$~\cite{camp_97}. The present paper explores the influence of size, angularity and aspect ratio of frictionless particles on their ability to achieve maximally randomly jammed packings. Further to determining maximum solid fraction values, our goal is to understand how the packing microstructure is affected by particle geometrical characteristics. For this purpose, numerical simulations have been performed so as to mimic a mechanically-based densification process, thus allowing comparisons with similar laboratory experiments. In section~\ref{sec:simmet}, we describe our simulation protocol. Results gathered in section~\ref{sec:results} are presented and discussed according to the following outline: in section~\ref{sec:mech_equil}, the mechanical equilibrium achieved by the packings is carefully examined, as well as their homogeneity to ensure the absence of particle segregation; then section~\ref{sec:solid_fraction} reports on the calculated solid fraction values and their variations as a function of particle bidispersity, angularity and aspect ratio; finally, section~\ref{sec:packing_micro} investigates the microstructure of our packings. \section{Simulation protocol}\label{sec:simmet} The simulated systems are dense assemblies of $3000$ rigid frictionless particles of identical mass density $\rho$, interacting with each other through totally inelastic collisions.
Particles are either spheres or $pinacoids$, the latter referring to a variety of convex polyhedra comprising eight vertices, fourteen edges and eight faces as shown in figure~\ref{pinacoid}. According to an extensive experimental study with various rock types mentioned in ref.~\cite{tourenq_82}, the pinacoid gives the best fit among simple geometries for an aggregate grain. As explained elsewhere~\cite{azema_12,camenen_12}, a pinacoid has three planes of symmetry and is determined by four parameters, length $L$, width $G$, height $E$ ($L \geq G \geq E$) and angle $\alpha$ set to $60^\circ$ here, so that its volume $V$ is given by: ~~\\ \begin{equation} \label{equ_1} V=\frac{EG}{2}\left(L-\frac{E}{3\sqrt{3}}\right). \end{equation} ~~\\ \noindent Alternatively, a pinacoid may be determined by its size $d=\sqrt{G^2+L^2}$, corresponding to the diameter of its circumscribed sphere, supplemented with its aspect ratios $L/G$ and $G/E$. Figure~\ref{fig:pinacoids} depicts pinacoids with various aspect ratios and Table~\ref{tab:shape} summarizes the size and aspect ratio of every particle used in the present study. \begin{figure}[!t] \centering \begin{tabular}{c} \includegraphics*[width=0.65\columnwidth]{fig_1.eps}\\ \end{tabular} \caption{\label{pinacoid}Pinacoid, a model polyhedron characterised by its length $L$, width $G$, height $E$ and angle $\alpha$.
This polyhedron has three symmetry planes, each perpendicular to an inertia axis $\roarrow u$, $\roarrow v$, or $\roarrow w$.} \end{figure} \begin{figure}[!t] \centering \begin{tabular}{c c} \includegraphics*[width=0.3\columnwidth]{fig_2a.eps}&\includegraphics*[width=0.5\columnwidth]{fig_2b.eps}\\ (a) & (b)\\ \includegraphics*[width=0.3\columnwidth]{fig_2c.eps}&\includegraphics*[width=0.5\columnwidth]{fig_2d.eps}\\ (c) & (d) \end{tabular} \caption{\label{fig:pinacoids}(Color online) Snapshot of the pinacoids used in this study: $(a)$ isometric, $(b)$ elongated, $(c)$ flat and $(d)$ flat \& elongated.} \end{figure} \begin{table}[!t] \begin{center} \caption{\label{tab:shape}Size (diameter $d$ for spheres, $d=\sqrt{L^2+G^2}$ for pinacoids) and aspect ratio of simulated particles.} \begin{tabular}{lccc} \hline \hline Particles & Size & $L/G$ & $G/E$\\ \hline spheres (large, small) & $d$, $d/3$ & $-$ & $-$ \\ isometric pinacoids (large, small) & $d$, $d/3$ & $1$ & $1$ \\ elongated pinacoids & $d$ & $2$ & $1$ \\ flat pinacoids & $d$ &$1$ & $3$ \\ flat \& elongated pinacoids & $d$ & $2$ & $3$ \\ \hline \hline \end{tabular} \end{center} \end{table} Binary mixes in which various proportions of small spheres have been substituted for large ones, or small isometric pinacoids for large isometric pinacoids, have been prepared to study the influence of particle size distribution and angularity on packing properties. Additionally, binary mixes comprising various proportions of flat, elongated, or flat \& elongated pinacoids substituted for large isometric ones have been prepared to investigate the role played by particle aspect ratio. Table~\ref{tab:granulo} summarizes the mean proportion by volume of small, elongated, flat or flat \& elongated particles in each binary mix simulated. 
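For concreteness, the pinacoid volume and size relations recalled above can be evaluated numerically. The short Python sketch below (an illustration, not part of the original simulations) implements $V=\frac{EG}{2}(L-\frac{E}{3\sqrt{3}})$ for $\alpha=60^\circ$ together with the size $d=\sqrt{G^2+L^2}$:

```python
import math

def pinacoid_volume(L, G, E):
    """Volume of a pinacoid with angle alpha = 60 degrees:
    V = (E*G/2) * (L - E/(3*sqrt(3))), valid for L >= G >= E."""
    assert L >= G >= E > 0
    return 0.5 * E * G * (L - E / (3 * math.sqrt(3)))

def pinacoid_size(L, G):
    """Size d = sqrt(G^2 + L^2), the diameter of the circumscribed sphere."""
    return math.hypot(L, G)

# Isometric pinacoid (L/G = G/E = 1) with unit parameters:
V_iso = pinacoid_volume(1.0, 1.0, 1.0)   # ~0.4038
d_iso = pinacoid_size(1.0, 1.0)          # sqrt(2) ~ 1.4142

# A "small" isometric pinacoid of size d/3 has all lengths divided by 3,
# hence a volume 27 times smaller:
V_small = pinacoid_volume(1/3, 1/3, 1/3)
```

Since all particles share the same mass density $\rho$, such volumes directly convert proportions by volume into proportions by mass.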
\begin{table}[!t] \caption{\label{tab:granulo}Mean proportion by volume of small $X_{d/3}$, elongated $X_{P}$, flat $X_{O}$, or flat \& elongated $X_{PO}$ particles substituted for large isometric ones in each binary mix simulated (e.g. $X_P=14\%$ refers to the bidisperse pinacoid packing comprising $14\%$ of elongated pinacoids and $86\%$ of large isometric ones).} \begin{tabular}{c|c|c|c|c} \hline \hline Spheres (\%)&\multicolumn{4}{c}{Pinacoids (\%)} \\ \hline $X_{d/3} $ & $X_{d/3}$ & $X_{P}$ & $X_{O}$ & $X_{PO}$ \\ \hline $0$ & $0$ & $14$ & $12$ & $14$ \\ $3$ & $13$ & $20$ & $28$ & $29$ \\ $13$ & $30$ & $36$ & $47$ & $50$ \\ $23$ & $51$ & $49$ & $77$ & $70$ \\ $30$ & $64$ & $70$ & $92$ & $100$ \\ $41$ & - & $100$ & $100$ & - \\ $55$ & - & - & - & - \\ \hline \hline \end{tabular} \end{table} For each binary mix, three cuboidal samples have been prepared following a pluviation protocol inspired by~\cite{laniel_07} and described in detail in~\cite{azema_12,camenen_12}: spherical shells, each circumscribed to a randomly oriented particle, are first randomly dropped inside a vertical parallelepiped container and subsequently moved to the closest local minimum of potential energy; second, the spherical shells are removed, bi-periodic boundary conditions are substituted for the container vertical walls, and the same gravity $\vec g$ ($0$,$0$,$-g$) is applied to all particles until they find an equilibrium position under their own weight (an example of such an equilibrated packing is shown on Fig.~\ref{fig:rendus1}). This protocol was selected for the ability of pluviation to achieve $RCP$ states of hard spheres with solid fraction repeatedly peaking at $0.64$~\cite{bernal_60,macrae_61,scott_69,zhang_01,silbert_02,emam_05}.
Similarly, nonspherical frictionless particles subjected to the selected pluviation protocol were expected to repeatedly achieve states of maximum density characterized by a unique solid fraction value, provided that they are homogeneously spread and achieve a stable equilibrium without crystallization or segregation~\cite{roux_04,agnolin_07}. \begin{figure}[!t] \centering \includegraphics[width=0.7\columnwidth]{fig_oblateprolate29.eps}\\ \caption{(Color online) 3D snapshot of a packing incorporating $29\%$ by volume of flat \& elongated pinacoids. Bi-periodic boundary conditions apply in $x$ and $y$ directions.}\label{fig:rendus1} \end{figure} All simulations were performed using the Non Smooth Contact Dynamic method (NSCD)~\cite{radjai_11,radjai_09,jean_99,moreau_94}. This discrete element method (DEM), implemented in the LMGC90 software platform~\cite{dubois_03}, was successfully applied to a number of physical problems ranging from dense inertial flows~\cite{lois_05,chevoir_01,azema_12} to quasistatic deformable packings~\cite{azema_07b,azema_09,estrada_08,camenen_12}. Basically, the equations of motion of a collection of rigid particles interacting through unilateral contacts with dry friction are first integrated over one time step. Hence, instead of accelerations and forces, the unknowns are particle velocities and percussions (also called $impulses$). The advantage of this integral formulation is that collision (shock) and lasting contact situations no longer need to be distinguished, both giving rise to a percussion. Percussions, which may be defined as the integral of a force over one time step, are parameterized using normal ($e_N$) and tangential ($e_T$) restitution coefficients as well as a sliding friction coefficient ($\mu$).
Second, at each time step, a geometrical contact detection algorithm identifies any potential contact situation\textemdash mainly vertex-face, edge-edge, edge-face or face-face contacts between convex polyhedra and a point between spheres\textemdash and describes it in terms of location and normal unit vector. Last, the equations of motion are solved at each time step by an iterative process using a non-linear Gauss-Seidel-like method. This resolution is performed with respect to $complementarity$ $relations$~\cite{moreau_88} between relative velocities and percussions substituted for classical non-interpenetration (Signorini) and contact friction (Coulomb) constraints. Simulation parameters are summarized in Table~\ref{tab:summary}. \begin{table}[!t] \caption{ \label{tab:summary}Summary of simulation parameters. $3000$ particles, horizontal dimension of square simulation cell $L$ normalized by particle size $d$ (periodic boundary conditions apply along $x$ and $y$ directions), friction $\mu$, restitution coefficients $e_{N,T}$ and time steps $\Delta T$ normalized by $\sqrt{d/g}$.} \begin{tabular}{lcccc} \hline \hline Particle & $L/d$ & $\mu$ & $e_{N,T}$ & $\Delta T/\sqrt{d/g}$ \\ \hline Spheres and & $3$ to $8$ & $0$ & $0$ & $5\times10^{-3}$ \\ isometric pinacoids & & & & \\ elongated pinacoids & $8$ to $11$ & $0$ & $0$ & $5\times10^{-3}$ \\ flat pinacoids & $6$ to $8$ & $0$ & $0$ & $5\times10^{-3}$ \\ flat \& elongated & $8$ & $0$ & $0$ & $5\times10^{-3}$ \\ pinacoids & & & & \\ \hline \hline \end{tabular} \end{table} \section{Results}\label{sec:results} We first check the quality of mechanical equilibrium achieved by our simulated packings, then we examine the variations of packing solid fraction as a function of particle size and shape. Finally, we investigate microstructural properties of our pinacoid packings to help understand solid fraction variations.
\subsection{Mechanical equilibrium and homogeneity}\label{sec:mech_equil} A packing of rigid particles achieves a mechanically stable equilibrium under the following conditions: $1$) the net force and net torque applied to each particle as well as its kinetic energy are negligible, and $2$) the potential energy of the collection of particles is minimal. For rigid spheres, all these conditions are met when the following criteria are fulfilled~\cite{camenen_12,agnolin_07}: \begin{eqnarray} \sum F < 10^{-4}d^2P, \label{equ_2} \\ \sum M < 10^{-4}d^3P, \label{equ_3} \\ E_c < 10^{-8}d^3P, \label{equ_4} \\ \sum\limits_{particles} \rho V g = F_S. \label{equ_5} \end{eqnarray} \noindent where $\sum F$, $\sum M$ and $E_c$ are respectively the net force, net torque and total kinetic energy of a particle, $P$ denotes the local average stress in its neighborhood and $F_S$ the net force exerted at contacts between particles and the bottom wall. Though our packings of frictionless spheres were found to meet these criteria, moderate deviations were observed for pinacoids. Indeed, the net force was found on the order of $10^{-3}d^2P$, whereas the net torque and kinetic energy were found on the order of $10^{-3}d^3P$ and $10^{-7}d^3P$ respectively. The fact that these results are slightly higher than previous ones~\cite{camenen_12} may be explained by shorter simulation durations (about $25$~s compared to about $35$~s in~\cite{camenen_12}), but this does not call into question the quality of mechanical equilibrium achieved (this was further checked upon continuing four simulations up to $35$~s; results are not shown here). Besides, all packings meet the requirement stated by equation~(\ref{equ_5}) since the ratio of the total weight of particles to the net force $F_S$ exerted on the bottom wall varies between $0.9957$ and $1.0048$. Next, we verify that no significant interpenetration occurs at interparticle contacts to ensure the relevance of solid fraction calculations.
For sphere packings, the maximum calculated interpenetration is lower than $10^{-3}d$. Similarly, we verify that the interpenetration routinely generated at polyhedra contacts when applying the NSCD method~\cite{saussine_04,camenen_12} remains small enough. For this purpose, we apply the virtual work principle to generate a vertical expansion of each packing that will reset the vertical component of all interpenetrations to zero: ~~\\ \begin{equation} k\sum\limits_{particles} \rho V g z = \sum\limits_{contacts} F_C^z \delta_C^z \label{virt_work}\end{equation} ~~\\ \noindent where $k$ is the coefficient of vertical expansion of the packing (taken proportional to distance $z$ between the center of gravity of each particle and the bottom wall), $F_C^z$ and $\delta_C^z$ are the vertical components of contact forces and interpenetrations respectively, and both sums are calculated two particle sizes above the layered bottom to get rid of wall effects~\cite{camenen_12,donev_04}. The mean $k$ value calculated over all pinacoid packings is $0.80\%$ and the standard deviation is $0.47\%$. Upon assuming a similar coefficient value along the $x$ and $y$ axes, interpenetrations at contacts are found to yield no more than $2$ to $3\%$ overestimation in solid fraction calculations, which is consistent with references~\cite{saussine_04,camenen_12}. Finally, we checked the homogeneity of the distribution of each population of particles once our packings have reached a stable mechanical equilibrium. Figure~\ref{fig:homo_distri} depicts the proportions of large $P_d$, small $P_{d/3}$ and flat \& elongated $P_{PO}$ particles as a function of distance $z$ from the bottom wall of their respective packings. The proportion profiles are almost constant, with only small deviations close to the bottom wall of bidisperse packings. Hence, we can conclude that each population of spheres or pinacoids is reasonably homogeneously spread in its respective packing.
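As an illustration of this correction, the expansion coefficient of equation~(\ref{virt_work}) and the resulting solid fraction overestimation can be sketched in a few lines of Python (the per-particle and per-contact arrays below are hypothetical stand-ins for the simulation data, and the $(1+k)^3-1$ isotropic-dilation estimate is our reading of the $2$ to $3\%$ figure quoted above, assuming the same coefficient along $x$, $y$ and $z$):

```python
def expansion_coefficient(weights, z_centers, fz_contacts, dz_contacts):
    """Vertical expansion coefficient k from the virtual work balance
    k * sum_p(w_p * z_p) = sum_c(F_c^z * delta_c^z), where w_p = rho*V*g
    is the particle weight (Eq. virt_work)."""
    work_gravity = sum(w * z for w, z in zip(weights, z_centers))
    work_contacts = sum(f * d for f, d in zip(fz_contacts, dz_contacts))
    return work_contacts / work_gravity

def solid_fraction_overestimation(k):
    """Relative overestimation of the solid fraction if the same expansion
    coefficient k applies along x, y and z (uniform dilation of the cell)."""
    return (1.0 + k) ** 3 - 1.0

# With the mean value k = 0.80% reported in the text:
print(solid_fraction_overestimation(0.008))  # ~0.024, i.e. ~2.4%
```

With $k$ between the mean $0.80\%$ and one standard deviation above, this simple estimate indeed falls in the $2$ to $3\%$ range quoted above.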
\begin{figure}[!t] \centering \includegraphics*[width=0.9\columnwidth]{fig_3.eps}\\ \caption{\label{fig:homo_distri}(Color online) Proportions of small (solid lines) and large (dashed lines) particles in $d$-thick layers along the $z$ axis for bidisperse packings incorporating $X_{d/3}=13\%$ by mass of spheres (${\color{blue}\bullet}$) or pinacoids (${\color[rgb]{0,.5,0}{\blacksquare}}$). Proportions of large isometric pinacoids (solid line) and flat \& elongated pinacoids (dashed line) in $d$-thick layers for packings incorporating $X_{PO}=29\%$ by mass of flat \& elongated (${\color{red}\blacktriangle}$) particles. Error bars denote the standard deviation.} \end{figure} \subsection{Solid fraction}\label{sec:solid_fraction} In this subsection, we investigate the effect of particle size distribution, angularity, and aspect ratio on the maximum solid fraction value reached by the packing once a stable mechanical equilibrium is achieved. For each packing, the solid fraction is computed upon evenly slicing the packing horizontally and analytically calculating the volume of each sphere or pinacoid present in each slice of known volume. Naturally, the mean solid fraction is calculated in the bulk, that is, away from the bottom wall and the free surface. \subsubsection{Effects of particle size distribution and angularity}\label{subsec:ang_poly} For various proportions of small particles $X_S$, Fig.~\ref{fig:phi_particles} depicts the maximum solid fraction values $\phi$ achieved by bidisperse packings of (a) spherical particles and (b) pinacoids. For spherical particles, excellent agreement can be observed between our results and those calculated from molecular dynamics~\cite{roux_07} or sphere mapping~\cite{farr_10} simulations.
The corresponding curves peak for $30\%$ of small spherical particles by volume, and their respective maximum values are $\phi = 0.726\pm 0.004$ for the present study, $\phi = 0.7207\pm 0.0004$ for ref.~\cite{roux_07} and $\phi = 0.7324$ for ref.~\cite{farr_10}. A good agreement is also observed with experimental results reported in ref.~\cite{sedran_94} from bidisperse packings of ($d$, $d/4$) rounded aggregates densified inside a rigid cylinder through vertical taps and upper surface loading, though these are finite size packings of non-strictly bidisperse spherical particles. By contrast, similar experiments with bidisperse packings of ($d$, $d/2$) rounded aggregates yielded significantly lower maximum solid fractions, which may be explained by increased crowding of the local arrangement of large particles by small ones when their size ratio comes closer to $1$~\cite{stovall_86}. For pinacoids, a similar layout of the solid fraction curve is observed (Fig.~\ref{fig:phi_particles}b), with monodisperse isometric pinacoid packings ($X_S=0\%$ and $X_S=100\%$) achieving the minimum $\phi = 0.676\pm0.004$ and a peak $\phi=0.769\pm0.001$ being reached when $X_S=30\%$. Interestingly, this peak is significantly higher than the one calculated for sphere packings, suggesting that angular particles pack denser than spherical ones. This conclusion is consistent with previous experiments reported in ref.~\cite{jaoshvili_10}, showing that monodisperse plastic tetrahedra pack with a maximum solid fraction of $\phi=0.76\pm0.02$ by extrapolation to infinite packing size, and with subsequent numerical work on Platonic solids reported in ref.~\cite{jiao_11}. Note however that other previous works support experimentally~\cite{baker_10} or numerically~\cite{smith_10} the opposite conclusion illustrated on Fig.~\ref{fig:phi_particles} by experimental results from ref.~\cite{sedran_94}, according to which rounded particles would pack denser than angular ones.
In fact, low experimental solid fraction values result from difficulties in overcoming interparticle friction while compacting a granular assembly~\cite{silbert_02,agnolin_07}, which do not affect our frictionless pinacoids. \begin{figure}[!t] \centering \begin{tabular}{c} \includegraphics*[width=0.9\columnwidth]{fig_4a.eps}\\ (a) \\ \includegraphics*[width=0.9\columnwidth]{fig_4b.eps}\\ (b) \\ \end{tabular} \caption{ \label{fig:phi_particles}(Color online) Maximum solid fraction $\phi$ as a function of the volume proportion $X_S$ of (a) small spheres or rounded particles and (b) small pinacoids or crushed particles in bidisperse packings. Error bars denote the standard deviation.} \end{figure} \subsubsection{Shape effect} Figure~\ref{fig:phi_L_P_PO} depicts the maximum solid fraction $\phi$ versus the proportion by volume of $(a)$ elongated, $(b)$ flat and $(c)$ flat \& elongated particles in mixtures with large isometric pinacoids. This figure shows that partial substitution of flat pinacoids for isometric ones results in a packing solid fraction decrease of at most $3\%$, and of at most $8\%$ when the flat particles are also elongated. In both cases, the minimum solid fraction is achieved when the proportion of flat or flat \& elongated particles reaches approximately $50\%$. Further increasing the proportion of flat particles raises the solid fraction value up to $\phi = 0.676\pm0.001$ for $100\%$ flat pinacoids and slightly above for $100\%$ flat \& elongated pinacoids. By comparison, random jammed assemblies of $100\%$ oblate ellipsoids with an aspect ratio of $3$ were reported in ref.~\cite{donev_04} to pack with a maximum solid fraction $\phi = 0.67$. By contrast, partial substitution of elongated pinacoids for isometric ones does not seem to significantly affect the packing solid fraction, which may be estimated at $\phi = 0.683\pm0.012$ for $100\%$ elongated pinacoids.
Note that this estimate compares well with the $\phi = 0.68$ solid fraction reported in ref.~\cite{kyrylyuk_11} for packings of $100\%$ spherocylinders with the same length to diameter aspect ratio of $2$. Upon applying the virtual work principle expressed in equation~(\ref{virt_work}), lower bounds accounting for particle maximum interpenetration may be calculated for the solid fraction of packings made of $100\%$ of the various particle shapes studied. Table~\ref{tab:sol_fract} gathers these lower bounds and shows that, regardless of their aspect ratios, pinacoids pack denser than spheres in the RCP state. \begin{figure}[!t] \centering \includegraphics*[width=0.9\columnwidth]{fig_5.eps}\\ \caption{\label{fig:phi_L_P_PO}(Color online) Maximum solid fraction $\phi$ $\emph{vs}$ the proportion by volume of (a) elongated $X_P$, (b) flat $X_O$, and (c) flat \& elongated $X_{PO}$ particles in mixtures with large isometric pinacoids. Error bars denote the standard deviation.} \end{figure} \begin{table}[!t] \caption{ \label{tab:sol_fract}Minimum solid fraction values obtained for packings made of $100\%$ isometric, $100\%$ elongated, $100\%$ flat or $100\%$ flat \& elongated pinacoids.} \begin{tabular}{l|c|c|c|c} \hline \hline Pinacoid shape & $100\%$ & $100\%$ & $100\%$ & $100\%$ \\ & isometric & elongated & flat & flat \& elongated \\ \hline $\phi_{min}$ & $0.664$ & $0.662$ & $0.648$ & $0.654$ \\ \hline \hline \end{tabular} \end{table} \subsection{Microstructure}\label{sec:packing_micro} To shed some light on maximum solid fraction variations as a function of particle size, angularity and shape, we now investigate the microstructural properties of our packings. We first examine the existence of particle arrangement and then we explore the contact network.
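The pair correlation function used to probe particle arrangement admits a compact implementation. The following minimal Python sketch (non-periodic and center-based, with the usual ideal-gas shell-volume normalization; an illustration only, not the code used for this study) conveys the idea:

```python
import math
import random

def pair_correlation(points, box, dr, r_max):
    """Minimal g(r) estimator for N points in a cubic box of side `box`.
    Pair distances are histogrammed and normalized by the ideal-gas
    expectation rho * 4*pi*r^2*dr per shell (no periodic images, so
    mild edge effects appear near r_max)."""
    n = len(points)
    rho = n / box**3
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(points[i], points[j])
            if r < r_max:
                hist[int(r / dr)] += 2  # each pair counted from both centers
    g = []
    for b, h in enumerate(hist):
        r_mid = (b + 0.5) * dr
        shell = 4 * math.pi * r_mid**2 * dr
        g.append(h / (n * rho * shell))
    return g

# For an uncorrelated (ideal-gas) set of points, g(r) fluctuates around 1:
random.seed(0)
pts = [(random.random() * 10, random.random() * 10, random.random() * 10)
       for _ in range(2000)]
g = pair_correlation(pts, box=10.0, dr=0.25, r_max=2.0)
```

A correlated packing, by contrast, shows peaks at preferred center-to-center distances, which is what the curves discussed below exhibit near contact.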
\subsubsection{Particle arrangement} The existence of translational arrangement is studied by means of the pair correlation function $g(r)$, which is calculated in the packing bulk (two particle sizes above the layered bottom to get rid of wall effects~\cite{camenen_12,donev_04}) using the expression detailed in ref.~\cite{allen_89} (page $55$). Figure~\ref{fig:gr} depicts the variations of $g(r)$ in (a) bidisperse pinacoid packings and in (b) mixtures of isometric pinacoids with elongated, flat or flat \& elongated particles. These curves clearly show that, for $r$ larger than $2R_{min}$ to $3R_{min}$, $g(r)$ no longer differs significantly from $1$, meaning that the local distribution of particle centers is free of long-range translational ordering. Like tetrahedron packings~\cite{jaoshvili_10,smith_10,neudecker_13}, pinacoid packings appear to be less correlated than sphere packings, because such particles lack rotational symmetry and the angles between their adjacent flat faces induce frustration. Note that the peak visible on several curves for $r \simeq (2/\sqrt6 + 1/2)R_{min} \simeq 1.3R_{min}$ corresponds to the face-face contact between small particles as depicted on fig.~\ref{fig:gr_face_face}. \begin{figure}[!t] \centering \includegraphics*[width=0.9\columnwidth]{fig_6a.eps}\\ (a) \\ \includegraphics*[width=0.9\columnwidth]{fig_6b.eps}\\ (b) \\ \caption{\label{fig:gr}(Color online) Pair correlation function $g(r)$ for several proportions by volume of $(a)$ small pinacoids $X_S$ and $(b)$ elongated $X_P$, flat $X_O$ or flat \& elongated particles $X_{PO}$. For each mixture, $R_{min}$ stands for the radius of the sphere circumscribed to the smallest pinacoid (e.g.
$R_{min}=d/6$ for mixtures of large and small isometric pinacoids).} \end{figure} \begin{figure}[!t] \centering \includegraphics*[width=0.5\columnwidth]{gr_face_face.eps}\\ \caption{\label{fig:gr_face_face}Cross-sectional view of the arrangement due to contact between trapezoidal and triangular faces with a distance between pinacoid centers of $r \simeq (2/\sqrt6 + 1/2)R_{min} \simeq 1.3R_{min}$.} \end{figure} Given their symmetry properties, pinacoids may potentially adopt the same orientation upon aligning one or more of their inertia axes, thus conferring orientational order to the packing. To detect such an orientational order, the nematic order parameter $Q_{00}^2$ and the biaxial parameter $Q_{22}^2$ are computed in the packing bulk (see refs.~\cite{camenen_12,camp_97} for computation details). $Q_{00}^2$ assesses the highest level of alignment of a given inertia axis across all particles, either $\roarrow u$, $\roarrow v$, or $\roarrow w$ (see fig.~\ref{pinacoid}), whereas $Q_{22}^2$ assesses overall alignment of particle inertia axes as a consequence of the pinacoid symmetry properties. Figure~\ref{fig:Qoo} depicts the variations of $Q_{00}^2$ for pinacoid packings incorporating various proportions by volume of small, elongated, flat or flat \& elongated particles. It is remarkable that substituting small isometric pinacoids for large ones decreases the packing nematic order parameter whatever the substituted proportion. Similarly, substituting elongated particles for large isometric ones does not significantly increase the packing nematic order parameter. By contrast, substituting more than $30\%$ of flat or flat \& elongated particles for large isometric ones increases the packing nematic order parameter beyond the isotropic-nematic transition~\cite{allen_93,camp_97}, since flat particles tend to align their $\roarrow w$ inertia axis with the $\roarrow z$ direction as shown on fig.~\ref{fig:rendus2}.
This alignment demonstrates the reorientation of flat and flat \& elongated pinacoids during the densification phase to form a layered structure which minimizes the potential energy of the equilibrated packing. This reorientation is facilitated by frustration release resulting from one particle dimension being significantly smaller than the other two. Besides, since the biaxial parameter never exceeds $0.05$ for any tested mixture, none of the tested mixtures has reached the nematic-biaxial transition~\cite{camp_97}; in other words, particle orientations remained random in the horizontal plane. \begin{figure}[!t] \centering \includegraphics*[width=0.9\columnwidth]{fig_8.eps} \caption{\label{fig:Qoo}(Color online) Nematic order parameter $(Q_{00}^2)$ of pinacoid packings incorporating various proportions by volume of small ($X_S$), elongated ($X_P$), flat ($X_O$) or flat \& elongated ($X_{PO}$) pinacoids. Error bars denote the standard deviation. Initial state refers to $Q_{00}^2$ values calculated during sample preparation just before applying gravity.} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.7\columnwidth]{fig_oblateprolate100.eps}\\ \caption{(Color online) 3D snapshot of a packing incorporating $100\%$ by volume of flat \& elongated pinacoids. Bi-periodic boundary conditions apply in $x$ and $y$ directions.}\label{fig:rendus2} \end{figure} \subsubsection{Contact network in the samples} In this subsection, we first check whether the contact network of frictionless isometric pinacoid packings fulfills the so-called isostatic conjecture. Next, we investigate whether gradually substituting small, elongated, flat, or flat \& elongated pinacoids for isometric ones modifies the typology of the contact network. The isostatic conjecture states that amorphous rigid packings of hard frictionless particles are isostatic~\cite{alexander_98}.
In this statement, \emph{rigid packing} refers to a particle assembly that cannot be deformed under arbitrarily low stress without breaking any interparticle contact or deforming any particle; in other words, a packing in a mechanically stable equilibrium state. Furthermore, \emph{isostatic} means that the total number of interparticle constraints equals the sum of particle degrees of freedom (DOF), that is, the coordination number equals twice the particle DOF. Though disputed in recent years~\cite{smith_10}, authors have established the validity of this conjecture for random close packings of spheres~\cite{silbert_02,agnolin_07} and nontiling Platonic solids such as tetrahedra, octahedra, icosahedra and dodecahedra~\cite{jaoshvili_10,jiao_11}, whereas packings of rounded particle shapes such as moderately oblate or prolate ellipsoids were found hypostatic~\cite{donev_04}. The coordination number calculated here for monodisperse isometric pinacoid packings is $8.4\pm0.2$. Note that this value lies between those observed for similar particle shapes, such as maximally randomly jammed monodisperse tetrahedra for which $N\approx8.5$ and $N=8.6\pm0.1$ are respectively reported in refs.~\cite{neudecker_13,smith_10}, or monodisperse octahedra for which $N\approx7.7$ and $N=7.8\pm0.1$ are respectively reported in refs.~\cite{smith_10,jiao_11}. However, this value is well below the isostatic number $N=12$ since it amalgamates diverse contact types corresponding to different numbers of constraints. Indeed, upon assigning $1$ constraint to each vertex-face or edge-edge contact (simple contact), $2$ constraints to each edge-face contact (double contact) and $3$ constraints to each face-face contact (triple contact)~\cite{jaoshvili_10,jiao_11,camenen_12}, the number of constraints reads $N_c = N_s + 2N_d + 3N_t$, where $N_s$, $N_d$ and $N_t$ are, respectively, the numbers of simple, double and triple contacts.
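This constraint bookkeeping is elementary and can be made explicit; the Python sketch below (for illustration only) applies the counting rule $N_c = N_s + 2N_d + 3N_t$ to the per-particle contact-type numbers measured for our monodisperse isometric pinacoid packings:

```python
def constraints_per_particle(n_simple, n_double, n_triple):
    """Number of constraints per particle: 1 per vertex-face or edge-edge
    contact (simple), 2 per edge-face contact (double) and 3 per
    face-face contact (triple)."""
    return n_simple + 2 * n_double + 3 * n_triple

# Per-particle contact-type numbers measured for monodisperse isometric
# pinacoid packings (values from the text):
N_c = constraints_per_particle(5.57, 2.12, 0.72)
# N_c ~ 12, the isostatic count: twice the 6 degrees of freedom
# (3 translations + 3 rotations) of a rigid body in 3D.
```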
With $N_s=5.57\pm0.06$, $N_d=2.12\pm0.04$ and $N_t=0.72\pm0.07$, we obtain $N_c=11.99\pm0.35$. Despite the finite size of our packing periodic cell, this value compares well with the isostatic number $N=12$ valid for infinite size packings. As a consequence, our monodisperse isometric pinacoid packings are reasonably isostatic as conjectured in reference~\cite{alexander_98}. Yet, it should be observed that the number of face-face contacts per particle ($N_t=0.72\pm0.07$) is lower than those tabulated in ref.~\cite{jiao_11} for tetrahedra ($2.21\pm0.01$) and even for octahedra ($1.44\pm0.01$). As observed in ref.~\cite{neudecker_13}, we argue that the probability of perfect face-face alignment is low compared to that of either slightly shifted face-face or low angle edge-face contact. This is consistent with the finite slope and moderate first-peak value of our pair correlation curves (see Fig.~\ref{fig:gr}) and with our significantly higher number of edge-face contacts per particle ($N_d=2.12\pm0.04$) compared to those reported for tetrahedra ($0.98\pm0.01$) and octahedra ($1.38\pm0.01$) in reference~\cite{jiao_11}. \begin{figure*}[!t] \centering \begin{tabular}{c c} \includegraphics*[width=0.85\columnwidth]{fig_10a.eps}&\includegraphics*[width=0.85\columnwidth]{fig_10b.eps}\\ (a) & (b)\\ \includegraphics*[width=0.85\columnwidth]{fig_10c.eps}&\includegraphics*[width=0.85\columnwidth]{fig_10d.eps}\\ (c) & (d) \end{tabular} \caption{\label{fig:coor}(Color online) Coordination number $N$ of $(a)$ all, $(b)$ simple, $(c)$ double, and $(d)$ triple contacts as a function of the proportion by volume of small ($X_S$), elongated ($X_P$), flat ($X_O$) or flat \& elongated ($X_{PO}$) pinacoids in the packing.
Error bars denote the standard deviation.} \end{figure*} To get insight into how the contact network fluctuates with increasing proportions of small or non-isometric particles, it is worth examining Fig.~\ref{fig:coor}, which depicts the variations of the coordination numbers of all, simple, double and triple contacts as a function of the proportion by volume of small, elongated, flat, or flat \& elongated pinacoids. When gradually substituting small isometric pinacoids for large ones, the coordination number first reaches a minimum $N=7.4\pm0.2$ for $X_S=13\%$ and then gradually increases back to its monodisperse value, as shown on Fig.~\ref{fig:coor}a. Interestingly, similar coordination number fluctuations may be observed with simple contacts (Fig.~\ref{fig:coor}b), whereas the coordination numbers of double and triple contacts vary conversely (Fig.~\ref{fig:coor}c and~\ref{fig:coor}d). In fact, small pinacoids are trapped inside the excluded volume of large ones. Until their proportion is sufficient for completely filling this volume, which corresponds to the achievement of the packing maximum solid fraction ($X_S=30\%$, see Fig.~\ref{fig:phi_particles}b), their low steric hindrance allows them to rotate and establish more stable face-face or edge-face contacts with large ones. By contrast, gradually substituting elongated, flat, or flat \& elongated pinacoids for isometric ones tends to increase the coordination number $N$ of our pinacoid packings up to, respectively, $8.9\pm0.1$, $9.1\pm0.2$, and $9.2\pm0.2$. Actually, these non-isometric particles are too large to fit in the excluded volume of large isometric pinacoids while leaving their arrangement undisturbed. Besides, contrary to elongated or isometric pinacoids, the adjacent trapezoidal flat faces of flat and flat \& elongated pinacoids do not intersect at right angles and these particles have a smaller thickness than the former.
Stated otherwise, flat and flat \& elongated pinacoids have a broken angular symmetry and reduced external surface (at least reduced triangular flat faces for flat \& elongated pinacoids), causing the coordination number of face-face contacts to decrease to the benefit of simple contacts with increasing proportions of flat or flat \& elongated pinacoids in the packing (see Fig.~\ref{fig:coor}b and d). Furthermore, for increasing proportions of flat or flat \& elongated pinacoids, observe that the coordination number of simple contacts increases faster than the coordination number of face-face contacts decreases, which is a consequence of packing isostaticity: whatever the proportion of small, elongated, flat, or flat \& elongated particles, all our packings remain isostatic with $12.02\pm0.35$ constraints per particle. Finally, Fig.~\ref{fig:N_Qoo} depicts the variations of the coordination number as a function of the nematic order parameter. This figure suggests that the coordination number increases continuously with orientational ordering in the packings, irrespective of particle size or aspect ratio. An exponential function of the following form was fitted to the point cloud: \begin{equation} N(Q^2_{00}) = N(0)+[N(1)-N(0)].[1-\exp(-\frac{Q^2_{00}}{Q^2_{00,c}})] \end{equation} \label{fit} \noindent where [$N(0),N(1),Q^2_{00,c}$] were calculated respectively as ($7.4$, $9.2$, $0.2$) with $R^2=0.92$. This fit suggests that orientationally ordered pinacoid packings have coordination numbers in excess of about $9$ and, conversely, that packings may be considered orientationally disordered for coordination numbers below roughly $8.5$, corresponding to $Q^2_{00}\leq 0.2$.
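For reference, the fitted relation can be evaluated directly; the Python sketch below simply uses the fitted constants ($7.4$, $9.2$, $0.2$) quoted above:

```python
import math

def coordination_number(q00_sq, n0=7.4, n1=9.2, q_c=0.2):
    """Exponential fit N(Q00^2) = N(0) + [N(1)-N(0)] * (1 - exp(-Q00^2/Q_c))
    with the fitted constants (7.4, 9.2, 0.2), R^2 = 0.92 (from the text)."""
    return n0 + (n1 - n0) * (1.0 - math.exp(-q00_sq / q_c))

# Orientationally disordered packings (Q00^2 <= 0.2) stay below N ~ 8.5:
print(coordination_number(0.0))   # 7.4
print(coordination_number(0.2))   # ~8.54
```

The saturation value $N(1)=9.2$ matches the coordination number reached by packings of $100\%$ flat \& elongated pinacoids, the most orientationally ordered systems studied here.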
\begin{figure}[!t] \centering \includegraphics*[width=0.9\columnwidth]{N_Q00.eps} \caption{\label{fig:N_Qoo}(Color online) Coordination number $N$ $vs$ the nematic order parameter $(Q_{00}^2)$ of pinacoid packings incorporating various proportions by volume of small ($X_S$), elongated ($X_P$), flat ($X_O$) or flat \& elongated ($X_{PO}$) pinacoids. Fit equation is $N(Q^2_{00}) = 7.4 + (9.2 - 7.4).[1 - \exp(-Q^2_{00}/0.2)]$ with $R^2=0.92$. Error bars denote the standard deviation.} \end{figure} \section{Conclusion}\label{sec:conclusion} The properties of dense packings of spheres or pinacoids compacted under their own weight have been investigated using three-dimensional Non Smooth Contact Dynamic simulations. Various proportions by volume of small, elongated, flat, or flat \& elongated pinacoids were substituted for large isometric ones in order to understand how polydispersity and shape affect their solid fraction and microstructural properties. Numerical simulations show that disordered assemblies of frictionless pinacoids, whether monodisperse or bidisperse, pack with a higher solid fraction than corresponding assemblies of spherical or rounded particles, thus fulfilling the analogue of Ulam's conjecture for random packings proposed in ref.~\cite{jiao_11}. This seeming discrepancy with experimental results reported in refs.~\cite{delarrard_99,baker_10} is believed to stem from difficulties in overcoming interparticle friction through experimental densification processes. Moreover, the solid fraction increases further with bidisperse particles and peaks when the proportion of small ones reaches $30\%$, achieving $\phi=0.726\pm0.004$ and $\phi=0.769\pm0.001$ respectively for spheres and pinacoids. By contrast, partial substitution of flat pinacoids for isometric ones results in a packing solid fraction decrease of at most $8\%$, especially when flat particles are also elongated.
The minimum solid fraction is achieved when the proportion by volume of flat or flat \& elongated pinacoids reaches $50\%$. Nevertheless, particle shape seems to play a minor role in packing solid fraction compared to polydispersity. Additional investigations focused on packing microstructure confirm that, with $12.02\pm0.35$ constraints per particle, pinacoid packings fulfill the isostatic conjecture and that they are free of order except beyond $30$ to $50\%$ by volume of flat or flat \& elongated polyhedra in the packing. This increase in order progressively takes the form of a nematic phase as flat or flat \& elongated particles reorientate so that their largest projected area is horizontal, to minimize the packing potential energy. Simultaneously, this reorientation seems to increase the solid fraction slightly above the maximum achieved by monodisperse isometric pinacoids, as well as the coordination number. Finally, partial substitution of elongated pinacoids for isometric ones has limited effect on packing solid fraction or order. \begin{acknowledgments} The authors thank the team running the Centre de Calculs Intensifs des Pays de la Loire (CCIPL) for providing the computing resources through the MTEEGD project. They are also grateful to their colleagues J.N. Roux, N. Roquet and P. Richard for helpful conversations. Many thanks to the LMGC90 team in Montpellier for making their software platform freely available. \vspace{4cm} \end{acknowledgments}
\section{Introduction} \label{sec:intro} Learning a nonlinear function from a finite number of input-output observations is a fundamental problem of supervised machine learning, with wide applications in science and engineering. From a statistical vantage point, this problem entails a regression procedure which, depending on the nature of the underlying function, may be linear or nonlinear. In the past few decades, there has been a flurry of advances in the area of nonlinear regression \cite{hastie2009overview}. Deep learning is perhaps one of the most well-known approaches, with promising and remarkable performance in a great many applications. Deep learning has a number of distinctive advantages. First, it relies on a parametric description of functions that are easily computable: once the parameters (weights) of a deep network are set, the output can be rapidly computed in a feed-forward fashion by a few iterations of affine and elementwise nonlinear operations. Second, it can avoid over-parametrization by adjusting the architecture (number of parameters) of the network, hence providing control over the generalization power of deep learning. Finally, deep networks have been observed to be highly flexible in expressing complex and highly nonlinear functions \cite{haykin2001neural,hassoun1995fundamentals}. There are, however, a number of challenges associated with deep learning, chief among them the exact assessment of their expressive power, which remains an open problem to this day. An important exception is the single-layer network, for which the so-called universal approximation property (UAP), a clearly desirable property, has been established for some time \cite{gybenko1989approximation, hornik1991approximation}.
Another practical difficulty with deep learning is that the output becomes disproportionately sensitive to the parameters of different layers, making it, from an optimization perspective, extremely difficult to train \cite{lecun2015deep}. A recent solution is the so-called residual network (ResNet), which introduces bridging branches into the conventional deep learning architecture \cite{he2016deep}. In this paper, we address the above issues by proposing a different perspective on learning with a substantially different architecture, which totally forgoes any feedback. Specifically, we propose an iterative forward projection in lieu of back propagation to update parameters. Since this may rapidly yield an over-parametrized system, we restrict each layer to perform an ``incremental'' update on the data, approximately captured by the realization of a differential equation, which we refer to as geometric regularization, as discussed in Section \ref{sec:GR}. The formulation of this geometric regularization allows us to tie the analysis of deep networks to differential geometry. The study in \cite{hauser2017principles} notices this relation, but adopts a different approach. In particular, we conjecture a converse of the celebrated Frobenius integrability theorem, which potentially proves a universal approximation property for a family of modified deep ResNets. We also present preliminary results in Section \ref{sec:results}, and show that forgoing back propagation in a neural network does not greatly limit the expressive power of deep networks, and in fact potentially decreases their training effort dramatically.
\section{MMSE Estimation by Geometric Regularization} For the sake of generality, we consider a $C^1$ Banach manifold\footnote{A Banach manifold is an infinite-dimensional generalization of a conventional differentiable manifold \cite{lang1972differential}.} $\calF$ of functions $f:\bbR^n\to\bbR^m$, where $n,m$ are the dimensions of the data and label vectors, respectively, and each element $f\in\calF$ represents a candidate model relating the data and the labels. The arbitrary choice of $\calF$ allows one to impose structural properties on the models. Due to space limitations and for clarity's sake, we focus on the simpler case of $\calF=\calL^2$, i.e. the space of square integrable functions, and defer further generalizations to a later publication. Moreover, consider a probability space $(\Omega,\Sigma,\mu)$, and two random vectors $\bx:\Omega\to\bbR^n$ and $\by:\Omega\to\bbR^m$ representing statistical information about the data. In practice, samples $(\bx_t,\by_t)$, $t=1,2,\ldots,T$, of $\bx,\by$ are often provided, in which case their empirical distribution is used. We consider the supervised learning problem of minimizing the following mean square error (MSE), \begin{equation}\label{eq:objective} L(f)=\bbE\left[\|f(\bx)-\by\|_2^2\right], \end{equation} where $\bbE[\ldotp]$ denotes expectation. For observed samples $(\bx_t,\by_t)$, this criterion simplifies to \begin{eqnarray}\label{eq:min_sample} \minl_{f\in\calF}\frac 1 T\suml_{t=1}^T\|f(\bx_t)-\by_t\|_2^2. \end{eqnarray} In practice, the problem in \eqref{eq:min_sample} is highly underdetermined, and minimization of the MSE (MMSE) leads to undesired solutions. To cope with this, additional constraints are imposed to tame the problem by way of regularization. For example, the set $\calF$ can be restricted to a (finite-dimensional) smooth submanifold. This is implicit in parametric approaches, such as deep neural networks.
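As a minimal numerical illustration of the empirical criterion above (not from the paper; the linear model and all dimensions are arbitrary choices), the sample MSE can be evaluated for any candidate $f$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 5, 3, 100                      # data dim, label dim, sample count

X = rng.standard_normal((T, n))          # rows are samples x_t
A_true = rng.standard_normal((m, n))
Y = X @ A_true.T                         # noiseless labels y_t = A_true x_t

def empirical_mse(f):
    """(1/T) * sum_t ||f(x_t) - y_t||_2^2"""
    r = np.array([f(x) for x in X]) - Y
    return float(np.mean(np.sum(r**2, axis=1)))

print(empirical_mse(lambda x: A_true @ x))          # ~0 (true model)
print(empirical_mse(lambda x: (A_true + 0.1) @ x))  # strictly positive
```

Without further constraints, many functions attain zero empirical error, which is precisely the under-determination the regularization below is meant to address.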
\subsection{Geometric Regularization}\label{sec:GR} We introduce a more general type of regularization, which includes parametric restriction as a special case. Our generalization is inspired by the observation that standard (smooth) optimization techniques, such as gradient descent, are based on a realization of a differential equation of the following form, \begin{equation}\label{eq:integral} \frac{\td f_\tau}{\td \tau}=\phi(f_\tau), \end{equation} where $\phi(f)\in T_f$ is a tangent vector of $\calF$ at $f$. The resulting solution is typically in an iterative form, as follows, \begin{equation}\label{eq:step_simple} f_{t+1}=f_t+\mu_t\phi(f_t), \end{equation} where $\mu_t$ is the step size at iteration $t=0,1,2,\ldots$. The tangent vector $\phi(f_\tau)$ is often selected as a descent direction, for which, according to \eqref{eq:objective}, $\td L/\td \tau<0$. For geometric regularization, we restrict the choice of the tangent vector to a closed cone $C_f\subseteq T_f$ in the tangent space. In the case of function estimation, where $\calF$, and hence the tangent space $T_f$, is infinite-dimensional, we adopt a parametric definition of $C_f$ by restricting the tangent vector to a finite-dimensional space. However, this does not necessarily restrict the function to a finite-dimensional submanifold. A particularly important case, where geometric regularization simplifies to a parametric (finite-dimensional) manifold restriction, is given by the Frobenius integrability theorem \cite{camacho2013geometric,khalil1996noninear}: \begin{theorem}\label{theorem:fro}{\bf (Frobenius theorem)} Suppose that $C_f$ is an $n$-dimensional linear subspace of $T_f$. For any choice of $\phi(f)\in C_f$, the solution of \eqref{eq:integral} remains on an $n$-dimensional submanifold of $\calF$ (depending only on the initial point $f_0$) if and only if $C_f$ is involutive, i.e.
for any two vector fields $\phi(f),\psi(f)$ in $C_f$ we have that \[ \left[\phi(f),\psi(f)\right]\in C_f, \] where $[\ldotp,\ldotp]$ denotes the Lie bracket \cite{khalil1996noninear}. \end{theorem} A simple example of an involutive regularization is \[ C_f=\left\{W_0f+\suml_{k=1}^rW_kf^k+b\mid W_k\in\bbR^{m\times m}, b\in\bbR^{ m}\right\} \] where the $f^k$ are fixed functions. It is clear that, starting from an initial $f_0$, the solution $f_t$ remains in $C_{f_0}$; hence, this case corresponds to linear regression. Selecting a nonlinear function $g:\bbR\to\bbR$, we can write a more general form of the geometric regularization discussed here, as follows, \begin{eqnarray}\label{eq:GR_general} &C_f=\nwl &\left\{\Gamma(f)\left[W_0f+\suml_{k=1}^rW_kf^k+b\right]\mid W_k\in\bbR^{d\times d_k}, b\in\bbR^d\right\}, \end{eqnarray} where the $f^k$ are arbitrary fixed functions and $\Gamma(a)$, for $a=(a_1,a_2,\ldots,a_d)\in\bbR^d$, is a diagonal matrix with diagonal entries $\Gamma_{ii}=\td g/\td x(a_i)$. \section{Algorithmic Solution} The solution of the differential equation in \eqref{eq:integral} with the geometric regularization in \eqref{eq:GR_general} requires a specification of the tangent vectors $\phi(f)\in C_f$. To preserve good control of the computations, and much as in the DNN architecture, we define $f:\bbR^n\to\bbR^d$, where the reduced dimension $d<n$ is a design parameter. The desired function is then calculated as $Df_\tau+c$, where $D\in\bbR^{m\times d}$ and $c\in\bbR^m$ are fixed. Letting $f_0(x)=Ux$, where $U\in\bbR^{d\times n}$ is a constant dimensionality reduction matrix, we rewrite the MSE objective in \eqref{eq:objective} as \[ L(f)=\bbE\left[\left\|\by-Df(\bx)-c\right\|_2^2\right]. \] We subsequently apply the steepest descent principle, yielding the following optimization: \begin{eqnarray}\label{eq:steep} \phi_f=\arg\minl_{\phi\in C_f\mid \|\phi\|_2\leq 1}\frac{\td L}{\td\tau}.
\end{eqnarray} We next observe that under mild assumptions, \[ \frac{\td L}{\td\tau}=-\bbE\left[\left\langle \bz~,~D\Gamma(f)\left[W_0f+\suml_{k=1}^rW_kf^k+b\right]\right\rangle\right], \] where $\bz=\by-Df(\bx)-c$ and $W_0,W_k,b$ are to be decided based on the optimization in \eqref{eq:steep}. After some manipulations, this leads to \[ \phi_f=\Gamma(f)\left[W_{0,f}f+\suml_{k=1}^rW_{k,f}f^k+b_f\right], \] where \begin{eqnarray}\label{eq:params} &W_{0,f}=\bbE\left[\Gamma(f(\bx))D^T\bz f^T(\bx)\right],\nwl &W_{k,f}=\bbE\left[\Gamma(f(\bx))D^T\bz \left(f^k\right)^T(\bx)\right],\ k=1,2,\ldots,\nwl &b_f=\bbE\left[\Gamma(f(\bx))D^T\bz\right] \end{eqnarray} are the specialized values of $W_0,W_k,b$, respectively. \subsection{Initialization} An efficient execution of the above procedure requires us to judiciously select the parameters $U,D,c$. We select $U$ as the collection of basis vectors of the first $d$ principal components of $\bx$, i.e. $U=P_1^T$ where $\bbE[\bx\bx^T]=P\Sigma P^T$ is the eigen-representation (SVD) of the correlation matrix, $P=[p_1\ p_2\ \ldots p_n]$ and $P_1=[p_1\ p_2\ \ldots p_d]$. The matrices $D,c$ are selected by minimizing the MSE objective with $f=f_0$. This yields \[D=\bbE\left[\by f^T_0(\bx)\right]\bbE\left[f_0(\bx) f^T_0(\bx)\right]^{-1},\] \[c=\bbE\left[\by\right]-D\bbE\left[f_0(\bx)\right].\] This also allows us to update these matrices in the course of the optimization, \[D\gets\bbE\left[\by f^T_t(\bx)\right]\bbE\left[f_t(\bx) f^T_t(\bx)\right]^{-1},\] \[c\gets\bbE\left[\by\right]-D\bbE\left[f_t(\bx)\right].\] \subsection{Momentum Method} Momentum methods are popular in machine learning and lead to considerable improvements in both performance and convergence speed \cite{sutskever2013importance, kingma2014adam}.
Since momentum methods as originally formulated do not conform to our geometric regularization framework, we propose an alternative. The usual approach is to mix the learning direction $\phi_f$ at each iteration with its preceding iterates, to better control rapid changes over iterations (low-pass filtering). Here, to keep the geometric regularization structure, we instead mix the parameters $W_k,b$. This leads to the following modification of the original algorithm in \eqref{eq:step_simple}: \begin{eqnarray}\label{eq:momentum} &f_{t+1}=f_t+\mu_t\Gamma(f)\left[V_{0,t}f_t+\suml_{k=1}^rV_{k,t}f^k+e_t\right], \nwl &V_{k,t+1}=\alpha_k V_{k,t}+ W_{k,f_t},\ k=0,1,\ldots,r, \nwl &e_{t+1}=\beta e_{t}+ b_{f_t}, \end{eqnarray} where $W_{k,f_t}, b_{f_t}$ are given in \eqref{eq:params}. \subsection{Learning Parameter Selection} The remaining parameters $\alpha_k,\beta$ are tuned manually, while two strategies are used for selecting $\mu_t$: a) fixing $\mu_t=\mu$, and b) line search. For the second method, simple computations yield \[ \mu_t=\frac{\bbE[\bz_t^TD\psi_t]}{\bbE[\|D\psi_t\|_2^2]}, \] where $\bz_t=\by-Df_t(\bx)-c$, and \[ \psi_t=\Gamma(f_t)\left[V_{0,t}f_t+\suml_{k=1}^rV_{k,t}f^k+e_t\right]. \] \subsection{Incorporating Shift Invariance} In the context of deep learning, especially for image processing, convolutional networks are popular. They differ from regular deep networks in inducing shift invariance in the linear operations of some layers, by way of convolution (Toeplitz matrices). We may adopt the same strategy in geometric regularization by further assuming that $W_0$ in \eqref{eq:GR_general} represents a convolution. We skip the derivations, not only for space limitations but also because of their similarity to those leading to \eqref{eq:params}.
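One iteration of the momentum-mixed recursion \eqref{eq:momentum} can be sketched as follows. This is only an illustrative toy (with $g=\tanh$, $r=0$, and randomly generated stand-ins for the steepest-descent parameters $W_{0,f_t}, b_{f_t}$, which would in practice come from the expectations in \eqref{eq:params}):

```python
import numpy as np

def gamma(a):
    """Diagonal of Gamma(a) for g = tanh: elementwise derivative g'(a)."""
    return 1.0 - np.tanh(a) ** 2

def momentum_step(f, V0, e, W0_f, b_f, mu=0.05, alpha=0.98, beta=0.98):
    """One update of the momentum recursion with r = 0 extra fixed functions.

    f : current values f_t at the sample points, shape (T, d)
    V0, e : momentum state for the weight matrix and the bias
    """
    V0 = alpha * V0 + W0_f
    e = beta * e + b_f
    f = f + mu * gamma(f) * (f @ V0.T + e)
    return f, V0, e

T, d = 8, 3                      # toy dimensions, purely illustrative
rng = np.random.default_rng(0)
f = rng.standard_normal((T, d))
V0, e = np.zeros((d, d)), np.zeros(d)
for _ in range(10):              # stand-ins for W_{0,f_t}, b_{f_t}
    f, V0, e = momentum_step(f, V0, e,
                             0.1 * rng.standard_normal((d, d)),
                             0.1 * rng.standard_normal(d))
print(f.shape)  # (8, 3)
```

Note that each step stays inside the cone \eqref{eq:GR_general}: the increment is always of the form $\Gamma(f)[V_0 f + e]$.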
The resulting algorithm with the momentum method is similar to \eqref{eq:momentum}, with $W_{0,f}$ replaced by $W_{\mathrm{conv},f}$, defined as \[ W_{\mathrm{conv},f}=\arg\maxl_W\langle W, W_{0,f}\rangle, \] where the optimization is over unit-norm convolution (Toeplitz) matrices. It turns out that since Toeplitz matrices form a vector space, $W_{\mathrm{conv},f}$ is a linear function of $W_{0,f}$ and can be quickly calculated \cite{bottcher2012toeplitz}. Due to space limitations, we defer the details to \cite{future_paper}. \section{Theoretical Discussion} \subsection{Relation to Deep Residual Networks} The proposed geometric regularization for nonlinear regression in \eqref{eq:GR_general} is inspired by advances in the field of neural networks and deep learning. Recall that a generic deep artificial neural network (DNN) represents a sequence of functions (hidden layers) $f_0(x), f_1(x),\ldots, f_T(x)$ where $f_0(x)=x$ and $f_t$ for $t=0,1,2,\ldots$ is $d_t$-dimensional, $d_t$ being the network width in the $t^\tth$ layer. The relation of these functions is depicted in Figure \ref{fig:ResNet} (a). The so-called residual network (ResNet) architecture modifies DNNs by introducing bridging branches (edges), as shown in Figure \ref{fig:ResNet} (b). \begin{figure}[t] \centering \includegraphics[width=7cm, height=4cm]{Architecture2} \caption{Schematic of a single layer in (a) ANN, (b) ResNet, and (c) modified ResNet architectures.} \label{fig:ResNet} \end{figure} We observe that the geometric regularization in \eqref{eq:GR_general} corresponds to a modified version of ResNets, depicted in Figure \ref{fig:ResNet} (c), which can be written as \begin{equation}\label{eq:mod_resnet} f_{t+1}=g(W_tf_t+b_t)-g(f_t)+f_{t}. \end{equation} More concretely, when $W_t, b_t$ are respectively near identity and near zero, i.e.
$W_t=I+\epsilon \barW_t$ and $b_t=\epsilon\barb_t$ for small values of $\epsilon$, we observe by a Taylor expansion of \eqref{eq:mod_resnet} with respect to $\epsilon$ that the differential equation in \eqref{eq:integral} with the geometric regularization in \eqref{eq:GR_general} provides the limit of the above modified ResNet architecture. This profound relation provides a novel approach for analyzing deep networks, which is deferred to \cite{future_paper}. \section{Numerical Results} \label{sec:results} As a preliminary validation, we examine geometric regularization on the MNIST handwritten digits database, which includes $50,000$ $28\times 28$ black and white images of handwritten digits for training and $10,000$ more for testing \cite{lecun1998gradient,ciregan2012multi}. We note that state-of-the-art techniques already achieve an accuracy as high as $99.7\%$, so our validation study is merely a proof of concept. We use a single fixed function $f^1(x)=x$. In all experiments, we set $\alpha_0=\beta=0.98$ and let $\alpha_1$ vary. \begin{figure} \centering \begin{tabular}{|c|c|} \hline Method & Performance \\ \hline plain iterations & 97.4\%\\ convolutional iterations & 98.0\%\\ 2-stage learning & 98.7\%\\ \hline \end{tabular} \caption{Performance of different learning strategies} \label{table:1} \end{figure} We have performed extensive numerical studies with different strategies (fixed or variable $D,c$, different step size selection methods, and convolutional/plain layers), but can only focus on some key results due to space limitations. A more comprehensive comparison between these strategies is also insightful \cite{future_paper}. A summary of the best achieved performances is listed in Figure \ref{table:1}, where plain (non-convolutional) iterations are applied with fixed step size $\mu=0.06$, $\alpha_1=0.99$, $d=400$ and fixed $D,c$.
The convolutional iterations include 2-D convolutional (Toeplitz) matrices with window length 5, fixed step size $\mu=1$, $\alpha_1=0.95$, and $D,c$ updated at each iteration. We also consider a 2-stage procedure, where convolutional matrices are used in the first $50$ iterations and plain iterations are applied subsequently. In both stages, the step size is fixed to $\mu=3$ and $\alpha=0.95$, while the matrices $D,c$ are updated at each iteration. \begin{figure}[t] \centering \includegraphics[width=8cm,height=4.8cm]{conv_zoomed} \caption{Performance of different step size selection strategies.} \label{fig:perform} \end{figure} Figure \ref{fig:perform} compares different step size selection strategies with convolutional iterations by their associated performance, i.e. the fraction of correctly classified images at each iteration. The best asymptotic performance is obtained by fixing $\mu=1$. Faster convergence may be obtained with larger step sizes, at the expense of decreased asymptotic performance. For example, for $\mu=6$ the algorithm reaches $96\%$ accuracy in only $10$ iterations and $97\%$ by iteration $30$; however, progress becomes substantially slower afterwards, which suggests a multi-stage procedure to boost performance. Using an adaptive step size with line search yields slightly degraded asymptotic performance (though higher than that of $\mu=6$), but dramatically slows convergence. \section{Conclusion} We proposed a supervised learning technique which shares many properties with deep learning, such as the successive application of linear and nonlinear operators, a momentum-based implementation, and convolutional layers. In contrast to deep learning, our method abandons back propagation, thereby reducing the computational burden. Our method is semi-parametric: it exploits a large number of weight parameters, yet avoids over-parametrization.
Another advantage of our technique is that it can be analyzed theoretically with tools from differential geometry, as briefly discussed earlier. A comprehensive development is given in \cite{future_paper}. The performance achieved thus far on the data sets suggests a great, as yet unexplored potential. \bibliographystyle{IEEEtran}
Q: Language translation in iPhone

I am trying to build an app which consists of a label that displays text in English by default. The user selects his/her language from a list and, after selecting it, the text changes to that particular language. Any idea as to how to do it? I had tried

[[NSUserDefaults standardUserDefaults] setObject:[NSArray arrayWithObjects:@"de", @"en", @"fr", nil] forKey:@"AppleLanguages"];

but it is not working.

A: You can do that with:

-[NSBundle localizedStringForKey:value:table:]

There is a small sample in the docs. Basically what you need to do is create a MyTable.strings file with the localizations you need. Create one file per language you need. Then do:

NSBundle *bundle = [NSBundle mainBundle];
NSString *localizedString = [bundle localizedStringForKey:@"TheKeyYouWantToLocalize" value:@"TheDefaultValue" table:@"MyTable"];

This method looks for the key @"TheKeyYouWantToLocalize" in the MyTable.strings file; if it is found, the localized value is returned, otherwise @"TheDefaultValue" is returned.

FYI, this is the same process the system uses when localizing an app (heard of NSLocalizedString?), but here you have to do it manually, since you are asking the user which language to show instead of relying on the system language.
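For completeness (this fragment is illustrative and not part of the original answer): a `.strings` table is a plain list of key/value pairs, with one localized copy per `<lang>.lproj` folder. A hypothetical German version at `de.lproj/MyTable.strings` could look like:

```
/* de.lproj/MyTable.strings -- one such file per supported language */
"TheKeyYouWantToLocalize" = "Der lokalisierte Text";
```

When the bundle for the chosen language is the one being searched, the `localizedStringForKey:value:table:` call above resolves the key against this table.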
Universities in New York City Ranking
by Alvina Ibe
Founded in 1955, Long Island University had a humble beginning. Its first classes took place in converted barns, servants' bedrooms and garages, but the school has grown exponentially over the years into one of the nation's finest liberal arts schools. Situated on a beautiful, 307-acre residential campus, the university's C.W. Post campus offers more than 250 degree choices and serves more than 8,000 students a year. That number may seem high, but with more than 320 faculty members, class sizes hover around 20, so that a high level of education is provided to each student. Students have a plethora of social, athletic and academic activities from which to choose. Long Island University's C.W. Post campus has an active Greek life, numerous student clubs and organizations, an intercollegiate athletics organization, and several club, intramural and recreational sports. Because the university is located less than an hour from New York City, students have access to some of the world's finest restaurants, museums, cultural events and sports teams.
Pace University students do more than keep pace; They Work Toward Greatness. Pace University is serious about academics, but the school's motto encompasses the entire student experience. Students define what greatness means to them through opportunities and experiences, and then gain the education to achieve it. Pace University includes five schools and one college. More than 13,500 students are enrolled in 100 different undergraduate majors in 27 bachelor's degree programs (and 3,000 courses), 47 master's programs, and four doctoral programs. Majors with the highest enrollment include accounting, finance, and nursing. Disciplines with the highest percentage of degrees awarded are business/marketing, health professions and related sciences, communication technologies, computer and information sciences, and education.
Graduate degrees include the MBA, law degrees, and physician assistant, among many others. A number of joint degree programs with other universities are also available. Pace also has an undergraduate honors college and study abroad opportunities.
Cornell University, New York
It is a private, statutory Ivy League research university in Ithaca, New York. The university offers more than 100 fields of study, many of which provide opportunities for learning and engagement that span the world. It is one of the universities in New York whose applicants enjoy high acceptance rates at the nation's top law and medical schools.
The State University of New York (SUNY)
It is one of the public universities in New York and is counted among the largest universities in the state. The university welcomes and supports students from across the globe. The most popular programs at SUNY include Engineering, Business, Computer Science, Communications, Science, and Liberal Arts. It is one of the more affordable universities in New York for international students.
Fordham University
It is a private research institution in New York City, founded in 1841. It is the oldest Jesuit and Catholic university in the northeastern USA and the third oldest university in New York City. Fordham University comprises 10 colleges: four for undergraduates and six for postgraduates. The university has three campuses: Lincoln Center in Manhattan, Rose Hill in the Bronx, and Westchester in West Harrison.
New York University (NYU)
New York University is a private institution established in 1831. It follows an academic semester-based calendar and is counted among the top universities in New York City. According to the QS World University Rankings, it stands in 39th position.
For international students, the minimum English language requirement at this university is either a 7.5 band in IELTS or 100 points in TOEFL.
Colgate University
A selective liberal arts college based in the town of Hamilton, Colgate University enrolls around 3,000 learners annually. Colgate primarily offers undergraduate education along with a respected master's program in teaching. The college combines academic rigor and extensive campus resources with the intimacy of a small private college. The average class size is 17 students, and the school features a student-to-faculty ratio of 9-to-1. Undergraduates can choose from 56 majors and several minors, with options including art and art history, astronomy and physics, creative writing, Middle Eastern and Islamic studies, and peace and conflict studies. Regardless of a student's chosen major, Colgate stresses values of interdisciplinary knowledge and shared intellectual engagement. All learners must complete a liberal arts core that includes courses in legacies of the ancient world, challenges of modernity, communities and identity, global engagement, and scientific perspectives.
Columbia University
A private Ivy League college located near Manhattan's Upper West Side, Columbia University was established in 1754, making it the oldest college in New York state and one of the oldest in the country. Consistently ranked among the nation's most prestigious and selective colleges, Columbia counts three U.S. presidents and several Nobel laureates among its alumni. Columbia organizes academics into 20 schools, including three undergraduate schools and several notable graduate schools. The college hosts some of the country's top programs in English, history, nursing, public health, and social work, among other areas. Columbia also sits at the forefront of research in many fields, including neuroscience, medicine, data science, and earth science.
Befitting a reputation as a top national college, Columbia offers comprehensive campus resources and services including the renowned Butler Library, which holds over two million volumes. The college's 33,000 students also benefit from top-notch research facilities and an active campus community.
\section{Introduction} Recent results in compressed sensing \cite{Baraniuk2007,Candes2006conf,Candes2008,Donoho2006} show that for a class of finite discrete signals termed \textit{sparse}, it is possible to reconstruct the signal from a significantly lower number of samples than predicted by the Nyquist rate of uniform sampling theory. More precisely, if $\mathbf{x}_{n\times 1}$ is a vector whose representation in the orthonormal basis $\mathbf{\Psi}=[\psi_{i,j}]_{n\times n}$ has at most $k$ nonzero elements, it is called $k$-sparse. Also, $\mathbf{s}=\mathbf{\Psi}^{H}\cdot \mathbf{x}$ is the representation of $\mathbf{x}$ in the sparsity domain (since $\mathbf{\Psi}$ is orthonormal, $\mathbf{\Psi}^{-1}=\mathbf{\Psi}^{H}$); i.e., $\mathbf{s}$ has at most $k$ nonzero elements. It is proved that if $m$ is of order not less than $O(k\cdot \log(\frac{n}{k}))$, then $m$ random linear combinations of the samples (multiplication by a random matrix $\mathbf{\Phi}$) of a $k$-sparse signal of length $n$ ($k\ll n$), where the coefficients of the linear combinations have an i.i.d. normal distribution, suffice (with overwhelming probability) to reconstruct the signal \cite{Donoho2006}. Other distributions have also been proposed which yield the same result \cite{Baraniuk2008}. One of the features of this type of random sampling, which was originally of main concern (but is no longer as important as it was), is that at the time of sampling it is unimportant in which basis the signal is sparse. The domain of sparsity comes into the picture only at the time of reconstruction. Therefore, this type of sampling is not matched to a specific sparsity domain. The other advantage of the method lies in its reconstruction procedure.
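As an illustrative sketch of this measurement scheme (the dimensions, the seed, and the constant in front of $k\log(n/k)$ are arbitrary choices, and $\mathbf{\Psi}=\mathbf{I}$ is taken for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 256, 5
m = int(np.ceil(2 * k * np.log(n / k)))      # m = O(k log(n/k)) measurements

# k-sparse signal (sparse in the canonical basis, i.e. Psi = I)
s = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
s[support] = rng.standard_normal(k)

# i.i.d. Gaussian sampling matrix and the m << n linear measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ s
print(m, y.shape)  # 40 (40,)
```

With $m=40$ measurements of an $n=256$ signal, the compression is roughly $6\times$ while, per the cited results, the sparse vector remains recoverable with overwhelming probability.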
The mentioned order of $m$ is the same as the order for which minimization of the $\ell_0$ norm for finding the sparse solution can be replaced with $\ell_1$ minimization without considerable change in the solution (solving a polynomial-time problem instead of an NP-complete one) \cite{Candes2005,Gilbert2006conf}. Since in most applications the input signal has a priori known characteristics, the sparsity domain is also known. Thus, it is logical to design a deterministic sampling matrix ($\mathbf{\Phi}$); in this way, we eliminate the need to store the sampling matrix. Some possible matrices are introduced in \cite{DeVore2007} and \cite{Saligrama2008}. Although the analog branch of compressed sensing is being developed in parallel with the improvements in the discrete domain \cite{Eldar2008June,Eldar2008Sep}, we will only consider the discrete case in this paper. We will investigate the consequences of deterministic sampling when the sparsity domain is not known at the time of sampling; however, we do not restrict the sampling functions to be linear. In fact, we are looking for deterministic functions of $\mathbf{x}$ which uniquely represent all $k$-sparse signals irrespective of their sparsity domain (for reconstruction, the domain should be known). \section{Preliminaries} Before we start the main argument, it is necessary to point out a key issue: since the cardinality of $\mathbb{C}$ (or $\mathbb{R}$) is the same as that of $\mathbb{C}^n$ (or $\mathbb{R}^n$), there exists an injection $g:\mathbb{C}^n\rightarrow \mathbb{C}$ (or $g:\mathbb{R}^n\rightarrow \mathbb{R}$) which, as a sampling function, compresses the whole information of a vector into a single sample, even without the assumption of sparsity. Although the compression is remarkable, the output sample cannot be easily quantized; i.e., this theoretical solution is impractical.
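As an aside, the $\ell_1$ relaxation mentioned above (basis pursuit: $\min\|\mathbf{s}\|_1$ subject to $\mathbf{\Phi}\mathbf{s}=\mathbf{y}$) can be posed as a linear program through the standard split $\mathbf{s}=\mathbf{s}^+-\mathbf{s}^-$ with $\mathbf{s}^+,\mathbf{s}^-\geq 0$. The following SciPy sketch, on a toy instance with arbitrary dimensions, is only meant to illustrate the idea:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, m = 60, 3, 20
s = np.zeros(n)
s[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ s

# min 1^T (s+ + s-)  s.t.  Phi (s+ - s-) = y,  s+ >= 0, s- >= 0
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
s_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(s_hat - s)))  # small residual when l1 recovery succeeds
```

In this regime of $k/m$ (well inside the recovery region for Gaussian matrices), the minimum-$\ell_1$ solution typically coincides with the $k$-sparse generator, even though only $m=20<n=60$ linear samples are available.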
For this reason, we are looking for sampling functions which employ the sparsity constraint in order to reconstruct the original signal, and then we will discuss how they can be quantized. To formulate the mentioned constraint: we are looking for sampling functions $f_i:\mathbb{C}^n\rightarrow\mathbb{C},~~1\leq i\leq m$ that result in sampling equivalence classes defined by: \begin{eqnarray} \mathcal{S}_{s_1,\dots,s_m}=\{\mathbf{x}\in\mathbb{C}^n~~|~~\forall~1\leq i\leq m:~f_i(\mathbf{x})=s_i\} \end{eqnarray} such that in each arbitrary orthonormal basis, at most one of the elements of $\mathcal{S}$ has a $k$-sparse representation. Moreover, the sparsity constraint should play a key role; i.e., in many cases we should have $|\mathcal{S}|>1$. First we show why deterministic linear sampling is not a proper choice. Assume the samples are produced as: \begin{eqnarray} \left[\begin{array}{c} s_1\\ \vdots\\ s_m \end{array}\right]=\left[\begin{array}{c} f_1(\mathbf{x})\\ \vdots\\ f_m(\mathbf{x}) \end{array}\right]=\mathbf{\Phi}_{m\times n}\cdot \mathbf{x}_{n\times 1} \end{eqnarray} If $m\geq n$, the number of samples exceeds the number of original elements in $\mathbf{x}$, which defeats the underlying purpose of compression. For the case of $m< n$, $\textrm{rank}(\mathbf{\Phi})\leq m<n$ and therefore the null space of $\mathbf{\Phi}$ has nonzero dimension: \begin{eqnarray} \exists~\mathbf{v}\in\mathbb{C}^n,\|\mathbf{v}\|_{\ell_2}=1:~~\mathbf{\Phi}\cdot\mathbf{v}=0 \end{eqnarray} Let $\mathcal{V}^{\perp}$ be the subspace of $\mathbb{C}^n$ formed by the vectors perpendicular to $\mathbf{v}$: \begin{eqnarray} \mathcal{V}^{\perp}\triangleq\{\mathbf{x}\in\mathbb{C}^n~~|~~\mathbf{v}^H\cdot\mathbf{x}=0\}, \end{eqnarray} and let $\{\mathbf{u}_1,\dots,\mathbf{u}_{n-1}\}$ be an orthonormal basis for this subspace.
Hence, $\{\mathbf{v},\mathbf{u}_1,\dots,\mathbf{u}_{n-1}\}$ forms an orthonormal basis for $\mathbb{C}^n$, or equivalently, \begin{eqnarray} \mathbf{\Psi}\triangleq\left[\mathbf{v},~\mathbf{u}_1,~\dots,~\mathbf{u}_{n-1}\right] \end{eqnarray} is a unitary matrix. Obviously, the input vector $\mathbf{v}$ is $1$-sparse with respect to the representation in $\mathbf{\Psi}$; however, $\mathbf{v}$ lies in the null space of $\mathbf{\Phi}$ and cannot be recovered from its samples (all zero) produced by $\mathbf{\Phi}$. Thus, for any fixed linear sampling set with fewer samples than the length of the original signal, there exists an orthonormal basis and a $1$-sparse signal (in this basis) which cannot be uniquely recovered from its samples. \section{General Sampling for $k$-sparse Signals} Although deterministic linear sampling fails either in compression or in reconstruction, general nonlinear sampling functions might still avoid these drawbacks. In this section we will show that for $k$-sparse signals with $k\geq 2$ no such sampling functions exist. Let us assume that $\{f_i(\mathbf{x})\}_{i=1}^{m}$ are the sampling functions and let $\{s_i\}_{i=1}^{m}$ be the respective samples obtained from a given $k$-sparse vector $\mathbf{x}_0$ (in an orthonormal basis). Reversibility of the sampling process is equivalent to the fact that $\mathbf{x}_0$ is the only $k$-sparse vector which results in the samples $\{s_i\}_{i=1}^{m}$. As previously defined, we denote the sampling equivalence class of the obtained samples $\{s_i\}_{i=1}^{m}$ by $\mathcal{S}$: \begin{eqnarray} \mathcal{S}=\big\{\mathbf{x}\in\mathbb{C}^n~\big|~\forall 1\leq i\leq m~,~f_i(\mathbf{x})=s_i \big\} \end{eqnarray} Since $\mathbf{x}_0\in \mathcal{S}$, we know $\mathcal{S}\neq \emptyset$. Since the case $m\geq n$ contradicts the compression concept, we assume $m< n$. 
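As a numerical aside (the paper itself contains no code), the null-space argument of the previous section is easy to verify: for any $\mathbf{\Phi}$ with $m<n$, a unit vector $\mathbf{v}$ in its null space can be completed to an orthonormal basis $\mathbf{\Psi}$ in which $\mathbf{v}$ is $1$-sparse, while $\mathbf{\Phi}\cdot\mathbf{v}=\mathbf{0}$. The dimensions and the random $\mathbf{\Phi}$ below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
Phi = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Any m x n matrix with m < n has a nontrivial null space; take a unit
# vector v from it (a right-singular vector beyond the rank of Phi).
_, _, Vh = np.linalg.svd(Phi)
v = Vh[-1].conj()

# Complete {v} to an orthonormal basis Psi of C^n via QR factorization.
Psi, _ = np.linalg.qr(np.column_stack([v, rng.standard_normal((n, n - 1))]))

coeffs = Psi.conj().T @ v        # representation of v in the basis Psi

print(np.linalg.norm(Phi @ v))   # ~0: v is invisible to the linear sampling
print(np.abs(coeffs).round(12))  # 1-sparse: exactly one nonzero coefficient
```

The first column of $\mathbf{\Psi}$ spans the same direction as $\mathbf{v}$, so the coefficient vector $\mathbf{\Psi}^H\mathbf{v}$ has a single unit-magnitude entry, yet all $m$ samples of $\mathbf{v}$ vanish.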
Moreover, if $|\mathcal{S}|=1$ for all input vectors $\mathbf{x}_0\in\mathbb{C}^n$, the sparsity constraint is of no use; i.e., a one-to-one mapping from $\mathbb{C}^n$ to $\mathbb{C}^m$ is formed, which is undesirable due to its quantization problem. Therefore, there exist input vectors for which the equivalence class $\mathcal{S}$ has at least two elements. We choose $\mathbf{x}_0$ among these input vectors; i.e., there exists $\mathbf{x}_1\neq\mathbf{x}_0$ such that the samples obtained from both $\mathbf{x}_0$ and $\mathbf{x}_1$ are the same ($\mathbf{x}_0,\mathbf{x}_1\in\mathcal{S}$). Let $\tilde{\mathbf{v}}_0$ be the normal vector in the direction of $\mathbf{x}_0$ and let $\mathbf{v}_1$ be defined as follows: \begin{eqnarray} \mathbf{v}_1=\mathbf{x}_1-\langle\mathbf{x}_1,\tilde{\mathbf{v}}_0\rangle\tilde{\mathbf{v}}_0 \end{eqnarray} where $\langle\mathbf{a},\mathbf{b}\rangle$ stands for the inner product of the vectors $\mathbf{a}$ and $\mathbf{b}$ ($\langle\mathbf{a},\mathbf{b}\rangle=\mathbf{a}^H\cdot\mathbf{b}$). We have two cases: \begin{enumerate} \item If $\mathbf{v}_1=\mathbf{0}$, both $\mathbf{x}_0$ and $\mathbf{x}_1$ lie in the direction identified by the normal vector $\tilde{\mathbf{v}}_0$. Similar to the argument in the previous section, there exist normal vectors $\mathbf{u}_1,\dots,\mathbf{u}_{n-1}$ such that $\{\tilde{\mathbf{v}}_0,\mathbf{u}_1,\dots,\mathbf{u}_{n-1}\}$ forms an orthonormal basis for $\mathbb{C}^n$. Obviously both $\mathbf{x}_0$ and $\mathbf{x}_1$ are 1-sparse in this basis and, moreover, they produce the same set of samples using the assumed sampling functions. Hence, using the mentioned samples, these two vectors cannot be distinguished even with the sparsity constraint. \item If $\mathbf{v}_1\neq\mathbf{0}$, we define $\tilde{\mathbf{v}}_1$ to be the normal vector in the direction of $\mathbf{v}_1$. 
Since $\tilde{\mathbf{v}}_0$ and $\mathbf{v}_1$ are orthogonal, $\tilde{\mathbf{v}}_0$ and $\tilde{\mathbf{v}}_1$ are also orthogonal: \begin{eqnarray} &&\langle\mathbf{v}_1,\tilde{\mathbf{v}}_0\rangle= \langle\mathbf{x}_1,\tilde{\mathbf{v}}_0\rangle - \langle\mathbf{x}_1,\tilde{\mathbf{v}}_0\rangle \cdot \underbrace{\langle\tilde{\mathbf{v}}_0,\tilde{\mathbf{v}}_0\rangle}_{= \|\tilde{\mathbf{v}}_0\|^2_{\ell_2}=1}=0\nonumber\\ &&~~~~~~~~\Rightarrow~~\mathbf{v}_1 \perp \tilde{\mathbf{v}}_0~~\Rightarrow~~\tilde{\mathbf{v}}_1 \perp \tilde{\mathbf{v}}_0 \end{eqnarray} Moreover, from the definition of $\mathbf{v}_1$ we have: \begin{eqnarray} \mathbf{x}_1&=&\mathbf{v}_1+\langle\mathbf{x}_1,\tilde{\mathbf{v}}_0\rangle \tilde{\mathbf{v}}_0 \nonumber\\ &=& \langle\mathbf{x}_1,\tilde{\mathbf{v}}_1\rangle \tilde{\mathbf{v}}_1+\langle\mathbf{x}_1,\tilde{\mathbf{v}}_0\rangle \tilde{\mathbf{v}}_0 \end{eqnarray} Therefore, $\mathbf{x}_0$ and $\mathbf{x}_1$ are respectively 1-sparse and 2-sparse in any orthonormal basis which contains $\tilde{\mathbf{v}}_0$ and $\tilde{\mathbf{v}}_1$ (the existence of such a basis is trivial by considering the subspace of $\mathbb{C}^n$ orthogonal to $\textrm{span}\{\tilde{\mathbf{v}}_0,\tilde{\mathbf{v}}_1\}$). Thus, if $k\geq 2$, both $\mathbf{x}_0$ and $\mathbf{x}_1$ are valid solutions under the sparsity constraint, and again we cannot uniquely reconstruct the original signal from its samples. \end{enumerate} \section{Sampling for $1$-sparse Signals} The only remaining case is the general (nonlinear) sampling of $1$-sparse signals. 
In fact, we will show the existence of such sampling functions for this case, but let us first focus on the conditions that the sampling functions should satisfy: \newtheorem{lemma}{Lemma} \begin{lemma} The set $\mathcal{S}\subset\mathbb{C}^n$ is uniquely 1-sparse decodable iff: \begin{eqnarray} \forall ~\mathbf{a}\neq\mathbf{b}\in \mathcal{S}:~~~0<\big|\langle\mathbf{a},\mathbf{b}\rangle\big| < \|\mathbf{a}\|_{\ell_2}\cdot\|\mathbf{b}\|_{\ell_2} \end{eqnarray} where $\langle\mathbf{a},\mathbf{b}\rangle$ is the inner product of the two vectors ($\mathbf{a}^H\cdot\mathbf{b}$), $\|.\|_{\ell_2}$ is the $\ell_2$ norm of the vector (the square root of the inner product of the vector with itself) and $|.|$ refers to the absolute value of a complex number. \end{lemma} Using the Cauchy-Schwarz inequality and the non-negativity of the absolute-value operator, it is straightforward that: \begin{eqnarray} \forall ~\mathbf{a},\mathbf{b}\in \mathbb{C}^n:~~~0\leq\big|\langle\mathbf{a},\mathbf{b}\rangle\big| \leq \|\mathbf{a}\|_{\ell_2}\cdot\|\mathbf{b}\|_{\ell_2} \end{eqnarray} The only characteristic which distinguishes an arbitrary set from a uniquely 1-sparse decodable set is that the equalities do not occur for the latter. We now turn to the proof of the lemma. According to the Cauchy-Schwarz inequality, for the right inequality we have: \begin{eqnarray} \big|\langle\mathbf{a},\mathbf{b}\rangle\big| = \|\mathbf{a}\|_{\ell_2}\cdot\|\mathbf{b}\|_{\ell_2} ~~\Leftrightarrow~~\exists~c\in\mathbb{C}:~\mathbf{a}=c\mathbf{b} \end{eqnarray} If $\mathcal{S}$ contains two unequal vectors $\mathbf{a},\mathbf{b}$ for which the above equality holds, then in any orthonormal basis containing the normal vector in the direction of $\mathbf{a}$ (or $\mathbf{b}$) both vectors are 1-sparse (since they are in the same direction), which means that $\mathcal{S}$ is not uniquely 1-sparse decodable. 
On the other hand, if the strict inequality holds for all unequal vectors of $\mathcal{S}$, each line in $\mathbb{C}^n$ passing through the origin intersects $\mathcal{S}$ in at most one point. This means that for each vector of an orthonormal basis, there exists at most one element of $\mathcal{S}$ which is 1-sparse in this direction. The left inequality concerns the orthogonality of the vectors in $\mathcal{S}$: \begin{eqnarray} \big|\langle\mathbf{a},\mathbf{b}\rangle\big|=0~\Rightarrow~\mathbf{a}\perp\mathbf{b} \end{eqnarray} Assume $\mathcal{S}$ contains two nonzero perpendicular vectors $\mathbf{a},\mathbf{b}$ and let $\tilde{\mathbf{a}},\tilde{\mathbf{b}}$ be the normal vectors in their directions, respectively. Since $\tilde{\mathbf{a}}$ and $\tilde{\mathbf{b}}$ are orthogonal, there exists an orthonormal basis for $\mathbb{C}^n$ which contains both $\tilde{\mathbf{a}}$ and $\tilde{\mathbf{b}}$; obviously $\mathbf{a}$ and $\mathbf{b}$ are 1-sparse in this basis, which implies that $\mathcal{S}$ is not uniquely 1-sparse decodable. For the sufficiency of the condition, if no two vectors of $\mathcal{S}$ are perpendicular, then among the vectors of each orthonormal basis of $\mathbb{C}^n$ there exists at most one whose direction intersects $\mathcal{S}$ (two intersections would reveal a perpendicular pair in $\mathcal{S}$). Also, the result of the right-side inequality restricts the number of intersections for each direction to one; so, at most one of the vectors in $\mathcal{S}$ can be 1-sparse in this basis (1-sparse decodability of $\mathcal{S}$) and the proof of the lemma is complete. Our next step is to introduce sampling functions whose equivalent sampling classes fulfill the condition of the lemma. 
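As an aside (again, not part of the original paper), the lemma's condition is straightforward to test numerically for a finite candidate set $\mathcal{S}$; the sketch below checks $0<|\langle\mathbf{a},\mathbf{b}\rangle| < \|\mathbf{a}\|_{\ell_2}\|\mathbf{b}\|_{\ell_2}$ over all unequal pairs:

```python
import numpy as np
from itertools import combinations

def uniquely_1sparse_decodable(S, tol=1e-12):
    """Check the lemma's condition for a list S of complex vectors:
    0 < |<a,b>| < ||a||_2 * ||b||_2 for all unequal pairs in S."""
    for a, b in combinations(S, 2):
        ip = abs(np.vdot(a, b))        # |<a,b>| with <a,b> = a^H . b
        if ip <= tol:                  # orthogonal pair: condition fails
            return False
        if ip >= np.linalg.norm(a) * np.linalg.norm(b) - tol:
            return False               # colinear pair: condition fails
    return True

# Orthogonal pair: both vectors are 1-sparse in one basis -> not decodable.
print(uniquely_1sparse_decodable([np.array([1.0, 0]), np.array([0, 1.0])]))  # False
# Colinear pair: one direction intersects S twice -> not decodable.
print(uniquely_1sparse_decodable([np.array([1.0, 1]), np.array([2.0, 2])]))  # False
# Neither orthogonal nor colinear: the condition holds.
print(uniquely_1sparse_decodable([np.array([1.0, 1]), np.array([2.0, 1])]))  # True
```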
We claim that the following three functions always produce such sets: \begin{eqnarray} \left\{\begin{array}{l} f_1(\mathbf{x})=\sum_{i=1}^{n}3^{2(i-1)}\textrm{sign}(\Re\{x_{i}\})\\ ~~~~~~~~~~~~~~~~~~~~+3^{2i-1}\textrm{sign}(\Im\{x_{i}\})\\ \\ f_2(\mathbf{x})=\|\mathbf{x}\|_{\ell_1}=\sum_{i=1}^{n}|x_{i}|\\ \\ f_3(\mathbf{x})=\sum_{x_i\neq 0}\frac{\textrm{msign}(\Re\{x_i\})~\cdot~\Im\{x_i\} }{|\Re\{x_i\}|+|\Im\{x_i\}|} \end{array}\right. \end{eqnarray} where $\Re\{.\}$ and $\Im\{.\}$ represent the real and imaginary parts, respectively, and $\textrm{msign}(.)$ is the modified sign function, which generates the same output as the $\textrm{sign}(.)$ function except that $\textrm{msign}(0)=1$. Although the first sampling function, $f_1$, produces only one sample, it uniquely determines the signs of both the real and imaginary parts of all the elements of $\mathbf{x}$. Since the output range of the $\textrm{sign}$ function is restricted to the three values $-1$, $0$ and $1$, the sample generated by $f_1$ can be viewed as the base-3 representation of an integer with the difference that the digit 2 is replaced with $-1$ (the same residue in division by 3, which yields the uniqueness). Moreover, this sample is an integer between $-\frac{3^{2n}-1}{2}$ and $\frac{3^{2n}-1}{2}$, which requires $\lceil 2n\log_{2}3\rceil$ bits for errorless quantization. We will show that this sample guarantees the left inequality. Let $\mathbf{a}$ and $\mathbf{b}$ be two unequal vectors such that $f_1(\mathbf{a})=f_1(\mathbf{b})$; we show that $\langle\mathbf{a},\mathbf{b}\rangle\neq 0$: \begin{eqnarray} &&\Re\{\langle\mathbf{a},\mathbf{b}\rangle\}=\Re\{\mathbf{a}^H\cdot\mathbf{b}\}\nonumber\\ &&~~~~~~~~~~~=\sum_{i=1}^{n}\Re\{a_i\}\Re\{b_i\}+\Im\{a_i\}\Im\{b_i\} \end{eqnarray} Since $f_1(\mathbf{a})=f_1(\mathbf{b})$, we have $\textrm{sign}(\Re\{a_i\})=\textrm{sign}(\Re\{b_i\})$ and $\textrm{sign}(\Im\{a_i\})=\textrm{sign}(\Im\{b_i\})$ for all $1\leq i \leq n$. 
Therefore, $\Re\{a_i\}\cdot\Re\{b_i\}+\Im\{a_i\}\cdot\Im\{b_i\}\geq 0$, and equality occurs if and only if $\Re\{a_i\}=\Re\{b_i\}=\Im\{a_i\}=\Im\{b_i\}=0$. Consequently, we have: \begin{eqnarray} &&\sum_{i=1}^{n}\Re\{a_i\}\cdot\Re\{b_i\}+\Im\{a_i\}\cdot\Im\{b_i\}\geq 0 \nonumber\\ &&\Rightarrow \Re\{\langle\mathbf{a},\mathbf{b}\rangle\}\geq 0 \end{eqnarray} with equality only for $\mathbf{a}=\mathbf{b}=\mathbf{0}$, which contradicts the fact that $\mathbf{a}$ and $\mathbf{b}$ are unequal. Thus: \begin{eqnarray} \left\{\begin{array}{l} \mathbf{a}\neq\mathbf{b} \\ f_1(\mathbf{a})=f_1(\mathbf{b})\\ \end{array}\right. &\Rightarrow&\Re\{\langle\mathbf{a},\mathbf{b}\rangle\}> 0 \nonumber\\ &\Rightarrow& \big|\langle\mathbf{a},\mathbf{b}\rangle\big| > 0 \end{eqnarray} Now, using the samples generated by $f_2$ and $f_3$ (in addition to $f_1$), we prove the right inequality. If $\mathbf{a}$ and $\mathbf{b}$ are such that $f_2(\mathbf{a})=f_2(\mathbf{b})$, $f_3(\mathbf{a})=f_3(\mathbf{b})$ and $\big|\langle\mathbf{a},\mathbf{b}\rangle\big|=\|\mathbf{a}\|_{\ell_2}\cdot \|\mathbf{b}\|_{\ell_2}$, then $\mathbf{a}$ and $\mathbf{b}$ must have the same direction (Cauchy-Schwarz inequality), or equivalently, there exists $c\in\mathbb{C}$ such that $\mathbf{b}=c~\mathbf{a}$: \begin{eqnarray} f_2(\mathbf{b})=f_2(c~\mathbf{a})=\sum_{i=1}^n\big|c\cdot a_i\big|&=&|c|\sum_{i=1}^n |a_i|\nonumber\\ &=&|c|f_2(\mathbf{a}) \end{eqnarray} The condition $f_2(\mathbf{a})=f_2(\mathbf{b})$ restricts the amplitude of $c$ to the value 1 ($|c|=1$); note that $f_2(\mathbf{a})=0$ results in $\mathbf{a}=\mathbf{b}=0$, which again contradicts the assumed distinctness of $\mathbf{a}$ and $\mathbf{b}$. Thus, up to this point we have shown that given the first two samples (generated by $f_1$ and $f_2$), the original vector is known up to a constant complex coefficient on the unit circle. We now show that the third sample uniquely determines the phase of that coefficient, which implies the overall uniqueness. 
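The uniqueness mechanism can be exercised numerically (an illustrative sketch, not part of the paper): for $\mathbf{b}=c\,\mathbf{a}$ with $|c|=1$, the $\ell_1$ sample $f_2$ is blind to the rotation, but either the sign sample $f_1$ or the phase sample $f_3$ detects it. The test vectors below are arbitrary.

```python
import numpy as np

def f1(x):
    """Sign sample: base-3 style encoding of the signs of Re and Im parts."""
    i = np.arange(1, len(x) + 1)
    return int(np.sum(3**(2*(i - 1)) * np.sign(x.real)
                      + 3**(2*i - 1) * np.sign(x.imag)))

def f2(x):
    """l1-norm sample."""
    return np.sum(np.abs(x))

def msign(t):
    """Modified sign: like sign(t), except msign(0) = 1."""
    return np.where(t >= 0, 1.0, -1.0)

def f3(x):
    """Phase sample, summed over the nonzero entries."""
    nz = x[x != 0]
    return np.sum(msign(nz.real) * nz.imag / (np.abs(nz.real) + np.abs(nz.imag)))

a  = np.array([1 + 2j, -3 + 0.5j, 0])
b  = np.exp(0.1j) * a            # small rotation: all signs survive
b2 = np.exp(1j * np.pi / 3) * a  # larger rotation: a sign flips

print(f2(a), f2(b), f2(b2))                       # all (numerically) equal
print(f1(a) == f1(b), abs(f3(a) - f3(b)) > 1e-9)  # True True: f3 catches b
print(f1(a) == f1(b2))                            # False: f1 catches b2
```

This matches the proof: when the rotation is small enough to preserve all signs, $f_3(\mathbf{a})-f_3(\mathbf{b})$ is proportional to $-\Im\{c\}$ and hence nonzero.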
Before we proceed to reveal the role of the third sample, let us consider the effect of quantizing the second sample. Since the absolute-value operator is continuous, the sampling function $f_2$ is also a continuous function of the input vector. Continuity implies that small perturbations in the input, such as quantization, are mapped to small deviations in the output. Moreover, the quantization of this sample does not affect the mentioned restriction on $c$ ($|c|=1$); i.e., the ambiguity is still the phase of $c$. In simple words, although the reconstructed amplitudes do not exactly match the original ones, the uniqueness (in the amplitude) is still valid. In fact, due to quantization, the reproduced amplitudes fall within the quantization precision of the original values. If the input elements of the vector are quantized with $l$ bits prior to sampling, the output of $f_2$ can be quantized with $l+\lceil \log_2(n) \rceil$ bits (unsigned) without causing any further loss of information. Now we show the role of the last sample in revealing the phases of the nonzero elements of the input vector in the sparsity domain. Since the summation in $f_3$ is taken over the nonzero elements of the vector, the denominator in each of the terms is a positive real number (the denominator becomes zero only when $\Re\{x_i\}=\Im\{x_i\}=0$, which is excluded by $x_i\neq 0$). Moreover, we have: \begin{eqnarray} 0\leq&&\bigg|\frac{\textrm{msign}(\Re\{x_i\})~\cdot~\Im\{x_i\}}{|\Re\{x_i\}|+|\Im\{x_i\}|}\bigg|\nonumber\\ &=&\frac{|\Im\{x_i\}|}{|\Re\{x_i\}|+|\Im\{x_i\}|}~~~\leq 1 \end{eqnarray} which shows that the sample generated by $f_3$ is bounded between $-n$ and $n$ (boundedness is necessary for quantization). If $\mathbf{a}$ and $\mathbf{b}$ are two vectors which produce the same set of samples (all three), we know from the first two samples that $\mathbf{b}=c~\mathbf{a}$ with $|c|=1$. 
Considering the third sample, we should have $f_3(\mathbf{a})=f_3(\mathbf{b})$: \begin{eqnarray} &&\sum_{a_i\neq 0} \frac{\textrm{msign}(\Re\{a_i\})~\cdot~\Im\{a_i\}}{|\Re\{a_i\}|+|\Im\{a_i\}|} \nonumber\\ &=& \sum_{c a_i\neq 0} \frac{\textrm{msign}(\Re\{ca_i\})~\cdot~\Im\{ca_i\}}{|\Re\{ca_i\}|+|\Im\{ca_i\}|} \end{eqnarray} Owing to the fact that $|c|=1$, the conditions $a_i\neq 0$ and $ca_i\neq 0$ are equivalent; therefore, the terms used in the summations are the same on both sides. In addition, the first sample uniquely determines the $\textrm{sign}$, and therefore the $\textrm{msign}$, of both the real and imaginary parts; thus, we should have $\textrm{msign}(\Re\{a_i\})=\textrm{msign}(\Re\{ca_i\})$ and $\textrm{sign}(\Im\{a_i\})=\textrm{sign}(\Im\{ca_i\})$. To simplify the notation, we represent the real and imaginary parts of $a_i$ and $ca_i$ by $R_i,I_i$ and $R_i^{'},I_i^{'}$, respectively. Moreover, by $sn$ and $\tilde{sn}$ we mean the $\textrm{sign}$ and $\textrm{msign}$ operators: \begin{eqnarray} &&\frac{\tilde{sn}(R_i)~\cdot~I_i}{|R_i|+|I_i|} - \frac{\tilde{sn}(R_i^{'})~\cdot~I_i^{'}}{|R_i^{'}|+|I_i^{'}|} \nonumber\\ &&=\tilde{sn}(R_i)\cdot sn(I_i)\bigg(\frac{|I_i|}{|R_i|+|I_i|} - \frac{|I_i^{'}|}{|R_i^{'}|+|I_i^{'}|}\bigg)\nonumber\\ &&=\tilde{sn}(R_i)\cdot sn(I_i)\frac{|I_i|\cdot|R_i^{'}|- |I_i^{'}|\cdot|R_i|}{\big(|R_i|+|I_i|\big)\big(|R_i^{'}| +|I_i^{'}|\big)}\nonumber\\ &&=\frac{I_i\cdot R_i^{'}- I_i^{'}\cdot R_i}{\big(|R_i|+|I_i|\big)\big(|R_i^{'}| +|I_i^{'}|\big)}\nonumber\\ &&=\frac{-\Im\{c\}\cdot|a_i|^2}{\big(|R_i|+|I_i|\big)\big(|R_i^{'}| +|I_i^{'}|\big)} \end{eqnarray} Thus, $f_3(\mathbf{a})=f_3(\mathbf{b})$ yields: \begin{eqnarray} &&-\Im\{c\}\underbrace{\sum_{a_i\neq 0}\frac{|a_i|^2}{\big(|R_i|+|I_i|\big)\big(|R_i^{'}| +|I_i^{'}|\big)}}_{>~0}=0\nonumber\\ &&~~~~~~~~~~~~~~~~~~~\Rightarrow~~~\Im\{c\}=0 \end{eqnarray} The above result, in conjunction with the previous condition $|c|=1$, leaves two possible choices for $c$: $1$ or $-1$; nonetheless, $c=-1$ produces different signs for the real and imaginary parts in $\mathbf{b}$, which is not 
acceptable by the choice of the first sample. As a consequence, $c=1$ is the only choice, which means that $\mathbf{a}=\mathbf{b}$, and the proof of the claimed unique 1-sparse decodability of the aforementioned set of sampling functions is complete. An argument similar to the one presented for the quantization of the second sample is valid here; even after quantization, there exists at most one 1-sparse vector which produces these samples. Given that the quantization does not change the signs of the real and imaginary parts, the sampling functions remain in their continuous region; therefore, small errors in the input (such as quantization) are mapped to small errors in the reconstructed signal. It is easy to check that if the domain is $\mathbb{R}$ rather than $\mathbb{C}$, the third sample is not required; i.e., the signs of the elements and the $\ell_1$ norm of the vector suffice for the unique representation of 1-sparse vectors. \section{Conclusions} We have considered the deterministic sampling of sparse signals whose sparsity domain is not known at the time of sampling. Although random linear sampling of such signals has been shown to be a proper choice (under some conditions), the deterministic approach fails to reconstruct some identifiable subclasses. This drawback persists even when nonlinear measurements of $k$-sparse signals with $k>1$ are employed. We have shown that the class of $1$-sparse signals (in an arbitrary linear sparsity domain) can be uniquely identified using nonlinear sampling. We have derived a necessary and sufficient condition for the sampling functions to provide unique 1-sparse decodability of the samples, in addition to presenting a realizable set of such functions. \bibliographystyle{plain}
Q: How do I automatically import data from sites using an API to a Google spreadsheet? I am interested in importing data to a spreadsheet automatically. I have used Magic Script to import GA data to a Google spreadsheet, and it was awesome. But there is more data, other than GA, that needs to be automatically updated, like my advertisement sales records, to compare daily CTR and revenue from various sites. These sites are NOT related to Google, but I know they provide APIs, so I was hoping to get the data into a Google spreadsheet using Apps Script. If there's something like Magic Script for outside APIs, it'll be awesome. When I looked into it, however, I couldn't find any. So my question is this: * *Is there a way to import data automatically to a Google spreadsheet using an API (from sites that are not related to Google) and Apps Script? If there is, please tell me how. *Is there any written (free) script for that, like in the case of GA and Magic Script? Thank you in advance. A: * *Yes, you can do that with Google Apps Script by performing a URL fetch; here is a complete example with Twitter (unfortunately it's not a solution for beginners). *It depends on the site/API; in every case you'll need to google that to have a real answer. There is no official repository for that.
Q: Is it possible to build a generic function for the fetches? I build an app in React.js with a lot of 'crud' operations. I am a beginner in React. For all these fetches there are always the same headers (content-type, token...). And for each request I have to resolve promises, test the status code, parse the answer, manage the errors, etc. It is very long to write. So I wonder if it is possible to build a generic function. Something like that: myBooks = callApi('endpoint', 'myparams'); And that's all! The function callApi would do all the necessary work (add the headers, the token, etc.). I tried on my side, but I don't have enough skill to do that in React. Do you make your fetches with a special package? Or do you write your fetches like me, even though it's long to write? Do you have some packages to suggest? A: I am using axios for fetching from REST APIs. You can use axios.create to create an instance to which you can pass headers and the base URL of your API. You can even define middlewares with axios. 
Using axios.create:

const instance = axios.create({
  baseURL: 'https://some-domain.com/api/',
  timeout: 1000,
  headers: {'X-Custom-Header': 'foobar'}
});

Personally, I prefer to just wrap my axios calls myself like this:

function getHeaders() {
  return {
    accept: 'application/json',
    authorization: `Bearer ${ getStoredAuthToken() }`,
  };
}

function postHeaders() {
  return {
    'content-type': 'application/json',
    authorization: `Bearer ${ getStoredAuthToken() }`,
  };
}

export const postRequest = ( endpoint, data ) => axios
  .post( API + endpoint, data, { headers: postHeaders() } )
  .then( res => res.data )
  .catch( ( err ) => {
    LoggingUtility.error( `Error in post request to endpoint ${ endpoint }`, err );
    if ( isNetworkError( err ) ) {
      throwServerNotReachableError();
    }
    const { status } = err.response;
    if ( isUnauthorizedError( status ) ) {
      return refreshAuthToken( () => postRequest( endpoint, data ) );
    }
    throw err;
  } );

You could do something like this for every HTTP method, e.g. deleteRequest, putRequest, postRequest, etc. In React/frontend land it is very common to do this in a folder called services, which abstracts away all async fetches of data.

A: I have actually done this on personal projects before; here are the basics that I use.

const baseUrl = 'http://localhost';
const port = '8080';

export async function get(route) {
  try {
    const response = await fetch(`${baseUrl}:${port}/${route}`, {
      method: 'GET',
      cache: 'no-cache',
      credentials: 'same-origin',
    });
    return response.json();
  } catch (err) {
    console.error(`GET error: ${err}`);
  }
}

Of course, you'll have to modify this for authentication, https, etc., but you'll call it with something like this (a similar post method, and one for get):

const response = await post('user-files/update', {
  uuid,
  newFilename: filename,
  originalFilename,
});
const rows = await get(`dataset/${uuid}`);

A: Yes, totally. 
You can create a callApi function like this:

const callApi = async (url, params) => {
  let token = localStorage.getItem('token');
  let heads = {};
  let body = params;
  if (token !== null && token !== undefined && token !== '') {
    heads['token'] = token;
  }
  heads["Content-Type"] = "application/json";
  body = params && JSON.stringify(params);
  let options = { mode: "cors", method: "POST", headers: heads };
  if (body) options['body'] = body;
  try {
    const response = await fetch(url, options);
    const res = await response.json();
    return res;
  } catch (error) {
    console.log(error);
  }
};

Note that callApi must be declared async, since it uses await inside. Hope this helps!!