import Enum from '@gdbots/pbj/Enum.js';

export default class PhoneType extends Enum {
}

PhoneType.configure({
  UNKNOWN: 'unknown',
  MOBILE: 'mobile',
  HOME: 'home',
  WORK: 'work',
  FAX: 'fax',
}, 'gdbots:common:phone-type');
const { defineSupportCode } = require('cucumber');

// Custom World: each scenario gets a fresh instance of this class.
class World {
  meow() {
    console.log('Meow!');
  }
}

defineSupportCode(({ setWorldConstructor }) => {
  setWorldConstructor(World);
});
Wednesday, December 16, 2020 IWK Bureau

The Donald Trump-led skewering of foreign worker visas can't be undone in a hurry when the Trump lame duck ends. The easier task will be to nix executive actions; the tangled web of regulations will be much harder to undo, New York-based immigration attorney Cyrus Mehta told us during a wide-ranging interview on the US work visa playbook in a post-Trump world. The Trump lame duck has been a busy time for the US president's slash-and-burn immigration agenda. The president's A team on immigration policy has been pushing through a rash of rules and regulations to close out four years of systematic reductions in legal (and illegal) immigration to the United States. According to the Migration Policy Institute, a think tank, Trump has made more than 400 changes to immigration policy since his election. "The saving grace here is that Trump was not able to get Congress to pass any law. The hardest is to actually change a law," Mehta pointed out. "Congress has just not passed any law in the four years with respect to any type of substantive immigration category." Mehta put Trump-led immigration actions since 2017 in context and pointed to the general direction in which work visas, especially the H1B and L categories, could move once President-elect Joe Biden is in the White House. Below are highlights from the conversation.

The Trump hangover: "What's really significant when we talk about work visas - you have these travel bans, you have the H1B work visa ban that is still in effect, you have the immigrant visa ban that is still in effect. It's supposed to expire at the end of December, but I suspect that Trump is going to extend it, so long as he is president. And then it is going to be left up to the (Joe) Biden administration to rescind those executive orders."

Executive orders vs regulations: "It's going to be much easier to rescind executive orders than regulations. 
So, the travel bans that restrict H1B and L visa holders from coming to the United States - we hope that the Biden administration might be able to rescind them, even though they were done under the pretext of the Covid emergency. Trump also proposed regulations that would gut the H1B visa program as we know it. So for example, there is a rule that will make the H1B visa categories more restrictive. It will be much harder to win H1B visas, and if H1B visa workers are placed at client sites - which is the business model for the IT industry - the H1B visas will only be approved for one year. Thankfully, that regulation has temporarily been blocked. Also the Department of Labor regulation that exponentially increased the prevailing wages, which made it impossible for employers to file labor certifications or even renew H1B visa petitions - fortunately that regulation has also been blocked by the same court order. So right now we do have some respite with both these rules being blocked by the courts."

New Trump regulations before January 20?: "Trump could again issue regulations the same way as he did before - with notice and comment - but it might be hard for him to do it before January 20. If these regulations are caught up in the court system and in the appeal system, I can see the Biden administration not challenging them, and just conceding to the court challenges to these regulations. That's how I see these regulations disappearing. The public charge regulation is also a problematic one because it's a regulation: if you try to rescind the old regulation by promulgating a new regulation, that can also be subject to a court challenge by opponents. 
I'm hopeful that even with the public charge rule, Biden is going to go ahead and start a procedure to rescind the old regulation. Even that is caught up in court appeals, and there might be a way to maneuver such that the Biden administration just accepts defeat and does not challenge the court appeals to the public charge rule. There is a possibility that other people might want to intervene, and the court might still appoint an intervener even if the government is not interested in upholding this regulation. So all that can happen. Trying to rescind regulations is going to be much harder. The saving grace here is that Trump was not able to get Congress to pass any law. The hardest is to actually change a law. But Congress has just not passed any law in the four years with respect to any type of substantive immigration category."
In May 2017, early tracking had Wonder Woman opening with $65–75 million, and possibly as high as $105 million.[177][178][179][180][174] The film opened Friday, June 2, 2017, across 4,165 theaters and made $38.7 million on its opening day, including $3.7 million in IMAX. It was the biggest single-day gross for a woman-directed film, ahead of the $35.9 million opening Friday of Catherine Hardwicke's Twilight in 2008, and the biggest opening day for a woman-led comic book superhero film, ahead of Ghost in the Shell ($7 million).[181] This included the $11 million it made from Thursday previews, also the best start for a film directed by a woman, surpassing the $8.6 million of Fifty Shades of Grey, directed by Sam Taylor-Johnson, and the third-biggest of the year, behind Beauty and the Beast and Guardians of the Galaxy Vol. 2. Of that, $1.5 million came from IMAX screenings.[182][183]

Hermes attacked Wonder Woman there, refusing to simply give up the child, but during their battle, War ripped the baby from Demeter's womb and disappeared. Unable to let such a grave wound go unattended, Diana saw to Demeter first, and the goddess warned that War could not be trusted. Worriedly, Diana and Orion returned to Manhattan to find that War had returned the baby to Zola. At last, the baby and his mother were reunited, and Orion would not have to look any further for the child he needed to kill.[31]
College Physics (4th Edition), Chapter 6 Problems, page 227, problem 3:

We can find the work done by Hilda on the book: $W = F\,d\,\cos\theta = F\,(0)\,\cos\theta = 0$. Since the book does not move, Hilda does no work on the book.

Source: https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-6-problems-page-227/3
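The work formula used in the solution above generalizes to nonzero displacement. A minimal Python sketch, purely illustrative (the function name and the sample force/distance values are my own, not part of the textbook problem):

```python
import math

def work_done(force_n, displacement_m, angle_deg):
    """Work W = F * d * cos(theta), theta between force and displacement."""
    return force_n * displacement_m * math.cos(math.radians(angle_deg))

# Hilda pushes on a stationary book: zero displacement means zero work.
print(work_done(20.0, 0.0, 0.0))   # 0.0
# A 20 N force over 5 m at 60 degrees to the displacement:
print(work_done(20.0, 5.0, 60.0))  # ≈ 50 J, since cos 60° = 0.5
```

The zero-displacement case is exactly the textbook situation: any force multiplied by $d = 0$ gives zero work.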
Windows | Release: 10.10.2014 | Genre: Action

Journey to the heart of the Roman Empire and experience the brutality of battle like never before as "Ryse: Son of Rome" comes to PC with support for glorious 4K resolution.

Ryse for PC will come with bonus material originally released as downloadable content, including:
- The Colosseum Pack, containing two character skins and two Arena maps.
- The Mars' Chosen Pack, containing one new character skin, four Arena maps, and the new Survival mode.
- The Duel of Fates Pack, containing two character skins, two Arena maps, and one additional Survival map.
- The Morituri Pack, with three new Arena maps, two Survival maps, and five solo Arena maps.

NOTICE: The activation key must be used on a valid Steam account and requires an internet connection.

A riveting story of Roman revenge
"Ryse: Son of Rome" tells the story of Marius Titus, a young Roman soldier who witnesses the murder of his family at the hands of barbarian bandits, then travels with the Roman army to Britannia to seek revenge. Quickly rising through the ranks, Marius must become a leader of men and defender of the Empire on his quest to exact vengeance – a destiny he soon discovers can only be fulfilled much closer to home... Continuing Crytek's legacy of groundbreaking games, Ryse pushes PC hardware to its limits while drawing players deep into the bloody drama of ancient Rome. "Ryse: Son of Rome" is an immersive action-adventure story of struggle, brutality and heroism. It follows a fearless Roman soldier named Marius Titus who joins the army to avenge the slaying of his family and emerges as a hero who must fight to save the Roman Empire. 
"Ryse: Son of Rome" presents a cinematic re-creation of the Roman Empire, its people, conflicts and landscapes in breathtaking detail, and brings the brutality and intensity of Roman warfare to life in visceral detail.

The complete Ryse experience: The PC version of "Ryse: Son of Rome" offers the full experience, bundling the original Xbox One launch hit with all 4 DLC packs.

Next-generation cinematic immersion: Marius' tale of revenge comes to life through new advancements in performance capture, allowing players to interact with believably realistic characters.

Brutally intense combat: "Ryse: Son of Rome" delivers a visceral, brutally realistic combat experience with epic-scale battles. Relive the ruthless history of ancient Rome as you engage in raw, close-quarters combat against the barbarian hordes.

Cooperative gladiatorial combat in the Colosseum: Colosseum Mode plunges you into the Arena to fight alongside a friend against an ever-changing array of enemies and challenges, to the roar of thousands of spectators.

A dynamic battlefield: Colosseum Mode includes 25 multiplayer maps, with tile sets ranging from British camps to Roman villas and Egyptian deserts.

Forge your own legend: You decide your destiny, customizing your gladiator through gold and Valor (XP) with new armor, weapons, shields, and consumables to win the crowd and survive in the arena.

Taking full advantage of the PC: 4K gaming is another leap in graphics quality for PC gamers, and Ryse is the perfect showcase for what's now possible in high-end PC games. "Ryse: Son of Rome" leverages the power of Crytek's CRYENGINE and the latest high-end PC gaming technology to present conflict in the Roman Empire like you've never seen it before. 
Minimum system requirements:
OS: 64-bit Windows (Vista, 7, 8)
CPU: Intel Core i3 2.8 GHz (3220T) / AMD Phenom II X4 3.2 GHz (945); SSE1–3, 4+ logical processors
GPU: DirectX 11 graphics card with 1 GB video RAM (NVIDIA GeForce GTX 560 / AMD Radeon HD 7770)

Recommended system requirements:
OS: 64-bit Windows (Vista, 7, 8)
CPU: Intel Core i5 3.3 GHz (2500K) / AMD FX-6350 3.9 GHz
GPU: DirectX 11 graphics card with 2 GB video RAM (NVIDIA GeForce GTX 660 Ti / AMD Radeon 260X or 7850)

Genre: Action
DRM: Steam
Vultureşti is a Romanian commune located in Olt County, in the region of Oltenia. The commune has an area of 45.11 km², and its population was 2,501 inhabitants according to the 2007 census.
[![Build Status](https://travis-ci.org/wazery/norby.svg?style=flat-square)](https://travis-ci.org/wazery/norby) [![Code Climate](https://codeclimate.com/github/wazery/norby/badges/gpa.svg)](https://codeclimate.com/github/wazery/norby)

# Norby

This is a simple CLI program that simulates a toy robot moving on a square tabletop of dimensions 5 units x 5 units.

## Naming

> The name is derived from the fictional robot Norby, that small but very productive robot!

## Specifications

- There are no other obstructions on the table surface.
- The robot is free to roam around the surface of the table, but must be prevented from falling to destruction.
- Any movement that would result in the robot falling from the table must be prevented; however, further valid movement commands must still be allowed.

## Commands

It can read commands of the following form:

```
PLACE X,Y,F
MOVE
LEFT
RIGHT
REPORT
```

- **PLACE** will put the toy robot on the table in position X,Y, facing NORTH, SOUTH, EAST or WEST.
- The origin (0,0) can be considered to be the SOUTH WEST most corner.
- The first valid command to the robot is a **PLACE** command; after that, any sequence of commands may be issued, in any order, including another PLACE command. The application should discard all commands in the sequence until a valid PLACE command has been executed.
- **MOVE** will move the toy robot one unit forward in the direction it is currently facing.
- **LEFT** and **RIGHT** will rotate the robot 90 degrees in the specified direction without changing the position of the robot.
- **REPORT** will announce the X,Y and orientation of the robot.
- A robot that is not on the table ignores the MOVE, LEFT, RIGHT and REPORT commands.
- Test data is provided to exercise the application.

## Constraints

The toy robot must not fall off the table during movement. This also includes the initial placement of the toy robot. Any move that would cause the robot to fall must be ignored.
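The command semantics above can be sketched in a few lines. The following is a hypothetical Python re-implementation for illustration only; it is not the gem's actual code (the gem is written in Ruby), and the class and variable names are my own:

```python
# Minimal toy-robot simulator: 5x5 table, origin (0,0) at the south-west corner.
DIRECTIONS = ["NORTH", "EAST", "SOUTH", "WEST"]
MOVES = {"NORTH": (0, 1), "EAST": (1, 0), "SOUTH": (0, -1), "WEST": (-1, 0)}

class Robot:
    def __init__(self):
        self.x = self.y = self.facing = None  # not on the table yet

    def execute(self, command):
        parts = command.split()
        if parts[0] == "PLACE":
            x, y, facing = parts[1].split(",")
            x, y = int(x), int(y)
            if 0 <= x <= 4 and 0 <= y <= 4 and facing in DIRECTIONS:
                self.x, self.y, self.facing = x, y, facing
        elif self.facing is None:
            return  # discard everything until a valid PLACE
        elif parts[0] == "MOVE":
            dx, dy = MOVES[self.facing]
            nx, ny = self.x + dx, self.y + dy
            if 0 <= nx <= 4 and 0 <= ny <= 4:  # ignore moves that would fall off
                self.x, self.y = nx, ny
        elif parts[0] == "LEFT":
            self.facing = DIRECTIONS[(DIRECTIONS.index(self.facing) - 1) % 4]
        elif parts[0] == "RIGHT":
            self.facing = DIRECTIONS[(DIRECTIONS.index(self.facing) + 1) % 4]
        elif parts[0] == "REPORT":
            print(f"{self.x},{self.y},{self.facing}")

robot = Robot()
for cmd in ["PLACE 1,2,EAST", "MOVE", "MOVE", "LEFT", "MOVE", "REPORT"]:
    robot.execute(cmd)  # prints 3,3,NORTH
```

Note how both falling-off rules reduce to the same bounds check, applied once at PLACE time and once per MOVE.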
## Installation

Install the gem:

    $ gem install norby

And then execute:

    $ norby

## Example Input and Output

### a)

```
PLACE 0,0,NORTH
MOVE
REPORT
Output: 0,1,NORTH
```

### b)

```
PLACE 0,0,NORTH
LEFT
REPORT
Output: 0,0,WEST
```

### c)

```
PLACE 1,2,EAST
MOVE
MOVE
LEFT
MOVE
REPORT
Output: 3,3,NORTH
```

## Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/wazery/norby.
Eugamandus jamaicensis is a species of beetle described by Francesco Vitali in 2003. Eugamandus jamaicensis belongs to the genus Eugamandus and the longhorn beetle family (Cerambycidae). No subspecies are listed in the Catalogue of Life.
The 1977 Monaco Grand Prix was the sixth round of the 1977 Formula 1 World Championship. It was held on Sunday, 22 May 1977 on the Circuit de Monaco. The race was won by Jody Scheckter in a Wolf-Ford Cosworth, the sixth victory of his career. He was followed across the line by the Austrian Niki Lauda and the Argentine Carlos Reutemann, both in Ferraris. It was the 100th World Championship victory for a car powered by a Ford Cosworth engine.

Background

Technical aspects

On 30 March a project was presented for the construction of a street circuit at Monte Fontana, in the municipality of Camporosso in the province of Imperia. The 4,068 m track was intended to replace the Circuit de Monaco as the venue of the Monaco Grand Prix. Wolf returned to its WR1 model, while McLaren set the M26 aside for this Grand Prix and ran only the M23. Brabham introduced a much wider rear wing to provide more aerodynamic downforce.

Sporting aspects

Following a recurrence of the chest pain suffered during the warm-up of the Grand Prix at Jarama, which had forced Niki Lauda to miss that race, his participation in the Monaco Grand Prix was in doubt. His entry was confirmed only on 17 May, after a consultation with Dr. Leonardo Gui at the Rizzoli Clinic in Bologna. Had he been unavailable, Scuderia Ferrari would have entered Carlos Reutemann alone. Riccardo Patrese made his Formula 1 World Championship debut, replacing Renzo Zorzi at Shadow; the Paduan had won the 1976 European Formula 3 Championship. Ensign entered Clay Regazzoni, the team's regular driver, who planned to qualify the car on Thursday in the first practice session, then fly to the United States to attempt to qualify for the Indianapolis 500, and return to Europe for the Monegasque race on Sunday.

The Ensign team also entered Jacky Ickx, who had already started 102 championship Grands Prix and had been absent since the 1976 Japanese Grand Prix. Emilio de Villota, Chesterfield Racing, BRM and Williams did not reappear. The British Formula 1 Racing Team replaced Brian Henton with the Frenchman Jean-Pierre Jabouille, again providing the driver with a March; the team, however, did not show up for the event, and neither did LEC.

Qualifying

Report

On the first day of practice the fastest time was set by Hans-Joachim Stuck in a Brabham, at 1'30"73; the German headed his teammate John Watson. The first part of the day was cloudy, and rain reached the circuit in the afternoon, preventing the drivers from improving on their morning times. Niki Lauda finished fourth, slowed by the failure of a transmission joint that forced him to use the spare car. Mario Andretti, in a Lotus, winner of the previous two Grands Prix, complained about the overheating of his engine's magnesium cylinder heads. On Saturday the engine in Lauda's Ferrari was changed, while three new Ford Cosworth DFV engines, featuring a mechanical redesign, were fitted to the cars of Mario Andretti, James Hunt and Ronnie Peterson. The weather was sunny, which allowed the drivers to improve on Thursday's times. Pole position went, for the first time in his career, to John Watson. Not since Juan Manuel Fangio at Monza in 1951 had an Alfa Romeo-engined car taken pole position in a World Championship race; Watson himself had also taken pole in the Race of Champions, held in March and not valid for the championship. Jody Scheckter joined him on the front row (the South African, however, damaged his Wolf against the barriers at the Swimming Pool corners), while the second row went to Carlos Reutemann and Ronnie Peterson.

Andretti ended up tenth and was involved in an accident at the Casino, without physical consequences. Clay Regazzoni of Ensign did not take part in the second day of practice, preferring to fly to the United States to attempt to qualify for the Indianapolis 500; he abandoned the idea of returning, partly because his Thursday time alone would not have earned him a valid position on the starting grid. On Saturday Jacky Ickx qualified Regazzoni's car in a useful seventeenth position.

Results

The qualifying classification is not reproduced in this extract.

Race

Report

John Watson made a poor start and was immediately passed by Jody Scheckter; behind the leading pair came Carlos Reutemann, Hans-Joachim Stuck, Ronnie Peterson, Niki Lauda and James Hunt. The South African in the Wolf was pressured by Watson for several laps but, making no mistakes, kept the lead of the race. On lap nine Peterson was passed by both Lauda and Hunt, as his Tyrrell was suffering from brake problems. The leading positions remained unchanged until lap 20, when Hans-Joachim Stuck retired after a small fire broke out on his car. Five laps later it was Hunt's turn to retire, with a valve failure. On the following lap Lauda passed his teammate Reutemann. The order now saw Jody Scheckter still in the lead, followed by John Watson, Niki Lauda, Carlos Reutemann, Jochen Mass and Mario Andretti. Watson's constant pressure on Scheckter wore out the brakes of his Brabham, so much so that on lap 45 he ran wide at the chicane and was passed by Niki Lauda. Three laps later he was forced to retire with a gearbox failure, and Alan Jones moved into the points. The classification did not change in the final laps, apart from Jacques Laffite passing Vittorio Brambilla for seventh place on lap 69.

Jody Scheckter won for the sixth time in the World Championship, despite slowing visibly in the closing laps with a fuel-feed problem; Ford-Cosworth claimed its 100th victory (the first had come in the 1967 Dutch Grand Prix with Jim Clark in a Lotus). The full race classification is not reproduced in this extract.
Consumers of cider in the United States in 2018, by age
Published by Alexander Kunst, Sep 3, 2019

This statistic illustrates the share of people who drank cider in the United States as of 2018, with the results sorted by age. In 2018, 22.33 percent of respondents aged 18 to 29 years stated that they drink cider. The Statista Global Consumer Survey offers a global perspective on consumption and media usage, covering the offline and online world of the consumer.

Survey details: conducted November 22 to December 27, 2017 and April 11 to May 28, 2018, with 20,409 respondents. Answering this question was optional for respondents, based on the individual relevance of the topic (profiling data); see the Global Consumer Survey methodology for details. Multiple answers were possible. The original question was "Which of the following beverages do you regularly consume?"

Citation: Statista Survey (Global Consumer Survey). (June 28, 2018). Share of consumers of cider in the United States in 2018, by age [Graph]. In Statista. Retrieved January 21, 2021, from https://www.statista.com/statistics/228267/strong-cider-consumption-usa/
The Great Pacific Garbage Patch, the garbage island as big as the USA

In seas all over the world there are large islands of rubbish, some of them many times larger than Italy. It is a problem that should not be underestimated.

The problem of plastic pollution is serious. It is very often underestimated, especially when set against equally serious emergencies such as global warming. Consider that, according to one report, between 4.9 and 12.7 million tons of plastic end up in the sea every year. It is practically as if a garbage truck were pouring its entire contents into the water every minute for a year. If nothing changes, the amount of plastic in our seas could increase tenfold by 2025, and by 2050 there will be more plastic than fish. In the roughly 30 seconds it has taken to read this far, about 16,900 plastic bottles have been thrown into our sea, the Mediterranean (data based on a WWF report).

The garbage island

The situation is so serious that in the Pacific Ocean one can find the Pacific Trash Vortex (also called the Great Pacific Garbage Patch), a huge accumulation of floating garbage, a real island composed mostly of plastic. 
The accumulation formed from the 1980s onward, due to incessant pollution by humans and to an ocean current called the North Pacific subtropical gyre. Its extent is not known precisely: estimates range from 700,000 square kilometers up to more than 10 million square kilometers (i.e. from an area larger than the Iberian Peninsula to one larger than the United States), occupying between 0.41% and 5.6% of the Pacific Ocean. The island's weight is equally uncertain, with estimates from 3 million tons to 100 million. Like everything on Earth, this incredible and frightening garbage island is also home to life forms: about a thousand different types of heterotrophic, autotrophic, predatory and symbiotic organisms can be found there, including diatoms and bacteria, some of them seemingly capable of degrading plastics and hydrocarbons. On the surface, moreover, the island hosts numerous dangerous pathogens (such as viruses and bacteria). According to the United Nations Environment Programme (UNEP), this island will soon even be visible from space.

The other islands

The Great Pacific Garbage Patch is not a unique case. Currently there are 6 islands (including the Pacific one) in the world composed entirely of plastic and other materials. Between Peru and Chile you can find the South Pacific Garbage Patch, recently discovered, which is estimated to cover around 2.6 million square kilometers. How big is that? The surface of Italy is 301,338 square kilometers; the South Pacific Garbage Patch is about eight times larger than our country. Another island is located in the Atlantic Ocean: the North Atlantic Garbage Patch, originally documented in 1972. This garbage pile covers around 4 million square kilometers (16 times Italy) and is the second-largest agglomeration on Earth, with an estimated 200,000 pieces of debris per square kilometer. 
Then there is the South Atlantic Garbage Patch, which appears to be the smallest of all these clusters, with "only" 1 million square kilometers; it is located between South America and southern Africa. Due to its distance from trade routes, however, the growth of this island is poorly documented. The list seems endless, and the "latest arrivals" are the Indian Ocean Garbage Patch, in the Indian Ocean, and the Arctic Garbage Patch, both recently discovered. The Indian one had been hypothesized in 1988 by the National Oceanic and Atmospheric Administration, and does not appear as a continuous field of debris but contains scattered debris (10,000 pieces per square kilometer). The Arctic Garbage Patch, on the other hand, is located in the Barents Sea, the part of the Arctic Ocean north of Norway and Russia; it is the smallest and most recent plastic island found so far.

Can it all be cleaned up?

Is it possible to dispose of all this plastic? We don't know, but someone is trying: a young man named Boyan Slat. In 2013, at 18, Slat founded the non-profit The Ocean Cleanup. Its mission? To develop advanced technologies to remove plastic from the world's oceans. After its foundation, the organization collected 2.2 million dollars through a crowdfunding campaign with the precious help of 38,000 donors from 160 countries. Since its creation, The Ocean Cleanup has raised $31.5 million in donations. In 2013 the company deployed a device capable of passively collecting garbage in the ocean, and the system was employed to clean up the Great Pacific Garbage Patch. Initially the device had some problems, including a defect that caused collected plastic to leak back into the ocean; that problem has now been resolved. The Ocean Cleanup not only wants to collect this garbage, but wants to give it new life by recycling it and transforming it into new products. Obviously, the main goal remains the immense plastic island. 
With the creation of new machines to complete the venture, Boyan estimates that half of the great Pacific plastic island can be collected within the next 5 years. The project started in mid-2018 and will progressively make use of additional systems until it reaches its full potential by 2020.
The term Bozner Blutsonntag (Bolzano Bloody Sunday) refers to the events of 24 April 1921 in Bozen (Bolzano). They marked a first violent climax of Italian Fascism in South Tyrol, the majority German-speaking territory that had fallen to Italy after the First World War.

Background

In 1921, Italian Fascism in South Tyrol was still in its formative phase. In February of that year, the first Fasci di combattimento, at that time still an Italy-wide squad of thugs, had been founded in Bozen with some difficulty. Owing to the marginal role of the left in South Tyrol, the militant struggle of the Blackshirts against the left that was smoldering in Italy did not take place there. Instead, nationalist arguments came to the fore. Against this background, and in view of the elections to the Italian parliament scheduled for 15 May 1921, the political mood in the region was fermenting during the election campaign. A punitive expedition against the Germans in South Tyrol promised a corresponding nationwide resonance. The referendum on annexation to the German Reich, scheduled for 24 April 1921 in Austrian Tyrol, was moreover followed with particular attention because of its possible repercussions for South Tyrol. Indeed, Eduard Reut-Nicolussi of the Deutscher Verband, among others, also regarded the referendum as a protest against the Paris suburban treaties of 1919 and the division of Tyrol they entailed. The traditional-costume parade through Bozen, likewise planned for 24 April within the framework of the Bozen spring fair held between 16 and 26 April, was judged by the Blackshirts to be an anti-Italian, pan-German demonstration of the Pan-Germanists intended to support the referendum. In their eyes, the provocative parade had to be disrupted. Despite warnings, and trusting reassuring assertions by the Bozen chamber of commerce, the responsible civil commissioner general Luigi Credaro took no security measures.

Execution

On 16 April, Attilio Crupi of the Bozen Blackshirts traveled to Milan and secured the approval of the central committee of the Fasci for the action. Carrying a letter that called on the Fasci di combattimento of Brescia, led by Augusto Turati, and of Verona, led by Italo Bresciani, to assist, he met on his way back with Turati, Bresciani and Achille Starace, at that time political secretary of the Fasci in Trent, who assured him of their support. In the days before the costume parade, politically motivated actions increased in Bozen in the form of nocturnal vandalism, during which, among other things, house walls were daubed with nationalist slogans. The alarmed authorities consulted Rome and held talks with the Bozen Blackshirts and the Bozen fair. On 23 April, both sides met in Credaro's offices and pledged to refrain from provocations of any kind, whereupon Credaro approved both the costume parade and the march of the squadristi. At the same time, Credaro asked the authorities of the neighboring provinces to prevent Fascists from departing for Bozen, a request that was only inadequately complied with. The security forces in Bozen, for their part, decided to let the trains carrying the participants of the Fascist march enter the city. On the morning of 24 April, about 290 Blackshirts from the rest of Italy arrived at Bozen railway station, led by Francesco Giunta and Achille Starace, and were joined by about 120 local Fascists. Of the 1,000 announced participants, in the end slightly more than 400 marched under the overall command of Starace. Facing them were 170 Carabinieri, about 1,000 soldiers, 150 finance guards and the 30-man city watch of Bozen that Credaro had assembled. Already upon their arrival, it was clear that the Blackshirts would not refrain from provocations.

After word leaked out that the Fascists intended to hoist the tricolor on the town hall, part of the security forces was detached to guard the building. Other security forces were to protect the civil commissariat in the nearby Palais Widmann, the trade union house, the editorial offices of the South Tyrolean newspapers and other institutions regarded as threatened. About two dozen Carabinieri were also posted in the station area to prevent clashes with communist railway workers. Another part of the security forces escorted the column of Blackshirts to the seat of the Bozen Fasci in the Palais Pock. As the march passed the Waltherplatz, the absence of the Italian national flag was noticed, whereupon a delegation called on Credaro and demanded that the tricolor be flown. Mayor Julius Perathoner refused to comply with the demands of the squadristi, upon which Credaro himself ordered the flag to be hoisted. The situation among the Blackshirts assembled in front of the Palais Pock then calmed down for the time being. After the costume parade, which had started at 1 p.m., had in turn passed the Waltherplatz, a small group of Fascists who had broken away from the other participants of the march managed to slip into the procession. At the level of the Bozen fruit market, the squadristi provoked the spectators by mocking a captured inn sign in the form of a Habsburg double eagle, whereupon objects flew from the windows of the surrounding houses and buckets of water were emptied onto the squadristi. The squadristi answered with pistol shots and threw a hand grenade. The crowd broke apart in panic. About fifty South Tyroleans were injured, some of them seriously, and 15 had to be treated in hospital.

The teacher Franz Innerhofer from Marling, who had come to Bozen as the drummer of the Marling band, was shot dead in the doorway of the Bozen residence Stillendorf while trying to protect a boy. Whether another man died of injuries sustained in connection with the events of "Bloody Sunday" is disputed. On the Fascist side, according to Starace, four squadristi were injured.

Consequences

The military, which now intervened, confined itself to escorting the aggressors, who were celebrating their "victory" at the Palais Pock, to the railway station, from where they departed unmolested that same afternoon. The Bozen population reacted with outrage immediately after the events. Left-wing Italian groups joined the protest, and the Italian population of South Tyrol publicly distanced itself from the incidents before Credaro the following day. The security forces were accused of collaborating with the Blackshirts. Violent attacks on local Blackshirts occurred. On the following day, a general strike cutting across the language groups was called, supported by the trade unions and all parties. A large protest rally took place on the Viehmarktplatz (today's Verdiplatz). In the mourning session of the municipal council led by Julius Perathoner, Perathoner accused the Italian officers of fraternizing with the squadristi. The main responsibility for the incidents, however, was assigned solely to the Italian government, which had given the Fascists a free hand and had failed to protect the South Tyrolean population. On 26 April, Innerhofer's body was transferred from Bozen to Marling in a public cortege headed by numerous politicians and civil commissioner general Luigi Credaro. In Bozen, up to 15,000 people of all language groups followed the funeral procession, and along the rest of the route, too, numerous people lined the streets.

On 28 April, the burial took place in Marling, at which Reut-Nicolussi delivered the eulogy and interpreted Innerhofer as a victim in the struggle against Italy. In his first speech to parliament on 21 June 1921, Benito Mussolini assumed moral responsibility for the events in Bozen. At the same time, the Italian government tightened its South Tyrol policy, and Credaro was instructed to act more energetically. The conduct of the Italian state remained under criticism: in South Tyrol, the failure to take protective measures and the collaboration with the Fascists were criticized, whereas in Italy the government's allegedly over-indulgent South Tyrol policy was pilloried. For Fascism in South Tyrol, the attack initially represented a setback. The Italian-speaking population adopted a wait-and-see attitude, and the Italian left and its sympathizers intensified their anti-Fascist line. The upswing of the Fascist movement in South Tyrol that the Fascists had hoped for failed to materialize; only one new local branch of the Fasci di combattimento was subsequently founded, in Franzensfeste. The Deutscher Verband, by contrast, won an overwhelming victory in the parliamentary elections of May 1921. For the political parties of South Tyrol, the attack, which the Tiroler Tageszeitung called the "Bozener Blutsonntag", was an occasion to engage more closely with Italian Fascism. For the Social Democrat Franz Tappeiner, the peaceful coexistence of different ethnic groups was not a problem; the Fascist readiness for violence, however, had to be confronted, if necessary with arms. In contrast, conservative forces flirted with Fascism but, like Friedrich von Toggenburg, rejected Italian Fascism not on ideological but on purely nationalist grounds.

Perpetrators

As early as 24 April, the Italian prime minister Giovanni Giolitti ordered an investigation of the attack, as he feared negative reactions from abroad. That same day, the Bozen Fascist leaders Vittorio Moggio and Attilio Crupi were arrested. Adolfo Lutrario, whom Credaro charged with clearing up the case, pursued it only half-heartedly. Presumed witnesses such as Moggio and Crupi were never even questioned, and in the case of Starace the authorities contented themselves with a written statement. Since the South Tyroleans placed little trust in the investigation, support from that side also failed to materialize. The inquiries, conducted by Filippo Tagliavacche, a Fascist sympathizer and later OVRA agent, concluded that the group of squadristi marching in the costume parade was about ten men strong and that its members came from Verona, Brescia and Riva. Tagliavacche identified Ugo Saldarini, born in Milan in 1900, as the bearer of the inn-sign double eagle; Saldarini had already come to police attention as a violent participant in other actions of the squadristi. As the presumed bomb thrower, he identified the 22-year-old Bruno Zeni from Turin; both belonged to the Brescia section of the Fasci. It could never be conclusively established, however, who had actually thrown the hand grenade. Not a single suspect was ever brought to court for the attack on the costume parade or the killing of Innerhofer, the investigation of the murder not being the focus of the inquiries. Rumors soon circulated that the investigators were deliberately looking in the wrong place, and the Bozen squadrist Lino Mariotti was linked to the murder of Innerhofer. Mariotti, born in Friuli in 1900, had come to Bozen after the First World War and ran a stall at the fruit market.

He had joined the Fasci in 1920 and lived only a few meters from the scene of the crime. He died in Bozen in 1938 after a long illness. At his funeral, he was conspicuously honored by the Fascist party leadership for his special merits, which further fueled the rumors about his supposed connection with the murder of Innerhofer. Moggio and Crupi, who had been arrested after the attack, were likewise never prosecuted. They had claimed that they had only wanted to help calm the situation and prevent acts of violence. As early as the day after the arrests, Starace threatened serious consequences if they were not released immediately. The two nevertheless remained in custody at first, and rumors circulated that some 2,000 Fascists would march in on the day of Innerhofer's funeral to free them by force. After three weeks, they were finally released.

Commemoration

Today, a memorial plaque in the Stillendorf residence recalls the events. In front of it, on 23 November 2019, the presidents of Italy and Austria, Sergio Mattarella and Alexander Van der Bellen, laid a bouquet of white flowers as a sign of shared remembrance. On 25 April 2011, the day of Italy's liberation from Fascism and National Socialism, a square in the old town of Bozen (by the main building of the Free University of Bozen-Bolzano) was named after Franz Innerhofer. In 2021, the Andreas-Hofer-Bund Tirol, together with the Südtiroler Heimatbund, unveiled a memorial stone for Innerhofer at the provincial memorial site Tummelplatz in Amras near Innsbruck, which instrumentalizes him as a "blood witness for German South Tyrol".

Literature

Stefan Lechner: Der "Bozner Blutsonntag": Ereignisse, Hintergründe, Folgen. In: Hannes Obermair, Sabrina Michielli (eds.): Erinnerungskulturen des 20. Jahrhunderts im Vergleich – Culture della memoria del Novecento a confronto (= Hefte zur Bozner Stadtgeschichte/Quaderni di storia cittadina. 7). Stadtgemeinde Bozen, Bozen 2014, ISBN 978-88-907060-9-7, pp. 37-46. (Digitalisat)
Günther Pallaver: Südtirol studieren, um den Faschismus zu verstehen. In: Hannes Obermair, Sabrina Michielli (eds.): Erinnerungskulturen des 20. Jahrhunderts im Vergleich – Culture della memoria del Novecento a confronto (= Hefte zur Bozner Stadtgeschichte/Quaderni di storia cittadina. 7). Stadtgemeinde Bozen, Bozen 2014, ISBN 978-88-907060-9-7, pp. 55-63.
Der Tiroler, 26 April 1921, pp. 1f.
Gerhard Hölzle: Vor 100 Jahren: Der Bozner Blutsonntag. Seine Rezeption, von draußen betrachtet. In: Der Schlern, vol. 95, no. 4, 2021, pp. 62-69.

Web links

Conference of the City of Bozen on "Franz Innerhofer und der frühe Faschismus in Bozen"
HSozKult: Franz Innerhofer und der frühe Faschismus in Bozen / Franz Innerhofer e il primo fascismo a Bolzano, 15 April 2011
Maridl Innerhofer on the murder of her father on the Bozen Bloody Sunday
\section{\label{sec:Intro} Introduction} In recent years, a large number of measurements of fusion cross-sections involving weakly bound nuclei, such as $^{6,7}$Li and $^{9}$Be, on several targets have been performed at energies around the Coulomb barrier \cite{Canto15}. Owing to the low breakup threshold of the projectile nucleus, the breakup process plays a significant role in the description of the fusion phenomenon in these reactions. In many of these studies, the measured fusion cross-sections are found to be significantly suppressed with respect to coupled-channel calculations at above-barrier energies \cite{Dasgupta04,Mukherjee06,Gas09,Rath09,Parkar10,Harphool12,Palshetkar10,Pradhan11,Rath13,Wu03,Gomes06,Hu15}. Similar coupled-channel calculations for fusion cross-sections of strongly bound nuclei, which account for the couplings to inelastic and transfer processes, successfully explain the measured fusion cross-sections. The suppression observed in reactions with weakly bound nuclei is an almost universal phenomenon, seen in a large number of systems involving weakly bound projectiles over a wide range of target masses \cite{Parkar10,Harphool12}. The suppression is largely attributed to the loss of flux subsequent to the breakup of the weakly bound projectile. There is a large probability that one or more fragments emanating from the breakup may subsequently fuse with the target nucleus to form a compound nucleus. Therefore, for a consistent explanation of the fusion suppression, the fusion cross-section is often divided into two categories, namely, complete fusion (CF) and incomplete fusion (ICF). In the CF process, the whole projectile or all of its charged fragments fuse with the target nucleus, whereas in the ICF process only a part of the projectile is captured by the target and the remaining part escapes. Together these two processes constitute the total fusion (TF) process.
The knowledge of the relative contributions of ICF and CF is essential for understanding the suppression phenomenon at above-barrier energies. Despite considerable efforts over the last several decades, quantum mechanical calculations of individual CF and ICF cross-sections are very scarce in the literature. In CDCC-based coupled-channel calculations, which provide a fully quantum mechanical treatment, Hagino \textit{et al.} \cite{Hagino00} and Diaz-Torres and Thompson \cite{Diaz02} performed calculations by classifying `complete fusion' as absorption from the ground state and `incomplete fusion' as absorption from the breakup states. However, this method does not provide a suitable description for weakly bound nuclei that dissociate into fragments of comparable masses, because the absorption of the centre of mass of the projectile modeled in these works is not necessarily connected to the capture of all the fragments \cite{Diaz03}. In another CDCC work, by Rusek \textit{et al.} \cite{Rusek04}, a good description was obtained for certain observables, such as the total fusion (i.e., the sum of ICF and CF), the elastic scattering, and the non-capture breakup (NCBU) cross-sections. More recently, models based on the post-form theory for the calculation of inclusive breakup cross-sections have been successfully applied to the $^{6}$Li+$^{209}$Bi system \cite{Lei15}. Apart from these efforts, approximate methods have been developed, such as the classical-trajectory model of Diaz-Torres \cite{Diaz-Torres11}, which parameterizes ICF in terms of breakup probabilities for weakly bound nuclei at energies around the Coulomb barrier. A recent model by Boselli and Diaz-Torres \cite{Boseli15} uses a time-dependent wave-packet perspective for separating the CF and ICF processes.
In another work, by Marta \textit{et al.} \cite{Marta2014}, semiclassical model calculations were attempted to understand the CF and ICF data for $^{6,7}$Li+$^{209}$Bi, which, however, did not describe the data satisfactorily. In many of these works, the absorption from the breakup channel is a crucial ingredient for the calculation of ICF. The couplings arising from excitation to the projectile continuum within the CDCC approach provide a successful method for modeling the breakup process. Since the breakup process and the ICF arising from absorption following breakup are intricately related, it is important to understand how the breakup process is correlated with the ICF components. For a comprehensive understanding of the fusion process with weakly bound nuclei, the relative contributions of ICF and CF as a function of energy around the Coulomb barrier need to be investigated in detail. In our earlier work, we employed the absorption of the projectile fragments within the CDCC approach, which provided a reasonable simultaneous explanation of the CF and TF data for the $^{9}$Be projectile over a large energy and target-mass range \cite{9BePRC}. The absorption cross-sections obtained from the CDCC calculations were also found to successfully explain the universal suppression of fusion for the $^{9}$Be projectile with different targets. In these calculations, absorption cross-sections obtained using only one part of the fragment-target potential are used to calculate the ICF. The $^{6,7}$Li nuclei, with their well-defined cluster structure, are quite amenable to this approach for the simultaneous calculation of ICF, CF and TF. However, only a few simultaneous measurements of CF, ICF and TF are available for comparison with the calculations; most of the available data comprise either CF or TF alone.
In the present work, we have carried out calculations with the breakup-absorption model for the $^{6,7}$Li+$^{209}$Bi and $^{6,7}$Li+$^{198}$Pt systems. For the $^{6,7}$Li+$^{209}$Bi systems \cite{Dasgupta04}, cross-section data for CF, ICF and TF are available over a large energy range. In addition, limited ICF data along with CF data are available for the $^{6,7}$Li+$^{198}$Pt systems \cite{Shrivastava09,Shrivastava13}. These data have been utilized for the present investigations. Using the data and the calculations, we attempt to study the relative importance of the ICF fraction in the fusion process and also investigate how the ICF depends on the non-capture breakup as a function of energy.

\section{\label{sec:Caln} Calculation Details}

\begin{figure} \includegraphics[width=0.44\textwidth,trim=0.3cm 7.5cm 10.5cm 1cm, clip=true]{Fig1.pdf} \caption{\label{fig1}(Color online) (a) The data of complete fusion (CF), incomplete fusion (ICF) and total fusion (TF)=CF+ICF+Fission for the $^{6}$Li+$^{209}$Bi reaction from Ref.\ \cite{Dasgupta04} are compared with the calculations. The arrow indicates the position of the Coulomb barrier. (b) Comparison of the individual ICF contributions, $\alpha$-ICF and d-ICF, along with Tot-ICF, with the calculations (see text for details).} \end{figure}

\begin{figure} \includegraphics[width=0.44\textwidth,trim=0.3cm 7.5cm 10.5cm 1cm, clip=true]{Fig2.pdf} \caption{\label{fig2} (Color online) Same as Fig.\ 1 but for the $^{7}$Li+$^{209}$Bi reaction.} \end{figure}

\begin{figure} \includegraphics[width=0.44\textwidth,trim=0.3cm 7.5cm 10.5cm 1cm, clip=true]{Fig3.pdf} \caption{\label{fig3} (Color online) Same as Fig.\ 1 but for the $^{6}$Li+$^{198}$Pt reaction.} \end{figure}

\begin{figure} \includegraphics[width=0.44\textwidth,trim=0.3cm 7.5cm 10.5cm 1cm, clip=true]{Fig4.pdf} \caption{\label{fig4} (Color online) Same as Fig.\ 1 but for the $^{7}$Li+$^{198}$Pt reaction.} \end{figure}

We have performed detailed coupled-channels calculations to study the fusion process for the $^{6,7}$Li+$^{209}$Bi and $^{6,7}$Li+$^{198}$Pt systems. The continuum discretized coupled channels (CDCC) calculations were performed using the code FRESCO version 2.9 \cite{Thomp88}. The coupling scheme used in the CDCC is similar to that described in our earlier works \cite{Harphool08,Jha09}. In $^6$Li, couplings were included to the 1$^{+}$, 2$^{+}$, and 3$^{+}$ resonances and the L = 0,1,2,3 $\alpha$-d continuum, and in $^7$Li to the {1/2}$^{-}$ first excited state, the {5/2}$^{-}$ and {7/2}$^{-}$ resonances, and the L = 0,1,2,3 $\alpha$-t continuum. The binding potentials for $\alpha$-d in $^6$Li and $\alpha$-t in $^7$Li are taken from Refs.\ \cite{Kubo72} and \cite{Buck88}, respectively. In the CDCC calculations, the fusion cross section can be obtained as the total absorption cross section, which is equal to the difference of the total reaction cross section $\sigma_R$ and the cross section of all explicitly coupled direct reaction channels $\sigma_{D}$. The absorption cross sections, taken as the total fusion cross sections, are in turn obtained from the S-matrix elements as \begin{equation} \nonumber \sigma_{abs}= \sigma_{R} - \sigma_{D} \label{eq:1} \end{equation} where \begin{equation} \nonumber \sigma_{R} = \frac{\pi}{k^2}\sum_{l}(2l+1)(1-|S_l|^2) \end{equation} and $\hbar k$ represents the relative momentum of the two nuclei in the entrance channel. The required fragment-target potentials were generated in the cluster-folding model using real potentials, \textit{viz.}, V$_{\alpha-T}$ taken as the S\~{a}o Paulo potential \cite{Sau-Paulo}, while V$_{d-T}$ and V$_{t-T}$ were taken from Refs.\ \cite{Daehnick80} and \cite{Beccheeti69}, respectively.
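As a purely numerical illustration of the partial-wave sum for $\sigma_R$ above (not part of the FRESCO calculation itself; the $S_l$ values below are hypothetical placeholders), a minimal sketch could read:

```python
import numpy as np

def reaction_cross_section(k, s_l):
    """sigma_R = (pi / k^2) * sum_l (2l + 1) * (1 - |S_l|^2).

    k   : entrance-channel wave number (fm^-1)
    s_l : complex S-matrix elements for l = 0, 1, 2, ...
    """
    l = np.arange(len(s_l))
    return (np.pi / k**2) * np.sum((2 * l + 1) * (1.0 - np.abs(s_l) ** 2))

# Hypothetical S_l: strong absorption at low l, transparency at high l
s_l = np.array([0.0, 0.1, 0.5, 0.9, 1.0], dtype=complex)
sigma_r = reaction_cross_section(1.0, s_l)  # in fm^2 (1 fm^2 = 10 mb)
```

For fully transparent partial waves ($|S_l| = 1$) the sum vanishes and $\sigma_R = 0$, as expected from unitarity.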
In the calculations presented here, the fusion cross-sections are first calculated by including short-range imaginary (W$_{SR}$) volume-type potentials in the coordinates of both projectile fragments relative to the target, as in Ref.\ \cite{Diaz03}. The fusion cross-section is calculated as the amount of flux that leaves the coupled-channels set (the total absorption cross-section) because of the short-range imaginary part of the optical potential used for the fragment-target potentials. The use of this short-range imaginary potential is equivalent to the use of an incoming-wave boundary condition inside the Coulomb barrier. The $\sigma_{TF}$ calculated in this way corresponds to the sum of the complete fusion cross-section $\sigma_{CF}$ and the two incomplete fusion cross-sections, $\sigma_{\alpha-\textrm{ICF}}$ and $\sigma_{\textrm{d/t}-\textrm{ICF}}$. It is equivalent to the expectation value of the imaginary part with the three-body CDCC wave function of the system, including both the bound and scattering states, as discussed in Ref.\ \cite{Des15}. The CDCC calculations with the breakup couplings are performed with three choices of optical potentials, where W$_{SR}$ is used for (i) both projectile fragments relative to the target (Pot. A), (ii) the $\alpha$-T part only (Pot. B), and (iii) the d(t)-T part only (Pot. C). In all these calculations, an additional volume-type imaginary potential with parameters W=25 MeV, r$_w$=1.0 fm and a$_w$=0.4 fm, without any real part, is also present in the centre of mass of the whole projectile for the projectile-target radial motion. With these potential choices, we perform three independent calculations: one with all three imaginary components, and two others in which one of the two fragment-target imaginary components is disabled.
When the imaginary potential of a particular fragment is disabled, that fragment can no longer be captured on its own following breakup, and the calculated absorption cross-section is reduced by the corresponding incomplete fusion cross-section. The differences between these calculations therefore allow an estimate of the individual ICF channel cross-sections. By performing the three independent calculations, one can evaluate all three quantities separately: (i) $\sigma_{TF}$, (ii) $\sigma_{\textrm{d/t}-\textrm{ICF}}$, and (iii) $\sigma_{\alpha-\textrm{ICF}}$. The calculated CF cross-section is obtained by subtracting the total ICF from the TF cross-section. The optical-model potentials for the fragment-target interactions used in the CDCC calculations are given in Table\ \ref{Table1}. The present choice of optical potentials, in which the real part has the standard parameters of a global potential and the imaginary part has a short-range character, successfully describes the measured elastic scattering data for the $^{6}$Li+$^{209}$Bi system, providing a simultaneous description of the fusion and elastic scattering data. A satisfactory agreement with the elastic scattering data is also obtained for the $^{7}$Li+$^{208}$Pb angular distribution (data for the $^{7}$Li+$^{209}$Bi system are not available in the literature). In addition, the calculated $\sigma_{TF}$ is found to be rather insensitive to the parameters of the short-range imaginary potential in the range r$_{w}$ = 0.6 to 1.0 fm and a$_{w}$ = 0.1 to 0.4 fm. For the calculation of ICF, however, the radius parameter of the imaginary part is optimized against the higher-energy $\alpha$-ICF and d/t-ICF data of the $^{6,7}$Li+$^{209}$Bi systems and kept fixed for the remaining energies as well as for the $^{6,7}$Li+$^{198}$Pt systems.
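Schematically, and under the assumption implied above that disabling a fragment's short-range imaginary potential removes exactly that fragment's one-body capture, the bookkeeping of the three calculations can be summarized as:

```latex
% Sketch of the relations implied by the three potential choices
% (Pot. A: W_SR on both fragments; Pot. B: alpha-T only; Pot. C: d/t-T only).
\begin{align*}
\sigma_{\textrm{d/t-ICF}}      &= \sigma_{abs}(\textrm{Pot.~A}) - \sigma_{abs}(\textrm{Pot.~B}),\\
\sigma_{\alpha\textrm{-ICF}}   &= \sigma_{abs}(\textrm{Pot.~A}) - \sigma_{abs}(\textrm{Pot.~C}),\\
\sigma_{CF} &= \sigma_{TF} - \sigma_{\alpha\textrm{-ICF}} - \sigma_{\textrm{d/t-ICF}},
\qquad \sigma_{TF} \equiv \sigma_{abs}(\textrm{Pot.~A}).
\end{align*}
```

These relations are only a compact reading of the procedure described in the text; the actual extraction is performed numerically from the three CDCC runs.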
\begin{table} \caption{\label{Table1} Optical model potentials for the fragment-target interactions used in the CDCC calculations.} \begin{tabular}{cccccccc} \hline System & V$_{0}$ & r$_{0}$ & a$_{0}$ & W$_{SR}$ & r$_{w}$ & a$_{w}$ & \\ & (MeV) & (fm) & (fm) & (MeV) & (fm) & (fm) & \\ \hline \\ $\alpha$+$^{209}$Bi & \multicolumn{3}{c}{S\~{a}o Paulo pot.} & 25.0 & 0.62 & 0.40 & \\ d+$^{209}$Bi & 96.5 & 0.97 & 0.74 & 25.0 & 0.66 & 0.40 & \\ t+$^{209}$Bi & 160.8 & 0.97 & 0.72 & 25.0 & 0.69 & 0.40 & \\ $\alpha$+$^{198}$Pt & \multicolumn{3}{c}{S\~{a}o Paulo pot.} & 25.0 & 0.62 & 0.40 & \\ d+$^{198}$Pt & 97.7 & 0.96 & 0.74 & 25.0 & 0.66 & 0.40 & \\ t+$^{198}$Pt & 161.3 & 0.96 & 0.72 & 25.0 & 0.69 & 0.40 & \\ \hline \\ \end{tabular} \end{table}

\section{\label{sec:Result} Results and Discussion}

\begin{figure} \includegraphics[width=0.64\textwidth,trim=0.3cm 5.6cm 6.0cm 0.1cm, clip=true]{Fig5.pdf}% \caption{\label{ratiofig} (Color online) The ratios of cross-sections, $\sigma_{\textrm{ICF}}/\sigma_{\textrm{TF}}$, $\sigma_{\textrm{CF}}/\sigma_{\textrm{TF}}$ and $\sigma_{\textrm{ICF}}/\sigma_{\textrm{NCBU}}$, derived from the calculations as a function of E$_{\textrm{c.m.}}$/V$_{\textrm{B}}$ for the $^{6,7}$Li+$^{209}$Bi and $^{6,7}$Li+$^{198}$Pt systems are shown by dashed, dash-dotted and dotted lines, respectively. The symbols show the experimental data (see text for details).} \end{figure}

In Figs.\ \ref{fig1}(a), \ref{fig2}(a), \ref{fig3}(a) and \ref{fig4}(a), the results of the calculations for the TF, CF and ICF cross-sections are shown with long-dashed, short-dashed and dotted lines, respectively, along with the available measured data from Refs.\ \cite{Dasgupta04} and \cite{Shrivastava09,Shrivastava13} for the $^{6,7}$Li+$^{209}$Bi and $^{6,7}$Li+$^{198}$Pt systems, respectively. Bare calculations (without breakup couplings) were also performed, and the corresponding fusion cross-sections are denoted by dash-dot-dot lines in the above figures.
The Coulomb barrier positions are marked by arrows in all the figures. At energies above the Coulomb barrier, the calculations with and without the couplings differ negligibly, but at energies below the barrier the coupled TF cross-sections are enhanced in comparison to the bare TF cross-sections. The individual ICF cross-sections, $\sigma_{\alpha-\textrm{ICF}}$ and $\sigma_{\textrm{d/t}-\textrm{ICF}}$, described in the previous section, are extracted and shown in Figs.\ \ref{fig1}(b), \ref{fig2}(b), \ref{fig3}(b) and \ref{fig4}(b). In these figures, the long-dashed, dotted and short-dashed lines are the $\alpha$-ICF, d/t-ICF and Tot-ICF calculations, respectively. As can be seen from Figs.\ \ref{fig1}(b) and \ref{fig2}(b), the $\alpha$-ICF data are in good agreement with the calculations at all energies. In the case of the d-ICF data (Fig.\ \ref{fig1}(b)), the high-energy d-ICF data are overpredicted by the calculations. Similarly, for the t-ICF data (Fig.\ \ref{fig2}(b)), while the overall agreement of the calculations with the data is satisfactory, the high-energy data lie below the calculations. This may be because the long-lived residue from the d- or t-ICF followed by neutron evaporation, i.e., $^{209}$Po, was not measured in the experiment, although it has a significant contribution to the d/t-ICF. It is also interesting that the d-ICF and $\alpha$-ICF cross-sections in $^{6}$Li+$^{209}$Bi are of a similar order, whereas in $^{7}$Li+$^{209}$Bi the t-ICF cross-section is much higher than the $\alpha$-ICF, as is also evident from the data. Intuitively, one expects a larger d-ICF than $\alpha$-ICF cross-section, as the Coulomb barrier seen by d+$^{209}$Bi is lower than that of $\alpha$+$^{209}$Bi. The same argument holds for t-ICF versus $\alpha$-ICF in the $^{7}$Li+$^{209}$Bi reaction.
In the case of the $^{6}$Li+$^{198}$Pt system, unlike the other systems under study, higher-energy data are not available, and only the d-ICF data (Fig.\ \ref{fig3}(b)) were available for comparison with the calculations. These data were found to be highly underpredicted by the calculations. It was also noticed that the d-ICF cross-section reported in Ref.\ \cite{Shrivastava09} is even higher than the calculated total fusion cross-section at the last two measured energies, which is surprising. Besides this, for the $^{6}$Li+$^{198}$Pt system, the available d-capture data may also contain a contribution from direct d-transfer to the target apart from the d-fusion component. Since the calculations presented here predict only the d-fusion component, this can be one of the reasons for the discrepancy, apart from possible uncertainties in the data measurements for this particular system. For the $^{7}$Li+$^{198}$Pt system, both the t-ICF and $\alpha$-ICF data are nicely explained by the calculations (Fig.\ \ref{fig4}(b)). In this system also, we observe that the t-ICF cross-section is higher than the $\alpha$-ICF one, which is also evident from the data. For further confirmation of the above-mentioned observations, we need more simultaneous measurements of d-ICF, $\alpha$-ICF and t-ICF in various systems like $^{6,7}$Li+$^{209}$Bi. The measured d/t-ICF (with alpha as the outgoing channel) can also have contributions from transfer-induced breakup, which we have not taken into account in our calculations. Although the transfer process is important, it is not found to contribute significantly to the measured inclusive alpha cross-sections for several systems. In the studies with $^{6}$Li on $^{209}$Bi \cite{Santra12} and $^{159}$Tb \cite{Pradhan13}, it is pointed out that the d-ICF cross-sections are much more dominant than the transfer cross-sections in inclusive alpha measurements.
In a recent complete study of transfer-induced breakup with $^{7}$Li on $^{89}$Y \cite{Sanat16}, it was concluded that the transfer-induced breakup and non-capture breakup added together can account for only 8~\% of the inclusive alpha cross-section. The ICF cross-sections calculated in the present work represent the absorption from the breakup channel, and hence it is expected that non-capture breakup and ICF are competing processes. The ratios of cross-sections, $\sigma_{\textrm{ICF}}/\sigma_{\textrm{TF}}$, $\sigma_{\textrm{CF}}/\sigma_{\textrm{TF}}$ and $\sigma_{\textrm{ICF}}/\sigma_{\textrm{NCBU}}$, derived from the calculations as a function of E$_{\textrm{c.m.}}$/V$_{\textrm{B}}$ for the $^{6,7}$Li+$^{209}$Bi and $^{6,7}$Li+$^{198}$Pt systems are shown by dashed, dashed-dot and dotted lines in Fig.\ \ref{ratiofig}, respectively. The corresponding experimental data for $\sigma_{\textrm{ICF}}/\sigma_{\textrm{TF}}$ and $\sigma_{\textrm{CF}}/\sigma_{\textrm{TF}}$ are shown with filled circles and filled triangles, respectively, in Fig.\ \ref{ratiofig}. The following two observations can be made from the plots: (i) The $\sigma_{\textrm{ICF}}/\sigma_{\textrm{TF}}$ and $\sigma_{\textrm{CF}}/\sigma_{\textrm{TF}}$ ratios remain approximately constant over the energy range above the Coulomb barrier. At energies below the barrier, the $\sigma_{\textrm{ICF}}/\sigma_{\textrm{TF}}$ ratio increases while the $\sigma_{\textrm{CF}}/\sigma_{\textrm{TF}}$ ratio decreases. This shows the dominance of the ICF contribution to TF over that of CF at below-barrier energies. The $\sigma_{\textrm{ICF}}/\sigma_{\textrm{TF}}$ ratio at above-barrier energies gives the value of the suppression factor in CF, which is found to be in agreement ($\sim$ 30~\%) with the literature data for $^{6,7}$Li projectiles from various measurements \cite{Parkar10,Harphool12}. That means the suppression observed in CF at above-barrier energies is commensurate with the value of ICF.
(ii) The $\sigma_{\textrm{ICF}}/\sigma_{\textrm{NCBU}}$ ratio is nearly constant above the barrier, while it decreases at below-barrier energies. This indicates that below the barrier the probability of capturing one fragment from breakup (ICF) is much less than that of both fragments escaping (NCBU), while at above-barrier energies ICF gradually becomes more significant. \section{\label{sec:Sum} Summary} In the study of weakly bound nuclei, because of their low breakup thresholds, the breakup channel plays an important role in the reaction mechanism. Here we have attempted to find its effect on fusion cross-sections via Continuum Discretized Coupled Channel calculations. Cluster folding potentials in the real part, along with a short-range imaginary part, were used to calculate the CF, ICF and TF cross-sections for the $^{6,7}$Li+$^{209}$Bi and $^{6,7}$Li+$^{198}$Pt systems. A simultaneous explanation of the measured experimental data for the CF, ICF and TF cross-sections over the entire energy range is obtained for the first time using calculations in a full quantum mechanical approach. The TF cross-sections from the uncoupled and coupled calculations were found to match at energies above the barrier, while below the barrier the uncoupled TF is lower than the coupled one. The large difference between these results at below-barrier energies implies the strong role of breakup couplings in this energy regime. The individual ICF cross-sections imply that the d-ICF cross-section is of similar order to that of the $\alpha$-ICF cross-section in the case of $^{6}$Li, while the t-ICF cross-section is much larger than the $\alpha$-ICF cross-section in the case of $^{7}$Li, which is also evident from the experimental data. More simultaneous data on d-ICF, $\alpha$-ICF and t-ICF for various systems are required to further substantiate this observation.
The calculated ICF fraction, i.e.\ the ratio of the ICF to TF cross-sections, is found to be constant as a function of energy above the barrier and to increase at energies below the barrier. This ratio, which signifies the suppression of CF in TF, is constant at above-barrier energies and is in agreement with the available data for several systems. Below the barrier, the increase of this ratio shows the enhanced importance of the ICF contribution to TF. The ratio of the calculated ICF to NCBU shows the greater importance of NCBU at below-barrier energies, while at above-barrier energies ICF gradually becomes more significant. \begin{acknowledgments} One of the authors (V.V.P.) acknowledges the financial support through the INSPIRE Faculty Program, from the Department of Science and Technology, Government of India, in carrying out these investigations. \end{acknowledgments}
The title of the present series is Anachronism. In it, harmony, order, and meaning are deconstructed by placing unconventional, unusual, irrelevant, and inharmonious elements together, giving the familiar elements a different and new meaning. The main subjects of the series are chess pieces that appear in the dress and style of Qajar women and men, used as decorative elements and as symbols and signs of power. The chess pieces have thus been transformed into actors, with an appearance beyond that of a mere game. The shadows, however, play a meaningful role in these works, showing us the virtual reality of the pieces and, in a way, their original identity. The use of unconventional elements and polysemous components can engage the mindset of the audience, provoke thought and reflection, and suggest different meanings to the mind of the viewer.
\section{Introduction} \label{Intro} In causal inference, researchers typically aim at estimating causal effects in a particular ``source'' population based on randomized trials or observational studies in that population. When the source population is the sole population of interest, random samples from the source are representative, and standard techniques to estimate average treatment effects (ATEs) can be applied to obtain reliable estimates of effects in the whole population. However, in many studies (e.g. randomized trials in clinical medicine, policy analysis) samples are drawn from a different population, unrepresentative of the target of interest \citep{kennedy2015literature, bell2016estimates, allcott2015site}. In other words, the source and target populations can be different, and the ATE we obtain from the sample only applies to the source population -- and may not generalize to the target population directly. Failing to consider this lack of representation may yield unreliable conclusions that can even be harmful, especially in medicine and policy evaluations \citep{chen2021ethical}. One example of unrepresentative samples is when the distribution of some covariates (e.g. age and BMI) in the target population differ from that in the source population. When some of these covariates are effect modifiers (e.g. age and BMI may modify the effects of some medicine), the ATE in the target population can be quite different from that in the source population. There exists a rich literature in bridging findings from the source population to the target population \citep{cole2010generalizing, tipton2013improving, buchanan2018generalizing, dahabreh2019generalizing, dahabreh2020extending, chattopadhyay2022one}. Most of these papers adopt the idea that we first estimate the probability for a subject to be in the source population and use it as a reweighting term in the proposed estimator. 
For instance, \cite{dahabreh2019generalizing} and \cite{dahabreh2020extending} proposed three types of estimators based on outcome modeling, inverse probability weighting and doubly robust-style augmented inverse probability weighting. \cite{chattopadhyay2022one} developed a one-step weighting estimator where the weights are learned from a convex optimization problem to simultaneously model the inverse propensity score and outcome regression functions. However, the theoretical properties of the doubly robust estimators have not been well understood. The form of their second-order bias and the conditions required to ensure their asymptotic normality are unclear. Another limitation is that the previous work assumes the sets of covariates in the source population and the target population are the same. In practice there often exist some covariates that are available in the source, but not in the target. Simply ignoring them is not an efficient way to exploit the samples. For instance, including more covariates in the source may enable us to be more confident in the conditional exchangeability assumption in the source population. In this work, we fill these two gaps by developing efficiency theory and establishing theoretical guarantees for the doubly robust estimators. In particular, we generalize previous results to accommodate covariate mismatch in the source and target populations. After formulating the generalization and transportation functionals of interest, we examine the assumptions required to identify these functionals. When these functionals are identified, we derive the first-order influence functions and establish the asymptotic normality of a doubly robust estimator (under additional conditions). Simulations show how the proposed doubly robust estimator has smaller estimation error than a simple plug-in estimator.
To provide a complete story for generalization and transportation of causal effects, we then consider the minimax lower bound and higher-order estimation when the source population and the target population share the same set of covariates. We derive the minimax lower bound and propose a new higher-order estimator based on second-order influence functions, which attains the minimax lower bound in a broader regime than the doubly robust estimator. To the best of our knowledge, the minimax lower bound and higher-order estimation have not been studied in generalization and transportation settings. Finally we apply our proposed methods to transporting the causal effects of dietary intake on adverse pregnancy outcomes from an observational study to the whole U.S. female population as an illustration. The paper is organized as follows. In Section \ref{sec:Preliminaries} we introduce the setup and notation. In Section \ref{sec:Identification} we discuss the identification assumptions to identify the ATE in the target population. Efficiency theory and doubly robust methods are provided in Section \ref{sec:Efficiency-theory}. We then summarize the minimax lower bounds of the target functionals in Section \ref{sec:Minimax} and derive higher-order estimators in Section \ref{sec:Quadratic}. Simulation studies are presented in Section \ref{sec:Simulation} to explore finite-sample properties of our methods. In Section \ref{sec:Real-data} we provide a data analysis to illustrate the proposed methods, transporting effects of dietary intake on pregnancy outcomes. Finally we conclude with a discussion in Section \ref{sec:Discussion}. All the proofs and additional details on real data analysis are presented in the supplementary materials. \section{Preliminaries}\label{sec:Preliminaries} In this section we first introduce the setup in the generalization and transportation setting.
We then formalize the treatment effects of interest as statistical functionals with the potential outcome framework \citep{splawa1990application, rubin1974estimating}. Finally we introduce some notation that will be useful in presenting our results. \subsection{Data structure} In the generalization and transportation setting there are typically two populations, i.e., a source population and a target population. The source population is usually the underlying population of a randomized trial or observational study and is defined by enrollment processes and inclusion or exclusion criteria of the study. We assume we observe $n_1$ source samples $$\mathcal{D}_1 = \{ Z_i = (X_i,A_i,Y_i, S_i = 1), 1\leq i \leq n_1 \}$$ from the source population, where $X$ is a $d$-dimensional vector containing all covariates, $A$ is the treatment assignment and $Y$ is the outcome. We let $S$ be an indicator such that $S=1$ if a subject is in the source population and $S=0$ otherwise. We are interested in estimating the ATE in a target population without direct access to treatment and outcome information on target samples. The observational unit in the target population is $Z = (V, S=0)$ and the target dataset $\mathcal{D}_2$ consists of $n_2$ such realizations, i.e., $$\mathcal{D}_2 = \{Z_i = (V_i, S_i=0), n_1+1 \leq i \leq n_1 +n_2=n \} , $$ where, importantly, $V \subseteq X$ represents partial covariates in $X$, which may be a strict subset. \subsection{Effects of interest} Before defining our estimands of interest, we first discuss differences in the target population definition in the generalization and transportation setting. In the generalization setting, the source population is a subset of the target population. After we collect treatment and outcome information and estimate the ATE in the source population, we hope to \emph{generalize} the causal effects to the whole target population.
The most natural design is a trial nested within a cohort of eligible individuals \citep{dahabreh2019generalizing}. In such designs, researchers collect covariate information for all individuals, but only collect treatment and outcome information from a subset of them. This setup arises in comprehensive cohort studies \citep{olschewski1985comprehensive}, where only a few patients consent to randomization in a clinical trial, as well as clinical trials embedded in health-care systems where all individuals' information is collected routinely but only some of them are included in a trial \citep{fiore2016integrating}. In contrast, in the transportation setting, the source population is (at least partly) external to the target population \citep{cole2010generalizing}. The source population is different from the target population and source samples may not be representative of the target population. The goal is to \emph{transport} the causal effects from the source population to the target population. This setup arises widely in public policy research, where a randomized trial or observational study is conducted on selected samples while the target samples are from administrative databases or surveys and can be very different from the samples enrolled in the study (our data example in Section \ref{sec:Real-data} falls into this category, as will be discussed in more detail shortly). Now we define the generalization and transportation effects of interest. We use the random variable $Y^a$ to denote the potential (counterfactual) outcome we would have observed had a subject received treatment $A = a$, which may differ from the observed outcome $Y$. For simplicity we consider a binary treatment in this work, where $A=1$ means treatment and $A=0$ means control.
Then the ATE in the target population is \begin{equation}\label{generalization-effect} \psi := \mathbb{E}[Y^1 - Y^0] \end{equation} in the generalization case and \begin{equation}\label{transportation-effect} \theta := \mathbb{E}[Y^1 - Y^0|S=0] \end{equation} in the transportation case. Note that in the generalization case the source population is part of the target population and hence we take an expectation over the whole population. However, in the transportation case the source population may not be representative of the target population, and we only take an expectation in the target population (i.e. conditioning on $S=0$). Under standard causal assumptions (consistency, positivity \citep{rosenbaum1983central}, exchangeability \citep{hernan2023causal}), the ATE in the source population $\mathbb{E}[Y^1 - Y^0|S=1]$ can be identified and efficiently estimated. Mathematically, we do not have $\mathbb{E}[Y^1 - Y^0|S=0] = \mathbb{E}[Y^1 - Y^0|S=1] = \mathbb{E}[Y^1 - Y^0]$ in general, especially when the effect modifiers have different distributions in the source and the target populations. Hence we need additional assumptions and novel methodology to estimate the treatment effect in the target population. We define the mean potential outcomes in the target population as \begin{equation}\label{theta_a} \psi_a = \mathbb{E}[Y^a], \quad \theta_a = \mathbb{E}[Y^a|S=0] \end{equation} so $\psi_a$ and $\theta_a$ correspond to the generalization and transportation cases, respectively. In this paper we will focus on the identification and estimation of $\psi_a$ and $\theta_a$. After estimating $\psi_a$ or $\theta_a$, the ATE in the target population can be estimated by taking the difference. \subsection{Nuisance functions \& other notation} To present our results concisely we need the following notation on commonly used nuisance functions. \begin{itemize} \item The propensity score in the source population is $\pi_a(x) = \mathbb P(A = a | X = x, S = 1)$.
If the source dataset $\mathcal{D}_1$ is from a randomized trial then $\pi_a(x)$ is known. Otherwise we would need to estimate it from $\mathcal{D}_1$. \item The conditional probability of being selected into the source population (commonly referred to as the participation probability) is $\rho(v) = \mathbb P (S=1|V=v)$. \item The conditional mean and variance of the outcomes among subjects receiving treatment $A=a$ in the source population are $\mu_a(x) = \mathbb{E}[Y \mid X=x, A=a, S=1]$ and $\sigma_a^2(x) = \text{Var}(Y \mid X=x, A=a, S=1)$. \item The function obtained by further regressing $\mu_a$ on $V$ in the source population is $\tau_a(v) = \mathbb E[\mu_a(X)|V=v, S=1]$. \end{itemize} For a univariate function $f$ on variables $Z$ we use $\mathbb P_n [f(Z)]$ or $\mathbb P_n (f)$ to denote the sample average $\frac{1}{n}\sum_{i=1}^n f(Z_i)$. For a bivariate function $g$ we use $\mathbb U_n [g(Z_1,Z_2)]$ or $\mathbb U_n (g)$ to denote the U-statistic measure $ \frac{1}{n(n-1)} \sum_{i \neq j}g(Z_i, Z_j)$. The Hellinger distance $H^2(P,Q)$ between two distributions $P$ and $Q$ is defined as \[ H^2(P, Q)=\frac{1}{2} \int \left[ \sqrt{p(x)}-\sqrt{q(x)}\right]^2 \nu(d x) \] for a dominating measure $\nu$. We say a function $f$ is $s$-smooth if it is $\lfloor s\rfloor$ times continuously differentiable with derivatives up to order $\lfloor s\rfloor$ bounded by some constant $C>0$ and $\lfloor s\rfloor$-order derivatives Hölder continuous, i.e. \[ \left|D^\beta f(x)-D^\beta f\left(x^{\prime}\right)\right| \leq C\left\|x-x^{\prime}\right\|_2^{s-\lfloor s\rfloor} \] for all $\beta = (\beta_1,\dots, \beta_d)$ with $\sum_{i} \beta_i = \lfloor s\rfloor$, where $D^\beta=\frac{\partial^\beta}{\partial x_1^{\beta_1} \ldots \partial x_d^{\beta_d}}$ is the differential operator. The Hölder class, denoted by $\mathcal{H}(s)$, is the function class containing all $s$-smooth functions.
We denote the weighted $L_2$ norm with weight function $w$ as $\|f\|_w = \sqrt{\int f(z)^2w(z) d \mathbb P (z)}$ and when the weight function $w=1$ we abbreviate the notation as $\|f\|$. In this paper we mainly use $w = \rho \pi_a$ as a weight function. For a matrix $\Omega$ we let $\|\Omega\|$ and $\|\Omega\|_F$ denote its spectral norm and Frobenius norm, respectively. We write $a_n \lesssim b_n$ if $a_n \leq Cb_n$ for a positive constant $C$ and sufficiently large $n$. \section{Identification} \label{sec:Identification} In this section we discuss sufficient conditions to identify functionals in \eqref{theta_a} from the observable data. These assumptions are generalizations of the identification conditions used in \cite{dahabreh2019generalizing} and \cite{dahabreh2020extending} to the case $V\subseteq X$. (Note in Section \ref{sec:Sensitivity-analysis} we consider sensitivity analysis and allow several of the following assumptions to be violated.) \begin{assumption}\label{consistency} Consistency: $Y = Y^a \text{\, if \,} A=a.$ \end{assumption} \begin{assumption}\label{exchangeability} No unmeasured confounding in source: $(Y^0, Y^1) \independent A \mid X, S=1.$ \end{assumption} \begin{assumption}\label{positivity-treatment} Treatment positivity in source: $\pi_a(X) >0 \quad \mathbb P(\cdot|S=1) \text{ a.s. for all } a.$ \end{assumption} Assumption \ref{consistency} is also known as stable unit treatment value assumption (SUTVA) and requires no interference between different subjects, i.e. the outcome for an individual is not affected by other individuals' treatments. Assumption \ref{exchangeability} is a standard assumption used to identify average treatment effects. It holds if the source dataset comes from a randomized trial or if we collect enough covariates in $X$ so that the treatment process is completely explained by $X$. 
Assumption \ref{positivity-treatment}, also known as the overlap assumption, has been used in causal inference since \cite{rosenbaum1983central}. It guarantees that every subject in the source population has a positive probability of receiving each treatment $a$. With these three assumptions we are able to identify the average treatment effect in the source population. But we need additional assumptions for generalization and transportation. \begin{assumption}\label{transportability} Exchangeability between populations: $ S \independent Y^{a} \mid V.$ \end{assumption} \begin{assumption}\label{positivity-selection} Positivity of selection: $\rho(V)>0 \text{\, a.s.}$ \end{assumption} Assumption \ref{transportability} is critical to generalizing/transporting the effects from the source population to the target population \citep{kern2016assessing, dahabreh2019generalizing, dahabreh2020extending}. Under Assumption \ref{transportability} we have \[ \mathbb{E}[Y^a|V,S=1] = \mathbb{E}[Y^a|V] = \mathbb{E}[Y^a|V,S=0], \] which further implies \begin{equation}\label{cond_eff_tran} \mathbb{E}[Y^1-Y^0|V,S=1] = \mathbb{E}[Y^1-Y^0|V,S=0]. \end{equation} Hence, the source population and the target population have the same conditional average treatment effect. We essentially just need \eqref{cond_eff_tran} to identify the ATE in the target population. To state our results concisely we formalize the assumption in terms of the potential outcome $Y^a$ instead of the contrast $Y^1-Y^0$. For equality \eqref{cond_eff_tran} to hold, all effect modifiers that are distributed differently between the source and the target populations must be measured in $V$. Assumption \ref{positivity-selection} requires that in each stratum of the effect modifiers $V$, there is a positive probability of being in the source population for every individual. Thus all members in the target population are represented by some individuals in the source population.
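In practice, Assumption \ref{positivity-selection} can be probed empirically by estimating the participation probability $\rho(v)$ and inspecting its range; estimated values near zero flag strata of $V$ that are essentially absent from the source. A minimal sketch (the logistic working model and the simulated data below are illustrative assumptions, not part of the identification argument):

```python
import numpy as np

# Probe the positivity-of-selection assumption: estimate rho(v) = P(S=1|V=v)
# and inspect its minimum. Data-generating process is illustrative.
rng = np.random.default_rng(1)
n = 5000
V = rng.uniform(0, 1, n)
S = rng.binomial(1, 0.3 + 0.4 * V)      # true rho(v) = 0.3 + 0.4 v

# Simple logistic regression of S on (1, V), fit by gradient ascent
# on the mean log-likelihood.
Z = np.column_stack([np.ones(n), V])
beta = np.zeros(2)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-Z @ beta))
    beta += 0.2 * Z.T @ (S - p) / n

rho_hat = 1.0 / (1.0 + np.exp(-Z @ beta))
# If rho_hat.min() is close to zero, some strata of V are effectively
# unrepresented in the source and Assumption 5 is suspect.
print(rho_hat.min(), rho_hat.max())
```

Here the estimated participation probability stays well away from zero, so positivity of selection is plausible for this toy population; in real data one would repeat the check with a flexible model for $\rho$.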
\begin{theorem} \label{thm-identification} Under identification assumptions \ref{consistency}--\ref{positivity-selection}, the estimands $\psi_a$ and $\theta_a$ are identified as \begin{equation}\label{eq:identify} \begin{aligned} \psi_a = &\, \mathbb{E} \left\{\mathbb{E}\left[\mathbb{E}(Y \mid X, A=a, S=1) \mid V, S=1\right] \right\} \\ = & \, \mathbb{E} \{\mathbb{E}[\mu_a(X) \mid V, S=1] \}= \mathbb E [\tau_a(V)] \\ \theta_a =&\, \mathbb{E} \left\{\mathbb{E}\left[\mathbb{E}(Y \mid X, A=a, S=1) \mid V, S=1\right] |S=0\right\} \\ = & \, \mathbb{E} \{\mathbb{E}[\mu_a(X) \mid V, S=1] |S=0\}= \mathbb E [\tau_a(V)|S=0]. \end{aligned} \end{equation} \end{theorem} We can understand the above functionals by evaluating the three iterative expectations. First we regress the outcome $Y$ on $X$ among subjects receiving treatment $A=a$ in the source population and obtain $\mu_a(x)$, which contains information on the conditional treatment effect. Then we further regress $\mu_a$ on effect modifiers $V$ in the source population to obtain $\tau_a(V)$, which summarizes the conditional treatment effects within the subset of covariates $V$. Finally we take the mean of $\tau_a$ in the target population and obtain the target functional. The validity of the last step is guaranteed by Assumption \ref{transportability}, which implies the information on treatment effects contained in $\tau_a$ generalizes to the whole population. The proof of identifiability is provided in the appendix. Note that the mean potential outcome in the source population can be written as \[ \mathbb E [Y^a \mid S=1] = \mathbb E [\tau_a(V) \mid S=1]. \] It differs from the target functional only in the last step, where we average over $\tau_a$ in the target population instead of in the source population. 
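The three iterated regressions behind the identification formula can be sketched numerically with a toy simulation (the linear data-generating process and ordinary-least-squares working models below are illustrative assumptions, not the estimators studied later in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Toy data-generating process: V is the shared effect modifier, W a
# source-only covariate, X = (V, W); the treatment effect is 1 + 2V.
V = rng.uniform(0, 1, n)
W = rng.normal(0, 1, n)
S = rng.binomial(1, 0.3 + 0.4 * V)               # participation P(S=1|V)
A = np.where(S == 1, rng.binomial(1, 0.5, n), 0)  # randomized in source
Y = A * (1 + 2 * V) + W + rng.normal(0, 1, n)     # A, Y used only when S=1

def ols_fit(Z, y):
    """Least-squares fit with intercept; returns coefficient vector."""
    Zb = np.column_stack([np.ones(len(y)), Z])
    return np.linalg.lstsq(Zb, y, rcond=None)[0]

def ols_predict(beta, Z):
    return np.column_stack([np.ones(len(Z)), Z]) @ beta

theta_hat = {}
for a in (0, 1):
    src_a = (S == 1) & (A == a)
    # Step 1: mu_a(x) = E[Y | X, A=a, S=1], regressing Y on X = (V, W)
    beta_mu = ols_fit(np.column_stack([V[src_a], W[src_a]]), Y[src_a])
    mu_hat = ols_predict(beta_mu, np.column_stack([V[S == 1], W[S == 1]]))
    # Step 2: tau_a(v) = E[mu_a(X) | V, S=1], regressing mu_hat on V in source
    beta_tau = ols_fit(V[S == 1].reshape(-1, 1), mu_hat)
    # Step 3: average tau_a over the *target* samples (S=0)
    theta_hat[a] = ols_predict(beta_tau, V[S == 0].reshape(-1, 1)).mean()

ate_target = theta_hat[1] - theta_hat[0]   # estimates E[Y^1 - Y^0 | S=0]
```

In this toy model $\tau_1(v) - \tau_0(v) = 1 + 2v$ and the target density of $V$ is proportional to $1 - \rho(v)$, so the true transported ATE is $1 + 2\,\mathbb{E}[V\,|\,S=0] \approx 1.87$, which the three-step plug-in recovers up to sampling error.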
When the distributions of effect modifiers $V$ in the source population and the target populations are different, we have $\mathbb E [\tau_a(V) \mid S=1] \neq \mathbb E [\tau_a(V) \mid S=0]$ and thus the treatment effect in the source population may not generalize to the target population directly. In applications, one needs to carefully assess the five assumptions above. In general, these assumptions are untestable and their plausibility needs to be evaluated from substantive knowledge on the mechanism of treatment assignment and study participation. If some assumptions are likely to be violated, researchers should perform sensitivity analysis to assess the robustness of their results. We discuss one way of performing sensitivity analysis in the following section. \subsection{Sensitivity Analysis}\label{sec:Sensitivity-analysis} When the identification assumptions do not hold simultaneously, it is not guaranteed that identification results \eqref{eq:identify} hold and we cannot identify the target functionals from the observed data. But we can still derive bounds on them under some relaxation of the identification assumptions. These bounds provide us with a range of plausible values for the ATE and can be useful in some applications. \begin{assumption}\label{relax-exchangeability} Relaxation of Assumption \ref{exchangeability}: There exists a positive constant $\delta_1$ such that \[ |\mathbb{E}\left[Y^{a} \mid X, A=1, S=1\right] - \mathbb{E}\left[Y^{a} \mid X, A=0, S=1\right]| \leq \delta_1 \text{ a.s. for all } a \] \end{assumption} \begin{assumption}\label{relax-transportability} Relaxation of Assumption \ref{transportability}: There exists a positive constant $\delta_2$ such that \[ |\mathbb{E}[Y^a|V,S=0] - \mathbb{E}[Y^a|V,S=1]| \leq \delta_2 \text{ a.s. for all } a \] \end{assumption} We note that when Assumption \ref{exchangeability} and Assumption \ref{transportability} hold, we have $\delta_1 = \delta_2 = 0$. 
Hence Assumption \ref{relax-exchangeability} and Assumption \ref{relax-transportability} are indeed relaxations of Assumption \ref{exchangeability} and Assumption \ref{transportability}. In practice, the values of $\delta_1$ and $\delta_2$ may come from domain knowledge of the problem of interest. The following theorem characterizes the bounds on the ATE in the target population when the treatment is binary. \begin{theorem}\label{thm-sensitivity} Under Assumption \ref{consistency}, \ref{positivity-treatment}, \ref{positivity-selection}, \ref{relax-exchangeability} and \ref{relax-transportability}, we have \begin{equation*} \begin{aligned} \psi_a &\,\in \left[\mathbb E [\tau_a(V)] -\delta_1 \mathbb E[\mathbb P(A=1-a \mid V,S=1)] -\delta_2 \mathbb P(S=0), \right.\\ &\, \left. \mathbb E [\tau_a(V)] +\delta_1 \mathbb E[\mathbb P(A=1-a \mid V,S=1)] +\delta_2 \mathbb P(S=0)\right]. \\ \theta_a &\,\in \left[\mathbb E [\tau_a(V)|S=0] -\delta_1 \mathbb E[\mathbb P(A=1-a \mid V,S=1) \mid S=0] -\delta_2, \right.\\ &\, \left. \mathbb E [\tau_a(V)|S=0] +\delta_1 \mathbb E[\mathbb P(A=1-a \mid V,S=1) \mid S=0] +\delta_2\right]. \end{aligned} \end{equation*} Hence the ATE in the generalization case, $\psi_1 - \psi_0$, is in the interval \[ \left[ \mathbb E[\tau_1(V) - \tau_0(V)] -\delta_1 -2\delta_2 \mathbb P(S=0), \mathbb E[\tau_1(V) - \tau_0(V)] +\delta_1 +2\delta_2 \mathbb P(S=0) \right], \] and the ATE in the transportation case, $\theta_1 - \theta_0$, is in the interval \[ \left[\mathbb E [\tau_1(V)-\tau_0(V)|S=0] - \delta_1-2\delta_2, \mathbb E [\tau_1(V)-\tau_0(V)|S=0] + \delta_1+2\delta_2\right]. \] \end{theorem} Since the efficiency theory in Section \ref{sec:DR-estimation} directly holds for $\mathbb E [\tau_a(V)]$ and $\mathbb E [\tau_a(V)|S=0]$, one can use the doubly robust estimator to estimate them efficiently. When the specific values of $\delta_1$ and $\delta_2$ are available, one can directly construct the bounds in Theorem \ref{thm-sensitivity}.
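For concreteness, the transportation bound can be evaluated directly once a point estimate and $(\delta_1, \delta_2)$ are supplied (a sketch; the numbers are purely illustrative and the point estimate is assumed to come from the doubly robust estimator):

```python
# Sketch of the sensitivity interval for the transported ATE: the interval
# is point estimate +/- (delta1 + 2*delta2). The input tau_diff_target is
# an assumed, already-computed estimate of E[tau_1(V) - tau_0(V) | S=0].

def transport_ate_bounds(tau_diff_target, delta1, delta2):
    """Interval for theta_1 - theta_0 under the relaxed assumptions."""
    half_width = delta1 + 2 * delta2
    return (tau_diff_target - half_width, tau_diff_target + half_width)

# Example: randomized source (delta1 = 0), mild violation of exchangeability
# between populations (delta2 = 0.1) around a point estimate of 0.5.
lo, hi = transport_ate_bounds(0.5, delta1=0.0, delta2=0.1)
# With delta1 = 0, the sign of the effect flips once 2*delta2 >= |0.5|,
# i.e. delta2 >= 0.25.
```

Plotting `lo` and `hi` over a grid of $(\delta_1, \delta_2)$ then gives the robustness surface discussed next.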
If exact domain knowledge on the precise values of $\delta_1$ and $\delta_2$ is unavailable, we can estimate the bounds as a function of $(\delta_1,\delta_2)$ and, for example, determine which values of $\delta_1$ and $\delta_2$ substantially change the results (e.g., flip the sign of the treatment effects). For instance, when we are confident about Assumption \ref{exchangeability} (e.g., the source dataset comes from a randomized experiment), we can set $\delta_1 =0$. Then the value of $\delta_2$ that changes the sign of the effect is $|\mathbb E [\tau_1(V)-\tau_0(V)|S=0]|/2$. This value reflects the robustness of our results when the identification assumptions do not necessarily hold. \section{Efficiency Theory and Doubly Robust Estimation}\label{sec:Efficiency-theory} In this section we develop nonparametric theory for estimation of the ATE in the target population. Namely, we first derive the efficient influence function, together with the nonparametric efficiency bound. The nonparametric efficiency bound provides a benchmark for efficient estimation in a nonparametric model, indicating the best possible performance in a local asymptotic minimax sense \citep{van2000asymptotic}. Next we propose a doubly robust estimator of $\psi_a$ and $\theta_a$ based on the influence function, which is shown to be asymptotically normal and attain the efficiency bound under weak high-level conditions. \subsection{Efficient Influence Function and Efficiency Bound}\label{sec:EIF-EB} We first introduce the problem faced by the plug-in estimator and motivate the study of efficient influence functions. Denote the plug-in estimators of the nuisance functions $(\mu_a, \tau_a, \pi_a, \rho)$ as $(\widehat{\mu}_a, \widehat{\tau}_a, \widehat{\pi}_a, \widehat{\rho})$. Based on the identification result for $\psi_a$ in equation \eqref{eq:identify}, a plug-in estimator of $\psi_a$ is then given by \[ \widehat{\psi}_a = \mathbb P_{n} (\widehat{\tau}_a) = \frac{1}{n} \sum_{i=1}^{n} \widehat{\tau}_a(V_i).
\] This plug-in estimator would be $\sqrt{n}$-consistent if we used correct parametric models to estimate all the nuisance functions. However, there is generally not sufficient background knowledge to ensure correct specification of such parametric models; thus analysts often use flexible non-parametric methods to avoid model misspecification. However, under such circumstances, the conditional bias of the plug-in estimator is of order $\|\widehat{\tau}_a - \tau_a\|$, which is typically slower than the $\sqrt{n}$-rate, perhaps much slower when the nuisance functions are complex and the number of covariates is large. Hence the plug-in estimator generally suffers from slow convergence rates and a lack of tractable limiting distributions. These drawbacks make it difficult to estimate $\psi_a$ accurately and perform statistical inference with plug-in estimators. To address these difficulties, one can derive the efficient influence functions of the target functionals. The efficient influence function is critical in non-parametric efficiency theory \citep{bickel1993efficient, tsiatis2006semiparametric, van2000asymptotic,van2003unified, kennedy2022semiparametric}. Mathematically, the influence function is the derivative in a Von Mises expansion (i.e., distributional Taylor expansion) of the target statistical functional. In the discrete case, it coincides with the Gateaux derivative of the functional when the contamination distribution is a point-mass. Influence functions are important in the following respects. First, the variance of the influence function is equal to the efficiency bound of the target statistical functional, which characterizes the inherent estimation difficulty of the target functional and provides a benchmark to compare against when we construct estimators. 
Moreover, it allows us to correct for first-order bias in the plug-in estimator and obtain doubly robust-style estimators, which enjoy appealing statistical properties even if non-parametric methods with relatively slow rates are used in nuisance estimation. In the following discussions, we will first present the efficient influence function of $\psi_a$ and $\theta_a$. We then derive the doubly robust estimator and establish its $\sqrt{n}$-consistency and asymptotic normality under appropriate conditions. The efficient influence functions and efficiency bounds are summarized in the following results. \begin{lemma}\label{lem-if} Under an unrestricted nonparametric model, the efficient influence function of $\psi_a$ is given by \[ \phi_a^{ge} (Z) = \frac{I(A=a, S=1)(Y-\mu_a(X))}{\rho(V) \pi_a(X)} + \frac{I(S=1)(\mu_a(X) - \tau_a(V))}{\rho(V)} + \tau_a(V) - \psi_a \] and the efficient influence function of $\theta_a$ is given by \begin{equation*} \begin{aligned} \phi_{a}^{\text{tr}} (Z) &=\frac{1}{\mathbb P(S=0)}\left\{\frac{I(A=a, S=1) (1 - \rho(V))\left(Y-\mu_{a}(X)\right)}{\rho (V) \pi_a(X)}\right.\\ &\left.+\frac{I(S=1) (1 - \rho(V))\left(\mu_{a}(X)-\tau_{a}(V)\right)}{\rho(V)} +I(S=0)\left[\tau_{a}(V)-\theta_{a}\right]\right\}. 
\end{aligned} \end{equation*} \end{lemma} \begin{theorem}\label{thm-if} The nonparametric efficiency bound of $\psi_a$ is \begin{equation*} \begin{aligned} \sigma_{a, \text{ge}}^2 &=\mathbb{E}\left[\frac{\mathbb P(S=1|X) \operatorname{Var}(Y \mid X, A=a, S=1)}{\rho^2 (V) \pi_a(X)}\right]\\ &+\mathbb{E}\left[\frac{ \operatorname{Var}(\mu_a(X)|V,S=1)}{\rho(V)}\right] + \operatorname{Var}(\tau_a(V)) \end{aligned} \end{equation*} and the nonparametric efficiency bound of $\theta_a$ is \begin{equation*} \begin{aligned} \sigma_{a, \text{tr}}^2 &=\frac{1}{\mathbb P^2(S=0)}\left\{\mathbb{E}\left[\frac{\mathbb P(S=1|X) (1 - \rho(V))^2 \operatorname{Var}(Y \mid X, A=a, S=1)}{\rho^2 (V) \pi_a(X)}\right]\right.\\ &\left.+\mathbb{E}\left[\frac{(1 - \rho(V))^2 \operatorname{Var}(\mu_a(X)|V,S=1)}{\rho(V)}\right] + \mathbb P(S=0)\operatorname{Var}(\tau_a(V)|S=0) \right\}. \end{aligned} \end{equation*} \end{theorem} The efficiency bounds in Theorem \ref{thm-if} show how particular nuisance quantities determine the estimation difficulty of our target functionals. Specifically, the efficiency bounds of $\psi_a$ and $\theta_a$ both depend on \begin{itemize} \item The inverse propensity score $1/\pi_a(X)$ measuring how likely an individual will receive treatment $A=a$. \item The conditional variance $\sigma_a^2(x) =\text{Var}(Y \mid X=x, A=a, S=1)$, which measures how much variation of $Y$ can be explained by $X$ for subjects receiving treatment $a$ in the source population. \item The conditional variance $ \text{Var}(\mu_a(X) \mid V=v, S=1)$, which measures how much variation of $\mu_a(X)$ can be explained by $V$ for subjects in the source population. \item The variance of $\tau_a(V)$ in the target population, i.e. $\text{Var}(\tau_a(V))$ in the generalization case and $\text{Var}(\tau_a(V)|S=0)$ in the transportation case. \end{itemize} There are also some differences in the efficiency bounds of two functionals. 
First, the efficiency bound in the transportation case depends on the probability of being in the target population $\mathbb P (S=0)$ explicitly. Moreover, the first term in the efficiency bound $\sigma_{a,\text{ge}}^2$ depends on $\rho(V)$ via its reciprocal while the first term in $\sigma_{a,\text{tr}}^2$ depends on the inverse odds of being in the source population $(1-\rho(V))/\rho(V)$. This implies the inverse odds ratio $(1-\rho(V))/\rho(V)$ may be a more fundamental quantity than $1/\rho(V)$ in transportation problems, and we will see this phenomenon in Section \ref{sec:Quadratic} as well. The efficient influence functions in Theorem \ref{thm-if} generalize those of \citet{dahabreh2019generalizing, dahabreh2020extending} to the setting where there can be a mismatch between the covariates $V$ in the target population, and the covariates $X$ in the source population. To be concrete, in the special case $V=X$ the efficient influence functions of $\psi_a$ and $\theta_a$ are \[ \phi_a^{ge} (Z) = \frac{I(A=a, S=1)(Y-\mu_a(X))}{\rho(X) \pi_a(X)} + \mu_a(X) - \psi_a, \] \[ \phi_a^{tr} (Z) = \frac{1}{\mathbb P(S=0)} \left\{\frac{I(A=a, S=1)(1-\rho(X))(Y-\mu_a(X))}{\rho(X) \pi_a(X)} + I(S=0)( \mu_a(X) - \theta_a) \right\} \] and the corresponding efficiency bounds are \[ \sigma_{a, \text{ge}}^2 =\mathbb{E}\left[\frac{ \operatorname{Var}(Y \mid X, A=a, S=1)}{\rho (X) \pi_a(X)}\right]\\ + \operatorname{Var}(\mu_a(X)) \] \[ \sigma_{a, \text{tr}}^2 =\frac{1}{\mathbb P^2(S=0)}\left\{\mathbb{E}\left[\frac{ (1 - \rho(X))^2 \operatorname{Var}(Y \mid X, A=a, S=1)}{\rho (X) \pi_a(X)}\right] + \mathbb P(S=0)\operatorname{Var}(\mu_a(X)|S=0) \right\}. \] These results can be derived separately starting from the functional $\psi_a = \mathbb E[\mathbb E(Y|X,A=a,S=1)]$ and $\theta_a = \mathbb E[\mathbb E(Y|X,A=a,S=1)|S=0]$. 
Alternatively, one can set $V=X$ in Lemma \ref{lem-if} and Theorem \ref{thm-if} to obtain the results by noting the second terms vanish in each formula due to $\mu_a(X) = \tau_a(V)$ and $\text{Var}(\mu_a(X)|X,S=1) = 0$. From this perspective, the second terms in the general case $V \subset X$ come from an extra step where we regress the conditional mean $\mu_a(X)$ on possible effect modifiers $V$. As we mentioned above the efficiency bound characterizes the fundamental statistical difficulty of estimating the target functionals, and acts as a nonparametric analog of the Cramer-Rao bound. Specifically, no estimator can have smaller mean square error than the efficiency bound in a local asymptotic minimax sense, as summarized in the following Corollary \ref{cor:local-minimax}. \begin{corollary}\label{cor:local-minimax} For any estimators $\widehat{\psi}_a$ and $\widehat{\theta}_a$, we have \begin{equation*} \begin{aligned} &\, \inf _{\delta>0} \liminf _{n \rightarrow \infty} \sup _{\mathbb{Q}:\text{TV}(\mathbb P, \mathbb{Q})<\delta} n \mathbb{E}_{\mathbb{Q}}\left[\{\widehat{\psi}_a-\psi_a(\mathbb{Q})\}^2\right] \geq \sigma_{a, \text{ge}}^2 (\mathbb P)\\ &\, \inf _{\delta>0} \liminf _{n \rightarrow \infty} \sup _{\mathbb{Q}:\text{TV}(\mathbb P, \mathbb{Q})<\delta} n \mathbb{E}_{\mathbb{Q}}\left[\{\widehat{\theta}_a-\theta_a(\mathbb{Q})\}^2\right] \geq \sigma_{a, \text{tr}}^2 (\mathbb P) \end{aligned} \end{equation*} where $\text{TV}(\mathbb P, \mathbb{Q})$ is the total variation distance between $\mathbb P$ and $\mathbb{Q}$ and $\sigma_{a, \text{ge}}^2 (\mathbb P)$ and $\sigma_{a, \text{tr}}^2 (\mathbb P)$ are the nonparametric efficiency bounds in Theorem \ref{thm-if} evaluated at $\mathbb P$. 
\end{corollary} We have characterized the efficiency bounds with efficient influence functions, which implies that (without further assumptions) the asymptotic local minimax mean squared error of any estimator scaled by a factor of $n$ cannot be smaller than these bounds \citep{van2000asymptotic}. The next step is to correct for the first-order bias of plug-in-style estimators by instead deriving doubly robust estimators, as detailed in the following subsection. \subsection{Doubly Robust Estimation}\label{sec:DR-estimation} After deriving the influence functions, we can correct for the first-order bias of the plug-in estimator via the following doubly robust estimators: \begin{equation}\label{dr-est-generalization} \widehat{\psi}_{a}^{dr} = \mathbb P_n \left\{\frac{I(A=a, S=1) \left(Y-\widehat{\mu}_{a}(X)\right)}{\widehat{\rho} (V) \widehat{\pi}_a(X)} +\frac{I(S=1) \left(\widehat{\mu}_{a}(X)-\widehat{\tau}_{a}(V)\right)}{\widehat{\rho}(V)} +\widehat{\tau}_{a}(V)\right\} , \end{equation} \begin{equation}\label{dr-est-transportation} \begin{aligned} \widehat{\theta}_{a}^{dr} &=\frac{1}{\widehat{\mathbb P}(S=0)}\mathbb P_n \left\{\frac{I(A=a, S=1) (1 - \widehat{\rho}(V))\left(Y-\widehat{\mu}_{a}(X)\right)}{\widehat{\rho} (V) \widehat{\pi}_a(X)}\right.\\ &\left.+\frac{I(S=1) (1 - \widehat{\rho}(V))\left(\widehat{\mu}_{a}(X)-\widehat{\tau}_{a}(V)\right)}{\widehat{\rho}(V)} +I(S=0)\widehat{\tau}_{a}(V)\right\}. \end{aligned} \end{equation} The doubly robust estimators combine simple plug-in-style estimators with inverse-probability-weighted estimators to correct for the first-order bias. For instance, in the doubly robust estimator $\widehat{\psi}_{a}^{dr}$ the last term $\mathbb P_n[\widehat{\tau}_a(V)]$ is the outcome regression-based plug-in estimator, and the first two terms are centered inverse-probability-weighted terms motivated by the influence function in Lemma \ref{lem-if}.
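To make the estimator \eqref{dr-est-generalization} concrete, the sketch below computes $\widehat{\psi}_a^{dr}$ from arrays of observed data and pre-computed nuisance estimates. Nuisance fitting and sample splitting are omitted, and all names are illustrative rather than part of any package:

```python
import numpy as np

def psi_dr(y, a, s, mu_hat, tau_hat, pi_hat, rho_hat, a_level=1):
    """Doubly robust estimator of the generalization functional psi_a.

    y, a, s : outcome, treatment, and selection indicator (S=1 means source)
    mu_hat  : estimates of mu_a(X_i)  (outcome regression)
    tau_hat : estimates of tau_a(V_i) (regression of mu_a on V)
    pi_hat  : estimates of pi_a(X_i)  (propensity score)
    rho_hat : estimates of rho(V_i)   (selection probability)
    """
    ind = (a == a_level) & (s == 1)
    term1 = ind * (y - mu_hat) / (rho_hat * pi_hat)   # reweighted outcome residual
    term2 = (s == 1) * (mu_hat - tau_hat) / rho_hat   # reweighted regression residual
    return np.mean(term1 + term2 + tau_hat)           # plug-in term plus corrections
```

In practice the nuisance estimates passed in would be computed on a separate fold, as required by Theorem \ref{thm-dr-generalization}.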
Compared with the well-known doubly robust estimator for the ATE \[ \mathbb P_n \left \{ \frac{I(A=a)(Y-\widehat{\mu}_a(X))}{\widehat{\pi}_a(X)} + \widehat{\mu}_a(X) \right \} \] in the generalization and transportation setting the participation probability also needs to be modeled and incorporated into the reweighting terms. Moreover, an extra term appears in \eqref{dr-est-generalization} and \eqref{dr-est-transportation} due to further regressing $\mu_a(X)$ on $V$, as similarly discussed in Section \ref{sec:EIF-EB}. For simplicity we define the uncentered influence function terms in the brackets above as \begin{equation*} \varphi_{a}^{\text{ge}} (Z) =\frac{I(A=a, S=1) \left(Y-\mu_{a}(X)\right)}{\rho (V) \pi_a(X)} +\frac{I(S=1) \left(\mu_{a}(X)-\tau_{a}(V)\right)}{\rho(V)} +\tau_{a}(V). \end{equation*} \begin{equation*} \begin{aligned} \varphi_{a}^{\text{tr}} (Z) &=\frac{I(A=a, S=1) (1 - \rho(V))\left(Y-\mu_{a}(X)\right)}{\rho (V) \pi_a(X)}\\ &+\frac{I(S=1) (1 - \rho(V))\left(\mu_{a}(X)-\tau_{a}(V)\right)}{\rho(V)} +I(S=0)\tau_{a}(V). \end{aligned} \end{equation*} The following theorems characterize the properties of these new doubly robust estimators. \begin{theorem}\label{thm-dr-generalization} (Doubly robust estimation of generalization functional) Suppose the nuisance functions $(\widehat{\mu}_a, \widehat{\tau}_a ,\widehat{\pi}_a, \widehat{\rho})$ are estimated from a separate independent sample. Further assume our estimates satisfy $\left\|\widehat{\varphi}_{a}^{\text{ge}}-\varphi_{a}^{\text{ge}}\right\|_{2}=o_{\mathbb P}(1)$, and $\widehat{\rho}(V), \widehat{\pi}_a (X) \geq \epsilon >0$ for some positive constant $\epsilon$. Then we have \[ \widehat{\psi}_a^{\text{dr}} - \psi_a = \mathbb{P}_n (\phi_a^{\text{ge}}) + O_\mathbb P\Big(\left\|\widehat{\mu}_{a}-\mu_{a}\right\|\|\widehat{\pi}_a - \pi_a\| + \|\widehat{\rho}-\rho\|\left\|\widehat{\tau}_a-\tau_{a}\right\| \Big) + o_{\mathbb P}(1/\sqrt{n}) . 
\] If the nuisance estimators further satisfy the rate conditions \begin{equation*} \begin{aligned} &\left\|\widehat{\mu}_{a}-\mu_{a}\right\|\|\widehat{\pi}_a - \pi_a\|=o_{\mathbb P}\left(1/\sqrt{n}\right), \\ &\|\widehat{\rho}-\rho\|\left\|\widehat{\tau}_a-\tau_{a}\right\|=o_{\mathbb P}\left(1/\sqrt{n}\right), \end{aligned} \end{equation*} then $\widehat{\psi}_a^{\text{dr}}$ is $\sqrt{n}$-consistent and asymptotically normal, with asymptotic variance equal to the nonparametric efficiency bound $\sigma_{a, \text{ge}}^2$ of Theorem \ref{thm-if}, and is thus also locally asymptotically minimax optimal in the sense of Corollary \ref{cor:local-minimax}. \end{theorem} \begin{theorem}\label{thm-dr-transportation} (Doubly robust estimation of transportation functional) Suppose the nuisance functions $\widehat{\mu}_a, \widehat{\tau}_a, \widehat{\pi}_a, \widehat{\rho}$ are estimated from a separate independent sample. Further assume our estimates satisfy $\left\|\widehat{\varphi}_{a}^{\text{tr}}-\varphi_{a}^{\text{tr}}\right\|_{2}=o_{\mathbb P}(1)$, $\mathbb P(S=0)>0$, and $\widehat{\rho}(V), \widehat{\pi}_a (X) \geq \epsilon >0$ for some positive constant $\epsilon$. Then we have \[ \widehat{\theta}_a^{\text{dr}} - \theta_a = \mathbb{P}_n (\phi_a^{\text{tr}}) +O_\mathbb P\Big(\left\|\widehat{\mu}_{a}-\mu_{a}\right\|\|\widehat{\pi}_a - \pi_a\| + \|\widehat{\rho}-\rho\|\left\|\widehat{\tau}_a-\tau_{a}\right\| \Big) + o_{\mathbb P}(1/\sqrt{n}).
\] If the nuisance estimators further satisfy the rate conditions \begin{equation*} \begin{aligned} &\left\|\widehat{\mu}_{a}-\mu_{a}\right\|\|\widehat{\pi}_a - \pi_a\|=o_{\mathbb P}\left(1/\sqrt{n}\right), \\ &\|\widehat{\rho}-\rho\|\left\|\widehat{\tau}_{a}-\tau_{a}\right\|=o_{\mathbb P}\left(1/\sqrt{n}\right), \end{aligned} \end{equation*} then $\widehat{\theta}_a^{\text{dr}}$ is $\sqrt{n}$-consistent and asymptotically normal, with asymptotic variance equal to the nonparametric efficiency bound $\sigma_{a, \text{tr}}^2$ of Theorem \ref{thm-if}, and is thus also locally asymptotically minimax optimal in the sense of Corollary \ref{cor:local-minimax}. \end{theorem} \begin{remark} For simplicity, we assume all the nuisance estimators are constructed from a separate independent sample with the same size $n$ as the estimation sample over which $\mathbb P_n$ takes an average. Using the same sample to both estimate the nuisance functions and average the (uncentered) influence functions can yield similar results by additionally imposing empirical process conditions to avoid overfitting. For instance, one can assume the nuisance functions and their estimates belong to a Donsker class and arrive at similar estimation guarantees. However, such assumptions are hard to verify in practice, and simple sample splitting allows us to dispense with them: one can randomly split the data into folds and use different folds to estimate the nuisance functions and to average the influence functions. To recover full-sample efficiency, one can swap the folds, repeat the same procedure, and average the results; this is known as cross-fitting and is commonly used in the literature \citep{bickel1988estimating, robins2008higher, zheng2010asymptotic, chernozhukov2018double, kennedy2020sharp}. In this paper all the results are based on a single-split procedure, with the understanding that extending to procedures based on cross-fitting is straightforward.
\end{remark} We note that in Theorems \ref{thm-dr-generalization} and \ref{thm-dr-transportation} we do not require that each individual nuisance function converge at the $\sqrt{n}$-rate, as might be required in the plug-in estimator case. The condition is instead on the product of convergence rates, i.e. $\left\|\widehat{\mu}_{a}-\mu_{a}\right\|\|\widehat{\pi}_a - \pi_a\| = o_{\mathbb P}(1/\sqrt{n})$ and $\left\|\widehat{\tau}_{a}-\tau_{a}\right\|\|\widehat{\rho} - \rho\| = o_{\mathbb P}(1/\sqrt{n})$. This shows the key property of doubly robust estimators: after we correct for the first-order bias, the error only involves second-order products and hence is ``doubly small''. In applications, such conditions on the convergence rates are much easier to satisfy. Estimators like ours, whose errors involve multiple nuisance functions, are sometimes referred to as multiply robust estimators \citep{tchetgen2012semiparametric}, since there are multiple ways in which the error term can be $o_{\mathbb P}(1/\sqrt{n})$. For instance, both (1) the quarter rates $\left\|\widehat{\mu}_{a}-\mu_{a}\right\| = o_{\mathbb P}(n^{-1/4})$ and $\|\widehat{\pi}_a - \pi_a\| = O_{\mathbb P}(n^{-1/4})$, and (2) $\left\|\widehat{\mu}_{a}-\mu_{a}\right\| = o_{\mathbb P}(1)$ and $\|\widehat{\pi}_a - \pi_a\| = O_{\mathbb P}(n^{-1/2})$ (e.g., when the parametric model for the propensity score is known exactly) satisfy the condition; further, $n^{-1/4}$-style rates can be attained under appropriate smoothness, sparsity, or other structural assumptions. So we may apply flexible non-parametric methods (e.g. random forests) or high-dimensional models (e.g. Lasso regression) to estimate the nuisance functions and still maintain the $\sqrt{n}$-consistency and asymptotic normality of our effect estimator; this is the main advantage of doubly robust estimators over plug-ins. In the special case $V=X$ (i.e.
the source and the target population share the same set of covariates), the conditions in Theorem \ref{thm-dr-generalization} and Theorem \ref{thm-dr-transportation} reduce to $\left\|\widehat{\mu}_{a}-\mu_{a}\right\|\|\widehat{\pi}_a - \pi_a\| = o_{\mathbb P}(1/\sqrt{n})$ and $\left\|\widehat{\mu}_{a}-\mu_{a}\right\|\|\widehat{\rho} - \rho\| = o_{\mathbb P}(1/\sqrt{n})$. Compared with the ATE case, where we only require $\|\widehat{\mu}_{a}-\mu_{a}\|\|\widehat{\pi}_a - \pi_a\| = o_{\mathbb P}(1/\sqrt{n})$, an extra condition on the convergence rate of the participation probability model is needed in the generalization and transportation problems. We conclude this section with additional comments on estimating $\tau_a(V)$. In real applications one needs to first estimate $\mu_a(X)$ for each data point and then further regress $\widehat{\mu}_a$ on the partial covariates $V$ in the source dataset, known as regression with estimated or imputed outcomes \citep{kennedy2020towards, foster2019orthogonal}. Faster convergence rates for estimating $\tau_a(V)$ can be achieved by adopting the stability framework in \cite{kennedy2020towards} or the orthogonal statistical learning framework in \cite{foster2019orthogonal}. For example, instead of regressing $\widehat{\mu}_a(X)$ on $V$, one can construct a pseudo-outcome \[ \widehat{g}(Z)=\frac{I(A=a)(Y-\widehat{\mu}_a(X))}{\widehat{\pi}_a(X)} + \widehat{\mu}_a(X) \] for each sample and regress $\widehat{g}(Z)$ on $V$ \citep{kennedy2020towards}. Under suitable conditions such an estimator enjoys a faster convergence rate than the naive plug-in. \section{Minimax Lower Bounds}\label{sec:Minimax} In Section \ref{sec:Efficiency-theory} we established local asymptotic minimax optimality of doubly robust estimators (under certain conditions). In this section we examine the minimax lower bounds from a global perspective, i.e., the minimax rate over suitable model classes, more generally when parametric $\sqrt{n}$ rates are not attainable.
The minimax rate provides an important benchmark to compare against when constructing estimators. If an estimator has estimation error guarantees matching the minimax rate, then one may stop searching for estimators with smaller estimation error and conclude that the estimator is optimal in terms of worst-case rates. If the minimax rate is not achieved, one may study alternative estimators with smaller statistical error, or sharper bounds for the problem. In this section we derive the fundamental minimax lower bounds for estimating the ATE in the target population in the special case $V=X$ (so the source population and target population share the same set of covariates). We introduce the ideas and techniques that are useful in deriving minimax rates in Appendix \ref{sec:minimax-general}. Then we apply these tools to establish the minimax lower bounds for estimating the ATE in the target population. \subsection{Minimax Lower Bounds in Generalization and Transportation}\label{sec:minimax-rates} In the generalization and transportation setting, consider the target functionals in the case $V=X$. When the identification assumptions hold, we can identify the effects as \begin{equation}\label{eq:identification-v=x} \begin{aligned} \psi_a =&\, \mathbb{E}[Y^a] = \mathbb{E}\{\mathbb{E}[Y|X,S=1,A=a]\} \\ \theta_a =&\, \mathbb{E}[Y^a|S=0] = \mathbb{E}\{\mathbb{E}[Y|X,S=1,A=a]|S=0\}. \end{aligned} \end{equation} We restrict the covariates $X$ to the range $[0,1]^d$ in this section.
Consider the following model classes for the generalization and transportation functionals, respectively: \begin{equation*} \begin{aligned} \mathcal{P}_{\text{ge}}=\{(f, \rho, \pi_a, \mu_a): &\, \frac{1}{\rho \pi_a} \text{ is } \alpha\text{-smooth}, \mu_a \text{ is } \beta\text{-smooth}, f\rho \pi_a=1 / 2, \\ &\, \rho\pi_a \text{ and } \mu_a \text{ are bounded away from 0 and 1}\} \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \mathcal{P}_{\text{tr}}=\{(f, \rho, \pi_a, \mu_a): &\, \frac{1-\rho}{\rho \pi_a} \text{ is } \alpha\text{-smooth}, \mu_a \text{ is } \beta\text{-smooth}, f\rho \pi_a=1 / 2, \\ &\, \rho \pi_a \text{ and } \mu_a \text{ are bounded away from 0 and 1}\}. \end{aligned} \end{equation*} Here $f$ is the density of the covariates $X$. We note that the selection/treatment probability $\rho \pi_a$ is parameterized jointly in the model class $\mathcal{P}_{\text{ge}}$. In other words, one can intuitively view $I(S=1, A=a)$ (i.e., being both selected into the source population and assigned treatment $a$) as a new treatment. The probability of getting treated under this ``compound'' treatment is exactly $\rho \pi_a$. One minor difference between $\mathcal{P}_{\text{ge}}$ and $\mathcal{P}_{\text{tr}}$ is that in $\mathcal{P}_{\text{ge}}$ we impose smoothness conditions on $1/\rho \pi_a$, while in $\mathcal{P}_{\text{tr}}$ the smoothness condition is imposed on $(1-\rho)/\rho \pi_a$. As pointed out in the discussion of efficiency bounds in Section \ref{sec:EIF-EB} and of quadratic estimation in Section \ref{sec:Quadratic}, the inverse odds ratio $(1-\rho)/\rho$ turns out to be a more fundamental quantity in transportation problems. To be consistent with these results, we also impose a smoothness condition on the inverse odds ratio when deriving the minimax rate. The following theorem characterizes the minimax lower bounds for estimating the generalization and transportation functionals.
The strategy of constructing two distribution classes $P_{\lambda}$ and $Q_{\lambda}$ is similar to the techniques used in proving the minimax rate for the ATE, as in \cite{robins2009semiparametric}. \begin{theorem}\label{thm-minimax} Let $s=(\alpha+\beta)/2$ denote the average smoothness of the nuisance functions. The minimax rate of the generalization functional $\psi_a$ over $\mathcal{P}_{\text{ge}}$ is lower bounded by \[ \inf_{\widehat{\psi}_a} \sup_{\mathbb P \in \mathcal{P}_{\text{ge}}} \left( \mathbb{E}_{\mathbb P} (\widehat{\psi}_a - \psi_a)^2 \right)^{1/2} \gtrsim \begin{cases}n^{-1/(1+d/4s)} & \text { if } s<d/4 \\ n^{-1/2} & \text { otherwise. }\end{cases} \] The minimax rate of the transportation functional $\theta_a$ over $\mathcal{P}_{\text{tr}}$ is lower bounded by \[ \inf_{\widehat{\theta}_a} \sup_{\mathbb P \in \mathcal{P}_{\text{tr}}} \left( \mathbb{E}_{\mathbb P} (\widehat{\theta}_a - \theta_a)^2 \right)^{1/2} \gtrsim \begin{cases}n^{-1/(1+d/4s)} & \text { if } s<d/4 \\ n^{-1/2} & \text { otherwise. }\end{cases} \] \end{theorem} Although the minimax lower bounds above are the same as those for the ATE, we believe these results are not trivial, and presenting them helps us gain a more complete insight into the generalization and transportation functionals. Following the efficiency theory for the ATE, the doubly robust estimator of $\psi_a$ and $\theta_a$ may achieve the minimax rate in some regimes. For instance, under the conditions in Theorem \ref{thm-dr-generalization} and assuming $\|\widehat{\pi}_a \widehat{\rho} - \pi_a \rho\| \|\widehat{\mu}_a - \mu_a\| = o_{\mathbb P}(1/\sqrt{n})$, the doubly robust estimator is $\sqrt{n}$-consistent and attains the minimax rate (this corresponds to the smooth regime $s \geq d/4$). However, in the less smooth regime $s < d/4$, even if we can estimate $\rho \pi_a$ at a minimax rate (i.e.
$\|\widehat{\pi}_a \widehat{\rho} - \pi_a \rho\| = O_{\mathbb P}(n^{-\frac{\alpha}{2\alpha + d}})$ and $\|\widehat{\mu}_a - \mu_a\| = O_{\mathbb P}(n^{-\frac{\beta}{2\beta + d}})$), the necessary condition for the doubly robust estimator to achieve the rate $n^{-(2 \alpha+2 \beta) /(2 \alpha+2 \beta+d)}$ is \[ \frac{\alpha}{2\alpha+d} + \frac{\beta}{2\beta+d} \geq \frac{2\alpha + 2\beta}{2\alpha + 2\beta +d}, \] which can be restrictive and motivates us to improve the doubly robust estimator. The main potential drawback of the doubly robust estimator is that it only corrects for the first-order bias of the plug-in estimator. In the following section, we propose a higher-order/quadratic estimator based on a second-order Von Mises expansion \citep{robins2009quadratic}, which also takes the second-order bias into consideration and achieves the minimax lower bound in a broader regime. \section{Higher-Order Estimation}\label{sec:Quadratic} In this section we study a higher-order (quadratic) estimator of the functionals of interest. An introduction to quadratic Von Mises calculus is provided in Appendix \ref{sec:Quadratic-VonMises}. We apply the second-order Von Mises expansion to the generalization and transportation functionals in the case $V=X$ and propose quadratic estimators of them in Section \ref{sec:Quadratic-estimation}. The error guarantees of our estimators are then established under mild assumptions. \subsection{Quadratic Estimators in Generalization and Transportation} \label{sec:Quadratic-estimation} We now present our higher-order estimator. We first define some objects that will be useful in presenting the theory of this section.
Namely let: \begin{equation*} \begin{aligned} \Omega = &\, \int b(x)b(x)^{\top} \rho(x) \pi_a(x) dF(x), \\ \widehat{\Omega} = &\, \int b(x)b(x)^{\top} \widehat{\rho}(x) \widehat{\pi}_a(x) d\widehat{F}(x), \\ \Pi_b (g) (x) =&\, b(x)^{\top} \Omega^{-1}\int b(t)g(t)\rho(t) \pi_a(t) dF(t),\\ \widehat{\Pi}_b (g) (x) =&\, b(x)^{\top} \widehat{\Omega}^{-1}\int b(t)g(t)\rho(t) \pi_a(t) dF(t).\\ \end{aligned} \end{equation*} Here $b: \mathbb{R}^d \mapsto \mathbb{R}^k$ is a $k$-dimensional basis ($k$ corresponds to the dimension of the subspace $L$ in Section \ref{sec:Quadratic-VonMises}). $\Omega$ is the Gram matrix of the basis $b$ if we define the inner product between two functions $f,g$ as $\langle f,g \rangle = \int f(x)g(x)\rho(x)\pi_a(x) dF(x)$, and $\widehat{\Omega}$ is the estimated Gram matrix in which all nuisance functions are replaced with their estimators. ${\Pi}_b (g)$ is the projection of $g$ onto the linear span of $b$, and $\widehat{\Pi}_b (g)$ is the estimated projection of $g$, in which the Gram matrix $\Omega$ is replaced with its estimator $\widehat{\Omega}$. With this notation, the non-centered first-order and (approximate) second-order influence functions of the generalization functional are \begin{equation*} \begin{aligned} \phi_{a,1}^{\text{ge}}(Z) = &\, \frac{I(S=1,A=a)}{\rho(X)\pi_a(X)}(Y-\mu_a(X)) + \mu_a(X), \\ \phi_{a,2}^{\text{ge}}(Z_1, Z_2) = &\, -I(S_1=1,A_1=a)(Y_1-\mu_a(X_1))b(X_1)^{\top}\Omega^{-1}b(X_2) \frac{I(S_2=1,A_2=a) - \rho(X_2)\pi_a(X_2)}{\rho(X_2)\pi_a(X_2)}. \end{aligned} \end{equation*} Following the discussion in Section \ref{sec:Quadratic-VonMises}, a quadratic estimator is \[ \widehat{\psi}_{a}^{qr} = \mathbb{P}_n[\widehat{\phi}_{a,1}^{\text{ge}}(Z)] + \mathbb{U}_n [\widehat{\phi}_{a,2}^{\text{ge}}(Z_1, Z_2)], \] where all the nuisance functions and $\Omega$ are replaced with their estimators.
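Computationally, the second-order term $\mathbb{U}_n[\widehat{\phi}_{a,2}^{\text{ge}}]$ is a U-statistic over pairs of observations, and it can be evaluated in closed matrix form rather than with a double loop. The sketch below (illustrative names; the basis matrix and nuisance estimates are assumed to be computed beforehand on a separate fold) forms the bilinear sum over all ordered pairs and subtracts the diagonal $i=j$ terms:

```python
import numpy as np

def second_order_term(y, a, s, mu_hat, rho_pi_hat, B, omega_inv, a_level=1):
    """U-statistic term U_n[phi_2] of the quadratic estimator (generalization case).

    B          : n x k matrix with rows b(X_i)^T (basis evaluations)
    omega_inv  : k x k inverse of the estimated Gram matrix
    rho_pi_hat : estimates of rho(X_i) * pi_a(X_i)
    """
    n = len(y)
    ind = ((a == a_level) & (s == 1)).astype(float)
    u = ind * (y - mu_hat)               # residual factor from argument Z_1
    v = (ind - rho_pi_hat) / rho_pi_hat  # weighting factor from argument Z_2
    Bu, Bv = B.T @ u, B.T @ v            # k-vectors aggregating both factors
    total = Bu @ omega_inv @ Bv          # sum over ALL ordered pairs (i, j)
    diag = np.einsum('i,ij,jk,ik,i->', u, B, omega_inv, B, v)  # i == j terms
    return -(total - diag) / (n * (n - 1))  # average over ordered pairs i != j
```

This reduces the naive $O(n^2 k^2)$ pairwise computation to a few matrix products.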
Note that the plug-in estimator $\psi(\widehat{\mathbb P})$ in the general quadratic estimator \eqref{eq:qr-estimator} cancels the centering term in the centered first-order influence function $\phi_1(Z,\widehat{\mathbb P})$. Hence in $\widehat{\psi}_a^{qr}$ (where $\phi_{a,1}^{\text{ge}}(Z)$ is the non-centered influence function) the plug-in term disappears. In the following discussion, we first present a general theorem summarizing the conditional bias and variance of the quadratic estimator $\widehat{\psi}_a^{qr}$, without assuming any conditions on the convergence rates of the nuisance estimators. We then examine the assumptions needed for the quadratic estimator $\widehat{\psi}_a^{qr}$ to achieve the minimax optimal rate. \begin{theorem}\label{thm-qr-generalization} (Quadratic estimation of generalization functional) Assume all the nuisance functions $\widehat{\mu}_a, \widehat{\rho}, \widehat{\pi}_a, \widehat{F}$ are estimated from a separate training sample $D^n$. Further assume that $\rho, \pi_a, \widehat{\rho}, \widehat{\pi}_a$ are all bounded away from zero, and that the eigenvalues of $\Omega, \widehat{\Omega}$ are bounded away from zero and infinity. Then the conditional bias and variance of $\widehat{\psi}_a^{qr}$ (given the training data used to estimate the nuisance functions) are bounded as \begin{equation*} \begin{aligned} |\mathbb{E}[\widehat{\psi}_a^{qr}|D^n] - \psi_a| \lesssim &\, \left\|(I-\Pi_b) \left(\frac{1}{\widehat{\rho}\widehat{\pi}_a} - \frac{1}{\rho\pi_a} \right) \right\|_w \left\|(I-\Pi_b) \left( \widehat{\mu}_a - \mu_a\right) \right\|_w \\ + &\, \left\| \frac{1}{\widehat{\rho}\widehat{\pi}_a} - \frac{1}{\rho\pi_a} \right\|_w \left\| \widehat{\mu}_a - \mu_a \right\|_w \|\widehat{\Omega}^{-1} - \Omega^{-1}\|. \\ \operatorname{Var}(\widehat{\psi}_a^{qr}|D^n) \lesssim &\, \frac{1}{n}+ \frac{k}{n^2} \end{aligned} \end{equation*} where the weighted $L_2$ norm of a function $g$ is $\|g\|_w^2 = \int g^2(x)\pi_a(x)\rho(x) dF(x)$.
\end{theorem} The boundedness assumption on the eigenvalues of $\Omega$ can be implied by boundedness of $\rho, \pi_a$ and $\frac{d F}{d \nu}$ (the density of $F$ with respect to an underlying measure $\nu$) together with the assumption that $\int b(x) b(x)^{\top} d\nu(x)$ is positive definite. See Proposition 2.1 in \cite{belloni2015some} and Proposition 8 in \cite{kennedy2022minimax} for more detailed discussions. From Theorem \ref{thm-qr-generalization} we see the conditional bias is mainly composed of two parts. The first term \[ \left\|(I-\Pi_b) \left(\frac{1}{\widehat{\rho}\widehat{\pi}_a} - \frac{1}{\rho\pi_a} \right) \right\|_w \left\|(I-\Pi_b) \left( \widehat{\mu}_a - \mu_a\right) \right\|_w \] is the approximation error of the projection $\Pi_b$ and corresponds to the representational error discussed in Section \ref{sec:Quadratic-VonMises}. The second term \[ \left\| \frac{1}{\widehat{\rho}\widehat{\pi}_a} - \frac{1}{\rho\pi_a} \right\|_w \left\| \widehat{\mu}_a - \mu_a \right\|_w \|\widehat{\Omega}^{-1} - \Omega^{-1}\| \] is a third-order error term and corresponds to the remainder term $R_3(\widehat{\mathbb P}, \mathbb P)$ discussed in Section \ref{sec:Quadratic-VonMises}. There is also an extra term $k/n^2$ in the conditional variance of the proposed higher-order estimator $\widehat{\psi}_a^{qr}$. Hence we need to carefully select $k$ to balance the representational error, third-order error term, and the extra term in the variance. Note that we need some approximation guarantees of the basis $b$ to ensure that the representational error is small. Concretely, we impose the following uniform approximation assumption on $b$. \begin{assumption}\label{assume-approximation} For any $s>0$ and $g \in \mathcal{H}(s)$, the basis $b$ satisfies \[ \left\|\left(I-\Pi_b\right)g\right\|_w \lesssim k^{-s / d}. \] \end{assumption} Assumption \ref{assume-approximation} holds for a wide class of basis functions. 
For instance, when the function class containing $g$ is supported on a convex and compact subset of $\mathbb{R}^d$, the approximation is valid even in the uniform norm $\|\cdot\|_{\infty}$ when the basis uses spline, CDV wavelet, or local polynomial partition series. Under the conditions in Theorem \ref{thm-qr-generalization}, if we assume $1/\rho\pi_a, 1/\widehat{\rho}\widehat{\pi}_a \in \mathcal{H}(\alpha) $ and $\widehat{\mu}_a, \mu_a \in \mathcal{H}(\beta)$, then by the approximation property of the basis $b$ we have \[ \left\|(I-\Pi_b) \left(\frac{1}{\widehat{\rho}\widehat{\pi}_a} - \frac{1}{\rho\pi_a} \right) \right\|_w \lesssim k^{-\alpha / d}, \quad \left\|(I-\Pi_b) \left( \widehat{\mu}_a - \mu_a\right) \right\|_w \lesssim k^{-\beta / d}. \] In the less smooth regime $s = (\alpha + \beta)/2 < d/4$, we set $k \sim n^{2 d /(d+2 \alpha+2 \beta)}$ to balance the representational error and the variance. If we further assume \[ \left\| \frac{1}{\widehat{\rho}\widehat{\pi}_a} - \frac{1}{\rho\pi_a} \right\|_w \left\| \widehat{\mu}_a - \mu_a \right\|_w \|\widehat{\Omega}^{-1} - \Omega^{-1}\| \lesssim n^{-(2 \alpha+2 \beta) /(2 \alpha+2 \beta+d)}, \] then this estimator achieves the minimax lower bound. In the smooth regime $s=(\alpha + \beta)/2 \geq d/4$, we can choose $k$ similarly and assume \[ \left\| \frac{1}{\widehat{\rho}\widehat{\pi}_a} - \frac{1}{\rho\pi_a} \right\|_w \left\| \widehat{\mu}_a - \mu_a \right\|_w \|\widehat{\Omega}^{-1} - \Omega^{-1}\| \lesssim n^{-1/2}, \] and again the estimator achieves the minimax lower bound. Compare this with the condition required for the doubly robust estimator to achieve the minimax lower bound, \[ \|\widehat{\rho}\widehat{\pi}_a - \rho \pi_a\|\|\widehat{\mu}_a - \mu_a\| = o_{\mathbb P}(n^{-1/2}). \] Since the quadratic estimator only incurs a third-order error term, it achieves the minimax lower bound in a broader regime than the doubly robust estimator. 
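To make the rate calculus above concrete, here is a small sketch (the values of $n$, $d$, $\alpha$, $\beta$ below are hypothetical, chosen only for illustration) computing the balancing choice of $k$ and the resulting convergence rate:

```python
def optimal_k(n, d, alpha, beta):
    # k ~ n^{2d/(d + 2 alpha + 2 beta)} balances the representational error
    # k^{-(alpha+beta)/d} against sqrt(k)/n, the extra standard deviation
    # coming from the k/n^2 variance term
    return n ** (2 * d / (d + 2 * alpha + 2 * beta))

def rmse_rate(n, d, alpha, beta):
    # minimax rate: nonparametric in the less smooth regime s < d/4,
    # parametric n^{-1/2} otherwise
    if (alpha + beta) / 2 < d / 4:
        return n ** (-(2 * alpha + 2 * beta) / (2 * alpha + 2 * beta + d))
    return n ** (-0.5)

# hypothetical example: d = 4, alpha = beta = 0.5, so s = 0.5 < d/4 = 1
n = 10_000
k = optimal_k(n, 4, 0.5, 0.5)     # = n^{4/3}; note k can exceed n
rate = rmse_rate(n, 4, 0.5, 0.5)  # = n^{-1/3}, slower than n^{-1/2}
```

With these values we are in the less smooth regime, so the optimal rate $n^{-1/3}$ is slower than the parametric rate, and the balancing choice $k = n^{4/3}$ exceeds the sample size.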
For the transportation functional $\theta_a$, we consider the related functional \[ \eta_a = \mathbb{E}[I(S=0)\mathbb{E}(Y|X,A=a,S=1)]=\mathbb{E}[I(S=0)\mu_a(X)] = \mathbb P(S=0) \theta_a. \] Since $\mathbb P(S=0)$ can be estimated at the $\sqrt{n}$-rate, to estimate $\theta_a$ at a minimax optimal rate we only need to estimate $\eta_a$ at an optimal rate. The first-order and approximate second-order influence functions for $\eta_a$ are \begin{equation*} \begin{aligned} \phi_{a,1}^{\text{tr}}(Z) = &\, \frac{I(S=1,A=a)(1-\rho(X))}{\rho(X)\pi_a(X)}(Y-\mu_a(X)) + I(S=0)\mu_a(X), \\ \phi_{a,2}^{\text{tr}}(Z_1, Z_2) = &\, I(S_1=1,A_1=a)(Y_1-\mu_a(X_1))b(X_1)^{\top}\Omega^{-1}b(X_2)\\ &\, \times \frac{I(S_2=0)\rho(X_2)\pi_a(X_2) - (1-\rho(X_2))I(A_2=a, S_2=1)}{\rho(X_2)\pi_a(X_2)}. \end{aligned} \end{equation*} The quadratic estimator is \[ \widehat{\eta}_{a}^{qr} = \mathbb{P}_n[\widehat{\phi}_{a,1}^{\text{tr}}(Z)] + \mathbb{U}_n [\widehat{\phi}_{a,2}^{\text{tr}}(Z_1, Z_2)]. \] The following theorem, the analogue of Theorem \ref{thm-qr-generalization} in the transportation setting, summarizes the estimation guarantee of the quadratic estimator $\widehat{\eta}_{a}^{qr}$. \begin{theorem}\label{thm-qr-transportation} (Quadratic estimation of the transportation functional) Assume all the nuisance functions $\widehat{\mu}_a, \widehat{\rho}, \widehat{\pi}_a, \widehat{F}$ are estimated from a separate training sample $D^n$. Further assume that $\rho, \pi_a, \widehat{\rho}, \widehat{\pi}_a$ are all bounded away from zero, and that the eigenvalues of $\Omega, \widehat{\Omega}$ are bounded away from zero and infinity. 
The conditional bias and variance of $\widehat{\eta}_{a}^{qr}$ (given the training data used to estimate the nuisance functions) are bounded as \begin{equation*} \begin{aligned} |\mathbb{E}[\widehat{\eta}_{a}^{qr}|D^n] - \eta_a| \lesssim &\, \left\|(I-\Pi_b) \left(\frac{1-\widehat{\rho}}{\widehat{\rho}\widehat{\pi}_a} - \frac{1-\rho}{\rho\pi_a} \right) \right\|_w \left\|(I-\Pi_b) \left( \widehat{\mu}_a - \mu_a\right) \right\|_w \\ + &\, \left\| \frac{1-\widehat{\rho}}{\widehat{\rho}\widehat{\pi}_a} - \frac{1-\rho}{\rho\pi_a} \right\|_w \left\| \widehat{\mu}_a - \mu_a \right\|_w \|\widehat{\Omega}^{-1} - \Omega^{-1}\|, \\ \operatorname{Var}(\widehat{\eta}_{a}^{qr}|D^n) \lesssim &\, \frac{1}{n}+ \frac{k}{n^2}, \end{aligned} \end{equation*} where the weighted $L_2$ norm of a function $g$ is $\|g\|_w^2 = \int g^2(x)\pi_a(x)\rho(x) dF(x)$. \end{theorem} The story is now the same as in the generalization case, except that we replace $1/\rho\pi_a$ with $(1-\rho)/\rho\pi_a$ (i.e. we impose smoothness assumptions on $(1-\rho)/\rho\pi_a$ and $(1-\widehat{\rho})/\widehat{\rho}\widehat{\pi}_a$). With the approximation property of the basis $b$ (Assumption \ref{assume-approximation}), and further assuming that the third-order error term \[ \left\| \frac{1-\widehat{\rho}}{\widehat{\rho}\widehat{\pi}_a} - \frac{1-\rho}{\rho\pi_a} \right\|_w \left\| \widehat{\mu}_a - \mu_a \right\|_w \|\widehat{\Omega}^{-1} - \Omega^{-1}\| \] is bounded as in the generalization case (by $n^{-(2\alpha+2\beta)/(2\alpha+2\beta+d)}$ in the less smooth regime and by $n^{-1/2}$ in the smooth regime), we can prove that the quadratic estimator $\widehat{\eta}_{a}^{qr}$ achieves the minimax lower bound in a broader regime than the doubly robust estimator. \section{Simulation Study}\label{sec:Simulation} In this section we examine the performance of the doubly robust estimator empirically. Consider the following setting: $X = (X_1, X_2, X_3, X_4, X_5) \sim N(0, I_5)$, $V = (X_1, X_2, X_3)$. Given $n$ samples, generate $S$ according to $\rho (V) = \mathbb P(S=1 | V) = 0.5$ (the participation probability). 
In the source population, set $\pi_1(x) = \text{expit}(0.3x_1 - 0.3x_3)$ and simulate the treatment $A \sim \text{Bernoulli}(\pi_1(X))$. Consider the following linear potential outcome model \[ \mu_1 (x) = 1.5x_1+x_4+1, \quad \mu_0(x) = x_1, \] and $Y = A\mu_1(X) + (1-A)\mu_0(X) + N(0,1)$. So we have \[ \tau_1(v) = \mathbb{E}[\mu_1(X)|V=v] = 1.5v_1 +1, \quad \tau_0(v) = v_1. \] The effect is $\mathbb E [\tau_1(V) - \tau_0(V)\mid S=0] = 1$. The nuisance estimators are $\widehat{\mu}_a(x) = \mu_a(x) + \epsilon_{1,n}, \widehat{\tau}_a(v) = \tau_a(v) + \epsilon_{2,n}, \widehat{\rho}(v) = \text{expit}(\text{logit}(\rho(v))+\epsilon_{3,n}), \widehat{\pi}_1(x) = \text{expit}(\text{logit}(\pi_1(x))+\epsilon_{4,n})$, where $\epsilon_{i,n} \sim N(n^{-\alpha}, n^{-2\alpha})$. This construction guarantees that the root mean square errors (RMSE) of $\widehat{\pi}_1, \widehat{\rho}, \widehat{\mu}_a, \widehat{\tau}_a$ are of order $O(n^{-\alpha})$. We can then use different values of $\alpha$ to evaluate the performance of the doubly robust and plug-in estimators when the nuisance functions are estimated at different convergence rates $O(n^{-\alpha})$. Specifically, we let $\alpha$ range from 0.1 to 0.5 in steps of 0.05. In each replication, we generate the data and use the doubly robust and plug-in estimators to estimate the functional $\psi_1 = \mathbb E [Y^1 \mid S=0]=1$. This process is replicated 1000 times for sample sizes $n=100, 1000, 5000$ and the RMSE is computed. We report the simulation results in Figure \ref{simu-results}. 
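This simulation design can be sketched in a few lines. The sketch below is an illustrative reimplementation, not the authors' code: since $\rho$ is constant in this design, $S$ is independent of $X$, so the transportation influence function $\phi_{a,1}^{\text{tr}}$ for the case $V=X$ applies, and we use the one-step estimator built from it; all function names are our own.

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_and_estimate(n, alpha, rng):
    # data generating process from the simulation design
    X = rng.normal(size=(n, 5))
    S = rng.binomial(1, 0.5, size=n)                 # rho(V) = 0.5
    pi1 = expit(0.3 * X[:, 0] - 0.3 * X[:, 2])
    A = rng.binomial(1, pi1)
    mu1 = 1.5 * X[:, 0] + X[:, 3] + 1.0
    mu0 = X[:, 0]
    Y = np.where(A == 1, mu1, mu0) + rng.normal(size=n)

    # nuisance estimates constructed to have RMSE of order n^{-alpha}
    noise = lambda: rng.normal(n ** -alpha, n ** -alpha, size=n)
    mu1_hat = mu1 + noise()
    pi_hat = expit(np.log(pi1 / (1 - pi1)) + noise())
    rho_hat = expit(0.0 + noise())                   # logit(0.5) = 0

    # plug-in and first-order (influence-function-based) estimators of
    # psi_1 = E[Y^1 | S = 0] = 1
    p0 = np.mean(S == 0)
    plug_in = np.mean((S == 0) * mu1_hat) / p0
    phi = ((S == 1) * (A == 1) * (1 - rho_hat) / (rho_hat * pi_hat)
           * (Y - mu1_hat) + (S == 0) * mu1_hat)
    dr = np.mean(phi) / p0
    return plug_in, dr

rng = np.random.default_rng(0)
plug_in, dr = simulate_and_estimate(5000, 0.25, rng)
```

Averaging both estimators over many replications should reproduce the qualitative pattern in the figure below: the plug-in estimator inherits the $O(n^{-\alpha})$ nuisance bias, while the bias of the corrected estimator is of second order.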
\begin{figure}[H] \centering \subfigure[n=100]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2in]{transport_simu_n100.png} \end{minipage}} \subfigure[n=1000]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2in]{transport_simu_n1000.png} \end{minipage}} \subfigure[n=5000]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2in]{transport_simu_n5000.png} \end{minipage}} \centering \caption{RMSE vs. $\alpha$} \label{simu-results} \end{figure} From Figure \ref{simu-results} we see that when the sample size is large and the nuisance functions converge slower than the parametric rate $O(n^{-1/2})$, the RMSE of the doubly robust estimator can be much lower than that of the plug-in estimator. This coincides with our theoretical results. Although the plug-in estimator can perform as well as the doubly robust estimator when the nuisance convergence rate is close to the parametric rate, in practice we may be unable to correctly specify a parametric model for the nuisance functions, and hence we are unlikely to achieve the parametric rate. We therefore recommend the doubly robust estimator for generalizing and transporting causal effects from the source population to the target population in real data analysis. In future work we will also compare the performance of our higher-order estimator. \section{Data Analysis}\label{sec:Real-data} In this section we illustrate the proposed method with a real data example. To be concrete, we aim to transport the causal effects of dietary intake on adverse pregnancy outcomes from an observational study to the whole U.S. female population. We first introduce the background and motivation of the problem in Section \ref{sec:Real-data-background}. Then we assess the necessity of transportation in Section \ref{sec:Real-data-difference}, i.e. we show that the distributions of covariates in the two populations are different and that some covariates may modify effects. 
In Section \ref{sec:Real-data-transportation}, we apply the proposed method to two datasets and estimate the ATE of different dietary components in the target population (the whole U.S. female population). Finally, in Section \ref{sec:Real-data-sensitivity} we perform sensitivity analysis for the case where we are not confident in the exchangeability and transportability assumptions. \subsection{Background and Motivation}\label{sec:Real-data-background} Adverse pregnancy outcomes (e.g. preterm birth, gestational diabetes) are severe problems faced by many women in the U.S. There are multiple factors that can contribute to such negative outcomes. One potential factor is the diet of pregnant women. We are interested in estimating the causal effects of dietary components on adverse pregnancy outcomes in the whole U.S. population of women. Ideally we would conduct randomized controlled trials on this population, or a random sample of it, and apply standard causal inference techniques to estimate the ATE. However, sampling and experimenting on the whole population is usually expensive and time-consuming. In our problem we only have access to pregnancy outcomes from the Nulliparous Pregnancy Outcomes Study: monitoring mothers-to-be (Numom). The goal is to transport the causal effects estimated from this dataset to the whole U.S. female population. The Numom study includes 9502 subjects. From 2010 to 2013, Numom enrolled respondents from 8 medical centers across the United States if they had a viable singleton pregnancy, were at 6-13 completed weeks of gestation, and had no previous pregnancy that lasted $\geq 20$ weeks of gestation. At enrollment (6-13 completed weeks of gestation), women completed a food frequency questionnaire (FFQ) querying usual periconceptional dietary intake. Some covariates of the Numom population are also available, including age, race, education, BMI, smoking status, marital status, insurance status and employment. 
The treatments of interest are dietary components such as vegetable and fruit intake. At least 30 days after delivery, a trained certified chart abstractor recorded final birth outcomes, medical history, and delivery diagnoses and complications. This information provides us with data on the responses, including preterm birth, SGA birth, gestational diabetes and pre-eclampsia. In our analysis, let the threshold be the 80\% quantile of total fruit/vegetable intake (measured in cups per 1000 kcal). If one's total fruit/vegetable intake is higher than the 80\% quantile, then she is considered treated $(A=1)$; otherwise she is not treated $(A=0)$. This 80\% threshold is often used in the clinical nutrition literature, so we stick to this common choice. For the outcome, we let $Y=1$ when the adverse pregnancy outcome occurs and $Y=0$ otherwise. We focus on the causal effects of fruit and vegetables on the four adverse pregnancy outcomes: preterm birth, SGA birth, gestational diabetes and pre-eclampsia. However, samples in the Numom study may not be representative of the whole U.S. population. For instance, 23.2\% of the women in the Numom trial received education beyond college. In contrast, in the whole U.S. population data, only about 10\% of women received education beyond college. Hence the distributions of covariates in the study participants and the target population are quite different. The Numom dataset may not provide representative samples of the target population. It is also possible that some covariates, such as age and education level, modify the effects of dietary components. Then the estimates in \cite{bodnar2020machine} based on the Numom study will not immediately generalize to the U.S. population of women. To estimate the ATE in the whole U.S. female population, we use a U.S. representative sample from the National Survey of Family Growth (NSFG), which contains information on 9553 women in the U.S. 
The documentation of the data is available at \verb|https://www.cdc.gov/nchs/nsfg/nsfg_2015_2017_puf.htm|. Before applying the proposed methods, we formally assess the necessity of transportation in Section \ref{sec:Real-data-difference} in the Appendix. \subsection{Transportation}\label{sec:Real-data-transportation} Having observed that transportation methods are necessary based on the results in Section \ref{sec:Real-data-difference}, here we assume the five identification assumptions in Section \ref{sec:Identification} and use our doubly robust estimator to estimate the ATE in the target population. The covariate sets that we use are $X=$\{Education, Age, Race, Marital Status, Insurance, Work, Smoking, Number of cigarettes, BMI, other dietary components (e.g. protein)\} and $ V = $ \{Education, Age, Race, Marital Status, Insurance, Work, Smoking, Number of cigarettes\}. In particular, we adapt our estimators to these two datasets in two ways. First, there exist some units with extremely small estimated propensity scores $\widehat{\pi}_a(X)$ in the source dataset. Since we need to reweight each sample in the Numom dataset by the inverse of the probability of getting treated, our doubly robust estimator may suffer from high instability if we use these small propensity scores directly. We therefore truncate all the estimated propensity scores $\widehat{\pi}_a(X_i)$ to the range [0.01, 0.99] (i.e. the positivity constant $\epsilon$ is set to 0.01 in our analysis). Similarly, we truncate all the estimated participation probabilities $\widehat{\rho}(V_i)$ to be greater than 0.01. Second, the sampling process of the NSFG dataset undergoes multiple stages and is more complicated than simple random sampling (SRS). According to examples given by the CDC, we can approximate that sampling process with a stratified cluster sampling procedure and estimate the mean of a variable with appropriate weights. The variance of the estimator can also be obtained from the theory of stratified cluster sampling. 
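The truncation step described above amounts to a one-line clipping of the estimated probabilities (the function name below is our own):

```python
import numpy as np

def truncate_probs(p, eps=0.01):
    # enforce the positivity constant: estimated probabilities are clipped
    # to [eps, 1 - eps] so that inverse-probability weights stay bounded
    return np.clip(p, eps, 1.0 - eps)

pi_hat = np.array([0.001, 0.43, 0.9995])
print(truncate_probs(pi_hat))  # [0.01 0.43 0.99]
```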
The details on adjusting our estimator based on stratified cluster sampling can be found in the supplementary materials. Our methods with the above adjustments are applied to each combination of treatment $A \in \{\text{fruit}, \text{vegetable}\}$ and outcome $Y \in \{\text{preterm birth, SGA birth, gestational diabetes,}$ pre-eclampsia$\}$. All the nuisance functions are fitted with ``SL.ranger'' (random forests), ``SL.glmnet'' (penalized GLM) and ``SL.mean'' in the R package ``SuperLearner''. Five-fold cross-fitting is used to guarantee the sample splitting condition in Theorem \ref{thm-dr-transportation}. The results are summarized in Figure \ref{fig:effects}. \begin{figure}[H] \centering \subfigure[Fruit]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=3in]{fruit_effects.png} \end{minipage}} \subfigure[Vegetables]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=3in]{vege_effects.png} \end{minipage}} \centering \caption{Effects of fruit and vegetables in the whole U.S. female population} \label{fig:effects} \end{figure} A detailed analysis of model selection (i.e. the choice of models in SuperLearner), the choice of the positivity constant $\epsilon$ and potential outliers is presented in the supplementary materials, which justifies the choices made in our analysis. The effects of fruit on preterm birth, pre-eclampsia, gestational diabetes and SGA birth are -0.0214 (95\% CI [-0.0266, -0.0162]), -0.012 (95\% CI [-0.0176, -0.00626]), 0.00114 (95\% CI [-0.00398, 0.00626]), and -0.0164 (95\% CI [-0.0227, -0.0101]), respectively. The effects of vegetables on preterm birth, pre-eclampsia, gestational diabetes and SGA birth are -0.0442 (95\% CI [-0.0491, -0.0393]), -0.00102 (95\% CI [-0.0122, 0.0102]), 0.024 (95\% CI [0.0191, 0.029]), and -0.0166 (95\% CI [-0.0218, -0.0113]), respectively. 
From the results above, we see the effects of fruit on preterm birth, pre-eclampsia, and SGA birth are significantly negative at level 0.05 in the target population, which implies eating more fruit potentially causes a lower risk of suffering from these adverse pregnancy outcomes. For vegetables, the effects on preterm birth and SGA birth are significantly negative. The strict interpretation of the effect of vegetables on preterm birth is: compared with women whose vegetable intake is below the 80\% quantile of vegetable intake in the Numom dataset, women with higher vegetable intake have about 4 fewer preterm births for every 100 women in the whole U.S. female population. Other combinations of treatments and outcomes can be interpreted similarly. We also see a significant positive effect of vegetables on gestational diabetes, which suggests eating more vegetables may potentially increase the risk of getting gestational diabetes. This result seems counterintuitive. One potential problem is that the identification assumptions may not hold, in which case we cannot interpret our results from a causal perspective. As a result we perform sensitivity analysis in the following section to deal with possible violations of the identification assumptions. \subsection{Sensitivity Analysis and Discussions}\label{sec:Real-data-sensitivity} Here we focus on the effect of vegetables on gestational diabetes as an example. According to the discussion in Section \ref{sec:Sensitivity-analysis}, under Assumption \ref{relax-exchangeability} and Assumption \ref{relax-transportability} the bound for the ATE in the target population is given by \[ [ 0.024 - \delta_1 - 2\delta_2, 0.024 + \delta_1 + 2\delta_2]. \] We visualize the range of the bounds when only exchangeability is violated ($\delta_2=0$) or only transportability is violated ($\delta_1=0$) in Figure \ref{fig:sensitivity}, respectively. 
\begin{figure}[H] \centering \subfigure[Only exchangeability is violated ($\delta_2=0$)]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=3in]{sensi1.png} \end{minipage}} \subfigure[Only transportability is violated ($\delta_1=0$)]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=3in]{sensi2.png} \end{minipage}} \centering \caption{Sensitivity analysis when the treatment is vegetables and the outcome is gestational diabetes} \label{fig:sensitivity} \end{figure} The plots should be understood as follows: in panel (a), for a specific value $\delta_1$, the interval that covers the ATE is given by the intersection of the blue region and the line $x=\delta_1$; panel (b) can be interpreted similarly. From Figure \ref{fig:sensitivity} we see that if only exchangeability is violated (i.e. $\delta_2=0$), then the critical value of $\delta_1$ needed to overturn the result is $0.024$. If only transportability is violated (i.e. $\delta_1=0$), then the critical value of $\delta_2$ needed to overturn the result is $0.012$. Considering the covariate sets $X=$\{Education, Age, Race, Marital Status, Insurance, Work, Smoking, Number of cigarettes, BMI, other dietary components (e.g. protein)\} and $ V = $ \{Education, Age, Race, Marital Status, Insurance, Work, Smoking, Number of cigarettes\}, we find most of the covariates are categorical. The information contained in categorical variables is usually less than that in continuous ones. Furthermore, there may be some confounders or effect modifiers that are not measured in the dataset. For instance, body mass index (BMI) may be an effect modifier, but we do not have information on it in the NSFG dataset and hence cannot take it into account. Therefore the categorical variables we include in the analysis may not provide sufficient information for the identification assumptions \ref{exchangeability} and \ref{transportability} to hold. Hence sensitivity analysis is quite necessary in our problem. 
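The sensitivity interval and its critical values are simple enough to compute directly (a small sketch; the function name is our own):

```python
def sensitivity_interval(estimate, delta1, delta2):
    # bound [estimate - delta1 - 2*delta2, estimate + delta1 + 2*delta2]
    # under the relaxed exchangeability (delta1) and relaxed
    # transportability (delta2) assumptions
    half_width = delta1 + 2.0 * delta2
    return estimate - half_width, estimate + half_width

est = 0.024  # estimated effect of vegetables on gestational diabetes

# the interval first touches zero at the critical values
# delta1 = 0.024 (when delta2 = 0) and delta2 = 0.012 (when delta1 = 0)
lo, hi = sensitivity_interval(est, 0.0, 0.012)
print(lo, hi)  # 0.0 0.048
```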
From the perspective of the vegetable treatment itself, the result that eating more vegetables may increase the risk of gestational diabetes is not so surprising, since many people view potatoes as vegetables. Potatoes are rich in starch, which may increase the risk of gestational diabetes. \section{Discussions}\label{sec:Discussion} In this paper we extend the generalization and transportation techniques in \citep{dahabreh2019generalizing, dahabreh2020extending} to the case where different sets of covariates can be used in the source population and the target population. To be concrete, we summarize sufficient assumptions to identify the ATE in the target population. We also provide methods to perform sensitivity analysis when the identification assumptions fail. The first-order influence functions and efficiency bounds are derived for the target statistical functionals when they are identified. We further propose a doubly robust estimator based on the first-order influence functions and establish its asymptotic normality under proper conditions. Our simulation study shows the advantage of the doubly robust estimator over the plug-in estimator. We also study the minimax lower bounds and higher-order/quadratic estimation based on the second-order Von Mises expansion in the case where the source population and the target population share the same set of covariates (i.e. $V=X$). Although we rely on techniques similar to those used in the ATE case \citep{robins2009quadratic, robins2009semiparametric}, these results are non-trivial and important for understanding the properties of the generalization and transportation functionals. Finally we illustrate the proposed methods with an interesting example, where we transport the causal effects of fruit and vegetable intake on adverse pregnancy outcomes from an observational study to the whole U.S. female population. In this paper we only consider the minimax rate and quadratic estimation in the special case $V=X$. 
In the general case $X=(W,V)$, the target functionals in \eqref{eq:identify} involve a triple integral and can be viewed as ``cubic functionals'' \citep{tchetgen2008minimax, mukherjee2015lepski}. Such functionals are not well understood in the literature, and it is more challenging to construct adversarial settings to establish the minimax lower bounds or to derive higher-order influence functions that correct for the second-order bias. One special case is to assume the covariates $V$ are discrete, so we can write \[ \psi_a = \sum_{v}p(v) \mathbb E[\mathbb E(Y|X,A=a,S=1)|V=v,S=1]. \] Then we only need to estimate $\mathbb E[\mathbb E(Y|X,A=a,S=1)|V=v,S=1] = \mathbb E[\mathbb E(Y|W,A=a,V=v,S=1)|V=v,S=1]$. Note that this functional is equivalent to the ATE among individuals with $V=v, S=1$ using the covariate set $W$, and hence the standard ATE theory applies. We leave the general case as future work. Other potentially interesting questions in generalization and transportation include whether it is possible to generalize or transport time-varying treatment effects, where the treatment/exposure changes over time. Moreover, in the classic ATE setting, the existence of instrumental variables helps us identify the local average treatment effect (LATE) under certain assumptions. How to formalize the definition of instrumental variables and examine which kinds of treatment effects can be identified with the help of instruments in a generalization and transportation setting is an interesting question left for future investigation. \bibliographystyle{apalike}
I didn't think to get my driver's license until my mid-twenties. I grew up in Geneva, Switzerland, where the legal driving age was 18, and moved to New York as soon as I could. I suppose I could have at least taken lessons, but driving seemed like a risky chore; it was also pretty pointless, since driving just wasn't necessary for Geneva life. I didn't realize this wasn't normal until I went to college and made friends with kids from California, Arizona, and New Jersey—kids whose adolescence seemed to revolve around driving. But I forgot about cars as soon as I mastered swiping my NYC MetroCard, and before I knew it, ten years had passed. At 26, I had a job, an apartment, a boyfriend, and two cats—but I couldn't leave town if the trains were down. I thought about taking my road test, but reasoned my way out of it. Besides, I had a hunch that I wouldn't be very good. I'd scored so poorly on a spatial reasoning exam in the ninth grade that a teacher gently inquired about whether I needed glasses (my eyesight is perfect). I cultivated my own set of delusions: I was waiting for self-driving vehicles. I was saving the environment. I, too, would have a personal driver. My inability to drive only began to bother me in earnest when it got in the way of my work as a reporter. A trip to northern Michigan ended up costing double because I couldn't make the four-hour drive from Detroit to Traverse City. I missed out on a great assignment covering a deadly train crash in Quebec because it was in the sticks and the trains…well, the train had crashed. Not driving was getting in the way of my career, and I was ambitious. So as soon as I had the time and the money to sign up for classes, I did. My first mistake was a product of that same ambition. I would be spending hours learning to do something decidedly unintellectual. Why not make the most of it and learn to drive in Russian? 
I grew up speaking Russian at home, but I never got a chance to practice. This seemed like a good chance to. I'd been down this road before: When I was a child, my mother wanted desperately for me to learn the piano. But it was not enough for me to take lessons like a normal kid. No. I had to learn from one Ivan Ivanovich, who spoke only Russian. Poor Ivanovich didn't last long. I don't know what happened to him, and I never learned the piano. I turned to Google and clicked on a link to a driving school in Brighton Beach. I bought a package of classes and waited. A couple of days later, I received a note from boss@leftturndrivingschool.com in broken English confirming my first class and promising me I'd enjoy the lesson. It was signed Valeriy. Valeriy rolled in late for my first lesson in a small Honda, wearing an Adidas tracksuit and sneakers. His car was littered with Little Debbie wrappers. It smelled lived-in and weird. He was a schlubby guy who fit every stereotype of a middle-aged immigrant from Eastern Europe. When I told him I spoke Russian, he seemed relieved. Valeriy's English, it turned out, was not so hot. On my quiet street in Brooklyn Heights, he taught me the basics—how to turn on the car, how to go backward and forward, how to turn, how to parallel-park. I thought I understood him well; in retrospect, his gestures were more useful than my Russian vocab. Within an hour, I was sitting behind the wheel on the Brooklyn-Queens Expressway, petrified, as Valeriy directed me onto the ramp and into the right lane. That's when I confused the word brake for the word gear. We made it out alive, but barely. I did make progress over the coming weeks, and after 10 or so sessions of an hour and a half each, he drove me deep into Brooklyn to take my test. We laughed on our way there, and I felt good about the test. 
But the moment the examiner got in the car, my mind went blank: I was so used to learning in Russian that being told to turn and stop in English was confusing. She did not gesture. She was mean! I failed after driving too near a UPS truck and parallel-parking a good three feet from the curb. I was dejected. I'd never failed failed a test before. School–from first grade through to my graduate studies—had always come easily to me. I was writing a book proposal and editing a magazine in my free time. I was running a half marathon. I was used to succeeding when I tried, and—to be perfectly honest—succeeding when I didn't try that hard too. His words weren't much of a consolation. By any standard, driving wasn't supposed to be hard. Teenagers did it. Dumb people did it! What was wrong with me? The next time wasn't better. I took 10 more classes; I drove out to another testing site with Valeriy; and by the time we arrived, my anxiety levels were so high that, blinded by nerves, I failed to yield. "I can't in good conscience let you pass!" the second examiner said, after I tried talking her into passing me anyway. "You almost got us killed!" I kept trying to explain to her that I was actually really good, just nervous. Then I realized that I argue my way out of a great many situations in life—but that on the road, no one cares if you're clever. The third time, I figured, would be the charm. Valeriy had passed me off to a lady named Katya for a remedial session. I was officially a desperate case. Katya was a delicate, soft-spoken woman in her thirties who worked for the city by day and gave driving lessons in her time off because, she claimed, she enjoyed it so much. Her enthusiasm did not rub off. My third test date came and went. Another failure. And this time, I gave up. I'd spent too much time and money. Perhaps, I told myself, there was a good reason I could not get my license. For a couple of years after, I simply stopped thinking about driving. 
First, I was too busy and happy to bother: I had amazing friends and a great job, had sold my first book, and was traveling the world reporting fascinating stories. Then a number of enormous, unquantifiable failures—a devastating breakup, the shutdown of the company I worked for, the terrifying feeling of not knowing what to do next after publishing my book—distracted me even more. I wanted desperately to get in a car and drive far, far away from everything that was going wrong—but I needed a license, and I wasn't in the right frame of mind to face what seemed an almost certain chance of failure. It was only about six months ago that I felt compelled to try again—partly because I felt ready to face failing, but also because I was getting restless, and had run out of excuses not to try. I worked from home; I had the time; I was 30 years old. It was getting embarrassing, not driving. This time, I would learn in English, with a driving instructor from my neighborhood named Damon. He would not make for good stories. He would definitely not improve my Russian. But he might help me achieve the one task at hand: getting my license. Over the next three months, I re-re-relearned to drive, one step at a time, and I was set to take the exam in mid-January. There were some bad omens the morning of the test. I'd sliced my finger open the day prior. I hadn't slept well on account of my anxiety. I was waiting to hear if I'd gotten a job I'd applied for. To make things worse, when we arrived at the test site, there were cops everywhere—someone had stolen a car, crashed through a fence, and abandoned the vehicle. We moved to a different street and waited. Then a small South Asian man introduced himself as my examiner, and for the next five minutes, I did what he asked: turning on the car; driving, stopping, and turning; changing lanes, turning; pulling over, parking, stopping, pulling out, and then stopping one final time. 
I wish I could say I passed with flying colors, but I barely scraped by: I parked too far from the curb again, drove way too slow, and I remembered only at a critical moment to yield to the stop sign. As he added up my points, I felt a familiar dread. I've definitely failed, I told myself. But then he congratulated me. "You passed!" he said. It felt unreal. Failing my test so many times sucked, but it was also instructive. It gave me a taste of failure—and with it a perspective on success. Sure, I was a crummy driver, but it wasn't for lack of trying; in fact, I probably worked six times harder than someone for whom driving came naturally, and still screwed up. At the same time, I was naturally good at a lot of things without trying, like waking up early, reading complicated books, even sports. So much of what we all get in life boils down to luck, chance, circumstance. Besides, it's not like I was completely without luck when it came to driving, either; Damon later told me that I'd chanced upon the most lenient examiner in the entire city, and that he'd have been genuinely shocked if with that guy, I hadn't passed. My license arrived in the mail two weeks later. I haven't driven since.
{ "redpajama_set_name": "RedPajamaC4" }
7,008
This article is about the ability exclusive to Tiny Kong. For other teleport pads in Donkey Kong 64, see Bananaport Pad. Tiny about to Monkeyport to the lobby entrance of Hideout Helm. Monkeyport (Warpum Craftious) is an ability that Tiny Kong can use from a Tiny Pad in Donkey Kong 64. She can learn this move by purchasing a potion from Cranky's Lab at Crystal Caves for seven Banana Bunch Coins. While Tiny is standing on a Tiny Pad, the player can press to teleport her to another Tiny Pad within the level. This ability is essential for reaching Hideout Helm's lobby on Crocodile Isle. This page was last edited on March 5, 2019, at 16:15.
{ "redpajama_set_name": "RedPajamaC4" }
8,791
There is a lot of work involved in looking after babies. When they grow up a little, it's even harder. In today's game you can find out just how hard it is. The baby will need constant care and your full attention: he has to play, sleep, eat, drink and do plenty of other things. If you do a good job, you will earn some money to buy a few items.
{ "redpajama_set_name": "RedPajamaC4" }
119
Paul argues that knowing Jesus Christ is more important than any earthly outward show of pious obedience. He argues that if that was the case, he is more devout than any other Jew. However, these accomplishments he counted as losses for the sake of Christ.
{ "redpajama_set_name": "RedPajamaC4" }
1,408
{"url":"https:\/\/defragdev.com\/blog\/2021\/11\/04\/dotnet5-sgen-parameters.html","text":"# Parameterising sgen (aka the .NET Microsoft.XmlSerializer.Generator) via a .csproj PropertyGroup\n\n## The problem\n\nIf you\u2019ve landed on this post via searching the web, you probably already know what sgen.exe is, and what the Microsoft.XmlSerializer.Generator nuget package does.\n\nYou probably used to invoke sgen as part of a build script or a post-build .csproj step and parametrise it as you saw fit: e.g. passing \/type:MyType to limit it to a single type.\n\nSgen has historically been flaky to use (as it changed location on disk depending on which Windows SDK was installed and some other factors), so it\u2019s great it\u2019s now done via a nuget package. That\u2019s the good part.\n\nThe bad part is that since switching to the Microsoft.XmlSerializer.Generator nuget package, you\u2019re thinking: \u201cHow do I pass arguments to sgen now that it\u2019s automagically invoked via dotnet build?\u201d and also \u201cwhere is the documentation?\u201d\n\nI had the same reaction, dear reader. Porting an old C# project to .NET 5 resulted in sgen exiting with an error, as the default behaviour tries to generate serializers for every type in the target assembly. I don\u2019t want this behaviour for a few reasons:\n\n1. It generates code I don\u2019t want or need (and a bulkier serialization assembly)\n2. It fails if namespacing is required (Got an assembly containing NS1.Triangle & NS2.Triangle? Sgen will fail unless you disambiguate the types via namespacing)\n\nWe can fix both issues by simply telling sgen \u201chey, only generate serialization types for \/type:MyType\u201d. Only now we can\u2019t, because Microsoft.XmlSerializer.Generator is calling the shots rather than us directly calling sgen.\n\nBlessed art thou, because it wasn\u2019t even possible until 2019 (which seems like an oversight). 
The answer is in the GitHub Pull Request that added this functionality.\n\nI haven\u2019t been able to find any official documentation, so this is all we have to go on.\n\n1. Open the .csproj containing the serialization types\n2. Add a PropertyGroup section\n3. Set one of the properties, using the attribute naming format of <SGenParamName>, where \u201cParamName\u201d is a parameter name gleaned from the sgen documentation\n\u2022 E.g. type would become <SGenType>\n4. Save the project & then build as normal.\n\nHere\u2019s the example XML from the PR by jiayi11 (thank you, kind fellow):\n\n<PropertyGroup>\n<SGenReferences>C:\\myfolder\\abc.dll;C:\\myfolder\\def.dll<\/SGenReferences>\n<SGenTypes>SgenTestProgram.MyType1;SgenTestProgram.MyType2<\/SGenTypes>\n<SGenProxyTypes>false<\/SGenProxyTypes>\n<SGenVerbose>true<\/SGenVerbose>\n<SGenKeyFile>mykey.snk<\/SGenKeyFile>\n<SGenDelaySign>true<\/SGenDelaySign>\n<\/PropertyGroup>\n\n\nFor my use-case (serialization of a single type in a single project), all I needed to add was:\n\n<PropertyGroup>\n<SGenTypes>Tools.Blah.MyClass<\/SGenTypes>\n<\/PropertyGroup>\n\n\nThat\u2019s it!\n\n## A bonus tip\n\nThe documentation for the Microsoft.XmlSerializer.Generator doesn\u2019t seem to be 100% up to date for .NET 5 (and .NET 6 is due any day now, too!)\n\nIf you\u2019re running with .NET 5, rather than copying the docs that suggest:\n\ndotnet add package Microsoft.XmlSerializer.Generator -v 1.0.0\n\n\n&\n\n<ItemGroup>\n<DotNetCliToolReference Include=\"Microsoft.XmlSerializer.Generator\" Version=\"1.0.0\" \/>\n<\/ItemGroup>\n\n\n\u2026 you can update the command & XML version to 5.0.0 to get the latest version.\n\n## Read on for some background\n\nAnyway, I\u2019ve hopefully saved you some head-banging.\n\nFor the rest of you that don\u2019t know what sgen is and are vaguely interested, here\u2019s a short explanation (warning: this is boring, but it\u2019s also useful to know for anyone doing XML serialization).\n\n## 
sgen.exe\n\nSgen is an XML serialization code generator. For a given assembly and (optionally) a type that you want to serialize, it generates a .dll containing the serialization code at compile-time.\n\nLet\u2019s say you have a toy project like so:\n\nproject_root\n\u251c\u2500\u2500 MyApp.sln\n\u251c\u2500\u2500 MyConsoleApp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 MyConsoleApp.csproj\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 Program.cs\n\u251c\u2500\u2500 MyClassLibrary\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 MyClassLibrary.csproj\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 NeedsXmlSerialized.cs\n\n\nOur class NeedsXmlSerialized.cs needs to be serialized to Xml (or deserialized) during the course of the app run.\n\npublic class NeedsXmlSerialized\n{\npublic int SomeInt { get; set; }\n}\n\n\nIn our Program.cs, we use the XmlSerializer class to convert our NeedsXmlSerialized instance to XML:\n\npublic class Program\n{\npublic static void Main()\n{\nvar serializer = new XmlSerializer(typeof(NeedsXmlSerialized));\nusing(var writer = new StreamWriter(@\"C:\\some\\file.xml\"))\n{\nvar typeToBeSerialized = new NeedsXmlSerialized();\ntypeToBeSerialized.SomeInt = 5;\nserializer.Serialize(writer, typeToBeSerialized);\n}\n}\n}\n\n\nOK, so far so good. The program works, and we\u2019ve not even mentioned sgen yet.\n\n## If it works already, why use sgen?\n\nThe short answer is performance. If the serialization types have not yet been generated, they must be created on-the-fly at run-time. I.e. the first time you want to serialize\/deserialize your class to\/from XML, there is a constant (and large!) stall. 
The serialization types are then cached & re-used for the rest of the run, but the down-payment can be considerable.\n\nI\u2019ve seen this in production and it\u2019s not pretty, especially in pathological cases where the runtime is short-lived and only does one bout of serialization, e.g.:\n\n\u2022 The program starts up (20ms)\n\u2022 It does some CPU-bound work (300ms)\n\u2022 XML serialization types are generated on the fly (350ms)\n\u2022 Serialize to XML (20ms)\n\u2022 The program exits (5ms)\n\nIn this case, we spend around half of the program\u2019s runtime generating the serialization types each and every run. This is scandalously wasteful. If you were to gaze at the profiling data, you\u2019ll see a monolithic block of waste that seems to be non-app code.\n\nAn even better thing to do is avoid XML serialization and do something else instead. In my case we have no choice, as the file format is sadly non-negotiable.\n\nTags:\n\nUpdated:","date":"2022-01-26 18:08:31","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.26246511936187744, \"perplexity\": 6104.204564726483}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, 
\"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320304959.80\/warc\/CC-MAIN-20220126162115-20220126192115-00678.warc.gz\"}"}
null
null
{"url":"http:\/\/mathhelpforum.com\/pre-calculus\/106480-combining-simplifying-logs-part-2-a-print.html","text":"# combining\/simplifying logs (part 2)\n\nPrintable View\n\n\u2022 Oct 6th 2009, 10:48 AM\nEvan.Kimia\ncombining\/simplifying logs (part 2)\nhttp:\/\/img25.imageshack.us\/img25\/2971\/log2vh.jpg\n\nHere is the 2nd one which asks to simplify the whole thing as a logarithm. Again, not sure how to take the integers at the end and convert them into log form. (Headbang) Thanks.\n\u2022 Oct 6th 2009, 10:58 AM\ne^(i*pi)\nQuote:\n\nOriginally Posted by Evan.Kimia\nhttp:\/\/img25.imageshack.us\/img25\/2971\/log2vh.jpg\n\nHere is the 2nd one which asks to simplify the whole thing as a logarithm. Again, not sure how to take the integers at the end and convert them into log form. (Headbang) Thanks.\n\n$3 = log_4 (4^3)$\n\nedit:\n\nbefore that I would change $2log_2(8) = 2log_2(2^3) = 6$.\n\nSimplify the two integers and then make that into a log with base 4 to match that of x and y\n\u2022 Oct 6th 2009, 11:29 AM\nEvan.Kimia\nThank you, but im a bit confused how to go about changing the base, or if i even have to to log base 4.\n\nps., the correct answer is log base 4 (xy^2 over 64)\n\u2022 Oct 6th 2009, 11:44 AM\ne^(i*pi)\nWe know that $log_a(a) = 1$\n\nNote that $2log_2(8) = 6$ because $8=2^3$\n\n$2log_4(y) = log_4(y^2)$\n\n$-6+3 = -3$\n\nAs $-3 = log_4(4^{-3})$ (see the rule above)\n\nSo overall we can rewrite it as $log_4(x)+log_4(y^2)+log_4 \\left(\\frac{1}{64}\\right)$\n\nYou can then combine the logs to give:\n\n$log_4 \\left(\\frac{xy^2}{64}\\right)$\n\u2022 Oct 6th 2009, 11:56 AM\nEvan.Kimia\nah! 
thank you!","date":"2016-10-28 13:30:11","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 10, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8128666877746582, \"perplexity\": 1441.9495938959021}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-44\/segments\/1476988722459.85\/warc\/CC-MAIN-20161020183842-00110-ip-10-171-6-4.ec2.internal.warc.gz\"}"}
null
null
API landscapes as the foundation of digital transformation Erik Wilde (Axway), Mike Amundsen (Amundsen.com, Inc.) 9:00am–12:30pm Tuesday, June 11, 2019 Application architecture, Enterprise architecture, Microservices Location: 230 C Secondary topics: Best Practice, Overview API product managers, API designers, digital transformation practitioners, enterprise architects, integration architects, and API developers Experience in working with HTTP and the Web An understanding of how web-based APIs are designed and implemented General knowledge of API use and integration Understand the business value of APIs Apply an assessment tool (API Compass) to gauge your existing API program Identify the development lifecycle of individual APIs in your program Identify the scaling/integration aspects of your API landscape or ecosystem Digital transformation has become a necessity for many organizations. In short, it means to reimagine an organization's structure and operations in the context of the new reality of customers, products, and supply chains becoming increasingly digital. APIs play an important role in digital transformation because a robust and dynamic API landscape is essential as a foundation for successful digital transformation initiatives. APIs are merely the technical reflection of what digital transformation really is all about: making it easier for an organization to change itself, to react to external changes such as customers or the competition, and to quickly gain insights into how well these changes work and how they can be further improved. In essence, APIs reflect an organizational structure that is loosely coupled, where connections can be made on demand and where flexibility is valued over optimization. Erik Wilde and Mike Amundsen look at API landscapes in two ways. First, Erik and Mike highlight those aspects that contribute to the business value of APIs.
These are issues such as findability, DX, loose coupling, and externalizability. Second, they look at tools to assess both the state of individual APIs and the state of the overall API landscape. For these assessments, they use the Continuous API Management (CAM) API compass, which provides a structure for better understanding the fitness of APIs and API landscapes. The goal is to provide a comprehensive overview of the various aspects that play into how APIs and API landscapes are an essential ingredient of digital transformation and how analyses and measurements can help to provide better insights into individual APIs and API landscape in organizations. CAM focuses on a holistic view of individual APIs, their development cycles, their maturity journeys, and how they fit into an organization's API landscape. For the API landscape, a similar structured view is provided, which provides an emphasis on specific aspects of the API landscape and how investments ideally should be made to improve the organization's API landscape. Both the individual and the landscape view are complemented by a compass, which provides a structured analysis and thus helps with assessment and management. Erik Wilde Axway Erik works in the Catalyst team of Axway. His goal is to make clients more successful by providing them with insights and guidance on their path towards API-centric architectures in particular, and on their Digital Transformation journey in general. Previously, he was an adjunct professor at UC Berkeley and worked at EMC, Siemens, CA Technologies, and Good API. Erik is active in the IETF and W3C communities. He holds a PhD from ETH Zurich. Mike Amundsen Amundsen.com, Inc. Mike Amundsen is an internationally known author and speaker who travels the world discussing network architecture, web development, and the intersection of technology and society. 
He works with companies large and small to help them capitalize on the opportunities provided by APIs, microservices, and digital transformation. He's authored numerous books and papers. He contributed to the O'Reilly book Continuous API Management (2018). His RESTful Web Clients was published by O'Reilly in February 2017, and he coauthored Microservice Architecture (June 2016). His latest book, Design and Build Great APIs, (Pragmatic Publishing) is scheduled for release in late 2019. Erik Wilde | CATALYST 07/08/2019 11:57am PDT hello abdulrahman. the presentations are available online here: http://dret.net/lectures/oreilly-sa-ca-2019/ dret. Abdulrahman Mansour Sanad | TECHNICAL MANAGER How I can download the presentation?
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,038
Archangels sells Critiqom stake Business angel investment group Archangels has sold its 25% shareholding in Bellshill-headquartered document outsourcing specialist Critiqom as part of the sale of the business to Opus Trust Communications. Niki McKenzie, Investment Director at Archangels (Picture credit - Graeme Hunter) Archangels first invested in Critiqom in 2005 and since then the company has grown to become a national provider of a variety of complementary document mailing solutions, including an end-to-end suite of data, print, post and multi-channel services. The business is headquartered in Bellshill, North Lanarkshire, with manufacturing operations both at Bellshill and in Warrington, Cheshire. Opus Trust Communications, headquartered in Leicester, provides multi-channel digital, print and postal solutions, designed to support business communication strategies and drive customer engagement. The acquisition price is not being disclosed. Niki McKenzie, investment director at Archangels, said: "Critiqom has been a valued member of the Archangels portfolio for many years and it is clear that its next strategic step forward should be as part of Opus Trust Communications. We wish the whole Critiqom team further success as they embark on the next stage in the company's development."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,727
Are you looking for a suitable conference hotel in Derita for your next meeting or event? Use our free and convenient service in Derita and receive comparable proposals directly from the chosen conference hotels in Derita with just one online request and save a lot of time. The list below shows you the most popular Derita conference hotels as well as conference venues in Derita and gives you details about the hotel and its conference facilities. By clicking the name of the hotel you can view even more information, pictures or videos. Select your preferred hotels to start your free and non-binding online enquiry and you will start receiving your hotel proposals within a couple of hours. We feature a 3,000 sq. ft. ballroom that can be separated into 3 different sections to accommodate most groups. We offer catering and audio-visual services. Haven't found an adequate Derita conference hotel in our list or want to propose a different conference hotel? Please contact us! May we assist you with planning your conference in Derita? Call or email us to benefit from our experience and use our free service to find suitable conference hotels in Derita or in other destinations around the world. You will receive comparable offers directly from the hotels within a couple of hours.
{ "redpajama_set_name": "RedPajamaC4" }
50
'use strict'; // This script depends on the following scripts: // /file-system-access/resources/messaging-helpers.js // /file-system-access/resources/messaging-blob-helpers.js // /file-system-access/resources/messaging-serialize-helpers.js // /file-system-access/resources/test-helpers.js // /service-workers/service-worker/resources/test-helpers.sub.js directory_test(async (t, root_dir) => { const dedicated_worker = create_dedicated_worker(t, kDedicatedWorkerMessageTarget); await do_post_message_test( t, root_dir, /*receiver=*/ dedicated_worker, /*target=*/ dedicated_worker); }, 'Send and receive messages using a dedicated worker.'); directory_test(async (t, root_dir) => { const scope = `${kServiceWorkerMessageTarget}?post-message-with-file-handle`; const registration = await create_service_worker(t, kServiceWorkerMessageTarget, scope); await do_post_message_test( t, root_dir, /*receiver=*/ navigator.serviceWorker, /*target=*/ registration.installing); }, 'Send and receive messages using a service worker.'); if (self.SharedWorker !== undefined) { directory_test(async (t, root_dir) => { const shared_worker = new SharedWorker(kSharedWorkerMessageTarget); shared_worker.port.start(); await do_post_message_test( t, root_dir, /*receiver=*/ shared_worker.port, /*target=*/ shared_worker.port); }, 'Send and receive messages using a shared worker.'); }
{ "redpajama_set_name": "RedPajamaGithub" }
4,350
<?xml version="1.0" encoding="utf-8"?> <android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:fitsSystemWindows="true" tools:context=".view.activity.MainActivity"> <android.support.design.widget.AppBarLayout android:id="@+id/app_bar" android:layout_width="match_parent" android:layout_height="wrap_content" android:theme="@style/AppTheme.AppBarOverlay"> <include layout="@layout/toolbar"/> </android.support.design.widget.AppBarLayout> <FrameLayout android:id="@+id/frameLayout" android:layout_width="match_parent" android:layout_height="match_parent" app:layout_behavior="@string/appbar_scrolling_view_behavior"> <include layout="@layout/item_list" /> </FrameLayout> </android.support.design.widget.CoordinatorLayout>
{ "redpajama_set_name": "RedPajamaGithub" }
1,737
hr { border: solid 1px blue; height: 1px; } .Warning { color: Red; } .Notice { color: Green; } .Tip { color: Gray; } .ErrInput { border:2px solid red; } .banner { width: 100% - 100; text-align: center; height: 50px; line-height: 20px; background: url("../images/banner.jpg") no-repeat; background-color: Gray; border-top: solid 1px gray; border-bottom: solid 1px gray; margin-left:5px; margin-right:5px; } .icon1 { width: 96px; height: 100px; background: url("../images/icon.gif") no-repeat; float:right; } .icon2 { width: 96px; height: 100px; float: left; } .banner_right { top: 20px; text-align: right; position: relative; padding-right: 15px; } .header { width: 100% - 100; text-align: center; height: 20px; line-height: 20px; background-color: #f0f0f0; padding: 5px; border-bottom: solid 1px gray; margin-left:5px; margin-right:5px; } .body { width: 100%; margin: 0 auto; padding: 0px; height: 100%; background-color: white; } .footer { width: 100% - 100; text-align: center; height: 32px; line-height: 16px; background-color: white; padding: 5px; border-top: solid 1px gray; margin-left:5px; margin-right:5px; padding-top:10px; } #navigator { float: left; } #oprator { float: right; } .content { font-size: 11pt; line-height:140%; margin:5px 5px; padding: 0px; } .ffcenter { margin:0 auto; } .tip { margin: 2px; font-size: small; color: Gray; } a:link { color: #003399; text-decoration:none; } a:visited { color: #003399; text-decoration:none; } a:hover { color: white; background-color:#003399; } table { width: 100%; } .table_title { background-color:#f0f0f0; height:35px; border-top: solid 1px gray; } .table_subtitle { background: #f0f0f0; height:25px; border-top: solid 1px gray; } .table_content { background: white ; line-height:120% } .table_content ul li { margin-left:-1.5em } .table_subfooter { background: white ; line-height:120%; text-align:right; } .table_footer { background: #f0f0f0; text-align:right; width:95%; border-top: solid 1px gray; padding:5px; border-bottom: solid 
1px gray; } .comment_count { color:red; } .read_count { color:Red; } .article_title {} .article_title h1 { text-align:center; margin:0 auto; padding: 15px; background-color:#f0f0f0; margin-top:5px; margin-bottom:5px; border-top:solid 1px gray; line-height:120%; } .article_content {} .article_footer { background-color:#f0f0f0; text-align:right; padding:10px; margin-top:5px; margin-bottom:5px; border-bottom:solid 1px gray; } .comment_title { height:20px; background-color:#f0f0f0; border-top:solid 1px gray; font-size:9pt; } .comment_content { font-size:9pt; } .comment_action { float:right; } .comment_input {} .page_bar { height:30px; background-color:#f0f0f0; text-align:right; border-bottom:solid 1px gray; border-top:solid 1px gray; } .register_user { color:Green; } .unregister_user { color:Gray; } .home { width:100%; } .home_list { width:80%; } .home_widget { width:16%; } .category_list { margin-left:3px; margin-right:3px; } .edit_title {} .edit_title h1 { text-align:center; margin:0 auto; padding: 15px; background-color:#f0f0f0; margin-top:5px; margin-bottom:5px; border-top:solid 1px gray; } .edit_content { width:635px; margin:0 auto; background-color:#f0f0f0; padding:5px; border:solid 1px gray; } .edit_action { text-align:center; } .search_bar { margin-bottom:5px; } .search_bar form { margin:0 auto; } .search_button { width: 60px; } body { background:#FFFFFF; font-size: 9pt; font-family:NSimSun; } input, textarea { font-size: 9pt; } select { font-size: 9pt; border-width:1px } .quote { margin:5px; margin-left:2em; border:1px solid #CCCCCC; padding:5px; background: #FFFFFF; font-family:Verdana,Arial,SimSun; } .code { margin:5px; margin-left:2em; } .aTitle { font-size: 12pt;font-weight:bold; } .border { border:1px solid #ccccff } .gray { color:gray; text-decoration:none } .time { color:red } .hit { color:green } .tdBg { background: #f2f8ff ; line-height:120%} .bg1 { background: #f2f8ff; } .bg2 { background: #E8F2FF; } 
.SubTitle{font-family:NSimSun;font-size:12pt;color:black;text-align:center;background-color:#e8f2ff; border:1px solid #ccccff; padding-top:2pt;} .Head{font-size:16pt;color:#ff3399;} .nblock{ background:#f2f8ff; border-top:0;border-bottom:0;padding:5pt; } blockquote{font-size:9pt;color:black;background-color:white;cursor:default;border-top:double;border-bottom:double;padding:10pt} .n1{color:black;background-color:yellow;font-weight:normal} .n2{color:purple;font-weight:normal} .n3{color:red;font-weight:normal} .n4{color:blue;font-weight:normal} .n5{color:green;font-weight:normal} .c1{color:yellow;background-color:red} .c2{color:red;background-color:yellow} .c3{color:yellow;background-color:yellow} .rightnew{ text-align:right; width:95%; } img{max-width:100%;}
{ "redpajama_set_name": "RedPajamaGithub" }
9,447
\section{Introduction} For classification tasks, deep neural networks (DNNs) are able to achieve zero training error when trained on datasets with label noise, even in the extreme scenario of totally randomized labels \cite{zhang2016understanding}. This poses a challenge when training on real-world datasets. Manual data labeling is both inefficient and expensive, while automated annotation methods inherently introduce label noise. In either case, we typically have access to a small clean subset of the dataset. For such datasets with label noise, how then do we train DNNs that generalize well, regardless of the actual (unknown) noise levels? Label noise in real-world datasets is inevitably asymmetric. This asymmetry naturally arises because the performance of any data annotation method that is based on the output labels of a prediction model, is necessarily both class-dependent and instance-dependent \cite{Cheng2018:SurveyAutoImageAnnotation}. Although there are numerous existing methods for tackling label noise~\cite{reed2014training, ghosh2017robust, patrini2017making, hendrycks2018using, tanaka2018joint, zhang2018generalized, arazo2019unsupervised, shu2019meta, wang2019symmetric, xu2019l_dmi, zhang2019metacleaner,li2020dividemix,ma2020normalized}, all of which perform well in the idealized case of symmetric label noise (provided the noise level is not too high), these existing methods are not as robust to asymmetric label noise, and they exhibit a sharp performance drop at medium-to-high noise levels. Despite much progress on tackling label noise, there are two seemingly opposing challenges that have not been tackled jointly: How do we train a model that is (i) robust to all noise levels, and (ii) whose performance at any given noise level is not sensitive to any variation in the noise model? To solve both challenges via a unified approach, we propose a distillation-based framework that incorporates a new method of Positive-Unlabeled (PU) learning. 
In general, we have a trade-off between increasing accuracies across all noise levels for a given noise model, and increasing accuracies across all noise models for a given noise level. Our motivation to use PU learning is based on the observation that samples of any dataset with label noise can naturally be partitioned into ``correct" and ``incorrect" classes. Clean data is correct by definition, while the remaining noisy dataset contains both correct and incorrect instances. This is precisely the scenario of PU learning, where instances in the noisy dataset are treated as ``unlabeled". Our framework comprises two components: clean data augmentation and knowledge distillation. Starting with our given clean subset, we initially treat all instances with noisy labels (henceforth called ``noisy samples'') as being unlabeled, and we gradually augment the original clean subset, iteratively, via PU learning; those (unlabeled) noisy samples inserted into our augmented clean set would be assigned new labels. Crucially, this new label assignment does not require the originally given labels of the noisy samples. In other words, we \emph{do not require any assumptions on the underlying noise model.} Hence, this clean data augmentation component is automatically robust to all noise models. As for our distillation component, we train a teacher model solely on the augmented clean set, thus we are able to suppress the influence of label noise when training the student model, especially at high noise levels. Our major contributions are summarized as follows: \begin{itemize} \item We propose a versatile distillation-based framework for tackling label noise. In contrast to existing work, our framework is robust to all noise levels, and is not sensitive to noise model variation. To the best of our knowledge, this is the first ever solution that overcomes both major challenges we described earlier. \item We introduce a new type of PU learning. 
We then use this new technique to design a clean data augmentation algorithm, which also allows us to correct noisy samples with high confidence. The effectiveness of our augmentation algorithm is demonstrated by the high precision scores of reliably clean samples extracted from the validation set. \item In experiments on CIFAR-10 \cite{krizhevsky2009learning} with asymmetric semantic label noise, our proposed framework outperforms state-of-the-art (SOTA) methods at noise levels $50\%$--$90\%$. When evaluated on the real-world noisy dataset Clothing1M~\cite{xiao2015learning}, we achieved a new SOTA accuracy of 77.70\% (2.94\% higher than previous SOTA). \end{itemize} \section{Related Work} \label{sec: related work} \subsection{Existing Approaches for Tackling Label Noise} \noindent\textbf{Data cleaning methods.} These are methods applied to identified noisy labels, and they include label correction~\cite{tanaka2018joint}, sample re-weighting~\cite{shu2019meta, jiang2018mentornet, zhang2019metacleaner}, label distribution estimation~\cite{yi2019probabilistic}, re-sampling of a relabeled dataset~\cite{wu2018light}, and treating ambiguous samples as unlabeled data and applying a semi-supervised method~\cite{Ding2018AST}. The efficacy of such methods depends heavily on the identification of noisy labels, which is inherently a difficult problem as DNNs easily overfit noisy labels. \noindent\textbf{Methods with robust loss functions.} There are numerous loss functions proposed to alleviate the influence of label noise. These include symmetric loss~\cite{ghosh2017robust}, variants of cross-entropy (CE) loss (generalized CE loss~\cite{zhang2018generalized}, symmetric CE loss~\cite{wang2019symmetric}, Taylor CE loss~\cite{feng2020can}), bootstrap loss~\cite{arazo2019unsupervised, reed2014training}, bilevel-optimization-based loss~\cite{jenni2018deep}, information-theoretic loss~\cite{xu2019l_dmi}, SIGUA loss~\cite{han2020sigua}.
See also \cite{ma2020normalized} for a general framework for combining loss functions to enhance robustness to label noise. \noindent\textbf{Noise model estimation methods.} Label noise is modeled either explicitly by a noise transition matrix \cite{hendrycks2018using, patrini2017making}, or implicitly via graphical models \cite{xiao2015learning}, knowledge graphs \cite{li2017learning}, conditional random fields \cite{Vahdat2017toward}, residual nets \cite{hu2019weakly}, or joint embedding networks \cite{lee2018cleannet}. Once an underlying noise model has been well-estimated, the true labels can then be inferred with high confidence. For example, a good estimation of the noise transition matrix can be achieved by adding an extra noise model over the base model and simultaneously training both models \cite{jindal2016learning, sukhbaatar2014training}, albeit under certain strong assumptions. \subsection{Preliminaries} \label{preliminaries} \textbf{Knowledge distillation} was first proposed by Hinton \textit{et al.} \cite{hinton2015distilling} for model compression, whereby the knowledge learned by a large teacher model $f_t(\cdot)$ is transferred to a smaller student model $f_s(\cdot)$ by applying a weighted average of two objective functions as follows: \vspace*{-0.1em} \begin{equation} \label{distillation_loss_function} \mathcal{L}(y,f_{s}(x))=\lambda l(y, f_{s}(x)) + (1-\lambda) l(f_{t}(x), f_{s}(x)), \end{equation} \vspace*{-1.1em} \noindent where $l(\cdot)$ is the traditional loss function, and $\lambda$ is a parameter to balance the effect of the given labels and the outputs of the teacher model. Assuming that a subset $\mathcal{D}_c$ of a given dataset is known to have correct labels, Li \textit{et al.} \cite{li2017learning} proposed a distillation-based method, where the teacher model $f_t(\cdot)$ is trained on $\mathcal{D}_c$ and the student model $f_s(\cdot)$ is subsequently trained using the loss function in \eqref{distillation_loss_function}.
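To make \eqref{distillation_loss_function} concrete, the following minimal Python sketch (our own illustration, not code from \cite{hinton2015distilling} or \cite{li2017learning}) computes the weighted objective for a single sample, taking $l$ to be cross-entropy between probability vectors:

```python
import math

def cross_entropy(target, pred, eps=1e-12):
    """Cross-entropy l(target, pred) between two probability vectors."""
    return -sum(t * math.log(p + eps) for t, p in zip(target, pred))

def distillation_loss(y, student_out, teacher_out, lam):
    """Eq. (1): lambda * l(y, f_s(x)) + (1 - lambda) * l(f_t(x), f_s(x))."""
    return (lam * cross_entropy(y, student_out)
            + (1 - lam) * cross_entropy(teacher_out, student_out))

# Toy 3-class example: one-hot given label y, student/teacher soft outputs.
y   = [1.0, 0.0, 0.0]
f_s = [0.7, 0.2, 0.1]   # student prediction f_s(x)
f_t = [0.6, 0.3, 0.1]   # teacher prediction f_t(x)
loss = distillation_loss(y, f_s, f_t, lam=0.5)
```

Setting $\lambda=1$ recovers the ordinary supervised loss, while $\lambda=0$ trains the student purely against the teacher's predictions.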
Hence, the student model would have a higher accuracy compared to the same model trained directly on the noisy dataset, as long as the error rate of the teacher model is less than the noise rate of the dataset. A key merit of this distillation-based method is that it can be flexibly built on top of any algorithm for training the teacher and student models. However, their method has two drawbacks: (i) The noisy labels remain invariant in the objective function, which affects the performance of the student model; (ii) It requires a large clean set (e.g. the values $\frac{\vert \mathcal{D}_c \vert}{\vert \mathcal{D} \vert}=1$ and $\frac{1}{4}$ were used in their experiments) to train a sufficiently accurate teacher model. \textbf{PU learning} is a special type of semi-supervised learning that involves training a binary classifier on a set of positive data $\mathbf{P}$ and a set of unlabeled data $\mathbf{U}$~\cite{li2005learning}. Existing PU learning algorithms can be classified into four approaches: (i) bias-based, (ii) heuristic-based, (iii) bagging-based, and (iv) risk-based. \begin{itemize} \item A bias-based approach treats $\mathbf{U}$ as learnable weighted combinations of ``positive" and ``negative" classes \cite{elkan2008learning}. \item A heuristic-based approach iteratively selects reliable negative samples from the unlabeled data via a two-step algorithm: (i) identify new reliable negative samples $\mathbf{N}$ from $\mathbf{U}$; (ii) train a binary classifier on $\mathbf{P}$ and $\mathbf{N}$ \cite{liu2002partially}. \item A bagging-based approach uses an ensemble of classifiers trained on bootstrap subsets \cite{mordelet2014bagging}. \item A risk-based approach estimates the misclassification risk by replacing the risk of negative samples with the risk in terms of $\mathbf{P}$ and $\mathbf{U}$, given the class prior \cite{du2014analysis}.
\end{itemize} \textbf{Mixup} \cite{zhang2017mixup} is a data augmentation method that encourages linear behavior between training samples by augmenting the training dataset with convex combinations of pairs of examples and their labels. Specifically, given a pair of samples ($x_i$, $y_i$) and ($x_j$, $y_j$), the augmented sample is generated via \vspace*{-0.1em} \begin{equation*} \tilde{x}=\beta x_i + (1-\beta)x_j, \quad \tilde{y}=\beta y_i + (1-\beta)y_j, \end{equation*} \vspace*{-1.1em} \noindent where $\beta \sim \text{Beta}(\mu, \mu)$ follows a beta distribution for some parameter $\mu \in (0, \infty)$. We used $\mu=2$ in our experiments. \textbf{Entropy regularization} \cite{tanaka2018joint} is introduced to concentrate the distribution of each prediction vector to one peak: \begin{equation} \mathcal{L}_e=-\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^C p_j(x_i)\log(p_j(x_i)), \end{equation} where $n$ is the number of instances, $C$ is the number of classes, and $p_j(\cdot)$ is the $j$-th entry of the output vector. \section{Proposed Framework} \label{sec: proposed method} Consider a $C$-class noisy dataset containing a small clean subset known to have correct labels. We denote the dataset by $\mathcal{D}=\mathcal{D}_c \cup \mathcal{D}_n$, where $\mathcal{D}_c$ and $\mathcal{D}_n$ represent this small clean set and the remaining noisy set, respectively. \begin{algorithm}[H] \caption{Proposed Method}\label{alg:algorithm} \textbf{Inputs}: Number of classes $C$, clean set $\textstyle\mathcal{D}_c=\bigcup_{i= 1}^C\mathcal{D}_c^{(i)}$, noisy set $\mathcal{D}_n$, number of iterations $K$, number of ensemble models $N$, positive threshold $\alpha$, reliability criterion $\theta$, number of teacher models $N_t$.
\\ \textbf{Intermediate teacher models}: $f^{(i)}_n$ ($1\leq n\leq N, 1\leq i\leq C$).\\ \textbf{Output}: Student classifier $f_s$.\\[-0.9em] \begin{algorithmic}[1] \STATEx{// Clean data augmentation} \FOR{$i=1$ \textbf{to} $C$} \STATE {$\hat{\mathcal{P}}^{(i)} \gets \emptyset$.} \FOR{$k=1$ \textbf{to} $K$} \STATE{$m_k^{(i)} \gets \min\{\vert \mathcal{D}_c^{(i)} \vert, \vert \hat{\mathcal{P}}^{(i)} \vert\}$; $m'^{(i)}_k \gets \frac{m_k^{(i)}+\vert \mathcal{D}_c^{(i)}\vert}{2}$.} \FOR{$n=1$ \TO $N$} \STATE{\begin{varwidth}[t]{\linewidth} Randomly sample $\mathcal{P}'$, $\mathcal{N}'$, and $\mathcal{N}''$, such that \par \hskip\algorithmicindent $\vert \mathcal{P}'\vert=m_k^{(i)}$, and $\vert \mathcal{N}'\vert=\vert \mathcal{N}''\vert=m'^{(i)}_k$. \end{varwidth}} \STATE{\begin{varwidth}[t]{\linewidth} Train $f^{(i)}_n$ on $\Big\{\begin{smallmatrix}\mathcal{D}_c^{(i)}\cup \mathcal{P}' \text{ (positives)};\\ \mathcal{N}' \cup \mathcal{N}'' \text{ (negatives)}. \end{smallmatrix}$ \end{varwidth}} \ENDFOR \STATE{$\hat{\mathcal{P}}^{(i)} \gets \left\{x\in\mathcal{D}_n \,\middle\vert\, \vert \{n \mid f^{(i)}_n(x)\geq\alpha \} \vert \geq \theta \right\}$.} \STATE{Update $\mathcal{P}^{(i)}=\hat{\mathcal{P}}^{(i)}\cup\mathcal{D}_c^{(i)}$.} \ENDFOR \ENDFOR \STATE{$\mathcal{P} \gets \bigcup\limits_{i= 1}^C\mathcal{P}^{(i)}$.} \STATEx{//Knowledge distillation} \FOR{$n=1$ \textbf{to} $N_t$} \STATE{\begin{varwidth}[t]{\linewidth} Train $f_t^{(n)}$ on a balanced bootstrap subset of $\mathcal{P}$. \end{varwidth}} \ENDFOR \STATE{Generate pseudo-labels for samples in $\mathcal{D}_n$ with teacher models}. \STATE{Train a student model $f_s$ on $\mathcal{D}_c\cup\mathcal{D}_n$ with pseudo-labels.} \RETURN {$f_s$} \end{algorithmic} \end{algorithm} Our framework comprises two components, clean data augmentation and knowledge distillation. In the first component, we introduce a new method of PU learning to train a filter on the clean set and generate the augmented clean set with the filter. 
The filter and the clean set are updated iteratively, and the final augmented clean set is used to correct $\mathcal{D}_n$. In the second component, we apply a variant of knowledge distillation, where the teacher model is trained on the augmented clean set, and the student model is trained on the entire dataset. (See Algorithm \ref{alg:algorithm} for a summary.) \subsection{Clean Data Augmentation Component} \label{clean data augmentation} We first propose a tiered PU learning method to augment the small clean set with a two-step iterative method: (i) Train an ensemble filter on the clean data with the idea of bagging \cite{Breiman1996}, which generates an ensemble of models separately trained on bootstrap subsets of the whole dataset; (ii) Use the filter to choose reliable samples from the noisy set and update the clean set. By repeatedly alternating between these two steps, we will gradually improve the filter and enlarge the clean set. In contrast to existing PU learning approaches, which fix the given positive set throughout training, we update the positive set iteratively. Also, despite having a similar two-step strategy, our approach is not heuristic-based: We do not need an initial distinguished set of reliable negative examples. Let $\mathcal{P}=\hat{\mathcal{P}}\cup\mathcal{D}_c$ (resp. $\mathcal{P}^{(i)}=\hat{\mathcal{P}}^{(i)}\cup\mathcal{D}_c^{(i)}$) be the augmented clean set (for class $i$), where $\hat{\mathcal{P}}$ (resp. $\hat{\mathcal{P}}^{(i)}$) represents the additional reliable samples filtered from the noisy set (for class $i$). Augmentation is done separately for each class, so for ease of explanation, consider a single class $i$. Initialize $\hat{\mathcal{P}}^{(i)}=\emptyset$. 
Next, run $K$ iterations, where in each iteration, form an ensemble filter consisting of $N$ binary classifiers $\{f^{(i)}_n\}_{n=1}^N$ to update the augmented clean dataset $\mathcal{P}^{(i)}$, described as follows: \begin{itemize} \item \textbf{Train an ensemble filter $\{f^{(i)}_n\}_{n=1}^N$.} The main idea is to separately train each binary classifier on a bootstrap subset of ``positive" data combined with a bootstrap subset of ``negative" data. Note that $\mathcal{D}^{(i)}_c$ and $\textstyle\bigcup_{j\neq i}\mathcal{D}_c^{(j)}$ are already known to be absolutely positive samples and negative samples, respectively. We shall obtain more positive (resp. negative) samples with a lower confidence level from the additional augmented clean set $\hat{\mathcal{P}}^{(i)}$ (resp. the remaining noisy set $\mathcal{D}_n\setminus \hat{\mathcal{P}}^{(i)}$). Let $m_k^{(i)} := \min\{\vert \mathcal{D}_c^{(i)} \vert, \vert \hat{\mathcal{P}}^{(i)} \vert\}$ and $m'^{(i)}_k :=\frac{m_k^{(i)}+\vert \mathcal{D}_c^{(i)}\vert}{2}$ be two parameters to control the size of the bootstrap subsets. For each binary classifier $f^{(i)}_n$, we sample subsets $\mathcal{P}'\subseteq\hat{\mathcal{P}}^{(i)}$, $\mathcal{N}'\subseteq\mathcal{D}_n\setminus \hat{\mathcal{P}}^{(i)}$ and $\mathcal{N}''\subseteq \textstyle\bigcup_{j\neq i}\mathcal{D}_c^{(j)}$ uniformly at random, such that $\vert \mathcal{P}'\vert=m_k^{(i)}$, $\vert \mathcal{N}'\vert=\vert \mathcal{N}''\vert=m'^{(i)}_k$. Next, we train $f^{(i)}_n$ on $\mathcal{D}_c^{(i)}\cup \mathcal{P}'$ as the positives, and $\mathcal{N}' \cup \mathcal{N}''$ as the negatives. Note that $f^{(i)}_1, \dots, f^{(i)}_N$ are trained on different random subsets, each with an equal number of positive and negative samples. Hence, the aggregation of these $N$ models would have lower bias.
\item \textbf{Generate the augmented clean set $\mathcal{P}^{(i)}=\hat{\mathcal{P}}^{(i)}\cup\mathcal{D}_c^{(i)}$.} Using the ensemble filter generated in the previous step, we select samples from the noisy set that are identified as positive samples with relatively high confidence. Specifically, we define the set of additional positive samples as \begin{equation} \hat{\mathcal{P}}^{(i)}=\left\{x\in\mathcal{D}_n \,\middle\vert\, \vert \{n \mid f^{(i)}_n(x)\geq\alpha \} \vert \geq \theta \right\}, \end{equation} where the reliability criterion $\theta$ and confidence threshold $\alpha$ are two hyperparameters designed to control the confidence level of $\hat{\mathcal{P}}^{(i)}$. The set of positive samples for each model $f^{(i)}_n$ is given by $\{x \mid f^{(i)}_n(x)\geq\alpha\}$, thus $\hat{\mathcal{P}}^{(i)}$ is composed of samples $x$ for which there exist at least $\theta$ binary classifiers ($f^{(i)}_\ast$) that classify $x$ as positive data. To complete this iterative step, let $\mathcal{P}^{(i)} = \hat{\mathcal{P}}^{(i)} \cup \mathcal{D}_c^{(i)}$, and samples in $\mathcal{P}^{(i)}$ are automatically relabeled with label $i$. \end{itemize} Repeat these two steps for $K$ iterations. In general, we convert a multi-class semi-supervised problem into $C$ binary classification problems that are easier to solve. For class $i$, for example, we start with a small clean ``positive" set $\mathcal{D}_c^{(i)}$, and we sample a ``negative" bootstrap set from $\mathcal{D}_n$ to train each binary model, where roughly $\frac{C-1}{C}\times 100\%$ of the samples are true negatives. Intuitively, the negative set sampled in this way is more reliable than those obtained by semi-supervised methods based on similarity measures, and is easier to implement. \subsection{Knowledge Distillation Component} Given the augmented clean dataset $\mathcal{P}$, we apply a variant of knowledge distillation to train a teacher-student model.
In contrast to the original distillation method \cite{hinton2015distilling, li2017learning}, we instead use a case-wise parameter $\lambda(x_j)$ based on the maximum entry of the prediction vector of the teacher model. We first train the teacher model on the augmented clean set $\mathcal{P}$ with reassigned labels. Next, we train the student model on the entire dataset $\mathcal{D}$. (The given labels are already corrected at the end of the previous component.) If $\mathcal{P}$ is class-balanced, we directly train a multi-class classifier on $\mathcal{P}$ as the teacher model. Otherwise, we train an ensemble of several (e.g. 5) classifiers on balanced bootstrap subsets of $\mathcal{P}$, where each bootstrap subset contains $\smash{\underset{i}{\operatorname{min}} \vert \mathcal{P}^{(i)} \vert}$ samples for every class. Note that the loss function given in \eqref{distillation_loss_function} is equivalent to a normal loss function (e.g. cross-entropy) applied to the pseudo-label $\hat{y}$, which is generated as a weighted combination of the teacher model output and the (corrected) given label.
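The balanced bootstrap sampling described above can be sketched in a few lines of Python; the dictionary-of-index-lists representation of $\mathcal{P}$ and the toy class sizes are our own illustrative assumptions:

```python
import random

def balanced_bootstrap(class_indices, rng):
    """Draw one balanced bootstrap subset of the augmented clean set P:
    every class contributes min_i |P^(i)| indices, sampled with replacement."""
    m = min(len(idx) for idx in class_indices.values())
    subset = []
    for idx in class_indices.values():
        subset.extend(rng.choices(idx, k=m))   # bootstrap: with replacement
    return subset

# Toy augmented clean set P: per-class lists of sample indices
# (class sizes 5, 3, and 4, so each class contributes 3 indices).
P = {0: list(range(5)), 1: list(range(100, 103)), 2: list(range(200, 204))}
rng = random.Random(0)
subsets = [balanced_bootstrap(P, rng) for _ in range(5)]   # one per teacher
```

Training each teacher model on a different such subset yields an ensemble whose averaged prediction serves as $f_t(\cdot)$.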
There are three ways to generate the pseudo-labels: \begin{itemize} \item Soft bootstrap label: \vspace*{-0.4em} \begin{equation} \label{soft bootstrap label formulation} \hat{y}_j=\lambda(x_j)f_t(x_j)+(1-\lambda(x_j))y_j, \end{equation} \vspace*{-1.5em} \noindent where $f_t(x_j)$ is the average prediction vector of 5 teacher models, $y_j$ is the one-hot vector of the $j$-th corrected given label, and $\lambda(\cdot)$ is a function to measure the confidence level of the teacher models for $x_j$, defined by \vspace*{-0.7em} \begin{equation} \label{casewise parameter} \lambda(x_j) = \begin{cases} \lambda, &\mbox{if } \max(f_t(x_j))\geq\eta; \\ 0, & \mbox{if } \eta > \max(f_t(x_j)); \end{cases} \end{equation} \vspace*{-0.9em} \noindent where $\eta$ is used to control the confidence level of the pseudo-labels, and $\lambda$ is used to keep the balance between the prediction of teacher model and given label. \item Hard bootstrap label: The definition is similar to soft bootstrap label, but the prediction vector $f_t(x_j)$ in \eqref{soft bootstrap label formulation} is replaced by a one-hot vector, where the entry of value $1$ is the maximum entry in $f_t(x_j)$, i.e. $\operatorname{argmax} f_t(x_j)$. \item Hard label: \vspace*{-0.1em} \begin{equation} \label{hard label formulation} \hat{y}_j = \begin{cases} \operatorname{argmax} \ \ f_t(x_j), &\mbox{if } \max(f_t(x_j))\geq\eta; \\ \operatorname{argmax} \ \ y_j, & \mbox{if } \eta > \max(f_t(x_j)). \end{cases} \end{equation} \end{itemize} \section{Experiments} \label{experiment} \subsection{Experiment Setup} \subsubsection{Datasets} \label{dataset} \textbf{CIFAR-10}. The CIFAR-10 dataset \cite{krizhevsky2009learning} contains 50,000 training images and 10,000 test images in 10 classes, with 6,000 images per class. Each image has a size of $32\times 32\times 3$. Let the ratio of the original clean set $\mathcal{D}_c$ and the noise level of the entire dataset $\mathcal{D}$ be denoted by $\pi$\% and $r$\% respectively. 
Within each class, we sampled $\frac{\pi\%}{10}\times50,000$ images uniformly at random from the training set to form the original clean data $\mathcal{D}_c$. Then, we added two types of synthetic label noise to the remaining $(100-\pi)$\% data as follows: \begin{itemize} \item \textbf{Symmetric noise}. We randomly chose $\frac{100r}{100-\pi}\%$ samples from the remaining $(100-\pi)\%$ data, and for each chosen sample, we randomly assigned a new label. We require a $\frac{100}{100-\pi}$ factor to take into consideration the original $\pi$\% clean set used in our method; \item \textbf{Asymmetric noise}. We followed the definition in \cite{patrini2017making}, where label noise for pairs of semantically similar classes (CAT$\leftrightarrow$DOG, DEER$\rightarrow$HORSE, BIRD$\rightarrow$AIRPLANE, TRUCK$\rightarrow$AUTOMOBILE) was generated by randomly assigning $\frac{100r}{100-\pi}\%$ of the samples from each objective class to the target label. Note that the noise level defined in \cite{patrini2017making} refers to the class noise level. As we have $5$ objective classes, the overall noise level of the dataset is $0.5r\%$. \end{itemize} \textbf{Clothing1M}. The Clothing1M dataset \cite{xiao2015learning} is a real-world image dataset with both noisy and clean labels. There are over a million clothing images in 14 classes collected from online shopping websites, and a noisy label is automatically assigned to each image based on the keywords in the surrounding text. A manually labeled clean subset with 72,409 images is provided. \subsubsection{Baselines} For CIFAR-10, we compared our framework with the following baselines, using the code provided in the respective papers. For those methods that add synthetic noise via a transition matrix, we multiply the noise level by a factor of 0.9 (i.e., $\frac{N_{\text{class}}-1}{N_{\text{class}}}$) for fair comparison.
\begin{itemize} \item Mixup\cite{zhang2017mixup}, which alleviates the effect of noisy labels by training on convex combinations of pairs of samples. \item Joint Optimization \cite{tanaka2018joint}, which alternately updates network parameters and labels during training. \item Co-teaching\cite{han2018co}, which simultaneously trains two neural networks and cross-updates the network parameters. \item Loss Correction\cite{arazo2019unsupervised}, which estimates wrong label probabilities and corrects the loss with a beta mixture model. \item JoCoR\cite{wei2020combating}, which jointly trains two networks using a joint loss with co-regularization. \item DivideMix\cite{li2020dividemix}, which divides the dataset into two subsets, and concurrently trains two networks in a semi-supervised manner using MixMatch. \end{itemize} \subsubsection{Implementation Details} For CIFAR-10, we used a Pre-Activation ResNet-18 \cite{he2016identity} and an SGD optimizer with a momentum of 0.9, a weight decay of $10^{-4}$, and batch size of 128. For preprocessing, the images were normalized, and augmented by random horizontal flipping and random 32$\times$32 cropping with padding=4. In the clean set augmentation step, each model was trained over 30 epochs. The learning rate was initialized at 0.01 and was divided by 10 after 20 epochs. The teacher models and the student model were trained over 100 epochs each. The learning rate was initialized at 0.05 and was divided by 10 after 30, 50, and 80 epochs. For Clothing1M, we follow the configuration used in previous works \cite{arazo2019unsupervised,li2019learning,tanaka2018joint, xu2019l_dmi, zhang2019metacleaner, lee2018cleannet, han2019deep}, i.e. we used a ResNet-50 \cite{he2016identity} pre-trained on ImageNet. We used an SGD optimizer with a momentum of 0.9, a weight decay of $10^{-4}$, a cross-entropy loss function, and a batch size of 32. 
For preprocessing, the images were normalized, and augmented by random horizontal flipping and centered 224$\times$224 cropping. In the clean set augmentation step, each model was trained over 20 epochs. The learning rate was initialized at 0.03 and was divided by 10 after 10 and 15 epochs. We trained the student model and each teacher model for 5 epochs. The learning rate was initialized at 0.001 and was divided by 10 after 3 epochs. Code is available at \url{https://github.com/Xu-Jingyi/PUDistill}. \begin{table}[tb] \caption{The precision and size of the augmented clean set for CIFAR-10 (test set) and Clothing1M (validation set).} \label{Table of augmented clean set} \centering \begin{tabular}{lrr|rr} \toprule \multirow{2}{*}{Class} & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c}{Clothing1M} \\ \cmidrule{2-5} & \multicolumn{1}{r}{Precision} & \multicolumn{1}{r|}{Size} & \multicolumn{1}{r}{Precision} & \multicolumn{1}{r}{Size} \\ \midrule 0 & 0.92 & 413 & 0.94 & 298 \\ 1 & 0.98 & 586 & 0.83 & 142 \\ 2 & 0.93 & 281 & 0.46 & 166 \\ 3 & 0.80 & 219 & 0.95 & 736 \\ 4 & 0.90 & 306 & 0.78 & 651 \\ 5 & 0.93 & 368 & 0.93 & 554 \\ 6 & 0.96 & 462 & 0.69 & 247 \\ 7 & 0.97 & 456 & 0.54 & 273 \\ 8 & 0.96 & 575 & 0.93 & 498 \\ 9 & 0.97 & 538 & 0.95 & 480 \\ 10 & - & - & 0.96 & 272 \\ 11 & - & - & 0.58 & 173 \\ 12 & - & - & 0.82 & 680 \\ 13 & - & - & 0.91 & 624 \\ \midrule \multirow{2}{*}{Overall} & \multirow{2}{*}{0.94} & 4,204 & \multirow{2}{*}{0.85} & 5,794 \\ & & (out of 10,000) & & (out of 14,313) \\ \bottomrule \end{tabular} \end{table} \begin{table}[htb!] \caption{Accuracy of the ensemble teacher model, which is evaluated on the subset $\{x\in \mathcal{D}_{test}\mid\max(f_t(x))\geq\eta\}$ of the test set (resp. evaluated on the validation set) for CIFAR-10 (resp. 
Clothing1M) with various threshold values $\eta$.\\[-1.5em]} \label{teacher model} \centering \begin{tabular}{lrr|rr} \toprule \multirow{2}{*}{$\eta$} & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c}{Clothing1M} \\ \cmidrule{2-5} & Accuracy (\%) & Size & Accuracy (\%) & Size \\ \midrule 0.9 & 99.3 & 1,313 & 94.6 & 5,635 \\ 0.8 & 98.0 & 4,845 & 91.3 & 8,219 \\ 0.7 & 95.6 & 6,492 & 88.1 & 9,950 \\ 0.6 & 93.0 & 7,561 & 85.1 & 11,407 \\ 0.5 & 89.8 & 8,487 & 82.0 & 12,808 \\ 0.0 & 82.5 & 10,000 & 77.7 & 14,313 \\ \bottomrule \end{tabular} \end{table} \subsection{Comparison between Different Ratios of Clean Set} \begin{figure}[tb] \centering \includegraphics[width=80mm]{comparison_clean_ratio.png} \caption{Comparison of the best test accuracies (average of 5 trials) corresponding to different clean ratios $(\pi\%)$ on CIFAR-10 with symmetric noise (upper) and asymmetric noise (lower). All experiments were implemented using soft-bootstrap pseudo-labels.} \label{Comparison clean ratio} \end{figure} We applied bootstrapping when training the teacher model on the augmented clean set to (i) mitigate the problem of class imbalance, and (ii) reduce the training time. Specifically, we sampled a balanced subset of the augmented clean set in each epoch, and trained the teacher model on this bootstrap subset instead of the entire augmented clean set. To further improve robustness, we also used mixup data augmentation and entropy regularization (cf. Section \ref{preliminaries}) to train the teacher models and the student model. A detailed ablation study and an elaboration of the effectiveness of each technique are given in Section \ref{ablation study}. \begin{figure}[htb!] \centering \includegraphics[width=80mm]{comparison_label_type.png} \caption{Comparison of the best test accuracies (average of 5 trials) corresponding to different types of pseudo-labels on CIFAR-10 with symmetric noise (upper) and asymmetric noise (lower).
We fixed $\eta=0.7$ and $\pi\%=10\%$ in the experiments.} \label{Comparison types of label} \end{figure} \subsection{Evaluation of Augmented Clean Set} To show the effectiveness of our clean data augmentation algorithm, we report the precision and the number of augmented clean samples in TABLE \ref{Table of augmented clean set}, where the results were evaluated on the test set for CIFAR-10 and the validation set for Clothing1M. As described in Section \ref{clean data augmentation}, we introduced a threshold $\alpha$ to filter ``positive" samples, and a hyperparameter $\theta$ to control the size of the augmented clean set. In our experiments, we fixed $\alpha=0.9$, $\theta=19$ (out of 20) for CIFAR-10, and $\alpha=0.95$, $\theta=10$ (out of 10) for Clothing1M. Our method extracted more than 40\% of the samples while achieving overall precisions of 0.94 and 0.85 for CIFAR-10 and Clothing1M, respectively. \subsection{Evaluation of Teacher Model} As given in \eqref{soft bootstrap label formulation}--\eqref{hard label formulation}, the teacher model is used to generate pseudo-labels for the training of the student model. To guarantee the accuracy of the pseudo-labels, we only modified the labels of samples in $\{x_j\in \mathcal{D}_n \mid\max(f_t(x_j))\geq\eta \}$, while the pseudo-labels of the remaining samples were kept as the given labels. Here $f_t(\cdot)$ is the average output vector of 5 models, and $\eta$ is the threshold to control the confidence level of the ensemble teacher model. We evaluated the ensemble teacher model for different values of $\eta$; see TABLE \ref{teacher model}. \begin{table}[!tb] \caption{Best test accuracies of different methods on the Clothing1M dataset.
For all baselines, we use the reported results in the respective papers.\\[-1.5em]} \label{clothing reuslt} \centering \begin{tabular}{llr} \toprule \# & Method & Accuracy (\%) \\ \midrule 1 & Cross Entropy \cite{tanaka2018joint} & 68.94 \\ 2 & Forward \cite{patrini2017making} & 69.84 \\ 3 & JoCoR \cite{wei2020combating} & 70.30 \\ 4 & Loss correction \cite{arazo2019unsupervised} & 71.00\\ 5 & Joint opt. \cite{tanaka2018joint} & 72.23 \\ 6 & $\mathcal{L}_{\text{DMI}}$ \cite{xu2019l_dmi} & 72.46 \\ 7 & Metacleaner \cite{zhang2019metacleaner} & 72.50 \\ 8 & Meta learning\cite{li2019learning} & 73.47 \\ 9 & DeepSelf \cite{han2019deep} & 74.45 \\ 10 & CleanNet \cite{lee2018cleannet} & 74.69 \\ 11 & DivideMix\cite{li2020dividemix} & 74.76 \\ \midrule 12 & our method & \textbf{77.70} \\ \bottomrule \end{tabular} \end{table} \begin{table}[!tb] \caption{Ablation study results in terms of teacher model's test accuracy (\%) on CIFAR-10 and Clothing1M.\\[-1.5em]} \label{ablation teacher} \centering \begin{tabular}{lrr} \toprule Methods & CIFAR-10 & Clothing1M \\\midrule Standard & 76.88$\pm$1.82 & 77.27$\pm$0.11 \\\midrule + mixup & 80.68$\pm$2.48 & 77.57$\pm$0.05 \\\midrule + bootstrap & 77.31$\pm$1.16 & 76.97$\pm$0.47 \\\midrule + entropy reg. + bootstrap & 77.12$\pm$1.30 & 77.68$\pm$0.04 \\\midrule + mixup + bootstrap & 81.05$\pm$1.70 & 77.75$\pm$0.10 \\\midrule + mixup + bootstrap + entropy reg. & 81.45$\pm$1.45 & 77.18$\pm$0.05 \\ \bottomrule \end{tabular} \end{table} \subsection{Comparison between Different Types of Pseudo-labels} To show the difference between three types of pseudo-labels introduced in \eqref{soft bootstrap label formulation}--\eqref{hard label formulation}, we compared the best test accuracies on CIFAR-10 with different levels of symmetric noise and asymmetric noise in Fig. \ref{Comparison types of label}. 
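For reference, the three pseudo-label rules of \eqref{soft bootstrap label formulation}--\eqref{hard label formulation} can be sketched in plain Python as follows; the function names and the toy vectors are our own illustrative assumptions:

```python
def one_hot(idx, num_classes):
    return [1.0 if j == idx else 0.0 for j in range(num_classes)]

def pseudo_label(f_t, y, lam=0.5, eta=0.7, mode="soft"):
    """Pseudo-label from teacher prediction f_t and given one-hot label y.
    mode: 'soft' / 'hard_bootstrap' (Eq. (3)) or 'hard' (Eq. (5))."""
    top = max(range(len(f_t)), key=lambda j: f_t[j])
    confident = f_t[top] >= eta
    if mode == "hard":
        return one_hot(top if confident else y.index(max(y)), len(y))
    teacher = f_t if mode == "soft" else one_hot(top, len(y))
    lam_x = lam if confident else 0.0   # case-wise lambda(x_j), Eq. (4)
    return [lam_x * t + (1 - lam_x) * g for t, g in zip(teacher, y)]

f_t = [0.8, 0.1, 0.1]   # confident teacher prediction (max >= eta)
y = one_hot(1, 3)       # (possibly noisy) given label
soft = pseudo_label(f_t, y, mode="soft")   # approx. [0.4, 0.55, 0.05]
```

When the teacher is unconfident ($\max(f_t(x_j))<\eta$), all three rules fall back to the given label, which is what keeps an inaccurate teacher from injecting additional noise.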
For both symmetric and asymmetric noise, the hard label performed worse than the two bootstrap labels at low noise levels, while the hard bootstrap label performed similarly to the soft bootstrap label. Intuitively, the hard label resulted in lower accuracies because it fails to take the given label into consideration for samples with high confidence, i.e. $\max(f_t(x_j))\geq\eta$. However, this problem was mitigated by the case-wise pattern of the pseudo-labels, where a pseudo-label was only assigned to samples on which the teacher model was confident, while the remaining samples kept their given labels. Hence, this case-wise pattern avoids introducing additional noise from an inaccurate teacher model, which narrowed the gap between the hard label and the two bootstrap labels. As described previously, our method uses an initial small clean set $\mathcal{D}_c$, with proportion $\pi\%:=\frac{\vert \mathcal{D}_c \vert}{\vert \mathcal{D}_c\cup\mathcal{D}_n \vert}$. To study the effect of $\pi$ on the test accuracy, we applied our algorithm to CIFAR-10 with different noise levels for both symmetric and asymmetric noise. \mbox{Fig. \ref{Comparison clean ratio}} shows that the gap between the curves of different clean ratios is not large at low noise levels, and our method performs well even with only a $1\%$ clean set. However, at high noise levels, the curves for low clean ratios dropped dramatically, especially for asymmetric noise, while the curve for the $10\%$ clean ratio remained relatively flat. \subsection{Comparison with State-of-the-art} \subsubsection{Experiments on CIFAR-10} \begin{table*}[!tb] \caption{Average (5 trials) and standard deviation of the best test accuracies of different methods on the CIFAR-10 dataset with semantic asymmetric noise.
The highest accuracy for each noise level is boldfaced.} \label{cifar_asyn} \centering \begin{tabular}{lrrrrrrr} \toprule \multirow{2}{*}{Method} & \multicolumn{7}{c}{Noise Level (\%)} \\ \cmidrule{2-8} & 30 & 40 & 50 & 60 & 70 & 80 & 90 \\ \midrule Cross Entropy & 88.30$\pm$0.42 & 84.28$\pm$0.65 & 77.40$\pm$1.52 & 67.54$\pm$1.21 & 61.72$\pm$0.30 & 57.24$\pm$0.31 & 52.70$\pm$0.21 \\ \midrule Mixup\cite{zhang2017mixup} & 90.53$\pm$0.70 & 86.59$\pm$1.13 & 78.67$\pm$0.93 & 69.19$\pm$1.16 & 62.52$\pm$1.32 & 57.90$\pm$2.1 & 53.30$\pm$0.80 \\ \midrule Joint Optimization\cite{tanaka2018joint} & 92.01$\pm$0.21 & \textbf{89.56$\pm$0.78} & 84.56$\pm$0.94 & 78.21$\pm$0.32 & 76.70$\pm$0.11 & 76.44$\pm$0.21 & 76.00$\pm$0.14 \\ \midrule Co-teaching\cite{han2018co} & 84.50$\pm$0.41 & 70.69$\pm$3.53 & 54.29$\pm$1.30 & 48.76$\pm$0.92 & 46.40$\pm$0.89 & 45.66$\pm$1.77 & 44.39$\pm$1.53 \\\midrule Loss Correction\cite{arazo2019unsupervised} & 90.87$\pm$0.23 & 87.21$\pm$0.22 & 74.63$\pm$1.08 & 58.82$\pm$1.08 & 58.06$\pm$0.09 & 53.72$\pm$3.32 & 53.04$\pm$3.59 \\ \midrule JoCoR\cite{wei2020combating} &76.57$\pm$2.67 & 67.74$\pm$4.23 & 56.54$\pm$1.22 & 48.52$\pm$1.84 & 46.22$\pm$0.80 & 43.74$\pm$0.81 & 43.06$\pm$1.89 \\ \midrule DivideMix\cite{li2020dividemix} & \textbf{93.95$\pm$0.06} & 84.43$\pm$0.94 & 73.73$\pm$1.07 & 60.13$\pm$2.40 & 53.18$\pm$3.99 & 50.60$\pm$0.546 & 49.42$\pm$0.33 \\ \midrule Our Method & 90.00$\pm$0.22 & 88.22$\pm$0.26 & \textbf{85.98$\pm$0.45} & \textbf{84.41$\pm$0.15} & \textbf{83.77$\pm$0.12} & \textbf{83.00$\pm$0.10} & \textbf{82.63$\pm$0.25} \\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[tb] \centering \includegraphics[width=180mm]{Comparison_sym_asym.png} \caption{Comparison of the performance of different methods under symmetric noise and asymmetric noise. The noise level here refers to overall noise level within the dataset, e.g. $r\%$ asymmetric noise would correspond to $0.5r\%$ overall noise level. 
} \label{cifar_fig} \end{figure*} We reproduced the results of the baselines \cite{tanaka2018joint,arazo2019unsupervised,zhang2017mixup,han2018co,wei2020combating, li2020dividemix} using the same configuration that we used to evaluate our proposed method: (i) We used the same architecture, Pre-Activation ResNet-18 \cite{he2016identity}; (ii) We multiplied the noise level in \cite{han2018co, wei2020combating} by a factor of 0.9, and multiplied the noise level in our method by a factor of $\frac{100}{100-\pi}$ to keep the expected proportion of incorrect labels equal across all methods. The comparison results between our method and all the baselines on the CIFAR-10 dataset with asymmetric noise are reported in TABLE \ref{cifar_asyn}. To evaluate the robustness of each method against different noise models, we compared the accuracies of each method at different noise levels for symmetric and asymmetric noise in \mbox{Fig. \ref{cifar_fig}}. As noted in Section \ref{dataset}, the asymmetric noise level defined in \cite{patrini2017making} is the class noise level within the corrupted classes, hence the overall noise level of the dataset is actually $0.5r\%$. To compare the performance of each method between the symmetric and asymmetric noise models, we therefore scale the asymmetric noise level by a factor of $0.5$. The curves of our method corresponding to the symmetric and asymmetric noise models were very close and flat across all noise levels. However, the gap between the two curves for the other 7 baselines became larger as the noise level increased, and the curves of some methods dropped dramatically, especially in the asymmetric case. Our method is robust not only over all noise levels but also against different noise models because of the effectiveness of the clean data augmentation step.
By treating all the noisy samples as unlabeled samples and applying a tiered PU learning method, our augmentation process is totally independent of the given noisy labels, which explains this robustness.

\subsubsection{Experiments on Clothing1M}
We directly compared our experimental results with the accuracies reported in papers that used the same model architecture, i.e., a ResNet-50 \cite{he2016identity} pre-trained on ImageNet; see TABLE \ref{clothing reuslt}. We trained the student model over 5 epochs and achieved a best test accuracy of 77.70\%, which is the highest accuracy among all methods in TABLE \ref{clothing reuslt}.

\subsection{Ablation Study}
\label{ablation study}
To study the effects of mixup data augmentation, bootstrapping, and entropy regularization on our method, we conducted an ablation study on the teacher model (see TABLE \ref{ablation teacher}) and the student model (see TABLE \ref{ablation student}), where the ``standard'' method means our method without any of these three techniques. Below, we consolidate some insights obtained.
\begin{itemize}
\item All three techniques helped to improve the accuracies slightly, with mixup data augmentation having the largest effect.
\item The effect of mixup is more significant under symmetric noise than under asymmetric noise. Intuitively, mixup improves the robustness to label noise because the combination of a pair of samples might partially correct the wrong labels when one sample accidentally contains the true label of the other noisy sample. For asymmetric noise, the label flips occur only between similar classes, so ``accidental label correction'' is less likely.
\end{itemize}

\section{Conclusion and Further Remarks}
Our proposed framework is robust to all noise levels. This robustness is universal since the performance at any given noise level is not sensitive to variations in the noise model.
We achieved state-of-the-art accuracy on Clothing1M, a dataset with real-world label noise, and our experiments on CIFAR-10 with asymmetric semantic label noise show that our method outperforms all baselines. Crucially, we achieve universal robustness because our clean data augmentation process does not use the labels of noisy samples; we only require a small clean subset. Our framework can be built on top of any classification algorithm (not necessarily using DNNs). Therefore, our framework is versatile, robust, and widely applicable to any classification task on datasets with arbitrary label noise.

\begin{table*}[!tb]
\caption{Ablation study results in terms of the student model's test accuracy (\%) on CIFAR-10. We set the ratio of the original clean set to 0.1, $\eta=0.9$, and $\lambda=0.5$.\\[-1.5em]}
\label{ablation student}
\centering
\begin{tabular}{lrrrrrrrr}
\toprule
Noise type & \multicolumn{4}{c}{Symmetric} & \multicolumn{4}{c}{Asymmetric} \\
\midrule
Method/Noise level & 30\% & 50\% & 70\% & 90\% & 30\% & 50\% & 70\% & 90\% \\
\midrule
Standard & 84.74$\pm$0.32 & 81.06$\pm$0.33 & 77.39$\pm$0.37 & 73.29$\pm$0.66 & 87.32$\pm$0.47 & 82.91$\pm$0.46 & 76.34$\pm$0.84 & 70.32$\pm$1.76 \\
\midrule
+Mixup & 88.90$\pm$0.23 & 85.34$\pm$0.34 & 81.82$\pm$0.19 & 76.94$\pm$0.45 & 89.30$\pm$0.27 & 84.93$\pm$0.32 & 77.72$\pm$1.50 & 71.25$\pm$2.51 \\
\midrule
+Entropy reg. & 86.52$\pm$0.21 & 83.26$\pm$0.36 & 79.68$\pm$0.38 & 74.58$\pm$0.40 & 87.41$\pm$0.23 & 82.98$\pm$0.44 & 76.16$\pm$1.14 & 70.33$\pm$1.93 \\
\midrule
+Mixup +Entropy reg. & 90.10$\pm$0.17 & 87.12$\pm$0.26 & 83.52$\pm$0.10 & 78.88$\pm$0.49 & 90.00$\pm$0.22 & 85.66$\pm$0.26 & 77.20$\pm$1.61 & 70.52$\pm$2.73 \\
\bottomrule
\end{tabular}
\end{table*}

\bibliographystyle{IEEEtran}
Suprachromacy: full-spectrum macro photography of Lanzarote's alien plant life

Suprachromacy is a new series of full-spectrum, macro photography, shot in Lanzarote by Marcus Wendt, creative director at digital art studio FIELD in London.

"For the Rays, to speak properly, are not coloured. In them, there is nothing else than a certain power and disposition to stir up a sensation of this or that colour." – These, the words of Isaac Newton, form the basis and inspiration behind this stunning project.

Getting up close and personal with the island's stunning plant life, the images hope to answer the immortal question: is colour a property, or a sensation – a part of the object, or the spectator?

"For us, these alien colour spectra spark ideas about how we see colour, how much depth is locked up in the colour green, and whether colour is a property or a sensation," says Marcus. "And also what plants might look like on planets under a different coloured sun."

Discover more at field.io.
The call was held as scheduled at 12:00 US Eastern on Thursday, December 7. The call was sponsored by 3M.

The main purpose of the call was to report any progress on action items. A common thread was that other priorities were interfering with progress on action items. Though there is still some work being done on many fronts, it is not happening as quickly as all would wish.

Gail, John, Mary, Candy, and Rob Gray reported work underway or starting. Sue was particularly interested in the profile streamlining work and will contact Gail. John will work with his subgroup and arrange a conference call among themselves in the next couple of weeks. Rob will work with his subgroup on the scheme/value question before sending something out to the list. Candy will post a proposal and questions around the need for a directory (aka the NCIP phone book). Some discussion during the call helped shape what this proposal will look like.

The Maintenance Agency has just about completed the migration of content from the old NCIP site. Steve Wrede has redirected traffic to the new NCIP site and will be shutting down the old site at his convenience.

The group touched on the question of meeting at ALA Midwinter and concluded that there was no benefit in doing so. We will address issues around the spring meeting in our February 1 call. When asked whether there was anything that NISO could do to help the efforts of this group, the consensus was that promoting use of the standard would be the biggest help.
The Spring & Curly shows the natural beauty of our collection. The major items are Cane Cone, Spring, Cane Coil, Curly Ting Ting, Palm Ting Ting, etc. These are all made from natural materials and can be offered in different colors and sizes. These can also be used for Christmas decoration and can be glittered, flocked and painted in required shades. These can be mixed with fillers to enhance the look of the decoration.
As most of you know, I'm currently playing Forza Motorsport 2 on the Xbox 360. As is my 2G Creative partner in crime, OJ.

I'll use my oft-spouted phrase again here. How hard can it be? After lots of departments not speaking to each other, attempted fobbing off and a hint at getting shirty, 2G Creative now has a business bank account. With credit cards and everything 🙂

Whilst this sounds like a trivial matter, we've had no end of trouble sorting this out. For the record, offshore call centers don't work. It doesn't matter how good their English is, or how polite they are, they're put in a lose-lose situation by the people who employ them. Don't get me wrong, having spent the last six or so years working in CRM (and working tech support many moons ago), I appreciate the thought behind it. But nothing works better than the person answering the phone having an appreciation for the customer's issue.

In the end though, thanks to the staff at HSBC in Herne Bay who, despite knowing nothing about us having been promised a resolution, dealt with 2G Creative's directors turning up (unannounced) and accommodated us there and then.

Oh, I've also added a little 'In other news' box over there on the right as a dumping ground for links that would otherwise clog up inboxes.

About damn time. I've found a decent metal album. Too much pretentious crap by kids who barely know puberty, let alone 'angst', has been churned out recently. Oh yes. I've always liked Machine Head, and this is one of their finest. Some quotes liken it to Master Of Puppets, which I'm not sure about as that has achieved an almost religious status in my (non-religious) mind. But it's good. Very good.
\section{Introduction}
The COVID-19 pandemic has caused serious shocks to the global economy within a short span of time. Many countries across the world have imposed various non-pharmaceutical interventions (NPIs) to contain the spread of the virus. For instance, China implemented an extremely stringent lockdown in Wuhan on January 23, 2020, which was lifted on April 8, 2020; France closed its schools on March 16, 2020; South Korea banned international travelers from Hubei, China, on February 02, 2020; and Singapore started contact tracing on January 23, 2020. Although these NPIs have been effective in containing the spread of the pandemic, they also lead to negative economic consequences at all scales. The closure of non-essential stores, restaurants, and businesses, and the disruptions of the global value chains cause direct revenue losses, extremely high unemployment rates, and sharp declines in personal incomes. Such influences were also reflected in the performance of financial markets. For example, on March 16, 2020, the Dow Jones Industrial Average encountered the worst percentage drop since the infamous ``Black Monday'' crash of 1987, i.e., it dropped by 12.9\% in a single day. The S\&P index lost almost 12\% on the same day.

On the other hand, various economic support policies (ESPs) have been proposed and implemented to save economies, such as direct cash assistance for households and the temporary stop of loan repayments for both individuals and businesses. As the implementation timings of various levels of NPIs and ESPs differ across countries, one question arises: which NPI or ESP has the strongest influence on the economy? The answer to this question will shed some light for policymakers and market investors on which measures to rely on when they have to make decisions. In this paper, we attempt to give one answer to this question through studying the impact of NPIs and ESPs on the international economy.
The exchange rates usually comove with a country's importing and exporting activities, since these are direct components of, and therefore highly correlated with, the Gross Domestic Product (GDP)\footnote{GDP is one of the most common measures of the prosperity of an economy.}. The NPIs and ESPs implemented during the pandemic pose widespread and long-lasting impacts on the economy via the channel of foreign exchange (FX) markets, disturbing international trade directly in the short run and influencing aggregate demand indirectly in the long run. For instance, the Australian dollar (AUD) hit \$0.59215 in exchange for 1 US dollar at the end of March 2020, its lowest level in the past 17 years.

Note that some works have studied the impact of COVID-19 and NPIs on FX markets. For example, \citet{aslam2020efficiency} studied the impact of COVID-19 on the efficiency of FX markets; \citet{demirgucc2020sooner} explored the early economic impact of COVID-19 NPIs; and \citet{lazebnik2021spatio} assessed the economic losses caused by COVID-19 NPIs. However, the influence of individual NPIs or ESPs on the dynamics of FX markets has not been studied.
In this work, we use SHAP to assess the impact of NPIs and ESPs on FX market. More specifically, we first train a Long Short-Term Memory (LSTM) prediction model to predict the exchange rates for G10 currencies using the data prior to the COVID-19 pandemic. Then we train a Random Forest (RF) model using the rate predicted by LSTM together with NPIs and ESPs to produce a refined exchange rate prediction. We then apply SHAP on the RF model to obtain the attribution of each NPI and ESP in the results of FX predictions. In such a way, we can obtain insights on which NPI or ESP measures have more contributions to FX dynamics, i.e., the appreciation or depreciation of exchange rates. To the best of our knowledge, this is the first work to study the impact of NPIs and ESPs on FX exchange markets using XAI techniques. The remainder of the paper is organized as follows. Section 2 discusses the related literature. Section 3 presents the techniques we use for prediction and explanation. Section 4 introduces the proposed model of investigating the impact of individual NPI on FX market. Section 5 presents the experiment results. Finally, the conclusions are drawn in Section 6. \section{Related Work} There have been some studies conducted to explore the impact of COVID-19 NPIs and ESPs on the economic and financial systems across different perspectives. We briefly review them as follows. \citet{demirgucc2020sooner} estimated the economic impacts of the NPIs implemented by Europe and Central Asia countries at the initial stage of the COVID-19 pandemic through tracing the economic disruptions based on the analysis of daily electricity consumption, nitrogen dioxide emission, and mobility records. Their results suggest that NPIs led to about a 10\% decline in economic activity across the region. \citet{lazebnik2021spatio} developed an extended spatial-temporal SIR model to analyze the impact of NPIs on the pandemic spread and assessed the economic losses caused by the pandemic. 
Two NPIs, i.e., the duration of working and school days and various lockdown levels, were incorporated into their model. The results based on their model and the Israeli economy suggest that 7.5 working hours alongside 4.5 school hours, or an 89\% lockdown among children and a 63\% lockdown among adults, achieve a balanced outcome, i.e., minimizing the death toll while maximizing output.

\citet{mirza2020impact} evaluated the impact of COVID-19 on corporate solvency in the EU member states by introducing stress scenarios on non-financial listed firms. A progressive increase in the probability of default and debt payback, together with declining coverage, is reported. The results indicate that the solvency profile of all firms deteriorates. The authors further examined possible policy interventions to help firms withstand COVID-19. It was suggested that hybrid support through debt and equity would be effective in the event of exacerbating business shocks to avoid a meltdown.

\citet{rizvi2020impact} assessed the impact of COVID-19 on the value of non-financial firms using a sample of 5,342 listed non-financial firms across 10 EU member states. Their findings show a significant loss in valuations across all sectors due to a possible decline in sales and an increase in the cost of equity. In extreme cases, an average firm in some sectors may lose up to 60\% of its intrinsic value in one year.

\citet{aslam2020efficiency} studied the efficiency of FX markets during the initial period of the COVID-19 pandemic through a multifractal detrended fluctuation analysis using exchange rate data for six currencies (AUD, CAD, CHF, EUR, GBP, and JPY). Their results demonstrate a decline in the efficiency of FX markets during the COVID-19 pandemic.

\citet{fasanya2021dynamic} examined dynamic spillovers between the COVID-19 pandemic and the global FX market.
The authors analyzed the spillovers using daily data for the period of December 31, 2019 to April 10, 2020 for six currency pairs, i.e., USD/EUR, USD/JPY, USD/CHF, USD/GBP, USD/CAD, and USD/AUD. Their findings indicate a high degree of interdependence between the COVID-19 pandemic and returns volatility.

It can be seen that no work has analyzed the impact of NPIs and ESPs on the FX market within a unified framework, which is critical for governments and policymakers to address the risks caused by the current COVID-19 pandemic and possible future crises.

\begin{comment}
\subsection{Explainable AI (XAI) techniques}
Explainable AI (XAI) is a fast growing research area in AI that aims to provide insight into processes that AI uses to conclude~\cite{adadi2018peeking}. Several explanation methods have been proposed in the literature to address the need for explainability in AI system.
\end{comment}

\section{Background}
Before going into the details of the proposed model for evaluating the impact of NPIs and ESPs on FX markets, we review a few techniques used in this work.

\subsection{Random Forest}
Random forest is a commonly used machine learning algorithm for classification and regression problems~\cite{breiman2001random}. It starts by creating decision trees. A decision tree recursively splits the data until the best partition for subsetting the data is found, and is typically trained through the Classification and Regression Tree (CART) algorithm~\cite{breiman2017classification}. As individual decision trees are prone to problems like bias and overfitting, a random forest aggregates the predictions of a set of decision trees into a single result to reduce the risk of overfitting.

\subsection{Long Short-Term Memory (LSTM)}
Long short-term memory (LSTM) is a recurrent neural network (RNN) architecture in deep learning~\cite{hochreiter1997long}.
Differing from feedforward neural networks, LSTM has feedback connections, which makes it well-suited to classifying, processing, and making predictions for time series data. Figure~\ref{fig:rnn} gives the overall structure of an LSTM. In the model, $X_{t}$ and $Y_{t}$ are the input and output vectors at sampling instant $t$, respectively. $U$, $W$, and $V$ are the corresponding connection weights. The structure of a memory unit depends on the variant of LSTM, such as the standard LSTM or Gated Recurrent Units (GRU)~\cite{cho2014learning}.

\begin{figure}[!h]
\centerline{
\includegraphics[width=0.45\textwidth]{rnn.png}
}
\caption{The structure of LSTM \label{fig:rnn}}
\end{figure}

Recently, LSTM has been widely used in FX prediction with promising results~\cite{cao2020deep}. Compared to traditional, commonly used statistical methods, e.g., ARIMA~\cite{zhang2003time}, LSTM shows better performance when nonlinear and interconnected relationships are present in the data. Therefore, in our proposed model, we utilize a simple LSTM model to predict exchange rates. It is worth mentioning that, as the aim of the proposed model is to study the impact of NPIs and ESPs on the FX market, the performance of LSTM in predicting exchange rates is not our focus.

\subsection{Shapley Explainer}
SHapley Additive exPlanations (SHAP) is a method that gives explanations for black-box machine learning predictions~\cite{lundberg2017unified}. SHAP belongs to the class of {\em feature attribution} methods. Given a prediction model $P \in \mc{P}$, where $\mc{P}$ is the set of models, let $\mathbf{y} = P(\mathbf{x})$ be the prediction made by $P$ on the input $\mathbf{x} = \langle x_1, \ldots, x_M \rangle \in \mathbb{R}^M$. SHAP gives an explanation $\langle \phi_1, \ldots, \phi_M \rangle \in \mathbb{R}^M$ for $\mathbf{y} = P(\mathbf{x})$, where $\phi_i$ can be viewed as the contribution of $x_i$ to this prediction.
SHAP is based on the coalitional game theory concept of the \emph{Shapley value}~\cite{shapley201617}. The Shapley value is defined to answer the question: ``What is the fairest way for a coalition to divide its payout among the players?'' It assumes that payouts should be assigned to players in a game depending on their contribution towards the total payout. In a machine learning context, feature values are the ``players'', and the prediction is the ``total payout''. The Shapley value of a feature represents its contribution to the prediction and thus explains the prediction.

Specifically, let $g$ be the explanation model. For an input $x$ with $M$ features, there is a corresponding $z \in \{0,1\}^M$ such that SHAP specifies $g$ as a linear function of $z$:
\[g(z) = \phi_0 + \sum_{j=1}^{M} \phi_j z_{j}\]
where $\phi_j$ ($j > 0$) is the Shapley value of feature $j$ and $\phi_0$ is the ``average'' prediction when none of the features in $x$ is present. The idea is that if $z_j = 0$, the corresponding feature value is absent in $x$; otherwise, the corresponding feature value is present in $x$.

In the context of evaluating the impacts of NPIs and ESPs on FX markets, each NPI/ESP at a specific level is modelled as a feature, and the predicted appreciation or depreciation of the exchange rate for a currency is the ``total payout''. By using SHAP, we get the contribution of each NPI and ESP to a prediction, implying the influence of NPIs and ESPs on the FX market, i.e., a greater contribution means a larger influence. In this work, we use the tree-based SHAP method, TreeSHAP~\cite{lundberg2018consistent}, for estimating the Shapley values of features, as it has been shown to be superior to Kernel SHAP~\cite{lundberg2017unified}.
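To make the additivity property $g(z) = \phi_0 + \sum_j \phi_j z_j$ concrete, the following sketch computes exact Shapley values by brute-force coalition enumeration, replacing absent features with a baseline value (a common simplifying assumption). The function names and the toy linear ``model'' standing in for the random forest are ours, not part of the paper; TreeSHAP obtains the same quantities for tree ensembles in polynomial time, whereas this enumeration is exponential in $M$ and is for illustration only.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for the prediction f(x); features absent
    from a coalition are filled in from `baseline`."""
    M = len(x)

    def v(S):
        # coalition value: features in S take their value from x
        z = [x[i] if i in S else baseline[i] for i in range(M)]
        return f(z)

    phi = []
    for i in range(M):
        others = [j for j in range(M) if j != i]
        total = 0.0
        for k in range(M):  # |S| = 0 .. M-1
            for S in combinations(others, k):
                # classical Shapley weight |S|! (M-|S|-1)! / M!
                w = factorial(k) * factorial(M - k - 1) / factorial(M)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return v(set()), phi  # (phi_0, [phi_1 .. phi_M])

# toy stand-in model: a fixed linear rule over 3 features
f = lambda z: 2.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]
x = [1.0, 2.0, 4.0]
baseline = [0.0, 0.0, 0.0]
phi0, phi = shapley_values(f, x, baseline)
# local accuracy: phi_0 plus the attributions recovers f(x) exactly
assert abs(phi0 + sum(phi) - f(x)) < 1e-9
```

For a linear model with a zero baseline, each attribution reduces to coefficient times feature value, which makes the result easy to check by hand.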
\section{The Proposed LSTM-RF-SHAP Model}
Modelling NPIs, ESPs, and other factors that impact FX markets as features, we formulate the exploration of the impact of NPIs and ESPs on FX markets as evaluating the contribution of each feature to the exchange rate predictions using SHAP. We first consider an RF-SHAP model, as shown in Figure~\ref{fig:model1}. In this model, suppose that we have the exchange rate for a currency\footnote{The exchange rates studied in this paper are quoted against a single currency, the US dollar.} (e.g., GBP) on day $t$, denoted as $R_t$, and $m$ features, denoted as $C^{1}_{t}$, $C^{2}_{t}$, $\ldots$, $C^{m}_{t}$. A feature can be an individual NPI or ESP, whose value corresponds to whether the NPI or ESP is implemented on day $t$ in the country corresponding to the currency, such as the ``2nd level lockdown'' implemented in the UK (for GBP). A feature can also be some other factor in the country that influences the exchange rate prediction, such as the cumulative number of COVID-19 infection cases over the last five days in the UK.

\begin{figure}[!ht]
\centerline{
\includegraphics[width=0.45\textwidth]{model1.png}
}
\caption{An RF-Shapley model. \label{fig:model1}}
\end{figure}

\begin{figure*}[!ht]
\centerline{
\includegraphics[width=0.7\textwidth]{model.png}
}
\caption{The proposed LSTM-RF-Shapley model. \label{fig:model}}
\end{figure*}

In this RF-SHAP model, we build a random forest predictor which takes as inputs the exchange rates of the past $n$ days ($n$ is the window size), i.e., $R_{t-n+1}$, $R_{t-n+2}$, $\ldots$, $R_{t}$, and the NPI and ESP values on day $t$, i.e., $C^{1}_{t}$, $C^{2}_{t}$, $\ldots$, $C^{m}_{t}$, to predict the exchange rate on day $t+1$, denoted as $R_{t+1}$. For example, suppose we are going to predict the exchange rate between GBP and USD for September 21, 2021, and the window size is five days, i.e., $n=5$.
We use the exchange rates from September 16, 17, 18, 19, and 20 of the year 2021, the NPI and ESP values in the UK on September 20, 2021 (such as that the ``Level 1 stay at home requirements'' had been enforced), and other factors, e.g., the cumulative number of infected cases, to produce the prediction. With the exchange rates predicted by the random forest, SHAP is applied to calculate the contribution of each feature, i.e., $R_{t}$, $C^{1}_{t}$, $C^{2}_{t}$, $\ldots$, $C^{m}_{t}$.

However, the contribution of each NPI and ESP obtained from this model may not reflect its real impact on the FX prediction, as the actual exchange rates on day $t$ are already influenced by the implemented NPIs and ESPs. Therefore, we need to disentangle the impacts of the COVID-19 pandemic-related factors from the non-pandemic ones. To this end, we propose an LSTM-RF-SHAP model, as shown in Figure~\ref{fig:model}.

In this LSTM-RF-SHAP model, for each currency, an LSTM model is first trained using the historical FX data prior to 2020. Then, for day $t+1$, the LSTM will take the exchange rates in a window of size $n$ days as input and predict the exchange rate for day $t+1$, denoted as $\hat{R}_{t+1}$, which is independent of any information related to the COVID-19 pandemic. This prediction, together with the COVID-19 pandemic-related factors, will then be used as input to a random forest model, and a new predicted rate for day $t+1$, denoted as $\hat{R'}_{t+1}$, will be produced. SHAP is then applied to the random forest, and we obtain the contribution of each feature (i.e., $\hat{R}_{t+1}$, $C^{1}_{t}$, $C^{2}_{t}$, $\ldots$, $C^{m}_{t}$) to the prediction $\hat{R'}_{t+1}$.

Taking the GBP prediction on September 21, 2021 as an example, we first train an LSTM model using historical data before 2020, such as the exchange rates in 2019.
With the LSTM model trained and the window size set to $n=5$, we pass to the LSTM the exchange rates of September 16, 17, 18, 19, and 20 of the year 2021 as input and obtain a predicted exchange rate, i.e., $\hat{R}_{09212021}$. Then we pass $\hat{R}_{09212021}$, together with the COVID-19 pandemic-related factors on September 20, 2021 (such as the NPI and ESP values in the UK on that day and the cumulative number of infected cases in the past 5 days), to the random forest to get the prediction $\hat{R'}_{09212021}$. SHAP is then applied to calculate the contribution of each feature, including the output from the LSTM (i.e., $\hat{R}_{09212021}$), the NPIs, the ESPs, and the other COVID-19 pandemic-related factors.

\section{Experimental Settings and Results}
\subsection{Data Preparation}
We choose the G10 currencies\footnote{https://en.wikipedia.org/wiki/G10\_currencies} to study the impact of the NPIs on the FX markets, as these currencies account for over 95\% of the trading volume in the worldwide FX markets~\cite{salisu2018modelling}. In particular, we collected FX data at daily frequency\footnote{The data are collected from the \textit{Datastream} database.} for nine currency pairs, i.e., the Australian Dollar (AUD), the Canadian Dollar (CAD), the Swiss Franc (CHF), the Euro (EUR), the British Pound (GBP), the Japanese Yen (JPY), the Norwegian Krone (NOK), the New Zealand Dollar (NZD), and the Swedish Krona (SEK), all against the US Dollar (USD), for the period of January 01, 2019 to January 13, 2021. The exchange rates from January 01, 2019 to December 31, 2019 are used to train an LSTM predictor for each currency pair. The window size used to train the LSTM is set at 5. It is worth pointing out that we pass the predicted returns to the random forest in order to improve the prediction accuracy.
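The rolling-window input construction used above (window size $n=5$) and the log-return transformation applied to the predicted rates can be sketched as follows; the function names and the toy rate series are ours, not the paper's.

```python
import math

def make_windows(rates, n=5):
    """Build (input_window, target) pairs: the rates on days
    t-n .. t-1 are used to predict the rate on day t."""
    return [(rates[t - n:t], rates[t]) for t in range(n, len(rates))]

def log_return(r_t, r_prev):
    """Logarithm difference of consecutive rates -- comparable across
    currencies with very different price scales (e.g., JPY vs EUR)."""
    return math.log(r_t) - math.log(r_prev)

rates = [1.30, 1.31, 1.29, 1.28, 1.27, 1.26, 1.25]  # toy daily series
pairs = make_windows(rates, n=5)
assert pairs[0] == ([1.30, 1.31, 1.29, 1.28, 1.27], 1.26)
```

Each window plays the role of the five past rates fed to the LSTM, and the log return of the predicted rate against the previous day's rate is what reaches the random forest.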
In particular, for a day $t$ in the period of January 1, 2020 to January 13, 2021, we first use the exchange rates for each currency from day $t-5$ to day $t-1$ to predict the exchange rate for day $t$. Then, we compute the predicted return on day $t$ by taking the logarithm difference of the exchange rates between day $t$ and day $t-1$. The exchange rates may have very different scales of market prices; e.g., 1 US dollar on January 1, 2020 could be exchanged for 108.0961 JPY versus 0.8891 EUR. The logarithm return is more comparable than the simple return across currencies as it addresses this scale issue.

Then the NPI and ESP values in a country and other COVID-19 pandemic-related factors\footnote{The data are collected from https://github.com/OxCGRT/covid-policy-tracker}, alongside the predicted returns from the LSTM during the period of January 01, 2020 to January 13, 2021, are used to train a random forest\footnote{There are in total nine LSTM models and one random forest model trained.}. More specifically, the NPIs, ESPs, and other COVID-19 pandemic-related factors we consider are as follows.
\begin{enumerate}
\item Economic support policies: governments in different countries or regions have taken discretionary actions to sustain employment rates and solvency.
\begin{enumerate}
\item \textbf{$E_{1}$: Income Support} -- The government provides direct cash support to people who cannot work due to the COVID-19 pandemic. For example, in March 2020, the UK government implemented a policy to pay 80\% of a furloughed employee's wages (subject to a cap of GBP2,500 per month).
\item \textbf{$E_{2}$: Level 1 Debt/Contract relief} -- The government freezes financial obligations for households (e.g., stopping loan repayments, preventing services like water from stopping, or banning evictions). Level 1 is specific to one category of debt or contract.
For example, in March 2020, the CARES Act was signed into law in the United States, which implemented targeted debt relief based on Federal jurisdiction (e.g., mortgage relief).
\item \textbf{$E_{3}$: Level 2 Debt/Contract relief} -- Compared to Level 1 Debt/Contract relief, Level 2 targets multiple categories of debt or contract. For example, the Australian Government implemented a series of changes to bankruptcy law in March 2020, which included an increase in the debt threshold, an increase in the timeframe to respond to a bankruptcy notice, and an increase in the temporary debt protection period.
\end{enumerate}
\item The NPIs to contain the spread of the COVID-19 pandemic:
\begin{enumerate}
\item \textbf{$N_1$: Level 1 Stay at home requirements} -- Many countries and regions implemented various levels of ``stay at home requirements''. At Level 1, it is recommended, but not compulsory, for residents to stay in their residences.
\item \textbf{$N_2$: Level 2 Stay at home requirements} -- At this level, residents are not allowed to leave their residences except for essential activities (e.g., daily exercise, grocery shopping, and essential trips).
\item \textbf{$N_3$: Level 1 Workplace closing} -- It is recommended, but not required, to close workplaces or to work from home.
\item \textbf{$N_4$: Level 2 Workplace closing} -- It is required to close workplaces or to work from home when possible.
\item \textbf{$N_5$: Level 1 International travel controls} -- Since the onset of the COVID-19 pandemic, many countries and regions have implemented restrictions on international and internal movements. Under Level 1 international travel controls, screening (e.g., temperature taking) is conducted upon arrival.
\item \textbf{$N_6$: Level 2 International travel controls} -- At this level, a period (e.g., 14 days) of quarantine in designated places is required for travellers from some countries or regions.
\item \textbf{$N_7$: Level 3 International travel controls} -- At this level, travellers from some countries or regions are not allowed to arrive.
\item \textbf{$N_8$: Restrictions on internal movement} -- Besides international travel controls, domestic travelling between regions is not recommended.
\end{enumerate}
\item \textbf{$C_1$}: Among other COVID-19 pandemic-related factors besides the ESPs and NPIs, we also take the \textbf{cumulative cases in the last five days} into consideration.
\end{enumerate}

\subsection{Evaluation on Prediction Accuracy}
We use the directional prediction accuracy ($DA$), the Mean Absolute Error ($MAE$), and the Root Mean Square Error ($RMSE$) to evaluate the accuracy of the proposed LSTM-RF-SHAP model in FX prediction. Although the focus of this work is to study the impact of NPIs and ESPs on FX markets, the model's performance in predicting FX prices is important as well because we need to obtain insights from correct prediction instances using SHAP: if the prediction for an instance is incorrect (e.g., the actual result is an FX appreciation but the prediction indicates a depreciation), we will not be able to perform a meaningful analysis of the explanations obtained for that instance.

In more detail, $DA$ is a directional measure of the FX prediction accuracy, ranging from 0\% to 100\%, with a higher value indicating better prediction accuracy.
\begin{equation}
DA = \frac{1}{N}\sum_{t=1}^{N}d(t)\times100\%,
\end{equation}
where
\begin{equation}
d(t)=\left\{
\begin{aligned}
&1, \qquad \text{if } [y(t+1)-y(t)][\hat{y}(t+1)-y(t)]\geq0;\\
&0, \qquad \text{otherwise},
\end{aligned}
\right.
\end{equation}
where $\hat{y}(t)$ and $y(t)$ denote the predicted and the actual FX prices on day $t$, respectively, and $N$ is the number of prediction instances (i.e., the number of working days from January 01, 2020 to January 13, 2021).
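The $DA$ metric in Equations (1)-(2) can be sketched in a few lines; the function name and the toy series are ours.

```python
def directional_accuracy(actual, predicted):
    """DA: share of days on which the predicted move from y(t) to
    y_hat(t+1) has the same sign as the actual move from y(t) to
    y(t+1), reported as a percentage."""
    hits = 0
    n = len(actual) - 1  # number of (t, t+1) pairs
    for t in range(n):
        actual_move = actual[t + 1] - actual[t]
        pred_move = predicted[t + 1] - actual[t]
        if actual_move * pred_move >= 0:  # d(t) = 1
            hits += 1
    return 100.0 * hits / n

actual = [1.30, 1.31, 1.29, 1.28]
predicted = [1.30, 1.32, 1.30, 1.27]  # toy predictions
# all three predicted moves point in the actual direction
assert directional_accuracy(actual, predicted) == 100.0
```

Note that, as in Equation (2), the predicted move is measured against the *actual* price on day $t$, not the previous prediction.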
MAE is the average of the absolute differences between the actual and the predicted FX prices, where a smaller value implies a higher prediction accuracy. \begin{equation} MAE=\frac{1}{N}\sum_{t=1}^{N}|y_{t}-\hat{y}_{t}|, \end{equation} where $\hat{y}(t)$, $y(t)$, and $N$ are the same as in Equation (2). RMSE is the square root of the average of the squared differences between the actual and the predicted FX prices, where a smaller value denotes a better prediction performance. \begin{equation} RMSE=\sqrt{\frac{1}{N}\sum_{t=1}^{N}(y_{t}-\hat{y}_{t})^{2}}, \end{equation} where $\hat{y}(t)$, $y(t)$, and $N$ are the same as in Equation (2). We compare the proposed LSTM-RF-SHAP model with the Autoregressive Integrated Moving Average (ARIMA) model~\cite{zhang2003time} in terms of FX prediction accuracy. ARIMA analyzes time-series correlations and builds a prediction model using a statistical approach. We first train an ARIMA model using the exchange rates in the period of January 01, 2019 to December 31, 2019, and then predict the exchange rate for each day in the period of January 01, 2020 to January 13, 2021 using ARIMA-RF-SHAP, following a similar procedure to the proposed LSTM-RF-SHAP model. As an example, Figure~\ref{fig:accuracy} shows the GBP/USD exchange rate by using LSTM-RF-SHAP and ARIMA-RF-SHAP for the period of January 1, 2020 to January 13, 2021. The x-axis represents prediction instances. As there are no exchange rates available on weekends, there are 272 instances in total in this period. The y-axis represents the predicted exchange rate for each instance. The blue line and the green line are the predictions produced by LSTM-RF-SHAP and ARIMA-RF-SHAP for an instance, respectively. The red line shows the actual rate for the instance. It can be seen that the blue line is closer to the red line compared to the green line, suggesting that a more accurate GBP/USD rate prediction can be achieved by using LSTM-RF-SHAP.
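For concreteness, the three metrics can be computed in a few lines of NumPy. This is our own illustrative sketch (function and variable names are not from the paper); the predicted move in $DA$ is $\hat{y}(t+1)-y(t)$, as in the definition above:

```python
import numpy as np

def directional_accuracy(y, y_hat):
    # DA: fraction of instances where the predicted move y_hat(t+1) - y(t)
    # has the same sign as the actual move y(t+1) - y(t).
    actual_move = y[1:] - y[:-1]
    predicted_move = y_hat[1:] - y[:-1]
    return float(np.mean(actual_move * predicted_move >= 0))

def mae(y, y_hat):
    # Mean absolute error between actual and predicted prices.
    return float(np.mean(np.abs(y - y_hat)))

def rmse(y, y_hat):
    # Root mean square error between actual and predicted prices.
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))
```

A lower $MAE$/$RMSE$ and a higher $DA$ then indicate the better of two models on the same test period.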
\begin{figure*}[!ht] \centerline{ \includegraphics[trim=35mm 30mm 35mm 50mm,width=0.85\textwidth]{pound_day2.eps} } \caption{The predicted GBP/USD rates for workdays in the period of January 1, 2020 to January 13, 2021. \label{fig:accuracy}} \end{figure*} \begin{figure*}[!ht] \centerline{ \includegraphics[trim=20mm 0mm 20mm 0mm,width=0.85\textwidth]{nor_acc_mae_rmse.eps} } \caption{The $DA$, $MAE$, and $RMSE$ results for each currency for workdays in the period of January 1, 2020 to January 13, 2021.} \label{fig:accuracy2} \end{figure*} We further compare LSTM-RF-SHAP with ARIMA-RF-SHAP in terms of $DA$, $MAE$, and $RMSE$ for the period of January 1, 2020 to January 13, 2021. The results are shown in Figure~\ref{fig:accuracy2}. Figures~\ref{fig:accuracy2}(a), (b) and (c) show the $DA$, $MAE$, and $RMSE$ results, respectively. The blue bars represent LSTM-RF-SHAP and the green bars represent ARIMA-RF-SHAP. It can be seen that the $DA$ of LSTM-RF-SHAP is consistently higher than that of ARIMA-RF-SHAP for each currency, and the errors (i.e., $MAE$ and $RMSE$) are lower, suggesting that the proposed LSTM-RF-SHAP model can achieve more accurate FX predictions compared to ARIMA-RF-SHAP. \subsection{Explanations} As introduced in Section 4 and Section 5.1, there are 13 features in total used as input to the random forest predictor. The 13 features are the predicted return from the LSTM, the ESPs $E_{1}$, $E_{2}$, and $E_{3}$, the NPIs $N_{1}$, $\ldots$, $N_{8}$, and the number of cumulative COVID-19 cases in the last 5 days $C_{1}$. SHAP is used to evaluate the contribution of each feature. To simplify the explanation task, we classify the RF prediction results as a binary exchange rate direction prediction (the exchange rate appreciates or depreciates). For a day $t$ in the period of January 01, 2020 to January 13, 2021, in addition to obtaining the prediction of exchange rate appreciation or depreciation from the RF, SHAP outputs a vector of length 13.
The value of each element in the vector is in the range $[-1,1]$. If a feature has a high SHAP value, this is understood as the feature having a large impact on the FX market. As an example, Figure~\ref{fig:uk_shap} shows the contributions of ESPs ($E_1$-$E_3$) and NPIs ($N_1$-$N_8$) to the predicted exchange rate directions for GBP/USD. In this figure, the x-axis represents the instances and the y-axis the normalised SHAP values. The sign of the y value for each instance is determined by the predicted exchange rate direction of that instance. In other words, if the prediction is positive (the rate appreciates), the y values are shown as positive; otherwise, the y values are shown as negative. Only instances with correct predictions are shown in this figure; there are 210 such instances. The length of each colored bar represents the amount of contribution made by the corresponding feature. \begin{figure*}[!ht] \centerline{ \includegraphics[trim=10mm 0mm 10mm 0mm,width=0.95\textwidth]{uk_shap_bar2.eps} } \caption{The contributions of ESPs and NPIs to correct GBP/USD prediction instances in the period of January 1, 2020 to January 13, 2021. \label{fig:uk_shap}} \end{figure*} \begin{figure*}[t!] \centerline{ \includegraphics[trim=10mm 0mm 10mm 0mm,width=0.95\textwidth]{shap_instances3.eps} } \caption{The number of instances to which a particular ESP or NPI has the largest contribution in the period of January 1, 2020 to January 13, 2021: (a) Appreciation Instances; (b) Depreciation Instances. \label{fig:all_cur}} \end{figure*} From Figure~\ref{fig:uk_shap}, we can see that for GBP/USD, \emph{$E_{1}$ (Income support)}, \emph{$E_{3}$ (Level 2 Debt/Contract relief)} and \emph{$N_{2}$ (Level 2 Stay at home requirements)} have the largest contributions for appreciation instances more frequently than other features. \emph{$N_{4}$ (Level 2 Workplace closing)} has the largest contribution for depreciation instances.
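The per-instance bookkeeping of "which feature contributes most" can be sketched as follows. This is our own illustrative code with made-up SHAP values; the feature order and array shapes are assumptions, not the paper's data:

```python
import numpy as np

# Assumed feature order: LSTM return, E1-E3, N1-N8, C1 (13 features in total).
FEATURES = ["return", "E1", "E2", "E3",
            "N1", "N2", "N3", "N4", "N5", "N6", "N7", "N8", "C1"]

def largest_contribution_counts(shap_values, appreciates):
    """Count, separately for appreciation (True) and depreciation (False)
    instances, how often each feature has the largest absolute SHAP value.

    shap_values: (n_instances, n_features) array of per-instance SHAP values
    appreciates: boolean array, True where the rate is predicted to appreciate
    """
    counts = {True: np.zeros(shap_values.shape[1], dtype=int),
              False: np.zeros(shap_values.shape[1], dtype=int)}
    for row, up in zip(shap_values, appreciates):
        counts[bool(up)][np.argmax(np.abs(row))] += 1
    return counts
```

Plotting `counts[True]` and `counts[False]` as bar charts per currency gives figures of the kind described above.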
Note that each feature can take different values, including values that represent the non-existence of the control measure. Features that are prominent therefore only indicate that the ESPs and NPIs they represent are influential on the market; they do not suggest that implementing such ESPs or NPIs would appreciate or depreciate the rate, as the values of the features may indicate the ESPs or NPIs not being implemented. \begin{comment} From figure~\ref{fig:uk_shap1} and figure~\ref{fig:uk_shap2}(b), we can see that there are only a few instances that ESPs or NPIs have positive contribution to the depreciation instance\footnote{The negative contribution for depreciation can be from other features, such as the predicted return and the number of cumulative cases in the last 5 days}. $E_{1}$ and $N_{4}$ presents largest negative contribution for most decrease instance, suggesting that $E_{1}$ and $N_{4}$ have a larger impact on GBP/USD depreciation which means that $E_{1}$ and $N_{4}$ will benefit the exchange rate appreciation. \end{comment} Figure~\ref{fig:all_cur} presents the number of instances to which a particular ESP or an NPI has the largest contribution. For each currency, we separate the appreciation and depreciation instances. Figures~\ref{fig:all_cur}(a) and (b) show the contributions for appreciation and depreciation instances, respectively. Each bar corresponds to an ESP or an NPI. The height of a bar represents the number of instances in which the ESP or NPI has the largest contribution. Several observations can be made when we consider the features that have the greatest contribution in more than 20 instances. Firstly, from Figure~\ref{fig:all_cur}(a) we can see that for most currencies except GBP, the number of appreciation instances with $E_{1}$ having the largest contribution is notably greater than for other features. This shows that {\em income support} is the most influential factor across all currencies.
$E_{3}$ ({\em Level 2 Debt/Contract relief}), $N_{1}$ ({\em Level 1 Stay at home requirements}), $N_{7}$ ({\em Level 3 International travel controls}), and $N_{8}$ ({\em Restrictions on internal movement}) also present large contributions for some currencies. Secondly, from Figure~\ref{fig:all_cur}(b), we can see that various features present the largest contribution to the depreciation instances, such as $E_{1}$, $N_{4}$, and $N_{6}$ ({\em Level 2 International travel controls}). As indicated by the results shown in Figure~\ref{fig:uk_shap}, large values indicate that the presence or absence of an ESP or NPI is important to exchange rates. \begin{comment} As a summary, although the contributions from the economic support policy and NPIs vary in different currencies, there are almost no large negative contributions to exchange rate appreciation or positive contributions to exchange rate depreciation, suggesting that the economic support polices, such as income support and Level 2 debt/contract relief, and strict lockdown measures like Level 2 stay at home requirements, Level 2 workplace closing, Level 3 international travel controls and restrictions on internal movement are associated with the appreciation of the exchange rates, implying US dollar depreciation. \end{comment} \section{Conclusion} The COVID-19 pandemic and the associated non-pharmaceutical interventions (NPIs) across the world have triggered large economic shocks. The global foreign exchange (FX) market is no exception and has been disrupted. Economic support policies (ESPs) were also implemented to boost the economy. Although there have been some studies exploring the influence of the ongoing pandemic, NPIs and ESPs on FX markets, the question regarding the impact of individual ESPs or NPIs on the FX markets remains unanswered. In this paper, we provide one answer by using an XAI technique, the feature attribution algorithm SHAP.
To the best of our knowledge, this is the first work that uses XAI techniques to study the impact of individual NPIs or ESPs during the COVID-19 pandemic and their association with the FX markets. In particular, we use daily exchange rate data for G10 currencies prior to the onset of the COVID-19 pandemic to train LSTM models. Then, for each day in the period January 01, 2020 to January 13, 2021, we first use the trained LSTMs to generate predictions for exchange rates, which are subsequently fed into a random forest (RF) model alongside COVID-19 related policy responses, such as ESPs and NPIs, as well as the number of cumulative COVID-19 cases in past days, to obtain predictions on the exchange rate, either appreciation or depreciation. SHAP is then applied to the RF model to produce the explanations. Experimental results suggest that ESPs, such as income support and debt/contract relief, and strict NPIs like stay at home requirements, workplace closing, international travel controls and restrictions on internal movement are associated with the appreciation and depreciation of the exchange rates. Their influences are heterogeneous across currencies. In the future, there are a few directions we would like to explore further. Firstly, we will improve the current LSTM model by incorporating macro-financial factors such as the inflation rate, money supply index, consumer index, and industrial production index to achieve a more accurate prediction of the exchange rate. Achieving accurate predictions is essential for an XAI technique to produce meaningful explanations. Secondly, as we have only considered predicting and explaining exchange rates in this work, we would like to investigate other aspects of FX markets, such as efficiency, dynamic spillovers, and volatility transmissions. Lastly, we would like to apply other XAI techniques, like~\cite{lime2016,aas2019explaining}, to gain more comprehensive explanations.
\begin{acks} This work is funded by the Quebec-Wales Collaboration 2020 project: {\em Understanding Impact of COVID-19 on International Economy via Currency Markets using Explainable AI.} \end{acks} \bibliographystyle{ACM-Reference-Format}
## The Atom Chip Group

Ron Folman

Quantum theory is one of the scientific revolutions of the 20th century. It has been with us for nearly a century and we still don't fully understand it and its implications. The quantum nature of atoms becomes dominant when their de Broglie wavelength becomes comparable to the size of the potential in which the atoms are held. This occurs at ultra-low temperatures ($<1\mu K$). The wave properties of cold atoms (which are thus named "matter waves") can be exploited for fundamental measurements such as the study of nature's symmetries, the search for new forces, or analyzing the border between the quantum and classical worlds (with implications even for the question of freedom of thought). The field of quantum optics has made leaps in the past 15 years or so, and in this period 4 Nobel prizes have been awarded (1997, 2001, 2005, 2012).

Our main research tool is the AtomChip (Fig. 1). This device enables the manipulation and detection of isolated cold atoms for quantum operations. Such systems also have technological applications. For example, they have already set the best time standards; they are now being developed via interferometric schemes into acceleration sensors for ultra-accurate navigation systems, as well as for detecting minute changes in the gravitational field; magnetometry can be made so sensitive ($10^{-17}$ Tesla) that it can be used for medical imaging of the brain; more futuristic applications involve secure communication (quantum cryptography) and the super-fast quantum computer.

The "AtomChip" group and the nano-fabrication facility at Ben-Gurion University combine in developing AtomChips for new fundamental insights into the laws of nature as well as new technological applications. Our students study for degrees in Theoretical and Experimental Physics (and combinations thereof), and are exposed to a variety of fundamental theory as well as advanced technology, ranging from lasers and optics to electronics and computer interfaces. Our students have won numerous excellence awards in Israel and abroad (the last one in 2014). See our group web site for more information.

Bottom line: We talk to atoms, and you are invited to talk to them too!

Figure 1: An Atom Chip recently fabricated at BGU, with current carrying wires forming magnetic traps and guides for cold atoms.

Figure 2: An interference pattern made by matter-waves in our lab. This interference pattern is the direct result of putting an atom in two places at the same time, as allowed by quantum rules. Reference: S. Machluf, Y. Japha, R. Folman, Nature Communications 4, 2424 (2013).
# Is the annihilator of a minimal prime ideal principal?

My setup is as follows: $X$ is a projective, reduced curve (which is not integral) with a finite morphism onto $\mathbb{P}_k^1$. $\DeclareMathOperator{\Ann}{Ann}$ Let $R$ be a coordinate ring of $X$ which is finite free over $k[x]$ (since $X$ has more than one irreducible component, $R$ has at least two minimal prime ideals). Let $P$ be a minimal prime of $R$ corresponding to an irreducible component of $X$.

Do we always have that $\Ann(P)$ is principal?

What I tried:

• Every example I was able to come up with satisfied the above property. Hence I did not find any counter-example.

• There is some $b \in R$ such that $P = \Ann(b)$ and hence $b \in \Ann(\Ann(b))$. Thus a necessary condition is that such a generator $a \in R$ of $\Ann(P)$ must divide every $b \in R$ that satisfies $\Ann(b) = P$. Since $\Ann(b) = \Ann(bf)$ for all $f \in k[x]$ (since $R$ is torsion-free over $k[x]$) we may assume that no element $f \in k[x]$ divides $b$ in $R$. That's where I am stuck.

I am grateful for any kind of help, counter-example or hints.

## 1 Answer

This is false. To see why, consider the following lemma.

Lemma. Let $R$ be a Noetherian ring with exactly two minimal primes $\mathfrak p$ and $\mathfrak q$ such that $\mathfrak p \mathfrak q = 0$. Then $\operatorname{Ann}(\mathfrak p) = \mathfrak q$.

The assumption is in particular satisfied if $\mathfrak p \cap \mathfrak q = 0$, which is equivalent to $R$ being reduced.

Proof. Clearly $\mathfrak q \subseteq \operatorname{Ann}(\mathfrak p)$, since $\mathfrak p \mathfrak q = 0$. Since $\mathfrak p \not\subseteq \mathfrak q$, we have $\mathfrak p_{\mathfrak q} = R_{\mathfrak q}$, so any element killing $\mathfrak p$ better be in $\mathfrak q$. (See also Tag 00L2.) $\square$

Thus, it suffices to construct such a ring $R$ where $\mathfrak q$ is not principal. Basically anything you write down will work.

Example. Let $E$ be an elliptic curve over an algebraically closed field $k$, and let $p \in E$ be a closed point. Glue two copies of $E$ at $p$, i.e. consider the union $X = (E \times p) \cup (p \times E) \subseteq E \times E$. This admits a finite flat map to $\mathbb P^1$ given by $E \times E \to E \to \mathbb P^1$, where the first map is $(x,y) \mapsto x+y$ and the second is any nonconstant map.

Removing any other point $q$ gives an affine open $((E \setminus q) \times p) \cup (p \times (E \setminus q)) \subseteq X$, and on its coordinate ring $R$ we have two minimal prime ideals $\mathfrak p$ and $\mathfrak q$ corresponding to the components $(E \setminus q) \times p$ and $p \times (E \setminus q)$, respectively.

If $\mathfrak q$ were principal, then the same is true for its restriction to $R/\mathfrak p = \Gamma(E\setminus q,\mathcal O)$. But the map
\begin{align}
\operatorname{Pic}(E \setminus q) &\to E(k) = \operatorname{Pic}^0(E)\\
r &\mapsto r-q
\end{align}
is an isomorphism, and if $p \in E \setminus q$ were principal this implies that $p - q = 0$, which is absurd. $\square$
Q: Why do I not get any data from this callback function in jQuery? I have tried to get this callback function to work. I get no error, but also no data. I have tested my controller in Postman and I get data.

    function GetSelected() {
        $.ajax({
            type: 'GET',
            url: '/api/machine/',
            dataType: "JSON",
            data: "data",
            contentType: "Application/json;charset=utf-8",
            success: backcall
        })
    };

    function backcall(data) {
        var selectmachine = $("#selectmachine").val();
        var Selectmachine = selectmachine + 1;
        alert('Data er:')
        $.each(function (data) {
            if (data.machinenumber == 3) {
                selected = data.selected;
                alert('Data er:' + selected)
            }
        })
    };
\section{Introduction} The group of (smooth) diffeomorphisms of a manifold has been extensively studied and there have been many interesting results concerning its algebraic and topological properties, see e.g.\! \cite{Milnor84}. Among them, the group ${\rm Diff}_+(S^1)$ of orientation preserving diffeomorphisms of the circle $S^1$ is of particular interest in connection with conformal field theory. In $(1+1)$-dimensional conformal field theory, the symmetry group of the chiral components is ${\rm Diff}_+({\mathbb R})$ and often this can be extended to ${\rm Diff}_+(S^1)$. As this group contains spacetime translations, the relevant representations must be {\it positive energy representations} and they act on the space of local observables. The representation theory of positive energy representations has been exploited for the construction and classification of a certain subclass of conformal field theories, see e.g.\! \cite{KL04-1}. Non-trivial positive energy representations of ${\rm Diff}_+(S^1)$ are necessarily projective. Any irreducible unitary positive energy representation of the Virasoro algebra extends to a projective representation of the Lie algebra ${\rm Vect}(S^1)$, the Lie algebra of vector fields on $S^1$, and it integrates to a positive energy projective unitary representation of ${\rm Diff}_+(S^1)$ \cite{GW85, Toledano-Laredo99-1}. It follows from \cite[Theorem A.2]{Carpi04}, see also \cite[Section 3.2]{CKLW18}, that all irreducible positive energy unitary projective representations of ${\rm Diff}_+(S^1)$ arise in this way. Accordingly, they are completely classified by the central charge $c$ and the lowest conformal energy $h$ \cite{KR87}. Related results including reducible representations have recently been obtained in \cite{NS15,Zellner17}. These representations of ${\rm Vect}(S^1)$ extend to certain non-smooth vector fields as linear maps \cite{CW05}. Apart from the many applications this fact has had (e.g.\! the uniqueness of conformal covariance in conformal nets \cite{CW05}, positivity of energy in DHR sectors \cite{Weiner06}, the split property in conformal nets \cite{MTW18} and covariance of soliton representations \cite{Henriques17, DIT18}), it leads naturally to the question whether the group representations extend to suitable groups of non-smooth diffeomorphisms. In contrast to the wide range of results and applications concerning the algebraic, analytic and topological properties of the group ${\rm Diff}_+^k(M)$ of $C^k$ diffeomorphisms and the group ${\mathcal D}^s(M)$ of Sobolev class diffeomorphisms (see e.g.\! \cite{EM70, Misiolek97, Baynaga97, KW09, Figalli10}), and some results on (true) representations \cite{KL02, AM06, Kuzmin07, Malliavin08}, there appear to be only a few results in the literature on positive energy representations of these groups. Indeed, ${\mathcal D}^s(M)$ is an infinite-dimensional manifold modelled on the space $H^s(M)$ of $H^s$-vector fields, which is {\it not} a Lie algebra with the usual Lie bracket for ${\rm Vect}^\infty(M)$. This makes the study of representations of ${\mathcal D}^s(M)$ rather subtle. In this paper, we show that any positive energy (projective) representation of the diffeomorphism group extends to ${\mathcal D}^s(S^1)$ for $s>3$, by considering its action on vector fields and thereby exploiting the representation theory of the Virasoro algebra. We also show that these representations can locally be made into multiplier representations by fixing the phase. This allows us to take the direct sum, and it turns out that conformal nets are covariant with respect to this extended action. For some special representations appearing in Fock space, further extensions have been obtained, first to $C^3$-diffeomorphisms \cite{Vromen13}, then to ${\mathcal D}^s(S^1), s>2$ \cite{DIT18}.
The arguments there depend on realizing these representations in some specific conformal field theory, and it is open whether the results are valid for general central charge $c$. In contrast, by our argument, representations extend to ${\mathcal D}^s(S^1)$ for any real $s > 3$ and for any $c$. While the extensions to ${\mathcal D}^s(S^1), 2<s\le 3$ do not necessarily act nicely on the Lie algebra representations, by our construction the extensions to ${\mathcal D}^s(S^1), s>3$ do so and are differentiable. Indeed, our proof follows in part the strategy in \cite{GW85} for the integrability of the representations of the Virasoro algebra. The extension to non-smooth diffeomorphisms then follows from the above mentioned extension to non-smooth vector fields of the corresponding projective representation of ${\rm Vect}(S^1)$ given in \cite{CW05}. Actually, our argument can be used to give a simpler proof of the results in \cite{GW85}, see Remark \ref{remarkGW}. This paper is organized as follows. In Section \ref{preliminaries}, we recall the relevant groups and algebras, their topologies and representations. In Section \ref{extension}, we first extend the irreducible projective representations of ${\rm Diff}_+(S^1)$ to ${\mathcal D}^s(S^1)$ with $s>3$. Then we lift them locally to multiplier representations, and show that the direct sum makes sense as a projective representation. Section \ref{conformal} demonstrates that two-dimensional chiral conformal field theories described by conformal nets of von Neumann algebras have this extended symmetry of ${\mathcal D}^s(S^1)$. We summarize possible further continuations of this work in Section \ref{outlook}.
\section{Preliminaries}\label{preliminaries} \subsection{\texorpdfstring{${\rm Diff}_+(S^1)$}{diffs1} and the Virasoro algebra} \paragraph{The diffeomorphism group.} Let us denote by ${\rm Diff}_+(S^1)$ the group of orientation preserving, smooth diffeomorphisms of the circle $S^1\coloneqq \lbrace z\in{\mathbb C} :\vert z\vert=1\rbrace$ and by ${\rm Vect}(S^1)$ the set of smooth vector fields on $S^1$. ${\rm Diff}_+(S^1)$ is an infinite-dimensional Lie group whose Lie algebra is identified with the real topological vector space ${\rm Vect}(S^1)$ of smooth vector fields on $S^1$ with the $C^\infty$ topology \cite{Milnor84}. In the following we identify ${\rm Vect}(S^1)$ with $C^\infty(S^1,\mathbb{R})$ and for $f\in C^{\infty}(S^1,\mathbb{R})$ we denote by $f^\prime$ the derivative of $f$ with respect to the angle $\theta$, $$ f^\prime(z)=\frac{d}{d\theta}f(e^{i\theta})\bigg\rvert_{e^{i \theta}=z}.$$ We consider a diffeomorphism $\gamma\in{\rm Diff}_+(S^1)$ as a map from $S^1$ to $S^1 \subset {\mathbb C}$. With this convention, its action on $f\in {\rm Vect}(S^1)$ is \begin{align}\label{eq:defgammaf} (\gamma_* f)(e^{i\theta}) =-ie^{-i\theta}\left(\frac{d}{d\varphi}\gamma(e^{i\varphi})\right)\bigg\rvert_{e^{i\varphi} = \gamma^{-1}(e^{i\theta})}f(\gamma^{-1}(e^{i\theta})). \end{align} We denote by ${\rm Diff}_+^k(S^1)$ the group of $C^k$-diffeomorphisms of $S^1$. Note that this is not a Lie group; indeed, the corresponding linear space ${\rm Vect}^k(S^1)$ of $C^k$-vector fields is not closed under the natural Lie bracket (see below). The universal covering group of ${\rm Diff}_+(S^1)$ (resp.\! ${\rm Diff}_+^k(S^1)$), $\widetilde{{\rm Diff}_+(S^1)}$ (resp.\!
$\widetilde{{\rm Diff}_+^k(S^1)}$), can be identified\footnote{The realization of $\widetilde{{\rm Diff}_+^k(S^1)}$ works in the same way as $\widetilde{{\rm Diff}_+(S^1)}$ as in \cite[Section 6.1]{Toledano-Laredo99-1}, see also \cite[Example 4.2.6]{Hamilton82}.} with the group of $C^{\infty}$-diffeomorphisms (resp.\! $C^k$-diffeomorphisms) $\gamma$ of $\mathbb{R}$ which satisfy \begin{equation*} \gamma(\theta+2\pi)=\gamma(\theta)+2\pi. \end{equation*} If $\gamma\in\widetilde{{\rm Diff}_+(S^1)}$, its image under the covering map is denoted in the following by $\mathring{\gamma}\in{\rm Diff}_+(S^1)$, where $\mathring{\gamma}(e^{i\theta})=e^{i\gamma(\theta)}$. Conversely, if $\gamma \in {\rm Diff}_+(S^1)$, there is an element $\tilde{\gamma} \in \widetilde{{\rm Diff}_+(S^1)}$ whose image under the covering map is $\gamma$. Such a $\tilde{\gamma}$ is unique up to $2\pi$ and is called a lift of $\gamma$. The group ${\rm Diff}_+(S^1)$ admits the Bott-Virasoro cocycle $B:{\rm Diff}_+(S^1)\times{\rm Diff}_+(S^1)\rightarrow \mathbb{R}$ (see e.g. \cite{FH05}). The Bott-Virasoro group is then defined as the group with elements \[ (\gamma, t)\in{\rm Diff}_+(S^1)\times\mathbb{R} \] and with multiplication \[ (\gamma_1,t_1)\circ(\gamma_2,t_2)=(\gamma_1\circ\gamma_2, t_1+t_2+ B(\gamma_1,\gamma_2)). \] Note that, given a true (not projective) unitary irreducible representation $V$ of the universal covering of the Bott-Virasoro group, one can obtain a unitary multiplier representation\footnote{For the definition of a unitary multiplier representation see Section \ref{projective}.} $\underline{V}(\gamma) := V(\gamma, 0)$ of $\widetilde{{\rm Diff}_+(S^1)}$ (with respect to the Bott-Virasoro cocycle $B$). Then the map $\underline{V}:\widetilde{{\rm Diff}_+(S^1)}\rightarrow U(\mathcal{H})$ satisfies \[ \underline{V}(\gamma_1)\underline{V}(\gamma_2)=e^{ic B(\mathring{\gamma_1},\mathring{\gamma_2})}\underline{V}(\gamma_1\circ\gamma_2), \] where $c\in {\mathbb R}$ by irreducibility.
\paragraph{The Lie algebra.} The space ${\rm Vect}(S^1)$ is endowed with the Lie algebra structure with the Lie bracket given by \[ [f,g]=f^{\prime}g-f g^{\prime}. \] As a Lie algebra, ${\rm Vect}(S^1)$ admits the Gelfand–Fuchs two-cocycle \[ \omega (f,g)=\frac{1}{48\pi}\int_{S^1}(f(e^{i\theta})g^{\prime\prime\prime}(e^{i\theta})-f^{\prime\prime\prime}(e^{i\theta})g(e^{i\theta}))d\theta. \] The Virasoro algebra ${\rm Vir}$ is the central extension of the complexification of the algebra generated by the trigonometric polynomials in ${\rm Vect}(S^1)$ defined by the two-cocycle $\omega$. It can be explicitly described as the complex Lie algebra generated by $L_n$, $n\in\mathbb{Z}$, and the central element $\mathfrak{1}$, with brackets \[ [L_n,L_m]=(n-m)L_{n+m}+\delta_{n+m,0}\frac{n^3-n}{12}\mathfrak{1}. \] Consider a representation $\pi:{\rm Vir}\rightarrow{\hbox{End}}(V)$ of ${\rm Vir}$ on a complex vector space $V$ endowed with a scalar product $\langle\cdot,\cdot\rangle$. We call $\pi$ a {\bf unitary positive energy representation} if the following hold: \begin{enumerate} \item Unitarity: $\langle v,\pi(L_n)w\rangle=\langle \pi(L_{-n})v,w\rangle$ for every $v,w\in V$ and $n\in{\mathbb Z}$; \item Positivity of the energy: $V=\bigoplus_{\lambda\in{\mathbb R}_+\cup\lbrace 0\rbrace}V_{\lambda}$, where $V_{\lambda}\coloneqq \ker(\pi(L_0)-\lambda{\mathbbm 1}_V)$. The lowest eigenvalue of $\pi(L_0)$ is called the lowest weight; \item Central charge: $\pi(\mathfrak{1})=c{\mathbbm 1}_V$. \end{enumerate} There exists an irreducible unitary positive energy representation with central charge $c$ and lowest weight $h$ if and only if $c\ge 1$ and $h\ge 0$ (continuous series representations) or $(c,h)=(c(m),h_{p,q}(m))$, where $c(m)=1-\frac{6}{m(m+1)}$, $h_{p,q}(m)=\frac{(p(m+1)-qm)^2-1}{4m(m+1)}$, $m=3,4,\cdots$, $p=1,2,\cdots,m-1$, $q=1,2,\cdots,p$ (discrete series representations) \cite{KR87}\cite{DMS97}.
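As a quick numerical sanity check of these conventions, one can verify that complexified generators reproduce the Witt part of the bracket: with $\ell_n(e^{i\theta})=-ie^{in\theta}$ (a normalization chosen here purely for illustration), the bracket $[f,g]=f'g-fg'$ gives $[\ell_n,\ell_m]=(n-m)\ell_{n+m}$, matching the non-central part of the Virasoro relation above. A minimal sketch:

```python
import numpy as np

def ell(n, theta):
    # Complexified generator l_n(e^{i*theta}) = -i * exp(i*n*theta).
    return -1j * np.exp(1j * n * theta)

def ell_prime(n, theta):
    # Its derivative in theta, computed analytically: n * exp(i*n*theta).
    return n * np.exp(1j * n * theta)

def bracket(n, m, theta):
    # The Lie bracket [l_n, l_m] = l_n' l_m - l_n l_m'.
    return (ell_prime(n, theta) * ell(m, theta)
            - ell(n, theta) * ell_prime(m, theta))

# Check [l_n, l_m] = (n - m) l_{n+m} pointwise on a grid of angles.
theta = np.linspace(0.0, 2.0 * np.pi, 7)
for n, m in [(2, 1), (3, -1), (0, 5)]:
    assert np.allclose(bracket(n, m, theta), (n - m) * ell(n + m, theta))
```

The central term $\frac{n^3-n}{12}\delta_{n+m,0}$ comes from the cocycle $\omega$ and is not visible at the level of vector fields, so it is not part of this check.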
In this case the representation space $V$ is denoted by ${\mathcal H}^\mathrm{fin}(c,h)$. We denote by $\mathcal{H}(c,h)$ the Hilbert space completion of the vector space ${\mathcal H}^\mathrm{fin}(c,h)$ associated with the unique irreducible unitary positive energy representation of ${\rm Vir}$ with central charge $c$ and lowest weight $h$. In these representations, the conformal Hamiltonian $\pi(L_0)$ is diagonalized, and on the linear span of its eigenvectors $\mathcal{H}^{\mathrm{fin}}(c,h)$ (the space of finite energy vectors), the Virasoro algebra acts algebraically as unbounded operators. \paragraph{The stress-energy tensor.} Let $\mathcal{H}(c,h)$ be as above and, with abuse of notation, denote by $L_n$ the elements of ${\rm Vir}$ represented in $\mathcal{H}(c,h)$. For a smooth complex-valued function $f$ on $S^1$ with finitely many non-zero Fourier coefficients, the (chiral) stress-energy tensor associated with $f$ is the operator $$T(f)=\sum_{n\in\mathbb{Z}}L_n \hat{f}_n$$ acting on $\mathcal{H}(c,h)$, where $$\hat{f}_n=\int_0^{2\pi}\frac{d\theta}{2\pi}e^{-in\theta}f(e^{i\theta})$$ is the $n$-th Fourier coefficient of $f$. When $f$ is real-valued, $T(f)$ is essentially self-adjoint on $\mathcal{H}^{\mathrm{fin}}(c,h)$ by the linear energy bounds, yielding a self-adjoint unbounded operator $T(f)$. Moreover it can be extended to a particular class of non-smooth functions \cite{CW05}, retaining its self-adjointness. This fact will be used in this article and will therefore be reviewed in some detail in Section \ref{non-smooth}. It is a crucial fact that the irreducible representations $\mathcal{H}(c,h)$ of ${\rm Vir}$ integrate to irreducible unitary strongly continuous representations of the universal covering of the Bott-Virasoro group \cite{FH05}.
In other words, denoting by $q$ the quotient map $q: {\mathcal U}(\mathcal{H}(c,h))\rightarrow {\mathcal U}(\mathcal{H}(c,h))/\mathbb{C}$ (we denote by ${\mathcal U}({\mathcal K})$ the group of unitary operators on ${\mathcal K}$), there is an irreducible, unitary, strongly continuous multiplier representation $U$ of $\widetilde{{\rm Diff}_+(S^1)}$, the universal covering of ${\rm Diff}_+(S^1)$, such that \[ q(U({\rm Exp}(f)))=q(e^{iT(f)}) \] for all $f\in{\rm Vect}(S^1)$, where ${\rm Exp}$ is the Lie-theoretic exponential map of ${\rm Diff}_+(S^1)$ (see \cite{Milnor84}). For the stress-energy tensor $T$, we have the following covariance \cite[Proposition 5.1, Proposition 3.1]{FH05}. \begin{proposition}\label{pr:covariance} The stress-energy tensor $T$ on $\mathcal{H}(c,h)$ transforms according to \[ U(\gamma)T(f)U(\gamma)^*=T(\mathring{\gamma}_*({f}))+\frac{c}{24\pi}\int^{2\pi}_0\{\mathring{\gamma},z\}\bigg\rvert_{z=e^{i\theta}}f(e^{i\theta})e^{i2\theta}d\theta \] on vectors in $\mathcal{H}^{\mathrm{fin}}(c,h)$, for $f\in{\rm Vect}(S^1)$ and $\gamma\in\widetilde{{\rm Diff}_+(S^1)}$. Furthermore the commutation relations \[ i[T(g),T(f)]=T(g^\prime f-f^\prime g)+ c \omega(g,f), \] hold for arbitrary $f,g\in C^\infty (S^1)$, on vectors $\psi\in \mathcal{H}^{\mathrm{fin}}(c,h).$ \end{proposition} Here \[ \{\mathring{\gamma},z\}=\frac{\frac{d^3}{dz^3}\mathring{\gamma}(z)}{\frac{d}{dz}\mathring{\gamma}(z)}-\frac{3}{2}\left(\frac{\frac{d^2}{dz^2}\mathring{\gamma}(z)}{\frac{d}{dz}\mathring{\gamma}(z)}\right)^2 \] is the Schwarzian derivative of $\mathring{\gamma}$ and $\frac{d}{dz}\mathring{\gamma}(z)=-i\bar{z}\frac{d}{d\theta}\mathring{\gamma}(e^{i\theta})\bigg\rvert_{e^{i\theta}=z}$. Note that \[ \beta(\gamma,f)\coloneqq \frac{c}{24\pi}\int_{S^1}\{\mathring\gamma,z\}izf(z)dz \] and $\omega(\cdot,\cdot)$ are related by \begin{align}\label{eq:gelfandderivative} \frac{d}{dt}\beta({\rm Exp}(tf),g)\bigg\rvert_{t=0}=-c\omega(f,g). 
\end{align} \subsection{The stress-energy tensor on non-smooth vector fields}\label{non-smooth} Let $T$ be the stress-energy tensor on ${\mathcal H}(c,h)$. Given a not necessarily smooth real function $f$ on $S^1$, it is possible to evaluate the stress-energy tensor on $f$ \cite[Proposition 4.5]{CW05}. First of all we define for a real-valued function $f$ on the circle \[ \Vert f\Vert_{\frac{3}{2}}\coloneqq \sum_{n\in\mathbb{Z}}\vert{\hat{f}}_n\vert(1+|n|^{\frac{3}{2}}), \] where $\hat{f}_n\coloneqq \frac{1}{2\pi}\int_0^{2\pi}e^{-in\theta}f(e^{i\theta})d\theta$ is the $n$-th Fourier coefficient of $f$. We denote\footnote{We regard $\mathcal{S}_{\frac32}(S^1)$ and $H^s(S^1)$ below as spaces of (possibly nonsmooth) vector fields on $S^1$; accordingly, unless otherwise specified, they consist of real-valued functions.} with $\mathcal{S}_{\frac{3}{2}}(S^1)$ the class of functions $f\in L^1(S^1,{\mathbb R})$ such that $\Vert f\Vert_{\frac{3}{2}}$ is finite, endowed with the topology induced by the norm $\Vert\cdot\Vert_{\frac{3}{2}}$. The following is \cite[Proposition 4.2, Theorem 4.4, Proposition 4.5]{CW05}. \begin{proposition}\label{pr:nonsmooth} If $f:S^1\rightarrow\mathbb{C}$ is continuous and such that $\sum_{n\in\mathbb{Z}}|\hat{f}_n|(1+|n|^{\frac{3}{2}})<\infty$, then \begin{enumerate}[{(}1{)}] \item\label{pr:nonsmooth-def} the operator $T(f)=\sum_{n\in\mathbb{Z}}L_n \hat{f}_n$ on the domain $\mathcal{H}^{\mathrm{fin}}(c,h)$ is well defined (i.e.\! the sum is strongly convergent on the domain); \item\label{pr:nonsmooth-star} $T(f)^*$ is an extension of the operator $T(f)^+:=\sum_{n\in\mathbb{Z}}L_n \bar{\hat f}_n$ (this is again understood as an operator on the domain $\mathcal{H}^{\mathrm{fin}}(c,h)$); \item\label{pr:nonsmooth-symmetry} $T(f)$ is closable and $\overline{T(f)}=(T(f)^+)^*$, where $T(f)$ and $T(f)^+$ are considered as operators on the domain $\mathcal{H}^{\mathrm{fin}}(c,h)$. In particular, if $\hat{f}_n=\bar{\hat f}_{-n}$ for all $n\in\mathbb{Z}$ (i.e.
if $f$ is a real-valued function), then $T(f)$ is essentially self-adjoint on $\mathcal{H}^{\mathrm{fin}}(c,h)$. \item\label{pr:nonsmooth-bound} If $f$ is real, then for every $\xi \in {\mathscr{D}}(L_0)$ we have the following energy bounds \[ \|T(f)\xi\|\leq r\|f\|_{\frac{3}{2}}\|(1+L_0)\xi\| \] where $r$ is a positive constant. Consequently, ${\mathscr{D}}(L_0) \subset {\mathscr{D}}(T(f))$. \item\label{pr:nonsmooth-convergence} If $\{f_n\}$ ($n\in\mathbb{N}$) is a sequence\footnote{This should be distinguished from the Fourier coefficients $\hat f_n$ of a single function $f$.} of continuous real functions on $S^1$ in $\mathcal{S}_{\frac{3}{2}}(S^1)$ and $\|f-f_n\|_{\frac{3}{2}}$ converges to $0$ as $n$ tends to $\infty$, then \[ T(f_n)\rightarrow T(f) \] in the strong resolvent sense. \end{enumerate} \end{proposition} It has also been shown that the class $\mathcal{S}_{\frac{3}{2}}(S^1)$ contains many non-smooth functions \cite[Lemma 2.2]{Weiner06},\cite[Lemma 5.3]{CW05}. \begin{proposition} If a real-valued function $f$ on the circle is piecewise smooth and once continuously differentiable on the whole $S^1$, then $f \in \mathcal{S}_{\frac{3}{2}}(S^1)$. \end{proposition} \subsection{Groups of diffeomorphisms of Sobolev class \texorpdfstring{$H^s(S^1)$}{Hs(S1)}}\label{sobolev} Let $s>\frac12$ be a real number. We introduce (see \cite[Section 2]{EK14} and \cite[Definition 2.2]{EK14}, respectively) \begin{align*} H^s(S^1) &:= \{f\in L^2(S^1, {\mathbb R}): \|f\|_{H^s} < \infty\}, \text{ where } \|f\|_{H^s} := \left(\sum_{n\in{\mathbb Z}} (1+n^2)^s|\hat f_n|^2\right)^\frac12 \\ H^s(S^1,{\mathbb C}) &:= \{f\in L^2(S^1, {\mathbb C}): \|f\|_{H^s} < \infty\}, \text{ where } \|f\|_{H^s} := \left(\sum_{n\in{\mathbb Z}} (1+n^2)^s|\hat f_n|^2\right)^\frac12 \\ {\mathcal D}^s(S^1) &:= \{\gamma \in {\rm Diff}_+^1(S^1): \tilde \gamma - \iota \in H^s(S^1)\}, \end{align*} where $\tilde \gamma$ is a lift of $\gamma$ to ${\mathbb R}$ and $\iota$ denotes the identity map $\iota(\theta) = \theta$, so that $\tilde\gamma - \iota$ is $2\pi$-periodic.
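The condition $s>\frac12$ in these definitions guarantees in particular that elements of $H^s(S^1)$ are continuous, through the following elementary Cauchy–Schwarz estimate:
\[
\sum_{n\in{\mathbb Z}}|\hat f_n|\le \left(\sum_{n\in{\mathbb Z}}(1+n^2)^{-s}\right)^{\frac12}\left(\sum_{n\in{\mathbb Z}}(1+n^2)^{s}|\hat f_n|^2\right)^{\frac12}<\infty,
\]
so that the Fourier series of $f\in H^s(S^1)$ converges absolutely and uniformly to a continuous representative of $f$.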
Actually, in the literature there are various definitions of these Sobolev spaces and their topologies. Although it is well-known that they coincide, for the convenience of the reader we recall them and show their equivalence in the Appendix. As $s>\frac12$, the space $H^s(S^1)$ is a subspace of $C(S^1, {\mathbb R})$. The universal covering group $\widetilde{{\mathcal D}^s(S^1)}$ of ${\mathcal D}^s(S^1)$ is a subspace of $\widetilde{{\rm Diff}_+^1(S^1)}$, namely the space of maps $\gamma:{\mathbb R}\to{\mathbb R}$ satisfying $\gamma(\theta+2\pi)=\gamma(\theta)+2\pi$ which are locally $H^s$ (see the Appendix), and this can be identified with an open subset of $H^s(S^1)$ via $\gamma\mapsto\gamma-\iota$. From these definitions, it is immediate that ${\rm Diff}_+^k(S^1)$ is continuously embedded in ${\mathcal D}^k(S^1)$. Conversely, by the Sobolev-Morrey embedding \cite[Proposition 2.2]{IKT13}, it holds that ${\mathcal D}^s(S^1) \hookrightarrow {\rm Diff}_+^k(S^1)$ if $s > k+\frac12$. The first statement of the following is a straightforward adaptation of \cite[Lemma 2.3]{IKT13}. One can also find various elementary proofs, for example \cite{timur315086, Smyrlis823756}. The second statement is an adaptation of \cite[Lemma B.4]{IKT13}. \begin{lemma}\label{lm:sobolevalgebra} Let $s > \frac12$. Then $H^s(S^1)$ is an algebra and $\|fg\|_{H^s} \le C_s \|f\|_{H^s}\|g\|_{H^s}$. If $g \in H^s(S^1)$ and $\inf_\theta (1+g(\theta))> 0$, then $\frac1{1+g} \in H^s(S^1)$. \end{lemma} The following is a special case of \cite[Theorem B.2]{IKT13} and an analogue of \cite[Proposition B.7]{IKT13}, see also the Appendix. According to \cite[p.~12]{Kolev13}, Lemma \ref{lm:sobolevgroup}(a) for integer $s$ was first established in \cite{Ebin68}. \begin{lemma}\label{lm:sobolevgroup} Let $s > \frac 32$. Then \begin{enumerate}[{(}a{)}] \item $(\gamma,f) \mapsto f\circ \gamma,\; {\mathcal D}^s(S^1)\times H^s(S^1) \to H^s(S^1)$ is continuous.
\item $\gamma \mapsto \gamma^{-1},\; {\mathcal D}^s(S^1)\to {\mathcal D}^s(S^1)$ is continuous. \item ${\mathcal D}^s(S^1)$ is a topological group. \end{enumerate} \end{lemma} By applying these results, we obtain the following. \begin{lemma}\label{lm:gamma32sobolev} We have the following. \begin{enumerate}[{(}a{)}] \item Let $s > 2$. The embedding $H^s(S^1)\hookrightarrow \mathcal{S}_{\frac{3}{2}}(S^1)$ is continuous. \item Let $s > \frac32$. The map \begin{align*} {\mathcal D}^{s+1}(S^1)\times H^s(S^1)&\rightarrow H^s(S^1)\\ (\gamma,f)&\mapsto \gamma_*(f), \end{align*} where $\gamma_*(f)$ is as in \eqref{eq:defgammaf}, is continuous. \item Let $s>3$. $\beta(\gamma,f)$ extends continuously to $\gamma \in {\mathcal D}^s(S^1), f\in L^2(S^1, {\mathbb R})$. \end{enumerate} \end{lemma} \begin{proof} (a) is obtained from the following inequality, valid for any $\epsilon>0$ by the Cauchy–Schwarz inequality: $$\sum_{k\neq 0} |\hat{f}_k||k|^{\frac{3}{2}}=\sum_{k\neq 0} |\hat{f}_k| |k|^{2+\epsilon}\frac{1}{|k|^{\frac{1}{2}+\epsilon}} \leq \sqrt{\sum_{k\neq0} \frac{1}{k^{1+2\epsilon}}}\sqrt{\sum_{k\neq0} |\hat{f}_k|^{2}|k|^{4+2\epsilon}},$$ where the right-hand side is bounded by a constant times $\|f\|_{H^s}$ once $0<\epsilon\leq s-2$. (b) follows from Lemmas \ref{lm:sobolevgroup} and \ref{lm:sobolevalgebra} and \eqref{eq:defgammaf}. (c) Note that, with $s>3$, ${\mathcal D}^{s}(S^1) \ni \gamma \mapsto \{\mathring{\gamma},z\} \in L^2(S^1, {\mathbb C})$ is continuous. To see it, in the definition \[ \{\mathring{\gamma},z\}=\frac{\frac{d^3}{dz^3}\mathring{\gamma}(z)}{\frac{d}{dz}\mathring{\gamma}(z)}-\frac{3}{2}\left(\frac{\frac{d^2}{dz^2}\mathring{\gamma}(z)}{\frac{d}{dz}\mathring{\gamma}(z)}\right)^2, \] the maps $\gamma\mapsto\frac{d^3}{dz^3}\mathring{\gamma}(z)\in L^2(S^1, {\mathbb C})$ and $\gamma\mapsto \frac{1}{\frac{d}{dz}\mathring{\gamma}(z)} \in H^{s-1}(S^1, {\mathbb C}) \subset L^\infty(S^1, {\mathbb C})$ are continuous, hence their product is continuous in $L^2(S^1, {\mathbb C})$.
The second derivative $\gamma\mapsto\frac{d^2}{dz^2}\mathring{\gamma}(z) \in H^{s-2}(S^1, {\mathbb C})$ is continuous, hence, by the complexification of Lemma \ref{lm:sobolevalgebra}, so is $\gamma\mapsto\left(\frac{\frac{d^2}{dz^2}\mathring{\gamma}(z)}{\frac{d}{dz}\mathring{\gamma}(z)}\right)^2 \in H^{s-2}(S^1, {\mathbb C})$, and we obtain the continuity of $\gamma \mapsto \{\mathring{\gamma},z\}$. Now the claim is immediate because $\beta(\gamma, f) = \frac{c}{24\pi}\int_{S^1}\{\mathring\gamma,z\}izf(z)dz$. \end{proof} \subsection{Projective and multiplier representations}\label{projective} A strongly continuous unitary projective representation of a topological group $G$ is a pair $(U,\mathcal{H})$ where $\mathcal{H}$ is a Hilbert space and $U$ is a continuous group homomorphism from $G$ to $\mathcal{U}(\mathcal{H})/\mathbb{T}$, where ${\mathcal U}({\mathcal H})$ is equipped with the strong operator topology and $\mathcal{U}(\mathcal{H})/\mathbb{T}$ with the quotient topology by the quotient map $q$. Namely, the subbasis elements which contain $q(u)$ are $\{{\mathcal U}_{q(u),\xi,\varepsilon}\}_{\xi \in {\mathcal H}, \varepsilon > 0}$, where \[ {\mathcal U}_{q(u),\xi,\varepsilon} = \{q(v): \text{ there are }u',v' \in {\mathcal U}({\mathcal H}), q(u) = q(u'), q(v) = q(v'), \text{ and } \|(v'-u')\xi\| < \varepsilon\}. \] Therefore, it is clear that a net $\{q(u_\lambda)\}$ has limit $q(u)$ if and only if for each $\xi \in {\mathcal H}$ there are $z_{\xi,\lambda}, \hat z_{\xi,\lambda}\in \mathbb{T}$ such that $\|z_{\xi,\lambda} u_\lambda \xi - \hat z_{\xi,\lambda}u\xi\| \to 0$ if and only if there is $z_{\xi,\lambda} \in \mathbb{T}$ such that\footnote{One can concretely make the following choice: $z_{\xi ,\lambda} = \frac{\overline{\<u \xi, u_\lambda \xi\>}}{|\<u \xi, u_\lambda \xi\>|}$, then $z_{\xi, \lambda} u_\lambda \xi$ converges to $u \xi$.} $z_{\xi,\lambda}u_\lambda\xi \to u\xi$.
Actually, $z_{\xi,\lambda}$ does not depend on $\xi$ (because, if $z_{\xi,\lambda}u_\lambda\eta$ were not convergent for $\eta \perp \xi$, $z_{\xi,\lambda}u_\lambda(\xi + \eta)$ would not be convergent in ${\mathcal H}/\mathbb{T}$, hence convergence holds for any $\eta$), hence $q(u_\lambda)$ is convergent if and only if there is a net $z_\lambda \in \mathbb{T}$ such that $z_\lambda u_\lambda$ is convergent in the strong operator topology. The above continuity is equivalent to the following, see \cite{Bargmann54}: whenever $g_\lambda \to g$, it holds for any $x\in {\mathcal B}({\mathcal H})$ that ${\hbox{\rm Ad\,}} U(g_\lambda)(x) \to {\hbox{\rm Ad\,}} U(g)(x)$ (note that ${\hbox{\rm Ad\,}} U(g)$ is well-defined for $U(g) \in {\mathcal U}({\mathcal H})/\mathbb{T}$). We show it here for the convenience of the reader. It is straightforward that if $U(g_\lambda)\to U(g)$ in ${\mathcal U}({\mathcal H})/\mathbb{T}$, then one can fix phases of $U(g_\lambda)$ and $U(g)$ so that $U(g_\lambda) \to U(g)$ in the strong operator topology (with a slight abuse of notation) by the previous paragraph, hence ${\hbox{\rm Ad\,}} U(g_\lambda)(x) \to {\hbox{\rm Ad\,}} U(g)(x)$. Conversely, assume that ${\hbox{\rm Ad\,}} U(g_\lambda)(x) \to {\hbox{\rm Ad\,}} U(g)(x)$ for any $x \in {\mathcal B}({\mathcal H})$, and fix the phases of $U(g_\lambda), U(g)$. For any one-dimensional projection $p_\xi$ to a unit vector $\xi$, we have ${\hbox{\rm Ad\,}} U(g_\lambda)(p_\xi) \to {\hbox{\rm Ad\,}} U(g)(p_\xi)$, and the latter is again a one-dimensional projection to, say, ${\mathbb C}\xi_g$, and we may assume $\|\xi\| = \|\xi_g\| = 1$. Since $\<\xi_g, {\hbox{\rm Ad\,}} U(g_\lambda)(p_\xi)\xi_g\> \to \<\xi_g, {\hbox{\rm Ad\,}} U(g)(p_\xi)\xi_g\> = 1$, $\|p_\xi U(g_\lambda)^*\xi_g\| = |\<\xi, U(g_\lambda)^*\xi_g\>|\to 1$. Let us assign new phases by $U'(g_\lambda) = U(g_\lambda)\cdot \frac{\overline{\<\xi_g, U(g_\lambda)\xi\>}}{|\<\xi_g, U(g_\lambda)\xi\>|}$. 
Then $\<\xi_g, U'(g_\lambda)\xi\>$ is positive and tends to $1$, implying that $U'(g_\lambda)\xi \to \xi_g$. Since this holds for an arbitrary $\xi$, it yields the convergence in ${\mathcal U}({\mathcal H})/\mathbb{T}$ again by the previous paragraph. We can consider $U(g)$ as an operator acting on $\mathcal{H}$ determined up to a phase factor. Two projective unitary representations $(U_1,\mathcal{H}_1)$ and $(U_2,\mathcal{H}_2)$ are said to be equivalent if there exists a unitary $W:\mathcal{H}_1\rightarrow\mathcal{H}_2$ such that $WU_1(g)=U_2(g)W$ for every $g\in G$ up to a phase factor. A unitary multiplier representation of $G$ is a pair $(U,\mathcal{H})$ where $U:G\rightarrow \mathcal{U}(\mathcal{H})$ is a map such that $U(g_1)U(g_2)=\omega(g_1,g_2)U(g_1g_2)$ and $\omega:G\times G\rightarrow \mathbb{T}$ is a map which satisfies the equality \begin{equation*} \omega(g_1,g_2)\omega(g_1g_2,g_3)=\omega(g_1,g_2g_3)\omega(g_2,g_3). \end{equation*} A unitary multiplier representation $U$ of $G$ is strongly continuous if $U(g)v$ tends to $U(g_0)v$ for all $v\in{\mathcal H}$ whenever $g$ tends to $g_0$. \section{Extension of the \texorpdfstring{${\rm Diff}_+(S^1)$}{diffs1} representations to Sobolev diffeomorphisms}\label{extension} \subsection{Irreducible case} The purpose of this section is to extend the (positive energy projective) representation $U$ on ${\mathcal H}(c,h)$ of ${\rm Diff}_+(S^1)$ to ${\mathcal D}^s(S^1)$ with $s>3$. In the following, $s>3$ will always be assumed. An element $\gamma\in{\mathcal D}^s(S^1)$ acts on $f\in{\rm Vect}(S^1)$ via \eqref{eq:defgammaf}.
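To see that passing to ${\mathcal D}^s(S^1)$ is a genuine extension, note that it contains non-smooth elements. As an illustration, for $3<s<4$ one may take the lift
\[
\tilde\gamma(\theta)=\theta+\epsilon\sum_{n\geq 1}n^{-\frac{9}{2}}\sin(n\theta),
\]
with $\epsilon>0$ small enough that $\tilde\gamma^{\prime}>0$. Then $\tilde\gamma-\iota\in H^s(S^1)$ exactly for $s<4$, and $\tilde\gamma$ is of class $C^3$ (its third derivative is given by an absolutely convergent Fourier series) but not of class $C^4$, since otherwise $\tilde\gamma-\iota$ would belong to $H^4(S^1)$. The corresponding circle diffeomorphism thus lies in ${\mathcal D}^s(S^1)\setminus{\rm Diff}_+(S^1)$ for $3<s<4$.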
If $T$ is the stress-energy tensor associated with a positive energy unitary representation of the Virasoro algebra ${\rm Vir}$ with central charge $c$ and lowest weight $h$, we define a new class of operators \begin{align*} T^{\gamma}(f)\coloneqq T(\gamma_*f)-\beta(\gamma,f), \end{align*} where $f \in {\rm Vect}(S^1)$ and $\beta(\gamma,f)=\frac{c}{24\pi}\int_{S^1}\{\gamma,z\}izf(z)dz$, which makes sense for $\gamma \in {\mathcal D}^s(S^1)$ by Lemma \ref{lm:gamma32sobolev} and Proposition \ref{pr:nonsmooth}(\ref{pr:nonsmooth-def}). The fact that $\gamma_*f$ is in $\mathcal{S}_{\frac{3}{2}}(S^1)$ ensures that $T(\gamma_*f)$ is an essentially self-adjoint operator on $\mathcal{H}^\mathrm{fin}(c,h)$ and so is $T^{\gamma}(f)$ by Proposition \ref{pr:nonsmooth}(\ref{pr:nonsmooth-symmetry}). We denote its closure by the same symbol $T^\gamma(f)$, so long as no confusion arises. Note that, if $\gamma \in {\rm Diff}_+(S^1)$, then we have \begin{align}\label{eq:Tgammasmooth} T^{\gamma}(f) = {\hbox{\rm Ad\,}} U(\gamma)(T(f)). \end{align} Indeed, by definition $T^{\gamma}(f) = T(\gamma_*f)-\beta(\gamma,f)$ and by Proposition \ref{pr:covariance}, \eqref{eq:Tgammasmooth} holds on ${\mathscr{D}}(L_0)$, and both operators are essentially self-adjoint there, hence they must coincide. As they are unitarily implemented, the energy bound holds as well: \begin{align}\label{eq:smoothbound} \|T^{\gamma}(f)\xi\| \le r\|f\|_{\frac{3}{2}}\cdot \|(1+L_0^{\gamma})\xi\|, \end{align} where $L_0^{\gamma} := T^{\gamma}(1)$. We define for $\gamma_1,\gamma_2\in{\mathcal D}^s(S^1)$ \[ (T^{\gamma_1})^{\gamma_2}(f)\coloneqq T^{\gamma_1}((\gamma_2)_*f)-\beta(\gamma_2,f). \] \begin{proposition}\label{lm:composition} Let $\gamma_1,\gamma_2\in{\mathcal D}^s(S^1)$, $s>3$, and $f\in{\rm Vect}(S^1)$. Then $(T^{\gamma_1})^{\gamma_2}(f)=T^{\gamma_1\circ\gamma_2}(f)$.
\end{proposition} \begin{proof} Using the properties of the Schwarzian derivative \cite{OT05} \[ \left\{\gamma_1\circ\gamma_2,z\right\}=\left\{\gamma_1,\gamma_2(z)\right\}\left(\frac{d}{dz}\gamma_2(z)\right)^2+\left\{\gamma_2,z\right\} \] where $y=\gamma_2(z)$, we infer that \begin{align*} \beta(\gamma_1\circ\gamma_2,f)&=-\frac{c}{24\pi}\int_{0}^{2\pi}\left\{\gamma_1\circ\gamma_2,z\right\}\bigg\rvert_{z=e^{i\theta}}f(e^{i\theta})e^{i2\theta}d\theta \\ &=-\frac{c}{24\pi}\int_0^{2\pi}\left\{\gamma_1,y\right\}\bigg\rvert_{y=\gamma_2(e^{i\theta})}\left(\frac{d}{dz}\gamma_2(z)\right)^2\bigg\rvert_{z=e^{i\theta}}f(e^{i\theta})e^{i2\theta}d\theta\\ &\qquad-\frac{c}{24\pi}\int_0^{2\pi}\left\{\gamma_2,z\right\}\bigg\rvert_{z=e^{i\theta}}f(e^{i\theta})e^{i2\theta}d\theta \\ &=-\frac{c}{24\pi}\int_0^{2\pi}\left\{\gamma_1,y\right\}\bigg\rvert_{y=e^{i\varphi}}\cdot(-i)\frac{d}{d\theta}\left(\gamma_2(e^{i\theta})\right)\bigg\rvert_{e^{i\theta}=\gamma_2^{-1}(e^{i\varphi})}f(\gamma_2^{-1}(e^{i\varphi}))e^{i\varphi}d\varphi \\ &\qquad-\frac{c}{24\pi}\int_0^{2\pi}\left\{\gamma_2,z\right\}\bigg\rvert_{z=e^{i\theta}}f(e^{i\theta})e^{i2\theta}d\theta \\ &=-\frac{c}{24\pi}\int_0^{2\pi}\left\{\gamma_1,y\right\}\bigg\rvert_{y=e^{i\varphi}}\cdot(-i)e^{-i\varphi}\frac{d}{d\theta}\left(\gamma_2(e^{i\theta})\right)\bigg\rvert_{e^{i\theta}=\gamma_2^{-1}(e^{i\varphi})}f(\gamma_2^{-1}(e^{i\varphi}))e^{i2\varphi}d\varphi \\ &\qquad-\frac{c}{24\pi}\int_0^{2\pi}\left\{\gamma_2,z\right\}\bigg\rvert_{z=e^{i\theta}}f(e^{i\theta})e^{i2\theta}d\theta \\ &=\beta(\gamma_1,\gamma_{2_{*}}(f))+\beta(\gamma_2,f), \end{align*} where we used the change of variables $e^{i\varphi} = \gamma_2(e^{i\theta})$, hence $e^{i\theta}\frac{d\theta}{d\varphi}\frac{d\gamma_2}{dz}(e^{i\theta})|_{\gamma_2(e^{i\theta})=e^{i\varphi}}=e^{i\varphi}$, $\frac{d\gamma_2}{dz}(e^{i\theta})=-ie^{-i\theta}\frac{d}{d\theta}\gamma_2(e^{i\theta})$ and \eqref{eq:defgammaf}. 
So $(T^{\gamma_1})^{\gamma_2}(f)=T((\gamma_1)_*((\gamma_2)_*f))-\beta(\gamma_1,\gamma_{2*}f)-\beta(\gamma_2,f) = T((\gamma_1\circ \gamma_2)_*f)-\beta(\gamma_1\circ\gamma_2,f)=T^{\gamma_1\circ \gamma_2}(f)$. \end{proof} \begin{lemma}\label{lm:l0gammadomain} Let $s>3$. ${\mathscr{D}}(L_0)={\mathscr{D}}(L_0^{\gamma})$ for every $\gamma\in{\mathcal D}^s(S^1)$, where $L_0^{\gamma}\coloneqq T^{\gamma}(1)$ and here we denote by $1$ the constant function with the value $1$. \end{lemma} \begin{proof} By Lemma \ref{lem:localapprox} we can take a sequence $\{\gamma_n\}$ in ${\rm Diff}_+(S^1)$ convergent to $\gamma$ in the topology of ${\mathcal D}^s(S^1)$. We observe that $1 = \lim_n \gamma_{n*}(\gamma^{-1}_*(1))$ in the topology of $\mathcal{S}_{\frac32}(S^1)$. For $\xi\in {\mathscr{D}}(L_0)$ we know from Proposition \ref{pr:nonsmooth}(\ref{pr:nonsmooth-convergence}) and \eqref{eq:smoothbound} that \begin{align*} \|L_0\xi\|&=\lim_{n\to\infty}\|\left(T^{\gamma_n}(\gamma^{-1}_*(1))+\beta(\gamma_n,\gamma_*^{-1}(1))\right)\xi\| \\ &\leq \lim_{n\to\infty}\left( r\|\gamma^{-1}_{*}(1)\|_{\frac{3}{2}}\cdot \|(1+L_0^{\gamma_n})\xi\|+|\beta(\gamma_n,\gamma_*^{-1}(1))|\|\xi\|\right)\\ &= r\|\gamma^{-1}_{*}(1)\|_{\frac{3}{2}}\cdot \|(1+L_0^{\gamma})\xi\|+|\beta(\gamma,\gamma_*^{-1}(1))|\|\xi\|. \end{align*} Recall that ${\mathscr{D}}(L_0)\subset {\mathscr{D}}(L^\gamma_0)$ by Proposition \ref{pr:nonsmooth}(\ref{pr:nonsmooth-bound}) and that $L_0^{\gamma}$ is essentially self-adjoint on ${\mathscr{D}}(L_0)$. From the above inequality, we infer that any sequence $\xi_n \in {\mathscr{D}}(L_0)$ converging to $\xi \in {\mathscr{D}}(L_0^\gamma)$ in the graph norm of $L_0^\gamma$ is also convergent in the graph norm of $L_0$, and therefore, we have ${\mathscr{D}}(L_0^{\gamma})={\mathscr{D}}(L_0)$. \end{proof} \begin{proposition}[energy bounds for $T^\gamma$]\label{pr:energybound} Let $\gamma\in{\mathcal D}^s(S^1)$, $s>3$.
Then $$\Vert T^{\gamma}(f)\xi\Vert \leq r\Vert f\Vert_{\frac{3}{2}}\Vert(1+L_0^{\gamma})\xi\Vert$$ for all $\xi\in {\mathscr{D}}(L_0)$. \end{proposition} \begin{proof} Let $\{\gamma_n\}$ be a sequence of elements in ${\rm Diff}_+(S^1)$ converging to $\gamma\in{\mathcal D}^s(S^1)$ as in Lemma \ref{lem:localapprox}. By Proposition \ref{pr:nonsmooth}(\ref{pr:nonsmooth-convergence}) and \eqref{eq:smoothbound}, \begin{align*} \Vert T^{\gamma}(f)\xi\Vert &= \lim_{n\to\infty}\Vert T^{\gamma_n}(f)\xi\Vert\leq \lim_{n\to\infty}r\Vert f\Vert_{\frac{3}{2}}\Vert(1+L_0^{\gamma_n})\xi\Vert\\ &= r\Vert f\Vert_{\frac{3}{2}}\Vert(1+L_0^{\gamma})\xi\Vert, \end{align*} which is the desired inequality. \end{proof} \begin{theorem} Let $\gamma\in{\mathcal D}^s(S^1)$, $s>3$. $T^{\gamma}$ yields an irreducible unitary positive energy representation of ${\rm Vir}$ with central charge $c$ and lowest weight $h$ on ${\mathcal H}(c,h)$. \end{theorem} \begin{proof} We are going to prove the Virasoro relations on $C^\infty(L_0^\gamma)$. For this purpose, we have to control the action of various exponentiated operators. \paragraph{Computations on ${\mathscr{D}}(L_0)$.} We start by noting that $e^{iT^\gamma(g)} {\mathscr{D}}(L_0)\subset {\mathscr{D}}(L_0)$. Indeed, using \cite[Proposition 3.1]{FH05} we have, for $\xi\in {\mathscr{D}}(L_0)$ and $\gamma_n \in {\rm Diff}_+(S^1)$ as in Lemma \ref{lem:localapprox}, \[ L_0e^{iT^{\gamma_n}(g)}\xi = e^{iT^{\gamma_n}(g)} ( T((\gamma_n {\rm Exp}(-g) \gamma_n^{-1})_*(1)) - \beta(\gamma_n {\rm Exp}(-g) \gamma_n^{-1},1) )\xi, \] and the right-hand side converges as $n\rightarrow \infty$ by Proposition \ref{pr:nonsmooth}(\ref{pr:nonsmooth-convergence}). Therefore, since both $e^{iT^{\gamma_n}(g)}\xi$ and $L_0e^{iT^{\gamma_n}(g)}\xi$ are convergent, it follows that $e^{iT^{\gamma}(g)}\xi \in {\mathscr{D}}(L_0)$ and \[ L_0e^{iT^{\gamma}(g)}\xi=e^{iT^{\gamma}(g)}( T((\gamma {\rm Exp}(-g) \gamma^{-1})_*(1)) - \beta(\gamma {\rm Exp}(-g) \gamma^{-1},1) )\xi.
\] For vectors $\xi\in {\mathscr{D}}(L_0)$ and $\gamma_n \in {\rm Diff}_+(S^1)$, by Proposition \ref{pr:covariance} we have the operator equality \[ e^{iT^{\gamma_n}(g)}T^{\gamma_n}(f)e^{-iT^{\gamma_n}(g)}=T^{\gamma_n}({\rm Exp}(g)_* (f)) - \left(\frac{c}{24\pi}\int_{S^1}\{{\rm Exp}(g),z\}izf(z)dz\right), \] and we saw above that for $\xi \in {\mathscr{D}}(L_0)$ and $\gamma_n \in {\rm Diff}_+(S^1)$, it holds that $e^{-iT^{\gamma_n}(g)}\xi \in {\mathscr{D}}(L_0) \subset {\mathscr{D}}(T^{\gamma_n}(f))$, therefore, we have \[ e^{iT^{\gamma_n}(g)}T^{\gamma_n}(f)e^{-iT^{\gamma_n}(g)}\xi=T^{\gamma_n}({\rm Exp}(g)_* (f))\xi - \left(\frac{c}{24\pi}\int_{S^1}\{{\rm Exp}(g),z\}izf(z)dz\right) \xi. \] We apply to the operator equality the function $$h_k:\mathbb{R}\ni s\mapsto s\chi_{(-k,k)}(s),$$ where $\chi_{(-k,k)}$ is the characteristic function of the interval $(-k,k)\subset\mathbb{R}$. By bounded functional calculus, we obtain for any $\xi \in {\mathscr{D}}(L_0)$ \begin{align}\label{eq:nk} h_k(e^{iT^{\gamma_n}(g)}T^{\gamma_n}(f)e^{-iT^{\gamma_n}(g)})\xi&=e^{iT^{\gamma_n}(g)}h_k(T^{\gamma_n}(f))e^{-iT^{\gamma_n}(g)}\xi, \end{align} and the right-hand side tends to $e^{iT^{\gamma}(g)}h_k(T^{\gamma}(f))e^{-iT^{\gamma}(g)}\xi$ as $n\rightarrow\infty$, because we have convergence of $T^{\gamma_n}(f)$ to $T^{\gamma}(f)$ and $T^{\gamma_n}(g)$ to $T^{\gamma}(g)$ in the strong resolvent sense, and their bounded functional calculi $e^{iT^{\gamma_n}(g)}, h_k(T^{\gamma_n}(f))$ converge to $e^{iT^{\gamma}(g)}, h_k(T^{\gamma}(f))$, respectively.
On the other hand, the left-hand side of \eqref{eq:nk} can be rewritten as \[ h_k\left(T^{\gamma_n}({\rm Exp}(g)_* (f))-\frac{c}{24\pi}\int_{S^1}\{{\rm Exp}(g),z\}izf(z)dz\right) \xi \] and this converges to \[ h_k\left(T^{\gamma}({\rm Exp}(g)_* (f))-\frac{c}{24\pi}\int_{S^1}\{{\rm Exp}(g),z\}izf(z)dz\right) \xi \] as $n\rightarrow\infty$, again by the convergence of $\{T^{\gamma_n}({\rm Exp}(g)_*(f))\}$ in the strong resolvent sense and bounded functional calculus with $h_k$. Altogether, we know that the following equality holds: \[ e^{iT^{\gamma}(g)}h_k(T^{\gamma}(f))e^{-iT^{\gamma}(g)}\xi = h_k\left(T^{\gamma}({\rm Exp}(g)_* (f))-\frac{c}{24\pi}\int_{S^1}\{{\rm Exp}(g),z\}izf(z)dz\right) \xi. \] By taking the limit as $k\rightarrow\infty$, we get for every $\xi\in {\mathscr{D}}(L_0)$ \begin{align}\label{eq:commutationexp} e^{iT^{\gamma}(g)}T^{\gamma}(f)e^{-iT^{\gamma}(g)}\xi=T^{\gamma}({\rm Exp}(g)_* (f))\xi - \left(\frac{c}{24\pi}\int_{S^1}\{{\rm Exp}(g),z\}izf(z)dz\right) \xi. \end{align} Recall that ${\mathscr{D}}(L_0)={\mathscr{D}}(L^{\gamma}_0)$. We get in particular \begin{align}\label{eq:gammarotation} e^{itL_0^{\gamma}}T^{\gamma}(f)e^{-itL_0^{\gamma}}\xi = T^{\gamma}(f_t)\xi, \end{align} where $f_t(e^{i\theta}) = f(e^{i(\theta -t)})$. \paragraph{Computations on $C^\infty(L_0^\gamma)$.} The right-hand side of \eqref{eq:gammarotation} is differentiable with respect to $t$ when $\xi\in {\mathscr{D}}(L_0)$, since \[ \lim_{t\rightarrow 0}\frac{1}{t}(T^{\gamma}(f_t)-T^{\gamma}(f))\xi = \lim_{t\rightarrow 0} T^{\gamma}(\textstyle{\frac1 t}(f_t - f))\xi = T^{\gamma}(-f^{\prime})\xi = - T^{\gamma}(f^{\prime})\xi, \] by the continuity of $T^\gamma$ in the topology of $\mathcal{S}_{\frac32}(S^1)$ (Proposition \ref{pr:energybound}). Let us specialize it to $\xi\in C^\infty (L_0^\gamma) := \bigcap_n {\mathscr{D}}((L^\gamma_0)^n)$.
For the left-hand side of \eqref{eq:gammarotation}, we have \begin{align}\label{eq:rotationdiff} &\left.\frac{d}{dt}\right\vert_{t=0}e^{itL^{\gamma}_0}T^{\gamma}(f)e^{-itL_0^{\gamma}}\xi \nonumber \\ &=\lim_{t\rightarrow 0}\left(\frac{1}{t}\left(e^{itL^{\gamma}_0}T^{\gamma}(f)e^{-itL_0^{\gamma}}-e^{itL^{\gamma}_0}T^{\gamma}(f)\right)\xi +\frac{1}{t}\left(e^{itL^{\gamma}_{0}}T^{\gamma}(f)-T^{\gamma}(f)\right)\xi\right). \end{align} The first term converges to $-iT^{\gamma}(f)L_0^{\gamma}\xi$. Indeed, by Proposition \ref{pr:energybound}, \begin{align*} &\left\|\frac{1}{t}\left(e^{itL^{\gamma}_0}T^{\gamma}(f)e^{-itL_0^{\gamma}}-e^{itL^{\gamma}_0}T^{\gamma}(f)\right)\xi+ie^{itL^{\gamma}_0}T^{\gamma}(f)L_0^\gamma\xi\right\| \\ &=\left\|\frac{1}{t}\left(T^{\gamma}(f)e^{-itL_0^{\gamma}}-T^{\gamma}(f)\right)\xi+iT^{\gamma}(f)L_0^\gamma \xi\right \| \\ &\leq r\|f\|_{\frac{3}{2}}\left\|(1+L_0^\gamma)\left(\frac{e^{-itL_0^{\gamma}}-1}{t}+iL^{\gamma}_0\right)\xi\right\| \\ &= r\|f\|_{\frac{3}{2}}\left\|\left(\frac{e^{-itL_0^{\gamma}}-1}{t}+iL^{\gamma}_0\right)(1+L^{\gamma}_0)\xi\right\|. \end{align*} Since $\xi\in C^{\infty}(L_0^{\gamma})$, by Stone's theorem \cite[Theorem VIII.7(c)]{RSI} the above converges to 0 as $t\rightarrow 0$. Thus the limit exists also for the second term of \eqref{eq:rotationdiff}, and by applying Stone's theorem \cite[Theorem VIII.7(d)]{RSI}, we get $T^{{\gamma}}(f)\xi\in {\mathscr{D}}(L^{\gamma}_0)$, and the second term converges to $iL_0^\gamma T^{{\gamma}}(f)\xi$; in other words, $T^{{\gamma}}(f)C^\infty(L_0^{\gamma}) \subset {\mathscr{D}}(L^{\gamma}_0)$ (actually, we proved $T^{{\gamma}}(f){\mathscr{D}}((L_0^\gamma)^2) \subset {\mathscr{D}}(L^{\gamma}_0)$). Thus we have established the following commutation relation on $C^{\infty}(L^{\gamma}_0)$: \begin{align}\label{eq:commutation} [L^{\gamma}_0,T^{\gamma}(f)]\xi = iT^{\gamma}(f^{\prime})\xi.
\end{align} It follows that $C^\infty (L^{\gamma}_0)$ is an invariant domain for every $T^{\gamma}(f)$ with $f\in C^\infty(S^1,\mathbb{R})$. Indeed, for $T^{\gamma}(f)\xi$, with $\xi\in C^{\infty}(L_0^{\gamma})$ and $f\in C^{\infty} (S^1,\mathbb{R})$, \eqref{eq:commutation} is equivalent to \begin{align}\label{eq:commutation2} L^{\gamma}_0 T^{\gamma}(f)\xi= [L^{\gamma}_0,T^{\gamma}(f)]\xi + T^{\gamma}(f)L_0^{\gamma}\xi = iT^{\gamma}(f^{\prime})\xi+ T^{\gamma}(f)L_0^{\gamma}\xi. \end{align} Now we go by induction on $k$. Assume that $T^{\gamma}(f)\xi\in {\mathscr{D}}((L_0^\gamma)^k)$ for all $f\in C^{\infty}(S^1,\mathbb{R})$. It then follows from \eqref{eq:commutation2} that $L_0^{\gamma}T^{\gamma}(f)\xi\in {\mathscr{D}}((L_0^\gamma)^k)$, i.e. $T^\gamma(f)\xi\in {\mathscr{D}}((L_0^\gamma)^{k+1})$. We thus get the desired claim $T^{\gamma}(f)C^{\infty}(L^{\gamma}_0)\subset C^{\infty}(L^{\gamma}_0)$. \paragraph{The Virasoro relations.} Finally we show that the stress-energy tensor $T^{\gamma}$ indeed satisfies the Virasoro commutation relations. For $\xi \in C^\infty(L_0^\gamma)$, \begin{align}\label{eq:commutationdiff} &\left.\frac{d}{dt}\right\vert_{t=0}e^{itT^{\gamma}(g)}T^{\gamma}(f)e^{-it T^{\gamma}(g)}\xi \nonumber \\ &=\lim_{t\rightarrow 0}\left(\frac{1}{t}\left(e^{itT^{\gamma}(g)}T^{\gamma}(f)e^{-it T^{\gamma}(g)}-e^{itT^{\gamma}(g)}T^{\gamma}(f)\right) +\frac{1}{t}\left(e^{itT^{\gamma}(g)}T^{\gamma}(f)-T^{\gamma}(f)\right)\right)\xi. \end{align} As for the left-hand side, from \eqref{eq:commutationexp}, we obtain $(T^\gamma(g'f-gf') + c\omega(g,f))\xi$ by \eqref{eq:gelfandderivative}. Let us see the right-hand side of \eqref{eq:commutationdiff} term by term.
As for the first term, we have \begin{align}\label{eq:comm1} &\left\Vert\frac{1}{t}\left(e^{itT^{\gamma}(g)}T^{\gamma}(f)e^{-it T^{\gamma}(g)} - e^{itT^{\gamma}(g)}T^{\gamma}(f)\right)\xi + e^{itT^{\gamma}(g)}\cdot iT^{\gamma}(f)T^{\gamma}(g)\xi\right\Vert \nonumber \\ &=\left\Vert\frac{1}{t}\left(T^{\gamma}(f)e^{-it T^{\gamma}(g)}-T^{\gamma}(f)\right)\xi + iT^{\gamma}(f)T^{\gamma}(g)\xi\right\Vert \nonumber \\ &\le r\Vert f\Vert_{\frac{3}{2}}\left\Vert(1+L^{\gamma}_0)\frac{1}{t}\left(e^{-itT^{\gamma}(g)}-1\right)\xi + (1+L^{\gamma}_0)\cdot iT^{\gamma}(g)\xi\right\Vert \nonumber \\ &\le r\Vert f\Vert_{\frac{3}{2}}\left(\left\Vert \left(\frac{1}{t}\left(e^{-itT^{\gamma}(g)}-1\right)+ iT^{\gamma}(g)\right)\xi\right\Vert +\left\Vert\left( \frac{1}{t}L_0^\gamma\left(e^{-itT^{\gamma}(g)}-1\right) + iL_0^\gamma T^{\gamma}(g)\right)\xi\right\Vert\right). \end{align} The first term of \eqref{eq:comm1} goes to $0$ by Stone's theorem \cite[Theorem VIII.7(c)]{RSI}. The second term can be treated by \eqref{eq:commutationexp} and \eqref{eq:commutation} as follows: \begin{align*} &\left\Vert \frac{1}{t}L^{\gamma}_0(e^{-itT^{\gamma}(g)}-1)\xi + iL^{\gamma}_0 T^{\gamma}(g)\xi\right\Vert \\ &=\left\Vert\frac{1}{t}\left(e^{-itT^{\gamma}(g)}(T^{\gamma}({\rm Exp}(tg)_* (1))-\beta({\rm Exp}(tg),1))-L^{\gamma}_0\right)\xi + i(iT^{\gamma}(g^{\prime})+T^{\gamma}(g)L^{\gamma}_0)\xi\right\Vert\\ &\leq \left\Vert\frac{1}{t}(e^{-itT^{\gamma}(g)}T^{\gamma}({\rm Exp}(tg)_* (1))-e^{-itT^{\gamma}(g)}L_0^{\gamma})\xi-T^{\gamma}(g^{\prime})\xi\right\Vert \\ &\qquad\qquad+\left\Vert\frac{1}{t}(e^{-itT^{\gamma}(g)}L^{\gamma}_0-L^{\gamma}_0)\xi + iT^{\gamma}(g)L^{\gamma}_0\xi\right\Vert+\bigg\vert\frac{1}{t}\beta({\rm Exp}(tg),1)\bigg\vert\Vert\xi\Vert. \end{align*} Each term can be seen to converge to $0$: the first term is handled by noting that $L_0^\gamma = T^\gamma(1)$, continuity of $T^\gamma$ (Proposition \ref{pr:energybound}), $[g,1] = g'$ and unitarity of $e^{-itT^{\gamma}(g)}$.
The second term vanishes by using Stone's theorem. The last term also converges to zero by \eqref{eq:gelfandderivative} and using the fact that $\omega(g,1)=0$. To summarize, the first term of the right-hand side of \eqref{eq:commutationdiff} tends to $-iT^\gamma(f)T^\gamma(g)$. The second term of \eqref{eq:commutationdiff} is equal to $iT^{\gamma}(g)T^{\gamma}(f)$. Indeed, since $C^{\infty}(L^{\gamma}_0)$ is invariant under the action of $T^{\gamma}(f)$, this follows by Stone's theorem. Altogether, we obtained the equality $i[T^\gamma(g),T^\gamma(f)] = T^\gamma(g'f-gf') + c\omega(g,f)$ on $C^\infty(L_0^\gamma)$, which is the Virasoro commutation relation. Note that so far we have only used that $T$ is a positive energy representation of the Virasoro algebra with central charge $c$ and diagonalizable $L_0$, but not irreducibility. Therefore, one can iterate our construction for another element in ${\mathcal D}^s(S^1)$. In particular, by taking $\gamma^{-1}$, we obtain by Proposition \ref{lm:composition} \begin{equation}\label{eq:gammagamma-1} (T^\gamma)^{\gamma^{-1}}(f)= T(f). \end{equation} We claim that the new representation $T^\gamma$ is irreducible and has the same lowest weight $h$. Indeed, by \eqref{eq:gammagamma-1}, one can approximate $T(f)$ by $T^\gamma(\gamma^{-1}_{n*}f)+\beta(\gamma,(\gamma_n^{-1})_*(f))$ in the strong resolvent sense, where $\{\gamma_n\} \subset {\rm Diff}_+(S^1)$ and $\gamma_n \to \gamma$ in the topology of ${\mathcal D}^s(S^1)$. As $\{e^{iT(f)}: f\in{\rm Vect}(S^1)\}$ generates ${\mathcal B}({\mathcal H}(c,h))$, so does $\{e^{iT^\gamma(f)}: f\in{\rm Vect}(S^1)\}$, and this shows that $T^\gamma$ is an irreducible representation of the Virasoro algebra.
Furthermore, the new conformal Hamiltonian $L^{\gamma}_0=T^{\gamma}(1)$ has spectrum which is a subset of the spectrum of the old conformal Hamiltonian $L_0$ since it is obtained as a limit in the strong resolvent sense of $\{{\hbox{\rm Ad\,}} U(\gamma_n)(L_0)\}$ with the same spectrum \cite[Theorem VIII.24(a)]{RSI}. Again by iteration, we have \[ {\rm sp}\, L_0 = {\rm sp}\,(T^\gamma)^{\gamma^{-1}}(1)\subset {\rm sp}\, L^\gamma_0 = {\rm sp}\, T^\gamma(1) \subset {\rm sp}\, L_0, \] therefore, all these sets must coincide. In particular, $h$ is the lowest eigenvalue of $L^\gamma_0$. \end{proof} As $T$ and $T^\gamma$ are equivalent as irreducible representations of ${\rm Vect}(S^1)$ and thus of the Virasoro algebra, there is a unitary intertwiner $U(\gamma)$, defined up to a scalar such that $U(\gamma)T(f)=T^{\gamma}(f)U(\gamma)$. \begin{corollary}\label{cr:projective} The map $\gamma\mapsto U(\gamma)$ where $\gamma\in{\mathcal D}^s(S^1)$, $s>3$, is a unitary projective representation of ${\mathcal D}^s(S^1)$, i.e.\! $U(\gamma_1\circ \gamma_2)=U(\gamma_1)U(\gamma_2)$ up to a phase factor. \end{corollary} \begin{proof} We know that for $\gamma_1,\gamma_2\in{\mathcal D}^s(S^1)$ \begin{align*} U(\gamma_1)T(f)&=T^{\gamma_1}(f)U(\gamma_1),\\ U(\gamma_2)T(f)&=T^{\gamma_2}(f)U(\gamma_2) \end{align*} hold for every $f\in{\rm Vect}(S^1)$. So \begin{align*} U(\gamma_1)U(\gamma_2)T(f)&=U(\gamma_1)T^{\gamma_2}(f)U(\gamma_2)=U(\gamma_1)(T(\gamma_{2*}f)-\beta(\gamma_2,f))U(\gamma_2)=\\ &=(T^{\gamma_1}(\gamma_{2*}f)-\beta(\gamma_2,f))U(\gamma_1)U(\gamma_2)=\\ &=(T((\gamma_1\circ \gamma_2)_*f)-\beta(\gamma_1,\gamma_{2*}f)-\beta(\gamma_2,f))U(\gamma_1)U(\gamma_2). \end{align*} Consequently by the computations of Proposition \ref{lm:composition} \[ U(\gamma_1)U(\gamma_2)T(f)=T^{\gamma_1\circ \gamma_2}(f)U(\gamma_1)U(\gamma_2), \] therefore, $U(\gamma_1\circ \gamma_2)=U(\gamma_1)U(\gamma_2)$ up to a phase because we are dealing with irreducible representations of the Virasoro algebra. 
\end{proof} \begin{corollary}\label{cr:continuityB(H)} Let $U=U_{(c,h)}$ be the irreducible unitary projective representation of ${\rm Diff}_+(S^1)$ with central charge $c$ and lowest weight $h$. Then $U$ extends to a strongly continuous irreducible unitary projective representation of ${\mathcal D}^s(S^1)$, $s>3$. \end{corollary} \begin{proof} The only thing that remains to be proven is continuity, namely that the action $\alpha:{\mathcal D}^s(S^1)\rightarrow {\hbox{Aut}}({\mathcal B}({\mathcal H}(c,h)))$, $\gamma\mapsto {\hbox{\rm Ad\,}} U(\gamma)$ is pointwise continuous in the strong operator topology of ${\mathcal B}({\mathcal H}(c,h))$. Let $\lbrace\gamma_n\rbrace\subset{\rm Diff}_+(S^1)$, $\gamma\in{\mathcal D}^s(S^1)$ with $\gamma_n\rightarrow \gamma$ in the topology of ${\mathcal D}^s(S^1)$. Then \[ \lim_{n\rightarrow \infty}U(\gamma_n)e^{itT(f)}U(\gamma_n)^{*}=\lim_{n\rightarrow\infty}e^{itT^{\gamma_n}(f)}=e^{itT^{\gamma}(f)} \] where the limit is meant in the strong topology. By taking $f=1$, we obtain the convergence of $L_0^{\gamma_n}$ to $L_0^\gamma$ in the strong resolvent sense. As they are in the $(c,h)$-representation of the Virasoro algebra, the lowest eigenprojections $E_0, E_0^\gamma$ are one-dimensional, and it holds that $\lim_{n\to \infty}{\hbox{\rm Ad\,}} U(\gamma_n)(E_0) = E_0^\gamma$. Let $\Omega, \Omega^\gamma$ be the lowest eigenvectors. By fixing the scalars, we may assume that $\Omega^{\gamma_n} := U(\gamma_n)\Omega \to \Omega^\gamma$, see the arguments of Section \ref{projective}. With this $U(\gamma_n)$ with fixed phase, the sequence \[ U(\gamma_n)e^{iT(f_1)}\cdots e^{iT(f_k)}\Omega = e^{iT^{\gamma_n}(f_1)}\cdots e^{iT^{\gamma_n}(f_k)}\Omega^{\gamma_n} \] is convergent to $e^{iT^{\gamma}(f_1)}\cdots e^{iT^{\gamma}(f_k)}\Omega^{\gamma}$, because all the operators $e^{iT^{\gamma_n}(f_1)},\cdots, e^{iT^{\gamma_n}(f_k)}$ are uniformly bounded and convergent in the strong operator topology. 
Since vectors of the form $e^{iT(f_1)}\cdots e^{iT(f_k)}\Omega$ span a dense subspace of the whole Hilbert space ${\mathcal H}(c,h)$ and the operators $U(\gamma_n)$ are uniformly bounded, we obtain the convergence of $U(\gamma_n)$ to $U(\gamma)$ in the strong operator topology. The claimed continuity follows from this, because for any $x\in {\mathcal B}({\mathcal H})$, ${\hbox{\rm Ad\,}} U(\gamma_n)(x)$ is convergent in the strong operator topology, again because $U(\gamma_n)$ is uniformly bounded. \end{proof} \begin{corollary}\label{cr:diff4} Let $U=U_{(c,h)}$ be the irreducible unitary projective representation of ${\rm Diff}_+(S^1)$ with central charge $c$ and lowest weight $h$. Then $U$ extends to a strongly continuous irreducible unitary projective representation of ${\rm Diff}_+^k(S^1)$ with $k\geq4$. \end{corollary} \begin{proof} This is an immediate corollary of the continuous embedding ${\rm Diff}_+^k(S^1) \hookrightarrow {\mathcal D}^s(S^1)$, $s \le k$. \end{proof} \begin{remark}\label{remarkGW} Our argument for the construction of projective representations of ${\mathcal D}^s(S^1)$ can be used to simplify the proof of the integrability of the irreducible unitary positive energy representations of the Virasoro algebra to strongly continuous projective unitary representations of ${\rm Diff}_+(S^1)$. Such a proof was first given by realizing them in the oscillator algebra \cite[Section 3, Theorem 4.2]{GW85}. One can do it now only within the Virasoro algebra as follows. Besides the energy-bounds ({\it a priori estimates}) in \cite[Section 2]{GW85}, see also \cite{BS90}, which are used in \cite{CW05} and are crucial to our proof, we also used \eqref{eq:Tgammasmooth} coming from \cite{GW85}. More precisely, we used the fact that for every $\gamma \in {\rm Diff}_+(S^1)$ there is a unitary operator $U(\gamma)$ such that $U(\gamma)T(f)U(\gamma)^* = T^\gamma(f)$ for all $f \in {\rm Vect}(S^1)$ and $U(\gamma){\mathscr{D}}(L_0) = {\mathscr{D}}(L_0)$.
This can be proved directly following the strategy on pages 1100-1101 of \cite{CKL08}, see also the proof of \cite[Proposition 6.4]{CKLW18}. One only needs some of the direct consequences of the energy bounds proved in \cite[Section 2]{Toledano-Laredo99-1}. We outline the arguments here: \begin{itemize} \item Since ${\rm Diff}_+(S^1)$ is simple \cite[Remark 1.7]{Milnor84}, it is generated by exponentials, because the subgroup generated by exponentials is a normal subgroup. \item By the proof of Corollary \ref{cr:projective}, the set of $\gamma$ such that a unitary $U(\gamma)$ with the required properties exists forms a subgroup of ${\rm Diff}_+(S^1)$. Hence, it is enough to consider the special case where $\gamma = {\rm Exp}(g)$ for $g \in {\rm Vect}(S^1)$. \item It follows from the {\it linear energy-bounds} by \cite[Proposition 2.1]{Toledano-Laredo99-1} that $e^{itT(g)}{\mathscr{D}}(L_0^k) = {\mathscr{D}}(L_0^k)$ for all positive integers $k$ and all $t \in \mathbb{R}$. As a consequence $e^{itT(g)} C^\infty(L_0) = C^\infty(L_0)$ for all $t \in \mathbb{R}$. \item Now, let $\xi \in C^\infty(L_0)$ and let $\xi(t) = T^{{\rm Exp}(tg)}(f)e^{itT(g)}\xi$. By \cite[Corollary 2.2]{Toledano-Laredo99-1} we have $\frac{d}{dt} e^{itT(g)}\xi = i e^{itT(g)} T(g)\xi$ in the graph topology of ${\mathscr{D}}(L_0^k)$ for all positive integers $k$. It then follows from the energy bounds that $\frac{d}{dt}\xi(t) = i T(g)\xi(t)$. Hence, $\xi(t) = e^{itT(g)}T(f)\xi$ for all $\xi \in C^\infty(L_0)$, so that $T^{{\rm Exp}(tg)}(f) = e^{itT(g)}T(f) e^{-it T(g)}$, which is the required relation. Continuity of $U$ follows as in Corollary \ref{cr:continuityB(H)}. \end{itemize} \end{remark} \subsection{Direct sum of irreducible representations} Here we prove that every positive energy projective unitary representation of ${\rm Diff}_+(S^1)$ extends to a unitary projective representation of ${\mathcal D}^s(S^1)$ for $s>3$.
A similar result holds for the universal covering groups provided that the representation is assumed to be a direct sum of irreducibles. This is not an immediate consequence of Corollary \ref{cr:continuityB(H)}, because, in general, the direct sum of projective representations does not make sense: ${\mathcal U}({\mathcal H}_j)/{\mathbb C}$ is not a linear space. On the other hand, if we have {\bf multiplier representations} of a group $G$ with the same cocycle, $U_j(g_1)U_j(g_2) = \omega(g_1,g_2)U_j(g_1 g_2)$, where $\omega$ is a $2$-cocycle of $G$ defining a class in $H^2(G,{\mathbb C})$, then the direct sum $\bigoplus_j U_j(g)$ is again a multiplier representation with the same cocycle $\omega$. If we are interested in a projective representation of a certain quotient $G/H$ by a normal subgroup $H$, we have to make sure that the direct sum $\bigoplus U_j(h)$ reduces to a scalar when $h \in H$. \paragraph{Continuous fragmentation of $\widetilde{{\mathcal D}^s(S^1)}$.} Let $I$ be a proper open interval of $S^1$ and $I^\prime = (S^1\setminus I)^{\circ}$ the interior of its complement. We denote by $\overline{I}$ the closure of $I$. ${\rm Diff}_+(I)$ (resp.\! ${\mathcal D}^s(I)$) denotes the subgroup of ${\rm Diff}_+(S^1)$ (resp.\! ${\mathcal D}^s(S^1)$) consisting of the elements $\gamma$ such that $\gamma(x)=x$ for $x\in I^\prime$. We also say that $\gamma\in {\rm Diff}_+(I)$ (resp.\! $\gamma\in{\mathcal D}^s(I)$) is supported in $I$. Let $\{I_j\}_{j=1,2,3}$ be a cover of the unit circle as in Fig.\! \ref{fig:intervals}. Let us name the end points of the intervals: $I_k = (a_k, b_k)$. We also take a slightly smaller interval $\hat I_k = (\hat a_k, \hat b_k) \subset I_k$ such that $\{\hat I_k\}$ still constitutes a cover of $S^1$, and points $\breve a_1, \breve b_1$ with $a_1 < \breve a_1 < \hat a_1$ and $\hat b_1 < \breve b_1 < b_1$, c.f.\! \cite{DFK04}. Furthermore, we take $\hat b_2, \check b_2$ such that $\hat a_1 < \hat b_2 < \check b_2 < b_2$.
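The direct-sum mechanism for multiplier representations recalled above can be illustrated in a finite-dimensional toy case (a hypothetical example, not taken from the Virasoro setting): the clock and shift matrices furnish a multiplier representation of $\mathbb{Z}_n\times\mathbb{Z}_n$ with a nontrivial $2$-cocycle, and the direct sum of two copies satisfies the same relation with the same cocycle.

```python
import numpy as np

n = 5
w = np.exp(2j * np.pi / n)

# Clock and shift matrices: C e_k = w^k e_k,  S e_k = e_{k+1 mod n}.
C = np.diag(w ** np.arange(n))
S = np.roll(np.eye(n), 1, axis=0)

def U(g):
    """Multiplier representation of Z_n x Z_n:  U(a, b) = S^a C^b."""
    a, b = g
    return np.linalg.matrix_power(S, a % n) @ np.linalg.matrix_power(C, b % n)

def omega(g1, g2):
    """The 2-cocycle: U(g1) U(g2) = omega(g1, g2) U(g1 + g2); here omega = w^(b1*a2)."""
    return w ** (g1[1] * g2[0])

def Usum(g):
    """Direct sum of two copies of U: again a multiplier representation, same cocycle."""
    Z = np.zeros((n, n))
    return np.block([[U(g), Z], [Z, U(g)]])

rng = np.random.default_rng(0)
for _ in range(20):
    g1, g2 = tuple(rng.integers(0, n, 2)), tuple(rng.integers(0, n, 2))
    g12 = ((g1[0] + g2[0]) % n, (g1[1] + g2[1]) % n)
    assert np.allclose(U(g1) @ U(g2), omega(g1, g2) * U(g12))
    assert np.allclose(Usum(g1) @ Usum(g2), omega(g1, g2) * Usum(g12))
print("multiplier relation and common-cocycle direct sum verified")
```

The cocycle here is nontrivial ($SC \neq CS$), yet the direct sum verifies the identical relation with the identical $\omega$; this is precisely the point exploited below for the family $\{U_j\}$.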
\begin{figure}[ht] \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-2.43,-0.61) rectangle (6.15,5.35); \draw(1.4,2.42) circle (2cm); \draw [shift={(1.4,2.42)}] plot[domain=1.57:3.78,variable=\tau]({1*2.2*cos(\tau r)+0*2.2*sin(\tau r)},{0*2.2*cos(\tau r)+1*2.2*sin(\tau r)}); \draw [shift={(1.4,2.42)}] plot[domain=3.62:5.79,variable=\tau]({1*2.39*cos(\tau r)+0*2.39*sin(\tau r)},{0*2.39*cos(\tau r)+1*2.39*sin(\tau r)}); \draw [shift={(1.4,2.42)}] plot[domain=-0.58:1.68,variable=\tau]({1*2.65*cos(\tau r)+0*2.65*sin(\tau r)},{0*2.65*cos(\tau r)+1*2.65*sin(\tau r)}); \draw (1.07,5.16)-- (1.09,4.95); \draw (3.53,1.05)-- (3.73,0.91); \draw (3.41,1.37)-- (3.61,1.24); \draw (1.41,4.73)-- (1.41,4.53); \draw (-0.45,1.05)-- (-0.27,1.17); \draw (-0.82,1.27)-- (-0.63,1.39); \draw (3.69,4.73) node[anchor=north west] {$I_1$}; \draw (1.27,0.05) node[anchor=north west] {$I_2$}; \draw (-1.17,4.25) node[anchor=north west] {$I_3$}; \end{tikzpicture} \begin{tikzpicture}[path fading=north,scale=0.5] \draw [thick] (-6,0) --(6,0); \draw [thick,dotted] (-6,0) --(-7,0); \draw [thick,dotted] (6,0) --(7,0); \node at(0,1.8) {$I_1$}; \draw [] (-2.5,-0.5) node{$($}--(2.5,-0.5)node{$)$}; \node at(0,-1.3) {$\hat I_1$}; \draw [] (-4.5,1.3) node{$($}--(4.5,1.3)node{$)$}; \node at(-4.5,0.5) {$a_1$}; \fill (-4.5,0) circle[radius=3pt]; \node at(4.5,0.55) {$b_1$}; \fill (4.5,0) circle[radius=3pt]; \node at(-3.5,0.6) {$\breve a_1$}; \fill (-3.5,0) circle[radius=3pt]; \node at(-2.7,0.6) {$\hat a_1$}; \fill (-2.5,0) circle[radius=3pt]; \node at(2.7,0.65) {$\hat b_1$}; \fill (2.5,0) circle[radius=3pt]; \node at(3.5,0.65) {$\breve b_1$}; \fill (3.5,0) circle[radius=3pt]; \draw [dotted] (-6,-2) --(-7,-2); \node at(-4,-3) {$I_2$}; \node at(-2,0.6) {$b_2$}; \fill (-2,0) circle[radius=3pt]; \draw [] (6,-2) --(2,-2)node{$($}; \draw [] (-6,-2) --(-2,-2)node{$)$}; \draw [dotted] (6,-2) --(7,-2); \node at(4,-3) {$I_3$}; \node at(2,0.5) {$a_3$}; \fill (2,0) 
circle[radius=3pt]; \node at(0,-6) {}; \end{tikzpicture} \caption{The covering of the unit circle.} \label{fig:intervals} \end{figure} Any given diffeomorphism $\gamma$ can be written as a product of elements supported in $I_k$. This is known as fragmentation (see \cite{Mann15} and references therein). We need a slightly refined version of it, namely, if $\gamma$ is in a small neighborhood ${\mathcal V}$ of the unit element $\iota$, then we can take the fragments $\gamma_k$ also in a small, but larger neighborhood $\hat {\mathcal V}$. The precise statement is the following. \begin{lemma}\label{lm:fragmentation} Let $s>3/2$. There is a neighborhood ${\mathcal V}$ of the unit element $\iota$ of $\widetilde{{\mathcal D}^s(S^1)}$ and continuous localizing maps $\chi_k: {\mathcal V} \to \widetilde{{\mathcal D}^s(I_k)}$ with \[ \gamma = \chi_1(\gamma)\chi_2(\gamma)\chi_3(\gamma) \] and $\chi_k(\iota) = \iota$, ${\rm supp\,} \chi_k(\gamma) \subset I_k$, where ${\rm supp\,} \gamma := \overline{\{\theta \in S^1: \gamma(\theta) \neq \theta\}}$. If ${\rm supp\,} \gamma \subset \breve I_k\cup \breve I_{k+1}$, then $\chi_{k+2}(\gamma) = \iota$, where $k = 1,2,3 \mod 3$. \end{lemma} \begin{proof} We may assume without loss of generality that $0 < a_1 < \breve a_1 < \hat a_1 < b_2 < a_3 < \hat b_1 < \breve b_1 < b_1 < 2\pi$ (see Figure \ref{fig:intervals}). Let us take a smooth $2\pi$-periodic function $D_{\mathrm{c},1}$ with $D_{\mathrm{c},1}(t)=1$ for $t\in \hat I_1 = [\hat a_1, \hat b_1]$ and $D_{\mathrm{c},1}(t)=0$ for $t\in [0,\breve a_1]\cup [\breve b_1,2\pi]$ and $0 \le D_{\mathrm{c},1}(t) \le 1$ everywhere. Let $0 \le D_{\mathrm{l},1}(t) \le 1$ be another smooth $2\pi$-periodic function with support in $(a_1,\breve a_1)$ and with $\int_0^{2\pi}D_{\mathrm{l},1}(t)dt = \int_{a_1}^{\breve a_1}D_{\mathrm{l},1}(t)dt= \frac12(\breve a_1-a_1)$ (which is possible because the prescribed integral $\frac12(\breve a_1-a_1)$ is smaller than the length $\breve a_1-a_1$ of the interval $(a_1,\breve a_1)$).
Similarly, let $0 \le D_{\mathrm{r},1}(t) \le 1$ be a smooth $2\pi$-periodic function with support in $(\breve b_1,b_1)$ and with $\int_0^{2\pi}D_{\mathrm{r},1}(t)dt=\frac12(b_1 - \breve b_1)$. We consider the following neighborhood of the unit element of $\widetilde{{\mathcal D}^s(S^1)}$: \[ {\mathcal V}_{\varepsilon}\coloneqq \left\{\gamma \in \widetilde{{\mathcal D}^s(S^1)}: |\gamma(\theta)-\iota(\theta)|<\varepsilon, |\gamma^{\prime}(\theta)-1|<\varepsilon \;\text{ for }\theta\in[0,2\pi]\right\}. \] Note that ${\mathcal V}_\varepsilon$ is open since $s>3/2$, by the Sobolev-Morrey embedding theorem. Suppose $\gamma \in {\mathcal V}_{\varepsilon}$. We set \begin{align*} M &:= \max\left\{D_{\mathrm{c}, 1}(t) : t \in[0,2\pi]\right\} \end{align*} and define the constant $\alpha_1(\gamma)$ by \begin{align}\label{eq:alpha} \alpha_1(\gamma) = \frac2{\breve a_1 - a_1}\left(\gamma(\hat a_1)-\hat a_1 - \int_0^{\hat a_1} (\gamma^{\prime}(t)-1)D_{\mathrm{c},1}(t)dt\right). \end{align} It follows that \begin{equation}\label{estalpha} |\alpha_1(\gamma)|\leq \frac {2}{|\breve a_1 - a_1|} \varepsilon (1+\hat{a}_1M) \end{equation} by the definition of ${\mathcal V}_{\varepsilon}$ and \[ \gamma(\hat a_1)=\int_0^{\hat a_1} ((\gamma^{\prime}(t)-1)D_{\mathrm{c},1}(t)+1+\alpha_1(\gamma)D_{\mathrm{l},1}(t))dt.
\] Similarly, set the constant $\beta_1(\gamma)$ by \begin{align}\label{eq:beta} \beta_1(\gamma) &\;\;= \frac{-2}{b_1 - \breve b_1}\left(\int_0^{2\pi} ((\gamma^{\prime}(t)-1)D_{\mathrm{c},1} (t)+\alpha_1(\gamma)D_{\mathrm{l},1}(t))dt\right) \\ &\left( =\frac2{b_1 - \breve b_1}\left(\hat b_1 - \gamma(\hat b_1) - \int_{\hat b_1}^{b_1} (\gamma^{\prime}(t)-1)D_{\mathrm{c},1} (t)\right) \right), \nonumber \end{align} then it follows that \begin{align}\label{estbeta} |\beta_1(\gamma)|\leq \frac {2}{|b_1 - \breve b_1|} \varepsilon (|\hat{b}_1-b_1| M+1) \end{align} and \begin{align*} b_1 = \int_0^{b_1} ((\gamma^{\prime}(t)-1)D_{\mathrm{c},1} (t)+1+\alpha_1(\gamma)D_{\mathrm{l},1}(t)+\beta_1(\gamma)D_{\mathrm{r},1}(t))dt. \end{align*} Now, the function \begin{align}\label{eq:gamma1} \gamma_1(\theta)=\int_0^\theta((\gamma^\prime (t)-1)D_{\mathrm{c},1}(t)+1+\alpha_1(\gamma)D_{\mathrm{l},1}(t)+\beta_1(\gamma)D_{\mathrm{r},1}(t))dt \end{align} is $2\pi$-periodic, the first derivative \begin{align*} \gamma'_1(\theta)= (\gamma^\prime (\theta)-1)D_{\mathrm{c},1}(\theta)+1+\alpha_1(\gamma)D_{\mathrm{l},1}(\theta)+\beta_1(\gamma)D_{\mathrm{r},1}(\theta) \end{align*} is positive by \eqref{estalpha}, \eqref{estbeta} if $\varepsilon$ is taken sufficiently small, and $\gamma_1^{\prime}-1\in H^{s-1}(S^1)$ (by Lemma \ref{lm:sobolevalgebra}, using that $\gamma-\iota\in H^s$), therefore, $\gamma_1$ can be regarded as an element in $\widetilde{{\mathcal D}^s(S^1)}$. It also has the desired properties, namely $\gamma_1(\theta)=\theta$ for $\theta\in I_1'$ and $\gamma_1(\theta)=\gamma(\theta)$ for $\theta\in \hat I_1$. Note that the assignment ${\mathcal V}_{\varepsilon}\rightarrow \widetilde{{\mathcal D}^s(S^1)}$, $\gamma\rightarrow\gamma_1$ is continuous by \eqref{eq:gamma1}, \eqref{eq:alpha}, \eqref{eq:beta} and Lemma \ref{lm:derivative}. We choose $\varepsilon$ such that $\gamma_1^{\prime}$ is positive for $\gamma\in{\mathcal V}_\varepsilon$.
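As a numerical sanity check (not part of the proof), the construction of $\gamma_1$ via \eqref{eq:alpha}, \eqref{eq:beta} and \eqref{eq:gamma1} can be tested with concrete, hypothetical choices: Hann-window profiles for $D_{\mathrm{l},1}$ and $D_{\mathrm{r},1}$ (these have exactly the prescribed integrals, though they are only $C^1$ rather than smooth), cosine ramps for $D_{\mathrm{c},1}$, and the diffeomorphism $\gamma(\theta)=\theta+0.05\sin\theta$. The sketch confirms that $\gamma_1$ is the identity outside $I_1$ and agrees with $\gamma$ on $\hat I_1$.

```python
import numpy as np

# Hypothetical interval data: 0 < a1 < a1_brv < a1_hat < b1_hat < b1_brv < b1 < 2*pi
a1, a1_brv, a1_hat, b1_hat, b1_brv, b1 = 0.5, 0.8, 1.1, 2.6, 2.9, 3.2

def hann(t, lo, hi):
    """Bump on (lo, hi) with 0 <= hann <= 1 and integral exactly (hi - lo)/2."""
    s = np.clip((t - lo) / (hi - lo), 0.0, 1.0)
    return np.where((t > lo) & (t < hi), (1 - np.cos(2 * np.pi * s)) / 2, 0.0)

def ramp(t, lo, hi):
    """Monotone transition from 0 (for t <= lo) to 1 (for t >= hi)."""
    s = np.clip((t - lo) / (hi - lo), 0.0, 1.0)
    return (1 - np.cos(np.pi * s)) / 2

t = np.linspace(0.0, 2 * np.pi, 200001)
D_c = ramp(t, a1_brv, a1_hat) * (1 - ramp(t, b1_hat, b1_brv))  # 1 on [a1_hat, b1_hat], 0 off (a1_brv, b1_brv)
D_l = hann(t, a1, a1_brv)      # integral (a1_brv - a1)/2, as required of D_{l,1}
D_r = hann(t, b1_brv, b1)      # integral (b1 - b1_brv)/2, as required of D_{r,1}

gamma = lambda x: x + 0.05 * np.sin(x)     # the test diffeomorphism
dgamma = lambda x: 1 + 0.05 * np.cos(x)

def cumtrapz(y, x):
    """Cumulative trapezoidal integral, starting at 0."""
    return np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(x))])

F = cumtrapz((dgamma(t) - 1) * D_c, t)     # theta -> int_0^theta (gamma' - 1) D_c dt
alpha1 = 2 / (a1_brv - a1) * (gamma(a1_hat) - a1_hat - np.interp(a1_hat, t, F))
beta1 = -2 / (b1 - b1_brv) * (F[-1] + alpha1 * (a1_brv - a1) / 2)
gamma1 = cumtrapz((dgamma(t) - 1) * D_c + 1 + alpha1 * D_l + beta1 * D_r, t)

outside = (t <= a1) | (t >= b1)            # gamma1 should be the identity here
inner = (t >= a1_hat) & (t <= b1_hat)      # ... and should agree with gamma here
print("max |gamma1 - id| outside I1:", np.abs(gamma1 - t)[outside].max())
print("max |gamma1 - gamma| on hat I1:", np.abs(gamma1 - gamma(t))[inner].max())
```

Both deviations are at the level of the quadrature error, and $\gamma_1'$ stays positive throughout, so $\gamma_1$ is indeed an orientation-preserving fragment supported in $I_1$.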
Now the assignment ${\mathcal V}_{\varepsilon}\rightarrow \widetilde{{\mathcal D}^s(S^1)}$, $\gamma\rightarrow\gamma\g_1^{-1}$ is continuous by Lemma \ref{lm:sobolevgroup}. We take ${\mathcal V} \subset {\mathcal V}_\varepsilon$ to be a neighborhood of the identity of $\widetilde{{\mathcal D}^s(S^1)}$ such that for $\gamma\in{\mathcal V}$ we have $\gamma\g_1^{-1}\in{\mathcal V}_{\varepsilon_1}$, where $\varepsilon_1$ is small enough that we obtain $\gamma_2\in\widetilde{{\mathcal D}^s(S^1)}$ (in particular $\gamma_2^{\prime}$ is positive) if we do an analogous construction on $I_2$ for $\gamma \gamma_1^{-1}$. For $\gamma\in{\mathcal V}$ we set $\chi_1(\gamma) = \gamma_1$. The continuity of the map $\chi_1$ in the topology of $\widetilde{{\mathcal D}^s(S^1)}$ is clear from \eqref{eq:gamma1}, \eqref{eq:alpha} and \eqref{eq:beta}. Next we construct $\chi_2(\gamma)$. By construction $(\gamma\g_1^{-1})(\theta) = \theta$ for $\theta \in \hat I_1$, therefore, ${\rm supp\,} \gamma\g_1^{-1} \subset I_2 \cup I_3$. We can apply an analogous construction to $I_2$ and $\gamma\gamma_1^{-1}$ to obtain $\gamma_2$ such that ${\rm supp\,} \gamma_2 \subset \hat I_2$ and $\gamma_2(\theta) = (\gamma\g_1^{-1})(\theta)$ for $\theta \in \hat I_2$. In this way we obtain the continuous map $\chi_2(\gamma) := \gamma_2$. Furthermore, by our choice $\hat a_1 < \hat b_2 < \check b_2 < b_2$, we have $\gamma_2(\theta) = (\gamma\g_1^{-1})(\theta)$ also for $\theta \in \hat I_1$, where both sides are equal to $\theta$, hence on $\hat I_1 \cup \hat I_2$. Now we have $(\gamma\g_1^{-1}\gamma_2^{-1})(\theta) = \theta$ for $\theta \in \hat I_1 \cup \hat I_2$, and as $\{\hat I_k\}$ is a cover of $S^1$, $(\hat I_1 \cup \hat I_2)' \subset \hat I_3$. Therefore, if we set $\chi_3(\gamma) = \gamma\g_1^{-1}\gamma_2^{-1}$, it is supported in $\hat I_3 \subset I_3$ and the map $\chi_3$ is continuous because it is a composition of continuous maps (Lemma \ref{lm:sobolevgroup}).
\end{proof} If $\gamma$ is already localized, we can have the following improvement. \begin{lemma}\label{lm:fragmentation-local} Let $k \in \{1,2,3\} \mod 3$ and $\tilde I_k = I_k \cup I_{k+1}$. There is a neighborhood ${\mathcal V}$ of the unit element $\iota$ of $\widetilde{{\mathcal D}^s(S^1)}$, $s>3/2$, and continuous localizing maps \begin{align*} \chi^{(k)}_k&: {\mathcal V}\cap \widetilde{{\mathcal D}^s(\tilde I_k)} \to \widetilde{{\mathcal D}^s(I_k)}, \\ \chi^{(k)}_{k+1}&: {\mathcal V}\cap \widetilde{{\mathcal D}^s(\tilde I_k)} \to \widetilde{{\mathcal D}^s(I_{k+1})} \end{align*} with $\gamma = \chi^{(k)}_k(\gamma)\chi^{(k)}_{k+1}(\gamma)$ and $\chi^{(k)}_k(\iota) = \chi^{(k)}_{k+1}(\iota) = \iota$. \end{lemma} \begin{proof} Without loss of generality, we may assume $k=2$. This is done by applying the steps of constructing $\chi_2$ and $\chi_3$ in the proof of Lemma \ref{lm:fragmentation} to slightly enlarged $I_2$ and $\hat I_2$, so that $\chi^{(2)}_2(\gamma)(\theta) = \gamma(\theta)$ for $\theta \in I_3'$. \end{proof} \begin{lemma}\label{lm:localequivalence} Let $U_{(c,h_1)}, U_{(c,h_2)}$ be irreducible, projective representations of $\widetilde{{\mathcal D}^s(S^1)}$ with central charge $c$ and lowest weights $h_1,h_2$ respectively, constructed as in Section \ref{extension}. Let $I$ be a proper interval of $S^1$. Then the projective representations $U_{(c,h_1)}$ and $U_{(c,h_2)}$ restricted to ${\mathcal D}^s(I)$ are unitarily equivalent. Furthermore, a unitary $U$ intertwines $U_{(c,h_1)}$ and $U_{(c,h_2)}$ restricted to ${\mathcal D}^s(I)$ if and only if it intertwines $T_{(c,h_1)}(f)$ and $T_{(c,h_2)}(f)$ for every $f\in{\rm Vect}(S^1)$ with support in $I$. \end{lemma} \begin{proof} Let $\tilde I$ be an open proper interval of $S^1$ such that $\tilde I\supset\overline{I}$. By \cite[Theorem 5.6]{Weiner17} there exists a unitary $W$ which intertwines the representations $U_{(c,h_1)}, U_{(c,h_2)}$ when restricted to ${\rm Diff}_+(\tilde I)$.
Let $\gamma\in{\mathcal D}^s(I)$, then by Lemma \ref{lem:localapprox} there exists a sequence of $C^{\infty}$-diffeomorphisms $\lbrace\gamma_n\rbrace\subset{\rm Diff}_+(\tilde I)$ converging to $\gamma$. By Corollary \ref{cr:continuityB(H)}, \begin{align*} {\hbox{\rm Ad\,}} WU_{(c,h_1)}(\gamma)W^*& = {\hbox{\rm Ad\,}} \lim_{n\rightarrow\infty} WU_{(c,h_1)}(\gamma_n)W^* = {\hbox{\rm Ad\,}} \lim_{n\rightarrow\infty} U_{(c,h_2)}(\gamma_n) = {\hbox{\rm Ad\,}} U_{(c,h_2)}(\gamma). \end{align*} The last assertion follows from \cite[Lemma 2.1]{Weiner17}. \end{proof} We are going to show that we can take the direct sum of irreducible projective representations of ${\mathcal D}^s(S^1)$, $\{U_{(c,h_j)}\}$, with the same central charge $c$ but possibly different lowest weights $\{h_j\}$ where the differences $h_j - h_{j'}$ are integers. We split the proof into two steps. First, we make $U_{(c,h_j)}$ into continuous multiplier representations with the same cocycle in some neighborhood ${\mathcal V}$ of the identity diffeomorphism $\iota\in \widetilde{{\mathcal D}^s(S^1)}$. Then it is straightforward to take the direct sum. Next, we show that the direct sum representation reduces to a projective representation of ${\mathcal D}^s(S^1)$ if the differences $h_j - h_{j'}$ are integers. Let $G$ and $G'$ be two topological groups. Given a neighborhood ${\mathcal V}$ of the identity in $G$, a continuous map $\mu:{\mathcal V}\rightarrow G'$ is a local homomorphism if $\mu(g_1)\mu(g_2)=\mu(g_1g_2)$ for all $g_1,g_2\in{\mathcal V}$ with $g_1g_2\in{\mathcal V}$.
We say that a map $U$ is a local unitary multiplier representation of a topological group $G$ on a neighborhood ${\mathcal V}$ of the identity if $U$ is a map from ${\mathcal V}$ to the unitary group ${\mathcal U}({\mathcal H})$ of a Hilbert space ${\mathcal H}$ which satisfies the equality $U(g_1)U(g_2)=\omega(g_1,g_2)U(g_1g_2)$, where $\omega:{\mathcal V}\times{\mathcal V}\rightarrow\mathbb{T}$ and $\omega(g_1,g_2)\omega(g_1g_2,g_3)=\omega(g_1,g_2g_3)\omega(g_2,g_3)$ whenever $g_1,g_2,g_3$, $g_1g_2$ and $g_2g_3$ are in ${\mathcal V}$. The following is obtained by reversing the idea of \cite{Tanimoto18-2}. \begin{proposition}\label{pr:directsum} Let $s>3$. For a family $\{(c, h_j)\}$ of pairs with the same central charge $c$, there is a neighborhood ${\mathcal V}$ of the identity in $\widetilde{{\mathcal D}^s(S^1)}$ such that the irreducible unitary projective representations $U_{(c,h_j)}$ lift to local multiplier representations of ${\mathcal V}$ with the same cocycle $c(\cdot,\cdot)$. \end{proposition} \begin{proof} Let us take $h_1$. By \cite{Bargmann54}\cite[Proposition 12.44]{Moretti17}, in a neighborhood $\hat{{\mathcal V}}$ of the identity $\iota\in\widetilde{\diff^4(S^1)}$, $U_{(c,h_1)}$ lifts to a continuous multiplier representation, with some continuous cocycle $c(\cdot,\cdot)$, which we will denote by $U_1$. Because $\widetilde{{\mathcal D}^s(S^1)}$ is a topological group, and by Lemmas \ref{lm:fragmentation}, \ref{lm:fragmentation-local}, for each neighborhood ${\mathcal W}$, there is a smaller neighborhood $p({\mathcal W})$ such that $p({\mathcal W})^2 \subset {\mathcal W}$ and $\chi_k(\gamma), \chi^{(k)}_k(\gamma), \chi^{(k)}_{k+1}(\gamma) \in {\mathcal W}$ for $\gamma \in p({\mathcal W})$. We take ${\mathcal V} = p^{11}(\hat {\mathcal V}) = \underset{11\text{-times}}{\underbrace{p(p(p(\cdots \hat {\mathcal V}\cdots)))}}$. \paragraph{Construction of multiplier representations $U_j$.} We show that we can take $U_j$ with the same cocycle $c(\cdot,\cdot)$.
We fix a covering $\{I_k\}$ of $S^1$ as in Lemma \ref{lm:fragmentation}. For $\gamma \in p(\hat {\mathcal V})$, we define $U_j$ as follows: by Lemma \ref{lm:localequivalence}, there are unitary intertwiners $\{V_{j,k}\}$ between $U_{(c,h_1)}$ and $U_{(c,h_j)}$ restricted to ${\mathcal D}^s(I_k)$. We set \[ U_j(\chi_k(\gamma))={\hbox{\rm Ad\,}} V_{j,k}(U_1(\gamma_k)), \] which makes sense because $p(\hat {\mathcal V}) \subset \hat {\mathcal V}$. Note that $U_j(\chi_k(\gamma))$ does not depend on the choice of the unitary intertwiner $V_{j,k}$, since, if $V_{j,k}$ and $\hat{V}_{j,k}$ are both unitary intertwiners, then by Lemma \ref{lm:localequivalence} \[ {\hbox{\rm Ad\,}} V_{j,k}^*\hat{V}_{j,k}(U_j(\chi_k(\gamma)))=U_j(\chi_k(\gamma)) \] for $\gamma$ smooth, and by continuity of $U_1$ for $\chi_k(\gamma)\in {\mathcal D}^s(I_k) \cap \hat {\mathcal V}$. Let us denote $\gamma_k = \chi_k(\gamma)$ for simplicity. Now, since $\gamma=\gamma_1\gamma_2\gamma_3$ with $\gamma_k\in {\mathcal D}^s(I_k)\cap \hat {\mathcal V}$, we can define $U_j(\gamma)$ by \begin{align}\label{eq:defpi} U_j(\gamma)=U_j(\gamma_1)U_j(\gamma_2)U_j(\gamma_3)c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1}, \end{align} and note that the corresponding equation holds for $U_1$. \paragraph{Well-definedness.} We used a particular set of maps $\chi_k$ to define $U_j$, but actually $U_j(\gamma)$ does not depend on the choice of such maps if $\gamma$ satisfies certain properties and is sufficiently close to $\iota$. Namely, we take two decompositions $\gamma = \gamma_1\gamma_2\gamma_3 = \gamma'_1\gamma'_2\gamma'_3$ where $\gamma_k, \gamma_k' \in {\mathcal D}^s(I_k) \cap p^5(\hat{\mathcal V})$.
It holds that $\gamma_3^{-1}\gamma_2^{-1}\gamma_1^{-1}\gamma'_1\gamma'_2\gamma'_3 = \iota$ in $\widetilde{{\mathcal D}^s(S^1)}$ and $U_1(\gamma_1)^* = c(\gamma_1,\gamma_1^{-1})U_1(\gamma_1^{-1})$, hence we have \[ c(\gamma_1,\gamma_2,\gamma_3,\gamma'_1,\gamma'_2,\gamma'_3) := U_1(\gamma_3)^*U_1(\gamma_2)^*U_1(\gamma_1^{-1}\gamma'_1)U_1(\gamma'_2)U_1(\gamma'_3) \in \mathbb{C}. \] Furthermore, as $U_1$ is a multiplier representation in $\hat {\mathcal V}$, we have \begin{align*} U_1(\gamma) &= U_1(\gamma_1)U_1(\gamma_2)U_1(\gamma_3)c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1} \\ &= U_1(\gamma'_1)U_1(\gamma'_2)U_1(\gamma'_3)c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma_2',\gamma'_3)^{-1}. \end{align*} By putting all factors on one side, we obtain \begin{align}\label{eq:c6} c(\gamma_1,\gamma_2,\gamma_3,\gamma'_1,\gamma'_2,\gamma'_3)c(\gamma_1^{-1},\gamma'_1)c(\gamma_1,\gamma_1^{-1})c(\gamma_1,\gamma_2)c(\gamma_1\gamma_2,\gamma_3)c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma_2',\gamma'_3)^{-1} = 1. \end{align} Note that $U_j$ is unitarily equivalent to $U_1$ on any proper interval, therefore, $U_j(\gamma_1)^*U_j(\gamma'_1) = c(\gamma_1^{-1},\gamma'_1)c(\gamma_1,\gamma_1^{-1})U_j(\gamma_1^{-1}\gamma'_1)$, and $\gamma_1^{-1}\gamma'_1 = \gamma_2\gamma_3\gamma_3^{\prime-1}\gamma_2^{\prime-1}$ has support in $I_2\cup I_3$. Then we can again use the unitary equivalence between $U_j$ and $U_1$ on $I_2\cup I_3$ to obtain \[ U_j(\gamma_3)^*U_j(\gamma_2)^*U_j(\gamma_1^{-1}\gamma'_1)U_j(\gamma'_2)U_j(\gamma'_3) = c(\gamma_1,\gamma_2,\gamma_3,\gamma'_1,\gamma'_2,\gamma'_3), \] which is, by \eqref{eq:c6}, equivalent to the equality \begin{align*} &U_j(\gamma_1)U_j(\gamma_2)U_j(\gamma_3)c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1} \\ =\;& U_j(\gamma'_1)U_j(\gamma'_2)U_j(\gamma'_3)c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma_2',\gamma'_3)^{-1}. \end{align*} In other words, $U_j$ is well-defined on $p^6(\hat {\mathcal V})$.
\paragraph{Cocycle relations.} Next we show that $U_j$ is a local multiplier representation on ${\mathcal V}$. Let $\gamma,\gamma'\in {\mathcal V} = p^{11}(\hat {\mathcal V})$ and we take decompositions $\gamma=\gamma_1\gamma_2\gamma_3, \gamma'=\gamma'_1\gamma'_2\gamma'_3$. We first look at the product $\gamma_3\gamma'_1$. This is supported in $I_1\cup I_3$, and we can find another decomposition $\gamma_3\gamma'_1 = \gamma''_1\gamma''_3$ using Lemma \ref{lm:fragmentation-local}, where $\gamma''_j \in {\mathcal D}^s(I_j) \cap p^{8}(\hat {\mathcal V})$. By repeating such operations and taking new decompositions in proper intervals, we find \begin{align*} \gamma\g' &= \gamma_1\gamma_2\gamma_3\gamma'_1\gamma'_2\gamma'_3 \\ &= \gamma_1\gamma_2\gamma''_1\gamma''_3\gamma'_2\gamma'_3 \\ &= \gamma_1\gamma'''_1\gamma'''_2\gamma''''_2\gamma''''_3\gamma'_3, \end{align*} where $\gamma_j^{(k)} \in {\mathcal D}^s(I_j) \cap p^6(\hat {\mathcal V})$. Again, by considering the multiplier representation $U_1$, we can prove the following relations \begin{align}\label{eq:defc4} \begin{array}{rl} U_1(\gamma_3)U_1(\gamma'_1) &= U_1(\gamma''_1)U_1(\gamma''_3)c(\gamma_3,\gamma'_1,\gamma''_1,\gamma''_3),\\ U_1(\gamma_2)U_1(\gamma''_1) &= U_1(\gamma'''_1)U_1(\gamma'''_2)c(\gamma_2,\gamma''_1,\gamma'''_1,\gamma'''_2),\\ U_1(\gamma''_3)U_1(\gamma'_2) &= U_1(\gamma''''_2)U_1(\gamma''''_3)c(\gamma''_3,\gamma'_2,\gamma''''_2,\gamma''''_3), \end{array} \end{align} where $c(\gamma_3,\gamma'_1,\gamma''_1,\gamma''_3),c(\gamma_2,\gamma''_1,\gamma'''_1,\gamma'''_2),c(\gamma''_3,\gamma'_2,\gamma''''_2,\gamma''''_3)\in\mathbb{C}$ are defined through these equalities. 
Therefore, as $U_1$ has the cocycle $c$, \begin{align*} &c(\gamma,\gamma') U_1(\gamma\g')\\ &\,=U_1(\gamma)U_1(\gamma')\\ & \begin{array}{l} = c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1}c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma'_2,\gamma'_3)^{-1}\\ \quad\times U_1(\gamma_1)U_1(\gamma_2)U_1(\gamma_3)U_1(\gamma'_1)U_1(\gamma'_2)U_1(\gamma'_3) \end{array} &\text{ by } \eqref{eq:defpi}\\ & \begin{array}{l} =c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1}c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma'_2,\gamma'_3)^{-1}\\ \quad\times U_1(\gamma_1)U_1(\gamma'''_1)U_1(\gamma'''_2)U_1(\gamma''''_2)U_1(\gamma''''_3)U_1(\gamma'_3)\\ \quad\times c(\gamma_3,\gamma'_1,\gamma''_1,\gamma''_3)c(\gamma_2,\gamma''_1,\gamma'''_1,\gamma'''_2) c(\gamma''_3,\gamma'_2,\gamma''''_2,\gamma''''_3) \end{array} & \text{ by } \eqref{eq:defc4} \\ & \begin{array}{l} =c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1}c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma'_2,\gamma'_3)^{-1}\\ \quad\times c(\gamma_3,\gamma'_1,\gamma''_1,\gamma''_3)c(\gamma_2,\gamma''_1,\gamma'''_1,\gamma'''_2) c(\gamma''_3,\gamma'_2,\gamma''''_2,\gamma''''_3)\\ \quad\times c(\gamma_1,\gamma'''_1)c(\gamma'''_2,\gamma''''_2)c(\gamma''''_3,\gamma'_3)\cdot U_1(\gamma_1\gamma'''_1)U_1(\gamma'''_2\gamma''''_2)U_1(\gamma''''_3\gamma'_3) \end{array} \\ & \begin{array}{l} =c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1}c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma'_2,\gamma'_3)^{-1}\\ \quad \times c(\gamma_3,\gamma'_1,\gamma''_1,\gamma''_3)c(\gamma_2,\gamma''_1,\gamma'''_1,\gamma'''_2) c(\gamma''_3,\gamma'_2,\gamma''''_2,\gamma''''_3)\\ \quad \times c(\gamma_1,\gamma'''_1)c(\gamma'''_2,\gamma''''_2)c(\gamma''''_3,\gamma'_3)\cdot c(\gamma_1\gamma'''_1,\gamma'''_2\gamma''''_2)c(\gamma_1\gamma'''_1\gamma'''_2\gamma''''_2,\gamma''''_3\gamma'_3)U_1(\gamma\g') \end{array} \end{align*} or equivalently, the following relation between scalars: \begin{align}\label{eq:c246} c(\gamma,\gamma')
=&\;c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1}c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma'_2,\gamma'_3)^{-1} \nonumber \\ &\times c(\gamma_3,\gamma'_1,\gamma''_1,\gamma''_3)c(\gamma_2,\gamma''_1,\gamma'''_1,\gamma'''_2) c(\gamma''_3,\gamma'_2,\gamma''''_2,\gamma''''_3)\\ &\times c(\gamma_1,\gamma'''_1)c(\gamma'''_2,\gamma''''_2)c(\gamma''''_3,\gamma'_3)\cdot c(\gamma_1\gamma'''_1,\gamma'''_2\gamma''''_2)c(\gamma_1\gamma'''_1\gamma'''_2\gamma''''_2,\gamma''''_3\gamma'_3). \nonumber \end{align} Since $U_j$ is locally equivalent to $U_1$, the following also follows from \eqref{eq:defc4}: \begin{align}\label{eq:c4-j} \begin{array}{rl} U_j(\gamma_3)U_j(\gamma'_1) &= U_j(\gamma''_1)U_j(\gamma''_3)c(\gamma_3,\gamma'_1,\gamma''_1,\gamma''_3),\\ U_j(\gamma_2)U_j(\gamma''_1) &= U_j(\gamma'''_1)U_j(\gamma'''_2)c(\gamma_2,\gamma''_1,\gamma'''_1,\gamma'''_2),\\ U_j(\gamma''_3)U_j(\gamma'_2) &= U_j(\gamma''''_2)U_j(\gamma''''_3)c(\gamma''_3,\gamma'_2,\gamma''''_2,\gamma''''_3). \end{array} \end{align} Now, in order to show that $U_j$ is a local multiplier representation with the cocycle $c$, we only have to compute \begin{align*} &U_j(\gamma)U_j(\gamma') \\ & \begin{array}{l} = c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1}c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma'_2,\gamma'_3)^{-1}\\ \quad \times U_j(\gamma_1)U_j(\gamma_2)U_j(\gamma_3)U_j(\gamma'_1)U_j(\gamma'_2)U_j(\gamma'_3) \end{array} &\text{ by } \eqref{eq:defpi} \\ & \begin{array}{l} = c(\gamma_1,\gamma_2)^{-1}c(\gamma_1\gamma_2,\gamma_3)^{-1}c(\gamma'_1,\gamma'_2)^{-1}c(\gamma'_1\gamma'_2,\gamma'_3)^{-1}\\ \quad \times U_j(\gamma_1)U_j(\gamma'''_1)U_j(\gamma'''_2)U_j(\gamma''''_2)U_j(\gamma''''_3)U_j(\gamma'_3)\\ \quad \times c(\gamma_3,\gamma'_1,\gamma''_1,\gamma''_3)c(\gamma_2,\gamma''_1,\gamma'''_1,\gamma'''_2) c(\gamma''_3,\gamma'_2,\gamma''''_2,\gamma''''_3) \end{array} & \text{ by } \eqref{eq:c4-j} \\ & \begin{array}{l} =
c(\gamma,\gamma')\left(c(\gamma_1,\gamma'''_1)c(\gamma'''_2,\gamma''''_2)c(\gamma''''_3,\gamma'_3)\cdot c(\gamma_1\gamma'''_1,\gamma'''_2\gamma''''_2)c(\gamma_1\gamma'''_1\gamma'''_2\gamma''''_2,\gamma''''_3\gamma'_3)\right)^{-1} \\ \quad \times U_j(\gamma_1)U_j(\gamma'''_1)U_j(\gamma'''_2)U_j(\gamma''''_2)U_j(\gamma''''_3)U_j(\gamma'_3) \end{array} & \text{ by } \eqref{eq:c246} \\ & \begin{array}{l} = c(\gamma,\gamma')\left( c(\gamma_1\gamma'''_1,\gamma'''_2\gamma''''_2)c(\gamma_1\gamma'''_1\gamma'''_2\gamma''''_2,\gamma''''_3\gamma'_3)\right)^{-1} \\ \quad \times U_j(\gamma_1\gamma'''_1)U_j(\gamma'''_2\gamma''''_2)U_j(\gamma''''_3\gamma'_3) \end{array} \\ &=\,c(\gamma,\gamma')U_j(\gamma\g'), \end{align*} where we used local equivalence between $U_j$ and $U_1$ in the 2nd and 4th equalities, and the well-definedness (independence of the partition of a group element into ${\mathcal D}^s(I_k)\cap p^5(\hat {\mathcal V})$) in the 5th equality. Namely, $U_j$ has the cocycle $c$ on ${\mathcal V} = p^{11}(\hat {\mathcal V})$. \end{proof} \paragraph{Direct sum of multiplier representations.} Since all the projective representations $U_j$ can be made into local multiplier representations with the same cocycle $c$, the direct sum $U := \bigoplus_j U_j$ is again a local multiplier representation of $\widetilde{{\mathcal D}^s(S^1)}$ on ${\mathcal V}$. By forgetting the phase, we can interpret $U$ as a local projective representation of ${\mathcal V} \subset \widetilde{{\mathcal D}^s(S^1)}$, or in other words, a continuous local group homomorphism from ${\mathcal V}$ into ${\mathcal U}({\mathcal H})/\mathbb{T}$ (see Section \ref{projective}), where ${\mathcal H} = \bigoplus_j {\mathcal H}(c,h_j)$. As $\widetilde{{\mathcal D}^s(S^1)}$ is simply connected and locally connected, $U$ extends to a continuous projective representation of $\widetilde{{\mathcal D}^s(S^1)}$ \cite[Theorem 63]{Pontryagin46}. \begin{theorem}\label{th:sumdiff} Let $s>3$.
For a family $\{(c, h_j)\}$ of pairs with the same central charge $c$ such that $h_j - h_{j'} \in {\mathbb Z}$, the direct sum projective representation $U$ of $\widetilde{{\mathcal D}^s(S^1)}$ as above satisfies $U(R(2\pi)) \in {\mathbb C}$, where $R(\cdot)$ is the lift of rotations to $\widetilde{{\mathcal D}^s(S^1)}$, or in other words, $U$ is a projective representation of ${\mathcal D}^s(S^1)$. \end{theorem} \begin{proof} Let $\tilde{U}_{(c,h_j)}$ be the irreducible global multiplier representation of $\widetilde{{\rm Diff}_+(S^1)}$ with central charge $c$ and lowest weight $h_j$ associated to the Bott-Virasoro cocycle. As a projective representation, we have $U\big\vert_{\widetilde{{\rm Diff}_+(S^1)}}=\bigoplus_j \tilde{U}_{(c,h_j)}$: this is because, by definition of $U$, they agree on a neighborhood of the identity of $\widetilde{{\rm Diff}_+(S^1)}$, and since $\widetilde{{\rm Diff}_+(S^1)}$ is simply connected they agree globally. Since $\widetilde{\mathrm{PSL}(2, {\mathbb R})}$ is a simple Lie group, $U\big\vert_{\widetilde{\mathrm{PSL}(2,{\mathbb R})}}$ extends to a true representation of $\widetilde{\mathrm{PSL}(2, {\mathbb R})}$ by changing $U(\gamma)$ only by a scalar \cite{Bargmann54}\cite[Theorem 12.72]{Moretti17} (see also \cite[Example 12.77]{Moretti17}). The lift to a true representation of $\widetilde{\mathrm{PSL}(2,{\mathbb R})}$ is unique, since if $V_1$ and $V_2$ are true representations which give rise to the same projective representation, we have that $V_1(g)=\chi(g)V_2(g)$ for all $g\in \widetilde{\mathrm{PSL}(2,{\mathbb R})}$, where $\chi$ is a character. Since $\widetilde{\mathrm{PSL}(2,{\mathbb R})}$ is a perfect group, $\chi(g)=1$ for all $g$. By the uniqueness of the lift of $U\big\vert_{\widetilde{\mathrm{PSL}(2,{\mathbb R})}}$ to a true representation $V$, we have that $V=\bigoplus_j V_{(c,h_j)}$, where $V_{(c,h_j)}$ is the lift of $\tilde{U}_{(c,h_j)}\big\vert_{\widetilde{\mathrm{PSL}(2, {\mathbb R})}}$ to a true representation.
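The concluding step can be spelled out (a short check, using the standard fact that in the irreducible representation with lowest weight $h_j$ the spectrum of $L_0$ is contained in $h_j+{\mathbb N}$): on each summand,
\begin{align*}
V(R(2\pi))\big\vert_{{\mathcal H}(c,h_j)} = e^{2\pi i L_0}\big\vert_{{\mathcal H}(c,h_j)} = e^{2\pi i h_j}\,{\mathbbm 1},
\end{align*}
and therefore $V(R(2\pi)) = \bigoplus_j e^{2\pi i h_j}\,{\mathbbm 1} = e^{2\pi i h_1}\,{\mathbbm 1}$ whenever $h_j - h_{j'} \in {\mathbb Z}$ for all $j,j'$.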
As we assume that $h_j - h_{j'}$ are integers, $V(R(2\pi)) \in {\mathbb C}$. \end{proof} From the previous theorem, it follows that every positive energy projective unitary representation of ${\rm Diff}_+(S^1)$ extends to a unitary projective representation of ${\mathcal D}^s(S^1)$ using the following well-known fact that we here prove for completeness. \begin{proposition}\label{pr:compred} Let $U$ be a positive energy unitary projective representation of ${\rm Diff}_+(S^1)$ on the Hilbert space ${\mathcal H}$. Then $U$ is unitarily equivalent to a direct sum of irreducible positive energy unitary projective representations of ${\rm Diff}_+(S^1)$ and extends to ${\mathcal D}^s(S^1)$, $s>3$. \end{proposition} \begin{proof} As in the proof of Theorem \ref{th:sumdiff}, we have that $U\big\rvert_{\mathrm{PSL}(2,{\mathbb R})}$ can be lifted to a true representation of $\widetilde{\mathrm{PSL}(2, {\mathbb R})}$. Thus we can take the generator of rotations $L_0$ and, since $e^{i2\pi L_0}\in{\mathbb C} {\mathbbm 1}$ from the fact that $U$ is a projective representation of ${\rm Diff}_+(S^1)$, it follows that $L_0$ is diagonalizable with spectrum Sp$(L_0)\subset h_1+\mathbb{N}$ with $h_1\in\mathbb{R}$, $h_1\geq 0$. Let ${\mathcal H}^\mathrm{fin}$ be the dense subspace of ${\mathcal H}$ generated by the eigenvectors of $L_0$. We can apply \cite[Theorem 3.4]{CKLW18} to conclude that there exists a positive energy unitary representation $\pi_U$ of ${\rm Vir}$ on ${\mathcal H}^\mathrm{fin}$. The representation of ${\rm Vir}$ on ${\mathcal H}^\mathrm{fin}$ is equivalent to an algebraic orthogonal direct sum of multiples of irreducible positive energy representations of ${\rm Vir}$ in the following sense. Let $V_1$ be the smallest $\pi_U$-invariant subspace of ${\mathcal H}^\mathrm{fin}$ which contains $\ker(L_0-h_1{\mathbbm 1}_{{\mathcal H}^\mathrm{fin}})$ where $h_1$ is the smallest eigenvalue of $L_0$.
By induction let $V_n$ be the smallest $\pi_U$-invariant subspace of $\left(V_1\oplus V_2\oplus\cdots\oplus V_{n-1}\right)^{\perp}\cap{\mathcal H}^\mathrm{fin}$ which contains $\left(V_1\oplus V_2\oplus\cdots\oplus V_{n-1}\right)^{\perp}\cap\ker(L_0-h_n{\mathbbm 1}_{{\mathcal H}^\mathrm{fin}})$ where $h_n$ is the smallest eigenvalue of $L_0$ restricted to $\left(V_1\oplus V_2\oplus\cdots\oplus V_{n-1}\right)^{\perp}\cap{\mathcal H}^\mathrm{fin}$. It is straightforward to see that ${\mathcal H}^\mathrm{fin}=\bigoplus_n V_n$ in the algebraic sense. Now choose an orthonormal basis $\lbrace e^n_j\rbrace$ of $W_n\coloneqq V_n\cap\ker(L_0-h_n{\mathbbm 1}_{{\mathcal H}^\mathrm{fin}})$. We define $H_j^n$ to be the smallest $\pi_U$-invariant subspace of $V_n$ which contains the vector $e^n_j$. By construction $H_j^n$ has no proper $\pi_U$-invariant subspaces, $H_j^n$ and $H_k^n$ are orthogonal subspaces for $j\ne k$ and $\overline{V_n}=\bigoplus_j\overline{H^n_j}$. Let $T$ be the stress-energy tensor associated to the representation $\pi_U$ of ${\rm Vir}$. By construction $T(f)|_{H_j^n}$ is essentially self-adjoint on $H^n_j$. To conclude the decomposition of $U$, we have to show that $e^{iT(f)}\overline{H^n_j}\subset\overline{H^n_j}$ for all $f\in{\rm Vect}(S^1)$. We note that ${\mathscr{D}}\left(\left(\overline{(T(f)|_{H_j^n})}\right)^\ell\right)\subset {\mathscr{D}}(T(f)^\ell)$ and if $\xi\in{\mathscr{D}}\left(\left(\overline{(T(f)|_{H_j^n})}\right)^\ell\right)$ then $\left(\overline{T(f)|_{H_j^n}}\right)^\ell\xi=(T(f))^\ell\xi$. Thus the analytic vectors for $\overline{(T(f)|_{H_j^n})}$ are also analytic for $T(f)$ and $e^{i\overline{(T(f)|_{H_j^n})}}\xi=e^{iT(f)}\xi$. Using the density of the analytic vectors in $\overline{H_j^n}$, we obtain that $e^{i\overline{(T(f)|_{H_j^n})}}=e^{iT(f)}\big\vert_{\overline{H^n_j}}$. Irreducibility of $U\vert_{\overline{H^n_j}}$ follows because $T\vert_{H^n_j}$ is irreducible.
The extension to ${\mathcal D}^s(S^1)$ is now a mere corollary of Theorem \ref{th:sumdiff}. \end{proof} \begin{corollary}\label{cr:diff4red} Let $U$ be a positive energy unitary projective representation of ${\rm Diff}_+(S^1)$ on the Hilbert space ${\mathcal H}$. Then $U$ is unitarily equivalent to a direct sum of irreducible positive energy unitary projective representations of ${\rm Diff}_+(S^1)$ and extends to ${\rm Diff}_+^k(S^1)$ with $k\geq 4$. \end{corollary} \begin{proof} This again follows from Proposition \ref{pr:compred} and the continuous embedding ${\rm Diff}_+^k(S^1) \hookrightarrow {\mathcal D}^s(S^1), s \le k$. \end{proof} We do not know whether our local multiplier representations can be extended to a global multiplier representation of $\widetilde{{\mathcal D}^s(S^1)}$. It is also open whether the global multiplier representation of ${\rm Diff}_+(S^1)$ with the Bott-Virasoro cocycle \cite[Proposition 5.1]{FH05} extends to $\widetilde{{\mathcal D}^s(S^1)}$ by continuity. \section{Conformal nets and diffeomorphism covariance}\label{conformal} Let $\mathrm{PSL}(2,\mathbb{R})$ be the M\"obius group and $\mathcal{I}$ be the set of nonempty, non-dense, open intervals of the unit circle $S^{1}$. $I'$ denotes the interior of the complement of the interval $I\in\mathcal{I}$, namely $I'=(S^{1}\setminus I)^{\circ}$. A {\bf M\"obius covariant net} $({\mathcal A}, U, \Omega)$ on $S^{1}$ is a triple consisting of a family $\mathcal{A}=\left\{\mathcal{A}(I), I\in\mathcal{I}\right\}$ of von Neumann algebras, a strongly continuous unitary representation $U$ of $\mathrm{PSL}(2,\mathbb{R})$ acting on a separable complex Hilbert space $\mathcal{H}$ and a vector $\Omega \in {\mathcal H}$, satisfying the following properties: \begin{enumerate}[{(}1{)}] \item Isotony: $\mathcal{A}(I_{1})\subset\mathcal{A}(I_{2})$, if $I_{1}\subset I_{2}$, $I_{1},I_{2}\in \mathcal{I}$.
\item Locality: $\mathcal{A}(I_{1})\subset\mathcal{A}(I_{2})^{\prime}$, if $I_{1}\cap I_{2}=\emptyset$, $I_{1},I_{2}\in \mathcal{I}$. \item M\"obius covariance: for $g\in \mathrm{PSL}(2,\mathbb{R})$, $I\in\mathcal{I}$, \begin{equation*} U(g)\mathcal{A}(I)U(g)^{-1}=\mathcal{A}(gI) \end{equation*} where $\mathrm{PSL}(2,\mathbb{R})$ acts on $S^{1}$ by M\"obius transformations. \item Positivity of energy: the representation $U$ has positive energy, i.e. the conformal Hamiltonian $L_{0}$ (the generator of rotations) has non-negative spectrum. \item Vacuum vector: there exists a unique (up to scalar) vector $\Omega\in\mathcal{H}$ with the property $U(g)\Omega=\Omega$ for $g\in \mathrm{PSL}(2,\mathbb{R})$. Additionally $\Omega$ is cyclic for the algebra $\bigvee_{I\in\mathcal{I}}\mathcal{A}(I)$. \setcounter{NET}{\value{enumi}} \end{enumerate} With these assumptions, the following automatically hold \cite[Theorem 2.19(ii)]{GF93}\cite[Section 3]{FJ96} \begin{enumerate}[{(}1{)}] \setcounter{enumi}{\value{NET}} \item Reeh-Schlieder property: $\Omega$ is cyclic and separating for ${\mathcal A}(I)$. \item Haag duality: for every $I\in\mathcal{I}$, $\mathcal{A}(I')=\mathcal{A}(I)'$ where $\mathcal{A}(I)'$ is the commutant of $\mathcal{A}(I)$. \item Additivity: if $\lbrace I_{\alpha}\rbrace_{\alpha\in A}$ is a covering of $I\in\mathcal{I}$, with $I_{\alpha}\in\mathcal{I}$ for every $\alpha$, then ${\mathcal{A}(I)\subset \bigvee_{\alpha}\mathcal{A}(I_{\alpha})}$. \item Semicontinuity: if $I_n\in\mathcal{I}$ is a decreasing family of intervals and $I=\left(\bigcap_n I_n\right)^{\circ}$ then \\$\mathcal{A}(I)=\bigwedge_n \mathcal{A}(I_n)$. 
\setcounter{NET}{\value{enumi}} \end{enumerate} By a conformal net (or diffeomorphism covariant net) we shall mean a M\"obius covariant net which satisfies the following: \begin{enumerate}[{(}1{)}] \setcounter{enumi}{\value{NET}} \item \label{diffcov} The representation $U$ extends to a projective unitary representation of ${\rm Diff}_+(S^1)$ such that for all $I\in\mathcal{I}$ we have \begin{align*} U(\gamma)\mathcal{A}(I)U(\gamma)^*&=\mathcal{A}(\gamma I), \quad\gamma\in{\rm Diff}_+(S^1),\\ U(\gamma)xU(\gamma)^*&=x,\hspace{3mm}x\in \mathcal{A}(I), \quad\gamma\in{\rm Diff}_+(I^\prime) \end{align*} where ${\rm Diff}_+(I^\prime)$ denotes the subgroup of diffeomorphisms $\gamma$ such that $\gamma(z)=z$ for all $z\in I$. \end{enumerate} A positive energy representation $U$ of ${\rm Diff}_+(S^1)$ is equivalent to a direct sum of irreducible representations, see Proposition \ref{pr:compred}. Every irreducible component $U_j$ in the decomposition has the same value of the central charge $c$ and if $h_j$ is the lowest weight of $U_j$, $h_j-h_k\in\mathbb{Z}$ for every $j,k$. This fact is crucial for our purpose, which is to extend the conformal symmetry of the net to the larger group ${\mathcal D}^s(S^1)$, $s>3$, in the sense that we want to show that the conditions in (\ref{diffcov}) are satisfied for arbitrary $\gamma$ in ${\mathcal D}^s(S^1)$ and ${\mathcal D}^s(I')$ respectively. \begin{proposition} A conformal net $(\mathcal{A},U,\Omega)$ is ${\mathcal D}^s(S^1)$-covariant, $s>3$. \end{proposition} \begin{proof} Let $\lbrace\gamma_n\rbrace$ be a sequence of diffeomorphisms in ${\rm Diff}_+(S^1)$ converging to $\gamma\in{\mathcal D}^s(S^1)$ in the topology of ${\mathcal D}^s(S^1)$ as in Lemma \ref{lem:localapprox}. For all $m \le n$ it holds that \[ U(\gamma_n){\mathcal A}(I)U(\gamma_n)^*={\mathcal A}(\gamma_n I) \subset {\mathcal A}(\textstyle{\bigcup_{k=m}^n\gamma_k I}), \] where we used isotony of the net ${\mathcal A}$.
For $x \in {\mathcal A}(I)$, it follows for $m \le n$ that \[ U(\gamma_n)xU(\gamma_n)^*\in {\mathcal A}(\textstyle{\bigcup_{k=m}^n\gamma_k I}) \subset \bigvee_{k=m}^{\infty}{\mathcal A}(\gamma_k I), \] by additivity. By Proposition \ref{cr:continuityB(H)} it follows that $U(\gamma)xU(\gamma)^*=\lim_{n\rightarrow \infty} U(\gamma_n)xU(\gamma_n)^*$ (convergence in the strong operator topology) is in $\bigvee_{k=m}^{\infty}{\mathcal A}(\gamma_k I)$ for any $m$, hence we have by semicontinuity that \[ U(\gamma){\mathcal A}(I)U(\gamma)^*\subset \bigcap_m{\mathcal A}(\textstyle{\bigcup_{k=m}^{\infty}\gamma_k I})= {\mathcal A}(\gamma I). \] The other inclusion follows by applying ${\hbox{\rm Ad\,}} U(\gamma^{-1})$. Now consider $\gamma\in{\mathcal D}^s(I')$ and $x\in{\mathcal A}(I)$. We know from Lemma \ref{lem:localapprox} that there exists a sequence $\lbrace \gamma_n\rbrace\subset {\rm Diff}_+(I_n')$ converging to $\gamma$ in the topology of ${\mathcal D}^s(S^1)$ and a decreasing sequence of intervals $I'_n\supset{\rm supp\,}(\gamma_n)\supset I'$ such that $\bigcap_n I'_n= I'$. For $x\in{\mathcal A}(I_n)$, $U(\gamma_m)xU(\gamma_m)^*=x$ if $m \ge n$, hence by Proposition \ref{cr:continuityB(H)} we obtain $U(\gamma)xU(\gamma)^*=x$. As $n$ is arbitrary, this holds for any $x \in \bigvee_n {\mathcal A}(I_n) = {\mathcal A}(I)$ by additivity. \end{proof} \subsection*{Representations of conformal nets} Let $({\mathcal A},U,\Omega)$ be a conformal net. A representation $\rho$ of $({\mathcal A},U,\Omega)$ is a family $\rho=\lbrace \rho_I\rbrace$, $I\in{\mathcal I}$, where $\rho_I$ are representations of ${\mathcal A}(I)$ on a common Hilbert space ${\mathcal H}_\rho$ and such that $\rho_J\vert_{{\mathcal A}(I)}=\rho_I$ if $I\subset J$. The representation $\rho$ is said to be locally normal if $\rho_I$ is normal for every $I\in{\mathcal I}$ (this is always true if the representation space ${\mathcal H}_\rho$ is separable \cite[Theorem 5.1]{TakesakiI}).
We say that a representation $\rho$ of a conformal net $({\mathcal A},U,\Omega)$ is diffeomorphism covariant if there exists a positive energy representation $U^\rho$ of $\widetilde{{\rm Diff}_+(S^1)}$ such that \begin{equation*} U^\rho(\gamma)\rho_I(x)U^\rho(\gamma)^*=\rho_{\mathring{\gamma}I}(U(\mathring{\gamma})xU(\mathring{\gamma})^*),\quad \text{ for } x\in{\mathcal A}(I),\ \gamma\in\widetilde{{\rm Diff}_+(S^1)}, \end{equation*} where $\mathring{\gamma}$ is the image of $\gamma$ in ${\rm Diff}_+(S^1)$ under the covering map. Now let $\rho$ be a locally normal representation of the conformal net ${\mathcal A}$ and assume that $e^{i2\pi L^\rho_0}$ has pure point spectrum (this is always the case if $\rho$ is a direct sum of irreducibles). By using \cite[Proposition 2.2]{Carpi04} and arguing as in the proof of \cite[Proposition 3.7]{Carpi04} it is not hard to see that $\rho$ is diffeomorphism covariant (this will be directly proved in \cite{Tanimoto18-2}) and that the corresponding positive energy projective unitary representation $U^\rho$ of $\widetilde{{\rm Diff}_+(S^1)}$ is a direct sum of irreducibles. By our previous results $U^\rho$ extends to $\widetilde{{\mathcal D}^s(S^1)}$, $s>3$, and this extension makes $\rho$ $\widetilde{{\mathcal D}^s(S^1)}$-covariant. Furthermore, if $\rho$ is a direct sum of irreducible representations, then the adjoint action ${\hbox{\rm Ad\,}} U^\rho(R(2\pi))$ is trivial, and in this sense $\rho$ is ${\mathcal D}^s(S^1)$-covariant. We summarize this fact in the following proposition. \begin{proposition} Let $\rho$ be a locally normal representation of the conformal net ${\mathcal A}$ and assume that $e^{i2\pi L^\rho_0}$ has pure point spectrum. Then $\rho$ is $\widetilde{{\mathcal D}^s(S^1)}$-covariant, $s>3$. If $\rho$ is a direct sum of irreducible representations, then it is ${\mathcal D}^s(S^1)$-covariant.
\end{proposition} \section{Outlook}\label{outlook} For integer $n$ and some $h$, the irreducible unitary representation $U_{(n,h)}$ can be extended to ${\mathcal D}^s(S^1), s>2$ \cite{DIT18}. It would be interesting to determine further to what extent the regularity of the diffeomorphisms can be weakened in such a way that the representations $U_{(c,h)}$ may be extended to such a class in a continuous way. The proof of \cite{DIT18} (based on the strategy of \cite{Vromen13}) relies on the better-behaved $\mathrm{U}(1)$-current, and it appears that such extensions do not act nicely on the stress-energy tensor $T$, which we are currently able to extend only to $\mathcal{S}_\frac32(S^1)$. Another interesting question is whether the global multiplier representations in \cite{FH05} extend to $\widetilde{{\mathcal D}^s(S^1)}$. The question is whether these representations are continuous in the ${\mathcal D}^s(S^1)$-topology. Instead, what we used in Proposition \ref{pr:directsum} is the continuity of our extensions as projective representations, and the existence of local multiplier representations follows. In particular, we do not know whether there is a multiplier representation of $\widetilde{{\mathcal D}^s(S^1)}$ with the Bott-Virasoro cocycle. \subsubsection*{Acknowledgements.} S.C.\! would like to thank Gerard Misio\l{}ek for inspiring discussions. S.D.\! and S.I.\! would like to thank Stefano Rossi for valuable discussions. Y.T.\! thanks Andr\'e Henriques and Kathryn Mann for useful information. S.D.\! and Y.T.\! acknowledge the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006.
<?php

namespace Scribe\Jabiru\Extension\Textile;

use Scribe\Jabiru\Component\Element\ElementLiteral;
use Scribe\Jabiru\Extension\ExtensionInterface;
use Scribe\Jabiru\Markdown;
use Scribe\Jabiru\Renderer\RendererAwareInterface;
use Scribe\Jabiru\Renderer\RendererAwareTrait;

/**
 * [Experimental] Textile Headers
 *
 * This extension replaces Core\HeaderExtension
 */
class HeaderExtension implements ExtensionInterface, RendererAwareInterface
{
    use RendererAwareTrait;

    /**
     * {@inheritdoc}
     */
    public function register(Markdown $markdown)
    {
        $markdown->on('block', array($this, 'processHeader'), 10);
    }

    /**
     * @param ElementLiteral $text
     */
    public function processHeader(ElementLiteral $text)
    {
        // The pattern uses the `x` (extended) modifier, so whitespace is
        // ignored and `#` starts a comment; the comments below must stay on
        // their own lines for the pattern to parse correctly.
        $text->replace('{
            ^h([1-6])       #1 Level
            (|=|>)\.        #2 Align marker
            [ \t]*
            (.+)
            [ \t]*\n+
        }mx', function (ElementLiteral $w, ElementLiteral $level, ElementLiteral $mark, ElementLiteral $header) {
            $attributes = [];

            switch ((string) $mark) {
                case '>':
                    $attributes['align'] = 'right';
                    break;
                case '=':
                    $attributes['align'] = 'center';
                    break;
            }

            return $this->getRenderer()->renderHeader(
                $header, ['level' => (int) $level->getString(), 'attr' => $attributes]
            ) . "\n\n";
        });
    }

    /**
     * {@inheritdoc}
     */
    public function getName()
    {
        return 'header';
    }
}
<?php

namespace SMFClient;

class TopicParser
{
    private static function removeQuotes($topic)
    {
        $count = 1;
        // Repeatedly strip the innermost quote block until none remain.
        while ($count) {
            $topic = preg_replace('/<div class="quoteheader">(?!.*<div class="quoteheader">).*?<div class="quotefooter"><div class="botslice_quote"><\/div><\/div>/', '', $topic, 1, $count);
        }
        return $topic;
    }

    private static function removeSpecialCharacters($topic)
    {
        // Change non-breaking spaces (U+00A0 and the HTML entity) to regular ones
        $topic = str_replace("\u{00A0}", " ", $topic);
        $topic = str_replace("&nbsp;", " ", $topic);
        return $topic;
    }

    private static function preProcess($topic)
    {
        $topic = self::removeSpecialCharacters($topic);
        return $topic;
    }

    public static function parseMessages($topic, $options)
    {
        $topic = self::preProcess($topic);
        $messages = [];
        $rows = explode("\n", $topic);
        $index = 0;
        $length = count($rows);

        // Skip ahead to the start of the post list.
        while ($index < $length) {
            $row = trim($rows[$index]);
            if ($row == '<dl id="posts">') {
                $index++;
                break;
            }
            $index++;
        }

        $postNumber = 0;
        while ($index < $length) {
            $message = [];
            $postNumber++;
            $row_postheader_start = trim($rows[$index]);

            // Check that we are dealing with an actual post
            if ($row_postheader_start != '<dt class="postheader">') {
                break;
            }

            //$row_title = $rows[$index+1];
            $row_author = trim($rows[$index+2]);
            //$row_postheader_end = $rows[$index+3];
            //$row_postbody_start = $rows[$index+4];
            $row_postbody = trim($rows[$index+5]);

            // Remove double whitespaces
            $row_postbody = preg_replace('/\s+/', ' ', $row_postbody);
            $row_postbody = str_replace('<strong>', '', $row_postbody);
            //$row_postbody_end = $rows[$index+6];

            // Get name of the poster
            $row_author = strip_tags($row_author);
            $author_data = explode(" ", $row_author);
            $firstName = $author_data[1];
            $lastName = $author_data[2];

            // Get posting time; translate Finnish month names so strtotime() works
            $pos = strrpos($row_author, '-');
            $timeStr = substr($row_author, $pos + 2);
            $timeStr = str_replace(
                array("Tammikuu","Helmikuu","Maaliskuu","Huhtikuu","Toukokuu","Kesäkuu","Heinäkuu","Elokuu","Syyskuu","Lokakuu","Marraskuu","Joulukuu"),
                array("January","February","March","April","May","June","July","August","September","October","November","December"),
                $timeStr
            );
            $time = strtotime($timeStr);

            $index = $index + 7;

            $message['body'] = $row_postbody;
            $message['firstName'] = $firstName;
            $message['lastName'] = $lastName;
            $message['name'] = $firstName . ' ' . $lastName;
            $message['bodyWithoutQuotes'] = self::removeQuotes($row_postbody);
            $message['time'] = $time;
            $message['index'] = $postNumber;
            $messages[] = $message;
        }

        return $messages;
    }

    /*
     * Filter messages containing given pattern
     */
    public static function parseRegex($messages, $pattern, $copyFields, $matchFields)
    {
        $totalResults = [];
        foreach ($messages as $message) {
            $matches = [];
            $body = $message['bodyWithoutQuotes'];
            $count = preg_match_all($pattern, $body, $matches);
            for ($i = 0; $i < $count; $i++) {
                $result = [];
                foreach ($copyFields as $copyField) {
                    $result[$copyField] = $message[$copyField];
                }
                for ($j = 0; $j < count($matchFields); $j++) {
                    if ($matchFields[$j]) {
                        $result[$matchFields[$j]] = $matches[$j][$i];
                    }
                }
                $totalResults[] = $result;
            }
        }
        return $totalResults;
    }
}
How many middle class families would object to adding a few higher brackets, say a tax rate of 40% for married couples and individuals with incomes over $500,000? Why not 50% for incomes over $1 million, 60% for incomes over $5 million, 70% for incomes over $10 million, 80% for incomes over $100 million and 90% for incomes over $1 billion? Even at the 90% tax rate, a billionaire filing would still have 10% of $1 billion left over in disposable income, which is $100 million. Pity the poor person who can't live on that! And then why are the real high earners, whose income is mostly capital gains, taxed only 15% when everyone who makes $16,750 or more by the sweat of their brow is taxed at least 15%? A married couple making $16,750 is at poverty level. Why is some single billionaire taxed at the same rate as a poverty level family? Note there is no reduction in tax rate when there are children in the family. CEOs take most of their income in stock options. When they cash them in, their profit is all capital gains taxed at 15%. And please note that adding higher tax brackets for millionaires and billionaires while keeping the lower brackets exactly the way they are will not add one iota of taxation to the poor and middle class. On January 28, 2011, it was reported in various news media on television and online that John Paulson earned at least $5,000,000,000 (five billion USD) in 2010. Since Paulson is a hedge fund manager, he paid taxes at the 15% rate due to the tax loophole of "carried interest." If capital gains were taxed the same as regular income, he would have paid at the 35% rate, the same as someone making $373,650. If my tax table given above were in effect, he would have paid at a 90% rate. The reason why the Federal government has such large deficits is that tax rates for the rich under Republican Presidents have been systematically lowered.
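The arithmetic is easy to check. The sketch below applies the bracket schedule proposed above to a $5 billion income like Paulson's; the rates over $500,000 are from the proposal, while the single 35% rate assumed below $500,000 is a deliberate simplification, not the actual bracket schedule:

```python
# Marginal-bracket arithmetic for the proposed schedule.
# Thresholds/rates above $500K are from the proposal; the flat 35%
# below $500K is an illustrative simplification.

PROPOSED_BRACKETS = [          # (lower threshold, marginal rate)
    (0,             0.35),     # simplification: flat 35% below $500K
    (500_000,       0.40),
    (1_000_000,     0.50),
    (5_000_000,     0.60),
    (10_000_000,    0.70),
    (100_000_000,   0.80),
    (1_000_000_000, 0.90),
]

def tax(income, brackets=PROPOSED_BRACKETS):
    """Tax owed when each rate applies only to the slice of income in its bracket."""
    total = 0.0
    for (lo, rate), nxt in zip(brackets, brackets[1:] + [(float("inf"), None)]):
        hi = nxt[0]
        if income > lo:
            total += (min(income, hi) - lo) * rate
    return total

income = 5_000_000_000  # John Paulson's reported 2010 income
print(f"flat 15% (capital gains rate): ${0.15 * income:,.0f}")
print(f"flat 35% (top ordinary rate):  ${0.35 * income:,.0f}")
print(f"proposed marginal schedule:    ${tax(income):,.0f}")
```

Applied marginally rather than as a flat rate, the proposed schedule would collect roughly $4.4 billion on a $5 billion income, an effective rate of about 88%, versus $750 million at the 15% capital gains rate.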
Tax breaks have been given to the rich, with the result that there is now a debt crisis, and Republicans are attempting at this late date to balance the budget on the backs of the poor and middle class while continuing to give even greater tax breaks to the rich. From 1946 to 1954 the top tax rate was either 89% or 90%. From 1956 to 1960 the top rate was 75%. Does it make sense that millionaires and billionaires, if they are not paying most of their tax at the 15% capital gains rate, are paying at the same rate as those making $373,650? Shouldn't there be a few higher brackets in the tax code, say for income over $1 million and income over $1 billion at least? The focus becomes even clearer when we look at top marginal tax rates. A marginal tax rate is the tax on income over a certain amount. When we look at marginal rates, we see that millionaires and billionaires are treated the same as lower income individuals and families on lower amounts, say the first $50,000. They would pay the same rate as everyone else on their first $50,000 of income. Then, for instance, the marginal tax rate on income between $50K and $100K would be greater. For only that income over $100K would there be an even higher rate, and so on. Higher marginal rates would only apply to that portion of income over a certain amount, not to the entire income. Marginal tax rates collapse down to one effective overall rate on the entire amount of income, but it is instructive to think of the tax code in terms of marginal rates. The notion that, once a person makes over a certain amount, their need for that additional income diminishes supports the idea of progressive marginal tax rates. The top marginal rate from 1936 to 1963 was at least 79%. From 1948 to 1964 this applied to incomes above $400,000. From 1964 to 1980 it was at least 70% on incomes above approximately $200,000. Then Ronald Reagan was elected.
Reagan lowered the top marginal rate to 50% in 1982 and then George H. W. Bush lowered it again to 28% in 1988. So this was definitely tax breaks for the rich. Any time the higher marginal tax rates are lowered or eliminated altogether, this constitutes, ipso facto, TAX BREAKS FOR THE RICH. The cumulative debt of the United States in the past seven completed fiscal years was approximately $4.08 trillion, or about 40.8% of the total national debt, which stood at approximately $10.0 trillion when that period ended. The total surplus in FY 2001 was $128 billion. A combination of tax cuts and spending initiatives has added almost $1.7 trillion—through budget deficits—to the national debt since then (October 1, 2001 through September 30, 2007). It should be noted that yearly debt accumulation often exceeds the yearly budget deficit, because, for example, interest on the debt is not planned in the budget to be paid off, or because Social Security receipts run a surplus (see Fiscal policy of the United States). The total budget deficit for FY 2007 was $162 billion. Most debt was accumulated as a result of tax cuts and increased national security spending. According to Richard Kogan and Matt Fiedler, "the largest costs — $1.2 trillion over six years — resulted from the tax cuts enacted since the start of 2001. Increased spending for defense, international affairs, and homeland security – primarily for prosecuting the wars in Iraq and Afghanistan – also was quite costly, amounting to almost $800 billion to date. Together, tax cuts and the spending increases for these security programs account for 84 percent of the increases in debt racked up by Congress and the President over this period." Lawrence Kudlow, however, noted "The U.S. has spent roughly $750 billion for the five-year war. Sure, that's a lot of money. But the total cost works out to 1 percent of the $63 trillion GDP over that time period. It's miniscule [sic]."
He also reported that "during the five years of the Iraq war ... household net worth has increased by $20 trillion." Nobel laureate Joseph Stiglitz has estimated the total cost of the Iraq War at closer to $3 trillion. Interest on the debt (including both public and intragovernmental amounts) increased from $322 billion to $454 billion annually. The share of public debt owned by foreigners increased significantly from 31% in June 2001 to 50% in June 2008, with the dollar balance owed to foreigners increasing from $1.0 trillion to $2.6 trillion. This also significantly increased the interest payments sent overseas, from approximately $50 billion in 2001 to $121 billion during 2008. President Bush also signed into law Medicare Part D, which provides additional prescription drug benefits to seniors. The program was not funded by any changes to the tax code. According to the GAO, this program alone created $8.4 trillion in unfunded obligations in present value terms, a larger fiscal challenge than Social Security. So contrary to what the Republicans would have you believe, the US does not have a spending problem; it has a revenue problem brought about by insane policies of lowering taxes on the rich, increased defense spending and an unfunded prescription drug benefit. Now Obama, having inherited these built-in deficit-producing Bush policies, is being blamed for running the country into debt, and Republicans are attempting to balance the budget on the backs of the poor while giving even more tax breaks to the wealthy on the grounds that they are the "job creators." If they were truly job creators, don't you think that Bush's tax-lowering policies would have produced jobs during the eight years Bush was in power? They didn't. Obama tried to get Medicare spending under control with his Affordable Care Act. Unfortunately, Republicans fought him every step of the way, particularly on the elements which would have reduced spending on health care.
If a public option had gone into effect, it would have reduced health care expense. If Medicare was allowed to negotiate prescription drug charges, that would have brought down the cost of Medicare. Now Republicans want to privatize even Medicare which will mean that poor senior citizens will just die from lack of health care because they won't be able to afford private health insurance. Taxes on corporations used to constitute a substantial part of US government revenues. No longer. The share that corporate tax revenues comprise of total federal tax revenues has collapsed, falling from an average of 28 percent of federal revenues in the 1950s and 21 percent in the 1960s to 9% in 2010. The Republican game plan has been to lower taxes on corporations and the wealthy while claiming that that will create jobs, a fact that has been historically proven to be untrue, and then to raise taxes on the poor and middle class while cutting social programs because they claim we can no longer afford them. When they talk about cutting government spending they never talk about cutting the bloated Defense Department budget which is larger than the rest of the world's military budgets combined. That is MIA from their conversations. They never talk about increasing government revenues in order to balance the budget. If the US wants to get serious about decreasing deficits, it needs to roll back the Bush tax cuts, cut military spending and reform the Medicare prescription drug benefit. Just going back to pre-Bush policies would likely bring the budget under control. Ending the tax loopholes for corporations would bring in additional revenues. Corporations like GE and Exxon Mobil which make tens of billions of dollars in profits should not be getting billions of dollars from the US government in tax refunds. Subsidies to agriculture and Big Oil should be eliminated saving billions more. Adding higher income marginal tax brackets would broaden the notion of shared sacrifice. 
Democrats need to extend the scope of the debate to include these items. Instead they let the Republicans narrow the debate to just eliminating or decreasing programs which benefit the poor and middle class.
Documents related to George Moir's appointment as delegate for 1952 Olympic Games

Object types: Documents and books / Document / Certificate; Documents and books / Document / Letter; Documents and books / Document / Newspaper / Clipping
Date Used
Description: Folder containing items associated with George Moir's appointment as a member of the Delegation of the 1956 Olympic Organising Committee to the 1952 Olympic Games, Helsinki. Includes two certificates of introduction for George Moir, one signed by Menzies, the other by the Lord Mayor of Melbourne, two newspaper clippings and four letters.
Collection: Australian Gallery of Sport and Olympic Museum
Dimensions:
H: 338 W: 215 D: 1mm (H: 13 5/16 W: 8 7/16 D: 1/16")
H: 330 W: 200 D: 1mm (H: 13 W: 7 7/8 D: 1/16")
H: 265 W: 209 D: 1mm (H: 10 7/16 W: 8 1/4 D: 1/16")
Associations:
Olympic Games 1952; 1952; Helsinki, Uusimaa, Finland
Olympic Games 1956; 1956; Melbourne, Victoria
George Moir; 25 Apr 1905
Organising Committee for the Games of the XVIth Olympiad, Melbourne 1956
Edgar S Tanner, Hon Secretary Treasurer, Organising Committee
Robert Menzies; 20 Dec 1894; 15 May 1978
Credit: Kindly donated to the Australian Gallery of Sport and Olympic Museum by George Moir
Classifications: Certificate; Document; Documents and books; Text Document; Paper; Paper product
Q: Ubuntu-Tweak tweaks get ignored after reboots

When I set the corners to reveal workspaces/windows, it stops working after a couple of reboots. Why? In addition, my chosen login screen lasts for 1 second during login and then reverts to the current wallpaper. Why?

A: I was having the same problem, so I found the solution here: Scale plugin keeps forgetting hot corner settings on restart. Ubuntu-Tweak uses Compiz for this tweak, so the problem is the same. Copied from there:

1. Run gconf-editor from Terminal or Alt+F2
2. Navigate to apps → compiz-1 → general → screen0 → options → active_plugins
3. Move "Scale" to the bottom of the list.
4. Move "Expo" to the bottom, right above "Scale" and underneath Unityshell.
Q: Error propagation for products [duplicate]

Suppose you have two measured (independent) physical quantities $x$ and $y$ with relative errors $r_x := \frac{\delta x}{x}$ and $r_y := \frac{\delta y}{y}$, where $\delta x$ and $\delta y$ are the corresponding absolute errors. Now you want to calculate $z = xy$ (or $z = \frac{x}{y}$).

Usually the relative error $r_z$ is calculated as

$$r_z = \sqrt{r_x^2 + r_y^2}$$

Sometimes (usually in lower level or high school courses) it is said that you have to take just the sum of the relative errors, i.e.

$$r_z = r_x + r_y$$

For example for the product this seems easy to derive:

\begin{align}
(x + \delta x)\cdot (y + \delta y) = xy + x \delta y + y \delta x + \delta x \delta y \\
(x - \delta x)\cdot (y - \delta y) = xy - x \delta y - y \delta x + \delta x \delta y
\end{align}

Subtracting both equations you get for the right side: $2(x\delta y + y \delta x)$, and half of it seems to be a good measure for the absolute value of $\delta z$. So you get for the relative error $r_z$

$$r_z = \frac{\delta z}{z} = \frac{x\delta y + y \delta x}{xy} = \frac{\delta y}{y} + \frac{\delta x}{x} = r_y + r_x$$

What's wrong with this reasoning?

What is the correct formula and why? Are there different domains of application of both formulas?

Comments:

• It wouldn't be unheard of to assume the product of the errors $\delta x \delta y$ is negligible, but this assumes that both are small numbers relative to x and y – tpg2114

• "What's wrong with this reasoning?" Intuitively, what's wrong is that it assumes the errors all have the same sign, whereas in most cases you need to assume that the errors are uncorrelated. Sum ten uncorrelated noise terms together: what's the probability that they're all in the same direction? It's $2^{-9}\approx 0.002$. Even with two terms, the probability that they are both of the same sign is only a half. Half of the time, they will mitigate one another. – WetSavannaAnimal
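The quadrature rule can also be checked numerically. The following Python sketch is mine, not from the original thread; it assumes small, independent Gaussian relative errors and measures the spread of $z = xy$ directly:

```python
import math
import random

def simulated_relative_error(rx, ry, n=200_000, seed=0):
    """Sample z = x*y with independent Gaussian relative errors on x and y,
    then return the empirical relative spread of z."""
    rng = random.Random(seed)
    zs = []
    for _ in range(n):
        x = 1.0 + rng.gauss(0.0, rx)  # x measured with relative error rx
        y = 1.0 + rng.gauss(0.0, ry)  # y measured with relative error ry
        zs.append(x * y)
    mean = sum(zs) / n
    var = sum((z - mean) ** 2 for z in zs) / (n - 1)
    return math.sqrt(var)

rx, ry = 0.02, 0.03
observed = simulated_relative_error(rx, ry)
quadrature = math.sqrt(rx**2 + ry**2)  # the usual formula, about 0.036
linear_sum = rx + ry                   # the "high school" value, 0.05
print(observed, quadrature, linear_sum)
```

The observed spread lands on the quadrature value; the linear sum only bounds the worst case in which both errors happen to share a sign.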
Transcribed from "An Illustrated History of The Big Bend Country, embracing Lincoln, Douglas, Adams and Franklin counties, State of Washington", published by Western Historical Publishing Co., 1904.

ANDREW FLYNN. When the first wave of civilization began to roll into the Big Bend country, Andrew Flynn was on the crest. He took the land which is his home place and started to work, both to make for himself a fortune and to assist materially in the upbuilding of the country. Judging from the possessions that he now holds, we see that he made no mistake in settling in this country. He has a large estate, and on the home place, about six miles north from Hartline, has some of the most beautiful and commodious buildings in the entire Big Bend country. He has spared no effort in arranging his place and making improvements, and excellent wisdom, thrift and progress are manifested throughout the entire premises. Andrew Flynn was born in Albany, New York, on April 5, 1857. His parents, Bernard and Catherine (Bennett) Flynn, were natives of Ireland and are now living in Marion county, Oregon, having crossed the plains thither, in 1869, with ox and mule teams. Our subject was educated in Canada and Oregon. In the latter place, he remained until arriving at manhood's estate and then learned the bricklayer's trade. For ten years, he wrought in the Webfoot State, then came to Washington and took up railroading, as bridge builder. Two years later, he settled in Douglas county, taking a pre-emption and timber culture claim, which he brought to a high state of cultivation. Then he selected his homestead, where he resides at the present time. He has, in addition to this property, large herds of fine graded cattle and other stock and is known as one of the leading and wealthy men of the country. When Mr. Flynn first settled in this country, there were no settlers near and the nearest trading point was Sprague, Washington. He came in company with Jim Heathman and Michael Buckley. Mr.
Flynn has three brothers and three sisters, Charles, Eugene, William, Mrs. Mary Mallen, Mrs. Kate Manhoney, and Ellen. In this country, on May 26, 1892, occurred the marriage of Andrew Flynn and Miss Amanda M. Henning. Her parents were Herman and Louisa (Young) Henning, the former a native of Germany and the latter of Indiana. She was born in Winneshiek county, Iowa, on June 17, 1874, and has three brothers and two sisters, William, Edward, Otto, Mrs. Julia Thompson, and Elvina. To Mr. and Mrs. Flynn the following named children have been born: Walter, on March 7, 1893; Lila A., April 26, 1894; Bertholima, August 16, 1895; Edward Leo, February 3, 1897; and Van Dudley, on January 18, 1901. Mr. Flynn was raised a Catholic. He is active in everything that is for the benefit and welfare of the community and has always been a progressive and energetic man. No man is better known in the community than Mr. Flynn and he is justly entitled to the esteem and confidence so liberally given him by all.
The Luhn mod N algorithm is an extension to the Luhn algorithm (also known as mod 10 algorithm) that allows it to work with sequences of values in any even-numbered base. This can be useful when a check digit is required to validate an identification string composed of letters, a combination of letters and digits, or any arbitrary set of N characters where N is divisible by 2.

Informal explanation

The Luhn mod N algorithm generates a check digit (more precisely, a check character) within the same range of valid characters as the input string. For example, if the algorithm is applied to a string of lower-case letters (a to z), the check character will also be a lower-case letter. Apart from this distinction, it resembles very closely the original algorithm.

The main idea behind the extension is that the full set of valid input characters is mapped to a list of code-points (i.e., sequential integers beginning with zero). The algorithm processes the input string by converting each character to its associated code-point and then performing the computations in mod N (where N is the number of valid input characters). Finally, the resulting check code-point is mapped back to obtain its corresponding check character.

Limitation

The Luhn mod N algorithm only works where N is divisible by 2. This is because there is an operation to correct the value of a position after doubling its value which does not work where N is not divisible by 2. For applications using the English alphabet this is not a problem, since a string of lower-case letters has 26 code-points, and adding decimal characters adds a further 10, maintaining an N divisible by 2.

Explanation

The second step in the Luhn algorithm re-packs the doubled value of a position into the original digit's base by adding together the individual digits in the doubled value when written in base N. This step results in even numbers if the doubled value is less than or equal to N, and odd numbers if the doubled value is greater than N.
For example, in Decimal applications where N is 10, original values between 0 and 4 result in even numbers and original values between 5 and 9 result in odd numbers, effectively re-packing the doubled values between 0 and 18 into a single distinct result between 0 and 9. Where an N is used that is not divisible by 2, this step returns even numbers for doubled values greater than N, which cannot be distinguished from doubled values less than or equal to N.

Outcome

The algorithm will neither detect all single-digit errors nor all transpositions of adjacent digits if an N is used that is not divisible by 2. As these detection capabilities are the algorithm's primary strengths, the algorithm is weakened almost entirely by this limitation. The Luhn mod N algorithm odd variation enables applications where N is not divisible by 2 by replacing the doubled value at each position with the remainder after dividing the position's value by N, which gives odd number remainders consistent with the original algorithm design.

Mapping characters to code-points

Initially, a mapping between valid input characters and code-points must be created. For example, consider that the valid characters are the lower-case letters from a to f. Therefore, a suitable mapping would be:

a → 0, b → 1, c → 2, d → 3, e → 4, f → 5

Note that the order of the characters is completely irrelevant. Any other mapping, with the same characters assigned to the code-points in a different order, would also be acceptable (although possibly more cumbersome to implement). It is also possible to intermix letters and digits (and possibly even other characters).
For example, this mapping would be appropriate for lower-case hexadecimal digits:

'0' → 0, '1' → 1, ..., '9' → 9, 'a' → 10, 'b' → 11, 'c' → 12, 'd' → 13, 'e' → 14, 'f' → 15

Algorithm in C#

Assuming the following functions are defined:

int CodePointFromCharacter(char character) {...}
char CharacterFromCodePoint(int codePoint) {...}
int NumberOfValidInputCharacters() {...}

The function to generate a check character is:

char GenerateCheckCharacter(string input) {
    int factor = 2;
    int sum = 0;
    int n = NumberOfValidInputCharacters();

    // Starting from the right and working leftwards is easier since
    // the initial "factor" will always be "2".
    for (int i = input.Length - 1; i >= 0; i--) {
        int codePoint = CodePointFromCharacter(input[i]);
        int addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = IntegerValue(addend / n) + (addend % n);
        sum += addend;
    }

    // Calculate the number that must be added to the "sum"
    // to make it divisible by "n".
    int remainder = sum % n;
    int checkCodePoint = (n - remainder) % n;

    return CharacterFromCodePoint(checkCodePoint);
}

And the function to validate a string (with the check character as the last character) is:

bool ValidateCheckCharacter(string input) {
    int factor = 1;
    int sum = 0;
    int n = NumberOfValidInputCharacters();

    // Starting from the right, work leftwards
    // Now, the initial "factor" will always be "1"
    // since the last character is the check character.
    for (int i = input.Length - 1; i >= 0; i--) {
        int codePoint = CodePointFromCharacter(input[i]);
        int addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = IntegerValue(addend / n) + (addend % n);
        sum += addend;
    }

    int remainder = sum % n;
    return (remainder == 0);
}

Algorithm in Java

Assuming the following functions are defined:

int codePointFromCharacter(char character) {...}
char characterFromCodePoint(int codePoint) {...}
int numberOfValidInputCharacters() {...}

The function to generate a check character is:

char generateCheckCharacter(String input) {
    int factor = 2;
    int sum = 0;
    int n = numberOfValidInputCharacters();

    // Starting from the right and working leftwards is easier since
    // the initial "factor" will always be "2".
    for (int i = input.length() - 1; i >= 0; i--) {
        int codePoint = codePointFromCharacter(input.charAt(i));
        int addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (addend / n) + (addend % n);
        sum += addend;
    }

    // Calculate the number that must be added to the "sum"
    // to make it divisible by "n".
    int remainder = sum % n;
    int checkCodePoint = (n - remainder) % n;

    return characterFromCodePoint(checkCodePoint);
}

And the function to validate a string (with the check character as the last character) is:

boolean validateCheckCharacter(String input) {
    int factor = 1;
    int sum = 0;
    int n = numberOfValidInputCharacters();

    // Starting from the right, work leftwards
    // Now, the initial "factor" will always be "1"
    // since the last character is the check character.
    for (int i = input.length() - 1; i >= 0; i--) {
        int codePoint = codePointFromCharacter(input.charAt(i));
        int addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (addend / n) + (addend % n);
        sum += addend;
    }

    int remainder = sum % n;
    return (remainder == 0);
}

Algorithm in JavaScript

Assuming the following functions are defined:

const codePoints = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"; // This can be any string of permitted characters

function numberOfValidInputCharacters() {
    return codePoints.length;
}

function codePointFromCharacter(character) {
    return codePoints.indexOf(character);
}

function characterFromCodePoint(codePoint) {
    return codePoints.charAt(codePoint);
}

The function to generate a check character is:

function generateCheckCharacter(input) {
    let factor = 2;
    let sum = 0;
    let n = numberOfValidInputCharacters();

    // Starting from the right and working leftwards is easier since
    // the initial "factor" will always be "2".
    for (let i = input.length - 1; i >= 0; i--) {
        let codePoint = codePointFromCharacter(input.charAt(i));
        let addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (Math.floor(addend / n)) + (addend % n);
        sum += addend;
    }

    // Calculate the number that must be added to the "sum"
    // to make it divisible by "n".
    let remainder = sum % n;
    let checkCodePoint = (n - remainder) % n;

    return characterFromCodePoint(checkCodePoint);
}

And the function to validate a string (with the check character as the last character) is:

function validateCheckCharacter(input) {
    let factor = 1;
    let sum = 0;
    let n = numberOfValidInputCharacters();

    // Starting from the right, work leftwards
    // Now, the initial "factor" will always be "1"
    // since the last character is the check character.
    for (let i = input.length - 1; i >= 0; i--) {
        let codePoint = codePointFromCharacter(input.charAt(i));
        let addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (Math.floor(addend / n)) + (addend % n);
        sum += addend;
    }

    let remainder = sum % n;
    return (remainder == 0);
}

Example

Generation

Consider the above set of valid input characters (a to f) and the example input string abcdef. To generate the check character, start with the last character in the string and move left doubling every other code-point. The "digits" of the code-points as written in base 6 (since there are 6 valid input characters) should then be summed up:

Character             a  b  c  d  e   f
Code-point            0  1  2  3  4   5
Multiplier            1  2  1  2  1   2
Result                0  2  2  6  4  10
Sum of base-6 digits  0  2  2  1  4   5

The total sum of digits is 14 (0 + 2 + 2 + 1 + 4 + 5). The number that must be added to obtain the next multiple of 6 (in this case, 18) is 4. This is the resulting check code-point. The associated check character is e.

Validation

The resulting string abcdefe can then be validated by using a similar procedure:

Character             a  b  c  d  e   f  e
Code-point            0  1  2  3  4   5  4
Multiplier            1  2  1  2  1   2  1
Result                0  2  2  6  4  10  4
Sum of base-6 digits  0  2  2  1  4   5  4

The total sum of digits is 18. Since it is divisible by 6, the check character is valid.

Implementation

The mapping of characters to code-points and back can be implemented in a number of ways. The simplest approach (akin to the original Luhn algorithm) is to use ASCII code arithmetic. For example, given an input set of 0 to 9, the code-point can be calculated by subtracting the ASCII code for '0' from the ASCII code of the desired character. The reverse operation will provide the reverse mapping. Additional ranges of characters can be dealt with by using conditional statements. Non-sequential sets can be mapped both ways using a hard-coded switch/case statement. A more flexible approach is to use something similar to an associative array. For this to work, a pair of arrays is required to provide the two-way mapping. An additional possibility is to use an array of characters where the array indexes are the code-points associated with each character. The mapping from character to code-point can then be performed with a linear or binary search. In this case, the reverse mapping is just a simple array lookup.
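The worked example above can be reproduced in a few lines. The following Python sketch is not part of the original article; it ports the same algorithm, hard-codes the a-f alphabet, and assumes (from the digit sums in the example) that the sample input string is abcdef.

```python
# Compact port of the Luhn mod N algorithm for the a-f alphabet.
CODE_POINTS = "abcdef"  # a character's index in this string is its code-point

def generate_check_character(s: str) -> str:
    n = len(CODE_POINTS)
    factor, total = 2, 0  # rightmost character is doubled first
    for ch in reversed(s):
        addend = factor * CODE_POINTS.index(ch)
        factor = 1 if factor == 2 else 2
        # sum the "digits" of addend as written in base n
        total += addend // n + addend % n
    return CODE_POINTS[(n - total % n) % n]

def validate(s: str) -> bool:
    n = len(CODE_POINTS)
    factor, total = 1, 0  # last character is the check character, so start at 1
    for ch in reversed(s):
        addend = factor * CODE_POINTS.index(ch)
        factor = 1 if factor == 2 else 2
        total += addend // n + addend % n
    return total % n == 0

check = generate_check_character("abcdef")
print(check, validate("abcdef" + check))  # prints: e True
```

The digit sums along the way (0, 2, 2, 1, 4, 5, totalling 14) match the generation table above.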
Weakness

This extension shares the same weakness as the original algorithm, namely, it cannot detect the transposition of the sequence <first-valid-character><last-valid-character> to <last-valid-character><first-valid-character> (or vice versa). This is equivalent to the transposition of 09 to 90 (assuming a set of valid input characters from 0 to 9 in order). On a positive note, the larger the set of valid input characters, the smaller the impact of the weakness.

See also

International Securities Identification Number (ISIN)
Modular arithmetic
Q: function returns pointer to int

My main() crashes below when add(4) is called. As I understand int* add, it should return a pointer to integer. Then, I should be able in main to say: int * a = add(3); to return a pointer to int. Please explain what I'm doing wrong.

#include <cstdlib>
#include <iostream>
using namespace std;

int* add (int a) {
    int * c, d;
    d = a + 1;
    *c = d;
    cout << "c = " << c << endl;
    return c;
}

int main(int argc, char *argv[]) {
    int a = 4;
    int * c;
    c = add(4);
    system("PAUSE");
    return EXIT_SUCCESS;
}

A: The problem is that you have declared an int* but not given it anything to point to. What you need to do is initialize it with a memory location (error checking omitted):

int* c = new int();
...
*c = d; // Now works

Later on though you'll need to make sure to free this memory since it's an allocated resource. A better solution though is to use references. Pointers have several nasty attributes including uninitialized values, NULL, need to free, etc., most of which aren't present on references. Here is an example of how to use references in this scenario:

void add (int a, int& c) {
    int d;
    d = a + 1;
    c = d;
    cout << "c = " << c << endl;
}

int c;
add(4, c);

A: In *c = d; the pointer c is not initialized, so your program runs into undefined behavior. You could do something like the following instead:

void add( int what, int* toWhat ) {
    (*toWhat) += what;
}

and call it like this:

int initialValue = ...;
add( 4, &initialValue );

A: You never allocate any memory to the pointer c. Pointers must refer to valid memory, and you must allocate that memory yourself with a call to new, e.g. write int* c = new int(); within the add function. Now c points to a valid block of memory that is large enough to hold an int. When you are done with that memory, call delete c to deallocate it and release it back to the system.

A: You get an error because c is an uninitialized pointer, so it is undefined behaviour.
Q: Riddle: can you swim faster upstream than downstream (with respect to the water)?

A friend of mine posed a riddle to me:

A man swims upstream in a river, which is flowing at an unknown rate. He is wearing swimming goggles. At a certain point he loses his goggles. 10 minutes later he realizes he lost them, so he immediately turns around and swims back downstream to get them. When he finds his goggles, floating in the water, he finds himself at a point 500 meters downstream from the point where he lost his goggles (with respect to the ground, of course, not the water).

Question: At what speed (km/h) is the river flowing?

-------------- SPOILER -------------

My solution is one and a half kilometres per hour, but another friend does not agree with me; he says swimming upstream is more efficient (with respect to the water) than swimming downstream. Is he correct? If so, why?

A: Since this is not marked as homework...

An equation for this problem can be created as follows:

Velocity of the swimmer is: $V_{sw}$
Velocity of the stream is: $V_{st}$
The ratio of velocity of swimmer and stream is: $k = \dfrac{V_{sw}}{V_{st}}$
The time the swimmer goes upstream is: $t_{u}$
The time the swimmer goes downstream is: $t_{d}$
The distance traveled by the goggles is: $d_{g}$
The distance the swimmer travels upstream is: $(k-1)V_{st}t_u$
The distance the swimmer travels downstream is: $(k+1)V_{st}t_d$

The equation then is: $$(k+1)V_{st}t_d - (k-1)V_{st}t_u = d_g$$

If one sets $k = 1$ and $t_u = t_d = \dfrac{1}{6}hr$ and $d_g = 0.5km$ then one gets the solution of $V_{st} = 1.5 \dfrac{km}{hr}$

For the sake of argument, let's set $k=2$ and $d_g = 0.5km$ and $t_u = \dfrac{1}{6}hr$. Let's first rewrite the equation: $$(k+1)t_d - (k-1)t_u = \dfrac{d_g}{V_{st}}$$ then as: $$(k+1)t_d - (k-1)\dfrac{1}{6}hr = \dfrac{0.5km}{V_{st}}$$ and: $$6t_d - \dfrac{1}{3} = \dfrac{1}{V_{st}}$$ This can be plugged into Wolfram Alpha to find the set of solutions.
Update for Berhard's sake (where now all you need to do is input the values in for $t_d$ and $V_{st}$ since I have already their units): $$6t_d\dfrac{hr}{km} - \dfrac{1}{3}\dfrac{hr}{km} = \dfrac{1}{V_{st}}\dfrac{hr}{km} $$ Note: For future ref, the general form of the equation is:$$x + y + k(x-y) = \dfrac{d}{z}$$ A: he says swimming upstream is more efficient (with respect to the water) than swimming downstream. Go to a swimming pool and try swimming in various directions, the water in any pool on planet Earth is moving at 67,000 miles per hour around the sun. If it is easier "upstream" you should soon find out. If you move the water in this pool to the middle of the Amazon river, the body of water in which you are swimming is still travelling at about the same 67,000 MPH around the sun. If there is some property of the water's motion that makes swimming in one direction more efficient, your friend should explain why it doesn't apply to movement of that water around the Sun A: You are also travelling at the same speed as the water around the sun so it becomes more to do with the motion of the water itself 'on top' of the 67,000 miles an hour If you had to swim against a current you are clearly going to cover less distance going against the current than with it A river with a heavy current could easily carry someone downstream without them even needing to swim so then you add their swimming motion to that and they will travel faster than the river's actual current Swim against the current and you will have a force against the body of the swimmer so if they use the same swimming motion then they will not get as far Simple A: I think it depends on what your friend means by "with respect to the water" When I hear that I think spatially. sample constants: you travel 1 meter per stroke, 1 second per stroke, water traveling 1 meter per second. When swimming downstream you'll cover 2 meters with respect to the ground in 1 second (yay for triathlons that are downstream). 
However, from the point you start your next cycle you're only 1 meter away from the point in the water you started your previous stroke. (This is why you don't pull too quickly when swimming: you don't want to lose your grip on the water, and you want to have the water that you pulled be behind you for your next stroke.) "With respect to the water" you traveled 1 meter. If you're swimming upstream you take one stroke and at that same time the water is going the opposite direction (let's forget about drag.. it is a riddle after all), so you'd be 1 meter from where you were but at the same time you'll get pushed back that 1 meter by the current. You've gone nowhere with respect to the ground. However, from the point of your hand entry to your next stroke you'll have 2 meters of distance between your hand and its previous entry. "With respect to the water" you traveled 2 meters. With respect to the water you are faster swimming upstream! But that's not how races are measured so I want no part of it. If that's not what your friend means by "With respect to the water" I take it all back! A: Swimming upstream is less efficient than downstream. When swimming upstream, you are also swimming uphill. Part of the energy you use for the swimming goes into potential gravitational energy. Therefore you'll have a lower swimming speed w.r.t. the water when swimming upstream. That potential energy is released again when swimming downstream/downhill, and so your swimming speed w.r.t. the water will be higher downstream. The mathematical answer to the question will depend on many unknown factors, like the relation between the descent rate of the river and the velocity of the water, and the energy loss due to friction at various swimming speeds. But it will be more than 1.5 km/h
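A sanity check on the 1.5 km/h answer (a sketch of mine, not from the answers above): in the water's reference frame the goggles are stationary, so the return swim takes exactly as long as the 10 minutes swum away, and the goggles drift with the stream for 20 minutes in total.

```python
t_upstream = 10 / 60    # hours swum away after dropping the goggles
t_back = t_upstream     # equal in the water's frame: the goggles are at rest there
goggles_drift_km = 0.5  # how far the goggles moved over the ground
v_stream = goggles_drift_km / (t_upstream + t_back)
print(v_stream)         # expect about 1.5 km/h, independent of the swimmer's speed
```

This assumes the swimmer's speed relative to the water is the same in both directions, which is exactly the assumption the "uphill" answer above questions.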
Let the Rhythm (and Melody) hit 'em: 3 Communiqués from Classical Music's Long March. An artist needs recognition, but recognition as part of one's creating for oneself. It's a little bit of a contradiction. In other words, you create something that comes out of you, it's your personal thing, you don't think about what someone else will think about it. A true artist should never [do that]. But then, once it's done, the artist needs to know that it got to someone, that it touched someone, that it opened something for someone, that it did something to someone. —Spoken by a twenty-nine-year-old Russian-Israeli pianist living in New York City. Lots of people certainly think [classical music is a dying art]. I have mixed feelings about it. I think if it is a dying art, it's a slowly dying art—fortunately for people like me [who] love it so much. I think it is getting harder and harder for classical music institutions, particularly orchestras, to survive, because they don't rely on a visual element, which is what keeps opera companies in better financial shape. In opera there is more spectacle, which is what people are programmed now to seek out, probably due to the influence of television. I think television really drew people away from being aurally tuned in. I mean, before television, people spent hours around the radio—they were still sitting around a box, but they were just listening. There was no visual element to distract from [what was being heard]. Pure listening is really a vehicle for exploring the inner world of your imagination and emotions when you're not being spoon-fed visual imagery to go along with the sound. I think we gradually got more and more bombarded by visual stimuli that took us out of the realm of hearing. Think about where we used to be culturally after the Second World War. 
You know, it used to be that there was an NBC Symphony Orchestra, there was classical music programming that was part of mainstream popular television in its early years—it was integrated that way. Certainly classical music was all over the radio. There were many classical music stations, there were live broadcasts of concerts by major orchestras with lots of people tuned in around the country. Leonard Bernstein started those young people's concerts [on television], but I don't know if that was a reaction to the beginning of an erosion of that kind of integration of classical music in the mainstream society or not. We've [also] lost, I think, almost completely, formal music education in public schools, which I think is the biggest tragedy to befall this country as far as music and art goes—because I think the same thing's probably happened with visual arts, although I just don't know enough about that. I know from my own experience in public school in upstate New York that the musical education I got was terrible, totally uninspired. Teachers would basically just give us a study hall most of the time. And I've heard it's only gotten worse, and in some places just doesn't exist. If there's no exposure at all from an early age, given that it is such a marginal part of popular culture, without that exposure these kids, who could grow up to be potential audience members, have no real point of reference and it may as well be Chinese Opera to them. There was [also] the timing of the evolution of the twelve-tone system and atonal music, which was by far the dominant trend in contemporary classical music of the time, we're talking mid-twentieth century. There's always a lag in the acceptance of new works, but the dissonances of atonal music alienated the broader audience to a greater degree than in the past. Contemporary classical music became, or had to become, very academic based; I mean, that's where it really survived. 
It went into hibernation in a way, or at least into retreat in institutions, where composers weren't as concerned with the general public's reaction to their work. That has changed today, at least for many contemporary American composers, who have by and large moved back toward more tonal styles. It's a different scenario [in Europe], at least in my experience, and I've been over there a fair amount over the past ten years or so. Most of my exposure is in France and a little bit in Germany. Classical music seems to be in a lot better shape there, and I think there are several reasons for that. One of them is that its roots are there—it was transplanted here. At the time it was transplanted here it was much more mainstream; popular music and classical music were much closer together—not entirely unified, but the distance was very small. In Europe the musical tradition is longstanding, and really part of who they are. You know, for them, Beethoven walked the streets. It's that kind of situation. There's a stronger tie to begin with, and then the trend is also set from the top down—which is completely not the case here. The government there recognizes that art and culture is a public right, something that the average Joe is entitled to, and therefore deserves a significant amount of government support. I don't know much about the way music is taught in European public schools, but I assume that it's vastly better than it is here. From what I see when I go to concerts there, I get a sense that European concert-goers are much better educated audiences. I was just recently in Berlin and heard the Berlin Philharmonic for the first time in their own hall, and the way people listened there actually astounded me. I have never been in a hall here or anywhere else where there was that level of attention—there was a real reverence for the art form there. 
I didn't hear the program pages or shopping bags ruffling, or the interminable fiddling with plastic candy wrappers—I'm used to hearing all of that here; and surprisingly, no cell phones went off either, but that might have been a fluke. I've [also] seen a larger percentage of younger people in European audiences. For instance, I read somewhere that at the Opera at the Bastille, all of these celebrities come for the season opening. It's sort of like an Oscar event or something where there's a lot of press attention. And of course, the productions are much less conservative there. But I also want to say that in Europe, my impression is that things are going in the same direction, but I think because of the tradition, it's moving more slowly. In France, for instance, I know that there is national [television] coverage of a singing competition. Here we have American Idol, there they'll be broadcasting an operatic singing competition. But most of the stuff they have over there is modeled on American programming. It's the same crap. What's the game where they make money for every correct question they answer? [Who Wants to Be A Millionaire?] Those kinds of things—all of that vapid stuff is there too. The question is how long it takes for mainstream popular culture to erode the longer tradition. I think the same is also true for healthcare, and other things that have nothing to do with arts and culture. The trend in Europe is away from traditional socially based public programs, and towards American-style privatization and free-market capitalism. [Here,] most people's exposure to classical music is likely to come from a car ad or a diamond engagement-ring ad on television, where companies use classical music for the prestige, elitist, social associations it has for people who don't have a lot of exposure to it—I always protest when people call opera and classical music elitist because of high ticket prices and then go spend more money on a Knicks ticket.
But, using only the most superficial elements of classical music, its beautiful melodies, the advertising industry can appeal to people. In a first hearing of anything, people are drawn to the melodic line. Of course, there is much more going on than a melodic line—there's a whole universe, a vast landscape of expression so varied and complex that you can spend a lifetime completely dedicated to exploring it and never get to the end. But just the superficial layers—the blunt instruments, if you will—are enough, and based on that alone, Andre Rieu and the Three Tenors and performers like them are able to pack stadiums. And since many people aspire to be elitist themselves, those associations are not always negative—that's why Renee Fleming, Placido Domingo, and other classical musicians are hired to do Rolex ads. I'm torn when it comes to the argument about exposing new audiences to classical music. The purist in me wants them to hear it in its original form. There is a way of initiating somebody that way when you have unlimited time and resources, but the ideal way is not always practical. For most people, the only hope of opening their ears to classical music is likely to be through what I consider a tainted form. I had the misfortune of hearing at some point a recording of Il Divo or Amici Forever or one of those groups, and for me they are nothing but money machines based on the marketing of some fashion or image ideal, using incidental music—classical music distorted into schmaltzed-out junk, to put it kindly. It's basically a Rolex ad with a soundtrack. But even that stuff has a potential side benefit. Maybe if they hear Charlotte Church it will open their minds, maybe they'll get curious and go out and see an opera and they'll get hooked and we'll have a bigger audience. 
There's always a chance that exposure of any kind will stimulate some kind of interest that could lead to something else, and I would rather allow for this opportunity than not have one at all. I have nothing against popular music. I grew up listening to David Bowie and Rolling Stones albums that my parents put on. But for me, on an absolute level, there is a major qualitative difference between the two. They have different levels of depth. Classical music demands your full attention. There's just much more there. Compared to something like a Beatles song—which is getting up there in terms of complexity and depth in a popular-music song—the complexity of a Beethoven symphony is exponentially greater. [Someone saying, "Well, you're going to make me listen to some Schubert Lieder or something like that and I'm going to put that next to Gnarls Barkley's Crazy or, I don't know, Crimson and Clover, and I just happen to have a more emotional response to the pop song"] is totally justified. A lot of time has elapsed since Schubert wrote his Lieder. For the contemporary listener, even growing up in a family with a lot of [classical music] exposure, there are a lot of hurdles to get over. Stylistically, it's a very foreign world to ours. Appreciating a Schubert Lied requires an understanding of text sung in a foreign language, a sensitivity to poetry in general, and a sensitivity to the musical style which only comes from repeated exposure. If you are uninitiated to all of that, then of course it's much easier to find a greater appreciation for something that's ultimately, in my mind, of lesser quality, because it is more familiar, it's more accessible. That said, for me, you can't compare the two in terms of absolute artistic quality. [The world is getting worse] environmentally, politically, in terms of respect for human and other forms of life, in almost every way I can think of. I don't know why classical music would be an exception.
Some people, the true idealists, think that classical music can save the world, and I hope they are right. I wish I could agree with them but I tend to be practically minded. So, I don't ultimately have a lot of hope. Maybe there is some way to slow down the erosion process even more, or turn it around—I just don't know of one. Organizations that present classical music will have to adapt to contemporary society in order to survive as long as they can. The trick is not to degrade the art form in the process. For some abstract reason, which I can't explain up front, it takes more time for music to move forward as well as to react to society. When you look at other art forms like painting or literature, it is always precursor-oriented. There is something prophetic about it. In music as well—but for the happy few. It takes much more time to get to the broader audience. Repetition and conformism always exist, but human beings have no patience or taste for that at some point. They need a change. Music, finally, is based on something very simple; feelings, I will say. Moods, colors. So there is hope because there are so many billions of people on the planet, there are so many ways to express yourself. Why are we still playing [composers] who were born in the sixteenth century? Who died centuries ago? In Europe [the early music revival] is huge now. It's maybe [even] too much. Twenty-five years ago, people were making fun of these people like crazy. But now there is not only a group of fanatics, but there is a market for that—a huge market. So, is it right or wrong? Well, it's happening. That's the interesting thing. [Now, phenomena like Andre Rieu and Il Divo] damage the music itself for sure. That's for sure. There are no more—I am going to use a provocative expression—there are no more rock stars in classical music. Maybe Yo-Yo Ma in cello. Nigel Kennedy in violin. I'm saying huge personalities. [Violinist Itzhak] Perlman is unbelievable, but his career is behind him. 
[Pianist Evgeny] Kissin—but he's a man, a former prodigy, in his ivory tower. Maybe [Pianist] Lang Lang is the new provocative object. People hate him and love him because he can be so substantial and so superficial. I think this is maybe why classical people start doing [provocative] things—because the general audience needs not only names but faces. [In order to successfully market classical music on a mass level], maybe people need faces with names—I'm not completely against that. But how they market it—the fact that they market it is great—but the way it is done is awful. Playing in stadiums and playing weak arrangements of Vienna Waltz for Rieu or excerpts of famous operas for the Three Tenors. But take Opera on the Common [in Boston], which doesn't exist anymore, we don't know why. But, well, even if it was half crappy, people were reading the story of Carmen, or there was a Le Nozze di Figaro—it was sung in English, which was a terrible idea, I think—but there is a tiny door open. Maybe they will go to a Boston Symphony Orchestra concert, or they will go to a ballet. Look what they do in New York now at the Met. They are broadcasting operas outside in major squares in New York City. [And] in movie theatres. They are going to start doing the same in Europe. I would never ask myself, [is classical music a dying art?] if I were living in France. Not because it's going especially well. But maybe there is room for dying arts over there. I have the feeling that this question is directed towards efficiency. It has to produce something. And what if it is not producing anything, although it does exist within society? [What if it is just existing in an infirmary, but indefinitely?] You have six fingers, and you rarely use the sixth one, but it's there. And that's cool. Why not? [In the end], music is not a language, it's something that language cannot express. So that's why classical music can't disappear, but also can't instantly reach and speak to everyone. 
Because we need those undefined parts of our existence. And I think it's very important for our balance; our psychic equilibrium also. This may be too personal, but when I was a teenager, for my own reasons I stopped playing classical music for three years. I played reggae, jazz, rock 'n' roll. And when I came back to it, to classical music, I felt that I would never leave it. I felt it was the most profound way to express yourself musically. I felt I would never leave it again. It was a very emotional experience for me. It's very funny with classical music. It's part of humanity, but it's also something which stands behind the screen. It can't be materialized. There is no canvas, there is no paper. Of course we use all of those tools, but we create something that isn't palpable. This is what makes it so special. I'm not saying that music is above all arts, but this is the tiny tiny piece of spirituality in our life, and I'm not sure it's going to die. Because I'm not a believer, so I need that. You will never hear modern classical music played in Walgreen's, but that's a good thing. Just because it's not popular enough to be played while you are shopping for soap does not mean there's no audience. The cellist Matt Haimovitz plays in bars. I think that's great, but I'm sure that he prefers it when, even in a bar, people listen to the music he's playing. Recall that bebop jazz musicians fought against having their music associated with atmosphere. They wanted the respect of having people sit down and listen. Since music is everywhere, it is not heard. There is a thirst to acquire music on your iPod like any other commodity. I wonder if anyone actually listens to all of the 3,000 or more songs they have downloaded. The Washington Post recently did a story on how, as an experiment, the acclaimed violinist Joshua Bell played for free in a Washington, D.C. Metro station. Big surprise: no one stopped to listen. What a scandal!
But no one stops to listen to buskers playing popular music either, even in the Paris Metro. Here's an old one: Contemporary classical music has been relegated to the universities, where stuffy musicians can sit and stew about how no one notices how important their work is. We can blame Arnold Schoenberg, whose Society for Private Musical Performances banned anyone who wasn't in the elite crowd from attending concerts. Or we can blame music departments for having the same disease that afflicts the social sciences— that is, science envy. Although books like Musicophilia and Musimathics are fascinating, I challenge any of them to speak to the mystery and logic of music as articulately as ten bars of a Bach invention. Association of classical music with upper levels of society: Who advertises on classical music stations? Diamond retailers, luxury cars, and piano dealers. Access to music lessons is mostly the privilege of the rich. Instruments and lessons are expensive and time-consuming. Since I live in Boston, I should probably speak to the health of classical music here. There is a "new music" community, but it seems insular to me. The same people (mostly composers) attend every concert and vie with each other for who can write the most abstruse and colorfully titled piece. Again, blame Schoenberg. The U.S. has been looking to Europe to make up for its lack of tradition for a long time. It has been common practice to send young, talented composers abroad to write watered-down versions of whatever is popular on the Continent. Like creative artists anywhere, American composers need to remember that what they are trying to communicate begins on a personal, not a national, level. The aesthetic should follow. On the supposed lack of audience: This is true for small groups, but at big venues there are plenty of people. Audiences want to see established personalities, just like in the pop world. Avant-garde music has always had a limited, if not resistant, audience. 
In his new book, The Rest is Noise, Alex Ross tells the story of Hitler ordering people off the street to be dragged into a Wagner opera by force because of low attendance. On another note, when Stravinsky premiered the Rite of Spring, there were riots. The difference between one hundred years ago and now is that people cared enough about the supposed classical "tradition" to boo when they felt it was threatened. The Internet has actually made it easier for the average composer to be heard. Apparently MySpace was started for musicians? The line between classical music and pop is blurring more and more, which has created many interesting genres. Pop often relies on unique timbres, rather than complex harmonies, to create sound worlds. This "chic" trend reaches back to Debussy, who was heavily influenced by gamelan music he heard at the Paris Exposition of 1889. Sacks, Oliver. Musicophilia: Tales of Music and the Brain. New York: Knopf, 2007. Ross, Alex. The Rest is Noise: Listening to the Twentieth Century. New York: Farrar, Straus and Giroux, 2007.
bug-texinfo

Re: behavior of @math with HTML output

From: Patrice Dumas
Subject: Re: behavior of @math with HTML output
Date: Mon, 17 Oct 2022 09:38:41 +0200

On Sun, Oct 16, 2022 at 10:08:17PM -0700, Raymond Toy wrote:
> > I'm wondering why MathML (presentation markup) isn't available,
> > as according to the test at
> >
> > http://eyeasme.com/Joe/MathML/MathML_browser_test.html
> >
> > it seems to be well supported by Firefox. It is as nice as SVG,
> > there don't seem to be scaling issues, and the text can be
> > selected.
> >
>
> Possibly a bug in MathJax? Or because MathJax is using SVG instead of
> MathML for displaying math. I'll have to read the MathJax docs again to
> know what MathJax uses to render equations.
>

Seems like MathML output was dropped from MathJax because it is not
portable:
https://docs.mathjax.org/en/latest/output/mathml.html#mathml-output

This page has some code to still use MathML output, but it is probably
not a good idea to try to maintain a deprecated feature in the long
term.

That being said, there could be some way to add to/modify/replace the
MathJax configuration block output by texi2any if somebody feels that
it will be used.

--
Pat
\section{Introduction} Cosmology, especially its observational sector, is currently a thriving field of physics. On the theoretical side opinions have converged to what is nowadays dubbed {\it cosmological concordance model} (CCM). But, despite all the successes of this model in describing different cosmological observations, we should not fool ourselves into believing that the grand picture of cosmology stands on a firm basis. The reason for this is simple: Interpretation of the data within the concordance model leads inevitably to the introduction of the concepts of {\it dark matter} and {\it dark energy}\footnote{See \cite{Fukugita2004} for an inventory of cosmological parameters.}. We surely could live with such concepts by stating that they depend on some peculiar details which yet have to be added to the description of our universe. Unfortunately, dark matter and energy make up the complete energy budget within our simple picture and therefore cannot be treated as some minor details which remain to be worked out. This is clearly an embarrassing situation which needs to be addressed by cosmologists. In the following we will have a glance at the theoretical landscape of cosmology and pay special attention to the non-Riemannian approach. Non-Riemannian extensions of our current gravity theory, i.e.\ General Relativity (GR), represent a well motivated framework and have been discussed extensively in the literature. In this short review we only address questions which are related to the possible cosmological significance of such an approach. Readers who want to learn more about the fundamentals of non-Riemannian gravity theories, the gauge theoretical approach to gravity, and metric-affine gravity (MAG) should consult the excellent reviews \cite{Blagojevic,Goenner2004,Hammond2002,Hehl1976,PhysRep}.
\section{Theoretical landscape} With the right amount of crudeness one could summarize the reasons to consider a drastic step, like the change of the gravity theory which underlies cosmology, as follows: \begin{itemize} \item Large amounts of dark matter/energy necessary to fit current observations within the CCM. \item No direct observation of a dark matter particle in the laboratory. \item No theoretical explanation for the smallness of the dark energy component compared to quantum field theory estimates of the vacuum energy. \item No reason to believe that GR is valid in the early universe, i.e.\ at high energies. \item No test of Newtonian/general relativistic gravity on cosmological scales. \end{itemize} In the following we have a glance at some of the proposed remedies for this situation. \subsection{Alternatives} Although we know for sure that GR has to be modified in order to make it compatible with quantum theory \cite{Kiefer}, we do not have any final form of this new gravitational theory. Additionally, we do not know what possible low-energy modifications, and thus modifications that may play an important role for the aforementioned observational problems in cosmology, caused by such a new theory will look like. In table \ref{table_models} we provided a very rough overview of current theoretical approaches to extend/replace our current gravity theory and thereby also our cosmological model. The separation into different model classes is sometimes not unique. For example, one could also count the non-symmetric gravity models as non-Riemannian models and all of the models listed in table \ref{table_models} could in principle also have a non-trivial topology.
\begin{table}[t] \begin{center} \caption{Some examples for different classes of models recently used to explain observations of cosmological significance.} \begin{tabular}{|p{3.5cm}|p{4.5cm}|} \hline \textbf{Model type} & \textbf{Description} \\ \hline Scalar-tensor theories& Modified Lagrangian, additional scalar field (maybe a leftover from some higher theory) non-minimally coupled to the Ricci scalar.\\ \hline Higher dimensions& Our universe represents only a 4-d brane in a 5-d bulk, gravity assumed to be the only interaction which propagates in the bulk.\\ \hline $f(R, R_{\alpha \beta}, \dots)$ models& Modified gravitational Lagrangian in terms of the curvature.\\ \hline Topological models& Universe assumed to have non-trivial topology, i.e.\ impose some additional global properties of spacetime which GR, as a local theory, makes no statements about.\\ \hline Non-symmetric gravity& Theories in which the metric $g_{\alpha \beta}$ is no longer symmetric.\\ \hline Tensor-vector-scalar theory&Additional vector field introduced by hand into the definition of the metric, extended Lagrangian which contains the additional vectorial quantity and an extra scalar field. \\ \hline Non-Riemannian models& Spacetime no longer Riemannian, new field strengths torsion $T^\alpha{}_{\beta \gamma}$ and nonmetricity $Q_{\alpha \beta \gamma}$ couple to intrinsic properties of matter such as the spin.\\ \hline \end{tabular} \label{table_models} \end{center} \end{table} \begin{figure} \includegraphics[width=80mm]{different_spacetimes_small.ps} \caption{Classification of different spacetime types according to the non-Riemannian scheme. By switching off torsion and nonmetricity we arrive at the usual Riemannian spacetime as encountered in GR.} \label{fig_spacetimes} \end{figure} \subsection{Non-Riemannian gravity} One of the general frameworks for non-Riemannian gravity theories is metric-affine gravity (MAG) as reviewed by Hehl et al.\ in \cite{PhysRep}. 
In the following we focus on the new geometrical notions in metric-affine gravity and try to explain their possible impact on cosmology. The reasons to pick MAG as a starting point are varied: (i) Among the different alternative gravity theories MAG represents a very well motivated and natural generalization, cf.\ the introduction of \cite{PhysRep} for a list of arguments, and (ii) there exists a general Lagrangian formulation of MAG according to which many other non-Riemannian theories may be systematically classified. (iii) There exist several exact (also non-cosmological) solutions for MAG which rank it among the best studied alternative gravity theories of recent years. (iv) The idea to couple intrinsic features of matter to new geometrical quantities can be viewed as the natural prolongation of the line of reasoning which led to the formulation of the so far most successful gravity theory, namely GR. \subsection{Metric-affine gravity} In metric-affine gravity the spacetime continuum which contains matter carries both stresses $\sigma^{\alpha \beta}$ (or momentum currents) and hyperstresses $\Delta^{\alpha \beta \gamma}$ (or hypermomentum currents). The geometry of spacetime is described by means of a metric $g_{\alpha \beta}$ and an independent affine connection $\Gamma^\gamma_{\alpha \beta}$. The metric is still symmetric, i.e.\ $g_{\alpha \beta} = g_{\beta \alpha}$, but the connection is no longer given by the metric-compatible connection $\left\{ {}^\alpha_{\beta \gamma}\right\}$\footnote{$\left\{ {}^\alpha_{\beta \gamma}\right\}:=\frac{1}{2}g^{\alpha \mu} \left( \partial_\beta g_{\gamma \mu} + \partial_\gamma g_{\beta \mu} -\partial_\mu g_{\beta \gamma} \right)$.} known from GR and may be asymmetric $\Gamma^\gamma_{\alpha \beta} \neq \Gamma^\gamma_{\beta \alpha}$.
If we define Cartan's torsion tensor $T^\alpha{}_{\beta \gamma}:=\Gamma^\alpha_{[\beta \gamma]}$ and the nonmetricity tensor $Q_{\alpha \beta \gamma}:=-\nabla_\alpha g_{\beta \gamma}$ then, cf.\ \cite{Schouten}, the affine connection might be split up as follows \begin{eqnarray} \Gamma^\alpha_{\beta \gamma}=\left\{ {}^\alpha_{\beta \gamma}\right\} &+& T_{\beta \gamma}{}^{\alpha} - T_\gamma{}^\alpha{}_\beta + T^\alpha{}_{\beta \gamma} \nonumber \\ &+&\frac{1}{2} \left( Q_{\beta \gamma}{}^\alpha + Q_\gamma{}^\alpha{}_\beta - Q^\alpha{}_{\beta \gamma}\right).\label{gencon} \end{eqnarray} Furthermore, one assumes that the momentum current $\sigma^{\alpha \beta}$ of matter essentially couples to the metric $g_{\alpha \beta}$ whereas the hypermomentum current $\Delta^{\alpha \beta \gamma}$ couples to the affine connection $\Gamma^\alpha_{\beta \gamma}$ of the spacetime. From the last assumption and the splitting of the connection as given in eq.\ (\ref{gencon}) it becomes clear that MAG incorporates several other alternative gravitational theories, as well as GR itself. For example it is well known from Einstein-Cartan (EC) theory that the torsion of spacetime couples to the intrinsic spin of particles. In figure \ref{fig_spacetimes} we sketched how different spacetimes, and thereby different alternative theories which make use of these richer spacetime concepts, may be classified with respect to the torsion and nonmetricity. The gravitational field Lagrangian of MAG is expected to be of the form $L=L\left(g,\partial g, \Gamma, \partial \Gamma \right)$ and a matter Lagrangian minimally coupled to the new geometrical fields $L_{\rm m}=L_{\rm m}\left( \psi, \nabla \psi, g\right)$. The gravitational field equations are given by the variational derivatives with respect to the metric $\delta L / \delta g_{\alpha \beta} \sim \sigma^{\alpha \beta}$ and the connection $\delta L / \delta \Gamma^\alpha_{\beta \gamma} \sim \Delta_{\alpha}{}^{\beta \gamma} $.
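To make the limits sketched in figure \ref{fig_spacetimes} explicit, note that switching off both post-Riemannian field strengths in eq.\ (\ref{gencon}), i.e.\
\begin{equation}
T^\alpha{}_{\beta \gamma}=0 \quad {\rm and} \quad Q_{\alpha \beta \gamma}=0
\quad \Rightarrow \quad
\Gamma^\alpha_{\beta \gamma}=\left\{ {}^\alpha_{\beta \gamma}\right\},
\end{equation}
recovers the Riemannian connection and thereby the spacetime of GR, whereas imposing only $Q_{\alpha \beta \gamma}=0$ leaves the Riemann-Cartan spacetime of EC theory, in which the torsion terms in eq.\ (\ref{gencon}) survive.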
We only mention here that a very general suggestion for the dynamics of this theory, which makes use of a slightly different but equivalent notation from the one used here, has been made in \cite{Exact2}. \section{Cosmology} In \cite{Puetzfeld2004} we provided a brief chronological guide to the literature on non-Riemannian cosmological models. Therein the developments in cosmology were traced back to the early seventies and were given in table form. Most of the early non-Riemannian cosmological models were based on Einstein-Cartan theory. Investigations mainly revolved around the construction of exact solutions and the question of whether or not an initial singularity can be avoided in such models. In the 1980s more general types of Lagrangians were considered. The inclusion of quadratic terms in the Lagrangian, leading to dynamical degrees of freedom, was mainly motivated by the framework of Poincar\'{e} gauge theory (PGT) and led to new classes of exact solutions. The story continued with the advent of the inflationary model, which led to investigations which tried to mimic or justify this new idea within different non-Riemannian scenarios. Till the end of the 1990s most of the works in non-Riemannian cosmology (NRC) were focused on the description of the early stages of the universe. This bias can mainly be ascribed to the estimates for the new spin-spin contact interaction encountered in Einstein-Cartan theory. This interaction shows up at extremely high energy densities\footnote{In \cite{Hehl1973,Hehl1976} it was estimated that this may be the case at approximately $10^{47}\, {\rm g/cm}^3$.} and might therefore play a crucial role only in the early universe. This focus has changed during recent years, mostly due to the persisting need for large amounts of dark matter and, more recently, also dark energy. 
The requirement of dark energy at late stages of the cosmic evolution might be taken as an indicator for the presence of new physics possibly due to some non-Riemannian relics in cosmology. Since the field equations of the FLRW model are extremely simple and the main evidence (see \cite{Barris, Tonry} for the latest SNIa samples) for the new dark energy component comes from cosmological tests which are related to the expansion history of the universe, it is natural to study the impact of changes in this history and their possible origin. There have been several suggestions for modifications during the last years coming from different directions, cf.\ table \ref{table_models}. Non-Riemannian models, especially MAG, provide a very good starting point for the study of changes of the cosmological field equations which are justified by a grander theory, in the case of MAG by the general Lagrangian provided in \cite{Exact2}. Let us now come to the rhs, i.e.\ the matter side, of the field equations. \subsection{Continua with microstructure} \begin{figure} \includegraphics[width=80mm]{dislocbub.ps} \caption{In more complex fluid models matter may be represented by a medium with dislocations or finely dispersed voids. Non-Riemannian models allow for a natural coupling of the new geometrical quantities like torsion and nonmetricity to such kinds of fluid properties.} \label{fig_dislocbub} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=100mm]{cosmologicalmodelbuilding_small.ps} \caption{How to build and compare cosmological models?} \label{fig_modelbuild} \end{figure*} A very promising approach to construct cosmological models in a non-Riemannian setup is related to the availability of more sophisticated fluid models. In the case of metric-affine gravity Obukhov \& Tresguerres \cite{Obukhov1993,Obukhov1996} devised a fluid model termed the {\it hyperfluid}\footnote{See also \cite{Babourova1998}.}. 
This kind of fluid can be used as a natural source for the hypermomentum current which appears on the rhs of the field equations of MAG. The new degrees of freedom in such a fluid model can be coupled to the new geometrical properties, i.e. torsion and nonmetricity. In the hyperfluid picture, which can be viewed as a generalization of early spin-fluid models \cite{Weyssenhoff,Kopczynski,Obukhov1987,deRitis1983,deRitis1985,Ray}, the motion of the fluid is described by the usual four-velocity and a triad attached to each fluid element, which can undergo arbitrary deformations during the motion of the fluid. This is analogous to the description of continua with microstructure \cite{Capriz} in the theory of elasticity. In figure \ref{fig_dislocbub} we sketched two examples of such media, namely one with dislocations and another one with finely dispersed voids. The hyperfluid and special cases of it were used in several cosmological models in the past\footnote{See \cite{Puetzfeld2004} for a list of references.}; a systematic treatment for a fairly general Lagrangian of MAG will be published in \cite{Puetzfeld2005}. \section{How to test and compare?} As we sketched in figure \ref{fig_modelbuild}, the list of different cosmological tests is quite long, and still growing. Usually one starts with a theoretical model from the upper portion of the figure and then compares it to some of the observations in the lower portion of the figure in order to falsify it. In view of the sometimes very different theoretical approaches this can become quite cumbersome, i.e.\ one has to spend a lot of time to work out the single tests in a scenario which significantly deviates from the cosmological concordance model. 
Therefore one of the most pressing tasks in cosmology is the development of a post-Newtonian framework which allows us to compare different theoretical approaches in a systematic and somewhat standardized way, and hopefully allows for a fast backreaction of the cosmological tests on the theoretical model. In figure \ref{fig_modelbuild} we denoted such a framework, which has yet to be developed, by the connecting middle part. \section{Conclusion \& Outlook} Up to now there seems to be no real competitor model in the non-Riemannian context which can replace the current cosmological concordance model and at the same time explain the effects caused by dark matter/energy in a purely gravitational way. Most of the models proposed so far are either not worked out to a sufficient level of detail, fail one or more of the cosmological tests, or are not distinguishable from the CCM with the currently available data. But, and this cannot be stressed enough, we are clearly only beginning to explore the different possibilities of non-Riemannian models. This is especially true for the cosmological sector of metric-affine gravity which currently only covers a very small region of the theoretically permissible parameter space. \bigskip \begin{acknowledgments} The author wants to thank F.W.\ Hehl for constant advice and support. The financial support by M.\ Pohl and SLAC is gratefully acknowledged. Additionally, the author wants to thank S.\ LeBohec for the hospitality during his stay at University of Utah. \end{acknowledgments} \bigskip
Aspen boys start with jammed schedule
Dale Strode, The Aspen Times

Game day. That means the basketball season opens today for the Aspen High School boys basketball team. And the season begins with a flurry of games for the senior-laden Skiers. "Five games in a week … that's a quick start," Aspen head coach Steve Ketchum said as he prepped the Skiers for today's 7 p.m. game at Grand Valley (0-2). The Skiers (0-0) will play four games in the next five days and five games in the next seven days as part of a jam-packed pre-holiday schedule. Adding to the importance of today's season opener, it also is a Western Slope League game for both teams. "We are so ready to play a game. The players are sick and tired of (practicing) against each other … we've been working so hard," said Ketchum, in his 16th season as the head coach of the Aspen boys. He said the Skiers worked extremely hard in their traditional two-a-day workouts. "But we needed those practice days to get ready," Ketchum said. The Skiers, after today's opener, will play three games in the annual Glenwood Springs Tournament on Thursday, Friday and Saturday. The Aspen boys will take on Class 4A Glenwood Springs, Class 4A Eagle Valley and traditional 3A powerhouse Faith Christian on consecutive days. "It's a great opportunity for us. It's a great opportunity for us to see who we are," Ketchum said of the opening four games, including three tournament games in Glenwood Springs. "We've got really, really strong senior leadership this year," Ketchum said of a roster that is eight strong in seniors, including returning all-conference selections Clayton Crawford and Trent Lichtenwalter. "Those two are model students, model players … the kind of kids you'd want your kids to grow up to be like," Ketchum said. Crawford, a Class 3A all-state selection last year when the Skiers went 19-5, is a four-year varsity player for Aspen. He assumed a starting role halfway through his freshman season, and he's been starting ever since. 
Lichtenwalter, an all-state honorable mention selection for the Skiers last season, is a two-year starter and three-year varsity player for the Skiers, who advanced to the second round of the 3A state playoffs last season before falling to eventual state runner-up Kent Denver. Aspen beat La Junta in the first round of the state playoffs last year. "They both lead by example. They don't say a lot, but they get it done," Ketchum said of the 6-4 twin towers who can dunk and shoot 3-pointers with equal aplomb. "Trent … the transformation in him is unbelievable," Ketchum said. "The last two years, he's dedicated to getting better, stronger." The extensive fitness training, including CrossFit, has made Lichtenwalter not only stronger, but more explosive. "He's not the same player you saw a year ago," Ketchum said. Fellow senior JJ Ready was selected by his teammates as one of the team captains this year, Ketchum said. Parker Johnson and Levi Wright also return as seniors this season. Junior Evan Patzoldt, another all-conference selection last year, also is back for the Skiers. "Evan has improved by leaps and bounds," he said of the Aspen junior who not only made the all-star team at the summer Aspen Basketball Academy, but he was named the academy MVP. "His basketball IQ is going through the roof," he said of Patzoldt's propensity for involving his teammates. Another junior, Dominic Alcorta, also will boost the Skiers this season. "He really understands the game. He's the son of a coach," Ketchum said. Alcorta's mother, Debbie Alcorta, was the longtime girls basketball coach at Basalt who led the Longhorns to multiple Class 3A state tournaments. "And he's grown. He's 6-4 … long. He has a high-skill level," the veteran Aspen coach said. Ketchum said his starting five has a chance to be one of the best he's fielded in his 16 seasons at Aspen High School. "How are they all going to fit together? Who will be the first off the bench? 
With five games in seven days, I think we'll find out," Ketchum said. The Aspen boys will open the home season Dec. 17 with a conference game against Cedaredge. After the holiday break, the Skiers will return to action Jan. 10 against Coal Ridge. dstrode@aspentimes.com
# A 2.19 g mass of potassium nitrate is dissolved in a 75 g mass of water. What is its percentage concentration?

#### Answer:

% by mass = mass of solute / (mass of solute + solvent) × 100% ≈ 3%

#### Explanation:

% by mass = (2.19 g)/(2.19 g + 75 g) × 100% ≈ 2.8%

The volume of this solution would not be much different to the volume of the starting solution. And the likely solvent is water.....
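The arithmetic above is easy to check numerically. A minimal sketch (the helper function name is ours, not from the original answer):

```python
def percent_by_mass(solute_g, solvent_g):
    """Mass percent concentration: solute / (solute + solvent) * 100."""
    return solute_g / (solute_g + solvent_g) * 100.0

# 2.19 g of KNO3 dissolved in 75 g of water:
print(round(percent_by_mass(2.19, 75.0), 2))  # → 2.84, i.e. roughly 3%
```

Note that the denominator is the total mass of the solution, not the mass of the solvent alone.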
Q: Regarding Payeezy payment integration

Well, I am using Payeezy as a payment gateway, integrated through a single-page PHP script. Below is my code:

1) index.html <html> <body background="skyblue"> <h1 style="text-align:center">User Payment Page</h1> <br> <form action="Samplegge4payment.php" method="post"> <table align="center" border='1'> <!--<th>Enter Product Description</th>--><th>Enter Amount(USD)</th> <!--<TR><TD><input type="text" name="x_desc"/></TD>--> <TD><input name="x_amount" value="" type="text"></TD></TR> <TR> <TD colspan=3><center><input type="submit" value="Pay Now"></center> </TD> </TR> </TABLE> </form> </body> </html> 2) Samplegge4payment.php <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd" > <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title> Payment Pages: Sample PHP Payment Form </title> <style type="text/css"> label { display: block; margin: 5px 0px; color: #AAA; } input { display: block; } input[type=submit] { margin-top: 20px; } </style> </head> <body> <h1>Processing Please Wait...</h1> <form action="https://demo.globalgatewaye4.firstdata.com/payment" method="POST" name="myForm" id="myForm"> <!--<form action="https://checkout.globalgatewaye4.firstdata.com/payment" method="POST" name="myForm" id="myForm">--> <?php $x_login = "HCO-DEMO-202"; // Take from Payment Page ID in Payment Pages interface $transaction_key = "0ZFNA2QayjNbeiiHKulT"; // Take from Payment Pages configuration interface //$x_relay_response="T5ZvGEJwQwQdtIZmNgCo"; //$x_desc=$_POST['x_desc']; $x_amount = $_POST["x_amount"]; $x_currency_code = "USD"; // Needs to agree with the currency of the payment page srand(time()); // initialize random generator for x_fp_sequence $x_fp_sequence = rand(1000, 100000) + 123456; $x_fp_timestamp = time(); // needs to be in UTC. Make sure webserver produces UTC // The values that contribute to x_fp_hash $hmac_data = $x_login . "^" . $x_fp_sequence . "^" . 
$x_fp_timestamp . "^" . $x_amount . "^" . $x_currency_code; $x_fp_hash = hash_hmac('MD5', $hmac_data, $transaction_key); echo ('<input name="x_login" value="' . $x_login . '" type="hidden">' ); echo ('<input name="x_amount" value="' . $x_amount . '" type="hidden">' ); //echo ('<input name="x_desc" value="' . $x_desc . '" type="hidden">' ); echo ('<input name="x_fp_sequence" value="' . $x_fp_sequence . '" type="hidden">' ); echo ('<input name="x_fp_timestamp" value="' . $x_fp_timestamp . '" type="hidden">' ); echo ('<input name="x_fp_hash" value="' . $x_fp_hash . '" size="50" type="hidden">' ); echo ('<input name="x_currency_code" value="' . $x_currency_code . '" type="hidden">'); //create parameters input in html foreach ($_POST as $a => $b) { echo "<input type='hidden' name='".htmlentities($a)."' value='".htmlentities($b)."'>"; } ?> <input type="hidden" name="x_show_form" value="PAYMENT_FORM"/> </form> <script type='text/javascript'>document.myForm.submit();</script> </body> </html>

Now I am getting the page below after pressing the submit button (Payment Screenshot). It's fine, as it is showing only the amount and processing successfully. But the issue is that I want the PHP script to carry a product name, description, quantity, ID & price, and to have these reflected on the payment processing page. I know I have to customize some data in my demo Payeezy account, but I don't know which settings, or how to see the changes along with product description, quantity, price, ID and amount. Can anyone help me please?
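For what it's worth, the `x_fp_hash` that the PHP above computes can be reproduced outside PHP, which is handy when debugging whether the hosted page rejects the form because of a bad hash. A Python sketch using the sandbox credentials already shown in the question; the function name and example values are illustrative, not part of First Data's API, and the field order `login^sequence^timestamp^amount^currency` is taken directly from the PHP:

```python
import hashlib
import hmac

def x_fp_hash(login, transaction_key, sequence, timestamp, amount, currency="USD"):
    """Python equivalent of the PHP hash_hmac('MD5', data, key) call above."""
    data = "^".join([login, str(sequence), str(timestamp), amount, currency])
    return hmac.new(transaction_key.encode(), data.encode(), hashlib.md5).hexdigest()

# Sandbox demo credentials from the question (not real secrets):
digest = x_fp_hash("HCO-DEMO-202", "0ZFNA2QayjNbeiiHKulT", 123456, 1500000000, "10.00")
print(digest)  # a 32-character lowercase hex digest
```

If this value matches what the PHP emits for the same sequence/timestamp/amount, the hash computation is not the problem and the issue is likely in the page configuration instead.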
Sherali Khan (1790–1844), reigned 1842–1844, was the ninth ruler of the Uzbek Ming dynasty in the Kokand Khanate. Accession to the throne: In 1842 Sherali was invited by the Kokand nobility to take the Kokand throne, and the enthronement ceremony was performed. The repeated assaults on Kokand by the Bukharan forces of Emir Nasrullah Khan were unsuccessful; many Bukharan soldiers perished and Emir Nasrullah retreated. Foreign policy: During the reign of Sherali Khan a struggle for the survival of the Kokand Khanate was waged against the power of the Bukharan emir Nasrullah. The Bukharan troops were defeated repeatedly. At the same time, steps were taken to bring other lands back into the khanate. In 1843 Kokand troops reconquered Tashkent. During Sherali Khan's reign the power of the Kipchaks, headed by Musulmankul, increased sharply. Death: According to one version, Sherali Khan was killed in April 1844. According to another version, Sherali Khan was killed on 16 August 1844 by Murad, the son of the former Kokand khan Alim Khan, and Murad Khan himself was later proclaimed khan. Literature: History of Central Asia. Moscow: Evrolints. Russkaya Panorama, 2003. History of Uzbekistan. Vol. 3. Tashkent, 1993. Categories: Kokand khans; Mings; monarchs killed in the 19th century.
Category: Remittances

Online Money Remittance: Is It Safe?
The classic definition of remittance is 'a sum of money sent as payment or gift,' though it is more commonly known as the sum of...

What Are Remittances and Why Do People Send Them?
Remittances have remained a trending and powerful topic despite the twists and turns of technology and society. This kind of money transfer, sent by a...

How Women Migrant Workers Closed the Remittance Gender Gap
In 1605, the first woman was granted a university degree. In 1881, the suffragists gained the right to vote in the Isle of Man. And...

Ria's Astounding Retail Expansion: Europe Edition
If there's one thing we crave more than convenience, it's connection. No matter how much we love to shop online, there's something about physical retail...

A Brief History of Migration and Remittances in the Philippines
As an archipelago comprised of over 7,000 islands during high tide, the Philippines has always been poised for multiculturalism and traveling. Over the years, the...

The Global Cost of Remittances
One in seven people has sent or received remittances, so it's not surprising that around USD$600 billion will be transferred this year. For those unfamiliar...

A Brief History of Migration and Remittances in Morocco
The gateway to Africa, the home of the Berbers and the emblematic backdrop to Casablanca. These are only some of the titles borne by the...

Malaysian Independence Day: Landmark in Remittance History
Portuguese, Dutch, British, Chinese and Indian. Malaysia has been used to immigration since the dawn of time (2nd century BCE). Between 1874 and 1957, Malaysia...

Top 5 countries most dependent on remittances
An estimated $574 billion was sent by migrants to their home countries in 2016. 
In some cases, these billions in cash inflows – called remittances...
Vidim may refer to: Vidim (Mělník District), a municipality and village in the Czech Republic Vidim, Russia, an urban-type settlement in Irkutsk Oblast, Russia
\section{Introduction} The term ``foreshocks'' refers to small earthquakes that occur close in time and space to a larger earthquake to come. \cite{papazachos1973time} made the observation that when a sufficient number of foreshock sequences were synchronized to the time of their respective main shock and then stacked, the seismicity rate increases as an inverse power law of time when approaching the nucleation. This law, called ``the inverse Omori law'', then provided a potential path to earthquake prediction. Since that time, a lot of effort has been made to understand the driving forces of foreshock occurrence. Crustal earthquakes are dynamic instabilities which result from the weakening of frictional properties of a seismogenic fault that has started to slip. The relation between on-fault friction and slip provides the theoretical frame to understand how earthquakes nucleate. Based on either slip weakening or rate-and-state friction laws, theoretical \cite{ida1972cohesive,campillo1997initiation,uenishi2003universal} and numerical models \cite{rubin2005earthquake,ampuero2008earthquake} have demonstrated that before propagating dynamically, slip initially develops on a localized, slowly growing zone, which is defined as the nucleation zone. 
A large number of stick-slip experiments have supported this conceptual view of earthquake nucleation, whether it is for experiments conducted at low normal stress conditions on synthetic materials \cite{latour2013characterization,nielsen2010experimental} or on crustal rocks \cite{okubo1984effects, ohnaka1990characteristic,ohnaka2003constitutive,mclaskey2013foreshocks,fukuyama2018spatiotemporal}.\\ Although rupture nucleation is a process thought to be aseismic, laboratory friction experiments \cite{thompson2009premonitory,mclaskey2014preslip,kwiatek2014seismic,passelegue2017influence} have found the acoustic emission (AE) rate to be correlated to aseismic slip propagation and have reinforced the possibility of earthquake forecasting. Experimental works have also investigated changes in the frequency-magnitude distribution (i.e. the b-value of the Gutenberg-Richter slope) of AEs during stick-slip cycles. When the shear stress increases and the rupture is developing, a significant drop of the b-value has been reported, i.e. the ratio between large and small AEs increases \cite{goebel2013acoustic,riviere2018evolution,lei2018seismic}. This was thought to be driven by accelerating slip before dynamic rupture propagation. Consequently, this indicates that b-value changes could be used as a tool for seismic hazard assessment. However, under the assumption that foreshocks only reflect nucleation processes, it is necessary to constrain the length and time scales over which earthquakes nucleate. \\ In the frame of rate-and-state friction laws, models that use laboratory derived friction parameters predict that earthquakes nucleate on short time and space scales, of the order of milliseconds and meters respectively \cite{lapusta2003nucleation,kaneko2008variability,fang2010effect}. This is a consequence of the characteristic slip distance $D_{c}$ (i.e. 
the length required for the friction to reach its residual value, inferred from rock friction experiments to be of the order of 1-100 $\mu m$). In that case, detecting earthquake nucleation from geodetic or seismological measurements would likely be unreachable. On the other hand, seismological observations have suggested that $D_{c}$ should be scale dependent \cite{ide1997determination,olsen1997three}, of the order of the centimeter at the scale of crustal earthquakes. The scaling of $D_{c}$ has been attributed to length scales inherent to the size of earthquakes such as long wavelength roughness of fault zones \cite{ohnaka2003constitutive} or gouge thickness \cite{marone1998laboratory}. If we consider that the critical slip distance involved during coseismic slip is the same as the one that governs earthquake nucleation \cite{cocco2009scaling}, this would imply that nucleation processes happen at much larger length and time scales.\\ At the scale of crustal earthquakes, numerous seismological observations have reported on increasing foreshock activity preceding the occurrence of large earthquakes \cite{jones1976frequency,abercrombie1996occurrence,bouchon2011extended,kato2014multiple}. Foreshock activity preceding large subduction earthquakes has been found to correlate with the occurrence of slow slip transients in the region close to the hypocenter \cite{kato2012propagation,ruiz2014intense}. When examining the occurrence of foreshock sequences with respect to the geodynamic context, it has been demonstrated that faults subject to high slip rates produce more foreshock sequences \cite{mcguire2005foreshock,bouchon2013long}. Moreover, compared with the ordinary seismicity, foreshocks present singular characteristics such as migration and acceleration prior to the mainshock \cite{marsan2014foreshock,kato2016accelerated}. Therefore, it has been argued that foreshocks are a by-product of the larger nucleation of the upcoming mainshock. 
However, because of the sparsity of the observations, the physical processes that govern the occurrence of foreshocks are still controversial. For instance, statistical ETAS models \cite{ogata1988statistical,helmstetter2003foreshocks} are able to reproduce most of the features attributed to foreshock sequences, which was used as an argument to suggest that foreshocks reflect stochastic rather than physical processes. One of the underlying questions is whether or not earthquakes are preceded by a slow, emerging nucleation phase before propagating dynamically or start as small instabilities that may eventually grow bigger. These two opposite views are termed the "preslip" and the "cascade" models respectively \cite{ellsworth1995seismic,beroza1996properties}. In the latter scenario, the use of foreshocks as a predictive tool for the occurrence of a larger earthquake would be compromised.\\ Here we report on precursory AE sequences during stick-slip experiments conducted on metagabbro saw-cut samples and under crustal stress conditions (30, 45 and 60 MPa). The purpose of this study is to use generated precursory AEs as a proxy to investigate the dominant mechanisms that control foreshock dynamics. Using calibrated acoustic sensors, AE seismological parameters (absolute moment magnitude, corner frequency, source size and stress drop) are estimated. AE features such as magnitude-frequency distribution, spatial distribution and temporal evolution towards failure are examined and interpreted with regard to along-fault premonitory deformation, fault surface roughness and post-experiment fault structure. Finally, we rely on absolute AE moment magnitudes to estimate the ratio between the seismic and the aseismic components of the pre-failure phase. 
\section{Experimental set-up and methodology.} Here, we describe the experimental set-up that we used to produce stick-slip events (SSEs) and the methods used to analyse and process the data.\\ \subsection{Tri-axial press and external measurements.} Stick-slip experiments were conducted on saw-cut samples of Indian metagabbro under tri-axial conditions. The tri-axial apparatus used is described in detail in the supplementary materials (text and figure S1). Saw-cut samples were axially loaded at a constant strain rate of about $4\times 10^{-6}$ $s^{-1}$ (about 0.02 $MPa/s$). Pressure sensors positioned outside of the cell allowed us to measure the axial stress and the confining pressure, from which we calculated the average macroscopic shear stress, the macroscopic normal stress and the friction coefficient acting onto the fault plane (text S1). Displacement was measured by an LVDT at the top of the axial piston and thus includes the elastic shortening of the whole system (i.e. apparatus + sample). Along-fault displacement was calculated by correcting the overall displacement for the elastic shortening of the axial piston and the sample (text S1). Stresses and displacement were measured at 10 Hz sampling rate with resolutions of $\pm$ 0.001 $MPa$ and $\pm$ 0.1 $\mu m$, respectively. \\ \subsection{Acoustic recording system.} The acoustic wave-field was continuously recorded at 10 MHz sampling rate by 8 acoustic sensors (figure S2). AEs were detected within the continuous acoustic waveforms (text S2, supp. mat.). Note that we opted to position all the acoustic sensors on the same half of the sample so their relative positions do not change with cumulative displacement. Acoustic signals were amplified at 45 dB, i.e. by a factor of about 177. This allowed us to record the microseismicity close to the noise level. Local strain was also continuously measured at 10 MHz sampling rate by 8 single-component strain gauges located on both sides of the fault. 
It should be noted that here we only focus on acoustic measurements; strain gauge data will be further analyzed in a future study.\\ \subsection{AE source localization.} AE locations were inverted according to first P-wave arrivals (Text S3, supp. mat.). We made the assumption that all AEs came from the fault (i.e. 2-D grid search), which seems a reasonable assumption given that (i) by localizing the AEs with a 3-D grid search, we found that AE locations align with the fault plane and (ii) we often observed positive and negative first P-wave polarities (except for AEs located at one edge of the fault plane), which indicates double-couple seismic sources. The smallest AEs could not be located due to their first P-wave arrivals being really close to the noise level and not easily distinguishable. The location procedure was thus restricted to AEs with sufficiently high-amplitude and impulsive first P-wave arrivals.\\ \subsection{Acoustic sensors calibration.} Waveforms recorded by an uncalibrated acoustic sensor have a unit of voltage and part of the information reflects the sensor's sensitivity. Therefore, estimating AE seismological parameters requires acoustic sensor calibration. Acoustic sensor calibration aims to obtain the sensor's sensitivity function, which can be used (i) to convert voltage measurements into absolute measurements (displacement, velocity or acceleration) and (ii) to correct for variations of the sensor's sensitivity with frequency. In what follows we briefly describe the methodology to calibrate the acoustic sensors and the principal results obtained. A detailed description of the methodology and the experimental set-up is given in the supplementary material (figure S1, text s1).\\ The sensitivity function of the acoustic sensors was obtained by laser interferometry. Outside of the cell, we affixed a broadband transducer to the center of the simulated fault surface (figure s1). 
A step voltage was then applied to the broadband transducer and the vibration of the opposite surface of the sample was measured by one of the acoustic sensors used in the experiments. The acoustic sensor was then removed and the surface vibration (i.e. at the same location) was recorded by a Laser Doppler Vibrometer (LDV), which was set to measure particle velocity with a flat instrumental response from 0 to 2.5 $MHz$. The sensitivity function of the acoustic sensor $I_{a}(f)$ was obtained in the frequency domain by deconvolution of the waveform recorded by the acoustic sensor $S_{a}(f)$ out of the waveform recorded by the LDV $S_{v}(f)$: \begin{equation} I_{a}(f)=\frac{S_{a}(f)}{ S_{v}(f)} \end{equation} Therefore $I_{a}(f)$ acts as a transfer function and can be used to convert the waveforms $S_{a}(f)$ recorded by the acoustic sensors into particle velocity measurements $S_{c}(f)$, such that (in the frequency domain): \begin{equation} S_{c}(f)=\frac{S_{a}(f)}{ I_{a}(f)} \end{equation} \\ Since we expected AEs to have variable moment magnitudes and source durations, we examined the variability of the sensitivity function with the size of the source, its amplitude and its duration. Two types of broadband acoustic transducers (namely, V109-RM and M110-SM), designed by the Olympus company, were used as sources. Both transducers have a similar central frequency of 5 $MHz$ but differ in size: the V109-RM transducer has a nominal element size of 13 $mm$ while the M110-SM has a nominal element size of 6 $mm$ (see supp. mat.). Figure 2 summarizes the calibration results that we obtained. The calibration curves were obtained for two input voltages, 40 V and 200 V, and for three source durations, 2 $\mu s$, 1 $\mu s$ and 0.5 $\mu s$ (i.e. 0.5 $MHz$, 1 $MHz$ and 2 $MHz$). For the same type of source, we observed no significant differences with respect to the amplitude and duration of the input voltage: all calibration curves almost collapse (Figures 2a, b).
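Equations (1) and (2) amount to a spectral division and its inverse; a minimal numpy sketch is given below. The water-level regularization used to stabilize the division near spectral nulls is our own assumption, not stated in the text:

```python
import numpy as np

def sensitivity_function(s_sensor, s_ldv, dt, water_level=0.01):
    """I_a(f) = S_a(f) / S_v(f): sensor spectrum over LDV velocity spectrum.

    water_level (hypothetical choice) floors |S_v| at a fraction of its
    maximum so the division stays stable where the LDV spectrum vanishes.
    """
    Sa = np.fft.rfft(s_sensor)
    Sv = np.fft.rfft(s_ldv)
    wl = water_level * np.abs(Sv).max()
    Ia = Sa * np.conj(Sv) / np.maximum(np.abs(Sv) ** 2, wl ** 2)
    return np.fft.rfftfreq(len(s_sensor), dt), Ia

def correct_waveform(s_sensor, Ia, water_level=0.01):
    """Convert a raw sensor waveform to particle velocity, S_c = S_a / I_a."""
    Sa = np.fft.rfft(s_sensor)
    wl = water_level * np.abs(Ia).max()
    Sc = Sa * np.conj(Ia) / np.maximum(np.abs(Ia) ** 2, wl ** 2)
    return np.fft.irfft(Sc, n=len(s_sensor))
```

Applied to a waveform whose true transfer function is a flat gain, the second routine recovers the LDV particle-velocity record up to the regularized bins.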
Figure 2c displays the sensitivity function averaged over all input voltages and source durations for both transducers. In both cases, the sensitivity of the acoustic sensors is clearly non-linear, with a wide resonance band between about 1.2 and 2.2 $MHz$. This might be related to the specific properties of the PZT ceramics. Above 1 $MHz$, wavelengths are of the order of a few millimeters, which lies in the range of the length scales that characterize the acoustic sensor casing. This could also induce strong sensitivity variations at high frequency. Although the sensitivity functions are quite similar up to 1 $MHz$, some differences emerge with increasing frequency. In the case of the larger source, V109-RM, the resonance band is narrower and the sensitivity function decreases to a lower value after the maximum peak. A larger source is equivalent to multiple point sources generating waves at the same time. This might reduce the curvature of the wavefronts and induce negative interferences with increasing frequency. Although AE sources can be of different sizes, we posit that this synchronized multiple-point-source scenario is unlikely. For this reason we chose to use the sensitivity function obtained with the smaller source, M110-SM. \subsection{Inversion of AE parameters} Seismic parameter estimation relies on the analysis of displacement spectra to estimate the absolute magnitude of the source, its size and its stress drop. Seismological parameters were obtained from S-wave displacement spectra since we expect that most of the energy comes from S-waves.\\ Acoustic waveforms were analysed within a 27.5 $\mu s$ time window starting 2.5 $\mu s$ before the theoretical S-wave arrival times. The energy contained between the beginning of the selected time window and the S-wave arrival was damped with a ramp function to reduce energy related to P-waves.
The selected time window was then rescaled to a 50 $\mu s$ window centered on the theoretical S-wave arrival and multiplied by a von Hann window (Figure 4b). This lowers the energy contributions coming from reflections and surface waves. We obtained the S-wave displacement spectra $\Omega_{s}(f)$ by first averaging over all acoustic sensors the spectra corrected by deconvolution with the estimated sensitivity function $I_{a}(f)$. The final displacement spectra were then obtained by integration in the frequency domain. This takes the form: \begin{equation} \Omega_{s}(f)=\frac{\sum_{k=1}^{K}S_{k}^{as}(f)}{K.I_{a}(f)}.\frac{1}{2\pi f} \end{equation} where $K$ is the total number of acoustic sensors and $S_{k}^{as}(f)$ the spectrum of the $k$-th acoustic waveform. The next step was to fit the S-wave displacement spectra with a Brune model corrected for attenuation. The S-wave displacement spectra $\Omega_{s}(f)$ were modelled as: \begin{equation} \Omega_{s}(f) =\Omega_{0}\,e^{-\pi f t/Q}.\frac{1}{1+(f/f_{c})^{2}} \end{equation} where $\Omega_{0}$ is the long-period spectral plateau, $t$ the average S-wave travel time, $Q$ the attenuation factor and $f_{c}$ the corner frequency. $\Omega_{0}$, $f_{c}$ and $Q$ were estimated by performing a grid search over the three parameters. Here, $Q$ is an important parameter because it controls the high-frequency decay together with the corner frequency $f_{c}$. Therefore, to avoid significant trade-offs between $Q$ and $f_c$, we limited the $Q$ search range to 30--50 based on values found in the literature \cite{goldberg1992acoustic,liu1997stress,yoshimitsu2014magnitude}. Search ranges were $10^{-18}$ to $10^{-15}$ $m.s$ for $\Omega_{0}$ and 100 $kHz$ to 2.5 $MHz$ for $f_{c}$.
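The grid search described above, chained with the moment, magnitude, source-radius and stress-drop conversions detailed next, can be sketched as follows. The log-amplitude misfit norm, and the density, shear-wave velocity and source-receiver distance used as defaults, are our assumptions (the paper's actual values are in its supplementary material); note that the moment formula uses the cubed shear-wave velocity of the standard far-field relation:

```python
import numpy as np

def fit_brune(f, omega_obs, t_travel, q_range, fc_range, om0_range):
    """Grid search of (Omega_0, f_c, Q) for the attenuation-corrected
    Brune model Omega(f) = Omega_0 * exp(-pi*f*t/Q) / (1 + (f/f_c)^2).
    Misfit on log-amplitudes (an assumption; the norm is not stated)."""
    log_obs = np.log10(omega_obs)
    best, best_misfit = None, np.inf
    for q in q_range:
        att = np.exp(-np.pi * f * t_travel / q)      # attenuation term
        for fc in fc_range:
            shape = att / (1.0 + (f / fc) ** 2)      # Brune spectral shape
            for om0 in om0_range:
                misfit = np.mean((np.log10(om0 * shape) - log_obs) ** 2)
                if misfit < best_misfit:
                    best_misfit, best = misfit, (om0, fc, q)
    return best, best_misfit

def source_parameters(omega0, fc, distance, rho=2980.0, cs=3500.0, rad=0.63):
    """Moment, magnitude, Madariaga radius and Eshelby stress drop.
    rho, cs and distance defaults are hypothetical lab-scale values."""
    m0 = 4.0 * np.pi * rho * cs ** 3 * distance * omega0 / rad
    mw = (np.log10(m0) - 9.1) / 1.5
    r = 0.21 * cs / fc
    return m0, mw, r, 7.0 * m0 / (16.0 * r ** 3)
```

With $\Omega_{0} \approx 1.4\times10^{-17}$ $m.s$, $f_{c} = 0.88$ $MHz$ and a 10 cm travel distance, these relations return a magnitude near $M_w$ -7.7 and a source radius near 0.8 $mm$, of the order of the values quoted for Figure 4a.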
The seismic moment was computed from $\Omega_{0}$ according to: \begin{equation} M_{0}=\frac{4\pi\rho C_{s}^{3}R\,\Omega_{0}}{\Lambda_{\theta,\phi}} \end{equation} where $\rho$ is the density, $C_{s}$ the shear wave velocity, $R$ the average source--receiver distance and $\Lambda_{\theta,\phi}$ the average S-wave radiation pattern (0.63, \cite{aki2002quantitative}). From $M_{0}$ we obtained the absolute moment magnitude as: \begin{equation} M_{w}=(\log_{10}(M_{0})-9.1)/1.5 \end{equation} Assuming the circular crack model of Madariaga \cite{madariaga1976dynamics}, the radius of the seismic source is calculated from $f_{c}$ as: \begin{equation} r=\frac{0.21\,C_{s}}{f_{c}} \end{equation} Finally, the stress drop $\Delta\sigma$ was computed as a function of the seismic moment and the radius of the source as \cite{eshelby1957determination}: \begin{equation} \Delta\sigma=\frac{7M_{0}}{16r^{3}} \end{equation} Figure 4a displays an example of fitted displacement spectra for two events of magnitudes $M_w$ -7.7 and $M_w$ -8.6, together with the associated waveforms. Corner frequencies were found to be 0.88 $MHz$ and 1.5 $MHz$ respectively, which yields source radii of about 0.8 $mm$ and 0.45 $mm$. Estimated stress drops are approximately 0.75 $MPa$ for the $M_w$ -8.6 event and 3.35 $MPa$ for the $M_w$ -7.7 event, which is in the range of those observed for natural earthquakes. The absence of the resonance band in the displacement spectra (Figure 4a) partly confirms that the sensitivity function was well estimated. \section{Results} For the sake of clarity, we define here a term that will come up frequently in what follows: the "normalized time to failure" refers to the time prior to failure divided by the total duration of loading. \subsection{Mechanical data} Three experiments were performed at varying confining pressures, $P_{c}$: 30, 45 and 60 $MPa$.
Figures 5a, b and c display the evolution of shear stress, along-fault cumulative displacement and AE rate at $P_{c}$ = 30, 45 and 60 $MPa$ respectively.\\ At $P_{c} = 30$ $MPa$ (Figure 5a) we reproduced a sequence of 55 SSEs. The first one occurred when the macroscopic shear stress was about 22 $MPa$, which equates to a static friction coefficient of 0.5. The associated coseismic displacement was 31 $\mu m$. From the beginning to the end of the experiment, the maximum shear stress (i.e. the shear stress at the time of rupture) increased from 22 to 36 $MPa$, which corresponds to an increase of the static friction coefficient from 0.5 to 0.7. Although the static friction coefficient continuously increased with successive SSEs, it started to stabilize after approximately 5 $mm$ of cumulative displacement. At the beginning of the experiment we recorded only a few AEs in the last second prior to dynamic rupture propagation, as can be seen from the relatively low acoustic activity that only arises close to stick-slip instabilities. Then, up to the end of the experiment, the acoustic activity intensified. One interesting feature is that the acoustic activity started to occur earlier, but at a lower rate, once the static friction coefficient started to stabilize.\\ At $P_{c} = 45$ $MPa$ (Figure 5b) we reproduced a sequence of 29 SSEs. The mechanical behavior of the rock specimen showed some similarities with the one at $P_{c} = 30$ $MPa$. The first SSE occurred at relatively low stress conditions, when the shear stress was about 32 $MPa$, which corresponds to a static friction coefficient of 0.5. The corresponding coseismic displacement was 58 $\mu m$. The maximum shear stress then increased from 32 to 51 $MPa$, which equates to an increase of the static friction coefficient from 0.5 to 0.68.
Quite remarkably, and similarly to the experiment at $P_{c} = 30$ $MPa$, the static coefficient of friction approximately stabilized after 5 $mm$ of cumulative displacement. Regarding the acoustic activity, the AE rate rapidly increased with the successive stick-slip cycles. However, a noticeable difference with the experiment conducted at $Pc = 30$ $MPa$ is that the AEs remained concentrated in the last 2-3 seconds prior to stick-slip instabilities.\\ At $P_{c} = 60$ $MPa$ (Figure 5c) we reproduced a sequence of 13 SSEs. The mechanical behavior of the sample showed significant differences with the experiments at $P_{c} = 30$ $MPa$ and $P_{c} = 45$ $MPa$. The first SSE happened when the shear stress reached 64 $MPa$ and the static friction coefficient 0.65. The corresponding coseismic slip was 184 $\mu m$. After the first SSE, the static friction coefficient oscillated between 0.65 and 0.72 and was almost constant for the last 5 SSEs. The AE rate fluctuated strongly from the beginning to the end of the experiment. Prior to some SSEs (SSEs 9 and 13 for instance) we recorded intense bursts of AEs, while for others the AE rate preceding failure remained low (SSEs 11, 12 and 13). AEs mostly happened during the last 1-2 seconds prior to failure. \subsection{AEs distribution} Figure 6 displays the number of precursory AEs recorded (left) and the total AE moment release (right) per SSE. The total AE moment shown here is likely to be underestimated for some SSEs because acoustic sensor recordings started to saturate for moment magnitudes $M_{w}$ higher than -7, although for such magnitudes we usually observed that only a few acoustic sensors were saturating. Star symbols mark the SSEs prior to which we recorded at least one AE with moment magnitude $M_w$ higher than -7 (in total, 23 at $Pc = 30$ $MPa$, 5 at $Pc = 45$ $MPa$ and 2 at $Pc = 60$ $MPa$).
\\ The total number of AEs recorded during the experiments is 905, 380 and 185 at $Pc=$ 30, 45 and 60 $MPa$ respectively. This equates to an average number of AEs per SSE of about 17, 13 and 14 respectively. As expected from the AE rates presented in Figure 5, the number of AEs per stick-slip cycle fluctuates somewhat but tends to increase with the successive SSEs, although less significantly at $Pc = 60$ $MPa$. The maximum number of precursory AEs recorded within one sequence (i.e. for one SSE) is 48, 31 and 46 at $Pc = 30, 45$ and $60$ $MPa$ respectively.\\ The total AE moment per SSE paints a different picture. A feature common to all experiments is that the seismic energy released can vary largely from one precursory AE sequence to another. At the early stage of the experiments conducted at $Pc = 30$ and $45$ $MPa$ we only recorded small AEs, which corresponds to the periods during which the static friction coefficient on the fault increased relatively fast. Then, we recorded oscillations between low- and high-energy AE sequences. During the experiment conducted at $Pc = 60$ $MPa$ the seismic energy released on the fault prior to stick-slip instabilities was slightly more stable (for instance from SSE 8 to SSE 12) but showed significant variations as well.\\ A notable feature is that we recorded more large AEs during the experiment conducted at $Pc = 30$ $MPa$ compared to $Pc = 45$ and $60$ $MPa$ (note that for visualization, the axis of the total AE moment release is different at $Pc = 30$ $MPa$). The maximum precursory AE moment release estimated for a single sequence was 0.8 $N.m$ at $Pc = 30$ $MPa$ and 0.18 $N.m$ at both $Pc = 45$ and $60$ $MPa$. We recall that these values are lower bounds due to the saturation of the acoustic sensors.\\ Figure 7 shows the frequency-magnitude distributions of the recorded AEs (blue, red and black circles correspond to $Pc = 30, 45$ and $60$ $MPa$ respectively).
The colored circles indicate the cumulative Gutenberg-Richter (G-R) distribution of the estimated AE magnitudes and the bar plots display their distribution in 0.1-magnitude bins. We estimated that the magnitude of completeness $M_{c}$ was close to $M_w = -8.7$. $M_{c}$ might vary a little with confining pressure (for instance between $P_{c} = 30$ and $Pc = 45$ $MPa$) but this is not significant given that the typical error in magnitude estimation was 0.1. The black arrows indicate the upper limit magnitude ($M_w = -7$) beyond which the acoustic sensors started to saturate. The estimated moment magnitudes $M_w$ went beyond $M_w = -7$ for 71 AEs during the experiment conducted at $Pc = 30$ $MPa$, for 6 AEs at $Pc = 45$ $MPa$ and for 5 AEs at $Pc = 60$ $MPa$. As mentioned earlier, we found from visual inspection that not all acoustic sensors were saturating for AEs with $Mw \approx -7$. Moreover, acoustic waveforms were saturating only for a short period (10 $\mu s$ typically). Therefore we believe that close to this upper limit magnitude, our estimations are not significantly biased. However, beyond $Mw = -6.8$ almost all acoustic sensors were saturating over a large portion of the signal, which unambiguously indicates a significantly larger moment magnitude. Such a case only happened during the experiment conducted at $Pc = 30$ $MPa$, for 21 AEs.\\ Using $M_{c} = -8.7$ we estimated the G-R b-value based on the Aki-Utsu maximum likelihood method (Aki, 1965; Utsu, 1965). The best fits we obtained are given by the black dashed lines: $b = 0.57 \pm 0.02$, $b = 0.65 \pm 0.03$ and $b = 0.66 \pm 0.04$ at $P_{c} = 30$, $45$ and $60$ $MPa$ respectively. The experiments conducted at $P_{c} = 45$ $MPa$ and $P_{c} = 60$ $MPa$ show a similar AE G-R distribution, with a net decrease of the number of AEs beyond $M_w \approx -7.6$.
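The Aki-Utsu maximum-likelihood estimate used here, with the standard $b/\sqrt{N}$ uncertainty of Aki (1965), can be sketched as:

```python
import numpy as np

def b_value_aki_utsu(magnitudes, mc, dm=0.1):
    """Maximum-likelihood b-value above the completeness magnitude mc.

    dm is the catalog binning width; the mc - dm/2 shift is the usual
    correction for binned magnitudes (set dm=0 for continuous values).
    """
    m = np.asarray(magnitudes)
    m = m[m >= mc]                        # keep the complete part only
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    return b, b / np.sqrt(m.size)         # estimate and its standard error

# e.g., for a hypothetical catalog array `catalog_mw`:
# b, b_err = b_value_aki_utsu(catalog_mw, mc=-8.7)
```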
This similarity is in sharp contrast with the experiment conducted at $P_{c} = 30$ $MPa$, which is characterized by a significantly larger number of AEs beyond $M_w \approx -7.6$. Quite remarkably, at $P_{c} = 30$ $MPa$ the AE G-R distribution tends to follow a double distribution. \subsection{AE and stick-slip nucleation locations} Figure 8 displays, on the left, photographs of the simulated faults after the experiments. AE (circles) and stick-slip nucleation (stars) locations on the fault planes are shown in the center and on the right, respectively. The colorscale refers to the SSE index. The size of the circles was set according to the estimated moment magnitudes. Assuming a circular source shape, the typical source sizes for a $M_{w} = -7$ AE and for a $M_{w} = -8$ AE are about 3 $mm$ and 1 $mm$ respectively. Only the AEs for which the residual times were less than 0.3 $\mu s$ (which equates to 2-3 $mm$ of location accuracy) are shown. Large AEs are therefore over-represented in the figure.\\ The fault surface at $Pc = 30$ $MPa$ is the one that presents the largest amount of gouge particles. Gouge particles aggregated into patches of various sizes, as illustrated by the white patterns elongated in the direction of sliding. The gouge particle clusters tend to concentrate in the middle of the fault and have a characteristic size of the order of a few millimeters. It should be noted that we inspected the opposite fault surface as well and both surfaces were symmetrical. Therefore, zones without gouge particles are not due to gouge removal when the two pieces of the rock specimen were separated after the experiments. AEs correlate well with areas where gouge particles are concentrated. However, there are zones covered with gouge particles where no AEs were detected (for instance the area on the top part of the fault). Quite remarkably, we can observe that SSEs nucleated in the same area at the early stage of the experiment and then migrated to another region.
An easily observable feature is that the SSEs at the last stage of the experiment (warm colors) tend to nucleate at the edges of the areas where most of the precursory AE moment was released. \\ The simulated fault at $Pc = 45$ $MPa$ presents fewer gouge particles compared to the experiment conducted at $Pc = 30$ $MPa$. We still observe patches where gouge particles concentrate, but these are more heterogeneously distributed on the fault. Similarly to the experiment conducted at $Pc = 30$ $MPa$, there are areas covered by gouge particles where few or no AEs were detected (for instance, on the lower edge of the fault on the left). Because there are fewer AEs here, it is easier to observe that their locations mirror fairly well the geometry of the areas covered by gouge particles. As in the experiment conducted at $Pc = 30$ $MPa$, SSE nucleation migrated over time (from the left edge of the fault plane to the right edge). We can also observe that SSEs do not necessarily nucleate where most of the AE activity is concentrated.\\ At $Pc = 60$ $MPa$, gouge particles are homogeneously distributed over the fault surface. Unlike the other two experiments, we no longer observe patches of gouge particles. AEs tend to locate in a smaller region of the fault surface with respect to the other two experiments. In that case, we can observe a migration of the AE activity (from left to right) with the successive ruptures, which seems roughly correlated with SSE nucleation locations. \subsection{Microstructural analysis} Figure 9 displays the post-mortem fault surfaces observed by scanning electron microscopy (SEM). The large-scale view of the fault surfaces at $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$ (Figures 9b and d respectively) reveals highly damaged zones with a large quantity of generated gouge particles that cluster into patches.
Gouge particles present a typical size ranging from less than 1 $\mu m$ to a few $\mu m$ (Figure 9a) and cover topographic highs with sizes of the order of a few tens of $\mu m$ (Figure 9b), which we might interpret as small-scale asperities. The enlarged view of the fault surface at $Pc = 60$ $MPa$ (Figure 9f) also reveals fine gouge particle production, but the particles are not aggregated; rather, they are homogeneously distributed on top of surfaces stretched and elongated in the direction of sliding. Zooming in on the fault surface at $Pc = 60$ $MPa$ (Figure 9e) allows us to observe stringy microstructures that contain gas bubbles. This suggests partial melting of the fault surface during slip. The micro-crack that crosses the residual melt likely results from the rapid cooling following melting. We can observe that a fraction of the small gouge particles is trapped in the melt. At $Pc = 45$ $MPa$ the fault surface also displays patterns elongated in the direction of sliding (Figure 9d), which suggests that the fault surface temperature nearly reached the melting point. At smaller scale (Figure 9c), the fault surface displays compacted and flattened microstructures that align with the direction of slip, which we interpret as markers of plastic deformation processes.\\ \subsection{Fault surface roughness analysis} Fault surface roughness was accurately measured over 15 $mm$ x 30 $mm$ surfaces using a laser profilometer with 0.05 $\mu m$ vertical resolution (Figures 10a, b and c at $Pc = 30$, $45$ and $60$ $MPa$ respectively) and $\approx$ 20 $\mu m$ horizontal resolution. Note that due to a light contrast issue, surface elevation measurement failed for a fraction of the sampled surface at $Pc = 30$ $MPa$ (indicated in light grey). Elevations range from about -25 to 25 $\mu m$. At the lowest confining pressure we can observe coarse topographic highs (red colors) elongated in the direction of slip.
These large and rough asperities likely correspond to gouge material accumulated with slip. The biggest one visible (at the bottom left) is about 2 $mm$ thick, 5 $mm$ long and 25 $\mu m$ high. Compared to the other two experiments, no marker of the coseismic displacement is easily identifiable at $Pc = 60$ $MPa$, which is likely due to partial melting of the fault during rapid slip episodes. At the intermediate confining pressure, $Pc = 45$ $MPa$, striations of the fault surface, likely formed by mechanical abrasion, have been preserved and reveal a flattened surface. At that scale it can clearly be seen that the fault surface roughness at $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ is similar and smoother than in the experiment conducted at $Pc = 30$ $MPa$.\\ We quantified fault surface roughness by means of the Hurst exponent $H$ (also called the roughness coefficient). To estimate $H$, we computed the Fourier power spectrum $I(k)$ of each individual parallel profile, perpendicular and parallel to the slip direction, as a function of the wavenumber $k$. Then we computed the average spectrum $P(k)$ (Figures 10d and e) of the whole surface in both directions (i.e., perpendicular and parallel to the slip direction) by stacking the individual Fourier transforms: \begin{equation} P(k)=\sum_{n=1}^{n=N}I_{n}(k) \end{equation} where N is the total number of 1-D profiles. This reduces the noise contained in individual 1-D profiles. For a self-affine 1-D profile, the Hurst exponent ranges between $0 \leq H \leq 1 $ and $P(k)$ is related to $H$ according to the following power law: $P(k) \propto k^{-1-2H}$. For a self-affine (i.e. fractal) 1-D profile, the roughness $r$ increases with the length $l$ of the profile as $r \propto l^{H}$. \\ A feature common to all experiments is that the fault surface roughness is similar, both in shape and amplitude, along the directions perpendicular and parallel to sliding.
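The stacked-spectrum estimate of $H$ (Eq. 9 together with the $P(k) \propto k^{-1-2H}$ scaling) can be sketched as follows; the least-squares fit in log-log space is our assumption:

```python
import numpy as np

def hurst_from_profiles(profiles, dx):
    """Hurst exponent from stacked 1-D Fourier power spectra.

    profiles : (N, L) array of parallel surface profiles
    dx       : horizontal sampling step
    Fits log P(k) vs log k; P(k) ~ k^(-1-2H) gives H = -(slope + 1) / 2.
    """
    prof = profiles - profiles.mean(axis=1, keepdims=True)    # remove the mean
    P = (np.abs(np.fft.rfft(prof, axis=1)) ** 2).sum(axis=0)  # stacked spectrum
    k = np.fft.rfftfreq(profiles.shape[1], dx)
    mask = k > 0                                    # drop the zero wavenumber
    slope, _ = np.polyfit(np.log10(k[mask]), np.log10(P[mask]), 1)
    return -(slope + 1.0) / 2.0
```

Applied to synthetic self-affine profiles generated with a prescribed spectral decay, the routine recovers the prescribed exponent to within a few percent.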
This implies quasi-isotropic fault surface roughness and contrasts with what is observed on natural faults. A large majority of natural faults are characterized by anisotropic self-affine surfaces \cite{candela2009characterization}. Although Hurst exponents vary in the range [0.4 - 0.9], $H$ is typically found to be around 0.6 along the direction of sliding and around 0.8 in the direction perpendicular to it. We may assume that the initially smooth fault surfaces in our experiments prevent fault surface roughness anisotropy from developing. Another possibility is that additional rapid slip episodes would have been required for the fault surface roughness to become anisotropic. It is noteworthy that isotropic roughness also implies that gouge particles produced during slip are transported not only along the direction of slip but also perpendicular to it.\\ We found that for wavenumbers $k$ less than $4\times10^{3}$ $m^{-1}$ ($\approx 0.25$ $mm$), fault surface roughness is characterized by a similar Hurst exponent $H$ close to 0.4, which is rather low compared to what is typically found for natural faults, but is a lower bound. The primary physical meaning of a low Hurst exponent, as opposed to a high one, is that the ratio of roughness amplitudes at long and short wavelengths is smaller, which in the case of our experiments may be intrinsically related to fault surface preparation. The topography at long wavelengths is necessarily damped to ensure homogeneous contact between the two parts of the sample. However, to ensure a minimum of cohesion, fault surfaces are lapped with a fine-grained abrasive paper ($\#$120 grit paper in this case, average particle diameter of about 125 $\mu m$), which, in turn, produces short-wavelength topography.\\ It can clearly be observed that the fault surface roughness is nearly identical at $Pc = 45$ and $Pc = 60$ $MPa$.
Compared to the other two experiments, at $Pc = 30$ $MPa$ a long-wavelength roughness, inherited from gouge particle accumulation, emerges for wavenumbers $k$ less than about $4\times10^{3}$ $m^{-1}$ ($\approx 0.25$ $mm$). It can clearly be observed that the Fourier power spectra of the fault surface topography share significant similarities with the AE G-R distributions. For moment magnitudes $M_{w}$ larger than about -7.6, the G-R slope rapidly drops at $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ but is unchanged at $Pc = 30$ $MPa$. This could be the counterpart of the decrease in roughness amplitudes for wavelengths larger than about 0.25 $mm$ ($k \approx4\times10^{3}$ $m^{-1}$) observed at $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$. We can speculate that the AE G-R distribution mirrors the fault surface roughness. This is intriguing but would require more quantitative analysis to be validated. \section{Statistics of the nucleation phase} \subsection{Evolution of precursory AE activity towards nucleation} Figures 11a, b and c display the cumulative AE moment release and the temporal variations of the b-value as a function of the normalized time to failure at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively. Note that for each experiment, all AE sequences were stacked. b-values were estimated for three different time intervals, selected so as to contain a similar number of AEs. To ensure that no bias would be introduced by AE saturation, b-values were computed either by taking into account all AEs (diamond symbols) or by removing AEs with $M_w > -7$ (square symbols). Including or not the AEs that saturated affects only the absolute value of b, not its temporal variation.\\ Hereafter, we discuss the b-value variations for the complete AE catalogs (i.e. including all magnitudes). At $Pc = 30$ $MPa$ we estimate that the b-value is about 0.68 $\pm$ 0.02 up to (on average) $\approx$ 3 seconds prior to failure.
Then the b-value drops rapidly to an almost constant level: 0.49 $\pm$ 0.02 and 0.52 $\pm$ 0.02. At $Pc = 45$ $MPa$ the b-value is close to 0.7 $\pm$ 0.04 up to $\approx$ 2.5 s prior to failure and then drops to 0.54 $\pm$ 0.04 and 0.59 $\pm$ 0.04. For both experiments, the b-value increases slightly in the last tenths of a second, but this lies within the range of uncertainty. b-value temporal variations prior to failure are more complicated to analyse for the experiment conducted at $Pc = 60$ $MPa$ for two reasons: (i) the large uncertainties and (ii) about 90 $\%$ of the AEs were recorded in the last 3 seconds prior to failure, which considerably lowers the temporal resolution of b-value variations during stick-slip cycles. In comparison, about 30 $\%$ and $25 \%$ of the total number of AEs were generated before entering the last 3 seconds prior to failure at $Pc = 30$ and $Pc = 45$ $MPa$ respectively. However, unlike the other two experiments, the b-value is initially low, close to 0.61 $\pm 0.06$. In the last second prior to failure the b-value returns to a fairly high value, about 0.76 $\pm$ 0.08, and then decreases again to 0.67 $\pm$ 0.05.\\ Temporal variation of the b-value prior to failure has been well documented during fracture experiments conducted on intact rock samples \cite{scholz1968frequency} and during rock friction experiments \cite{goebel2012identifying,kwiatek2014seismic,riviere2018evolution}. Fracture experiments on intact samples show that the b-value and the differential stress are anticorrelated, which originates in the formation and coalescence of microfractures. Such a process causes a large number of AEs to be generated and a smooth, accelerating drop of the b-value up to the time of failure. A decrease in b-value towards failure has also been documented preceding large subduction earthquakes \cite{suyehiro1966difference,enescu2001some,nanjo2012decade,tormann2015randomness}.
However, foreshocks that precede large earthquakes occur on time scales from hours to years. Long-term variations of the b-value are usually attributed to stress accumulation or partial stress release, while short-term variations are related to mainshock nucleation. Based on the two experiments conducted at $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$, we can at least say that large AEs rapidly grow in number in the last seconds preceding failure. The rapid drop of the b-value prior to stick-slip instabilities rather suggests rapid weakening of the fault interface in a short interval of time close to failure. \subsection{Precursory AEs dynamics and fault maturation} Figures 12a, b and c compare the along-fault displacement, the along-fault velocity, the cumulative number of AEs and the cumulative AE moment release with respect to time to failure at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively. Each quantity is normalized by its maximum value at the time of failure. Here again, all AE sequences are stacked to bring out a general trend. The grey shaded area indicates the range of uncertainty for the cumulative AE moment release. Figures 12d, e and f show the cumulative precursory AE activity per SSE at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively. The cumulative precursory AE activity is normalized by its final value and is plotted against the normalized time to failure. The colorscale refers to the SSE index and the black curves result from stacking all AE sequences. Note that for visual clarity, not all AE sequences are shown at $Pc = 30$ $MPa$ (Figure 12d) and at $Pc = 45$ $MPa$ (Figure 12e).\\ Consistently with previous experimental studies \cite{mclaskey2014preslip,passelegue2017influence}, we always observed that the displacement on the fault accelerates preceding failure.
However, although along-fault slip is required to generate AEs, both the number of AEs and the AE moment rather appear to correlate with the slip rate on the fault. This is particularly well illustrated in the last seconds prior to failure, during which the AE moment release and the along-fault slip velocity almost collapse onto each other. Nevertheless, there are notable differences between the experiments, both in terms of AE moment release and mechanical behavior of the fault surfaces. The cumulative AE moment release exhibits the smoothest behavior at $Pc = 30$ $MPa$. Seismic energy is continuously radiated from the fault, but in a delayed fashion with respect to the slip rate. For instance, between about -15 s and -5 s, the slip rate increases linearly with time while the AE moment release remains low. These features can be retrieved in the experiment conducted at $Pc = 45$ $MPa$. The AE moment release follows the fault slip rate but is delayed and starts to intensify only once the fault accelerates. As at $Pc = 30$ $MPa$, the slip rate increases linearly before accelerating (between about -5 and -2 s prior to failure). However, compared to the experiment conducted at $Pc = 30$ $MPa$, the slip rate on the fault and the AE moment release increase later prior to stick-slip instabilities and at a higher rate. The picture depicted by the experiment conducted at $Pc = 60$ $MPa$ is somewhat different. Although we observe a clear correlation between the fault slip velocity and the AE moment release, the seismic energy is not released continuously, but rather in bursts. For instance, the two largest AEs that were recorded ($Mw > -6.9$) occurred about -17 s and -5 s prior to failure, while the fault had not yet accelerated. This behavior is not limited to the experiment conducted at $Pc = 60$ $MPa$ and also occurred at $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$.
Even at the small scale of the experiments presented here, stress and thus strain are not homogeneously released during coseismic displacement. Bursts of AE activity that occur without external forcing such as slip acceleration might reflect the brittle failure of small patches where residual stress accumulated. Also, the stacking procedure inherently smooths the variability of precursory AE sequences. It is likely that bursts of AE activity would have been smoothed out if a larger number of AE sequences had been stacked together at $Pc = 60$ $MPa$. \\ AEs may reflect the brittle destruction of fault surface topography or may occur when the stress applied onto the fault exceeds the strength of local brittle fault patches. For sufficiently large AEs ($Mw > -8.6$) we often observed positive and negative first P-wave arrivals, which implies that most of the moment release is deviatoric. According to the similarity between fault slip velocity and cumulative AE moment release, we posit that precursory AEs highlight the rupture of locked portions of the fault embedded in and loaded by an aseismically slipping larger portion. Similar observations have been reported in larger scale experiments \cite{mclaskey2013foreshocks}. This is also consistent with observations at the scale of crustal faults. \citeA{bouchon2013long} showed that foreshock sequences were more common for interplate than for intraplate earthquakes due to a facilitating slow slip phase at plate boundaries. Similarly, \citeA{mcguire2005foreshock} observed that oceanic transform faults with relatively high slip rates were producing more foreshock sequences. However, the susceptibility of a fault to produce foreshocks will depend, at first order, on its degree of heterogeneity.\\ The experiment conducted at $Pc = 30$ $MPa$ gives the clearest example of what we would call "fault maturation" (Figure 12d).
At the early stage of the experiment, most of the AE activity remains concentrated close to failure, but with the successive ruptures, precursory AE activity increases in number and occurs earlier during loading. Summing all AE sequences results in a smooth increase of the cumulative number of AEs as previously described. These characteristics can be approximately retrieved at the intermediate confining pressure, $Pc = 45$ $MPa$ (Figure 12e), but not at $Pc = 60$ $MPa$ (Figure 12f). During this experiment, AEs occur later, which results in a sharper acceleration of the cumulative number of AEs towards failure. A noticeable difference at $Pc = 60$ $MPa$ lies in the absence of AEs early during loading. On the other hand, the experiment conducted at $Pc = 60$ $MPa$ is the only one for which the first sequence of AEs released an amount of seismic energy comparable to the ones that followed. At $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$, the AE moment release prior to failure started to significantly increase after 10 stick-slip cycles. However, when averaging over all AE sequences, the experiments conducted at $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ are characterized by a similar AE frequency-magnitude distribution. Conversely, significantly more large AEs were generated during the experiment conducted at $Pc = 30$ $MPa$. In the following, we rely on microstructural and fault surface roughness analysis to explain the similarities and differences in AE timing and moment release.\\ Fault strength heterogeneity arises, in part, from multiscale roughness and spatial variations of fault rheology. Compared to the other two experiments, large asperities were formed by mechanical abrasion of the fault surface at $Pc = 30$ $MPa$, which increased the level of fault strength heterogeneity in terms of stress concentration and frictional resistance and which, consequently, provided the necessary conditions to generate large AEs.
At $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$, similar fault surface roughness produced similar AEs. We can argue that fault surfaces were more scrubbed at $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ because of either thermally activated plastic deformation processes or partial melting. It is noteworthy that for the experiments conducted at $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ the average amount of pre-slip was quite similar (about 9 $\mu m$) and larger than for the experiment conducted at $Pc = 30$ $MPa$ (about 6 $\mu m$). This is probably caused by the organized large scale roughness resisting slip \cite{schaff2002high} that developed at $Pc = 30$ $MPa$.\\ However, almost no large AEs were produced at the early stage of the experiments conducted at $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$. Let us take a step back to the mechanical data. At $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$, the first SSE happened at particularly low stress conditions, when the average normal stress $\sigma_{n}$ onto the fault was about 43 $MPa$ and 65 $MPa$ respectively, resulting in coseismic displacements of 31 $\mu m$ and 59 $\mu m$. In comparison, at $Pc = 60$ $MPa$ the first SSE occurred when the normal stress acting onto the fault was about $98$ $MPa$, resulting in a coseismic displacement of 184 $\mu m$. Slip during the first SSE at $Pc = 60$ $MPa$ probably produced fine gouge or accentuated topographic heterogeneities that effectively increased the roughness and promoted the generation of relatively large AEs prior to the subsequent SSE. Therefore, the fault was already "mature". At the other two confining pressure conditions, more SSEs were required to increase the roughness due to smaller coseismic displacements and lower normal stress conditions.
Once a sufficient amount of gouge particles or topographic heterogeneities was produced, both the number of AEs and the AE moment release started to intensify.\\ Finally, the absence of AEs far from failure at $Pc = 60$ $MPa$ may be related to the spatial distribution of gouge particles and their interactions with the underlying surface. AEs that happen early during loading are particularly small, which causes the high b-values far from failure at $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$. We interpret these small AEs as either micro-shear events or the buckling of a force chain in a compacted gouge layer \cite{mair2002influence,hartley2003logarithmic}. Thus, these small AEs cannot be generated at $Pc = 60$ $MPa$ because of either the homogeneous distribution of gouge particles onto the fault or the fact that a significant fraction of gouge particles is trapped in the melt. However, another possibility is that the small AEs that happen early during loading at $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$ reflect microfracturing processes promoted by the damage accumulation with cumulative slip. \subsection{Spatial distribution of precursory AEs} Here we look at the evolution of the precursory AEs spatial distribution towards failure. In what follows, "nucleation" refers to the location on the fault surface where SSEs initiated, estimated according to first P-wave arrivals. SSEs whose nucleation sites were poorly constrained (worse than about 2-3 $mm$) are not shown here. Figures 13a, b and c display the distance to nucleation (i.e. where SSEs initiated) of the precursory AEs as a function of normalized time to failure at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively. Cyan triangles indicate the average distance to nucleation and its standard deviation computed in 10 evenly log-distributed time bins. To the left is displayed the AEs distribution as a function of distance to nucleation.
Note that AEs located with more than 0.3 $\mu s$ travel time residuals (about 2-3 $mm$ of location accuracy) were disregarded. At the lowest confining pressure, $Pc = 30$ $MPa$, nothing indicates spatial migration of the precursory AEs towards nucleation. Precursory AEs remain randomly distributed over the fault surface up to the time of failure. Most of the precursory AEs occurred at distances larger than 20 $mm$ relative to where SSEs initiated. However, the randomness of the AEs spatial distribution tends to decrease with increasing stress conditions. At the highest confining pressure, $Pc = 60$ $MPa$, the majority of the AEs occurred on a more localized portion of the fault surface, within 10 to 15 $mm$ relative to where SSEs initiated. In that case, precursory AEs do not occur randomly on the fault surface but migrate, on average, towards where SSEs initiated.\\ The precursory AEs spatial distribution yields relevant information about the way SSEs initiate. In all experiments, we found that SSEs are always preceded by a preslip acceleration phase and that preslip drives smaller scale seismicity (i.e. AEs). As proposed by \citeA{mclaskey2014preslip}, preslip may sufficiently weaken fault strength to facilitate a small instability growing large and eventually propagating over the entire fault. In such a scenario, precursory AE activity in the last milliseconds prior to failure should co-localize with locations where SSEs initiated. This would result in clear migration patterns (Figure 13), which is not what we observe. In our experiments, we found that SSEs do not necessarily nucleate where precursory AE activity concentrates but rather at the edges of the areas where most of the precursory AE moment was released. This is well illustrated for the last SSEs at $Pc = 30$ $MPa$ and $Pc = 45$ $MPa$ (Figure 8). Moreover, we would expect large precursory AEs to promote a cascade process.
While the largest AEs were generated at $Pc = 30$ $MPa$, there is no indication of migration over time for this experiment. Therefore, our observations rather suggest that, in most cases, SSEs begin as slowly growing fault slip that transitions to dynamic rupture, rather than resulting from a small AE that would propagate over the entire fault in a cascade-up process. \\ We attempt here to give a qualitative explanation for the migration patterns promoted by increasing stress conditions. We may assume that if the nucleation length $L_{c}$ is smaller, there is more chance that precursory AEs occur at shorter distances to where SSEs initiate, which would, consequently, favor migration. According to linear slip weakening \cite{ida1972cohesive,campillo1997initiation,uenishi2003universal} or rate-and-state friction (R\&S) laws \cite{rubin2005earthquake,ampuero2008earthquake}, we expect the critical nucleation length $L_{c}$ to decrease with increasing normal stress acting onto the fault, such that: \begin{equation} L_{c} = 2.\beta.\frac{\mu D_{c}}{\sigma_{n}(f_{s}-f_{d})} \end{equation} for the linear slip weakening law, where $\mu$ is the shear modulus of the rock sample, $D_{c}$ is the critical slip distance, $\sigma_{n}$ is the normal stress acting onto the fault, $f_{s}$ and $f_{d}$ are the static and the dynamic friction coefficients respectively and $\beta$ is a non-dimensional shape factor coefficient ($\approx$ 1.158). For R\&S friction laws: \begin{equation} L_{c} =\frac{\mu D_{c}}{\sigma_{n}(b-a)} \end{equation} where $b$ and $a$ are the constitutive parameters of R\&S friction laws. According to (10) and (11), $L_{c}$ will decrease with increasing normal stress acting onto the fault and increasing friction drop. However, $D_{c}$ is expected to increase with increasing normal stress acting onto the fault.
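As an order-of-magnitude illustration of equations (10) and (11), a minimal numerical sketch follows. Every parameter value is an assumption: $\mu = 40$ $GPa$ is the metagabbro shear modulus quoted later in the text, while $D_{c}$, the friction coefficients and $(b-a)$ are not measured quantities from these experiments.

```python
# Hedged numerical sketch of equations (10) and (11). All values are
# assumptions for illustration, NOT measured experimental parameters.
MU = 40e9        # shear modulus [Pa] (value quoted in the text)
DC = 1e-6        # critical slip distance [m] (assumed)
BETA = 1.158     # non-dimensional shape factor of eq. (10)

def lc_slip_weakening(sigma_n, fs, fd, dc=DC):
    """Critical nucleation length for the linear slip-weakening law (eq. 10)."""
    return 2.0 * BETA * MU * dc / (sigma_n * (fs - fd))

def lc_rate_state(sigma_n, b_minus_a, dc=DC):
    """Critical nucleation length for rate-and-state friction (eq. 11)."""
    return MU * dc / (sigma_n * b_minus_a)

# At fixed D_c, L_c shrinks as the normal stress grows; the normal
# stresses below are the averages quoted in the text, the friction
# drop (0.7 -> 0.6) is assumed.
for sn in (43e6, 65e6, 98e6):
    print(f"{sn / 1e6:.0f} MPa -> Lc = {lc_slip_weakening(sn, 0.7, 0.6) * 1e3:.1f} mm")
```

The sketch only makes the fixed-$D_{c}$ trend explicit; as the text notes, a stress-dependent $D_{c}$ can offset this decrease.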
For instance, assuming a purely slip weakening behavior, $D_{c}$ is expressed as \cite{ida1972cohesive,palmer1973growth,rice1979mechanics}: \begin{equation} D_{c} =\frac{16(1-\nu)}{16\pi}\frac{V_{r}t_{w}\sigma_{n}(f_{s}-f_{d})}{\mu} \end{equation} where $V_{r}$ is the local rupture velocity, $t_{w}$ is the weakening time and $\nu$ is the Poisson's ratio of the rock specimen. $D_{c}$ is thus also expected to increase with normal stress and friction drop. Moreover, while the weakening time $t_{w}$ does not vary much with stress conditions \cite{passelegue2016dynamic}, we expect that the rupture velocity will, on average, increase with normal stress. Therefore, we do not necessarily expect that $L_{c}$ decreases with increasing $\sigma_{n}$. \\ One possible explanation may lie in stress heterogeneity. Let us assume that the nucleation zone expands in a crack-like fashion. Inside the crack, the shear-stress drop $\Delta \tau$ can be approximated as the shear modulus $\mu$ of the medium times the ratio between the slip velocity $V_{s}$ and the shear-wave velocity $C_{s}$, such that: \begin{equation} \Delta \tau=-\mu\frac{V_{s}}{C_{s}} \end{equation} In the case of an expanding crack, the slip velocity is maximum at the tips and nearly uniform inside the crack. As the nucleation zone expands, unlocked portions of the fault in the interior of the nucleation zone release stress. Due to stress perturbations close to the tips of the nucleation zone, the stress-drop $\Delta \tau$ is positive near the tips and decreases and becomes negative towards the center of the nucleation zone. However, this is only valid for areas in the interior of the nucleation zone that are able to slip. Locked portions of the fault inside the nucleation zone continuously accumulate stress, even after nucleation started.
In that case, the stress-drop $\Delta \tau$ is positive near the tips and decreases but remains positive towards the center of the nucleation zone.\\ Let us propose the following mechanism to explain the correlation between increasing stress conditions and precursory AEs migration: AEs may occur if the stress applied to a locally brittle patch exceeds its strength. Precursory AEs at relatively large distances from the center of the nucleation zone are triggered first due to stress perturbations at the tips of the nucleation zone, which are sufficient to overcome the critical strength of the locked portions of the fault interface. As the nucleation zone expands, stress builds up in the interior of the nucleation zone. Because of the negative gradient of the stress profile that goes from the edges of the nucleation zone to its center, the precursory AEs will tend to migrate towards the center of the nucleation zone. This mechanism is schematically presented in Figure 14d. As an illustration, Figures 14a, b and c display a summary of the precursory AE sequence prior to SSE $\#$6 at $Pc = 60$ $MPa$. The cumulative AE moment release and along fault displacement in the last 10 seconds prior to failure are shown in Figure 14a. Figure 14b displays the distance to nucleation of the precursory AEs as a function of time to failure and Figure 14c shows the locations on the fault plane of the precursory AEs. The colorscale refers to the occurrence time of the precursory AEs relative to failure and the star symbol indicates where the SSE initiated. This precursory AE sequence is characterized by three bursts of microseismicity which occurred about -2.2, -0.5 and -0.1 $s$ prior to failure (Figure 14a). The AE moment release rapidly increased in the same way as the displacement onto the fault, which is consistent with the interpretations made so far (i.e., AEs highlight the rupture of brittle fault patches within the interior of the nucleation zone).
In that case, AE migration is clearly identifiable. Initially, precursory AEs locate at about 20-25 $mm$ from where the SSE initiated and then rapidly migrate towards the latter (Figure 14b). To be fully consistent with the interpretation proposed above, the edges of the nucleation zone were close to the locations of the first burst of microseismicity that occurred about -2.2 s prior to failure (Figure 14c). In the case of a self-similar crack, we expect that the displacement in the interior of the nucleation zone grows as the nucleation zone expands. According to the displacement along the fault, we can assume that the nucleation zone rapidly expanded after -2.2 s prior to failure. This resulted in the fast loading of the locked portions of the fault, which triggered the subsequent bursts of microseismicity. AE migration was then controlled by the shear-stress gradient in the interior of the nucleation zone. In such a scenario, fault strength homogeneity will favor migration, while fault strength heterogeneity will cause precursory AE activity to occur randomly relative to the center of the nucleation zone. Assuming that fault strength heterogeneity is, at first order, provided by the multiscale roughness of the fault plane, this may explain why migration is not observed at the lowest confining pressure $Pc = 30$ $MPa$ but only emerges when increasing normal stress. It should also be noted that AEs may be able to trigger each other through dynamic or static stress transfer. In that case, we would expect them to trace a well defined path, both in time and space, which is not what we observe. Therefore, this may happen but is likely of second order.\\ Foreshock migration prior to large earthquakes is often attributed to slow-slip propagation towards the mainshock hypocenter \cite{kato2012propagation,ruiz2014intense,kato2014multiple}.
It has also been suggested that fluid diffusion may trigger foreshock swarms by reducing the effective normal stress \cite{moreno20152014,socquet20178}. The experiments were conducted under dry conditions, which makes the latter case unlikely. Slow slip transients usually involve slip rates that range from 10 to 100 $\mu m/s$. This is more than what we observe during our experiments: fault slip rates are typically of the order of a few $\mu m/s$ in the last tenth of a second prior to failure. The question of whether slow slip transients prior to large earthquakes are part of the nucleation process or not is still debated. The 2014 $Mw$ $8.2$ Iquique and the 2011 $Mw$ $9.0$ Tohoku-oki earthquakes were both preceded by slow slip events; however, the latter did not propagate with slip (and foreshock rate) acceleration, which is kinematically expected in the case of a nucleation process. Therefore, these slow-slip transients were not interpreted as part of the nucleation process. This contrasts with the case of the 1999 $Mw$ $7.6$ Izmit earthquake, prior to which an increase in seismicity rate (including repeaters) and seismicity migration towards the mainshock hypocenter were reported \cite{bouchon2011extended}. We have attempted to give a qualitative interpretation of AE migration during our experiments. This may also be a plausible explanation for foreshock migration prior to natural earthquakes. In such a case, there is no need to invoke fluid diffusion or slow-slip propagation. \subsection{Temporal distribution of foreshocks} Here we look at the temporal evolution of the cumulative number of precursory AEs prior to failure.
When averaged over numerous foreshock sequences, it is known that the foreshock rate $N(t)$ increases as an inverse power law of the time to the mainshock \cite{jones1979some} such that: \begin{equation} N(t)=\frac{K}{(c+\Delta t)^{p}} \end{equation} where $K$ is the foreshock productivity, $c$ and $p$ are empirical constants and $\Delta t$ is the time that separates from the mainshock. A previous experimental study \cite{passelegue2017influence} showed that AE activity was increasing exponentially towards failure, which has been interpreted as a consequence of preslip, which is itself exponential. In our experiments, we found that precursory AE activity better follows an inverse power law of time of the form of (14). Figures 15a, b and c show the cumulative number of AEs $N_{a}(t)$ in the last 35 seconds prior to failure at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively. The cumulative number of AEs results from the stacking of all AE sequences and is averaged over all sequences. This allows us to preserve the smooth shape of the cumulative total number of AEs and to compare between the experiments the average number of precursory AEs during an individual sequence. Thus we can express $N_{a}(t)$ as: \begin{equation} N_{a}(t)=\frac{K}{(c+\Delta t)^{p}} \end{equation} where $\Delta t$ is the time to failure, which is positive in this case. The red curves display the best fits that we obtained over the parameters $c$ and $p$. The parameters $p$ and $c$ were searched in the range [0.1-3] with a step of 0.01. We chose to link $K$ to $c$ and $p$ such that $K = N_{f}.(c^{p})$, where $N_{f}$ is the average cumulative number of AEs at the time of failure. This ensures that the average cumulative number of AEs at the time of failure equals $N_{f}$. The logarithm of the residuals is given by the inserted panels as a function of $p$ and $c$. Residuals are normalized by the minimum (i.e., the value 0 indicates the minimum).
The best fits were obtained for $c = 2.39$ $\pm 0.3$ $s$ and $p = 1.31 \pm 0.08$, $c = 0.6$ $\pm 0.25$ $s$ and $p = 0.79 \pm 0.1$ and $c = 0.24$ $\pm 0.09$ $s$ and $p = 0.82 \pm 0.05$ at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively. Uncertainties correspond to the 90 $\%$ confidence level. The average AE rate is given by the time derivative of (15) such that: \begin{equation} \dot{N}_{a}(t) =-K\frac{p(c+\Delta t)^{p-1}}{(c+\Delta t)^{2p}} \end{equation} which gives: \begin{equation} \dot{N}_{a}(t) =-K\frac{p}{(c+\Delta t)^{p+1}} \end{equation} As expected, the power exponent is higher for the average AE rate. Using the best values of $p$ that we estimated, we obtain rate exponents $p+1$ of 2.31, 1.79 and 1.82 at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively. These values are larger than the typical values found for tectonic seismicity, which are less than or close to unity \cite{helmstetter2003foreshocks}. It should be noted that we have linked $K$ to $c$ and $p$, which may affect the results. Indeed, the three parameters $K$, $c$ and $p$ are linked to each other. The most common way to estimate them is to use the maximum likelihood method \cite{ogata1983estimation}. However, since we have linked $K$ to $c$ and $p$ in the same way for each experiment and since $N_{f}$ does not differ much ($N_{f}$ equals 17, 13 and 14 at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively), we believe that the results obtained can be compared relative to each other. Most of the time, the seismicity rate, whether for foreshocks or aftershocks, is only related to the parameter $p$. The physical meaning of $c$ has received far less attention, although both parameters impact the seismicity rate. A decrease of $c$ or an increase of $p$ will result in a higher seismicity rate.
At the laboratory scale, it has been demonstrated that $p$ correlates with strain rate \cite{ojala2004strain} and with stress heterogeneity in the context of rate and state friction laws \cite{dieterich1994constitutive}. This would be consistent with the high value of $p$ that we found at $Pc = 30$ $MPa$ compared with the other two experiments, since the fault surface presented higher roughness for this experiment (i.e. higher stress heterogeneity). The parameter $c$ may, in a sense, control when microseismicity starts. We can assume that precursory seismicity/AE activity may start earlier in the case of a heterogeneous medium. This will have a counteracting effect on the seismicity rate since brittle failures of locked areas of the fault will be more diffuse in time. In that case the parameter $c$ will increase and the seismicity rate will decrease. This may explain why we found a higher value of $c$ at $Pc = 30$ $MPa$. However, this is only speculation and would require further analysis and additional data to be validated.\\ According to (17) and using the best set of parameters obtained for $c$ and $p$, we find that, at the time of failure, the average AE rate is about 5 times larger at $Pc = 60$ $MPa$ compared with $Pc = 30$ $MPa$ and about two times larger at $Pc = 45$ $MPa$ compared with $Pc = 30$ $MPa$. This correlates well with the fault velocity. If we compare with the average fault slip velocity in the last millisecond, we find that the fault slip rate is about four times larger at $Pc = 60$ $MPa$ (about 4 $\mu m/s$) compared with $Pc = 30$ $MPa$ and about three times larger at $Pc = 45$ $MPa$ compared with $Pc = 30$ $MPa$. Given the good correlation that we found between along fault velocity and the cumulative number of AEs (Figure 12), we suggest that the AE rate is primarily controlled by the fault slip rate. However, it should be noted that this is only valid on average since precursory AE sequences exhibit variable behaviors with respect to each other.
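The grid-search fit of $N_{a}(t)=K/(c+\Delta t)^{p}$ with $K = N_{f}c^{p}$ described in this subsection can be sketched as follows. The synthetic sequence used to exercise the fit is an assumption for illustration; only the search ranges and step mirror the procedure described in the text.

```python
import numpy as np

def fit_inverse_power_law(t_to_failure, n_cum, n_f):
    """Grid-search fit of N_a(t) = K / (c + dt)^p with K = N_f * c^p,
    p and c searched in [0.1, 3] with a step of 0.01 as in the text.
    Returns (p, c, residual) minimizing the sum of squared misfits."""
    grid = np.arange(0.1, 3.0 + 1e-9, 0.01)
    best = (None, None, np.inf)
    for p in grid:
        for c in grid:
            k = n_f * c ** p          # ties K so that N_a(0) = N_f
            model = k / (c + t_to_failure) ** p
            res = np.sum((n_cum - model) ** 2)
            if res < best[2]:
                best = (p, c, res)
    return best

# Synthetic cumulative AE curve generated with p = 0.8, c = 0.5 s
# and N_f = 15 (assumed values, for illustration only)
dt = np.linspace(0.0, 35.0, 200)          # time to failure [s]
n_true = 15.0 * 0.5 ** 0.8 / (0.5 + dt) ** 0.8
p_hat, c_hat, _ = fit_inverse_power_law(dt, n_true, n_f=15.0)
print(p_hat, c_hat)  # recovers p close to 0.8 and c close to 0.5
```

Tying $K$ to $c$ and $p$ removes one degree of freedom and guarantees that the fitted curve passes through $N_{f}$ at the time of failure, at the cost of coupling the remaining two parameters.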
\section{Scaling laws and implication to natural faults} \subsection{AE source parameters} Numerous studies show that the scaling relationship between moment magnitude and corner frequency, $M_{0} \propto f_{c}^{3}$, is verified for earthquakes at the scale of crustal faults, induced seismicity and laboratory generated AEs, that is, for a wide range of moment magnitudes from -8 to 8 \cite{aki1967scaling,abercrombie1995earthquake,hiramatsu2002scaling,prieto2004earthquake,yamada2007stress,kwiatek2011source,yoshimitsu2014magnitude}. Demonstrating that laboratory generated AEs satisfy the scaling relationship between moment magnitude and corner frequency is crucial since it allows valuable inferences to be drawn about whether or not knowledge obtained in the laboratory can be extrapolated to the natural field. Figures 16a, b and c display the corner frequencies $f_{c}$ versus the seismic moments $M_{0}$ and moment magnitudes $M_{w}$ obtained by inversion for the recorded AEs at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$ respectively. Error bars for the estimated corner frequencies and moment magnitudes are indicated in light gray. We recall that we could not estimate $f_{c}$ for the AEs with moment magnitudes less than $M_{w} \approx -8.6$ due to a too low signal-to-noise ratio, nor for the AEs with $M_{w} > -7$ due to the saturation of the acoustic sensors. Figure 16d shows the comparison between the AE source parameters and a corpus of other studies which gathers natural earthquakes and laboratory generated AEs having moment magnitudes of -4 to 4. Figure 16d was adapted from the study of \citeA{yoshimitsu2014magnitude}; note that data from that study do not appear on Figure 16d since they overlap with ours. \\ According to the expected scaling relationship between $M_{0}$ and $f_{c}$, we find no differences between the AEs recorded during our experiments and natural earthquakes.
AEs have corner frequencies that mostly range from 300 $kHz$ (source size $\approx$ 4 $mm$) to 1.5 $MHz$ (source size $\approx$ 0.5 $mm$). The average stress-drops we obtained are $1$ $MPa$, $0.88$ $MPa$ and $0.68$ $MPa$ at respectively $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$. Quite surprisingly, we find that larger AEs have larger stress-drops. This might be directly related to insufficiently well calibrated acoustic sensors. Using ball drop momentum transfer for acoustic sensor calibration, \citeA{mclaskey2015robust} showed that the peaks of resonance that characterize the instrumental response of an acoustic sensor were diminished under confinement. Because the acoustic sensors were calibrated under atmospheric pressure, it is possible that particular frequency bands were overdamped. Thus, corner frequencies near these frequency bands would be underestimated. Another possibility is that the length of the time window that we used ($50$ $\mu s$ centered on the theoretical first S-wave arrival) to compute the spectra was too long to sufficiently reduce the energy coming from surface waves. Surface waves carry high-frequency energy which, thus, will be contained in the spectra. As we expect surface waves to be less attenuated for larger AEs, this would be consistent with overestimated corner frequencies for the largest AEs. However, this feature might also be physically meaningful. Large AEs tend to occur closer to the stick-slip instability, when the stressing rate is higher due to accelerating aseismic slip, which would thus result in larger stress-drops for larger AEs.\\ According to the seismological parameters estimated for the AEs, we infer that the latter can be considered as micro-earthquakes. In a sense, AEs might be more similar to natural earthquakes than SSEs are since they highlight self-terminating ruptures that are contained in an elastic material with similar mechanical properties.
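The conversion from corner frequency to source size and stress drop used in this kind of analysis can be sketched under an assumed circular-crack model. The corner-frequency constant $k$, the shear-wave velocity and the example values below are assumptions, not quantities fixed by the text; only the order of magnitude of the resulting source size can be compared with the values quoted above.

```python
# Hedged sketch: corner frequency -> source radius -> stress drop,
# assuming a Madariaga-type circular-crack model. K_MODEL and BETA_S
# are assumed values, not parameters taken from the experiments.
K_MODEL = 0.32      # S-wave corner-frequency constant (assumed)
BETA_S = 3500.0     # shear-wave velocity in the sample [m/s] (assumed)

def source_radius(fc_hz):
    """Source radius r = k * beta / fc."""
    return K_MODEL * BETA_S / fc_hz

def stress_drop(m0, fc_hz):
    """Circular-crack stress drop: delta_sigma = 7/16 * M0 / r^3."""
    r = source_radius(fc_hz)
    return 7.0 / 16.0 * m0 / r ** 3

# A corner frequency of 300 kHz maps to a millimetre-scale source,
# of the same order as the source sizes quoted above:
print(f"r = {source_radius(300e3) * 1e3:.1f} mm")
```

Because the stress drop scales with $f_{c}^{3}$ at fixed $M_{0}$, modest biases in corner frequency translate into large biases in stress drop, which is why the calibration issues discussed above matter.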
\subsection{Pre-seismic moment and coupling} We inferred that SSEs initiated as the expansion of an aseismically slipping fault patch that was driving precursory AE activity. Figure 17a compares the total AE moment release per SSE $M_{a}$ with the pre-seismic moment release $M_{p}$. Note that we report here only the precursory AE sequences that do not include saturated AEs, which equates to 67 SSEs out of 97. Figure 17b shows the pre-seismic moment release as a function of the co-seismic moment release. Our data (diamond symbols) are plotted together with the observations made by two previous experimental studies (\cite{passelegue2017influence,acosta2019precursory}, grey symbols). The inserted figure displays the comparison between our observations and what was found for a set of large earthquakes. These earthquakes are the 1999 $Mw$ 7.6 Izmit earthquake \cite{bouchon2011extended}, the 2011 $Mw$ 9.0 Tohoku-Oki earthquake \cite{kato2012propagation}, the 2012 $Mw$ 7.6 Nicoya earthquake \cite{voss2018slow}, the 2014 $Mw$ 8.2 Iquique earthquake \cite{socquet20178} and the 2015 $Mw$ 8.4 Illapel earthquake \cite{huang2018slow}. Pre-seismic moment release and co-seismic moment release were estimated according to $M_{p,c} = \mu D_{p,c} S$, with $\mu$ being the metagabbro shear modulus ($\mu = 40$ $GPa$), $S$ the surface of the fault and $D_{p}$ and $D_{c}$ the pre-seismic slip and the co-seismic slip respectively. $D_{p}$ is the total macroscopic fault slip from the beginning to the end of loading and thus includes preslip related to nucleation and potential creep. In addition, the size of the nucleation zone might be smaller than the total surface of the fault, which implies that $M_{p}$ constitutes an upper bound.\\ The total AE moment release prior to nucleation represents only a very small percentage of the pre-seismic moment release (Figure 17a).
The ratio between both, which we refer to as "seismic coupling" hereafter, ranges from about $5.10^{-7}$ ($5.10^{-5} \%$) to $4.10^{-4}$ (0.04 $\%$). Such a low seismic coupling may explain why, in our experiments, SSEs are unlikely to result from a cascade process. Indeed, we can assume that cascading failure processes require the rupture of patches large enough to propagate over the entire fault. However, the precursory AE sequences that include saturated AEs undoubtedly imply a higher seismic coupling. The largest number of saturated ($M_{w} > -6.8$) AEs was generated prior to SSE $\#$53 at $Pc = 30$ $MPa$. Let us assume a drastic scenario in which all of them would have been $M_{w} \approx -6.0$ AEs. Even in that hypothetical case, we estimate that the seismic coupling would still be low, of the order of $0.2 \%$.\\ Plotting the total AE moment release $M_{a}$ as a function of the pre-seismic moment $M_{p}$ indicates that $M_{a}$ goes as $M_{p}^{4}$. In the case of an isotropic expansion of a circular crack with length $L$, the moment release inside the crack would scale as $\Delta\tau L^{3}$ \cite{madariaga1976dynamics}. For a self-similar crack, the amount of slip $D$ inside the crack scales with its length $L$. Therefore, by making the approximation that the nucleation zone expands in the same way as a self-similar circular crack, we could have expected that $M_{a}$ goes as $M_{p}^{3}$. The fact that $M_{a}$ scales as $M_{p}^{4}$ can be explained if AEs have stress-drops that are magnitude dependent, that is, higher stress-drops for larger magnitudes, which would be consistent with the AE source parameters that we obtained (Figure 16). Note that extending this scaling relationship to larger pre-seismic moments would rapidly lead to 100 $\%$ of coupling. Taking the experiment conducted at $Pc = 45$ $MPa$ as an example, $M_{a}$ would equal $M_{p}$ for $M_{p} \approx 10^{4.5}$ $N.m$.
$M_{p} \approx 10^{4.5}$ $N.m$ equates to an amount of pre-slip of about 300 $\mu m$. If we consider a ratio $M_{p}/M_{c}$ of about $5\%$, this implies a co-seismic slip of about 6 $mm$. Assuming a linear scaling between the co-seismic displacement and the rupture length, 6 $mm$ of co-seismic slip is expected for an earthquake of magnitude $Mw$ of about 2.5-3. A recent study \cite{tamaribuchi2018characteristics} investigated foreshock activity characteristics using the JMA catalog over the last 20 years. Although the magnitude of the largest foreshock within a sequence scales with the magnitude of the mainshock, it has been observed that many mainshocks are not preceded by foreshock activity, at least not by foreshocks of $Mw > 1.0$ (the completeness magnitude of the catalog). Moreover, there are numerous foreshock sequences associated with mainshocks of magnitude $Mw \geq 2.5$ for which the magnitude of the largest foreshock is at least two moment-magnitude units smaller than that of the mainshock. If 100 $\%$ coupling were consistently reached during nucleation, intense foreshock activity would very often be observed. One possibility is that the power-law exponent of 4 that we find between $M_{a}$ and $M_{p}$ is related to the experimental conditions, such as the rapid loading, which likely prevents healing; the smoothness of the fault, which may promote pre-slip; or its simple geometry, which could favor a smooth acceleration of the fault plane during nucleation.\\ In a recent study, Acosta et al. \cite{acosta2019precursory} argued that the pre-seismic moment release $M_{p}$ should scale with the co-seismic moment release $M_{c}$.
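A direct consequence of any sub-linear scaling $M_{p}\propto M_{c}^{\beta}$ with $\beta<1$ (our own illustration, using the exponent $\beta = 0.56$ proposed by \citeA{acosta2019precursory}) is that the precursory fraction must decrease with earthquake size:
\begin{equation*}
\frac{M_{p}}{M_{c}} \;\propto\; M_{c}^{\,\beta-1} = M_{c}^{-0.44},
\end{equation*}
so that two earthquakes whose co-seismic moments differ by three orders of magnitude would have precursory fractions differing by a factor of $10^{3\times0.44}\approx 20$.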
This scaling relationship is expected if fracture energy increases as a power law of co-seismic displacement \cite{abercrombie2005can,ohnaka2013physics,passelegue2016dynamic} such that: \begin{equation} G=a\,u_{cos}^{\alpha} \end{equation} where $a$ is a scaling pre-factor, $\alpha$ a given exponent and $u_{cos}$ the co-seismic displacement. The following empirical scaling relation between $M_{p}$ and $M_{c}$ was proposed (indicated by the slope of 0.56 in Figure 17b): \begin{equation} M_{p} \propto M_{c}^{0.56} \end{equation} On average, $M_{p}$ contributes about 4 $\%$, 6 $\%$ and 2 $\%$ of $M_{c}$ at $Pc = 30$ $MPa$, $Pc = 45$ $MPa$ and $Pc = 60$ $MPa$, respectively. This is slightly less than what was found by \citeA{passelegue2016dynamic} and \citeA{acosta2019precursory} but is typically of the same order of magnitude. If we only look at the experimental data (Figure 17b), it is hard to tell whether $M_{p}$ scales as $M_{c}^{0.56}$. The experimental observations may also simply indicate a linear relation between $M_{p}$ and $M_{c}$, as given by the slope of 1. Although the nucleation phase cannot be appropriately examined through geodetic measurements for most earthquakes (either because of a lack of instrumentation or because of low earthquake magnitudes), well-instrumented large interplate earthquakes are exceptions. Except for the 1999 $Mw$ 7.6 Izmit earthquake, the examples that we show in Figure 17b have $M_{p}/M_{c}$ ratios that range from about 0.4 $\%$ to 3 $\%$. All those earthquakes have in common that their precursory moment was estimated using geodetic and/or repeater measurements. The precursory moment associated with the $Mw$ 7.6 Izmit earthquake was inferred \cite{bouchon2011extended} only from repeaters and was about 6 orders of magnitude lower than the co-seismic moment. It is likely that the occurrence of repeaters in a short amount of time requires a fast reloading of stress.
This is typically what is expected during nucleation, since slip is accelerating up to dynamic rupture. However, our observations suggest that coupling may be extremely low during nucleation. Therefore, relying only on the seismic moment released by repeaters may result in a lower-bound estimate of $M_{p}$ if a significant part of the precursory slip is accommodated aseismically. Comparing our results with what is typically observed for large interplate earthquakes suggests a simple linear relation between $M_{p}$ and $M_{c}$ (Figure 17b). This would imply that fracture energy is proportional to co-seismic displacement. Note that different forms of (19) were proposed. For instance, within the framework of slip-weakening theory and on the basis of seismological observations, \citeA{abercrombie2005can} proposed that $M_{p} \propto M_{c}^{0.78}$.\\ Comparing the total AE moment release $M_{a}$ with the co-seismic moment release $M_{c}$, there is up to 8 orders of magnitude difference between $M_{a}$ and $M_{c}$, which corresponds to just over 5 units of moment magnitude $M_{w}$. This is intriguing, since one of the most common arguments used to claim that earthquakes begin as small instabilities that cascade up into larger ruptures (\cite{beroza1996properties}) is the lack of detectable seismic activity prior to the mainshock. The nucleation process could be so silent that, most of the time, the nucleation phase would be difficult to detect. \section{Summary} In this study, we continuously recorded the microseismicity generated during stick-slip experiments and analyzed the dynamics of precursory AEs prior to stick-slip instabilities. Using calibrated acoustic sensors, we were able to analyze AE source parameters.
According to the scaling laws that describe the frequency-magnitude distribution of earthquakes and that link the size of an earthquake to its magnitude, our results suggest that millimetric AEs can reasonably be considered as microearthquakes. We found clear evidence that the occurrence of AEs was driven by fault slip acceleration during the nucleation phase of the upcoming stick-slip instability. Precursory AEs share significant similarities with foreshocks at the scale of crustal faults: (i) the AE rate increases as an inverse power law of time to failure and (ii) AEs migrate, promoted by increasing stress conditions. Having been able to measure both the seismic and the aseismic components of the nucleation phase, we suggested that nucleation is an almost fully aseismic process. This might therefore explain why, most of the time, foreshocks are not detected preceding mainshocks. Finally, we argued, based on fault surface analysis, that fault strength heterogeneity controls fault coupling: the higher the roughness, the stronger the coupling. As a consequence, topographical modifications of the fault during rapid slip episodes, such as mechanical abrasion, plastic deformation processes or partial/complete melting of the fault, may reduce or increase fault strength heterogeneity. \begin{figure} \includegraphics[width=\textwidth]{figures/schema_cali.jpg} \caption{\textbf{Top}. Photograph of the experimental set-up used for acoustic sensor calibration. a. High frequency generator (HFG). b. Amplifier. c. Laser vibrometer acquisition system. d. Laser beam. e. Rock sample with the acoustic sensor and the source glued on. f. Digital oscilloscope. \textbf{Bottom}. Schematic view of the calibration procedure. The source is positioned at the center of the fault and subjected to an input voltage.
Surface vibrations of the opposing side are recorded first by the acoustic sensor and then by the LDV.} \label{Figure 1} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{figures/spec_sig_m110_v109.pdf} \caption{Example of voltage and velocity measurements for the two types of sources and the estimated spectra. The time window used to estimate the spectra is indicated by the black double arrow. This time window is 50 $\mu s$ long and is centered on the first P-wave arrival.} \label{Figure 2} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{figures/spec_all_compa.pdf} \caption{Calibration curves. \textbf{a.} Sensitivity functions corresponding to the source M110-sm. The dashed lines indicate the calibration curves obtained for an input voltage of 40 V and the solid lines for an input voltage of 200 V. \textbf{b.} Same as \textbf{a.} but for the source V109-rm. \textbf{c.} Comparison of the sensitivity functions averaged over all input voltages and source durations. The acoustic sensors have a markedly nonlinear instrumental response, showing a large resonance band between 1.2 $MHz$ and 2.2 $MHz$ (delimited by the two black arrows).} \label{Figure 3} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{figures/spec__wf.pdf} \caption{Fitted displacement spectra and acoustic waveforms. \textbf{a.} Displacement spectra and best fits for the $Mw -7.7$ and $Mw -8.6$ events with their respective estimated corner frequencies indicated by the arrows (0.88 $MHz$ and 1.5 $MHz$, respectively). \textbf{b.} Corresponding waveforms used to estimate the spectra; the color code is the same as in \textbf{a.}. Waveform amplitudes were multiplied by a factor of two for visualization.
The black dashed line indicates the Hanning window used to taper the waveforms.} \label{Figure 4} \end{figure} \begin{figure} \centering \includegraphics[height=0.9\textheight]{figures/mecha_data.pdf} \caption{Cumulative slip, shear stress and AE rate during the experiments. AEs were stacked into 1 second bins. The displacement was corrected for the elastic deformation of the sample and of the apparatus.} \label{Figure 5} \end{figure} \begin{figure} \centering \hspace*{-2.2cm} \includegraphics[width=1.3\textwidth]{figures/bar_sumfs_sum_m0.pdf} \caption{Distribution of the number of AEs (left) and the total AE moment release (right) per stick-slip cycle during the experiments. For particular SSEs, the total AE moment release is a lower bound due to the saturation of the acoustic sensors for $Mw > -7$. Star symbols indicate the AE sequences that contain at least one AE of $Mw > -7$.} \label{Figure 6} \end{figure} \begin{figure} \centering \hspace*{-2.2cm} \includegraphics[width=1.3\textwidth]{figures/gutenberg_new.pdf} \caption{Frequency-magnitude distribution of the AEs generated during the experiments. Colored circles correspond to the cumulative G-R distribution of the AE moment magnitudes. Black arrows indicate the moment magnitude $Mw$ that corresponds to the onset of acoustic sensor saturation ($Mw = -7$). Black dashed lines show the b-values that we estimated according to the Aki-Utsu maximum likelihood method. Bar plots show the distribution of the AE moment magnitudes into 0.1 magnitude interval bins.} \label{Figure 7} \end{figure} \begin{figure} \centering \includegraphics[height=0.9\textheight]{figures/picture_nuc_fs-0101.jpg} \caption{Fault surface conditions, AE and stick-slip nucleation locations. Circle size refers to the AE moment magnitude and was set according to the estimated source size. The colorscale refers to the SSE index.
Only the AEs whose location errors are less than 2-3 $mm$ are reported here.} \label{Figure 8} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/MEB_bis01.jpg} \caption{Microtexture of the fault surfaces after the stick-slip experiments under scanning electron microscopy at: \textbf{a.},\textbf{b.} $Pc = 30$ $MPa$, \textbf{c.},\textbf{d.} $Pc = 45$ $MPa$ and \textbf{e.},\textbf{f.} $Pc = 60$ $MPa$. The direction of sliding is indicated by the white arrow. \textbf{a.} Small scale view of gouge particles with various sizes ranging from a few $\mu m$ to 100 $nm$. \textbf{b.} Large scale view of \textbf{a.} showing a highly damaged surface covered with patches of heterogeneously distributed gouge particles. A small scale asperity at the center appears slightly deformed in the direction of sliding. \textbf{c.} Small scale view of an amorphous layer of fine gouge particles. \textbf{d.} Large scale view of \textbf{c.} showing clusters of smashed gouge particles with sizes up to 10 $\mu m$. The fault surface presents striations along the sliding direction, which suggests plastic deformation during stick-slip events. \textbf{e.} Small scale view of the fault surface showing evidence of partial melting during sliding. A fraction of the small gouge particles is trapped in the melt. \textbf{f.} Large scale view of \textbf{e.} showing stretched and elongated surfaces formed due to partial melting and covered with (more) homogeneously distributed gouge particles.} \label{Figure 9} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/roughness_hurst_f.jpg} \caption{\textbf{Microtopography of fault surfaces at:} \textbf{a.} $Pc = 30$ $MPa$, \textbf{b.} $Pc = 45$ $MPa$ and \textbf{c.} $Pc = 60$ $MPa$. The microtopography was measured using a laser profilometer with a resolution of 0.05 $\mu m$. The colorscale indicates the microtopography and is given in $\mu m$.
Sampled surfaces are 15 $mm$ wide and 30 $mm$ long and correspond to the black rectangles shown on the right. \textbf{Power spectrum of the fault surface microtopography} as a function of the wavenumber $k$, extracted from the stacking of the 1-D profiles along the directions perpendicular, \textbf{d.}, and parallel, \textbf{e.}, to the direction of sliding. Black dashed lines represent the power law expected for a self-affine surface characterized by a Hurst exponent $H$ of 0.4.} \label{Figure 10} \end{figure} \begin{figure} \centering \includegraphics[height=0.9\textheight]{figures/gut_slinding.pdf} \caption{Cumulative AE moment release and b-value evolution prior to failure at: \textbf{a.} $Pc = 30$ $MPa$, \textbf{b.} $Pc = 45$ $MPa$ and \textbf{c.} $Pc = 60$ $MPa$. The cumulative AE moment release is plotted against the normalized time to failure and results from the stacking of all the precursory AE sequences. Square and diamond symbols show the AE b-values and their uncertainties, estimated at various time intervals relative to the onset of the stick-slip instability. Square symbols correspond to the b-values estimated after removing the saturated AEs ($Mw > -7$) and diamond symbols show the b-values estimated using the full AE catalogs.} \label{Figure 11} \end{figure} \begin{figure} \centering \hspace*{-1cm} \includegraphics[height=0.8\textheight]{figures/faultlaturation_fsdynamics.jpg} \caption{\textbf{Comparison between the normalized along-fault displacement, along-fault velocity, cumulative number of AEs and cumulative AE moment release as a function of the normalized time to failure at:} \textbf{a.}, $Pc = 30$ $MPa$, \textbf{b.}, $Pc = 45$ $MPa$ and \textbf{c.}, $Pc = 60$ $MPa$. All curves result from the stacking of all the SSEs. The grey shaded area around the AE moment release corresponds to the cumulative error of the magnitude estimates.
\textbf{Evolution of the normalized cumulative number of AEs as a function of the normalized time to failure at:} \textbf{d.}, $Pc = 30$ $MPa$, \textbf{e.}, $Pc = 45$ $MPa$ and \textbf{f.}, $Pc = 60$ $MPa$. The colorscale indicates the SSE index and the black curves result from the stacking of all precursory AE sequences.} \label{Figure 12} \end{figure} \begin{figure} \centering \includegraphics[height=0.9\textheight]{figures/migra_all.pdf} \caption{Distance to nucleation of the precursory AEs as a function of the normalized time to failure at: \textbf{a.} $Pc = 30$ $MPa$, \textbf{b.} $Pc = 45$ $MPa$, \textbf{c.} $Pc = 60$ $MPa$. The cyan triangles indicate the average distance to nucleation and its standard deviation computed over 10 log-distributed time intervals. On the left is shown the pdf of the precursory AEs as a function of their distance to nucleation.} \label{Figure 13} \end{figure} \begin{figure} \centering \hspace*{-1.5cm} \includegraphics[width=1.2\textwidth]{figures/expli_migra.pdf} \caption{\textbf{a.} Cumulative AE moment release and along-fault displacement in the last 10 seconds prior to SSE $\#$6 during the experiment conducted at $Pc = 60$ $MPa$. \textbf{b.} Distance to nucleation of the precursory AEs prior to failure. \textbf{c.} Locations, sizes and timing of the precursory AEs that occurred prior to SSE $\#$6 ($Pc = 60$ $MPa$). The colorscale refers to the timing of the AEs relative to failure. Circle size indicates the moment magnitude and was set according to source size. The star symbol indicates the nucleation location. \textbf{d.} Schematic view of the shear-stress evolution on locked portions of the fault (i.e., in the interior of the nucleation zone) during nucleation. The black dashed lines indicate the shear-stress profile. The red line idealizes the critical strength of the locked fault patches in the case of a homogeneous medium.
The star symbols depict a schematic view of the migration in time and space of the precursory AEs towards the nucleation initiation. The stress perturbations at the tips of the nucleation zone trigger the precursory AE activity far from nucleation. As the nucleation zone expands, stresses build up in the interior of the nucleation zone. The shear-stress gradient leads to the migration of the precursory AEs towards the center of the nucleation zone.} \label{Figure 14} \end{figure} \begin{figure} \centering \includegraphics[height=0.95\textheight]{figures/omori_bis.pdf} \caption{Inverse power law of time of the average cumulative number of AEs towards failure at: \textbf{a.} $Pc = 30$ $MPa$, \textbf{b.} $Pc = 45$ $MPa$ and \textbf{c.} $Pc = 60$ $MPa$. The red curves indicate the best fits obtained on parameters $c$ and $p$. The inset figures display the logarithm of the residuals normalized by the minimum (i.e., 0 indicates the minimum) as a function of $c$ and $p$.} \label{Figure 15} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/corner_frequency.pdf} \caption{Relationship between $M_{0}$ and $f_{c}$ at: \textbf{a.} $Pc = 30$ $MPa$, \textbf{b.} $Pc = 45$ $MPa$ and \textbf{c.} $Pc = 60$ $MPa$. Dashed black lines represent stress drops of 0.01, 0.1, 1, 10 and 100 $MPa$ from Madariaga's source model \cite{madariaga1976dynamics}. \textbf{d.} The AE source parameters for all the experiments are plotted as gray circles. The other points represent a corpus of previous studies and were taken from \cite{yoshimitsu2014magnitude}.} \label{Figure 16} \end{figure} \begin{figure}[h] \centering \hspace*{-1.5cm} \includegraphics[width=0.85\textwidth]{figures/scaling_m0_slip_all.pdf} \caption{\textbf{a.} Relationship between the pre-seismic moment release and the total AE moment release. Each diamond represents one SSE. Only the AE sequences that do not contain saturated AEs are shown here.
The black dashed line indicates a power-law exponent of 4. \textbf{b.} Relationship between the pre-seismic moment release and the co-seismic moment release. The grey squares and circles correspond to the observations of two other experimental studies \cite{passelegue2017influence,acosta2019precursory}. The black dashed line with a slope of 0.56 corresponds to the scaling law between the pre-seismic moment release $M_{p}$ and the co-seismic moment release $M_{c}$ proposed by \citeA{acosta2019precursory}. A linear relation between the two is given by the black dashed line with a slope of 1. The inset compares our observations with what was found for a set of large earthquakes ($Mw \geq 7.6$).} \label{Figure 17} \end{figure} \clearpage \acknowledgments This work was funded by the European Research Council grant REALISM (2016-grant 681346). The authors declare that they have no competing financial interests. H.S.B. would like to acknowledge funding from the European Research Council grant PERSISMO (865411). All data are available online (https://github.com/samsonmarty/high-frequency-radiation-during-laboratory-earthquakes).
Dev Corner Quick Gameplay Thoughts: October 10 Meddler (NA) submitted in Dev Corner Hi folks, ------------------------------------------------------------------------------- **Usual Disclaimers** These posts will often contain talk about future work we're doing, or planning to do, that isn't yet guaranteed to ship. The nature of the work could change or, depending on what we discover, projects mentioned may get delayed or even stopped. If you'd like to see a Tweet whenever a new one of these posts goes up: https://twitter.com/RiotMeddler http://ddragon.leagueoflegends.com/cdn/6.24.1/img/champion/Ziggs.png ---------------------------------------------------------------------------- **Removal of the jungle funneling penalty** As mentioned when it was introduced, one of our goals for preseason is to remove the jungle funneling penalty (Monster Hunter). That penalty was added to discourage funneling both jungle and lane farm onto one champion without interacting much in either position with the enemy. Our current belief is that two of the preseason changes should reduce the appeal of funneling enough that we can remove Monster Hunter without an explicit replacement: * Bounty scaling - Bounties will now be scaling somewhat off minions/monsters killed, not only off champions killed. That means that focusing farm so heavily on one champ becomes a bigger risk, with greater payout for the enemy if they can shut them down, giving better options against funnel strats. * Barricade (temp name) gold - Being able to gain rewards from pushing on a tower even if you can't take it down entirely makes leaving a lane uncontested a higher price to pay. That also reduces the appeal of funneling play in most circumstances. We're still testing whether the combination of those two things should be enough. Looks hopeful so far though.
---------------------------------------------------------------------------- **How popular are the different queues?** Something I thought it would be good to share was a look at how popular the different queue types in LoL are, both from an overall perspective and looking at how they vary region to region. Chart below showing a % breakdown (hours played in each mode) in early September for a few regions with different habits. https://imgur.com/a/7ClMRFs Looking at the regions shown: * Overall - None of the regions, whether those shown here or the others not listed, is highly representative of the overall average. Every region has at least one queue in which they're noticeably divergent from average play rates. * Korea - Korean players like solo queue a lot more than those from other regions, with low play in normals as a result. They play average amounts of ARAM, but have low interest in other modes compared to players elsewhere (e.g. Nexus Blitz or most RGMs we've run in the past). * NA - NA players play less Flex than almost every other region. They're bigger fans of non-SR modes, with the most Nexus Blitz play of any region and nearly the most TT play (EUW has slightly more TT games). * Brazil - Brazil's a newer region and so players there follow a couple of trends we see on newer servers, with more normal play and Co-op versus AI than average and less ARAM. ---------------------------------------------------------------------------- **Some small item/rune changes in 8.21** We've got some small adjustments to a few items in 8.21, bit of context: * Essence Reaver - Cutting 200g off the cost as short-term help. Might have something larger for it later in the year, want to make it better regardless now though. * Edge of Night - Cost reduction by 200g as well. No current plans for changing its functionality, think we've definitely got it overpriced though.
* Time Warp Tonic - We're looking at different possible ways to nerf TWT, given the amount of health/mana restoration it's offering is excessive. Current version in testing is a change where, instead of offering increased duration and MS during the duration, it gives some restoration up front, doesn't change the duration at all and still gives the movement speed. **What's a Dev Corner?** This is a place where League of Legends developers can share thoughts, questions, and retrospectives on all things to do with the game. Please keep discussions focused on the topics at hand - we'll be pretty firm about attempts at hijacking top comments with unrelated content. The plan with the Dev Corner is to pilot various types of communication, ranging from open office hours to 'state of the game' essays. Every month, we'll have 'open forum' style discussions to talk about future ideas. In the future we'll have a calendar up! **Culture & Etiquette** Don't be an asshole. Seriously. Since this is going to be an experimental environment, we ask that everyone do their best to keep the discussions constructive and respectful. Seek to understand rather than demanding to be convinced, and we will do the same. Our long-term goal is to set up a positive environment for developers to engage in quick, constructive conversations. As always, adhere to the Universal Rules.
Alexander Kühl, born in January 1973 in Rendsburg, Germany, is a former German basketball player. He played at the center position. Biography Honours Notes and references External links Born in January 1973 Born in Rendsburg German international basketball players Charlotte 49ers basketball players Bayer Giants Leverkusen players Aris Thessaloniki (basketball) players AC Near East players Pallacanestro Cantù players Peristeri B.C. players SS Felice Scandone players CSP Limoges players EnBW Ludwigsburg players
Displaying all 23 items (0.000 seconds) Topic: Indian boys Topic: Indian children Topic: Children Topic: Girls Topic: Indian girls Topic: Boys 1. American Indian Boys and Girls at a Day School at Bayou Blue, Louisiana 2. American Indian Couple with Young Woman and Baby in front of House, Nebraska 3. American Indian Girls in White Dresses, Boys in Suits and Ties 4. Boys and Girls Dancing 5. Boys and Girls in Historical Costume Dancing a Minuet at Sante Fe, New Mexico 6. Children at the Pablo Public School, Flathead Reservation, Montana 7. Children in Traditional European Dress and Wooden Shoes 9. Children Smiling 10. Class of Elementary School Students and Their Teachers, St. Regis Indian Mission Hogansburg, New York 11. Class Photo, Elementary Age Boys and Girls, Wahpeton, North Dakota 13. Class Photo, Elementary Age Girls, Wahpeton, North Dakota 14. Graduating Class and Faculty from the Onondaga Reservation School Yearbook, 1937-38 15. Group of American Indian Adults and Children 16. Group of Public School Children and Their Teacher from the Klamath Agency 17. Group of Students at Mount Zion Community House and School 18. Group of Young People from the Klamath Government Day School 20. Student Group, Santa Fe, New Mexico 21. Students at the Rocky Boy Indian Day School 22. Very Small American Indian Boy and Girl in a Cotton Field 23. Whitetail Day School, Group of Students and Adults on the Front Steps Wahpeton (N.D.) (3) Santa Fe (N.M.) (2) Bayou Blue (La.) (1) Chippewa-Cree Indians of the Rocky Boy's Reservation, Montana (1) Flathead Indian Reservation (Mont.) (1) Hogansburg (Bombay, N.Y.) (1) Onondaga Indian Reservation (N.Y.) (1) Rocky Boy's Reservation (Mont.) (1) Boys (23)[x] Children (23)[x] Girls (23)[x] Indian boys (23)[x] Indian children (23)[x] Indian girls (23)[x] Portraits, Group (21) Indians of North America--Education (14) Wahpeton Indian School (N.D.) (3) McAlheny, Mrs. (1) Rocky Boy Indian Day School (1) United States. Bureau of Indian Affairs.
Klamath Agency (1) New York (State) (2) lantern slides (1)
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="robots" content="index, follow, all" /> <title>Khill\Lavacharts\Configs\Tooltip | Lavacharts API</title> <link rel="stylesheet" type="text/css" href="../../../css/bootstrap.min.css"> <link rel="stylesheet" type="text/css" href="../../../css/bootstrap-theme.min.css"> <link rel="stylesheet" type="text/css" href="../../../css/sami.css"> <script src="../../../js/jquery-1.11.1.min.js"></script> <script src="../../../js/bootstrap.min.js"></script> <script src="../../../js/typeahead.min.js"></script> <script src="../../../sami.js"></script> <meta name="MobileOptimized" content="width"> <meta name="HandheldFriendly" content="true"> <meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=1"> <link rel="icon" href="../../../images/favicon.ico" type="image/x-icon"> </head> <body id="class" data-name="class:Khill_Lavacharts_Configs_Tooltip" data-root-path="../../../"> <div id="content"> <div id="left-column"> <div id="control-panel"> <form id="search-form" action="../../../search.html" method="GET"> <span class="glyphicon glyphicon-search"></span> <input name="search" class="typeahead form-control" type="search" placeholder="Search"> </form> </div> <div id="api-tree"></div> </div> <div id="right-column"> <nav id="site-nav" class="navbar navbar-default" role="navigation"> <div class="container-fluid"> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#navbar-elements"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="../../../index.html"> <img src="../../../images/lava-logo.gif" /> </a> </div> <div class="collapse navbar-collapse" id="navbar-elements"> <ul class="nav navbar-nav"> <li><a href="../../../classes.html">Classes</a></li> <li><a href="../../../namespaces.html">Namespaces</a></li> 
<li><a href="../../../interfaces.html">Interfaces</a></li> <li><a href="../../../traits.html">Traits</a></li> <li><a href="../../../doc-index.html">Index</a></li> <li><a href="../../../search.html">Search</a></li> </ul> </div> </div> </nav> <div class="namespace-breadcrumbs"> <ol class="breadcrumb"> <li><span class="label label-default">class</span></li> <li><a href="../../../Khill.html">Khill</a></li> <li><a href="../../../Khill/Lavacharts.html">Lavacharts</a></li> <li><a href="../../../Khill/Lavacharts/Configs.html">Configs</a></li> <li>Tooltip</li> </ol> </div> <div id="page-content"> <div class="page-header"> <h1>Tooltip</h1> </div> <p> class <strong>Tooltip</strong> extends <a href="../../../Khill/Lavacharts/JsonConfig.html"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a> (<a href="https://github.com/khill/lavacharts/blob/master/src/Configs/Tooltip.php">View source</a>) </p> <div class="description"> <p>Tooltip ConfigObject</p> <p>An object containing all the values for the tooltip which can be passed into the chart's options.</p> </div> <h2>Constants</h2> <table class="table table-condensed"> <tr> <td>TYPE</td> <td class="last"> <p><em>Type of JsonConfig object</em></p> <p> </p> </td> </tr> </table> <h2>Methods</h2> <div class="container-fluid underlined"> <div class="row"> <div class="col-md-2 type"> </div> <div class="col-md-8 type"> <a href="#method___construct">__construct</a>( array $config = array()) <p>Builds the tooltip object with specified options.</p> </div> <div class="col-md-2"></div> </div> <div class="row"> <div class="col-md-2 type"> mixed </div> <div class="col-md-8 type"> <a href="#method___get">__get</a>( string $option) <p>Get the value of a set option via magic method through UI.</p> </div> <div class="col-md-2"><small>from&nbsp; <a href="../../../Khill/Lavacharts/JsonConfig.html#method___get"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a></small></div> </div> <div class="row"> <div class="col-md-2 type"> 
<a href="../../../Khill/Lavacharts/Options.html"><abbr title="Khill\Lavacharts\Options">Options</abbr></a> </div> <div class="col-md-8 type"> <a href="#method_getOptions">getOptions</a>() <p>Gets the Options object for the JsonConfig</p> </div> <div class="col-md-2"><small>from&nbsp; <a href="../../../Khill/Lavacharts/JsonConfig.html#method_getOptions"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a></small></div> </div> <div class="row"> <div class="col-md-2 type"> <a href="../../../Khill/Lavacharts/JsonConfig.html"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a> </div> <div class="col-md-8 type"> <a href="#method_setOption">setOption</a>( string $option, mixed $value) <p>Shortcut method to set the value of an option and return $this.</p> </div> <div class="col-md-2"><small>from&nbsp; <a href="../../../Khill/Lavacharts/JsonConfig.html#method_setOption"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a></small></div> </div> <div class="row"> <div class="col-md-2 type"> </div> <div class="col-md-8 type"> <a href="#method_setOptions">setOptions</a>( array $config) <p>Parses the config array by passing the values through each method to check validity against if the option exists.</p> </div> <div class="col-md-2"><small>from&nbsp; <a href="../../../Khill/Lavacharts/JsonConfig.html#method_setOptions"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a></small></div> </div> <div class="row"> <div class="col-md-2 type"> array </div> <div class="col-md-8 type"> <a href="#method_jsonSerialize">jsonSerialize</a>() <p>Custom serialization of the JsonConfig object.</p> </div> <div class="col-md-2"><small>from&nbsp; <a href="../../../Khill/Lavacharts/JsonConfig.html#method_jsonSerialize"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a></small></div> </div> <div class="row"> <div class="col-md-2 type"> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr 
title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a> </div> <div class="col-md-8 type"> <a href="#method_showColorCode">showColorCode</a>( bool $showColorCode) <p>Sets whether to show the color code.</p> </div> <div class="col-md-2"></div> </div> <div class="row"> <div class="col-md-2 type"> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a> </div> <div class="col-md-8 type"> <a href="#method_textStyle">textStyle</a>( array $textStyleConfig) <p>Sets the text style of the tooltip.</p> </div> <div class="col-md-2"></div> </div> <div class="row"> <div class="col-md-2 type"> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a> </div> <div class="col-md-8 type"> <a href="#method_trigger">trigger</a>( string $trigger) <p>Sets the user interaction that causes the tooltip to be displayed.</p> </div> <div class="col-md-2"></div> </div> </div> <h2>Details</h2> <div id="method-details"> <div class="method-item"> <h3 id="method___construct"> <div class="location">at line 51</div> <code> <strong>__construct</strong>( array $config = array())</code> </h3> <div class="details"> <div class="method-description"> <p>Builds the tooltip object with specified options.</p> </div> <div class="tags"> <h4>Parameters</h4> <table class="table table-condensed"> <tr> <td> array</td> <td>$config</td> <td> </td> </tr> </table> <h4>Exceptions</h4> <table class="table table-condensed"> <tr> <td><a href="../../../Khill/Lavacharts/Exceptions/InvalidConfigValue.html"><abbr title="Khill\Lavacharts\Exceptions\InvalidConfigValue">InvalidConfigValue</abbr></a></td> <td> </td> </tr> <tr> <td><a href="../../../Khill/Lavacharts/Exceptions/InvalidConfigProperty.html"><abbr title="Khill\Lavacharts\Exceptions\InvalidConfigProperty">InvalidConfigProperty</abbr></a></td> <td> </td> </tr> </table> </div> </div> </div> <div class="method-item"> <h3 id="method___get"> 
<div class="location">in <a href="../../../Khill/Lavacharts/JsonConfig.html#method___get"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a> at line 64</div> <code> mixed <strong>__get</strong>( string $option)</code> </h3> <div class="details"> <div class="method-description"> <p>Get the value of a set option via magic method through UI.</p> </div> <div class="tags"> <h4>Parameters</h4> <table class="table table-condensed"> <tr> <td> string</td> <td>$option</td> <td>Name of option.</td> </tr> </table> <h4>Return Value</h4> <table class="table table-condensed"> <tr> <td> mixed</td> <td> </td> </tr> </table> <h4>Exceptions</h4> <table class="table table-condensed"> <tr> <td><a href="../../../Khill/Lavacharts/Exceptions/InvalidConfigProperty.html"><abbr title="Khill\Lavacharts\Exceptions\InvalidConfigProperty">InvalidConfigProperty</abbr></a></td> <td> </td> </tr> </table> </div> </div> </div> <div class="method-item"> <h3 id="method_getOptions"> <div class="location">in <a href="../../../Khill/Lavacharts/JsonConfig.html#method_getOptions"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a> at line 75</div> <code> <a href="../../../Khill/Lavacharts/Options.html"><abbr title="Khill\Lavacharts\Options">Options</abbr></a> <strong>getOptions</strong>()</code> </h3> <div class="details"> <div class="method-description"> <p>Gets the Options object for the JsonConfig</p> </div> <div class="tags"> <h4>Return Value</h4> <table class="table table-condensed"> <tr> <td> <a href="../../../Khill/Lavacharts/Options.html"><abbr title="Khill\Lavacharts\Options">Options</abbr></a></td> <td> </td> </tr> </table> </div> </div> </div> <div class="method-item"> <h3 id="method_setOption"> <div class="location">in <a href="../../../Khill/Lavacharts/JsonConfig.html#method_setOption"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a> at line 90</div> <code> <a href="../../../Khill/Lavacharts/JsonConfig.html"><abbr 
title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a> <strong>setOption</strong>( string $option, mixed $value)</code> </h3> <div class="details"> <div class="method-description"> <p>Shortcut method to set the value of an option and return $this.</p> <p>In order to maintain backwards compatibility, ConfigObjects will be unwrapped.</p> </div> <div class="tags"> <h4>Parameters</h4> <table class="table table-condensed"> <tr> <td> string</td> <td>$option</td> <td>Option to set.</td> </tr> <tr> <td> mixed</td> <td>$value</td> <td>Value of the option.</td> </tr> </table> <h4>Return Value</h4> <table class="table table-condensed"> <tr> <td> <a href="../../../Khill/Lavacharts/JsonConfig.html"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a></td> <td> </td> </tr> </table> </div> </div> </div> <div class="method-item"> <h3 id="method_setOptions"> <div class="location">in <a href="../../../Khill/Lavacharts/JsonConfig.html#method_setOptions"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a> at line 106</div> <code> <strong>setOptions</strong>( array $config)</code> </h3> <div class="details"> <div class="method-description"> <p>Parses the config array by passing the values through each method to check validity against if the option exists.</p> </div> <div class="tags"> <h4>Parameters</h4> <table class="table table-condensed"> <tr> <td> array</td> <td>$config</td> <td> </td> </tr> </table> <h4>Exceptions</h4> <table class="table table-condensed"> <tr> <td><a href="../../../Khill/Lavacharts/Exceptions/InvalidConfigValue.html"><abbr title="Khill\Lavacharts\Exceptions\InvalidConfigValue">InvalidConfigValue</abbr></a></td> <td> </td> </tr> <tr> <td><a href="../../../Khill/Lavacharts/Exceptions/InvalidConfigProperty.html"><abbr title="Khill\Lavacharts\Exceptions\InvalidConfigProperty">InvalidConfigProperty</abbr></a></td> <td> </td> </tr> </table> </div> </div> </div> <div class="method-item"> <h3 id="method_jsonSerialize"> <div class="location">in 
<a href="../../../Khill/Lavacharts/JsonConfig.html#method_jsonSerialize"><abbr title="Khill\Lavacharts\JsonConfig">JsonConfig</abbr></a> at line 275</div> <code> array <strong>jsonSerialize</strong>()</code> </h3> <div class="details"> <div class="method-description"> <p>Custom serialization of the JsonConfig object.</p> </div> <div class="tags"> <h4>Return Value</h4> <table class="table table-condensed"> <tr> <td> array</td> <td> </td> </tr> </table> </div> </div> </div> <div class="method-item"> <h3 id="method_showColorCode"> <div class="location">at line 65</div> <code> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a> <strong>showColorCode</strong>( bool $showColorCode)</code> </h3> <div class="details"> <div class="method-description"> <p>Sets whether to show the color code.</p> </div> <div class="tags"> <h4>Parameters</h4> <table class="table table-condensed"> <tr> <td> bool</td> <td>$showColorCode</td> <td>State of showing the color code.</td> </tr> </table> <h4>Return Value</h4> <table class="table table-condensed"> <tr> <td> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a></td> <td> </td> </tr> </table> <h4>Exceptions</h4> <table class="table table-condensed"> <tr> <td><a href="../../../Khill/Lavacharts/Exceptions/InvalidConfigValue.html"><abbr title="Khill\Lavacharts\Exceptions\InvalidConfigValue">InvalidConfigValue</abbr></a></td> <td> </td> </tr> </table> </div> </div> </div> <div class="method-item"> <h3 id="method_textStyle"> <div class="location">at line 77</div> <code> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a> <strong>textStyle</strong>( array $textStyleConfig)</code> </h3> <div class="details"> <div class="method-description"> <p>Sets the text style of the tooltip.</p> </div> <div class="tags"> <h4>Parameters</h4> <table 
class="table table-condensed"> <tr> <td> array</td> <td>$textStyleConfig</td> <td> </td> </tr> </table> <h4>Return Value</h4> <table class="table table-condensed"> <tr> <td> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a></td> <td> </td> </tr> </table> <h4>Exceptions</h4> <table class="table table-condensed"> <tr> <td><a href="../../../Khill/Lavacharts/Exceptions/InvalidConfigValue.html"><abbr title="Khill\Lavacharts\Exceptions\InvalidConfigValue">InvalidConfigValue</abbr></a></td> <td> </td> </tr> </table> </div> </div> </div> <div class="method-item"> <h3 id="method_trigger"> <div class="location">at line 92</div> <code> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a> <strong>trigger</strong>( string $trigger)</code> </h3> <div class="details"> <div class="method-description"> <p>Sets the user interaction that causes the tooltip to be displayed.</p> <p>'focus' - The tooltip will be displayed when the user hovers over an element. 'none' - The tooltip will not be displayed.</p> </div> <div class="tags"> <h4>Parameters</h4> <table class="table table-condensed"> <tr> <td> string</td> <td>$trigger</td> <td>Type of trigger.</td> </tr> </table> <h4>Return Value</h4> <table class="table table-condensed"> <tr> <td> <a href="../../../Khill/Lavacharts/Configs/Tooltip.html"><abbr title="Khill\Lavacharts\Configs\Tooltip">Tooltip</abbr></a></td> <td> </td> </tr> </table> <h4>Exceptions</h4> <table class="table table-condensed"> <tr> <td><a href="../../../Khill/Lavacharts/Exceptions/InvalidConfigValue.html"><abbr title="Khill\Lavacharts\Exceptions\InvalidConfigValue">InvalidConfigValue</abbr></a></td> <td> </td> </tr> </table> </div> </div> </div> </div> </div> <div id="footer"> Generated by <a href="http://sami.sensiolabs.org/">Sami, the API Documentation Generator</a>. </div> </div> </div> </body> </html>
Q: Hadoop and Cassandra - InvalidRequestException(why:Column timestamp required)

I have a simple MapReduce job running on my Cassandra cluster, but when it tries to save the output to a table I get InvalidRequestException(why:Column timestamp required). I've tried manually adding a 'timestamp' column to the CF but it doesn't make any difference. Here's the description of my CF (as interpreted by cqlsh):

CREATE TABLE output_words (
  key text PRIMARY KEY,
  "count" int
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.010000 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.000000 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.100000 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};

I'm using a POM with hadoop-core v1.1.2 and cassandra-thrift v1.2.4 on top of Cassandra v1.2.4. Can anyone suggest how to get around this?

Additional info

I'm configuring my job as follows (only showing config relevant to the output):

Job job = new Job(getConf(), "wordcount");
job.setJarByClass(TestJob.class);
job.setMapperClass(TokenizerMapper.class);
job.setReducerClass(ReducerToCassandra.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(ByteBuffer.class);
job.setOutputValueClass(List.class);
job.setOutputFormatClass(ColumnFamilyOutputFormat.class);
ConfigHelper.setOutputColumnFamily(job.getConfiguration(), _keyspace, OUTPUT_COLUMN_FAMILY);
ConfigHelper.setOutputRpcPort(job.getConfiguration(), _port);
ConfigHelper.setOutputInitialAddress(job.getConfiguration(), _host);
ConfigHelper.setOutputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.Murmur3Partitioner");

And my reducer class:

public static class ReducerToCassandra
        extends Reducer<Text, IntWritable, ByteBuffer, List<Mutation>> {

    public void reduce(Text word, Iterable<IntWritable> values, Context
            context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(StringSerializer.get().toByteBuffer(word.toString()),
                Collections.singletonList(getMutation(word, sum)));
    }

    private static Mutation getMutation(Text word, int sum) {
        Column c = new Column();
        c.name = StringSerializer.get().toByteBuffer("count");
        c.value = IntegerSerializer.get().toByteBuffer(sum);
        c.timestamp = System.currentTimeMillis() * 1000;
        Mutation m = new Mutation();
        m.column_or_supercolumn = new ColumnOrSuperColumn();
        m.column_or_supercolumn.column = c;
        return m;
    }
}

A: Instead of this

c.timestamp = System.currentTimeMillis() * 1000;

can you try this?

c.setTimestamp(System.currentTimeMillis() * 1000);

Thrift-generated classes such as Column track which optional fields have been assigned through an internal "isset" flag, and only the setter flips that flag. Assigning the public timestamp field directly leaves the field marked as unset, so it is never serialized with the mutation and the server rejects the write with "Column timestamp required".
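To make the failure mode concrete, here is a minimal, self-contained sketch of the pattern Thrift's generated Java code uses. Note that FakeColumn is a hand-written stand-in for illustration, not the real org.apache.cassandra.thrift.Column:

```java
// Hand-written stand-in mimicking the shape of a Thrift-generated struct.
// Real generated classes expose public fields AND setters; only the setter
// flips the internal isset flag that serialization consults.
class FakeColumn {
    public long timestamp;           // public field, as in Thrift-generated code
    private boolean timestampIsSet;  // stand-in for Thrift's __isset bitfield

    public FakeColumn setTimestamp(long ts) {
        this.timestamp = ts;
        this.timestampIsSet = true;  // the step that direct assignment skips
        return this;
    }

    public boolean isSetTimestamp() {
        return timestampIsSet;
    }
}

public class IssetDemo {
    public static void main(String[] args) {
        FakeColumn direct = new FakeColumn();
        direct.timestamp = System.currentTimeMillis() * 1000;
        // The field holds a value, but the struct still reports it as unset,
        // so a Thrift serializer would omit it -> "Column timestamp required".
        System.out.println(direct.isSetTimestamp()); // false

        FakeColumn viaSetter = new FakeColumn();
        viaSetter.setTimestamp(System.currentTimeMillis() * 1000);
        System.out.println(viaSetter.isSetTimestamp()); // true
    }
}
```

The same reasoning applies to any other optional field on the generated structs: prefer the setters over direct field assignment.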
package org.n52.sos.service;

import java.util.Map;

/**
 * Interface to provide SOAP Header support in Request and Response objects.
 *
 * @author Matthes Rieke
 *
 * @since 4.0.0
 */
public interface CommunicationObjectWithSoapHeader {

    /**
     * @return the map of SoapHeader objects of this communication object
     */
    Map<String, SoapHeader> getSoapHeader();

    /**
     * @param header
     *            the map of SoapHeader objects to set
     */
    void setSoapHeader(Map<String, SoapHeader> header);

    /**
     * Convenience method to check if the SoapHeader is set.
     *
     * @return true if Header is set
     */
    boolean isSetSoapHeader();
}
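For context, a minimal sketch of how a request class might satisfy this interface. The interface is repeated (without its package) so the sketch compiles standalone, and SoapHeader is a trivial stand-in for the project's real type, assumed here for illustration:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the real SoapHeader type (assumption; the
// actual class lives elsewhere in the SOS codebase).
class SoapHeader {
}

// Interface repeated here so the sketch is self-contained.
interface CommunicationObjectWithSoapHeader {
    Map<String, SoapHeader> getSoapHeader();
    void setSoapHeader(Map<String, SoapHeader> header);
    boolean isSetSoapHeader();
}

// Sketch: a request object that stores SOAP headers keyed by, e.g., namespace URI.
class SoapAwareRequest implements CommunicationObjectWithSoapHeader {
    private Map<String, SoapHeader> soapHeader;

    @Override
    public Map<String, SoapHeader> getSoapHeader() {
        // Never hand out null; an empty map is easier for callers.
        return soapHeader == null ? Collections.<String, SoapHeader>emptyMap() : soapHeader;
    }

    @Override
    public void setSoapHeader(Map<String, SoapHeader> header) {
        this.soapHeader = header;
    }

    @Override
    public boolean isSetSoapHeader() {
        return soapHeader != null && !soapHeader.isEmpty();
    }
}

public class SoapHeaderDemo {
    public static void main(String[] args) {
        SoapAwareRequest request = new SoapAwareRequest();
        System.out.println(request.isSetSoapHeader()); // false

        Map<String, SoapHeader> headers = new HashMap<String, SoapHeader>();
        headers.put("http://www.w3.org/2003/05/soap-envelope", new SoapHeader());
        request.setSoapHeader(headers);
        System.out.println(request.isSetSoapHeader()); // true
    }
}
```

Keeping isSetSoapHeader() as a null-and-emptiness check matches the convenience-method contract described in the Javadoc.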
(4657) López, asteroid discovered in 1979 by the Russian astronomer Nikolay Stepanovich Chernykh at the Crimean Astrophysical Observatory
Casa López i Soler, residential building in Amposta
Editorial López, publishing house founded in Barcelona by Innocenci López Bernagossi, publisher and bookseller
Biographies:
Adelardo López de Ayala (Guadalcanal, 1828 – Madrid, 1879), Spanish politician and playwright of the Realist movement
Adolfo López Mateos (Atizapán, State of Mexico, 1909), constitutional president of Mexico from 1958 to 1964
Adrián López Álvarez (1988, Teberga, Asturias), Spanish footballer
Adrián López Rodríguez (As Pontes, 1987), Galician footballer who plays as a defender
Agustí Lluís López i Pla (Sort, Pallars Sobirà, 1952), Catalan politician, member of the Parliament of Catalonia
Aitor López Rekarte (Arrasate, 1975), Basque footballer who played as a full-back
Alberto López, several people
Alfons López Tena (Sagunt, 1957), Valencian politician and member of the Parliament of Catalonia for Solidaritat per la Independència
Alfonso López (Lleida, 1950), cartoonist, comics writer, graphic humorist and publications editor
Alfonso López Trujillo (Villahermosa, 1935 - Rome, 2008), Colombian cardinal
Andreu López Blasco, Valencian politician who served as Minister of Culture, Education and Science
Andrés Manuel López Obrador (1953, Macuspana, Tabasco), Mexican politician affiliated with the Party of the Democratic Revolution (PRD)
Ángel López Pérez (1873-1964), Galician lawyer and politician
Ángel Domingo López Ruano (Las Palmas de Gran Canaria, 1981), Spanish footballer
Ángeles López de Ayala (Seville, 1858 – Barcelona, 1926), Spanish political activist
Antoni López i Benturas (1861-1931), bookseller and publisher
Antoni López i Llausàs (Barcelona, 1888 - Buenos Aires, 1979), bookseller, distributor and publisher
Antonio López, several people
Arcadio López Casanova (Lugo, 1942), Galician poet and literary critic in the Galician and Spanish languages
Bernat López Piquer (Valencia, 1799 - Madrid, 1874), painter
Borja López i Castilla (Barcelona, 1979), Catalan roller hockey player
Candela López Tagliafico (Lomas de Zamora, Argentina, 1985), Catalan ecosocialist politician, ICV member and mayor of Castelldefels
Carol López
Carlos López, several people
Carlos López Buchardo (Buenos Aires, 1881 - 1948), Argentine composer
Carlos Ariel López Chimino (San Juan de la Frontera, 1977), Argentine roller hockey player
Casimiro López Llorente
Carmen López, several people
César López Fretes (Asunción, 1923 - Pereira, 2001), Paraguayan footballer of the 1940s and 1950s
Christian Alfonso López (Barcelona, 1989), footballer who plays as a midfielder
Claudi López i Bru
Claudio López, several people
Cristóbal López de Valladolid (Mérida, province of Badajoz, 1638 - Córdoba,
Diego López, several people
David López, several people
Dolors López i Aguilar
Eduard López-Chávarri i Marco (Valencia, 1871 - Valencia, 1970), composer, writer and music theorist
Eduardo López Albizu
Eduardo López de Ochoa (Barcelona, 1877 - Madrid, 1936), Division General of the Spanish Army
Emilio López, several people
Encarnación López Julves (Buenos Aires, 1895 – New York, 1945), dancer, bailaora and singer
Ennec I López (), first Lord of Biscay
Enrique López, several people
Esther López Barceló (Alacant, 1983), Valencian politician, member of the Corts Valencianes between 2011 and 2015 for Esquerra Unida
Eufrasio López de Rojas (Andújar, 1628 - Jaén, 1684), Spanish architect
Feliciano López Díaz-Guerra (Toledo, 1981), Spanish tennis player
Fernando López-Amor García (Salamanca, 1952), Spanish politician
Florentino López Cuevillas (Ourense, 1886 - 1958), Galician historian and writer
Francesc López Barrios (Valencia, 1958), Valencian poet who works as a television scriptwriter and director
Francesc López Fabra (Barcelona, ? - 1891), Catalan military officer, printer, geographer and politician
Francisco López Alfaro (Osuna, 1962), Andalusian footballer and coach
Francisco López Fernández
Francisco López de Gómara
Francisco López Gonzalvo (Barcelona, 1958), Catalan professional cyclist between 1982 and 1988
Francisco López Hernández
Francisco Javier López Aguilera, or Javi López (Barcelona, 1973), Catalan footballer who played as a full-back
Francisco Javier López Bravo (Málaga, 1974), Andalusian footballer who plays as a defender
Francisco Javier López Castro, or Javi López (Barcelona, 1964), football player and coach
Francisco Javier López Izkue (Ibiricu, Eguesibar, 1956), Spanish professional cyclist between 1980 and 1982
García López de Cárdenas (Llerena, Extremadura, - ?), Spanish explorer, known for being the first Westerner to see
García López de Sessé (14th-15th centuries), Aragonese noble, lord of Oliete, Alcaine, Favara and La Codonyera
Gerard López i Segú
Gonçalo López Abente (Muxía, A Coruña, 1878 - 1963), Galician writer
Gregorio López-Bravo de Castro
Gregorio López Irasuegi (Bilbao, 1946 - 1988), Basque anti-Franco activist
Gregorio López Raimundo
Gustavo Adrián López Pablo (Valentín Alsina, Lanús, Buenos Aires Province), Argentine footballer who plays as a left winger
Hèctor López Bofill (Badalona, 1973), doctor of law and professor of constitutional law
Higinio Atilio López Riveros (Villarrica, 1925), Paraguayan footballer of the 1950s
Horacio López Usera
Íñigo López de Loyola (Azpeitia, 1491 - Rome, 1556), Basque noble who took up religious life
Íñigo López de Mendoza (Carrión de los Condes, 1398 - Guadalajara, ?), Marquis of Santillana and Count of Real de Manzanares
Íñigo López de Mendoza i Quiñones
Innocenci López i Bernagossi
Isaac López Pérez (1978, Granada), Spanish basketball player who plays as a shooting guard
Isabel López i Chamosa
Isidro López-Aparicio
Ismael López Blanco, or Isma López (Pamplona, Navarre, 1990), Navarrese footballer who plays as a forward
Ismael Santiago López López (Jaén, 1978), Andalusian footballer who plays as a midfielder
Iván López Mendoza (1993), Valencian footballer who plays as a right-back
Javier López Fernández, or Javi López (Madrid, 1985), jurist and Catalan politician
Javier López Rodríguez (Osuna, Seville, 1986), Andalusian footballer
Javier López Vallejo (1975, Pamplona), Navarrese footballer
Jennifer Lopez (New York, 1969), actress, singer, dancer, record producer
Jesús López Cobos (Toro, Castile and León, 1940), Spanish orchestra conductor
Joan López (Sant Hipòlit de Voltregà, 1730 - Vic, 1798), Catalan philologist and Franciscan friar known for his Arabic grammars
Joan Francesc López Casasnovas
Joan Josep López Ibor (Sollana, la Ribera Baixa, 1906 - Madrid, 1991), Valencian physician and writer
Joan Manuel López Nadal (Palma, 1951), Majorcan diplomat and writer
Joaquín López Puigcerver, lawyer from Carlet who settled in Madrid in 1857 and went on to become a justice of the Supreme Court of Spain
Joaquín López-Dóriga y Ruiz de la Escalera (Madrid, 1848 - Paris, 1911), Spanish lawyer, banker and politician, deputy in the Spanish Cortes
Joaquín María López López (Villena, 1798 - Madrid, 1855), Valencian lawyer and politician
Jon Ander López Maquiera (Barakaldo, 1976 - Sestao, 2013), Basque footballer who played as a goalkeeper
Jonathan López Pérez (1981), Asturian footballer who plays as a goalkeeper
Jordi López Felpeto (Granollers, 1981), Catalan footballer who plays as a midfielder
Jorge López Marco (Madrid, 1978), Madrid-born footballer who plays as a forward
Jorge López Montaña (Logroño, 1978), Riojan footballer who plays as a midfielder
Josep López de Lerma i López (Sant Feliu de Guíxols, 1950), Catalan lawyer, professor and politician
Josep Antoni López i Àlvarez (Girona, 1947), tenora player and composer of sardanas
Josep Lluís López Bulla
Josep Maria López-Picó
José López, several people
José Alberto López Pérez (Madrid, 1960), Madrid-born footballer who played as a defender
José Ignacio López de Arriortúa (Amorebieta-Etxano, 1941), Basque industrial engineer
José Luis López, several people
José Manuel López (León, 1971), photojournalist and war correspondent
José Manuel López Rodríguez (Caboalles de Abajo, Villablino, 1940), Spanish professional cyclist between 1966 and 1972
José Manuel López Prieto (Ciañu, Llangréu, 1946), Asturian footballer of the 1960s and 1970s
José María López, several people
José Miguel López Quevedo (Madrid, 1974), footballer who played as a forward
José Ramón López Díaz-Flor
Josefina López Sanmartín
Josep Manuel López Martínez
Juan López Fernández (Villadecanes-Toral de los Vados, Castile and León, 1939), comics artist
Juan López Sánchez
Juan López de Velasco, Spanish sculptor of the
Juan Antonio López Toribio (Barcelona, 1966), football coach and footballer who played as a defender
Juan Fernando López Aguilar (Las Palmas de Gran Canaria, 1961), Canarian politician, jurist and university professor
Juan José López Burniol
Juan Manuel López Iturriaga
Juan Manuel López Martínez, or Juanma López (Madrid, 1969), footballer, centre-back at Atlético de Madrid
Juan Ramón López Caro (Lebrija, 1963), Andalusian football coach
Juan Ramón López Muñiz (Gijón, 1968), Asturian footballer who played for Sporting de Gijón, Rayo Vallecano and Numancia
Julià López i Segú (Granollers, 1969), Catalan footballer who played as a defender
Julián López de Lerma Barahona (Badajoz, 1987), Extremaduran footballer
Julián López Milla
Julio María López Orozco (Elx, 1885 - 1970), Valencian physician and politician, deputy in the Spanish Cortes during the Second Republic
Laura López Valle
Laureà López Rodó (Barcelona, 1920 - Madrid, 2000), politician, jurist, professor and lawyer
Lisandro López
Lucía López i Martínez
Luciano López Dávila, Spanish politician of the late
Luciano López Ferrer (Valencia, 1869 - Madrid, 1945), Valencian lawyer, diplomat and politician, deputy in the Spanish Cortes during the Restoration
Luis López Dóriga, Valencian politician, deputy in the Spanish Cortes during the Bourbon Restoration
Luis López Pérez (Petrer, Vinalopó Mitjà, 1961/1962), Valencian motocross rider
Luis María López Rekarte (Arrasate, 1962), Basque footballer
Luis Miguel López Beltrán (Valencia, 1975), Valencian footballer who plays as a defender
Manuel López Àlvarez (1948, Rabanal de Fenar, León - 2011, Barcelona), labour activist and poetry enthusiast
Manuel López López (Galicia, ? - Valdelatas, Madrid, 1941), anarcho-syndicalist leader of Galician origin
Manuel López Lozano (Barcelona, 1942), lawyer and politician
Manuel López Santana (Arucas, 1961), Canarian footballer who played as a goalkeeper
Marc López i Tarrés
Mario López (San Diego, 1973), American actor known for his roles in television series
Martín López-Zubero Purcell
Mencía López de Haro (Biscay, ~1215 - Palencia, 1270), Leonese-Biscayan noblewoman and queen consort of Portugal (1239-1247)
Michael López-Alegría (1958, Madrid), astronaut
Miguel López de Carrizosa y de Giles (Jerez de la Frontera, 1857 – Madrid, 1919), Marquis of Mochales, Spanish lawyer and politician
Miguel López de Legazpi (Zumarraga, Gipuzkoa, ~1503 - Manila, Philippines, 1572), conquistador
Miguel López Muñoz (Altura, Alt Palància, 1950), Valencian economist and politician, deputy in the first legislature of the Corts Valencianes
Miguel Ángel López-Cedrón Freije (Oviedo, 1978), Asturian footballer who plays as a forward
Miguel Ángel López Moreno (Pesca, Boyacá, 1994), Colombian professional cyclist since 2015
Miguel López Tortosa (Villacarrillo, province of Jaén, 1946), teacher and Catalan politician of Andalusian origin
Miquel López (Villarroya de la Sierra, Aragon, 1669 - 1723), Aragonese composer and monk of the monastery of Montserrat
Miquel López Crespí (Sa Pobla, 1946), Majorcan novelist, playwright, poet, essayist, writer and historian
Mònica López, several people
Modesto López Novoa (Ourense, 1965), Galician footballer who played as a defender
Óscar López Hinarejos, Valencian computer cracker
Òscar López Hernández (Cerdanyola del Vallès, 1980), Catalan footballer who plays as a defender
Óscar López Martínez (Valencia, 1984), Valencian footballer who plays as a defender
Oswaldo López Arellano (1921, Danlí - 2010, Tegucigalpa), military officer, politician and businessman
Pedro López, several people
Pau López i Sabata (Girona, 1994), Catalan footballer
Patxi López (Portugalete, 1959), Basque politician who served as lehendakari of the Basque Government between 2009 and 2012
Próspero López Buchardo (Buenos Aires, 1883 - 1964), Argentine painter and composer
Pere López i Agràs, Andorran politician, leader of the Social Democratic Party of Andorra
Pepa López (La Vila Joiosa, Marina Baixa, 1953?), Valencian actress
Rafael López Gómez (1985, Peñafiel), Castilian-Leonese footballer
Rafael López Rueda (Barcelona, 1976), sociologist and politician
Ramón López Redondo, decorative painter of the late
Raül López i Molist (Vic, 1980), Catalan basketball player
Ricardo López Felipe (Madrid, 1971), Madrid-born footballer who plays as a goalkeeper
Roberto López Ufarte (Fez, Morocco, 1958), Basque footballer who played as a left winger
Rosa López (Láchar, Granada, 1981), Spanish singer
Ruy López, several people
Salvador López Arnal (Barcelona, 1954), mathematics tutor at UNED and computer science teacher in vocational programmes at Institut Puig Castellar
Salvador López Sanz (Murcia, 1924 - Valencia, 2009), Valencian politician and professor of Murcian origin
Sebastián López Serrano (Tétouan, 1961), retired footballer who played as a midfielder
Sergi López, several people
Sotero López Clemente (Albacete, 1972), Castilian-Manchegan footballer who plays as a defender
Steven López (New York, 1978), American taekwondo athlete
Tomàs López Torregrosa (Alacant, 1868 - Madrid, 1913)
Vicent López i Portaña (Valencia, 1772 – Madrid, 1850), painter of the late Baroque and Neoclassicism, court painter to the Bourbon monarchy
Vicente López, several people
Victòria dels Àngels López García (Barcelona, 1923 - 2005), Catalan soprano and opera singer
Xan López Facal (Toba, Cee, 1940), Galician economist and politician
Xulio López Valcárcel (Lugo, 1953), Galician poet
The Elias Canetti Mythical Winter Bestiary

The Mandrake

The Mandrake is drawn from the seclusion of earth by the leaves of her hair, by the hands of people with a death wish, or by dogs tied to her with strings. Those who pull the Mandrake from her unlit, sodden isolation are people who want to use her to improve their lives, people who presume in her a magic they respond to with yearning. Before even laying eyes on her, they hallucinate her into an enticing shape, likely to provide happiness and glory. They have seen her depicted in illuminations, in fragmented impressions on the pages of books. Based on her appearance, they take her to be something she is not, a creature of inestimable abilities. They fall in love in ways that cannot be sustained by reality. When they bite her, they lose their minds, slipping into dream states so deep an incision could be made into their very skulls and they wouldn't notice.

They hold her in their enormous hands and say, "You will fix my sadness, my past mistakes, my shaking bouts of fever. I don't care if my next life is spent in the absence of light, surrounded by ash." She has unsettled many people with these promises they make to themselves, yet when the time comes for her to live in their care, these same people are already overrun by madness, unable to see the truth of her root body, her leafy hair, her need to be kept watered and safe. It is easy for her to believe in the magic others ascribe to her.
Sometimes, while still packed safely in soil, she thinks of herself the way others have, and finds a tingling joy in the idea of being special. But this joy comes at a price, and she will always end up damaged, ground up completely and mixed into a drink, retrieved from the corners of the earth by a lovesick elephant, or else made into an immovable amulet, a trophy to cure someone's stagnating libido. Ultimately, once drawn from the earth and seen in the reductive light of day, she can't help but disappoint. The only defence left to her when she feels the familiar tug on her quills is to go deep into the visceral part of herself, and there to conjure up a scream that will burst eardrums and arteries the moment it reaches the air, scream and scream until the grip of the desiring hand has loosened, and the tugging person, with all her unfulfillable anticipations, falls lifeless to the ground and disappears.

Woodcut of Mandragora in Leiden, 6th Century

Posted on April 3, 2019 by Florence Sunnen. Posted in PhD Research & Archetypes, Playing with Archetypes, PsychoWrito, The Elias Canetti Mythical Winter Bestiary. Tagged archetypes, elias canetti, mandrake, Playing with Archetypes.

Orpheus and the Tar Pit

As if he had been poured in tar, he lies on a pillow of turf and seems to weep the black river of himself.
– Seamus Heaney, The Grauballe Man

Orpheus as an adult is a different person from Orpheus as a young man. It's the bridge between the two, made from little more than knotted string, that can at times be precarious, the kind that folds in on itself and tangles in near-permanent ways. What Orpheus seeks as an adult is internal, no longer bound to a self contained in others. Orpheus as a young man seeks love, and only external love, to make up for the widening, crumbling emptiness at work inside him. Orpheus as an adult has constructed an inner citadel in the spirit of Marcus Aurelius, one whose spiralling towers and metamorphic soapstone structure is impermeable.
The citadel reassures Orpheus with its timeline intimately tied to his: while people, objects, riches, and even ideas, exist on shifting, tectonic levels and will either outlive Orpheus or disappear from his presence for any reason or none, the citadel will exist exactly as long as Orpheus will, is so inextricably part of him that it has become the only reliable thing in life. There is peace to be found in this idea. Nietzsche, more bearded than ever, rests his dirty feet on Orpheus's pillow and waffles on about the importance of forgetting, and of digesting one's past properly. The body and soul, which are in a sense one and the same in that their boundaries blur and pulsate between the visible and the concealed, are here, are his, and whatever they digest or are drawn to, no matter how intensely, will eventually pass through them without causing them to disappear. After the digestive process, drawn-out as it may be, the body is still there with its stomach and bowels. Nietzsche seems concerned that digestion is something people have abandoned for the sake of wallowing, of an eternal practice of chewing the cud. Orpheus can't deny this; much of his time was spent in such a state of rumination, turning his face more and more into the long muzzled mask of a cow. For a while, taurine horns sprouted from his head. Orpheus has undergone so many metamorphoses that the limits of his self feel unclear, smeared with the grease of otherness. But this otherness is of a particular kind, more ethereal than a person's simple presence. Orpheus is backed up with ideal selves, which have undermined and torn through his digestive organs for years, have made him incapable of releasing what kept accumulating. The mind turns life into memories, but when the soil that should cradle these memories in its darkness is overrun with writhing bodies, bloated and undead, the memories have no choice but to fall in heaps onto the surface, rotting there like Antigone's kin in the blazing sun. 
Orpheus is dizzy with the fumes of slow decay. "You need to become empty so that you can fill yourself again," says Nietzsche, unhelpfully. Franz Stuck, Orpheus, 1891 Orpheus, whose oral fixation is considerable, takes bites upon bites of the world, sucking and gnawing on it to find the precise combination of sensations and tastes that will still the rumbling inside. Orpheus has heard of a man across many mountains, perhaps even across many chunks of time, who suffers from the opposite problem, the inability to stop excreting. Losing control over his body has trapped that man in his own hell of increasingly destructive dreams. Orpheus wishes he and the man could meet, speak to each other of their afflictions, and find a middle-ground in which to attempt a mutual healing. But Orpheus knows that such a desire is selfish, and that the man, who finds himself chained up by greedy men who harvest his excretions for fuel, has enough to deal with without being weighed-down with Orpheus's indigestible accumulation of selves. Orpheus comes not as a clean and single self, but as a cluster of concerns and pains, triggers and difficulties, all these things he hasn't yet managed to shave off himself. Orpheus spends more time thinking about his own mouth than is perhaps advisable, and he has recently stopped trusting even this aperture, which used to be his truest means of expressing himself. The mouth is where the voice substantiates, where language shows itself with the greatest possible immediacy, where longing quivers, where food breaks in, where kisses fall together, where the beloved's body can be tasted and indulged. But now, Orpheus has given up his finger-painting relation to the world, has exiled all hands from his vicinity. The wind touches him only through tissue paper, the light hits his skin only with invisible brushstrokes. Still, despite its failures, the mouth remains. Whether Orpheus wants it or not, his mouth is open to the world, a concentrate of yearning. 
In response, his environment either curls up into absence or opens itself in kind. The world wavers between petrified wood and openness, and the landscape Orpheus inhabits pits itself with holes of incalculable depth. These holes open around him with wet smacking sounds, bringing bitumen to the surface like pus, old suppurations, which both the world and Orpheus ought to have dealt with but which they chose instead to ignore, letting them ferment into hypogeal patches. Orpheus exists in a time of tectonic upheaval. One morning, one such hole opens in front of him while he empties his bladder into a mulberry bush. The stream running from his phallus is clear as glass, almost silent as it hits the thorns. The hole's presence annoys Orpheus, who wishes for a semblance of stability. His internal citadel is still under construction, not yet a home, torn down each night by the same perfectionist impulses that cause people-pleasing Melusina to redraft again and again the palace she builds for Siegfried, the man she wants to look after. Orpheus, still homeless but with a vague idea of what an internal home might entail, does not want to be confronted with holes and their attention-seeking bullshit. He shouts a number of obscenities into the hole, expecting an echo, but nothing returns, not even a faint, whimpering reflection of his call. All he feels is a fluttering at the back of his throat, the sense of something tearing away, and for a moment the rustle of wings obscures his vision. When Orpheus opens his mouth to ask what the hell just happened, his voice is gone. Like an unsettled bat it flees with the flap of leathery wings towards a more amenable cave, and it is now lost in the bottomless hole along with his words. Orpheus hurls a handful of mulberry blooms into the hole and heads for the deep end of the forest, kicking pebbles along the way. After an hour, he comes to rest against a mossy rock. 
A few paces ahead, he sees what appears to be another hole, of a black so total no light returns from it. Orpheus approaches and feels the ground sucking at his feet. The hole is surrounded by tar, perhaps even filled with it, although its centre is a deeper black than Orpheus has ever witnessed. For a moment, his mind is full of bodies preserved after death, able, by the grace of chemical magic, to retain their human form even after consciousness has trickled like fat from the flesh. He looks into the centre of the hole, into the complete impossibility of a reflection. This hole is the loneliest place on earth, where not even the self can be witnessed. Now, to pry into roots, to finger slime, To stare, big-eyed Narcissus, into some spring Is beneath all adult dignity. I rhyme To see myself, to set the darkness echoing. – Seamus Heaney, Personal Helicon The sun shuffles around in the branches of trees like an animal waking up, and Orpheus wants to escape. Whenever the sun shines his mind flings itself back towards Eurydice, towards Hamlet, towards Apollo, towards Melusina, all these impossible beings sprouting from his sides and inside his heart, who left their perforations inside Orpheus's flesh; all these creatures who inhabit a higher realm into which they have retreated, and where Orpheus can't follow. Do not take them seriously. If they retreat into heights, into distance and indifference, then Orpheus has at his disposal an entire realm of depth and intense claustrophobia. The earth has swallowed Henri Michaux. His body is in a process of slow decomposition as it sinks, and the gas escaping from his corpse is making the earth burp up sebaceous bubbles of advice. "Descends," says the wet voice of Henri Michaux, bursting from a bubble, "oui, descends en toi, vers cet immense rayonnage de besoins sans grandeurs. Il le faut. Après tu pourras, tu devras remonter." Sink deep into yourself, it is necessary. 
Come back, says a choir of spirits, cradled by the rocks and trees, only when the needs you feel have shifted, the yearning faded. Stay inside yourself, no matter how uncomfortable you may be there. Discomfort soon turns into placing one stone upon the other, building a makeshift resting place for the tired spine. Others, especially idealised others, those you trust too much without reason, cannot be your home. They will, like everything else in the world, crumble in your hands, disappoint and hurt, and leave. Some of them will not leave willingly, but they will break just the same. At least when you break, Orpheus, you won't be left behind; all of you will disappear at once. Find comfort in this. "Les arbres frissonnent plus finement," says Michaux from another bubble, "plus amplement, plus souplement, plus gracieusement, plus infiniment qu'homme ou femme sur cette terre et soulagent davantage." Orpheus sighs, once again weighed down with well-meaning words of wisdom. But they are someone else's wisdom. Orpheus, full of the past, full of the toxicity of echoes and evocations, full of long-gone happiness, has no space for the new, not yet. Orpheus needs darkness, and silence; he needs to be alone. There is nothing lonelier than the bottom of a tarpit infused with Vantablack. For a moment, Orpheus hesitates on the edge of the tar pit, and the sunlight falls in such a way that he thinks he sees the shadowy outline of Eurydice kneeling on the opposite edge of the pit. Eurydice's hands shake against the black surface, and as she submerges them in an attempt to quieten them, they disappear completely. Eurydice, too, wants stillness, and when she looks over at Orpheus she sees his blue stained hands, raises hers, dripping with tar, and like two drenched skunks they seem to recognise each other. In a fanciful flash, Orpheus and Eurydice fling their bodies toward each other, landing in the tar's black heaviness. 
They splash the slow substance about their writhing bodies, and as they embrace they sink down towards the centre of the earth. Inside the tar, there is no speech, no sight, no air. They have only the weight of their bodies against each other, mediated by the viscosity of bitumen. When Orpheus hits the bottom of the tar pit, Eurydice is gone. Silence and darkness are equally absolute. He folds his knees against his chest, where neither heart nor lungs feel a need for air, and he waits for a light to come on inside him. He waits like this for a long time, his body held by the tar like the yolk inside an egg. You deserve to be loved, whispers the tar in its tongueless, throatless voice. You deserve to exist. But this voice is only Orpheus's superficial reassurance, and he needs to hear something else, something more substantial. He waits until his skin seems to have melted. "Sache n'importe où tu te trouves reconnaître ton axe," says Henri Michaux, muffled by the tar. "Ensuite tu aviseras." August Natterer, World Axis with Hare, 1911 In this perfect dark symmetry, where above is below and sides bear no difference to each other, Orpheus tries to feel his own spine, the way it has warped in grief, its line compromised by sorrow, and from this line, its bent and bumpy descent from skull to tail, Orpheus constructs a new compass needle, with a magnetic north down in his tailbone, south up in his head, and as he slowly stretches out his invisible body in the tar he feels a new gravitational pull jerk at him, lure him down to where the surface is, this new world he will inhabit. There is love there, and things make sense; there, his mind is no longer sore with isolated desires, impossible hopes. His spine shudders with a sudden navigational need, and Orpheus follows its pull. 
He opens his mouth wide and lets the black fluid in, lets it fill his throat and organs, his ears and nostrils, he opens himself fully to the viscous bitumen, and a composite sensation, of drowning and breathing too fast, too deeply, smears itself across Orpheus's consciousness, momentarily erasing his fear of having lost the thing he cares about the most. The tar sucks at his body, compelling him to stay, to remain there in silent suspension, to let his body mummify alongside prehistoric animals and murdered men. But Orpheus would not be Orpheus if he didn't know how to ascend from the impossible. Once he is full of tar, a reversal occurs and the tar circling within him streams back out again, out of his ears, his nose, his throat, pushing more and more of itself out of him and in this increasing lightness Orpheus rises, tail first, towards an exit. When he reaches the tarpit's surface, his eyes, bloodshot from the tar, are searing coals, ruddy like a pigeon's. Orpheus feels the shudder of something in his mouth. Something has remained in him. Against his teeth, he feels his voice twitching its oily wings. He closes his burning eyes and lies on the tarnished grass in the sun, and his skin aches as the asphalt dries on it. There is a realisation he has come to: it was never Eurydice down in Hades. Perspectives were misplaced, dislodged like retinas. This entire time, Eurydice was alive. It was never her moving inside death's leaden clutch, unfeeling like a bug trapped in resin. It was never Eurydice who refused to cross over into life on their recurring ascents; this whole time, it was Orpheus in Hades, sedate and bleary-eyed, enmeshed in death's delusion. This whole time, Eurydice was alive, moving at the speed of life, which seems uncannily quick from the vantage point of death. 
While she shot through life as a swallow, Orpheus spent years in paralysis, stuck inside the ice block of Hades, and from there he watched her shape-shift, lamenting his own stagnant point of view. Orpheus looked on and saw his own limited capacities, his mind capable only of useless repetition, the return to a past that could never heal his present. Now he knows. Eurydice could not help him rise from the Underworld because it is not in Eurydice's set of tasks to do so. Orpheus is the one to whom the charming of the infernal keepers befalls, who is meant to bring the dead to life, but how can he do so when he is the one trapped? The filaments of his mind are too flimsy to hoist him up from the bottom of the well where he lies. The clouds pass left to right on the other side of his lids. A shadow leans over him, and Orpheus knows who it is, but isn't ready to look, to confront, not just yet. Posted on April 1, 2019 by Florence Sunnen Posted in Orpheus (& Eurydice) & Melusina (& Siegfried), PsychoWrito Tagged henri michaux, nietzsche, orpheus, orpheus and eurydice, tar pit There you are. Your skin's impeccable smell, the beeswax whiff of it. The rustle of your limbs around my skull, like the turn of a page progressing along a two-voice tale. Your scent returns as a ripple. You who are my week, my gristle. Hop into the space I've opened between my hands, rest there in your figurative purrs. I have said before that I cannot hold these leaves open on my own, that the space I gave once deadened the brass in me, but your air still reverberates with the uncanny sensation of feathers dipped in gold. Listen. Your whole body is a whisker. Love has caused these ribbons to tighten inside my skin, hold me upright in false and disconcerting ways, and your response was this: yes, I too am tired of running, running in this way that feels like falling between loosened sheets of earth. Yes, you said, my whole body is a whisker. 
Let me give you the water I've wrung from my hair, cup your ears and catch its languid syllable curd. Begin a benevolent trade between soil and atmosphere. Yes, I too am tired of the blackened wick, the missing glue between things. We have seen what your eyes can do; we have both been on the cusp of your fire. Posted on March 25, 2019 by Florence Sunnen Posted in PsychoWrito, Word-Doodles You told me to go through the garden and find the thing that was most like myself, and so once I was alone I walked through the vegetation, looking. I walked through the high grasses and my feet folded their blades into more complicated shapes. I walked and the pebbles flicked out from under me. I walked close to the water, past reeds that gorged themselves on its pools, past driftwood with intricate reliefs, rocks intensely veined, birds rotund with song. I walked past exquisite lilies, past the structural devotion of pines, every part of me looking in this beauty for a resonance. After a while, the rain fell hard into my hair and I hid in the undergrowth, crawling between streams of ivy, my hands smeared with lichen and wiped clean again by neon pads of moss, until a low clearing emerged and I leaned against the striated bark of a cherry tree. Its cauterised marks embossed themselves into my back and this damaged being seemed to me such an obvious mirror I decided this was it. Having completed my task, I closed my eyes until the rain let up. But when I crawled out after a while to hold my hand into the air and check for drops, I saw further down the path a cluster of dried grey twigs growing bare, clipped and idle from the earth, and the part of me that wishes I could just exist in my true and unadorned mediocrity felt understood. I weaved myself, with great cost to my personal boundaries, between the brittle twigs, making my body as boneless as it could be, and there I breathed the shallow breath of deferral until the day went dark and you returned to me. 
When you asked me what I had learned, I told you I would have to think about it deeply, and tell you once I understood. Not like now heaven is insufficient / you know too well it's paradise you want // where we are bodies, extemporised and full of melting splinters /// fondness consumed amidst animals and trees, our colours all coiled in embrace //// you think the white light of love is a quiet bath of bliss, so immaterial, the inscrutable everlastingness of it ///// paradise is heaven with lungs, but you say there is no return to a place of breath and sublimity ////// our grunting cannot blend with the birds' capacity for speech, not in the damp chill of the shade after our dying /////// you bit me, and I know I bit you in turn, betraying pale matter below the sun-reddened skin //////// not here, and not now paradise is incarnate, but this ongoing heaven is bland, a doorway of bodies / peeled off and hung up like garb //////// that which we want is deep / and bright / and unlikely it already slipped once / and you tore out your lungs / saying ////////// that was enough Posted on March 17, 2019 March 26, 2019 by Florence Sunnen Posted in PsychoWrito, The Writing Body, Word-Doodles Tagged poem Obligatory Flashback Sequence Starting From Mildly Distorted Reality The compass needle spins where Orpheus is sitting. It is earlier in time, an earlier point in the myth, before he learns to cradle himself properly, before his arms and feet are stained a deep blue, before he learns to resist the past's pointless call. Orpheus is furious with the ugliness of the world, its inability to charm him back towards it. No effort, the world an old wife who has stopped trying. No blossoms in the trees, no warmth in the sand, the air so bland he worries his sense of smell has atrophied. "Dr. 
Mother," says Orpheus, slumped in his therapist's chair, with his untied boots and naked chest, wearing only his coat woven from hair collected by brushstroke from the backs of a hundred gibbons, "Dr. Mother, listen. I have tried to improve myself, I have tried to let go of self-doubt, of accumulations of yearning and anxiety, but it's too hard. Compared to the present, the past has so much more allure. Do you know who I ran into the other day? A girl I hadn't seen in years, with whom a short-lived fling had long passed. Of course, even though this girl precedes her by half a decade, I compared her to Eurydice, then as if to punish myself for the thought I let her lead me by the hand into the nearest park and I fucked her under an overturned canoe. Not saying any of this is true, but isn't it fun to say? This attachment to my myth has ruined my ability to distinguish between figment and reality. Do you think, Dr. Mother, that my entire life is made up of lies that exist in the world for no reason other than because the words composing them hold each other by the hand just right?" Dr. Mother replies, "I thought we agreed, Orpheus, to keep your poetry out of our sessions. There is no place in psychoanalysis for balladry." "Yeah, yeah." "So tell me again, and tell me the truth this time. What happened to you this week? Did you see anyone?" "No, although I did run into an unusual number of wild dogs. For most of the week, I hung from the ceiling by my left ankle, until the colours of the world became inverted. It was nuts." "Orpheus, enough," says Dr. Mother. She doesn't usually smoke, but suddenly there is a cigarette in a red holder between her lips. "You're being very unprofessional," says Orpheus, and feigns a cough. The truth is, he hasn't felt a thing in the back of his throat for months, certainly not enough to warrant a cough. In fact, come to think of it, his entire mouth is numb. His palate is still coated with his longing for Eurydice. 
"Smoking in front of a patient," he says. "You know how suggestible I am." "You're not a smoker, Orpheus, and I am. My nerves are frayed. Just look away until I'm done." Orpheus sighs and slumps deeper in his chair until only his head rests on the seat, and he stares up at Dr. Mother's lofty ceiling. In many ways, he is younger than he looks. "So there I was, hanging from the ceiling," he resumes. Dr. Mother throws a lighter across the room, nearly missing his forehead. "Fine," he says. "I did absolutely nothing, saw nobody, had no opinions about anything, least of all myself, spent hours smelling my own armpits, as you do, and then for a moment, just a very short moment, I thought about the future, which is so uncertain now that my hands no longer pulsate with magic and my brain is out of room for words, and at that thought a constriction in my torso echoed so violently I think most of my organs must have been rearranged in the process." "And how did that make you feel?" "Fuck off," says Orpheus. "Let me rephrase that. Are you still in love with Eurydice?" "Orpheus, we won't get anywhere if all your answers are either lies or sarcasm. I want to help you, it's what you pay me for." "Fine. What was the question?" "Do you still want Eurydice to return to you?" "That's a different question than the one you asked me earlier, but I'll take it. Yes, Dr. Mother, I do. Twice a day, up and down with vigorous strokes, lasting between two and five minutes." "Orpheus, normally I'd say that this is a safe space, but as you can see, I've succumbed to my nicotine cravings, this is not the time for your inappropriate shenanigans." "Brushing my teeth is inappropriate?" "Let's talk about something else. How is your family?" Orpheus lets out a sigh so long and loud his body slides from the chair onto the floor. "How's work?" Dr. Mother asks. "You said no poetry." "Do you mean to tell me after all these years we have run out of things to talk about?" "I'm in love," groans Orpheus. 
"What the fuck do you want to talk about?" "Your goals, your self-development. Your mother, if necessary." "In love with a ghost who thinks Hades is the place to be. This old tale again, as if it's scored into my flesh. What sort of goals do you foresee there, Dr. Mother? What kind of self-development, for her or me? My mother would be proud, I can tell you that. This whole mess is perfect muse-material." Dr. Mother slaps the side of her chair as if berating a badly-mannered dog. "Listen, you impossible child," she snaps. "For years I have humoured your bullshit, enough is enough. Forget about Eurydice for a second, forget about what you think you feel. Sit down on your fucking chair and act like a human being, if you can remember how." "Jeez." Orpheus drags himself back onto his chair and hangs one leg over the armrest. "What is up with you today?" "Sit properly," says Dr. Mother. "I am. Still growing into my parts, I'll have you know. Honestly, you're never this weird. What's going on?" Dr. Mother stubs out her cigarette against her phone screen. "I read my daughter's diary last night." "Oh, Dr. Mother. What a stupid thing to do." "I know." Dr. Mother waves her hand at the tiny plume of smoke rising from her singed screen. "She is such a smart girl, but the men she involves herself with… I don't understand how you can go so wrong." Dr. Mother closes her eyes the way only an exasperated mother can. "I blame the parents," says Orpheus. Dr. Mother threatens him with a throw pillow, then sighs. "Speaking of badly chosen men," she says, "we haven't discussed Hamlet for a while. Why is that?" "Because, first of all, it's in the past, and also it's disruptive to my myth." "I thought the past had so much more allure." Orpheus groans. "That was back when I was lying to myself. You need to keep up." "Wouldn't it be useful to start disrupting your myth? Isn't your attachment to Eurydice based on some very restrictive assumptions?" 
"I don't know," says Orpheus, picking his toenails. "I wouldn't call it attachment. Anyway, I vote for talking about what hurts in the present, not what hurt back then." "We should talk about the past, Orpheus," says Dr. Mother, suddenly back to her true form – this is psychoanalysis, after all. Dr. Mother puts the cigarettes away and opens a window to clear the fumes from her melting phone screen. "Eurydice isn't where it all began." The compass needle spins where Orpheus is sitting. It is earlier in time, much earlier, before Eurydice, when his name wasn't quite Orpheus but something else, when the love he felt was as consuming and voracious as a swarm of flies. His food tastes of nothing, the people he sleeps with make no sound, the pages he turns reveal the same words as the ones before. Orpheus is in love with Hamlet, who is somehow always otherwise engaged. Blablabla, my father's ghost visits me in my sleep, blablabla, I may have to enact my revenge on a murdering uncle, blablabla, I've got to finish this report for my scientific assistant post tomorrow. Oh, I've got an essay due on Monday, I was once hurt so badly I don't remember how to love, and on top of everything I need to apply for MA funding within the coming month. Nightmare. Orpheus, who is always something of a child, even more so at that age, isn't always this much of an asshole. At first, when Hamlet catches his eye, Orpheus is a delight. He sings the most enchanting songs he can muster, writes every word of his juvenile poetry addressing Hamlet's gentle body and beautiful mind. He spends hours listening to Hamlet's wild ideas, future plans, and endless complaints. He behaves the way Orpheus behaves when he is in love, opening his heart wide and bathing the entire world in his incandescent charms. So many of his thoughts pertain to Hamlet, who has no use for them. Every inch of his skin is open to Hamlet, who is left cold by its touch. 
In his immature passion, Orpheus has no weapon against Hamlet's disinterest except his own loveliness, which crumbles in the face of apathy, and Orpheus's heart crumbles along with it. Hamlet sits in their university library for hours without speaking a word to Orpheus, because Hamlet isn't sure if it's Orpheus he wants or someone else. After all, the girl who left him might return, and in the meantime there are all these other girls with bright red hair who look so charming when they laugh at his jokes. Despite this, Hamlet has abstracted from all bodily pleasures and made desire into a completely intangible pursuit, to be led only by the ego in an immaterial realm. He writes love letters to Orpheus, writes poetry, endless lines of worship and erotica, but when they are alone, Hamlet won't so much as touch him. Orpheus's body is Hamlet's fantasy, and as such it must never be consumed, only praised and made love to in words. Orpheus, up on his pedestal, is exhausted by having to hold a pose his agile, living body isn't meant to hold, his body which wants only to be touched and stroked and played with, but which Hamlet will only caress through language. Hamlet's mouth and Hamlet's hands do not understand their purpose, and reduce themselves to verbalisers. Slowly, Orpheus fades away inside his unreciprocated needs, until after years of this, Hamlet shows up at his door and says, "Why did you leave?" Orpheus says nothing. He builds a fort from blankets to cradle their bodies and offers Hamlet tea, then holds his hand while Hamlet weeps out his exasperation. The next day, Orpheus gives in and says, "I love you Hamlet, I never stopped," which is true, and Hamlet's ego is satisfied for a while. But it isn't enough, and soon Hamlet is drawn back into his own tumultuous self, and the easy admiration he receives from others. Turns out it's not just the prospect of a duel that makes Hamlet feel queasy, but anything resembling a twosome. 
The person Hamlet needs to be is always elsewhere, always out of reach, and there is no room there for Orpheus, who wants reciprocity, to play his lyre and write his poetry, and whose need for physical closeness is too great for what Hamlet will allow himself to give. "What was it that hurt you about Hamlet, Orpheus?" asks Dr. Mother. "That whole unshakeable sense that none of what occurred between us was up to me. He removed my body from our bond, made my needs irrelevant and turned me into an idea," says Orpheus. "I didn't exist to him, not in my embodied form." "Do you think you exist, Orpheus? Physically, I mean." Orpheus slaps his thigh, once, then again, and again, and again, until Dr. Mother says, "I get your point. What I mean is, do you believe you deserve to exist physically in a way that compels those you love to engage with you? Don't you feel you are somehow always a burden just by virtue of having a material presence, and physical needs?" "You know," says Orpheus, "you're not supposed to validate my narratives this way. It's bad form. I'm in a psychically vulnerable state." "I'm sorry," says Dr. Mother. "I don't know what's the matter with me today." "Your daughter is sleeping with bikers, I believe. Happens to the best of us." Dr. Mother rubs the bridge of her nose. "The most recent one is apparently a knight and falconer. I just don't know where she digs them up." "Can we get back to my story, or do you want to switch seats?" "Yes," says Orpheus, "I carry damage. Yes, I once chose someone who exacerbated my fears about being rebuffed in my intensity and needs. Yes, being with Hamlet somehow confirmed my fear of never being a priority to the people I cared about the most. Yes, for years I forgot what it felt like to have a body, to be an agent in the world, to desire and be desired. But I was also young, prey to more insecurities than I could handle, and Hamlet stabbed right into those unpleasant wounds for three continuous years. 
In a way, it's remarkable I came out of it able, and willing, to keep desiring, and wanting to be desired. That I came out of it willing to keep trying." "I'm very happy to hear you say that, Orpheus," says Dr. Mother. "That doesn't mean I don't still know how to pick 'em." "We all do. Of course we choose those who match our wounds, how could we not? Those vulnerable parts of us that fall in love are also the parts that need the most healing, the most care. Sometimes, those we fall for cause our wounds to deepen, and sometimes, if we're lucky, and if we accept their goodness, there will be a rare few who help us decontaminate all that festering hurt and redress our narratives, allowing the lesions to slowly heal." "I should be so lucky," says Orpheus. "You aren't all that bad at healing yourself, Orpheus. You're just impatient. Sometimes I worry you'd rather care for the wounds of others than give that attention to yourself. Why? Is it because in others the reward is more visible than in yourself?" "Eh," says Orpheus. His lids feel heavy. "Who's to say." "You understand, don't you, that Eurydice isn't Hamlet, and vice-versa?" "Yes, yes." "And the pain of losing Eurydice isn't the same as the pain of Hamlet failing to love you," says Dr. Mother. "I know that. You're the one constantly drawing comparisons." Outside, the sky has darkened to indicate the end of their session. Dr. Mother's eyes are glazed over, her mind is already elsewhere. Orpheus can't blame her, this has been his own condition for a while. The myth is slowly coming apart, undoing the rigidity of its seams, and all of its elements are now floating in free association, and already new matter is growing inside the widening spaces. The last thing that remains is Orpheus, loving Eurydice, this thought of her that will not fade from his mind, the gentle vibration of her that lives in his flesh. 
Posted on March 14, 2019 by Florence Sunnen Posted in Orpheus (& Eurydice) & Melusina (& Siegfried), Playing with Archetypes, PsychoWrito Tagged flashback sequence i guess, hamlet, orpheus, orpheus and eurydice Orpheus and the Flayed Man Orpheus is distraught. A man without skin lives in his dreams. Orpheus knows this is because too many of his thoughts focus on limitations, on the boundaries of the self. Trapped inside his own skin, in his unique consciousness, Orpheus has no chance at communion with others, not even those he wants the most. Eurydice swims in her set of underwater chambers, moving pebbles with her mouth. The muscles around the man's eyes resemble the rings on a felled tree trunk. Orpheus watches them twitch. The bare musculature is an embrace, the twist of jungle vines around the tender skeletal trunks – in all things, Orpheus sees what he is lacking. In his dreams, the skinless man stalks across the landscape Orpheus is attempting to inhabit. The landscape has released its logic. The shadows thrown by its elements are golden, as are the pupils of animals, reflecting light rather than absorbing it. When they first found each other, Orpheus and Eurydice rubbed their pasts against each other, to see which parts this process might heal. In the fading light, their bodies, panting and warm, fell side by side into the sand. Eurydice placed a handful of sand on Orpheus's skin and rubbed it into his legs, his arms, his back and torso. He returned the favour until their skins glowed with awareness, as new and receptive as an infant's. The flayed man in Orpheus's dream cannot blink. He can only watch without pause, without release. Skinning is not an improvement on exfoliation. Skinning deadens the impact of touch. Despite the impression of opening, removing the skin is in fact a closing. In losing your skin, you lose the membrane allowing you to feel another's caress. Left behind is only a raw, impotent mass, unable to engage or receive. 
Too much has been removed, and you become untouchable. The dripping muscles on the flayed man's stomach twitch. Eurydice has arranged her pebbles in hexagonal patterns, befitting the vibrations that rise from the bottom of the lake. Orpheus turns over to seek a cold patch on his mattress, scratching and chewing the pillow in his sleep.

Posted on March 13, 2019 (updated March 17, 2019) by Florence Sunnen
Posted in Orpheus (& Eurydice) & Melusina (& Siegfried), PsychoWrito, The Writing Body
Tagged orpheus, orpheus and eurydice, touch
\section{Introduction} Given the number of currently known extrasolar planetary systems ($\sim$200), it is tempting to draw some conclusions about their formation from statistical correlations. One of the basic correlations is the membership in a multiple stellar system. About 15\% of the planets were found in multiple stellar systems. Assuming that the planets are equally common around single stars and in multiple systems, this is a clear discrepancy with the observed frequency of the multiplicity of solar type stars in the solar neighborhood, which is about 60\% \citep{Duquennoy91}. A natural explanation is observational selection effects, because multiple systems have been usually excluded from radial velocity searches for planets. Whether it is the only source of discrepancy will remain unknown until more systematic searches for planets in multiple systems are carried out. Such searches are currently underway \citep[e.g.][]{PHASES}. It seems reasonable, however, to expect that the planet formation process is influenced by companion stars. Distant companions (say, with periastra farther than 100\,AU) do not affect planetary systems too much, but several extrasolar planets in binary systems with relatively small separation (less than 100\,AU) have also been discovered \citep[and references therein]{Eggenberger04}. In particular, three planets have been discovered in binary systems with separations of $\sim 20$\,AU (HD41004A b, $\gamma$ Cep b, Gl86 b). These close companions must have modified the structure of the protoplanetary disk (i.e. initial conditions for planet formation), as well as the dynamical evolution of planetary orbits. Although the number of extrasolar planets in multiple systems is currently very small, some correlations seem to be statistically significant. \citet{Zucker02} pointed out that all the most massive (more than 2 Jupiter masses), short-period (periods shorter than 40\,d) planets orbit stars from binary systems. 
In consequence, for planets in binaries there is no correlation between mass and period as is observed for planets around single stars. \citet{Eggenberger04} showed that orbital eccentricities tend to be very low for short-period planets. All those differences indicate that the companion star is affecting the planet formation process. To address planet formation in binary systems, studies of the orbital evolution of planetesimals have been done \citep[e.g.][]{Hep78, Whitmire98, Marzari00, Quintana02, Thebault04, Thebault06}. In the classical scenario without a gaseous disk, the secular perturbations from the binary companion pump up orbital eccentricities of planetesimals and decelerate their accretion by reducing the gravitational focusing factor. Also, if the relative velocities exceed the escape velocity from their surface, collisions result in disruption rather than coagulation \citep[e.g.][]{Agnor04}, so that planetesimal accretion is inhibited. \citet{Marzari00} included the effects of uniform gas drag in an eccentric binary system and found strong periastron alignment of equal-size planetesimals. If the periastra are aligned, the relative velocities are kept low in spite of high eccentricities, and planetesimal accretion is not inhibited. However, \citet{Thebault06} pointed out that the alignment angle depends on planetesimal size, and if the size distribution is introduced, the relative velocities between particles of different sizes are typically prohibitive for the collisional growth. An important result of these works is that even a small gas-drag force can significantly change the growth rate of planetesimals if combined with perturbations of the companion star. The weak point of these studies is the assumption that the gaseous disk is not perturbed, but remains stationary and axisymmetric. 
It is known \citep[e.g.][]{PapPringle77, LinPap79, GT79}, however, that the companion induces tidal waves in the gaseous disk, which can evolve into strong spiral shocks. The density and velocity of the gas are then strongly perturbed, and the drag force acting on solid particles is different than in the stationary, axisymmetric case. To date no calculations of particle motion in a disk with tidally induced spiral waves have been done. In a series of papers we will explore this subject, both analytically and numerically studying the orbital evolution and accretion of particles in disks perturbed by a companion star. In this paper we consider the case of non-interacting particles orbiting in a circumprimary disk, with the perturbing companion star on a {\em circular} orbit. The particles range in size from 1\,m to 10\,km, so our results apply to the planetesimal formation and early accretion phases. At first glance, the circular case may not seem interesting because the gravity of the companion does not induce secular effects on the particle orbit, like eccentricity forcing or periastra libration. However, we show that effects similar to those from an eccentric binary are also observed in the circular case if the perturbations of the gaseous disk are included. Furthermore, we show that the orbital evolution of particles in such a system is significantly different than in the unperturbed, axisymmetric disk. The circular case is a very good starting point since it allows us to understand the sole effect of spiral waves in the gaseous disk. The eccentric binary case will be discussed in next paper. The paper is organized as follows. In Sect. \ref{sec:comp_method} we present computational methods and input physics. Some important properties of the gaseous disk are discussed in Sect. \ref{sec:gas_disk}. In Sect. \ref{sec:orb_evol} we investigate analytically and numerically the orbital evolution of a single particle. 
Section \ref{sec:coherence} is devoted to relative shapes and alignment of neighbouring orbits of particles. In Sect. \ref{sec:summary} we summarize and discuss the results. \section{Computational method \label{sec:comp_method}} The problem we investigated involves the solution of both gas and particle equations of motion. One approach would be to combine hydro and N-body schemes into a single numerical code; however, it was enough for our purpose to perform two-stage simulations. We exploited the fact that in the circular binary system the pattern of spiral waves in the gaseous disk is quasi-stationary in the frame co-rotating with the secondary star. By quasi-stationarity we mean here that the time scale of its evolution is longer than the time scale of the evolution of N-body particles. In the first step we obtained such a quasi-stationary model of the gaseous disk and then we fed it to the N-body code. In this way we eliminated the temporal evolution of the gas. This was very desirable since our goal was to investigate generic effects of the spiral shocks on the motion of the solid bodies and not to perform realistic simulations of the particle growth in the binary system, a task we leave for future work. The binary system we simulated consists of primary and secondary stars of equal mass, $M_p=M_s=1 M_{\odot}$, on a fixed circular orbit with the semi-major axis $a=23.4$\,AU. The implied orbital period is close to 80\,yr. The gaseous disk and particles orbit the primary star. We chose these parameters because, apart from the eccentricity, they are close to the $\alpha$~Centauri system investigated in \citet{Marzari00}, which will enable us to compare their results with ours, especially in the follow-up paper about the eccentric binary case. Note that the self-gravity is not included in either fluid or particle simulations, so it is possible to scale the models to any size.
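The quoted period follows directly from Kepler's third law in solar units ($P^2 = a^3/M_{\mathrm{tot}}$ with $P$ in years, $a$ in AU, and $M_{\mathrm{tot}}$ in solar masses); a minimal sketch using only the parameters given above:

```python
import math

# Kepler's third law in solar units: P [yr] = sqrt(a^3 / M_total)
# with a in AU and M_total in solar masses (values from the text).
a_bin = 23.4       # binary semi-major axis [AU]
m_total = 2.0      # M_p + M_s = 1 + 1 [solar masses]

period = math.sqrt(a_bin**3 / m_total)   # orbital period [yr]
print(f"binary period = {period:.1f} yr")  # close to 80 yr, as stated
```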
\subsection{Hydrodynamical simulation} Our model of the gaseous disk was evolved using an adaptive mesh refinement (AMR) code, FLASH \citep{FLASH00}. As a hydro solver we employed direct Eulerian PPM \citep{CW1984} scheme modified to conserve angular momentum. The PPM scheme combines high-order spatial interpolation with a Riemann solver and shock-capturing method that results in low numerical viscosity and sharp shock profiles. This makes PPM particularly useful for all applications requiring accurate transport of momentum in a supersonic flow environment. The code solved Euler equations in 2D polar coordinates with the origin located at the primary star: \begin{equation} \label{eq:hydro_cont} \frac{\partial\Sigma}{\partial t}+\nabla\cdot(\Sigma{\vec V})=0 \end{equation} \begin{equation} \label{eq:hydro_motion} \frac{\partial(\Sigma{\vec V})}{\partial t} + \nabla\cdot(\Sigma{\vec V}\otimes{\vec V})+\nabla P = -G\Sigma \left( M_p\frac{{\vec r}}{r^3} + M_s \frac{{\vec r}-{\vec r}_s}{|{{\vec r}-{\vec r}_s}|^3} + M_s \frac{{\vec r}_s}{r_s^3}\right) \end{equation} where $\Sigma, P$, and $\vec{V}=(V_r, V_\phi)$ denote surface density, surface pressure, and gas velocity at position $\vec{r}$. The secondary star was located at $\vec{r}_s$ such that $r_s=a$. The equation of state was locally isothermal (temperature was a fixed function of distance from the primary star), so there was no need to solve the energy equation, and a faster isothermal version of the Riemann solver was used. The final model of the gaseous disk was obtained as follows: (1) The initial disk was set up as in the case of a single star, and it was truncated exponentially beyond a radius slightly larger than the expected tidal truncation radius. 
(2) During the first two orbital periods of the binary system the grid resolution was increased up to 2048x768 in $r$ and $\phi$ respectively (we used the AMR option here, but to avoid any artifacts at the edges of the refined blocks the whole grid was always refined). (3) The disk was evolved for an additional orbital period of the binary. The Courant number was 0.5. The grid extended radially from 0.4\,AU to 9\,AU -- far enough to avoid any influence from boundaries on the most interesting region of the outer disk where the spiral waves are strongest. We took special care to minimize reflections at the inner and outer grid boundaries. For this purpose we tuned the standard outflow boundary conditions in order to reduce any discontinuities in radial direction. We also introduced a low-density hole in the first 5 radial cells at the inner boundary, which served as an additional buffer to damp the propagating waves. We stress that all these measures are necessary in order to recover the proper mass transport in the non-viscid disk. \subsection{Orbital integration} Our orbital integration code is based on Nbody4 \citep{Aarseth79}. It implements the direct summation method for self-gravity, 4th-order Hermite scheme, and block time step \citep{Makino91a}. In this paper we consider motion of particles only under gravitational forces of the two stars and gas drag force, and mutual gravitational forces of planetesimals are neglected. Inter-particle forces and collisional accretion will be included in future papers. In the reference frame located at the primary star, the corresponding equation of motion for particle $i$ with mass $m_i$ reads: \begin{equation} \label{eqmot} \ddot{{\vec r}_i} = -G (M_p+m_i)\frac{{\vec r}_i}{r_i^3} - G M_s \frac{{\vec r}_i-{\vec r}_s}{|{\vec r}_i-{\vec r}_s|^3} - G M_s \frac{{\vec r}_s}{r_s^3} + {\vec{f}}_{drag,i}, \end{equation} where ${\vec r}_s$ denotes the secondary star position. 
Components on the right hand side of Eq.~\ref{eqmot} represent the gravity of the primary and secondary, the indirect term accounting for acceleration of the primary relative to the center of mass and gas drag force per unit mass, respectively. For the latter we adopt a simple formula: \begin{equation} \label{eq:drag_force} \vec{f}_{drag,i} = -A \rho |{\vec u}|{\vec u} , ~~~~~~~A=\frac{1}{2m_i}C_\mathrm{D} \pi s_i^2, \end{equation} where $s_i$ is the particle radius, $\rho$ -- gas density, $C_\mathrm{D}$ -- drag coefficient, and ${\vec u}$ is the particle velocity relative to the gas. The factor $A$ is constant in our simulations. Denoting the particle's velocity with $\vec v$ we have \begin{equation} \label{eq:urel} u_r = v_r-V_r ~,\quad\quad u_{\phi}=v_{\phi}-V_{\phi}. \end{equation} We used values of $C_\mathrm{D}=1.4$ and internal density of particles $\rho_p=2\, \mathrm{g/cm}^3$. The Hermite scheme also requires the value of $\dot{\vec{f}}_{drag}$, which is a minor correction that was accounted for using numerically calculated values of $\dot{\rho}$ and $\dot{{\vec u}}$. We took special care to include gas-drag effects accurately yet efficiently in calculations. At the beginning of the orbital calculation, the grid data containing gas density and velocity were read in. During the simulation the data were rotated to match the current position angle of the secondary star (spiral pattern of the gas co-rotates with the companion), and the bi-linear interpolation was used to find the gas state at an arbitrary position of the particle. The time step was variable but limited to a maximum of $1/(2\pi\cdot64)$\,yr. We tested the code with basic problems like conservation of the Jacobi constant, and properly recovered more complex results like runaway growth \citep{KokuboIda96} and gas drag in a uniform disk \citep{Inaba01}. \subsection{The models} \label{ch:Models} Each model here is composed of the two components: gaseous disk and stellar system configuration. 
We used the following three gaseous disk configurations: \begin{itemize} \item Axisymmetric disk in radial equilibrium or, in short, the equilibrium axisymmetric disk. It is characterized by simple power-law radial profiles of density and temperature, and the radial gas velocity is zero. \item Axisymmetric disk in radial non-equilibrium or, in short, the non-equilibrium axisymmetric disk. It is similar to the equilibrium axisymmetric disk, but the radial gas velocity is artificially set to a non-zero value. \item Non-axisymmetric disk, resulting from the evolution of an initially equilibrium axisymmetric disk in a circular binary system. It develops a spiral wave pattern and is naturally in radial non-equilibrium. \end{itemize} The stellar configuration can simply be either a {\bf single star} or a {\bf circular binary system}. Sometimes we refer to the models using abbreviations presented in Table \ref{tab:models}. \begin{table}[!h] \caption[]{Abbreviations of presented models.} \label{tab:models} \begin{tabular}{|l|c|c|} \hline ~&single star&binary system\\ \hline equilibrium axisymmetric disk&EA1&EA2\\ non-equilibrium axisymmetric disk&NA1&NA2\\ non-axisymmetric disk&W1&W2\\ \hline \end{tabular} \end{table} We note here that only models EA1 and W2 are physically consistent, while only model EA2 has been investigated by other authors. Thus we concentrate on differences between models EA2 and W2. The other models are used only as support in understanding the observed effects. \section{Gaseous disk \label{sec:gas_disk}} \subsection{Disk parameters and structure} Our numerical method requires the gaseous disk to be in a state close to stationarity. It should also be minimally biased by numerical effects and should adequately recover all deviations from the Keplerian flow. To that end we used a fairly simple isothermal model, which is nonetheless close to the minimum mass solar nebula \citep{Hayashi81}.
The equation of state, \begin{equation} P=\Sigma {c_\mathrm{s}}^2, \end{equation} is locally isothermal, with the local sound speed ${c_\mathrm{s}}$, given by the vertical hydrostatic equilibrium condition \begin{equation} {c_\mathrm{s}} = \frac{h}{r} v_{k}, \end{equation} where $v_{k}=\sqrt{GM_p/r}$ is the Keplerian velocity and $h$ the local half-thickness of the disk. \citet{Blondin00} has shown that, in sufficiently cold disks, the spiral waves at the outer edge of the disk may become unsteady. We found experimentally that for the following height profile \begin{equation} h_r \equiv \frac{h}{r} = 0.05\cdot \left(\frac{r}{1\,{\mathrm{AU}}}\right)^{0.5} \end{equation} the spiral pattern stays stable everywhere. Note that, although our equation of state is locally isothermal, such $h_r$ results in global isothermality with ${c_\mathrm{s}}=0.05\sqrt{GM_p/(1\,{\mathrm{AU}})}$. The corresponding Mach number (${v_\mathrm{k}}/{c_\mathrm{s}}$) is 31.4 at the inner disk edge ($r$=0.9\,AU), which falls to 7.6 at the outer edge ($r$=9\,AU). The initial profile of the surface density was given by the power law \begin{equation} \Sigma_i = \Sigma_0 \left(\frac{r}{1\,{\mathrm{AU}}}\right)^{-1.5}, \end{equation} which is close to the MMSN model. Since self-gravity is not included in hydro simulation, the normalization $\Sigma_0$ can be arbitrary. To calculate the drag force (Eq.~\ref{eq:drag_force}) during orbital integration, the evolved surface density $\Sigma(\vec{r})$ was converted to the volume density and normalized as follows: \begin{equation} \label{eq:voldens} \rho = 2\cdot 10^{-9} ~\frac{\Sigma/\Sigma_0}{h(r)/1\,{\mathrm{AU}}} = 2\cdot 10^{-9} ~\frac{\Sigma}{\Sigma_0}~ \left(\frac{r}{1\,{\mathrm{AU}}}\right)^{-1.5} \mathrm{[g/cm^3]}. 
\end{equation} The initial angular velocity was set to the equilibrium one for the given pressure gradient \begin{equation} \label{eq:vang2d} V_\phi = {v_\mathrm{k}} + \frac{1}{2} \frac{\partial \textrm{ln}\,P}{\partial \textrm{ln}\,r} \frac{{c_\mathrm{s}}^2}{{v_\mathrm{k}}}, \end{equation} and the radial velocity was set to 0. The initial conditions described above represent our equilibrium axisymmetric disk configuration. This configuration was evolved numerically for three binary orbital periods, which is enough to develop a quasi-stable spiral pattern. This evolved disk will be referred to as the non-axisymmetric disk. Here we have to point out that the 2D approximation introduces a certain inconsistency into both models of the gaseous disk. The problem with 2D hydrodynamical simulations is that the velocity field cannot be directly linked to the 3D one. The 2D velocity given by Eq. \ref{eq:vang2d} assures radial equilibrium for a given gradient of the {\it vertically averaged pressure}. Note, however, that it is neither the vertically averaged velocity nor velocity in the equatorial plane. Proper, equatorial plane velocity is given by \begin{equation} \label{eq:vang3d} V_\phi = {v_\mathrm{k}} + \frac{1}{2} \frac{\partial \textrm{ln}\,p}{\partial \textrm{ln}\,r} \frac{{c_\mathrm{s}}^2}{{v_\mathrm{k}}} \end{equation} where $p=\rho {c_\mathrm{s}}^2$ denotes pressure in the equatorial plane. Both formulae give different results since, in general, the gradient of $p$ is different from the gradient of $P$. Unfortunately, the simulated 2D velocity cannot be transformed to the equatorial plane velocity. Thus an inconsistency arises: we use 2D velocities, while the density is converted to 3D one (Eq. \ref{eq:voldens}). We decided that it is better to use Eq. \ref{eq:vang2d} for the axisymmetric disk model although in principle Eq. \ref{eq:vang3d} should be used: our results may not be accurate quantitatively, but at least we can compare both models qualitatively. 
Furthermore we expect {\it relative} results from both models to be less affected than absolute ones. For the later considerations it is useful to define two dimensionless velocities which describe deviations from the Keplerian flow: \begin{equation} \eta = ({v_\mathrm{k}}-V_{\phi})/{v_\mathrm{k}}, \end{equation} \begin{equation} \kappa = -V_r/{v_\mathrm{k}}. \end{equation} In other words, these are velocity components of a large particle moving on a circular, Keplerian orbit relative to the gas. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{v_rho_gas_radial.eps} } \caption{ \label{fig:rad_prof} Top panel: radial profiles of angle-averaged $\eta$ in non-axisymmetric disk (solid), axisymmetric disk with ``2D velocity'' (Eq. \ref{eq:vang2d}, dotted), and axisymmetric with ``3D velocity'' (Eq. \ref{eq:vang3d}, dot-dashed). Bottom panel: radial profile of angle-averaged surface overdensity, $(\Sigma-\Sigma_i)/\Sigma_i$, in a non-axisymmetric disk.} \end{figure} To reveal the influence of spiral waves on the particle motion, we have to compare results obtained in the axisymmetric disk with those obtained in the perturbed disk at a radius where angle-averaged values of $\rho, V_r$, and $V_\phi$ are comparable. The upper panel of Fig. \ref{fig:rad_prof} shows the radial profiles of $\eta$ averaged over the full angle for three disk models: non-axisymmetric, axisymmetric employing formula \ref{eq:vang2d}, and axisymmetric employing formula \ref{eq:vang3d}. As we see, the profiles of the two first models are comparable up to a distance of 3\,AU from the primary star. Outside of this region the deviations from the Keplerian flow grow substantially. Since the spiral waves are strongest in the outer region of the disk, in the next section we will trace the motion of the particle placed initially at 3\,AU. The dash-dotted curve illustrates why the ``3D velocity'' is not suitable for our comparison. 
The density in the non-axisymmetric model at 3\,AU has grown during the simulation time by roughly 15\% with respect to the initial model (see lower panel of Fig. \ref{fig:rad_prof}). Since the migration speed due to gas drag scales linearly with the density, this difference can be easily accounted for in comparisons with the axisymmetric disk. \subsection{Radial transport of gas} When the spiral density waves are excited in the disk, it is no longer in radial equilibrium. Because the spiral pattern is rotating slower than the local Keplerian velocity, the dissipation at the shock leads to a decrease in angular momentum of the orbiting gas and its radial mass transport. Figure \ref{fig:ang_prof} shows the angular cross sections of velocity components and density at 3\,AU in the non-axisymmetric disk. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{ang-prof.eps} } \caption{\label{fig:ang_prof} Angular cross-sections of velocity components and density at $r$=3\,AU.} \end{figure} Indeed, the angle-integrated mass flux calculated from those profiles is negative (inward). It can be interpreted as the result of an effective viscosity that we define as the turbulent viscosity necessary to cause the same radial mass flux. In the standard $\alpha$-disk theory, the kinematic viscosity $\nu$ is expressed in terms of a dimensionless parameter $\alpha$ as $\nu=2/3\alpha {c_\mathrm{s}} h$. Assuming a steady accretion disk, i.e. $\dot{m}=-3\pi \nu \Sigma$ independent of $r$, we can parametrize the mass accretion rate with the effective value of $\alpha_{\mathrm{eff}}$: \begin{equation} \dot{m}=2\pi r V_r \Sigma = -2 \pi \alpha_{\mathrm{eff}} {c_\mathrm{s}} h \Sigma. 
\end{equation} For the non-axisymmetric disk, the above equations must be averaged over the azimuthal angle, and finally the effective $\alpha$-viscosity is defined as \begin{equation} \alpha_{\mathrm{eff}} = - \frac{2 {v_\mathrm{k}} V_\mathrm{r, eff}}{3 {c_\mathrm{s}}^2}, \end{equation} where \begin{equation} V_\mathrm{r, eff} = \frac{\int_0^{2\pi}\Sigma V_\mathrm{r} \, \mathrm{d}\phi}{\int_0^{2\pi}\Sigma \, \mathrm{d}\phi}. \end{equation} It has been shown analytically \citep{Spruit87} and numerically \citep{Blondin00, Rozyczka93} that the mass transport by tidal waves can be very effective. For reasonable disk parameters in close binary systems, the effective $\alpha$-viscosity in the outer parts of the disk can easily reach 0.1 or even more. It is hard to produce such high values by ordinary turbulent viscosity. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{alpha_lin.eps} } \caption{\label{alpha} Radial profile of the effective $\alpha$-viscosity.} \end{figure} The radial profile of $\alpha_{\mathrm{eff}}$ in our non-axisymmetric disk is shown in Fig. \ref{alpha}. Close to the inner edge of the disk, it oscillates strongly because the spiral waves are very tightly wound, and the radial grid resolution is insufficient for resolving them. However in the region close to 3\,AU, the resolution is sufficient, and the effective $\alpha$ may be easily found. \section{Evolution of orbital elements \label{sec:orb_evol}} \subsection{Non-equilibrium axisymmetric disk: analytical calculations} The rate of change of orbital elements of a particle experiencing gas drag was calculated by \citet[][hereafter A76]{Adachi76}. Their results cannot be applied, however, applied to a disk in the binary system because the authors assumed that the disk is axisymmetric and stays in radial equilibrium ($V_r=0$). In the binary system the disk is neither axisymmetric nor in radial equilibrium, so the evolution of orbital elements may be very different. 
In this subsection, we calculate analytically the evolution of orbital elements of a particle moving through the gaseous disk, which is not in radial equilibrium, while still neglecting the action of the companion on particles. In other words, we extend the A76 work to the case of non-zero radial gas velocity. First, we consider the simplest case of the particle on a nearly circular, non-inclined orbit. Let $u=\sqrt{u_r^2+u_\phi^2}$ be the value of the total relative velocity between particle and gas, where ($u_r$, $u_\phi$) are radial and angular components, respectively. The particle loses specific angular momentum only due to angular component of the drag force, and for small drag (when the orbit stays nearly circular) its loss rate can be approximated as \begin{equation} \frac{{\mathrm{d}}({v_\mathrm{k}} a)}{{\mathrm{d}} t} = \frac{1}{2}{v_\mathrm{k}} \frac{{\mathrm{d}} a}{{\mathrm{d}} t} \approx -\frac{u_\phi \cdot a}{\tau} \end{equation} where ${v_\mathrm{k}}$ is the Keplerian velocity at radius $a$, and $\tau$ is the stopping timescale: \begin{equation} \tau = \frac{u_\phi}{A\rho u_\phi u}. \end{equation} Thus the orbit decay rate is given by the approximate formula \begin{equation} \label{eq:dadt_approx} \frac{{\mathrm{d}} a}{{\mathrm{d}} t}\approx -2A\rho a \frac{u_\phi}{{v_\mathrm{k}}}u. \end{equation} There are two important factors here: $u_\phi$ and $\rho u$. The particle loses angular momentum when colliding with the gas at relative velocity $u_\phi$, but the mass flux of this gas is $\rho u$, which is how the radial gas velocity enhances the migration rate. For large enough particles moving with Keplerian velocity, we can further write \begin{equation} \label{eq:dadt_eta} \frac{{\mathrm{d}} a}{{\mathrm{d}} t} \approx -2A\rho a\eta u. \end{equation} In appendix A we calculate the evolution of orbital elements for the general case of eccentric and inclined orbits within the framework of perturbation theory. 
In the limit of a circular, non-inclined orbit, the general formula for orbitally averaged $da/dt$ (Eq. \ref{eq:dadt}) reduces exactly to Eq. \ref{eq:dadt_eta}. We would like to turn the reader's attention to the dependence on the {\em value} of total relative velocity in formula (\ref{eq:dadt_approx}) (for details, see Appendix). In particular, in the limit of high radial velocity, $u_r \gg u_\phi$, we have a very surprising relation: \begin{equation} \frac{{\mathrm{d}} a}{{\mathrm{d}} t} \propto -\vert u_r \vert. \end{equation} The migration rate is proportional to the absolute value of gas radial velocity; i.e., the particle migrates inward regardless of the direction of the radial gas flow! In order to test this result numerically we measured particle migration rates in the axisymmetric disk, while artificially varying the radial gas velocity. The particle was initially on a circular orbit with components of relative velocity $u_\phi$ and $u_r = V_r = n \cdot u_\phi$, where $n$ was an integer number between -5 and 5. Figure \ref{fig:adot-n} shows measured and predicted migration rates (normalized to $\dot{a}(n=1)=1$) as a function of $n$. The predicted curve is given by the formula \begin{equation} \label{eq:adot-n} \dot{a}(n)=\sqrt{\frac{1+n^2}{2}}. \end{equation} \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{dadt-n.eps} } \caption{\label{fig:adot-n} Dependence of the particle migration rate (in units of $\dot{a}(n=1)$) on the radial gas velocity. Solid line - simulation; dotted line -- analytical approximation (Eq. \ref{eq:adot-n}).} \end{figure} Indeed, the migration rates measured in the simulation for $n$ and $-n$ are the same within 1\%. Also the agreement with prediction (Eq. \ref{eq:adot-n}) is very good, especially for small $n$. For higher values of $|n|$, formula ($\ref{eq:adot-n}$) deviates slightly from results of the simulation because the orbit differs more and more from the assumed circular shape. 
In fact, changes in the semi-major axis and eccentricity are coupled so must be considered together. Even though the radial gas velocity does not change the angular momentum of the particle directly, it does change its eccentricity, which in turn affects the decay rate of the semimajor axis (see Eqs. \ref{eq:dadt_avg}-\ref{eq:deidt_avg}). The above result concerns an idealized axisymmetric disk and aims to enhance basic effects of radial gas flow (regardless of its source), as well as to provide a proof of code validity. \subsection{Non-axisymmetric disk: numerical and analytical results} In this section we present the results of the orbital calculations in model W2 and compare them with models EA1, EA2 and W1. We also derive an analytical approximation to the perturbative formulae \ref{eq:dadt}-\ref{eq:deidt} for the case of non-axisymmetric disk and compare it with numerical results from model W2. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{ae-t-e=0_i=0.eps} } \caption{\label{fig:ae-t} Semimajor axis (upper panel) and eccentricity (lower panel) versus time for a 10\,m particle in models: W2 (solid), EA2 (dashed), EA1 (dotted) and W2 with radial gas velocity set to zero (dash-dotted).} \end{figure} As a first example, we consider particle of radius 10\,m on an initially non-inclined, circular orbit of radius 3\,AU. We checked empirically that in all our models the particle of this size migrates by less than 0.2\,AU within the simulation time, so that the parameters of the gaseous disk can be regarded as roughly constant. This enables us to compare the migration speed between different disk models. The upper panel of Fig. \ref{fig:ae-t} shows the evolution of the particle's semimajor axis in models W2, EA2, and EA1. As we see, the difference in migration speed of a particle in the non-axisymmetric disk is substantially enhanced with respect to the other two cases, roughly by a factor of three. The lower panel of Fig. 
\ref{fig:ae-t} shows the corresponding evolution of eccentricity. Oscillations with the synodic period of the companion star are clearly seen. We checked that the oscillations are not damped by gas drag for particles larger than 10\,m. The amplitude of oscillations depends on the semimajor axis, and the mean eccentricity is roughly the same in both models with a companion star (only a very small decrease is observed with decreasing $a$). For this reason the eccentricity cannot be responsible for the difference in the migration speed. We have already shown analytically that this relatively fast migration can be induced by the radial gas velocity. In order to test this prediction numerically, we performed an orbital calculation with the non-axisymmetric disk in which the radial gas velocity was set artificially to zero. The corresponding curve in Fig. \ref{fig:ae-t} clearly proves that the radial gas flow is the main factor responsible for the accelerated particle migration. Another two factors are responsible for the remaining part of the difference from the axisymmetric model. First, in the non-axisymmetric case the effective value of $\eta^2$ is higher due to weighting by the density (see discussion in the next paragraph). Second, due to the dissipation in spiral waves, the mean density in the simulation has increased by 15\% with respect to the axisymmetric model (see Fig. \ref{fig:rad_prof}). How well do the above numerical results agree with the analytical approximations? The comparison is straightforward for the equilibrium axisymmetric disk. In that case formula \ref{eq:dadt} reduces to the original formula 4.21 from A76. We have $a/\tau_0=0.51$ [2$\pi$\,AU/yr] in our disk model for the 10\,m particle on the orbit with $a=3$\,AU. Furthermore, $\eta=0.0056$, and the measured mean eccentricity is $e=0.006$. For those parameters the analytically predicted migration velocity is within 1\% of the value measured in the simulation: $4.3\cdot10^{-5}$ [2$\pi$\,AU/yr]. 
This proves the accuracy of our orbital integration. Application of formula \ref{eq:dadt} to the non-axisymmetric disk is not as straightforward. We have found that simply inserting angular averages of $\eta$ and $\kappa$ leads to quite a large discrepancy with the value measured in the simulation. This is because the density in our non-axisymmetric disk {\em is} correlated with the velocity, so the approximation \ref{eq:uF_approx} is not justified. Here we derive a new version of the formulae \ref{eq:dadt}-\ref{eq:deidt} that takes such a correlation into account. To that end we modify approximation \ref{eq:uF_approx} by also detaching $\rho$ from $F$ and averaging its product with $u$ separately: \begin{equation} \label{eq:uF_approx2} \langle \rho uF \rangle = \{\langle (\rho u)^2 \rangle\}^{1/2} \langle F \rangle, \end{equation} where the orbital average $\langle \rangle$ is defined by Eq. \ref{eq:orbavg}. This leads to the following formulae: \begin{equation} \label{eq:dadt_avg} \frac{\tau_0'}{a}\left\langle\frac{{\mathrm{d}} a}{{\mathrm{d}} t}\right\rangle = -2\left[ \left(\frac{5}{8}-\kappa_{\mathrm{eff}}^2\right) e^2 + \frac{1}{2}i^2+ \eta_{\mathrm{eff}}^2 + \kappa_{\mathrm{eff}}^2 \right] ^{1/2}\eta \end{equation} \begin{equation} \label{eq:deidt_avg} \frac{\tau_0'}{e}\left\langle\frac{{\mathrm{d}} e}{{\mathrm{d}} t}\right\rangle = 2\frac{\tau_0'}{i}\left\langle\frac{{\mathrm{d}} i}{{\mathrm{d}} t}\right\rangle = \left[ \left(\frac{5}{8}-\kappa_{\mathrm{eff}}^2\right) e^2 + \frac{1}{2}i^2+ \eta_{\mathrm{eff}}^2 + \kappa_{\mathrm{eff}}^2 \right]^{1/2}, \end{equation} which are very similar to Eqs. \ref{eq:dadt}-\ref{eq:deidt}, but the variables $\eta$ and $\kappa$, which enter the formula for the total relative velocity (Eq.
\ref{eq:usq}), have been changed to the {\em effective} values \begin{equation} \label{eq:etaeff} \eta^2 \rightarrow \eta_{\mathrm{eff}}^2 = \frac{\overline{(\rho \eta)^2}}{\overline{\rho}^2} \end{equation} \begin{equation} \label{eq:kappaeff} \kappa^2 \rightarrow \kappa_{\mathrm{eff}}^2 = \frac{\overline{(\rho \kappa)^2 }}{\overline{\rho}^2}. \end{equation} Here the bar denotes averaging over the full angle at radius $r=a$, in contrast to the orbital average defined by Eq. \ref{eq:orbavg}. Also the characteristic time scale now depends on the averaged density: \begin{equation} \tau_0'=\frac{1}{A\, \overline{\rho}\, {v_\mathrm{k}}(a)}. \end{equation} We note that $e$ and $i$ are not translated to the effective values because we have assumed that they remain constant during one orbital period. This might not be fulfilled for small particles that are more strongly coupled to the gas. Fortunately, the effect becomes noticeable only at the lower limit of the size range for which the drag law we used is applicable. In analogy to Eq. \ref{eq:kappaeff}, we can define the effective $\alpha$-viscosity, and its relation to $\kappa_{\mathrm{eff}}$ is \begin{equation} \kappa_{\mathrm{eff}}=\frac{3}{2}h_r^2 \sqrt{\overline{ \alpha_{\mathrm{eff}}^2 }}. \end{equation} Now we are in a position to test this result with numerical simulations. From model W2 we measured mean $e=0.006$, $\eta=0.0056$, and effective $\kappa_{\mathrm{eff}}=0.027$, $\eta_{\mathrm{eff}}=0.01$. For these parameters, the predicted migration velocity is $1.9\cdot 10^{-4}$ [2$\pi$\,AU/yr] in comparison to $1.8\cdot 10^{-4}$ [2$\pi$\,AU/yr] measured from the simulation. Taking into account the number of approximations made in the analytical derivation, this agreement is indeed excellent.
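The same arithmetic with the effective values reproduces the non-axisymmetric prediction. In the sketch below (again our own illustration, not code from the paper) we make one explicit reading of the text: the 15\% increase of the mean density reported earlier is folded into $\tau_0'$, since $\tau_0'$ scales as the inverse of the mean density.

```python
import math

# Values measured from model W2 (quoted in the text), 10 m particle at a = 3 AU
e = 0.006
i = 0.0
eta = 0.0056          # plain eta; it multiplies the square root in Eq. (dadt_avg)
eta_eff = 0.01        # density-weighted effective value
kappa_eff = 0.027     # density-weighted effective radial gas velocity

# a/tau_0 = 0.51 in the equilibrium disk; tau_0' ~ 1/(mean density), and the
# mean density is 15% higher in the wave-bearing model, so a/tau_0' is 15% larger.
a_over_tau0p = 0.51 * 1.15

bracket = (5.0 / 8.0 - kappa_eff**2) * e**2 + 0.5 * i**2 + eta_eff**2 + kappa_eff**2
dadt = -2.0 * a_over_tau0p * math.sqrt(bracket) * eta  # Eq. (dadt_avg), in 2*pi AU/yr

print(f"da/dt = {dadt:.2e} 2*pi AU/yr")
```

With that reading, the result is within one percent of the quoted prediction of $1.9\cdot10^{-4}$ [2$\pi$\,AU/yr].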
\begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{dadt-s.eps} } \caption{\label{fig:dadt-s} Migration rate as a function of the particle size for models: W2 -- solid line, EA2 -- dashed line, and EA1 -- dotted line.} \end{figure} In Fig. \ref{fig:dadt-s} we display the migration rates measured in simulations as a function of the particle size. As we see, the presence of the companion star does not change the migration rate in axisymmetric disks, because the excited eccentricity oscillations are small in comparison to the $\eta$ parameter that plays the major role. In the non-axisymmetric disk, the parameter $\kappa_{\mathrm{eff}}$ clearly dominates the other factors ($e, i, \eta_{\mathrm{eff}}$), and thus for all considered sizes the migration speed is enhanced by roughly the same factor of 3. Of course, this factor changes as $\kappa_{\mathrm{eff}}$ changes. The value of $\kappa_{\mathrm{eff}}$ depends on the position in the disk, the disk properties, and the binary separation. In particular, for a given disk model, it decreases with decreasing $r$, because the waves are weaker in the inner disk. For a discussion of the dependence of $\alpha_{\mathrm{eff}}$ on the disk density profile and temperature (for isothermal models), please refer to \citet{Blondin00}. To test the formula for inclination damping (Eq. \ref{eq:deidt_avg}), we placed a particle on an initially circular orbit at 3\,AU with inclination 0.01. The evolution of its orbital elements in models W1, W2, and EA2 is shown in Fig. \ref{fig:aei-t}. Clearly, the inclination decreases faster in the non-axisymmetric disk. The initial damping rate, measured over the interval $\delta t=50\,yr/2\pi$, is about three times higher than in model EA2. In order to make a comparison with the analytical approximation (which does not account for the presence of a secondary), we ran one more model with the non-axisymmetric disk but with the gravity of the secondary star switched off (W1; see Fig. \ref{fig:ae-t}).
The difference between models W1 and W2 is very small, which is expected since the companion star influences the inclination rather weakly. We measured the initial slope in model W1 to be $2.2\cdot 10^{-5}\,[2\pi/yr]$, while the approximate Eq. \ref{eq:deidt_avg} gives $2.5\cdot 10^{-5}\,[2\pi/yr]$, which makes a 12\% difference. Although worse than for the semimajor axis, this accuracy is still very good for an approximate formula. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{aei-t.eps} } \caption{\label{fig:aei-t} Evolution of orbital elements for a particle on an initially circular, inclined orbit in models: W2 (solid line), EA2 (dotted line), and W1 (dashed line).} \end{figure} The eccentricity of the particle in a binary system is set by the balance between forcing from the secondary and drag damping. As already shown in Fig. \ref{fig:ae-t}, the eccentricity of a particle on an initially circular orbit is immediately forced to oscillate with a roughly constant amplitude (0.01 at 3\,AU). Comparison of the analytical formula for damping of $e$ (Eq. \ref{eq:deidt_avg}) with simulation results in model W2 would require setting up the particle with an initial eccentricity substantially higher than 0.01. Since it is rather unlikely for a small particle to have such eccentricities, and in addition the analytical approximation may not work well in this regime, we do not present such a comparison. We note only that the comparison of eccentricity between models W1 and W2 in Fig. \ref{fig:aei-t} reveals that the spiral waves are only responsible for the lower limit of eccentricity in model W2, while the remaining contribution comes from the perturbation by the secondary. \section{Coherence of periastra and eccentricities \label{sec:coherence}} The relative shape and alignment of neighbouring orbits is a very important factor because it controls the relative velocities of the particles and thus their growth rate.
Even if the companion star excites high eccentricities, it does not necessarily mean that the relative velocities are high. \citet{Marzari00} have shown that secular perturbations from an eccentric companion, combined with the gas drag, lead to a strong alignment of periastra between particles of the same size on neighbouring orbits. Since the periastra are also coupled to the eccentricities, the relative velocities are low. This effect of ``orbital phasing'' in an eccentric binary is prominent even for bodies of 100 km in diameter, which are commonly regarded as decoupled from the gas. It should be stressed that the collision velocities are low only between particles of the same size. \citet{Thebault06} has shown that for a distribution of sizes, even a small misalignment in the periastra of particles of different sizes results in relative velocities that are high enough to prohibit collisional growth (note, however, that they used an equilibrium axisymmetric-disk approximation). In this section we investigate the effects of the orbital coherence, but in the circular binary system and for smaller sizes of particles. The secular effects of eccentricity forcing and periastra alignment are not present in the circular system. However, the inclusion of a non-axisymmetric gaseous wave pattern provides an additional factor that locally perturbs the orbit of the particle. Since the wave pattern co-rotates with the companion star, we may expect effects similar to the secular gravitational perturbations. To check what really happens in such systems, we performed 4 runs with non-interacting particles of sizes 1\,m, 10\,m, 100\,m, and 1\,km. Each run was initiated with 30000 particles on circular, non-inclined orbits distributed randomly between 0.8\,AU and 6\,AU. Figure \ref{fig:w-a} shows the longitudes of particle periastra with respect to the longitude of the companion, $\tilde{\omega}$, as a function of particle semimajor axis.
Each row shows the evolutionary sequence for the population of particles of a given size (indicated in the vertical-axis legend). The time evolution is shown to prove that the observed configurations are not transient but converge to a certain, stationary pattern in the $\tilde{\omega}-a$ plane. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{w-a-img.eps} } \caption{\label{fig:w-a} Periastron longitude $\tilde{\omega}$ in model W2. Each row shows temporal evolution for a particle of a given size. The time is displayed in years.} \end{figure} A quick look at the plot reveals that the effect of periastra alignment {\em is} present for all considered sizes of particles. The degree of alignment depends, however, on the distance from the central star and on the particle size. A closer exploration of the plot allows two different regimes to be distinguished depending on the particle size: \begin{enumerate} \item Particles with radii smaller than a few meters feel strong gas drag, and their periastra are correlated with the spiral pattern in the gaseous disk. This can be observed in the plot for 1\,m particles in the region of stable orbits (below $\sim$3\,AU). \item Particles with radii larger than a few meters have aligned orbits, but the alignment is not correlated with the waves in the gaseous disk. In the outer parts of the disk, around 3\,AU, the periastra are in opposition to the companion star with a substantial scatter in longitude (larger for larger particles). When the drag force is stronger (in the inner disk or for smaller particles), the alignment is more pronounced. It is particularly strong for 10\,m particles. \end{enumerate} We want to stress here that the alignment of periastra does not imply any spatial alignment of particles. In fact we did not observe a spatial correlation of particles with the spiral waves in the particle-size range considered here. We checked, however, that such a correlation becomes weakly visible for 10\,cm particles.
Figure \ref{fig:e-a} illustrates the dependence of eccentricity on semimajor axis in the same manner as for the periastra longitudes. The coherence in eccentricity is much weaker than in the eccentric model of \citet{Marzari00}. In the inner disk, this is simply the effect of strong damping by the gas drag. The presence of spiral waves manifests itself mainly for 1\,m and 10\,m particles in the form of pulsations in $a$. Actually, two pulsation patterns can be noticed for 1\,m particles, corresponding to the two spiral arms. We expect that the relative phase between those two patterns depends on the winding angle of the spirals. Thus the coherence in eccentricity will vary depending on the disk temperature. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[]{e-a-img.eps} } \caption{\label{fig:e-a} Eccentricity as a function of semimajor axis. Each row shows the temporal evolution for a particle of a given size.} \end{figure} \section{Discussion and summary \label{sec:summary}} We have investigated the orbital evolution of particles moving in a circumprimary gaseous disk in a circular binary system. This is the first investigation of the orbital motions that includes tidally excited density waves in the gas-drag calculation. We have demonstrated numerically that the radial flow of gas in the disk (inward or outward) effectively increases the drag force due to an enhanced mass flux colliding with the solid particles. We derived approximate analytical formulae for the rate of change of the orbital elements ($a, e, i$) in a gaseous disk that is not in radial equilibrium. The formulae do not assume anything about the source of the radial gas flow. In general there are four constituents of the relative velocity between particle and gas. The first two, eccentricity and inclination, describe the deviation of the particle from the Keplerian gas flow. The remaining two represent the radial and angular deviation of the gas from the Keplerian flow.
To account for the non-axisymmetric features in the disk, one has to introduce {\em effective} components of the gas velocity. The effective components can be substantially larger than simple angular averages, meaning that the evolution of orbital elements can be faster in the non-axisymmetric disk. This is exactly what happens in the investigated disk perturbed by the companion star: the tidally induced spiral waves, which propagate radially in the form of shocks, increase the effective components of the gas velocity. In particular, in the outer disk where the perturbation is strongest, the effective radial component dominates the other components of the relative velocity. We have found numerically that particles of sizes from 1\,m to 10\,km migrate there around three times faster than in an axisymmetric disk in radial equilibrium. This result agrees with the analytical prediction very well, within a few percent. The effect of the enhanced drag force due to the radial gas flow has some significant consequences in the context of planetesimal formation in the binary system. The faster particle migration raises the old question of whether there is enough time to form planetesimals before all smaller particles have fallen onto the star. We have to note, however, that the enhancement in the drag force gradually vanishes inward in the disk. Depending on the disk temperature and density radial profile, this differential migration may eventually result in the accumulation of particles at a certain radius in the inner disk, although we estimate that this is not the case for realistic disk models. There are other effects that may support faster planetesimal formation. If only the larger bodies can form quickly, the size-independent enhancement of the migration speed will result in a higher flux of smaller particles, which will feed the larger body.
Furthermore, an accelerated damping of the particles' inclinations leads to an increase in their number density in the mid-plane of the disk and more frequent collisions. Both effects are important for planetesimal formation and the early stages of growth. The frequency of collisions between particles is only one of the two factors controlling their growth rate. The other one is the relative velocity of the colliding bodies, which determines whether the bodies stick together or shatter into smaller pieces. The relative velocity depends almost exclusively on the eccentricity and periastra orientation of the particles, provided that the inclinations are already damped. The contribution from the decay of the semimajor axis is negligible (in our system it can play a role only for 1\,m particles at 3\,AU). We find that the spiral waves induce a certain coherency in both periastra longitudes and eccentricities. Only for the smallest particles of 1\,m size does the coherence in periastra longitudes come from a simple correlation of the orbit orientation with the spiral wave pattern in the gas. For larger bodies, there is no such correlation, and the degree of coherence decreases with increasing size of the body. It seems that the relative velocities will be affected by coherence only for bodies smaller than 10\,m. In order to determine how much this helps planetesimal formation, collisional simulations with a realistic size distribution will have to be carried out. This will be the subject of the next paper. In this paper we focused on the circular binary system because it allows the sole effects of the spiral waves to be studied without the time-dependent effects that are present in the eccentric binary. On the other hand, the radial gas flow in the eccentric system is much stronger, and we expect it to influence particle motion to a much higher degree than in the circular case. The eccentric binary case will be investigated in one of the subsequent papers.
\begin{acknowledgements} P.C. acknowledges financial support provided through the European Community's Human Potential Programme under contract HPRN-CT-2002-00308, PLANETS. P.C. and A.G. were supported by the Polish Ministry of Science through grant No. 1 P03D 026 26. The software used in this work was in part developed by the DOE-supported ASC / Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago. \end{acknowledgements} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \begin{appendix} \section{Appendix: Mean variations of orbital elements} In this section we extend calculations of A76 for the case of a disk in radial non-equilibrium. All other assumptions made in A76 are preserved; in particular, the disk is axisymmetric. Parts of the formulae written in bold font are additions with respect to their formulae. Introducing non-dimensional radial velocity $\kappa=-V_r/{v_\mathrm{k}}$, Eqs. (4.7) in A76 take the following form: \begin{eqnarray} \label{eq:dadt-ext} \frac{{\mathrm{d}} a}{{\mathrm{d}} t} &=& -A\rho u \frac{2a}{1-e^2} \left\{ 1+2e \cos\psi + e^2 - (1+e \cos\psi)^{3/2}h \cos i \right. + {}\nonumber\\ & & {} + \left. \boldsymbol{\kappa (1-e^2)^{1/2}e\sin \psi} \right\}, \\ \label{eq:dedt-ext} \frac{{\mathrm{d}} e}{{\mathrm{d}} t} &=& -A\rho u \left\{ 2\cos \psi+2e- \frac{2\cos\psi + e + e \cos^2\psi}{(1+e\cos \psi)^{1/2}}h\cos i + \right. {}\nonumber\\ & & {} + \left. \boldsymbol{\kappa (1-e^2)^{1/2}\sin \psi} \right\}, \\ \label{eq:didt-ext} \frac{{\mathrm{d}} i}{{\mathrm{d}} t} &=& -A\rho u \frac{h}{(1+e\cos\psi)^{1/2}} \cos^2(\psi+\omega)\sin i , \end{eqnarray} where $\psi$ and $\omega$ denote true anomaly and argument of periastron respectively, and $h \approx 1-\eta$. 
The total relative velocity $u$ reads: \begin{eqnarray} \label{eq:usq} u^2 &=&{v_\mathrm{k}}^2(a)\{(1-\frac{3}{4}\cos^2 \psi)e^2+\cos^2(\psi+\omega)i^2+\eta^2+\eta e\cos \psi\} + {}\nonumber \\ & & {} \boldsymbol{+ \frac{{v_\mathrm{k}}^2}{1-e^2}\left\{ \kappa^2(1-e^2)-2\kappa e(1-e^2)^{1/2} \sin \psi \right\}}. \end{eqnarray} Assuming that the drag force is weak and the orbital elements are constant during one orbital period, their orbitally averaged rate of variation is expressed as \begin{equation} \label{eq:orbavg} \left\langle\frac{{\mathrm{d}} Q}{{\mathrm{d}} t}\right\rangle = \frac{1}{2\pi}\int_{0}^{2\pi} \frac{{\mathrm{d}} Q}{{\mathrm{d}} t}\frac{(1-e^2)^{3/2}}{(1+e\cos \psi)^2} \mathrm{d}\psi, \end{equation} where $Q\in\{a,e,i\}$. In order to simplify further calculations we apply the approximation (Eq. 4.19 in A76): \begin{equation} \label{eq:uF_approx} \langle uF \rangle = \langle u \rangle \langle F \rangle = \left\{\langle u^2 \rangle\right\}^{1/2} \langle F \rangle \end{equation} where $F$ denotes the factors other than $u$ in Eqs. (\ref{eq:dadt-ext})-(\ref{eq:didt-ext}). It is justified as long as $u$ and $F$ are independent and the constant part of $u^2$ is greater than the amplitude of the oscillatory part. Provided that the variables $e,i,\eta,\kappa$ are much smaller than unity, and preserving only the leading terms for each of them, we obtain from Eq. (\ref{eq:usq}) \begin{equation} \langle u^2 \rangle = {v_\mathrm{k}}^2(a)\left[ \left(\frac{5}{8}-\boldsymbol{\kappa^2}\right) e^2 + \frac{1}{2}i^2+ \eta^2+ \boldsymbol{\kappa^2} \right]. \end{equation} In practice, the term $\kappa^2$ in front of $e^2$ is negligible, and the remaining formula is intuitive: in addition to the angular component of the gas velocity, $\eta$, its radial part, $\kappa$, enters. Fortunately, the orbital average of $F$ is not changed with respect to A76, since all new terms in Eqs. (\ref{eq:dadt-ext})-(\ref{eq:didt-ext}) vanish when averaged.
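The vanishing of the new terms is easy to verify numerically: the bold additions to Eqs. (\ref{eq:dadt-ext}) and (\ref{eq:dedt-ext}) are both proportional to $\sin\psi$, while the weight in Eq. (\ref{eq:orbavg}) is even in $\psi$, so the orbital average is zero. A uniform-grid quadrature in pure Python (illustrative only; $\kappa$ and constant prefactors are dropped since they do not affect the result):

```python
import math

def orbit_average(f, e, n=20_000):
    """Orbital average of f(psi) with the weight from Eq. (orbavg)."""
    w = (1.0 - e * e) ** 1.5
    total = 0.0
    for k in range(n):
        psi = 2.0 * math.pi * k / n
        total += f(psi) * w / (1.0 + e * math.cos(psi)) ** 2
    return total / n

e = 0.3  # even a sizeable eccentricity gives a vanishing average

# The new drag terms are proportional to (1-e^2)^{1/2} sin(psi)
new_term = lambda psi: math.sqrt(1.0 - e * e) * math.sin(psi)

avg = orbit_average(new_term, e)
print(f"<new term> = {avg:.1e}")  # zero to quadrature accuracy
```

The uniform sum over one period is the trapezoidal rule for a periodic integrand, and the odd symmetry of $\sin\psi$ makes the average cancel exactly.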
Thus the only changes come from the $\langle u^2 \rangle$ term, and finally we obtain: \begin{equation} \label{eq:dadt} \frac{\tau_0}{a}\left\langle\frac{{\mathrm{d}} a}{{\mathrm{d}} t}\right\rangle = -2\left[ \left(\frac{5}{8}-\kappa^2\right) e^2 + \frac{1}{2}i^2+ \eta^2+ \kappa^2 \right] ^{1/2}\eta \end{equation} \begin{equation} \label{eq:deidt} \frac{\tau_0}{e}\left\langle\frac{{\mathrm{d}} e}{{\mathrm{d}} t}\right\rangle = 2\frac{\tau_0}{i}\left\langle\frac{{\mathrm{d}} i}{{\mathrm{d}} t}\right\rangle = \left[ \left(\frac{5}{8}-\kappa^2\right) e^2 + \frac{1}{2}i^2+ \eta^2+ \kappa^2 \right] ^{1/2}. \end{equation} \end{appendix}
Q: How to embed a website within ipython notebook

I want to make my notebook such that a website opens within the notebook, and anything I do/click in the website works as if it were open in a new tab while remaining within the IPython notebook cell. I know about the selenium package, which opens the website in a new tab, and there are other ways too, but every time I need to leave the notebook and go to a new window/tab. So how can I make my IPython notebook open a website within a cell, so that whatever I do remains within the notebook? Thanks

A: You can try this:

%%html
<iframe src="https://playground.tensorflow.org" width="1200" height="1000"></iframe>

A: For sites that can be embedded in an IFrame, you can try

from IPython.display import IFrame
IFrame("https://www.openasapp.com/embedding-an-iframe-step-by-step/", 900, 500)

This will work if IFrame() is the last thing called in the cell, because Jupyter notebooks automatically call the display function on the last active line in the cell. If you want to have this at the beginning of the cell, you will need to call display manually like this:

from IPython.display import IFrame
display(IFrame("https://www.openasapp.com/embedding-an-iframe-step-by-step/", 900, 500))
print("This code is now the last line, so we need to call display(IFrame()) explicitly")
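If the snippet is needed repeatedly, a small helper can build it. The function name below is ours, not part of IPython; it is plain string formatting, and it still only works for sites that do not forbid framing via X-Frame-Options or a Content-Security-Policy.

```python
def iframe_html(src, width=900, height=500):
    """Return an <iframe> snippet embedding `src`.

    Pass the result to IPython.display.HTML, or paste it under the %%html
    cell magic. Framing must be allowed by the target site.
    """
    return f'<iframe src="{src}" width="{width}" height="{height}"></iframe>'

snippet = iframe_html("https://playground.tensorflow.org", 1200, 1000)
print(snippet)
```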
This is a contributing entry for Virginia Women in History - Northern Virginia Region and only appears as part of that tour. Mary A. Marshall Mary A. Marshall advocated public education and equal rights as a member of the General Assembly for more than twenty years, and Arlington County recognized her work on behalf of its residents when it named a community-based assisted living residence in her honor. Photograph courtesy of the Library of Virginia. The Library of Virginia honored Mary Marshall as one of its Virginia Women in History in 2018. The Virginia Women in History Digital Trail is made possible by the Library of Virginia and American Evolution: Virginia to America, 1619–2019 Backstory and Context Mary A. Marshall (June 14, 1921–October 15, 1992) represented Arlington County in the House of Delegates from 1966 to 1969 and again from 1972 until 1991. After studying political science at Swarthmore College, from which she graduated with honors, she worked for the U.S. Department of Justice during World War II. The mother of three daughters, Marshall got involved in politics during the 1950s to keep Arlington's public schools open when the state's policy of Massive Resistance threatened to close schools that obeyed federal court orders to desegregate. She sponsored voter registration drives and was the first woman elected chair of the county's Democratic Committee before winning election to the General Assembly. Marshall was a strong supporter of public education, health care, help for the mentally ill, and issues relating to children and the environment. During the 1970s she was a leader in the unsuccessful attempt to have the General Assembly ratify the proposed Equal Rights Amendment to the U.S. Constitution. Smart and funny, Marshall was a skilled legislator and served on some of the most important House committees, including Privileges and Elections.
For her last six years in the assembly she was chair of the Committee on Counties, Cities, and Towns, which was of critical interest to her Northern Virginia constituents, and during her last term she was also a member of the influential House Committee on Appropriations. Considered one of Northern Virginia's most effective delegates and sometimes spoken of as the likely first female Speaker of the House, Marshall retired from politics in 1991. Reprinted by permission of the Library of Virginia. Virginia Women in History, 2018 Political and Diplomatic History Created by Education and Outreach (Library of Virginia) on February 6th 2019, 5:21:50 pm. Tours that feature this entry Virginia Women in History - Northern Virginia Region A driving tour showcasing the Library of Virginia's Virginia Women in History honorees from the Northern Virginia region.
Kim Kardashian Spent $10,000 on Fake Testicles for Her Dog by Simon Delott at July 9, 2018 3:10 pm. Kim Kardashian may not be able to keep Kanye from blaming black people for slavery, but there's still one man in her life she can control. That man is her dog, Rocky, whom she had neutered, as any responsible pet owner would do. Strangely, she has reportedly purchased fake testicles for him ... for a whopping $10,000. According to The New York Times, Kim Kardashian has spent $10,000 to purchase prosthetic testicles for her dog, Rocky. Allegedly, she purchased the Neuticles and had them implanted because she wanted to help Rocky's self-esteem. Way back in the simpler times of 2012, Kim Kardashian told The Independent that she "doesn't like big balls on dogs, or anything else." That's sort of a weird comment in general. Some people are uncomfortable with the sight of their dog's genitalia, which is a perfectly fine hang-up to have. It's the "anything else" line that's weirder for her to include. Barring inconvenience during a couple of sex acts, who cares about testicle size? Whatever. Maybe she was trying to be funny. It was 2012. Regardless, people assume that she purchased Neuticles that were smaller than the originals. Gregg Miller is the creator of Neuticles, and he does not mince words when advocating for people to purchase his very expensive product. "Some have their dog turned into a eunuch because they don't care," Miller says, according to The Daily Mail. First of all, who says eunuch unless they're talking about Ancient Rome or Game of Thrones? Neutered is a perfectly good word, dude. Second of all, you neuter your dog because you care very much. Responsible humans neuter their dogs. "But there's a certain segment of pet owners," Miller says. "That do care and that's where Neuticles come in." Miller sure has some opinions, folks. Kim, as you may recall, got Rocky with then-boyfriend Reggie Bush way, way back in 2010.
"Rocky is most like me, his mommy," Kim said at the time. "He's really cool and calm, and goes with the flow." Kim is not really known for being a dog person, and with the exception of photoshoots and a few pomeranian photos from last year, you're just not going to see dogs show up on her Instagram. It's weird. At the time, Kim revealed that she got into a bit of trouble with Kourtney over Rocky, but it was nothing like Kim and Kourtney's recent klashes. "Mason hasn't really been around dogs that much," Kim said. "Rocky was licking Mason in the face, and Kourtney was mad at me." That's a weird thing to be mad about. "I was like, "No, they need to meet!" It was really funny," Kim described. "And Rocky did calm down after a little bit." As we mentioned, Kim is sort of weird about pets. In fact, she has plainly stated that she forgot what became of various pets that have been gifted to her and her sisters over the years. We're talking about childhood pets. And no, not sea monkeys. If it weren't for the fact that Kim is clearly a good mother, we might worry that she's some sort of unloving monster. Because, seriously, who does that? Kim also doesn't post many pet photos to Instagram. She's posted more explanations of a strange mark in the marble of a hotel room in the past year than she has posted photos of Rocky. Kylie, in the mean time, has made it clear on social media that she is obsessed with her dogs. Which is much, much more normal. But it seems very clear that Kim cares about Rocky if she's dropping what to mere mortals would be a sizable down payment on a car to give him fake testicles to help his self-esteem. Some people just don't show their love on social media. Especially when they are celebrities trying to pick and choose what fits their brand. For Kylie, who was until late last year, a teenager, showing constant dog photos and videos did zero harm to her absolute juggernaut of a brand. 
Kim caters to a different, older crowd, and we're not going to second guess the judgment of the woman who all but invented branding.
PRAISE FOR _One of Us is Sleeping_ AND JOSEFINE KLOUGART "Denmark's pre-eminent postmodernist writer." _—Fjords Magazine_ "Scandinavia has its own Virginia Woolf. Few come closer to the human condition than Klougart." — _VG_ (NORWAY) "Klougart's graceful and precise language propels the novel through a succession of images that justify the vagueness of that feeling, what is eventually described as something akin to 'separating an egg, passing the yolk from hand to hand, the fragile yolk that might break at any moment.' This is a beguiling conjuring of consciousness.'" — _Publishers Weekly_ "Therein lies Klougart's genius. She renders the emotional landscape in impressionistic soft focus. The speaker's voice arrests because it conveys more than setting, plot, or character development—it transmits powerful feelings." —LANIE TANKARD, _World Literature Today_ "Klougart delivers a sustained meditation on love, loss, and alienation." _—Kirkus Reviews_ "Klougart deftly transports us into another person's mind while simultaneously showing us our own." —RACHEL S. CORDASCO, _Bookishly Witty_ "Klougart has crafted a rich novel. Her evocative explorations of how words and life work in tandem to tease meaning from the seemingly inexplicable and random events of life combine to create a novel that is richly creative and boldly written." —ERIC MARONEY, _Colorado Review_ "A dolorous, yet beautifully composed work of failed love, loss, and lament. The star of Klougart's book is her gorgeous, evocative imagery and emotional acuity." —JEREMY GARBER, _Three Percent_ "A consistently compelling read from beginning to end." _—Midwest Book Review_ ALSO AVAILABLE IN ENGLISH BY JOSEFINE KLOUGART _One of Us is Sleeping_ translated by Martin Aitken Deep Vellum Publishing 3000 Commerce St., Dallas, Texas 75226 deepvellum.org · @deepvellum Deep Vellum Publishing is a 501C3 nonprofit literary arts organization founded in 2013. 
Copyright © Josefine Klougart and Gladiator, 2013
Published by agreement with Leonhardt & Høier Literary Agency A/S, Copenhagen
Originally published in Danish as _Om mørke_ by Forlaget Gladiator, Copenhagen, Denmark
English translation copyright © 2017 by Martin Aitken
First edition, 2017
All rights reserved.

The author would like to thank The Danish Arts Foundation for their support.

ISBN: 978-1-941920-50-3 (paperback) · 978-1-941920-51-0 (ebook)
LIBRARY OF CONGRESS CONTROL NUMBER: 2016959329

**DANISH ARTS FOUNDATION**
This translation has been supported with a grant from the Danish Arts Foundation.

Cover design & typesetting by Anna Zylicz · annazylicz.com
Text set in Bembo, a typeface modeled on typefaces cut by Francesco Griffo for Aldo Manuzio's printing of _De Aetna_ in 1495 in Venice.
Distributed by Consortium Book Sales & Distribution.
Printed in the United States of America on acid-free paper.

CONTENTS

* TITLE PAGE
* COPYRIGHT

1. OF DARKNESS
2. PROLOGUE
3. SCENE 1
4. SCENE 2
5. SCENE 3
6. SCENE 4
7. SCENE 5
8. SCENE 6
9. SCENE 7
10. SCENE 8
11. EPILOGUE

* ABOUT THE AUTHOR

_"Assuming that beauty is the distribution of light in the fashion most congenial to one's retina, a tear is an acknowledgment of the retina's, as well as the tear's, failure to retain beauty. On the whole, love comes with the speed of light; separation, with that of sound. It is the deterioration of the greater speed to the lesser that moistens one's eye. Because one is finite, a departure from this place always feels final; leaving it behind is leaving it forever. For leaving is a banishment of the eye to the provinces of the other senses; at best to the crevices and crevasses of the brain. For the eye identifies itself not with the body it belongs to but with the object of its attention. And to the eye, for purely optical reasons, departure is not the body leaving the city but the city abandoning the pupil.
Likewise, disappearance of the beloved, especially a gradual one, causes grief no matter who, and for what peripatetic reason, is actually in motion. As the world goes, this city is the eye's beloved. After it, everything is a letdown. A tear is the anticipation of the eye's future."_

Joseph Brodsky, _Watermark: An Essay on Venice_

All that the eyes see, upon which a gaze falls. A bag someone places on the floor is: a bag someone places on the floor. All things remain as things, and in that way they are _here._ The room is not disrupted, the chronology is not disrupted—none of its constituent parts have ever been together in that way. The way _I_ have always been _she,_ and _you_ have always been _he._ There isn't necessarily any problem in that. A movement in and out of our bodies, a recollection returned, wandering back and forth between us. Or an anger no one understands. A common reservoir, the increasingly threadlike capillaries of the veins; something proceeding through time, then turning back. All sounds are quite as distinct. All voices can be heard, and as such none enjoys priority. A whisper is as clear as a shout. Something serves to amplify the weaker sounds and lengthen the louder ones so that we may hear them. The eyes decide for themselves what they want to observe. That may be a comfort. The ceiling, like the spine of a crouching animal. The duality of movement: inwards and outwards; down to the floor, then up. A whisper, and the space expands. Or: a whisper, and the space is compressed. Not focusing on anything allows _things_ to emerge more clearly. The ways in which they connect—with the eyes that see, and the bodies that listen. The fact of the eye requiring distance in order for an image to come together again in a new way. Plains and skin. Coasts, cuticles. Such leaps, on all imaginable scales. Sound and image work on their own, independently. A thing such as _distance._ What can distances be measured against. A sky.
A sail we have stretched out between walls. The arching vaults of cathedrals. And the same goes for time, the past mingling with what is; the salient past that is here, and all that is yet to come: _here._ The will of the image, and the will of sound. A liberation of the different planes. For instance: The image of a beach, a broad belt of sand in panorama. There are no people in sight, we see only beach, sea, sky. Presently we hear two voices, a man and a woman talking. We hear them clearly, their voices rise with ease above the clamour of the waves. Next, they enter the frame, and the image splits into two images superimposed: the beach before and the beach now; before him and after him, before her and after her; everything that happened _here_ will happen _here_ —happens _here._ Death is perhaps merely a displacement, the same as silence. A moment's imprudence and then again: _here._ She opens her eyes and sees the sky through the crown of the tree. He is standing in the boat, watching the ash settle like a film upon the sea. The sea, calmed; the sea, placid now the wind has died. The ash upon the surface, a rise and fall with the shifting swell, a soothing hand, a membrane containing all that is fluid. Like skin covered in burns. One can no longer see the sky reflected in the sea; the sky, mirrored no more; the sea, no longer returned by his eyes, ash, descended upon orbs, lids above the oceans; and she, as she lies here beneath the tree, sees the sky dappled by its branches. The sky is an eye. Later, she must have been crying—her eyes are bloodshot. The dying of the wind makes her open her eyes; the sky and the tree reflected there. The ash remains; the family must come ashore again, a tight huddle in the boat, joined together by the missing of another, fingers knotted; coming apart and coming together, filling out a shell. All to no avail. Grief, condensing, a pearl in the hand; music in the next room. Blue-black canvas. Nothing in the frame but that. 
Threadbare canvas, greyed and lame beneath the sun. The long fingers of the sun counting out different objects or dabbing at them, picking out buildings, areas, illuminating one thing from outside, something else from within. A few objects can be sufficient. Illuminating from within. A movement we understand with our eyes; things reaching out to you with light. Or else: our eyes understand differently than the mind; the blundering mind. If one can distinguish and choose, then it is the eyes one must embrace. Trusting as the sleepwalker, the world throwing itself before the eyes. There is but one light in the world, belonging to the universe; beaming from the galaxy, radiant in objects and things, passing through the eye, this way or that; give me your hand, like this. At first we see only the fabric, an expanse of smokish blue. After a while we see the movement. A body breathing beneath the cover. A body is a crack through which to breathe. After some time: a sudden adjustment of position, a glimpse of bare skin, not pale, but not the opposite, neither rough nor smooth. The eyes, borrowing and returning. The eyes borrow the woman and the hills, the sea, the trees, all that can be seen. The skin, according all movements direction; towards or away. A person is the only thing that can move a person. An absence of interest in nature as it is found out there, or perhaps an interest in what is human in nature. Nature's humanity, if that's something we can talk about. Where everything is a directed approach. What do we do with that which is _without_ direction? Emotion undirected, and a feeling of being left out, always. It's not combat; there isn't that much left to conquer, not in that sense. White flags. She remembered they had talked at length and with gravity, that he had looked at her with resignation and asked what there was to be gained. There is no movement in the frame. The woman in the picture.
She is lying on her side; we see her knees from above, the clarity of tendons. Still the syntax of nature exists, the sentence spoken: one voice among several. Or writing emerged from under limewash, now simply _there_, a gaping wound affording sight of a time other than the one in which you want to be. We see her body in its entirety; the landscape a blur in the background, darkness. The weave of the fabric, ripples of cotton, alternating dark and light, the shiny, skin-like quality of its surface. Metallic, like the sea's metal gleam in mid-morning. Stillness, because what we see has no borders, no horizon, nothing that reaches an end. Our field of vision draws the only boundaries, and they are all but imperceptible to us. Within the frame of our vision, the picture, all that we have: blue fabric. We come no closer, only the opposite—we are moving away. Moving backwards, losing the pores of the woman's skin, we lose the pores, the fair down of her upper lip that you discovered, the lines of her skin reminding you of some other age—youth, funnily enough, that couldn't quite be placed. One step at a time, backwards across the fields, upwards through the hills, stumbling, higher still. More and more dry red earth is seen, more and more of the earth's skin, less of the woman's. If you can tell the difference, then that's the way it is. The eye weeps because it is always losing something. Cities. Views. All that the eye no longer sees is lost. Rapid movements; the business of turning round on a step; of moving to the other end of the country; grasping a bottle of pills before it hits the floor, nodding and retreating a few despairing paces before sleep in the final metres; leaving furniture under wraps, yet another summer, houses, apartments, gardens, a street light's sad persistence, reading through all your messages before you wake. And still: the fact that only what we once saw is close enough to us, so close we can reach out and touch it.
We touch it with our wanting or with our joy. A returning wish to retain something or merely keep it _here_ a moment. Always the same exchange: what you get, and what you deliver. What the eyes get, and what they lose. A city to leave. My body as it was, an apartment, a city. _Before you wake_. The hills, or a jam jar with a single pearl inside. The details of the skin, the birth spots on your neck and the four pale scars on your legs after the thorn, that's how I think of it. With distance all the surfaces become more distinct. We see the skin as a surface, the sweep of a landscape, the fir trees a belt beneath the sky, the ocean a blue band keeping the sky in place, the glassy sky during spring. The city is a smear of grey on the peninsula. Extolled cities, how could they ever disappoint; what wanting does to what is wanted. You feel the relationship between body and land, as if it were sickness, you feel it, what it does to the body, the place from which understanding something begins. And the way the body is then an area, a surface, the way the fabric is another. There are no hierarchies, there are planes latticing like day and night. Plaiting one's hair tight. Distances alter when the eye finds a place to attach. What eye can see in such a way. We see a leg, a bare ankle. A brown sandal of the kind I had when I was a child. Flaxen hair like a bunch of flowers dropped in the sand. She sleeps and shifts in sleep. The trusting movements that occur in sleep. Even unnatural sleep, the sleep of alcohol or medicine, has something touching about it. Through her thin eyelids we see her eyes. The unsettled birds. 
Ice, twisting itself apart in the bay, the rhythm possessed by nature, seen from somewhere else everything occurs in patterns, there is a rhythm underneath all that is small, all that is horrific, the tiny hairs below the eyebrow, plucked throughout a life, to the eyes distance is not crucial the way it is to the person disturbed, to me, who is always disturbed by details and the seasons. The composition and the rhythm of all things is the same in the smallest and the greatest, distance makes the pattern clearer; my distance from you, today, as I pass through the city in which we met, visit the same café and generally; try to get closer to you. The rhythm of all things, not as a logical structure, but a sonnet or a tree or a symphony; simply that, something finding its proper place. A moment only, of falling into the world, standing on top of Stabelhøj Hill and leaning back against the wind, finding a point of balance there, only then to tumble once more. Standing there three times in the course of a life. Balance is no stable state, but disintegrates, the same way that the proper place vanishes, the light changing, now once more another, once more again. We have pulled away and see now her body in one image; perhaps thinking that motion away must halt before her body disappears, the way a pore of the skin disappears, burns out. A feeling of standing with your back against a mountainside, or with your heels on the edge of a gorge. The distance becomes greater, we see more and more of the hill. The hill, sweeping up from the sea, and in a corner of the picture, vanishing: this blue heap of fabric, the skin as an area of land within the interior. An object falling from a flatbed on the square. Four or five glances picking it up. A hand that does.
A hand closing around it, the way the dirt is brushed from it with a corner of a blue scarf, the careful way it's returned to the pile, as if putting a sleeping child to its bed, carried in from the pristine car, through pristine snow, to the pristine bed. Firetraps. The blue light that connects all things. The daylight's warmth enables the eye to tell apart the figures, the trees. It's not the light's intensity, but the light's quality, that makes the difference. What is possible, and what is not. The light in the blue hours. A thought, that everything is about to perish, the paint flakes from the vitrine like my skin in summer, the lines around your eyes and mouth, the day disappears. But here. Seeing it. The wrecked body, as it might be found on a road. That sigh as a body hits the ground, the air expelled from inside. The blue light is a chute, the day slides towards night. Differently in the big cities, it's to do with speed, the traffic dividing up the sounds in another way. The blue light enticing with the thought that we are connected now. We are connected. The blue light is a blanket that covers the day, a sheet drawn over; an eyelid drawn down by a finger to cover an eye. To close a room for the night. Switch off what is on. Then come back to check. Make sure. Pearls begin as grains of sand in oyster shells, later they must be stringed, or mounted on metal, or placed in a small, soft pouch. Loose pearls look so abandoned. Nuts released, exposed in a shatter of broken shells. Eyes without sockets. A single shoe, there on the sidewalk. The roundness that exists in the world is an expression of the simplest laws of motion. The laws that govern things that float. The spiralling circles at a drain. And the two of us. When the water runs out, whenever a thing goes to the wall, motion is rotary. Shape is a way of communicating, connecting, a way of listening. From all sides. The crystals that cast back the light, a pine cone drawing glances. 
The sphere or the circle is the strongest form. A shape that belongs to all things, in that way connected with winding down as well, united with the winding down of all things. When you let the water out of the tub and a body remains stretched out within it. Do you remember that. A core, that is more like a shell we fill up in order to see what we remember. Mostly I talk about something I miss, the days. Mostly I believe there is a form into which we two may settle. Or perhaps not settle at all. We are travelling at the same speed away from each other. Everything is moving, at the same distance from a middle. Something that may put us in touch with the world, on an equal footing, as it were. He says photographs can be viewed as a kind of frozen music. Mostly it's more him saying it that occupies her. The spaces bodies create together are also passages of a kind. The space between his body and hers is also a room, the room possesses shape. And is at the same time already a region of memory. Or at least it shares the shape of a region there. Magnetism draws the now out of things and connects them with a place already waiting in memory. A problem for the shape our bodies have found. Light is impossible to describe. Or—it has yet to be done. She stands at the lakes of Copenhagen, on a path colonised by swans. Grey, overgrown cygnets. It's that time of day, a fade into blue, like certain fungi when the finger indents the flesh, or bruises on a thigh. One could say the light draws a boundary, outlining one thing, marginalising another. That on which the light falls, and that on which it falls no longer—that which exists _without_. Whatever place it then may find. What language can be summoned, to describe that which exists outside the light. All that on which the light does not fall. With what voice may we speak of darkness. The facing light that discomforts us, the eye understanding that we are being conversed, or perhaps: not being conversed at all.
You don't think there's anything left to go back to. One closes one's eyes, the way a child closes its eyes. The action of lowering someone else's eyelids. Had he done so. On its descent the sun falls level with the eye and the window sills. She has left the city behind. The evening comes with warmth here. Her hand clutches her pocket, hanging heavily at the side of the chair. She places both hands on the table in front of her, between them. She turns her head, looks out across the sea. From the table this can be done. She has come here to walk and to gather up pebbles. She collects all sorts of things, and pebbles are a fine, fine thing indeed. As if noticing the unique character of some particular example, those aspects of it deemed seldom—and the very act of bending down over a pile of pebbles and choosing between them, the hand that reaches out and casts its shadow upon them—is reflective of something both smaller and greater at the same time. The things you started doing as a child, that you carry on doing after discovering they can't really be done. At least not like that. To keep something. Something round that lies in the hand. Gathered up, and solid. Pebbles change when removed from the beach. The way a person is exiled to some other place once something has _passed_. Or simply changed. What surrounds us vanishes when we no longer can be seen in its midst in a picture. One endeavours all the time, with photography, everything, drawing up lists, walking the same paths through the city, sitting down at the same cafés. To love the same things. To make love regularly. To keep a space close to the body like a necessary item of clothing. Her mother thinks her daughter would be happier if she got a _proper_ job. One that meant she could _see_ people, be of use to others, gain some perspective on things. But she sees people all the time, in fact she does little else, she tells herself.
The spaces in which we are enclosed encroach upon us and ensheathe us like the thinnest membrane. He thinks there's something touching about her when she sleeps; when you gather pebbles too. _Touching_. I am exiled here to this place and mostly I miss everything I knew. Seeing you in that picture, you and me entwined. You've left the city to walk and gather up pebbles. There's something touching about you, when you sleep too. Like a pebble held in the hand. Your body has changed, we have both become saggier in the flesh, bigger and older and more pathetic. All that sticks to us. Cat silver in the sunlight; and you discover it and fill your carrier bags until they can hardly be carried at all. Hardly enough. Can a person find pleasure in anything existing in the world. Some place other than this. Love possibly, or pearls. We see an image, a beach in autumn. The light is special, a singular warmth and milky light. We see only the beach, and hear two voices. They are talking. You can't hear exactly _what_ they're saying to each other. We note they speak with caution that perhaps suggests they don't yet know each other that well.

_(...) no, not very often (...)_
_(...) wish it on my worst enemy (...)_
_(...) dream about (...)_
_(...) that someone might see, or, you know, sort of (...)_
_(laughter)_
_Exactly, I know!_

They stare down at the sand, her short-cut jacket stretches like the canvas of a tent when she buries her hands in her pockets.

_(...) Sometimes I'm just in doubt as to whether I fit in. If this is me, or if (...)_
_(...) A bit like embers that are still warm in the morning, the day after._
_That's right, yeah._

Small beads of perspiration on the chest, in the groin, the small of the back. The body fighting against. He pulls the cover up around his throat with bony fingers. The empty duvet cover is heavy with warmth and the moisture of sickness.
He turns onto his side, and the cover peels away to expose his back, like when you separate sheets of dampened paper. He disentangles himself and dumps the cover on the floor, turns onto his back again, his chest rising and falling in complicated rhythm. He sweats. We see his chest, the hollows of the collarbones, the hollow of the sternum, descending to where the ribs part, the chest as a basket softened in water, willow malleable to a certain point, a bead of sweat collecting between the collarbones, then running down to the neck like a rope. He shifts his head on the pillow, another bead, trickling down his upper arm, travelling a path that follows the muscles exactly like a shadow, or like water finding its most natural course down a mountain. It soaks into the pillow. He's breathing, we see. The chest as it rises and falls. Another bead gathers. Suspended, it trembles. A hair, piercing the bead like a needle. A section of the hair is magnified by the bead and we study it. Any line is infinite, it's all a matter of seeing it up close, a stretch of coast, the contours of a grain of sand, the skin perhaps. She thinks about what the ring might have looked like. A man's ring. A man's ring would be simpler. If it might still be there, among the ashes, or if it vanished into the sea, a gleaming iris ring, when the ashes were scattered; did the sun gleam, did the ring fall first; under a cloud of ash the ring falls. The bead releases and runs down the back of his arm, into the shadows there, where we cannot see what becomes of it, out of frame. A moist trail left behind on his skin, the area of skin we can see. It's hard to say if the trail is lighter or darker than the rest of the skin. A surfeit of dust, a monument to slowness and cathedrals never completed, built only to stand and witness. Bulbous yellow of trees. The tulip trees are blossoming here, the spring is more advanced, you say: spring has arrived, you should see the light. A disorder in the beds.
A soft patch in the lawn where once was a tree, a yielding of the soil. A chest of drawers, panting green in the room's depths. What's that you're wearing. Rooms where the chair has been pulled out in that way, angled into the space. She pauses in the middle of the floor, in darkness. His voice, wrenching the skin from my frame in a single movement; imperceptibly your breathing has made a fine incision at the nape of my neck, and now you skin me. What do you want, the man asks her. The kitchen crackles, and light from the street enfolds the darkness, wrapping it up in its pallid slough. We view the scene from the doorway, the room is dark, or nearly so. She has got up to fetch a drink of water or tea, is what we assume. The question is how much to share, how much of a _patient_ one can be. How much _human being_ one can import into such enterprise. The sickness had rooted itself within him, his eyes were like that horse's we saw, the black mare down on the farm; if you looked hard enough into its eyes you realised the pupil was quite deformed, spongy in the way of coral, growth upon growth. In the right light you could see it clearly. I held my hand to my mouth, it was a warm day in September when we noticed it first; I rode home gently. It turned out to be nothing. So they said. It was as if his clothing—his shirt, his jeans—was what kept his body upright, displayed in that way, a thin sheet of skin drawn out over the bones. Faith is one thing when you're sick, another when you're not. The flies in the window look more and more like amber in the glow. "It's touching, what your parents are doing," he says. "They have to, they love him," she says. "They love you," he says. "Same thing," she says. What price is a person then prepared to pay. She holds the pearl necklace to her mouth, pressing three pearls inside like a bit. I don't know what to say to her, but I feel the urge to say something, anything.
The sky is streaked with rain, two different shades of grey, though to you—I imagine—they are alike. I miss you. Is it okay to say that. A single water sculpture on the square, water with no outlet, slowly flooding the space. Erosion of the ground on which the city stands. You drink as if alcohol were the answer to a very important question. A test that's been given. The inner lines and the boundaries between the fields. The transitions from one thing to another. Help is _near_. What lies closest to the heart. Some insane changes in the weather reflect in us, as ways in which we leave the apartment. When I place my hand on yours, or on your knee, I always get the same feeling, a feeling of not really knowing you. Had we met before, it might have been different. You ask me to do something, get some help, and place a hand on my knee. There's a common region for caring and prayer and wanting something. Always on the bounds of what is possible. The things you describe threaten to fall apart. To break. Pain can be traumatic because you discover a connection you thought you already knew. The fact of everything being joined and dependent on something else, of our being in danger together, each of us on our own. I kind of knew that, but it's the realisation of it and the fact of believing something only then to be caught out not knowing. Whatever else that might apply to. Brittle glass, small, barely noticeable stone-chips in a window, a small error of calculation, the slightest redistribution of weight, and it surrenders and shatters into pieces. You have to be as vulnerable as possible. You have to be as aware as possible. My body and that of the other. You have to look beyond yourself. In such a way you're forever on the brink of dispersing. 
Today, while walking in the Botanical Gardens, I didn't tell you, but with each gust of wind I thought I might be blown apart from you and you would have to spend the rest of the afternoon, and the evening too, putting me back together again; I would lie there like a wing spread out in the snow on the slope just here. You could spend days. Feather by feather, a dead bird, wing drawn open against the dark snow, the way snow is dark at the end of winter; the wing extended, the asymmetrical form of its stiff quills laid bare, as if pulled out of place. The space between the feathers looks like a negative of the shape, a larger, dark wing that in many ways seems more assembled compared to the wing's lighter spaces. The axes of the body, thought upon thought, what resides outside of thought and language resembles the language in reverse; and quite as diffuse is the darkness. The rhythm of the plumage, eleven sails. We entered the greenhouse and pulled off each other's coats and jumpers and undershirts. I turned, and you unhooked the clasps of my bra, opening it, and there we stood, embracing each other rather awkwardly. Later we would speak of the plant that was blooming there. It flowered for a few days, a week at most, and only once every fifty years, something like that. I could feel your chest very distinctly, rising and falling. I noticed how clear the veins of my body appeared in that light. Or my pale winter skin. I remember wondering if you could sense that I wasn't breathing, and then you said: breathe. It's something you say to make me relax, you claim. Maybe it has to do with your fear of me not really hanging together, that I am already dispersed, spread out behind you. All the living and all the dead. How to make room for oneself in such a world. Deposition is a term used to describe geological material deposited following _transport,_ or as one perhaps should understand it: motion.
Three different types of deposit are distinguished: aeolian sediment, consisting of wind deposits; fluvial sediment, deposited by the flow of water; and lacustrine sediment, comprising marine deposits. Glacial sediment, morainic deposits, chemical sediment, salt deposits. The first image is of the town. It looks like it once was lashed to the mountainside, and now remains there out of something like—stubbornness or oversight. The mountains couldn't care less. The mountains breathe and are blue unreal. The mountains' hearts possess will in the way of our own. An interchangeableness, becoming clear as evening arrives, as morning does. And the ocean: the way it lies there at the foot of the picture. Silenced by the morning haze, which is insistent and—like she—indifferent. The boats as they lie waiting in the bay. Their sounds; waves lapping against a hull, chugging engines, rope that slaps against the masts, drifting in over the narrow shore, the quay, the main road. The sounds that cut like blades through the harbour; the arid earth, blue mountains, an echo lunging up from the sea, into the landscape. Chopping hoofbeats of the boats—voices of vessels. The dry moss that yields under her weight, her feet as they are placed, the roll of heel to forefoot, sandal straps as they stretch, the foot raised again. Fig trees, hugging their fruit to their frames. The sap that leaves the body and the thoughts, now only these empty pupa remain, to dangle like lanterns—here. The lips of witnesses have been sewn together. The violet crowns of the trees; night's violet teeth. The distal joints of the fingers becoming loose, thoughts, becoming loose. A human being; that one should be lying there. In the picture, a heap somewhere in the landscape. Slumped among cypresses, occasional vines, spared or forgotten. The keeping of something, close to the chest. He has already felt most of what she says, perhaps even touched upon it in thought. 
An organ, a glossy liver—we breathe on it and wipe it clean. A connection between the pores of the skin that hoods the nose, extends across the cheeks, and the starry firmament here, these splashes of red on the bench, a simple lamp switched on, a mere socket dangling from a cord, a round bulb. The light inside the room, the only light on the property. Everything she can see from _here_. The tree is older than the rest of the world. A sound reverberating back to us from the time they cultivated the slope. Speaking from _there_. An olive tree. And beyond the sea's blue tongue as it swallows the strokes of the boat engines, beneath the surface of the sea—a crackling distortion of sound, and the song of pearls: a person is lying there, among the shadows of the cypress trees. It is late, the shadows are longer than the trees that cast them. Accounts of that kind, a balance sheet. The slender defence of something perishing and something else remaining. Small houses, refuge for the yearning. The blue heap in all that dry red. Between the town and the sea. A blue patch in a belt of burnt colour. An organ, the liver of the sea. Lying in that way, motionless. That's how we see it. The sound of the boat engines tears no hole in the haze, their endless chugging endeavour towards the world. The haze—a soft, devouring pillow. We observe its consumption, the passage of prey down through the throat, the thin skin that gives it away. Nothing exists that can sway such a world, nothing exists that can sway me anymore, she thinks. Her hand in the sand, fingers sprouting dead and dry in the sun, tiny twitches, something dead, pumping life into something alive or— the fact that all of this exists _at once_. This is making you ill, he says. He sits on the bed and gathers her up, the way you pick a pair of sheets up off the floor, and arranges her in his arms. That's it, that's it, he'll say. He has a bottle of water; he makes her sit up. 
Her eyes are glassy, anaesthetised, their numbed expression, as if the whole eye has only one colour; as if the black, the blue, and the white have been mixed together into something like the colour of dust. Dust, absorbing all distinctions, annexing and appropriating all _things_. You see the water in the bottle, the rings close up, and you see it is a thing of beauty, all too easily overlooked. The sea consumes itself again. Don't be a child now. You see her dusty eyes, as if they are succumbing and will die. And she drinks; I acquiesce and drink. I'm on my way to the beach, I say. You're on your way to the beach, he repeats. He sighs, and turns his face to the sky. I'm on my way to the beach, she says. Yes, he says, you are, but we need to get you home now. You're not yourself, you can't be on your way to the beach, you're not even here. She is loose-limbed, his cautious rearrangement of her body makes her head loll like a baby's. Her mouth is open, her upper lip retracted, millimetred back, exposing the white enamel of incisors, you think to yourself he'll remember this image for a long time; you think you'll remember it too. He nods towards the car parked by the side of the road. It ticks in the heat. The door on the driver's side is open, a broken wing beckoning. We need to get you there, first we need to get you there, and you have to help. Help yourself. Her eyes keep closing. You see her lips up close, you see an eye. An eye, a pair of lips, filling the frame, are all you can see. Her visible breathing—lungs, and skin. The breathing of bodies together in sleep. Birds sail across the sky, dipping and diving, weaving like lengths of cloth being folded, a fan unfurled and opened again, one side then another; sunlight, bird shadows cast to the ground, agitated blurs of darkness smudging the land below, the way the birds themselves shear the light, the landscape beneath their fragile frames; the towns. Our fingers almost touch. 
The two people as they sit on the naked plain. He holds her still. Holds her the way you hold a person you know you soon will miss. Points of contact are a way of breathing. A finger becomes a mouth as it touches her skin; a mouth that breaks the surface of the sea, to breathe at last—that kind of feeling. Wide expanses and shining surfaces make us truly fearful. Being unable to find a place to latch on, find purchase, being unable to make any kind of decision at all. A point of departure. It's hard to see how breathing may be shared with a clearing in the forest, or any kind of nature. At the same time—all surfaces breathe, and one may be encompassed by their respiration. Basically, there is always some way of _connection_. _Objects in mirror are closer than they appear_. Basically, there is always some way of breathing and surviving, again. New image. A shore, the sea behind it. A leaf-green veil of summer shrouds all motion, muffles all sound. Two people enter the frame, we recognise them, the man and the woman. They walk along the shore, together, as when we left them, but this is later. He has drawn her body slightly closer to his. We observe the motion of their bodies, the way they gradually move closer; closer still. We approach at the speed of her body, the speed at which her body moves closer to his. New image. Again, the shore, the sea, the sky—this only. There is a great deal of sky in the frame. The heat. The picture shimmers, like when you get out of bed too quickly and the blood drains away. The shore. Where she was _meant_ to be now. If it had only been a matter of—_will_. Like a word running on ahead, the way a blaze takes hold. Trying it out for ourselves. If it had only been a matter of will she would have been at the sea now, and alone there. _Here_, like this. She sighs.
The waves lap against the sand, a knife scraped across a table, a lifeless layer of solidified candle wax lifted up, white froth, absorbing into the sand, leaving behind its various remains, soap bubbles or lace, tiny organisms that vanish down among the grains, a smooth surface, smoothed like a sheet. The beach is a new-made bed. Smoothed sheets, tongues of silky wax, immaculate as a friendship you're not sure if you can introduce to another table or even—another part of the country. A frailty, reminding us of something we haven't quite the courage to admit is us. But then once again it's us, thinking like that; once again our own train of thought bleeding into images, voices. The fingernails of winter are short. They have travelled from the winter. The hollow scratching at the doors that is winter, in all its tedium, winter still. Often the sun is a human voice that addresses you. A letter that keeps on returning to its sender, who once more turns out to be you. The will of the skin, the will of the planets. To have a function or occupy a space that is given. The night is an unprotected place, like an unexpected clearing in a forest. When later that afternoon you tell me I'm your best friend, I think of a deer wishing to cross such a bare and treeless place. You smile and say it's okay. That kind of unprotected. If you don't feel the same way, it's okay. If one were to give the night a voice. I feel the same about him, but in a way it's worse. It's still a question of whether it's a kind of crime—reading so much human into nature. Whether it's our fate to do so. The seamless movements and transitions, the friction of the bodies' joins. Everything started with the symbiosis of cells, the way they combined with bacteria that could survive the oxygen, it's almost the same principle on which poetry works. And us. In a way, we're already back at the start, as ever. The problem of movement always having direction but terminating in itself.
To enter into a symbiosis with the self again, in a new and surprising way. Eternal rebirth. He seeps into the ground, his arm dangles from the edge of the bed, legs eaten by light. A shirt hangs brightly in the wardrobe. The thought of having a brother you never knew existed. A wish to be found. Weighed and measured. Who do you look like. Any movement becomes a movement towards her or away. My paternal grandfather burned his thesis after it was rejected. The only thing I know is that it was about Selma Lagerlöf. _Jerusalem_. One thinks about the fact that some people can gleam the same way as the pupil in a lazy eye reflects the flash of a camera or some other burst of light. Just the eye. A former classmate they'd nearly forgotten. At first she thinks the darkness belongs to the wall, then realises it to be human. All the time, a new past to recall. We see her face. A panning shot, a slow, vertical sweep. Our gaze moving downwards upon her face, our eyes passing over her, as over a field, or the way some stories trickle down through a family. The pores of her skin, tiny dots or shafts leading inside her. The fairest down. Too many have died too soon. In my family. We see her eyes begin to moisten, visible capillaries. They make a map, rivers entangling in a rhythm we cannot comprehend, tide and rain, seasons. To listen to the slightest shifts. Snow makes a sound when it falls and settles, as it becomes compressed, as it wanders through the various layers of the world. Crystals grow. Blood has sound—when the body is punctured, you can hear it sing. Pain is a general term for the feeling that arises when seeing the person inside you vanish from the body. Thoughts are a comb you can draw through the body. Our eyes are fixed upon her, and insistent—they do not rest, but are constantly busy. My sister said something one day that made me wonder if she thought you could get stuck inside a person if you stared at them for too long.
I thought it naïve, but now I find it more and more likely to be true. Various fossils. You tell me you saw an exhibition in Sorø, and that you've seen the oldest _object_ in the world. Older than the universe, you said. I remember thinking it naïve. The way your eyes gleamed as you spoke. Our eyes move slowly down over her face, panning at a speed that seems so very human, the speed of the body, painstaking and cautious. Eyes are hands. Fingers are the gaze. We see her eyes, her eyes are in the middle of the picture the whole time. Eyes are at the centre of what we see all the time, ever a centre of something. We see only one detail at a time. It's almost impossible to ignore an eye. The surface becomes taut and shudders; we see reflections of a sky without us. We are moved. The sight of the sky in the eye _moves_ us. Like teeth that split in the mouth and double in number, and yet at the same time become: something else. The mouth becomes another. Separation does something. The body is a comb that can be drawn through thoughts. The body is continually changing into something else. Another body. The fingernails of night are concealed in the sleeves. A thread connects the bodies. _We are not here_. Little messages and food between us. An exchange of something. The bodies as rooms. Her mouth is dry. She spreads her legs, he can see right up her skirt. He lowers his eyes and leaves. She gathers her legs again. The night sky may be seen as a weapon, but everything can. She drops a tray of plastic beads that spill out in a circle around her. The glorious child. Locked rooms next to your own. What do you want here. She stands on the balcony, leans out over the railing. The tall buildings opposite reflect in her eyes. We stand slightly left of picture, looking in at her from the side. The canal has been frozen for two weeks. 
Yesterday the ice broke up, and now the boats are sailing again, through a carpet of ice, shattered windows or the ice as verdigrised roofing sheets that burn white and chink like bottles, yet another street of frost, the canal's long train grating, slipping under and over the cape of ice that casts back the sun in every direction. She stands on the bridge and stares into the water. It runs underneath her, and the wind blows. Sticks and twigs come and disappear. Garbage passes, an expensive, relentless gloss of plastic, she holds the railing tight, there is a light and she is standing in the sea in summer. The water reaches halfway up her thighs. These threads of light twenty centimetres down, ribbons of tangling gift-wrap, and the sun drilling into the ribbed cheek of the shore. You wonder how she can stay upright there, how she can avoid being carried away by the river. The release of a hand, a moment of imprudence and then the current, gripping her. The movement of her hair. There is no sound, the image is enough. To keep something together that can hardly be kept together, hardly reach. The sound comes later. A black screen with sound—her breathing and the murmur of the channel is all we get. Your heart is beating fast, she whispers. Yes. She crawls around him, trying to _gain access_. Like a burglar or someone else, wanting something. Please stop, he says. She lies on her back, stiffly, the way you do in the sea when trying to keep afloat. A thin film of oil on the surface. The square flooded, and everything enlarges, a biggening of space; the sky is the excavator in a city expanding. The sound inside an ear. If you ate it, the way a sheet of paper can be crumpled in the hand—whose winter then. The bush that scrabbles in through the broken pane; the sprouting floor. That's how they stand. We hear the waves. The movement of her hair, like a voice in water. You look like you're lost in thought. The way you're standing there. In that way. Lost in thought.
His eyebrows are bigger and more and more like plants, her gaze is made of wood. The tension of the bentwood chairs, the way they curve, like love that stays the winter. The idea of surviving oneself. Two beams of timber thrusting diagonally through the space to keep him upright. The piercing gaze in his back that prompts a person onward. Or a look that binds you to the table, staples your feet to the floor. She looks up at him. Raises her chin. Slowly. As if the shadow beneath her jaw is a broad band of dark elastic that splits and tears when pulled taut, quivering, jawbone sharp and salient. You're wasting your time, she says. A voice in the room, saying just that. Her voice, and yet from another time. The ice-breaker lies still in the harbour, or rests in its steel-limbed cradle in the dock. She doesn't phone her sisters—it would only ruin things. The sea, swallowing all. The sea, making everything its own. The damming-up of the outermost fields in those years. The sky, swallowing all. I can't go on, he sighs. It's a way of giving her a voice, is what she thinks to herself. How can you say a thing like that, she asks him in sleep. She is a body, salvaged from the sea. She has been without air. This is what we understand. She splutters. What were you doing out there. Or—what were you looking for. Can a person bring the night up from the bottom of the sea. Can the night be transported into other rooms. He nods. He understands, she drinks and brings up a clod of the night. The night, swallowing all. You'll always have me. Making everything its own. The descent of certain sentences into episodes of history. A pearl inside a clam. The sea, being the possibility of a hundred thousand pearls. A bit like the two of us. And then not. Have I told you about the hills. Yes. And I can see them, he says. He hesitates— the way you can, he adds after a moment. She nods—but, she says: have I told you about the hills. The way they superimpose like faces. Or days. 
We see them from above. Skin becomes more skin. Everything is a question of distance—if you get close enough everything dissolves, and drawing back again it comes together in new and different ways, it turns into something one can miss, of which one is _really_ fond. A gradually increasing distance between one thing and another. An uncertainty as to direction, as to what is moving; who is seeing and what is being seen. A feeling of being witness to something vanishing. The eye weeps, its constant loss. His legs are bent like his thoughts, bent around a very small point or an eye. The body can be seen as an embracement of air. Direction in all things. What will you do about me, he thinks. His stomach contracts, as if she has gathered together his organs and carries them now across a precious rug. You see the shadow. The two arms and a bundle being lifted. The body expiring like light and discolouring all things. The air can be seen as an embracement of the body. The moon behind us. The night, making everything its own, swallowing all. What can be said of darkness. The balcony plants have dried out while we've been away. Dark wood, submerged too long in water. Birds pecking in the pots. The green dill, like hands that clutch or else let go. The lavender heads drooping on grey stalks. Something red, glimpsed as you turn, gone when you look back. Pale furniture, a thistle in the rearmost pot, nature as a kind of darkness inside the city, wrapped around us like a cloak or a shawl. Nature, making everything its own. The moon behind us. Gristle, when cut with scissors, him shrinking in that way after the transplant. Face fallen in, as if immersed in a book. I don't know, she whispered one night, I feel so low. Not having been there to comfort you. Sentimental, he said. All ruins remain intact. If you glance at them quickly, then look away. You see things the way they were. 
At first she thought it was one of the cello's strings that had snapped, but on closer inspection it was the instrument itself that had split open. She made a joke about it being his father interfering. But he was actually cut-up about it being broken. She suddenly remembered once having suggested to him that he made a little box, in which to collect some of his father's things. The sister-in-law's brother commits suicide. It's a month ago now. She says various things about it when they visit her in their new apartment on the outskirts of the city. They ask her how things are. How are you coping. But she replies without saying anything. While preparing dinner. I hear myself saying it probably won't get any _easier_ , not even with time. Some objects might need to be coloured so they can be seen properly when magnified. An experiment colouring a landscape and moving some of its elements about. Moving some rocks and scraping some soil aside, for instance. Making some order visible to the human eye, altering it. The difference of disturbance. Some words, entering things and changing them from the inside. Entering people, changing them from the inside. Time should be understood like that. History's medium is the fragment. The fact of something being moved so we can see. Alterations of form. The different speeds of different places, their different movements in time away from a geographical centre. Beads spilling in a circle around her as she drops the tray. Form is a way of recognising time. The organisation of material as a prerequisite of understanding anything at all. I.e. that's where it all starts. Regardless of his own condition, man is always emerging from a form, and must exist within—a form. And then another. To mark time. Alterations of form become crucial. If undisturbed within the form, one remains young; she, suddenly, is older, now that he is no longer there. The eye weeps with the loss of what it is accustomed to seeing. 
They huddle around the table like an iris illuminated by a flashlight. A contraction. In a way, the only difference is the scale and the sensitivity. You can only see one thing at a time. Watermarks. The perspective you then select. The three-dimensional image requires an open viewpoint, one that remains unfocused, or else one that focuses—on a point beyond the picture, exactly as in literature; the structures that become apparent appear to us with voice and a form. The eye's most immediate urge—to see several pictures in one— has to be short-circuited. Slabs of time, settling as field upon field, or as clouds. The man and the woman huddled at the table, the iris contracting in the flashlight beam, as if the boundary of light and dark were the boundary of everything. Simple, self-dependent images and double exposures. Nobody then forgotten. A white-hot coal. She puts her fingers in her mouth and goes about like that for days. We see her as a blur, a figure at his rear. Mostly she is a body, we see her like that. The wall is latticed with shadows cast by the timber of windows. It's hard to tell whether light or darkness is falling into the room. A voice. The ocean, brought inside. Carried from the bay, into the town, across the parched lawn, passing through the branches of the fig tree. Passing through the branches of the olive tree, the lemon grove. The birch. Transported through snow and summer, to slip between the slats of shutters. I miss her, she says. Men, taking on the burden, bearing her on; his eyes, bearing her on across the narrow streams, over the plain beneath the sky's heavy skin. Her eyes— skin contracting upon her body like boiled wool. She is cold, and yet she sweats, perspiration seeps from her pores, a crystal rain of coldness, beading and trickling. Both of them tremble with rage, shaking—why are we doing this, what are we doing here; it was your idea, they say in turn, in different ways and with their healthy bodies. 
Yes, he says, and sleep descends upon him like a guilty conscience, that's been hidden away. She toys with ideas about being gone by the time he wakes. Only then she falls asleep, and will not wake before him. She dreams. If only the narrative of dreams could be suffered by others besides the dreamer. You would see then. If that were the case, you would see. The hills bunching up the landscape, the earth here, the grass. Perhaps nature can be viewed as a blanket over something more real. Beneath the grass, beneath the outermost mantle of rock, inside the smallest droplet, a world undistorted. Beneath all the reflections of something else, a place to grab hold. Something firm, as wanted by the eye. Towards evening the hills turn blue. Beneath the skin a body more real. From under your hand I might slowly be revealed. Albeit to your inverting gaze, or something—your eyes are like two pearls upon my hip. To lie still and cower in the hedgerow. Pain cannot be divided and cannot as such be understood. There's no language for it. In that way it is divine and yet a problem for music, for art, and for people by and large. To come back to a locked room that turns out to have been emptied during the night. Or day. The idea of _not_ losing one's bearings. That crucial moment. Some nails that are held in the hand and retain their coldness for a measure of time. Different spans of time and the relation between them, the distance between two points. To be of _general delight._ It snowed, and the island became frozen into a sea that joined it to the mainland for months. She told no one, but walked out into the white that lit up the woods from below. The cover of snow speaks to the sky, as if together they possess some knowledge they continue to share, in that way to remain as one. A language requiring no translation, like a hedgerow connecting two places in the world. January.
Bells of frost beneath the horses' hooves, compact snow wedged to the iron shoe, the frog of the hoof blued and fraying in the freeze. High walls balanced on the branches here. It snowed, the way it had snowed for days, weeks soon. Feet kicking up their fans of powdery snow with each step. The darkness unrevealing of such detonations of crystal. The crystal shares much with literature. Material held together in a particular pattern, determined by particular rules. Structures repeating everywhere. He can see that, he says. It makes sense. She remembers the snow consumed her tracks and that she was unable to find her way home again. Trudging, then to pause and listen to the sound of her breath, which in turn startled her. No way forward, no way back. Like a year suddenly past. Or just a summer. She remembers she gave up and thought of a farewell scene, a parting from her family and lover. She recalls being surprised at who turned up in her mind. How many were present, and the way the snow settled in her hair. We come closer in a single seamless movement—a hand lifts the long, dark hair of a girl aside in order that we may see her face. A sick girl, draped over a toilet bowl, or a beautiful woman bent over a bed—hair swept aside. It is with the slowness of the hand that we approach the man's face. We see the stubble of his beard. No one has any use for a sick girlfriend. No one wants a sick girlfriend. His stubble is too prickly to be pressed against a face, is what we think. Visible millimetres beneath the surface of the skin. His eyes are so dark. He is despairing. We have no idea why, all we know is that this is the case. Despair at her condition, at the two of them, that it should come to this, this point in the story. Or at himself. It makes no difference. Reproaches. I don't want to go on, I can't, I don't want to any more. Who are you looking for. The sounds she makes at the bar cabinet, on the tiles. 
She is standing still, but the sound of her feet crossing the tiles has been delayed by the image of him, the sound of his beard as it grows. Her hands passing over the bottles, not this one, not that one—as if there were a choice, as if it mattered. Then the sound of her footsteps, and the sound of cognac sloshing inside the bottle. The two sounds combined. And ice scooped into a glass. What do we want with our bodies. This perhaps: to wake up again and have given them away, swapped them for something else. Catastrophes, violent, near-sickening _reorganisations_ , accidents. She is tormented by the feeling of everything erasing itself. The water, when almost gone from the tub. Concentric rings, fungus spores on fruit. White horses you have to follow. You say you will not be destroyed here. Or repeat some pattern that isn't even yours. Or that you can't bear to see me imitate my mother any more, the whole time. But then it's my mother who has taken over my body. Who puts herself behind your eyes, helping herself. You fold our clothes to please me. You make an effort with something. I am exempted. I don't know who we are protecting by all this. You, I suppose. What is strong in the world is forever on its way to not being strong enough. Termination everywhere. The seasons phasing out. That protest that exists in nature too; spring coming round again. The body's recollection of rhythm, the yearning for another state. I miss the repetition of you, mornings in a certain place, always, a certain way you had in sleep, at once troubled and unconscious. Standing behind you in the bathroom, seeing my body behind yours in the mirror. Or that twist of your body turning over on your side. As if you became stuck when lying in that position, as if the skin refused to let go. I imagine it will cease, and have already begun to miss it, though I am unsure what to make of it. Cold feet fingers horses scraping at the frozen ground. 
Everything ices up, and they're skating on the lakes. You not liking Berlin. We travel to Boston together. The cities we leave destroy us slightly. We've left a part of us in every place we've been. The light comes in easier now, yet drains away from us so swiftly. The things you have to leave behind. Abandon. The eye weeps for all it lost. Cities no longer there. What are you supposed to do. Before long the inshore waters will turn to ice. I keep thinking everything's different now, firm ground beneath my feet; another, for the last time. It all remains here within us, the lost is like a hollow chamber, a monument, changeless as an echo, a grief that goes on _for ever_. The only city that can endure is the city that crumbles. The only firm ground that exists is the ground caving in. And the loss and the grief are doubled as such, and for all its luminous humanity it seems so very much _not_ of this human fabric: a withered lilac, one evening last week when I was home in Mols. You can look at a withered lilac and feel convinced that from that moment on nothing more remains to be said about life and death. Always losing something we love, something we are. Again, we have lost what used to be, and yet are none the wiser for the loss, the _lesson_. None the wiser, nowhere near _changed_. Still just a person, grieving over everything that can be remembered, a person _believing_ , a person not living in the present world. But then— refusal. Not wanting to be a part. On those conditions. Autumn, simply, the vanishing of lilacs, the smell of soil after rain. Firm ground beneath your feet; the only firm ground that exists is the ground caving in; the only city that can endure is the city that crumbles. Only it remains. Nothing has changed, but everything is lost. Not a single useful insight to be noddingly embraced, then worn like a shiny medal. But still—the fragrance of the hawthorn, the fatigued brown violet of the now definitively withered lilacs, field upon field. 
An image, and another, and the two of us together, wanting to share so much before the inshore waters turn to ice, before the winter is _upon_ us. Between us. The sun lays all things bare. The fact of the wallpaper having come loose, and your skin no longer being the same. Too much sun. I love you, but I'm disappointed that— I love you, and I'm disappointed that— And you— And us— Maybe we could, maybe _I_ could, be _here_. With you. Yes, you. The characteristic restlessness of the voice. You. The stairs turn towards you. Feet on this day, with no more snow. But no more water in the rivers either. Cities of quiet, slender women. I never knew before that winter took so much away. Or rather: takes. You say you need to live in one place. You emphasise _one_ with a gesture, a downward cast of the hand. I laugh out loud, because I've heard it before. You stole it from me. That knowledge. Or assumption. Ashes to ashes, and so on. They're burning off the fields. It's against the law, that much at least she knows. Everything unlikely collects together, a fireplace scene in which we gather in a knot of—_emotion_. Maybe the disappointment is hardest, the struggle to believe—and then no longer believe. Nothing new being gained. Nothing old being lost. Only the self again, nullified. Whatever you used to be, it disappears, that's what it feels like, everything reduced to tiny, including the feeling of there perhaps being some meaning in all the madness. The feeling of standing here on the corner where years ago we met, me in raptures at your sloppy appearance of which later I would so helplessly tire, and later still miss doubly, to the power of two as it were, yearning to even feel something at all. Apart from nostalgia, or reconciliation perhaps. No longer being affected.

SAPPHIC FRAGMENTS

every single day if not now then you in the light larger than any of us. for you. Or your sake all that beautiful glass cupboards and drawers.
What we have more than either here or we know. after the war and the winter who the winter garden and I think I can live with that. Or the after the war comes and you really believed it? winter again returned to them impossible not to wait for you here. Do you think scratching cement from between the stones you brought and wanted me to We divide everything into two equal piles the flagline in the garden help but believe me freezing, I've just arrived "dead" The weather is fantastic, autumn is actually and pulling down on the branches, everything in cold light. and then I think, no one about to happen. For both of us. joy and relief the first But what I'm saying is no party, so throw myself out I don't really know. met. But for Are you doing that too, walking on tiptoe a year ago now properly talking and holding me tight at the same time shovelling coal into your stove, struggled to open the door cold water and looking like wet hadn't the courage and convinced myself that you knew. current the shore. Scrumping apples in Bogens recently left me in the same state as compassionate as otherwise winter, if not that, then the sun is setting, the shadows are in some way more of the treetops and stitch by stitch, slowly sedateness of the trees, becoming her slowly as in the sea. That was the point she picked up the scissors he said with a laugh pity and envy in equal part, shadows and golden light. after all the trouble he'd caused us sprouting on the beds of drained pools as if we'd never Your dreams are not your own, your skin hands that Warm bedrooms, the feeling of not needing After our bath we lie like never need if you can make it so if you think so remains after the snow has melted. count on it. You being there. if. like a formula for it in the darkness as planets or snails, them. of things. _keep the different horses in different stables_ not _here_. it never happened nice all in one basket

OF DARKNESS

The setting sun.
The way the light at first seems to dip down and coil, then launch forward to gild the landscape from a standing start, commencing at the far end of the fields where the hedgerow runs and the woods begin; gentle and yet enraged, like the seeming coldness of white-hot coals, or a seeming attention to matters of detail that is actually disappointment over some very basic states-of-affairs. The way things _fit together,_ the way a passage of events draws something through the organism, summer autumn winter, the rhythm of the flesh, and the displacements that may also occur. The holding together of something, the hanging together by spite. A skilled carpenter whose box joints are made with such accuracy as to be quite as strong as the solid wood itself. The feeling of the sun and earth coming together in the same way as two people. The fact of not _understanding._ The body is the corset that keeps the thoughts in place; neglect the body and the thoughts withdraw, they seep away imperceptibly, the body undoing the ties, removing them from their metal eyelets; or the thoughts seeping away, tightening a frail drawstring in retreat, a string that eventually succumbs and breaks. The girls stand on the riding ground with their horses. Seven girls and seven horses. The horses have been walked with slackened reins, now they lower their heads by turn, snorting muzzles in the dirt, a looseness of gait, the sounds they make. Nipping at grass. Flap, flap, muzzles flapping over tarnished teeth, the muscle of tongues, the rigid bristle of eyelashes seemingly inserted physically into the lids. They lead the horses around, stirrup irons flopped over saddles, drawn from the right, across the leather's gleaming seat, to dangle on the left, and likewise from left to right. The sun upon the black leather, a girl untangling a knotted mane, the thickness and stiffness of the hairs. A saddle scratched by a tack-room cat sharpening claws against the leather. 
We see the horses with their riders, a girl and a horse connected by the reins. The horses led around the exercise ground. We watch the weary suppleness of their movements, the way seven pairs spread out over the pale oblong landscape. To all sides: fields extending like tongues, only a long, gravelled intestine cuts through their tossing contours, connecting the oblong with the stables; the manor farm at the end of the tree-lined track, the way it stands resplendent. A horse lifts a hoof, the elegant bend of the pastern, the elbow, the joining of the animal's various parts, the seamless movement from the joints, the stretch of the tendons, the contraction of muscle. Like planets, the horses disperse with their riders, their spreading out is the only movement in the frame, a symmetry to which one can only acquiesce. Some sounds—birdsong, farm machinery at work somewhere in the fields. A flaxen fringe swept from a moistened brow, a girth loosened three notches by a practised hand beneath the saddle flap, two girth straps at once and then the third. Quiet chatter that becomes particular by virtue of the sun's position in the sky. The movements of the horses, the body of their flesh, the spaces between them. The sun touches the horizon and ignites the fields. Lengthened beams of clutching red, the narrowest steel impacting on the eye. At once the light is changed. A complete and simultaneous upheaval of all things, the sun powering its rays in every direction, as if they were arms thrown up in helpless surrender, only more vigorously, more elongated; the sudden coldness of everything, the emerging darkness that clutches at the girls, clutches at the horses, the painted oil drums and the striped poles, the helmet dropped in the grass. Next, bodies are seen propelled, a few centimetres, twenty, fifty centimetres in the air, outstretched fingers, teeth bared and revealing of darkness. An abrupt detonation. 
Yet momentary, so brief as to be silence; and seconds later a turmoil of jetsam; the bodies of the girls, their open mouths and half-closed eyes. The wrench of the horses, a diagonal motion through the air, their long heads tossing back, seven forelocks unfurled like fans, a hock that nearly touches the ground, the outstretched forelegs, a tightened rein wrapped around a wrist. And then a sudden turnaround, as if everything has reached some saturation point, the apex of the upward thrust induced by the blast. The bodies of the girls then dashed to the ground, sprays of blood, festooning from a head or a stomach, trickling from noses and mouths; time altered, knees striking the earth, feet twisted awkwardly awry, a hand dragged through the air, or fanned out on the sandy earth. In this sudden downpour of death, an opening of the heavens, the bodies of the girls fall to the ground; and the hollow sigh of all things, the landscape, the arms of the sun drawing back like fingers retracting into a hand. A meltdown of day, and of the light. Next, seven horses are seen, walking quietly about an illuminated oblong of ground in the midst of darkness. Eight floodlights are directed towards the area, their beams long and identical. The gait of the horses seems laboured and encumbered, as if they have traversed a very long distance through inhospitable terrain, searching for water or some other release. All unbroken expanses may be places for such release, perhaps even some kind of serenity. We realise a time has passed, that there is already a resignation about the wanderings of these animals. Around the enclosure, back and forth within the enclosure. Criss-crossed paths with spaces in between. The way the planets drag with them their moons, this is how the horses drag the cold frames of their girls. 
Reins wrapped tight around wrists, hands a blood-drained alabaster, fingers stiff and crooked as gnarled sticks of arthritis or hearts stricken with jealousy, racked and immobile, veins and arteries raised blue. Of the night, much remains to be said. It is a task only for someone who can withstand the light, the glaring artificial light that floods the enclosure still. The horses go about their business; there is a flexing of joints, a casting of shadows. Of darkness, much remains to be said. Of the fields too, and the darkness of fields, their night. And of the horses, the horses of the night; of them, much may still be said. Moreover: the girls, the darkness that settles upon their alabaster skin, death so finely powdering the flesh, the green-white blush of death; the pale red of the lips. She visits him again, for the first time in a while. They talk about that. He rocks gently, backwards and forwards in the chair. He's a good friend, she thinks to herself. He says he feels no need to fall in love again, that it is past now. After her, love is past. When he goes to the kitchen to get two oranges and some chocolate for their trip into the hills—before they realised they had no time to go to the hills, not that day—she noses around in his living room. The room is so very old. It's the first time she's been to see him. She passes her fingers across the spines of some books, the frame containing a photograph he took, and notices a bowl of withered fruit. Three peaches and an apple, their shrivelled skins, like dulled and sunken cheeks. She thinks it to be the saddest thing she has ever seen. Fruit, sapless and diminished, consigned to bowls of oblivion in the homes of abandoned people everywhere, broken people who yearn as yet, and who will continue to yearn in time to come, perhaps even forever—there, in such places, fruit is left, to decompose and slowly rot, though never quite to vanish. 
And there it remains, an organic timepiece measuring the hours from the first wrench of grief, when all things came to an end. It's as if these people wish to be reminded that everything has broken and come to a standstill; or else that life goes on, albeit _without them._ And they themselves: the advanced age of the fruit becomes that of the body, its deterioration a correlate of their own organism. The grieving body and the dying fruit. The dying body's celebration of grief. Love becoming solicitude and a diligence as to _decay._ Another friend's oranges, a Cox apple. The mattress is on the bare floor, everything looks like it's fallen down; books piled all over, a table-top deposited without its legs, the shelving just five more or less horizontal lines between uneven rows of books. The sloping walls of the room cast shadows; the busy blade of the scissors. It's morning. Like the feathers of a wing, the books lean first one way then the other. Plants with their pots broken open like petals scattered on the floor, the white roots extending their pale and sleepy capillaries, soil spread about a core; like her heart, the core of her warmth and the occasional sounds that issue out into the room that encloses her body. Her body, pumping warmth out into the room. It must be morning. You can tell from the light—soft, the way a body can be soft, an organic, fleshy light that does not stream into the room, but barges its way in, breaking things in its path, denting the thin partition walls, pressing the duvet flat as a frightened dog that cowers on the ground; the changing nature of the seams, from plunging indentations to these looser threads that strive towards the cotton like shallow water thrusting on to shore in windy weather, a shimmer of undulation in all things. She turns her head, though strenuously in the light, as if the light occupied the room like some thick transparent gel obstructing every movement. 
The pillow retains the imprint of his head, the duvet cast aside, its corner turned down like the page of a book. As if to remind of something other than how far one has come, something more important that one (again) wishes to prevent oneself from forgetting, dismissing (once again) from the mind. She mumbles a few words to herself. Her voice acts like everything else in the room: falling, then falling silent. His body, no longer there. Imprints of the human body are in some way more human than human bodies themselves. They contain the body as a negative, yet something more besides. A very fundamental voice, the tone of the human, that lingers, reverberating in the impression. There's something satisfying about hearing a pop song's reiteration of a simple truth, for instance the banality of not knowing what you've got until it's gone. You lose someone, but at the same time gain a more complete picture of the love you nonetheless felt for that person. That's one way of putting it. But one might also consider that time changes everything; that the next day will always be new; that in a way it's too late to learn what you had to lose after you've already lost it—the glancing back over your shoulder, or the longer look, reveals the land you've covered to be different from the land in which you lived. The fields you left behind, the distance measured out in units of assumptions and kilometres. She stands with her hands on her midriff, concentrating on listening. But the light has the same effect as water, distorting all sounds. And yet she is certain, he is downstairs shaving with the electric shaver. The door is closed, she lies down and turns on her side. Lying there on the bed she can look down between the beams and see the door, which indeed is closed. She gets to her feet. The pane is steamed up, a drop of condensation travels down the middle. The sky is not blue but white; the light is the voice of the sun, unready as yet, though sleep-drenched it muscles in.
The pane is soaking wet. She descends the loft ladder and cautiously opens the door of the bathroom. He is facing away, quite apathetic. She goes towards him. In the washbasin in front of him the electric shaver buzzes. He is standing quite still, staring out through the milk of punctured double glazing above the washbasin. She steps up slowly and pauses a few centimetres behind his back. He is naked. She turns her head, as if the light should take her photograph in silhouette, baring her cheek and glancing at her reflection in the mirror that is affixed to the wall next to the washbasin and which cunningly doubles the bathroom's size. Her face is partially obliterated. Only the part of it that is turned towards the light exists, the rest has collapsed, to dribble like thick glue from her hip, the eye left behind at the shoulder. She blinks, but only the left eye closes, the skin that surrounds the other, at her shoulder, contracts as if in resignation, a half-hearted smile. Great, black gloves cover his hands, his only garment. His eyes are different colours. In the mirror she sees the glint of something metallic. A few centimetres in front of his eyes, leaping sparkles of light as if from a Roman candle. But what she sees is a needle, threaded with thin red sewing thread. It protrudes from his eye. It was your brother, he says quietly. You were twelve. Is it still there. He does not move, she does not reply. The gloves are like an answer. Can the past leave a person and come back for them again. The past, leaving you and coming back at inconvenient times. His face is her face. Their bodies have worked through the night, have lain in various positions, limbs draped like honey spun from the comb. Condensation trickles down the panes, both the windows are punctured. Through the glass one sees the sun upon the rooftops. Other planets are visible too, one is very near, dissected by the corner of the window frame. 
The planets drift as if suspended in water, close by and far away, sedately, prompting one to attribute their slowness to the distance at which they are seen, though in actual fact it is all about the eyes. The eyes are planets too. The slowness lies between the objects. The individual body, the individual planet, possesses unimaginable speed and is proceeding insanely towards destruction. She reaches up and raises her eye to her lips. With two fingers she presses the orb between them. She stands for a moment, the eye in her mouth, the planet soon to block out the light from the window in front of them. One has the feeling of everything closing in, and yet one might easily claim the opposite. That would be true as well. The outer wall, pale and yellow-washed. The cold stillness of the cobbled yard like a boiled sheet draped out to dry over a pile of sticks and left forgotten over the autumn and on into the winter, a face frozen in a kindly look, the coldness of demonstrations, symbols. They're winding the rectory up in its present form, selling it off to cut their losses. Everything's being marked up. A feeling of all the time that has passed, the struggle to keep things going, the realisation that it wasn't worth the effort. Sacrifices, losses. The two oak trees where the well used to be. Later, a leaf tumbling across the cobbles. Stepping under the trees one senses the trachea drop through the body; be dashed to the ground, thrust into the dirt and the tangle of roots, then a series of crippling blows that echo across the yard, causing the thin panes of the windows to rattle in their frames, the leaf to dry out and wither, turn brown, disintegrate and vanish, leaving only the frailest skeleton to be daubed against the yellow wall. 
The trachea is implanted in the ground like a fencepost; a fleshy paleness, blood as it drips from a half-open mouth, blood as it seeps, the trees that take on its colour, the roots becoming veins, the leaves then a deepening red, almost violet. The light in the yard transforms, now it penetrates the red cover of leaves. One cannot shout, one cannot hear, for this is a place of stillness. It comes with the soft yellow, one senses, the colour of the limewash. The trachea is implanted in the ground, the mouth is the eye of the well; there is a feeling of function and death. The leaves no longer fall from the trees, not here, not anymore. The losses are inscribed in the stones and in the leaves, all that now becomes still here. Worse than the counting is the lull when no one is counting. Even he who was meant to count is lost. The great hands of the trees bear witness. As long as there is someone to count, to call something by name in a way that does not destroy; then there is something worthwhile, a rhythm in the world, a relation between two points. The sound of the glass, placed on the table. A hollow sound, the glass encountering the surface, amid this landscape of objects, a hard sound of wood and glass, puddles and clouds, oceans calm as millponds and the sun that glitters therein, the water in the horse trough, the water in the basement, the letters that float about, the scales of the fish as they reflect the sun, when the half-dead fish flap their tails, twist their bodies and gasp on the quayside. A fish-eye as it stares, without direction, seemingly at _everything_. A gaze that has all the time in the world. Why am I telling you this. She says something along the lines of constantly missing someone. He can't really hear what she's saying, she speaks rather quietly and there's the noise of the traffic too. It's not because you don't love me, she asks. He shakes his head. She seems changed, he thinks. 
She closes the window—if we want to get out today we should go now, before it's too late. He nods. A sack of rubbish dropping through the chute, a spinal column, green smoke rising from the oil drum in the back garden. Maybe you should hold off writing something so harsh. Until you know more about it. Until you've felt what it's like _yourself_. Whatever it is I'm supposed to feel. Regret, guilt, gratitude for the love that nevertheless still exists for what nevertheless once was. I'm somewhere else completely, with no idea what I'm doing or if I ever even knew you, you say you have learned such a lot from me. The place where the shards of the urn were buried, at the foot of the tree, looks like a cathedral. The sallow trunks turning dark against the light of the sky. The sun entering in such oddly staggered fashion, a blade-box light, sword-beams of sharpened light penetrating the living body that time and again survives. The _character_ of the light. Light as it falls through windows high up in towers. I almost miss the train, and had no time to put on underwear. I thought of the message it would have sent—if I'd missed the train. And you not being with me. How it could have been construed. I like that place a lot, the whole idea of being scattered into the sea and the urn interred in such a cathedral of nature is beautiful. The low fir trees. The tangle of brambles. You've said you'll come and pick brambles in September. I'll be in Rome then. The summer will be gone. What has yet to happen is just as strong as what happened and went. I'm pretty certain of what I would feel in this or that situation. She looks back over her shoulder the whole time. She misses him now it's spring again: it comes back in loops, the yearning, with the same intensity, with the precision of the seasons, the imprecision. She considers writing to tell you how it is. On a bad day he might come back in some guise. The idea of coming back for something you forgot. 
Not coming home, just back for something you're not exactly sure what is. Maybe then you could take it with you, in a little bag, carry it around with care or whatever, according to the circumstances. She sits out on the balcony. It's so quiet you could hear a man fold up a handkerchief. All that cannot be transported, cannot be moved. It's like moving a lake. The body lagging behind thoughts that have gone on ahead, the body always yearning. This direction or that. Forget it, she says in sleep. The sunlight of morning reflects in the windows and is hurled back into nature. The rear yard plunged into shadow for most of the day, the underbellies of the horses, the space between those underbellies and the grass, where e.g. the stream thrashes up, when hooves kick through its water. Smothered coals, a closed circle of sighs in the sand, their grey remains. Everything the human body finds possible. But we have no coals on which to walk. New York, March 2012 _Back pain due to misalignment of the pelvis. She did not have strong enough corset muscles to keep it in place when the irritation in the lower back began this fall._ 42nd St/Bryant Park. _Soho Herbs and Acupuncture._ I tear myself away like a boat from a quay at night. You'll wake up early on such a morning and not have me there to help you live, to get you through the day, to allow you to breathe. Breathe, he tells her. Drink. It's so very common, they say, and she senses a calmness descend upon them. They walk over the bridges. The wind bites at her fingernails—cut too short. Animals graze, _regardless_. The body's desire to get away with something. The sun shines on the lawn, warming her lower legs; she walks home through the cemetery. You stop believing it will all go wrong, and then: you die. When they came to the house they saw the trees were in leaf. The winter, depositing everything; the summer gathering something else up. And you, where are you in all of this. 
Some days in March, the fishermen put out to sea. _Inconsistencies_. Dead-end streets have no air. Host and guest. Some frozen tufts of couch grass. Whether you want change or not. You're not here. Everyone agrees the situation is alarming. In principle the garden should simply be plundered. Summer pulls everything up by its roots, leaving plants, bushes, and flowers on the ground like this to wither. Weeding the beds, at intervals. Going to the zoo as if by ritual. A particular way of descending into calm. The distance between two objects may change by one object moving, the other object moving, both objects moving, or by some external force moving one, the other, or both. Something with no obvious connection. Is it a bad idea. But love is no idea. Quiet, quiet old song. It's like my eyes are repeating something you're trying to put behind you. As if I remind you of something you don't even know what is. A body you can't forget, but more than that. Josiah McElheny, _Modernity, Mirrored and Reflected Infinitely,_ 2003. Mirrored brown glass, aluminium metal display, lighting, two-way mirror, glass, and mirror, 29 1/2 × 55 3/8 × 18 1/4 inches (74.9 × 140.7 × 46.4 cm). Maybe he's just explaining something. You know how the media work, you say. They want the drama. Yes, I think, like you. Sporadic movements forward, one knows them far too well, the way one knows one's sisters far too well. Why do you always try to make me feel _worthless_. That poster of the sunset that used to be on the wall outside the blue room upstairs. Had it always been there, and if so who put it there. Whenever it was. At the dawn of time. Counting down from ten and then starting from twenty. The order of factors. Ascription of value. _Value added_. Every time I close my eyes I see the image of your back. Apart from that I spend time looking at the view of the hills. She thinks of an intuition she has, the way it feels like she is stealing his love for her. Borrowing isn't the right word. 
Buying isn't either. You look like you think we can protect him. He's a child, he knows everything, he says proudly. Are you going to drink any more of that, do you think it's a good idea. Where are you, are you in here. Had an eye for that kind of thing, to be able to breathe. It has been eaten up by silence, eaten up by stillness. Your voice, I've forgotten what it can do. Ground water and rain and blood and cries and spit. The beads rattling across the teeth. Him rising, shirt hugging skin the way I did in the night, flogging another person as only I can. As only you can. Yearning in advance. This afternoon's sun is yellower and heavier than the morning's; it's as if it needs to convince you the day is still here, is not yet gone, not yet; it is too early to surrender, spare me the white flags, for nothing in this world is too late. A dark-yellow sun, nourishing an almost maternal concern for everything that exists in nature—that which belongs there and also that left behind by people, a pair of sunglasses on a lounger, a glass, a paperback read by the wind at a speed one can only accept as a possibility. The light turns blue. The flagstones turn cold, a transition much like a sigh, something turning in on itself and vanishing. A balloon lasting a week at most. And the skin covering one's arms contracts in busy spasms, a window blind raised with a snap. The down of forearms rising, nipples hardening, breasts round as a rounded hand; if I lose my breath, the beat of my heart will present itself in a tremble of tissue, a shuddering breast. I sit down on the patio and look out. Even the sounds can be seen now. The sound of sand, a dry grinding that comes from light being so mean with its warmth. We stand inside the woman's body and listen to his voice. Hollow, his voice in the marrow and through the bone, the flesh, the skin. Sound travels through water, like a grain of sand or a shard of glass back out again, through the body, a fish swimming against the current. 
His shirt is wet, water settled on the field, the stagnant pools, rain unable to escape, through which they walk. Clay soil—if we scraped all the mud from our boots it would make a land. Or the sand whistling in the dry wind, settling in all folds, in the hair, in the nostrils, in all notches and grooves, all that sand together would make a land, and not a single grain would be able to hide anywhere else again. Run, he stresses, we _ran_ through the land. I've forgotten how, that's all. This is how we see him now, in the light of the descending sun, low and warm, behind him: as if but muscle, overly tensed and quivering. A man, getting to his feet, the mud that drips from his coat like entrails, his nose, hands far too big, the sun spreads between his fingers. And the woman's eyes, suddenly they are the only thing in the frame: her eyes, we see her smoke-blue eyes, heavy lashes motionless, then the shutter of the lids, closing and opening over the orbs, slicing everything in two. Everything you lost, everything you never dreamed about. We look into the woman's glassy eyes, but instead of seeing ourselves we see only the man. The man, getting to his feet—the man, again a body, parting the woman's gaze as a knife through fruit, and the beauty his broken body must now accept. Recurring dreams. What became of the crab apple tree, the one we planted in the back garden. Disease killed it, she says aloofly, unmoved, as if having assumed nature's indifferent brutality, its indifferent nurture of all things. The dead, the living. A love of everything in any form that might remind one of indifference, but is the opposite: an attention to what there is. In all new forms, in all the forms being may assume. Forwards and backwards in time, the opposite of nostalgia, not keeping anything for what it was, but perhaps retaining something, or continuing to watch while something dissolves, so that something else might emerge in its place.
She considers it brutal, but at the same time rather elegant. She says this out loud, he nods. In concentric rings originating from a central source, the disease spreads through the garden. And in concentric rings the creamy spores advance within the fruit. She forms an eye with four fingers. The sound made by his shoes as he crosses the floor. The sounds are ominous—a rummaging, rustling upheaval of his very being, like a clawing rake catching on a lower branch of the hedgerow, snagging in a tangle of brambles; a gashing of bark, sweat that starts to trickle, leaves crushed and crumpled. Looking down into the apartment from above is like peering into a shoebox furnished by a child, with tiny chairs and tables and rugs. He is lying with his ear against the wall. She paints her nails in the next room. He hears the drip of varnish onto the surface of nail, hears its application, counts each and every brush stroke. We view the two rooms as chambers of the heart, looking down from above. All walls are thin, all sounds clear to him, and he is sharp, she thinks to herself. He can cut through a matter, grasp what things are about. Individual sounds, their _significance._ He knows everything, he sees it all _from above._ Her glossy nails reflect the bulb of the blue halogen lamp. He hears all of this, and what she _imagines_ too; what she _plans._ To gaze at a glassy object and see the world reflected there without oneself being a part of that reflection; in that way to cease to exist as anything but the gaze of an eye; and yet to be that very gaze; a most peerless feeling indeed. The opposite of coming to a new and desolate town and seeing oneself in everything. 
It may be the case that one comes to a new town, desolate and acutely visible to one's own gaze; that one seeks refuge in a place where such an image exists in which to vanish; or that one finds a particular book and reads a poem, or simply the remains of a poem, and that one in that way vanishes, to become but a gaze. The sun draws the colour from all things. I remember thinking this and being assailed by the feeling of it being obviously correct. That this really was the way of things; the sunlight as a drain. Now I despair at ever having entertained the thought, how I ever could have felt that way, certain of something. One could also assume that light fills the world with colour. It would seem just as obvious, and today just as correct. What then to trust. What do we have other than the days, and in addition to them a gaze that on occasion might see. There are cracks and crevices, even in the laws of nature there are cracks and crevices; and there is light, entering and departing all things. I don't know if a person can take leave of something; I don't know how it would help. I know that you are here, that you are here still. The following day, on the instruction of her betrothed, the men came to drag Lucia away to a brothel; but the young woman stood firm as a mountain. They brought axes and roped her at the belly and knees. Still she stood firm. She bit her lip, her jowls trembled, and yet she stood firm as a deeply rooted tree, a mountain almost. Stood. Their axes were useless, the men conceded, wiping the sweat from their eyes with curses. Presently they came with firewood by which to burn her at the stake, but once again their efforts were in vain. A number of the strongest men then approached to bend her head backwards and she did not resist.
The men put tongs to her gleaming eyes and wrenched them out with resolve, though not without several being compelled to vomit, their backs turned, the sound of a snapping branch, the sigh of bark under the blade surprised them perhaps. Or else she stood firm and gouged out her own eyes that no man ever again might desire them. Often, she is depicted with her eyes on a tray or in a small bowl. Or as here: on the stem of a flower held in her hand. She looks down at her eyes; there are six eyes in all: those on the stem; those in her sockets, by which she sees; and our own, in this instance mine, the two eyes I am forever lending out. One can pray to her, for she is Saint Lucy, the patron saint of the blind. No one else may desire them ever again. Such a thought. There's always someone who is parent to another. I have become mother to my mother. She phones again and I am gripped by a feeling of solicitude I imagine must be the same as a mother's solicitude for her child. A recurrent feeling of her not being able to look after herself, me having to be there more often. Different things being placed in small bowls. Beads in a bowl. Grain. Some colourful candy and raw red meat with a white marbling of fat. It's as if the sounds trail on. The images are more sound than image. The high panelling, the three bright rooms facing out towards the mouth of the harbour; the storm and the rain lashing the trees, no one ventures out today. The siren sounds; you say we're safe here, we're safe here. The rising water is cause for concern, the canal swelling still, and they listen to the radio as it rains. They have been cooped up indoors for some time now; they have begun to mark off the days on the wall by the door—one mark for each day, the fifth diagonal. It was amusing to begin with, a kind of joke, but now it's different. They try not to look when they pass through the hall on their way to and from the kitchen. 
The balcony is under water, the plants have lain down in their pots and troughs, and three scrawny pigeons have set up camp there, looking ever more wretched in their sorry plumage. Their eyes have dulled and become milky, in contrast to the glassy surface of the water that is disturbed only by the rippling eyes that are made by raindrops. The choppy waters of the canal rise up under the bridge, fat liquid slabs pressed up between the slats, sheets of spray and salt. She awakens slowly on the sofa, has dreamt, something about her sister, her sister being angry with her and shoving her backwards, causing her to fall. In the dream she decided to punish her sister, and pretended her fall to be serious. She lay there on the floor, as if she were unconscious. From the darkness behind her eyelids she watched as her sister fell silent and became gripped by fear, then to run away and fetch some men who lifted her up; that's her lying there, be careful with her. And they took her to the hospital, where the doctors had to operate right away. They shaved her skull, and she knew as she lay there that she would have to let them operate on her brain even though there was nothing wrong with her—she knew she had to go through with it. She sensed her sister's distress and deep regret, the way it mingled with her own, and yet all the time she felt that little dash of pleasure at seeing justice to be done. Having to pay for one's sins. The meting out of punishment. She lies with the old white throw covering her, the one they can't discard even though it's worn thin and frayed. Light falls through the clouds. Razor-finned spheres fly through the air, slashing their way through everything, furniture, bodies—you're bathed in sweat, you've been dreaming, he says. He grips her and lifts her head as if she were an infant or the victim of some accident, a casualty. He pulls her up towards him, kissing her on the mouth, as if a kiss in some way counted in his favour. 
She is limp with sleep, the sleep that courses through her body. She is draped over his shoulder and stares out through the windows, out across the sea, another Venice entirely now; and the sheets of salt water thrusting up from under the bridge are a glassy arcade, ten thousand mirrors, the city rebuilt here in the midst of its demise. The harsh facades of the new buildings on the other side of the canal. No one ventures out, a single face in a window, but that was earlier. Not many windows can be opened anymore, no one left with cigarettes to smoke and windows to open in order to do so, and the last of the daredevils who took to the waters for a swim have either vanished or given up. He took hold of her knee and lifted her leg over the edge of the sofa, pushed her beneath him. The sea is rising, swallowing the bridge, suffocating the columns, its waves unfolding across the harbour area, the cobbles, consuming the old wooden sleepers, the tables, flooding the lawn, dissolving the trampled-down turf, thinning the soil that now begins to float, little scraps of bark and tiny stones filtered through the blades that stand like bristles, wave like bristles beneath the water, the calm beneath the surface, the rush of the waves drawing back, the grass bending with the movement, mimicking, then upright again; the next wave, and the next, and another. The sky contracts and tightens, different strata of cloud separating in bands, revealing something white beyond, light from somewhere, only then it is gone, fading away into grey. Where they are most compact, these grey ribbons shaft towards the ground in dark violet, a negative light, darkness decimating all things before it, sealing the view from the windows of this apartment. Everything closes in on them, the occasional chinks of brightness in the clouds are passages opened to allow something through. Metallic curtains of purple-grey rain. They exist inside this cubicle. 
An impending light, approaching like a calendar date or a saturation point; reactions, implosions, matter expanding and contracting. His back is turned and he cannot look into her eyes; if he looked at her now, he would see what was behind him reflected, but he looks at the wall, the splendid high panelling, the door leading out. Who is most hopeless. Who is most in need of drawing on some addiction, drawing on the other. A boat torn from its moorings in the storm, the darkness of the prow; the darkness that lingers about the beds and bodies, the hands folded and slid beneath the pillow, or the hands that hold another body, the cheek pressed against the throat of the other, the warmth between the two bodies; seaweed caught and then extracted like her hair as she drowsily opens the catch in the bedroom and the wind wrenches the entire window from her hand with a bang, like a sudden wound, or a garment of leather ripped open at the seam; horse riders tumbling like hail from the sky. The wind breaks up the landscape in a raging clamour of lashing branches and rattling gates, bringing moisture to the eyes, sweeping across the open expanse between the boundary and the fringe of the woods, scraping and clattering over the frozen earth, over the tops and stubble. Winter lasts longer than summer because it reaches so far inside of everything. It counts and appropriates the ribs. It heaves the branches apart and hurls them together again. Time is bound in the movement, and winter paws and claws with its frost and its storms, ceaselessly altering the form of all things; frost and thaw and frost again, the same coat buttoned many times each day, done up and undone, sweaters and scarves drawn around the body, wrapping it up, the body flapping its arms to keep warm, or stamping the snow from its boots. A young man battles to light up a cigarette as winter batters the trees. A new layer of calcium with each passing day. 
Stargazer, horses chewing the bit, heads tossing back, the urge to move on in any direction, movement being enough; every street corner awaits a presence, shaped to accommodate anyone who lingers. She could wake up to a mass of people staring in at the window; in the cold, their faces would be unclear, the pane steamed up by so much breath; a droplet of moisture could travel down and expose a section of chin, collarbone, or chest. She often had to remind herself that nature possessed no will, that as such it was _impersonal_ —unwanting of human contact, not meaning anything by snow. Unbelieving. The snow, glittering. The sea, glittering. The veins of her forehead are made conspicuous by frost, and she has noted that the long, almost invisible scar on her brow from the time she rode a horse under a low tree seems to emerge more clearly in the whiteness of such light. She takes off her gloves and holds them between her knees, feeling her brow with the tips of her fingers. There's nothing to feel. The snow is dry. The sounds are quick. To sit and linger behind a stack of firewood and stand up as the horses come galloping by; they leap in the air and swerve away; fear creates empty spaces around that which is feared; the strange patterns of alarm, deposits of empty pockets of air incalculable as sea currents, plunging falls and hours spent alone, the body being unable to find a hold, connect itself. A lightbulb hanging from its socket on the ceiling. Getting up on one's own to boil an egg, picking away its shell and running a hand over one's arm. The dream of a summer cabin or a lighthouse. Is the sun the same as the eye. Is sun. To walk to the other end of the town, to the lock, like peeling a fruit and standing with something very bright in one's hand. An arrangement. It is the height of summer and they are all together at last. The women and girls are standing in a row in the driveway, all five lined up between the two pillars. 
The men are at the sides, three on the right, three on the left. And then—this being the whole idea of the photo—on top of the two pillars that serve to mark the entrance of the driveway, the two tallest of the men stand holding each end of a long branch. From a distance it looks almost like a roof they are holding up above the females' heads, or at least a canopy of some kind. The effect is that of a rectangular frame—the lower edge formed by the heads, the upper edge being the outstretched arms and the branch they hold between them, the sides are the pillars and the straight figures of the two tall men. The intended motif is thereby the field behind them, the photographer being positioned at the house rather than the road, as one might have expected. The rectangle frames the landscape and becomes as such an institution. A sorry mulberry bush steals the beholder's attention and becomes the picture's true subject, perhaps along with the dial of a watch worn by one of the men, which gleams in the sun. It, too, calls for attention. All the females have dark hair and are wearing dresses with waistbands, though this is merely a part of the frame. In another photo—a family portrait—a sheep has found its way into the picture. It stands there, white as white with bulging eyes, yet there is nothing about such an animal that can disturb the image, no style of dress to signpost an era and thereby beguile the beholder. It is as if people who really _feel_ things, their faces—the way they can make time burst open like ripened fruit out of which seeps the clearest liquid, a sense of our being _here._ She is woken by him gripping her arm. _You are innocent when you dream, live and let die, get yourself out on the ground, boys don't cry._ Though he is the one running a temperature (40.2 degrees), it is she who feels hottest. Her breathing. She gasps for air. Dogs running out in front of cars, running away on crushed legs. A black metal anchor had embedded itself in the skull. 
It was a miracle in a way, that the boy wasn't dead. She looks at the painting of the track in the woods. It's winter, the snow is blue in the shadows of the fir trees, yellow in the clearing further ahead. Stacks of firewood line the track, stockpiled like bundles of banknotes, a speech scribbled on a napkin and stuffed into a pocket for later use. A defence of some kind; a man comes walking as if on his way through the painting and out of it, and yet he is coming towards us, towards where we are in the picture. To huddle together when all is calm and peaceful, the longest of days. To step on one's own toes. The war passed like a sickness, I am always the same. Never any progression as such. Life runs the other way inside her, and thus she moves. Something inside her. The sun is low in the sky, the way the moon was. The wall is stuck to the picture like a playing card under a cup raised to the lips. The sky sticks to the eyes. Russia looks far too big on such a map. Too much of one thing. More than can be coped with. She runs her fingers over the painting. She cuts herself on the hole. Though no shard can be seen, she cuts herself. Blood trickles down her finger, drips to the floor in highly complex rhythm, a very poor kind of rain, a few drops is all. The sky collapses into the clouds, making them dark and heavy. We go to church at Christmas, and he comes with us. Next year—this is what he says—we are going to spend Christmas with his family. In the USA, where they live. They sit there on the plane. She happens to suggest it might not be that important for them to go over every other year. His family being so easy and relaxed; your family being the way they are. He says nothing for a while. Maybe she thinks, now that his father's dead and his mother moved away, and his brother moved away—maybe I think he doesn't have that kind of dream inside him anymore. He holds up his hand when they come to serve the food. She feels she has to do likewise. For his sake. 
Maybe at some point tell him she thinks she might stay. An experiment using iron filings on a sheet of paper. She remembers moving the magnet underneath the paper, the patterns it made. Adults fall asleep when they come home to visit their parents. Not because they relax, but because so much time must pass through their minds. _The very soul of France, don't you agree,_ he says. She doesn't know what he's talking about, it could be a musician, a dish, or a cookbook. When she came up to the house her hair was almost dry, or at least it hung down in strips, dark marrow encased in a dull, yet lighter crust of frazzled strands. She had no clothes on, only a towel wrapped around her. She hadn't seen him. He'd left the car up at the road and walked. The winter crop upholsters the fields from below, a dusting of green velvet or cotton, growing and encasing the soil; a mantle made visible by the storm, the wind's shawl of snow, gusts blowing open the coat, wrenching away the shawl; the snow as it drifts and piles, and then these islands of green. This green that wants to _witness._ This _merciful_ green. Whatever mercy could be—a hand held out beneath you, perhaps, or a whole body protecting another. A colour. Sun. To describe an image to someone may be a kind of love. The green beneath the snow bears some semblance, but is not. It's nature, that's all. Nothing to depend on, and as such there is some coincidence yet. Points of similarity. The distance between things and us. That which survives another day, and that which is lost. Other women say we look like each other; our boyfriends say we're night and day. I'm night, I think to myself, though I can see the opposite could easily be said—I'm the one with fair hair. I'm sitting on the floor in the living room, it's my mother's birthday. We've all come home. We're going skating, only no one's got skates. 
Or rather, we think there might be a moving box with skates in it in the loft, but they must have been there ten years, more than likely they're right at the back now and can't be got at. It would be too much trouble. Only later, on the ferry back to Sjælland, does the thought occur to me that they would have been too small, children's skates. I find it odd no one thought of it, or said anything. My boyfriend is sitting on the floor too, we're watching a film from what my mother calls the old days. Our cheeks are red from being out in the cold on the pond. They sweep the snow away with a machine that looks like the kind of cultivator we use to dig up the vegetable garden in the spring. It's got brushes instead. They start at the edge, moving along the shore, tracing the oval of the pond, this dark rink of frozen water. The water, darker than the snow. Two separate rectangles have been cleared, on one of which they're skating and playing ice hockey now. Some of the kids from Egens Havhuse. Between the two rectangles is a snaking path. My boyfriend goes over and studies the work—the man is clearing a circle but has started from the outside, the machine keeps throwing the snow back inside the circle. He's going to end up with a pile in the middle. It won't take much wind for all his work to be in vain. Do they always do it like this. He stands with his gloved hands in his pockets. There are so many layers in the landscape, the solemn trees closest to us cut up the picture like the cracks of an oil painting, a fracture in the wall in the corner of the bedroom. Is it worsening. It's hard to tell from day to day. A translation—then, now. The past, continually collapsing like buildings behind us, becoming something else. They have driven through the woods to the beach so they can watch the bonfire. There is no other way to get it said than this, the hard way. They have come to see the bonfire, so no one says anything until they reach the car park outside the beach hotel. 
Her mother turns off the ignition and they sit for a moment in their coats, a dampness in the interior and under their clothes. They're wearing lightweight summer coats, their skin is tanned and their hair bleached by the sun: it is the height of summer. She and her sisters, her mother, her father. They can see the bonfire from the car, but no people. It feels odd, the bonfire piled up like a peak on the empty beach in the rain—the summer of 2004 is a summer of rain. Up at the hotel the grey flagline slaps against the pole, beating out a weary rhythm familiar from the harbour almost any day in spring when boats are made ready. Curtains of rain across the sea. A man trudges past, a dogged angle in the wind. The car ticks. The air is not cold, more close and blustery at the same time. Her younger sister unclicks her seat belt. They are startled by the sound as the belt retracts, the metal clasp striking the window. The unobtrusive sea, its waves are an unsettled band of greyish brown. The light is not the summer's. Her sister shuts the car door behind her, they all get out and stand for a moment gazing in their different directions: their mother looks towards the woods, her sisters consider opposite ends of the sea; she stares blankly at the sand. There are candles in all the windows—it's too dark for Midsummer's Eve. There's something unnatural that doesn't fit in with the season, the time of day. At the water's edge she veers off and follows the shore like a sphere rolling through the groove of a wooden board. Seen from above it looks like the shore and all its sand empty out into the sea; the undulation of waves, repeated extensions of green and white, fanning out as they break; the effervescent rush before retreat. The sound of—a sphere in wood, a very simple sound against the murmur of the sea, always the same—whether heard or not, it exists. The bonfire won't be lit, her mother says definitively. They are quiet. 
They stand with their backs against the car, then walk past the boathouse, where the lifeboats are stabled, and down through the dunes. Her sister picks up a branch blown from the bonfire to lie like a bone in the sand. She tosses it back onto the pile, which reacts with a groan, the slightest of landslides, a few smaller elements rattling down a level or two, like a body turning in sleep when touched by a hand. They walk around the bonfire, considering it from various angles, though all the time from below and all the time with distrust or a feeling that something is wrong with it. Once they've been all the way round, they stop. He's got things in jeopardy: money, and his face. The garden is an eye, the lawn swathed in rippling green; and in the middle are the perennials, older than us all. You amble around them, casually, as if you were a planet fastened to its orbit around the sun, older than us all; or else you are a cone of light in search of something, a pencil beam penetrating the eye in order to find some weakness, or perhaps even disease. The light has no age. Light is no older or younger than the eye on which it falls. You stop and jab a finger at a plant. They're strangling each other, you tell me softly. A bed like this is war; the minute you look away, it's war. You nod as you speak. I can see the way your neck bends and extends, the silhouette of your head, your fair hair that in the light of afternoon looks like a cluster of aquatic plants. It's a shame, you say softly. I have always thought you to be a child, but now I see that you are not. You have all ages in you, while I stand here bare, a tableau like the perennials. No age or time will ever latch on to me, and thus I am already someone you miss. I am barefoot in the grass, walking backwards now out of the garden. I hear you speak to me. I see your girlfriend at the kitchen window, preparing pigeons and curly kale with a face that seems new every day. 
Unlike us, your girlfriend masters the art of living. She lives the same way as fledgling birds—they hatch out in a nest, oblivious to all that exists outside, and die if they fall from the nest too soon. I have dreamt about being like her—of being her—but today I am no longer sure what kind of dream that is, or whose. Some of us draw the strangest of straws—within us collect all the stray dreams that exist in the world, those left over. It becomes impossible to tell the difference, which are one's own and which come from without and belong to another. The lawn is alive with caterpillars, it makes me itch, and you let me off the hook. Gone, I feel the same as I do in the garden and when I am with you—completely alone. And thus we squeeze the juices onto our brows, until we no longer can remain inside the body, until we are beasts that cause the stomach to turn, or perhaps until the human being within surrenders with a wince. You think I am still close by, but you could turn around at any time and see something else. I slam shut my eyes as I leave—the metallic clatter of the gate, before everything once more is still. A length of knitting relieved of its needles on account of alcohol. A number of stitches waiting to be unravelled. A kind of vulnerability that is almost nauseating to watch—fingernails on a blackboard, that kind of nausea, that instead of rising up inside engulfs a person from below, the kind that cuts one's consciousness into very thin slices and serves them to a father who leans forward across the table and holds forth on the matter like a schoolteacher explaining something about which he has only the slightest knowledge, or a businessman on the verge of closing a profitable deal, with the utmost stringency, a recipe or a set of rules a person can pass on or teach, exercises to strengthen the small of the back, studies indicate, etc. Stains on a shirt. Various substances. Maybe the problem isn't so much _hoping_ for something else. 
He lights a cigarette and the palm of his hand is illuminated like the inside of a cave. A drop of moisture released from a branch. Autumn: leaves descending like tired faces in the streets. Steam drawn out of the window. Rising. You're paranoid. She reaches for the red wine and empties the bottle into her lap, leaning back against the counter. What are you doing, he asks calmly. Having a miscarriage. Okay, he says with a nod. He drinks from his glass. That's all we've got, he says, pointing with it. The light of summer draws the colours from the world. Green and blue. No matter. The dismal belly of the hedge, the leaves of the birch though brightest green, waving in the breeze, whenever there is one; limp as droplets when there is none. Clustered weary on the branches, those thin arms. The birch. Birch trees, wandering, as if troubled. Troubled by e.g. war, or the promise of death. Was it so bad you thought you'd die. So bad the only thing you want is to die. The next image is from Normandy, the coast there. No people, just an empty beach. Waves. Nothing but waves and the sound of waves. The sound of the garden and the sea. Presumably, he wants to see you happy. He knows I won't be. He knows I never will. He's not that stupid. He knows me. I got this idea about you and that lump of amber, like it was the amber that picked you up. From the beach. Thus march the trees in a flicked-out fan from the garden, now from the sea: like soldiers to the land, over the beaches, slowly to the house as if risen up from the ocean itself, kelp about their ankles, seaweed for hair, barnacles beneath their soles, calves encrusted, occasional mussels embedded in algae, entwined around the thighs. Like the sun. Returning to a lodger who will turn out to be gone. Washed away. Nothing here, whatever happened to... And in reply, a pair of shoes, or perhaps only a single shoe. Left behind before a house taken by the swell, laces rotting. 
He bent down and picked up a lump of amber, tapped it cautiously against his teeth and held it up to the light. Come closer, he said. Look, he said. And as the trees withdrew, they shone through all things with their white bark, and beams of light were their gaze. We look into the woman's glassy eyes, but instead of seeing ourselves we see only the man. The fact of our not seeing ourselves in the woman's gaze. The exchangeable nature of love, and always: promises of the opposite. Approximately eighty per cent of what may be said about me may also be said about you. To circle a building by allowing the index finger to follow a mortar joint in the brickwork. Freedom is something one used to have, found only subsequently and in hindsight, and thereby such a nostalgic idea, and exactly that—an idea. The darkness of the woods, _regardless._ Like your face during that time. Some favours I do for you, without you really noticing. Look what you've done. The sun adds and subtracts indefinitely, like an abacus in the play area on board the ferry, first one side, then the other; you get up early and say you want to get something done today. When later we walk around the lake we must clamber over fallen trees. Or rather, you bypass them, holding down thin branches from the top of the crown with your hand. The oak trees had just come into leaf before being cut down, the way a person might think of something they should remember to say. Summer, and a conversation that could have been. I keep thinking about the way I banish the sickness to a place outside of me by calling it some particular name. This or that. I don't know if you could call it a breakthrough, I don't really believe in stuff like that. Maybe the speed can be adjusted, but the crash is always going to be inevitable. Maybe the rate at which a person disintegrates can be slowed down—maybe that's what these fleeting realisations can do. I survive by the language. 
The language as an additional body part, a substitute heart for when the other one stops, an extra pair of lungs. Salvation, to possess a voice. Two kidneys. And what if it is not dreams and the night that disrupt everything, but the day that makes everything contract and shrink. Like when the moon looks bigger when you're close to the horizon. What are the proportions, what perspective is right. You marched through the city, dressed in black, red flowers in women's hands, and hooded. As if hoods or colour could ever keep something so fluid together, knots and ties. A dog tags along. The boat waits at the headland, at the jetty, engine chugging. You step on board one by one, like a necklace of beads stretched between two hands, the gap of elastic thereby exposed, one bead at a time allowed to pass. One sees a foot, and then another, the footwear, stockings. Nylon, leather, stripes. Bright-coloured shoes are comical, in a heart-rending kind of way. Unexpected guests in the middle of another of our arguments. You know how it is. If anyone came through the door now you could force a smile. _Gerbera,_ I think they were. I think they were gerbera, the flower heads they scattered on the sea, bobbing on the swell. Strong colours. The stars seen from Earth are more numerous than all the grains of sand in the world. The photograph's distortion of the subject, the ends of the horizon curving like the corners of a mouth. To make the aperture big enough to let in all light; to pluck the flesh from the pigeons; to part a crown of the darkest hair. Where do you imagine the money's going to come from. They have reached all the way around, emerging now into the clearing and walking the final stretch towards us. We can hear them talking, they have been hidden from us by trees and the boathouse on the other side. The denser the foliage, the less intimate became their talk. 
Whatever a person endures, it leaves a mark in their language, the way the sky determines the colours of the sea, children always being the children of their parents. My mother is unhappy about the little blue tattoos they made on her breasts before commencing the radiation therapy. They look like dots made with a ballpoint pen—they'll fade in time, she says to comfort me. A pile of timber darkens at the shore; shreds of fibre torn loose like hair floating in water, hair blowing in the wind. A softness in the language, and in her face. Is it possible to reflect and be happy at the same time. She stoops and draws an arm of the beech away from her face. The sunlight cleaves the trunks, the trees subside into the forest floor. He stumbles and nearly falls. The path is studded with rocks, like bald heads breaking the surface, a thousand metres from the burial mound. There are paths below the ground and everything is continually in the process of becoming something else. You regain your footing and reappear at my side. I found an A4-size envelope today, on the front of which my mother had written: _To be opened in the event of a new winter coat. Love, Mum._ I opened it—the adhesive glittered blue—and put the three thousand-kroner notes it contained in my wallet, the envelope in a black box I keep by the DVD player and your records. I had already bought a winter coat. I felt glad I hadn't opened the envelope before and spent the money on something else, like food, or just frittered it away. In fact, I thought I had. They burned off the fields, the smoke was purple and settled on the landscape like a dusty cloak. The risks of mistaking loneliness for something other than loneliness are various. When you lie on your back, a shadow makes your nose look squint. I don't know if I believe in optical illusion. Or if there is anything else. Whatever can be said about reading poems can also be said about living or being in a relationship. 
A number of requirements, or instructions given. Your toes inside your sandals, nails coloured black. It's all like looking at a 3D image—to make it work you've got to concentrate on a point beyond the screen. I've noticed I feel happiest owing you something. Having something to return, being one favour behind. During our first months together you got rid of various items of sentimental value, things that concerned your relationships with other women before me—letters, handcuffs, jewellery, lotion. We too have accumulated stuff, I see, and now I wonder if I could become a collection of remnants in the same way. We are back at the clearing where we started. Our towels lie brightly at the bench. Beyond the trees, the burnt-off fields moved like an ocean, a gentle swell of smoke, ever sinking, never retreating. No higher bid. My cousin is drinking himself to death. That's what we do in my family. Some of us, anyway. It's a slow way to die and belongs to the indecisive, those who can actually see there might be something worth living for: the beauty that exists in the world, and love—the chance of fondness, still. My other cousin writes to me on Facebook and says it looks like he's on his way out, that it's a battle now. I write back and tell her I thought he was getting better the last time I saw him. The time we visited. I even thought he'd come to some realisations. Or one, at least. She tells me his condition has not deteriorated, but that he's been more dead than alive for a long time, that his internal organs have been steeping for years. My family has lost several in that war. My cousin tells me she's developed her own strategy. It's a question of being unsentimental, she writes. Let them drink themselves into the grave if that's what they want, as long as they accept it's their own choice—and if they happen to decide something else one day, then all well and good. 
Let them know you're there for them if needed; but you can't spend your life urging and appealing, begging and pleading, and always getting let down. My mother is cut up about something I wrote. She feels like she's being held up to ridicule all the time. I suppose it's the surrendering of power. The child claiming the right to her own story. Sharing it with others, if that's what she wants. A minuscule fragment of the self that can be handed out to whoever; like a garden in autumn, with always a leaf releasing; or a body soaking in a tub, beginning to dissolve, tiny cells of skin, or flakes, floating like a film on the surface of the water. She says she finds it hard that I always make her a _victim._ It's an odd thing to say, as if it short-circuits the brain or leaves it in a state of self-fuelling oscillation. Who makes who a victim. When I was fourteen I put a newspaper clipping up on the fridge with a round, red magnet. There was a picture of a woman writer who wrote about the conflict in the Middle East. It had to do with the role of victim, the way it made the bloodshed possible. Because the victim can always do as he likes. The same applies to kindergarten. Who makes who a victim, who is comforted. The greatest revenge is perhaps simply not to be there any more. But when you're waging war a hundred kilometres apart and are there no longer, you have never been closer. Maybe that's how it is too. The more you fight, the closer you become; the more space you take up being missed, as an imprint, the closer you are. Dear cousin, A brief word from me here in Nørrebro, Copenhagen. It's a cold day, the fourteenth of February. I'm sitting here trying to work on my new book and happened to think about you. As I've done often of late—hearing how you are from your sister and my dad and wishing the very best for you. Hoping you're getting better. Are you able to eat? Is the hospital food any good? 
For someone who knows as much about food as you do it must be a trial sometimes to find the appetite to eat and get well, that must be hard enough on its own. It's been a while now since we spoke. I live here on Fælledvej in Nørrebro with my boyfriend. You haven't met him yet. Who is it now, I hear you ask, and I can understand why. There's been quite a few the last couple of years. A coming and going of men. I can't manage being on my own. Still, this time I think it's going to work out. Nine months already, which is something. I think about how many ways I can tell a lie and that I'm good at it. I'm rewriting my essay. Revising and making amendments to keep my family happy. I'm manipulating. The story about that letter to my maternal grandmother, the way it got photocopied and put away. A simple matter of retaining something, now an issue about having the right to tell. They say history belongs to the victorious; but I am no victor. I have violated something in which I truly believe. And acquiesce so as not to be shunned. I walked around the city lakes yesterday so I could talk to her. After a week it could no longer be postponed. The energy my sisters get out of it. It was nearly dark as I went. Darkness falls early on this land in winter. It surprises me still, how early. It creeps up and assails you. We talked about my father, how impossible he is on vacations. There's so much resistance in him—fear, I think to myself, that maybe has to do with alcohol in some way. His childhood, with a father who drank, drank and wrote the whole time. I think his own take would be that it was down to some other stuff—his mother, no doubt. In a way, the whole thing is a tragedy no matter which way you look at it, whatever the truth of it. The way something can be _handed down._ My mother and I—it's not hard to see we're in it together when it comes to him. Enemies, loved ones, frost, winter. Nothing binds together like that. 
The seeming potential of alcohol with regard to cementing kinship. Or maybe just that exactly—kinship. The frailty of family, the darkness of it. A few days later I'm on the phone to my best friend. We talk about my mother and the essay. He says it's the crux of the piece—that it's extremely important to retain, my reflections on wanting to preserve memories and who owns the past. He thinks I've brought it out well in the writing. I tell him about the feeling I get having those talks with her. Like being put in your parents' car and made to visit some aunt and uncle you don't like. We laugh about that. I go back to the office, balancing my tea. I read somewhere that men don't want to hear about their partner's previous erotic experiences. That it just ruins everything. In a way, it makes sense. In a way, it might be the most honest thing that's been said on the matter. A shunning of history one can't help but love. Maybe it's not about purity in that sense—maybe it's because a person can't live with that much time, that much past to _skirt around._ Discovering the video a few weeks later he snatches the camera out of her hand and fends her off with an outstretched elbow. Give me that. He studies the film, his eyes soften like a wound. He becomes hospitable. His body, oblivious to being observed. He asks about that scene—it's evening and they're seated in the kitchen having spaghetti. What scene. The one she told him about, the one with the breast. What about it. Is that what it was like, he wants to know. He's talking about Duras, that film with the very lengthy shot of a naked woman in it, the breast of a woman asleep. You think you're looking up into the crown of a tree, only to discover that what you're seeing aren't branches at all, but the photographed capillaries of a heart. A wrist, a body transilluminated. The way things fall together, a pulse turned into something you can see. Who was she out with. No one. 
Limping, hobbling home, the fir trees parting like scarlet lips; a leg dragging behind, the way a cat might drag itself home, hind leg trailing like a broken cart. The horse was down by the meadow, tossing its head, the reins tangled up in the branches, only then it wrenched itself loose and set off at a trot, bridle dangling from the headstall, following the perimeter of the colts' enclosure. The green of confusion, the vegetation, the last hours of afternoon. A group of girls stopped what they were doing and dropped their currycombs, or else simply stood and stared; a couple of them came running towards her, clambering over the fence, and as they reached the wood and the girl, she fell down in front of them, like a heavy sinker when the line is released, plummeting to the bottom, an anchor descending through current, all the softer strata, motion in the direction of the horizon. The wind picked up and passed over them like waves rolling in from the open fields, a rotten stench of sea borne upon the air, slabs of mingling perception, rising up in a murmur, lapping this far or that, issuing its sighs and sinking to the ground, to whisper in the gravel, in the sand, and the wounds. Her freckled skin was gashed apart, arms blotched with blood, blood trickling from her nose, and an eyebrow glistened red. In the far field they attended her, tearing open her long-sleeved T-shirt, its weary fabric relenting at once. Her arm was at an angle, the bone stuck out from the middle of her forearm, the lower part with the hand dangling like a decoration. There was a lot less blood than one would have thought. Her leg, her leg, the girls cried out in unison, then busy whispers exchanged, endeavours to make her comfortable, to arrange her in some way that resembled a natural position of the body. And all the time thwarted by some issue, knees that refused to bend, and the sight of her eyes as they flickered in shock. Her face was the worst, but no one saw. 
And the internal organs: a lung slowly filled with blood, patches of deepening shadow, fluid seeping darkly from the body. Help is on its way, they assured her. Someone had phoned. The horse had stepped on her face, the left side, a loose tack in the shoe had gashed her open from just above the eye, a fleshy flap hung from the socket, her apple cheek parted like the tall grass of the meadow through which they rode. Who was she out with. The horse lowered its head, came to a halt, snorted into the soil, turned, lowered its head again, and nibbled absently at the couch grass. The flap, flap of horse lips smacking together, the moist rending of pasture detached by the teeth. She gazed up at the tops of the fir trees, they pointed up like mountain peaks that strove towards the sky, and all movement was suddenly directed upwards, she felt; the fall had lasted an age, but when first she let go and allowed herself to tumble she thought fleetingly of the speed at which everything hurtled towards the clouds, the whistling rush of the air; she hit the ground at the edge of the bridle path hollowed out over time by the tramp of hooves, twisting round in mid-flight as she was hurled under the horse's hind legs. Now she tried to move her arm, but couldn't, and found the other to be likewise unresponsive. Blades slashed at her like darting swallows beneath the ridge of a roof; she wanted to know about her face, but not a sound would leave her mouth. She's trying to say something, one of the girls realised, commanding her friends to shush; they could see only the right side of her face, the left was seen by no one. The body, opening itself. They stood and listened, watching her lips. _Face,_ the girl whispered, _face,_ repeating the word several times, and everyone understood, yet no one spoke. They saw the ambulance—and three weeks later her face. It looked like a field, skin sewn together in a patchwork of boundaries and trampled-down tracks. 
Part of the jaw was saved, three teeth, the cheek rebuilt from bone grafted from the radius. For a long time following she was blue, and they shot the horse, it was too wild to ride, too _afflicted_ to be kept in any place. Castrated too late, it was like it never realised it wasn't a stallion anymore, they said. The girls understood that to a point, but everyone agreed that shooting it was best, on account of that face. Its eyes darted in their sockets as it was led out into the farmyard so they wouldn't have to move it as far once it was dead. The gunshot was a whirl in the air—dust, grass seed, sand. All language is a translation of something. The leaves of the chestnut tree, the way they unfold from the bud in the space of a few days in early May, are a translation. A man stands in the middle of the road and two dark-coloured cars sweep past him very closely, one on each side, moving in their opposite directions. He has to turn sideways so as not to be hit. His body mirrors in the paintwork and the windows, the rush of wind as they pass causes the hairs on his arms to tremble. The particular hang of a dress. The minutest of movements in the region of an eye—industry. A translation. Roots in poor soil, sandy soil, meagre. You appear on the path beneath the chestnuts and have lost weight from all your worries. It suits you. The compactness of your body, the fact of being able to see what's under the skin. Your hands are in the high pockets of your short-cut coat, so your arms stick out like wings, two triangles in your wake. You greet me, and later you say something about selling at a loss. I can't remember what it was you meant, only that it seemed plausible that it should be so. I put my hand out and thought how fleshy it looked. Everything is a translation. You shook my hand and it felt like a reconciliation. You held a cigarette between your fingers while it disintegrated into ash. We walked in the direction of your nod.
I imagined how he would look sitting in my kitchen. I'm going to Fyn, you say. Your eyes have different colours from the trauma. You want to go the limit, you say. You say you're not sure if I understand you when you say you're tired, I'm tired. You'd rather crash out in style than not give it a go. I think you're right about me not understanding. I want comfort—but right now it seems like it's not going to happen. You look at your watch and the sky. It's our own fault, you point out. We sit down on the slope and watch the swans. They put their cheeks to the wind in turn, first one then the other, like sails. We're both speechless—the choreography of it is like a symbol. The lake changes colour from blue-violet to deep blue, shifting in a matter of seconds—three, four, a mere blink of the eye. Eight swans, now in a circle on the lake. The lake is not an eye. We tramp out a path on our wanderings around it, deeper and deeper. What is the relationship of the body to the voice. Prayer, declaration, oath, song, elegy, ode, allegory, novel. How can a voice be retained, how much can be altered without the voice becoming another. Nothing is ever the same. Therefore, there is no comfort nor any argument in favour of us being together at this moment; no reason _we_ should be together. Tone is determined by distance. Keep talking to me. The dying fruits hang and wither, folding themselves up into a wind harp of origami skulls. Fungus spores spread on the wind, are scattered by wind, rain, insects; flies carry the microscopic spores from fruit to fruit. The bruised fruits are the ones assailed. The biting, sucking insects, the grubs that bore, the wasps that gnaw, the birds that peck, and the hail that beats and batters—all opening up their points of entry. And the untended apple trees whose apples have been left to flourish cheek by cheek, so bountiful the fruits hang almost in bunches—these are the trees to be attacked. 
The spores wander from last year's hollow fruits to the new of summer. And we see it happen. We see our woman sit down in the grass, and we see the man remain standing with that rake. The straw hat—where did the straw hat come from—casts a shadow across my eye. A sandbank where amber and flatfish absorb the sun in shallows of warmth. We see her look down at her hands. She sees herself with his eyes. In his eyes she is not herself. A pear, a stricken bird dropping to the ground. By mistake a woman washes a woollen jumper in the washing machine, causing it to shrink—and commits suicide that same afternoon. A man takes his own life after seeing an unfamiliar cat get run over a few hundred metres down the street. Another breaks down crying during dinner, his partner having mentioned Dublin and the holiday they spent there, when they couldn't get a taxi and had to walk four kilometres in the rain. A girl lies sleepless in her room, in tears over having lost a ballpen that was special to her. A woman swallows fifty paracetamols after being handed a speeding ticket. I have a preference for plainness. Plain make-up. Plain plants—the weeping fig, for instance. The idea of _regular._ When things don't draw attention to themselves and try to be better than they are, or more out of the ordinary, like that. There's enough pageant in the world as it is. Enough showing off. PROLOGUE The first thing we see is grey, near-black earth. A warm glow in the blackness, a moistness of colour. The wind is the only thing to reveal time. Or the gentle arc of the plant in its lean towards the ground is the only thing to reveal time. The green with the black. The green against the black. The image is the softest shudder. The movement becomes a state. Again, we lose the sense of time into which we had settled. SCENE 1 A bed inside a bedroom. Night. Darkness. On a desk at the back of the room stands a lamp with a lampshade of green glass.
The lampshade refracts the light, making it fall like heavy rain. The table is sodden with light, like a forest floor. The light fans out around the lampshade and makes a halo. Another lamp stands on a heavy foot at the bedside, arching its neck. Its light is harsher. It falls coldly upon the two people in the bed. It is a dusty light that does not enshroud the body but seems instead to peel away a layer of the skin. Occasional hairs tremble, detached from one another, and every strand seemingly wreathed with light, individually and collectively, halo-like about her face. Illuminated. The man's face in profile. We see her features like a sky behind him. Her weightless hair exudes from her scalp and is as if kindled by the harsh light. The white surfaces we know must be the teeth. Removed and icy, he seems, the way nature can be: indifferent and dramatic at the same time. Pupils darting beneath the thinnest eyelids, partially translucent, the soft wafer-thin hull like the membrane of an egg, quite as thin and yet not as strong—fragile. THE MAN: You knew all along. THE WOMAN: That there was so little time. THE MAN: It's easy for you. You hardly noticed a thing. Hesitation, revealing a kind of solicitude for him. Or for herself. The crowns of the trees absorb nourishment from the sky. They too are roots; and roots become crowns in the earth-sky, the worms are insects there. THE WOMAN: Do you think that's how it is. That the journey itself exhausts a person. I think it's everything but the journey. In fact, it's more the parting. That, and revisiting what you left behind. The journey is nothing, really. Your paranoid look. The woman's throat. The sweep of the collarbone towards the arm. Goose bumps. Throat, rising and falling. And the skin, contracting around the body, the hairs as they rise. The skin breathes, the body in exchange with its surroundings. SCENE 2 A tall barstool with shiny, black-varnished legs. A high table in front of the window facing the garden.
She sits on the stool, erect: the light is summer's, it is summer, the breeze from the garden tugs at the white curtains that brush the floor like a hand passing over a knee, a disruption inside the room. Summer. The light falls in flat bands through the leaky walls, ribbons of light like swords plunged from all sides into a blade-box: that image, fleeting. And then again: the light, falling in through slats, slicing up the room. There are no colours. Sun draws the colour from all things. A dimness is all there is, and this insistent light that seems to want in to everywhere. Like jealousy, the way it works things open. It seeps between the woman's teeth, the narrow gap between her teeth. Light floods into the room like piercing jets of water, the shutters holding together the body of light, allowing only so much to pass. A kitchen, afternoon. Summer. The tall stool with the delicate, curving legs. Her hair hiding her face. The crown of the weeping willow, the one by the lake, hangs like a woman's hair at the water's edge. These lightest of touches, the sun. All the tiny hairs. A thought that may occur: that this must be the place. Closer and closer. A close-up of the woman's eye. It is half-closed, the eyelid covering exactly half the front of the orb. Her eye, the collarbone, the chest with its ridges. An image of a flower losing a petal. And another. Withered. The two images superimposed. Nature declared incapable, cheated, for the most part, she too. This is the way we sense her strength. A tough membrane. Her fingers are greasy, their tips are moist and we glimpse an eye. It glistens in the light. Her eye, and then her hands. Only the hands. Cuticles large and white. Fingers wet, vaguely orange. We search for an explanation in the image. An explanation in simple terms, a sphere colliding with the next. We find nothing. Our thoughts make our eyes homeless, our eyes beginning to wander. 
First within the image itself, then back within our thoughts, and into the image again, for we are unable to escape from the image into abstraction, not here. We search outside the image and yet within what is seen. We try expanding the space by means of thought—a still larger image. Thinking by visualising. We find her, for we have marked the place we left her, though not exactly, not the stool, but her body. A small blue mark, occupation of a country, a flag or a stamp—this is how we occupy her, by leaving our mark on her body. The woman looks up. We can find our way back to the marked body. Her hands are placed before her on the table like fish. She looks out at the garden. We see the garden as she sees the garden, through the double French doors. They have stayed open all afternoon. Shimmering warmth, something sugary that makes the dark green seem sticky. We turn our gaze towards her again, study her eyes to see if we saw the same as she. Her expression is concentrated. She stares into the distance, a very particular concentration that makes us think that she does not see the garden at all. Her expression is that of a person looking out to sea. A rhythm tapped out by a finger on the edge of a bathtub, the edge of a table, a hard and shrivelled fruit. Waves cross-hatch the sea, the way people cross-hatch landscapes and bodies. The sea seems to go on for ever. The work of cross-hatching goes on forever. The rhythm of the work, reminding us of states, other occasions, the memory of something repeated many times, the way you remember a season or a way of existence. One of her eyes. And the image of the sea. We see the two images superimposed. The two images are equal, there is no hierarchy. Time and place are unimportant, nothing is more imagined than anything else, her eye and the sea are equal because of their salience for us, here. The image of her eye gradually becomes clearer, more distinct. It seems almost to soften the sea. 
And behind it: the image of the garden. We see the three images as one, each superseding the next by turn. The garden, vanishing. The garden, vanished. Her eye now fills the screen and is the only image we see: her eye. A black pupil, mottled green iris, the white of the orb, the only part of the body that is truly white—apart perhaps, in certain cases, from the teeth. Almost invisible, these tiny capillaries; and at the same time we see the ocean reflected in the eye. We understand: she is looking out across the sea. We understand: the garden is the sea. The door is not merely a door leading out into the garden, but a door leading out to the sea, both existing at once. The sea, waves. And then once more: they are bright, a heap of rose hip on the table in front of her. Not a single stain on her white blouse with its ribbons and lace. Preparing rose hip in such attire. An old-fashioned blouse, it may be very old, an item kept and cared for. Then the three colours. The green of the garden, the blue of the sea, the orange of the rose hip. The three colours can borrow from each other, her skin borrows colour from it all. And at the same time we realise: in such light, in such heat, colours cannot exist. Everything is either black or white, the season of calligraphy. Not winter, as one might think. We hear the breeze. We see her ear—now we see only her ear. The image changes, we see the rose hip dissected. The white seeds like teeth. Her collarbone. And we see the bark of the birch tree up close. The two images melt together and she rises, her chest ridged like the bed of the sandbank, the same metal gleam. Green eyes. She has risen and moves towards the door. We see her from behind. We see the garden outside, and at the same time her eye, the sea reflected in its sheen, not distinctly, we sense it to be a kind of disharmony, the kind that is always present in the world. We see the garden, but then the sea. The sound is of the sea. 
The shift has occurred imperceptibly, the sound of the garden has become the sound of the sea, imperceptibly, and yet within minutes the sound alters again and becomes once more the sound of the garden. Trees. The wind in the birch trees and the aspen. The sound of light and wind passing through foliage. Through tall grass that has not been cut for a whole summer and a whole spring. This is the movement that carries the shift in sound, from the garden to the sea, the trees, that are the sound of both, always. And then: the heap of rose hip. The halved rose hips, rinsed and cleansed, in a bowl of black enamel that is matte on the outside, shiny inside. The colour of rose hip dulls the senses. SCENE 3 A wide beach, the sky. And the sea, slicing the image in two, as a revelation might, or an involuntary insight into the way something hangs together. To lose something one never thought could be lost. There are no sounds from the sea, but a person breathing. A sound from a body breathing—lungs, skin, breathing in and out. The beach has emptied, no one remains there, but we hear a person breathing. At first the breathing of one. Then two. Two bodies, breathing. First the breathing, then voices. What are you doing here. The sound of the sea surges in, as if until now it has been contained within a cloth, and then the deluge as the taut fabric is slashed with a knife. We hear the voice distinctly. The way it comes in over the sound of the sea, the sound of sand swept by the wind. The man and the woman move into the frame, they enter the frame, appearing on the screen from the right. They walk on the beach. The steps they take are many. We watch them walk. We watch them from afar. And then: the heel as it strikes the sand, the release of the toe, the knee reaching its apex, bent, stretched. Again, we see them from afar. Their voices remain distinct, as if we are very close to them. They make no special effort in speaking, the rush of the sea is a voice apart. 
In this way the voices stand out, rather like an unfamiliar black wallet left on a dining table. Something on top of something else and shining almost. Now and then a seagull cries, or else we hear the wind at our backs. The wind, buffeting the vegetation, the noise of leaves rustling above their words, allowing us to hear only fragments: THE WOMAN: ...to get away. It's like there's no room for thoughts when my head is blown full of sand, and that sound, the waves. She gestures, throwing up her hands, pointing or whatever. THE MAN: ...the body casts, of the mother and child. THE WOMAN: ...like that...Pompeii...you know... We move closer to the beach, venture some metres out into the open space, some metres out into the sand. THE WOMAN: The heat there—it's like your throat filling with sand. She grips her throat with both her hands, forming a collar that rises towards the jaw. THE MAN: They were so small. THE WOMAN: Something they weren't meant to see, something stolen, something you steal your way into seeing, but which maybe you... THE MAN: ...which maybe you shouldn't have. They look like children dressed up as adults. And the children look like animals dressed up as human beings. THE WOMAN: What do you remember. THE MAN: I remember the body casts, of course, the one of the mother and child. That's mostly it. But then I come to think of all the living, draining their water bottles as they wait for the train back along the coast. Water bottles strewn all over the place. And their faces, the gravity written all over them, the understanding that it would be wrong to laugh, that gravity is the appropriate thing, to be weighed down like that. And they think: I'm glad to have seen this, and glad to be able to go home again and forget such images. The beach, the rocks, and the sea—I suppose that's what they thought about. Tomatoes and lemon trees, lemon liqueur, lemon soap. We see the sea. The two people have left the frame again, the man and the woman.
They continue to speak, we hear their voices. THE MAN: A thin dog. A lost child. A despairing face. THE WOMAN: Did you see their mouths, the way they were open. You can almost hear them, can't you. They've been screaming for seventeen hundred years, interred in the form of cavities. Before eventually they were discovered and had plaster of Paris poured into their forms. I think of it like developing huge photographs, only in three dimensions. The plaster is the developer poured over the photographic paper in a darkroom. I don't think you can deal with looking at them unless with that thought in mind—that they're empty shells to us, reconstructions, like shadows of something you... you have to misrepresent, and see for what they are. THE MAN: They were all running, that's what I remember best. I imagined the way they tried to run away from the cloud of ash. Children in their arms, legs bent, that kind of thing. THE WOMAN: I can't think at all when I'm there, I'll never learn how. It's like everything dissolves as soon as I set eyes on the place, like it won't present itself as something real, it's a problem. All thoughts kind of disappear—whoosh—just like that, reduced to nothing. We hear the clack of two pebbles and understand that she has paused to pick them up and throw them into the sea, or else keep them in her pocket, or walk with them in her hands. SCENE 4 The boy sitting on that person's knee, that must be him. In the photo both his front teeth are missing. In another he is standing in cotton underwear, white underwear, on a quiet residential street. He's got big leather boots on and in his hand is a leash that droops away in an arc towards a dog the same size as the boy himself. His eyes haven't changed, but everything else has. His face has shifted in some way, it's the same face only different. Like the autumn, summer, death, is merely a passage, like all things. In the process of becoming the same only different. Nothing can be held up and compared. 
Contradictions don't exist. A beach, raked and made tidy in the night—next morning everything is different. He emerges from the mudroom. She is the only family he has left. His uncle and the others, who in a way vanished along with his father back then. They were in the boat. The yearning for a time one no longer recalls. There is a white orchid in the window, its flowers are three quiet parasols in the sun. Muffled sounds of activity from the kitchenette. A sink with a visible drain that disappears into the wall, a bar of lavender soap, the smell issuing into the room, keeping its walls upright, the ceiling in place. His rented room. THE MAN: Her cheek was gashed, she smiled out of her cheek. Her eye had been attached to its socket, but both eyelids, lower and upper [he points at the woman, puts his fingers to her eyelids, lower and upper at once], were kind of split open vertically and had retracted from the eyeball. THE WOMAN: A film about two women, an actress. She stops talking. THE MAN: I've seen it. It's brilliant. THE WOMAN: _Atonement,_ was that it. THE MAN: I think it was _Persona._ Their bodies, skin against skin. The violet tinge of the shadows, where an arm angles like an Italian stone pine (the slender trunk with its dramatic twists, the way you turn your head away suddenly in horror, an arm resting on his abdomen; various details, dark hair, a nipple with surrounding lactiferous glands, a hand, tendons visible through transparent skin stretching across the back of the hand, a knee, the rear of the knee where veins run close to the surface of the skin). THE MAN [looking at the plant]: It's doing fine. Look at the flowers, they're still there. He strokes her hair to reassure her or himself. She is nervous, thinking about their parting, thinking about the way it feels like a countdown; she is unsure as to whether she will miss him. If he will mean anything to her. THE MAN: It's fine, it needs watering, that's all.
They lie still and we see them from the side. Their legs look long. They are looking in the same direction, his hand comes to rest on her head like a shadow that won't go away. Some time passes. The sun moves across the sky, shafts of light escape through the layer of cloud and slant into the room. On the street outside, people pass by, a young woman with her grandmother, the obvious annoyance of having to take care of the aged. THE WOMAN: Are you asleep. THE MAN: Yes. THE WOMAN: I miss you already. THE MAN: You're tired, that's all. I'm right here. THE WOMAN: She fell through the window. The line of her back against the sheet. A split image, like an eye bisected by an eyelid. The shimmering violet where sheet meets skin. The work of gutting a fish, the palm of a hand pressing it flat against the board. The image dissolves with a shudder into a tableau of metal, tin. Some discarded roofing panels, variously grey and rust-red, tinged with green. Another dissolve, and we see her skin, the sheet, the sweeping line of her cheek, like a travelling droplet of moisture. Dissolve: tin, roofing panels. Dissolve: skin and sheet. Eventually, the two images replace each other so quickly that all we see is an abstract shimmer. The breathing of the two people grows louder and increasingly abstract, disintegrating. The faintest of sounds amplified, the image changing too fast for us to discern any single element. Thus we leave the scene: The two figures mid-stage in the bed, the spinning storm of images, the sound of their breathing as it turns into the sound of the sea; or the sea being the sound of their breathing, a sound such as that produced by pressing one's fingers hard into one's ears, or closing one's eyes and listening. SCENE 5 We see a gleaming, white-painted floor of wood. Long curtains drape wave-like, sweeping in the breeze in front of the open door. On the other side of the door: blue where there should be green. The garden is blue.
But there are sounds of a garden in summer, a bird. The low hum of agricultural machinery in the distance. THE WOMAN: Are you asleep. No one answers, the room is quiet. A cat comes in through the door and passes through the room like a pair of scissors through a length of fabric. THE WOMAN [again]: Aren't you going to wake up soon. No answer. Time. Now and again the curtains are lifted from the floor entirely, the breeze is gentle, the lightness of the curtains is like a woman who has not eaten in months, half-years, that same bluish tinge, the down that covers her skin, a lightness of movement and a peculiar masculinity as the bones, the jaw, the skeleton become visible. The lightness that resides in that. THE WOMAN: I remember I was going to pretend to be asleep. I've forgotten the reason, if there even was one. I suppose there wasn't, apart from the desire not to...miss out on that special kind of solicitude, or whatever you might call it; the tiptoeing about when people saw I was sleeping. I wanted to hear that. The way people's movements become cautious, as if they were actually walking on top of me like on glass or nails. The look in their eyes, without sound. But I woke up, it was evening by then, and all the guests had gone home. I don't know...I could tell they'd been there and had celebrated my birthday without me. It was for me they'd come—and perhaps they carried me into the other room. Had I really been asleep, rather than just pretending. Are you asleep. THE MAN: Hmm. THE WOMAN: Please don't. A sound of crisp sheets, as if the bed were made of the driest straw or paper, when the body settles. We see the fabric, the skin, as the duvet is drawn aside. Their bodies are a single beast, sleeping. THE MAN: What were you thinking when you bought this place. THE WOMAN: Were you there that evening. Did you see me. THE MAN: It was because of the garden, wasn't it. THE WOMAN: And did you leave then. Couldn't you see the difference.
THE MAN: The difference between what. THE WOMAN: Real sleep and... A woman's hand cuts into the lower frame. The hand is slender, the fingers long. Tendons flex beneath the skin. It rests lazily on the shiny white of the painted wooden floor. THE MAN: Hmm. THE WOMAN: ...pretend sleep. That...goes wrong. A silence. The woman crawls naked on all fours across the floor, reaches out and grasps the bottom rail of the French door to pull it shut. A hand reaches out to clutch her leg. He grips her ankle and she pulls free. THE MAN: We've only just met. You keep forgetting. Sleep now. Can't you sleep. THE WOMAN: I don't think I can. The woman is making up a story about him, a story about the remnants of an encounter. You're tired, we walked I don't know how many kilometres along the shore, you must be tired. We see the woman hold a hand up in the air, opened like a fan. She turns her hand as if considering a prism. The ceiling lamp's severe splay of light against the wall. An image of the wall: the figure of the fan. The graphics contained in the movement of the hand, fingers like slats. The alienation that can arise all of a sudden, as abruptly as the opposite arose. Everything can be reclaimed, it's so obvious here. The mobility of bodies, thoughts. One minute—the next. That collapse and the resurrection into something like: togetherness. One, a union, without end, and yet always ending. SCENE 6 A glowing coal he suddenly holds in his hand. The sun squeezed into a black ball. Wishing for something, or wanting something. SCENE 7 In rain. A summer, everything threatening to burst into flames at any moment. Bonfires and watering are banned. All glass is forbidden, mirrors are, tin foil, gold leaf. Garden waste piling up, because we want things tidy too, and the hedge needs doing. We want brand new views, we want _vistas._ If we dig up all the bushes on the hill we'll be able to see the sea from the decking. And he might, you might have gone back to your parents. 
If only they could cut down those trees, the cluster of trees that block a view you remember from when you were a child. The view comes creeping forth. And we dream again of rain. The late sun has to ignite a whole landscape, though it's nearly on fire already, and of its own accord. Burning. And then it comes. A drop that strikes a cheek or a warm arm. And then the next, and the next again, and all at once the sky rips open, rain pours down on us, plastering heads of hair to scalps, nature opening wide its mouth, gaping gullets, all drains and ditches, clothes, skin and veins are vessels greedy for rain. In the towns, water fills the streets. It travels across the road in its patterns of herringbone, washing with it a jetsam of newspapers, plastic, matchsticks, and still it falls, heavy drapes of fabric hung from the sky. The streets are overflowing, the rain rising up above the kerbs, to the doors of the houses. It seeps into their entrances, trapping the people inside, who stand and watch while the water floods in like light under their doors. A single door, opened slowly. We seek the high ground, clambering onto furniture. That kind of rain. And maybe eventually you manage to enter your building, you let yourself in, press open the door, step into the hallway, close the door behind you. The encroaching water, so quietly it flows. And slowly you realise: this isn't where you live, this is the wrong apartment. Yet you take off your coat and walk through the hallway, into the living room, to kiss a woman who has no idea anything is wrong. Or a woman who doesn't think anyone will come, that all are drowned, and that either she takes this man or no one at all. This may be reasonable. Arbitrariness is a fact we must live with, a fact we live with regardless of any other circumstance. Presumably that would be the kind of thing such a woman might think. Dryshod first. Nice, shiny black, patent leather shoes. Round toes. 
And she goes into the kitchen to see to the dinner. Maybe she leans across the counter and sees her distorted face reflected in some surface. Maybe she speaks her name out loud, in fact she does, she whispers it so that you may not hear. She listens with eyes closed. The way he takes off all his clothes and puts on those of a stranger instead. Your transformation can be seen in the woman's face too. Her face changes. It's what faces do, change with every new person they love. My face is your face. We decide to make love. My face is your face. And when she opens her eyes, her eyes have become white. She closes them again. When she opens her eyes, they are black. The entire orb, black. She tries again, and now they are blue, they are acceptable. Borrowed new eyes, and yet seemingly so—and this is the word she thinks—REALISTIC. We realise this is important to her, to live a realistic life, whatever that is. And when she puts a dish of steamed fish down on the table, she thinks to herself: This is a realistic fish. These are lovely potatoes, he says later, and she thinks: This is a realistic thing to say. At this point in time it's realistic that he say such a thing. Compliment her on the food. And their entire life together can be realistic, she thinks. The thought comforts her. The fact that she can ENVISAGE it being so. And so she is reconciled. Everything that may be seen, and everything that may be envisaged, and everything that nearly exists. It all runs together in her mind. Whatever difference there might be. Whether it matters. You drink wine. You talk. She confides in you, and you nod as if you understand. Or maybe you really do understand her. Maybe she really is the one you have always loved, maybe she's the one you were always looking for in your girlfriend. Who would go looking for someone they never knew. I suppose that's what occurs to me—that you never knew the one you loved. Only nearly. 
The woman switches on the light and we see the man has lain down to sleep with his head in her lap. She strokes him, we see his hair, her fingers at first trembling, then becoming steady. The scene draws out, slowly, slowly. The woman's hand moves with increasing weariness, heavily it proceeds across his skin, comes to a standstill, then jolts abruptly into motion again. All this is seen in close-up. There is nothing else in the frame than this hand and the trail of light that traces its movement. It's like seeing the hand and the night through the sights of a rifle. As if one could blow the two people away with two shots and put an end to it all. We feel empowered, a feeling superseded by something like: the sense of having no power at all over anything. Impotence. That kind of feeling. Being subordinate to the two bodies. While eyes might be imagined that can see in the dark, one can never imagine a human body that does not at some point fall asleep. To be trapped in the body. Though perhaps in the proximity of another, a real body next to one's own, or perhaps on top. The feeling that this possibility exists. To lie in one bed, two people; to lie there all four, five, six. Riddled with holes and alone, an insane multitude. THE WOMAN: You know, I was thinking, that when I met him it was like something happened to time. I never got older. Not by a single day, not until he left me. The year after. The woman puts her hands to her face and explores her skin. THE WOMAN: And then you wake up [she repeats] and your face is the same as two, three, four years before. Then straight away, at a single glance, the body ages. Just like that. Like a skin sloughed off, as if the realisation of standing still precipitates collapse. A kind of hideous unmasking. You've yet to see what damage it did, the time that was spent together. And then there you stand, with the wreckage of your face. You think: I can't ever see anyone again, not with this face. 
THE MAN: You think the whole world will turn away. But then. THE WOMAN: But then no one can tell. Your face has looked like that all the time, and you're the only person not to have noticed. THE MAN: That may be the most terrifying part of it. Those around you never letting on. THE WOMAN: And so it turns out you don't know a soul in this world. The fact that they saw nothing. Said nothing. Or the fact of you not hearing. SCENE 8 A bed, in the middle of the room. The man and the woman tightly entwined. A tangle of arms. Sleeping. Only a blue sheet covers them, or rather: it has slipped partially from their bodies and hangs to the floor like water running over the lip of the bathtub in their hotel room. Like honey spun from the comb—a limp disarray of arms and legs, and blue light. Night, or early morning. Summer. We see them on their stage, from the audience, and from above. A film, the movements made in the course of a night. Images bleeding into each other, a night of poses edited together, physical arrangements, more or less: a single body. At least: a single movement. And at the same time: the image of a hand. We understand the person is asleep. The hand of a woman asleep. The navel, and the suggestion of her sex; her hips. The image remains, longer than we thought we wanted it to. The light is soft. We see the throb of her pulse in the neck's artery; and through every image a persistent crackle, the sound of something ablaze, and yet not. It is a human sound, a sound of the body, though uttered in language unintended to refer to anything else. The sound of a body when consciousness is rendered unconscious by the truth of sleep—unchoreographed sleep. EPILOGUE An image of the lava pouring down the slopes of the volcano's cone. Plants withering in the heat. It's as if the volcanic soil is being fed. The heat chars everything, makes everything its own. An image of a blackened landscape. The volcano sleeps, the lava stark and solidified. Everything is burnt. 
We see what must be the remains of the green plant. We remember it. There's a kind of grief over time having passed. The picture is still, without movement. After some time something stirs, dust being shifted by what we can only take to be the wind. The charred plant pulverises in the disruptive air. We see what used to be plant, disintegrating in that way. Dust whirls and settles, a fine mantle of black on the stage. The woman must close her eyes for the dust not to make them dry and lustreless. The man shields her face with his hand. Covers her mouth. He shuts his own eyes tightly. We watch as the woman rises to her feet, removing the man's hand from her breast; she rises slowly from the bed and steps out of the light, passes through the dimness, to a table behind the bed. The murk is green and soft. We see her body become another in the green. Light has age. That's what hurts about light, and what is uplifting about darkness. The body understands this. She examines some papers that are spread out on the table. With this act, time passes. The body adjusts to all things. The body accumulates time. The body takes in the time of light. The morning light is 8.3 minutes old when it is shed upon our faces. We absorb this time, becoming older and older still, depending on the amount of time that is shed upon us. If we stay in the dark it's different. But the fact you had to go home again. Or me having to stay. There are various chairs in the room. We see their silhouettes, along the wall and drawn out onto the floor. The shadows they cast. The house is by the sea, slightly back from the shore. A hundred metres, or a hundred and ten, a hundred and twenty. Depending on the tide and the storms. A small patch of grass out front, juniper. Dark, green sloe. Heather, dry grass. We see the house on a summer's day, a face, a boy reading. 
His eyebrows are knitted close and look like the leaf of a fern, the same shape, widest towards the middle of his face, a slender, freckled nose. It is the height of summer, the dreadful spring has passed, endured. Now the days run together, the way light can run together with things that radiate, the same light, one day and the next. The approaching of a point, a place in the woods, simultaneously from two different positions. The darkness of night is no longer darkness, more an inky kind of light, a bluish rendition of daylight, as it is here on this night, with the sea's fog descending upon us. An odd collapse of time, like a row of books on a shelf, the glue of spines vanished and gone, the threads of bound volumes rotted away. A form that surrenders and leaves its elements to stand on their own. The being unable, incapable of maintaining standards, the way you might give up on what is done and what is not and simply hang the washing out on the balcony to dry. A whole human, when thoughts dislodge and drift away from the body's pain—this is what we hold up to the light and examine, this is what we see. Okay, so this is what happens. An outpouching of a spinal disc, the fluid that seeps, pressing against the nerve, like when she went to open the back door for the first time after her long trip, a shove of her shoulder against the rail, a gummy complaint from the rubber beading, the wood that had contracted and expanded so many times since she had been there last. Where have I been—where have I been all this time. Time, accumulated in all the spaces, the gaps in between. Between her lips. The spine, not only holding the limbs in place, but also upholding a relationship between an arm and a point in the brain. The different parts of the face, merging. The spine, the books on the shelf—when the first page succumbs, and then the next, a wing whose feathers loosen and dislodge one by one, dropping to the floor like birds shot down from the sky, or stumbling horses. 
Time as a frail form, the scenes that dislodge from time, whirling in descent. Some words that tear themselves loose and keep returning. The man and the woman are present in the apartment. They have breakfast. He is going back to the place they met, he says it's a year ago now. She turns the empty eggshell in the egg cup, as she does every morning. That we should meet there. One always expects some sort of payment in return from the world, signs perhaps—coherence, or some kind of solicitude. Outside, and behind her, a pigeon pecks at a scrap of tin foil on the decking. We see it from the man's viewpoint. The tin foil flashes light into the room, a window of light that disturbs his vision. His eyes moisten and run. He blinks, though without turning away. The sound of the pigeon is foregrounded, while the camera focuses on the woman—her fingers as they turn the egg, the way she positions the egg cup at the centre of her empty plate, as one might place a tower on a town square. She finds churches tiresome. The exceptions are bombed churches, those derelict or being restored. The sound of the egg cup against the plate, the sounds of a kitchen. There's something human about the apartment. The way it breathes in the background. It doesn't feel like a whole year, he says. She wonders if this is what it's like to be old, not really understanding that so much time has passed. She picks at the egg, the shell white as the white of an eye, faintly speckled, or tarnished—that's the word that occurs to her: tarnished. Her parents always bring a dozen eggs with them when they come over from Jutland. And a jar of preserves and an orchid. They are fine gifts, she thinks. Simple, yet fine. The thought she had before about old age is too obvious, she thinks now, banal. She would never utter it out loud. But then he does instead, as if to spare her the embarrassment. I can see she's unhappy about having to leave. She doesn't want to go back. 
But it's not the going back she's unhappy about—it's the opposite. Leaving, never to return. And you think I'm scared of flying, which is touching in a way. You put off going to work, because I'm flying that far—across the Atlantic. The engines quieten abruptly, and for a moment she thinks: they had a year together. What does she feel at the thought. We see her lips, the way the lower lip is curled back into her mouth, the way she bites tiny flakes of skin from its surface. First one side, then the other. It might be understood as concern, but it could be anything. The house by the sea is hers. It's a thing she owns. It looks like something that found its way onto the land, washed up like wreckage tossed on the surf, that critical point where the waves are as tall as the water is deep and break at the crest, break and break and break. She connects waves with a variety of things: Virginia Woolf, death, summer, loneliness, conquest. She thinks it's the most pathetic list she can imagine, but there is nothing to be done about it. Dedications, epitaphs. There are different kinds of recollection, but she is not interested in making any kind of division. It interests her less and less. The opposite makes more sense, finding a common denominator that brings things together. If there isn't enough light in a room, the picture will be blurred or non-existent. The available light is not inexhaustible. If there isn't enough darkness in a room, outlines will be erased, faces extinguished. The available darkness is not inexhaustible. The house comes into sight and vanishes with the shifting of day and night. She's thinking of learning a foreign language, to connect up some more regions of the world. The house is built on top of a slope. Its tall wooden panelling is painted white. We see a man's hand, fingers held flat, a sparse dusting of dark hairs over the wrist. Tendons beneath the skin. 
The fingers travel over the painted panelling, pausing near-imperceptibly at every join in the wood. We study the nail of the index finger. The cuticle is the same colour as the panelling—this is what we recall, or else the body recollects, though as a dream retained inside, the reappearance of something, like a person you've seen somewhere before but can't quite place, a thought encountered in a book, something you felt, only not in any language that was able to absorb and retain, a language open at both ends, through which things merely pass, a language unable to save. One can always see what's in a person's eyes, if only one's own reflection. Shiny surfaces can do that, reflect the self, whereby they are the truest nightmare. The constant reminder, the casting back, all those ideas and fantasies. Many of the trees have been cut down, the trunks already sawn up, ready to be loaded and driven away. The estate owner is quiet on the phone. I tell him how much it would mean to us—me and my boyfriend. He yields and concedes that he has been out looking for the tree too, or at least his daughter has, after I sent him my description of where it stood. The pale pink stones you collected on the beach and placed at its foot. We arrange for me to stop by and show him the spot next time I go to Fyn. I wonder if you will think of it as the gift I want it to be. Or as me trying to write myself into a story in which I don't belong. Picking at the bark. I wonder what kind of instinct is at work inside me. Whether I'm nostalgic on behalf of others, or a parasite on their suffering, dependent on the yearning for things lost, recalling the past to such an extent I recall even those of whom I have no recollection and whose histories with me I gradually invent, remembering in advance. There's a thing called _false memory syndrome._ I don't know what to make of it. I know a lot about wanting to be a part of something. 
I know everything about standing on the outside, misting up windows with my breath. The sea is gathered at the bottom of the picture and one could think of the picture as a container—it could look like that; the sea flooded into it, from some leak in the sky, an expanding basin inking in a horizon, reminding us of something we knew but couldn't pin down and still can't find words for. The two people, the man and the woman, perched on wooden uprights at the jetty, waves lapping beneath them. We see them against the light. In the sunset they're just a pair of silhouettes, feet dangling like the heads of wilted flowers, cumbersome weights. The sea has promised nothing, and as such it is uncapricious. It swells without will, withholding nothing, revealing nothing, devoid of any narrative, simple or complex, that could cause confusion. We see an arm reach out—there is a gap between them and they must tip their bodies towards each other like jugs in order to join. The sun torches her forearm in two just above the wrist, like some accessory come apart, a rope giving way at its weakest point having chafed against an iron mounting, the snap of webbing in the upholstery of a chair, the resultant disintegration that spreads like a creaking, crackling fire. If you put your ear to it you can hear each and every thread, succumbing. If you retreat from someone you love, eventually you will hear only the tiny popping of blisters as they burst. If you put your head under water, you will hear only air rising in small and insignificant pockets, invisible to the human eye. Whatever it is. The microorganisms, the flies. Everything contains the possibility of seeing things in new ways. Thus the revolution—the potential of all things resides in ourselves. The way we see, or maybe the viewpoint from which we see. What heights may be scaled, what graves dug for the self. We see them from the quay, perhaps from the vantage point of a tall stool—a barstool, say. His breathing is unsettled.
We hear that. The stutter of his chest as it rises and falls, and yet at the same time the exactness of it. We see an unbuttoned shirt, a spray of dark hairs on a chest. We see him from the front, he shadows our skin. It takes a moment for the eyes to adjust—at first everything is black, the particular blackness of backlight, that contains all colours. He taps a finger against the ridge of another; unlike his breathing the sound is without rhythm, exploratory, human in its tone, an amalgam of wood and teeth. The sea is charcoal grey, silver, and orange, speckled as a heavy fish whose scales parry the sun and send it ricocheting in all directions, causing structures to shimmer. She does not sigh. We hear only the sound of her breathing now. She lowers her hand and places it in her lap. A close-up of the hand, the thumb folded in the palm, like a jewel or a bone picked clean. The skin against the blue fabric, the structure of the skin, a topographic map, tiny dashes of purple and grey, the pink tinge of the knuckles, some veins. Lines of the skin, lines of a map, contours marked with elevations. Waves break against the uprights. Sea spits at the woman's toes, the man's leather soles. The city is a backdrop, a shawl at the moment before it is drawn around the shoulders. The city's heart-rending solicitude, the disconcerting rumble of the metro, an anxiety of nature, that in all other respects knows no such symptoms of chaos. That patent love of simplicity. The silhouettes. The two figures on the wooden uprights of the jetty. I spoke to that girl again, you know the one. I helped her boyfriend once. I don't know why I'm telling you, you're not supposed to know. She turns her head and lowers her eyes, perhaps as a sign for him to continue, it's hard to tell from a distance, but whatever it is he goes on: I helped him win her back. All I did was state the obvious in writing. He couldn't find the words himself. No. What he meant. 
A pair of heavy waves roll in from a passing ferry. They draw their feet up, the way you draw children to your chest at a busy road, the way you hitch up a long dress to cross over puddles. What do you want me to say. His eyes turn hard, like horn or bone. He looks up at the sky, throws out his arms in an angular gesture. Bird-like, the way he sometimes is. The horse lowered its head and trotted flat-backed off into the ruffled landscape, disappearing from sight behind the barn. They could hear the hollow thud of its hooves against the ground, their familiar, graceless rhythm. She looked down at his hands and he followed the arc of her gaze through the air, attentively, the way you might watch a bead travel across a floor before bending down to pick it up. After their fight she feels like her love for him is burning a hole in her pocket. She doesn't quite know what he _expects_ of her. There's a fatigue, too, that evades capture. I will always think of her as something I failed to let go in time, a burning coal in the hand, a watering eye dripping between fingers, a leak within the world that implants itself in the body, the telephone wires that slice through the poplar, the various patterns of the sky, Lille Strandgade, Skt. Annæ Plads. He let slip that he used to see a woman who lived somewhere around here. In that building there. He pointed up, and naturally she was unable to stop herself from looking, even if she told herself not to, that it was the last thing in the world she needed to do—and the feeling it left them with afterwards as they sat on the edge of the Gefion Fountain was mostly one of no longer having access to each other. It churned away inside. What can be gained from overturning a table, dashing a vase. She was beside herself with rage; the scene played out in front of her, and she was her own audience. You've broken our things. He gripped her tight, her wrist. They fought, and for the first time she felt his anger in that way. 
He wanted to hit her. _Things._ For him. They fell asleep in the afternoon, wrapped up in the duvet, pupated. The cover left marks on their skin. They woke up and made dinner, the day turned on its head. The day unhinged. _His things._ His annoyance at her not treating his things _properly._ Their fight was more about that than anything else. Cutting away the clutter, that's basically what they can't agree on: _things,_ the distribution of things, and how to treat them. A battle to behold, own and use. The horse was out of sight. It stood at the pond, lifted its head, listened. The horse's eye, with the deformed pupil, not round and smooth like a pebble, but spongy, moss-like, misshapen. He noticed as it stood tethered to the hayrack, the sun angled down and he called for her to come, she was pruning the brambles. She went over, the shears in her hand, heavy like a pistol at her side. Look, he said. And she looked, tilting her head as she did so, pressing her face close to see. Shh, they said, to reassure the horse. The bike's wheel buckles when in the night we cycle to the sea, unable to sleep. I remember you came home and sat down pale at the table like a theory you suddenly see through but have yet to abandon. Speaking a foreign language in your own country. You say you never felt as healthy in all your life, and now you understand your parents. You're shaken by the attack, though you hardly remember it. In a way we are all under suspicion. An awning unfolding over a sidewalk. Deprived areas of the mind, deprived areas of the city. The rim of the cup is an echo of the moist rim of the eyelid. She holds the cup cautiously to her eye, lower eyelash resting on the rim of porcelain. Her eyelash, fifty or sixty jointless fingers gripping the cup. The cup, hanging from the flesh like a droplet collected and poised to release. He calls out and says it's only him. We see her closed eyes. The daylight is revealing and at the same time anything else but revealing. 
As if there is a filter on all things, making everything look like something else. Her feet are on the kitchen counter. The room weaves like an old riding-school horse, it scrapes at the sand and tosses its head. The sound of a bridle, the sound of worn leather on worn leather, such heat at two o'clock, such heat at three; the first bell from the church; we see her eyes, startled by the sound, the twitch of the skin around the eye; or the eyelid's collapse at the hammer's strike. The splits and cracks of the epidermis, the contraction of minuscule muscles, like fabric pulled together in little spasms by impatient hands at a drawstring, a thread severed by angry teeth; to disturb such order is a crime; and his body is a gathering storm, a wind yelling down the avenue, collecting up the leaves, collecting up the newspapers with their hideous headlines, collecting up soil and dead insects, sloughed skins, pupae, a gently cupped hand that is the upper section of an empty beehive. Her chest, rising and falling, a wave on the sea running in from two sides, two waves joining together some twenty or thirty metres from shore, like two cold hands pulling off gloves, reaching out and grabbing hold, the white foam at the crest of one and then the other, a mane of colliding momentum, and for a few short seconds the surging rush of union, the way they seemed almost to travel inside each other, like they too once travelled inside each other, a fire seizing hold and galloping across the fields; their two directions becoming one, striving for the shore, that kind of wave; and a chest, rising and falling, trembling, dismaying, its rhythm being the rhythm of the waves; up through the avenue, across the flat expanse, across the beach where the pebbles roll and shift, her eyes, we see them beneath the skin; her eyes, rolling back, a horse taking the bit, tossing back its head, running. 
There is such a thing as directionless movement towards each other, and there is such a thing as the opposite: directionless movement away. Outwards, and so on. Grief is without direction, grief seeking out the hollows in this world; so in what forests, in what rooms; nothing to regret, nothing can be endured. Colonies, wastelands of history, a remembered image of avenues of trees in concentration camps without buildings, only trees remaining. The meticulousness of memory as to the items of pain—objects that collect pain and preserve it. A spoon, for instance, that can't be forgotten. The thought occurs that memory cannot withstand _things,_ that somewhere there's a saturation point, a collapse relating to concrete entities—they become unbearable. The childhood home's presentation of things forgotten, that the conscious mind lacks the strength to carry around on its own, objects bearing witness to our demise, slow and disconcerting, the body becoming brittle and unsound, cells dividing insanely, the blood cleansed no more, hair lost by the tuft or little by little, the nausea, the convulsions of the stomach, the bitter swill of bile that gnaws at the tooth's enamel and eats into the oral cavity, the eyes that cry when they no longer can see the body in which they were set. Homeless eyes, for the body is another; the eyes are the only things whose form remains unchanged when the body becomes—deformed. They can yellow, and be bloodshot. Two waves we see, that meet and mingle, and surge against the land, a heavy stage curtain drawn up onto the beach, a cool, abundant quilt to cover those who doze, the shells and the creatures, the sand fleas as they spin, thrust here and there on the lather of the sea; blanks blighting our thoughts as we bask in the sun, and soon we are unable to think at all, the mind's every formation shrivelled and forsaken in the bowls of the grief-stricken; implosions of universes, of days that might have been, but never were. 
And as she stood there looking out on the sea, it was as if a change in the weather coincided with a voice, and the light transformed, outside and in, becoming colder in the same way. Admiring the view, he said. It was like an icy hand gripped her foot as she lay dreaming and snatched her onto the floor. She turned, and there he was. She knew he would be standing there like that, his hand still on the door handle, as if to make his intrusion seem fleeting or coincidental; as if it were _natural._ Yet it was anything but, she thought to herself, and that atrocious comment, too. She decided never to forgive him, the way feelings have to be decided to make them last. It's like what there is on the beach—time engulfs it all, washes it all away; a lump of wood fades and deteriorates, and one day it no longer exists. It's the same with feelings. Whereas decisions endure; they are how countries are governed, and how we govern our lives. The decision takes an emotion hostage and endures, perhaps fading and deteriorating to a certain extent, but still _remaining._ Can I come in, he says without waiting for an answer. A bit like colliding with a brick wall in the dead of night when you can't see a thing but think you know the road exactly and can find your way home no problem. Not even the greatest revolutions change the future as much as they change the past. A garden of ghost trees, skeleton leaves with the tissue carefully extracted by means of two fingers; the frame of the leaf is soft and moves like hair in water, undulating gently, yielding to the gaze; the figures as they run through the garden, and the wind is a hundred glancing eyes, dusting the scene with a solicitude accumulated from all that the eyes have abandoned, all that has departed them. Tears are the eyes talking about loss. And the eye becomes our only access; the garden itself is an eye, and the stories we tell are all about the eye, what the eye saw and what from it departed. 
The way the lips move up and down, shaped by our words, and your facial expression when you talk, miming the eyelid's sweep over the orb, again and again. Any kind of movement is a rhythm when considered from a sufficiently distant place. An hour, a day, a life, a millennium. The aspen leaves, tossing like horses' heads in the sun; the sun, reflecting in the eyes, the metal of the bit; the leaves, minuscule mirrors of green throwing back the light, the eyes of the tree. We linger in the garden, longing more than seeing, fidgeting with the past, blinding ourselves, patching thoughts on to everything that is, or was. Turning back again, once more into stone. The milky glaze of the cataract, untreated disease, the years of horses, the march of days, falling into patterns, the way the waves again make patterns, herringbone, or perhaps merely repeating a shoreline over and over again, the outline of a human, over and over again, a hope that he might come and look for me, a hiding place constructed so that you might be found, like in the garden before, when everything was transparent, made so by you, and you were surprised that what I was looking through was you. And therefore I could walk simply past and carry on through the garden, because that's the way many of us are inside. The fact of never being able to deliver what's asked of you, but always something else instead. The surprise, the _unpredictability._ A feeble attempt to move on according to one's own designs, and yet: going through motions mapped out for us from the start. Who is it, you ask. Who is it you miss. I have come to you beneath the oak tree. The day is darker and cooler here. Tears run down your cheeks. And thus we remain in what was, allowing ourselves no release to enter tomorrow, or even today, in any guise of life. I gaze up into the crown, leaning my head back, opening my eyes. My father phones and tells me he's wearing a headset. He's in buoyant mood, driving across Fyn.
He says he's happy about the weather. I was trying to count all my various texts and had the feeling most would have to be deleted again. I've been thinking I need a system, only then I think I need the system that's already there. My father tells me the students all passed. It's spring, he says. There's no bringing him down. It can be like that a lot—things falling out of synch, an awkward collision of waves, or a boat battered by a rough sea. I miss the ways of the fields, or the rules that exist. The attention paid every morning on the way to school, my father talking about the fields, the work of harrowing and sowing, the harvest, and what would happen next. The frost, if frost had fallen in the night. The patterns left by the various machinery. The brittle white hoar, the key in which the body is tuned; the _mood_ dispensed. I push the blue bead bracelet up my arm and examine the burn. I'm no good at secrets, they seep out of me. The budgie flies from its perch in the open cage and settles on her head. She has given the bird a name. It belongs to her, a gift. It's a young bird, six months or a year old, blue in colour. It sits on her shoulder when she walks through the village. She hears her mother pass through the beaded curtain on her way out into the garden. The bottle with the pear in it, the bird on top of her head, the clack of her mother's wooden shoes; the swish of the beaded curtain as it parts. Straight-backed, she measures across the room to the birdcage and kneels. A gentle nudge and the bird flutters back to its perch. She closes the cage again with a curl of her hand. She wipes the streaks from her face and follows her mother through the beaded curtain, her mother who is already up the slope and past the apple trees and the woodshed, the heavy washing on her hip in the red plastic laundry basket, her arm angled out to clutch its rim, her torso a tilted counterweight. It's almost midday, and the sun is high in the sky. 
She starts to run, and the labour of breathing interrupts her sobs. She becomes aware of the fact. The way her body works to find oxygen. Running seems little more than that, using more and more energy in order to breathe, the act of breathing becoming a means of thought, something at once complex and automatic, a state for which one might silently yearn, the motion of the body releasing something inside, something you never knew was stuck. She catches up with her mother, who holds out her hand. They walk together to the end of the garden, the part that borders the fields. It will soon be harvest. They look forward to the smell of harvest, to the mowing of grass and the making of hay; the grass, turned and turned again to be dried, baled, gathered in, stored away for winter. The skin of her legs is like a landscape to anyone close enough to see. Her legs, never fine, forever a patchwork of bruises. Once she counted a hundred. So they say. You have to adjust to the sound of trees falling. He takes her hand as though picking something up off the street, something valuable, the claiming of which nonetheless feels embarrassing, but lost is lost, and there it was just lying there. Again she thinks about the way her body always seems to precede her, like a light that can't be caught. They return to the place where the shards of his father's urn were interred, at the foot of the tree. Each year brings some minor discussion as to which tree exactly. But she knows it's protected now in a way and is glad she could make that happen, at least. The fact that she did something. How many times a helping hand, how often a shield. There are no prizes to come back for. That's how it is with spring, there's only so much to go round. The rest is: stumbling. Then winter comes, or just the autumn, as if things weren't bad enough already. You offer no resistance, and change by the day. 
The low dry-stone wall here behind the rectory, the many fruit trees, whose web of roots must be so old and so expansive the cherry trees, the cherries, grow upon the dead. All is but passage. A new organisation of matter. In a way, such roots are like the gaze of an eye, looking forward, looking back. Like the two of us, in the midst of these years. It's the oddest age—one feels oneself to be standing on the brink of something, even if that's always true. It's as if something is taking shape, accumulating inside. This dawning conviction. _Almost_ having hold of something. Untreated yearning skins over, and sores heal best in the night, when the body is at rest. Is it a problem, that I'm not sleeping well. Hardly at all, or having these nightmares no one cares to hear about. We see her standing in the woods, holding his hand. They speak to each other, we see them from a distance. Further and further away. There is a rhythm to everything—considered from afar, that's how it is. The footage from her childhood is characterised by her father's hectic panning shots. His wanting to get as much _in_ as possible. The chatter of the two girls can still be heard, the camera panning across the lawn, sweeping by the new chicken coop to the trees at the boundary, the washing line. Almost having hold of something. Her face twitches almost imperceptibly, as if someone were pulling on a thread fixed to the belly of the sun, a direct line to death or eternity, or some other such thing. They sleep together in the bed his father died in. Mushrooms in the field, pasty lips in the fallow, greying grass of winter. A child, a snail's shell lifted to her ear. The leaving of someone, from within to without. The way warmth leaves the body. But you're used to being abandoned, you can sleep anywhere, all you have to do is close your eyes and put something in your ears. Crossing the Atlantic. You're used to it. From within to without. Sometimes he feels a warmth trickle down his hand. 
Like a wound opening, the blood coming out. He looks down at his fingers. Turns his hand. Nothing. Might pain be a way of exploring the world. Can we look at it like that. We have different ways of feeling things, the way flowers are different depending on whether they belong to someone sick or dead, or to a garden. His long legs. Like the gardener, he works nearly all the time. There's always _something,_ he says, people wanting hold, asking him things. And no one ever comes back if he passes their questions on. Questions, that's what we're left with. In a way, the past is the only thing we have. Who can say they feel more now than they did then. If you turn and look back for someone, we all know what happens. The fact that we do so anyway. Through Aprils we proceed, forever looking back, idling in this fossil state you find me in tonight. Thin is the darkness, spilled between the birches. You say the first eider fowl have laid their eggs at the fjord. You point in the direction, with a wave of binoculars. It's too dark to see, but we can go there tomorrow. I find myself thinking your fingers must get cold. The air has no warmth. In the night we hear the sound of birds unfamiliar to us. Migrating waders, you say. We hear them more clearly at night. When the wind is spent and the air is a mist. We are calm as far as it goes, and lie here accordingly. My body precedes me like the beam of a flashlight I carry through these empty buildings, the only light there is. The snow descends regardless, uncaring of whether we stroll the boulevard or the shore, the snow descends _indifferently._ It can be shifted and cleared, or squeezed into a sphere in the palm of a hand. But from sky the snow merely descends. Snowfall in April, the month of my birth nearly thirty years ago. It's easier to part with something that _is_ than something that _was._ Everything we lose remains inside us, while everything we have remains invisible. 
The thought occurs to me that the loss I feel can be seen in my eyes, these index cards, place names; all you see are the titles, the rest is in the archive, lining the corridors. In storage. Waiting to be thrown back like the sun as it strikes the pane, across the square, the pond at Agri, a childhood home. The only thing I note at this late stage is that words spoken in love resemble the mutterings of drunks insofar as they are uttered under cover of darkness and therefore vanish with the coming of day, a morning in April, late Easter. Fortunately, some infatuations never pass. Prove me the opposite. Clasp your hands together. Clasp your hands _like this_ into a stirrup. He prods her with his foot. You can't sleep now. My mother says everything was fine until we moved out. We all have our failings. All the rooms are filled to the brim. She talks of going on a walking trip, a few weeks. Spain, maybe. Forward motion. Making the effort to propel the body forward through the snow, though everything feels like you're stuck. The drifts of snow. A black bird on a blue sky. Branches entangled in sky. Who am I to cause such disturbance. Twice or more she sees a blackbird pecking at the roof window above the bed, is woken by its shadow on her face, the cooling of her cheek. She is warm when she sleeps. Others besides her have moved out. Basically there are two kinds of people. _Me_ and _everyone else._ Inside the _me_ is _us, we, he, she,_ sometimes a more general _one._ And then there is _you._ She doesn't know quite what to do with _you._ An unstoppable return of summer. He smiles, glancing at the sea and at her by turn. His gaze is a cot in which to rest. Seeing things without being seen. Therein lies the exercise. I have nightmares, nearly every night. Incomparable entities do not exist. Everything contains elements of something else. Threads, drawn through the world by the needles of our eyes. Everything is bleeding. 
And everything is still, like soldiers ordered to attention. The language of soldiers is blunt, their words halt like horses at a precipice. The colours of the birch. This near-violet in the transition from black to white. The colours of marrow. The children leave animal corpses to steep in a bowl of hydrogen peroxide: mice, a squirrel, a wood pigeon. They return a week later, coats unbuttoned to the spring. One day I tell you I can't sleep. These days. Bones and skulls, the same pale white as the sky. You nod and strain off the liquid. You suggest coffee. I nod. The sound of skeletons in a plastic bag. Little by little I acquiesce, like winter. And now it's April. It's hard to put a finger on what it is that keeps me awake. Thoughts go on in my mind, a variety of interests. My body keeps score in the night, jotting things down on the covers in invisible writing. I get the feeling you could be gone one morning. You've listened to my dreams this long, and to be honest what could be more fatiguing. Hushful, hushful song of old. The square, the sight of you lying there on a bench under the linden trees. You were convinced it would all make sense to us _at some point._ The idea of having to go through certain phases to get there. But woods don't always work like that. They come to an end without warning, you stand there teetering on the brink. A feeling of only being joined to the world by virtue of the heel. Sudden clearings; space, air. The body's exhalation, like a leaf released by autumn. The surrender of the mature. A thickness of movement. The letting go of all things at once, like a bundle of pick-up sticks dropped from a hand. Seedheads opening in the sun. Something ruptures, and dark-centred serenity spreads inside you. Pale pink sky. Dark in the centre. The city, opening. You suggest something or other. The sound of seedheads opening in the sun. The sound of a sapling birch, solitary at the edge of the field, with no history known to us. 
The near-violet of the transitions, the days once unfolding, dark and damp in the centre. The art of the possible—or, as you say, the human circumstance. To give up the warmth of movement. Something dark and damp, opening with a pop. The comedy of grief. Have you given up already. At very short intervals I feel more _together_ than _alone._ Darkness lets go of morning; a sliding hand, fingers, fingertips ravaged by gravel, releasing the rock; bone-coloured slats of light from a point on the horizon, beyond the sea, projecting into cloud. The light is a body plunging, a flailing in every direction, fingers splayed from a hand, new shoots on the near-dead pear tree at the bottom of the garden, arteries of the heart, glacial streams etching through snow and in the night; the machines at work, light cast electric in snow-spangled beams. High above, on the mountain, the machines trawl back and forth, dragging the inaccessible heights, the snowstorm's impossible, and then: the insane tumult of avalanche. The animals of the peaks toss their heads to the sky amid the yell of all things. Down in the town: an abrupt awakening in the hotel room, a bare foot pressed into the carpet's pile, a curtain drawn aside, come back to bed. The smallest fluctuation, a shift of the wind, a single degree warmer prompts the snow to release, crystals melting and merging, arms lashed into embrace, we clutch at what we believe to be each other. Curtains of rain travel the sky; vast, overlapping drapes, graphic divisions of light, the repetition of mountains. The white mountainside, breaking apart, collapsing. Melting ice, dissolving limbs of crystal, or crystal like a cramp-stricken hand folding in on itself in a stagger of spasms. The cloaks of descending snow; a person, two people engulfed by sound, the raucous barrage, arms and legs twisting, snapping like wings wrenched from the carcass, white bone, whiter snow—and then the silence. Hypothermia, and the lungs no longer finding air. 
One minute, two. The snow is a firm hand that shields the living. Sounds veer away and depart. After the avalanche all is still, as still as lime. Later in the night the machines return to work, smoothing the slopes, clearing the snow among the fir; and up above the treeline, between the red markers, the mountains unfleeced as shorn sheep—another of nature's patterns. The flat ribbons of the pistes, like mesh drawn tight around a tree, an intricacy of scattered tones woven into music: the music of the mountain, to which we listen. Continuing rumbles of snow. The bodies are still and buried in snow. A silent, chilly death, crushed beneath a tonnage of weightless crystals, a sky dropped suddenly at the release of a spring, an atmosphere penetrated by stars and planets, descending as dust upon the earth; the life that was changes form, the life to which we are left, again and again. And everything we lost speaks in the stillness of snow. After the avalanches: this is where we are. Every footprint is something longed for, hoped for. The town is quiet, a headlamped skier arriving home, a heavy trudge of awkward boots, skis balanced on his shoulder like a stiff wing poking back into the air behind him. He pauses and turns, peers up at the mountain, the lights of the machines that scratch the cornea, etching the contours of the mountain into his eye. His eyes water, and we see his lips, the dryness of days on the mountain, cracked, moistureless surfaces of skin, the creviced flesh of earthquakes and volcanoes, the crust of the earth breaking apart, or the still waters between one wave and the next. As he breathes, a loose flake of skin on his lip flutters in the air, the rustle of a plastic bag snagged in a hedgerow in the wind; his eyes water, rivulets of slime run from his nostrils, descending into his mouth. 
He turns back towards the road and carries on, the way you carry on the day after a preposterous bankruptcy, the way you sit in the apartment, bent forward, head in hands between your knees, glancing up now and then at the bailiffs as they collect the last of your furniture, and then, finally: the way you have to stand up and let them take the chair from under you; this is how you carry on, boiling water in a saucepan that like you was not worth taking, too insignificant to be of use anywhere else. He sprinkles instant coffee into the saucepan and stares at the patterns it makes, a cream-coloured sky mottled with darker, tarlike clouds. The circular motion of the liquid stirred with a spoon. He turns off the heat and sits down on the floor with the saucepan, leaving it to cool; the distant hum of machinery at work; he picks up the saucepan and puts it to his lips, elbow angled out to the side; he blows into the coffee, watches the rings as they spread, rings of tiny waves, dissipating concentrically into oblivion. His trembling hand, like some pang of jealousy, a cluster of crystals melting together on the mountain, curtains tossed by a wind, bands of snow beneath the clouds, snow on the repeating peaks, mountain upon mountain, patterns of catastrophe. He sips the coffee and it feels as if his insides are mountainsides covered in snow, melting, and then abruptly he is collapsed on the floor, the curtains are folded away and collected in boxes, the room punctured, the man, holding all the stars, all the planets in place, ceiling caving in; the walls inside the body, subsiding, a devastation perhaps expected, though not yet, not like _this._ Machines plummet, light smothered by snow, the dry flakes of skin on the man's lower lip rejoin the flesh, pressed together like commuters packed on a train and starved of oxygen; a belt of everything implodes and combusts, the way grief can bring two people together.
We walk to the sea, and in one way we are a form waiting for collapse, in another we are that collapse already, at some other stage. You say a few things about how the war will proceed. How we should act. The crystallisation that is forever going on. Protest against the basic conditions, the refusal to even accept such terms. Everything else is madness, and worse: futile, without will. And then again: the light of morning.

JOSEFINE KLOUGART (b. 1985) has been hailed as one of Denmark's greatest contemporary writers. Klougart is the author of five novels, two of which were nominated for the Nordic Council Literature Prize, Scandinavia's most prestigious literary award, and she received the Danish Royal Prize for Culture in 2011 with the committee stating that she is "one of the most important writers, not just of her generation, but of her time." Her English-language debut novel, _One of Us is Sleeping,_ was published by Open Letter Books in 2016, also translated by Martin Aitken.

MARTIN AITKEN is an award-winning translator of Scandinavian literature. His translations include novels by authors such as Dorthe Nors, Peter Høeg, Helle Helle, and Pia Juul. He was awarded the American-Scandinavian Foundation's Nadia Christensen Translation Prize, and has been longlisted for both the Independent Foreign Fiction Prize and the IMPAC Dublin Literary Award. Aitken's co-translation with Don Bartlett of the sixth book in Karl Ove Knausgaard's _My Struggle_ sextology is forthcoming in 2017.

Thank you all for your support. We do this for you, and could not do it without you.

DEAR READERS,

Deep Vellum Publishing is a 501c3 nonprofit literary arts organization founded in 2013 with a threefold mission: to publish international literature in English translation; to foster the art and craft of translation; and to build a more vibrant book culture in Dallas and beyond.
We are dedicated to broadening cultural connections across the English-reading world by connecting readers, in new and creative ways, with the work of international authors. We strive for diversity in publishing authors from various languages, viewpoints, genders, sexual orientations, countries, continents, and literary styles, whose works provide lasting cultural value and build bridges with foreign cultures while expanding our understanding of how the world thinks, feels, and experiences the human condition. Operating as a nonprofit means that we rely on the generosity of tax-deductible donations from individual donors, cultural organizations, government institutions, and foundations. Your donations provide the basis of our operational budget as we seek out and publish exciting literary works from around the globe and build a vibrant and active literary arts community both locally and within the global society. Deep Vellum offers multiple donor levels, including LIGA DE ORO ($5,000+) and LIGA DEL SIGLO ($1,000+). Donors at various levels receive personalized benefits for their donations, including books and Deep Vellum merchandise, invitations to special events, and recognition in each book and on our website. In addition to donations, we rely on subscriptions from readers like you to provide an invaluable ongoing investment in Deep Vellum that demonstrates a commitment to our editorial vision and mission. Subscribers are the bedrock of our support as we grow the readership for these amazing works of literature from every corner of the world. The investment our subscribers make allows us to demonstrate to potential donors and bookstores alike the support and demand for Deep Vellum's literature across a broad readership and gives us the ability to grow our mission in ever-new, ever-innovative ways. 
In partnership with our sister company and bookstore, Deep Vellum Books, located in the historic cultural district of Deep Ellum in central Dallas, we organize and host literary programming such as author readings, translator workshops, creative writing classes, spoken word performances, and interdisciplinary arts events for writers, translators, and artists from across the globe. Our goal is to enrich and connect the world through the power of the written and spoken word, and we have been recognized for our efforts by being named one of the "Five Small Presses Changing the Face of the Industry" by _Flavorwire_ and honored as Dallas's Best Publisher by _D Magazine._ If you would like to get involved with Deep Vellum as a donor, subscriber, or volunteer, please contact us at deepvellum.org. We would love to hear from you. Thank you all. Enjoy reading.

Will Evans
Founder & Publisher
Deep Vellum Publishing

LIGA DE ORO ($5,000+)

Anonymous (2)

LIGA DEL SIGLO ($1,000+)

Allred Capital Management Ben & Sharon Fountain David Tomlinson & Kathryn Berry Judy Pollock Life in Deep Ellum Loretta Siciliano Lori Feathers Mary Ann Thompson-Frenk & Joshua Frenk Matthew Rittmayer Meriwether Evans Pixel and Texel Nick Storch Social Venture Partners Dallas Stephen Bullock

DONORS

Adam Rekerdres Alan Shockley Amrit Dhir Anonymous Andrew Yorke Anthony Messenger Bob Appel Bob & Katherine Penn Brandon Childress Brandon Kennedy Caroline Casey Charles Dee Mitchell Charley Mitcherson Cheryl Thompson Christie Tull Daniel J. Hale Ed Nawotka Rev.
Elizabeth & Neil Moseley Ester & Matt Harrison Grace Kenney Greg McConeghy Jeff Waxman JJ Italiano Justin Childress Kay Cattarulla Kelly Falconer Linda Nell Evans Lissa Dunlay Marian Schwartz & Reid Minot Mark Haber Mary Cline Maynard Thomson Michael Reklis Mike Kaminsky Mokhtar Ramadan Nikki & Dennis Gibson Olga Kislova Patrick Kukucka Richard Meyer Steve Bullock Suejean Kim Susan Carp Susan Ernst Theater Jones Tim Perttula Tony Thomson

SUBSCRIBERS

Anita Tarar Ben Fountain Ben Nichols Blair Bullock Bradford Pearson Charles Dee Mitchell Chris Sweet Christie Tull Courtney Sheedy David Christensen David Travis David Weinberger Dori Boone-Costantino Elaine Corwin Farley Houston Frank Garrett Guilty Dave Bristow Horatiu Matei James Tierney Janine Allen Jeanne Milazzo Jeffrey Collins Jessa Crispin John O'Neill John Schmerein John Winkelman Joshua Edwin Kimberly Alexander Kristopher Phillips Marcia Lynx Qualey Margaret Terwey Martha Gifford Meaghan Corwin Michael Elliott Michael Wilson Mies de Vries Mike Kaminsky Neal Chuang Nick Oxford Nicola Molinaro Patrick Shirak Peter McCambridge Stephanie Barr Steven Kornajcik Tim Kindseth Tim Looney Todd Jailer Whitney Leader-Picone Will Pepple William Jarrell

AVAILABLE NOW FROM DEEP VELLUM

MICHÈLE AUDIN · _One Hundred Twenty-One Days_ translated by Christiana Hills · FRANCE CARMEN BOULLOSA · _Texas: The Great Theft · Before_ translated by Samantha Schnee · translated by Peter Bush · MEXICO LEILA S. CHUDORI · _Home_ translated by John H.
McGlynn · INDONESIA ALISA GANIEVA · _The Mountain and the Wall_ translated by Carol Apollonio · RUSSIA ANNE GARRÉTA · _Sphinx_ translated by Emma Ramadan · FRANCE JÓN GNARR · _The Indian_ · _The Pirate_ translated by Lytton Smith · ICELAND NOEMI JAFFE · _What are the Blind Men Dreaming?_ translated by Julia Sanches & Ellen Elias-Bursac · BRAZIL JUNG YOUNG MOON · _Vaseline Buddha_ translated by Yewon Jung · SOUTH KOREA FOUAD LAROUI · _The Curious Case of Dassoukine's Trousers_ translated by Emma Ramadan · MOROCCO LINA MERUANE · _Seeing Red_ translated by Megan McDowell · CHILE FISTON MWANZA MUJILA · _Tram 83_ translated by Roland Glasser · DEMOCRATIC REPUBLIC OF CONGO ILJA LEONARD PFEIJFFER · _La Superba_ translated by Michele Hutchison · NETHERLANDS RICARDO PIGLIA · _Target in the Night_ translated by Sergio Waisman · ARGENTINA SERGIO PITOL · _The Art of Flight_ · _The Journey_ translated by George Henson · MEXICO MIKHAIL SHISHKIN · _Calligraphy Lesson: The Collected Stories_ translated by Marian Schwartz, Leo Shtutin, Mariya Bashkatova, Sylvia Maizell · RUSSIA SERHIY ZHADAN · _Voroshilovgrad_ translated by Reilly Costigan-Humes & Isaac Stackhouse Wheeler · UKRAINE

COMING FALL/SPRING 2016–2017 FROM DEEP VELLUM

CARMEN BOULLOSA · _Heavens on Earth_ translated by Shelby Vincent · MEXICO ANANDA DEVI · _Eve Out of Her Ruins_ translated by Jeffrey Zuckerman · MAURITIUS JÓN GNARR · _The Outlaw_ translated by Lytton Smith · ICELAND CLAUDIA SALAZAR JIMÉNEZ · _Blood of the Dawn_ translated by Elizabeth Bryer · PERU JOSEFINE KLOUGART · _Of Darkness_ translated by Martin Aitken · DENMARK SERGIO PITOL · _The Magician of Vienna_ translated by George Henson · MEXICO EDUARDO RABASA · _A Zero-Sum Game_ translated by Christina MacSweeney · MEXICO BAE SUAH · _Recitation_ translated by Deborah Smith · SOUTH KOREA JUAN RULFO · _The Golden Cockerel & Other Writings_ translated by Douglas J.
Weatherford · MEXICO ANNE GARRÉTA · _Not One Day_ translated by Emma Ramadan · FRANCE YANICK LAHENS · _Moonbath_ translated by Emily Gogolak · HAITI

Table of Contents 1. COVER 2. TITLE PAGE 3. COPYRIGHT 4. CONTENTS 5. OF DARKNESS 6. PROLOGUE 7. SCENE 1 8. SCENE 2 9. SCENE 3 10. SCENE 4 11. SCENE 5 12. SCENE 6 13. SCENE 7 14. SCENE 8 15. EPILOGUE 16. ABOUT THE AUTHOR
<link rel="import" href="chrome://resources/html/i18n_behavior.html">
<link rel="import" href="chrome://resources/html/polymer.html">
<link rel="import" href="chrome://resources/polymer/v1_0/iron-icon/iron-icon.html">
<link rel="import" href="chrome://resources/polymer/v1_0/paper-button/paper-button.html">
<link rel="import" href="chrome://md-settings/settings_shared_css.html">

<dom-module id="settings-default-browser-page">
  <link rel="import" type="css" href="default_browser_page.css">
  <template>
    <style include="settings-shared"></style>
    <div class="settings-box first two-line">
      <template is="dom-if" if="[[showButton_]]">
        <div class="start" on-tap="onSetDefaultBrowserTap_">
          <div>[[i18n('defaultBrowser')]]</div>
          <div class="secondary">[[i18n('defaultBrowserMakeDefault')]]</div>
        </div>
        <template is="dom-if" if="[[showError_]]">
          <iron-icon icon="error" class="error-icon"
              title="[[i18n('unableToSetDefaultBrowser')]]"></iron-icon>
        </template>
      </template>
      <template is="dom-if" if="[[!showButton_]]">
        <div class="secondary">[[message_]]</div>
      </template>
    </div>
  </template>
  <script src="default_browser_page.js"></script>
</dom-module>
# What is bromine's electron configuration?

$Br = 1s^2 2s^2 2p^6 3s^2 3p^6 4s^2 3d^{10} 4p^5$

$Br = [Ar]\,4s^2 3d^{10} 4p^5$

#### Explanation:

Bromine is in the $4th$ energy level, $p$ block, $5th$ column; this means that the electron configuration will end in $4p^5$.

$Br = 1s^2 2s^2 2p^6 3s^2 3p^6 4s^2 3d^{10} 4p^5$

$Br = [Ar]\,4s^2 3d^{10} 4p^5$
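As an illustrative sketch (not part of the original answer), the ending $4p^5$ can also be derived programmatically by filling subshells in the Madelung-rule (aufbau) order. Note this idealized order happens to match bromine exactly, though some elements (e.g. Cr, Cu) deviate from it:

```python
# Sketch: build an aufbau (Madelung-rule) electron configuration for atomic number z.

def electron_configuration(z):
    letters = "spdf"
    # Subshells (n, l), sorted by n + l, then by n (Madelung's rule).
    subshells = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    config = []
    for n, l in subshells:
        if z <= 0:
            break
        capacity = 2 * (2 * l + 1)   # 2, 6, 10, 14 electrons for s, p, d, f
        filled = min(z, capacity)
        config.append(f"{n}{letters[l]}{filled}")
        z -= filled
    return " ".join(config)

print(electron_configuration(35))  # bromine → 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p5
```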
Q: Unable to save a view in a different BigQuery project

I have my BigQuery table Tab1 in GCP Project A. I have created a new GCP Project B. I have written a query that retrieves data stored in Tab1, and I want to store this as a view in Project B. I am getting an error like this:

Not found: Dataset Project A:Tab1 not found

Both projects are under the same organization. How do I create views in new projects based on data stored in another project?

A: If you are going to query a table that is not located in the project you are using, you also have to specify the project name in the FROM clause. For instance:

SELECT * FROM `project_A.dataset.tab1`

Based on the error message, you are not doing that properly (the format is `project_ID.dataset.table`).

A: If the permissions are in order, you can do as Alvaro said; otherwise, if it doesn't work, you can grant some rights for your view. One possibility is to create an authorized view in the dataset permissions, and after that you can add your view:
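Putting both answers together, a sketch of the cross-project setup might look like the following (the dataset names `dataset_A` and `dataset_B` are placeholders, not from the question — substitute your own fully-qualified names):

```sql
-- Hypothetical names: replace project_A/project_B and dataset_A/dataset_B.
-- The view lives in Project B but fully qualifies its source table in Project A.
CREATE VIEW `project_B.dataset_B.tab1_view` AS
SELECT *
FROM `project_A.dataset_A.tab1`;
```

For this to work at query time, the account (or the authorized view) still needs read access to the source dataset in Project A.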
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,226
El pati de Cal Lluís és un jardí privat del municipi del Masnou (Maresme) protegit com a bé cultural d'interès local. Descripció Es tracta d'una petita parcel·la de forma rectangular situada entre el carrer del Bergantí Caupolican, el carrer del Doctor Botey i el passeig de Prat de la Riba. El jardí constitueix pati davanter d'una casa de cos amb façana principal orientada al pla del carrer del Bergantí Caupolican. És el pati característic de l'urbanisme masnoví del que, en aquest cas, s'ha conservat. Està delimitat per un mur baix d'obra amb una entrada pel carrer del Bergantí Caupolican i una altra pel passeig de Prat de la Riba, amb reixa de ferro i un mur més alt a la façana de llevant. Per la façana de ponent fa mitgera amb un bloc de pisos. En un lateral hi ha una pèrgola amb un roser i hi ha diverses especies plantades (margalló, llorer, pi i ficus, entre d'altres). També hi havia una palmera canària que va morir a conseqüència de la plaga de l'escarabat morrut. Referències Patrimoni monumental del Masnou
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,485
\section{Introduction}

At the end of the nineteenth century, Edmund Landau addressed the problem of how to distribute a money prize among a group of chess players using a table of matches\cite{landauZurRelativenWertbemessung1895}. In a table of matches, each index is associated with a player and the value of each cell represents the result of a match between two players. Landau proposed a method that outperformed the best approaches at the time for performing prize distributions\cite{landauZurRelativenWertbemessung1895, landauUberPreisverteilungBei1915}. Later on, his method, which is known today as eigenvector centrality, and further developments encountered a myriad of applications, from fraud detection to ranking preferences in a group\cite{vignaSpectralRanking2019}. The scope and limitations of Landau's method would become clear with the emergence of new challenges related to the analysis of tabular data from a variety of domains\cite{ghoshComprehensiveReviewTools2018, adamsSiriusMutualInformation2021}. For instance, rather than finding the best player, one may want to identify the most important socioeconomic factors to explain the variation within a group of people. Rather than finding a probable coalition of players, or a group of chess players with similar characteristics and performance, we may want to find a group of health factors that have a similar impact on a group of diseases\cite{levy2010consumo}.
To address these new challenges, recent works\cite{correlatioNet2008, geneNetwork2019,ambriolaokuPotentialConfoundersAnalysis2019, adamsSiriusMutualInformation2021} have approached the problem of exploratory analysis of tabular datasets by mapping them to a graph in which the features (columns) are mapped to the vertices and the edges quantify the relationships between these features. In \cite{correlatioNet2008, geneNetwork2019, adamsSiriusMutualInformation2021}, the relationships are modeled by undirected edges, with edge weights determined from mutual information or correlation values. As a result of this construction, the resulting graph generally has multiple disconnected components. These characteristics do not suit the purpose of the current work, because our analysis takes into account the relationships between \textit{every} pair of features, and the directionality of the relationships is another important aspect of our approach. In \cite{ambriolaokuPotentialConfoundersAnalysis2019}, the relationships are modeled by a complete directed graph with weight values expressed by the global feature importance known as ``gain'' \cite{ganho1987, gradBoostTutorial2013}. Although the use of gain as an importance measure seems reasonable, it can lead to inconsistent results\cite{shapConsistency2019}. In addition, it does not allow the derivation of a local explanation, in the sense of constructing a graph from a single observation (row) or a sample. Because of these issues with the tabular-to-graph mapping task, we propose an alternative approach that uses recent developments from tree-based machine learning interpretability techniques\cite{gradBoostTutorial2013, samekLearningExplainableTrees2020}. More specifically, we map the dataset into a weighted directed graph with the edge weights obtained through the SHapley Additive exPlanations (SHAP)\cite{lundbergUnifiedApproachInterpreting2017} technique.
This technique was chosen for its good properties when compared with other tools for computing feature importance in machine learning tasks. In addition, it can supply a local interpretation for each object in the dataset. The obtained dense graph, here called the interpretability graph, is initially sparsified with the goal of removing the weak relationships between features, extending the scope of the graph analysis methods that can be applied. In particular, we use the disparity filter\cite{serranoExtractingMultiscaleBackbone2009}, an edge filtering method with good performance in preserving the backbone structure of the graph. With the filtered interpretability graph, we show how graph analysis can be employed to interpret the relationships through the graph structure. We discuss in more detail two aspects of this graph: the spectral and the community structure. As is well known in the context of undirected graphs, the spectral properties of the combinatorial Laplacian have several interesting connections with the community structure. These properties motivate embedding methods for clustering algorithms\cite{spectraldatascience,vignaSpectralRanking2019}. However, those methods rely on two properties of the combinatorial Laplacian operator: the existence of an orthogonal basis of eigenvectors and the fact that the associated eigenvalues reside on the real line. In the case of our interpretability graph, which is a digraph, these two properties are not satisfied\cite{Li_Yuan_Wu_Lu_2018}. To overcome this, instead of analyzing the combinatorial Laplacian we opted to study the spectral information of the magnetic Laplacian operator.
The theory and the applications of this magnetic operator have recently been a focus of the literature\cite{f.deresendeCharacterizationComparisonLarge2020, fanuelDeformedLaplaciansSpectral2019, magnet2021}, one of the reasons being that the magnetic Laplacian is a Hermitian operator even for directed graphs. More specifically, we used the eigenfunctions associated with this operator to map the features of a tabular dataset into a toroidal space, aiming at exploring the data in more detail. While the analysis of the spectral space can give us a notion of how the features are connected in the interpretability graph, exploring the community structures requires a method specifically built for that purpose. Here, we translate the problem of dividing a group of features into classes into the problem of finding the communities to which the vertices related to those features belong. To do so, we applied the \textbf{n}ested \textbf{S}tochastic \textbf{B}lock \textbf{M}odel (nSBM)\cite{peixotoHierarchicalBlockStructures2014, peixotoNonparametricBayesianInference2017, nsbm2021} to infer the hierarchical community structure of our graph. The nSBM revealed hierarchical relationships between the features, enabling us to explore and unravel categories that have similar or dissimilar behaviors. Further, we analyze the correspondence between these results and those derived from the spectral information associated with the graph. As an application example, we applied our method to the PeNSE (National Survey of Scholar's Health, from IBGE)~\cite{oliveira2017characteristics} tabular dataset. This periodic survey has been extensively studied across the years in order to understand the health behaviors of students in Brazil, from illicit and licit drug consumption\cite{pense2014drugs, pense2014drugs2} and health issues\cite{pense2014asthma} to sedentary behavior\cite{pense2020sedentary}.
The proposed method allowed us to construct a weighted directed graph from the questions of this survey. The sparsification of the graph and its posterior visualization allowed us to inspect the modular structure of the features. The spectral information of the graph allowed the establishment of a magnetic embedding of the vertices, which indicated that the physical activities questions form an isolated group. Our method provided a quantitative way of grouping the questions in an unsupervised fashion. The results showed considerable agreement with the divisions of the survey. For example, we discovered that some questions, such as those about driving behavior, were originally aggregated into the class of safety in the design of the survey, but our method suggested that they may present stronger relationships with questions related to the use of drugs. The classes of questions in the survey were probably obtained in a qualitative and subjective way and, therefore, it is natural to expect some structural variations. We believe that the reported results may motivate future works aiming at exploring the effect of interdependent or confounding features in more general tabular datasets, and may also provide subsidies to improve the design of surveys.

\section{Methods}

\figref{fig:Diagram} describes the method proposed in this work. A weighted directed graph is derived from the original tabular data, with vertices representing the features and edges representing their relationships, weighted by the SHAP values. As a first measurement, centrality measures, in this case the hub score, can be calculated from the obtained graph. An edge filtering approach, namely the disparity filter, is applied to the obtained complete graph so as to remove the weakest edges. Spectral information, in particular using the deformed magnetic Laplacian operator, allows us to gain additional insight about the data.
Finally, the hierarchical modular structure obtained using the nSBM and the SHAP method allows analyzing the entire dataset or just a sample, for example the responses of a single individual in a survey. The results can be refined based on a subset of the graph, and this refinement can be repeated up to a desired granularity.

\begin{figure}[!htb]
    \centering
    \includegraphics[width=.8\columnwidth]{methods/simplified_diagram4.pdf}
    \caption{Flow diagram of the proposed approach. The tabular dataset is initially mapped to a weighted directed graph. The graph is reduced using a graph sparsification approach and used to (1) perform a spectral analysis using the deformed magnetic Laplacian operator and (2) identify similar/dissimilar features using a hierarchical community detection approach. The latter operation can be done iteratively on subgraphs obtained by the previous steps.}
    \label{fig:Diagram}
\end{figure}

In the following, we discuss in more detail the mapping of the tabular dataset into a weighted directed graph, with the weights quantifying how important a feature is to a prediction task.

\subsection{Mapping a tabular dataset into a weighted directed graph}

Here we discuss how a graph is created based on the tabular dataset. A weighted directed graph is a tuple $(V, E, w)$ composed of a set of vertices, $V$, a set of ordered tuples, $E$, and a weight function $w: E\mapsto \mathbb R^+$. Each feature of the dataset is associated with at least one vertex of the graph. The directed weighted edges represent the relationships between two columns. Let $C$ be the set of columns of the tabular data. A column $c\in C$ is randomly chosen and mapped to a set of vertices $V_c \subset V$. We use the remaining columns as features to train a gradient boosting machine (GBM) to predict the column $c$. Let $\bar V_c$ be the set of vertices associated with the feature columns used to predict $c$.
After training, for each $v_c \in V_c$ and each $v \in \bar V_c$, we interpret the weight of the edge $(v, v_c)$ as the contribution of the vertex $v$ to the task of predicting the vertices related to $c$. We repeat this procedure for each column and obtain a complete weighted directed graph.

A point of particular importance concerns the \emph{contribution of $v$ to predict $c$}. First, we want to map the tabular dataset to a graph. We require that the in-degree of vertex $v_c$ quantifies the accuracy of the trained GBM, that is, $k_{in}(v_c) = \sum_{u \in \bar V_c} w(u, v_c) = Acc(v_c)$. For instance, if a column has no relevant relationship with the remaining columns, or cannot be explained by them, the in-degree is low, which reduces the contribution of the vertex to the overall structure of the graph. This accuracy is used to calculate the weights of the edges. Let $\epsilon(u\rightarrow v_c) \in \mathbb R^+$ be a function that quantifies the contribution of the column associated with $u$ to the task of predicting the values of the column associated with $v_c$ using the GBM. Here, we choose the weight function of an edge $(u, v)$ as:
\begin{eqnarray}
w(u, v) = Acc(v) \frac{ \epsilon(u\rightarrow v) }{ \sum\limits_{z\in V}\epsilon(z\rightarrow v) }
\label{eqVar2graph}.
\end{eqnarray}

Next we discuss how to choose $\epsilon$. To use \eqref{eqVar2graph} and, consequently, to construct the interpretability graph, it is necessary to choose a way to explain the prediction of a given variable $v$ due to the presence of a feature $u$. There is a wide range of methods in the literature to achieve this\cite{molnarInterpretableMachineLearning}. In this work, we opted to use SHapley Additive exPlanations (SHAP) \cite{samekLearningExplainableTrees2020,lundbergLocalExplanationsGlobal2020}. The SHAP method approximates the Shapley value\cite{kuhnContributionsTheoryGames1953}.
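As a concrete illustration of the weight construction in \eqref{eqVar2graph}, the following minimal sketch assembles the adjacency matrix of the interpretability graph. The function name and the toy values are illustrative only; the contributions $\epsilon$ and the accuracies are assumed to be already computed (e.g., as mean absolute SHAP values).

```python
import numpy as np

def interpretability_weights(eps, acc):
    """Adjacency matrix of the interpretability graph (a sketch).

    eps[u, v] is the non-negative contribution of feature u to the task of
    predicting feature v (e.g. a mean absolute SHAP value); acc[v] is the
    accuracy of the model trained to predict feature v.  Returns W with
    W[u, v] = acc[v] * eps[u, v] / sum_z eps[z, v], so that the in-degree
    of each vertex v equals acc[v].
    """
    eps = np.asarray(eps, dtype=float)
    col_sums = eps.sum(axis=0)
    # Columns with no contribution at all get zero weight (avoid 0/0).
    safe = np.where(col_sums > 0, col_sums, 1.0)
    return np.asarray(acc, dtype=float) * eps / safe

# Toy example with three features; contributions and accuracies made up.
eps = np.array([[0.0, 2.0, 1.0],
                [1.0, 0.0, 3.0],
                [1.0, 2.0, 0.0]])
acc = np.array([0.8, 0.6, 0.9])
W = interpretability_weights(eps, acc)
assert np.allclose(W.sum(axis=0), acc)  # in-degree of v equals Acc(v)
```

Note that the column-wise normalization guarantees the required in-degree property regardless of the scale of the raw contributions.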
The SHAP method is motivated by the theory of cooperative games and works by quantifying the marginal contribution of a feature to a single prediction task. Since the SHAP value is calculated for each element of the dataset, we have a different graph defined by~\eqref{eqVar2graph} for each instance. For example, if the tabular data correspond to a survey, the graph can be used to study the answers of each person. Although this local exploration allows associating a graph with each instance in the data, in this work we focus on a single graph describing the entire dataset. In this case, the weight of edge $(u,v)$ is defined using the mean of the absolute SHAP values, that is
\begin{eqnarray}
w(u, v) = Acc(v) \frac{ \mathbb E [ |\mathrm{SHAP}_i(u\rightarrow v)|] }{ \sum\limits_{z\in V}\mathbb E[|\mathrm{SHAP}_i(z\rightarrow v)|] }
\label{eqShapMean}.
\end{eqnarray}

To calculate each SHAP value, we also need to choose a proper way to handle possible dependencies between features\cite{chenTrueModelTrue2020}. While the tree path approach does not depend on a background dataset and may run faster than causal approaches, the latter are able to deal with feature dependencies using causal inference tools\cite{janzingFeatureRelevanceQuantification2019}. In this work we opted to use the tree path approach in the PeNSE case study due to its low computational cost.

\subsection{Graph filtering}

The obtained interpretability graph is, by construction, complete. As a result, the posterior processing may be difficult or even unfeasible. One of the reasons is the high computational cost associated with processing the entire graph. Another reason relates to the excess of information, which may end up blurring the objects of interest\cite{cosciaAtlasAspiringNetwork2021}.
A simple approach to reduce the number of edges and to enhance the interpretability of graph visualization techniques consists of applying a naive threshold to the edge weights so as to keep just the strongest connections. However, it is hard to choose and justify the value used for the threshold parameter\cite{cosciaAtlasAspiringNetwork2021}. In addition, this method can create many disconnected components.

\begin{figure}[ht]
    \centering
    \subfloat[]{%
        \centering
        \includegraphics[width=.25\columnwidth]
        {methods/graphfiltering10a.pdf}
    }
    \qquad
    \subfloat[]{
        \centering
        \includegraphics[width=.25\columnwidth]
        {methods/graphfiltering10b.pdf}
    }
    \caption{Graph filtering based on an edge weight threshold. A weighted graph (a) and the resulting graph (b) after the removal of weak edges (dashed lines). This approach is highly sensitive to the choice of the threshold: similar threshold values may lead to completely different graphs and remove what are called backbone structures.}
    \label{fig:graphfiltering2}
\end{figure}

In the last decade, a large number of graph filtering (a.k.a. graph sparsification) methods have been developed in order to mitigate the issues present in the naive threshold-based edge filtering approach \cite{serranoExtractingMultiscaleBackbone2009,marcaccioliPolyaUrnApproach2019,batsonSpectralSparsificationGraphs2013}. In this work, we adopted the disparity filter criterion developed in \cite{serranoExtractingMultiscaleBackbone2009} to filter the edges. Let $s(u)=\sum\limits_{v\in V | (u, v) \in E} w(u, v)$ be the out-strength of the feature associated with node $u$ in the interpretability graph. Defined in this way, $s(u)$ is related to the contribution of feature $u$ to explaining the outputs of all remaining features. Thus $p(u ,v)=\frac{w(u, v)}{s(u)}$ quantifies how the explanation given by feature $u$ in the task of predicting feature $v$ contributes to the total amount of interpretability of feature $u$.
Then, with $k_{out}(u)$ being the out-degree of node $u$, we can associate with each edge $(u, v)$ the following quantity
\begin{eqnarray}
w_\alpha(u, v) = 1 - (k_{out}(u)-1)\int\limits_{0}^{p(u, v)} (1-x)^{k_{out}(u)-2}\mathrm d x.
\end{eqnarray}
Edges with $w_{\alpha}$ above a given threshold $\alpha \in [0, 1]$ are filtered out. Therefore, this method allows filtering the edges while keeping the graph backbone, as pointed out in \cite{serranoExtractingMultiscaleBackbone2009}.

\subsection{Spectral embedding of the tabular dataset}

In the previous sections we discussed the construction of the weighted directed graph from tabular data and how to extract insights from this data structure. Here we discuss how the spectral information of the magnetic Laplacian can be used to unravel clusters of features. The derivation of the magnetic Laplacian formalism requires decomposing the weight function into a symmetric component, $w_s(u, v)=\frac{w(u,v)+w(v, u)}{2}$, and an antisymmetric component, $w_a(u, v)=\frac{w(u,v)-w(v, u)}{2}$. This allows the definition of a flow function in each vertex $v$ due to $u$ as $a(v, u) = 2w_a(u, v)$. With this decomposition, each digraph has an associated undirected version $G_s=(V, E_s, w_s)$, which relates to the combinatorial Laplacian operator, $L$, by:
\begin{align}
(L f)(u) = f(u)d(u) -\sum\limits_{v\in V}w_s(u, v)f(v),
\label{eqCombL}
\end{align}
where $d(u)=\sum\limits_{v\in V}w_s(u, v)$. As can be seen, the combinatorial Laplacian for the undirected graph is symmetric. The second term on the right-hand side of Eq.~\eqref{eqCombL} needs to be modified to deal with the directionality information of the digraph. To do so, the directionality information is treated as a phase perturbation, formally represented by a function whose domain corresponds to the edge set of the directed graph.
This phase perturbation function has the following form:
\begin{align}
\gamma_q(u, v) = e^{2\pi \mathrm i q a(v, u)},
\end{align}
which, inserted in the second term of the right-hand side of Eq.~\eqref{eqCombL}, gives us the magnetic Laplacian, $\mathcal L_q$,
\begin{align}
(\mathcal L_q f)(u) &= f(u)d(u) -\sum\limits_{v\in V}w_s(u, v)\gamma_q(u, v)f(v)
\label{eqMagL}
\end{align}
where $q\in[0, 1]$ is a parameter called \emph{charge} for historical reasons\cite{shubinDiscreteMagneticLaplacian1994a}. It is convenient to define a normalized version of the magnetic Laplacian, $\mathcal H_q$, as
\begin{align}
(\mathcal H_q f)(u) = f(u) -\frac{ \sum\limits_{v}w_s(u, v)\gamma_q(u, v)f(v) }{ d(u) }.
\label{eqMagNormedL}
\end{align}
Noticeably, the magnetic operator can be represented by a Hermitian matrix, which is not the case for the combinatorial operator of digraphs\cite{f.deresendeCharacterizationComparisonLarge2020}. In addition, the magnetic Laplacian is a positive semi-definite operator. The positive semi-definiteness and Hermiticity of the magnetic Laplacian allow the construction of physical analogies which can be used to characterize digraphs\cite{f.deresendeCharacterizationComparisonLarge2020}. Furthermore, the phases of a given eigenvector of the normalized magnetic Laplacian~\eqref{eqMagNormedL}, $\mathbf v_q^{(l)}\in \mathbb C^{|V|}$, capture the notion of circularity in the graph. For example, the phases of the eigenvector associated with the lowest eigenvalue of $\mathcal H_q$ give an approximate solution of the group synchronization problem related to the magnetic Laplacian\cite{fanuelMagneticEigenmapsVisualization2018}.
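A minimal numerical sketch of the construction of Eq.~\eqref{eqMagL} for a small weighted digraph follows; this is illustrative code, not the implementation used in our experiments, and the toy digraph is made up.

```python
import numpy as np

def magnetic_laplacian(W, q=0.1):
    """Magnetic Laplacian of a weighted digraph with adjacency W (a sketch).

    W[u, v] is the weight of edge (u, v).  The weight function is split into
    a symmetric part w_s and an antisymmetric part w_a; the flow
    a(v, u) = 2 w_a(u, v) = w(u, v) - w(v, u) enters the phase factor
    gamma_q(u, v) = exp(2 pi i q a(v, u)).
    """
    W = np.asarray(W, dtype=float)
    Ws = (W + W.T) / 2.0                        # symmetric part w_s
    Gamma = np.exp(2j * np.pi * q * (W - W.T))  # phase perturbation gamma_q
    d = Ws.sum(axis=1)                          # degrees of the undirected version
    return np.diag(d) - Ws * Gamma

# Toy digraph: a directed 3-cycle 0 -> 1 -> 2 -> 0.
W = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
L = magnetic_laplacian(W, q=0.25)
assert np.allclose(L, L.conj().T)              # Hermitian even for a digraph
assert np.all(np.linalg.eigvalsh(L) >= -1e-9)  # positive semi-definite

# Phases of the eigenvector with lowest eigenvalue: approximate angles for
# the group synchronization problem discussed in the text.
theta = np.angle(np.linalg.eigh(L)[1][:, 0])
```

Note that for $q=0$ the phase factors reduce to one and the construction recovers the combinatorial Laplacian of $G_s$.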
In mathematical terms, the group synchronization problem searches for a mapping $\theta: V\mapsto [0, 2\pi)$ which minimizes the following function
\begin{align}
\eta_{c}(\theta) &= \frac{1}{2 \mathrm{vol}(G_s)} \sum\limits_{u, v \in V} w_s(u, v) \left| e^{\mathrm i \theta(u)}-\gamma_q(u, v)e^{\mathrm i \theta(v)} \right|^2
\label{eqCircFrustation}
\end{align}
where $\mathrm{vol}(G_s) = \sum\limits_{u\in V} d(u)$. The phases of the second eigenfunction of~\eqref{eqMagNormedL} also have a remarkable property, in the sense that they can approximately solve a graph-cut problem\cite{imageSegSpectra, fanuelMagneticEigenmapsVisualization2018}.

\subsection{Unraveling the structure of the features using the Nested Stochastic Block Model}

In principle, a class of features having similar interpretation behavior should belong to the same community in the proposed interpretability graph. Therefore, to understand the relationships between the features, it is necessary to first define how these communities can be identified. One possibility is to use a modularity optimization method\cite{Newman_Girvan_2004}. Unfortunately, this method has some drawbacks. For example, it can find communities even in a random graph\cite{guimera2004}, and thus it can give us a meaningless division of the features of a tabular dataset. Fortunately, the non-parametric Bayesian method called the \textbf{n}ested \textbf{S}tochastic \textbf{B}lock \textbf{M}odel (nSBM)\cite{peixotoHierarchicalBlockStructures2014} mitigates that. The nSBM method is the hierarchical formulation of the well-known Stochastic Block Model (SBM)\cite{peixotoNonparametricBayesianInference2017, sbmTopicModelScience}. The major difference between the SBM and the nSBM is that the latter proceeds by agglomerating graph communities into levels, which represent blocks modeled by an SBM.
Using this hierarchical construction, the nSBM overcomes some issues of its counterpart, such as the inefficiency in identifying small graph communities\cite{peixotoHierarchicalBlockStructures2014}. In essence, the SBM performs a Bayesian inference on a set of parameters of a generative graph model. Such parameters are the vertex partitions, that is, the sizes and the number of blocks, and the probability of connections inside and outside those partitions. In mathematical terms, let $b$ be a set of vertex partitions and $\theta$ the parameters of a given generative model for a graph $G$; the Bayesian problem is given by:
\begin{eqnarray}
P(b| G) = \frac{P(G| b,\theta )P( b,\theta )}{P(G)},
\end{eqnarray}
where $P(G)$ is the model evidence. In this work we used the graph-tool\footnote{https://graph-tool.skewed.de/} implementation of the nSBM\cite{peixotoNonparametricBayesianInference2017, sbmNsbm2020}. This method uses the non-parametric framework proposed by Peixoto and is able to efficiently infer the block-hierarchical structures. The nSBM allows understanding the modular organization of the graph. Consequently, using this method a user can unravel the relationships between the features in the dataset.

\subsection{Measuring the relevance of each column}

The nSBM method can provide information about how the features in the dataset relate to each other. This information can be useful to investigate the relationships in the data and the existence of redundant features. Another important related question is: \emph{how important is a feature to a dataset?} Similarly to \cite{ambriolaokuPotentialConfoundersAnalysis2019}, we choose to quantify the importance of a column through a centrality measure of the related vertex, for example the eigenvector centrality\cite{vignaSpectralRanking2019}, PageRank\cite{irfanReviewDifferentRanking2018a}, or the hub/authority scores\cite{kleinberg1998authoritative}.
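For completeness, the hub and authority scores can be obtained by the standard HITS power iteration on the weighted adjacency matrix. The sketch below is generic and not tied to any particular library; the toy digraph is illustrative.

```python
import numpy as np

def hits_scores(W, n_iter=100):
    """Hub and authority scores of a weighted digraph (HITS power iteration).

    W[u, v] is the weight of edge (u, v).  Hubs point to good authorities
    (h = W a) and authorities are pointed to by good hubs (a = W^T h);
    both vectors are renormalized at each step.
    """
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    h = np.ones(n)
    for _ in range(n_iter):
        a = W.T @ h
        a /= np.linalg.norm(a)
        h = W @ a
        h /= np.linalg.norm(h)
    return h, a

# Toy digraph: vertex 0 points to everyone, everyone points to vertex 2.
W = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
hubs, auths = hits_scores(W)
assert hubs.argmax() == 0   # vertex 0 links to all others: dominant hub
assert auths.argmax() == 2  # vertex 2 is linked by both others: dominant authority
```

Since the iteration converges to the leading singular vectors of $W$, the scores are insensitive to the initial guess for generic weighted digraphs.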
Each of these centrality measures gives a different interpretation of the relevance of a given vertex for the structure of the graph. Here, for simplicity, we analyze just the hub and authority scores. The hub and authority scores were proposed in the context of finding and ranking relevant web pages~\cite{kleinberg1998authoritative}. \emph{Authoritative} pages ideally contain relevant information according to the query and represent the result of the search, while \emph{hubs} are linked to the authorities and represent an important element to find the authorities. Although we needed to filter the interpretability graph to apply the force-directed algorithm and the nSBM, the calculation of the hub/authority scores is relatively inexpensive, which allows the consideration of the complete graph. Therefore, we evaluated such measures without removing edges.

\section{Case study: PeNSE}

The adolescence phase may strongly impact adulthood. For this reason, different surveys have focused on related subjects~\cite{grunbaumYouthRiskBehavior2004,currieInequalitiesYoungPeople2008a}. The PeNSE (National Survey of Scholar's Health)~\cite{oliveira2017characteristics} is a survey organized by the Brazilian Institute of Geography and Statistics (IBGE), with the collaboration of the Ministry of Health and the Ministry of Education. Its mission is to better understand the risk factors and health profiles of teenagers in Brazil. The three editions of the survey (2009, 2012 and 2015) targeted students regularly enrolled in a Brazilian school, public or private, at the 9th grade, which often corresponds to fourteen-year-old teenagers. This school age was chosen considering the international ethical guidelines on the age at which socioeconomic questionnaires targeted at teenagers may be conducted.
Here we explored the 2015 edition, which surveyed almost $130,000$ students in Brazil\footnote{The data is public and available here \href{https://ftp.ibge.gov.br/pense/2015/microdados/PeNSE_2015_AMOSTRA1.zip}{https://ftp.ibge.gov.br/pense/2015/}}. The survey consists of an electronic questionnaire comprising questions from diverse areas, such as the respondents' socioeconomic context: parents' level of education, profession, possession of goods; health, including sexual, oral and mental health; eating habits and risk factors; family relationships and domestic violence; and the infrastructure provided by the school. This dataset has already been explored by~\cite{levy2010consumo}, where the authors studied the association between key indicators and sociodemographic profiles. For example, a healthy nutrition indicator, which takes into account the frequency of meals and the consumption of other types of food, was found to be associated with age, gender and socioeconomic profile. That analysis was constrained to a linear analysis (linear regression) between these markers. This dataset has also been explored in other works~\cite{maltaBullyingBrazilianSchools2010,maltaTrendRiskProtective2014b}, but focusing on specific sets of features, such as those related to bullying or chronic diseases.

\subsection{Force-directed layout and the effect of the disparity filter}

We first discuss how our method could unravel groups of questions in the PeNSE survey. To do so, we first created the interpretability graph as previously discussed and removed less important edges using the disparity filter. In~\figref{figPENSEForceDirected} we show the force-directed visualization of both the complete graph (\figref{figPENSEForceDirected}(a)) and the sparse graph obtained after application of the edge filtering method (\figref{figPENSEForceDirected}(b)). The hairy-ball appearance of the complete graph does not allow a direct interpretation.
In contrast, when the disparity filter with $\alpha=0.1$ was applied to the complete graph, community structures started to appear. A visual inspection shows that the questions related to physical activities seem to involve two separate groups. However, as is well known, a force-directed embedding can be subjectively interpreted by the person viewing the graph. Therefore, any insight given by this method should be verified by more formal methods. Thus, in the following we investigate in more detail how this group of questions behaves in the spectral space and in the inferred modular structure.

\begin{figure}[!htb]
    \centering
    \includegraphics[width=.98\columnwidth]{results/pense2015/fd5.pdf}
    \caption{Interpretability graph of the PeNSE dataset. The nodes represent the features and the edges represent the relationships between pairs of features, considering our approach. In (b) the graph was initially filtered using a disparity filter, through \eqref{eqVar2graph}, with parameter $0.1$. The disposition of the vertices is given by the force-directed algorithm. The colour of the vertices/edges corresponds to the group of each variable, provided by the dataset. A good correspondence can be observed between the \emph{spatial} communities and the \emph{colours}, such as the brown group at the bottom portion of the figure, which indicates that these features are from the same class and were correspondingly joined in the visualization.}
    \label{figPENSEForceDirected}
\end{figure}

\subsection{Spectral analysis}

\begin{figure}[ht]
    \centering
    \includegraphics[width=.5\columnwidth]{results/pense2015/allDummy_hotencode=1_shapValues=1_testScore=0_alphaThreshold=0-1_dispUse=out_sumVarsHub=0_useDisparityHubCalc=0_shapePerson=-1.png}
    \caption{Toroidal visualization of the features. Features with higher hub scores are closer to the axial center. A well-defined cluster of pink vertices can be observed on the left, which represents questions from the class \textit{Physical Activities}.
A magnetic eigenmap embedding with $q=1/10$ was utilized.}
    \label{figPENSETorus}
\end{figure}

In \figref{figPENSETorus} we present the toroidal embedding using the first two phases of the magnetic Laplacian, with $q=1/10$, together with the hub score. The questions with the highest hub scores in the survey are close to the center. The embedding shows that the questions related to physical activities are grouped into a well-separated cluster by the magnetic embedding. Therefore, we should expect the questions related to physical activities to form a group more strongly related internally. In addition, if a more detailed analysis is requested, a graph-cut approach can be applied in the toroidal embedding, aiming at removing most of these questions, followed by the application of our method to the remaining columns, complementing the analysis of the other questions.

\subsection{Hierarchical categorization of the features}

Community detection is generally a hard problem, and this difficulty stems, in part, from the absence of a clear and common definition of what a community is~\cite{peixotoHierarchicalBlockStructures2014}. The nSBM approach attempts to mitigate this issue by proposing a statistically principled approach to identify the modular structures. We show in~\figref{figPENSESBM} the circular visualization, provided by the nSBM, of the filtered interpretability graph of the features in the PeNSE survey. The directed graph with gray vertices and edges represents the hierarchical structure of the communities of the questions. The vertices are positioned according to the modular structure of the graph, and the color of the edges and of the nodes represents the class to which each question belongs in the survey. Such classes were originally defined by the designers of the survey. Thus, communities of vertices with the same color indicate a correspondence between the modular structure predicted by the method and the qualitative classification of the questions in the questionnaire.
\begin{figure}[ht] \centering \includegraphics[width=.75\columnwidth] {results/pense2015/allDummyAUC_alpha=0-05.png} \caption{Circular visualization~\cite{peixotoHierarchicalBlockStructures2014} of the filtered interpretability graph with edge bundling. Vertices in the enclosing circle represent the features and the directed edges show the relationship between two features. Vertices are grouped according to the modular structure and the color of each vertex represents the class of the question in the survey. The overlaid hierarchical structure symbolizes the hierarchy of the communities.} \label{figPENSESBM} \end{figure} This hierarchical circular visualization in~\figref{figPENSESBM} allows different types of analyses, two of which are of particular interest regarding the analysis of the survey. The first relates to the positioning and grouping of the vertices and their correspondence with the divisions proposed in the survey. The second has to do with the connections among the areas, i.e., the existence of dominant areas to which a group of features may connect. In~\figref{figPENSESBM}, one can readily see a high correspondence between the obtained grouping of the questions and the divisions of the survey for at least two classes: \textit{Food} (brown) and \textit{Body image} (magenta). Whereas the class \textit{Safety} (pink) shows considerable agreement, some of its features were positioned by the method separately, on the left region of the circle, grouped with questions related to the consumption of drugs (\figref{fighighlight}(a)). This suggests that the features on the left could alternatively be classified as pertaining to the class \textit{Illicit drugs}. It is important to emphasize that the nSBM approach is completely automatic and non-subjective, being based solely on the pattern of responses in the survey.
\begin{figure}[ht] \centering \subfloat[]{% \centering \includegraphics[width=.25\columnwidth] {results/pense2015/allDummyAUC_rng_726_drugs_and_safety_crop.png} } \qquad \subfloat[]{ \centering \includegraphics[width=.25\columnwidth] {results/pense2015/allDummyAUC_rng_726_food_and_parents_crop.png} } \caption{Particular groups in the hierarchical community visualization. In (a), part of the features related to safety are located in a separate group, possibly indicating a better grouping of the features. In (b), the highlighted features in orange have strong connections to green vertices, which is expected according to the classification of the features proposed in the questionnaire.} \label{fighighlight} \end{figure} In~\figref{fighighlight}(b), the small group in orange, on the left, is emphasized. This visualization allows us to see that this group has high connectivity to the green group, on the bottom part of the circle. The class in orange corresponds to \textit{Food}, and the highlighted vertices correspond to questions related to eating with parents. The highlighted vertices in green, in turn, represent questions that deal with the relationship between the teenager and their parents. This may be understood as indicating, from the point of view of the student, a strong association between a healthy relationship with the parents and sharing regular meals with them. Again, it points to another possibility for organizing these questions in the questionnaire. \begin{figure}[ht] \centering \subfloat[]{ \centering \includegraphics[width=.25\columnwidth] {results/pense2015/allDummyAUC_rng_726_physical_activities_crop.png} } \qquad \subfloat[]{% \centering \includegraphics[width=.25\columnwidth] {results/pense2015/allDummyAUC_rng_726_physical_activities_b_crop.png} } \caption{Questions originally categorized as related to Physical Activities. While being positioned close to each other in the circle, they are divided into two groups.
In (a) they are related to general sports, while in (b) they are related to activities imposed by the respondents' socioeconomic condition, such as commuting on foot.} \label{figphysical} \end{figure} Furthermore, the hierarchical nature of nSBM allows a more detailed categorization of the features. Most of the questions related to Physical activities (in violet) are positioned in the same region of the circle, but they are grouped into two distinct subgroups (see~\figref{figphysical}). By inspecting the questions in these subgroups, we noticed that the group in~\figref{figphysical}(a) is related to recreational activities, such as playing soccer or dancing, while the other (\figref{figphysical}(b)) relates to physical activities required by the socioeconomic condition of the respondent, such as walking or cycling from home to school (see the most relevant questions in Table~\ref{tableBicicleta}). This is related to the fact that, in developing countries, mobility relates to the socioeconomic level in different ways~\cite{da2008multiple}. The proposed method groups similar questions, such as those in Table~\ref{tableBicicleta}, in nearby regions of the graph. Whereas this analysis could be done manually for visualization purposes, an alternative approach is to perform it in an automatic and less subjective way. For instance, the questions could be mapped into a vectorial space, \textit{tabular2vec}. In appendix~\ref{sec:tabular2vec}, we further discuss this idea and present some results. These preliminary results seem to be consistent with our findings.
\begin{table}[ht] \centering \caption{Questions with the highest hub scores in the community highlighted in~\figref{figphysical}(b)} \begin{tabular}{l} \toprule ``During the last 7 days, on how many days did you go to school on foot or by bicycle?''\\ ``During the last 7 days, on how many days did you come back from school on foot or by bicycle?''\\ ``When you go to school on foot or by bicycle, how long does it take?''\\ ``When you come back from school on foot or by bicycle, how long does it take?''\\ \bottomrule \end{tabular} \label{tableBicicleta} \end{table} \section{Conclusions} \label{sec:conclusions} Network science (e.g.~\cite{newman2003structure}) has largely been used to study artificial and real systems, mainly thanks to its direct formalism for modelling relationships. Knowledge in this field has proven useful in the study of a variety of problems and data. In this work we report a method that uses recent developments in machine learning interpretability, as well as community and spectral analysis of graphs, to unravel relationships among the features of a tabular dataset. The proposed method differs from related works mainly by: (1) providing the possibility of interpreting the importance of features in predicting each other and (2) allowing the study of the data to focus on each observation or to encompass the entire dataset. To perform the graph analysis proposed in this work, it was first necessary to develop a method to map a tabular dataset into a graph that avoids the issues present in previous works. In this method, the graph is modelled with features as vertices and the importance of each feature in predicting another as the weight of the corresponding directed edge. These weights are assigned by considering the SHAP values of the respective predictions of a machine learning model. Since the edge weights are computed for each pair of features, the resulting graph is complete.
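To make this construction step concrete, the sketch below builds such a complete feature graph from a data matrix. Note that it uses the absolute Pearson correlation as a cheap, dependency-free stand-in for the SHAP importances the method actually uses (so the stand-in weights are symmetric, whereas real SHAP-based weights are generally directed); the function name and toy data are ours.

```python
import numpy as np

def interpretability_graph(X):
    """Complete feature graph from a data matrix X of shape (n, d).

    In the paper, the weight of edge i -> j is the SHAP importance of
    feature i when a model predicts feature j.  Here we use absolute
    Pearson correlation as a stand-in, which keeps the sketch
    self-contained but makes the weights symmetric.
    """
    C = np.abs(np.corrcoef(X, rowvar=False))  # d x d weight matrix
    np.fill_diagonal(C, 0.0)                  # no self-loops
    return C

# Toy data: feature 1 is a noisy copy of feature 0; feature 2 is independent.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
X = np.column_stack([a,
                     a + 0.1 * rng.normal(size=200),
                     rng.normal(size=200)])
A = interpretability_graph(X)
print(A.round(2))
```

As expected, the edge between the two dependent features carries a large weight, while edges to the independent feature are weak; the disparity filtering discussed next operates on exactly this kind of dense weight matrix.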
The complexity of this structure restricts the scope of graph analysis methods that can be effectively applied to it. Therefore, the disparity filter criterion was employed to keep just the strong relationships. From the filtered graph, we showed how to use graph analysis methods to extract insights and improve the understanding of the dataset. Specifically, we analyzed the toroidal embedding obtained with the magnetic Laplacian and the nested stochastic block model, in order to unravel how the features of the dataset group into communities. The resulting modular structure, in turn, allows us to analyze the groups at varying levels of granularity, thanks to its hierarchical grouping capabilities. The usefulness of our methodology is illustrated with the PeNSE survey dataset. The results showed several findings, such as the good overall agreement between the communities obtained and the original qualitative classification of the questions in the survey, especially for the groups \textit{Food} and \textit{Body Image}. However, the method also showed that some questions from the class \textit{Safety} could be reassigned as \textit{Drug Consumption} questions. Also, a high connectivity was observed between the questions from the class \textit{Food} related to eating with parents and questions from the class \textit{Situations at Home}, possibly reflecting a harmonious relationship with the parents. It is important to understand the scope and limitations of the proposed approach when aiming at developing future works. For instance, the obtained graph takes into account the predictions of a machine learning model, but it does not aim at representing a causality graph. Stronger conditions would need to be satisfied to construct such a graph. Also, if the data is composed of few instances, the findings may be strongly biased. As future work, different tabular datasets, such as medical, economic, and technical data, could be considered.
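The disparity filter mentioned above admits a compact closed form: for a node $u$ with degree $k_u > 1$ and normalized edge weight $p_{uv} = w_{uv}/s_u$, the significance of the edge is $\alpha_{uv} = (1 - p_{uv})^{k_u - 1}$, and the edge survives when $\alpha_{uv}$ falls below a chosen threshold. The sketch below implements this criterion; the threshold value and the convention for degree-1 nodes are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def disparity_filter(W, alpha=0.05):
    """Backbone of a weighted directed graph via the disparity filter.

    An out-edge of node u with normalized weight p = w_uv / s_u is kept
    when its significance (1 - p) ** (k - 1) is below `alpha`, where k
    is the out-degree of u.  Degree-1 nodes keep their single edge by
    convention, since it carries all of the node's weight.
    """
    keep = np.zeros_like(W, dtype=bool)
    for u in range(W.shape[0]):
        nz = np.flatnonzero(W[u])
        k = len(nz)
        if k == 0:
            continue
        if k == 1:
            keep[u, nz[0]] = True
            continue
        p = W[u, nz] / W[u, nz].sum()
        a = (1.0 - p) ** (k - 1)      # closed-form significance
        keep[u, nz] = a < alpha
    return np.where(keep, W, 0.0)

# A hub whose weight is concentrated on one neighbour: only the
# dominant edge is statistically significant and survives the filter.
W = np.zeros((5, 5))
W[0, 1:] = [0.97, 0.01, 0.01, 0.01]
B = disparity_filter(W, alpha=0.05)
print(B[0])
```

On this toy hub, the dominant edge has significance $(0.03)^3 \approx 2.7\times10^{-5}$ and is kept, while the three weak edges have significance $\approx 0.97$ and are removed, which is precisely the "keep just the strong relationships" behavior described above.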
We also believe that future works could investigate the use of synthetic models for generating extensive tabular data, therefore allowing more systematic investigations of the suggested method in the spectral space. In doing so, it could be possible to establish a more direct connection between the behavior of the eigenvalues and eigenvectors and the structural dependencies of the features. \section*{Acknowledgments} The authors thank CNPq (grant 307085/2018-0), CAPES and FAPESP (grants 2019/01077-3 and 15/22308-2) for financial support. The authors thank Joao Ricardo Sato, Filipi N. Silva and Thomas Peron for all suggestions and useful discussions. \ifdefined\pdfforpub \bibliographystyle{plain} \else \fi \section{Toy Model and the spectral gap} \begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth] {appendix/toymodel.pdf} \caption{} \end{figure}
Q: iOS Blur effect for a view behind when showing a UIView on top (dialog) I want to blur the view behind a popped-up UIView. Is there any way to achieve this programmatically? Thanks in advance! A: You can do this in many ways, depending on the iOS versions you are supporting. Here are two options that are backwards compatible with versions before iOS 8: * *FXBlurView *iOS-Blur (similar to implementing a UIToolbar view) On iOS 8 you can use a UIVisualEffectView. The process is simple and you can check out this Ray Wenderlich tutorial on how to do it. Since you asked for a programmatic way, here is an example using the iOS-Blur library mentioned before: JCRBlurView *blurView = [JCRBlurView new]; [blurView setFrame:CGRectMake(0.0f,0.0f,100.0f,100.0f)]; [self.view addSubview:blurView];
Gravitational waves are ripples in the fabric of spacetime produced by accelerating massive bodies, according to Albert Einstein's general theory of relativity. In general relativity, gravity manifests itself as massive objects bending the structure of spacetime. In addition, something else happens if the gravitational field varies, for example when two massive objects orbit each other. The motion of massive bodies through spacetime perturbs its very fabric, imprinting a signal that travels away as a disturbance to the structure of spacetime itself: gravitational waves. The animation visualises the effect of these oscillations, which consist of sequential stretches and compressions of spacetime, rhythmically increasing and reducing the distance between particles as a wave propagates through the surroundings.
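In the standard weak-field notation of textbooks (not from the article itself), the stretching and compression described above can be written as a small ripple on flat spacetime:

```latex
% Linearized gravity: the metric is a flat background plus a small perturbation
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1
% A plus-polarized wave of amplitude h_+ rhythmically modulates the proper
% distance L between two free-floating test particles:
\frac{\Delta L}{L} \approx \frac{1}{2}\, h_+ \cos(\omega t)
```

The fractional change in distance is half the wave amplitude, which is why detectors measure the dimensionless "strain" of passing waves.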
There is good news for fans of the fun kids' show Fluffy Gardens; spring sees the arrival of a range of pre-school toys, play-sets, mini plush, beanie dolls and other accessories, including soft nursery toys and collectible plush toys with light and sound. The pre-school animation series is aimed at children old enough to enjoy bedtime stories: children aged from 2 to 5 years old, though older kids will love it too, as will the parents. The show revolves around events that happen in Fluffy Gardens, a place where a colourful bunch of friends live. Each story is unique and covers the adventures and surprise happenings of one particular character. The Fluffy Club has a huge following, and little ones will have their own favourite Fluffy Gardens character. They will enjoy the adventures and escapades, which are different every time. All sorts of animals live in Fluffy Gardens and always take care of each other; they are very friendly and helpful. The show teaches a child that the world they live in is full of different people with their own personalities, just like in Fluffy Gardens. Understanding that it is okay to be the way you are and that others aren't exactly the same reassures a child as they grow up. It also promotes a set of positive values that encourages learning through gentle feel-good stories. Children will learn tolerance and become aware of how their actions can affect others. Doing good, helping friends, knowing what is right and wrong, and sharing are also important themes covered in the programme.
Floella: The Fruitbat who sleeps all day and gets up at night.
Mr. Johnson: A Panda with a passion for gardening.
Mavis: The Pony who lives in a pink house and is very careful not to scrape her knees.
Wee Reg: A happy little puppy who is everyone's best friend.
Tooty: The Elephant who loves to keep in trim and goes jogging and swimming.
Lola: The Mosquito, a silly little thing who is always confused.
Colleen: The inquisitive Cow who likes stargazing and watching the world go by.
Lenny: A messy and lazy Octopus who never puts his toys away.
Rex: A very happy Pig who loves to laugh and loves food too.
Fudge & Lily: The naughty twin kittens who like to play tricks on others.
At the moment children can have fun colouring in pictures from downloadable Fluffy Gardens material. There is also an online game to play. Hopefully it won't be too long before there are Fluffy Gardens colouring books and games to go with the new line of plush promised for the New Year.
Albany's Catholic bishop has called on New York Governor Andrew Cuomo to stop the "Death Star", as he called a bill in the state Legislature that would expand current state law on abortion and that has the full backing of Cuomo, a Catholic. Introduced in the Legislature the week of January 7, the Reproductive Health Act, or RHA, is known as S. 240 in the state Senate and A. 21 in the state Assembly. Cuomo has promised it will pass both houses within the first 30 days of the legislative session. Bishop Scharfenberger also expressed the concern being voiced by pro-life leaders in the state that, "if abortion is deemed a fundamental right in New York state," the consequences for the pro-life movement could be dire. "Mr. Cuomo, do not build this Death Star," Bishop Scharfenberger concluded.
Miguel I (; — ) was King of Portugal (1828–1834). A member of the House of Braganza, he was the son of the Portuguese king João VI and brother of Pedro IV. He was regent until the arrival of Pedro IV in Portugal (26 February to 3 March 1828) and regent for Maria II (3 to 11 July 1828). He ruled the country during the civil war of 1828–1834 and abdicated the throne after his defeat by the liberals led by Pedro IV. His nicknames were the Absolutist () and the Traditionalist ().

Biography

Early years
The third son of the Portuguese king João VI and Carlota Joaquina of Spain, he was born in Portugal but left with his family for Brazil in 1807, where he spent his childhood and youth. When the royal family returned to Portugal in 1821, Miguel, with his mother's support, became the leader of the absolutists. The civil wars in Portugal of 1823–1834, fought between the supporters of preserving the constitutional monarchy and the supporters of absolutism, are named after him. On 30 April 1824 he arrested the ministers and surrounded the royal palace with guards. The king, however, escaped on an English ship, and Miguel was forced to ask forgiveness. He was exiled from the country and settled in Vienna.

Accession to the throne
After the death of his father, King João VI, in 1826, Miguel's elder brother, Pedro I the Soldier, who as Brazilian emperor could not occupy the Portuguese throne, proclaimed his seven-year-old daughter Maria queen of Portugal and declared her the fiancée of Miguel, who was to act as regent until she came of age. At the same time, the country was given a liberal constitution. Miguel agreed to everything, swore an oath to the constitution, became engaged to his niece and assumed the regency on 26 February 1828; but already on 13 March he dissolved the constitutional Cortes, convened the old Cortes and had himself proclaimed king.

War
In vain did Pedro declare that his brother had forfeited all rights and that the engagement to Maria was void. Eventually Pedro managed to take Porto and then Lisbon, and installed Maria there once again.
In 1834 Miguel was forced to sign a capitulation at Évora, under which he renounced all claims to the throne and promised never to return to Portugal. Soon afterwards, however, he protested against the acts he had signed, as a result of which he lost the compensation that had been assigned to him. Miguel and his descendants were excluded from the line of succession to the Portuguese throne. For some time Miguel lived in Italy, and after his wedding he settled at Kleinheubach am Main. He spent the end of his life in Germany, at Bronnbach castle near Wertheim.

Family
In 1851, at the age of 48, Miguel married Adelaide, Princess of Löwenstein-Wertheim-Rosenberg (1831–1909), with whom he had six daughters and a son. He wished to become the "grandfather of Europe", which came true only after his death. Adelaide succeeded in marrying their daughters off well.

Children:
Maria das Neves (1852–1941), wife of Alfonso Carlos, pretender to the Spanish throne;
Miguel (1853–1927), Duke of Braganza;
Maria Theresa (1855–1944), wife of Archduke Karl Ludwig of Austria;
Maria José (1857–1943), wife of Duke Karl Theodor of Bavaria;
Adelgundes (1858–1946), wife of Henry of Bourbon-Parma;
Maria Ana (1861–1942), wife of Grand Duke William IV of Luxembourg;
Maria Antónia (1862–1959), wife of Duke Robert I of Parma.
He also had two illegitimate daughters. After the deaths of the childless sons of King Carlos I (Luís Filipe and Manuel II) and the extinction of the Saxe-Coburg-Gotha branch of the House of Braganza, the descendants of Miguel once again became claimants to the Portuguese throne.
**What data to use** Here we will provide you list of what data you'll need to answer the question. **How do I use this visualization?** Here will be a description of how to interpret what you are seeing in the visualization. **Why did we select this visualization?** And we will provide a brief explanation of what went into making these visualization design choices to answer this question.
if(DEFINED TBB_FOUND) return() endif(DEFINED TBB_FOUND) if(DEFINED TBB_ROOT) if(NOT DEFINED TBB_INC_PATH) set(TBB_INC_PATH ${TBB_ROOT}/include) endif(NOT DEFINED TBB_INC_PATH) if(NOT DEFINED TBB_LIB_PATH) if(APPLE) set(TBB_LIB_PATH ${TBB_ROOT}/lib) else(APPLE) set(TBB_LIB_PATH ${TBB_ROOT}/lib/intel64/gcc4.7) endif(APPLE) endif(NOT DEFINED TBB_LIB_PATH) endif(DEFINED TBB_ROOT) file(READ ${CMAKE_CURRENT_LIST_DIR}/FindTBB.cpp TBB_TEST_SOURCE) if(NOT DEFINED TBB_LINK_LIBRARIES) find_library(TBB_LINK_LIBRARIES_RELEASE_FOUND tbb PATHS ${TBB_LIB_PATH} ENV LIBRARY_PATH ENV LIB NO_DEFAULT_PATH) find_library(TBB_LINK_LIBRARIES_RELEASE_FOUND tbb) find_library(TBB_LINK_LIBRARIES_DEBUG_FOUND tbb_debug PATHS ${TBB_LIB_PATH} ENV LIBRARY_PATH ENV LIB NO_DEFAULT_PATH) find_library(TBB_LINK_LIBRARIES_DEBUG_FOUND tbb_debug) if(TBB_LINK_LIBRARIES_RELEASE_FOUND AND TBB_LINK_LIBRARIES_DEBUG_FOUND) set(TBB_LINK_LIBRARIES optimized ${TBB_LINK_LIBRARIES_RELEASE_FOUND} debug ${TBB_LINK_LIBRARIES_DEBUG_FOUND} CACHE STRING "Link to TBB") set(TBB_LINK_LIBRARIES_RELEASE ${TBB_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB Release") set(TBB_LINK_LIBRARIES_DEBUG ${TBB_LINK_LIBRARIES_DEBUG_FOUND} CACHE STRING "Link to TBB Debug") message(STATUS "Found TBB libraries: ${TBB_LINK_LIBRARIES}") elseif(TBB_LINK_LIBRARIES_RELEASE_FOUND) set(TBB_LINK_LIBRARIES ${TBB_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB") set(TBB_LINK_LIBRARIES_RELEASE ${TBB_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB Release") message(STATUS "Found TBB libraries: ${TBB_LINK_LIBRARIES}") else(TBB_LINK_LIBRARIES_RELEASE_FOUND AND TBB_LINK_LIBRARIES_DEBUG_FOUND) message(STATUS "NOT Found TBB libraries") endif(TBB_LINK_LIBRARIES_RELEASE_FOUND AND TBB_LINK_LIBRARIES_DEBUG_FOUND) endif(NOT DEFINED TBB_LINK_LIBRARIES) if(NOT DEFINED TBB_MALLOC_LINK_LIBRARIES) find_library(TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND tbbmalloc PATHS ${TBB_LIB_PATH} ENV LIBRARY_PATH ENV LIB NO_DEFAULT_PATH) 
find_library(TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND tbbmalloc) find_library(TBB_MALLOC_LINK_LIBRARIES_DEBUG_FOUND tbbmalloc_debug PATHS ${TBB_LIB_PATH} ENV LIBRARY_PATH ENV LIB NO_DEFAULT_PATH) find_library(TBB_MALLOC_LINK_LIBRARIES_DEBUG_FOUND tbbmalloc_debug) if(TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND AND TBB_MALLOC_LINK_LIBRARIES_DEBUG_FOUND) set(TBB_MALLOC_LINK_LIBRARIES optimized ${TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND} debug ${TBB_MALLOC_LINK_LIBRARIES_DEBUG_FOUND} CACHE STRING "Link to TBB malloc") set(TBB_MALLOC_LINK_LIBRARIES_RELEASE ${TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB malloc Release") set(TBB_MALLOC_LINK_LIBRARIES_DEBUG ${TBB_MALLOC_LINK_LIBRARIES_DEBUG_FOUND} CACHE STRING "Link to TBB malloc Debug") message(STATUS "Found TBB malloc libraries: ${TBB_MALLOC_LINK_LIBRARIES}") elseif(TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND) set(TBB_MALLOC_LINK_LIBRARIES ${TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB malloc") set(TBB_MALLOC_LINK_LIBRARIES_RELEASE ${TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB malloc Release") message(STATUS "Found TBB malloc libraries: ${TBB_MALLOC_LINK_LIBRARIES}") else(TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND AND TBB_MALLOC_LINK_LIBRARIES_DEBUG_FOUND) message(STATUS "NOT Found TBB malloc libraries") endif(TBB_MALLOC_LINK_LIBRARIES_RELEASE_FOUND AND TBB_MALLOC_LINK_LIBRARIES_DEBUG_FOUND) endif(NOT DEFINED TBB_MALLOC_LINK_LIBRARIES) if(NOT DEFINED TBB_MALLOC_PROXY_LINK_LIBRARIES) find_library(TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND tbbmalloc_proxy PATHS ${TBB_LIB_PATH} ENV LIBRARY_PATH ENV LIB NO_DEFAULT_PATH) find_library(TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND tbbmalloc_proxy) find_library(TBB_MALLOC_PROXY_LINK_LIBRARIES_DEBUG_FOUND tbbmalloc_proxy_debug PATHS ${TBB_LIB_PATH} ENV LIBRARY_PATH ENV LIB NO_DEFAULT_PATH) find_library(TBB_MALLOC_PROXY_LINK_LIBRARIES_DEBUG_FOUND tbbmalloc_proxy_debug) if(TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND AND 
TBB_MALLOC_PROXY_LINK_LIBRARIES_DEBUG_FOUND) set(TBB_MALLOC_PROXY_LINK_LIBRARIES optimized ${TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND} debug ${TBB_MALLOC_PROXY_LINK_LIBRARIES_DEBUG_FOUND} CACHE STRING "Link to TBB malloc proxy") set(TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE ${TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB malloc proxy Release") set(TBB_MALLOC_PROXY_LINK_LIBRARIES_DEBUG ${TBB_MALLOC_PROXY_LINK_LIBRARIES_DEBUG_FOUND} CACHE STRING "Link to TBB malloc proxy Debug") message(STATUS "Found TBB malloc proxy libraries: ${TBB_MALLOC_PROXY_LINK_LIBRARIES}") elseif(TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND) set(TBB_MALLOC_PROXY_LINK_LIBRARIES ${TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB malloc proxy") set(TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE ${TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND} CACHE STRING "Link to TBB malloc proxy Release") message(STATUS "Found TBB malloc proxy libraries: ${TBB_MALLOC_PROXY_LINK_LIBRARIES}") else(TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND AND TBB_MALLOC_PROXY_LINK_LIBRARIES_DEBUG_FOUND) message(STATUS "NOT Found TBB malloc proxy libraries") endif(TBB_MALLOC_PROXY_LINK_LIBRARIES_RELEASE_FOUND AND TBB_MALLOC_PROXY_LINK_LIBRARIES_DEBUG_FOUND) endif(NOT DEFINED TBB_MALLOC_PROXY_LINK_LIBRARIES) if(NOT DEFINED TBB_INCLUDE_DIR) find_path(TBB_INCLUDE_DIR tbb/tbb.h PATHS ${TBB_INC_PATH} ENV CPATH NO_DEFAULT_PATH) find_path(TBB_INCLUDE_DIR tbb/tbb.h) if(TBB_INCLUDE_DIR) message(STATUS "Found TBB headers: ${TBB_INCLUDE_DIR}") else(TBB_INCLUDE_DIR) message(STATUS "NOT Found TBB headers") endif(TBB_INCLUDE_DIR) endif(NOT DEFINED TBB_INCLUDE_DIR) if(TBB_LINK_LIBRARIES AND TBB_INCLUDE_DIR) set(TBB_FOUND TRUE CACHE BOOL "Found TBB") else(TBB_LINK_LIBRARIES AND TBB_INCLUDE_DIR) set(TBB_FOUND FALSE CACHE BOOL "NOT Found TBB") endif(TBB_LINK_LIBRARIES AND TBB_INCLUDE_DIR)
Good Movies for 6 Year Olds
List Rules: Vote up the best films appropriate for a six year old to watch.
Below you'll find the most awesome movies for 6 year olds, ranked from best to worst by user votes. The best movies for 6 year olds come in many genres and from many different decades. Some great movies for six year olds to watch are live action while other good films for six year olds are cartoons. It won't surprise anyone to see that Disney and Pixar are responsible for many of the top films for 6 year olds, though there are many other great kid's movies listed here as well. What titles will you see on this best movies for 6 year olds list? By the age of six, many kids are ready to experience the colorful adventures of The Wizard of Oz. Though the Wicked Witch may be a bit scary, Dorothy and her friends will fill little imaginations with wonder. Mary Poppins is another good movie that a six year old might enjoy watching. If your kids like to build things, they will probably have a blast watching The Lego Movie.
Other great movies that appear on this top movies for six year olds list include The Iron Giant, Hugo, and James and the Giant Peach. Which movie do you think is the best for six year olds? Let your voice be heard by voting the best to the top of this list and add any other age appropriate movies all six year olds will love if they are not already listed.

Minions
Sandra Bullock, Jon Hamm, Michael Keaton
Minions is a 2015 American 3D computer-animated comedy film directed by Pierre Coffin and Kyle Balda, and is a spin-off/prequel to the Despicable Me franchise. Minions Stuart, Kevin, and Bob are …

Despicable Me Franchise
Despicable Me is a computer-animated comedy film franchise distributed by Universal Pictures and produced by Illumination Entertainment. It consists of three feature films, ten short films and …

Kung Fu Panda
Angelina Jolie, Lucy Liu, Dustin Hoffman
Kung Fu Panda is a 2008 American computer-animated action comedy martial arts film produced by DreamWorks Animation and distributed by Paramount Pictures. It was directed by John Stevenson and …

Mary Poppins
Julie Andrews, Dick Van Dyke, Elsa Lanchester
Mary Poppins is a 1964 American musical fantasy film directed by Robert Stevenson and produced by Walt Disney, with songs written and composed by the Sherman Brothers. The screenplay is by Bill …

How to Train Your Dragon
Gerard Butler, Kristen Wiig, David Tennant
How to Train Your Dragon is a 2010 American 3D computer-animated action-fantasy film by DreamWorks Animation loosely based on the British book series of the same name by Cressida Cowell. The …

Monsters, Inc.
Billy Crystal, John Goodman, Steve Buscemi
Monsters, Inc. is a 2001 American computer-animated comedy film directed by Pete Docter, produced by Pixar Animation Studios, and released by Walt Disney Pictures.
John Lasseter and Andrew …

Cars
Tom Hanks, Billy Crystal, Sheryl Crow
Cars is a 2006 American computer-animated comedy-adventure sports film produced by Pixar Animation Studios and released by Walt Disney Pictures. Directed and co-written by John Lasseter, it is …

Shrek
Cameron Diaz, Eddie Murphy, Mike Myers
Shrek is a 2001 American computer-animated fantasy-comedy film produced by PDI/DreamWorks, released by DreamWorks Pictures, directed by Andrew Adamson and Vicky Jenson, featuring the voices of …

Inside Out
Amy Poehler, Diane Lane, Mindy Kaling
Inside Out is a 2015 American 3D computer-animated comedy-drama film directed by Pete Docter. In the mind of a young girl, five personified emotions - Joy (Amy Poehler), Sadness (Phyllis Smith), …

The Sound of Music
Julie Andrews, Christopher Plummer, Eleanor Parker
The Sound of Music is a 1965 American musical drama film produced and directed by Robert Wise and starring Julie Andrews and Christopher Plummer. The film is an adaptation of the 1959 Broadway …

The Incredibles
Samuel L. Jackson, Holly Hunter, Jason Lee
The Incredibles is a 2004 American computer-animated comedy superhero film written and directed by Brad Bird and released by Walt Disney Pictures. It was the sixth film produced by Pixar …

WALL-E
Sigourney Weaver, Laraine Newman, Kathy Najimy
WALL-E is a 2008 American computer-animated science-fiction comedy film produced by Pixar Animation Studios and released by Walt Disney Pictures. Directed by Andrew Stanton, the story follows a …

Toy Story Franchise
Toy Story is a CGI animated film series and Disney media franchise that began with the original 1995 film, Toy Story, produced by Pixar Animation Studios and released by Walt Disney Pictures. …

Honey, I Shrunk the Kids
Keri Russell, Allison Mack, Rick Moranis
Honey, I Shrunk the Kids is a 1989 soft science fiction-family film.
The directorial debut of Joe Johnston and produced by Walt Disney Pictures, it tells the story of an inventor who …

The Wizard of Oz
Judy Garland, Margaret Hamilton, Frank Morgan
The Wizard of Oz is a 1939 American musical fantasy film produced by Metro-Goldwyn-Mayer, and the most well-known and commercially successful adaptation based on the 1900 novel The Wonderful …

The Lego Movie
Chris Pratt, Will Ferrell, Elizabeth Banks
The Lego Movie is a 2014 3D computer-animated adventure comedy film directed by Phil Lord and Christopher Miller. An ordinary Lego minifigure (Chris Pratt) who finds himself being the only one …

The Iron Giant
Jennifer Aniston, Vin Diesel, Cloris Leachman
The Iron Giant is a 1999 American animated science fiction comedy-drama film using both traditional animation and computer animation, produced by Warner Bros. Animation, and based on the 1968 …

E.T. the Extra-Terrestrial
Drew Barrymore, Erika Eleniak, Debra Winger
E.T. the Extra-Terrestrial is a 1982 American science fiction-family film co-produced and directed by Steven Spielberg and written by Melissa Mathison, featuring special effects by Carlo …

The Lego Movie 2: The Second Part
Chris Pratt, Elizabeth Banks, Will Arnett
The Lego Movie 2: The Second Part is a 2019 3D computer-animated space action musical comedy film directed by Mike Mitchell. Now a Master Builder, life for Emmet Brickowski (Chris Pratt) is …

Elf
Zooey Deschanel, Will Ferrell, Peter Dinklage
Elf is a 2003 American Christmas comedy film directed by Jon Favreau and written by David Berenbaum. It stars Will Ferrell, James Caan, Bob Newhart, Ed Asner, and Zooey Deschanel. It was …

Planes
Teri Hatcher, Val Kilmer, Julia Louis-Dreyfus
Planes is a 2013 American 3D computer-animated sports comedy film produced by DisneyToon Studios and released by Walt Disney Pictures.
It is a spin-off of Pixar's Cars franchise and the first ...more Harry Potter and the Philosopher's Stone Emma Watson, Daniel Radcliffe, Julianne Hough Harry Potter and the Philosopher's Stone is a 2001 fantasy film directed by Chris Columbus and distributed by Warner Bros. Pictures. It is based on the novel of the same name by J. K. Rowling. ...more The Muppet Christmas Carol Michael Caine, Frank Oz, Louise Gold The Muppet Christmas Carol is a 1992 American musical fantasy-comedy film and an adaptation of Charles Dickens's 1843 novel A Christmas Carol. It is the fourth in a series of live-action musical ...more The Croods Emma Stone, Nicolas Cage, Ryan Reynolds The Croods is a 2013 American 3D computer-animated adventure comedy film produced by DreamWorks Animation and distributed by 20th Century Fox. It features the voices of Nicolas Cage, Emma Stone, ...more Hugo Jude Law, Chloë Grace Moretz, Christopher Lee Hugo is a 2011 American 3D historical adventure drama film directed and co-produced by Martin Scorsese and adapted for the screen by John Logan. Based on Brian Selznick's novel The Invention of ...more List Rules: Vote up the best films appropriate for a six year old to watch. 
Sumowo () is a small village in the northeast of the Polish Warmian-Masurian Voivodeship. It belongs to the rural gmina of Dubeninki (Dubeningken) in Gołdap County (Powiat Gołdapski).

Geographical location

Sumowo lies 15 kilometers southeast of the county seat Gołdap (Goldap), northeast of Jezioro Niskie (Niedersee), directly on the former border between the German Reich and Poland, which here runs parallel to today's voivodeship border between Warmia-Masuria and Podlachia.

History

The small village, at that time called Szumowen, was probably founded around 1562. Written Sumowen after 1818 and then Summowen until 1938, before 1945 the village consisted of only a few small and larger farms. In 1874 Summowen was incorporated into the newly established district (Amtsbezirk) of Rogainen, and in 1939 it was transferred to the Amtsbezirk of Gurnen. Until 1945, both belonged to the Goldap district in the Gumbinnen administrative region of the Prussian province of East Prussia. In 1910, 88 inhabitants were registered in Summowen. Their number fell to 61 by 1933 and stood at 99 in 1939. In the course of the National Socialist renaming campaign, Summowen received the name "Summau" on 3 June 1938 (officially confirmed on 16 July of that year). As a result of the war, the village, together with all of southern East Prussia, became part of Poland in 1945, and it has been called "Sumowo" ever since. Sumowo is now a locality within the Gmina Dubeninki in Powiat Gołdapski and lies in the Warmian-Masurian Voivodeship.

Church

Before 1945 the population of Summowen (later Summau) was predominantly Protestant. The village belonged to the parish of the church of Dubeningken, which was part of the church district of Goldap in the ecclesiastical province of East Prussia of the Church of the Old Prussian Union. The Catholic parishioners belonged to the parish in Goldap in the Diocese of Warmia. After 1945 the situation in Sumowo changed, as the village is now inhabited mostly by Catholics. Their parish church is the former Protestant church in Dubeninki, which belongs to the Deanery of Filipów in the Diocese of Ełk (Lyck) of the Catholic Church in Poland. The few Protestant parishioners now belong to the congregation in Gołdap, a filial congregation of the parish in Suwałki in the Masurian Diocese of the Evangelical-Lutheran Church in Poland.

Transport

Sumowo lies somewhat off the main routes, on a country road leading from Czarne (Czarnen, 1938 to 1945 Scharnen) on the lake of the same name to Białe Jeziorki across the former German Reich/Poland state border. There is no longer a rail connection, since the Goldap–Szittkehmen line (with the station Dubeningken) and the Lyck–Insterburg line (Ełk–Tschernjachowsk, with the station Gurnen) were abandoned in 1945 and closed to passenger traffic in 1993, respectively.
\section{Introduction} Many NLP applications, such as machine translation or question answering, require \emph{subword tokenization}, i.e. splitting words into a sequence of substrings \cite{mielke2021between}. Such tokenizers are trained by an unsupervised algorithm, usually either Byte-Pair Encoding (BPE; \citealt{gage1994new,sennrich2016neural}) or Unigram Language Modeling (ULM; \citealt{kudo-2018-subword}). To give a few examples, contemporary language models RoBERTa \cite{liu2019roberta} and GPT-3 \cite{brown2020language} use a byte-level BPE \cite{radford2019language} while XLNet \cite{yang2019xlnet} relies on ULM. These subword tokenization algorithms are not linguistically motivated but are rather based on statistical co-occurrences. Therefore, unsupervised and semi-supervised methods for morphological segmentation \cite{creutz2005unsupervised} have emerged in parallel, state-of-the-art methods of this kind being Morfessor variants \cite{gronroos2014morfessor,gronroos2020morfessor}. \citet{ataman2017linguistically} and \citet{schwartz2020neural} find that Morfessor-based language models can outperform BPE-based ones. \citet{matthews2018using,nzeyimana2022kinyabert} show that enriching BPE with morphological analyzers can be beneficial for translation, while many others \cite{domingo2018much,machavcek2018morphological,schwartz2020neural,saleva-lignos-2021-effectiveness} find no conclusive improvements over BPE for machine translation. \begin{table}[t] \small \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X} \centering \begin{tabularx}{\linewidth}{l|ll|r} System & type & motivation & segmentation \\ \hline \hline BPE & surface & sta. & in | val | uable \\ Morfessor2 & surface & sta. \& lin. & in | valuable \\ DeepSPIN-3 & canonical & sta. \& lin.
& in | value | able \\ \hline \end{tabularx} \caption{\label{tab:examples} Structural differences of subword tokenization (BPE), morphological segmentation (Morfessor2), and morpheme segmentation (DeepSPIN-3 -- subtask 1 winning system); acronyms: sta. - statistics and lin. - linguistic} \end{table} One of the core problems is that the state-of-the-art morphological segmentation and subword tokenization algorithms provide ``surface-level'' segmentation, which has several theoretical drawbacks with respect to ``canonical'' segmentation (e.g., segmented substrings are not considered as meaningful as morphemes). \citet{cotterell2016joint} provided formal definitions for both: given a word $w$, its ``surface'' segmentation is a sequence of \textit{surface substrings} the concatenation of which is~$w$, e.g., \textit{funniest} → \textit{funn-i-est}. The purpose of canonical segmentation \cite{kann-etal-2016-neural,Task2-TueSeg}, on the other hand, is not only computing surface segmentation but also restoring standardized forms of morphemes, e.g., \textit{funniest} → \textit{fun-y-est}. More detailed structural distinctions between these segmentation types are shown in Table~\ref{tab:examples}. However, state-of-the-art studies in canonical segmentation have been limited to a very small number of languages with sufficiently rich morphological resources \cite{kurimo2010morpho,kurimo2010proceedings,cotterell2016joint,kann-etal-2018-fortification}. With the goal of advancing research in this direction, we present a \textit{morpheme segmentation shared task} and provide large-scale datasets over nine languages, evaluation metrics, and morphological annotations of five million word formations. In this, we rely on the latest release of UniMorph \cite{batsuren2022unimorph} which has introduced morpheme segmentations and derivational data from MorphyNet \cite{batsuren-etal-2021-morphynet}.
The resulting shared task is a follow-up to past morphological segmentation shared tasks such as ``MorphoChallenge'' \cite{kurimo2007unsupervised,kurimo2008overview,kurimo2009overview} or ``Multilingual parsing'' \cite[where lemmatization as segmentation is a subtask]{zeman2017conll}. \begin{table}[t] \small \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X \centering \begin{tabularx}{\linewidth}{clLc} \hline Lang & word & segmentation & category \\ \hline \multirow{2}{*}{eng} & sheepiness & sheep @@y @@ness & 010 \\ & pokers & poke @@er @@s & 110 \\ \hline \multirow{2}{*}{hun} & időpontod & idő @@pont @@od & 101 \\ & szőttetek & sző @@tt @@etek & 100 \\ \hline \multirow{2}{*}{mon} & \foreignlanguage{russian}{харах} & \foreignlanguage{russian}{харах} & 000 \\ & \foreignlanguage{russian}{гэмтлийг} & \foreignlanguage{russian}{гэмтэх @@л @@ийг} & 110 \\ \hline \end{tabularx} \caption{\label{tab:training_examples} Training samples for Subtask 1. Each sample consists of a word, its canonical segmentation, and a category encoding word formation processes.} \end{table} \begin{table*}[h] \small \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X \centering \begin{tabularx}{\textwidth}{|c|ccc|l|L|} \hline Category & Infl. & Deri. & Comp. 
& Description & English example (input ==\textgreater ~ output) \\ \hline 000 & - & - & - &Root words (free morphemes) & progress ==\textgreater ~progress \\ 100 & \checkmark & - & - & Inflection only & prepared ==\textgreater ~ prepare @@ed \\ 010 & - & \checkmark & - &Derivation only & intensive ==\textgreater ~intense @@ive \\ 001 & - & - & \checkmark &Compound only & hotpot ==\textgreater ~hot @@pot \\ 101 & \checkmark & - & \checkmark & Inflection and Compound & wheelbands ==\textgreater ~wheel @@band @@s \\ 011 & - & \checkmark & \checkmark & Derivation and Compound & tankbuster ==\textgreater ~tank @@bust @@er \\ 110 & \checkmark & \checkmark & - & Inflection and Derivation & urbanizes ==\textgreater ~urban @@ize @@s \\ 111 & \checkmark & \checkmark & \checkmark & Inflection, Derivation, Compound & trackworkers ==\textgreater ~track @@work @@er @@s\\ \hline \end{tabularx} \caption{\label{tab:categories}Morphological categories and descriptions of segmented words in subtask 1} \end{table*} \begin{table*}[ht] \small \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X \centering \begin{tabularx}{0.95\textwidth}{|C|rrrrrrrrr|} \hline Category & English & Spanish & Hungarian & French & Italian & Russian & Czech & Latin & Mongolian \\ \hline 000 & 101938 & 15843 & 6952 & 13619 & 21037 & 2921 & - & 50338 & 1604 \\ 100 & 126544 & 502229 & 410662 & 105192 & 253455 & 221760 & - & 831991 & 7266 \\ 010 & 203102 & 18449 & 24923 & 67983 & 41092 & 72970 & - & 0 & 2201 \\ 001 & 16990 & 248 & 3320 & 1684 & 431 & 259 & - & 0 & 5 \\ 101 & 13790 & 458 & 101189 & 478 & 317 & 1909 & - & 0 & 35 \\ 011 & 5381 & 82 & 1654 & 506 & 140 & 328 & - & 0 & 0 \\ 110 & 106570 & 346862 & 323119 & 126196 & 237104 & 481409 & - & 0 & 7855 \\ 111 & 3059 & 343 & 54279 & 186 & 158 & 2658 & - & 0 & 0 \\ \hline total words & 577374 & 884514 & 926098 & 382797 & 553734 & 784214 & 38682 & 882329 & 18966 \\ 
\hline \end{tabularx} \caption{\label{tab:task1stats}Word statistics across morphological categories on subtask 1} \end{table*} \section{Task and Evaluation Details} \subsection{Subtask 1: Word-level Morpheme Segmentation} In subtask 1, participating systems were asked to segment a given word into a sequence of morphemes. The participants were initially provided with examples of segmentation to train and fine-tune their systems, as shown in Table~\ref{tab:training_examples}. Each instance in the training set is a triplet consisting of a word, a sequence of morphemes, and a morphological category specifying the types of word formation (see Table~\ref{tab:categories}). The morphological category is an optional feature that can only be used to oversample or undersample the training dataset (the word frequencies are imbalanced across the morphological categories, e.g., Italian has 431 compound words and 253K inflections). The test data only contained the initial word itself. Key points of this subtask are: \begin{itemize} \item The task focuses on canonical segmentation, i.e. given an input word, participants had to predict \emph{a sequence of morphemes}. In canonical segmentation, the participating systems need to reconstruct internal morphophonological processes involved in word formation. For example, the word ``intensive'' will be decomposed into the base form ``intens\textit{\textbf{e}}'' and the adjectival suffix ``@@ive'' (note that the ending `\textit{\textbf{e}}' of the base word is inferred here); \item As shown in Table~\ref{tab:task1stats}, the task is multilingual, with seven high-resource languages (English, Spanish, Hungarian, French, Italian, Russian, Latin) and two low-resource languages (Czech and Mongolian); \item The annotated corpus data represents a variety of morphological phenomena, including inflection, derivation, and compounding (Table \ref{tab:task1stats}); \item The datasets provide large-scale coverage, with segmentations of five million words.
\end{itemize} \subsection{Subtask 2: Sentence-level Morpheme Segmentation} The second subtask is context-dependent morpheme segmentation and focuses on resolving ambiguity in segmentations. Consider the following example containing a Mongolian homonym: \begin{exe} \ex \glll \foreignlanguage{russian}{Гэрт} \foreignlanguage{russian}{эмээ} \foreignlanguage{russian}{хоол} \foreignlanguage{russian}{хийв}\\ \foreignlanguage{russian}{Гэр @@т} \foreignlanguage{russian}{эмээ} \foreignlanguage{russian}{хоол} \foreignlanguage{russian}{хийх @@в} \\ Home.\texttt{DAT} grandma meal cook.\texttt{PRS.PRF} \\ \glt `Grandma just cooked a meal at home.' \end{exe} \begin{exe} \ex \glll \foreignlanguage{russian}{Би} \foreignlanguage{russian}{өдөр} \foreignlanguage{russian}{эмээ} \foreignlanguage{russian}{уусан}\\ \foreignlanguage{russian}{Би} \foreignlanguage{russian}{өдөр} \foreignlanguage{russian}{эм @@ээ} \foreignlanguage{russian}{уух @@сан} \\ I afternoon medicine.\texttt{PSSD} take.\texttt{PST} \\ \glt `In the afternoon, I took my medicine.' \end{exe} \noindent where ``\foreignlanguage{russian}{эмээ}'' is a homonym of two different words; in the first sentence, it is ``grandmother'', and in the second sentence --- an inflected form of ``medicine''. Thus, the form in the second sentence can be segmented, while the one in the first cannot. However, modern subword segmentation tools do not take such contextual differences between word forms into account. Key points of this subtask are: \begin{itemize} \item Morpheme segmentation is context-dependent; \item We organize it for three languages: English, Czech, and Mongolian; \item For Czech and Mongolian we asked native speakers to manually annotate the data. The details of data collection are provided in Section~\ref{sec:Data}.
\end{itemize} \begin{table}[t] \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X} \centering \begin{tabularx}{0.9\linewidth}{lRRR} \hline Language & train & dev & test \\ \hline Czech & 1,000 & 500 & 500 \\ English & 11,007 & 1,783 & 1,845 \\ Mongolian & 1,000 & 500 & 600 \\ \hline \end{tabularx} \caption{\label{tab:task2stats}The number of samples in each language in Subtask 2.} \end{table} \subsection{Evaluation} In order to evaluate and compare the systems, we used four metrics: (i) \textit{\textbf{precision}}, the ratio of correctly predicted morphemes over all predicted morphemes; (ii) \textit{\textbf{recall}}, the ratio of correctly predicted morphemes over all gold-label morphemes; (iii) \textit{\textbf{f-measure}}, the harmonic mean of the precision and recall; (iv) \textit{\textbf{edit distance}}, the average Levenshtein distance between the predicted output and the gold instance. For convenience, we provided the Python tool\footnote{\url{https://github.com/sigmorphon/2022SegmentationST/tree/main/evaluation}} to evaluate these metrics on both subtasks. In addition, for subtask 1 this tool also provided detailed results across the morphological categories. \section{Data} \label{sec:Data} We collected our morphological data from various sources to account for all types of morphology: derivational, inflectional, and compounding. We also collected base forms. For derivational and inflectional morphology, we have used the segmentation data from UniMorph 4.0 \cite{batsuren2022unimorph} and MorphyNet \cite{batsuren-etal-2021-morphynet}. UniMorph contains inflectional paradigms collected from linguistic sources as well as Wiktionary, while MorphyNet represents derivations scraped from various editions of Wiktionary. Compounds and base forms were also extracted from Wiktionary (see Section~\ref{sub:extraction} for more details on the data extraction).
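The four evaluation metrics above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the official scorer; in particular, matching predicted and gold morphemes as per-word multisets, and computing the Levenshtein distance over space-joined morpheme sequences, are assumptions about how the official tool behaves.

```python
from collections import Counter

def distance(a: str, b: str) -> int:
    """Levenshtein edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def evaluate(pred: list, gold: list):
    """Morpheme-level precision/recall/f-measure plus mean edit distance.

    `pred` and `gold` are parallel lists of morpheme sequences, one per
    word. Morphemes are matched as multisets per word (an assumption).
    """
    tp = n_pred = n_gold = 0
    dist = 0
    for p, g in zip(pred, gold):
        overlap = Counter(p) & Counter(g)   # multiset intersection
        tp += sum(overlap.values())
        n_pred += len(p)
        n_gold += len(g)
        dist += distance(" ".join(p), " ".join(g))
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1, dist / len(gold)
```

For the running example, the surface prediction `in valuable` against the canonical gold `in value able` yields precision 0.5, recall 1/3, and edit distance 2.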
We then used the data to produce morpheme segmentations for seven high-resource languages. For Czech and Mongolian, as low-resource languages, we asked native speakers and linguists to develop the resources (Section~\ref{sub:LRL} provides more details). For English sentence data, we have used the Universal Dependencies treebank of English \cite{silveira14gold}. \subsection{Data Statistics} The data for the shared task was moderately multilingual, containing nine unique languages from five genera: Germanic, Italic, Slavic, Mongolic, and Uralic. In subtask 1, we have over 5 million samples of morpheme segmentations that cover nine languages and eight morphological categories, as shown in Table~\ref{tab:task1stats}. In subtask 2, Table~\ref{tab:task2stats} displays the data statistics of three languages. \subsection{Extraction from Wiktionary} \label{sub:extraction} Language-specific editions of Wiktionary contain a considerable amount of derivations and compounds. \emph{Compound extraction rules} were applied to the etymology sections of Wiktionary entries to collect the morphology template usages, such as for the English \emph{newspaper}: \begin{center} Equivalent to \textbf{news} + \textbf{paper}. \end{center} where we have a morphology entry from the Wiktionary XML dump as follows: \begin{center} \{\{compound~|~en~|~news~|~paper\}\} \end{center} Most compound entries use the ``compound'' etymology template, while some use the ``affix'' template, e.g., \emph{basketball} and \emph{volleyball}. \emph{Root (and base) word extraction} is a two-step procedure. In the first step, we collected words inherited from earlier stages of the corresponding languages. For example, English `book' is traced back to the Middle English `bok', according to the etymology section of Wiktionary.
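The template-based compound extraction described above can be sketched with a small regular expression over raw wikitext. This is an illustrative reconstruction: the template inventory (``compound'' and ``affix'') follows only the two examples given in the text, and real Wiktionary markup has more variation (extra whitespace, nested templates, and so on) than this sketch handles.

```python
import re

# Matches etymology templates such as {{compound|en|news|paper}} or
# {{affix|en|basket|ball}}; the two-template inventory is an assumption.
TEMPLATE = re.compile(r"\{\{(compound|affix)\|([a-z-]+)\|([^}]+)\}\}")

def extract_compounds(wikitext: str):
    """Yield (language, parts) for every compound/affix template found."""
    for _name, lang, rest in TEMPLATE.findall(wikitext):
        # Drop named parameters such as t1=... that Wiktionary allows.
        parts = [p.strip() for p in rest.split("|")]
        parts = [p for p in parts if p and "=" not in p]
        if len(parts) >= 2:
            yield lang, parts
```

Running it on the `newspaper` entry's etymology text yields `("en", ["news", "paper"])`.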
We extracted 279,173 words from 6 languages from CogNet, a cognate database containing 8.1 million cognate pairs of 335 languages from Wiktionary \cite{batsuren2019cognet,batsuren2021large}. In the second step, we filtered out 116,863 words from the earlier extracted derivational and compound data, resulting in 162,310 root words in 6 languages. Similar Wiktionary data extraction procedures have been applied to a wide range of linguistic data, e.g., etymology \cite{fourrier2020methodological}, multilingual lexicons - DBnary \cite{serasset2015dbnary} and Yawipa \citep{wu-yarowsky-2020-computational}. \begin{table*}[t] \small \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X \centering \begin{tabularx}{\textwidth}{l|l|l|CcCcc} & & & \multicolumn{5}{c}{System features} \\ Team & Description & System & Neural & Ensemble & Data+ & Multilingual & Multi-task \\ \hline \hline \multirow{3}{*}{Baseline} & \cite{schuster2012japanese} & \md{WordPiece*} & - & - & - & - & - \\ & \cite{kudo-2018-subword} & \md{ULM*} & - & - & - & - & - \\ & \cite{virpioja2013morfessor} & \md{Morfessor2*} & - & - & - & - & - \\ \hline \hline \multirow{6}{*}{AUUH} & \multirow{6}{*}{\cite{auuh22sigmorphon}} & \md{AUUH\_A*} & \checkmark & - & \checkmark & \checkmark & \checkmark \\ & & \md{AUUH\_B*} & \checkmark & - & - & \checkmark & \checkmark \\ & & \md{AUUH\_C} & \checkmark & - & \checkmark & - & \checkmark \\ & & \md{AUUH\_D} & \checkmark & - & - & - & \checkmark \\ & & \md{AUUH\_E*} & \checkmark & - & \checkmark & - & - \\ & & \md{AUUH\_F*} & \checkmark & - & - & - & - \\ \hline \hline \multirow{4}{*}{CLUZH} & \multirow{4}{*}{\cite{cluzh_sig22}} & \md{CLUZH} & \checkmark & \checkmark & - & - & - \\ & & \md{CLUZH-1} & \checkmark & \checkmark & - & - & - \\ & & \md{CLUZH-2} & \checkmark & \checkmark & - & - & - \\ & & \md{CLUZH-3} & \checkmark & \checkmark & - & - & - \\ \hline \hline 
\multirow{3}{*}{DeepSPIN} & \multirow{3}{*}{\cite{DeepSPIN2022}} & \md{DeepSPIN-1} & \checkmark & - & - & - & - \\ & & \md{DeepSPIN-2} & \checkmark & - & - & - & - \\ & & \md{DeepSPIN-3} & \checkmark & - & - & - & - \\ \hline \hline \multirow{2}{*}{GU} & \multirow{2}{*}{\cite{GU2022}} & \md{GU-1} & \checkmark & - & \checkmark & - & - \\ & & \md{GU-2} & \checkmark & - & \checkmark & - & - \\ \hline \hline NUM DI & \cite{Task2_NUMDI} & \md{NUM DI} & \checkmark & - & - & - & - \\ \hline \hline JB132 & \cite{JB132} & \md{JB132} & - & - & - & - & - \\ \hline \hline \multirow{2}{*}{Tü Seg} & \multirow{2}{*}{\cite{Task2-TueSeg}} & \md{Tü\_Seg-1} & \checkmark & - & - & - & - \\ & & \md{Tü\_Seg-2} & \checkmark & - & - & - & \checkmark \end{tabularx} \caption{\label{tab:systems}The list of participating systems submitted to the shared task and baseline systems; systems marked with * are submitted to both subtasks} \end{table*} \subsection{Collecting data for Czech and Mongolian} \label{sub:LRL} We had two languages with a limited amount of data, Czech and Mongolian. For each language, we used a different development methodology than for the other seven languages (with a larger amount of available data). \textbf{Mongolian}: we asked two linguists (who are also native speakers of Mongolian) to annotate morpheme segmentations of 3,810 words from Mongolian WordNet \cite{batsuren-etal-2019-building}. After manual annotation, we received 1,604 base forms, 2,201 derived forms, and 5 compounds. To account for inflectional morphology, we have used the Mongolian transducer tool \cite{munkhjargal2016morphological} to generate inflected forms of the 3,810 annotated words. In total, we collected morpheme segmentations of 18,966~Mongolian words for subtask~1. For subtask~2, the same two linguists annotated 2,100~Mongolian sentences.
\textbf{Czech}: we merged hand-segmented word forms from four sources for the purpose of subtask 1: (a) segmentations previously created within DeriNet \cite{derinet-2019}, a project aimed at capturing derivational relations in Czech (9,508 word forms), (b) segmentations of Czech verb lemmas imported from a partially digitized version of a printed dictionary (\citealt{slavickova-2017}; 13,162 word forms in addition, i.e. not counting overlaps), (c) segmentations available in the MorfCzech dataset \cite{morfoczech-data-2022}, mostly extracted from dictionaries and grammar books existing for Czech (additional 11,137 word forms), and (d) word forms that we newly annotated in order to reach complete coverage of Czech subtask 2 sentences (see below; additional 4,887 word forms). In total, the subtask~1 dataset contains 38,694 unique Czech word forms segmented into morphs. All annotations were performed by native speakers with linguistic education, and underwent careful harmonization if the input resources disagreed, as well as numerous consistency checks. However, because of rich allomorphy in Czech, we have not been able to merge allomorph sets under more abstract umbrella morphemes so far, and thus words are represented as sequences of morphs (whose concatenation perfectly matches the original word forms), not of morphemes. The Czech subtask~2 dataset contains in total 2,000 sentences from the Czech subset of Universal Dependencies (\citealt{ud-cl-2021}; more specifically, the first 1,000, 500, and 500 sentences of the train, dev, and test sections, respectively, of the Prague Dependency Treebank subset of UD 2.9). Given that homonymy resulting in different morph boundaries is extremely rare in Czech, words are segmented basically regardless of their contexts.
\subsection{Data Splits} From each language's collection of morpheme segmentations in subtask 1, we sampled 80\% for the training, 10\% for the development, and 10\% for the test sets.\footnote{All the data splits can be obtained from \url{ https://github.com/sigmorphon/2022SegmentationST/tree/main/data}} All splits of subtask 1 are balanced w.r.t. the eight morphological categories described in Table~\ref{tab:categories}. While sampling the training and development sets for subtask 1, we excluded words that were present in the test sentences of subtask 2. This was done in order to avoid situations where the subtask 1 data could directly influence the results of subtask 2 (since we allowed multi-task learning between both subtasks). \section{Baseline Systems} The shared task provided predictions and results of baseline systems to participants that covered all languages and both subtasks. We chose three baseline systems: First is \texttt{WordPiece}, one of the state-of-the-art subword tokenization algorithms used in BERT \cite{devlin-etal-2019-bert}, which is based on \citet{schuster2012japanese} and somewhat resembles BPE \cite{sennrich2016neural}. Second is \texttt{ULM} (Unigram Language Model; \citealt{kudo-2018-subword}), another popular subword tokenizer, used in XLNet \cite{yang2019xlnet}. Third is \texttt{Morfessor2}, one of the state-of-the-art unsupervised morphological segmentation tools \cite{virpioja2013morfessor}. In future shared tasks, we aim to include more state-of-the-art tokenization tools including other Morfessor variants \cite{gronroos2014morfessor,ataman2017linguistically,gronroos2020morfessor}, BPE-dropout \cite{provilkov2019bpe}, dynamic programming encoding (DPE) \cite{he2020dynamic} or its variant \cite{hiraoka2021joint,song2022self}, multi-view subword regularization \cite{wang2021multi}, Charformer \cite{tay2021charformer}, space-treatment variants of BPE and ULM \cite{gow2022improving}.
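The sampling procedure just described can be sketched as a per-category stratified split. This is a hedged reconstruction: the 80/10/10 proportions and the balancing across morphological categories come from the text, but the shared task's actual sampling code (shuffling, rounding, subtask 2 exclusions) is not shown in the paper and may differ.

```python
import random
from collections import defaultdict

def stratified_split(samples, key=lambda s: s[2], seed=0):
    """80/10/10 train/dev/test split, stratified by morphological category.

    `samples` are (word, segmentation, category) triples as in the
    training data; `key` extracts the category used for stratification.
    """
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for s in samples:
        by_cat[key(s)].append(s)
    train, dev, test = [], [], []
    for cat_samples in by_cat.values():
        rng.shuffle(cat_samples)
        n = len(cat_samples)
        a, b = int(0.8 * n), int(0.9 * n)
        train += cat_samples[:a]
        dev += cat_samples[a:b]
        test += cat_samples[b:]
    return train, dev, test
```

Because the split is computed within each category, rare categories (e.g. the 431 Italian compounds) keep roughly the same 80/10/10 proportions as frequent ones.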
\begin{table*}[t] \small \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X \centering \begin{tabularx}{\textwidth}{l|RRRRRRRRR|R} & & & & & & & & & & macro \\ System & \multicolumn{1}{c}{ces} & \multicolumn{1}{c}{eng} & \multicolumn{1}{c}{fra} & \multicolumn{1}{c}{ita} & \multicolumn{1}{c}{lat} & \multicolumn{1}{c}{rus} & \multicolumn{1}{c}{mon} & \multicolumn{1}{c}{hun} & \multicolumn{1}{c|}{spa} & avg. \\ \hline \hline WordPiece & 20.42 & 23.06 & 12.66 & 9.08 & 8.84 & 13.81 & 14.58 & 24.00 & 16.57 & 15.89 \\ ULM & 23.71 & 32.32 & 16.08 & 10.65 & 10.42 & 15.67 & 25.82 & 31.27 & 19.58 & 20.61 \\ Morfessor2 & 29.43 & 37.65 & 22.38 & 9.02 & 14.53 & 17.71 & 37.80 & 40.96 & 20.64 & 25.57\\ \hline \hline AUUH\_A* & 93.65 & 92.32 & - & - & - & - & 98.19 & - & - & 94.72 \\ AUUH\_B* & 93.85 & 93.20 & - & - & - & - & 98.31 & - & - & 95.12 \\ AUUH\_E* & 90.71 & 87.10 & 90.78 & 92.39 & 98.71 & 94.33 & 96.06 & - & - & 92.87 \\ AUUH\_F & 90.28 & 86.40 & 90.81 & 92.56 & 98.85 & 93.68 & 95.32 & 98.34 & 97.25 & \textbf{93.72} \\ \hline \hline CLUZH & 93.81 & 92.70 & 94.80 & 96.93 & 99.37 & 98.62 & 98.12 & 98.54 & 98.74 & \textbf{96.85} \\ \hline \hline DeepSPIN-1 & 93.42 & 92.29 & 91.66 & 96.01 & 99.37 & 98.75 & 98.03 & 98.56 & 98.79 & 96.32 \\ DeepSPIN-2 & \textbf{93.88} & 93.39 & 95.29 & \textbf{97.47} & 99.36 & 99.30 & 98.00 & 98.68 & 99.02 & 97.15 \\ DeepSPIN-3 & 93.84 & \textbf{93.63} & \textbf{95.73} & 97.43 & \textbf{99.38} & \textbf{99.35} & \textbf{98.51} & \textbf{98.72} & \textbf{99.04} & \textbf{97.29} \\ \hline \hline GU-1* & - & - & 83.44 & 88.69 & - & - & - & - & - & 86.07 \\ GU-2* & - & - & 83.38 & 87.49 & - & - & - & - & 95.95 & 88.94 \\ \hline \hline JB132 & 64.65 & 65.43 & 46.20 & 33.44 & 91.39 & 50.55 & 57.82 & 72.64 & 43.39 & \textbf{58.39} \\ \hline \hline NUM DI* & - & 83.56 & - & 89.55 & - & - & 85.59 & 95.91 & - & 88.65 \\ \hline \hline Tü\_Seg-1 & 93.38 & 
90.51 & 93.76 & 95.73 & 99.37 & 98.21 & 97.02 & 98.59 & 97.93 & \textbf{96.06} \end{tabularx} \caption{\label{tab:subtask1:all}Subtask 1 word-level results by system: The f-measure performance of systems by language; and macro average f-measure of all languages in the last column. Systems marked with * are partial submissions of a specific language set. Bold values indicate the best performance for the corresponding language.} \end{table*} \section{System Descriptions} The SIGMORPHON 2022 Shared Task on Morpheme Segmentation received submissions from 7 teams with members from 10 universities and institutes. Many teams submitted more than one system while some focused on a specific set of languages like Romance. In total, we had 24 unique systems over two subtasks, including the baseline systems. More system details can be seen in Table~\ref{tab:systems}. \vspace{1em} \noindent \textbf{AUUH} Researchers at Aalto University and the University of Helsinki produced six submission systems: two were transformer models and four were bidirectional GRU models, incorporating several innovations: Morfessor-based feature enrichment, multi-task learning, and multilingual learning. Morfessor \cite{creutz2002unsupervised,creutz2007unsupervised} is a well-known language-independent unsupervised and semi-supervised segmentation tool and has given rise to a family of Morfessor variants \cite{virpioja2013morfessor,gronroos2014morfessor,ataman2017linguistically,gronroos2020morfessor}. They have used the first variant of Morfessor \cite{creutz2005unsupervised} for enriching input words along with their Morfessor subword segmentations. The AUUH\_A, AUUH\_C, and AUUH\_E systems used this Morfessor-based feature enrichment. The key innovation of the AUUH systems was multilingual and multi-task training. They used a preprocessing technique similar to \citet{johnson2017google} to distinguish tasks and languages from one another, and then trained multilingual neural models which work on both subtasks.
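The multilingual, multi-task preprocessing just described can be sketched as prefixing each character-split input with language and task tags, in the spirit of the target-language tokens of Johnson et al. (2017). The exact tag format used by the AUUH systems is an assumption here.

```python
def tag_example(word: str, language: str, task: str) -> str:
    """Prefix a character-split input with language and task tags.

    The <lang> <task> token format is illustrative; the AUUH systems'
    actual preprocessing may differ.
    """
    return f"<{language}> <{task}> " + " ".join(word)
```

A single model trained on such tagged inputs can then serve all languages and both subtasks, since the tags tell it which behavior is expected.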
Their transformer-based multilingual and multi-task model, AUUH\_B, was the subtask 2 winning system (by its macro average f-measure) and was also quite competitive with the subtask 1 winning systems on its partial three-language submissions. \vspace{1em} \noindent \textbf{CLUZH} Researchers at the University of Zurich ensembled four submissions \cite{cluzh_sig22} by extending their previous neural hard-attention transducer models \cite{makarov2018uzh,makarov2018imitation,makarov-clematide-2020-cluzh}. For subtask 1, they submitted a strong ensemble, \textbf{CLUZH}, composed of 3 models without encoder dropout and 2 models with encoder dropout of 0.15. In the sentence-level subtask 2, they submitted three ensembles and treated the problem as a word-level one by tokenizing sentences into words. They also used POS tags as additional features to provide contextual information about words. All individual models have an encoder dropout probability of 0.25 and vary only in their use of features: \textbf{CLUZH-1} with 3 models without POS features, \textbf{CLUZH-2} with 3 models with POS tag features, and \textbf{CLUZH-3} combining all the models from CLUZH-1 and CLUZH-2. Overall, \textbf{CLUZH-3} was the subtask 2 winning system (winning two out of three languages), and in subtask 1 \textbf{CLUZH} was the only system to outrank one of the three DeepSPIN systems (DeepSPIN-1).
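Output-level ensembling of several transducer models, as in the CLUZH submissions, can be illustrated with a simple majority vote over full candidate segmentations. This is a generic sketch only; the actual combination strategy used by CLUZH is described in their system paper and may differ.

```python
from collections import Counter

def majority_vote(candidate_outputs):
    """Pick the most frequent full segmentation among ensemble members.

    `candidate_outputs` is one morpheme sequence per ensemble member;
    ties are broken by first occurrence (Counter insertion order).
    """
    counts = Counter(tuple(c) for c in candidate_outputs)
    best, _ = counts.most_common(1)[0]
    return list(best)
```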
\vspace{1em} \noindent \textbf{DeepSPIN} Researchers submitted three neural seq2seq models: (1) \textbf{DeepSPIN-1}, a character-level LSTM with soft attention \cite{bahdanau2014neural} and a softmax output trained with cross-entropy loss; (2) \textbf{DeepSPIN-2}, a character-level LSTM with soft attention in which the softmax is replaced with its sparser version, 1.5-entmax \cite{peters2019sigmorphon}; (3) \textbf{DeepSPIN-3}, a subword-level transformer \cite{vaswani2017attention} with the proposed 1.5-entmax, in which subword segments are modelled using ULM \cite{kudo-2018-subword}. This design was one of the most innovative architectures among all submitted systems. The authors previously experimented with the 1.5-entmax function on other tasks, demonstrating its utility, especially in tasks with less uncertainty in the search space (e.g., compared to language modelling or machine translation), such as morphological and phonological modelling \cite{peters-martins-2020-one}. The final results of this year's shared task confirm these observations: \textbf{DeepSPIN-2} and \textbf{DeepSPIN-3} achieve superior results and win the shared task. \begin{table*}[!h] \scriptsize \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X} \centering \begin{tabularx}{\textwidth}{CCc|lllllll|l} \hline inf. & drv. & cmp. & eng & fra & ita & rus & mon & hun & spa & macro avg.
\\ \hline \hline \multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & \textbf{83.80} & 84.08 & 82.69* & 82.56* & 93.37 & \textbf{85.52} & 83.58 & 83.6 \\ & & & CLUZH & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-1 & JB132 & DeepSPIN-3 & DeepSPIN-2 & DeepSPIN-3 \\ \hline \hline \multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{\checkmark} & 93.23 & \textbf{81.80} & \textbf{58.10}* & \textbf{77.67} & 100.00 & 85.89 & \textbf{57.89}* & \textbf{78.60} \\ & & & AUUH\_A & CLUZH & CLUZH & DeepSPIN-2 & all systems & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 \\ \hline \hline \multirow{2}{*}{-} & \multirow{2}{*}{\checkmark} & \multirow{2}{*}{-} & 94.12 & 87.36* & 94.62 & 91.4 & \textbf{92.41} & 94.96 & 92.47 & 92.48 \\ & & & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 \\ \hline \hline \multirow{2}{*}{\checkmark} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & 91.29* & 96.37 & 96.27 & 99.75 & 99.66 & 98.31 & 98.81 & 96.97 \\ & & & CLUZH & CLUZH & CLUZH & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-2 & DeepSPIN-3 \\ \hline \hline \multirow{2}{*}{-} & \multirow{2}{*}{\checkmark} & \multirow{2}{*}{\checkmark} & 95.74 & 80.61 & 70.59* & 92.13 & - & 89.82 & 97.3 & 87.65 \\ & & & DeepSPIN-2 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & - & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 \\ \hline \hline \multirow{2}{*}{\checkmark} & \multirow{2}{*}{-} & \multirow{2}{*}{\checkmark} & 96.89 & 96.60 & 94.97 & 100 & 100 & 98.71 & 96.15 & 97.45 \\ & & & DeepSPIN-3 & DeepSPIN-2 & DeepSPIN-3 & DeepSPIN-3 & all systems & DeepSPIN-3 & DeepSPIN-1 & DeepSPIN-3 \\ \hline \hline \multirow{2}{*}{\checkmark} & \multirow{2}{*}{\checkmark} & \multirow{2}{*}{-} & 97.54 & 99.03 & 99.23 & 99.97 & 99.74 & 99.41 & 99.75 & 99.24 \\ & & & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-2 & DeepSPIN-3 & DeepSPIN-3 \\ \hline \hline \multirow{2}{*}{\checkmark} & \multirow{2}{*}{\checkmark} & \multirow{2}{*}{\checkmark} & 97.13 & 100 & 
100 & 99.88 & - & 99.28 & 97.04 & 98.23 \\ & & & DeepSPIN-3 & DeepSPIN-3 & DeepSPIN-2 & DeepSPIN-2 & - & DeepSPIN-2 & DeepSPIN-2 & DeepSPIN-2 \\ \hline \end{tabularx} \caption{\label{tab:besttask1:all}Subtask 1 word-level results by morphological category: f-measure of the best-performing system for each language and category. Numbers in bold are the worst performances for their corresponding language; performances marked with * are the worst for their morphological category.} \end{table*} \vspace{1em} \noindent \textbf{GU} One team from Georgetown University produced two submissions for three Romance languages of the word-level subtask, based on a GRU-based encoder-decoder model \cite{GU2022}. In initial attempts, they tried to use additional features from the Wiktionary lists of prefixes and suffixes to train the model. However, these additional features decreased performance across morphological categories, so they were excluded from the final submissions. They then focused on data sharing between Romance languages: the French training data were augmented with data from four morphological categories of the Italian and Spanish training and development sets, namely the non-inflection categories \texttt{000}, \texttt{001}, \texttt{010}, and \texttt{011}. With these experiments, they achieved minor improvements on these three languages. More research is needed to understand whether transfer learning is useful here. \vspace{1em} \noindent \textbf{NUM DI} The single submission from the National University of Mongolia \cite{Task2_NUMDI} is a transformer-based neural model. Its architecture is a simple single-layer encoder-decoder, and all hyper-parameter settings follow fairseq's standard tutorial. Their submission is limited to four subtask 1 languages due to human error.
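The 1.5-entmax transformation used by the DeepSPIN systems replaces the softmax with a sparse mapping $p_i = [\,z_i/2 - \tau\,]_+^2$, where $\tau$ is chosen so the outputs sum to one. A minimal numerical sketch (bisection over $\tau$; the actual implementations use an exact sort-based algorithm):

```python
def entmax15(logits, iters=60):
    """1.5-entmax by bisection: find tau with sum(max(z/2 - tau, 0)^2) = 1.
    Unlike softmax, low-scoring entries get exactly zero probability."""
    z = [v / 2.0 for v in logits]
    lo, hi = max(z) - 1.0, max(z)      # sum >= 1 at lo, sum = 0 at hi
    for _ in range(iters):
        tau = (lo + hi) / 2.0
        s = sum(max(v - tau, 0.0) ** 2 for v in z)
        if s < 1.0:
            hi = tau
        else:
            lo = tau
    tau = (lo + hi) / 2.0
    p = [max(v - tau, 0.0) ** 2 for v in z]
    total = sum(p)                     # normalise away bisection error
    return [v / total for v in p]
```

The sparsity is visible on spread-out logits: for `[2.0, 1.0, -1.0]` the lowest-scoring entry receives exactly zero mass, while uniform logits recover the uniform distribution.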
\vspace{1em} \noindent \textbf{JB132} The Charles University team \cite{JB132} designed a hidden Markov model trained with the expectation-maximization algorithm. The architecture has two sub-models: the first takes words as input and converts them into candidate morphemes, and the second takes candidate morphemes and generates morphs as output. The first sub-model has three generators accounting for prefixes, roots, and suffixes. It is the only submitted system not using neural methods, and its predictions are interpretable, which can be useful for error analysis. \vspace{1em} \noindent \textbf{Tü Seg} The University of Tübingen team \cite{Task2-TueSeg} submitted two systems for each subtask. Both systems extend the sequence-labeling method proposed by \cite{hellwig2018sanskrit,li2022word}. Their systems are unique among the neural submissions in treating the main segmentation task as a sequence-labeling task; all other neural systems used seq2seq architectures. Their neural model is a plain two-layer BiLSTM. By design, the Tü Seg systems have at least two advantages over the seq2seq alternative: (a) the number of parameters is much smaller, so the model trains and runs quickly; (b) the predictions are more interpretable than those of other neural systems, which can help with error analysis on high-resource datasets.
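The sequence-labeling view can be illustrated by mapping a segmented word to per-character boundary labels. This is a sketch under the simplifying assumption that the morphemes concatenate exactly to the surface word; canonical segmentation, where spellings change, needs additional machinery:

```python
def to_labels(segmentation, sep=" @@"):
    """'walk @@ing' -> ['B','I','I','I','B','I','I'] (B = morpheme start)."""
    labels = []
    for morph in segmentation.split(sep):
        labels += ["B"] + ["I"] * (len(morph) - 1)
    return labels

def to_segmentation(word, labels, sep=" @@"):
    """Inverse mapping: re-insert a boundary before every non-initial 'B'."""
    out = []
    for ch, lab in zip(word, labels):
        if lab == "B" and out:
            out.append(sep)
        out.append(ch)
    return "".join(out)
```

Under this framing, a tagger only has to predict one label per character, which is what makes a small BiLSTM sufficient.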
\begin{figure*}[!h] \begin{center} \includegraphics[width=\textwidth]{images/category_systems_f1.v5.pdf} \caption{Impact of training size over languages and morphological categories: results from the top5-ranked systems of the word-level subtask 1} \label{fig:size} \end{center} \end{figure*} \begin{figure*}[!h] \begin{center} \includegraphics[width=\textwidth]{images/length_deepspin3_f_measure.v3.pdf} \caption{Impact of word length over languages and morphological categories: results from DeepSPIN-3, the winning system of subtask 1, word-level morpheme segmentation} \label{fig:word_len} \end{center} \end{figure*} \section{The System Results} All system results can be found and downloaded from the shared task GitHub page.\footnote{\url{https://github.com/sigmorphon/2022SegmentationST/tree/main/results}} \subsection{Subtask 1 word-level results} Relative system performance for subtask 1 is provided in Table~\ref{tab:subtask1:all}, which shows each system's f-measure by language; the best performance for each language is in bold. Two teams exploited external resources in some form: AUUH and GU. In general, any relative performance gain was minimal. AUUH submitted two systems that used additional resources; these gained roughly an extra 1\% compared to the team's other systems. Similarly, GU's systems saw minimal improvements; details can be found in their system description paper \cite{GU2022}. Only two of the systems submitted to subtask 1 performed multilingual and multi-task learning at the same time. These two systems were proposed by the AUUH team, whose partial submissions covered English, Czech, and Mongolian. The important insight from this experiment is that multi-task and multilingual learning are quite beneficial for the task, because these partial results are quite competitive with the winning systems, DeepSPIN-3, DeepSPIN-2, and CLUZH.
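The f-measure reported throughout these tables compares predicted morphemes against gold morphemes. A sketch of morpheme-level F1 as multiset overlap (our reading of the metric; the official scorer on the shared task page is authoritative):

```python
from collections import Counter

def morpheme_f1(gold, pred):
    """F1 over morpheme multisets, e.g. gold=['walk','ing'], pred=['walk']."""
    overlap = sum((Counter(gold) & Counter(pred)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```

Note that the metric is all-or-nothing at the morpheme level: predicting the unsegmented word for a two-morpheme gold analysis scores zero.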
\vspace{1em} \noindent \textbf{Impact of training size:} In subtask 1, the training datasets' sizes vary across languages and morphological categories, which may have affected the top-ranked systems. We therefore plot the top5-ranked systems' f-measure against training size across morphological categories, as shown in Figure~\ref{fig:size}. In high-resource settings (more than $10^5$ training instances), every top5-ranked system achieves an f-measure greater than 80\% in all morphological categories. Root words appear across all resource settings, from high to low; in this category, all systems achieved no more than 85.5\% f-measure, except for Mongolian. The two inflectional categories \texttt{100} and \texttt{110} are always in high-resource settings, with more than $10^6$ training instances (except for the two low-resource languages, Czech and Mongolian). All systems achieved their best performance on these two categories, compared to the others. \vspace{1em} \noindent \textbf{Impact of word length:} In many NLP tasks, the length of the input sequence is strongly correlated with task difficulty \cite{yin2017comparative,wu2018phrase}. We therefore examine how the performance of DeepSPIN-3 (the subtask 1 winning system) relates to word length across languages and morphological categories. Figure~\ref{fig:word_len} shows several related facts: (i) for root words \texttt{000}, overall performance decreases across languages with increasing word length; (ii) inflectional morphology is systematically far more productive than other morphological categories, and this fact is reproduced here: the main inflectional category \texttt{100} has consistently high performance across languages and word lengths.
\begin{table*}[!h] \small \newcolumntype{R}{>{\raggedleft\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{L}{>{\raggedright\arraybackslash}X} \centering \begin{tabularx}{\textwidth}{l|RRRr|RRRr|RRRr|rr} \multirow{2}{*}{System} & \multicolumn{4}{c|}{Czech} & \multicolumn{4}{c|}{English} & \multicolumn{4}{c|}{Mongolian} & \multicolumn{2}{c}{Macro avg.} \\ \cline{2-15} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{$F_1$} & \multicolumn{1}{c|}{Lev.} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{$F_1$} & \multicolumn{1}{c|}{Lev.} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{$F_1$} & \multicolumn{1}{c|}{Lev.} & \multicolumn{1}{c}{$F_1$} & \multicolumn{1}{c}{Lev.} \\ \hline \hline WordPiece & 38.47 & 31.45 & 34.61 & 17.88 & 62.02 & 65.13 & 63.53 & 5.54 & 19.82 & 29.20 & 23.62 & 29.19 & 40.59 & 17.54 \\ ULM & 41.98 & 30.39 & 35.26 & 16.39 & 62.32 & 69.24 & 65.60 & 5.68 & 38.79 & 35.58 & 37.12 & 20.76 & 45.99 & 14.28 \\ Morfessor2 & 49.89 & 36.95 & 42.45 & 13.09 & 54.61 & 69.75 & 61.25 & 6.00 & 50.88 & 45.91 & 48.26 & 17.16 & 50.65 & 12.08 \\ \hline \hline AUUH\_A & 89.70 & 87.53 & 88.60 & 4.97 & 96.66 & 95.78 & 96.22 & 1.86 & 83.49 & 80.94 & 82.19 & 5.42 & 89.00 & 4.08 \\ AUUH\_B & 91.89 & 89.00 & 90.42 & 3.96 & \textbf{96.82} & \textbf{95.79} & \textbf{96.31} & \textbf{1.39} & 83.74 & 81.46 & 82.59 & 5.16 & \textbf{89.77} & \textbf{3.50} \\ AUUH\_C & 50.60 & 69.19 & 58.45 & 71.37 & 84.77 & 71.67 & 77.67 & 19.13 & 79.07 & 73.45 & 76.15 & 17.33 & 70.76 & 35.94 \\ AUUH\_D & 45.07 & 67.82 & 54.15 & 80.67 & 93.29 & 83.41 & 88.07 & 10.58 & 77.99 & 74.15 & 76.02 & 17.88 & 72.75 & 36.38 \\ AUUH\_E & 57.39 & 67.22 & 61.92 & 55.92 & 95.23 & 76.82 & 85.04 & 12.36 & 73.34 & 72.01 & 72.67 & 24.88 & 73.21 & 31.05 \\ AUUH\_F & 62.36 & 43.82 & 51.47 & 61.84 & 91.50 & 74.84 & 82.34 & 13.30 & 75.50 & 59.22 & 66.38 & 33.91 & 66.73 & 36.35 \\ \hline \hline CLUZH-1 & 92.03 & 90.69 & 91.35 &
1.93 & 89.74 & 89.20 & 89.47 & 9.86 & 82.98 & 81.48 & 82.22 & 5.28 & 87.68 & 5.69 \\ CLUZH-2 & 92.41 & 91.13 & 91.76 & 1.87 & 89.71 & 89.22 & 89.47 & 9.79 & 83.29 & 81.83 & 82.55 & 5.19 & 87.93 & 5.62 \\ CLUZH-3 & \textbf{92.63} & \textbf{91.35} & \textbf{91.99} & \textbf{1.80} & 89.83 & 89.25 & 89.54 & 9.84 & \textbf{83.71} & \textbf{82.07} & \textbf{82.88} & \textbf{5.10} & 88.14 & 5.58 \\ \hline \hline Tü\_Seg-2 & 89.52 & 88.42 & 88.97 & 2.50 & 87.83 & 89.58 & 88.69 & 1.78 & 69.59 & 67.55 & 68.55 & 9.85 & 82.07 & 4.71 \end{tabularx} \caption{\label{tab:subtask2:all}Subtask 2 sentence-level results: precision (P), recall (R), f-measure ($F_1$), and Levenshtein distance (Lev.) across 3 languages} \end{table*} \vspace{1em} \noindent \textbf{Difficulty of morphological categories:} Even though the top-ranking systems perform very well on their own, other systems may carry complementary information across morphological categories. We therefore list the best-performing system for each combination of language and morphological category in Table~\ref{tab:besttask1:all}. In the table, the lowest scores for the corresponding languages are in bold. For instance, English root words (83.80 f-measure) are much harder to predict than other morphological categories in English. The hardest morphological categories are roots \texttt{000}, compounds \texttt{001}, and derivation-plus-compound words \texttt{011}. The winning system, DeepSPIN-3 (marked with + in Figure~\ref{fig:size}), consistently wins in these three categories across languages. Another observation from Figure~\ref{fig:word_len} is that compound and root words become harder to predict across languages as word length increases. Also, identifying inflections in short words (word length~\textless~5) remains an unsolved challenge in all languages except English, as shown in Figure~\ref{fig:word_len}.
\subsection{Subtask 2 sentence-level results} Relative system performance is reported in Table~\ref{tab:subtask2:all}, showing all four evaluation metrics for each combination of system and language. In the sentence-level subtask 2, we have two winners: CLUZH-3 (which won two out of three languages) and AUUH\_B (the maximum macro-average F1 of 89.77 among submissions). System performance on the sentence-level subtask decreased significantly, by 15\%, in Mongolian compared to the word-level subtask. One reason is that all submitted systems treated the problem as a zero-shot application of word-level subtask 1, largely ignoring sentence context by design. \section{Future Directions} The submitted systems achieved unexpectedly high accuracy across nine languages. This result suggests that the neural systems may have capabilities beyond segmenting morphemes. For next year, we plan to modify the task design and enrich the dataset with more fine-grained analysis. For example, \textit{truckdrivers} → \textit{truck @@drive @@er @@s} → \textit{truck \$\$drive @@er \#\#s}, where \$\$ marks compounding, @@ derivation, and \#\# inflection. In another direction, we will explore possibilities of adapting other morphological resources, including word-formation resources \cite{zeller2013derivbase,talamo2016derivatario,derinet-2019,vodolazsky2020derivbase} and segmentation resources such as UniSegments \cite{unisegments-lrec-2022,unisegments-data-2022}. Our shared task team welcomes continued contributions from the community. \section{Conclusion} The SIGMORPHON 2022 Shared Task on Morpheme Segmentation significantly expanded the problem of morphological segmentation, making it more linguistically plausible.
In this task, seven teams submitted 23 systems for two subtasks across nine languages, achieving at minimum a 30.71 F1 improvement over the three baselines built on state-of-the-art subword tokenization and morphological segmentation tools of the kind used to train large language models, e.g., XLNet \cite{yang2019xlnet}. The results suggest many directions for improving the morpheme segmentation shared task. \nocite{Ando2005,borschinger-johnson-2011-particle,andrew2007scalable,rasooli-tetrault-2015,goodman-etal-2016-noise,harper-2014-learning} \section*{Acknowledgements} We thank Garrett Nicolai and Eleanor Chodroff for their advice and support. The authors also thank Ben Peters and Simon Clematide for their invaluable contributions and advice, including developing the evaluation tool and early detection of data errors.
Q: Symfony2: Check authentication before checking the route

Currently I'm working on a restricted API. All routes (whether they exist or not) should return a 401 if the user is not authenticated. Unfortunately, I only get the 401 if the route exists; if it doesn't exist, I get a 404. Is there a way to check the authentication before the route is checked? Maybe a wildcard route?

A: That sounds like the correct behaviour, i.e., if a route doesn't exist, it should return a 404... Maybe explain why you would want to ALWAYS return a 401. Shouldn't your clients that are consuming the API check for a 404? Sorry, wanted to comment, but haven't got enough of a reputation to do so yet.

A: You can try to match ANY route, something like:

    any_route:
        path: /{anyparams}
        defaults:
            _controller: YourProjectBundle:Index:anyroute
        requirements:
            anyparams: ".+"

But make sure it is defined at the end. Now "non-existent" routes exist as well and will throw a 401 error.
Q: Laravel send json date not as a string

Right now I send the date like this (in Laravel 5.2):

    'BirthDate' => $employee->BirthDate,

result: "2016-05-10"

But the JSON result in Postman should be this: 2016-05-10. How could I accomplish that?

A: Use strtotime():

    $time = strtotime($employee->BirthDate);
    'BirthDate' => date('Y-m-d', $time),

Please refer to the strtotime() function at php.net.
# Finding the surface area of a region which is generated by revolving a curve around a line

The following problem is from the book Calculus and Analytic Geometry by Thomas and Finney. It is early on in the book, so I would expect/hope any integral would be easy to solve.

Problem: Find the area of the surface generated by revolving the following curve about the line $y = -1$. The curve is $y = \frac{x^3}{3} + \frac{1}{4x}$ for $1 \leq x \leq 3$.

Since we are revolving the curve about $y = -1$, I augment the function by adding $1$ to it and treating it as revolving around $y = 0$. The format of the integral for surface area revolved around the y-axis is:
$$S = \int_a^b 2\pi x \sqrt{1 + \left(\frac{dx}{dy}\right)^2}\, dx$$
Now we need to find the bounds on $y$:
$$y(1) = \frac{1^3}{3} + \frac{1}{4} = \frac{7}{12}, \qquad y(3) = \frac{3^3}{3} + \frac{1}{4(3)} = 9 + \frac{1}{12} = \frac{109}{12}$$
With $y = \frac{x^3}{3} + \frac{x^{-1}}{4}$ we get
$$\frac{dy}{dx} = x^2 - \frac{x^{-2}}{4} = \frac{4x^2 - x^{-2}}{4}, \qquad \frac{dx}{dy} = \frac{4}{4x^2 - x^{-2}}$$
so
$$S = \int_{7/12}^{109/12} 2\pi x \sqrt{1 + \left(\frac{4}{4x^2 - x^{-2}}\right)^2}\, dx = \int_{7/12}^{109/12} 2\pi x \sqrt{\frac{\left(4x^2 - x^{-2}\right)^2 + 16}{\left(4x^2 - x^{-2}\right)^2}}\, dx$$
This does not seem right to me.

Based upon the comments from the group, I updated my solution. Let $S$ be the surface area we are trying to find:
$$S = \int_1^3 2\pi (y+1)\sqrt{1 + \left(x^2 - \frac{x^{-2}}{4}\right)^2}\, dx = \int_1^3 2\pi\left(\frac{x^3}{3} + \frac{x^{-1}}{4} + 1\right)\sqrt{\frac{16x^4 + 8 + x^{-4}}{16}}\, dx$$
How do I complete this integration?

A: Consider a scalar function $f(x): \mathbb{R} \to \mathbb{R}$. Then the surface area of the solid formed by revolving $f(x)$ about $y = 0$ is $S = 2\pi\int_a^b f(x)\, ds$, where $ds$ represents an infinitesimal arc-length element of the curve $f(x)$.

To calculate the surface area of the solid formed by revolving $f(x)$ about $y = -1$, you must add 1 to the integrand. So we have that $S = 2\pi\int_a^b (f(x)+1)\, ds = 2\pi\int_a^b g(x)\, ds$, where $g(x) = f(x) + 1 = \frac{x^3}{3} + \frac{1}{4x} + 1$.

Note that $ds = \sqrt{1 + \left(\frac{dg}{dx}\right)^2}\, dx = \sqrt{1 + \left(x^2 - \frac{1}{4x^2}\right)^2}\, dx$.

Now substitute $a = 1$, $b = 3$, and the appropriate values for $g(x)$ and $ds$ into the integral $S = 2\pi\int_1^3 g(x)\, ds$. Calculating it will yield the desired surface area.

A: The integral formula for $S$ in the post is questionable. It is good for surfaces revolving around $x$, not $y$. The following expression should be used instead to integrate surfaces around $y$:
$$2\pi \int_1^3 (y+1)\sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx$$
Otherwise, the integrand goes to infinity.
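Neither answer carries out the final integral, so here is one way to finish (our own calculation, not from the original thread); the key observation is that the radicand is a perfect square:

```latex
1 + \left(x^2 - \tfrac{1}{4x^2}\right)^2
  = \frac{16x^4 + 8 + x^{-4}}{16}
  = \left(\frac{4x^2 + x^{-2}}{4}\right)^2,
\qquad\text{so}\qquad
ds = \left(x^2 + \frac{1}{4x^2}\right)dx .

S = 2\pi\int_1^3 \left(\frac{x^3}{3} + \frac{1}{4x} + 1\right)
               \left(x^2 + \frac{1}{4x^2}\right)dx
  = 2\pi\int_1^3 \left(\frac{x^5}{3} + x^2 + \frac{x}{3}
                        + \frac{1}{4x^2} + \frac{1}{16x^3}\right)dx
  = 2\pi\cdot\frac{1823}{36}
  = \frac{1823\,\pi}{18}.
```

If the arithmetic above is carried through term by term, each antiderivative is elementary, which is consistent with the asker's expectation of an easy integral.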
<?php $page_name = $this->uri->rsegment(1); $sub_name = $this->uri->rsegment(2); /*switch ($page_name) { case 'download': $html_title = "下载 CodeIgniter - CodeIgniter 中国"; break; case 'docs': $html_title = "CodeIgniter 用户手册 - CodeIgniter 中国"; break; case 'community': $html_title = "CodeIgniter 中文社区 - CodeIgniter 中国"; break; case 'contribute': $html_title = "CodeIgniter 贡献 - CodeIgniter 中国"; break; case 'help': if ($sub_name == 'legal') { $html_title = "保留条款 - CodeIgniter 中国"; } elseif ($sub_name == 'about') { $html_title = "关于 - CodeIgniter 中国"; } else { $html_title = "政策 - CodeIgniter 中国"; } break; case 'news': $html_title = "{$NewsTitle}新闻 - CodeIgniter 中国"; break; case 'tutorials': if (isset($TutorialTitle)) { $html_title = $TutorialTitle . ' - '; } else { $html_title = ''; } $html_title .= 'CodeIgniter 视频教程 | CodeIgniter 中国'; break; case 'projects': $html_title = 'CodeIgniter 应用案例 | CodeIgniter 中国'; break; case 'irc': $html_title = "IRC | CodeIgniter 中国"; break; default: $html_title = 'CodeIgniter 中国 - PHP 框架 CodeIgniter 中国社区'; }*/ ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:wb="http://open.weibo.com/wb"> <head> <meta charset="utf-8"/> <meta http-equiv="X-UA-Compatible" content="IE=edge"/> <meta name="viewport" content="width=device-width, initial-scale=1"/> <title><?=empty($html_title) ? 'CodeIgniter 中国 - PHP 框架 CodeIgniter 中国社区' : $html_title . 
' - CodeIgniter 中国'?></title> <link rel="icon" type="image/png" href="<?=base_url('/assets/images/ci-icon.png')?>" /> <link rel="stylesheet" type="text/css" href="<?=base_url('/assets/css/bootstrap.min.css')?>" media="screen"/> <link rel="stylesheet" type="text/css" href="<?=base_url('/assets/css/style.css')?>"/> <link rel="canonical" href="http://codeigniter.org.cn<?=$_SERVER['REQUEST_URI']?>" /> <link rel="home" href="<?=site_url();?>" title="首页" /> <meta name="description" content="CodeIgniter: 帮助你编写 Web 应用程序的敏捷开源 PHP 框架" /> <meta name="keywords" content="CodeIgniter,PHP,PHP框架,MVC框架,开放源代码,开源,应用程序,快速开发,MVC,framework,web,application" /> <script src="http://tjs.sjs.sinajs.cn/open/api/js/wb.js" type="text/javascript" charset="utf-8"></script> </head> <body> <!-- top of the page --> <div class="navbar navbar-default navbar-fixed-top" id="mainnav" role="navigation"> <div class="container"> <div class="navbar-header"> <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target=".navbar-collapse"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="/"><span>CodeIgniter</span><strong>中国</strong></a> </div> <div class="collapse navbar-collapse"> <ul class="nav navbar-nav navbar-right"> <li<?php if ($page_name == 'home'): ?> class="active"<?php endif; ?>><a href="<?=site_url()?>"><span class="glyphicon glyphicon-home"></span></a></li> <li<?php if ($page_name == 'download'): ?> class="active"<?php endif; ?>><a href="<?=site_url('download')?>">下载</a></li> <li<?php if ($page_name == 'docs'): ?> class="active"<?php endif; ?>><a href="<?=site_url('docs')?>">用户手册</a></li> <li<?php if ($page_name == 'community'): ?> class="active"<?php endif; ?>><a href="<?=site_url('community')?>">开发者社区</a></li> <li<?php if ($page_name == 'contribute'): ?> class="active"<?php endif; ?>><a href="<?=site_url('contribute')?>">贡献</a></li> </ul> 
</div><!--/.nav-collapse --> </div> </div>
{ "redpajama_set_name": "RedPajamaGithub" }
39
{"url":"https:\/\/stackapps.com\/questions\/6756\/badge-oneboxer-for-chat","text":"This script makes it possible for badges to onebox in Chat.\n\n# Example format:\n\nthat's a [badge:nice-answer] on a powershell bountied question\n\n\nThe only somewhat custom one you can use is Strunk & White, in which case I had to strip out the ampersand and an extra space, resulting in [badge:strunk-white].\n\n## Things that I'm working on:\n\n\u2022 Custom badges, currently working, but displays a black colour\n\u2022 Making them actual buttons\n\n## GitHub:\n\nIt's on GitHub too!\n\n## Review:\n\nIf you want to read reviews or review the code behind this, see the question on Code Review Stack Exchange!\n\n\u2022 Will the highlighting for badges be implemented as well? (hover over a tag like this one to see what I mean) \u2013\u00a0Addison Crump Jan 14 '16 at 11:55\n\u2022 @FlagAsSpam I'm not 100% yet... the rollovers require links and the links are a little hard to do per site (see here), but I'm working on a solution \u2013\u00a0Quill Jan 14 '16 at 11:57\n\u2022 The userscript also has a few problems on Safari 9, at least, I left an issue report on the Github page. 
\u2013\u00a0Addison Crump Jan 14 '16 at 11:59","date":"2019-11-18 08:29:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.17099621891975403, \"perplexity\": 2896.4277491919206}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-47\/segments\/1573496669730.38\/warc\/CC-MAIN-20191118080848-20191118104848-00260.warc.gz\"}"}
null
We hence obtain 7 segments per recording, the last one being padded with the last original frame. Thus, the CNN learns from T-F patches of $R^{75\times128}$. \subsubsection{CNN Architecture} The proposed CNN architecture is depicted in Table \ref{tab:CNN_arc} and illustrated in Fig. \ref{fig:archi}. \begin{table}[ht!] \centering \begin{tabular}{c} \hline Input: 1 x (75,128) \\ \hline \textit{Conv1}: 48x (3,8) $|$ 32x (3,32) $|$ 16x (3,64) $|$ 16x (3,90) + \\ BN + ReLU \\ Max-Pooling: (5,5) \\ \hline \textit{Conv2}: 224x (5,5) + BN + ReLU \\ Max-Pooling: (11,4) \\ \hline Dense: 15 units + softmax \\ \hline \end{tabular} \caption{Proposed CNN architecture.} \label{tab:CNN_arc} \end{table} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{archi_spec.png} \caption{Sketch of the proposed CNN architecture. Four vertical filter shapes co-exist in the first convolutional layer.} \label{fig:archi} \vspace{-2mm} \end{figure*} The architecture is composed of two convolutional layers (\textit{Conv1} and \textit{Conv2}) alternated with max-pooling operations and it ends with a softmax layer. It can be regarded as a relatively simple network comprising standard operations. Also, the network can be regarded as \textit{wide}, in contrast to the trend of building \textit{deeper} networks with tens of layers (or more in other disciplines like image recognition). One of the most distinctive aspects of this network is the convolutional filters in the first layer. We hypothesize that the spectro-temporal patterns that allow to recognize many of the scenes considered are more discriminative along the frequency domain (rather than in the time domain). We consider this during the filters' design. That is, our approach attempts to prioritize the modeling of spectral envelope shapes and background noises, rather than onsets/offsets or attack-decay patterns of specific acoustic events. 
While most CNNs in the literature leverage square filters and only one filter shape in the first convolutional layer~\cite{salamon2017deep,valenti2016dcase,eghbal2016cp}, some recent works suggest employing rectangular filters and different shapes at the same time~\cite{phan2016robust,pons2017timbre}. In particular, we explore several configurations of filters with multiple \textit{vertical} shapes in the first layer. We call vertical those filters whose frequency dimension is much larger than their time dimension. By using these filters, we intend to bias the learning process towards what we intuitively assume to be more important for ASC. The first convolutional layer is implemented as the concatenation of several convolutional layers such that every layer has filters of one single and distinct shape. Using filters of different dimensions leads to feature maps of different dimensions as well. To recover same-sized feature maps, two options exist: \textit{i)} zero-pad the network's input appropriately, and \textit{ii)} use filter-dependent max-pooling operations. Preliminary experiments were run with both options and no major difference in performance was observed. Hence, the simpler zero-padding option was adopted. The filter shapes employed are listed in Table~\ref{tab:CNN_arc} as \textit{number of filters} x \textit{(time, frequency)}. The first convolutional layer presents 112 filters. This number is doubled for the second layer. The proposed final network presents four different filter shapes in Conv1, as illustrated over the T-F patch of Fig.~\ref{fig:archi}. All the filters in Conv1 have a time dimension of 3. In contrast, filters in Conv2 are square 5x5. We apply batch normalization (BN)~\cite{ioffe2015batch} and Rectified Linear Unit (ReLU)~\cite{glorot2011deep} activations after every convolutional layer, followed by max-pooling operations. The latter downsample the feature maps while adding some invariance along the time-frequency dimensions.
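To make the first-layer bookkeeping concrete, the short sketch below tallies the four-shape Conv1 configuration (48x (3,8), 32x (3,32), 16x (3,64), 16x (3,90)). Counting one bias per filter and computing the 'same'-style zero-padding as filter extent minus one are our assumptions, not details taken from the paper:

```python
# Conv1 configuration: (number of filters, (time, frequency)) per shape.
conv1 = [(48, (3, 8)), (32, (3, 32)), (16, (3, 64)), (16, (3, 90))]

# Total number of feature maps after concatenating the four branches.
n_filters = sum(n for n, _ in conv1)
print(n_filters)  # 112

# Zero-padding needed per shape so every branch keeps the 75x128 size
# ('same'-style padding: filter extent minus one, split across both sides).
padding = {shape: (shape[0] - 1, shape[1] - 1) for _, shape in conv1}
print(padding[(3, 90)])  # (2, 89)

# First-layer weight count for a single-channel input (one bias per filter).
params = sum(n * (t * f) + n for n, (t, f) in conv1)
print(params)  # 11728
```

Repeating this arithmetic for other configurations shows how per-shape filter counts can be adjusted to keep the total at 112 filters and the overall parameter budget roughly constant.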
In particular, max-pooling is applied over squares of dimension 5 after Conv1. After Conv2, global time domain pooling is applied in order to select only the most prominent feature~\cite{valenti2016dcase}. Finally, after flattening the resulting feature maps, the predicted class (for the input T-F patch) is obtained by a dense layer with softmax activation and 15 output units (corresponding to the 15 acoustic scenes). We also experiment with the concept of \textit{pre-activation}~\cite{he2016identity}. This technique was initially devised for image recognition in the context of deep residual networks. In~\cite{he2016identity}, a residual unit is proposed containing two paths: \textit{i)} a clean information path for the information to propagate and \textit{ii)} another path with an additive residual function. In the latter path, BN and ReLU are applied as pre-activation of the convolutional layers (in addition to the common post-activation consisting of the same BN and ReLU pair after the convolution operation). Reported advantages in the particular case of deep residual networks, with 100+ layers, include ease of optimization and improved regularization. Moreover, pre-activation has recently proved successful for ASC in~\cite{hanconvolutional17}, albeit with a deeper network than the one proposed here. We want to explore this technique in a fairly shallow network. Based on the results obtained in Section~\ref{sec:CNN_results_PREACT}, we add BN and ReLU non-linearity directly at the network's input of Fig.~\ref{fig:archi} (before the first convolutional layer) to form the final proposed CNN.
\subsubsection{Training Strategy and Hyperparameters}
Network weights are initialized with a uniform distribution. The loss function is categorical cross-entropy and the optimizer is Adam. The initial learning rate is 0.002, and it is reduced by a factor of 2 whenever the validation loss does not decrease for 5 epochs.
We also experimented with \textit{i)} dropping the learning rate by half every fixed number of epochs and \textit{ii)} using Adam with no learning rate scheduling. However, best results were obtained by reducing the learning rate when the validation loss plateaus. The training is early-stopped if the validation loss does not improve for 15 epochs, up to a maximum of 200 epochs. For early-stopping, a 15\% validation set is randomly split from the training data of every class. The batch size is 64, and training samples are shuffled between epochs. In both convolutional layers, L2 regularization is applied with a parameter of $10^{-5}$. The system is implemented using Keras (v2.1.3) and TensorFlow (v1.4.1).
\subsubsection{Feature Extraction and Pre-processing} \label{sec:feature_extraction}
We segment each 10s recording into 7 non-overlapping \textit{segments}. The first 6 segments last 1.5s, and the last one 1s. We then extract features on each segment using the \textit{FreesoundExtractor},\footnote{\url{http://essentia.upf.edu/documentation/freesound_extractor.html}} an out-of-the-box feature extractor from the Essentia open-source library for audio analysis~\cite{Bogdanov_essentia}. This extractor computes hundreds of features for sound and music analysis, and it is used by Freesound\footnote{\url{https://freesound.org/}} to provide its sound analysis API and search functionalities. The most musically-related features (e.g., key, chords, etc.) are discarded. The selected pool of features is listed in Table~\ref{tab:feature_gbm}, along with their dimensionality. The features are calculated at the frame level using the same frame and hop size mentioned in Section~\ref{sec:CNN_prepro}. All other parameters of the \textit{FreesoundExtractor} are set to default values. We apply four statistical aggregations---mean, variance, and mean and variance of the derivative---to the frame-level feature vectors of each segment.
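A minimal NumPy sketch of these four aggregations, assuming a matrix of frame-level features for one segment; the function name is ours, and using a first-order difference as the derivative is our assumption:

```python
import numpy as np

def aggregate(frame_feats):
    """Mean, variance, and mean and variance of the first-order temporal
    derivative of frame-level features with shape (n_frames, n_dims)."""
    d = np.diff(frame_feats, axis=0)
    return np.concatenate([frame_feats.mean(axis=0),
                           frame_feats.var(axis=0),
                           d.mean(axis=0),
                           d.var(axis=0)])

# 205 frame-level dimensions yield an 820-dimensional segment vector.
feats = np.random.randn(75, 205)
seg_vector = aggregate(feats)
print(seg_vector.shape)  # (820,)
```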
Therefore, a $R^{820\times1}$ (i.e., 205$\times$4) feature vector is output for each segment. As in Section~\ref{sec:CNN_prepro}, we fit a mean and variance standardization scaler over whichever subset of data we use for training, and use it to standardize both train and test data.
\begin{table}[ht!]
\centering
\begin{tabular}{lclc}
\hline
Feature name & Dim. & Feature name & Dim. \\
\hline
Bark bands energy & 32 & Tonal features & 3 \\
ERB bands energy & 23 & Pitch features & 3 \\
Mel bands energy & 45 & Silence rate & 3 \\
MFCC & 13 & Spectral features & 32 \\
HPCP & 38 & GFCC & 13 \\
\hline
\end{tabular}
\caption{Selected features extracted by the \textit{FreesoundExtractor} and number of dimensions.}
\label{tab:feature_gbm}
\end{table}
\subsubsection{Linear Discriminant Analysis Feature Reduction}
Linear Discriminant Analysis (LDA) can be used as a dimensionality reduction technique after the feature extraction stage. The ultimate goal is to mitigate overfitting by projecting a high-dimensional dataset onto a lower-dimensional space. This is done by maximizing the variance of the data as well as the separability of the classes. Some of the features of Table~\ref{tab:feature_gbm} are computed in a similar way, e.g., several energy bands are computed with different psychoacoustic scales (e.g., Bark or Mel). While they may provide some complementary information, it is likely that they also have a considerable amount of redundancy. This, together with the high dimensionality of the feature vector, may cause overfitting and a slow-down of model training. In order to mitigate this, while keeping the rich information of the extracted features, we perform LDA-based feature reduction. It is applied on any subset of data used for training, and then the corresponding test set is transformed accordingly (see Section~\ref{sec:eval_setup}).
\subsubsection{Hyperparameter Tuning}
Since ASC is a multi-class classification problem, we use logarithmic loss as the objective function.
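The multi-class logarithmic loss used as the tuning objective is the mean negative log-probability assigned to the true class. A minimal sketch, where the function name and the clipping constant are ours:

```python
import numpy as np

def log_loss(probs, labels, eps=1e-15):
    """Multi-class logarithmic loss for an (n_samples, n_classes) matrix of
    predicted probabilities and integer class labels."""
    p = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return -np.log(p).mean()

# Two predictions over 15 classes: one confident and correct, one uniform.
probs = np.full((2, 15), 1.0 / 15)
probs[0] = 0.0
probs[0, 3] = 1.0
loss = log_loss(probs, np.array([3, 7]))
print(round(loss, 4))  # 1.354  (= log(15) / 2)
```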
We perform a grid search over 5 hyperparameters. Four of them relate to the GBM (learning rate, \textit{max bins}, \textit{number of leaves}, and \textit{min data in leaf}) while the reduced feature dimension relates to the LDA. The number of leaves is the main parameter to control model complexity, whereas \textit{max bins} and \textit{min data in leaf} are two important parameters to deal with overfitting. All other hyperparameters are set to default values. We run the grid search in two cases---with and without LDA---and the hyperparameter values considered are listed in Table~\ref{tab:gridsearch_values}. The grid search is performed using cross-validation on the development set. The hyperparameter setting leading to the best cross-validation accuracy is kept for the final GBM model, which is used to predict acoustic scenes on the evaluation set.
\begin{table}[ht!]
\centering
\begin{tabular}{ll}
\hline
Hyperparameter & Values \\
\hline
Learning rate & [0.01, 0.05, 0.1] \\
Max bins & [128, 256, 512] \\
Number of leaves & [64, 128, 256] \\
Min data in leaf & [500, 1000, 2000] \\
Reduced feature dimension & [64, 128, 256, 512] \\
\hline
\end{tabular}
\caption{Hyperparameter grid search for GBM and LDA.}
\label{tab:gridsearch_values}
\end{table}
\vspace{-2mm}
\subsection{Convolutional Neural Network} \label{sec:CNN}
\input{S21_CNN}
\subsection{Gradient Boosting Machine} \label{sec:GBM}
\input{S22_GBM}
\subsection{Late Fusion} \label{sec:system_latefusion}
In order to combine the predictions from both methods, we tried approaches with and without learning, all of them starting from the individual models' class probabilities computed on the development set using the proposed four-fold cross-validation setup. The simplest approach (i.e., without learning) consists of combining the prediction probabilities by taking their geometric mean, arithmetic mean, or rank averaging. Then, the final predicted label is selected by taking the \textit{argmax} over the resulting values.
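The learning-free combinations just described can be sketched as follows, given the two models' class-probability matrices; the function name and the toy numbers are ours:

```python
import numpy as np

def fuse(p_a, p_b, method="geometric"):
    """Combine two (n_samples, n_classes) probability matrices and return
    the fused label predictions via argmax."""
    if method == "geometric":
        fused = np.sqrt(p_a * p_b)
    elif method == "arithmetic":
        fused = (p_a + p_b) / 2.0
    elif method == "rank":
        # Rank averaging: replace each probability by its per-sample rank.
        fused = (p_a.argsort(axis=1).argsort(axis=1)
                 + p_b.argsort(axis=1).argsort(axis=1)) / 2.0
    return fused.argmax(axis=1)

p_cnn = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p_gbm = np.array([[0.4, 0.5, 0.1], [0.1, 0.6, 0.3]])
print(fuse(p_cnn, p_gbm, "geometric"))  # [0 1]
```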
The learning-based approach consists of two steps. First, using the models' prediction probabilities computed on the \textit{development} set as \textit{training data}, we fit a classifier or \textit{meta learner}. We experimented with logistic regression and SVMs with several kernels. The models' hyperparameters were determined by grid search on the training data using four-fold cross-validation, restricting the parameter search to ranges providing strong regularization. Then, once the meta learner is fit, we predict labels on the \textit{evaluation} set by taking as input the pre-computed prediction probabilities from the CNN and GBM on this set. This approach is sometimes referred to as \textit{stacking}.
\subsection{Dataset and Baseline} \label{sec:dataset}
Systems are evaluated with \textit{TUT Acoustic Scenes 2017}, a dataset that contains recordings made in 15 acoustic scenes. The dataset is split into a development and an evaluation set, of 4680 and 1620 audio recordings respectively.\footnote{A list of the scenes together with more details about the dataset can be found in \url{http://www.cs.tut.fi/sgn/arg/dcase2017/challenge/task-acoustic-scene-classification}.} The development set contains 312 recordings per class. All recordings last 10s and have a sampling rate of 44.1~kHz. A four-fold cross-validation setup is provided for the development set. The dataset presents a mismatch between the development and evaluation sets due to differences in the recording conditions. The average accuracy drop between both sets across all systems submitted to the ASC task of DCASE2017 is 20.1\%.\footnote{\url{http://www.cs.tut.fi/sgn/arg/dcase2017/challenge/task-acoustic-scene-classification-results}\label{footnote_url_ASC_results}} A multilayer perceptron (MLP) is provided as the baseline system. The input representation is 40 log mel-band energies in 5 consecutive frames, and the MLP has 2 layers with 50 hidden units each.
\subsection{Evaluation Setup} \label{sec:eval_setup}
The output of the CNN and GBM models for every input 1.5s segment is a $R^{15\times1}$ vector with the probabilities of the segment belonging to every class. The class prediction for each 10s~recording is computed by averaging the per-class scores across segments and finding the class with the maximum average score. The development set is used for training/testing the CNN and GBM approaches according to the provided four-fold cross-validation setup (see Fig.~\ref{fig:setup_dev}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{smc2018_dev.png}
\caption{Flowchart illustrating the workflow in development mode.}
\label{fig:setup_dev}
\end{figure}
For predicting acoustic scenes on the evaluation set, the models are trained on the full development set and evaluated on the evaluation set (see Fig.~\ref{fig:setup_eval}). The metric used is classification accuracy, i.e., the number of correctly classified recordings divided by the total amount of recordings.
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{smc2018_eval.png}
\caption{Flowchart illustrating the workflow in evaluation mode. Models are trained on the full development set and predictions are computed on the evaluation set.}
\label{fig:setup_eval}
\end{figure}
\vspace{-2mm}
\subsection{Convolutional Neural Network} \label{sec:CNN_results}
Two types of experiments were carried out with the CNN: \textit{i)} experimenting with filter configurations in the first layer, and \textit{ii)} exploring the concept of pre-activation. Since results obtained with a GPU are generally non-deterministic, the accuracies reported in this section are the result of averaging ten independent trials of every experiment. Confidence intervals are also shown.
\subsubsection{Filter Configurations} \label{sec:CNN_results_filters}
We design filter configurations with several filter shapes in the first layer.
The number of shapes is specified in Table~\ref{tab:CNN_filter_conf} and Fig.~\ref{fig:cnn_from_sq_to_5} as \textit{CNN\_x}, where $x$ denotes the number of different shapes.\footnote{CNN\_sq refers to the case where filters are square, which is a specific case of CNN\_1.} Every shape (denoted by (\textit{time, frequency})) can be repeated a different number of times, as illustrated in Table~\ref{tab:CNN_filter_conf}, but in all cases the total number of filters is 112.
\begin{table}[ht!]
\centering
\begin{tabular}{ll}
\hline
System & Filter configuration - \#\textit{filters} x (\textit{time, freq}) \\
\hline
MLP & - \\
CNN\_sq & 112x (5,5) \\
CNN\_1 & 112x (3,40) \\
CNN\_2 & 64x (3,20) $|$ 48x (3,70) \\
CNN\_3 & 48x (3,10) $|$ 32x (3,30) $|$ 32x (3,60) \\
CNN\_4 & 48x (3,8) $|$ 32x (3,32) $|$ 16x (3,64) $|$ 16x (3,90) \\
CNN\_5 & 36x (3,6) $|$ 22x (3,26) $|$ 22x (3,48) $|$ 16x (3,70) \\
 & 16x (3,96) \\
\hline
\end{tabular}
\caption{Filter configurations in the first layer for the CNN of Fig.~\ref{fig:archi}.}
\label{tab:CNN_filter_conf}
\end{table}
\vspace{-1mm}
The motivation for designing filters with different vertical dimensions is to cover diverse spectral patterns, ranging from narrow-band patterns to others that may spread over frequency. In order to establish a fair comparison among networks, the number of parameters was kept approximately constant by adjusting the number of filters per shape and the filter dimensions. The number of parameters in all cases lies in the range 656k-660k, with the exception of the square-filter case, which has 648k (due to the smaller size of the square filters). In particular, the top-performing case, CNN\_4, has 657k parameters. The specific filter shapes in Table~\ref{tab:CNN_filter_conf} were chosen through a number of preliminary experiments. While an exhaustive search may be desirable, it may require prohibitively long computation times.
Fig.~\ref{fig:cnn_from_sq_to_5} shows the classification accuracy values for the architecture of Fig.~\ref{fig:archi} and the filter configurations of Table~\ref{tab:CNN_filter_conf}. The accuracy of the MLP baseline is specified as well.
\vspace*{-2mm}
\begin{figure}[ht]
\centering
\includegraphics[width=0.52\textwidth]{cnn_from_sq_to_5_shapes.png}
\vspace*{-8mm}
\caption{ASC performance using the CNN of Fig.~\ref{fig:archi} with the filter configurations in the first layer given by Table~\ref{tab:CNN_filter_conf}. No pre-activation is adopted in these experiments. Note that the y-axis differs for development and evaluation sets.}
\label{fig:cnn_from_sq_to_5}
\end{figure}
It can be observed that the accuracy on the evaluation set increases overall with the diversity of the filter shapes, until a point where this diversity no longer helps (CNN\_5). We also carried out some preliminary experiments with horizontal filters, but results were slightly worse than those obtained with vertical ones.
\vspace*{-3mm}
\subsubsection{Pre-activation} \label{sec:CNN_results_PREACT}
Fig.~\ref{fig:cnn_preact_TDnorm} shows the results obtained by adding pre-activation~\cite{he2016identity} to the top-performing case of Fig.~\ref{fig:cnn_from_sq_to_5}, i.e., to CNN\_4.
\vspace*{-2mm}
\begin{figure}[ht]
\centering
\includegraphics[width=0.52\textwidth]{cnn_preact_TDnorm.png}
\vspace*{-8mm}
\caption{ASC performance by adopting pre-activation in the CNN of Fig.~\ref{fig:archi}, i.e., adding BN and ReLU before the first convolutional layer. Note that the y-axis differs for development and evaluation sets.}
\label{fig:cnn_preact_TDnorm}
\end{figure}
It can be seen that adding pre-activation improves the results slightly on the evaluation set (see \textit{preact} bar). However, the gap between development and evaluation accuracies is still substantial.
Curiously, we found that this gap is reduced when we complement pre-activation with normalization of the input audio waveform (see \textit{norm\&preact} bar). This is somewhat surprising, as the T-F patches that input the CNN were already standardized (see Section~\ref{sec:CNN_prepro}). Finally, we report the accuracy obtained by applying \textit{only} time domain normalization of the audio (without pre-activation), to confirm that it is the combination of both which yields the improvement (see \textit{norm} bar). We also experimented with pre-activation not only prior to the first convolutional layer, but also between every max-pooling operation and the next layer, following previous work~\cite{hanconvolutional17}. Resulting accuracies were not higher. It hence appears that the combination of pre-activation and normalization of the input waveform helps to improve the model's generalization, showing slightly lower development accuracy while increasing evaluation accuracy. Nevertheless, further experiments are needed to better assess and understand the benefits of pre-activation and its dependency on audio signal energy or dynamic range. For example, one aspect of the audio signal in acoustic scenes or field recordings is its small dynamic range. This happens often, as sources can be far away from the microphone, since the goal is to capture the entirety of the acoustic context rather than specific acoustic events. Evaluating this approach on different datasets may be revealing.
\subsection{Gradient Boosting Machine} \label{sec:GBM_results}
The best hyperparameters found for the LDA and non-LDA cases are listed in Table~\ref{tab:best_values}. The dimensionality of the feature vector after LDA-based feature reduction is 64. This is 7.8\% of the initial dimensionality (820), which indicates considerable information redundancy in the initial pool of features gathered from the \textit{FreesoundExtractor}.
After the feature dimension reduction, we observe a significant boost in training speed.
\begin{table}[ht!]
\centering
\begin{tabular}{lcc}
\hline
Hyperparameter & non-LDA & LDA \\
\hline
Learning rate & 0.05 & 0.05 \\
Max bins & 128 & 128 \\
Number of leaves & 128 & 128 \\
Min data in leaf & 1000 & 500 \\
Reduced feature dimension & -- & 64 \\
\hline
\end{tabular}
\caption{Best hyperparameters in both LDA and non-LDA cases, found by grid search on the development set.}
\label{tab:best_values}
\end{table}
Table~\ref{tab:gbm_results} shows the accuracy results. The performance using LDA feature reduction is higher than both the non-LDA variant and the MLP baseline, with small improvements of 1.7\% and 2.6\% on the evaluation set, respectively. However, we still witness a significant accuracy drop in both cases. It is worth mentioning that, to tackle the overfitting problem, we experimented with two other techniques, namely PCA and feature selection using feature importance. However, no significant improvements were observed. For the late fusion we use the GBM with LDA.
\begin{table}[ht!]
\centering
\begin{tabular}{lcc}
\hline
Approach & dev acc (\%) & eval acc (\%) \\
\hline
Baseline & 74.8 & 61.0 \\
GBM non-LDA & 81.4 & 61.9 \\
GBM LDA & 81.1 & \textbf{63.6} \\
\hline
\end{tabular}
\caption{ASC performance by the GBM model with and without LDA feature reduction.}
\label{tab:gbm_results}
\end{table}
\vspace*{-4mm}
\subsection{Models' Comparison} \label{sec:comparison}
The CNN method clearly outperforms the GBM method. However, we wanted to assess the potential complementarity of these models, i.e., whether their output predictions are complementary or redundant. We follow the approach of~\cite{salamon2017fusing}, which consists of plotting the difference of the confusion matrices yielded by both systems, shown in Fig.~\ref{fig:confusion_diff}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.51\textwidth]{diff_conf_mat_eval_cnn_minus_gbm.png}
\vspace*{-6mm}
\caption{Difference between the confusion matrices produced by \textit{i)} the CNN and \textit{ii)} the GBM models (in this order), evaluated on the evaluation set.}
\label{fig:confusion_diff}
\end{figure}
Looking at the main diagonal, positive red numbers indicate scenes where the CNN performs better, whereas negative blue numbers represent scenes where the GBM achieves more correct predictions. The CNN yields better results in most of the acoustic scenes. However, despite the lower overall performance of the GBM, it interestingly yields better predictions in the 'park', 'beach' and 'cafe/restaurant' scenes. Then, off the diagonal, positive red numbers indicate that the CNN presents higher confusion between a given pair of acoustic scenes. Similarly, negative blue numbers indicate that the GBM suffers from higher confusion between a given pair of acoustic scenes. Overall, it can be seen that the models get confused between different pairs of scenes. In summary, the methods present different behaviour to some extent, and hence their predictions may be complementary.
\subsection{Late Fusion} \label{sec:fusion_results}
After exploring the approaches described in Section~\ref{sec:system_latefusion}, logistic regression led to the best results, which are listed in Table~\ref{tab:late_fusion_results}.
\begin{table}[ht!]
\centering
\begin{tabular}{lcc}
\hline
System & dev acc (\%) & eval acc (\%) \\
\hline
MLP baseline & 74.8 & 61.0 \\
Proposed CNN + GBM & 83.3 & \textbf{72.8} \\
\hline
\end{tabular}
\caption{ASC performance by the combined system.}
\label{tab:late_fusion_results}
\end{table}
The proposed combined system shows an improvement of 3.1\% over the average score provided by the best CNN architecture, and an improvement of 11.8\% over the MLP baseline. It also shows an improvement of 5.5\% with respect to our previous work~\cite{fonseca2017acoustic}.
We consider as state of the art the top-performing submissions to the ASC task of the DCASE2017 Challenge.\textsuperscript{\ref{footnote_url_ASC_results}} Among them, there are a few systems that outperform the one proposed here. However, they have the burden of being more complex or computationally intensive, including Generative Adversarial Networks, ensembles of 4 or more systems (with several CNNs), data augmentation, or deeper networks. Compared to them, we consider that our system is simpler in overall terms. The proposed CNN is more interpretable, as domain knowledge was used in its design. The GBM can be trained on a standard desktop computer without the need for additional infrastructure, e.g., a GPU. Figure~\ref{fig:confusion_fusion} shows the confusion matrix for the proposed combined system, where it can be seen which acoustic scenes are misclassified the most. The worst case occurs when the system predicts 'residential area' while the true label is 'beach' or 'park'.
\begin{figure}[ht]
\centering
\includegraphics[width=0.51\textwidth]{conf_mat_eval_fusion_with_logReg.png}
\vspace*{-6mm}
\caption{Confusion matrix for the proposed combined system evaluated on the evaluation set.}
\label{fig:confusion_fusion}
\end{figure}
\vspace*{-1mm}
\section{Introduction} \label{sec:intro}
\input{S1_Intro}
\section{Method} \label{sec:method}
\input{S2_Method}
\section{Evaluation} \label{sec:experi}
\input{S3_Experiments}
\section{Results and Discussion} \label{sec:results}
\input{S4_Results}
\section{Conclusion} \label{sec:conclu}
\input{S5_Conclusion}
Jinnah Sports Stadium is a multi-purpose stadium in Islamabad, Pakistan. It is currently used mostly for football matches. The stadium has a capacity of 48,000 people and is the largest stadium in Pakistan. Stadium This stadium was built in the 1970s. The stadium was renovated and used for the SAF Games in 2004. The playing field also has a running track around its perimeter allowing athletics use. Tournaments hosted It has hosted the following sporting events: South Asian Games: 1989, 2004 SAFF Championship: 2005 (semi-finals and final only) SAFF Women's Championship: 2014 National Games of Pakistan: 2013 Quaid-e-Azam Inter Provincial Youth Games: 2016, 2017 Pakistan Premier League National Women Football Championship: 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 PFF League: 2010 (region round and Group B matches only), 2011 All Pakistan Women Inter University Women Football Championship: 2011 See also List of stadiums in Pakistan List of stadiums by capacity References Sports venues in Pakistan Athletics (track and field) venues in Pakistan Football venues in Pakistan Sport in Islamabad Multi-purpose stadiums in Pakistan 1970 establishments in Pakistan Sports venues completed in 1970 Memorials to Muhammad Ali Jinnah
/**
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.hyperledger.common;

/**
 * A Transaction ID.
 * This is technically a cryptographic hash of its content.
 * Bitcoin hashes the entire transaction content, which makes referencing
 * unsigned or partially signed transactions impractical.
 * <p>
 * This class is introduced to allow other implementations of the ID and to
 * ensure transaction IDs are not mixed up with block/header IDs.
 */
public class TID extends Hash {
    public static final TID INVALID = new TID(new byte[32]);

    // TODO in Sidechain Elements this is the genesis block hash
    public static final TID BITCOIN_NATIVE = new TID(new byte[]{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1});

    public TID(Hash hash) {
        super(hash.unsafeGetArray());
    }

    public TID(byte[] hash) {
        super(hash);
    }

    public TID(String hex) {
        super(hex);
    }

    private TID(byte[] hash, boolean safe) {
        super(hash, safe);
    }

    public static TID createFromSafeArray(byte[] hash) {
        if (hash.length != 32) {
            throw new IllegalArgumentException("Digest length must be 32 bytes for Hash");
        }
        return new TID(hash, true);
    }
}
package com.box.l10n.mojito.rest.client;

import com.box.l10n.mojito.rest.client.exception.ResourceNotCreatedException;
import com.box.l10n.mojito.rest.client.exception.ResourceNotFoundException;
import com.box.l10n.mojito.rest.entity.Authority;
import com.box.l10n.mojito.rest.entity.Role;
import com.box.l10n.mojito.rest.entity.User;
import com.google.common.collect.Sets;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.client.HttpClientErrorException;

/** @author jyi */
@Component
public class UserClient extends BaseClient {

  /** logger */
  static Logger logger = LoggerFactory.getLogger(UserClient.class);

  @Override
  public String getEntityName() {
    return "users";
  }

  /**
   * Get a list of {@link User}s.
   *
   * @param username
   * @return List of {@link User}s
   */
  public List<User> getUsers(String username) {
    Map<String, String> filterParams = new HashMap<>();
    if (username != null) {
      filterParams.put("username", username);
    }
    return authenticatedRestTemplate.getForObjectAsListWithQueryStringParams(
        getBasePathForEntity(), User[].class, filterParams);
  }

  /**
   * Creates a {@link User}
   *
   * @param username
   * @param password
   * @param role
   * @param surname
   * @param givenName
   * @param commonName
   * @return
   * @throws com.box.l10n.mojito.rest.client.exception.ResourceNotCreatedException
   */
  public User createUser(
      String username,
      String password,
      Role role,
      String surname,
      String givenName,
      String commonName)
      throws ResourceNotCreatedException {
    logger.debug("Creating user with username [{}]", username);
    User userToCreate = new User();
    userToCreate.setUsername(username);
    userToCreate.setPassword(password);
    userToCreate.setSurname(surname);
    userToCreate.setGivenName(givenName);
    userToCreate.setCommonName(commonName);
    if (role != null) {
      Authority authority = new Authority();
      authority.setAuthority(role.toString());
      userToCreate.setAuthorities(Sets.newHashSet(authority));
    }
    try {
      return authenticatedRestTemplate.postForObject(
          getBasePathForEntity(), userToCreate, User.class);
    } catch (HttpClientErrorException exception) {
      if (exception.getStatusCode().equals(HttpStatus.CONFLICT)) {
        throw new ResourceNotCreatedException(
            "User with username [" + username + "] already exists");
      } else {
        throw exception;
      }
    }
  }

  /**
   * Deletes a {@link User} by the {@link User#username}
   *
   * @param username
   * @throws com.box.l10n.mojito.rest.client.exception.ResourceNotFoundException
   */
  public void deleteUserByUsername(String username) throws ResourceNotFoundException {
    logger.debug("Deleting user by username = [{}]", username);
    List<User> users = getUsers(username);
    if (users.isEmpty()) {
      throw new ResourceNotFoundException("User with username [" + username + "] is not found");
    } else {
      authenticatedRestTemplate.delete(getBasePathForEntity() + "/" + users.get(0).getId());
    }
  }

  /**
   * Updates a {@link User} by the {@link User#username}
   *
   * @param username
   * @param password
   * @param role
   * @param surname
   * @param givenName
   * @param commonName
   * @throws ResourceNotFoundException
   */
  public void updateUserByUsername(
      String username,
      String password,
      Role role,
      String surname,
      String givenName,
      String commonName)
      throws ResourceNotFoundException {
    logger.debug("Updating user by username = [{}]", username);
    List<User> users = getUsers(username);
    if (users.isEmpty()) {
      throw new ResourceNotFoundException("User with username [" + username + "] is not found");
    } else {
      User user = users.get(0);
      user.setPassword(password);
      user.setSurname(surname);
      user.setGivenName(givenName);
      user.setCommonName(commonName);
      Set<Authority> authorities = new HashSet<>();
      if (role != null) {
        Authority authority = new Authority();
        authority.setAuthority(role.toString());
        authorities.add(authority);
      }
      user.setAuthorities(authorities);
      authenticatedRestTemplate.patch(getBasePathForResource(user.getId()), user);
    }
  }
}
Moonlight (Barry Jenkins, US) By Phil Coldiron In CS69, From The Magazine, Web Only

Liberty City, the Miami neighbourhood Barry Jenkins hails from and the setting for much of Moonlight, his exceptional second feature, has a median annual household income of under $22,000; 47% of its population lives below the United States federal poverty line, while nearly a third of working-age adults are unemployed. Grown up around Liberty Square, segregationist blocks built in the '30s as Florida's first venture into public housing, the area is acutely typical of the pains wrought on black communities by America's ongoing history of structural racism: it is underserved by medical facilities and public programs, abandoned by industry, and torn apart by aggressive and unjust policing and lawmaking. It would not be quite correct to say that the few films concerned with black lives to receive both wide distribution and critical praise in recent years have ignored such material conditions faced by the individuals and communities they're depicting. Rather, poverty and oppression are often acknowledged quickly into drama, so that a singularly able hero might transcend them or suffer them nobly or more productively as the means for analyzing some relationship of power marked off by the distance of history. What is remarkable about Moonlight is that Jenkins and playwright Tarell Alvin McCraney, another son of Liberty City, have crafted a film which has this place so deep in its bones that it strikes me as impossible to separate any aspect of its form from the context which produced it, and which is in turn reflected through the frame of its moving and fascinating central figure, Chiron. The film follows Chiron across three discrete and chronologically consecutive chapters, each concerned with a specific relationship in his life.
"Little," in which he is portrayed by Alex Hibbert, covers a period of weeks or months in childhood as he comes to know Juan (Mahershala Ali), a local drug dealer; "Chiron" (Ashton Sanders) follows a string of days in adolescence leading up to the moonlit moment he first acts on his desire for another man, and the violence that follows it; and "Black" (Trevante Rhodes) settles into the reunion of a now-adult Chiron with Kevin (André Holland), the classmate involved in the second chapter's twinned pleasure and pain. Hibbert, Sanders, and Rhodes are given the freedom to interpret Chiron in their own ways, while the character's coherence resides in his silence. Jenkins and his actors modulate and explore the textures of this silence in fine detail, testing it at once as a fact of this particular individual's social existence and as an aesthetic derived from the canons of global art cinema. In this sense, we can think of Jenkins as a director of what the scholar and poet Fred Moten calls rubbing, "the constant refusal and disestablishment of separation…a kind of radical indistinctness." As Moten, in a poem from his collection b jenkins (titled so after the poet's mother), draws together Billie Holiday and Roland Barthes, their names situated at its beginning and end through the felt figure of "grain," so Jenkins aims similarly to create heady, conceptually rich images which sacrifice none of their ability to work directly on the body. In what stands among the more astute artistic self-appraisals I've encountered, Jenkins recently tweeted, in approving response to director John Magary's use of "marinate" in a comment on the film, that Moonlight is "food for the soul/sticks to ya ribs." One need not dwell on the history, the meanings bound up in a recipe which has been passed through generations in order to enjoy such a meal while eating it, but it is this quality, known or not, which finally nourishes something other than the belly even as it sticks to the ribs.
But there is a crucial gap in Jenkins' aperçu: soul food implies the presence of a tradition or a heritage, while Moonlight works very nearly in the absence of one. Though this year has seen the release of Kino's "Pioneers of African-American Cinema" home video set, an historical corrective of tremendous importance, and films by both Charles Burnett and Julie Dash have received theatrical revivals—in the latter case, written, as Rivette said of Cézanne, into film history by the artists (the Knowles sisters, in particular)—it remains the case that much of the most exciting moving-image work today is being produced by artists whose people have, through a litany of structural inequalities, long been denied the means and freedom to work in their own ways. Nathaniel Mackey, in his brief, brilliant essay on radical black art, "Destination Out," writes that "Coleman Hawkins felt no identity crisis playing an instrument invented by a Belgian." Surveying only the best of recent work, one might think here of Frances Bodomo's Everybody Dies!, which brushes the everyday terrors of being a black woman in America against the anodyne space of public access television, setting in stark relief both the normalization of violence and, by working in a DIY lineage of détourning such popular forms (e.g., the major films of Owen Land's middle period, such as Remedial Reading Comprehension [1970], or more broadly, the underground films of Jacobs, Smith, or the Kuchars), pointing towards the filmic avant-garde's casual, consistent avoidance of race. Or of Glenn Ligon's multi-screen installation We Need to Wake Up Cause That's What Time It Is, which scrubs Richard Pryor's performance in Richard Pryor: Live on the Sunset Strip (1982) free of not just his jokes, but of his voice entirely, taking his body as the grain of a tradition and expanding it across seven screens, each linked to a part of his figure and active only when it is.
Ligon thus achieves both the formal pleasures of immersive installation—the room-filling rhythmic red glow produced by Pryor's suit comes as close to the sublime as any art I encountered this year—at the same time as he elaborates a specifically black gestural diction, isolating the ways in which meaning refuses to be contained by words alone and arguing that our language has grown insufficiently alarming in the face of the horrors it is regularly called on to describe (Ligon's reference to waking makes clear how long this insufficiency has been known by those who suffer from it most.) While Moonlight is more traditional in form than Bodomo's or Ligon's work, it seems to me to go further in its absolute focus on issues which reside inside of blackness. The film begins, briefly and subtly, with the ocean, the sound of its waves washing into Boris Gardiner's Blaxploitation anthem, "Every N— Is a Star," serving as an overture here as it did on Kendrick Lamar's To Pimp a Butterfly. If the rubbing of silences structures the film vertically, we might think of this rubbing between blueness and blackness as structuring it horizontally, building out its emotional breadth as Jenkins and McCraney test the various points of contact between these broad concepts. This relationship is spoken directly in the line from which the film's title is derived. Having spent a day teaching Chiron to swim, Juan offers a bit of the sort of inscrutable truth which forms the basis of any tradition, as he recalls a night as a child in his native Cuba, when, as he ran the streets with friends, an old woman called out to him, "In moonlight, black boys look blue. You blue!" As he speaks, the camera twice drifts in a calm, downward arc from Juan's face to Chiron's, tying the two together in this moment of teaching and learning. 
Juan, played by Mahershala Ali with a gentle patience I can find no analogue for in American movies, recognizes the difficulties that Chiron will face in his life as a gay black man (Moonlight shows an exceedingly rare concern for a child's queerness) and, with this story, draws him into his family, assures him that he is not alone in the world. That Juan's life ends in the space between chapters, and that his death is reported only obliquely, is a painful reminder of the inadequacy of words. And yet, that Jenkins is unafraid to take such silence as the ground on which he operates, and that he searches within it not for the suffering in blackness and blueness, but for the beauty which overflows such bounds, marks Moonlight as a film from which anyone could stand to learn.
How alternative lending options support SMEs

Funding remains one of the biggest limitations for both large and small businesses as banks adopt more stringent lending terms. The contraction in the economy has seen investors and conventional credit providers alike pull back on lending. If credit provision for large corporates is being restricted, one can only imagine what is happening to businesses that have typically been fragile - the small and medium-sized businesses (SMEs). The focus should be on empowering the entrepreneur and expanding their perspective when they think about access to funding. Entrepreneurs have long fallen into the trap of familiarity. When they think funding, they naturally gravitate towards the traditional banks - forgetting the space that is full of alternative lenders. The biggest obstacle for alternative lenders is not bad debts or stringent credit policies, but educating entrepreneurs about the benefits of alternative funding solutions. Among the reasons entrepreneurs get declined for funding is a misalignment between their funding needs and the potential funder's lending criteria. Other countries, such as the UK, have clearly become conscious of this misalignment. Since 2016, as part of the UK government's bid to promote small business, an Act (the SBEE Act) has obliged the major banks to refer SME finance applications that they have declined to alternative credit providers. We certainly need that in South Africa, an intentional referral effort by banks to connect SMMEs with alternative funders. Nonetheless, entrepreneurs must do their homework to find alternative funders that match their business requirements and stage of the business' life. Alternative funders tend to be niche and specialised, with their credit policies built around funding support. Small businesses must think about a concept that's not as complex as it sounds: building a "data bank".
Many alternative funders use transaction data to perform their risk underwriting processes. Having reliable transaction data opens new channels of funding. Asset-backed solutions are well suited for businesses that have equity trapped in assets they already own. They can leverage business assets (for example, vehicles or specialised equipment) by providing them as security to alternative lenders to access quick short-term funding. Cash is, and always has been, king in any business. Furthermore, a rand received by a business today, if redeployed effectively, is better than a rand received tomorrow. Businesses fail every day due to cash flow pressure as debtors stretch repayment terms. Invoice financing seeks to address working capital and cash flow challenges for businesses that have invoiced their debtors. The funder buys a business's future invoice for a fee. This allows businesses to transfer the risk of the invoice to the funder and unlock the cash from their debtors by getting it upfront. The availability of fintech lenders challenges conventional lenders in the algorithms that they use to understand risk. They cut the high acquisition costs and operating leverage from their business models, as they lend through proprietary channels. Mandla Khupe is the head of commercial at Retail Capital.
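The economics of the invoice-financing arrangement described above are simple to sketch. The following is a minimal illustration with hypothetical numbers: the 80% advance rate and 3% fee are assumptions for the example, not any particular funder's actual terms.

```python
# Hypothetical invoice-financing economics. The advance rate and fee
# are illustrative assumptions, not a real funder's pricing.
def invoice_advance(invoice_value, advance_rate=0.80, fee_rate=0.03):
    """Split an invoice into cash paid upfront, the funder's fee, and the
    balance remitted to the business once the debtor pays."""
    fee = invoice_value * fee_rate
    upfront = invoice_value * advance_rate
    balance = invoice_value - upfront - fee
    return upfront, fee, balance

# A R100,000 invoice on these assumed terms:
upfront, fee, balance = invoice_advance(100_000)
```

On these assumed terms the business receives R80,000 immediately, pays a R3,000 fee, and collects the remaining R17,000 when the debtor settles, turning a 30- or 60-day receivable into cash today.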
Kauaiina montgomeryi is a species of moth described by Riotte in 1978. Kauaiina montgomeryi belongs to the genus Kauaiina and the family Geometridae, the geometer moths. No subspecies are listed in the Catalogue of Life.

Sources
Q: User authentication using POE::Component::Client::HTTP

I am trying to find a module in Perl POE which can do user authentication while making an HTTP request. The HTTP request should be non-blocking. How should I use POE::Component::Client::HTTP to do user authentication by providing username and password details?

A: You can pass an HTTP::Request object to POE::Component::Client::HTTP. Basic Auth is just an Authorization header, which can be set on the request:

use strict;
use warnings;
use MIME::Base64;
use HTTP::Request;

my $username = 'username';
my $password = 'password';

# The second argument ('') suppresses the trailing newline that
# MIME::Base64::encode appends by default, which would corrupt the header.
my $auth = 'Basic ' . MIME::Base64::encode($username . ':' . $password, '');
my $request = HTTP::Request->new(GET => 'http://www.example/', [Authorization => $auth]);

And then just pass the $request to $poe_kernel->post as in the documentation.