Supplying top-notch investigation services within Cheshire and across the United Kingdom, Private Investigator Cheshire has earned its reputation in Manor Park as the leading investigation agency. The private investigators at Private Investigator Cheshire are experienced and knowledgeable about the areas they cover, for both corporate and private clients. All inquiries undertaken in Manor Park are carried out lawfully and can provide the facts and details you need within Cheshire. Private Investigator Cheshire's private investigators regularly attend training to stay current with new investigation methods, technologies, and equipment in Manor Park. Private Investigator Cheshire can help individuals sort out their problems in Manor Park, Cheshire. Some of the matters Private Investigator Cheshire assists clients with are personal issues such as monitoring, surveillance, and background checking. Why Would I Employ Private Investigator Cheshire in Manor Park, Cheshire? Prove your innocence to a jealous partner in Newtown who is wrongly accusing you of cheating: contact Private Investigator Cheshire and they will supply you with the proof you need. Perhaps you believe that somebody is taking money from your wallet or handbag in Murdishaw, although it is difficult to prove. Losing hard-earned cash can set a person back months financially; you can stop this and set the record straight by engaging the help of Private Investigator Cheshire's experienced private detectives. Or you have been wrongly accused over large amounts of stock missing from your Norton Cross warehouse.
If you want proof of whoever is responsible for the missing materials in order to clear your name, Private Investigator Cheshire in Norton Cross can help with everything you need for your situation. Perhaps your adopted child ran away from Moore after learning that he is not your biological child, and was deeply upset about it. The best way to find a missing person is with the help of Private Investigator Cheshire in the Manor Park area. Surveillance offers solutions for a number of problems in Norton because the evidence gathered is very clear. There are many types of monitoring supplied by Private Investigator Cheshire in Manor Park for individual as well as business clients. If a person is stealing from your workplace in Manor Park, it can result in wasted money and time because it causes problems on the production line. Company surveillance in Manor Park can help you find the thief by placing concealed cameras in your workplace. You may want your children to meet your new partner in Manor Park, but not before you are sure about their identity. One of the best ways to do that is by employing Private Investigator Cheshire to carry out a background check in Manor Park. Or you need surveillance on your household to see whether a new nanny is right for your children in Manor Park. If you want black-and-white evidence that cannot be refuted, then surveillance by Private Investigator Cheshire is the service for you.
Bidessus calabricus is a species of beetle described by Guignot in 1957. Bidessus calabricus belongs to the genus Bidessus and the family Dytiscidae (diving beetles). No subspecies are listed in the Catalogue of Life.
Louisville & Nashville R. Co. v. Mottley, 211 U.S. 149 (1908) 211 U.S. 149 (1908) LOUISVILLE AND NASHVILLE RAILROAD COMPANY v. MOTTLEY. Supreme Court of United States. Argued October 13, 1908. Decided November 16, 1908. APPEAL FROM THE CIRCUIT COURT OF THE UNITED STATES FOR THE WESTERN DISTRICT OF KENTUCKY. *151 Mr. Henry Lane Stone for appellant. Mr. Lewis McQuown and Mr. Clarence U. McElroy for appellees. By leave of court, Mr. L.A. Shaver, in behalf of The Interstate Commerce Commission, submitted a brief as amicus curiae. MR. JUSTICE MOODY, after making the foregoing statement, delivered the opinion of the court. Two questions of law were raised by the demurrer to the bill, were brought here by appeal, and have been argued before us. They are, first, whether that part of the act of Congress of June 29, 1906 (34 Stat. 584), which forbids the giving of free passes or the collection of any different compensation for transportation of passengers than that specified in the tariff filed, makes it unlawful to perform a contract for transportation of persons, who in good faith, before the passage of the act, had accepted such contract in satisfaction of a valid cause of action against the railroad; and, second, whether the statute, if it should be construed to render such a contract unlawful, is in *152 violation of the Fifth Amendment of the Constitution of the United States. We do not deem it necessary, however, to consider either of these questions, because, in our opinion, the court below was without jurisdiction of the cause. Neither party has questioned that jurisdiction, but it is the duty of this court to see to it that the jurisdiction of the Circuit Court, which is defined and limited by statute, is not exceeded. This duty we have frequently performed of our own motion. Mansfield, &c. Railway Company v. Swan, 111 U.S. 379, 382; King Bridge Company v. Otoe County, 120 U.S. 225; Blacklock v. Small, 127 U.S. 96, 105; Cameron v. Hodges, 127 U.S.
322, 326; Metcalf v. Watertown, 128 U.S. 586, 587; Continental National Bank v. Buford, 191 U.S. 119. There was no diversity of citizenship and it is not and cannot be suggested that there was any ground of jurisdiction, except that the case was a "suit . . . arising under the Constitution and laws of the United States." Act of August 13, 1888, c. 866, 25 Stat. 433, 434. It is the settled interpretation of these words, as used in this statute, conferring jurisdiction, that a suit arises under the Constitution and laws of the United States only when the plaintiff's statement of his own cause of action shows that it is based upon those laws or that Constitution. It is not enough that the plaintiff alleges some anticipated defense to his cause of action and asserts that the defense is invalidated by some provision of the Constitution of the United States. Although such allegations show that very likely, in the course of the litigation, a question under the Constitution would arise, they do not show that the suit, that is, the plaintiff's original cause of action, arises under the Constitution. In Tennessee v. Union & Planters' Bank, 152 U.S. 454, the plaintiff, the State of Tennessee, brought suit in the Circuit Court of the United States to recover from the defendant certain taxes alleged to be due under the laws of the State. The plaintiff alleged that the defendant claimed an immunity from the taxation by virtue of its charter, and that therefore the tax was void, because in violation of the provision of the Constitution of the United *153 States, which forbids any State from passing a law impairing the obligation of contracts. The cause was held to be beyond the jurisdiction of the Circuit Court, the court saying, by Mr. Justice Gray (p. 464), "a suggestion of one party, that the other will or may set up a claim under the Constitution or laws of the United States, does not make the suit one arising under that Constitution or those laws." 
Again, in Boston & Montana Consolidated Copper & Silver Mining Company v. Montana Ore Purchasing Company, 188 U.S. 632, the plaintiff brought suit in the Circuit Court of the United States for the conversion of copper ore and for an injunction against its continuance. The plaintiff then alleged, for the purpose of showing jurisdiction, in substance, that the defendant would set up in defense certain laws of the United States. The cause was held to be beyond the jurisdiction of the Circuit Court, the court saying, by Mr. Justice Peckham (pp. 638, 639). "It would be wholly unnecessary and improper in order to prove complainant's cause of action to go into any matters of defence which the defendants might possibly set up and then attempt to reply to such defence, and thus, if possible, to show that a Federal question might or probably would arise in the course of the trial of the case. To allege such defence and then make an answer to it before the defendant has the opportunity to itself plead or prove its own defence is inconsistent with any known rule of pleading so far as we are aware, and is improper. "The rule is a reasonable and just one that the complainant in the first instance shall be confined to a statement of its cause of action, leaving to the defendant to set up in his answer what his defence is and, if anything more than a denial of complainant's cause of action, imposing upon the defendant the burden of proving such defence. "Conforming itself to that rule the complainant would not, in the assertion or proof of its cause of action, bring up a single Federal question. The presentation of its cause of action would not show that it was one arising under the Constitution or laws of the United States. *154 "The only way in which it might be claimed that a Federal question was presented would be in the complainant's statement of what the defence of defendants would be and complainant's answer to such defence. 
Under these circumstances the case is brought within the rule laid down in Tennessee v. Union & Planters' Bank, 152 U.S. 454. That case has been cited and approved many times since, . . ." The interpretation of the act which we have stated was first announced in Metcalf v. Watertown, 128 U.S. 586, and has since been repeated and applied in Colorado Central Consolidated Mining Company v. Turck, 150 U.S. 138, 142; Tennessee v. Union & Planters' Bank, 152 U.S. 454, 459; Chappell v. Waterworth, 155 U.S. 102, 107; Postal Telegraph Cable Company v. Alabama, 155 U.S. 482, 487; Oregon Short Line & Utah Northern Railway Company v. Skottowe, 162 U.S. 490, 494; Walker v. Collins, 167 U.S. 57, 59; Muse v. Arlington Hotel Company, 168 U.S. 430, 436; Galveston &c. Railway v. Texas, 170 U.S. 226, 236; Third Street & Suburban Railway Company v. Lewis, 173 U.S. 457, 460; Florida Central & Peninsular Railroad Company v. Bell, 176 U.S. 321, 327; Houston & Texas Central Railroad Company v. Texas, 177 U.S. 66, 78; Arkansas v. Kansas & Texas Coal Company & San Francisco Railroad, 183 U.S. 185, 188; Vicksburg Waterworks Company v. Vicksburg, 185 U.S. 65, 68; Boston & Montana Consolidated Copper & Silver Mining Company v. Montana Ore Purchasing Company, 188 U.S. 632, 639; Minnesota v. Northern Securities Company, 194 U.S. 48, 63; Joy v. City of St. Louis, 201 U.S. 332, 340; Devine v. Los Angeles, 202 U.S. 313, 334. The application of this rule to the case at bar is decisive against the jurisdiction of the Circuit Court. It is ordered that the Judgment be reversed and the case remitted to the Circuit Court with instructions to dismiss the suit for want of jurisdiction. DocketNumber: 37 Citation Numbers: 211 U.S. 149, 29 S. Ct. 42, 53 L. Ed. 126, 1908 U.S. LEXIS 1533 Judges: Moody, After Making the Foregoing Statement Filed Date: 11/16/1908 Authorities (23) Mansfield, C. & LMR Co. v. Swan , 111 U.S. 379 ( 1884 ) King Bridge Co. v. Otoe County , 120 U.S. 225 ( 1887 ) Blacklock v. 
Small , 127 U.S. 96 ( 1888 ) Cameron v. Hodges , 127 U.S. 322 ( 1888 ) Metcalf v. Watertown , 128 U.S. 586 ( 1888 ) Colorado Central Consol. Mining Co. v. Turck , 150 U.S. 138 ( 1893 ) Tennessee v. Union & Planters' Bank , 152 U.S. 454 ( 1894 ) Chappell v. Waterworth , 155 U.S. 102 ( 1894 ) Postal Telegraph Cable Co. v. Alabama , 155 U.S. 482 ( 1894 ) Oregon SL & UNR Co. v. Skottowe , 162 U.S. 490 ( 1896 ) Walker v. Collins , 167 U.S. 57 ( 1897 ) Muse v. Arlington Hotel Co. , 168 U.S. 430 ( 1897 ) Galveston, H. & SAR Co. v. Texas , 170 U.S. 226 ( 1898 ) Florida Central & Peninsular R. Co. v. Bell , 176 U.S. 321 ( 1900 ) Houston & Texas Central R. Co. v. Texas , 177 U.S. 66 ( 1900 ) Arkansas v. Kansas & Texas Coal Co. , 183 U.S. 185 ( 1901 ) Vicksburg Waterworks Co. v. Vicksburg , 185 U.S. 65 ( 1902 ) Boston & C. Mining Co. v. Montana Ore Co. , 188 U.S. 632 ( 1903 ) Continental Nat. Bank of Memphis v. Buford , 191 U.S. 119 ( 1903 ) Minnesota v. Northern Securities Co. , 194 U.S. 48 ( 1904 ) Riley v. Dozier Internet Law, PC ( 2010 ) Rodriguez v. Pacificare of Texas, Inc. ( 1993 ) Bruneau v. F.D.I.C. ( 1992 ) Self-Insurance Institute of America, Inc. v. Korioth ( 1993 ) Carpenter v. Wichita Falls Independent School Dist. , 44 F.3d 362 ( 1995 ) Rivet v. Regions Bnk of LA , 139 F.3d 512 ( 1997 ) Giles v. NYLCare Health ( 1999 ) Copling v. The Container Store ( 1999 ) T T E A v. Ysleta Del Sur , 181 F.3d 676 ( 1999 ) Heimann v. National Elevator , 187 F.3d 493 ( 1999 ) Hart v. Bayer Corporation ( 2000 ) Lipscomb v. Columbus Municipal , 269 F.3d 494 ( 2001 ) Beiser v. Weyler , 284 F.3d 665 ( 2002 ) Arana v. Ochsner Health Plan , 352 F.3d 973 ( 2002 ) Calad v. Cigna Healthcare TX ( 2002 ) Davila v. Aetna US Hlthcare ( 2002 )
Jeremy Corbyn's election co-ordinator Ian Lavery gets his breakfast mixed up with Brexit Keely Lockhart, video source APTN 20 April 2017 • 12:46pm Ian Lavery, a member of Jeremy Corbyn's shadow cabinet, has confused his breakfast with Brexit, even ad-libbing about bacon and eggs, shouting out "working class food!" Labour's election co-ordinator is the latest in an increasingly long line of politicians to confuse Brexit with breakfast. In October last year, Shadow Chancellor John McDonnell got his bacon and eggs crossed when delivering a speech at a Labour event in London - confusing breakfast for Brexit three times in the space of five minutes. During a speech at the Tory party conference last year, Welsh Conservatives leader Andrew RT Davies mistakenly declared: "We WILL make breakfast a success!" Even First Minister Nicola Sturgeon has succumbed to the malapropism.
// Angular Imports.
import {Component} from '@angular/core';

// Project Imports.
import {error} from '@angular/compiler/src/util';
import {StyleDialogComponent} from '../style-dialog/style-dialog.component';
import {MatDialog} from '@angular/material/dialog';

@Component({
  selector: 'app-plot',
  templateUrl: './plot.component.html',
  styleUrls: ['./plot.component.css'],
})
/**
 * Handles the Plotly plot and the plots' data.
 */
export class PlotComponent {
  /**
   * plot_data is an array of maps that define what is plotted.
   * It is initialized with a (0,0) point so that the plot appears when the page is opened.
   * Each map defines a single plotly trace with possible options.
   * Trace options:
   *   x: Takes an array of x-axis values that correspond to the y-axis.
   *      Usually timestamps.
   *   y: Takes an array of y-axis values.
   *      (Note: The lengths of x and y must be equal.)
   *   type: How the data is displayed to the user. 'scattergl' and 'histogram'
   *      are the only currently used types. (Avoid 'scatter' type since it
   *      has poorer performance on large datasets than 'scattergl'.)
   *   mode: Finer details for how to display the type. 'markers' displays only a dot
   *      for each datapoint. 'lines' displays a line through all datapoints.
   *      'lines+markers' displays both.
   *   marker: A map that allows custom changes to markers.
   *   marker.symbol: The shape markers take. Options used: 'circle', 'diamond', 'star'.
   *      https://plotly.com/python/marker-style/#custom-marker-symbols
   *   id: The id used by Sensor Visualizer to uniquely identify traces so they can be
   *      removed, altered, or referenced in some way. This field is not used by plotly.
   *   visible: If true, the trace will show up. If false, the trace is invisible.
   *   name: The trace name displayed to the user.
   */
  plot_data = [
    {
      x: [0],
      y: [0],
      type: 'scattergl',
      mode: 'markers',
      id: -1,
      marker: {symbol: 'diamond'},
      visible: true,
      name: 'Placeholder Point',
    },
  ];

  /**
   * Plot configurations.
   * hovermode must be set to closest. closest mode selects only
   * a single trace on hover, rather than every trace in
   * a dataset which would happen if hovermode was not specified.
   * Selecting a single trace allows for the options menu to know
   * which trace is being changed. Further documentation at:
   * https://plotly.com/javascript/reference/layout/
   */
  plot_layout = {
    title: 'Add a new dataset by clicking the upload button.',
    // Disable toggling of trace visibility through plotly legend clicks.
    legend: {itemclick: false, itemdoubleclick: false},
    hovermode: 'closest',
    autosize: true,
  };

  /**
   * Configuration options, further documentation at:
   * https://plotly.com/javascript/configuration-options/
   */
  plot_config = {scrollZoom: true, displayModeBar: true};

  // A map from id number to array index that will speed up toggle operations.
  idMap = new Map<number, number>();

  // Tracks if this plot contains only histograms.
  isAHistogram = false;

  selfRef;

  // Tracks if this plot needs to be resized.
  resize: boolean;

  constructor(private dialog: MatDialog) {}

  /**
   * Forces a plot resize by adding a single invisible point to the plot,
   * which triggers a plotly resize. This is a hacky way to make sure the
   * plot fills its entire container, without it the plot is too small.
   * The invisible point is removed whenever a trace is added so it doesn't
   * cause any problems.
   */
  forceResize() {
    if (this.resize) {
      this.addTrace([0], [0], -2, 'fake', false);
      this.resize = false;
      this.idMap.set(-2, this.plot_data.length - 1);
    }
  }

  /**
   * Save a reference to self when created. This self reference is used when
   * opening the plot options menu so that the menu can directly change the plot.
   * Also sets the resize boolean.
   * @param ref A reference to this component.
   * @param resize Tracks if this plot has been viewed yet.
   */
  public setSelfRef(ref, resize: boolean) {
    this.selfRef = ref;
    this.resize = resize;
  }

  /**
   * If placeholder data is still present in the plot, remove it.
   * Helper method for addTrace and addSamples methods.
   */
  checkDataAdded() {
    if (this.idMap.has(-2)) {
      this.plot_data.pop();
      this.idMap.delete(-2);
    }
    if (this.plot_data.length === 1 && this.plot_data[0].id === -1) {
      this.plot_data.pop();
      this.plot_layout.title = '';
      return true;
    }
    return false;
  }

  /**
   * Add a single trace to the plot and set its id in idMap.
   * @param x The x data to plot. Length must match length(y).
   * @param y The y data to plot. Length must match length(x).
   * @param id The unique ID for the trace being added.
   * @param name The name of the trace to display to the user.
   * @param show If true, shows the trace, else hides the trace from view until toggled.
   */
  public addTrace(
    x: Array<number>,
    y: Array<number>,
    id: number,
    name: string,
    show: boolean
  ) {
    if (x.length !== y.length) {
      console.log('x: ', x, ' y: ', y);
      throw error(
        /*eslint-disable*/
        'x and y arrays must match in length' +
          ' len x: ' + x.length + ' len y: ' + y.length);
    }
    /*eslint-enable*/
    this.checkDataAdded();
    this.plot_data.push({
      x: x,
      y: y,
      type: 'scattergl',
      mode: 'markers',
      marker: {symbol: 'circle'},
      id: id,
      visible: show,
      name: name,
    });
    this.plot_data[this.plot_data.length - 1].marker['size'] = 10;
    this.idMap.set(id, this.plot_data.length - 1);
  }

  /**
   * Adds a set of samples to this plot.
   * @param samples
   */
  public addSamples(samples) {
    this.checkDataAdded();
    for (const i in samples) {
      // Plot each individual data channel.
      for (const j in samples[i].data) {
        this.addTrace(
          samples[i].timestamps,
          samples[i].data[j]['arr'],
          samples[i].data[j]['id'],
          j + ' ' + samples[i].sensor_name,
          true
        );
      }
      // Plot the timestamp difference, but don't show it until the user
      // toggles it in the dataset menu.
      this.addTrace(
        samples[i].timestamps,
        samples[i].timestamp_diffs['arr'],
        samples[i].timestamp_diffs['id'],
        'TS Diff ' + samples[i].sensor_name,
        false
      );
      // Plot the latencies if they are present in the sample.
      // Don't show until user toggles them on.
      if ('latencies' in samples[i]) {
        this.addTrace(
          samples[i].timestamps,
          samples[i].latencies['arr'],
          samples[i].latencies['id'],
          'Latencies ' + samples[i].sensor_name,
          false
        );
      }
    }
  }

  /**
   * Called by dataset.component button to toggle a trace.
   * @param id The id of the trace to toggle on/off.
   */
  toggleTrace(id: number) {
    const index = this.idMap.get(id);
    this.plot_data[index].visible = !this.plot_data[index].visible;
  }

  /**
   * Opens the style options menu dialog.
   * @param event The result of the click event generated by plotly.
   *   event.points gives the information for the clicked point.
   */
  async styleOptions(event) {
    if (event.points[0].data.id < 0) {
      return;
    }
    await this.showOptionsMenu(event.points[0].data.id);
  }

  /**
   * Opens the StyleDialogComponent menu.
   * @param traceID Which trace was clicked by the user.
   */
  showOptionsMenu(traceID: number) {
    return new Promise(() => {
      const optionsRef = this.dialog.open(StyleDialogComponent);
      optionsRef.componentInstance.init(
        traceID,
        this.selfRef,
        this.isAHistogram
      );
    });
  }

  /**
   * Removes an entire dataset from plot_data.
   * @param ids The individual traces to delete.
   */
  async deleteDataset(ids: Set<number>) {
    let i = 0;
    while (i < this.plot_data.length) {
      if (ids.has(this.plot_data[i].id)) {
        // At index i, remove 1 element.
        this.plot_data.splice(i, 1);
      } else {
        i++;
      }
    }
    // Reorder idMap with remaining traces and delete removed traces.
    for (const i in this.plot_data) {
      this.idMap.set(this.plot_data[i].id, Number(i));
    }
    ids.forEach(id => this.idMap.delete(id));
  }

  /**
   * Assigns new timestamps to an entire dataset.
   * @param traces The IDs for which to assign new timestamps.
   * @param timestamps The new timestamps to assign. These are either
   *   newly normalized or the original timestamps.
   */
  normalizeX(traces: Array<number>, timestamps: Array<number>) {
    traces.forEach(traceID => {
      if (this.idMap.has(traceID)) {
        this.plot_data[this.idMap.get(traceID)].x = timestamps;
      }
    });
  }

  /**
   * Apply the de/normalization performed in dataset.component.ts
   * @param traceID The ID of the trace to change.
   * @param data The new data to use.
   */
  normalizeY(traceID: number, data: Array<number>) {
    this.plot_data[this.idMap.get(traceID)].y = data;
  }

  /**
   * Changes a scattergl plot to either lines or markers mode.
   * @param traceID The ID of the trace to change.
   * @param mode Either 'lines' or 'markers'
   */
  toggleLineStyle(traceID: number, mode: string) {
    const trace = this.plot_data[this.idMap.get(traceID)];
    if (trace.type === 'histogram') {
      trace.type = 'scattergl';
    } else {
      trace.mode = mode;
    }
  }

  /**
   * Changes the shapes of the markers contained in the plot.
   * @param traceID The ID of the trace to change.
   * @param mode The style to apply. One of: 'none', 'circle', 'star', 'diamond'
   *   If mode === 'none' the trace type is switched to mode 'lines'.
   */
  toggleMarkerStyle(traceID: number, mode: string) {
    const trace = this.plot_data[this.idMap.get(traceID)];
    if (mode === 'none') {
      trace.mode = 'lines';
    } else {
      if (trace.mode === 'lines') {
        trace.mode = 'lines+markers';
      }
      trace.marker.symbol = mode;
    }
  }

  /**
   * Changes the color of a trace.
   * @param traceID The trace to change.
   * @param r red channel
   * @param g green channel
   * @param b blue channel
   */
  changeColor(traceID: number, r: number, g: number, b: number) {
    this.plot_data[this.idMap.get(traceID)].marker['color'] =
      'rgb(' + r + ', ' + g + ', ' + b + ')';
  }

  /**
   * Creates a histogram plot with the given sorted data. The x and y axes are the same as
   * plotly automatically converts the Y-axis to the number of values in each axis.
   * @param sorted Contains the data array, id and name of the histogram to create.
   *   Passed by dataset.component.ts toggleHistogram()
   */
  createHistogram(sorted) {
    this.isAHistogram = true;
    this.checkDataAdded();
    const index = this.plot_data.push({
      x: sorted['arr'],
      y: sorted['arr'],
      type: 'histogram',
      mode: '',
      id: sorted['id'],
      marker: {
        symbol: '',
      },
      visible: true,
      name: sorted['name'],
    });
    this.plot_data[index - 1].marker['line'] = {
      color: 'rgb(0, 0, 0)',
      width: 0.7,
    };
    this.plot_layout['xaxis'] = {title: 'Range of values'};
    this.plot_layout['yaxis'] = {title: 'Count'};
    this.idMap.set(sorted['id'], index - 1);
  }
}
\section{Introduction} \label{Intro} \IEEEPARstart{5}{G} and beyond heterogeneous networks (HetNets) will utilize both sub-6 GHz and millimeter wave (mmWave) frequency bands. These networks will be highly dense and composed of different types of base stations (BSs) with different sizes, transmit powers, and capabilities. In these dense multi-tier networks, finding an optimal user association is a challenging problem: determining the best possible connections between BSs and user equipment (UEs) to achieve optimal network performance while satisfying the BSs' load constraints. The traditional association method of connecting to the BS with the highest SINR may overload certain BSs and no longer works well in a HetNet with different BS transmit powers and under the highly directional and variable mmWave channel conditions. Furthermore, the user association process needs to be fast and efficient to meet the low-latency requirements of beyond 5G (B5G) networks. In a user association problem, the presence or absence of a connection between a UE and a BS is usually indicated by an integer variable with value one or zero, making it an integer optimization problem. In this paper, we focus on unique user association, in which each UE can only be associated with one BS at a time. In B5G HetNets, UEs will be able to work in dual connectivity mode as they are equipped with a multi-mode modem supporting both sub-6 GHz and mmWave bands, allowing the possibility of association to either a macro cell BS (MCBS) or a small cell BS (SCBS). \subsection{Background and Related Works} User association for load balancing in LTE HetNets is studied in \cite{Andrews}, where a user association algorithm is introduced by relaxing the unique association constraint and then using a rounding method to obtain unique association coefficients. In \cite{Caire}, the researchers studied optimal user association in massive MIMO networks and solved the user association problem using Lagrangian duality.
User association in a 60-GHz wireless network is studied in \cite{60GHz}, where the researchers assumed that interference is negligible due to directional steerable antenna arrays and highly attenuated signals with distance at 60-GHz. This assumption, however, becomes inaccurate at other mmWave frequencies considered for cellular use as it is shown that the mmWave systems can transit from noise-limited to interference-limited regimes \cite{Niknam,NoiseOrInterf}. The unique user association problem usually results in a complex integer nonlinear programming which is NP-hard in general. Heuristic algorithms have been designed to solve this problem and achieve a near-optimal solution \cite{zalghout2015greedy,TWC}. These algorithms often require centralized implementation which usually suffers from high computational complexity and relies on a central coordinator to collect the channel state information (CSI) between all BSs and UEs. As a result, practical implementation of these algorithms in mmWave-enabled networks is potentially inefficient given the erratic nature of mmWave channels. The authors in \cite{Dist_UA_60} proposed two distributed user association methods for a 60-GHz mmWave network. They only considered a simple mmWave channel model based on large-scale CSI and antenna gains, and assumed that the interference is negligible because of highly directional transmissions. Matching theory has been proposed as a promising low-complexity mathematical framework with interesting applications from labor market to wireless networks. In a pioneering work \cite{gale1962college}, Gale and Shapley introduced a matching game, called deferred acceptance (DA) game, to solve the problem of one-to-one and many-to-one matching. The DA matching game has recently found applications for user association in wireless networks. For example, the DA game is employed for resource management in wireless networks using algorithmic implementations in \cite{gu2015matching}. 
A two-tier matching game is proposed for user association based on task offloading to BSs jointly with collaboration among BSs for task transferring to underloaded BSs \cite{UA_MT_MEC}. Matching algorithms based on the DA game have also been used for user association in the downlink of small cell networks in \cite{semiari2014matching} and for the uplink in \cite{saad2014college}. \subsection{Our Contributions} We propose matching-theory based user association schemes designed by considering system aspects specifically relevant to B5G cellular networks. We consider a mmWave clustered channel model for generating the mmWave links. In the system model, the UEs have dual connectivity and are equipped with both sub-6 GHz and mmWave antenna arrays. The effect of directional transmissions in B5G systems is directly integrated via beamforming transmission and reception in both frequency bands. Moreover, we take into account the dependency between user association and interference which is crucial for beamforming transmission, while previous works either ignore interference or assume it to be independent of user association. Such an assumption is applicable in LTE cellular networks because of omni-directional transmissions, but is no longer suitable for B5G mmWave-enabled networks because of directional transmissions by beamforming at both BSs and UEs. Existing matching theory user association works are based exclusively on the DA game. One important issue with the DA game is excessive association delay for distributed implementation, as the associations of all UEs are postponed to the end of the game. This is due to the presence of a waiting list at each BS, described in more details in Sec. \ref{DA_subsec}. In \cite{GLC19}, we proposed a new matching game with lower delay and provided some preliminary results. In this paper, we propose a set of low-delay distributed matching games tailored for user associations in B5G cellular networks. 
The main contributions of this paper are: \begin{itemize}[leftmargin=*] \item We introduce a distributed user association framework, in which UEs and BSs exchange application and response messages to make decisions, and define relevant metrics to assess its performance. We show that stability of a matching game does not directly lead to optimality in other objectives such as the network throughput. Instead of stability which is also less relevant in a dynamic network, we consider performance metrics for distributed user association in terms of association delay, user's power consumption in the association process, percentage of unassociated users, and network utility including throughput or spectral efficiency. \item We propose three novel and purely distributed matching games, called early acceptance (EA), and compare them with the well-known DA matching game. Unlike DA which postpones association decisions to the last iteration of the game, our proposed EA games allow the BSs to make their decisions in accepting or rejecting UEs immediately. This approach results in significantly faster association process and at the same time a slightly higher network throughput, and hence presents a better choice for association in fast-varying mmWave systems. \item Our proposed EA games follow a set of similar rules, but are different in terms of updating the UE/BS preference lists and reapplying to BSs. Numerical results show that the basic EA game without updating or reapplying is the fastest, whereas the EA game with preference list updating and reapplication leads to the highest percentage of associated UEs. Thus, there is a tradeoff between the game simplicity and the number of unassociated UEs. \item We also propose a multi-game user association algorithm to further improve the network throughput by performing multiple rounds of a matching game. 
The proposed algorithm requires minimal centralized coordination and, as the number of rounds of the game increases, closely approaches the performance of the centralized worst connection swapping (WCS) algorithm introduced in \cite{TWC}. \item Our simulations show that the proposed EA games have comparable performance with the DA game in terms of power consumption and signaling overhead, while resulting in a slightly higher network utility and a superior performance in terms of association delay. Considering the fact that the number of UEs to be associated is usually different from the total quota of BSs, we show that our proposed EA games are effective in all loading scenarios (underload, critical load, and overload). \end{itemize} \subsection{Notation} In this paper, scalars and sets are denoted by italic letters (e.g. $x$ or $X$) and calligraphic letters (e.g. $\mathcal{X}$), respectively. Vectors are represented by lowercase boldface letters (e.g. $\mathbf{x}$), and matrices by uppercase boldface letters (e.g. $\mathbf{X}$). Superscripts $(.)^T$ and $(.)^*$ represent the transpose operator and the conjugate transpose operator, respectively. $\log(.)$ stands for base-2 logarithm, and big-O notation $\mathcal{O}(.)$ expresses complexity. $|\mathcal{X}|$ denotes the cardinality of set $\mathcal{X}$. $\boldsymbol{I}_N$ is the $N\times N$ identity matrix, and $|\mathbf{X}|$ denotes the determinant of matrix $\mathbf{X}$. \section{System and User Association Models}\label{Sys_model} We study the problem of user association in the downlink of a two-tier HetNet with $B$ macro cell BSs (MCBSs) operating at microwave band, $S$ small cell BSs (SCBSs) working at mmWave band, and $K$ UEs. Let $\mathcal{B}$, $\mathcal{S}$, and $\mathcal{J}=\{1, ..., J\}$ denote the respective sets of MCBSs, SCBSs, and all BSs with $J=B+S$, and let $\mathcal{K}=\{1, ..., K\}$ denote the set of UEs.
Each BS $j$ has $M_j$ antennas, and each UE $k$ is equipped with two antenna modules: 1) a single antenna for LTE connections at microwave band, and 2) an antenna array with $N_k$ elements for 5G connections at mmWave band. Each UE $k$ aims to receive $n_k$ data streams from its serving BS such that $1\leq n_k\leq N_k$, where the upper inequality indicates that the number of data streams for each UE cannot exceed the number of its antennas. We also assume that each UE supports dual connectivity so that association to either an MCBS or an SCBS is possible. \subsection{Microwave and mmWave Channel Models}\label{Ch_Models} In this subsection, we introduce the microwave and mmWave channel models. In the microwave band, the transmissions are omnidirectional and we use the well-known Gaussian channel model \cite{Telatar}. We denote by $\mathbf{h}^{\mu\text{W}}$ the channel vector between an MCBS and a UE, whose entries are i.i.d. complex Gaussian random variables with zero mean and unit variance, i.e., $h^{\mu\text{W}} \sim \mathcal{CN}(0,1)$. In the mmWave band, the transmissions are highly directional and the Gaussian MIMO channel model no longer applies. We instead employ the clustered mmWave channel model with $C$ clusters and $L$ rays per cluster, defined as \cite{3GPP901}, \cite{Nokia} \begin{align}\label{clustered_ch} \mathbf{H}^{\text{mmW}}=\frac{1}{\sqrt{CL}}\sum_{c=1}^{C}\sum_{l=1}^{L} \sqrt{\gamma_c}~\mathbf{a}(\phi_{c,l}^{\textrm{UE}},\theta_{c,l}^{\textrm{UE}}) ~\mathbf{a}^*(\phi_{c,l}^{\textrm{BS}},\theta_{c,l}^{\textrm{BS}}) \end{align} where $\gamma_c$ is the power gain of the $c$th cluster. The parameters $\phi^{\textrm{UE}}$, $\theta^\textrm{UE}$, $\phi^\textrm{BS}$, $\theta^\textrm{BS}$ represent the azimuth angle of arrival, elevation angle of arrival, azimuth angle of departure, and elevation angle of departure, respectively.
The vector $\mathbf{a}(\phi,\theta)$ is the response vector of a uniform planar array (UPA), which allows 3D beamforming in both the azimuth and elevation directions. We consider the LoS and NLoS probabilities as given in \cite{RapLetter}, and utilize the path loss model for LoS and NLoS links as given in \cite{3GPP901}. The numerical results provided in Sec. \ref{Sim_res} are based on these channel models and related parameters. \subsection{Signal Model} For tier 1, operating at the sub-6 GHz band, the effective interfering channel on UE $k$ from MCBS $j\in\mathcal{B}$ serving UE $l$ is defined as \begin{equation} h_{k,l,j} = \mathbf{h}^{\mu\text{W}}_{k,j}\mathbf{f}_{l,j} \end{equation} where $\mathbf{f}_{l,j}\in\mathbb{C}^{M_j\times 1}$ is the linear precoder (transmit beamforming vector) at MCBS $j$ intended for UE $l$. If $l=k$, this defines the effective channel between MCBS $j$ and UE $k$ as $h_{k,j} = \mathbf{h}^{\mu\text{W}}_{k,j}\mathbf{f}_{k,j}$. Similarly, for tier 2, operating at the mmWave band, the effective interfering channel on UE $k$ from SCBS $j\in\mathcal{S}$ serving UE $l$ is defined as \begin{equation} \mathbf{H}_{k,l,j} = \mathbf{W}^*_k\mathbf{H}^\text{mmW}_{k,j}\mathbf{F}_{l,j} \label{H_mmW} \end{equation} where $\mathbf{F}_{l,j}\in\mathbb{C}^{M_j\times n_l}$ is the linear precoder at SCBS $j$ intended for UE $l$, and $\mathbf{W}_k\in\mathbb{C}^{N_k \times n_k}$ is the linear combiner (receive beamforming matrix) of UE $k$. If $l=k$, (\ref{H_mmW}) becomes the effective channel between SCBS $j\in\mathcal{S}$ and UE $k$, which includes the beamforming vectors/matrices at both the BS and the UE, and can be expressed as $\mathbf{H}_{k,j} = \mathbf{W}^*_k\mathbf{H}^\text{mmW}_{k,j}\mathbf{F}_{k,j}$.
Thus, the received signal at UE $k$ connected to MCBS $j\in\mathcal{B}$ can be written as \begin{equation}\label{y_k_muW} y_k^{\mu\text{W}} = \sum_{j\in \mathcal{B}}h_{k,j}s_{k,j} + z_k \end{equation} where $s_{k,j}\in \mathbb{C}$ is the data symbol intended for UE $k$ with $\mathbb{E}\lbrack s^*_{k,j}s_{k,j}\rbrack =P_{k,j}$, and $z_k\in\mathbb{C}$ is the complex additive white Gaussian noise at UE $k$ with $z_k\sim\mathcal{CN}(0,N_0)$, where $N_0$ is the noise power. We consider an equal power allocation scheme that splits each BS $j$'s transmit power ($P_j$) equally among its associated users, i.e., $P_{k,j}=P_j/|\mathcal{K}_j|$. Similarly, the received signal at UE $k$ connected to SCBS $j\in\mathcal{S}$ is given by \begin{equation}\label{y_k_mmW} \mathbf{y}_k^\text{mmW} = \sum_{j\in \mathcal{S}}\mathbf{H}_{k,j}\mathbf{s}_{k,j} + \mathbf{W}^*_k\mathbf{z}_k \end{equation} where $\mathbf{s}_{k,j}\in \mathbb{C}^{n_k}$ is the data stream vector for UE $k$ consisting of mutually uncorrelated zero-mean symbols with $\mathbb{E}\lbrack \mathbf{s}^*_{k,j}\mathbf{s}_{k,j}\rbrack =P_{k,j}$, and $\mathbf{z}_k\in\mathbb{C}^{N_k}$ is the complex additive white Gaussian noise vector at UE $k$. The presented signal model and the developed algorithms and insights for user association in this paper are applicable to all types of channel models, transmit beamforming, and receive combining. \subsection{User Association Model} We follow the mmWave-specific user association model introduced in \cite{TWC}. Because of directional beamforming in mmWave systems, the interference structure depends on the user association. Taking this dependency into account is important for mmWave systems, where the channels are probabilistic and fast time-varying, and the interference depends on the highly directional connections between BSs and UEs.
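As an illustration, one realization of the clustered channel in (\ref{clustered_ch}) and an effective mmWave channel $\mathbf{W}^*_k\mathbf{H}^\text{mmW}_{k,j}\mathbf{F}_{k,j}$ can be generated with the Python sketch below. The cluster-gain and angle distributions, the UPA geometry, and the random unit-norm beams are illustrative assumptions, not the parameterization of \cite{3GPP901}:

```python
import numpy as np

def upa_response(phi, theta, n_h, n_v, d=0.5):
    """Response vector of an n_h x n_v uniform planar array with element
    spacing d (in wavelengths), supporting 3D (azimuth/elevation) beams."""
    a_h = np.exp(1j * 2 * np.pi * d * np.arange(n_h) * np.sin(phi) * np.sin(theta))
    a_v = np.exp(1j * 2 * np.pi * d * np.arange(n_v) * np.cos(theta))
    return np.kron(a_h, a_v) / np.sqrt(n_h * n_v)

def clustered_channel(n_ue, n_bs, C=4, L=10, rng=None):
    """One realization of the clustered mmWave channel: C clusters of L
    rays, normalized by 1/sqrt(CL). Exponential cluster gains and
    uniform angles are placeholder distributions."""
    rng = np.random.default_rng(rng)
    H = np.zeros((n_ue, n_bs), dtype=complex)
    for _ in range(C):
        gamma_c = rng.exponential()                  # cluster power gain (assumed)
        for _ in range(L):
            phi_ue, phi_bs = rng.uniform(-np.pi, np.pi, 2)
            th_ue, th_bs = rng.uniform(0, np.pi, 2)
            a_ue = upa_response(phi_ue, th_ue, 2, n_ue // 2)
            a_bs = upa_response(phi_bs, th_bs, 2, n_bs // 2)
            H += np.sqrt(gamma_c) * np.outer(a_ue, a_bs.conj())
    return H / np.sqrt(C * L)

# Effective channel W_k^* H F_k with random unit-norm beams (placeholders).
rng = np.random.default_rng(0)
H = clustered_channel(4, 8, rng=rng)                  # N_k = 4 UE antennas, M_j = 8
F = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))
F /= np.linalg.norm(F, axis=0)                        # precoder at the SCBS
W = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
W /= np.linalg.norm(W, axis=0)                        # combiner at the UE
H_eff = W.conj().T @ H @ F                            # n_k x n_k effective channel
```

A full implementation would replace the placeholder distributions with those specified in \cite{3GPP901}.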
We perform user association once per time duration that we call an \textit{association block}; an association block can span a single time slot or multiple time slots depending on the availability of CSI, and its length is a design choice (Fig. \ref{Ass_block}). We assume a model where, in each association block, the user association process occurs in the association time interval, which establishes the UE-BS connections for the transmission time interval. The association time interval is further divided into sub-slots for distributed implementation, where during each sub-slot UEs can apply to a BS for association. In a fully distributed algorithm, UE-BS associations can be determined at the end of each sub-slot within the association time interval. This is fundamentally different from a centralized implementation where all user associations are determined at the end of the association time interval. In a distributed implementation, the duration of the association time interval can vary for each UE's association block, depending on the delay in the association process for that UE. In the next association block, the user association process needs to be performed again to update the associations according to users' mobility and channel variations. \begin{figure} \centering \includegraphics[scale=.45]{Association_Block2.pdf} \caption{Structure of an association block: User association is established in a distributed fashion during the association time interval and is then applied to the transmission time interval.} \label{Ass_block} \end{figure} An association block can represent a single time slot, when both instantaneous (small-scale) and large-scale CSI are available, or can be composed of several consecutive time slots when only large-scale CSI is available. Such a choice leads to a trade-off between user association overhead and the resulting network performance. We study a \textit{unique user association} problem in which each UE can be served by only one BS in each association block.
In this problem, UE-BS associations can be completely determined by an association vector $\mathbold{\beta}$ defined as \begin{equation}\label{Beta_eq} \mathbold{\beta}=[\beta_1, ..., \beta_K]^T \end{equation} and the \textit{unique association constraints} can be expressed as \begin{align}\label{TFA_cons_01} \sum_{j\in\mathcal{J}} \mathds{1}_{\beta_k=j}&\leq 1, ~\forall k\in \mathcal{K} \end{align} where $\beta_k$ represents the index of the BS to which UE $k$ is associated, and $\mathds{1}_{\beta_k=j}$ is the indicator function such that $\mathds{1}_{\beta_k=j}=1$ if $\beta_k=j$, and $\mathds{1}_{\beta_k=j}=0$ if $\beta_k\neq j$. All analysis and results in this paper are per association block, and thus we do not consider a time index in our formulations. In a HetNet composed of different types of BSs, each BS $j$ can have a different \textit{quota} $q_j$, i.e., the number of UEs it can serve simultaneously. We define $\mathbf{q}=[q_1,...,q_J]$ as the \textit{quota vector} of the BSs, and $\mathcal{K}_j$ as the \textit{activation set} of BS $j$, which represents the set of active UEs in BS $j$, such that $\mathcal{K}_j \subseteq \mathcal{K}$, $|\mathcal{K}_j|\leq q_j$. Thus, we can define \textit{load balancing constraints} for the BSs as \begin{align}\label{TFA_cons_02} \sum_{k\in\mathcal{K}}\mathds{1}_{\beta_k=j} &\leq q_j, ~\forall j\in \mathcal{J} \end{align} This set of constraints states that the number of UEs served by BS $j$ cannot exceed its quota $q_j$. The load balancing constraints allow our formulation to specify each BS's quota separately. This makes the resulting user association scheme applicable to HetNets where there are different types of BSs with different capabilities.
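Both constraint sets are easy to check programmatically. A minimal sketch, assuming $\mathbold{\beta}$ is encoded as an integer vector with $-1$ marking an unassociated UE (our own convention):

```python
import numpy as np

def feasible(beta, quota):
    """Check an association vector beta (beta[k] = index of the BS serving
    UE k, or -1 if UE k is unassociated) against the quota vector q.
    Storing a single BS index per UE enforces the unique association
    constraint by construction, so only load balancing must be verified."""
    beta = np.asarray(beta)
    load = np.bincount(beta[beta >= 0], minlength=len(quota))  # per-BS load
    return bool(np.all(load <= np.asarray(quota)))
```

For example, `feasible([0, 0, 1, -1], [2, 1])` holds, while a third UE on BS 0 would violate its quota of 2.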
When UE $k$ is connected to MCBS $j\in\mathcal{B}$, its \textit{instantaneous rate} (in bps/Hz) is \begin{equation}\label{R_kj_muW} R_{k,j}^{\mu\text{W}}(\mathbold{\beta}) = \log_2\left(1+ \frac{P_{k,j}h_{k,j} h_{k,j}^*}{v_{k,j}(\mathbold{\beta})}\right) \end{equation} where $P_{k,j}$ represents the transmit power from BS $j$ dedicated to UE $k$, and $v_{k,j}$ is the interference plus noise power given as \begin{align} v_{k,j}(\mathbold{\beta}) = \sum_{\substack{l\in \mathcal{K}_{j} \\ l\neq k}} P_{l,j}h_{k,l,j}h_{k,l,j}^*+\sum_{\substack{i\in \mathcal{B} \\ i\neq j}} \sum_{\substack{l\in \mathcal{K}_i}} P_{l,i}h_{k,l,i}h_{k,l,i}^*+ N_0 \nonumber \end{align} Similarly, the instantaneous rate of UE $k$ connected to SCBS $j\in\mathcal{S}$ is given by \begin{equation}\label{R_kj_mmW} R_{k,j}^\text{mmW}(\mathbold{\beta}) = \log_2\Big |\mathbf{I}_{n_k} + \mathbf{V}_{k,j}^{-1}(\mathbold{\beta})P_{k,j}\mathbf{H}_{k,j} \mathbf{H}_{k,j}^*\Big | \end{equation} where $\mathbf{V}_{k,j}$ is the interference and noise covariance matrix given as \begin{align} \mathbf{V}_{k,j}(\mathbold{\beta})&=\sum_{\substack{l\in \mathcal{K}_{j} \\ l\neq k}} P_{l,j}\mathbf{H}_{k,l,j}\mathbf{H}_{k,l,j}^*\nonumber \\&+ \sum_{\substack{i\in \mathcal{S} \\ i\neq j}} \sum_{\substack{l\in \mathcal{K}_i}} P_{l,i}\mathbf{H}_{k,l,i}\mathbf{H}_{k,l,i}^* + N_0 \mathbf{W}_k^*\mathbf{W}_k \nonumber \end{align} Note that the summations in $v_{k,j}$ and $\mathbf{V}_{k,j}$ are taken over the activation sets of the BSs, indicating the dependency between interference and user association. \subsection{Centralized vs. Distributed User Associations}\label{centralized_vs_distributed} User association is usually studied in the literature in the form of centralized algorithms. Centralized algorithms can reach a near-optimal solution; however, they require a central coordinator, for example located in a cloud radio access network (C-RAN), to collect all required CSI and run the user association algorithm.
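In such a centralized implementation, the coordinator must re-evaluate rates of the form (\ref{R_kj_mmW}) for every candidate association vector. The sketch below shows this computation for the mmWave tier, under our own assumed data layout in which \texttt{H\_eff[(k, l, j)]} stores the effective channel of (\ref{H_mmW}):

```python
import numpy as np

def mmwave_rate(k, beta, H_eff, P, W, N0):
    """Rate (bps/Hz) of UE k under association vector beta, following the
    log-det expression R = log2|I + V^{-1} P_kj H_kj H_kj^*|. P[(l, j)]
    is the power BS j allocates to UE l, and W[k] the combiner of UE k;
    V sums intra- and inter-cell interference over the activation sets
    plus the noise term N0 W_k^* W_k."""
    j = beta[k]
    Hk = H_eff[(k, k, j)]
    V = N0 * (W[k].conj().T @ W[k])               # noise covariance term
    for l, i in enumerate(beta):                  # UE l is served by BS i
        if i < 0 or (l == k and i == j):
            continue                              # skip unassociated / desired link
        Hi = H_eff[(k, l, i)]
        V = V + P[(l, i)] * (Hi @ Hi.conj().T)    # interference covariance terms
    M = np.eye(Hk.shape[0]) + np.linalg.solve(V, P[(k, j)] * (Hk @ Hk.conj().T))
    return float(np.log2(np.abs(np.linalg.det(M))))
```

Because the interference covariance is rebuilt from the full association vector, changing any UE's serving BS changes the rates of other UEs, which is exactly the coupling discussed above.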
In this centralized structure, BSs transmit CSI reference signals via physical downlink control channels (PDCCHs) to enable UEs to estimate the CSI. The CSI is sent back to the BSs and then forwarded to the C-RAN where the central coordinator is located. Since at each iteration of a centralized algorithm, such as the WCS algorithm in \cite{TWC}, user instantaneous rates are updated based on the current association vector, raw CSI, and not just the SINR, must be available to compute these user rates. After collecting all required CSI from the network, the central coordinator runs the user association algorithm to find the best possible connections. The signaling overhead in this ideal centralized structure is usually high due to network densification in B5G HetNets, which requires a significant amount of CSI to be reported, leading to high computational cost and time complexity as the network size increases. Distributed user association algorithms have been introduced as low-complexity approaches with a reasonably low convergence time. These algorithms only involve low bit-rate signaling exchanges between UEs and BSs such that association decisions happen in a distributed fashion without the need for a central coordinator. We assume the UEs exchange messages with the BSs directly, and the association decisions for different UEs can occur asynchronously at different times. In this paper, we employ matching theory and propose a new matching game to solve the user association problem in a distributed fashion. We also introduce a multi-game matching algorithm to further improve the network performance. \section{Matching Theory for Distributed User Association} \label{MT_for_DUA} Matching theory has attracted the attention of researchers due to its low complexity and fast convergence time \cite{gu2015matching}. These promising features make matching theory a suitable framework for distributed user association in fast-varying mmWave systems.
User association can be posed as a matching game with two sets of players -- the BSs and the UEs. In this game, each player collects the required information to build a preference list based on its own objective function using local measurements. Each UE then applies to the BSs based on its preference list, and association decisions are made by the BSs individually. Thus, no central coordinator is required and user association can be performed in a fully distributed manner. This feature makes matching theory an efficient approach for designing distributed user association in B5G HetNets. \subsection{User Association Matching Game Concepts} In the context of matching theory, the user association problem can be considered a college admission game where the BSs with their specific quotas represent the colleges and the UEs are the students. This framework is suitable for user association in a HetNet where BSs may have different quotas and capabilities. In order to formulate our user association as a matching game, we first introduce some basic concepts from two-sided matching theory \cite{roth1992two}. \begin{definition1} A \textit{preference relation} $\succeq_k$ allows UE $k$ to specify the preferred BS between any two BSs $i,j \in \mathcal{J},~i\neq j$ such that \begin{equation}\label{prf_rlt_K} j \succeq_k i \Leftrightarrow \Psi_{k,j}^{\text{UE}} \geq \Psi_{k,i}^{\text{UE}} \Leftrightarrow \text{UE}~k~\text{prefers BS}~j~\text{to BS}~i \end{equation} where $\Psi^{\text{UE}}_{k,j}$ is the \textit{preference value} between UE $k$ and BS $j$, which can simply be a local measurement at the UE (e.g. SINR).
Similarly, for any two UEs $k,l \in \mathcal{K},~k\neq l$, each BS builds a preference relation $\succeq_j$ such that \begin{equation}\label{prf_rlt_J} k \succeq_j l \Leftrightarrow \Psi_{k,j}^{\text{BS}} \geq \Psi_{l,j}^{\text{BS}} \Leftrightarrow \text{BS}~j~\text{prefers UE}~k~\text{to UE}~l \end{equation} \end{definition1} \begin{definition2} Based on the preference relations, each UE $k$ (BS $j$) builds its own \textit{preference list} $\mathcal{P}_k^\text{UE}$ ($\mathcal{P}_j^\text{BS}$) over the set of all BSs (UEs) in descending order of preference. \end{definition2} \begin{definition3} Association vector $\mathbold{\beta}$ defines a \textit{matching relation}\footnote{In this paper we use the terms ``association vector'' and ``matching relation'' interchangeably.}, which specifies the association between UEs and BSs and has the following properties \begin{enumerate} \item $\beta_k \in \mathcal{J}$ with $k\in\mathcal{K}$; \item $\beta_k=j$ if and only if $k\in \mathcal{K}_j$. \end{enumerate} The second property states that the association vector $\mathbold{\beta}$ is a bilateral matching. \end{definition3} \begin{definition4} A user association \textit{matching game} $\mathcal{G}$ is a game with two sets of players (BSs and UEs) and a set of rules which apply to the input data to obtain a result. The input data of the game are: \begin{itemize} \item Preference lists of BSs: $\mathcal{P}_j^\text{BS}, \forall j\in\mathcal{J}$ \item Preference lists of UEs: $\mathcal{P}_k^\text{UE}, \forall k\in\mathcal{K}$ \item BSs' quotas: $q_j, \forall j\in\mathcal{J}$ \end{itemize} and the game outcome, or result, is the association vector $\mathbold{\beta}$. Each particular game is defined by the specific way of building the preference lists and by its set of rules. \end{definition4} \subsection{Building Preference Lists}\label{building_prf_lists} A preliminary step in a matching game is to build the preference lists of the players.
In this subsection, we describe the process of building the preference lists of the UEs and BSs. These preference lists can be built based on a number of metrics, including users' instantaneous rates, channel norms, or UEs' local measurements. \subsubsection{Users' instantaneous rates} Using users' instantaneous rates to build preference lists requires the knowledge of both instantaneous and large-scale CSI. For example, we can use the user instantaneous rate in (\ref{R_kj_muW})-(\ref{R_kj_mmW}) as the objective function for both sides of the game, i.e., \begin{equation}\label{R_as_PrfList} \Psi_{k,j}^{\text{UE}}=\Psi_{k,j}^{\text{BS}}=R_{k,j},~\forall k\in\mathcal{K},j\in\mathcal{J} \end{equation} \noindent In other words, each UE prefers the BS which provides the highest user instantaneous rate, and each BS is willing to serve the UE that can get the highest user instantaneous rate. This metric, however, requires frequent updates of the preference values, which depend on the associations of other users, and this can complicate the process. One approach is to fix the associations of all other UEs based on an initial association vector when computing the current user's instantaneous rates from all BSs. Alternatively, we can switch the association of the current UE with another UE connected to a BS while computing the instantaneous rate from that BS to the current UE, in order to maintain the BS quota. This switching can be either random or with the weakest UE connected to that BS as in \cite{TWC}. \subsubsection{Channel norms} The preference lists can also be built based on CSI alone. As stated earlier, we assume both instantaneous and large-scale CSI are available through beamforming CSI estimation techniques.
In this case, the preference values can be expressed as the Frobenius norm of a MIMO channel \begin{equation} \Psi_{k,j}^{\text{UE}}=\Psi_{k,j}^{\text{BS}}=||\mathbf{H}_{k,j}||_F,~\forall k\in\mathcal{K},j\in\mathcal{J} \end{equation} \noindent where $\mathbf{H}_{k,j}$ is the channel matrix between UE $k$ and BS $j$, and the operator $||.||_F$ represents the Frobenius norm. Using the channel norm, each UE (BS) ranks the BSs (UEs) and builds its own preference list such that the BS (UE) with the strongest channel (highest channel norm) is the most preferred one. The preference lists can also be built based on large-scale CSI alone, which stays valid for longer durations but may result in a lower overall network utility. \subsubsection{Local CQI measurements} A more practical approach to building the preference lists is based on UEs' local measurements. Using reference signal received power information, each UE measures the received SINR from each BS as the ratio of the desired signal power to the interference-plus-noise power. Then, the UE converts this SINR information to CQI values and reports them to the BSs via the PUCCH signaling mechanism \cite{3GPP_PDCCH}. In this approach, the preference values are given by \begin{equation} \Psi_{k,j}^{\text{UE}}=\Psi_{k,j}^{\text{BS}}=\text{CQI}_{k,j},~\forall k\in\mathcal{K},j\in\mathcal{J} \end{equation} where $\text{CQI}_{k,j}$ represents the channel quality between UE $k$ and BS $j$. Using CQI values, each UE (BS) ranks the BSs (UEs) such that the BS (UE) with the highest CQI is the most preferred one. The periodicity of the CQI report is a configurable parameter, and it can be as fast as every four time slots\footnote{The duration of a time slot in 5G NR depends on the transmission numerology, and is less than 1ms, which is the subframe duration \cite{3GPP_PHY}. Thus, CQI can be reported as fast as every 4ms.} \cite{3GPP_RRC}.
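Whichever metric is chosen, converting a $K\times J$ matrix of preference values into the two sides' preference lists is a simple sort; a minimal Python sketch:

```python
import numpy as np

def build_preference_lists(Psi):
    """Turn a K x J matrix of preference values (rates, channel norms,
    or CQIs -- the game rules are agnostic to the choice) into
    descending-order preference lists for both sides."""
    K, J = Psi.shape
    ue_prefs = [list(np.argsort(-Psi[k])) for k in range(K)]     # UE k ranks BSs
    bs_prefs = [list(np.argsort(-Psi[:, j])) for j in range(J)]  # BS j ranks UEs
    return ue_prefs, bs_prefs
```

In a distributed deployment each UE (BS) would of course sort only its own row (column) locally; the matrix form here is just for compactness.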
\begin{figure} \centering \includegraphics[scale=.625]{DA_process_bold_new.pdf} \caption{User association process in the DA game. (a) UE $k$ is associated with BS $j$, (b) UE $k$ is pushed out of waiting lists and eventually rejected by all BSs.} \label{DA_game_fig} \end{figure} \subsection{Deferred Acceptance \cite{gale1962college} User Association Matching Game}\label{DA_subsec} Matching theory dates back to the early 1960s, when mathematicians Gale and Shapley proposed the now-famous DA matching game, which can be posed as a college admission game and produces optimal and stable results \cite{gale1962college}. Applied to user association, the input data of this game are the preference lists of BSs and UEs as well as the quotas of BSs, and its result is a matching relation $\mathbold{\beta}$. While the DA game can be implemented in a centralized way, in this paper we focus on its distributed implementation, which does not require a central entity to collect the preference lists of all BSs and UEs and run the game \cite{gu2015matching}. In what follows, we describe the rules of this game in the context of user association.
\begin{algorithm}[t]\small \SetAlgoLined \KwData{$\mathcal{P}_j^\text{BS}$, $\mathcal{P}_k^\text{UE}$, $q_j$, $\forall k\in\mathcal{K},j\in \mathcal{J}$} \KwResult{Association vector $\mathbold{\beta}=[\beta_1, \beta_2, ..., \beta_K]$} \textbf{Initialization}: Set $m_k=1,~\forall k$, $n=1$, form a rejection set $\mathcal{R}=\{1, 2, ..., K\}$, initialize a set of unassociated UEs $\mathcal{U}=\varnothing$ and the waiting list of each BS $\mathcal{W}_j^0=\varnothing,~\forall j$\; \While{$\mathcal{R}\neq\varnothing$}{ Each UE $k\in\mathcal{R}$ applies to its $m_k$th preferred BS\; Each BS $j$ forms its current waiting list $\mathcal{W}_j^{n}$ from its new applicants and its previous waiting list $\mathcal{W}_j^{n-1}$\; Each BS $j$ keeps the $q_j$ most preferred UEs from $\mathcal{W}_j^{n}$ and rejects the rest\; Update $\mathcal{R}$ to the set of UEs rejected at this iteration\; \For{$k\in\mathcal{R}$}{ $m_k\leftarrow m_k+1$\; \If{$m_k>J$}{ Remove UE $k$ from $\mathcal{R}$ and add it to $\mathcal{U}$\; } } $n\leftarrow n+1$\; } Form $\mathbold{\beta}$ based on the final waiting lists of BSs $\mathcal{W}_j,~j=1,...,J$. \caption{DA User Association Game (based on \cite{gale1962college})} \label{DA_Game} \end{algorithm} We first define $m_k$ as the \textit{preference index} of UE $k$, $\mathcal{R}$ as the \textit{rejection set} of UEs, $\mathcal{U}$ as the \textit{set of unassociated UEs}, and $\mathcal{W}_j$ as the \textit{waiting list} of BS $j$. Before starting the game, we set the preference index of each UE to one ($m_k=1,~\forall k$), form a rejection set $\mathcal{R}$ including all $K$ UEs, and initialize the set of unassociated UEs ($\mathcal{U}=\varnothing$) and the waiting list of each BS ($\mathcal{W}^0_j=\varnothing,~\forall j$). At the $n$th iteration of the game, each UE $k\in\mathcal{R}$ applies to its $m_k$th preferred BS by sending an application message.
Each BS $j$ then ranks its new applicants together with the UEs in its current waiting list ($\mathcal{W}_j^{n-1}$) based on its preference list, keeps the first $q_j$ UEs in its new waiting list $\mathcal{W}_j^n$, and rejects the remaining UEs. Each BS accordingly sends a response of either rejection or waitlisting to its new applicants as well as to the UEs that were previously in its waiting list but are now rejected. Rejected UEs remain in the rejection set, update their preference index $m_k$, and apply to their next preferred BS in the next iteration. If $m_k>J$, UE $k$ has applied to all BSs and been rejected by all of them. Thus, we remove UE $k$ from the rejection set $\mathcal{R}$ and add it to the set of unassociated UEs $\mathcal{U}$. At each new iteration, each BS forms a new waiting list by selecting the top $q_j$ UEs among the new applicants and those on its current waiting list. The game terminates when the rejection set is empty. In the DA game, the associations are only determined when the game terminates. The BSs use their waiting lists to keep the most preferred UEs over all application rounds, and the final waiting lists after the last iteration determine the associations. This DA user association process is shown in Fig. \ref{DA_game_fig} and the algorithm for the DA user association game is described in Alg. \ref{DA_Game}. \subsection{User Association Matching Game Metrics}\label{MG_Metrics} \subsubsection{Game Stability vs. Other Objectives} \label{Stability_sec} In the context of matching theory, stability has been considered a key performance metric \cite{gale1962college}. Stability means there is no blocking pair, i.e., no UE-BS pair $(k,j)\notin\mathbold{\beta}$ in which both prefer each other over their associations under matching relation $\mathbold{\beta}$ \cite{gale1962college}. Stability in matching is an important criterion as discussed in the original paper by Gale and Shapley \cite{gale1962college}, who showed that DA is optimal in terms of stability.
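The DA procedure of Alg. \ref{DA_Game} can be condensed into the following Python sketch (our own encoding; both sides are assumed to have complete preference lists, and $-1$ marks an unassociated UE):

```python
def deferred_acceptance(ue_prefs, bs_prefs, quota):
    """Sketch of the DA user association game: rejected UEs walk down
    their preference lists, each BS waitlists its q_j best applicants,
    and associations are fixed only once no rejected UE remains."""
    K, J = len(ue_prefs), len(bs_prefs)
    rank = [{k: r for r, k in enumerate(p)} for p in bs_prefs]  # BS-side ranks
    m = [0] * K                               # preference index of each UE
    waiting = [[] for _ in range(J)]          # waiting list of each BS
    rejected, unassociated = set(range(K)), set()
    while rejected:
        applicants = {j: [] for j in range(J)}
        for k in sorted(rejected):
            if m[k] >= len(ue_prefs[k]):      # rejected by every BS on its list
                rejected.discard(k)
                unassociated.add(k)
            else:
                applicants[ue_prefs[k][m[k]]].append(k)
                m[k] += 1
        for j in range(J):
            pool = sorted(waiting[j] + applicants[j], key=rank[j].get)
            waiting[j], pushed_out = pool[:quota[j]], pool[quota[j]:]
            for k in applicants[j]:
                if k in waiting[j]:
                    rejected.discard(k)       # waitlisted: stops applying
            rejected.update(pushed_out)       # pushed out: reapplies next round
    beta = [-1] * K                           # final waiting lists -> associations
    for j, ues in enumerate(waiting):
        for k in ues:
            beta[k] = j
    return beta
```

Note that `beta` is only assembled after the loop ends, mirroring the deferred nature of the decisions.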
There is, however, no direct implication that a matching that is optimal in terms of stability is also optimal in terms of another performance objective, including the metric used to build the preference lists. We illustrate this lack of connection via an example. Table \ref{Prf_table} shows the preference lists of the players (in the same format as in \cite{gale1962college}): the first number in each cell is the preference of the Greek-letter player for the Roman-letter player, and the second number is that of the Roman-letter player for the Greek-letter player. Each preference list is built based on the metric values (for example, spectral efficiency in user association) in Table \ref{MetVal_table}. We can see that the stable assignment here is $(\alpha,A)$ and $(\beta,B)$, as shown in bold in Table \ref{Prf_table}. The other assignment set of $(\alpha,B)$ and $(\beta,A)$ is inherently unstable since $\alpha$ prefers $A$ to $B$, and $A$ also prefers $\alpha$ to $\beta$ (per the definition of stability in \cite{gale1962college}). The total metric value for the stable assignment, however, is 5, which is less than the value of 6 for the unstable assignment. Thus, optimality in terms of stability does not necessarily result in the highest (optimal) metric values. As such, when applied to a cellular system where a performance metric such as spectral efficiency is of primary interest, a stable association does not necessarily lead to the optimal spectral efficiency. This fact will also be confirmed via simulation results in Sec. \ref{Sim_res} (see Fig. \ref{SumRate}).
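This toy example is small enough to verify directly; the snippet below (with our own player naming) confirms that the stable assignment totals 5 while the unstable one totals 6:

```python
# Metric values from Table II: players alpha/beta vs. BSs A/B.
metric = {("alpha", "A"): 4, ("alpha", "B"): 3,
          ("beta", "A"): 3, ("beta", "B"): 1}

stable = [("alpha", "A"), ("beta", "B")]    # the unique stable matching
unstable = [("alpha", "B"), ("beta", "A")]  # blocked by the pair (alpha, A)

def total(match):
    """Sum of metric values over a matching."""
    return sum(metric[pair] for pair in match)

print(total(stable), total(unstable))  # stability does not maximize the metric
```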
\begin{table}[t] \setlength{\tabcolsep}{3pt} \begin{minipage}{.5\linewidth} \centering \caption{Preference lists of players} \label{Prf_table} \begin{tabular}{ccc} \toprule & $~~~~A~~~~$& $~~~~B~~~~$\\ \midrule $~~~\alpha~~$ & \textbf{(1,1)} & (2,1) \\ $~~~\beta~~$ & (1,2) & \textbf{(2,2)} \\ \bottomrule \end{tabular} \end{minipage}\hfill \begin{minipage}{.5\linewidth} \centering \caption{Metric values of players} \label{MetVal_table} \begin{tabular}{ccc} \toprule & $~~~~A~~~~$& $~~~~B~~~~$\\ \midrule $~~~\alpha~$ & 4 & 3 \\ $~~~\beta~$ & 3 & 1 \\ \bottomrule \end{tabular} \end{minipage}\hfill \end{table} Furthermore, stability becomes less relevant in a setting where users' preferences change over time, as is the case in a cellular system. Instead of stability, which does not apply in a dynamic system where user association can change at every association block, we define four relevant metrics for distributed user association matching games. \subsubsection{User Association Delay} This metric represents the amount of time it takes for a UE to get associated with a BS. Denote the time delay for one iteration as $t^\text{iter}$, which is a system parameter independent of the matching game. Thus, we can compare the association delay of different matching games by comparing their respective time delays until association. In the DA game, the association decision is postponed to the last iteration of the game due to the presence of waiting lists. Thus for DA, all UEs have the same association delay ($\tau_{k,\text{DA}}=\tau_\text{DA}$), which is proportional to the total number of iterations $N_\text{DA}^\text{iter}$ of the game as follows \begin{equation} \tau_\text{DA}\triangleq N^\text{iter}_\text{DA}t^\text{iter} \end{equation} For an EA game, as will be described in Sec.
\ref{EA_MG}, the $k^\text{th}$ UE's association delay is \begin{equation} \tau_{k,\text{EA}}\triangleq N^\text{appl}_kt^\text{iter} \end{equation} where $N_k^\text{appl}$ is the number of applications of UE $k$ until it is associated with a BS. Thus, the average association delay for the EA game can be obtained as \begin{equation} \tau_{\text{EA}} = \frac{1}{K}\sum_{k\in\mathcal{K}}\tau_{k,\text{EA}} \end{equation} We assume that all applicants apply simultaneously at each iteration of the game; thus the time duration of each iteration is the same for every applicant. \subsubsection{User Power Consumption for Association} In a matching game, each UE applies to its most preferred BSs until it is accepted by one of them. Each application requires the UE to send a signal to a BS, which is a power-consuming process. We assume that each application consumes a specific amount of power ($P^\text{appl}$), which is also a system parameter independent of the matching game. Thus, the power consumption of UE $k$ during the association process can be defined as \begin{equation} \gamma_k\triangleq N^\text{appl}_k P^\text{appl} \end{equation} Hence, the power consumption of different UEs during the association process can be compared via their numbers of applications: the fewer applications a UE sends, the less power it consumes. \subsubsection{Percentage of Unassociated Users} When the number of UEs exceeds the total quota of the BSs, or some BSs are out of range such that the preference lists of certain UEs are shorter than $J$, there may be unassociated UEs at the end of a matching game. We can evaluate the performance of matching games by comparing the percentage of unassociated UEs under these games. In Sec. \ref{Sim_res}, we consider the following scenarios in evaluating the performance of matching games.
\begin{itemize} \item \textit{Underload}: $K<\sum_{j\in\mathcal{J}} q_j$ \item \textit{Critical load}: $K=\sum_{j\in\mathcal{J}} q_j$ \item \textit{Overload}: $K>\sum_{j\in\mathcal{J}} q_j$ \end{itemize} \subsubsection{Network Utility Function} A network utility function can be used to compare the performance of centralized and distributed user association algorithms. In particular, we employ the sum-rate utility function to assess the performance of association schemes. Defining the instantaneous user throughput vector $\mathbf{r}(\mathbold{\beta})\triangleq (R_{1,\beta_1}, ..., R_{K,\beta_K})$, we can express the sum-rate utility function as \begin{equation}\label{sum_rate} U(\mathbf{r}(\mathbold{\beta}))\triangleq \sum_{k\in\mathcal{K}} R_{k,\beta_k} \end{equation} where $R_{k,\beta_k}$ is the instantaneous rate of UE $k$ associated with BS $\beta_k$ given in (\ref{R_kj_muW})-(\ref{R_kj_mmW}). \begin{figure} \centering \includegraphics[scale=0.625]{EA_process_bold_new.pdf} \caption{User association process in an EA game. (a) UE $k$ is associated with BS $j$, (b) UE $k$ is not associated as it is rejected at the $m_k$th (last) iteration of the game, with $m_k\geq J$.} \label{EA_game_fig} \end{figure} \section{Proposed Early Acceptance Matching Games}\label{EA_MG} The original DA game defers the matching decision until the last iteration of the game, and thus is suitable for processes which do not require making decisions in real time. When applied to user association, however, all UEs are kept in BSs' waiting lists until the game terminates. This can result in an excessive delay for the association process and is problematic for user association in fast-varying mmWave systems which require low-latency communications. In order to overcome this problem, we propose a set of new matching games, called \textit{early acceptance} (EA) games, to solve the user association problem in B5G HetNets.
In an EA game, BSs immediately decide about acceptance or rejection of UEs at each application round. This leads to a significantly faster and more efficient user association process. The DA game, however, has an advantage over the EA games in terms of stability, but this property is less relevant in the user association context since the preference lists of users change with time. Furthermore, stability need not lead to optimal throughput, as shown in Sec. \ref{Stability_sec}. Our simulations also confirm that the EA game achieves slightly higher network throughput at a much lower delay. \subsection{Proposed Early Acceptance User Association Matching Games} We introduce three distributed EA matching games which follow a set of similar rules, but differ in terms of updating the UE/BS preference lists and reapplying to BSs. Similar to the DA game, an EA game takes as input the preference lists of BSs and UEs and the quotas of the BSs, and delivers a matching relation $\mathbold{\beta}$. The initial steps of these games are similar to those of DA, which are to set the preference index of all UEs to one ($m_k=1,~\forall k$), form a rejection set $\mathcal{R}$ composed of all $K$ UEs, and initialize a set of unassociated UEs ($\mathcal{U}=\varnothing$). \subsubsection{EA-Base Game} This game defines the basic rules of the EA games. At each iteration of the game, each UE $k\in\mathcal{R}$ applies to its $m_k$th preferred BS (say BS $j$) regardless of its available quota. If UE $k$ is among the top $q_j$ UEs in the preference list of BS $j$, it will be immediately accepted by this BS, and the game updates the association vector with $\beta_k=j$. The accepted UEs are removed from the rejection set $\mathcal{R}$, and any rejected UEs apply to their next preferred BS in the next iteration. If a UE applies to all the BSs in its preference list and gets rejected by all of them, we add the UE to the set of unassociated UEs ($\mathcal{U}$).
This game is the simplest form of EA since the UEs and BSs do not update their preference lists during the game, and if a UE is rejected by a BS, it will not reapply to that BS. These features make the EA-Base game very fast, with only a small number of applications ($\leq J$) for each UE. However, at the end of this game some UEs may be unassociated regardless of the loading scenario. \subsubsection{EA-PLU (EA with Preference List Updating) Game} In order to improve the performance of EA-Base, we update the preference lists of UEs and BSs at the end of each iteration. When an association happens, the following updates occur: 1) associated UEs are removed from the rejection set and the preference lists of all BSs, and 2) for each new association with BS $j$, its quota is updated as $q_j\leftarrow q_j-1$. When a BS runs out of quota, it informs all the UEs by sending a broadcast message, and the UEs remove this BS from their preference lists. As a result, the UEs will no longer apply to that BS. Similar to EA-Base though, there is still no guarantee that all UEs are associated at the end of the EA-PLU game.
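As a concrete illustration, the EA-Base acceptance rule described above can be sketched in Python as follows. This is a minimal sketch, not a reference implementation from the paper; the function name and the data layout (preference lists as index lists, most preferred first) are our own assumptions. The returned application counts $N_k^\text{appl}$ directly give the per-UE delay and power metrics via $\tau_{k,\text{EA}}=N_k^\text{appl}t^\text{iter}$ and $\gamma_k=N_k^\text{appl}P^\text{appl}$.

```python
def ea_base(ue_prefs, bs_prefs, quotas):
    """Minimal sketch of the EA-Base game (no list updates, no reapplying).

    ue_prefs[k]: BS indices ranked by UE k, most preferred first.
    bs_prefs[j]: UE indices ranked by BS j, most preferred first.
    quotas[j]:   quota q_j of BS j.
    Returns (beta, unassociated, n_appl): the association vector,
    the set U of unassociated UEs, and each UE's application count.
    """
    K = len(ue_prefs)
    beta, n_appl, m = [None] * K, [0] * K, [0] * K
    rejected = set(range(K))          # rejection set R
    unassociated = set()              # set U
    while rejected:
        for k in sorted(rejected):
            if m[k] >= len(ue_prefs[k]):      # preference list exhausted
                rejected.discard(k)
                unassociated.add(k)
                continue
            j = ue_prefs[k][m[k]]             # apply to m_k-th choice
            m[k] += 1
            n_appl[k] += 1
            # Immediate decision: accept iff k is in the top q_j of P_j^BS.
            # With static lists, at most q_j UEs can ever qualify at BS j,
            # so the quota cannot be exceeded.
            if bs_prefs[j].index(k) < quotas[j]:
                beta[k] = j
                rejected.discard(k)
    return beta, unassociated, n_appl
```

Because the lists are never updated, a BS only ever accepts UEs ranked in the top $q_j$ of its original preference list, which is why some UEs can remain unassociated even in the underload case.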
\begin{algorithm}[t]\small \SetAlgoLined \KwData{$\mathcal{P}_j^\text{BS}$, $\mathcal{P}_k^\text{UE}$, $q_j$, $\forall k\in\mathcal{K},j\in \mathcal{J}$} \KwResult{Association vector $\mathbold{\beta}=[\beta_1, \beta_2, ..., \beta_K]$ } \textbf{Initialization}: Set $m_k=1,~\forall k$, form a rejection set $\mathcal{R}=\{1, 2, ..., K\}$, and initialize a set of unassociated UEs $\mathcal{U}=\varnothing$\; \While{$\mathcal{R}\neq\varnothing$}{ Each UE $k\in\mathcal{R}$ applies to its $m_k$th preferred BS (namely BS $j$ with $q_j\neq 0$)\; \eIf{$k \in \mathcal{P}_j^\text{BS}(1:q_j)$}{ $\beta_k=j$\; $q_j\leftarrow q_j-1$\; \If{$q_j=0$}{ Remove BS $j$ from $\mathcal{P}_k^\text{UE},~\forall k\in\mathcal{K}$\; }Remove UE $k$ from $\mathcal{R}$ and $\mathcal{P}_j^\text{BS},~\forall j\in\mathcal{J}$\;}{ \eIf{$\mathcal{P}_k^\text{UE}\neq\varnothing$}{ $m_k\leftarrow m_k+1$\; Keep UE $k$ in $\mathcal{R}$\;}{ Remove UE $k$ from $\mathcal{R}$ and add it to $\mathcal{U}$\; } } } \caption{Proposed EA-PLU-RA User Association Game} \label{EA_Game} \end{algorithm} \subsubsection{EA-PLU-RA (EA with Preference List Updating and ReApplying) Game}\label{EA-PLU-RA} In order to improve the percentage of associated users, we allow each UE to reapply to those BSs from which it has been rejected in previous iterations. Recall that if a UE is rejected by a BS, it will be kept in the rejection set. In the following iterations, each UE applies to the next preferred BS in its updated preference list. When a UE reaches the end of its updated preference list, it comes back to the beginning and repeats the application process until its updated preference list becomes empty or it is associated. In the case that the updated preference list becomes empty, no reapplication is possible, and the UE is removed from the rejection set $\mathcal{R}$ and added to the set of unassociated UEs $\mathcal{U}$.
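A Python sketch of the EA-PLU-RA game following the steps of Alg. \ref{EA_Game} is given below. The naming and data structures are our own assumptions, and ties in application order are broken by UE index; this is an illustrative sketch rather than the paper's implementation.

```python
def ea_plu_ra(ue_prefs, bs_prefs, quotas):
    """Sketch of the EA-PLU-RA game along the lines of Alg. 1.

    Inputs are copied because preference lists and quotas shrink in place.
    Returns (beta, unassociated): association vector and set U.
    """
    K = len(ue_prefs)
    ue_prefs = [list(p) for p in ue_prefs]   # P_k^UE, updated during play
    bs_prefs = [list(p) for p in bs_prefs]   # P_j^BS, updated during play
    quotas = list(quotas)
    beta, m = [None] * K, [0] * K
    rejected = set(range(K))                 # rejection set R
    unassociated = set()                     # set U
    while rejected:
        for k in sorted(rejected):
            if not ue_prefs[k]:              # empty list: no BS left
                rejected.discard(k)
                unassociated.add(k)
                continue
            m[k] %= len(ue_prefs[k])         # wrap around: reapplying
            j = ue_prefs[k][m[k]]
            if k in bs_prefs[j][:quotas[j]]:  # among top q_j remaining UEs
                beta[k] = j
                quotas[j] -= 1
                rejected.discard(k)
                for p in bs_prefs:           # drop UE k from all P_j^BS
                    if k in p:
                        p.remove(k)
                if quotas[j] == 0:           # BS j full: broadcast removal
                    for p in ue_prefs:
                        if j in p:
                            p.remove(j)
            else:
                m[k] += 1
    return beta, unassociated
```

Once every BS fills up, all UE preference lists empty out and the remaining UEs are moved to $\mathcal{U}$, so the loop always terminates.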
With this reapplication, when the length of the original preference list of each UE is equal to $J$, no UE is unassociated at the end of the game in the underload and critical load cases (see Lemma 2). In the case of an incomplete original preference list, a UE may remain unassociated. The user association process in the EA-PLU-RA game is depicted in Fig. \ref{EA_game_fig} and described in Alg. \ref{EA_Game}. We note that it is practical to update the preference lists of UEs and BSs after each round of applications during an EA matching game. When UE-BS associations occur, the associating BSs update their quotas and preference lists. They also inform all other BSs (via backhaul links) and UEs (via a cell broadcast message \cite{3GPP_CB}) to update their preference lists accordingly. In particular, each BS updates its quota and removes associated UEs from its preference list. If a BS runs out of quota, all unassociated UEs remove that BS from their preference lists. This reporting mechanism incurs minimal signaling overhead in practical implementations. \subsection{Convergence of Matching Games}\label{Conv_MGs} For a matching game, convergence implies that the game will eventually terminate and produce a matching result. Depending on the game and loading scenario, different events can happen at convergence. In the EA-Base and EA-PLU games, which do not allow reapplication, convergence occurs when the last unassociated UE(s) applies to its least preferred BS or all BSs run out of quota. In the EA-PLU-RA game, convergence occurs when either all UEs are associated or all BSs are full. We provide four lemmas on the convergence of the proposed EA matching games and obtain their \textit{worst convergence time} (maximum number of iterations). \textbf{Lemma 1}.
\textit{EA-Base and EA-PLU games always converge and their worst convergence time is $J$ iterations.} \begin{proof} EA-Base and EA-PLU games converge when the last unassociated UE(s) applies to its least preferred BS after having been rejected by all others. Since each UE can only apply once to each BS (reapplying is not allowed), the games terminate in at most $J$ iterations. \end{proof} \textbf{Lemma 2}. \textit{For underload and critical load cases, when the length of the original preference list of each UE is equal to the number of BSs $J$, all UEs will be associated at the end of the EA-PLU-RA game.} \begin{proof} If the length of the original preference list of each UE is equal to $J$, i.e., $|\mathcal{P}^\text{UE}_k|=J,~\forall k\in\mathcal{K}$, then any rejected UE(s) will have a chance to apply to all BSs and therefore will be associated as long as there is quota left at any of the BSs, which is always the case in underload and critical load scenarios. However, if there is a UE with $|\mathcal{P}^\text{UE}_k|<J$, the UE cannot apply to all BSs and may become unassociated if all BSs in its preference list run out of quota. \end{proof} \textbf{Lemma 3}. \textit{The EA-PLU-RA game converges within a finite number of iterations.} \begin{proof} As stated in Alg. \ref{EA_Game}, the EA-PLU-RA game terminates when the rejection set $\mathcal{R}$ becomes empty. According to Lemma 2, for underload and critical load cases where the length of the original preference list of each UE is equal to $J$, all UEs will be associated by the end of the game and the rejection set becomes empty. If there is a UE with an original preference list shorter than $J$, it keeps reapplying to the BSs in its preference list. Once a BS runs out of quota, it will be removed from the preference list of the UE. If all the BSs run out of quota, the preference list of the UE becomes empty, the UE is moved to the set $\mathcal{U}$ (unassociated), and the rejection set becomes empty.
For the overload scenario, the game converges when all BSs run out of quota, which will eventually happen because of preference list updating and reapplication. After this point, the preference lists of all rejected UEs are empty, and they will be moved from $\mathcal{R}$ to $\mathcal{U}$, causing the game to terminate. \end{proof} Because of reapplication, the worst convergence time of EA-PLU-RA is longer than that of the other two EA games and is harder to determine, as it depends on the preference setting. The next lemma provides an upper bound on the worst convergence time of the EA-PLU-RA game. \textbf{Lemma 4}. \textit{For a network with $J$ BSs and $K$ UEs, an upper bound on the maximum number of iterations for the EA-PLU-RA game under the critical load scenario is $N_{K,J}^\text{max} = J(K-J)+\sum_{j=1}^{J}j$.} \begin{proof} We prove the bound by reduction. When there are more than $J$ quotas left in the network, at least one UE must be associated after every $J$ iterations. The worst case is when no association occurs in the first $J-1$ iterations; then there must be at least one UE associated at the $J^\text{th}$ iteration. Using this logic, when there are $J$ unassociated users left, an upper bound on the maximum number of iterations up to this point is $J(K-J)$. From this point on, if there are $L$ quotas left, the longest it takes to get at least one UE associated is $L$ applications, in the case those quotas belong to different BSs. Thus, after at most $L$ applications, the number of unassociated UEs and the number of available quotas reduce to $L-1$. Using this reduction, we can obtain an upper bound on the maximum number of iterations for a network with $J$ BSs and $K$ UEs as $N_{K,J}^\text{max} = J(K-J)+\sum_{j=1}^{J}j$. \end{proof} The bound in Lemma 4 is conservative and quite loose, as indicated by our numerical results; nevertheless, it provides a concrete cutoff value on the worst convergence time.
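The closed-form bound of Lemma 4 is straightforward to evaluate; a small helper (the name is our own) using $\sum_{j=1}^{J}j=J(J+1)/2$:

```python
def ea_plu_ra_iter_bound(K, J):
    """Lemma 4 upper bound on EA-PLU-RA iterations under critical load:
    N_max = J*(K - J) + sum_{j=1}^{J} j = J*(K - J) + J*(J + 1) // 2."""
    return J * (K - J) + J * (J + 1) // 2
```

For instance, in the simulated setting with $J=5$ BSs and $K=35$ UEs, the bound evaluates to $5\times 30+15=165$ iterations.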
The actual worst convergence time is found numerically to be significantly smaller than the bound. \subsection{Complexity of Matching Games}\label{Comp_MGs} In this subsection, we analyze the complexity of user association matching games. 1) \textit{Computational complexity of building preference lists}: Prior to starting the game, we need to build the preference lists of BSs and UEs. This process needs to be performed by each player of the game. As mentioned earlier, preference lists can be built based on the user's instantaneous rate or some local measurements at the UE. In practical scenarios, this information can be measured at minimal computational complexity using the mechanism described in Sec. \ref{building_prf_lists}. In a distributed matching game, each UE can locally measure the received SINR from each BS separately, then compute the instantaneous rate using a single computation. After computing the instantaneous rates, the UE sorts these rates as a means of ranking the BSs in descending order to build its preference list. This sorting step incurs a computational cost of $\mathcal{O}(J\log(J))$ at each UE. Similarly, we obtain the computational cost of building a preference list at each BS as $\mathcal{O}(K\log(K))$. As a result, the total computational cost for building the preference lists of all UEs and BSs is $\mathcal{O}(JK\log(JK))$. 2) \textit{Game execution complexity}: During a matching game, each UE applies to a BS by sending an application message, and it is notified about the BS's decision via a response message. This process is the same for the DA and EA games at the UEs' side, and the number of iterations specifies the execution complexity of each game. However, the execution complexity of each game is different at the BSs' side because the BSs respond to applicants in different ways.
At each iteration of the DA game, each BS needs to perform a sorting procedure (Step 5) with cost $\mathcal{O}(K\log(K))$, which incurs a total computational cost of order $\mathcal{O}(JK\log(K))$ per iteration. In the EA games, no such sorting is required by the BSs, and thus they have a much lower execution complexity than the DA game on the BSs' side. Our numerical results show that the number of iterations of these games is usually around the number of BSs ($J$). Thus, the total execution complexities of the DA game and the EA games are $\mathcal{O}(J^2K\log(K))$ and $\mathcal{O}(J)$, respectively. We assume BSs and UEs perform their actions based on their most recently updated preference lists through the reporting mechanism described in Sec. \ref{EA-PLU-RA}. Thus, updating the preference lists in the EA-PLU and EA-PLU-RA games does not increase the execution complexity of these games. Considering both the computational complexity of building the preference lists and that of executing the game, the total computational complexities for the DA game and the EA games are $\mathcal{O}(J^2K\log(K))$ and $\mathcal{O}(JK\log(JK))$, respectively. In the complexity analysis of the centralized WCS algorithm in \cite{TWC}, we showed that the total complexity of that algorithm is $\mathcal{O}(M_j^2K^2\log(K))$, which is much higher than that of the distributed matching games since $M_j\gg J$. The computational complexities of the centralized, distributed, and semi-distributed (discussed later in Sec. \ref{Comp_MA}) user association schemes are summarized in Table \ref{Comp_UA_schemes}.
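The preference-list construction at a UE, whose $\mathcal{O}(J\log(J))$ sorting cost is counted above, can be sketched as follows. We use the Shannon formula over measured SINRs as a stand-in for the paper's rate expressions (\ref{R_kj_muW})-(\ref{R_kj_mmW}); the function name and the unit-bandwidth default are our own assumptions.

```python
import math

def build_ue_pref_list(sinrs_dB, bandwidth_hz=1.0):
    """Rank BSs for one UE by instantaneous rate, most preferred first.

    sinrs_dB[j] is the SINR measured from BS j in dB. Shannon's formula
    stands in for the paper's rate model; sorting J rates costs O(J log J).
    """
    rates = [bandwidth_hz * math.log2(1 + 10 ** (s / 10)) for s in sinrs_dB]
    return sorted(range(len(rates)), key=lambda j: rates[j], reverse=True)
```

Since the rate is monotone in the SINR, ranking by rate and ranking by SINR coincide here; the rate values themselves are what a UE would report to BSs in the multi-game algorithm of Sec. \ref{Dist_UA}.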
\begin{algorithm}[t]\small \SetAlgoLined \KwData{$\mathcal{J}$, $\mathcal{K}$, $q_j, \forall j\in \mathcal{J}$, Path loss information} \KwResult{Near-optimal association vector $\mathbold{\beta}^\star$ } \textbf{Initialization}: \\ - Set the number of games ($N$)\; - Build initial preference lists of BSs and UEs ($\mathcal{P}_{j}^0$, $\mathcal{P}_{k}^0$, $\forall k,j$) based on channel norms\; - Perform a matching game (DA or EA) to obtain initial $\mathbold{\beta}^1$\; \For{$n=1:N$}{ Calculate $R_{k,j}(\mathbold{\beta}^n),~\forall k, j$\; Build preference lists $\mathcal{P}_{j}^n$, $\forall j\in \mathcal{J}$ and $\mathcal{P}_{k}^n$, $\forall k\in\mathcal{K}$\; Perform a matching game (DA or EA) to obtain $\mathbold{\beta}^{n+1}$\; } $\mathbold{\beta}^\star=\mathrm{arg}~\max_{n=1,...,N} ~U(\mathbf{r}(\mathbold{\beta}^n))$. \caption{\small Multi-Game Matching Algorithm for User Association with Max-throughput} \label{EA_Alg} \end{algorithm} \section{Multi-Game Matching Algorithm}\label{Dist_UA} The distributed matching games are fast and efficient in terms of delay and power consumption, but due to their distributed nature, they may not reach the performance of centralized algorithms. If we can afford some additional delay and power consumption as well as a minimal signaling exchange, we can further enhance the performance in terms of a network utility. In this section, we introduce a user association optimization problem which aims to maximize a network utility function, and then propose a multi-game matching algorithm which requires running multiple rounds of a game and a central entity to keep track of the best association vector. Each game is still run in an entirely distributed fashion, and only the resulting association vector is sent to the central entity for tracking. Due to the dependency between user association and the interference structure of the network, user instantaneous rates or local measurements can change with different associations.
Thus, the preference lists may change according to user associations. In particular, at the end of a user association matching game, we obtain an association vector $\mathbold{\beta}$ which specifies the UE-BS connections. Since the user instantaneous rate is a function of $\mathbold{\beta}$, the resulting preference lists at the end of a game round may differ from the original ones at the start of the same round, and performing another round of a matching game may produce a better user association in terms of a network utility. In order to keep improving the network performance, we introduce a matching algorithm which plays multiple rounds of a matching game in an iterative manner. Each round of the game aims to maximize the sum-rate as in (\ref{sum_rate}), given the initial association vector obtained from the previous round of the game. \subsection{Multi-Game Matching Algorithm for Max-throughput} This matching algorithm requires an initial association vector $\mathbold{\beta}^1$, which can be obtained by performing a user association matching game in the initialization procedure. The preference lists of BSs and UEs for this initial game can be built based on channel norms as described in Sec. \ref{MT_for_DUA}.B. At each subsequent iteration of the algorithm, by fixing the associations of all other UEs based on the current association vector $\mathbold{\beta}^n$, each UE computes the instantaneous rate it can get from each BS and reports this rate to the corresponding BS. Then, each BS (UE) updates its preference list by ranking all UEs (BSs) based on the computed instantaneous rates. Next, a matching game (DA or EA) is performed to obtain the new association vector $\mathbold{\beta}^{n+1}$. This new association vector will be used to establish the preference lists for the next round. The algorithm performs the matching game $N$ times, where $N$ is a design parameter. Each time, the game is run in a distributed fashion.
At the end of each game, the BSs report their associations to a central entity, called the \textit{best-$\mathbold{\beta}$-tracker}, which computes the utility function and keeps track of the best association vector. As $N$ increases, there is a higher chance of obtaining a better association at the cost of more association delay and higher power consumption at the UEs. The value of $N$ can be determined based on practical delay and power constraints. At the end of the algorithm, the best-$\mathbold{\beta}$-tracker informs the BSs of the best association vector, corresponding to the highest network utility. Although this algorithm requires a central entity to keep track of the best association vector, at each round, the game is performed in a purely distributed manner. This algorithm is described in Alg. \ref{EA_Alg}. \begin{table}[t] \centering \caption{Computational complexity of centralized, distributed, and semi-distributed user association schemes} {\tabulinesep=1.2mm \begin{tabu} {|c|c|} \hline \small User Association Scheme & \small Complexity \\ \hline \scriptsize WCS Algorithm (centralized)\cite{TWC} & \scriptsize $\mathcal{O}\Big(M_j^2K^2\log(K)\Big)$ \\\hline \scriptsize DA Game (distributed)\cite{gale1962college} & \scriptsize $\mathcal{O}\Big(J^2K\log(K)\Big)$ \\\hline \scriptsize Proposed EA-PLU-RA Game (distributed) & \scriptsize $\mathcal{O}\Big(JK\log(JK)\Big)$\\\hline \scriptsize Proposed Multi-game DA Alg. (semi-distributed) & \scriptsize $\mathcal{O}\Big(NJ^2K\log(K)\Big)$\\\hline \scriptsize Proposed Multi-game EA Alg. (semi-distributed) & \scriptsize $\mathcal{O}\Big(NJK\log(JK)\Big)$\\\hline \end{tabu}} \label{Comp_UA_schemes} \end{table} \begin{figure*}[t] \centering \includegraphics[scale=.31]{M130_Delay_2.pdf} \hspace*{4em} \includegraphics[scale=.31]{M130_Delay_PDF_Empr_2.pdf} \vspace*{-0.6em} \caption{Comparing the association delay of DA and EA games under the critical load scenario.
a) Average association delay in a HetNet with 1 MCBS and $J-1$ SCBSs, b) Empirical PDF of association delay in a HetNet with $J=5$ BSs and $K=35$ UEs. The vertical lines show the 25th and 75th percentile bars. The time unit is the amount of time for sending an application and receiving a response.} \label{Delay_Appl} \end{figure*} \begin{figure*}[t] \centering \includegraphics[scale=.31]{M130_Power_2.pdf} \hspace*{4em} \includegraphics[scale=.31]{M130_Power_PDF_Empr_2.pdf} \vspace*{-0.6em} \caption{Comparing the power consumption of DA and EA games under the critical load scenario with $q_1=15$. a) Average user power consumption for association in a HetNet with 1 MCBS and $J-1$ SCBSs, b) Empirical PDF of user power consumption for association in a HetNet with $J=5$ and $K=35$. The vertical lines show the 25th and 75th percentile bars. The power unit is the amount of power consumed during an application and response step.} \label{Delay_Appl_PDF} \end{figure*} \subsection{Complexity of Multi-Game Matching Algorithm}\label{Comp_MA} Based on the complexity analysis in Sec. \ref{Comp_MGs}, the complexity of the proposed $N$-game matching algorithm is $\mathcal{O}(NJ^2K\log(K))$ using the DA game and $\mathcal{O}(NJK\log(JK))$ using the EA game, both of which are still much smaller than the cost of the centralized WCS algorithm ($\mathcal{O}(M_j^2 K^2\log(K))$) since $N$ and $J$ are usually much smaller than $K$, and $M_j$ is typically a large number. We note that all computations in the WCS algorithm are carried out by a central coordinator. In the proposed multi-game matching algorithm, however, computations are distributed among the best-$\mathbold{\beta}$-tracker (for computing the network utility and keeping track of the best association vector) and the BSs and UEs, which need to build their own preference lists. A summary of the computational complexity of different user association schemes is shown in Table \ref{Comp_UA_schemes}.
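The multi-game loop of Alg. \ref{EA_Alg} can be sketched as follows. The matching game and the rate model are passed in as callables, since any DA or EA game can be plugged in and the instantaneous rates depend on the current association vector through interference; all names here are our own assumptions.

```python
def multi_game_matching(run_game, rate_fn, ue_prefs0, bs_prefs0, quotas, N):
    """Sketch of the multi-game matching algorithm (Alg. 2 structure).

    run_game(ue_prefs, bs_prefs, quotas) -> association list beta.
    rate_fn(k, j, beta) -> instantaneous rate R_{k,j} under association beta.
    N is the number of game rounds (a design parameter).
    Returns the best association found and its sum-rate utility.
    """
    K, J = len(ue_prefs0), len(bs_prefs0)

    def utility(beta):                       # sum-rate utility U(r(beta))
        return sum(rate_fn(k, beta[k], beta)
                   for k in range(K) if beta[k] is not None)

    beta = run_game(ue_prefs0, bs_prefs0, quotas)   # initial game
    best_beta, best_u = beta, utility(beta)
    for _ in range(N):
        # rebuild all preference lists from the rates under the current beta
        ue_prefs = [sorted(range(J), key=lambda j: -rate_fn(k, j, beta))
                    for k in range(K)]
        bs_prefs = [sorted(range(K), key=lambda k: -rate_fn(k, j, beta))
                    for j in range(J)]
        beta = run_game(ue_prefs, bs_prefs, quotas)
        u = utility(beta)
        if u > best_u:                       # best-beta-tracker update
            best_beta, best_u = beta, u
    return best_beta, best_u
```

The running `best_beta`/`best_u` pair plays the role of the best-$\mathbold{\beta}$-tracker; in the actual system this bookkeeping sits at the central entity while each game round remains distributed.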
\section{Numerical Results}\label{Sim_res} In this section, we evaluate the performance of the proposed user association matching games and algorithm in the downlink of a mmWave-enabled HetNet with $J$ BSs and $K$ UEs. The network includes 1 MCBS operating at 1.8 GHz with quota $q_1=15$ and $J-1$ SCBSs operating at 28 GHz, each with quota $q_j=5, j\in\{2, ..., J\}$. Unless otherwise stated, we consider a HetNet with 1 MCBS, 4 SCBSs, and 35 UEs. Sub-6 GHz channels and mmWave channels are generated as described in Sec. \ref{Ch_Models}. We assume each mmWave channel is composed of 5 clusters with 10 rays per cluster. In order to implement 3D beamforming, each MCBS is equipped with a massive MIMO antenna with 64 elements, each SCBS has an $8\times 8$ UPA, and each UE is equipped with a single-antenna module for the sub-6 GHz band and a 4-element antenna array for the mmWave band. Also, we assume that the transmit power of the MCBS is 10 dB higher than that of the SCBSs. Network nodes are deployed in a $500 \times 500~\textrm{m}^2$ square area, where the BSs are placed at specific locations and the UEs are distributed randomly according to a PPP with on average $K$ UEs within the given area. \subsection{Association Delay and Power Consumption} Fig. \ref{Delay_Appl} compares the DA and EA games in terms of association delay under the critical load scenario, with the average delay on the left and the delay distribution on the right. Subfigure (a) shows that the EA games significantly outperform the DA game in terms of average association delay. This advantage becomes more significant as the network size increases. Subfigure (b) illustrates that all EA games perform better than the DA game in terms of user association delay by having the delay distribution more concentrated around low delay values.
For example, the association delay under the EA games for about 50\% of the UEs is only one time unit, while for the DA game, more than 96\% of the UEs have an association delay of at least 5 time units. Also, the maximum delay is 8 time units for the DA game, 5 time units for the EA-Base and EA-PLU games, and 14 time units for the EA-PLU-RA game. However, it is worth mentioning that the probability of a long delay (more than 8 time units) for EA-PLU-RA is only 5\%. Fig. \ref{Delay_Appl_PDF} compares the matching games in terms of association power consumption under the critical load scenario. Subfigure (a) shows that the DA game is slightly more power efficient on average than the proposed matching games, as it has a lower average user power consumption for association. The additional power consumption of the EA games, however, is not substantial and is attributed to the fact that in the EA games, each UE may apply on average more times than in the DA game because there are no waiting lists, so a UE may be rejected and apply again immediately. According to subfigure (b), about 75\% of UEs consume only one power unit under the DA game, while this value for the EA games is about 51\%. As expected, EA-PLU-RA consumes the most power due to the reapplication process, where the probability of consuming more than the maximum power of the other games (5 power units) under EA-PLU-RA is about 23\%. These observations confirm that the EA games result in a faster association process, while the DA game is slightly more power efficient. \begin{figure} \centering \includegraphics[scale=.31]{M130_CDF_Delay_2.pdf} \vspace*{-0.5em} \caption{Comparing the empirical CDF of user association delay for EA and DA matching games in a HetNet with $J=5$ BSs, $K=35$ UEs. } \label{CDF_Delay} \end{figure} Fig. \ref{CDF_Delay} depicts the empirical CDF of user association delay for the user association matching games.
This figure again confirms that the probability of having a low association delay (below 5 time units) is significantly higher in the EA games than in the DA game, and is the highest for the EA-PLU and EA-PLU-RA games. For instance, the probability of having a maximum association delay of 4 time units is 63\% for the EA-PLU-RA game and only 1.4\% for the DA game. These results illustrate that the association process in the EA games is much faster compared to the DA game. Fig. \ref{underload_to_overload} depicts the association delay versus the number of UEs in a HetNet with 1 MCBS and 4 SCBSs. Keeping the BSs' quotas fixed so that the network can serve a maximum of 35 UEs, we increase the number of UEs such that the network transitions from underload (left shaded region) to critical load (vertical line at $K=35$ UEs), lightly overloaded (middle shaded region), and finally heavily overloaded (right shaded region) cases, in order to investigate the effect of different loading scenarios on the association delay. We observe an interesting effect: in the DA game, the average delay in the overload cases is exactly equal to the number of BSs, since the DA game terminates when all UEs are waitlisted by BSs, and this happens at the end of the $J$th iteration since each UE has no more than $J$ options. For the EA-Base and EA-PLU games, the association delay is always less than $J$ since UEs do not reapply to BSs. A different trend is observed for the EA-PLU-RA game due to the reapplication process and since BSs only accept UEs within their quota. Thus, the higher the number of UEs, the more reapplications and the larger the association delay. Note that the heavily overloaded region where the delay of EA-PLU-RA crosses that of DA occurs when the number of UEs in the network is about twice the BSs' total quota, which is unlikely to happen in the real world.
\begin{figure} \centering \includegraphics[scale=.31]{M130_AssDelay_vs_Load_new_shaded.pdf} \vspace*{-0.5em} \caption{Comparing the effect of different loading scenarios on average association delay in a HetNet with $J=5$ BSs and quota vector $\mathbf{q}=[15, 5, 5, 5, 5]$. } \label{underload_to_overload} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{M138_Perc_UnAss4.pdf} \vspace*{-0.5em} \caption{Percentage of unassociated UEs under three different loading scenarios: a) underload, b) critical load, and c) overload. The number of BSs and UEs increases while the BS quotas are fixed at $q_1=15, q_j=5, j\in\{2,...,J\}$.} \label{UnAss} \end{figure*} \subsection{Percentage of Unassociated Users} In practice, the number of UEs in the network is not always equal to the total quota of the BSs. Thus, there may be unassociated UEs at the end of an association game. For the next simulation, we increase both the number of BSs and the number of UEs while keeping the BSs' quotas fixed. Fig. \ref{UnAss} compares the percentage of unassociated UEs under three loading scenarios: a) underload, b) critical load, and c) overload. For the underload case (a), the number of UEs is 20\% less than the total quota of the BSs, whereas for the overload case (c), the number of UEs is 20\% more. This figure shows that the proposed EA-PLU-RA game has similar performance to the DA game under all three loading scenarios, since both games guarantee that the maximum number of UEs (limited by the BSs' quotas) is associated at the end of the game (see Lemma 2). Also, it can be inferred that without the preference list updating and reapplying steps (EA-Base and EA-PLU), there may be unassociated UEs even in the underload and critical load cases. Thus, these two steps are necessary to make the best use of the available resources provided by the BSs. Fig.
\ref{SumRate} compares the network spectral efficiency of the HetNet for several association schemes: 1) the centralized WCS algorithm \cite{TWC}, 2) the proposed multi-game DA algorithm with $N=10$, 3) the proposed multi-game EA algorithm with $N=10$, 4) the distributed single DA game \cite{gale1962college}, 5) the proposed distributed single EA game, 6) max-SINR association, and 7) random association. The EA game used in this simulation is the one with preference list updating and reapplying (EA-PLU-RA). For the single matching games and the initial round of the multi-game algorithms, the preference lists are built based on channel norms, which include both instantaneous and large-scale CSI. For the multi-game matching algorithms, we use the matchings obtained from the corresponding single matching games as the initial association vectors and run each algorithm for $N=10$ iterations. At each iteration, the preference lists are updated using the instantaneous rates obtained based on the resulting association vector of that iteration (see (\ref{R_as_PrfList})). In the max-SINR association scheme, each UE connects to the BS providing the highest received SINR. For random association, each UE randomly associates with a BS based on a uniform distribution. In these two schemes, if a BS is overloaded, the UEs exceeding its quota are dropped and become unassociated. The results show that the EA-PLU-RA game outperforms all other distributed games and approaches the performance of the centralized WCS algorithm. Interestingly, the EA-PLU-RA game slightly outperforms the DA game in terms of network spectral efficiency, confirming that stability is a less relevant performance metric for cellular networks. This higher spectral efficiency comes together with the lower association delay of the EA game. While this result may appear counter-intuitive at first, it actually makes sense.
Although the DA game was shown to be optimal in terms of stability \cite{gale1962college}, it was not known to be optimal for other metrics relevant to cellular systems, including spectral efficiency and delay. Since DA is not optimal for either of these metrics, there is no inherent trade-off between them for the DA game, as evidenced by the EA game achieving better performance in both metrics. This figure shows that compared to the centralized WCS algorithm, multi-game algorithms achieve about 90\% of the performance, and a purely distributed single game can achieve 80\% of the performance of the centralized algorithm. Such a performance figure is laudable for a distributed algorithm. Although max-SINR association performs close to the matching games in terms of average network spectral efficiency, it results in more unassociated UEs. Random association shows very poor performance, which underlines the importance of user association in cellular HetNets. In Fig. \ref{RunTime}, we compare the execution run time of the user association matching games and multi-game matching algorithms under a critical load scenario. The vertical axis represents the run time in time units (which vary depending on the capabilities of the processing machine) and the horizontal axis shows the quota of SCBSs. We observe that as the number of UEs increases, the proposed EA-PLU-RA matching game performs faster than the DA game. The trend is similar for the multi-game matching algorithms. We also observed a stark difference between the run times of the centralized WCS algorithm and the distributed matching games/algorithms. This result validates the advantage of distributed matching games, as the matching games/algorithms have much lower complexity than the centralized WCS algorithm. \section{Conclusion} We proposed a set of distributed early acceptance (EA) user association matching games for 5G and beyond HetNets, and compared their performance with the well-known DA matching game. 
We showed that, at the cost of a slight power overhead, the EA games result in a significantly faster association process compared to the DA game while achieving better network spectral efficiency. The EA-PLU-RA game with preference list updating and reapplication provides the best overall network performance in terms of both association delay and percentage of associated users. These results suggest that stability is a less relevant metric for user association and that EA may be more suitable for real-time distributed association in B5G wireless networks. Next, we proposed a multi-game matching algorithm to further enhance the network spectral efficiency by running multiple rounds of a matching game. Numerical results show that the proposed distributed EA games and multi-game algorithm achieve a network spectral efficiency within 80-90\% of the near-optimal centralized WCS benchmark algorithm, while incurring complexity several orders of magnitude lower and significantly less overhead due to their distributed or semi-distributed nature. \begin{figure} \centering \includegraphics[scale=.31]{M138_NetSumRate.pdf} \vspace*{-0.5em} \caption{Comparing the average network spectral efficiency of user association schemes in a HetNet with $J=5$ BSs and $K=35$ UEs.} \label{SumRate} \end{figure} \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Introduction} \label{Intro} \IEEEPARstart{5}{G} and beyond heterogeneous networks (HetNets) will utilize both sub-6 GHz and millimeter wave (mmWave) frequency bands. These networks will be highly dense and composed of different types of base stations (BSs) with different sizes, transmit powers, and capabilities. In these dense multi-tier networks, finding optimal user associations is a challenging problem, which entails determining the best possible connections between BSs and user equipment (UEs) to achieve optimal network performance while satisfying the BSs' load constraints. 
The traditional association method of connecting to the BS with the highest SINR may overload certain BSs and no longer works well in a HetNet with different BS transmit powers and under the highly directional and variable mmWave channel conditions. Furthermore, the user association process needs to be fast and efficient to meet the low-latency requirements of beyond 5G (B5G) networks. In a user association problem, the presence or absence of a connection between a UE and a BS is usually indicated by an integer variable with value one or zero, making it an integer optimization problem. In this paper, we focus on unique user association, in which each UE can only be associated with one BS at a time. In B5G HetNets, UEs will be able to work in dual connectivity mode as they are equipped with a multi-mode modem supporting both sub-6 GHz and mmWave bands, allowing association to either a macro cell BS (MCBS) or a small cell BS (SCBS). \subsection{Background and Related Works} User association for load balancing in LTE HetNets is studied in \cite{Andrews}, where a user association algorithm is introduced by relaxing the unique association constraint and then using a rounding method to obtain unique association coefficients. In \cite{Caire}, the researchers studied optimal user association in massive MIMO networks and solved the user association problem using Lagrangian duality. User association in a 60-GHz wireless network is studied in \cite{60GHz}, where the researchers assumed that interference is negligible due to directional steerable antenna arrays and signals highly attenuated with distance at 60 GHz. This assumption, however, becomes inaccurate at other mmWave frequencies considered for cellular use, as it has been shown that mmWave systems can transition from noise-limited to interference-limited regimes \cite{Niknam,NoiseOrInterf}. The unique user association problem usually results in a complex integer nonlinear program which is NP-hard in general. 
Heuristic algorithms have been designed to solve this problem and achieve a near-optimal solution \cite{zalghout2015greedy,TWC}. These algorithms often require centralized implementation, which usually suffers from high computational complexity and relies on a central coordinator to collect the channel state information (CSI) between all BSs and UEs. As a result, practical implementation of these algorithms in mmWave-enabled networks is potentially inefficient given the erratic nature of mmWave channels. The authors in \cite{Dist_UA_60} proposed two distributed user association methods for a 60-GHz mmWave network. They only considered a simple mmWave channel model based on large-scale CSI and antenna gains, and assumed that the interference is negligible because of highly directional transmissions. Matching theory has been proposed as a promising low-complexity mathematical framework with applications ranging from the labor market to wireless networks. In a pioneering work \cite{gale1962college}, Gale and Shapley introduced a matching game, called the deferred acceptance (DA) game, to solve the problem of one-to-one and many-to-one matching. The DA matching game has recently found applications for user association in wireless networks. For example, the DA game is employed for resource management in wireless networks using algorithmic implementations in \cite{gu2015matching}. A two-tier matching game is proposed for user association based on task offloading to BSs jointly with collaboration among BSs for task transfer to underloaded BSs \cite{UA_MT_MEC}. Matching algorithms based on the DA game have also been used for user association in the downlink of small cell networks in \cite{semiari2014matching} and for the uplink in \cite{saad2014college}. \subsection{Our Contributions} We propose matching-theory-based user association schemes designed by considering system aspects specifically relevant to B5G cellular networks. 
We consider a mmWave clustered channel model for generating the mmWave links. In the system model, the UEs have dual connectivity and are equipped with both sub-6 GHz and mmWave antenna arrays. The effect of directional transmissions in B5G systems is directly integrated via beamforming transmission and reception in both frequency bands. Moreover, we take into account the dependency between user association and interference, which is crucial for beamforming transmission, while previous works either ignore interference or assume it to be independent of user association. Such an assumption is applicable in LTE cellular networks because of omni-directional transmissions, but is no longer suitable for B5G mmWave-enabled networks because of directional transmissions by beamforming at both BSs and UEs. Existing matching-theory works on user association are based exclusively on the DA game. One important issue with the DA game is excessive association delay for distributed implementation, as the associations of all UEs are postponed to the end of the game. This is due to the presence of a waiting list at each BS, described in more detail in Sec. \ref{DA_subsec}. In \cite{GLC19}, we proposed a new matching game with lower delay and provided some preliminary results. In this paper, we propose a set of low-delay distributed matching games tailored for user association in B5G cellular networks. The main contributions of this paper are: \begin{itemize}[leftmargin=*] \item We introduce a distributed user association framework, in which UEs and BSs exchange application and response messages to make decisions, and define relevant metrics to assess its performance. We show that stability of a matching game does not directly lead to optimality in other objectives such as the network throughput. 
Instead of stability which is also less relevant in a dynamic network, we consider performance metrics for distributed user association in terms of association delay, user's power consumption in the association process, percentage of unassociated users, and network utility including throughput or spectral efficiency. \item We propose three novel and purely distributed matching games, called early acceptance (EA), and compare them with the well-known DA matching game. Unlike DA which postpones association decisions to the last iteration of the game, our proposed EA games allow the BSs to make their decisions in accepting or rejecting UEs immediately. This approach results in significantly faster association process and at the same time a slightly higher network throughput, and hence presents a better choice for association in fast-varying mmWave systems. \item Our proposed EA games follow a set of similar rules, but are different in terms of updating the UE/BS preference lists and reapplying to BSs. Numerical results show that the basic EA game without updating or reapplying is the fastest, whereas the EA game with preference list updating and reapplication leads to the highest percentage of associated UEs. Thus, there is a tradeoff between the game simplicity and the number of unassociated UEs. \item We also propose a multi-game user association algorithm to further improve the network throughput by performing multiple rounds of a matching game. The proposed algorithm requires minimal centralized coordination, and as the number of rounds of the game increases, reaches closely the performance of centralized worst connection swapping (WCS) algorithm introduced in \cite{TWC}. \item Our simulations show that the proposed EA games have comparable performance with the DA game in terms of power consumption and signaling overhead, while resulting in a slightly higher network utility and a superior performance in terms of association delay. 
Considering the fact that the number of UEs to be associated is usually different from the total quota of BSs, we show that our proposed EA games are effective in all loading scenarios (underload, critical load, and overload). \end{itemize} \subsection{Notation} In this paper, scalars and sets are denoted by italic letters (e.g. $x$ or $X$) and calligraphic letters (e.g. $\mathcal{X}$), respectively. Vectors are represented by lowercase boldface letters (e.g. $\mathbf{x}$), and matrices by uppercase boldface letters (e.g. $\mathbf{X}$). Superscripts $(.)^T$ and $(.)^*$ represent the transpose operator and the conjugate transpose operator, respectively. $\log(.)$ stands for the base-2 logarithm, and big-O notation $\mathcal{O}(.)$ expresses the complexity. $|\mathcal{X}|$ denotes the cardinality of set $\mathcal{X}$. $\boldsymbol{I}_N$ is the $N\times N$ identity matrix, and $|\mathbf{X}|$ denotes the determinant of matrix $\mathbf{X}$. \section{System and User Association Models}\label{Sys_model} We study the problem of user association in the downlink of a two-tier HetNet with $B$ macro cell BSs (MCBSs) operating at microwave band, $S$ small cell BSs (SCBSs) working at mmWave band, and $K$ UEs. Let $\mathcal{B}$, $\mathcal{S}$, and $\mathcal{J}=\{1, ..., J\}$ denote the respective sets of MCBSs, SCBSs, and all BSs with $J=B+S$, and let $\mathcal{K}=\{1, ..., K\}$ represent the set of UEs. Each BS $j$ has $M_j$ antennas, and each UE $k$ is equipped with two antenna modules: 1) a single antenna for LTE connections at microwave band, and 2) an antenna array with $N_k$ elements for 5G connections at mmWave band. Each UE $k$ aims to receive $n_k$ data streams from its serving BS such that $1\leq n_k\leq N_k$, where the upper inequality indicates that the number of data streams for each UE cannot exceed the number of its antennas. We also assume that each UE supports dual connectivity so that association to either an MCBS or an SCBS is possible. 
\subsection{Microwave and mmWave Channel Models}\label{Ch_Models} In this subsection, we introduce the microwave and mmWave channel models. In the microwave band the transmissions are omnidirectional and we use the well-known Gaussian channel model \cite{Telatar}. We denote $\mathbf{h}^{\mu\text{W}}$ as the channel vector between a MCBS and a UE where its entries are i.i.d. complex Gaussian random variables with zero-mean and unit variance, i.e., $h^{\mu\text{W}} \sim \mathcal{CN}(0,1)$. In the mmWave band, the transmissions are highly directional and the Gaussian MIMO channel model no longer applies. We employ the specific clustered mmWave channel model which includes $C$ clusters with $L$ rays per cluster defined as \cite{3GPP901}, \cite{Nokia} \begin{align}\label{clustered_ch} \mathbf{H}^{\text{mmW}}=\frac{1}{\sqrt{CL}}\sum_{c=1}^{C}\sum_{l=1}^{L} \sqrt{\gamma_c}~\mathbf{a}(\phi_{c,l}^{\textrm{UE}},\theta_{c,l}^{\textrm{UE}}) ~\mathbf{a}^*(\phi_{c,l}^{\textrm{BS}},\theta_{c,l}^{\textrm{BS}}) \end{align} where $\gamma_c$ is the power gain of the $c$th cluster. The parameters $\phi^{\textrm{UE}}$, $\theta^\textrm{UE}$, $\phi^\textrm{BS}$, $\theta^\textrm{BS}$ represent azimuth angle of arrival, elevation angle of arrival, azimuth angle of departure, and elevation angle of departure, respectively. The vector $\mathbf{a}(\phi,\theta)$ is the response vector of a uniform planar array (UPA) which allows 3D beamforming in both the azimuth and elevation directions. We consider the probability of LoS and NLoS as given in \cite{RapLetter}, and utilize the path loss model for LoS and NLoS links as given in \cite{3GPP901}. The numerical results provided in Sec. \ref{Sim_res} are based on these channel models and related parameters. 
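As an illustration, the clustered channel model above can be sampled numerically. The sketch below is a minimal NumPy example assuming half-wavelength UPA element spacing, uniformly drawn angles, and exponentially distributed cluster power gains; these distributions, and the 2x2 UE / 8x8 BS array sizes, are purely illustrative assumptions, not the 3GPP distributions and path-loss parameters used in the paper's simulations.

```python
import numpy as np

def upa_response(phi, theta, n_h=4, n_v=4):
    """Response vector of an n_h x n_v uniform planar array with
    half-wavelength spacing (idealized model, no mutual coupling)."""
    m = np.arange(n_h)
    n = np.arange(n_v)
    # Phase progression across horizontal and vertical elements.
    a_h = np.exp(1j * np.pi * m * np.sin(phi) * np.sin(theta))
    a_v = np.exp(1j * np.pi * n * np.cos(theta))
    return np.kron(a_h, a_v) / np.sqrt(n_h * n_v)

def clustered_channel(C=3, L=5, n_ue=(2, 2), n_bs=(8, 8), rng=None):
    """One realization of the clustered channel:
    H = (1/sqrt(CL)) * sum_c sum_l sqrt(gamma_c) a_UE a_BS^H."""
    rng = np.random.default_rng(rng)
    N_k, M_j = n_ue[0] * n_ue[1], n_bs[0] * n_bs[1]
    H = np.zeros((N_k, M_j), dtype=complex)
    gamma = rng.exponential(size=C)  # illustrative per-cluster power gains
    for c in range(C):
        for _ in range(L):
            # Illustrative uniform angle draws (azimuth and elevation).
            phi_ue, phi_bs = rng.uniform(-np.pi, np.pi, size=2)
            th_ue, th_bs = rng.uniform(0, np.pi, size=2)
            a_ue = upa_response(phi_ue, th_ue, *n_ue)
            a_bs = upa_response(phi_bs, th_bs, *n_bs)
            H += np.sqrt(gamma[c]) * np.outer(a_ue, a_bs.conj())
    return H / np.sqrt(C * L)

H = clustered_channel(rng=0)  # a 4 x 64 complex channel matrix
```

The Kronecker structure of the UPA response is what enables the 3D beamforming in azimuth and elevation mentioned above.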
\subsection{Signal Model} For tier-1 working at sub-6 GHz band, the effective interfering channel on UE $k$ from MCBS $j\in\mathcal{B}$ serving UE $l$ is defined as \begin{equation} h_{k,l,j} = \mathbf{h}^{\mu\text{W}}_{k,j}\mathbf{f}_{l,j} \end{equation} where $\mathbf{f}_{l,j}\in\mathbb{C}^{M_j\times 1}$ is the linear precoder (transmit beamforming vector) at MCBS $j$ intended for UE $l$. If $l=k$, this defines the effective channel between MCBS $j$ and UE $k$ as $h_{k,j} = \mathbf{h}^{\mu\text{W}}_{k,j}\mathbf{f}_{k,j}$. Similarly, for tier-2 operating at mmWave band, the effective interfering channel on UE $k$ from SCBS $j\in\mathcal{S}$ serving UE $l$ is defined as \begin{equation} \mathbf{H}_{k,l,j} = \mathbf{W}^*_k\mathbf{H}^\text{mmW}_{k,j}\mathbf{F}_{l,j} \label{H_mmW} \end{equation} where $\mathbf{F}_{l,j}\in\mathbb{C}^{M_j\times n_l}$ is the linear precoder at SCBS $j$ intended for UE $l$, and $\mathbf{W}_k\in\mathbb{C}^{N_k \times n_k}$ is the linear combiner (receive beamforming matrix) of UE $k$. If $l=k$, (\ref{H_mmW}) becomes the effective channel between SCBS $j\in\mathcal{S}$ and UE $k$ which includes both beamforming vectors/matrices at the BS and UE, and can be expressed as $\mathbf{H}_{k,j} = \mathbf{W}^*_k\mathbf{H}^\text{mmW}_{k,j}\mathbf{F}_{k,j}$. Thus, the received signal at UE $k$ connected to MCBS $j\in\mathcal{B}$ can be written as \begin{equation}\label{y_k_muW} y_k^{\mu\text{W}} = \sum_{j\in \mathcal{B}}h_{k,j}s_{k,j} + z_k \end{equation} where $s_{k,j}\in \mathbb{C}$ is the data symbol intended for UE $k$ with $\mathbb{E}\lbrack s^*_{k,j}s_{k,j}\rbrack =P_{k,j}$, and $z_k\in\mathbb{C}$ is the complex additive white Gaussian noise at UE $k$ with $z_k\sim\mathcal{CN}(0,N_0)$, and $N_0$ is the noise power. We consider an equal power allocation scheme to split each BS $j$ transmit power ($P_j$) equally among its associated users, i.e., $P_{k,j}=P_j/|\mathcal{K}_j|$. 
Similarly, the received signal at UE $k$ connected to SCBS $j\in\mathcal{S}$ is given by \begin{equation}\label{y_k_mmW} \mathbf{y}_k^\text{mmW} = \sum_{j\in \mathcal{S}}\mathbf{H}_{k,j}\mathbf{s}_{k,j} + \mathbf{W}^*_k\mathbf{z}_k \end{equation} where $\mathbf{s}_{k,j}\in \mathbb{C}^{n_k}$ is the data stream vector for UE $k$ consisting of mutually uncorrelated zero-mean symbols with $\mathbb{E}\lbrack \mathbf{s}^*_{k,j}\mathbf{s}_{k,j}\rbrack =P_{k,j}$, and $\mathbf{z}_k\in\mathbb{C}^{N_k}$ is the complex additive white Gaussian noise vector at UE $k$. The signal model, algorithms, and insights for user association presented in this paper are applicable to all types of channel models, transmit beamforming, and receive combining. \subsection{User Association Model} We follow the mmWave-specific user association model introduced in \cite{TWC}. Because of directional beamforming in mmWave systems, the interference structure depends on the user association. Taking this dependency into account is important for mmWave systems, where the channels are probabilistic and fast time-varying, and the interference depends on the highly directional connections between BSs and UEs. We perform user association over a time duration which we call an \textit{association block}, which can span a single or multiple time slots depending on the availability of CSI and is a design choice (Fig. \ref{Ass_block}). In each association block, the user association process occurs in the association time interval, which establishes the UE-BS connections for the transmission time interval. The association time interval is further divided into sub-slots for distributed implementation, where during each sub-slot UEs can apply to a BS for association. In a fully distributed algorithm, UE-BS associations can be determined at the end of each sub-slot within the association time interval. 
This is fundamentally different from a centralized implementation where all user associations are determined at the end of the association time interval. In distributed implementation, the duration of the association time interval can vary for each UE's association block, depending on the delay in the association process for that UE. In the next association block, the user association process needs to be performed again to update associations according to users' mobility and channel variations. \begin{figure} \centering \includegraphics[scale=.45]{Association_Block2.pdf} \caption{Structure of an association block: User association is established in a distributed fashion during the association time interval then is applied to the transmission time interval.} \label{Ass_block} \end{figure} An association block can represent a single time slot, when both instantaneous (small-scale) and large-scale CSI are available, or can be composed of several consecutive time slots when only large-scale CSI is available. Such a choice will lead to a trade-off between user association overhead and resulting network performance. We study a \textit{unique user association} problem in which each UE can be served by only one BS in each association block. In this problem, UE-BS associations can be completely determined by an association vector $\mathbold{\beta}$ defined as follows \begin{equation}\label{Beta_eq} \mathbold{\beta}=[\beta_1, ..., \beta_K]^T \end{equation} and \textit{unique association constraints} can be expressed by \begin{align}\label{TFA_cons_01} \sum_{j\in\mathcal{J}} \mathds{1}_{\beta_k=j}&\leq 1, ~\forall k\in \mathcal{K} \end{align} where $\beta_k$ represents the index of BS to which user $k$ is associated, and $\mathds{1}_{\beta_k=j}$ is the indicator function such that $\mathds{1}_{\beta_k=j}=1$ if $\beta_k=j$, and $\mathds{1}_{\beta_k=j}=0$ if $\beta_k\neq j$. 
All analysis and results in this paper are per association block, and thus we do not consider a time index in our formulations. In a HetNet composed of different types of BSs, each BS $j$ can have a different \textit{quota} $q_j$, i.e., the number of UEs it can serve simultaneously. We define $\mathbf{q}=[q_1,...,q_J]$ as the \textit{quota vector} of BSs, and $\mathcal{K}_j$ as the \textit{activation set} of BS $j$, which represents the set of active UEs in BS $j$, such that $\mathcal{K}_j \subseteq \mathcal{K}$, $|\mathcal{K}_j|\leq q_j$. Thus, we can define \textit{load balancing constraints} for BSs as \begin{align}\label{TFA_cons_02} \sum_{k\in\mathcal{K}}\mathds{1}_{\beta_k=j} &\leq q_j, ~\forall j\in \mathcal{J} \end{align} These constraints state that the number of UEs served by BS $j$ cannot exceed its quota $q_j$. The load balancing constraints allow our formulation to specify each BS's quota separately. This makes the resulting user association scheme applicable to HetNets where different types of BSs have different capabilities. 
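The unique-association and load-balancing constraints above are straightforward to check programmatically. The following is a minimal sketch, assuming a Python-style association vector in which `None` marks an unassociated UE (a convention of this sketch, not the paper's notation):

```python
from collections import Counter

def is_feasible(beta, quotas):
    """Check the load-balancing constraints: with beta[k] the index of
    UE k's serving BS (None if unassociated), the vector representation
    already enforces the unique-association constraints, so it remains
    to verify that no BS j serves more than q_j UEs."""
    load = Counter(b for b in beta if b is not None)
    return all(load[j] <= q for j, q in quotas.items())

# Toy example: J = 2 BSs with quotas q_1 = 2, q_2 = 1.
quotas = {1: 2, 2: 1}
ok = is_feasible([1, 1, 2, None], quotas)       # within quotas -> True
bad = is_feasible([2, 2, 1, 1], quotas)         # BS 2 overloaded -> False
```

Note that encoding the association as a single vector of BS indices, as in the paper, makes the unique-association constraints hold by construction; only the quota constraints need an explicit check.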
When UE $k$ is connected to MCBS $j\in\mathcal{B}$, its \textit{instantaneous rate} (in bps/Hz) is \begin{equation}\label{R_kj_muW} R_{k,j}^{\mu\text{W}}(\mathbold{\beta}) = \log_2\left(1+ \frac{P_{k,j}h_{k,j} h_{k,j}^*}{v_{k,j}(\mathbold{\beta})}\right) \end{equation} where $P_{k,j}$ represents the transmit power from BS $j$ dedicated to UE $k$, and $v_{k,j}$ is the interference plus noise given as \begin{align} v_{k,j}(\mathbold{\beta}) = \sum_{\substack{l\in \mathcal{K}_{j} \\ l\neq k}} P_{l,j}h_{k,l,j}h_{k,l,j}^*+\sum_{\substack{i\in \mathcal{B} \\ i\neq j}} \sum_{\substack{l\in \mathcal{K}_i}} P_{l,i}h_{k,l,i}h_{k,l,i}^*+ N_0 \nonumber \end{align} Similarly, the instantaneous rate of UE $k$ connected to SCBS $j\in\mathcal{S}$ is given by \begin{equation}\label{R_kj_mmW} R_{k,j}^\text{mmW}(\mathbold{\beta}) = \log_2\Big |\mathbf{I}_{n_k} + \mathbf{V}_{k,j}^{-1}(\mathbold{\beta})P_{k,j}\mathbf{H}_{k,j} \mathbf{H}_{k,j}^*\Big | \end{equation} where $\mathbf{V}_{k,j}$ is the interference and noise covariance matrix given as \begin{align} \mathbf{V}_{k,j}(\mathbold{\beta})&=\sum_{\substack{l\in \mathcal{K}_{j} \\ l\neq k}} P_{l,j}\mathbf{H}_{k,l,j}\mathbf{H}_{k,l,j}^*\nonumber \\&+ \sum_{\substack{i\in \mathcal{S} \\ i\neq j}} \sum_{\substack{l\in \mathcal{K}_i}} P_{l,i}\mathbf{H}_{k,l,i}\mathbf{H}_{k,l,i}^* + N_0 \mathbf{W}_k^*\mathbf{W}_k \nonumber \end{align} Note that the summations in $v_{k,j}$ and $\mathbf{V}_{k,j}$ are taken over the activation set of BSs, indicating the dependency between interference and user association. \subsection{Centralized vs. Distributed User Associations}\label{centralized_vs_distributed} User association is usually studied in the literature in the form of centralized algorithms. Centralized algorithms can reach a near-optimal solution, however, they require a central coordinator, for example, located in a cloud-radio access network (C-RAN), to collect all required CSI and run the user association algorithm. 
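For the microwave tier, the instantaneous rate above can be computed directly from the effective channels and the association vector. The following is a minimal NumPy sketch; the zero-indexed UEs/BSs and the array layout `h[k, l, i]` for the effective channels $h_{k,l,i}$ are assumptions made for illustration:

```python
import numpy as np

def rate_muW(k, j, h, P, beta, N0=1.0):
    """Instantaneous rate (in bps/Hz) of UE k served by MCBS j.
    h[k, l, i] is the effective scalar channel h_{k,l,i} after
    precoding, P[l, i] the power BS i allocates to UE l, and beta the
    association vector (beta[l] = serving BS of UE l, None if none)."""
    signal = P[k, j] * abs(h[k, k, j]) ** 2
    v = N0  # interference plus noise
    for l, b in enumerate(beta):
        # Sum intra-cell (b == j, l != k) and inter-cell (b != j)
        # interference over the activation sets implied by beta.
        if b is not None and (l, b) != (k, j):
            v += P[l, b] * abs(h[k, l, b]) ** 2
    return np.log2(1.0 + signal / v)

# Two UEs, two MCBSs, unit channels and unit powers:
K, J = 2, 2
h = np.ones((K, K, J), dtype=complex)
P = np.ones((K, J))
beta = [0, 1]                       # UE 0 -> BS 0, UE 1 -> BS 1
r = rate_muW(0, 0, h, P, beta)      # log2(1 + 1/(1+1)) = log2(1.5)
```

The mmWave rate follows the same pattern with matrix-valued effective channels and a log-determinant in place of the scalar SINR expression.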
In this centralized structure, BSs transmit CSI reference signals via physical downlink control channels (PDCCHs) to enable UEs to estimate the CSI. The CSI is sent back to the BSs and then forwarded to the C-RAN where the central coordinator is located. Since user instantaneous rates are updated at each iteration of a centralized algorithm, such as the WCS algorithm in \cite{TWC}, based on the current association vector, the raw CSI, and not just the SINR, must be available to compute these rates. After collecting all required CSI from the network, the central coordinator runs the user association algorithm to find the best possible connections. The signaling overhead in this ideal centralized structure is usually high due to network densification in B5G HetNets, which requires a significant amount of CSI to be reported, leading to high computational cost and time complexity as the network size increases. Distributed user association algorithms have been introduced as low-complexity approaches with reasonably low convergence time. These algorithms only involve low bit-rate signaling exchanges between UEs and BSs, such that association decisions happen in a distributed fashion without the need for a central coordinator. We assume the UEs exchange messages with the BSs directly and that the association decisions for different UEs can occur asynchronously at different times. In this paper, we employ matching theory and propose a new matching game to solve the user association problem in a distributed fashion. We also introduce a multi-game matching algorithm to further improve the network performance. \section{Matching Theory for Distributed User Association} \label{MT_for_DUA} Matching theory has attracted the attention of researchers due to its low complexity and fast convergence \cite{gu2015matching}. These promising features make matching theory a suitable framework for distributed user association in fast-varying mmWave systems. 
User association can be posed as a matching game with two sets of players -- the BSs and the UEs. In this game, each player collects the required information to build a preference list based on its own objective function using local measurements. Each UE then applies to the BSs based on its preference list, and association decisions are made by the BSs individually. Thus, no central coordinator is required and user association can be performed in a fully distributed manner. This feature makes matching theory an efficient approach for designing distributed user association in B5G HetNets. \subsection{User Association Matching Game Concepts} In the context of matching theory, the user association problem can be cast as a college admission game where the BSs with their specific quotas represent the colleges and the UEs are the students. This framework is suitable for user association in a HetNet where BSs may have different quotas and capabilities. In order to formulate our user association as a matching game, we first introduce some basic concepts based on two-sided matching theory \cite{roth1992two}. \begin{definition1} A \textit{preference relation} $\succeq_k$ helps UE $k$ to specify the preferred BS between any two BSs $i,j \in \mathcal{J},~i\neq j$ such that \begin{equation}\label{prf_rlt_K} j \succeq_k i \Leftrightarrow \Psi_{k,j}^{\text{UE}} \geq \Psi_{k,i}^{\text{UE}} \Leftrightarrow \text{UE}~k~\text{prefers BS}~j~\text{to BS}~i \end{equation} where $\Psi^{\text{UE}}_{k,j}$ is the \textit{preference value} between UE $k$ and BS $j$, which can be simply a local measurement at the UE (e.g. SINR). 
Similarly, for any two UEs $k,l \in \mathcal{K},~k\neq l$, each BS builds a preference relation $\succeq_j$ such that \begin{equation}\label{prf_rlt_J} k \succeq_j l \Leftrightarrow \Psi_{k,j}^{\text{BS}} \geq \Psi_{l,j}^{\text{BS}} \Leftrightarrow \text{BS}~j~\text{prefers UE}~k~\text{to UE}~l \end{equation} \end{definition1} \begin{definition2} Based on the preference relations, each UE $k$ (BS $j$) builds its own \textit{preference list} $\mathcal{P}_k^\text{UE}$ ($\mathcal{P}_j^\text{BS}$) over the set of all BSs (UEs) in descending order of preference. \end{definition2} \begin{definition3} Association vector $\mathbold{\beta}$ defines a \textit{matching relation}\footnote{In this paper we use the terms "association vector" and "matching relation" interchangeably.}, which specifies the association between UEs and BSs and has the following properties \begin{enumerate} \item $\beta_k \in \mathcal{J}$ with $k\in\mathcal{K}$; \item $\beta_k=j$ if and only if $k\in \mathcal{K}_j$. \end{enumerate} The second property states that the association vector $\mathbold{\beta}$ is a bilateral matching. \end{definition3} \begin{definition4} A user association \textit{matching game} $\mathcal{G}$ is a game with two sets of players (BSs and UEs) and a set of rules which apply on the input data to obtain a result. The input data of the game are: \begin{itemize} \item Preference lists of BSs: $\mathcal{P}_j^\text{BS}, \forall j\in\mathcal{J}$ \item Preference lists of UEs: $\mathcal{P}_k^\text{UE}, \forall k\in\mathcal{K}$ \item BSs' quota: $q_j, \forall j\in\mathcal{J}$ \end{itemize} and game outcome or result is the association vector $\mathbold{\beta}$. Each particular game is defined by the specific way of building preference lists and its set of rules. \end{definition4} \subsection{Building Preference Lists}\label{building_prf_lists} A preliminary step in a matching game is to build the preference lists of the players. 
In this subsection, we describe the process of building the preference lists for the UEs and BSs. These preference lists can be built based on a number of metrics, including users' instantaneous rates, channel norms, or UEs' local measurements. \subsubsection{Users' instantaneous rates} Using users' instantaneous rates to build preference lists requires knowledge of both instantaneous and large-scale CSI. For example, we can use the user instantaneous rate in (\ref{R_kj_muW})-(\ref{R_kj_mmW}) as the objective function for both sides of the game, i.e., \begin{equation}\label{R_as_PrfList} \Psi_{k,j}^{\text{UE}}=\Psi_{k,j}^{\text{BS}}=R_{k,j},~\forall k\in\mathcal{K},j\in\mathcal{J} \end{equation} \noindent In other words, each UE prefers the BS which provides the highest user instantaneous rate, and each BS is willing to serve the UE that can get the highest user instantaneous rate. This metric, however, requires frequent updates of the preference values, which depend on the associations of other users, and can complicate the process. One approach is to fix the associations of all other UEs based on an initial association vector when computing the current user's instantaneous rates from all BSs. Alternatively, we can switch the association of the current UE with another UE connected to a BS while computing the instantaneous rate from that BS to the current UE, in order to maintain the BS's quota. This switching can be either random or with the weakest UE connected to that BS, as in \cite{TWC}. \subsubsection{Channel norms} The preference lists can also be built based on CSI alone. As stated earlier, we assume both instantaneous and large-scale CSI are available through beamforming CSI estimation techniques. 
In this case, the preference values can be expressed as the Frobenius norm of a MIMO channel \begin{equation} \Psi_{k,j}^{\text{UE}}=\Psi_{k,j}^{\text{BS}}=||\mathbf{H}_{k,j}||_F,~\forall k\in\mathcal{K},j\in\mathcal{J} \end{equation} \noindent where $\mathbf{H}_{k,j}$ is the channel matrix between UE $k$ and BS $j$, and the operator $||.||_F$ represents the Frobenius norm. Using the channel norm, each UE (BS) ranks the BSs (UEs) and builds its own preference list such that the BS (UE) with the strongest channel (highest channel norm) is the most preferred one. The preference lists can also be built based on large-scale CSI alone, which will stay valid for longer durations but may result in a lower overall network utility. \subsubsection{Local CQI measurements} A more practical approach to building the preference lists is based on UEs' local measurements. Using reference signal received power information, each UE measures the received SINR from each BS as a ratio of a valid signal to non-valid signals. Then, the UE converts this SINR information to CQI values and reports them to BSs via the PUCCH signaling mechanism \cite{3GPP_PDCCH}. In this approach, the preference values are given by \begin{equation} \Psi_{k,j}^{\text{UE}}=\Psi_{k,j}^{\text{BS}}=\text{CQI}_{k,j},~\forall k\in\mathcal{K},j\in\mathcal{J} \end{equation} where $\text{CQI}_{k,j}$ represents the channel quality between UE $k$ and BS $j$. Using CQI values, each UE (BS) ranks the BSs (UEs) such that the BS (UE) with the highest CQI is the most preferred one. The periodicity of CQI report is a configurable parameter and it could be as fast as every four time slots\footnote{The duration of time slot in 5G NR depends on transmission numerology, and is less than 1ms which is the subframe duration \cite{3GPP_PHY}. Thus, CQI can be reported as fast as every 4ms.} \cite{3GPP_RRC}. 
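Whichever preference value is chosen (instantaneous rate, channel norm, or CQI), the list-building step itself reduces to sorting. A minimal sketch, assuming a $K\times J$ matrix of preference values shared by both sides as in the expressions above:

```python
import numpy as np

def build_preference_lists(Psi):
    """Rank opposite-side players in descending order of the preference
    value Psi[k, j] (channel norm, CQI, or instantaneous rate) between
    UE k and BS j."""
    ue_prefs = {k: np.argsort(-Psi[k, :]).tolist()
                for k in range(Psi.shape[0])}
    bs_prefs = {j: np.argsort(-Psi[:, j]).tolist()
                for j in range(Psi.shape[1])}
    return ue_prefs, bs_prefs

# Toy 2-UE x 2-BS example (e.g. Frobenius channel norms):
Psi = np.array([[3.0, 1.0],
                [2.0, 5.0]])
ue_prefs, bs_prefs = build_preference_lists(Psi)
# UE 0 prefers BS 0 then BS 1; BS 1 prefers UE 1 then UE 0.
```

Since both sides use the same $\Psi_{k,j}$ here, the UE and BS lists are two orderings of a single value matrix; with different per-side objectives, the two sorts would simply use different matrices.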
\begin{figure} \centering \includegraphics[scale=.625]{DA_process_bold_new.pdf} \caption{User association process in the DA game. (a) UE $k$ is associated with BS $j$, (b) UE $k$ is pushed out of waiting lists and eventually rejected by all BSs.} \label{DA_game_fig} \end{figure} \subsection{Deferred Acceptance \cite{gale1962college} User Association Matching Game}\label{DA_subsec} Matching theory dates back to the early 1960s, when mathematicians Gale and Shapley proposed the now-famous DA matching game, which can be posed as a college admission game and produces optimal and stable results \cite{gale1962college}. Applied to user association, the input data of this game are the preference lists of BSs and UEs as well as the quotas of BSs, and its result is a matching relation $\mathbold{\beta}$. While the DA game can be implemented in a centralized way, in this paper we focus on its distributed implementation, which does not require a central entity to collect the preference lists of all BSs and UEs and run the game \cite{gu2015matching}. In what follows, we describe the rules of this game in the context of user association.
\begin{algorithm}[t]\small \SetAlgoLined \KwData{$\mathcal{P}_j^\text{BS}$, $\mathcal{P}_k^\text{UE}$, $q_j$, $\forall k\in\mathcal{K},j\in \mathcal{J}$} \KwResult{Association vector $\mathbold{\beta}=[\beta_1, \beta_2, ..., \beta_K]$} \textbf{Initialization}: Set $m_k=1,~\forall k$, $n=1$, form a rejection set $\mathcal{R}=\{1, 2, ..., K\}$, initialize a set of unassociated UEs $\mathcal{U}=\varnothing$ and the waiting list of each BS $\mathcal{W}_j^0=\varnothing,~\forall j$\; \While{$\mathcal{R}\neq\varnothing$}{ Each UE $k\in\mathcal{R}$ applies to its $m_k$th preferred BS\; Each BS $j$ forms its current waiting list $\mathcal{W}_j^{n}$ from its new applicants and its previous waiting list $\mathcal{W}_j^{n-1}$\; Each BS $j$ keeps the first $q_j$ preferred UEs from $\mathcal{W}_j^{n}$, and rejects the rest of them\; Update $\mathcal{R}$ to the set of UEs rejected in this iteration\; \For{$k\in\mathcal{R}$}{ $m_k\leftarrow m_k+1$\; \If{$m_k>J$}{ Remove UE $k$ from $\mathcal{R}$ and add it to $\mathcal{U}$\; } } $n\leftarrow n+1$\; } Form $\mathbold{\beta}$ based on the final waiting lists of BSs $\mathcal{W}_j,~j=1,...,J$. \caption{DA User Association Game (based on \cite{gale1962college})} \label{DA_Game} \end{algorithm} We first define $m_k$ as the \textit{preference index} of UE $k$, $\mathcal{R}$ as the \textit{rejection set} of UEs, $\mathcal{U}$ as the \textit{set of unassociated UEs}, and $\mathcal{W}_j$ as the \textit{waiting list} of BS $j$. Before starting the game, we set the preference index of each UE to one ($m_k=1,~\forall k$), form a rejection set $\mathcal{R}$ including all $K$ UEs, and initialize a set of unassociated UEs ($\mathcal{U}=\varnothing$) and the waiting list of each BS ($\mathcal{W}^0_j=\varnothing,~\forall j$). At the $n$th iteration of the game, each UE $k\in\mathcal{R}$ applies to its $m_k$th preferred BS by sending an application message.
Each BS $j$ then ranks its new applicants together with the UEs in its current waiting list ($\mathcal{W}_j^{n-1}$) based on its preference list, keeps the first $q_j$ UEs in its new waiting list $\mathcal{W}_j^n$, and rejects the rest of the UEs. Each BS accordingly sends a response of either rejection or waitlisting to its new applicants as well as to those UEs that were previously in its waiting list but are now rejected. Rejected UEs remain in the rejection set, update their preference index $m_k$, and apply to their next preferred BS in the next iteration. If $m_k>J$, UE $k$ has applied to all BSs and been rejected by all of them. Thus, we remove UE $k$ from the rejection set $\mathcal{R}$ and add it to the set of unassociated UEs $\mathcal{U}$. The game terminates when the rejection set is empty. In the DA game, the associations are only determined when the game terminates: the BSs use their waiting lists to keep the most preferred UEs over all application rounds, and the final waiting lists after the last iteration determine the associations. This DA user association process is shown in Fig. \ref{DA_game_fig} and the algorithm for the DA user association game is described in Alg. \ref{DA_Game}. \subsection{User Association Matching Game Metrics}\label{MG_Metrics} \subsubsection{Game Stability vs. Other Objectives} \label{Stability_sec} In the context of matching theory, stability has been considered a key performance metric \cite{gale1962college}. Stability means there is no blocking pair, i.e., a UE-BS pair $(k,j)\notin\mathbold{\beta}$ in which both prefer each other to their current partners under the matching relation $\mathbold{\beta}$ \cite{gale1962college}. Stability in matching is an important criterion as discussed in the original paper by Gale and Shapley \cite{gale1962college}, who showed that DA is optimal in terms of stability.
There is, however, no direct implication that an optimal matching in terms of stability is also optimal in terms of another performance objective, including the metric used to build the preference lists. We illustrate this lack of connection via an example. Table \ref{Prf_table} shows the preference lists of the players (in the same format as in \cite{gale1962college}): the first number of each cell is the preference of the Greek-letter player for the Roman-letter player, and the second number is the preference of the Roman-letter player for the Greek-letter player. Each preference list is built based on the metric values (for example, spectral efficiency in user association) in Table \ref{MetVal_table}. We can see that the stable assignment here is $(\alpha,A)$ and $(\beta,B)$, as shown in bold in Table \ref{Prf_table}. The other assignment set of $(\alpha,B)$ and $(\beta,A)$ is inherently unstable, since $\alpha$ prefers $A$ over $B$, and $A$ also prefers $\alpha$ over $\beta$ (per the definition of stability in \cite{gale1962college}). The total metric value for the stable assignment, however, is 5, which is less than the value of 6 for the unstable assignment. Thus, optimality in terms of stability does not necessarily result in the highest (optimal) metric values. As such, when applied to a cellular system where a performance metric such as spectral efficiency is of primary interest, a stable association does not necessarily lead to the optimal spectral efficiency. This fact will also be confirmed via simulation results in Sec. \ref{Sim_res} (see Fig. \ref{SumRate}).
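The example can also be checked mechanically. The sketch below is our own minimal one-to-one DA implementation (not code from the paper) run on the preferences of Table \ref{Prf_table}, with metric values from Table \ref{MetVal_table}:

```python
# Toy check of the stability-vs-utility gap on the two-player example.
pref_ue = {"alpha": ["A", "B"], "beta": ["A", "B"]}         # Greek side
pref_bs = {"A": ["alpha", "beta"], "B": ["alpha", "beta"]}  # Roman side
metric = {("alpha", "A"): 4, ("alpha", "B"): 3,
          ("beta", "A"): 3, ("beta", "B"): 1}

def deferred_acceptance(pref_ue, pref_bs):
    """One-to-one DA (quota 1): proposers apply down their lists,
    each receiver holds its single best applicant so far."""
    nxt = {u: 0 for u in pref_ue}       # next list index each proposer tries
    hold = {b: None for b in pref_bs}   # waiting list of size one
    free = set(pref_ue)
    while free:
        u = free.pop()
        b = pref_ue[u][nxt[u]]
        nxt[u] += 1
        cur = hold[b]
        if cur is None:
            hold[b] = u
        elif pref_bs[b].index(u) < pref_bs[b].index(cur):
            hold[b], free = u, free | {cur}   # displace the weaker applicant
        else:
            free.add(u)                       # rejected; tries next BS
    return {u: b for b, u in hold.items() if u is not None}

stable = deferred_acceptance(pref_ue, pref_bs)
stable_total = sum(metric[(u, b)] for u, b in stable.items())
other_total = metric[("alpha", "B")] + metric[("beta", "A")]
print(stable, stable_total, other_total)   # stable total 5 < unstable total 6
```

Running it returns the bold assignment $(\alpha,A),(\beta,B)$ with total 5, while the unstable swap totals 6, reproducing the gap discussed above.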
\begin{table}[t] \setlength{\tabcolsep}{3pt} \begin{minipage}{.5\linewidth} \centering \caption{Preference lists of players} \label{Prf_table} \begin{tabular}{cccc} \toprule & $~~~~A~~~~$& $~~~~B~~~~$\\ \midrule $~~~\alpha~~$ & \textbf{(1,1)} & (2,1) \\ $~~~\beta~~$ & (1,2) & \textbf{(2,2)} \\ \bottomrule \end{tabular} \end{minipage}\hfill \begin{minipage}{.5\linewidth} \centering \caption{Metric values of players} \label{MetVal_table} \begin{tabular}{ccc} \toprule & $~~~~A~~~~$& $~~~~B~~~~$\\ \midrule $~~~\alpha~$ & 4 & 3 \\ $~~~\beta~$ & 3 & 1 \\ \bottomrule \end{tabular} \end{minipage}\hfill \end{table} Furthermore, stability becomes less relevant in a setting where users' preferences change over time, as is the case in a cellular system. Since stability does not apply in a dynamic system where user association can change at every association slot, we instead define four relevant metrics for distributed user association matching games. \subsubsection{User Association Delay} This metric represents the amount of time it takes for a UE to get associated with a BS. Denote the time delay for one iteration as $t^\text{iter}$, which is a system parameter and independent of the matching game. Thus, we can compare the association delay of different matching games by comparing their respective time delays until association. In a DA game, the association decision is postponed to the last iteration of the game due to the presence of waiting lists. Thus, for DA, all UEs have the same association delay ($\tau_{k,\text{DA}}=\tau_\text{DA}$), which is proportional to the total number of iterations $N_\text{DA}^\text{iter}$ of the game as follows \begin{equation} \tau_\text{DA}\triangleq N^\text{iter}_\text{DA}t^\text{iter} \end{equation} For an EA game, as will be described in Sec.
\ref{EA_MG}, the $k^\text{th}$ UE's association delay is \begin{equation} \tau_{k,\text{EA}}\triangleq N^\text{appl}_kt^\text{iter} \end{equation} where $N_k^\text{appl}$ is the number of applications of UE $k$ until it is associated with a BS. Thus, the average association delay for the EA game can be obtained as \begin{equation} \tau_{\text{EA}} = \frac{1}{K}\sum_{k\in\mathcal{K}}\tau_{k,\text{EA}} \end{equation} We assume that all applicants apply simultaneously at each iteration of the game, thus the time duration for each iteration is the same for every applicant. \subsubsection{User Power Consumption for Association} In a matching game, each UE applies to its most preferred BSs, until it is accepted by one of them. Each application requires the UE to send a signal to a BS, which is a power-consuming process. We assume that each application consumes a specific amount of power ($P^\text{appl}$), which is also a system parameter independent of the matching game. Thus, the power consumption of UE $k$ during the association process can be defined as \begin{equation} \gamma_k\triangleq N^\text{appl}_k P^\text{appl} \end{equation} The power consumption of different users during the association process can thus be compared via their numbers of applications: the lower the number of applications of a UE, the lower its power consumption. \subsubsection{Percentage of Unassociated Users} In case the number of UEs exceeds the total quota of the BSs, or some BSs are out of range such that the preference lists of certain UEs are shorter than $J$, there may be unassociated UEs at the end of a matching game. We can evaluate the performance of matching games by comparing the percentage of unassociated UEs under these games. In Sec. \ref{Sim_res}, we consider the following scenarios in evaluating the performance of matching games.
\begin{itemize} \item \textit{Underload}: $K<\sum_{j\in\mathcal{J}} q_j$ \item \textit{Critical load}: $K=\sum_{j\in\mathcal{J}} q_j$ \item \textit{Overload}: $K>\sum_{j\in\mathcal{J}} q_j$ \end{itemize} \subsubsection{Network Utility Function} A network utility function can be used to compare the performance of centralized and distributed user association algorithms. In particular, we employ the sum-rate utility function to assess the performance of association schemes. Defining the instantaneous user throughput vector $\mathbf{r}(\mathbold{\beta})\triangleq (R_{1,\beta_1}, ..., R_{K,\beta_K})$, we can express the sum-rate utility function as \begin{equation}\label{sum_rate} U(\mathbf{r}(\mathbold{\beta}))\triangleq \sum_{k\in\mathcal{K}} R_{k,\beta_k} \end{equation} where $R_{k,\beta_k}$ is the instantaneous rate of UE $k$ associated with BS $\beta_k$ given in (\ref{R_kj_muW})-(\ref{R_kj_mmW}). \begin{figure} \centering \includegraphics[scale=0.625]{EA_process_bold_new.pdf} \caption{User association process in an EA game. (a) UE $k$ is associated with BS $j$, (b) UE $k$ is not associated as it is rejected at the $m_k$th (last) iteration of the game, with $m_k\geq J$.} \label{EA_game_fig} \end{figure} \section{Proposed Early Acceptance Matching Games}\label{EA_MG} The original DA game defers the matching decision until the last iteration of the game, and thus is suitable for processes which do not require making decisions in real time. When applied to user association, however, all UEs are kept in the BSs' waiting lists until the game terminates. This deferral can result in excessive delay for the association process and can be problematic when it comes to user association in fast-varying mmWave systems which require low-latency communications. In order to overcome this problem, we propose a set of new matching games, called \textit{early acceptance} (EA) games, to solve the user association problem in B5G HetNets.
In an EA game, BSs immediately decide about acceptance or rejection of UEs at each application round. This leads to a significantly faster and more efficient user association process. The DA game, however, has an advantage over the EA games in terms of stability, but this property is less relevant in the user association context since the preference lists of users change with time. Furthermore, stability need not lead to optimal throughput, as shown in Sec. \ref{Stability_sec}. Our simulations also confirm that the EA game achieves slightly higher network throughput at a much lower delay. \subsection{Proposed Early Acceptance User Association Matching Games} We introduce three distributed EA matching games which follow a set of similar rules, but differ in how they update the UE/BS preference lists and in whether UEs reapply to BSs. Similar to the DA game, an EA game takes as input data the preference lists of BSs and UEs and the quotas of BSs, and delivers a matching relation $\mathbold{\beta}$. The initial steps of these games are similar to those of DA: set the preference index of all UEs to one ($m_k=1,~\forall k$), form a rejection set $\mathcal{R}$ composed of all $K$ UEs, and initialize a set of unassociated UEs ($\mathcal{U}=\varnothing$). \subsubsection{EA-Base Game} This game defines the basic rules of the EA games. At each iteration of the game, each UE $k\in\mathcal{R}$ applies to its $m_k$th preferred BS (say BS $j$) regardless of its available quota. If UE $k$ is among the top $q_j$ UEs in the preference list of BS $j$, it will be immediately accepted by this BS, and the game updates the association vector with $\beta_k=j$. The accepted UEs are removed from the rejection set $\mathcal{R}$, and any rejected UEs apply to their next preferred BS in the next iteration. If a UE applies to all the BSs in its preference list and is rejected by all of them, we add the UE to the set of unassociated UEs ($\mathcal{U}$).
This game is the simplest form of EA, since the UEs and BSs do not update their preference lists during the game, and a UE rejected by a BS will not reapply to that BS. These features make the EA-Base game very fast, with only a small number of applications ($\leq J$) for each UE. However, at the end of this game some UEs may remain unassociated regardless of the loading scenario. \subsubsection{EA-PLU (EA with Preference List Updating) Game} In order to improve the performance of EA-Base, we update the preference lists of UEs and BSs at the end of each iteration. When an association happens, the following updates occur: 1) associated UEs are removed from the rejection set and from the preference lists of all BSs, and 2) for each new association with BS $j$, its quota is updated as $q_j\leftarrow q_j-1$. When a BS runs out of quota, it informs all the UEs by sending a broadcast message, and the UEs remove this BS from their preference lists. As a result, the UEs will no longer apply to that BS. Similar to EA-Base though, there is still no guarantee that all UEs are associated at the end of the EA-PLU game.
\begin{algorithm}[t]\small \SetAlgoLined \KwData{$\mathcal{P}_j^\text{BS}$, $\mathcal{P}_k^\text{UE}$, $q_j$, $\forall k\in\mathcal{K},j\in \mathcal{J}$} \KwResult{Association vector $\mathbold{\beta}=[\beta_1, \beta_2, ..., \beta_K]$ } \textbf{Initialization}: Set $m_k=1,~\forall k$, form a rejection set $\mathcal{R}=\{1, 2, ..., K\}$, and initialize a set of unassociated UEs $\mathcal{U}=\varnothing$\; \While{$\mathcal{R}\neq\varnothing$}{ Each UE $k\in\mathcal{R}$ applies to its $m_k$th preferred BS (namely BS $j$ with $q_j\neq 0$)\; \eIf{$k \in \mathcal{P}_j^\text{BS}(1:q_j)$}{ $\beta_k=j$\; $q_j\leftarrow q_j-1$\; \If{$q_j=0$}{ Remove BS $j$ from $\mathcal{P}_k^\text{UE},~\forall k\in\mathcal{K}$\; }Remove UE $k$ from $\mathcal{R}$ and $\mathcal{P}_j^\text{BS},~\forall j\in\mathcal{J}$\;}{ \eIf{$\mathcal{P}_k^\text{UE}\neq\varnothing$}{ $m_k\leftarrow m_k+1$\; \If{$m_k>|\mathcal{P}_k^\text{UE}|$}{ $m_k\leftarrow 1$\; } Keep UE $k$ in $\mathcal{R}$\;}{ Remove UE $k$ from $\mathcal{R}$ and add it to $\mathcal{U}$\; } } } \caption{Proposed EA-PLU-RA User Association Game} \label{EA_Game} \end{algorithm} \subsubsection{EA-PLU-RA (EA with Preference List Updating and ReApplying) Game}\label{EA-PLU-RA} In order to improve the percentage of associated users, we allow each UE to reapply to those BSs from which it has been rejected in previous iterations. Recall that if a UE is rejected by a BS, it is kept in the rejection set. In the following iterations, each UE applies to the next preferred BS in its updated preference list. When a UE reaches the end of its updated preference list, it returns to the beginning and repeats the application process until its updated preference list becomes empty or it is associated. In the case that the updated preference list becomes empty, no reapplication is possible, and the UE is removed from the rejection set $\mathcal{R}$ and added to the set of unassociated UEs $\mathcal{U}$.
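To make the rules of Alg. \ref{EA_Game} concrete, the following is a compact Python sketch of EA-PLU-RA. It is a simplified re-implementation with 0-based preference indices, not the paper's simulation code, and the toy preference lists at the bottom are hypothetical:

```python
def ea_plu_ra(pref_ue, pref_bs, quota):
    """EA with preference-list updating and reapplication (sketch).
    pref_ue[k] / pref_bs[j]: lists in descending preference order."""
    pref_ue = {k: list(v) for k, v in pref_ue.items()}   # working copies
    pref_bs = {j: list(v) for j, v in pref_bs.items()}
    quota = dict(quota)
    beta, unassoc = {}, set()
    m = {k: 0 for k in pref_ue}              # 0-based preference index
    rejected = set(pref_ue)
    while rejected:
        for k in sorted(rejected):           # one application round
            if not pref_ue[k]:               # updated list empty: give up
                rejected.discard(k)
                unassoc.add(k)
                continue
            m[k] %= len(pref_ue[k])          # wrap around (reapplication)
            j = pref_ue[k][m[k]]
            if k in pref_bs[j][:quota[j]]:   # among top-q_j remaining UEs
                beta[k] = j                  # immediate acceptance
                quota[j] -= 1
                rejected.discard(k)
                for lst in pref_bs.values(): # UE k leaves all BS lists
                    if k in lst:
                        lst.remove(k)
                if quota[j] == 0:            # full BS leaves all UE lists
                    for lst in pref_ue.values():
                        if j in lst:
                            lst.remove(j)
            else:
                m[k] += 1                    # rejected: try next BS

    return beta, unassoc

# Toy run: 3 UEs, 2 BSs with quotas 1 and 2 (critical load).
beta, un = ea_plu_ra(pref_ue={0: [0, 1], 1: [0, 1], 2: [0, 1]},
                     pref_bs={0: [1, 0, 2], 1: [2, 0, 1]},
                     quota={0: 1, 1: 2})
print(beta, un)   # all three UEs associated, none left unassociated
```

In this toy run, UE 1 is accepted by its top choice BS 0, which fills and drops out of the remaining UEs' lists, after which UEs 0 and 2 end up at BS 1; since every UE's original list covers all BSs, no UE is unassociated, as Lemma 2 predicts.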
With this reapplication, when the length of the original preference list of each UE is equal to $J$, no UE is unassociated at the end of the game in the underload and critical load cases (see Lemma 2). In the case of an incomplete original preference list, a UE may remain unassociated. The user association process in the EA-PLU-RA game is depicted in Fig. \ref{EA_game_fig} and described in Alg. \ref{EA_Game}. We note that it is practical to update the preference lists of UEs and BSs after each round of applications during an EA matching game. When UE-BS associations occur, the associating BSs update their quotas and preference lists. They also inform all other BSs (via backhaul links) and UEs (via a cell broadcast message \cite{3GPP_CB}) to update their preference lists accordingly. In particular, each BS updates its quota and removes associated UEs from its preference list. If a BS runs out of quota, all unassociated UEs remove that BS from their preference lists. This reporting mechanism incurs minimal signaling overhead in practical implementations. \subsection{Convergence of Matching Games}\label{Conv_MGs} For a matching game, convergence implies that the game will eventually terminate and produce a matching result. Depending on the game and loading scenario, different events can happen at convergence. In the EA-Base and EA-PLU games, which do not allow reapplication, convergence occurs when the last unassociated UE(s) applies to its least preferred BS or all BSs run out of quota. In the EA-PLU-RA game, convergence occurs when either all UEs are associated or all BSs are full. We provide four lemmas on the convergence of the proposed EA matching games and obtain their \textit{worst convergence time} (maximum number of iterations). \textbf{Lemma 1}.
\textit{The EA-Base and EA-PLU games always converge, and their worst convergence time is $J$ iterations.} \begin{proof} The EA-Base and EA-PLU games converge when the last unassociated UE(s) applies to its least preferred BS after having been rejected by all others. Since each UE can only apply once to each BS (reapplying is not allowed), the games terminate in at most $J$ iterations. \end{proof} \textbf{Lemma 2}. \textit{For the underload and critical load cases, when the length of the original preference list of each UE is equal to the number of BSs $J$, all UEs will be associated at the end of the EA-PLU-RA game.} \begin{proof} If the length of the original preference list of each UE is equal to $J$, i.e., $|\mathcal{P}^\text{UE}_k|=J,~\forall k\in\mathcal{K}$, then any rejected UE(s) will have a chance to apply to all BSs and therefore will be associated as long as there is quota left at any of the BSs, which is always the case in the underload and critical load scenarios. But if there is a UE with $|\mathcal{P}^\text{UE}_k|<J$, the UE cannot apply to all BSs and may become unassociated if all BSs in its preference list run out of quota. \end{proof} \textbf{Lemma 3}. \textit{The EA-PLU-RA game converges within a finite number of iterations.} \begin{proof} As stated in Alg. \ref{EA_Game}, the EA-PLU-RA game terminates when the rejection set $\mathcal{R}$ becomes empty. According to Lemma 2, for the underload and critical load cases when the length of the original preference list of each UE is equal to $J$, all UEs will be associated by the end of the game and the rejection set becomes empty. If there is a UE with an original preference list shorter than $J$, it keeps reapplying to the BSs in its preference list. Once a BS runs out of quota, it will be removed from the preference list of the UE. If all the BSs run out of quota, the preference list of the UE becomes empty, the UE will be moved to the set $\mathcal{U}$ (unassociated), and the rejection set becomes empty.
For the overload scenario, the game converges when all BSs run out of quota, which will eventually happen because of the preference list updates and reapplication. After this point, the preference lists of all rejected UEs are empty, and they will be moved from $\mathcal{R}$ to $\mathcal{U}$, causing the game to terminate. \end{proof} Because of reapplication, the worst convergence time of EA-PLU-RA is longer than that of the other two EA games and is harder to determine, as it depends on the preference setting. The next lemma provides an upper bound on the worst convergence time of the EA-PLU-RA game. \textbf{Lemma 4}. \textit{For a network with $J$ BSs and $K$ UEs, an upper bound on the maximum number of iterations for the EA-PLU-RA game under the critical load scenario is $N_{K,J}^\text{max} = J(K-J)+\sum_{j=1}^{J}j$.} \begin{proof} We prove the bound by reduction. When there are more than $J$ quotas left in the network, at least one UE must be associated after every $J$ iterations: the worst case is when no association occurs in the first $J-1$ iterations, and then at least one UE must be associated at the $J^\text{th}$ iteration. By this logic, when there are $J$ unassociated UEs left, an upper bound on the maximum number of iterations up to this point is $J(K-J)$. From this point on, if there are $L$ quotas left, then the longest it takes to get at least one UE associated is $L$ applications, in the case those quotas belong to different BSs. Thus, after at most $L$ applications, the numbers of unassociated UEs and available quotas reduce to $L-1$. Using this reduction, we can obtain an upper bound on the maximum number of iterations for a network with $J$ BSs and $K$ UEs as $N_{K,J}^\text{max} = J(K-J)+\sum_{j=1}^{J}j$. \end{proof} The bound in Lemma 4 is conservative and quite loose, as indicated by our numerical results; nevertheless, it provides a concrete cutoff value on the worst convergence time.
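The closed-form bound of Lemma 4 is straightforward to evaluate; a small helper (our own illustration, with toy arguments chosen only as an example):

```python
def ea_plu_ra_iter_bound(K: int, J: int) -> int:
    """Lemma 4 upper bound on EA-PLU-RA iterations under critical load:
    J*(K - J) + sum_{j=1}^{J} j = J*(K - J) + J*(J + 1)/2."""
    return J * (K - J) + J * (J + 1) // 2

# Example: K = 35 UEs and J = 5 BSs give 5*30 + 15 = 165 iterations.
print(ea_plu_ra_iter_bound(35, 5))   # 165
```

For comparison, the no-reapplication games of Lemma 1 terminate within only $J$ iterations, which shows how much looser the worst case becomes once reapplication is allowed.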
The actual worst convergence time is found numerically to be significantly smaller than the bound. \subsection{Complexity of Matching Games}\label{Comp_MGs} In this subsection, we analyze the complexity of the user association matching games. 1) \textit{Computational complexity of building preference lists}: Prior to starting the game, we need to build the preference lists of BSs and UEs. This process needs to be performed by each player of the game. As mentioned earlier, preference lists can be built based on the user's instantaneous rate or some local measurements at the UE. In practical scenarios, this information can be measured at a minimal computational complexity using the mechanism described in Sec. \ref{building_prf_lists}. In a distributed matching game, each UE can locally measure the received SINR from each BS separately, then compute the corresponding instantaneous rate with a single computation. After computing the instantaneous rates, the UE sorts these rates in descending order as a means of ranking the BSs to build its preference list. This sorting step results in a computational cost of $\mathcal{O}(J\log(J))$ at each UE. Similarly, we obtain the computational cost of building a preference list at each BS as $\mathcal{O}(K\log(K))$. As a result, the total computational cost for building the preference lists of all UEs and BSs is $\mathcal{O}(JK\log(JK))$. 2) \textit{Game execution complexity}: During a matching game, each UE applies to a BS by sending an application message, and it is notified about the BS decision via a response message. This process is the same for the DA and EA games on the UEs' side, and the number of iterations specifies the execution complexity for each game. However, the execution complexity of each game is different on the BSs' side because the BSs respond to applicants in different ways.
At each iteration of the DA game, each BS needs to perform a sorting procedure (Step 5) with cost $\mathcal{O}(K\log(K))$, which incurs a total computational cost of order $\mathcal{O}(JK\log(K))$ at each iteration. In the EA games, no such sorting is required by the BSs, and thus they have a much lower execution complexity than the DA game on the BSs' side. Our numerical results show that the number of iterations of these games is usually around the number of BSs ($J$). Thus, the total execution complexities of the DA game and the EA games are $\mathcal{O}(J^2K\log(K))$ and $\mathcal{O}(J)$, respectively. We assume BSs and UEs perform their actions based on their most recently updated preference lists through the reporting mechanism described in Sec. \ref{EA-PLU-RA}. Thus, updating the preference lists in the EA-PLU and EA-PLU-RA games does not increase the execution complexity of these games. Considering both the computational complexity of building the preference lists and that of executing the game, the total computational complexities for the DA game and the EA games are $\mathcal{O}(J^2K\log(K))$ and $\mathcal{O}(JK\log(JK))$, respectively. In the complexity analysis of the centralized WCS algorithm in \cite{TWC}, we showed that the total complexity of that algorithm is $\mathcal{O}(M_j^2K^2\log(K))$, which is much higher than that of the distributed matching games since $M_j\gg J$. The computational complexities of the centralized, distributed, and semi-distributed (discussed later in Sec. \ref{Comp_MA}) user association schemes are summarized in Table \ref{Comp_UA_schemes}.
\begin{algorithm}[t]\small \SetAlgoLined \KwData{$\mathcal{J}$, $\mathcal{K}$, $q_j, \forall j\in \mathcal{J}$, Path loss information} \KwResult{Near-optimal association vector $\mathbold{\beta}^\star$ } \textbf{Initialization}: \\ - Set the number of games ($N$)\; - Build initial preference lists of BSs and UEs ($\mathcal{P}_{j}^0$, $\mathcal{P}_{k}^0$, $\forall k,j$) based on channel norms\; - Perform a matching game (DA or EA) to obtain initial $\mathbold{\beta}^1$\; \For{$n=1:N$}{ Calculate $R_{k,j}(\mathbold{\beta}^n),~\forall k, j$\; Build preference lists $\mathcal{P}_{j}^n$, $\forall j\in \mathcal{J}$ and $\mathcal{P}_{k}^n$, $\forall k\in\mathcal{K}$\; Perform a matching game (DA or EA) to obtain $\mathbold{\beta}^{n+1}$\; } $\mathbold{\beta}^\star=\mathrm{arg}~\max_{n=1,...,N} ~U(\mathbf{r}(\mathbold{\beta}^n))$. \caption{\small Multi-Game Matching Algorithm for User Association with Max-throughput} \label{EA_Alg} \end{algorithm} \section{Multi-Game Matching Algorithm}\label{Dist_UA} The distributed matching games are fast and efficient in terms of delay and power consumption, but due to their distributed nature, they may not reach the performance of centralized algorithms. If we can afford some delay, additional power consumption, and a minimal signaling exchange, we can further enhance the performance in terms of a network utility. In this section, we introduce a user association optimization problem which aims to maximize a network utility function, then propose a multi-game matching algorithm which requires running multiple rounds of a game and a central entity to keep track of the best association vector. Each game is still run in an entirely distributed fashion, and only the resulting association vector is sent to the central entity for tracking. Due to the dependency between user association and the interference structure of the network, user instantaneous rates and local measurements can change with different associations.
Thus, the preference lists may change according to the user associations. In particular, at the end of a user association matching game, we obtain an association vector $\mathbold{\beta}$ which specifies the UE-BS connections. Since the user instantaneous rate is a function of $\mathbold{\beta}$, the resulting preference lists at the end of a game round may be different from the original ones at the start of the same round, and performing another round of a matching game may produce a better user association in terms of a network utility. In order to keep improving the network performance, we introduce a matching algorithm which plays multiple rounds of a matching game in an iterative manner. Each round of the game aims to maximize the sum-rate as in (\ref{sum_rate}), given the initial association vector obtained from the previous round of the game. \subsection{Multi-Game Matching Algorithm for Max-throughput} This matching algorithm requires an initial association vector $\mathbold{\beta}^1$, which can be obtained by performing a user association matching game in the initialization procedure. The preference lists of BSs and UEs for this initial game can be built based on channel norms as described in Sec. \ref{MT_for_DUA}.B. At each subsequent iteration of the algorithm, by fixing the associations of all other UEs based on the current association vector $\mathbold{\beta}^n$, each UE computes the instantaneous rate it can get from each BS and reports this rate to the corresponding BS. Then, each BS (UE) updates its preference list by ranking all UEs (BSs) based on the computed instantaneous rates. Next, a matching game (DA or EA) is performed to obtain the new association vector $\mathbold{\beta}^{n+1}$. This new association vector is used to establish the preference lists for the next round. The algorithm performs the matching game $N$ times, where $N$ is a design parameter. Each time, the game is run in a distributed fashion.
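The outer loop of Alg. \ref{EA_Alg} can be sketched as follows. Here `run_matching_game` and `rates_for` are hypothetical stand-ins for one distributed DA/EA round and for evaluating the user rates under a fixed association; the toy stand-ins at the bottom ignore quotas and interference and exist only to make the sketch runnable:

```python
def multi_game(run_matching_game, rates_for, beta_init, N):
    """Play N rounds of a matching game, tracking the best sum-rate beta.
    run_matching_game(rates) -> beta (dict UE -> BS);
    rates_for(beta) -> {(k, j): R_kj}, other UEs' associations held fixed."""
    beta, best_beta, best_util = beta_init, beta_init, float("-inf")
    for _ in range(N):
        rates = rates_for(beta)             # build preferences for this round
        beta = run_matching_game(rates)     # one distributed game round
        new_rates = rates_for(beta)         # rates under the new association
        util = sum(new_rates[(k, beta[k])] for k in beta)  # sum-rate utility
        if util > best_util:                # best-beta tracking step
            best_util, best_beta = util, beta
    return best_beta, best_util

# Toy stand-ins (hypothetical): association-independent rates, and a
# "game" that greedily gives each UE its max-rate BS, ignoring quotas.
R = {(0, 0): 4.0, (0, 1): 3.0, (1, 0): 3.0, (1, 1): 1.0}
rates_for = lambda beta: R
def greedy_game(rates):
    ues = {k for k, _ in rates}
    return {k: max((j for kk, j in rates if kk == k),
                   key=lambda j: rates[(k, j)]) for k in ues}

best_beta, best_util = multi_game(greedy_game, rates_for, {0: 0, 1: 1}, N=3)
print(best_beta, best_util)
```

The tracking step inside the loop is the role played by the central entity in our algorithm; everything else runs at the BSs and UEs.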
At the end of each game, the BSs report their associations to a central entity, called \textit{best-$\mathbold{\beta}$-tracker}, which computes the utility function and keeps track of the best association vector. As $N$ increases, there is a higher chance of obtaining a better association at the cost of more association delay and higher power consumption at the UEs. The value of $N$ can be determined based on practical delay and power constraints. At the end of the algorithm, the best-$\mathbold{\beta}$-tracker notifies the BSs with the best association vector corresponding to the highest network utility. Although this algorithm requires a central entity to keep track of the best association vector, at each round, the game is performed in a purely distributed manner. This algorithm is described in Alg. \ref{EA_Alg}. \begin{table}[t] \centering \caption{Computational complexity of centralized, distributed, and semi-distributed user association schemes} {\tabulinesep=1.2mm \begin{tabu} {|c|c|} \hline \small User Association Scheme & \small Complexity \\ \hline \scriptsize WCS Algorithm (centralized)\cite{TWC} & \scriptsize $\mathcal{O}\Big(M_j^2K^2\log(K)\Big)$ \\\hline \scriptsize DA Game (distributed)\cite{gale1962college} & \scriptsize $\mathcal{O}\Big(J^2K\log(K)\Big)$ \\\hline \scriptsize Proposed EA-PLU-RA Game (distributed) & \scriptsize $\mathcal{O}\Big(JK\log(JK)\Big)$\\\hline \scriptsize Proposed Multi-game DA Alg. (semi-distributed) & \scriptsize $\mathcal{O}\Big(NJ^2K\log(K)\Big)$\\\hline \scriptsize Proposed Multi-game EA Alg. (semi-distributed) & \scriptsize $\mathcal{O}\Big(NJK\log(JK)\Big)$\\\hline \end{tabu}} \label{Comp_UA_schemes} \end{table} \begin{figure*}[t] \centering \includegraphics[scale=.31]{M130_Delay_2.pdf} \hspace*{4em} \includegraphics[scale=.31]{M130_Delay_PDF_Empr_2.pdf} \vspace*{-0.6em} \caption{Comparing the association delay of DA and EA games under critical load scenario. 
a) Average association delay in a HetNet with 1 MCBS and $J-1$ SCBSs, b) Empirical PDF of association delay in a HetNet with $J=5$ BSs and $K=35$ UEs. The vertical lines show the 25 and 75 percentile bars. The time unit is the amount of time for sending an application and receiving a response.} \label{Delay_Appl} \end{figure*}
\begin{figure*}[t] \centering \includegraphics[scale=.31]{M130_Power_2.pdf} \hspace*{4em} \includegraphics[scale=.31]{M130_Power_PDF_Empr_2.pdf} \vspace*{-0.6em} \caption{Comparing the power consumption of DA and EA games under the critical load scenario with $q_1=15$. a) Average user power consumption for association in a HetNet with 1 MCBS and $J-1$ SCBSs, b) Empirical PDF of user power consumption for association in a HetNet with $J=5$ and $K=35$. The vertical lines show the 25 and 75 percentile bars. The power unit is the amount of power consumed during an application and response step.} \label{Delay_Appl_PDF} \end{figure*}
\subsection{Complexity of Multi-Game Matching Algorithm}\label{Comp_MA}
Based on the complexity analysis in Sec. \ref{Comp_MGs}, the complexity of the proposed $N$-game matching algorithm using the DA game is $\mathcal{O}(NJ^2K\log(K))$ and using the EA game is $\mathcal{O}(NJK\log(JK))$, both of which are still much smaller than the cost of the centralized WCS algorithm ($\mathcal{O}(M_j^2 K^2\log(K))$) since $N$ and $J$ are usually much smaller than $K$, and $M_j$ is typically a large number. We note that all computations in the WCS algorithm are carried out by a central coordinator. In the proposed multi-game matching algorithm, however, computations are distributed among the best-$\mathbold{\beta}$-tracker (for computing the network utility and keeping track of the best association vector), and the BSs and UEs, which need to build their own preference lists. A summary of the computational complexity of different user association schemes is shown in Table \ref{Comp_UA_schemes}.
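To make the multi-game procedure concrete, the loop can be sketched in Python. This is an illustrative toy, not the paper's implementation: `play_game` is a greedy stand-in for one DA/EA round, and the instantaneous-rate model simply divides a UE's base rate by the serving BS's load, so that preference lists can change between rounds.

```python
def play_game(rates, quotas):
    # Toy stand-in for one DA/EA round: each UE applies, in turn, to its
    # most-preferred BS that still has remaining quota, which accepts early.
    remaining = list(quotas)
    beta = [-1] * len(rates)          # association vector (UE -> BS index)
    for ue, ue_rates in enumerate(rates):
        for bs in sorted(range(len(quotas)), key=lambda b: -ue_rates[b]):
            if remaining[bs] > 0:
                remaining[bs] -= 1
                beta[ue] = bs
                break
    return beta

def rates_given_beta(base, beta, num_bs):
    # Toy instantaneous-rate model: a BS's bandwidth is shared by its UEs.
    load = [max(1, sum(1 for b in beta if b == j)) for j in range(num_bs)]
    return [[g / load[j] for j, g in enumerate(row)] for row in base]

def utility(beta, rates):
    # Network sum-rate for a given association vector.
    return sum(rates[ue][bs] for ue, bs in enumerate(beta) if bs != -1)

def multi_game(base, quotas, n_rounds):
    # Best-beta-tracker: run the game n_rounds times; each round rebuilds
    # preference lists from the rates induced by the previous association.
    num_bs = len(quotas)
    beta = play_game(base, quotas)    # initial round: channel norms
    best_beta = beta
    best_util = utility(beta, rates_given_beta(base, beta, num_bs))
    for _ in range(n_rounds - 1):
        beta = play_game(rates_given_beta(base, beta, num_bs), quotas)
        util = utility(beta, rates_given_beta(base, beta, num_bs))
        if util > best_util:
            best_beta, best_util = beta, util
    return best_beta, best_util

# 3 UEs, 2 BSs: base[u][j] is UE u's rate from BS j when served alone.
base = [[5.0, 1.0], [4.0, 1.5], [3.0, 2.0]]
beta, util = multi_game(base, quotas=[2, 2], n_rounds=3)
print(beta, util)  # -> [0, 0, 1] 6.5
```

In the actual algorithm, each round is a full distributed DA or EA game and the rates come from the channel model, but the control flow (play, evaluate utility, keep the best $\mathbold{\beta}$) is the same.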
\section{Numerical Results}\label{Sim_res}
In this section, we evaluate the performance of the proposed user association matching games and algorithm in the downlink of a mmWave-enabled HetNet with $J$ BSs and $K$ UEs. The network includes 1 MCBS operating at 1.8 GHz with quota $q_1=15$ and $J-1$ SCBSs operating at 28 GHz, each with quota $q_j=5, j\in\{2, .., J\}$. Unless otherwise stated, we consider a HetNet with 1 MCBS, 4 SCBSs, and 35 UEs. Sub-6 GHz channels and mmWave channels are generated as described in Sec. \ref{Ch_Models}. We assume each mmWave channel is composed of 5 clusters with 10 rays per cluster. In order to implement 3D beamforming, each MCBS is equipped with a massive MIMO antenna with 64 elements, each SCBS has an $8\times 8$ UPA, and each UE is equipped with a single-antenna module for the sub-6 GHz band and a 4-element antenna array for the mmWave band. Also, we assume that the transmit power of the MCBS is 10 dB higher than that of the SCBSs. Network nodes are deployed in a $500 \times 500~\textrm{m}^2$ square area where the BSs are placed at specific locations and the UEs are distributed randomly according to a PPP with a density of $K$ UEs within the given area.
\subsection{Association Delay and Power Consumption}
Fig. \ref{Delay_Appl} compares the DA and EA games in terms of association delay under the critical load scenario, with the average delay on the left and the delay distribution on the right. Subfigure (a) shows that the EA games significantly outperform the DA game in terms of average association delay. This advantage becomes more significant as the network size increases. Subfigure (b) illustrates that all EA games perform better than the DA game in terms of user association delay by having the delay distribution more concentrated around low delay values.
For example, the association delay under the EA games for about 50\% of the UEs is only one time unit, while for the DA game, more than 96\% of the UEs have an association delay of at least 5 time units. Also, the maximum delay is 8 time units for the DA game, 5 time units for the EA-Base and EA-PLU games, and 14 time units for the EA-PLU-RA game. However, it is worth mentioning that the probability of a long delay (more than 8 time units) for EA-PLU-RA is only 5\%. Fig. \ref{Delay_Appl_PDF} compares the matching games in terms of association power consumption under the critical load scenario. Subfigure (a) shows that the DA game is slightly more power efficient on average than the proposed matching games, as it has a lower average user power consumption for association. The additional power consumption of the EA games, however, is not substantial and is attributed to the fact that in EA games, each UE may apply on average more times than in the DA game because there are no waiting lists, so a UE may be rejected and apply again immediately. According to subfigure (b), about 75\% of UEs consume only one power unit under the DA game, while this value for the EA games is about 51\%. As expected, EA-PLU-RA consumes the most power due to the reapplication process, where the probability of consuming more than the maximum power of the other games (5 power units) under EA-PLU-RA is about 23\%. These observations confirm that the EA games result in a faster association process, while the DA game is slightly more power efficient.
\begin{figure} \centering \includegraphics[scale=.31]{M130_CDF_Delay_2.pdf} \vspace*{-0.5em} \caption{Comparing the empirical CDF of user association delay for EA and DA matching games in a HetNet with $J=5$ BSs, $K=35$ UEs.} \label{CDF_Delay} \end{figure}
Fig. \ref{CDF_Delay} depicts the empirical CDF of user association delay for the user association matching games.
This figure again confirms that the probability of having low association delay (below 5 time units) is significantly higher in the EA games than in the DA game, and is the highest for the EA-PLU and EA-PLU-RA games. For instance, the probability of having a maximum association delay of 4 time units is 63\% for the EA-PLU-RA game and only 1.4\% for the DA game. These results illustrate that the association process in the EA games is much faster compared to the DA game. Fig. \ref{underload_to_overload} depicts the association delay versus the number of UEs in a HetNet with 1 MCBS and 4 SCBSs. Keeping the BSs' quotas fixed so that the network can serve a maximum of 35 UEs, we increase the number of UEs such that the network transitions from underload (left shaded region) to critical load (vertical line at $K=35$ UEs), lightly overloaded (middle shaded region), and finally heavily overloaded (right shaded region) cases, in order to investigate the effect of different loading scenarios on association delay. We observe an interesting effect: in the DA game, the average delay in the overloading cases is exactly equal to the number of BSs, since the DA game terminates when all UEs are waitlisted by BSs, and this happens at the end of the $J$th iteration since each UE has no more than $J$ options. For the EA-Base and EA-PLU games, the association delay is always less than $J$ since UEs do not reapply to BSs. A different trend is observed for the EA-PLU-RA game due to the reapplication process and since BSs only accept UEs within their quota. Thus, the higher the number of UEs, the more reapplications and the larger the association delay. Note that the heavily overloaded region, where the delay of EA-PLU-RA crosses that of DA, occurs when the number of UEs in the network is about twice the BSs' total quota, which is unlikely to happen in the real world.
\begin{figure} \centering \includegraphics[scale=.31]{M130_AssDelay_vs_Load_new_shaded.pdf} \vspace*{-0.5em} \caption{Comparing the effect of different loading scenarios on average association delay in a HetNet with $J=5$ BSs and quota vector $\mathbf{q}=[15, 5, 5, 5, 5]$.} \label{underload_to_overload} \end{figure}
\begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{M138_Perc_UnAss4.pdf} \vspace*{-0.5em} \caption{Percentage of unassociated UEs under three different loading scenarios: a) underload, b) critical load, and c) overload. The number of BSs and UEs increases while BS quotas are fixed at $q_1=15, q_j=5, j\in\{2,...,J\}$.} \label{UnAss} \end{figure*}
\subsection{Percentage of Unassociated Users}
In practice, the number of UEs in the network is not always equal to the total quota of the BSs. Thus, there may be unassociated UEs at the end of an association game. For the next simulation, we increase both the number of BSs and the number of UEs while keeping the BSs' quotas fixed. Fig. \ref{UnAss} compares the percentage of unassociated UEs under three loading scenarios: a) underload, b) critical load, and c) overload. For the underload case (a), the number of UEs is 20\% less than the total quota of the BSs, whereas for the overloading case (c), the number of UEs is 20\% more. This figure shows that the proposed EA-PLU-RA game has similar performance to the DA game under all three loading scenarios since both games guarantee that the maximum number of UEs (limited by the BSs' quotas) is associated at the end of the game (see Lemma 2). Also, it can be inferred that without the preference list updating and reapplying steps (EA-Base and EA-PLU), there may be unassociated UEs even in the underload and critical load cases. Thus, these two steps are necessary to make the best use of the available resources provided by the BSs. Fig.
\ref{SumRate} compares the network spectral efficiency of the HetNet for several association schemes: 1) the centralized WCS algorithm \cite{TWC}, 2) the proposed multi-game DA algorithm with $N=10$, 3) the proposed multi-game EA algorithm with $N=10$, 4) the distributed single DA game \cite{gale1962college}, 5) the proposed distributed single EA game, 6) max-SINR association, and 7) random association. The EA game used in this simulation is the one with preference list updating and reapplying (EA-PLU-RA). For the single matching games and the initial round of the multi-game algorithms, the preference lists are built based on channel norms, which include both instantaneous and large-scale CSI. For the multi-game matching algorithms, we use the matching obtained from the corresponding single matching game as the initial association vector, and run each algorithm for $N=10$ iterations. At each iteration, the preference lists are updated using the instantaneous rates obtained based on the resulting association vector of that iteration (see (\ref{R_as_PrfList})). In the max-SINR association scheme, each UE connects to the BS providing the highest received SINR. For random association, each UE randomly associates with a BS based on a uniform distribution. In these two schemes, if a BS is overloaded, the excess UEs are dropped and become unassociated. The results show that the EA-PLU-RA game outperforms all other distributed games and approaches the performance of the centralized WCS algorithm. Interestingly, the EA-PLU-RA game slightly outperforms the DA game in terms of network spectral efficiency, confirming that stability is a less relevant performance metric for cellular networks. This higher spectral efficiency comes together with the lower association delay of the EA game. While this result may appear counter-intuitive at first, it actually makes sense.
Although the DA game was shown to be optimal in terms of stability \cite{gale1962college}, it was not known to be optimal for other metrics relevant to cellular systems, including spectral efficiency and delay. Since DA is not optimal for either of these metrics, there is no inherent trade-off between them for the DA game, as evidenced by the EA game achieving better performance in both metrics. This figure shows that, compared to the centralized WCS algorithm, the multi-game algorithms achieve about 90\% of the performance, and a purely distributed single game can achieve 80\% of the performance of the centralized algorithm. Such a performance figure is laudable for a distributed algorithm. Although max-SINR association has performance close to the matching games in terms of average network spectral efficiency, it results in more unassociated UEs. Random association shows very poor performance, which implies the importance of user association in cellular HetNets. In Fig. \ref{RunTime}, we compare the execution run time of the user association matching games and multi-game matching algorithms under a critical load scenario. The vertical axis represents the run time in time units (which vary depending on the capabilities of the processing machine) and the horizontal axis shows the quota of the SCBSs. We observe that as the number of UEs increases, the proposed EA-PLU-RA matching game runs faster than the DA game. The trend is similar for the multi-game matching algorithms. We also observed a stark difference between the run times of the centralized WCS algorithm and the distributed matching games/algorithms. This result validates the advantage of distributed matching games, as the matching games/algorithms have much lower complexity than the centralized WCS algorithm.
\section{Conclusion}
We proposed a set of distributed early acceptance (EA) user association matching games for 5G and beyond HetNets, and compared their performance with the well-known DA matching game.
We showed that at the cost of slightly more power overhead, the EA games result in a significantly faster association process compared to the DA game while achieving better network spectral efficiency. The EA-PLU-RA game, with preference list updating and reapplication, provides the best overall network performance in terms of both association delay and percentage of associated users. These results suggest that stability is a less relevant metric for user association and that EA may be more suitable for real-time distributed association in B5G wireless networks. Next, we proposed a multi-game matching algorithm to further enhance the network spectral efficiency by running multiple rounds of a matching game. Numerical results show that the proposed distributed EA games and multi-game algorithm achieve a network spectral efficiency within 80-90\% of the near-optimal centralized WCS benchmark algorithm, while incurring complexity several orders of magnitude lower and significantly less overhead due to their distributed or semi-distributed nature.
\begin{figure} \centering \includegraphics[scale=.31]{M138_NetSumRate.pdf} \vspace*{-0.5em} \caption{Comparing the average network spectral efficiency of user association schemes in a HetNet with $J=5$ BSs and $K=35$ UEs.} \label{SumRate} \end{figure}
\ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
Production of KMW: engineering with Revit

We strive to constantly improve our production process. For this reason, we use many computer-aided design (CAD) programs. In recent years, Revit has become the leading BIM program, allowing users to design a building and structure, as well as its components, in 3D. Responding to new customer needs, we have created a semi-automated method of exporting Revit model elements to our production process. This method significantly reduces the time needed for technological development and eliminates human error.

Fitter's Academy

The marginalization of vocational education over recent years has forced numerous employers to assume a teacher's role and to engage in employees' vocational training. The education and career history of candidates for work seldom meet the minimal requirements posed by the companies and their quality systems.

Anniversary galleries

Visit the new galleries documenting two company events organised on the 25th and 30th anniversaries of KMW Engineering. We celebrated these special dates in Ostromecko and in Bożejewiczki near Żnin.

Competition for Young Pianists - photo relation

On 15th of November 2014 there was a concert of this year's laureates of the International Competition for Young Pianists "Arthur Rubinstein in Memoriam", which was won by Szymon Nehring. The nineteen-year-old Pole not only received the most prestigious award of the evening but also the Chopin Award and the Special Award in the name of Aniela Młynarska-Rubinstein, founded by the Rubinstein family.

KMW supports young musicians

For the tenth time, the most talented young pianists from all around the world will come to Bydgoszcz in order to take part in the international Arthur Rubinstein In Memoriam competition.
This competition, which is both celebrating its tenth anniversary and has settled into the cultural calendar of the region, is, not for the first time, supported by our company.

Reichstag, Berlin (D)
Implementation: 1999

In 1999 we started works in the Reichstag, the seat of the German Parliament. KMW Engineering was responsible for a new ventilation system for the plenary chamber and the general ventilation system for other rooms (including those of the political fractions). In the steel and glass dome of the Reichstag we installed a special exhaust air processing unit with a heat recovery system. The dome itself weighs over 1200 tonnes, forms a ceiling for the Parliament chamber, and is one of the most well-known landmarks of Berlin and Germany.
Q: How do I close each instance of the same div class one at a time using JavaScript

What I'm trying to do is very straightforward: close each instance of a div when its close button is clicked. What I'm getting instead is that only the first clicked-on instance closes, but the remaining ones can't be closed. I must say I come from Python and I'm not very familiar with JavaScript. I think there is some way to do this using IDs instead of a class?

Here is my HTML (with Jinja) code for dynamically creating the objects I would want to close when clicked on:

{% with messages = get_flashed_messages(with_categories=true) %}
  {% if messages %}
    {% for category, message in messages %}
      <div class="notification {{ category }} is-bold">
        <button class="delete"></button>
        {{ message }}
      </div>
    {% endfor %}
  {% endif %}
{% endwith %}

And here is my JavaScript code:

// notification remover
document.addEventListener('DOMContentLoaded', () => {
  (document.querySelectorAll('.notification .delete') || []).forEach(($delete) => {
    $notification = $delete.parentNode;
    $delete.addEventListener('click', () => {
      $notification.parentNode.removeChild($notification);
    });
  });
});

Example of generated HTML:

<div class="notification is-info is-bold">
  <button class="delete"></button>
</div>
<div class="notification is-info is-bold">
  <button class="delete"></button>
</div>
<div class="notification is-info is-bold">
  <button class="delete"></button>
</div>

A: Try this instead:

document.addEventListener('DOMContentLoaded', () => {
  (document.querySelectorAll('.notification .delete') || []).forEach(($delete) => {
    $delete.addEventListener('click', (event) => {
      event.target.parentNode.remove();
    });
  });
});

The problem with your code is that $notification is assigned without let/const/var, so it becomes a single shared (global) variable across all loop iterations. Every click handler closes over that same variable, which by the time any handler runs points at the last notification, so each click tries to remove that same last element.
Choosing the right Kentucky life insurance policy is a building block in your and your family's financial future. This article will attempt to explain a little about life insurance to help you make the best decision for your family. Life insurance, at its essence, is a policy that pays a set amount of tax-free money to your beneficiary in the event of your demise. Sound simple? The thing is that life insurance can do much more than just pay money after you pass. There are two main types of life insurance policies available: cash value life insurance and term life insurance. Term life insurance is a temporary type of insurance that you can purchase for 1 to 20 years. If you die during the life of the policy, the death benefit is paid to your beneficiary; if you do not, the coverage simply expires. Term life is usually cheaper than a cash value policy. You can use a term life policy for such things as ensuring your children's educations, covering a mortgage or meeting any other short-term goals. Cash value life insurance is a little different. Unlike term insurance, a cash value policy is meant to last for your lifetime. With a cash value life insurance policy you have many more options that you can benefit from while you are in the land of the living as well. A cash value policy builds cash value: money you can use for emergencies, to help pay for college tuition or even to build a retirement income. It may sound a little like a savings or investment plan, and it is up to a point, but it is a life insurance policy and will pay a death benefit to your beneficiary upon your demise. Give us a call today toll free at 877-334-1597 to talk with an expert about your needs and your options, or explore our site and request a free online insurance quote.
Randolph County, West Virginia

[Image: Randolph County Courthouse and Jail in Elkins]

Randolph County is a county located in the U.S. state of West Virginia. As of the 2010 census, the population was 29,405.[1] Its county seat is Elkins.[2] The county was founded in 1787 and is named for Edmund Jennings Randolph.[3] Randolph County comprises the Elkins, West Virginia, Micropolitan Statistical Area.

Geography

[Image: Wildflowers add a splash of color to grazing fields near Osceola in July.]
[Image: Fall in the forest]

According to the U.S. Census Bureau, the county has a total area of 1,040 square miles (2,700 km2), of which 1,040 square miles (2,700 km2) is land and 0.3 square miles (0.78 km2) (0.03%) is water.[4] It is the largest county in West Virginia by area.
Rivers: Tygart Valley River; Shavers Fork; Laurel Fork

Mountains: Point Mountain; Cheat Mountain; White Top, a knob of Cheat Mountain; Laurel Mountain; Rich Mountain; Shavers Mountain; Gaudineer Knob, a knob of Shavers Mountain

Caves and caverns: Bowden Cave; Sinks of Gandy

National Natural Landmarks: Blister Run Swamp; Gaudineer Scenic Area; Shavers Mountain Spruce-Hemlock Stand

Major highways: U.S. Highway 33; U.S. Highway 219; West Virginia Route 15

Adjacent counties: Tucker County (northeast); Pendleton County (east); Pocahontas County (south); Webster County (southwest); Upshur County (west); Barbour County (northwest)

National protected areas: Monongahela National Forest (part); United States National Radio Quiet Zone (part)

Demographics

As of the census[10] of 2000, there were 28,262 people, 11,072 households, and 7,661 families residing in the county. The population density was 27 people per square mile (10/km²). There were 13,478 housing units at an average density of 13 per square mile (5/km²). The racial makeup of the county was 97.69% White, 1.07% Black or African American, 0.16% Native American, 0.38% Asian, 0.01% Pacific Islander, 0.16% from other races, and 0.53% from two or more races. 0.68% of the population were Hispanic or Latino of any race. There were 11,072 households out of which 29.80% had children under the age of 18 living with them, 54.70% were married couples living together, 9.80% had a female householder with no husband present, and 30.80% were non-families. 26.30% of all households were made up of individuals and 11.90% had someone living alone who was 65 years of age or older. The average household size was 2.41 and the average family size was 2.89. In the county, the population was spread out with 22.30% under the age of 18, 8.70% from 18 to 24, 28.50% from 25 to 44, 25.40% from 45 to 64, and 15.10% who were 65 years of age or older. The median age was 39 years. For every 100 females there were 101.30 males. For every 100 females age 18 and over, there were 99.70 males.
[Image: View from atop Yokum Knob, Randolph County, West Virginia]

Communities

City: Elkins (county seat)

Towns: Huttonsville; Womelsdorf (Coalton)

Census-designated places: Dailey; East Dailey; Valley Bend

Unincorporated communities: Arnold Hill; Brady Gate; Bruxton; Cassity; Dryfork; Elkins Junction; Elkwater; Ellamore; Evenwood; Monterville; Newlonton; Pumpkintown; Smith Crossing; Tigheville; Upper Mingo

Registered historic places: Beverly Historic District; Blackman-Bosworth Store; Butcher Hill Historic District; Rich Mountain Battlefield; Tygart Valley Homesteads Historic District; Albert and Liberal Arts Halls; Baldwin-Chandlee Supply Company-Valley Supply Company; Davis Memorial Presbyterian Church; Davis and Elkins Historic District; Downtown Elkins Historic District; Dr. John C. Irons House; Elkins Milling Company; Gov. H. Guy Kump House; Randolph County Courthouse and Jail; Senator Stephen Benton Elkins House; Taylor-Condry House; Warfield-Dye Residence; Wees Historic District; West Virginia Children's Home; Glady Presbyterian Church and Manse; Day-Vandevander Mill; Cheat Summit Fort; E. E. Hutton House; Tygarts Valley Church; See-Ward House; Middle Mountain Cabins

Notable people: Herman Ball, football player; Lemuel Chenoweth, master covered bridge builder; William Wallace Barron, former governor who was indicted for bribery and jury tampering; Dellos Clinton "Sheriff" Gainer, major league baseball player; Marshall Goldberg, football player; Wilma Lee Cooper, Grand Ole Opry and WWVA Jamboree star; Stoney Cooper, Grand Ole Opry and WWVA Jamboree star; Eldora Marie Bolyard Nuzum, American newspaper editor and interviewer of U.S. Presidents

See also: Becky Creek Wildlife Management Area; National Register of Historic Places listings in Randolph County, West Virginia

References

[1] "State & County QuickFacts". United States Census Bureau. Retrieved January 11, 2014.
[2] "Find a County". National Association of Counties. Retrieved 2011-06-07.
[3] http://www.wvculture.org/history/counties/randolph.html
[4] "2010 Census Gazetteer Files". United States Census Bureau. August 22, 2012. Retrieved July 30, 2015.
[5] "Annual Estimates of the Resident Population for Incorporated Places: April 1, 2010 to July 1, 2014". Retrieved June 4, 2015.
[6] "U.S. Decennial Census". United States Census Bureau. Retrieved January 11, 2014.
[7] "Historical Census Browser". University of Virginia Library. Retrieved January 11, 2014.
[8] "Population of Counties by Decennial Census: 1900 to 1990". United States Census Bureau. Retrieved January 11, 2014.
[9] "Census 2000 PHC-T-4. Ranking Tables for Counties: 1990 and 2000" (PDF). United States Census Bureau. Retrieved January 11, 2014.
[10] "American FactFinder". United States Census Bureau.
I sent the payment through MoneyGram and all I could come up with was $4500 dollars; I was unable to come up with the total amount of $9000 USD. Meanwhile, the agent at the MoneyGram office just called me to say that they have to purchase your payment files so as to put your name on the receiver's file in order to avoid any illegal transfer or any hindrance on you when receiving the money; the $35 dollars is for the purchase fee only. But I was already out of town for an investment project which I told you that I will be going for. Note: I had already paid the transfer charge to them, so please what I want you to do is to send the $35 dollars to them so that they can purchase the payment files and send the payment information to you in order to pick up the $4500 dollars today, okay. I told the lady in charge of the transfer not to delay on anything once you send the $35 dollars to them, as you really need the money urgently. Here is their contact address. Send the $35 dollars to them through Western Union or MoneyGram at any of the offices that is close to you using the information written below so as to make the process easier and faster. Then after you send them the $35 dollars, send the payment information to them. Tell Mrs. Rose Taylor to give you the payment information for the $4500 USD once you send the payment of $35.00 to them, to enable you to pick up the money.
Q: python catching NameError when loading module

I'm trying to catch any exceptions that happen when you cannot load a module. The current result is that the "except" block does not get executed.

import sys

def loadModule(module):
    try:
        import module
    except:
        print """
        Cannot load %s
        For this script you will need:
        cx_Oracle: http://cx-oracle.sourceforge.net/
        pycrypto: https://www.dlitz.net/software/pycrypto/
        paramiko: http://www.lag.net/paramiko/
        """ % module
        sys.exit(1)

loadModule(cx_Oracle)

Error:

Traceback (most recent call last):
  File "./temp_script.py", line 16, in <module>
    loadModule(cx_Oracle)
NameError: name 'cx_Oracle' is not defined

A: loadModule(cx_Oracle)

What do you think you are passing to this function? There is nothing named cx_Oracle in the code so far. That's why you are getting a NameError. You aren't even getting into the function.

import module

You can't pass variables to import; it interprets what you put in as the literal name of the module. In this case, I question whether you even need a function. Just move the try/except to the module level and import cx_Oracle directly.

Just because I was curious, here is a way you can make a reusable exception-catching import function. I'm not sure when/how it would be useful, but here it is:

from contextlib import contextmanager
import sys

@contextmanager
def safe_import(name):
    try:
        yield
    except:
        print 'Failed to import ' + name
        sys.exit(1)

with safe_import('cuckoo'):
    import cuckoo

A: The error is happening when Python tries to look up the cx_Oracle variable before the loadModule function can even be called. If you want to keep your current strategy, you probably really want to be using something like importlib.import_module so that you can import the module by name, like import_module('cx_Oracle'). I'd suggest doing something like this instead:

try:
    import cx_Oracle
except ImportError:
    print "Can't load the Oracle module"
    dosomething()

in the top level of your module.
That's the Pythonic way of handling this situation. A: Always think about which exception you want to catch. Do not overgeneralize by just coding except:. In this case, you want to catch the ImportError. The argument you want to pass to your function loadModule should be of type string, e.g. loadModule('cx_Oracle') (then you will get rid of the NameError). For loading modules dynamically within loadModule have a look at for example Dynamic loading of python modules.
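The importlib approach suggested above can be sketched as follows (Python 3 syntax; the helper name and message are illustrative, not from the original post):

```python
import importlib
import sys

def load_module(name):
    """Import a module by its string name; exit with a message if unavailable."""
    try:
        return importlib.import_module(name)
    except ImportError:
        print("Cannot load %s" % name)
        sys.exit(1)

# A stdlib module imports fine and is returned as a module object.
json_mod = load_module("json")
print(json_mod.dumps({"ok": True}))
```

Unlike the `import module` statement in the question, `importlib.import_module` really does take the module name as a string, so the function argument works as the asker intended.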
\section{Introduction} Spatial relation extraction focuses on identifying the relationship between two geographical entities in natural language texts. Currently, only a few studies in the NLP community have focused on this task, while most work on relation extraction has targeted other subtasks, such as temporal and causal relation extraction. However, spatial information is one kind of critical information for natural language understanding, which can benefit downstream NLP applications, such as spatial domain query \cite{zhang2020joint}, spatial reference \cite{yang2020robust} and data forecasting \cite{song2020spatial}. \begin{figure}[htp] \centering \includegraphics[width=0.5\textwidth]{example_new.pdf} \caption{An example in SpaceEval with null-role and non-null-role relations.} \label{fig:galaxy} \vspace{-0.4cm} \end{figure} Various kinds of schemes have been proposed to represent spatial relations. As one of the SemEval evaluation tasks, SpaceEval \cite{DBLP:conf/semeval/PustejovskyKMLD15} proposes an annotation scheme adopted from ISO-space \cite{pustejovsky2011using}, and its goals include identifying and classifying items from an inventory of spatial concepts, such as topological relations, orientational relations, and motion. Commonly, this task needs to extract the spatial elements and classify static and dynamic spatial relations into three types: the move link (MOVELINK), the qualitative spatial link (QSLINK), and the orientation link (OLINK). MOVELINK connects motion-events with corresponding mover-participants as a triplet of three roles \emph{(mover, goal, trigger)}, while QSLINK and OLINK refer to the topological and non-topological relations between two spatial elements, respectively, and are formalized as a triplet of three roles \emph{(trajector, landmark, trigger)}. Following previous work, we also simplify the whole task as shown in Figure \ref{fig:galaxy} and only focus on extracting QSLINK/OLINK/MOVELINK from texts.
Thus, a spatial relation is defined as a triplet with one of three spatial types. Spatial relations can be divided into two classes: null-role and non-null-role relations. The former refers to a relation containing null-value roles, such as the two MOVELINKs and one OLINK in Figure \ref{fig:galaxy}, while the latter (e.g., the QSLINK in Figure \ref{fig:galaxy}) refers to a relation whose three roles are all filled with values extracted from the sentence. Almost all previous studies regarded spatial relation extraction as a classification task using traditional machine learning \cite{DBLP:conf/semeval/NicholsB15, DBLP:conf/emnlp/DSouzaN15} or neural network methods \cite{DBLP:conf/ecir/RamrakhiyaniPV19,shin-etal-2020-bert}. Those classification models work well on extracting non-null-role relations due to their rich information. However, they often suffer on null-role relations, because some information is missing in these relations. Moreover, they also cannot benefit from the knowledge of the spatial schema, such as the roles and their relations. In the annotation stage, annotators usually not only annotate relations and relation types, but also implicitly provide a description of, or basis for, their annotation. Therefore, we hope the model can simulate a human and provide a target sentence instead of a simple label index, to understand the spatial relation more deeply. The target sentence in a generation model can describe the relation between all spatial elements, and it allows null slots (i.e., roles) to exist. Thus, the generation model can not only learn the semantics of the spatial relations more explicitly through this form of learning goal, but also generate null-role relations. Moreover, the classification model and the generation model have complementary advantages.
The former usually has better performance on non-null-role relations, while the latter can introduce prior knowledge to capture the semantics of null-role relations better, and its results are natural language expressions with stronger interpretability \cite{jiang2021not}. Therefore, we combine the advantages of the classification and generation models to further capture different knowledge. In this paper, we propose a novel hybrid model \textbf{HMCGR} (\textbf{H}ybrid \textbf{M}odel of \textbf{C}lassification, \textbf{G}eneration and \textbf{R}eflexivity) for spatial relation extraction, which contains a generation model and a classification model. Specifically, the former can generate null-role relations and the latter can extract non-null-role relations, so that they complement each other. Moreover, a reflexivity evaluation mechanism is applied to further improve the accuracy based on the reflexivity principle of spatial relations. Experimental results on the SpaceEval dataset show that our HMCGR outperforms the SOTA baselines significantly. \section{Related Work} Various kinds of schemes have been proposed to represent spatial relations. SpatialML \cite{mani2010spatialml} characterized directional and topological relations among locations in terms of a region calculus. The SpRL task \cite{2011Spatial} developed a semantic role labeling scheme focusing on the main roles in spatial relations. Spatial relation extraction was introduced as a subtask at SemEval 2012 \cite{kordjamshidi2012semeval}, SemEval 2013 \cite{kolomiyets2013semeval} and SemEval 2015 \cite{DBLP:conf/semeval/PustejovskyKMLD15}. As Task 8 of SemEval 2015, SpaceEval proposed an annotation scheme adopted from ISO-space, and it enriched SpRL's semantics by refining the granularity. Most previous studies were evaluated on this dataset. Approaches to spatial relation extraction can be divided into traditional machine learning and neural network methods.
The former rely heavily on manual features or explicit syntactic structures. \citet{DBLP:conf/semeval/NicholsB15} used a CRF layer to extract spatial elements, and then introduced an SVM to classify spatial relations. \citet{DBLP:conf/emnlp/DSouzaN15} proposed a sieve-based model where various kinds of manual features are generated by a greedy feature selection technique. \citet{DBLP:conf/semeval/SalaberriAZ15} introduced external knowledge as a supplement to spatial information, in which WordNet and PropBank provided information on many spatial elements. \citet{Kim2016ExtractingSE} proposed a Korean spatial relation extraction model using dependency relations to find the proper elements to fill the roles. With the wide application of neural networks, \citet{DBLP:conf/ecir/RamrakhiyaniPV19} generated candidate relations by dependency parsing and classified the candidates with a BiLSTM model. \citet{shin-etal-2020-bert} first used BERT-CRF to extract the spatial roles and then introduced R-BERT \cite{wu2019enriching} to extract the spatial relations. Besides, a few studies focused on multi-modal spatial relation extraction. For example, \citet{DBLP:journals/corr/abs-2007-09551} proposed a spatial BERT that takes two spatial entities and an image to determine their spatial relation. \begin{figure*}[htp] \centering \includegraphics[width=1.0\textwidth,height=0.7\textwidth]{model_f.pdf} \caption{Overall structure of our HMCGR.} \label{fig:B} \end{figure*} \section{HMCGR} Figure \ref{fig:B} shows the overall architecture of our model HMCGR. As a whole, HMCGR can be divided into four modules, i.e., candidate triplet extraction (CTE), spatial relation classification (CLS), spatial relation generation (GEN), and reflexivity evaluation (RFX). The module CTE is first used to extract spatial elements and spatial roles from a raw sentence to obtain candidate triplets by a BERT-CRF model.
Then the candidate triplets and the raw sentence are fed to the module CLS, which uses a BERT encoder and a T5 encoder to encode the sentence, respectively, and applies a GCN (Graph Convolutional Network) layer to capture the sentence structure. Simultaneously, the module GEN uses a T5 decoder to generate a target sentence following a specific template, and the module RFX uses the cosine function to calculate the similarity between the original sentence and its inverted sentence to further improve the accuracy. \subsection{CTE: Candidate Triplet Extraction} Since a spatial relation is represented as a triplet with its relation type MOVELINK, OLINK or QSLINK, the first step of HMCGR is to extract as many candidate triplets from raw texts as possible. Similar to \citet{shin-etal-2020-bert}, we also use the BERT+CRF model for spatial role extraction, as shown in Figure \ref{fig:C}. Spatial role extraction is a task to form candidate triplets, which extracts the spatial elements from texts and then assigns a role to each extracted element. Formally, the input is a token sequence $X=(x_{1},...,x_i,...,x_n)$ where $x_i$ is the $i$-th token in a sentence $S$. We feed $X$ with the [CLS] token to BERT to obtain the embedding $H_B=[b_{1},...,b_i,...,b_n]$, where $b_{i} \in \mathbb{R}^{d_{b}}$ and $d_{b}$ is the hidden dimension. In Figure \ref{fig:C}, there are two CRF layers with the input embedding $H_B$, i.e., the Spatial Element CRF SE-CRF and the Spatial Role CRF SR-CRF. We use SE-CRF to obtain the spatial element set $SE=[se_1,...,se_i,...,se_m]$ in $S$ where $se_i$ is a spatial element, and use SR-CRF to obtain the role set $RL = [rl_1,...,rl_i,...,rl_m]$ for all elements where $rl_i$ is the spatial role of the element $se_i$. We simply apply a multi-task framework to train these two CRFs, and they share the same BERT encoder layer.
Taking the sentence in Figure \ref{fig:galaxy} as an example, we can extract six spatial elements ``children'', ``school'', ``in'', ``who'', ``at'' and ``recess'', whose roles are \emph{Spatial Entity}, \emph{Place}, \emph{Spatial Signal}, \emph{Spatial Entity}, \emph{Spatial Signal} and \emph{Place}, respectively. Since CTE is the first stage of HMCGR, we aim to generate all possible spatial role triplets for the subsequent CLS module to achieve high recall. Hence, we first split the set $SE$ into three subsets: 1) $TM$=\{$Trajector$, $Mover$\}, 2) $LG$=\{$Landmark$, $Goal$\}, and 3) $TR$=\{$Trigger$\} according to their roles. Taking the above elements as an example, ``children'' and ``who'' belong to $TM$, while ``school'' and ``recess'' belong to $LG$ and the others belong to $TR$. \begin{figure}[htp] \centering \includegraphics[width=0.5\textwidth]{model_f_v1.pdf} \caption{Overview of candidate triplet extraction.} \label{fig:C} \end{figure} \vspace{-0.2cm} Finally, we enumerate possible triplets as candidates following the spatial relation definition. Commonly, some triplets may have roles with null values, as for the role $trigger$ shown in Figure \ref{fig:galaxy}, because its element is not mentioned in the corresponding sentence. If we enumerated all possible triplets including null roles as candidates, this would introduce an enormous number of negative triplets into the candidate set and then harm the precision badly. For example, there are 27 ($3^3$) candidate triplets in the sentence in Figure \ref{fig:galaxy}, while only 4 are annotated triplets. Hence, we do not generate triplets with null-value roles in the module CTE, and the extracted candidate triplet set $ET$ (of size $|TM|*|TR|*|LG|$) of the example in Figure \ref{fig:galaxy} is as follows: (who, at, recess), (who, in, school), (who, at, school), (who, in, recess), (children, in, school), (children, at, school), (children, in, recess) and (children, at, recess).
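The enumeration above can be sketched in a few lines of Python (an illustration using the role sets of the running example, not the authors' code):

```python
from itertools import product

# Spatial elements of the running example, grouped by role subset.
TM = ["children", "who"]   # Trajector / Mover candidates
TR = ["in", "at"]          # Trigger candidates
LG = ["school", "recess"]  # Landmark / Goal candidates

# No null-valued roles are generated at this stage, so the candidate
# set ET contains exactly |TM| * |TR| * |LG| triplets.
ET = [(tm, tr, lg) for tm, lg, tr in product(TM, LG, TR)]

print(len(ET))                        # 8
print(("who", "at", "recess") in ET)  # True
```

Adding a null option to each role would grow the product to $3^3 = 27$ candidates, which is why null slots are deferred to the generation module.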
\subsection{CLS: Spatial Relation Classification} Following previous work, CLS classifies the candidate triplets into four types, i.e., MOVELINK, OLINK, QSLINK, and null. If a triplet belongs to the type null, it is a pseudo spatial relation. The reason that we introduce the type null to CLS is that there are lots of pseudo triplets extracted by CTE and they will harm the precision. \subsubsection{Encoding} First, we simply use BERT and T5 to encode the sequence $X$\footnote{We add [cls] to the start of $X$ to obtain the sentence representation of BERT.} of the sentence $S$ to obtain the embeddings $H_{EB}=\{eb_{1},...,eb_{i},...,eb_{n}\}$ and $H_{ET}=\{et_{1},...,et_{i},...,et_{k}\}$, respectively. To make better use of the advantages of the two pre-trained models, we use cross attention to represent the hidden layer states as follows. In this way, we can get the new embeddings $H_{CB}=\{cb_{1},...,cb_{i},...,cb_{n}\}$ and $H_{CT}=\{ct_{1},...,ct_{i},...,ct_{k}\}$, where the latter is used in the RFX module. \begin{equation} cb_{i}=cross\_attention(eb_i, et_j) \end{equation} \begin{equation} ct_{i}=cross\_attention(et_j, eb_i) \end{equation} Second, we incorporate the candidate triplets extracted by CTE into the above embedding $H_{CB}$ to enhance the representation of spatial elements. Specifically, we introduce the SelfAttentiveSpanExtractor in AllenNLP to obtain the latent representations of the three spatial roles $H_{tm}$, $H_{lg}$ and $H_{tr}$ as follows. \begin{equation} H_{y}=\sum_{i=y_{start}}^{y_{end}} W_{y_{i}} cb_{i}\\ \end{equation} \noindent where $y \in \{tm, lg, tr \}$. $y_{start}$ and $y_{end}$ represent the start and end position of a spatial element, respectively, and $W_{y_{i}}$ are learnable parameters. Besides, since BERT may split a word into multiple word-pieces, we also use the SelfAttentiveSpanExtractor to obtain word-level representations.
\subsubsection{Spatial GCN} Most previous work ignored the function of demonstrative pronouns in spatial relation extraction. However, those pronouns can participate in various spatial relations. Inspired by \citet{DBLP:conf/naacl/PhuN21} in causal relation extraction, to capture the relationship between sentences and spatial roles, and to make better use of sentence structure and anaphora, we introduce a spatial graph $G=\{N,E\}$ to CLS, where the node set $N=X \cup SE$, which are defined in subsection 3.1. We initialize four adjacency matrices ($A^{B}$, $A^{E}$, $A^{C}$, $A^{D}$) to represent four edge types in our graph $G$ as follows. \emph{Sentence Boundary Edge}: Intuitively, relevant contextual information between the spatial elements within a sentence is helpful for this task. Hence, we create an undirected edge between two nodes if they are in the same sentence. Formally, we set $A^{B}_{i,j}=A^{B}_{j,i}=1$ if the nodes $n_i$ and $n_j$ ($n_i, n_j \in N$) are in the same sentence; otherwise, 0. \emph{Spatial Element Edge}: A spatial element and the tokens it contains may share some useful information. Therefore, we create a spatial element edge between a spatial element and each of its tokens. Formally, we set $A^{E}_{i,j}=1$ if $n_i$ contains $n_j$; otherwise, 0. \emph{Coreference Edge}: According to our statistics, about 20\% of the spatial relations in SpaceEval involve demonstrative pronouns. Hence, we construct an edge between two nodes if one can reference the other. Formally, we set $A^{C}_{i,j}=1$ if $n_i$ and $n_j$ are coreferential; otherwise, 0. \emph{Dependency Edge}: Following previous work in NLP, we also create an edge if two nodes have the same parent node in the dependency tree. Formally, we set $A^{D}_{i,j}=A^{D}_{j,i}=1$ if $n_i$ and $n_j$ have the same parent node in a dependency tree. Besides, we utilize SpaCy\footnote{https://spacy.io/} to extract the dependency trees and coreference chains.
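As a small illustration (not the authors' implementation; the toy node list below reuses words from the running example), two of these boolean adjacency matrices can be built with simple pairwise checks:

```python
# Toy node list: four token nodes from one sentence, where "who" (index 3)
# corefers with "children" (index 0). All values are illustrative.
nodes = ["children", "school", "in", "who"]
sent_id = [0, 0, 0, 0]      # sentence id of each node
coref = {(3, 0)}            # "who" -> "children"

n = len(nodes)
A_B = [[0] * n for _ in range(n)]  # sentence boundary edges
A_C = [[0] * n for _ in range(n)]  # coreference edges
for i in range(n):
    for j in range(n):
        if i != j and sent_id[i] == sent_id[j]:
            A_B[i][j] = A_B[j][i] = 1  # undirected edge
        if (i, j) in coref:
            A_C[i][j] = 1

print(sum(map(sum, A_B)))  # 12: every ordered pair within the sentence
print(sum(map(sum, A_C)))  # 1
```

The element edges $A^{E}$ and dependency edges $A^{D}$ follow the same pattern with containment and shared-parent predicates in place of the checks above.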
Since different edge types have different importance, we introduce four learnable weight matrices $W^{y}$ ($y \in \{B, E, C, D\}$) to merge the four edge types by their weights into an adjacency matrix $A$ as follows. \begin{equation} A_{i,j}=\sum_{y \in \{B, E, C, D\}} W^y_{i,j} A^{y}_{i,j} \end{equation} Finally, we can easily construct the graph $G$ and formulate a GCN for spatial information fusion to obtain its representation $H^{NS}$ as follows. \begin{equation} H^{NS} = GCN(G,N) \end{equation} \subsubsection{Classification} By recording the node identifier of each spatial role in the currently processed triplet, we can get the latent representation of the spatial role in $H^{NS}$. Inspired by the idea of ResNet \cite{he2016deep}, we concatenate the BERT hidden state $H_y$ ($y \in \{tm, lg, tr \}$) and the representation of the GCN nodes $H_y^{NS}$ as the final feature of the spatial roles as follows. \begin{equation} H^{'}_{y}=[H_{y};H_{y}^{NS}] \end{equation} \noindent where $H_{y}^{NS}$ represents the latent representation of the spatial roles in $H^{NS}$. Finally, a multi-layer perceptron (MLP) is used to classify the spatial relations, and we calculate the cross-entropy loss as follows. \begin{equation} y_{rel} = MLP([H^{'}_{tm};H^{'}_{tr};H^{'}_{lg}]) \end{equation} \begin{equation} L_{cls}=-\sum_{(tm,tr,lg) \in ET}log P(rel|tm,tr,lg) \end{equation} \noindent where $ET$ is the triplet set mentioned in subsection 3.1 and $rel$ is the relation of the triplet. \subsection{GEN: Spatial Relation Generation} To reduce negative triplets in the CTE module, we only enumerate candidate triplets without null roles. This strategy can help CLS improve its precision. However, it also cannot extract those null-role relations. Our statistics on the SpaceEval dataset show that 20\% of the annotated spatial relations have a null role. Hence, how to extract those null-role relations is still a challenge.
To address this issue, we introduce a spatial relation generation module GEN to extract those null-role relations. Hence, HMCGR contains a classification and a generation model, and they can complement each other to address their respective shortcomings. We introduce the pre-trained generation model T5 to our GEN, due to its excellent performance on many NLP applications \cite{DBLP:journals/jmlr/RaffelSRLNMZLL20}. Normally, there are two T5-decoding targets that can be used in our task, i.e., a triplet or a natural sentence. In our experiments, we found that a structured natural sentence is suitable as the target sentence of T5, which contains the following three parts. \emph{Referential Phrase Prefix}: To better use the coreference relation, we add a phrase with referential meaning to the target sentence and put this phrase at the beginning of the target sentence to let our GEN use this useful information. \emph{Relation Name}: To get the type of spatial relation, we design a slot for the spatial relation name in the target sentence. \emph{Relation Explanation}: To decode spatial relations more quickly and conveniently, we generate a structured sentence with $<$pad$>$ spatial role slots as our target sentence. Specifically, the form of the target sentence is as follows: ``The token ``$pronoun$'' stands for ``$noun$'', and $<pad>$ $qslink$ $<pad>$ can be describe as following : the first element is $<pad>$ $tm$ $<pad>$, the trigger is $<pad>$ $tr$ $<pad>$, and the second element is $<pad>$ $lg$ $<pad>$.'' Taking the candidate triplet (who, at, recess) as an example, we generate the following target statement $TGS$ for T5: ``The token ``who'' stands for ``children'', and $<pad>$ qslink $<pad>$ can be describe as following : the first element is $<pad>$ who $<pad>$, the trigger is $<pad>$ at $<pad>$, and the second element is $<pad>$ recess $<pad>$.''.
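Filling this template from a candidate triplet is plain string formatting. The sketch below mirrors the template wording verbatim and, as one possible convention (an assumption, since the paper does not spell out decoding details), renders a missing role as the literal slot value null:

```python
def build_target(pronoun, noun, rel, tm, tr, lg):
    """Fill the GEN target-sentence template; any role may be None (a null slot)."""
    slot = lambda v: "<pad> %s <pad>" % ("null" if v is None else v)
    # Template text copied from the paper, including its exact wording.
    return ('The token "%s" stands for "%s", and %s can be describe as following : '
            'the first element is %s, the trigger is %s, and the second element is %s.'
            % (pronoun, noun, slot(rel), slot(tm), slot(tr), slot(lg)))

tgs = build_target("who", "children", "qslink", "who", "at", "recess")
print("<pad> qslink <pad>" in tgs)  # True
```

Because every role sits in a delimited slot, the generated sentence can be parsed back into the form $Relation(tm, tr, lg)$, with null slots yielding null-role relations.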
We feed the sentence representation $H_{ET}$ into the T5-decoder and obtain a target sentence following the format of $TGS$, which can be translated into the form $Relation(tm, tr, lg)$. It is worth noting that one of $tm$, $tr$ and $lg$ may be null, so we can obtain those null-role relations. Finally, T5 generates a token or phrase for each output position using softmax, and then we can get the target sentence; the cross-entropy loss $L_{gen}$ is defined as follows. \begin{equation} L_{gen}=T5_{decoder}(TGS,H_{ET}) \end{equation} \subsection{RFX: Reflexivity Evaluation} Our CLS and GEN can extract spatial relations from different perspectives and complement each other effectively. However, the performance of GEN is still lower than that of CLS, because it suffers from the limited training data and the high ratio of negative to positive instances in this task. Most spatial relations have the attribute of reflexivity due to their nature. For example, "A in B" equals "B out of A" in spatial relations. According to the reflexivity of spatial relations, we design a similarity-based reflexivity evaluation mechanism to help GEN improve its performance. RFX first creates an inverted sentence $IVS$ from the original sentence $S$ and a candidate triplet $et$, and then uses the cosine function to calculate the similarity of their embeddings. If the two sentences are similar, the candidate triplet $et$ will be regarded as a spatial relation with high probability. For a sentence $S$ and a candidate triplet $et=(tm, tr, lg)$, we first exchange the positions of the two participants $tm$ and $lg$ in $S$, and then replace $tr$ with its antonym from an antonym dictionary. If $tr$ has more than one antonym, we randomly select one. The original sentence $S$ and the inverted sentence $IVS$ are fed to a T5-encoder to obtain the embeddings $H_{CT}$ (using cross attention) and $H_{IVT}$, respectively. Average pooling is applied to the above two embeddings to capture their global features as follows.
\begin{equation} H_{CTA}=avgpooling(H_{CT}) \end{equation} \begin{equation} H_{IVA}=avgpooling(H_{IVT}) \end{equation} Finally, we design the spatial semantic loss $L_{rfx}$ using a cosine similarity as follows. \begin{equation} L_{rfx}=1-cos(H_{CTA},H_{IVA}) \end{equation} \subsection{Joint Training and Decoding} In the training step, we train the classification model CLS and the generation model GEN together. To sum up, the overall loss $L$ of our model HMCGR consists of three parts as follows. \begin{equation} L=L_{cls}+L_{rfx}+L_{gen} \end{equation} Finally, the spatial relations are extracted by the two models, i.e., CLS and GEN, and the final spatial relation set is the union of their results. Besides, the module RFX is an effective auxiliary task that helps GEN improve its performance. \begin{table} \centering \begin{tabular}{lc} \hline \textbf{Tool/Parameter} & \textbf{Version/Value}\\ \hline Pytorch & 1.7.0+cu110\\ Spacy & 2.1.0\\ Allennlp & 2.6.0\\ dgl-cu110 & 0.6.1\\ Learning rate & 2e-5\\ Batch size & 4\\ Random seed & 1024\\ Hidden size of pre-training model & 768\\ Optimizer & AdamW\\ \hline \end{tabular} \caption{\label{font-table} Key parameters and tools used in our model. } \end{table} \begin{table} \centering \begin{tabular}{llll} \hline Model & P & R & F1\\ \hline BERT+CRF & 88.1 & 91.2 & 89.1\\ \hline \end{tabular} \caption{\label{roleresults} The results of spatial role extraction.
} \end{table} \begin{table*} \setlength\tabcolsep{5pt} \centering \begin{tabular}{cccc|ccc|ccc|ccc} \hline \multirow{2}{*}{Model}& \multicolumn{3}{c}{QSLINK}& \multicolumn{3}{c}{OLINK}& \multicolumn{3}{c}{MOVELINK}& \multicolumn{3}{c}{Overall}\\ \cline{2-4} \cline{5-7} \cline{8-10} \cline{11-13} & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1\\ \hline {\shortstack{Sieve-Based}} & {12.9} & {28.3} & {17.8} & {\textbf{100}} & {31.2} & {47.5} & {24.5} & {56.2} & {34.2} & {45.8} & {38.5} & {41.8}\\ {\shortstack{WordNet}} & {-} & {-} & {-} & {-} & {-} & {-} & {-} & {-} & {-} & {54.0} & {51.0} & {53.0}\\ {\shortstack{SpRL-CWW}} & {\textbf{66.1}} & {53.8} & {59.4} & {69.1} & {51.7} & {59.1} & {57.1} & {45.1} & {50.4} & {63.6} & {50.1} & {56.1}\\ {\shortstack{BERT-base}} & {\underline{45.1}} & {\underline{58.3}} & {\underline{50.5}} & {\underline{71.0}} & {\underline{69.6}} & {\underline{70.2}} & {\underline{62.7}} & {\underline{61.5}} & {\underline{62.1}} & {62.7} & {59.8} & {61.2}\\ {\shortstack{HMCGR}} & {53.5} & {\textbf{73.1}} & {\textbf{61.1}} & {73.1} & {\textbf{85.2}} & {\textbf{78.6}} & {\textbf{66.8}} & {\textbf{83.0}} & {\textbf{73.9}} & {\textbf{64.3}} & {\textbf{79.2}} & {\textbf{70.9}}\\ \hline \end{tabular} \caption{\label{mainresults} Performance comparison between the baselines and HMCGR on spatial relation extraction. Since BERT-base did not report results on each category, we ran their model to obtain the results (underlined). } \end{table*} \section{Experimentation} \subsection{Experimental Settings} We evaluate our model on the latest dataset SpaceEval. According to the official statistics, there are 1110 QSLINKs, 974 MOVELINKs and 287 OLINKs. We use the standard training/development/test split following previous work \cite{shin-etal-2020-bert}, where the ratio of the training set to the test set is 8:2. For evaluation, we report Precision (P), Recall (R), and Micro-F1 score.
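The reported micro metrics can be computed over (triplet, type) tuples as below (a standard sketch, not the authors' evaluation script; the example relations are hypothetical):

```python
def micro_prf(gold, pred):
    """Micro-averaged precision, recall and F1 over sets of predictions."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                         # true positives
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("who", "at", "recess", "QSLINK"), ("children", "in", "school", "QSLINK")}
pred = {("who", "at", "recess", "QSLINK"), ("children", "at", "school", "QSLINK")}
print(micro_prf(gold, pred))  # (0.5, 0.5, 0.5)
```

A predicted relation counts as correct only if the full triplet and its link type both match a gold annotation, which is the strict matching implied by the per-link P/R/F1 columns.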
We use PyTorch and Huggingface as our base tools and use the base versions of BERT and T5. The specific tool versions and key hyper-parameters are shown in Table \ref{font-table}. Currently, only a few works have focused on spatial relation extraction. To evaluate the effectiveness of our HMCGR, we compare it with the following strong baselines: 1) \textbf{Sieve-Based} \cite{DBLP:conf/emnlp/DSouzaN15}, which used the sieve mechanism and syntactic parse trees to enhance the features of spatial relations; 2) \textbf{WordNet} \cite{DBLP:conf/semeval/SalaberriAZ15}, which used WordNet as external knowledge to assist the task; 3) \textbf{SpRL-CWW} \cite{DBLP:conf/semeval/NicholsB15}, which is the SOTA traditional model using SVM and CRF classifiers on GloVe features to extract the spatial relations; 4) \textbf{BERT-base} \cite{shin-etal-2020-bert}, which is the SOTA neural network model using a BERT-based architecture for spatial element extraction and spatial relation extraction. \subsection{Experimental Results} The results of spatial role extraction on SpaceEval are shown in Table \ref{roleresults}, and the performance is similar to that of \citet{shin-etal-2020-bert}. In the stage of CTE, we get 3096 candidate triplets, of which 1355 are positive and 1741 are negative. These figures show that the number of negative instances exceeds that of positive ones. If we used null values to construct the candidate triplets, the large number of negative instances would harm the performance critically. Table \ref{mainresults} shows the overall performance of the baselines and our HMCGR on SpaceEval. Compared with the SOTA baseline BERT-base, our HMCGR significantly improves the overall F1-score by 9.7, especially the recall with a gain of 19.4. This result verifies the effectiveness of HMCGR, and indicates that our generation model GEN and our classification model CLS can promote each other.
Moreover, the improvement comes from all three links QSLINK, OLINK and MOVELINK, with gains of 10.6, 8.4, and 11.8, respectively. This result shows that our HMCGR works well on all links. It is worth noting that our improvement mainly comes from the recall, which indicates that the generation model is helpful to recover those null-role relations. \begin{table} \centering \begin{tabular}{llll} \hline Model & P & R & F1\\ \hline BERT-base& 44.5 & 31.7 & 37.0\\ HMCGR & 46.7 & 40.0 & 43.0\\ \hline \end{tabular} \caption{\label{defaultvalue} The results of spatial relation extraction on null-role relations. } \end{table} \section{Analysis} \subsection{Analysis on Null-role Relations} To further verify the effectiveness of our GEN, we count the null-role relations, and Table \ref{defaultvalue} shows the performance of BERT-base and HMCGR on them. Compared with BERT-base, HMCGR improves the F1-score by 6.0, especially with a significant gain on recall (+8.3). This result verifies our motivation that the generation model GEN is effective in extracting those null-role relations. However, only 40.0\% of the null-role relations in the test set are extracted by GEN, which indicates that null-role relation extraction still has much room for improvement. \begin{table} \centering \begin{tabular}{llll} \hline Model & P & R & F1\\ \hline HMCGR & \textbf{64.3} & \textbf{79.2} & \textbf{70.9}\\ \hline GEN & 60.4 & 53.1 & 56.5\\ CLS & 62.0 & 65.5 & 63.7\\ GEN+CLS & 64.1 & 75.2 & 69.2\\ GEN+RFX & 62.2 & 55.1 & 58.8\\ CLS+RFX & 62.0 & 62.5 & 62.2\\ \hline \end{tabular} \caption{\label{table:abd} Ablation study on different modules. } \end{table} \subsection{Ablation Study on Different Modules} We conduct ablation experiments to verify the effectiveness of the modules used in HMCGR, and Table \ref{table:abd} shows the results of the simplified models. The performance drops of the single models GEN and CLS are very large in comparison with the hybrid HMCGR.
This result indicates that a single classification or generation model may not be able to extract null-role and non-null-role relations simultaneously. Moreover, the performance of GEN is lower than that of CLS, and the reason is that the number of non-null-role relations is twice that of null-role relations. Besides, CLS works better than BERT-base, which verifies the success of our classification model. However, the performance of GEN is lower than that of BERT-base, which indicates that how to apply a generation model to traditional classification tasks is still a challenge. The combination model GEN+CLS outperforms GEN and CLS, with large gains of 12.7 and 5.5, respectively. This indicates that GEN and CLS can boost each other to improve the F1-score, especially the recall. In the SpaceEval dataset, 32.3\% of the spatial relations are null-role ones, among which 65.3\% do not have the role $trigger$ and the others do not have the role $landmark/goal$. Our GEN can recover almost 40\% of the null-role relations, which indicates that the generation model is well suited to extracting null-role relations. Moreover, combining the decisions of two different models can further improve the performance from different perspectives. Compared with GEN, GEN+RFX improves the F1-score by 2.3, with gains in both precision and recall. This indicates that our reflexivity evaluation mechanism RFX can not only help the generation model to extract more spatial relations, but also filter out pseudo relations. However, the F1-score of CLS+RFX is lower than that of CLS, especially the recall. Among the three relation types, only the F1-score of MOVELINK decreases, from 62.5 to 61.0. The reason is that some triggers in MOVELINKs do not have an antonym (e.g., "run" and "biking") and some sentences cannot be inverted. Besides, compared with GEN+CLS, HMCGR improves the F1-score by 1.7, with a gain of 4.0 on the recall.
This verifies that RFX is helpful to discover more relations in a hybrid model. \begin{table} \centering \begin{tabular}{llll} \hline Model & P & R & F1\\ \hline HMCGR & 64.3 & 79.2 & 70.9\\ \hline w/o GCN & 63.3 & 74.7 & 68.1\\ w/o CrossAtt & 62.1 & 74.2 & 67.6\\ \hline \end{tabular} \caption{\label{table-cls} Results of HMCGR and its simplified versions on SpaceEval. } \end{table} \subsection{Analysis on CLS} To verify the contributions of the components in CLS, we construct the following two simplified versions of HMCGR: 1) w/o GCN: the GCN layer is removed from HMCGR; 2) w/o CrossAtt: the cross attention is removed, i.e., we only use BERT to encode sentences. Table \ref{table-cls} shows the results of HMCGR and its simplified versions. If we remove the GCN layer and the cross attention, the F1-score decreases by 2.8 and 3.3, respectively. This result indicates that T5 is helpful for BERT to represent the sentence from different perspectives. As for the GCN layer, we find that the coreference edge is the main contributor, and more than 90\% of the improvement comes from this edge type. \subsection{Error Analysis} The errors of our HMCGR mainly come from CTE, GEN, and entity coreference. From Table \ref{roleresults}, we can see that 8.8\% of the spatial roles are missed by CTE and 11.9\% of the roles it passes to the following modules are pseudo ones. Our statistics on the results show that GEN often mispredicts null-role relations when a non-null-role relation and a null-role relation occur in the same sentence. Since T5 is a sequential generation model, the generation of the next spatial relation is affected by the previously predicted relation. That is, if the previous relation is a non-null-role one, the current relation tends to be non-null-role too. Taking Table \ref{exa} as an example, there are two MOVELINKs in the sentence.
After HMCGR has extracted the first relation MOVELINK(cattle, to, fields), it tends to predict the next one as MOVELINK(men, to, fields), instead of MOVELINK(men, null, fields). Although the coreference edge is the most effective one in the graph, many errors derive from it due to its low performance. \begin{table} \centering \begin{tabular}{l} \hline \textbf{Sentence:} There were already old men taking cattle \\out to the fields to graze.\\ \hline \textbf{Gold MOVELINKs:} \\ \{$mover$: cattle, $trigger$: to, $goal$: fields\}\\ \{$mover$: men, $trigger$: null, $goal$: fields\}\\ \hline \textbf{Predicted MOVELINKs:} \\ \{$mover$: cattle, $trigger$: to, $goal$: fields\}\\ \{$mover$: men, $trigger$: to, $goal$: fields\}\\ \hline \end{tabular} \caption{\label{exa} Examples of the errors in GEN. } \end{table} \section{Conclusion} In this paper, we propose a novel hybrid model HMCGR for spatial relation extraction. The generation model GEN can generate null-role relations, while the classification model CLS can extract non-null-role relations, so the two complement each other. Moreover, a reflexivity evaluation mechanism is applied to further improve the accuracy based on the reflexivity of spatial relations. Experimental results on the SpaceEval dataset show that our HMCGR outperforms the SOTA baseline significantly. Our future work will focus on how to extract null-role relations effectively.
Oliver Eustace Tidman (16 May 1911 – 20 December 2000) was an English footballer who played as an outside left. Career In 1932, Tidman signed for Aston Villa from Tufnell Park. On 11 February 1933, he made his only appearance for the club, in a 1–0 away win against Chelsea. In 1935, Tidman signed for Stockport County, making 24 Football League appearances and scoring four times. In 1936, he signed for Bristol Rovers, making 16 league appearances and scoring once. Tidman later played for Clapton Orient and Chelmsford City, signing for the clubs in 1937 and 1938 respectively. References 1911 births 2000 deaths Association football wingers English footballers People from Margate Middlesex Wanderers A.F.C. players Tufnell Park F.C. players Aston Villa F.C. players Stockport County F.C. players Bristol Rovers F.C. players Leyton Orient F.C. players Chelmsford City F.C. players English Football League players
"Small Horse Head Sterling Silver Set" Brand new! Gift sets ready to go with everything seen here. Includes a horse head pendant, matching horse head earrings, and an 18 inch 1mm cable chain. All items are sterling silver. Buying the right gift can be difficult. The Gorgeous Horse is now offering beautiful gift sets with everything you need to provide the perfect gift for the horse lover in your life. The horse heads are three-dimensional in design. Everything pictured is available with your purchase, down to the gift box.
Q: Given an encoded message, count the number of ways it can be decoded

You are given an encoded message containing only numbers. You are also provided with the following mapping:

    A : 1
    B : 2
    C : 3
    ...
    Z : 26

Given an encoded message, count the number of ways it can be decoded. E.g., 12 can be decoded in 2 ways: (A, B) and (L).

I came up with an algorithm that accepts the message as a string of characters and then checks each of its digits:

1. If the first digit of the string array is zero, return zero.
2. For each of its digits i from 1 to n, perform:

        if(str[i-1] > '2' || (str[i-1] == '2' && str[i] > '6')) return 0;
        if(str[i] == '0') return 0;

Each time I try to decode the first digit in the message as a letter, or I decode the first two digits into a letter if possible. When there is no way to decode, like encountering a single '0', or encountering '32', I simply return. Can this problem be solved more efficiently?

A: Your current approach to the problem is correct, although you need to be really careful that you are handling all the cases, which is not obvious; this will make my answer a bit longer than needed.

A correct way to see this problem is from a dynamic programming perspective. Let's call your input string message and its length n. To decode a message of n characters, you need to know in how many ways you can decode the message using its first n - 1 characters and using its first n - 2 characters. That is:

A message of n characters:

              1   2   3   4   5   6   7   8   9  10  11
            +---+---+---+---+---+---+---+---+---+---+---+
    message | 1 | 2 | 3 | 4 | 1 | 2 | 3 | 1 | 4 | 1 | 2 |
            +---+---+---+---+---+---+---+---+---+---+---+

Using 1 digit and a message of n - 1 characters:

              1   2   3   4   5   6   7   8   9  10      11
            +---+---+---+---+---+---+---+---+---+---+   +---+
    message | 1 | 2 | 3 | 4 | 1 | 2 | 3 | 1 | 4 | 1 | + | 2 |
            +---+---+---+---+---+---+---+---+---+---+   +---+

Using 2 digits and a message of n - 2 characters:
              1   2   3   4   5   6   7   8   9      10  11
            +---+---+---+---+---+---+---+---+---+   +---+---+
    message | 1 | 2 | 3 | 4 | 1 | 2 | 3 | 1 | 4 | + | 1 | 2 |
            +---+---+---+---+---+---+---+---+---+   +---+---+

Now, you may ask yourself: how do I calculate in how many ways you can decode a message of n - 1 characters, or of n - 2 characters? It's actually done in the same way; eventually you will reduce it to the base case.

Let's say ways[n] is the number of ways you can decode a message of n characters. Then ways[n] is given by

    ways[n] = ways[n - 1] + ways[n - 2]

(Since there is no clue how you'd define the number of ways for an empty string, I considered it as 1.)

With the proper constraints and base case:

* n = 0: ways[n] = 1
* n > 1 and message[n] is valid and message[n - 1:n] is valid: ways[n] = ways[n - 1] + ways[n - 2]
* n > 1 and message[n] is valid and message[n - 1:n] is not valid: ways[n] = ways[n - 1]
* n > 1 and message[n] is not valid and message[n - 1:n] is valid: ways[n] = ways[n - 2]
* otherwise: ways[n] = 0

An iterative decode function in C may look as follows:

    int decode(char* message, size_t len)
    {
        int i, w, ways[] = { 1, 0 };
        for(i = 0; i < len; ++i)
        {
            w = 0;
            if((i > 0) && ((message[i - 1] == '1') ||
                           (message[i - 1] == '2' && message[i] < '7')))
            {
                w += ways[1];
            }
            if(message[i] > '0')
            {
                w += ways[0];
            }
            ways[1] = ways[0];
            ways[0] = w;
        }
        return ways[0];
    }

You can see it here at ideone. I'm using constant extra memory for the calculation.

A: Thought I'd complement @Alexander's post with my commented Python code, as it took me a bit of time to get my head around exactly how the dynamic programming solution worked. I find it useful to realise how similar this is to coding the Fibonacci function. I've also uploaded the full code, tests and a runtime comparison with naive recursion on my github.

    def number_of_decodings_fast(code):
        """
        Dynamic programming implementation which runs in O(n) time and
        O(1) space. The implementation is very similar to the dynamic
        programming solution for the Fibonacci series.
        """
        length = len(code)
        if length <= 1:
            # assume all such codes are unambiguously decodable
            return 1
        n_prev = 1     # len 0
        n_current = 1  # len 1
        for i in range(1, length):
            if (
                # a '1' is ambiguous if followed by a digit X=[1-9],
                # since it could be decoded as '1X' or '1'+'X'
                code[i - 1] == '1'
                and code[i] in [str(k) for k in range(1, 10)]
            ) or (
                # a '2' is ambiguous if followed by a digit X=[1-6],
                # since it could be decoded as '2X' or '2'+'X'
                code[i - 1] == '2'
                and code[i] in [str(k) for k in range(1, 7)]
            ):
                # New number of decodings is the sum of decodings
                # obtainable from the code two digits back
                # (code[0:(i-2)] + '[1-2]X' interpretation) and decodings
                # obtainable from the code one digit back
                # (code[0:(i-1)] + 'X' interpretation).
                n_new = n_prev + n_current
            else:
                # New number of decodings is the same as that obtainable
                # from the code one digit back
                # (code[0:(i-1)] + 'X' interpretation).
                n_new = n_current
            # update n_prev and n_current
            n_prev = n_current
            n_current = n_new
        return n_current

A: Recursive solution:

    int countdecodes(char * s)
    {
        int r = 0;
        switch(*s)
        {
            case 0: return 1;
            case '0': return 0;
            case '1':
                r = countdecodes(s+1);
                if(*(s+1))
                    r = r + countdecodes(s+2);
                return r;
            case '2':
                r = countdecodes(s+1);
                switch(*(s+1))
                {
                    case '0': case '1': case '2': case '3':
                    case '4': case '5': case '6':
                        r = r + countdecodes(s+2);
                    default:
                        return r;
                }
            case '3': case '4': case '5': case '6':
            case '7': case '8': case '9':
                return countdecodes(s+1);
            default:
                return 0;
        }
    }

Sample returns:

* countdecodes("123"); // returns 3
* countdecodes("1230"); // returns 0
* countdecodes("1220"); // returns 2
* countdecodes("12321"); // returns 6

A:

    def nombre_codage(message):
        map = []
        for i in range(1, 27, 1):
            map.append(i)
        nbr = 1
        n = len(message)
        pair_couple = 0
        for j in range(0, n, 2):
            if int(message[j:j+2]) in map and len(message[j:j+2]) == 2:
                pair_couple += 1
        nbr += 2**pair_couple - 1
        impair_couple = 0
        for k in range(1, n, 2):
            if int(message[k:k+2]) in map and len(message[k:k+2]) == 2:
                impair_couple += 1
        nbr += 2**impair_couple - 1
        return nbr

Here's a solution in Python. It takes the message as a string, counts the digit pairs that could be encoded as a single letter and, using a form of the binomial coefficient identity (sum over p of C(n, p) = 2^n), counts how many ways you can encode the message.

A: This is a small and simple O(n) solution for the problem, using dynamic programming:

    int ways(string s){
        int n = s.size();
        vector<int> v(n + 1, 1);
        v[n - 1] = s[n - 1] != '0';
        for(int i = n - 2; i >= 0; i--){
            if(s[i] == '0') { v[i] = 0; continue; }
            if(s[i] == '1' || (s[i] == '2' && s[i + 1] <= '6')) {
                if(s[i + 1] != '0')
                    v[i] = v[i + 1] + v[i + 2];
                else
                    v[i] = v[i + 2];
            } else {
                v[i] = v[i + 1];
            }
        }
        return v[0];
    }

The idea is to walk index by index. If the digit at an index, taken together with the next digit, forms a number less than or equal to 26 with no zeros involved, the count splits into 2 possibilities; otherwise there is just one.
    ways("123");   // output: 3
    ways("1230");  // output: 0
    ways("1220");  // output: 2
    ways("12321"); // output: 6

A: I have a simple pattern-based solution for this problem with O(N) time complexity and O(1) space complexity.

Explanation, for the example 12312. The full decomposition tree is:

                              1
                            /   \
                         1,2     12
                        /   \       \
                   1,2,3    1,23    12,3
                      |       |       |
                1,2,3,1   1,23,1   12,3,1
                 /    \    /    \    /    \
        1,2,3,1,2 1,2,3,12 1,23,1,2 1,23,12 12,3,1,2 12,3,12

    P1  P2  N  T
     1   0  1  1
     1   1  2  2
     2   1  2  3
     3   0  1  3
     3   3  2  6

P1 represents the number of cases where the new element does not form a 2-digit number. P2 represents the number of cases where the new element forms a 2-digit number. N = 1 represents a case related to P1, and N = 2 represents a case related to P2. So a new tree with respect to cases 1 and 2 would look like:

              1
            /   \
           1     2
          / \     \
         1   2     1
         |   |     |
         1   1     1
        / \ / \   / \
       1  2 1  2 1   2

    #include <iostream>
    #include <string>
    using namespace std;

    int decode(string s)
    {
        int l = s.length();
        int p1 = 1;
        int p2 = 0;
        int n;
        int temp;
        for(int i = 0; i < l - 1; i++)
        {
            if(((int(s[i] - '0') * 10) + int(s[i+1] - '0')) <= 26)
            {
                n = 2;
            }
            else
            {
                n = 1;
            }
            temp = p2;
            p2 = (n == 2) ? p1 : 0;
            p1 = p1 + temp;
        }
        return p1 + p2;
    }

    int main()
    {
        string s;
        getline(cin, s);
        cout << decode(s);
        return 0;
    }

It is only valid for characters from 1 to 9.

A: Recursive solution:

    list = []

    def findCombiation(s, ls):
        if len(s) == 0:
            list.append(ls)
        if s[0:1] == "0":
            ls = ls + "0"
            findCombiation(s[1:], ls)
        else:
            st = s[:2]
            if st == "":
                return
            if int(st) > 25:
                ls = ls + s[:1]
                findCombiation(s[1:], ls)
            else:
                findCombiation(s[1:], ls + s[:1])
                findCombiation(s[2:], ls + s[:2])

    findCombiation("99", "")
    print(set(list))
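The recurrence ways[n] = ways[n - 1] + ways[n - 2] discussed in this thread can be condensed into a compact, constant-space Python function. The function name and comments are mine; the validity checks mirror the bullet list in the dynamic-programming answer above:

```python
def count_decodings(message: str) -> int:
    """Count decodings of a digit string under A=1 ... Z=26.

    Constant-space DP: two_back and one_back play the roles of
    ways[n-2] and ways[n-1]; each term is kept only when the
    corresponding one- or two-digit suffix is a valid letter code.
    """
    if not message:
        return 1                       # empty string: one (empty) decoding
    two_back = 1                       # ways for the empty prefix
    one_back = 1 if message[0] != '0' else 0
    for i in range(1, len(message)):
        current = 0
        if message[i] != '0':          # single digit 1-9 is a letter
            current += one_back
        if message[i - 1] == '1' or (message[i - 1] == '2' and message[i] <= '6'):
            current += two_back        # two-digit code 10-26 is a letter
        two_back, one_back = one_back, current
    return one_back

print([count_decodings(s) for s in ("12", "123", "1230", "1220", "12321")])
# [2, 3, 0, 2, 6]
```

It runs in O(n) time and O(1) extra space, and handles the '0' edge cases ("0", "100", "301") that trip up the simpler sketches.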
Shimmer and Shine: How to Host a Sponsored House Party FREE! A few weeks ago we were chosen to host a Shimmer and Shine House Party this past weekend. It was a ton of fun! Shimmer and Shine is a new Nick Jr. show premiering tonight! Shimmer and Shine are Leah's secret genies who sometimes misinterpret what Leah wishes for — and that's when the adventures begin. Your kids will love the humor and music in every episode; plus, it teaches problem-solving and teamwork. – Nick Jr. The greatest part about this party is that we were sent a bunch of products for free to host it! You too can host your own house party. Here's how it works. Sign up on the House Party website to be alerted when new parties are listed. Peruse the site for a party you are interested in. There are parties for adults as well! Apply and do all of the extra steps they ask you to do to have a good chance of being selected. Cross your fingers and say a prayer! What's great is that you can invite everyone right from the website. They provide the evite for you to send, and they also provide printable party favors on the site. The only thing you have to do is provide some vital vittles and have your party! The kids colored in this poster while they waited for the other party guests to arrive. Everyone quietly watched the show munching on popcorn. After the show they enjoyed punch and cake, then had a ball running around playing with each other. I just wanted to add a tiny bit more sparkle to the party favors, so I added shimmery lip gloss, a sparkly bracelet, sparkly maracas, and neon bubbles. As mentioned, this was a House Party sponsored event, and we received the Shimmer and Shine products for free! The best advice I can give when hosting your own party is to remember that this is a great opportunity to have a party for almost free. 
If you're a party planner like I am, you're going to want to go all out, and it's going to be hard to restrain yourself. I had to remind myself that this wasn't a birthday party; it was a party to show off the new Shimmer and Shine show, so it was OK not to have a candy buffet, a ton of food, etc. Trust me, that part was hard. Hopefully the next house party we do is around her birthday and I WILL be able to go all out! Catch Shimmer and Shine Monday, Aug. 24, at 7:30 p.m. (ET/PT) on Nick and Nick Jr.
## Chemistry and Chemical Reactivity (9th Edition)

Published by Cengage Learning

# Chapter 25 Nuclear Chemistry - Study Questions - Page 1007b: 29

#### Answer

$0.781\,\mu g$

#### Work Step by Step

Time elapsed = 63.5 hours = $\frac{63.5\,hours}{12.7\,hours}$ half-lives = 5 half-lives. The amount of Cu-64 has decreased to $(\frac{1}{2}\times\frac{1}{2}\times\frac{1}{2}\times\frac{1}{2}\times\frac{1}{2})$ of the original amount. The amount of Cu-64 remaining $=25.0\,\mu g\times(\frac{1}{2})^{5}=25.0\,\mu g\times\frac{1}{32}=0.781\,\mu g$.
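The half-life arithmetic can be double-checked in a few lines of Python (variable names are my own):

```python
# Cu-64 decay check: 63.5 h elapsed at a 12.7 h half-life is 5 half-lives,
# so 25.0 micrograms decays to 25.0 / 32 = 0.78125 micrograms.
half_life_hours = 12.7
elapsed_hours = 63.5
n_half_lives = elapsed_hours / half_life_hours  # 5.0
remaining_ug = 25.0 * 0.5 ** n_half_lives
print(round(remaining_ug, 3))  # 0.781
```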
Meschi is a Georgian surname. People with this surname include:
Gela Meschi – Russian actor
Micheil Meschi – Georgian footballer
\section{Introduction} The fast termination of a fusion-grade plasma in a tokamak reactor is prone to Ohmic-to-runaway current conversion~\cite{Hender-etal-nf-2007}, which is made extraordinarily efficient by the avalanche mechanism~\cite{Sokolov:1979, Jayakumar:1993, Rosenbluth:1997} due to the knock-on collisions between primary runaways and background free and bound electrons~\cite{Martin:2017, hesslow2019influence, mcdevitt2019avalanche}. Such fast shutdowns could be intentional, for safety upon the detection of an inadvertent sub-system fault, for example, or unplanned, as the result of a tokamak disruption. Disruptions can have a variety of causes~\cite{deVries-etal-nf-2011}, including such a mundane event as a tungsten flake falling into the plasma. At the relativistic energies characteristic of runaway electrons (RE), their localized deposition on the first wall can induce severe surface and sub-surface damage of plasma-facing components. A straightforward and perhaps ideal approach to mitigate RE damage is to minimize the runaway population by avoiding the runaway avalanche altogether. This is the so-called runaway electron avoidance problem in a tokamak plasma. The most troublesome feature of a fast shutdown, as in a tokamak disruption, is the ease with which a fusion-grade plasma rids itself of its thermal energy in comparison with the plasma current. The so-called thermal quench (loss of plasma thermal energy) is often one to two orders of magnitude (if not more) shorter than the current quench (decay of plasma current)~\cite{Hender-etal-nf-2007}. In a post-thermal-quench plasma, mitigated or not, the plasma power balance is mostly between collisional or Ohmic heating and plasma radiation. This is usually the case because the post-thermal-quench plasma temperature is clamped by high-Z impurity radiation to a very low value, likely in the range of a few electron volts. 
Radial transport at such low thermal energies is relatively slow, even in the presence of a stochastic magnetic field~\cite{ward1992impurity, breizman2019physics}. The source of high-Z impurities could be divertor/wall materials that are introduced into the plasma through intense plasma-wall interaction during the thermal quench, when the bulk of the plasma thermal energy is dumped on the plasma-facing components. In a mitigated thermal quench, high-Z impurities, such as neon or argon, are deliberately injected into the plasma via pellets or gas jets. In the standard scenario, where the thermal quench is fast and the post-thermal-quench plasma is cold and rich in high-Z impurities, an Ohmic-to-runaway current conversion is inevitable when a finite RE seed and a large amount of plasma current are present. This results in the formation of a runaway plateau shortly after the thermal quench. An interesting discovery, from experiments on both DIII-D~\cite{paz2021novel} and JET~\cite{reux2021demonstration}, is that the high-Z impurities can be purged by a massive hydrogen injection in the runaway plateau phase. The resulting, mostly hydrogen plasma can expel the runaway electrons via a large-scale MHD event that stochasticizes the magnetic fields globally. The details of the underlying MHD instabilities vary in the DIII-D and JET experiments~\cite{bandaru2021magnetohydrodynamic}, but the expectation that open field lines lead to rapid runaway loss via parallel streaming is robustly met in both devices. The added benefit is the experimental observation that the runaways are broadly dispersed onto the first wall, so no appreciable localized heating is detected. The so-called MHD flush of the runaways after an impurity purge leaves open the possibility that the mostly hydrogenic plasma could reheat to sustain an Ohmic current without crossing the avalanche threshold. This is the topic of the current paper. 
\begin{figure} \begin{centering} \includegraphics[scale=0.33]{./ElectricFieldRoots_2} \par\end{centering} \caption{Transition between Ohmic and RE roots. The red curve indicates the parallel electric field on the Ohmic root, whereas the green curve indicates the parallel electric field on the RE root. The temperature at which the curves intersect defines $T_{av}$. The deuterium density was taken to be $n_D=10^{21}$~m$^{-3}$, the neon density $n_{Ne}=10^{19}$~m$^{-3}$, and the current density was taken to be $j = 2\;\text{MA/m}^2$.} \label{fig:RE-heating} \end{figure} In a plasma of atomic mix $\{n_\alpha\}$, with $\alpha$ labeling the atomic species, the power balance between Ohmic heating and radiative cooling sets the plasma temperature and the ion charge state distribution $\{n_\alpha^i\}$, with $i$ the charge number, and, through the electron temperature $T_e$ and the charge state distribution, the parallel electric field $E_\parallel.$ Since the threshold electric field for runaway avalanche growth $E_{av}$ is also set by the atomic mixture, the charge state distribution, and its derived quantity, the electron density $n_e,$ the plasma power balance between Ohmic heating and radiative cooling imposes a stringent constraint on the plasma regime for avoiding and minimizing runaways when a fusion-grade tokamak plasma is to be terminated either intentionally or unintentionally. Robust RE avoidance can be achieved if Ohmic heating is able to offset the radiative and transport losses, and reheat the plasma so the parallel electric field $E_{\parallel}=\eta j_\Vert$ drops below the runaway avalanche threshold $E_{av}.$ If this could be maintained over the remainder of the current quench, effective runaway ``avoidance'' would have been achieved. The key question is the critical deuterium density and the fractional neon impurity density below which such a scenario can be triggered. 
A second question is whether the reheated plasma can be placed in the regime that the Ohmic current quench falls within the known design constraint for the current quench duration, which in the case of ITER has an upper bound of 150~milliseconds (ms) and a lower bound of 50~ms~\cite{Hender-etal-nf-2007,Hollmann-etal-PoP-2015}. \begin{figure} \begin{centering} \subfigure[]{\includegraphics[scale=0.24]{./RadLosses2MAfNeon0o05DensityScanOct8_2022}} \subfigure[]{\includegraphics[scale=0.24]{./RadLosses2MAfNeon0o01DensityScanOct8_2022}} \subfigure[]{\includegraphics[scale=0.24]{./RadLosses2MAfNeon0DensityScanOct9_2022}} \subfigure[]{\includegraphics[scale=0.24]{./RadLosses1MAfNeon0DensityScanOct9_2022}} \par\end{centering} \caption{Ohmic heating $\eta j^2$ (dashed lines) with current carried by background electrons, collisional heating $\mathbf{E}_{av}\cdot\mathbf{j}$ (dashed-dotted lines) with current carried by runaway electrons and $E_{av}$ the avalanche threshold field, and radiative cooling rate $P_{rad}$ (solid lines) are shown as a function of $T_e$ and for three deuterium densities: $n_D=10^{20}$~m$^{-3}$ (black), $n_D=10^{21}$~m$^{-3}$ (red), and $n_D=10^{22}$~m$^{-3}$ (blue). There are 4 cases shown: (a) $j = 2$~MA/m$^2, n_{Neon}/n_D = 5$\%; (b) $j = 2$~MA/m$^2, n_{Neon}/n_D = 1$\%; (c) $j = 2$~MA/m$^2, n_{Neon}=0;$ (d) $j = 1$~MA/m$^2, n_{Neon}=0.$} \label{fig:runaway-avoidance} \end{figure} This Letter lays out the basic physics considerations underlying the answers to both questions explained above, which are of practical importance to a tokamak reactor like ITER. From the plasma power balance between Ohmic heating and radiative cooling, we find that the operational space for plasma reheating and runaway avoidance is highly constrained in terms of the plasma density and the remnant impurity content. 
This can be illustrated by considering the quasi-steady state parallel electric field as a function of the electron temperature, an example of which is plotted in Fig. \ref{fig:RE-heating}. First considering the case in which a negligible number of runaway electrons are present, the parallel electric field will be given by $E_\Vert = \eta j_\Vert$, with $\eta$ the plasma resistivity and $j_\parallel$ the plasma parallel current density~\cite{comment:force-free}. Noting that the plasma resistivity scales as $\eta \propto 1/T^{3/2}_e$, the electric field will decrease rapidly as $T_e$ is increased for a given plasma current density $j_\Vert$. Once the magnitude of the electric field has dropped below $E_{av}$, runaway electron amplification by the avalanche mechanism will no longer be possible. The electron temperature at which this occurs will be referred to as $T_{av}$. For temperatures below $T_{av}$, two distinct roots of the system are present. This can be motivated by considering an Ohm's law, modified to account for the presence of runaway electrons, of the form: \[ E_\Vert = \eta \left( j_\Vert - j_{RE}\right) . \] For $j_{RE} \ll j_\Vert$, the electric field can again be approximated by $E_\Vert \approx \eta j_\Vert$, which yields the red curve shown in Fig. \ref{fig:RE-heating}. For $T_e < T_{av}$ this root can, however, be recognized to be unstable. In particular, since $E_\Vert > E_{av}$ when $T_e < T_{av}$, any seed RE population present in the plasma will be amplified by the avalanche mechanism. As a larger fraction of the plasma current is carried by REs, this will cause $E_\Vert$ to drop until $E_\Vert \approx E_{av}$~\cite{Rosenbluth:1997,Breizman:2014}. This second root, which we will refer to as the RE root, is stable for $T_e < T_{av}$, and leads to the formation of a current plateau. Thus, a sufficient condition to avoid RE formation is to maintain $T_e \gtrsim T_{av}$. 
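The crossing that defines $T_{av}$ can be illustrated with a short numerical sketch. The numbers below are purely illustrative (an assumed Spitzer-like resistivity prefactor and a fixed avalanche threshold), not the output of the collisional-radiative calculation used in this Letter:

```python
# Illustrative sketch: the Ohmic-root field E = eta(T_e) * j crossing the
# avalanche threshold E_av. The prefactor eta0 and the threshold E_av are
# assumed round numbers, not values computed in this Letter.
eta0 = 1.65e-4   # resistivity prefactor, Ohm*m*eV^(3/2) (assumed)
j = 2.0e6        # parallel current density, A/m^2 (2 MA/m^2)
E_av = 1.0       # avalanche threshold field, V/m (assumed)

def E_parallel_ohmic(T_e_eV):
    """Ohmic-root parallel electric field, with eta ~ T_e^(-3/2)."""
    return eta0 * T_e_eV ** -1.5 * j

# Analytic crossing point: eta(T_av) * j = E_av  =>  T_av = (eta0*j/E_av)^(2/3)
T_av = (eta0 * j / E_av) ** (2.0 / 3.0)

# Below T_av the Ohmic field exceeds E_av (avalanche-unstable root);
# above T_av it drops below E_av and avalanche growth is shut off.
print(E_parallel_ohmic(0.5 * T_av) > E_av)  # True
print(E_parallel_ohmic(2.0 * T_av) < E_av)  # True
```

With these assumed inputs $T_{av} \approx 48$~eV; the quoted value matters less than the scaling: halving $j_\Vert$ at fixed composition lowers $T_{av}$ by a factor of $2^{2/3}$.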
The primary challenge is to identify a solution whereby $T_e \gtrsim T_{av}$ while simultaneously adhering to the ITER requirement of a current quench timescale in the range of 50-150~ms.~\cite{Hollmann-etal-PoP-2015} The challenge of simultaneously satisfying these two constraints is made evident in the power balance curves illustrated in Fig.~\ref{fig:runaway-avoidance}. Here, the Ohmic heating rate is plotted along with the radiative cooling rate $P_{rad}$ as a function of the electron temperature $T_e.$ The bulk plasma heating can be estimated by multiplying the parallel electric field sketched in Fig.~\ref{fig:RE-heating} by the plasma current density. For the Ohmic root, this leads to the familiar expression $P_\eta \equiv \eta j^2_\Vert$. For the RE root, this leads to the net energy transferred to the plasma being given by $P_{RE} = E_{av} j_\parallel$. While a small fraction of this energy will be lost via radiative losses in the channels of synchrotron radiation~\cite{Martin:2000, guo2017phase}, bremsstrahlung~\cite{Embreus-etal-NJoP-2016}, and line emission~\cite{garland-etal-pop-2020,garland-etal-pop-2022}, the majority of this energy will be collisionally transferred to the bulk electrons. Thus, at steady state the heating of the bulk electrons will be bounded from above by $P_{RE} = E_{av} j_\parallel$ when on the RE root. For given atomic densities of deuterium and neon impurities, the collisional-radiative codes FLYCHK~\cite{chung2005flychk} (for D) and ATOMIC~\cite{Fontes-etal-JPB-2015} (for Ne) are used to compute the charge state distribution and the radiative cooling rate, in the steady-state approximation, as a function of $T_e.$ The free electron density $n_e$ is then found from quasineutrality. 
The charge state distribution is then fed into the avalanche threshold evaluation using the runaway vortex O-X merger model~\cite{mcdevitt2018relation}, which accounts for the partial screening effect using the collisional friction and pitch angle scattering rates given in Hesslow, et al.~\cite{Hesslow:2017}. This latter step yields an estimate of the avalanche threshold as a function of the plasma composition. It is interesting to note that at very low $T_e,$ there is a sizable neutral population and the electron-neutral collisions can contribute significantly to collisional friction~\cite{frost1961conductivity, schweitzer1966electrical, zhdanov2002transport}. This is reflected by the enhanced Ohmic heating at the low $T_e$ end in Fig.~\ref{fig:runaway-avoidance}, where the Ohmic heating power, after factoring in the electron-neutral collisions, deviates from the $T_e^{-3/2}$ scaling that is predicted from the Spitzer resistivity. Recall that a mitigated post-thermal-quench plasma is radiatively clamped to low $T_e,$ likely in the range of a few eVs, and the purge of neon by massive deuterium injection involves a further cooling of $T_e,$ so the reheating of the bulk plasma necessarily starts from the very low $T_e$ end, most likely below the first peak of the radiative cooling rate curve shown in Fig.~\ref{fig:runaway-avoidance}, which is set by deuterium, not the neon impurity. For high enough deuterium density $n_D$ and at modest plasma current density, Ohmic heating may not be able to overcome this first peak in the radiative cooling curve, and there is no significant reheating of $T_e$ possible. This is shown by the solid blue curve (radiative cooling) in Fig.~\ref{fig:runaway-avoidance}(d) in comparison with the dotted-dash line (Ohmic heating). It is of interest to note that the deuterium radiative peak, in the case of $n_D=10^{22}$~m$^{-3},$ is very close to the $P_\eta$ curve in Fig.~\ref{fig:runaway-avoidance}(a,b). 
If $j_\parallel$ is dropped from 2~MA/m$^2$ to 1~MA/m$^2$ in these two cases, $P_\eta$ will also cross the deuterium radiative cooling peak. For the deuterium radiation peak to safely stay below $P_\eta$, the deuterium density $n_D$ must be lower, by an amount that scales with $j_\parallel^{1/2}.$ For discharges that satisfy this constraint, the mostly hydrogenic plasma will be reheated above the deuterium peak, which is around $T_e=1.2$~eV. This deuterium density constraint is a necessary, but generally not sufficient condition, for the plasma to be reheated enough to avoid runaways. The complication comes from the presence of remnant impurities. In Fig.~\ref{fig:runaway-avoidance}(a,b), one can see that the presence of neon impurities, as small as 1-5\% in fractional number density, introduces a second radiative cooling peak in the range of $T_e \approx 30\;\text{eV}$. The first crossing point between the radiative cooling ($P_{rad}$) curve and the Ohmic heating ($P_\eta$) curve marks the critical electron temperature $T_{reheat}$ that the reheating of the plasma will be bounded from above. From Fig.~\ref{fig:runaway-avoidance}(a), we find that with high enough $n_{neon}$ ($5\%$ for this case), $T_{reheat}$ is in the range of a few eV to 30 eV. This suggests an in-range Ohmic current quench time, but avalanche is unavoidable because $T_{reheat} < T_{av}$ for all three densities. To further quantify this concept, we recall that the parallel electric field at $T_{reheat}$ for an Ohmic plasma (i.e. the plasma current is purely Ohmic), is simply $E_{reheat} \equiv E_{\parallel}(\eta) = \eta j_\parallel$. 
We can plot the $P_{RE}=E_{av} j_\parallel$ in the same plot, and the ratio of $P_\eta(T_{reheat}) = \eta(T_{reheat}) j_\parallel^2$ and $P_{RE}$ is just the ratio of $E_{reheat}$ and $E_{av}.$ Equivalently, we can cast the ratio of $E_{reheat}$ and $E_{av}$ in terms of the $T_{av}/T_{reheat},$ with $T_{av}$ the intercept of the runaway heating curve $P_{RE}$ and the Ohmic heating curve $P_\eta.$ Since $E_\parallel(\eta) = \eta j_\parallel \propto Z_{eff}/T_e^{3/2}$ with $Z_{eff}$ the effective ion charge of the plasma, one finds \begin{align} \frac{E_{reheat}}{E_{av}} = \frac{Z_{eff}(T_{reheat})}{Z_{eff}(T_{av})} \left(\frac{T_{av}}{T_{reheat}}\right)^{3/2} . \end{align} Figure~\ref{fig:runaway-avoidance}(b) reveals that even with five times lower neon densities, $n_{neon}=10^{20}$~m$^{-3}$ (solid blue line) and $n_{neon}=10^{19}$~m$^{-3}$ (solid red line), which correspond to fractional number density of 1\% for neon impurities in a deuterium plasma, $E_{reheat}/E_{av} \ge 10.$ For such a large parallel electric field, we anticipate robust runaway current reconstitution via the avalanche mechanism. To safely avoid runaways, $E_\parallel=\eta(T_e) j_\parallel$ should stay below $E_{av},$ which corresponds to $T_{av} < T_{reheat}.$ From Fig.~\ref{fig:runaway-avoidance}(a,b), we find that only the case of lowest $n_D$ ($10^{20}$~m$^{-3}$) and impurity content ($n_{Neon}/n_D=1$\%) satisfies this requirement. And when it does, the plasma actually recovers from the disruption by reaching electron temperatures in excess of one keV. This could be a favorable outcome in a tokamak that offers sufficiently fast positional control to avoid vertical displacement events (VDEs). In an ITER-like reactor, reheating of the plasma with less plasma current simply implies a hot VDE due to the long wall time of the vacuum vessel. 
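The ratio above is elementary to evaluate. The sketch below, with illustrative placeholder inputs rather than the $Z_{eff}(T_e)$ values from the ATOMIC/FLYCHK calculation, shows how quickly the Ohmic field overshoots the threshold once $T_{reheat}$ falls well below $T_{av}$:

```python
def field_ratio(z_eff_reheat, z_eff_av, T_av, T_reheat):
    """E_reheat/E_av = [Z_eff(T_reheat)/Z_eff(T_av)] * (T_av/T_reheat)^(3/2).

    Follows from E_parallel = eta * j with eta ~ Z_eff / T_e^(3/2).
    All inputs here are illustrative placeholders.
    """
    return (z_eff_reheat / z_eff_av) * (T_av / T_reheat) ** 1.5

# Equal Z_eff and T_av = 10 * T_reheat: the Ohmic field overshoots the
# avalanche threshold by 10^(3/2), i.e. roughly a factor of 30.
print(round(field_ratio(1.0, 1.0, 100.0, 10.0), 1))  # 31.6
```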
If the goal is to terminate the plasma for a shutdown of the reactor, the more desirable scenario lies with much reduced impurity radiation, but high deuterium density to prevent the plasma from achieving electron temperatures in excess of a keV. The limiting case of $n_{neon}=0$ is shown in Fig.~\ref{fig:runaway-avoidance}(c,d). One can see that $n_D=10^{21}$~m$^{-3}$ (solid red curve) is high enough to force $T_{av} < T_{reheat},$ so the Ohmic electric field stays below the avalanche threshold. The choice of even higher deuterium densities, for example, the blue curves in Fig.~\ref{fig:runaway-avoidance}(c) for $n_D=10^{22}$~m$^{-3},$ offers the intriguing possibility of a lower $T_{reheat}$ with an Ohmic electric field that is marginally above the avalanche threshold electric field at $j=2$~MA/m$^2.$ Such a $T_{reheat},$ in the range of 10-15~eV or so for a deuterium plasma, is more consistent with the current quench duration of 100~ms envisioned for ITER. This promising prospect is complicated by the fact that as the plasma current density drops from 2~MA/m$^2$ to 1~MA/m$^2,$ the reduction in Ohmic heating power would lead to a radiatively clamped $T_e$ below the deuterium peak around 1.2~eV in a plasma of $n_D=10^{22}$~m$^{-3},$ resulting in an Ohmic electric field significantly above the avalanche threshold; see Fig.~\ref{fig:runaway-avoidance}(d). To avoid the avalanche growth of runaways during the current quench, one thus relies on (1) the current-carrying plasma shrinking in size as $I_p$ decays but maintaining comparable $j_\Vert,$ or (2) a way to dynamically reduce the plasma particle density as $I_p$ and $j_\Vert$ decay in time. A number of observations can be made here on both (1) and (2). For (1), it is indeed the case that as the toroidal plasma current $I_p$ decreases, the current-carrying plasma column does shrink.
The resulting change in $j_\parallel$ is modest, at most a factor of two in an ITER-like plasma initially carrying 15 MA of plasma current. In a goldilocks situation with $T_e$ fixed, a factor of 2 drop in $j_\parallel$ produces a factor of 4 decrease in $P_\eta.$ Since the deuterium radiative cooling rate scales with the product of the ion and electron densities, which are approximately equal in the $T_e\ge 10$~eV range, in order to balance the reduced Ohmic heating rate, $n_D$ would have to be reduced by a factor of 2 as well. In practice, the more likely scenario is that the reduced Ohmic heating due to a lower $j_\parallel$ would lower $T_e,$ boosting the deuterium radiative power loss rate in the temperature range of $T_e=10-30$~eV. This would aggravate the need to further reduce $n_D.$ Reduction of $n_D$ in the temperature range of $T_e \approx 10-20$~eV can only be achieved via plasma transport, which can be sustained in a discharge if particle pumping at the chamber boundary is maintained in a post-thermal-quench plasma. A potential remedy to impede a drop of $T_e$ with a decreasing $j_\parallel$ lies with physical mechanisms that can reduce plasma cooling with a decreasing $T_e.$ In the targeted range of $T_e \approx 10-20$~eV, neon radiation intensity rapidly decreases with a decreasing $T_e.$ This suggests a mitigating role for neon impurities. By contrasting the radiation intensity of deuterium and neon around $T_e=20$~eV at fixed $n_e=10^{22}$/m$^3,$ one finds that a fractional number density of $10^{-5}$ for the neon impurity would make the neon radiative cooling rate twice that of the bulk deuterium plasma. Along the same line, if the neon fractional number density is $10^{-6},$ the neon radiation would be 1/5 of the deuterium's, and it would have a negligible offsetting effect in reducing the cooling rate as $T_e$ drops.
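The factor-of-two bookkeeping above can be checked with a minimal scaling sketch (a back-of-the-envelope aid, not the paper's code; it only encodes the $P_\eta \propto j_\parallel^2$ and $P_{rad} \propto n_e n_D \approx n_D^2$ scalings at fixed $T_e$):

```python
# At fixed T_e: Ohmic heating scales as j^2, while the deuterium radiative loss
# scales as n_e * n_D ~ n_D^2 (quasi-neutral plasma, T_e >= 10 eV, per the text).
def ohmic_heating_factor(j_factor):
    return j_factor ** 2

def radiative_loss_factor(n_factor):
    return n_factor ** 2

# j_parallel halves (2 -> 1 MA/m^2), so P_eta drops by a factor of 4; restoring
# the power balance requires the radiative loss to drop by 4 too, i.e. n_D must
# also halve.
j_factor = 0.5
n_factor = j_factor  # n_factor**2 must match j_factor**2
```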
\begin{figure} \begin{centering} \subfigure[]{\includegraphics[scale=0.25]{./EoverEavj2_Tecontour_Oct10_2022_20by20}} \subfigure[]{\includegraphics[scale=0.25]{./EoverEavj1o5_Tecontour_Oct10_2022}} \subfigure[]{\includegraphics[scale=0.25]{./EoverEavj2_TecontourSmallnNe_Oct10_2022}} \subfigure[]{\includegraphics[scale=0.25]{./EoverEavj1o5_TecontourSmallnNe_Oct10_2022}} \par\end{centering} \caption{Comparison of the parallel electric field with the avalanche threshold. Panels (a) and (c) are for $2\;\text{MA}/\text{m}^{2}$ and panels (b) and (d) are for $1.5\;\text{MA}/\text{m}^{2}$. The yellow contours indicate temperature, whereas the white contour is for $E_\Vert/E_{av} - 1 = 0$.} \label{fig:I5} \end{figure} The case studies shown so far clarify the basic physics considerations and the resulting constraints on the plasma regime for avoiding runaway avalanche in a post-thermal-quench plasma. Next we perform a more comprehensive scan to demarcate the preferred operational regime in terms of $(n_D, n_{neon}).$ Two derived quantities will be used to characterize the operational regime: ${\cal{E}} \equiv E_{reheat}/E_{av}-1$ and $T_{reheat},$ both of which were previously explained in the text and computed in Fig.~\ref{fig:runaway-avoidance}. The result of this calculation is shown in Fig. \ref{fig:I5} for two different current densities. Two temperature contours are also plotted; to remain within the current decay time targeted by ITER, the electron temperature should remain roughly in the range of $T_e \approx 10-20\;\text{eV}$. Considering first a high current density case [see Fig. \ref{fig:I5}(a)] with $j_\parallel = 2\;\text{MA}/\text{m}^{2}$, it is evident that the system will remain well above the avalanche threshold unless a near complete purge of the neon is present. Furthermore, for low to modest deuterium densities ($n_D \lesssim 2\times 10^{15}\;\text{cm}^{-3}$) the regions below the avalanche threshold (white contour in Fig.
\ref{fig:I5}) coincide with electron temperatures in excess of $100\;\text{eV}$, implying that these cases would have exceptionally long current decay times. At higher deuterium density a solution near the avalanche threshold with a temperature in the range of $T_e \approx 10-20\;\text{eV}$ is present, though it requires a near complete purge of the neon. At a modestly lower current density of $j_\parallel = 1.5\;\text{MA}/\text{m}^{2}$ [see Fig. \ref{fig:I5}(b)], the plasma recombines at the highest deuterium density considered, thus leading to a region with $E_\parallel / E_{av} \gg 1$. This has the effect of shifting the region with $E_\parallel/E_{av} \sim 1$ to lower deuterium densities. Hence, the target deuterium density will depend on the local current density of the plasma. Focusing on the very low neon density regime [Figs. \ref{fig:I5}(c) and (d)], it is evident that even at very low neon densities, there is no solution below the avalanche threshold with an electron temperature less than $20\;\text{eV}$. It is, however, apparent that a solution with the electric field within a factor of two of the avalanche threshold is present at high deuterium density. Although this cannot avoid the avalanche growth of runaway electrons, it does lead to higher poloidal flux consumption in growing the runaway population, which has the favorable effect of reducing the plasma current after runaway reconstitution. In conclusion, the plasma power balance in a post-thermal-quench plasma places a rigorous constraint on the plasma regime in which runaways can be avoided or minimized. Specifically, unless a current quench duration of greater than 150~ms can be tolerated, there does not appear to be a $(n_D, n_{Neon})$ regime in which runaway avalanche can be completely avoided.
Within the known ITER constraint for current quench duration the high $n_D$ but negligibly low $n_{Neon}$ regime can deliver the desired current quench time while minimizing the runaway current, by reaching an Ohmic parallel electric field that is above, but close to, the avalanche threshold electric field. We thank the U.S. Department of Energy Office of Fusion Energy Sciences and Office of Advanced Scientific Computing Research for support under the Tokamak Disruption Simulation (TDS) Scientific Discovery through Advanced Computing (SciDAC) project, and the Base Theory Program, both at Los Alamos National Laboratory (LANL) under contract No. 89233218CNA000001. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231 and the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No. 89233218CNA000001. \bibliographystyle{apsrev}
Comment Form is a new addon for GD Rating System Pro that adds an easy way to integrate ratings (stars and multi stars) into the WordPress comments form. This addon achieves a simple task: integrating star ratings (or multi-star ratings) into the comments form so that users can leave a comment along with a rating for the post or article the comment belongs to. This is a much simpler addon than User Reviews; if you need a more comprehensive review system, the User Reviews Addon is a much better solution. The addon has a few settings and filters for extra control through custom coding. The rating can be set as required, it can be hidden from replies (so it works for main comments only), and you can automatically display the rating inside the saved comment. This addon is available for free to all users with a valid GD Rating System Pro license. This addon works with standard WordPress comments only, and there are no plans to support other comment systems with this addon. GD Rating System Pro can be integrated into other comment systems that are implemented with plugins (it can't work with hosted comment systems like Disqus or Facebook). Comment Form Addon 1.0 requires GD Rating System Pro 2.5 or newer, and if you need multi ratings, Multi Ratings Addon 1.6 or newer; it will not work with older versions. If you find any issues, please let me know.
Don't use that back button! Have no fear. Go forward, never go back. Never give up. There is always someplace to go. Soulsista is a storyteller, maybe a blues singer or a fortune teller, maybe an insurrectionist or a church sister. She has power, but so do you. We are all endowed by the Creator with inherent, inalienable rights, the power to transform--and be transformed.
Q: Question about ad hominem I know it's a Latin phrase and maybe it doesn't belong here. But two of my friends are having an argument, and one of them, who is the head of some committee, said "At least try and then complain" (try becoming the head of the committee, doing my job, and then complain) to my second friend, who said that she is not doing her duties well enough. A third person who was a spectator to this argument pointed out that what the first person said is an ad hominem. I don't think it's an ad hominem; could someone weigh in on this? A: I would say that it is an ad hominem. An ad hominem argument is when, instead of addressing the topic in question, you try to win the argument by making comments about your opponent's physical appearance, personal history, professional experience, and other things of that nature. What makes your particular case an ad hominem is the fact that the head of the committee, instead of actually addressing your friend's point, started talking about how your friend would not be able to run the committee were he to head it. He is clearly trying to sidetrack the conversation by pointing out that your friend does not know what he is talking about because he has never worked in that position. So he is making a reference to your friend's personal qualities and abilities, which, strictly speaking, has really nothing to do with the real meat of the question that has been posed. But hey, that's one big fat argumentum ad hominem right there, as far as its definition is concerned. Here's some additional information worth noting: https://www.youtube.com/watch?v=a2OnY4d_1Is
Be The First to Review the Belton FUMC 4-G 5K! The Belton FUMC 4-G 5K is a running race in Belton, Texas consisting of a 5K. The "4-G" in the name stands for: Get up, Get outside, Get moving, for God! This event is a fundraiser hosted by the First United Methodist Church of Belton for the Belton Body of Christ Community Clinic, in Belton, Texas. The clinic offers assistance to people who cannot afford medical and dental services. www.fumcbelton.org
Who doesn't like having soft skin? We all want to keep our skin looking healthy and well cared for, and it's much easier to make that a regular part of your routine if you've found a moisturising shower gel. That's the best way to feel confident that you're nurturing your skin right from the start of your day. We know how you can do just that – with Dove Pro Age Body Wash. As we age, our skin begins to slow down its production of the natural lipids that help make it soft, but the right anti-ageing skin care products can help you get glowing, youthful-looking skin. Dove pro•age Body Wash is designed to give your skin the daily nourishment it needs, with Dove's gentle cleansers in a mild shower cream that's kind to ageing skin. The right moisturising shower gel can help you to nurture your skin right at the beginning of your day, in the shower. This Dove body wash combines NutriumMoisture™ with Dove's mild cleansers to help your skin retain its natural moisture while you wash and leaves you with softer, smoother skin after just one shower. NutriumMoisture™ is a blend of moisturisers and skin natural nutrients, and helps your skin to maintain its natural balance while you shower. This helps to create a shower gel that minimises skin dryness while nourishing deep into the surface layers of your skin. Squeeze generously onto a shower puff or lather between your hands. Spread the rich lather over your skin before rinsing away with warm water.
@implementation PVGroupContext

- (id)init
{
    self = [super init];
    if (self != nil) {
        // Default the direction to the full direction bitmask.
        _direction = NSLayoutFormatDirectionMask;
    }
    return self;
}

@end
I becchini (The Undertakers) is a short story by the English writer Rudyard Kipling, belonging to the collection The Second Jungle Book. It was first published on 8, 9, 10 and 12 November 1894 in the New York World and on 14 and 15 November in the Pall Mall Gazette; it was finally reprinted in the collection The Second Jungle Book the following year (1895). Plot The story centres on the dialogue between a jackal, an adjutant stork and a crocodile, the scavengers of the river and devourers of corpses, who discuss human beings while sitting on the banks of a watercourse near a railway bridge. They watch the dead of the "mutiny", the Sepoy revolt against the English, float past, and indeed they note that the first corpses were Anglo-Saxon and the later ones (once the revolt was violently repressed) Indian. The crocodile then recounts a regret of his own: having let slip a child who dangled his little hand in the water from a boat. The child was so small that his fingers passed unharmed between the two jaws. After several exchanges of remarks and anecdotes they take their leave, not without having tried to deceive one another, when suddenly the crocodile is killed by two hunters, one of whom turns out to be the very child who escaped him by chance years before. Characters Mugger: an old marsh crocodile once venerated and feared by the people of his river, who loses his status with the construction of the railway. Jackal: a jackal who, together with the adjutant stork, listens to the Mugger's story. Cowardly and unctuous, he feigns respect for the crocodile with flattering words. Adjutant stork: an adjutant stork who, together with the jackal, listens to the Mugger's story. He is more dignified and direct than the jackal, whom he often insults. Hunters: white men, crocodile hunters. Among them hides the child the Mugger failed to eat, who will unknowingly take his revenge on him.
Criticism Critics have highlighted the story's depiction of "the dark and treacherous side of India"; the crocodile's account of the corpses floating down the Ganges has instead been called "one of the most extraordinary passages in The Jungle Book". Notes Other projects External links The Undertakers on the Kipling Society website Short stories from The Jungle Book
\section{Introduction} At the heart of the success of many biological and artificial object recognition systems is the prior of compositionality: a scene is composed of a collection of objects; an object is composed of a collection of parts; and a face -- arguably the ``most discriminable object'' that we learn to categorize in our lifetime~\cite{kanwisher1997fusiform} -- is a collection of sub-parts such as a pair of eyes, ears, nose and a mouth. Even prior to the second wave of deep learning with the advent of AlexNet's~\cite{krizhevsky2012imagenet} success on ImageNet~\cite{russakovsky2015imagenet}, there was a series of models that depended on mixing features in different ways, such as using Histogram of Oriented Gradients (HoG)~\cite{dalal2005histograms}, the Deformable Parts Model (DPM)~\cite{felzenszwalb2008discriminatively} or Bag-of-Words (BoW)~\cite{csurka2004visual} with an SVM classifier. Why is it, then, that deep convolutional networks (such as AlexNet) performed so well on object classification tasks compared to their predecessor models, and what could such models do that previously hand-engineered models could not? Are dense connections worse than sparse ones? Is deeper better than shallow? When do deep convolutional networks fail? The empirical answer to several of these questions is known~\cite{lecun2015deep,goodfellow2016deep}, but theoretical explanations are lacking, as they are for issues such as convergence, generalization, and adversarial attacks~\cite{zhang2016understanding,poggio2020theoretical,goodfellow2014explaining,geirhos2020shortcut,feather2019metamers}. Indeed, it seems that the \textit{`unspoken truth'} in the machine learning and computer vision community when talking about these classical results regarding neural network architectures and tasks is that \textit{``deeper and convolutional is better''} -- as they will exploit the compositional and localized structure of images~\cite{henaff2014local}.
More specifically, the questions of why and under which conditions remain: \vspace{5pt} \fbox{\begin{minipage}{21em} Why and for which tasks do convolutional networks perform better than fully-connected networks? \end{minipage}} \vspace{5pt} A possible explanation from the point of view of approximation theory was proposed by Poggio~\textit{et~al.}~\cite{poggio2020theoretical} in theorems that we summarize as follows: \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth,clip]{Network_Diagram_New_C.pdf} \caption{A cartoon depicting the different types of tasks (top) in pure mathematical form (center) and their respective `optimal' approximation network (bottom) given a finite input vector: $[x_1, ..., x_8]$. Left: a global and non-compositional function -- as it has \textit{no hierarchy nor locality}. Middle: a first order local and compositional function, as it relies on \textit{one level} of non-linear local function combination before being sent to a decision function $(+)$. Right: a higher order hierarchically compositional function that consists of a \textit{cascade} of combinations of non-linear local functions. This last group of functions is generally well approximated by deep convolutional networks~\cite{Poggio2017}; here we try to understand for which visual tasks and why.} \label{fig:Network_Architectures} \end{figure*} \begin{enumerate} \itemsep0em \item {\it Both shallow and deep networks are universal}, that is, they can approximate arbitrarily well any continuous function of $d$ variables on a compact domain, but both suffer from the {\it curse of dimensionality} with a number of parameters of order $O(\epsilon^{-d})$, where $\epsilon$ is the approximation error.
\item For the class of functions of $d$ variables on a compact domain that are {\it hierarchical compositions of local (that is, with bounded dimensionality) constituent functions}, approximation by deep networks with the same architecture can be achieved with a number of parameters which is linear in $d$ instead of exponential. \item The key aspect of convolutional networks that can give them an exponential advantage is not weight sharing but locality at each level of the hierarchy. Weight sharing helps, but not in an exponential way. \end{enumerate} The result (1) for shallow networks is classical (see Cybenko~\cite{cybenko1989approximation}), and the extension of this result to deep networks is easy to prove, as shallow networks are a subset of deep networks. An example of a function which is a hierarchical composition of constituent functions is $f(x_1, \cdots, x_8) = \phi_3(\phi_{21}(\phi_{11} (x_1, x_2), \phi_{12}(x_3, x_4)), \allowbreak \phi_{22}(\phi_{13}(x_5, x_6), \phi_{14}(x_7, x_8))) $. Here the constituent functions have dimension $2$. The equivalent quantity in a convolutional network is the dimensionality of the convolution kernel (for images it is typically $3 \times 3$). All these results are about the representational power of deep networks and especially hierarchical networks. As such, they do not say what \textit{can be} or \textit{cannot be} learned from data. However, the exponential separation between deep convolutional networks and shallow networks for the class of hierarchically local functions suggests that such functions may be difficult for shallow networks to learn from a reasonable amount of data. Notice that the success of gradient descent techniques in learning overparametrized networks does not at all imply that it is easy to learn convolutional networks from fully connected networks. In fact, the opposite is true.
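The example function above can be written out as a minimal sketch (the particular constituent functions are hypothetical stand-ins; only the binary-tree wiring, with every constituent function of dimension 2, matches the text):

```python
import math

def phi(a, b):
    # A generic two-dimensional (local) constituent function; tanh is an
    # arbitrary stand-in for the phi's in the text.
    return math.tanh(a + b)

def f(x):
    # f(x1..x8) = phi3(phi21(phi11(x1,x2), phi12(x3,x4)),
    #                  phi22(phi13(x5,x6), phi14(x7,x8)))
    assert len(x) == 8
    level1 = [phi(x[i], x[i + 1]) for i in range(0, 8, 2)]           # phi11..phi14
    level2 = [phi(level1[0], level1[1]), phi(level1[2], level1[3])]  # phi21, phi22
    return phi(level2[0], level2[1])                                 # phi3

value = f([0.1 * i for i in range(8)])
```

Each constituent sees only two inputs, yet their cascade covers all eight variables, which is exactly the locality structure the theorems reward.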
Recently, Malach~\&~Shalev-Shwartz~\cite{malach2021computational} demonstrated a class of problems that can be {\it efficiently solved using convolutional networks trained with gradient descent, but that are at the same time hard to learn using a polynomial-size fully-connected network}. Following the intuition provided by the approximation theorem above, it is natural to further conjecture that, given appropriate data, {\it learning shared weights with gradient descent is ``easy'' for hierarchically local networks.} These and related results (see Appendix~\ref{sec:Pilot}) suggest that the hierarchy of visual processing cortical areas with local receptive fields is genetically hardwired, whereas effective weight sharing is actually learned from translation invariant images during development of the organism. \vspace{10pt} In the same spirit, we believe that the results and conjectures discussed above could help shed light on one of the greatest puzzles that has emerged from the empirical field of deep learning: explaining the {\it unreasonable effectiveness} of convolutional deep networks in a number of sensory problems. CNNs are indeed a special case of the hierarchical networks of the theorems above (see Figure~\ref{fig:Network_Architectures}). In this paper, we attempt to go from the mathematical why of the theorems to answering the question of which visual tasks should show an out-performance by convolutional networks, and which visual tasks should be equally difficult for convolutional \textit{vs} fully connected networks. We will show that the property of hierarchical compositionality of the target function to be learned depends on $g(x)$, that is, on both the data ($x$) and the task ($g$). The approximation properties of the network $f$ should be matched to the task~\cite{zamir2018taskonomy,wang2019neural,conwellinterpreting,dwivedi2020unveiling,kunhardt2021effects}.
We will show evidence for this claim by showing a dissociation between different visual tasks (object recognition, texture perception and color estimation) on the same type of images as we disrupt the image locality prior through ``deterministic'' image scrambling. We find that, in general, systems will learn a task better when the approximation function $f$ matches the graph of the visual task $g$ as closely as possible. \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth,clip]{Approach_Explained.pdf} \caption{\underline{A.} Our overall question is to analyze the contribution of compositional structure to networks that have a locality inductive bias, such as deep convolutional neural networks, \textit{vs} those that do not, such as shallow fully connected networks (kernel machines). To do this, we will train/test different flavors of networks such as those mentioned above on a variety of visual tasks where we will disrupt the locality of the data at different levels. \underline{B.} An example of how locality can be disrupted in a CIFAR image. Here we show different hierarchical scrambling operators $\mathcal{S}$ that follow a binary tree-like structure: top-down or bottom-up, to explore how different networks will perform when trained/tested on these conditions.} \label{fig:experimental_setup} \end{figure*} \section{Hierarchical Compositional Tasks in Vision} Before investigating the approximation power of different networks on a family of hierarchical compositional tasks, we will first define such tasks mathematically to check their compositional nature. \subsection{Color Estimation} A classical example of a visual task that is quite simple and requires no hierarchical structure is the color approximation task~\cite{emery2019individual,singh2020assessing,taylor2020representation,harris2019spatial,harris2020convolutional}.
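Average color admits an exact linear solution; a minimal numerical sketch (NumPy, with a random vectorized single-channel image as a stand-in) shows that a single linear layer with uniform weights $1/N$ recovers the mean intensity exactly:

```python
import numpy as np

# A single linear layer f(x) = W^T x with W_i = 1/N realizes the average-color
# task exactly: no hidden layers and no non-linearity are needed.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=32 * 32)  # vectorized single-channel image
W = np.full(x.size, 1.0 / x.size)        # exact averaging weights
C_I = W @ x                              # the network's output
```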
Indeed, color approximation, here the `average color' $C_I$ of an image $I$, can be computed exactly via the averaging function: \begin{equation} C_I = (1/N)\sum_i x_i \end{equation} where $N$ is the total number of pixels in the image (assuming a single channel), and where $x_i$ is the color intensity of the pixel at the $i$-th position. It is easy to verify that a fully connected network of the shape $f(W;x) = W^Tx$ for a given set of weights $W$ can approximate $C_I$ exactly if $W$ is a column vector with each component $W_i=1/N$ for a vectorized image $x\in\mathbb{R}^d$. In fact, this function can also be approximated without a non-linearity, as the usual structure of $f(\circ)$ generally yields the shape $f=\phi(x)=\sigma(Wx)$, for some non-linearity $\sigma(\circ)$ such as ReLU. Indeed, in this case the ReLU non-linearity would be redundant (as color is a positive value) and we can summarize the general form of color estimation as: \begin{equation} C_I=\phi(x) \end{equation} In classical views of vision science, one could classify color estimation (here, computing the average color) as a low-level visual task~\cite{taylor2020representation}, as this task does not require higher-level cognitive processes such as identifying an object/scene, or any sort of recognition-like process. Thus, in the context of this paper, we will define this low-level task as a global and non-compositional function, or ``order 0'' hierarchically compositional, as the simplest form of computation of the function requires \textit{no compositionality} (nor locality) and can be approximated through a linear function. \subsection{Texture Perception} Another visual task, one that increases in complexity given its compositional nature, is the task of texture perception.
Texture, loosely viewed as ``stuff, not things''~\cite{heitz2008learning,balas2009summary,caesar2018coco,rosenholtz2014texture,rosenholtz2016capabilities,long2018mid}, where visual information is periodic at some level, can be better understood through a generative (and parametric) texture modelling framework. Classic examples of these models include the Portilla~\&~Simoncelli~\cite{portilla2000parametric} texture synthesis model, which strongly influenced the Gatys~\textit{et~al.}~\cite{gatys2015texture} texture synthesis model -- itself the stepping stone to the pioneering field of Style Transfer~\cite{gatys2015neural} -- as such generative models reveal how textures can be rendered by following a mathematical formulation. In summary, these two pioneering approaches at heart consist of rendering a texture through the matching of second-order image statistics. These can be derived as a collection of cross-correlations of image transform outputs (\textit{i.e.} local feature maps $\phi(\circ)$), whether engineered through a Steerable Pyramid decomposition~\cite{simoncelli1995steerable} as in Portilla~\&~Simoncelli, or learned through an optimization procedure as done with the multiple feature maps of a VGG19~\cite{simonyan2014very} that define a family of ``Texture Gramians'' as in the Gatys~\textit{et~al.} model~\cite{gatys2015texture}. More formally, these models define a Texture Matrix $T_M$ of a given image $x$ of the general form: \begin{equation} T_M = \begin{bmatrix}\phi_1(x) \phi_1(x) & \phi_1(x) \phi_2(x) & \dots & \phi_1(x) \phi_n(x)\\ \phi_2(x) \phi_1(x) & \phi_2(x) \phi_2(x) & \dots & \phi_2(x) \phi_n(x)\\ \vdots & \vdots & \ddots & \vdots\\ \phi_n(x) \phi_1(x) & \phi_n(x) \phi_2(x) & \dots & \phi_n(x) \phi_n(x) \end{bmatrix} \end{equation} From here we can vectorize $T_M$, recasting the same information (of the form $\mathbb{R}^{n\times n}$) as a column feature vector $T_F=[\{\phi_i(x)\phi_j(x)\}],\forall~i,j\in\{1,\dots,n\}$.
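A minimal sketch of $T_M$ and its vectorization $T_F$ (the feature maps here are random stand-ins for Steerable Pyramid or VGG19 responses):

```python
import numpy as np

rng = np.random.default_rng(0)
n_maps, n_pix = 4, 64
phi = rng.standard_normal((n_maps, n_pix))  # rows: flattened feature maps phi_i(x)
T_M = phi @ phi.T                           # T_M[i, j] = <phi_i(x), phi_j(x)>
T_F = T_M.reshape(-1)                       # vectorized column feature vector
```

By construction $T_M$ is symmetric, so $T_F$ carries the full set of pairwise feature-map products.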
Naturally, as texture perception can be defined as a classification task (\textit{i.e.} ``this texture belongs to this family of textures, but not to this other''~\cite{ziemba2016selectivity,freeman2013functional}), we must have another hyper-plane decision function, which will recombine the feature vector products and act on top of $T_F$ to produce an output from the collection of texture features. This yields an order-1 hierarchically local compositional function of the general form: \begin{equation} \label{eq:texure} T_P = \psi([\{\phi_i(x)\phi_j(x)\}]),\forall i,j \end{equation} Notice that in Equation~\ref{eq:texure}, there is already a notion of compositionality that, by nature of the inner product of feature maps, is non-linear. Thus texture is loosely being computed as \textit{``a function of a set of local functions of the image''} -- an idea that goes back to Julesz~\cite{julesz1981textons} -- hence our attribution to texture of the name order-1 hierarchical compositionality. \subsection{Object Recognition} Perhaps the most challenging of all visual tasks is object recognition, which has been characterized as a highly compositional process -- dating back to work in perceptual psychology from Biederman~\cite{biederman1987recognition}, up to the ever-more recent work of Hinton~\cite{hinton1988representing,hinton2021represent}. Indeed, for humans and machines to achieve robust object detection, these systems must learn to recognize the object against a variety of conditions (or noise) that may affect the percept, such as size, occlusion, illumination, viewpoint and clutter, as suggested by Poggio~\cite{riesenhuber2000models,serre2005object,serre2007feedforward,poggio2011computational} and DiCarlo~\cite{dicarlo2007untangling,dicarlo2012does,pinto2008real}.
Such theories of the representation of objects seem to suggest that objects can be identified as \textit{``a combination of parts, which are a combination of textures, which are a combination of edges, etc ...''} -- reminiscent of our definition of texture, though taken to a higher order (or degree). This type of reasoning has been posited both in perception by Peirce~\cite{peirce2015understanding} when studying mid-level vision, and in the Texform theory of Long~\&~Konkle~\cite{long2017mid,long2018mid,deza2019accelerated}, who found that objects coded as local texture ensembles can give rise to super-ordinate object categorization such as animacy without necessarily encoding object identity. Similarly, computational approaches that support the higher-order compositional theory of objects can be found in the pioneering feature visualization work of Zeiler~\&~Fergus~\cite{zeiler2014visualizing} and the feature invertibility models of Mahendran~\&~Vedaldi~\cite{mahendran2015understanding}. While it is true that deep networks can be biased to learn texture representations over shape for object categorization (see Geirhos~\textit{et~al.}~\cite{geirhos2018imagenet}), this problem can still be overcome with different data-augmentation strategies (see Hermann~\textit{et~al.}~\cite{hermann2019origins}), suggesting -- in a perhaps chicken-and-egg fashion -- that under the right conditions, the compositional nature of objects in hierarchical functions can indeed be learned.
We can thus ``generally'' pose the object recognition $O_R$ functional form as a cascade of localized non-linear functions $\phi_{\square}(\circ)$: \begin{equation} \label{eq:Object} O_R = \psi(\phi_n(\phi_{n-1}(...\phi_2(\phi_1(x),\phi'_1(x))))) \end{equation} While the formulation of Equation~\ref{eq:Object} is encouraging and perhaps even tractable for a purely feed-forward perceptual system, the non-obvious challenge is to find the correct family of non-linear local functions $\phi_{\square}(\circ)$ -- and \textit{how} and \textit{why} they change in the visual hierarchy~\cite{lindsey2019unified,harris2020anatomically} -- as this has been the main challenge of both human and machine vision since their inception. \begin{figure*}[!t] \centering \includegraphics[width=1.0\linewidth,clip]{Task_Variation_New.pdf} \caption{\underline{(A.)}: A cartoon depicting the task manipulation to compare the approximation power of fully connected networks and deep convolutional networks given the compositional structure of the approximation task. Notice that although the tasks are different, the image distribution is the same (CIFAR-10). \underline{(B.)}: Top: If fully connected networks do indeed better approximate global, non-compositional tasks such as Color Estimation, then they will yield lower mean square error than deep convolutional networks. Middle: Texture Perception, which is compositional but only to the first degree, should show small advantages for convolutional networks over fully connected ones.
Bottom: Similarly, if deep convolutional networks exploit locality better than shallow fully connected networks, then they will achieve higher performance in the Object Recognition task -- and these perceptual gains should go away when trained/tested on \textit{shuffled images} where the locality prior is disrupted.} \label{fig:Task_Manipulation} \end{figure*} \section{Methods} \label{sec:Methods} To investigate the role of the compositional structure of images in the approximation power of different neural networks, we must design a controlled experiment where we can manipulate the `degree' of compositionality. To accomplish this, we hierarchically scramble the family of images over which the networks are trained/tested in a tree-like manner, such that we can study the coarse-to-fine and fine-to-coarse implications of scrambling. \vspace{-10pt} \subsection{Image Scrambling} \vspace{-5pt} Figure~\ref{fig:experimental_setup} shows a summary of our experimental framework. To test if the approximation power of one network type is greater than the other (\textit{e.g.} shallow non-convolutional vs deep convolutional), we have them perform an approximation task in the unscrambled and scrambled conditions (\underline{A.}). To plot a performance curve as a discretized version of the `level of scrambling', we propose two scrambling schemes: Top-Down scrambling, where the image is scrambled in a coarse-to-fine manner, and its dual form, Bottom-Up (fine-to-coarse) scrambling, where the coarse image structure is preserved but the local components of the image are scrambled (\underline{B.} bottom). It is worth noting that the end-points of the scrambling tree (the fully scrambled top-down and bottom-up functions) are the same. For all experiments we will show the i.i.d.
performance of several networks when trained \& tested on the \textit{same} degree of scrambling (solid lines), and we will also show, as a reference, an o.o.d. scenario where networks are trained on unshuffled images and tested on shuffled images (dashed lines). \vspace{-5pt} \subsection{Dataset and Networks} In what follows we present our main set of results on experiments derived from CIFAR-10 objects. Our two main networks to be compared are a shallow fully connected network (SFCN), which consists of a fully connected network with a single hidden layer of 10,000 units and a ReLU non-linearity, and a 5-stage deep convolutional network (DCN) with an additional 2-stage fully connected decision layer, classically known as VGG11~\cite{simonyan2014very}. In addition to these two main networks we have 3 additional controls: a ``Wide-Net'', a 2-layer wide shallow fully connected network with the same number of parameters as the VGG11; a ``Deep-Net'', a 7-layer deep fully connected network that also has the same number of parameters as the VGG11; and a heavily under-parametrized three-layer convolutional network with roughly 6,000 parameters, which we call ``ThreeConvNet''. The goal of these controls is to pin-point what the number of parameters buys over the type of computation (convolutional vs non-convolutional operators), and also to test if even an under-parametrized system with a convolutional prior can perform as well as an over-parameterized non-convolutional system. All error bars shown in the following sections of the paper represent the standard deviation over 5 runs with different random weight initializations. Additional details regarding the training and testing procedure and the hyper-parameters for each experiment can be found in the Supplementary Material.
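As a concrete illustration of the tree-like scrambling described above, the following numpy sketch permutes the four quadrants of an image and recurses into each quadrant down to a chosen depth; increasing `depth` moves from coarse-only scrambling towards fully scrambled. The Bottom-Up dual would instead permute only the levels below a given depth, preserving the coarse layout. This is a sketch of the general scheme, not the paper's exact implementation:

```python
import numpy as np

def hierarchical_scramble(img, depth, rng):
    """Permute the 2x2 quadrants of `img`, then recurse into each quadrant
    until `depth` levels of the scrambling tree have been visited."""
    if depth == 0 or min(img.shape[:2]) < 2:
        return img
    h, w = img.shape[0] // 2, img.shape[1] // 2
    quads = [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]
    quads = [hierarchical_scramble(q, depth - 1, rng) for q in quads]
    a, b, c, d = (quads[i] for i in rng.permutation(4))
    return np.block([[a, b], [c, d]])

rng = np.random.default_rng(0)
img = np.arange(64.0).reshape(8, 8)
scrambled = hierarchical_scramble(img, depth=2, rng=rng)
```

Note that the pixel multiset is preserved at every depth -- only the locality prior is destroyed.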
\begin{figure*}[!t] \centering \includegraphics[width=1.0\linewidth,clip]{Main_Results_Object_Mini_Full.pdf} \caption{\underline{A.} Results empirically showing that \textit{only} deep convolutional networks can exploit image locality priors for the hierarchically compositional task of object recognition. \underline{B.} Additional results showing that when the texture-bias is removed, convolutional networks still have an exponential advantage in learning object recognition (a hierarchically compositional task) over non-convolutional networks (\textit{e.g.} see ThreeConvNet).} \label{fig:Scrambling_Objects} \end{figure*} \section{Experiments} In the following section we go over our set of results in descending order of hierarchical compositionality, starting with the most complex task: object recognition, followed by texture perception (compositional + local), and finally color estimation (which is neither a compositional nor a local task). A summary of our experiments can be seen in Figure~\ref{fig:Task_Manipulation} (A), accompanied by their approximation power hypotheses (B), where we conjecture that as the tasks become less hierarchical and compositional, convolutional networks will lose their advantage, and potentially fully connected networks will out-perform convolutional networks. \subsection{Object Recognition: \normalfont{Deep Convolutional Networks can (1) approximate Hierarchically Local Compositional Tasks better than Shallow Fully Connected Networks AND (2) are \textit{affected} by scrambling}} In our first experiment we test how well deep convolutional networks perform on a classical object recognition task and compare their performance to shallow fully connected networks.
In particular, the goal of this experiment is to verify that DCN's will exploit the locality prior in the image structure and thus arrive at higher performance than SFCN's, which will not naturally converge to exploiting the local correlations in the image structure. In addition, we evaluate how both of these networks behave as we train/test on different types of image scrambling manipulations where the locality prior is destroyed in different ways. As explained in Section~\ref{sec:Methods}, the purpose of destroying the locality prior is to empirically verify that approximation functions such as those with convolutional operators will \textit{not} excel as one would expect, despite their over-parameterized nature. Thus, anticipating the differences in number of learnable parameters, we also compare the previously mentioned networks against Wide-Net, Deep-Net and ThreeConvNet. The logic here is as follows: if, even with a matched number of learnable parameters, the deep convolutional network is the \textit{only} system that performs better on the unscrambled images, then we can claim that the benefit of DCNs in image recognition is not due to their over-parameterized nature, but rather to their exploitation of image locality for a compositional task. Indeed, if this is the case, this benefit in performance should slowly disappear as we perform greater levels of scrambling on the images. \underline{Results 4.1 A}: Figure~\ref{fig:Scrambling_Objects} (\underline{A.}) confirms the previously mentioned hypothesis: deep convolutional networks outperform the shallow fully connected network. The results with the additional controls are also overlaid, showing that the DCN still outperforms the wide fully connected network (Wide-Net) and the deep fully connected network (Deep-Net) -- even when equalized to the same number of parameters.
Naturally, the deep convolutional network outperforms ThreeConvNet as it is deeper, despite both having a convolutional prior (more on this in Results 4.1 C). \underline{Results 4.1 B}: In addition, we also notice that as we increasingly disrupt locality through both the Top-Down and Bottom-Up variations of hierarchical scrambling, the difference in performance for the DCNs eventually goes away until the image is fully scrambled, at which point performance is matched across networks that have the same number of learnable parameters (DCN, Wide-Net, Deep-Net). Furthermore, there is an asymmetry in the decay of performance for the dashed lines (o.o.d. testing: training on un-shuffled, testing on shuffled) under the Bottom-Up Hierarchical Scrambling operator, where coarse information is roughly preserved. It would seem as if this last result implicitly verifies that fully connected networks are encoding long-range dependencies and coarse structure, while deep convolutional networks are biased to encode localized feature correlations (\textit{i.e.} texture) as observed by Gatys~\textit{et~al.}~\cite{gatys2015texture}, Geirhos~\textit{et~al.}~\cite{geirhos2018imagenet} and Brendel~\&~Bethge~\cite{brendel2019approximating}, and recently by Jacob~\textit{et~al.}~\cite{Jacob860759} in higher resolution images. \underline{Results 4.1 C}: What is perhaps the most striking result is that the small under-parametrized 3-layer convolutional network with $\sim6000$ learnable parameters can still achieve roughly the same performance as all other over-parametrized fully connected networks, even those that exceed its number of parameters by an order of $10^5$ (Figure~\ref{fig:Scrambling_Objects} \underline{A.}). This last result complementarily demonstrates the added benefit of convolutional operators on images with hierarchical structure for the compositional task of object recognition.
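To make this parameter disparity concrete, a back-of-the-envelope count can be sketched as follows (the channel widths for the small convolutional stack are hypothetical, illustrative numbers -- not the exact architectures used in this paper):

```python
def fc_params(sizes):
    """Parameter count of a fully connected net (weights + biases per layer)."""
    return sum(sizes[i] * sizes[i + 1] + sizes[i + 1]
               for i in range(len(sizes) - 1))

def conv_params(channels, k):
    """Parameter count of a plain stack of k x k convolutions (no decision head)."""
    return sum(channels[i] * channels[i + 1] * k * k + channels[i + 1]
               for i in range(len(channels) - 1))

# SFCN: 32x32x3 CIFAR-10 inputs -> 10,000 hidden units -> 10 classes
sfcn = fc_params([32 * 32 * 3, 10_000, 10])       # ~3.1e7 parameters

# A ThreeConvNet-like stack with a few 5x5 kernels stays in the thousands
threeconv = conv_params([3, 8, 8, 8], k=5)        # a few thousand parameters
```

Even this toy count places the convolutional stack several orders of magnitude below the fully connected baseline -- the regime in which ThreeConvNet nonetheless matches the over-parameterized networks.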
\vspace{-5pt} \subsubsection{Stylized-CIFAR-10: Preservation of results when removing the texture-bias in CIFAR-10} As an additional control with regards to the texture-bias phenomenon~\cite{gatys2015neural,geirhos2018imagenet,brendel2019approximating}, we re-ran our experiments on a Stylized-CIFAR-10 dataset (following the procedure of Geirhos~\textit{et~al.}~\cite{geirhos2018imagenet}, where we created a CIFAR-10 dataset with the texture of a collection of paintings, similar to Stylized-ImageNet), and obtained the same pattern of results as in our previous experiments (see Figure~\ref{fig:Scrambling_Objects} \underline{B.}). These findings support the idea that the object recognition task is hierarchically compositional and does benefit exponentially from the convolutional prior when image locality is preserved and other cue-conflicting biases are removed. What is also worth noting is that the under-parametrized ThreeConvNet now shines and outperforms all other over-parameterized fully connected networks (wide, shallow or deep). This last result further shows the computational tractability and relative ease with which convolutional neural networks exploit localized hierarchical structure in a compositional manner when all other biases are removed~\cite{geirhos2020shortcut}. \subsection{Texture Perception: \normalfont{Deep Convolutional Networks better approximate Local Compositional Tasks than Shallow Fully Connected Networks and are \textit{un-affected} by scrambling}} The next task we decided to explore was a lower-level hierarchically compositional task -- texture perception -- as it is known that texture perception in both humans and machines can be modelled via second-order image statistics~\cite{julesz1981textons,portilla2000parametric,gatys2015texture,deza2018towards,wallis2019image,vacher2020texture,herrera2021flexible}.
The peculiarity of this task is that it is still hierarchical, but not as hierarchical as object recognition, which goes from edges to shapes: $(edges \rightarrow textures \rightarrow parts \rightarrow shapes)$. This contrived control condition allows us to examine at a more nuanced level what convolutional structure can provide (if anything) when the approximation task is not \textit{too} compositional. \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth,clip]{Main_Results_Texture_Mini.pdf} \caption{Results showing that convolutional networks better approximate texture than fully connected networks -- as there is a small (non-obvious) advantage in exploiting locality (Appendix~\ref{sec:App_Feature_Vis}). Even \textit{under-parameterized} convolutional networks approximate texture better than all other networks. What is more surprising is that the \textit{approximation power of all networks is unaffected by disrupting the locality priors through image scrambling} -- also suggesting that CNNs maintain performance via hierarchical computation. } \label{fig:Scrambling_Texture} \vspace{-5pt} \end{figure} To do this, we rendered a texturized-CIFAR dataset by taking a set of normally distributed noise images that were colored via ZCA with their corresponding CIFAR-10 image, and then running these noise images through a Style Transfer operator (Adaptive Instance Normalization~\cite{huang2017arbitrary}), thus rendering an equivalent texture for the original CIFAR-10 image -- an approach loosely inspired by Deza~\textit{et~al.}~\cite{deza2018towards} and the Texforms of Long~\textit{et~al.}~\cite{long2018mid,deza2019accelerated}. Altogether, this pipeline rendered 50,000 small textured images corresponding to each CIFAR-10 image with their matching classes.
The task these networks are then expected to learn via a standard cross-entropy loss is a `texturized-object' categorization task, where the matching class is the category from which the texturized CIFAR-10 image was rendered. \underline{Results 4.2 A}: Figure~\ref{fig:Scrambling_Texture} shows that deep convolutional networks slightly outperform shallow fully connected networks on this order-1 hierarchically compositional task. However, a closer look at the previous figure shows us that even when training and testing on the same type of scrambled textures, both deep convolutional networks and shallow fully connected networks are un-affected by this manipulation, as the solid lines stay mostly horizontal. While perhaps counter-intuitive, this last result matches the expected theoretical outcome for the deep convolutional network, as it learns to capitalize on local structure (even with overlapping receptive fields) to learn Texture Gramian-like structures as in Gatys~\textit{et~al.}~\cite{gatys2015texture}. \underline{Results 4.2 B}: We then examined how the additional control networks would perform (also Figure~\ref{fig:Scrambling_Texture}) and found a similar pattern, where both convolutional networks perform better than all other fully connected networks, including the heavily under-parameterized ThreeConvNet, which even surpasses the performance of the VGG11. It is possible that the under-parametrization has served as an advantage, preventing overfitting and allowing the network to correctly represent the aggregate texture of each object class. It is also possible that this advantage has arisen via another factor: kernel size, given that ThreeConvNet has kernels with a greater receptive field size ($5\times5$ in ThreeConvNet \textit{vs} $3\times3$ in VGG11; Appendix~\ref{sec:App_Feature_Vis}.).
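The Gramian-like texture statistics referenced above can be sketched in a few lines; the feature maps here are random stand-ins for the local maps $\phi_i(x)$ of the texture functional $T_P$:

```python
import numpy as np

def gram_texture_features(phi):
    """Order-1 texture descriptor: all inner products <phi_i(x), phi_j(x)>.

    phi: array of shape (C, H, W) holding C local feature maps.
    Returns the C x C Gram matrix over spatially flattened maps,
    normalized by the number of spatial positions."""
    C = phi.shape[0]
    flat = phi.reshape(C, -1)
    return (flat @ flat.T) / flat.shape[1]

phi = np.random.default_rng(0).random((8, 16, 16))
G = gram_texture_features(phi)   # input to a decision function psi
```

Because each entry of $G$ sums over all spatial positions, the descriptor is invariant to permutations of the spatial positions of the feature vectors -- consistent with the observation that scrambling leaves texture approximation largely unaffected.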
\vspace{-5pt} \subsection{Color Estimation: \normalfont{Shallow Fully Connected Networks better approximate Global and non-Compositional Tasks than Deep Convolutional Networks}} Finally, our last experiment is a color estimation experiment to investigate how well both deep convolutional and shallow fully connected networks can estimate the normalized color of an image. To do this, we trained both networks to output a 3-dimensional real-valued vector with normalized Red, Green and Blue channel information -- the average global color value -- under a Mean Square Error (MSE) loss. If the hypothesis of proper ``function-to-task'' matching is correct, then shallow fully connected networks should achieve greater performance (\textit{i.e.} less error) than deep convolutional networks when computing this value over a set of testing images. We used the full collection of 50,000 un-modified CIFAR-10 training images and the 10,000 testing images for this color approximation task, where the image class is irrelevant. \underline{Results 4.3 A}: Figure~\ref{fig:Scrambling_Color} shows that for the unscrambled condition, shallow fully connected networks achieve lower mean square error than deep convolutional networks over the ensemble of testing images. However, as the locality prior is gradually destroyed through scrambling, we notice that the deep convolutional network asymptotes to the performance of the SFCN (lower error is better). These results suggest that the convolutional operator still tries to exploit the locality prior to search for compositional structure, even if the approximation function is non-hierarchical.
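The regression target of this color estimation task can be sketched directly (a minimal numpy illustration; the variable names are ours, not the paper's):

```python
import numpy as np

def color_target(img):
    """Ground-truth regression target: the normalized mean R, G, B of an image.
    img: H x W x 3 array with values in [0, 1]."""
    return img.reshape(-1, 3).mean(axis=0)

def mse(pred, target):
    """The Mean Square Error loss used to train both networks."""
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

img = np.random.default_rng(0).random((32, 32, 3))
target = color_target(img)   # 3-dimensional vector in [0, 1]
```

Note that `color_target` is invariant to any permutation of pixels, which is exactly why scrambling cannot make the task harder and why a locality prior buys nothing here.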
\underline{Results 4.3 B}: Figure~\ref{fig:Scrambling_Color} further supports the previous point by showing that a shallow and wider fully connected network with a greater number of parameters (Wide-Net) yields the smallest error of all models, while the most hierarchical of all models, a 7-layer fully connected network (Deep-Net), achieves the highest error (performs worst). Similarly, we observe that ThreeConvNet, which performed strongly on texture perception, is now at a disadvantage against all other networks. This last observation indeed agrees with the theory, as color estimation is a global property and the convolutional prior of ThreeConvNet works to its disadvantage -- despite the network still being able to approximate color reasonably well. \section{Discussion} In this paper we examined how and when deep convolutional networks succeed and do exponentially better than shallow fully connected networks (object recognition), when they have a minimal advantage (texture perception) -- but also, counter-intuitively, when they can do worse (color estimation) -- as modulated by the visual approximation task. Several theoretical works support some of our experimental results. For example, Poggio~\textit{et~al.}~\cite{poggio2020theoretical} show that convolutional networks -- which exploit locality -- require an exponentially smaller number of parameters to generalize than fully connected networks, with a sample complexity ratio on the order of $\frac{N_{deep}}{N_{shallow}} \approx \epsilon^d$. Empirical work by Elsayed~\textit{et~al.}~\cite{elsayed2020revisiting} and Neyshabur~\cite{neyshabur2020towards} has shown the importance of the convolutional prior as an inductive bias, and how certain regularization priors may induce learned locality in a fully connected neural network.
Consistent with our experiments, Brendel~\&~Bethge~\cite{brendel2019approximating} have also shown that local pixel structure \textit{still} supports object recognition even if the higher-order compositionality of an object is destroyed (the scrambled patches must be sufficiently large for this result to hold). \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth,clip]{Main_Results_Color_Mini.pdf} \caption{Results showing that shallow fully connected networks achieve less approximation error than deep convolutional networks when computing the average color of an image -- a global task that requires no hierarchical function. When fixing the number of parameters and varying hierarchical structure, shallow fully connected layers continue to achieve smaller error, while deep fully connected networks perform worse (Wide-Net vs Deep-Net). This set of results \textit{shows that convolutional + deeper is not always better}, contingent on the approximation task. } \label{fig:Scrambling_Color} \vspace{-8pt} \end{figure} Overall, this work reminds us not to lose sight of evaluating and understanding the tasks for which different perceptual systems work. With the advent of new models such as Transformers~\cite{dosovitskiy2021an,jaegle2021perceiver} it will continue to be important to not only study their robustness, generalization \& convergence properties, but also to understand for which tasks such models out-perform (or under-perform) classical neural network models, and why. \newpage {\small \bibliographystyle{ieee_fullname}
{ "redpajama_set_name": "RedPajamaArXiv" }
4,081
NGC 4449 je nepravilna galaktika u zviježđu Lovačkim psima. Spada u skupinu Lovačkih pasa I. Izvori Vanjske poveznice Revidirani Novi opći katalog Izvangalaktička baza podataka NASA-e i IPAC-a Astronomska baza podataka SIMBAD VizieR 4449
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,077
{"url":"https:\/\/socratic.org\/questions\/how-do-you-find-the-real-or-imaginary-solutions-of-the-equation-6x-2-13x-5-0","text":"# How do you find the real or imaginary solutions of the equation 6x^2+13x-5=0?\n\nNov 28, 2016\n\nThe solutions are $S = \\left\\{\\frac{1}{3} , - \\frac{5}{2}\\right\\}$\n\n#### Explanation:\n\nThe simultaneous equations $a {x}^{2} + b x + c = 0$\n\nOur equation is $6 {x}^{2} + 13 x - 5 = 0$\n\nWe calculate the determinant,\n\n$\\Delta = {b}^{2} - 4 a c$\n\n$\\Delta = {13}^{2} - 4 \\cdot 6 \\cdot - 5 = 289$\n\n$\\Delta > 0$, so we have 2 real roots\n\nSo,\n\n$x = \\frac{- b \\pm \\sqrt{\\Delta}}{2 a}$\n\n$= \\frac{- 13 \\pm \\sqrt{289}}{12}$\n\n$= \\frac{- 13 \\pm 17}{12}$\n\nSo, ${x}_{1} = - \\frac{30}{12} = - \\frac{5}{2}$\n\nand ${x}_{2} = \\frac{4}{12} = \\frac{1}{3}$","date":"2020-01-29 14:01:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 11, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.718711256980896, \"perplexity\": 1323.140461836776}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-05\/segments\/1579251799918.97\/warc\/CC-MAIN-20200129133601-20200129163601-00220.warc.gz\"}"}
null
null
<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>basic_static_string::replace (5 of 14 overloads)</title> <link rel="stylesheet" href="../../../../../../../../doc/src/boostbook.css" type="text/css"> <meta name="generator" content="DocBook XSL Stylesheets V1.79.1"> <link rel="home" href="../../../../index.html" title="Boost.StaticString"> <link rel="up" href="../replace.html" title="basic_static_string::replace"> <link rel="prev" href="overload4.html" title="basic_static_string::replace (4 of 14 overloads)"> <link rel="next" href="overload6.html" title="basic_static_string::replace (6 of 14 overloads)"> </head> <body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"> <table cellpadding="2" width="100%"><tr> <td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../../../../../boost.png"></td> <td align="center"><a href="../../../../../../../../index.html">Home</a></td> <td align="center"><a href="../../../../../../../../libs/libraries.htm">Libraries</a></td> <td align="center"><a href="http://www.boost.org/users/people.html">People</a></td> <td align="center"><a href="http://www.boost.org/users/faq.html">FAQ</a></td> <td align="center"><a href="../../../../../../../../more/index.htm">More</a></td> </tr></table> <hr> <div class="spirit-nav"> <a accesskey="p" href="overload4.html"><img src="../../../../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../replace.html"><img src="../../../../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../../index.html"><img src="../../../../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="overload6.html"><img src="../../../../../../../../doc/src/images/next.png" alt="Next"></a> </div> <div class="section"> <div class="titlepage"><div><div><h5 class="title"> <a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5"></a><a 
class="link" href="overload5.html" title="basic_static_string::replace (5 of 14 overloads)">basic_static_string::replace (5 of 14 overloads)</a> </h5></div></div></div> <p> Replace a part of the string. </p> <h6> <a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.h0"></a> <span class="phrase"><a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.synopsis"></a></span><a class="link" href="overload5.html#static_string.ref.boost__static_strings__basic_static_string.replace.overload5.synopsis">Synopsis</a> </h6> <pre class="programlisting"><span class="keyword">constexpr</span> <span class="identifier">basic_static_string</span><span class="special">&amp;</span> <span class="identifier">replace</span><span class="special">(</span> <span class="identifier">size_type</span> <span class="identifier">pos</span><span class="special">,</span> <span class="identifier">size_type</span> <span class="identifier">n1</span><span class="special">,</span> <span class="identifier">const_pointer</span> <span class="identifier">s</span><span class="special">,</span> <span class="identifier">size_type</span> <span class="identifier">n2</span><span class="special">);</span> </pre> <h6> <a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.h1"></a> <span class="phrase"><a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.description"></a></span><a class="link" href="overload5.html#static_string.ref.boost__static_strings__basic_static_string.replace.overload5.description">Description</a> </h6> <p> Replaces <code class="computeroutput"><span class="identifier">rcount</span></code> characters starting at index <code class="computeroutput"><span class="identifier">pos</span></code> with those of <code class="computeroutput"><span class="special">{</span><span class="identifier">s</span><span class="special">,</span> <span class="identifier">s</span> <span 
class="special">+</span> <span class="identifier">n2</span><span class="special">)</span></code>, where <code class="computeroutput"><span class="identifier">rcount</span></code> is <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">min</span><span class="special">(</span><span class="identifier">n1</span><span class="special">,</span> <span class="identifier">size</span><span class="special">()</span> <span class="special">-</span> <span class="identifier">pos</span><span class="special">)</span></code>. </p> <h6> <a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.h2"></a> <span class="phrase"><a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.exception_safety"></a></span><a class="link" href="overload5.html#static_string.ref.boost__static_strings__basic_static_string.replace.overload5.exception_safety">Exception Safety</a> </h6> <p> Strong guarantee. </p> <h6> <a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.h3"></a> <span class="phrase"><a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.remarks"></a></span><a class="link" href="overload5.html#static_string.ref.boost__static_strings__basic_static_string.replace.overload5.remarks">Remarks</a> </h6> <p> All references, pointers, or iterators referring to contained elements are invalidated. Any past-the-end iterators are also invalidated. 
</p> <h6> <a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.h4"></a> <span class="phrase"><a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.return_value"></a></span><a class="link" href="overload5.html#static_string.ref.boost__static_strings__basic_static_string.replace.overload5.return_value">Return Value</a> </h6> <p> <code class="computeroutput"><span class="special">*</span><span class="keyword">this</span></code> </p> <h6> <a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.h5"></a> <span class="phrase"><a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.parameters"></a></span><a class="link" href="overload5.html#static_string.ref.boost__static_strings__basic_static_string.replace.overload5.parameters">Parameters</a> </h6> <div class="informaltable"><table class="table"> <colgroup> <col> <col> </colgroup> <thead><tr> <th> <p> Name </p> </th> <th> <p> Description </p> </th> </tr></thead> <tbody> <tr> <td> <p> <code class="computeroutput"><span class="identifier">pos</span></code> </p> </td> <td> <p> The index to replace at. </p> </td> </tr> <tr> <td> <p> <code class="computeroutput"><span class="identifier">n1</span></code> </p> </td> <td> <p> The number of characters to replace. </p> </td> </tr> <tr> <td> <p> <code class="computeroutput"><span class="identifier">s</span></code> </p> </td> <td> <p> The string to replace with. </p> </td> </tr> <tr> <td> <p> <code class="computeroutput"><span class="identifier">n2</span></code> </p> </td> <td> <p> The length of the string to replace with. 
</p> </td> </tr> </tbody> </table></div> <h6> <a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.h6"></a> <span class="phrase"><a name="static_string.ref.boost__static_strings__basic_static_string.replace.overload5.exceptions"></a></span><a class="link" href="overload5.html#static_string.ref.boost__static_strings__basic_static_string.replace.overload5.exceptions">Exceptions</a> </h6> <div class="informaltable"><table class="table"> <colgroup> <col> <col> </colgroup> <thead><tr> <th> <p> Type </p> </th> <th> <p> Thrown On </p> </th> </tr></thead> <tbody> <tr> <td> <p> <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">length_error</span></code> </p> </td> <td> <p> <code class="computeroutput"><span class="identifier">size</span><span class="special">()</span> <span class="special">+</span> <span class="special">(</span><span class="identifier">n2</span> <span class="special">-</span> <span class="identifier">rcount</span><span class="special">)</span> <span class="special">&gt;</span> <span class="identifier">max_size</span><span class="special">()</span></code> </p> </td> </tr> <tr> <td> <p> <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">out_of_range</span></code> </p> </td> <td> <p> <code class="computeroutput"><span class="identifier">pos</span> <span class="special">&gt;</span> <span class="identifier">size</span><span class="special">()</span></code> </p> </td> </tr> </tbody> </table></div> </div> <table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr> <td align="left"></td> <td align="right"><div class="copyright-footer">Copyright © 2019, 2020 Krystian Stasiowski<br>Copyright © 2016-2019 Vinnie Falco<p> Distributed under the Boost Software License, Version 1.0. 
(See accompanying file LICENSE_1_0.txt or copy at <a href="http://www.boost.org/LICENSE_1_0.txt" target="_top">http://www.boost.org/LICENSE_1_0.txt</a>) </p> </div></td> </tr></table> <hr> <div class="spirit-nav"> <a accesskey="p" href="overload4.html"><img src="../../../../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../replace.html"><img src="../../../../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../../index.html"><img src="../../../../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="overload6.html"><img src="../../../../../../../../doc/src/images/next.png" alt="Next"></a> </div> </body> </html>
"One Track Mind" by 30 Seconds to Mars (featuring A$AP Rocky) Album: America (2018) This track features an unexpectedly laid-back verse by A$AP Rocky. "We met a couple of times and I always really liked him a lot," frontman Jared Leto told the BBC. "I think he's got an interesting perspective and he's a super-talented rapper and singer, he really made the most sense, he's a really creative artistic person."
\section{Introduction} For small-area Josephson junctions, where the critical current is in the nanoampere range and the capacitance is in femtofarads, the dynamics of the Josephson phase is strongly influenced by the electromagnetic environment \cite{Joyez}. This can have profound effects on the measured $V$-$I$ characteristics and voltage noise. For example, depending on the realization of the environment the same junction can be in the underdamped regime, with a typical hysteretic behavior of the $V$-$I$ curves between the forward and backward source-current ramping, but also in the nonhysteretic overdamped regime. A recent preliminary experimental study by Pertti Hakonen's group in Helsinki \cite{Hakonen} focusing on the voltage noise of small-area tunnel junctions revealed a strong forward vs.~backward current-ramping mean-voltage and voltage-noise asymmetry and hysteresis (see Fig.~\ref{F1}). Furthermore, a discontinuous change of the voltage noise coinciding with the switching current was observed. The purpose of this paper is to show that such surprising effects can be understood as a consequence of junction's electromagnetic environment. The organization of our work is as follows. First, we introduce in Sec.~\ref{circuit} a simple model circuit close to those used in experiments for obtaining the critical current of real junctions and shortly discuss the matrix continued fraction method used to analyze properties of the model circuit. In Sec.~\ref{mechanism}, we present the basic ideas of our explanation. Finally, we discuss our numerical results and show that they are in qualitative agreement with the experiment in Sec.~\ref{results}. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{Figure1.eps} \caption{Experimental dependence of the mean junction-voltage on the bias current (top panel) with the corresponding voltage-noise (in arbitrary units) dependence (bottom panel). 
The black line represents forward current ramping, while the red one corresponds to the backward direction. Unpublished data courtesy of P.~Hakonen~\cite{Hakonen}. \label{F1}} \end{figure} \section{Circuit and method}\label{circuit} As is known, the electromagnetic environment can be engineered to place any Josephson junction in the overdamped regime \cite{Steinbach}. We investigate the influence of such environments on the junction properties in more detail. The considered circuit is shown in Fig.~\ref{F2}, where a Josephson junction (red square) with critical current $I_c$, intrinsic capacitance $C_j$, and resistance $R_j$ is biased by a circuit with a capacitor $C$, resistor $R$, and an ideal voltage source $V_s$. This circuit is equivalent to a current source $I_s=V_s/R$ in parallel with $R$, $C$, and the junction. An identical circuit was used to show that the commonly observed reduction of the maximum supercurrent in an ultrasmall junction is not an intrinsic junction property, but is due to its electromagnetic environment \cite{Steinbach}. Its advantage lies in the fact that, in contrast to the usual current-bias method (applied directly to the junction), which gives access only to the positive differential resistance part of the $V$-$I$ characteristic, all points on the $V$-$I$ characteristic are in principle achievable. This is because both the current $I$ through the junction (measured by the ideal ammeter $I$) and the junction voltage $V$ (measured by the ideal voltmeter $V$) are average quantities adjusted to the global drive. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure2.eps} \caption{Idealized circuit for measurement of the supercurrent of Josephson junctions. The red square contains a junction modeled by the RCSJ model.
\label{F2}} \end{figure} The dynamics of this circuit following from Kirchhoff's laws and Josephson's relations is described by the Langevin equations \begin{align} I(t)&=\frac{V_s-V(t)}{R} - C\dfrac{dV(t)}{dt} + \xi_R(t),\nonumber\\ I(t)&=\frac{V(t)}{R_j} + C_j\dfrac{dV(t)}{dt} + I_c \sin{\varphi(t)} + \xi_{R_j}(t),\nonumber\\ V(t)&=\frac{\hbar}{2e}\frac{d\varphi(t)}{dt}, \label{E1} \end{align} where two current-noise sources $\xi_R$ and $\xi_{R_j}$ associated with (mutually uncorrelated) thermal fluctuations in resistors $R$ and $R_j$ are assumed to be simple Gaussian white noises satisfying \begin{equation} \langle \xi_{R_x}(t)\rangle=0,\hspace{0.2cm} \langle \xi_{R_x}(t_1)\xi_{R_x}(t_2)\rangle=\frac{2k_BT}{R_x}\delta(t_1-t_2). \end{equation} It is useful to introduce the equivalent parallel resistance $R_p=RR_j/(R+R_j)$ and the equivalent parallel capacitance $C_p=C+C_j$. Then the quality factor of the circuit reads \begin{equation} Q=\omega_pR_pC_p, \label{EQ} \end{equation} where $\omega_p=\sqrt{I_c/\varphi_0 C_p}$ is the plasma frequency with $\varphi_0=\hbar/2e$ the reduced flux quantum. For the purpose of theoretical analysis it is useful to rewrite Eq.~(\ref{E1}) into the dimensionless form by introducing the following dimensionless quantities \cite{Kautz}: junction voltage $v=\tfrac{QV}{I_cR_p}$, junction current $i=\tfrac{I}{I_{c}}$, time $\tau=\omega_pt$, temperature $\Theta=\tfrac{k_BT}{\varphi_0I_c}$ (with $k_B$ the Boltzmann constant), driving force $i_s=\tfrac{V_s}{RI_c}$, and, finally, the composite Gaussian white noise $\zeta$ with the correlation function \begin{align} \langle \zeta(\tau_1)\zeta(\tau_2)\rangle&=\frac{\omega_p}{I_c^2}\langle (\xi_R(t_1)-\xi_{R_j}(t_1))(\xi_R(t_2)-\xi_{R_j}(t_2))\rangle\nonumber\\ &=2\gamma\Theta\delta(\tau_1-\tau_2), \end{align} where $\gamma=1/Q$ is the damping coefficient.
Using these definitions Eq.~(\ref{E1}) can be reformulated in the dimensionless form ($v(\tau)=\partial\varphi/\partial \tau$) \begin{equation} \frac{\partial v(\tau)}{\partial \tau}= i_s - \gamma v(\tau) - \sin\varphi(\tau) + \zeta(\tau). \label{E2} \end{equation} This equation is formally identical to the RCSJ model \cite{Stewart}, which is commonly used for the description of Josephson junctions without the environment and, thus, the methods developed for the RCSJ model can be applied. We have used the matrix continued fraction method \cite[Sec.~11.5]{Risken} for the solution of the Fokker-Planck equation associated with the Langevin Eq.~(\ref{E2}) for the probability distribution function $W(\varphi,v,\tau)$ \begin{align} \dfrac{\partial}{\partial \tau}W&=-v\dfrac{\partial}{\partial \varphi}W + \dfrac{\partial}{\partial v}\left(\gamma v + \sin\varphi - i_s + \gamma\Theta\dfrac{\partial}{\partial v}\right)W\nonumber\\ &\equiv -L_{FP}W. \end{align} The probability distribution function is obtained numerically by expanding the $v$-part of $W(\varphi,v,\tau)$ into quantum oscillator basis functions, thus obtaining a tridiagonal coupled system of differential equations, and the $2\pi$-periodic $\varphi$-part into the Fourier series \cite[Sec.~11.5]{Risken}. We restrict our analysis here to the stationary case where the average value of junction voltage $v$ and junction current $i$ can be computed as \begin{align} \langle v\rangle &=\int\limits_0^{2\pi}\mathrm{d}\varphi\int\limits_{-\infty}^{\infty}\!\mathrm{d}v\, v W(\varphi,v,\infty),\\ \langle i\rangle &= i_s-\frac{\gamma}{1+\rho}\langle v\rangle, \label{Ei} \end{align} where the environmental parameter $\rho=R/R_j$, and the voltage autocorrelation function reads \cite[Sec.~7.2]{Risken} \begin{align} \langle v(\tau)v(0)\rangle =\int\limits_0^{2\pi}\mathrm{d}\varphi\int\limits_{-\infty}^{\infty}\!\mathrm{d}v\, v\,e^{-|\tau|L_{FP}}v W(\varphi,v,\infty).
\end{align} Finally, the steady state voltage noise is obtained by time integral of the (connected) voltage autocorrelation function \begin{align} S=\int\limits_{-\infty}^{\infty}\mathrm{d}\tau\big(\langle v(\tau)v(0)\rangle - \langle v(\tau)\rangle\langle v(0)\rangle\big). \label{VN} \end{align} \section{Basic mechanism}\label{mechanism} In Fig.~\ref{F3}(a) we plot a typical curve of the mean junction current $\langle i\rangle$ dependence on the mean junction voltage $\langle v\rangle$ for temperature $\Theta=0.04$, quality factor $Q=1$, and different environment parameters $\rho$. Here, limit $\rho\rightarrow\infty$ corresponds to the case without the environment. For that case the mean current $\langle i\rangle$ is a monotonic function of the voltage $\langle v\rangle$. This changes with the introduction of the environment when a local maximum and local minimum (decreasing with decreasing $\rho$) of $\langle i\rangle$ develop. Fig.~\ref{F3}(b) shows the voltage dependence of the voltage noise --- it turns out that for a given $Q$ this quantity does not depend on the environment details and, thus, all curves for various $\rho$ coincide. \begin{figure}[ht!] \centering \includegraphics[width=0.5\textwidth]{Figure3.eps} \caption{\label{F3} Dependence of the junction current $\langle i\rangle$ (a) and the voltage noise $S$ (b) (all curves overlap) on the junction voltage $\langle v \rangle$ for $\Theta=0.04$, $Q=1$ and different $\rho$'s. (c) Corresponding dependence of the voltage noise $S$ on the junction current $\langle i \rangle$ contains for finite $\rho$ a loop.} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=0.5\textwidth]{Figure4.eps} \caption{\label{F4} (a) Sketch of a typical current $\langle i\rangle$ vs.~voltage $\langle v\rangle$ characteristic for the RCSJ model of the circuit in Fig.~\ref{F2} with shown hysteresis between forward and backward current ramping directions. 
(b) Corresponding picture of a typical voltage noise dependence on the average junction current $\langle i\rangle$ with the hysteresis and discontinuities.} \end{figure} The non-monotonic dependence of current on the mean voltage has serious consequences for current-biased junctions as illustrated in Fig.~\ref{F4}(a). The mean voltage is not a function of the junction current since in the range between the current minimum and maximum 3 different voltage values can be attributed to the same current (green line in Fig.~\ref{F4}(a)). In practice, when the junction current is ramped up, it will follow the stable branch until it hits the point A when a step back would be required to keep on the average characteristic (grey curve). Instead, the mean voltage will switch to its other stable value at point B (red dots). For the reverse ramp, the analogous situation happens between points C and D (black dotted curve). Depending on the distance between the current extrema the hysteresis can be large. The fact that $\langle v\rangle$ is not a function of $\langle i\rangle$ has also a profound effect on the voltage noise dependence on $\langle i\rangle$. Typically, this dependence (understood as a parametric curve of both quantities parametrized by the mean voltage) contains a loop as shown in Fig.~\ref{F3}(c) and further illustrated in detail in Fig.~\ref{F4}(b) by the solid grey line. There the points $A$, $B$, $C$, $D$ correspond to the points in Fig.~\ref{F4}(a). For the same reasons causing the hysteresis in $v$-$i$ characteristics one will not observe the whole loop in the noise either under the junction-current ramp conditions. The measured noise will leave the loop in the moment when a step back in the current bias (negative differential resistance) is needed to remain on the loop. This coincides with the points $A$ and $C$, respectively, i.e.~with the switching currents for opposite ramp directions. 
Therefore, we suggest this mechanism as responsible for the discontinuous changes of the noise observed in the experiment close to the switching currents (see bottom panel of Fig.~\ref{F1}). Furthermore, the noise loop is not symmetric and, therefore, the height and the position of the peaks depend on the direction of the measurement as illustrated again by the black and red dotted lines (same color-code as in Fig.~\ref{F1}). As we will show later, for some parameters the lower peak becomes negligible and in this case we get qualitatively the same noise behavior as in the experiment. To sum up, we have strong evidence that both the large hysteresis and the asymmetric voltage noise are a consequence of the influence of the electromagnetic environment on the current-biased junction. \section{Results and discussion} \label{results} In this section we show that other properties of the $V$-$I$ curves and noise are also qualitatively in agreement with the experiment. The junction used in the experiment has $R_j=7.7k\Omega$, $I_c=19.8nA$, and $C_j=5fF$ and the environment was modeled by $R=0.1k\Omega$ and $C=0.1pF$ in all presented examples. In Fig.~\ref{F5} we plot the $V$-$I$ characteristics for different temperatures and the current dependence of the voltage noise for the same parameters is plotted in Fig.~\ref{F6}. The voltage noise is expressed in units of the thermal Johnson-Nyquist noise $S_T=2k_BTR_p$ of the equivalent parallel resistor $R_p$. In all figures the solid grey line represents the full static solution of the circuit Fig.~\ref{F2} and dotted red and black lines show the supposed measured curves when the junction current ramps are considered. From the $V$-$I$ characteristics one can see that the measured hysteresis can be rather large and increases with decreasing temperature. For the chosen parameters the upper drop in the mean junction voltage is approximately $0.1mV$. It is of the same order as in the experiment (Fig.~\ref{F1}).
There is one difference, however. While the asymptotic $V$-$I$ lines are almost horizontal in the experiment (Fig.~\ref{F1}) they follow the $V=R_jI$ line in our theory. We believe that this discrepancy would be fixed by using a nonlinear model for the junction. The nonlinear model gives a better description of real junctions for the voltages close to the gap voltage $V_G$ \cite[Sec.~2.3.2]{Likharev}. A simple piecewise-linear approximation ($R_j(V)=R_{(a)}$ at $|V|<V_G$, $R_j(V)=R_{(b)}$ at $|V|>V_G$ with $R_{(a)}\gg R_{(b)}$) should be enough to show that these measured horizontal top and bottom $V$-$I$ lines have their origin in the almost infinite slope change of $I$ close to the $V=V_G$ known from typical $V$-$I$ characteristics of a Josephson junction \cite[Sec.~2.3.2]{Likharev} (note that a vertical line in the $I$-$V$ curve is horizontal in the $V$-$I$ characteristic). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure5.eps} \caption{\label{F5} Junction voltage $V$ vs.~junction current $I$ for varying temperatures (for other parameter values see the main text).} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure6.eps} \caption{\label{F6} Analogue of Fig.~\ref{F5} for the voltage noise $S$ normalized by $S_{T}=2k_{B}TR_{p}$.} \end{figure} Our calculated voltage noise properties are also in good agreement with the experiment \cite{Hakonen} (experimental data are unpublished yet). With decreasing temperature the location of the higher asymmetric noise peak shifts farther away from the zero current and the peak becomes steeper. The lower peak is very small even for low temperatures. The very same trends are observed also in the experiment \cite{Hakonen}. A closer look at the experimental data in Fig.~\ref{F1} reveals also a small drop in the noise close to $\langle I\rangle=0$ (marked with the green circle). 
The realistic environment present in the experiment was apparently more complex than the one assumed here and, thus, quantitative differences are expected, yet our qualitative explanation should be valid also for more realistic environments. An important question is the origin of the voltage noise. As demonstrated in Fig.~\ref{F7} we are not dealing exclusively with the thermal noise associated with the combination of the junction differential resistance $R_D=dV/dI$ and resistance $R$. Fig.~\ref{F7} shows the voltage noise $S$ divided by $S_D=2k_BTR_{Dp}$, where $R_{Dp}=RR_D/(R+R_D)$. For $\langle I\rangle =0$ the ratio $S/S_D$ approaches one (as expected from the fluctuation-dissipation theorem) but it exceeds one in the region where the hysteresis of $V$-$I$ curves is observed. The origin of this extra noise is in the double-valued nature of the voltage (dichotomous switching) in this regime as can be deduced from the previous RCSJ model studies \cite{Voss} as well as from the fact that the maximum of the noise is decreasing with increasing temperature (typical for dichotomous noise). In conclusion, there are several possible methods for verification of our hypothesis. Depending on the details of the measurement the true nature of the $V$-$I$ characteristic could be revealed by using the voltage ramp instead of the junction-current ramp. The other possibility is to use a larger number of measurements and slower current ramping. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure7.eps} \caption{\label{F7} Voltage noise $S$ normalized by the equivalent thermal noise $S_D=2k_BTR_{Dp}$ of the parallel combination of $R$ and the junction's differential resistance $R_D$ vs.~the mean junction current $I$ for a wide range of temperatures (for other parameter values see the main text).
The unit value of the normalized noise at zero junction current reflects the fluctuation-dissipation theorem.} \end{figure} \begin{acknowledgments} We thank Aurelien Fay and Pertti Hakonen for sharing their experimental data with us before publication and for useful discussions. We also thank Gabriel Niebler and Tero Heikkil\"{a} for valuable input at the early stages of this work. We acknowledge support by the Czech Science Foundation via grant No.~204/11/J042 and by the Charles University Research Center "Physics of Condensed Matter and Functional Materials". \end{acknowledgments}
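For readers who want a quick feel for the dynamics, the dimensionless Langevin equation (Eq.~(\ref{E2})) can also be integrated directly. The following Euler-Maruyama sketch in Python is an illustration only: it is not the matrix-continued-fraction method used for the results above, and the parameter values are arbitrary.

```python
import math
import random

def simulate_mean_voltage(i_s, gamma, theta, dt=0.01, n_steps=200_000,
                          burn_in=20_000, seed=1):
    """Euler-Maruyama integration of the dimensionless Langevin equation
    dv/dtau = i_s - gamma*v - sin(phi) + zeta,  dphi/dtau = v,
    with <zeta(t1) zeta(t2)> = 2*gamma*theta*delta(t1 - t2).
    Returns the time-averaged junction voltage <v> after a burn-in."""
    rng = random.Random(seed)
    noise_amp = math.sqrt(2.0 * gamma * theta * dt)  # Euler-Maruyama noise step
    v = phi = 0.0
    v_sum = 0.0
    for step in range(n_steps):
        v += dt * (i_s - gamma * v - math.sin(phi)) + noise_amp * rng.gauss(0.0, 1.0)
        phi += dt * v
        if step >= burn_in:
            v_sum += v
    return v_sum / (n_steps - burn_in)
```

At zero temperature this reproduces the familiar RCSJ behavior: for $i_s<1$ the phase settles in a well and $\langle v\rangle\approx 0$, while for $i_s>1$ the junction runs with a finite mean voltage below $i_s/\gamma$.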
A cordon bleu or schnitzel cordon bleu is a dish of meat wrapped around cheese (or with cheese filling), then breaded and pan-fried or deep-fried. Veal or pork cordon bleu is made of veal or pork pounded thin and wrapped around a slice of ham and a slice of cheese, breaded, and then pan-fried or baked. For chicken cordon bleu, chicken breast is used instead of veal. Ham cordon bleu is ham stuffed with mushrooms and cheese.

Name

The French term is translated as "blue ribbon". According to Larousse Gastronomique, the cordon bleu "was originally a wide blue ribbon worn by members of the highest order of knighthood, L'Ordre des chevaliers du Saint-Esprit, instituted by Henri III of France in 1578. By extension, the term has since been applied to food preparation to a very high standard and by outstanding cooks. The analogy no doubt arose from the similarity between the sash worn by the knights and the ribbons (generally blue) of a cook's apron."

History

The origins of cordon bleu as a schnitzel filled with cheese are in Brig, Switzerland, probably around the 1940s; it is first mentioned in a cookbook from 1949. The earliest reference to "chicken cordon bleu" in The New York Times is dated to 1967, while similar veal recipes are found from at least 1955.

Variants

There are many variations of the recipe involving cutlet, cheese, and meat. A popular way to prepare chicken cordon bleu is to butterfly-cut a chicken breast, place a thin slice of ham inside, along with a thin slice of a soft, easily melted cheese. The chicken breast is then rolled into a roulade, coated in bread crumbs, and then deep-fried. Other variations exist with the chicken baked rather than fried. Other common variations include omitting the bread crumbs, wrapping the ham around the chicken, or using bacon in place of ham. A variant popular in the Asturias province of Spain is cachopo, a deep-fried cutlet of veal, beef or chicken wrapped around a filling of Serrano ham and cheese.
In Spain, a version usually made with just two slices of ham and cheese, although it can also be found with chicken or pork loin added, is often called san jacobo. A common variant in Uruguay and Argentina is the milanesa rellena. It consists of two beef or chicken fillets passed through beaten egg, then stuffed with cooked ham and mozzarella cheese and superimposed like a sandwich. Once this is done, they are again passed through beaten eggs and breadcrumbs, to be fried or baked. It is usually served with papas fritas (French fries) as a garnish. In largely Muslim-populated countries, halal versions of chicken cordon bleu are also popular: the chicken is rolled around beef or mutton instead of pork products.

See also: Culinary Heritage of Switzerland, Chicken Kiev, Karađorđeva šnicla, Dishes à la Maréchale, Breaded cutlet, List of stuffed dishes, Double Down (sandwich)
module OneviewCookbook module API2000 module C7000 # ServerCertificate API2000 C7000 provider class ServerCertificateProvider < OneviewCookbook::API1800::C7000::ServerCertificateProvider end end end end
<?php namespace Twig; /** * Marks a content as safe. * * @author Fabien Potencier <fabien@symfony.com> */ class Markup implements \Countable, \JsonSerializable { private $content; private $charset; public function __construct($content, $charset) { $this->content = (string) $content; $this->charset = $charset; } public function __toString() { return $this->content; } public function count() { return mb_strlen($this->content, $this->charset); } public function jsonSerialize() { return $this->content; } } class_alias('Twig\Markup', 'Twig_Markup');
package com.wincom.actor.editor.flow.model; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * @author hudsonr Created on Jun 30, 2003 */ public class Transition extends FlowElement { Logger log = LoggerFactory.getLogger(this.getClass()); private static final long serialVersionUID = 4486688831285730788L; public Activity source, target; public Transition(Activity source, Activity target) { log.info("check"); this.source = source; this.target = target; source.addOutput(this); target.addInput(this); } }
Q: Is it possible to make the pointer "stick" to one display when using multiple displays? I use multiple displays, and often find the mouse pointer moving to the other display when going to the scroll bar or pressing the close button on an application, for example. Is it possible to make the mouse "stick" to one display? To see what I mean, you could try to slowly drag a window to a side of your display. It will stop when the edge of the window touches the edge of the display for a little while, unless you push it further. I essentially want this behavior for my mouse pointer. I found something similar to this for Windows, but unfortunately that's for Windows, not macOS where I need it! A: Try EdgeCase for US$14.99 from the MAS. It has 4.5/5.0 stars with 53 reviews. It should 'lock' your mouse to 1 display. It might not be an exact duplicate of the one you found for Windows, though. The application description states: The app prevents you from ever accidentally losing your cursor into a rarely-used secondary display, or from overshooting as you flick to an OS X hotcorner or the menu bar. EdgeCase prevents your mouse from moving between multiple monitors by putting a temporary barrier between the edges of your screens. When you do want to switch to another screen, EdgeCase provides several shortcuts that allow you to cross screen edges manually. Simply perform a 'bounce' gesture, wait for ½ second, or hold a hotkey. Your cursor will then move into the next screen freely.
Q: project of file storage system in asp.net how to implement correctly? on upload.aspx page i have conn1.ConnectionString = "Data Source=.\ip-of-remote-database-server;AttachDbFilename=signup.mdf;Integrated Security=True;User Instance=True"; and all the queries are also on the same page, only the database is on another machine.. so is this the correct way of implementing it, or do I have to create all queries on the other machine and call them from the application? A: Any given query might originate from the client code (such as ASP.NET), or it might be stored a priori in the DBMS itself as a VIEW or a stored procedure (or even a trigger). But no matter where it originated from, the query is always executed by the DBMS server. This way, the DBMS can guarantee the integrity of data and "defend" itself from the bugs in the client code. The logical separation of client and server is why this model is called client/server, but that doesn't mean they must be separate physical machines - you'll decide that based on expected workload [1] and usage patterns [2]. [1] Distributing the processing to multiple machines might increase performance. [2] E.g. you might need several "fat" clients around the LAN (communicating with the same database server) to reach all your users. This is less relevant for the Web, where there are additional layers of indirection between users and the database. A: It depends on your infrastructure. If you have got SQL Server locally you can use it. I assume that it is a school project, so it does not matter. In real life it is usually a good idea to separate the web server and the database server.
# Konstantin Kashin

Institute for Quantitative Social Science, Harvard University

Konstantin Kashin is a Fellow at the Institute for Quantitative Social Science at Harvard University and will be joining Facebook's Core Data Science group in September 2015. Konstantin develops new statistical methods for diverse applications in the social sciences, with a focus on causal inference, text as data, and Bayesian forecasting. He holds a PhD in Political Science and an AM in Statistics from Harvard University.

## Scraping PDFs and Word Documents with Python

| Tags: python bash

This week I wanted to write a Python script that was able to extract text from both pdf files and Microsoft Word documents (both .doc and .docx formats). This actually proved to be rather difficult, particularly when it came to Microsoft Word, since there was no one utility that was able to handle both the old Word format and the more recent .docx one. This post is a summary of the utilities that I came across and what I finally used to complete this task.

First, with regards to pdf files, the main Python library for opening pdf files is PDFMiner. There exist several additional libraries that essentially serve as wrappers to PDFMiner, including Slate. Slate is significantly simpler to use than PDFMiner, but this comes at the expense of very basic functionality. Even though I first tried to use Slate, it ended up not performing well for the pdfs I was working with. Specifically, it did not fully respect the original spacing between words, thereby cutting certain words into multiple fragments or concatenating others. I thus switched to PDFMiner because of its customizability. Using the pdf2txt.py command line utility, PDFMiner experienced a similar problem with word spacing. However, this turned out to be extremely easy to tune just using a word margin option passed to the pdf2txt.py utility. Specifically, I ran the following in the command line:

    pdf2txt.py -o foo.txt -W 0.5 foo.pdf

When it comes to Word 2007 .docx files, the Python-based utility that worked well is the python-docx library. It worked well in the command line as follows:

    ./example-extracttext.py foo.docx foo.txt

For older Word documents (for example Word 2003), the python-docx library does not work. I ended up using the C-based antiword utility. Originally a Linux-based utility, antiword (version 0.37) can be installed on Mac OS X as follows:

    brew update
    brew install antiword

From within Python, I was then easily able to convert a .doc document to text:

    os.system('antiword foo.doc > foo.txt')
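Taken together, the three per-format commands suggest a single dispatch helper keyed on file extension. A minimal sketch under the post's assumptions (the tool names `pdf2txt.py`, `example-extracttext.py`, and `antiword` are the ones used above; the `extraction_command` helper itself is hypothetical):

```python
import os

def extraction_command(path):
    """Return the command (as an argv list) that would extract text from
    *path*, following the per-format tools described in the post."""
    base, ext = os.path.splitext(path)
    ext = ext.lower()
    out = base + ".txt"
    if ext == ".pdf":
        # PDFMiner's CLI; -W 0.5 tunes the word margin to fix spacing
        return ["pdf2txt.py", "-o", out, "-W", "0.5", path]
    if ext == ".docx":
        # python-docx example script for Word 2007+ files
        return ["./example-extracttext.py", path, out]
    if ext == ".doc":
        # antiword writes to stdout; capture it instead of shell redirection
        return ["antiword", path]
    raise ValueError("unsupported file type: " + ext)
```

Running these via `subprocess.run` (capturing stdout for the `antiword` case) also avoids the shell redirection of the original `os.system` call.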
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <!-- NewPage --> <html lang="en"> <head> <!-- Generated by javadoc (1.8.0_102) on Mon Oct 03 08:48:22 EDT 2016 --> <title>A-Index</title> <meta name="date" content="2016-10-03"> <link rel="stylesheet" type="text/css" href="../stylesheet.css" title="Style"> <script type="text/javascript" src="../script.js"></script> </head> <body> <script type="text/javascript"><!-- try { if (location.href.indexOf('is-external=true') == -1) { parent.document.title="A-Index"; } } catch(err) { } //--> </script> <noscript> <div>JavaScript is disabled on your browser.</div> </noscript> <!-- ========= START OF TOP NAVBAR ======= --> <div class="topNav"><a name="navbar.top"> <!-- --> </a> <div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div> <a name="navbar.top.firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../overview-summary.html">Overview</a></li> <li>Package</li> <li>Class</li> <li>Use</li> <li><a href="../overview-tree.html">Tree</a></li> <li><a href="../deprecated-list.html">Deprecated</a></li> <li class="navBarCell1Rev">Index</li> <li><a href="../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li>Prev Letter</li> <li><a href="index-2.html">Next Letter</a></li> </ul> <ul class="navList"> <li><a href="../index.html?index-files/index-1.html" target="_top">Frames</a></li> <li><a href="index-1.html" target="_top">No&nbsp;Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_top"> <li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_top"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip.navbar.top"> <!-- --> </a></div> <!-- ========= END OF 
TOP NAVBAR ========= --> <div class="contentContainer"><a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">K</a>&nbsp;<a href="index-12.html">L</a>&nbsp;<a href="index-13.html">M</a>&nbsp;<a href="index-14.html">N</a>&nbsp;<a href="index-15.html">O</a>&nbsp;<a href="index-16.html">P</a>&nbsp;<a href="index-17.html">Q</a>&nbsp;<a href="index-18.html">R</a>&nbsp;<a href="index-19.html">S</a>&nbsp;<a href="index-20.html">T</a>&nbsp;<a href="index-21.html">U</a>&nbsp;<a href="index-22.html">V</a>&nbsp;<a href="index-23.html">W</a>&nbsp;<a name="I:A"> <!-- --> </a> <h2 class="title">A</h2> <dl> <dt><a href="../org/apache/pirk/responder/wideskies/spark/Accumulators.html" title="class in org.apache.pirk.responder.wideskies.spark"><span class="typeNameLink">Accumulators</span></a> - Class in <a href="../org/apache/pirk/responder/wideskies/spark/package-summary.html">org.apache.pirk.responder.wideskies.spark</a></dt> <dd> <div class="block">Accumulators for the Responder</div> </dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/responder/wideskies/spark/Accumulators.html#Accumulators-org.apache.spark.api.java.JavaSparkContext-">Accumulators(JavaSparkContext)</a></span> - Constructor for class org.apache.pirk.responder.wideskies.spark.<a href="../org/apache/pirk/responder/wideskies/spark/Accumulators.html" title="class in org.apache.pirk.responder.wideskies.spark">Accumulators</a></dt> <dd>&nbsp;</dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/responder/wideskies/standalone/Responder.html#addDataElement-java.lang.String-org.json.simple.JSONObject-">addDataElement(String, JSONObject)</a></span> - Method in class 
org.apache.pirk.responder.wideskies.standalone.<a href="../org/apache/pirk/responder/wideskies/standalone/Responder.html" title="class in org.apache.pirk.responder.wideskies.standalone">Responder</a></dt> <dd> <div class="block">Method to add a data element associated with the given selector to the Response</div> </dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/response/wideskies/Response.html#addElement-int-java.math.BigInteger-">addElement(int, BigInteger)</a></span> - Method in class org.apache.pirk.response.wideskies.<a href="../org/apache/pirk/response/wideskies/Response.html" title="class in org.apache.pirk.response.wideskies">Response</a></dt> <dd>&nbsp;</dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/test/utils/TestUtils.html#addElement-org.w3c.dom.Document-org.w3c.dom.Element-java.lang.String-java.lang.String-java.lang.String-java.lang.String-">addElement(Document, Element, String, String, String, String)</a></span> - Static method in class org.apache.pirk.test.utils.<a href="../org/apache/pirk/test/utils/TestUtils.html" title="class in org.apache.pirk.test.utils">TestUtils</a></dt> <dd> <div class="block">Helper method to add elements to the test data schema</div> </dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/query/wideskies/QueryInfo.html#addQuerySchema-org.apache.pirk.schema.query.QuerySchema-">addQuerySchema(QuerySchema)</a></span> - Method in class org.apache.pirk.query.wideskies.<a href="../org/apache/pirk/query/wideskies/QueryInfo.html" title="class in org.apache.pirk.query.wideskies">QueryInfo</a></dt> <dd>&nbsp;</dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/inputformat/hadoop/InputFormatConst.html#ALLOWED_FORMATS">ALLOWED_FORMATS</a></span> - Static variable in class org.apache.pirk.inputformat.hadoop.<a href="../org/apache/pirk/inputformat/hadoop/InputFormatConst.html" title="class in org.apache.pirk.inputformat.hadoop">InputFormatConst</a></dt> <dd>&nbsp;</dd> <dt><span 
class="memberNameLink"><a href="../org/apache/pirk/utils/SystemConfiguration.html#appendProperty-java.lang.String-java.lang.String-">appendProperty(String, String)</a></span> - Static method in class org.apache.pirk.utils.<a href="../org/apache/pirk/utils/SystemConfiguration.html" title="class in org.apache.pirk.utils">SystemConfiguration</a></dt> <dd> <div class="block">Appends a property via a comma separated list</div> </dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/schema/data/partitioner/DataPartitioner.html#arrayToPartitions-java.util.List-java.lang.String-">arrayToPartitions(List&lt;?&gt;, String)</a></span> - Method in interface org.apache.pirk.schema.data.partitioner.<a href="../org/apache/pirk/schema/data/partitioner/DataPartitioner.html" title="interface in org.apache.pirk.schema.data.partitioner">DataPartitioner</a></dt> <dd> <div class="block">Creates partitions for an array of the same type of elements - used when a data value field is an array and we wish to encode these into the return value.</div> </dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/schema/data/partitioner/IPDataPartitioner.html#arrayToPartitions-java.util.List-java.lang.String-">arrayToPartitions(List&lt;?&gt;, String)</a></span> - Method in class org.apache.pirk.schema.data.partitioner.<a href="../org/apache/pirk/schema/data/partitioner/IPDataPartitioner.html" title="class in org.apache.pirk.schema.data.partitioner">IPDataPartitioner</a></dt> <dd> <div class="block">Create partitions for an array of the same type of elements - used when a data value field is an array and we wish to encode these into the return value</div> </dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/schema/data/partitioner/ISO8601DatePartitioner.html#arrayToPartitions-java.util.List-java.lang.String-">arrayToPartitions(List&lt;?&gt;, String)</a></span> - Method in class org.apache.pirk.schema.data.partitioner.<a 
href="../org/apache/pirk/schema/data/partitioner/ISO8601DatePartitioner.html" title="class in org.apache.pirk.schema.data.partitioner">ISO8601DatePartitioner</a></dt> <dd>&nbsp;</dd> <dt><span class="memberNameLink"><a href="../org/apache/pirk/schema/data/partitioner/PrimitiveTypePartitioner.html#arrayToPartitions-java.util.List-java.lang.String-">arrayToPartitions(List&lt;?&gt;, String)</a></span> - Method in class org.apache.pirk.schema.data.partitioner.<a href="../org/apache/pirk/schema/data/partitioner/PrimitiveTypePartitioner.html" title="class in org.apache.pirk.schema.data.partitioner">PrimitiveTypePartitioner</a></dt> <dd> <div class="block">Create partitions for an array of the same type of elements - used when a data value field is an array and we wish to encode these into the return value</div> </dd> </dl> <a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">K</a>&nbsp;<a href="index-12.html">L</a>&nbsp;<a href="index-13.html">M</a>&nbsp;<a href="index-14.html">N</a>&nbsp;<a href="index-15.html">O</a>&nbsp;<a href="index-16.html">P</a>&nbsp;<a href="index-17.html">Q</a>&nbsp;<a href="index-18.html">R</a>&nbsp;<a href="index-19.html">S</a>&nbsp;<a href="index-20.html">T</a>&nbsp;<a href="index-21.html">U</a>&nbsp;<a href="index-22.html">V</a>&nbsp;<a href="index-23.html">W</a>&nbsp;</div> <!-- ======= START OF BOTTOM NAVBAR ====== --> <div class="bottomNav"><a name="navbar.bottom"> <!-- --> </a> <div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div> <a name="navbar.bottom.firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../overview-summary.html">Overview</a></li> 
<li>Package</li> <li>Class</li> <li>Use</li> <li><a href="../overview-tree.html">Tree</a></li> <li><a href="../deprecated-list.html">Deprecated</a></li> <li class="navBarCell1Rev">Index</li> <li><a href="../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li>Prev Letter</li> <li><a href="index-2.html">Next Letter</a></li> </ul> <ul class="navList"> <li><a href="../index.html?index-files/index-1.html" target="_top">Frames</a></li> <li><a href="index-1.html" target="_top">No&nbsp;Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_bottom"> <li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_bottom"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip.navbar.bottom"> <!-- --> </a></div> <!-- ======== END OF BOTTOM NAVBAR ======= --> </body> </html>
\section*{Abstract} In this article, we construct and analyse explicit numerical splitting methods for a class of semi-linear stochastic differential equations (SDEs) with additive noise, where the drift is allowed to grow polynomially and satisfies a global one-sided Lipschitz condition. The methods are proved to be mean-square convergent of order $1$ and to preserve important structural properties of the SDE. In particular, first, they are hypoelliptic in every iteration step. Second, they are geometrically ergodic and have asymptotically bounded second moments. Third, they preserve oscillatory dynamics, such as amplitudes, frequencies and phases of oscillations, even for large time steps. Our results are illustrated on the stochastic FitzHugh-Nagumo model and compared with known mean-square convergent tamed/truncated variants of the Euler-Maruyama method. The capability of the proposed splitting methods to preserve the aforementioned properties makes them applicable within different statistical inference procedures. In contrast, known Euler-Maruyama type methods commonly fail in preserving such properties, yielding ill-conditioned likelihood-based estimation tools or computationally infeasible simulation-based inference algorithms. \subsubsection*{Keywords} Stochastic differential equations, Locally Lipschitz drift, Hypoellipticity, Ergodicity, FitzHugh-Nagumo model, Splitting methods, Mean-square convergence \subsubsection*{AMS subject classifications} 60H10, 60H35, 65C20, 65C30 \subsubsection*{Acknowledgements} E.B. was supported by the LCM -- K2 Center within the framework of the Austrian COMET-K2 program. A.S. was supported by MIAI@Grenoble Alpes, (ANR-19-P3IA-0003). E.B., M.T. and I.T. were supported by the Austrian Science Fund (FWF): W1214-N15, project DK14. All authors were supported by the Austrian Exchange Service (OeAD), bilateral project FR 03/2017. 
\section{Introduction} The aim of this article is to construct and analyse splitting methods for semi-linear stochastic differential equations (SDEs) of additive noise type \begin{equation}\label{eq:semi_linear_SDE} dX(t)=F(X(t)) dt + \Sigma dW(t)\, := \, \bigl( AX(t) + N(X(t)) \bigr)dt +\Sigma dW(t), \quad X(0)=X_0, \quad t \in [0,T], \end{equation} where the diffusion matrix $\Sigma$ may be degenerate and the drift $F$ satisfies a global one-sided Lipschitz condition and is allowed to grow polynomially. Coefficients with these properties appear in many applications \cite{Hutzenthaler2012}, ranging from physics \cite{Mattingly2002,Milstein2007} over population growth problems \cite{Hutzenthaler2007,Khasminskii2011} to neuroscience \cite{FitzHugh1961,Hodgkin1952,Nagumo1962} and others. As an illustrative equation from this class of SDEs, we discuss the stochastic FitzHugh-Nagumo (FHN) model \cite{Berglund2012,Bonaccorsi2008,Leon2018}, a well-known neuronal model describing the generation of spikes of single neurons at the intracellular level. Our aim is to construct numerical methods for SDE \eqref{eq:semi_linear_SDE}, which are easy to implement and also applicable across different disciplines in the broad field of statistical inference. This implies that the developed numerical methods need to meet several requirements: \begin{itemize} \item Statistical applications require strong approximations of SDEs. Thus, we focus on the concept of \textit{mean-square convergence} of order $p>0$ \cite{Kloeden1992}, i.e., there exists $c>0$ such that for sufficiently small $\Delta$ the inequality \begin{equation}\label{eq:ms_convergence_order} \max\limits_{0\leq t_i\leq T} \left( \mathbb{E}\left[ \norm{X(t_i)-\widetilde{X}(t_i)}^2 \right] \right)^{1/2} \leq c \Delta^p \end{equation} holds, where $t_i$ are time points of a discretised time interval $[0,T]$ with equidistant time steps $\Delta=t_i-t_{i-1}$. 
In \eqref{eq:ms_convergence_order}, $X(t_i)$ and $\widetilde{X}(t_i)$ denote the true and the numerical solution of \eqref{eq:semi_linear_SDE} at the discrete time points $t_i$, respectively, and $\norm{\cdot}$ is the Euclidean norm. Since it was shown in \cite{Hutzenthaler2011} that the standard Euler-Maruyama method does not converge in the mean-square sense under the above assumptions on the drift $F$, the development of mean-square convergent variants of this method has received much attention. In particular, tamed \cite{Hutzenthaler2012_2,Sabanis2013,Tretyakov2012} and truncated \cite{Hutzenthaler2012,Mao2015,Mao2016} Euler-Maruyama methods have been proposed. They all aim to control the unbounded growth arising from the non-globally Lipschitz drift by enforcing a rescaling modification to the drift and/or diffusion coefficients. \item Simulation-based statistical methods require generating paths of SDEs as computationally efficiently as possible, see, e.g., \cite{Buckwar2019}. Using \textit{explicit} numerical methods is a first step towards achieving sufficiently low computational cost. While the aforementioned mean-square convergent variants of the Euler-Maruyama method are explicit, they commonly fail in preserving important structural properties of the SDE. The major key to computational efficiency, however, is to construct explicit methods which are capable of preserving the underlying properties of the SDE for time steps as large as possible. This leads to the next point. \item An important issue in the field of (stochastic) numerical analysis is the \textit{preservation of structural properties} of the considered SDE by the numerical methods used to approximate it. Geometric Numerical Integration is a well-established framework in this context \cite{Hairer2006}.
Here, we discuss the preservation of hypoellipticity, geometric ergodicity and oscillatory dynamics such as amplitudes, frequencies and phases of oscillations: \begin{itemize} \item The diffusion matrix $\Sigma$ of SDE \eqref{eq:semi_linear_SDE} may be of full rank or degenerate, where in the latter case the SDE may be \textit{hypoelliptic}. The case of degenerate noise naturally occurs in many applications \cite{Ableidinger2017,Ditlevsen2019,Leimkuhler2015,Leon2018,Mattingly2002,Milstein2007} and the hypoelliptic property ensures that the solution of the SDE admits a smooth transition density \cite{Nualart1995}. This means that the noise is propagated through the whole system via the drift of the SDE, even though it does not act directly on all components. In many inference approaches using discrete approximations of SDEs, it is necessary that a discrete analogue of the hypoelliptic property holds at each iteration step. In particular, the distribution of $\widetilde{X}(t_i)$ given the previous value $\widetilde{X}(t_{i-1})$ must admit a smooth density, a property that we term $1$-step hypoellipticity. It is known that Euler-Maruyama type methods do not satisfy this. Thus, they yield non-invertible conditional covariance matrices, and, consequently, ill-conditioned likelihood-based inference methods \cite{Ditlevsen2019,Melnykova2018,Pokern2009}. Higher-order Taylor approximation methods \cite{Kloeden1992} may be $1$-step hypoelliptic \cite{Ditlevsen2019}. However, since such methods are also not mean-square convergent in the case of superlinearly growing coefficients \cite{Hutzenthaler2011}, and yield covariance matrices whose order of magnitude with respect to the time step is not preserved \cite{Ditlevsen2019}, they lead again to ill-posed statistical problems. \item The analysis of the asymptotic behaviour of the process is of further crucial interest.
In particular, if SDE \eqref{eq:semi_linear_SDE} possesses an underlying Lyapunov structure, it may be \textit{geometrically ergodic} \cite{Mattingly2002}. This property ensures that the distribution of the process converges exponentially fast to a unique limit for any starting value $X_0$, and has two important statistical implications. First, the choice of the initial value $X_0$ is negligible since its impact on the distribution of the process decreases exponentially fast. This is particularly relevant, since $X_0$ is usually not known, especially when the process is only partially observed. Second, there is a correspondence of ``time averages along trajectories'' and ``space averages across trajectories'' of geometrically ergodic systems, see, e.g., \cite{Ableidinger2017,daPrato1996}. This means that quantities related to the distribution of the process can be estimated from a single path simulated over a sufficiently large time horizon instead of relying on repeated simulations of trajectories. For the importance of this feature in statistical inference algorithms, we refer, e.g., to \cite{Buckwar2019}. These two features are only useful when they are preserved by the numerical method used to approximate the geometrically ergodic process. Euler-Maruyama type methods tend to lose the Lyapunov structure of the SDE, not preserving this property \cite{Ableidinger2017,Mattingly2002}. In particular, here, we illustrate that they react sensitively to the starting condition $X_0$, and that they may yield poor approximations of the underlying invariant density of the process. \item The last structural properties we are focusing on are features linked to oscillatory dynamics such as \textit{amplitudes, frequencies} and \textit{phases} of oscillations. Already in the deterministic scenario it has been observed that Euler type methods may not preserve amplitudes and frequencies of oscillations, see, e.g., \cite{Stern2020,Hairer2006}. 
Similar findings have been made for Euler-Maruyama type methods in the stochastic case. For example, it has been proved that the Euler-Maruyama method does not preserve the linear growth rate of linear stochastic harmonic oscillators, overshooting the amplitudes of the underlying oscillations, even for arbitrarily small time steps $\Delta$ \cite{Strommen2004}. Similar non-preserving results of oscillation amplitudes have been observed for non-linear, ergodic and higher-dimensional stochastic oscillators \cite{Ableidinger2017,Chevallier2020,Cohen2012}. Taming/truncating perturbations do not improve this behaviour. Even worse, taming perturbations may also lead to a non-preservation of frequencies of oscillations \cite{Kelly2017,Kelly2018}. This lack of amplitude and frequency preservation is confirmed by our numerical experiments on the FHN model. Moreover, we find that Euler-Maruyama type methods may also not preserve phases of oscillations. This poor behaviour is linked to the non-preservation of geometric ergodicity. \end{itemize} \end{itemize} Here, we propose to apply \textit{splitting methods} for the approximation of paths of SDE \eqref{eq:semi_linear_SDE}, an approach that addresses all previously listed issues. The general idea of this technique is to split the SDE into explicitly solvable subequations, to derive their solutions, and to compose them in a suitable way. We refer to \cite{Blanes2009,Mclachlan2002} for a thorough discussion of splitting methods for ordinary differential equations (ODEs) and to \cite{Ableidinger2016,Ableidinger2017,BouRabee2017,Brehier2018,Chevallier2020,Leimkuhler2015,Milstein2003,Misawa2001,Petersen1998,Shardlow2003} for articles considering extensions to SDEs. Note that it is often possible to split the differential equation under consideration into different sets of subequations, the choice of the most useful set depending on the problem to be solved. 
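The taming idea discussed above can be sketched in a few lines. The following minimal illustration compares one of the standard taming rescalings, in which the drift increment $F(x)\Delta$ is replaced by $F(x)\Delta/(1+\Delta\norm{F(x)})$, with the unmodified Euler-Maruyama drift step on the scalar cubic drift $F(x)=-x^3$; the step size and initial value are hypothetical, chosen so that $|x_0|>\sqrt{2/\Delta}$, the regime in which the explicit scheme blows up. The additive noise increment is omitted since it would enter both methods identically.

```python
def em_drift_step(x, dt):
    # Unmodified Euler-Maruyama drift increment F(x)*dt for F(x) = -x^3.
    return x + (-x**3) * dt

def tamed_em_drift_step(x, dt):
    # Tamed increment F(x)*dt / (1 + dt*|F(x)|): its magnitude is bounded
    # by 1, while for moderate |x| it perturbs F(x)*dt only by O(dt^2).
    f = -x**3
    return x + f * dt / (1.0 + dt * abs(f))

dt = 0.01
x_em = x_tamed = 20.0        # hypothetical: 20 > sqrt(2/dt) ~ 14.1
for _ in range(6):
    x_em = em_drift_step(x_em, dt)
    x_tamed = tamed_em_drift_step(x_tamed, dt)
# x_em oscillates with rapidly growing magnitude (blow-up), while
# x_tamed decreases slowly toward the origin, moving by at most 1 per step.
```

This mirrors, in the deterministic drift part, the divergence result of \cite{Hutzenthaler2011} and the remedy of the tamed schemes cited above.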
For the class of SDEs with additive noise, where the drift $F$ consists of a linear and a non-linear term, i.e., $F(X(t))=AX(t)+N(X(t))$, as in Equation \eqref{eq:semi_linear_SDE}, the idea is to treat the nonlinear term $N(X(t))$ via a deterministic differential subequation and to solve the remaining linear stochastic subequation explicitly. When $N$ is globally Lipschitz continuous and uniformly bounded, this idea has been applied to the Jansen and Rit neural mass model in \cite{Ableidinger2017}, and for locally Lipschitz $N$ it has been applied to the Allen-Cahn equation in \cite{Brehier2018}. Here, we require the ODE with the non-linear term $N$ to be explicitly solvable, as is the case for the FHN model. If this is not possible, a further splitting step may be necessary or a numerical method for locally Lipschitz ODEs may be applied. Finally, composing the derived explicit solutions via the Lie-Trotter \cite{Trotter1959} and Strang \cite{Strang1968} approach, respectively, yields two \textit{explicit} numerical methods for SDE \eqref{eq:semi_linear_SDE}. We provide two ways of proving the boundedness of the second moment of the proposed splitting methods, each based on an additional assumption on the explicit solution of the underlying ODE with locally Lipschitz term $N$. This result is the key to establishing their \textit{mean-square convergence}. In particular, we prove that they converge with order $p=1$, in agreement with the convergence rate of comparable known splitting methods in the globally Lipschitz scenario \cite{Ableidinger2017,Milstein2003}, and with standard methods such as the Euler-Maruyama method in the case of additive noise \cite{Kloeden1992}. Moreover, we show that the proposed splitting methods are \textit{$1$-step hypoelliptic} for any time step $\Delta$, provided that the stochastic subequation of the splitting framework is hypoelliptic.
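The splitting strategy just described can be sketched on a scalar cubic model with $N(x)=-x^3$, where the nonlinear subequation $dx=-x^3\,dt$ admits the exact flow $x(t)=x_0/\sqrt{1+2tx_0^2}$. The choice $A=0$ below is made purely for illustration, so that the linear stochastic subequation reduces to $dX=\sigma\,dW$; in general it is a linear SDE with drift matrix $A$ that is likewise solvable in closed form. All numerical values are hypothetical.

```python
import numpy as np

def phi_N(x, t):
    # Exact flow of the nonlinear subequation dx/dt = -x^3:
    # x(t) = x0 / sqrt(1 + 2*t*x0^2); a contraction toward 0 for any t > 0.
    return x / np.sqrt(1.0 + 2.0 * t * x**2)

def lie_trotter_step(x, dt, sigma, rng):
    # Lie-Trotter composition: nonlinear flow over a full step, then the
    # exact solution of the stochastic subequation dX = sigma dW
    # (with A = 0 this is simply a Gaussian increment).
    return phi_N(x, dt) + sigma * np.sqrt(dt) * rng.standard_normal()

def strang_step(x, dt, sigma, rng):
    # Strang composition: half nonlinear flow, full stochastic step,
    # half nonlinear flow.
    x = phi_N(x, 0.5 * dt)
    x = x + sigma * np.sqrt(dt) * rng.standard_normal()
    return phi_N(x, 0.5 * dt)

rng = np.random.default_rng(0)
dt, sigma = 0.1, 1.0
x = 20.0                   # a large initial value causes no blow-up here
for _ in range(1000):
    x = lie_trotter_step(x, dt, sigma, rng)
# The iterates remain bounded for any dt, since each substep is an exact
# solution of its subequation.
```

Note the contrast with the explicit Euler-Maruyama step, which explodes for such initial values: here the unboundedly growing term is handled by an exact flow rather than a linearisation.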
The invertibility of the resulting covariance matrix is guaranteed by using the information contained in the drift matrix $A$. In particular, we show that the transition distribution of the Lie-Trotter method corresponds to a non-degenerate multivariate normal distribution. Having a non-degenerate covariance matrix may then allow likelihood-based inference. Furthermore, we prove that the constructed splitting methods satisfy a discrete Lyapunov condition, and are \textit{geometrically ergodic} for any time step $\Delta$. This result requires the assumption that $\norm{e^{A\Delta}}<1$, where the matrix norm is induced by the Euclidean norm. Moreover, we show that the second moments of the splitting methods are asymptotically bounded by constants which are independent of the time step $\Delta$. This result holds if the logarithmic norm \cite{Soderlind2006,Strom1975} of the matrix $A$ is strictly negative. In the one-dimensional case, some of the involved expressions simplify and we obtain precise \textit{closed-form (asymptotic) bounds of the second moments} of the proposed splitting methods. These bounds are illustrated on a cubic one-dimensional model problem with drift given by $F(X(t))=-X^3(t)$ \cite{Hutzenthaler2011,Mattingly2002}. In addition, we illustrate the proposed splitting methods on the stochastic FHN model and show through a variety of numerical experiments that they preserve the qualitative dynamics of neuronal spiking, in particular, \textit{amplitudes, frequencies} and \textit{phases} of the underlying oscillations even for large time discretisation steps $\Delta$. The article is organised as follows. In Section \ref{sec:2_FHN}, we introduce necessary mathematical preliminaries and notations, and we discuss equations of interest and relevant properties. In Section~\ref{sec:3_FHN}, we present the proposed splitting methods and recall different tamed and truncated variants of the Euler-Maruyama method.
In Section \ref{sec:4_FHN}, we establish the mean-square convergence of the splitting methods. In Section \ref{sec:5_FHN}, we prove their $1$-step hypoellipticity, establish their geometric ergodicity, derive (asymptotic) second moment bounds and illustrate them on a one-dimensional model problem. In Section \ref{sec:6_FHN}, we apply the proposed splitting approach to the stochastic FHN model and discuss conditions under which the presented results hold. In Section \ref{sec:7_FHN}, we provide a variety of numerical experiments, illustrating the theoretical results and comparing the considered numerical methods. Conclusions are reported in Section \ref{sec:8_FHN}. \section{Model and properties} \label{sec:2_FHN} Throughout, the following notations are used. \begin{notation} Let $x,y \in \mathbb{R}^d$ be generic vectors. Then $x_l$ denotes the $l$-th entry of $x$, $x^\top$ the transpose of $x$, $\norm{x}=(x_1^2+\ldots+ x_d^2)^{1/2}$ the Euclidean norm of $x$ and $(x,y)=x_1y_1+\ldots+x_dy_d$ the scalar product of $x$ and $y$. Further, let $A,B \in \mathbb{R}^{d\times d}$ be generic matrices. Then $a_{lj}$ denotes the component in the $l$-th row and $j$-th column of $A$, $A^{\top}$ the transpose of $A$, $0_d$ the $d$-dimensional zero vector and $\mathbb{I}_{d}$ the $d \times d$-dimensional identity matrix. Moreover, we denote by $\norm{A}=\sqrt{\lambda_{\textrm{max}}(A^\top A)}$ the matrix norm which is induced by the Euclidean norm, where $\lambda_{\textrm{max}}(A)$ is the largest eigenvalue of $A$, and by $\mu(A)=\lambda_{\textrm{max}}((A+A^\top)/2)$ the real-valued logarithmic norm which results from the Euclidean norm and its induced matrix norm. \end{notation}\noindent Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space with filtration $(\mathcal{F}(t))_{t \in [0,T]}$. Furthermore, let $(W(t))_{t \in [0,T]}$ be an $m$-dimensional Wiener process defined on that space and adapted to $(\mathcal{F}(t))_{t \in [0,T]}$.
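The two norms introduced in the notation are easy to evaluate numerically. The sketch below uses a hypothetical $2\times 2$ matrix $A$ (not taken from any model in the article) and checks the classical logarithmic-norm bound $\norm{e^{A\Delta}}\leq e^{\mu(A)\Delta}$, which explains why a strictly negative logarithmic norm yields the contractivity condition $\norm{e^{A\Delta}}<1$ appearing in the ergodicity discussion of the introduction.

```python
import numpy as np

def mat_exp(M, terms=40):
    # Truncated power series for the matrix exponential; adequate here
    # because the norm of M is small, so the series converges fast.
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def spectral_norm(M):
    # ||M|| = sqrt(lambda_max(M^T M)), the norm induced by the Euclidean norm.
    return np.sqrt(np.max(np.linalg.eigvalsh(M.T @ M)))

def log_norm(M):
    # mu(M) = lambda_max((M + M^T)/2), the logarithmic norm; it may be
    # negative, whereas the induced norm is always nonnegative.
    return np.max(np.linalg.eigvalsh((M + M.T) / 2.0))

A = np.array([[-2.0, 1.0],     # hypothetical example matrix
              [ 0.0, -2.0]])
Delta = 0.1
mu = log_norm(A)                             # equals -1.5 < 0 for this A
contraction = spectral_norm(mat_exp(A * Delta))
# The bound ||e^{A Delta}|| <= e^{mu(A) Delta} then gives contraction < 1.
```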
We consider the $d$-dimensional autonomous SDE of additive noise type \eqref{eq:semi_linear_SDE} \begin{equation*} dX(t)=F(X(t))dt + \Sigma dW(t):=\bigl( AX(t) + N(X(t)) \bigr)dt + \Sigma dW(t), \quad X(0)=X_0, \quad t \in [0,T], \end{equation*} where $T>0$, $A\in\mathbb{R}^{d\times d}$, $\Sigma \in \mathbb{R}^{d \times m}$, $F: \mathbb{R}^d \to \mathbb{R}^d$ and $N: \mathbb{R}^d \to \mathbb{R}^d$ are locally Lipschitz continuous. The initial value $X_0$ is an $\mathcal{F}(0)$-measurable $\mathbb{R}^d$-valued random variable which is independent of $(W(t))_{t \in [0,T]}$ and such that $\mathbb{E}\left[ \norm{X_0}^2 \right]<\infty$. We suppose that SDE \eqref{eq:semi_linear_SDE} has a unique strong solution, which is regular in the sense of \cite{Khasminskii2012}, i.e., it is defined on the entire interval $[0,T]$ such that sample paths do not blow up to infinity in finite time. Formally, we require the existence of a stochastic process $(X(t))_{t \in [0,T]}$ which is adapted to $(\mathcal{F}(t))_{t \in [0,T]}$ and has continuous paths satisfying \begin{equation*} X(t)=X_0+\int\limits_{0}^{t} F(X(s)) \ ds + \int\limits_{0}^{t} \Sigma \ dW(s), \end{equation*} for all $t \in [0,T]$ $\mathbb{P}$-almost surely. Conditions required to ensure the existence of such a process $(X(t))_{t \in [0,T]}$ are, e.g., discussed in \cite{Alyushina1988,Khasminskii2012,Krylov1991,Mao2011}. Here, we follow the setting in \cite{Hutzenthaler2012_2,Kelly2017,Tretyakov2012} and suppose that the drift satisfies a global one-sided Lipschitz condition and is allowed to grow polynomially at infinity. It suffices to place these conditions on $N$: \begin{assumption}\label{assum:1} \begin{enumerate}[label=\text{(A\arabic*)}] \item \label{(A1)} The function $N$ is globally one-sided Lipschitz continuous, i.e., there exists a constant $c_1>0$ such that \begin{equation*} (x-y,N(x)-N(y))\leq c_1\norm{x-y}^2, \quad \forall \ x,y\in \mathbb{R}^d. 
\end{equation*} \item \label{(A2)} The function $N$ grows at most polynomially, i.e., there exist constants $c_2>0$ and $\chi\geq 1$ such that \begin{equation*} \norm{N(x)-N(y)}^2\leq c_2(1+\norm{x}^{2\chi-2}+\norm{y}^{2\chi-2})\norm{x-y}^2, \quad \forall \ x,y\in\mathbb{R}^d. \end{equation*} \end{enumerate} \end{assumption}\noindent Assumption \ref{assum:1} ensures the finiteness of the moments of the solution of \eqref{eq:semi_linear_SDE} \cite{Higham2002,Kelly2017,Khasminskii2012,Tretyakov2012}. In particular, there exists a constant $K(T)>0$ such that \begin{equation*} \max\limits_{0\leq t\leq T} \mathbb{E}\left[ \norm{X(t)}^2 \right] \leq K(T)\left(1+\mathbb{E}\left[ \norm{X_0}^2 \right]\right). \end{equation*} Moreover, the process $(X(t))_{t \in [0,T]}$ is a Markov process. Denoting by $\mathcal{B}(\mathbb{R}^d)$ the Borel sigma-algebra on $\mathbb{R}^d$, its transition probability is defined as \begin{equation}\label{eq:trans_prob} P_{t}(\mathcal{A},x):=\mathbb{P}\left( X(t) \in \mathcal{A} | X(0)=x \right), \end{equation} where $\mathcal{A} \in \mathcal{B}(\mathbb{R}^d)$. This corresponds to the probability that the process reaches a Borel set $\mathcal{A} \subset \mathbb{R}^d$ at time $t>0$, provided that it started in $x \in \mathbb{R}^d$ at time $0$. \subsection{Noise structure: ellipticity and hypoellipticity} Depending on the noise structure, two classes of models are obtained. The first class is called \textit{elliptic} and corresponds to SDEs with a non-degenerate diffusion matrix, i.e., $\Sigma\Sigma^\top$ is of full rank such that $\textrm{det}(\Sigma\Sigma^\top) \neq 0$. In particular, we consider $d=m$ and a diagonal matrix $\Sigma=\textrm{diag}[\sigma_1,\ldots,\sigma_{d}]$ with $\sigma_j>0$ for $j=1,\ldots,d$. The second class corresponds to SDEs with degenerate diffusion matrix, as it naturally occurs in many application models.
Following the notation in \cite{Ditlevsen2019}, we consider $m=d-1$ and $\Sigma$ given by \begin{equation}\label{eq:hypo_SDE} \Sigma:= \begin{pmatrix} 0_{d-1}^{\top} \\ \Gamma \end{pmatrix}, \end{equation} where $\Gamma=\textrm{diag}[\sigma_1,\ldots,\sigma_{d-1}] \in \mathbb{R}^{(d-1)\times (d-1)}$ is a diagonal matrix with $\sigma_j>0$ for $j=1,\ldots,d-1$. The first component of the solution $(X(t))_{t \in [0,T]}$ is called \textit{smooth}, since it is not directly affected by the noise. The remaining $d-1$ components are called \textit{rough}, because the noise acts directly on them. In this scenario, SDE \eqref{eq:semi_linear_SDE} is often \textit{hypoelliptic}. This means that the transition probability~\eqref{eq:trans_prob} admits a smooth density, even though $\Sigma\Sigma^\top$ is not of full rank. This is the case when the SDE satisfies the weak H\"ormander condition, based on the concept of Lie-brackets \cite{Nualart1995}. A necessary and sufficient condition for the process $(X(t))_{t \in [0,T]}$ to be hypoelliptic is that at least one of its rough coordinates appears in the first component $F_1(X(t))$ of the drift \cite{Ditlevsen2019}. In particular, the following condition is required \begin{equation}\label{eq:cond_hypo} \forall x \in \mathbb{R}^d, \ \left(\partial_r F_1(x), \sigma^j\right) \neq 0 \quad \text{for at least one} \ j=1,\ldots,d-1, \end{equation} where $\sigma^j$ denotes the $j$-th column vector of $\Gamma$ and $\partial_r F_1(x):=(\partial_{x_2} F_1(x),\ldots, \partial_{x_{d}} F_1(x) )^\top$ is the vector of partial derivatives of the first entry of the drift with respect to the rough components. This setting can be extended to several smooth coordinates by requiring that at least one of the rough coordinates enters the respective components of the drift.
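To make condition \eqref{eq:cond_hypo} concrete, consider a hypothetical two-dimensional system ($d=2$, $m=1$) with an FHN-type cubic drift component in the smooth coordinate; the specific form of $F_1$ below is assumed purely for illustration and is not the FHN model of the later sections:

```latex
% d = 2, m = 1: first coordinate smooth, second coordinate rough.
F_1(x) = x_1 - x_1^3 - x_2, \qquad \Gamma = (\sigma_1), \quad \sigma_1 > 0.
% The derivative of F_1 with respect to the rough coordinate is
\partial_r F_1(x) = \partial_{x_2} F_1(x) = -1 \quad \forall \ x \in \mathbb{R}^2,
% so that
\left(\partial_r F_1(x), \sigma^1\right) = -\sigma_1 \neq 0,
% and condition (eq:cond_hypo) holds: the noise acting on the rough
% coordinate propagates into the smooth one through the drift.
```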
\subsection{Lyapunov structure: geometric ergodicity} Here, particular interest lies in SDEs of type \eqref{eq:semi_linear_SDE} where the drift $F(X(t))$ satisfies the following dissipativity condition \begin{equation}\label{eq:dissipative} (F(x),x)\leq \alpha -\delta \norm{x}^2, \quad \forall \ x\in \mathbb{R}^d, \end{equation} where $\alpha,\delta>0$. Condition \eqref{eq:dissipative} ensures that the function $L:\mathbb{R}^d\to [1,\infty)$ defined by $L(x):=1+\norm{x}^2$ is a Lyapunov function for \eqref{eq:semi_linear_SDE}, see \cite{Mattingly2002}. That is, $L(x)\to\infty$ as $\norm{x}\to \infty$, and there exist constants $\rho,\eta>0$ such that \begin{equation}\label{eq:Lyapunov} \mathcal{L}\{ L(x) \} \leq -\rho L(x) +\eta, \end{equation} where $\mathcal{L}$ is the generator of the SDE given by \begin{equation*} \mathcal{L}\{g(x)\}=\sum\limits_{l=1}^{d} F_l(x) \frac{\partial g}{\partial x_l}(x)+\frac{1}{2}\sum\limits_{l,j=1}^{d}\left[ \Sigma\Sigma^\top \right]_{lj} \frac{\partial^2 g}{\partial x_l\partial x_j}(x), \end{equation*} for sufficiently smooth functions $g:\mathbb{R}^d\to\mathbb{R}$, with $[ \Sigma\Sigma^\top]_{lj}$ denoting the entry in the $l$-th row and $j$-th column of the matrix $\Sigma\Sigma^\top$. The existence of a Lyapunov function satisfying \eqref{eq:Lyapunov} is the key to establishing the \textit{geometric ergodicity} of the solution of \eqref{eq:semi_linear_SDE}. This property means that the distribution of the Markov process $(X(t))_{t\in [0,T]}$ converges exponentially fast to a unique invariant distribution~$\pi$, satisfying \begin{equation*} \pi(\mathcal{A})=\int\limits_{\mathbb{R}^d}P_t(\mathcal{A},x)\pi(dx), \quad \forall \ \mathcal{A} \in \mathcal{B}(\mathbb{R}^d), \ t \in [0,T]. \end{equation*} In particular, if SDE \eqref{eq:semi_linear_SDE} is elliptic, Condition~\eqref{eq:dissipative} suffices to establish the geometric ergodicity of $(X(t))_{t\in [0,T]}$.
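The dissipativity condition \eqref{eq:dissipative} can be checked empirically on random points for a candidate drift. A minimal sketch with the hypothetical componentwise-cubic drift $F(x)=-x-x^3$, for which $(F(x),x)=-\norm{x}^2-\sum_i x_i^4$, so that \eqref{eq:dissipative} holds with $\alpha=0$ and $\delta=1$:

```python
import numpy as np

def check_dissipativity(F, alpha, delta, dim, trials=2000, scale=5.0, seed=1):
    """Empirically test (F(x), x) <= alpha - delta * ||x||^2 at random points."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.normal(scale=scale, size=dim)
        if np.dot(F(x), x) > alpha - delta * np.dot(x, x) + 1e-9:
            return False
    return True

# Hypothetical drift F(x) = -x - x^3 (componentwise cubic)
F = lambda x: -x - x ** 3
print(check_dissipativity(F, alpha=0.0, delta=1.0, dim=2))
```

A failing check (e.g., for the expanding drift $F(x)=x$) indicates that no such $\alpha,\delta$ pair exists and the Lyapunov argument does not apply.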
If SDE \eqref{eq:semi_linear_SDE} is not elliptic, the process is geometrically ergodic if, in addition to fulfilling Condition~\eqref{eq:dissipative} (and thus \eqref{eq:Lyapunov}), it is hypoelliptic and satisfies the irreducibility condition $P_t(\mathcal{A},x)>0$ for all open sets $\mathcal{A}\in \mathcal{B}(\mathbb{R}^d)$ and $x\in\mathbb{R}^d$. The reader is referred to \cite{Mattingly2002} and the references therein for further details. \section{Numerical methods} \label{sec:3_FHN} Consider a discretised time interval $[0,T]$ with equidistant time steps $\Delta=t_{i}-t_{i-1} \in (0,\Delta_0]$, with $\Delta_0>0$, $i=1,\dots,n$, where $t_0=0$ and $t_n=T$. Without loss of generality, we consider $\Delta_0 \in (0,1)$. We denote by $(\widetilde{X}(t_i))_{i=0,\ldots,n}$ a numerical solution of SDE \eqref{eq:semi_linear_SDE} approximating the process $(X(t))_{t \in [0,T]}$ at $t_i$, where $\widetilde{X}(t_0):=X_0$. We first present splitting methods for Equation~\eqref{eq:semi_linear_SDE} and then recall different Euler-Maruyama type methods proposed for SDEs with superlinearly growing drift coefficients. \subsection{Splitting approach} \label{sec:3:2_FHN} We start by providing a brief account of the main ideas behind the numerical splitting approach \cite{Blanes2009,Mclachlan2002}. For the sake of simplicity, consider the following ODE \begin{equation}\label{dot_f_FHN} dX(t)=F(X(t))dt, \quad X(0)=X_0 \in \mathbb{R}^d, \quad t \in [0,T]. \end{equation} The goal is to derive a numerical method to approximate its solution. Assume that the function $F(X(t))$ can be expressed as \begin{equation}\label{Fk} F(X(t))=\sum_{k=1}^{N}F^{[k]}(X(t)), \quad N \in \mathbb{N}, \end{equation} where $F^{[k]}:\mathbb{R}^d \to \mathbb{R}^d$. Usually, there are several ways to decompose this function.
The goal is to do it in a way such that the solutions of the resulting subequations \begin{equation}\label{Sub_FHN} dX(t)=F^{[k]}(X(t))dt, \quad k=1,\ldots,N, \end{equation} exist on $[0,T]$ and can be derived explicitly. Having derived the explicit solutions of the subequations, the next task is to compose them properly. Two common ways are the so-called Lie-Trotter~\cite{Trotter1959} and Strang \cite{Strang1968} approaches. Let $\varphi_t^{[k]}(X_0)$ denote the exact solutions (flows) of the subequations in \eqref{Sub_FHN} at time $t$, starting from $X_0$. Then the Lie-Trotter composition of flows \begin{equation*} \widetilde{X}^{\textrm{LT}}(t_i)=\left( \varphi_\Delta^{[1]} \circ ... \circ \varphi_\Delta^{[N]} \right)(\widetilde{X}^{\textrm{LT}}(t_{i-1})) \end{equation*} and the Strang approach \begin{equation*} \widetilde{X}^{\textrm{S}}(t_i)=\left( \varphi_{\Delta/2}^{[1]} \circ ... \circ \varphi_{\Delta/2}^{[N-1]} \circ \varphi_{\Delta}^{[N]} \circ \varphi_{\Delta/2}^{[N-1]} \circ ... \circ \varphi_{\Delta/2}^{[1]} \right)(\widetilde{X}^{\textrm{S}}(t_{i-1})) \end{equation*} define splitting solutions of ODE \eqref{dot_f_FHN}. Thus, splitting methods consist of three equally important steps: \begin{itemize} \item[(i)] Choosing the functions $F^{[k]}(X(t))$ in \eqref{Fk}. \item[(ii)] Deriving the solutions of the subequations in \eqref{Sub_FHN}. \item[(iii)] Composing the derived solutions to construct a numerical solution of \eqref{dot_f_FHN}. \end{itemize} This idea can be directly extended to SDEs, where the subequations in \eqref{Sub_FHN} may consist of deterministic and/or stochastic dynamical systems. \subsection{Splitting methods for semi-linear SDEs} In this section, we propose a splitting strategy for SDE \eqref{eq:semi_linear_SDE}, where \begin{equation}\label{eq:semi} F^{[1]}(X(t))=AX(t) \quad \text{and} \quad F^{[2]}(X(t))=N(X(t)).
\end{equation} \paragraph{Step (i): Choice of the subequations} To exploit the tractable underlying linear stochastic dynamics, we propose to split equations of this type into the following two subequations \begin{eqnarray} d X^{[1]}(t)&=&AX^{[1]}(t) dt + \Sigma dW(t), \quad X^{[1]}(0)=X^{[1]}_0, \quad t \in [0,T], \label{SDE_FHN} \\ dX^{[2]}(t)&=&N(X^{[2]}(t))dt, \quad X^{[2]}(0)=X^{[2]}_0, \quad t \in [0,T]. \label{ODE_FHN} \end{eqnarray} This splitting strategy is an extension of the method presented in \cite{Ableidinger2017}, where the authors consider a globally Lipschitz Hamiltonian-type equation with uniformly bounded non-linear terms. Our method covers a more general class of coefficients $N(X(t))$, including functions which are allowed to grow polynomially at infinity according to Assumption \ref{assum:1}. \paragraph{Step (ii): Explicit solution of the subequations} In the following, we discuss the subequations \eqref{SDE_FHN} and \eqref{ODE_FHN}. The first subequation is a linear SDE. It can be solved explicitly, even when the dimension $d$ is large, and regardless of whether the equation has an elliptic or hypoelliptic noise structure \cite{Arnold1974,Mao2011}. In particular, the explicit solution of \eqref{SDE_FHN} is given by \begin{equation}\label{eq:SDE} X^{[1]}(t)=e^{At}X_0^{[1]} + \int\limits_{0}^{t} e^{A(t-s)} \Sigma \ dW(s). \end{equation} The It\^{o} integral in \eqref{eq:SDE} is normally distributed with mean $0_d$. Moreover, using It\^{o}'s isometry and the fact that the components of the Wiener process are independent, its $d \times d$-dimensional covariance matrix is given by \begin{equation}\label{eq:Cov} C(t) =\int\limits_{0}^{t} e^{A(t-s)}\Sigma\Sigma^{\top}(e^{A(t-s)})^{\top} \ ds. \end{equation} Hence, paths of \eqref{SDE_FHN} can be simulated exactly at the discrete time points $t_i$.
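In practice, $e^{A\Delta}$ and the integral \eqref{eq:Cov} can both be read off a single block matrix exponential (Van Loan's method), avoiding numerical quadrature. A sketch, assuming SciPy is available; the $2\times 2$ example matrices at the end are hypothetical:

```python
import numpy as np
from scipy.linalg import expm

def linear_step_matrices(A, Sigma, dt):
    """Compute e^{A dt} and C(dt) = int_0^dt e^{A s} Sigma Sigma^T e^{A^T s} ds
    from one block-matrix exponential (Van Loan's method)."""
    d = A.shape[0]
    Q = Sigma @ Sigma.T
    H = np.block([[-A, Q], [np.zeros((d, d)), A.T]]) * dt
    E = expm(H)
    eAdt = E[d:, d:].T            # lower-right block is e^{A^T dt}
    C = eAdt @ E[:d, d:]          # e^{A dt} times the upper-right block
    return eAdt, 0.5 * (C + C.T)  # symmetrise against round-off

# Hypothetical 2x2 example with hypoelliptic noise (only the second row is driven)
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
Sigma = np.array([[0.0], [0.5]])
eAdt, C = linear_step_matrices(A, Sigma, dt=0.1)
```

In the scalar case $A=(a)$, $\Sigma=(\sigma)$, this reproduces the closed form $C(\Delta)=\sigma^2(e^{2a\Delta}-1)/(2a)$.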
In particular, \begin{equation} \varphi_{\Delta}^{[1]}(X^{[1]}(t_{i-1})):=X^{[1]}(t_{i})=e^{A\Delta} X^{[1]}(t_{i-1}) + \xi_{i-1}, \quad i=1,\dots,n, \label{Exact_SDE_FHN} \end{equation} where the $\xi_{i-1}$ are independent and identically distributed $d$-dimensional Gaussian vectors with mean $0_d$ and covariance matrix $C(\Delta)$ given by \eqref{eq:Cov}. Regarding the second subequation, the following assumption is made. \begin{assumption}\label{assum:2} \begin{enumerate}[label=\text{(A3)}] \item \label{(A3)} The function $N: \mathbb{R}^d \to \mathbb{R}^d$ in \eqref{eq:semi} is such that the global solution of ODE \eqref{ODE_FHN} can be derived explicitly. \end{enumerate} \end{assumption}\noindent Under Assumption \ref{(A3)}, the solution of the second subequation can also be obtained exactly at the discrete time points $t_i$. In particular, \begin{equation} \varphi_{\Delta}^{[2]}(X^{[2]}(t_{i-1})):=X^{[2]}(t_i)=f(X^{[2]}(t_{i-1});\Delta), \quad i=1,\dots,n, \label{Exact_ODE_FHN} \end{equation} where $f:\mathbb{R}^d \to \mathbb{R}^d$ denotes the explicit solution of \eqref{ODE_FHN}. Assumption \ref{(A3)} may be relaxed, e.g., by applying a suitable numerical method to approximate the solution of \eqref{ODE_FHN} \cite{Stern2020,Mclachlan2002}. We refer to \cite{Hairer2000,Humphries2001} for an exhaustive discussion of numerical methods for locally Lipschitz ODEs. \paragraph{Step (iii): Composition of the explicit solutions} To finally obtain numerical solutions of the original SDE \eqref{eq:semi_linear_SDE}, the explicit solutions \eqref{Exact_SDE_FHN} and \eqref{Exact_ODE_FHN} of the subequations \eqref{SDE_FHN} and \eqref{ODE_FHN} are composed in every iteration step.
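For concreteness, such compositions can be sketched for a hypothetical scalar model $dX=(aX-X^3)\,dt+\sigma\,dW$, where the nonlinear subequation $dX=-X^3\,dt$ has the exact flow $f(x;\Delta)=x/\sqrt{1+2\Delta x^2}$; all parameter values below are illustrative:

```python
import numpy as np

# Hypothetical scalar model dX = (a X - X^3) dt + sigma dW: the linear subequation
# dX = a X dt + sigma dW is sampled exactly, and f(x; dt) = x / sqrt(1 + 2 dt x^2)
# is the exact flow of the nonlinear subequation dX = -X^3 dt.
a, sigma = -1.0, 0.5
f = lambda x, dt: x / np.sqrt(1.0 + 2.0 * dt * x ** 2)

def noise(dt, rng, shape):
    # exact Gaussian increment of the linear subequation, variance C(dt)
    c_dt = sigma ** 2 * (np.exp(2 * a * dt) - 1.0) / (2 * a)
    return np.sqrt(c_dt) * rng.normal(size=shape)

def lie_trotter_step(x, dt, rng):
    return np.exp(a * dt) * f(x, dt) + noise(dt, rng, np.shape(x))

def strang_step(x, dt, rng):
    return f(np.exp(a * dt) * f(x, dt / 2) + noise(dt, rng, np.shape(x)), dt / 2)

rng = np.random.default_rng(42)
x = np.full(10_000, 1.0)          # 10^4 paths started at X_0 = 1
for _ in range(1000):             # integrate up to T = 10 with dt = 0.01
    x = strang_step(x, 0.01, rng)
print(np.mean(x ** 2))            # second moment stays bounded
```

Note that the Gaussian increment is drawn with the exact variance $C(\Delta)$ of the linear subequation, not with the Euler variance $\sigma^2\Delta$.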
In particular, we investigate the following methods \begin{eqnarray}\label{SP1_FHN} \widetilde{X}^{\textrm{LT}}(t_i)&=&\left( \varphi_\Delta^{[1]} \circ \varphi_\Delta^{[2]} \right) \bigl(\widetilde{X}^{\textrm{LT}}(t_{i-1}) \bigr)=e^{A\Delta}f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta)+\xi_{i-1}, \\ \label{SP3_FHN} \widetilde{X}^{\textrm{S}}(t_i)&=&\left( \varphi_{\Delta/2}^{[2]} \circ \varphi_{\Delta}^{[1]} \circ \varphi_{\Delta/2}^{[2]} \right)\bigl(\widetilde{X}^{\textrm{S}}(t_{i-1}) \bigr)=f\left(e^{A\Delta}f\bigl(\widetilde{X}^{\textrm{S}}(t_{i-1});\Delta/2\bigr) + \xi_{i-1};\Delta/2 \right), \end{eqnarray} which are based on the Lie-Trotter (LT) and Strang (S) approach, respectively. We emphasise that the constructed splitting schemes \eqref{SP1_FHN} and \eqref{SP3_FHN} are easy to implement. In particular, the matrix exponential $e^{A\Delta}$ and the covariance matrix $C(\Delta)$ have to be computed only once and the normal random variables $\xi_{i-1}$, $i=1,\ldots,n$, can be obtained using a Cholesky decomposition of the covariance matrix $C(\Delta)$. \subsection{Euler-Maruyama type methods} \label{sec:3:1_FHN} Let $\psi_{i-1} \sim \mathcal{N}(0_m,\mathbb{I}_m)$, $i=1,\ldots,n$, be independent and identically distributed $m$-dimensional standard Gaussian vectors. The Euler-Maruyama method \cite{Kloeden1992,Milstein2004} used to approximate solutions of SDEs with globally Lipschitz continuous coefficients yields paths of \eqref{eq:semi_linear_SDE} through the iteration \begin{equation}\label{EM_FHN} \widetilde{X}^{\textrm{EM}}(t_i)=\widetilde{X}^{\textrm{EM}}(t_{i-1}) + F(\widetilde{X}^{\textrm{EM}}(t_{i-1}))\Delta + \Sigma \sqrt{\Delta} \psi_{i-1}. \end{equation} In \cite{Hutzenthaler2011}, it has been shown that the Euler-Maruyama method is not mean-square convergent in the sense of \eqref{eq:ms_convergence_order} if at least one of the coefficients of the SDE grows superlinearly, as this results in unbounded moments of the iterates. 
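The mechanism behind this divergence is already visible in a single deterministic Euler step for the hypothetical cubic drift $F(x)=-x^3$: whenever $|x|>\sqrt{2/\Delta}$, one has $|x-x^3\Delta|>|x|$, so the step overshoots zero and large excursions are amplified rather than damped.

```python
# One-step Euler-Maruyama map for the (hypothetical) scalar drift F(x) = -x^3.
em_step = lambda x, dt, dw: x + (-x ** 3) * dt + dw

dt = 0.01                          # overshoot threshold sqrt(2/dt) ~ 14.1
print(abs(em_step(1.0, dt, 0.0)))  # 0.99: contraction below the threshold
x = 20.0                           # above the threshold
for _ in range(5):
    x = em_step(x, dt, 0.0)
print(abs(x))                      # grows extremely fast within a few steps
```

Since Gaussian noise eventually produces excursions past the threshold with positive probability, the moments of the iterates blow up, which is the phenomenon established in \cite{Hutzenthaler2011}.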
Several explicit variants of this method have been proposed, which aim to control this unbounded growth. The first variant, designed for polynomially growing and one-sided Lipschitz drift and globally Lipschitz diffusion coefficients, has been introduced in \cite{Hutzenthaler2012_2}. It is based on a taming perturbation which avoids large values caused by the superlinearly growing drift. The method is defined through the iteration \begin{equation}\label{eq:EM_Tamed} \widetilde{X}^{\textrm{TEM}}(t_i)=\widetilde{X}^{\textrm{TEM}}(t_{i-1})+\frac{F(\widetilde{X}^{\textrm{TEM}}(t_{i-1}))\Delta }{1+ \norm{F(\widetilde{X}^{\textrm{TEM}}(t_{i-1}))}\Delta} + \Sigma \sqrt{\Delta} \psi_{i-1}, \end{equation} and can be rewritten to coincide with the Euler-Maruyama method \eqref{EM_FHN} up to terms of second order. This fact is used to establish the mean-square convergence of order $1/2$ (order $1$) for SDEs with multiplicative noise (additive noise), i.e., the same rate achieved by the Euler-Maruyama method in the globally Lipschitz case \cite{Kloeden1992}. Another variant, aiming to tame both the drift and the diffusion term, has been suggested in~\cite{Tretyakov2012}. The method is defined via \begin{equation}\label{DTEM} \widetilde{X}^{\textrm{DTEM}}(t_i)=\widetilde{X}^{\textrm{DTEM}}(t_{i-1})+\frac{F(\widetilde{X}^{\textrm{DTEM}}(t_{i-1}))\Delta + \Sigma \sqrt{\Delta} \psi_{i-1} }{1+ \norm{F(\widetilde{X}^{\textrm{DTEM}}(t_{i-1}))}\Delta + \norm{\Sigma \sqrt{\Delta} \psi_{i-1}}}, \end{equation} and is designed for the broader class of equations where also the diffusion coefficient is allowed to grow polynomially at infinity and satisfies a one-sided Lipschitz condition. It has been shown to converge with mean-square order $1/2$. The strong convergence (without order) of a related class of variants, based on space truncation techniques, has been discussed in \cite{Hutzenthaler2012}. 
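Written as one-step maps, the two tamed schemes \eqref{eq:EM_Tamed} and \eqref{DTEM} can be sketched as follows; the drift, the noise increment $\Sigma\sqrt{\Delta}\psi$, and the parameter values are supplied by the caller and purely illustrative:

```python
import numpy as np

def tem_step(x, F, Sigma, dt, dW):
    """Tamed EM: the tamed drift increment has norm strictly below 1."""
    Fx = F(x)
    return x + Fx * dt / (1.0 + np.linalg.norm(Fx) * dt) + Sigma @ dW

def dtem_step(x, F, Sigma, dt, dW):
    """Drift-and-diffusion tamed EM: tames the whole increment."""
    Fx = F(x)
    noise = Sigma @ dW
    denom = 1.0 + np.linalg.norm(Fx) * dt + np.linalg.norm(noise)
    return x + (Fx * dt + noise) / denom

# Even for a huge state, the tamed drift increment cannot exceed norm 1:
F = lambda x: -x ** 3
x = np.array([1e6])
Sigma = np.array([[0.5]])
dW = np.zeros(1)
print(np.linalg.norm(tem_step(x, F, Sigma, dt=0.01, dW=dW) - x))
```

This bounded-increment property is exactly what prevents the moment explosion seen for the plain Euler-Maruyama iteration.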
In particular, we recall the two methods \begin{equation}\label{TrEM} \widetilde{X}^{\textrm{TrEM}}(t_i)=\widetilde{X}^{\textrm{TrEM}}(t_{i-1})+\frac{F(\widetilde{X}^{\textrm{TrEM}}(t_{i-1}))\Delta }{\textrm{max}\left\{ 1,\norm{F(\widetilde{X}^{\textrm{TrEM}}(t_{i-1}))}\Delta \right\}} + \Sigma \sqrt{\Delta} \psi_{i-1}, \end{equation} \begin{equation}\label{DTrEM} \widetilde{X}^{\textrm{DTrEM}}(t_i)=\widetilde{X}^{\textrm{DTrEM}}(t_{i-1})+\frac{F(\widetilde{X}^{\textrm{DTrEM}}(t_{i-1}))\Delta + \Sigma \sqrt{\Delta} \psi_{i-1}}{\textrm{max}\left\{ 1,\Delta \norm{F(\widetilde{X}^{\textrm{DTrEM}}(t_{i-1}))\Delta+ \Sigma \sqrt{\Delta} \psi_{i-1}} \right\}}, \end{equation} constructed to truncate the drift, and the drift and diffusion terms, respectively. In the following, we refer to the schemes \eqref{eq:EM_Tamed}, \eqref{DTEM}, \eqref{TrEM} and \eqref{DTrEM} as the tamed (TEM), diffusion tamed (DTEM), truncated (TrEM) and diffusion truncated (DTrEM) Euler-Maruyama methods, respectively. \section{Mean-square convergence} \label{sec:4_FHN} In this section, mean-square convergence of the constructed splitting methods is proved. It has been observed in the globally Lipschitz case that splitting methods have the same convergence order as the Euler-Maruyama method \eqref{EM_FHN}, i.e., order $1$ in the case of additive noise, see, e.g., \cite{Ableidinger2017,Milstein2003}. We extend this result to the locally Lipschitz case, proving mean-square convergence of order $1$ for the proposed splitting methods. \subsection{Required background} To establish this result, we rely on Theorem 2.1 of \cite{Tretyakov2012}, which provides an extension of Milstein's fundamental theorem on the mean-square order of convergence \cite{Milstein1988} (see also Theorem 1.1 in \cite{Milstein2004}) for globally Lipschitz coefficients to the considered setting specified in Assumption \ref{assum:1}.
To facilitate the illustration of our results, we recall this statement in Theorem \ref{thm:Zang} below, after defining the required ingredients of mean-square consistency and boundedness. Let $X_{t_{i-1},x}(t_i)$ denote the true solution at time $t_i$ starting from $x$ at time $t_{i-1}$, i.e., $X(t_{i-1})=x$, and $\widetilde{X}_{t_{i-1},x}(t_i)$ the one-step approximation used to construct a numerical solution $\widetilde{X}(t_i)$. In particular, the one-step approximations of the numerical methods discussed in the previous section are defined by \eqref{SP1_FHN}-\eqref{DTrEM}, where $\widetilde{X}(t_{i-1})$ is replaced by $x$. \begin{definition}\label{eq:ms_consistency} The one-step approximation $\widetilde{X}_{t_{i-1},x}(t_i)$ of a numerical solution $\widetilde{X}(t_i)$ of SDE~\eqref{eq:semi_linear_SDE} is mean-square consistent of order $p>0$, if there exists $\Delta_0>0$ such that for arbitrary $t_i$, $i=~1,\ldots,n$, $x \in \mathbb{R}^d$, and for all $\Delta\in(0,\Delta_0]$ it has the following orders of accuracy \begin{eqnarray*}\hspace{-0.5cm} \nonumber\norm{\mathbb{E}\left[X_{t_{i-1},x}(t_i)-\widetilde X_{t_{i-1},x}(t_i)\right]} &= \mathcal{O}(\Delta^{p+1}), \\ \left(\mathbb{E}\left[\norm{X_{t_{i-1},x}(t_i)-\widetilde X_{t_{i-1},x}(t_i)}^2\right]\right)^{1/2}&=\mathcal{O}(\Delta^{p+\frac{1}{2}}). \end{eqnarray*} \end{definition}\noindent Besides mean-square consistency, the boundedness of the second moment of the numerical solution has to be proved. In the globally Lipschitz case, this is guaranteed by the linear growth bounds of the coefficients. 
\begin{definition}\label{eq:bm_num} A numerical solution $\widetilde{X}(t_i)$ of SDE \eqref{eq:semi_linear_SDE} is mean-square bounded, if there exist $\Delta_0>0$ and a constant $\widetilde{K}(T,\Delta_0)>0$ such that for all $\Delta\in(0,\Delta_0]$ it holds that \begin{equation*} \max\limits_{0\leq t_i\leq T} \mathbb{E}\left[ \norm{\widetilde X(t_i)}^{2}\right] \leq \widetilde{K}(T,\Delta_0)\left(1+ \mathbb{E}\left[\norm{X_0}^{2}\right]\right). \end{equation*} \end{definition}\noindent Based on the above defined ingredients, the following theorem guarantees mean-square convergence. \begin{theorem}[Theorem 2.1 in Tretyakov and Zhang (2013)]\label{thm:Zang} Let $\widetilde{X}(t_i)$ denote a numerical solution of SDE \eqref{eq:semi_linear_SDE} at time $t_i$ starting at $X_0$, constructed using the one-step approximation $\widetilde{X}_{t_{i-1},x}(t_i)$. Further, let Assumptions \ref{(A1)} and \ref{(A2)} be satisfied. If \begin{itemize} \item[(i)] the one-step approximation $\widetilde{X}_{t_{i-1},x}(t_i)$ is mean-square consistent of order $p>0$ in the sense of Definition \ref{eq:ms_consistency}, and \item[(ii)] the numerical method $\widetilde{X}(t_i)$ is mean-square bounded in the sense of Definition \ref{eq:bm_num}, \end{itemize} then the numerical method $\widetilde{X}(t_i)$ is mean-square convergent of order $p$ in the sense of \eqref{eq:ms_convergence_order}. \end{theorem} \subsection{Mean-square convergence of the splitting methods} In the following, we prove the required Conditions $(i)$ and $(ii)$ of Theorem \ref{thm:Zang} for the constructed splitting methods. Condition $(i)$ can be proved in a similar fashion as Lemma 2.1 in \cite{Milstein2003}. Its proof relies on the fact that the Euler-Maruyama method \eqref{EM_FHN} is mean-square consistent of order $1$, which is also true in the locally Lipschitz case.
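Such orders can be probed empirically by coupling a coarse Euler-Maruyama solution to a fine-grid one driven by the same Brownian increments and regressing the log-error on the log-step size. A crude sketch for a hypothetical additive-noise linear test equation, for which strong order $1$ is expected (all parameter values illustrative):

```python
import numpy as np

def em_ms_order(a=-1.0, sigma=0.5, T=1.0, x0=1.0, n_paths=500, seed=0):
    """Estimate the mean-square order of Euler-Maruyama on dX = a X dt + sigma dW,
    using a fine-grid solution with the same Brownian increments as reference."""
    rng = np.random.default_rng(seed)
    n_fine = 2 ** 12
    dt_f = T / n_fine
    dW = np.sqrt(dt_f) * rng.normal(size=(n_paths, n_fine))
    x_ref = np.full(n_paths, x0)
    for k in range(n_fine):                      # fine reference paths
        x_ref = x_ref + a * x_ref * dt_f + sigma * dW[:, k]
    dts, errs = [], []
    for m in (2 ** 4, 2 ** 5, 2 ** 6, 2 ** 7):
        dt, r = T / m, n_fine // m
        x = np.full(n_paths, x0)
        for k in range(m):                       # coarse paths, same noise
            x = x + a * x * dt + sigma * dW[:, k * r:(k + 1) * r].sum(axis=1)
        dts.append(dt)
        errs.append(np.sqrt(np.mean((x - x_ref) ** 2)))
    return np.polyfit(np.log(dts), np.log(errs), 1)[0]

print(em_ms_order())   # slope close to 1 for additive noise
```

The same coupling idea applies to the splitting schemes, with the coarse and fine Gaussian increments generated consistently.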
Since the tamed Euler-Maruyama method \eqref{eq:EM_Tamed} can be written as a sum of the Euler-Maruyama method and a deterministic remainder of second order \cite{Hutzenthaler2012_2}, the same result would be obtained when using this method instead. \begin{lemma}[Mean-square consistency]\label{Lemma2} Let Assumption \ref{(A3)} hold and let $\widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)$ and $\widetilde{X}^{\textrm{S}}_{t_{i-1},x}(t_i)$ be the one-step approximations of the Lie-Trotter and Strang splitting defined through \eqref{SP1_FHN} and \eqref{SP3_FHN}, respectively. Then $\widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)$ and $\widetilde{X}^{\textrm{S}}_{t_{i-1},x}(t_i)$ are mean-square consistent of order $p=1$ in the sense of Definition \ref{eq:ms_consistency}. \end{lemma}\noindent \begin{proof} Let $\norm{\cdot}_{L^2}:=\left( \mathbb{E}\left[ \norm{\cdot}^2 \right] \right)^{1/2}$ denote the $L^2$-norm. Due to Assumption \ref{(A3)}, the splitting methods \eqref{SP1_FHN} and \eqref{SP3_FHN} are well-defined. We start with the Lie-Trotter splitting. The triangle inequality yields {\small{\begin{equation*} \norm{ \mathbb{E}\left[X_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)\right]} \leq \norm{ \mathbb{E}\left[X_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)\right]} +\norm{ \mathbb{E}\left[\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)\right]} \end{equation*}}} and {\small{\begin{equation*} \norm{X_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)}_{L^2} \leq \norm{X_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)}_{L^2} + \norm{\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)}_{L^2}, \end{equation*}}}\noindent where $\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)$ is the one-step approximation of the Euler-Maruyama scheme \eqref{EM_FHN}.
Since SDE~\eqref{eq:semi_linear_SDE} is of additive noise type, it is known that this method has first order of mean-square consistency \cite{Kloeden1992}, i.e., its one-step approximation satisfies \begin{equation*} \norm{ \mathbb{E}[X_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)]}=\mathcal{O}(\Delta^2), \quad \norm{X_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)}_{L^2}=\mathcal{O}(\Delta^{3/2}). \end{equation*} It remains to consider the difference between the splitting and the Euler-Maruyama method. Using a stochastic Taylor expansion, the solution of the linear stochastic subequation \eqref{SDE_FHN} of the splitting framework can be expressed as \begin{equation}\label{SDE_remainder} \varphi_{\Delta}^{[1]}(x)=x+ Ax \Delta + \Sigma \sqrt{\Delta} \psi_{i-1}+r, \end{equation} where $r=(r_1,\ldots,r_d)^{\top}$ with $\mathbb{E}[r_j]=\mathcal{O}(\Delta^2)$, $\mathbb{E}[r_j^2]=\mathcal{O}(\Delta^3)$, $j=1,\ldots,d$, and $\psi_{i-1}$ is a Gaussian vector with mean $0_m$ and covariance matrix $\mathbb{I}_{m}$ \cite{Kloeden1992}. Similarly, the solution of the ODE subequation~\eqref{ODE_FHN} of the splitting framework can be expressed as \begin{equation}\label{ODE_remainder_2} \varphi_{\Delta}^{[2]}(x)=x+ N(x)\Delta+q, \end{equation} where the remainder $q=(q_1,\ldots,q_d)^{\top}$ satisfies $q_j=\mathcal{O}(\Delta^2)$, $j=1,\ldots,d$, through a deterministic Taylor expansion. The one-step approximation of the Lie-Trotter splitting method is then obtained by composing the above expressions \eqref{SDE_remainder} and \eqref{ODE_remainder_2}, yielding \begin{eqnarray*} \widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)&=&(\varphi_{\Delta}^{[1]} \ \circ \ \varphi_{\Delta}^{[2]})(x)=x+N(x)\Delta + q \\&& \hspace{0.5cm} + Ax\Delta + AN(x)\Delta^2+Aq\Delta+\Sigma\sqrt{\Delta}\psi_{i-1}+r.
\end{eqnarray*} From \eqref{EM_FHN}, the one-step approximation of the Euler-Maruyama method is given by \begin{equation}\label{eq:EM_one_step} \widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)=x+Ax\Delta+N(x)\Delta+\Sigma\sqrt{\Delta}\psi_{i-1}. \end{equation} Thus, the difference between the Lie-Trotter splitting and the Euler-Maruyama method becomes \begin{equation*} \widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)=q+AN(x)\Delta^2+Aq\Delta+r. \end{equation*} Since the stochastic remainders $r_j$ have mean $\mathcal{O}(\Delta^2)$ and second moment $\mathcal{O}(\Delta^3)$, it follows that \begin{equation*} \norm{\mathbb{E} \left[ \widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i) \right]} = \mathcal{O}(\Delta^2) \quad \text{and} \quad \norm{ \widetilde{X}^{\textrm{LT}}_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i) }_{L^2} = \mathcal{O}(\Delta^{3/2}), \end{equation*} which proves the result for the Lie-Trotter method. Now, we consider the Strang splitting and define $\omega=(\omega_1,\ldots,\omega_d)^\top$ as \begin{eqnarray*} \omega:=N(x)\Delta/2+q+Ax\Delta+AN(x)\Delta^2/2+Aq \Delta + \Sigma \sqrt{\Delta} \psi_{i-1} + r, \end{eqnarray*} where $\mathbb{E}[\omega_j]=O(\Delta)$ and $\mathbb{E}[\omega_j^2]=O(\Delta)$, since $\mathbb{E}[\Delta\psi_{i-1,j}^2]=\Delta$. Applying again a Taylor expansion yields \begin{equation*} N(x+\omega)=N(x)+\tilde{\omega}, \end{equation*} where $\tilde{\omega}$ is of the same order as $\omega$, i.e., $\mathbb{E}[\tilde{\omega}_j]=O(\Delta)$ and $\mathbb{E}[\tilde{\omega}_j^2]=O(\Delta)$.
Using the expressions \eqref{SDE_remainder} and \eqref{ODE_remainder_2}, the one-step approximation of the Strang splitting method is then given by \begin{eqnarray*} \widetilde{X}^{\textrm{S}}_{t_{i-1},x}(t_i)&=&(\varphi_{\Delta/2}^{[2]} \ \circ \ \varphi_{\Delta}^{[1]} \ \circ \ \varphi_{\Delta/2}^{[2]})(x)=x+\omega+N(x)\Delta/2+\tilde{\omega}\Delta/2+q \\ &=& x+Ax\Delta+N(x)\Delta+\Sigma\sqrt{\Delta}\psi_{i-1}+2q+AN(x)\Delta^2/2 + Aq \Delta + r +\tilde{\omega}\Delta/2. \end{eqnarray*} Thus, the difference between the Strang and the Euler-Maruyama method \eqref{eq:EM_one_step} becomes \begin{eqnarray*} \widetilde{X}^{\textrm{S}}_{t_{i-1},x}(t_i)-\widetilde{X}^{\textrm{EM}}_{t_{i-1},x}(t_i)&=&2q+AN(x)\Delta^2/2+Aq\Delta +r+\tilde{\omega}\Delta/2. \end{eqnarray*} Since $\mathbb{E}[r_j]=O(\Delta^2)$, $\Delta\mathbb{E}[\tilde{\omega}_j]=O(\Delta^2)$, $\mathbb{E}[r_j^2]=O(\Delta^3)$ and $\Delta^2\mathbb{E}[\tilde{\omega}_j^2]=O(\Delta^3)$, the result is proved also for the Strang splitting. \end{proof} Now, we establish the boundedness of the second moment of the splitting methods. Intuitively, this is guaranteed by the use of the global explicit solution of the locally Lipschitz ODE \eqref{ODE_FHN}, which is defined on the entire interval $[0,T]$ without any explosion occurring in finite time. Thus, the iterative composition of this function with the solution of the linear SDE via the Lie-Trotter~\eqref{SP1_FHN} and Strang \eqref{SP3_FHN} methods does not cause an explosion of the moments in finite time either. The proof of this result, however, is not straightforward. In particular, it requires an assumption related to the locally Lipschitz ODE \eqref{ODE_FHN}. In Lemma \ref{Lemma3}, we provide two proofs of the mean-square boundedness, each based on a different assumption. The first variant is an extension of the proof of Proposition 3.7 in \cite{Brehier2018}, and is formulated for the Lie-Trotter splitting only.
The second variant covers both the Lie-Trotter and the Strang splitting, and builds the basis for the results presented in Section \ref{sec:5:2_FHN}. The required assumptions are introduced in Assumption \ref{assum:3}. While the first assumption holds for constant functions $N(x)\equiv c$, and thus linear functions $f(x;\Delta)=c\Delta+x$, this is not the case for the second one. Which of the two proofs is more convenient and which assumption is easier to verify depends on the problem under consideration. \begin{assumption}\label{assum:3} Let $f:\mathbb{R}^d\to\mathbb{R}^d$ be as in \eqref{Exact_ODE_FHN}. \begin{enumerate}[label=\text{(A4.\arabic*)}] \item \label{(A4.1)} There exist constants $c_3(\Delta_0)>0$ and $q\in\mathbb{N}$ such that for any $x\in\mathbb{R}^d$ the auxiliary function \begin{equation*} M(x;\Delta):=\frac{f(x;\Delta)-x}{\Delta} \end{equation*} satisfies \begin{equation*} \norm{M(x;\Delta)}\leq c_3(\Delta_0)\Bigl( 1+ \norm{\bigl(|x_1|^{2q},\ldots,|x_d|^{2q}\bigr)^\top} \Bigr), \quad \forall \ \Delta \in (0,\Delta_0]. \end{equation*} \item \label{(A4.2)} There exists a constant $c_4\geq0$ such that for any $x\in\mathbb{R}^d$ it holds that \begin{equation*} \norm{f(x;\Delta)}^2\leq \norm{x}^2+c_4\Delta, \quad \forall \ \Delta \in (0,\Delta_0]. \end{equation*} \end{enumerate} \end{assumption} \begin{lemma}[Mean-square boundedness]\label{Lemma3} Let Assumption \ref{(A3)} be satisfied and let $\widetilde{X}^{\textrm{LT}}(t_i)$ and $\widetilde{X}^{\textrm{S}}(t_i)$ be the Lie-Trotter and Strang splitting defined through \eqref{SP1_FHN} and \eqref{SP3_FHN}, respectively. Part I: Let Assumptions \ref{(A1)} and \ref{(A4.1)} hold. Then $\widetilde{X}^{\textrm{LT}}(t_i)$ is mean-square bounded in the sense of Definition \ref{eq:bm_num}. Part II: Let Assumption \ref{(A4.2)} hold. Then $\widetilde{X}^{\textrm{LT}}(t_i)$ and $\widetilde{X}^{\textrm{S}}(t_i)$ are mean-square bounded in the sense of Definition \ref{eq:bm_num}. 
\end{lemma}\noindent \begin{proof} Due to Assumption \ref{(A3)}, the splitting methods \eqref{SP1_FHN} and \eqref{SP3_FHN} are well-defined. \textit{Part I}: Consider the linear SDE \begin{equation*} dZ(t)=AZ(t)dt+\Sigma dW(t), \quad Z(0)=Z_0=0_d. \end{equation*} Its explicit solution is given by \begin{equation*} Z(t)=\int\limits_{0}^{t} e^{A(t-s)} \Sigma dW(s), \end{equation*} where $Z(t)$ is normally distributed with mean vector $0_d$ and covariance matrix $C(t)$ as defined in \eqref{eq:Cov}. Consequently, the second moment of $Z(t)$ is bounded, i.e., there exists $C_Z(T)>0$ such that \begin{equation}\label{eq:moments_Z} \mathbb{E}\left[ \max\limits_{0\leq t\leq T} \norm{Z(t)}^2 \right] \leq C_Z(T). \end{equation} Moreover, all moments of the components $Z_j(t)$, $j=1,\ldots,d$, of $Z(t)$ are bounded. Thus, for any $p \in \mathbb{N}$ and $j \in \{1,\ldots, d \}$, there exists $C_{j,p}(T)>0$ such that \begin{equation}\label{eq:moments_Z2} \mathbb{E}\left[ \max_{0\leq t \leq T} |Z_j(t)|^{2p} \right] \leq C_{j,p}(T). \end{equation} Now, define the process $R(t_i):=\widetilde{X}^{\textrm{LT}}(t_i)-Z(t_i)$. Since \begin{equation*} \Bigl(\sum\limits_{j=1}^{l} a_j \Bigr)^2 \leq l \sum\limits_{j=1}^{l}a_j^2, \end{equation*} and thus \begin{eqnarray*} \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2=\norm{\widetilde{X}^{\textrm{LT}}(t_i)-Z(t_i)+Z(t_i)}^2 \leq \Bigl( \norm{R(t_i)} + \norm{Z(t_{i})} \Bigr)^2 \leq 2\norm{R(t_i)}^2 + 2\norm{Z(t_i)}^2, \end{eqnarray*} the bound \eqref{eq:moments_Z} implies that it suffices to prove the boundedness of the second moment of the process $R(t_i)$. Note that in a discretised regime we have that $Z(t_i)=e^{A\Delta}Z(t_{i-1})+\xi_{i-1}$. 
Thus, \begin{eqnarray*} \norm{R(t_i)}&=&\norm{\widetilde{X}^{\textrm{LT}}(t_i)-Z(t_i)}=\norm{ e^{A\Delta} \left( f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta) - Z(t_{i-1}) \right) } \\ &=& \norm{ e^{A\Delta} \Bigl( f(R(t_{i-1})+Z(t_{i-1});\Delta) -Z(t_{i-1})\Bigr) } \\ &=& \norm{ e^{A\Delta} \Bigl( f(R(t_{i-1})+Z(t_{i-1});\Delta) -f(Z(t_{i-1});\Delta) + f(Z(t_{i-1});\Delta) -Z(t_{i-1})\Bigr) }. \end{eqnarray*} Using that $\norm{e^{A\Delta}x}\leq\norm{e^{A\Delta}}\norm{x}\leq e^{\mu(A)\Delta}\norm{x}$ for all $x \in \mathbb{R}^d$, we obtain \begin{eqnarray*} \norm{R(t_i)}&\leq& e^{\mu(A)\Delta} \norm{ f(R(t_{i-1})+Z(t_{i-1});\Delta) -f(Z(t_{i-1});\Delta) } + e^{\mu(A)\Delta} \norm{ f(Z(t_{i-1});\Delta) -Z(t_{i-1}) }. \end{eqnarray*} Since the function $N:\mathbb{R}^d\to\mathbb{R}^d$ satisfies \ref{(A1)}, an application of the continuous Gronwall lemma shows that the function $f:\mathbb{R}^d\to\mathbb{R}^d$ fulfils the following global Lipschitz condition \begin{equation*} \norm{f(x;\Delta)-f(y;\Delta)} \leq e^{c_1 \Delta}\norm{x-y}, \quad \forall \ x,y \in \mathbb{R}^d, \end{equation*} where $c_1$ is the same as in \ref{(A1)}, see, e.g., \cite{Humphries2001}. Moreover, noting that $f(x;\Delta)-x=\Delta M(x;\Delta)$ and using Assumption \ref{(A4.1)}, we obtain that \begin{eqnarray*} \norm{R(t_i)} &\leq& e^{\mu(A)\Delta} \left( e^{c_1\Delta} \norm{ R(t_{i-1})} + c_3(\Delta_0) \Delta \bigl( 1+\norm{ \bigl(|Z_1(t_{i-1})|^{2q},\ldots, |Z_d(t_{i-1})|^{2q} \bigr)^\top } \bigr) \right). \end{eqnarray*} Defining $\tilde{c}:=|\mu(A)|+c_1>0$, we get that \begin{eqnarray*} \norm{R(t_i)} &\leq& e^{\tilde{c}\Delta} \left( \norm{ R(t_{i-1})} + c_3(\Delta_0) \Delta \bigl( 1+\norm{ \bigl( |Z_1(t_{i-1})|^{2q},\ldots, |Z_d(t_{i-1})|^{2q} \bigr)^\top } \bigr) \right).
\end{eqnarray*} Now, we can perform back iteration, obtaining \begin{eqnarray*} \norm{R(t_i)} &\leq& e^{\tilde{c}t_i} \norm{ R_0} + c_3(\Delta_0) \Delta \sum_{k=1}^{i}e^{\tilde{c}k\Delta} \Bigl( 1+\norm{ \bigl( |Z_1(t_{i-k})|^{2q}, \ldots, |Z_d(t_{i-k})|^{2q} \bigr)^\top } \Bigr) \\ &\leq& e^{\tilde{c}T} \norm{ X_0} + {c_3}(\Delta_0) \Bigl( 1+ \max_{0\leq k \leq i-1} \norm{ \bigl( |Z_1(t_{k})|^{2q}, \ldots, |Z_d(t_{k})|^{2q} \bigr)^\top } \Bigr) \Delta \sum_{k=1}^{i}e^{\tilde{c}k\Delta}, \end{eqnarray*} where we used that $R_0=X_0$, since $Z_0=0_d$. Using that \begin{equation*}\label{eq:geom1} \Delta\sum_{k=1}^{i}e^{\tilde{c} k\Delta}=(e^{\tilde{c} t_i}-1)\frac{\Delta e^{\tilde{c}\Delta}}{e^{\tilde{c}\Delta}-1} \leq (e^{\tilde{c}T}-1)\frac{\Delta_0e^{\tilde{c}\Delta_0}}{e^{\tilde{c}\Delta_0}-1}, \quad \forall \ \Delta \in (0,\Delta_0], \end{equation*} we get that \begin{eqnarray*} \norm{R(t_i)} &\leq& e^{\tilde{c}T} \norm{ X_0} + {c_3}(\Delta_0) (e^{\tilde{c}T}-1) \frac{\Delta_0e^{\tilde{c}\Delta_0}}{e^{\tilde{c}\Delta_0}-1} \Bigl( 1+\max_{0\leq k \leq i-1} \norm{ \bigl( |Z_1(t_{k})|^{2q}, \ldots, |Z_d(t_{k})|^{2q} \bigr)^\top } \Bigr). \end{eqnarray*} Thus, there exists a constant $C(T,\Delta_0)>0$ such that \begin{equation*} \norm{R(t_i)}\leq C(T,\Delta_0)\Bigl( 1+\norm{X_0} + \max_{0\leq k \leq i-1} \norm{ \bigl( |Z_1(t_{k})|^{2q}, \ldots, |Z_d(t_{k})|^{2q} \bigr)^\top } \Bigr). \end{equation*} Moreover, there exists a constant $C_1(T,\Delta_0)>1$ such that \begin{eqnarray*} \norm{R(t_i)}^2&\leq& C_1(T,\Delta_0)\Bigl( 1+\norm{X_0} + \max_{0\leq k \leq i-1} \norm{ \bigl( |Z_1(t_{k})|^{2q}, \ldots, |Z_d(t_{k})|^{2q} \bigr)^\top } \Bigr)^2 \\ &\leq& 3C_1(T,\Delta_0)\Bigl( 1+\norm{X_0}^2 + \max_{0\leq k \leq i-1} \norm{ \bigl( |Z_1(t_{k})|^{2q}, \ldots, |Z_d(t_{k})|^{2q} \bigr)^\top }^2 \Bigr) \\ &\leq& 3C_1(T,\Delta_0)\Bigl( 1+\norm{X_0}^2 + \max_{0\leq k \leq i-1} |Z_1(t_{k})|^{4q} +\ldots + \max_{0\leq k \leq i-1} |Z_d(t_{k})|^{4q}\Bigr). 
\end{eqnarray*} Taking the expectation and using \eqref{eq:moments_Z2} gives \begin{eqnarray*} \mathbb{E}\left[ \norm{R(t_i)}^2 \right] &\leq& 3C_1(T,\Delta_0)\Bigl( 1+ \mathbb{E}\left[ \norm{X_0}^2 \right] + C_{1,2q}(T) +\ldots + C_{d,2q}(T) \Bigr) \\ &\leq& \widetilde{K}(T,\Delta_0)\left( 1+\mathbb{E}\left[ \norm{X_0}^2\right] \right). \end{eqnarray*} This concludes the first part of the proof. \textit{Part II}: We start with the Lie-Trotter splitting and have that \begin{eqnarray*} \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2&=&\norm{e^{A\Delta}f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta)+\xi_{i-1}}^2 \\ &=&f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta)^\top (e^{A\Delta})^\top (e^{A\Delta}) f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta)\\ && \hspace{0.5cm}+f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta)^\top(e^{A\Delta})^\top \xi_{i-1} + \xi_{i-1}^\top e^{A\Delta} f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta) + \xi_{i-1}^\top \xi_{i-1}. \end{eqnarray*} Taking the expectation, using the fact that $\widetilde{X}^{\textrm{LT}}(t_{i-1})$ and $\xi_{i-1}$ are independent, that $\mathbb{E}[\xi_{i-1}]=0_d$, that $\mathbb{E}[\xi_{i-1}^\top]=0_d^\top$, and defining \begin{equation*} \bar{C}(\Delta):=\sum\limits_{j=1}^{d}c_{jj}(\Delta)=\mathbb{E}[\xi_{i-1}^\top \xi_{i-1} ], \end{equation*} we get that \begin{eqnarray} \nonumber\mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2 \right] &=& \mathbb{E}\left[ f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta)^\top (e^{A\Delta})^\top (e^{A\Delta}) f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta) \right] + \bar{C}(\Delta) \\ &=& \label{eq:help} \mathbb{E}\left[ \norm{ e^{A\Delta} f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta) }^2 \right] + \bar{C}(\Delta).
\end{eqnarray} Using Assumption \ref{(A4.2)} and the logarithmic norm, we further obtain \begin{eqnarray*} \mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2 \right]&\leq&e^{2\mu(A)\Delta} \left( \mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{LT}}(t_{i-1})}^2 \right]+ c_4\Delta \right) +\bar{C}(\Delta). \end{eqnarray*} Now, we can perform back iteration, yielding \begin{eqnarray*} \mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2 \right]&\leq&e^{2\mu(A)t_i} \mathbb{E}\left[ \norm{X_0}^2 \right]+ c_4\Delta \sum\limits_{k=1}^{i} e^{2\mu(A)k\Delta} +\bar{C}(\Delta) \sum\limits_{k=0}^{i-1}e^{2\mu(A)k\Delta}. \end{eqnarray*} Using that \begin{eqnarray*} \sum\limits_{k=1}^{i}e^{2\mu(A)k\Delta}&=&\left(1- e^{2\mu(A)t_i} \right)\frac{e^{2\mu(A)\Delta}}{\left(1- e^{2\mu(A)\Delta}\right)},\\ \sum\limits_{k=0}^{i-1}e^{2\mu(A)k\Delta}&=&\left(1- e^{2\mu(A)t_i} \right)\frac{1}{\left(1- e^{2\mu(A)\Delta}\right)}, \end{eqnarray*} we obtain \begin{equation}\label{eq:bound_LT} \mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2 \right]\leq e^{2\mu(A)t_i} \mathbb{E}\left[ \norm{X_0}^2 \right]+\left( 1-e^{2\mu(A)t_i} \right)\left( \frac{c_4\Delta e^{2\mu(A)\Delta}}{1-e^{2\mu(A)\Delta}}+\frac{\bar{C}(\Delta)}{1-e^{2\mu(A)\Delta}} \right), \end{equation} where $e^{2\mu(A)t_i}$ and $(1-e^{2\mu(A)t_i})$ can be bounded by $e^{2|\mu(A)|T}$ and $(1+e^{2|\mu(A)|T})$, respectively. Moreover, we have that \begin{equation}\label{eq:help3} \frac{\Delta e^{2\mu(A)\Delta}}{1-e^{2\mu(A)\Delta}} \leq -\frac{1}{2\mu(A)}, \quad \forall \ \Delta>0 \quad \text{and} \quad \frac{\Delta}{1-e^{2\mu(A)\Delta}} \leq \frac{\Delta_0}{1-e^{2\mu(A)\Delta_0}}, \quad \forall \ \Delta \in (0,\Delta_0]. \end{equation} Recalling that $e^{A\Delta}=\mathbb{I}_d+\Delta A+O(\Delta^2)$, it follows from \eqref{eq:Cov} that $\bar{C}(\Delta)=O(\Delta)$. 
Thus, there exists a constant $\widetilde{K}(T,\Delta_0)$ such that \begin{equation*} \mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2 \right] \leq \widetilde{K}(T,\Delta_0)\left(1+\mathbb{E}\left[ \norm{X_0}^2 \right] \right). \end{equation*} This proves the result for the Lie-Trotter splitting. Now, we consider the Strang splitting and note that \begin{eqnarray} \nonumber\norm{\widetilde{X}^{\textrm{S}}(t_i)}^2&=&\norm{f\Bigl( e^{A\Delta}f(\widetilde{X}^{\textrm{S}}(t_{i-1});\Delta/2) + \xi_{i-1} ;\Delta/2 \Bigr)}^2 \\ &\leq& \label{eq:help2} \norm{ e^{A\Delta}f(\widetilde{X}^{\textrm{S}}(t_{i-1});\Delta/2) + \xi_{i-1} }^2 +c_4{\Delta}/{2}, \end{eqnarray} using Assumption \ref{(A4.2)}. Proceeding in the same way as for the Lie-Trotter splitting, we obtain \begin{equation}\label{eq:bound_S} \mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{S}}(t_i)}^2 \right]\leq e^{2\mu(A)t_i} \mathbb{E}\left[ \norm{X_0}^2 \right]+\left( 1- e^{2\mu(A)t_i} \right)\left( \frac{c_4\Delta/2 (1+ e^{2\mu(A)\Delta})}{1-e^{2\mu(A)\Delta}}+\frac{\bar{C}(\Delta)}{1-e^{2\mu(A)\Delta}} \right). \end{equation} Using the same arguments as before, the result is also proved for the Strang splitting. This concludes the second part of the proof. \end{proof} Based on the above results, we establish the mean-square convergence of the splitting methods in the following theorem. \newpage \begin{theorem}[Mean-square convergence]\label{thm:convergence_FHN} Let $\widetilde{X}^{\textrm{LT}}(t_i)$ and $\widetilde{X}^{\textrm{S}}(t_i)$ be the Lie-Trotter and Strang splitting defined through \eqref{SP1_FHN} and \eqref{SP3_FHN}, respectively. Further, let the assumptions of Theorem \ref{thm:Zang}, Lemma \ref{Lemma2} and Lemma \ref{Lemma3} be satisfied. Then $\widetilde{X}^{\textrm{LT}}(t_i)$ and $\widetilde{X}^{\textrm{S}}(t_i)$ are mean-square convergent of order $p=1$ in the sense of \eqref{eq:ms_convergence_order}. 
\end{theorem} \begin{proof} The result is a direct consequence of Theorem \ref{thm:Zang}, Lemma \ref{Lemma2} and Lemma \ref{Lemma3}. \end{proof}\noindent Note that, in contrast to deterministic equations \cite{Hairer2006}, the convergence order of a splitting method cannot be increased by using compositions based on fractional steps. Indeed, to achieve this in the stochastic scenario, higher-order stochastic integrals would be required \cite{Milstein2003}. Thus, the Strang method \eqref{SP3_FHN} also has mean-square order $1$. Nevertheless, it has been observed that, even without including higher-order stochastic integrals, the Strang approach may perform considerably better than the Lie-Trotter method in numerical experiments, see, e.g., \cite{Ableidinger2017,Buckwar2019,Stern2020,Chevallier2020,Tubikanec2020}, possibly due to the symmetry of this composition method. \section{Structure preservation} \label{sec:5_FHN} The mean-square convergence discussed in the previous section is a limit result for the time discretisation step $\Delta$ going to zero over a finite interval. This result does not carry any information about the quality of the numerical method for the strictly positive time steps $\Delta$ that are always required when implementing a numerical method. In the following, we discuss the preservation of important structural properties. In particular, we focus on hypoellipticity and ergodicity. \subsection{Preservation of noise structure and $1$-step hypoellipticity} To obtain a discrete analogue of the transition probability \eqref{eq:trans_prob} introduced in Section \ref{sec:2_FHN}, we define the $k$-step transition probability of a numerical solution $\widetilde{X}(t_i)$ of SDE \eqref{eq:semi_linear_SDE} as follows \begin{equation}\label{eq:trans_prob_k} \widetilde{P}_{t_k}(\mathcal{A},x):=\mathbb{P}(\widetilde{X}(t_k)\in\mathcal{A}|X(0)=x), \end{equation} where $\mathcal{A}\in\mathcal{B}(\mathbb{R}^d)$ and $x\in \mathbb{R}^d$.
Now, assume that SDE \eqref{eq:semi_linear_SDE} is hypoelliptic, i.e., its transition probability \eqref{eq:trans_prob} has a smooth density even though $\Sigma\Sigma^\top$ is not of full rank. A discrete version of this property is provided in the subsequent definition. \begin{definition}[$k$-step hypoellipticity]\label{def:k:hypo} Let $\widetilde{X}(t_i)$ be a numerical solution of \eqref{eq:semi_linear_SDE} and let $k\in\mathbb{N}$ be the smallest integer such that its transition probability \eqref{eq:trans_prob_k} has a smooth density. Then, $\widetilde{X}(t_i)$ is called $k$-step hypoelliptic. \end{definition}\noindent This means that the numerical method propagates the noise into the smooth component after $k$ iteration steps. The preservation of this property is not an issue when using the numerical method to simulate paths of the SDE over a large enough time horizon, as standard methods usually satisfy it for some $k$. For example, Euler-Maruyama type methods have been observed to be $2$-step hypoelliptic, see, e.g., Corollary 7.4 in \cite{Mattingly2002}. However, the case $k=1$, where we also use the notation \begin{equation}\label{eq:trans_prob_1} \widetilde{P}_{\Delta}(\mathcal{A},x):=\mathbb{P}(\widetilde{X}(t_i)\in\mathcal{A}|\widetilde{X}(t_{i-1})=x), \end{equation} is of crucial relevance when using the numerical method within statistical applications. In the following, we provide a brief insight into this issue.
In the field of likelihood-based parameter estimation \cite{Ditlevsen2019,Melnykova2018,Pokern2009}, a particular interest lies in the situation where \eqref{eq:trans_prob_1} corresponds to the multivariate normal distribution, i.e., $\widetilde{X}(t_i)$ given $\widetilde{X}(t_{i-1})$ is normally distributed with mean vector and covariance matrix given by \begin{equation*} \mathbb{E}\left[\widetilde{X}(t_i)|\widetilde{X}(t_{i-1})\right]=m_i(\widetilde{X}(t_{i-1}),\Delta;\theta), \quad \textrm{Cov}\left(\widetilde{X}(t_i)|\widetilde{X}(t_{i-1})\right)=\textrm{Q}(\Delta;\theta), \end{equation*} respectively, where $\theta$ denotes the relevant model parameters to be inferred, based on discrete-time observations of the process $(X(t))_{t\in[0,T]}$. Assuming that the process is fully observed and that $(x(t_i))_{i=0,\ldots,n}$ are the available data, the estimator of $\theta$ can be obtained as {\small{\begin{eqnarray*} \underset{\theta}{\arg\min} \sum\limits_{i=1}^{n} \Bigl[ (x(t_i)-m_i(x(t_{i-1}),\Delta;\theta))^\top \textrm{Q}(\Delta;\theta)^{-1} (x(t_i)-m_i(x(t_{i-1}),\Delta;\theta)) \Bigr] + n \log \Bigl( \det(\textrm{Q}(\Delta;\theta)) \Bigr), \end{eqnarray*}}}\noindent a criterion which results from taking minus two times the corresponding log-likelihood. This estimation tool is only well-defined if $\textrm{Q}(\Delta;\theta)^{-1}$ exists and $\det(\textrm{Q}(\Delta;\theta))>0$, i.e., if the underlying numerical method $\widetilde{X}(t_i)$ is $1$-step hypoelliptic. This is not fulfilled by Euler-Maruyama type methods.
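As a concrete illustration (our sketch, not the authors' implementation), the criterion can be evaluated explicitly for the one-dimensional Lie-Trotter splitting of the cubic model \eqref{eq:Toy} studied below, whose one-step mean $e^{-\Delta}f(x;\Delta)$ and variance $\sigma^2(1-e^{-2\Delta})/2$ are available in closed form:

```python
# Minimal sketch (ours, not the authors' code): evaluating the estimation
# criterion for the scalar cubic SDE dX = -X^3 dt + sigma dW, split with
# A = -1 and N(x) = x - x^3, so that the Lie-Trotter one-step mean and
# variance are available in closed form.
import math
import random

def f(x, delta):
    # explicit flow of the nonlinear ODE x' = x - x^3
    e = math.exp(-2.0 * delta)
    return x / math.sqrt(e + x**2 * (1.0 - e))

def lt_mean(x, delta):
    # conditional mean of the Lie-Trotter step: e^{-Delta} f(x; Delta)
    return math.exp(-delta) * f(x, delta)

def lt_var(sigma, delta):
    # conditional variance: sigma^2 (1 - e^{-2 Delta}) / 2
    return sigma**2 * (1.0 - math.exp(-2.0 * delta)) / 2.0

def simulate_lt(x0, sigma, delta, n, rng):
    xs = [x0]
    for _ in range(n):
        xs.append(lt_mean(xs[-1], delta)
                  + math.sqrt(lt_var(sigma, delta)) * rng.gauss(0.0, 1.0))
    return xs

def neg2loglik(xs, sigma, delta):
    # the criterion above: standardised squared residuals + n log(det Q)
    q = lt_var(sigma, delta)
    n = len(xs) - 1
    rss = sum((xs[i] - lt_mean(xs[i - 1], delta))**2 for i in range(1, n + 1))
    return rss / q + n * math.log(q)

delta, sigma_true = 0.01, 0.5
data = simulate_lt(0.0, sigma_true, delta, 2000, random.Random(42))

# for this scalar model the minimiser in sigma has a closed form:
# sigma_hat^2 = rss / (n c) with c = (1 - e^{-2 Delta}) / 2
c = (1.0 - math.exp(-2.0 * delta)) / 2.0
rss = sum((data[i] - lt_mean(data[i - 1], delta))**2 for i in range(1, len(data)))
sigma_hat = math.sqrt(rss / (c * (len(data) - 1)))
print(sigma_hat)  # close to sigma_true = 0.5
```

In general, the criterion is minimised numerically over $\theta$; the key point is that it is only defined when the conditional covariance is invertible, which the splitting methods guarantee.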
For example, \eqref{EM_FHN}, \eqref{eq:EM_Tamed} and \eqref{TrEM} yield degenerate multivariate normal distributions with non-invertible conditional covariance matrices \begin{eqnarray*} \nonumber&&\textrm{Cov}(\widetilde{X}^{\textrm{EM}}(t_i)|\widetilde{X}^{\textrm{EM}}(t_{i-1}))=\textrm{Cov}(\widetilde{X}^{\textrm{TEM}}(t_i)|\widetilde{X}^{\textrm{TEM}}(t_{i-1}))\\ &&=\textrm{Cov}(\widetilde{X}^{\textrm{TrEM}}(t_i)|\widetilde{X}^{\textrm{TrEM}}(t_{i-1}))=\Delta\Sigma\Sigma^{\top}. \end{eqnarray*} Thus, they do not have a density and are impracticable within statistical inference tools for hypoelliptic SDEs. In contrast, the splitting methods benefit from the random quantities $\xi_{i}$ which have covariance matrix $C(\Delta)$, as defined in \eqref{eq:Cov}. Both the diffusion matrix $\Sigma$ and the drift matrix~$A$, through its exponential, enter into $C(\Delta)$. Thus, the splitting methods are $1$-step hypoelliptic provided that the stochastic linear subequation \eqref{SDE_FHN} of the splitting framework is hypoelliptic. While the conditional distribution of the Strang splitting is not explicitly available in general, the Lie-Trotter method yields a non-degenerate multivariate normal distribution. \begin{theorem}\label{thm:hypo_N_LT} Let Assumption \ref{(A3)} hold and let $\widetilde{X}^{\textrm{LT}}(t_i)$ be the Lie-Trotter splitting defined through \eqref{SP1_FHN}. Further, assume that SDE \eqref{SDE_FHN} is hypoelliptic. Then, $\widetilde{X}^{\textrm{LT}}(t_i)$ is $1$-step hypoelliptic according to Definition \ref{def:k:hypo}. 
Moreover, $\widetilde{X}^{\textrm{LT}}(t_i)$ given $\widetilde{X}^{\textrm{LT}}(t_{i-1})$ admits a non-degenerate normal distribution with mean vector and covariance matrix given by \begin{equation*} \mathbb{E}\left[\widetilde{X}^{\textrm{LT}}(t_i)|\widetilde{X}^{\textrm{LT}}(t_{i-1})\right]=e^{A\Delta}f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta), \quad \textrm{Cov}(\widetilde{X}^{\textrm{LT}}(t_i)|\widetilde{X}^{\textrm{LT}}(t_{i-1}))=C(\Delta), \end{equation*} respectively, where $C(\Delta)$ is defined in \eqref{eq:Cov}. \end{theorem} \begin{proof} Due to Assumption \ref{(A3)}, the Lie-Trotter splitting \eqref{SP1_FHN} is well-defined. The fact that $\widetilde{X}^{\textrm{LT}}(t_i)$ given $\widetilde{X}^{\textrm{LT}}(t_{i-1})$ is normally distributed with the corresponding mean vector and covariance matrix is an immediate consequence of formula \eqref{SP1_FHN}, recalling that $\xi_i$ are Gaussian random vectors with null mean and covariance matrix $C(\Delta)$. Moreover, the linear SDE \eqref{SDE_FHN} is hypoelliptic by assumption. Thus, its solution $(X^{[1]}(t))_{t\in[0,T]}$ has conditional covariance matrix $C(t)=\textrm{Cov}(X^{[1]}(t)|X_0^{[1]})$, given in \eqref{eq:Cov}, which is of full rank. Since the covariance matrix of the Lie-Trotter splitting equals $C(\Delta)$, the normal distribution is non-degenerate, and thus the method is $1$-step hypoelliptic according to Definition \ref{def:k:hypo}. \end{proof} \subsection{Preservation of Lyapunov structure and geometric ergodicity}\label{sec:5:2_FHN} We now assume that SDE \eqref{eq:semi_linear_SDE} is geometrically ergodic. The main task to establish the geometric ergodicity of a numerical solution of \eqref{eq:semi_linear_SDE} is to prove a discrete analogue of the Lyapunov condition~\eqref{eq:Lyapunov} introduced in Section \ref{sec:2_FHN}. \begin{definition}[Discrete Lyapunov condition]\label{def:Lyapunov_discrete} Let $L$ be a Lyapunov function for SDE \eqref{eq:semi_linear_SDE}.
A numerical solution $\widetilde{X}(t_i)$ of \eqref{eq:semi_linear_SDE} satisfies the discrete Lyapunov condition if there exist $\tilde{\rho}\in (0,1)$ and $\tilde{\eta}>0$ such that \begin{equation*} \mathbb{E}\left[ L(\widetilde{X}(t_i)) | \widetilde{X}(t_{i-1}) \right] \leq \tilde{\rho}L(\widetilde{X}(t_{i-1}))+\tilde{\eta}, \quad \forall \ i\in\mathbb{N}. \end{equation*} \end{definition}\noindent Analogously to the continuous case, this condition, combined with the $k$-step hypoellipticity and a discrete irreducibility condition, implies geometric ergodicity of the numerical method. For further details, the reader is referred to \cite{Ableidinger2017,Mattingly2002}. Unfortunately, Euler-Maruyama type methods do not preserve this property, especially when the drift of SDE \eqref{eq:semi_linear_SDE} is only locally Lipschitz continuous. In particular, the problem does not lie in the preservation of hypoellipticity and irreducibility, but in preserving the Lyapunov structure~\cite{Mattingly2002}. Consider, for example, the cubic one-dimensional SDE \begin{equation}\label{eq:Toy} dX(t)=-X^3(t)dt+\sigma dW(t), \quad X(0)=X_0. \end{equation} Since $F(x)x=-x^4\leq1-x^4\leq2-2x^2$, this SDE satisfies the dissipativity condition \eqref{eq:dissipative}, and thus $L(x)=1+x^2$ is a Lyapunov function satisfying \eqref{eq:Lyapunov} and the process $(X(t))_{t\in[0,T]}$ is geometrically ergodic. However, it is shown in Lemma 6.3 of \cite{Mattingly2002} that, if $\mathbb{E}[X_0^2]\geq2/\Delta$, the second moment of the Euler-Maruyama method goes to infinity as the time $t_i$ grows, since \begin{equation*} \mathbb{E}\left[ \left( \widetilde{X}^{\textrm{EM}}(t_i) \right)^2 \right] \geq \mathbb{E}[X_0^2] + t_i. \end{equation*} Thus, even for arbitrarily small time steps $\Delta$, the Euler-Maruyama method fails to converge to a unique invariant distribution independently of the choice of $X_0$. 
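The mechanism behind this explosion is already visible in the drift alone. The following Python sketch (our illustration, not taken from the paper) iterates the explicit Euler map for the deterministic part $\dot{x}=-x^3$, which diverges once $\Delta x_0^2$ is large (matching the condition $\mathbb{E}[X_0^2]\geq2/\Delta$ above), and compares it with the exact flow $x\mapsto x/\sqrt{1+2\Delta x^2}$ of the same ODE, which remains bounded for every step size:

```python
# Illustration (not the paper's experiments): explicit Euler applied to the
# deterministic part x' = -x^3 of the cubic SDE versus the exact flow of
# that ODE, x(t) = x0 / sqrt(1 + 2 t x0^2).
import math

def euler_step(x, h):
    # one explicit Euler step for x' = -x^3
    return x - h * x**3

def exact_flow(x, h):
    # exact solution of x' = -x^3 after time h, started at x
    return x / math.sqrt(1.0 + 2.0 * h * x**2)

def iterate(step, x0, h, n, cap=1e100):
    x = x0
    for _ in range(n):
        x = step(x, h)
        if abs(x) > cap:  # treat as numerical explosion
            return float('inf')
    return x

h, x0 = 0.01, 100.0  # h * x0**2 = 100 >> 2: Euler is unstable
x_euler = iterate(euler_step, x0, h, 50)
x_flow = iterate(exact_flow, x0, h, 50)
print(x_euler)  # inf: the Euler iterates explode within a few steps
print(x_flow)   # about 1.0: the exact flow contracts towards zero
```

Splitting schemes use exactly such exact nonlinear flows as building blocks, which is why large initial values cause them no difficulty.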
In contrast, splitting methods often preserve the Lyapunov structure, see, e.g., \cite{Ableidinger2017,Chevallier2020}. This is also proved for the proposed splitting methods and the Lyapunov function $L(x)=1+\norm{x}^2$, under Assumption \ref{(A4.2)} and an additional assumption related to the matrix $A$. \begin{assumption}\label{assum:A5} \begin{enumerate}[label=\text{(A5)}] \item \label{(A5)} The matrix $A$ is such that $\norm{e^{A\Delta}}<1$ for all $\Delta \in (0,\Delta_0]$. \end{enumerate} \end{assumption} \begin{theorem}\label{thm:Lyapunov_discrete} Let Assumption \ref{(A3)} hold and let $\widetilde{X}^{\textrm{LT}}(t_i)$ and $\widetilde{X}^{\textrm{S}}(t_i)$ be the Lie-Trotter and Strang splitting defined through \eqref{SP1_FHN} and \eqref{SP3_FHN}, respectively. Further, suppose that $L(x)=1+\norm{x}^2$ is a Lyapunov function of SDE \eqref{eq:semi_linear_SDE} and that Assumptions \ref{(A4.2)} and \ref{(A5)} are satisfied. Then $\widetilde{X}^{\textrm{LT}}(t_i)$ and $\widetilde{X}^{\textrm{S}}(t_i)$ satisfy the discrete Lyapunov condition of Definition \ref{def:Lyapunov_discrete}. \end{theorem} \begin{proof} Due to Assumption \ref{(A3)}, the splitting methods \eqref{SP1_FHN} and \eqref{SP3_FHN} are well-defined. We start with the Lie-Trotter splitting and have that \begin{equation*} L(\widetilde{X}^{\textrm{LT}}(t_i))=1+\norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2=1+\norm{e^{A\Delta}f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta)+\xi_{i-1}}^2.
\end{equation*} Using \eqref{eq:help} and Assumption \ref{(A4.2)}, we get that \begin{eqnarray*} \mathbb{E}\left[ L(\widetilde{X}^{\textrm{LT}}(t_i))| \widetilde{X}^{\textrm{LT}}(t_{i-1}) \right]&=&1+\norm{e^{A\Delta}f(\widetilde{X}^{\textrm{LT}}(t_{i-1});\Delta)}^2 + \bar{C}(\Delta) \\ &\leq& 1+\norm{e^{A\Delta}}^2\norm{\widetilde{X}^{\textrm{LT}}(t_{i-1})}^2+ \norm{e^{A\Delta}}^2c_4\Delta+ \bar{C}(\Delta)+\norm{e^{A\Delta}}^2 \\ &=& \norm{e^{A\Delta}}^2 L(\widetilde{X}^{\textrm{LT}}(t_{i-1})) + 1 + \norm{e^{A\Delta}}^2c_4\Delta +\bar{C}(\Delta), \end{eqnarray*} where we added $\norm{e^{A\Delta}}^2$ in the inequality. Thus, applying Assumption \ref{(A5)}, the discrete Lyapunov condition of Definition \ref{def:Lyapunov_discrete} is satisfied for $\tilde{\rho}=\norm{e^{A\Delta}}^2 <1$ and $\tilde{\eta}=1 + \norm{e^{A\Delta}}^2c_4\Delta+ \bar{C}(\Delta)>0$. Now, we consider the Strang splitting. Using \eqref{eq:help2}, we get that \begin{eqnarray*} L(\widetilde{X}^{\textrm{S}}(t_i))= 1+\norm{\widetilde{X}^{\textrm{S}}(t_i)}^2 \leq 1+ \norm{e^{A\Delta}f(\widetilde{X}^{\textrm{S}}(t_{i-1});\Delta/2)+\xi_{i-1}}^2 +c_4\Delta/2. \end{eqnarray*} Starting from here, the result for the Strang splitting can be proved in the same way as for the Lie-Trotter splitting. In particular, the condition of Definition \ref{def:Lyapunov_discrete} holds for $\tilde{\rho}=\norm{e^{A\Delta}}^2<1$ and $\tilde{\eta}=1+\norm{e^{A\Delta}}^2c_4\Delta/2+\bar{C}(\Delta)+c_4\Delta/2>0$. \end{proof} Moreover, in the following corollary of Lemma \ref{Lemma3} (Part II), we show that the second moments of the splitting methods are asymptotically bounded by constants which are independent of $T$ and $\Delta$. In particular, these bounds are reached exponentially fast, independently of the choice of $X_0$, in agreement with the geometric ergodicity of the splitting methods. This result also requires Assumption \ref{(A4.2)} and an assumption related to the matrix $A$. 
\begin{assumption}\label{assum:A6} \begin{enumerate}[label=\text{(A6)}] \item \label{(A6)} The matrix $A$ is such that its logarithmic norm satisfies $\mu(A)<0$. \end{enumerate} \end{assumption}\noindent Note that Assumption \ref{(A6)} implies Assumption \ref{(A5)}, since $\norm{e^{A\Delta}}\leq e^{\mu(A)\Delta}$ \cite{Strom1975}. However, the converse is not true in general. Assumption \ref{(A6)} is, e.g., satisfied for normal matrices whose eigenvalues all have strictly negative real part \cite{Soderlind2006}. Matrices contained in this class are, e.g., diagonal ones with strictly negative diagonal entries. \begin{corollary}\label{cor:asym_2nd_moment} Let Assumption \ref{(A3)} hold and let $\widetilde{X}^{\textrm{LT}}(t_i)$ and $\widetilde{X}^{\textrm{S}}(t_i)$ be the Lie-Trotter and Strang splitting defined through \eqref{SP1_FHN} and \eqref{SP3_FHN}, respectively. Further, let Assumptions \ref{(A4.2)} and~\ref{(A6)} be satisfied. Then there exist constants $\widetilde{K}^{\textrm{LT}}_\infty(\Delta_0)>0$ and $\widetilde{K}^{\textrm{S}}_\infty(\Delta_0)>0$, which are independent of $T$ and $\Delta$, such that \begin{equation*} \lim\limits_{t_i\to\infty}\mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2 \right] \leq \widetilde{K}^{\textrm{LT}}_\infty(\Delta_0), \quad \lim\limits_{t_i\to\infty}\mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{S}}(t_i)}^2 \right] \leq \widetilde{K}^{\textrm{S}}_\infty(\Delta_0). \end{equation*} \end{corollary} \begin{proof} Recall the bounds \eqref{eq:bound_LT} and \eqref{eq:bound_S} from the proof of Part II of Lemma \ref{Lemma3}, which can be derived under Assumption \ref{(A4.2)}.
Applying Assumption \ref{(A6)}, we further obtain that \begin{eqnarray*} \lim\limits_{t_i\to\infty}\mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{LT}}(t_i)}^2 \right]&\leq&\frac{c_4\Delta e^{2\mu(A)\Delta}}{1-e^{2\mu(A)\Delta}}+\frac{\bar{C}(\Delta)}{1-e^{2\mu(A)\Delta}}, \\ \lim\limits_{t_i\to\infty}\mathbb{E}\left[ \norm{\widetilde{X}^{\textrm{S}}(t_i)}^2 \right]&\leq& \frac{c_4}{2}\Bigl( \frac{\Delta}{1-e^{2\mu(A)\Delta}} + \frac{\Delta e^{2\mu(A)\Delta}}{1-e^{2\mu(A)\Delta}} \Bigr)+\frac{\bar{C}(\Delta)}{1-e^{2\mu(A)\Delta}}. \end{eqnarray*} Using \eqref{eq:help3}, and the fact that $\bar{C}(\Delta)=O(\Delta)$, the result holds. \end{proof} \paragraph{The one-dimensional case} In the one-dimensional case, the previously derived bounds have convenient closed-form expressions, due to the specific form of the covariance matrix $C(\Delta)$. In particular, the bounds for the Lie-Trotter splitting no longer depend on $\Delta_0$. The following (asymptotic) bounds for the second moment of the splitting methods are obtained. \begin{corollary}\label{cor:bounds_1dim} Let Assumption \ref{(A3)} hold and let $\widetilde{X}^{\textrm{LT}}(t_i)$ and $\widetilde{X}^{\textrm{S}}(t_i)$ be the Lie-Trotter and Strang splitting defined through \eqref{SP1_FHN} and \eqref{SP3_FHN}, respectively. Further, let Assumption \ref{(A4.2)} be satisfied, $d=1$, $\Sigma=\sigma>0$ and $A=-a<0$. 
Then it holds that \begin{eqnarray*} \mathbb{E}\left[ (\widetilde{X}^{\textrm{LT}}(t_i))^2 \right] &\leq& \widetilde{K}^{\textrm{LT}}(t_i,X_0):=e^{-2at_i}\mathbb{E}\left[ X_0^2 \right] + (1-e^{-2at_i}) \left(\frac{c_4}{2a}+\frac{\sigma^2}{2a}\right),\\ \mathbb{E}\left[ (\widetilde{X}^{\textrm{S}}(t_i))^2 \right] &\leq& \widetilde{K}^{\textrm{S}}(t_i,X_0,\Delta_0)\\&:=&e^{-2at_i}\mathbb{E}\left[ X_0^2 \right] + (1-e^{-2at_i}) \left(\frac{c_4}{2}\Bigl( \frac{\Delta_0}{1-e^{-2a\Delta_0}} + \frac{1}{2a} \Bigr)+\frac{\sigma^2}{2a}\right), \\ \lim\limits_{t_i\to\infty}\mathbb{E}\left[ (\widetilde{X}^{\textrm{LT}}(t_i))^2 \right] &\leq& \widetilde{K}^{\textrm{LT}}_\infty:=\frac{c_4}{2a}+\frac{\sigma^2}{2a}, \\ \lim\limits_{t_i\to\infty}\mathbb{E}\left[ (\widetilde{X}^{\textrm{S}}(t_i))^2 \right] &\leq& \widetilde{K}^{\textrm{S}}_\infty(\Delta_0):=\frac{c_4}{2}\Bigl( \frac{\Delta_0}{1-e^{-2a\Delta_0}} + \frac{1}{2a} \Bigr)+\frac{\sigma^2}{2a}. \end{eqnarray*} \end{corollary} \begin{proof} Noting that \begin{equation*} \bar{C}(\Delta)=C(\Delta)=\frac{\sigma^2}{2a}(1-e^{-2a\Delta}) \quad \text{and} \quad \mu(A)=-a<0, \end{equation*} the result is a direct consequence of Part II of Lemma \ref{Lemma3} (using \eqref{eq:bound_LT}, \eqref{eq:help3} and \eqref{eq:bound_S}) and Corollary~\ref{cor:asym_2nd_moment}. \end{proof} \begin{remark} For $t_i=0$, the bounds $\widetilde{K}^{\textrm{LT}}(0,X_0)$ and $\widetilde{K}^{\textrm{S}}(0,X_0,\Delta_0)$ in Corollary \ref{cor:bounds_1dim} coincide with $\mathbb{E}[X_0^2]$. Moreover, $\widetilde{K}^{\textrm{S}}(t_i,X_0,\Delta_0)$ and $\widetilde{K}^{\textrm{S}}_\infty(\Delta_0)$ approach $\widetilde{K}^{\textrm{LT}}(t_i,X_0)$ and $\widetilde{K}^{\textrm{LT}}_\infty$, respectively, as $\Delta_0$ tends to zero. \end{remark} Note that, since $a>0$, the linear SDE \eqref{SDE_FHN} is geometrically ergodic.
In particular, its solution corresponds to the Ornstein-Uhlenbeck process \begin{equation*} X^{[1]}(t)=e^{-at}X_0^{[1]}+\sigma \int\limits_{0}^{t}e^{-a(t-s)}dW(s), \end{equation*} whose distribution converges to a unique limit \begin{equation*} X^{[1]}(t) \xrightarrow[t \to \infty]{\mathcal{D}} \mathcal{N}(0,\frac{\sigma^2}{2a}). \end{equation*} Intuitively, this fact, combined with Assumption \ref{(A4.2)}, guarantees the geometric ergodicity of the splitting methods obtained via Theorem \ref{thm:Lyapunov_discrete}, and thus the existence of the asymptotic bounds for the second moment. \paragraph{Cubic model problem} For an illustration of the derived bounds, consider again SDE \eqref{eq:Toy}. We propose to rewrite this equation as \begin{equation*} dX(t)=\left( -X(t) +X(t) -X^3(t)\right)dt+\sigma dW(t), \end{equation*} and to choose \begin{equation}\label{eq:A_N_Toy} A=-1<0, \qquad N(X(t))=X(t)-X^3(t). \end{equation} Assumptions \ref{(A1)} and \ref{(A2)} regarding the function $N$ hold. \begin{proposition}\label{prop:A1_A2_N_toy} Let $N:\mathbb{R}\to\mathbb{R}$ be as in \eqref{eq:A_N_Toy}. Then $N$ satisfies Assumptions \ref{(A1)} and \ref{(A2)}. \end{proposition} \begin{proof} The proof is given in Appendix \ref{appA_FHN}. \end{proof}\noindent The global explicit solution of ODE \eqref{ODE_FHN} exists and is given by \begin{equation}\label{eq:f_toy} X^{[2]}(t)=f(X_0^{[2]};t)=\frac{X_0^{[2]}}{\sqrt{e^{-2t}+(X_0^{[2]})^2(1-e^{-2t})}}. \end{equation} Thus, Assumption \ref{(A3)} is satisfied and the Lie-Trotter \eqref{SP1_FHN} and Strang \eqref{SP3_FHN} splitting methods are well-defined. In addition, Assumption \ref{(A4.1)} can be proved in the same way as Lemma 3.4 in \cite{Brehier2018}, and Assumption \ref{(A4.2)} is proved as follows. \begin{proposition}\label{prop:toy_A4.2} Let $f:\mathbb{R}\to\mathbb{R}$ be as in \eqref{eq:f_toy}. Then $f$ satisfies Assumption \ref{(A4.2)} for $c_4=1/2$. 
\end{proposition} \begin{proof} The proof is given in Appendix \ref{appB_FHN}. \end{proof}\noindent Note also that Assumptions \ref{(A5)} and \ref{(A6)} regarding $A$ hold, since $\norm{e^{A\Delta}}=e^{-\Delta}<1$ and $\mu(A)=-1<0$. Therefore, the splitting methods \eqref{SP1_FHN} and \eqref{SP3_FHN} applied to SDE \eqref{eq:Toy} are not only mean-square convergent, but also geometrically ergodic. In particular, while even for arbitrarily small $\Delta$ one can find $X_0$ such that the second moment of the Euler-Maruyama method explodes (see the beginning of Section \ref{sec:5:2_FHN}), the second moments of the splitting methods are bounded by the constants $\widetilde{K}^{\textrm{LT}}(t_i,X_0)$ and $\widetilde{K}^{\textrm{S}}(t_i,X_0,\Delta_0)$, which converge exponentially fast to \begin{equation*} \widetilde{K}^{\textrm{LT}}_\infty=\frac{1}{4}+\frac{\sigma^2}{2} \quad \text{and} \quad \widetilde{K}^{\textrm{S}}_\infty(\Delta_0)=\frac{1}{4}\Bigl( \frac{\Delta_0}{1-e^{-2\Delta_0}} + \frac{1}{2} \Bigr)+\frac{\sigma^2}{2}, \end{equation*} for any choice of the initial value $X_0$, see Corollary \ref{cor:bounds_1dim}. \begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{Fig0_bound_toy_Strang_h0_cropped2.pdf} \caption{Bounds $\widetilde{K}^{\textrm{S}}(t_i,X_0,\Delta_0)$ and $\widetilde{K}^{\textrm{LT}}(t_i,X_0)$ of Corollary \ref{cor:bounds_1dim} for SDE \eqref{eq:Toy} ($a=1$ and $c_4=1/2$) with $\sigma=1/2$, $X_0=2$ and $\Delta_0=10^{-2}$ as functions of time, and estimates of $\mathbb{E}[X^2(t)]$ obtained from $10^4$ paths generated under the LT and S splittings, respectively.} \label{fig:bound_toy} \end{centering} \end{figure} In Figure \ref{fig:bound_toy}, we illustrate the derived bounds $\widetilde{K}^{\textrm{S}}(t_i,X_0,\Delta_0)$ (grey solid line) and $\widetilde{K}^{\textrm{LT}}(t_i,X_0)$ (red dotted line) of Corollary \ref{cor:bounds_1dim} for SDE \eqref{eq:Toy}.
The bounds are almost overlapping for $\Delta_0=10^{-2}$ and converge to $\widetilde{K}^{\textrm{LT}}_\infty$ and $\widetilde{K}^{\textrm{S}}_\infty(\Delta_0)$, respectively, as $t$ grows. Moreover, we estimate $\mathbb{E}[X^2(t)]$ based on $10^4$ paths generated under the Strang \eqref{SP3_FHN} (black solid line) and Lie-Trotter \eqref{SP1_FHN} (cyan dotted line) splitting, respectively. Both estimates almost overlap and lie below the bound. They also converge to a constant, which is smaller than $\widetilde{K}^{\textrm{LT}}_\infty$ and $\widetilde{K}^{\textrm{S}}_\infty(\Delta_0)$, as time evolves. \begin{remark} For SDE \eqref{eq:Toy}, an immediate choice of the subequations of the splitting framework would also be $N(X(t))=-X^3(t)$ and $A=0$. For this choice, Assumptions \ref{(A1)}-\ref{(A4.2)} related to the locally Lipschitz function $N$ are satisfied, and thus the resulting splitting methods \eqref{SP1_FHN} and \eqref{SP3_FHN} are mean-square convergent. However, since $e^{A\Delta}=1$ and $\mu(A)=0$, Assumptions \ref{(A5)} and \ref{(A6)} do not hold, asymptotic bounds cannot be derived, and the preservation of ergodicity remains an open question. In particular, in contrast to the proposed approach, the distribution of the solution of the resulting linear SDE does not converge to a unique limit, and thus SDE \eqref{SDE_FHN} is not ergodic. \end{remark} \begin{figure} \begin{centering} \includegraphics[width=1.0\textwidth]{Fig0_paths_toy_cropped.pdf} \caption{Paths of SDE \eqref{eq:Toy} simulated under the considered numerical methods for large values of $X_0$, $\Delta=10^{-4}$ and $\sigma=1/2$. The grey reference paths are obtained under $\Delta=10^{-7}$ using the TEM method \eqref{eq:EM_Tamed}.} \label{fig:paths_toy} \end{centering} \end{figure} In the following, we illustrate how the choice of $X_0$ influences the behaviour of paths of the ergodic process $X(t)$ simulated under the different numerical methods.
If $X_0$ is large, the Euler-Maruyama method produces paths which are computationally pushed to $\pm\infty$ within a few iteration steps, even for very small values of $\Delta$. This is not the case for the tamed/truncated variants of this method. However, when $X_0$ is large, they may also react sensitively to $X_0$, even for small $\Delta$. This is illustrated in Figure \ref{fig:paths_toy}, where we report paths of SDE \eqref{eq:Toy} generated for $X_0=10^4$ (left panel) and $X_0=3\cdot 10^4$ (right panel), using $\Delta=10^{-4}$ and $\sigma=1/2$. The grey reference paths are simulated under $\Delta=10^{-7}$ using the TEM method \eqref{eq:EM_Tamed}. The choice of the reference method does not change the reported results. Moreover, all paths are generated under the same underlying pseudo random numbers. As desired, the paths obtained under the splitting methods are not deterred by the large value of $X_0$, and overlap with the reference path. In contrast, the Euler-Maruyama type methods introduce a delay before the respective paths reach the reference path. This behaviour deteriorates as $X_0$ increases. A similar picture is obtained for larger values of $\Delta$ and accordingly smaller values of $X_0$. Moreover, for some values of $X_0$, we observe that the DTrEM method \eqref{DTrEM} may produce spurious oscillations (figures not shown). See \cite{Kelly2018,Tretyakov2012}, where such a behaviour has also been observed. \section{Stochastic FitzHugh-Nagumo model} \label{sec:6_FHN} In this section, the proposed splitting methods are illustrated on the stochastic FHN model, a widely used neuronal model.
It is given by the following $2$-dimensional SDE \begin{equation}\label{FHN} d \begin{pmatrix} V(t) \\ U(t) \end{pmatrix} = \underbrace{\begin{pmatrix} \frac{1}{\epsilon}\Bigl(V(t)-V^3(t)-U(t)\Bigr) \\ \gamma V(t)-U(t)+\beta \end{pmatrix}}_\text{$:=F(X(t))$} dt \ + \ \underbrace{\begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{pmatrix}}_{:=\Sigma} dW(t), \quad X(0)=X_0, \quad t \in [0,T], \end{equation} with solution $X(t):=(V(t),U(t))^{\top}$ for $t \in [0,T]$. This equation has been used to model the firing activity of single neurons \cite{FitzHugh1961,Nagumo1962}. If the membrane voltage of the neuron is sufficiently high, it releases an action potential, also called a spike. The first component $(V(t))_{t \in [0,T]}$ describes the membrane voltage of the neuron at time $t$, while the second component $(U(t))_{t \in [0,T]}$ corresponds to a recovery variable modelling the channel kinetics. The parameter $\epsilon>0$ corresponds to the time scale separation of the two components and $\beta\geq0$ and $\gamma>0$ are position and duration parameters of an excitation, respectively. SDE \eqref{FHN} could be extended by adding a parameter, modelling an injected input current, to the first component of the drift. This parameter is typically set to zero, see, e.g., \cite{Ditlevsen2019,Leon2018,Quentin2020}, and thus neglected here. If it is not zero, a splitting method which considers a third subequation could be derived. \paragraph{Properties of the FHN model} If both noise intensities $\sigma_1$ and $\sigma_2$ are strictly positive, the model is elliptic. If $\sigma_1=0$, the diffusion term becomes $\Sigma dW(t)=(0,\sigma_2)^\top dW_2(t)$, corresponding to the notation in \eqref{eq:hypo_SDE}. In this case, due to the $U$-component entering the first entry of the drift $F(X(t))$, the model is hypoelliptic.
This is confirmed by the fact that \begin{equation}\label{eq:cond_hypo_FHN} \partial_u F_1(x) \sigma_2=-\frac{\sigma_2}{\epsilon} \neq 0, \end{equation} guaranteeing Condition \eqref{eq:cond_hypo}. We refer to \cite{Berglund2012,Bonaccorsi2008,Kelly2018,Muratov2008} and to \cite{Ditlevsen2019,Leon2018} for the consideration of the elliptic and hypoelliptic FHN model, respectively, and to \cite{Quentin2020} for an investigation of both cases. Moreover, it has been proved that the FHN model is ergodic, see, e.g., \cite{Bonaccorsi2008,Leon2018}. Here, we study this property under a restricted parameter space, for which SDE \eqref{FHN} satisfies the dissipativity condition \eqref{eq:dissipative} such that the function $L(x)=1+\norm{x}^2$ is a Lyapunov function meeting Condition~\eqref{eq:Lyapunov}. \begin{proposition}\label{prop:dissipative_FHN} Let \begin{equation*} \frac{1}{\epsilon}>\frac{1}{2}\left|\gamma-\frac{1}{\epsilon}\right| \quad \text{and} \quad 1-\beta>\frac{1}{2}\left|\gamma-\frac{1}{\epsilon}\right|. \end{equation*} Then, the drift $F$ of the FHN model \eqref{FHN} satisfies the dissipativity condition \eqref{eq:dissipative}. \end{proposition} \begin{proof} We have that \begin{eqnarray*} (F(x),x)=\Bigl( \begin{pmatrix} \frac{1}{\epsilon}(v-v^3-u) \\ \gamma v-u+\beta \end{pmatrix}, \begin{pmatrix} v \\ u \end{pmatrix} \Bigr) = \frac{1}{\epsilon}(v^2-v^4)+vu(\gamma-\frac{1}{\epsilon})-u^2+\beta u. \end{eqnarray*} Defining $c:=|\gamma-1/\epsilon|$ and using $2vu\leq v^2+u^2$, $u<1+u^2$ and $v^2-v^4\leq1-v^2$, we obtain \begin{eqnarray*} (F(x),x) &\leq& \frac{1}{\epsilon}(1-v^2)+\frac{c}{2}(v^2+u^2)-u^2+\beta(1+u^2) \\ &=& -v^2(\frac{1}{\epsilon}-\frac{c}{2})-u^2(1-\beta-\frac{c}{2})+\beta+\frac{1}{\epsilon}. 
\end{eqnarray*} Since \begin{equation*} \frac{1}{\epsilon}-\frac{c}{2}>0 \quad \text{and} \quad 1-\beta-\frac{c}{2}>0, \end{equation*} by assumption, it follows that \begin{equation*} (F(x),x)\leq \alpha-\delta\norm{x}^2, \end{equation*} where $\alpha=\beta+1/\epsilon>0$ and $\delta=\min\{ 1/\epsilon-c/2,1-\beta-c/2 \}>0$. \end{proof}\noindent Note that the conditions on the model parameters in Proposition \ref{prop:dissipative_FHN} are satisfied, e.g., if $\gamma=1/\epsilon$ and $\beta<1$. Such parameter settings may be relevant in applications, see Section \ref{sec:7_FHN}. \subsection{Splitting methods for the FHN model} The FHN model \eqref{FHN} is a semi-linear SDE of type \eqref{eq:semi_linear_SDE}. The choice of the matrix $A$ and the function $N(X(t))$ is not unique. While the locally Lipschitz term $-V^3(t)/\epsilon$ and the constant $\beta$ have to enter into $N(X(t))$, the goal is to allocate the remaining terms such that as many of the introduced assumptions as possible are satisfied. For the splitting methods to be $1$-step hypoelliptic, the term $-U(t)/\epsilon$ of the first component of the drift must enter into $AX(t)$. Moreover, shifting the term $\gamma V(t)$ to $AX(t)$ leads to a decoupling of the resulting ODE \eqref{ODE_FHN} such that its global solution can be derived explicitly, guaranteeing Assumption \ref{(A3)}. Thus, there are four strategies left, depending on whether the remaining terms $V(t)/\epsilon$ and $-U(t)$ enter into $AX(t)$ or $N(X(t))$. The only case where the matrix $A$ meets Assumption \ref{(A5)} (under a restricted parameter space) is when $-U(t)$ appears in $AX(t)$ and $V(t)/\epsilon$ in $N(X(t))$. Similar to the proposed splitting of SDE~\eqref{eq:Toy}, the resulting linear SDE \eqref{SDE_FHN} is then geometrically ergodic. In particular, it corresponds to a version of the well-studied damped stochastic harmonic oscillator whose matrix exponential $e^{At}$ and covariance matrix $C(t)$ have manageable expressions. 
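The dissipativity bound of Proposition \ref{prop:dissipative_FHN} can be spot-checked numerically. The sketch below (Python; test parameter values only, chosen with $\gamma=1/\epsilon$ and $\beta<1$) samples points and verifies $(F(x),x)\leq\alpha-\delta\norm{x}^2$ with the constants $\alpha$ and $\delta$ from the proof:

```python
import numpy as np

# Spot-check of (F(x), x) <= alpha - delta*|x|^2 with
# alpha = beta + 1/eps, delta = min(1/eps - c/2, 1 - beta - c/2),
# c = |gamma - 1/eps|. Test parameters with gamma = 1/eps, beta < 1.
rng = np.random.default_rng(42)
eps, beta = 0.5, 0.5
gamma = 1.0 / eps                     # c = 0, conditions clearly satisfied
c = abs(gamma - 1.0 / eps)
alpha = beta + 1.0 / eps
delta = min(1.0 / eps - c / 2.0, 1.0 - beta - c / 2.0)

def drift_dot_x(v, u):
    fv = (v - v**3 - u) / eps
    fu = gamma * v - u + beta
    return fv * v + fu * u

pts = rng.uniform(-10.0, 10.0, size=(10_000, 2))
lhs = drift_dot_x(pts[:, 0], pts[:, 1])
rhs = alpha - delta * (pts**2).sum(axis=1)
print(bool(np.all(lhs <= rhs + 1e-9)))  # True
```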
Therefore, we propose to choose the matrix $A$ and the function $N$ as follows \begin{equation}\label{eq:A_N_FHN} A=\begin{pmatrix} 0 & -\frac{1}{\epsilon} \\ \gamma & -1 \end{pmatrix}, \qquad N(X(t))=\begin{pmatrix} \frac{1}{\epsilon}\bigl( V(t)-V^3(t) \bigr) \\ \beta \end{pmatrix}. \end{equation} The resulting linear damped stochastic harmonic oscillator \eqref{SDE_FHN} with $A$ as in \eqref{eq:A_N_FHN} is weakly-, critically- or over-damped, depending on whether \begin{equation}\label{kappa} \kappa:=\frac{4\gamma}{\epsilon}-1 \end{equation} is positive, zero or negative, respectively \cite{Buckwar2019,Mao2011,Pokern2009}. In particular, the sign of $\kappa$ determines the shape of the exponential of the matrix $A$. If $\kappa=0$, \begin{equation*}\label{M1} e^{At}= e^{-{\frac{t}{2}}} \begin{pmatrix} 1+\frac{t}{2} & -\frac{t}{\epsilon} \\ \frac{\epsilon t}{4} & 1-\frac{t}{2} \end{pmatrix}. \end{equation*} If $\kappa>0$, \begin{equation*}\label{M2} e^{At}=e^{-\frac{t}{2}}\begin{pmatrix} \cos(\frac{1}{2}\sqrt{\kappa}t)+\frac{1}{\sqrt{\kappa}}\sin(\frac{1}{2}\sqrt{\kappa}t) & -\frac{2}{\epsilon\sqrt{\kappa}}\sin(\frac{1}{2}\sqrt{\kappa}t) \\ \frac{2\gamma}{\sqrt{\kappa}}\sin(\frac{1}{2}\sqrt{\kappa}t) & \cos(\frac{1}{2}\sqrt{\kappa}t)-\frac{1}{\sqrt{\kappa}}\sin(\frac{1}{2}\sqrt{\kappa}t) \end{pmatrix}. \end{equation*} If $\kappa<0$, the sine and cosine terms of the above expressions can be rearranged using the relations \begin{equation}\label{relations} \cos\left(\frac{1}{2}\sqrt{\kappa}t\right)=\cosh\left(\frac{1}{2}\sqrt{-\kappa}t\right) \quad \text{and} \quad \frac{1}{\sqrt{\kappa}}\sin\left(\frac{1}{2}\sqrt{\kappa}t\right)=\frac{1}{\sqrt{-\kappa}}\sinh\left(\frac{1}{2}\sqrt{-\kappa}t\right). \end{equation} Moreover, the covariance matrix $C(t)$ \eqref{eq:Cov} also depends on the sign of $\kappa$ and is given as follows. 
If $\kappa=0$, \begin{eqnarray*} c_{11}(t)&=& \frac{e^{-t}}{4\epsilon^2}\left( 4 \sigma_2^2 \left( -2+2e^t-t(2+t) \right) + \epsilon^2\sigma_1^2 \left( -10+10e^t-t(6+t) \right) \right), \\ c_{12}(t)&=&c_{21}(t)=\frac{e^{-t}}{8\epsilon}\left( -4\sigma_2^2t^2+\epsilon^2\sigma_1^2 \left( 4e^t-(2+t)^2 \right) \right), \\ c_{22}(t)&=&\frac{e^{-t}}{16}\left( 4\sigma_2^2 \left( -2+2e^t-(t-2)t \right) + \epsilon^2 \sigma_1^2 \left( -2+2e^t-t(2+t) \right) \right). \end{eqnarray*} If $\kappa>0$, \begin{eqnarray*} c_{11}(t)&=&\frac{\epsilon e^{-t}}{2\gamma\kappa} \biggl( -\frac{4\gamma}{\epsilon^2}( \sigma_1^2\gamma+\sigma_2^2\frac{1}{\epsilon}) + \kappa e^t (\sigma_1^2(1+\frac{\gamma}{\epsilon}) +\sigma_2^2\frac{1}{\epsilon^2}) \\ \nonumber && \hspace{0.5cm} + \Bigl( \sigma_1^2(1-\frac{3\gamma}{\epsilon}) +\sigma_2^2\frac{1}{\epsilon^2} \Bigr)\cos(\sqrt{\kappa}t) - \sqrt{\kappa} (\sigma_1^2(1-\frac{\gamma }{\epsilon})+\sigma_2^2\frac{1}{\epsilon^2}) \sin(\sqrt{\kappa}t) \biggr), \\ c_{12}(t)&=&c_{21}(t)=\frac{\epsilon e^{-t}}{2 \kappa } \biggl( \sigma_1^2\kappa e^t - \frac{2}{\epsilon}(\sigma_1^2 \gamma +\sigma_2^2\frac{1}{\epsilon}) \\ \nonumber && \hspace{0.5cm} + \Bigl( \sigma_1^2(1-\frac{2\gamma}{\epsilon}) + 2\sigma_2^2 \frac{1}{\epsilon^2} \Bigr) \cos(\sqrt{\kappa}t) -\sigma_1^2\sqrt{\kappa} \sin(\sqrt{\kappa}t) \biggr), \\ c_{22}(t)&=&\frac{\epsilon e^{-t}}{2\kappa} \biggl( (\sigma_2^2\frac{1}{\epsilon}+\sigma_1^2\gamma) \Bigl( \cos(\sqrt{\kappa}t)-\frac{4\gamma}{\epsilon}+\kappa e^t \Bigr) + (\sigma_2^2\frac{1}{\epsilon}-\sigma_1^2\gamma) \sqrt{\kappa} \sin(\sqrt{\kappa}t) \biggr). \end{eqnarray*} If $\kappa<0$, the relations \eqref{relations} can again be used to rewrite the above expressions accordingly. Note that parameter configurations typically considered in the literature fulfill $\kappa>0$, see, e.g., \cite{Ditlevsen2019,Leon2018,Quentin2020}. 
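The closed-form expressions above can be cross-checked numerically. The following Python sketch (NumPy; the paper's computations are in R) compares $e^{At}$ and $C(t)$ in the case $\kappa>0$ against a truncated Taylor series and an RK4 integration of the Lyapunov ODE $C'(t)=AC(t)+C(t)A^{\top}+\Sigma\Sigma^{\top}$, $C(0)=0$; all parameter values are arbitrary test choices, not values from the paper:

```python
import numpy as np

# Cross-check of the closed-form e^{At} and C(t) in the case kappa > 0.
eps, gamma, sig1, sig2, t = 0.1, 1.5, 0.3, 0.2, 0.5
A = np.array([[0.0, -1.0 / eps], [gamma, -1.0]])
kappa = 4.0 * gamma / eps - 1.0          # > 0 for these test values
Q = np.diag([sig1**2, sig2**2])          # Sigma Sigma^T
sk = np.sqrt(kappa)

# Closed-form matrix exponential, entrywise as in the text.
ch, sh = np.cos(0.5 * sk * t), np.sin(0.5 * sk * t)
E_closed = np.exp(-0.5 * t) * np.array(
    [[ch + sh / sk,           -2.0 * sh / (eps * sk)],
     [2.0 * gamma * sh / sk,  ch - sh / sk]])

# Reference e^{At} via a truncated Taylor series (converges quickly here).
E_ref, term = np.eye(2), np.eye(2)
for k in range(1, 60):
    term = term @ (A * t) / k
    E_ref += term

# Closed-form covariance entries for kappa > 0, as in the text.
co, si, et = np.cos(sk * t), np.sin(sk * t), np.exp(t)
pref = eps * np.exp(-t) / (2.0 * kappa)
c11 = (pref / gamma) * (
    -4.0 * gamma / eps**2 * (sig1**2 * gamma + sig2**2 / eps)
    + kappa * et * (sig1**2 * (1.0 + gamma / eps) + sig2**2 / eps**2)
    + (sig1**2 * (1.0 - 3.0 * gamma / eps) + sig2**2 / eps**2) * co
    - sk * (sig1**2 * (1.0 - gamma / eps) + sig2**2 / eps**2) * si)
c12 = pref * (
    sig1**2 * kappa * et
    - 2.0 / eps * (sig1**2 * gamma + sig2**2 / eps)
    + (sig1**2 * (1.0 - 2.0 * gamma / eps) + 2.0 * sig2**2 / eps**2) * co
    - sig1**2 * sk * si)
c22 = pref * (
    (sig2**2 / eps + sig1**2 * gamma) * (co - 4.0 * gamma / eps + kappa * et)
    + (sig2**2 / eps - sig1**2 * gamma) * sk * si)
C_closed = np.array([[c11, c12], [c12, c22]])

# Reference C(t): RK4 on the Lyapunov ODE C' = A C + C A^T + Q, C(0) = 0.
def rhs(C):
    return A @ C + C @ A.T + Q

C_ref, n = np.zeros((2, 2)), 2000
h = t / n
for _ in range(n):
    k1 = rhs(C_ref)
    k2 = rhs(C_ref + 0.5 * h * k1)
    k3 = rhs(C_ref + 0.5 * h * k2)
    k4 = rhs(C_ref + h * k3)
    C_ref += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Both discrepancies are close to zero (rounding / integration error only).
print(np.max(np.abs(E_closed - E_ref)), np.max(np.abs(C_closed - C_ref)))
```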
This is in agreement with the fact that, under $\kappa>0$, SDE \eqref{SDE_FHN} models a weakly damped system which describes oscillatory dynamics. The locally Lipschitz function $N$ in \eqref{eq:A_N_FHN} satisfies Assumptions \ref{(A1)} and \ref{(A2)}. \begin{proposition}\label{prop:A1_A2_N_FHN} Let $N:\mathbb{R}^2\to\mathbb{R}^2$ be as in \eqref{eq:A_N_FHN}. Then $N$ satisfies Assumptions \ref{(A1)} and \ref{(A2)}. \end{proposition} \begin{proof} The proof is given in Appendix \ref{appC_FHN}. \end{proof}\noindent \vspace{-0.063cm} Moreover, the explicit solution of ODE \eqref{ODE_FHN} with $N(X(t))$ as in \eqref{eq:A_N_FHN} is given by \begin{equation} X^{[2]}(t)=f(X_0^{[2]};t)= \begin{pmatrix} \frac{V_0^{[2]}}{\sqrt{e^{-\frac{2t}{\epsilon}}+(V_0^{[2]})^2\left(1-e^{-\frac{2t}{\epsilon}}\right)}} \\ \beta t + U_0^{[2]} \end{pmatrix}, \label{Exact_ODE_FHN2} \end{equation} and thus Assumption \ref{(A3)} is satisfied. The well-defined Lie-Trotter and Strang splitting methods for the FHN model \eqref{FHN} are then given by \eqref{SP1_FHN} and \eqref{SP3_FHN}, respectively, where the matrix exponential $e^{A\Delta}$, the covariance matrix $C(\Delta)$ and the function $f$ are as reported above. \subsection{Mean-square convergence} In the following proposition, we verify Assumptions \ref{(A4.1)} and \ref{(A4.2)} (under $\beta=0$), which are required to establish the mean-square boundedness of the splitting methods under Part I and Part~II of Lemma \ref{Lemma3}, respectively. \begin{proposition}\label{prop:FHN_A4} Let $f:\mathbb{R}^2\to\mathbb{R}^2$ be as in \eqref{Exact_ODE_FHN2}. Part I: Then $f$ satisfies Assumption \ref{(A4.1)}. Part II: If $\beta=0$, then $f$ satisfies Assumption \ref{(A4.2)}. \end{proposition} \begin{proof} The proof is given in Appendix \ref{appD_FHN}. 
\end{proof}\noindent Thus, using Theorem \ref{thm:convergence_FHN}, the Lie-Trotter \eqref{SP1_FHN} and Strang \eqref{SP3_FHN} splitting methods applied to the FHN model are mean-square convergent of order $1$. While for the Lie-Trotter splitting this result holds without any restrictions on the parameter space of the model, we require $\beta=0$ for the Strang method. However, since compositions based on fractional steps usually improve the performance of a splitting method rather than deteriorating it, it is expected that the Strang method is also mean-square convergent for any $\beta\geq 0$. This is confirmed in our experiments. \subsection{Structure preservation} \paragraph{1-step hypoellipticity} Recall that, if $\sigma_1=0$, the FHN model \eqref{FHN} is hypoelliptic. Due to the choice of the matrix $A$ as in \eqref{eq:A_N_FHN}, the linear stochastic subequation \eqref{SDE_FHN} of the splitting framework is also hypoelliptic. In particular, Condition \eqref{eq:cond_hypo}, which is given by \begin{equation*} \partial_u(Ax)_1\sigma_2=-\frac{\sigma_2}{\epsilon}\neq 0, \end{equation*} and coincides with \eqref{eq:cond_hypo_FHN} for the FHN model, holds. Therefore, the splitting methods \eqref{SP1_FHN} and \eqref{SP3_FHN} are $1$-step hypoelliptic and the Lie-Trotter splitting \eqref{SP1_FHN} yields a non-degenerate conditional Gaussian distribution according to Theorem \ref{thm:hypo_N_LT}. In particular, its conditional covariance matrix coincides with $C(\Delta)$ reported above, which is of full rank even if $\sigma_1=0$, independently of the value of $\kappa$. Moreover, the diagonal terms of $C(\Delta)$ are of the same order with respect to $\Delta$, which is also relevant for likelihood-based inference tools, see, e.g., \cite{Ditlevsen2019}. Note that, as for the mean-square convergence of the Lie-Trotter method, these results hold without any restrictions on the parameter space. 
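Assembled, a Lie-Trotter-type step and the resulting path simulation can be sketched as follows. This is a minimal Python illustration under an assumed composition order (exact nonlinear flow first, then the exact Gaussian transition of the linear SDE); the scheme \eqref{SP1_FHN} fixes its own order, and $e^{A\Delta}$ and $C(\Delta)$ are computed numerically here instead of from the closed forms. Parameter values are test choices in the range considered in Section \ref{sec:7_FHN}:

```python
import numpy as np

# Sketch of a Lie-Trotter-type path simulation for the FHN splitting.
rng = np.random.default_rng(1)
eps, gamma, beta, sig1, sig2, dt = 0.05, 5.0, 0.1, 0.1, 0.2, 1e-3
A = np.array([[0.0, -1.0 / eps], [gamma, -1.0]])
Q = np.diag([sig1**2, sig2**2])

# e^{A dt} by a truncated Taylor series.
E, term = np.eye(2), np.eye(2)
for k in range(1, 40):
    term = term @ (A * dt) / k
    E += term

# C(dt) by RK4 on the Lyapunov ODE C' = A C + C A^T + Q, C(0) = 0.
def rhs(C):
    return A @ C + C @ A.T + Q

C, n = np.zeros((2, 2)), 200
h = dt / n
for _ in range(n):
    k1 = rhs(C); k2 = rhs(C + 0.5 * h * k1)
    k3 = rhs(C + 0.5 * h * k2); k4 = rhs(C + h * k3)
    C += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
L = np.linalg.cholesky(C)           # full rank: non-degenerate noise

def f(x, t):
    # Exact flow of the nonlinear subequation (V' = (V - V^3)/eps, U' = beta).
    e = np.exp(-2.0 * t / eps)
    v = x[0] / np.sqrt(e + x[0]**2 * (1.0 - e))
    return np.array([v, x[1] + beta * t])

x = np.array([1e3, 0.0])            # deliberately large initial voltage
for _ in range(500):
    x = E @ f(x, dt) + L @ rng.standard_normal(2)

print(np.all(np.isfinite(x)), np.abs(x[0]) < 50.0)  # True True
```

Even from the large initial voltage $V_0=10^3$, the iterates stay bounded, in line with the robustness to $X_0$ reported in Section \ref{sec:7_FHN}.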
\paragraph{Geometric ergodicity} In the following, we establish the geometric ergodicity of the splitting methods applied to the FHN model under a restricted parameter space. In particular, we require $\beta=0$ and $\gamma=1/\epsilon$. Recall the validity of Assumption \ref{(A4.2)} (under $\beta=0$) from Proposition \ref{prop:FHN_A4}. Moreover, we verify Assumption \ref{(A5)} (under $\gamma=1/\epsilon$) in the following proposition. \begin{proposition}\label{prop:A5} Let $\gamma=1/\epsilon$ and $A\in\mathbb{R}^{2\times 2}$ be as in \eqref{eq:A_N_FHN}. Then, $A$ satisfies Assumption \ref{(A5)}. \end{proposition} \begin{proof} The proof is given in Appendix \ref{appF_FHN}. \end{proof} Note that, under $\gamma=1/\epsilon$, the logarithmic norm $\mu(A)=0$. Thus, Assumption \ref{(A6)} is not fulfilled and the asymptotic bounds of Corollary \ref{cor:asym_2nd_moment} cannot be derived. However, since Assumptions \ref{(A4.2)} and \ref{(A5)} hold under $\beta=0$ and $\gamma=1/\epsilon$, respectively, and, under these parameter restrictions, $L(x)=1+\norm{x}^2$ is a Lyapunov function for the FHN model \eqref{FHN} according to Proposition~\ref{prop:dissipative_FHN}, the splitting methods \eqref{SP1_FHN} and \eqref{SP3_FHN} satisfy the discrete Lyapunov condition of Definition \ref{def:Lyapunov_discrete}. This is a direct consequence of Theorem \ref{thm:Lyapunov_discrete}. Combined with the $1$-step hypoellipticity and a discrete irreducibility condition, which can be proved in the same way as done, e.g., in \cite{Ableidinger2017,Chevallier2020,Mattingly2002}, the splitting methods applied to the FHN model are geometrically ergodic.
Intuitively, the Lyapunov structure of the FHN model is kept by the numerical solutions, since the linear SDE \eqref{SDE_FHN} determined by the matrix $A$ in \eqref{eq:A_N_FHN} is geometrically ergodic, implying that the process $(X^{[1]}(t))_{t \in [0,T]}$ converges to a unique invariant distribution given by \begin{equation*} X^{[1]}(t) \xrightarrow[t \to \infty]{\mathcal{D}} \mathcal{N}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \frac{5}{2}\sigma_1^2+\frac{2}{\epsilon^2} \sigma_2^2 & \frac{\epsilon}{2} \sigma_1^2 \\ \frac{\epsilon}{2} \sigma_1^2 & \frac{\epsilon^2}{8} \sigma_1^2+\frac{1}{2}\sigma_2^2 \end{pmatrix} \right), \end{equation*} for $\kappa=0$, and \begin{equation*} X^{[1]}(t) \xrightarrow[t \to \infty]{\mathcal{D}} \mathcal{N}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \frac{\epsilon}{2\gamma}(\sigma_1^2+\frac{\gamma}{\epsilon}\sigma_1^2+\frac{1}{\epsilon^2}\sigma_2^2) & \frac{\epsilon}{2} \sigma_1^2 \\ \frac{\epsilon}{2} \sigma_1^2 & \frac{1}{2}(\epsilon\gamma\sigma_1^2+\sigma_2^2) \end{pmatrix} \right), \end{equation*} for $\kappa>0$. Since this fact holds without any restrictions on the parameters, it is expected that the splitting methods preserve this property for any values of $\gamma>0$ and $\epsilon>0$. This is confirmed in our experiments. \begin{remark} Another plausible choice of the subequations is \begin{equation*} A=\begin{pmatrix} 0 & -\frac{1}{\epsilon} \\ \gamma & \ \ 0 \end{pmatrix}, \qquad N(X(t))=\begin{pmatrix} \frac{1}{\epsilon}\bigl( V(t)-V^3(t) \bigr) \\ -U(t)+\beta \end{pmatrix}. \end{equation*} For this choice, Assumptions \ref{(A1)}-\ref{(A4.1)} related to the locally Lipschitz function $N$ are satisfied. Moreover, Assumption \ref{(A4.2)} also holds under $\beta=0$. Thus, the splitting methods are mean-square convergent. In addition, since the term $-U(t)/\epsilon$ still enters into $AX(t)$, they are also $1$-step hypoelliptic.
However, in this case, the linear SDE \eqref{SDE_FHN} corresponds to a version of the simple (undamped) harmonic oscillator which is not ergodic. In particular, its matrix exponential is given by \begin{equation*} e^{At}=\begin{pmatrix} \cos(\frac{\sqrt{\gamma}t}{\sqrt{\epsilon}}) & -\frac{1}{\sqrt{\epsilon\gamma}}\sin(\frac{\sqrt{\gamma}t}{\sqrt{\epsilon}}) \\ \sqrt{\epsilon\gamma}\sin(\frac{\sqrt{\gamma}t}{\sqrt{\epsilon}}) & \cos(\frac{\sqrt{\gamma}t}{\sqrt{\epsilon}}) \end{pmatrix}, \end{equation*} with $\norm{e^{A\Delta}}\geq 1$ and $\norm{e^{A\Delta}}=1$ under $\gamma=1/\epsilon$. \end{remark} \section{Numerical experiments for the FHN model} \label{sec:7_FHN} We now illustrate the performance of the proposed splitting methods in comparison with Euler-Maruyama type methods through a variety of numerical experiments carried out on the FHN model~\eqref{FHN}. First, the proved mean-square convergence order $1$ is illustrated numerically. Second, the ability of the different numerical methods to preserve the qualitative dynamics of neuronal spiking is analysed, in particular the correct amplitudes and frequencies of the underlying oscillations when the time step $\Delta$ is increased. Third, the robustness of the numerical methods to changes in the initial condition $X_0$, and how the choice of $X_0$ may influence the preservation of the phases of the underlying oscillations are analysed. All simulations are carried out in the computing environment R \cite{R}. 
\subsection{Convergence order} The mean-square convergence order can be illustrated by approximating the left-hand side of the inequality in \eqref{eq:ms_convergence_order} (for a fixed time $T$) with the root mean-squared error (RMSE) defined by \begin{equation}\label{eq:RMSE} \text{RMSE}(\Delta):=\left( \frac{1}{M} \sum_{l=1}^{M} \norm{X^l(T)-\widetilde{X}^l_\Delta(T)}^2 \right)^{1/2}, \end{equation} where $X^l(T)$ and $\widetilde{X}^l_\Delta(T)$ denote the $l$-th simulated path at a fixed time $T$ of the true process and the approximated process, respectively, for $l=1,\ldots,M$. \begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{Fig1_convergence_cropped2.pdf} \caption{Illustration of the mean-square convergence order on the FHN model \eqref{FHN} via the RMSE~\eqref{eq:RMSE}. All model parameters are set to $1$, $X_0=(0,0)^\top$ and $T=10$.} \label{fig:convergence} \end{centering} \end{figure} In Figure \ref{fig:convergence}, we report the RMSEs of the different numerical methods in log2 scale as a function of the time step $\Delta$ used to simulate $\widetilde{X}^l_\Delta(T)$. Since the true process is not known, the reference values $X^l(T)$ are simulated with the TEM method \eqref{eq:EM_Tamed} using the small time step $\Delta=2^{-14}$. We verified that using a different scheme for the simulation of the reference paths does not affect the results of the experiments. The approximated trajectories $\widetilde{X}^l_\Delta(T)$ are generated with the LT \eqref{SP1_FHN}, S \eqref{SP3_FHN}, TEM \eqref{eq:EM_Tamed}, DTEM \eqref{DTEM}, TrEM \eqref{TrEM} and DTrEM \eqref{DTrEM} method, respectively, under different choices of the time step, namely $\Delta=2^{-k}$, $k=6,\ldots,12$. We consider $T=10$, $M=10^3$, $X_0=(0,0)^{\top}$ and set all model parameters to $1$. The theoretical convergence order $1$ of the splitting methods, established in Theorem \ref{thm:convergence_FHN}, is confirmed numerically. 
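The observed orders can also be read off directly from the RMSE values reported in Table \ref{table1}, by regressing $\log_2(\text{RMSE})$ on $\log_2(\Delta)$; a short Python sketch:

```python
import numpy as np

# Empirical mean-square orders from the RMSE values in Table 1:
# slope of log2(RMSE) against log2(Delta), via least squares.
log2_dt = -np.arange(6.0, 13.0)      # Delta = 2^-6, ..., 2^-12
rmse = {
    "S":    [0.01348, 0.00689, 0.00331, 0.00165, 0.00082, 0.0004, 0.00021],
    "DTEM": [0.27024, 0.19696, 0.14577, 0.10613, 0.07643, 0.05456, 0.03926],
}
slopes = {name: np.polyfit(log2_dt, np.log2(vals), 1)[0]
          for name, vals in rmse.items()}
# S is close to order 1, DTEM close to order 1/2.
print({k: round(v, 2) for k, v in slopes.items()})
```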
The S splitting yields the smallest RMSEs among all considered numerical methods. The RMSEs of the LT splitting almost coincide with those of the two truncated Euler-Maruyama variants. The RMSEs of the tamed Euler-Maruyama methods are larger than those obtained under the splitting and truncated methods. The DTEM method yields the largest RMSEs, and for that method we only observe a convergence of order $1/2$. The latter finding is in agreement with the observations in \cite{Kelly2018}. All RMSEs are also reported in Table \ref{table1}. It can be observed that the RMSEs of the LT method lie slightly above those obtained under the truncated methods, which are identical (up to the reported precision). \begin{table} {\footnotesize \caption{RMSE \eqref{eq:RMSE} for different values of $\Delta$. All parameters are set to $1$, $X_0=(0,0)^\top$ and~$T=10$.} \vspace{-0.5cm} \label{table1} \begin{center} \scalebox{1.0}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\Delta$ & $\textrm{S}$ & $\textrm{LT}$ & $\textrm{TEM}$ & $\textrm{DTEM}$ & $\textrm{TrEM}$ & $\textrm{DTrEM}$ \\ \hline $2^{-6}$ & $0.01348$ & $0.01771$ & $0.03683$ & $0.27024$ & $0.01667$ & $0.01667$ \\ $2^{-7}$ & $0.00689$ & $0.00895$ & $0.01844$ & $0.19696$ & $0.00845$ & $0.00845$ \\ $2^{-8}$ & $0.00331$ & $0.0044$ & $0.00906$ & $0.14577$ & $0.00417$ & $0.00417$ \\ $2^{-9}$ & $0.00165$ & $0.00219$ & $0.00441$ & $0.10613$ & $0.00209$ & $0.00209$ \\ $2^{-10}$ & $0.00082$ & $0.0011$ & $0.00211$ & $0.07643$ & $0.00101$ & $0.00101$ \\ $2^{-11}$ & $0.0004$ & $0.00053$ & $0.00099$ & $0.05456$ & $0.000513$ & $0.000513$ \\ $2^{-12}$ & $0.00021$ & $0.00028$ & $0.00043$ & $0.03926$ & $0.00027$ & $0.00027$ \\ \hline \end{tabular}} \end{center}} \end{table} \begin{figure}[H] \begin{centering} \includegraphics[width=1.0\textwidth]{Fig2_paths_V_Int_Sub1_2gamma.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig2_paths_V_Int_Sub2_2gamma.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig2_paths_V_Int_Sub3_2gamma.pdf} \caption{Paths of
the $V$-component of the FHN model \eqref{FHN} simulated under the considered numerical methods for $X_0=(-1,0)^\top$, $\beta=\sigma_1=0.1$, $\sigma_2=0.2$, $\epsilon=0.05$, two different values of $\gamma$ and increasing time step $\Delta$.} \label{fig:paths} \end{centering} \end{figure} \subsection{Preservation of neuronal spiking dynamics: amplitudes and frequencies} \label{sec:7:2_FHN} In the following, we analyse the ability of the considered numerical methods to preserve the qualitative neuronal spiking dynamics of the FHN model. In particular, we investigate whether the amplitudes and frequencies of the neuronal oscillations are kept when increasing the time step $\Delta$. Throughout this and the next section, we set $\beta=\sigma_1=0.1$ and $\sigma_2=0.2$, and consider different values for $\gamma$ and $\epsilon$. These two parameters are of particular interest, because they regulate the spiking intensity of the neuron and separate the time scale of the two model components, respectively. When $\epsilon$ is small, both variables evolve on different time scales. This situation is often referred to as the ``stiff'' case, while larger values of $\epsilon$ refer to the ``nonstiff'' case, see, e.g., \cite{Stern2020}. Moreover, these parameters determine the value of $\norm{e^{A\Delta}}$, and thus the validity of Theorem \ref{thm:Lyapunov_discrete}. Since the true process is not available, all reference paths are obtained under the TEM method~\eqref{eq:EM_Tamed}, using the small time step $\Delta=2\cdot 10^{-5}$. Also in this case, the choice of the scheme used to simulate the reference paths does not affect the results of the experiments. In the following, the focus lies on the $V$-component of the equation, modelling the membrane voltage, which can be experimentally recorded with intracellular measurements. Similar results are obtained for the $U$-component.
In Figure \ref{fig:paths}, we report paths of the $V$-component of the FHN model generated under different values of the time step $\Delta$, but using the same underlying pseudo random numbers. An increase in $\gamma$ leads to an increase in the frequency of the oscillations, and thus in the number of released spikes. Both splitting methods yield almost overlapping paths as $\Delta$ increases, thus preserving the qualitative dynamics of the model, independently of the choice of the intensity parameter $\gamma$. In contrast, both tamed Euler-Maruyama methods underestimate the frequency and overestimate the amplitude of the neuronal oscillations as $\Delta$ increases, for both values of $\gamma$ under consideration. For similar observations regarding tamed methods, we refer to \cite{Kelly2017,Kelly2018}. Note also that their paths already start deviating from the reference paths for $\Delta=2\cdot 10^{-3}$, thus performing worse than the truncated Euler-Maruyama methods. For a deeper investigation of the neuronal spiking dynamics, we consider the spectral density of the $V$-component, which takes into account its autocovariance, and thus the dependence of the membrane voltage on previous epochs. It is given by \begin{equation}\label{spectrum} S_V(\nu)=\mathcal{F}\left\{ r_V \right\} (\nu)=\int\limits_{-\infty}^{\infty} r_V(\tau)e^{-i 2 \pi \nu \tau} \ d\tau, \end{equation} where $\mathcal{F}$ denotes the Fourier transform, $r_V$ the autocovariance function of the stochastic process $(V(t))_{t \in [0,T]}$ and the frequency $\nu$ can be interpreted as the number of oscillations in one time unit. We estimate the spectral density $S_V(\nu)$ with a smoothed periodogram estimator, see, e.g., \cite{Buckwar2019,Quinn2014}, based on paths generated over the time interval $[0,10^3]$. We use the R-function \texttt{spectrum} and set the required smoothing parameter to \texttt{span}$=0.3T$.
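As a self-contained illustration of the periodogram underlying this estimator, the dominant frequency of a noise-free test signal can be recovered from the squared modulus of its discrete Fourier transform. The sketch below is a raw (unsmoothed) Python version, whereas the analysis in the text uses the smoothed R estimator:

```python
import numpy as np

# Raw one-sided periodogram of a test signal with 5 oscillations per
# time unit; the peak frequency recovers f0.
dt, n, f0 = 0.01, 1000, 5.0                # 10 time units sampled at dt
t = np.arange(n) * dt
x = np.sin(2.0 * np.pi * f0 * t)
x = x - x.mean()

power = np.abs(np.fft.rfft(x))**2 / n      # periodogram ordinates
freqs = np.fft.rfftfreq(n, d=dt)
peak = freqs[np.argmax(power[1:]) + 1]     # skip the zero-frequency bin
print(peak)  # 5.0
```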
\begin{figure} \begin{centering} \includegraphics[width=1.0\textwidth]{Fig3_specDens_V_Int_Sub1_cropped.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig3_specDens_V_Int_Sub3_cropped.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig3_specDens_V_Int_Sub4_cropped.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig3_specDens_V_Int_Sub6_cropped.pdf} \caption{Estimates of the spectral density \eqref{spectrum} of the $V$-component of the FHN model \eqref{FHN} obtained under the considered numerical methods for $X_0=(0,0)^\top$, $\beta=\sigma_1=0.1$, $\sigma_2=0.2$, $\epsilon=0.05$, two different values of $\gamma$ and increasing time step $\Delta$.} \label{fig:specDens} \end{centering} \end{figure} \begin{figure}[H] \begin{centering} \includegraphics[width=1.0\textwidth]{Fig4_phaseOscillatory_Sub1_cropped.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig4_phaseOscillatory_Sub3_cropped.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig9_phaseOscillatory_Sub4_cropped.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig9_phaseOscillatory_Sub6_cropped.pdf} \caption{Phase portraits of the FHN model \eqref{FHN} simulated under the considered numerical methods for $X_0=(0,0)^\top$, $\beta=\sigma_1=0.1$, $\sigma_2=0.2$, $\epsilon=1$, $\gamma=20$ and increasing time step $\Delta$.} \label{fig:phase} \end{centering} \end{figure} The estimated spectral densities obtained under different values of $\gamma$ and different choices of the time step $\Delta$ are reported in Figure \ref{fig:specDens}. As desired, for a fixed $\gamma$, all spectral densities estimated from the paths generated under the splitting schemes are almost overlapping as $\Delta$ increases. In contrast, the frequency $\nu$ estimated under the Euler-Maruyama type methods decreases as $\Delta$ increases, and the height of the peaks, carrying information about the amplitude of the neuronal oscillations, increases with $\Delta$. 
Their performance deteriorates as $\gamma$ increases, with the truncated methods yielding better results than the tamed methods. Note also that the estimated frequencies are in agreement with those deduced from Figure \ref{fig:paths}. \begin{figure} \begin{centering} \includegraphics[width=1.0\textwidth]{Fig7_densityOscillatory_V_Sub1_cropped.pdf}\\ \includegraphics[width=1.0\textwidth]{Fig7_densityOscillatory_V_Sub2_cropped.pdf} \caption{Estimates of the invariant density \eqref{eq:kernel} of the $V$-component of the FHN model \eqref{FHN} obtained under the considered numerical methods for $X_0=(0,0)^\top$, $\beta=\sigma_1=0.1$, $\sigma_2=0.2$, $\epsilon=1$, $\gamma=20$ and increasing time step $\Delta$.} \label{fig:density} \end{centering} \end{figure} Moreover, all Euler-Maruyama type methods perform even worse in terms of second moment (amplitude) preservation when the parameter $\epsilon$ is increased, while the splitting methods preserve the qualitative behaviour of the model. This is illustrated in Figure~\ref{fig:phase}, where we increase $\epsilon$ to~$1$, fix $\gamma=20$ and report phase portraits of the system obtained under the different numerical methods for $\Delta=2\cdot10^{-4}$ and $\Delta=2\cdot10^{-2}$. Again, the splitting methods preserve the behaviour of the process $(X(t))_{t \in [0,T]}$ as $\Delta$ increases, while the Euler-Maruyama type methods produce larger orbits, overshooting the second moment of the process. In addition, we investigate the ability of the considered numerical methods to preserve the underlying invariant density of the process $(X(t))_{t \in [0,T]}$. In particular, we estimate the marginal invariant density of the $V$-component with a standard kernel density estimator given by \begin{equation}\label{eq:kernel} \pi_V(v)=\frac{1}{nh}\sum\limits_{i=1}^{n} \mathcal{K}\left( \frac{v-\widetilde{V}(t_i)}{h} \right), \end{equation} where $h$ denotes a smoothing bandwidth and $\mathcal{K}$ is a kernel function \cite{Pons2011}.
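A minimal Python version of the estimator \eqref{eq:kernel} with a Gaussian kernel reads as follows; it is applied to a Gaussian test sample (instead of an SDE path) so that the target density is known, and the bandwidth rule is an assumed default, not necessarily the one used in the text:

```python
import numpy as np

# Hand-rolled Gaussian kernel density estimator as in (eq:kernel);
# h follows Silverman's rule of thumb (an assumption).
rng = np.random.default_rng(0)
sample = rng.standard_normal(5000)
h = 1.06 * sample.std() * sample.size ** (-0.2)

def kde(v, data, h):
    u = (v - data) / h
    return np.exp(-0.5 * u**2).sum() / (data.size * h * np.sqrt(2.0 * np.pi))

est = kde(0.0, sample, h)
true_val = 1.0 / np.sqrt(2.0 * np.pi)      # N(0,1) density at v = 0
print(est)  # close to 0.3989
```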
Taking advantage of the ergodicity of the FHN model, the sample $\widetilde{V}(t_i)$, $i=1,\ldots,n$, in \eqref{eq:kernel} is obtained from a long-time simulation of a single path. We use the R-function \texttt{density}, which implements a kernel estimator as in~\eqref{eq:kernel}. In Figure \ref{fig:density}, we report the marginal invariant densities of the process $(V(t))_{t \in [0,T]}$ estimated via \eqref{eq:kernel} based on paths generated over the time interval $[0,10^4]$, for $\epsilon=1$, $\gamma=20$ and different values of $\Delta$. Both splitting methods yield reliable estimates for all values of $\Delta$ under consideration. In contrast, the densities obtained under the Euler-Maruyama type methods already deviate from the desired ones for $\Delta=2\cdot10^{-3}$, and suggest a transition from a unimodal to a bimodal density when $\Delta$ is further increased to $2\cdot 10^{-2}$. It is again visible that the Euler-Maruyama type methods overestimate the second moment, and thus the amplitudes of the process. Similar results are also obtained for the $U$-component. \begin{figure}[H] \begin{centering} \includegraphics[width=1.0\textwidth]{Fig6_V0_h_vertical3.pdf} \caption{Paths of the $V$- and $U$-components of the FHN model \eqref{FHN} simulated under the considered numerical methods for large values of $V_0$ ($U_0=0$), $\Delta=2\cdot 10^{-4}$, $\beta=\sigma_1=0.1$, $\sigma_2=0.2$, $\gamma=5$ and $\epsilon=0.05$. The grey reference paths are obtained under $\Delta=2\cdot10^{-7}$ using the TEM method~\eqref{eq:EM_Tamed}.} \label{fig:V0} \end{centering} \end{figure} \begin{figure}[H] \begin{centering} \includegraphics[width=1.0\textwidth]{Fig6_U0_h_vertical3.pdf} \caption{Paths of the $V$- and $U$-components of the FHN model \eqref{FHN} simulated under the considered numerical methods for large values of $U_0$ ($V_0=0$), $\Delta=2\cdot 10^{-4}$, $\beta=\sigma_1=0.1$, $\sigma_2=0.2$, $\gamma=5$ and $\epsilon=0.05$.
The grey reference paths are obtained under $\Delta=2 \cdot 10^{-7}$ using the TEM method~\eqref{eq:EM_Tamed}.} \label{fig:U0} \end{centering} \end{figure} \subsection{Impact of the initial condition: preservation of phases} Finally, we compare the considered numerical methods regarding their sensitivity to changes in $X_0$, as done in Section \ref{sec:5:2_FHN} for SDE \eqref{eq:Toy}. In particular, we illustrate that, when $V_0$ or $U_0$ are large, the considered Euler-Maruyama type methods do not correctly reproduce the phases of the underlying oscillations, even when the time step $\Delta$ is very small. In contrast, the splitting methods are less sensitive to changes in the starting condition. The impact of $V_0$ is shown in Figure \ref{fig:V0}, where we report paths of the $V$-component (left panels) and the $U$-component (right panels), simulated under the different numerical methods with $\Delta=2 \cdot 10^{-4}$, $U_0=0$, $V_0=10^3$ (top panels) and $V_0=10^4$ (bottom panels). The grey reference path is simulated under $\Delta=2\cdot10^{-7}$ using the TEM method \eqref{eq:EM_Tamed}. As before, the results are not influenced by the choice of the numerical method used to generate the reference paths. The underlying parameter values are the same as in Section \ref{sec:7:2_FHN}, setting $\gamma=5$ and $\epsilon=0.05$. As desired, the splitting methods are barely influenced by $V_0$, even when it is very large, with paths overlapping with the reference paths for all $t$ under consideration. The main reason for this preservative behaviour lies in the exact treatment of the locally Lipschitz ODE via \eqref{Exact_ODE_FHN2}. In particular, for the considered values $\epsilon=0.05$ and $\Delta=2\cdot10^{-4}$ we obtain $V^{[2]}(t_1)\approx 11.2$ already after the first iteration for both $V_0=10^3$ and $V_0=10^4$. 
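The flow \eqref{Exact_ODE_FHN2} and the quoted value of roughly $11.2$ are easy to verify numerically; a short Python sketch (the RK4 comparison uses arbitrary test parameters):

```python
import math

# Check of the explicit flow of the nonlinear subequation
# V' = (V - V^3)/eps, U' = beta (U-part omitted: it is just U0 + beta*t).
def flow_V(v0, t, eps):
    e = math.exp(-2.0 * t / eps)
    return v0 / math.sqrt(e + v0**2 * (1.0 - e))

# RK4 reference for the V-component with arbitrary test parameters.
eps, v0, t_end = 0.5, 2.0, 0.4

def g(x):
    return (x - x**3) / eps

v, n = v0, 4000
h = t_end / n
for _ in range(n):
    k1 = g(v); k2 = g(v + 0.5*h*k1); k3 = g(v + 0.5*h*k2); k4 = g(v + h*k3)
    v += (h / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)
print(abs(v - flow_V(v0, t_end, eps)))   # close to zero: flow solves the ODE

# The worked values from the text: eps = 0.05, Delta = 2e-4 map both
# V_0 = 1e3 and V_0 = 1e4 to roughly 11.2 in a single substep.
print(round(flow_V(1e3, 2e-4, 0.05), 1), round(flow_V(1e4, 2e-4, 0.05), 1))  # 11.2 11.2
```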
In contrast, the Euler-Maruyama type methods introduce a delay before the generated paths reach the oscillatory dynamics, a behaviour which deteriorates as $V_0$ increases. Moreover, they also do not preserve the phases of the oscillations, introducing a shift. The DTrEM method reaches the correct oscillatory dynamics, though shifted, almost as fast as the splitting methods for $V_0=10^{4}$, but fails for $V_0=10^3$. This is indicated by the brown vertical lines in the top panels of Figure \ref{fig:V0}. In that case, this method produces almost constant values $\widetilde{V}(t_i)\approx 10^3$, and thus does not enter the oscillatory regime. Spurious oscillations were obtained for other parameter combinations, as also observed in \cite{Kelly2018,Tretyakov2012}. Similar observations can be drawn from Figure \ref{fig:U0}, where we set $V_0=0$ and consider $U_0=10^3$ (top panels) and $U_0=10^4$ (bottom panels). From the top panels of Figure \ref{fig:U0}, it can be observed that the truncated Euler-Maruyama methods preserve the initial behaviour of the process for $U_0=10^3$. However, they both fail when $U_0=10^4$, with the TrEM method \eqref{TrEM} yielding shifted trajectories and the DTrEM method \eqref{DTrEM} not entering into the oscillatory regime. The latter fact is indicated by the brown vertical lines in the bottom panels of Figure \ref{fig:U0}. The tamed Euler-Maruyama methods lead to shifted paths in both considered scenarios. In contrast, the splitting methods overlap with the grey reference paths of the $U$-component for all $t$ under consideration, preserving the phases of the oscillations. \begin{remark} Note that the only considered combination of $\gamma$ and $\epsilon$ in this section for which Assumption \ref{(A5)}, and thus Theorem \ref{thm:Lyapunov_discrete}, holds is $\gamma=1/\epsilon=20$. However, we do not observe a difference in the quality of the splitting methods depending on the combination of these parameters.
Intuitively, this is because the underlying linear SDE \eqref{SDE_FHN} with matrix $A$ as in \eqref{eq:A_N_FHN}, i.e., the damped stochastic oscillator, is geometrically ergodic for all $\gamma>0$ and $\epsilon>0$. \end{remark} \section{Conclusion} \label{sec:8_FHN} We construct explicit and reliable numerical splitting methods to approximate the solutions of semi-linear SDEs with additive noise and globally one-sided Lipschitz continuous drift coefficients which are allowed to grow polynomially at infinity. We prove that the derived splitting methods are mean-square convergent of order $1$, under an assumption on the explicit solution of the deterministic subequation containing the locally Lipschitz drift term. In particular, two versions of the proof of the boundedness of the second moment of the splitting methods, each requiring a different assumption, are provided. In contrast to existing explicit mean-square convergent Euler-Maruyama type methods, which may also achieve a convergence rate of order $1$, the proposed splitting methods preserve important structural properties of the model. First, they provide a more accurate approximation of the noise structure of the SDE through the covariance matrix of the explicit solution of the stochastic subequation. In particular, while the conditional covariance matrix of Euler-Maruyama type methods only contains information from the diffusion matrix $\Sigma$, the splitting methods also rely on the matrix $A$ of the semi-linear drift. This is particularly beneficial when the SDE is hypoelliptic. While the conditional covariance matrix of the existing methods is degenerate in that case, we establish the $1$-step hypoellipticity of the constructed splitting methods, meaning that they admit a smooth transition density in every iteration step.
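The difference between the two conditional covariance structures can be made concrete. The sketch below integrates the Lyapunov ODE $\dot C(s) = AC(s) + C(s)A^\top + \Sigma\Sigma^\top$, whose value at $s=\Delta$ equals $\int_0^\Delta e^{As}\Sigma\Sigma^\top e^{A^\top s}\,\mathrm{d}s$, the covariance used by the splitting methods. The parameter values and the exact forms of $A$ and $\Sigma$ are illustrative assumptions (not the paper's exact matrix from \eqref{eq:A_N_FHN}), with noise acting only on the second component, as in the hypoelliptic setting:

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

# gamma, eps, sigma and the form of A below are assumptions for this sketch.
gamma, eps, sigma, dt = 5.0, 0.05, 0.5, 2e-4
A = [[-1.0 / eps, -1.0 / eps], [gamma, -1.0]]
SS = [[0.0, 0.0], [0.0, sigma ** 2]]  # Sigma Sigma^T: noise acts on U only

# Euler integration of dC/ds = A C + C A^T + Sigma Sigma^T from s = 0 to s = dt.
n = 1000
h = dt / n
C = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(n):
    dC = mat_add(mat_add(mat_mul(A, C), mat_mul(C, transpose(A))), SS)
    C = mat_add(C, [[h * dC[i][j] for j in range(2)] for i in range(2)])

em_cov = [[dt * SS[i][j] for j in range(2)] for i in range(2)]
print(det(C) > 0)        # True: splitting covariance is non-degenerate
print(det(em_cov) == 0)  # True: Euler-Maruyama covariance is singular
```

Even over a single small step, the coupling through $A$ propagates noise into the first component (at order $\Delta^3$), so the splitting covariance has full rank, while the Euler-Maruyama covariance $\Delta\Sigma\Sigma^\top$ stays singular.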
In particular, the Lie-Trotter splitting yields non-degenerate Gaussian transition densities, a feature which is advantageous within likelihood-based estimation techniques, where the existing numerical methods cannot be applied due to the degenerate covariance matrix \cite{Ditlevsen2019,Melnykova2018,Pokern2009}. Second, Euler-Maruyama type methods do not preserve the geometric ergodicity of the process. As a consequence, they are not robust to changes in the initial condition, yield poor approximations of the underlying invariant distribution, or do not preserve the moments of the process. In contrast, the proposed splitting methods are proved to preserve the Lyapunov structure of the SDE, as long as $\norm{e^{A\Delta}}<1$ for all $\Delta\in(0,\Delta_0]$. In addition, if the logarithmic norm satisfies $\mu(A)<0$, they are proved to have asymptotically bounded second moments. In the one-dimensional case, closed-form expressions for the precise bounds of the second moments of the splitting methods are derived and illustrated on a cubic model problem. We also consider the FHN model, a well-known equation used to describe the firing activity of single neurons. The geometric ergodicity of the proposed splitting methods applied to this equation is established under a restricted parameter space. Third, we illustrate on the FHN model that, in contrast to Euler-Maruyama type methods, the proposed splitting methods preserve the amplitudes, frequencies and phases of neuronal oscillations, even for large time steps. Since the considered Euler-Maruyama type methods do converge, their lack of structure preservation becomes less visible when very small time steps are used. However, the use of significantly smaller time steps results in drastically higher computational costs, making these methods highly inefficient and, consequently, computationally infeasible within simulation-based inference algorithms, as previously illustrated in \cite{Buckwar2019}.
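On a one-dimensional cubic toy model, a full Lie-Trotter step takes only a few lines. The model $dX = (aX - X^3)\,dt + \sigma\,dW$, its splitting into the Ornstein-Uhlenbeck part $dX = aX\,dt + \sigma\,dW$ and the cubic ODE $\dot X = -X^3$, and the parameter values below are all assumptions for this sketch (the paper's cubic model problem may differ):

```python
import math, random

def lie_trotter_step(x, a, sigma, dt, rng):
    # (1) Exact flow of the cubic ODE dX/dt = -X^3 over one step:
    x = x / math.sqrt(1.0 + 2.0 * dt * x * x)
    # (2) Exact OU transition: mean e^{a*dt} x, variance sigma^2 (e^{2a*dt}-1)/(2a).
    var = sigma**2 * (math.exp(2.0 * a * dt) - 1.0) / (2.0 * a)
    return math.exp(a * dt) * x + math.sqrt(var) * rng.gauss(0.0, 1.0)

rng = random.Random(0)
a, sigma, dt = -1.0, 1.0, 0.1
x = 1e6                      # deliberately huge initial condition
xs = [x]
for _ in range(5000):
    x = lie_trotter_step(x, a, sigma, dt, rng)
    xs.append(x)
second_moment = sum(v * v for v in xs[1000:]) / len(xs[1000:])
print(second_moment)  # stays bounded despite X0 = 1e6
```

The first cubic substep maps $X_0=10^6$ to about $2.2$, so the chain enters the stationary regime immediately; an explicit Euler drift step $x - \Delta x^3$ started at the same value would instead explode.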
Among all considered methods, the Strang splitting shows the best performance in terms of preserving the qualitative dynamics of neuronal spiking. This makes the method particularly beneficial when used, e.g., to simulate large networks of neurons, or when embedded within simulation-based inference procedures. Several generalisations of the considered approach are possible. The proposed splitting methods can, e.g., be applied to the stochastic Van der Pol oscillator \cite{VanderPol1920,VanderPol1926}, whose investigation leads to similar numerical results. Moreover, they may be extended to SDEs with multiplicative noise, e.g., to $\Sigma(X(t))=\sqrt{X(t)}$ or $\Sigma(X(t))=\sigma X(t)$, $\sigma>0$, where the stochastic subequation of the splitting framework corresponds to the square-root process (also known as the Cox-Ingersoll-Ross model \cite{Cox1985}) or to the geometric Brownian motion, respectively. A prominent example of the latter is the stochastic Ginzburg-Landau equation arising from the theory of superconductivity \cite{Ginzburg1959,Hutzenthaler2011}. Furthermore, the proposed splitting methods may be extended to SDEs with more general functions $N$, which requires relaxing Assumption \ref{(A3)}. In addition, one may extend the range of structural properties which characterise the quality of the considered numerical methods.
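For the multiplicative-noise extension mentioned above, the stochastic subequation again admits an exact solution. A sketch of the exact geometric Brownian motion transition (the drift parameter $\mu$ and this interface are illustrative assumptions, not the paper's notation):

```python
import math

def gbm_step(x, mu, sigma, dt, z):
    # Exact transition of dX = mu X dt + sigma X dW over one step of size dt,
    # with z a standard normal draw.
    return x * math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)

# Sanity check: with sigma = 0 the step reduces to exponential growth.
print(gbm_step(1.0, 0.5, 0.0, 0.2, 0.0))  # = exp(0.1) ~ 1.105
```

Because this step samples from the exact lognormal transition law, a splitting scheme built on it preserves positivity of the solution by construction, which Euler-Maruyama type updates do not.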
# Hodges' estimator

In statistics, Hodges' estimator[1] (or the Hodges–Le Cam estimator[2]), named for Joseph Hodges, is a famous[3] counterexample of an estimator that is "superefficient", i.e. it attains smaller asymptotic variance than regular efficient estimators. The existence of such a counterexample is the reason for the introduction of the notion of regular estimators.

Hodges' estimator improves upon a regular estimator at a single point. In general, any superefficient estimator may surpass a regular estimator at most on a set of Lebesgue measure zero.[4]

## Construction

Suppose $\hat{\theta}_n$ is a "common" estimator for some parameter $\theta$: it is consistent, and converges to some asymptotic distribution $L_\theta$ (usually this is a normal distribution with mean zero and variance which may depend on $\theta$) at the $\sqrt{n}$-rate:

$$\sqrt{n}\,(\hat{\theta}_n - \theta)\ \xrightarrow{d}\ L_\theta.$$

Then Hodges' estimator $\hat{\theta}_n^H$ is defined as[5]

$$\hat{\theta}_n^H = \begin{cases} \hat{\theta}_n, & \text{if } |\hat{\theta}_n| \geq n^{-1/4}, \\ 0, & \text{if } |\hat{\theta}_n| < n^{-1/4}. \end{cases}$$

This estimator is equal to $\hat{\theta}_n$ everywhere except on the small interval $[-n^{-1/4}, n^{-1/4}]$, where it is equal to zero. It is not difficult to see that this estimator is consistent for $\theta$, and its asymptotic distribution is[6]

$$n^{\alpha}(\hat{\theta}_n^H - \theta)\ \xrightarrow{d}\ 0 \quad \text{when } \theta = 0,$$
$$\sqrt{n}\,(\hat{\theta}_n^H - \theta)\ \xrightarrow{d}\ L_\theta \quad \text{when } \theta \neq 0,$$

for any $\alpha \in \mathbb{R}$. Thus this estimator has the same asymptotic distribution as $\hat{\theta}_n$ for all $\theta \neq 0$, whereas for $\theta = 0$ the rate of convergence becomes arbitrarily fast. This estimator is superefficient, as it surpasses the asymptotic behaviour of the efficient estimator $\hat{\theta}_n$ at least at one point, $\theta = 0$. In general, superefficiency may only be attained on a subset of measure zero of the parameter space $\Theta$.

## Example

[Figure: The mean square error (times $n$) of Hodges' estimator. The blue curve corresponds to $n = 5$, purple to $n = 50$, and olive to $n = 500$.[7]]

Suppose $x_1, \ldots, x_n$ is an i.i.d. sample from a normal distribution $N(\theta, 1)$ with unknown mean but known variance. Then the common estimator for the population mean $\theta$ is the arithmetic mean of all observations, $\bar{x}$. The corresponding Hodges' estimator is $\hat{\theta}_n^H = \bar{x} \cdot \mathbf{1}\{|\bar{x}| \geq n^{-1/4}\}$, where $\mathbf{1}\{\cdot\}$ denotes the indicator function.

The mean square error (scaled by $n$) associated with the regular estimator $\bar{x}$ is constant and equal to 1 for all $\theta$. At the same time, the mean square error of Hodges' estimator $\hat{\theta}_n^H$ behaves erratically in the vicinity of zero, and even becomes unbounded as $n \to \infty$. This demonstrates that Hodges' estimator is not regular, and its asymptotic properties are not adequately described by limits of the form ($\theta$ fixed, $n \to \infty$).

## Notes

1. Vaart (1998, p. 109)
2. Kale (1985)
3. Bickel et al. (1998, p. 21)
4. Vaart (1998, p. 116)
5. Stoica & Ottersten (1996, p. 135)
6. Vaart (1998, p. 109)
7. Vaart (1998, p. 110)

### References

- Bickel, Peter J.; Klaassen, Chris A.J.; Ritov, Ya'acov; Wellner, Jon A. (1998). Efficient and Adaptive Estimation for Semiparametric Models. Springer: New York. ISBN 0-387-98473-9.
- Kale, B.K. (1985). "A note on the super efficient estimator". Journal of Statistical Planning and Inference. 12: 259-263. doi:10.1016/0378-3758(85)90074-6.
- Stoica, P.; Ottersten, B. (1996). "The evil of superefficiency". Signal Processing. 55: 133-136. doi:10.1016/S0165-1684(96)00159-4.
- Vaart, A. W. van der (1998). Asymptotic Statistics. Cambridge University Press. ISBN 978-0-521-78450-4.
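The construction above can be sketched numerically. Assuming the normal-mean setting from the example (sample mean thresholded at $n^{-1/4}$):

```python
import random

def hodges(xs):
    # Hodges' estimator: the sample mean, zeroed out on [-n^(-1/4), n^(-1/4)].
    n = len(xs)
    mean = sum(xs) / n
    return mean if abs(mean) >= n ** (-0.25) else 0.0

rng = random.Random(1)
n = 10_000
at_zero = hodges([rng.gauss(0.0, 1.0) for _ in range(n)])
away = hodges([rng.gauss(1.0, 1.0) for _ in range(n)])
print(at_zero)  # 0.0: |mean| ~ n^(-1/2) falls below the n^(-1/4) threshold
print(away)     # ~1.0: the estimator agrees with the sample mean
```

At $\theta = 0$ the sample mean is of order $n^{-1/2}$, well inside the shrinking window $[-n^{-1/4}, n^{-1/4}]$, so the estimator collapses to exactly zero; away from zero the thresholding is asymptotically inactive.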
package com.delware.classhub.Activities; import android.content.DialogInterface; import android.media.MediaRecorder; import android.os.Bundle; import android.os.Handler; import android.os.SystemClock; import android.support.v7.app.ActionBar; import android.support.v7.app.AlertDialog; import android.support.v7.app.AppCompatActivity; import android.util.Log; import android.view.View; import android.widget.Button; import android.widget.TextView; import android.widget.Toast; import com.coremedia.iso.boxes.Container; import com.delware.classhub.DatabaseModels.AudioRecordingModel; import com.delware.classhub.R; import com.delware.classhub.Singletons.SingletonSelectedClass; import com.googlecode.mp4parser.authoring.Movie; import com.googlecode.mp4parser.authoring.Track; import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder; import com.googlecode.mp4parser.authoring.container.mp4.MovieCreator; import com.googlecode.mp4parser.authoring.tracks.AppendTrack; import java.io.File; import java.io.IOException; import java.io.RandomAccessFile; import java.nio.channels.FileChannel; import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Calendar; import java.util.LinkedList; import java.util.List; import pl.droidsonroids.gif.GifDrawable; import pl.droidsonroids.gif.GifImageView; /** * Overview: This class allows users to create an audio recording * and associate the audio recording with a particular class * @author Matt Del Fante */ public class RecordAudioActivity extends AppCompatActivity { //allows audio recordings to be created private MediaRecorder m_recorder = null; //the audio recording that is being created private AudioRecordingModel m_audioRecording = null; //gets set to true if any issues occur when creating an audio recording private boolean m_problemOccurred = false; //true when the audio recording is paused, false if not private boolean m_isPaused = false; //all of the audio files that are created every time the user pauses an 
audio recording //and then resumes the audio recording private List<String> m_audioRecordingFiles = null; //the TextView that shows the current time of the recording private TextView m_timerText = null; //various times needed to be display the correct time on the timer text view private long m_startTime = 0L; private long m_timeInMs = 0L; private long m_timeSwapBuff = 0L; private long m_updateTime = 0L; //handler to update the text view of the time private Handler m_timeHandler = null; //thread to update the text view of the time private Runnable m_timeThread = null; //The holder of the gif that plays when audio is being recorded private GifImageView m_gifView = null; //The gif that plays when audio is being recorded private GifDrawable m_gifDrawable = null; //This needs to implement the media recorder stuff @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); //overwrite the action bar for this activity with a different one getSupportActionBar().setDisplayOptions(ActionBar.DISPLAY_SHOW_CUSTOM); getSupportActionBar().setCustomView(R.layout.default_action_bar_layout); setContentView(R.layout.activity_record_audio); //Set the content of the action bar title TextView actionBarTextView = (TextView) findViewById(R.id.defaultActionBarTitle); actionBarTextView.setText("Record Audio"); m_audioRecording = new AudioRecordingModel(); m_isPaused = false; m_problemOccurred = false; m_audioRecordingFiles = new ArrayList<>(); m_timerText = (TextView) findViewById(R.id.timerValue); m_timeHandler = new Handler(); //handles updating the timer text m_timeThread = new Runnable() { @Override public void run() { m_timeInMs = SystemClock.uptimeMillis() - m_startTime; m_updateTime = m_timeSwapBuff + m_timeInMs; int seconds = (int) (m_updateTime/1000); int minutes = seconds/60; seconds %= 60; int milliseconds = ((int) (m_updateTime%1000)) / 10; m_timerText.setText("" + minutes + ":" + String.format("%02d", seconds) + ":" + String.format("%02d", 
milliseconds)); m_timeHandler.postDelayed(this, 0); } }; //Handles starting the gif try { m_gifView = (GifImageView) findViewById(R.id.recordAudioGif); m_gifDrawable = new GifDrawable(getResources(), R.drawable.microphone); m_gifDrawable.setSpeed(0.5f); m_gifView.setImageDrawable(m_gifDrawable); } catch (IOException e) { Log.i("LOG: ", "The audio recording gif was not able to be played."); } //Handles starting the audio recording try { beginRecordingAudio(); m_startTime = SystemClock.uptimeMillis(); m_timeHandler.postDelayed(m_timeThread, 0); } catch (Exception e) { //couldn't create the recording, display an error message Toast.makeText(getApplicationContext(), "Failed to record audio.", Toast.LENGTH_SHORT).show(); //go back to the AudioRecordingsActivity finish(); } } @Override protected void onDestroy() { super.onDestroy(); stopAudioRecording(); m_timeSwapBuff += m_timeInMs; m_timeHandler.removeCallbacks(m_timeThread); //delete all of the audio recording files for (String fileName : m_audioRecordingFiles) { File f = new File(fileName); if (!f.delete()) Log.i("LOG: ", "The audio recording: " + fileName + " was not deleted."); } } /** * Begins recording a .mp4 audio file and saves it to internal storage on the device. * @throws IOException if the media recorder isn't set up properly. 
*/ private void beginRecordingAudio() throws IOException { if (m_recorder != null) m_recorder.release(); //set up the media recorder to record mp4s m_recorder = new MediaRecorder(); m_recorder.setAudioSource(MediaRecorder.AudioSource.MIC); m_recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); m_recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); //If the date is 12/14/17 and the time is 2:14 pm, the fileName will read: /20171214_141422.mp4 String fileName = "/" + new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()) + ".mp4"; m_audioRecordingFiles.add(getFilesDir()+fileName); m_recorder.setOutputFile(m_audioRecordingFiles.get(m_audioRecordingFiles.size() - 1)); m_recorder.prepare(); // Recording is now started m_recorder.start(); } /** * Stops recording audio. */ private void stopAudioRecording() { if (m_recorder != null) { m_recorder.stop(); m_recorder.release(); m_recorder = null; } } /** * Either allows an audio recording to be paused or resumed depending on what * context the method is called in. * @param view The view that invokes the method. */ public void pauseContinueAudioRecordingHandler(View view) { Button b = (Button) findViewById(R.id.pauseContinueAudioRecordingButton); //if audio recording is paused if (m_isPaused) { //start another audio recording try{ beginRecordingAudio(); //restart the timer m_startTime = SystemClock.uptimeMillis(); m_timeHandler.postDelayed(m_timeThread, 0); //restart the gif m_gifDrawable.start(); b.setText("Pause Recording"); m_isPaused = false; } catch (IOException e) { Toast.makeText(getApplicationContext(), "The audio recording was unable to continue.", Toast.LENGTH_SHORT).show(); Log.i("LOG: ", "The audio recording: was not able to be continued. 
Exception: " + e.getMessage()); finishAudioRecording(null); } } else { //end the current audio recording stopAudioRecording(); //end the timer m_timeSwapBuff += m_timeInMs; m_timeHandler.removeCallbacks(m_timeThread); //stop the gif m_gifDrawable.stop(); b.setText("Continue Recording"); m_isPaused = true; } } /** * Finishes recording audio for the current audio recording. Invokes a method that displays * an alert dialog which provides the option for the user to delete the audio recording or * save the audio recording. * @param view The view that invokes the method. */ public void finishAudioRecording(View view) { stopAudioRecording(); if (!m_isPaused) { //stop the timer m_timeHandler.removeCallbacks(m_timeThread); //stop the gif m_gifDrawable.stop(); } //merge all of the audio recording files into one mergeMediaFiles(true); //now I am done with all of the non-merged files, so delete them for (String fileName : new ArrayList<>(m_audioRecordingFiles)) { File f = new File(fileName); if (!f.delete()) Log.i("LOG: ", "The audio recording: " + fileName + " was not deleted."); else m_audioRecordingFiles.remove(fileName); } //if no problems occurred if (!m_problemOccurred) //ask the user what he or she would like to do with the audio recording promptUserFoAction(); else { Toast.makeText(getApplicationContext(), "The audio recording was unable to be created.", Toast.LENGTH_SHORT).show(); finish(); } } /** * Displays an alert dialog which gives the user an option to delete the audio recording or * save the audio recording. The method performs the corresponding actions depending on which * action the user selects.
*/ private void promptUserFoAction() { //the unique prefix for each class' audio recordings String uniquePrefix = SingletonSelectedClass.getInstance().getSelectedClass().getId() + "audio"; String filePath = getFilesDir() + "/" + uniquePrefix + "_" + m_audioRecording.getName() + ".mp4"; final File f = new File(filePath); //simple alert dialog for confirmation final AlertDialog.Builder adb = new AlertDialog.Builder(this); adb.setTitle("What Would You Like To Do?"); adb.setMessage("Do you want to save the audio recording or delete the audio recording?"); adb.setPositiveButton("Save", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { //add the associated class to the model and save it to the database m_audioRecording.setAssociatedClass(SingletonSelectedClass.getInstance().getSelectedClass()); m_audioRecording.save(); Toast.makeText(getApplicationContext(), "The audio recording was successfully saved!", Toast.LENGTH_SHORT).show(); //go back to the AudioRecordingsActivity finish(); } }); adb.setNegativeButton("Delete", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { confirmDeletionOfAudioRecording(f, adb); } }); //makes it so the user can't press the back button on his or her device adb.setCancelable(false); adb.show(); } /** * Makes an alert dialog box and displays it prompting the user they are sure * they want to delete the audio recording. If the user is sure, the audio recording * is deleted. Else, nothing happens. * @param f The file that contains the audio recording. 
* @param prevAdb The previous dialog box that was displayed */ private void confirmDeletionOfAudioRecording(final File f, final AlertDialog.Builder prevAdb) { //simple alert dialog for confirmation new AlertDialog.Builder(this) .setTitle("Wait...") .setMessage("Are you sure you want to delete the audio recording?") .setPositiveButton("Yes", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { String toastMsg; //if deleting the audio recording was successful if (f.delete()) toastMsg = "The audio recording was successfully deleted!"; else toastMsg = "An issue occurred deleting the audio recording"; Toast.makeText(getApplicationContext(), toastMsg, Toast.LENGTH_SHORT).show(); //go back to the AudioRecordingsActivity finish(); } }) .setNegativeButton("No", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { //show the previous alert dialog builder prevAdb.show(); } }) .setCancelable(false) .show(); } /** * Merges multiple audio recording files into one file. This is how the pause and resume * functionality works. Every time an audio recording is resumed, a new audio recording * is created. So this method merges all of the audio recording files into one. * @param isAudio A flag representing if the audio recording is an audio recording or not */ private void mergeMediaFiles(boolean isAudio) { try { String mediaKey = isAudio ? 
"soun" : "vide"; List<Movie> listMovies = new ArrayList<>(); for (String filename : m_audioRecordingFiles) { listMovies.add(MovieCreator.build(filename)); } List<Track> listTracks = new LinkedList<>(); for (Movie movie : listMovies) { for (Track track : movie.getTracks()) { if (track.getHandler().equals(mediaKey)) { listTracks.add(track); } } } Movie outputMovie = new Movie(); if (!listTracks.isEmpty()) { outputMovie.addTrack(new AppendTrack(listTracks.toArray(new Track[listTracks.size()]))); } Container container = new DefaultMp4Builder().build(outputMovie); String targetFile = createFilePath(); FileChannel fileChannel = new RandomAccessFile(String.format(targetFile), "rw").getChannel(); container.writeContainer(fileChannel); fileChannel.close(); } catch (IOException e) { m_problemOccurred = true; Log.e("LOG: ", "Error merging media files. Exception: "+e.getMessage()); } } /** * Creates a string that represents a path to the internal storage of the phone where the * audio recording will be saved. * @return The string representation of the path to the audio recording file. */ private String createFilePath() { String timeStamp = new SimpleDateFormat("MM-dd-yyyy_HHmmss").format(Calendar.getInstance().getTime()); m_audioRecording.setName(timeStamp); //the modified class name removes all characters that aren't valid file names String uniquePrefix = SingletonSelectedClass.getInstance().getSelectedClass().getId() + "audio"; //the fileName has the format /uniquePrefix_12-16-17_141422.mp4 String fileName = "/" + uniquePrefix + "_" + timeStamp + ".mp4"; return getFilesDir() + fileName; } }
Step 1: If you have never attended Reedley College, the Madera Community College Center, the Oakhurst Community College Center, or if you have had a break in enrollment since the last time you attended, you must complete the Reedley College Admission Application. Step 2: Activate your Reedley College student email account. Step 3: Complete the Free Application for Federal Student Aid (FAFSA) or Dream Act Application (for undocumented and nonresident students) BEFORE submitting your scholarship application in order to be considered for need-based scholarships. Step 4: Go to Scholarship Manager to create your account and complete the scholarship application. High School Transcripts: current high school seniors and continuing college students with fewer than 12 college units completed must upload their high school transcripts to the Scholarship Application. Unofficial copies are acceptable. College Transcripts: current State Center Community College District (SCCCD) students with 12 or more college units completed must upload their SCCCD transcript to the Scholarship Application. Students who have attended another college/university outside of SCCCD and have completed 12 or more college units must upload prior college transcripts. Unofficial copies are acceptable. Letters of Recommendation: You are required to upload at least one Letter of Recommendation to your Scholarship Application, although some scholarships may require TWO or more Letters of Recommendation. Contact the person(s) you will be requesting a recommendation from and give them plenty of time to write the letter for you. Good people to use as recommenders are instructors, counselors, or employers. Autobiographical Information: Part of the Scholarship Application requires you to provide autobiographical information, such as your family background, educational and career goals, and other information about yourself to share with the Selection Committee.
We recommend that you type your responses in a word processor (e.g., Microsoft Works or Word), then copy and paste them into your application. For tips to help you with writing your autobiographical information, refer to the Scholarship Catalog or contact the Writing Center. Students attending Reedley College during the award year may apply. Our donors designate specific criteria based on academic achievement, educational and career goals, and financial need. By completing the Reedley College Scholarship Application you will be matched to all scholarships for which you qualify. The Reedley College Scholarship Application opens October 1; the deadline is March 2 at 11:59 p.m.
// djaodjin-upload.js (function (root, factory) { if (typeof define === 'function' && define.amd) { // AMD. Register as an anonymous module. define(['exports', 'jQuery'], factory); } else if (typeof exports === 'object' && typeof exports.nodeName !== 'string') { // CommonJS factory(exports, require('jQuery')); } else { // Browser true globals added to `window`. factory(root, root.jQuery); // If we want to put the exports in a namespace, use the following line // instead. // factory((root.djResources = {}), root.jQuery); } }(typeof self !== 'undefined' ? self : this, function (exports, jQuery) { (function ($) { "use strict"; function Djupload(el, options){ this.element = $(el); this.options = options; this.init(); } /* UI element to upload files directly to S3. <div data-complete-url=""> </div> */ Djupload.prototype = { _csrfToken: function() { var self = this; if( self.options.csrfToken ) { return self.options.csrfToken; } return getMetaCSRFToken(); }, _uploadSuccess: function(file, resp) { var self = this; self.element.trigger("djupload.success", resp.location); if( self.options.uploadSuccess && {}.toString.call(self.options.uploadSuccess) === '[object Function]' ) { self.options.uploadSuccess(file, resp, self.element); } else { if( resp.detail ) { showMessages(resp.detail, "success"); } else { showMessages([self.options.uploadSuccessMessage( file.name, resp.location)], "success"); } } return true; }, _uploadError: function(file, resp) { var self = this; self.element.trigger("djupload.error", [file.name, resp]); if( self.options.uploadError ) { self.options.uploadError(file, resp, self.element); } else { showErrorMessages(resp); } }, _uploadProgress: function(file, progress) { var self = this; self.element.trigger("djupload.progress", [file.name, progress]); if( self.options.uploadProgress ) { self.options.uploadProgress(file, progress, self.element); } return true; }, init: function(){ var self = this; if( !self.options.uploadUrl ) { console.warn("[djupload]
uploading assets will not work because 'uploadUrl' is undefined."); return; }
        if( self.options.mediaPrefix !== "" &&
            !self.options.mediaPrefix.match(/\/$/)){
            self.options.mediaPrefix += "/";
        }
        if( self.options.uploadUrl.indexOf("/api/auth/") >= 0 ) {
            $.ajax({
                method: "GET",
                url: self.options.uploadUrl +
                    (self.options.acl === "public-read" ? "?public=1" : ""),
                dataType: "json",
                contentType: "application/json; charset=utf-8",
                success: function(data) {
                    var parser = document.createElement('a');
                    parser.href = data.location;
                    self.options.uploadUrl = parser.host + "/";
                    if( parser.protocol ) {
                        self.options.uploadUrl = parser.protocol + "//" +
                            self.options.uploadUrl;
                    }
                    self.options.mediaPrefix = parser.pathname;
                    if( self.options.mediaPrefix === 'undefined' ||
                        self.options.mediaPrefix === null ) {
                        self.options.mediaPrefix = "";
                    }
                    if( self.options.mediaPrefix !== "" &&
                        self.options.mediaPrefix.match(/^\//)){
                        self.options.mediaPrefix =
                            self.options.mediaPrefix.substring(1);
                    }
                    if( self.options.mediaPrefix !== "" &&
                        !self.options.mediaPrefix.match(/\/$/)){
                        self.options.mediaPrefix += "/";
                    }
                    self.options.accessKey = data.access_key;
                    self.options.policy = data.policy;
                    self.options.amzCredential = data.x_amz_credential;
                    self.options.amzDate = data.x_amz_date;
                    self.options.amzServerSideEncryption =
                        data.x_amz_server_side_encryption;
                    self.options.securityToken = data.security_token;
                    self.options.signature = data.signature;
                    self.initDropzone();
                },
                error: function(resp) {
                    showErrorMessages(resp);
                }
            });
        } else {
            self.initDropzone();
        }
    },

    initDropzone: function() {
        var self = this;
        var dropzoneUrl = (self.options.accessKey ? self.options.uploadUrl :
            (self.element.attr("data-complete-url") ?
                self.element.attr("data-complete-url") :
                self.options.uploadUrl));
        if( !dropzoneUrl ) {
            showErrorMessages(self.options.configError);
            throw new Error(self.options.configError);
        }
        self.element.dropzone({
            paramName: self.options.uploadParamName,
            url: dropzoneUrl,
            maxFilesize: self.options.uploadMaxFileSize,
            clickable: self.options.uploadClickableZone,
            createImageThumbnails: false,
            previewTemplate: "<div></div>",
            init: function() {
                if( self.options.accessKey ) {
                    // We are going to remove extra input fields that AWS
                    // would reject (ex: csrftoken).
                    var fields = this.element.querySelectorAll(
                        "input, textarea, select, button");
                    for( var idx = 0; idx < fields.length; ++idx ) {
                        if( fields[idx].getAttribute("name") &&
                            (fields[idx].getAttribute("name") !==
                             self.options.uploadParamName) ) {
                            fields[idx].parentNode.removeChild(fields[idx]);
                        }
                    }
                }
                this.on("sending", function(file, xhr, formData){
                    if( self.options.accessKey ) {
                        formData.append(
                            "key", self.options.mediaPrefix + file.name);
                        formData.append("policy", self.options.policy);
                        formData.append(
                            "x-amz-algorithm", "AWS4-HMAC-SHA256");
                        formData.append(
                            "x-amz-credential", self.options.amzCredential);
                        formData.append("x-amz-date", self.options.amzDate);
                        formData.append("x-amz-security-token",
                            self.options.securityToken);
                        formData.append(
                            "x-amz-signature", self.options.signature);
                        if( self.options.acl ) {
                            formData.append("acl", self.options.acl);
                        } else {
                            formData.append("acl", "private");
                        }
                        if( self.options.amzServerSideEncryption ) {
                            formData.append("x-amz-server-side-encryption",
                                self.options.amzServerSideEncryption);
                        } else if( !self.options.acl ||
                                   self.options.acl !== "public-read" ) {
                            formData.append("x-amz-server-side-encryption",
                                "AES256");
                        }
                        var ext = file.name.slice(
                            file.name.lastIndexOf('.')).toLowerCase();
                        if( ext === ".jpg" ) {
                            formData.append("Content-Type", "image/jpeg");
                        } else if( ext === ".png" ) {
                            formData.append("Content-Type", "image/png");
                        } else if( ext === ".gif" ) {
                            formData.append("Content-Type", "image/gif");
                        } else if( ext === ".mp4" ) {
                            formData.append("Content-Type", "video/mp4");
                        } else {
                            formData.append(
                                "Content-Type", "binary/octet-stream");
                        }
                    } else {
                        formData.append(
                            "csrfmiddlewaretoken", self._csrfToken());
                        var data = self.element.data();
                        for( var key in data ) {
                            if( data.hasOwnProperty(key) &&
                                key !== 'djupload' ) {
                                formData.append(key, data[key]);
                            }
                        }
                    }
                });
                this.on("success", function(file, response){
                    if( self.options.accessKey ) {
                        // With a direct upload to S3, we need to build
                        // a custom response with location url ourselves.
                        response = {
                            location: file.xhr.responseURL +
                                self.options.mediaPrefix + file.name
                        };
                        // We will also call back a completion url
                        // on the server.
                        var completeUrl = self.element.attr(
                            "data-complete-url");
                        if( completeUrl ) {
                            var data = {};
                            [].forEach.call(self.element[0].attributes,
                                function(attr) {
                                    if (/^data-/.test(attr.name)) {
                                        var camelCaseName = attr.name.substr(
                                            5).replace(/-(.)/g,
                                            function ($0, $1) {
                                                return $1.toUpperCase();
                                            });
                                        data[camelCaseName] = attr.value;
                                    }
                                });
                            for( var key in data ) {
                                if( data.hasOwnProperty(key) &&
                                    key !== 'djupload' ) {
                                    response[key] = data[key];
                                }
                            }
                            $.ajax({
                                type: "POST",
                                url: completeUrl,
                                beforeSend: function(xhr) {
                                    xhr.setRequestHeader(
                                        "X-CSRFToken", self._csrfToken());
                                },
                                data: JSON.stringify(response),
                                dataType: "json",
                                contentType:
                                    "application/json; charset=utf-8",
                                success: function(resp) {
                                    self._uploadSuccess(file, resp);
                                },
                                error: function(resp) {
                                    self._uploadError(file, resp);
                                }
                            });
                        } else {
                            self._uploadSuccess(file, response);
                        }
                    } else {
                        self._uploadSuccess(file, response);
                    }
                });
                this.on("error", function(file, message){
                    self._uploadError(file, message);
                });
                this.on("uploadprogress", function(file, progress){
                    self._uploadProgress(file, progress);
                });
            }
        });
    }
};

$.fn.djupload = function(options) {
    var opts = $.extend( {}, $.fn.djupload.defaults, options );
    return this.each(function() {
        if (!$.data(this, "djupload")) {
            $.data(this, "djupload", new Djupload(this, opts));
        }
    });
};

$.fn.djupload.defaults = {
    // location
    uploadUrl: null,
    mediaPrefix: "",
    uploadZone: "body",
    uploadClickableZone: true,
    uploadParamName: "file",
    uploadMaxFileSize: 250,
    // Django upload
    csrfToken: null,
    // S3 direct upload
    accessKey: null,
    securityToken: null,
    acl: null, // defaults to "private".
    policy: "",
    signature: null,
    amzCredential: null,
    amzDate: null,
    amzServerSideEncryption: null,
    // callback
    uploadSuccess: null,
    uploadError: null,
    uploadProgress: null,
    // messages
    uploadSuccessMessage: function(filename, location) {
        return '"$(unknown)" uploaded successfully to ${location}'.replace(
            '$(unknown)', filename).replace(
            '${location}', location);
    },
    configError: "instantiated djupload() with no uploadUrl specified."
};

Dropzone.autoDiscover = false;

})(jQuery);
}));
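Before posting a file directly to S3, the plugin's "sending" handler picks a `Content-Type` form field from the file extension. A minimal standalone sketch of that mapping, assuming a helper name `contentTypeFor` that is ours for illustration (the plugin inlines this logic in the handler):

```javascript
// Standalone sketch of djupload's Content-Type selection for S3 uploads.
// `contentTypeFor` is an illustrative name, not part of the plugin's API.
function contentTypeFor(filename) {
    // Same extension extraction the plugin uses on file.name.
    var ext = filename.slice(filename.lastIndexOf('.')).toLowerCase();
    if( ext === ".jpg" ) { return "image/jpeg"; }
    if( ext === ".png" ) { return "image/png"; }
    if( ext === ".gif" ) { return "image/gif"; }
    if( ext === ".mp4" ) { return "video/mp4"; }
    // Anything unrecognized is uploaded as an opaque binary blob.
    return "binary/octet-stream";
}
```

Note that only these exact extensions are matched, so a file named `photo.jpeg` (rather than `photo.jpg`) falls through to `binary/octet-stream`.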
# About this Book

DON'T SWEAT THE TEST! Having a problem with math? Nervous about your test? This book introduces all the topics you need to know about geometry. Learn great test-taking tips for solving multiple choice, short-answer, and show-your-work questions. A great book for students to use on their own, or with parents, teachers, or tutors.

"This book is an excellent tool for a student to review basic geometry skills or master new concepts." —Patricia Leonard, MS, Series Math Consultant, Middle School Math Professional

* * *

Need More Practice? FREE WORKSHEETS AVAILABLE AT ENSLOW.COM

* * *

About The Author

**AUTHOR REBECCA WINGARD-NELSON** has worked in public, private, and home-school mathematics education. She has been involved in various educational math projects, including developing and writing state assessment tests, exit exams, and proficiency tests, as well as writing and editing textbooks and workbooks.

# CONTENTS

Cover
About this Book
Title Page

* * *

Test-Taking Tips
Be Prepared!
Practice
Test Day!
The Night Before
The Big Day
Test Time!
Let's Go!

_Chapter 1:_ Basic Terms Points Lines TEST TIME: Multiple Choice Test-Taking Hint Definitions Line Segments TEST TIME: Show Your Work Test-Taking Hint
_Chapter 2:_ Line Relationships Definitions TEST TIME: Multiple Choice Test-Taking Hint Definitions Skew TEST TIME: Explain Your Answer Test-Taking Hint
_Chapter 3:_ Angles Definitions Intersecting Lines TEST TIME: Multiple Choice Test-Taking Hint Definitions Angle Names TEST TIME: Explain Your Answer Test-Taking Hint
_Chapter 4:_ Angle Relationships Definitions Congruent Angles Test-Taking Hint TEST TIME: Show Your Work Angle Sums Definitions TEST TIME: Multiple Choice Test-Taking Hint
_Chapter 5:_ Intersecting Lines Definitions TEST TIME: Multiple Choice Test-Taking Hint TEST TIME: Show Your Work Test-Taking Hint Complete Circle
_Chapter 6:_ Transversals Definitions Corresponding Angles TEST TIME: Multiple Choice Test-Taking Hint Congruent Angles TEST TIME: Explain Your Answer
_Chapter 7:_ Polygons Definitions TEST TIME: Show Your Work Definitions Convex or Concave TEST TIME: Multiple Choice Test-Taking Hint
_Chapter 8:_ Triangles Definitions Naming Triangles Test-Taking Hint TEST TIME: Show Your Work Definitions Test-Taking Hint Test-Taking Hint TEST TIME: Multiple Choice
_Chapter 9:_ Triangle Angle Sums Interior Angles TEST TIME: Multiple Choice Test-Taking Hint TEST TIME: Show Your Work Test-Taking Hint TEST TIME: Explain Your Answer
_Chapter 10:_ The Pythagorean Theorem Definitions Powers TEST TIME: Multiple Choice Test-Taking Hint Definitions Right Angles TEST TIME: Show Your Work Test-Taking Hint
_Chapter 11:_ Quadrilaterals Definitions TEST TIME: Multiple Choice Test-Taking Hint Test-Taking Hint TEST TIME: Explain Your Answer Special Trapezoids Test-Taking Hint
_Chapter 12:_ Diagonal Properties Definitions TEST TIME: Show Your Work Test-Taking Hint TEST TIME: Explain Your Answer Test-Taking Hint Definitions Using Algebra
_Chapter 13:_ Other Polygon Angles Interior Angle Sums Definitions TEST TIME: Multiple Choice TEST TIME: Show Your Work Test-Taking Hint Definitions Central Angle Measure
_Chapter 14:_ Perimeter Definitions Adding to Find Perimeter Test-Taking Hint Test-Taking Hint TEST TIME: Multiple Choice Rectangles, Parallelograms, and Kites Test-Taking Hint TEST TIME: Show Your Work
_Chapter 15:_ Area Definitions Rectangles Area Formulas TEST TIME: Show Your Work Area Formulas Parallelograms TEST TIME: Explain Your Answer
_Chapter 16:_ Area: Rhombi and Kites Area Formulas Rhombus TEST TIME: Multiple Choice TEST TIME: Show Your Work Test-Taking Hint TEST TIME: Explain Your Answer
_Chapter 17:_ Area: Triangles Area Formula Find the Area Test-Taking Hint TEST TIME: Show Your Work TEST TIME: Multiple Choice Test-Taking Hint TEST TIME: Explain Your Answer
_Chapter 18:_ Circles Definitions Identifying Parts TEST TIME: Show Your Work Test-Taking Hint
_Chapter 19:_ Circle Measurements Definitions Circumference TEST TIME: Multiple Choice Formulas Area TEST TIME: Multiple Choice Test-Taking Hint
_Chapter 20:_ Combined Figures Polygon Combinations TEST TIME: Multiple Choice TEST TIME: Show Your Work Test-Taking Hint Using Area
_Chapter 21:_ Symmetry Definitions Line Symmetry TEST TIME: Multiple Choice Test-Taking Hint Rotational Symmetry Test-Taking Hint TEST TIME: Explain Your Answer
_Chapter 22:_ Similarity and Congruence Definition Compare Figures TEST TIME: Show Your Work Test-Taking Hint Definition Determine Similarity TEST TIME: Multiple Choice
_Chapter 23:_ The Coordinate Plane Definitions Quadrants TEST TIME: Multiple Choice Plotting Points Test-Taking Hint Test-Taking Hint TEST TIME: Explain Your Answer
_Chapter 24:_ Translations Coordinate Figures Definition TEST TIME: Multiple Choice Test-Taking Hint TEST TIME: Show Your Work Test-Taking Hint

* * *

Further Reading
Internet Addresses
Index
Note to Our Readers
Copyright
More Books from Enslow

# Test-Taking Tips

Be Prepared!

Most of the topics that are found on math tests are taught in the classroom. Paying attention in class, taking good notes, and keeping up with your homework are the best ways to be prepared for tests.

Practice

Use test preparation materials, such as flash cards and timed worksheets, to practice your basic math skills. Take practice tests. They show the kinds of items that will be on the actual test. They can show you what areas you understand, and what areas you need more practice in.

Test Day!

The Night Before

Relax. Eat a good meal. Go to bed early enough to get a good night's sleep. Don't cram on new material! Review the material you know is going to be on the test. Get what you need ready. Sharpen your pencils, set out things like erasers, a calculator, and any extra materials, like books, protractors, tissues, or cough drops.

The Big Day

Get up early enough to eat breakfast and not have to hurry. Wear something that is comfortable and makes you feel good. Listen to your favorite music. Get to school and class on time. Stay calm. Stay positive.

Test Time!

Before you begin, take a deep breath. Focus on the test, not the people or things around you. Remind yourself to do your best and not worry about what you do not know. Work through the entire test, but don't spend too much time on any one problem. Don't rush, but move quickly, answering all of the questions you can do easily. Go back a second time and answer the questions that take more time. Read each question completely. Read all the answer choices. Eliminate answers that are obviously wrong. Read word problems carefully, and decide what the problem is asking. Check each answer to make sure it is reasonable. Estimate numbers to see if your answer makes sense. Concentrate on the test. Stay focused. If your attention starts to wander, take a short break. Breathe. Relax. Refocus. Don't get upset if you can't answer a question. Mark it, then come back to it later. When you finish, look back over the entire test. Are all of the questions answered? Check as many problems as you can.
Look at your calculations and make sure you have the same answer on the blank as you do on your worksheet.

Let's Go!

Three common types of test problems are covered in this book: Multiple Choice, Show Your Work, and Explain Your Answer. Tips on how to solve each, as well as common errors to avoid, are also presented. Knowing what to expect on a test and what is expected of you will have you ready to ace every math test you take.

# Basic Terms

Points

**A point is an exact location. Points have no length or width. A point may be thought of as a dot on a piece of paper.**

**_What point is shown in red?_**

**Step 1:** Find the red point. The capital letter next to the point, B, names the point. In geometry, points are normally named by a capital letter. Point B is shown in red.

Lines

**A line is a straight set of points that travel in two directions without end. Lines can be thought of as the straight lines that can be drawn with a ruler on a piece of paper. Arrows are used to show that the line extends in both directions forever.**

* * *

TEST TIME: Multiple Choice

_**Which of the following is NOT a way to name the line shown?**_

Lines can be named by any two points on the line. Answer a uses the word _line_ and two points. Answer d uses the symbol for line and two points. Both of these are correct. A line can also be named using a lowercase letter. Answer b uses the lowercase letter _f_. Answer c is not a correct way to name a line.

**Solution:** The correct answer is c.

* * *

Test-Taking Hint

**Multiple choice questions give you a set of answers. You choose which of the given answers is correct.**

Definitions

**line segment: Part of a line with two endpoints.**

**ray: Part of a line that starts at one endpoint and extends without end in one direction.**

**plane: A flat surface that extends in all directions without end.**

Line Segments

_**Name three line segments that have point B as one endpoint.**_

**Step 1:** Line segments are named using two endpoints in any order. The symbol for a line segment is similar to that for a line, but it has no arrow. The segment that has endpoints at point A and B can be named or . Choose any three segments that have point B as an endpoint.

* * *

TEST TIME: Show Your Work

_**Draw a figure that includes line LX and ray XY.**_

Each person who draws this figure may have a different looking answer. Some problems can have different answers and still be correct. In this problem, line LX and ray XY share point X. Draw line LX to begin. A ray is named using the endpoint first and then any other point on the ray. Ray XY, or has an endpoint at point X, then extends forever in a direction that goes through point Y. Draw a point Y, and extend a ray through it.

* * *

Test-Taking Hint

**Questions that do not give you solutions to choose from are sometimes called "Show Your Work" questions. You may need to fill in a blank, make a drawing or graph, or show the equations or work that you used to find your answer.**

# Line Relationships

Definitions

**intersecting lines: Lines that meet or cross.**

**point of intersection: The place where two lines or line segments meet.**

**Lines AB and CD are intersecting lines. The point of intersection is point S.**

**perpendicular lines: Intersecting lines in the same plane that form right (90°) angles. The symbol ⊥ means "is perpendicular to."**

**oblique lines: Intersecting lines in the same plane that do not form right (90°) angles. Lines RT and VW are oblique lines.**

* * *

TEST TIME: Multiple Choice

_**Which two lines are perpendicular?**_

Perpendicular lines form right angles. These are sometimes called square corners. There are four square corners where lines k and s intersect.

**Solution:** The correct answer is a.

* * *

Test-Taking Hint

**Mark your answers clearly. On tests that have circles to fill in, make sure you are neat and fill in the full circle.**

Definitions

**parallel lines: Lines in the same plane that never intersect. Parallel lines are always the same distance apart. The symbol || means "is parallel to."**

**Line r || line s**

Skew

_**Will the orange lines ever cross? How are they related?**_

**Step 1:** Answer the first question. Will the orange lines ever cross? If the lines are extended forever in both directions, will they cross? No, the orange lines will never cross.

**Step 2:** Answer the next question. How are the lines related? Are they in the same plane? Imagine a flat surface like a table top. Can one flat surface contain both of these orange lines? No. The lines are in different planes. Lines that do not cross and are in different planes are called skew. (Lines that never cross and are in the same plane are parallel.) The orange lines are skew.

* * *

TEST TIME: Explain Your Answer

_**Lines a, b, and m lie in the same plane. Line a is parallel to line b. Line m is perpendicular to line a. Explain how line m and line b are related.**_

**Solution:** When two lines are parallel and a third line is perpendicular to one of the lines, then it is also perpendicular to the second of the two lines. Line a and line b are parallel. Line m is perpendicular to line a, so it is also perpendicular to line b.

* * *

Test-Taking Hint

**Some problems ask a question and ask you to explain your answer. Others just ask for an explanation. Your score is based on both a correct response and how clearly you explain your reasoning. If there is no direct computation, try to include an example when you can.**

# Angles

Definitions

**angle: A figure formed by two lines or rays with a common endpoint. Angles are named using the angle symbol, ∠.**

**vertex: The corner point in a geometric figure. In an angle, the vertex is the common endpoint. The vertex of ∠ ABC is point B.**

Intersecting Lines

_**How many angles are formed when two lines intersect? What do all of the angles have in common?**_

**Step 1:** Use a diagram to understand and answer the problem. Draw two intersecting lines.

**Step 2:** Count the angles. There are four angles. They each have the same vertex.

* * *

TEST TIME: Multiple Choice

_**Which of the following is the correct way to name the angle?**_

Angles are named in three ways: using the vertex, using three letters with the vertex at the center, or using a number that is inside the angle.

**Solution:** Since answers a, b, and c are all correct, the correct choice is answer d, all of the above.

* * *

Test-Taking Hint

**In multiple choice problems with the answer choice all of the above, it is best to check all of the choices.**

Definitions

**angle measurement: The measure of an angle tells how far one edge of the angle is turned from the other. The most common measurement unit for angles is degrees. The symbol for degrees is °.**

**acute angle: An angle with a measure that is less than 90°.**

**right angle: An angle with a measure that is exactly 90°.**

**obtuse angle: An angle with a measure that is greater than 90°, but less than 180°.**

**straight angle: An angle with a measure that is exactly 180°.**

Angle Names

_**Classify each of the following angles by their measure.**_

* * *

TEST TIME: Explain Your Answer

_**Define and draw an example of an acute angle.**_

**Solution:** Acute angles have a measure that is less than 90°. A 90° angle is like the corner of a paper. Draw an angle that is smaller than the corner of a paper.
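The four measure-based names above can be checked mechanically. A minimal sketch in JavaScript, assuming a measure between 0° and 180° (the function name `classifyAngle` is ours for illustration, not the book's):

```javascript
// Classify an angle by its degree measure, following the definitions above.
// Assumes 0 < degrees <= 180; reflex angles are not covered by these terms.
function classifyAngle(degrees) {
    if( degrees < 90 ) { return "acute"; }   // less than 90 degrees
    if( degrees === 90 ) { return "right"; } // exactly 90 degrees
    if( degrees < 180 ) { return "obtuse"; } // between 90 and 180 degrees
    return "straight";                       // exactly 180 degrees
}
```

The comparisons mirror the definitions exactly: the boundary measures 90° and 180° get their own names, and everything in between is acute or obtuse.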
* * *

Test-Taking Hint

**Knowing math definitions helps you understand what the problem is asking.**

# Angle Relationships

Definitions

**adjacent angles: Angles that share a vertex and a side. Angles 1 and 2 are adjacent.**

**congruent angles: Angles that have the same measure. The symbol for congruent is ≅. ∠ 3 ≅ ∠ 4**

Congruent Angles

_**Angles 3 and 5 are congruent. What is the measure of ∠ 5?**_

**Step 1:** What is the measure you are given in the diagram?

**Step 2:** Congruent angles have the same measure. When you know the measure of one angle, you also know the measure of the other. You know the measure of ∠3, so you know the measure of ∠5.

Test-Taking Hint

**When a question is taking an especially long time, or has you stumped, leave the question and go on. Come back later if you have time. Another question may give you a clue that can help you solve the problem.**

* * *

TEST TIME: Show Your Work

_**Name an angle adjacent to ∠ STQ.**_

Adjacent angles share a vertex. The vertex of ∠ STQ is point T. Adjacent angles share a side. The sides of ∠ STQ are and . There are two angles shown that share a vertex and a side with ∠ STQ. The adjacent angle must be named using three points with the vertex in the center.

**Solution:** Angles STR and QTU are both adjacent to ∠ STQ.

* * *

Angle Sums

_**What is the measure of ∠ ACD?**_

**Step 1:** You are given the measure of ∠ACB and ∠BCD.
m∠ACB = 26°
m∠BCD = 41°

**Step 2:** You can add the measures of angles that share sides to find the measure of the combined angle.
m∠ACB + m∠BCD = m∠ACD
26° + 41° = 67°
m∠ACD = 67°

Definitions

**complementary angles: Two angles that have measures with a sum of 90°. Complementary angles form a right angle when they are adjacent.**

**supplementary angles: Two angles that have measures with a sum of 180°. Supplementary angles form a straight angle when they are adjacent.**

* * *

TEST TIME: Multiple Choice

_**Which pair of angles is complementary?**_

Complementary angles have a sum of 90°. The measures of ∠2 and ∠4 have a sum of 90°.

**Solution:** The correct choice is answer d.

* * *

Test-Taking Hint

**Some of the answers may look correct if you do not read the questions carefully. In the question above, you could easily choose answer b if you confuse complementary and supplementary.**

# Intersecting Lines

Definitions

**linear pair: Two adjacent angles formed when lines intersect. Linear pairs are always supplementary.**

**For the diagram above, the linear pairs are:**

**vertical angles: Two angles that are NOT adjacent when two lines intersect. Vertical angles are always congruent.**

**For the diagram above, the vertical angles are:**

* * *

TEST TIME: Multiple Choice

_**What is the measure of ∠ 3?**_

The diagram gives the measure of ∠ 1. Angles 1 and 3 are vertical angles. Vertical angles are congruent. Congruent angles have the same measure. This means ∠ 1 and ∠ 3 have the same measure, 107°.

**Solution:** The correct answer is c.

* * *

Test-Taking Hint

**Put a small mark next to answers you're not sure of or do not finish. When you finish your test, go back to those problems.**

* * *

TEST TIME: Show Your Work

_**What is the measure of ∠ PRQ?**_

Angles PRQ and PRS are a linear pair. Linear pairs are supplementary. Their sum is always 180°. You know the sum of the two angles, and you know the measure of one of the angles. You can use subtraction to find the measure of the other angle.

**Solution:**
m∠PRQ + m∠PRS = 180°, so
180° – m∠PRS = m∠PRQ
180° – 42° = m∠PRQ
m∠PRQ = 138°

* * *

Test-Taking Hint

**Pay more attention to the question you are working on than to the amount of time left for the test.**

Complete Circle

_**The measure of angle 1 is 18°. Find the measure of ∠ 2, ∠ 3, and ∠ 4. What is the sum of all four angles?**_

**Step 1:** Angles 1 and 2 are a linear pair. Find the measure of angle 2 by subtracting the measure of angle 1 from 180°.
180° – 18° = 162°
The measure of ∠ 2 is 162°.
**Step 2:** Angles 1 and 3 are vertical angles. Vertical angles are congruent. The measure of angle 3 is the same as the measure of angle 1. The measure of ∠ 3 is 18°.

**Step 3:** Angles 2 and 4 are vertical angles. Vertical angles are congruent. The measure of angle 4 is the same as the measure of angle 2. The measure of ∠ 4 is 162°.

**Step 4:** Add the measure of the four angles.
18° + 162° + 18° + 162° = 360°

**An angle that goes the entire way around a circle measures 360°.**

# Transversals

Definitions

**transversal: A line that intersects at least two other lines.**

**corresponding angles: A pair of angles in matching corners when a transversal intersects parallel lines. Angles 1 and 5 are corresponding angles.**

**alternate interior angles: A pair of angles inside parallel lines and on opposite sides of a transversal. Angles 3 and 6 are alternate interior angles.**

**alternate exterior angles: A pair of angles outside parallel lines and on opposite sides of a transversal. Angles 1 and 8 are alternate exterior angles.**

Corresponding Angles

_**Find the measure of ∠ 6.**_

**Step 1:** Angles 2 and 6 are corresponding angles. Corresponding angles are congruent. When you know the measure of one, you also know the measure of the other. m∠6 = 60°

* * *

TEST TIME: Multiple Choice

_**How are angles 3 and 6 related?**_

Angles 3 and 6 are on opposite sides of the transversal and inside the two parallel lines. They are alternate interior angles.

**Solution:** Answer b is correct.

* * *

Test-Taking Hint

**An answer in a multiple choice problem might look correct if you go too quickly. Often the wrong answers listed are ones you would find if you made a common error.**

Congruent Angles

_**Given || and m ∠ 5 = 50°.**_

**_Which angles have a measure of 50°? Give a reason for each._**

**Step 1:** Some problems give you information in both the problem and a diagram. This problem tells you that lines m and n are parallel. It also tells you that the measure of ∠ 5 is 50°.
m∠ 5 = 50° This is given.

**Step 2:** Vertical angles are congruent. Angles 5 and 8 are vertical angles.
m∠ 8 = 50° (vertical angles)

**Step 3:** Alternate interior angles of two parallel lines crossed by a transversal are congruent. Angles 4 and 5 are alternate interior angles.
m∠ 4 = 50° (alternate interior angles)

**Step 4:** Alternate exterior angles of two parallel lines crossed by a transversal are congruent. Angles 1 and 8 are alternate exterior angles.
m∠ 1 = 50° (alternate exterior angles)

Angles 1, 4, 5, and 8 each measure 50°.

* * *

TEST TIME: Explain Your Answer

_**When two parallel lines are crossed by a transversal, what is the relationship between the consecutive interior angles? Explain your reasoning.**_

**Solution:** Consecutive interior angles are a pair of angles that are on the same side of a transversal and inside the parallel lines. Angles 4 and 6 and angles 3 and 5 are consecutive interior angles. Let's look at angles 4 and 6. Angles 4 and 5 are congruent because they are alternate interior angles. This means they have the same measure. Angles 5 and 6 are supplementary because they are a linear pair. Their sum is 180°. Since angles 4 and 5 have the same measure, angles 4 and 6 must also have a sum of 180°, which means they are supplementary.

Consecutive interior angles are also called same side interior angles. Same side exterior angles, such as angles 2 and 8 in the diagram, are also supplementary.

* * *

# Polygons

Definitions

**plane figure: A set of connected line segments and curves that lie in one plane.**

**edges: The line segments or curves that make up a plane figure.**

**vertex: A point where line segments or curves of a figure meet. The plural of vertex is vertices.**

**closed figure: A plane figure that has no end points.**

**open figure: A plane figure that has a beginning and ending point.**

**polygon: A closed plane figure with edges that are all line segments.**

* * *

TEST TIME: Show Your Work

_**Which of the following appear to be regular polygons?**_

Regular polygons have congruent sides and angles. Each side is the same length. Each angle has the same measure. Figure A is taller than it is wide. Figure B has one very long side. Figure C appears to be regular. Figure D is not a polygon, because one edge is a curve. Figure E appears to be regular. Figure F is not a polygon.

**Solution:** Figures C and E appear to be regular polygons.

* * *

Definitions

**convex polygon: A polygon with all interior angles less than 180°.**

**concave polygon: A polygon with at least one interior angle greater than 180°.**

Convex or Concave

_**Is the polygon at right convex or concave?**_

**One way:** There are a number of ways to decide if a polygon is convex or concave. Look at each angle inside the polygon. Are any of them greater than 180°? Yes. The angle inside the polygon at vertex G is greater than 180°. This polygon is concave.

**Second way:** When a vertex appears to point into the polygon, the polygon is concave. Vertex G appears to point into the polygon, so the polygon is concave.

**Third way:** In a convex polygon, any line segments drawn between two vertices (diagonals) are inside the polygon. When a diagonal is not inside the polygon, the polygon is concave. A diagonal from point A to point F is outside the polygon, so the polygon is concave.

The polygon is concave.

* * *

TEST TIME: Multiple Choice

_**Name the shape of the given polygon.**_

Polygons are named by the number of sides and angles they have. Some polygon names are easily recognized, like triangle and rectangle. Others can be figured out by applying what you know about other words. For example, an octagon has 8 sides, just as an octopus has 8 tentacles.

Count the number of sides. There are 7. You know a triangle has 3 sides, so a is not correct. An octagon has 8 sides, so answer d is not correct. It is likely that you will also recognize that a pentagon has 5 sides. This leaves only answer c. A heptagon has 7 sides.

**Solution:** Answer c is correct.

* * *

Test-Taking Hint

**Common prefixes can help you remember the number of sides on a polygon: tri- means 3, quad- means 4, and penta- means 5.**

# Triangles

Definitions

**triangle: A three-sided polygon. Triangles have three angles and three vertices.**

Naming Triangles

_**Name the red triangle.**_

**Step 1:** Triangles are named by listing the vertices in any order. The vertices of the red triangle are E, F, and G. The symbol for triangle is a small triangle, ∆. The red triangle is ∆ EFG.

Test-Taking Hint

**Word problems, or story problems, should be answered in complete sentences.**

* * *

TEST TIME: Show Your Work

_**Sean has three pipe cleaners. The lengths are 8 inches, 5 inches, and 4 inches. Can Sean make a triangle from the three pipe cleaners?**_

For any three lengths to form a triangle, the sum of the shorter two lengths must be greater than the longest length. Add the two shortest lengths and compare the sum to the longest.

**Solution:** The two shortest pipe cleaners are 5 inches and 4 inches.
5 inches + 4 inches = 9 inches
The longest pipe cleaner is 8 inches. Is 9 inches greater than 8 inches? Yes.
Yes, Sean can make a triangle using the three pipe cleaners.

* * *

Definitions

**acute triangle: A triangle with three angles less than 90°.**

**right triangle: A triangle with one 90° angle.**

**obtuse triangle: A triangle with one angle greater than 90°.**

**equilateral triangle: A triangle with three congruent sides. Equilateral triangles are also equiangular.
They have three congruent angles.** **isosceles triangle: A triangle with exactly two congruent sides.** **scalene triangle: A triangle with no congruent sides.** Test-Taking Hint **Small hatch marks and arcs are used to show that sides or angles are congruent.** Test-Taking Hint **Knowing the names of angles (acute, right, and obtuse) will help you remember the names of triangles.** * * * TEST TIME: Multiple Choice _**Classify ∆ ABC by angle measure and by side length.**_ Look at the angles of triangle ABC. Angles B and C are acute. Angle A is obtuse. Triangle ABC is obtuse. Look at the sides of triangle ABC. Two sides have the same length. They are congruent. Triangle ABC is isosceles. **Solution:** Answer d is correct. * * * # # Triangle Angle Sums Interior Angles **The interior angles of a triangle always have a sum of 180°.** _**What is the measure of the interior angle at vertex Y? Check your answer.**_ **Step 1:** The sum of the three interior angles is 180°. You know two of the measures and need to find the third. Add the two measures that you know. 101° + 43° = 144° **Step 2:** Subtract the sum of the two angles from 180°. 180° – 144° = 36° The measure of the interior angle at vertex Y is 36°. **Step 3:** Check your answer by adding the three angles. 101° + 43° + 36° = 180° * * * TEST TIME: Multiple Choice _**Use mental math to find the third angle in a triangle where two of the angles each measure 50°.**_ Think: The interior angles of a triangle have a sum of 180º. You know two of the angles are 50° and 50°. Add these mentally. 50° + 50° = 100° Subtract the sum, 100°, from 180° mentally. 180º – 100° = 80° **Solution:** The correct answer is c. * * * Test-Taking Hint **When you can solve a problem mentally, do it and move quickly to the next problem. However, do not move too quickly! 
You could misread the problem.** * * * TEST TIME: Show Your Work _**What are the measures of the angles in an isosceles right triangle?**_ Use the definition of an isoceles right triangle and the sum of the interior angles to solve this problem. A right triangle has one right angle, so one angle is 90°. An isosceles triangle has two angles that are congruent, or have the same measure. Subtract the measure of the right triangle and divide the remaining degrees by two to find the measure of the remaining two angles. **Solution:** 180° – 90° = 90° 90° ÷ 2 = 45° The measures of the angles in an isosceles right triangle are 90°, 45°, and 45°. * * * Test-Taking Hint **Showing your work and showing some effort can earn you part of the credit, even if you have the wrong answer. The right answer, without showing some work, may only give you partial credit.** * * * TEST TIME: Explain Your Answer _**Can an obtuse triangle have a right angle? Explain.**_ **Solution:** An obtuse triangle can never have a right angle. An obtuse triangle has one angle that is greater than 90°. The sum of all three interior angles is 180°. If you subtract more than 90° (the obtuse angle) from 180°, you are left with less than 90° for the sum of the remaining two angles. Therefore, each of the remaining angles must be less than 90°. * * * # # The Pythagorean Theorem Definitions **exponent: A value placed above and to the right of an expression that tells the number of times the expression is used as a factor. For example, 5 3 means 5 × 5 × 5. Exponents are also called powers. The expression 53 is read as "5 to the third power."** **The exponent 2 is often read as "squared," so a 2 is read as "a squared." The exponent 3 is often read as cubed, so b3 is read as "b cubed."** Powers **_Evaluate 2 4._** **Step 1:** The word evaluate means "find the value of." The exponent 4 tells you to use the value 2 as a factor 4 times. 
2 × 2 × 2 × 2 **Step 2:** Multiply. 2 × 2 × 2 × 2 = 16 2⁴ = 16 * * * TEST TIME: Multiple Choice _**Which expression has a value of 36?**_ Find the value of each expression. Evaluate exponents before doing other operations. **Solution:** Answer c is correct. * * * Test-Taking Hint **Incorrect answers in multiple choice problems are often ones that look correct if you were to make an error.** Definitions **Pythagorean Theorem: The sum of the squares of the two leg lengths of a right triangle is equal to the square of the hypotenuse. This is usually written as a² + b² = c², where a and b are the leg lengths and c is the length of the hypotenuse (across from the right angle).** Right Angles _**A triangle has sides that measure 6 inches, 8 inches, and 10 inches. Is the triangle a right triangle?**_ **Step 1:** You can use the Pythagorean Theorem to decide if a triangle is a right triangle. The longest length is always the hypotenuse. The two shortest lengths are the legs. a² + b² = c² 6² + 8² = 10² **Step 2:** Do the operations. 6² + 8² = 10² 36 + 64 = 100 100 = 100 The side lengths work in the Pythagorean Theorem, so the triangle is a right triangle. * * * TEST TIME: Show Your Work _**The legs of a right triangle are 10 feet and 24 feet. What is the length of the hypotenuse?**_ Substitute the values you know into the Pythagorean Theorem. **Solution:** a² + b² = c² 10² + 24² = c² 100 + 576 = c² 676 = c² What number when multiplied by itself equals 676? 26. 26 = c The hypotenuse is 26 feet long. To solve a problem like this quickly and accurately, you can use a calculator for the exponents. The square root key (√ ) is used to find the number that is multiplied by itself to equal 676. * * * Test-Taking Hint **Calculators can be used on some tests. 
Use a calculator when you know how to solve a problem and it will save time you may need for other problems.** # # Quadrilaterals Definitions **quadrilateral: A four-sided polygon.** **kite: A quadrilateral with two pairs of adjacent, congruent sides.** **trapezoid: A quadrilateral with exactly one pair of parallel sides.** **parallelogram: A quadrilateral with two pairs of parallel sides. The parallel sides are congruent.** **rhombus: A parallelogram with four congruent sides.** **rectangle: A parallelogram with four right angles.** **square: A rectangle with four congruent sides.** * * * TEST TIME: Multiple Choice _**What name fits the quadrilateral shown?**_ The quadrilateral has two pairs of parallel sides. This makes the figure a parallelogram. It has four congruent sides. This makes it a rhombus. It also has four right angles. This makes it a rectangle. The figure can also be called a square. **Solution:** The correct answer is d. * * * Test-Taking Hint **Not all of the questions on a math test need computations. Know math definitions and know the reasons behind the math.** Test-Taking Hint **When a question is taking an especially long time, or has you stumped, leave it and go on. Come back later if you have time.** * * * TEST TIME: Explain Your Answer _**What is the sum of the interior angles of any quadrilateral? Explain your reasoning.**_ **Solution:** To find the sum of the interior angles of a quadrilateral, divide it up into triangles. There are two triangles. Because the sum of the angles of each triangle is 180°, you can add the angles from each triangle to find the sum of the interior angles of the quadrilateral. 180° + 180° = 360° So, the sum of the interior angles of a quadrilateral is 360°. * * * Special Trapezoids _**Trapezoid ABCD is an isosceles trapezoid. What is the measure of ∠ B?**_ **Step 1:** An isosceles trapezoid is a special kind of trapezoid. Isosceles trapezoids have special properties. 
The nonparallel sides (legs) of an isosceles trapezoid are congruent. The angles on either side of the bases are congruent. Adjacent angles along the sides are supplementary. Supplementary angles have a sum of 180°. Since the measure of ∠ D is 62°, the measure of ∠ B is 180° – 62°. 180° – 62° = 118° The measure of ∠ B is 118°. Test-Taking Hint **An isosceles trapezoid is like an isosceles triangle with the top cut off parallel to the base. Even if you do not remember the properties of an isosceles trapezoid, you can solve this problem using the diagram and the sum of the interior angles.** # # Diagonal Properties Definitions **diagonal: A line segment connecting two non-adjacent vertices of a polygon. Every quadrilateral has two diagonals.** **The diagonals of parallelograms bisect each other. This means they cut each other in half.** **The diagonals of a rectangle form four congruent segments.** **The diagonals of a rhombus are perpendicular.** * * * TEST TIME: Show Your Work _**Quadrilateral LMOP is a rhombus. Line segment MO is 8 centimeters long. What is the length of line segment NO?**_ Line segment MO is a diagonal of a rhombus. A rhombus is a parallelogram, so the diagonals of a rhombus bisect each other. N is the point where the diagonals intersect, so line segment NO is half of line segment MO. **Solution:** 8 ÷ 2 = 4 Line segment NO is 4 centimeters long. * * * Test-Taking Hint **Remember to include the units in your answers. The units in this problem are centimeters.** * * * TEST TIME: Explain Your Answer _**The diagonals of a quadrilateral are perpendicular and form four congruent line segments. What is the most specific shape name for the quadrilateral?**_ **Solution:** Quadrilaterals whose diagonals are perpendicular and bisect each other are rhombi. All four sides of the quadrilateral are congruent. Quadrilaterals with diagonals that form four congruent line segments are rectangles. Quadrilaterals that are both rhombi and rectangles are squares. The quadrilateral is a square. Sketch a figure by drawing the diagonals first. 
Draw perpendicular lines that form four congruent segments. Connect the endpoints to form a quadrilateral. The figure formed is a square. * * * Test-Taking Hint **Use complete sentences when answering word problems and problems that ask you to explain your answer.** Definitions **expression: A mathematical phrase that combines numbers, variables, and operators to represent a value.** **variable: A value in an expression that is either not known or can change. Variables are usually letters.** Using Algebra _**What is the value of x in the rectangle?**_ **Step 1:** Some problems combine algebra and geometry. This problem asks you to find the value of a variable, or unknown value. The diagonals of a rectangle form four congruent line segments. This means the measures of line segments AK and KD are equal. Write this using an equal sign. measure of AK = measure of KD **Step 2:** Replace the words with the values from the diagram. 20 = _x_ + 5 **Step 3:** Solve for _x_. 20 – 5 = _x_ + 5 – 5 15 = _x_ _x_ = 15 # # Other Polygon Angles Interior Angle Sums _**What is the sum of the interior angles of a pentagon?**_ **Step 1:** The sum of the interior angles of any polygon can be found by dividing the polygon into triangles. Each triangle has an interior angle sum of 180°. Draw a pentagon. **Step 2:** Divide the pentagon into triangles by drawing diagonals from one vertex. Count the triangles. There are 3 triangles. **Step 3:** Find the interior angle sum of the pentagon by multiplying the number of triangles (3) by 180°. 3 × 180° = 540° The sum of the interior angles of a pentagon is 540°. **Any polygon can be divided into triangles. Each polygon has two fewer triangles than it has sides. 
For example, a 10-sided polygon will divide into 8 triangles.** **The interior angle sum for any polygon can be found by taking the number of sides minus 2 and multiplying it by 180°.** **interior angle sum = (number of sides – 2) × 180°.** Definitions **formula: A math rule that is written using words or symbols.** * * * TEST TIME: Multiple Choice _**What is the sum of the interior angles of a polygon with 12 sides?**_ Use the formula for the interior angle sums. interior angle sum = (number of sides – 2) × 180° = (12 – 2) × 180° = (10) × 180° You can mentally multiply any number and 10 by adding a zero on the right end. (10) × 180° = 1800° **Solution:** The correct answer is b. * * * * * * TEST TIME: Show Your Work _**What is the angle measure of each interior angle in a regular hexagon?**_ This is a two-step problem. First, find the sum of the interior angles in a hexagon (6 sides). Then, since each angle in a regular polygon has the same measure, divide by the number of angles to find the measure of each. **Solution:** interior angle sum of a hexagon = (6 – 2) × 180° = (4) × 180° = 720° each interior angle of a regular hexagon = 720° ÷ 6 = 120° The measure of each interior angle of a regular hexagon is 120°. * * * Test-Taking Hint **Make sure you are answering the question that is asked. Check your answer to see that it matches the question.** Definitions **central angle of a regular polygon: An angle made at the center of the polygon by the lines drawn from any two adjacent vertices of the polygon to the center. All of the central angles in one regular polygon are the same.** Central Angle Measure _**What is the measure of the central angle of a regular octagon?**_ **Step 1:** A regular octagon has 8 congruent sides, 8 congruent interior angles, and 8 congruent central angles. To find the measure of the central angle of a regular octagon, make a circle in the middle. A circle is 360 degrees around. Divide 360° by eight angles. 
360° ÷ 8 = 45° The measure of the central angle of a regular octagon is 45°. **The measure of the central angle of a regular polygon can be found by dividing 360° by the number of sides.** **measure of central angle = 360° ÷ number of sides** # # Perimeter Definitions **perimeter: The distance around a figure. The capital letter P is usually used to represent perimeter in a formula.** Adding to Find Perimeter **_What is the perimeter of ∆RST?_** **Step 1:** To find the perimeter of any shape, you can add the length of all the sides. **6.6** + **5** + **9.7** = **21.3** **The perimeter of ∆RST is 21.3 units.** Test-Taking Hint **If no units are given in a problem, you can write the answer using the word "units."** Test-Taking Hint **Calculators are useful tools for solving and checking the solutions to math problems. You can use a calculator to multiply decimals.** * * * TEST TIME: Multiple Choice **_What is the perimeter of a regular pentagon with side length of 2.7 centimeters?_** Each side of a regular polygon has the same length. A pentagon has 5 sides, so the perimeter is found by multiplying the side length by 5. 2.7 × 5 = 13.5 **Solution:** The correct answer is a. * * * Rectangles, Parallelograms, and Kites **_What is the perimeter of the kite shown?_** **One way:** The perimeter of any figure can be found by adding each of the sides. Kites, parallelograms, and rectangles each have two pairs of congruent sides. 4 cm + 4 cm + 8 cm + 8 cm = 24 cm The perimeter of the kite is 24 centimeters. **Another way:** Multiply each of the measures of the congruent sides by two. Add the products. 2(4 cm) + 2(8 cm) = 8 cm + 16 cm = 24 cm The perimeter of the kite is 24 centimeters. Test-Taking Hint **When you don't feel confident about an answer, and have time, try solving it a different way. That way, you are less likely to make the same mistake twice.** * * * TEST TIME: Show Your Work **_The perimeter of a rectangle is 44 meters. 
The width of the rectangle is 10 meters. What is the length of the rectangle?_** A rectangle has 2 pairs of congruent sides. In a rectangle, these are called length and width. The perimeter of a rectangle can be found by adding 2 lengths and 2 widths. A diagram can help you to understand this problem. **Solution:** Perimeter = 2(length) + 2(width) 44 = 2(length) + 2(10) 44 = 2(length) + 20 Subtract 20 from each side. 24 = 2(length) Divide each side by 2. 12 = length The length of the rectangle is 12 meters. Check: length + length + width + width = perimeter 12 meters + 12 meters + 10 meters + 10 meters = 44 meters 44 meters = 44 meters * * * # # Area Definitions **square unit: A square that is one unit long and one unit wide. A square inch is one inch long and one inch wide. Square units can be written as units².** **area: The number of square units needed to cover a figure.** Rectangles **_What is the area of a rectangle that is 2 inches wide and 3 inches long?_** **One way:** Use a diagram. The diagram shows one square for each square inch. Count the number of squares. There are 6. The area of the rectangle is 6 square inches. **Another way:** Use a formula. The formula to find the area of a rectangle is length multiplied by width. Area = length × width Area = 2 inches × 3 inches Area = 6 square inches The area of the rectangle is 6 square inches. Area Formulas **Area of a rectangle = length × width A = l × w** **Area of a square = side × side A = s²** * * * TEST TIME: Show Your Work **_Brianna built a picture frame using 4 wood sides that each measure 8 inches long. Each corner is a 90° angle. She is covering it with a fabric to paint on. What is the area covered by the fabric?_** Each side is the same length and the corners are 90° angles. This picture frame is a square. The area of a square is found by multiplying a side by itself. **Solution:** 8 inches × 8 inches = 64 square inches The area covered by fabric is 64 square inches. 
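The rectangle formulas above can be checked with a short program. This is a sketch, not part of the original book; the function names are my own, and the code simply restates P = 2l + 2w and A = l × w:

```python
def rectangle_perimeter(length, width):
    # P = 2(length) + 2(width)
    return 2 * length + 2 * width

def rectangle_area(length, width):
    # A = length x width
    return length * width

def length_from_perimeter(perimeter, width):
    # Solve P = 2l + 2w for the length: l = (P - 2w) / 2
    return (perimeter - 2 * width) / 2

# Worked example above: a 44-meter perimeter and a 10-meter width give a 12-meter length.
print(length_from_perimeter(44, 10))  # 12.0
# The 2-inch by 3-inch rectangle has an area of 6 square inches.
print(rectangle_area(3, 2))  # 6
```

Running the last function on the worked example reproduces the book's check: 2(12) + 2(10) = 44.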
* * * Area Formulas **Area of a parallelogram = base × height** **A = b × h** **Area of a trapezoid = 1/2 × (base₁ + base₂) × height** **A = 1/2(b₁ + b₂)h** **The height of a trapezoid or parallelogram is the perpendicular distance between the bases.** Parallelograms **_Find the area of the parallelogram._** **Step 1:** Write the formula. Put in the values from the problem. Area = base × height Area = 10 × 6 **Step 2:** Multiply. Area = 60 The area of the parallelogram is 60 square units. * * * TEST TIME: Explain Your Answer **_Use the formula for the area of a parallelogram to show why the formula for the area of a trapezoid works._** **Solution:** You can double a trapezoid, flip one over, and put the two trapezoids end to end to form a parallelogram. The area of this parallelogram is found using the formula. The length of this parallelogram's base is b₁ + b₂. Area of a parallelogram = base × height Area of a parallelogram = (b₁ + b₂) × h Since this is double what the area of the trapezoid is, multiply the area of the parallelogram by 1/2 to find the area of the trapezoid. Area of a trapezoid = 1/2(b₁ + b₂) × h * * * # # Area: Rhombi and Kites Area Formulas **Area of a kite = 1/2 × diagonal a × diagonal b A = 1/2 ab** **Area of a rhombus = 1/2 × diagonal a × diagonal b A = 1/2 ab** Rhombus _**What is the area of a rhombus with diagonal lengths of 3 feet and 2 feet?**_ **Step 1:** Write the formula. Put in the values from the problem. Area = 1/2 × diagonal _a_ × diagonal _b_ Area = 1/2(3)(2) **Step 2:** Multiply. Area = 3 The area of the rhombus is 3 square feet. * * * TEST TIME: Multiple Choice _**A kite has diagonals with lengths of 4.6 inches and 6.2 inches.**_ _**What is the area of the kite?**_ The numbers in this problem are decimals. You can use a calculator to multiply decimals. The formula for the area of a kite includes multiplying by 1/2. This is the same as multiplying by the decimal 0.5, or dividing by 2. 
Area = 1/2(4.6)(6.2) = (0.5)(4.6)(6.2) = 14.26 **Solution:** The correct answer is b. * * * TEST TIME: Show Your Work _**Bridgette wants to make a kite with an area of 102 square inches. One of the diagonals is set at one foot. How long should the other diagonal be?**_ The problem uses two different measurement units, inches and feet. When you perform an operation with measurement units, only use the same units. For example, only multiply inches with inches, not inches with feet. **Solution:** Area of a kite = 1/2ab 102 in² = 1/2(1 ft) _b_ Convert 1 foot to 12 inches. 102 in² = 1/2(12 in) _b_ Multiply 1/2 and 12. 102 in² = (6 in) _b_ Divide both sides by 6 in. 17 in = _b_ The other diagonal should be 17 inches long. Check: 102 in² = (1/2)(12 in)(17 in) 102 in² = 102 in² * * * Test-Taking Hint **Work at your own pace. Don't worry about how fast anyone else is taking the same test.** * * * TEST TIME: Explain Your Answer _**The area formulas for a kite and a rhombus are the same. What are the differences and similarities between the diagonals in a kite and a rhombus? Could you use diagonals of the same length to form a kite and a rhombus?**_ **Solution:** Both a kite and a rhombus have diagonals that are perpendicular at their intersection. The longer diagonal of a kite bisects the shorter. The diagonals of a rhombus bisect each other. The same diagonal lengths can form a kite or a rhombus. If you begin with diagonals that bisect each other for a rhombus, you can slide one of the diagonals along the other to form a kite. * * * # # Area: Triangles Area Formula **Area of a triangle = 1/2 × base × height** **A = 1/2 bh** Find the Area **_Find the area of a triangle with a base of 8 units and a height of 7 units._** **Step 1:** Write the formula. Put in the values from the problem. Area = 1/2 × base × height Area = 1/2(8)(7) **Step 2:** Multiply. Area = 28 The area of the triangle is 28 square units. Test-Taking Hint **Area is always in square units. 
Remember to include the units in your answer.** * * * TEST TIME: Show Your Work _**An isosceles right triangle has a leg with a length of 12 inches. What is the area of the triangle?**_ The height of a triangle is a perpendicular distance from the base to the opposite vertex. Any of the three sides can be the base of a triangle. If you choose one of the legs of a right triangle as the base, the other leg is the same as the height. An isosceles right triangle is a right triangle with congruent legs. This makes the height and base the same length. **Solution:** Area of a triangle = 1/2 _bh_ = 1/2(12)(12) = 72 The area of the triangle is 72 square inches. * * * TEST TIME: Multiple Choice _**What is the base of a triangle with an area of 1,045 square centimeters if the height is 95 centimeters?**_ You can use the values in the problem to find the missing value in the formula, or you can test each answer. The formula for the area of a triangle is 1/2 base times height. Let's say you believe answer c is correct. You can do this mentally. Area = 1/2(200)(95). Multiply 1/2 and 200 first. This leaves 100 × 95 = 9,500. This is much too large. Since answer d is larger than answer c, both can be eliminated. Test answers a and b. a. 1/2(12)(95) = 570 b. 1/2(22)(95) = 1,045 **Solution:** The correct answer is b. * * * Test-Taking Hint **One way to solve a multiple choice problem is to test each answer. This may take longer than solving the problem yourself, but it will give you a correct answer.** * * * TEST TIME: Explain Your Answer _**A student says the area of the triangle is 44 square inches. Explain why the student is incorrect.**_ **Solution:** The formula for the area of a triangle is Area = 1/2 _bh_ This triangle has a base of 11 inches and a height of 5 inches. The area is 1/2(11)(5) = 27.5 square inches. The student used the lengths of the two sides that are given, 11 inches and 8 inches, instead of base and height, 11 inches and 5 inches. 
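The triangle-area steps above translate directly into a few lines of code. This sketch is mine, not the book's (the function names are invented); it restates A = 1/2bh and also solves the formula backward for the base, as in the multiple-choice problem:

```python
def triangle_area(base, height):
    # A = 1/2 x base x height
    return 0.5 * base * height

def base_from_area(area, height):
    # Rearranging A = 1/2 * b * h gives b = 2A / h.
    return 2 * area / height

print(triangle_area(8, 7))       # 28.0 square units
print(triangle_area(12, 12))     # isosceles right triangle with 12-inch legs: 72.0
print(base_from_area(1045, 95))  # 22.0, matching answer b
```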
* * * # # Circles Definitions **circle: The set of all points in a plane that are the same distance from a given point, called the center. A circle is named by its center. This is circle A.** **chord: A line segment with endpoints on a circle.** **diameter: A chord that passes through the center of a circle.** **radius: A line segment with one endpoint at the center of a circle and the other at any point on the circle.** **central angle of a circle: An angle formed by two radii.** **arc: Part of the curve of a circle.** **sector: The part of the circle that is enclosed by a central angle and the arc connecting the two endpoints.** Identifying Parts **_Name the parts of circle L._** _**a. radii**_ Write the radii using symbols for line segments. _**b. diameter**_ _**c. chords**_ _**d. arcs**_ The symbol for arc is an arch over the endpoints of the arc. If the arc is 180° or greater, include a third point. Other possible arcs: _**e. central angles**_ ∠ALB, ∠BLC, ∠ALC * * * TEST TIME: Show Your Work _**A circle graph shows the results of a survey of people and their eye color. Find the measure of the central angle of the sector that shows the percent of people with blue eyes.**_ The problem asks for the measure of the central angle that represents people with blue eyes. The correct answer will be an angle measure in degrees. The sector labeled Blue is 15% of the whole circle. The central angle measure for the sector is 15% of the angle measure of the whole circle. The angle measure of a whole circle is 360°. **Solution:** Use the percent equation. A percent of a whole is a part, or percent × whole = part Put in the values from the problem. 15% of 360° = central angle measurement 15% × 360° = central angle measurement To multiply a percent, change the percent to a decimal. 15% = 0.15 0.15 × 360° = 54° The central angle of the sector measures 54°. * * * Test-Taking Hint **When you use a calculator, you still need to understand what to do with the numbers in the problem. 
A calculator is only a tool, not a problem-solver.** # # Circle Measurements Definitions **circumference: The distance around a circle.** **pi: The ratio of circumference to diameter. The ratio is the same for every circle and is represented by the Greek letter π, called pi. Pi is approximately equal to 3.14 or 22/7.** Circumference **The ratio of the circumference of a circle to the diameter is π. Use algebra and this ratio to write a formula for the circumference of a circle.** **Step 1:** Write the ratio as an equation. Use _C_ for circumference and _d_ for diameter. _C_/_d_ = π **Step 2:** Multiply each side of the equation by the diameter, _d_. **Step 3:** Write the equation so that circumference is on the left side of the equal sign. _C = πd_ * * * TEST TIME: Multiple Choice _**The radius of a circle is 6 inches. What is the circumference of the circle?**_ The circumference of a circle = π _d_. The diameter is twice as long as the radius. The formula for circumference can also be written as _C_ = 2π _r._ _C_ = 2π _r_ = 2π(6) = 12π **Solution:** The correct answer is c. * * * Formulas **Circumference = πd or 2πr, where d is diameter and r is radius.** **Area = πr²** Area **_Find the area of the circle to the nearest tenth. Use 3.14 for π._** **Step 1:** Use the formula. A = π _r_² **Step 2:** Replace with the values from the problem. The problem tells you to use the decimal approximation for π, 3.14. When you replace π with 3.14, use ≈ instead of the equal sign. The sign ≈ means approximately. A ≈ (3.14)(3²) **Step 3:** Do the computations. Evaluate the power first. Then multiply. A ≈ (3.14)(3²) A ≈ (3.14)(9) A ≈ 28.26 **Step 4:** Write the answer in a complete sentence. Round the area to the nearest tenth. Remember to include the units. The area of the circle is about 28.3 m². * * * TEST TIME: Multiple Choice _**A circular flower bed has a diameter of 28 feet. Using 22/7 as the approximation for π, what is the approximate area of the flower bed?**_ Use the formula. 
The problem gives you diameter, but the formula uses the radius. Remember, the radius is half of the diameter. Half of 28 feet is 14 feet. A = π _r_² A ≈ (22/7)(14²) A ≈ (22/7)(196) A ≈ 616 **Solution:** The correct answer is d. * * * Test-Taking Hint **Area problems for circles will often tell you what form of pi to use. Read carefully!** # # Combined Figures Polygon Combinations **_What is the perimeter of the figure shown?_** **Step 1:** To find the perimeter of a figure, you can add the length of each side. The length of each side is not shown, but can be found. One section is missing. The full height of the square area is 10 feet. The triangular area is 5 feet high. 10 ft – 5 ft = 5 ft **Step 2:** Add the length of each side. Choose a side to begin, and add in order around the figure. 10 ft + 10 ft + 20 ft + 11 ft + 5 ft = 56 ft The perimeter of the figure is 56 feet. * * * TEST TIME: Multiple Choice _**Estimate the area of the figure.**_ _**Each square represents one square foot.**_ Count the number of filled or almost filled squares. There are 23. Count the number of squares that are about half-filled. There are 4. Add the number of filled squares and half the number of half-filled squares. 23 + 2 = 25 **Solution:** The correct answer is a. * * * TEST TIME: Show Your Work _**Harrison needs to carpet his closet. The closet is rectangular, but has a raised triangular area that does not need carpet. Using the diagram, how many square feet of carpet does Harrison need?**_ Areas that are made up of polygons like rectangles and triangles can be found by adding or subtracting the areas of the polygons they are made of. **Solution:** Find the area of each figure. Area of the rectangle = _lw_ = (8)(9) = 72 Area of the triangle = 1/2 _bh_ = 1/2(4)(4) = 8 Subtract the area of the triangle from the area of the rectangle. 72 square feet – 8 square feet = 64 square feet Harrison needs 64 square feet of carpet. 
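Harrison's carpet problem illustrates the general combined-figure method: add or subtract the areas of the simple polygons the figure is made of. A short sketch of that method (mine, not from the book; the names are invented):

```python
def rectangle_area(length, width):
    # A = l x w
    return length * width

def triangle_area(base, height):
    # A = 1/2 x b x h
    return 0.5 * base * height

# Carpet needed = closet rectangle minus the raised triangular area.
carpet = rectangle_area(8, 9) - triangle_area(4, 4)
print(carpet)  # 64.0 square feet
```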
* * * Test-Taking Hint **Most tests are scored on the number of questions you answer correctly. You do not lose points for wrong answers. Answer every question, even if you have to guess.** Using Area **The area outlined in red is a garden plot. The green-shaded area produced 3 quarts of green beans. If the entire garden is planted in green beans, about how many quarts of beans can be produced?** **Step 1:** Estimate the area inside the red outline. Whole or nearly whole units: 19 Half or near half units: 8 Whole units + half of partial units: 19 + 4 = 23 **Step 2:** Each of the units that are planted in beans will produce about 3 quarts of beans. Multiply the number of units by 3 quarts. 23 × 3 quarts = 69 quarts The entire garden can produce around 69 quarts of beans. # # Symmetry Definitions **line symmetry: A figure with line symmetry has two halves that are mirror images.** **line of symmetry: The line along which a symmetrical figure is divided.** **rotational symmetry: A figure with rotational symmetry can be turned around a central point less than 360° and be an exact image of itself.** **center of rotation: The central point around which a figure is rotated.** Line Symmetry **_Decide if each figure has line symmetry._** **Figure 1:** Can lines be drawn that divide the figure into mirror images? Yes. **Figure 2:** No lines can be drawn to form mirror images. * * * TEST TIME: Multiple Choice _**How many lines of symmetry does the figure have?**_ You can draw 4 lines that divide the figure into mirror images. **Solution:** The correct answer is c. * * * Test-Taking Hint **Make notes in your test booklet to help you solve problems. For the problem above, drawing the lines of symmetry will help you see the mirror images. 
It will also help you count the number of lines of symmetry in the figure.** Rotational Symmetry **_How many times will the figure show rotational symmetry within one full rotation?_** **Step 1:** Draw lines from the center of the figure through identical places in the figure. **Step 2:** Count the number of lines drawn. The figure will show rotational symmetry five times within a full rotation. Test-Taking Hint **Be careful to avoid careless answers on easy questions. Focus on each problem, and don't be in a hurry.** * * * TEST TIME: Explain Your Answer _**What kinds of symmetry does an equilateral triangle show? Explain.**_ **Solution:** An equilateral triangle shows both line symmetry and rotational symmetry. Because each side and each angle of an equilateral triangle are congruent, you can draw a line of symmetry running through each vertex and the center of the opposite side. You can draw three lines from the center of the equilateral triangle through identical places in the figure. The triangle has rotational symmetry, and will show the symmetry three times within one rotation. * * * # # Similarity and Congruence Definition **congruent figures: Figures that have the same shape and size. Corresponding sides and angles in congruent figures have the same measure. The symbol ≅ means congruent.** Compare Figures _**Are the parallelograms ABCD and LMNO congruent?**_ **Step 1:** Compare the corresponding angles and sides. The corresponding angles are in the same order in the names of the parallelograms. Corresponding angles: ∠A ≅ ∠L: 135° ∠B ≅ ∠M: 45° ∠C ≅ ∠N: 135° ∠D ≅ ∠O: 45° Corresponding sides: AB ≅ LM: 6 BC ≅ MN: 4 CD ≅ NO: 6 DA ≅ OL: 4 **Parallelograms ABCD and LMNO are congruent.** * * * TEST TIME: Show Your Work _**Triangles ABC and XYZ are congruent. What are the values of a and m?**_ **Solution:** The corresponding angles of congruent triangles are congruent. ∠A ≅ ∠X, so _a_ ° = 56° The corresponding sides of congruent triangles are congruent. 
AC ≅ XZ, so _m_ = 5 _a_ = 56 and _m_ = 5 * * * Test-Taking Hint **Make sure you answer the entire problem. If there are two questions, make sure you have two answers.** Definition **similar figures: Figures that have the same shape but do not need to be the same size. Corresponding angles of similar figures are congruent. The sides of similar figures are in proportion. The symbol ∼ means similar.** Determine Similarity **_Are the triangles similar?_** **Step 1:** Corresponding angles of similar figures are congruent. Are the angles congruent? Yes. Each triangle has angles that measure 31°, 37°, and 112°. **Step 2:** Corresponding sides of similar figures are proportional, or have the same ratio. Write a ratio that compares each pair of corresponding sides. Reduce each ratio to lowest terms. The corresponding sides all have the same ratio, 1 to 3. They are proportional. Yes, the triangles are similar. * * * TEST TIME: Multiple Choice _**The two windows shown are similar.**_ _**What is the width of the smaller window?**_ Write a proportion comparing the width and height of the windows. In proportions, the cross products are equal. Cross multiply. 7.5 × 4 = 10 × ? 30 = 10 × ? You need to find the number that can be multiplied by 10 for a product of 30. Divide 30 by 10. 30 ÷ 10 = 3 The smaller window is 3 units wide. **Solution:** Answer c is correct. * * * # # The Coordinate Plane Definitions **coordinate plane: A plane formed by a horizontal number line, called the x-axis, and a vertical number line, called the y-axis.** **origin: The intersection of the x-axis and y-axis.** **quadrants: The axes divide the coordinate plane into four regions called quadrants.** Quadrants _**Identify the quadrant containing points A and B.**_ **Point A:** Quadrant I **Point B:** Quadrant III * * * TEST TIME: Multiple Choice _**What point is located at the ordered pair (3, 1)?**_ An ordered pair is a pair of numbers that tell the location of a point in the coordinate plane. 
The first number in an ordered pair is the _x_ -coordinate. It describes where on the _x_ -axis the point is located, or how far the point is to the left or right of the origin. The _x_ -coordinate is a positive 3. The point is 3 units right of the origin. The second number in an ordered pair is the _y_ -coordinate. It describes where on the _y_ -axis the point is located, or how far the point is above or below the origin. The _y_ -coordinate is positive 1. The point is 1 unit above the origin. Point D is 3 units right and 1 unit above the origin. **Solution:** Answer d is correct. * * * Plotting Points **Plot each point on a coordinate plane.** **K(1, 4):** Start at the origin. Positive _x_ -values are right of the origin, negative values are left. Move 1 unit right. Positive _y_ -values are above the origin, negative values are below. Move 4 units up. Draw and label point K. **M(4, –2):** Start at the origin. Move 4 units right and 2 units down. Draw and label point M. Test-Taking Hint **You can go through a test and do the easy problems first. This can help you gain confidence, and keeps you from running out of time and missing easy points.** Test-Taking Hint **The coordinates of the origin are (0, 0).** **A point with an x-value of zero is not left or right of the origin. It lies on the y-axis.** **A point with a y-value of zero is not above or below the origin. The point lies on the x-axis.** * * * TEST TIME: Explain Your Answer _**To plot point (–6, –4), Kelly started at the origin and moved 6 units right and 4 units down. Is Kelly correct? Explain.**_ **Solution:** No, Kelly is not correct. She plotted point (6, –4). Since the x-coordinate is a negative number, Kelly should have moved left from the origin instead of right. Kelly did plot the correct y-value. * * * # # Translations Coordinate Figures _**Plot the points A(2, 3), B(2, –1), C(–2, –1), and D(–2, 3). 
Connect the points in order and describe the figure.**_ **Step 1:** Plot each point. **Step 2:** Connect the points beginning at A. Connect point A to point B, point B to point C, point C to point D, and point D to point A. **Step 3:** Describe the figure. The result is a figure with four equal sides and four right angles. The figure is a square. Definition **translation: A movement of a figure along a straight line. Translations are sometimes called slides.** * * * TEST TIME: Multiple Choice _**Point A(1, 1) is translated 2 units left and 3 units up. What are the coordinates of the new point?**_ Draw a sketch to help you see the movement of the point. **Solution:** The correct answer is a. * * * Test-Taking Hint **Most problems can be solved in more than one way. If you are not sure of an answer, try solving the problem a different way.** * * * TEST TIME: Show Your Work _**Graph the translation of ∆PQR 4 units right and 3 units down. List the coordinates of the translated figure.**_ Figures are translated by translating each vertex, then connecting the translated points. **Solution:** Translate each vertex. The vertices of a translated figure are written using the same letter with a small apostrophe ('). The new vertex is read as "prime". For example, P' is read "P prime." Connect the points to form the translated triangle, ∆P'Q'R'. List the vertices of the translated triangle. P'(1, 1), Q'(2, –2), and R'(5, –1) * * * Test-Taking Hint **Transformations take a figure and create an identical image. A translation is a type of transformation. Reflections (flips) and rotations (turns) are also transformations.** # Further Reading Books Ferrell, Karen. _The Great Polygon Caper._ Hauppauge, N.Y.: Barron's Educational Series, 2008. McKellar, Danica. _Math Doesn't Suck: How to Survive Middle School Math Without Losing Your Mind or Breaking a Nail._ New York: Hudson Street Press, 2007. Rozakis, Laurie. 
_Get Test Smart!: The Ultimate Guide to Middle School Standardized Tests._ New York: Scholastic Reference, 2007. # Internet Addresses Coolmath.com, Inc. _**Geometry & Trig Reference Area. **_1997–2010­. <http://www.coolmath.com/reference/geometry-trigonometry-reference.html> Mathwarehouse.com. _**Math Warehouse.**_ <http://www.mathwarehouse.com/geometry> Testtakingtips.com. _ **Test Taking Tips.**_ 2003–2010. <http://www.testtakingtips.com/test/math.htm> # Index A angles, 14–17 acute, 16–17 adjacent, 18–19 alternate exterior, 26, 28 alternate interior, 26, 27, 28 complementary, 20–21 congruent, 18 consecutive interior, 29 corresponding, 26 measurement, 16 obtuse, 16 right, 16 straight, 16 sum, 20, 25 supplementary, 20, 49 vertical, 22–23, 28 arc, 74–75 area, 62–69 circle, 79–81 estimate, 83, 85 kite, 66–69 parallelogram, 64–65 rectangle, 63 rhombus, 66, 69 square, 63 trapezoid, 64–65 triangle, 70–73 C calculator, 45, 59 central angle, circle, 74–75 polygon, 57 chord, 74–75 circle, 74–77 circumference, 78–79 combined figures, 82–85 area, 83 perimeter, 82 congruent figure, 90–91 coordinate figure, 98 coordinate plane, 94–97 D diagonals, 50–53 diameter, 74–75 E edge, 30 explain your answer questions, 13, 16, 29, 41, 48, 52, 65, 69, 73, 89, 97 exponent, 42 expression, 43, 53 F formula, 55 H hexagon, 56 hypotenuse, 44 K kite, 46 L lines, 6–7 intersecting, 10, 14, 22–25 oblique, 10 parallel, 12–13 perpendicular, 10–11, 13 skew, 12 linear pair, 22, 24 line segment, 8 line of symmetry, 86–87 M mental math, 39 multiple choice questions, 7, 11, 15, 21, 23, 27, 33, 37, 39, 43, 47, 55, 59, 67, 83, 79, 81, 83, 87, 93, 95, 99 O octagon, 57 open figure, 30 ordered pairs, 95 origin, 94, 97 P parallelogram, 46–47, 50 percent equation, 77 perimeter, 58–61 pi, 78 plane, 8 plane figure, 30 points, 6 plotting, 96–97 point of intersection, polygon, 30–33 angle sums, 54–57 concave, 32 convex, 32 names, 33 regular, 31 powers, 42 proportions, 93 Pythagorean Theorem, 44–45 Q quadrants, 94 
quadrilateral, 46–49 angle sum, 48 R radius, 74–75 ray, 8–9 rectangle, 46–47, 50, 53 rhombus, 46–47, 50–51 S sector, 74, 76–77 show your work questions, 9, 19, 24, 35, 40, 45, 51, 56, 61, 63, 68, 71, 76–77, 84, 91, 100–101 similar, 92–93 square, 46, 52 square units, 62 symmetry, 86–89 line, 86–87, 89 rotational, 86, 88, 89 T test-taking hints, 7, 9, 11, 13, 15, 17, 19, 21, 23, 24, 27, 33, 34, 36, 37, 39, 41, 43, 45, 47, 48, 49, 51, 52, 56, 58, 59, 60, 69, 70, 73, 77, 81, 85, 96, 97, 100, 101 test-taking tips, 4–5 translation, 98–101 transversal, 26–29 trapezoid, 46, 49 triangle, 34–37 acute, 36 angle sums, 38–41 equilateral, 36 isosceles, 36, 37, 40 name, 34 obtuse, 25, 37, 41 right, 36, 40, 44–45 scalene, 36 side lengths, 35 V variable, 53 vertex, 14–15, 30 # Note To Our Readers **About This Electronic Book:** This electronic book was initially published as a printed book. We have made many changes in the formatting of this electronic edition, but in certain instances, we have left references from the printed book so that this version is more helpful to you. **Chapter Notes and Internet Addresses:** We have done our best to make sure all Internet Addresses in this electronic book were active and appropriate when this edition was created. However, the author and the publisher have no control over and assume no liability for the material available on those Internet sites or on other Web sites they may link to. The Chapter Notes are meant as a historical reference source of the original research for this book. The references may not be active or appropriate at this time, therefore we have deactivated the internet links referenced in the Chapter Notes. **Index:** All page numbers in the index refer to pages in the printed edition of this book. We have intentionally left these page references. 
While electronic books have a search capability, we feel that leaving in the original index allows the reader to not only see what was initially referenced, but also how often a term has been referenced. Any comments, problems, or suggestions can be sent by e-mail to comments@enslow.com or to the following address: <http://www.enslow.com> All rights reserved. No part of this text may be reproduced, downloaded, uploaded, transmitted, deconstructed, reverse engineered, or placed into any current or future information storage and retrieval system, electronic or mechanical, in any form or by any means, without the prior written permission of Enslow Publishers, Inc. Copyright © 2012 by Enslow Publishers, Inc. All rights reserved. No part of this book may be reproduced by any means without the written permission of the publisher. **Library of Congress Cataloging-in-Publication Data** Wingard-Nelson, Rebecca. Geometry / Rebecca Wingard-Nelson. p. cm. — (Ace your math test) Summary: "Re-inforce in-class geometry skills such as lines, angles, polygons, triangles and the Pythagorean theorem"— Provided by publisher. Includes index. ISBN 978-0-7660-3783-0 1. Geometry—Juvenile literature. I. Title. QA445.5.W5545 2011 516—dc22 2011006225 **Future Editions:** Paperback ISBN: 978-1-4644-0010-0 EPUB ISBN: 978-1-4645-0455-6 Single-User PDF ISBN: 978-1-4646-0455-3 Multi-User PDF ISBN: 978-0-7660-5346-5 This is the EPUB version 1.0. **Illustration Credits:** Shutterstock.com **Cover Photo:** © iStockphoto.com/Derek Latta # More Books from Enslow _Come to_ enslow.com _for more information!_ Library Ed. ISBN 978-0-7660-3778-6 Paperback ISBN 978-1-4644-0005-6 Library Ed. ISBN 978-0-7660-3780-9 Paperback ISBN 978-1-4644-0007-0 Library Ed. ISBN 978-0-7660-3782-3 Paperback ISBN 978-1-4644-0009-4
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>it.redhat.demo</groupId> <artifactId>eap6-play</artifactId> <version>1.0-SNAPSHOT</version> </parent> <artifactId>eap6-rest</artifactId> <packaging>war</packaging> <dependencies> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>javax.enterprise</groupId> <artifactId>cdi-api</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-jaxrs</artifactId> <scope>provided</scope> </dependency> </dependencies> </project>
layout: post title: "Selection Bias" excerpt: "National" categories: national comments: false share: true --- ![](http://img.photobucket.com/albums/v642/shakespeares_sister/reusables/gop2016.jpg) Whenever you have guests over for dinner, you always neaten up the place, clean areas you never clean regularly, make the house smell nice, etc. You do this not because you are some uber-mensch who just does these types of things for others; you do it to represent yourself to others in a positive light. However, we all know it is not a fair representation of who you are; we all know you live like a disgusting slob and right after the dinner you are going to let the dishes rot in the sink for a week. There is nothing inherently wrong with this, but let's not pretend this is 'you'. This is exactly what the GOP does when they constantly claim they are the party of Abraham Lincoln and Ronald Reagan. First off, anyone who really thinks Lincoln is a reasonable paragon for the GOP is sadly mistaken. Other than the obvious fact that the Republican Party is as homogeneous as my milk (remember, Lincoln was the one against slavery), here are some other examples: - Conservatives by definition want minimal change and tend to lobby for the status quo. Eh, Lincoln didn't adhere to that. - Conservatives prefer smaller government and less government interaction. Strike two. - The classic core philosophical divide between Republicans and Democrats revolves around State vs Federal rights. Republicans argue for State rights, Democrats want a stronger Federal influence. I would say the political leader during a Civil War where you are fighting against the idea of state-by-state slavery laws favors a stronger Federal government. Regardless of the labels back in the 1860's about who is 'liberal' or 'conservative', it is fair to say those labels have changed. This is the GOP cleaning up the house before independent voters go to the ballot booth. Now about Reagan... 
He obviously is far more of a conventional modern-day conservative. I mean anyone who runs the economy into the ground for military purposes and defends the rich at any cost is a modern-day GOP legacy. But what makes Reagan all the better is the fact that he was exhibiting symptoms of Alzheimer's as early as 1983 (maybe earlier, but our research friends at Arizona St aren't willing to commit to an earlier time). I mean just watch that Mondale/Reagan debate in '84. Holy cow! This performance makes Admiral Stockdale look like Henry Clay. And obviously we cannot forget ye olde Iran-Contra stuff. I mean how could he not know what was going on? Did he actually forget all he said he forgot? Well, if your hippocampus is as shriveled as George Costanza's penis after swimming, he probably did forget. I'm so tired of the 'party of Lincoln and Reagan' rhetoric that permeates every GOP mouthpiece. It's played out, lame, and not even completely accurate. Let's go with a more fitting representation of the GOP: Nixon or Grant? OK, not really fair (selection bias works both ways). Let's settle on Rutherford B. Hayes and call it a day.
Stock Market: Options For Your Gold Mini-Portfolio Gordon Pape | July 28th, 2020 With gold rising and equities uncertain, financial expert Gordon Pape recommends some options for those interested in building a mini-portfolio focused on gold. Photo: fergregory/Getty Images Gold has always been seen as a safe haven investment in times of market turmoil. It's no different this time around. The price of the yellow metal is now in the US$1,900 range, the highest it has been in years. The price is up 25 per cent year-to-date, a much stronger performance than any of the major stock indexes, including Nasdaq. I recommend a gold weighting of 5-10 per cent in portfolios at this time. But one reader wants to go even farther. He wrote: "With gold rising, and equities uncertain, what holdings would you recommend for building a mini-portfolio focused on gold? We already own some FNV (Franco-Nevada) shares". – Dale W. It's a good question. In fact, there are several ways to own gold and other precious metals, thus providing some degree of diversification. In all cases, however, fluctuations in the gold price will affect valuations. Franco-Nevada is a good start. It's a gold streaming company, meaning it does not own any mines but rather purchases a share of the income from producers. In effect, it provides financing in exchange for a percentage of revenue – think of it as a type of mining bank. For example, the company recently announced it has invested US$100 million for a 1 per cent net smelter royalty from SolGold PLC with reference to all minerals produced from the Alpala copper-gold project in northern Ecuador. Franco-Nevada stock is up about 59 per cent year-to-date. With FNV as a starting point, I suggest four additional options for your mini-portfolio. A Gold Miner You want a company that owns high-grade producing mines. Don't invest in exploration companies or start-ups — there's no point taking more risk than you need to. 
As of July 9, the S&P/TSX Global Gold Index, which covers the broad precious metals industry, was up 38.75 per cent so far this year. However, there's a wide variation in performance among individual companies. For example, major producer Agnico Eagle (TSX, NYSE: AEM) is up only 10 per cent for the year. But Barrick Gold (TSX: ABX, NYSE: GOLD) has gained 52.7 per cent during the same period. There are a variety of reasons why one mining company will outperform or underperform at any given time. For example, Agnico Eagle reported that seven of its eight mines were operating at significantly reduced activity levels in the first quarter because of the impact of COVID-19. Barrick, on the other hand, said first-quarter disruptions were minimal and reported gold production and costs were consistent with guidance. Between these two, Barrick would be the preferred choice right now for this portion of a gold portfolio. The stock pays a quarterly dividend of US$0.07 per share to yield about 1 per cent. Whichever gold miner you select, do some research. See how the shares have performed this year, examine the financials, and read what the company has said about the impact of COVID-19 on its business. A Gold-Mining ETF If you want to cover the broad universe of gold producers and streaming companies, put a portion of your money into the iShares S&P/TSX Global Gold Index ETF (TSX: XGD). It tracks the performance of the index of the same name. Holdings include some of the world's top gold producers/streamers including Newmont, Barrick, Franco-Nevada, Wheaton Precious Metals, and Agnico Eagle. It's ahead 48 per cent this year. A Bullion-Based ETF In this case, you are not investing in a portfolio of mining companies but in the metal itself. The world's biggest ETF of this type is SPDR Gold Shares (NYSE: GLD), which invests exclusively in physical gold bullion. It has gained 25 per cent in 2020, about the same as the price of gold itself. Fund expenses are 0.4 per cent. 
Covered Call ETFs Finally, there are two Canadian ETFs that invest in portfolios of gold and precious metals companies but add a kicker – the managers write covered call options to generate additional income. That makes them suitable for investors who need more cash flow. The Horizons Enhanced Income Gold Producers ETF (TSX: HEP) had a year-to-date gain of 20.9 per cent at the time of writing and a trailing 12-month yield of 4.9 per cent. The CI First Asset Gold+ Giants Covered Call ETF (TSX: CGXF) is ahead 18.4 per cent for the year but currently shows a higher trailing yield at 6.4 per cent. A blend of these securities would provide a diversified mix of gold-based assets. But don't overload your portfolio with them. I believe gold will continue to track higher, as central banks continue to print money at an unprecedented rate, so owning some gold is prudent. Just remember, all things in moderation, especially these days. Stock Market: How to Generate Cash Flow From Gold ETFs Stock Market: Smaller Companies That Are Performing Well in Pandemic Times Stock Market: Why It's Best to Remain Conservative in These Uncertain Times
\section{Introduction} Observed large scale structure in the Universe is generally conjectured to arise from Gaussian initial conditions, or nearly so; the rather high level of non-Gaussianity at present is due to the action of gravity and gas physics. The three-point correlation function (3PCF) is the lowest-order correlation function capable of probing such non-Gaussianity. With the recent increase of interest and the corresponding attempts to extract more information about structure formation processes and primordial non-Gaussianity from fine clustering patterns of galaxies, the 3PCF (or its counterpart in Fourier space, the bispectrum) has attracted much attention in recent years \citep[e.g.][]{KayoEtal2004, NicholEtal2006, SmithEtal2008, JeongKomatsu2009, Sefusatti2009}. However, the 3PCF is well known for its low return on investment compared with the two-point correlation function (2PCF). One major obstacle hindering the interpretation, and consequently the application, of the 3PCF is the redshift distortion induced by the peculiar velocities of galaxies. Although the effects of redshift distortion on the 2PCF (or power spectrum) are not yet well understood analytically \citep[e.g.][]{Scoccimarro2004}, approximations incorporating the pairwise velocity distribution have been proposed, validated and applied successfully to statistical analyses \citep{Peebles1980, DavisPeebles1983, White2001, Seljak2001, KangEtal2002, Tinker2007, SmithEtal2008}. In the case of the 3PCF (or bispectrum) an analogous approach would involve higher order statistics of peculiar velocities. The complicated entanglement of redshift distortions with nonlinear gravitational dynamics and nonlinear biasing renders theoretical prediction extremely difficult in configuration space. 
In Fourier space and with the distant observer approximation, prediction of the bispectrum in redshift space in various perturbative and empirical schemes has been moderately successful, although none have shown satisfactory agreement with simulations \citep{MatsubaraSuto1994, HivonEtal1995, VerdeEtal1998, ScoccimarroEtal1999}. The most accurate model to date appears to be the work of \citet{SmithEtal2008}, a halo model extension implemented with higher order perturbation theory. One can eliminate the complexity of redshift distortion by projecting the correlation functions onto the plane perpendicular to the line-of-sight (LOS). Projected correlation functions are obtained by integrating the anisotropic correlation functions along the LOS, which effectively removes redshift distortions provided the conservation of the total number of galaxy pairs and triplets along the LOS is satisfied. Since the thickness of a realistic sample is finite, galaxies near the radial edges can enter or leave the sample space through their apparent movement due to peculiar velocities, so such conservation is only approximately achieved if the sample is shallow, or if redshifts are photometric. Violation of the conservation condition may introduce a non-negligible systematic bias on large scales \citep{NockEtal2010}. Nevertheless, this is not a problem for most modern spectroscopic galaxy samples, and the bias can actually be minimized by careful design of the estimation methodology. In comparison with the projected 2PCF, which has been widely used to investigate the dependence of clustering on galaxy intrinsic properties, evolution history and environment and to distinguish cosmological models \citep[e.g.][]{HawkinsEtal2003, ZhengEtal2007, BaldaufEtal2010, ZehaviEtal2010}, exploration and application of the projected 3PCF has been limited in the literature \citep{JingBoerner1998, JingBorner2004a, Zheng2004, McBrideEtal2010}. 
Lack of accurate theoretical models of the 3PCF prevents proper interpretation of measurements. \citet{ScoccimarroCouchman2001} offered a phenomenological model based on hyper-extended perturbation theory for the bispectrum in the nonlinear regime. Their fitting formula is accurate on smaller scales, but in the weakly and mildly nonlinear regimes it is improved upon by the empirical model of \citet{PanColesSzapudi2007}. Both fail on very small scales, and neither can properly capture the signal of baryonic oscillations in the bispectrum \citep{SefusattiEtal2010}. The halo model approach appears more promising, as it can reproduce most measurements in simulations for the bispectrum \citep[e.g.][]{MaFry2000a, MaFry2000b, ScoccimarroEtal2001, SmithWattsSheth2006, SmithEtal2008} and for the 3PCF in configuration space \citep[e.g.][]{TakadaJain2003, WangEtal2004, FosalbaPanSzapudi2005}. In spite of disagreement with simulations for some 3PCF configurations, the halo model is still more attractive than the phenomenological models for its clean and physically motivated parametrization of galaxy biasing through e.g. the machinery of the halo occupation distribution \citep[HOD,][]{BerlindWeinberg2002}. Another reason for the scarce exploration of the projected 3PCF is the complexity of its estimation. The computational requirements of the 3PCF are demanding for currently available computers when millions of points are typical. The additional task of decomposing the separations among three points for the projected 3PCF adds to the CPU load. Furthermore, the 3PCF is already more prone to Poisson noise than the 2PCF, and the typical bin width of scales for the projected 3PCF is even smaller than for the ordinary 3PCF. In order to suppress discreteness effects for a reliable estimation, a high number density of points in the sample is crucial, but this is often unrealistic for real surveys. 
By analogy to the monopole of the 3PCF advocated by \citet{PanSzapudi2005a, PanSzapudi2005b}, we show that a third-order statistical function similar to the angular average of the projected 3PCF is redshift distortion free and relatively easy to estimate and model theoretically. In the next section, the definitions, the relation to the 3PCF and the estimation algorithm are described. Section 3 presents numerical properties of the new statistical measure, while in section 4 we demonstrate the consistency of halo models with simulations of the new function. Summary and discussion are in the last section. \section{projected three-point correlation function and its zeroth-order component} Let ${\bf r}={\bf x}_2-{\bf x}_1$ be the vector pointing from a point at position ${\bf x}_1$ to a point at ${\bf x}_2$. The vector can be decomposed into two components: the separation along the line-of-sight (LOS), $\pi=r\mu$, with $\mu$ being the cosine of the angle between the LOS and ${\bf r}$, and the separation perpendicular to the LOS, $\sigma=(r^2-\pi^2)^{1/2}$. We then have the anisotropic 2PCF $\xi(\sigma, \pi)$ and the projected 2PCF \begin{equation} \begin{aligned} \Xi(\sigma)& \equiv \int_{-\infty}^{+\infty} \xi(\sigma, \pi) {\rm d}\pi\ \\ =& 2\int_{\sigma}^{+\infty} \frac{r\xi(r){\rm d}r}{\sqrt{r^2-\sigma^2}} = 2\int_{\sigma}^{+\infty} \frac{s\xi(s){\rm d}s}{\sqrt{s^2-\sigma^2}}\ , \end{aligned} \end{equation} where ${\bf s}$ is the separation vector between the two points measured in redshift space and the last step follows from the conservation of the total number of pairs along the LOS. Inversion of $\Xi(\sigma)$ can directly yield the 2PCF $\xi(r)$, although the inversion of such an Abel integral is mathematically unstable \citep{DavisPeebles1983}. 
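As a quick numerical sanity check of the projection integral in Eq.~(1) (our own illustrative sketch, not part of the original analysis): for a power law $\xi(r)=(r_0/r)^\gamma$ the integral has the closed form $\Xi(\sigma)=\sigma(r_0/\sigma)^\gamma\,\Gamma(1/2)\Gamma[(\gamma-1)/2]/\Gamma(\gamma/2)$ \citep[cf.][]{DavisPeebles1983}, which a direct quadrature should reproduce. The values of $r_0$ and $\gamma$ below are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as Gamma

def xi(r, r0=5.0, g=1.8):
    """Power-law real-space 2PCF, xi(r) = (r0/r)^gamma."""
    return (r0 / r) ** g

def projected_2pcf(sigma, r0=5.0, g=1.8):
    """Xi(sigma) = 2 * int_sigma^inf r xi(r) / sqrt(r^2 - sigma^2) dr.
    The substitution r = sigma*cosh(u) absorbs the integrable edge singularity:
    the integrand becomes sigma*cosh(u)*xi(sigma*cosh(u))."""
    f = lambda u: sigma * np.cosh(u) * xi(sigma * np.cosh(u), r0, g)
    val, _ = integrate.quad(f, 0.0, 30.0)  # integrand decays exponentially in u
    return 2.0 * val

def projected_2pcf_analytic(sigma, r0=5.0, g=1.8):
    """Closed form for a power-law xi."""
    return sigma * (r0 / sigma) ** g * Gamma(0.5) * Gamma((g - 1) / 2) / Gamma(g / 2)
```

The two routines agree to quadrature precision, which is a useful regression test before replacing `xi` with a tabulated nonlinear correlation function.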
Similarly, given three points at ${\bf x}_1$, ${\bf x}_2$ and ${\bf x}_3$, the 3PCF $\zeta(r_1,r_2, r_3)$ is defined on the triangle configuration with the three separations ${\bf r}_1={\bf x}_2-{\bf x}_1=(\sigma_1, \pi_1)$, ${\bf r}_2={\bf x}_3-{\bf x}_2=(\sigma_2, \pi_2)$ and ${\bf r}_3={\bf x}_1-{\bf x}_3=(\sigma_3, \pi_3)$. Decomposition of the three separations brings up the anisotropic 3PCF $\zeta(\sigma_{1,2,3};\pi_{1,2,3})$ with $\sum\pi_{1,2,3}=0$, and the projected 3PCF is defined as \citep{JingBoerner1998, JingBorner2004a, Zheng2004} \begin{equation} \begin{aligned} Z( & \sigma_1, \sigma_2, \sigma_3) \equiv \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \zeta \left(\sigma_{1, 2, 3}; \pi_{1, 2}\right) {\rm d} \pi_1 {\rm d}\pi_2\\ &=2\int_{\sigma_1}^{+\infty} \int_{\sigma_2}^{+\infty} \frac{r_1 r_2\left[\zeta(r_1, r_2, r_3^+)+ \zeta(r_1, r_2,r_3^-)\right] }{ \sqrt{(r_1^2-\sigma_1^2)(r_2^2-\sigma_2^2)}} {\rm d}r_1 {\rm d}r_2 \ , \end{aligned} \end{equation} in which \begin{equation} \begin{aligned} r_3^+ & = \sqrt{\sigma_3^2+\left( |\pi_1| + |\pi_2| \right)^2}\\ r_3^- & = \sqrt{\sigma_3^2+\left( |\pi_1| - |\pi_2|\right)^2}\ . \end{aligned} \end{equation} 
One can easily found that $\zeta_0$ is actually the spherical average of $\zeta$ in three-dimensional space \begin{equation} \frac{\zeta_0(r_1, r_2)}{4\pi}= \frac{\int \zeta(r_1, r_2, \theta) 2\pi \sin\theta {\rm d}\theta}{4\pi} = \frac{\int \zeta{\rm d}\Omega}{\int {\rm d}\Omega}\ , \end{equation} which effectively becomes theoretical support to the estimator in \citet{PanSzapudi2005b}. In the same spirit, the projected 3PCF $Z$ also can be expanded but in a different treatment, the cosine Fourier transformation proposed by \citet{Zheng2004} and \citet{Szapudi2009} is the appropriate one since $Z$ is defined on a two-dimensional plane which is perpendicular to LOS. Angular averaging of $Z$ thus produces the zeroth-order component of the cosine expansion to $Z$ \citep{Zheng2004,Szapudi2009}, \begin{equation} Z_0(\sigma_1, \sigma_2)=\frac{1}{2\pi} \int_0^{2\pi} Z(\sigma_1, \sigma_2, \theta_p) {\rm d}\theta_p \end{equation} with $\theta_p=\cos^{-1} [ (\sigma_1^2+\sigma_2^2- \sigma_3^2)/(2\sigma_1 \sigma_2 ) ]$, which is the object function that we focus on and actually is related to $\zeta$ by \begin{equation} \begin{aligned} Z_0 & = \frac{1}{2\pi}\int_0^{2\pi} {\rm d}\theta_p \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \zeta \left(\sigma_1,\sigma_2,\theta_p; \pi_1, \pi_2 \right) {\rm d} \pi_1 {\rm d}\pi_2 \\ & = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \widetilde{\zeta_0}(\sigma_1, \sigma_2, \pi_1, \pi_2) {\rm d}\pi_1 {\rm d}\pi_2\ \end{aligned} \label{eq:z0} \end{equation} where \begin{equation} \widetilde{\zeta_0}(\sigma_1, \sigma_2, \pi_1, \pi_2) = \frac{1}{2\pi}\int_0^{2\pi} \zeta \left(\sigma_1,\sigma_2,\theta_p; \pi_1, \pi_2\right) {\rm d}\theta_p\ . \label{eq:az0} \end{equation} Note that $\widetilde{\zeta_0}$ and the monople of 3PCF $\zeta_0$ are not equal at all. 
Theoretically, if the nonlinear bispectrum is known, by the cosine transformation \begin{equation} \begin{aligned} & B(k_1, k_2, \phi)= \sum_{n=-\infty}^{+\infty} B_n(k_1, k_2) \cos(n\phi) \\ & B_n(k_1, k_2)=\frac{1}{2\pi}\int_0^{2\pi} B(k_1, k_2, \phi) \cos(n \phi) {\rm d}\phi\ , \end{aligned} \end{equation} it is fairly straightforward to compute $Z_0$ \citep{Zheng2004}, \begin{equation} \begin{aligned} Z_0(\sigma_1,\sigma_2)= \frac{1}{(2\pi)^2} & \int_0^{+\infty} \int_0^{+\infty} B_0(k_1, k_2) \\ \times &J_0(k_1\sigma_1) J_0(k_2\sigma_2) k_1 k_2 {\rm d} k_1 {\rm d} k_2 \ , \end{aligned} \label{eq:b0z0} \end{equation} where $J_0$ is the zeroth-order Bessel function of the first kind. An immediate consequence is that $Z_0$ only requires a good approximation to the zeroth-order component of the nonlinear bispectrum, which simplifies theoretical development. At first glance it may seem that there is no point in invoking $Z_0$, since $Z$ contains more information; the former erases the angular dependence completely through averaging. However, through smoothing, $Z_0$ suffers much less from shot noise than $Z$, i.e. it has a smaller variance, which is a celebrated property particularly when the sample is not of high number density. More importantly, as we will see in the next section, $Z_0$ can easily be estimated with the common procedure for the anisotropic 2PCF after some minor modification. The savings in computing time, proportional to the number of galaxies, are tremendous compared with calculating the projected 3PCF of the full configuration. \section{estimation and numerical test} \subsection{Estimator} Estimation of the zeroth-order component of the projected 3PCF is based on Eqs.~\ref{eq:z0} and \ref{eq:az0}. 
Eq.~\ref{eq:az0} indicates that $\widetilde{\zeta_0}$ can be measured with the same estimator as $\zeta_0$ in \citet{PanSzapudi2005b}, taking the same form as the one in \citet{SzapudiSzalay1998}, \begin{equation} \widetilde{\zeta_0} =\frac{DDD-3DDR+3DRR-RRR}{RRR}\ , \end{equation} where the grouped symbols of $D$ and $R$ refer to various normalized number counts of triplets, similar to those in \citet{PanSzapudi2005b}; the difference is that $\widetilde{\zeta_0}$ is estimated in bins of both $\sigma$ and $\pi$. Explicitly, if the scale bins are linear, given two vector bins ${\bf r}_{jk}=(\sigma_j, \pi_k)$ and ${\bf r}_{j'k'}=(\sigma_{j'}, \pi_{k'})$, with $\sigma_j$ in $(\sigma_j-\Delta \sigma/2, \sigma_j +\Delta\sigma/2)$ and $\pi_k$ in $(\pi_k-\Delta\pi/2, \pi_k+\Delta\pi/2)$, as an example, the $DDD$ is obtained through \begin{equation} DDD=\left\{ \begin{array}{cc} \frac{\sum_{i=1}^{N_g} n_i({\bf r}_{jk}) \left[n_i({\bf r}_{j'k'})-1 \right]}{N_g (N_g-1) (N_g-2)}\ , & \textrm{if}\ {\bf r}_{jk}={\bf r}_{j'k'} \\ \frac{\sum_{i=1}^{N_g} n_i({\bf r}_{jk}) n_i({\bf r}_{j'k'})} {N_g (N_g-1) (N_g-2)}\ , & \textrm{if}\ {\bf r}_{jk}\ne {\bf r}_{j'k'} \end{array} \right. \ , \end{equation} where $n_i$ is the number of neighbours of the center point counted in the vector bin ${\bf r}_{jk}$. Then, by Eq.~\ref{eq:z0}, integrating $\widetilde{\zeta_0}$ over $\pi_{k}$ and $\pi_{k'}$ yields an estimate of $Z_0$. We have to stress here that, unlike for the 3PCF, the estimator cannot completely eliminate edge effects for $\zeta_0$, $\widetilde{\zeta_0}$ and hence $Z_0$; one needs to be cautious when the probed scales are comparable to the sample's characteristic size. \subsection{Data preparation and estimation setup} Since our goal is to provide a redshift-distortion-free third-order statistic, a key question is whether $Z_0$ measured in redshift space agrees with what we obtain in real space. 
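For concreteness, the triplet-counting machinery behind the $DDD$ term above can be sketched in a few lines (a brute-force illustration on a toy point set, not the production code used for the measurements; for simplicity it bins the absolute LOS separation and takes the $z$ axis as the plane-parallel LOS):

```python
import numpy as np

def neighbor_counts(pos, sig_bin, pi_bin):
    """n_i: for each point i, the number of neighbours whose separation falls
    in the vector bin (sigma in sig_bin, |pi| in pi_bin)."""
    n = np.zeros(len(pos), dtype=int)
    for i, p in enumerate(pos):
        d = pos - p
        pi_sep = np.abs(d[:, 2])                   # LOS separation
        sig = np.hypot(d[:, 0], d[:, 1])           # transverse separation
        inbin = ((sig >= sig_bin[0]) & (sig < sig_bin[1]) &
                 (pi_sep >= pi_bin[0]) & (pi_sep < pi_bin[1]))
        inbin[i] = False                           # a point is not its own neighbour
        n[i] = inbin.sum()
    return n

def ddd(pos, bin1, bin2):
    """Normalized DDD triplet count for two vector bins, following the
    same-bin / different-bin split of the estimator equation."""
    ng = len(pos)
    norm = ng * (ng - 1) * (ng - 2)
    n1 = neighbor_counts(pos, *bin1)
    if bin1 == bin2:
        return np.sum(n1 * (n1 - 1)) / norm
    n2 = neighbor_counts(pos, *bin2)
    return np.sum(n1 * n2) / norm
```

With three collinear points along the LOS at $z=0,1,2$, the only triplet with both legs of length 1 is centred on the middle point, so the same-bin count is $2/(3\cdot2\cdot1)=1/3$, which the code reproduces.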
In the absence of accurate models of the redshift-distorted 3PCF, particularly in the nonlinear regime, the best approach is to work with N-body simulation data directly. Two realizations of LCDM simulations run with Gadget2 \citep{Springel2005} were analysed. Their cosmological parameters are taken from the WMAP3 fits \citep{SpergelEtal2007}: $\Omega_m=0.236$, $\Omega_\Lambda=0.764$, $h=0.73$ and $\sigma_8=0.74$. In both simulations $512^3$ particles were evolved, but one box size is $L=300h^{-1}$Mpc (box300) and the other is $L=600h^{-1}$Mpc (box600); the force softening lengths are $12h^{-1}$kpc and $24h^{-1}$kpc respectively. The $z=0$ output of the box300 simulation and the $z=0.09855$ output of the box600 simulation were selected for our numerical experiment. It is impractical to use all particles in the simulations, therefore for each data set we generate nine diluted samples for analysis to keep the amount of computation at a reasonable level; all results we present here are mean values over the nine runs, and the actual scatter among the different realizations is very small. For box300 the number of randomly picked points is about $0.2\%$ of the total, while for box600 more than $\sim 600,000$ points are used. Several other samples diluted at different levels were also generated for consistency checks. We find that sample dilution does affect our estimation of $Z_0$, but mainly on very small scales, and that the variance due to discreteness becomes larger with fewer points, as expected. A common assumption about redshift distortions is the plane parallel approximation (distant observer assumption), which assumes that the observer is so far away from the sample that all lines-of-sight from the observer to the galaxies are parallel to each other. It simplifies the calculation by reducing a 3D problem to 1D and indeed works well when the scale of interest subtends only a narrow angle as seen by the observer. 
However, the systematic bias introduced by the plane-parallel approximation turns out to be significant if the angle becomes wide. Theoretical calculations and numerical measurements have shown that the deviation mainly occurs at relatively large scales and can exceed $10\%$ \citep[e.g.][]{SzalayEtal1998, Scoccimarro2000, Szapudi2004b, CaiPan2007, PapaiSzapudi2008}. To test the accuracy of the plane-parallel approximation, two sets of samples in redshift space are generated for the box300 data: one set adopts the distant-observer assumption, while the other mimics realistic samples by placing an observer at a distance of $100h^{-1}$Mpc from the nearest surface of the sample. One has to bear in mind that the two redshift-distortion scenarios differ not only in sample construction but also in the way the separation ${\bf r}$ is decomposed into $(\sigma, \pi)$ during the measurement. Since the output of the box600 simulation we used is at $z\approx 0.1$, the plane-parallel approximation is sufficient for most analyses if the scales of interest are less than $\sim 50h^{-1}$Mpc. We also noticed that applying the periodic boundary condition, or simply discarding those points shifted out of the box by peculiar velocities, makes little difference to the final $Z_0$, which effectively eliminates the concern of \citet{NockEtal2010}. In our estimation the $\sigma$ bins are logarithmic with $\Delta\log \sigma=\log 1.4$, and the $\pi$ bins are linear with $\Delta\pi=3, 5h^{-1}$Mpc for box300 and box600 respectively. Caution must be taken with the bin width: it must not be so wide as to degrade the accuracy too much, while it must be large enough to achieve $DDD>0$ even in the narrowest bin. Experience shows that normally $DDD > 100$ is good enough to give a reliable estimate at our accuracy goal of $10\%$.
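The two ways of decomposing a separation into $(\sigma, \pi)$ can be sketched as follows; this is purely illustrative code of ours (the observer position is an assumed input), not the paper's pipeline:

```python
import numpy as np

def sigma_pi_plane_parallel(x1, x2):
    """Distant-observer decomposition: the LOS is a fixed axis (here z)."""
    s = x2 - x1
    pi = abs(s[2])                      # LOS component of the separation
    sigma = np.hypot(s[0], s[1])        # transverse component
    return sigma, pi

def sigma_pi_wide_angle(x1, x2, observer):
    """Wide-angle decomposition: each pair gets its own LOS, taken here
    through the pair midpoint as seen from a finite-distance observer."""
    s = x2 - x1
    los = 0.5 * (x1 + x2) - observer
    los = los / np.linalg.norm(los)
    pi = abs(np.dot(s, los))
    sigma = np.sqrt(max(np.dot(s, s) - pi ** 2, 0.0))
    return sigma, pi
```

In the plane-parallel case the LOS is one global axis, while in the wide-angle case every pair carries its own LOS; the two schemes agree only for pairs subtending a narrow angle at the observer, which is precisely why they must be tested against each other.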
As we do not have multiple realizations to produce error bars, for each simulation data set we split the sample into eight half-size sub-volume boxes; the scatter of the measurements in these eight sub-volumes is then taken as an estimate of the variance. \subsection{Finite integration range along the LOS} \begin{figure*} \resizebox{\hsize}{!}{ \includegraphics{f1a.ps} \includegraphics{f1b.ps} \includegraphics{f1c.ps}} \caption{Fractional change of $Z_0(\sigma_1, \sigma_2)$ with $\pi_{max}$, compared to the one measured with the largest $\pi_{max}$. Three configuration classes of $\sigma_2/\sigma_1$ of $Z_0$ are presented. $Z_{0r}$ is measured in real space with $\pi_{max}$ as specified in the legend; $Z_{0rmax}$ is estimated with the largest $\pi_{max}=150h^{-1}$Mpc. The plane-parallel approximation in real space means merely the decomposition of ${\bf r}$ into $(\sigma, \pi)$ using parallel LOS, while nonparallel corresponds to an external observer at a distance of $100h^{-1}$Mpc from the bottom of the simulation box. Error bars of $\pi_{max}=78h^{-1}$Mpc for box300 and $\pi_{max}=120h^{-1}$Mpc for box600 are plotted to show the uncertainty.} \label{fig:pimax} \end{figure*} The integration range along the LOS to obtain $Z_0$ from $\widetilde{\zeta_0}$ in Eq.~(\ref{eq:z0}) should be $(-\infty, +\infty)$ to guarantee conservation of triplets along the LOS. However, one cannot integrate over infinite scales because of the finite radial thickness of realistic samples, so there is always a finite upper limit of $\pi$ in the integration. Our measurements thus correspond to \begin{equation} \hat{Z}_0=\int_{-\pi_{max}}^{+\pi_{max}}\int_{-\pi_{max}}^{+\pi_{max}} \widetilde{\zeta_0}{\rm d}\pi_1{\rm d}\pi_2 =\sum_{i, j}\widetilde{\zeta_0}\Delta\pi_i\Delta\pi_j \ . \label{eq:sum} \end{equation} Let the subscript $r$ denote quantities in real space and $s$ those in redshift space. This practical limitation certainly introduces a systematic bias, hence mathematically $\hat{Z}_{0s} \ne \hat{Z}_{0r}\ne Z_0$.
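The truncated double sum in Eq.~(\ref{eq:sum}) can be sketched as follows (illustrative code of ours; the grid of $\widetilde{\zeta_0}$ values is assumed precomputed for a fixed pair of $\sigma$ bins):

```python
import numpy as np

def z0_hat(zeta0_grid, pi_centres, dpi, pi_max):
    """Truncated double Riemann sum over the two LOS separations pi_1, pi_2,
    for fixed (sigma_j, sigma_j'). zeta0_grid[k, kp] holds zeta0~ at
    (pi_k, pi_kp); bins with |pi| > pi_max are discarded."""
    keep = np.abs(pi_centres) <= pi_max
    return np.sum(zeta0_grid[np.ix_(keep, keep)]) * dpi * dpi
```

Raising `pi_max` simply admits more LOS bins into the sum, which is exactly the convergence knob studied in Figure~\ref{fig:pimax}.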
The hope is that we can find a $\pi_{max}$ such that the contribution from $\pi$ larger than that is negligible at a given tolerance. In our test runs we found that the largest permitted $\pi_{max}$ is around $1/4-1/2$ of the box size. If a larger scale is used, the estimator of $Z_0$ suffers greatly from finite-volume effects. The same problem is present when estimating the projected 2PCF, and it is normally agreed that $\pi_{max}\sim 40-70h^{-1}$Mpc is sufficient to give stable results at small $\sigma$ of less than $\sim 20-30h^{-1}$Mpc, but may not be enough for measurements on larger scales \citep[see][and references therein]{BaldaufEtal2010}. Figure~\ref{fig:pimax} presents the convergence of the measurements with changing $\pi_{max}$. It displays the fractional differences of $Z_0$ compared to that calculated with the largest $\pi_{max}$ allowed by the geometry of the sample. The samples used in this test are all in real space, but for the box300 data we decompose scales along the LOS in two ways: the plane-parallel approximation and the wide-angle treatment. For box300, at scales $\sigma< 1h^{-1}$Mpc, $Z_0$ is extremely stable against different choices of $\pi_{max}$, independent of the scheme of scale decomposition, but at larger $\sigma$ the influence of those large $\pi$ becomes more and more evident. It appears that in the wide-angle treatment $Z_0$ actually increases with $\pi_{max}$ up to $\sim 110h^{-1}$Mpc and then falls when $\pi_{max}$ is enlarged further, while under the plane-parallel assumption $Z_0$ rises monotonically with larger $\pi_{max}$. The results from box600 are somewhat different: $Z_0$ decreases with increasing $\pi_{max}$ on all $\sigma$ scales. Additional numerical experiments with the box600 data revealed that this behaviour is largely caused by the dilution of the original data: a denser sample shows less variation with the choice of $\pi_{max}$ when $\sigma < 1h^{-1}$Mpc.
We conclude that for an overall precision target of $\sim 10\%$ on $\sigma$ scales below $10h^{-1}$Mpc, $\pi_{max}\approx 120h^{-1}$Mpc suffices. This is much larger than is customary for the projected 2PCF. Note that the sharp breakdown of convergence at scales $\sigma\sim 10-20h^{-1}$Mpc appears to be a numerical artifact where $Z_0$ quickly approaches zero. \subsection{Redshift space {\it versus} real space} \begin{figure*} \resizebox{\hsize}{!}{ \includegraphics{f2a.ps} \includegraphics{f2b.ps}} \caption{$Z_0$ measured in real space (thick lines) and in redshift space (thin lines) for different $\sigma_2/\sigma_1$ and $\pi_{max}$. Error bars are of measurements of the configuration $\sigma_2/\sigma_1=1$ in real space. The results for box300 under the plane-parallel approximation, not shown, are similar to the non-parallel case.} \label{fig:z0rs} \end{figure*} \begin{figure*} \resizebox{\hsize}{!}{ \includegraphics{f3a.ps} \includegraphics{f3b.ps} \includegraphics{f3c.ps} } \caption{Deviation of $Z_0$ in redshift space from $Z_0$ in real space for different configurations of $\sigma_2/\sigma_1$ and choices of $\pi_{max}$. The left panel shows the wide-angle treatment of the redshift distortion for the box300 data, the middle panel shows the box300 results under the plane-parallel assumption, and the right panel shows the box600 results with plane-parallel redshift distortion.} \label{fig:z0rserror} \end{figure*} Figure~\ref{fig:z0rs} shows $Z_0$ of the two simulation data sets in redshift space and real space for different $\sigma_2/\sigma_1$ and $\pi_{max}$; a detailed comparison is drawn in Figure~\ref{fig:z0rserror}. On most scales $\sigma< 10h^{-1}$Mpc, residual effects of redshift distortion due to the finite integration domain result in only a minor bias within $10\%$, except for an upturn of $Z_0$ in redshift space on scales $\sigma < \sim0.2 h^{-1}$Mpc.
Adjusting $\pi_{max}$ does not modulate $Z_0$ significantly at $\sigma < \sim 1h^{-1}$Mpc, but causes some apparent deviations on larger scales, especially where $Z_0$ approaches its zero-crossing point. Nevertheless, it is reassuring from Figures~\ref{fig:z0rs} and \ref{fig:z0rserror} that $Z_0$ estimated in redshift space agrees well with that in real space, with at most $10\%$ uncertainty, for $0.2< \sigma < \sim 10h^{-1}$Mpc and $\pi_{max}\sim 120h^{-1}$Mpc. Thus $Z_0$ can be accepted as a redshift-distortion-free third-order statistic to good precision. \section{halo model prediction of $Z_0$} \subsection{Formalism} The halo model invoked to model the third-order statistic $Z_0$ of dark matter basically follows \citet{MaFry2000a, MaFry2000b}, \citet{ScoccimarroEtal2001}, \citet{FosalbaPanSzapudi2005} and \citet{SmithEtal2008}. Here we give only a brief description of the main ingredients of the model; for more details we refer to the review of \citet{CooraySheth2002}. \begin{enumerate} \item Halo profile $\rho(r)$. It has been pointed out that the density profile of a virialized dark matter halo is in general ellipsoidal and shows various morphologies, rather than following a simple universal spherical approximation \citep{JingSuto2000, JingSuto2002}. The non-spherical shape of halos can evidently affect the halo model prediction of the clustering of dark matter on small scales where the one-halo term dominates \citep{SmithWattsSheth2006}. Noting that $Z_0$ is a degenerate 3PCF analogous to $\zeta_0$, and should be similarly insensitive to halo shapes, the popular NFW profile \citep{NFW1997} is still adequate for our model.
For a halo of mass $M$ it reads \begin{equation} u(r) = \frac{\rho(r)}{M}=\frac{fc^3}{4\pi R^3_{v}} \frac{1}{cr/R_v(1+cr/R_v)^2}\ , \label{eq:profile} \end{equation} where $f = \left[ \ln{(1+c)}-c/(1+c) \right]^{-1}$ and $c(M) = c_0 (M/M_*)^{-\beta}$ is the concentration parameter; the parameters $c_0=9$, $\beta=0.13$ are calibrated by numerical simulation \citep{BullockEtal2001}. Halo mass is defined as $M=(4\pi R_v^3/3) \Delta \rho_{crit}$ with $\rho_{crit}$ being the cosmological critical density. $\Delta$ is the density contrast for virialization and can be estimated from the spherical collapse model. A good fit for a flat universe with a cosmological constant is given by \citet{BryanNorman1998} \begin{equation} \Delta=\frac{18\pi^2+82x-39x^2}{\Omega(a)}\ ,\ x=\Omega(a)-1\ . \end{equation} As it is more convenient to work in Fourier space for the 3PCF, the Fourier-transformed halo profile of \citet{ScoccimarroEtal2001} is used for computations \begin{equation} \begin{aligned} u(k,M)= & f \bigg\{ \sin \eta \left[ \rm{Si} (\eta[1+c])-\rm{Si}(\eta)\right] \\ &+\cos \eta \left[ \rm{Ci} (\eta[1+c])-\rm{Ci}(\eta)\right] - \frac{\sin{(\eta c)}}{\eta(1+c)}\bigg\} \end{aligned} \end{equation} with $\eta=kR_v/c$. Note that the halo profile is presumably truncated at the virial radius $R_v$ in the standard version of the halo model for large-scale structure; in extensions it becomes an adjustable parameter in an attempt to find the best match to simulations. \item Mass function $n(M)$. There are many versions of halo mass functions, but our tests show that using a mass function of higher precision actually brings only a relatively minor change to $Z_0$ on small scales.
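Returning to the halo profile of item (i): the Fourier-transformed NFW profile above is straightforward to evaluate with SciPy's sine and cosine integrals. The sketch below is our own illustration, not the paper's code; the $k\to 0$ limit $u\to 1$ provides a useful sanity check.

```python
import numpy as np
from scipy.special import sici  # sici(x) returns the pair (Si(x), Ci(x))

def u_nfw(k, R_v, c):
    """Fourier transform of the NFW profile truncated at R_v
    (Scoccimarro et al. 2001 form), normalized so u -> 1 as k -> 0."""
    f = 1.0 / (np.log(1.0 + c) - c / (1.0 + c))
    eta = k * R_v / c
    si1, ci1 = sici(eta * (1.0 + c))
    si0, ci0 = sici(eta)
    return f * (np.sin(eta) * (si1 - si0)
                + np.cos(eta) * (ci1 - ci0)
                - np.sin(eta * c) / (eta * (1.0 + c)))
```

For small $\eta$ the Si difference contributes $O(\eta^2)$, the Ci difference tends to $\ln(1+c)$, and the last term tends to $c/(1+c)$, so $u\to f[\ln(1+c)-c/(1+c)]=1$ as required.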
The classical Sheth-Tormen function \citep{ShethTormen1999} is sufficient, \begin{equation} n(M)M{\rm d}M =\bar{\rho} \frac{dy}{y} A \gamma \sqrt{\frac{g(\nu)}{2\pi}} \left(1+g(\nu)^{-p} \right) \exp\left(-\frac{g(\nu)}{2}\right) \end{equation} in which $\gamma={\rm d}\ln\sigma_M^2/{\rm d}\ln R$, $g(\nu)=\alpha \nu^2$, $\nu=\delta_c/\sigma_M$, $y=(M/M_*)^{1/3}=R/R_*$ with $R_*=R_v\Delta$, and $\sigma_M$ being the extrapolated linear variance of the dark matter fluctuations smoothed over the Lagrangian scale $R$. $A=0.322$, $\alpha=0.707$ and $p=0.3$ are parameters fitted to simulations by \citet{JenkinsEtal2001}. Note that the definition of halo mass in the Sheth-Tormen function is $M=4\pi\bar{\rho} \Delta R_v^3/3$ with $\bar{\rho}=2.78\times 10^{11} \Omega_m h^2 M_{\sun} {\rm Mpc}^{-3}$ being the mean dark matter density of the present universe, while the halo mass in the NFW profile is defined with the critical density $\rho_{crit}=\bar{\rho}/\Omega_m$; the conversion between the halo parameters of the two sets is given in \citet{SmithWatts2005}. \citet{ScoccimarroEtal2001} already noticed the inconsistency but argued that the effects of the difference are largely cancelled in practical calculations, and \citet{FosalbaPanSzapudi2005} also found that changing the concentration parameter by as much as $50\%$ would not affect the final results significantly. This is also the case in our calculation. \item Halo bias. The distribution of massive halos is biased relative to the dark matter. Most halo bias functions extracted from N-body simulations \citep[e.g.][]{MoJingWhite1997, Jing1999, ShethTormen1999, TinkerEtal2010} are refinements of the analytical model of \citet{MoWhite1996}. The bias prescription used in our recipe is the fitting formula given by \citet{ShethTormen1999}, \begin{equation}\label{eq:bias} b(\nu) = 1 + \frac{g(\nu)-1}{\delta_c} + \frac{2p}{\delta_c(1+g(\nu)^p)}\ , \end{equation} in which $\delta_c$ is the linear overdensity threshold for spherical collapse.
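The Sheth-Tormen bias of Eq.~\ref{eq:bias} is simple enough to sketch numerically; the function below is our own illustration, adopting the fiducial $\delta_c=1.686$ together with the $\alpha$ and $p$ quoted above.

```python
def st_bias(nu, delta_c=1.686, alpha=0.707, p=0.3):
    """Sheth-Tormen halo bias b(nu), with nu = delta_c / sigma_M.
    High-nu (massive, rare) halos are biased high; low-mass halos low."""
    g = alpha * nu * nu
    return 1.0 + (g - 1.0) / delta_c + 2.0 * p / (delta_c * (1.0 + g ** p))
```

The expected qualitative behaviour, $b>1$ for rare massive halos and $b<1$ for common low-mass ones, follows directly from the $(g(\nu)-1)/\delta_c$ term.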
The cosmological dependence of $\delta_c$ is so weak that a constant value of $1.686$ is usually taken. The most recent update of \citet{TinkerEtal2010} is also applied in our code for a consistency check; the results indicate that the improvement to $Z_0$ is minor and confined to the intermediate nonlinear regime, and it does not bring significant improvement to the overall accuracy when considering the magnitude of the numerical errors of the estimation in the previous section. \end{enumerate} To avoid the multi-dimensional integrations involved in a direct calculation of $\zeta$ in configuration space \citep{TakadaJain2003}, we first work in Fourier space to obtain the bispectrum $B$ predicted by the halo model. Then $B_0$ is easily obtained, which renders $Z_0$ through the transformation of Eq.~\ref{eq:b0z0}. In the halo model, the bispectrum consists of three separate terms, namely the one-halo, two-halo and three-halo terms, \begin{equation} B(k_1, k_2, \phi) =B_{1h} +B_{2h} +B_{3h}\ , \end{equation} in which \begin{equation} \begin{aligned} B_{1h} & =I_{03}(k_1, k_2, k_3) \\ B_{2h} & =I_{11}(k_1)I_{12}(k_2, k_3)P_L(k_1)+ cyc.\\ B_{3h} & = \left[\prod_{i=1}^3 I_{11}(k_i)\right]B_{PT} \end{aligned} \end{equation} and \begin{equation} I_{ij} =\int \, \frac{dr}{r}n(r)b_i(r)[u (k_1,r)\ldots u(k_j,r)] \left( \frac{4\pi r^3}{3}\right)^{j-1} \end{equation} with $b_0=1$, $b_1=b(\nu)$ and $b_i=0$ for $i>1$, neglecting quadratic and higher-order biasing terms. $P_L$ is the linear power spectrum, generated by CMBFAST \citep{CMBFAST1996} with the cosmological parameters of the simulations we use. $B_{PT}$ is the bispectrum predicted by Eulerian perturbation theory at tree level \citep[e.g.][]{BernardeauEtal2002}. \subsection{Comparison with simulations} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{f4.ps}} \caption{$Z_0$ of the box300 simulation (z=0) compared with predictions from the halo model and Eulerian perturbation theory (PT).
Symbols are measurements of the simulations in real space with different $\pi_{max}$. The upper row of plots shows the effect of adjusting the halo boundary radius in units of $R_v$ without a cut-off to the halo mass function, while the bottom row demonstrates the consequence of cutting the high-mass tail of the mass function with the halo boundary fixed at $R_v$.} \label{fig:z0box300} \end{figure*} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{f5.ps}} \caption{$Z_0$ of the box600 simulation data and models. See the previous figure for details.} \label{fig:z0box600} \end{figure*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{f6.ps}} \caption{Relative differences between the halo model predicted $Z_0$ and that estimated from the box300 simulation. Red dotted lines correspond to the prediction of the halo model with the halo mass function and bias function of \citet{ShethTormen1999}; blue dashed lines are generated by the model with the mass function and bias function of \citet{TinkerEtal2010}. $Z_{0\, simulation}$ is the estimate from the box300 data with the plane-parallel assumption for the redshift distortions.} \label{fig:haloerror} \end{figure} $Z_0$ predicted by the halo model and Eulerian perturbation theory is shown in Figures~\ref{fig:z0box300} and \ref{fig:z0box600}, overlaid with measurements of the box300 and box600 simulation data, respectively, estimated in real space. The results of our halo model and the simulations agree remarkably well at both redshifts $z=0, 0.1$, especially on $\sigma_1$ scales between $\sim 0.2$ and $\sim 5h^{-1}$Mpc. On very small scales, $\sigma_1 < \sim 0.2h^{-1}$Mpc, the halo model predicted $Z_0$ is larger and steeper than the simulation result. This is more apparent for box600. Numerical tests reveal that this is partly due to the dilution of the original data set: a higher density of points leads to higher clustering power in this regime.
On large scales, where the halo model follows perturbation theory, both theories begin to over-predict the clustering strength of the simulations for larger $\sigma_2/\sigma_1$, which should not be attributed to imperfections of the halo model but rather to the inaccuracy of $B_{PT}$ on these scales \citep{PanColesSzapudi2007, GuoJing2009}. To improve the halo model performance at the three-point level, adjustments of the halo boundary and the mass function are usually adopted \citep{TakadaJain2003, WangEtal2004, FosalbaPanSzapudi2005}. This alleviates the disagreement to some extent. Here we also enlarge the halo boundary beyond $R_v$ and truncate the high-mass tail of the halo mass function (Figures~\ref{fig:z0box300} and \ref{fig:z0box600}). Experiment indicates that extending the halo radius, without a hard cut-off of the mass function, can easily generate the correct shape and amplitude of the $Z_0$ of the simulations. Simple fitting shows that the best halo boundary is $\sim1.5R_v$ for box300 and $\sim1.6R_v$ for box600. In contrast, if we keep the halo boundary unchanged but truncate the halo mass function, the one- and two-halo terms are so strongly modified that the shape and amplitude of $Z_0$ deviate from the simulations significantly. Simple fitting to the simulations with both the halo boundary and the mass cut-off left free reveals that the mass cut-off cannot be smaller than $10^{15}M_{\sun}$; otherwise, there is no way to reconcile the under-predicted $Z_0$ with the simulations at the transition scales of $\sigma_1\sim 3h^{-1}$Mpc, above which $B_{PT}$ breaks down. In conclusion, enlarging the halo boundary alone is sufficient for accurately predicting $Z_0$. In our calculation, we also examined the influence of the halo bias function and the mass function by using the high-precision formulae of \citet{TinkerEtal2010}. Such a replacement does not cause a fundamental change to the theoretical prediction (Figure~\ref{fig:haloerror}). The new mass functions do not benefit the halo model much.
On most $\sigma$ scales less than $10h^{-1}$Mpc, the halo model with the Sheth-Tormen functions is consistent with the simulations within our error budget of $\sim10-20\%$. Replacing them with the functions provided by \citet{TinkerEtal2010} increases the deviation to around $20-30\%$, especially on scales of $\sim 1h^{-1}$Mpc; a visible advantage appears only on scales $\sigma_1> \sim3h^{-1}$Mpc, with an accuracy gain of a few percent. In addition to the halo model, we also checked the phenomenological models of \citet{ScoccimarroCouchman2001} and \citet{PanColesSzapudi2007}. The accuracy of the formula of \citet{ScoccimarroCouchman2001} is very good on scales $\sigma_1 > \sim 1h^{-1}$Mpc, but it deviates from the simulations by more than 40\% on smaller scales. The performance of \citet{PanColesSzapudi2007} is poor in terms of $Z_0$, as the bispectrum model is not designed to conserve clustering power and the resulting integration over it yields an incorrect amplitude. Nevertheless, if a renormalization is enforced so that the model is consistent with perturbation theory at large $\sigma$, the model works well for $Z_0$ at $\sigma_1> \sim 2h^{-1}$Mpc. \section{Summary and Discussion} In this paper we propose a third-order correlation function for characterising galaxy clustering properties. The statistic $Z_0$ we advocate is the zeroth-order component of the projected 3PCF. Although $Z_0$ is a 3PCF, its estimation takes roughly the same amount of computation as the projected 2PCF, and the algorithm can be easily implemented after moderate modification of a code for the projected 2PCF. Various numerical experiments confirm that $Z_0$ can be deemed redshift-distortion free within approximately $10\%$ in the regime where the scale perpendicular to the LOS is $0.2<\sigma<10h^{-1}$Mpc. In addition, the maximal integration scale $\pi_{max}$ parallel to the LOS during estimation ought to be greater than $\sim 120h^{-1}$Mpc.
A serious concern is that shot noise can ruin the estimation in the strongly nonlinear regime if the number density of points in a sample is too low. This requirement for a robust $Z_0$ measurement is tighter than for the projected 2PCF, but still weaker than for the normal projected 3PCF, since $Z_0$ is an integral of the latter. The criterion we suggest is $DDD>\sim 100$. As expected, the halo model provides a satisfactory prediction of the dark matter $Z_0$ of the simulations within $\sim10\%$, if the classical Sheth-Tormen mass functions are used. Our computation indicates that extending the halo boundary is enough to yield a good fit to the simulations, while a hard cut-off of the mass function is not as effective as previous works claimed. Substituting high-precision new functions for the halo mass distribution and halo biasing does not lead to significantly better agreement with the simulations. Since the angular dependence of the projected 3PCF and the normal 3PCF is smeared out in $Z_0$, we conjecture that using an anisotropic halo profile probably will not significantly improve the accuracy. A significant bias of the halo model predicted $Z_0$ relative to the simulations emerges in the weakly nonlinear regime, where halo models boil down to second-order perturbation theory; the latter is already known to be poor in predicting the dark matter 3PCF. A more precise bispectrum from higher-order perturbation theories may offer a way to increase the precision \citep[e.g.][]{Valageas2008, Sefusatti2009, BartoloEtal2010}. The principal reason for proposing $Z_0$ is to provide an efficient redshift-distortion-free 3PCF, complementary to the standard projected 2PCF, for galaxy clustering analyses. It is well known that the projected 2PCF itself is only a Gaussian statistic and thus has its limitations. Third-order correlation functions, mainly carrying information about non-Gaussianity, are more sensitive to details of the galaxy distribution.
Non-Gaussianity of the galaxy distribution is generated by the nonlinear action of gravity and gas physics if the primordial density fluctuations of the universe after inflation are Gaussian. The degeneracy seen in the projected 2PCF \citep[e.g.][]{ZuEtal2008} may be broken if third-order correlation functions are employed. The redshift-distortion-free feature of $Z_0$ on scales less than $10h^{-1}$Mpc underlines its potential for investigating the relation of galaxies to their host halos, and the formation histories of galaxies and halos. Furthermore, the success of the halo model prediction of the dark matter $Z_0$ encourages us to apply $Z_0$ to the analysis of galaxies. In principle, with measurements from galaxy samples, $Z_0$ enables us to generalize and diagnose schemes of the HOD, the conditional luminosity function \citep[CLF, ][]{YangEtal2003} and semi-analytical models \citep[e.g.][]{Baugh2006} at the third-order level, at the cost of one additional free parameter, the halo boundary. Our present work is restricted to dark matter only; the behaviour of $Z_0$ for biased objects remains unclear. Testing with mock galaxy samples before applying the statistic to real data will be necessary. \section*{Acknowledgment} This work is supported by the Ministry of Science \& Technology of China through 973 grant No. 2007CB815402 and the NSFC through grants Nos. 10633040 and 10873035. JP acknowledges the One-Hundred-Talent fellowship of CAS. IS acknowledges support from NASA grants NNG06GE71G and NNX10AD53G and from the Pol{\'a}nyi Program of the Hungarian National Office for Research and Technology (NKTH). We thank Weipeng Lin for kindly providing us with the N-body simulation data. \input projzeta_v3.bbl \end{document}
{ "redpajama_set_name": "RedPajamaC4" }
4,467
Rounders::Receivers::Mail.configure do |config| # please show more option # config.protocol = :imap config.mail_server_setting = { address: ENV['IMAP_SERVER_ADDRESS'], port: ENV['IMAP_SERVER_PORT'], user_name: ENV['IMAP_USER_NAME'], password: ENV['IMAP_PASSWORD'], enable_ssl: true } config.options = { # flag for whether to delete each receive mail after find Default: false # delete_after_find: true } end
{ "redpajama_set_name": "RedPajamaGithub" }
4,004
{"url":"https:\/\/scikit-learn.org\/dev\/modules\/generated\/sklearn.random_projection.SparseRandomProjection.html","text":"# sklearn.random_projection.SparseRandomProjection\u00b6\n\nclass sklearn.random_projection.SparseRandomProjection(n_components='auto', *, density='auto', eps=0.1, dense_output=False, random_state=None)[source]\n\nReduce dimensionality through sparse random projection\n\nSparse random matrix is an alternative to dense random projection matrix that guarantees similar embedding quality while being much more memory efficient and allowing faster computation of the projected data.\n\nIf we note s = 1 \/ density the components of the random matrix are drawn from:\n\n\u2022 -sqrt(s) \/ sqrt(n_components) with probability 1 \/ 2s\n\n\u2022 0 with probability 1 - 1 \/ s\n\n\u2022 +sqrt(s) \/ sqrt(n_components) with probability 1 \/ 2s\n\nRead more in the User Guide.\n\nNew in version 0.13.\n\nParameters\nn_componentsint or \u2018auto\u2019, default=\u2019auto\u2019\n\nDimensionality of the target projection space.\n\nn_components can be automatically adjusted according to the number of samples in the dataset and the bound given by the Johnson-Lindenstrauss lemma. In that case the quality of the embedding is controlled by the eps parameter.\n\nIt should be noted that Johnson-Lindenstrauss lemma can yield very conservative estimated of the required number of components as it makes no assumption on the structure of the dataset.\n\ndensityfloat or \u2018auto\u2019, default=\u2019auto\u2019\n\nRatio in the range (0, 1] of non-zero component in the random projection matrix.\n\nIf density = \u2018auto\u2019, the value is set to the minimum density as recommended by Ping Li et al.: 1 \/ sqrt(n_features).\n\nUse density = 1 \/ 3.0 if you want to reproduce the results from Achlioptas, 2001.\n\nepsfloat, default=0.1\n\nParameter to control the quality of the embedding according to the Johnson-Lindenstrauss lemma when n_components is set to \u2018auto\u2019. 
This value should be strictly positive.\n\nSmaller values lead to better embedding and higher number of dimensions (n_components) in the target projection space.\n\ndense_outputbool, default=False\n\nIf True, ensure that the output of the random projection is a dense numpy array even if the input and random projection matrix are both sparse. In practice, if the number of components is small the number of zero components in the projected data will be very small and it will be more CPU and memory efficient to use a dense representation.\n\nIf False, the projected data uses a sparse representation if the input is sparse.\n\nrandom_stateint or RandomState instance, default=None\n\nControls the pseudo random number generator used to generate the projection matrix at fit time. Pass an int for reproducible output across multiple function calls. See Glossary.\n\nAttributes\nn_components_int\n\nConcrete number of components computed when n_components=\u201dauto\u201d.\n\ncomponents_CSR matrix with shape [n_components, n_features]\n\nRandom matrix used for the projection.\n\ndensity_float in range 0.0 - 1.0\n\nConcrete density computed from when density = \u201cauto\u201d.\n\nReferences\n\n1\n\nPing Li, T. Hastie and K. W. Church, 2006, \u201cVery Sparse Random Projections\u201d. https:\/\/web.stanford.edu\/~hastie\/Papers\/Ping\/KDD06_rp.pdf\n\n2\n\nD. 
Achlioptas, 2001, \u201cDatabase-friendly random projections\u201d, https:\/\/users.soe.ucsc.edu\/~optas\/papers\/jl.pdf\n\nExamples\n\n>>> import numpy as np\n>>> from sklearn.random_projection import SparseRandomProjection\n>>> rng = np.random.RandomState(42)\n>>> X = rng.rand(100, 10000)\n>>> transformer = SparseRandomProjection(random_state=rng)\n>>> X_new = transformer.fit_transform(X)\n>>> X_new.shape\n(100, 3947)\n>>> # very few components are non-zero\n>>> np.mean(transformer.components_ != 0)\n0.0100...\n\n\nMethods\n\n fit(X[,\u00a0y]) Generate a sparse random projection matrix fit_transform(X[,\u00a0y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. Project the data by using matrix product with the random matrix\nfit(X, y=None)[source]\n\nGenerate a sparse random projection matrix\n\nParameters\nXnumpy array or scipy.sparse of shape [n_samples, n_features]\n\nTraining set: only the shape is used to find optimal random matrix dimensions based on the theory referenced in the afore mentioned papers.\n\ny\n\nIgnored\n\nReturns\nself\nfit_transform(X, y=None, **fit_params)[source]\n\nFit to data, then transform it.\n\nFits transformer to X and y with optional parameters fit_params and returns a transformed version of X.\n\nParameters\nX{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)\n\nInput samples.\n\nyndarray of shape (n_samples,), default=None\n\nTarget values (None for unsupervised transformations).\n\n**fit_paramsdict\n\nReturns\nX_newndarray array of shape (n_samples, n_features_new)\n\nTransformed array.\n\nget_params(deep=True)[source]\n\nGet parameters for this estimator.\n\nParameters\ndeepbool, default=True\n\nIf True, will return the parameters for this estimator and contained subobjects that are estimators.\n\nReturns\nparamsmapping of string to any\n\nParameter names mapped to their 
values.\n\nset_params(**params)[source]\n\nSet the parameters of this estimator.\n\nThe method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it\u2019s possible to update each component of a nested object.\n\nParameters\n**paramsdict\n\nEstimator parameters.\n\nReturns\nselfobject\n\nEstimator instance.\n\ntransform(X)[source]\n\nProject the data by using matrix product with the random matrix\n\nParameters\nXnumpy array or scipy.sparse of shape [n_samples, n_features]\n\nThe input data to project into a smaller dimensional space.\n\nReturns\nX_newnumpy array or scipy sparse of shape [n_samples, n_components]\n\nProjected array.","date":"2020-08-04 15:00:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.46190589666366577, \"perplexity\": 2766.9688224978445}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-34\/segments\/1596439735867.94\/warc\/CC-MAIN-20200804131928-20200804161928-00355.warc.gz\"}"}
null
null
\section{Introduction}\label{sec:introduction} The instability of capillary interfaces has long been an intriguing topic in fluid mechanics. Perhaps one of the earliest investigated interfacial instability phenomena is the Rayleigh-Taylor instability, where a denser fluid located above a lighter one protrudes into the latter due to an arbitrarily small perturbation of the initially flat interface. However, this protrusion is not always observed when a droplet rises or sediments into another density-contrasted fluid. According to \citet{hadamard} and \citet{rybzynski}, a spherical translating droplet is a solution of this problem in the Stokes regime, regardless of the presence or magnitude of the surface tension. What remains unknown, however, is the existence of other equilibrium shapes of the droplet and the influence of surface tension on the stability of the spherical solution. Experiments were conducted by \citet{kojima1984formation} to examine this issue. Two patterns of shape instability were observed: depending on the viscosity ratio $\lambda$, a protrusion or an indentation at the rear of the droplet was seen to grow with time. \citet{kojima1984formation} also performed a linear stability analysis assuming that the droplet underwent small deformations. A linear operator depending on the viscosity ratio $\lambda$ and capillary number $\Ca$ (inversely scaling with the surface tension) was derived that governs the linearised droplet shape evolution. It was found that, irrespective of the value of $\Ca$, i.e. even for arbitrarily small surface tension, the eigenvalues of the operator had negative real parts, pointing to a linearly stable shape.
The authors recognized that this linear stability study contradicted their experiments showing instabilities with finite surface tension. Direct numerical simulations (DNS)~\citep{koh1989stability} also reported the unstable shape evolution of slightly disturbed droplets in the presence of sufficient surface tension ($\Ca < 10$). Recent numerical work has examined the effect of surfactants~\citep{johnson2000stability} and viscoelasticity~\citep{wu2012stability} on this scenario. The contradiction between the theory and experiments/DNS is somewhat reminiscent of the case of the fingering instability of a film flowing down an inclined plane: the experimentally-measured~\citep{huppert1982flow,de1992growth} critical inclination angle triggering instability was found to be well below that obtained from the linear theory. \citet{bertozzi1997linear} discovered that the traditional spectrum analysis failed to capture the short-time but significant energy amplification of the perturbations near the contact line. They pinpointed the missing mechanism by performing a so-called non-modal analysis, borrowed from the transient growth theory founded and developed in the early $1990$s for hydrodynamic stability analysis~\citep{trefethen1993hydrodynamic,reddy1993energy, baggett1995mostly}, to identify and interpret the short-time energy amplification. The non-modal tools of stability theory have been used to explain the discrepancies between the theoretically computed critical Reynolds number and the experimentally-observed counterpart in a variety of wall-bounded shear flows~\citep{schmid2007nonmodal}. The traditional eigenvalue analysis, as used in \citet{kojima1984formation}, i.e. the so-called modal approach, can sometimes fail to interpret real flow dynamics, as the spectrum of the linear operator only dictates the asymptotic fate of the perturbations \textit{without} considering their \textit{short-term} dynamics~\citep{schmid2001stability}.
The non-modal analysis, in contrast, is able to capture the short-time perturbation characteristics and determine the most dangerous initial conditions leading to the optimal energy growth. In addition to its great success in traditional hydrodynamic stability analysis, it has also been used to elucidate complex flow instability problems including capillary interfaces~\citep{davis2003generalized}, thermal-acoustic interactions~\citep{balasubramanian2008thermoacoustic, juniper2011triggering} and viscoelasticity~\citep{jovanovic2010transient}. In this paper, we perform a non-modal analysis to investigate the shape instability of a rising droplet in an ambient fluid, neglecting inertial effects. After introducing the linearised equations and operator in Sec.~\ref{sec:problem} and the non-modal approaches in Sec.~\ref{sec:methods}, we demonstrate the existence of transient growth and predict the critical capillary numbers required for instability to become possible in Sec.~\ref{sec:linear_nonmodal}. In Sec.~\ref{sec:lin_non_evol}, we conduct in-house DNS to compute the nonlinear shape evolutions of droplets initiated with the linear optimal perturbations and identify the minimal amplitudes eventually leading to instability. We further analyse the relationship between the optimal growth and the critical amplitude of the perturbation. We finally examine how the instability pattern is related to the viscosity ratio and propose a phenomenological explanation in Sec.~\ref{sec:conclusions}. \section{Governing equations and linearisation}\label{sec:problem} We study the dynamics of a buoyant droplet rising in an ambient fluid in the Stokes regime. The droplet is assumed to be axisymmetric, with its axis along the $z$ direction and gravity ${\bf g}=-g {\bf e}_z$.
The two Newtonian immiscible fluids, one carrying the droplet (fluid $2$) and the other constituting the droplet (fluid $1$), are characterized by different densities $\rho_{2}>\rho_{1}$, inducing (without loss of generality) an upward migration of the droplet. Likewise, their viscosities are $\mu_2$ and $\mu_1$ respectively, with a ratio $\lambda=\mu_1/\mu_2$. The interface between the two fluids has a uniform and constant surface tension coefficient $\gamma$. The undeformed state of the droplet is a sphere of radius $a$ and terminal velocity $\uter = \frac{a^{2}g (\rho_{2} - \rho_{1}) }{\mu_2} \frac{1+\lambda}{3\left( 1+ 3\lambda/2 \right)}$~\citep{leal2007advanced}. We use $a$ and $\uter$ as the reference length and velocity scales, and $\mu_2 \uter/a$ as the reference scale for $p$ and $\bsig$, the modified pressure (removing the hydrostatic part) and the corresponding stress tensor respectively~\citep{batchelor2000introduction}. Hence, the governing equations for the non-dimensional velocity and pressure field inside the droplet $\left( \mathbf{u}_1, p_1 \right)$ and that outside the droplet $\left( \mathbf{u}_2, p_2 \right)$ are written as \begin{align}\label{equ:stokes} & \nabla \cdot \bu_1 = 0, -\nabla p_{1} +\lambda \nabla^{2} \bu_1 = 0, \nonumber \\ & \nabla \cdot \bu_2 = 0, -\nabla p_{2} + \nabla^{2} \bu_2 = 0, \end{align} where the velocity is zero at infinity and the boundary conditions on the interface are \begin{align}\label{equ:bcinter} \bue &= \bui, \nonumber \\ \bsig_{2} \cdot \bn - \bsig_{1} \cdot \bn &= \left[\nabla \cdot \bn /\Ca + 3 z \left( 1 + 3\lambda/2 \right)/\left( 1+\lambda \right) \right] \bn. \end{align} Here, $\bn$ is the unit normal vector pointing from the interface towards the carrier fluid and $\Ca=\mu_2\uter/\gamma$ is the capillary number, indicating the ratio of viscous to surface tension effects.
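As a quick sanity check on the scales, the dimensional quantities above are straightforward to evaluate; the following sketch computes $\uter$ and $\Ca$ from the expressions quoted in the text, with purely illustrative parameter values (not taken from any experiment).

```python
def terminal_velocity(a, g, rho2, rho1, mu2, lam):
    """Hadamard-Rybczynski terminal velocity:
    U_ter = (a^2 g (rho2 - rho1) / mu2) * (1 + lam) / (3 (1 + 3 lam / 2))."""
    return a**2 * g * (rho2 - rho1) / mu2 * (1 + lam) / (3.0 * (1.0 + 1.5 * lam))

def capillary_number(mu2, u_ter, gamma):
    """Ca = mu2 * U_ter / gamma."""
    return mu2 * u_ter / gamma

# Illustrative values only: a 1 mm droplet in a fluid 20% denser than it.
u = terminal_velocity(a=1e-3, g=9.81, rho2=1200.0, rho1=1000.0, mu2=1.0, lam=0.5)
Ca = capillary_number(mu2=1.0, u_ter=u, gamma=1e-3)
```

In the limit $\lambda \to \infty$ the mobility factor tends to $2/9$, recovering the rigid-sphere Stokes result, which provides a quick consistency check on the formula.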
\begin{figure} \centering \includegraphics[scale=0.5]{FIG1.eps} \caption{An axisymmetric droplet rising in the quiescent fluid, along the axial ($z$) direction. The fluids inside and outside are labelled fluid 1 and fluid 2 respectively, as are their dynamic viscosities ($\muc$, $\mud$) and densities ($\rhoc$, $\rhod$). The polar coordinates $R\left( \theta \right)$ are used to represent its shape, where $\theta$ is measured from its rear stagnation point. $L$ and $B$ are the axis lengths along the revolution axis and the orthogonal direction.} \label{fig:sketch} \end{figure} Following~\citet{kojima1984formation}, the interface of an axisymmetric droplet undergoing small deformation can be expressed in polar coordinates as \begin{equation}\label{equ:R_drop} R \left( \t \right) = 1 + \delta \sum\nolimits_{n=2}^{\infty}\left( 2n+1 \right) \fn \Pn \left( \cos{\t}\right), \end{equation} where $\t$ is the polar angle measured from the rear of the droplet, $\Rthe$ is the polar distance (see figure~\ref{fig:sketch}), $\delta$ indicates the amplitude of the deformation, the $\Pn$ are the $n$th-order Legendre polynomials and the $\fn$ are the corresponding coefficients. The first two terms $P_{0}$ and $P_{1}$ are removed such that the volume of the droplet is conserved and its centroid stays at the origin~\citep{kojima1984formation}. To advance the interface, the kinematic condition $\partial R(\theta,t)/\partial t = \bu (\theta,t) \cdot \bn$ is applied. Following \citet{kojima1984formation}, linearising the governing equations and truncating the series expansion, the evolution of the droplet can be obtained by solving a system of ordinary differential equations, \begin{equation}\label{equ:linear_evol} \d \f/\d t = \mathbf{A} \f, \end{equation} where $\f=\left( f_{2},f_{3},...,f_{m+1} \right)^{\mathrm{T}}$ is the truncated vector of shape coefficients and $\mathbf{A}$ is an $m \times m$ matrix depending on $\lambda$ and $\Ca$.
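For concreteness, the truncated Legendre representation of equ.~\ref{equ:R_drop} is easy to evaluate numerically; the sketch below reconstructs $R(\theta)$ from a given set of coefficients $\fn$ (the coefficient values are hypothetical, chosen only for illustration).

```python
import numpy as np
from numpy.polynomial.legendre import legval

def droplet_radius(theta, f, delta):
    """Evaluate R(theta) = 1 + delta * sum_{n>=2} (2n+1) f_n P_n(cos theta).

    f[0] corresponds to f_2: the P_0 and P_1 terms are excluded so that
    the droplet volume is conserved and its centroid stays at the origin.
    """
    c = np.zeros(len(f) + 2)            # Legendre coefficients for legval
    for k, fn in enumerate(f):
        n = k + 2
        c[n] = (2 * n + 1) * fn
    return 1.0 + delta * legval(np.cos(theta), c)

theta = np.linspace(0.0, np.pi, 181)    # polar angle, measured from the rear
f = np.array([0.05, -0.02, 0.01])       # hypothetical f_2, f_3, f_4
R = droplet_radius(theta, f, delta=0.1)
```

Since $P_n(1)=1$ for all $n$, the value at the rear stagnation point ($\theta=0$) is simply $1+\delta\sum_n (2n+1) f_n$, a convenient spot check.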
It should be noted that the shape of the droplet can be expressed by a unique series of coefficients $\f \delta$ and vice versa; for a certain $\f$, the effective shape varies significantly with the amplitude and sign of $\delta$. For the truncation of $\f$ we use $m=1000$ throughout our study: extensive tests using larger values of $m$ confirm that our results are independent of this truncation level. \section{Non-modal analysis: Theory}\label{sec:methods} As shown by the modal analysis of \citet{kojima1984formation}, the operator $\mathbf{A}$ has a stable spectrum with all of its eigenvalues having negative real parts, irrespective of the magnitude of the surface tension, as long as the capillary number is finite. This modal analysis predicts the long-term behaviour of the disturbance, but in the short-time limit it is valid only if the linear operator $\mathbf{A}$ is normal, i.e. its eigenvectors are orthogonal. In the case of a non-normal operator, even though the amplitudes of all eigenmodes decay exponentially, their nonorthogonality can lead to a transient energy growth over a short time. We indeed found that $\mathbf{A}$ was non-normal. The optimal growth $G^{\mathrm{max}}$ of the initial energy ($L_2$ norm) over a chosen time interval $[0,T]$~\citep{schmid2007nonmodal} is \begin{align}\label{equ:gtniv} \gt &= \max_{\f \left( 0 \right)} \left[ G\left( T \right) = \frac{||\bft||_{2}}{||\f \left( 0 \right)||_{2}} \right] = ||\exp{\left( T\mathbf{A} \right)}||_{2}, \end{align} where $\f \left( 0 \right)$ denotes the initial perturbation. $\gt$ represents the maximum amplification of the initial energy at a target time (the so-called horizon) $T$, where the optimization has been performed over all possible perturbations $\f \left( 0 \right)$. The optimal initial perturbation for horizon $T$ will be denoted $\foi$.
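Numerically, equ.~\ref{equ:gtniv} reduces to a singular value decomposition of the propagator: $\gt$ is the largest singular value of $\exp{\left( T\mathbf{A} \right)}$, and the optimal initial perturbation is the corresponding right singular vector. The following sketch illustrates this computation; the small random stable non-normal matrix is only a stand-in for the actual operator $\mathbf{A}$, which depends on $\lambda$ and $\Ca$.

```python
import numpy as np
from scipy.linalg import expm, svd

def optimal_growth(A, T):
    """Return G_max(T) = ||exp(T A)||_2 and the optimal initial
    perturbation f(0), i.e. the leading right singular vector."""
    _, s, Vh = svd(expm(T * A))
    return s[0], Vh[0]

# Stand-in operator: stable (all eigenvalues equal to -1) yet strongly
# non-normal thanks to the strictly upper-triangular random part.
rng = np.random.default_rng(0)
m = 50
A = -np.eye(m) + 0.5 * np.triu(rng.standard_normal((m, m)), 1)

G, f0 = optimal_growth(A, T=1.0)
# Propagating f0 over [0, T] realises the gain G exactly; a non-normal
# operator can give G > 1 even though its spectrum is stable.
```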
The quantity $G^{\mathrm{max}}$ is the envelope of all individual gain profiles, indicating the presence of transient growth when $G^{\mathrm{max}}\left( T \right)>1$ for some $T$. Compared with the $L_2$ norm in equ.~\ref{equ:gtniv}, it is natural to introduce a physically-driven form of energy, designed for the physical problem at hand. In the present study, the variation of surface area of the droplet $\delS$ is chosen as the target energy, since $\gamma \delS$ indicates the interfacial energy throughout the evolution: $\delS$ is zero only for a spherical droplet and is positive otherwise. The surface area is $S = 2\pi\int_{0}^{\pi}R^{2}\sin{\theta}\sqrt{1+\left[(1/R)(\partial R/\partial \theta)\right]^{2}}\d \theta$. Assuming small deformation and thus $\frac{1}{R}\frac{\partial R}{\partial \theta} \ll 1$, a Taylor expansion yields \begin{equation}\label{equ:taylor_area} S = 2\pi\int_{0}^{\pi}R^{2}\sin{\theta}\left( 1+\frac{1}{2R^{2}}\left(\frac{\partial R}{\partial \theta}\right)^{2}\right) \d\theta. \end{equation} Plugging~\ref{equ:R_drop} into~\ref{equ:taylor_area}, the area variation $\Delta S = S-4 \pi$ is found to be \begin{align}\label{equ:dels_linear} \delS/\left( 2\pi \delta^{2} \right) =\f^{\mathrm{T}} \Marea \f + o\left( \delta^{2} \right), \end{align} where $\Marea$ is the so-called weight matrix~\citep{schmid2001tools} of size $m \times m$, with entries \begin{equation} \Marea\left( i,j \right) = 2\delta^{\mathrm{K}}_{ij} \left( 2i+1 \right) + \frac{1}{2}\left( 2i+1 \right) \left( 2j+1 \right) \int_{0}^{\pi} P^{\prime}_{i}\left( \cos\t \right) P^{\prime}_{j} \left( \cos \t \right) \sin^{3} \t \d\t. 
\end{equation} The optimal growth of $\delS$ can now be defined as \begin{align} \gareamax \left( T \right) = \max_{\f\left( 0 \right)} \left [G_{\Delta S} \left( T \right) = \frac{\sqrt{\delS\left( T \right)} }{\sqrt{\delS \left( 0 \right)}} \right] = \max_{\f\left( 0 \right)} \left [G_{\Delta S} \left( T \right) = \frac{\sqrt{\f^{\mathrm{T}} \Marea \f}} {\sqrt{{\f}^{\mathrm{T}}\left( 0 \right) \Marea \f\left( 0 \right)}} \right]. \end{align} By Cholesky decomposition $\Marea = {{\mathbf{F}}}^{\mathrm{T}}{\mathbf{F}}$, the above equation is formulated as \begin{align} \gareamax \left( T \right) = \max_{\f\left( 0 \right)} \left [G_{\Delta S} \left( T \right) = \frac{||{\mathbf{F}} \f\left( T \right)||_{2}} {||{\mathbf{F}} \f\left( 0 \right)||_{2}} \right]. \end{align} In a similar way to how the asymptotic stability ($t \rightarrow \infty$) is determined by the eigenvalues of the evolution operator $\mathbf{A}$, the maximum instantaneous growth rate of the perturbation energy at $t = 0^{+}$ can be determined algebraically, expanding the matrix exponential $\exp(t\mathbf{A}) \approx I+t\mathbf{A}$ at $t=0^+$. The growth rate of the excess area $\Delta S$ is then \begin{equation}\label{eq:area_growth} \frac{1}{\Delta S}\frac{ d \Delta S}{d t} \bigg |_{t=0^+} = \frac{\f^{\T}(0) \left[ \mathbf{A}^{\T} {\mathbf{F}}^{\T} {\mathbf{F}} + {\mathbf{F}}^{\T} {\mathbf{F}} \mathbf{A}\right] \f(0)}{\f^{\T}(0) {\mathbf{F}}^{\T} {\mathbf{F}} \f(0)}. 
\end{equation} By introducing ${\mathbf{h}} = {\mathbf{F}} \f(0)$, the maximum growth rate of $\Delta S$ is formulated as \begin{equation}\label{eq:area_growth2} \max \frac{1}{\Delta S}\frac{ d \Delta S}{d t} \bigg |_{t=0^+} = \max_{{\mathbf{h}}} \frac{{\mathbf{h}}^{\T} \left[ {\mathbf{F}}\mathbf{A}{\mathbf{F}}^{-1} + \left( {\mathbf{F}}\mathbf{A}{\mathbf{F}}^{-1} \right)^{\mathrm{T}} \right ] {\mathbf{h}}}{{\mathbf{h}}^{\T} {\mathbf{h}}}, \end{equation} which becomes the optimization of a Rayleigh quotient with respect to ${\mathbf{h}}$. Because ${\mathbf{F}}\mathbf{A}{\mathbf{F}}^{-1} + \left( {\mathbf{F}}\mathbf{A}{\mathbf{F}}^{-1} \right)^{\mathrm{T}}$ is a symmetric operator, the maximum is given by its largest eigenvalue, \begin{eqnarray}\label{eq:numabscissa} \max\frac{1}{\sqrt{\delS} }\frac{d\sqrt{\delS}}{dt} \bigg|_{t=0^{+}}= s_{\mathrm{max}} \left[ \frac{1}{2}\left( {\mathbf{F}}\mathbf{A}{\mathbf{F}}^{-1} + \left( {\mathbf{F}}\mathbf{A}{\mathbf{F}}^{-1} \right)^{\mathrm{T}} \right) \right], \end{eqnarray} where $s_{\mathrm{max}}\left[ \cdot \right]$ denotes the largest eigenvalue. This maximum instantaneous growth rate is commonly called the numerical abscissa \citep{trefethen2005spectra}, which is closely linked to the numerical range $W_{\delS}\left( \mathbf{A}, {\mathbf{F}} \right)$ defined as the set of all Rayleigh quotients, \begin{equation}\label{eq:weightnumrange} W_{\delS}\left( \mathbf{A}, {\mathbf{F}} \right) \equiv \{ z: z = \left( {\mathbf{F}}\mathbf{A}{\mathbf{F}}^{-1} \bp, \bp \right) / \left( \bp, \bp \right) \}. \end{equation} The numerical range is the convex hull of the spectrum for a normal operator (and is therefore always in the stable half-plane $z_{r}<0$ for a stable operator), but can extend significantly, even protruding into the unstable half-plane $z_{r}>0$, for stable non-normal operators. Its maximum protrusion is equal to the numerical abscissa and thus determines the maximum energy growth rate at $t = 0^{+}$.
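In practice, the weighted formulation amounts to a few lines of linear algebra: a Cholesky factorisation of the weight matrix maps the problem back to an ordinary $L_2$ one, and the numerical abscissa follows from a symmetric eigenvalue problem. The sketch below uses small random stand-ins for $\mathbf{A}$ and the weight matrix, not the paper's actual operators.

```python
import numpy as np
from scipy.linalg import cholesky, eigvalsh, expm, solve_triangular

rng = np.random.default_rng(1)
m = 30
A = -np.eye(m) + rng.standard_normal((m, m)) / np.sqrt(m)  # stand-in operator
B = rng.standard_normal((m, m))
M = B @ B.T + m * np.eye(m)       # symmetric positive definite weight (stand-in)

F = cholesky(M)                   # upper-triangular factor, M = F^T F
Finv = solve_triangular(F, np.eye(m))
L = F @ A @ Finv                  # F A F^{-1}

# Numerical abscissa: largest eigenvalue of the symmetric part of F A F^{-1},
# i.e. the maximum instantaneous growth rate of the weighted energy at t = 0+.
omega = eigvalsh(0.5 * (L + L.T)).max()

# Optimal weighted gain at a horizon T: the 2-norm of F exp(T A) F^{-1}.
T = 0.5
G = np.linalg.norm(F @ expm(T * A) @ Finv, 2)
```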
\section{Non-modal analysis: results}\label{sec:linear_nonmodal} \subsection{Transient growth and numerical range} In figure~\ref{fig:gtarea}, we show the optimal growth of the interfacial energy $\gtareamax$ for viscosity ratios $\lambda=0.5$ and $5$, varying the capillary number $\Ca$. The threshold value of $\Ca$ to yield transient growth is between $4$ and $5$, in accordance with the rightmost boundary of the numerical range (see inset) depicted in the complex plane $\left( z_{r}, z_{i} \right)$. The boundary is almost tangent to $z_{r}=0$ at $\Ca \approx 4.9$ for $\lambda=0.5$ and $\Ca \approx 4.53$ for $\lambda=5$, representing the critical capillary number $\Cacri$ above which the maximum energy growth rate at $t=0$, $\max_{\f\left( 0 \right)}\frac{1}{\sqrt{\delS} }\frac{d\sqrt{\delS}}{dt}|_{t=0^{+}}$, is positive, guaranteeing transient growth. \begin{figure} \hspace{-0.5em}\includegraphics[scale = 0.44] {FIG2a.eps} \hspace{1em}\includegraphics[scale = 0.44] {FIG2b.eps} \caption{ The optimal growth of the interfacial energy $\gtareamax$ versus the nondimensional time $T$, for viscosity ratio $\lambda=0.5$ (a) and $\lambda=5$ (b); for each case, four values of the capillary number $\Ca$ are shown and, for the highest $\Ca$, the time $\tmax$ corresponding to the peak energy growth is marked. The inset shows the boundary of the numerical range $\left( z_{r}, z_{i}\right)$. } \label{fig:gtarea} \end{figure} \subsection{Linear growth and shape evolutions} Non-modal analysis not only predicts the maximum energy growth over a particular time interval, but also provides the optimal perturbation, i.e. the initial shape coefficients $\foi$ that ensure the optimal gain at horizon $T$. Figure~\ref{fig:linear_evol_four_opt} depicts the individual energy gains $\garea$ for four optimal initial conditions $\foi$ corresponding to $T=0.2$, $1.05$, $2.95$ and $5.45$, with $\lambda=0.5$ and $\Ca = 6$. Their gain profiles are tangent to $\gtareamax$ at $t=T$.
The gain curve of the optimal perturbation targeting $T=\tmax=2.95$ touches the optimal growth $\gdelsmax$ at its peak. Assuming a small deformation amplitude and integrating equ.~\ref{equ:linear_evol} in time, the linear shape evolution is readily reconstructed for the droplets with the four optimal initial conditions, depicted in figure~\ref{fig:linear_evol_four_opt}, at time $t=0$ (dashed), $2.5, 5, 7.5, 10$ (light solid) and the target time $t=T$ (solid); the evolution is shown for negative/positive $\delta$ in (a)/(c). For both signs, the initial perturbation is mainly introduced near the tail ($\theta=0$) of the droplet, where the interface is flattened for $\delta<0$ and stretched for $\delta>0$, while the front part of the droplet remains spherical. In accordance with the modal analysis implying a linearly stable evolution, the perturbations eventually decay and the droplets finally recover a spherical shape. \begin{figure} \centering \includegraphics[scale=0.33]{FIG3_small.eps} \caption{Linear growth $\garea$ of the interfacial energy of the droplets with an optimal initial perturbation $\fo \left( 0 \right)$ for the target times $T=0.2$, $1.05$, $2.95$ and $5.45$; the solid curve indicates the optimal growth $\garea^{\mathrm{max}}\left( T \right)$ and it reaches its peak at $T=T_{\mathrm{max}}=2.95$. The linear shape evolution of the perturbations is shown for negative and positive $\delta$, on the left and right panel respectively. } \label{fig:linear_evol_four_opt} \end{figure} \section{Nonlinear analysis}\label{sec:lin_non_evol} \subsection{Nonlinear energy growth and shape evolution using DNS} As the initial perturbation amplitude increases, the droplets deform more strongly; nonlinearities then become significant and the droplet evolution cannot be adequately described by the linearised equations.
We resort to DNS to address the non-linear dynamics using a three-dimensional axisymmetric boundary integral implementation, following the standard approach of ~\citet{koh1989stability}. We focus on the droplets of $\lambda=0.5$ and $\Ca = 6 $ with the optimal perturbation $\fomax \left( 0 \right)$ achieving the peak of the optimal energy growth $\gdelsmax$ at $\tmax$. Two slightly different magnitudes of perturbation $\delta=0.0496,0.0505$ are chosen for the positive $\delta$ and similarly $\delta=-0.122, -0.126$ for the negative case. Their energy growth $\gtarea = \sqrt{\frac{\delS \left( t \right)}{\delS\left( 0 \right)}} $ is plotted in figure~\ref{fig:nonlinear_gt}, together with the linear counterpart $\gtarea$ using equ.~\ref{equ:dels_linear}. The linear and non-linear energy growth share the same trend in the initial growing phase $t<3$, but differ as the former is approximated by a truncated Taylor expansion. For the two values of $\delta$ with the same sign but slightly different magnitude, the energy growth curves almost collapse before reaching their peaks at $t \approx 4$, but diverge afterwards; $\garea$ decays for the smaller magnitudes $\delta = -0.122$ and $\delta = 0.0496$ indicating stable evolutions but maintains a sustained value around $1$ for larger initial amplitudes $\delta = -0.126$ and $\delta = 0.0505$, implying the onset of instability. \begin{figure} \centering \subfigure[Energy growth $\gtarea$]{\label{fig:nonlinear_gt} \hspace{-3em}\includegraphics[scale = 0.38] {FIG4a.eps} } \subfigure[Shape evolution]{\label{fig:nonlinear_shape} \hspace{0em}\includegraphics[scale = 0.45] {FIG4b.eps} } \caption{~\subref{fig:nonlinear_gt}: Nonlinear energy growth $\garea$ of the droplets with the optimal perturbation $\foi$ for the target time $T=\tmax=2.95$; the solid curve indicates the linear energy growth. 
For positive $\delta$, $\garea$ of droplets with $\delta=0.0496$ and $0.0505$ are shown, the former/latter being stable/unstable; for negative $\delta$, the values leading to stable and unstable evolution are $\delta=-0.122$ and $\delta=-0.126$ respectively. ~\subref{fig:nonlinear_shape}: The shape evolutions of the corresponding droplets.} \label{fig:nonlinear_lam05_ca6} \end{figure} The shape evolutions of the droplets are shown in figure~\ref{fig:nonlinear_shape}. For $\delta=-0.122$ and $-0.126$, no significant difference is observed for $0<t<7.5$: an inward cavity develops at the rear and sharpens; it is subsequently smoothed out and disappears for $\delta=-0.122$, while it keeps growing to form a long indentation for $\delta=-0.126$. These two values of $\delta$ bound the threshold initial amplitude required to excite nonlinear instabilities. A similar trend is found for positive values of $\delta$, although the instability then arises through the formation of a dripping tail. It is therefore natural to introduce $\delcri$, the critical magnitude of the perturbation above/below which the evolution of the drop is unstable/stable. Parametric computations are conducted to identify $\delcri^{\pm}$ within a confidence interval (for instance $ \delcri^+ \in \left[ 0.0496, 0.0505 \right]$ and $ \delcri^- \in \left[ -0.122, -0.126 \right]$ as in figure~\ref{fig:nonlinear_gt}). Searching in both directions, the critical amplitude is then defined as $\delcri=\mathrm{min}(|\delcri^{+}|,|\delcri^{-}|)$. When $\lambda=0.5$ and $\Ca=6$, $|\delcri^{-}|>|\delcri^{+}|$, implying that the instability tends to favour an initially stretched tail over a flattened bottom; when $\lambda=5$ the situation reverses $\left( |\delcri^{-}|<|\delcri^{+}| \right)$, as discussed in the next section.
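The distinction drawn above between decaying and sustained energy growth can be sketched programmatically. The following Python fragment is an illustrative sketch only, not part of the DNS code: `energy_growth` evaluates $G(t)=\sqrt{\delta S(t)/\delta S(0)}$ from a time series of interfacial-energy perturbations, and `classify_unstable` applies a simple sustained-value criterion (the retained fraction of the peak is our choice, not a quantity from the paper) to label an evolution as stable or unstable.

```python
import math

def energy_growth(delS):
    # G(t) = sqrt(delta_S(t) / delta_S(0)), the energy-growth measure above
    return [math.sqrt(s / delS[0]) for s in delS]

def classify_unstable(G, retained=0.5):
    # Heuristic reading of the growth curves: deem the evolution unstable if
    # G settles at a sustained value instead of decaying after its peak.
    # `retained` (fraction of the peak kept at the final time) is assumed.
    return G[-1] > retained * max(G)
```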
\subsection{Critical amplitude of the perturbation $\delcri$} Following the procedure described in the previous paragraph, the critical deformation amplitude $\delcri \left( T \right)$ can be determined for any target time $T$ and associated optimal initial perturbation $\fo \left( 0 \right)$. The critical deformation amplitude $\delcri \left( T \right)$ is plotted in figure~\ref{fig:critdelta} for $\Ca = 6$ and both $\lambda=0.5$ and $\lambda=5$, together with the optimal growth $\gareamax$. The critical deformation amplitude $\delcri $ is negatively correlated with $\gareamax$ and reaches its minimum at a target time $T$ slightly larger than $\tmax$ where the peak transient growth is reached. This shows that the transient growth reduces the threshold non-linearity needed to trigger instabilities and consequently the critical magnitude of the initial perturbation. We also determined the critical amplitude $\delcri^{\mathrm{P}/\mathrm{O}}$ for an initially prolate ($\mathrm{P}$) / oblate ($\mathrm{O}$) ellipsoidal droplet to be unstable, as reported in figure~\ref{fig:critdelta}. When the fluid inside the droplet is less viscous than the one outside, i.e. $\lambda<1$, an initially prolate droplet is more unstable: $\delcri^{\mathrm{P}}$ is less than half that of an oblate droplet $\delcri^{\mathrm{O}}$; the trend reverses for $\lambda>1$. This observation is in agreement with the DNS results of \citet{koh1989stability} (see fig. 11 of their paper). As expected, the minimum $\delcri$ using the optimal perturbations is smaller than $\mathrm{min}(\delcri^{\mathrm{P}},\delcri^{\mathrm{O}})$ based on the limited family of ellipsoidal shapes.
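The parametric search for $\delcri^{\pm}$ described above amounts to bracketing the stable/unstable boundary by bisection on the perturbation amplitude. A minimal Python sketch, in which `run_dns_is_unstable` is a hypothetical black-box stand-in for a full DNS run (the function name, the initial bracket and the tolerance are our assumptions, not quantities from the paper):

```python
# Sketch of the bracketing search for the critical amplitude delta_c.
# `run_dns_is_unstable(delta)` is a hypothetical stand-in for a DNS run
# reporting whether the evolution with initial amplitude `delta` is unstable;
# the initial bracket `hi` is assumed to lie in the unstable regime.
def bracket_critical_amplitude(run_dns_is_unstable, sign, hi=0.5, tol=1e-3):
    """Bisect on |delta| until the stable/unstable bounds differ by < tol."""
    lo = 0.0  # zero amplitude is always stable (linearly stable base state)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if run_dns_is_unstable(sign * mid):
            hi = mid  # unstable: the critical amplitude lies below mid
        else:
            lo = mid  # stable: the critical amplitude lies above mid
    return lo, hi  # stable/unstable bounds bracketing |delta_c|

def critical_amplitude(run_dns_is_unstable, **kw):
    # Search both perturbation signs and keep the smaller magnitude,
    # delta_c = min(|delta_c^+|, |delta_c^-|).
    plus = bracket_critical_amplitude(run_dns_is_unstable, +1, **kw)
    minus = bracket_critical_amplitude(run_dns_is_unstable, -1, **kw)
    return min(plus[1], minus[1])
```

Each bisection step halves the confidence interval, so a handful of DNS runs per sign suffices to pin down $\delcri^{\pm}$ to the accuracy quoted in the figures.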
\begin{figure} \centering \subfigure[$\left(\lambda,\Ca\right)=\lp0.5,6\right)$]{\label{fig:critdelta_lam05} \hspace{0em}\includegraphics[scale = 0.27] {FIG5a.eps} } \subfigure[$\left(\lambda,\Ca\right)=\lp5,6\right)$]{\label{fig:critdelta_lam5} \hspace{0em}\includegraphics[scale = 0.27] {FIG5b.eps} } \caption{The critical perturbation magnitude $\delcri$ for: ~\subref{fig:critdelta_lam05}: $\left( \lambda,\Ca \right) = \lp0.5, 6\right)$ and ~\subref{fig:critdelta_lam5}: $\left( \lambda, \Ca \right) = \lp5, 6\right)$. The upper and lower limits of $\delcri$ (measured by the left scale) are plotted versus the target time $T$, with a curve fitted to show the trend. The linear energy growth $\garea$ (measured by the right scale) is also shown. ${\delta}^{\mathrm{P}}_{\mathrm{c}}$ and ${\delta}^{\mathrm{O}}_{\mathrm{c}}$ are the critical magnitudes for an initially prolate and an initially oblate droplet, respectively. } \label{fig:critdelta} \end{figure} So far, we have analysed the critical amplitude $\delcri$ of perturbations exhibiting transient energy growth. We now ask how it varies as the transient growth weakens, and eventually disappears, when suppressed by high surface tension. In addition to $\Ca=6$, the time-dependence of $\delcri$ is shown in figure~\ref{fig:critdelta_ca} for $\Ca=4, 5$. As expected, $\delcri$ increases with decreasing $\Ca$, by a factor of approximately $3$ from the highest to the lowest $\Ca$. With respect to $T$, $\delcri$ varies non-monotonically for $\Ca=5, 6$, which exhibit transient growth. In the absence of transient growth, as for $\Ca=4$, $\delcri$ increases with $T$ monotonically. Indeed, without transient growth, the energy decays monotonically and $\tmax=0$, hence the minimum $\delcri$ appears at $T \approx 0$.
\begin{figure} \hspace{-0.5em}\includegraphics[scale = 0.35] {FIG6a.eps} \hspace{2em}\includegraphics[scale = 0.35] {FIG6b.eps} \caption{Akin to figure~\ref{fig:critdelta}, showing in addition $\delcri$ for two smaller values of $\Ca$, for $(a)$: $\lambda=0.5$ and $(b)$: $\lambda=5$.} \label{fig:critdelta_ca} \end{figure} \section{Conclusion and discussions}\label{sec:conclusions} \begin{figure} \centering \hspace{0em}\includegraphics[scale = 0.37] {FIG7a.eps} \hspace{0em}\includegraphics[scale = 0.37] {FIG7b.eps} \caption{The flow field co-moving with the droplet, using the optimal initial coefficient $\fomax \left( 0 \right)$, when $\left(\lambda,\Ca\right)=\lp0.5,6\right)$, $(a)$: $\delta=-0.126$ and $(b)$: $\delta=0.0505$. The red dot indicates the stagnation point of the flow.} \label{fig:flowfield} \end{figure} In this paper, we have performed non-modal analysis and DNS to investigate the shape instabilities of an inertialess rising droplet, which tends to recover the spherical shape, the attractor solution, due to surface tension. For sufficiently low surface tension, transient growth of the interfacial energy arises and leads to a bypass transition, significantly decreasing the threshold magnitude of the initial perturbation required for the droplet to escape the basin of attraction. This threshold magnitude is negatively correlated with the optimal growth of the interfacial energy. We now compare our results with the work of~\citet{koh1989stability}, who employed DNS to identify the critical capillary number $\Cacri$ leading to shape instabilities of an initially prolate or oblate ellipsoidal sedimenting droplet; the magnitude of perturbation is $\Delta=\frac{L-B}{L+B}$ (see figure~\ref{fig:sketch}). For the lowest magnitude they considered, $|\Delta|=1/21$, $\Cacri \in (4, 5)$ for $\lambda=0.1, 0.5$ and $5$, indeed close to our predictions: $\Cacri \approx 5.42$, $4.9$ and $4.53$ respectively for the same $\lambda$.
Additionally, ~\citet{koh1989stability} observed that for a viscosity ratio $\lambda<1$/$\lambda>1$, the first unstable pattern appears as a protrusion/indentation developing near the tail of an initially prolate/oblate droplet. The trend holds in our case even though we search over all possibilities for the most 'dangerous' initial perturbation instead of using an initially ellipsoidal shape. This is also reflected in the optimal initial shapes: for $\lambda<1$/$\lambda>1$, the optimal shape shares a common feature with an oblate/prolate ellipsoid, namely its rear interface is compressed/stretched. To explain the dependence of the instability patterns on the viscosity ratio $\lambda$, let us focus on the velocity field near the tail of the droplet (see figure~\ref{fig:flowfield}), where the flow resembles a uniaxial extensional flow, drawing the tip into the drop on the top side and pulling it outwards on the other side. We suggest that the imbalance between the internal and external viscous forces at the tip induces the onset of the shape instability. The internal (respectively external) viscous force on the tip is $\mu_{1} \partial u^{\mathrm{tip}}_{z}/\partial z $ (respectively $\mu_{2} \partial u^{\mathrm{tip}}_{z} / \partial z$). When $\mu_{1}<\mu_{2}$, i.e. $\lambda<1$, the external viscous effect overcomes the internal one, hence the perturbation tends to be stretched outward to form a protrusion; conversely, when $\lambda>1$, it is prone to be sucked inwards to form an indentation. Developed originally for hydrodynamic stability analysis, non-modal tools have here demonstrated their predictive capacity for the inertialess shape instabilities of capillary interfaces. This work might stimulate the application of non-modal analysis to complex multiphase flow instabilities even at low Reynolds number. \section*{Acknowledgements} L.Z. thanks Francesco Viola for the helpful discussions. We thank the anonymous referee for pointing out an incorrect coefficient in our previous derivation.
Computer time from SCITAS at EPFL is acknowledged, and the European Research Council is acknowledged for funding the work through a starting grant (ERC SimCoMiCs 280117). \bibliographystyle{jfm} \input{arxiv.bbl} \end{document}
Jurij Vladimirovitj Andropov (), born 15 June 1914 (N.S.; 2 June O.S.), probably in the village of Nagutskaja in the Stavropol Governorate of the Russian Empire (present-day Soluno-Dmitrijevskoje in Stavropol kraj, southern Russia), died 9 February 1984 in Moscow (of kidney failure), was a Soviet politician who served as General Secretary of the Central Committee of the Communist Party of the Soviet Union from 12 November 1982 until 9 February 1984. Biography Andropov was a party member from 1939, first secretary of the Komsomol in Karelia during the Second World War, and ambassador in Budapest at the time of the Hungarian Revolution and the Soviet invasion of 1956. He replaced Michail Suslov in the Central Committee of the Soviet Communist Party in 1962, became head of the KGB in 1967, and a member of the Politburo in 1973. As head of the KGB, Andropov advocated a hard line against the country's dissidents. He also pushed for a Soviet military intervention in Afghanistan and was one of the driving forces behind the outbreak of the Soviet-Afghan War in 1979. After Leonid Brezjnev died in November 1982, the Politburo appointed Andropov as his successor; at the same time, Andropov stepped down as head of the KGB. By the time of Brezjnev's death, however, he had in practice already held political power in the Soviet Union for some time, as Brezjnev's health deteriorated and he gradually became unable to govern the country. During his short time in power, Andropov tried to curb corruption in Soviet society and to address the increasingly faltering Soviet economy, while contending with deteriorating relations with the United States after Ronald Reagan became U.S. president. Disarmament negotiations remained fruitless during this period, which is also known as the "Second Cold War". Andropov's most important legacy was his backing of Michail Gorbatjov, a party comrade from the same region as Andropov: Stavropol. 
References http://ru.wikipedia.org/wiki/Андропов,_Юрий_Владимирович External links Jurij Andropov at NNDB
Q: Call controlgroup .click() method

I am trying to call the .click() method for a controlgroup from another method. Here is what I have so far:

    $("#list").click(function() {
        searchclick = true;
        value = $("#listval").text().split(" - ");
        window.location.href = "index.html";
        item = $("#listval").attr("title");
        if (item == "this") {
            $("#this").click();
        } else {
            $("#that").click();
        }
    });

"#this" and "#that" are the two buttons in the control group. The control group click handler is initialized like so:

    $("#this, #that").click(function() {
        // ...code...
    });

Any help?

EDIT: here is what the HTML looks like:

    <div data-role="content" class="ui-content" role="main">
        <div id="item" data-role="fieldcontain">
            <fieldset data-role="controlgroup" data-type="horizontal">
                <legend> </legend>
                <input id="this" name="choose" value="val1" type="radio"/>
                <label for="this"> val1 </label>
                <input id="that" name="choose" value="val2" type="radio"/>
                <label for="that"> val2 </label>
            </fieldset>
        </div>
    ...

A: This will send the current values to the server, where (I suppose) you have the code that will deal with the sent values and generate the proper response.

NOTES:

* Are searchclick and value global variables? If so, they will be available inside the handlers of the #this and #that clicks, but will be lost after submitting the form.
* Is item a local variable? If so, you should declare it like this: var item = $("#listval").attr("title");. As a general rule: don't create global variables unless absolutely necessary.
* The code below may interfere with other code in your form. If fixing the other code is too hard, you may try another option: instead of submitting the form, do an Ajax call (see the jQuery documentation).

    $("#list").click(function() {
        searchclick = true;
        value = $("#listval").text().split(" - ");
        item = $("#listval").attr("title");
        if (item == "this") {
            $("#this").click();
        } else {
            $("#that").click();
        }
        // your controls are inside a form, right?
        document.forms[0].submit();
    });

A: I ended up storing the values in localStorage and reading them in the index.html page. In index.html, I use a try/catch block to check, when the page is loaded, whether there are any values in localStorage. Here is my code in index.html (run after the list is clicked on the other page):

    try {
        // grab variables set in search
        if (window.localStorage.getItem("searchclick") == "true") {
            searchclick = true;
        } else {
            searchclick = false;
        }
        thisthat = window.localStorage.getItem("thisthat");
        if (thisthat == "this") {
            $("#this").click();
            $("#thislabel").addClass("ui-btn-hover-c");
            $("#thislabel").addClass("ui-radio-on");
            $("#thislabel").addClass("ui-btn-active");
        } else if (thisthat == "that") {
            $("#that").click();
            $("#thatlabel").addClass("ui-btn-hover-c");
            $("#thatlabel").addClass("ui-radio-on");
            $("#thatlabel").addClass("ui-btn-active");
        }
        // clear the stored values after they are used
        window.localStorage.clear();
    } catch (err) {
        alert(err);
    }
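For completeness, the answer above only shows the reading side in index.html; the page with #list also has to write the values before navigating. A sketch of that setter side (the key names match the ones read back in the answer; the in-memory fallback is only there so the snippet also runs outside a browser):

```javascript
// Setter side of the localStorage hand-off described above. In a browser,
// `storage` is just window.localStorage; the fallback object mimics the
// small subset of the Storage API used here.
var storage = (typeof localStorage !== "undefined") ? localStorage : {
  _data: {},
  setItem: function (k, v) { this._data[k] = String(v); },
  getItem: function (k) { return (k in this._data) ? this._data[k] : null; },
  clear: function () { this._data = {}; }
};

// Call this in the #list click handler, before window.location.href = "index.html";
function saveSelection(item) {
  storage.setItem("searchclick", "true");
  // "this" or "that" -- matched against in index.html's try block
  storage.setItem("thisthat", item === "this" ? "this" : "that");
}
```

localStorage only stores strings, which is why the reading side compares against "true" rather than a boolean.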
\section*{Appendix} \label{app:survey} \subsection{Survey Methodology} To validate the components of our empirical framework not already verified in prior work, we conducted a survey of 1,000 Americans in June 2020. \textit{Survey Sampling.} To improve the generalizability of our results, we hired Cint, Inc. to conduct quota-based survey sampling to ensure that our survey respondents were representative of the demographics of the U.S. in terms of their age, race, education, income, gender, and geographic region. \textit{Survey Questionnaire.} Our survey questions were reviewed by at least two external survey experts, tested for respondent comprehension using cognitive interviews, and used multiple attention-check questions to ensure respondent attentiveness, following survey methodological best practice~\cite{beatty2007research,redmiles2017summary}. We asked both open- and closed-answer questions about respondents' willingness to install COVID19 apps with particular functionalities or components. In the main body of the paper, we report the responses to these survey questions, in addition to referencing pre-existing empirical work, as empirical validation of our framework. To examine users' overall willingness to install COVID19 apps, we asked the following questions: \begin{itemize} \item Would you be willing to install a coronavirus app? [Answer options: Yes, No, Unsure] \item Why [would you/would you not/are you not sure whether you would] be willing to install a coronavirus app? [Open answer] \end{itemize} Respondents' answers to these questions were open-coded by two researchers, who met to ensure consistency on all responses. Results from these questions related to privacy are reported in Section~\ref{sec:privacy} and results related to mobile costs are reported in Section~\ref{sec:mobile}.
To understand what benefits (Section~\ref{sec:benefits}) would make users want to install a COVID19 app, we asked the following questions: \begin{itemize} \item Now we would like to understand why you would want to install a coronavirus app. Please select all that apply. \\ I would install a coronavirus app because I want to reduce... [answer options randomized except the final two] \begin{itemize} \item my risk of getting coronavirus \item my loved ones' risk of getting coronavirus \item the risk of getting coronavirus for people I've been near \item the number of coronavirus cases in my area \item Other: <text entry> \item I would never want to install a coronavirus app. \end{itemize} \item We'd like to understand a bit more about why you would want to install a coronavirus app. Please select all that apply. I would install a coronavirus app because I want to... [answer options randomized except the final two] \begin{itemize} \item know which locations near me were recently visited by people who have coronavirus \item help end the lockdown period sooner \item meaningfully contribute to the effort to fight coronavirus \item help researchers to get data on how many people have coronavirus \item help researchers get data on how many people have been near someone who has coronavirus \item Other: <text entry> \item I would never want to install a coronavirus app. \end{itemize} \end{itemize} To examine the relevance of privacy costs (Section~\ref{sec:privacy}) to users' consideration of COVID19 apps, we asked the following questions: \begin{itemize} \item Using some coronavirus apps might allow someone to learn information about you. Which of the following types of information would you be worried about someone learning? Please select all that apply.
[answer choices randomized except the final choice] \begin{itemize} \item That I have coronavirus \item That I have been exposed to coronavirus \item That I have the coronavirus app installed \item My locations over the past two weeks (e.g., where your home is, what grocery store you visited) \item Who I have been near over the past two weeks \item I am not concerned about anyone being able to learn any of the above information about me. \end{itemize} \item \textit{Asked for each piece of information about which the respondent was concerned.} Please select which of these you are most concerned about being able to learn <that you have coronavirus/that you have been exposed...>. Please select all that apply. \begin{itemize} \item My neighbor \item My employer \item People who have been near me in the past 2 weeks \item Whoever provides the app \item The US federal government \item Non-government sponsored hackers \item Foreign-government sponsored hackers \item I am fine with all of the above learning that I have coronavirus \end{itemize} \end{itemize} Finally, to examine the relevance of mobile costs to users' consideration of COVID19 apps we asked the following questions: \begin{itemize} \item Imagine that installing a coronavirus app will noticeably drain your phone's battery. Would this stop you from installing the app? [Answer choices: Yes, No, Other <text entry>] \item Imagine that installing a coronavirus app will use a noticeable amount of storage space on your mobile phone. Would this stop you from installing the app? [Answer choices: Yes, No, Other <text entry>] \item Imagine that installing a coronavirus app will use up a noticeable amount of your monthly mobile data. Would this stop you from installing the app? [Answer choices: Yes, No, Other <text entry>] \item Imagine that installing a coronavirus app will make using the other apps on your phone noticeably slower (e.g., it will take longer to perform an internet search). 
Would this stop you from installing the app? [Answer choices: Yes, No, Other <text entry>] \end{itemize} \textit{Limitations.} As with all self-report studies, this empirical validation has inherent limitations due to potential response and generalizability biases. While we did our best to prevent such issues through careful respondent sampling and questionnaire design, our results should be interpreted with these limitations in mind. \section{Technological Approaches to COVID19} \label{sec:approaches} There have been a number of technical approaches proposed to assist in the response to COVID19. Arguably the most discussed approach is digitally-enabled contact tracing. Contact tracing is a traditional epidemiological approach used to trace and limit the spread of infection~\cite{fairchild2007searching}. While technology-based contact-tracing approaches were proposed to address the 2014 Ebola outbreak, these approaches were never widely implemented~\cite{sacks2015introduction, danquah2019use}. Digital contact tracing via mobile app was first implemented at scale during the COVID19 pandemic. These 'contact-tracing apps' are characterized by their data architecture: centralized or decentralized. In centralized contact-tracing apps, users are assigned an encrypted identifier by a trusted third party (TTP). Users' apps broadcast their identifier or a function of it to other apps within some distance \textit{d}. Users' apps then store lists of identifiers they have been in contact with; if an app user reports that they have tested positive for COVID19, their app will notify the TTP of its stored list of contacts and the TTP will push a notification to the relevant, at-risk app users. On the other hand, in decentralized contact-tracing apps, users' apps periodically generate \textit{anonymized identifiers} for them, which are broadcast to other apps within a given distance at periodic time intervals.
Apps whose users have reported that they have tested positive for COVID19 push a list of exposed contact identifiers to a public list, rather than to a TTP, as there is no TTP in a decentralized system. The other decentralized apps periodically pull this public list and check whether they have any matches; if so, they notify the user that they have been exposed. The differences in the privacy guarantees offered by these two contact-tracing app architectures led to significant debates among experts regarding user privacy~\cite{ahmed2020survey,sun2020vetting,Coronavi68:online}. However, subsequent research on users' perceptions of the privacy of these systems does not show a clear winner among end users, as we will discuss further in this paper~\cite{li2020decentralized}. Contact-tracing apps are not the only technological response proposed to COVID19. Another popular solution, sometimes combined with contact-tracing functionality~\cite{15millio48:online}, are broadcast apps~\cite{raskar2020apps}. Some broadcast apps leverage existing public infrastructure that is normally used to provide emergency alerts to inform everyone in a given area of COVID19-related information. Others, such as ``narrowcasts''~\cite{chan2020pact}, provide personalized information, for example regarding COVID19 hotspot locations near the app user. Such narrowcasts can, similarly to contact-tracing apps, have centralized or decentralized architectures. In a centralized narrowcast architecture, the user's app pushes their location regularly to a TTP. The TTP then pushes notifications of hotspots in the user's vicinity that it identifies based on the user's known location. In a decentralized narrowcast, a TTP publishes a list of hotspot locations.
The user's app periodically pulls from this central list, matches the user's location (known only locally) against the central list, and notifies the user of hotspots in the user's vicinity.\footnote{For the least data collection, the user can also simply search their location on a map.} Additional types of COVID19 response technologies include non-consensual surveillance of citizens using e.g., telecom infrastructure~\cite{troncoso2020decentralized}, technologies to improve or scale manual contact-tracing interviews~\cite{chan2020pact}, symptom checkers and self-diagnosis tools (e.g., thermometer checks)~\cite{covid19a46:online}, home health support for those diagnosed with COVID19~\cite{covid19a46:online}, and data-donation technologies for citizen scientists to contribute their data~\cite{CitizenS20:online}. In this paper, we focus on user tradeoffs related to the most commonly proposed COVID19 technology: contact-tracing apps; as well as narrowcast apps, which have been frequently proposed or implemented as a feature of contact-tracing apps~\cite{15millio48:online}.
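The decentralized contact-tracing flow described above can be summarized in a few lines of Python. This is an illustrative sketch, not any deployed protocol: identifier rotation schedules and the Bluetooth transport are elided, and `secrets.token_hex` merely stands in for the anonymized ephemeral identifiers.

```python
import secrets

def new_identifier():
    # Fresh anonymized identifier, regenerated at each time interval and
    # broadcast to nearby apps (transport elided in this sketch).
    return secrets.token_hex(16)

def check_exposure(seen_identifiers, public_positive_list):
    # Matching happens entirely on the user's device: the app compares the
    # identifiers it overheard against the periodically pulled public list
    # of identifiers reported by COVID-positive users. No TTP is involved.
    return bool(set(seen_identifiers) & set(public_positive_list))
```

The key privacy property this sketch illustrates is that the user's contact history never leaves the device; only the identifiers of self-reported positive users become public.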
\begin{table}[t] \centering \scriptsize \begin{tabular}{rlll} & \textbf{Component} & \textbf{Description} & \textbf{Dependencies}\\ \hline \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{Explicit}}}} & App Architecture & Type of app (e.g., centralized, decentralized, narrowcast) & -- \\ & Provider & Who is the app provider and/or the TTP (e.g., CDC, tech company) & --\\ & Data \& Accuracy& Data collected (e.g., proximity/location, health status) \& app accuracy & architecture \\ & Benefit: my health & My direct benefits (e.g., knowledge of my risk status, hotspots) & architecture, data collected \\ & Benefit: contacts' health & Direct benefits to contacts (e.g., they learn they are at risk) & architecture, data collected \\ &&&\\ \hline &&&\\ \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{Implicit}}}}& Information Protection & How the identifiers and/or data are protected (e.g., pseudoanonymized, encrypted) & architecture\\ & Privacy Costs & \textit{Who} can learn \textit{what} about me (e.g., neighbor can learn my health status) & architecture, data collected\\ & Mobile Costs & What will installing this app cost me? (e.g., battery life, GB of data used) & architecture (pull vs. 
push)\\ & User Agency & What do I control (data deletion, what I reveal) & architecture\\ &Benefit: environment safety & Others having app makes my environment safer & others actions\\ &Benefit: sense of altruism & Feeling like I helped others & \\ &Benefit: epidemiological data & e.g., decision makers know spread, infection rate & architecture\\ &&&\\ \hline &&&\\ & Transparency & How transparent the app provider makes the implicit components&\\ &&&\\ \hline \end{tabular} \caption{Components of COVID19 apps.} \label{tab:appcomponents} \end{table} \section{Potential Inputs to Users' COVID19 Adoption Decisions} \label{sec:considerations} In this section, we break down the attributes of COVID19 apps that are relevant to users' decisions regarding whether to adopt a COVID19 app (see Table~\ref{tab:appcomponents} for a summary of these attributes). We delineate users' potential considerations (e.g., concerns) related to these attributes, and provide empirical evidence supporting the relevance of these considerations to users' willingness to install. It is important to note that some app attributes that influence users' adoption decisions are already transparent (e.g., the user can tell who is providing the apps they download), while others (e.g., potential privacy risks) are not currently made transparent to the user by the app provider. In the absence of transparency, users may have their own, accurate or inaccurate, expectations about these implicit components, as we discuss below. Finally, it is important to note that this paper focuses on the features of COVID19 apps that users may consider in their adoption decision. This paper is not a comprehensive list of all possible user motivations to install COVID19 apps (or reasons to avoid them). Additional user-related factors that may affect willingness to install COVID19 apps (e.g., level of concern about COVID) are addressed briefly in Section~\ref{sec:controls}.
\textbf{Empirical Validation.} Where confirmed by external researchers, we cite the relevant research to support each user consideration detailed below. To validate considerations not already explored in prior work, we conducted our own empirical validation. To do so, we surveyed 1,000 Americans in June 2020. To maximize generalizability, we used quota sampling to ensure our respondents' demographics were representative of the U.S. in terms of age, race, education, income, gender, and geographic region. See the Appendix for more details regarding our survey methodology, including the limitations of our instrument. \subsection{App Provider} There are many possible providers of COVID19 apps\footnote{List drawn in part from prior work~\cite{WillAmer45:online}.}: health protection agencies (e.g., CDC, FDA), who are required to be the providers of apps built on the Google-Apple exposure notification API~\cite{PrivacyP2:online}, health insurers~\cite{PayersAs70:online}, employers~\cite{Coronavi1:online}, technology companies~\cite{CitizenS70:online}, non-health agencies within a federal government or local government~\cite{Demonstr56:online}, non-profit organizations~\cite{Demonstr56:online}, international organizations such as the United Nations or World Health Organization~\cite{WillAmer45:online}, or universities~\cite{Contactt61:online}. \textbf{Empirical Validation.} Prior work finds that users differ in their willingness to install COVID19 apps based on which entity provides the app, likely due to variance in their \textit{trust} of these entities or the contextual integrity of these entities collecting sensitive data related to health applications~\cite{WillAmer45:online, brown2020ethics,Whyproxi5:online}. \subsection{Accuracy} There are two types of errors that can occur in digital contact-tracing applications. The app could have false positives, in which the app notifies the user that they have been exposed to COVID19 when they actually have not been exposed. 
This can happen due to inaccuracies in proximity and location measurement or due to the app allowing non-validated self-reports of positive COVID19 status. The app could also have false negatives, where it fails to alert the user to all exposures to coronavirus. \textbf{Empirical Validation.} Prior academic work has shown that accuracy affects users' reported willingness to install COVID19 apps~\cite{howgoodenough, frimpong2020financial}, and real world evidence from Google reviews and adoption statistics of released COVID19 apps further supports the relevance of accuracy considerations~\cite{NorthDak75:online}. \subsection{Data Collection} There are two types of data that contact-tracing apps may collect: proximity data (who the app user has had contact with, where the "who" is anonymized as described in Section~\ref{sec:approaches}), or location data (where the app user has been)~\cite{ahmed2020survey}. \textbf{Empirical Validation.} Prior academic work shows that different users have different preferences for the type of data collected about them for contact tracing~\cite{li2020decentralized, simko2020covid}. \subsection{User Agency} Different COVID19 app implementations give users different levels of agency over their data. In the suggested implementations considered here, users always have the agency to decide whether to reveal their COVID-positive health status to an app, however, depending on app architecture, users may or may not have control over data retention. \textbf{Empirical Validation.} Prior work supports that users may have different preferences for the tradeoff between autonomy vs. decision-burden offered by apps with different implementations~\cite{li2020decentralized}. \subsection{Privacy Concerns} \label{sec:privacy} A contact-tracing app's architecture, including what data it collects and users' agency over that data, influences potential privacy concerns for the user. 
Privacy concerns can be thought of from a user lens as: ``\textit{who} can learn \textit{what} about me''. There are five potential pieces of information (``whats'') that can be learned about a user: (1) that they are COVID-positive, (2) that they have been exposed to COVID and are at risk of infection, (3) their social graph (i.e., who they have had contact with, regardless of their health status), (4) their location information, and (5) the COVID-status of their social group (e.g., other people of their race)\footnote{In a centralized system the TTP may be able to assign encrypted identifiers in such a way that it can track social groups~\cite{troncoso2020decentralized}.}. There are six potential attackers (``whos'') that can learn these pieces of information, depending on the app architecture. These attackers range from individuals to nation states, specifically: (1) other users of the app, (2) attackers who exploit the app (e.g., by placing Bluetooth beacons at specific locations or by falsifying the users' GPS coordinates), (3) the app (including individuals who work for the app), (4) any third-party service used by the app (including individuals who work for these services), (5) network providers, and (6) government entities that can use legal process to force the app to turn over data. \textbf{Empirical Validation.} Users have varying levels of concern about these pieces of information being leaked to different attackers. In open-answer questions, 20\% of our respondents volunteered that they were undecided or unwilling to install a COVID19-related app due to privacy or surveillance concerns. Additionally, we find differing levels of concern regarding the leakage of differing pieces of information (Figure~\ref{fig:sensitivedata} summarizes these perceptions), with the most (48\%) being concerned about someone being able to learn their location information and the least (18\%) being concerned about someone learning that they did / did not have the app installed.
Respondents also varied in their concern regarding who could learn these pieces of information (Figure~\ref{fig:sensitiveentities}). The fewest respondents were concerned about their employer (27\%) or a neighbor (33\%), who could, e.g., place Bluetooth beacons at their place of work to eavesdrop on relevant information, being able to learn the information about which they were concerned, while the most were concerned about hackers of various types being able to learn this information. Our results are confirmed by prior work by multiple other groups showing the relevance of privacy considerations in users' COVID19 app adoption decisions~\cite{howgoodenough,li2020decentralized,simko2020covid,frimpong2020financial,Contactt3:online}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/sensitiveinfocovid.pdf} \caption{Survey responses from our survey of 1,000 Americans regarding their data privacy concerns regarding COVID19 apps (see Appendix for question wording).} \label{fig:sensitivedata} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/sensitiveentitiescovid.pdf} \caption{Entities about which potential COVID19 app users would be most concerned about learning each piece of information (Figure~\ref{fig:sensitivedata}) about which they were concerned. Responses are averaged across respondents with multiple data concerns.} \label{fig:sensitiveentities} \end{figure} \textbf{Potential for Inequity.} The potential for tracking COVID-status by social group may risk increased marginalization of underrepresented groups (as has already been a concern with high rates of COVID19 infection among communities of color~\cite{BlackAme57:online}); however, documenting disparities can be critical to ensuring that marginalized groups receive the resources and support that they need~\cite{Howcanwe99:online}.
\subsection{Mobile Costs} \label{sec:mobile} Contact-tracing apps are not cost-free, and different app architectures may result in different mobile costs. For example, contact-tracing apps that rely on Bluetooth, such as those built on the Google-Apple exposure notification API~\cite{PrivacyP2:online}, require the user to frequently use Bluetooth, which has known impacts on battery life~\cite{WhatYouS0:online}. Similarly, whether the app has a push or pull architecture may also have implications for users' data costs (MB of mobile plan used for the app), storage costs (MB of space on the mobile phone used for the app), battery performance (impact on battery life from using the app), and the performance of other apps on their phone / their network speed. \textbf{Empirical Validation.} Recent related work~\cite{trang2020one} and our own empirical work show that mobile costs are an influential part of users' decisions to adopt COVID19 apps: twelve percent of our respondents reported that they were undecided or did not want to install a COVID19 app either because they do not habitually use mobile apps or because of concerns about the costs of mobile data use, limited phone storage, or battery life. In follow-up closed-answer questions asked of all respondents, regardless of their interest in installing a COVID19 app, we find that over half of respondents report that a noticeable impact on battery (66\%), storage (57\%), mobile data (62\%), or performance (64\%) would dissuade them from wanting to install a COVID19 app. \textbf{Potential for Inequity.} Mobile costs are a potential source of inequity: less-resourced users are known to have less-featured or older mobile devices and are more likely to have limited mobile data~\cite{pew2019mobile,smith2015us,Washingt3:online}. These users may be disadvantaged by, or unable to use, apps with high mobile costs or whose functionality their devices do not support.
\subsection{Benefits} \label{sec:benefits} COVID19 contact-tracing apps can have up to six different possible benefits depending on their architecture: \begin{enumerate} \item Knowledge of risk: Users may feel that it is a benefit that the app can notify them if they have been exposed to someone who has COVID19. \item Knowledge of hotspots: Users may feel it is a benefit that they can use some contact-tracing apps to learn what locations near them have been visited by many people with COVID19~\cite{Newlocat33:online, chan2020pact}. \item Feeling of Altruism: Users may feel good about themselves for using or donating data through a contact-tracing app~\cite{CitizenS20:online}, because they feel that they are helping society. \item Environment safety: Users may feel that by using a contact-tracing app they are improving the safety of their environment (e.g., country, community). \item Protecting loved ones: Users may feel that using the app helps them protect those they love or have come into contact with. \item Epidemiological data: Users may feel that a benefit of a contact-tracing app is that it allows for scientists or the government to collect COVID19-related data on infection rate (i.e., how many people are COVID19 positive) and/or spread (i.e., how many people have been exposed to COVID19). \end{enumerate} The relevance of these benefits to users depends on: whether or not the user plans on taking action once they learn that they are at-risk (1,2); whether the user cares about the safety of those around them (3,4,5); whether the user thinks that others will take action once they learn that they are at risk (4); and whether the user believes that epidemiological data will be used / should be used to inform government/institutional COVID19 planning (e.g., lockdown lengths, PPE orders, hospital capacity planning) and whether the user cares about this planning (6). 
\textbf{Empirical Validation.} Both prior work~\cite{trang2020one,li2020decentralized,CitizenS20:online}, which covers a subset of these benefits, and our own work covering all six benefits show that different benefits appeal to different users (Figure~\ref{fig:benefits}). Further, wide variation in reported willingness to adopt COVID19 apps in surveys that use differing descriptions of contact-tracing app benefits also supports the relevance of such benefits, and of how they are framed, to users~\cite{Contactt3:online,WillAmer45:online,Washingt3:online}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/benefitscovid.pdf} \caption{Proportion of survey respondents interested in installing a COVID19 app with different benefits. Responses are averaged across categories (see Appendix for exact survey wording).} \label{fig:benefits} \end{figure} It is important to note that COVID19 apps primarily benefit the health of others. Given that we tend to be self-focused~\cite{fehr2007human}, research is beginning to explore the use of direct (e.g., monetary) incentives for encouraging contact-tracing app adoption, as a supplement to the inherent societal benefits of these apps~\cite{frimpong2020financial}. \textbf{Potential for Inequity.} While desirable to users, hotspot features (benefit (2)) can negatively impact marginalized communities. Less-resourced and minority communities have, thus far, experienced higher rates of COVID19~\cite{BlackAme57:online}. Hotspot information may lead to increased marginalization of these communities and a reduction in economic stimulus. \section{COVID19 App Architecture Tradeoffs Through A User Lens} \begin{figure}[h!]
\centering \includegraphics[width=\textwidth]{figures/contact-tracing-tradeoffs-covid2.pdf} \caption{User-relevant tradeoffs between technologically-facilitated COVID19 approaches.} \label{fig:tradeofftable} \end{figure} Figure~\ref{fig:tradeofftable} summarizes how the attributes that influence users' willingness to adopt COVID19 technologies, detailed in the prior section, vary based on functionality and architecture. In this section, we highlight the most critical differences in these architectures through a user lens. \textbf{Broadcast vs. Contact Tracing.} Broadcast apps differ from contact-tracing apps in that they have lower privacy and mobile costs (they only know, and can only reveal, a user's location, and they do not require Bluetooth); however, they offer fewer benefits both for users and for public health: they can only inform users of hotspots. \textbf{Centralized vs. Decentralized Contact Tracing.} From a user perspective, these two competing contact-tracing architectures differ in their privacy risks, mobile costs, and benefits. Centralized architectures allow the TTP, an individual at the TTP who knows the link between identifiers and real identities, or an attacker who hacks the TTP and the identifier link to learn whether you have the app installed, who is COVID-positive, who is at risk (COVID-exposed), and the social graph (who has had contact with whom), even in the absence of a positive COVID19 test. Additionally, in centralized systems the user does not have agency over the deletion of their data. In decentralized solutions, on the other hand, the only things an attacker can learn are the user's COVID-positive status and whether they have the app installed (or not); the user has agency over their data in such systems, as they can delete the app and their data at any time. These systems may differ in mobile costs due to their differing push/push-pull architectures. Finally, they may also differ in the degree to which they benefit public health.
In a centralized system, the TTP can provide epidemiological data regarding spread (e.g., the TTP can count the number of at-risk persons), while in a decentralized system this information is available only if at-risk people opt in to sharing this data with an epidemiology server. \textbf{Location vs. Proximity Data.} Finally, location-based contact-tracing apps of either architecture differ from proximity-based apps in that they have increased privacy risks (the user's location can be leaked) but also increased benefits: the app can provide hotspot information (benefit (2)), which many users find desirable (Figure~\ref{fig:benefits}). \section{Conclusion} \label{sec:controls} As shown in this paper, and through emerging evidence of adoption (or lack thereof) of contact-tracing apps throughout the world, there are a multitude of factors that must be considered when trying to encourage ethical adoption of digital contact-tracing apps. Beyond the app-specific factors covered in this work, user-specific factors such as sociodemographics, health status, type of employment (essential vs. non-essential worker), political leaning, and Internet skill may also influence people's willingness to adopt these technologies~\cite{li2020decentralized,Contactt3:online,WillAmer45:online,howgoodenough,simko2020covid}. Technologists should take care to consider both the app-specific considerations outlined in this paper and these additional user-specific considerations in their app architecture and marketing strategies, while researchers should continue to explore how contact-tracing app implementations may affect users' willingness to adopt and may exacerbate societal inequities. \section*{Acknowledgements} With thanks to Paul England, Eszter Hargittai, Cormac Herley, Eric Horvitz, Gabriel Kaptchuk, Tadayoshi Kohno, Marina Micheli, Josh Rosenbaum, and Carmela Troncoso for feedback and conversations that contributed to this document. \bibliographystyle{acm}
"Gyöngyhajú lány" ("The girl with pearly hair") is a song by the Hungarian rock band Omega. It was written in 1968, composed in 1969, and released on their album 10 000 lépés. "Gyöngyhajú lány" was very popular in many countries, including West Germany, Great Britain, France, Poland, Romania, Czechoslovakia, Yugoslavia and Bulgaria. The lyrics were written by Anna Adamis, the music was composed by Gábor Presser, and the song was sung by János Kóbor. In 1969, the single "Petróleumlámpa / Gyöngyhajú lány" was released and the song gained popularity. Omega also recorded versions of this song in foreign languages: English ("Pearls in Her Hair") and German ("Perlen im Haar").

Other versions

"Gyöngyhajú lány" was covered in Poland (as "Dziewczyna o perłowych włosach"), the Czech Republic (as "Paleta" by Markýz John and "Dívka s perlami ve vlasech" by Aleš Brichta), Yugoslavia (as "Devojka biserne kose" by Griva), Bulgaria (as "Батальонът се строява / Batalyonat se stroyava" by Дует "Южен Вятър" / duet "Juzhen Vyatar") and Lithuania (as "Meilės Nėra" by Keistuolių Teatras). It was also covered by Frank Schöbel (as "Schreib es mir in den Sand"). The song has also been remixed (e.g. by Kozmix).

Scorpions version

The German rock band Scorpions covered the song as "White Dove", from the album Live Bites. Released as a single in 1994, it was a top-20 hit in Germany and Switzerland, peaking at numbers 18 and 20, respectively.

Sampling

In 2013, hip-hop artist Kanye West sampled the song in the outro of "New Slaves", having Frank Ocean sing over it, without personally asking the band for permission, although the sample was cleared for use in the album by their label, Hungaroton Records. However, in May 2016 songwriter Gábor Presser filed a lawsuit seeking $2.5 million in damages for copyright infringement over the use of the sample, which was later settled out of court.

Other uses

In March 2014, the song was used for the reveal trailer of the video game This War of Mine.
In August 2012, the song was used in the official trailer of the film This Ain't California. In 2016, the melody of the song was used in the German rap song "FLOUZ KOMMT FLOUZ GEHT" by Nimo. In July 2018, the song was used in the official trailer and in a scene of the film Mid90s. In 2021, the Australian bank Westpac used the song in their "Life is Everything" campaign. In 2021, the Second Captains podcast sampled the original version of the song on an audiobed which empathised with the plight of Frank Lampard being expected to work with this group of players whilst manager of Chelsea F.C. In 2022, the French car company Citroën used the song in a TV commercial for their C5 Aircross hybride.

References

1969 songs
1960s ballads
1970 singles
1973 singles
1994 singles
Rock ballads
Griva songs
Hungarian songs
Scorpions (band) songs
Song recordings produced by Keith Olsen
Songs written by Klaus Meine
Songs written by Rudolf Schenker
PolyGram singles
Mercury Records singles
Bellaphon Records singles
Q: Change in values of dropdown, based on other dropdown

I have two dropdowns.

Dropdown 1:

<form:select path="StartTimings" id="startTime" onchange="javascript:changeTiming()">
    <form:option value="9:00 AM">9:00 AM</form:option>
    <form:option value="10:00 AM">10:00 AM</form:option>
    <form:option value="11:00 AM">11:00 AM</form:option>
    <form:option value="12:00 PM">12:00 PM</form:option>
    <form:option value="1:00 PM">1:00 PM</form:option>
    <form:option value="2:00 PM">2:00 PM</form:option>
    <form:option value="3:00 PM">3:00 PM</form:option>
</form:select>

Dropdown 2:

<form:select path="EndTimings" id="endTime">
    <form:option value="10:00 AM">10:00 AM</form:option>
    <form:option value="11:00 AM">11:00 AM</form:option>
    <form:option value="12:00 PM">12:00 PM</form:option>
    <form:option value="1:00 PM">1:00 PM</form:option>
    <form:option value="2:00 PM">2:00 PM</form:option>
    <form:option value="3:00 PM">3:00 PM</form:option>
    <form:option value="4:00 PM">4:00 PM</form:option>
</form:select>

Snippet of the JavaScript function:

function changeTiming() {
    var select1 = document.getElementById("startTime");
    if (select1.value == "10:00 AM") {
        document.getElementById("endTime").selectedIndex = 1;
    } else if (select1.value == "11:00 AM") {
        document.getElementById("endTime").selectedIndex = 2;
    } else if (select1.value == "12:00 PM") {
        document.getElementById("endTime").selectedIndex = 3;
    }
}

I need some help in writing/modifying the above JavaScript function so that the value (time) in Dropdown 2 is always greater than the value selected in Dropdown 1. I.e., if a user selects 11 AM from Dropdown 1, Dropdown 2 must ONLY contain the values greater than 11 AM, i.e. 12 PM, 1 PM, 2 PM, 3 PM, 4 PM.
A: DEMO HERE

window.onload = function () {
    var start = document.getElementById("startTime");
    var end = document.getElementById("endTime");
    start.onchange = function () {
        end.options.length = 0;
        for (var i = this.selectedIndex + 1; i < this.options.length; i++) {
            end.options[end.options.length] = new Option(this.options[i].text, this.options[i].value);
        }
        end.options[end.options.length] = new Option("4:00 PM", "4:00 PM");
    };
    start.onchange();
};

A: This can be really simple. Store one set of your times in an array. Loop through them to load the start times. Your onchange event should then get the selectedIndex of startTime and loop through the times array from that number onward to load the end times. Good luck with your code.
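The array-based approach described in the second answer can be sketched as follows. The `TIMES` array and the helper name `endTimesAfter` are illustrative, not from either answer; the DOM wiring mirrors the markup in the question:

```javascript
// All boundary times, in order (one shared list for both dropdowns).
var TIMES = ['9:00 AM', '10:00 AM', '11:00 AM', '12:00 PM',
             '1:00 PM', '2:00 PM', '3:00 PM', '4:00 PM'];

// Return the end times that are strictly later than the chosen start time.
function endTimesAfter(startTime) {
  var startIndex = TIMES.indexOf(startTime);
  if (startIndex === -1) {
    return []; // unknown start time: offer nothing rather than a wrong list
  }
  return TIMES.slice(startIndex + 1);
}

// Rebuild the end-time <select> whenever the start-time <select> changes.
function changeTiming() {
  var start = document.getElementById('startTime');
  var end = document.getElementById('endTime');
  end.options.length = 0; // clear existing options
  endTimesAfter(start.value).forEach(function (t) {
    end.options[end.options.length] = new Option(t, t);
  });
}
```

Keeping the pure list logic in `endTimesAfter` separate from the DOM update makes it easy to unit-test and avoids the long if/else chain entirely.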
Small fleet (Малий флот) is a historical term of theory, denoting a fleet composed mainly of small ships and inferior to the enemy above all in battleships. The term appeared at the beginning of the 20th century and reflected the search for ways to use the fleet's light forces against individual groups of large enemy ships, to disrupt the enemy's communications, and to defend one's own coast.

Development of the small fleet theory in the USSR

In the 1920s and 1930s, the "small fleet" theory was adopted in the USSR as the official naval doctrine. On its basis, the shipbuilding programs of 1926, 1929 and 1933 were developed, which called first of all for the construction of submarines and torpedo boats, and only lastly for cruisers, destroyers and destroyer leaders. They did not provide for the construction of battleships, battlecruisers or heavy cruisers for the Soviet Navy. In 1936, a program for building a Soviet "large sea-going and ocean-going fleet" was adopted, replacing the "small fleet" doctrine. Its implementation began in 1938.

See also

Mosquito fleet (Москітний флот)

References

Naval terminology
Navy of the USSR
# Linearly independent sets of vectors

Find $3$ vectors $a$, $b$ and $c$ in $\mathbb{R}^3$ such that {$a$, $b$}, {$a$, $c$} and {$b$, $c$} are each linearly independent sets of vectors, but the set {$a$, $b$, $c$} is linearly dependent.

Is there a better way of finding a solution to this problem than just guessing and checking?

• Look for easy dependences. Suppose any two of your vectors span the usual $xy$ plane. – lulu Apr 22 '16 at 11:38

How about taking $a,b$ to be any two linearly independent vectors, and then letting $c=a+b$.
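To make the hint concrete, here is one worked instance (my own example, using the simplest possible choice): take $a=(1,0,0)$, $b=(0,1,0)$ and $c=a+b=(1,1,0)$. Each pair is linearly independent, since neither vector in a pair is a scalar multiple of the other, yet $a+b-c=0$ is a non-trivial linear combination, so $\{a,b,c\}$ is linearly dependent. Equivalently, all three vectors lie in the $xy$ plane, so
$$\det\begin{pmatrix}1&0&0\\0&1&0\\1&1&0\end{pmatrix}=0.$$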
describe('async service', function () {
    var fooObject = {foo: 777};

    beforeEach(module('app.core'));

    // Stub $window.async before the service is instantiated,
    // so the factory picks up our test object instead of the real one.
    beforeEach(function (done) {
        inject(function ($window) {
            $window.async = fooObject;
            done();
        });
    });

    it('asyncService is attached with right object', function (done) {
        inject(function (asyncService) {
            expect(asyncService).toBe(fooObject);
            done();
        });
    });
});
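For context, the test implies that the `app.core` module registers `asyncService` as a thin wrapper around `$window.async`. A minimal sketch of such a factory follows; this is hypothetical, since the actual module definition is not shown in the source:

```javascript
// Hypothetical factory function: returns whatever the host page attached
// to window.async (the test above stubs this with {foo: 777}).
function asyncServiceFactory($window) {
  return $window.async;
}

// Registration as it might appear in app.core (requires AngularJS at runtime;
// the guard lets this file load outside the browser too).
if (typeof angular !== 'undefined') {
  angular.module('app.core').factory('asyncService', ['$window', asyncServiceFactory]);
}
```

Returning `$window.async` directly (rather than copying it) is what makes `toBe` — a reference-equality matcher — pass in the test.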
{"url":"https:\/\/blog.iisreset.me\/admin-account-schema-extensions\/","text":"The best administration models for Active Directory prescribe some level of account segregation - at least for any user holding sensitive group memberships like Domain Admins - and in some cases might even require multiple admin accounts per individual administrator, such as with Microsoft's tiered administrative model. But how might one keep track of these account relationship, for lifecycle management purposes for example...? Today, let's have a look at an extension to the Active Directory schema that might help!\n\nThe built-in administration model in Active Directory is dangerously permissive towards behaviors that expose Domain Admins and other sensitive accounts to credential theft in most enterprise networks of today - like, allowing logons to workstations and other client member computers which might have been compromised already, by default. This leads the uninitiated to employ all sorts of inappropriate points of contact between highly privileged accounts and untrusted assets.\n\nFor this reason, one of the most fundamental steps you can (and should) take to secure your AD environment is to establish some form of account segration.\n\nIt can be as simple just giving your admins an extra account to which administrative privileges can be assigned. This admin account, in turn, should not be used for day-to-day tasks like reading email or browsing the internet. 
This paradigm is prevalent in (and sometimes a good fit for) smaller operations where helpdesk\/sysadmin responsibilities are shared by the same employees.\n\n## Lockdown mode!\n\nOn the other end of the spectrum, we find Microsoft's Active Directory administrative tier model\n\nIn it's most extreme form, this 3-tier model is enforced via a dedicated and appropriately hardened administrative forest (colloquially referred to as a \"Red Forest\" or an ESEA - Enhanced Security Administrative Environment), but it can also be implemented directly in a single forest to provide appropriate segregation not just between servers and workstations but also isolate core identity administration from application infrastructure.\n\nI'd strongly recommend anyone involved with securing, designing or managing Active Directory to familiarize themselves with the concepts and principles of the tiering model, especially the idea of tier 0 equivalency.\n\nImagine you've followed all the advice above - provisioned separate admin accounts for all support staff, one for each of the tiers they need to manage, so far so good. Now, all of a sudden, you realize that all this added complexity has presented you with another problem: admin account lifecycle management.\n\n\u2022 Who's gonna remember to check for (and disable!) 
associated admin accounts when one of your admins leave the company?\n\nAnother problem you might encounter with all of these accounts belonging to the same people is identity correlation when triaging suspicious or anomalous activity originating from the administrative accounts.\n\nSay, if a Domain Admin account suddenly deletes a bunch of objects in the directory at 3:45 in the morning and starts changing peoples passwords - you might want to contact the actual employee and ask them to confirm whether this was intended or not - so the ability to automatically translate the identity of an admin account to the associated regular user account might be of importance.\nThis leaves us with two questions we need to be able to answer:\n\n\u2022 Given the identity of an admin account, return the associated user\n\u2022 Given the identity of a user, return any associated admin accounts\n\nI've seen many attempts to do this by comparing usernames, in the hopes that the organization's (often unenforced) naming conventions will save the day (\"john-admin probably belongs to john\"), and I've seen almost as many failures. We need a more robust way of solving this...\n\n# Schema extensions FTW!\n\nIn other words, we need some way of persisting a one-to-many relationship between user account objects in Active Directory. Sound familiar?\n\nAs it so happens, Active Directory already has a number of attributes that act exactly in this manner. The first example that comes to mind is manager\/directReports - these are what we call link-value attributes. When the manager attribute is set on an account to a value of X, the directReports attribute on the object identified by X will automatically reflect the value of the account that was updated. 
We call the first attribute (manager) the forward-linked attribute of this pair, whereas the second attribute (directReports) is a back-link.\n\nWe can leverage this mechanism to create our own such mappings in 3 easy steps:\n\n\u2022 Create a pair of linked attributes in the schema\n\u2022 Create an auxiliary class that may contain the attributes\n\u2022 Associate the auxiliary class with the existing user object class\n\n## attributeSchema\n\nSimply put, every single object in Active Directory consists of an internal, replica-specific identity (a DNT), and a number of attributes that may or may not have values. The behavior of each attribute is in turn bound by an attributeSchema object in the Schema naming-context. If we want to extend the behavior of Active Directory, we need to add some attributeSchema's!\n\nWith this in mind, let's start by having a look at the schema definition for the manager attribute:\n\n# Discover the schema container\n$rootDSE = Get-ADRootDSE$schemaNC = $rootDSE.schemaNamingContext # Discover schema master$schemaMaster = Get-ADObject $schemaNC -Properties fSMORoleOwner | Get-ADDomainController -Identity {$_.fSMORoleOwner }\n\n# Retrieve the 'manager' attribute\nGet-ADObject -Filter 'Name -eq \"manager\"' -SearchBase $schemaNC -Properties * adminDisplayName : Manager attributeID : 0.9.2342.19200300.100.1.10 attributeSyntax : 2.5.5.1 DistinguishedName : CN=Manager,CN=Schema,CN=Configuration,DC=corp,DC=iisreset,DC=me isMemberOfPartialAttributeSet : True isSingleValued : True lDAPDisplayName : manager linkID : 42 Name : Manager ObjectClass : attributeSchema oMSyntax : 127 In the output above I've listed only a subset of the attributes, but this in essence contains everything we need, specifically: \u2022 We need to create an attributeSchema object with a name and \u2022 ... a linkID \u2022 ... an attributeID \u2022 ... an oMSyntax value of 127 \u2022 ... 
an attributeSyntax value of 2.5.5.1 The oMSyntax value 127 indicates that the attribute value is an object reference, and the attributeSyntax 2.5.5.1 means LDAP will use Distinguished Names to represent the object references. Another point of interest in the output above is the boolean isSingleValued. Given that we're interested in establishing one-to-many relationships, our forward-link to tie admin accounts back to their primary owner should also be single-valued, whereas the back-link must be multi-valued. Note: member\/memberOf is an example of a linked attribute pair in which both links are multi-valued - as they are intended to represent many-to-many relationships The forward- and back-links are paired by the linkID value - forward-links have even linkID values, and their corresponding back-link attributes will have the same value + 1. In the example above, manager has a linkID value of 42, and sure enough: PS C:\\> Get-ADObject -Filter 'linkID -eq 43' -SearchBase$schemaNC |% ldapDisplayName\ndirectReports\n\n\nWe could just pick some even number, 31706 for example, and assign 31707 to the backlink attribute - and it would probably work - but we run the risk of grabbing an ID that Microsoft might want to use in a future schema update, at which point we'd be in dire trouble.\n\nSince Windows Server 2003, Active Directory supports auto-generating a random linkID pair via a magic OID, by doing the following:\n\n\u2022 Create the forward-link with a linkID value of 1.2.840.113556.1.2.50\n\u2022 Create the back-link with a linkID value of the attributeID of the forward link\n\n... and AD will take care of the rest.\n\nThe other unique identifiers we need to supply are OIDs for the attributeID values. 
If your organization already has an ISO OID assigned, you'll want to use an extension of this - but if not, Microsoft provides guidance on how you can generate one.\n\nFinally, a best practice recommendation: add a company-specific prefix to your schema names to avoid collissions with other applications or the base schema! In the following I'll be using the prefix iRM as a shorthand for IISResetMe.\n\nWith that out of the way, let's get started!\n\n# Forward-link attribute to indicate the owner of a non-primary account\n$fwdAttrName = \"${orgPrefix}-sourceAccount\"\n$fwdAttrSchema = New-ADObject -Name$fwdAttrName -Type attributeSchema -OtherAttributes @{\nadminDisplayName = $fwdAttrName lDAPDisplayName =$fwdAttrName\noMSyntax = 127\nattributeSyntax = \"2.5.5.1\"\nattributeID = \"${orgOID}.1\" isSingleValued =$true\nisMemberOfPartialAttributeSet = $true adminDescription = \"Account owner\" instanceType = 4 linkID = '1.2.840.113556.1.2.50' showInAdvancedViewOnly =$false\n} -Server $schemaMaster -PassThru # Refresh the schema$rootDSE.Put('schemaUpdateNow', 1)\n$rootDSE.SetInfo() Alright, that's our forward-link, the sourceAccount attribute - this will be used to indicate who's the real owner of an admin account. Next up, the back-link: $bckAttrName = \"${orgPrefix}-adminAccounts\"$bckAttrSchema = New-ADObject -Name $bckAttrName -Type attributeSchema -OtherAttributes @{ adminDisplayName =$bckAttrName\nlDAPDisplayName = $bckAttrName oMSyntax = 127 attributeSyntax = \"2.5.5.1\" attributeID = \"${orgOID}.2\"\nisSingleValued = $false isMemberOfPartialAttributeSet =$true\ninstanceType = 4\nshowInAdvancedViewOnly = $false linkID =$fwdAttrSchema.attributeID\n} -Server $schemaMaster -PassThru # Refresh the schema$rootDSE.Put('schemaUpdateNow', 1)\n$rootDSE.SetInfo() ... and that's our back-link, the adminAccounts attribute which can be used to discover the admin accounts associated with an account. 
## classSchema

Now, the ultimate goal here is to associate the attribute pair with the user object class - but rather than modifying the set of attributes that the user class can contain directly, the recommended method of extending the base schema data types is to create an auxiliary class, and use that to extend the existing type with a collective set of behaviors.

For this reason, our next step is to create such a class:

# Auxiliary class that may contain our attributes
$auxClassName = "${orgPrefix}-adminAccountExtensions"
$auxClassSchema = New-ADObject -Name $auxClassName -Type classSchema -OtherAttributes @{
    adminDisplayName = $auxClassName
    lDAPDisplayName = $auxClassName
    governsID = "${orgOID}.3"
    mayContain = @(
        $fwdAttrSchema.attributeID
        $bckAttrSchema.attributeID
    )
    objectClassCategory = '3'
    subClassOf = 'top'
} -Server $schemaMaster -PassThru

# Refresh the schema
$rootDSE.Put('schemaUpdateNow', 1)
$rootDSE.SetInfo()

A few points to take note of here:

• governsID is the class schema identifier, equivalent to attributeID
• objectClassCategory = 3 means "this is an auxiliary class"
• We intend to extend the behavior of user, but we're not inheriting from it (hence subClassOf = top)

Finally, we need to refresh the schema one last time before the last operation - tying it all together:

$userClass = Get-ADObject -SearchBase $schemaNC -Filter "Name -like 'user'"
$userClass |Set-ADObject -Add @{
    # add our new auxiliary class to 'user'
    auxiliaryClass = $auxClassSchema.governsID
} -Server $schemaMaster

Our auxiliary class schema as seen in the AD Schema MMC snap-in

## Let's see it in action!

In my test environment, the user john has two separate admin accounts:

• His "Ultra Admin" account, john-ua, for tier 0 administration
• His "Infra Admin" account, john-ia, for tier 1 administration

To establish the correct relationship using our new attributes, we can use familiar tools like ADAC, dsa.msc or -
even better - good old Set-ADUser:

$john = Get-ADUser john
foreach($admin in 'john-ua','john-ia'){
    Set-ADUser $admin -Replace @{ 'iRM-sourceAccount' = $john.distinguishedName }
}

Going forward, you'll want to populate the initial relation during admin account creation, so don't forget to update your provisioning scripts!

Now that the accounts are properly linked, let's revisit the questions I posed earlier:

• Given the identity of an admin account, return the associated user

Well, now that we literally have that info directly on the object, it becomes as easy as:

PS C:\> $johnIA = Get-ADUser john-ia -Properties iRM-sourceAccount
PS C:\> $johnIA |Get-ADUser -Identity { $_.'iRM-sourceAccount' }

Nifty! But what about the other way around?

• Given the identity of a user, return any associated admin accounts

Very well - given the back-link on the source account, that's almost as trivial!

PS C:\> $john = Get-ADUser john -Properties iRM-adminAccounts
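The convenience on display here - write the forward link once, read the back-link for free - is easy to mimic with a toy model. A minimal Python sketch of the semantics (purely illustrative; nothing here talks to a real directory):

```python
class ToyDirectory:
    """Toy model of a linked-attribute pair: a single-valued forward link
    ('sourceAccount') whose multi-valued back-link ('adminAccounts') is
    maintained automatically, as AD does for real linked attributes."""

    def __init__(self):
        self._forward = {}   # admin account -> owner (single-valued)
        self._back = {}      # owner -> set of admin accounts (multi-valued)

    def set_source_account(self, admin: str, owner: str) -> None:
        # Clearing any previous owner keeps the back-link consistent,
        # mirroring how AD updates back-links on forward-link writes.
        old = self._forward.get(admin)
        if old is not None:
            self._back[old].discard(admin)
        self._forward[admin] = owner
        self._back.setdefault(owner, set()).add(admin)

    def admin_accounts(self, owner: str) -> set:
        return self._back.get(owner, set())

d = ToyDirectory()
for admin in ("john-ua", "john-ia"):
    d.set_source_account(admin, "john")
assert d.admin_accounts("john") == {"john-ua", "john-ia"}
```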
package org.thingsboard.server.transport.coap.efento.adaptor;

import com.google.gson.Gson;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import org.thingsboard.server.common.transport.adaptor.AdaptorException;
import org.thingsboard.server.common.transport.adaptor.JsonConverter;
import org.thingsboard.server.gen.transport.TransportProtos;
import org.thingsboard.server.transport.coap.efento.CoapEfentoTransportResource;

import java.util.List;
import java.util.UUID;

@Component
@Slf4j
public class EfentoCoapAdaptor {

    private static final Gson gson = new Gson();

    public TransportProtos.PostTelemetryMsg convertToPostTelemetry(UUID sessionId, List<CoapEfentoTransportResource.EfentoMeasurements> measurements) throws AdaptorException {
        try {
            return JsonConverter.convertToTelemetryProto(gson.toJsonTree(measurements));
        } catch (Exception ex) {
            log.warn("[{}] Failed to convert EfentoMeasurements to PostTelemetry request!", sessionId);
            throw new AdaptorException(ex);
        }
    }
}
// Generated by the protocol buffer compiler.  DO NOT EDIT!
// source: google/cloud/bigquery/migration/v2/translation_config.proto

package com.google.cloud.bigquery.migration.v2;

/**
 *
 *
 * <pre>
 * The potential components of a full name mapping that will be mapped
 * during translation in the source data warehouse.
 * </pre>
 *
 * Protobuf type {@code google.cloud.bigquery.migration.v2.NameMappingKey}
 */
public final class NameMappingKey extends com.google.protobuf.GeneratedMessageV3
    implements
    // @@protoc_insertion_point(message_implements:google.cloud.bigquery.migration.v2.NameMappingKey)
    NameMappingKeyOrBuilder {
  private static final long serialVersionUID = 0L;
  // Use NameMappingKey.newBuilder() to construct.
  private NameMappingKey(com.google.protobuf.GeneratedMessageV3.Builder<?> builder) {
    super(builder);
  }

  private NameMappingKey() {
    type_ = 0;
    database_ = "";
    schema_ = "";
    relation_ = "";
    attribute_ = "";
  }

  @java.lang.Override
  @SuppressWarnings({"unused"})
  protected java.lang.Object newInstance(UnusedPrivateParameter unused) {
    return new NameMappingKey();
  }

  @java.lang.Override
  public final com.google.protobuf.UnknownFieldSet getUnknownFields() {
    return this.unknownFields;
  }

  public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() {
    return com.google.cloud.bigquery.migration.v2.TranslationConfigProto
        .internal_static_google_cloud_bigquery_migration_v2_NameMappingKey_descriptor;
  }

  @java.lang.Override
  protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
      internalGetFieldAccessorTable() {
    return com.google.cloud.bigquery.migration.v2.TranslationConfigProto
        .internal_static_google_cloud_bigquery_migration_v2_NameMappingKey_fieldAccessorTable
        .ensureFieldAccessorsInitialized(
            com.google.cloud.bigquery.migration.v2.NameMappingKey.class,
            com.google.cloud.bigquery.migration.v2.NameMappingKey.Builder.class);
  }

  /**
   *
   *
   * <pre>
   * The type of the object that is being mapped.
* </pre> * * Protobuf enum {@code google.cloud.bigquery.migration.v2.NameMappingKey.Type} */ public enum Type implements com.google.protobuf.ProtocolMessageEnum { /** * * * <pre> * Unspecified name mapping type. * </pre> * * <code>TYPE_UNSPECIFIED = 0;</code> */ TYPE_UNSPECIFIED(0), /** * * * <pre> * The object being mapped is a database. * </pre> * * <code>DATABASE = 1;</code> */ DATABASE(1), /** * * * <pre> * The object being mapped is a schema. * </pre> * * <code>SCHEMA = 2;</code> */ SCHEMA(2), /** * * * <pre> * The object being mapped is a relation. * </pre> * * <code>RELATION = 3;</code> */ RELATION(3), /** * * * <pre> * The object being mapped is an attribute. * </pre> * * <code>ATTRIBUTE = 4;</code> */ ATTRIBUTE(4), /** * * * <pre> * The object being mapped is a relation alias. * </pre> * * <code>RELATION_ALIAS = 5;</code> */ RELATION_ALIAS(5), /** * * * <pre> * The object being mapped is a an attribute alias. * </pre> * * <code>ATTRIBUTE_ALIAS = 6;</code> */ ATTRIBUTE_ALIAS(6), /** * * * <pre> * The object being mapped is a function. * </pre> * * <code>FUNCTION = 7;</code> */ FUNCTION(7), UNRECOGNIZED(-1), ; /** * * * <pre> * Unspecified name mapping type. * </pre> * * <code>TYPE_UNSPECIFIED = 0;</code> */ public static final int TYPE_UNSPECIFIED_VALUE = 0; /** * * * <pre> * The object being mapped is a database. * </pre> * * <code>DATABASE = 1;</code> */ public static final int DATABASE_VALUE = 1; /** * * * <pre> * The object being mapped is a schema. * </pre> * * <code>SCHEMA = 2;</code> */ public static final int SCHEMA_VALUE = 2; /** * * * <pre> * The object being mapped is a relation. * </pre> * * <code>RELATION = 3;</code> */ public static final int RELATION_VALUE = 3; /** * * * <pre> * The object being mapped is an attribute. * </pre> * * <code>ATTRIBUTE = 4;</code> */ public static final int ATTRIBUTE_VALUE = 4; /** * * * <pre> * The object being mapped is a relation alias. 
* </pre> * * <code>RELATION_ALIAS = 5;</code> */ public static final int RELATION_ALIAS_VALUE = 5; /** * * * <pre> * The object being mapped is a an attribute alias. * </pre> * * <code>ATTRIBUTE_ALIAS = 6;</code> */ public static final int ATTRIBUTE_ALIAS_VALUE = 6; /** * * * <pre> * The object being mapped is a function. * </pre> * * <code>FUNCTION = 7;</code> */ public static final int FUNCTION_VALUE = 7; public final int getNumber() { if (this == UNRECOGNIZED) { throw new java.lang.IllegalArgumentException( "Can't get the number of an unknown enum value."); } return value; } /** * @param value The numeric wire value of the corresponding enum entry. * @return The enum associated with the given numeric wire value. * @deprecated Use {@link #forNumber(int)} instead. */ @java.lang.Deprecated public static Type valueOf(int value) { return forNumber(value); } /** * @param value The numeric wire value of the corresponding enum entry. * @return The enum associated with the given numeric wire value. 
*/ public static Type forNumber(int value) { switch (value) { case 0: return TYPE_UNSPECIFIED; case 1: return DATABASE; case 2: return SCHEMA; case 3: return RELATION; case 4: return ATTRIBUTE; case 5: return RELATION_ALIAS; case 6: return ATTRIBUTE_ALIAS; case 7: return FUNCTION; default: return null; } } public static com.google.protobuf.Internal.EnumLiteMap<Type> internalGetValueMap() { return internalValueMap; } private static final com.google.protobuf.Internal.EnumLiteMap<Type> internalValueMap = new com.google.protobuf.Internal.EnumLiteMap<Type>() { public Type findValueByNumber(int number) { return Type.forNumber(number); } }; public final com.google.protobuf.Descriptors.EnumValueDescriptor getValueDescriptor() { if (this == UNRECOGNIZED) { throw new java.lang.IllegalStateException( "Can't get the descriptor of an unrecognized enum value."); } return getDescriptor().getValues().get(ordinal()); } public final com.google.protobuf.Descriptors.EnumDescriptor getDescriptorForType() { return getDescriptor(); } public static final com.google.protobuf.Descriptors.EnumDescriptor getDescriptor() { return com.google.cloud.bigquery.migration.v2.NameMappingKey.getDescriptor() .getEnumTypes() .get(0); } private static final Type[] VALUES = values(); public static Type valueOf(com.google.protobuf.Descriptors.EnumValueDescriptor desc) { if (desc.getType() != getDescriptor()) { throw new java.lang.IllegalArgumentException("EnumValueDescriptor is not for this type."); } if (desc.getIndex() == -1) { return UNRECOGNIZED; } return VALUES[desc.getIndex()]; } private final int value; private Type(int value) { this.value = value; } // @@protoc_insertion_point(enum_scope:google.cloud.bigquery.migration.v2.NameMappingKey.Type) } public static final int TYPE_FIELD_NUMBER = 1; private int type_; /** * * * <pre> * The type of object that is being mapped. 
* </pre> * * <code>.google.cloud.bigquery.migration.v2.NameMappingKey.Type type = 1;</code> * * @return The enum numeric value on the wire for type. */ @java.lang.Override public int getTypeValue() { return type_; } /** * * * <pre> * The type of object that is being mapped. * </pre> * * <code>.google.cloud.bigquery.migration.v2.NameMappingKey.Type type = 1;</code> * * @return The type. */ @java.lang.Override public com.google.cloud.bigquery.migration.v2.NameMappingKey.Type getType() { @SuppressWarnings("deprecation") com.google.cloud.bigquery.migration.v2.NameMappingKey.Type result = com.google.cloud.bigquery.migration.v2.NameMappingKey.Type.valueOf(type_); return result == null ? com.google.cloud.bigquery.migration.v2.NameMappingKey.Type.UNRECOGNIZED : result; } public static final int DATABASE_FIELD_NUMBER = 2; private volatile java.lang.Object database_; /** * * * <pre> * The database name (BigQuery project ID equivalent in the source data * warehouse). * </pre> * * <code>string database = 2;</code> * * @return The database. */ @java.lang.Override public java.lang.String getDatabase() { java.lang.Object ref = database_; if (ref instanceof java.lang.String) { return (java.lang.String) ref; } else { com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref; java.lang.String s = bs.toStringUtf8(); database_ = s; return s; } } /** * * * <pre> * The database name (BigQuery project ID equivalent in the source data * warehouse). * </pre> * * <code>string database = 2;</code> * * @return The bytes for database. 
*/ @java.lang.Override public com.google.protobuf.ByteString getDatabaseBytes() { java.lang.Object ref = database_; if (ref instanceof java.lang.String) { com.google.protobuf.ByteString b = com.google.protobuf.ByteString.copyFromUtf8((java.lang.String) ref); database_ = b; return b; } else { return (com.google.protobuf.ByteString) ref; } } public static final int SCHEMA_FIELD_NUMBER = 3; private volatile java.lang.Object schema_; /** * * * <pre> * The schema name (BigQuery dataset equivalent in the source data warehouse). * </pre> * * <code>string schema = 3;</code> * * @return The schema. */ @java.lang.Override public java.lang.String getSchema() { java.lang.Object ref = schema_; if (ref instanceof java.lang.String) { return (java.lang.String) ref; } else { com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref; java.lang.String s = bs.toStringUtf8(); schema_ = s; return s; } } /** * * * <pre> * The schema name (BigQuery dataset equivalent in the source data warehouse). * </pre> * * <code>string schema = 3;</code> * * @return The bytes for schema. */ @java.lang.Override public com.google.protobuf.ByteString getSchemaBytes() { java.lang.Object ref = schema_; if (ref instanceof java.lang.String) { com.google.protobuf.ByteString b = com.google.protobuf.ByteString.copyFromUtf8((java.lang.String) ref); schema_ = b; return b; } else { return (com.google.protobuf.ByteString) ref; } } public static final int RELATION_FIELD_NUMBER = 4; private volatile java.lang.Object relation_; /** * * * <pre> * The relation name (BigQuery table or view equivalent in the source data * warehouse). * </pre> * * <code>string relation = 4;</code> * * @return The relation. 
*/ @java.lang.Override public java.lang.String getRelation() { java.lang.Object ref = relation_; if (ref instanceof java.lang.String) { return (java.lang.String) ref; } else { com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref; java.lang.String s = bs.toStringUtf8(); relation_ = s; return s; } } /** * * * <pre> * The relation name (BigQuery table or view equivalent in the source data * warehouse). * </pre> * * <code>string relation = 4;</code> * * @return The bytes for relation. */ @java.lang.Override public com.google.protobuf.ByteString getRelationBytes() { java.lang.Object ref = relation_; if (ref instanceof java.lang.String) { com.google.protobuf.ByteString b = com.google.protobuf.ByteString.copyFromUtf8((java.lang.String) ref); relation_ = b; return b; } else { return (com.google.protobuf.ByteString) ref; } } public static final int ATTRIBUTE_FIELD_NUMBER = 5; private volatile java.lang.Object attribute_; /** * * * <pre> * The attribute name (BigQuery column equivalent in the source data * warehouse). * </pre> * * <code>string attribute = 5;</code> * * @return The attribute. */ @java.lang.Override public java.lang.String getAttribute() { java.lang.Object ref = attribute_; if (ref instanceof java.lang.String) { return (java.lang.String) ref; } else { com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref; java.lang.String s = bs.toStringUtf8(); attribute_ = s; return s; } } /** * * * <pre> * The attribute name (BigQuery column equivalent in the source data * warehouse). * </pre> * * <code>string attribute = 5;</code> * * @return The bytes for attribute. 
*/ @java.lang.Override public com.google.protobuf.ByteString getAttributeBytes() { java.lang.Object ref = attribute_; if (ref instanceof java.lang.String) { com.google.protobuf.ByteString b = com.google.protobuf.ByteString.copyFromUtf8((java.lang.String) ref); attribute_ = b; return b; } else { return (com.google.protobuf.ByteString) ref; } } private byte memoizedIsInitialized = -1; @java.lang.Override public final boolean isInitialized() { byte isInitialized = memoizedIsInitialized; if (isInitialized == 1) return true; if (isInitialized == 0) return false; memoizedIsInitialized = 1; return true; } @java.lang.Override public void writeTo(com.google.protobuf.CodedOutputStream output) throws java.io.IOException { if (type_ != com.google.cloud.bigquery.migration.v2.NameMappingKey.Type.TYPE_UNSPECIFIED .getNumber()) { output.writeEnum(1, type_); } if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(database_)) { com.google.protobuf.GeneratedMessageV3.writeString(output, 2, database_); } if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(schema_)) { com.google.protobuf.GeneratedMessageV3.writeString(output, 3, schema_); } if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(relation_)) { com.google.protobuf.GeneratedMessageV3.writeString(output, 4, relation_); } if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(attribute_)) { com.google.protobuf.GeneratedMessageV3.writeString(output, 5, attribute_); } getUnknownFields().writeTo(output); } @java.lang.Override public int getSerializedSize() { int size = memoizedSize; if (size != -1) return size; size = 0; if (type_ != com.google.cloud.bigquery.migration.v2.NameMappingKey.Type.TYPE_UNSPECIFIED .getNumber()) { size += com.google.protobuf.CodedOutputStream.computeEnumSize(1, type_); } if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(database_)) { size += com.google.protobuf.GeneratedMessageV3.computeStringSize(2, database_); } if 
(!com.google.protobuf.GeneratedMessageV3.isStringEmpty(schema_)) { size += com.google.protobuf.GeneratedMessageV3.computeStringSize(3, schema_); } if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(relation_)) { size += com.google.protobuf.GeneratedMessageV3.computeStringSize(4, relation_); } if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(attribute_)) { size += com.google.protobuf.GeneratedMessageV3.computeStringSize(5, attribute_); } size += getUnknownFields().getSerializedSize(); memoizedSize = size; return size; } @java.lang.Override public boolean equals(final java.lang.Object obj) { if (obj == this) { return true; } if (!(obj instanceof com.google.cloud.bigquery.migration.v2.NameMappingKey)) { return super.equals(obj); } com.google.cloud.bigquery.migration.v2.NameMappingKey other = (com.google.cloud.bigquery.migration.v2.NameMappingKey) obj; if (type_ != other.type_) return false; if (!getDatabase().equals(other.getDatabase())) return false; if (!getSchema().equals(other.getSchema())) return false; if (!getRelation().equals(other.getRelation())) return false; if (!getAttribute().equals(other.getAttribute())) return false; if (!getUnknownFields().equals(other.getUnknownFields())) return false; return true; } @java.lang.Override public int hashCode() { if (memoizedHashCode != 0) { return memoizedHashCode; } int hash = 41; hash = (19 * hash) + getDescriptor().hashCode(); hash = (37 * hash) + TYPE_FIELD_NUMBER; hash = (53 * hash) + type_; hash = (37 * hash) + DATABASE_FIELD_NUMBER; hash = (53 * hash) + getDatabase().hashCode(); hash = (37 * hash) + SCHEMA_FIELD_NUMBER; hash = (53 * hash) + getSchema().hashCode(); hash = (37 * hash) + RELATION_FIELD_NUMBER; hash = (53 * hash) + getRelation().hashCode(); hash = (37 * hash) + ATTRIBUTE_FIELD_NUMBER; hash = (53 * hash) + getAttribute().hashCode(); hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; } public static 
com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( java.nio.ByteBuffer data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( java.nio.ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( java.io.InputStream input) throws java.io.IOException { return com.google.protobuf.GeneratedMessageV3.parseWithIOException(PARSER, input); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return com.google.protobuf.GeneratedMessageV3.parseWithIOException( PARSER, input, extensionRegistry); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey 
parseDelimitedFrom( java.io.InputStream input) throws java.io.IOException { return com.google.protobuf.GeneratedMessageV3.parseDelimitedWithIOException(PARSER, input); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return com.google.protobuf.GeneratedMessageV3.parseDelimitedWithIOException( PARSER, input, extensionRegistry); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return com.google.protobuf.GeneratedMessageV3.parseWithIOException(PARSER, input); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return com.google.protobuf.GeneratedMessageV3.parseWithIOException( PARSER, input, extensionRegistry); } @java.lang.Override public Builder newBuilderForType() { return newBuilder(); } public static Builder newBuilder() { return DEFAULT_INSTANCE.toBuilder(); } public static Builder newBuilder( com.google.cloud.bigquery.migration.v2.NameMappingKey prototype) { return DEFAULT_INSTANCE.toBuilder().mergeFrom(prototype); } @java.lang.Override public Builder toBuilder() { return this == DEFAULT_INSTANCE ? new Builder() : new Builder().mergeFrom(this); } @java.lang.Override protected Builder newBuilderForType(com.google.protobuf.GeneratedMessageV3.BuilderParent parent) { Builder builder = new Builder(parent); return builder; } /** * * * <pre> * The potential components of a full name mapping that will be mapped * during translation in the source data warehouse. 
* </pre> * * Protobuf type {@code google.cloud.bigquery.migration.v2.NameMappingKey} */ public static final class Builder extends com.google.protobuf.GeneratedMessageV3.Builder<Builder> implements // @@protoc_insertion_point(builder_implements:google.cloud.bigquery.migration.v2.NameMappingKey) com.google.cloud.bigquery.migration.v2.NameMappingKeyOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { return com.google.cloud.bigquery.migration.v2.TranslationConfigProto .internal_static_google_cloud_bigquery_migration_v2_NameMappingKey_descriptor; } @java.lang.Override protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable() { return com.google.cloud.bigquery.migration.v2.TranslationConfigProto .internal_static_google_cloud_bigquery_migration_v2_NameMappingKey_fieldAccessorTable .ensureFieldAccessorsInitialized( com.google.cloud.bigquery.migration.v2.NameMappingKey.class, com.google.cloud.bigquery.migration.v2.NameMappingKey.Builder.class); } // Construct using com.google.cloud.bigquery.migration.v2.NameMappingKey.newBuilder() private Builder() {} private Builder(com.google.protobuf.GeneratedMessageV3.BuilderParent parent) { super(parent); } @java.lang.Override public Builder clear() { super.clear(); type_ = 0; database_ = ""; schema_ = ""; relation_ = ""; attribute_ = ""; return this; } @java.lang.Override public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { return com.google.cloud.bigquery.migration.v2.TranslationConfigProto .internal_static_google_cloud_bigquery_migration_v2_NameMappingKey_descriptor; } @java.lang.Override public com.google.cloud.bigquery.migration.v2.NameMappingKey getDefaultInstanceForType() { return com.google.cloud.bigquery.migration.v2.NameMappingKey.getDefaultInstance(); } @java.lang.Override public com.google.cloud.bigquery.migration.v2.NameMappingKey build() { com.google.cloud.bigquery.migration.v2.NameMappingKey result = buildPartial(); 
if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } @java.lang.Override public com.google.cloud.bigquery.migration.v2.NameMappingKey buildPartial() { com.google.cloud.bigquery.migration.v2.NameMappingKey result = new com.google.cloud.bigquery.migration.v2.NameMappingKey(this); result.type_ = type_; result.database_ = database_; result.schema_ = schema_; result.relation_ = relation_; result.attribute_ = attribute_; onBuilt(); return result; } @java.lang.Override public Builder clone() { return super.clone(); } @java.lang.Override public Builder setField( com.google.protobuf.Descriptors.FieldDescriptor field, java.lang.Object value) { return super.setField(field, value); } @java.lang.Override public Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field) { return super.clearField(field); } @java.lang.Override public Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof) { return super.clearOneof(oneof); } @java.lang.Override public Builder setRepeatedField( com.google.protobuf.Descriptors.FieldDescriptor field, int index, java.lang.Object value) { return super.setRepeatedField(field, index, value); } @java.lang.Override public Builder addRepeatedField( com.google.protobuf.Descriptors.FieldDescriptor field, java.lang.Object value) { return super.addRepeatedField(field, value); } @java.lang.Override public Builder mergeFrom(com.google.protobuf.Message other) { if (other instanceof com.google.cloud.bigquery.migration.v2.NameMappingKey) { return mergeFrom((com.google.cloud.bigquery.migration.v2.NameMappingKey) other); } else { super.mergeFrom(other); return this; } } public Builder mergeFrom(com.google.cloud.bigquery.migration.v2.NameMappingKey other) { if (other == com.google.cloud.bigquery.migration.v2.NameMappingKey.getDefaultInstance()) return this; if (other.type_ != 0) { setTypeValue(other.getTypeValue()); } if (!other.getDatabase().isEmpty()) { database_ = other.database_; 
onChanged(); } if (!other.getSchema().isEmpty()) { schema_ = other.schema_; onChanged(); } if (!other.getRelation().isEmpty()) { relation_ = other.relation_; onChanged(); } if (!other.getAttribute().isEmpty()) { attribute_ = other.attribute_; onChanged(); } this.mergeUnknownFields(other.getUnknownFields()); onChanged(); return this; } @java.lang.Override public final boolean isInitialized() { return true; } @java.lang.Override public Builder mergeFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { if (extensionRegistry == null) { throw new java.lang.NullPointerException(); } try { boolean done = false; while (!done) { int tag = input.readTag(); switch (tag) { case 0: done = true; break; case 8: { type_ = input.readEnum(); break; } // case 8 case 18: { database_ = input.readStringRequireUtf8(); break; } // case 18 case 26: { schema_ = input.readStringRequireUtf8(); break; } // case 26 case 34: { relation_ = input.readStringRequireUtf8(); break; } // case 34 case 42: { attribute_ = input.readStringRequireUtf8(); break; } // case 42 default: { if (!super.parseUnknownField(input, extensionRegistry, tag)) { done = true; // was an endgroup tag } break; } // default: } // switch (tag) } // while (!done) } catch (com.google.protobuf.InvalidProtocolBufferException e) { throw e.unwrapIOException(); } finally { onChanged(); } // finally return this; } private int type_ = 0; /** * * * <pre> * The type of object that is being mapped. * </pre> * * <code>.google.cloud.bigquery.migration.v2.NameMappingKey.Type type = 1;</code> * * @return The enum numeric value on the wire for type. */ @java.lang.Override public int getTypeValue() { return type_; } /** * * * <pre> * The type of object that is being mapped. * </pre> * * <code>.google.cloud.bigquery.migration.v2.NameMappingKey.Type type = 1;</code> * * @param value The enum numeric value on the wire for type to set. 
* @return This builder for chaining. */ public Builder setTypeValue(int value) { type_ = value; onChanged(); return this; } /** * * * <pre> * The type of object that is being mapped. * </pre> * * <code>.google.cloud.bigquery.migration.v2.NameMappingKey.Type type = 1;</code> * * @return The type. */ @java.lang.Override public com.google.cloud.bigquery.migration.v2.NameMappingKey.Type getType() { @SuppressWarnings("deprecation") com.google.cloud.bigquery.migration.v2.NameMappingKey.Type result = com.google.cloud.bigquery.migration.v2.NameMappingKey.Type.valueOf(type_); return result == null ? com.google.cloud.bigquery.migration.v2.NameMappingKey.Type.UNRECOGNIZED : result; } /** * * * <pre> * The type of object that is being mapped. * </pre> * * <code>.google.cloud.bigquery.migration.v2.NameMappingKey.Type type = 1;</code> * * @param value The type to set. * @return This builder for chaining. */ public Builder setType(com.google.cloud.bigquery.migration.v2.NameMappingKey.Type value) { if (value == null) { throw new NullPointerException(); } type_ = value.getNumber(); onChanged(); return this; } /** * * * <pre> * The type of object that is being mapped. * </pre> * * <code>.google.cloud.bigquery.migration.v2.NameMappingKey.Type type = 1;</code> * * @return This builder for chaining. */ public Builder clearType() { type_ = 0; onChanged(); return this; } private java.lang.Object database_ = ""; /** * * * <pre> * The database name (BigQuery project ID equivalent in the source data * warehouse). * </pre> * * <code>string database = 2;</code> * * @return The database. */ public java.lang.String getDatabase() { java.lang.Object ref = database_; if (!(ref instanceof java.lang.String)) { com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref; java.lang.String s = bs.toStringUtf8(); database_ = s; return s; } else { return (java.lang.String) ref; } } /** * * * <pre> * The database name (BigQuery project ID equivalent in the source data * warehouse). 
* </pre> * * <code>string database = 2;</code> * * @return The bytes for database. */ public com.google.protobuf.ByteString getDatabaseBytes() { java.lang.Object ref = database_; if (ref instanceof String) { com.google.protobuf.ByteString b = com.google.protobuf.ByteString.copyFromUtf8((java.lang.String) ref); database_ = b; return b; } else { return (com.google.protobuf.ByteString) ref; } } /** * * * <pre> * The database name (BigQuery project ID equivalent in the source data * warehouse). * </pre> * * <code>string database = 2;</code> * * @param value The database to set. * @return This builder for chaining. */ public Builder setDatabase(java.lang.String value) { if (value == null) { throw new NullPointerException(); } database_ = value; onChanged(); return this; } /** * * * <pre> * The database name (BigQuery project ID equivalent in the source data * warehouse). * </pre> * * <code>string database = 2;</code> * * @return This builder for chaining. */ public Builder clearDatabase() { database_ = getDefaultInstance().getDatabase(); onChanged(); return this; } /** * * * <pre> * The database name (BigQuery project ID equivalent in the source data * warehouse). * </pre> * * <code>string database = 2;</code> * * @param value The bytes for database to set. * @return This builder for chaining. */ public Builder setDatabaseBytes(com.google.protobuf.ByteString value) { if (value == null) { throw new NullPointerException(); } checkByteStringIsUtf8(value); database_ = value; onChanged(); return this; } private java.lang.Object schema_ = ""; /** * * * <pre> * The schema name (BigQuery dataset equivalent in the source data warehouse). * </pre> * * <code>string schema = 3;</code> * * @return The schema. 
*/ public java.lang.String getSchema() { java.lang.Object ref = schema_; if (!(ref instanceof java.lang.String)) { com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref; java.lang.String s = bs.toStringUtf8(); schema_ = s; return s; } else { return (java.lang.String) ref; } } /** * * * <pre> * The schema name (BigQuery dataset equivalent in the source data warehouse). * </pre> * * <code>string schema = 3;</code> * * @return The bytes for schema. */ public com.google.protobuf.ByteString getSchemaBytes() { java.lang.Object ref = schema_; if (ref instanceof String) { com.google.protobuf.ByteString b = com.google.protobuf.ByteString.copyFromUtf8((java.lang.String) ref); schema_ = b; return b; } else { return (com.google.protobuf.ByteString) ref; } } /** * * * <pre> * The schema name (BigQuery dataset equivalent in the source data warehouse). * </pre> * * <code>string schema = 3;</code> * * @param value The schema to set. * @return This builder for chaining. */ public Builder setSchema(java.lang.String value) { if (value == null) { throw new NullPointerException(); } schema_ = value; onChanged(); return this; } /** * * * <pre> * The schema name (BigQuery dataset equivalent in the source data warehouse). * </pre> * * <code>string schema = 3;</code> * * @return This builder for chaining. */ public Builder clearSchema() { schema_ = getDefaultInstance().getSchema(); onChanged(); return this; } /** * * * <pre> * The schema name (BigQuery dataset equivalent in the source data warehouse). * </pre> * * <code>string schema = 3;</code> * * @param value The bytes for schema to set. * @return This builder for chaining. */ public Builder setSchemaBytes(com.google.protobuf.ByteString value) { if (value == null) { throw new NullPointerException(); } checkByteStringIsUtf8(value); schema_ = value; onChanged(); return this; } private java.lang.Object relation_ = ""; /** * * * <pre> * The relation name (BigQuery table or view equivalent in the source data * warehouse). 
* </pre> * * <code>string relation = 4;</code> * * @return The relation. */ public java.lang.String getRelation() { java.lang.Object ref = relation_; if (!(ref instanceof java.lang.String)) { com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref; java.lang.String s = bs.toStringUtf8(); relation_ = s; return s; } else { return (java.lang.String) ref; } } /** * * * <pre> * The relation name (BigQuery table or view equivalent in the source data * warehouse). * </pre> * * <code>string relation = 4;</code> * * @return The bytes for relation. */ public com.google.protobuf.ByteString getRelationBytes() { java.lang.Object ref = relation_; if (ref instanceof String) { com.google.protobuf.ByteString b = com.google.protobuf.ByteString.copyFromUtf8((java.lang.String) ref); relation_ = b; return b; } else { return (com.google.protobuf.ByteString) ref; } } /** * * * <pre> * The relation name (BigQuery table or view equivalent in the source data * warehouse). * </pre> * * <code>string relation = 4;</code> * * @param value The relation to set. * @return This builder for chaining. */ public Builder setRelation(java.lang.String value) { if (value == null) { throw new NullPointerException(); } relation_ = value; onChanged(); return this; } /** * * * <pre> * The relation name (BigQuery table or view equivalent in the source data * warehouse). * </pre> * * <code>string relation = 4;</code> * * @return This builder for chaining. */ public Builder clearRelation() { relation_ = getDefaultInstance().getRelation(); onChanged(); return this; } /** * * * <pre> * The relation name (BigQuery table or view equivalent in the source data * warehouse). * </pre> * * <code>string relation = 4;</code> * * @param value The bytes for relation to set. * @return This builder for chaining. 
*/ public Builder setRelationBytes(com.google.protobuf.ByteString value) { if (value == null) { throw new NullPointerException(); } checkByteStringIsUtf8(value); relation_ = value; onChanged(); return this; } private java.lang.Object attribute_ = ""; /** * * * <pre> * The attribute name (BigQuery column equivalent in the source data * warehouse). * </pre> * * <code>string attribute = 5;</code> * * @return The attribute. */ public java.lang.String getAttribute() { java.lang.Object ref = attribute_; if (!(ref instanceof java.lang.String)) { com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref; java.lang.String s = bs.toStringUtf8(); attribute_ = s; return s; } else { return (java.lang.String) ref; } } /** * * * <pre> * The attribute name (BigQuery column equivalent in the source data * warehouse). * </pre> * * <code>string attribute = 5;</code> * * @return The bytes for attribute. */ public com.google.protobuf.ByteString getAttributeBytes() { java.lang.Object ref = attribute_; if (ref instanceof String) { com.google.protobuf.ByteString b = com.google.protobuf.ByteString.copyFromUtf8((java.lang.String) ref); attribute_ = b; return b; } else { return (com.google.protobuf.ByteString) ref; } } /** * * * <pre> * The attribute name (BigQuery column equivalent in the source data * warehouse). * </pre> * * <code>string attribute = 5;</code> * * @param value The attribute to set. * @return This builder for chaining. */ public Builder setAttribute(java.lang.String value) { if (value == null) { throw new NullPointerException(); } attribute_ = value; onChanged(); return this; } /** * * * <pre> * The attribute name (BigQuery column equivalent in the source data * warehouse). * </pre> * * <code>string attribute = 5;</code> * * @return This builder for chaining. 
*/ public Builder clearAttribute() { attribute_ = getDefaultInstance().getAttribute(); onChanged(); return this; } /** * * * <pre> * The attribute name (BigQuery column equivalent in the source data * warehouse). * </pre> * * <code>string attribute = 5;</code> * * @param value The bytes for attribute to set. * @return This builder for chaining. */ public Builder setAttributeBytes(com.google.protobuf.ByteString value) { if (value == null) { throw new NullPointerException(); } checkByteStringIsUtf8(value); attribute_ = value; onChanged(); return this; } @java.lang.Override public final Builder setUnknownFields(final com.google.protobuf.UnknownFieldSet unknownFields) { return super.setUnknownFields(unknownFields); } @java.lang.Override public final Builder mergeUnknownFields( final com.google.protobuf.UnknownFieldSet unknownFields) { return super.mergeUnknownFields(unknownFields); } // @@protoc_insertion_point(builder_scope:google.cloud.bigquery.migration.v2.NameMappingKey) } // @@protoc_insertion_point(class_scope:google.cloud.bigquery.migration.v2.NameMappingKey) private static final com.google.cloud.bigquery.migration.v2.NameMappingKey DEFAULT_INSTANCE; static { DEFAULT_INSTANCE = new com.google.cloud.bigquery.migration.v2.NameMappingKey(); } public static com.google.cloud.bigquery.migration.v2.NameMappingKey getDefaultInstance() { return DEFAULT_INSTANCE; } private static final com.google.protobuf.Parser<NameMappingKey> PARSER = new com.google.protobuf.AbstractParser<NameMappingKey>() { @java.lang.Override public NameMappingKey parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { Builder builder = newBuilder(); try { builder.mergeFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { throw e.setUnfinishedMessage(builder.buildPartial()); } catch (com.google.protobuf.UninitializedMessageException 
e) { throw e.asInvalidProtocolBufferException().setUnfinishedMessage(builder.buildPartial()); } catch (java.io.IOException e) { throw new com.google.protobuf.InvalidProtocolBufferException(e) .setUnfinishedMessage(builder.buildPartial()); } return builder.buildPartial(); } }; public static com.google.protobuf.Parser<NameMappingKey> parser() { return PARSER; } @java.lang.Override public com.google.protobuf.Parser<NameMappingKey> getParserForType() { return PARSER; } @java.lang.Override public com.google.cloud.bigquery.migration.v2.NameMappingKey getDefaultInstanceForType() { return DEFAULT_INSTANCE; } }
Updated March 2017: Since this post, other MEAN & MERN stack posts have been written: The Modern Application Stack by Andrew Morgan.

In the first part of this blog series, we covered the basic mechanics of our application and undertook some data modeling. In this second part, we will create tests that validate the behavior of our application and then describe how to set up and run the application.

Let's begin by defining some small configuration libraries. Our server will run on port 8000 on localhost. This is fine for initial testing purposes. Later, if we change the location or port number for a production system, it will be very easy to just edit this file.

To prepare for our test cases, we need to ensure that we have a good test environment. The following code achieves this for us. First, we connect to the database. Next, we drop the user collection, which ensures that our database is in a known starting state, and then drop the user feed entry collection. Next, we connect to Stormpath and delete all the users in our test application, and finally close the database. We call async.series to ensure that all of these functions run in the correct order.

Frisby was briefly mentioned earlier. We will use it to define our test cases, as follows. In the first example, we test a password that does not have any lower-case letters. This results in an error being returned by Stormpath, and we expect a status reply of 400. In the next example, we test an invalid email address: there is no @ sign and no domain name in the email address we are passing, and we again expect a status reply of 400.

Now, let's look at some examples of test cases that should succeed. Let's start by defining 3 users. In the following example, we send the array of the 3 users we defined above and expect a success status of 201.
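As a sketch of what the two failing-path Frisby cases might look like (the endpoint path and request field names here are assumptions, not taken from the original code):

```javascript
// Hypothetical request bodies for the two failure cases described above:
// a password with no lower-case letters, and a malformed email address.
var badPassword = { givenName: 'Test', surname: 'User',
                    email: 'test1@example.com', password: 'NOLOWERCASE1' };
var badEmail    = { givenName: 'Test', surname: 'User',
                    email: 'not-an-email-address', password: 'Passw0rd' };

// With the classic frisby API, each case would read roughly:
//
//   frisby.create('enroll rejects password with no lower-case letters')
//     .post('http://localhost:8000/api/v1.0/user/enroll', badPassword, { json: true })
//     .expectStatus(400)   // Stormpath rejects the weak password
//     .toss();
//
// and similarly for badEmail, again expecting status 400.
console.log(JSON.stringify(badPassword));
console.log(JSON.stringify(badEmail));
```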
The JSON document returned shows the user object created, so we can verify that what was created matches our test data.

Next, we test for a duplicate user. In the following example, we try to create a user whose email address already exists.

One important issue is that we don't know a priori what API key will be returned by Stormpath. So, we need to dynamically create a file that looks like the following. We can then use this file to define test cases that require us to authenticate a user. In order to create the temporary file above, we need to connect to MongoDB and retrieve user information. This is achieved by the following code.

In the following code, we can see that the first line uses the temporary file that we created with the user information. We have also defined several feeds, such as Dilbert and the Eater Blog. Previously, we defined some users, but none of them had subscribed to any feeds. In the following code, we test feed subscription. Note that authentication is now required; this is achieved using .auth with the Stormpath API keys.

Our first test checks for an empty feed list. In the next test case, we subscribe our first test user to the Dilbert feed, and then try to subscribe the same user to a feed that they are already subscribed to. Next, we subscribe our test user to a new feed; the result returned should confirm that the user is now subscribed to 2 feeds. Finally, we use our second test user to subscribe to a feed.

Before we begin writing our REST API code, we need to define some utility libraries. First, we need to define how our application will connect to the database. Putting this information into a file gives us the flexibility to add different database URLs for development or production systems. If we wanted to turn on database authentication, we could put that information in a file, as shown below.
This file should not be checked into source code control, for obvious reasons. We can keep the Stormpath API and secret keys in a properties file, as follows, and need to manage this file carefully as well.

In Express.js, we create an "application" (app). This application listens on a particular port for HTTP requests to come in. When requests come in, they pass through a middleware chain. Each link in the middleware chain is given a req (request) object and a res (response) object to store the results. Each link can choose to do work or pass it to the next link. We add new middleware via app.use(). The main middleware is called our "router", which looks at the URL and routes each different URL/verb combination to a specific handler function.

Now we can finally see our application code, which is quite small since we can embed the handlers for the various routes in separate files. We define our own middleware at the end of the chain to handle bad URLs. Now our server application is listening on port 8000; let's print a message on the console to the user.

We will now define schemas for these 4 collections, beginning with the user schema. Notice that we can also format the data, such as converting strings to lowercase, and remove leading or trailing whitespace using trim. In the following code, we can also tell Mongoose what indexes need to exist. Mongoose will ensure that these indexes are created if they do not already exist in our MongoDB database. The unique constraint ensures that duplicates are not allowed. "email : 1" maintains email addresses in ascending order; if we used "email : -1", it would be descending order. We repeat the process for the other 3 collections. The following is an example of a compound index on 4 fields, each maintained in ascending order.

Every request that comes in for GET, POST, PUT, and DELETE needs to have the correct content type, which is application/json. Then the next link in the chain is called.
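The mechanics of that chain can be modeled in a few lines of plain JavaScript (no Express dependency; the route and handler shown are invented for illustration):

```javascript
// Minimal model of the middleware chain described above: each link gets
// (req, res) and either handles the request or passes it on via next(),
// mirroring what app.use() wires together in Express.
function makeApp() {
  var chain = [];
  return {
    use: function (fn) { chain.push(fn); },
    handle: function (req) {
      var res = { status: null, body: null };
      var i = 0;
      (function next() {
        var fn = chain[i++];
        if (fn) fn(req, res, next);
      })();
      return res;
    }
  };
}

var app = makeApp();

// "router" link: handle one URL/verb combination, pass everything else on
app.use(function (req, res, next) {
  if (req.method === 'GET' && req.url === '/api/v1.0/feeds') {
    res.status = 200;
    res.body = { feeds: [] };
  } else {
    next();
  }
});

// end-of-chain link for bad URLs, as in the application code
app.use(function (req, res) {
  res.status = 404;
  res.body = { error: 'bad URL' };
});

console.log(app.handle({ method: 'GET', url: '/api/v1.0/feeds' }).status); // 200
console.log(app.handle({ method: 'GET', url: '/nope' }).status);           // 404
```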
Now we need to define handlers for each combination of URL/verb. The link to the complete code is available in the resources section; we show just a few examples below. Note the ease with which we can use Stormpath. Furthermore, notice that we have defined /api/v1.0, so the client would actually call /api/v1.0/user/enroll, for example. In the future, if we changed the API, say to 2.0, we could use /api/v2.0. This would have its own router and code, so clients using the v1.0 API would continue to work.

Finally, here is a summary of the steps we need to follow to start the server and run the tests.

MongoDB University provides excellent free training. There is a course specifically aimed at Node.js developers, and the link can be found in the resources section below. The resources section also contains links to good MongoDB data modeling resources.
Q: (complex variables) Expand $\frac{2z+3}{1+z}$ in a power series of $z-1$ and comment on its convergence

Question: Expand $\frac{2z+3}{1+z}$ in a power series of $z-1$. What can we say about its convergence?

Attempt: First, notice $\frac{2z+3}{1+z} = \frac{2z+3}{1} \frac{1}{1+z}$. Let $w = 1-z$. Using substitution, I can get $$\frac{5}{2} - \sum_{n=1}^\infty (z-1)^n (-1)^n 2^{-n}.$$ So, it converges when $\left| z - 1 \right| < 1$. However, if I plug this into Wolfram Alpha, I get: $$\frac{5}{2} + \sum_{n=1}^\infty (z-1)^n (-1)^n 2^{-1-n}.$$ Notice that in the second case, the summation is added to the fraction and there is an additional $\frac{1}{2}$ in the summation. Can anyone tell me where I'm going wrong?

A: Let $w=z-1$, so $$\frac{2z+3}{1+z}=\frac{2w+5}{w+2}=2+\frac{1}{w+2}=2+\frac12\cdot\frac{1}{1+\frac{w}{2}}=2+\frac12\sum_{n=0}^\infty(-1)^n\Big(\frac{w}{2}\Big)^n,$$ and this series, with convergence radius $2$, expands as $$\frac52-\frac{w}{4}+\frac{w^2}{8}-\frac{w^3}{16}+\cdots$$
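To pinpoint the discrepancy in the attempt: the geometric-series step needs the denominator in the form $1+u$ with $|u|<1$, so the $2$ must be factored out first:

$$\frac{1}{w+2}=\frac{1}{2}\cdot\frac{1}{1+\frac{w}{2}}=\sum_{n=0}^\infty(-1)^n\frac{w^n}{2^{n+1}},\qquad \left|\frac{w}{2}\right|<1\iff|z-1|<2.$$

Dropping the factor $\frac{1}{2}$ is what turns the correct coefficient $2^{-1-n}$ into $2^{-n}$, and expanding in $w$ rather than $\frac{w}{2}$ is what incorrectly shrinks the region of convergence from $|z-1|<2$ to $|z-1|<1$.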
Q: 'Access-Control-Allow-Origin' header issue when using Restangular I'm trying to use Restangular in my AngularJS application to access a REST API and I'm facing an issue with the 'Access-Control-Allow-Origin' header. I know that this needs to be returned with the API response for security reasons and it is being returned, but for some reason XMLHttpRequest doesn't seem to be noticing it. The full error message is this: XMLHttpRequest cannot load http://api.mysite.dev/users. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://mysite.dev' is therefore not allowed access. If I look at the request in the Network tab of Chrome's dev tools, I can see that the header is being set on the response: This is the code that I'm using to serve those headers in my Laravel REST application: Route::options('{all}', function () { $response = Response::make(''); $response->header('Access-Control-Allow-Origin', '*'); $response->header('Access-Control-Allow-Methods', 'POST, GET, OPTIONS'); $response->header('Access-Control-Allow-Headers', 'Accept, Content-Type'); return $response; }) ->where('all', '.*'); Any ideas what could be going wrong here? A: Try these additional Access-Control-Allow-Headers: $response->header('Access-Control-Allow-Headers', 'Authorization,content-type,accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken,Keep-Alive,X-Requested-With,If-Modified-Since');
Jelets (Russian: Елец) is the second-largest city in Lipetsk Oblast, Russia. The population was 105,989 at the beginning of 2015.

Sources

Cities in Russia
# Common Core Standard HSN-VM.C.7 Questions

(+) Multiply matrices by scalars to produce new matrices, e.g., as when all of the payoffs in a game are doubled.

You can create printable tests and worksheets from these questions on Common Core standard HSN-VM.C.7! Select one or more questions using the checkboxes above each question. Then click the add selected questions to a test button before moving to another page.

Grade 12 Matrices CCSS: HSN-VM.C.7
If the matrix $[[2,9,8],[0,3,4],[1,11,3]]$ is multiplied by the scalar $5$, what is the result?
1. $[[7,14,13],[5,8,9],[6,16,8]]$
2. $[[10,45,40],[0,15,20],[5,55,15]]$
3. $[[3,4,1],[11,3,2],[9,8,0]]$
4. $[[10,0,5],[45,15,55],[40,20,15]]$

Grade 12 Matrices CCSS: HSN-VM.C.7
If the matrix $[[27,9,12],[3,0,6],[18,21,3]]$ is multiplied by the scalar $1/3$, what is the result?
1. $[[9,3,4],[1,0,2],[6,7,1]]$
2. $[[30,12,14],[6,3,9],[21,24,6]]$
3. $[[27,9,12],[3,0,6],[18,21,3]]$
4. $[[9,1,6],[3,0,7],[4,2,1]]$

Grade 12 Matrices CCSS: HSN-VM.C.7
If the matrix $[[6,8,9],[1,0,2],[3,6,2]]$ is multiplied by the scalar $3$, what is the result?
1. $[[9,11,12],[4,3,5],[8,9,5]]$
2. $[[18,24,27],[3,0,6],[9,18,6]]$
3. $[[3,5,6],[-2,-3,-1],[0,3,-1]]$
4. $[[9,8,6],[2,0,1],[2,6,3]]$

Grade 12 Matrices CCSS: HSN-VM.C.7
If the matrix $[[2,9,8],[0,3,4],[1,11,3]]$ is multiplied by the scalar $3$, what is the result?
1. $[[2/3,3,8/3],[0,1,4/3],[1/3,11/3,1]]$
2. $[[6,0,3],[27,9,33],[24,16,12]]$
3. $[[5,12,11],[3,6,7],[4,14,7]]$
4. $[[6,27,24],[0,9,12],[3,33,9]]$

Grade 12 Matrices CCSS: HSN-VM.C.7
If the matrix $[[2,8,12],[3,4,0],[8,10,4]]$ is multiplied by the scalar $1/2$, what is the result?
1. $[[4,16,24],[6,8,0],[16,20,8]]$
2. $[[12,8,2],[0,4,3],[4,10,8]]$
3. $[[4,6,16],[16,8,20],[24,0,8]]$
4. $[[1,4,6],[1.5,2,0],[4,5,2]]$
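Scalar multiplication of a matrix, as in the questions above, simply multiplies every entry by the scalar; a short illustration:

```javascript
// Multiply every entry of matrix m by the scalar k.
function scale(k, m) {
  return m.map(function (row) {
    return row.map(function (v) { return k * v; });
  });
}

console.log(JSON.stringify(scale(5, [[2, 9, 8], [0, 3, 4], [1, 11, 3]])));
// [[10,45,40],[0,15,20],[5,55,15]]  (choice 2 in the first question)
```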
\section{Introduction} Since the seminal work of Shannon \cite{shannon48}, there have been huge advancements in coding and information theory. The fundamental limits and efficient coding solutions approaching these limits are now known for many communication channels. However, in the vast majority of coding schemes invented, it is assumed that the receiver is perfectly synchronized with the transmitter, i.e., the symbol arrival times are known at the receiver. In most communication systems, however, achieving perfect synchronization is not possible even with the existence of timing recovery systems. When perfect synchronization does not exist, random symbol insertions and deletions (synchronization errors) occur in the received sequence. This phenomenon poses a great challenge for error correction. Since the positions of the inserted/deleted symbols are unknown at the receiver, even a single uncorrected insertion/deletion can result in a catastrophic burst of errors. Thus, conventional error-correcting codes fail at these situations. Error-correcting codes designed for dealing with such insertion/deletion (I/D) channels are called {\it synchronization codes}. Synchronization codes have a long history but their design and analysis has proven to be extremely challenging, hence few practical results exist in the literature. {\color{black}Moreover, standard approaches do not lead to finding the optimal codebooks or tight bounds on the capacity of I/D channels and finding their capacity is still an open problem \cite{mitzenmacher09}.} The first synchronization code was proposed by Sellers in 1962 \cite{sellers62}. He inserted {\it marker} sequences in the transmitted bitstream to achieve synchronization. Long markers allowed the decoder to correct multiple insertion or deletion errors but greatly increased the overhead. 
In 1966, using number-theoretic techniques, Levenshtein constructed binary codes capable of correcting a single insertion or deletion assuming that the codeword boundaries were known at the decoder \cite{levenshtein66}. Most subsequent work were inspired by the number-theoretic methods used by Levenshtein, e.g., see \cite{tenegolts76,helberg93,saowapa00,helberg02}. Unfortunately, these constructions either cannot be generalized to correct multiple synchronization errors without a significant loss in rate, do not scale well for large block lengths, or lack practical and efficient encoding or decoding algorithms. Some authors also generalized these number-theoretic methods to non-binary alphabets and constructed non-binary synchronization codes \cite{calabi69,tanaka76,tenegolts84,levenshtein92,levenshtein02}. Following \cite{levenshtein92}, \emph{perfect} deletion-correcting codes were studied and constructed using combinatorial approaches \cite{yin01,klein04,wang06}. Most of these codes, however, are constructed using ad hoc techniques and no practical encoding and decoding algorithm is provided. Non-binary low-density parity-check (LDPC) codes decoded by a verification-based decoding algorithm are designed for deletion channels in \cite{mitzenmacher06}. Unfortunately, the decoding complexity of this construction is also far from being practical. The drawback of all the above-mentioned synchronization codes is that they only work under very stringent synchronization and noise restrictions such as working only on deletion channels, or a single synchronization error per block. Coding methods proposed for error-correction on the I/D channels working under more general conditions are usually based on concatenated coding schemes with two layers of codes, i.e., an inner and an outer code \cite{schulman99,davey01,chen03,ratzer05,buttigieg11}. 
The inner code identifies the positions of the synchronization errors and the outer code is responsible for correcting the insertions, deletions, and substitution errors as well as misidentified synchronization errors. In the seminal work of Davey and MacKay \cite{davey01}, a practical concatenated coding method is presented for error-correction on general binary I/D channels. They have called their inner code, a \emph{watermark} code. The main idea is to provide a carrier signal or watermark for the outer code. The synchronization errors are inferred by the outer code via identifying discontinuities in the carrier signal. One of the advantages of watermark codes is that the decoder does not need to know the block boundaries of the received sequence. However, due to the use of a sparsifier, rate loss is significant. The watermark is substituted by fixed and pseudo-random markers in \cite{ratzer05} and is shown that it allows better rates but is only able to outperform the watermark codes at low synchronization error rates. {\color{black}Also, it has recently been shown that the performance of watermark codes can be improved by using symbol-level decoding instead of bit-level decoding \cite{briffa10,wang11}}. In this work, we consider the problem of \textcolor{black}{devising an efficient coding method for} reliable communication over non-binary I/D channels. On these channels, synchronization errors occur at the symbol level, i.e., symbols are randomly inserted in and deleted from the received sequence. We also assume that all symbols are corrupted by additive white Gaussian noise (AWGN). {\color{black}The use of this channel model is motivated by the fact that at the receiver the received continuous waveform is first sampled at certain time instances to produce the discrete symbol sequence required by the decoder. 
If the symbol arrival times are not perfectly known at the receiver, i.e., there is timing mismatch, some of the transmitted symbols are not sampled at all (symbol deletions) or sampled multiple times (symbol insertions) \cite{barry_tutorial}. As a result, this channel model can be used to represent non-binary communications over the AWGN channel suffering from timing mismatch. Most communication systems use non-binary signalling, where synchronization errors can result in insertion/deletion at the symbol level.} For the proposed channel model, we utilize the inherent redundancy that can be achieved in non-binary symbol sets by first expanding the symbol set and then allocating part of the bits associated with each symbol to watermark symbols. As a result, not all the available bits in the signal constellation are used for the transmission of information bits. In its simplest form, our solution can be viewed as a communication system using two different signal sets. The system switches between these two signal sets according to a binary watermark sequence. Since the watermark sequence is known both at the transmitter and the receiver, probabilistic decoding can be used to infer the insertions and deletions that occurred and to remove the effect of additive noise. In particular, the system is modeled by a hidden Markov model (HMM) \cite{rabiner86} and the forward-backward algorithm \cite{bahl74} is used for decoding. Our proposed scheme resembles trellis coded modulation (TCM) \cite{ungerboeck82}. The main idea in both methods is to add redundancy by expanding the symbol set and limit the symbol transitions in a controlled manner. The proposed method is also closely related to the watermark codes of \cite{davey01}. In both methods, decoding is done by the aid of a watermark sequence which both the transmitter and receiver agree on. The difference is that the extra degree of freedom in non-binary sets allows us to separate information from the watermark. 
Our proposed solution leads to significant system ability to detect and correct synchronization errors. For example, a rate $1/4$ binary outer code is capable of correcting about $2,900$ insertion/deletion errors per block of $10,012$ symbols \textcolor{black}{even when block boundaries are unknown at the receiver}. This paper is organized as follows. In Sections \ref{Sec:model_approach} and \ref{sec:system_model}, we state our proposed approach and describe the system model. Section \ref{Sec:error_analysis} demonstrates the capabilities of the proposed solution by providing numerical results and discussions. Section~\ref{sec:increasing_achievable_rates} describes ways to increase the achievable information rates on the channel and Section~\ref{sec:complexity} analyzes the system in terms of complexity, and practical considerations. Finally, Section \ref{Sec:conclusion} concludes the paper. \section{Channel model and the proposed approach} \label{Sec:model_approach} Throughout this paper, scalar quantities are shown by lower case symbols, complex quantities by boldface letters, and vectors by underlined symbols. \subsection{Channel model} The channel model we consider in this work is a non-binary I/D channel with AWGN where insertions and deletions occur at the symbol level. Similar to \cite{davey01}, it is assumed that the symbols from the input sequence $\ubx$ first enter a queue before being transmitted. Then at each channel use, either a random symbol is inserted in the symbol sequence $\ubx'$ with probability $p_\mathrm{i}$, the next queued symbol is deleted with probability $p_\mathrm{d}$, or the next queued symbol is transmitted (put as the next symbol in $\ubx'$) with probability $p_\mathrm{t}=1-p_\mathrm{d}-p_\mathrm{i}$. For computational purposes, we assume that the maximum number of insertions which can occur at each channel use is $I$. The resulting symbol sequence $\ubx'$ is finally affected by an i.i.d. 
sequence of AWGN $\ubz$ where $\bz\sim\mathcal{CN}(0,2\sigma^2)$ and $\uby=\ubx'+\ubz$ is received at the receiver side. {\color{black}Note that in this paper, to show the capabilities of the proposed method, we consider completely random and independent symbol insertions/deletions. When insertions/deletions result from imperfect synchronization, they tend to be correlated. Such cases lead to easier identification of insertions/deletions at the receiver compared to the random independent insertions/deletions considered here.} \subsection{Proposed approach} Now, consider a communication system working on this channel by employing $M$-ary signalling (e.g., $M$-ary phase-shift keying (PSK)). We call this the \emph{base} system. Motivated by the idea of watermark codes \cite{davey01}, we are interested in embedding a watermark in the transmitted sequence. The watermark, being known at the receiver, allows the decoder to deduce the insertions and deletions and to recover the transmitted sequence. {\color{black}The watermark can be embedded in the transmitted sequence in many ways. One way of doing this is to add the watermark to the information sequence and treat the information sequence as additive noise at the receiver. This is a direct extension of the binary watermark codes of \cite{davey01} to non-binary signalling. In particular, the additive watermark $\uw$ can be defined as a sequence of $M$-ary symbols drawn from the base system constellation. The binary information sequence is first passed through a sparsifier; every $k$ bits of the information sequence are converted into an $n$-tuple of $M$-ary symbols. The rate of the sparsifier is then given by $r_\mathrm{s}=k/n$ where $0<r_\mathrm{s}<m$ and $M=2^m$. The average density $f$ of the sparsifier is defined as the average Hamming weight of the $n$-tuples divided by $n$. The mapping used in the sparsifier is chosen so as to minimize $f$.
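For concreteness, the queue-based I/D channel with AWGN described above can be simulated in a few lines. This is a minimal sketch, not the paper's simulation code: the constellation, probabilities, and noise level are illustrative placeholders, and the cap on consecutive insertions approximates the per-use limit $I$:

```python
import numpy as np

def id_awgn_channel(x, p_i, p_d, sigma, constellation, max_ins=5, rng=None):
    """Queue-based insertion/deletion channel followed by complex AWGN.

    Each channel use inserts a random constellation symbol w.p. p_i,
    deletes the next queued symbol w.p. p_d, or transmits it w.p.
    p_t = 1 - p_i - p_d; at most `max_ins` insertions occur in a row.
    Noise is CN(0, 2*sigma^2), i.e. variance sigma^2 per real dimension.
    """
    rng = np.random.default_rng() if rng is None else rng
    out, i, run_ins = [], 0, 0
    while i < len(x):
        u = rng.random()
        if u < p_i and run_ins < max_ins:
            out.append(rng.choice(constellation))   # random symbol inserted
            run_ins += 1
        elif u < p_i + p_d:
            i += 1                                  # next queued symbol deleted
            run_ins = 0
        else:
            out.append(x[i])                        # next queued symbol transmitted
            i += 1
            run_ins = 0
    y = np.asarray(out, dtype=complex)
    noise = sigma * (rng.standard_normal(y.size) + 1j * rng.standard_normal(y.size))
    return y + noise

const = np.exp(2j * np.pi * np.arange(8) / 8)       # unit-energy 8-PSK
x = const[np.tile(np.arange(8), 50)]                # a toy 400-symbol input block
y = id_awgn_channel(x, p_i=0.01, p_d=0.01, sigma=0.1, constellation=const,
                    rng=np.random.default_rng(0))
```

The output length differs from the input length by the accumulated synchronization drift, which is exactly what the decoder described later must track.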
By defining addition as shifting over the constellation symbols, the watermark sequence could be added to the sparsified messages (denoted by $\us$) so that $\uw\oplus\us$ is sent over the channel. At the receiver, similar to \cite{davey01}, an inner decoder, which knows the watermark sequence, uses the received sequence to deduce the insertions/deletions and provides soft information for an outer code. The main drawback of this method is that the decoder is not able to distinguish between additive noise and the information symbols. This is because the information is embedded into the watermark by adding $\us$ to $\uw$. Sequence $\us$ contains both zeros and non-zero symbols. Non-zero symbols shift the watermark symbols over the constellation, similar to what additive noise does. This greatly degrades the performance of the decoder. To improve the decoding performance, $\us$ should contain as many zeros as possible, i.e., be as sparse as possible, which is equivalent to having a small $f$. A small $f$ is achieved by decreasing $r_\mathrm{s}$, which in turn directly decreases the achievable rates on the channel. Also, notice that even in the absence of additive noise, the decoder is still fooled by the shifts that occur over $\uw$ and thus misidentifies some of the insertions/deletions.} \smallskip To aid error recovery at the receiver, we are interested in an embedding method which makes the watermark as distinguishable as possible from the information sequence. This necessitates using some \emph{extra} resources (other than those used to transmit the information sequence) for transmitting the watermark sequence. These extra resources can be provided by enlarging the signal set. The extra available bits per transmission can then be used to transmit the watermark. After embedding the watermark, we refer to the system as the \emph{watermarked} system. In this work, we are mostly interested in binary watermark sequences.
As a result, to accommodate the watermark bits in each symbol, we expand the signal set size $M$ by a factor of $2$, giving rise to a $2M$-ary signalling scheme. For example, if the base system uses 4-PSK, in the watermarked system we use 8-PSK modulation. To provide a fair comparison, we set the symbol rate, information bits per symbol \textcolor{black}{(denoted by $r_\mathrm{c}$)}, and average energy of the signal constellation of the watermarked system equal to those of the base system. As a result, the spectral efficiency and the total transmitted power of the watermarked system are equal to those of the base system. In other words, no bit rate, bandwidth, or power is sacrificed as a result of embedding the watermark. {\color{black}Notice that $r_\mathrm{c}=m$ where $M=2^m$ for the base system and also for the watermarked system when a binary watermark sequence is used for each transmitted symbol. This is because in an $M$-ary base system all the $m$ available bits are dedicated to information bits. Also, in each symbol of the $2M$-ary watermarked system (with $m+1$ available bits) $m$ bits are assigned to information bits. Later, we will see that sometimes it is more efficient to use non-binary watermark sequences or to assign less than one bit per symbol on average to the watermark, giving rise to $0<r_\mathrm{c}<m+1$. These cases will be investigated in Section \ref{sec:increasing_achievable_rates}.} Expanding the signal set while fixing the average energy of the constellation leads to a reduction in the minimum distance of the constellation. Nevertheless, we show that by using the mapping described in Section \ref{Sec:modulator}, the minimum distance $d_\mathrm{min}$ between symbols corresponding to the same watermark value does not necessarily reduce. In fact, in some cases, e.g., in PSK modulation, the minimum distance does not change compared to the base system.
Thus, the noise immunity\footnote{Here, the noise immunity is measured in the absence of synchronization errors under the assumption of minimum distance decoding. As a result, the minimum distance between the signal constellation points can be used as the noise immunity measure.} of the system does not change after adding the watermark. \section{System model}\label{sec:system_model} The proposed system model is shown in Fig.~\ref{fig:system_model}. First, the binary information sequence $\ub$ is encoded by the outer code producing the coded binary sequence $\ud$ which is then broken into $m$-bit subsequences. The modulator then combines the binary watermark $w$ and the $m$-bit subsequences by a one-to-one mapping $\mu:\{0,1\}^{m+1}\rightarrow\mathcal{X}$ where $\mathcal{X}$ is the signal set of size $|\mathcal{X}|=2M=2^{m+1}$. Then $\ubx$ is sent over the channel. The received sequence $\uby$ is first decoded by the watermark decoder which provides soft information for the outer decoder in terms of log-likelihood ratios (LLRs). The LLR sequence $\ul$ is then utilized to decode the information sequence $\hat{\ub}$. \begin{figure} \centering \psfrag{b}{$\ub$} \psfrag{d}{$\ud$} \psfrag{w1}{$\uw$} \psfrag{w2}{$\uw$} \psfrag{x}{$\ubx$} \psfrag{y}{$\uby$} \psfrag{l}{$\ul$} \psfrag{bhat}{$\hbb$} \psfrag{z}{$\ubz$} \includegraphics[width=.7\columnwidth]{watermark_system2.eps}\\ \caption{The proposed system model.}\label{fig:system_model} \end{figure} \subsection{Modulator} \label{Sec:modulator} The modulator plays a key role in the proposed system. It allows embedding encoded data and watermark bits while ensuring a good minimum distance. The most important part of designing the modulator is to choose an appropriate mapping $\mu$. 
By viewing $\mu$ as $\{0,1\}\times\{0,1\}^m\rightarrow\mathcal{X}$, we first divide $\mathcal{X}$ into two disjoint subsets $\mathcal{X}^0$ and $\mathcal{X}^1$, each having $M$ signal points, corresponding to watermark bit $w=0$ and $w=1$, respectively. Thus, in the label of each signal point, one bit (which can be any of the $m+1$ bits) is dedicated to the watermark bit and the other $m$ bits correspond to the $m$-bit subsequences of $\ud$. Formally, we have \begin{equation*}\label{eq:subsets} \mathcal{X}^{w}=\left\{\bx|\bx\in\mathcal{X};\ell_\mathrm{w}(\bx)=w\right\},\quad\mathrm{for}\quad w=0,1, \end{equation*} where $\ell_{\mathrm{w}}(\bx)$ denotes the value of the bit in the label of $\bx$ dedicated to the watermark. We also define $\ell^j(\bx)$ for $j=1,2,\dots,m$ as the $j$-th non-watermark bit of the label of $\bx$. Now the question is how to choose the labeling. To maximize the noise immunity of the system, and since the watermark sequence is known at the receiver, we maximize the minimum distance between the signal points in each of $\mathcal{X}^0$ and $\mathcal{X}^1$. To do this, we first perform a one-level set partitioning \cite{ungerboeck82}, i.e., we divide $\mathcal{X}$ into two subsets with the largest minimum distance between the points in each subset. These subsets are named $\mathcal{X}^0$ and $\mathcal{X}^1$ and the watermark bit of the label is assigned accordingly. Next, by a Gray mapping \cite{caire98} of the signals in each of $\mathcal{X}^0$ and $\mathcal{X}^1$, the non-watermark bits of the label are assigned. This process is illustrated for two different signal constellations in Figs.~\ref{fig:4_8_PSK} and \ref{fig:16_32_QAM}. The Gray mapping ensures the least bit error rate in each subset \cite{caire98}.
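The distance claim behind this labeling is easy to verify numerically for the 8-PSK case. In the illustrative sketch below, one-level set partitioning yields the alternating partition of the 8-PSK points; each watermark subset is then itself a 4-PSK constellation, so its minimum distance equals that of the 4-PSK base system:

```python
import numpy as np
from itertools import combinations

pts = np.exp(2j * np.pi * np.arange(8) / 8)   # unit-energy 8-PSK
X0, X1 = pts[0::2], pts[1::2]                 # one-level set partitioning

def dmin(points):
    """Minimum pairwise Euclidean distance of a set of complex points."""
    return min(abs(a - b) for a, b in combinations(points, 2))

print(dmin(pts))            # ~0.765: full 8-PSK minimum distance
print(dmin(X0), dmin(X1))   # ~1.414 each: same d_min as the 4-PSK base system
```

Because the watermark bit is known at the receiver, only the within-subset distance matters, which is why expanding 4-PSK to 8-PSK costs nothing in noise immunity here.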
\begin{figure} \centering \subfigure[Base system] { \psfrag{$d_{min}$}{$\scriptstyle d_{\mathrm{min}}$} \label{fig:4_PSK}{\includegraphics[width=.45\columnwidth]{4PSK_modified.eps}} } \hfill \subfigure[Watermarked system] { \psfrag{$d_{min}$}{$\scriptstyle d_{\mathrm{min}}$} \label{fig:8_PSK}{\includegraphics[width=.45\columnwidth]{8PSK_modified_2.eps}} } \caption{Signal constellations and their labeling for the base system ($4$-PSK) and the watermarked system ($8$-PSK). The leftmost bit in the label of the watermarked system corresponds to the watermark bit. For both constellations $d_\mathrm{min}=\sqrt{2}$.} \label{fig:4_8_PSK} \end{figure} \begin{figure} \centering \subfigure[Base system] { \psfrag{dmin}{$\scriptstyle d_{\mathrm{min}}$} \label{fig:16_QAM}{\includegraphics[width=.45\columnwidth]{16QAM_modified.eps}} } \hfill \subfigure[Watermarked system] { \psfrag{dmin}{$\scriptstyle d_{\mathrm{min}}$} \label{fig:32_QAM}{\includegraphics[width=.45\columnwidth]{32QAM_modified3.eps}} } \caption{Signal constellations and their labeling for the base system ($16$-QAM) and the watermarked system ($32$-AM/PM). The leftmost bit in the label of the watermarked system corresponds to the watermark bit. Both constellations have unit average energy. Thus, $d_\mathrm{min}=2/\sqrt{10}=0.633$ for the base system and $d_\mathrm{min}=4/\sqrt{42}=0.617$ for the watermarked system.} \label{fig:16_32_QAM} \end{figure} The minimum distance of the constellation is now defined as \begin{equation*} d_\mathrm{min} = \min_{w}\min_{\{\bx_i,\bx_j\}\subset\mathcal{X}^w,\bx_i\neq\bx_j}||\bx_i-\bx_j||. \end{equation*} Notice that by assuming signal constellations of fixed energy, going from $M$-PSK in the base system to $2M$-PSK in the watermarked system does not change $d_\mathrm{min}$ (see Fig.~\ref{fig:4_8_PSK}). For QAM, as illustrated in Fig.~\ref{fig:16_32_QAM}, $d_\mathrm{min}$ does change because of energy adjustments but always stays very close to that of the original constellation.
For example, in Fig.~\ref{fig:16_32_QAM}, $d_\mathrm{min}$ is reduced by only $2.4\%$. A definition that proves useful in the next sections is \begin{equation*}\label{eq:labeling} \mathcal{X}_j(w_i,d_{i,j})=\left\{\bx|\bx\in\mathcal{X};\ell_\mathrm{w}(\bx)=w_i,\ell^j(\bx)=d_{i,j}\right\}, \end{equation*} where $i$ denotes the index of both the watermark bit and the $m$-bit subsequences of $\ud$ and $d_{i,j}=d_{(i-1)m+j}$ denotes the $j$-th bit of the $i$-th subsequence. Considering a watermark sequence of length $N$ and an encoded data sequence of length $mN$, we have $i=1,2,\dots,N$. Thus, $\mathcal{X}_j(u,v)$ refers to the subset of $\mathcal{X}$ where the watermark bit $w_i$ is equal to $u$ and the $j$-th data bit in the $i$-th subsequence, i.e., $d_{i,j}$, is equal to $v$. The size of this subset is $M/2$. \subsection{Watermark decoder}\label{sec:watermark_decoder} The goal of the watermark decoder is to produce LLRs for the outer decoder given $\uw$ and the received sequence $\uby$. As in \cite{davey01}, by ignoring the correlations in $\ud$, we can use an HMM to model the received sequence and then use the forward-backward algorithm \cite{rabiner86} to calculate posterior probabilities or LLRs for the outer decoder. Notice that due to the nature of the channel which introduces insertions and deletions, there will be a synchronization \emph{drift} between $\ubx$ and $\uby$. The synchronization drift at position $i$, i.e., $t_i$, is defined as the (number of insertions) $-$ (number of deletions) that have occurred in the signal stream until the $i$-th symbol, i.e., $\bx_i$, is ready for transmission\footnote{This means that if $\bx_{i-1}$ is not deleted by the channel it is received as $\by_{i-1+t_i}$.}. The drifts ${\{t_i\}}_{i=1}^N$ form the hidden states of the HMM. Each state $t_i$ takes values from \begin{equation} \mathbf{T}=\{\dots,-2,-1,0,1,2,\dots\}.
\end{equation} Thus, $t_i$ performs a random walk on $\mathbf{T}$ whose mean and variance depend on $\ppi$ and $\ppd$. To reduce the decoding complexity, as in \cite{davey01}, we limit the drift to $|t_i|\leq t_\mathrm{max}$ where $t_\mathrm{max}$ is usually chosen large enough such that it accommodates all likely drifts with high probability. For example, when $\ppi=\ppd$, $t_\mathrm{max}$ is chosen several times larger than $\sqrt{N\ppd/(1-\ppd)}$ which represents the standard deviation of the drifts over a block of size $N$. To further characterize the HMM \cite{rabiner86}, we need the state transition probabilities, i.e., $P_{ab}=P(t_{i+1}=b|t_i=a)$. Each symbol $\bx_i$ entering the channel can produce any number of symbols between $0$ and $I+1$ at the channel output. As a result, if $t_i=a$, then $t_{i+1}\in\{a-1,\dots,a+I\}$. Notice that the transition from $t_i=a$ to $t_{i+1}=b$ can occur in two ways. One is when $\bx_i$ is deleted by the channel and $(b-a+1)$ symbols are inserted by the channel. The other one is when $\bx_i$ is transmitted and $(b-a)$ symbols are inserted by the channel. In either case, $(b-a+1)$ symbols are produced at the channel output. As a result, the state transition probabilities are given by \begin{equation}\label{eq:state_transition} P_{ab}=\left\{\begin{array}{ll} \ppd & b=a-1 \\ \alpha_I\ppi\ppd+\ppt & b=a \\ \alpha_I(\ppi^{b-a+1}\ppd+\ppi^{b-a}\ppt) & a<b<a+I \\ \alpha_I\ppi^I\ppt & b=a+I \\ 0 & \mathrm{otherwise}. \end{array} \right. \end{equation} where $\alpha_I=1/(1-\ppi^I)$ is a constant normalizing the effects of maximum insertion length $I$ and ensuring that the sum of probabilities is $1$. We also need to calculate the conditional probability of producing the observation sequence $\widetilde{\uby}=\{\by_{i+a},\dots,\by_{i+b}\}$ given the transition from $t_i=a$ to $t_{i+1}=b$. As stated, this transition can occur in two ways. 
Thus, \begin{align} \nonumber Q_{ab}^i(\widetilde{\uby})&=P(\widetilde{\uby}|t_i=a,t_{i+1}=b,w_i,\mathcal{H})\\ &=\left(\alpha_I\ppi^{b-a+1}\ppd\prod_{k=i+a}^{i+b}\gamma_k+ \alpha_I\ppi^{b-a}\ppt\beta_{i+b}\prod_{k=i+a}^{i+b-1}\gamma_k\right)/P_{ab}, \end{align} where $\mathcal{H}$ denotes the set of parameters of the HMM, i.e., $\mathcal{H}=\{[P_{ab}],\mathbf{T}\}$, $\gamma_k$ is the probability of receiving $\by_k$ given that $\by_k$ is an inserted symbol, and $\beta_{i+b}$ is the probability of receiving $\by_{i+b}$ assuming that it is the result of transmitting $\bx_i\in\mathcal{X}^{w_i}$. Formally, we have \begin{equation} \gamma_k=\frac{1}{2M}\sum_{\bx\in\mathcal{X}}\frac{1}{2\pi\sigma^2} \exp{\left(-\frac{|\by_k-\bx|^2}{2\sigma^2}\right)}, \end{equation} and \begin{align} \nonumber\beta_{i+b}=P(\by_{i+b}|t_i=a,t_{i+1}=b,w_i,\mathcal{H}) =\frac{1}{M}\sum_{\bx\in\mathcal{X}^{w_i}}\frac{1}{2\pi\sigma^2} \exp{\left(-\frac{|\by_{i+b}-\bx|^2}{2\sigma^2}\right)}. \end{align} Now that the HMM is defined, we use the forward-backward algorithm to calculate LLRs. By ignoring the correlations between the bits of $\ud$ \textcolor{black}{and assuming $P(d_{i,j}=0) = P(d_{i,j}=1)$}, the bit-by-bit LLR is calculated as \begin{align}\label{eq:LLR} l_{i,j}&=\log\frac{P(d_{i,j}=0|\uby,\uw,\mathcal{H})}{P(d_{i,j}=1|\uby,\uw,\mathcal{H})}= \log\frac{P(\uby|d_{i,j}=0,\uw,\mathcal{H})}{P(\uby|d_{i,j}=1,\uw,\mathcal{H})} =\log\frac{\sum_{\bx_i\in\mathcal{X}_j(w_i,0)}P(\uby|\bx_i,\uw,\mathcal{H})} {\sum_{\bx_i\in\mathcal{X}_j(w_i,1)}P(\uby|\bx_i,\uw,\mathcal{H})}.
\end{align} By using the forward-backward algorithm, the posterior probabilities are found by \cite{rabiner86,davey01} \begin{equation}\label{eq:prob_after_LLR} P(\uby|\bx_i,\uw,\mathcal{H})=\sum_{a,b}F_i(a)\acute{Q}^i_{ab}(\widetilde{\by}|\bx_i)B_{i+1}(b) \end{equation} where the forward quantity is defined as \begin{equation} F_i(a)=P(\by_1,\dots,\by_{i-1+a},t_i=a|\uw,\mathcal{H}), \end{equation} the backward quantity as \begin{equation} B_i(b)=P(\by_{i+b},\dots|t_i=b,\uw,\mathcal{H}), \end{equation} and \begin{align} \nonumber\acute{Q}^i_{ab}(\widetilde{\by}|\bx_i)&=P(\widetilde{\by},t_{i+1}=b|t_i=a,\bx_i,\mathcal{H})\\ &=\alpha_I\ppi^{b-a+1}\ppd\prod_{k=i+a}^{i+b}\gamma_k+ \alpha_I\ppi^{b-a}\ppt\acute{\beta}_{i+b}\prod_{k=i+a}^{i+b-1}\gamma_k, \end{align} where \begin{align} \nonumber\acute{\beta}_{i+b}=P(\by_{i+b}|t_i=a,t_{i+1}=b,\bx_i,\mathcal{H}) =\frac{1}{2\pi\sigma^2}\exp{\left(-\frac{|\by_{i+b}-\bx_i|^2}{2\sigma^2}\right)}. \end{align} The forward and backward quantities are recursively computed by the forward pass \begin{equation} F_i(a)=\sum_{c\in \{a-I,\dots,a+1\}}F_{i-1}(c)P_{ca}Q^{i-1}_{ca}(\by_{i-1+c},\dots,\by_{i-1+a}), \end{equation} and the backward pass \begin{equation} B_i(b)=\sum_{c\in \{b-1,\dots,b+I\}}P_{bc}Q^i_{bc}(\by_{i+b},\dots,\by_{i+c})B_{i+1}(c). \end{equation} {\color{black}If the block boundaries are not known at the decoder, we can use the sliding window decoding technique of \cite{davey01}. Assuming a continuous stream of transmitted blocks and received symbols, the forward-backward algorithm is used to infer the block boundaries. Then the decoding window is anchored at the most likely start of the next block and the next block is decoded. Most of the results of this paper are shown using this sliding window decoding technique. We will briefly explain the methodology in Section~\ref{Sec:error_rates}. For the first transmitted block, we assume that the initial drift is zero.
Thus, we use} \begin{equation}\label{eq:initial_condition_forward} F_1(a)=\left\{\begin{array}{ll} 1 & a=0 \\ 0 & \mathrm{otherwise}. \end{array} \right. \end{equation} {\color{black}It is also possible to insert some markers which specify the block boundaries into the transmitted sequence. By dedicating all the $m+1$ bits in the symbols at the boundaries to markers, they can be easily detected at the receiver. In this case, the block boundaries can be inferred by detecting the markers. Notice that as the block length becomes larger, recognizing the block boundaries requires less overhead and becomes more efficient. To see this, first define the marker rate as the number of marker symbols in each block divided by $N$. Given a fixed marker rate, the number of marker symbols increases as $N$ grows. Increasing the number of marker symbols leads to better boundary detection at the decoder for fixed $\ppd$ and $\ppi$ because the probability of misidentifying a larger block of markers decreases. Thus, for large block lengths one can assume that the block boundaries are known at the decoder and use the following initial conditions for the backward pass of each block: \begin{equation}\label{eq:initial_condition_backward} B_N(b)=P_{bc}Q^N_{bc}(\by_{N+b},\dots) \end{equation} where $c=t_{N+1}$ is the final drift at the end of the block.} \smallskip In the next stage of decoding, the LLRs, calculated by inserting (\ref{eq:prob_after_LLR}) in (\ref{eq:LLR}), are passed to the outer decoder. \subsection{Outer code} The outer code can be almost any binary error-correcting code. Due to the exemplary performance of LDPC codes on many communication channels and their flexible structure, we choose LDPC codes in this paper. At the transmitter, a binary LDPC code of rate $R$ is used to encode the binary information sequence $\ub$ of length $mNR$, producing the binary coded sequence $\ud$ of length $mN$.
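Two building blocks of the watermark decoder above, the drift-transition probabilities of (\ref{eq:state_transition}) and the Gaussian-mixture likelihoods $\gamma$ and $\beta$, can be sanity-checked numerically. The sketch below is illustrative only (hypothetical parameter values and the 8-PSK watermark partition), not the paper's implementation:

```python
import numpy as np

# --- Transition probabilities P(t_{i+1} = a + k | t_i = a), k = -1..I ---
def transition_row(p_i, p_d, I):
    p_t = 1.0 - p_i - p_d
    alpha = 1.0 / (1.0 - p_i ** I)      # normalization for max insertions I
    row = {-1: p_d, 0: alpha * p_i * p_d + p_t}
    for k in range(1, I):               # a < b < a + I
        row[k] = alpha * (p_i ** (k + 1) * p_d + p_i ** k * p_t)
    row[I] = alpha * p_i ** I * p_t
    return row

print(sum(transition_row(0.05, 0.05, 5).values()))   # 1.0 up to rounding

# --- Likelihoods: gamma (random insertion) and beta (transmission) ---
pts = np.exp(2j * np.pi * np.arange(8) / 8)          # 8-PSK, unit energy
subsets = {0: pts[0::2], 1: pts[1::2]}               # watermark partition X^0 / X^1
sigma = 0.1

def gauss(y, x):
    return np.exp(-abs(y - x) ** 2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

def gamma(y):                                        # mixture over all 2M points
    return np.mean([gauss(y, x) for x in pts])

def beta(y, w):                                      # mixture over X^w only
    return np.mean([gauss(y, x) for x in subsets[w]])

y = subsets[0][0] + 0.05 * (1 + 1j)                  # noisy observation of an X^0 point
print(beta(y, 0) > beta(y, 1))                       # True: likelihoods separate w
```

The first check confirms that each row of the transition matrix sums to one over the reachable drifts; the second shows how the known watermark bit lets the decoder discriminate transmitted symbols from noise at moderate noise levels.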
At the receiver, the $mN$ bit LLRs of (\ref{eq:LLR}) are used to recover $\ub$. \section{Error rates and fundamental limits} \label{Sec:error_analysis} In this section, we demonstrate the capabilities of the proposed solution through examples and discussions. In particular, we evaluate the watermarked system by providing bit and word error rates (BER and WER) and maximum achievable transmission rates, and by comparing them with two benchmark systems. We demonstrate our results using the following two examples. \textbf{Example~1:} Consider a base system with $4$-PSK modulation depicted in Fig.~\ref{fig:4_PSK} which gives rise to a watermarked system with $8$-PSK modulation. The labeling $\mu$ is chosen based on the method described in Section \ref{Sec:modulator}. The constellation and labeling are depicted in Fig.~\ref{fig:8_PSK}. \textbf{Example~2:} In this example, we consider a $16$-QAM base system and a $32$-QAM watermarked system. Notice that different constellations can be considered for $32$-ary modulation. We consider a $32$-AM/PM constellation whose $d_\mathrm{min}$ is very close to that of the base system (they differ by only $2.4\%$). The constellations and their labelings are depicted in Fig.~\ref{fig:16_32_QAM}. \subsection{Error rates} \label{Sec:error_rates} First, consider Example~1. We use a $(3,6)$-regular LDPC code ($R=0.5$) of length $20,024$ constructed by the progressive edge growth (PEG) algorithm \cite{Hu05} as the outer code. The LDPC code is decoded by the sum-product algorithm \cite{richardson01design} allowing a maximum of $400$ iterations. The watermark sequence $\uw$ is chosen to be a pseudo-random binary sequence. Since $m=2$, the block length is $N=10,012$.
The maximum insertion length is chosen as $I=5$, the channel insertion and deletion probabilities are assumed equal, i.e., $\ppi=\ppd=\ppid$, {\color{black}and $t_\mathrm{max}=5\sqrt{N\ppid/(1-\ppid)}$.} {\color{black} A continuous stream of blocks of $\ub$, $\ud$, and $\ubx$ is generated and sent over the channel. A continuous sequence of $\uby$ is then received at the decoder. We assume that block boundaries are not known at the receiver. Thus, $\uby$ is decoded by the forward-backward algorithm using a sliding window decoding technique \cite{davey01}. For the first block, we assume that the receiver knows the starting position, i.e., we use (\ref{eq:initial_condition_forward}) to initialize the forward pass. For subsequent blocks, the watermark decoder is responsible for inferring the boundaries and calculating LLRs for the outer code. This is done by first running the forward pass several multiples of $t_\mathrm{max}$ (here, six) beyond the expected position of the block boundary and initializing the backward pass from these last calculated forward quantities. Then the most likely drift at the end of each block is found as $\hat t_{N+1} = \arg\max_aF_{N+1}(a)B_{N+1}(a)$ and is used to slide the decoding window to the most likely start of the next block. Occasionally, the watermark decoder makes errors in identifying the block boundaries. If these errors accumulate, synchronization is lost and successive blocks fail to be decoded successfully. To protect against such gross synchronization loss, we use the re-synchronization technique of \cite{davey01}, whose details are omitted in the interest of space.} We simulate the system under different SNRs and insertion/deletion probabilities. The BER and WER of the system are plotted in Fig.~\ref{fig:BER_vs_pd} versus different values of $\ppid$ under fixed SNRs.
For example, at SNR$=\!10$~dB, the system is able to recover on average {\color{black}$1,400$} symbol insertions/deletions per block of $10,012$ symbols with an average BER less than $10^{-5}$. This increases to recovering about {\color{black}$1,920$} insertions/deletions at SNR$=\!20$~dB. Fig.~\ref{fig:BER_3_6_vs_SNR} shows the performance of the system versus SNR under fixed $\ppid$. \begin{figure} \centering \includegraphics[width=.99\columnwidth]{BER_3_6_vs_pd.eps}\\ \caption{BER and WER of the $8$-PSK watermarked system employing a (3,6)-regular LDPC code of length $20,024$ versus $\ppid$ at fixed SNRs.}\label{fig:BER_vs_pd} \end{figure} \begin{figure} \centering \includegraphics[width=.99\columnwidth]{BER_3_6_vs_noise.eps}\\ \caption{BER and WER of the $8$-PSK watermarked system employing a (3,6)-regular LDPC code of length $20,024$ versus SNR at fixed values of $\ppid$.}\label{fig:BER_3_6_vs_SNR} \end{figure} We also simulate the system under a $(3,4)$-regular LDPC code ($R=0.25$) of length $20,024$ with the same parameters. The system is now capable of recovering on average {\color{black}$2,700$} insertions/deletions per block of $10,012$ symbols at SNR$=\!20$~dB with an average BER$<10^{-5}$. This increases to {\color{black}$2,900$} insertions/deletions using an optimized irregular LDPC code of the same rate and length with degree distributions reported in Table~\ref{Table} (Code~1). We will briefly describe the LDPC optimization method in the next section. {\color{black}We are not aware of any practical coding method in the literature that can be directly and fairly compared to our proposed system. However, we provide comparisons with the best results of \cite{davey01,ratzer05,briffa10,wang11}. It is worth mentioning that this comparison is not completely fair as the I/D channel considered in these works is binary whereas ours is non-binary.
All in all, we believe that this comparison provides insight into what can be achieved by exploiting the extra degrees of freedom provided by non-binary signalling. This comparison is depicted in Fig.~\ref{fig:block_error_comparison}. To make the comparison as fair as possible, we have adjusted the block size and the rate of our $8$-PSK watermarked system according to the parameters of the codes considered in the comparison. It is evident from Fig.~\ref{fig:block_error_comparison} that a significant improvement in the error correction performance is achieved by using non-binary signalling. There is also a significant performance improvement compared to the method of \cite{wang11}, which considers marker codes with iterative exchange of information between the inner and outer decoders. Marker codes concatenated with optimized irregular LDPC outer codes with overall rates around $0.4$ and block length $5,000$ have been reported in \cite{wang11}, which can reliably work under $\ppid<0.04$. As Fig.~\ref{fig:block_error_comparison} shows, a regular half-rate code with block length $4,002$ can do much better in our case even without iterative exchange of information.} \begin{figure} \centering \includegraphics[width=.75\columnwidth]{block_error_comparison.eps}\\ \caption{WER comparison of the $8$-PSK watermarked system with the best results of \cite{davey01,ratzer05,briffa10}. Codes D, F, and H are binary watermark codes from \cite{davey01} with overall rates $0.71$, $0.50$, $3/14$, overall block lengths $4,995$, $4,002$, $4,662$, and outer LDPC codes defined over $\mathrm{GF(16)}$, $\mathrm{GF(16)}$, and $\mathrm{GF(8)}$, respectively. Codes B and E are binary marker codes from \cite{ratzer05} with overall rates $0.71$ and $0.50$ and overall block lengths $4,995$ and $4,000$ with binary LDPC codes as outer codes. Codes D and F are also decoded by the symbol-level decoding method of \cite{briffa10}.
All these codes are decoded on the binary I/D channel with no substitution errors or additive noise. For the non-binary I/D channel with $8$-PSK signalling in Example~1, we have done sliding window decoding at SNR$=\!20$~dB for three different LDPC codes with variable node degree $3$ and rates $0.71$, $0.50$, $3/14$ and block lengths $4,996$, $4,002$, $4,662$, respectively.}\label{fig:block_error_comparison} \end{figure} \subsection{Achievable information rates} {\color{black}To obtain the capacity of the I/D channel, one is interested in calculating \cite{dobrushin67} \begin{equation} \label{eq:ultimate_capacity} C=\lim_{N\rightarrow\infty}\frac{1}{N}\sup_{P(\ubx)} I(\ubx;\uby), \end{equation} where $\ubx$ is the channel input sequence of length $N$, $\uby$ is the received sequence of random length, and $P(\ubx)$ denotes the joint distribution of the input sequence. Unfortunately, due to the presence of insertions/deletions, finding (\ref{eq:ultimate_capacity}) or its bounds has proven to be extremely challenging and the capacity is unknown. No single-letter characterization of the mutual information exists either. Most of the results in the literature focus on sub-optimal decoding or more constrained channel models (such as the deletion-only channel) and provide bounds on the capacity \cite{gallager61, zigangirov69, diggavi06, drinea07}. Most of these bounds, however, are derived for binary I/D channels and binary synchronization codes and either cannot be extended to the non-binary I/D channel or become computationally intensive, such as the bounds in \cite{fertonani11}. A trellis-based approach is developed in \cite{hu10} to obtain achievable information rates for binary I/D channels with AWGN and inter-symbol interference under i.i.d. inputs (uniform $P(\ubx)$). This approach, which mainly uses the forward pass of the forward-backward algorithm, can be extended to i.i.d. non-binary inputs and thus to our channel model.
We will use this method to find lower bounds on the capacity of the channel, i.e., $C_{\mathrm{i.u.d.}}$, and compare the achievable rates of our watermarked system with this lower bound. There also exist bounds on the performance of $q$-ary synchronization codes \cite{levenshtein02} which we will use in the comparisons.} To obtain the achievable rates of our watermarked system, we calculate the maximum average per-symbol mutual information. In particular, we obtain an estimate of the average mutual information between $\uby$ and $\ubx$ given $\uw$. Assuming that $\ubx$ is a sequence of i.i.d. symbols, the average per-symbol mutual information is given by $\frac{1}{N}\sum_{i=1}^NI(\bx_i;\uby|\uw)$ where \begin{align} \nonumber I(\bx_i;\uby|\uw)=&\; H(\bx_i|w_i)-H(\bx_i|\uby,w_i)\\ =& \;r_\mathrm{c}-E_{\uby}\left[-\sum_{\bx_i\in\mathcal{X}^{w_i}}\left(\frac{P(\uby|\bx_i,w_i)}{\sum_{\bx_i\in\mathcal{X}^{w_i}}P(\uby|\bx_i,w_i)} \log_2\left(\frac{P(\uby|\bx_i,w_i)}{\sum_{\bx_i\in\mathcal{X}^{w_i}}P(\uby|\bx_i,w_i)}\right)\right)\right],\label{eq:mutual_infor} \end{align} and the conditional probabilities are given by the watermark decoder. While it is not possible to do an exact calculation of the expectation, it is possible to calculate it numerically by Monte Carlo simulation. Then, (\ref{eq:mutual_infor}) can be used to find an estimate of the achievable rates of the watermarked system and a lower bound on the capacity of the channel. It should be noted that under large block lengths $N$, the variance of (\ref{eq:mutual_infor}) under Monte Carlo simulation is usually very small. Thus, it converges to the average very quickly. Here, our results are averaged over $100$ blocks. Using (\ref{eq:mutual_infor}) and assuming known block boundaries, the achievable information rates of the watermarked system of Example~1 are plotted versus SNR in Fig.~\ref{fig:entropy_8PSK}. We also compare the achievable rates with those of the two benchmark systems.
One is the base system ($4$-PSK), which has the same $d_\mathrm{min}$, and the other is a system that has the same number of constellation points ($8$-PSK) as the watermarked system but is not watermarked, i.e., all the $m+1$ bits are dedicated to information bits. Both of these systems use Gray mapping and are decoded by the forward-backward algorithm described in Section \ref{sec:watermark_decoder} with the exception that there is no watermark. The number of symbols per block is kept fixed at $10,012$ for all three systems so that the average number of symbols corrupted by insertions/deletions remains the same. {{\color{black}Notice that $r_\mathrm{c}=2.0$ for the base and the watermarked system and $r_\mathrm{c}=3.0$ for the $8$-PSK system with no watermark.}} \begin{figure} \centering \includegraphics[width=.99\columnwidth]{mutual_info_8PSK.eps}\\ \caption{Maximum achievable information rates (bits per channel use) versus SNR under different modulations assuming a $4$-PSK base system ($r_\mathrm{c}=2.0$). The $8$-PSK watermarked system mentioned in Section~\ref{Sec:modulator} has $r_\mathrm{c}=2.0$. The maximum achievable rates given by (\ref{eq:mutual_infor}) under $8$-PSK modulation with no watermark ($r_\mathrm{c}=3.0$), and under two $8$-PSK watermarked systems with partial watermarking ($r_\mathrm{c}=2.3$ and $2.8$) mentioned in Section~\ref{sec:increasing_achievable_rates}, are plotted for comparison. Also, $C_\mathrm{i.u.d.}$ is plotted for the $8$-PSK modulation.}\label{fig:entropy_8PSK} \end{figure} The dashed curves in Fig.~\ref{fig:entropy_8PSK} correspond to the maximum achievable information rates of the three systems when $\ppid=0$. In this case, it is clear that the watermarked and the base system achieve the same rates but the $8$-PSK system with no watermark achieves higher rates.
At $\ppid=0.01$, however, the watermarked system performs much better than the two benchmark systems in terms of the maximum achievable rates (by using (\ref{eq:mutual_infor})) on the channel. This is of course not very surprising since no watermark is used in the benchmark systems and their only source of protection against insertions/deletions comes from the fact that they are decoded by the forward-backward algorithm. {\color{black}The figure also depicts $C_\mathrm{i.u.d.}$ under $8$-PSK signalling at $\ppid=0.01$. Comparing this curve with the results given by (\ref{eq:mutual_infor}) shows how far the achievable rates of our watermarked system are from the maximum achievable rates on the channel using the same constellation under i.i.d. inputs. Although this gap is not small, we are not aware of any results in the literature that can approach $C_\mathrm{i.u.d.}$. This gap can be made smaller by the method we show in Section~\ref{sec:increasing_achievable_rates}.} We also compare our system with that of \cite{davey01} in terms of the achievable rates as viewed by the outer code. In particular, we calculate the average $I(d_{i,j};l_{i,j})$, which is a number between $0$ and $1$, for both systems. This is done by Monte Carlo simulation, using the LLRs produced by the watermark decoder, i.e., using (\ref{eq:LLR}). This comparison is depicted in Fig.~\ref{fig:compare_with_davey}. The achievable rates seen by the outer code from \cite{davey01} are compared to those of the watermarked $8$-PSK system at SNR$=\!20$~dB. The rates of \cite{davey01} \textcolor{black}{are given for three binary watermark codes with sparsifier rates of $3/7$, $4/6$, and $4/5$ and} are calculated assuming no substitution error on the channel, which is similar to the high SNR case on our channel. As depicted, the achievable rates of the proposed watermarked $8$-PSK system are much higher than those of \cite{davey01}.
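Both Monte Carlo estimates above, (\ref{eq:mutual_infor}) and the average $I(d_{i,j};l_{i,j})$, share the same structure: average the posterior entropy over simulated channel outputs and subtract it from the input entropy. As an illustration only, the sketch below applies this estimator to a hypothetical stand-in channel (equiprobable BPSK over AWGN, where the posteriors have a closed form); in the paper the posteriors are instead supplied by the watermark decoder.

```python
import numpy as np

# Illustrative Monte Carlo mutual-information estimate with the structure
# I = H(X) - E_y[ H(X|Y=y) ].  Stand-in channel: equiprobable BPSK (+1/-1)
# over AWGN; NOT the watermark decoder of the paper.
def mc_mutual_info(snr_db, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(10.0 ** (-snr_db / 10.0))    # noise std for unit-power signal
    x = rng.choice([-1.0, 1.0], size=n_samples)  # i.i.d. equiprobable inputs
    y = x + sigma * rng.standard_normal(n_samples)
    llr = 2.0 * y / sigma**2                     # log P(+1|y) / P(-1|y)
    p = 1.0 / (1.0 + np.exp(-llr))               # posterior P(x = +1 | y)
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    h_cond = np.mean(-(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p)))
    return 1.0 - h_cond                          # H(X) = 1 bit for BPSK

rate = mc_mutual_info(10.0)  # close to 1 bit per channel use at high SNR
```

As in the paper, the estimate is simply a sample average over channel realizations, so its accuracy improves with the number of simulated samples (or blocks).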
\begin{figure} \centering \includegraphics[width=.99\columnwidth]{compare_with_davey.eps}\\ \caption{Maximum achievable information rates as viewed by the outer code $I(d_{i,j};l_{i,j})$ for the $8$-PSK watermarked system at high SNR are compared with the obtained rates of \cite{davey01}. The rates of \cite{davey01} are calculated assuming no substitution errors. Also, the maximum $\ppid$ that the $8$-PSK watermarked system can tolerate with BER less than $10^{-5}$ is indicated for the three optimized irregular LDPC codes of rates $0.25$, $0.50$, and $0.75$ and three regular LDPC codes.}\label{fig:compare_with_davey} \end{figure} Given the success of LDPC codes on many channels, we expect that the information rates of Fig.~\ref{fig:compare_with_davey} can be approached with carefully designed irregular LDPC codes of large block lengths. To demonstrate this, we have optimized the degree distributions of irregular LDPC codes of rates $0.25$, $0.50$, and $0.75$, and constructed codes of length $20,024$. The optimization process is done by the conventional numerical LDPC optimization methods in the literature (e.g., see \cite{chung01gaussian}). {\color{black} These optimization techniques usually use the pdf of the LLRs. On most channels, this LLR pdf can be calculated analytically. However, this cannot be done in our case due to the nature of the channel. Thus, Monte Carlo simulation is used to find estimates of the LLR pdf. Given the channel parameters, this is done by simulating a large number of channel realizations, calculating LLRs using (\ref{eq:LLR}), and finally computing the average LLR pdf (probability mass function to be more precise). Next, the rate of the code is maximized by optimizing its degree distributions using the computed LLR pdf. The optimized degree distributions are given in Table~\ref{Table}.} {\color{black}After optimizing the degree distributions, the parity-check matrices of the codes are constructed by the PEG algorithm \cite{Hu05}}.
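A minimal sketch of the Monte Carlo LLR-pmf estimation described above, simulating many channel realizations, computing the LLRs, and averaging their histograms, is given below. The channel and LLR model here is a simplified stand-in (the true LLRs come from the watermark decoder via (\ref{eq:LLR})).

```python
import numpy as np

# Sketch of empirical LLR-pmf estimation: simulate many channel realizations,
# compute per-bit LLRs, and accumulate them into a normalized histogram.
# Stand-in model: all-zero codeword sent as +1 symbols over AWGN; in the paper
# the LLRs are watermark-decoder outputs.
def empirical_llr_pmf(sigma=0.8, n_blocks=100, block_len=1000, seed=1):
    rng = np.random.default_rng(seed)
    bins = np.linspace(-20.0, 20.0, 201)
    counts = np.zeros(len(bins) - 1)
    for _ in range(n_blocks):                 # many channel realizations
        x = np.ones(block_len)                # transmitted +1 symbols
        y = x + sigma * rng.standard_normal(block_len)
        counts += np.histogram(2.0 * y / sigma**2, bins=bins)[0]
    pmf = counts / counts.sum()               # average pmf over all blocks
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, pmf

centers, pmf = empirical_llr_pmf()
mean_llr = float((centers * pmf).sum())  # positive: LLRs favor the sent bit
```

The resulting pmf plays the role of the channel's LLR density in the numerical degree-distribution optimization.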
Finally, the codes are simulated on the channel at high SNR by assuming known block boundaries with the rest of the parameters being the same as in Example~1. Fig.~\ref{fig:compare_with_davey} shows the maximum $\ppid$ under which the constructed irregular LDPC codes perform with BER less than $10^{-5}$. It is seen that these practically achievable rates are not far from the maximum achievable rates given by $I(d_{i,j};l_{i,j})$. Also depicted in Fig.~\ref{fig:compare_with_davey} are the results for three regular LDPC codes of the same length, i.e., $(3,4)$-regular, $(3,6)$-regular, and $(3,12)$-regular LDPC codes with rates $0.25$, $0.50$, and $0.75$, respectively. \begin{table}[t] \caption{Variable and check node degree distributions for the optimized irregular LDPC codes; all results are obtained assuming a maximum variable node degree of $30$} \label{Table} \begin{center} \scriptsize \begin{tabular}{|c|c|c|c|} \hline & Code~1 & Code~2 & Code~3\\ \hline $\lambda_2$ & 0.2793 & 0.1920 & 0.2562 \\ \hline $\lambda_3$ & 0.2648 & 0.2480 & 0.3334\\ \hline $\lambda_4$ & & 0.0064 & 0.0010\\ \hline $\lambda_5$ & & & 0.0022\\ \hline $\lambda_6$ & 0.0173 & 0.0171 & 0.3621\\ \hline $\lambda_7$ & 0.0575 & 0.0709 & 0.0434\\ \hline $\lambda_8$ & 0.0938 & 0.1223 & 0.0017\\ \hline $\lambda_9$ & 0.0279 & 0.0278 & \\ \hline $\lambda_{10}$ & 0.0528 & 0.0302 & \\ \hline $\lambda_{18}$ & & 0.0413 & \\ \hline $\lambda_{26}$ & 0.0494 & 0.0302 &\\ \hline $\lambda_{27}$ & 0.0126 & 0.0137 &\\ \hline $\lambda_{28}$ & 0.0179 & 0.0199 &\\ \hline $\lambda_{29}$ & 0.0303 & 0.0358 &\\ \hline $\lambda_{30}$ & 0.0964 & 0.1507 & \\ \hline \hline $\rho_5$ & 1.0000& & \\ \hline $\rho_9$ & & 1.0000 & \\ \hline $\rho_{13}$ & & & 1.0000\\ \hline \hline Rate & 0.25 & 0.50 & 0.75 \\ \hline \end{tabular} \end{center} \end{table} Now, consider Example~2.
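As a quick consistency check, the rates in Table~\ref{Table} follow from the listed edge-perspective degree distributions through the standard design-rate formula $R=1-\big(\sum_j \rho_j/j\big)/\big(\sum_i \lambda_i/i\big)$; the short sketch below verifies this for the Code~1 column.

```python
# Design-rate cross-check for Code 1 (rate 0.25) from the table:
# R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i)  for edge-perspective
# degree distributions lambda(x), rho(x).
lam = {2: 0.2793, 3: 0.2648, 6: 0.0173, 7: 0.0575, 8: 0.0938, 9: 0.0279,
       10: 0.0528, 26: 0.0494, 27: 0.0126, 28: 0.0179, 29: 0.0303, 30: 0.0964}
rho = {5: 1.0}

def design_rate(lam, rho):
    check_edges = sum(r / j for j, r in rho.items())  # integral of rho(x)
    var_edges = sum(l / i for i, l in lam.items())    # integral of lambda(x)
    return 1.0 - check_edges / var_edges

rate = design_rate(lam, rho)  # approximately 0.25
```

The same formula reproduces the tabulated rates of Codes 2 and 3 from their $\lambda_i$ and $\rho_j$ entries.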
Using the same method, the maximum achievable information rates of the watermarked system (by using (\ref{eq:mutual_infor})) are compared to those of the two benchmark systems (Gray labeled $16$-QAM in Fig.~\ref{fig:16_QAM} and quasi-Gray $32$-QAM) and $C_\mathrm{i.u.d.}$ in Fig.~\ref{fig:entropy_32QAM}. The block size is again kept fixed at $10,012$ symbols. The same discussion as in Example~1 applies to this case as well. \begin{figure} \centering \includegraphics[width=.99\columnwidth]{mutual_info_32AMPM.eps}\\ \caption{Maximum achievable information rates (bits per channel use) versus SNR under different modulations assuming a $16$-QAM base system ($r_\mathrm{c}=4.0$). The $32$-AM/PM watermarked system mentioned in Section \ref{Sec:modulator} has $r_\mathrm{c}=4.0$. The maximum achievable rates given by (\ref{eq:mutual_infor}) under quasi-Gray $32$-QAM modulation with no watermark ($r_\mathrm{c}=5.0$), and under two $32$-AM/PM watermarked systems with partial watermarking ($r_\mathrm{c}=4.3$ and $4.8$) mentioned in Section~\ref{sec:increasing_achievable_rates} are plotted for comparison. Also, $C_\mathrm{i.u.d.}$ is plotted for the $32$-QAM modulation.}\label{fig:entropy_32QAM} \end{figure} We also compare the maximum achievable information rates with the bounds available for $q$-ary synchronization codes. We use the asymptotic bounds for $q$-ary codes of \cite{levenshtein02} to compare with our scheme. {\color{black} These bounds are obtained by considering the Levenshtein distance \cite{levenshtein66} between $q$-ary codewords and enumerating the maximum size of codes capable of correcting insertions/deletions with \emph{zero} error probability}. Since these bounds do not consider substitution errors or additive noise, we compare them with the achievable rates of our scheme in the high SNR region.
{\color{black}Fig.~\ref{fig:bounds} compares the upper and lower bounds of $q$-ary codes for $q=8,32$, with the achievable information rates of our $8$-PSK and $32$-AM/PM schemes using (\ref{eq:mutual_infor}), and also $C_\mathrm{i.u.d.}$. Notice that $C_\mathrm{i.u.d.}$ and our achievable information rates in some regions exceed the upper bound of \cite{levenshtein02}. This is due to the fact that the upper bounds are computed assuming zero error probability whereas $C_\mathrm{i.u.d.}$ and our achievable rates given by (\ref{eq:mutual_infor}) are computed assuming asymptotically vanishing error probabilities; the two are thus computed in different scenarios. As seen in the figure, the achievable rates of our watermarked system are below $C_\mathrm{i.u.d.}$ and in some regions below the $q$-ary upper bound. Nevertheless, no code exists in the literature which can approach these limits. At small $\ppid$, the $q$-ary codes can theoretically achieve higher information rates than our scheme. This suggests that it is not efficient to dedicate one whole bit to the watermark when the number of synchronization errors is small. We will show in the next section how the information rates can be increased.} \color{black} \begin{figure} \centering \includegraphics[width=.99\columnwidth]{levenshtein_bounds.eps}\\ \caption{Maximum achievable information rates of the watermarked systems at high SNR are compared to the upper and lower bounds of $q$-ary insertion/deletion correcting codes \cite{levenshtein02} and the achievable rates $C_\mathrm{i.u.d.}$. These rates are plotted for the $8$-PSK and $32$-AM/PM watermarked systems in two cases each. First, using a binary watermark and assigning one bit to the watermark in each symbol ($r_\mathrm{c}=2.0$ and $r_\mathrm{c}=4.0$ for $8$-PSK and $32$-AM/PM, respectively).
Second, the achievable rates are maximized by optimizing $q_\mathrm{w}$ and $r_\mathrm{c}$ at each point.}\label{fig:bounds} \end{figure} \section{Increasing the achievable information rates}\label{sec:increasing_achievable_rates} In this section, we show how the maximum achievable rates of the watermarked system can be improved. {\color{black}We defined $r_\mathrm{c}$ to be the average number of bits assigned to the information bits (more precisely, coded bits) per transmitted symbol. Until now, we considered cases where one bit was assigned to the binary watermark in each of the transmitted symbols. For example, for the $4$-PSK base system and the $8$-PSK watermarked system discussed in Example~1, $r_\mathrm{c}=2.0$ bits. It is also possible to embed watermark bits into only some of the symbols but not all of them. For $8$-PSK, this means $2.0<r_\mathrm{c}<3.0$. We use Gray mapping for those symbols which are not watermarked. Also, we scatter the symbols which contain watermark bits uniformly over the transmitted block.} It is evident that there is a trade-off between $r_\mathrm{c}$ and the system's ability to recover from synchronization errors. Increasing $r_\mathrm{c}$ potentially lets more information pass through the channel but at the same time increases the system's vulnerability to synchronization errors since fewer bits are assigned to the watermark. As a result, for a fixed signal set, there exists an optimum $r_\mathrm{c}$ for each $\ppid$ and SNR which provides the largest transmission rate on the channel. For example, consider the system of Example~1. At high SNR and $\ppid=0.01$, the maximum achievable rate is $1.945$ bits per channel use when $r_\mathrm{c}=2.0$ bits, which increases to $2.528$ bits per channel use when $r_\mathrm{c}=2.8$ bits. This implies that dedicating one bit in every symbol to the watermark, i.e., $r_\mathrm{c}=2.0$, is wasteful in this case.
Better protection against synchronization errors is provided by assigning one bit to the watermark in only $20\%$ of the symbols. As examples, Fig.~\ref{fig:entropy_8PSK} also shows the maximum achievable rates of the $8$-PSK system under $r_\mathrm{c}=2.3$ and $r_\mathrm{c}=2.8$. For $\ppid=0.01$, when SNR$<\!6.44$~dB the system with $r_\mathrm{c}=2.0$ achieves higher rates compared to those of the systems with $r_\mathrm{c}=2.3$ and $r_\mathrm{c}=2.8$. When $6.44\!<$SNR$<\!9.31$~dB, $r_\mathrm{c}=2.3$ provides the largest rates, and when SNR$>\!9.31$~dB, $r_\mathrm{c}=2.8$ provides the largest rates compared to the other two systems. Fig.~\ref{fig:entropy_32QAM} depicts the comparison for the $16$-QAM and $32$-AM/PM systems. Until now, we have only considered binary watermark sequences. When the number of synchronization errors is large, a binary watermark may not be very helpful in localizing these errors. Increasing the alphabet size of the watermark $q_\mathrm{w}$ increases the system's ability to combat synchronization errors. Increasing $q_\mathrm{w}$, however, decreases $r_\mathrm{c}$, i.e., fewer bits are available in each symbol for information bits. As a result, there is a trade-off between $q_\mathrm{w}$, $r_\mathrm{c}$, and the maximum achievable information rate on the channel. Fig.~\ref{fig:bounds} depicts the maximum achievable rates that the $8$-PSK and $32$-AM/PM watermarked systems can achieve by finding the optimum $r_\mathrm{c}$ and $q_\mathrm{w}$ at each point. It is evident that the maximum achievable rates can be increased significantly by this strategy. This is especially beneficial when $\ppid$ is small, where the achievable rates get closer to the lower bound on $q$-ary codes. \section{Complexity and the watermark sequence} \label{sec:complexity} \subsection{Decoding complexity} The complexity of the forward-backward algorithm determines the complexity of watermark decoding.
This complexity scales as $O\left((1+2t_\mathrm{max})IMN\right)$, where $1+2t_\mathrm{max}$ is the number of states in the HMM and $N$ is equal to the number of symbols on which the forward-backward algorithm is performed. It should be noted that it is possible to reduce this complexity using arguments similar to those of \cite{davey01}. \subsection{Watermark sequence} The watermark sequences used in this paper are pseudo-random sequences. Our experiments confirm that these sequences perform well under different insertion/deletion rates. {\color{black}Periodic sequences with small periods are usually not good choices. First, they are vulnerable to successive insertions/deletions. As a simple example, if the watermark is a periodic sequence with period $4$, then the decoder cannot detect $4$ successive deletions. Furthermore, certain patterns of insertions/deletions can fool the decoder such that it fails to detect them. Randomness lowers the probability of the decoder falsely detecting or missing insertions/deletions. Among the other factors which affect the decoding performance is the number of successive identical symbols (runs) in the watermark. One advantage of having runs is that they provide the ability to detect successive deletions. The longer the run, the longer the burst of successive deletions that can be detected. Nevertheless, longer runs lead to a worse localization of insertions/deletions as the decoder is not able to detect where exactly insertions/deletions have occurred. This gives rise to less reliable LLRs in the vicinity of insertions/deletions. Thus, there should be a balance between long and short runs in the watermark sequence. Pseudo-random sequences usually have this property. Among the candidates for the watermark are run-length limited (RLL) sequences. RLL sequences with small maximum run-lengths (e.g., around 3 or 4) are good choices particularly when $\ppid$ is very small, since the probability of having successive insertions/deletions is then small.
The performance gain over pseudo-random watermarks is only notable at small $\ppid$, where high-rate outer codes are used. The presence of additive noise can also affect the choice of watermark. All in all, it is possible that sequences with structure could offer better performance than the above-mentioned sequences. This can be the subject of further investigation.} \color{black} \section{Conclusion} \label{Sec:conclusion} In this paper, we proposed a concatenated coding scheme for reliable communication over non-binary I/D channels where symbols were randomly deleted from or inserted in the transmitted sequence and all symbols were corrupted by additive white Gaussian noise. First, we provided redundancy by expanding the symbol set while maintaining almost the same minimum distance. Then, we allocated part of the bits associated with each symbol to watermark bits. The watermark sequence, known at the receiver, was then used by the forward-backward algorithm to provide soft information for the outer code. Finally, the received sequence was decoded by the outer code. We evaluated the performance of the watermarked system and, through numerical examples, we showed that a significant number of insertions and deletions could be corrected by the proposed method. The maximum information rates achievable by this method on the I/D channel were provided and compared with existing results and the available bounds on $q$-ary synchronization codes. Practical codes were also designed that could approach these information rates. \bibliographystyle{IEEEtran} \renewcommand{\baselinestretch}{1.5}
\section{Introduction} For the past several years many lattice groups have been involved in studying strongly-coupled near-conformal gauge--fermion systems. Some of these models may be candidates for new physics beyond the standard model, while others are simply interesting non-perturbative quantum field theories. Because the dynamics of these lattice systems are unfamiliar, it is important to study them with several complementary techniques. Not only does this allow consistency checks, it can also provide information about the most efficient and reliable methods to investigate near-conformal lattice theories. Monte Carlo Renormalization Group (MCRG) two-lattice matching is one of several analysis tools that we are using to investigate SU(3) gauge theories with many massless fermion flavors. This technique predicts the step-scaling function $s_b$ in the bare parameter space. In a previous work~\cite{Petropoulos:2012mg} we proposed an improved MCRG method that exploits the Wilson flow to obtain a bare step-scaling function that corresponds to a unique discrete \ensuremath{\beta} function. We briefly review our Wilson-flow-optimized MCRG (WMCRG) procedure in Sections~\ref{sec:mcrg}--\ref{sec:wmcrg}. It is important to note that we are investigating a potential infrared fixed point (IRFP) where the coupling is irrelevant: its running slows and eventually stops. This is challenging to distinguish from a near-conformal system where the gauge coupling runs slowly but does not flow to an IRFP. The observation of a backward flow that survives extrapolation to the infinite-volume limit could provide a clean signal. In \secref{sec:results} we report WMCRG results for SU(3) gauge theory with $N_f = 12$ flavors of massless fermions in the fundamental representation. 
This 12-flavor model has been studied by many groups, including Refs.~\cite{Appelquist:2009ty, Deuzeman:2009mh, Fodor:2011tu, Appelquist:2011dp, Hasenfratz:2011xn, DeGrand:2011cu, Cheng:2011ic, Jin:2012dw, Lin:2012iw, Aoki:2012eq, Fodor:2012uw, Fodor:2012et, Itou:2012qn, Cheng:2013eu, Aoki:2013pca, Hasenfratz:2013uha, Hasenfratz:2013eka, Cheng:2013bca}. Using new ensembles of 12-flavor gauge configurations generated with exactly massless fermions, our improved WMCRG technique predicts a conformal IRFP where the step-scaling function vanishes. As with every method, it is essential to study the systematic effects. For WMCRG the most important systematic effects are due to the finite volume and limited number of blocking steps. While we are not able to carry out a rigorous infinite-volume extrapolation, the observed zero of the bare step-scaling function is present for all investigated lattice volumes and renormalization schemes, and agrees with the earlier MCRG results of \refcite{Hasenfratz:2011xn}. The results of our complementary $N_f = 12$ investigations of finite-temperature phase transitions~\cite{Schaich:2012fr, Hasenfratz:2013uha}, the Dirac eigenmode number~\cite{Cheng:2013eu, Cheng:2013bca}, and finite-size scaling~\cite{Hasenfratz:2013eka} are also consistent with the existence of an infrared fixed point and IR conformality. \section{\label{sec:mcrg}Monte Carlo Renormalization Group} MCRG techniques probe lattice field theories by applying RG blocking transformations that integrate out high-momentum (short-distance) modes, moving the system in the infinite-dimensional space of lattice-action couplings. In an IR-conformal system on the $m = 0$ critical surface, a renormalized trajectory runs from the perturbative gaussian FP (where the gauge coupling \ensuremath{\beta} is a relevant operator) to the IRFP (where \ensuremath{\beta} is irrelevant). 
Because the locations of these fixed points in the action-space depend on the renormalization scheme, each scheme corresponds to a different renormalized trajectory. The RG flow produced by the blocking steps moves the system towards and along the renormalized trajectory, from the perturbative FP to the infrared fixed point. At stronger couplings, where we would na\"ively expect backward flow, there might be no ultraviolet FP to drive the RG flow along a renormalized trajectory. Except in the immediate vicinity of the IRFP, every method that attempts to determine the strong-coupling flow of the gauge coupling (including MCRG two-lattice matching) might then become meaningless. We determine the bare step-scaling function $s_b(\ensuremath{\beta} _1)$ by matching the lattice actions $S(\ensuremath{\beta} _1, n_b)$ and $S(\ensuremath{\beta} _2, n_{b - 1})$ for systems with bare couplings $\{\ensuremath{\beta} _1, \ensuremath{\beta} _2\}$ after $\{n_b, n_{b - 1}\}$ blocking steps: $s_b(\ensuremath{\beta} _1) \equiv \lim_{n_b \to \infty} \ensuremath{\beta} _1 - \ensuremath{\beta} _2$~\cite{Petropoulos:2012mg}. When the lattice actions are identical, all observables are identical. We use the plaquette, the three six-link loops and a planar eight-link loop to perform this matching. Using short-distance gauge observables allows us to carry out more blocking steps, down to small $2^4$ or $3^4$ lattices. We minimize finite-volume effects by comparing observables measured on the same blocked volume~\cite{Hasenfratz:2011xn}. We perform the matching independently for each observable, fitting the data as a cubic function of \ensuremath{\beta} to smoothly interpolate between investigated values of the gauge coupling. Our finite lattices only allow a few blocking steps, so we must optimize the procedure to reach the renormalized trajectory in as few steps as possible. 
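A single two-lattice matching step as described above, fitting a blocked observable as a cubic in $\beta$ and interpolating to the coupling at which it equals the target value, can be sketched as follows. The data here are synthetic and the function and variable names are illustrative; in practice the observables are the measured blocked Wilson loops.

```python
import numpy as np

# Illustrative sketch of one two-lattice matching step: given the value of an
# observable <W> measured at beta_1 after n_b blockings, fit the same
# observable measured after n_b - 1 blockings at several couplings with a
# cubic in beta, then solve for the beta_2 at which the two agree.
def match_coupling(target, betas, observables):
    coeffs = np.polyfit(betas, observables, deg=3)  # cubic interpolation in beta
    shifted = coeffs.copy()
    shifted[-1] -= target                           # solve cubic(beta) = target
    roots = np.roots(shifted)
    real = roots[np.abs(roots.imag) < 1e-8].real
    inside = real[(real >= betas.min()) & (real <= betas.max())]
    return inside[0]                                # matched coupling beta_2

# Synthetic monotone data standing in for a blocked plaquette <W>(beta_2):
betas = np.linspace(4.0, 8.0, 9)
obs = 0.1 + 0.08 * betas + 0.002 * betas**2 + 0.0005 * betas**3
beta1 = 6.0
target = 0.1 + 0.08 * beta1 + 0.002 * beta1**2 + 0.0005 * beta1**3
beta2 = match_coupling(target, betas, obs)
s_b = beta1 - beta2  # step-scaling estimate (zero for this synthetic data)
```

Repeating this for each observable and averaging (with the spread giving the error bar) yields the $s_b(\beta_1)$ points plotted in the figures.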
In practice, we optimize by tuning some parameter so that consecutive RG blocking steps yield the same $\ensuremath{\beta} _1 - \ensuremath{\beta} _2$, which we identify as $s_b(\ensuremath{\beta} _1)$. Traditional optimization tunes the RG blocking transformation at each coupling separately, resulting in a different renormalization scheme at each bare coupling $\ensuremath{\beta} _1$: the $s_b$ we obtain is a composite of many different discrete \ensuremath{\beta} functions. The Wilson flow provides a parameter that we can tune without changing the scheme. \section{\label{sec:wmcrg}Wilson-flow-optimized MCRG} \begin{figure}[th] \centering \includegraphics[height=3 in]{wilson_flow_opt.png} \caption{The Wilson flow (blue) moves systems on a surface of constant lattice scale $a$ (normal to the orange renormalized trajectory) in the infinite-dimensional coupling space. Wilson-flow-optimized MCRG tunes the flow time to bring the system close to the renormalized trajectory (yellow star), so that MCRG blocking (green) quickly reaches the renormalized trajectory.} \label{fig:wflow_opt} \end{figure} The Wilson flow is a continuous smearing transformation~\cite{Narayanan:2006rf} that removes UV fluctuations without changing the lattice scale, as shown in \fig{fig:wflow_opt}. In perturbation theory it is related to the \ensuremath{\overline{\textrm{MS}} } running coupling~\cite{Luscher:2010iy}, and can be used to compute a renormalized step-scaling function~\cite{Fodor:2012td, Fodor:2012qh}. In this work we use the Wilson flow to optimize MCRG two-lattice matching with a fixed RG blocking transformation (renormalization scheme). The Wilson flow continuously moves the system on a surface of constant lattice scale in the infinite-dimensional space of lattice-action couplings. We tune the flow time to bring the system as close as possible to the renormalized trajectory. 
After running the optimal amount of Wilson flow on the unblocked lattices, we then carry out the MCRG two-lattice matching. Because the renormalization scheme is fixed, we obtain a bare step-scaling function that corresponds to a unique discrete \ensuremath{\beta} function. \section{\label{sec:results}Results for 12 Flavors} \begin{figure}[ht] \centering \includegraphics[height=3 in]{12flav_6-12-24_3lm} \caption{The bare step-scaling function $s_b$ predicted by three-lattice matching with $6^4$, $12^4$ and $24^4$ lattices blocked down to $3^4$, comparing three different renormalization schemes. The error bars come from the standard deviation of predictions using the different observables discussed in \protect\secref{sec:mcrg}.} \label{fig:24to3} \end{figure} \begin{figure}[ht] \centering \includegraphics[height=3 in]{12flav_8-16-32_44_as_3lm} \caption{As in \protect\fig{fig:24to3}, the bare step-scaling function $s_b$ for three different renormalization schemes from three-lattice matching, now using $8^4$, $16^4$ and $32^4$ lattices blocked down to $4^4$.} \label{fig:32to4} \end{figure} Our WMCRG results for the 12-flavor system are obtained on gauge configurations generated with exactly massless fermions. Our lattice action uses nHYP-smeared staggered fermions as described in \refcite{Cheng:2011ic}, and to run with $m = 0$ we employ anti-periodic boundary conditions in all four directions. All of our analyses are carried out at couplings weak enough to avoid the unusual strong-coupling ``$\ensuremath{\cancel{S^4}} $'' phase discussed by Refs.~\cite{Cheng:2011ic, Hasenfratz:2013uha}. We perform three-lattice matching with volumes $6^4$--$12^4$--$24^4$ and $8^4$--$16^4$--$32^4$. Three-lattice matching is based on two sequential two-lattice matching steps, to minimize finite-volume effects~\cite{Hasenfratz:2011xn}. Both two-lattice matching steps are carried out on the same final volume $V_f$. 
We denote the number of blocking steps on the largest volume by $n_b$, and tune the length of the initial Wilson flow by requiring that the last two blocking steps predict the same step-scaling function. Using the $8^4$--$16^4$--$32^4$ data we determine the bare step-scaling function for $n_b = 3$ and $V_f = 4^4$ as well as $n_b = 4$ and $V_f = 2^4$, while the $6^4$--$12^4$--$24^4$ data set is blocked to a final volume $V_f = 3^4$ ($n_b = 3$). This allows us to explore the effects of both the final volume and the number of blocking steps. We investigate three renormalization schemes by changing the HYP smearing parameters in our blocking transformation~\cite{Petropoulos:2012mg}: scheme~1 uses smearing parameters (0.6, 0.2, 0.2), scheme~2 uses (0.6, 0.3, 0.2) and scheme~3 uses (0.65, 0.3, 0.2). Figs.~\ref{fig:24to3}, \ref{fig:32to4} and \ref{fig:scheme1} present representative results for 12 flavors. All of the bare step-scaling functions clearly show $s_b = 0$, signalling an infrared fixed point, for every $n_b$, $V_f$ and renormalization scheme. Appropriately for an IR-conformal system, the location of the fixed point is scheme dependent. We observe that the fixed point moves to stronger coupling as the HYP smearing parameters in the RG blocking transformation increase. When we block our $8^4$, $16^4$ and $32^4$ lattices down to a final volume $V_f = 2^4$ (corresponding to $n_b = 4$), the observables become very noisy, making matching more difficult. The problem grows worse as the HYP smearing parameters increase, and our current statistics do not allow reliable three-lattice matching for $V_f = 2^4$ in schemes~2 and 3. To resolve this issue, we are accumulating more statistics in existing $32^4$ runs, and generating additional $32^4$ ensembles at more values of the gauge coupling $\ensuremath{\beta} _F$. These additional data will also improve our results for scheme~1, which we show in \fig{fig:scheme1}. 
Different volumes and $n_b$ do not produce identical results in scheme~1, suggesting that the corresponding systematic effects are still non-negligible. We can estimate finite-volume effects by comparing $n_b = 3$ with $V_f = 3^4$ and $V_f = 4^4$. Systematic effects due to $n_b$ can be estimated from $n_b = 4$ and $V_f = 2^4$, but this is difficult due to the noise in the $2^4$ data. Even treating the spread in the results shown in \fig{fig:scheme1} as a systematic uncertainty, we still obtain a clear zero in the bare step-scaling function, indicating an IR fixed point. \begin{figure}[th] \centering \includegraphics[height=3 in]{12flav_8-16-32_3lm_22} \caption{The bare step-scaling function $s_b$ for scheme~1, comparing three-lattice matching using different volumes: $6^4$, $12^4$ and $24^4$ lattices blocked down to $3^4$ (black $\times$s) as well as $8^4$, $16^4$ and $32^4$ lattices blocked down to $4^4$ (blue bursts) and $2^4$ (red crosses).} \label{fig:scheme1} \end{figure} \section{Conclusion} In this proceedings we have shown how the Wilson-flow-optimized MCRG two-lattice matching procedure proposed in \refcite{Petropoulos:2012mg} improves upon traditional lattice renormalization group techniques. By optimizing the flow time for a fixed RG blocking transformation, WMCRG predicts a bare step-scaling function $s_b$ that corresponds to a unique discrete \ensuremath{\beta} function. Applying WMCRG to new 12-flavor ensembles generated with exactly massless fermions, we observe an infrared fixed point in $s_b$. The fixed point is present for all investigated lattice volumes, number of blocking steps and renormalization schemes, even after accounting for systematic effects indicated by \fig{fig:scheme1}. 
This result reinforces the IR-conformal interpretation of our complementary $N_f = 12$ studies of phase transitions~\cite{Schaich:2012fr, Hasenfratz:2013uha}, the Dirac eigenmode number~\cite{Cheng:2013eu, Cheng:2013bca}, and finite-size scaling~\cite{Hasenfratz:2013eka}. \section*{Acknowledgments} This research was partially supported by the U.S.~Department of Energy (DOE) through Grant No.~DE-SC0010005 (A.~C., A.~H.\ and D.~S.) and by the DOE Office of Science Graduate Fellowship Program under Contract No.~DE-AC05-06OR23100 (G.~P.). A.~H.\ is grateful for the hospitality of the Brookhaven National Laboratory HET group during her extended visit. Our code is based in part on the MILC Collaboration's lattice gauge theory software.\footnote{\href{http://www.physics.utah.edu/~detar/milc/}{http://www.physics.utah.edu/$\sim$detar/milc/}} Numerical calculations were carried out on the HEP-TH and Janus clusters at the University of Colorado, the latter supported by the U.S.~National Science Foundation (NSF) through Grant No.~CNS-0821794; at Fermilab under the auspices of USQCD supported by the DOE; and at the San Diego Computing Center and Texas Advanced Computing Center through XSEDE supported by NSF Grant No.~OCI-1053575. \bibliographystyle{utphys}
\section{INTRODUCTION} The structure of the solar atmosphere varies from partially ionized in its lower layers to fully ionized in the solar corona. In the solar photosphere there is one ion per about $10^3-10^4$ neutrals but in the chromosphere and in the transition region the number of neutrals rapidly falls off with height as the temperature increases (e.g., Priest 2014; Ballester et al. 2018). The role of wave heating in this temperature increase has been investigated in many papers (e.g., Narain \& Ulmschneider 1996; Roberts \& Ulmschneider 1997; Roberts 1991, 2006) in which a fully ionized solar atmosphere is typically considered. Recently, the effects of partially ionized solar atmosphere on the wave propagation are taken into account in two-fluid numerical studies performed by Maneva et al. (2017), W\'ojcik et al. (2019), Popescu Braileanu et al. (2019), Ku\'zma et al. (2019) and Murawski et al. (2020). The obtained results demonstrate that ion and neutral waves behave differently and that ion-neutral collisions may become an effective damping mechanism for the waves. The wave propagation in the solar atmosphere is strongly affected by the presence of cutoff periods, which are different for different waves and strongly depend on the structure of media in which the waves propagate. The concept of cutoff periods was originally introduced by Lamb (1909, 1910, 1945) for acoustic waves propagating in stratified but isothermal atmospheres. Then, the work was extended to more realistic (nonisothermal) atmospheres with magnetic fields in which the cutoffs for different waves were obtained analytically (e.g., Defouw 1976; Gough 1977; Rae \& Roberts 1982; Musielak 1990; Fleck \& Schmitz 1991, 1993; Schmitz \& Fleck 1992; Stark \& Musielak 1993; Musielak \& Moore 1995; Roberts \& Ulmschneider 1997; Roberts 2004, 2006; Musielak et al. 2006, 2007; Routh et al. 2010; Murawski \& Musielak 2010; Cally \& Hasan 2011; Routh \& Musielak 2014; Perera et al. 2015; Felipe et al. 
2018; Routh et al. 2020). All these analytical studies were done for a fully ionized (and often isothermal) solar atmosphere; despite these limitations, some of the derived cutoffs for acoustic and magnetoacoustic waves (MAWs) will be compared to the results for two-fluid MAWs obtained in this paper. The above analytical studies of cutoff periods were supplemented by 3D numerical simulations of different linear and nonlinear waves in the solar atmosphere with different non-magnetic and magnetic settings, in which the wave generation, propagation and dissipation were investigated. Specific numerical results involve impulsively generated linear and nonlinear magnetohydrodynamic (MHD) waves, 3D simulations of magnetic twisters, and 3D simulations of magnetic flux tube waves (e.g., Murawski \& Zaqarashvili 2010; Chmielewski et al. 2013; Murawski et al. 2016; Mart{\'{\i}}nez-Sykora et al. 2017; Maneva et al. 2017; Kra\'skiewicz et al. 2019; Ku\'zma et al.\ 2019; Popescu Braileanu et al. 2019; W\'ojcik et al. 2018, 2019, 2020; Murawski et al. 2020). Some of this work has already included the effects of partial ionization of the solar atmosphere. Specifically, the observationally established variations of the acoustic cutoff periods with height in the solar atmosphere (Wi\'sniewska et al. 2016; Kayshap et al. 2018) were partially reproduced numerically by Murawski \& Musielak (2016) and Murawski et al. (2016); recently, Ku\'zma et al. (2022) obtained a very good agreement with the observations. The main purpose of this paper is to extend the MHD models (e.g., Fleck \& Schmitz 1991; Kalkofen et al. 1994; Kra\'skiewicz et al. 2019) according to which, for a piston-wave excitation of an isothermal atmosphere, the asymptotic solution in the linear limit is a superposition of waves with the piston wave-period and waves with the cutoff wave-period.
For waves excited with periods above the cutoff period, the atmosphere oscillates with the acoustic cutoff period at greater heights; however, as time elapses, the piston driving period takes over. Here, this picture is extended to a non-isothermal solar atmosphere: the conditions for the propagation of long wave-period two-fluid MAWs are determined by taking into account the dynamics of neutrals and by considering the wave-periods of neutral waves. The paper is organized as follows. In Sec.~\ref{sec:numerical_model} we describe the numerical model of the atmosphere. In Sec.~\ref{sec:num_sim_model} we present our numerical results. We conclude in Sec.~\ref{sec:conclusions}. \section{NUMERICAL MODEL OF THE SOLAR ATMOSPHERE}\label{sec:numerical_model} We consider a 1D magnetically structured and gravitationally stratified solar atmosphere whose dynamics is described by a set of two-fluid equations for ions + electrons, treated as one fluid, and neutrals, regarded as a second fluid. Specifically, these fluids are governed by the following equations (e.g., Maneva et al. 2017; Popescu Braileanu et al.
2019): \begin{equation} \frac{\partial \varrho_{\rm n}}{\partial t}+\nabla\cdot(\varrho_{\rm n} \mathbf{V}_{\rm n}) = 0\,, \label{eq:neutral_continuity} \end{equation} \begin{equation} \frac{\partial \varrho_{\rm i}}{\partial t}+\nabla\cdot(\varrho_{\rm i} \mathbf{V}_{\rm i}) = 0\,, \label{eq:ion_continuity} \end{equation} \begin{equation} \begin{split} \frac{\partial (\varrho_{\rm n} \mathbf{V}_{\rm n})}{\partial t}+ \nabla \cdot (\varrho_{\rm n} \mathbf{V}_{\rm n} \mathbf{V}_{\rm n}+p_{\rm n} \mathbf{I}) = \varrho_{\rm n} \mathbf{g} + {\bf S_{\rm m}}\,, \end{split} \label{eq:neutral_momentum} \end{equation} \begin{equation} \begin{split} \frac{\partial (\varrho_{\rm i} \mathbf{V}_{\rm i})}{\partial t}+ \nabla \cdot (\varrho_{\rm i} \mathbf{V}_{\rm i} \mathbf{V}_{\rm i}+p_{\rm i} \mathbf{I}) = \frac{1}{\mu}(\nabla \times \mathbf{B}) \times \mathbf{B}\\ + \varrho_{\rm i} \mathbf{g} - {\bf S_{\rm m}}\,, \end{split} \label{eq:ion_momentum} \end{equation} \begin{equation} \frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{V_{\rm i} \times }\mathbf{B})\,, \hspace{3mm} \nabla \cdot{\mathbf B}=0\,, \label{eq:ions_induction}\\ \end{equation} \begin{equation} \begin{split} \frac{\partial E_{\rm n}}{\partial t}+\nabla\cdot[(E_{\rm n}+p_{\rm n})\mathbf{V}_{\rm n}] = \varrho_{\rm n} \mathbf{g} \cdot \mathbf{V}_{\rm n} + S_{\rm En}\,, \end{split} \label{eq:neutral_energy} \end{equation} \begin{equation} \begin{split} \frac{\partial E_{\rm i}}{\partial t}+\nabla\cdot\left[\left(E_{\rm i}+p_{\rm i} + \frac{{\bf B}^2}{2\mu} \right)\mathbf{V}_{\rm i}-\frac{\mathbf{B}}{\mu} (\mathbf{V}_{\rm i}\cdot \mathbf{B})\right]\\ = \varrho_{\rm i} \mathbf{g} \cdot \mathbf{V}_{\rm i} + S_{\rm Ei}\,, \end{split} \label{eq:ion_energy} \end{equation} \begin{equation} \begin{split} E_{\rm n}=\frac{\varrho_{\rm n} \mathbf{V}_{\rm n}^2}{2}+\frac{p_{\rm n}}{\gamma-1}\,, \end{split} \label{eq:En} \end{equation} \begin{equation} \begin{split} E_{\rm i}=\frac{\varrho_{\rm i} \mathbf{V}_{\rm
i}^2}{2}+\frac{{\bf B}^2}{2\mu}+\frac{p_{\rm i}}{\gamma-1}\,. \end{split} \label{eq:Ei} \end{equation} Here the collisional momentum, ${\bf S_{\rm m}}$, and energy, $S_{\rm Ei,n}$, source terms are given as \begin{equation} {\bf S_{\rm m}} =\alpha_{\rm c}({\bf V_{\rm i}}-{\bf V_{\rm n}})\, , \end{equation} \begin{equation} S_{\rm En}= \frac{1}{2}\alpha_{\rm c}(V_{\rm i}-V_{\rm n})^2 + \frac{3\alpha_{\rm c} k_{\rm B}}{m_{\rm i}+m_{\rm n}}(T_{\rm i}-T_{\rm n})\,, \end{equation} \begin{equation} S_{\rm Ei}= \frac{1}{2}\alpha_{\rm c}(V_{\rm i}-V_{\rm n})^2 + \frac{3\alpha_{\rm c}k_{\rm B}}{m_{\rm i}+m_{\rm n}}(T_{\rm n}-T_{\rm i})\,, \end{equation} with subscripts $_{\rm i}$, $_{\rm n}$ and $_{\rm e}$ corresponding respectively to ions, neutrals and electrons. The symbols $\varrho_{\rm i,n}$ denote mass densities, ${\bf V}_{\rm i,n}=[V_{\rm i,n\,x}, V_{\rm i,n\,y}, 0]$ velocities, $p_{\rm i,n}$ the ion+electron and neutral gas pressures, ${\bf B}$ is the magnetic field, and $T_{\rm i,n}$ are the temperatures, specified by the ideal gas laws, \begin{equation} p_{\rm n} =\frac{k_{\rm B}}{m_{\rm n}}\varrho_{\rm n}T_{\rm n}\, , \hspace*{3mm} p_{\rm i}=\frac{k_{\rm B}}{m_{\rm i}}\varrho_{\rm i}T_{\rm i}\, . \label{eq:pressures} \end{equation} \begin{figure} \begin{center} \includegraphics[width=8cm]{./Figures/T.eps} \includegraphics[width=8cm]{./Figures/density.eps} \caption{\small Vertical profiles of the equilibrium temperature (top) and the hydrostatic mass densities of the ionized (bottom, solid line) and neutral (bottom, dashed line) fluids versus height, $y$, in the two-fluid model of the solar atmosphere. } \vspace{-1cm} \label{fig:Atm} \end{center} \end{figure} The collision coefficient is given as (e.g. Oliver et al. 2016; Ballester et al.
2018, and references cited therein) \begin{equation} \alpha_{\rm c} = \frac{4}{3} \frac{\sigma_{\rm in}}{m_{\rm i}+m_{\rm n}} \sqrt{ \frac{8k_{\rm B}}{\pi} \left( \frac{T_{\rm i}}{m_{\rm i}}+\frac{T_{\rm n}}{m_{\rm n}} \right) } \; \varrho_{\rm n} \varrho_{\rm i}\,, \end{equation} with $\sigma_{\rm in}$ being the collisional cross-section, for which we have chosen its quantum value of $1.4\times 10^{-19}$~m$^{2}$ (Vranjes \& Krstic 2013). \begin{figure} \begin{center} \includegraphics[width=8cm]{./Figures/P_m-By.eps} \caption{\small The cutoff wave-period, $P_{\rm m}$, vs. height in the case of a vertical magnetic field of magnitude $B_y=11.4$~G. } \label{fig:Pm_cut_off} \end{center} \end{figure} The gravity vector is ${\bf g} = [0, -g, 0]$ with magnitude $g = 274.78$~m~s$^{-2}$, $m_{\rm i,n}$ are the masses of ions and neutrals, respectively, $k_{\rm B}$ is the Boltzmann constant, $\gamma=5/3$ is the ratio of specific heats, and $\mu$ is the magnetic permeability of the medium. The other symbols have their standard meaning. The present 1D model suffers from severe limitations. For instance, important dissipative effects such as ionization/recombination, thermal conduction and radiative losses are neglected. This very likely leads to an overestimated energy flux of these waves that can be carried into the corona. In addition, the very significant magnetic expansion of flux tubes from the photosphere to the corona is neglected, leading to a severe overestimation of the wave energy flux and the corresponding density perturbations. These effects are very important since the magnetic field strength could decrease by $2-3$ orders of magnitude from the photosphere to the corona. We plan to make our model more realistic in future studies.
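The size of the collision coefficient, and hence of the ion-neutral coupling time that underlies the strong-coupling argument used below, can be gauged with a short numerical sketch. The densities and temperature here are assumed, illustrative photospheric values (only $\sigma_{\rm in}$ and the physical constants are taken from the text), and equal hydrogen masses are assumed for ions and neutrals.

```python
import numpy as np

# Order-of-magnitude sketch of the friction coefficient alpha_c and the
# resulting neutral-ion coupling time. rho_i, rho_n and T are assumed,
# illustrative photospheric values, not taken from the model atmosphere.
k_B = 1.380649e-23        # Boltzmann constant [J K^-1]
m_H = 1.6726e-27          # hydrogen mass [kg]; we take m_i = m_n = m_H
sigma_in = 1.4e-19        # quantum collisional cross-section [m^2]

def alpha_c(rho_i, rho_n, T_i, T_n, m_i=m_H, m_n=m_H):
    """Friction coefficient of the two-fluid momentum source term S_m."""
    mean_speed = np.sqrt(8.0 * k_B / np.pi * (T_i / m_i + T_n / m_n))
    return 4.0 / 3.0 * sigma_in / (m_i + m_n) * mean_speed * rho_n * rho_i

rho_i, rho_n = 1.0e-8, 3.0e-7     # kg m^-3 (assumed, rho_n ~ 30 rho_i)
T = 5700.0                        # K (assumed)
a_c = alpha_c(rho_i, rho_n, T, T)
tau_ni = rho_n / a_c              # neutral-ion coupling time [s]
```

With these numbers $\tau_{\rm ni}$ comes out of the order of $10^{-4}$~s, far below the driver periods of a few hundred seconds considered in this paper, consistent with the strong photospheric coupling invoked in the text.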
\begin{figure} \begin{center} \includegraphics[width=8cm]{./Figures/Pd200By_Vyi.eps} \includegraphics[width=8cm]{./Figures/By_Pd200_P_vs_y_Vx2_i.eps} \caption{\small Time-distance plot for the ion vertical velocity, $V_{\rm iy}$, (top) and its Fourier period, $P$, vs. height, $y$, (bottom) in the case of $P_{\rm d}=200$~s. } \label{fig:200_Viy} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=14cm, height=13cm]{./Figures/wavelet_Pd200By_1.5.eps} \caption{\small Time-signature for $V_{\rm iy}(y=1.5\,{\rm Mm})$ (top), its wavelet spectrum (left-bottom), and global wavelet spectrum (right-bottom) in the case of $P_{\rm d}=200$~s. } \label{fig:200_wavelet_Viy-1.5} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=8cm]{./Figures/Pd300By_Vyi.eps} \includegraphics[width=8cm]{./Figures/By_Pd300_P_vs_y_Vx2_i-exp.eps} \caption{\small Time-distance plot for the ion vertical velocity, $V_{\rm iy}$, (top) and its Fourier period, $P$, together with the observational data of Wi\'sniewska et al. (2016) (stars) and Kayshap et al. (2018) (circles), vs. height, $y$, (bottom) in the case of $P_{\rm d}=300$~s. } \label{fig:300_Viy} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=14cm, height=13cm]{./Figures/wavelet_Pd300By_1.5.eps} \caption{\small Time-signature for $V_{\rm iy}(y=1.5\,{\rm Mm})$ (top), its wavelet spectrum (left-bottom), and global wavelet spectrum (right-bottom) in the case of $P_{\rm d}=300$~s. } \label{fig:300_wavelet_Viy-1.5} \end{center} \end{figure*} \begin{figure*} \begin{center} \mbox{ \includegraphics[width=8cm]{./Figures/Pd400By_Vyi.eps} \includegraphics[width=8cm]{./Figures/By_Pd400_P_vs_y_Vx2_i.eps} } \caption{\small Time-distance plot for the ion vertical velocity, $V_{\rm iy}$, and its Fourier period, $P$, vs. height, $y$, for $P_{\rm d}=400$~s.
} \label{fig:400_Viy} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=14cm, height=11cm]{./Figures/wavelet_Pd400_Viy_By_1.5.eps} \caption{\small Time-signature for $V_{\rm iy}(y=1.5\,{\rm Mm})$ (top), its wavelet spectrum (left-bottom), and global wavelet spectrum (right-bottom) in the case of $P_{\rm d}=400$~s. } \label{fig:wavelet_Pd400-1.5} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=8cm]{./Figures/cutoff_Pd.eps} \caption{\small Main Fourier period, $P_{\rm m}$, for $V_{\rm iy}$ vs. maximum value of $y$ at which this period is present in the case of $B_{\rm y}=11.4$~G. } \vspace{-0.5cm} \label{fig:cutoff_Pd} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{./Figures/ef_Pd-By11.4.eps}\\ \includegraphics[width=8cm]{./Figures/ef_By-Pd300.eps} \caption{\small Energy flux vs. the driving period for $B_{\rm y}=11.4$~G (top) and vs. magnetic field for $P_d=300$~s (bottom), evaluated at $y=1.9$~Mm. } \vspace{-0.5cm} \label{fig:ef_Pd+By} \end{center} \end{figure} \subsection{Equilibrium} We assume that the equilibrium plasma is still (${{\bf V}_i}={{\bf V}_n}={\bf 0}$) and the solar atmosphere is hydrostatic, viz. \begin{equation} \varrho_{\rm {hi,n}} g = -\frac{\partial p_{\rm {hi,n}}}{\partial y}\, . 
\label{eq:33B} \end{equation} This equation is satisfied by \begin{equation} p_{\rm {hi,n}}(y) = p_{\rm {0i,n}} \exp\left[ -\int_{y_{\rm r}}^{y} \frac{dy'}{\Lambda_{\rm i,n}(y')} \right]\, , \end{equation} \begin{equation} \varrho_{\rm {hi,n}}(y) = \frac{p_{\rm {hi,n}}(y)}{g\Lambda_{\rm i,n}(y)}\,, \end{equation} where the subscript $_{\rm h}$ corresponds to a hydrostatic quantity, $y_{\rm r}=50$~Mm is the reference level, $p_{\rm 0i}=10^{-2}$~Pa and $p_{\rm 0n}=3\cdot 10^{-4}$~Pa are, respectively, the ion and neutral gas pressures at this level, and \begin{equation} \label{eq:lambda} \Lambda_{\rm i,n}(y) =\frac{ k_{\rm B}T(y)}{ m_{\rm i,n}g} \end{equation} is the ion (neutral) pressure scale-height, which depends on the plasma temperature $T(y)$ taken from the semi-empirical model of Avrett \& Loeser (2008); see Fig.~\ref{fig:Atm} (top). The equilibrium ion and neutral mass density profiles are shown in Fig.~\ref{fig:Atm} (bottom). Note that at $y=0$~Mm, which corresponds to the bottom of the photosphere, $\varrho_n\approx 30\varrho_i$. For $y>0.7$~Mm, $\varrho_i>\varrho_n$, and at $y=3$~Mm, $\varrho_i=80\varrho_n$. The above hydrostatic equilibrium is overlaid by a current-free ($\nabla \times {\bf B}/\mu={\bf 0}$) magnetic field. We consider the case of a vertical magnetic field, ${\bf B}=[0, B_y, 0]$ with $B_y=11.4$~G. This value of the magnetic field is typical for the solar corona and representative for the chromosphere. \subsection{Cutoff periods} For the physical settings considered in this paper, the relevant cutoff periods were derived by Roberts (2004) for acoustic waves and by Roberts (2006) for MAWs. The analytical findings of these papers are compared to the numerical results presented below. The propagation of MAWs in the solar atmosphere is affected by cutoff periods for these waves.
The local cutoff period derived for the vertical magnetic field by Roberts (2006) is defined as (see Fig.~\ref{fig:Pm_cut_off}) \begin{equation} P_m(y) = \frac{2\pi}{\Omega_m}, \label{eq:Pm} \end{equation} where \begin{eqnarray} \begin{split} &\Omega_m^2(y)=\\ &c_t^2\left\{\frac{1}{4\Lambda^2}\left(\frac{c_t}{c_s}\right)^4- \frac{1}{2}\gamma g \left(\frac{c_t^2}{c_s^4}\right)'+ \frac{1}{c_A^2}\left(\omega_g^2+\frac{g}{\Lambda}\frac{c_t^2}{c_s^2}\right)\right\} \end{split} \end{eqnarray} and $\omega_g^2$ denotes the squared buoyancy (Brunt-V\"{a}is\"{a}l\"{a}) frequency, given by \begin{equation} \omega_g^2=-g\left(\frac{g}{c_s^2}+\frac{\varrho'}{\varrho}\right), \end{equation} with $c_t=c_s c_A / c_f$ being the 'cusp' (or tube) speed, $c_s = \sqrt{\gamma (p_{hi}+p_{hn}) / (\varrho_{hi}+\varrho_{hn})}$ the sound speed, $c_A= B_0 /\sqrt{\mu (\varrho_{hi}+\varrho_{hn})}$ the bulk Alfv\'en speed, $c_f(y)=\sqrt{c_s^2(y)+c_A^2(y)}$ the fast speed, $\Lambda=k_B T/[(m_i+m_n)g]$, and $'=d/dy$. \subsection{Periodic driver} We assume that the waves are driven by velocity fluctuations in the lower part of the photosphere. Since typical periods of these fluctuations are about 5 min, we place a monochromatic driver at the height $y=0$~Mm, operating with a period close to $P=300$~s. The developed numerical model of the wave propagation generalizes the previous MHD studies of monochromatic MAWs performed by Kra\'skiewicz et al. (2019). We perturb the solar atmosphere by a driver in the $y$-components of the ion and neutral velocities, given by \begin{equation} V_{\rm iy}(y=0,t)= V_{\rm ny}(y=0,t)= V_{\rm 0}\sin\left(\frac{2\pi\, t}{P_{\rm d}}\right), \label{eq:2B} \end{equation} where $V_0=0.025$~km~s$^{-1}$ is the amplitude of the driver and $P_{\rm d}$ stands for its period. Here we consider wave-periods equal to or longer than $200$~s.
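A quick sanity check on the magnitude of $P_{\rm m}$ is possible in a limiting case: in an isothermal atmosphere threaded by a strong vertical field ($c_A \gg c_s$, so $c_t \to c_s$), the expression for $\Omega_m$ reduces to Lamb's acoustic cutoff $c_s/(2\Lambda)$, i.e. $P_{\rm ac} = 4\pi\Lambda/c_s$. The temperature below is an assumed photospheric value, not the Avrett \& Loeser profile.

```python
import numpy as np

# Isothermal, strong-field limit of the cutoff: P_ac = 4*pi*Lambda/c_s.
# T is an assumed photospheric value (illustration only).
k_B = 1.380649e-23   # Boltzmann constant [J K^-1]
m_H = 1.6726e-27     # hydrogen mass [kg]
g = 274.78           # solar gravity [m s^-2]
gamma = 5.0 / 3.0    # ratio of specific heats

T = 5700.0                             # K (assumed)
c_s = np.sqrt(gamma * k_B * T / m_H)   # sound speed, ~8.9 km/s
Lam = k_B * T / (m_H * g)              # pressure scale height, ~171 km
P_ac = 4.0 * np.pi * Lam / c_s         # acoustic cutoff period [s]
```

This gives $P_{\rm ac} \approx 243$~s, in the same range as the $\sim 230-255$~s cutoff values read off Fig.~\ref{fig:Pm_cut_off}, and it motivates probing the atmosphere with driver periods of 200, 300 and 400~s.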
For such values of $P_{\rm d}$, collisions between ions and neutrals are not expected to play any significant role in wave damping and, consequently, in the thermalization of the wave energy. However, frequent ion-neutral collisions in the lower atmospheric layers result in strong ion-neutral coupling in the photosphere, while less frequent collisions result in weak coupling in the top chromosphere and the low corona, which affects the cutoff wave-periods. The driver generates neutral acoustic waves and ion MAWs propagating through the photosphere and then through the chromosphere towards the corona. We study these waves by tracing the $y$-components of the ion and neutral velocities and their mass densities. Our aim is to show that some waves with wave-periods longer than 300~s are evanescent and are not able to reach the solar corona. Note that internal gravity waves are removed from the model by considering a 1D background medium, for which $\partial/\partial x= \partial/\partial z=0$. \section{NUMERICAL RESULTS }\label{sec:num_sim_model} We perform numerical simulations of the MAWs in a partially ionized and weakly magnetized solar atmosphere by using the JOANNA code (W\'ojcik et al. 2018). The code is based on the original work by Leake et al. (2012), who considered a two-fluid model of ions and neutrals as separate fluids. This code was extensively tested in W\'ojcik et al. (2019, 2020) and in Murawski, Musielak \& W\'ojcik (2020); the latter paper reports a good agreement between the results obtained by the code and the observations. We present results of numerical simulations performed on a 1D uniform grid with the finest cell size of $\Delta y=5$~km between $y = 0$ and $y = 5.12$~Mm. Higher up, the grid is stretched with height up to the top boundary, which is located at $y = 60$~Mm. At the bottom and top boundaries, we set and hold fixed in time all plasma quantities at their equilibrium values.
The only exception is the bottom boundary, at which we overlay the equilibrium quantities with the driver specified by Eq.~(\ref{eq:2B}). This means that the presented results are valid for the photosphere and chromosphere, and that they can be used to establish the ranges of waves that may carry energy up to the corona. We focus on waves with periods of a few hundred seconds because for these waves the ion-neutral collisions are ineffective in the thermalization of the wave energy. On the other hand, if wave periods are an order of magnitude shorter, then they become close to the ion-neutral collision time (e.g., Popescu Braileanu et al. 2019), and therefore such waves are not considered here. Since short-period waves are mainly responsible for the heating of the solar chromosphere, this heating is not discussed in this paper; instead, the presented results are focused on the wave energy transport to the solar corona. In the following, we describe in detail the obtained numerical results and discuss their implications for solar physics. \subsection{The case of $P_{\rm d}=200$~s} In Fig.~\ref{fig:200_Viy} (top), we show the time-distance plot of the ion vertical velocity, $V_{\rm iy}$, for the wave-period $P_{\rm d}=200$~s. It is seen that the signal evolution remains essentially unchanged in time at a fixed height, and that the wave signals propagate through the photosphere and chromosphere to the corona. Figure~\ref{fig:200_Viy} (bottom) displays the Fourier period, $P$, vs. height, $y$, in which we clearly see the main period, $P_{\rm d}=200$~s, and also $P_{\rm cutoff}=230$~s, albeit with less power, for $1.0\,{\rm Mm}<y<1.5\, {\rm Mm}$. The same period can be inferred from the formula $P_{\rm cutoff}= P_{\rm mod}P_{\rm d}/(P_{\rm mod}-P_{\rm d})$, where the modulation period, $P_{\rm mod}\approx 1470$~s, is estimated from Fig.~\ref{fig:200_wavelet_Viy-1.5} (top panel).
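The beat-period relation quoted above follows from $1/P_{\rm mod} = |1/P_{\rm d} - 1/P_{\rm cutoff}|$ and can be checked with the values quoted in the text; the sign in the denominator depends on whether the cutoff period lies above the driver period (the 200~s case) or below it (the 300~s case):

```python
# Invert the beat-period relation 1/P_mod = |1/P_d - 1/P_cutoff| to recover
# the cutoff period from the modulation period read off the wavelet spectra.
def cutoff_from_beat(P_d, P_mod, cutoff_longer_than_driver=True):
    """Return P_cutoff for driver period P_d and modulation period P_mod."""
    if cutoff_longer_than_driver:          # P_cutoff > P_d (200 s case)
        return P_mod * P_d / (P_mod - P_d)
    return P_mod * P_d / (P_mod + P_d)     # P_cutoff < P_d (300 s case)

P_200 = cutoff_from_beat(200.0, 1470.0, cutoff_longer_than_driver=True)   # ~231 s
P_300 = cutoff_from_beat(300.0, 1200.0, cutoff_longer_than_driver=False)  # 240 s
```

Both values reproduce the $\approx 230-240$~s cutoff periods quoted in the text.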
The period of 230~s is consistent with the cutoff wave-period for MAWs at $y=1.5$~Mm (Fig.~\ref{fig:Pm_cut_off}), and it is excited together with the driver period. In Fig.~\ref{fig:200_wavelet_Viy-1.5}, we display the time-signature of $V_{\rm iy}(y=1.5$~Mm) (top) and the corresponding wavelet spectra (bottom) for the wave-period $P_{\rm d}=200$~s. The time-signature exhibits modulations with $P_{\rm mod}$ (top), which are discernible in the wavelet spectrum for $70\,{\rm s} < t < 590\,{\rm s}$ (bottom-left). The presented results reinforce those shown in Fig.~\ref{fig:200_Viy}. The main implication of these results is that a propagating wave pattern is seen for the waves of this period in the solar photosphere and chromosphere, which means that these waves carry most of their energy directly to the solar corona. Let us point out that, as in the case of $P_{\rm d}=200$~s the differences between the neutral and ion velocity periodograms are very small, we show here only the ion case. \subsection{The case of $P_{\rm d}=300$~s} Figure~\ref{fig:300_Viy} presents the time-distance plot of $V_{\rm iy}$ (top) and its Fourier spectrum (bottom) for $P_{\rm d}=300$~s. The presented results show that the waves with periods of $P=300$~s cannot reach the solar corona, as their power spectrum significantly decreases with height in the middle chromosphere. Moreover, it is also seen that shorter-period waves appear. These waves correspond to the local cutoff, with the strongest signal associated with $P_{\rm cutoff}\approx 240$~s, observed at $y>1$~Mm in Figs.~\ref{fig:Pm_cut_off} and \ref{fig:300_Viy}. A comparison of these wave periods with the observational data of Wi\'sniewska et al. (2016) and Kayshap et al. (2018) reveals an agreement at some points. Hence, the qualitative similarity between the numerical results and the observational findings confirms that ion-neutral collisions are an efficient mechanism of energy release.
Figure~\ref{fig:300_wavelet_Viy-1.5} illustrates the time-signature of $V_{\rm iy}(y=1.5$~Mm) (top) and the corresponding wavelet spectra (bottom). Despite the different wave periods displayed in Figs.~\ref{fig:200_wavelet_Viy-1.5} and \ref{fig:300_wavelet_Viy-1.5}, the wave evolution remains apparently unchanged. However, the wave profile is altered in comparison to the driving signal, with clear signs of modulation seen in the time-signature (top). The modulation period, $P_{\rm mod}$, can in this case be estimated from Fig.~\ref{fig:300_wavelet_Viy-1.5} (top) as $P_{\rm mod}\approx 1200$~s, giving $P_{\rm cutoff}=P_{\rm mod}P_{\rm d}/ (P_{\rm mod}+P_{\rm d})\simeq 240$~s. This value is close to the 230~s estimated for $P_{\rm d}=200$~s, and it is present for $y>1$~Mm in Fig.~\ref{fig:Pm_cut_off}. \subsection{The case of $P_{\rm d}=400$~s} Figure~\ref{fig:400_Viy} presents the time-distance plot of $V_{\rm iy}$ (top) and its Fourier power spectrum (bottom) for $P_{\rm d}=400$~s. Comparison to the results of the previous figures demonstrates that these waves can reach only the atmospheric height $y\simeq 1.3$~Mm. As a result, these waves do not propagate up to the solar corona, and waves with a period of $P_{\rm cutoff}\approx 255$~s are excited at about $y=1.2$~Mm (bottom). This period is close to the cutoff periods seen in Fig.~\ref{fig:Pm_cut_off} for $y>1$~Mm. Note that the results presented in Fig.~\ref{fig:400_Viy} (bottom) reveal at the top chromosphere wave-periods which are much shorter than the driving period of $P_{\rm d} =400$~s. These wave-periods are in the range of $200-300$~s and they support the results of Fleck \& Schmitz (1991) and Kalkofen et al. (1994). According to these authors, for any $P_{\rm d}$ longer than the local atmospheric cutoff period, the atmosphere initially oscillates with its local cutoff period and later on the oscillations evolve to reach $P_{\rm d}$ (see Fig.~\ref{fig:wavelet_Pd400-1.5}, left-bottom).
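The dominant-period extraction behind the Fourier-period panels of these figures can be mimicked with a plain FFT. Here a synthetic signal stands in for the simulated $V_{\rm iy}$ at the top chromosphere: a weak 400~s (driver) component plus a stronger 230~s (cutoff) component, with amplitudes assumed purely for illustration.

```python
import numpy as np

# Synthetic stand-in for V_iy(t) at a fixed chromospheric height: weak 400 s
# driver component + stronger 230 s cutoff component (assumed amplitudes).
dt = 1.0                                    # sampling step [s]
t = np.arange(0.0, 8000.0, dt)              # ~20 driver periods
v = 0.4 * np.sin(2.0 * np.pi * t / 400.0) \
    + 1.0 * np.sin(2.0 * np.pi * t / 230.0)

power = np.abs(np.fft.rfft(v))**2
freq = np.fft.rfftfreq(t.size, d=dt)
P_main = 1.0 / freq[1:][np.argmax(power[1:])]   # skip the zero-frequency bin
```

The recovered dominant period is the cutoff one, close to 230~s rather than the 400~s driving period, mirroring the behaviour described in the text.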
The global wavelet spectrum reveals that the signal at $P=P_{\rm d}=400$~s is much lower than the signal at $P\approx 230$~s (right-bottom), which agrees with Fig.~\ref{fig:400_Viy} (bottom), showing that for $y>1.5$~Mm the signal corresponding to $P\simeq 230$~s is stronger than that for $P=400$~s. As expected from our previous results obtained for the waves with periods of 200~s and 300~s, the waves with periods of 400~s cannot reach the solar corona because their power spectrum is significantly reduced in the solar chromosphere and its shape is altered, so the waves become evanescent. By using the latter wave property, we were able to determine the cutoff period for the waves in the solar chromosphere and corona, and use it to establish the conditions for the wave propagation in these layers of the solar atmosphere. \subsection{Parametric studies} Figure~\ref{fig:cutoff_Pd} displays the main Fourier period of $V_{\rm iy}$ vs. the maximum value of $y$ at which this period is present, i.e. $P_{\rm m}=P_{\rm d}$ for $y < {\rm max}(y)$; for $y > {\rm max}(y)$ the dominant period in the Fourier spectrum is lower than the driving period $P_{\rm d}$. Note the fast fall-off of $P_{\rm m}$ in the range of $0.75\, {\rm Mm} < y < 1\, {\rm Mm}$, which means that a driver with a wave-period $P_{\rm d}$ longer than about $300\,{\rm s}$ excites evanescent magnetoacoustic waves reaching only the low chromosphere. Such a drastic decline of wave-periods was already detected in the numerical data for stochastically excited magnetoacoustic-gravity waves in a quiet region of the solar atmosphere (see Fig.~4 in Kra\'skiewicz \& Murawski 2019) and in the observational data of Wi\'sniewska et al. (2016). We now discuss the energy flux, which is approximated by the following formula: \begin{equation} F_{\rm E} \approx \frac{1}{2} \varrho_{\rm i} \mathbf{V}_{\rm i}^2 \, c_{\rm s}\, . \label{eq:energy_flux}\\ \end{equation} We evaluate this flux at $y=1.9$~Mm.
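The magnitude of this energy-flux estimate can be illustrated with rough chromospheric numbers; all three input values below are assumed for illustration and are not read from the simulations.

```python
# Order-of-magnitude evaluation of F_E ~ (1/2) rho_i V_i^2 c_s near y = 1.9 Mm.
# All inputs are assumed, illustrative chromospheric values.
rho_i = 2.0e-11      # ion mass density [kg m^-3] (assumed)
V_i = 3.0e3          # velocity amplitude [m s^-1] (assumed)
c_s = 1.0e4          # sound speed [m s^-1] (assumed)

F_E = 0.5 * rho_i * V_i**2 * c_s      # [W m^-2]
F_E_cgs = F_E * 1.0e3                 # [erg cm^-2 s^-1]; 1 W m^-2 = 10^3 erg cm^-2 s^-1
```

These assumed inputs yield $F_{\rm E}$ of the order of a few hundred ${\rm erg\,cm^{-2}\,s^{-1}}$, the same order as the fluxes obtained for most of the considered drivers.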
Figure~\ref{fig:ef_Pd+By} illustrates $F_{\rm E}$ vs. the driving period, $P_{\rm d}$, (top) and the magnetic field, $B_{\rm y}$, (bottom). Note that $F_{\rm E}$ declines with $P_{\rm d}$, meaning that shorter wave-period waves carry more energy than longer wave-period waves (top). The variation of $F_{\rm E}$ with $B_{\rm y}$ reveals that for $B_{\rm y}=5$~G, $F_{\rm E}\approx 850\, {\rm erg\, cm}^{-2}\, {\rm s}^{-1}$. For higher values of $B_{\rm y}$, $F_{\rm E}$ grows, attaining its local maximum at $B_{\rm y}=15$~G, and then $F_{\rm E}$ declines to $F_{\rm E}\approx 910\, {\rm erg\, cm}^{-2}\, {\rm s}^{-1}$ for $B_{\rm y}=50$~G (bottom). These energy fluxes can be compared with the data in Tab.~1 of Withbroe \& Noyes (1977), according to which the total energy loss is $3\cdot 10^5\, {\rm erg\, cm}^{-2}\,{\rm s}^{-1}$. The data of Fig.~\ref{fig:ef_Pd+By} show that $F_{\rm E}$ is close to $1\cdot 10^5\, {\rm erg\, cm}^{-2}\,{\rm s}^{-1}$ for $P_{\rm d}=200$~s and $B_{\rm y}=11.4$~G (top). As for the other considered values $F_{\rm E}$ is lower than required to balance the energy losses in the top chromosphere (Withbroe \& Noyes 1977), we conclude that only the monochromatic driver with period $P_{\rm d}=200$~s and $B_{\rm y}=11.4$~G is able to provide the energy flux required to heat the top chromosphere. \subsection{Oscillations in mass densities} It is noteworthy that in the limit of long wave-period waves discussed in this paper, ions and neutrals acquire similar velocities, $\mathbf{V}_{\rm i}\simeq \mathbf{V}_{\rm n}$. As a result, the ion-neutral drag force is negligible and the collisional heating is marginal. Besides, wave dispersion is minimal (e.g., Zaqarashvili et al. 2011) and the two-fluid equations reduce to the two-species equations (e.g. Terada et al. 2009). These two-species equations reveal a potentially different evolution of the ion and neutral mass densities. Therefore, we discuss this evolution here.
Figure~\ref{fig:delta_Rho_i_n} shows the time-distance plot of the relative perturbations of the neutral mass density, $\Delta\varrho_{\rm n}/\varrho_{\rm {n}}=(\varrho_{\rm {n}}- \varrho_{\rm {hn}})/\varrho_{\rm {n}}$, from the equilibrium state. The pattern is similar to $V_{\rm iy}(t,y)$, but the investigation of the Fourier power spectrum shows a somewhat different behaviour of ions from neutrals ({Fig.~\ref{fig:Map_300_Rho}}). A more significant difference is seen in Fig.~\ref{fig:wavelet_300_Rho}, which shows the time-signatures of $\varrho_{\rm i}$ and $\varrho_{\rm n}$ collected at $y=1.5$~Mm (top panels), the corresponding wavelet spectra (left-bottom panels), and the global wavelet spectra (right-bottom panels). These global wavelet spectra reveal an excited signal of $P=150$~s for ions (top) and the lack of it for neutrals (bottom panel) at $y=1.5$~Mm. That the oscillations of $\varrho_{\rm {i}}$ and $\varrho_{\rm {n}}$ differ from those in $V_{\rm iy}$ is not surprising, as even in the case of linear MHD the mass density is governed by a different evolution equation than the vertical velocity (Roberts 2006). The presented results show that the waves with periods of 300~s cannot reach the solar corona because their power spectrum decreases significantly in the middle chromosphere. This clearly shows the effect of the solar chromosphere on the waves of this period, which is very different from its effect on the waves with periods of 200~s. Our results also demonstrate the changes between the driven and propagating wave profiles in the solar chromosphere. We also report on differences in the behaviour of ions and neutrals, which are revealed in their different oscillations, but we point out that these differences have already been observed before. \begin{figure} \begin{center} \includegraphics[width=8cm]{./Figures/Pd300_By_Rho_n_p.eps} \caption{\small Time-distance plot for $\Delta\varrho_{\rm n}/\varrho_{\rm {n}}$ in the case of $P_{\rm d}=300$~s.
} \label{fig:delta_Rho_i_n} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{./Figures/P_vs_y_Pd300_Rho_i_By.eps} \includegraphics[width=8cm]{./Figures/P_vs_y_Pd300_Rho_n_By.eps} \caption{\small Fourier period, $P$, vs. height, $y$, for $\varrho_{i}$ (top) and $\varrho_{n}$ (bottom) in the case of $P_{\rm d}=300$~s.} \vspace{-0.5cm} \label{fig:Map_300_Rho} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=14cm, height=11cm]{./Figures/wavelet_Pd300_Rho_i_By_1.5.eps} \includegraphics[width=14cm, height=11cm]{./Figures/wavelet_Pd300_Rho_n_By_1.5.eps} \caption{\small Wavelet spectrum for $\varrho_{i}$ (top) and $\varrho_{n}$ (bottom), collected at $y=1.5$~Mm, in the case of $P_{\rm d}=300$~s.} \label{fig:wavelet_300_Rho} \end{center} \end{figure*} \section{CONCLUSIONS AND SUMMARY}\label{sec:conclusions} We performed 1D numerical simulations of the propagation of neutral acoustic and ion magnetoacoustic two-fluid waves in the partially ionized lower solar atmosphere, consisting of ion+electron and neutral fluids which are coupled by ion-neutral collisions. The considered atmosphere was assumed to be permeated by a uniform vertical magnetic field, and the waves were excited by a monochromatic driver located at the bottom of the solar photosphere. We investigated the variations of wave-periods with height in the solar atmosphere, which allowed us to find the wave cutoffs and determine the conditions for the wave propagation. Our main results demonstrate that waves with wave-periods different from the long wave-periods generated by the driver are present, and that their vertical propagation is strongly affected by the solar atmosphere, which filters out the $300$~s (and longer) waves and makes some of these waves evanescent.
By identifying the evanescent waves, we were able to find the wave cutoffs and their variations in the solar atmosphere, and use them to determine the propagation conditions for the considered waves. Then, we used the propagation conditions to find the periods of waves that may carry their energy from the solar surface to the corona. Additionally, we showed that, in accordance with the analytical predictions (e.g. Roberts 2006), oscillations in mass densities exhibit different wave-period spectra than those in the vertical velocity component. Finally, we mention that a two-fluid model describes the dynamics of two fluids, such as ions+electrons and neutrals. An MHD model does not distinguish between these two fluids and is therefore inferior to a two-fluid model. Our results reveal that the Fourier spectra of the excited oscillations of these two fluids differ from one another and depend on whether these spectra are drawn for vertical velocities (see Sect.~3) or for mass densities (see Appendix). Obviously, for wave-periods much longer than the ion-neutral collision time, the spectra for vertical velocities are similar in the framework of MHD and the two-fluid model. \bigskip\noindent {\bf ACKNOWLEDGMENTS}\\ \noindent The JOANNA code was developed by Darek W\'{o}jcik with some contribution from Luis Kadowaki and Piotr Wo{\l}oszkiewicz. KM's work was done within the framework of the Polish National Science Centre (NCN) Grant No. 2020/37/B/ST9/00184. \bigskip\noindent {\bf DATA AVAILABILITY}\\ The data underlying this article are available in the article and in its online supplementary material. \bigskip\noindent {\bf REFERENCES}\\ \noindent Avrett, E. H. \& Loeser, R.
2008, ApJS, 175, 229\\ Ballester, J.~L., Alexeev, I., Collados, M., {et~al.}\ 2018, Space Science Reviews, 214, 58\\ Cally, P.~S., \& Hansen, S.~C.\ 2011, ApJ, 738, 119\\ Chmielewski, P., Srivastava, A.~K., Murawski, K., et al.\ 2013, MNRAS, 428, 40\\ Defouw, R.~J.\ 1976, ApJ, 209, 266\\ Felipe, T., Kuckein, C., \& Thaler, I.\ 2018, A\&A, 617, A39\\ Fleck, B., \& Schmitz, F.\ 1991, A\&A, 250, 235\\ Fleck, B., \& Schmitz, F.\ 1993, A\&A, 273, 671\\ Gough, D.~O.\ 1977, ApJ, 214, 196\\ Kalkofen, W., Rossi, P., Bodo, G., \& Massaglia, S.\ 1994, A\&A, 284, 976\\ Kayshap, P., Murawski, K., Srivastava, A.~K., et al.\ 2018, MNRAS, 479, 5512\\ Kra\'skiewicz, J.K. \& Murawski, K.\ 2019, MNRAS, 482, 3244\\ Kra\'skiewicz, J.K., Murawski, K., \& Musielak, Z.E.\ 2019, A\&A, 623, A62\\ {Ku{\'z}ma}, B., {W{\'o}jcik}, D., \& {Murawski}, K.\ 2019, ApJ, 878, 81\\ {Ku{\'z}ma}, B., {Murawski}, K., \& Musielak, Z.E.\ 2022, submitted to MNRAS\\ Lamb, H.\ 1909, Proc. Lond. Math. Soc., 7, 122\\ Lamb, H.\ 1910, Proc. R. Soc. London, A, 34, 551\\ Lamb, H.\ 1945, Hydrodynamics, Dover Publications, New York\\ Leake, J.~E., Lukin, V.~S., Linton, M.~G., \& Meier, E.~T.\ 2012, ApJ, 760, 109\\ Maneva, Y.G., Alvarez Laguna, A., Lani, A., \& Poedts, S.\ 2017, ApJ, 836, 197\\ Mart{\'{\i}}nez-Sykora, J., Pontieu, B.~D., Hansteen, V.~H., {et~al.}\ 2017, Science, 356, 1269\\ Murawski, K., Musielak, Z.~E., Konkol, P., et al.\ 2016, ApJ, 827, 37\\ Murawski, K., \& Musielak, Z.~E.\ 2010, A\&A, 518, A37\\ Murawski, K., \& Musielak, Z.E.\ 2016, MNRAS, 463, 4433\\ Murawski, K., Musielak, Z.E., \& W\'ojcik, D.\ 2020, ApJL, 896, L1\\ Murawski, K., \& Zaqarashvili, T.~V.\ 2010, A\&A, 519, A8\\ Musielak, Z.~E.\ 1990, ApJ, 351, 287\\ Musielak, Z.~E., \& Moore, R.\ 1995, ApJ, 452, 434\\ Musielak, Z.~E., Musielak, D.~E., \& Mobashi, H.\ 2006, Phys. Rev. E, 73, 036612-1\\ Musielak, Z.~E., Routh, S., \& Hammer, R.\ 2007, ApJ, 659, 650\\ Narain, U., \& Ulmschneider, P.\ 1996, Space Sci. Rev., 75, 453\\ Oliver, R., Soler, R., Terradas, J., \& Zaqarashvili, T.~V.\ 2016, ApJ, 818, 128\\ Perera, H.~K., Musielak, Z.~E., \& Murawski, K.\ 2015, MNRAS, 450, 3169\\ Priest, E. R.\ 2014, Magnetohydrodynamics of the Sun, Cambridge University Press, Cambridge\\ Popescu Braileanu, B., Lukin, V.~S., Khomenko, E., et al.\ 2019, A\&A, 627, A25\\ Rae, I. C., \& Roberts, B.\ 1982, ApJ, 256, 761\\ Roberts, B.\ 1991, Geophysical and Astrophysical Fluid Dynamics, 62, 83\\ Roberts, B., \& Ulmschneider, P.\ 1997, European Meeting on Solar Physics, 75\\ Roberts, B.\ 2004, in Proceedings of 'SOHO 13: Waves, Oscillations and Small-scale Transient Events in the Solar Atmosphere: Joint View from SOHO and TRACE', compiled by H. Lacoste, p. 1\\ Roberts, B.\ 2006, Phil. Trans. R. Soc. A, 364, 447\\ Rogers, F.~J. \& Nayfonov, A.\ 2002, ApJ, 576, 1064\\ Routh, S., \& Musielak, Z.~E.\ 2014, Astronomische Nachrichten, 335, 1043\\ Routh, S., Musielak, Z.~E., \& Hammer, R.\ 2010, ApJ, 709, 1297\\ Routh, S., Musielak, Z.E., Sundar, M.N., Joshi, S.S., \& Charan, S.\ 2020, Astrophys. \& Space Sci., 365, 139\\ Schmitz, F., \& Fleck, B.\ 1992, A\&A, 260, 447\\ Stark, B.A., \& Musielak, Z.E.\ 1993, ApJ, 409, 450\\ Terada, N., Shinagawa, H., Tanaka, T., Murawski, K., \& Terada, K.\ 2009, JGR, 114, A09208\\ V{\"o}gler, A.\ 2004, Three-dimensional simulations of magnetoconvection in the solar photosphere (Copernicus)\\ Vranjes, J., \& Krstic, P.\ 2013, A\&A, 554, A22\\ Wi{\'s}niewska, A., Musielak, Z.~E., Staiger, J., \& Roth, M.\ 2016, ApJ, 819, L23\\ Withbroe, G.~L. \& Noyes, R.~W.\ 1977, Ann. Rev. Astron. Astrophys., 15, 363\\ {W{\'o}jcik}, D., {Murawski}, K., \& {Musielak}, Z.~E.
2018, MNRAS, 481, 262\\ W\'ojcik, D., Murawski, K., \& Musielak, Z.E.\ 2019, ApJ, 882, 32\\ W\'ojcik, D., Ku\'zma, B., Murawski, K., \& Musielak, Z.E.\ 2020, A\&A, 635, A28\\ Zaqarashvili, T.V., Khodachenko, M.L., \& Rucker, H.O.\ 2011, A\&A, 529, A82\\ \end{document}
\section{Introduction} This paper deals with Gorenstein algebras/categories, singularity categories and a finiteness condition ensuring existence of a useful theory of support for modules over finite dimensional algebras. First we give some background and indicate how these subjects are linked for us. Then we discuss the common framework for our investigations and give a sample of the main results in the paper. Finally we describe the structure of the paper. For related work see Green--Madsen--Marcos \cite{GMM} and Nagase \cite{N}. In Subsection~\ref{subsection:nagase}, we compare our results to those of Nagase. For a group algebra of a finite group $G$ over a field $k$ there is a theory of support varieties of modules introduced by Jon Carlson in the seminal paper \cite{Ca}. This theory has proven useful and powerful, where the support of a module is defined in terms of the maximal ideal spectrum of the group cohomology ring $H^*(G,k)$. Crucial facts here are that the group cohomology ring is graded commutative and noetherian, and for any finitely generated $kG$-module $M$, the Yoneda algebra $\Ext^*_{kG}(M,M)$ is a finitely generated module over the group cohomology ring (see \cite{E,Go,V}). For a finitely generated $kG$-module $M$ the support variety is defined as the variety associated to the annihilator ideal of the action of the group cohomology ring $H^*(G,k)$ on $\Ext^*_{kG}(M,M)$. This construction is based on the Hopf algebra structure of the group algebra $kG$, and until recently a theory of support was not available for finite dimensional algebras in general. Snashall and Solberg \cite{SO} have extended the theory of support varieties from group algebras to finite dimensional algebras by replacing the group cohomology $H^*(G,k)$ with the Hochschild cohomology ring of the algebra. 
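In symbols, and in a standard formulation supplied here for orientation rather than quoted verbatim from \cite{Ca}, the support variety of a finitely generated $kG$-module $M$ is
\[
V_G(M)=\operatorname{MaxSpec}\bigl(H^*(G,k)/\operatorname{Ann}_{H^*(G,k)}\Ext^*_{kG}(M,M)\bigr),
\]
the closed subset of $\operatorname{MaxSpec} H^*(G,k)$ cut out by the annihilator of the action of $H^*(G,k)$ on $\Ext^*_{kG}(M,M)$; in the Snashall--Solberg theory the role of $H^*(G,k)$ is played by the Hochschild cohomology ring.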
Whenever properties similar to those for group algebras are satisfied, that is, (i) the Hochschild cohomology ring is noetherian and (ii) all Yoneda algebras $\Ext^*_\Lambda(M,M)$ for a finitely generated $\Lambda$-module $M$ are finitely generated modules over the Hochschild cohomology ring, then many of the same results as for group algebras of finite groups remain true when $\Lambda$ is a selfinjective algebra \cite{EHSST}. The above set of conditions is referred to as \fg{} (see \cite{EHSST, Solberg:1}). \textit{Triangulated categories of singularities}, or for simplicity \textit{singularity categories}, were introduced and studied by Buchweitz \cite{Buchweitz:unpublished}, under the name \textit{stable derived categories}, and later considered by Orlov \cite{Orlov}. For an algebraic variety $\mathbb{X}$, Orlov introduced the \textit{singularity category} of $\mathbb{X}$ as the Verdier quotient $\mathsf{D}_\mathsf{sg}(\mathbb{X}) = \mathsf{D}^\mathsf{b}(\coh \mathbb{X})/ {\operatorname{perf}}(\mathbb{X})$, where $\mathsf{D}^\mathsf{b}(\coh \mathbb{X})$ is the bounded derived category of coherent sheaves on $\mathbb{X}$ and ${\operatorname{perf}}(\mathbb{X})$ is the full subcategory consisting of perfect complexes on $\mathbb{X}$. The singularity category $\mathsf{D}_\mathsf{sg}(\mathbb{X})$ captures many geometric properties of $\mathbb{X}$. For instance, if the variety $\mathbb{X}$ is smooth, then the singularity category $\mathsf{D}_\mathsf{sg}(\mathbb{X})$ is trivial, but this is not true in general \cite{Orlov}. It should be noted that the singularity category is not only related to the study of the singularities of a given variety $\mathbb{X}$ but is also related to the Homological Mirror Symmetry Conjecture due to Kontsevich \cite{Kontsevich}. For more information we refer to \cite{Orlov, Orlov:2, Orlov:3}.
Similarly, the singularity category over a noetherian ring $R$ is defined \cite{Buchweitz:unpublished} to be the Verdier quotient of the bounded derived category $\mathsf{D}^\mathsf{b}(\fmod{R})$ of the finitely generated $R$-modules by the full subcategory ${\operatorname{perf}}(R)$ of perfect complexes and is denoted by \[ \mathsf{D}_\mathsf{sg}(R)=\mathsf{D}^\mathsf{b}(\fmod{R})/ {\operatorname{perf}}(R). \] In this case the singularity category $\mathsf{D}_\mathsf{sg}(R)$ can be viewed as a categorical measure of the singularities of the spectrum $\Spec(R)$. Moreover, by a fundamental result of Buchweitz \cite{Buchweitz:unpublished}, and independently by Happel \cite{Happel:3}, the singularity category of a Gorenstein ring is equivalent to the stable category of (maximal) Cohen--Macaulay modules $\uCM(R)$, where the latter is well known to be a triangulated category \cite{Happel:4}. Note that this equivalence generalizes the well known equivalence between the singularity category of a selfinjective algebra and the stable module category, a result due to Rickard \cite{Rickard}. If there exists a triangle equivalence between the singularity categories of two rings $R$ and $S$, then such an equivalence is called a \textit{singular equivalence} between $R$ and $S$. Singular equivalences were introduced by Chen, who studied singularity categories of non-Gorenstein algebras and investigated when there is a singular equivalence between certain extensions of rings \cite{Chen:schurfunctors, Chen:radicalsquarezero, Chen:Singular equivalences, Chen:Singular equivalences-trivial extensions}. Next, from the perspective of support varieties, we describe some links between the above topics. 
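Before doing so, we fix ideas with a minimal and entirely standard example, not taken from the references above. Let $R=k[x]/(x^2)$ for a field $k$. Since $R$ is selfinjective, the Buchweitz--Happel equivalence gives
\[
\mathsf{D}_\mathsf{sg}(R)\;\simeq\;\uCM(R)\;=\;\umod R,
\]
the stable module category, whose only indecomposable object is the simple module $k$, with $\Omega(k)\cong k$. By contrast, $\mathsf{D}_\mathsf{sg}(k[x])=0$: the polynomial ring has global dimension one, so every bounded complex of finitely generated modules is perfect.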
Support varieties for $\mathsf{D}^\mathsf{b}(\fmod\Lambda)$ using the Hochschild cohomology ring of $\Lambda$ were considered in \cite{Solberg:1} for a finite dimensional algebra $\Lambda$ over a field $k$, where all the perfect complexes ${\operatorname{perf}}(\Lambda)$ were shown to have trivial support variety. Hence the theory of support via the Hochschild cohomology ring naturally only says something about the Verdier quotient $\mathsf{D}^\mathsf{b}(\fmod\Lambda)/{\operatorname{perf}}(\Lambda)$ -- the singularity category. To have an interesting theory of support, the finiteness condition \fg{} is pivotal. When \fg{} is satisfied for an algebra $\Lambda$, then $\Lambda$ is Gorenstein \cite[Proposition~1.2]{EHSST}, or equivalently, $\fmod\Lambda$ is a Gorenstein category. As we pointed out above, when $\Lambda$ is Gorenstein, then by Buchweitz--Happel the singularity category $\mathsf{D}^\mathsf{b}(\fmod\Lambda)/{\operatorname{perf}}(\Lambda)$ is triangle equivalent to $\uCM(\Lambda)$, the stable category of Cohen--Macaulay modules. When $\Lambda$ is a selfinjective algebra, then $\envalg{\Lambda}$ is selfinjective and $\uCM(\envalg{\Lambda})=\umod\envalg{\Lambda}$ is a tensor triangulated category with $\Lambda$ as a tensor identity. Let $\mathcal{B}$ be the full subcategory of $\uCM(\envalg{\Lambda})$ consisting of all bimodules which are projective as a left and as a right $\Lambda$-module. Then $\mathcal{B}$ is also a tensor triangulated category with tensor identity $\Lambda$. The strictly positive part of the graded endomorphism ring \[ \End^*_{\mathcal{B}}(\Lambda) = \bigoplus_{i\in \mathbb{Z}} \Hom_{\mathcal{B}}(\Lambda, \Omega^i_{\envalg{\Lambda}}(\Lambda)), \] of the tensor identity $\Lambda$ in $\uCM(\envalg{\Lambda})$ is isomorphic to the strictly positive part $\HH{\geqslant 1}(\Lambda)$ of the Hochschild cohomology ring of $\Lambda$. This is the relevant part for the theory of support varieties via the Hochschild cohomology ring. 
In addition $\mathcal{B}$ is a tensor triangulated category acting on the triangulated category $\uCM(\Lambda)$, and we can consider a theory of support varieties for $\uCM(\Lambda)$ using the framework described in the forthcoming paper \cite{BKSS}. Therefore the singularity category of the enveloping algebra $\envalg{\Lambda}$ encodes the geometric object for support varieties of modules and complexes over the algebra $\Lambda$. Next we describe the categorical framework for our work. There has recently been a lot of interest around recollements of abelian (and triangulated) categories. These are exact sequences of abelian categories \[ \xymatrix{ 0 \ar[r]^{ } & \mathscr A \ar[r]^{i} & \mathscr B \ar[r]^{e} & \mathscr C \ar[r]^{ } & 0 } \] where both the inclusion functor $i\colon \mathscr A\longrightarrow \mathscr B$ and the quotient functor $e\colon \mathscr B\longrightarrow \mathscr C$ have left and right adjoints. They were first introduced by Beilinson, Bernstein and Deligne \cite{BBD} in the context of triangulated categories, in their study of derived categories of sheaves on singular spaces. Properties of recollements of abelian categories were studied by Franjou and Pirashvili in \cite{Pira}, motivated by the MacPherson--Vilonen construction for the category of perverse sheaves \cite{MV}, and recently homological properties of recollements of abelian and triangulated categories have also been studied in \cite{Psaroud}. Recollements of abelian categories were used by Cline, Parshall and Scott in the context of representation theory, see \cite{CPS, Parshall}, and later Kuhn used recollements in his study of polynomial functors, see \cite{Kuhn}. Recently, recollements of triangulated categories have appeared in the work of Angeleri H\"ugel, Koenig and Liu in connection with tilting theory, homological conjectures and stratifications of derived categories of rings, see \cite{AKL, AKL2, AKL3, AKLY4}.
Also, Chen and Xi have investigated recollements in relation with tilting theory \cite{Xi:1} and algebraic K-theory \cite{Xi:2, Xi:3}. Furthermore, Han \cite{Han} has studied the relations between recollements of derived categories of algebras, smoothness and Hochschild cohomology of algebras. It should be noted that module recollements, i.e.\ recollements of abelian categories whose terms are categories of modules, appear quite naturally in various settings. For instance any idempotent element $e$ in a ring $R$ induces a recollement situation between the module categories over the rings $R/\idealgenby{e}$, $R$ and $eRe$. In fact recollements of module categories are now well understood since every such recollement is equivalent, in an appropriate sense, to one induced by an idempotent element \cite{PsaroudVitoria}. We want to compare the \fg{} condition for Hochschild cohomology, Gorensteinness and the singularity categories of two algebras. Our aim in this paper is to present a common context where we can compare these properties for an algebra $\Lambda$ and $a\Lambda a$, where $a$ is an idempotent of $\Lambda$. This is achieved using recollements of abelian categories. To summarize our main results we introduce the following notion. Given a functor $f\colon \mathscr B \longrightarrow \mathscr C$ between abelian categories, the functor $f$ is called an \defterm{eventually homological isomorphism} if there is an integer $t$ such that for every pair of objects $B$ and $B'$ in $\mathscr B$, and every $j > t$, there is an isomorphism \[ \Ext_\mathscr B^j(B,B') \cong \Ext_\mathscr C^j(f(B),f(B')). \] Our main results, stated in the context of artin algebras, are summarized in the following theorem. The four parts of the theorem are proved in Corollary~\ref{cor:algebra-ext-iso}, Corollary~\ref{corsingular}, Corollary~\ref{corGorensteinArtinalg} and Theorem~\ref{thm:main-result-fg}, respectively. 
More general versions of the first three parts, in the setting of recollements of abelian categories, are given in Corollary~\ref{cor:recollement-e-is-iso} and Proposition~\ref{prop:ext-iso-implies-conditions}, Theorem~\ref{thmsingular} and Theorem~\ref{thm:gorenstein}. \begin{main-thm} Let $\Lambda$ be an artin algebra over a commutative ring $k$ and let $a$ be an idempotent element of $\Lambda$. Let $e$ be the functor $a-\colon \fmod{\Lambda}\longrightarrow \fmod{a\Lambda a}$ given by multiplication by $a$. Consider the following conditions$\colon$ \begin{align*} (\alpha) \ \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big) &< \infty & (\beta) \ \pd_{a{\Lambda}a} a{\Lambda} &< \infty \\ (\gamma) \ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big) &< \infty & (\delta) \ \pd_{\opposite{(a{\Lambda}a)}}{\Lambda}a &< \infty \end{align*} Then the following hold. \begin{enumerate} \item The following are equivalent$\colon$ \begin{enumerate} \item $(\alpha)$ and $(\beta)$ hold. \item $(\gamma)$ and $(\delta)$ hold. \item The functor $e$ is an eventually homological isomorphism. \end{enumerate} \item The functor $e$ induces a singular equivalence between $\Lambda$ and $a\Lambda a$ if and only if the conditions $(\beta)$ and $(\gamma)$ hold. \item Assume that $e$ is an eventually homological isomorphism. Then $\Lambda$ is Gorenstein if and only if $a{\Lambda}a$ is Gorenstein. \item Assume that $e$ is an eventually homological isomorphism. Assume also that $k$ is a field and that $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module (for instance, this is true if $k$ is algebraically closed). Then $\Lambda$ satisfies \fg{} if and only if $a{\Lambda}a$ satisfies \fg{}. \end{enumerate} \end{main-thm} Now we describe the contents of the paper section by section. 
In Section~\ref{section:prelim}, we recall notions and results on recollements of abelian categories and Hochschild cohomology that are used throughout the paper. In Section~\ref{section:extensions}, we study extension groups in a recollement of abelian categories $(\mathscr A,\mathscr B,\mathscr C)$. More precisely, we investigate when the exact functor $e \colon \mathscr B \longrightarrow \mathscr C$ is an eventually homological isomorphism. It turns out that the answer to this problem is closely related to the characterization given in \cite{Psaroud} of when the functor $e$ induces isomorphisms between extension groups in all degrees below some bound $n$. In Corollary~\ref{cor:recollement-e-is-iso} and Proposition~\ref{prop:ext-iso-implies-conditions} we give sufficient and necessary conditions, respectively, for the functor $e$ to be an eventually homological isomorphism. In the setting of the Main Theorem, we characterize when the functor $e$ is an eventually homological isomorphism in Corollary~\ref{cor:algebra-ext-iso}. The results of this section are used in Section~\ref{section:Gorenstein} and Section~\ref{section:fg-a(Lambda)a}. In Section~\ref{section:Gorenstein}, we study Gorenstein categories, introduced by Beligiannis and Reiten \cite{BR}. Assuming that we have an eventually homological isomorphism $f \colon \mathscr D \longrightarrow \mathscr F$ between abelian categories, we investigate when Gorensteinness is transferred between $\mathscr D$ and $\mathscr F$. Among other things, we prove that if $f$ is an essentially surjective eventually homological isomorphism, then $\mathscr D$ is Gorenstein if and only if $\mathscr F$ is (see Theorem~\ref{thm:gorenstein}). We apply this to recollements of abelian categories and recollements of module categories. 
In Section~\ref{sectionsingular}, we investigate singularity categories, in the sense of Buchweitz \cite{Buchweitz:unpublished} and Orlov \cite{Orlov}, in a recollement $(\mathscr A,\mathscr B,\mathscr C)$ of abelian categories. In fact, we give necessary and sufficient conditions for the quotient functor $e\colon \mathscr B\longrightarrow \mathscr C$ to induce a triangle equivalence between the singularity categories of $\mathscr B$ and $\mathscr C$, see Theorem~\ref{thmsingular}. This result generalizes earlier results by Chen \cite{Chen:schurfunctors}. We obtain the results of Chen in Corollary~\ref{corsingular} by applying Theorem~\ref{thmsingular} to rings with idempotents. Finally, for an artin algebra $\Lambda$ with an idempotent element $a$, we give a sufficient condition for the stable categories of Cohen--Macaulay modules of $\Lambda$ and $a\Lambda a$ to be triangle equivalent, see Corollary~\ref{corgorsingular}. In Section~\ref{section:cohomology} and Section~\ref{section:fg-a(Lambda)a}, which form a unit, we investigate the finite generation condition \fg{} for the Hochschild cohomology of a finite dimensional algebra over a field. In particular, in Section~\ref{section:cohomology} we show how we can compare the \fg{} condition for two different algebras. This is achieved by showing, for two graded rings and graded modules over them, that if we have isomorphisms in all but finitely many degrees, then the noetherian property of the rings and the finite generation of the modules are preserved, see Proposition~\ref{prop:graded-fin-gen} and Corollary~\ref{prop:fg-two-algebras}. In Section~\ref{section:fg-a(Lambda)a}, we use this result to show that \fg{} holds for a finite dimensional algebra $\Lambda$ over a field if and only if \fg{} holds for the algebra $a\Lambda a$, where $a$ is an idempotent element of $\Lambda$ which satisfies certain assumptions (see Theorem~\ref{thm:main-result-fg}).
The final Section~\ref{section:examples} is devoted to applications and examples of our main results. First we apply our results to triangular matrix algebras. For a triangular matrix algebra $\Lambda=\bigl(\begin{smallmatrix} \Sigma & 0 \\ {_{\Gamma}M_{\Sigma}} & \Gamma \end{smallmatrix}\bigr)$, we compare $\Lambda$ to the algebras $\Sigma$ and $\Gamma$ with respect to the \fg{} condition, Gorensteinness and singularity categories. In particular, we recover a result by Chen \cite{Chen:schurfunctors} concerning the singularity categories of $\Lambda$ and $\Sigma$. Then we consider some special cases where there are relations between the assumptions of our main results (see $(\alpha)$--$(\delta)$ in Main Theorem) and provide an interpretation for quotients of path algebras. Finally, we compare our results to those of Nagase \cite{N}. \subsection*{Conventions and Notation} For a ring $R$ we work usually with left $R$-modules and the corresponding category is denoted by $\Mod R$. The full subcategory of finitely presented $R$-modules is denoted by $\fmod R$. Our additive categories are assumed to have finite direct sums and our subcategories are assumed to be closed under isomorphisms and direct summands. The Jacobson radical of a ring $R$ is denoted by $\rad R$. By a module over an artin algebra $\Lambda$, we mean a finitely presented (generated) left $\Lambda$-module. \subsection*{Acknowledgments} This paper was written during a postdoc period of the first author at the Norwegian University of Science and Technology (NTNU, Trondheim) funded by NFR Storforsk grant no.\ 167130. The first author would like to thank his co-authors, Idun Reiten and all the members of the Algebra group for the warm hospitality and the excellent working conditions. The authors are grateful for the comments from Hiroshi Nagase on a preliminary version of this paper, which led to a much better understanding of the conditions occurring in the Main Theorem. 
\section{Preliminaries} \label{section:prelim} In this section we recall notions and results on recollements of abelian categories and Hochschild cohomology. \subsection{Recollements of Abelian Categories} In this subsection we recall the definition of a recollement situation in the context of abelian categories (see for instance \cite{Pira,Happel,Kuhn}), fix notation, and recall some well-known properties of recollements which are used later in the paper. We also include our basic source of examples of recollements. For an additive functor $F\colon \mathscr A \longrightarrow \mathscr B$ between additive categories, we denote by $\Image F = \{B \in \mathscr B \mid B \cong F(A) \ \text{for some} \ A \in \mathscr A\}$ the \defterm{essential image} of $F$ and by $\Ker F = \{A \in \mathscr A \mid F(A) = 0\}$ the \defterm{kernel} of $F$. \begin{defn} \label{defnrec} A \defterm{recollement situation} between abelian categories $\mathscr A,\mathscr B$ and $\mathscr C$ is a diagram \[ \recollement{\mathscr A}{\mathscr B}{\mathscr C}{q}{i}{p}{l}{e}{r} \] henceforth denoted by $(\mathscr A,\mathscr B,\mathscr C)$, satisfying the following conditions$\colon$ \begin{enumerate} \item[\bf 1.] $(l,e,r)$ is an adjoint triple. \item[\bf 2.] $(q,i,p)$ is an adjoint triple. \item[\bf 3.] The functors $i$, $l$, and $r$ are fully faithful. \item[\bf 4.] $\Image i = \Ker e$. \end{enumerate} \end{defn} In the next result we collect some basic properties of a recollement situation of abelian categories that can be derived easily from Definition~\ref{defnrec}. For more details, see \cite{Pira,Psaroud}. \begin{prop} \label{properties} Let $(\mathscr A,\mathscr B,\mathscr C)$ be a recollement of abelian categories. Then the following hold. \begin{enumerate} \item The functors $i\colon \mathscr A\longrightarrow \mathscr B$ and $e\colon \mathscr B\longrightarrow \mathscr C$ are exact. \item The compositions $ei$, $ql$ and $pr$ are zero.
\item The functor $e\colon \mathscr B\longrightarrow \mathscr C$ is essentially surjective. \item The units of the adjoint pairs $(i,p)$ and $(l,e)$ and the counits of the adjoint pairs $(q,i)$ and $(e,r)$ are isomorphisms$\colon$ \[ \Id_\mathscr A \xrightarrow{\cong} pi \qquad \Id_\mathscr C \xrightarrow{\cong} el \qquad qi \xrightarrow{\cong} \Id_\mathscr A \qquad er \xrightarrow{\cong} \Id_\mathscr C \] \item The functors $l\colon \mathscr C\longrightarrow \mathscr B$ and $q\colon \mathscr B\longrightarrow \mathscr A$ preserve projective objects and the functors $r\colon \mathscr C\longrightarrow \mathscr B$ and $p\colon \mathscr B\longrightarrow \mathscr A$ preserve injective objects. \item The functor $i\colon \mathscr A\longrightarrow \mathscr B$ induces an equivalence between $\mathscr A$ and the Serre subcategory $\Ker e = \Image i$ of $\mathscr B$. Moreover, $\mathscr A$ is a localizing and colocalizing subcategory of $\mathscr B$ and there is an equivalence of categories $\mathscr B/\mathscr A \simeq \mathscr C$. \item For every $B$ in $\mathscr B$ there are $A$ and $A'$ in $\mathscr A$ such that the units and counits of the adjunctions induce the following exact sequences$\colon$ \[ \xymatrix{ 0 \ar[r]^{ } & i(A) \ar[r]^{ } & le(B) \ar[r]^{ } & B \ar[r]^{ } & iq(B) \ar[r] & 0 } \] and \[ \xymatrix{ 0 \ar[r]^{ } & ip(B) \ar[r]^{ } & B \ar[r]^{ } & re(B) \ar[r] & i(A') \ar[r] & 0 } \] \end{enumerate} \end{prop} Throughout the paper, we apply our results to recollements of module categories, and in particular to recollements of module categories over artin algebras as described in the following example. \begin{exam} \label{exam:mod-recollements} Let $\Lambda$ be an artin $k$-algebra, where $k$ is a commutative artin ring, and let $a$ be an idempotent element in $\Lambda$. 
\begin{enumerate} \item We have the following recollement of abelian categories$\colon$ \[ \recollement{\fmod \Lambda/\idealgenby{a}}{\fmod \Lambda}{\fmod a \Lambda a} {\Lambda/\idealgenby{a} \otimes_\Lambda -} {\inc} {\Hom_\Lambda(\Lambda/\idealgenby{a},-)} {\Lambda a \otimes_{a \Lambda a} -} {e=a(-)} {\Hom_{a \Lambda a}(a \Lambda,-)} \] The functor $e\colon \fmod{\Lambda}\longrightarrow \fmod{a\Lambda a}$ can also be described as follows$\colon$ $e=a(-)\cong \Hom_{\Lambda}(\Lambda a,-)\cong a\Lambda\otimes_{\Lambda}-$. We write $\idealgenby{a}$ for the ideal of $\Lambda$ generated by the idempotent element $a$. Then every left $\Lambda/\idealgenby{a}$-module is annihilated by $\idealgenby{a}$ and thus the category $\fmod{\Lambda/\idealgenby{a}}$ is the kernel of the functor $a(-)$. \item Let $\envalg{\Lambda}=\Lambda \otimes_k \opposite{\Lambda}$ be the enveloping algebra of $\Lambda$. The element $\varepsilon=a\otimes \opposite{a}$ is an idempotent element of $\envalg{\Lambda}$. Therefore as above we have the following recollement of abelian categories$\colon$ \[ \recollement{\fmod \envalg{\Lambda}/\idealgenby{\varepsilon}} {\fmod \envalg{\Lambda}} {\fmod \envalg{(a \Lambda a)}} {\envalg{\Lambda}/\idealgenby{\varepsilon} \otimes_{\envalg{\Lambda}} -} {\inc} {\Hom_{\envalg{\Lambda}}(\envalg{\Lambda}/\idealgenby{\varepsilon},-)} {\envalg{\Lambda}\varepsilon \otimes_{\envalg{(a \Lambda a)}} -} {E = \varepsilon(-)} {\Hom_{\envalg{(a \Lambda a)}}(\varepsilon\envalg{\Lambda},-)} \] Note that $\envalg{(a \Lambda a)} \cong \varepsilon\envalg{\Lambda}\varepsilon$ as $k$-algebras. \end{enumerate} \end{exam} \begin{rem} As in Example~\ref{exam:mod-recollements}, any idempotent element $e$ in a ring $R$ induces a recollement situation between the module categories over the rings $R/\idealgenby{e}$, $R$ and $eRe$. This should be considered as the universal example for recollements of abelian categories whose terms are categories of modules.
Indeed, in \cite{PsaroudVitoria}, it is proved that any recollement of module categories is equivalent, in an appropriate sense, to one induced by an idempotent element. \end{rem} \subsection{Hochschild cohomology rings} \label{subsection:hh} We briefly explain the terminology we need regarding Hochschild cohomology and the finite generation condition \fg{}, and recall some important results. For a more detailed exposition of these topics, see sections 2--5 of \cite{Solberg:1}. Let $\Lambda$ be an artin algebra over a commutative ring $k$. We define the \defterm{Hochschild cohomology ring} $\HH*(\Lambda)$ of $\Lambda$ by \[ \HH*(\Lambda) = \Ext_{\envalg{\Lambda}}^*(\Lambda, \Lambda) = \bigoplus_{i=0}^\infty \Ext_{\envalg{\Lambda}}^i(\Lambda, \Lambda). \] This is a graded $k$-algebra with multiplication given by Yoneda product. Hochschild cohomology was originally defined by Hochschild in \cite{Hochschild}, using the bar resolution. It was shown in \cite[IX, \S6]{CartanEilenberg} that our definition coincides with the original definition when $\Lambda$ is projective over $k$. Gerstenhaber showed in \cite{Gerstenhaber:1} that the Hochschild cohomology ring as originally defined is graded commutative. This implies that the Hochschild cohomology ring as defined above is graded commutative when $\Lambda$ is projective over $k$. The following more general result was shown in \cite[Theorem~1.1]{SO} (see also \cite{Suarez}, which proves graded commutativity of several cohomology theories in a uniform way). \begin{thm} \label{thm:hh-graded-commutative} Let $\Lambda$ be an algebra over a commutative ring $k$ such that $\Lambda$ is flat as a module over $k$. Then the Hochschild cohomology ring $\HH*(\Lambda)$ is graded commutative. 
\end{thm} To describe the finite generation condition \fg{}, we first need to define a $\HH*(\Lambda)$-module structure on the direct sum of all extension groups of a $\Lambda$-module with itself (for more details about this module structure, see \cite{SO}). Assume that $\Lambda$ is flat as a $k$-module, and let $M$ be a $\Lambda$-module. The direct sum \[ \Ext_\Lambda^*(M, M) = \bigoplus_{i=0}^\infty \Ext_\Lambda^i(M, M) \] of all extension groups of $M$ with itself is a graded $k$-algebra with multiplication given by Yoneda product. We give it a graded $\HH*(\Lambda)$-module structure by the graded ring homomorphism \[ \varphi_M \colon \HH*(\Lambda) \longrightarrow \Ext_\Lambda^*(M, M), \] which is defined in the following way. Any homogeneous element of positive degree in $\HH*(\Lambda)$ can be represented by an exact sequence \[ \eta\colon 0 \longrightarrow \Lambda \longrightarrow X \longrightarrow P_n \longrightarrow \cdots \longrightarrow P_1 \longrightarrow P_0 \longrightarrow \Lambda \longrightarrow 0 \] of $\envalg{\Lambda}$-modules, where every $P_i$ is projective. Tensoring this sequence throughout with $M$ gives an exact sequence \[ 0 \longrightarrow \Lambda \otimes_\Lambda M \longrightarrow X \otimes_\Lambda M \longrightarrow P_n \otimes_\Lambda M \longrightarrow \cdots \longrightarrow P_1 \otimes_\Lambda M \longrightarrow P_0 \otimes_\Lambda M \longrightarrow \Lambda \otimes_\Lambda M \longrightarrow 0 \] of $\Lambda$-modules (the exactness of this sequence follows from the facts that $\Lambda$ is flat as a $k$-module and that the modules $P_i$ are projective $\envalg{\Lambda}$-modules). Using the isomorphism $\Lambda \otimes_\Lambda M \cong M$, we get an exact sequence of $\Lambda$-modules starting and ending in $M$; we define $\varphi_M([\eta])$ to be the element of $\Ext_\Lambda^*(M, M)$ represented by this sequence.
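In compact notation (our summary of the construction just described, with $\eta$ representing a class of degree $m$), the map and the resulting module action read
\[
\varphi_M([\eta])=[\eta\otimes_\Lambda M]\in\Ext_\Lambda^{m}(M,M),
\qquad
z\cdot\theta=\varphi_M(z)\,\theta
\quad\text{for } z \in \HH*(\Lambda),\ \theta \in \Ext_\Lambda^*(M,M),
\]
where the product on the right-hand side is the Yoneda product in $\Ext_\Lambda^*(M,M)$.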
For elements of degree zero in $\HH*(\Lambda)$, the map $\varphi_M$ is defined by tensoring with $M$ and using the identification $\Lambda \otimes_\Lambda M \cong M$. In \cite{EHSST}, Erdmann--Holloway--Snashall--Solberg--Taillefer identified certain assumptions about an algebra $\Lambda$ which are sufficient for the theory of support varieties to have good properties. They called these assumptions \textbf{Fg1} and \textbf{Fg2}. We say that an algebra satisfies \fg{} if it satisfies both \textbf{Fg1} and \textbf{Fg2}. We use the following definition of \fg{}, which is equivalent (by \cite[Proposition~5.7]{Solberg:1}) to the definition of \textbf{Fg1} and \textbf{Fg2} given in \cite{EHSST}. \begin{defn} \label{defn:fg} Let $\Lambda$ be an algebra over a commutative ring $k$ such that $\Lambda$ is flat as a module over $k$. We say that $\Lambda$ satisfies the \defterm{\fg{} condition} if the following is true: \begin{enumerate} \item The ring $\HH*(\Lambda)$ is noetherian. \item The $\HH*(\Lambda)$-module $\Ext_\Lambda^*(\Lambda/\rad \Lambda, \Lambda/\rad \Lambda)$ is finitely generated. \end{enumerate} \end{defn} The following result states that in our definition of \fg{}, we could have replaced part (ii) by the same requirement for all $\Lambda$-modules. It can be proved similarly to \cite[Proposition~1.4]{EHSST}. \begin{thm} \label{fg-implies-every-ext-fin-gen} If an artin algebra $\Lambda$ satisfies the \fg{} condition, then $\Ext_\Lambda^*(M, M)$ is a finitely generated $\HH*(\Lambda)$-module for every $\Lambda$-module $M$. \end{thm} We end this section by describing a connection between the \fg{} condition and Gorensteinness. \begin{thm}\cite[Theorem~1.5~(a)]{EHSST} \label{fg-implies-gorenstein} If an artin algebra $\Lambda$ satisfies the \fg{} condition, then $\Lambda$ is Gorenstein.
\end{thm} \section{Eventually homological isomorphisms in recollements} \label{section:extensions} Given a functor $f\colon \mathscr D \longrightarrow \mathscr F$ between abelian categories and an integer $t$, the functor $f$ is called a \defterm{$t$-homological isomorphism} if there is an isomorphism \[ \Ext_\mathscr D^j(D,D') \cong \Ext_\mathscr F^j(f(D),f(D')) \] for every pair of objects $D$ and $D'$ in $\mathscr D$, and every $j > t$. If $f$ is a $t$-homological isomorphism for some $t$, then it is an \defterm{eventually homological isomorphism}. In this section, we investigate when the functor $e$ in a recollement \[ \recollement{\mathscr A}{\mathscr B}{\mathscr C}{q}{i}{p}{l}{e}{r} \] of abelian categories is an eventually homological isomorphism. The functor $e$ induces maps \begin{equation} \label{eqn:ext-map} \Ext^j_\mathscr B(X,Y) \ \longrightarrow \ \Ext^j_\mathscr C(e(X),e(Y)) \end{equation} of extension groups for all objects $X$ and $Y$ in $\mathscr B$ and for every $j \ge 0$. With one argument fixed and the other varying over all objects, we study when these maps are isomorphisms in almost all degrees, that is, for every degree $j$ greater than some bound $n$ (see Theorem~\ref{thm:ext-iso-proj} and Theorem~\ref{thm:ext-iso-inj}). We use this to find two sets of sufficient conditions for the functor $e\colon \mathscr B \longrightarrow \mathscr C$ to be an eventually homological isomorphism (Corollary~\ref{cor:recollement-e-is-iso}), and we find a partial converse (Proposition~\ref{prop:ext-iso-implies-conditions}). Finally, we specialize these results to artin algebras, using the recollement $(\fmod \Lambda/\idealgenby{a}, \fmod \Lambda, \fmod a{\Lambda}a)$ of Example~\ref{exam:mod-recollements}~(i). In particular, we characterize when the functor $e\colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ is an eventually homological isomorphism (Corollary~\ref{cor:algebra-ext-iso}).
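\begin{rem}
To put the definition into perspective, we record an observation which is immediate from it: if both $\mathscr D$ and $\mathscr F$ have global dimension at most $t$, then the extension groups on both sides vanish in every degree $j > t$, so any functor $f\colon \mathscr D \longrightarrow \mathscr F$ is trivially a $t$-homological isomorphism. The notion therefore carries information mainly when at least one of the two categories has infinite global dimension.
\end{rem}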
These results are used in Section~\ref{section:Gorenstein} for comparing Gorensteinness of the categories in a recollement, and in Section~\ref{section:fg-a(Lambda)a} for comparing the \fg{} condition of the algebras $\Lambda$ and $a{\Lambda}a$, where $a$ is an idempotent in $\Lambda$. We start by fixing some notation. For an injective coresolution $0 \longrightarrow B \longrightarrow I^0 \longrightarrow I^1 \longrightarrow \cdots$ of $B$ in $\mathscr B$, we say that the image of the morphism $I^{n-1}\longrightarrow I^n$ is an \defterm{\mbox{$n$-th} cosyzygy} of $B$, and we denote it by $\Sigma^n(B)$. Dually, if $\cdots \longrightarrow P_1 \longrightarrow P_0 \longrightarrow B\longrightarrow 0$ is a projective resolution of $B$ in $\mathscr B$, then we say that the kernel of the morphism $P_{n-1}\longrightarrow P_{n-2}$ is an \defterm{\mbox{$n$-th} syzygy} of $B$, and we denote it by $\Omega^n(B)$. Also, if $\mathcal X$ is a class of objects in $\mathscr B$, then we denote by $\mathcal X^{\bot}=\{B\in \mathscr B \mid \Hom_{\mathscr B}(X,B)=0 \text{ for every } X\in \mathcal X\}$ the \defterm{right orthogonal subcategory} of $\mathcal X$ and by ${^{\bot}\mathcal X}=\{B\in \mathscr B \mid \Hom_{\mathscr B}(B,X)=0 \text{ for every } X\in \mathcal X\}$ the \defterm{left orthogonal subcategory} of $\mathcal X$. We now describe precisely how the maps \eqref{eqn:ext-map} induced by the functor $e$ in a recollement are defined. Let $\mathscr D$ and $\mathscr F$ be abelian categories and $f\colon \mathscr D \longrightarrow \mathscr F$ an exact functor which has a left and a right adjoint (for example, the functors $i$ and $e$ in a recollement have these properties).
If \[ \xi\colon \xymatrix{ 0 \ar[r]^{} & X_n \ar[r]^{d_n \ \ } & X_{n-1} \ar[r]^{} & \cdots \ar[r]^{} & X_{1} \ar[r]^{d_1 \ } & X_0 \ar[r]^{} & 0 } \] is an exact sequence in $\mathscr D$, then we denote by $f(\xi)$ the exact sequence \[ f(\xi)\colon \xymatrix{ 0 \ar[r]^{} & f(X_n) \ar[r]^{f(d_n) \ \ \ } & f(X_{n-1}) \ar[r]^{} & \cdots \ar[r]^{} & f(X_{1}) \ar[r]^{f(d_1) \ } & f(X_0) \ar[r]^{} & 0 } \] in $\mathscr F$. It is clear that this operation commutes with Yoneda product; that is, if $\xi$ and $\zeta$ are composable exact sequences in $\mathscr D$, then $f(\xi\zeta) = f(\xi) \cdot f(\zeta)$. For every pair of objects $X$ and $Y$ in $\mathscr D$ and every nonnegative integer $j$, we define a group homomorphism \[ f_{X,Y}^j \ \colon \ \Ext_{\mathscr D}^j(X,Y) \ \longrightarrow \ \Ext_{\mathscr F}^j(f(X),f(Y)) \] by \begin{align*} f_{X,Y}^0(d) &= f(d) &&\text{for a morphism $d \colon X \longrightarrow Y$;} \\ f_{X,Y}^j([\eta]) &= [f(\eta)] &&\text{for a $j$-fold extension $\eta$ of $X$ by $Y$, where $j>0$.} \end{align*} For an object $X$ in $\mathscr D$, the direct sum $\Ext^*_\mathscr D(X,X) = \bigoplus_{j=0}^\infty \Ext^j_\mathscr D(X,X)$ is a graded ring with multiplication given by Yoneda product, and taking the maps $f^j_{X,X}$ in all degrees $j$ gives a graded ring homomorphism \[ f^*_{X,X}\colon \Ext^*_\mathscr D(X,X) \longrightarrow \Ext^*_\mathscr F(f(X),f(X)). \] \begin{rem} We explain briefly why the maps $f_{X,Y}^j$ and $f^*_{X,X}$ defined above are homomorphisms. \begin{enumerate} \item The functor $f$ being a right and left adjoint implies that it preserves limits and colimits and therefore it preserves pullbacks and pushouts. Thus the map $f_{X,Y}^j$ preserves the Baer sum between two extensions. \item For checking that the map $f_{X,X}^*$ is a graded ring homomorphism, the only nontrivial case to consider is the product of a morphism and an extension. For this case, we again use that the functor $f$ preserves pullbacks and pushouts. 
\end{enumerate} \end{rem} We now consider the maps \[ e_{B,B'}^j \colon \Ext_\mathscr B^j(B, B') \longrightarrow \Ext_\mathscr C^j(e(B), e(B')) \] induced by the functor $e \colon \mathscr B \longrightarrow \mathscr C$ in a recollement, where we let one argument be fixed and the other vary over all objects of $\mathscr B$. In \cite{Psaroud}, the first author studied when these maps are isomorphisms for all degrees up to some bound $n$, that is, for $0 \le j \le n$. This immediately leads to a description of when these maps are isomorphisms in all degrees, which we state as the following theorem. \begin{thm}\cite[Propositions~3.3 and 3.4, Theorem~3.10]{Psaroud} \label{extthm} Let $(\mathscr A,\mathscr B,\mathscr C)$ be a recollement of abelian categories and assume that $\mathscr B$ and $\mathscr C$ have enough projective and injective objects. Let $B$ be an object in $\mathscr B$. \begin{enumerate} \item The following statements are equivalent$\colon$ \begin{enumerate} \item The map $e_{B,B'}^{j} \colon \Ext_{\mathscr B}^{j}(B,B') \longrightarrow \Ext_{\mathscr C}^{j}(e(B),e(B'))$ is an isomorphism for every object $B'$ in $\mathscr B$ and every nonnegative integer $j$. \item The object $B$ has a projective resolution of the form \[ \xymatrix{ \cdots \ar[r]^{} & l(P_2) \ar[r]^{} & l(P_1) \ar[r]^{} & l(P_0) \ar[r]^{} & B \ar[r]^{} & 0 } \] where $P_j$ is a projective object in $\mathscr C$. \item $\Ext_{\mathscr B}^j(B,i(A))=0$ for every $A\in \mathscr A$ and $j\geq 0$. \item $\Ext_{\mathscr B}^j(B,i(I))=0$ for every $I\in \Inj{\mathscr A}$ and $j\geq 0$. \end{enumerate} \item The following statements are equivalent$\colon$ \begin{enumerate} \item The map $e_{B',B}^{j} \colon \Ext_{\mathscr B}^{j}(B',B) \longrightarrow \Ext_{\mathscr C}^{j}(e(B'),e(B))$ is an isomorphism for every object $B'$ in $\mathscr B$ and every nonnegative integer $j$. 
\item The object $B$ has an injective coresolution of the form \[ \xymatrix{ 0 \ar[r]^{} & B \ar[r]^{} & r(I^0) \ar[r]^{} & r(I^1) \ar[r]^{} & r(I^2) \ar[r]^{} & \cdots } \] where $I^j$ is an injective object in $\mathscr C$. \item $\Ext_{\mathscr B}^j(i(A),B)=0$ for every $A\in \mathscr A$ and $j\geq 0$. \item $\Ext_{\mathscr B}^j(i(P),B)=0$ for every $P\in \Proj{\mathscr A}$ and $j\geq 0$. \end{enumerate} \end{enumerate} \end{thm} The above theorem describes when the maps $e_{B,B'}^j$ induced by the functor $e$ are isomorphisms in all degrees $j$. Our aim in this section is to give a similar description of when these maps are isomorphisms in almost all degrees. The basic idea is to translate the conditions in the above theorem to similar conditions stated for almost all degrees, and show the equivalence of these conditions by using the above theorem and dimension shifting. In order for this to work, however, we need to modify the conditions somewhat. We obtain Theorem~\ref{thm:ext-iso-proj} which is stated below and generalizes parts of Theorem~\ref{extthm}~(i) (and the dual Theorem~\ref{thm:ext-iso-inj} which generalizes parts of Theorem~\ref{extthm}~(ii)). In order to prove the theorem, we need a general version of dimension shifting as stated in the following lemma. \begin{lem} \label{lem:dimension-shift} Let $\mathscr A$ be an abelian category, $n$ be an integer, and let \[ \epsilon\colon \xymatrix{ 0 \ar[r]^{} & X \ar[r]^{} & E_{m-1} \ar[r]^{} & \cdots \ar[r]^{} & E_{0} \ar[r]^{} & Y \ar[r]^{} & 0 } \] be an exact sequence in $\mathscr A$ with $\pd_{\mathscr A}E_i \le n$ for every $i$. Then for every $i > n$ and $Z\in \mathscr A$, the map \[ \xymatrix@C=0.5cm{ \epsilon^* \ \colon \ {\Ext}_{\mathscr A}^{i}(X,Z) \ \ar[rr] && \ \Ext^{i+m}_{\mathscr A}(Y,Z),} \] given by $\epsilon^*([\eta]) = [\eta\epsilon]$, is an isomorphism. 
\end{lem} Now we are ready to show our characterization of when the functor $e$ in a recollement induces isomorphisms of extension groups in almost all degrees. \begin{thm} \label{thm:ext-iso-proj} Let $(\mathscr A,\mathscr B,\mathscr C)$ be a recollement of abelian categories and assume that $\mathscr B$ and $\mathscr C$ have enough projective and injective objects. Consider the following statements for an object $B$ of $\mathscr B$ and two integers $n$ and $m$$\colon$ \begin{enumerate} \item[(a)] The map $e_{B,B'}^{j} \colon \Ext_{\mathscr B}^{j}(B,B') \longrightarrow \Ext_{\mathscr C}^{j}(e(B),e(B'))$ is an isomorphism for every object $B'$ in $\mathscr B$ and every integer $j > m + n$. \item[(b)] The object $B$ has a projective resolution of the form \[ \xymatrix{ \cdots \ar[r] & l(Q_1) \ar[r] & l(Q_0) \ar[r] & P_{n-1} \ar[r] & \cdots \ar[r] & P_0 \ar[r] & B \ar[r] & 0 } \] where each $Q_j$ is a projective object in $\mathscr C$. \item[(c)] $\Ext_{\mathscr B}^j(B,i(A))=0$ for every $A \in \mathscr A$ and $j > n$, and there exists an $n$-th syzygy of $B$ lying in ${^{\bot}i(\mathscr A)}$. \item[(d)] $\Ext_{\mathscr B}^j(B,i(I))=0$ for every $I\in \Inj{\mathscr A}$ and $j > n$, and and there exists an $n$-th syzygy of $B$ lying in ${^{\bot}i(\Inj{\mathscr A})}$. \end{enumerate} We have the following relations between these statements$\colon$ \begin{enumerate} \item $\textup{(b)} \iff \textup{(c)} \iff \textup{(d)}$. \item If $\pd_{\mathscr C} e(P) \le m$ for every projective object $P$ in $\mathscr B$, then $\textup{(b)} \implies \textup{(a)}$. 
\end{enumerate} \end{thm} \begin{proof} (i) By dimension shifting, statement (c) is equivalent to \[ \Ext_\mathscr B^j(\Omega^n(B), i(A)) = 0 \qquad\text{for every $j \ge 0$ and every $A \in \mathscr A$,} \] and statement (d) is equivalent to \[ \Ext_\mathscr B^j(\Omega^n(B), i(I)) = 0 \qquad\text{for every $j \ge 0$ and every $I \in \Inj \mathscr A$,} \] where in both cases $\Omega^n(B)$ is a suitably chosen $n$-th syzygy of $B$. The equivalence of statements (b), (c) and (d) now follows from the equivalence of (b), (c) and (d) in Theorem~\ref{extthm}~(i). (ii) Let \[ \pi\colon \xymatrix{ 0 \ar[r] & K \ar[r] & P_{n-1} \ar[r] & \cdots \ar[r] & P_1 \ar[r] & P_0 \ar[r] & B \ar[r] & 0 } \] be the beginning of the chosen projective resolution of $B$, where $K = \Omega^n(B)$ is the $n$-th syzygy of $B$. Consider the following group homomorphisms$\colon$ \begin{equation} \label{eqn:ext-maps} \Ext_\mathscr B^j(B,B') \xleftarrow{\pi^*} \Ext_\mathscr B^{j-n}(K,B') \xrightarrow{e_{K,B'}^{j-n}} \Ext_\mathscr C^{j-n}(e(K),e(B')) \xrightarrow{(e(\pi))^*} \Ext_\mathscr C^j(e(B),e(B')) \end{equation} Here, the maps $\pi^*$ and $(e(\pi))^*$ are isomorphisms by Lemma~\ref{lem:dimension-shift}. Note that for $(e(\pi))^*$ we use the fact that $\pd_{\mathscr C} e(P) \le m$ for every projective object $P$ in $\mathscr B$. The map $e_{K,B'}^{j-n}$ is an isomorphism by Theorem~\ref{extthm}~(i). Thus, we have an isomorphism \[ (e(\pi))^* \circ e_{K,B'}^{j-n} \circ (\pi^*)^{-1} \colon \Ext_\mathscr B^j(B,B') \longrightarrow \Ext_\mathscr C^j(e(B),e(B')) \] for every $j\geq m+n+1$ and $B'\in \mathscr B$. We want to show that this is the same map as $e_{B,B'}^j$. We consider an element $[\eta] \in \Ext_\mathscr B^{j-n}(K,B')$, and follow it through the maps~\eqref{eqn:ext-maps}.
We then get the following elements$\colon$ \[ \xymatrix@R=1em@C=3em{ {\Ext_\mathscr B^j(B,B')} & {\Ext_\mathscr B^{j-n}(K,B')} \ar[l]_{\pi^*}^\cong \ar[r]^-{e_{K,B'}^{j-n}}_-\cong & {\Ext_\mathscr C^{j-n}(e(K),e(B'))} \ar[r]^-{(e(\pi))^*}_-\cong & {\Ext_\mathscr C^j(e(B),e(B'))} \\ {[\eta\pi]} & {[\eta]} \ar@{|->}[l] \ar@{|->}[r] & {[e(\eta)]} \ar@{|->}[r] & {[e(\eta) \cdot e(\pi)]} \ar@{=}[d] \\ &&& {[e(\eta\pi)]} } \] This shows that our isomorphism takes any element $[\zeta] \in \Ext_\mathscr B^j(B,B')$ to the element $[e(\zeta)] \in \Ext_\mathscr C^j(e(B),e(B'))$. Thus, our isomorphism is $e_{B,B'}^j$. \end{proof} Dually to the above theorem, we have the following generalization of some of the implications in Theorem~\ref{extthm}~(ii). \begin{thm} \label{thm:ext-iso-inj} Let $(\mathscr A,\mathscr B,\mathscr C)$ be a recollement of abelian categories and assume that $\mathscr B$ and $\mathscr C$ have enough projective and injective objects. Consider the following statements for an object $B$ of $\mathscr B$ and two integers $n$ and $m$$\colon$ \begin{enumerate} \item[(a)] The map $e_{B',B}^{j} \colon \Ext_{\mathscr B}^{j}(B',B) \longrightarrow \Ext_{\mathscr C}^{j}(e(B'),e(B))$ is an isomorphism for every object $B'$ in $\mathscr B$ and every integer $j > m + n$. \item[(b)] The object $B$ has an injective coresolution of the form \[ \xymatrix{ 0 \ar[r] & B \ar[r] & I^0 \ar[r] & \cdots \ar[r] & I^{n-1} \ar[r] & r(J^0) \ar[r] & r(J^1) \ar[r] & \cdots } \] where each $J^j$ is an injective object in $\mathscr C$. \item[(c)] $\Ext_{\mathscr B}^j(i(A),B)=0$ for every $A \in \mathscr A$ and $j > n$, and there exists an $n$-th cosyzygy of $B$ lying in $i(\mathscr A)^{\bot}$. \item[(d)] $\Ext_{\mathscr B}^j(i(P),B)=0$ for every $P\in \Proj{\mathscr A}$ and $j > n$, and there exists an $n$-th cosyzygy of $B$ lying in $i(\Proj{\mathscr A})^{\bot}$.
\end{enumerate} We have the following relations between these statements$\colon$ \begin{enumerate} \item $\textup{(b)} \iff \textup{(c)} \iff \textup{(d)}$. \item If $\id_{\mathscr C} e(I) \le m$ for every injective object $I$ in $\mathscr B$, then $\textup{(b)} \implies \textup{(a)}$. \end{enumerate} \end{thm} In the above results, we fixed an object $B$ of the category $\mathscr B$, and considered the maps $e_{B,B'}^j$ or $e_{B',B}^j$ for all objects $B'$ in $\mathscr B$. With certain conditions on the object $B$, we found that these maps are isomorphisms for almost all degrees $j$. We now describe some conditions on the recollement which are sufficient to ensure that the maps $e_{B,B'}^j$ are isomorphisms in almost all degrees $j$ for all objects $B$ and $B'$ of $\mathscr B$, in other words, that the functor $e$ is an eventually homological isomorphism. These conditions are given in the following corollary, which follows directly from Theorem~\ref{thm:ext-iso-proj} and Theorem~\ref{thm:ext-iso-inj}. \begin{cor} \label{cor:recollement-e-is-iso} Let $(\mathscr A,\mathscr B,\mathscr C)$ be a recollement and assume that $\mathscr B$ and $\mathscr C$ have enough projective and injective objects. Let $m$ and $n$ be two integers. Assume that one of the following two sets of conditions holds$\colon$ \begin{enumerate} \item $(\alpha')$ $\sup\{\id_{\mathscr B}i(I) \mid I\in \Inj{\mathscr A} \}<m$. \noindent $(\epsilon)$ Every object of $\mathscr B$ has an $m$-th syzygy which lies in ${^{\bot}i(\Inj{\mathscr A})}$. \noindent $(\beta)$ $\sup\{\pd_{\mathscr C}e(P) \mid P\in \Proj{\mathscr B} \}\leq n$. \item $(\gamma')$ $\sup\{\pd_{\mathscr B}i(P) \mid P\in \Proj{\mathscr A} \}<n$. \noindent $(\opposite{\epsilon})$ Every object of $\mathscr B$ has an $n$-th cosyzygy which lies in $i(\Proj{\mathscr A})^{\bot}$. \noindent $(\delta)$ $\sup\{\id_{\mathscr C}e(I) \mid I\in \Inj{\mathscr B} \}\leq m$.
\end{enumerate} Then the functor $e$ is an $(m+n)$-homological isomorphism, and in particular the map \[ \xymatrix@C=0.5cm{ e^j_{B,B'} \ \colon \ {\Ext}_{\mathscr B}^j(B,B') \ \ar[rr]^{ \cong } && \ \Ext^j_{\mathscr C}(e(B),e(B'))} \] is an isomorphism for all objects $B$ and $B'$ of $\mathscr B$ and for every $j>m+n$. \end{cor} We now show a partial converse of the above result. \begin{prop} \label{prop:ext-iso-implies-conditions} Let $(\mathscr A,\mathscr B,\mathscr C)$ be a recollement and assume that $\mathscr B$ and $\mathscr C$ have enough projective and injective objects. Assume that the functor $e$ is an eventually homological isomorphism. Then the following hold$\colon$ \begin{enumerate} \item[$(\alpha)$] $\sup \{ \id_\mathscr B i(A) \mid A \in \mathscr A \} < \infty$. \item[$(\beta)$] $\sup \{ \pd_\mathscr C e(P) \mid P \in \Proj \mathscr B \} < \infty$. \item[$(\gamma)$] $\sup \{ \pd_\mathscr B i(A) \mid A \in \mathscr A \} < \infty$. \item[$(\delta)$] $\sup \{ \id_\mathscr C e(I) \mid I \in \Inj \mathscr B \} < \infty$. \end{enumerate} In particular, if $e$ is a $t$-homological isomorphism for a nonnegative integer $t$, then each of the above dimensions is bounded by $t$. \end{prop} \begin{proof} $(\alpha)$ Let $A$ be an object of $\mathscr A$. For every $B$ in $\mathscr B$ and $j > t$, we get \[ \Ext_\mathscr B^j(B, i(A)) \cong \Ext_\mathscr C^j(e(B), ei(A)) \cong \Ext_\mathscr C^j(e(B), 0) = 0, \] since $ei = 0$ by Proposition~\ref{properties}, and thus $\id_\mathscr B i(A) \le t$. The proof of $(\gamma)$ is similar. $(\beta)$ Let $P$ be a projective object of $\mathscr B$. For every $C$ in $\mathscr C$ and $j > t$, we get \[ \Ext_\mathscr C^j(e(P), C) \cong \Ext_\mathscr C^j(e(P), el(C)) \cong \Ext_\mathscr B^j(P, l(C)) = 0, \] since $el \cong \Id_\mathscr C$ by Proposition~\ref{properties}, and thus $\pd_\mathscr C e(P) \le t$. The proof of $(\delta)$ is similar.
\end{proof} \begin{rem} \label{rem:relative-gldim} Recall from \cite{Psaroud} that the supremum $\sup \{ \pd_\mathscr B i(A) \mid A \in \mathscr A \}$, whose finiteness appears in statement $(\gamma)$ above, is called the $\mathscr A$-relative global dimension of $\mathscr B$, and is denoted by $\gld_\mathscr A \mathscr B$. \end{rem} We close this section by interpreting Theorem~\ref{thm:ext-iso-proj}, Theorem~\ref{thm:ext-iso-inj} and Corollary~\ref{cor:recollement-e-is-iso} for artin algebras. To this end, for an artin algebra $\Lambda$ and an idempotent element $a\in \Lambda$, we denote by \[ e = (a\Lambda \otimes_\Lambda -) \colon \fmod \Lambda \longrightarrow \fmod a \Lambda a \] the quotient functor of the recollement $(\fmod{\Lambda/\idealgenby{a}}, \fmod{\Lambda}, \fmod{a \Lambda a})$, see Example~\ref{exam:mod-recollements}. We first need the following well-known observation. \begin{lem} \label{lemHom} Let $\Lambda$ be an artin algebra, $M$ be a $\Lambda$-module and $S$ be a simple $\Lambda$-module. Then for every $n\geq 1$ we have$\colon$ \[ \Ext_{\Lambda}^n(M,S)\cong \Hom_{\Lambda}(\Omega^n(M),S) \qquad\text{and}\qquad \Ext_{\Lambda}^n(S,M)\cong \Hom_{\Lambda}(S,\Sigma^n(M)), \] where $\Omega^n(M)$ is the $n$-th syzygy in a minimal projective resolution of $M$, and $\Sigma^n(M)$ is the $n$-th cosyzygy in a minimal injective coresolution of $M$. \end{lem} We also need the next easy result, whose proof is left to the reader. \begin{lem} \label{lem:algebra-dimension-inequalities} Let $\Lambda$ be an artin algebra and $a$ an idempotent element of $\Lambda$. Then the following inequalities hold$\colon$ \begin{enumerate} \item $\pd_{a \Lambda a} e(P) \le \pd_{a \Lambda a} a\Lambda$, for every $P\in \proj{\Lambda}$. \item $\id_{a \Lambda a} e(I) \le \pd_{\opposite{(a \Lambda a)}} \Lambda a$, for every $I\in \inj{\Lambda}$. \end{enumerate} \end{lem} The following is a consequence of Theorem~\ref{thm:ext-iso-proj} and Theorem~\ref{thm:ext-iso-inj} for artin algebras.
\begin{cor} \label{cor:algebra-ext-iso-onemodule} Let $\Lambda$ be an artin algebra and $a$ an idempotent element in $\Lambda$, and let $m$ and $n$ be integers. \begin{enumerate} \item Let $M$ be a $\Lambda$-module such that $\Ext^j_{\Lambda}\big(M,(\Lambda/\idealgenby{a})/(\rad{\Lambda/ \idealgenby{a}})\big)=0$ for every $j\geq m$. Assume that $\pd_{a \Lambda a} a \Lambda \le n$. Then the map \[ \xymatrix@C=0.5cm{ e^j_{M,N} \ \colon \ {\Ext}_{\Lambda}^j(M,N) \ \ar[rr]^{ \cong \ } && \ \Ext^j_{a\Lambda a}(e(M),e(N)) } \] is an isomorphism for every $\Lambda$-module $N$, and for every integer $j > m + n$. \item Let $M$ be a $\Lambda$-module such that $\Ext^j_{\Lambda}\big((\Lambda/\idealgenby{a})/(\rad{\Lambda/ \idealgenby{a}}),M\big)=0$ for every $j\geq n$. Assume that $\pd_{\opposite{(a \Lambda a)}} \Lambda a \le m$. Then the map \[ \xymatrix@C=0.5cm{ e^j_{N,M} \ \colon \ {\Ext}_{\Lambda}^j(N,M) \ \ar[rr]^{ \cong \ } && \ \Ext^j_{a\Lambda a}(e(N),e(M)) } \] is an isomorphism for every $\Lambda$-module $N$, and for every integer $j > m + n$. \end{enumerate} \end{cor} \begin{proof} (i) Consider the recollement $(\fmod{\Lambda/\idealgenby{a}},\fmod{\Lambda},\fmod{a\Lambda a})$ of Example~\ref{exam:mod-recollements}. Since every simple $\Lambda/ \idealgenby{a}$-module is also simple as a $\Lambda$-module, it follows from Lemma~\ref{lemHom} that \[ \Hom_{\Lambda} \big(\Omega^m(M), (\Lambda/\idealgenby{a})/(\rad{\Lambda/\idealgenby{a}}) \big) = 0. \] This implies that $\Hom_{\Lambda}(\Omega^m(M),N)=0$ for every $\Lambda/\idealgenby{a}$-module $N$, since every module has a finite composition series. Then the result is a consequence of Theorem~\ref{thm:ext-iso-proj}. (ii) The result follows as in (i), using Theorem~\ref{thm:ext-iso-inj} and the second isomorphism of Lemma~\ref{lemHom}.
\end{proof} As an immediate consequence of the above results we have the following characterization of when the functor $e \colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ is an eventually homological isomorphism. This constitutes the first part of the Main Theorem presented in the introduction. \begin{cor} \label{cor:algebra-ext-iso} Let $\Lambda$ be an artin algebra and $a$ an idempotent element in $\Lambda$. The following are equivalent: \begin{enumerate} \item There is an integer $s$ such that for every pair of $\Lambda$-modules $M$ and $N$, and every $j > s$, the map \[ \xymatrix@C=0.5cm{ e^j_{M,N} \ \colon \ {\Ext}_{\Lambda}^j(M,N) \ \ar[rr]^{ \cong \ } && \ \Ext^j_{a\Lambda a}(e(M),e(N)) } \] is an isomorphism. \item The functor $e$ is an eventually homological isomorphism. \item $(\alpha)$ $\id_{\Lambda}\big((\Lambda/\idealgenby{a})/(\rad{\Lambda/ \idealgenby{a}})\big) < \infty$ and $(\beta)$ $\pd_{a \Lambda a} a \Lambda < \infty$. \item $(\gamma)$ $\pd_{\Lambda}\big((\Lambda/\idealgenby{a})/(\rad{\Lambda/ \idealgenby{a}})\big) < \infty$ and $(\delta)$ $\pd_{\opposite{(a \Lambda a)}} \Lambda a < \infty$. \end{enumerate} In particular, if the functor $e$ is a $t$-homological isomorphism, then each of the dimensions in \textup{(iii)} and \textup{(iv)} is at most $t$. The bound $s$ in \textup{(i)} is bounded by the sum of the dimensions occurring in \textup{(iii)}, and also by the sum of the dimensions occurring in \textup{(iv)}. \end{cor} \begin{proof} The implication (i) $\implies$ (ii) is immediate, since (i) says precisely that $e$ is an $s$-homological isomorphism. The implications (ii) $\implies$ (iii) and (ii) $\implies$ (iv) follow from Proposition~\ref{prop:ext-iso-implies-conditions}. The implications (iii) $\implies$ (i) and (iv) $\implies$ (i) follow from Corollary~\ref{cor:algebra-ext-iso-onemodule}. \end{proof} \section{Gorenstein categories and eventually homological isomorphisms} \label{section:Gorenstein} Our aim in this section is to study Gorenstein categories, introduced by Beligiannis--Reiten \cite{BR}.
The main objective is to study when a functor $f\colon \mathscr D \longrightarrow \mathscr F$ between abelian categories preserves Gorensteinness. A central property here is whether the functor $f$ is an eventually homological isomorphism. We prove that if $f \colon \mathscr D \longrightarrow \mathscr F$ is an essentially surjective eventually homological isomorphism, then $\mathscr D$ is Gorenstein if and only if $\mathscr F$ is. The results are applied to recollements of abelian categories, and recollements of module categories. We start by briefly recalling the notion of Gorenstein categories introduced in \cite{BR}. Let $\mathscr A$ be an abelian category with enough projective and injective objects. We consider the following invariants associated to $\mathscr A\colon$ \[ \spli{\mathscr A} = \sup\{\pd_{\mathscr A}I \mid I\in \Inj{\mathscr A} \} \ \ \text{and} \ \ \silp{\mathscr A} = \sup\{\id_{\mathscr A}P \mid P\in \Proj{\mathscr A} \} \] Then we have the following notion of Gorensteinness for abelian categories. \begin{defn}\cite{BR} An abelian category $\mathscr A$ with enough projective and injective objects is called \defterm{Gorenstein} if $\spli{\mathscr A}<\infty$ and $\silp{\mathscr A}<\infty$. \end{defn} Note that the above notion is a common generalization of Gorensteinness in the commutative and in the noncommutative setting. We refer to \cite[Chapter VII]{BR} for a thorough discussion on Gorenstein categories and connections with Cohen--Macaulay objects and cotorsion pairs. We start with the following useful observation, whose direct proof is left to the reader. \begin{lem} \label{lemsilpspli} Let $\mathscr A$ be an abelian category with enough projective and injective objects and let $X$ be an object of $\mathscr A$. \begin{enumerate} \item If $\pd_{\mathscr A}X<\infty$, then $\id_{\mathscr A}X\le \silp{\mathscr A}$. \item If $\id_{\mathscr A}X<\infty$, then $\pd_{\mathscr A}X\le \spli{\mathscr A}$.
\end{enumerate} \end{lem} In the main result of this section we study eventually homological isomorphisms between abelian categories with enough projective and injective objects. In particular we show that an essentially surjective eventually homological isomorphism preserves Gorensteinness. This is a general version of the third part of the Main Theorem presented in the introduction. \begin{thm} \label{thm:gorenstein} Let $f \colon \mathscr D \longrightarrow \mathscr F$ be a functor, where $\mathscr D$ and $\mathscr F$ are abelian categories with enough projective and injective objects, and let $t$ be a nonnegative integer. Consider the following four statements. \begin{alignat*}{2} \text{\textup{(a)} For every $D$ in $\mathscr D$\textup{:}} &\left\{ \begin{aligned} \pd_\mathscr D D &\le \sup \{ \pd_\mathscr F f(D), t \} \\ \id_\mathscr D D &\le \sup \{ \id_\mathscr F f(D), t \} \end{aligned} \right. \qquad\qquad & \textup{(c)} &\left\{ \begin{aligned} \spli \mathscr D &\le \sup \{ \spli \mathscr F, t \} \\ \silp \mathscr D &\le \sup \{ \silp \mathscr F, t \} \end{aligned} \right. \\ \text{\textup{(b)} For every $D$ in $\mathscr D$\textup{:}} &\left\{ \begin{aligned} \pd_\mathscr F f(D) &\le \sup \{ \pd_\mathscr D D, t \} \\ \id_\mathscr F f(D) &\le \sup \{ \id_\mathscr D D, t \} \end{aligned} \right. & \textup{(d)} &\left\{ \begin{aligned} \spli \mathscr F &\le \sup \{ \spli \mathscr D, t \} \\ \silp \mathscr F &\le \sup \{ \silp \mathscr D, t \} \end{aligned} \right. \end{alignat*} We have the following. \begin{enumerate} \item If $f$ is a $t$-homological isomorphism, then \textup{(a)} holds. \item If $f$ is an essentially surjective $t$-homological isomorphism, then \textup{(a)} and \textup{(b)} hold. \item If \textup{(a)} and \textup{(b)} hold, then \textup{(c)} holds. \item If \textup{(a)} and \textup{(b)} hold and $f$ is essentially surjective, then \textup{(c)} and \textup{(d)} hold. \end{enumerate} In particular, we obtain the following. 
\begin{enumerate} \setcounter{enumi}{4} \item If $f$ is an essentially surjective eventually homological isomorphism, then $\mathscr D$ is Gorenstein if and only if $\mathscr F$ is Gorenstein. \item If $f$ is an eventually homological isomorphism and \textup{(b)} holds, then $\mathscr F$ being Gorenstein implies that $\mathscr D$ is Gorenstein. \end{enumerate} \end{thm} \begin{proof} We first assume that $f$ is an essentially surjective $t$-homological isomorphism and show the inequality $\pd_\mathscr F f(D) \le \sup \{ \pd_\mathscr D D, t \}$; the other inequalities in parts (i) and (ii) are proved similarly. The inequality clearly holds if $D$ has infinite projective dimension. Assume that $D$ has finite projective dimension, and let $n = \max \{ \pd_\mathscr D D, t \} + 1$. For any object $X$ in $\mathscr F$, there is an object $X'$ in $\mathscr D$ with $f(X') \cong X$, since the functor $f$ is essentially surjective. By using that $f$ is a $t$-homological isomorphism, we get \[ \Ext_\mathscr F^n(f(D), X) \cong \Ext_\mathscr F^n(f(D), f(X')) \cong \Ext_\mathscr D^n(D, X') = 0. \] This means that we have $\pd_\mathscr F f(D) < n$, and therefore $\pd_\mathscr F f(D) \le \sup \{ \pd_\mathscr D D, t \}$. We now assume that (a) and (b) hold and $f$ is essentially surjective, and show the inequality $\spli \mathscr F \le \sup \{ \spli \mathscr D, t \}$; the other inequalities in parts (iii) and (iv) are proved similarly. Let $I$ be an injective object of $\mathscr F$. Since $f$ is essentially surjective, we can choose an object $D$ in $\mathscr D$ such that $f(D) \cong I$. By (a), the object $D$ has finite injective dimension, and then by Lemma~\ref{lemsilpspli}, its projective dimension is at most $\spli \mathscr D$. Using (b), we get \[ \pd_\mathscr F I \le \sup \{ \pd_\mathscr D D, t \} \le \sup \{ \spli \mathscr D, t \}. \] Since this holds for any injective object $I$ in $\mathscr F$, we have $\spli \mathscr F \le \sup \{ \spli \mathscr D, t \}$. 
Parts (v) and (vi) follow by combining parts (i)--(iv). \end{proof} Now we return to the setting of a recollement $(\mathscr A, \mathscr B, \mathscr C)$. We use Theorem~\ref{thm:gorenstein} to study the functors $i\colon \mathscr A \longrightarrow \mathscr B$ and $e\colon \mathscr B \longrightarrow \mathscr C$ with respect to Gorensteinness. \begin{cor} \label{cor:gorenstein-recollement} Let $(\mathscr A,\mathscr B,\mathscr C)$ be a recollement of abelian categories. \begin{enumerate} \item Assume that the categories $\mathscr B$ and $\mathscr C$ have enough projective and injective objects, and that the functor $e$ is an eventually homological isomorphism. Then $\mathscr B$ is Gorenstein if and only if $\mathscr C$ is Gorenstein. \item Assume that the category $\mathscr B$ has enough projective and injective objects, and that we have either \par\noindent\begin{minipage}{\linewidth} \[ \left. \begin{aligned} \sup\{\pd_{\mathscr B}i(P) \mid P\in \Proj{\mathscr A}\} &\le 1 \\ \sup\{\id_{\mathscr B}i(I) \mid I\in \Inj{\mathscr A}\} &< \infty \end{aligned} \right\} \qquad\text{or}\qquad \left\{ \begin{aligned} \sup\{\pd_{\mathscr B}i(P) \mid P\in \Proj{\mathscr A}\} &< \infty \\ \sup\{\id_{\mathscr B}i(I) \mid I\in \Inj{\mathscr A}\} &\le 1 \end{aligned} \right. \] \end{minipage} If $\mathscr B$ is Gorenstein, then $\mathscr A$ is Gorenstein. \end{enumerate} \end{cor} \begin{proof} Part (i) follows directly from Theorem~\ref{thm:gorenstein}~(v), noting that $e$ is essentially surjective by Proposition~\ref{properties}. We now show part (ii). By Proposition~\ref{properties}~(iv) and (v), $\mathscr A$ has enough projective and injective objects since $\mathscr B$ does (see \cite[Remark~2.5]{Psaroud}). It follows from \cite[Proposition~4.15]{Psaroud} (or its dual) that the functor $i\colon \mathscr A\longrightarrow \mathscr B$ is a homological embedding, i.e.\ the map $i_{X,Y}^n$ is an isomorphism for all objects $X$ and $Y$ in $\mathscr A$ and every $n \ge 0$. 
In particular, this means that $i$ is a $0$-homological isomorphism. By Theorem~\ref{thm:gorenstein}~(i), we have \begin{equation} \label{eqn:gorenstein-recollement-inequalities} \pd_\mathscr A A \le \pd_\mathscr B i(A) \qquad\text{and}\qquad \id_\mathscr A A \le \id_\mathscr B i(A) \end{equation} for every object $A$ in $\mathscr A$. We show that $\spli \mathscr A \le \spli \mathscr B$. Let $I$ be an injective object in $\mathscr A$. By assumption, we have $\id_\mathscr B i(I) < \infty$, and then by the first inequality in \eqref{eqn:gorenstein-recollement-inequalities} and Lemma~\ref{lemsilpspli}, we have \[ \pd_\mathscr A I \le \pd_\mathscr B i(I) \le \spli \mathscr B. \] Hence we have $\spli \mathscr A \le \spli \mathscr B$. By a similar argument, we have $\silp \mathscr A \le \silp \mathscr B$. The result follows. \end{proof} In a recollement $(\mathscr A, \mathscr B, \mathscr C)$ we have seen that the implications ``$\mathscr B$ is Gorenstein if and only if $\mathscr C$ is Gorenstein'' and ``$\mathscr B$ Gorenstein implies $\mathscr A$ Gorenstein'' hold under various additional assumptions. It is then natural to ask whether the categories $\mathscr A$ and $\mathscr C$ being Gorenstein could imply that $\mathscr B$ is Gorenstein. The next example shows that this is not true in general. \begin{exam} Let $k$ be a field and consider the algebra $k[x]/\idealgenby{x^2}$. Then from the triangular matrix algebra \[ \Lambda = \begin{pmatrix} k & k \\ 0 & k[x]/\idealgenby{x^2} \\ \end{pmatrix} \] we have the recollement of module categories $(\fmod{k[x]/\idealgenby{x^2}}, \fmod{\Lambda}, \fmod{k})$, where $\fmod{k[x]/\idealgenby{x^2}}$ and $\fmod{k}$ are Gorenstein categories but $\fmod{\Lambda}$ is not Gorenstein. We refer to \cite[Example~4.3~(2)]{Chen:schurfunctors} for more details about the algebra $\Lambda$. \end{exam} Recall from \cite{BR} that a ring $R$ is called \defterm{left Gorenstein} if the category $\Mod{R}$ of left $R$-modules is a Gorenstein category.
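The following standard examples may help to keep the definition concrete; we record them as an illustration only, using the description of Gorenstein categories through the finiteness of the invariants $\spli$ and $\silp$ appearing in the arguments above. If $R$ has finite left global dimension $d$, then every left $R$-module has projective and injective dimension at most $d$, so \[ \spli \Mod{R} \le d \qquad\text{and}\qquad \silp \Mod{R} \le d, \] and hence $R$ is left Gorenstein. Similarly, if $R$ is quasi-Frobenius, for instance $R = k[x]/\idealgenby{x^2}$ for a field $k$, then the projective and the injective left $R$-modules coincide, so $\spli \Mod{R} = 0 = \silp \Mod{R}$ and $R$ is again left Gorenstein.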
Applying Corollary~\ref{cor:gorenstein-recollement} to the module recollement $(\Mod{R/\idealgenby{e}},\Mod{R},\Mod{eRe})$ from Example~\ref{exam:mod-recollements}, we have the following result. \begin{cor} \label{cor:gorenstein-ring} Let $R$ be a ring and $e$ an idempotent element of $R$. \begin{enumerate} \item If the functor $e-\colon \Mod R \longrightarrow \Mod eRe$ is an eventually homological isomorphism, then the ring $R$ is left Gorenstein if and only if the ring $eRe$ is left Gorenstein. \item Assume that we have either \par\noindent\begin{minipage}{\linewidth} \[ \left. \begin{aligned} \pd_R R/\idealgenby{e} &\le 1 \\ \sup\{\id_{R}i(I) \mid I\in \Inj{R/\idealgenby{e}} \} &< \infty \end{aligned} \right\} \qquad\text{or}\qquad \left\{ \begin{aligned} \pd_R R/\idealgenby{e} &< \infty \\ \sup\{\id_{R}i(I) \mid I\in \Inj{R/\idealgenby{e}} \} &\le 1 \end{aligned} \right. \] \end{minipage} If the ring $R$ is left Gorenstein, then the ring $R/\idealgenby{e}$ is left Gorenstein. \end{enumerate} \end{cor} Recall that an artin algebra $\Lambda$ is called \defterm{Gorenstein} if $\id{_{\Lambda}\Lambda}<\infty$ and $\id\Lambda_{\Lambda}<\infty$ (see \cite{AR:applications, AR:cm}). Note that $\fmod{\Lambda}$ is a Gorenstein category if and only if $\Lambda$ is a Gorenstein algebra. We close this section with the following consequence for artin algebras, whose first part constitutes the third part of the Main Theorem presented in the introduction. \begin{cor} \label{corGorensteinArtinalg} Let $\Lambda$ be an artin algebra and $a$ an idempotent element of $\Lambda$. \begin{enumerate} \item Assume that the functor $a-\colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ is an eventually homological isomorphism. Then the algebra $\Lambda$ is Gorenstein if and only if the algebra $a\Lambda a$ is Gorenstein. \item Assume that we have either \[ \left.
\begin{aligned} \pd_\Lambda \Lambda/\idealgenby{a} &\le 1 \\ \pd_{\opposite{\Lambda}} \Lambda/\idealgenby{a} &< \infty \end{aligned} \right\} \qquad\text{or}\qquad \left\{ \begin{aligned} \pd_\Lambda \Lambda/\idealgenby{a} &< \infty \\ \pd_{\opposite{\Lambda}} \Lambda/\idealgenby{a} &\le 1 \end{aligned} \right. \] If the algebra $\Lambda$ is Gorenstein, then the algebra $\Lambda/\idealgenby{a}$ is Gorenstein. \end{enumerate} \end{cor} \section{Singular equivalences in recollements} \label{sectionsingular} Our purpose in this section is to study singularity categories, in the sense of Buchweitz \cite{Buchweitz:unpublished} and Orlov \cite{Orlov}, in a recollement of abelian categories $(\mathscr A,\mathscr B,\mathscr C)$. In particular we are interested in finding necessary and sufficient conditions such that the singularity categories of $\mathscr B$ and $\mathscr C$ are triangle equivalent. We start by recalling some well known facts about singularity categories. Let $\mathscr B$ be an abelian category with enough projective objects. We denote by $\mathsf{D}^\mathsf{b}(\mathscr B)$ the derived category of bounded complexes of objects of $\mathscr B$ and by $\mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$ the homotopy category of bounded complexes of projective objects of $\mathscr B$. Then the \defterm{singularity category} of $\mathscr B$ (\cite{Buchweitz:unpublished, Orlov}) is defined to be the Verdier quotient$\colon$ \[ \mathsf{D}_\mathsf{sg}(\mathscr B) = \mathsf{D}^\mathsf{b}(\mathscr B) / \mathsf{K}^\mathsf{b}(\Proj{\mathscr B}) \] See \cite{Chen:relativesing} for a discussion of more general quotients of $\mathsf{D}^\mathsf{b}(\mathscr B)$ by $\mathsf{K}^\mathsf{b}(\mathcal X)$, where $\mathcal X$ is a selforthogonal subcategory of $\mathscr B$. 
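Before turning to the triangulated structure, we record an elementary observation for orientation (standard, and immediate from the definition): the singularity category measures the failure of objects of $\mathscr B$ to have finite projective dimension, in the sense that \[ \mathsf{D}_\mathsf{sg}(\mathscr B) = 0 \quad\Longleftrightarrow\quad \pd_{\mathscr B}X < \infty \ \text{for every $X\in \mathscr B$}. \] For instance, for a field $k$ every object of $\fmod{k}$ is projective, and hence $\mathsf{D}_\mathsf{sg}(\fmod{k}) = 0$.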
It is well known that the singularity category $\mathsf{D}_\mathsf{sg}(\mathscr B)$ carries a unique triangulated structure such that the quotient functor $Q_{\mathscr B}\colon \mathsf{D}^\mathsf{b}(\mathscr B)\longrightarrow \mathsf{D}_\mathsf{sg}(\mathscr B)$ is triangulated, see \cite{Krause:Localization, Neeman:book, Verdier}. Recall that the objects of the singularity category $\mathsf{D}_\mathsf{sg}(\mathscr B)$ are the objects of the bounded derived category $\mathsf{D}^\mathsf{b}(\mathscr B)$, the morphisms between two objects $X^{\bullet}\longrightarrow Y^{\bullet}$ are equivalence classes of fractions $(X^{\bullet}\leftarrow L^{\bullet}\rightarrow Y^{\bullet})$ such that the cone of the morphism $L^{\bullet}\longrightarrow X^{\bullet}$ belongs to $\mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$ and the exact triangles in $\mathsf{D}_\mathsf{sg}(\mathscr B)$ are all the triangles which are isomorphic to images of exact triangles of $\mathsf{D}^\mathsf{b}(\mathscr B)$ via the quotient functor $Q_{\mathscr B}$. Note that a complex $X^{\bullet}$ is zero in $\mathsf{D}_\mathsf{sg}(\mathscr B)$ if and only if $X^{\bullet}\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$. Following Chen \cite{Chen:Singular equivalences, Chen:Singular equivalences-trivial extensions}, we say that two abelian categories $\mathscr A$ and $\mathscr B$ are \defterm{singularly equivalent} if there is a triangle equivalence between the singularity categories $\mathsf{D}_\mathsf{sg}(\mathscr A)$ and $\mathsf{D}_\mathsf{sg}(\mathscr B)$. This triangle equivalence is called a \defterm{singular equivalence} between $\mathscr A$ and $\mathscr B$. To proceed further we need the following well known result for exact triangles in derived categories. 
For a complex $X^{\bullet}$ in an abelian category $\mathscr A$ we denote by $\sigma_{>n}(X^{\bullet})$ the truncation complex \ $\cdots\longrightarrow 0 \longrightarrow \Image{d^n}\longrightarrow X^{n+1}\stackrel{d^{n+1}}{\longrightarrow} X^{n+2}\longrightarrow \cdots $, and by $H^n(X^{\bullet})$ the $n$-th homology of $X^{\bullet}$. \begin{lem} \label{lemtriangles} Let $\mathscr A$ be an abelian category and $X^{\bullet}$ be a complex in $\mathscr A$. Then we have the following triangle in $\mathsf{D}(\mathscr A)\colon$ \[ \xymatrix{ H^n(X^{\bullet})[-n] \ar[r]^{} & \sigma_{>n-1}(X^{\bullet}) \ar[r]^{} & \sigma_{>n}(X^{\bullet}) \ar[r]^{} & H^n(X^{\bullet})[1-n] } \] \end{lem} Now we are ready to prove the main result of this section, which gives necessary and sufficient conditions for the quotient functor $e\colon \mathscr B\longrightarrow \mathscr C$ to induce a triangle equivalence between the singularity categories of $\mathscr B$ and $\mathscr C$. This is a general version of the second part of the Main Theorem presented in the introduction. \begin{thm} \label{thmsingular} Let $(\mathscr A,\mathscr B,\mathscr C)$ be a recollement of abelian categories. Then the following statements are equivalent$\colon$ \begin{enumerate} \item We have $\pd_{\mathscr B}i(A)<\infty$ and $\pd_{\mathscr C} e(P)< \infty$ for every $A\in \mathscr A$ and $P\in \Proj{\mathscr B}$. \item The functor $e\colon \mathscr B\longrightarrow \mathscr C$ induces a singular equivalence between $\mathscr B$ and $\mathscr C\colon$ \[ \xymatrix@C=0.5cm{ \mathsf{D}_\mathsf{sg}(e)\colon \mathsf{D}_\mathsf{sg}(\mathscr B) \ \ar[rr]^{ \ \ \ \simeq} && \ \mathsf{D}_\mathsf{sg}(\mathscr C) } \] \end{enumerate} \begin{proof} (i) $\Rightarrow$ (ii) First note that we have a well-defined derived functor $\mathsf{D}^\mathsf{b}(e)\colon \mathsf{D}^\mathsf{b}(\mathscr B)\longrightarrow \mathsf{D}^\mathsf{b}(\mathscr C)$ since the quotient functor $e\colon \mathscr B\longrightarrow \mathscr C$ is exact.
Also the recollement situation $(\mathscr A,\mathscr B,\mathscr C)$ implies that $0 \longrightarrow \mathscr A \longrightarrow \mathscr B \longrightarrow \mathscr C \longrightarrow 0$ is an exact sequence of abelian categories, see Proposition~\ref{properties}. Then it follows from \cite[Theorem~3.2]{Miyachi}, see also \cite{Keller:cyclic}, that $0 \longrightarrow \mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B) \longrightarrow \mathsf{D}^\mathsf{b}(\mathscr B) \longrightarrow \mathsf{D}^\mathsf{b}(\mathscr C) \longrightarrow 0$ is an exact sequence of triangulated categories, where $\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B)$ is the full subcategory of $\mathsf{D}^\mathsf{b}(\mathscr B)$ consisting of complexes whose homology lies in $\mathscr A$. Hence $\mathsf{D}^\mathsf{b}(e)$ is a quotient functor, i.e.\ $\mathsf{D}^\mathsf{b}(\mathscr B)/\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B)\simeq \mathsf{D}^\mathsf{b}(\mathscr C)$. Next we claim that $\mathsf{D}^\mathsf{b}(e)(\mathsf{K}^\mathsf{b}(\Proj{\mathscr B})) \subseteq \mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$. Let $P^{\bullet}\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$. Suppose first that $P^{\bullet}$ is concentrated in degree zero, so we deal with a projective object $P$ of $\mathscr B$. Since the object $e(P)$ has finite projective dimension, it follows that there is a quasi-isomorphism $Q^{\bullet}\longrightarrow e(P)[0]$, where $Q^{\bullet}\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$ is a projective resolution of $e(P)$. Then the object $e(P)[0]$ is isomorphic to $Q^{\bullet}$ in $\mathsf{D}^\mathsf{b}(\mathscr C)$, and therefore $e(P)[0]\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$. Now let $P^{\bullet}=(0\longrightarrow P_0\longrightarrow P_1\longrightarrow 0)\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$.
Then we have the triangle $P_0[0]\longrightarrow P_1[0]\longrightarrow P^{\bullet}\longrightarrow P_0[1]$ and if we apply the functor $\mathsf{D}^\mathsf{b}(e)$, we infer that $\mathsf{D}^\mathsf{b}(e)(P^{\bullet})\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$ since $\mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$ is a triangulated subcategory. Continuing inductively on the length of the complex $P^{\bullet}$, we infer that the object $\mathsf{D}^\mathsf{b}(e)(P^{\bullet})$ lies in $\mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$, and so our claim follows. Then since the triangulated functor $Q_{\mathscr C}\circ \mathsf{D}^\mathsf{b}(e)\colon \mathsf{D}^\mathsf{b}(\mathscr B)\longrightarrow \mathsf{D}_\mathsf{sg}(\mathscr C)$ annihilates $\mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$, it follows that $Q_{\mathscr C}\circ \mathsf{D}^\mathsf{b}(e)$ factors uniquely through $Q_{\mathscr B}$ via a triangulated functor $\mathsf{D}_\mathsf{sg}(e)\colon \mathsf{D}_\mathsf{sg}(\mathscr B)\longrightarrow \mathsf{D}_\mathsf{sg}(\mathscr C)$, that is, the following diagram is commutative$\colon$ \[ \xymatrix{ \mathsf{D}^\mathsf{b}(\mathscr B) \ar[rr]^{Q_{\mathscr B}} \ar[d]_{\mathsf{D}^\mathsf{b}(e)} && \mathsf{D}_\mathsf{sg}(\mathscr B) \ar[d]^{\mathsf{D}_\mathsf{sg}(e)} \\ \mathsf{D}^\mathsf{b}(\mathscr C) \ar[rr]^{Q_{\mathscr C}} && \mathsf{D}_\mathsf{sg}(\mathscr C) } \] Next we show that $\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B)\subseteq \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$ in $\mathsf{D}^\mathsf{b}(\mathscr B)$. Since the projective dimension of $i(A)$ is finite for all $A$ in $\mathscr A$, it follows that $i(\mathscr A)\subseteq \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$ in $\mathsf{D}^\mathsf{b}(\mathscr B)$. Let $B^{\bullet}$ be an object of $\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B)$. Assume first that $B^{\bullet}$ is concentrated in degree zero.
Hence we deal with an object $B\in \mathscr B$ such that $B\cong i(A)$ for some $A\in \mathscr A$, and therefore our claim follows. Now consider a complex \[ B^{\bullet}\colon\xymatrix{ 0 \ar[r]^{} & B^0 \ar[r]^{d^0} & B^1 \ar[r]^{} & 0 } \] where $H^0(B^{\bullet})$ and $H^1(B^{\bullet})$ lie in $\mathscr A$. From Lemma~\ref{lemtriangles} we have the triangles \[ \xymatrix{ H^0(B^{\bullet}) \ar[r]^{} & \sigma_{>-1}(B^{\bullet}) \ar[r]^{} & \sigma_{>0}(B^{\bullet}) \ar[r]^{} & H^0(B^{\bullet})[1] } \] and \[ \xymatrix{ H^1(B^{\bullet})[-1] \ar[r]^{} & \sigma_{>0}(B^{\bullet}) \ar[r]^{} & \sigma_{>1}(B^{\bullet}) \ar[r]^{} & H^1(B^{\bullet}) } \] in $\mathsf{D}^\mathsf{b}(\mathscr B)$. Then from the second triangle it follows that $\sigma_{>0}(B^{\bullet})\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$, and therefore from the first triangle we get that $\sigma_{>-1}(B^{\bullet})=B^{\bullet}\in\mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$. Continuing inductively on the length of the complex $B^{\bullet}$, we infer that $\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B)\subseteq \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$ in $\mathsf{D}^\mathsf{b}(\mathscr B)$.
Using this we can form the quotient $\mathsf{K}^\mathsf{b}(\Proj{\mathscr B})/\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B)$, and then we have the following exact commutative diagram$\colon$ \[ \xymatrix{ 0 \ar[r] & \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})/\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B) \ar[r]^{} \ar[d]_{} & \mathsf{D}^\mathsf{b}(\mathscr B)/\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B) \ar[d]^{\simeq} \ar[r] & \mathsf{D}_\mathsf{sg}(\mathscr B) \ar[r]^{} \ar[d]^{\mathsf{D}_\mathsf{sg}(e)} & 0 \\ 0 \ar[r] & \mathsf{K}^\mathsf{b}(\Proj{\mathscr C}) \ar[r]^{} & \mathsf{D}^\mathsf{b}(\mathscr C) \ar[r] & \mathsf{D}_\mathsf{sg}(\mathscr C) \ar[r] & 0 } \] We show that the functor $\mathsf{K}^\mathsf{b}(\Proj{\mathscr B})/\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B)\longrightarrow \mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$, which we denote by $\mathsf{K}^\mathsf{b}(e)$, is an equivalence. First, from the above commutative diagram and the equivalence $\mathsf{D}^\mathsf{b}(\mathscr B)/\mathsf{D}^\mathsf{b}_{\mathscr A}(\mathscr B)\simeq \mathsf{D}^\mathsf{b}(\mathscr C)$, it follows that the functor $\mathsf{K}^\mathsf{b}(e)$ is fully faithful. Let $P^{\bullet}\colon 0 \longrightarrow P_n \longrightarrow \cdots \longrightarrow P_1 \longrightarrow P_0 \longrightarrow 0 $ be an object of $\mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$. Each $P_i$ is a projective object in $\mathscr C$, and from Proposition~\ref{properties} we have $el(P_i)\cong P_i$ with $l(P_i)\in \Proj{\mathscr B}$. Then the complex $l(P^{\bullet})\colon 0 \longrightarrow l(P_n) \longrightarrow \cdots \longrightarrow l(P_1) \longrightarrow l(P_0) \longrightarrow 0 $ is such that $\mathsf{K}^\mathsf{b}(e)(l(P^{\bullet}))=P^{\bullet}$. This implies that the functor $\mathsf{K}^\mathsf{b}(e)$ is essentially surjective. Hence $\mathsf{K}^\mathsf{b}(e)$ is an equivalence.
In conclusion, from the above exact commutative diagram we infer that the singularity categories of $\mathscr B$ and $\mathscr C$ are triangle equivalent. (ii) $\Rightarrow$ (i) Suppose that there is a triangle equivalence $\mathsf{D}_\mathsf{sg}(e)\colon \mathsf{D}_\mathsf{sg}(\mathscr B)\stackrel{\simeq}{\longrightarrow} \mathsf{D}_\mathsf{sg}(\mathscr C)$. Let $P$ be a projective object of $\mathscr B$. Then $P[0]\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$, so $P[0]$ is zero in $\mathsf{D}_\mathsf{sg}(\mathscr B)$, and hence $e(P)[0]=\mathsf{D}^\mathsf{b}(e)(P[0])$ is zero in $\mathsf{D}_\mathsf{sg}(\mathscr C)$, that is, $\mathsf{D}^\mathsf{b}(e)(P[0])\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr C})$. Thus the object $e(P)$ has finite projective dimension. Let $A\in \mathscr A$ and consider the object $i(A)$ of $\mathscr B$. Then from Proposition~\ref{properties} we have $e(i(A))=0$. Since $\mathsf{D}_\mathsf{sg}(e)$ is an equivalence, the object $i(A)$ is zero in $\mathsf{D}_\mathsf{sg}(\mathscr B)$, and therefore $i(A)\in \mathsf{K}^\mathsf{b}(\Proj{\mathscr B})$. We infer that $i(A)$ has finite projective dimension. \end{proof} \end{thm} \begin{rem} \label{rem:singular} If the functor $e \colon \mathscr B \longrightarrow \mathscr C$ is an eventually homological isomorphism, then statement (i) in Theorem~\ref{thmsingular} is true by Proposition~\ref{prop:ext-iso-implies-conditions}. Thus Theorem~\ref{thmsingular} in particular says that if the functor $e \colon \mathscr B \longrightarrow \mathscr C$ in a recollement $(\mathscr A,\mathscr B,\mathscr C)$ is an eventually homological isomorphism, then it induces a singular equivalence between $\mathscr B$ and $\mathscr C$. Note that statement (i) in Theorem~\ref{thmsingular} only states that each object of the form $i(A)$ or $e(P)$ has finite projective dimension, and not that there exists a finite bound for the projective dimensions of all such objects.
In other words, the supremums \[ \sup \{ \pd_\mathscr B i(A) \mid A \in \mathscr A \} \qquad\text{and}\qquad \sup \{ \pd_\mathscr C e(P) \mid P \in \Proj \mathscr B \} \] (which are used in other parts of the paper) may be infinite even if statement (i) is true. \end{rem} Applying Theorem~\ref{thmsingular} to the recollement of module categories $(\fmod{R/\idealgenby{e}},\fmod{R},\fmod{eRe})$, see Example~\ref{exam:mod-recollements}, we have the following consequence due to Chen, see \cite[Theorem~2.1]{Chen:schurfunctors} and \cite[Corollary~3.3]{Chen:tworesultsofOrlov}. Note that our version is somewhat stronger; the difference is that Chen takes $\pd_{eRe} eR <\infty$ as an assumption instead of including it in one of the equivalent statements. This result constitutes the second part of the Main Theorem presented in the introduction. \begin{cor} \label{corsingular} Let $R$ be a left Noetherian ring and $e$ an idempotent element of $R$. Then the following statements are equivalent$\colon$ \begin{enumerate} \item For every $R/\idealgenby{e}$-module $X$ we have $\pd_RX<\infty$, and $\pd_{eRe} eR< \infty$. \item The functor $e(-)\colon \fmod{R}\longrightarrow \fmod{eRe}$ induces a singular equivalence between $\fmod{R}$ and $\fmod{eRe}\colon$ \[ \xymatrix@C=0.5cm{ \mathsf{D}_\mathsf{sg}(e(-))\colon \mathsf{D}_\mathsf{sg}(\fmod{R}) \ \ar[rr]^{ \ \ \ \ \simeq} && \ \mathsf{D}_\mathsf{sg}(\fmod{eRe}) } \] \end{enumerate} \end{cor} We end this section with an application to stable categories of Cohen--Macaulay modules. Let $\Lambda$ be a Gorenstein artin algebra. 
We denote by $\CM(\Lambda)$ the category of (maximal) \defterm{Cohen--Macaulay modules} defined as follows$\colon$ \[ \CM(\Lambda) = \{ X\in \fmod{\Lambda} \mid \Ext_{\Lambda}^n(X,\Lambda)=0 \ \text{for all $n\geq 1$} \} \] Then it is known that the stable category $\uCM(\Lambda)$ modulo projectives is a triangulated category, see \cite{Happel:4}, and moreover there is a triangle equivalence between the singularity category $\mathsf{D}_\mathsf{sg}(\fmod{\Lambda})$ and the stable category $\uCM(\Lambda)$, see \cite[Theorem~4.4.1]{Buchweitz:unpublished} and \cite[Theorem~4.6]{Happel:3}. As a consequence of Corollary~\ref{cor:algebra-ext-iso}, Corollary~\ref{corGorensteinArtinalg} and Corollary~\ref{corsingular}, we get the following. \begin{cor} \label{corgorsingular} Let $\Lambda$ be a Gorenstein artin algebra and $a$ an idempotent element of $\Lambda$. Assume that the functor $a-\colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ is an eventually homological isomorphism. Then there is a triangle equivalence between the stable categories of Cohen--Macaulay modules of $\Lambda$ and $a\Lambda a\colon$ \[ \xymatrix@C=0.5cm{ \uCM(\Lambda) \ \ar[rr]^{ \simeq \ \ } && \ \uCM(a\Lambda a) } \] \end{cor} \section{Finite generation of cohomology rings} \label{section:cohomology} In this section, we describe a way to compare the \fg{} condition (see Definition~\ref{defn:fg}) for two different algebras. This is used in the next section for the algebras $\Lambda$ and $a{\Lambda}a$, where $\Lambda$ is a finite dimensional algebra over a field and $a$ is an idempotent in $\Lambda$. Let $\Lambda$ and $\Gamma$ be two artin algebras over a commutative ring $k$, and assume that they are flat as $k$-modules. Let $M = \Lambda/(\rad \Lambda)$ and $N = \Gamma/(\rad \Gamma)$.
Assume that we have graded ring isomorphisms $f$ and $g$ making the diagram \begin{equation} \label{eqn:fg-diagram} \xymatrix{ {\HH*(\Lambda)} \ar[r]^-{\varphi_M} \ar[d]_{f}^{\cong} & {\Ext_\Lambda^*(M, M)} \ar[d]_g^{\cong} \\ {\HH*(\Gamma)} \ar[r]_-{\varphi_{N}} & {\Ext_{\Gamma}^*(N, N)} } \end{equation} commute, where the maps $\varphi_M$ and $\varphi_N$ are defined in Subsection~\ref{subsection:hh}. Then it is clear that \fg{} for $\Lambda$ is exactly the same as \fg{} for $\Gamma$, since all the relevant data for the \fg{} condition is exactly the same for the two algebras. However, we can come to the same conclusion even if the homology groups for $\Lambda$ and $\Gamma$ are different in some degrees, as long as they are the same in all but finitely many degrees. In other words, if the maps $f$ and $g$ above are just graded ring homomorphisms such that $f_n$ and $g_n$ are group isomorphisms for almost all degrees $n$, then the \fg{} condition holds for $\Lambda$ if and only if it holds for $\Gamma$. The goal of this section is to show this. We first prove the result in a more general setting, where we replace the rings in \eqref{eqn:fg-diagram} by arbitrary graded rings satisfying appropriate assumptions. This is done in Proposition~\ref{prop:graded-fin-gen}, after we have shown a part of the result (corresponding to part (i) of the \fg{} condition) in Proposition~\ref{prop:graded-noetherian}. Finally, we state the result for \fg{} in Proposition~\ref{prop:fg-two-algebras}. We now introduce some terminology and notation which is used in this section and the next. By \defterm{graded ring} we always mean a ring of the form \[ R = \bigoplus_{i=0}^\infty R_i \] graded over the nonnegative integers. We denote the set of nonnegative integers by $\No$. If $R$ is a graded ring and $n$ a nonnegative integer, we use the notation $R_{\ge n}$ for the graded ideal \[ R_{\ge n} = \bigoplus_{i=n}^\infty R_i \] in $R$. 
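As a concrete illustration of this notation (a standard example, not needed in the sequel): let $R = k[x]$ be the polynomial ring over a field $k$, graded by $R_i = kx^i$. Then $R_0 = k$ is noetherian, every $R_i$ is finitely generated as an $R_0$-module, and \[ R_{\ge n} = \bigoplus_{i=n}^\infty kx^i = x^nk[x], \] which is closed under addition and multiplication but has no multiplicative identity when $n \ge 1$.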
We use the term \defterm{rng} for a ``ring without identity'', that is, an object which satisfies all the axioms for a ring except having a multiplicative identity element. We use the following characterization of noetherianness for graded rings. \begin{thm} \label{thm:noetherian-equals-graded-noetherian} Let $R$ be a graded ring. Then $R$ is noetherian if and only if it satisfies the ascending chain condition on graded ideals. \end{thm} \begin{proof} This follows directly from \cite[Theorem~5.4.7]{gradedrings:book}. \end{proof} We now begin the main work of this section by showing that an isomorphism in all but finitely many degrees between two sufficiently nice graded rings preserves noetherianness. This implies that such a map between Hochschild cohomology rings preserves part (i) of the \fg{} condition, and thus gives one half of the result we want. \begin{prop} \label{prop:graded-noetherian} Let $R$ and $S$ be graded rings. Assume that $R_0$ and $S_0$ are noetherian, that every $R_i$ is finitely generated as left and as right $R_0$-module, and that every $S_i$ is finitely generated as left and as right $S_0$-module. Let $n$ be a nonnegative integer, and assume that there exists an isomorphism $\phi \colon R_{\ge n} \longrightarrow S_{\ge n}$ of graded rngs. Then $R$ is noetherian if and only if $S$ is noetherian. \end{prop} \begin{proof} We prove (by showing the contrapositive) that $R$ is left noetherian if $S$ is left noetherian. The corresponding result with right noetherian is proved in the same way. This gives one of the implications we need. The opposite implication is proved in the same way by interchanging $R$ and $S$ and using $\phi^{-1}$ instead of $\phi$. Assume that $R$ is not left noetherian. Let \[ \mathbb{I}\colon\quad I^{(0)} \subset I^{(1)} \subset \cdots \] be an infinite strictly ascending sequence of graded left ideals in $R$ (this is possible by Theorem~\ref{thm:noetherian-equals-graded-noetherian}). 
For every index $i$ in this sequence, we can write the ideal $I^{(i)}$ as a direct sum \[ I^{(i)} = \bigoplus_{d \in \No} I^{(i)}_d \] of abelian groups, where $I^{(i)}_d \subseteq R_d$ is the degree $d$ part of $I^{(i)}$. For any degree $d$, we can make an ascending sequence \[ I^{(0)}_d \subseteq I^{(1)}_d \subseteq \cdots \] of $R_0$-submodules of $R_d$ by taking the degree $d$ part of each ideal in $\mathbb{I}$. But $R_d$ is a noetherian $R_0$-module (since $R_0$ is noetherian and $R_d$ is a finitely generated $R_0$-module), and hence this sequence must stabilize at some point. Let $s(d)$ be the point where it stabilizes, that is, the smallest integer such that $I^{(s(d))}_d = I^{(i)}_d$ for every $i > s(d)$. We now define two functions $\sigma\colon \No \longrightarrow \No$ and $ \delta\colon \No \longrightarrow \No$. For $d \in \No$, we define \[ \sigma(d) = \max \{ s(0), s(1), \ldots, s(d) \}. \] For $i \in \No$, we define $\delta(i)$ as the smallest number such that \[ I^{(i)}_{\delta(i)} \ne I^{(i+1)}_{\delta(i)}. \] These functions have the following interpretation. For a degree $d$, the number $\sigma(d)$ is the index in the sequence $\mathbb{I}$ where the ideals in the sequence have stabilized up to degree $d$. For an index $i$, the number $\delta(i)$ is the lowest degree at which there is a difference from the ideal $I^{(i)}$ to the ideal $I^{(i+1)}$. We now define a sequence $(i_j)_{j \in \No}$ of indices and a sequence $(d_j)_{j \in \No}$ of degrees by \begin{align*} i_j &= \left\{ \begin{array}{ll} \sigma(n) &\text{if $j = 0$,} \\ \sigma(d_{j-1} + n) &\text{otherwise.} \end{array} \right. \\ d_j &= \delta(i_j) \end{align*} We observe that for every positive integer $j$, we have \[ i_j > i_{j-1} \qquad\text{and}\qquad d_j > d_{j-1} + n. \] We now construct a sequence $\mathbb{J}$ of graded left ideals in $S$. 
For every nonnegative integer $j$, we choose an element \[ x_j \in I^{(i_j+1)}_{d_j} - I^{(i_j)}_{d_j} \] (this is possible because $d_j = \delta(i_j)$). Note that the degree of $x_j$ is $d_j$, which is greater than $n$. We then define $J^{(j)}$ to be the left ideal of $S$ generated by the set \[ \{ \phi(x_0), \ldots, \phi(x_j) \}. \] We let $\mathbb{J}$ be the sequence of these ideals$\colon$ \[ \mathbb{J}\colon\quad J^{(0)} \subseteq J^{(1)} \subseteq \cdots. \] We want to show that each inclusion here is strict. This means that we must show, for every positive integer $j$, that $\phi(x_j)$ is not an element of $J^{(j-1)}$. We show this by contradiction. Assume that there is a $j$ such that $\phi(x_j) \in J^{(j-1)}$. Then we can write $\phi(x_j)$ as a sum \[ \phi(x_j) = \sum_{m=0}^{j-1} s_m \cdot \phi(x_m), \] where each $s_m$ is an element of $S$. Since $\phi(x_j)$ and every $\phi(x_m)$ are homogeneous elements, we can choose every $s_m$ to be homogeneous. For each $m$, we have that if $s_m$ is nonzero, then its degree is \[ \degree{s_m} = \degree{\phi(x_j)} - \degree{\phi(x_m)} = \degree{x_j} - \degree{x_m} = d_j - d_m > n. \] Thus $s_m$ is either zero or in the image of $\phi$. We use this to find corresponding elements in $R$. For each $m \in \{0,\ldots,j-1\}$, let \[ r_m = \left\{ \begin{array}{ll} 0 &\text{if $s_m = 0$,} \\ \phi^{-1}(s_m) &\text{otherwise.} \end{array} \right. \] Now we have \[ \phi(x_j) = \sum_{m=0}^{j-1} s_m \cdot \phi(x_m) = \phi\left( \sum_{m=0}^{j-1} r_m\cdot x_m \right). \] Applying $\phi^{-1}$ gives \[ x_j = \sum_{m=0}^{j-1} r_m\cdot x_m. \] Since we have $x_m \in I^{(i_m+1)} \subseteq I^{(i_j)}$ for every $m$, this means that $x_j \in I^{(i_j)}$. This is a contradiction, since $x_j$ is chosen so that it does not lie in $I^{(i_j)}$. We have shown that the sequence $\mathbb{J}$ is a strictly ascending sequence of graded left ideals in $S$. Thus $S$ is not left noetherian.
\end{proof} We now complete the picture by considering two graded rings and a graded module over each ring, and showing that isomorphisms in all but finitely many degrees preserve both noetherianness of the rings and finite generation of the modules (given that certain assumptions are satisfied). \begin{prop} \label{prop:graded-fin-gen} Let $R$ and $M$ be graded rings, and $\theta \colon R \longrightarrow M$ a graded ring homomorphism. View $M$ as a graded left $R$-module with scalar multiplication given by $\theta$. Assume that $R_0$ is noetherian, that every $R_i$ is finitely generated as left and as right $R_0$-module, and that every $M_i$ is finitely generated as left $R_0$-module. Similarly, let $R'$ and $M'$ be graded rings, and $\theta' \colon R' \longrightarrow M'$ a graded ring homomorphism. View $M'$ as a graded left $R'$-module with scalar multiplication given by $\theta'$. Assume that $R'_0$ is noetherian, that every $R'_i$ is finitely generated as left and as right $R'_0$-module, and that every $M'_i$ is finitely generated as left $R'_0$-module. Assume that there are graded rng isomorphisms $\phi \colon R_{\ge n} \longrightarrow R'_{\ge n}$ and $\psi \colon M_{\ge n} \longrightarrow M'_{\ge n}$ (for some nonnegative integer $n$) such that the diagram \[ \xymatrix@C=5em{ R_{\ge n} \ar[r]^{\theta_{\ge n}} \ar[d]_{\phi} & M_{\ge n} \ar[d]^{\psi} \\ R'_{\ge n} \ar[r]_{\theta'_{\ge n}} & M'_{\ge n} } \] commutes. Then the following two conditions are equivalent. \begin{enumerate} \item $R$ is noetherian and $M$ is finitely generated as left $R$-module. \item $R'$ is noetherian and $M'$ is finitely generated as left $R'$-module. \end{enumerate} \end{prop} \begin{proof} We prove that condition (i) implies condition (ii). The opposite implication is proved in exactly the same way by using $\phi^{-1}$ and $\psi^{-1}$ instead of $\phi$ and $\psi$. Assume that condition (i) holds. Then by Proposition~\ref{prop:graded-noetherian}, $R'$ is noetherian. 
We need to show that $M'$ is finitely generated as left $R'$-module. We begin with choosing generating sets for things we know to be finitely generated. Note that the ideal $R_{\ge n}$ of $R$ is finitely generated, since $R$ is noetherian. Let $A$ be a finite homogeneous generating set for $R_{\ge n}$. Let $G$ be a finite homogeneous generating set for $M$ as left $R$-module. For every $i$, let $B_i$ be a finite generating set for $M'_i$ as left $R'_0$-module. Let \[ b_R = \max \big\{ \degree{a} \mathrel{\big|} a \in A \big\} \qquad\text{and}\qquad b_M = \max \big\{ \degree{g} \mathrel{\big|} g \in G \big\} \] be the maximal degrees of elements in our chosen generating sets for $R$ and $M$, respectively. Let \[ b = b_R + b_M + n. \] Define the set $G'$ to be \[ G' = \bigcup_{i=0}^b B_i. \] We want to show that $G'$ generates $M'$ as left $R'$-module. Let $N'$ be the $R'$-submodule of $M'$ generated by $G'$. It is clear that $N'$ contains every homogeneous element of $M'$ with degree at most $b$. Let $m' \in M'$ be a homogeneous element with $\degree{m'} > b$. Let $m = \psi^{-1}(m')$. We can write $m$ as a sum \[ m = \sum_i \theta(r_i) \cdot g_i, \] where every $r_i$ is a homogeneous nonzero element of $R$ and every $g_i$ is an element of the generating set $G$ for $M$. For every $r_i$, we have \[ \degree{r_i} = \degree{m} - \degree{g_i} = \degree{m'} - \degree{g_i} > b - b_M = b_R + n. \] Thus $r_i$ lies in the ideal $R_{\ge n}$, so we can write it as a sum \[ r_i = \sum_j u_{i,j} \cdot a_{i,j}, \] where every $u_{i,j}$ is a homogeneous nonzero element of $R$, and every $a_{i,j}$ is an element of the generating set $A$ for $R_{\ge n}$. For every $u_{i,j}$, we have \[ \degree{u_{i,j}} = \degree{r_i} - \degree{a_{i,j}} > (b_R + n) - b_R = n. \] Now we can write the element $m$ as \[ m = \sum_{i,j} \theta(u_{i,j} \cdot a_{i,j}) \cdot g_i = \sum_{i,j} \theta(u_{i,j}) \cdot \theta(a_{i,j}) \cdot g_i = \sum_{i,j} \theta(u_{i,j}) \cdot (a_{i,j} \cdot g_i). 
\] If we have $a_{i,j} \cdot g_i = 0$ for some terms in the sum, we ignore these terms. For every pair $(i,j)$, we have \[ \degree{\theta(u_{i,j})} = \degree{u_{i,j}} > n \qquad\text{and}\qquad \degree{a_{i,j} \cdot g_i} \ge \degree{a_{i,j}} \ge n. \] This means that when applying $\psi$ to a term in the above sum for $m$, we have \[ \psi(\theta(u_{i,j}) \cdot (a_{i,j} \cdot g_i)) = \psi(\theta(u_{i,j})) \cdot \psi(a_{i,j} \cdot g_i). \] Using this, we can write our element $m'$ of $M'$ in the following way$\colon$ \[ m' = \psi(m) = \psi\Big( \sum_{i,j} \theta(u_{i,j}) \cdot (a_{i,j} \cdot g_i) \Big) = \sum_{i,j} \psi(\theta(u_{i,j})) \cdot \psi(a_{i,j} \cdot g_i) = \sum_{i,j} \theta'(\phi(u_{i,j})) \cdot \psi(a_{i,j} \cdot g_i). \] For every pair $(i,j)$, we have \[ \degree{\psi(a_{i,j} \cdot g_i)} = \degree{a_{i,j} \cdot g_i} = \degree{a_{i,j}} + \degree{g_i} \le b_R + b_M \le b, \] so $\psi(a_{i,j} \cdot g_i)$ lies in the module $N'$ generated by $G'$. Thus $m'$ also lies in $N'$. Since every homogeneous element of $M'$ lies in $N'$, we have $M' = N'$, and hence $M'$ is finitely generated. \end{proof} Finally, we apply the above result to the rings which are involved in the \fg{} condition, and obtain the main result of this section. \begin{prop} \label{prop:fg-two-algebras} Let $\Lambda$ and $\Gamma$ be artin algebras over a commutative ring $k$, and assume that they are flat as $k$-modules. Let $M$ and $M'$ be $\Lambda$-modules, and let $N$ and $N'$ be $\Gamma$-modules, such that $M \cong \Lambda/(\rad \Lambda)$ and $N' \cong \Gamma/(\rad \Gamma)$. 
Let $n$ be some nonnegative integer, and assume that there are graded rng isomorphisms $f$, $g$, $f'$ and $g'$ making the following two diagrams commute$\colon$ \[ \vcenter{\hbox{ \xymatrix{ {\HH{\ge n}(\Lambda)} \ar[r]^-{\varphi_M^{\ge n}} \ar[d]_{f}^{\cong} & {\Ext_\Lambda^{\ge n}(M, M)} \ar[d]_{g}^{\cong} \\ {\HH{\ge n}(\Gamma)} \ar[r]_-{\varphi_{N}^{\ge n}} & {\Ext_{\Gamma}^{\ge n}(N, N)} }}} \qquad\text{and}\qquad \vcenter{\hbox{ \xymatrix{ {\HH{\ge n}(\Lambda)} \ar[r]^-{\varphi_{M'}^{\ge n}} \ar[d]_{f'}^{\cong} & {\Ext_\Lambda^{\ge n}(M', M')} \ar[d]_{g'}^{\cong} \\ {\HH{\ge n}(\Gamma)} \ar[r]_-{\varphi_{N'}^{\ge n}} & {\Ext_{\Gamma}^{\ge n}(N', N')} }}} \] Then $\Lambda$ satisfies \fg{} if and only if $\Gamma$ satisfies \fg{}. \end{prop} \begin{proof} We first check that the conditions on the graded rings in Proposition~\ref{prop:graded-fin-gen} are satisfied in this case. For every degree $i$, we have that $\HH{i}(\Lambda)$, $\Ext_\Lambda^i(M,M)$ and $\Ext_\Lambda^i(M',M')$ are finitely generated as $k$-modules. Therefore, they are also finitely generated as $\HH{0}(\Lambda)$-modules. The ring $\HH{0}(\Lambda)$ is noetherian since it is an artin algebra. Similarly, we see that $\HH{i}(\Gamma)$, $\Ext_\Gamma^i(N,N)$ and $\Ext_\Gamma^i(N',N')$ are finitely generated $\HH{0}(\Gamma)$-modules, and that the ring $\HH{0}(\Gamma)$ is noetherian. Assume that $\Lambda$ satisfies \fg{}. Then $\HH*(\Lambda)$ is noetherian, and by Theorem~\ref{fg-implies-every-ext-fin-gen}, $\Ext_\Lambda^*(M',M')$ is a finitely generated $\HH*(\Lambda)$-module. By applying Proposition~\ref{prop:graded-fin-gen} to the commutative diagram with $f'$ and $g'$, we see that $\Gamma$ satisfies \fg{}. The opposite implication is proved in the same way by using the other commutative diagram. 
\end{proof} \section{Finite generation of cohomology rings in module recollements} \label{section:fg-a(Lambda)a} We now investigate the relationship between the \fg{} condition (see Definition~\ref{defn:fg}) for an algebra $\Lambda$ and the algebra $a{\Lambda}a$, where $a$ is an idempotent of $\Lambda$. We show that, given some conditions on the idempotent $a$, the algebra $\Lambda$ satisfies \fg{} if and only if the algebra $a{\Lambda}a$ satisfies \fg{}. We prove this result only for finite-dimensional algebras over a field, and not more general artin algebras. Throughout this section, we let $k$ be a field, $\Lambda$ a finite-dimensional $k$-algebra and $a$ an idempotent in $\Lambda$. We denote by $e$ and $E$ the exact functors \begin{align*} e = (a-) &\colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a \\ E = (a-a) &\colon \fmod \envalg{\Lambda} \longrightarrow \fmod \envalg{(a{\Lambda}a)}. \end{align*} These functors fit into the recollements described in Example~\ref{exam:mod-recollements}. For a $\Lambda$-module $M$, we can construct the diagram \[ \xymatrix@C=4em{ {\HH*(\Lambda)} \ar[r]^-{\varphi_M} \ar[d]_{E_{\Lambda,\Lambda}^*} & {\Ext_\Lambda^*(M,M)} \ar[d]^{e_{M,M}^*} \\ {\HH*(a{\Lambda}a)} \ar[r]_-{\varphi_{e(M)}} & {\Ext_{a{\Lambda}a}^*(e(M), e(M))} } \] where the maps $\varphi_M$ and $\varphi_{e(M)}$ are defined in Subsection~\ref{subsection:hh}, and the maps $E_{\Lambda,\Lambda}^*$ and $e_{M,M}^*$ are defined in Section~\ref{section:extensions}. We show that this diagram commutes, and that under certain conditions on $a$, the vertical maps are isomorphisms in almost all degrees. We then use Proposition~\ref{prop:fg-two-algebras} to show that $\Lambda$ satisfies \fg{} if and only if $a{\Lambda}a$ satisfies \fg{}. Let us consider what kind of conditions we need to put on the choice of the idempotent $a$. 
From Corollary~\ref{cor:algebra-ext-iso}, we know that the map $e_{M,M}^*$ in the above diagram is an isomorphism in all but finitely many degrees if the two dimensions \[ \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) \qquad\text{and}\qquad \pd_{a{\Lambda}a} (a\Lambda) \] are finite, or, equivalently, if the two dimensions \[ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) \qquad\text{and}\qquad \pd_{\opposite{(a{\Lambda}a)}} ({\Lambda}a) \] are finite. We show (given an additional technical assumption about the algebra $\Lambda$) that this is in fact also sufficient for the map $E_{\Lambda,\Lambda}^*$ to be an isomorphism in all but finitely many degrees. This section is structured as follows. The first part considers the commutativity of the above diagram, concluding with Proposition~\ref{prop:e-gamma-commute}. The second part considers when the map $E_{\Lambda,\Lambda}^*$ is an isomorphism in high degrees, concluding with Proposition~\ref{prop:lambda-ext-iso}. Finally, the main result of this section is stated as Theorem~\ref{thm:main-result-fg}. We now show that the above diagram is commutative. The maps $\varphi_M$ and $\varphi_{e(M)}$ are defined by using tensor functors. It is convenient to have short names for these functors. For every $\Lambda$-module $M$, we define $t_M$ and $T_M$ to be the tensor functors \begin{align*} t_M = (- \otimes_\Lambda M) &\colon \fmod \envalg{\Lambda} \longrightarrow \fmod \Lambda, \\ T_M = (- \otimes_{a{\Lambda}a} aM) &\colon \fmod \envalg{(a{\Lambda}a)} \longrightarrow \fmod a{\Lambda}a. 
\end{align*} Together with the functors $e$ and $E$ from above, these functors fit into the following diagram of categories and functors: \[ \xymatrix@C=6em{ {\fmod \envalg{\Lambda}} \ar[r]^{t_M} \ar[d]_{E} & {\fmod \Lambda} \ar[d]^{e} \\ {\fmod \envalg{(a \Lambda a)}} \ar[r]_{T_M } & {\fmod a \Lambda a} } \] We begin by showing that the two possible compositions of maps from upper left to lower right in this diagram are related by a natural transformation. \begin{lem} \label{lem:et-commutes} For every $\Lambda$-module $M$, there is a natural transformation $\tau^M \colon T_M \circ E \longrightarrow e \circ t_M$. \end{lem} \begin{proof} Note that we have \[ T_M E(N) = aNa \otimes_{a{\Lambda}a} aM \qquad\text{and}\qquad e t_M(N) = aN \otimes_\Lambda M \] for every $\envalg{\Lambda}$-module $N$. We define the maps $\tau^M_N$ of the natural transformation $\tau^M$ by \[ \tau^M_N(n \otimes m) = n \otimes m \] for an element $n \otimes m$ of $T_M E(N)$. This gives well defined maps since $a\Lambda a \subseteq \Lambda$. It is easy to check that the compositions $et_M(f) \circ \tau^M_N$ and $\tau^M_{N'} \circ T_M E(f)$ are equal for a homomorphism $f\colon N \longrightarrow N'$ of $\envalg{\Lambda}$-modules, so $\tau^M$ is a natural transformation. \end{proof} We are now able to show that the diagrams we consider are commutative. \begin{prop} \label{prop:e-gamma-commute} For any $\Lambda$-module $M$, the following diagram of graded rings commutes$\colon$ \[ \xymatrix@C=4em{ {\HH*(\Lambda)} \ar[r]^-{\varphi_M} \ar[d]_{E_{\Lambda,\Lambda}^*} & {\Ext_\Lambda^*(M,M)} \ar[d]^{e_{M,M}^*} \\ {\HH*(a{\Lambda}a)} \ar[r]_-{\varphi_{e(M)}} & {\Ext_{a{\Lambda}a}^*(e(M), e(M))} } \] \end{prop} \begin{proof} We show that the result holds in the positive degrees of the graded rings and graded ring homomorphisms in the diagram. Showing that it also holds in degree zero can be done in a similar way, by looking at elements given by homomorphisms instead of extensions. 
Let $\mu$ and $\nu$ be the natural isomorphisms \[ \mu\colon \Lambda \otimes_\Lambda M \longrightarrow M \qquad\text{and}\qquad \nu\colon a{\Lambda}a \otimes_{a{\Lambda}a} e(M) \longrightarrow e(M) \] given by multiplication. Consider, for some positive integer $i$, an element $[\eta] \in \Ext_{\envalg{\Lambda}}^i(\Lambda,\Lambda)$ which is represented by the exact sequence \[ \eta\colon\quad 0 \longrightarrow \Lambda \longrightarrow X \longrightarrow P_{i-2} \longrightarrow \cdots \longrightarrow P_0 \longrightarrow \Lambda \longrightarrow 0, \] where each $P_j$ is a projective $\envalg{\Lambda}$-module. We apply the compositions of maps $\varphi_{e(M)} \circ E_{\Lambda,\Lambda}^*$ and $e_{M,M}^* \circ \varphi_M$ to $[\eta]$, and show that we get the same result in both cases. We first consider the map $\varphi_{e(M)} \circ E_{\Lambda,\Lambda}^*$. If we apply the functor $E$ to $\eta$, then we get the exact sequence \[ E(\eta)\colon 0 \longrightarrow E(\Lambda) \longrightarrow E(X) \longrightarrow E(P_{i-2}) \longrightarrow \cdots \longrightarrow E(P_0) \longrightarrow E(\Lambda) \longrightarrow 0 \] of $\envalg{(a \Lambda a)}$-modules, and we have that $E_{\Lambda,\Lambda}^*([\eta]) = [E(\eta)]$. Since the objects $E(P_j)$ are not necessarily projective, we may need to find a different representative of the element $[E(\eta)]$ in order to apply the map $\varphi_{e(M)}$. We construct the following commutative diagram with exact rows, where each $Q_j$ is a projective $\envalg{(a \Lambda a)}$-module and the bottom row is $E(\eta)$. \[ \xymatrix{ 0 \ar[r] & a{\Lambda}a \ar[r] \ar@{=}[d] & Y \ar[r] \ar[d]_{f_{i-1}} & Q_{i-2} \ar[r] \ar[d]_{f_{i-2}} & \cdots \ar[r] & Q_0 \ar[r] \ar[d]_{f_0} & a{\Lambda}a \ar[r] \ar@{=}[d] & 0 \\ 0 \ar[r] & E(\Lambda) \ar[r] & E(X) \ar[r] & E(P_{i-2}) \ar[r] & \cdots \ar[r] & E(P_0) \ar[r] & E(\Lambda) \ar[r] & 0 } \] Note that both rows represent the same element in $\Ext_{\envalg{(a \Lambda a)}}^i(a \Lambda a, a \Lambda a)$. 
Applying the functor $T_M$ to this diagram gives the two lower rows in the following commutative diagram of $a \Lambda a$-modules, where the two upper rows are exact. \[ \xymatrix@C=1.5em{ 0 \ar[r] & e(M) \ar[r] \ar[d]_{\nu^{-1}}^{\cong} & T_M(Y) \ar[r] \ar@{=}[d] & T_M(Q_{i-2}) \ar[r] \ar@{=}[d] & \cdots \ar[r] & T_M(Q_0) \ar[r] \ar@{=}[d] & e(M) \ar[r] \ar[d]_{\nu^{-1}}^{\cong} & 0 \\ 0 \ar[r] & T_M(a{\Lambda}a) \ar[r] \ar@{=}[d] & T_M(Y) \ar[r] \ar[d]_{T_M(f_{i-1})} & T_M(Q_{i-2}) \ar[r] \ar[d]_{T_M(f_{i-2})} & \cdots \ar[r] & T_M(Q_0) \ar[r] \ar[d]_{T_M(f_0)} & T_M(a{\Lambda}a) \ar[r] \ar@{=}[d] & 0 \\ & T_M E(\Lambda) \ar[r] & T_M E(X) \ar[r] & T_M E(P_{i-2}) \ar[r] & \cdots \ar[r] & T_M E(P_0) \ar[r] & T_M E(\Lambda) & } \] The top row in this diagram is a representative for the element $(\varphi_{e(M)} \circ E_{\Lambda,\Lambda}^*)([\eta])$. We now consider the map $e_{M,M}^* \circ \varphi_M$. Applying the functor $e \circ t_M$ to the exact sequence $\eta$ gives the top row in the following commutative diagram of $a \Lambda a$-modules with exact rows, where the bottom row is a representative of the element $(e_{M,M}^* \circ \varphi_M)([\eta])$. 
\[ \xymatrix@C=1.5em{ 0 \ar[r] & e t_M(\Lambda) \ar[r] \ar[d]_{e(\mu)}^{\cong} & e t_M(X) \ar[r] \ar@{=}[d] & e t_M(P_{i-2}) \ar[r] \ar@{=}[d] & \cdots \ar[r] & e t_M(P_0) \ar[r] \ar@{=}[d] & e t_M(\Lambda) \ar[r] \ar[d]_{e(\mu)}^{\cong} & 0 \\ 0 \ar[r] & e(M) \ar[r] & e t_M(X) \ar[r] & e t_M(P_{i-2}) \ar[r] & \cdots \ar[r] & e t_M(P_0) \ar[r] & e(M) \ar[r] & 0 \\ } \] Finally, we use the natural transformation $\tau^M$ from Lemma~\ref{lem:et-commutes} to combine the two above diagrams into the following commutative diagram of $a \Lambda a$-modules$\colon$ \[ \xymatrix@C=1.5em{ 0 \ar[r] & e(M) \ar[r] \ar[d]_{\nu^{-1}}^{\cong} & T_M(Y) \ar[r] \ar@{=}[d] & T_M(Q_{i-2}) \ar[r] \ar@{=}[d] & \cdots \ar[r] & T_M(Q_0) \ar[r] \ar@{=}[d] & e(M) \ar[r] \ar[d]_{\nu^{-1}}^{\cong} & 0 \\ 0 \ar[r] & T_M(a{\Lambda}a) \ar[r] \ar@{=}[d] & T_M(Y) \ar[r] \ar[d]_{T_M(f_{i-1})} & T_M(Q_{i-2}) \ar[r] \ar[d]_{T_M(f_{i-2})} & \cdots \ar[r] & T_M(Q_0) \ar[r] \ar[d]_{T_M(f_0)} & T_M(a{\Lambda}a) \ar[r] \ar@{=}[d] & 0 \\ & T_M E(\Lambda) \ar[r] \ar[d]_{\tau^M_\Lambda} & T_M E(X) \ar[r] \ar[d]_{\tau^M_X} & T_M E(P_{i-2}) \ar[r] \ar[d]_{\tau^M_{P_{i-2}}} & \cdots \ar[r] & T_M E(P_0) \ar[r] \ar[d]_{\tau^M_{P_0}} & T_M E(\Lambda) \ar[d]_{\tau^M_\Lambda} & \\ 0 \ar[r] & e t_M(\Lambda) \ar[r] \ar[d]_{e(\mu)}^{\cong} & e t_M(X) \ar[r] \ar@{=}[d] & e t_M(P_{i-2}) \ar[r] \ar@{=}[d] & \cdots \ar[r] & e t_M(P_0) \ar[r] \ar@{=}[d] & e t_M(\Lambda) \ar[r] \ar[d]_{e(\mu)}^{\cong} & 0 \\ 0 \ar[r] & e(M) \ar[r] & e t_M(X) \ar[r] & e t_M(P_{i-2}) \ar[r] & \cdots \ar[r] & e t_M(P_0) \ar[r] & e(M) \ar[r] & 0 \\ } \] It is easy to check that the composition of maps along the leftmost column is the identity map on $e(M)$, and the same holds for the composition of maps along the rightmost column. Thus the top and bottom rows in this diagram represent the same element in $\Ext_{a \Lambda a}^i(e(M),e(M))$. 
Since the top row is a representative of the element $(\varphi_{e(M)} \circ E_{\Lambda,\Lambda}^*)([\eta])$ and the bottom row is a representative of the element $(e_{M,M}^* \circ \varphi_M)([\eta])$, this means that $\varphi_{e(M)} \circ E_{\Lambda,\Lambda}^* = e_{M,M}^* \circ \varphi_M$. \end{proof} Having shown that our diagrams are commutative, we now move on to describing when the map $E_{\Lambda,\Lambda}^*$ is an isomorphism in almost all degrees. For this, we use Corollary~\ref{cor:algebra-ext-iso-onemodule}~(i) on the algebras $\envalg{\Lambda}$ and $(a\otimes\opposite{a})\envalg{\Lambda}(a\otimes\opposite{a})$ and the $\envalg{\Lambda}$-module $\Lambda$. We let $\varepsilon$ denote the element $a\otimes\opposite{a}$ of $\envalg{\Lambda}$, so that we can write the algebra $(a\otimes\opposite{a})\envalg{\Lambda}(a\otimes\opposite{a})$ more simply as $\varepsilon\envalg{\Lambda}\varepsilon$. Note that Corollary~\ref{cor:algebra-ext-iso-onemodule} uses a recollement situation; in this case, the recollement is like the one in Example~\ref{exam:mod-recollements}~(ii). In order to use Corollary~\ref{cor:algebra-ext-iso-onemodule}~(i) in this situation, we need to show the following$\colon$ \[ \pd_{\varepsilon\envalg{\Lambda}\varepsilon} \varepsilon\envalg{\Lambda} < \infty \qquad\text{and}\qquad \Ext_{\envalg{\Lambda}}^j \Big( \Lambda, \frac{\envalg{\Lambda}/\idealgenby{\varepsilon}} {\rad \envalg{\Lambda}/\idealgenby{\varepsilon}} \Big) = 0 \quad\text{for $j \gg 0$.} \] We show the first of these conditions in Lemma~\ref{lem:pd-envalg}, and the second one in Lemma~\ref{lem:ext-lambda-simples} (here we need an additional technical assumption on $\Lambda$ to be able to describe the simple modules over $\envalg{\Lambda}$), and finally tie it together in Proposition~\ref{prop:lambda-ext-iso}, where we show that $E_{\Lambda,\Lambda}^*$ is an isomorphism in sufficiently high degrees. 
First, we show how the projective dimension of the tensor product $M \otimes_k N$ is related to the projective dimensions of $M$ and $N$, when $M$ and $N$ are modules over $k$-algebras. In particular, the following result implies that if a left and a right $\Lambda$-module ${}_\Lambda M$ and $N_\Lambda$ both have finite projective dimension, then their tensor product $M \otimes_k N$ has finite projective dimension as $\envalg{\Lambda}$-module. \begin{lem} \label{lem:tensor-preserves-fin-projdim} Let $\Sigma$ and $\Gamma$ be $k$-algebras, and let $M$ be a $\Sigma$-module and $N$ a $\Gamma$-module. If $M$ has finite projective dimension as $\Sigma$-module and $N$ has finite projective dimension as $\Gamma$-module, then $M \otimes_k N$ has finite projective dimension as $(\Sigma \otimes_k \Gamma)$-module, and \[ \pd_{\Sigma \otimes_k \Gamma}(M \otimes_k N) \le \pd_\Sigma{M} + \pd_\Gamma{N}. \] \end{lem} \begin{proof} Assume that $\pd_{\Sigma}M=m$ and $\pd_{\Gamma}N=n$. Then we have finite projective resolutions \[ 0 \to P_m \xrightarrow{} \cdots \xrightarrow{} P_0 \to M \to 0 \qquad\text{and}\qquad 0 \to Q_n \xrightarrow{} \cdots \xrightarrow{} Q_0 \to N \to 0 \] of $M$ and $N$, respectively. Let $P$ and $Q$ denote the corresponding deleted resolutions. Consider the tensor product \[ P \otimes_k Q \colon \cdots \xrightarrow{} (P_0 \otimes_k Q_2) \oplus (P_1 \otimes_k Q_1) \oplus (P_2 \otimes_k Q_0) \xrightarrow{} (P_0 \otimes_k Q_1) \oplus (P_1 \otimes_k Q_0) \xrightarrow{} P_0 \otimes_k Q_0 \to 0 \] of the complexes $P$ and $Q$. This is a bounded complex of projective ($\Sigma \otimes_k \Gamma$)-modules. We want to show that it is in fact a deleted projective resolution of the $(\Sigma \otimes_k \Gamma)$-module $M \otimes_k N$, which completes the proof. We need to show that the complex $P \otimes_k Q$ is exact in all positive degrees and has homology $M \otimes_k N$ in degree zero. 
Let us temporarily forget the $\Sigma$- and $\Gamma$-structures, and view $P$ as a complex of right $k$-modules, $Q$ as a complex of left $k$-modules, and $P \otimes_k Q$ as a complex of abelian groups. Then by the K\"unneth formula for homology, see \cite[Corollary~11.29]{Rotman1}, we have an isomorphism \[ \alpha \colon \bigoplus_{i+j=n} H_i(P) \otimes_k H_j(Q) \xrightarrow{\cong} H_n(P \otimes_k Q) \] of abelian groups, given by $\alpha([p] \otimes [q]) = [p \otimes q]$, for $p \in P_i$ and $q \in Q_j$. Observe that $\alpha$ preserves the $(\Sigma \otimes_k \Gamma)$-module structure. Thus, $\alpha$ is a $(\Sigma \otimes_k \Gamma)$-module isomorphism, and we get \[ H_n(P \otimes_k Q) \cong \bigoplus_{i+j=n} H_i(P) \otimes_k H_j(Q) \cong \left\{ \begin{array}{ll} M \otimes_k N & \text{if $n = 0$} \\ 0 & \text{if $n > 0$} \end{array} \right. \] This means that the complex $P \otimes_k Q$ is a deleted projective resolution of the $(\Sigma \otimes_k \Gamma)$-module $M \otimes_k N$. Since the complex $P \otimes_k Q$ is zero in all degrees above $m+n$, we get \[ \pd_{\Sigma \otimes_k \Gamma} (M \otimes_k N) \le m + n = \pd_\Sigma{M} + \pd_\Gamma{N}, \] and the proof is complete. \end{proof} Using the above result, we find that the assumptions we make about the left and right $a{\Lambda}a$-modules $a\Lambda$ and ${\Lambda}a$ having finite projective dimension imply the first condition we need for applying Corollary~\ref{cor:algebra-ext-iso-onemodule}~(i), namely that the $\varepsilon\envalg{\Lambda}\varepsilon$-module $\varepsilon\envalg{\Lambda}$ has finite projective dimension. We state this as the following result. 
\begin{lem} \label{lem:pd-envalg} We have the following inequality$\colon$ \[ \pd_{\varepsilon\envalg{\Lambda}\varepsilon}\varepsilon\envalg{\Lambda} \le \pd_{a{\Lambda}a} a{\Lambda} + \pd_{\opposite{(a{\Lambda}a)}} {\Lambda}a \] \end{lem} \begin{proof} Note that $\varepsilon \envalg{\Lambda}$ is isomorphic to $(a\Lambda\otimes_k\Lambda a)$ as left $\envalg{(a\Lambda a)}$-modules and that the rings $\envalg{(a{\Lambda}a)}$ and $\varepsilon\envalg{\Lambda}\varepsilon$ are isomorphic. By using these isomorphisms and Lemma~\ref{lem:tensor-preserves-fin-projdim}, we get that \[ \pd_{\varepsilon \envalg{\Lambda}\varepsilon}\varepsilon\envalg{\Lambda} = \pd_{\envalg{(a\Lambda a)}}\varepsilon\envalg{\Lambda} = \pd_{\envalg{(a\Lambda a)}}(a\Lambda\otimes_k \Lambda a) \leq \pd_{a\Lambda a} a\Lambda+\pd_{\opposite{(a\Lambda a)}}\Lambda a. \qedhere \] \end{proof} Now we show how we get the second condition needed for applying Corollary~\ref{cor:algebra-ext-iso-onemodule}~(i). We begin with a general result which relates extension groups over $\envalg{\Lambda}$ to extension groups over $\Lambda$. \begin{lem} \label{lem:ext-over-envalg-as-ext-over-lambda} Let $M$ and $N$ be $\Lambda$-modules. Let $D$ be the duality $\Hom_k(-,k)\colon \fmod \Lambda \longrightarrow \fmod \opposite{\Lambda}$. Then \[ \Ext_{\envalg{\Lambda}}^j(\Lambda, M \otimes_k D(N)) \cong \Ext_\Lambda^j(N, M) \] for every nonnegative integer $j$. \begin{proof} This follows from \cite[Corollary~4.4, Chapter~IX]{CartanEilenberg} by using the isomorphism $M \otimes_k D(N) \cong \Hom_k(N,M)$ of $\envalg{\Lambda}$-modules. \end{proof} \end{lem} Furthermore, we need to be able to describe the simple $\envalg{\Lambda}$-modules in terms of simple $\Lambda$-modules. It is reasonable to expect that taking the tensor product \[ (\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda}) \] should produce all the simple $\envalg{\Lambda}$-modules. 
This is, however, not true for all finite-dimensional algebras, as Example~\ref{exam:not-semisimple} shows. The following result describes when it is true. \begin{lem} \label{lem:envalg-simples} We have an isomorphism \[ \envalg{\Lambda}/\rad \envalg{\Lambda} \cong (\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda}) \] of $\envalg{\Lambda}$-modules if and only if the $\envalg{\Lambda}$-module \[ (\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda}) \] is semisimple. \begin{proof} It is easy to show that \[ (\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda}) \cong \frac{\envalg{\Lambda}} {\Lambda \otimes_k (\rad \opposite{\Lambda}) + (\rad \Lambda) \otimes_k \opposite{\Lambda}} \] as $\envalg{\Lambda}$-modules, and that the ideal $\Lambda \otimes_k (\rad \opposite{\Lambda}) + (\rad \Lambda) \otimes_k \opposite{\Lambda} $ of $\envalg{\Lambda}$ is nilpotent. This means that if $ (\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda}) $ is a semisimple $\envalg{\Lambda}$-module, then it is isomorphic to $\envalg{\Lambda}/\rad \envalg{\Lambda}$. The opposite implication is obvious. \end{proof} \end{lem} Now we give an example showing that $(\Lambda/\rad\Lambda) \otimes_k (\opposite{\Lambda}/\rad\opposite{\Lambda})$ is not necessarily semisimple for a finite-dimensional algebra $\Lambda$ over a field $k$. \begin{exam} \label{exam:not-semisimple} Let $k=\mathbb{Z}_2(x)$ be the field of rational functions in one indeterminate $x$ over $\mathbb{Z}_2$, and let $\Lambda$ be the $2$-dimensional $k$-algebra $k[y]/\idealgenby{y^2 - x}$. Then $\Lambda$ is a field, so that $\rad\Lambda =(0)$. The element $\alpha = y\otimes 1 + 1\otimes y$ satisfies $\alpha^2=0$. Hence $\idealgenby{\alpha}$ is a nilpotent non-zero ideal in $\envalg{\Lambda}$, and therefore $\envalg{\Lambda}$ is not semisimple. 
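To make the nilpotency of $\alpha$ explicit, note that $x$ lies in $k$, so $x \otimes 1 = 1 \otimes x = x(1 \otimes 1)$ in $\envalg{\Lambda}$; since $y^2 = x$ and $k$ has characteristic $2$, we get \[ \alpha^2 = y^2 \otimes 1 + 2(y \otimes y) + 1 \otimes y^2 = x \otimes 1 + 1 \otimes x = 2x(1 \otimes 1) = 0. \]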
\end{exam} We assume that $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is semisimple whenever we need it. In particular, this assumption is included in the main result at the end of this section. Note that this assumption is satisfied in many cases, for example if $\Lambda/\rad \Lambda$ is separable as a $k$-algebra (by \cite[Corollary~7.8~(i)]{CurtisReiner}), if $k$ is algebraically closed (this can be shown by using the Wedderburn--Artin Theorem), or if $\Lambda$ is a quotient of a path algebra by an admissible ideal. Now we can show how to get the second condition we need for applying Corollary~\ref{cor:algebra-ext-iso-onemodule}~(i). \begin{lem} \label{lem:ext-lambda-simples} Assume that $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module, and that we have \[ (\alpha) \ \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) < \infty \qquad\text{and}\qquad (\gamma) \ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) < \infty. \] Then \[ \Ext_{\envalg{\Lambda}}^j \Big( \Lambda, \frac{\envalg{\Lambda}/\idealgenby{\varepsilon}} {\rad \envalg{\Lambda}/\idealgenby{\varepsilon}} \Big) = 0 \quad\text{for}\quad j > \max \Big\{ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big), \, \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big) \Big\}. \] \end{lem} \begin{proof} By Lemma~\ref{lem:envalg-simples}, every simple $\envalg{\Lambda}$-module is a direct summand of a module of the form $S \otimes_k D(T)$ for some simple $\Lambda$-modules $S$ and $T$, where $D$ is the duality $\Hom_k(-,k)\colon \fmod \Lambda \longrightarrow \fmod \opposite{\Lambda}$. 
If neither of the modules $S$ and $T$ is annihilated by the ideal $\idealgenby{a}$, then we have \[ \idealgenby{\varepsilon} (S \otimes_k D(T)) = \idealgenby{a \otimes \opposite{a}} (S \otimes_k D(T)) = (\idealgenby{a}S) \otimes_k D(\idealgenby{a}T) = S \otimes_k D(T), \] which means that no nonzero direct summand of the $\envalg{\Lambda}$-module $S \otimes_k D(T)$ is a $\envalg{\Lambda}/\idealgenby{\varepsilon}$-module. Let $j$ be an integer such that \[ j > \max \Big\{ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big), \, \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big) \Big\}. \] In order to prove the result, it is sufficient to show that $\Ext_{\envalg{\Lambda}}^j(\Lambda, U) = 0$ for every simple $\envalg{\Lambda}/\idealgenby{\varepsilon}$-module $U$. By the above reasoning, every such $U$ is a direct summand of a module $S \otimes_k D(T)$ for some simple $\Lambda$-modules $S$ and $T$, where at least one of $S$ and $T$ is annihilated by $\idealgenby{a}$ and is thus a simple $\Lambda/\idealgenby{a}$-module. Using Lemma~\ref{lem:ext-over-envalg-as-ext-over-lambda}, we get \[ \Ext_{\envalg{\Lambda}}^j(\Lambda, S \otimes_k D(T)) \cong \Ext_\Lambda^j(T, S) = 0, \] since we have $\pd_\Lambda T < j$ or $\id_\Lambda S < j$. It follows that $\Ext_{\envalg{\Lambda}}^j(\Lambda, U) = 0$. \end{proof} The following result summarizes the above work and shows that, with the assumptions we have indicated for the algebra $\Lambda$ and the idempotent $a$, the functor $E$ gives isomorphisms $E_{\Lambda,\Lambda}^j \colon \HH{j}(\Lambda) \longrightarrow \HH{j}(a{\Lambda}a)$ in almost all degrees $j$. \begin{prop} \label{prop:lambda-ext-iso} Assume that $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module, and that the functor $e$ is an eventually homological isomorphism. 
Then the map \[ E_{\Lambda,M}^j \colon \Ext_{\envalg{\Lambda}}^j(\Lambda, M) \longrightarrow \Ext_{\envalg{(a{\Lambda}a)}}^j(E(\Lambda), E(M)) \] is an isomorphism for every $\envalg{\Lambda}$-module $M$ and every integer $j$ such that \[ j > \max \Big\{ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big), \, \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big) \Big\} + \pd_{a{\Lambda}a} a{\Lambda} + \pd_{\opposite{(a{\Lambda}a)}}{\Lambda}a + 1 < \infty. \] In particular, we have isomorphisms \[ \HH{j}(\Lambda) \cong \HH{j}(a{\Lambda}a) \] for almost all degrees $j$. \end{prop} \begin{proof} We use Corollary~\ref{cor:algebra-ext-iso-onemodule}~(i) on the algebra $\envalg{\Lambda}$, the idempotent $\varepsilon = a \otimes \opposite{a}$ and the $\envalg{\Lambda}$-module $\Lambda$. Let $m$ and $n$ be the integers \[ m = \max \Big\{ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big), \, \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big) \Big\} + 1 \qquad\text{and}\qquad n = \pd_{a{\Lambda}a} a{\Lambda} + \pd_{\opposite{(a{\Lambda}a)}}{\Lambda}a. \] Note that $m$ and $n$ are finite by Corollary~\ref{cor:algebra-ext-iso}. By Lemma~\ref{lem:ext-lambda-simples}, we have \[ \Ext_{\envalg{\Lambda}}^j \Big( \Lambda, \frac{\envalg{\Lambda}/\idealgenby{\varepsilon}} {\rad \envalg{\Lambda}/\idealgenby{\varepsilon}} \Big) = 0 \quad\text{for}\quad j \ge m, \] and by Lemma~\ref{lem:pd-envalg}, we have \[ \pd_{\varepsilon\envalg{\Lambda}\varepsilon}\varepsilon\envalg{\Lambda} \le n. \] Now the result follows from Corollary~\ref{cor:algebra-ext-iso-onemodule}~(i) by noting that $\envalg{(a{\Lambda}a)}$ is the same algebra as $\varepsilon\envalg{\Lambda}\varepsilon$ and that our functor $E = a-a$ is the same as the functor $\varepsilon -$ given by left multiplication with the idempotent $\varepsilon$. 
\end{proof} Finally, we conclude this section by showing that the assumptions we have indicated imply that \fg{} holds for $\Lambda$ if and only if \fg{} holds for $a{\Lambda}a$. The following theorem is the main result of this section and constitutes the fourth part of the Main Theorem presented in the introduction. \begin{thm} \label{thm:main-result-fg} Let $\Lambda$ be a finite dimensional algebra over a field $k$, and let $a$ be an idempotent in $\Lambda$. Assume that $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module, and that the functor $a-\colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ is an eventually homological isomorphism. Then $\Lambda$ satisfies \fg{} if and only if $a{\Lambda}a$ satisfies \fg{}. \end{thm} \begin{proof} For every $\Lambda$-module $M$, we can make a diagram \[ \xymatrix@C=4em{ {\HH*(\Lambda)} \ar[r]^-{\varphi_M} \ar[d]_{E_{\Lambda,\Lambda}^*} & {\Ext_\Lambda^*(M,M)} \ar[d]^{e_{M,M}^*} \\ {\HH*(a{\Lambda}a)} \ar[r]_-{\varphi_{e(M)}} & {\Ext_{a{\Lambda}a}^*(e(M), e(M))} } \] of graded rings and graded ring homomorphisms. This diagram commutes by Proposition~\ref{prop:e-gamma-commute}, and the maps $E_{\Lambda,\Lambda}^*$ and $e_{M,M}^*$ are isomorphisms in almost all degrees by Proposition~\ref{prop:lambda-ext-iso} and Corollary~\ref{cor:algebra-ext-iso}, respectively. Since we have such diagrams for every $\Lambda$-module $M$ and the functor $e$ is essentially surjective (see Proposition~\ref{properties}), we can make one diagram with $M = \Lambda/\rad \Lambda$ and another with $e(M) \cong a{\Lambda}a/\rad a{\Lambda}a$. Then, by Proposition~\ref{prop:fg-two-algebras}, it follows that $\Lambda$ satisfies \fg{} if and only if $a{\Lambda}a$ satisfies \fg{}. \end{proof} \section{Applications and Examples} \label{section:examples} In this section we provide applications of our Main Theorem (stated in the Introduction), and examples illustrating its use. 
For ease of reference, we restate the Main Theorem here. \begin{thm} \label{thm:main-results-summary} Let $\Lambda$ be an artin algebra over a commutative ring $k$ and let $a$ be an idempotent element of $\Lambda$. Let $e$ be the functor $a-\colon \fmod{\Lambda}\longrightarrow \fmod{a\Lambda a}$ given by multiplication by $a$. Consider the following conditions$\colon$ \begin{align*} (\alpha) \ \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big) &< \infty & (\beta) \ \pd_{a{\Lambda}a} a{\Lambda} &< \infty \\ (\gamma) \ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}} {\rad \Lambda/\idealgenby{a}} \Big) &< \infty & (\delta) \ \pd_{\opposite{(a{\Lambda}a)}}{\Lambda}a &< \infty \end{align*} Then the following hold. \begin{enumerate} \item The following are equivalent$\colon$ \begin{enumerate} \item $(\alpha)$ and $(\beta)$ hold. \item $(\gamma)$ and $(\delta)$ hold. \item The functor $e$ is an eventually homological isomorphism. \end{enumerate} \item The functor $a-\colon \fmod{\Lambda}\longrightarrow \fmod{a\Lambda a}$ induces a singular equivalence between $\Lambda$ and $a\Lambda a$ if and only if the conditions $(\beta)$ and $(\gamma)$ hold. \item Assume that $e$ is an eventually homological isomorphism. Then $\Lambda$ is Gorenstein if and only if $a{\Lambda}a$ is Gorenstein. \item Assume that $e$ is an eventually homological isomorphism, that $k$ is a field and that $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module. Then $\Lambda$ satisfies \fg{} if and only if $a{\Lambda}a$ satisfies \fg{}. \end{enumerate} \end{thm} This section is divided into three subsections. In the first subsection, we apply Theorem~\ref{thm:main-results-summary} to the class of triangular matrix algebras. In the second subsection, we consider some cases where the conditions $(\alpha)$--$(\delta)$ in Theorem~\ref{thm:main-results-summary} are related. 
As a consequence, we find sufficient conditions, stated in terms of the quiver and relations, for applying Theorem~\ref{thm:main-results-summary} to a quotient of a path algebra. In the last subsection, we compare our work to that of Nagase in \cite{N}. \subsection{Triangular Matrix Algebras} Let $\Sigma$ and $\Gamma$ be two artin algebras over a commutative ring $k$, and let ${_{\Gamma}M_{\Sigma}}$ be a $\Gamma$-$\Sigma$-bimodule such that $M$ is finitely generated over $k$, and $k$ acts centrally on $M$. Then we have the artin triangular matrix algebra \[ \Lambda = \begin{pmatrix} \Sigma & 0 \\ {_\Gamma M_\Sigma} & \Gamma \\ \end{pmatrix}, \] where the addition and the multiplication are given by the ordinary operations on matrices. The module category of $\Lambda$ has a well known description, see \cite{ARS, FGR}. In fact, a module over $\Lambda$ is described as a triple $(X,Y,f)$, where $X$ is a $\Sigma$-module, $Y$ is a $\Gamma$-module and $f\colon M\otimes_{\Sigma}X\longrightarrow Y$ is a $\Gamma$-homomorphism. A morphism between two triples $(X,Y,f)$ and $(X',Y',f')$ is a pair of homomorphisms $(a,b)$, where $a\in \Hom_{\Sigma}(X,X')$ and $b\in \Hom_{\Gamma}(Y,Y')$, such that the following diagram commutes$\colon$ \[ \xymatrix{ M\otimes_{\Sigma}X \ar[r]^{ \ \ \ f} \ar[d]_{\Id_M\otimes a} & Y \ar[d]^{b} \\ M\otimes_{\Sigma}X'\ar[r]^{ \ \ \ f'} & Y' } \] We define the following functors$\colon$ \begin{enumerate} \item The functor $T_{\Sigma}\colon \fmod{\Sigma}\longrightarrow \fmod{\Lambda}$ is defined on $\Sigma$-modules $X$ by $T_{\Sigma}(X)=(X,M\otimes_{\Sigma}X,\Id_{M\otimes X})$ and given a $\Sigma$-homomorphism $a\colon X\longrightarrow X'$ then $T_{\Sigma}(a)=(a,\Id_{M}\otimes a)$. \item The functor $U_{\Sigma}\colon \fmod{\Lambda}\longrightarrow \fmod{\Sigma}$ is defined on $\Lambda$-modules $(X,Y,f)$ by $U_{\Sigma}(X,Y,f)=X$ and given a $\Lambda$-homomorphism $(a,b)\colon (X,Y,f)\longrightarrow (X',Y',f')$ then $U_{\Sigma}(a,b)=a$. 
Similarly we define the functor $U_{\Gamma}\colon \fmod{\Lambda}\longrightarrow \fmod{\Gamma}$. \item The functor $Z_{\Sigma}\colon \fmod{\Sigma}\longrightarrow \fmod{\Lambda}$ is defined on $\Sigma$-modules $X$ by $Z_{\Sigma}(X)=(X,0,0)$ and given a $\Sigma$-homomorphism $a\colon X\longrightarrow X'$ then $Z_{\Sigma}(a)=(a,0)$. Similarly we define the functor $Z_{\Gamma}\colon \fmod{\Gamma}\longrightarrow \fmod{\Lambda}$. \item The functor $H_{\Gamma}\colon\fmod{\Gamma}\longrightarrow \fmod{\Lambda}$ is defined by $H_{\Gamma}(Y)=(\Hom_{\Gamma}(M,Y),Y,\epsilon_Y)$ on $\Gamma$-modules $Y$ and given a $\Gamma$-homomorphism $b\colon Y\longrightarrow Y'$ then $H_{\Gamma}(b)=(\Hom_{\Gamma}(M,b),b)$. \end{enumerate} Then from Example~\ref{exam:mod-recollements} (see also \cite[Example~2.12]{Psaroud}), using the idempotent elements $e_1=\bigl(\begin{smallmatrix} 1_{\Sigma} & 0 \\ 0 & 0 \end{smallmatrix}\bigr)$ and $e_2=\bigl(\begin{smallmatrix} 0 & 0 \\ 0 & 1_{\Gamma} \end{smallmatrix}\bigr)$, we have the following recollements of abelian categories$\colon$ \begin{equation} \label{recolone} \xymatrix@C=0.5cm{ \fmod{\Gamma} \ar[rrr]^{Z_{\Gamma}} &&& \fmod{\Lambda} \ar[rrr]^{U_{\Sigma}} \ar @/_1.5pc/[lll]_{q} \ar @/^1.5pc/[lll]^{U_{\Gamma}} &&& \fmod{\Sigma} \ar @/_1.5pc/[lll]_{T_{\Sigma}} \ar @/^1.5pc/[lll]^{Z_{\Sigma}} } \end{equation} and \begin{equation} \label{recoltwo} \xymatrix@C=0.5cm{ \fmod{\Sigma} \ar[rrr]^{Z_{\Sigma}} &&& \fmod{\Lambda} \ar[rrr]^{U_{\Gamma}} \ar @/_1.5pc/[lll]_{U_{\Sigma}} \ar @/^1.5pc/[lll]^{p} &&& \fmod{\Gamma} \ar @/_1.5pc/[lll]_{Z_{\Gamma}} \ar @/^1.5pc/[lll]^{H_{\Gamma}} } \end{equation} The functors $q$ and $p$ are induced from the adjoint pairs $(T_{\Sigma},U_{\Sigma})$ and $(U_{\Gamma},H_{\Gamma})$ respectively; see \cite[Remark~2.3]{Psaroud} for more details. We want to use Theorem~\ref{thm:main-results-summary} to compare the triangular matrix algebra $\Lambda$ with the algebras $\Sigma$ and $\Gamma$.
First consider the case where we compare $\Lambda$ with $\Sigma$. We then take the idempotent $a$ in the theorem to be $e_1$, and we can reformulate the conditions $(\alpha)$, $(\beta)$, $(\gamma)$ and $(\delta)$ as follows: \begin{enumerate} \item[$(\alpha)$] The functor $Z_\Gamma$ sends every $\Gamma$-module to a $\Lambda$-module with finite injective dimension. \item[$(\beta)$] The functor $U_\Sigma$ sends every projective $\Lambda$-module to a $\Sigma$-module with finite projective dimension. \item[$(\gamma)$] The functor $Z_\Gamma$ sends every $\Gamma$-module to a $\Lambda$-module with finite projective dimension. \item[$(\delta)$] The functor $U_\Sigma$ sends every injective $\Lambda$-module to a $\Sigma$-module with finite injective dimension. \end{enumerate} By interchanging $\Sigma$ and $\Gamma$, we get a similar reformulation of the conditions for the case where we compare $\Lambda$ with $\Gamma$. The next result clarifies when the above hold for the recollement \eqref{recoltwo} of a triangular matrix algebra $\Lambda$. \begin{lem} \label{lemtriangular} Let $\Lambda=\bigl(\begin{smallmatrix} \Sigma & 0 \\ {_\Gamma M_\Sigma} & \Gamma \end{smallmatrix}\bigr)$ be a triangular matrix algebra. The following hold. \begin{enumerate} \item If $\pd_{\Gamma}M<\infty$, then the functor $U_{\Gamma}$ sends projective $\Lambda$-modules to $\Gamma$-modules of finite projective dimension. \item The functor $U_{\Gamma}$ preserves injectives. \item Assume that $\gld{\Sigma}<\infty$. Then $\id_{\Lambda}Z_{\Sigma}(X)<\infty$ for every $\Sigma$-module $X$. \item Assume that $\gld{\Sigma}<\infty$ and $\pd_{\Gamma}M<\infty$. Then we have $\pd_{\Lambda}Z_{\Sigma}(X)<\infty$ for all $\Sigma$-modules $X$. 
\end{enumerate} \begin{proof} (i) It is known, see \cite{ARS}, that the indecomposable projective $\Lambda$-modules are of the form $T_{\Sigma}(P)$, where $P$ is an indecomposable projective $\Sigma$-module, and $Z_{\Gamma}(Q)$, where $Q$ is an indecomposable projective $\Gamma$-module. Hence it is enough to consider modules of these forms. We have $U_{\Gamma}Z_{\Gamma}(Q)=Q$, and since $\pd_{\Gamma}M<\infty$ it follows that $\pd_{\Gamma}U_{\Gamma}T_{\Sigma}(P)=\pd_{\Gamma}(M\otimes_{\Sigma}P)<\infty$. (ii) Since $(Z_{\Gamma}, U_{\Gamma})$ is an adjoint pair and $Z_{\Gamma}$ is exact it follows that the functor $U_{\Gamma}$ preserves injectives. (iii) Let $0\longrightarrow X\longrightarrow I^0\longrightarrow \cdots\longrightarrow I^n\longrightarrow 0$ be a finite injective resolution of a $\Sigma$-module $X$. Then applying the functor $Z_{\Sigma}$ we get the exact sequence $0\longrightarrow Z_{\Sigma}(X)\longrightarrow Z_{\Sigma}(I^0)\longrightarrow \cdots\longrightarrow Z_{\Sigma}(I^n)\longrightarrow 0$, where every $Z_{\Sigma}(I^i)$ is an injective $\Lambda$-module since we have the adjoint pair $(U_{\Sigma},Z_{\Sigma})$ and $U_{\Sigma}$ is exact. Hence the injective dimension of $Z_{\Sigma}(X)$ is finite. (iv) This follows from \cite[Lemma~2.4]{triangular} since a $\Lambda$-module $(X,Y,f)$ has finite projective dimension if and only if the projective dimensions of $X$ and $Y$ are finite. \end{proof} \end{lem} Using now the recollement \eqref{recolone} we have the following dual result of Lemma~\ref{lemtriangular}. The proof is left to the reader. \begin{lem} \label{lemtriangular2} Let $\Lambda=\bigl(\begin{smallmatrix} \Sigma & 0 \\ {_\Gamma M_\Sigma} & \Gamma \end{smallmatrix}\bigr)$ be a triangular matrix algebra. The following hold. \begin{enumerate} \item The functor $U_{\Sigma}$ preserves projectives. \item If $\pd_{\Sigma} M_{\Sigma}<\infty$, then the functor $U_{\Sigma}$ sends injective $\Lambda$-modules to $\Sigma$-modules of finite injective dimension. 
\item Assume that $\gld{\Gamma}<\infty$. Then $\pd_{\Lambda}Z_{\Gamma}(Y)<\infty$ for every $\Gamma$-module $Y$. \item Assume that $\gld{\Gamma}<\infty$ and $\pd_{\Sigma} M_{\Sigma}<\infty$. Then for every $\Gamma$-module $Y$ we have $\id_{\Lambda}Z_{\Gamma}(Y)<\infty$. \end{enumerate} \end{lem} As a consequence of Lemma~\ref{lemtriangular} and Theorem~\ref{thm:main-results-summary}, we have the following result. For characterizations similar to (ii), see \cite{XiongZhang}. \begin{cor} \label{cortriang} Let $\Lambda=\bigl(\begin{smallmatrix} \Sigma & 0 \\ {_{\Gamma}M_{\Sigma}} & \Gamma \end{smallmatrix}\bigr)$ be an artin triangular matrix algebra over a commutative ring $k$ such that $\gld{\Sigma}<\infty$ and $\pd_{\Gamma}M<\infty$. Then the following hold. \begin{enumerate} \item The singularity categories of $\Lambda$ and $\Gamma$ are triangle equivalent$\colon$ \[ \xymatrix@C=0.5cm{ \mathsf{D}_\mathsf{sg}(U_{\Gamma})\colon \mathsf{D}_\mathsf{sg}(\fmod{\Lambda}) \ \ar[rr]^{ \ \ \ \ \simeq} && \ \mathsf{D}_\mathsf{sg}(\fmod{\Gamma}) } \] \item $\Lambda$ is Gorenstein if and only if $\Gamma$ is Gorenstein. \item Assume that $k$ is a field and that $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module. Then $\Lambda$ satisfies \fg{} if and only if $\Gamma$ satisfies \fg{}. \end{enumerate} \end{cor} \begin{rem} The algebra $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ being semisimple (as required in part (iii) above) can be shown to be equivalent to the following three algebras being semisimple: $(\Sigma/\rad \Sigma) \otimes_k (\opposite{\Sigma}/\rad \opposite{\Sigma})$, $(\Sigma/\rad \Sigma) \otimes_k (\opposite{\Gamma}/\rad \opposite{\Gamma})$ and $(\Gamma/\rad \Gamma) \otimes_k (\opposite{\Gamma}/\rad \opposite{\Gamma})$. \end{rem} We also have the following consequence, obtained now from Lemma~\ref{lemtriangular2} and Theorem~\ref{thm:main-results-summary}.
Note that in the first statement we recover a theorem by Chen \cite{Chen:schurfunctors}. \begin{cor} Let $\Lambda=\bigl(\begin{smallmatrix} \Sigma & 0 \\ {_{\Gamma}M_{\Sigma}} & \Gamma \end{smallmatrix}\bigr)$ be an artin triangular matrix algebra over a commutative ring $k$. \begin{enumerate} \item \cite[Theorem~4.1]{Chen:schurfunctors} Assume that $\gld{\Gamma}<\infty$. Then there is a triangle equivalence$\colon$ \[ \xymatrix@C=0.5cm{ \mathsf{D}_\mathsf{sg}(\fmod{\Lambda}) \ \ar[rr]^{ \simeq \ }_{\mathsf{D}_\mathsf{sg}(U_{\Sigma}) \ } && \ \mathsf{D}_\mathsf{sg}(\fmod{\Sigma}) } \] \item Assume that $\gld{\Gamma}<\infty$ and $\pd_{\Sigma} M_{\Sigma}<\infty$. Then the following hold. \begin{enumerate} \item $\Lambda$ is Gorenstein if and only if $\Sigma$ is Gorenstein. \item Assume that $k$ is a field and that $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module. Then $\Lambda$ satisfies \fg{} if and only if $\Sigma$ satisfies \fg{}. \end{enumerate} \end{enumerate} \end{cor} From the above corollaries and the classical result of Buchweitz--Happel (see the text before Corollary~\ref{corgorsingular}) we have the following result for stable categories of Cohen--Macaulay modules. \begin{cor} \label{cor:matrix-algebra-cm} Let $\Lambda=\bigl(\begin{smallmatrix} \Sigma & 0 \\ {_{\Gamma}M_{\Sigma}} & \Gamma \end{smallmatrix}\bigr)$ be an artin triangular matrix algebra. \begin{enumerate} \item \cite[Corollary~4.2]{Chen:schurfunctors} Assume that $\gld{\Gamma}<\infty$ and $\Sigma$ is Gorenstein. Then there is a triangle equivalence$\colon$ \[ \xymatrix@C=0.5cm{ \mathsf{D}_\mathsf{sg}(\fmod{\Lambda}) \ \ar[rr]^{ \ \simeq } && \ \uCM(\Sigma) } \] \item Assume that $\gld{\Gamma}<\infty$ and $\pd_{\Sigma} M_{\Sigma}<\infty$. 
If $\Sigma$ is Gorenstein, then there is a triangle equivalence between the stable categories of Cohen--Macaulay modules of $\Lambda$ and $\Sigma$$\colon$ \[ \xymatrix@C=0.5cm{ \uCM(\Lambda) \ \ar[rr]^{ \simeq} && \ \uCM(\Sigma) } \] \item Assume that $\gld{\Sigma}<\infty$ and $\pd_{\Gamma}M<\infty$. If $\Gamma$ is Gorenstein, then there is a triangle equivalence between the stable categories of Cohen--Macaulay modules of $\Lambda$ and $\Gamma$$\colon$ \[ \xymatrix@C=0.5cm{ \uCM(\Lambda) \ \ar[rr]^{ \simeq} && \ \uCM(\Gamma) } \] \end{enumerate} \end{cor} \subsection{Algebras with ordered simples} In this subsection, we apply Theorem~\ref{thm:main-results-summary} to cases where there exists a total order $\preceq$ of the simple $\Lambda/\idealgenby{a}$-modules with the property that \begin{equation} \label{eqn:simple-order} S \preceq S' \implies \Ext_\Lambda^{>0}(S,S')=0 \end{equation} for every pair $S$ and $S'$ of simple $\Lambda/\idealgenby{a}$-modules. With this assumption, we show that we have the implications $(\alpha) \implies (\delta)$ and $(\gamma) \implies (\beta)$ between the conditions in Theorem~\ref{thm:main-results-summary}. We then consider some special cases where such orderings appear. We need the following preliminary results. \begin{lem} \label{lem:simples-proj-res} Let $\Lambda$ be an artin algebra, let $M$ be a $\Lambda$-module with minimal projective resolution $\cdots \longrightarrow P_1 \longrightarrow P_0 \longrightarrow M \longrightarrow 0$, and let $S$ be a simple $\Lambda$-module. Then, for every nonnegative integer $n$, we have $\Ext_\Lambda^n(M, S) = 0$ if and only if the projective cover of $S$ is not a direct summand of $P_n$. \end{lem} \begin{lem} \label{lem:simples-in-module-recollement} Let $\Lambda$ be an artin algebra, and let $a$ be an idempotent in $\Lambda$. Let $S$ be a simple $\Lambda$-module which is not annihilated by the ideal $\idealgenby{a}$, and let $P$ be the projective cover of $S$. 
Then $aP$ is a projective $a{\Lambda}a$-module. \end{lem} \begin{proof} We have \[ \Hom_\Lambda({\Lambda}a, S) \cong aS \ne 0, \] so there exists a nonzero morphism $f \colon {\Lambda}a \longrightarrow S$. Decomposing the idempotent $a$ into a sum $a = a_1 + \cdots + a_t$ of orthogonal primitive idempotents gives a decomposition ${\Lambda}a \cong {\Lambda}a_1 \oplus \cdots \oplus {\Lambda}a_t$ of ${\Lambda}a$ into indecomposable projective modules. For some $i$, we must then have a nonzero morphism $f_i \colon {\Lambda}a_i \longrightarrow S$. Since $S$ is simple, this means that ${\Lambda}a_i$ is its projective cover. Since $a \cdot a_i = a_i$, we get \[ aP \cong a{\Lambda}a_i = (a{\Lambda}a) a_i. \] Therefore $aP$ is a projective $a{\Lambda}a$-module. \end{proof} Now we show that the conditions of Theorem~\ref{thm:main-results-summary} are related when we have an ordering of the simple $\Lambda/\idealgenby{a}$-modules. \begin{prop} \label{prop:ordered-simples} Let $\Lambda$ be an artin algebra, and let $a$ be an idempotent in $\Lambda$. Assume that there is a total order $\preceq$ on the simple $\Lambda/\idealgenby{a}$-modules satisfying condition~\eqref{eqn:simple-order}. Then we have the following implications between the conditions of \textup{Theorem~\ref{thm:main-results-summary}}$\colon$ \begin{enumerate} \item $(\alpha) \implies (\delta)$. \item $(\gamma) \implies (\beta)$. \end{enumerate} \end{prop} \begin{proof} We show the second implication; the first can be shown in a similar way. Assume that $(\gamma)$ holds, that is, every $\Lambda/\idealgenby{a}$-module has finite projective dimension as a $\Lambda$-module. We want to show that $(\beta)$ holds, that is, the $a{\Lambda}a$-module $a\Lambda$ has finite projective dimension. As in Section~\ref{section:fg-a(Lambda)a}, we let $e$ be the exact functor $e=(a-)\colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ given by multiplication by $a$.
Then what we need to show is that $e(\Lambda)$ has finite projective dimension as $a{\Lambda}a$-module. Let $S_1 \preceq \cdots \preceq S_s$ be all the simple $\Lambda/\idealgenby{a}$-modules (up to isomorphism), ordered by the total order $\preceq$. Let $T_1, \ldots, T_t$ be all the other simple $\Lambda$-modules (up to isomorphism). Let $Q_i$ be the projective cover of $S_i$ (considered as $\Lambda$-module) and $Q'_j$ the projective cover of $T_j$, for every $i$ and $j$. These are all the indecomposable projective $\Lambda$-modules up to isomorphism, so it is sufficient to show that $e(Q_i)$ and $e(Q'_j)$ have finite projective dimension as $a{\Lambda}a$-modules for every $i$ and $j$. For each of the modules $Q'_j$, we have that $e(Q'_j)$ is a projective $a{\Lambda}a$-module by Lemma~\ref{lem:simples-in-module-recollement}. We need to check that $e(Q_i)$ has finite projective dimension for every $i$. Consider the module $S_1$. By our assumptions, every simple $\Lambda/\idealgenby{a}$-module has finite projective dimension over $\Lambda$. Let \[ \xymatrix{ 0 \ar[r]^{} & P^{(1)}_{n_1} \ar[r]^{} & \cdots \ar[r]^{} & P^{(1)}_2 \ar[r]^{} & P^{(1)}_1 \ar[r]^{} & Q_1 \ar[r]^{} & S_1 \ar[r] & 0 } \] be a minimal projective resolution of $S_1$. Applying the functor $e$ to this sequence gives the exact sequence \begin{equation} \label{eqn:proj-res-e(Q_1)} \xymatrix{ 0 \ar[r]^{} & e(P^{(1)}_{n_1}) \ar[r]^{} & \cdots \ar[r]^{} & e(P^{(1)}_2) \ar[r]^{} & e(P^{(1)}_1) \ar[r]^{} & e(Q_1) \ar[r]^{} & 0 } \end{equation} of $a{\Lambda}a$-modules, since $e(S_1) = 0$. Since we have $\Ext_\Lambda^{>0}(S_1,S_i) = 0$ for every $i$, it follows from Lemma~\ref{lem:simples-proj-res} that the only indecomposable projective $\Lambda$-modules which can occur as direct summands of the modules $P^{(1)}_1, \ldots, P^{(1)}_{n_1}$ are the modules $Q'_j$. 
Since we know that these are mapped to projective modules by $e$, the sequence~\eqref{eqn:proj-res-e(Q_1)} is a projective resolution of the $a{\Lambda}a$-module $e(Q_1)$. We continue inductively. For every $i$, we apply the functor $e$ to a minimal projective resolution \[ \xymatrix{ 0 \ar[r]^{} & P^{(i)}_{n_i} \ar[r]^{} & \cdots \ar[r]^{} & P^{(i)}_2 \ar[r]^{} & P^{(i)}_1 \ar[r]^{} & Q_i \ar[r]^{} & S_i \ar[r] & 0 } \] and obtain the sequence \[ \xymatrix{ 0 \ar[r]^{} & e(P^{(i)}_{n_i}) \ar[r]^{} & \cdots \ar[r]^{} & e(P^{(i)}_2) \ar[r]^{} & e(P^{(i)}_1) \ar[r]^{} & e(Q_i) \ar[r]^{} & 0 } \] of $a{\Lambda}a$-modules. Each of the modules $P^{(i)}_1, \ldots, P^{(i)}_{n_i}$ has only the indecomposable projective modules $Q'_1, \ldots, Q'_t, Q_1, \ldots, Q_{i-1}$ as direct summands. Therefore (by the induction assumption), all the modules $e(P^{(i)}_1), \ldots, e(P^{(i)}_{n_i})$ have finite projective dimension, and thus the module $e(Q_i)$ has finite projective dimension. \end{proof} The following example shows that the implications $(\alpha) \implies (\beta)$ and $(\gamma) \implies (\delta)$ of the above proposition do not hold in general. \begin{exam} Let $k$ be a field. Let the $k$-algebra $\Lambda = kQ/\idealgenby{\rho}$ be given by the following quiver and relations$\colon$ \[ Q\colon \xymatrix{ {1} \ar@/^/[r]^\alpha & {2} \ar@/^/[l]^\beta } \qquad\qquad \rho = \{ \alpha\beta \}. \] Let $a = e_1$. Let $S_2$ be the simple $\Lambda$-module associated to the vertex $2$. Then we have $\pd_\Lambda S_2 = 2$ and $\id_\Lambda S_2 = 2$, but $\pd_{a{\Lambda}a} a{\Lambda} = \infty$ and $\id_{\opposite{(a{\Lambda}a)}} {\Lambda}a = \infty$. \end{exam} By combining Theorem~\ref{thm:main-results-summary} with Proposition~\ref{prop:ordered-simples}, we get the following result. \begin{cor} \label{cor:ordered-simples} Let $\Lambda$ be an artin algebra over a commutative ring $k$, and let $a$ be an idempotent in $\Lambda$. 
Assume that there is a total order $\preceq$ on the simple $\Lambda/\idealgenby{a}$-modules satisfying condition~\eqref{eqn:simple-order}. Then the following hold, where $(\alpha)$, $(\beta)$, $(\gamma)$ and $(\delta)$ refer to the conditions in \textup{Theorem~\ref{thm:main-results-summary}}. \begin{enumerate} \item The functor $a-\colon \fmod{\Lambda}\longrightarrow \fmod{a\Lambda a}$ induces a singular equivalence between $\Lambda$ and $a\Lambda a$ if and only if $(\gamma)$ holds. \item Assume that $(\alpha)$ and $(\gamma)$ hold. Then $\Lambda$ is Gorenstein if and only if $a{\Lambda}a$ is Gorenstein. \item Assume that $(\alpha)$ and $(\gamma)$ hold, that $k$ is a field and $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module. Then $\Lambda$ satisfies \fg{} if and only if $a{\Lambda}a$ satisfies \fg{}. \end{enumerate} \end{cor} We now consider special cases of the conditions $(\alpha)$ and $(\gamma)$ where the dimensions are not only finite, but at most one. We show that if one of these dimensions is at most one, then we have an ordering of the simple $\Lambda/\idealgenby{a}$-modules as assumed in Proposition~\ref{prop:ordered-simples} and Corollary~\ref{cor:ordered-simples}. \begin{lem} \label{lem:pd,id<=1} Let $\Lambda$ be an artin algebra, and let $a$ be an idempotent in $\Lambda$. Assume that we have either \[ (\alpha_1) \ \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) \le 1 \qquad\text{or}\qquad (\gamma_1) \ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) \le 1 \] Then there exists a total order $\preceq$ on the simple $\Lambda/\idealgenby{a}$-modules satisfying condition~\eqref{eqn:simple-order}. \end{lem} \begin{proof} Assume that $(\gamma_1)$ holds (the proof using $(\alpha_1)$ is similar). 
Let $S_1, \ldots, S_s$ be all the simple $\Lambda/\idealgenby{a}$-modules (up to isomorphism), and let $P_1, \ldots, P_s$ be their projective covers as $\Lambda$-modules, such that $P_i/((\rad \Lambda)P_i) \cong S_i$ for every $i$. Assume that we have ordered these by increasing length of the projective covers, that is, \[ \operatorname{length}(P_1) \le \operatorname{length}(P_2) \le \cdots \le \operatorname{length}(P_s). \] For any $i$, the module $S_i$ has a projective resolution of the form \[ \xymatrix{ 0 \ar[r]^{} & Q \ar[r]^{} & P_i \ar[r]^{} & S_i \ar[r]^{} & 0 } \] Since the module $Q$ has shorter length than the module $P_i$, it can not have any of the modules $P_i, \ldots, P_s$ as direct summands. Then Lemma~\ref{lem:simples-proj-res} implies that $\Ext_\Lambda^{>0}(S_i, S_j) = 0$ for $i \le j$. \end{proof} By using Proposition~\ref{prop:ordered-simples}, Lemma~\ref{lem:pd,id<=1} and Theorem~\ref{thm:main-results-summary}, we have the following. \begin{cor} \label{cor:pd,id<=1} Let $\Lambda$ be an artin algebra over a commutative ring $k$, and let $a$ be an idempotent in $\Lambda$. Then the following hold, where $(\alpha)$, $(\beta)$, $(\gamma)$ and $(\delta)$ refer to the conditions in \textup{Theorem~\ref{thm:main-results-summary}}, and $(\alpha_1)$ and $(\gamma_1)$ refer to the conditions in \textup{Lemma~\ref{lem:pd,id<=1}}. \begin{enumerate} \item If $(\gamma_1)$ holds, then the singularity categories of $\Lambda$ and $a{\Lambda}a$ are triangle equivalent. \item Assume either that $(\alpha_1)$ and $(\gamma)$ hold, or that $(\alpha)$ and $(\gamma_1)$ hold. Then $\Lambda$ is Gorenstein if and only if $a{\Lambda}a$ is Gorenstein. \item Assume either that $(\alpha_1)$ and $(\gamma)$ hold, or that $(\alpha)$ and $(\gamma_1)$ hold. Furthermore, assume that $k$ is a field and $(\Lambda/\rad \Lambda) \otimes_k (\opposite{\Lambda}/\rad \opposite{\Lambda})$ is a semisimple $\envalg{\Lambda}$-module. 
Then $\Lambda$ satisfies \fg{} if and only if $a{\Lambda}a$ satisfies \fg{}. \end{enumerate} \end{cor} For the following results, we let $\Lambda = kQ/\idealgenby{\rho}$ be a quotient of a path algebra, where $k$ is a field, $Q$ is a quiver, and $\rho$ a minimal set of relations in $kQ$ generating an admissible ideal $\idealgenby{\rho}$. First we describe how the conditions $(\alpha_1)$ and $(\gamma_1)$ can be interpreted for quotients of path algebras. The result follows directly from \cite[Corollary, Section~1.1]{Bongartz}. \begin{lem} \label{lem:relations-dimensions} Let $S$ be the simple $\Lambda$-module corresponding to a vertex $v$ in the quiver $Q$. \begin{enumerate} \item We have $\pd_\Lambda S \le 1$ if and only if no relation starts in the vertex $v$. \item We have $\id_\Lambda S \le 1$ if and only if no relation ends in the vertex $v$. \end{enumerate} \end{lem} As a consequence of Lemma~\ref{lem:relations-dimensions} and Corollary~\ref{cor:pd,id<=1}, we get the following results for path algebras. \begin{cor} \label{cor:removing-vertices-1} Let $\Lambda = kQ/\idealgenby{\rho}$ be a quotient of a path algebra as above. Choose some vertices in $Q$ where no relations start, and let $a$ be the sum of all vertices except these. Then the functor $a-\colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ induces a singular equivalence between $\Lambda$ and $a{\Lambda}a\colon$ \[ \mathsf{D}_\mathsf{sg}(a-)\colon \mathsf{D}_\mathsf{sg}(\fmod \Lambda) \stackrel{\simeq}{\longrightarrow} \mathsf{D}_\mathsf{sg}(\fmod a{\Lambda}a) \] \end{cor} \begin{cor} \label{cor:removing-vertices-2} Let $\Lambda = kQ/\idealgenby{\rho}$ be a quotient of a path algebra as above. Choose some vertices in $Q$ where no relations start and no relations end, and let $a$ be the sum of all vertices except these. Then the following hold$\colon$ \begin{enumerate} \item $\Lambda$ is Gorenstein if and only if $a{\Lambda}a$ is Gorenstein. 
\item $\Lambda$ satisfies \fg{} if and only if $a{\Lambda}a$ satisfies \fg{}. \end{enumerate} \end{cor} We apply the above result in the following example. \begin{exam} Let $Q$ be the quiver with relations $\rho$ given by \[ Q\colon \xymatrix@C=3em{ 1 \ar[r]^{\alpha_1} & 2 \ar[r]^-{\alpha_2} & {\cdots} \ar[r]^{\alpha_{m-1}} & m \ar@(dl,dr)[lll]^{\alpha_m} } \qquad\text{and}\qquad \rho=\{(\alpha_m \cdots \alpha_1)^n\}, \] for some integers $m \ge 2$ and $n \ge 2$. Let $\Lambda = kQ/\idealgenby{\rho}$, and let $a = e_1$ (the only vertex where a relation starts and ends). Then $a{\Lambda}a \cong k[x]/\idealgenby{x^n}$, so $a{\Lambda}a$ satisfies \fg{} by \cite{EH,EHS}. By Corollary~\ref{cor:removing-vertices-2}, the algebra $\Lambda$ also satisfies \fg{}. By Corollary~\ref{cor:removing-vertices-1}, the algebras $\Lambda$ and $k[x]/\idealgenby{x^n}$ are singularly equivalent. See \cite{SO} for a general discussion of the Hochschild cohomology ring of the path algebra $kQ$ modulo one relation. \end{exam} \subsection{Comparison to work by Nagase} \label{subsection:nagase} In this subsection we recall a result of Hiroshi Nagase \cite{N} and relate his set of assumptions to ours. In \cite{N} Hiroshi Nagase proves the following result. \begin{prop} Let $\Lambda$ be a finite dimensional algebra over an algebraically closed field with a stratifying ideal $\idealgenby{a}$ for an idempotent $a$ in $\Lambda$. Suppose $\pd_{\envalg{\Lambda}} \Lambda/\idealgenby{a} < \infty$. Then we have \begin{enumerate} \item[(1)] $\HH{\ge n}(\Lambda)\cong \HH{\ge n}(a\Lambda a)$ as graded algebras, where $n = \pd_{\envalg{\Lambda}} \Lambda/\idealgenby{a} + 1$. \item[(2)] $\Lambda$ satisfies \fg{} if and only if so does $a\Lambda a$. \item[(3)] $\Lambda$ is Gorenstein if and only if so is $a\Lambda a$. 
\end{enumerate} \end{prop} This work is based on the paper \cite{Kon-Nag}, where stratifying ideals $\idealgenby{a}$ in a finite dimensional algebra $\Lambda$ were used to show that the Hochschild cohomology groups of $\Lambda$ and $a\Lambda a$ are isomorphic in almost all degrees. We start by giving an example of a recollement $(\fmod \Lambda/\idealgenby{a}, \fmod \Lambda, \fmod a\Lambda a)$, where the ideal $\idealgenby{a}$ is not a stratifying ideal but our conditions from Theorem \ref{thm:main-result-fg} are satisfied. \begin{exam} Let $Q$ be the quiver with relations $\rho$ given by \[\xymatrix@R=5pt{ & 2\ar[dd]^\gamma \\ 1\ar@(ul,dl)_{\alpha}\ar[ur]^\beta& \\ & 3\ar[ul]^\delta }\] and $\rho=\{\alpha^2, \gamma\beta, \beta\alpha\delta\}$. Let $\Lambda = kQ/\idealgenby{\rho}$ for some field $k$, and let $a = e_1$. We want to study the relationship between $\Lambda$ and $a\Lambda a$. Let $S_i$ denote the simple $\Lambda$-module associated to the vertex $i$ for $i=1,2,3$. Then $\pd_\Lambda S_2 = 1$, $\pd_\Lambda S_3 = 3$, $\id_\Lambda S_2 = 2$ and $\id_\Lambda S_3 = 3$. Furthermore, the left and right $a\Lambda a$-modules $a\Lambda$ and $\Lambda a$ have finite projective dimension (they are projective) as $a\Lambda a$-modules. Hence, according to Theorem \ref{thm:main-result-fg}, $\Lambda$ satisfies \fg{} if and only if $a\Lambda a\cong k[x]/\idealgenby{x^2}$ does. We infer from this that $\Lambda$ satisfies \fg{}. Moreover, the Hochschild cohomology groups of $\Lambda$ and $a\Lambda a$ are isomorphic in almost all degrees by Proposition \ref{prop:lambda-ext-iso}. We claim that $\idealgenby{a}$ is not a stratifying ideal. Recall that $\idealgenby{a}$ is stratifying if (i) the multiplication map $\Lambda a\otimes_{a\Lambda a} a\Lambda \longrightarrow\Lambda a \Lambda$ is an isomorphism and (ii) $\Tor^{a\Lambda a}_i(\Lambda a,a\Lambda) = (0)$ for $i>0$.
Using that $(1-a)\Lambda a\cong a\Lambda a$ as a right $a\Lambda a$-module, direct computations show that $\Lambda a\otimes_{a\Lambda a} a\Lambda$ has dimension $12$, while $\idealgenby{a}$ has dimension $10$. Consequently $\idealgenby{a}$ is not a stratifying ideal in $\Lambda$. However, condition (ii) is satisfied since $\Lambda a$ is a projective $a\Lambda a$-module. \end{exam} Next we show that, when $\idealgenby{a}$ is a stratifying ideal, the property $\pd_{\envalg{\Lambda}} \Lambda/\idealgenby{a}< \infty$ is equivalent to the functor $e \colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ being an eventually homological isomorphism. We thank Hiroshi Nagase for pointing out that (a) implies (b) in the second part of the following result. This led to a much better understanding of the conditions occurring in the main results. \begin{lem} \label{lem:stratifying} Let $\Lambda$ be a finite dimensional algebra over an algebraically closed field $k$, and let $a$ be an idempotent in $\Lambda$. \begin{enumerate} \item\label{lem:stratifying:pd} Assume that $(\alpha)\ \id_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) < \infty$ and $(\gamma)\ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) < \infty$. Then $\pd_{\envalg{\Lambda}} \Lambda/\idealgenby{a} < \infty$. \item\label{lem:stratifying:equiv} Assume that $\idealgenby{a}$ is a stratifying ideal in $\Lambda$. Then the following are equivalent. \begin{enumerate} \item $\pd_{\envalg{\Lambda}} \Lambda/\idealgenby{a} < \infty$. \item The functor $e \colon \fmod \Lambda \longrightarrow \fmod a{\Lambda}a$ is an eventually homological isomorphism. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} \ref{lem:stratifying:pd} For two primitive idempotents $u$ and $v$ in $\Lambda$, we have that \[\Hom_{\envalg{\Lambda}}(\envalg{\Lambda}(u\otimes v), \Lambda/\idealgenby{a}) \cong u(\Lambda/\idealgenby{a})v.\] Then, if $u$ or $v$ occurs in $a$, this homomorphism set is zero.
Consequently we infer that the composition factors of $\Lambda/\idealgenby{a}$ are direct summands of the semisimple module $\Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) \otimes_k \Big( \frac{\opposite{\Lambda}/\idealgenby{a}}{\rad \opposite{\Lambda}/\idealgenby{a}} \Big)$. By Lemma \ref{lem:tensor-preserves-fin-projdim}, $\pd_{\envalg{\Lambda}} \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) \otimes_k \Big( \frac{\opposite{\Lambda}/\idealgenby{a}}{\rad \opposite{\Lambda}/\idealgenby{a}} \Big)$ is finite, hence the claim follows. \ref{lem:stratifying:equiv} By Corollary~\ref{cor:algebra-ext-iso} and part \ref{lem:stratifying:pd}, statement (b) implies (a). Conversely, assume (a). For $j> \pd _{\envalg{\Lambda}}\Lambda/\idealgenby{a}$ and any $\Lambda$-modules $M$ and $N$ we have that \[ \Ext^j_{\envalg{\Lambda}}(\Lambda,\Hom_k(M,N)) \cong \Ext^j_{\envalg{\Lambda}}(\idealgenby{a}, \Hom_k(M,N)). \] Using the isomorphism in the proof of Proposition 3.3 in \cite{Kon-Nag}, \[ \Ext^i_{\envalg{\Lambda}}(\idealgenby{a},X) \cong \Ext^i_{\envalg{a\Lambda a}}(a\Lambda a, aXa), \] we obtain that \begin{align*} \Ext^i_{\envalg{\Lambda}}(\idealgenby{a},\Hom_k(M,N)) &\cong \Ext^i_{\envalg{a\Lambda a}}(a\Lambda a, a\Hom_k(M,N)a)\\ &\cong \Ext^i_{\envalg{a\Lambda a}}(a\Lambda a, \Hom_k(aM,aN))\\ &\cong \Ext^i_{a\Lambda a}(aM,aN) \end{align*} for all $\Lambda$-modules $M$ and $N$. Since $\Ext^i_{\envalg{\Lambda}}(\Lambda,\Hom_k(M,N)) \cong \Ext^i_{\Lambda}(M,N)$, we obtain the isomorphism \[ \Ext_\Lambda^j (M, N) \cong \Ext_{a\Lambda a}^j (aM, aN) \] for all $j > \pd _{\envalg{\Lambda}}\Lambda/\idealgenby{a}$ and all $\Lambda$-modules $M$ and $N$. Hence $e$ is an eventually homological isomorphism. \end{proof} The following result gives a characterization of the condition $(\gamma)$ when $\idealgenby{a}$ is a stratifying ideal. \begin{lem} Let $\Lambda$ be an artin algebra and $a$ an idempotent in $\Lambda$.
Assume that $\idealgenby{a}$ is a stratifying ideal in $\Lambda$. Then we have $(\gamma)\ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) < \infty$ if and only if $\gld{\Lambda/\idealgenby{a}}<\infty$ and $\pd_{\Lambda} \idealgenby{a}<\infty$. Moreover, if $(\gamma)$ holds, then $(\beta)$ holds. \begin{proof} Assume that $(\gamma)$ $\pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) < \infty$. It is clear that $\pd_\Lambda\idealgenby{a} < \infty$ if and only if $\pd_\Lambda \Lambda/\idealgenby{a} < \infty$. Since $\Lambda/\idealgenby{a}$ as a $\Lambda$-module is filtered in simple modules occurring as direct summands in $(\Lambda/\idealgenby{a})/(\rad \Lambda/\idealgenby{a})$, we infer that $\pd_\Lambda \Lambda/\idealgenby{a} < \infty$ by the property $(\gamma)$. Since $\idealgenby{a}$ is a stratifying ideal in $\Lambda$, we have that \[\Ext^j_{\Lambda/\idealgenby{a}}(X,Y) \cong \Ext^j_\Lambda(X,Y)\] for all $j\ge 0$ and all modules $X$ and $Y$ in $\fmod\Lambda/\idealgenby{a}$. Using the above isomorphism and the property $(\gamma)$ again, we obtain that $\id_{\Lambda/\idealgenby{a}} Y \leq \pd_\Lambda (\Lambda/\idealgenby{a})/(\rad \Lambda/\idealgenby{a})$ for all $Y$ in $\fmod\Lambda/\idealgenby{a}$. Hence $\gld\Lambda/\idealgenby{a} < \infty$. Assume conversely that $\gld{\Lambda/\idealgenby{a}}<\infty$ and $\pd_{\Lambda} \idealgenby{a}<\infty$. From \cite[Theorem $3.9$]{Psaroud} we have a finite projective resolution $0\longrightarrow \Lambda a\otimes_{a\Lambda a}Q_n\longrightarrow \cdots \longrightarrow \Lambda a\otimes_{a\Lambda a}Q_0\longrightarrow \idealgenby{a}\longrightarrow 0$, where $Q_i$ are projective $a\Lambda a$-modules. Then applying the exact functor $e=a-$, it follows from Proposition \ref{properties} that the sequence $0\longrightarrow Q_n\longrightarrow \cdots \longrightarrow Q_0\longrightarrow a(\idealgenby{a})\longrightarrow 0$ is exact. 
We infer that $(\beta)\ \pd_{a\Lambda a} a\Lambda < \infty$, since $a\idealgenby{a} \cong a\Lambda$. Since $\gld\Lambda/\idealgenby{a} < \infty$ and $\pd_{\Lambda} \Lambda/\idealgenby{a}<\infty$, we have that $\pd_{\Lambda}X\leq \pd_{\Lambda/\idealgenby{a}}X + \pd_{\Lambda}\Lambda/\idealgenby{a}$ for every $X$ in $\fmod\Lambda/\idealgenby{a}$. We infer that $(\gamma)\ \pd_\Lambda \Big( \frac{\Lambda/\idealgenby{a}}{\rad \Lambda/\idealgenby{a}} \Big) < \infty$ holds. The last claim follows immediately from the above. \end{proof} \end{lem}
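For completeness, here is a sketch of the standard change-of-rings bound used in the proof above; this is a well-known homological fact and is included only as an illustration. Let $X$ be in $\fmod\Lambda/\idealgenby{a}$ and set $n = \pd_{\Lambda/\idealgenby{a}} X$. If $n = 0$, then $X$ is a direct summand of a finite direct sum of copies of $\Lambda/\idealgenby{a}$, so that $\pd_\Lambda X \leq \pd_{\Lambda}\Lambda/\idealgenby{a}$. If $n > 0$, choose a short exact sequence $0 \longrightarrow \Omega X \longrightarrow P \longrightarrow X \longrightarrow 0$ in $\fmod\Lambda/\idealgenby{a}$ with $P$ projective, so that $\pd_{\Lambda/\idealgenby{a}} \Omega X = n - 1$. Viewing this sequence over $\Lambda$ gives \[ \pd_\Lambda X \leq \max\{\pd_\Lambda P,\ \pd_\Lambda \Omega X + 1\}, \] and induction on $n$ yields $\pd_{\Lambda}X \leq \pd_{\Lambda/\idealgenby{a}}X + \pd_{\Lambda}\Lambda/\idealgenby{a}$.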
The revision is based predominantly on new or unidentified material collected during various expeditions, but also on material used by previous authors. Calcareous sponges (Phylum Porifera, Class Calcarea) are known to be taxonomically difficult. The order Murrayonida comprises Calcinea with a reinforced calcite skeleton, calcareous plates, or spicule tracts. Homoscleromorpha sponges tend to be massive or encrusting in form and have a very simple structure with very little variation in spicule form. Order: Clathrinida. Spicules: calcareous. Skeleton: triactines are massed in the tube walls without apparent order. Sponges are multicellular and composed of specialized cells, arranged in a single layer, for the maintenance of life processes. Sponges, the members of the phylum Porifera (meaning "pore bearer"), are a basal Metazoa (animal) clade and a sister group of the Diploblasts. They contain no spongin. A unique characteristic discovered in living members of class Calcarea is vivipary (Hooper and van Soest, 2002).
Porifera [Lat., = pore bearer] is the animal phylum consisting of the organisms commonly called sponges. Calcinea contains two orders, Murrayonida and Clathrinida. The most common spicule shapes are triactines with three pointed spires (figured in Van Soest et al., 2012). A total of 16 species were found and are described here. Three types of aquiferous system are realized in Calcarea: asconoid, syconoid and leuconoid; in the asconoid type, all internal cavities are lined by choanocytes (flagellated cells). Cytological and embryological features are used as diagnostic characters in both general classification and species identification of the Demospongiae and Calcarea. Calcareous sponges have spicules made of magnesium calcite (MgCO3), or may lack spicules altogether. There is a great controversy regarding the origin of Porifera. Leucetta losangelensis is a calcareous sponge in the subclass Calcinea and order Clathrinida. Well-preserved fossil reef/mound-building communities and shallow microfacies have been recovered from the Changhsingian platform-margin sponge reef at the Panlongdong Section and the intraplatform sponge skeletal mound at the Yanggudong Section in the NE Sichuan Basin, South China. Members of the order Heteractinida (class Calcarea) are characterized by a calcareous "octactine" spicule type, which looks like a snowflake with six branches and two additional branches positioned at right angles in a second growing plane.
Fossil specimen of the calcareous sponge Astraeospongium meniscus from the Silurian Niagara Group of Perry County, Tennessee (PRI 76744); maximum diameter of specimen is approximately 8 cm. Specimen is from the collections of the Paleontological Research Institution, Ithaca, New York.
There are about 8,500 described living species under the phylum Porifera worldwide. In order to grow in size, sponges developed a system of inhalant and exhalant canals and chambers with flagellate cells (choanocytes) that continuously pump water through the body of the sponge. This order is only known from Paleozoic fossils dating from the Early Cambrian to the end-Permian mass extinction (Pickett, 2008). Of the 15,000 or so species of Porifera that exist, only 400 are calcareans. Calcarea or Calcispongiae (calcareous sponges) [Calcarea, L. calcarious = limy; Calcispongiae, L. …]. Comparisons of calcareous sponge spicules are drawn with the amorphous silica spicules of sponges of the classes Hexactinellida and Demospongiae, as well as with calcitic skeletal elements of echinoderms. Calcareous sponges are common in the Paleozoic and Mesozoic, however rare in the Cenozoic. Almost all sponges function first as one sex and then as the other. Hence, we analysed the regeneration and speed rates from two regions (osculum and choanosome) of the body of a calcareous sponge, Ernstia sp.
Key features of the group: ascon, sycon and leucon body forms; unfused Mg-calcite spicules that are monoaxon, diactine, triactine, tetractine, or polyactine; a range of sizes and a diversity of structure and colour; pinacocytes, collencytes, and other cell types. Sponges in general use flagellated cells called choanocytes to create a current. That said, they have the most diversity of any other sponge class, making classification difficult. Boring sponges bore into hard calcareous substrata through tunnels and cavities formed by the etching of hard calcium carbonate in the form of microchips of almost the same dimensions (0.056 x 0.047 x 0.032 mm on average); hence the interior of these cavities, when viewed under high magnification, may have a pitted appearance. In the case of budding, the sponges produce a cluster of cells known as a gemmule that is covered in a hard coating; this keeps the offspring protected from harsh weather as well as climate changes. Class Calcarea includes sponges that are small in size and less colorful than other sponge classes. It is now established that many of these fossil forms actually belong to several groups of demosponges because of the possession of primary siliceous spicules, and only fe…
In order to start filling this gap, in this study we describe the calcareous sponges from Curaçao, Southern Caribbean. Ten species are new to science and are provisionally … In the asconoid structure, the water is drawn in through the ostium (outer pores), goes through the spongocoel or atrium, and out through the osculum (the opening in the top of the sponge). Shuster (1986; 1991) recorded annual population fluctuations of L. losangelensis on Station Beach in 1983-85. Scientific name: Guancha lacunosa. Ecology: in the Laminaria zone, down to 50 m. Distribution: British Isles, west coasts of France and Spain; Mediterranean. Sponges are predominantly marine organisms which inhabit the intertidal to the deepest ocean zones. The term 'Porifera' is derived from two Latin words, 'porus' meaning pore and 'ferre' meaning to bear. A family may be divided into subfamilies, which are intermediate ranks between the ranks of family and genus. Order Heteractinida Hinde, 1887. Spicule complement of P. massiliana, from left to right: pugiole, sagittal triactines, microdiactine (photos courtesy Jean Vacelet). Calcinean spicules are equiangular and equiradiate triactines; calcaronean spicules are sagittal (inequiangular) triactines and diactines. Syconoid aquiferous system from Sycon ciliatum (SEM photo, courtesy Louis De Vos, ULB); Sycon ciliatum (Calcaronea, Leucosolenida), specimen about 10 cm, from the English Channel.
Calcareous sponges seem to have a polarized regeneration closely related to their external morphology and level of individuality and integration. Only the osculum regenerated until the end of the experiment, while the choanosome simply cicatrized. The general architecture of the skeleton is used to differentiate families, the particular combinations of spicular types to define genera, and the form and dimensions of single spicule types to differentiate species. Calcareous sponges, which have calcium carbonate spicules and, in some species, calcium carbonate exoskeletons, are restricted to relatively shallow marine waters where production of calcium carbonate is easiest. The majority of sponge taxa (the main exception being the order Keratosa) form spicules that are liberated when the animals decay. Calcarea is the only class of sponge with both asconoid and syconoid construction. In order to allow critical evaluation of the interrelationships between the three sponge classes, and to resolve the question of mono- or paraphyly of sponges (Porifera), we used the polymerase chain reaction (PCR) to amplify almost the entire nucleic acid sequence of the 18S rDNA from several hexactinellid, demosponge and calcareous sponge species. Sponge reefs are known also from the Mesozoic Era.
The Digital Atlas of Ancient Life project is managed by the Paleontological Research Institution, Ithaca, New York. Some recent Antarctic expeditions have discovered a "sponge kingdom," where at least five Calcarea sponge species have been found at depths in excess of 4400 m. A highly simplified overview of Porifera phylogeny is based in part on the hypothesis of relationships presented by Botting and Muir (2018). Fossil specimen of the calcareous sponge Raphidonema farringdonense from the Cretaceous of Berkshire, England (PRI 45561). These organisms are characterized by spicules made out of calcium carbonate in the form of calcite or aragonite. The skeleton is reinforced by a calcareous alga of the type Jania, with megasclere spicules (tornotes) and microsclere spicules (sigmas and chelae). Only the class of calcareous sponges can build calcitic spicules, which are the extracellular products of specialized cells, the sclerocytes.
In order to gain insight into the evolution and function of CAs in biomineralization of a basal metazoan species, we determined the diversity and expression of CAs in the calcareous sponges Sycon ciliatum and Leucosolenia complicata by means of genomic screening, RNA-Seq and RNA in situ hybridization expression analysis. They are, by far, the sponge class with the highest diversity of spicule and body forms. Greenland is a transition zone between the western and eastern Atlantic boreal calcareous sponge faunas, being home to species from both sides of the North Atlantic combined with some true Arctic species; calcareous sponges have also been reported from abyssal and bathyal depths, and there are differences between the Canadian and Greenlandic sponge faunas. All calcareous sponges previously reported from Greenland are …
Leucosolenia botryoides is a white, tubular sponge that grows to up to 2 cm wide and 1 cm thick, with a large osculum (exhalant pore) at its terminal end that opens to an internal central cavity. The tube walls are smooth, and its spicules characteristically end rather blunt (80-100 x 6 µm). It is most common in late winter. The eggs hatch into free-swimming larvae, which attach themselves to the substrate after a few days. In species of calcareous sponges, reproduction can be both sexual and asexual. Sponges are sessile (nonmotile), and nearly all are marine; there are six families of freshwater sponges. Within the sanctuary, sponges are found in the coral cap region (0-130 ft, 0-40 m deep). Calcareous sponges are poorly known throughout their fossil record because the calcitic spicules do not preserve well, or may be lost when sediments are altered; fossil forms were classified as either stromatoporoids, chaetetids, archaeocyaths, inozoans, pharetronids, or … Little is known about the structure, composition, and formation of calcareous sponge spicules; in calcareous sponge spicules only stable hydrated ACC is known, but there is also no reason to exclude a precipitation scenario via a transient phase that so far may not have been detected.
References: Boardman, R.S., Cheetham, A.H., and Rowell, A.J. (1987) Fossil Invertebrates. Blackwell Scientific Publications. Burton, M. (1963) A Revision of the Classification of the Calcareous Sponges, with a Catalogue of the Specimens in the British Museum (Natural History). London: printed by order of the Trustees of the British Museum (Natural History). Hooper, J.N.A. and van Soest, R.W.M. (eds) (2002) Systema Porifera. Kluwer Academic/Plenum Publishers, New York. Rossi, A.L., Campos, A.P., Barroso, M.M., Klautau, M., Archanjo, B.S., Borojevic, R., Farina, M., and Werckmann, J. (2014) Long-range crystalline order in spicules from the calcareous sponge Paraleucilla magna (Porifera, Calcarea). Acta Biomaterialia 10(9): 3875-84, doi: 10.1016/j.actbio.2014.01.023. Van Soest, R.W.M., et al. (2012) Global Diversity of Sponges (Porifera). PLoS ONE 7(4).
Gulkovo (Гулково) is a village in the Sukhovskoye rural settlement of the Kirovsky District of Leningrad Oblast. History It is first mentioned in the cadastral book of the Vodskaya Pyatina of 1500 as the village of Gulkovo in the Prechistensky Gorodensky pogost of Ladoga Uyezd. The village of Gulkovo is then mentioned in the survey book of the Korelian half of the Vodskaya Pyatina of 1612. On F. F. Schubert's 1834 map of Saint Petersburg Governorate the village is marked as Gulkova. The village of Gulkova is also marked on Professor S. S. Kutorga's map of 1852. GULKOVO: a village of the Department of State Property, on a country road; number of households: 9; number of souls: 18 male (1856). GULKOVO: a privately owned village on the Kabona River; number of households: 9; number of inhabitants: 16 male, 20 female (1862). In the 19th century the village belonged administratively to the Gavsarskaya Volost of the 1st stan of Novoladozhsky Uyezd of Saint Petersburg Governorate; at the beginning of the 20th century, to the 4th stan. According to the "Memorial Book of Saint Petersburg Governorate" for 1905, the village of Gulkovo belonged to the Gavsarskoye rural society of the Gavsarskaya Volost. According to the military topographic map of the Petrograd and Novgorod Governorates published in 1915, the village was called Gulkova. According to data from 1933, the village of Gulkovo was part of the Gavsarsky Selsoviet of the Mginsky District. According to data from 1966 and 1973, the village of Gulkovo was subordinate to the Vystavsky Selsoviet of the Volkhovsky District. According to data from 1990, the village of Gulkovo was part of the Sukhovsky Selsoviet of the Kirovsky District. In 1997, 4 people lived in the village of Gulkovo of the Sukhovskaya Volost; in 2002, 14 people (93% Russian). In 2007, there was 1 inhabitant in the village of Gulkovo of the Sukhovskoye rural settlement. Geography The village is located in the northeastern part of the district on the Dusyevo - Ostrov road, east of the administrative center of the settlement, the village of Sukhoye. The distance to the administrative center of the settlement is 5 km. The distance to the nearest railway station, Voybokalo, is 24 km. The Kobona River flows to the south of the village.
Demographics Notes Populated localities of the Kirovsky District (Leningrad Oblast)
An attraction unique to Jackson County, Florida Caverns State Park is settled on the Chipola River and offers visitors a chance to tour the state's only walk-through cave system. The park is open 7 days a week; however, cave tours are closed on Tuesdays and Wednesdays. The park is on the Great Florida Birding Trail and regularly hosts birding walks. Once used as shelter by aboriginal Indians, the caverns reveal an amazing world of black pools, brilliantly lit, jagged formations and dripping limestone stalactites and stalagmites slowly forming over thousands of years. The 1,313-acre park offers 32 camping hook-ups, hiking trails, pavilions, and canoe rentals for paddling the Chipola River. The landing is located at the natural bridge once crossed by General Andrew Jackson. Named for the Chattahoochee and Flint rivers that merge to form the Apalachicola River below Lake Seminole, the 682-acre Three Rivers State Park lines two miles of the lake's southern shore at the Florida-Georgia border. Visitors enjoy nature trails, picnic areas and campgrounds overlooking Lake Seminole, as well as a fishing dock, boat ramp and canoe rentals. While alligators and snapping turtles are commonly seen in the lake, the heavily forested park provides an ideal home for white-tailed deer, grey fox, bobwhite quail and other native wildlife. This tucked-away park is known for its serenity. This scenic park, located on the Chipola River along Caverns Road, is a real jewel. Six trails, paved and unpaved, range from ½ mile to 2 miles, and a fitness trail helps increase your fitness level. A gazebo overlooks a gorgeous pond along the trail, and outdoor picnic tables host reunions, parties, and events. The outdoor stage serves as a venue for many festivals, events, and the Summer Concert Series. The Lodge is located at 4574 Lodge Drive, off Caverns Road, and has a seating capacity of 100 people and an occupancy load of 147 people. For more information call Jackson County Parks at 850-718-0437.
Hinson Conservation and Recreation Area was completed in 2012 under a partnership between the Florida Trail Association and the City of Marianna. In 2012 the Department of the Interior designated Hinson Conservation & Recreation Area a National Recreation Trail. The trailhead begins at the scenic Chipola River, and the trail turns to form a four-mile loop around the park, passing caves, sinkholes and short bluffs along the 226 acres. Birding and wildlife enthusiasts love this park for its great spotting. The park is located one mile south of Marianna on Highway 73, open daylight to sundown. For more information call 850-482-4353. This small trail is tucked away in Marianna on the Chipola River at the corner of Kelson Avenue and Nolan Street and is part of the Florida Greenways and Trails system. The ¼-mile trek leads you to a bend on the Chipola River that is known for its birding and wildlife sightings. In 2012 the Department of the Interior designated Butler Trail a National Recreation Trail. Future plans are in discussion to connect it to the north and south trails. The first-magnitude spring pumps gallons of fresh water into the popular swimming hole each day. Plan to spend a summer day here; rent a pavilion for reunions, or a pontoon boat, kayak or canoe to discover 202 acres of beauty on Merritt's Mill Pond. Open Memorial Day to Labor Day. For more information call 850-718-0437 or 850-482-2114.
\section{Introduction and overview}\label{sec:introduction} Out of equilibrium dynamics of non-abelian gauge fields provides an outstanding challenge in quantum field theory. Its most prominent application concerns the description of relativistic heavy ion collisions. It involves the solution of an initial value problem in real time based on the underlying theory of quantum chromodynamics (QCD). This is not accessible to standard lattice gauge theory simulations in Euclidean space-time. However, important aspects of nonequilibrium gauge field dynamics may be described using classical-statistical lattice gauge theory techniques in real time. Classical-statistical approximations to the quantum dynamics are expected to be valid if the expectation value of the anti-commutator of fields is much larger than the commutator. Loosely speaking this is realized in the presence of sufficiently high occupation numbers per mode or large coherent field expectation values. The physics of nonequilibrium instabilities provides an important example where classical-statistical simulation techniques can give accurate descriptions of the underlying quantum dynamics. This has been tested to high accuracy for self-interacting scalar quantum field theories~\cite{Aarts:2001yn,Arrizabalaga:2004iw,Berges:2008wm} and Yukawa-like theories with fermions~\cite{Berges:2010zv}, where appropriate far-from-equilibrium resummation techniques are available for the respective quantum field theory. In recent years classical simulation techniques have become an important building block for our understanding of the time evolution of instabilities in non-abelian gauge theories in the context of collision experiments with heavy nuclei~\cite{Romatschke:2005pm,Berges:2007re,Kunihiro:2010tg,Fukushima:2011nq}. 
The present work investigates classical dynamics of coherent non-abelian gauge fields, which is also motivated by the notion of 'color flux tubes' that may form after the collision of two Lorentz contracted heavy nuclei. These are characterized by intense color-magnetic as well as color-electric field configurations in the longitudinal direction. They are smooth in this direction and correlated over a transverse size associated to the inverse of the characteristic momentum scale $Q_s$ in the saturation scenario~\cite{Gelis:2010nm}. To understand the time evolution of this configuration in QCD is a formidable task, which is further complicated by the longitudinal expansion of the system. In order to approach this complex question of non-linear gauge field dynamics, it is very instructive to consider first an extreme simplification which can also give analytical insights. To ease the later numerical treatment, we use $SU(2)$ gauge theory. We neglect expansion and employ a constant color-magnetic field configuration, $B^a_\mu$ with color index $a=1,2,3$ and Lorentz index $\mu$, whose spatial components $i=x,y,z$ point in the longitudinal ($i=z$) direction \begin{equation} B_i^a \ = \, \delta^{1 a} \delta_{z i} B \, . \label{eq:Blong} \end{equation} The field can always be arranged to point in the direction $a = 1$ in color space by a suitable gauge transformation without loss of generality. In the context of heavy ion collisions this configuration has been extensively discussed in Ref.~\cite{Iwazaki:2008xi}, where the field in infinite volume is taken to be generated from the vector potential $A^a_\mu$ as \begin{equation} A_x^1 = - \frac{1}{2} y B \, , \quad A_y^1 = \frac{1}{2} x B \, , \label{eq:NO-A} \end{equation} with all other spatial components vanishing. 
Such a field configuration is known to exhibit a Nielsen-Olesen instability~\cite{NO} characterized by an exponential growth of fluctuations with maximum growth rate \begin{equation} \gamma_{\rm NO} \, = \, \sqrt{g B} \, , \label{eq:NOmaxrate} \end{equation} where $\sqrt{g B}$ may be taken to be of order $Q_s$. This exponential growth leads to a production of gluons, which is much faster than any conventional production process. Its consequences for the question of thermalization in a heavy ion collision can be significant. Of course, in this context a single macroscopic constant color-magnetic background field is certainly not a realistic option and it is important to understand the robustness of the underlying physical processes against suitable generalizations. In a first generalization we will allow for temporal modulations of the color-magnetic background field configurations. This can be conveniently achieved by the gauge field configuration \begin{equation} A_x^2 \, = \, A_y^3 \, = \, \sqrt{\frac{B}{g}} \, , \label{eq:Anonlinear} \end{equation} with all other components zero. It can be directly verified that this vector potential configuration gives the same form of the longitudinal magnetic field (\ref{eq:Blong}). However, there are important differences as compared to the previously considered case, where the magnetic field is generated from space-dependent vector potentials. In contrast to (\ref{eq:NO-A}), the configuration (\ref{eq:Anonlinear}) contributes to the non-linear part of the field strength tensor \begin{equation} F^a_{\mu\nu}[A] \, = \, \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g \epsilon^{abc} A^b_\mu A^c_\nu \label{eq:fieldstrength} \end{equation} from which the magnetic field is obtained as $B^{a}_i = \frac{1}{2}\epsilon^{ijk} F^a_{jk}$. Generating the magnetic field with (\ref{eq:Anonlinear}) via the non-linear term of the field strength tensor leads to coherent oscillations in time with characteristic frequency $\sim \sqrt{g B}$. 
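As a quick numerical cross-check (not part of the original analysis), one can verify that the homogeneous configuration (\ref{eq:Anonlinear}) produces the longitudinal magnetic field (\ref{eq:Blong}) purely through the commutator term of the field strength tensor; the values of $g$ and $B$ below are illustrative.

```python
import numpy as np

g, B = 1.0, 2.0  # illustrative values; any positive g, B work

# Levi-Civita symbol for the color indices a, b, c = 1, 2, 3
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# Homogeneous configuration: A_x^2 = A_y^3 = sqrt(B/g), all other components zero
Ax = np.array([0.0, np.sqrt(B / g), 0.0])   # color components of A_x
Ay = np.array([0.0, 0.0, np.sqrt(B / g)])   # color components of A_y

# For constant fields the derivative terms vanish and only the non-linear part
# of the field strength tensor contributes: F^a_xy = g eps^{abc} A^b_x A^c_y
F_xy = g * np.einsum('abc,b,c->a', eps, Ax, Ay)
print(F_xy)   # the a = 1 component equals B, the others vanish
```

For the space-dependent potential (\ref{eq:NO-A}) the same field comes instead entirely from the derivative terms, $\partial_x A^1_y - \partial_y A^1_x = B/2 + B/2 = B$.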
In contrast, it can be readily verified that (\ref{eq:NO-A}) is a time-independent solution of the classical Yang-Mills equations. Accordingly, the magnetic field derived from~(\ref{eq:NO-A}) is also constant in both space and time as long as it is unperturbed. We discuss analytic solutions for the classical time evolution of the configuration (\ref{eq:Anonlinear}) in temporal (Weyl) gauge and study the behavior of linear perturbations on top of the oscillating background field in Sec.~\ref{sec:linear}. It is shown that these perturbations still exhibit robust growth similar to the Nielsen-Olesen--type. In order to go beyond the linear analysis we employ classical-statistical lattice gauge theory simulations in Sec.~\ref{sec:classical-statistical}. The linear analysis is seen to accurately reproduce the numerical data for times which are short compared to the inverse characteristic growth rates. Afterwards, non-linear contributions become crucial. An important observation is that the non-linearities lead to a very efficient growth for all components of the gauge potential such that the details about the initial configuration (\ref{eq:Blong}) become irrelevant as long as sufficient primary growth is triggered. This adds to the robustness of the observed phenomena under further generalizations of the initial configurations. An important step towards more realistic scenarios concerns the inclusion of fluctuations into the initial conditions. The initial configuration (\ref{eq:Anonlinear}) exhibits a sharply defined macroscopic field amplitude $B$ which is certainly unrealistic, in particular, since no macroscopic colored objects can exist. Here the time-evolution of single macroscopic $B$-field configurations may be viewed as describing the dynamics of individual members of an ensemble. Ensemble averages are then obtained from sampling initial conditions according to given averages or expectation values at initial time. 
In Sec.~\ref{sec:coherent} we consider nonequilibrium ensembles, where we choose the initial values of the spatially homogeneous fields in (\ref{eq:Anonlinear}) randomly from a Gaussian distribution with zero mean. Again one finds robust growth also for the ensemble of homogeneous initial fields. A dramatic difference is that for the ensemble average one observes isotropization of the stress-energy tensor happening on a time scale much faster than $1/\sqrt{gB}$ because of a dephasing phenomenon. We comment on the possible relevance of this observation for the outstanding question of rapid isotropization in heavy ion collisions in Sec.~\ref{sec:iso}. Finally, we take into account that the fields should be correlated only over a limited transverse size, which in the color flux tube picture is associated to the inverse of the characteristic momentum scale $Q_s$. For this we divide in Sec.~\ref{sec:patches} the transverse coordinate plane into domains of equal spatial size. Each domain is then filled with a coherent color magnetic field, however, using a different random color direction in each patch. If the transverse size is taken to be of order $1/Q_s$ or $1/\sqrt{gB}$ then this should model characteristic aspects of the dynamics of color flux tubes. It turns out that one observes exponential growth for all components of this inhomogeneous gauge field configuration. The nature of the underlying instability still shows properties reminiscent of the Nielsen-Olesen--type. In particular, the characteristic dependence of the primary growth rate as a function of momentum still reaches its maximum at low momenta. In contrast, Weibel instabilities show a vanishing growth rate at zero momentum~\cite{Weibel}. This difference has also been emphasized in Ref.~\cite{Iwazaki:2008xi}. We conclude in Sec.~\ref{sec:conclude}. In an appendix we describe that the phenomenon of parametric resonance leads to a subdominant instability band at higher longitudinal momenta. 
\section{Linear regime} \label{sec:linear} We first discuss analytic solutions for the classical time evolution starting from the coherent field configuration (\ref{eq:Anonlinear}). It is shown that linear perturbations on top of the oscillating background field still exhibit robust growth similar to the Nielsen-Olesen--type. We consider a non-abelian gauge theory with $SU(2)$ color gauge group. Taking into account two colors simplifies the analysis as compared to the $SU(3)$ gauge group relevant for QCD and the difference is expected to be of minor relevance for the physics considered here \cite{Berges:2008zt,Ipp:2010uy}. With the covariant derivative \begin{equation} D_\mu^{ab}[A] \, = \, \partial_\mu \delta^{ab} + g \epsilon^{acb} A_\mu^c \end{equation} the classical equations of motion read \begin{equation} \left(D_\mu[A] F^{\mu\nu}[A]\right)^a \, = \, 0 \, . \label{eq:classicalfieldequation} \end{equation} In a first step, we will compute the classical time evolution starting from the coherent field configuration (\ref{eq:Anonlinear}). For an analytical understanding of the dynamics it is convenient to split the gauge field potentials $A^a_\mu(x)$ into a time-dependent background field $\bar{A}^a_\mu(x^0)$ and a perturbation: \begin{equation} A^a_\mu(x) \, = \, \bar{A}^a_\mu(x^0) + \delta A^a_\mu(x) \, . \label{eq:background} \end{equation} We can then compute the time evolution in an expansion in powers of the perturbation $\delta A^a_\mu(x)$. At zeroth order we obtain the field equation for the background field \begin{equation} \left(D_\mu[\bar{A}] F^{\mu\nu}[\bar{A}]\right)^a \, = \, 0 \, . \label{eq:backgroundfieldequation} \end{equation} The next order corresponds to the linearized equation for the fluctuations \cite{Tudron:1980gq}, \begin{eqnarray} &&\left(D_\mu[\bar{A}]D^\mu[\bar{A}]\delta A^\nu\right)^a - \left(D_\mu[\bar{A}]D^\nu[\bar{A}]\delta A^\mu\right)^a \nonumber\\ && + g \epsilon^{abc} \delta A^b_\mu F^{c\mu\nu}[\bar{A}] \, = \, 0 \, . 
\label{eq:fluctuationequation} \end{eqnarray} Accordingly, equations (\ref{eq:backgroundfieldequation}) and (\ref{eq:fluctuationequation}) correspond to the classical field equation (\ref{eq:classicalfieldequation}) up to corrections of order $(\delta A)^2$. We choose temporal (Weyl) gauge where $A^a_0 = 0$. Writing $t \equiv x^0$ we consider the field configuration \begin{equation} \bar{A}^a_i(t) \, = \, \bar{A}(t) \left( \delta^{a2} \delta_{ix} + \delta^{a3} \delta_{iy} \right) \end{equation} which corresponds to (\ref{eq:Anonlinear}) at initial time with $\bar{A}(t=0) = \sqrt{B/g}$ for given initial color-magnetic field $B$. With the further initial condition $\partial_t\bar{A}(t=0) = 0$ the background field equation (\ref{eq:backgroundfieldequation}) reads \begin{equation} \partial_t^2 \bar{A}(t) + g^2 \bar{A}(t)^3 \, = \, 0 \end{equation} and the solution is given in terms of a Jacobi elliptic function: \begin{equation} \bar{A}(t) \, = \, \sqrt{\frac{B}{g}} \,\, {\rm cn}\!\left(\sqrt{g B}\, t\,,\frac{1}{2}\right) \, . \label{eq:Asol} \end{equation} Here ${\rm cn}(\sqrt{g B}\, t,1/2)$ is an oscillatory function such that $\bar{A}(t) = \bar{A}(t+\Delta t_B)$ with period \begin{equation} \Delta t_B \, = \, \frac{4 K(1/2)}{\sqrt{g B}} \,\simeq \, \frac{7.42}{\sqrt{g B}} \, , \end{equation} where $K(1/2)$ denotes the complete elliptic integral of the first kind \cite{Elliptic}. Similar behavior has been discussed in the context of classical chaos \cite{chaos}, or for parametric resonance in scalar field theories \cite{Kofman:1994rk}. The solution (\ref{eq:Asol}) is plotted in Fig.~\ref{fig:jacobiancn}. For later use we note that the corresponding characteristic frequency is $\omega_B = 2 \pi/\Delta t_B \simeq 0.847 \sqrt{g B}$. We emphasize that no constant non-zero solution exists for $\bar{A}$ in this case. 
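The background solution can be checked directly with a standard ODE integrator. The following sketch (in units where $g = B = 1$, an assumption made here for convenience) compares the numerical solution of $\partial_t^2 \bar{A} + g^2 \bar{A}^3 = 0$ with the Jacobi elliptic function (\ref{eq:Asol}) and the period $\Delta t_B$ quoted above:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj, ellipk

g, B = 1.0, 1.0                 # units with sqrt(gB) = 1, chosen for convenience
A0 = np.sqrt(B / g)

# Background field equation: d^2 A / dt^2 = -g^2 A^3 with A(0) = A0, A'(0) = 0
rhs = lambda t, y: [y[1], -g**2 * y[0]**3]
t = np.linspace(0.0, 15.0, 600)
sol = solve_ivp(rhs, (0.0, 15.0), [A0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)

# Analytic solution: A(t) = A0 cn(sqrt(gB) t, m = 1/2); scipy uses the parameter m
cn = ellipj(np.sqrt(g * B) * t, 0.5)[1]
print(np.max(np.abs(sol.y[0] - A0 * cn)))   # agreement at the integrator tolerance

period = 4.0 * ellipk(0.5) / np.sqrt(g * B)
print(period)                               # Delta t_B, approximately 7.42 / sqrt(gB)
```

Note that scipy's elliptic routines take the parameter $m$ rather than the modulus $k$, matching the convention ${\rm cn}(x,1/2)$ and $K(1/2)$ used in the text.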
This is in contrast to the analysis of Ref.~\cite{Tudron:1980gq}, where the same gauge potential configuration in the presence of suitable external charges is assumed to lead to a time-independent $\bar{A}$. \begin{figure} \begin{center} \epsfig{file=zeromode.eps, width=8cm} \caption{Time evolution of the background field $\bar{A}(t)$ normalized to its initial value. It is described by the Jacobi elliptic function $\sqrt{B/g}\,{\rm cn}(\sqrt{gB}\, t,1/2)$, shown as a function of time $t$ in units of $1/\sqrt{gB}$.}\label{fig:jacobiancn} \end{center} \end{figure} In the following we insert this background field behavior into the linearized fluctuation equation (\ref{eq:fluctuationequation}) and discuss solutions for modes of $\delta A^a_i$ in spatial Fourier space: \begin{equation} \delta A^a_i(t,{\bf p}) \, = \, \int {\rm d}^3 x\, e^{-i p_j x^j} \delta A^a_i(t,{\bf x}) \, . \end{equation} We concentrate on momenta $p_z$ along the $z$-direction, i.e.\ we evaluate the modes of $\delta A^a_i(t,{\bf p})$ at $p_x=p_y=0$. The evolution equation (\ref{eq:fluctuationequation}) for the different components of $\delta A^a_i(t,p_z)$ can then be written with $ \delta \dot{A}^a_i(t,p_z) \equiv \partial_t \delta A^a_i(t,p_z)$ as the following independent sets of coupled differential equations: \begin{equation} \left( \begin{array}{c} \delta \ddot{A}^1_x \\ \delta \ddot{A}^3_z \end{array} \right) \, = \, -\left( \begin{array}{cc} g^2 \bar{A}^2 + p_z^2 & i g \bar{A} p_z \\ -i g \bar{A} p_z & g^2 \bar{A}^2 \end{array} \right) \left( \begin{array}{c} \delta A^1_x \\ \delta A^3_z \end{array} \right) . \label{eq:M1} \end{equation} The complex conjugates of the components $(\delta A^1_y, \delta A^2_z)$ obey the same equation, which is obtained from (\ref{eq:M1}) with the replacement $(\delta A^1_x, \delta A^3_z) \, \rightarrow \, (\delta A^{1*}_y, \delta A^{2*}_z)$. 
Similarly, one finds \begin{equation} \left( \begin{array}{c} \delta \ddot{A}^2_x \\ \delta \ddot{A}^3_y \end{array} \right) \, = \, -\left( \begin{array}{cc} g^2 \bar{A}^2 + p_z^2 & 2 g^2 \bar{A}^2 \\ 2 g^2 \bar{A}^2 & g^2 \bar{A}^2 + p_z^2 \end{array} \right) \left( \begin{array}{c} \delta A^2_x \\ \delta A^3_y \end{array} \right) \label{eq:M2} \end{equation} and \begin{equation} \left( \begin{array}{c} \delta \ddot{A}^3_x \\ \delta \ddot{A}^2_y \\ \delta \ddot{A}^1_z \end{array} \right) \, = \, -\left( \begin{array}{ccc} p_z^2 & -g^2 \bar{A}^2 & -i g \bar{A} p_z \\ -g^2 \bar{A}^2 & p_z^2 & i g \bar{A} p_z \\ i g \bar{A} p_z & -i g \bar{A} p_z & 2 g^2 \bar{A}^2 \end{array} \right) \left( \begin{array}{c} \delta A^3_x \\ \delta A^2_y \\ \delta A^1_z \end{array} \right). \label{eq:M3} \end{equation} Each of the linear differential equation matrices (\ref{eq:M1})-(\ref{eq:M3}) with time dependent background field $\bar{A}(t)$ may be further analyzed by diagonalization. The time-dependent eigenvalues of the equation matrices read for (\ref{eq:M1}) \begin{eqnarray} \omega_{1}^{\pm}(p_z)^2 &=& g^2 \bar{A}^2 + \frac{p_z^2}{2} \pm \frac{1}{2} \sqrt{4 g^2 \bar{A}^2 p_z^2 + p_z^4} \, , \end{eqnarray} for (\ref{eq:M2}) \begin{eqnarray} \omega_2(p_z)^2 &=& 3 g^2 \bar{A}^2 + p_z^2 \, , \label{eq:omega}\\ \omega_3(p_z)^2 &=& - (g^2 \bar{A}^2 - p_z^2) \, , \nonumber \end{eqnarray} and for (\ref{eq:M3}) \begin{eqnarray} \omega_4(p_z)^2 &=& - (g^2 \bar{A}^2 - p_z^2) \, , \\ \omega_{5}^{\pm}(p_z)^2 &=& \frac{1}{2} \left(3 g^2 \bar{A}^2 + p_z^2 \pm \sqrt{ g^4 \bar{A}^4 + 6 g^2 \bar{A}^2 p_z^2 + p_z^4}\right) \, . \nonumber \end{eqnarray} The corresponding eigenvectors depend, in general, on $\bar{A}(t)$ and thus on time. However, the eigenvectors associated to $\omega_2(p_z)^2$, $\omega_3(p_z)^2$ and $\omega_4(p_z)^2$ are time-independent and given by $(1,1)/\sqrt{2}$, $(-1,1)/\sqrt{2}$, and $(1,1,0)/\sqrt{2}$, respectively. 
These include, in particular, the only {\em negative} eigenvalues $\omega_3(p_z)^2 = \omega_4(p_z)^2$ for momenta $p_z^2 < g^2 \bar{A}^2(t)$. These will play a particularly important role in the following since they turn out to govern the fastest time scales. It is very instructive to consider the example of (\ref{eq:M2}) in diagonalized form, i.e.\ \begin{eqnarray} \delta \ddot{A}_+ &=& -\left(3 g^2 \bar{A}^2 + p_z^2 \right) \delta A_+ \, , \label{eq:Ap}\\ \delta \ddot{A}_- &=& \left(g^2 \bar{A}^2 - p_z^2 \right) \delta A_- \, , \nonumber \end{eqnarray} where $\delta A_+ = \delta A^2_x + \delta A^3_y$ and $\delta A_- = \delta A^3_y - \delta A^2_x$. (The same equation for $\delta A_-$ would also be obtained from (\ref{eq:M3}) by associating $\delta A_-$ to $\delta A^3_x + \delta A^2_y$.) These equations are of the Lam{\'e}-type \cite{Elliptic}: Since the squared background field appearing in (\ref{eq:Ap}) has periodicity $\Delta t_B/2$, each solution can be written as a linear combination of solutions of the form \begin{equation} \delta A_+(t+\Delta t_B/2,p_z) = e^{i C_+(p_z)}\, \delta A_+(t,p_z) \end{equation} such that \begin{equation} \label{eq:linansatz} \delta A_+(t,p_z) = e^{2i C_+(p_z) t/\Delta t_B}\, \Pi_+(t,p_z) \end{equation} with periodic functions $\Pi_+(t+\Delta t_B/2,p_z) = \Pi_+(t,p_z)$ and similarly for $\delta A_-(t,p_z)$. The Floquet exponent $C_+(p_z)$ is time-independent and leads to oscillating behavior if $C_+(p_z)$ is real, or to exponentially growing or decaying solutions if it has a non-vanishing imaginary part. The dominant exponentially growing solutions arise because of the appearance of the time-dependent negative eigenvalues $\omega_3(p_z)^2=\omega_4(p_z)^2$ and we will concentrate on them in the following. Subdominantly growing modes are associated to parametric resonance and are discussed in detail in the appendix. 
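The Floquet exponents can be extracted numerically from the monodromy matrix of the Hill-type equation for $\delta A_-$. The sketch below (again in units $gB = 1$, an assumption of this illustration, not a computation from the original text) integrates the two fundamental solutions over one period $\Delta t_B/2$ and reads off the growth rate from the largest eigenvalue; it also evaluates the period-average of ${\rm cn}^2$, which sets the averaged estimate of the growth rate:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad
from scipy.special import ellipj, ellipk

# delta A_-'' = (g^2 Abar(t)^2 - p_z^2) delta A_-, with Abar(t) = cn(t, 1/2) for gB = 1
T = 2.0 * ellipk(0.5)                     # period Delta t_B / 2 of Abar^2

def growth_rate(pz):
    def rhs(t, y):
        cn = ellipj(t, 0.5)[1]
        return [y[1], (cn**2 - pz**2) * y[0]]
    # Evolve the two unit initial conditions to build the monodromy matrix
    cols = [solve_ivp(rhs, (0.0, T), ic, rtol=1e-10, atol=1e-12).y[:, -1]
            for ic in ([1.0, 0.0], [0.0, 1.0])]
    lam = np.max(np.abs(np.linalg.eigvals(np.array(cols).T)))
    return np.log(lam) / T                # exponential growth rate of delta A_-

gamma0 = growth_rate(0.0)
print(gamma0)                             # largest rate, at vanishing momentum

# Period-average of cn^2, entering the averaged Nielsen-Olesen-type estimate
avg = quad(lambda x: ellipj(x, 0.5)[1]**2, 0.0, T)[0] / T
print(np.sqrt(avg))                       # sqrt(0.457) ~ 0.68, of the same size as gamma0
```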
We first observe from the evolution equation (\ref{eq:Ap}) for $\delta A_-(t,p_z)$ that replacing the background field by a constant, $\bar{A}(t) \rightarrow \sqrt{B(t=0)/g}$, would lead to the well-known Nielsen-Olesen result for the growth rate of modes with $p_z^2 \le g B(t=0)$: \begin{equation} \gamma_{\rm NO}(p_z) \, = \, \sqrt{g B(t=0) - p_z^2} \, . \label{eq:gammano} \end{equation} This is in accordance with the fact that the value for the maximum growth rate for constant fields, as stated in (\ref{eq:NOmaxrate}), is obtained for vanishing momenta. In contrast, since the background field is oscillatory in our case there will be deviations from the Nielsen-Olesen result (\ref{eq:gammano}). However, it turns out that to rather good accuracy the growth rates follow the Nielsen-Olesen estimate if the temporal average of the oscillating magnetic background field is used, as explained in the appendix. The time average over one period $\Delta t_B/2$ of the square of the time-dependent background field (\ref{eq:Asol}), which enters the evolution equation for the fluctuation $\delta A_-(t,p_z)$, is given for the above initial conditions by \begin{eqnarray} g \overline{B} & \equiv & \frac{g B(t=0)}{2 K(1/2)} \int_0^{2 K(1/2)} \! {\rm d}x \, {\rm cn}^2\!\left(x, \frac{1}{2}\right) \\ & = & \frac{ \Gamma(3/4)^2 }{ \Gamma(5/4) \Gamma(1/4) }\, g B(t=0) \nonumber\\ & \approx & 0.457\, g B(t=0) \, . \label{eq:averageB} \end{eqnarray} Indeed, replacing $g B(t=0)$ in (\ref{eq:gammano}) by the average value (\ref{eq:averageB}) reproduces the full numerical results in the linear regime at sufficiently early times to good accuracy, as described in the following. \section{Classical lattice gauge theory} \label{sec:classical-statistical} The exponentially growing solutions observed in the previous section will eventually lead to non-linear behavior. 
In order to be able to describe this physics, we use classical-statistical Yang-Mills theory in Minkowski space-time following closely Ref.~\cite{Berges:2007re}, to which we refer for further technical details. The above linear analysis will be seen to accurately reproduce the numerical data for times which are short compared to the inverse characteristic 'primary' growth rates of the linear theory. Afterwards, non-linear contributions become crucial. In particular, the primary growth induces 'secondary' growth rates which are multiples of the primary ones similar to what has been observed in Ref.~\cite{Berges:2007re}, where no coherent magnetic fields were employed. As a consequence, a very efficient growth for all components of the gauge potential is observed such that the details about the initial configuration (\ref{eq:Blong}) become irrelevant as long as sufficient primary growth is triggered. For the simulations we employ the Wilsonian lattice action for SU($2$) gauge theory in Minkowski space-time: \begin{eqnarray} S[U] &=& - \beta_0 \sum_{x} \sum_i \left\{ \frac{1}{2 {\rm Tr} \mathbf{1}} \left( {\rm Tr}\, U_{x,0i} + {\rm Tr}\, U_{x,0i}^{\dagger} \right) - 1 \right\} \nonumber\\ && + \beta_s \sum_{x} \sum_{i<j} \left\{ \frac{1}{2 {\rm Tr} \mathbf{1}} \left( {\rm Tr}\, U_{x,ij} + {\rm Tr}\, U_{x,ij}^{\dagger} \right) - 1 \right\} , \nonumber\\ \label{eq:LatticeAction} \end{eqnarray} with $x = (x^0, {\bf x})$ and spatial Lorentz indices $i,j = 1,2,3$. It is given in terms of the plaquette variable $U_{x,\mu\nu} \equiv U_{x,\mu} U_{x+\hat\mu,\nu} U^{\dagger}_{x+\hat\nu,\mu} U^{\dagger}_{x,\nu}$, where $U_{x,\nu\mu}^{\dagger}=U_{x,\mu\nu}\,$. Here $U_{x,\mu}$ is the parallel transporter associated with the link from the neighboring lattice point $x+\hat{\mu}$ to the point $x$ in the direction of the lattice axis $\mu = 0,1,2,3$. 
The definitions $\beta_0 \equiv 2 \gamma {\rm Tr} \mathbf{1}/g_0^2$ and $\beta_s \equiv 2 {\rm Tr} \mathbf{1}/(g_s^2 \gamma)$ contain the lattice parameter $\gamma \equiv a_s/a_t$, where $a_s$ denotes the spatial and $a_t$ the temporal lattice spacings, and we will consider $g_0 = g_s = g$. The dynamics is solved in temporal axial gauge. Varying the action (\ref{eq:LatticeAction}) w.r.t.\ the spatial link variables $U_{x, j}$ yields the classical lattice equations of motion. Variation w.r.t.\ a temporal link gives the Gauss constraint. The coupling $g$ can be scaled out of the classical equations of motion and we set $g = 1$ for the simulations. We define the gauge fields as \begin{equation} g A_i^a(x) \,=\, -\frac{i}{2 a_s} \, {\rm Tr} \left( \sigma^a U_i(x) \right) \label{eq:compute-gauge-field} \end{equation} where $\sigma^1$, $\sigma^2$ and $\sigma^3$ denote the three Pauli matrices. Correlation functions are obtained by repeated numerical integration of the classical lattice equations of motion and Monte Carlo sampling of initial conditions~\cite{Berges:2007re}. The initial time derivatives $\dot{A}_{\mu}^a(t=0, \vec{x})$ are set to zero for all shown results, which implements the Gauss constraint at all times. This simplifies the numerical implementation; however, we checked that including initial electric fields does not significantly change the observed growth rates. Shown results are from computations on $N^3 = 128^3$ lattices. \begin{figure} \begin{center} \epsfig{file=fig1.eps, width=8cm} \caption{Time evolution of spatial Fourier modes of $A_x^3$, all in appropriate units of the energy density $\epsilon$. The momenta are chosen in the $z$ direction.} \label{fig1} \end{center} \end{figure} Using the initial condition (\ref{eq:Anonlinear}), supplemented by a small noise for all gauge field components, we indeed find exponentially growing behavior corresponding to a nonequilibrium instability. 
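The relation (\ref{eq:compute-gauge-field}) between link variables and gauge fields can be illustrated with a small standalone sketch. Here the link is built as $U = \exp(i a_s g A^a \sigma^a)$, a convention chosen so that the trace formula above is reproduced to leading order in $a_s g A$; the field values are illustrative and not taken from the simulations.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

a_s, g = 1.0, 1.0
A = np.array([0.05, -0.02, 0.08])   # illustrative small gauge-field components A^a

# SU(2) link consistent (to leading order) with the gauge-field definition above
U = expm(1j * a_s * g * sum(A[b] * sigma[b] for b in range(3)))

# Recover g A^a = -(i / (2 a_s)) Tr(sigma^a U); exact up to O((a_s g A)^2) corrections
A_rec = np.array([(-1j / (2.0 * a_s) * np.trace(s @ U)).real for s in sigma]) / g
print(A_rec)    # close to A for small a_s g A
```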
In Fig.~\ref{fig1} we show the absolute value squared of $A^3_x(t,{\bf p})$ as a function of time in units of the inverse fourth root of the energy density $\epsilon$. Different lines correspond to different spatial momenta and one observes that modes with smallest momentum have the largest initial growth rate, as expected from the linear analysis of Sec.~\ref{sec:linear}. In contrast to the linear analysis, Fig.~\ref{fig1} shows that higher momentum modes can also have significant growth at somewhat later times. \begin{figure} \begin{center} \epsfig{file=NO-growth-rates.eps, width=8.0cm, angle=0} \hspace{1cm} \epsfig{file=growth-rates-eps-units.eps, width=8.0cm, angle=0} \caption{Growth rates for $|A_x^3(t,p_z)|$ from simulations with different magnetic field strengths measured (a) in lattice units and (b) in units of $\epsilon^{1/4}$. The solid lines represent the function (\ref{eq-NO-growth-rates-num}) for the various magnetic field strengths. The presence of a subdominant instability band is shown in the appendix, in Fig.~\ref{fig-anares}. } \label{fig-NO-growth-rates} \end{center} \end{figure} Concentrating first on earlier times, we obtain an average growth rate by fitting an exponential function to $|A_x^3(t,p_z)|$ over times large compared to the oscillation frequency. The results of such a fit are shown in Fig.~\ref{fig-NO-growth-rates}. The upper panel shows the growth rate as a function of $p_z$ for different initial magnetic field strengths. Both the growth rate and the momentum are plotted in lattice units. We find that the primary growth rates can be described very well by \begin{equation} \gamma^{(\text{num})}(p_z) \, \simeq \, \sqrt{0.42 g B(t=0) - p_z^2} \label{eq-NO-growth-rates-num} \end{equation} for momenta $p_z^2 \lesssim 0.42 g B(t=0)$. This function is displayed as a curve in Fig.~\ref{fig-NO-growth-rates} (a) along with the fit values. 
The numerical factor appearing in front of $g B(t=0)$ in (\ref{eq-NO-growth-rates-num}) is equal within errors to the one in (\ref{eq:averageB}) found from the linear analysis. \begin{figure*}[!ht] \begin{center} \epsfig{file=fig2.eps, width=12cm} \caption{Time evolution of the individual gauge field components in units of appropriate powers of the energy density. Each panel shows the time evolution of three different Fourier coefficients whose momenta are parallel to the $z$-axis.} \label{fig:NO-A-evolution} \end{center} \end{figure*} In the lower panel of Fig.~\ref{fig-NO-growth-rates} (b) we plot the same data as in (a), but now the growth rates as well as the momenta are measured in units of $(g^2\epsilon)^{1/4}$. One observes a collapse of the results from simulations with different initial field strengths onto a single curve to very good accuracy. The reason for this trivial scaling can be understood from (\ref{eq-NO-growth-rates-num}) for the growth rate and the fact that the energy density is proportional to $B^2$. We note that parametric resonance leads to a subdominant instability band at higher longitudinal momenta, which is discussed in the appendix. Fig.~\ref{fig1} shows that there are significant deviations from the linear analysis for higher momentum modes already at rather early times, before the growth of the lowest momentum mode saturates. Though these higher momentum modes have initially comparably small primary growth rates, there is a substantial speed-up because of non-linearities at later times. As a consequence, they almost catch up with the initially fastest growing mode. The observed non-linear behavior is the consequence of the self-interactions of the gauge fields \cite{Berges:2007re}. 
The secondary growth rate is to a good approximation three times the growth rate of the fastest primary growth, which can be understood from resummed loop expansions based on the two-particle irreducible effective action following the lines of Ref.~\cite{Berges:2002cz}, where similar phenomena for nonequilibrium instabilities in scalar field theories were studied. \begin{figure} \begin{center} \epsfig{file=T-diag-vs-t.eps, width=8cm} \caption{Time evolution of the diagonal entries of the spatial components of the stress-energy tensor, i.e.\ transverse versus longitudinal 'pressure', for single runs with a macroscopic initial $B$-field configuration defined in (\ref{eq:Anonlinear}). } \label{fig-NO-T_ij} \end{center} \end{figure} Even though the initial conditions (\ref{eq:Anonlinear}) involve only a few gauge field components, the non-linear dynamics very efficiently includes the other components as well. Similar to Fig.~\ref{fig1}, we show in Fig.~\ref{fig:NO-A-evolution} the time evolution for all gauge field components. The three lines in each graph correspond to the different momenta $p_z = 0$, $p_z = 0.48 (g^2 \epsilon)^{1/4}$ and $p_z = 1.2 (g^2 \epsilon)^{1/4}$, respectively. The zero momentum mode oscillations of $A^2_x$ and $A^3_y$, expected from the linear analysis of Sec.~\ref{sec:linear}, are manifest as well as the primary growth in these components at non-zero momenta and in $A^3_x$ and $A^2_y$.\footnote{At early times the magnetic field is approximately given by $ B_z^1(t) \simeq g\, A_x^2(t,0) A_y^3(t,0)$, because the contribution of the derivative terms is small.} The simulations reveal growth for all other components such that most of the details about the initial conditions are lost rather quickly. All growth saturates at approximately $t \simeq 40 \, (g^2\epsilon)^{-1/4}$, which coincides with the time when the zero modes cease to oscillate. Before this time these oscillations are almost identical to the unperturbed case described by (\ref{eq:Asol}). 
The initial configuration (\ref{eq:Blong}) pointing in the longitudinal direction is highly anisotropic and an important question in this context is the characteristic time for isotropization. For applications to hydrodynamic descriptions of heavy ion collisions the isotropization time of the stress-energy tensor is of particular relevance. The diagonal entries of the spatial components of the stress-energy tensor, or transverse and longitudinal 'pressure', are shown as a function of time in Fig.~\ref{fig-NO-T_ij}. One observes that for the macroscopic initial $B$-field configuration described by (\ref{eq:Anonlinear}) the stress-energy tensor shows oscillatory behavior. We find significant damping of these oscillations and approximate isotropization about the time when the exponential growth with characteristic rate $\sqrt{gB}$ stops. As we will see in Sec.~\ref{sec:iso}, this result can change dramatically if fluctuations are included in the initial conditions. \section{Including initial fluctuations} \label{sec:iso} \subsection{Ensemble of coherent fields} \label{sec:coherent} So far, we considered initial configurations with a sharply defined macroscopic field amplitude $B$ described by (\ref{eq:Anonlinear}). An important step towards more realistic scenarios concerns the inclusion of fluctuations in the initial conditions. Here we choose non-zero initial values of the homogeneous $A_i^a$ fields randomly from a Gaussian distribution with zero mean and finite width. Then we perform an ensemble average over the initial conditions. Such a nonequilibrium ensemble is meant to describe a single system in the same sense as, e.g., equilibrium thermodynamics may be applied to a macroscopic system. 
More precisely, we consider at initial time the homogeneous field configurations \begin{equation} g A_x^2 = s_1,\qquad g A_y^3 = s_2 \, , \label{eq:noisy-initial} \end{equation} where the real random numbers $s_1$ and $s_2$ fulfill \begin{equation} \langle s_1 \rangle = \langle s_2 \rangle = 0 \, , \quad \langle s_1^2 \rangle = \langle s_2^2 \rangle = \Delta^2\, , \label{eq:noisy-initial-delta} \end{equation} where $\langle \ldots \rangle$ denotes the ensemble average with given width $\Delta$. Again, all other $A_i^a$ are taken to vanish initially up to a small amplitude noise seeding the instabilities. \begin{figure} \begin{center} \epsfig{file=growthrates_random_errorbars.eps, width=8cm} \caption{Growth rates for the fluctuations of gauge potentials, $\langle |A^2_x(t,p_z)|^2 \rangle^{1/2}$ etc., for the ensemble of coherent fields as described in the text. } \label{fig-random-growthrates} \end{center} \end{figure} It turns out that growth rates in units of the average energy density $\langle \epsilon \rangle$ are fairly independent of the value of the width and the lattice size for the considered range of parameters. Fig.~\ref{fig-random-growthrates} shows results for growth rates using the range of width $\Delta a_s = 0.15-0.25 $ ensuring that the instability dynamics can be well resolved on the lattices we are using. We employ $64^3$ or $128^3$ lattices for an ensemble built out of 1600 runs in total. The vertical lines indicate the statistical errors. The ensemble averages show a rather similar picture as for the initial condition (\ref{eq:Anonlinear}) without fluctuations. For the latter case we saw in Sec.~\ref{sec:linear} that $\delta A_y^3 - \delta A_x^2$ and $\delta A_x^3 + \delta A_y^2$ exhibit primary growth rates as displayed in Fig.~\ref{fig-NO-growth-rates}. Also for the ensemble average the initially unstable modes turn out to be $A_x^2$, $A_y^3$, $A_y^2$ and $A^3_x$. 
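As a minimal illustration of the ensemble construction in (\ref{eq:noisy-initial}) and (\ref{eq:noisy-initial-delta}), the following Python sketch draws the homogeneous amplitudes $s_1$, $s_2$ from a Gaussian of width $\Delta$ and checks the quoted ensemble statistics. The seed and the concrete width are illustrative choices; only the width range $\Delta a_s = 0.15$--$0.25$ and the ensemble size of 1600 runs are taken from the text.

```python
import numpy as np

# Sketch of the initial-condition ensemble, eq. (noisy-initial):
# g A_x^2 = s1 and g A_y^3 = s2 are drawn from a Gaussian with
# zero mean and width Delta.  The lattice spacing a_s sets the units;
# Delta*a_s = 0.2 lies inside the range 0.15-0.25 quoted in the text.
rng = np.random.default_rng(seed=1)
a_s = 1.0            # lattice spacing (unit choice)
delta = 0.2 / a_s    # ensemble width Delta
n_runs = 1600        # ensemble size quoted in the text

s1 = rng.normal(0.0, delta, size=n_runs)
s2 = rng.normal(0.0, delta, size=n_runs)

# Ensemble averages, eq. (noisy-initial-delta): <s> = 0, <s^2> = Delta^2
mean_s1 = s1.mean()
var_s1 = (s1**2).mean()
```

With 1600 samples the statistical error on the mean is of order $\Delta/40$, which is why a fairly large ensemble is needed before the averaged growth rates stabilize.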
However, two different growth rates are observed from Fig.~\ref{fig-random-growthrates}. Here $\langle |A_x^2(t,p_z)|^2 \rangle^{1/2}$ and $\langle |A^3_y(t,p_z)|^2 \rangle^{1/2}$ exhibit a somewhat larger growth rate, while $\langle |A^3_x(t,p_z)|^2 \rangle^{1/2}$ and $\langle |A_y^2(t,p_z)|^2 \rangle^{1/2}$ have a smaller rate than for the simple initial condition (\ref{eq:Anonlinear}). \begin{figure} \begin{center} \epsfig{file=Tmunu_random_avr_smooth.eps, width=8cm} \caption{The same as in Fig.~\ref{fig-NO-T_ij}, however, including initial fluctuations described by the ensemble of coherent fields (\ref{eq:noisy-initial}). The vertical bars reflect the size of typical fluctuations on the smoothed curve we plot. The thin dashed line ($1/3$) corresponds to an isotropic $T_{ij}$.} \label{fig-random-tmunu} \end{center} \end{figure} The inclusion of initial fluctuations has a significant impact on the behavior of the stress-energy tensor. In Fig.~\ref{fig-random-tmunu} we plot the diagonal entries of the spatial components of the stress-energy tensor as a function of time. Even though the initial configuration is highly anisotropic, one observes that transverse and longitudinal pressure very quickly approach the same value. In contrast to what has been observed in Sec.~\ref{sec:classical-statistical}, where approximate isotropy of pressure occurred at the end of the period of exponential growth of fluctuations, it now happens on a much shorter time scale. This rapid isotropization is a dephasing phenomenon. For a sharply defined initial field amplitude $B$, which is realized with the initial condition (\ref{eq:Anonlinear}), the longitudinal and transverse pressure components oscillate with frequency $\sim \sqrt{g B}$ according to Fig.~\ref{fig-NO-T_ij}. The time average of the pressure over several oscillations was already isotropic in that case, but the coherent oscillations of the homogeneous $A$ fields showed up in this quantity. 
In contrast, sampling many runs with different phases and initial field amplitudes leads very quickly to a practically constant value. Rapid isotropization is an important ingredient for the application of hydrodynamic descriptions for heavy-ion collisions. However, in this case longitudinal expansion and increasing diluteness may make the system again become more anisotropic. We also emphasize that the observed dephasing phenomenon depends on the fact that the stress-energy tensor is dominated by the homogeneous modes for the considered initial conditions. For the spatially inhomogeneous configurations in the context of color flux tubes, discussed in the following, this will no longer be the case. \subsection{Modeling color flux tubes} \label{sec:patches} \begin{figure} \begin{center} \epsfig{file=randompatch.eps, width=7.2cm, angle=0} \caption{Time evolution of a low momentum mode of the gauge potential $A^1_y(t,p_z \simeq 0)$ for different sizes of correlated domains.} \label{fig-patchsize} \end{center} \end{figure} In this section, we also take into account that the fields may be correlated only over a limited transverse size while being homogeneous in longitudinal direction. To describe this, we divide the transverse plane into square-shaped domains of equal size. Each patch is filled with a coherent color magnetic field as in (\ref{eq:noisy-initial}), however, in each domain we use a different, randomly chosen color direction. The distribution of the random $A$-fields is a Gaussian with a given width $\Delta$. This corresponds to an inhomogeneous configuration, which is built from correlated domains of a characteristic transverse size. If the transverse size is taken to be $Q_s$ then this should model characteristic aspects of the dynamics of color flux tubes. The typical scale of the magnetic field inside a domain is $ g B_{\rm domain} \sim \Delta^2 $. We consider domains ranging in size from 0.4 to 6 in units of the characteristic spatial extent $1/\Delta$. 
As before, a small amplitude noise is also added to all the fields to seed the instabilities. Similar to the dynamics for the homogeneous initial conditions described above, one observes also for the inhomogeneous configurations approximately exponential growth of the spatial momentum modes of the gauge potentials, $A^a_i(t,{\bf p})$. However, one observes primary growth for all color and spatial components with approximately the same rate. In Fig.~\ref{fig-patchsize} the time evolution for a low momentum mode\footnote{We use the first non-zero momentum mode on a $128^3$ lattice.} of the gauge potential, $A^1_x(t,p_z \simeq 0)$, is shown as a function of time in units of the averaged energy density $\epsilon$ for systems with different initial domain sizes. The data is obtained from an average over only eight runs, but since each run contains a significant number of domains with independent coherent fields it turns out that the growth rates are fairly stable. \begin{figure} \begin{center} \epsfig{file=random_rates.eps, width=7.2cm, angle=0} \caption{ The growth rate as a function of the momentum for various domain sizes.} \label{fig-patchrate} \end{center} \end{figure} In Fig.~\ref{fig-patchrate} the growth rates are given as a function of momenta for different domain sizes. Comparing the results for the homogeneous and the inhomogeneous initial configurations, one observes a similar momentum dependence, but the latter show smaller growth rates by approximately a factor of six to eight. It is interesting to note that one still might understand this difference in terms of the characteristic size of the space-averaged color-magnetic fields. This average is not $ B_{\rm domain}$ because different domains have independent fields, which can average out. Instead, the characteristic size may be associated to the average of the absolute value of the magnetic field. 
In the case of the 'flux tube' initial condition the space-average of the magnetic field is typically a factor of a hundred smaller than in the homogeneous case in units of the energy density. If the growth rates scale with the square root of the magnetic field, as is characteristic for a Nielsen-Olesen instability, one expects a factor of ten smaller growth rates. However, this is only a lower estimate, since somewhat larger growth rates can occur if the spatial variations of the magnetic field are such that there are regions with larger magnetic fields which cancel in the average. Remarkably, the growth rates measured in units of the energy density of the system are similar to the growth rates observed in Ref.~\cite{Berges:2007re}. In the latter study inhomogeneous initial field configurations were randomly generated from a Gaussian distribution. The oblate initial distribution for the spatial Fourier modes $|A^a_i(t=0,{\bf p})|$ was characterized by a transverse width $\Delta_T$ and a longitudinal width $\Delta_L$, where $\Delta_L \ll \Delta_T$. This corresponds to an extreme anisotropy with a $\delta(p_z)$-like distribution in longitudinal direction. As a consequence, the approximately homogeneous field configurations in longitudinal direction employed in Ref.~\cite{Berges:2007re} resemble quite closely the correlated domains employed here if the transverse width $\Delta_T$ is associated to the inverse domain size of our configurations. \section{Conclusions} \label{sec:conclude} In this paper we have studied out-of-equilibrium dynamics in $SU(2)$ pure gauge field theory. We presented a detailed analytic calculation of the time evolution in the linear regime starting from coherent field configurations. Generalizing the well-known Nielsen-Olesen instabilities for constant color-magnetic fields, in a first step we took into account temporal modulations. 
This leads to a remarkable coexistence of the original Nielsen-Olesen instability for time-averages together with a subdominant instability band because of the phenomenon of parametric resonance. The comparison to classical-statistical lattice gauge theory simulations showed remarkable agreement between analytics and numerics in the linear regime, while the lattice results show important nonlinear phenomena at later times such as secondary growth rates which are multiples of primary growth rates. This analysis provided the building blocks for the understanding of the dynamics of more realistic initial conditions including fluctuations. For the ensemble of coherent fields, which have zero mean color-magnetic field, again robust growth of the Nielsen-Olesen type is observed. Remarkably, we find that for this ensemble isotropization of the stress-energy tensor happens on a much shorter time scale than the characteristic inverse growth rate of the instability. This rapid isotropization is a dephasing phenomenon. Rapid isotropization is an important ingredient for the application of hydrodynamic descriptions for heavy-ion collisions. However, in this case longitudinal expansion and increasing diluteness~\cite{Romatschke:2005pm,Fukushima:2011nq} can make the system again become more anisotropic. In particular, the observed dephasing phenomenon depends on the fact that the stress-energy tensor is dominated by the homogeneous modes for the considered initial conditions. For spatially inhomogeneous configurations this is typically not the case. To see this and in order to make the link to earlier work on plasma instabilities using inhomogeneous initial field configurations, we finally took into account that the fields should be correlated only over a limited transverse size. If the transverse size is taken to be $Q_s$ or $\sqrt{gB}$ then this should model characteristic aspects of the dynamics of color flux tubes. 
In this case we observed exponential growth for all components of the inhomogeneous gauge field configuration. The nature of the underlying instability still shows properties reminiscent of the Nielsen-Olesen type. In particular, the results indicate that the characteristic dependence of the primary growth rate as a function of momentum still reaches its maximum at low momenta. In contrast, Weibel instabilities are expected to show a vanishing growth rate at zero momentum~\cite{Weibel}, which has also been emphasized in Ref.~\cite{Iwazaki:2008xi}. In view of these results it is interesting to observe that previous findings about plasma instabilities from classical-statistical lattice gauge theory~\cite{Romatschke:2005pm,Berges:2007re,Fukushima:2011nq} share important aspects of a Nielsen-Olesen instability. In this work, we have not discussed the extraction of distribution functions and their possible interplay with time-dependent condensates after isotropization as suggested recently in Ref.~\cite{Blaizot:2011xf}. Distribution functions may, for instance, be derived from equal-time correlation functions in Coulomb gauge using the classical-statistical simulations employed in this work. This is deferred to a separate publication~\cite{preparation}. \acknowledgments \noindent The authors would like to thank Hiro Fujii for many inspiring discussions. They acknowledge the support by the Deutsche Forschungsgemeinschaft, the University of Heidelberg (FRONTIER), the Alliance Program of the Helmholtz Association (HA216/EMMI), by BMBF and MWFK Baden-W\"urttemberg (bwGRiD cluster). \section*{Appendix} \subsection{Linear regime including parametric resonance} In Sec.~\ref{sec:linear} we discussed analytic solutions in the linear regime, which concentrated on the dominant exponentially growing modes at low momenta. However, the analysis neglects the phenomenon of parametric resonance in the presence of a periodically evolving background field. 
In this appendix we show that this phenomenon leads to a subdominant instability band at higher longitudinal momenta than the Nielsen-Olesen instability. Here we focus on the evolution of $\delta A_-(t,p)$ described by (\ref{eq:Ap}) of Sec.~\ref{sec:linear} as this shows the dominant primary instability. Plugging in the solution for the evolution of the background field (\ref{eq:Asol}), the evolution equation for vanishing transverse momentum reads \begin{eqnarray} \left[\partial_t^2+p^2-gB~\text{cn}^2\left(\sqrt{gB}~t,\frac{1}{2}\right) \right]\delta A_-(t,p)=0 , \label{eq:AppEVO} \end{eqnarray} where in this appendix we always denote the longitudinal momentum by $p$ to ease the notation. This closely resembles the Jacobian form of the Lam\'{e} equation. The crucial difference here is the negative sign in front of the oscillating term, which gives rise to the Nielsen-Olesen type instability discussed in Sec.~\ref{sec:linear}. Our strategy for solving the above evolution equation consists of approximating \begin{eqnarray} \text{cn}^2(x,m)=1-\text{cn}^2(x-K(m),m)+\mathcal{O}(m^2) \label{eq:APPapprox} \end{eqnarray} for $m=1/2$ in (\ref{eq:AppEVO}) in order to reduce it to the well-known Lam\'{e} equation, where again $K(m)$ denotes the complete elliptic integral of the first kind \cite{Elliptic}. While this approximation captures the important features of (\ref{eq:AppEVO}), it has the shortcoming that the average field strength $g\overline{B}$ according to Eq. (\ref{eq:averageB}) is overestimated by a factor of \begin{eqnarray} \frac{g\overline{B}}{g\overline{B}_{\text{app}}}=\frac{c}{1-c} \;, \quad c=\frac{ \Gamma(3/4)^2 }{ \Gamma(5/4) \Gamma(1/4)} \simeq 0.457 \;. \label{eq:APPavgB} \end{eqnarray} Consequently, we expect the approximation to yield slightly enhanced growth rates that extend to somewhat higher momenta within about ten percent accuracy of the full numerical solution at early times, where the linear analysis is expected to hold. 
In order to obtain the solutions for $\delta A_-(t,p)$ for the above approximation, we recast the evolution equation to the Weierstrass form by use of the identity \cite{Boyanovski,Elliptic} \begin{eqnarray} \text{cn}^2\left(x,\frac{1}{2}\right)=-2\, \wp\left(x+iK\left(\frac{1}{2}\right)\right) \;, \label{eq:APPwpcn} \end{eqnarray} where $\wp(x)$ is the Weierstrass elliptic function with roots $e_1=1/2$, $e_2=0$ and $e_3=-1/2$. The evolution equation in Weierstrass form then reads \begin{eqnarray} \left[\partial_t^2+p^2-gB-2gB\, \wp\left(\sqrt{gB}\, t-\tau\right)\right]\delta A_-(t,p)=0 \;, \nonumber\\ \label{eq:APPWSeq} \end{eqnarray} where $\tau=(1-i)K(1/2)$. By introducing the dimensionless time variable $\theta=\sqrt{gB}\, t$ and expressing the time independent contribution in (\ref{eq:APPWSeq}) in terms of the Weierstrass elliptic function according to \begin{eqnarray} \wp(z)=1-\frac{p^2}{gB} \, , \label{eq:APPzDef} \end{eqnarray} the fundamental solutions $U_{p}^{1}(\theta)$ and $U_{p}^{2}(\theta)$ to (\ref{eq:APPWSeq}) read \cite{ODE} \begin{eqnarray} U_{p}^{1/2}(\theta)=e^{\mp(\theta-\tau)\zeta(z)}\,\frac{\sigma(\theta-\tau \pm z)}{\sigma(\theta-\tau)} \, . \end{eqnarray} Here the superscript $1/2$ labels the respective fundamental solution. The functions $\sigma(x)$ and $\zeta(x)$ represent the corresponding Weierstrass functions and the momentum dependence of the solutions is encoded in $z$ defined by (\ref{eq:APPzDef}). \begin{figure}[t] \begin{center} \epsfig{file=parametric.eps, width=7.2cm, angle=0} \epsfig{file=frequencies.eps, width=7.2cm, angle=0} \caption{\label{fig-anares} Growth rates and oscillation frequencies in the linear regime.} \end{center} \end{figure} In order to obtain the growth rate $\gamma(p)$ and the oscillation frequency $\omega(p)$ we proceed along the lines of Sec.~\ref{sec:linear} performing a Floquet analysis, i.e. 
\begin{eqnarray} U_{p}^{1/2}(\theta+2K(1/2))=e^{i\,C_-^{1/2}(p)}\, U_{p}^{1/2}(\theta) \end{eqnarray} with the Floquet function $C_{-}^{1/2}(p)$ as in Sec.~\ref{sec:linear}. Using the quasi periodicity of the Weierstrass $\sigma$-function \cite{Boyanovski,Elliptic} \begin{eqnarray} \sigma(x+2K(1/2))=-\sigma(x)\exp[2(x+K(1/2))\zeta(K(1/2))] \nonumber \\ \end{eqnarray} we find \begin{eqnarray} C^{1/2}_-(p)=\pm2i\left[K(1/2)\zeta(z)-z~\zeta(K(1/2))\right]\;. \end{eqnarray} The growth rate $\gamma(p)$ and oscillation frequency $\omega(p)$ are related to the real and imaginary parts of the Floquet index by\footnote{We note that in this way the oscillation is only defined up to multiples of $2\pi$.} \begin{eqnarray} \gamma(p)&=&\sqrt{gB}~\frac{|\text{Im}[C_-(p)]|}{2K(1/2)} \label{eq:APPgamma}\\ \omega(p)&=&\sqrt{gB}~\frac{|\text{Re}[C_-(p)]|}{2K(1/2)} \label{eq:APPomega}\;. \end{eqnarray} In order to evaluate the expressions in (\ref{eq:APPgamma}) and (\ref{eq:APPomega}) one needs to evaluate the mapping (\ref{eq:APPzDef}) defining $z$. Following the lines of \cite{Boyanovski} we find that there are four different regimes described by \begin{eqnarray} z&=&\beta \qquad \qquad \qquad~~ \text{for} \qquad \qquad \frac{p^2}{gB}<\frac{1}{2} \, , \label{eq:APPB1}\\ z&=&i\beta+K(1/2) \qquad \text{for} \qquad~ \frac{1}{2}<\frac{p^2}{gB}<1 \, , \label{eq:APPB2} \\ z&=&\beta+i K(1/2) \qquad \text{for} \qquad~ 1<\frac{p^2}{gB}<\frac{3}{2} \, , \label{eq:APPB3} \\ z&=&i\beta \qquad \qquad \qquad ~ \text{for} \qquad \qquad \frac{p^2}{gB}>\frac{3}{2} \, , \label{eq:APPB4} \end{eqnarray} where $\beta \in [0,K(1/2)]$ for all regimes. The different regimes correspond to two stable and two unstable bands. The lowest band corresponds to the Nielsen-Olesen instability, whereas the third band corresponds to the parametric resonance instability. Outside these momentum regions all modes are stable within the approximation using (\ref{eq:APPapprox}). 
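The band structure (\ref{eq:APPB1})--(\ref{eq:APPB4}) can be summarized in a small helper function; the band labels are our own shorthand for the regimes described above, not notation from the text.

```python
def band(p2_over_gB):
    """Classify a longitudinal momentum mode by the four regimes
    (APPB1)-(APPB4) in terms of x = p^2 / (gB)."""
    x = p2_over_gB
    if x < 0.5:
        return "Nielsen-Olesen (unstable)"     # regime (APPB1)
    elif x < 1.0:
        return "stable"                        # regime (APPB2)
    elif x < 1.5:
        return "parametric resonance (unstable)"  # regime (APPB3)
    else:
        return "stable"                        # regime (APPB4)

labels = [band(x) for x in (0.1, 0.7, 1.2, 2.0)]
```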
The corresponding growth rates and oscillation frequencies for all bands are on display in Fig.~\ref{fig-anares}. The modes subject to the Nielsen-Olesen instability show no oscillatory behavior according to this approximation. In contrast, modes which are amplified by the parametric resonance instability follow the oscillation of the macroscopic field $\bar{A}(t)$. The stable band in between interpolates between the two oscillation frequencies, while at higher momenta $p^2\gg gB$ the free field limit $\omega(p)=p$ is approached. Concerning the growth rates one observes that the Nielsen-Olesen instability exhibits the dominant growth rate when compared to parametric resonance. When comparing the analytic results for the growth rate to lattice data, we find that the analytic prediction is somewhat larger and extends to higher momenta. As mentioned above, this is a consequence of the approximation (\ref{eq:APPapprox}). However, one can accurately describe the lattice data by a trivial rescaling of the magnetic field amplitude $B$ in the analytical result by a factor of $\sqrt{c/(1-c)}$ as suggested by (\ref{eq:APPavgB}) to reproduce the correct average amplitude $\overline{B}$. \subsection{Parametric resonance band} The above discussion provided implicit expressions for the growth rate as a function of momentum. To obtain explicit expressions one has to evaluate the variable $z$ introduced in (\ref{eq:APPzDef}) as a function of the momentum $p$. According to (\ref{eq:APPB1})-(\ref{eq:APPB4}) this has to be done separately for each band. In Sec.~\ref{sec:linear} we have already provided an explicit expression for the growth rate of the Nielsen-Olesen instability by an alternative approach. The same is desirable for the parametric resonance instability. 
To obtain such an expression we evaluate $z$ as a function of $p$, where for the parametric resonance band (\ref{eq:APPB3}) one has \begin{eqnarray} z=\beta+iK(1/2)\;, \end{eqnarray} with \begin{eqnarray} \beta(p)=\text{cn}^{-1}\left(\sqrt{2}~\sqrt{\frac{p^2}{gB}-1}~;~\frac{1}{2}\right)\;. \label{eq:APPbetaPR} \end{eqnarray} The last expression was obtained by expressing the Weierstrass function $\wp(z)$ in (\ref{eq:APPzDef}) in terms of Jacobi elliptic functions according to (\ref{eq:APPwpcn}). Though (\ref{eq:APPbetaPR}) along with (\ref{eq:APPgamma}) already provides an explicit expression for the growth rate, it proves insightful to apply further approximations. In a first step we expand the Weierstrass zeta function as \begin{eqnarray} \zeta(x)&=&\frac{\eta x}{\omega}+\frac{\pi}{2\omega}\cot\left(\frac{\pi x}{2\omega}\right)+\frac{2\pi}{\omega}\sum_{k=1}^{\infty}\frac{q^{2k}}{1-q^{2k}}\sin\left(\frac{k\pi x}{\omega}\right) \nonumber \\ \\ \eta&=&\zeta(\omega)=\frac{\pi^2}{12\omega}+\mathcal{O}(q^2) \end{eqnarray} where $q=e^{-\pi}$ and $\omega=K(1/2)$. Without further approximations the growth rate $\gamma(p)$ is then given by \begin{eqnarray} \gamma(p)&=&\sqrt{gB}~\Bigg|\text{Re}\left[\frac{\pi}{2\omega}\cot\left(\frac{\pi z}{2\omega}\right)\frac{}{}\right. \nonumber \\ &+&\left.\frac{2\pi}{\omega}\sum_{k=1}^{\infty}\frac{q^{2k}}{1-q^{2k}}\sin\left(\frac{k\pi z}{\omega}\right)\right]\Bigg| \end{eqnarray} The real part can then be expanded in a $q$-series by separately expanding the cotangent and the series of sine functions. As $q\simeq0.043$ is a reasonably small expansion parameter we will keep the leading terms only. 
For the cotangent the real part is given by \begin{eqnarray} \text{Re}\left[\cot\left(\frac{\pi}{2\omega}\beta+i\frac{\pi}{2}\right)\right]&=&\frac{\sin(\frac{\pi}{\omega}\beta)}{\cosh(\pi)-\cos(\frac{\pi}{\omega}\beta)} \nonumber \\ &=&2q\sin\left(\frac{\pi}{\omega}\beta\right)+\mathcal{O}(q^2)\;, \end{eqnarray} and similarly for the series of sine functions one finds \begin{eqnarray} 4\text{Re}\left[\sum_{k=1}^{\infty}\frac{q^{2k}}{1-q^{2k}}\sin\left(\frac{\pi}{\omega}k\beta+ik\pi\right)\right]&=& \nonumber \\ 2\sum_{k=1}^{\infty}\frac{q^{k}+q^{3k}}{1-q^{2k}}\sin\left(\frac{\pi}{\omega}k\beta\right)&=&\nonumber \\ 2q\sin\left(\frac{\pi}{\omega}\beta\right)+\mathcal{O}(q^2)\;. \end{eqnarray} As these expressions involve trigonometric functions of $\beta(p)$ a simple expression for the momentum dependent growth rate $\gamma(p)$ in the parametric resonance regime can be obtained by expanding the inverse of the Jacobi cosine in (\ref{eq:APPbetaPR}) in terms of inverse trigonometric functions according to \begin{eqnarray} \text{cn}^{-1}(x,m)=\frac{2\omega}{\pi}\cos^{-1}(x)+\mathcal{O}(m) \;. \end{eqnarray} By use of double angle formulas we obtain as the final result \begin{eqnarray} \gamma(p)\simeq \sqrt{gB}~e^{-\pi}~\frac{8\pi}{K(1/2)}~\sqrt{\frac{p^2}{gB}-1}~\sqrt{\frac{3}{2}-\frac{p^2}{gB}} \;\end{eqnarray} for the parametric resonance band where $1<p^2/(gB)<3/2$. The maximal growth rate $\gamma_0$ is realized at the momentum $p \simeq\sqrt{5/4} \sqrt{gB}$. Numerically one finds $\gamma_0\simeq0.146 \sqrt{gB}$ which is significantly smaller than the growth rates associated with the Nielsen-Olesen instability.
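As a numerical cross-check of the final formula, the following snippet evaluates $\gamma(p)$ in units $gB=1$ and reproduces the quoted maximum $\gamma_0 \simeq 0.146\,\sqrt{gB}$ at $p^2/gB = 5/4$. Note that SciPy's \texttt{ellipk} uses the parameter convention, so $K(1/2)$ here is \texttt{ellipk(0.5)}.

```python
import numpy as np
from scipy.special import ellipk

# Closed-form parametric-resonance growth rate (units gB = 1):
#   gamma(p) ~ e^{-pi} * 8*pi / K(1/2) * sqrt(p^2 - 1) * sqrt(3/2 - p^2)
# valid inside the band 1 < p^2 < 3/2.
K_half = ellipk(0.5)  # complete elliptic integral K(m=1/2) ~ 1.8541

def gamma(p2):
    """Growth rate as a function of p2 = p^2 / gB inside the band."""
    return (np.exp(-np.pi) * 8 * np.pi / K_half
            * np.sqrt(p2 - 1.0) * np.sqrt(1.5 - p2))

# The product sqrt((p2-1)(3/2-p2)) is maximal at p2 = 5/4
gamma0 = gamma(1.25)
```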
Ajuchitlán del Progreso is a municipality in Mexico. It lies in the state of Guerrero, in the southern part of the country, km southwest of the capital Mexico City. The number of inhabitants is . The area is square kilometers. The terrain in Ajuchitlán del Progreso is mountainous to the south, but hilly to the north. The following communities are located in Ajuchitlán del Progreso: Ajuchitlán del Progreso Corral Falso Changata San Jerónimo el Grande San Mateo Cantón de Guerrero El Aguaje Ayavitle Pocitos del Balcón Puerto del Coco La Comunidad La Hacienda Santa Fe Colonia Villa Hermosa Nanche Colorado Cuatro Cruces Gómez Farías Santa Rosa Primera Los Fresnos de Puerto Rico Ixcapuzalco San Jerónimo Santa Fe San Gabriel Pinzán Morado El Tepehuaje Chilacayote La Lajita El Carrizal San Sebastián Los Fabianes El Tule In addition, the following are found in Ajuchitlán del Progreso: Hills: Cerro Cabeza de Perro (a hill) Cerro Cristo Rey (a hill) Cerro Diego (a hill) Cerro El Alabado (a hill) Cerro El Aura (a hill) Cerro El Coyote (a hill) Cerro La Cebadilla (a hill) Cerro La Minilla (a hill) Cerro La Minita (a hill) Cerro La Zorra (a hill) Cerro Los Dos Cerros (a hill) Cerro Ojo de Agua (a hill) Cerro Real (a hill) Mountains: Cerro Arevalo (a mountain) Cerro Azul (a mountain) Cerro Azul (a mountain) Cerro Blanco (a mountain) Cerro Botija (a mountain) Cerro Corral Falso (a mountain) Cerro El Aguila (a mountain) Cerro El Balcón (a mountain) Cerro El Botellón (a mountain) Cerro El Calvario (a mountain) Cerro El Carrizo (a mountain) Cerro El Chiquihuite (a mountain) Cerro El Columpio (a mountain) Cerro El Conejo (a mountain) Cerro El Gachupin (a mountain) Cerro El Nanche (a mountain) Cerro El Otate (a mountain) Cerro El Puerto (a mountain) Cerro Estacado (a mountain) Cerro Grande (a mountain) Cerro La Bandera (a mountain) Cerro La Cacanicua (a mountain) Cerro La Gala (a mountain) Cerro La Presa (a mountain) Cerro La Vieja (a mountain) Cerro La Vinata (a mountain) Cerro Las Guacamayas (a mountain) Cerro Las Tinajitas (a mountain) Cerro Loma Verde (a mountain) Cerro Los Caballos 
(a mountain) Cerro Los Tres Picachos (a mountain) Cerro Mata de Bejuco (a mountain) Cerro Pedregones (a mountain) Cerro Piedra Parada (a mountain) Cerro San Antonio (a mountain) Cerro Tres Picachos (a mountain) Cerro Verde (a mountain) Cerro Viejo San Lorenzo (a mountain) Mountain passes: Puerto Frío (a mountain pass) Puerto La Parota (a mountain pass) Puerto Los Corrales (a mountain pass) A savanna climate prevails in the area. The average annual temperature in the area is  °C. The warmest month is April, when the average temperature is  °C, and the coldest is September, at  °C. The average annual precipitation is millimeters. The rainiest month is September, with an average of mm of precipitation, and the driest is March, with mm. Comments Sources Subdivisions of Guerrero
\section{Introduction} The notion of Terwilliger algebra was introduced by Paul Terwilliger in \cite{terwilliger}. Since its introduction, a number of works have been devoted to it. Other works discussing Terwilliger algebras can be found in \cite{fernadezmiklavic,HamidTerwilliger}; for recent papers, the reader can see \cite{balmacedareyes,hanaki}. The computations are done using \cite{sagemath}. The main objective of this paper is to obtain the Terwilliger algebras of some group association schemes for the groups $G_I$, $G_{II}$, $G_{III}$, $G_{IV}$. Let $X \in \lbrace \text{I}, \text{ II}, \text{ III}, \text{ IV} \rbrace$. It is known that a weight enumerator of Type $X$ codes is $G_X$-invariant. The reader who is interested mainly in the Terwilliger algebras can proceed directly to Section \ref{sec: Terwilliger algebra}. The section shows that \begin{align*} T(G_{I}) & \cong \mathcal{M}_1 \oplus \mathcal{M}_1 \oplus \mathcal{M}_2 \oplus \mathcal{M}_3 \oplus \mathcal{M}_7, \\ T(G_{II}) & \cong \mathcal{M}_4 \oplus \mathcal{M}_8 \oplus \mathcal{M}_{12} \oplus \mathcal{M}_{16} \oplus \mathcal{M}_{24} \oplus \mathcal{M}_{32}, \\ T(G_{III}) & \cong \mathcal{M}_2 \oplus \mathcal{M}_{10} \oplus \mathcal{M}_{16} \\ T(G_{IV}) & \cong \mathcal{M}_2 \oplus \mathcal{M}_2 \oplus \mathcal{M}_{6}. \end{align*} From the coding theory point of view, the invariant theory of finite groups connects number theory to coding theory. Continuing this point of view, we show that the ring of the weight enumerators of Type $X$ codes can be generated by Eisenstein polynomials (E-polynomials for short) associated to Type $X$ codes. In other words, we obtain \[ \mathfrak{R}^{G_{I}} = \mathbb{C}[\varphi_2, \varphi_8] \] \[ \mathfrak{R}^{G_{II}} = \mathbb{C}[\varphi_8, \varphi_{24}] \] \[ \mathfrak{R}^{G_{III}} = \mathbb{C}[\varphi_4, \varphi_{12}] \] \[ \mathfrak{R}^{G_{IV}} = \mathbb{C}[\varphi_2, \varphi_6] \] where $\mathfrak{R}^{G_X}$ denotes the invariant ring for $G_X$. 
Some results in Section \ref{sec:E-polynomials} are not new. For example, the E-polynomials related to $G_{\text{II}}$ were discussed in \cite{ourapoly}. \section{Preliminaries} We follow \cite{MiezakiOura} for the notations. Let \[ G_I = \left\langle \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \right\rangle, \] \[ G_{II} = \left\langle \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \right\rangle, \] \[ G_{III} = \left\langle \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 2 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & \exp(2 \pi i / 3) \end{pmatrix} \right\rangle, \] \[ G_{IV} = \left\langle \frac{1}{2} \begin{pmatrix} 1 & 3 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \right\rangle. \] The orders and the numbers of conjugacy classes of these groups can be seen in Table \ref{tab:order of G}. \begin{table} \centering \begin{tabular}{cccc} $G$ & $|G|$ & Conjugacy Classes \\ \hline $G_I$ & 16 & 7 \\ $G_{II}$ & 192 & 32 \\ $G_{III}$ & 48 & 14 \\ $G_{IV}$ & 12 & 6 \end{tabular} \caption{Order and conjugacy classes of some groups} \label{tab:order of G} \end{table} We now recall some terms from coding theory. Let $\mathbb{F}_q$ be the field of $q$ elements. A linear code $C$ of length $n$ is a linear subspace of $\mathbb{F}_q^n$. The number of nonzero components of a codeword $\mathbf{c} \in C$ is called the weight of $\mathbf{c}$ and denoted by $wt(\mathbf{c})$. The weight enumerator $w_C (x,y)$ of a code $C$ is defined by \[ w_C (x,y) := \sum_{\mathbf{c} \in C} x^{n- wt(\mathbf{c})} y^{wt(\mathbf{c})}. \] The inner product $(\mathbf{x}, \mathbf{y})$ of two elements $\mathbf{x}, \mathbf{y} \in \mathbb{F}_q^n$ is defined by \[ (\mathbf{x}, \mathbf{y}) := x_1 y_1 + \cdots + x_n y_n. 
\] The dual of $C$, denoted by $C^{\perp}$, is defined by \[ C^{\perp} := \lbrace \mathbf{y} \in \mathbb{F}_q^n \ \vert \ (\mathbf{x},\mathbf{y})=0, \forall \mathbf{x} \in C \rbrace. \] We say that $C$ is self-dual if $C=C^\perp$ holds. A self-dual code is said to be: \begin{enumerate} \item Type I if it is defined over $\mathbb{F}_2^n$ with all weights multiples of 2; \item Type II if it is defined over $\mathbb{F}_2^n$ with all weights multiples of 4; \item Type III if it is defined over $\mathbb{F}_3^n$ with all weights multiples of 3; and \item Type IV if it is defined over $\mathbb{F}_4^n$ with all weights multiples of 2. \end{enumerate} It should be noted that the weight enumerator $w_C(x,y)$ of a Type $X$ code lies in the invariant ring of $G_X$ for $X \in \lbrace \text{I}, \text{ II}, \text{ III}, \text{ IV} \rbrace$. Let $\mathfrak{R}^{G_X}$ be the invariant ring of $G_X$. That is, \[ \mathfrak{R}^{G_X} = \lbrace f \in \mathbb{C}[x,y] \ \vert \ f^g = f, \ \forall g \in G_X \rbrace. \] Here $f^g$ denotes the action of $g$ on $f$. In this paper, the dimension formula $\mathcal{I}(G_X)$ for $\mathfrak{R}^{G_X}$ is written as the formal power series \[ \mathcal{I}(G_X) = \sum_{k=0}^\infty \dim \mathfrak{R}^{G_X}_k t^k. \] The dimension formulas for $\mathfrak{R}^{G_X}$ with $G_X=G_I, G_{II},G_{III},G_{IV}$ are: \[ \mathcal{I}(G_{I}) = \frac{1}{(1-t^2)(1-t^8)}, \] \[ \mathcal{I}(G_{II}) = \frac{1}{(1-t^8)(1-t^{24})}, \] \[ \mathcal{I}(G_{III}) = \frac{1}{(1-t^4)(1-t^{12})}, \] \[ \mathcal{I}(G_{IV}) = \frac{1}{(1-t^2)(1-t^6)}. \] \section{E-polynomials} \label{sec:E-polynomials} This section discusses E-polynomials. Before proceeding, we recall the generators of the invariant rings mentioned above; we refer to \cite{MiezakiOura} for these generators. 
\begin{enumerate} \item $\mathfrak{R}^{G_I} = \mathbb{C}[f,g], f=x^2+y^2, g=x^2 y^2(x^2-y^2)^2$ \item $\mathfrak{R}^{G_{II}} = \mathbb{C}[f,g], f=x^8 + 14 x^4 y^4 + y^8, g=x^4 y^4(x^4-y^4)^4$ \item $\mathfrak{R}^{G_{III}} = \mathbb{C}[f,g], f=x^4+8xy^3, g=y^3(x^3-y^3)^3$ \item $\mathfrak{R}^{G_{IV}} = \mathbb{C}[f,g], f=x^2 + 3y^2, g=y^2(x^2-y^2)^2$ \end{enumerate} It will be shown below that $\mathfrak{R}^{G_X}$ can be generated by the E-polynomials related to Type $X$ codes. Let $\bar{x}$ denote the column vector with entries $x$ and $y$. An E-polynomial $\varphi_k$ of degree $k$ for the group $G_X$ with respect to Type $X$ codes is defined by \[ \varphi_k (\bar{x})=\varphi_k(x,y)= \frac{1}{|G|} \sum_{\sigma \in G} (\sigma_1 \bar{x})^k \] where $\sigma_1$ is the first row of $\sigma$. It is not difficult to show that the E-polynomials for $G_X$ belong to $\mathbb{C}[x,y]^{G_X}$. Let $\mathfrak{E}^{G_X}$ be the ring of E-polynomials for the group $G_{X}$. By obtaining the generators of $\mathfrak{E}^{G_X}$, the following theorem is obtained. \begin{thm} \label{thm:epoly generated} \[ \mathfrak{E}^{G_I} = \mathbb{C}[\varphi_2, \varphi_8] \] \[ \mathfrak{E}^{G_{II}} = \mathbb{C}[\varphi_8, \varphi_{24}] \] \[ \mathfrak{E}^{G_{III}} = \mathbb{C}[\varphi_4, \varphi_{12}] \] \[ \mathfrak{E}^{G_{IV}} = \mathbb{C}[\varphi_2, \varphi_{6}] \] \end{thm} \begin{proof} The proof is by direct computation, similar to \cite[Theorem 4.2]{Hamid}. \end{proof} Having found the generators of $\mathfrak{E}^{G_X}$, we examine whether they also generate the invariant ring $\mathfrak{R}^{G_X}$. The following theorem shows that the invariant rings of the groups $G_I, G_{II}, G_{III}, G_{IV}$ can indeed be generated by the E-polynomials for each group.
\begin{thm} \label{thm:invariant generator} The following hold: \[ \mathfrak{R}^{G_{I}} = \mathbb{C}[\varphi_2, \varphi_8] \] \[ \mathfrak{R}^{G_{II}} = \mathbb{C}[\varphi_8, \varphi_{24}] \] \[ \mathfrak{R}^{G_{III}} = \mathbb{C}[\varphi_4, \varphi_{12}] \] \[ \mathfrak{R}^{G_{IV}} = \mathbb{C}[\varphi_2, \varphi_6] \] \end{thm} \begin{proof} We give the proof for the $G_{III}$ case; the proofs of the $G_{I}, G_{II},$ and $G_{IV}$ cases are similar. The explicit forms of $\varphi_4$ and $\varphi_{12}$ are \begin{align*} \varphi_4 & = \frac{1}{3} \left(x^{4} + 8 x y^{3} \right), \\ \varphi_{12} & = \frac{1}{243} \left( 61 x^{12} + 440 x^{9} y^{3} + 14784 x^{6} y^{6} + 28160 x^{3} y^{9} + 1024 y^{12} \right). \end{align*} Then $f$ and $g$ for the $G_{III}$ case can be expressed as \begin{align*} f = & 3 \varphi_4, \\ g = & \frac{1}{1024} \left(1647 \varphi_4^3 - 243 \varphi_{12} \right). \end{align*} This completes the proof. \end{proof} Theorem \ref{thm:invariant generator} shows that the invariant rings of $G_I,G_{II}, G_{III}, G_{IV}$ can be generated by the E-polynomials related to them. For completeness, the explicit forms of the generators taken from the E-polynomials are given below (those for $G_{III}$ already appeared in the proof above). \begin{align*} G_{I} : & \ \varphi_2 = \frac{1}{2} \left( x^2 + y^2 \right), \\ &\ \varphi_8 = \frac{1}{32} \left( 9 x^{8} + 28 x^{6} y^{2} + 70 x^{4} y^{4} + 28 x^{2} y^{6} + 9 y^{8} \right).\\ G_{II} : & \ \varphi_8 = \frac{1}{24} \left( 5 x^{8} + 70 x^{4} y^{4} + 5 y^{8} \right), \\ & \ \varphi_{24}=\frac{1}{6144} ( 1025 x^{24} + 10626 x^{20} y^{4} + 735471 x^{16} y^{8} + 2704156 x^{12} y^{12} \\ & \ \ \ \ \ \ \ \ + 735471 x^{8} y^{16} + 10626 x^{4} y^{20} + 1025 y^{24} ) \\ \end{align*} \begin{align*} G_{IV} : & \ \varphi_2 = \frac{1}{2} \left( x^{2} + 3 y^{2}\right), \\ & \ \varphi_{6}=\frac{1}{32} ( 11 x^{6} + 45 x^{4} y^{2} + 405 x^{2} y^{4} + 243 y^{6}).
\\ \end{align*} \section{Terwilliger Algebra} \label{sec: Terwilliger algebra} Before investigating the Terwilliger algebra, we recall the definition of a group association scheme. \begin{df}\label{defGroupAssociation} Let $G$ be a finite group and $C_0,C_1,\ldots,C_d$ be the conjugacy classes of $G$, ordered so that $C_0=\lbrace 1 \rbrace$. Define the relations $R_i(i=0,1,\ldots,d)$ on $G$ by $$(x,y)\in R_i \Longleftrightarrow yx^{-1}\in C_i.$$ Then $\mathfrak{X}(G)=(G,\lbrace R_i \rbrace_{0\leq i \leq d})$ forms a commutative association scheme of class $d$ called the \textit{group association scheme of $G$}. \end{df} We associate with the relation $R_i$ the matrix $A_i$ defined as \begin{equation*} (A_i)_{x,y} = \begin{cases} 1 & \text{if} \left(x,y\right)\in R_i,\\ 0 & \text{otherwise}. \end{cases} \end{equation*} Then, \begin{equation*} A_i A_j = \sum_{k=0}^{d}{p_{ij}^k A_k} \end{equation*} and the matrices $A_0,\ldots, A_d$ generate a commutative algebra $\mathfrak{A}$ called the Bose-Mesner algebra. The intersection numbers $p_{ij}^k$ of the group association scheme $\mathfrak{X}$ are given by $$p_{ij}^k = \vert \lbrace \left(x,y\right) \in C_i \times C_j \ \vert \ xy=z\rbrace \vert$$ for any fixed $z\in C_k$. For each $i=0,\ldots,d,$ let $E_i^*$ be the diagonal matrix of size $\vert G \vert \times \vert G \vert$ defined as follows. \begin{equation*} \left( E_i^* \right)_{x,x} = \begin{cases} 1, & \text{if } x \in C_i\\ 0, & \text{if } x \notin C_i \end{cases} \qquad \left(x\in G \right). \end{equation*} Then $\mathfrak{A^*} = \langle E_0^*,\ldots,E_d^* \rangle$ is a commutative algebra called the dual Bose-Mesner algebra. The intersection numbers provide information about the structure of the Terwilliger algebra; the following relation is taken from \cite{terwilliger}. \begin{equation*} \begin{array}{c c c c} E_i^* A_j E_k^* = 0 & \text{iff} & p_{ij}^k=0 & (0\leq i,j,k \leq d) \end{array} \end{equation*} \begin{df} Let $G$ be a finite group.
The Terwilliger algebra $T(G)$ is the sub-algebra of $Mat_G (\mathbb{C})$ generated by $\mathfrak{A}$ and $\mathfrak{A}^*$. \end{df} The Terwilliger algebra is in general a noncommutative algebra. It is also semisimple, since it is closed under the conjugate-transpose map. The investigation of this algebra is then undertaken by obtaining its properties, such as its dimension, primitive central idempotents, and structure. From \cite{bannaimunemasa}, the bound on the dimension of $T(G)$ turns out to be \[ \vert \lbrace i,j,k \vert p_{ij}^k \neq 0 \rbrace \vert \leq \dim T \leq \sum_{i=0}^d \frac{|G|}{|C_i|}. \] The dimensions of $T(G_X)$ are as follows. \begin{thm} \label{thm:dimension of T} The dimensions of $T(G_I),T(G_{II}),T(G_{III}),$ and $T(G_{IV})$ are given by the following: \begin{center}\label{thdim} \begin{tabular}{l l l} 1. & $T(G_I)$ & 64\\ 2. & $T(G_{II})$ & 2808 \\ 3. & $T(G_{III})$ & 300 \\ 4. & $T(G_{IV})$ & 44 \\ \end{tabular} \end{center} \end{thm} \begin{proof} The dimension of each case is obtained by determining a basis for each algebra. A set $\mathcal{B}$ of linearly independent elements of $\lbrace E_i^* A_j E_k^*, E_i^* A_j E_k^* \cdot E_k^* A_l E_m^* \rbrace$ is found. The computation shows that $\mathcal{B}$ also generates the set $\lbrace E_i^* A_j E_k^* \cdot E_k^* A_l E_m^* \cdot E_m^* A_n E_p^* \rbrace$. This completes the proof. The details of the distribution of the basis elements over the conjugacy classes are given in Appendix \ref{sec:appendix2}. \end{proof} Theorem \ref{thm:dimension of T} shows that $T(G_{IV})$ satisfies the condition \[ \dim T = \sum_{i=0}^d \frac{|G_{IV}|}{|C_i|} \] where $C_i \ (i=0,\ldots,d)$ are the conjugacy classes of $G_{IV}$. Next, the primitive central idempotents need to be obtained. We denote by $Z(T)$ the center of $T$.
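The $G_{IV}$ entry of the dimension theorem can also be checked by machine. The Python sketch below (an independent check, not the computation of Appendix \ref{sec:appendix2}) builds $G_{IV}$ as the order-12 rational matrix group of Section 2, forms the $A_i$ and $E_i^*$ of its group association scheme, and computes the dimension of the algebra they generate; the rank is taken modulo the prime $10^9+7$, which for these integral matrices coincides with the rational rank here.

```python
# Sketch: dim T(G_IV) from first principles, using exact rational matrices.
from fractions import Fraction as F

P = 10**9 + 7

def mul2(p, q):  # 2x2 matrices stored as 4-tuples (a, b, c, d)
    (a, b, c, d), (e, f, g, h) = p, q
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

gens2 = [(F(1, 2), F(3, 2), F(1, 2), F(-1, 2)), (F(1), F(0), F(0), F(-1))]
G = set(gens2)
frontier = list(gens2)
while frontier:                         # closure: G_IV has 12 elements
    new = {mul2(a, g) for a in frontier for g in gens2} - G
    G |= new
    frontier = list(new)
G = sorted(G)
n = len(G)
inv = {g: h for g in G for h in G if mul2(g, h) == (1, 0, 0, 1)}

classes, seen = [], set()               # conjugacy classes of G_IV
for g in G:
    if g not in seen:
        cl = frozenset(mul2(mul2(h, g), inv[h]) for h in G)
        seen |= cl
        classes.append(cl)
which = {g: i for i, cl in enumerate(classes) for g in cl}

# (A_i)_{x,y} = 1 iff y x^{-1} in C_i ;  E_i^* = diagonal indicator of C_i
A = [[[1 if which[mul2(y, inv[x])] == i else 0 for y in G] for x in G]
     for i in range(len(classes))]
E = [[[1 if x == y and which[x] == i else 0 for y in G] for x in G]
     for i in range(len(classes))]

def mmul(X, Y):
    return [[sum(X[r][t] * Y[t][c] for t in range(n)) % P for c in range(n)]
            for r in range(n)]

basis = {}                              # pivot position -> normalised row
def absorb(mat):                        # Gaussian elimination mod P
    v = [x % P for row in mat for x in row]
    for piv in sorted(basis):
        if v[piv]:
            coef, row = v[piv], basis[piv]
            v = [(x - coef * y) % P for x, y in zip(v, row)]
    for piv, x in enumerate(v):
        if x:
            c = pow(x, P - 2, P)
            basis[piv] = [(c * y) % P for y in v]
            return True
    return False

algebra_gens = A + E
pending = list(algebra_gens)
while pending:                          # close the span under multiplication
    m = pending.pop()
    if absorb(m):
        pending.extend(mmul(m, g) for g in algebra_gens)
        pending.extend(mmul(g, m) for g in algebra_gens)
dim_T = len(basis)
```

The computed value $44$ coincides with $\sum_i |G_{IV}|/|C_i| = 12+12+6+6+4+4$, i.e., $T(G_{IV})$ attains the upper bound, in agreement with the remark following the theorem.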
From \cite{balmaceda}, the elements of $Z(T)$ are block diagonal matrices. Hence, it can be written as follows. \[ Z(T) \subseteq \oplus_{i=0}^d Z(E_i^* T E_i^*). \] Thus, to obtain the center of $T$, it is sufficient to consider the basis elements which are related to the $(C_i,C_i)$ positions. \begin{lem} The dimensions of the centers of $T(G_{I}), T(G_{II}), T(G_{III})$ and $T(G_{IV})$ are the following: \begin{enumerate} \item $\dim Z(T(G_I)) = 5$. \item $\dim Z(T(G_{II})) = 6$. \item $\dim Z(T(G_{III})) = 3$. \item $\dim Z(T(G_{IV})) = 3$. \end{enumerate} \end{lem} \begin{proof} The result is obtained by determining a basis of the center, that is, by computing the dimension of the solution space of the linear system $\lbrace x_i y = y x_i \rbrace$, where $y=\sum {c_j b_j}$ and $b_j$, $x_i$ run over a basis of $T$. \end{proof} Next, the primitive central idempotents are obtained, that is, a set $\lbrace \varepsilon_i \ | \ 1 \leq i \leq s \rbrace$ which satisfies $\varepsilon_i^2 = \varepsilon_i \neq \mathbf{0}$, $\varepsilon_i \varepsilon_j = \delta_{ij}\varepsilon_i$, $\sum_{i=1}^s \varepsilon_i = 1$, and $\varepsilon_i \in Z(T)$; from these, the degrees of the irreducible complex representations they afford are determined. The idempotents are obtained using the method described in \cite{balmaceda}. Let $e_1, e_2, \ldots, e_s$ be a basis for $Z(T(G))$. Then \[ e_i e_j = \sum_k r_{ij}^k e_k. \] Define matrices $B_i$ by \[ B_i := (r_{ij}^k)_{j,k}, \quad 1 \leq i \leq s. \] As the matrices $B_i$ mutually commute, they can be simultaneously diagonalised by a nonsingular matrix. Thus, there is a matrix $P$ such that \begin{equation} \label{eq:diagonal} P^{-1} B_i P \end{equation} is a diagonal matrix for $i=1, \ldots, s$. Let $v_1(i), \ldots, v_s(i)$ be the diagonal entries of (\ref{eq:diagonal}). Define a matrix $M$ by \[ M_{ij} := v_i(j). \] Then the primitive central idempotents $\varepsilon_1, \ldots, \varepsilon_s$ of $T(G)$ can be obtained by \[ (\varepsilon_1, \ldots, \varepsilon_s) = (e_1, \ldots, e_s) M^{-1}.
\] Using the primitive central idempotents, the following result is obtained. \begin{thm}\label{thpmt} The degrees of the irreducible complex representations afforded by each idempotent are given below. \begin{tabular}{c l c c c c c c c c c} (1)&$T(G_I)$ & $\varepsilon_i$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & $\varepsilon_4$ & $\varepsilon_5$ & & &\\ & & \text{deg}$\varepsilon_i$ & 1 & 1 & 2 & 3 & 7 & & & \\ (2)&$T(G_{II})$ & $\varepsilon_i$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & $\varepsilon_4$ & $\varepsilon_5$ & $\varepsilon_6$ & & \\ & & \text{deg}$\varepsilon_i$ & 4 & 8 & 12 & 16 & 24 & 32 & & \\ (3)&$T(G_{III})$ & $\varepsilon_i$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & & & & &\\ & & \text{deg}$\varepsilon_i$ & 2 & 10 & 16 & & & & & \\ (4)&$T(G_{IV})$ & $\varepsilon_i$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & & & & &\\ & & \text{deg}$\varepsilon_i$ & 2 & 2 & 6 & & & & & \\ \end{tabular} \end{thm} \begin{proof} To determine the degree $d_i$ afforded by each $\varepsilon_i$, the fact that $T \varepsilon_i \cong \mathcal{M}_{d_i}(\mathbb{C})$ is used. Thus $d_i^2 = \dim T\varepsilon_i$ equals the dimension of the span of the set $\lbrace x_j \varepsilon_i\rbrace$, where the $x_j$ are the basis elements of $T$. \end{proof} The degrees of the primitive idempotents enable us to obtain the following structure theorem, in which $\mathcal{M}_i$ denotes the full matrix algebra over $\mathbb{C}$ of degree $i$. \begin{cor}[Structure Theorem for $T(G_X)$ ] \begin{enumerate} \item $T(G_{I}) \cong \mathcal{M}_1 \oplus \mathcal{M}_1 \oplus \mathcal{M}_2 \oplus \mathcal{M}_3 \oplus \mathcal{M}_7 $. \item $T(G_{II}) \cong \mathcal{M}_4 \oplus \mathcal{M}_8 \oplus \mathcal{M}_{12} \oplus \mathcal{M}_{16} \oplus \mathcal{M}_{24} \oplus \mathcal{M}_{32} $. \item $T(G_{III}) \cong \mathcal{M}_2 \oplus \mathcal{M}_{10} \oplus \mathcal{M}_{16} $. \item $T(G_{IV}) \cong \mathcal{M}_2 \oplus \mathcal{M}_2 \oplus \mathcal{M}_{6}$.
\end{enumerate} \end{cor}
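As a closing remark, the $G_{IV}$ instances of the statements in Sections 2 and 3 are easy to verify by machine, since $G_{IV}$ has rational matrix entries and everything can be done in exact arithmetic. The sketch below (an illustration, not part of the proofs) recovers $|G_{IV}|=12$ with six conjugacy classes, checks $\mathcal{I}(G_{IV})$ against Molien's formula $\frac{1}{|G|}\sum_{\sigma\in G}\det(1-t\sigma)^{-1}$, and confirms $f = 2\varphi_2$ and $g = \frac{44}{27}\varphi_2^3-\frac{16}{27}\varphi_6$; the last two coefficients are found here by matching and are not quoted in the text.

```python
# Exact verification of the G_IV statements: group order and classes,
# Molien series vs. 1/((1-t^2)(1-t^6)), and f, g from phi_2, phi_6.
from fractions import Fraction as F
from math import comb

def mul(p, q):  # 2x2 matrices stored as 4-tuples (a, b, c, d)
    (a, b, c, d), (e, f, g, h) = p, q
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

gens = [(F(1, 2), F(3, 2), F(1, 2), F(-1, 2)), (F(1), F(0), F(0), F(-1))]
G = set(gens)
frontier = list(gens)
while frontier:
    new = {mul(a, g) for a in frontier for g in gens} - G
    G |= new
    frontier = list(new)

inv = {g: h for g in G for h in G if mul(g, h) == (1, 0, 0, 1)}
classes, seen = [], set()
for g in sorted(G):
    if g not in seen:
        cl = frozenset(mul(mul(h, g), inv[h]) for h in G)
        seen |= cl
        classes.append(cl)

# Molien series: 1/det(1 - t sigma) = 1/(1 - tr t + det t^2) is expanded
# via the recurrence c_k = tr*c_{k-1} - det*c_{k-2}, then averaged over G.
N = 16
molien = [F(0)] * (N + 1)
for (a, b, c, d) in G:
    tr, det = a + d, a*d - b*c
    coeffs = [F(1), tr]
    for k in range(2, N + 1):
        coeffs.append(tr * coeffs[-1] - det * coeffs[-2])
    molien = [m + x for m, x in zip(molien, coeffs)]
molien = [m / len(G) for m in molien]
dims = [sum(1 for i in range(N + 1) for j in range(N + 1) if 2*i + 6*j == k)
        for k in range(N + 1)]          # coefficients of 1/((1-t^2)(1-t^6))

def phi(k):  # E-polynomial as coefficient list [x^k, x^(k-1)y, ..., y^k]
    return [F(comb(k, j)) * sum(a**(k - j) * b**j for (a, b, _, _) in G) / len(G)
            for j in range(k + 1)]

phi2, phi6 = phi(2), phi(6)

def pmul(u, v):  # product of coefficient lists
    out = [F(0)] * (len(u) + len(v) - 1)
    for i, x in enumerate(u):
        for j, y in enumerate(v):
            out[i + j] += x * y
    return out

f_poly = [2 * c for c in phi2]          # expect f = x^2 + 3 y^2
g_poly = [F(44, 27) * c3 - F(16, 27) * c6
          for c3, c6 in zip(pmul(pmul(phi2, phi2), phi2), phi6)]
```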
\section{Introduction} Low scale unification of gauge and gravitational interactions~\cite{Antoniadis:1990ew,Arkani-Hamed:1998rs,Antoniadis:1998ig} appears to be a promising framework for solving the hierarchy problem. In this context, the weakness of the gravitational force at long distances is attributed to the existence of extra dimensions at the Fermi scale. A realization of this scenario can occur in type I string theory~\cite{Lykken:1996fj} where gauge interactions are mediated by open strings with their ends attached on some D-brane stack, while gravity is mediated by closed strings that propagate in the whole 10 dimensional space. In the context of Type I string theory using appropriate collections of parallel \cite{Polchinski:1996na,Angelantonj:2002ct} or intersecting \cite{Berkooz:1996km,Balasubramanian:1996uc} D-branes, there has been considerable work in trying to derive the Standard Model theory or its Grand Unified extensions \cite{Antoniadis:2002en, Antoniadis:2002qm, Antoniadis:2002cs, Gioutsos:2005uw, Aldazabal:2000cn, Cvetic:2001tj, Leontaris:2001hh, Kokorelis, Blumenhagen:2003jy, Blumenhagen:2005mu, Antoniadis:2004dt}. Some of these low energy models revealed rather interesting features: (i) the correct value of the weak mixing angle is obtained for a string scale of the order of a few TeV; (ii) baryon and lepton numbers are conserved due to the existence of exact global symmetries which are remnants of additional anomalous $U(1)$ factors broken by the Green-Schwarz mechanism; (iii) supersymmetry is not necessary to solve the hierarchy problem. However, its rivals, supersymmetric Grand Unified theories (where the unification of gauge couplings occurs at the order of $10^{16}$ GeV), and their heterotic string realizations (with even higher unification scale), also exhibit a number of additional interesting features.
Apart from natural gauge coupling unification, these features include fermion mass relations~\cite{Chanowitz:1977ye, Greene:1986jb} and in particular bottom-tau unification, i.e.\ the equality of the corresponding Yukawa couplings at the unification scale, which reproduces the correct mass relation at low energies. Full gauge coupling unification does not occur in low string scale models; however, this should not be considered a drawback, since the various gauge group factors are associated with different stacks of branes and therefore gauge couplings may differ at the string scale. In standard-like models in particular, there should be at least three different stacks of branes accommodating the $SU(3)$, $SU(2)$ and $U(1)$ gauge groups respectively. Following a bottom-up approach~\cite{Gioutsos:2005uw}, in this talk we examine the possible brane configurations that can accommodate the Standard Model and the associated hypercharge embeddings, and we analyze the consequences of (partial) gauge coupling unification in conjunction with bottom-tau Yukawa coupling equality. We shall restrict to non-super\-symmetric configurations (for some recent results on supersymmetric and split supersymmetric models see \cite{Blumenhagen:2003jy, Antoniadis:2004dt} and references therein); however, we will consider models with two Higgs doublets so that the bottom and top quark masses will be related to different vacuum expectation values while the tau lepton and the bottom quark will receive masses from the same Higgs doublet. We find that in a class of models that can be realized in the context of type I string theory with large extra dimensions, the experimentally measured low energy masses can be reproduced assuming equality of bottom-tau Yukawa couplings and a string scale as low as $\sim 10^3$ TeV. In the next section we briefly describe the general set up of brane models and derive the hypercharge formulae for an arbitrary number of $U(1)$ factors.
In section 3 we identify two brane configurations that admit only one Higgs doublet coupled to the down quarks and leptons: the first is a four brane-stack configuration with two $U(1)$ branes, while the second is a five brane-stack system with three $U(1)$ branes. Section 4 deals with the calculational details and renormalization analysis of gauge couplings, while in section 5 the results for $b-\tau$ Yukawa couplings are presented. Our conclusions are drawn in section 6. \section{Hypercharge embedding in generic Standard model like brane configurations} We consider models which arise in the context of various D-brane configurations \cite{Antoniadis:2002en, Antoniadis:2002qm}. A single D-brane carries a $U(1)$ gauge symmetry which is the result of the reduction of the ten-dimensional Yang-Mills theory. Therefore, a stack of $n$ parallel D-branes gives rise to a $U(n)$ gauge theory where the gauge bosons correspond to open strings having both their ends attached to some of the branes of the various stacks. The minimal number of brane sets required to provide the Standard Model structure is three: a 3-brane ``color" stack with gauge symmetry ${U(3)}_C\sim{SU(3)}_C\times{U(1)}$, a 2-brane ``weak" stack which gives rise to ${U(2)}_L\sim{SU(2)}_L\times{U(1)}$ gauge symmetry and an abelian $U(1)$ brane for hypercharge. However, accommodation of all SM particles as open strings between different brane sets requires at least one $U(1)$ brane to be added to the above configuration~\cite{Antoniadis:2002en,Antoniadis:2002cs}. Additional abelian branes may be present too. In more complicated scenarios the weak or color stacks can be repeated leading to an effective ``higher level embedding" of the Standard Model. The full gauge group will be of the form% \begin{eqnarray} G={U(m)}_C^p\times{U(n)}_L^q\times{U(1)}^N \label{ggg} \end{eqnarray}% with $m\ge 3$ and $n\ge 2$ and $p,q\ge1$. 
Since $U(n)\sim {SU(n)}\times{U(1)}$ and so on, we infer that brane constructions automatically give rise to models with $SU(n)$ gauge group structure and several $U(1)$ factors. A generic feature of this type of string vacua is that several abelian gauge factors are anomalous. However, at least one $U(1)$ combination remains anomaly free. This is the hypercharge that can be in general written as% \begin{eqnarray} Y=\sum_{i=1}^p k_3^{(i)} Q_3^i + \sum_{j=1}^q k_2^{(j)} Q_2^j+ \sum_{\ell=1}^N k'_\ell \, Q'_\ell, \label{ydef} \end{eqnarray}% where $Q_3^i$ are the $U(1)$ generators of the color factor $i$, $Q_2^j$ are the ${U(1)}$ generators of the weak factor $j$ and $Q'_\ell\,(\ell=1,\dots,N)$ are the generators of the remaining Abelian factors. The simplest case which leads directly to the SM theory is the choice $p=q=1$. Constructions of this type have already been proposed in reference~\cite{Antoniadis:2002en}. An immediate consequence of (\ref{ggg}) and (\ref{ydef}) is that the hypercharge coupling ($g_Y$) at the string/brane scale $(M_S)$ is related to the brane couplings ($g_m, g_n, g_i'$) as% \begin{eqnarray} \frac{1}{g_Y^2}=\frac{2 m k_3^2}{g_m^2}+\frac{2 n k_2^2}{g_n^2}+2\sum_{i=1}^N \frac{{k'_i}^2}{ {g'_i}^2}\label{gydef} \end{eqnarray}% where we have used the traditional normalization ${\rm Tr}\, T^a\, T^b= \delta^{ab}/2$, $a,b=1, \dots, n^2$, for the $U(n)$ generators and assumed that the vector representation (${\bf n}$) has abelian charge $+1$, and thus the $U(1)$ coupling becomes ${g_n}/{\sqrt{2 n}}$ where $g_n$ is the $SU(n)$ coupling. Choosing further $m=3, n=2$ in (\ref{gydef}) we obtain directly the non-abelian structure of the SM with several $U(1)$ factors, therefore the hypercharge gauge coupling condition reads% \begin{eqnarray} k_Y&\equiv&\frac{\alpha_2}{\alpha_Y} \;=\;{6 k_3^2}\,\frac{\alpha_2}{\alpha_3}+4 k_2^2+2\sum_{i=1}^N {k_i'}^2\,\frac{\alpha_2}{\alpha_i'}\label{kY} \end{eqnarray}% where $\alpha_i \equiv g_i^2/(4\pi)$.
Given a relation between the $\alpha_i'$ and $\alpha_2$ (or $\alpha_3$) and a hypercharge embedding (i.e., known $k_i'$), equation (\ref{kY}), in conjunction with the $\alpha_3$ evolution equation, determines the string scale $M_S$. In the remainder of this section, we will derive all possible sets of $k_i$'s compatible with brane configurations which embed the SM particles and imply an economical Higgs spectrum. \begin{figure}[!b] \centering ($N=2$) \includegraphics[width=0.28\textwidth]{conf1a.eps}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ($N=3$) \includegraphics[width=0.25\textwidth]{conf2a.eps} \caption{\label{fconf} Possible $N=2,3$ brane configurations ($N$ is the number of the $U(1)$-branes) that can accommodate the SM spectrum with down quarks and leptons acquiring masses from the same Higgs doublet.} \end{figure} \section{Concrete brane configurations\label{cbc}} We consider here brane configurations that can lead to concrete realizations of those proposed previously. As already mentioned, a specific realization must include two Higgs doublets in order to ensure the bottom-top mass difference. In brane models each SM particle corresponds to an open string stretched between two branes. In our charge conventions, the possible quantum numbers of such a string ending on the $U(m)$ and $U(n)$ brane sets are $\left({\bf m};+1,{\bf n};+1\right),\left({\bf \bar{m}};-1,{\bf n};+1\right)$, $\left({\bf m};+1,{\bf \bar{n}};-1\right)$, $\left({\bf \bar{m}};-1,{\bf \bar{n}}; -1 \right)$, that is, bifundamentals of the associated unitary groups. Higher representations could be obtained by considering strings with both ends on the same brane set $U(m)$, $\left({\bf m(m-1)/2}, 2 \right)$, $\left({\bf m(m+1)/2}, 2 \right)$, $\dots$, however, we will restrict here to the bi-fundamental case.
By analyzing possible brane configurations that can accommodate a gauge group of the form (\ref{ggg}) we find that only the four and five brane-stack scenarios ($N=2,3$) in (\ref{ggg}) can lead to natural $b$-$\tau$ unification. It is possible to introduce additional brane sets, however, in such a case down quarks and leptons get their masses from different Higgs doublets and any Yukawa coupling unification condition would require the equality of the associated doublet vevs. The $N=2,3$ two Higgs doublet candidate configurations are presented pictorially in figure \ref{fconf}. The associated hypercharge embeddings can be obtained by solving the hypercharge assignment conditions for SM particles for $k_i$. SM particle abelian charges under $U_3(1) \times U_2(1) \times {U(1)}_1 \times {U(1)}_2$ are of the general form $Q(+1,\epsilon_1,0,0)$, $d^c(-1,0,\epsilon_2,0)$, $u^c(-1,0,0,\epsilon_3)$, $L(0,\epsilon_4,0,\epsilon_5)$, $e^c(0,0,\epsilon_6,\epsilon_7)$ and thus% \begin{eqnarray} k_3+k_2\, \e1 &=& \hphantom{+}\frac{1}{6}\nonumber\\ -k_3+k_1'\, \e2 &=& \hphantom{+}\frac{1}{3}\nonumber\\ -k_3+k\, \e3 &=& -\frac{2}{3}\label{6e}\\ k_2\, \e4+k_2'\, \e5 &=& -\frac{1}{2}\nonumber\\ k_1'\,\e6 +k_2'\, \e7 &=& \hphantom{+}1\nonumber \end{eqnarray}% where $k=k_2'$ in the first configuration and $k=k_3'$ for the second one, while $\e{i}^2=1,\,i=1,\dots,7$. As seen by (\ref{ydef}) and (\ref{gydef}), only the absolute values of the hypercharge embedding coefficients $k_i,\,k_i'$ enter the coupling relation at $M_S$. Solving (\ref{6e}) for the SM particle charges in configuration ($N=2$), we obtain three possible solutions. These correspond to the (absolute) values for the coefficients presented in cases (a), (b) and (c) of table \ref{ytab1}. Configuration $N=3$ leads to four additional cases, namely (d), (e), (f) and (g) of the same table.
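As a quick machine check (a sketch; the derivation in the text proceeds by solving the system directly), one can confirm that every row of table \ref{ytab1} satisfies the conditions (\ref{6e}) for a suitable choice of the signs $\epsilon_i=\pm 1$ and of the signs of the coefficients themselves, which is all that matters since only absolute values enter at $M_S$:

```python
# Verify that each |k| row of Table 1 admits a sign assignment solving (6e).
from fractions import Fraction as F
from itertools import product

rows = {   # |k_3|, |k_2|, |k_1'|, |k_2'| (and |k_3'| for N = 3)
    'a': (F(1, 6), F(0), F(1, 2), F(1, 2)),
    'b': (F(2, 3), F(1, 2), F(1), F(0)),
    'c': (F(1, 3), F(1, 2), F(0), F(1)),
    'd': (F(1, 6), F(0), F(1, 2), F(1, 2), F(1, 2)),
    'e': (F(1, 3), F(1, 2), F(0), F(1), F(1)),
    'f': (F(5, 6), F(1), F(1, 2), F(1, 2), F(3, 2)),
    'g': (F(2, 3), F(1, 2), F(1), F(0), F(0)),
}

def solves_6e(row):
    n2 = (len(row) == 4)                     # N = 2 configuration
    for s in product((1, -1), repeat=len(row)):
        k3, k2, k1p, k2p = (si * a for si, a in zip(s, row[:4]))
        k = k2p if n2 else s[4] * row[4]     # k = k_2' (N=2) or k_3' (N=3)
        for e in product((1, -1), repeat=7):
            if (k3 + k2*e[0] == F(1, 6)
                    and -k3 + k1p*e[1] == F(1, 3)
                    and -k3 + k*e[2] == F(-2, 3)
                    and k2*e[3] + k2p*e[4] == F(-1, 2)
                    and k1p*e[5] + k2p*e[6] == 1):
                return True
    return False
```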
If in a particular solution a coefficient $k_i$ (or $k_i'$) turns out to be zero, the associated abelian factor does not participate to the hypercharge. \begin{table}[!t] \caption{\label{ytab1}Absolute values of the possible hypercharge embedding coefficient sets ($k_3, k_2$ and $k_i'$) for the brane configurations with $N=2$ and $N=3$ of figure \ref{fconf}.} \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{clccccc} \br $N$&&$|k_3|$&$|k_2|$&$|k_1'|$&$|k_2'|$&$|k_3'|$\\ \mr &(a)&$\frac{1}{6}$&$0$&$\frac{1}{2}$&$\frac{1}{2}$&-\\ $2$&(b)&$\frac{2}{3}$&$\frac{1}{2}$&$1$&$0$&-\\ &(c)&$\frac{1}{3}$&$\frac{1}{2}$&$0$&$1$&-\\ \mr &(d)&$\frac{1}{6}$&$0$&$\frac{1}{2}$&$\frac{1}{2}$&$\frac{1}{2}$\\ &(e)&$\frac{1}{3}$&$\frac{1}{2}$&$0$&$1$&$1$\\ $3$&(f)&$\frac{5}{6}$&$1$&$\frac{1}{2}$&$\frac{1}{2}$&$\frac{3}{2}$\\ &(g)&$\frac{2}{3}$&$\frac{1}{2}$&$1$&$0$&0\\ \br \end{tabular} \end{table} \section{Gauge coupling running and the String scale} Following a bottom-up approach, in this section we determine the range of the string scale for all the above models by taking into account the experimental values of $\alpha_3, \alpha_e$ and $\sin^2\theta_W$ at $M_Z$ \cite{Eidelman} \begin{eqnarray} \alpha_3 = 0.118\pm 0.003,~~~\alpha^{-1}_{e}=127.906,~~~ \sin^2\theta_W=0.23120\nonumber \end{eqnarray} For the scales above $M_Z$ we consider the standard model spectrum with two Higgs doublets. The one loop RGEs for the gauge couplings ($\tilde{\alpha}\equiv \alpha/(4\pi)$) take the form \begin{eqnarray} \frac{d \tilde{\alpha}_i}{dt} = b_i \tilde{\alpha}_i^2\,,~~~~ i=Y,2,3\label{rge} \end{eqnarray} where $(b_Y, b_2, b_3)= (7, -3, -7)$ and $t=2\ln\mu$ ($\mu$ is the renormalization point). \begin{table}[!t] \caption{\label{ytab2} Possible values of $k_Y$ as a function of $\xi=\alpha_2 / \alpha_3$ for various orientations of $U(1)$'s for the models of table \ref{ytab1}. The rows show the $k_Y$ values for various orientations (see text for details). 
Last row shows the minimum value of the string scale $M_S$ obtained for the models (a)-(g).}% \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{ccccccc} \br &\multicolumn{6}{c}{Model}\\ \mr% \parbox{1.5cm}{\small coupling\\[-4pt]relation}&(a)&(b)&(c)&(d)&(e)&(g)\\ \mr% $\alpha_i'=\alpha_2$&$\frac{\xi}{6}+1$&$\frac{8\xi}{3}+3$ & $\frac{2\xi}{3}+3$ & $\frac{\xi}{6}+\frac{3}{2}$ & $\frac{2\xi}{3}+5$ & $\frac{8\xi}{3}+3$ \\ $\alpha_i'=\alpha_3$&$\frac{7\xi}{6}$ & $\frac{14\xi}{3}+1$ & $\frac{8\xi}{3}+1$ & $\frac{2\xi}{3}+1$ & $\frac{14\xi}{3}+1$ & $\frac{14\xi}{3}+1$ \\ (see text)&$\frac{2\xi}{3} +\frac{1}{2}$ & - & - & $\frac{5\xi}{3}$ & $\frac{8\xi}{3}+3$ & - \\ (see text)&- & - & - & $\frac{7\xi}{6}+\frac{1}{2}$ & - & - \\ \mr $M_S $(GeV)&$6.71\times 10^{17}$&$5.78\times 10^3$&$1.99\times 10^6$&$1.65\times 10^{14}$ &$5.78\times 10^3$&$5.78\times 10^3$\\ \br \end{tabular} \end{table} First, we concentrate on simple relations of the gauge couplings, i.e., those implied by models arising only in the context of non-intersecting branes. In these cases, certain constraints on the initial values of the gauge couplings have to be taken into account, leading to a discrete number of admissible cases which we are going to discuss. Thus, in the case of two $D5$ branes, $U(3)$ and $U(2)$ are confined in different bulk directions. In the parallel brane scenario the orientation of a number of the extra $U(1)$'s may coincide with the $U(3)$-stack direction while the remaining abelian branes are parallel to the $U(2)$ stack. This implies that the corresponding $U(1)$ gauge couplings have the same initial values either with the $\alpha_3$ or with the $\alpha_2$ gauge couplings.
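The $k_Y$ entries of table \ref{ytab2} follow from (\ref{kY}) by assigning each $U(1)$ brane either to the colour direction (coupling $\alpha_3$) or to the weak direction (coupling $\alpha_2$), so that $k_Y=\lambda\,\xi+\nu$ with $\xi=\alpha_2/\alpha_3$. The short enumeration below (a sketch) reproduces every $(\lambda,\nu)$ pair of the table, including the mixed orientations marked ``(see text)'':

```python
# Reproduce the (lambda, nu) pairs of Table 2 from eq. (kY) and Table 1.
from fractions import Fraction as F
from itertools import product

# |k_3|, |k_2|, (|k_i'|, ...) as in Table 1
models = {
    'a': (F(1, 6), F(0), (F(1, 2), F(1, 2))),
    'b': (F(2, 3), F(1, 2), (F(1), F(0))),
    'c': (F(1, 3), F(1, 2), (F(0), F(1))),
    'd': (F(1, 6), F(0), (F(1, 2), F(1, 2), F(1, 2))),
    'e': (F(1, 3), F(1, 2), (F(0), F(1), F(1))),
    'g': (F(2, 3), F(1, 2), (F(1), F(0), F(0))),
}

def ky_forms(k3, k2, kps):
    """All (lambda, nu) with k_Y = lambda*xi + nu, over the 2^N alignments."""
    forms = set()
    for align in product((3, 2), repeat=len(kps)):  # each U(1): colour(3) / weak(2)
        lam = 6*k3**2 + 2*sum(k*k for k, a in zip(kps, align) if a == 3)
        nu = 4*k2**2 + 2*sum(k*k for k, a in zip(kps, align) if a == 2)
        forms.add((lam, nu))
    return forms
```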
If we define $\xi=\frac{\alpha_2}{\alpha_3}$ as the ratio of the two non-abelian gauge couplings at the string scale, then for any distinct case $k_Y$ takes the form $k_Y= \lambda \,\xi +\nu$, where $\lambda,\nu$ are calculable coefficients which depend on the specific orientation of the $U(1)$ branes. For example, in model (a) we can have the following possibilities: $\alpha_1'=\alpha_2'=\alpha_2$, $\alpha_1'=\alpha_2'=\alpha_3$ and $\alpha_1'=\alpha_2, \alpha_2'=\alpha_3$ leading to $k_Y=\frac{\xi}{6}+1$, $\frac{7\xi}{6}$ and $\frac{2\xi}{3}+\frac{1}{2}$ respectively. All cases for the models (a)-(g) are presented in table~\ref{ytab2} and are classified with regard to the hypercharge coefficient $k_Y$. (All cases of Model (f) lead to unacceptably small string scales, so these are not presented). Allowing ${\alpha_3}$ to take values different from ${\alpha_2}$, we find that models (a,b,c,d,e,g) of table \ref{ytab1} predict a string scale in a wide range, from a few TeV up to the Planck mass. The highest value is of the order $M_S\sim 7\times 10^{17}$ GeV and corresponds to equal couplings $\frac{\alpha_2}{\alpha_3}\equiv \xi =1$ at $M_S$. On the other hand, lower unification values of the order of a few TeV assume a gauge coupling ratio $\frac{\alpha_3}{\alpha_2}\approx 2$. In this case the idea of complete gauge coupling unification could still be valid, considering that the SM gauge group arises from the breaking of a gauge symmetry whose non-abelian part is $U(3)\times U(2)^2$, i.e., for the case $p=1,\,q=2$ of (\ref{ggg}) where the factor of 2 in the gauge coupling ratio is related to the diagonal breaking $U(2)\times U(2)\ra U(2)$. The lowest possible unification for the three models (b),(e),(g) corresponds to $k_Y=\frac{14\,\xi}{3}+1$, and is $M_S\sim 5.81\times 10^{3}$ GeV, for a weak to strong gauge coupling ratio $\xi\sim 0.42$ at $M_S$. Case (c) predicts an intermediate value $M_S =2 \times 10^6$ GeV while model (d) gives $M_S\sim 10^{14}$ GeV.
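These scales are easy to reproduce at one loop. In the sketch below (an independent estimate, not the authors' code; it omits threshold and higher-loop effects, so it matches the table values only to within $\sim 15\%$), the condition $k_Y=\lambda\xi+\nu$, i.e.\ $1/\alpha_Y=\lambda/\alpha_3+\nu/\alpha_2$ at $M_S$, is solved as a linear equation in $\ln(M_S/M_Z)$ using the $\beta$-coefficients $(b_Y,b_2,b_3)=(7,-3,-7)$ of (\ref{rge}) and the $M_Z$ inputs quoted above; the value of $M_Z$ itself is an assumed input.

```python
# One-loop estimate of the string scale for a given (lambda, nu) pair.
from math import exp, pi

MZ = 91.19                                  # Z mass in GeV (assumed input)
inv_ae, s2w, a3 = 127.906, 0.23120, 0.118   # M_Z inputs quoted in the text
inv_aY = inv_ae * (1 - s2w)                 # 1/alpha_Y = cos^2(theta_W)/alpha_e
inv_a2 = inv_ae * s2w                       # 1/alpha_2 = sin^2(theta_W)/alpha_e
inv_a3 = 1.0 / a3
bY, b2, b3 = 7.0, -3.0, -7.0                # one-loop coefficients from eq. (rge)

def string_scale(lam, nu):
    """Scale mu where 1/alpha_Y = lam/alpha_3 + nu/alpha_2 (k_Y = lam*xi + nu),
    using 1/alpha_i(mu) = 1/alpha_i(M_Z) - b_i ln(mu/M_Z) / (2 pi)."""
    num = inv_aY - lam * inv_a3 - nu * inv_a2
    den = (bY - lam * b3 - nu * b2) / (2.0 * pi)
    return MZ * exp(num / den)
```

For $(\lambda,\nu)=(\frac{7}{6},0)$ (model (a) with $\alpha_i'=\alpha_3$) this gives $M_S\sim 7\times 10^{17}$ GeV, for $(\frac{14}{3},1)$ (models (b), (e), (g)) a few TeV, and for $(\frac{8}{3},1)$ (model (c)) a few $\times 10^6$ GeV, in agreement with the last row of table \ref{ytab2}.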
Finally, model (a) for $\xi\sim 1$ predicts a unification scale as high as $M_S\sim 6.7\times 10^{17}$ GeV which is of the order of the heterotic string scale. Interestingly, in this latter case, all gauge couplings are equal at $M_S$, $\alpha_3=\alpha_2=\alpha_i'$, while, as can be seen from table 2, $k_Y$ takes a common value for all three cases, $k_Y=7/6$. \begin{figure}[!t] \hspace*{-.5cm} \includegraphics[width=0.5\textwidth]{cmodels.eps}\hspace{1.0pc}% \begin{minipage}[b]{18pc} \caption{\label{cmodels}The string scale as a function of the coupling ratio $\frac{\alpha_3}{\alpha}$, ($\alpha$ is a common value for the $U(1)$ couplings $\alpha_i'$) for the hypercharge embeddings of table 1, in the general case of intersecting branes. Re\-sults for model (g) coincide with those of model (b).}\vspace*{0.2cm} \end{minipage} \end{figure} In the general intersecting case, the $U(1)$ branes are neither aligned to the $SU(3)$, nor to the $SU(2)$ stacks, thus the corresponding gauge couplings can take arbitrary values. Without loss of generality, we will assume here for simplicity that all these couplings are equal $\alpha_1'=\alpha_2'=\dots=\alpha_N'=\alpha$. In figure \ref{cmodels} we plot the string scale ($M_S$) as a function of the logarithm of the ratio $\alpha_3/\alpha$ for the candidate models (a), (b), (c), (d), (e) and (g). The results for models (b), (c), (e) and (g) (the last being identical to those of model (b)) are represented in the figure with continuous lines. These are compatible with low scale unification particularly when $\alpha_3\ge \alpha$. For $\alpha_i'=\alpha_3$ (which corresponds to the zero of the logarithm on the $x$-axis), we obtain again the results of the parallel brane scenario, shown in table 2. At this point, we further observe a crossing of the (e)-curve with the curve for models (b),(g). It is exactly at this point ($\alpha_i'=\alpha_3$) that these three models predict the same value for the lowest string scale.
When $\alpha_3\ge \alpha_i'$, model (e) predicts the lowest $M_S$, whilst, if $\alpha_i'>\alpha_3$, models (b), (g) imply lower string scales than model (e). The values of the string scale for models (a), (d) (represented in the figure with dashed curves) are substantially higher; for these latter cases in particular, assuming reasonable gauge coupling relations $\alpha_i'\approx {\cal O}( \alpha_{2,3})$ we find that $M_S\ge 10^{12}$ GeV. Again, for $\alpha_3=\alpha_i'$ (the zero value of the $x$-axis) we rederive the values of $M_S$ presented in table 2. \section{Yukawa coupling evolution and mass relations} In this section, we will examine whether a unification of the $b - \tau$ Yukawa couplings\footnote{% For $b-\tau$ unification in a different context see also~\cite{Parida:1996mz}.} % is possible in the above described low string scale models. Our procedure is the following: Using the experimentally determined values for the third generation fermion masses $m_b, m_{\tau}$ we run the 2-loop system of the $SU(3)_C\times U(1)_{em}$ renormalization group equations up to the weak scale ($M_Z$) and reconcile the results there with the experimentally known values for the weak mixing angle and the gauge couplings. For the renormalization group running below $M_Z$ we define the parameters \begin{eqnarray} \tilde{\alpha}_e \;=\; \left( \frac{e}{4 \pi} \right)^2 ,\, \tilde{\alpha}_3 \;=\; \left( \frac{g_3}{4 \pi} \right)^2 ,\, t \;=\; 2 \ln\mu \end{eqnarray} where $e, g_3$ are the electromagnetic and strong couplings respectively and $\mu$ is the renormalization scale.
The relevant RGEs are \cite{Arason:1991ic}% \begin{eqnarray} \frac{d\tilde{\alpha}_e}{dt} &=& \frac{80}{9}\tilde{\alpha}_e^2+ \frac{464}{27}\tilde{\alpha}_e^3+\frac{176}{9}\tilde{\alpha}_e^2\tilde{\alpha}_3\nonumber \\[2mm] \frac{d\tilde{\alpha}_3}{dt} &=& -\frac{23}{3}\tilde{\alpha}_3^2 - \frac{116}{3}\tilde{\alpha}_3^3 + \frac{22}{9}\tilde{\alpha}_3^2\tilde{\alpha}_e - \frac{9769}{54}\tilde{\alpha}_3^4 \nonumber \\[2mm] \frac{dm_b}{dt} &=& m_b \left\{ -\frac{1}{3} \tilde{\alpha}_e - 4\tilde{\alpha}_3 + \frac{397}{162}\tilde{\alpha}_e^2 - \frac{1012}{18} \tilde{\alpha}_3^2 - \frac{4}{9} \tilde{\alpha}_3 \tilde{\alpha}_e - 474.8712 \tilde{\alpha}_3^3 \right\}\nonumber \\[2mm] \frac{dm_\tau}{dt} &=& m_\tau \left\{ -3\tilde{\alpha}_e + \frac{373}{18}\tilde{\alpha}_e^2 \right\}\nonumber \end{eqnarray} where $m_b, m_\tau$ are the running masses of the bottom quark and the tau lepton respectively, while we use the notation $\tilde{a}_i \equiv g^2_i/16\pi^2$ and $\tilde{a}_{t,b,\tau} \equiv \lambda^2_{t,b,\tau}/16\pi^2$. The required value for the running mass of $m_t$ at $M_Z$ is computed as follows: we formally solve the 1-loop RGE system for ($\tilde{a}_3$, $\tilde{a}_2$, $\tilde{a}_Y$, $\tilde{a}_t$, $\tilde{a}_b$, $\tilde{a}_\tau$) and afterwards we determine the interpolating function for $\tilde{a}_3(\mu)$ and $m_t(\mu; m_t^z)$ at any scale $\mu$ above $M_Z$, where $m_t^z \equiv m_t(M_Z)$ indicates the dependence on an arbitrary initial condition. The unknown value for $m_t^z$ is determined by solving numerically the algebraic equation% \begin{eqnarray}% \left[ m_t(\mu; m_t^z) - \frac{M_t}{1+ \frac{16}{3} \tilde{a}_3(\mu)-2 \tilde{a}_t(\mu)} \right]_{\mu=M_t}=0 \end{eqnarray}% We use these results as inputs for the relevant parameters and we run the RGE system to higher scales until the $\tilde{a}_b$ and $\tilde{a}_{\tau}$ Yukawa couplings coincide. The scale at which this happens is taken to be the string scale.
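As an illustration of the running below $M_Z$, the quoted two-loop system for $\tilde{\alpha}_e$ and $\tilde{\alpha}_3$ can be integrated numerically in the variable $t=2\ln\mu$. The sketch below is our own rough illustration, not the numerics of the paper; the initial values $\alpha_{\rm em}(M_Z)\approx 1/128$ and $\alpha_s(M_Z)\approx 0.118$ are assumed inputs.

```python
import math

def beta(y):
    """Two-loop RGEs below M_Z for (alpha~_e, alpha~_3), with t = 2 ln(mu)."""
    ae, a3 = y
    dae = (80/9)*ae**2 + (464/27)*ae**3 + (176/9)*ae**2*a3
    da3 = -(23/3)*a3**2 - (116/3)*a3**3 + (22/9)*a3**2*ae - (9769/54)*a3**4
    return (dae, da3)

def rk4(y, t0, t1, steps=2000):
    """Fixed-step fourth-order Runge-Kutta integration from t0 to t1."""
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = beta(y)
        k2 = beta((y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
        k3 = beta((y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
        k4 = beta((y[0] + h*k3[0],   y[1] + h*k3[1]))
        y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return y

MZ, mb = 91.19, 4.25                   # GeV
ae0 = (1/128) / (4*math.pi)            # alpha~ = (g/4pi)^2 = alpha/(4pi)
a30 = 0.118  / (4*math.pi)

# run downward from M_Z to m_b (t decreases, the strong coupling grows)
ae_mb, a3_mb = rk4((ae0, a30), 2*math.log(MZ), 2*math.log(mb))
print(f"alpha_s(m_b) ~ {4*math.pi*a3_mb:.3f}")
```

At one loop, $1/\tilde{\alpha}_3$ changes linearly in $t$ with slope $23/3$, so the sketch should reproduce the familiar growth of $\alpha_s$ toward low scales; the two-loop terms only modify this mildly.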
There, the values of $\tilde{a}_3, \tilde{a}_2, \tilde{a}_Y$ are checked and the ratio $\tilde{a}_2/\tilde{a}_Y$ is calculated in order to obtain the normalization constant $k_Y$. In our numerical analysis we use for the gauge couplings the values presented in the previous section, for the bottom quark mass $m_b$ the experimentally determined range at the scale $\mu = m_b$, i.e.\ $m_b(m_b) = 4.25 \pm 0.15$ GeV, and finally the top pole mass is taken to be $M_t=178.0 \pm 4.3$ GeV \cite{Eidelman}. For the scales above $M_Z$ we consider the standard model spectrum augmented by one additional Higgs doublet. The Higgs doubling is in accordance with the situation that usually arises in the SM variants with brane origin. Moreover, we assume that one Higgs $H_u$ only couples to the top quark while the second Higgs $H_d$ couples only to the bottom. Then, in analogy with supersymmetry we define the angle $\beta$ related to their vevs where $\tan\beta=\frac{v_u}{v_d}$. Thus, we have the equations for the gauge couplings \begin{eqnarray} \frac{d\tilde{\alpha}_Y}{dt}\;=\; 7 \tilde{\alpha}_Y^2 ,\;\; \frac{d\tilde{\alpha}_2}{dt}\;=\; -3 \tilde{\alpha}_2^2 ,\;\; \frac{d\tilde{\alpha}_3}{dt}\;=\; -7 \tilde{\alpha}_3^2 \nonumber \end{eqnarray} and for the Yukawas \begin{eqnarray} \frac{d\tilde{\alpha}_t}{dt}&=& \tilde{\alpha}_t (-\frac{17}{12}\tilde{\alpha}_Y - \frac{9}{4} \tilde{\alpha}_2 - 8 \tilde{\alpha}_3 + \frac{9}{2} \tilde{\alpha}_t + \frac{1}{2} \tilde{\alpha}_b) \nonumber\\ \frac{d\tilde{\alpha}_b}{dt} &=& \tilde{\alpha}_b (-\frac{5}{12} \tilde{\alpha}_Y - \frac{9}{4} \tilde{\alpha}_2 -8 \tilde{\alpha}_3 + \frac{1}{2} \tilde{\alpha}_t + \frac{9}{2} \tilde{\alpha}_b + \tilde{\alpha}_{\tau}) \nonumber\\ \frac{d\tilde{\alpha}_{\tau}}{dt} &=& \tilde{\alpha}_{\tau} (-\frac{15}{4} \tilde{\alpha}_Y - \frac{9}{4} \tilde{\alpha}_2 + 3 \tilde{\alpha}_b + \frac{5}{2} \tilde{\alpha}_{\tau} ) \nonumber\\ \frac{dv_u}{dt} &=& \frac{v_u}{2} (\frac{3}{4} \tilde{\alpha}_Y + \frac{9}{4} \tilde{\alpha}_2 - 3
\tilde{\alpha}_t) \nonumber \\ \frac{dv_d}{dt} &=& \frac{v_d}{2} (\frac{3}{4} \tilde{\alpha}_Y + \frac{9}{4} \tilde{\alpha}_2 - 3 \tilde{\alpha}_b - \tilde{\alpha}_{\tau} )\nonumber \end{eqnarray} where $ t = 2\ln\mu$. Further, if we define $ v^2 = v_u^2 +v_d^2 $, with $v_u = v \sin\beta $, $v_d = v \cos\beta$ and $v\sim 174~ {\rm GeV}$, the $Z$-boson mass is given by $M_Z^2 = \frac{1}{2}(g_Y^2 + g_2^2)v^2$. The electromagnetic and the strong couplings are defined in the usual way \[ \tilde{\alpha}_{e} = \tilde{\alpha}_Y \cos^2\theta_W = \tilde{\alpha}_2 \sin^2\theta_W \] while the top and bottom quark masses are related to the Higgs vevs by \[ m_t = 4 \pi v_u \sqrt{\tilde{\alpha}_t} ~~~~~~~~ m_b = 4 \pi v_d \sqrt{\tilde{\alpha}_b} \] \begin{table}[!t] \caption{\label{ytab3a} The string scale and the $b-\tau$ ratio at $M_S$ for various orientations of $U(1)$ branes presented in table \ref{ytab2}.}% \centering \begin{tabular}{ccccc} \br model & $k_Y$ & $\xi=\frac{\alpha_2}{\alpha_3}$ & ${M_S}/{GeV}$&$\frac{m_b}{m_{\tau}}(M_S)$\\ \mr b,e,g & $2.969$ & 0.42 & $5.786 \times 10^3$ &1.25 \\ c & $2.539$ & 0.58 & $1.986 \times 10^6$ &1.01 \\ d & $1.554$ & 0.93 & $1.645 \times 10^{14}$ &0.73 \\ a & $1.226$ & 1.01 & $6.710 \times 10^{17}$ &0.68 \\ \br \end{tabular} \end{table} We will examine the possibility of obtaining $b-\tau$ unification at a low string scale $M_S$. We first concentrate on the models (a)-(g) discussed in the previous section. We present our results in the last column of table \ref{ytab3a}. We notice that $b-\tau$ unification is obtained in model (c), for $M_S\approx 10^6$ GeV. Models (b), (e), (g) with unification scale $M_S\approx 5.8 \times 10^3$ GeV predict a small deviation from exact $b-\tau$ unification. We observe that in these cases the strong-weak gauge coupling ratio\footnote{This relation holds naturally if we embed the model in a $U(3)\times U(2)^2\times U(1)^2$ symmetry.} is ${a_3}\approx 2\,{a_2}$.
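The gauge part of the system above $M_Z$ integrates trivially at one loop: with $t=2\ln\mu$ and $d\tilde{\alpha}_i/dt=b_i\tilde{\alpha}_i^2$ ($b_Y=7$, $b_2=-3$, $b_3=-7$ as quoted), one has $1/\tilde{\alpha}_i(t)=1/\tilde{\alpha}_i(t_Z)-b_i\,(t-t_Z)$. The following sketch is our own illustration with assumed PDG-like electroweak inputs, not the paper's numerics; it evaluates the couplings at a scale of a few TeV.

```python
import math

MZ = 91.19                                   # GeV
alpha_em, s2w, alpha_s = 1/127.9, 0.2312, 0.118   # assumed inputs at M_Z

# alpha~_i = g_i^2 / (16 pi^2) = alpha_i / (4 pi)
aY0 = (alpha_em / (1 - s2w)) / (4*math.pi)   # hypercharge, SM normalization
a20 = (alpha_em / s2w) / (4*math.pi)
a30 = alpha_s / (4*math.pi)

def run(a0, b, mu):
    """One-loop running: d(alpha~)/dt = b*alpha~^2 with t = 2 ln(mu)."""
    t = 2*math.log(mu / MZ)
    return 1.0 / (1.0/a0 - b*t)

mu = 5.8e3   # GeV, of the order of the low string scale in table 3
aY, a2, a3 = run(aY0, 7, mu), run(a20, -3, mu), run(a30, -7, mu)
print(aY, a2, a3)   # a3 shrinks (asymptotic freedom), aY grows with mu
```

This reproduces the qualitative picture used in the text: $\tilde{\alpha}_3$ falls and $\tilde{\alpha}_Y$ rises with scale, and the ratio $\tilde{\alpha}_2/\tilde{\alpha}_Y$ evaluated where the Yukawas meet is what fixes $k_Y$.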
\begin{figure}[!h] \hspace*{-0.9cm} \includegraphics[width=0.6\textwidth]{mbmtau.eps}\hspace{-1pc}% \begin{minipage}[b]{18pc} \caption{\label{mbmtau}The ratio $\frac{m_b}{m_{\tau}}$ as a function of the energy $\mu$ in the 2-Higgs Standard Model. The shaded region corresponds to $a_3$ and threshold uncertainties.}\vspace*{0.7cm} \end{minipage} \end{figure} In figure \ref{mbmtau} the ratio $m_b/m_\tau$ is plotted as a function of the energy scale for the case of the two-Higgs Standard Model (see~\cite{Kane:2005va} and references therein). All previous uncertainties are incorporated and the result is the shaded region shown in the figure. The horizontal shaded band is defined between the values $\frac{m_b}{m_{\tau}}=[0.95-1.05]$ and takes into account deviations of the ratio $\frac{m_b}{m_{\tau}}$ from unity due to possible threshold as well as mixing effects in the full $3\times 3$ quark and lepton flavor mass matrices. As can be seen, exact $m_b=m_{\tau}$ equality is found around the scale $M_S\approx 10^6$ GeV. Taking into consideration $m_b/m_{\tau}$-uncertainties expressed through the shaded band, the $M_S$ energy range is extended up to $\sim 10^{12}$ GeV. \section{Conclusions} We performed a systematic study of the Standard Model embedding in brane configurations with $U(3)\times U(2)\times U(1)^N$ gauge symmetry and we examined a number of interesting phenomenological issues. Seeking models with an economical Higgs sector, we identified two brane configurations with two or three extra abelian branes which can accommodate the Standard Model with two Higgs doublets. We analysed the possible hypercharge embeddings and found seven possible solutions leading to six models (with acceptable string scale $M_S$), implying the correct charge assignments for all standard model particles.
We further examined the gauge coupling evolution in these models for both parallel and intersecting branes, and determined the lowest string scale allowed for all possible alignments of the $U(1)$ branes with respect to the $U(3)$ and $U(2)$ non-abelian factors of the gauge symmetry. In the parallel brane scenario, we have identified three models which allow a string scale $M_S$ as low as a few TeV, one model with string scale of the order $10^6$ GeV and two models with high unification scales. Similar results were obtained for the general case of intersecting branes. We further analysed the consequences of the third generation fermion mass relations and in particular $b-\tau$ equality at the string scale on the above models. In the parallel brane scenario, we found that exact $b-\tau$ Yukawa unification is obtained only in the model with $M_S\approx 10^3$ TeV, while in the TeV string scale models the $m_b/m_{\tau}$ ratio deviates from unity by $25\%$. Allowing the $U(1)$ gauge couplings to take arbitrary (perturbative) values, we found that $b-\tau$ Yukawa unification is possible for a wide string scale range from $10^6$ up to $10^{12}$ GeV. {\ack This research was funded by the program `PYTHAGORAS' (no.\ 1705 project 23) of the Operational Program for Education and Initial Vocational Training of the Hellenic Ministry of Education under the 3rd Community Support Framework and the European Social Fund.} \medskip \section*{References} \medskip
{ "redpajama_set_name": "RedPajamaArXiv" }
4,794
---
title: Projects
layout: default
---
<!DOCTYPE html>
<html class="no-js">
<head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Hux - Timeline</title>
    <link rel="stylesheet" type="text/css" href="css/timeline.css" />
    <script src="js/modernizr.custom.js"></script>
</head>
<body>
    <div class="container" style="margin:0;padding:0;width:100%;">
        <header>
            <img width="175" height="175" style="border-radius:50%;" src="/img/avatar-beauty.jpg">
            <h1>Roc</h1>
            <p>A robot engineer,<br> majoring in intelligent robot development.</p>
        </header>
        <div class="main">
            <ul class="cbp_tmtimeline">
                <li>
                    <div class="cbp_tmlabel">
                        <h2 id="boxoffice">xbot</h2>
                        <time>2016.04-NOW</time>
                        <img src="images/work-xbot.png">
                        <ul>
                            <li>
                                Xbot is a two-wheeled mobile robot compatible with most common sensors and hardware, such as Microsoft's Kinect, Asus' Xtion Pro, and RPlidar. Users can easily integrate their own customized hardware and applications into the development platform by using ROS and the related series of tutorials.
                            </li>
                            <li class="skill">
                                <span><b>ROS</b></span>
                                <span class="i-react"></span>
                                <span class="link"><a target="_blank" href="http://robots.ros.org/xbot/">ROS wiki</a></span>
                            </li>
                        </ul>
                    </div>
                </li>
                {% if site.duoshuo_username %}
                <li>
                    <div class="cbp_tmlabel">
                        <!-- Duoshuo comment box: start -->
                        <div class="comment">
                            <div class="ds-thread"
                                 data-thread-key="{{site.duoshuo_username}}/portfolio"
                                 data-title="{{page.title}}"
                                 data-url="{{site.url}}{{site.baseurl}}{{page.url}}">
                            </div>
                        </div>
                        <!-- Duoshuo comment box: end -->
                    </div>
                </li>
                {% endif %}
            </ul>
        </div>
    </div>
    {% if site.duoshuo_username %}
    <!-- Duoshuo shared JS (insert once per page): start -->
    <script type="text/javascript">
        // dynamic user by Hux
        var _user = '{{site.duoshuo_username}}';
        // duoshuo comment query.
        var duoshuoQuery = {short_name: _user};
        (function() {
            var ds = document.createElement('script');
            ds.type = 'text/javascript';
            ds.async = true;
            ds.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') + '//static.duoshuo.com/embed.js';
            ds.charset = 'UTF-8';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ds);
        })();
    </script>
    <!-- Duoshuo shared JS: end -->
    {% endif %}
</body>
</html>
{ "redpajama_set_name": "RedPajamaGithub" }
7,912
Q: Counting divs with specific classes before header tags with JavaScript/jQuery in an HTML doc

I wrote a script in JS/jQuery which looks for h1, h2 and h3 tags in my HTML doc. If the script finds a header tag, it automatically writes the content of the tag into my HTML doc. So far everything is working perfectly fine, but I have some problems with the following task.

Let's say I have an h1 tag in line 90 of my HTML code. The goal is to extend the JS script so that it counts all occurrences of a div with a certain class that stand before the h1 tag.

Example HTML:

<div class="example"></div>
<div class="example2"></div> <!--Should not count this one-->
<div class="example"></div>
<div class="example"></div>
<div class="example2"></div> <!--Should not count this one-->
<div class="example"></div>
<h1>TEST</h1>
<div class="example"></div>
<div class="example"></div>
<h2>TEST2</h2>

The JS script should:

1. Spot the h1 tag in line 7 of the HTML code and save the value 4 in a variable, because there were 4 divs with the class "example" before the spotted h1 tag.
2. Spot the h2 tag in line 10 of the HTML code and save the value 6 in the same variable, because there were 6 divs with the class "example" before the spotted h2 tag.

So far I'm clueless about what the JS code should look like. I hope you understood the question and know a way to help me.

A: You could just loop over the DOM elements and check the nodeName property, and if you hit an h1 or h2, reset the counter.

UPDATE: after rereading the question I realized you don't want to reset the counter at each header tag. I've removed the code that resets the count.

let count = 0;
$('div,h1,h2,h3', 'body').each(function(){
    const node = $(this).prop('nodeName');
    if(['H1','H2'].includes(node)){
        console.log(`${count} example div's before spotting ${node} tag`);
    } else if($(this).hasClass('example')) {
        count++;
    }
});
console.log(`${count} example div's`);
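Outside the browser, the same running-count algorithm can be reproduced with Python's standard `html.parser` (a language-neutral illustration we add here; it is not part of the original answer):

```python
from html.parser import HTMLParser

class DivCounter(HTMLParser):
    """Record, at each h1/h2, how many divs with a given class came before it."""
    def __init__(self, cls='example'):
        super().__init__()
        self.cls = cls
        self.count = 0
        self.counts_at_headers = []

    def handle_starttag(self, tag, attrs):
        if tag in ('h1', 'h2'):
            self.counts_at_headers.append(self.count)
        elif tag == 'div':
            # split() avoids matching 'example' inside 'example2'
            classes = dict(attrs).get('class', '').split()
            if self.cls in classes:
                self.count += 1

html = """
<div class="example"></div>
<div class="example2"></div>
<div class="example"></div>
<div class="example"></div>
<div class="example2"></div>
<div class="example"></div>
<h1>TEST</h1>
<div class="example"></div>
<div class="example"></div>
<h2>TEST2</h2>
"""
p = DivCounter()
p.feed(html)
print(p.counts_at_headers)  # [4, 6], matching the example in the question
```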
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,360
Wisconsin Oven Corporation announced the shipment of a LP Gas Fired Heavy Duty Car Bottom Oven with Fume Incinerator to a leader in the oil and gas industry. This car bottom oven will be used for prebaking drill pipe joints. The thermal clean oven has a maximum operating temperature of 800° F and work chamber dimensions of 8'6" wide x 50'0" long x 8'6" high. Guaranteed temperature uniformity of ±30° F at 750° F was verified through a twenty (20) point profile test conducted in an empty oven chamber under static operating conditions. The industrial oven has sufficient capability to heat 70,000 pounds of steel from a cold start to 750° F within 75 minutes. The load car is designed for a maximum loading of 172,000 pounds. The fume incinerator is designed to incinerate the exhaust fumes from the oven and has sufficient capacity to handle 5,000 CFM of exhaust. This heat treating oven was fully factory tested and adjusted prior to shipment from our facility. All safety interlocks are checked for proper operation and the equipment is operated at the normal and maximum operating temperatures. An extensive quality assurance check list was completed to ensure the equipment met all Wisconsin Oven quality standards.
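As a rough plausibility check on the stated heating capability, the sensible heat required can be estimated from Q = m·c·ΔT. The figures below are our own back-of-the-envelope assumptions (carbon-steel specific heat ≈ 0.12 BTU/(lb·°F), a 70 °F cold start, and no allowance for losses), not values from the release:

```python
# Back-of-the-envelope heat load: 70,000 lb of steel, 70 F -> 750 F in 75 min.
mass_lb = 70_000
c_steel = 0.12          # BTU/(lb*F), assumed value for carbon steel
dT = 750 - 70           # temperature rise in F
minutes = 75

heat_btu = mass_lb * c_steel * dT          # sensible heat only, losses ignored
rate_btu_hr = heat_btu / (minutes / 60)    # required average heat input
print(f"{heat_btu:,.0f} BTU total, {rate_btu_hr:,.0f} BTU/hr average")
# -> 5,712,000 BTU total, 4,569,600 BTU/hr average
```

So the claimed 75-minute heat-up implies a burner delivering several million BTU/hr to the load, a plausible size for an industrial car bottom oven.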
{ "redpajama_set_name": "RedPajamaC4" }
2,754
Q: quadratic form factorization

For a homogeneous polynomial with real coefficients $f(x,y,z)=ax^2+by^2+cz^2+dxy+exz+fyz$, suppose we know $f$ factors into a product of linear forms, $f=(p_1x+p_2y+p_3z)(q_1x+q_2y+q_3z)$. Is there any way to determine whether the coefficients of the linear forms are real or complex?

A: The coefficients are real if and only if (depending on whether $p_1x+p_2y+p_3z$ and $q_1x+q_2y+q_3z$ are proportional) such a quadratic form can be brought, by a coordinate change, to $\pm x^2$ or to $xy$ (this is an obvious re-phrasing: take the factor(s) as new coordinate(s)). The second one can also be brought to $x^2-y^2$. In other words, the signature of your form (the number of pluses/minuses when it is brought to a sum of squares) should be (1,0), (0,1) or (1,1) in order for a factorisation over the reals to exist. The ways to bring a quadratic form to a sum of squares over the reals are explained in most linear algebra textbooks. In fact, to compute the signature, you don't even need that. Form the matrix $$\begin{pmatrix}a&\frac12d&\frac12e\\ \frac12d&b&\frac12f\\ \frac12e&\frac12f&c\end{pmatrix}.$$ You want the characteristic polynomial to have two zero roots, or one zero root, one positive root, and one negative root.
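The eigenvalue test in the answer is easy to mechanize. The sketch below is an illustration we add here, assuming, as in the question, that $f$ is already known to split into two linear forms; it classifies the factorization as real or complex from the signature of the symmetric matrix:

```python
import numpy as np

def real_factorization(a, b, c, d, e, f, tol=1e-9):
    """For f = a x^2 + b y^2 + c z^2 + d xy + e xz + f yz, known to factor
    into two linear forms, return True iff the forms can be taken real,
    i.e. the signature has at most one + and at most one - eigenvalue."""
    M = np.array([[a,   d/2, e/2],
                  [d/2, b,   f/2],
                  [e/2, f/2, c  ]])
    ev = np.linalg.eigvalsh(M)
    pos = int(np.sum(ev >  tol))
    neg = int(np.sum(ev < -tol))
    return pos <= 1 and neg <= 1

print(real_factorization(1, -1, 0, 0, 0, 0))  # x^2 - y^2 = (x+y)(x-y): True
print(real_factorization(1,  1, 0, 0, 0, 0))  # x^2 + y^2 = (x+iy)(x-iy): False
print(real_factorization(0,  0, 0, 1, 0, 0))  # xy: True
```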
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,598
Q: Add data to Dataframe with pandas

I want to add a result from a simulation model to an existing dataframe at a specific position within the dataframe. Based on a dataframe and a model for linear regression, I am calculating a value. This value must be added to the input dataframe used for the linear regression. I am using pandas' insert function, which raises an error (posted as a screenshot in the original question).

A: Thank you very much for your help. It worked with:

df2.insert(4, 'Stunde_10', y_heute_10[0,0])

(Note that DataFrame.insert modifies the frame in place and returns None, so there is no need to assign its result to a variable.)
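For reference, a minimal self-contained sketch of `DataFrame.insert`: the column name and scalar mirror the post, while the surrounding frame is invented for illustration.

```python
import pandas as pd

# hypothetical frame standing in for the regression inputs df2
df2 = pd.DataFrame({'Stunde_6': [1.0], 'Stunde_7': [2.0],
                    'Stunde_8': [3.0], 'Stunde_9': [4.0]})
y_heute_10 = [[42.0]]          # stand-in for the simulated value

# insert at position 4 (after the four existing columns);
# operates in place and returns None
df2.insert(4, 'Stunde_10', y_heute_10[0][0])
print(list(df2.columns))
```

The first argument is the integer position of the new column (0-based, at most the current number of columns), which is what lets the value land "at a specific position" rather than always at the end.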
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,880
Congressional Legislators Introduce Ending Homelessness Act

Federal lawmakers introduced a bill on July 19 that seeks to end homelessness once and for all. Known as the "Ending Homelessness Act of 2021," the bill would:

- Transform the Housing Choice Voucher program into a federal entitlement
- Prohibit source of income discrimination nationwide
- Recalculate Section 8 assistance
- Permanently authorize both the McKinney-Vento Homeless Assistance Act and the U.S. Interagency Council on Homelessness (USICH)

Representative Maxine Waters (D-CA) introduced the bill with co-sponsors, Representatives Emanuel Cleaver (D-MO) and Ritchie Torres (D-NY). Several nonprofit advocacy groups support it, such as the National Low Income Housing Coalition (NLIHC), National Alliance to End Homelessness, Catholic Charities USA, and Children's Defense Fund.

"Our nation is in the midst of a housing crisis, which has been decades in the making," Waters said. "To this day, more than 580,000 people are experiencing homelessness, 10.5 million households are paying more than 50 percent of their income on rent, and more than 20 million mortgage-ready individuals are unable to realize their dream of homeownership. Without a stable home, children cannot thrive. Without a stable home, families are forced to choose between a roof over their heads and meals. And without a stable home, many people lack the foundation necessary to plan for the future."

Reduce Barriers to Housing Vouchers

One of the biggest changes proposed by the legislation is that it would guarantee housing vouchers for all applicants earning no more than 50 percent of an area's median income (AMI). To meet demand, the bill also requires the Department of Housing and Urban Development (HUD) to increase its allotment of housing vouchers by 500,000 per year between 2023 and 2025.
The proposed entitlement would be the housing equivalent of the Supplemental Nutrition Assistance Program (SNAP). In essence, the program would provide housing assistance funds when people experience a sudden and dramatic loss of income. Under current law, public housing authorities (PHAs) must distribute 75 percent of their allotted vouchers to people earning up to 30 percent AMI. Often, this policy results in PHAs denying applications for those at higher income levels. Only 20 percent of applicants currently receive housing vouchers.

Researchers from Columbia University project the bill could:

- Lift nine million people out of poverty
- Reduce child poverty by over 33 percent
- Decrease the racial wealth gap

The entitlement would be phased in over eight years, according to the bill.

Prohibiting Source of Income Discrimination

Another policy proposed by the bill is a nationwide prohibition on source of income discrimination. Currently, states are responsible for writing landlord-renter laws that address source of income issues. And while some states have recently passed laws to prohibit such discrimination, others have moved in the opposite direction. In April, Iowa Governor Kim Reynolds signed a law that prohibits towns and counties from adopting source of income discrimination laws that favor tenants. According to the bill, existing ordinances prohibiting source of income discrimination like those in Des Moines, Iowa City, and Marion will be invalid after January 1, 2023. However, landlords in these towns can still reject applications for reasons such as prior convictions. On the other side of the coin, states like Maryland, Colorado, New York, and California have all taken steps to protect renters from source of income discrimination.

Recalculating Section 8 Assistance

Many people who receive Section 8 assistance still struggle to pay rent because of how assistance levels are calculated under federal law.
HUD's current calculation is based on a Fair Market Rent (FMR) assessment, or annual estimates of median rents in metropolitan areas. Because the program averages rents from different zip codes included in a metro area, tenants receiving assistance in higher-income neighborhoods often struggle to pay rent. The Ending Homelessness Act would revise the calculation to account for Small Area Fair Market Rents (SAFMR), broken down by zip code. This would allow the Section 8 program to more accurately assess the needs of the program's participants. The difference between the two calculations is slight but entirely consequential for tenants receiving assistance. For example, FMR assistance payments made to tenants in a one-bedroom apartment in Denver are nearly $200 less per month than if the payments had been calculated using SAFMR, according to HUD data.

Invest in Affordable Housing and Permanently Authorize Federal Homelessness Agencies

On top of the policy priorities outlined in the legislation, it also allocated $10 billion over five years to the Housing Trust Fund and McKinney-Vento Grants to build housing for people experiencing homelessness. In all, the bill is projected to fund the creation of 410,000 new housing units. However, the bill also acknowledges that simply throwing money at the issue won't solve homelessness. That's why the bill also provides $120 million to states to improve their outreach programs and technical assistance systems. The bill would also permanently authorize two programs directly involved with helping states bolster their response to homelessness. The legislation would repeal the sunset requirements for USICH, which was set to expire on October 1, 2028. "More than ever, the country needs increased and sustained investments in solutions to keep the lowest-income people stably, accessibly, and affordably housed," NLIHC's President and CEO Diane Yentel said.
"Thanks to Chairwoman Waters' effective leadership and deep commitment, we are closer to realizing that goal than we've been in decades." Now is not the time to be silent about homelessness. According to data from HUD, homelessness across the country has increased by five percent since 2016. At the same time, the latest Point in Time Count shows the number of people experiencing unsheltered homelessness is now greater than those in shelters for the first time since HUD began the survey in 2004. Contact your state and federal legislators and tell them you support expanding the federal response to homelessness. Tell them you also support funding and building safe and affordable housing options for our most vulnerable neighbors. Join the campaign to end homelessness by supporting the only newsroom focused solely on the topic of homelessness. Our original reporting — posted five to seven days a week — can also be found on Apple News and Google News. Through storytelling, education, news, and advocacy, we are changing the narrative on homelessness. Invisible People is a nonprofit organization. We rely on the support of friends like you — people who understand that well-written, carefully researched stories can change minds about this issue. And that's what leads to true transformation and policy change. Our writers have their fingers on the pulse of homeless communities. Many are formerly or currently homeless themselves. They are the real experts, passionate about ending homelessness. Your support helps us tell the true story of this crisis and solutions that will end it. Your donations help make history by telling the real story of homelessness to inspire tangible actions to end it. Your donation, big or small, will help bring real change. 
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,443
{"url":"https:\/\/newonlinecourses.science.psu.edu\/stat505\/book\/export\/html\/716","text":"7.1.2 - A Naive Approach\n\n7.1.2 - A Naive Approach\n\nFollowing the univariate method, a naive approach for testing a multivariate hypothesis is to compute the t-test statistics for each individual variable; i.e.,\n\n$t_j = \\dfrac{\\bar{x}_j-\\mu^0_j}{\\sqrt{s^2_j\/n}}$\n\nThus we could define $t_{j}$, which would be the t-statistic for the $j^{th}$\u00a0variable as shown above. We may then reject $H_0\\colon \\boldsymbol{\\mu} = \\boldsymbol{\\mu_0}$ if $|t_j| > t_{n-1, \\alpha\/2}$ for at least one variable $j \\in \\{1,2,\\dots, p\\}$ .\n\nProblem with Naive Approach\n\nThe basic problem with this naive approach is that it does not control the family-wise error rate. By definition, the family-wise error rate is the probability of rejecting at least one of the null hypotheses $H^{(j)}_0\\colon \\mu_j = \\mu^0_j$ when all of the $H_{0}$\u2019s are true.\n\nTo understand the family-wise error rate suppose that the experimental variance-covariance matrix is diagonal. This implies zero covariances between the variables. If the data are multivariate normally distributed then this would mean that all of the variables are independently distributed. In this case, the family wide error rate is\n\n$1-(1-\\alpha)^p > \\alpha$\n\nwhere p is the dimension of the multivariate date and $\u03b1$ is the level of significance. By definition, the family-wide error rate is equal to the probability that we reject $H _ { 0 } ^ { ( j ) }$ for at least one j, given that $H _ { 0 } ^ { ( j ) }$ is true for all j. Unless p is equal to one, the error will be strictly greater than $\u03b1$.\n\nConsequence\n\nThe naive approach yields a liberal test. 
That is, we will tend to reject the null hypothesis more often than we should.\n\nBonferroni Correction\n\nUnder the Bonferroni Correction we would reject the null hypothesis that mean vector $\\boldsymbol{\\mu}$ is equal to our hypothesized mean vector $\\boldsymbol{\\mu_{0}}$ at level $\\alpha$ if, for at least one variable j, the absolute value of $t_{j}$\u00a0is greater than the critical value from the t-table with n-1 degrees of freedom evaluated at $\\alpha \/(2p)$ for at least one variable j between 1 and p.\n\nNote! For independent data, this yields a family-wide error rate of approximately $\\alpha$. For example, the family-wide error rate is shown in the table below for different values of p. You can see that these are all close to the desired level of $\\alpha = 0.05$.\n\np Family-Wide Error rate\n2 0.049375\n3 0.049171\n4 0.049070\n5 0.049010\n10 0.048890\n20 0.048830\n50 0.048794\n\nThe problem with the Bonferroni Correction, however, is that it can be quite conservative when there is a correlation among the variables, particularly when doing a large number of tests. 
In a multivariate setting, the assumption of independence between variables is likely to be violated.\n\nConsequence\n\nWhen using the Bonferroni adjustment for many tests with correlated variables, the true family-wide error rate can be much less than $\\alpha$.\u00a0 These tests are conservative and there is low power to detect the desired effects.\n\n [1] Link \u21a5 Has Tooltip\/Popover Toggleable Visibility","date":"2019-10-20 01:25:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9387919306755066, \"perplexity\": 288.1097563366907}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570986700560.62\/warc\/CC-MAIN-20191020001515-20191020025015-00238.warc.gz\"}"}
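The family-wise error rates in the table follow directly from the independence assumption: with each of the $p$ tests run at level $\alpha/p$, the chance of at least one false rejection is $1-(1-\alpha/p)^p$. A quick sketch reproducing the table:

```python
def bonferroni_fwer(p, alpha=0.05):
    """Family-wise error rate of p independent tests, each run at level alpha/p."""
    return 1 - (1 - alpha/p)**p

for p in (2, 3, 4, 5, 10, 20, 50):
    print(p, round(bonferroni_fwer(p), 6))
# each value stays just below alpha = 0.05, as in the table
```

As $p$ grows, $1-(1-\alpha/p)^p$ decreases toward $1-e^{-\alpha}\approx 0.04877$, which is why the tabulated rates creep slightly downward but never exceed $\alpha$.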
Amalia Anglés y Mayer (Badajoz, Spain, 1827 - Stuttgart, Germany, 1859), also known as Amalia Anglés de Fortuny, was a Spanish opera singer. She was born in Badajoz on 3 November 1827. From childhood she showed an aptitude for music, a vocation that fully emerged in 1837; two years later she began to play the guitar with mastery without ever having taken lessons. She trained at the Royal Conservatory of Music in Madrid, where she enrolled in October 1839 and was a pupil of Francesc Frontera i Laserra. Her time at the conservatory was remarkable, with excellent marks; she was appointed répétiteur in 1847, and the queen regent, Maria Christina of Bourbon, chose her as a teacher for her daughters, Isabel and Lluïsa. She completed her studies in October 1852. After working as an assistant teacher, she began to sing on stage, first at the theatre of the Royal Palace in Madrid, where in 1851 she sang Bellini's La straniera. As a result, at only 24 years of age she left for Italy, where she was engaged by La Scala in Milan; there she performed Rigoletto nineteen times and also took part in La sonnambula, Poliuto, Lucia di Lammermoor and Griselda, alongside Marietta Gazzaniga. She later toured Europe, again with great success. She died while working at the Stuttgart opera house, in Germany, on 1 May 1859. References Bibliography: Enciclopèdia Espasa, vol. 5, p. 564
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,595
Electronic Records Archives: The National Archives and Records Administration's Fiscal Year 2006 Expenditure Plan GAO-06-906 Published: Aug 18, 2006. Publicly Released: Aug 18, 2006. Since 2001, the National Archives and Records Administration (NARA) has been working to acquire the Electronic Records Archives (ERA) system, which is intended to address critical issues in the creation, management, and use of federal electronic records. As required by law, the agency submitted its fiscal year 2006 expenditure plan to the congressional appropriations committees, seeking the release of about $22 million for the development of the system. GAO's objectives in reviewing the expenditure plan were to (1) determine the extent to which the expenditure plan satisfied the legislative conditions specified in the appropriations act; (2) determine the extent to which NARA has implemented GAO's prior recommendations; and (3) provide any other observations about the expenditure plan and the ERA acquisition. We reviewed the expenditure plan and analyzed it against the legislative conditions and assessed NARA's progress in addressing prior recommendations. National Archives and Records Administration To reduce risks associated with NARA's efforts to acquire ERA, the Archivist of the United States should ensure that future expenditure plans include a sufficient level and scope of information to enable Congress to understand what system capabilities and benefits are to be delivered, by when, and at what cost, and report on the progress being made against the commitments that were made in prior expenditure plans. Closed - Implemented: Actions that satisfy the intent of the recommendation have been taken. In August 2006, we reported on the National Archives and Records Administration's (NARA) Fiscal Year 2006 Expenditure Plan for acquiring the Electronic Records Archives (ERA) system.
ERA is a major information system that is intended to address critical issues in creating, managing, and using federal electronic records and automating the records management and archiving life cycle. We noted that NARA's expenditure plan did not contain the level and scope of information needed by Congress to understand the agency's plans and commitments relative to system capabilities, benefits, schedules, and costs. We recommended that to reduce the risks associated with NARA's efforts to acquire ERA, the Archivist of the United States ensure that future expenditure plans include a sufficient level and scope of information to enable the Congress to understand what system capabilities and benefits are to be delivered, by when, and at what cost, and report on the progress being made against the commitments that were made in prior expenditure plans. The Archivist agreed with our results and recommendation. During our review of NARA's fiscal year 2007 expenditure plan for the ERA acquisition, we noted that NARA implemented our recommendation and incorporated information such as the agency's plans and commitments relative to ERA system capabilities, benefits, schedules, and costs. By doing so, NARA provided more detailed information that could help Congress in its oversight of the ERA acquisition. Topics: Electronic records; Electronic records archive; Electronic records management; Federal records management; Information security; Information security management; Information technology; Procurement planning; Risk management; National archives
Business For Sale | Harford County, Maryland Discover business opportunities, in or around Harford County, with well-established brands who are looking for people like you to help them grow. As of the 2010 United States Census, Harford County, located in The Old Line State, had a total population of 248,029 people. City Statistics and Other Information About Harford County From a total population of 248,029, Harford County has a median age of 39.9. The average age of males in Harford County is 38.4, while the average age of females is 41.1. Compared to the median age of Maryland as a whole, Harford County is 1.8 years older. The median income of individuals in Harford County is $35,763, while the average household income is $81,016. When comparing the median income of people living in Harford County to the rest of Maryland, the average income in Harford County is $907 less. No additional information. Explore Franchises For Sale in Maryland Now Other Franchises Looking For Owners Like You in Maryland
<?php

namespace Tuurbo\Spreedly;

class Payment {

	protected $config;

	protected $client;

	public $gatewayToken;

	public $paymentToken;

	/**
	 * Create a Guzzle instance and set tokens.
	 *
	 * @param  \GuzzleHttp\Client  $client
	 * @param  array   $config
	 * @param  string  $paymentToken optional
	 * @param  string  $gatewayToken optional
	 * @return void
	 */
	public function __construct(Client $client, $config, $paymentToken = null, $gatewayToken = null)
	{
		$this->client = $client;
		$this->config = $config;
		$this->gatewayToken = $gatewayToken;
		$this->paymentToken = $paymentToken;
	}

	/**
	 * Get a list of all payment methods you've created on Spreedly.
	 *
	 * @param  string $paymentToken optional
	 * @return \Tuurbo\Spreedly\Client
	 */
	public function all($paymentToken = null)
	{
		$append = '';

		if ($paymentToken)
			$append = '?since_token='.$paymentToken;

		return $this->client->request('https://core.spreedly.com/v1/payment_methods.xml'.$append);
	}

	/**
	 * Create a payment method on Spreedly.
	 *
	 * @param  array $data
	 * @return \Tuurbo\Spreedly\Client
	 */
	public function create(array $data)
	{
		$params = [
			'payment_method' => $data
		];

		return $this->client->request('https://core.spreedly.com/v1/payment_methods.xml', 'post', $params);
	}

	/**
	 * Update a payment method on Spreedly.
	 *
	 * @param  array $data
	 * @return \Tuurbo\Spreedly\Client
	 */
	public function update(array $data)
	{
		if (! $this->paymentToken)
			throw new Exceptions\MissingPaymentTokenException;

		$params = [
			'payment_method' => $data
		];

		return $this->client->request('https://core.spreedly.com/v1/payment_methods/'.$this->paymentToken.'.xml', 'put', $params);
	}

	/**
	 * Retain a payment method on Spreedly.
	 *
	 * @return \Tuurbo\Spreedly\Client
	 */
	public function retain()
	{
		if (! $this->paymentToken)
			throw new Exceptions\MissingPaymentTokenException;

		return $this->client->request('https://core.spreedly.com/v1/payment_methods/'.$this->paymentToken.'/retain.xml', 'put');
	}

	/**
	 * Store/Vault a payment method to a third party, like Braintree or Quickpay.
	 *
	 * @return \Tuurbo\Spreedly\Client
	 */
	public function store()
	{
		if (! $this->paymentToken)
			throw new Exceptions\MissingPaymentTokenException;

		if (! $this->gatewayToken)
			throw new Exceptions\MissingGatewayTokenException;

		$params = [
			'transaction' => [
				'payment_method_token' => $this->paymentToken
			]
		];

		return $this->client->request('https://core.spreedly.com/v1/gateways/'.$this->gatewayToken.'/store.xml', 'post', $params);
	}

	/**
	 * Get details of a payment method on Spreedly.
	 *
	 * @return \Tuurbo\Spreedly\Client
	 */
	public function get()
	{
		if (! $this->paymentToken)
			throw new Exceptions\MissingPaymentTokenException;

		return $this->client->request('https://core.spreedly.com/v1/payment_methods/'.$this->paymentToken.'.xml');
	}

	/**
	 * Disable a payment method stored on Spreedly.
	 *
	 * @return \Tuurbo\Spreedly\Client
	 */
	public function disable()
	{
		if (! $this->paymentToken)
			throw new Exceptions\MissingPaymentTokenException;

		return $this->client->request('https://core.spreedly.com/v1/payment_methods/'.$this->paymentToken.'/redact.xml', 'put');
	}

	/**
	 * Ask a gateway if a payment method is in good standing.
	 *
	 * @param  bool  $retain retain the payment method on success
	 * @param  array $data optional
	 * @return \Tuurbo\Spreedly\Client
	 * @link https://docs.spreedly.com/reference/api/v1/gateways/verify/
	 */
	public function verify($retain = false, array $data = null)
	{
		if (! $this->paymentToken)
			throw new Exceptions\MissingPaymentTokenException;

		if (! $this->gatewayToken)
			throw new Exceptions\MissingGatewayTokenException;

		$params = [
			'transaction' => [
				'payment_method_token' => $this->paymentToken,
				'retain_on_success' => $retain
			]
		];

		if (is_array($data)) {
			$params['transaction'] += $data;
		}

		return $this->client->request('https://core.spreedly.com/v1/gateways/'.$this->gatewayToken.'/verify.xml', 'post', $params);
	}

	/**
	 * View all transactions of a specific payment method.
	 *
	 * @param  string $paymentToken optional
	 * @return \Tuurbo\Spreedly\Client
	 */
	public function transactions($paymentToken = null)
	{
		if (! $this->paymentToken)
			throw new Exceptions\MissingPaymentTokenException;

		$append = '';

		if ($paymentToken)
			$append = '?since_token='.$paymentToken;

		return $this->client->request('https://core.spreedly.com/v1/payment_methods/'.$this->paymentToken.'/transactions.xml'.$append);
	}

	/**
	 * Magic Method for payment methods.
	 *
	 * Can be used to charge or authorize.
	 *
	 * <code>
	 *		// Charge a payment method on the default gateway.
	 *		Spreedly::payment($paymentToken)->purchase(1.99);
	 *
	 *		// Set currency to Euros.
	 *		Spreedly::payment($paymentToken)->purchase(1.99, 'EUR');
	 * </code>
	 *
	 * @param  string $method
	 * @param  array  $parameters
	 * @return mixed
	 */
	public function __call($method, $parameters)
	{
		if (! in_array($method, ['purchase', 'authorize']))
			throw new Exceptions\InvalidPaymentMethodException($method.' is an invalid payment method.');

		if (! $this->gatewayToken)
			throw new Exceptions\MissingGatewayTokenException;

		if (! $this->paymentToken)
			throw new Exceptions\MissingPaymentTokenException;

		if (! isset($parameters[0]) || $parameters[0] <= 0)
			throw new Exceptions\InvalidAmountException($method.' method requires an amount greater than 0.');

		$params = [
			'transaction' => [
				'payment_method_token' => $this->paymentToken,
				'amount' => $parameters[0] * 100,
				'currency_code' => isset($parameters[1]) ? $parameters[1] : 'USD'
			]
		];

		if (isset($parameters[2]) && is_array($parameters[2]))
			$params['transaction'] += $parameters[2];

		return $this->client->request('https://core.spreedly.com/v1/gateways/'.$this->gatewayToken.'/'.$method.'.xml', 'post', $params);
	}

}
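The magic `__call` method above builds the transaction payload for `purchase` and `authorize`: it validates the method name and the amount, converts the dollar amount to cents, defaults the currency to USD, and merges any extra keys without overriding the core fields (PHP's `+=` array union keeps existing keys on conflict). A minimal sketch of that mapping, written in Python purely for illustration — the helper name `build_transaction` is mine, not part of the library, and the sketch rounds the cent conversion where the PHP multiplies directly:

```python
def build_transaction(method, payment_token, amount, currency='USD', extra=None):
    """Sketch of the payload Payment::__call posts to Spreedly (illustrative)."""
    if method not in ('purchase', 'authorize'):
        raise ValueError(method + ' is an invalid payment method.')
    if amount <= 0:
        raise ValueError(method + ' method requires an amount greater than 0.')
    transaction = {
        'payment_method_token': payment_token,
        # The PHP code computes $amount * 100; rounding here avoids float
        # artifacts such as 1.99 * 100 == 198.99999999999997.
        'amount': round(amount * 100),
        'currency_code': currency,
    }
    if isinstance(extra, dict):
        # Mirror PHP's += union: existing keys win, extras fill the gaps.
        for key, value in extra.items():
            transaction.setdefault(key, value)
    return {'transaction': transaction}
```

For example, `build_transaction('purchase', 'abc123', 1.99)` yields a payload with `amount` 199 and `currency_code` 'USD', mirroring `Spreedly::payment($paymentToken)->purchase(1.99)`.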
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!--NewPage-->
<HTML>
<HEAD>
<!-- Generated by javadoc (build 1.6.0-beta2) on Mon Mar 19 18:29:22 CST 2007 -->
<META http-equiv="Content-Type" content="text/html; charset=utf-8">
<TITLE>
ReferenceType (Java Platform SE 6)
</TITLE>
<META NAME="date" CONTENT="2007-03-19">
<LINK REL ="stylesheet" TYPE="text/css" HREF="../../../../stylesheet.css" TITLE="Style">
<SCRIPT type="text/javascript">
function windowTitle()
{
    if (location.href.indexOf('is-external=true') == -1) {
        parent.document.title="ReferenceType (Java Platform SE 6)";
    }
}
</SCRIPT>
<NOSCRIPT>
</NOSCRIPT>
</HEAD>
<BODY BGCOLOR="white" onload="windowTitle();">
<HR>
<!-- ========= START OF TOP NAVBAR ======= -->
<A NAME="navbar_top"><!-- --></A>
<A HREF="#skip-navbar_top" title="Skip navigation links"></A>
<TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY="">
<TR>
<TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1">
<A NAME="navbar_top_firstrow"><!-- --></A>
<TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY="">
<TR ALIGN="center" VALIGN="top">
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A>&nbsp;</TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A>&nbsp;</TD>
<TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> &nbsp;<FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT>&nbsp;</TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="class-use/ReferenceType.html"><FONT CLASS="NavBarFont1"><B>Use</B></FONT></A>&nbsp;</TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A>&nbsp;</TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A>&nbsp;</TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../index-files/index-1.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A>&nbsp;</TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A>&nbsp;</TD>
</TR>
</TABLE>
</TD>
<TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM>
<b>Java<sup><font size=-2>TM</font></sup>&nbsp;Platform<br>Standard&nbsp;Ed. 6</b></EM>
</TD>
</TR>
<TR>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
&nbsp;<A HREF="../../../../javax/lang/model/type/PrimitiveType.html" title="interface in javax.lang.model.type"><B>Prev Class</B></A>&nbsp;
&nbsp;<A HREF="../../../../javax/lang/model/type/TypeKind.html" title="enum in javax.lang.model.type"><B>Next Class</B></A></FONT></TD>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../index.html?javax/lang/model/type/ReferenceType.html" target="_top"><B>Frames</B></A> &nbsp;
&nbsp;<A HREF="ReferenceType.html" target="_top"><B>No Frames</B></A> &nbsp;
&nbsp;<SCRIPT type="text/javascript">
<!--
if(window==top) {
document.writeln('<A HREF="../../../../allclasses-noframe.html"><B>All Classes</B></A>');
}
//-->
</SCRIPT>
<NOSCRIPT>
<A HREF="../../../../allclasses-noframe.html"><B>All Classes</B></A>
</NOSCRIPT>
</FONT></TD>
</TR>
<TR>
<TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2">
Summary:&nbsp;Nested&nbsp;|&nbsp;Field&nbsp;|&nbsp;Constr&nbsp;|&nbsp;Method</FONT></TD>
<TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2">
Detail:&nbsp;Field&nbsp;|&nbsp;Constr&nbsp;|&nbsp;Method</FONT></TD>
</TR>
</TABLE>
<A NAME="skip-navbar_top"></A>
<!-- ========= END OF TOP NAVBAR ========= -->
<HR>
<!-- ======== START OF CLASS DATA ======== -->
<H2>
<FONT SIZE="-1">
javax.lang.model.type</FONT>
<BR>
Interface ReferenceType</H2>
<DL>
<DT><B>All Superinterfaces:</B>
<DD><A HREF="../../../../javax/lang/model/type/TypeMirror.html" title="interface in javax.lang.model.type">TypeMirror</A></DD>
</DL>
<DL>
<DT><B>All Known Subinterfaces:</B>
<DD><A HREF="../../../../javax/lang/model/type/ArrayType.html" title="interface in javax.lang.model.type">ArrayType</A>, <A HREF="../../../../javax/lang/model/type/DeclaredType.html" title="interface in javax.lang.model.type">DeclaredType</A>, <A HREF="../../../../javax/lang/model/type/ErrorType.html" title="interface in javax.lang.model.type">ErrorType</A>, <A HREF="../../../../javax/lang/model/type/NullType.html" title="interface in javax.lang.model.type">NullType</A>, <A HREF="../../../../javax/lang/model/type/TypeVariable.html" title="interface in javax.lang.model.type">TypeVariable</A></DD>
</DL>
<HR>
<DL>
<DT><PRE>public interface <B>ReferenceType</B><DT>extends <A HREF="../../../../javax/lang/model/type/TypeMirror.html" title="interface in javax.lang.model.type">TypeMirror</A></DL>
</PRE>
<P>
Represents a reference type. These include class and interface types, array types, type variables, and the null type.
<P>
<P>
<DL>
<DT><B>Since:</B></DT>
<DD>1.6</DD>
</DL>
<HR>
<P>
<!-- ========== METHOD SUMMARY =========== -->
<A NAME="method_summary"><!-- --></A>
<TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY="">
<TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor">
<TH ALIGN="left" COLSPAN="2"><FONT SIZE="+2">
<B>Method Summary</B></FONT></TH>
</TR>
</TABLE>
&nbsp;<A NAME="methods_inherited_from_class_javax.lang.model.type.TypeMirror"><!-- --></A>
<TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY="">
<TR BGCOLOR="#EEEEFF" CLASS="TableSubHeadingColor">
<TH ALIGN="left"><B>Methods inherited from interface javax.lang.model.type.<A HREF="../../../../javax/lang/model/type/TypeMirror.html" title="interface in javax.lang.model.type">TypeMirror</A></B></TH>
</TR>
<TR BGCOLOR="white" CLASS="TableRowColor">
<TD><CODE><A HREF="../../../../javax/lang/model/type/TypeMirror.html#accept(javax.lang.model.type.TypeVisitor, P)">accept</A>, <A HREF="../../../../javax/lang/model/type/TypeMirror.html#equals(java.lang.Object)">equals</A>, <A HREF="../../../../javax/lang/model/type/TypeMirror.html#getKind()">getKind</A>, <A HREF="../../../../javax/lang/model/type/TypeMirror.html#hashCode()">hashCode</A>, <A HREF="../../../../javax/lang/model/type/TypeMirror.html#toString()">toString</A></CODE></TD>
</TR>
</TABLE>
&nbsp;
<P>
<!-- ========= END OF CLASS DATA ========= -->
<HR>
<font size="-1"><a href="http://bugs.sun.com/services/bugreport/index.jsp">Submit a bug or feature</a><br>For further API reference and developer documentation, see the <a href="http://java.sun.com/javase/6/webnotes/devdocs-vs-specs.html">Java SE Developer Documentation</a>. That documentation contains more detailed, developer-targeted descriptions, with conceptual overviews, definitions of terms, workarounds, and working code examples.
<p>Copyright 2007 Sun Microsystems, Inc. All rights reserved. Use is subject to <a href="http://java.sun.com/javase/6/docs/legal/license.html">license terms</a>. Also see the <a href="http://java.sun.com/docs/redist.html">documentation redistribution policy</a>.</font>
</BODY>
</HTML>
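The Javadoc above lists the subinterfaces of ReferenceType: ArrayType, DeclaredType, ErrorType, NullType, and TypeVariable. As an illustrative sketch (the class name `ReferenceKinds` and its helper are mine, not part of the documented API), each of those mirror interfaces corresponds to a `javax.lang.model.type.TypeKind` constant:

```java
import javax.lang.model.type.TypeKind;
import java.util.EnumSet;

public class ReferenceKinds {
    // TypeKind constants whose TypeMirrors implement ReferenceType,
    // per the subinterface list in the page above.
    static final EnumSet<TypeKind> REFERENCE_KINDS = EnumSet.of(
            TypeKind.DECLARED,  // class and interface types (DeclaredType)
            TypeKind.ARRAY,     // array types (ArrayType)
            TypeKind.TYPEVAR,   // type variables (TypeVariable)
            TypeKind.NULL,      // the null type (NullType)
            TypeKind.ERROR);    // unresolvable types (ErrorType)

    static boolean isReferenceKind(TypeKind kind) {
        return REFERENCE_KINDS.contains(kind);
    }

    public static void main(String[] args) {
        for (TypeKind k : TypeKind.values()) {
            if (isReferenceKind(k)) {
                System.out.println(k + " -> ReferenceType");
            }
        }
    }
}
```

Primitive kinds such as `INT` fall under PrimitiveType (the "Prev Class" link above) rather than ReferenceType.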
Telling History The story collectors Some notable Lincolnshire folk tale collectors include: Gervase Holles (1607-1675) was collecting popular antiquities, later to be known as folklore, long before the latter word was even devised. Born in Grimsby, he was a prominent Royalist and a lawyer. He served as Mayor of Grimsby and MP for the town both before and after the English Civil War. Abraham de la Pryme (1671-1704) was also an antiquarian. Born in Hatfield, on the Levels near Doncaster, he became a curate in Broughton and then Hull. He kept a diary, Ephemeris Vitae: A Diary of My Own Life, from the age of twelve until his death, in which he recorded items of interest. George Stovin (1696-1780) was born at Tetley Hall, in the parish of Crowle, and lived the life of a country gentleman. His main interest was researching the topography and antiquities of the area of his birth. He was particularly interested in the drainage of the Level of Hatfield Chase, where he had inherited estates. It was said that he rarely left the Levels, regarding "no part of England comparable to the Isle of Axholme, and no town equal to Crowle." Late in life, he did however cross the Trent, where he took up residence in Winterton, "in a little cottage which he had made Arcadian with honeysuckles and other flowers, where he was to be seen with his pipe every morning at five, and where he was accustomed to amuse his neighbours with the variety of anecdote with which his memory supplied him." Henry Evan Smith (1828-1908) was a local correspondent for the Stamford Mercury, North Lincolnshire Star and other papers. He left numerous manuscript notes, many of which are now in the Lincolnshire Archives. James Conway Walter (1831-1913), the eldest son of a Lincolnshire clergyman, was born in Langton near Horncastle. He went to Cambridge University and became vicar of St. Andrews, Langton (Woodhall Spa) and Kirkstead, Lincolnshire in 1869.
In 1877 his father died of a chill caught whilst walking six miles home from a meeting in a snowstorm. After 21 years he relinquished the latter position and became rector at Langton. During his career he became a highly respected Lincolnshire historian, and wrote a number of books and papers on Lincolnshire history, including 'The Legend of Bayards Leap', which was included in Andrews, W. (ed.) (1891) Bygone Lincolnshire. He also contributed various items on folklore to Mabel Peacock, including the script to a rather bawdy Lincolnshire Plough Play that had been performed in the kitchen of his Rectory, about the year 1889. Edward Peacock (1831-1915) was the only son of Edward Shaw Peacock, a wealthy Lincolnshire landowner and agriculturalist. The young Edward was educated at home and developed an interest in history and archaeology. He lived at Bottesford Manor House, which is situated on the outskirts of modern Scunthorpe. He married Lucy Anne Wetherell from America and the couple had six children. In 1892, due to financial pressures, Edward and his daughter Mabel moved to Dunstan House, Kirton in Lindsey. He was an avid collector of folklore, which he sent to various publications including local newspapers. One of his informants was his aunt Mary Ann Ashton, who lived at Northorpe, near Kirton in Lindsey. Mary Ann had lost her husband only five years after her marriage and lived as a widow in the village. At one time he was preparing to write The Folklore of Lincolnshire, a task that was eventually completed by his daughter Mabel. Throughout his life, Edward researched the Lincolnshire Dialect and he continued submitting short items to Folklore until 1908. He also wrote four novels, none of which were particularly successful. Along with the Peacocks there were two other notable people collecting folklore within Lincolnshire in the later nineteenth century.
Robert Marshall Heanley (1848-1915), the eldest son of a wealthy farmer, was born in Croft in the south east of the county. He entered the church in 1875 and became assistant curate at Burgh-le-Marsh before becoming rector of Wainfleet All Saints and perpetual curate of Wainfleet St Thomas until 1889. His parishes were close to the place of his birth and he drew on this and boyhood memories to produce pieces on Marshland folklore for Lincolnshire Notes and Queries (1891), for Folk-Lore (1898) and for an article on 'The Vikings: Traces of their Folklore in Marshland' for the Saga Book of the Viking Club (1903). James Alpass Penny (1855-1944) was born in Crewkerne, Somerset, where his father was headmaster of the Grammar School, and became vicar of Stixwould, a village near Horncastle, in 1888. Seven years later he moved five miles to become vicar of Wispington until 1914. He spent his later years, suffering from blindness, at Woodhall Spa, also in the Horncastle area. He produced two collections of folklore from around the locality which also include popular memories and incidents from his own experience as a parish clergyman. Willingham Franklin Rawnsley (1845-1927), though born in Hertfordshire, was the eldest son of the rector of Halton Holgate, Lincolnshire. Willingham's brother, Canon Hardwicke Drummond Rawnsley, was one of the founders of the National Trust, and Willingham shared this interest in the environment. Though he was for a number of years proprietor of Winton House, a private school in Winchester, and spent his retirement in Guildford, he kept his interest in the county of his ancestors, as evidenced in his travel book The Highways and Byways of Lincolnshire (1914). He was also related by marriage to Tennyson, and an expert on his poems. Sidney Oldall Addy (1848-1933) was an antiquary and man of letters. Born at Norton, Derbyshire, the son of a colliery owner, he studied Classics at Lincoln College, Oxford.
He became a solicitor and spent much of his life in Sheffield, combining his work with his other interests. He wrote a number of books, including a collection of tales, Household Tales with other Traditional Remains Collected in the Counties of York, Lincoln, Derby. In the introduction, he noted that "the ancient stories, beautiful or highly humorous even in their decay, linger with us here and there in England, and, like rare plants, may be found by those who seek them." He collected all the tales from the oral tradition, rather than printed sources, and wrote them up using the words of the narrator but without the dialect, with the exception of obsolete words. Mabel Geraldine Woodruffe Peacock (1856-1920) followed on from the work of her father in submitting papers and notes to Folklore between 1887 and 1917, quite a number of which were based on Lincolnshire folklore. She published three collections of stories and verse, Tales and Rhymes in Lindsey Folk-Speech (1886), Tales fra Linkisheere (1889) and Lincolnshire Tales – the recollections of Eli Twigg (1897), which include a number of Lincolnshire versions of traditional tales and rhymes telling stories of boggarts, fairies and fools. In 1902 she commenced correspondence with the Folklore Society on the subject of taking over from her father in the collection of Lincolnshire folklore from printed sources. This work, which was carried out with Mrs Gutch, was completed and published in 1908. Mabel did not collect folklore in the field; indeed, a neighbour is recorded as saying that she rarely left her house. Instead she gleaned information from friends and acquaintances. Marie Clothilde Balfour (1862-1931) was born in Edinburgh, but spent some of her childhood in New Zealand. She was a cousin of Robert Louis Stevenson, and in 1885 she married another of her cousins, James Craig Balfour, a physician and surgeon.
Between 1887 and 1889 the couple lived in the Vicarage at Redbourne, North Lincolnshire, where Marie collected a number of stories from the local people. A description of this experience was included in a semi-biographical novel, and outlined earlier in this introduction. The first of the tales collected by Marie was included by Andrew Lang in both Longman's Magazine and Folk-Lore, the journal of the Folk Lore Society. This received a favourable reaction and probably prompted her to send the Legends to the Society. Sadly, Balfour was not skilled in dialect but attempted to record the tales accurately. This misguided attempt led to later criticism of the authenticity of the stories, but an investigation into their content, and the views of people from the area, lead me to conclude that they were indeed from the Lincolnshire Carrs. Leland Lewis Duncan (1862-1923) was born in Kent and became a life member of Kent Archaeological Society. He was made a Fellow of the Society of Antiquaries in London in 1890. Ethel H. Rudkin (1893-1985) followed in the footsteps of Mabel Peacock; indeed, she recalled visits made as a child with her parents to the Kirton in Lindsey home of the Peacocks. Born Ethel Hutchinson in Willoughton, Lincolnshire, her mother's family were the Pickthalls of Suffolk. An only child, she was educated in Scarborough before gaining employment as a governess. Whilst at a point-to-point meeting at Burton near Lincoln in 1914, she met, and three years later married, George Rudkin, who served in the War, firstly in the Yeomanry, and then as a Lieutenant in the Machine Gun Corps. Sadly George died on 28th October 1918 from the influenza epidemic, which is believed to have killed 250,000 people in Britain and millions worldwide. His mother died on the same day. As his widow, Ethel gained George's share in the Rudkin family farm, and for a time she helped in this venture.
By 1927 she had returned to live with her parents in Willoughton, where she stayed to look after them in their old age. A devoted and lifelong collector of Lincolnshire oral history and folklore, she travelled up and down the county in the 1920s and 1930s, in a bull-nosed Morris car, collecting evidence directly from the rural villagers on a broad range of topics. She submitted a number of articles to Folklore and produced a book, Lincolnshire Folklore, which was published at her own expense in 1936. The preface to the book stated that everything in it was "authentic and collected between World War One and Two from people not books." Ethel also had a keen interest in archaeology, local and social history and dialect, and eventually became an acknowledged expert in all these subjects within the county. Her home became a place of pilgrimage for researchers wishing to view an enormous quantity of books, manuscripts, artefacts, memorabilia and farm implements, and also enjoy a cup of tea with the lively lady herself. In the 1970s Ethel moved to a cottage in Toynton All Saints, where she spent the remaining years of her life. She was instrumental in cataloguing the vast collection of artefacts that formed the core of the Museum of Lincolnshire Life, and much of her collection was donated to the Lincolnshire Museum Service after her death. Ruth Lyndall Tongue (1898-1981) is normally regarded as a Somerset folklorist, though she was born in Staffordshire. Her mother Betsy Mabel Jones was from Whitchurch, Shropshire, but her father, a Congregational Minister, had been born in Louth, Lincolnshire, and both his parents and grandparents were natives of Lincolnshire. Ruth recalled hearing stories from her Aunt Annie Tongue in Alkborough, and from Great Aunt Hetty Carr of Blyton Farm, near Gainsborough. Annie Tongue, Ruth said, had heard stories from her female ancestors, who were also born and raised in Lincolnshire.
Ruth's grandfather, Joseph Tongue, was, like her own father, a minister (Primitive Methodist), but in his later years he moved back to the birthplace of his wife. They lived in West Halton Lane, Alkborough.