<?php
return [
/*
* All models that use the GetsCleanedUp interface in these directories will be cleaned.
*/
'directories' => [
app_path('Models'),
],
/*
* All models in this array that use the GetsCleanedUp interface will be cleaned.
*/
'models' => [
// App\LogItem::class,
],
];
To highlight your natural beauty, to feel better, or to treat yourself or your loved ones to the royal treatment, studio Gorica has special offers and packages for you. Do not forget to see the other services and treatments that the House of Health and Beauty has to offer.
Enter your e-mail and we'll add you to our mailing list so you will be the first to know all the news and special offers from the House of Health and Beauty! As a sign of gratitude you will receive a special gift that you can use in studio Gorica!
Q: Can't keep SSH connection to VM using gcloud-sdk I have a google cloud Deep Learning Virtual Machine Image for PyTorch that uses an SSH connection to connect to the Jupyter Notebook on it. How can I change what I am currently doing so that the Jupyter Notebook remains alive even when I close my laptop/temporarily disconnect from internet?
Currently, after starting my VM and opening a tmux window, I start up the Jupyter Notebook and its SSH connection with this command:
gcloud compute ssh <my-server-name> -- -L 8080:localhost:8080
This code is taken from the official docs for the deep learning images here: https://cloud.google.com/deep-learning-vm/docs/jupyter
I can then connect at localhost:8080 and do what I need to. However, if I start training a model for a long time and need to close my laptop, when I re-open it my ssh connection breaks, the Jupyter Notebook is turned off, and my model that is training is interrupted.
How can I keep this Jupyter Notebook alive and be able to reconnect to it later?
NB. I used to use the Google Cloud browser SSH option and once in the server start a tmux window and the jupyter notebook within it. This worked great and meant the notebook was always alive. However, with the Google Cloud images that have CUDA and Jupyter preinstalled, this doesn't work and the only way I have been able to connect is through the above command.
A: I have faced this problem before on GCP too and found a simple way to resolve it. Once you have ssh'd into the compute engine, run the Linux screen command and you will find yourself in a virtual terminal (you may open many terminals in parallel), and it is here you will want to run your long-running job.
Once you have started the job, detach from the screen using Ctrl+a followed by d. Once detached, you can exit the VM, reconnect to it later, run screen -r, and you will find that your job is still running.
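The workflow described above can be sketched as a short transcript (the session name and training command are illustrative, and these commands are meant to be run on the remote VM, not locally):

```shell
# On the VM (after connecting with: gcloud compute ssh <my-server-name> -- -L 8080:localhost:8080)
screen -S training              # start a named virtual terminal
python train_model.py           # run the long-running job inside it
# press Ctrl+a, then d, to detach; the job keeps running after you disconnect

# Later, after reconnecting to the VM:
screen -ls                      # list detached sessions
screen -r training              # reattach to the running job
```

Naming the session with -S makes reattaching unambiguous when more than one detached session exists.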
Of course, you can do a lot of cool stuff with the screen command and would encourage you to read some of the tutorials found here.
NOTE: Please ensure that your Compute Engine instance is not a Pre-emptible machine!
Let me know if this helps!
A: I think it's better to install Jupyter as a server, so your job can keep running even when you disconnect.
There is something else you might also want to know.
This is not the multi-user server you are looking for. This document describes how you can run a public server with a single user. This should only be done by someone who wants remote access to their personal machine. Even so, doing this requires a thorough understanding of the set-up's limitations and security implications. If you allow multiple users to access a notebook server as it is described in this document, their commands may collide, clobber and overwrite each other.
If you want a multi-user server, the official solution is JupyterHub. To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your users on a network. This may run over the public internet, but doing so introduces additional security concerns.
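If you do run the notebook as a standalone server, a minimal way to keep it alive across SSH disconnects (the flags are standard Jupyter options; the log and PID paths are illustrative) is:

```shell
# Start the notebook server detached from the terminal so it survives SSH hangups
nohup jupyter notebook --no-browser --port=8080 > ~/jupyter.log 2>&1 &
echo $! > ~/jupyter.pid         # remember the PID so the server can be stopped later
```

This is a lighter-weight alternative to screen/tmux when only the notebook process itself needs to persist.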
package com.googlecode.pngtastic.core.processing;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.InflaterInputStream;
import com.googlecode.pngtastic.core.Logger;
/**
* Implements PNG compression and decompression
* Uses zopfli to compress: https://code.google.com/p/zopfli/
*
* @author rayvanderborght
*/
public class ZopfliCompressionHandler implements PngCompressionHandler {
private final Logger log;
private final String compressor; // e.g. /Users/ray/projects/pngtastic/lib/zopfli
public ZopfliCompressionHandler(Logger log, String compressor) {
this.log = log;
this.compressor = compressor;
}
/**
* {@inheritDoc}
*/
@Override
public byte[] inflate(ByteArrayOutputStream imageBytes) throws IOException {
InflaterInputStream inflater = new InflaterInputStream(new ByteArrayInputStream(imageBytes.toByteArray()));
ByteArrayOutputStream inflatedOut = new ByteArrayOutputStream();
try {
int readLength;
byte[] block = new byte[8192];
while ((readLength = inflater.read(block)) != -1) {
inflatedOut.write(block, 0, readLength);
}
} finally {
inflater.close();
}
return inflatedOut.toByteArray();
}
/**
* {@inheritDoc}
*/
@Override
public byte[] deflate(byte[] inflatedImageData, Integer compressionLevel, boolean concurrent) throws IOException {
List<byte[]> results = deflateImageDataSerially(inflatedImageData, compressionLevel);
byte[] result = null;
for (int i = 0; i < results.size(); i++) {
byte[] data = results.get(i);
if (result == null || (data.length < result.length)) {
result = data;
}
}
this.log.debug("Image bytes=%d", (result == null) ? -1 : result.length);
return result;
}
/* */
private List<byte[]> deflateImageDataSerially(byte[] inflatedImageData, Integer compressionLevel) {
List<byte[]> results = new ArrayList<byte[]>();
try {
results.add(deflateImageData(inflatedImageData, compressionLevel));
} catch (Throwable e) {
log.error("Uncaught Exception: %s", e.getMessage());
}
return results;
}
/* */
private byte[] deflateImageData(byte[] inflatedImageData, Integer compressionLevel) throws IOException {
byte[] result = deflate(inflatedImageData).toByteArray();
log.debug("Compression strategy: zopfli, compression level=%d, bytes=%d", compressionLevel, (result == null) ? -1 : result.length);
return result;
}
/* */
private ByteArrayOutputStream deflate(byte[] inflatedImageData) throws IOException {
File imageData = null;
try {
imageData = File.createTempFile("imagedata", ".zopfli");
writeFileOutputStream(imageData, inflatedImageData);
ProcessBuilder p = new ProcessBuilder(compressor, "-c", "--zlib", imageData.getCanonicalPath());
Process process = p.start();
ByteArrayOutputStream deflatedOut = new ByteArrayOutputStream();
int byteCount;
byte[] data = new byte[8192];
InputStream s = process.getInputStream();
while ((byteCount = s.read(data, 0, data.length)) != -1) {
deflatedOut.write(data, 0, byteCount);
}
deflatedOut.flush();
return deflatedOut;
} finally {
if (imageData != null) {
imageData.delete();
}
}
}
private FileOutputStream writeFileOutputStream(File out, byte[] bytes) throws IOException {
FileOutputStream outs = null;
try {
outs = new FileOutputStream(out);
outs.write(bytes);
} finally {
if (outs != null) {
outs.close();
}
}
return outs;
}
}
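The inflate/deflate pair above can be exercised without the external zopfli binary: the JDK's Deflater emits the same zlib container that zopfli's --zlib flag produces, so a round trip through java.util.zip demonstrates the 8 KB stream-copy pattern used in inflate(). This is a standalone sketch, not part of pngtastic:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class ZlibRoundTrip {

    // Compress with the JDK's Deflater (a stand-in for the external zopfli
    // binary, which writes the same zlib container at a better ratio).
    static byte[] deflate(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DeflaterOutputStream dos =
                new DeflaterOutputStream(out, new Deflater(Deflater.BEST_COMPRESSION));
        dos.write(data);
        dos.finish();
        dos.close();
        return out.toByteArray();
    }

    // Same loop as ZopfliCompressionHandler.inflate(): copy 8 KB blocks until EOF.
    static byte[] inflate(byte[] data) throws IOException {
        InflaterInputStream in = new InflaterInputStream(new ByteArrayInputStream(data));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] block = new byte[8192];
        int n;
        while ((n = in.read(block)) != -1) {
            out.write(block, 0, n);
        }
        in.close();
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "IDAT image data, IDAT image data".getBytes("US-ASCII");
        byte[] compressed = deflate(original);
        byte[] restored = inflate(compressed);
        System.out.println(Arrays.equals(original, restored)); // true
    }
}
```

The handler's deflate() path differs only in that the compressed bytes come from the zopfli subprocess's stdout instead of a DeflaterOutputStream.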
This list includes a number of television shows which have received negative reception from both critics and audiences alike, some of which are considered the worst of all time.
Criteria
Factors that can reflect poorly on a television series include inherently poor quality, the lack of a budget, rapid cancellation, very low viewership, offensive content, and negative impact on other series on the same channel. Multiple outlets have produced lists ranking the worst television series, including TV Guide, Entertainment Weekly and Mail Online. TV Guide published lists in 2002 and 2010, each of which had contemporary shows near the top of the list.
The following is a list of television series notable for negative reception—some of which are considered the worst of all time by critics, network executives, and viewers (with extremely low viewership despite high promotion). Situation comedy shows make up a large percentage, so they are listed in a separate page.
Animated shows
The Brothers Grunt Created by future Ed, Edd n Eddy creator Danny Antonucci, The Brothers Grunt premiered on MTV in August 1994 in the network's attempt to capitalize on its earlier success with Mike Judge's Beavis and Butt-Head, but the show was canceled after seven months and derided by critics and viewers for its gross-out content. Kenneth R. Clark of the Chicago Tribune wrote that MTV "created the most repulsive creatures ever to show up on a television screen". Charles Solomon of the Los Angeles Times deemed it a "sophomoric half-hour that leaves the viewer longing for the refined good taste of Alice Cooper." The Boston Globe called the show "moronic", while Steve Hall of The Indianapolis Star commented: "Compared to this ... Beavis and Butt-Head looks like a masterpiece of social satire." Jean Prescott of The Pantagraph, in 1999, cited The Brothers Grunt as an "animation disaster". In their 2002 book North of Everything: English-Canadian Cinema Since 1980, authors William Beard and Jerry White called the series a "failure". Writer David Hofstede included the show among his selection of "The 100 Dumbest Events In Television History" in 2004: "Given the ... grotesque appearance of the characters, it's not surprising that the series didn't last."
Bucky and Pepito The 1959 syndicated series Bucky and Pepito has been criticized for its poor production quality and racial stereotyping. It was produced by Sam Singer, who is referred to as "the Ed Wood of animation" by Jerry Beck for his low-budget and generally ill-reviewed style. The show was described by Fast Company technology editor Harry McCracken as setting "a standard for awfulness that no contemporary TV cartoon has managed to surpass". In his 2011 book Television Westerns: Six Decades of Sagebrush Sheriffs, Scalawags, and Sidewinders, author Alvin H. Marill wrote that the show "managed to set TV animation back to the early crude days", and castigated Pepito—who was voiced by white actor Dallas McKennon — as "pure Mexican stereotype—from the huge sombrero that covered his eyes to [his] slow, lazy ways ... mentioned in the show's theme song." Writer David Perlmutter described Bucky and Pepito as being "racially troubling" and having "poor animation and cliché-ridden writing". Media historian Hal Erickson called Pepito "non-politically correct [and] stereotyped" and the show's animation "arguably the worst of any TV cartoon of the 1950s." One episode was featured on Beck's Cartoon Brew webseries Cartoon Dump in 2007.
Father of the Pride Father of the Pride was a 2004 primetime computer-animated series that centered around a family of white lions whose titular patriarch stars in a Siegfried & Roy show in Las Vegas, but pre-release publicity was affected by Roy Horn being attacked by a tiger during a 2003 performance while the show was in production. Despite studio DreamWorks Animation marketing the show to younger audiences, NBC was forced to return $50,000 in funding to the Family Friendly Programming Forum after airing a series of promos during the 2004 Summer Olympics that showed characters making sexual references, and the program itself was panned by critics for its crude adult-oriented humor. The Las Vegas Sun commented: "Father of the Pride isn't suitable for children. Unless, of course, you consider references to sex acts and bestiality OK for younger ears." The combination of pre-release issues, negative reviews and poor ratings led to the show's cancellation after only thirteen episodes. Newsday named Father of the Pride one of the "worst shows of the 21st century", and The Daily Beast rated it among NBC's "most embarrassing flops of the last decade". Chris Longridge of Digital Spy said in 2017, "[It] didn't help that Roy Horn was attacked by one of his own tigers before the show got to air. File under catastrophic misjudgment."
Ren & Stimpy "Adult Party Cartoon" The Ren & Stimpy Show creator John Kricfalusi rebooted his original 1991 series for the relaunch of The National Network as Spike TV, as part of its new adult animation block. Ren & Stimpy "Adult Party Cartoon" premiered in June 2003 and contained significantly more vulgar content than its predecessor, which resulted in only three of nine ordered episodes being aired by the network. Rob Owen of the Pittsburgh Post-Gazette described it as "just plain gross. ... They don't pay me enough to watch cartoon characters eating snot." Charles Solomon of the Los Angeles Times criticized the show as "'adult' only in the sense that you wouldn't want kids watching them." Tucson Weekly and Exclaim! both labeled it "disastrous". DVD Talk praised the show's animation, "but the weak stories epitomize empty, heavy-handed shock value." Matt Schimkowitz of Splitsider opined that the show's intended audience was "the 16-year-olds who grew up on the [original] show and are ready to handle such hilarious topics as spousal abuse and eating boogers." Comic Book Resources, in 2018, called it "perhaps the most hated animated reboot ever." The negative reaction to the show tainted Kricfalusi's reputation and resulted in a 2016 pitch for a Ren & Stimpy feature film being rejected by Paramount Pictures. Billy West, who voiced Stimpy in the original series, had turned down Kricfalusi's offer to reprise the part in Adult Party Cartoon: "It would have damaged my career. It was one of the worst things I ever saw."
Velma Created by Charlie Grandy, this 2023 HBO Max series based on the popular Scooby-Doo franchise was pitched as a "love quadrangle" between the Mystery Inc. group, but primarily focuses on Velma Dinkley, voiced by Mindy Kaling, trying to solve a mystery regarding the disappearance of her mother, as well as the numerous murders of local teenage girls. The show was criticized by Liz Shannon Miller of Consequence for its unbalanced tone, lack of focus, absence of Scooby-Doo, overstuffed narrative and feeling "a bit PG in comparison to other adult animation currently in the works." In a review for IGN, Brittany Vincent criticized the series' portrayal of Velma, comparing the title character to "a biting, hateful version of Daria without the character growth", stating that this aspect of the show held it back from being what it strives to be. In Variety, Joshua Alston wrote that the show was "irreverent to a fault", extolling most of the humor but stating that it could have belonged to any other comedy series. He also criticized the portrayal of the Mystery Inc. gang, describing them as "just really unpleasant to spend time with."
Live-action children's shows
Minipops This 1983 Channel 4 show featured children between the ages of eight and twelve singing contemporary pop songs, often dressed and made up to resemble the original artists. The programme made many adult viewers uncomfortable when some of the juvenile singers imitated the provocative styles of adult performers. One performance by eight-year-old Joanna Fisher sparked outrage when, while performing the Sheena Easton song "9 to 5", she sang the lyrics "Night time is the right time/We make love". Despite the show's popularity, the resulting controversy caused Minipops to be cancelled after only six episodes. John Naughton of The Radio Times named Minipops the second-worst UK television show in history in 2006. The Daily Telegraph, in 2019, called Minipops an "all-round televisual travesty".
Barney & Friends
Ranking 50th on the TV Guide 2002 list of worst television shows in American history (the only public television series, PBS or otherwise, to make the list), Barney & Friends has been subject to a barrage of vicious and often dark anti-Barney humor and vitriol since its debut in the late 1980s (as the direct-to-video Barney and the Backyard Gang). Barney, and the intense backlash it drew, were the subject of the 2022 documentary miniseries I Love You, You Hate Me, a name partially taken from a schoolyard mockery of Barney's signature song. As media theorist W. J. T. Mitchell wrote in his book The Last Dinosaur Book: "Barney is on the receiving end of more hostility than just about any other popular cultural icon I can think of."
Dramas and soap operas
The Colbys Although much hyped in 1985, garnering high ratings for its premiere episode, and also the winner of a 1986 People's Choice Award for New Dramatic TV Program, The Colbys was ultimately a ratings disappointment. The first season finished in 35th place, in part due to competition with NBC's Cheers and Night Court on Thursday nights (by comparison, Dynasty finished in 7th place the same season). The series was renewed for a second season, but fared much worse. Now, not only being scheduled opposite NBC's multi-camera sitcoms, Cheers and Night Court, but also rival soap Knots Landing on CBS, and later in the season, having to compete with another multi-camera sitcom, The Cosby Show, The Colbys finished a dismal 76th for the year prompting the network to cancel the show. The series did not fare well among critics either, with one of its main criticisms being that it was simply a copy of Dynasty. The Los Angeles Times stated "It's not a spinoff, it's a clone—as close a replica as ABC and the Dynasty producers could concoct, right down to the credits." The Pittsburgh Press compared the scripts to Dick and Jane books for children. In their Directory To Primetime TV Shows, television historians Tim Brooks and Earle Marsh stated that the series likely failed because it was "too close a copy" of Dynasty. Even some cast members were vocal about their dissatisfaction with the series. In 1986, Barbara Stanwyck opted to end her contract and leave the series after its first season, reportedly calling it "a turkey" and telling co-creator Esther Shapiro "This is the biggest pile of garbage I ever did" and that "It's one thing to know you're making a lot of money off vulgarity, but when you don't know it's vulgar – it's plain stupid." On the contrary, Charlton Heston always had supported the show and stated its cancellation "was premature" as "we were coming closer to being a creative production team that could make the kind of show we'd planned on from the beginning."
Cop Rock This musical police procedural, which aired on ABC in 1990, has been cited as one of the worst television series ever, ranking No. 8 on TV Guide's 50 Worst TV Shows of All Time list in 2002. The show was a critical and commercial failure from the beginning and was canceled by the network after 11 episodes. Owing to the combination of its bizarre nature and its high-powered production talent (including an Emmy win for composer Randy Newman), it became infamous as one of the biggest television failures of the 1990s.
Eldorado This BBC soap opera from 1992 was, despite heavy advertising, a notorious flop. Many of the cast were inexperienced actors whose limitations were clearly exposed on such a new and ambitious project; the acting was derided as amateurish, while the attempt to appear more 'European' by having people speaking other languages without subtitles or bizarre/unconvincing accents was met by viewers with incomprehension and ridicule. Eldorado is remembered as an embarrassing failure for the BBC, and is sometimes used as a byword for any unsuccessful, poorly received or overhyped television programme.
Ironside (2013) NBC's remake of Raymond Burr's 1967 crime drama was canceled after only four episodes due to poor ratings, and drew protest beforehand from disabled actors for casting Blair Underwood as the wheelchair-using title character. NBC responded that an able-bodied actor was needed to perform flashback scenes, but actor Kurt Yaeger likened it to "having a white guy do blackface". Neil Genzlinger of The New York Times wrote that the show's "plodding writing" and Underwood's performance "makes the title character an unpleasant combination of macho and brusque," and Slant noted Underwood's "oppressive, angry" portrayal as "a protagonist who believes his impairment gives him the authority to act like a total ass". The show was described by Complex as "an eye-rolling, monotonous, procedural mess", and by the St. Louis Post-Dispatch as an "unnecessary remake" that was "too grim and unengaging". Tim Goodman of The Hollywood Reporter commented, "It's just another detective show. And it's not even a very good one." The A.V. Club, Rolling Stone, New York Post, and USA Today named Ironside among their worst shows of 2013.
Skins (U.S. remake) MTV's 2011 remake of the 2007 British series generated controversy over its sexual content and raised accusations of child pornography, since many of the actors were under the age of 18. Outcry from the Parents Television Council, along with numerous companies pulling their advertising from the program, led to the series being canceled after one season of ten episodes.
The Spike Irish drama series on RTÉ, set at a secondary school. Episode five featured "the briefest glimpse of naked flesh". The episode sparked debate in Dáil Éireann and was condemned by the Taoiseach Jack Lynch, despite him having never seen the programme. On the day that the sixth episode was due to air it was axed. The remaining episodes remain locked away and have neither been broadcast on RTÉ nor viewed by members of the general public. The Spike was later featured on RTÉ's scandal series, Scannal, with the Irish Independent naming it as one of their "Top 10 Worst Irish TV Programmes".
Supertrain Supertrain was the most expensive series ever aired in the United States at the time. The production was beset by problems including a model train that crashed. While the series was heavily advertised during the 1978–79 season, it suffered from poor reviews and low ratings. Despite attempts to salvage the show by reworking the cast, it never took off and left the air after only three months. NBC, which had produced the show itself, with help from Dark Shadows producer Dan Curtis, was unable to recoup its losses. Combined with the U.S. boycott of the 1980 Summer Olympics the following season, which cost NBC millions in ad revenue, the series nearly bankrupted the network. For these reasons, Supertrain has been called one of the greatest television flops. The A.V. Club noted that Supertrain has a reputation as "one of the worst television series ever made...it was hugely expensive, little watched, and critically derided".
Triangle A soap opera about a British ferry that starred Kate O'Mara, Triangle is remembered as "some of the most mockable British television ever produced". The series is even humorously mentioned in passing in the BBC comedy series The Young Ones - "Even Triangle has better furniture than this!"
Viva Laughlin CBS's 2007 American adaptation of the British series Blackpool lasted only two episodes, one in Australia. Like the aforementioned Cop Rock, the series was an attempt to create a musical TV drama; in this case, the series had a fatal flaw in that the lead actors sang over hit records with the original vocal tracks intact. The opening line of The New York Times review said, "Viva Laughlin on CBS may well be the worst new show of the season, but is it the worst show in the history of television?" Newsday's review started with, "The stud is a dud. And that's only the first of a dozen problems with CBS' admirably ambitious but jaw-droppingly wrongheaded new musical/murder mystery/family drama Viva Laughlin. Let us count the ways it bombs..."
Fantasy and science fiction shows
Galactica 1980 The 1979 cancellation of Battlestar Galactica prompted a letter-writing campaign by fans that convinced ABC to revive the show as Galactica 1980, but with a significantly reduced budget that resulted in the setting being changed to Earth three decades after the events of the original program, while the cast was overhauled save for Lorne Greene and Herbert Jefferson Jr. Galactica 1980 was negatively received as a result and canceled after ten episodes. GamesRadar+ named the show among its "Top 25 Worst Sci-Fi and Fantasy TV Shows Ever" in 2012, lambasting its "cardboard cut-out heroes" and having "more loathsome kids than any other SF show ever." Gordon Jackson of io9 criticized it as "ill-advised" and "lack[ing] any of the zest of the original series." Carol Pinchefsky of Syfy wrote in 2017, "[P]lease, oh please, let's not think about Galactica 1980", and The Guardian called the show "woeful". Luke Y. Thompson of Nerdist deemed it "extremely difficult to defend," and considered the absence of original series star Richard Hatch a factor in its demise. Hatch had rejected reprising his role as Captain Apollo, as he felt the changes "ruined the story. I just wasn't interested." In 2020, 40 years after the show's broadcast, Medium described Galactica 1980 as "having earned its dubious place in the history of televised science fiction".
Inhumans The 2017 eight-episode miniseries—based on the Marvel Comics race of the same name—was canceled by ABC after one season due to low ratings, and is regarded by critics as one of the worst works by Marvel. The IMAX premiere of the first two episodes was poorly received and grossed only $2.6 million in its opening weekend, with Comic Book Resources commenting that "Inhumans is already a disaster" that "sounded a sour note with fans". The Hollywood Reporter criticized the "poorly developed characters [and] confusing superpowers", and Entertainment Weekly noted the "terrible acting". The series was described as "look[ing] like the worst Marvel show out there" by The New York Times, "a disappointment on every level" by IGN, "a messy, miserable show" by io9, and by Vox as "jaw-droppingly awful television. Even worse, it's boring." Uproxx opined that Inhumans "has no reason to exist except that Marvel wanted it to, by any means necessary." IndieWire declared that the series was "the worst thing Marvel has done in decades".
Manimal Manimal was scheduled by NBC opposite CBS's Dallas, and was canceled after eight episodes due to low ratings. It was a part of NBC's 1983 fall line-up, which featured eight other series that were canceled before their first seasons ended (including Jennifer Slept Here and Bay City Blues). John Javna's book The Best of Science Fiction TV rated Manimal among its "Worst Science Fiction Shows of All Time". TV Guide ranked Manimal number 15 on their list of the 50 Worst TV Shows of All Time in 2002. In 2004, readers of the British trade magazine Broadcast voted Manimal as one of the worst television shows ever exported by the US to the UK.
Game shows
Don't Scare the Hare The premiere of the 2011 British game show hosted by Jason Bradbury drew 1.93 million viewers for a 15% audience share, but was canceled after only three of nine planned episodes due to poor ratings. Jim Shelley of the Daily Mirror wrote: "The idiots playing might have enjoyed themselves but even toddlers would have found the games dull and Jason creepy." The Stage observed: "The actual games are pretty feeble and uninspired, leaving the poor hare and his robotic novelty value to carry the show." John Anson of the Lancashire Evening Post opined: "If you're going to have a gimmick in your game show at least make it entertaining. ... Make the questions simple, involve bunches of kids and hey, presto it works... But primetime Saturday night viewing it ain't." Alex May of Now Then magazine called the show "without question, the worst game show in the world, ever." Complex said in 2011, "Don't Scare The Hare was cancelled after only three episodes aired for a reason—the show was absolutely terrible". Caroline Westbrook of Metro listed the "frankly bizarre" show among her 2013 selection of "so bad they're brilliant" game shows. Digital Spy rated Don't Scare the Hare sixth among the "10 of the worst TV shows of all time" in 2016, and Scott Harris-King of Grunge included it in his 2017 list of "dumb game shows someone should've been fired for".
The Million Second Quiz Marred by a confusing and boring format that jeopardized the health of its contestants, excessive and unwarranted hype, banal questions, and a random decision to inflate the grand prize after it was won solely to set the record for most money won on a single game show, The Million Second Quiz was lambasted by critics and suffered from collapsing ratings throughout its short run in 2013. A review for The A.V. Club was indicative of the reception: "so deeply flawed and so universally unpopular that it is not going to remain in anyone's memory for long."
Naked Jungle A UK game show on Channel 5 that revolved around naturists performing an assault course. Naked Jungle was savaged by critics, denounced by nudists for being exploitative and even condemned in the House of Commons of the United Kingdom. A group of TV historians later voted it the worst British TV show ever. Host Keith Chegwin later called presenting the show "the worst career move I made in my entire life".
Shafted A UK game show aired on ITV presented by Robert Kilroy-Silk. It is most notorious for Kilroy-Silk's laughable actions on the show, which have been frequently mocked on the popular satirical show Have I Got News for You since late 2004. Particularly notable is his delivery of the show's tagline, "Their fate will be in each other's hands as they decide whether to share or to shaft", and the associated hand actions. The show was dropped just four episodes after it started in 2001, and was listed as the worst British television show of the 2000s in the Penguin TV Companion (2006). A 2012 postmortem of the show read: "Nothing seemed to work for Shafted from the start. It looked derivative, it sounded derivative, the format was pretty unfair, the host was bad, and it just wasn't that interesting. So basically nothing worked out." In an article on ITV programmes, Stuart Heritage described Shafted as "Hamfisted" and stated it was "deservedly remembered as one of the worst television programmes ever made".
Three's a Crowd A game show created and produced by Chuck Barris, and hosted by Jim Peck, which aired in syndication from 1979 to 1980. In it, a male contestant was asked pointed personal questions, which were then asked of both his wife and secretary, to find out which of the two knew him better. David Hofstede, author of What Were They Thinking?: The 100 Dumbest Events in Television History wrote that it "offered the chance to watch a marriage dissolve on camera years before Jerry Springer", and noted that it received backlash from the United Auto Workers and the National Organization for Women. By the time the controversy settled in 1980, Three's a Crowd and all three of Barris's other shows (The Dating Game, The Newlywed Game and The Gong Show) had been canceled. His next two projects, revivals of Treasure Hunt and Camouflage, neither of which lasted beyond one season, were also failures; Barris, whose reputation was effectively ruined by both this and some not-safe-for-TV incidents Barris allowed and encouraged on The Gong Show, would never again create a new game show and would stick to revivals of his previously existing shows for the rest of his career.
Who's Whose The 1951 panel game show was described at the time as "one of the most poorly produced TV shows yet to hit our living room screen" and "a miserable flop," while columnist Rex Lardner wrote that the show was "the worst ever to hit television." Who's Whose, rushed into production to fill a hole caused when The Goldbergs refused to comply with the Hollywood blacklist, was the first television series to be canceled after one episode, and its host, radio personality Phil Baker, had his contract bought out; it would be Baker's only television hosting role.
You're in the Picture The premiere of this 1961 CBS game show hosted by Jackie Gleason received reviews so hostile that the following Friday, Gleason appeared in the same time slot inside a stripped-down studio to give what Time magazine called an "inspiring post-mortem", asking rhetorically "how it was possible for a group of trained people to put on so big a flop." Time later cited You're in the Picture as one piece of evidence that the 1960–61 TV season was the "worst in the [then] 13-year history of U.S. network television."
News
The Morning Program On January 12, 1987, The Morning Program made its debut on CBS hosted by actress Mariette Hartley and Rolland Smith, former longtime anchor at WCBS-TV in New York City. Radio personality Mark McEwen handled the weather, while Bob Saget did comedy bits. Produced by the network's entertainment division, the show ran for 90 minutes (7:30–9am local time) behind a briefly expanded 90-minute CBS Early Morning News (6–7:30am local; although most larger affiliates pre-empted all or part of the 6–7am hour to produce a local morning newscast), which had dropped "Early" from its title. However, The Morning Program, with its awkward mix of news, entertainment, and comedy, became the joke of the industry, receiving scathing reviews. At one point, it generated the lowest ratings CBS had seen in the morning slot in five years. The format was aborted and the time slot returned to the news division after a ten-and-a-half-month run. Hartley and Smith were dumped, while Saget left to star on the ABC sitcom Full House, which premiered later that same year. A longtime producer summed up this version of the program upon its demise by saying, "...everyone thought we had the lowest ratings you could have in the morning. The Morning Program proved us wrong."
Reality television series
The Briefcase An American reality TV series created by Dave Broome that premiered on CBS on May 27, 2015. In each episode, two American families undergoing financial hardship are each given a briefcase containing $101,000, and must decide whether to keep all the money for themselves or give some or all of it to the other family. Over the course of 72 hours, each family learns about the other and makes a decision without knowing that the other family has also been given a briefcase with the same instructions. The Briefcase was met with largely negative reception from critics. Ken Tucker, critic-at-large of Yahoo! TV, described it as "cynical and repulsive" for "passing off its exploitation...as uplifting, inspirational TV." Jason Miller of Time.com called it "the worst reality TV show ever". Others compared the show to fictional films and television that pitted the needy against each other, such as the Twilight Zone episode "Button, Button", or The Hunger Games. A petition to end the show was started on Change.org and gained more than 60,000 supporters.
Here Comes Honey Boo Boo An American reality television series on TLC, featuring the family of child beauty pageant contestant Alana "Honey Boo Boo" Thompson. The show premiered on August 8, 2012. Thompson and her family originally rose to fame on TLC's reality series Toddlers & Tiaras. The show mainly revolves around Alana "Honey Boo Boo" Thompson and "Mama" June Shannon and their family's adventures in the southern town of McIntyre, Georgia. Critical reaction to the series was largely negative, with some characterizing the show as "offensive," "outrageous," and "exploitative," while others called it "must-see TV." The A.V. Club called the first episode a "horror story posing as a reality television program," with others worrying about potential child exploitation.
Jersey Shore A string of controversies over the U.S. MTV series documenting members of the Guido subculture made this series one of the most controversial in television history.
The One: Making a Music Star At the time of its premiere, according to overnight ratings from Nielsen Media Research, the first episode of The One was the lowest-rated series premiere in ABC history, and the second-worst such episode in the history of American broadcast television, scoring only 3.2 million total viewers (1.1 rating in the 18–49 demographic) and fifth place in its timeslot. In Canada, the premiere of The One on CBC had 236,000 viewers, which trailed far behind Canadian Idol on CTV and Rock Star: Supernova on Global, each scoring around one million viewers. The next night's results episode fared even worse in the United States ratings, sinking to a 1.0 rating in the 18–49 demographic, while the re-run of night 1's episode (which preceded the results show) plunged to an embarrassingly low 0.6 average in the vital demo ratings. The poor performance of the show contributed to ABC's lowest-rated night in the network's history (among 18–49s), finishing tied for sixth place. The series was ultimately cancelled after a second week of poor results. According to CBC executive Kirstine Layfield, in terms of resources and money, The One "had the most backing from ABC than any summer show has ever had (sic)." The One was touted as a show that would dethrone American Idol, then the most-watched show in the United States; such high expectations made the resounding public rejection of the series all the more spectacular. Canadian ratings dipped as low as 150,000 – not necessarily out of step with the CBC's usual summer ratings, although much lower than the broadcaster's stated expectations for primetime audiences, in the one-million range. The CBC initially insisted that despite the cancellation, a planned Canadian version might still go ahead, citing the success of the format in Quebec (Star Académie) and Britain (the BBC's Fame Academy).
The network confirmed that the show would not air in fall 2006 – in fact, the show had never been given a fall timeslot – but said the show was "still under development." Critical response was limited but generally negative. A 2018 article on TV By the Numbers identified the show as "the nadir of ABC's forays into music competitions," among a list of seven major flops in the format ABC had attempted in the 21st century (the article noted in its headline that "ABC is terrible at music shows"). John de Mol Jr., the creator of The One, would later find much greater success with his next music-based reality contest, The Voice.
The Swan The 2004 plastic surgery reality series has been panned by multiple critics. Robert Bianco of USA Today called The Swan "hurtful and repellent even by reality's constantly plummeting standards". Journalist Jennifer Pozner, in her book Reality Bites Back, calls The Swan "the most sadistic reality series of the decade". Journalist Chris Hedges also criticized the show in his 2009 book Empire of Illusion, writing "The Swan's transparent message is that once these women have been surgically 'corrected' to resemble mainstream celebrity beauty as closely as possible, their problems will be solved". Feminist scholar Susan J. Douglas criticized the show in her book The Rise of Enlightened Sexism for its continuation of a negative female body image, claiming that "it made all too explicit the narrow physical standards to which women are expected to conform, the sad degree to which women internalize these standards, the lengths needed to get there, and the impossibility for most of us to meet the bar without, well, taking a box cutter to our faces and bodies". Author Alice Marwick believes that this program is an example of "body culture media", which she describes as "a genre of popular culture which positions work on the body as a morally correct solution to personal problems". Marwick also suggests that cosmetic reality television encourages viewers to frame their family, financial, or social problems in bodily terms, and portrays surgical procedures as an everyday and normal solution. The Swan attracted further criticism internationally as British comedian and writer Charlie Brooker launched attacks on it during his Channel 4 show You Have Been Watching, where guest Josie Long suggested the show be renamed "The bullies were right".
In 2013, second-season contestant Lorrie Arias spoke publicly about problems she attributed to her participation in The Swan, including unresolved surgery complications and mental health problems she says were exacerbated by her appearance on the program. The show was ranked at No. 1 in Entertainment Weekly's 10 Worst Reality-TV Shows Ever.
Sitcoms
Specials and television films

The Decision: On July 8, 2010, LeBron James announced on a live ESPN special that he would be playing for the Miami Heat for the 2010–11 season. In exchange for the rights to air the special, ESPN agreed to hand over its advertising and airtime to James. James arranged for the special to include an interview conducted by Jim Gray, who was paid by James's marketing company and had no affiliation with the network. The show drew criticism for making viewers wait 28 minutes before James revealed his decision, and for the spectacle involved. James's phrase "taking my talents to South Beach", which he spoke in revealing his choice, became a punchline for critics. Though the special drew 13 million viewers, ESPN's reporting leading up to the program, its decision to air it, and the network's relinquishing of editorial independence in the process were cited as gross violations of journalistic ethics. Forbes, in 2012, listed James as one of the world's most disliked athletes on the basis of his move to Miami.

Eaten Alive: A 2014 television special on Discovery Channel that purported to have host Paul Rosolie swallowed whole by an eighteen-foot anaconda, it drew criticism before its airing from those who felt Discovery was aiming for sensationalism and shock value. Rosolie was never actually consumed by the creature before the stunt was prematurely called off due to safety concerns, which resulted in heavy viewer complaints. PETA criticized the special as an example of "entertainment features ... that show humans interfering with and handling wild animals [that] are detrimental to species conservation." In January 2015, Discovery president Rich Ross admitted the special's promotion was "misleading."

Elvis in Concert: This TV special was a recorded Elvis Presley concert held on June 19, 1977, one of the last concerts of his career. Presley's deteriorating health was evident in his weight gain and his inability to remember the lyrics of several songs. It has been described as "terrible and embarrassing" and a "travesty." Had Presley not died on August 16 of the same year, CBS would likely never have aired the concert; it did so only in October, after his death, and again in May 1978. The network had planned to record another concert to get better footage, but this was rendered impossible by Presley's death. The Presley estate refuses to release the special on VHS or DVD to this day.

Exposed! Pro Wrestling's Greatest Secrets: The documentary was criticized for being sensationalist, misleading, and outdated in its presentation of the "secret tricks." Critics in and out of the wrestling business contend that many of the "secrets" exposed were already widely known by fans to begin with, and others were so obscure as to be non-notable. While most of the professional wrestling world refrained from acknowledging the program, the night following its airing, Ernest "The Cat" Miller entered the ring during WCW Monday Nitro and sarcastically shouted in a melodramatic tone to the audience, "Now you know all our secrets!" Mick Foley on WWF Monday Night RAW announced to tag partner Al Snow, "We didn't do so well last week, but last night, the secrets of professional wrestling were revealed to me!" Foley also poked fun at the program several times in his autobiography, Have a Nice Day!

First Night 2013 with Jamie Kennedy: On December 31, 2012, KDOC-TV aired a live New Year's Eve special hosted by comedian and actor Jamie Kennedy. It was riddled with mishaps and technical issues, including periods of dead air, unedited explicit language, and Kennedy randomly speaking into his microphone, unaware he was live. A fistfight erupted onstage during the end credits. The special received a scathing critical reception, deemed "the world's worst New Year's broadcast" by The A.V. Club, "the worst New Year's Eve show of all time" by Uproxx, and "the worst in television history" by Gawker. Kotaku called it a "class-five flaming disaster", and Huffington Post noted the special's "astounding level of technical incompetence". In 2018, Good Housekeeping included the show among its selection of the "most dramatic TV catastrophes ever". Comedian Jensen Karp described Brett Kavanaugh's Supreme Court confirmation hearing that year as "running as smooth as a Jamie Kennedy New Years Eve special". Kennedy claimed the show's miscues were intentional, and defended his work in an interview with The New York Times: "I didn't stab nobody, I didn't shoot nobody. I just made a New Year's Eve special. Is that so bad?"

If I Did It: In November 2006, O. J. Simpson, who had been acquitted of the murders of his ex-wife Nicole Brown Simpson and her friend Ronald Goldman in a 1995 trial, wrote a book describing how, had he actually committed the murders, he would have done so. He arranged for a television special in which he would be interviewed by publisher Judith Regan to promote the book. NBC refused to air it, while Fox almost did before backing out at the insistence of its affiliates. The Goldman family, who won a $33,500,000 wrongful death settlement against Simpson in 1997 and insist he is guilty of the murders despite his acquittal, declared the special "an all-time low for television", and arranged for Regan's firing from HarperCollins for alleged "anti-Semitic remarks". Regan sued HarperCollins for wrongful termination and won, but Fox CEO Rupert Murdoch admitted the special was an "ill-considered project." The special never aired in its original form, and the book's rights were turned over to the Goldmans, who retitled the book If I Did It: Confessions of the Killer, with the "If" in much smaller type. In 2018, the still-unaired special was reedited, with new bridging segments hosted by Soledad O'Brien, and given the name O.J. Simpson: The Lost Confession. The Goldman family approved of the reedited special, which aired in March 2018.

Liz & Dick: This 2012 Lifetime original movie starred Lindsay Lohan in the title role of Elizabeth Taylor, a casting move that earned wide derision. Matt Roush of TV Guide panned the film, calling it an "epic of pathetic miscasting" and "laughably inept". According to David Wiegand of the San Francisco Chronicle, the film is "so terrible, you'll need to ice your face when it's over to ease the pain of wincing for two hours" and "the performances range from barely adequate to terrible. That would be [Grant] Bowler [as Richard Burton] in the 'barely adequate' slot and Lohan, well, in the other one." Jeff Simon of The Buffalo News noted, based on a consensus of other reviews, that "it's the howler everyone expected" and openly mused that the film could end Lohan's acting career. At Metacritic, which assigns a normalized rating out of 100 to reviews from mainstream critics, the film received an average score of 26, indicating "generally unfavorable reviews", based on 27 reviews.

Megalodon: The Monster Shark Lives: As part of its annual Shark Week programming, Discovery Channel aired a special on August 4, 2013 that alleged the continued existence of the megalodon, a long-extinct giant shark species. While the show attracted a record 4.8 million viewers, it was later criticized for fabricating events that were passed off therein as fact. Huffington Post called Shark Week "a disgrace" in response to the special. The Atlantic wrote, "[T]he last bastion of science-related television was Discovery Channel. But no more." Christie Wilcox of Discover accused the network of "peddling lies and faking stories for ratings." Wired deemed the show "the absolute worst of Shark Week" in that it "mockumentary-ized [reality] using fake experts and videos". John Oliver of The Daily Show called it "a faked two hour shark-gasm", and actor Wil Wheaton wrote that Discovery owed its viewers an apology for airing "a cynical ploy for ratings [that] deliberately lied to its audience and presented fiction as fact." The special was highlighted in a 2014 article by The Verge titled "How Shark Week Screws Scientists". Discovery responded that Megalodon had contained multiple disclaimers that some events were dramatized and that the "institutions or agencies" who appeared therein had no affiliation with the special, nor approved its contents.

The Mystery of Al Capone's Vaults: Recently fired from his job as a reporter for ABC, Geraldo Rivera hosted this live syndicated television special in 1986, which involved opening a recently discovered vault previously owned by mafia boss Al Capone. Although the promotions for the special heavily implied that the vault was likely to contain valuable artifacts from Capone's life or possibly even dead bodies, when the vault was opened it was revealed to contain a handful of empty moonshine bottles and nothing else. The phrase "Al Capone's vault" soon entered the vernacular to refer to any event that is heavily hyped and promoted but spectacularly fails to live up to expectations, and several sitcoms made joking references to the disappointment. The special marked a turning point in Rivera's career, shifting him from journalism to tabloid entertainment, including his eponymous talk show.

Poochinski: This unsold pilot aired as a one-off special on NBC in 1990. The show, which featured Peter Boyle as the voice of a detective who is killed and reincarnated as a bulldog, has been retrospectively mocked for its bizarre premise and copious amounts of toilet humor.

Rapsittie Street Kids: Believe in Santa: This special has become infamous among fans of bad films. Ever since it aired on television, it has received extremely negative reviews from critics and audiences, and has been repeatedly noted for its "hideous" and ugly computer animation and bizarre production history, though the ensemble voice cast received some praise.

Star Wars Holiday Special: The special has received a large amount of criticism, both from Star Wars fans and the general public. David Hofstede, author of What Were They Thinking?: The 100 Dumbest Events in Television History, ranked the holiday special at number one, calling it "the worst two hours of television ever." Shepard Smith, a former news anchor for the Fox News Channel, referred to it as a "'70s train wreck, combining the worst of Star Wars with the utter worst of variety television." Actor Phillip Bloch explained on a TV Land special entitled The 100 Most Unexpected TV Moments that the special "...just wasn't working. It was just so surreal." On the same program, Ralph Garman, a voice actor for the show Family Guy, explained that "Star Wars Holiday Special is one of the most infamous television programs in history. And it's so bad that it actually comes around to good again, but passes it right up." George Lucas himself is quoted as saying, "If I had the time and a sledgehammer, I would track down every copy of that program and smash it." The only aspect of the special that has been generally well-received is the animated segment by Canadian animation studio Nelvana, which introduced Boba Fett, who would later become a popular character when he appeared in the Star Wars theatrical films.

Who Wants to Marry a Multi-Millionaire?: This one-time special had fifty female contestants vying to immediately marry an unseen multimillionaire who, unknown to the contestants or viewers, only barely qualified for the title (owning only $2,000,000 in assets, including non-liquid ones) and who had a record of domestic violence. The winner, Darva Conger, never consummated her relationship with Rick Rockwell and the marriage was annulled. In a 2010 issue of TV Guide, the show was ranked No. 9 on a list of TV's ten biggest "blunders".
Sports
The Baseball Network (Baseball Night in America): The Baseball Network came immediately after CBS's four-year run as MLB's over-the-air broadcaster, which was itself a disaster, compared at least once to the Exxon Valdez oil spill. This short-lived joint venture between ABC, NBC, and Major League Baseball was a pioneer in that the league produced and owned the rights to the telecasts (including half of the regular season and the postseason), but it was mostly a flop, and the arrangement did not last long. Due to the effects of a players' strike on the remainder of the 1994 season, as well as poor reception from fans and critics over how the coverage was implemented, The Baseball Network was disbanded after the 1995 season. Criticism centered on several factors: that The Baseball Network held exclusivity over every market, which meant that in markets with two teams, a Baseball Network game featuring one team prevented all viewers in the market from seeing the other team's game that night; that East Coast teams playing on the West Coast (or vice versa) could not be seen in the home market because the start time would be either too late or too early; and regionalized coverage well into the postseason, which led Sports Illustrated's Tom Verducci to dub The Baseball Network both "America's regional pastime" and an "abomination", and Bob Costas to write that it was an unprecedented surrender of prestige and a slap to all serious fans. Frustration was also shared by fans; the mere mention of The Baseball Network by public address announcer Tom Hutyler during the Mariners–Yankees ALDS at Seattle's Kingdome elicited boos from most of the crowd. ABC Sports president Dennis Swanson, in announcing the dissolution of The Baseball Network, said, "The fact of the matter is, Major League Baseball seems incapable at this point in time, of living with any long term relationships, whether it's with fans, with players, with the political community in Washington, with the advertising community here in Manhattan, or with its TV partners."

Celebrity Boxing: This self-explanatory series, an icon of Fox's "lowbrow" era of the late 1990s and early 2000s, ranked number 6 on TV Guide's 50 Worst TV Shows of All Time list. Celebrities who participated in the two-episode contest were mostly D-list names and those involved in criminal cases (Joey Buttafuoco, Tonya Harding, and Paula Jones, while Buttafuoco's former lover Amy Fisher backed out of the contest); one match even featured a man (Buttafuoco) facing off against a woman (pro wrestler Chyna), with Buttafuoco (who had taken the place of "Weird Al" Yankovic, who refused to fight a woman) winning in a decision.

NBA on ABC (2002–present): Some viewers have been critical of ABC's telecasts of the NBA since the 2002 season; one common complaint concerns strange camera angles, including the Floorcam and Skycam angles used by ABC throughout its coverage. Other complaints concern camera angles that appear too far away, colors that seem faded and dull, and the quieting of crowd noise so that announcers can be heard clearly (by contrast to NBC, which allowed crowd noise to sometimes drown out its announcers). Some complaints have concerned the promotion, or perceived lack thereof, of NBA telecasts. The 2003 NBA Finals received very little fanfare on ABC or corporate partner ESPN; while subsequent Finals were promoted more on both networks, NBA-related advertisements on ABC were still down significantly from promotions on NBC. NBA promos took up 3 minutes and 55 seconds of airtime on ABC during the week of May 23, 2004, according to the Sports Business Daily, comparable to 2 minutes and 45 seconds for the Indy 500. Promotions for the Indianapolis 500 outnumbered promotions for the NBA Finals fourteen-to-nine from 9:00 pm to 11:00 pm during that week.

NHL on Fox (FoxTrax era): Fox Sports's decision to implement a CGI-generated glowing hockey puck during its live coverage of the National Hockey League from 1996 to 1998 drew ire from sports fans, who derided the move as a gimmick. Greg Wyshynski wrote of the glowing puck as one of the worst ideas in sports history in his book Glow Pucks and Ten-Cent Beer: The 101 Worst Ideas in Sports History.
NBC Olympic broadcasts (1964, 1988–present [summer]; 1972, 2002–present [winter]): NBC was the inaugural Olympic broadcaster at the 1964 Tokyo Summer Olympics and later broadcast the 1972 Winter Olympics. It bought the broadcast rights to the Summer Olympics beginning with the 1988 Games, and obtained rights to broadcast the Winter Olympics starting in 2002. Currently, NBCUniversal (a division of Comcast which operates NBC and its cable networks) holds the broadcasting rights for the Olympics until 2032. Since 2000, NBC has been criticized for its tape-delaying practices, which have drawn many viewer complaints; as early as 1992, then-NBC Sports producer Terry O'Neil coined the term "possibly live" for NBC's practice of presenting tape-delayed events as if they were live. Examples include the women's gymnastics event during the 2016 Summer Olympics, delayed in order to "juice the numbers". In the 2010 Winter Olympics, NBC aired no alpine skiing events in order to showcase high-profile events. Many viewers, and even U.S. senators during the 2010 Winter Games, expressed outrage, and some people resorted to using VPN servers to view events live via the BBC, CTV in Canada (for the 2010 Winter Games and 2012 Summer Games), and the CBC (for the 2014 Winter Games and 2016 Summer Games).
NBC has also frequently been criticized for airing the Olympics as if it were a reality television program rather than a live sports event. One example was its cutting of a fall by Russian gymnast Ksenia Afanasyeva, which NBC Sports chairman Mark Lazarus said was done "in the interest of time," although her routine took only 1 minute and 38 seconds; according to The New York Times, he did this to create suspense around the U.S. women's gymnastics team.
In 2016, chief marketing officer John Miller held a press conference prior to the 2016 Summer Olympics about their formatting of NBC's Olympics coverage, citing that the Olympics were "not about the result, [but] about the journey. The people who watch the Olympics are not particularly sports fans. More women watch the Games than men, and for the women, they're less interested in the result and more interested in the journey. It's sort of like the ultimate reality show and mini-series wrapped into one." This led to criticism from the media; Linda Stasi of the New York Daily News claimed it to be "sexist nonsense" and a "pandering, condescending view of the millions of women viewers." Washington Post columnist Sally Jenkins suggested that "it insults the audience — but it sure does insult Olympic athletes, especially female athletes."
NBC was also criticized for frequently editing and tape-delaying the opening and closing ceremonies, citing "context" as its main reason. In 2010, NBC aired the opening and closing ceremonies on a tape delay even for viewers on Pacific Time, despite its being 3 hours behind Eastern Time. During the closing ceremony, NBC went into a 65-minute intermission to air the series premiere of The Marriage Ref and local newscasts, returning to the ceremony at 11:35 PM ET/PT. This spawned outbursts from upset viewers, especially on Twitter, as several performances were cut off.
In 2012, NBC cut a tribute to the victims of the July 7, 2005 London bombings in favor of a Ryan Seacrest interview with U.S. swimmer Michael Phelps during the opening ceremonies. Ultimately, this caused the hashtag #NBCFail to trend on Twitter. The network was criticized for cutting up to 27% of the closing ceremonies to air local newscasts and a sneak preview of the NBC sitcom Animal Practice.
In 2014, NBC received criticism for cutting the video segments on the Olympic torch relay, for not showing the mascots, and for cutting the Olympic Oaths and IOC President Thomas Bach's speech on discrimination and equality. It was also criticized for setting a 90-minute window to air the closing ceremony, using the time before and after that window to air a sneak preview of another sitcom, Growing Up Fisher, at 10:30 PM ET/PT and a documentary on Tonya Harding and Nancy Kerrigan between 7 PM and 8:30 PM ET/PT. In 2016, NBC aired both ceremonies on a one-hour delay (at 8 PM ET/PT) and drew criticism for the excessive amount of advertising it aired during the delayed ceremony and for cutting 38% of the closing ceremony.
NBC has also been criticized for an alleged pro-American bias, despite such bias being far less pronounced than that of other national Olympic broadcasters such as Canada's and Russia's, and for various comments made by commentators during the 2016 Olympics and the opening ceremony of the 2012 Olympics.
Olympics Triplecast: Even before the 1992 Summer Olympics started, many criticized the business model. On July 16, nine days before the Opening Ceremony, one Philadelphia Inquirer writer called it "the biggest marketing disaster since New Coke". The Triplecast was deemed by The New York Times "sports TV's biggest flop", with the paper adding that NBC and Cablevision were "bereft in sanity" in operating it. By 1994, it was being referred to as "the Heaven's Gate of television". Albert Kim, the editor of Entertainment Weekly, went on National Public Radio and called it "an unmitigated disaster for NBC". It was a loss of about $100 million (half of which was covered by Cablevision under agreement) for the two parties, and it shaped NBC's strategies in its coverage of future Olympics.

The Premiership: In 2000, ITV took over the terrestrial broadcasting rights to highlights of the English Premier League, following a bidding war against its rival and long-time rights holder, the BBC (known for its similar show Match of the Day), at a reported cost of £183 million, to commence at the start of the 2001–02 Premier League season. The first show, aired at 7pm on 18 August 2001, was watched by a peak of 5 million viewers, compared with The Weakest Link, which drew an average of 7 million when shown on rival channel BBC One at the same time. The channel suffered its worst Saturday night ratings in five years when an average of 3.1 million viewers watched The Premiership. It did not help that the media and football critics, most notably the Daily Mirror, were outspoken about the programme's highlights format: of the 70 minutes on air, the first show included only 28 minutes of action, compared to the average of 58 minutes on Match of the Day the previous season. At the end of its contract run in May 2004, rights for the league were sold back to the BBC.

Thursday Night Football: Throughout its decade-plus run, this package of National Football League games has been subjected to a barrage of criticism. Among the controversies were the hiring of Bryant Gumbel as its first play-by-play announcer, difficulties in getting NFL Network onto cable providers, the poor quality of the games, a uniform scheme that made it very difficult for those with color blindness to tell teams apart, disruption to the flow of the league's weekly schedule (the league is forbidden under federal law from televising games on Friday or Saturday for most of the regular season) in a way that potentially puts players at greater risk of injury, and a perception that the package saturates the market with NFL product and was thus driving down the viewership of the league's Sunday and Monday games. On at least one occasion, the league has reportedly considered ending the package after its current contracts expire.

XFL on NBC, XFL on TNN and XFL on UPN: The three television programs covering the XFL are generally treated as one for the purposes of worst-television lists. The series, the subject of Brett Forrest's book Long Bomb: How the XFL Became TV's Biggest Fiasco, ranked No. 3 on the 2002 TV Guide list of the worst TV series of all time, No. 2 on ESPN's list of the biggest sports flops, No. 21 on TV Guide's 2010 list of the biggest television blunders of all time, and No. 10 on Entertainment Weekly's list of the biggest bombs in television history. Despite the league's failure, both of its co-founders would try again nearly two decades later: NBC's Dick Ebersol with the Alliance of American Football in 2019 (which ran out of money midway through its only season), and McMahon with another XFL in 2020 (which he sold to Dwayne Johnson and Dany Garcia during the pandemic shutdown, ahead of his total exit from sports entertainment two years later); the latter is scheduled to return from its extended hiatus in 2023.
Talk shows: The Chevy Chase Show: A late night talk-show hosted by Chevy Chase that aired on Fox in 1993. It received negative reviews from critics, and ranked 16th on TV Guide's list of worst television shows and the same position on its list of biggest television blunders; former Fox chairwoman Lucie Salhany described it as "uncomfortable and embarrassing," and the series was cancelled within six weeks of its debut. The Jeremy Kyle Show: British tabloid talk show which presented family disputes and the like. Often accused of treating its guests in an exploitative way, it was permanently scrapped in May 2019 when a guest died a week after appearing and failing a lie detector test on the show, apparently taking his own life. The Jerry Springer Show: The trash TV show topped TV Guide magazine's 2002 list of "The Worst TV Shows Ever". The phrase "Jerry Springer Nation" began to be used by some who see the program as being a bad influence on the morality of the United States. The Magic Hour: Soon after its debut, the series was panned by critics citing Earvin "Magic" Johnson's apparent nervousness as a host, his overly complimentary tone with his celebrity guests, and lack of chemistry with his sidekick, comedian Craig Shoemaker. The series was quickly retooled with Shoemaker being relegated to the supporting cast (and eventually fired for publicly stating the show was a disaster) which included comedian Steve White and announcer Jimmy Hodson. Comedian and actor Tommy Davidson was brought in as Johnson's new sidekick and Johnson interacted more with the show's band leader Sheila E. The format of the show was also changed to include more interview time with celebrity guests.
One vocal critic of The Magic Hour was Howard Stern, who was later booked as a guest for one episode as part of a stunt to boost ratings. Maury: This tabloid talk show hosted by Maury Povich was dubbed by USA Today columnist Whitney Matheson as "the worst show on television" and "miles further down the commode than Jerry Springer." The New York Post listed it among the "20 worst shows on TV right now" in 2013: "Since 1991, Maury Povich has slathered us with trashy tales of abusive spouses, kinky freaks and promiscuous teens." The A.V. Club wrote in 2016 that "Maury has been lowering the daytime TV bar for 25 years" by "ruthlessly exploiting the misery and misfortune of its guests for ratings."
Variety and sketch comedy shows: The 1/2 Hour News Hour: Fox News Channel's satirical news comedy show was criticized for its obvious intent to imitate Comedy Central's The Daily Show from a more politically conservative slant. The show's initial two episodes received generally poor reviews. Metacritic's television division gave The 1/2 Hour News Hour pilots a score of 12 out of 100, making it the lowest rated television production ever reviewed on the site. Business Insider ranked it #1 on its list of "The 50 worst TV shows in modern history, according to critics". Australia's Naughtiest Home Videos: The series was cancelled by its network midway through its first airing. Kerry Packer, Australian media magnate and owner of the broadcaster Nine Network, saw the show while out at dinner with friends, and reportedly phoned Nine central control personally, ordering them to "Get that shit off the air!" The network complied and immediately replaced it with reruns of Cheers, citing "technical difficulties." Packer arrived at the network the next day and again referred to the show as "disgusting and offensive shit." The show itself largely consisted of videos involving crude sexual content interspersed with off-color jokes from the show's host, former 2MMM morning host "Uncle" Doug Mulray. The show would not be seen in its entirety until 2008, three years after Packer's death. Ben Elton Live From Planet Earth: Live From Planet Earth debuted on Channel Nine on 8 February 2011, in the 9:30 pm timeslot. During the broadcast of the first episode, reaction on Twitter was hostile, with many users speculating the show would be axed. Reviews of the first episode were largely negative. Colin Vickery of the Herald Sun called it "an early contender for worst show of the year", and Amanda Meade of The Australian called it "a screaming, embarrassing failure".
The Age's Karl Quinn stated there was "more to like than dislike" about the show. Horne & Corden: This was a sketch show written by and starring James Corden and Mathew Horne, following their tenure in the hugely successful sitcom Gavin & Stacey. Unlike the latter, the show garnered largely negative reviews in the press. The show was cancelled, and Corden stated that the sketch show was a mistake. Osbournes Reloaded: This variety show was universally panned by critics, with Roger Catlin of the Hartford Courant even going so far as to call it the "worst variety show ever" and Tom Shales of The Washington Post labeling it "Must-Flee TV". It was canceled after one episode, which itself was cut from 60 to 35 minutes prior to air; 26 affiliates had refused to air the first show or buried it in overnight graveyard slots, and Fox had barely convinced a group of 19 other stations to drop their plans to do the same. Rolling Stone named it one of the 12 worst TV shows of all time. Pink Lady (also known as Pink Lady and Jeff): The series ranked No. 35 on TV Guide's Fifty Worst TV Shows of All Time list. The series, which featured Japanese duo Pink Lady struggling awkwardly through American disco hits and sketch comedy (the duo spoke very little English), was moved to the Friday night death slot after one episode and killed off after five episodes. (A sixth episode was unaired at the time but later included in a DVD release.) Rosie Live: This NBC variety special hosted by comedian and activist Rosie O'Donnell on the day before Thanksgiving 2008 received almost universally negative reviews from critics. The Los Angeles Times critic Mary McNamara wrote, "For those of us who are, and remain, Rosie fans, who think The View will never quite recover from her departure, who think her desire to resurrect the variety show was, and is, a great idea, disappointment does not even begin to describe it."
TV Guide critic Matt Roush panned the show as "dead on arrival," while Variety wrote "If Rosie O'Donnell and company were consciously determined to strangle the rebirth of variety shows in the crib, they couldn't have done a better job of it than this pre-holiday turkey." The show had been cleared for a tentative January 2009 launch as a regular series, but the show's poor reception led to the cancellation of those plans. Ryantown: Ryantown was named as one of the "Top 10 Worst Irish TV Programmes" by the Irish Independent, and host Gerry Ryan was later to admit that it was all horribly "half-baked" and "should have been taken off the air after a few shows". Saturday Night Live with Howard Cosell: Saturday Night Live's director Don Mischer remembers the show as hectic and unprepared, and has recalled one particular episode wherein executive producer Roone Arledge discovered that Lionel Hampton was in New York, and invited the musician to appear on the show an hour before the show was set to go on the air. The show fared poorly among critics and audiences alike, with TV Guide calling it "dead on arrival, with a cringingly awkward host." Alan King, the show's "executive in charge of comedy," later admitted that it was difficult trying to turn Cosell into a variety show host, saying that he "made Ed Sullivan look like Buster Keaton." Saturday Night Live with Howard Cosell was canceled on January 17, 1976, after only 18 episodes. A year later, in 1977, the NBC sketch show Saturday Night finally got permission to be named Saturday Night Live due to the cancellation of this version of Saturday Night Live and hired many cast members who worked on the ABC version (the most notable being Bill Murray, who was hired after the departure of Chevy Chase). The Tom Green Show: This comedy show written by and starring controversial Canadian comedian Tom Green was ranked No. 41 on TV Guide's 50 Worst TV Shows of All Time list.
In 2001, Green also produced the film Freddy Got Fingered, which featured a similar style of humor and is also considered one of the worst films of all time, to the point of winning the Golden Raspberry Award for Worst Picture. The Wilton North Report: Almost from the outset, creative differences occurred between The Wilton North Report's writing team, executive producer Barry Sand, and hosts Phil Cowan and Paul Robins. The hosts thought the writers' material was too sophisticated for mass audiences and frequently not very funny; the writers thought Cowan and Robins were less than erudite and felt uncomfortable writing for them. Sand tried to make peace between the hosts and writers, seeking material that Cowan and Robins would feel comfortable with yet encouraging the hosts to tone down their shrill delivery. Pre-debut rehearsals did not impress Sand nor Fox executives, who decided on November 29 to push back Wilton North's premiere, which had been scheduled for the next night, to allow the crew extra time to gel (the hosts and writers had been together for not even a week). The delay also meant a retooling of the show, beginning with Sand's scrapping of the opening news review segment; Sand believed it did not mesh with Cowan and Robins' friendly approach, while Fox objected to its crude humor. By the time Wilton North did finally reach the air on December 11, its own cast and crew would have difficulty articulating what the show was even trying to do. The on-air product was met with general derision from critics; Clifford Terry of the Chicago Tribune said the show took a smug, studious approach to its subject material, while Ken Tucker of the Philadelphia Inquirer thought the "video version of Spy magazine" lacked "genuinely amusing rudeness."
Later episodes of Wilton North would see a greater reliance on long-form videos and feature reporting, with such examples including a report presented by Aron Ranen on a dominatrix that specialized in corporal punishment, as well as a feature on a high school basketball team in South Carolina that hadn't won a game in five years (though they pulled off a win when a Wilton North crew filmed them in action). The idea was to have Cowan and Robins generally serve as presenters and offer comments on what was being shown. Staff writer and commentator Paul Krassner would also be on hand to introduce and discuss "underground videos" with the hosts. Krassner, in what he would later term a "practice" segment, discussed the highlights of 1987 with Cowan and Robins on the January 1 broadcast, with the possibility that such analyses would become permanent the following week (a possibility Krassner was thrilled about, as he would recall in a February 1988 Los Angeles Times piece about his time at Wilton North). By this time, however, Fox's affiliates grew restless and demanded that the show be cancelled immediately; Fox would announce Wilton North's cancellation on January 5, 1988, with network president Jamie Kellner calling the show "a bit too ambitious." The show's 21st and final episode would air on January 8.
See also
List of television series canceled after one episode
List of television series canceled before airing an episode
List of films considered the worst
Hate-watching
References
Further reading
External links
The 20 Worst TV Shows | HEAVY
20 Worst TV Shows Ever Made (According To Rotten Tomatoes)
20 Worst TV Shows Ever Made (According To IMDb)
Television series
Worst
Criticism of television series
Film and video fandom | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,336 |
"Redneck Neighborhood Watch" Duo Convicted For Shooting At And Ramming 2 Black Teens On ATV's
By Economic Left on August 27, 2021
"Redneck Neighborhood Watch" Wade Twiner & Lane Twiner
A Mississippi jury convicted a white father & son of hate crimes for shooting at Black teens as part of what they considered to be a "redneck neighborhood patrol".
The Twiners posted slogans to their social media accounts such as "Redneck Neighborhood Watch," "You Loot We Shoot" with images of a Confederate flag.
"Not only did they shoot at him, they also ran into the back of his four wheeler, and that could also have been murder right then and there," said the mother of one of two black teens.
John Wright from Rawstory writes:
"A jury in Mississippi on Wednesday convicted a white father and son of hate crimes for shooting at Black teens as part of what they reportedly considered a "redneck neighborhood patrol."
Wade Twiner and Lane Twiner were each convicted of two counts of simple assault and felonious mischief after a three-day trial, according to the local ABC affiliate.
"Prosecutors said in addition, the jury found the father and son guilty of a hate crime enhancement, which allows for doubling of the time of their sentences," the station reported.
The Twiners admitted to chasing and shooting at the two teenagers riding ATVs in September of last year. The Twiners also rammed one of the teen's ATVs with their pickup truck.
"The Twiners told cops they own land on both sides of the road, pay taxes, and don't want people riding ATVs on the road since it's illegal," the New York Post reported following their arrests. "Previously, the Twiners had posted slogans and memes to their social media accounts such as 'Redneck Neighborhood Watch,' 'You Loot We Shoot' and images of a Confederate flag…"
See full story here.
Tags: "Redneck Neighborhood Watch", ATV, behavioral economics, breaking news, democratic socialism, economic left, economic news, economics news, featured, Hate Crime, Hate Crimes, keynesian economics, left media, leftism, leftist economic news, leftist media outlet, leftist news, Mississippi jury, neo nazi, Neo-Confederacy, Neo-Confederate, news, progressive, progressive economic news, progressive media, progressive media outlet, progressive movement, progressive news, progressive news sites, racism, racist, socialism, Wade Twiner and Lane Twiner | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,184 |
{"url":"https:\/\/www.analyzemath.com\/stepbystep_mathworksheets\/intersections\/intersection_of_circles.html","text":"# Find the Points of Intersection of Two Circles\n\n A step by step interactive worksheet to solve problems related to the intersection of two circles is presented. Practice questions are also generated with their solutions. The graph two circle are plotted at the bottom of the page and show the points of intersection if any. Step by step solution STEP 1: If $(x,y)$ is be the point of intersection of the two circles, then $(x,y)$ is a solution to the simultaneous system of equations shown below. STEP 2: Expand and simplify the squares in the equations in step 1. STEP 3: Subtract the lower equation from the upper equation and simplify. STEP 4: Substitute $y$ in one of the equations of step 1 by the last expression found in step 3. STEP 5: Write the above quadratic equation in standard form and solve it, then use the linear equation found in step 3 to find $y$. Graphical Meaning of Solution Below are shown the graphs of the two circles and the point of intersection if there are any. 
(Change scales if necessary)\nMore Step by Step Math Worksheets Solvers New !","date":"2019-11-12 18:30:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6744409203529358, \"perplexity\": 126.56039249615405}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-47\/segments\/1573496665726.39\/warc\/CC-MAIN-20191112175604-20191112203604-00350.warc.gz\"}"} | null | null |
Q: How to cache a function with both positional and keyword parameters? I have a function foo(a, b, c=False, d=0), and I need to create a cache dict inside the function for each combination of the parameters.
The examples I have seen online only use *args, but I need to get the kwargs-values too, in the correct order, and store them in a cache dict, and handle the case that dicts cannot be keys, and kwargs is a dict.
So I did it like this: is this the way to go?
def foo(*args, **kwargs):
if (*args, *kwargs.values()) in cache:
return cache[(*args, *kwargs.values())]
Basically I convert the kwargs to values (in the order of insertion since I am using Python3.6, which will be what I want) and then combine with *args to create a tuple.
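For what it's worth, keying the cache on kwargs.values() alone drops the keyword names, so foo(1, c=2) and foo(1, d=2) would collide on the same entry. A sketch of one safer variant (the decorator name and sample body are mine, not from the question) keys on the names as well and sorts them so kwarg order no longer matters:

```python
def memoize(func):
    """Cache results keyed on positional args plus sorted (name, value) pairs."""
    cache = {}

    def wrapper(*args, **kwargs):
        # Sorting the items makes foo(c=True, d=3) and foo(d=3, c=True)
        # share one cache entry; including the names keeps foo(1, c=2)
        # and foo(1, d=2) distinct.
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper


@memoize
def foo(a, b, c=False, d=0):
    return (a + b) * d if c else a + b
```

This still assumes every argument is hashable; unhashable values (lists, dicts) would need converting to tuples or frozensets first. The standard library's functools.lru_cache is the usual off-the-shelf alternative.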
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,528 |
Q: A simple one with Java ClassCastException
package Medium;
import java.util.ArrayList;
import java.util.Scanner;
public class Demo5 {
public static void main(String[] args) {
System.out.println("input end when you want to stop:");
ArrayList<Double> arr = new ArrayList<Double>();
while(true){
Scanner in = new Scanner(System.in);
//String d = in.nextLine();
double d = in.nextDouble();
arr.add(d);
if (d==0) {
break;
}
}
        Double[] array =(Double[]) arr.toArray(); //I have already changed the type to Double
int outter,inner,max;
for(outter=0;outter<array.length-1;outter++){
max = outter;
for(inner=outter+1;inner<array.length;inner++){
if (array[inner]>array[max]) {
inner = max;
}
}
double temp =array[outter];
array[outter] =array[inner];
array[inner]=temp;
}
System.out.println(array);
}
}
Description: Find the k-th largest element in an array.
I want to read input into an ArrayList and convert it into an array, then use selection sort. However, a ClassCastException is thrown on "Double[] array =(Double[]) arr.toArray();". What's wrong? Thank you for your time.
Exception:
Exception in thread "main" java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Ljava.lang.Double;
at Medium.Demo5.main(Demo5.java:31)
A: The toArray() method without passing any argument gives you back Object[].
Just to fix the issue you have to pass an array as an argument with type and size.
Double[] array = list.toArray(new Double[list.size()]);
A: Another way to do this is:
Double[] array = Arrays.copyOf(arr.toArray(), arr.size(), Double[].class);
With either method, you need to pass the array type Double[] and its length in some way.
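Putting the accepted fix together with a working sort: separately from the cast problem, the selection sort in the question never updates max (it assigns inner = max instead of max = inner) and performs the swap with array[inner] outside the inner loop. A self-contained sketch (class and method names are mine, not from the thread) with both issues corrected:

```java
import java.util.Arrays;
import java.util.List;

public class SortDemo {

    // Sorts a copy of the list in descending order via selection sort.
    public static Double[] sortDescending(List<Double> values) {
        // Passing a typed array avoids the Object[] -> Double[] cast entirely.
        Double[] array = values.toArray(new Double[0]);
        for (int outer = 0; outer < array.length - 1; outer++) {
            int max = outer;
            for (int inner = outer + 1; inner < array.length; inner++) {
                if (array[inner] > array[max]) {
                    max = inner; // remember where the largest remaining value is
                }
            }
            double temp = array[outer];
            array[outer] = array[max];
            array[max] = temp;
        }
        return array;
    }

    public static void main(String[] args) {
        Double[] sorted = sortDescending(Arrays.asList(3.0, 1.0, 4.0, 1.5));
        // Arrays.toString prints the contents; println(array) would print a reference.
        System.out.println(Arrays.toString(sorted)); // prints [4.0, 3.0, 1.5, 1.0]
    }
}
```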
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,011 |
{"url":"https:\/\/cstheory.stackexchange.com\/questions\/2524\/reference-for-odd-hole-antihole-free-graphs","text":"# Reference for (odd-hole,antihole)-free graphs?\n\nX-free graphs are those that contain no graph from X as an induced subgraph. A hole is a cycle with at least 4 vertices. An odd-hole is a hole with an odd number of vertices. An antihole is the complement of a hole.\n\nThe (odd-hole,odd-antihole)-free graphs are precisely the perfect graphs; this is the Strong Perfect Graph Theorem. It is possible to find the largest independent set (and largest clique) in a perfect graph in polynomial time, but the only known method of doing so requires building a semi-definite program to compute the Lov\u00e1sz theta number.\n\nThe (hole,antihole)-free graphs are called weakly chordal, and constitute a rather easy class for many problems (including INDEPENDENT SET and CLIQUE).\n\nDoes anyone know if (odd-hole,antihole)-free graphs have been studied or written about?\n\nThese graphs occur quite naturally in constraint satisfaction problems where the graph of related variables forms a tree. Such problems are rather easy, so it would be nice if there were a way to find a largest independent set clique for graphs in this family without having to compute the Lov\u00e1sz theta.\n\nEquivalently, one wants to find a largest independent set for (hole, odd-antihole)-free graphs. Hsien-Chih Chang points out below why this is a more interesting class for INDEPENDENT SET than (odd-hole, antihole)-free graphs.\n\nIn fact, it is relatively easy. Instead for studying independent set problem in (odd-hole,antihole)-free graphs, we take complement of the graphs and try to find a maximum clique in it. 
Thus it becomes maximum clique problem in (hole, anti-odd-hole)-free graphs.\n\nIn section 2 of the paper \"Triangulated Neighborhoods in Even-hole-free Graphs\" by da Silva and Vuskovic, they stated that Farber first shows\n\nThere are $O(n^2)$ maximal cliques in any 4-hole free graphs.\n\nThen their main theorem stated that\n\nThere are $O(n + m)$ maximal cliques in even-hole-free graphs, and all the maximal cliques can be found in time $O(n^2m)$.\n\nSince we are dealing with (hole, anti-odd-hole)-free graphs which is clearly even-hole-free, finding a maximum clique takes at most $O(n^2m)$ time.\n\n4-hole-free are critical to these kind of results like a poly-time algorithm for $\\overline{K_{2,m}}$-free graphs, so the real challenge may be studying independent set problem in (hole, anti-odd-hole)-free graphs instead, which becomes the maximum clique problem in (odd-hole, anti-hole)-free graphs.\n\nEdit:\n\nOh, another thought came out. (hole, anti-odd-hole)-free graphs are almost weakly chordal in the following sense: since 4-hole-free implies there are only anti-holes with size 4~7 remains (any k-anti-hole with size > 7 contains a 4-hole), and it is also anti-odd-hole-free which restrict the size of anti-holes down to 4 and 6, it is almost no holes\/antiholes in the graph! Thus a poly-time algorithm seems plausible for such graphs.\n\n\u2022 The overline runs away, there I mean the complement of $K_{2,m}$ for any $m\\geq 2$. \u2013\u00a0Hsien-Chih Chang \u5f35\u986f\u4e4b Oct 28 '10 at 15:50\n\u2022 Thanks! Looking again at my result with Peter Jeavons, we actually showed that tree-structured constraint problems yield (hole, odd-antihole)-free graphs in which one wants to find the largest independent set. I will make the question more precise -- I incorrectly suggested IS was the problem one wanted to solve. \u2013\u00a0Andr\u00e1s Salamon Oct 29 '10 at 9:23\n\u2022 @Andr\u00e1sSalamon can you give open access to preprints of your work on this topic? 
I couldn't access through my university's proxy neither \u2013\u00a0Diego de Estrada Jul 10 '12 at 18:56\n\u2022 @DiegodeEstrada: I'd be happy to send you a preprint of our CP 2008 paper, just send me an email. However, it really is a constraints paper so may not be that interesting to you. \u2013\u00a0Andr\u00e1s Salamon Jul 13 '12 at 14:17","date":"2019-06-17 23:33:55","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7141851186752319, \"perplexity\": 821.2360539648248}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-26\/segments\/1560627998581.65\/warc\/CC-MAIN-20190617223249-20190618005249-00534.warc.gz\"}"} | null | null |
Q: JAXB force ordering of Sets I'm marshaling objects which have fields of type Set. The implementation is unsorted, so the order of the resulting XML elements is arbitrary; moreover, I get a different order every time I marshal.
Is there a way to tell marshaller how to sort a field contents during marshaling?
A: You could take advantage of a SortedSet. If you initialize an instance of a Set on your instance then the JAXB will use that implementation instead of creating a new one:
package forum7686859;
import java.util.Set;
import java.util.TreeSet;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement
public class Root {
//private Set<String> children = new HashSet<String>();
private Set<String> children = new TreeSet<String>();
public Set<String> getChildren() {
return children;
}
public void setChildren(Set<String> children) {
this.children = children;
}
}
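The ordering guarantee here comes from TreeSet itself, not from JAXB: the marshaller simply iterates whatever Set instance it finds on the object, and a TreeSet iterates in natural (sorted) order. A minimal JAXB-free sketch of that mechanism (class name is mine):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class OrderDemo {
    public static void main(String[] args) {
        // A HashSet makes no ordering promise at all...
        Set<String> unsorted = new HashSet<>(Arrays.asList("b", "c", "a"));

        // ...while a TreeSet always iterates in natural (sorted) order,
        // so anything that walks it -- a JAXB marshaller included -- sees
        // the elements sorted.
        Set<String> sorted = new TreeSet<>(unsorted);
        System.out.println(sorted); // prints [a, b, c]
    }
}
```

For a non-natural ordering, the TreeSet constructor that takes a Comparator can be used instead.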
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,129 |
Carlson's Motor Sales provides Radiator Hose Replacement services to Concord, NH, Bow, NH, Hookset, NH, and other surrounding areas.
Why Should You Have Radiator Hose Replacement Services Performed at Carlson's Motor Sales?
We proudly service the Radiator Hose Replacement needs of customers in Concord, NH, Bow, NH, Hookset, NH, and surrounding areas. | {
"redpajama_set_name": "RedPajamaC4"
} | 290 |
{"url":"https:\/\/avanceservices.de\/tags\/o7wwai.php?page=probability-density-function-8fd83e","text":"The general form of its probability density function is {k\\frac{{{x^2}}}{2}} \\right|_0^{10} = 1,}\\;\\; \\Rightarrow {\\frac{k}{2}\\left( {100 \u2013 0} \\right) = 1,}\\;\\; \\Rightarrow {50k = 1,}\\;\\; \\Rightarrow {k = \\frac{1}{{50}}. for (with cumulants PDFs are used to gauge the risk of a particular security, such as an individual stock or ETF.\n\nWhen the PDF is graphically portrayed, the area under the curve will indicate the interval in which the variable will fall. We'll do that using a probability density function (\"p.d.f.\").\n\nInvestors should use PDFs as one of many tools to calculate the overall risk\/reward in play in their portfolios.\n\nJill Jones Mother, Missha Time Revolution First Treatment Essence Review, Skyrim Recorder Quest Guide, Cape Coral Fishing Charters, Make Sentence With Delight, Cuisinart Broiler Pan, Marrakesh Moroccan Restaurant, Llc, Keto Cheese Wraps, Spicy Tofu Recipe, \" \/>\n\nThe general form of its probability density function is {k\\frac{{{x^2}}}{2}} \\right|_0^{10} = 1,}\\;\\; \\Rightarrow {\\frac{k}{2}\\left( {100 \u2013 0} \\right) = 1,}\\;\\; \\Rightarrow {50k = 1,}\\;\\; \\Rightarrow {k = \\frac{1}{{50}}. for (with cumulants PDFs are used to gauge the risk of a particular security, such as an individual stock or ETF.\n\nWhen the PDF is graphically portrayed, the area under the curve will indicate the interval in which the variable will fall. 
# probability density function\n\nhttps:\/\/mathworld.wolfram.com\/ProbabilityDensityFunction.html, Time-Dependent The probability density function (PDF) of a continuous distribution is defined as the derivative of the (cumulative) distribution function, (1) (2) (3) so (4) (5) A probability function satisfies (6) and is constrained by the normalization condition, (7) (8) Special cases are (9) (10) If a random variable $$X$$ follows the normal distribution with the parameters $$\\mu$$ and $$\\sigma,$$ we write $$X \\sim N\\left( {\\mu ,\\sigma } \\right).$$, The normal distribution is said to be standard when $$\\mu = 0$$ and $$\\sigma = 1.$$ In this special case, the normal random variable $$X$$ is called a standard score or a $$Z-$$score. What Is a Probability Density Function (PDF)? function . }\\], ${P\\left( {2 \\le X \\le 5} \\right) = \\int\\limits_2^5 {f\\left( x \\right)dx} }={ \\frac{1}{{50}}\\int\\limits_2^5 {xdx} }={ \\left. {{e^{ \u2013 \\lambda x}}} \\right|_0^\\infty }={ \u2013 \\frac{1}{\\lambda }\\left( {0 \u2013 1} \\right) }={ \\frac{1}{\\lambda }. {\\frac{2}{\\pi\\left({1 + {x^2}}\\right)}}, & \\text{if } {x \\ge 0} \\\\ A random variable is a variable whose value is unknown, or a function that assigns values to each of an experiment's outcomes.
A probability density function ($PDF$) is defined as the derivative of the (cumulative) distribution function. For a discrete random variable $X$ that takes on a finite or countably infinite number of possible values, we determined $P(X=x)$ for all of the possible values of $X$ and called it the probability mass function ("p.m.f."). A continuous random variable, by contrast, takes on an uncountably infinite number of possible values: a discrete variable can be measured exactly, while a continuous variable can have infinitely many values. For a continuous variable, $P(X = c) = 0$ for any number $c$, so instead we calculate the probability that $X$ lies in an interval $(a, b)$. In the continuous case, it is areas under the curve that define the probabilities, and the total area between the density curve and the horizontal $x$-axis is equal to $1$.

The normalization condition determines unknown constants in a density. For example, integrating $f(x) = kx$ on $\left[ {0,10} \right]$ and equating the result to $1:$
\[\left. {k\frac{{{x^2}}}{2}} \right|_0^{10} = 1 \;\Rightarrow\; \frac{k}{2}\left( {100 - 0} \right) = 1 \;\Rightarrow\; 50k = 1 \;\Rightarrow\; k = \frac{1}{{50}}.\]

The mean value $\mu$ of a distribution follows from $\int\limits_a^b {xf\left( x \right)dx} = \mu$, together with $\int\limits_a^b {f\left( x \right)dx} = 1$. For the uniform distribution on $\left[ {a,b} \right]$, where $f\left( x \right) = \frac{1}{{b - a}}$,
\[\mu = \frac{1}{{b - a}}\left. {\frac{{{x^2}}}{2}} \right|_a^b = \frac{1}{{b - a}} \cdot \frac{{{b^2} - {a^2}}}{2} = \frac{{\left( {b - a} \right)\left( {b + a} \right)}}{{2\left( {b - a} \right)}} = \frac{{a + b}}{2},\]
and the variance is
\[{\sigma ^2} = \int\limits_a^b {{x^2}f\left( x \right)dx} - 2{\mu ^2} + {\mu ^2} = \int\limits_a^b {{x^2}f\left( x \right)dx} - {\mu ^2} = \frac{1}{{b - a}}\int\limits_a^b {{x^2}dx} - {\left( {\frac{{a + b}}{2}} \right)^2} = \frac{{{{\left( {b - a} \right)}^2}}}{12}.\]

Probabilities over an interval are likewise determined through integration. With $f\left( x \right) = \frac{{{x^2}}}{9}$, for instance,
\[P\left( {1 \le X \le 2} \right) = \int\limits_1^2 {f\left( x \right)dx} = \frac{1}{9}\int\limits_1^2 {{x^2}dx} = \frac{7}{{27}}.\]

The median of a continuous probability distribution $f\left( x \right)$ is the value of $x = m$ that splits the probability distribution into two portions whose areas are identical and equal to $\frac{1}{2}:$
\[\int\limits_{ - \infty }^m {f\left( x \right)dx} = \int\limits_m^\infty {f\left( x \right)dx} = \frac{1}{2}.\]

The one-parameter exponential distribution has the probability density function
\[f\left( x \right) = \lambda {e^{ - \lambda x}},\;\; x \ge 0,\]
with mean (average) value $\mu = \frac{1}{\lambda }$. Integrating the exponential density with $\lambda = 3$ from $t = 0$ to $t = 1$ gives
\[P\left( {0 \le t \le 1} \right) = \int\limits_0^1 {3{e^{ - 3t}}dt} = 1 - {e^{ - 3}} \approx 0.95.\]

The normal distribution is the most widely known probability distribution, since it describes many natural phenomena. To compute probabilities for a standard normal variable $Z,$ we use a standard normal table ($Z$-table) or a software tool. Pay attention to the notations: $X, Z$ denote the random variables, and $x, z$ denote the possible values of the variables. Typical exercises ask, for example, for the probability that a randomly selected hamburger weighs between 0.20 and 0.30 pounds, or that $X$ falls between $\frac{1}{2}$ and $1$.

Probability density functions are also used in finance to gauge the risk of a particular security, such as an individual stock or ETF. A probability distribution is a statistical function that describes possible values and likelihoods that a random variable can take within a given range. PDFs are typically depicted on a graph based on historical data, with a normal bell curve indicating neutral market risk and a bell at either end indicating greater or lesser risk/reward: most of us, looking for average returns and average risk, would be at the center of the bell curve, while an investor willing to take higher risk in search of higher rewards would be on the right side. When the PDF is graphically portrayed, the area under the curve will indicate the interval in which the variable will fall. Investors should use PDFs as one of many tools to calculate the overall risk/reward in play in their portfolios.
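The worked integrals above can be checked numerically. Here is a short Python sketch using a simple midpoint rule; the uniform interval $[2, 5]$ and the rate $\lambda = 3$ are just illustrative values chosen here:

```python
import numpy as np

def integrate(f, lo, hi, n=200_000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    x = lo + (np.arange(n) + 0.5) * (hi - lo) / n
    return f(x).sum() * (hi - lo) / n

# Mean of the uniform distribution on [a, b]: integral of x/(b-a) equals (a+b)/2.
a, b = 2.0, 5.0
mean = integrate(lambda x: x / (b - a), a, b)
print(mean)   # ~ 3.5

# Exponential density f(t) = 3 exp(-3t): P(0 <= t <= 1) = 1 - exp(-3).
prob = integrate(lambda t: 3.0 * np.exp(-3.0 * t), 0.0, 1.0)
print(prob)   # ~ 0.9502
```

The midpoint rule is exact for the linear integrand of the uniform mean, and with 200,000 points the exponential probability agrees with $1 - e^{-3}$ to well below $10^{-8}$.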
\section{The Covariant Spectator Theory}
The Covariant Spectator Theory (CST) was introduced already several decades ago by Franz
Gross \citep{Gro69}. Its purpose is to include relativity in a manifestly covariant way in
few-body problems, while keeping their complexity at a level that is still manageable for
numerical calculations. It has been already applied successfully to the description of a
variety of systems, including the deuteron and nucleon-nucleon ($NN$) scattering
\citep{Buc79, Gro92},
elastic and inelastic electron scattering off the
deuteron \citep{Arn80, Ord95, Ada02}, and the 3N bound state \citep{Sta97, Sta97b}.
The CST is based on Relativistic Quantum Field Theory, where an exact and complete
scattering amplitude can be expressed in terms of the Bethe-Salpeter equation (BSE)
\citep{Sal51}. The Spectator Equation (SE) is a particular re-organization of the complete
BSE for a system of heavy particles (nucleons) that interact via the exchange of lighter
particles (mesons). Ignoring vertex and self-energy corrections as well as genuine many-body
forces, the scattering amplitude is given as an infinite sum of ladder and crossed ladder
diagrams of all orders in the coupling constants. It can be generated by an integral
equation if the propagators and the irreducible kernel are chosen in a suitable way.
While the BSE includes the full off-shell propagators for all heavy particles, the SE
restricts all but one of them to their mass shells. Consequently, contributions to the
scattering amplitude which are generated by one equation through iterations may appear as
part of the irreducible kernel of the other. When their respective kernels are
truncated---which in practical calculations is unavoidable since the complete kernels
themselves contain already an infinite number of diagrams---the BSE and the SE are no
longer equivalent. It has been shown that the truncated SE, in certain circumstances,
converges faster to the full result than the truncated BSE \citep{Gro82}. The simplest and
most often used cases are truncations at lowest order, leading to the so-called ladder
approximation since crossed-ladder graphs are thereby excluded.
The SE has sometimes been misunderstood as an approximation to the BSE. This point of view
is somewhat misleading, since there is really nothing more fundamental about the BSE than
the SE (or other so-called quasi-potential equations). It is less confusing to understand
the SE as an equation in its own right. It is meant to replace the Schr\"odinger equation
in situations where relativity is important, rather than to approximate a truncated BSE.
Some of the important features of the SE are: it is manifestly covariant although the loop
integrations are three-dimensional, all boosts are kinematic ({\it i.e.}, interaction
independent), the off-shell particle has negative-energy components, and cluster
separability holds. The latter property guarantees that the solution of the two-body SE is
consistent with the one that appears as dynamical input in the three-body SE.
\section{Two-nucleon scattering}
The SE for $NN$ scattering is shown in diagrammatic form in Fig.\ \ref{F:2Ngrosseq}. Its
antisymmetrized kernel (or ``potential'') is truncated to the lowest-order ladder terms,
such that it is of one-boson exchange (OBE) form.
\begin{figure}
\includegraphics[width=6cm]{2Ngrosseq-gs.eps}
\caption{Two-nucleon SE (upper panel) with antisymmetrized kernel (lower
panel). The symbol ``$\times$'' on a nucleon line indicates that the particle is
restricted to its positive-energy mass shell.} \label{F:2Ngrosseq}
\end{figure}
The first realistic OBE $NN$ potentials for the SE were constructed by Gross,
Van Orden, and Holinde \citep{Gro92}. They are based on the exchange of six
different mesons, two pseudoscalar ($\pi$ and $\eta$), two scalar ($\sigma$ and
$\delta$), and two vector mesons ($\rho$ and $\omega$).
The potentials have meson-nucleon vertices
of the following form:
\begin{equation}
g_s \Lambda_s = g_s \left[
1 + \frac{\nu_s}{2m} ( \slashed{p}'- m + \slashed{p} - m )
+ \frac{\kappa_s}{4m^2} (\slashed{p}' - m)(\slashed{p} - m) \right] \, ,
\label{eq:Vs}
\end{equation}
for scalars,
\begin{eqnarray}
g_p \Lambda_p & = &
g_p \left[ \gamma^5 +
\frac{\nu_p}{2m}\left[ (\slashed{p}' -m)\gamma^5
+ \gamma^5 (\slashed{p} -m) \right]
+ \frac{\kappa_p}{4m^2}(\slashed{p}'-m)\gamma^5(\slashed{p}-m)\right]
\nonumber \\
& = & g_p \left[ (1-\nu_p)\gamma^5
+ \frac{\nu_p}{2m}\gamma^5 \slashed{q}
+ \frac{\kappa_p}{4m^2}(\slashed{p}'-m)\gamma^5(\slashed{p}-m)\right] \, ,
\label{eq:Vp}
\end{eqnarray}
for pseudoscalars, and
\begin{equation}
g_v \Lambda_v^\mu =
g_v \left[ \gamma^\mu +
\frac{\kappa_v}{2m} i \sigma^{\mu\nu} q_\nu +
\frac{\kappa_v (1-\lambda_v)}{2m}
\left[(\slashed{p}'-m)\gamma^\mu + \gamma^\mu(\slashed{p}-m)\right]
+ \cdots \right]
\end{equation}
for vector mesons. Here, $p$ and $p'$ are the nucleon four-momenta, and $q$ is
the meson four-momentum at the meson-$NN$ vertex. Furthermore, $m$ is the
nucleon mass, the $g$'s are coupling constants, $\kappa_v$ is the usual $f/g$
ratio for vector mesons, $\nu_s$, $\nu_p$, $\kappa_s$, $\kappa_p$, and
$\lambda_v$ are off-shell coupling constants. The vertices are regularized by
form factors, one for each particle at the vertex, which are parametrized by
cut-off masses.
While $\nu_p$ has appeared before as a mixing parameter in potentials that allow
a mixing of pseudoscalar and pseudovector $\pi NN$ coupling, the scalar
off-shell coupling proportional to $\nu_s$ is a new feature not explored
previously in $NN$ scattering. It contributes only if at least one of the
nucleons at the vertex is off mass shell. In frameworks that do not contain
off-shell nucleons, such as in relativistic hamiltonian dynamics or in
nonrelativistic quantum mechanics, these couplings vanish.
The terms proportional to $\kappa_s$ and $\kappa_p$ contribute only if both
nucleons at the vertex are off mass shell. With their inclusion,
Eqs.~(\ref{eq:Vs}) and (\ref{eq:Vp}) represent the most general coupling
of nucleons to scalar or pseudoscalar mesons, respectively. However, in the
potentials fitted so far we have always set $\kappa_s = \kappa_p = 0$.
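The statement that the off-shell couplings vanish between on-shell nucleons can be illustrated numerically. The following NumPy sketch (all mass, momentum, and coupling values are hypothetical, chosen only for illustration) builds the Dirac matrices and a positive-energy spinor and verifies that the $\nu_s$ and $\kappa_s$ terms of the scalar vertex of Eq.~(\ref{eq:Vs}) drop out when both nucleons are on shell:

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric (+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gam = [np.block([[Z2, s], [-s, Z2]]) for s in sig]

def slash(p):
    """p-slash = p^mu gamma_mu = p0*gamma^0 - p_vec . gamma_vec."""
    return p[0] * g0 - sum(p[i + 1] * gam[i] for i in range(3))

def u_spinor(p, m):
    """Positive-energy (spin-up) Dirac spinor, normalized to ubar.u = 2m."""
    E, pv = p[0], p[1:]
    chi = np.array([1.0, 0.0], dtype=complex)
    sp = sum(pv[i] * sig[i] for i in range(3))        # sigma . p
    return np.concatenate([np.sqrt(E + m) * chi,
                           (sp @ chi) / np.sqrt(E + m)])

m = 0.939                                    # illustrative nucleon mass (GeV)
pv = np.array([0.1, 0.2, -0.3])              # hypothetical 3-momentum
p = np.array([np.sqrt(m**2 + pv @ pv), *pv]) # on shell: p.p = m^2
u = u_spinor(p, m)

# Dirac equation: (p-slash - m) annihilates an on-shell spinor.
assert np.allclose((slash(p) - m * np.eye(4)) @ u, 0)

# Scalar vertex of Eq. (1), with hypothetical off-shell couplings.
nu_s, kappa_s = 1.6, 0.5
def Lambda_s(pp, pq):
    A = slash(pp) - m * np.eye(4)
    B = slash(pq) - m * np.eye(4)
    return (np.eye(4) + nu_s / (2 * m) * (A + B)
            + kappa_s / (4 * m**2) * (A @ B))

# Between two on-shell spinors the nu_s and kappa_s terms drop out,
# so ubar Lambda_s u reduces to the plain scalar coupling ubar u.
ubar = u.conj() @ g0
assert np.isclose(ubar @ Lambda_s(p, p) @ u, ubar @ u)
```

Between on-shell spinors, $\bar u(p')\,\Lambda_s\, u(p)$ reduces to $\bar u(p')\,u(p)$, consistent with the observation that the off-shell couplings contribute only when at least one nucleon is off its mass shell.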
\section{The three-nucleon bound state}
The SE for the $3N$ bound state has been derived \citep{Sta97b} in complete
consistency with the CST of $NN$ scattering. This is achieved by
restricting the spectator particles in all intermediate states to their
positive-energy mass shell (hence the name ``spectator theory''). This simple
prescription automatically ensures that in any intermediate state only one
nucleon is off mass shell. As a consequence, loop integrations are again reduced
from four to three dimensions, the same as in the nonrelativistic case.
\begin{figure}
\includegraphics[width=6cm]{3Nbound-gs.eps}
\caption{The SE for the vertex function of the $3N$ bound state. The oval
represents the $NN$ spectator amplitude calculated from the SE of Fig.\
\ref{F:2Ngrosseq}. The dot in the vertex function indicates the spectator
particle during the last two-body interaction.}
\label{F:3Nbound}
\end{figure}
Figure \ref{F:3Nbound} displays the homogeneous integral equation for the $3N$
vertex function in graphical form. It is given here in its general form, valid
both for
identical fermions ($\zeta=-1$) and bosons ($\zeta=+1$).
The numerical solution of the Faddeev-type $3N$ SE was performed \citep{Sta97} in a basis of
partial wave helicity states. The propagator of the off-shell nucleon is
decomposed into positive and negative energy components, or $\rho$-spin states,
which also enter into the classification of the basis states.
The $3N$ SE was solved for a family of $NN$ potentials, which were constructed
by fixing the scalar off-shell coupling parameters to a particular value and
fitting the remaining parameters to the energy-dependent \mbox{Nijmegen} $np$ phase
shift analysis \citep{Sto93} of 1993. Then the $\chi^2$ to $NN$ databases is
calculated from the potentials in a second step, using Arndt's code
SAID \citep{ArnSAID}. While the off-shell couplings for the $\sigma$ and $\delta$
mesons are in principle independent, initially we related them in terms of a common scaling
parameter $\nu$, such that $\nu_\sigma = 2.6 \nu$ and $\nu_\delta = -0.75 \nu$
(a result obtained in preliminary independent fits). The parameters of some of
these potentials can be found in Ref.\ \citep{Sta98}.
It turned out that potentials with nonzero values of $\nu$ have a lower $\chi^2$ and
are therefore preferred by the fits. The potential named W16 has the minimum value,
$\chi^2=1.895$, for $\nu=1.6$, which corresponds to $\nu_\sigma=4.16$ and
$\nu_\delta=-1.2$. The triton binding energy varies rather strongly with $\nu$. The best
model in terms of $\chi^2$, W16, yields with $E_t=-8.491$ MeV also the binding energy
closest to the experimental value of $-8.48$ MeV.
This family of $NN$ potentials was designed principally to study the scalar off-shell
coupling and other relativistic effects in the $3N$ bound state. The number of adjustable
parameters was---with only 13---kept comparatively small. We are now in the process of
further improving the fit to the $NN$ data by allowing more potential parameters to vary
independently. As an example I mention here the (preliminary) model WJL19-1.1 with 19 free
parameters. Among other new features, it has different masses for the charged and neutral
pions, and its scalar off-shell coupling parameters, $\nu_\sigma=-6.59$ and
$\nu_\delta=2.67$, were varied independently. With a much improved $\chi^2=1.254$ it
demonstrates that the potentials of the CST are comparable in quality with the best
nonrelativistic potentials. They are, however, true meson-exchange potentials in the sense
that their parameters are the same in all partial waves or isospin channels. The model
WJL19-1.1 yields a somewhat higher triton binding energy of $E_t=-9.116$ MeV. This value is
likely to change as the fit is being finalized.
We also calculated the $3N$ bound state in the nonrelativistic limit, using a
potential fitted especially for this case. This calculation yields a $3N$
binding energy of $-7.914$ MeV. This case can also be used to estimate the
importance of relativistic boost effects, which here turned out to be repulsive
but relatively small.
\section{Spectator theory of electromagnetic three-nucleon currents}
For the calculation of elastic and inelastic electron scattering from the $3N$
bound state a conserved electromagnetic $3N$ current has to be derived within
the CST. As we will see, special care is needed to avoid double
counting and to arrive at expressions consistent with the basic assumptions of
the spectator formalism.
In the one-photon approximation, a gauge invariant current is obtained if all Feynman
diagrams are summed in which the photon is attached to all propagators and to all vertices
with momentum-dependent couplings. This is a clear prescription for any given Feynman
diagram. However, if we are dealing with an infinite sum of diagrams, defined through an
integral equation, a more general procedure is needed that systematically attaches the
photon in all the right places.
\begin{figure}
\epsfig{file=overcounting.eps,width=8cm}
\caption{An example for the overcounting problem in the BSE. The upper part
displays the decomposition of the full vertex function into components with a
given particle, indicated by a dot, as spectator during the last $NN$
interaction in the final state. The lower part shows how certain
contributions to the impulse approximation current appear twice if one simply
takes the ``obvious'' diagram (B) from nonrelativistic theory.} \label{F:overcounting}
\end{figure}
Unfortunately, one cannot simply use results from the nonrelativistic theory as
guidance. It is instructive to consider the elastic current in impulse
approximation for the BSE \citep{Kvi99}. Figure \ref{F:overcounting} shows that
diagram \ref{F:overcounting}(B), which one might naively identify with the
impulse approximation in analogy with the nonrelativistic case, includes some
contributions twice, for instance diagram \ref{F:overcounting}(d).
The origin of this double-counting problem can be traced back to the fact that
the two-body amplitude in diagram \ref{F:overcounting}(d) can be ``pulled
out'', by applying the $3N$ equation, from the vertex function in the initial as
well as in the final state. In contrast to nonrelativistic diagrams, in a
Feynman diagram it makes no difference if this two-body amplitude is extracted
from the right or left side of the diagram, since it can ``slide freely'' past the
point where the photon is attached to the third nucleon.
The double-counting problem in covariant scattering theories has been known for a long
time, especially in the context of simple systems of nucleons and pions \citep{Tay63,
Phi95}.
\begin{figure}[b]
\epsfig{file=equivalence-gs.eps,width=7cm}
\caption{An example of how diagrams with spectator particles off mass shell are
related to others with spectators on mass shell. See the discussion in the text.} \label{F:equivalence}
\end{figure}
A method to generate systematically the infinite series of diagrams that
represents the current, and avoids double counting, was introduced by
Kvinikhidze and Blankleider \citep{Kvi97a}. They call it the ``gauging of integral
equations method.'' It is based on the observation that coupling a photon to an
amplitude given by an integral equation satisfies the same distributive rule as
the differentiation of a product. For instance, if
\begin{equation}
\Gamma = 2 M G P \Gamma
\end{equation}
is the Faddeev equation for the vertex function $\Gamma$, with $M$ being the
$NN$ amplitude, $G$ an off-shell propagator, and $P$ the permutation
operator interchanging particles 1 and 2, then the gauged vertex function
satisfies
\begin{equation}
\Gamma^\mu = ( 2 M G P \Gamma)^\mu
=
2 M^\mu G P \Gamma + 2 M G^\mu P \Gamma
+ 2 M G P \Gamma^\mu \, .
\end{equation}
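Since the last term on the right-hand side contains $\Gamma^\mu$ itself, the gauged vertex function obeys an inhomogeneous integral equation of the same structure as the original Faddeev equation; schematically (suppressing all integrations), it can be solved as
\begin{equation}
\Gamma^\mu = {\left( 1 - 2 M G P \right)^{-1}}
\left( 2 M^\mu G P \Gamma + 2 M G^\mu P \Gamma \right) \, ,
\end{equation}
so that the gauged amplitude $M^\mu$ and the gauged propagator $G^\mu$ act as driving terms.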
Kvinikhidze and Blankleider applied their gauging method also to the $3N$
SE \citep{Kvi97}. However, their result contained diagrams in which the
spectator particle appeared off mass shell, clearly inconsistent with the
assumptions of the CST. The situation was even more confusing since
Adam and Van Orden \citep{Ada04} worked out an alternative derivation of the $3N$
current, using a purely algebraic technique, and obtained a result in which all
spectators are on mass shell. This apparent contradiction was resolved when
it was shown that the two results are in fact equivalent \citep{Gro04}.
This equivalence is illustrated in Fig.\ \ref{F:equivalence}. Diagram (A)
is part of diagram (B), which has one spectator off mass shell. A closer
inspection shows that (A) is actually identical to (A') which in turn
contributes to (B') where all spectators are on mass shell. Applying this idea
one can replace all diagrams of type (B), such as the ones appearing in the
current of Kvinikhidze and Blankleider, by others satisfying the
spectator-on-mass-shell constraint.
\begin{figure}[t]
\epsfig{file=corediagrams-gs.eps,width=8cm}
\caption{The core diagrams of the electromagnetic $3N$ spectator current.} \label{F:core}
\end{figure}
\begin{figure}[b]
\epsfig{file=3Nel-gs.eps,width=5cm}
\caption{The elastic electromagnetic spectator current, in terms of the core
diagrams.} \label{F:elcur}
\end{figure}
We derived the gauge invariant electromagnetic $3N$ current for the cases of
elastic and inelastic scattering from the $3N$ bound state within the spectator
theory in a diagrammatic approach \citep{Gro04}. The ``core diagrams'' of Fig.\
\ref{F:core} play a central role since they contribute in both cases. In fact,
elastic scattering (Fig.\ \ref{F:elcur}) is completely determined through the
core diagrams. The first 6 diagrams of Fig.\ \ref{F:elcur} comprise the Complete Impulse
Approximation (CIA) which is gauge invariant by itself. The remaining 4 diagrams involve the
photon coupling to the interacting $NN$ pair and belong therefore to the interaction
current.
\begin{figure}
\epsfig{file=3Nbreakupcur-gs.eps,width=8cm}
\caption{The electromagnetic $3N$ breakup spectator current.} \label{F:breakupcur}
\end{figure}
In the case of electrodisintegration a number of diagrams need to be added in
which the photon couples to the outgoing nucleons. In Fig.\ \ref{F:breakupcur},
diagrams (a1) and (a2) represent the relativistic impulse approximation, (b)
contains the final state interactions, and (c) and (d) are interaction currents.
The derived currents will be used to calculate the elastic form factors of the $3N$ bound
state as well as its electrodisintegration. These calculations are currently in progress.
\begin{theacknowledgments}
This work was performed in collaboration with Franz Gross (Jefferson
Laboratory) and Teresa Pe\~na (IST Lisbon). It was supported by FEDER and the
{\em Funda\c c\~ao para a Ci\^encia e a Tecnologia} under grant
POCTI/FNU/40834/2001.
\end{theacknowledgments}
Jens Rehn (born 18 September 1918 as Otto Jens Luther in Flensburg; died 3 January 1983 in Berlin) was a German writer and radio play author.

Life

Jens Rehn grew up in Berlin as the son of the chamber virtuoso Paul Luther. After attending secondary school and the conservatory, he embarked on a career as an officer in the Kriegsmarine in 1937. In the Second World War he was commander of the U-boat U-135 from 4 June 1943 to 15 July 1943. In 1943 he was taken prisoner of war for four years, which he spent in Africa, Canada and England.

From 1947 to 1949 he worked freelance; from 1950 to 1981 he was an editor in the literature department of RIAS in Berlin. From 1954 to 1958 he studied philosophy, English studies and musicology at the Free University of Berlin. Alongside his work as an editor, Rehn composed, wrote a number of radio plays, and undertook extensive journeys to East Asia, India and the USA.

His novel Nichts in Sicht, in which he worked through his wartime experiences via the story of two shipwrecked men, an American pilot and a German U-boat sailor, attracted wide attention and was described by Marcel Reich-Ranicki as a "parable of great vividness and suggestive power"; the critic wrote: "We should not, must not forget the book Nichts in Sicht: it is both things at once, a document of contemporary history and a work of art." Nichts in Sicht also received high praise from Gottfried Benn. In the expressionist science-fiction novel Die Kinder des Saturn (1959), Jens Rehn took up the themes of nuclear war and death by radiation.

Jens Rehn, who was a member of the PEN Centre of the Federal Republic of Germany, received the Berlin Art Prize "Junge Generation" in 1956 and a Villa Massimo scholarship in 1979.

Works

Nichts in Sicht. Berlin-Frohnau, Neuwied am Rhein, Luchterhand 1954. New edition with an afterword by Ursula März: Schöffling, Frankfurt/Main 2018. ISBN 978-3-89561-149-0.
Feuer im Schnee. Darmstadt [et al.] 1956.
Rondo und Scherzo funèbre. Stierstadt im Taunus 1958.
Die Kinder des Saturn. Luchterhand, Darmstadt 1959. Paperback: Wilhelm Heyne Verlag, Munich 1975.
Der Zuckerfresser. Neuwied a. Rh. 1961.
Das neue Bestiarium der deutschen Literatur. Stierstadt 1963.
Daten, Bilder, Hinweise, Störungen. Berlin 1964.
Kyushu-nikki. Stierstadt im Taunus 1965.
Das einfache Leben oder der schnelle Tod. Baden-Baden 1967.
Morgen-Rot. Stuttgart 1976.
Die weiße Sphinx. Herford 1978.
Nach Jan Mayen und andere Geschichten. Darmstadt [et al.] 1981.

As editor

Die zehn Gebote. Reinbek bei Hamburg 1967.

Radio plays

Der Chefrechner – Oder: Eins und eins ist drei. Radio Bremen 1961.
Drei Begegnungen. Süddeutscher Rundfunk 1961.
Nichts Außergewöhnliches. Radio Bremen 1963.
Arm Seel – da lachen ja die Hühner. Radio Bremen/RIAS Berlin 1964.
Verrostete Sterne. Süddeutscher Rundfunk 1966.
Das Geschwätz. Süddeutscher Rundfunk 1969.
Nichts in Sicht. Rundfunk Berlin-Brandenburg 2006.
Q: Cast webview android studio I am building an app for my website, which is just a WebView. Can anyone tell me whether it is possible to cast it to smart TVs? If so, can you point me to something to study, or even help me right here? All I have so far is the WEBVIEW.

A: I was recently adding cast support to my video-download app, and I found the following options:

Google Cast SDK

With it you can stream video, audio, or the screen of an app or game. It is available for Android, iOS and Chrome, and it is well documented, though you will have to put in some effort to learn it. Depending on what you have in mind, you may have to register the app in the Google Cast console, which costs 5 dollars; but if you do not want to customize the receiver UI or anything similar, you can use the default player, which is free. I believe Google Cast only supports TVs running Android and Google Cast devices such as the Chromecast.

Samsung Smart View SDK

This SDK targets televisions running Samsung's operating system, Tizen. The site itself recommends using it together with Google Cast for broader support; there is an integration guide on the site.
The European yew (Taxus baccata), often simply called yew, grows wild in Denmark but is nearly extinct there. It can be either a small, regular, evergreen tree or a broad, gnarled shrub with a densely branched growth form. The plant is cultivated in gardens and parks in numerous, often strongly divergent cultivars. The wood is tough and reddish brown.

Description

The European yew is an evergreen tree and occurs as a single-stemmed tree, a multi-stemmed tree, or almost as a shrub. It normally does not grow taller than 10 metres. The growth form can be highly varied, and several of the cultivated varieties have strongly divergent shapes. Trunks and branches can be very irregular in cross-section. The bark is at first greenish (for up to 3 years), then turns brown with narrow strips. Old branch or trunk bark peels off in flakes, revealing the red, orange-red, yellow or brown underbark. The buds are alternate, small, egg-shaped and green.

The needles are dark green with a distinct tip and a longitudinal keel on the upper side. The male flowers are gathered in catkin-like clusters, while the female flowers are solitary or in small groups. The fruits are berry-like and red, with slimy, sickly-sweet flesh and a poisonous green seed. As a rule, individual plants are either purely male or purely female (fruit-bearing).

The root system consists of densely branched main roots that reach deep down, and many fibrous, shallow fine roots. Yew forms heartwood, and the narrow sapwood is yellowish white. The wood is hard and reddish brown.

The branches are usually bushy, with flat, lanceolate, soft, dark green needles; the upper side is glossy, the underside matt. The needles are 1-4 cm long and 2-3 mm wide. They sit in spirals around the twig, but also lie in two planes, which is why the spiral pattern is more evident on fresh, vertical shoots.

The male flowers are spherical, 3-6 mm in diameter, and their pollen is shed in early spring.

The cone is a specially formed, berry-like fruit, just as in juniper. Each cone contains a single seed, 4-7 mm long, surrounded by a fleshy, berry-like covering (aril) of a vivid scarlet colour. The whole fruit is 8-15 mm in both width and length and has an opening at the end. The fruit ripens 6-9 months after pollination and is a delicacy for birds. The flesh tastes sweetish, and the aril is the only part of the yew that is not poisonous. The hard seeds pass undigested through the birds' digestive systems and are dispersed in this way. The seeds take 2-3 months to germinate.

Distribution

Yew is originally native to mineral-rich, moist soils in Denmark, where it formed the understorey of mixed deciduous forests. The usefulness of its wood and the tree's toxicity (especially to horses), however, have led to the plant being almost completely exterminated. In Denmark it is classified as a threatened species on the Danish red list.

Munkebjergskoven, on the south side of Vejle Fjord, holds the only wild-growing Danish population of the species. Here it is found together with, among others, sycamore maple, ash, dog's mercury, beech, hard fern, mountain fern, one-sided wintergreen, fingered sedge, green-flowered helleborine, yellow anemone, hollow-root, wood anemone, pendulous sedge, great wood-rush, thin-spiked sedge and dense-flowered helleborine. Various forms of yew are otherwise very common in gardens, parks and churchyards.

Toxicity

The whole plant is poisonous apart from the red aril. The seeds inside the red berries are also poisonous, so the berries become more poisonous the longer one chews on them, and yew cuttings should not be discarded just anywhere. Yew is nearly extinct in the wild but is still found in gardens and churchyards, among other places. Do not plant it where small children play.

The plant is dangerous to humans and animals. It is so poisonous that horses and cattle can die even after ingesting small amounts.

Accounts of the plant's toxicity may have contributed to the near-eradication of Denmark's original yew population.

The needles, the bark and the seeds all contain alkaloids (taxine A and B) that are poisonous to mammals (especially horses and cattle) and to humans. The plant parts are poisonous both fresh and dried.

The alkaloids slow the heart and can cause cardiac arrhythmias, and the poison can also cause digestive problems and hyperventilation. An extract of 50-100 g of fresh plant material can be a lethal dose for a human. For horses, cattle and sheep, ingesting 100-200 g of plant material can be fatal. Poisoning can also lead to permanent liver damage in both animals and humans.

Most accidents happen when small dried plant parts end up mixed into hay after pruning.

Uses

Historical

Yew is extremely well suited to making longbows; knot-free, tight-grained yew is particularly good. In the Middle Ages yew was cultivated on a large scale for bow-making. The reason is that the heartwood withstands compression well, while the sapwood withstands tension well. This gives a particularly good "cast" when an arrow is loosed.

Mythological

Yggdrasil, the tree of life in Norse mythology, was according to some sources probably not an ash but rather a yew, since the Edda describes it as an eternal, evergreen tree with needles. This description matches the yew's ability to live for thousands of years and its evergreen needles.

Medicinal

The tree's content of taxanes, including paclitaxel ("Taxol") and docetaxel ("Taxotere"), which are used as effective cytostatic drugs in chemotherapy for ovarian, lung and breast cancer, among others, has strengthened the pharmaceutical industry's interest in the tree.

Most recently, "Abraxane", which contains paclitaxel bound to the plasma protein albumin, was approved in the European Union in 2008.

Today

Yew tolerates deep shade and hard pruning. Both qualities have made it a valued plant in churchyards and private gardens. Its wood is very tough and durable, but because all parts of the plant are poisonous, especially to horses, it was almost completely exterminated during the 18th and 19th centuries. The stand in Munkebjergskoven near Vejle is, however, originally wild. In addition, yew has naturalised throughout Denmark today, and a great many cultivars have been selected within the species.

See also

Notable trees
Bibliography for plant knowledge
Kilder og henvisninger
Signe Frederiksen et al., Dansk flora, 2. udgave, Gyldendal 2012. .
Sten Porse: Plantebeskrivelser, DCJ 2003 (CD-Rom).
Eksterne henvisninger
Arne og Anna-Lena Anderberg: Den virtuella floran, Naturhistoriska riksmuseet
Haveplanter
Smukke frugter
Stedsegrønne
Hækplanter
Planter i Danmark
Giftige planter
Nåletræer
Træer og buske (naturkanon)
Yew
function fsl_preprocess(dwi_files, bvecs_file, bvals_file, pe_dir, outdir)
% Correct for EPI distortions, eddy currents and motion with FSL
%
% dwi_files = Cell array of paths to niftis with alternating PE directions
% pe_dir = mxn matrix denoting the phase encode direction of each
% acquisition. Each row is for an image and columns are x,y,z
% bvecs_file = Bvecs file for each nifti image
% bvals_file = Bvals file for each nifti image
%
% example:
% dwi_files = ...
% {'/mnt/diskArray/projects/KNK/data/20140814S015/20140814_191546DWIdir84LRs051a001.nii.gz'...
% '/mnt/diskArray/projects/KNK/data/20140814S015/20140814_191546DWIdir84RLs049a001.nii.gz'};
% bvecs_file = ...
% {'/mnt/diskArray/projects/KNK/data/20140814S015/20140814_191546DWIdir84LRs051a001.bvec'...
% '/mnt/diskArray/projects/KNK/data/20140814S015/20140814_191546DWIdir84RLs049a001.bvec'};
% bvals_file = ...
% {'/mnt/diskArray/projects/KNK/data/20140814S015/20140814_191546DWIdir84LRs051a001.bval'...
% '/mnt/diskArray/projects/KNK/data/20140814S015/20140814_191546DWIdir84RLs049a001.bval'};
% pe_dir = [-1 0 0; 1 0 0];
% outdir = '/mnt/diskArray/projects/KNK/data/20140814S015/fsl_84dir'
% fsl_preprocess(dwi_files, bvecs_file, bvals_file, pe_dir, outdir)
%% Argument checking
if ~exist('outdir','var') || isempty(outdir)
outdir = fileparts(dwi_files{1});
end
if ~exist(outdir,'dir')
mkdir(outdir);
end
cd(outdir);
%% Concatenate files and pull out b=0 images
b0_cat = []; dfull = [];
for ii = 1:length(dwi_files)
% load image
im = readFileNifti(dwi_files{ii});
% make a concatenated image
dfull = cat(4,dfull,im.data);
% load bvals and bvecs
bvals{ii} = dlmread(bvals_file{ii});
bvecs{ii} = dlmread(bvecs_file{ii});
% define b0 volumes. Sometimes these still have minimal diffusion
% weighting
b0 = bvals{ii}<20;
% pull out b0 volumes
im.data = im.data(:,:,:,b0);
im.dim(4) = sum(b0);
% b0name{ii} = [im.fname(1:end-7) '_b0s.nii.gz'];
% im.fname = b0name{ii};
% writeFileNifti(im);
% make a concatenated b0
b0_cat = cat(4,b0_cat,im.data);
% mark which b0s in the concatenated image are from each acquisition
nb0(ii) = sum(b0);
end
% Make a concatenated b0 image
im.data = b0_cat;
im.dim(4) = size(im.data,4);
b0cat_file = fullfile(outdir,'b0_cat.nii.gz');
im.fname = b0cat_file;
writeFileNifti(im);
% Write a text file with PE directions
pe = [];
for ii = 1:length(nb0)
pe = vertcat(pe,repmat(pe_dir(ii,:),[nb0(ii),1]));
end
% The fourth column is the total readout time for each acquisition; this
% should really be read from the image headers rather than hard-coded.
pe(1:end,4) = .0651;
acq_file=fullfile(outdir,'acqparams.txt');
dlmwrite(acq_file,pe,'\t');
% Write out a concatenated DWI
im.data = dfull;
im.dim(4) = size(im.data,4);
dwicat_file = fullfile(outdir,'dMRI_cat.nii.gz');
im.fname = dwicat_file;
writeFileNifti(im);
totalvols = im.dim(4);
% Write out concatenated bvecs and bvals
bvals_cat = horzcat(bvals{:});
% bvalues below 20 will be treated as 0
bvals_cat(bvals_cat<20) = 0;
bvecs_cat = horzcat(bvecs{:});
dlmwrite(fullfile(outdir,'bvecs_cat.bvec'),bvecs_cat,'\t');
dlmwrite(fullfile(outdir,'bvals_cat.bval'),bvals_cat,'\t');
%% run topup as a system call
cmd = sprintf('topup --imain=%s --datain=%s --config=b02b0.cnf --out=topup_results --iout=topup_b0',...
b0cat_file,acq_file);
system(cmd);
%% brain extraction
system('fslmaths topup_b0 -Tmean topup_b0');
system('bet topup_b0 topup_b0_brain -m -f 0.2');
%% Create an index file
% First find the indices of the volumes within dwi_cat that are b0
in = find(bvals_cat==0);
% Next we create an index where each dwi volume is associated with the b0
% that came before it. This way the topup params are applied to the nearest
% image in time. One other thing to keep in mind is that this reference
% should be relative to the concatenated b0 volume not the full dwi dataset
indx = [];
for ii = 1:totalvols
% for volume ii find its difference in time from each b0 volume
tdiff = in-ii;
% Don't consider b0s collected later in time
tdiff(tdiff>0) = inf;
% Find the nearest b0 in time
[~, i] = min(abs(tdiff));
% Record the index to this volume
% indx(ii) = in(i);
indx(ii) = i;
end
dlmwrite('index.txt',indx,'\t');
%% run eddy current correction
if ~exist('eddy','dir')
mkdir('eddy');
end
outname = 'eddy/eddy_corrected_data';
cmd = sprintf('eddy --imain=dMRI_cat.nii.gz --mask=topup_b0_brain_mask.nii.gz --acqp=acqparams.txt --index=index.txt --bvecs=bvecs_cat.bvec --bvals=bvals_cat.bval --topup=topup_results --out=%s --flm=quadratic --niter=%d --verbose',...
outname,8);
system(cmd);
% rotate bvecs - first load eddy parameters
b = dlmread([outname '.eddy_parameters']);
bvec = dlmread('bvecs_cat.bvec');
% rotate vecs. see: http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/eddy/Faq
bvecrot = [];
for ii = 1:size(bvec,2)
bvecrot(:,ii) = fsl_rotMatrixFromEddy(b(ii,4),b(ii,5),b(ii,6))*bvec(:,ii);
end
dlmwrite('eddy/bvecs',bvecrot,'\t');
copyfile('bvals_cat.bval','eddy/bvals');
movefile([outname '.nii.gz'],'eddy/data.nii.gz')
copyfile('topup_b0_brain_mask.nii.gz', 'eddy/nodif_brain_mask.nii.gz')
%% Run dtifit
eddyNii = fullfile(pwd,'eddy/data.nii.gz');
eddyBvecs = fullfile(pwd,'eddy/bvecs');
eddyBvals = fullfile(pwd,'eddy/bvals');
mask = fullfile(pwd,'eddy/nodif_brain_mask.nii.gz');
dtidir = fullfile(pwd,'dtifit');
dtiOut = fullfile(dtidir,'dti');
if ~exist(dtidir,'dir')
mkdir(dtidir);
end
cmd = sprintf('dtifit --data=%s --out=%s --mask=%s --bvecs=%s --bvals=%s',...
eddyNii, dtiOut, mask, eddyBvecs, eddyBvals);
system(cmd);
%% Run bedpostx
%system('bedpostx eddy')
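The index-file construction above pairs each DWI volume with the nearest b0 acquired at or before it, falling back to the first b0 for volumes collected before any b0. As an illustration (not part of the pipeline), the same logic can be re-expressed in Python:

```python
def nearest_preceding_b0_index(bvals, b0_threshold=20):
    """Return, for each volume, the 1-based position (among the b0 volumes)
    of the closest b0 acquired at or before it: the contents of index.txt."""
    b0_positions = [i for i, b in enumerate(bvals) if b < b0_threshold]
    index = []
    for vol in range(len(bvals)):
        preceding = [k for k, pos in enumerate(b0_positions) if pos <= vol]
        # Volumes acquired before the first b0 fall back to the first b0,
        # matching min(abs(tdiff)) over an all-inf vector in the MATLAB loop.
        index.append((preceding[-1] if preceding else 0) + 1)
    return index
```

For example, bvals of [0, 1000, 1000, 0, 1000] yield the index [1, 1, 1, 2, 2], so eddy applies the topup parameters from the b0 nearest in time to each diffusion volume.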
Q: Objective C - Access another instance's instance variables? Is it possible to access another instance's variables, given that we're working in the same class?
Or, in other words, can you do this Java code (which works in Java, I've done it before) in Objective C:
class Matrix {
private int mat[] = new int[16]; //wouldn't be a pointer in C
public Matrix (Matrix m){
for (int i = 0; i < 16; i++){
this.mat[i] = m.mat[i]; //<-- this here
}
}
}
Given that arrays cannot be properties in Objective C, I can't make mat[] into a property. Is there any way to do this then?
A: You could use an NSMutableArray that holds NSNumbers instead of a regular old C int array; then you could use it as a property.
something like this maybe:
self.mat = [NSMutableArray arrayWithCapacity:16];
for (int i = 0; i < 16; i++) {
    // objectAtIndex: returns an NSNumber, so unbox it before re-boxing
    [self.mat addObject:[NSNumber numberWithInt:[[m.mat objectAtIndex:i] intValue]]];
}
A: You can do it perfectly fine, you just can't make the instance variable (ivar) into a property:
@interface Matrix : NSObject
{
@private
int mat[16];
}
- (id) initWithMatrix:(Matrix *)m;
@end
@implementation Matrix
- (id) initWithMatrix:(Matrix *)m
{
if ((self = [super init]))
{
for(int i = 0; i < 16; i++)
mat[i] = m->mat[i];
// Nota bene: this loop can be replaced with a single call to memcpy
}
return self;
}
@end
A: The closest analogy would be a readonly property that returns int *:
@interface Matrix : NSObject {
@private
int values[16];
}
@property (nonatomic, readonly) int *values;
@end
@implementation
- (int *)values
{
return values;
}
@end
For a Matrix type, you should really be using a struct or Objective-C++; all the method dispatch/ivar lookup will add a lot of overhead to inner loops
A: Arrays can't be properties. But pointers to an array of C data types can. Just use an assign property, and check for NULL before indexing it as an array.
Q: Angular styling viewContainerRef from Directive Input array I have created a directive that uses a variable-length array to populate a tooltip. This works great, but I need to dynamically style the tooltip so it remains under the initial trigger component, using a top or bottom value that changes based on the number of items.
<div tooltipDirective [tooltipDataArray]="['Person1', 'Person2', 'Person3', 'Person4', 'Person5', 'Person6']">See tooltip!
<ng-template #tooltipTemplate >
<div class="tooltip" [ngStyle]="{'top.px': divStyle}"> // Not sure if this is the correct approach as can't bind to divStyle in the directive
</div>
</ng-template>
</div>
I tried using ngStyle but am unsure how to get access to the divStyle value because this is created using viewContainerRef.createEmbeddedView.
I thought a better option would be to add the styles from the ts file using style.bottom but I don't know how to add that. I need to calculate tooltipDataArray.length then add 10px or so to a variable that repositions the viewContainerRef. I'm not sure of the best way to proceed.
@Input() tooltipDataArray: string[];
@ContentChild("tooltipTemplate") private tooltipTemplateRef: TemplateRef<Object>;
@HostListener("mouseenter") onMouseEnter(): void {
console.log(this.tooltipDataArray);
const view = this.viewContainerRef.createEmbeddedView(
this.tooltipTemplateRef
);
this.tooltipDataArray.forEach(el => {
const child = document.createElement("div");
child.innerText = el;
this.renderer.appendChild(view.rootNodes[1], child);
});
// Somthing like this.viewContainerRef.styles.bottom = 10 x this.tooltipDataArray.length + 'px'
console.log(view.rootNodes)
view.rootNodes.forEach(node => {
this.renderer.appendChild(this.elementRef.nativeElement, node);
});
}
@HostListener("mouseleave") onMouseLeave(): void {
if (this.viewContainerRef) {
this.viewContainerRef.clear();
}
stackBlitz here
A: If you were willing to pass in the templateRef as an input to the directive this would be a lot easier...
With your current implementation you are replacing the content of the
div with the rendered content of the template...
*
*This essentially is
not a tooltip and you would need to decouple them somehow to "simulate a tooltip"
Below is one way you could accomplish this.
Separate the ng-template from the div to decouple them, and pass your #tooltipTemplate as a value to the [templateRef] input on the directive
<div tooltipDirective [templateRef]="tooltipTemplate" [tooltipDataArray]="['Person1', 'Person2']">See tooltip!
</div>
<ng-template #tooltipTemplate>
<div class="tooltip">
This is my tooltip!
</div>
</ng-template>
In your directive convert your @ContentChild to an input to receive the templateRef, create your embeddedView and add your array elements.
* This also simplifies your logic here
@Input() templateRef: TemplateRef<Object>;
@HostListener("mouseenter") onMouseEnter(): void {
const view = this.viewContainerRef.createEmbeddedView(this.templateRef);
this.tooltipDataArray.forEach(el => {
const child = document.createElement("div");
child.innerText = el;
this.renderer.appendChild(view.rootNodes[1], child);
});
}
Adjust your global styling
.tooltip {
position: absolute;
/* bottom: -40px; */
left: 15px;
padding: 10px;
background: red;
border-radius: 5px;
/* box-shadow: 0 2px 1px rgba(0, 0, 0, 0.6); */
}
STACKBLITZ
https://stackblitz.com/edit/angular-zr2ydx?file=app/tooltip.directive.ts
This would be the cleanest implementation with the scaffolding you have provided... with that said, if I were to implement a tooltip directive, I would research CDK Overlay to create a custom tooltip implementation.
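For the original question of positioning the tooltip based on the array length, the computation itself can live in a small pure helper. The pixel constants below are assumptions for illustration, not values from the original styles:

```typescript
// Assumed per-row height and base offset; tune to the real .tooltip styling.
const ROW_HEIGHT_PX = 20;
const BASE_OFFSET_PX = 24;

function tooltipBottomOffset(itemCount: number): string {
  // e.g. 2 items -> "-64px": the offset grows as rows are added.
  return `-${BASE_OFFSET_PX + ROW_HEIGHT_PX * itemCount}px`;
}

// Applied in the directive after the embedded view is created (sketch):
//   this.renderer.setStyle(
//     view.rootNodes[1], "bottom",
//     tooltipBottomOffset(this.tooltipDataArray.length));
```

Using Renderer2.setStyle keeps the DOM manipulation consistent with the renderer calls the directive already makes.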
#ifndef __ZUISTRING_H__
#define __ZUISTRING_H__
#include <assert.h>
namespace ZuiLib
{
typedef char ZChar;
class ZString
{
public:
enum { MAX_LOCAL_STRING_LEN = 63 };
ZString();
ZString(const ZChar ch);
ZString(const ZString& src);
ZString(const ZChar* str, int nLen = -1);
~ZString();
void Empty();
int GetLength() const;
bool IsEmpty() const;
ZChar GetAt(int nIndex) const;
void Append(const ZChar* pstr);
void Assign(const ZChar* pstr, int nLength = -1);
const ZChar* GetString() const;
void SetAt(int nIndex, ZChar ch);
operator const ZChar*() const;
ZChar operator[] (int nIndex) const;
const ZString& operator=(const ZString& src);
const ZString& operator=(const ZChar ch);
const ZString& operator=(const ZChar* pstr);
ZString operator+(const ZString& src) const;
ZString operator+(const ZChar* pstr) const;
const ZString& operator+=(const ZString& src);
const ZString& operator+=(const ZChar* pstr);
const ZString& operator+=(const ZChar ch);
bool operator == (const ZChar* str) const;
bool operator != (const ZChar* str) const;
bool operator <= (const ZChar* str) const;
bool operator < (const ZChar* str) const;
bool operator >= (const ZChar* str) const;
bool operator > (const ZChar* str) const;
int Compare(const ZChar* pstr) const;
int CompareNoCase(const ZChar* pstr) const;
void MakeUpper();
void MakeLower();
ZString Left(int nLength) const;
ZString Mid(int iPos, int nLength = -1) const;
ZString Right(int nLength) const;
int Find(ZChar ch, int iPos = 0) const;
int Find(const ZChar* pstr, int iPos = 0) const;
int ReverseFind(ZChar ch) const;
int Replace(const ZChar* pstrFrom, const ZChar* pstrTo);
int __cdecl Format(const ZChar* pstrFormat, ...);
int __cdecl Format(const ZChar* pstrFormat, va_list Args);
int __cdecl SmallFormat(const ZChar* pstrFormat, ...);
protected:
ZChar* m_pstr;
ZChar m_szBuffer[MAX_LOCAL_STRING_LEN + 1];
};
}//namespace
#endif //__ZUISTRING_H__
Source: https://torch.mlverse.org/docs/reference/torch_multinomial.html

Multinomial

torch_multinomial(self, num_samples, replacement = FALSE, generator = NULL)

## Arguments

self: (Tensor) the input tensor containing probabilities
num_samples: (int) number of samples to draw
replacement: (bool, optional) whether to draw with replacement or not
generator: (torch.Generator, optional) a pseudorandom number generator for sampling

## Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Indices are ordered from left to right according to when each was sampled (first samples are placed in the first column).

If input is a vector, out is a vector of size num_samples.

If input is a matrix with m rows, out is a matrix of shape $$(m \times \mbox{num\_samples})$$.

If replacement is TRUE, samples are drawn with replacement.

If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row.

When drawn without replacement, num_samples must be lower than the number of non-zero elements in input (or the min number of non-zero elements in each row of input if it is a matrix).

## multinomial(input, num_samples, replacement=False, *, generator=NULL, out=NULL) -> LongTensor

Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of tensor input.

## Examples

if (torch_is_installed()) {

weights = torch_tensor(c(0, 10, 3, 0), dtype=torch_float()) # create a tensor of weights
torch_multinomial(weights, 2)
torch_multinomial(weights, 4, replacement=TRUE)
}
#> torch_tensor
#> 1
#> 1
#> 1
#> 1
#> [ CPULongType{4} ]
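The sampling semantics described above (non-negative weights that need not sum to one; a zero-weight index is never drawn; without replacement, a drawn index cannot be drawn again for that row) can be illustrated with a small pure-Python sketch. This only illustrates the behavior and is not the torch implementation:

```python
import random

def multinomial_sample(weights, num_samples, replacement=False):
    """Draw num_samples indices in proportion to non-negative weights.
    Weights need not sum to one; zero-weight indices are never drawn."""
    if not replacement and num_samples > sum(w > 0 for w in weights):
        raise ValueError("more samples than non-zero weights without replacement")
    weights = list(weights)
    drawn = []
    for _ in range(num_samples):
        total = sum(weights)
        r = random.uniform(0, total)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if w > 0 and r <= acc:
                drawn.append(i)
                break
        if not replacement:
            weights[drawn[-1]] = 0.0  # this index cannot be drawn again
    return drawn
```

With the weights c(0, 10, 3, 0) from the example, only indices 1 and 2 (0-based) can ever be drawn, and drawing 2 without replacement must return both of them.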
England, UK. 16.4.1990. London. Wembley Stadium. Mandela Concert. Nelson Mandela arrives in London after his release from prison.
Copyright © 1990 Andrew Wiard W: www.reportphotos.com E: info@reportphotos.com
http://www.andrew-wiard.com
https://www.reportphotos.com/-/galleries/report-photos-archive/1990-archive-photos/nelson-mandela-at-wembley-april-1990/-/medias/eefe1997-a3e5-499a-b0b3-9a5c1f4505f1/price
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,747 |
Mushkegowuk Council (pointed: ᐅᒪᐡᑫᑯ ᐅᑭᒫᐎᐎᐣ (omashkeko okimāwiwin); unpointed: ᐅᒪᐡᑫᑯ ᐅᑭᒪᐎᐎᐣ), officially the Mushkegowuk Tribal Council, is a non-profit regional chiefs' council representing Cree First Nations in northern Ontario, Canada. The council, located in Moose Factory, Ontario, provides advisory services and program delivery to its eight member nations.
Council
The council is made up of a representing chief from each of the eight member communities. The chiefs provide political direction to the organization in its strategic planning, government relations and policy development. To assist in these activities, the council maintains a political and advocacy staff to support its efforts in helping their communities to prosper. In turn, the council is a member of Nishnawbe Aski Nation, a tribal political organization representing the majority of Treaty 5 and Treaty 9 First Nations in northern Ontario.
The council's current grand chief is Jonathon Solomon. Musician Lawrence "Wapistan" Martin has also previously served as grand chief.
Member First Nations
Attawapiskat First Nation (ᐋᐦᑕᐙᐱᐢᑲᑐᐎ ᐃᓂᓂᐧᐊᐠ Āhtawāpiskatowi ininiwak)
Chapleau Cree First Nation (ᔕᑊᓗ ᐃᓂᓂᐗᐠ šaplo ininiwak)
Fort Albany First Nation (ᐲᐦᑖᐯᒄ ᐃᓕᓕᐗᒃ pîhtâpek ililiwak)
Kashechewan First Nation (ᑫᔒᒋᐗᓐ ᐃᓕᓕᐗᒃ kêšîciwan ililiwak)
Missanabie Cree First Nation (ᒪᓯᓈᐴᔾ ᐃᓂᓂᐗᐠ masinâpôy ininiwak)
Moose Cree First Nation (ᒨᓱᓂᔨ ᐃᓕᓕᐗᒃ môsoniyi ililiwak)
Taykwa Tagamou Nation (ᑕᐟᑾ ᑕᑲᒪᐤ ᐃᓂᓂᐗᐠ tatkwa takamaw ininiwak)
Weenusk First Nation (ᐐᓈᐢᑯ ᐃᓂᓂᐗᐠ wînâsko ininiwak)
See also
Trick or Treaty?, a documentary film about Treaty 9, featuring Council Grand Chief Stan Louttit
References
External links
Mushkegowuk Council, Official website
INAC profile
Path of the Elders - Explore Treaty 9, Aboriginal Cree & First Nations history.
First Nations governments in Ontario
Cree governments
First Nations tribal councils
Social Media & Halloween: Connect the Dots
Holidays can be trying, what with arrangements to be made, food to be prepared, gifts to be given, and so on. Halloween specifically – which may lack the flair of its more imposing sisters (Thanksgiving and Christmas), but which demands the same creativity and agility – requires forethought, since it's the one time when you're actually mandated to be someone (or something) else, to take on a persona different from your own.
In that respect, Halloween is rather like today's social media platforms, places where you're constantly making impressions, and where every move you make needs careful planning. To make the most of both the holiday and social media, it's important to know what to expect, and who you're talking to.
So, in honor of today (October 31st, All Hallows' Eve), here are four people you're likely to meet on Halloween, the four social media users they most resemble, and four different ways to engage them (in real life and in social)!
1. The person who opts for a topical costume.
This might be the friend of yours who wears all pink and calls herself Miley Cyrus' tongue, or the stranger clad in paint swatches who's introduced to you as "50 Shades of Grey". In either case, this is the person intent on making a statement with what he/she wears, the person eager to be relevant.
This type of person is most similar to "the cheerleader" this infographic describes. A cheerleader in social media is the person who is the first to like, comment on, and share your brand's posts, who enters every giveaway and drives new traffic to your site, and who wants to be known by others as a good brand advocate.
The best ways to engage both person and user? Show appreciation for their enthusiasm, and work to understand the character they've created for themselves (what it means to them, why it speaks to them, and what speaks to them more generally as individuals). Make them feel respected, and seek out opportunities for further interaction (perhaps with a giveaway online, or a hang-out in real life!).
2. The person who undercommits to his/her costume.
This is the partygoer who slaps a quarter on his back and calls himself a quarterback, or the carouser who dons a pair of cat-ears and considers herself Catwoman – essentially, the person who puts next to no effort in the costume he/she wears, but who still appreciates the holiday and has fun all the same.
As a social media user, this person would be the "quiet follower," the sort of fan who knows your business and brand but likely isn't a direct customer, who follows you because their friends do, and who mildly helps your overall presence on social channels.
The best ways to engage both person and user? Find out what makes them tick! Learn where their interests and loyalties lie, and coax out their true personality (with witty banter in person, with appealing surveys and other calls to action online).
3. The person who tries to be sexy.
This type of reveler would be happiest to pair some form of animal ears with lingerie, would gladly sport a striped crop-top and striped short-shorts and answer to "Waldo" – in short, wants to impress and be remembered. This would be the sort of person who enjoys Halloween and the carousing that comes with it, and sees it as an opportunity to stand out among all the other celebrants.
It is in that last respect especially that this person brings to mind the "deal-seeker" the infographic mentions – the user of social media who puts value before loyalty, and who wants a clear picture of what he or she stands to gain in committing to your brand or product. This sort of person is a trend-setter, and is quick to lead others to act.
The best ways to engage both person and user? Be complimentary, but not to the extent that the person or user feels appreciated only for their appearance. Make sure they feel heard and recognized in your exchanges; you might manage this online with special deals, promotional packages, fan giveaways – freebies that encourage support and dialogue.
4. The person who eschews costumes altogether.
This would be the last sort of person you'd want to encounter on a Halloween night – the kind of partygoer who denounces the holiday as pointless, who scoffs at or ridicules others for their costumes, the killjoy who wants only to bring down the mood.
In social media, this person would likely be considered a troll: the person who litters your brand's page with negativity and demands to have the last word.
The best ways to engage both person and user? Keep as cool a head as possible, and try to be helpful when and where they'll let you (as a shoulder to lean on in person, and a thought leader or resource online). But just as you'd put a drunk in a cab and send him home, you might need to block the true troll if he's ruining the experience for the rest of your guests.
Realistically, you'll never be fully prepared for every possible eventuality when it comes to a beast as unwieldy as social media or a holiday as chaotic as Halloween. What's important is that you develop a plan of some sort, that you prepare, so that you're on firmer ground when things (inevitably) go awry.
Find out more tips for social media marketing at Act-On's Center of Excellence! And Happy Halloween to one and all.
"Brindle Pitbull with Bow," "Orange Cat in Blue Blanket," "Two White Chickens with Eggs & a Box of Hay," and "Orange Cat in Pumpkin Costume" images by the Found Animals Foundation, used under a Creative Commons 2.0 License.
We Killed the Middle Class. Here's How We Can Revive It.
America is once again engaged in the process of rebuilding its economy from a devastating recession. The United States cannot afford another feeble and prolonged rebound, in which the gilded chambers of the economy recover faster than all the others, and it need not have one. But it may be slipping into that trap again, because our leaders have not learned the lessons of the nation's great postwar boom, the last time America delivered lasting prosperity and security for the middle class. They have not learned that the way to create another middle-class boom is by investing in workers.
I've spent two decades writing and reporting about politics and economics, in Washington and around the country. For much of that time, I have been consumed by a question: Where did the good jobs go for the American middle class after the great postwar boom faded 40 years ago, and where will we find new ones to replace them? I have interviewed laid-off factory workers in Ohio, home health aides in Virginia, an aspiring lawyer who watched her mother lose her childhood home in Chicago, a relentlessly upbeat airport valet who earned poverty wages for wheeling elderly passengers through the terminal. I have watched more equation-stuffed slide decks at economics conferences than I would deem medically advisable. From the people and the cutting-edge research, I have built an understanding of what actually made the middle class boom after World War II, and what it will take to rebuild it from our pandemic crisis now.
[Annie Lowrey: Now we'll know what the recession feels like]
My reporting taught me that the United States economy enjoyed a golden era of shared prosperity in large part because, during the war effort and the civil-rights era, America made it easier for people who had been previously shut out of economic opportunity—women, minorities, immigrants—to enter the workforce and climb the economic ladder, to make better use of their talents and potential. Research from economists at the University of Chicago and Stanford attribute two-fifths of our per-worker growth since 1960 to that improved flow of talent. Reducing discrimination made our country faster-growing and more productive. It lifted everyone up, including white men, who have set the rules of the American economy since its founding.
This post was excerpted from Tankersley's recent book.
And I've also learned what isn't working. I've seen the government bail out airlines and flood the financial system with money to keep it functioning, while leaving individuals to slowly scrape up the pieces of their shattered careers and dreams. I've watched politicians harness the politics of fear to lash out at immigrants, ignoring the doctors who have manned ventilators during the pandemic and the farmworkers who have kept produce flowing to Amazon's delivery trucks. I've seen the public turn a blind eye to the centuries of systemic oppression that have kept Black women who don nursing scrubs and ring up groceries from earning and saving enough to buy their own homes. And in the depths of a crisis, I've watched populists pit one struggling group of Americans, who happen to be white men, against struggling Black and Latino men, and against women of all races.
The barriers that block some workers from advancement, such as inadequate parental-leave policies, federal limits on imported brainpower, and overt racial discrimination, are holding us all back. Those barriers don't just hurt women and men of color. They're shrinking the middle class, and they're hurting our democracy.
Wide-ranging economic research shows that strong middle classes breed political and social stability. "Societies with a strong middle class experience higher levels of social trust but also better educational outcomes, lower crime incidence, better health outcomes and higher life satisfaction," a 2019 report from the Organization for Economic Cooperation and Development concluded, citing several studies. "The middle class champions political stability and good governance. It prevents political polarization and promotes greater compromise within government." The report warned that members of the middle class increasingly say the economy is unfair because they see so much income and wealth flowing to the rich, while their own lifestyles—their middle-class security blanket of a home, education, retirement savings—have become more expensive.
[Catilin Zaloom: Does the U.S. still have a 'middle class'?]
As America climbs out of its coronavirus recession, it must reinvest in its middle class, and in the people who will bring good, middle-class jobs forth in the economy. I've encountered a lot of very smart people while reporting, who have detailed thoughts on how to do just that.
Some of the solutions are themselves fodder for entire books, such as reducing the cost of American health care or bringing down soaring housing prices in superstar areas like Silicon Valley. Many are targeted at specific groups who are being held back. William Darity, a Duke University economist who has exhaustively chronicled discrimination and its effects, proposes a suite of programs to empower Black Americans to earn more and build wealth, including paying reparations to the descendants of enslaved people and providing a living-wage, government-guaranteed job for anyone who wants to work. In April, I tuned in to an online conference call where Darity said the pandemic recession had made those measures all the more important. The virus, he said, had exposed "the deep historical residue of health disadvantage that was already embedding itself in the Black community." He predicted that it would further worsen income and wealth inequality on racial lines. Sure enough, it has.
Heather Boushey, the president and CEO of the Washington Center for Equitable Growth and an adviser to the Democratic presidential nominee Joe Biden, was one of the first economists to talk at length with me about the middle class and how to revive it. She favors policies that clear the way for women to work and earn more in the economy, allowing us to tap the full potential of our most skilled workers. Boushey proposes expanding paid leave for parents and caregivers, reducing or eliminating the cost of child care for working families, and adopting a universal prekindergarten system, all to support working women and their children, and advance women in the workplace. As the pandemic unfolded in the spring of 2020, she pushed for new and permanent policies to safeguard workers on the front lines of the virus response—both to protect those workers and to give Americans confidence that the people taking care of them during the outbreak would be taken care of themselves. She told me that the countries faring best in the crisis had "paid sick leave, universal access to affordable health care, and a robust public-health infrastructure"—along with income supports for workers and businesses, which automatically ramped up when the economy contracted.
[Derek Thompson: We can prevent a Great Depression. It'll take $10 trillion.]
Libertarian economists like Matt Mitchell at George Mason University, a crusader against the "crony capitalism" that favors the politically connected, support policies to end special favors from government that hold some workers back. These include eliminating state occupational-licensing requirements that prohibit people from working in certain fields, such as hair braiding, without a particular government-approved training certificate, and killing tax loopholes and direct subsidies that benefit handfuls of companies lobbying hard to maintain their edge over would-be rivals. The government response at all levels to the 2020 pandemic only reinforced this view: To adapt to the new world of economic restrictions and the realities of the health crisis, officials suspended a lot of regulations that some economists say never should have existed in the first place. They allowed bars to sell takeout cocktails and, in some places, deliver them to people's homes. They let doctors and other health professionals work in states even if they did not have a license there. Some areas extended that same ability to foreign doctors who were not licensed in the United States. There's no reason those restrictions should return.
One of my favorite professional antagonists, a liberal economist named Dean Baker who delights in criticizing the reporting of The Washington Post and The New York Times on his personal blog, is an often-lonely crusader for a similar change that would introduce elite white men to the same sort of labor competition that manufacturing workers face—by allowing doctors, lawyers, and other professionals who are trained abroad to more easily emigrate to the United States and ply their trades when they get here.
Late in 2019, I called Marianne Wanamaker, a University of Tennessee economic historian and former member of President Donald Trump's Council of Economic Advisers, whose research has found that young Black men have experienced no gains in relative economic mobility since 1870—a stunning lack of progress. I asked her what policy change she would make to finally unblock the upward path for Black men. She said that she, like some of her more liberal colleagues in the profession, supports universal prekindergarten. But, she said, "if you're going to address some of the core problems, you have to move back into people's lives"—on a personal level. She was thinking of the work toward racial reconciliation in a church congregation, or her own family's investment in their refugee neighbors in Knoxville. "The solution has to be us," she said, "and how we treat people and understand people and love people, and how we interact with them in society. That's a huge challenge. But it's not government's to solve."
Government has played an important role in breaking down barriers to advancement; it's hard to see how women and Black Americans would have made it even this far without federal civil-rights legislation, for example. But I also understand the hesitancy that many Americans feel about turning over more of the task of balancing the economy to politicians. American history is full of examples of political leaders using their power to block some groups from advancement. So I favor measures that meet a narrow test: Do they help people build their own human capital? Do they help them gain income and wealth and stability, so that they can survive the next recession, whether it is mild or severe? Do they defeat the power structures that hold people back? Do they help Americans do what they're best at?
Policies that pass this test would help disadvantaged Americans get through college, find jobs, and advance in the workforce. They would aggressively punish discrimination and tear down power structures that exist to protect white elites. They would foster and train a new generation of entrepreneurs, homegrown and imported, to disrupt the competition calcification of the American economy and create jobs that allow working-class whites—and everyone else—to return to the work that best utilizes their talents. They would diminish government interventions in parts of the economy that are ripe for favoritism, and diminish the ability of opportunistic companies and people to game the economy for their own limited benefit, to the detriment of everyone else. They would protect against job losses in bad economic times and promote fair pay in good ones, and they would strive to keep the economy humming with strong growth and low unemployment—the formula that has produced sustained income gains that raise pay for everyone and pull people into the middle class.
If you add up all those initiatives, you might make a real dent in the problem. But you still won't be going far enough. This is where I agree—naively, perhaps, but earnestly—with Wanamaker. The big change we need is attitudinal. We need a national commitment to helping one another succeed and get ahead.
We need to stop ourselves and others from discriminating by race and gender, stop vilifying the people who don't look like we do. Elite white Americans, in particular, need to work harder to help everyone else enjoy the same opportunities they do. They should acknowledge that they have benefited disproportionately from the technological and globalization trends of the past 40 years, which amplified the advantages that elite white men have enjoyed for centuries. If they are interested in helping lift others up, or even just in optimizing the performance of the economy so that it will keep delivering gains for people like them, they should be willing to pay higher taxes, to fund investments in human-capital accumulation for everyone else. I am convinced such investments would make America more productive and entrepreneurial again; they would empower the people who will create new, good jobs.
By helping one another reach our full potential, we'll help the whole country get its swagger back. This is a hopeful realization, but also a daunting one. We've never really achieved that goal as a nation, not even in the booming 1950s, nor in the civil-rights era. But I truly believe that we could achieve it now.
"It takes powerful social movements, I think, to move these things," the University of Chicago economist Chang-Tai Hsieh, the lead author of a breakthrough paper on how the upward mobility of women and Black Americans supercharged the American economy in the postwar era, told me in an interview. "The '60s and the '70s in the U.S., I think, were a very special time," he said. But, he added, "I do think there is the potential for a similar burst, if we had a similar type of revolution again, to tear down more barriers."
I cannot and will not offer you a simple solution for that task. I don't see a lot of complex problems around me being solved by simple plans. Like so much else in my career, I owe that perspective to Bill Woo.
Bill was my college mentor, my journalistic hero, and a great friend. He had been the editor of the St. Louis Post-Dispatch newspaper—the first Asian American to ascend to the highest job at one of the country's large daily papers—until he realized he wanted no more part in his bosses' slow march toward a smaller staff, shorter stories, and a diminished product that padded profits but shortchanged readers. He resigned, and later, he almost certainly saved me from a lifetime of studying and practicing law.
During my freshman year at Stanford, Bill taught a small seminar on writing and reporting the news. He ran us through mock news events and forced us to write on deadline. He unspooled hours-long, perfectly crafted tales of his news-gathering youth in tones that barked with joy and fell to a whisper so low you could barely hear it over the hum of the air-conditioning.
I think about the American progress that the arc of his life represented. More than a century after Chinese laborers laid the railroad tracks that built the fortune of a man named Leland Stanford, who used his money to create a university, that university hired William F. Woo, son of an American mother and a Chinese father, to teach journalism.
He died of cancer in 2006, months before my son was born. When I spoke at his funeral in St. Louis, I told the mourners that I had kept my assignments from his class, not because of the amateur writing I had produced, but for the notes he had left me in the margins in a sharp, red ink. I think of one of them in particular almost every day.
"I really hope you keep this message with you always," Bill wrote. "There are no quick fixes, ever, for the things we hold dearest."
But there is, I have learned, a good place to start.
This article is adapted from Tankersley's recent book, The Riches of This Land: The Untold, True Story of America's Middle Class. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 29 |
Q: Scale of knowledge and fact base for an expert system I am new to the development of expert systems. I have started programming with CLIPS a few simple programs, like animal identification, a mini version of MYCIN, etc. I want to increase the knowledge and fact base (to a million facts) for my programs, so I was wondering if there is any database management system to make this process easier. I would like to know how to implement this in general.

A: Out of the box, the only mechanism for editing and loading rules/facts is through text files. CLIPS can be integrated with other languages, so it's possible to add alternate means for storing and editing your rules/facts; you'd just have to write the code to integrate the pieces.
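Since the stock loading path is text files, one pragmatic pattern for scaling the fact base is to keep the facts in a conventional database and regenerate a CLIPS deffacts file from it before each run. A minimal sketch with SQLite — the table name, columns, and fact template here are illustrative, not part of CLIPS:

```python
import sqlite3

def export_deffacts(db_path, out_path):
    """Dump rows from a SQLite table into a CLIPS deffacts construct,
    which CLIPS can then read with (load "animal-kb.clp")."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT animal, trait FROM animal_facts ORDER BY animal"
    ).fetchall()
    con.close()
    lines = ["(deffacts animal-kb"]
    # one ordered fact per database row
    lines += ["  (has-trait %s %s)" % (animal, trait) for animal, trait in rows]
    lines.append(")")
    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

The same idea works for rules: store their text in the database and write them out as defrule constructs in the generated file.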
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,377 |
Cornelis William Hendrik Fuhler (3 July 1964 – 19 July 2020) was a Dutch/Australian improvisor, composer, and instrument builder associated with free jazz, experimental music and acoustic ecology. He played piano by manipulating sound with electromagnetic string stimulators like Ebows and motorized actuators. Fuhler also performed on guitar, turntables and synthesizer. He invented the keyolin, a combination of keyboard and violin.
Fuhler was a student of Misha Mengelberg of the Instant Composers Pool. He recorded the album Corkestra (Data, 2005) with Ab Baars, Tony Buck, Tobias Delius, Wilbert de Joode, Anne La Berge, Andy Moor, Nora Mulder, and Michael Vatcher. Fuhler played prepared piano, analog keyboards, clavinet, melodica, and electric lamellophone. Fuhler played solo prepared piano on his album Stengam (Potlatch, 2007).
In 2016 he attained a PhD in composition at the University of Sydney and in 2017 he published his book Disperse and Display covering modular composing strategies and extended piano techniques.
Fuhler "died unexpectedly" at his home in Australia.
Discography
1995 7 CC in 10 (Geestgronden)
1996 The Psychedelic Years Palinckx (Vonk)
1998 Bellagram (Geestgronden)
1999 DJ Cor Blimey and his Pigeon (ConundromCD)
2001 The Flirts (Erstwhile), duo with Gert-Jan Prins
2002 The Hands of Caravaggio, as part of M.I.M.E.O., featuring John Tilbury (Erstwhile)
2003 Tinderbox (Data)
2005 HHHH (Unsounds)
2005 ONJO (Doubtmusic)
2005 Corkestra (Data)
2007 Stengam (Potlatch)
2007 The Culprit, duo with Keith Rowe (7hings)
2011 Gas Station Sessions (Platenbakerij)
2014 Truancy (Splitrec)
2016 Mungo (Splitrec)
2017 FAN (SoundOut)
2019 Fietstour (WhirrbooM! Records)
References
External links
Official site
Whitehead, Kevin. New Dutch Swing (1998). New York: Billboard Books. .
Disperse and Display Sydney: Giwalo Press, 2017. .
1964 births
2020 deaths
People from Emmen, Netherlands
Free improvisation
Electroacoustic improvisation
Dutch experimental musicians
Dutch emigrants to Australia
University of Sydney alumni | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,818 |
BMW Bikes India Contact Number, Customer Care Number, Head Office Address, Regional Offices Address, Website Details, bmw.in.
BMW: just the name suggests a sense of class, eloquence and sophistication, a dream you can't forget even with waking eyes. The BMW bikes look different, very different, unmatched beasts in physical attributes, and nothing needs to be said about the internal specifications. Motorcycling is made priceless by the BMW engineers, who attempt to offer the best, safest, most comfortable and yet most luxurious feel. Each product is made with great finesse under the special concept of Safety 360. The main categories of the motorbikes include the Roadsters, Urban Mobility, Enduro and Tour. They also have a range of scooters like the BMW 650 G and BMW 600 Sport.
If you are spending money on this world-class automobile brand, then you must also be interested in getting their gear and accessories, as well as apparel, which are available at the dealer shops and agencies. Even if you are not purchasing an automobile, you can still find the apparel and accessories separately in India. The good part about the dealers in India is that even if an item is not available with them, they are still the one-stop shop to get it imported.
You can find showrooms at http://www.bmw.in/in/en/general/ecom_uic/dlo/dealer_locator.html. You can also try contacting the direct sales team at directsales.indina@bmw.in and gurpreet.singh@bmw.in. | {
"redpajama_set_name": "RedPajamaC4"
} | 850 |
Q: XAML Binding: binding to properties of a "global" object I would like to have one (global, singleton) object in my application that exposes a number of dependency properties. I would like to bind values in XAML to these dependency properties. How can I achieve this so that the syntax of my XAML binding is as simple as possible (in other words, without constantly worrying about RelativeSource, AncestorType, etc.)?
A: You can point a regular Binding at your singleton by supplying the instance through the x:Static markup extension as the binding Source. x:Static resolves the static Instance property (it cannot chain into instance members, and a bare x:Static value would not update), while the Binding itself tracks changes to the dependency property.

For example, if your singleton had a dependency property named "Foo":

<TextBox Text="{Binding Foo, Source={x:Static local:YourSingleton.Instance}}" />
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,128 |
\section*{Acknowledgement}
The authors would like to thank S. Dittmaier and H. Spiesberger for
useful discussions.
\clearpage
\begin{table}
\begin{center}
\vspace{-5mm}
\begin{tabular}{|r|r|r|c||r|r|c|}
\hline
\multicolumn{1}{|c|}{angle}&\multicolumn{3}{c||}{unpolarized}&
\multicolumn{3}{c|}{left-handed}\\
\hline
&\multicolumn{1}{c|}{$\Delta_{IBA}$}&$\delta\Delta_{IBA}$&
\multicolumn{1}{c|}{$\Delta_{
IBA}+\delta\Delta_{IBA}$}&
$\Delta_{IBA}$&$\delta\Delta_{IBA}$&
\multicolumn{1}{c|}{$\Delta_{IBA}+\delta\Delta_{IBA}$}\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=161$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&1.45&-0.72&0.73&1.45&-0.72&0.73\\
10&1.63&-0.73&0.90&1.63&-0.73&0.90\\
90&1.44&-0.72&0.72&1.44&-0.72&0.72\\
170&1.26&-0.70&0.56&1.26&-0.70&0.56\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=165$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&1.27&-0.71&0.56&1.28&-0.71&0.57\\
10&1.67&-0.74&0.93&1.67&-0.74&0.93\\
90&1.17&-0.71&0.46&1.18&-0.71&0.47\\
170&0.75&-0.67&0.08&0.77&-0.67&0.10\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=175$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&1.26&-0.71&0.55&1.28&-0.71&0.57\\
10&1.71&-0.75&0.96&1.71&-0.75&0.96\\
90&1.03&-0.69&0.34&1.06&-0.70&0.36\\
170&0.59&-0.62&-0.03&0.69&-0.63&0.06\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=184$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&1.02&-0.70&0.32&1.06&-0.71&0.35\\
10&1.57&-0.75&0.82&1.57&-0.75&0.82\\
90&0.67&-0.68&-0.01&0.72&-0.69&0.03\\
170&0.10&-0.58&-0.48&0.32&-0.64&-0.32\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=190$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&1.24&-0.70&0.54&1.28&-0.71&0.57\\
10&1.67&-0.74&0.93&1.67&-0.75&0.92\\
90&0.95&-0.68&0.27&1.01&-0.69&0.32\\
170&0.58&-0.57&0.01&0.83&-0.59&0.24\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=205$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&1.60&-0.70&0.90&1.65&-0.71&0.94\\
10&1.77&-0.74&1.03&1.77&-0.74&1.03\\
90&1.55&-0.66&0.89&1.64&-0.68&0.96\\
170&1.61&-0.53&1.08&1.94&-0.56&1.38\\
\hline
\end{tabular}
\end{center}
\caption{
The Table shows the quality of the improved Born approximation (IBA) for
the total (defined by integrating over $10^0\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\vartheta\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}} 170^0$)
and the differential cross section (for $W^-$-production
angles $\vartheta$ of $10^0,90^0$ and $170^0$) for $e^+e^-\to W^+W^-$
at various
energies for unpolarized and left-handed electrons. The quantity
$\Delta_{IBA}$ denotes the percentage deviation of the IBA from the full
one-loop result, the numerical results being taken from ref. \cite{been}. Our
correction, $\delta\Delta_{IBA}$, as well as the final accuracy,
$\Delta_{IBA}+\delta\Delta_{IBA}$, of our IBA
are given in the second and third column, respectively.
The photon splitting scale entering the cross section for initial state
bremsstrahlung is chosen as $Q^2=s$.}
\label{table1}
\end{table}
\clearpage
\begin{table}
\begin{center}
\begin{tabular}{|r|r|r|c||r|r|c|}
\hline
\multicolumn{1}{|c|}{angle}&\multicolumn{3}{c||}{unpolarized}&
\multicolumn{3}{c|}{left-handed}\\
\hline
&\multicolumn{1}{c|}{$\Delta_{IBA}$}&$\delta\Delta_{IBA}$&
\multicolumn{1}{c|}{$\Delta_{
IBA}+\delta\Delta_{IBA}$}&
$\Delta_{IBA}$&$\delta\Delta_{IBA}$&
\multicolumn{1}{c|}{$\Delta_{IBA}+\delta\Delta_{IBA}$}\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=161$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&0.97&-0.72&0.25&0.97&-0.72&0.25\\
10&1.14&-0.73&0.41&1.14&-0.73&0.41\\
90&0.95&-0.72&0.23&0.96&-0.72&0.24\\
170&0.78&-0.70&0.08&0.78&-0.70&0.08\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=165$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&0.77&-0.71&0.06&0.78&-0.71&0.07\\
10&1.17&-0.74&0.43&1.17&-0.74&0.44\\
90&0.67&-0.71&-0.04&0.68&-0.71&-0.03\\
170&0.25&-0.67&-0.42&0.27&-0.67&-0.40\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=175$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&0.70&-0.71&-0.01&0.73&-0.71&0.02\\
10&1.17&-0.75&0.42&1.17&-0.75&0.42\\
90&0.48&-0.69&-0.21&0.51&-0.70&-0.19\\
170&0.05&-0.62&-0.57&0.15&-0.63&-0.48\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=184$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&0.43&-0.70&-0.27&0.47&-0.71&-0.24\\
10&0.99&-0.75&0.24&0.99&-0.75&0.24\\
90&0.09&-0.68&-0.59&0.14&-0.69&-0.55\\
170&-0.48&-0.58&-1.06&-0.26&-0.64&-0.90\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=190$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&0.63&-0.70&-0.07&0.67&-0.71&-0.04\\
10&1.07&-0.74&0.33&1.07&-0.75&0.32\\
90&0.35&-0.68&-0.33&0.41&-0.69&-0.28\\
170&-0.02&-0.57&-0.59&0.23&-0.59&-0.36\\
\hline\hline
\multicolumn{7}{|c|}{$\sqrt{s}=205$ GeV}\\
\hline
\multicolumn{1}{|c|}{total}&0.94&-0.70&0.24&0.99&-0.71&0.28\\
10&1.11&-0.74&0.37&1.12&-0.74&0.38\\
90&0.90&-0.66&0.24&0.99&-0.68&0.31\\
170&0.96&-0.53&0.43&1.28&-0.56&0.72\\
\hline
\end{tabular}
\end{center}
\caption{
As Table \ref{table1}, but for the photon splitting scale
$Q^2=M_W^2$.}
\label{table2}
\end{table}
\clearpage
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,115 |
{"url":"https:\/\/www.projecteuclid.org\/euclid.im\/1259158596","text":"## Internet Mathematics\n\n### Local Computation of PageRank Contributions\n\n#### Abstract\n\nMotivated by the problem of detecting link-spam, we consider the following graph-theoretic primitive: Given a webgraph $G$, a vertex $v$ in $G$, and a parameter $\\delta \\in (0,1)$, compute the set of all vertices that contribute to $v$ at least a $\\delta$ fraction of $v$'s PageRank. We call this set the $\\delta$-contributing set of $v$. To this end, we define the contribution vector of $v$ to be the vector whose entries measure the contributions of every vertex to the PageRank of $v$. A local algorithm is one that produces a solution by adaptively examining only a small portion of the input graph near a specified vertex. We give an efficient local algorithm that computes an $\\epsilon$-approximation of the contribution vector for a given vertex by adaptively examining $O(1\/\\epsilon)$ vertices. Using this algorithm, we give a local approximation algorithm for the primitive defined above. Specifically, we give an algorithm that returns a set containing the $\\delta$-contributing set of $v$ and at most $O(1\/\\delta)$ vertices from the $\\delta\/2$-contributing set of $v$, and which does so by examining at most $O(1\/\\delta)$ vertices. We also give a local algorithm for solving the following problem: If there exist $k$ vertices that contribute a $\\rho$-fraction to the PageRank of $v$, find a set of $k$ vertices that contribute at least a $(\\rho-\\epsilon)$-fraction to the PageRank of $v$. 
In this case, we prove that our algorithm examines at most $O(k\/\\epsilon)$ vertices.\n\n#### Article information\n\nSource\nInternet Math., Volume 5, Number 1-2 (2008), 23-44.\n\nDates\nFirst available in Project Euclid: 25 November 2009","date":"2019-11-13 21:15:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8917677402496338, \"perplexity\": 299.32361106824476}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-47\/segments\/1573496667333.2\/warc\/CC-MAIN-20191113191653-20191113215653-00295.warc.gz\"}"} | null | null |
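The crawl record above is a Project Euclid abstract about locally computing PageRank contributions. For orientation only, here is how global PageRank itself is classically computed by power iteration — a self-contained sketch of the standard algorithm, not the local contribution algorithm the abstract describes:

```python
def pagerank(graph, alpha=0.85, iters=100):
    """graph: dict mapping each node to a list of out-neighbours.
    Returns a dict of PageRank scores summing to 1."""
    nodes = list(graph)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # teleport mass shared uniformly
        nxt = {v: (1.0 - alpha) / n for v in nodes}
        for v in nodes:
            out = graph[v]
            if out:
                share = alpha * pr[v] / len(out)
                for w in out:
                    nxt[w] += share
            else:
                # dangling node: spread its mass uniformly
                for w in nodes:
                    nxt[w] += alpha * pr[v] / n
        pr = nxt
    return pr
```

A contribution vector, by contrast, fixes a target vertex v and asks how much of pr[v] each other vertex is responsible for — which is what the paper's local algorithm approximates without touching the whole graph.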
Yeah, yeah - Louisville is cool. We're, like, the next Portland! Yuck. There's a lot to hate in Louisville and I've whittled my list, with difficulty, down to ten things. Don't hate me, Louisville - hate these lousy things about our town.
Every time I want to go out and get a meal I struggle to find a place to eat - there's so many places I haven't visited yet! If we could slim down the amount of unique eateries to one or two per neighborhood, I'd be happy. I would also like to see a few more Chili's or Red Robin's around town since we don't have enough of those. I don't know about you, but there's only so much unique and interesting food I can stand.
And like the food, it's nearly impossible to hit all of them - lots happen on the same weekends and I'm forced to split my time trying to get to all the stuff I want to see. I don't like this, so I propose eliminating all events that don't include elephant ears.
This would be fine, but what if I want to get my groceries, ammunition and top 40 CDs all in one place? I have to drive clear across town! We really need to put a WalMart or Meijer on Bardstown road, preferably with a massive parking lot. Maybe we could take out that block with ValuMarket on it - no one shops there, right?
There are tons of talented artists featured in some great galleries, and a music scene that puts anyone else to shame. All of the boundless talent in town makes me feel so incompetent that I want to curl up in the fetal position and never leave my house. Everywhere I go I'm reminded of the intense concentration of talent that Louisville boasts, and with every new gallery visit or show I go to I just grow more and more bitter. Can't we get a few third-rate bit players around town to water the pool down?
5 - What's With The Fantastic Parks System?
We have the Jefferson Memorial Forest all the way down to tiny little Gnadinger. We have Cherokee Park, which is so big I got lost in it and spent twice as long on its cool trails and open spaces than I planned on. I was tired and warm and I had a good time, but I really wanted some food. Once I finally found my car I had to spend another hour deciding what to eat! Let's turn some of these unused spaces into another big box store, or just add a Bennigan's to one or two of them.
The historic homes around town just make me bitter about my shotgun house. Just going downtown is enough to make me have a history-geek-induced panic attack. This is no way to live.
Louisville folks are really friendly, which is all well and good, except that it isn't. When I first moved here and told people I was new to the area, everyone was all "Oh, welcome to Louisville! Here's this thing for free," or "Oh, do you need to know anything about the area?" Where I come from we don't do that unless we're the weird old lady at the Salvation Army. Even then she'd probably just scowl. Can't we be more like Minnesota and just be passive aggressively nice and secretly hate each other behind our backs? It's much easier and requires less effort.
Last winter was "harsh," according to my sources - if you think that was a bad winter, you haven't seen anything. It's pretty obvious that the mild, friendly climate we have here in town has made us all weak and unable to handle extremes (aside from intense humidity and an overabundance of sunshine), so thankfully we have that polar vortex thing to worry about next winter. With any hope we'll be battered by ice and snow to balance out our warm and sunny summer, forced to drip our faucets and cover our windows with Toy Story blankets and saran wrap. You'll thank me when you can walk outside in negative ten degree weather in just a sweatshirt, I swear.
Maybe it's all of these horrible things I mentioned, but Louisville is absolutely obsessed with itself. The people here constantly try to improve every bit of every neighborhood, be it with unique shops or locally-owned and interesting restaurants. There's preservation efforts everywhere, and people going out of their ways to do something nice for their neighbors. Everyone I talk to seems to love living here, and doesn't hesitate to tell you. Louisville worship is pretty strong in every corner I find myself lost in, and it's a unique phenomenon in places I've visited. Yeah, people in Michigan like Michigan, but the praise this town receives from residents is over the top. | {
"redpajama_set_name": "RedPajamaC4"
} | 518 |
<?php

namespace app\modules\payments\controllers;

use Yii;
use yii\filters\AccessControl;
use yii\helpers\Html;
use yii\web\Controller;
// NOTE: assumed locations of this module's models - adjust to the real namespaces.
use app\modules\payments\models\Currency;
use app\modules\payments\models\PayForm;
use app\modules\payments\models\PaymentHistory;

class InfoController extends Controller {

    public function behaviors() {
        return [
            'access' => [
                'class' => AccessControl::class,
                'rules' => [
                    // only authenticated users may access this controller
                    ['allow' => true, 'roles' => ['@']],
                ],
            ],
        ];
    }

    public function actionIndex() {
        // Search model for the payment history grid.
        $model = new PaymentHistory(['scenario' => 'searchLast']);
        if (isset($_GET['PaymentHistory'])) {
            $model->attributes = $_GET['PaymentHistory'];
        }
        $model->setAttribute('user_id', Yii::$app->user->getId());

        $user = Yii::$app->user->identity;

        $payForm = new PayForm();
        if ($payForm->load(Yii::$app->request->post()) && $payForm->validate()) {
            $history = new PaymentHistory();
            $history->user_id = Yii::$app->user->getId();
            $history->amount = $payForm->amount;
            $history->payment_system_id = $payForm->payment_system_id;
            $history->currency = Currency::DEFAULT_CURRENCY;
            $history->curs = Currency::getUSDCrossCurs($history->currency);
            $history->equivalent = round($history->amount * $history->curs, 2);
            $history->save();

            $config = $history->paymentSystem->getConfig();
            $params = $config->getParams($history);

            return $this->render('processing_form', [
                'config' => $config,
                'params' => $params,
            ]);
        }

        if (!$user->phone_confirm) {
            $accountLink = Html::a(Yii::t('app', 'account'), '/user/profile');
            $text = Yii::t('app', 'After confirming your phone number - all financial transactions will occur with his participation. That will greatly enhance the safety of your funds.');
            $text .= ' ' . Yii::t('app', 'Add your phone number in the settings of your {account}.', ['account' => $accountLink]);
            Yii::$app->session->setFlash('notice', $text);
        }

        return $this->render('index', [
            'user' => $user,
            'payForm' => $payForm,
            'model' => $model,
        ]);
    }
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,115 |
Wilhelm Baumann (born 15 September 1895 in Munich; died 3 October 1982 in Starnberg) was a German teacher.

Life

Baumann attended elementary school and the teacher training college. In 1913 he began as a trainee teacher at the elementary school in Raitenhaslach. In 1916 he started working as a substitute teacher at several elementary schools. A year later he passed the examination for the elementary school service, and in 1920 he was finally appointed a teacher. He worked in this profession until 1930, alongside positions at several trade and agricultural vocational schools. From the 1920s until 1933 he was the first chairman of the Arbeitsgemeinschaft der Junglehrerschaft (young teachers' working group) in Upper Bavaria and in Bavaria; he also belonged to the main committee of the Bavarian Teachers' Association and to the school-policy head office of the German Teachers' Federation. During this time he studied pedagogy at the University of Munich. In 1930 he became a district senior teacher and head of the further-education district of Berchtesgaden, and a year later a head teacher. In 1933 he was taken into protective custody and removed from his post as head of further education. After serving as a soldier in the Second World War, in May 1945 he became school superintendent of the districts of Starnberg and Fürstenfeldbruck. In the same year he was appointed Regierungsschulrat with the government of Upper Bavaria and head of its school department, and a year later Regierungsdirektor in the Bavarian State Ministry of Education and Cultural Affairs. From 1947 he worked at the school office in Starnberg, and from a year later in the school supervisory service, until he retired in 1960.

Baumann founded the state association of Bavarian school superintendents, of which he was chairman until 1955. In 1951 he was a founding member of the trade union Gewerkschaft Erziehung und Wissenschaft, and in 1965 he became its honorary chairman. He also belonged to the school-policy head office of the Arbeitsgemeinschaft Deutscher Lehrerverbände, to the school-policy committee and the state school advisory board of the DGB, and to the advisory board of the Akademie für Politische Bildung. From 1956 to 1967 he was a member of the Bavarian Senate.

On 13 January 1964 he was awarded the Bavarian Order of Merit.

External links

Member of the Bavarian Senate

Schoolteacher

Person (Munich)

Recipient of the Bavarian Order of Merit

German

Born 1895

Died 1982

Man | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,578 |
module Ruboty
module Handlers
class Ss < Base
# FIXME: move envs to ruboty/ss/storage/*.rb
env :RUBOTY_SS_STORAGE, 'storage provider for ruboty-ss. (default: gyazo)', optional: true
env :RUBOTY_SS_DROPBOX_ACCESS_TOKEN, 'dropbox access token.', optional: true
on %r{ss (?<url>http(?:s?)://.*)}, name: 'screenshot', description: 'take a screenshot'
def screenshot(message)
Ruboty::Ss::Actions::TakeScreenshot.new(message).call
end
end
end
end
| {
"redpajama_set_name": "RedPajamaGithub"
} | 169 |
This entry contains information about North Charleston, South Carolina. For links to small business resources found in and around North Charleston, visit the North Charleston, South Carolina Hub.
North Charleston is a city in Charleston County in South Carolina.
Link to this page: "https://wiki.smallbusiness.com/w/index.php?title=North_Charleston,_South_Carolina&oldid=28708" | {
"redpajama_set_name": "RedPajamaC4"
} | 7,957 |
\section{Introduction}
\label{sec:intro}
Conversational interfaces, e.g., Google's Home or Amazon's Alexa, are becoming pervasive in daily life. As an important part of any conversation, language understanding aims at extracting the meaning a partner is trying to convey. Spoken Language Understanding (SLU) plays a critical role in such a scenario. Generally speaking, in SLU a spoken utterance is first transcribed, then semantic information is extracted. Language understanding, i.e., extracting a semantic ``frame'' from a transcribed user utterance, typically involves: i) Intent Detection (ID) and ii) Slot Filling (SF) \cite{tur2011}. The former classifies a user utterance into an intent, i.e., the purpose of the user. The latter finds the ``arguments'' of such an intent. As an example, let us consider Figure \ref{fig:slu-example}, where the user asks to play a song (\texttt{Intent=PlayMusic}): \textit{with or without you} (\texttt{Slot=song}) by the artist \textit{U2} (\texttt{Slot=artist}).
Usually, supervised learning methods are adopted for SLU. Their efficacy strongly depends on the availability of labeled data.
There are various approaches to the production of labeled data, depending on the complexity of the problem, on the characteristics of the data, and on the available resources (e.g., annotators, time and budget). When reusing existing public data is not feasible, manual labeling must be carried out, possibly automating part of the labeling process.
In this work, we present the first public dataset for the Italian language for SLU. It is generated through a semi-automatic procedure from an existing English dataset annotated with intents and slots. We translated the sentences into Italian and transferred the annotations based on a token-span alignment algorithm.
Then, the translation, spans and consistency of the entities in Italian have been manually validated. Finally, the dataset is used as benchmark for NLU systems. In particular, we will compare a recent state-of-the-art (SOTA) approach \cite{ernie} with Rasa \cite{rasa} taken from the open source world, IBM Watson Assistant \cite{watson}, Google DialogFlow \cite{dialogflow} and, finally, Microsoft LUIS \cite{msluis}, some commercial solutions in use.
In the following, Section \ref{sec:related} discusses related work; Section \ref{sec:production} describes the dataset generation; Section \ref{sec:experiments} presents the experiments. Finally, Section \ref{sec:conclusion} draws the conclusions.
\begin{figure}[t]
\centering
\includegraphics[width=0.46\textwidth]{slotsexample_bn}
\caption{An example of Slot Filling in IOB format for a sentence with intent \textit{PlayMusic}.}
\label{fig:slu-example}
\end{figure}
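To make the annotation scheme concrete, the following is a small illustrative Python sketch (not part of the paper's resources; token and tag names are assumptions) showing the sentence of Figure \ref{fig:slu-example} in IOB format and how IOB tags are grouped back into slot spans.

```python
# Illustrative sketch (assumed, not from the dataset): the sentence of
# Figure 1 in IOB format, and a helper that groups IOB tags into slot spans.
tokens = ["Play", "with", "or", "without", "you", "by", "U2"]
tags = ["O", "B-song", "I-song", "I-song", "I-song", "O", "B-artist"]

def iob_to_spans(tokens, tags):
    """Group IOB tags back into (slot, text) spans."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new span starts here
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)        # continuation of the open span
        else:                             # an "O" tag closes any open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(slot, " ".join(words)) for slot, words in spans]

print(iob_to_spans(tokens, tags))
# [('song', 'with or without you'), ('artist', 'U2')]
```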
\section{Related Work}
\label{sec:related}
SLU has been addressed in the Natural Language Processing community mainly in the English language.
A well-known dataset used to demonstrate and benchmark various NLU algorithms is the Airline Travel Information System (ATIS) \cite{atis1990} dataset, which consists of spoken queries on flight-related information. In \cite{braun-EtAl:2017:SIGDIAL}, three datasets for the intent classification task were presented. \textit{AskUbuntu Corpus} and \textit{Web Application Corpus} were extracted from StackExchange, and the third one, i.e., \textit{Chatbot Corpus}, was derived from a Telegram chatbot. The newer multi-intent dataset SNIPS \cite{snips2018} is the starting point for the work presented in this paper.
An alternative approach to manual or semi-automatic labeling is the one proposed by the data scientists of the Snorkel project with Snorkel Drybell \cite{2018arXiv181200417B}, which aims to automate the labeling through the use of data programming. Other works have explored the possibility of creating datasets in a language starting from datasets of other languages, such as \cite{Jabaian2010InvestigatingMA} and \cite{inproceedings}.
\section{Almawave-SLU: A new dataset for Italian SLU}
\label{sec:production}
We derived the new dataset\footnote{The dataset will be available for download.} starting from the SNIPS dataset \cite{snips2018}, which is in English. It contains $14,484$ annotated examples\footnote{There are $13,084$, $700$ and $700$ for training, validation and test, respectively.} with respect to $7$ intents and $39$ slots. Table \ref{tab:snipsexamples} shows an excerpt of the dataset. We started from this dataset because: i) it contains a reasonable amount of examples; ii) it is multi-domain; iii) we believe it could represent a more realistic setting in today's voice assistants scenario.
\begin{table*}[!t]
\footnotesize
\centering
\begin{tabular}{|l|l|}
\hline
\texttt{AddToPlaylist} & Add the song virales de siempre by the cary brothers to my gym playlist. \\ \hline
\texttt{BookRestaurant} & I want to book a top-rated brasserie for 7 people. \\ \hline
\texttt{GetWeather} & What kind of weather will be in Ukraine one minute from now? \\ \hline
\texttt{PlayMusic} & Play Subconscious Lobotomy from Jennifer Paull. \\ \hline
\texttt{RateBook} & Rate The children of Niobe 1 out of 6 points. \\ \hline
\texttt{SearchCreativeWork} & Looking for a creative work called Plant Ecology \\ \hline
\texttt{SearchScreeningEvent} & Is Bartok the Magnificent playing at seven AM? \\ \hline
\end{tabular}
\caption{Examples from the SNIPS dataset. The first column indicates the intent, the second columns contains an example.}
\label{tab:snipsexamples}
\end{table*}
We performed a semi-automatic procedure consisting of two phases: an automatic translation with contextual alignment of intents and slots; a manual validation of the translations and annotations.
The resulting dataset, i.e., \texttt{Almawave-SLU}, has fewer training examples, a total of $7,142$ and the same number of validation and test examples of the original dataset. Again, $7$ intents and $39$ slots have been annotated.
Table \ref{tab:datastats} shows the distribution of examples for each intent.
\begin{table}[b]
\footnotesize
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
& \textbf{\scriptsize{Train}} & \textbf{ \scriptsize{Train-R}} & \textbf{\scriptsize{Valid}} & \textbf{\scriptsize{Test}} \\ \hline
\texttt{\scriptsize{AddToPlayList}}& $744$ & $185$ & $100$ & $124$ \\ \hline
\texttt{\scriptsize{BookRestaurant}} & $967$ & $250$ & $100$ & $92$ \\ \hline
\texttt{\scriptsize{GetWeather}} & $791$ & $195$& $100$ & $104$ \\ \hline
\texttt{\scriptsize{PlayMusic}} & $972$ & $240$ & $100$ & $86$ \\ \hline
\texttt{\scriptsize{RateBook}} & $765$ & $181$ & $100$ & $80$ \\ \hline
\texttt{\scriptsize{SearchCreativeWork}} & $752$ & $172$ & $100$ & $107$ \\ \hline
\texttt{\scriptsize{SearchScreeningEvent}} & $751$ & $202$ & $100$ & $107$ \\ \hline
\end{tabular}
\caption{\texttt{Almawave-SLU} dataset statistics. Train-R is the reduced training set.}
\label{tab:datastats}
\end{table}
\subsection{Translation and Annotation}
\label{sec:translation}
In a first phase, we translated each English example into Italian by using the Translator Text API, part of the Microsoft Azure Cognitive Services.
In order to create a more valuable resource in Italian, we also performed an automatic substitution of the names of movies, movie theatres, books, restaurants and of the locations with some Italian counterpart. First, we collected from the Web a set $E$ of about $20,000$ Italian versions of such entities; then, we substituted each entity in the sentences of the dataset with one randomly chosen from $E$.
After the translation, an automatic annotation was performed. The intent associated with the English sentence has been copied to its Italian counterpart. Slots have been transferred by aligning the source and target tokens\footnote{The alignment was provided by the Translator API.} and by copying the corresponding slot annotation.
In case of exceptions, e.g., multiple alignments on the same token or missing alignment, we left the token without annotation.
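As an illustration of the transfer step described above (a hedged sketch, not the authors' actual code), slot tags can be projected through a token alignment, leaving tokens with missing or multiple alignments unannotated:

```python
# Illustrative sketch of transferring slot annotations through a token
# alignment. `alignment` maps each target-token index to the list of
# source-token indices it aligns to; tokens with zero or multiple
# alignments are left unannotated ("O"), as described in the text.

def project_slots(src_tags, alignment, n_tgt):
    tgt_tags = []
    for j in range(n_tgt):
        src_idxs = alignment.get(j, [])
        if len(src_idxs) == 1:      # unambiguous alignment: copy the tag
            tgt_tags.append(src_tags[src_idxs[0]])
        else:                       # missing or multiple: leave blank
            tgt_tags.append("O")
    return tgt_tags

# "Play Subconscious Lobotomy" -> "Riproduci Subconscious Lobotomy"
src_tags = ["O", "B-track", "I-track"]
alignment = {0: [0], 1: [1], 2: [2]}
print(project_slots(src_tags, alignment, 3))
# ['O', 'B-track', 'I-track']
```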
\subsection{Human Revision}
\label{sec:human}
In a second phase, the dataset was split into $6$ different sets, each containing about $1,190$ sentences. Each set was assigned to $2$ annotators\footnote{A total of $6$ annotators were available.}, and each annotator was asked to review the translation from English to Italian and the reliability of the automatic annotation. The guideline was to consider an annotation valid when both the alignment and the semantic slots were correct. Moreover, a semantic consistency check was performed: e.g., served dish and restaurant type, or city and region, or song and singer.
The $2$ annotators were used to cross-check the annotations, in order to provide more reliable revisions. When the $2$ annotators disagreed, the annotations were validated by a third, different annotator.
During the validation phase, some interesting phenomena emerged\footnote{Some inconsistencies were already present in the original dataset.}. For example, there were cases of inconsistency between the restaurant name and the type of served dish when the name of the restaurant mentioned the kind of food served, e.g.,
\textit{"Prenota un tavolo da Pizza Party per mangiare noodles"}.
There were also wrong associations between the type of restaurant and the service requested, e.g.,
\textit{"Prenota nell'area piscina per 4 persone in un camion-ristorante"}. A camion-ristorante (food truck) is actually a van equipped to serve fast food in the street.
Again, among the cases of unlikely associations resulting from automatic replacement, there was inconsistency between temperatures and cities, in cases like ``snow in the Sahara''. Another type of problem occurred when the same slot was used to identify very different objects. For example, for the intent \textit{SearchCreativeWork}, the slot \textit{object\_name} was used for paintings, games, movies, etc.
In all these cases, the annotators were asked to correct the sentences and the annotations accordingly. Again, in the case of the \textit{BookRestaurant} intent, a manual revision was made when city and state coexist in the same sentence: to make the data more relevant to the Italian language, the city and its region are adapted, e.g., \textit{"I need a table for 5 at a highly rated gastropub in Saint Paul, MN"} is translated and adapted for Italian as \textit{"Vorrei prenotare un tavolo per 5 in un gastropub molto apprezzato a Biella, Piemonte"}.
\subsection{Automatic Translation Analysis}
\label{sec:analysis}
In many cases, machine translation lacked context awareness: this is not an easy task due to phenomena such as polysemy, homonymy, metaphors and idioms. There can be problems of lexical ambiguity when a word has more than one meaning, which can produce wrong interpretations. For example, the verb ``to play'' can mean ``spend time doing enjoyable things'', such as ``using toys and taking part in games'', ``perform music'' or ``perform the part of a character''.
Human intervention was needed to preserve the meaning of the text, which depends on cultural and situational contexts. Several translation errors were corrected by the annotators. For example, the automatic translation of the sentence \textit{Play Have You Met Miss Jones by Nicole from Google Music.} was \textit{Gioca hai incontrato Miss Jones di Nicole da Google Music.}, but the correct Italian version is \textit{Riproduci Have You Met Miss Jones di Nicole da Google Music.}. In this case, the wrong translation of the verb \textit{play} results in a meaningless sentence.
Often, translation errors are due to the presence of prepositions, which have the same function in Italian as they do in English but cannot be directly translated. Each preposition is represented by a group of related senses, some of which are very close and similar while others are rather weak and distant. For example, the Italian preposition ``di'' can have six different English counterparts: of, by, about, from, at, and than.
For example, in the SNIPS dataset the sentence \textit{I need a table for 2 on feb. 18 at Main Deli Steak House} was translated as \textit{Ho bisogno di un tavolo per 2 su Feb. 18 presso Main Deli Steak House}. Here, the translation of ''on'' is wrong: the correct Italian version should translate it as ''il''. Another example with wrong preposition translation is the sentence \textit{''What will the weather be one month from now in Chad ?'}, the automatic translation of ''one month from now'' is ''un mese da ora'' but the correct translation is ''tra un mese''.
Common errors were in the translation of temporal expressions, which differ between Italian and English. For example, the translation of the sentence \textit{``Book a table in Fiji for zero a.m.''} was \textit{``Prenotare un tavolo in Fiji per zero a.m.''}, but in Italian ``zero a.m.'' is ``mezzanotte''.
Other errors were specific to some intents, as they tend to contain more slang.
For example, the translation of \textit{GetWeather} sentences was problematic because the main verb is often misinterpreted, while in the sentences related to the intent \textit{BookRestaurant} a frequent failure occurred in the interpretation of prepositions. For example, the sentence \textit{``Will it get chilly in North Creek Forest?''} was translated as ``Otterrà freddo in North Creek Forest?'', while the correct translation is ``Farà freddo a North Creek Forest?''. In this case, the system misinterpreted the context, assigning the wrong meaning to ``get''.
\section{Benchmarking SLU Systems}
\label{sec:experiments}
Nowadays, there are many human-machine interaction platforms, commercial and open source.
Machine learning algorithms enable these systems to understand natural language utterances, match them to intents, and extract structured data.
We decided to use the Almawave-SLU dataset with the following SLU systems.
\subsection{SLU Systems}
\label{sec:slu-systems}
\paragraph{RASA.} RASA \cite{rasa} is an open source alternative to popular NLP tools for the classification of intents and the extraction of entities. Rasa contains a set of high-level APIs to produce a language parser through the use of NLP and ML libraries, via the configuration of the pipeline and embeddings. It is very fast to train and does not require great computing power; despite this, it seems to obtain excellent results.
\paragraph{LUIS.} Language Understanding service \cite{msluis} allows the construction of applications that can receive input in natural language and extract the meaning from it through the use of Machine Learning algorithms.
LUIS was chosen as it provides also an easy-to-use graphical interface dedicated to less experienced users. For this system the computation is done completely remotely and no configurations are necessary.
\paragraph{Watson Assistant.}
IBM's Watson Assistant \cite{watson} is a white-label cloud service that allows software developers to embed a virtual assistant, which uses Watson AI machine learning and NLU, in their software. Watson Assistant allows customers to protect information gathered through user interaction in a private cloud. It was chosen because it was conceived for an industrial market and for its long tradition in this task.
\paragraph{DialogFlow.} Dialogflow \cite{dialogflow} is a Google service to build engaging voice and text-based conversational interfaces,
powered by a natural language understanding (NLU) engine.
Dialogflow makes it easy to connect the bot service to a number of channels and runs on Google Cloud Platform, so it can scale to hundreds of millions of users. DialogFlow was chosen due to its wide distribution and ease of use of the interface.
\paragraph{Bert-Joint.} It is a SOTA approach to SLU adopting a joint Deep Learning architecture in an attention-based recurrent frameworks \cite{ernie}. It exploits the successful Bidirectional Encoder Representations from Transformers (BERT) model to pre-train language representations. In \cite{ernie}, the authors extend the BERT model in order to perform the two tasks of ID and SF jointly. In particular, two classifiers are trained jointly on top of the BERT representations by means of a specific loss function.
\subsection{Experimental Setup}
\label{sect:expsetup}
\texttt{Almawave-SLU} has been used for training and evaluation of Rasa, Luis, Watson Assistant, DialogFlow and Bert-Joint.
Another evaluation is made on $3$ different training sets, i.e., Train-R, of reduced size with respect to Almawave-SLU, each with about $1,400$ sentences equally distributed over the intents.
\begin{table*}[!ht]
\footnotesize
\centering
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\hline
& \multicolumn{3}{|c|}{\textbf{Eval-1 with Train set}} & \multicolumn{3}{|c|}{\textbf{ Eval-2 with Train-R set}} \\
System & Intent & Slot & Sentence & Intent & Slot & Sentence \\ \hline
\textbf{Rasa} & $96.42$ & $85.40$ & $65.76$ & $93.84$ & $78.58$ & $52.25$\\ \hline
\textbf{LUIS} & $95.99$ & $79.47$ & $50.57$& $94.46$ & $72.51$ & $35.53$ \\ \hline
\textbf{Watson Assistant} & $96.56$ & - & - & $95.03$ & - & - \\ \hline
\textbf{Dialogflow} & $95.56$ & $74.62$ & $46.16$ & $93.60$ & $65.23$ & $36.68$ \\ \hline
\textbf{Bert-Joint} & $\textbf{97.6}$ & $\textbf{90.0}$ & $\textbf{77.1}$ & $\textbf{96.13}$ & $\textbf{83.04}$ & $\textbf{65.23}$ \\ \hline
\end{tabular}
\caption{Overall scores for Intent and Slot}
\label{tab:overallScores}
\end{table*}
The train/validation/test split used for the evaluations is $5,742$ ($1,400$ for Train-R), $700$ and $700$, respectively.
Regarding Rasa, we used version $1.0.7$, and we adopted the standard ''supervised embeddings'' pipeline, since it is recommended in the official documentation. This pipeline consists of a
\textit{WhiteSpaceTokenizer}, that was modified to avoid the filter of punctuation tokens, a \textit{Regex Featurizer}, a \textit{Conditional Random Field} to extract entities, a \textit{Bag-of-words Featurizer} and an \textit{Intent Classifier}.
LUIS was tested against the api v$2.0$, and the loading of data to train the system with LUIS APP VERSION $0.1$.
Unfortunately, Watson Assistant supports only English models for the annotation of contextual entities, i.e., slots; therefore, we have only measured the intents\footnote{Refer to \textit{Table 3. Entity feature support details} at \url{https://cloud.ibm.com/docs/services/assistant?topic=assistant-language-support}}.
Regarding DialogFlow, a ``Standard'' (free) utility has been created with API version 2; the Python library ``dialogflow'' has been used for the predictions\footnote{\url{https://cloud.google.com/dialogflow/docs/reference/rest/v2/projects.agent.intents#Part}}.
We changed only the setting ''match mode'' to ''ML only''.
Regarding the BERT-Joint system, a pre-trained BERT model is adopted, which is available on the BERT authors website\footnote{\url{https://storage.googleapis.com/bert\_models/2018\_11\_23/multi\_cased\_L-12\_H-768\_A-12.zip}}. This model is composed of $12$-layer and the size of the hidden state is $768$. The multi-head self-attention is composed of $12$ heads for a total of $110$M parameters.
As suggested in \cite{ernie}, we adopted a dropout strategy applied to the final hidden states before the intent/slot classifiers.
We tuned the following hyper-parameters over the validation set: (i) number of epochs among ($5$, $10$, $20$, $50$); (ii) Dropout keep probability among ($0.5$, $0.7$ and $0.9$). We adopted the Adam optimizer \cite{kingma2014} with parameters $\beta_1=0.9$, $\beta_2=0.999$, L$2$ weight decay $0.01$ and learning rate $2\text{e-}5$ over batches of size $64$.
\subsection{Experimental Results}
\label{sec:results}
In Table \ref{tab:overallScores} the performances of the systems are shown. The SF performance is measured with F1, while the ID and Sentence performances are measured with accuracy.
We also show an evaluation carried out with models trained on three different splits of reduced size derived from the whole dataset. The reported value is the average of the measurements obtained separately on the entire test dataset.
Regarding the ID task, all models perform similarly, but Bert-Joint's score is slightly higher than the others'. For the SF task, notice that there are significant differences between the LUIS, DialogFlow and Rasa performances.
Finally, Bert-Joint achieved the top score on joint classification, in the assessments with the two different sizes of the dataset. The adaptation of nominal entities in Italian may have amplified the problem for the other models.
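As a side note, the intent and sentence-level measures used in Table \ref{tab:overallScores} can be sketched in a few lines of Python (a minimal illustration; slot F1 is typically computed at the span level with an external package, which is an assumption not made explicit here):

```python
# Minimal sketch of two of the evaluation measures described above: intent
# accuracy and sentence-level accuracy (a sentence counts as correct only if
# both the intent and the full slot sequence match the gold annotation).

def intent_accuracy(true_intents, pred_intents):
    correct = sum(t == p for t, p in zip(true_intents, pred_intents))
    return correct / len(true_intents)

def sentence_accuracy(true_intents, pred_intents, true_slots, pred_slots):
    correct = sum(
        ti == pi and ts == ps
        for ti, pi, ts, ps in zip(true_intents, pred_intents, true_slots, pred_slots)
    )
    return correct / len(true_intents)

true_i = ["PlayMusic", "GetWeather"]
pred_i = ["PlayMusic", "GetWeather"]
true_s = [["O", "B-artist"], ["O", "B-city"]]
pred_s = [["O", "B-artist"], ["O", "O"]]

print(intent_accuracy(true_i, pred_i))                    # 1.0
print(sentence_accuracy(true_i, pred_i, true_s, pred_s))  # 0.5
```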
\section{Conclusion}
\label{sec:conclusion}
The contributions of this work are two-fold: first, we presented and released the first SLU dataset in Italian, \texttt{Almawave-SLU}, composed of $7,142$ sentences annotated with respect to intents and slots, almost equally distributed over the $7$ different intents. The effort spent on the construction of this new resource, according to the semi-automatic procedure described, is about 24 FTE\footnote{Full-Time Equivalent.}, with an average production of about $300$ examples per day. We consider this effort lower than the typical effort required to create linguistic resources from scratch.
Second, we compared some of the most popular NLU services on this data. The results show they all have similar features and performances. However, compared to a specific architecture for SLU, i.e., Bert-Joint, they perform worse. This was expected, and it demonstrates that Almawave-SLU can be a valuable dataset to train and test SLU systems on the Italian language.
In the future, we hope to continuously improve the data and to extend the dataset.
\section{Acknowledgment}
\label{sec:ack}
The authors would like to thank David Alessandrini, Silvana De Benedictis, Raffaele Mazzocca, Roberto Pellegrini and Federico Wolenski for the support in the annotation, revision and evaluation phases.
Q: Angular form with dynamic name is not initializing

I have a few objects and a form. Each object should remember the state of its form (mainly, dirtyness).
I'm trying to create a form with a dynamic name like such:
<form name="selectedObject.form">
<input type="text" name="name" ng-model="selectedObject.name" required>
</form>
My problem is:
* I make the first form dirty
* Change the selected object
* The form is considered dirty
I would think that using a dynamic name for the form would set a watch and have it rerender dynamically.
Anyway to do this?
Here's a plunkr simulating the problem:
http://plnkr.co/edit/NAHVfhCf6RhpJHPGl7El?p=preview
A: As you would like to have it work, I think not. I also see a problem in your code, namely the form declaration. Name is a string, so right now your form's name is always the literal selectedObject.form. I think you meant to write it as <form name="{{ selectedObject.form }}"> and have an actual name for the form instead of the one you have assigned now.
You could add isDirty as a new property to your objects and toggle it manually.
$scope.objects = [{
name: 'Object1',
type: 'Type1',
form: 'form1',
isDirty: false
},{
name: '',
type: 'Type2',
form: 'form2',
  isDirty: false
}];
<body ng-controller="MainCtrl">
<button ng-click="setSelected()">Change selected</button>
<form name="{{ selectedObject.form }}">
<input type="text"
ng-model="selectedObject.name"
ng-change="selectedObject.isDirty = true">
{{ selectedObject.isDirty }}
</form>
</body>
\section{Introduction}
Learning from time-dependent data is a challenging task due to the uncertainty about the dynamics of real-world environments.
When predictive models are deployed in environments susceptible to changes, they must detect these changes and adapt themselves accordingly.
The phenomenon in which the data distribution evolves is referred to as \textit{concept drift}, and a sizeable amount of literature has been devoted to it \cite{gama2014survey}.
An archetype of concept drift is the interest of users in a service, which typically changes over time \cite{kim2017efficient}. Changes in the environment have a potentially strong negative impact on the performance of models \cite{gama2014survey}. Therefore, it is fundamental that these models can cope with concept drift. That is, to detect changes and adapt to them accordingly.
Concept drift detection and adaptation are typically achieved by coupling predictive models with a change detection mechanism \cite{gomes2019machine}. The detection algorithm launches an alarm when it identifies a change in the data. Typical concept drift strategies are based on sequential analysis \cite{page1954continuous}, statistical process control \cite{gama2004learning}, or monitoring of distributions \cite{bifet2007learning}. When change is detected, the predictive model adapts by updating its knowledge with recent information. A simple example of an adaptation mechanism is to discard the current model and train a new one from scratch. Incremental approaches are also widely used \cite{gomes2017adaptive}.
The input data for the majority of the existing drift detection algorithms is the performance of the predictive model over time, such as the error rate. In many of these detection methods, alarms are signalled if the performance decreases significantly. However, in several real-world scenarios, labels are not readily available to estimate the performance of models.
Some labels might arrive with a delay or not arrive at all due to labelling costs.
This is a major challenge for learning algorithms that rely on concept drift detection as the unavailability of the labels precludes their application \cite{gomes2019machine}.
In this context, there is increasing attention toward unsupervised approaches to concept drift detection. These assume that, after an initial fit of the model, no further labels are available during the deployment of this model in a test set. Most works in the literature handle this problem using statistical hypothesis tests, such as the Kolmogorov-Smirnov test. These tests are applied to the output of the models \cite{vzliobaite2010change}, either the final decision or the predicted probability, or the input attributes \cite{dos2016fast}.
Our goal in this paper is to address concept drift detection in an unsupervised manner. To accomplish this, we propose a novel approach to tackle this problem using a student--teacher learning paradigm called \texttt{STUDD} (\textbf{S}tudent--\textbf{T}eacher approach for \textbf{U}nsupervised \textbf{D}rift \textbf{D}etection). The gist of the idea is as follows. On top of the main predictive model, which we designate as the teacher, we also build a second predictive model, the student.
Following the literature on model compression \cite{bucilua2006model} and knowledge distillation \cite{hinton2015distilling}, the student model is designed to mimic the behaviour of the teacher.
Using the student--teacher framework, our approach to unsupervised concept drift detection is carried out by monitoring the student's mimicking loss. The mimicking loss is a function of the discrepancy between the teacher's prediction and student's prediction in the same instance. In summary, we use the student model's loss as a surrogate for the behaviour of the main model. Accordingly, we can apply any state of the art approach in the literature, which considers the loss of a model as the main input, for example, the Page-Hinkley test \cite{page1954continuous}.
When concept drift occurs, it causes changes in the classes' prior probabilities or changes in the class conditional probabilities of the predictor variables. In effect, we hypothesise that these changes disrupt the collective behaviour between the teacher and student models. In turn, this change of behaviour may be captured by monitoring the student model's mimicking loss.
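A minimal sketch of this idea follows. It is a simplified illustration, not the authors' exact code: the Page-Hinkley test is inlined for self-containment (the paper's implementation is based on scikit-multiflow), and the parameter values and the toy stream are arbitrary assumptions.

```python
# Simplified STUDD loop: a Page-Hinkley test monitors the student's 0/1
# mimicking loss against the teacher; an alarm signals a behaviour change.

class PageHinkley:
    def __init__(self, delta=0.005, threshold=1.0):
        self.delta, self.threshold = delta, threshold
        self.mean, self.n, self.cum, self.min_cum = 0.0, 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n            # running mean
        self.cum += x - self.mean - self.delta           # cumulative deviation
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.threshold  # alarm?

def mimicking_loss(teacher_pred, student_pred):
    """0/1 loss between the teacher's and the student's predictions."""
    return 0.0 if teacher_pred == student_pred else 1.0

detector = PageHinkley(threshold=3.0)
# Toy stream: before drift the student agrees with the teacher; afterwards
# the two models diverge, which raises the mimicking loss.
stream = [(1, 1)] * 50 + [(1, 0)] * 20
for t_pred, s_pred in stream:
    if detector.update(mimicking_loss(t_pred, s_pred)):
        print("drift alarm")
        break
```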
We compared \texttt{STUDD} to several state-of-the-art methods, both unsupervised and supervised ones using 19 benchmark data streams. The results indicate that the proposed method is useful for capturing concept drift. \texttt{STUDD} shows a more conservative behaviour relative to other approaches, which is beneficial in many domains of application.
To summarise, the contributions of this paper are the following:
\begin{itemize}
\item \texttt{STUDD}: a novel method for unsupervised concept drift detection based on a student--teacher learning approach;
\item A set of experiments used to validate the proposed method. These include comparisons with state of the art approaches, and an analysis of the different scenarios regarding label availability.
\end{itemize}
The proposed method is publicly available online\footnote{\url{https://github.com/vcerqueira/studd}}. Our implementation is written in Python and is based on the scikit-multiflow framework \cite{montiel2018scikit}. We also remark that this article is an extension of a preliminary work published previously \cite{cerqueira2020unsupervised}.
The rest of this paper is organised as follows. In the next section (Section \ref{sec:pd}), we formally define the problem of concept drift detection in data streams, while in the following section (Section \ref{sec:rw}), we briefly review the literature on the topic of our work. We describe the methodology behind \texttt{STUDD} in Section \ref{sec:methodology}.
The experiments are reported in Section \ref{sec:experimental_design}, and their results are discussed in Section \ref{sec:discussion}. Finally, Section \ref{sec:conclusions} concludes the paper.
\section{Background}\label{sec:pd}
\subsection{Problem Definition}
Let $D(X,y) = \{(X_1, y_1), \dots, (X_t, y_t)\}$ denote a possibly infinite data stream, where each $X$ is a $q$-dimensional array representing the input predictor variables. Each $y$ represents the corresponding output label. We assume that the values of $y$ are categorical. The goal is to use this data set $\{X_i, y_i\}^t_1$ to create a classification model to approximate the function which maps the input $X$ to the output $y$. Let $\mathcal{T}$ denote this classifier. The classifier $\mathcal{T}$ can be used to predict the labels of new observations $X$. We denote the prediction made by the classifier as $\hat{y}_{\mathcal{T}}$.
Many real-world scenarios exhibit a non-stationary nature. Often, the underlying process causing the observations changes in an unpredictable way, which degrades the performance of the classifier $\mathcal{T}$.
Let $p(X, y)$ denote the joint distribution of the predictor variables $X$ and the target variable $y$. According to Gama et al. \cite{gama2014survey}, concept drift occurs if $p(X, y)$ is different in two distinct points in time across the data stream.
Changes in the joint probability can be caused by changes in $p(X)$, the distribution of the predictor variables or changes in the class conditional probabilities $p(X|y)$ \cite{gao2007general}. These may eventually affect the posterior probabilities of classes $p(y|X)$.
\subsection{Label Availability}\label{sec:label_availability}
When concept drift occurs, the changes need to be captured as soon as possible, so the decision rules of $\mathcal{T}$ can be updated.
The vast majority of concept drift detection approaches in the literature focus on tracking the predictive performance of the model. If the performance degrades significantly, an alarm is launched and the learning system adapts to these changes.
The problem with these approaches is that they assume that the true labels are readily available after prediction. In reality, this is rarely the case. In many real-world scenarios, labels can take too long to be available, if ever. If labels do eventually become available, often we only have access to a part of them. This is due to, for example, labelling costs. The different potential scenarios when running a predictive model are depicted in Figure \ref{fig:labels}.
\begin{figure}[hbt]
\centering
\includegraphics[width=.75\textwidth]{labelaccess.pdf}
\caption{The distinct potential scenarios regarding label access after the initial fit of the model (adapted from Gomes et al. \cite{gomes2017adaptive}).}
\label{fig:labels}
\end{figure}
Precisely, a predictive model is built using an initial batch of training data, whose labels are available. When this model is deployed in a test set, concept drift detection is carried out in an unsupervised or supervised manner.
In unsupervised scenarios, no further labels are available to the predictive model. Concept drift detection must be carried out using a different strategy other than monitoring the loss. For example, one can track the output probability of the models \cite{vzliobaite2010change} or the unconditional probability distribution $p(X)$ \cite{kuncheva2004classifier}.
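As a hedged illustration of such window-based unsupervised strategies (not a specific method from the cited works), a two-sample Kolmogorov-Smirnov test can compare the model's output probabilities in a reference window against a recent window, signalling drift when the p-value falls below a significance level (0.01 here, an arbitrary choice):

```python
# Sketch: unsupervised drift signal via a two-sample KS test on the model's
# output probabilities, comparing a reference window with a recent window.
from scipy.stats import ks_2samp

def ks_drift(reference_probs, recent_probs, alpha=0.01):
    _, p_value = ks_2samp(reference_probs, recent_probs)
    return bool(p_value < alpha)   # True -> distributions differ -> drift

reference = [0.9, 0.85, 0.92, 0.88, 0.95, 0.91, 0.87, 0.93] * 5
recent_same = [0.9, 0.86, 0.91, 0.89, 0.94] * 5
recent_drift = [0.4, 0.35, 0.45, 0.38, 0.42] * 5

print(ks_drift(reference, recent_same))    # False
print(ks_drift(reference, recent_drift))   # True
```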
Concept drift detectors have access to labels when the scenario is supervised. On the one hand, the setting may be either strongly supervised or weakly supervised \cite{zhou2018brief}. In the former, all labels become available. In the latter, the learning system only has access to a part of the labels. This is common in applications which data labelling is costly. On the other hand, labels can arrive immediately after prediction, or they can arrive with some delay. In some domains, this delay may be too large, and unsupervised approaches need to be adopted.
In this paper, we address concept drift detection from an unsupervised perspective. In this setting, we are restricted to use $p(X)$ to detect changes, as the probability of the predictor variables is not conditioned on $y$.
\section{Related Research}\label{sec:rw}
In this section, we briefly review previous research related to our work. We split this review into two parts. In the first part, we overview approaches for concept drift detection, giving particular emphasis to unsupervised approaches.
The second part addresses model compression and the related work on the student--teacher learning approach, which is the basis of the proposed method.
\subsection{Concept Drift Detection}
Concept drift can occur mainly in three different ways: suddenly, in which the current concept is abruptly replaced by a new one; gradually, in which the current concept slowly fades; and as reoccurring drift, in which different concepts are prevalent in distinct time intervals (for example, due to seasonality). A variation of gradual drift is incremental drift, which is extremely difficult to detect as it consists of many concepts that continually evolve.
We split concept drift detection into two dimensions: supervised and unsupervised. The supervised type of approaches assumes that the true labels of observations are available after prediction. Hence, they use the error of the model as the main input to their detection mechanism. On the other hand, unsupervised approaches preclude the use of the labels in their techniques.
\subsubsection{Supervised Approaches}
Plenty of error-based approaches have been developed for concept drift detection. These usually follow one of three types of strategies: sequential analysis, such as the Page-Hinkley test (PHT) \cite{page1954continuous}; statistical process control, for example the Drift Detection Method (DDM) \cite{gama2004learning} or the Early Drift Detection Method (EDDM) \cite{baena2006early}; and distribution monitoring, for example the Adaptive Windowing (ADWIN) approach \cite{bifet2007learning}.
\subsubsection{Unsupervised Approaches}\label{sec:rw_unsupervised}
Although the literature is scarce, there is an increasing interest in approaches which try to detect drift without access to the true labels. \v{Z}liobaite \cite{vzliobaite2010change} presents a work of this type. She proposed the application of statistical hypothesis testing to the output of the classifier (either the probabilities or the final categorical decision). The idea is to monitor two samples of one of these signals. One sample serves as the reference window, while the other represents the detection window. When there is a statistical difference between these, an alarm is launched.
This process can be carried out using a sliding reference window (c.f. Figure \ref{fig:output_tracker_sliding}) or a fixed reference window (c.f. Figure \ref{fig:output_tracker_fixed}).
\begin{figure}[hbt]
\centering
\includegraphics[width=.9\textwidth]{studd-Page-2.pdf}
\caption{Detecting changes using a sliding reference window. Change occurs at time t if the reference window is statistically different than the detection window.}
\label{fig:output_tracker_sliding}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=.9\textwidth]{studd-Page-1.pdf}
\caption{Detecting changes using a fixed reference window.}
\label{fig:output_tracker_fixed}
\end{figure}
In a set of experiments, \v{Z}liobaite shows that concept drift is detectable using this framework. The hypothesis tests used in the experiments are the two-sample Kolmogorov-Smirnov test, the Wilcoxon rank-sum test, and the two-sample t-test.
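The fixed reference window scheme described above can be sketched in a few lines. In the following illustrative example, the two-sample Kolmogorov-Smirnov statistic is implemented directly so that the code is self-contained; the window sizes, the simulated output probabilities, and the drift itself are assumptions for illustration, not data from the experiments.

```python
import math
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the two samples."""
    a, b = sorted(a), sorted(b)
    d, ia, ib = 0.0, 0, 0
    for v in sorted(set(a) | set(b)):
        while ia < len(a) and a[ia] <= v:
            ia += 1
        while ib < len(b) and b[ib] <= v:
            ib += 1
        d = max(d, abs(ia / len(a) - ib / len(b)))
    return d

random.seed(1)
# Reference window: classifier output probabilities right after training.
reference = [random.gauss(0.7, 0.05) for _ in range(200)]
# Detection window: recent probabilities, here simulating a drop in confidence.
detection = [random.gauss(0.5, 0.05) for _ in range(200)]

# Reject the null hypothesis (same distribution) at significance alpha,
# using the asymptotic critical value of the two-sample KS test.
alpha = 0.001
n, m = len(reference), len(detection)
critical = math.sqrt(-0.5 * math.log(alpha / 2)) * math.sqrt((n + m) / (n * m))
drift_detected = ks_statistic(reference, detection) > critical
print(drift_detected)
```

In the sliding variant, the reference window would be shifted forward with the stream instead of remaining fixed at the batch observed after training.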
Reis et al. \cite{dos2016fast} follow a strategy similar to \v{Z}liobaite \cite{vzliobaite2010change}. They propose an incremental version of the Kolmogorov-Smirnov test and use this method to detect changes without using any true labels.
However, they focus on tracking the attributes rather than the output of the predictive model. Specifically, they use a fixed window approach (c.f. Figure \ref{fig:output_tracker_fixed}) to monitor the distribution of each attribute. If a change is detected in any of these attributes, a signal for concept drift is issued.
In the same line of research, Yu et al. \cite{yu2018request} apply two layers of hypothesis testing hierarchically. Kim et al. \cite{kim2017efficient} also apply a windowing approach. Rather than monitoring the output probability of the classifier, they use a confidence measure as the input to drift detectors.
Pinto et al. \cite{pinto2019automatic} present an automatic framework for monitoring the performance of predictive models. Similarly to the above-mentioned works, they perform concept drift detection based on a windowing approach. The signal used to detect drift is computed according to a mutual information metric, namely the Jensen-Shannon Divergence \cite{lin1991divergence}. The window sizes and the threshold above which an alarm is launched are analysed, and the approach is validated on real-world data sets. An interesting aspect of the approach by Pinto et al. \cite{pinto2019automatic} is that their method explains the alarms. This explanation is based on an auxiliary binary classification model. The goal of applying this model is to rank the events that occurred in the detection window according to how they relate to the alarm. These explanations may be crucial in sensitive applications which require transparent models.
G{\"o}z{\"u}a{\c{c}}{\i}k et al. \cite{gozuaccik2019unsupervised} also develop an auxiliary predictive model for unsupervised concept drift detection, called D3 (for Discriminative Drift Detector). The difference from the work by Pinto et al. \cite{pinto2019automatic} is that they use this model for detecting concept drift rather than for explaining the alarms.
\subsection{Student--Teacher Learning Approach}
Model compression, also referred to as student-teacher learning, is a technique proposed by Bucilu\v{a} et al. \cite{bucilua2006model}. The goal is to train a model, designated as a student, to mimic the behaviour of a second model (the teacher).
To perform model compression, the idea is to first retrieve the predictions of the teacher in observations not used for training (e.g. a validation data set). Then, the student model is trained using this set of observations, where the explanatory variables are the original ones, but the original target variable is replaced with the predictions of the teacher.
The authors use this approach to compress a large ensemble (the teacher) into a compact predictive model (the student).
Bucilu\v{a} et al. \cite{bucilua2006model} use the ensemble selection algorithm \cite{caruana2004ensemble} as the teacher and a neural network as the student model and address eight binary classification problems. Their results show that the compressed neural network performs comparably with the teacher while being ``1000 times smaller and 1000 times faster''. Moreover, the compressed neural network considerably outperforms the best individual model in the ensemble used as the teacher.
Hinton et al. \cite{hinton2015distilling} developed the idea of model compression further, denoting their compression technique as knowledge distillation. Distillation works by softening the probability distribution over classes in the softmax output layer of a neural network.
The authors address an automatic speech recognition problem by distilling an ensemble of deep neural networks into a single and smaller deep neural network.
Both Bucilu\v{a} et al. \cite{bucilua2006model} and Hinton et al. \cite{hinton2015distilling} show that a single compressed model performs comparably to the combined predictions of the ensemble.
While our concerns are not about decreasing the computational costs of a model, we can leverage model compression approaches to tackle the problem of concept drift detection. Particularly, by creating a student model which mimics the behaviour of a classifier, we can perform concept drift detection using the loss of the student model. Since this loss is not conditioned on the target variable $y$, concept drift detection is carried out in an unsupervised manner.
This paper significantly extends a previously published paper \cite{cerqueira2020unsupervised}. The experiments are completely different. While in the previous work we validated \texttt{STUDD} using synthetic drifts based on two data sets, we now focus on a realistic evaluation scenario based on 19 benchmark data streams. We also include other state of the art approaches in the experiments.
\section{Methodology}\label{sec:methodology}
In this section we describe \texttt{STUDD}, the proposed approach to unsupervised concept drift detection.
\texttt{STUDD} is split into two steps: an initial offline stage, which occurs during the training of the data stream classifier (Section \ref{sec:stage1}); and an online stage, when the method is applied for change detection (Section \ref{sec:stage2}).
\subsection{Stage 1: Student--Teacher Training}\label{sec:stage1}
The first stage of the proposed approach refers to the training of the predictive models. This process is illustrated in Figure \ref{fig:scheme_fitting}. A batch of training observations is retrieved from the source data stream $\mathcal{D}$. These observations ($\mathcal{D}(X_{tr}, y_{tr})$) are used to train the classifier $\mathcal{T}$. This is the predictive model to be deployed in the data stream.
After creating $\mathcal{T}$, we carry out a student--teacher approach in which $\mathcal{T}$ acts as the teacher. First, $\mathcal{T}$ is used to make predictions on the training set. This leads to a new training data set, in which the targets $y_{tr}$ are replaced with the predictions of $\mathcal{T}$, $\hat{y}_{\{\mathcal{T},tr\}}$. Finally, the student model $\mathcal{S}$ is trained using the new data set. Essentially, the student model $\mathcal{S}$ is designed to mimic the behaviour of the teacher $\mathcal{T}$.
It might be argued that using the same instances to train both the teacher and the student models leads to over-fitting. However, Hinton et al. \cite{hinton2015distilling} show that this is not a concern.
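The procedure of this first stage can be sketched as follows. In the experiments, both models are Random Forests; in this illustrative, dependency-free sketch, a toy nearest-centroid classifier stands in for the learner, and the training batch is an assumption for illustration.

```python
import math

def fit_centroids(X, y):
    """Toy learner: store the per-class mean feature vector."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(rows) for col in zip(*rows)]
            for c, rows in groups.items()}

def predict(model, X):
    """Assign each observation to the class of the nearest centroid."""
    return [min(model, key=lambda c: math.dist(xi, model[c])) for xi in X]

# Initial training batch D(X_tr, y_tr).
X_tr = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y_tr = [0, 0, 1, 1]

# 1) Fit the teacher T on the labelled batch.
teacher = fit_centroids(X_tr, y_tr)

# 2) Replace the original targets with the teacher's own predictions.
y_hat_teacher = predict(teacher, X_tr)

# 3) Fit the student S on the teacher's predictions, so it mimics T.
student = fit_centroids(X_tr, y_hat_teacher)

print(predict(student, X_tr) == y_hat_teacher)
```

The essential point is step 2: the student never sees $y_{tr}$, only the outputs of the teacher.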
\begin{figure}[hbt]
\centering
\includegraphics[width=\textwidth]{UCDD_ModelCompression_Flowchart.pdf}
\caption{Fitting the teacher ($\mathcal{T}$) and student ($\mathcal{S}$) models using an initial batch of training observations.}
\label{fig:scheme_fitting}
\end{figure}
The student-teacher learning paradigm is at the core of model compression \cite{bucilua2006model} or knowledge distillation \cite{hinton2015distilling} methods. These approaches aim at compressing a model with a large number of parameters (teacher), such as an ensemble or a deep neural network, into a more compact model (student) with a comparable predictive performance. Accordingly, the student model is deployed in the test set, while the teacher is not used in practice due to high computational costs.
Conversely, our objective in using a student-teacher strategy is different. We regard the student model $\mathcal{S}$ as a model which is able to predict the behaviour of the teacher model $\mathcal{T}$, i.e., what the output of $\mathcal{T}$ will be for a given input observation. Moreover, it is important to remark that, in our methodology, both the student and teacher models are applied in the test phase.
\subsection{Stage 2: Change Detection}\label{sec:stage2}
The second stage of the proposed method refers to the change detection process. As described before, state-of-the-art concept drift detection methods take the loss of predictive models as their primary input. Since we assume that labels are unavailable, we cannot compute the loss of the model $\mathcal{T}$. This precludes the typical application of state-of-the-art change detection approaches to unsupervised concept drift detection.
\begin{figure}[hbt]
\centering
\includegraphics[width=\textwidth]{UCDD_ModelCompression_Flowchart_Detection.pdf}
\caption{The concept drift detection process of \texttt{STUDD}. We retrieve the predictions from both models for a new observation. A function of the discrepancy of these predictions, which is independent from the true labels, is given as input to a state of the art detection model.}
\label{fig:scheme_detection}
\end{figure}
However, we can compute the student model's loss, which is independent of the true labels.
The loss of the student is quantified according to the discrepancy between the prediction of $\mathcal{T}$ ($\hat{y}_{\mathcal{T}}$) and the prediction of $\mathcal{S}$ about $\hat{y}_{\mathcal{T}}$ ($\hat{y}_{\mathcal{S}}$). Accordingly, the loss of $\mathcal{S}$ is defined as $L(\hat{y}_{\mathcal{T}}, \hat{y}_{\mathcal{S}})$, where $L$ is the loss function (e.g. the error rate).
Therefore, our approach to concept drift detection uses a state of the art detector, such as the Page-Hinkley test \cite{page1954continuous}. However, the main input to this detector is the student model's error, rather than the teacher model's error. This process is depicted in Figure \ref{fig:scheme_detection}. For a given input observation $x_i$, we obtain the prediction from the models $\mathcal{T}$ and $\mathcal{S}$. Then, a function of the discrepancy between these predictions is given as input to the detection model.
When concept drift occurs, it potentially causes changes in the posterior probability of classes, $p(y|X)$. Thus, we hypothesise that such changes will also potentially affect the joint behaviour between student and teacher models. This effect will then be reflected on the student's imitation error, and the underlying change detection mechanism can capture it.
In effect, the teacher model is deployed in the data stream and used to make predictions on the upcoming observations. For concept drift detection, we track the error of the student model.
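The detection loop can be sketched with a hand-rolled Page-Hinkley test monitoring the student's imitation error (1 when student and teacher disagree, 0 otherwise). The $\delta$ and threshold values below, as well as the simulated error stream, are illustrative assumptions rather than the configuration used in the experiments.

```python
class PageHinkley:
    """Minimal Page-Hinkley test for detecting an increase in the mean."""

    def __init__(self, delta=0.005, threshold=5.0):
        self.delta = delta          # tolerated magnitude of change
        self.threshold = threshold  # alarm threshold (lambda)
        self.mean = 0.0
        self.n = 0
        self.cum = 0.0              # cumulative deviation m_t
        self.min_cum = 0.0          # running minimum M_t

    def update(self, x):
        """Feed one error value; return True when a change is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.threshold

# Simulated stream of student imitation errors: agreement between student
# and teacher, followed by sustained disagreement after a drift.
errors = [0] * 300 + [1] * 100
ph = PageHinkley()
alarm_at = next((i for i, e in enumerate(errors) if ph.update(e)), None)
print(alarm_at)
```

As intended, the alarm fires shortly after the onset of disagreement, without any access to the true labels.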
\section{Empirical Experiments}\label{sec:experimental_design}
This section details the experiments carried out to validate the proposed approach to unsupervised concept drift detection.
We start by describing the research questions we aim at answering (\ref{sec:research_questions}), followed by a brief description of the data streams used in the experiments (\ref{sec:data}).
Afterwards, we explain the workflow used to analyse each approach under comparison (\ref{sec:workflow_experiments}), and in the respective evaluation scheme (\ref{sec:evaluation}). Finally, we describe the methods used to compare \texttt{STUDD} with (\ref{sec:methods}), and detail the value of important parameters (\ref{sec:parameter_setup}).
\subsection{Research Questions}\label{sec:research_questions}
We designed a set of experiments to answer the following research questions:
\begin{itemize}
\item \textbf{RQ1}: Is \texttt{STUDD} able to detect concept drift?
\item \textbf{RQ2}: What is the performance of \texttt{STUDD} for concept drift detection relative to state of the art approaches? These include both unsupervised and supervised ones;
\item \textbf{RQ3}: When, in terms of label availability scenarios, is \texttt{STUDD} beneficial relative to a supervised approach?
\end{itemize}
\subsection{Data Sets}\label{sec:data}
\begin{table}[!thb]
\centering
\caption{Data streams used in the experiments. The shape column describes the dimensionality of the data in the form (number of rows $\times$ number of columns).}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lp{5cm}rr}
\toprule
\textbf{Data Stream} & \textbf{Description} & \textbf{Shape} & \textbf{\# Classes}\\
\midrule
Electricity & \strut Price direction of electricity market in Australia & 45.312 $\times$ 9 & 2 \\
CoverType & \strut Forest cover type from the US Forest Service & 581.012 $\times$ 55 & 5\\
Poker & \strut Poker hands drawn from a deck of 52 cards & 1.025.010 $\times$ 12 & 10\\
Gas & \strut Gas measurements from chemical sensors & 13.910 $\times$ 17 & 6\\
Luxembourg & \strut Survey concerning the internet usage (high or low) from 2002 to 2007 & 1.901 $\times$ 31 & 2\\
Ozone & Air measurements concerning ozone levels & 2.574 $\times$ 32 & 2\\
Sensors & Sensor identification from environmental data & 2.219.803 $\times$ 5 & 54\\
Powersupply & Hour identification from power supply data from an Italian electricity company & 29.928 $\times$ 2 & 24\\
Rialto & Building identification near from processed images taken in the Rialto bridge in Venice & 82.250 $\times$ 27 & 10 \\
Outdoor & Object identification from images taken outdoor under varying lighting conditions (sunny and cloudy) & 4.000 $\times$ 21 & 40 \\
Keystroke & User identification from typing rhythm of an expression & 1.600 $\times$ 10 & 4\\
NOAA & Rain detection from weather measurements collected over 50 years & 18.159 $\times$ 8 & 2\\
Bike & Count of rental bikes (high or low) from a bike-sharing system & 17.378 $\times$ 5 & 2\\
Arabic & Digit identification from audio (in arabic) features & 8.800 $\times$ 28 & 10\\
ArabicShuffled & Similar to \textit{Arabic} but observations are shuffled by gender to enhance concept drift (c.f. \cite{dos2016fast}) & 8.800 $\times$ 28 & 10\\
Insects & Identification of the specimen of a flying insect that is passing through a laser & 5.325 $\times$ 50 & 5\\
InsectsAbrupt & Similar to \textit{Insects}, but abrupt drift is introduced in the feature space & 5.325 $\times$ 50 & 5\\
Posture & Movement identification from sensors carried by different people & 164.859 $\times$ 4 & 11 \\
GMSC & Credit scoring data set (\textit{Give me some credit}) & 150.000 $\times$ 11 & 2 \\
\bottomrule
\end{tabular}%
}
\label{tab:data}
\end{table}
We used 19 benchmark data streams to answer the above research questions and validate the applicability of \texttt{STUDD}.
These data sets include the following data streams: Electricity \cite{harries1999splice}, forest cover type \cite{blackard1999comparative}, Poker \cite{cattral2002evolutionary}, Gas \cite{vergara2012chemical}, Luxembourg \cite{vzliobaite2011combining}, Ozone \cite{dua2017uci}, Power supply \cite{zhu2010stream}, Rialto \cite{losing2016knn}, Outdoor \cite{losing2015interactive}, Keystroke \cite{souza2015data}, NOAA \cite{ditzler2012incremental}, Bike \cite{fanaee2014event}, Arabic \cite{hammami2010improved}, Arabic with shuffled observations as per Reis et al. \cite{dos2016fast}, Insects \cite{de2013classification}, Insects with artificial abrupt concept drift \cite{dos2016fast}, Posture \cite{kaluvza2010agent}, and GMSC \cite{gomes2017adaptive}. These are briefly described in Table \ref{tab:data}. In order to speed up computations, we truncated the sample size of all data streams to 150.000 observations. These data sets are commonly used as benchmarks for mining data streams. We retrieved them from an online repository for data streams \cite{souza2020challenges}, or the repository associated with two previous works related to data streams \footnote{\url{https://github.com/denismr/incremental-ks}}\textsuperscript{,}\footnote{\url{https://github.com/hmgomes/AdaptiveRandomForest}}.
\subsection{Workflow of Experiments}\label{sec:workflow_experiments}
We designed the experiments according to a batch setup, split into an offline stage and an online stage.
In the offline stage, we train the main classifier $\mathcal{T}$ to be deployed in the data stream using an initial batch of $W$ observations. We also carry out any tasks specific to the underlying drift detection approach. For example, in the case of the proposed approach, we also train the student model $\mathcal{S}$.
The online stage starts when the classifier $\mathcal{T}$ is deployed in the data stream. For each new observation $x_i$, the classifier $\mathcal{T}$ makes a prediction $\hat{y}_i$. Meanwhile, the underlying detection mechanism uses the available data (e.g. $x_i$, $\hat{y}_i$) to monitor the classifier's behaviour. If the detection mechanism detects a change, it launches an alarm and the classifier $\mathcal{T}$ is adapted with recent information.
\begin{figure}[hbt]
\centering
\includegraphics[width=.9\textwidth]{workflow_design_experiments.pdf}
\caption{The workflow for applying and evaluating each method under comparison.}
\label{fig:workflow_experiments}
\end{figure}
The adaptation mechanism adopted in this work is based on a re-training procedure. The current model is discarded, and a new model is re-trained using the latest $W$ observations. This workflow is depicted in Figure \ref{fig:workflow_experiments}. We remark that, in the case of \texttt{STUDD}, the student model is also updated as described.
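The online stage with retraining can be sketched generically. The learner and the detection rule below are deliberately simplistic placeholders (a majority-class model and an error-rate rule) standing in for the Random Forest and Page-Hinkley components used in the experiments; the toy stream and window size are assumptions for illustration.

```python
W = 4  # toy training window size

def fit(batch):
    # Placeholder learner: remember the majority label of the batch.
    labels = [y for _, y in batch]
    return max(set(labels), key=labels.count)

def predict_one(model, x):
    # The majority-class "model" predicts its stored label.
    return model

def detector(window):
    # Placeholder detection rule: alarm when the recent error rate is high.
    return sum(window) / len(window) > 0.5 if len(window) >= W else False

# Toy stream: concept 0 followed by concept 1.
stream = [(x, 0) for x in range(6)] + [(x, 1) for x in range(6, 12)]

model = fit(stream[:W])       # offline stage: fit on the initial batch
errors, alarms = [], []

for i, (x, y) in enumerate(stream[W:], start=W):
    y_hat = predict_one(model, x)
    errors.append(int(y_hat != y))            # monitoring signal
    if detector(errors[-W:]):
        alarms.append(i)
        model = fit(stream[i - W + 1:i + 1])  # retrain on the latest W obs
        errors.clear()

print(alarms)
```

After the single alarm, the retrained model matches the new concept and no further alarms are issued.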
\subsection{Evaluation}\label{sec:evaluation}
Our goal is to evaluate the performance of the concept drift detection mechanisms. We focus on unsupervised scenarios, in which the true labels are not readily available.
However, for evaluation purposes, we use the labels to assess the quality of change detectors.
We aim at measuring a trade-off between predictive performance and the number of alarms issued by the detector. The alarms represent a cost as they trigger the retrieval (and annotation) of a batch of observations.
We evaluate each approach from two dimensions:
\begin{itemize}
\item Predictive performance: We measure the quality of the classifier according to the Cohen's Kappa statistic \cite{cohen1960coefficient}, which is a common metric used to evaluate classification models;
\item Annotation costs: We assume that we are working in environments where labels are scarce and costly to obtain. Therefore, as explained before, each concept drift signal triggers a request for an additional batch of labels. This request is expensive to the user, and an important metric to minimize. We account for this problem by measuring the ratio of labels with respect to the complete length of the data stream used by the respective approach.
\end{itemize}
\noindent Ideally, the optimal approach maximizes predictive performance and minimizes the amount of labels requested.
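Both evaluation dimensions are straightforward to compute. The following dependency-free sketch implements Cohen's kappa from its definition and expresses the label cost as a ratio; the predictions, alarm count, and stream length below are illustrative assumptions.

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_freq = Counter(y_true)
    pred_freq = Counter(y_pred)
    expected = sum(true_freq[c] * pred_freq.get(c, 0)
                   for c in true_freq) / n ** 2
    return (observed - expected) / (1 - expected)

y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 0, 1, 0, 0, 1, 1, 1]

# Annotation cost: labels requested by the detector over the stream length,
# e.g. two alarms each requesting a batch of W=1000 labels.
labels_requested = 2 * 1000
stream_length = 45312

print(round(cohens_kappa(y_true, y_pred), 2))
print(round(labels_requested / stream_length, 3))
```

A perfect approach would drive the first number towards 1 while keeping the second close to 0.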
\subsection{Methods}\label{sec:methods}
Besides the proposed method, we include nine other approaches in our experimental setup. These can be described as follows.
We include the following two baselines:
\begin{itemize}
\item \texttt{BL-st}: A static baseline, which never adapts to concept drift. In practice, a model (Random Forest) is fit using the training observations and used to predict the remaining observations in the data stream. Accordingly, the model may become outdated due to concept drift, but it incurs minimal labelling costs;
\item \texttt{BL-ret}: A baseline which follows the opposite strategy to \texttt{BL-st}; it retrains the predictive model after every $W$ observations. The model is always up to date, but at the price of high labelling costs.
\end{itemize}
In terms of unsupervised methods, which do not use any true labels, we apply the following state of the art approaches:
\begin{itemize}
\item \texttt{Output Sliding} (\texttt{OS}): A method that tracks the output of the predictive model using a \textbf{sliding} reference window as described by {\v{Z}}liobaite \cite{vzliobaite2010change}. Figure \ref{fig:output_tracker_sliding} shows the workflow of this approach. According to previous studies \cite{vzliobaite2010change,dos2016fast}, we apply the Kolmogorov-Smirnov test to assess whether or not change occurs (c.f. Section \ref{sec:rw_unsupervised} for more details);
\item \texttt{Output Fixed} (\texttt{OF}): A similar approach to \texttt{OS}, but which tracks the output of the predictive model using a \textbf{fixed} reference window. This approach is depicted in Figure \ref{fig:output_tracker_fixed}. We also apply the Kolmogorov-Smirnov test in this case.
\item \texttt{Feature Fixed} (\texttt{FF}): The method described by Reis et al. \cite{dos2016fast}, which, instead of tracking the output of predictive models (as \texttt{OS} and \texttt{OF} do), tracks the values of the features. Following Reis et al. \cite{dos2016fast}, if a change is detected in any of the features using the Kolmogorov-Smirnov test, the predictive model is adapted.
\end{itemize}
We also include the following supervised approaches in our experiments.
These assume some level of access to the true labels. While they may not be applicable in some scenarios where labels are difficult to acquire, they are important benchmarks for comparisons.
\begin{itemize}
\item \texttt{Strongly Supervised} (\texttt{SS}): We apply the standard concept drift detection procedure which assumes that all the true labels are immediately available after making a prediction. This can be regarded as the gold standard. The term \textit{strong} refers to the fact that all labels are available during testing \cite{zhou2018brief};
\item \texttt{Weakly Supervised} (\texttt{WS}): In many real-world scenarios, particularly in high-frequency data streams, data labelling is costly. Hence, predictive models can only be updated using a part of the entire data set. This process is commonly referred to as weakly supervised learning \cite{zhou2018brief}.
We simulate a weakly supervised scenario in our experiments. Accordingly, predictive models only have access to \textit{l\_access}\% of the labels. In other words, after a model predicts the label of a given instance, the respective label is immediately available with a \textit{l\_access}\% probability;
\item \texttt{Delayed Strongly Supervised} (\texttt{DSS}): Labels can take some time to be available. We study this aspect by artificially delaying the arrival of the labels by \textit{l\_delay} instances. After a label becomes available, the respective observation is used to update the change detection model;
\item \texttt{Delayed Weakly Supervised} (\texttt{DWS}): We combine the two previous scenarios. In the \texttt{DWS} setup, only \textit{l\_access}\% of the labels are available. Those which are available arrive with a delay of \textit{l\_delay} observations.
\end{itemize}
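The label-availability scenarios above can be simulated as follows. The stream length is a toy value, while \textit{l\_access} and \textit{l\_delay} mirror the parameter setup used in the experiments (50\% and $W/2$, respectively).

```python
import random

random.seed(0)
stream_length = 2000
l_access = 0.5    # probability that a label ever becomes available (WS)
l_delay = 500     # observations to wait before an available label arrives (DSS)

# arrival[i]: time step at which the label of instance i becomes usable,
# or None if it is never revealed (combined DWS scenario).
arrival = [i + l_delay if random.random() < l_access else None
           for i in range(stream_length)]

available = [a for a in arrival if a is not None]
print(len(available) / stream_length)   # fraction of labels ever revealed
print(min(available) >= l_delay)        # no label arrives before its delay
```

Setting \textit{l\_access} to 1 and \textit{l\_delay} to 0 recovers the strongly supervised (\texttt{SS}) scenario.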
Note that all methods above follow the procedure outlined in Section \ref{sec:workflow_experiments}. There are two differences between these approaches: (1) the degree of access to labels; and (2) how concept drift detection is carried out.
\subsection{Parameter Setup}\label{sec:parameter_setup}
In terms of parameters, we set the training window size $W$ to 1000 observations for most data streams. Due to low sample size, for the data streams \textit{Insects}, \textit{AbruptInsects}, \textit{Keystroke}, \textit{Ozone}, \textit{Outdoor}, and \textit{Luxembourg}, we set this parameter to 500 observations. We follow the setup used by Reis et al. \cite{dos2016fast} to set these values.
We focus on the Random Forest as learning algorithm \cite{breiman2001random}, which we apply with 100 trees. The remaining parameters are left as default according to the implementation provided in the \textit{scikit-learn} library \cite{pedregosa2011scikit}. We also apply the Random Forest method, with the same configuration, for building the student model in the proposed \texttt{STUDD} approach.
We apply the Page-Hinkley test \cite{page1954continuous} for concept drift detection, specifically its implementation from the \textit{scikit-multiflow} library \cite{montiel2018scikit}. This approach is a state of the art method for concept drift detection. We set the value of the $\delta$ parameter, which concerns the magnitude of changes, to 0.001, while the remaining parameters are left as default.
For the state of the art unsupervised concept drift detection approaches, the significance level parameter for rejecting the null hypothesis in the Kolmogorov-Smirnov test is set to 0.001, similarly to Reis et al. \cite{dos2016fast}.
Regarding the delayed supervised methods (\texttt{DSS} and \texttt{DWS}), we set the delay parameter (\textit{l\_delay}) to $W / 2$, which is half of the training window size. For the weakly supervised variants (\texttt{WS} and \texttt{DWS}), the access to labels (\textit{l\_access}) is set to 50\%. Finally, the loss function used as input to the Page-Hinkley test is the error rate.
\subsection{Results}\label{sec:experimental_results}
In this section, we present the results obtained from the experiments. First, we visualize the alarms triggered by \texttt{STUDD} for concept drift and compare them to those of a supervised benchmark method (Section \ref{sec:visual_alarms}). Then, we present the main results, which show the performance of each approach and the respective costs (Section \ref{sec:main_results}). Finally, we carry out a sensitivity analysis comparing the performance of \texttt{STUDD} with a supervised approach under varying degrees of access to labels (Section \ref{sec:sensitivity_labels}).
\subsubsection{Visualizing Alarms}\label{sec:visual_alarms}
\begin{figure}[hbt]
\centering
\includegraphics[width=.9\textwidth]{Viz_Alarms_InsectsAbrupt.pdf}
\caption{An example using the \textit{AbruptInsects} data stream where the proposed method is able to detect concept drift and adapt to the environment similarly to a supervised approach.}
\label{fig:viz_alarm_succ}
\end{figure}
We start the analysis of the results by visualizing the alarms launched and the predictive performance of \texttt{STUDD}. We also include the behaviour of \texttt{SS} for a comparative analysis. In the interest of conciseness, we focus on three examples out of the 19 problems: a successful example, in which \texttt{STUDD} is able to detect concept drift and obtain a performance competitive with a supervised approach that has complete access to the true labels; a positive example, in which the proposed approach shows a better change detection behaviour relative to \texttt{SS}; and a negative example, which shows a problem in which \texttt{STUDD} performs poorly.
\begin{figure}[hbt]
\centering
\includegraphics[width=.9\textwidth]{VizAlarms_Posture.pdf}
\caption{An example using the \textit{Posture} data stream where the proposed method is able to detect concept drift and adapt to the environment better than a supervised approach.}
\label{fig:viz_alarm_positive}
\end{figure}
The first example is shown in Figure \ref{fig:viz_alarm_succ}. The figure shows the performance of each approach, \texttt{SS} in black and \texttt{STUDD} in blue, across the data stream \textit{InsectsAbrupt}. The performance is computed in a sliding window of 200 observations. The vertical dashed lines represent the time points in which the respective approach triggers an alarm for concept drift.
In the initial part of the data stream, the performance of both approaches is identical (both lines are superimposed). Their behaviours diverge from the point at which \texttt{SS} triggers its first alarm. The drift signalled by this alarm has a visible impact on predictive performance, as the score of \texttt{STUDD} decreases considerably. Notwithstanding, \texttt{STUDD} is able to detect the change soon after and regain the previous level of predictive performance.
This example shows that the proposed approach is able to detect changes in the environment.
\begin{figure}[hbt]
\centering
\includegraphics[width=.9\textwidth]{VizAlarms_Bike.pdf}
\caption{An example using the \textit{Bike} data stream where the proposed method is unable to detect changes in the environment.}
\label{fig:viz_alarm_negative}
\end{figure}
Figure \ref{fig:viz_alarm_positive} shows another example, for the data stream \textit{Posture}, which follows the same structure as the previous one. In this case, \texttt{STUDD} is not only able to detect multiple changes in a timely manner but also shows a visibly better predictive performance relative to the benchmark. The proposed approach also launches fewer alarms than \texttt{SS}, which shows that it can be more efficient in terms of the amount of labels it requests.
We show a final example in Figure \ref{fig:viz_alarm_negative} for data stream \textit{Bike}. This represents a negative example from the perspective of \texttt{STUDD}, in which it fails to detect the changes in the environment and performs poorly. On the contrary, the benchmark approach is able to improve its performance by detecting concept drift.
The above examples show the behaviour of \texttt{STUDD} in different scenarios.
In the next section, we will analyse its performance in all data streams and compare it to state of the art approaches.
\subsubsection{Performance by Data Stream}\label{sec:main_results}
The main results are presented in Tables \ref{tab:performance} and \ref{tab:costs}. The first one reports the Kappa score of each approach in each data set. The final row of the table describes the average rank of each method across all problems. The second table has a similar structure to Table \ref{tab:performance}, but its values represent the ratio of labels (with respect to the full length of the data stream) used by the respective approach.
\texttt{STUDD} shows better performance scores relative to the static baseline \texttt{BL-st} (whose model is never updated) in most problems. Moreover, Table \ref{tab:costs} indicates that \texttt{STUDD} presents the best scores in terms of costs apart from the above-mentioned baseline. This outcome shows that the change signals triggered by \texttt{STUDD} are beneficial in terms of predictive performance, and that the method is efficient in terms of the labels required relative to other state of the art approaches.
\begin{table}
\caption{Performance of each method in each data set according to Cohen's kappa score. The value of the best method in each category (baseline, supervised, and unsupervised) is in bold.}
\label{tab:performance}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}[t]{lrrrrrrrrrr}
\toprule
& \texttt{STUDD} & \texttt{BL-st} & \texttt{BL-ret} & \texttt{SS} & \texttt{DSS} & \texttt{WS} & \texttt{DWS} & \texttt{OS} & \texttt{OF} & \texttt{FF}\\
\midrule
AbruptInsects & 0.79 & 0.56 & 0.74 & 0.79 & 0.79 & 0.81 & 0.74 & 0.79 & 0.80 & 0.78\\
Insects & 0.81 & 0.81 & 0.77 & 0.77 & 0.76 & 0.81 & 0.81 & 0.70 & 0.71 & 0.76\\
Posture & 0.45 & 0.33 & 0.48 & 0.41 & 0.47 & 0.46 & 0.48 & 0.38 & 0.42 & 0.43\\
Arabic & 0.82 & 0.68 & 0.79 & 0.82 & 0.83 & 0.82 & 0.80 & 0.81 & 0.82 & 0.77\\
Bike & 0.02 & 0.02 & 0.33 & 0.31 & 0.31 & 0.35 & 0.28 & 0.02 & 0.02 & 0.33\\
\addlinespace
NOAA & 0.40 & 0.40 & 0.46 & 0.44 & 0.46 & 0.46 & 0.44 & 0.42 & 0.44 & 0.47\\
Sensor & 0.60 & 0.11 & 0.80 & 0.84 & 0.68 & 0.79 & 0.65 & 0.62 & 0.67 & 0.81\\
Powersupply & 0.06 & 0.06 & 0.08 & 0.05 & 0.07 & 0.03 & 0.03 & 0.07 & 0.08 & 0.08\\
Poker & 0.27 & 0.15 & 0.60 & 0.65 & 0.61 & 0.59 & 0.57 & 0.42 & 0.20 & 0.60\\
Rialto & 0.26 & 0.17 & 0.33 & 0.44 & 0.27 & 0.38 & 0.25 & 0.27 & 0.33 & 0.34\\
\addlinespace
Ozone & 0.13 & 0.13 & 0.12 & 0.13 & 0.13 & 0.13 & 0.13 & 0.13 & 0.13 & 0.13\\
Outdoor & 0.38 & 0.44 & 0.40 & 0.41 & 0.43 & 0.44 & 0.44 & 0.40 & 0.46 & 0.39\\
Luxembourg & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\\
Gas & 0.32 & 0.38 & 0.54 & 0.63 & 0.58 & 0.69 & 0.54 & 0.53 & 0.41 & 0.54\\
Keystroke & 0.88 & 0.88 & 0.93 & 0.88 & 0.88 & 0.88 & 0.88 & 0.88 & 0.88 & 0.93\\
\addlinespace
ArabicShuffled & 0.83 & 0.83 & 0.84 & 0.83 & 0.83 & 0.83 & 0.83 & 0.84 & 0.83 & 0.83\\
Covtype & 0.43 & 0.39 & 0.70 & 0.70 & 0.59 & 0.65 & 0.51 & 0.26 & 0.53 & 0.71\\
GMSC & 0.10 & 0.10 & 0.20 & 0.24 & 0.14 & 0.10 & 0.10 & 0.10 & 0.10 & 0.10\\
Electricity & 0.40 & 0.28 & 0.52 & 0.49 & 0.49 & 0.43 & 0.48 & 0.51 & 0.46 & 0.52\\
\addlinespace
Avg. Rank & 6.97 & 7.82 & 4.16 & 4.34 & 4.50 & 4.32 & 6.03 & 6.58 & 5.79 & 4.50\\
\bottomrule
\end{tabular}%
}
\end{table}
We also compare the proposed approach to other unsupervised methods for concept drift detection, namely \texttt{OS}, \texttt{OF}, and \texttt{FF}. Regarding average rank, \texttt{STUDD} shows the best score cost-wise but the worst one in terms of performance. This result suggests that the proposed approach is more conservative than other unsupervised approaches, and better suited to settings where false alarms are costly.
Overall, \texttt{FF} presents the best scores in terms of predictive performance among the unsupervised approaches. However, these are accompanied by the worst scores in terms of costs among all methods except the baseline \texttt{BL-ret}.
\texttt{OS} and \texttt{OF} present behaviour more comparable to that of \texttt{STUDD}. While the average performance rank of \texttt{STUDD} is worse than that of \texttt{OS} and \texttt{OF}, the differences in scores are negligible in most of the problems.
On the other hand, the gains in predictive performance shown by \texttt{OS} and \texttt{OF} relative to \texttt{STUDD} come at a high cost. This is especially noteworthy in the data sets \textit{Insects}, \textit{Poker}, \textit{Rialto}, \textit{Outdoor}, \textit{Gas}, and \textit{Electricity}, where \texttt{STUDD} shows relatively low costs while maintaining comparable levels of performance. This suggests that the proposed approach is more efficient in terms of the number of labels required while securing competitive performance.
\begin{table}
\caption{Ratio of additional labels (with respect to the full length of the data stream) required by each method in each data set. The value of the best method in each category (baseline, supervised, and unsupervised) is in bold.}
\label{tab:costs}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}[t]{lrrrrrrrrrr}
\toprule
& \texttt{STUDD} & \texttt{BL-st} & \texttt{BL-ret} & \texttt{SS} & \texttt{DSS} & \texttt{WS} & \texttt{DWS} & \texttt{OS} & \texttt{OF} & \texttt{FF}\\
\midrule
AbruptInsects & 0.19 & 0.09 & 0.94 & 0.19 & 0.19 & 0.19 & 0.19 & 0.19 & 0.19 & 0.66\\
Insects & 0.09 & 0.09 & 0.94 & 0.28 & 0.28 & 0.09 & 0.09 & 0.47 & 0.66 & 0.94\\
Posture & 0.04 & 0.01 & 0.99 & 0.06 & 0.06 & 0.16 & 0.08 & 0.02 & 0.02 & 0.74\\
Arabic & 0.23 & 0.11 & 0.91 & 0.34 & 0.23 & 0.23 & 0.68 & 0.23 & 0.23 & 0.80\\
Bike & 0.06 & 0.06 & 0.98 & 0.35 & 0.17 & 0.17 & 0.35 & 0.06 & 0.06 & 0.98\\
\addlinespace
NOAA & 0.11 & 0.06 & 0.99 & 0.22 & 0.28 & 0.22 & 0.28 & 0.17 & 0.17 & 0.99\\
Sensor & 0.57 & 0.01 & 0.99 & 1.22 & 0.48 & 1.71 & 0.87 & 0.39 & 0.45 & 0.99\\
Powersupply & 0.10 & 0.03 & 0.97 & 0.10 & 0.17 & 0.10 & 0.07 & 0.47 & 0.47 & 0.94\\
Poker & 0.02 & 0.01 & 0.99 & 0.78 & 0.55 & 1.03 & 0.75 & 0.32 & 0.21 & 0.99\\
Rialto & 0.10 & 0.01 & 1.00 & 1.49 & 0.58 & 1.32 & 0.68 & 0.50 & 0.78 & 1.00\\
\addlinespace
Ozone & 0.39 & 0.39 & 1.18 & 0.39 & 0.39 & 0.39 & 0.39 & 0.39 & 0.39 & 0.99\\
Outdoor & 0.38 & 0.25 & 1.00 & 0.75 & 0.75 & 0.25 & 0.25 & 0.62 & 0.75 & 1.00\\
Luxembourg & 0.53 & 0.53 & 1.05 & 0.53 & 0.53 & 0.53 & 0.53 & 0.53 & 0.53 & 1.05\\
Gas & 0.43 & 0.07 & 0.93 & 1.39 & 1.00 & 1.54 & 1.57 & 0.50 & 0.86 & 0.93\\
Keystroke & 0.31 & 0.31 & 0.94 & 0.31 & 0.31 & 0.31 & 0.31 & 0.31 & 0.31 & 0.63\\
\addlinespace
ArabicShuffled & 0.11 & 0.11 & 0.91 & 0.11 & 0.11 & 0.11 & 0.11 & 0.23 & 0.11 & 0.23\\
CovType & 0.12 & 0.01 & 0.99 & 0.65 & 0.35 & 0.69 & 0.19 & 0.11 & 0.31 & 0.99\\
GMSC & 0.01 & 0.01 & 0.99 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01\\
Electricity & 0.18 & 0.02 & 0.99 & 0.73 & 0.44 & 0.51 & 0.35 & 0.44 & 0.53 & 0.99\\
\addlinespace
Avg. Rank & 3.50 & 2.13 & 9.24 & 6.24 & 5.21 & 5.84 & 5.16 & 4.34 & 4.76 & 8.58\\
\bottomrule
\end{tabular}%
}
\end{table}
When comparing \texttt{STUDD} with supervised approaches, it is clear that, given our experimental setup, having access to true labels brings an advantage for concept drift detection. All supervised variants, namely \texttt{SS}, \texttt{DSS}, \texttt{WS}, and \texttt{DWS}, show better performance relative to \texttt{STUDD}. Notwithstanding, the proposed approach is worthwhile in terms of costs, which are considerably lower relative to these methods. For example, in the data streams \textit{Insects}, \textit{NOAA}, and \textit{Rialto}, \texttt{STUDD} is able to show comparable performance with considerably lower costs.
\subsubsection{Sensitivity Analysis to Label Access and Delay}\label{sec:sensitivity_labels}
In the previous two sections we showed the applicability of \texttt{STUDD} for concept drift detection, and how it compares with other state of the art approaches. While we approach the concept drift task in a completely unsupervised manner, there may be scenarios in which labels are available, though in a limited manner. We described these scenarios in Section \ref{sec:label_availability}.
We introduced observation delay and availability in some of the methods used in the experiments, namely \texttt{DSS}, \texttt{WS}, and \texttt{DWS}. In this section, we aim at making another comparison between \texttt{STUDD} and these methods. Specifically, our goal is to study the relative performance of these methods for different values of label delay (\textit{l\_delay}) and label access (\textit{l\_access}).
As described in Section \ref{sec:methods}, we define the label availability according to two parameters: \textit{l\_access}\%, which denotes the probability of a label becoming available; and \textit{l\_delay}, which represents the number of observations it takes for a label to become available.
For \textit{l\_access}, we test the following values: \{1, 10, 25, 50\}.
In terms of \textit{l\_delay}, the set of possibilities is \{250, 500, 1000, 1500, 2000, 4000\}.
For example, suppose that \textit{l\_access} is equal to 50 and \textit{l\_delay} is set to 250. This means that a label becomes available with 50\% probability after 250 observations.
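The label-arrival protocol above can be sketched in code. The following Python snippet is an illustrative sketch of our own (not code from the paper): it simulates a stream in which the label of each observation becomes available after l_delay observations with probability l_access; all names are hypothetical.

```python
import random
from collections import deque

def delayed_label_stream(labels, l_access=0.5, l_delay=250, seed=0):
    """Simulate partial, delayed label availability.

    The label of observation t becomes available at time t + l_delay
    with probability l_access; otherwise it is never observed.
    At each time step, yield the list of (index, label) pairs that
    become available at that step.
    """
    rng = random.Random(seed)
    pending = deque()  # (release_time, index, label), ordered by time
    for t, y in enumerate(labels):
        if rng.random() < l_access:
            pending.append((t + l_delay, t, y))
        released = []
        while pending and pending[0][0] <= t:
            _, i, y_i = pending.popleft()
            released.append((i, y_i))
        yield released

# With l_access = 1.0 and l_delay = 2, the label of observation 0
# is released at t = 2, the label of observation 1 at t = 3, etc.
arrivals = list(delayed_label_stream([10, 11, 12, 13, 14],
                                     l_access=1.0, l_delay=2))
```

A supervised drift detector operating under this protocol would update its loss statistics only with the pairs released at each step; labels whose release time falls beyond the end of the stream are simply never seen.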
\begin{figure}[hbt]
\centering
\includegraphics[width=\textwidth]{ObserPercAnalysis.pdf}
\caption{Analysing the results for different values of \textit{l\_access} and \textit{l\_delay}. Each barplot represents the average rank of each approach across the 19 data streams.}
\label{fig:sens_analysis}
\end{figure}
We show the results of this analysis in Figure \ref{fig:sens_analysis}, which presents four barplots, one for each value of l\_access. Each barplot represents the average rank of each method across the 19 data streams. A method has rank 1 in a data set if it presents the best performance score in that task. In each barplot, we include \texttt{SS}, \texttt{STUDD}, and six supervised variants (one for each delay value). Each one of these methods is identified by the delay value. For example, \texttt{S\_W2000} represents a supervised variant with a delay of 2000 observations and the respective \textit{l\_access}. We note that in each barplot, the value of \textit{l\_access} is only valid for the six supervised variants and not for \texttt{SS} or \texttt{STUDD}.
The results show that \texttt{STUDD} performs relatively better as the probability of label availability (\textit{l\_access}) decreases. Regarding the delay time (\textit{l\_delay}), lower values typically lead to better performance in terms of average rank. However, this parameter has a weaker impact than \textit{l\_access}. For example, for an \textit{l\_access} equal to 50, \texttt{STUDD} is the worst approach irrespective of the delay time. In summary, the results indicate that the proposed approach is beneficial when label acquisition is a problem.
\section{Discussion}\label{sec:discussion}
In the previous section we analysed the proposed approach for concept drift detection. \texttt{STUDD} is designed to detect changes in the environment without any access to true labels. In this sense, we refer to this approach as unsupervised.
The results obtained provide empirical evidence of the ability of \texttt{STUDD} to detect concept drift (\textbf{RQ1}). While its predictive performance is comparable to other unsupervised approaches in most of the problems, it is often able to considerably reduce the label requirements (\textbf{RQ2}). This feature is important in domains of application in which the annotation process or false alarms are costly. We also compared \texttt{STUDD} with several variants of supervised approaches to concept drift detection. The results indicated that \texttt{STUDD} provides better performance only if access to labels is low (\textbf{RQ3}).
In terms of future work, a possibly interesting research direction is attempting to combine \texttt{STUDD} with supervised approaches with potentially delayed or limited feedback.
\section{Conclusions}\label{sec:conclusions}
Detecting concept drift is an important task in predictive analytics. Most of the state of the art approaches designed to tackle this problem are based on monitoring the loss of the underlying predictive model.
In this paper, we follow the idea that the assumption that labels are readily available for computing the loss of predictive models is too optimistic \cite{vzliobaite2010change,pinto2019automatic}. Therefore, we focus on solving this problem in an unsupervised manner, i.e., without any access to the true labels.
We propose a method to deal with this task based on a model compression \cite{bucilua2006model} approach. The core of the idea is to replace the loss of the predictive model which is deployed in the data stream (the teacher) with the \textit{mimicking} loss of the student model as the input to traditional concept drift detection methods, such as the Page-Hinkley test \cite{page1954continuous}.
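As an illustration of this core loop, the following Python sketch (our own, not code from the paper) monitors the student's 0/1 disagreement with the teacher using a Page-Hinkley test. The delta and threshold values, and the simulated disagreement probabilities, are arbitrary choices for the demo.

```python
import random

class PageHinkley:
    """Page-Hinkley test: signals an increase in the mean of a stream."""

    def __init__(self, delta=0.005, threshold=5.0):
        self.delta = delta          # tolerated magnitude of change
        self.threshold = threshold  # alarm threshold (lambda)
        self.mean = 0.0             # running mean of the monitored signal
        self.n = 0                  # observations seen so far
        self.cum = 0.0              # cumulative deviation m_T
        self.min_cum = 0.0          # running minimum of m_T

    def update(self, x):
        """Feed one value of the monitored signal; True => change detected."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return (self.cum - self.min_cum) > self.threshold

# Synthetic demo: the student agrees with the teacher until t = 1000,
# then disagrees on 40% of the observations (simulated concept drift).
random.seed(0)
detector = PageHinkley()
alarm_at = None
for t in range(2000):
    p_disagree = 0.0 if t < 1000 else 0.40
    mimicking_loss = 1.0 if random.random() < p_disagree else 0.0
    if detector.update(mimicking_loss):
        alarm_at = t  # expected: fires shortly after t = 1000
        break
```

In the actual workflow, the monitored value would be the student's mimicking loss for each incoming observation, e.g. `int(student.predict(x) != teacher.predict(x))`, and an alarm would trigger retraining of both models on a freshly labelled window of data.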
We carry out empirical experiments using 19 benchmark data streams and several state of the art methods. We show that the proposed method is able to detect concept drift and adapt itself to the environment. The behaviour of the method is conservative with respect to other approaches, which is an advantage in domains where false alarms or label acquisition are costly. We published the code necessary to reproduce the experiments online. The data sets used are also available in public online repositories.
\bibliographystyle{spmpsci}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 5,277 |
{"url":"https:\/\/www.physicsforums.com\/threads\/what-is-secant-modulus.395779\/","text":"Aerospace What is secant modulus?\n\n1. Apr 16, 2010\n\nrmrramani\n\nwhat is secant modulus?\n\n2. Apr 16, 2010\n\ntiny-tim\n\nHi rmrramani!\n\nFor the straight portion of the stress-strain graph, tangent modulus and secant modulus are the same.\n\nWhen it starts curving, tangent modulus is the slope of the tangent, but secant modulus is the slope of the line joining the point to the origin.\n\nIn other words, tangent modulus is the marginal stress\/strain, but secant modulus is the total stress\/strain.\n\nSee http:\/\/www.instron.co.uk\/wa\/resourcecenter\/glossaryterm.aspx?ID=99\" for a fuller explanation, and a good diagram.\n\nLast edited by a moderator: Apr 25, 2017","date":"2018-03-18 04:52:10","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8458360433578491, \"perplexity\": 3000.4679165314124}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-13\/segments\/1521257645513.14\/warc\/CC-MAIN-20180318032649-20180318052649-00001.warc.gz\"}"} | 
null | null |
The six regions of Eritrea are divided into administrative subregions.
Anseba Region
Adi Tekelezan
Asmat
Hamelmalo
Elabered
Geleb
Hagaz
Halhal
Habero
Keren
Kerkebet
Sela
Central (Maekel) Region
Berikh
Ghala Nefhi
North Eastern
North western
Serejaka
South Eastern
South Western
Gash-Barka Region
Akurdet
Barentu
Dghe
Forto
Gogne
Omhajer
Haykota
Logo Anseba
Mensura
Mogolo
Molki
Shambuko
Teseney
Upper Gash
Northern Red Sea Region
Afabet
Adobha
Dahlak
Ghela'elo
Foro
Ghinda
Karura
Massawa
Nakfa
She'eb
Southern (Debub) Region
Mai ani
Tsorona
Emni Haili
Adi Keyh
Adi Quala
Areza
Debarwa
Dekemhare
Mai-Mne
Mendefera
Segeneiti
Senafe
Southern Red Sea Region
Are'eta
Assab
Central Denkalya
Southern Denkalya
References
Districts of Eritrea
Subregions of Eritrea
Subregions
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,091 |
Paul Hermann Fritz Hennig (born 16 July 1874 in Frankfurt (Oder); died 28 June 1930 in Munich) was a co-founder and executive board member of the Gewerkschaftsbund der Angestellten (GDA) and of the Deutsche Angestellten-Krankenkasse (DAK).
Life
Initially working as a merchant at the Berlinische Rückversicherungs Gesellschaft, he came to Munich in the 1890s, where he was also active as a trade unionist in the Bund der Handlungsgehilfen. In 1904 he was appointed deputy treasurer of the Verein Deutscher Kaufleute in Berlin. After the First World War, during which he served in Belgium among other places, he was a member of the German People's Party (DVP) in the constituent Prussian State Assembly from 1919 to 1920. He served on the officers' council of the Masonic St. John's lodge "Humanitas".
In 1920, the GDA was formed with his involvement from earlier commercial sub-unions, and he took on leading functions within it. From about 1923 he headed the GDA regional administration for Bavaria and Württemberg. Its headquarters was the union-owned building at Barer Straße 44 in Munich. A high point of his work was the opening of the GDA convalescent home "Villa Marie-Luise" in Hallthurm near Berchtesgaden.
He was a co-founder and supervisory board member of the DAK and deputy chairman of the board of the sickness and burial fund of the Verein der Deutschen Kaufleute.
Sources
GDA Jahrbuch 1927 für Deutsche Angestellte
GDA-Archiv: Epochen der Angestellten-Bewegung 1774-1930; Berlin 1930
Member of the Prussian Landtag (Free State of Prussia)
DVP member
Trade unionist (Germany)
Freemason (Germany)
Freemason (19th century)
German
Born 1874
Died 1930
Man
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,902 |
Brooklin Heritage Society
Preserving Brooklin's past, recording Brooklin's future
BHS – Committees, Roles
Sesquicentennial of Brooklin (1997)
(Excerpt from the 4 September 1997 sitting of the Legislative Assembly of Ontario, http://hansardindex.ontla.on.ca/hansardeissue/36-1/l224.htm, at about 1:30 pm)
Mr John O'Toole (Durham East): On September 6, 1997, the village of Brooklin, located in my riding of Durham East, will celebrate the 150th anniversary of its naming.
The village, located north of Whitby on Highway 12, was founded in 1840 and was previously named Winchester. When residents of the village went to apply for a post office, they discovered there was already a Winchester post office elsewhere in Ontario. On August 11, 1847, the 300 inhabitants of the village met and agreed to change the name to Brooklin. No one is certain why they chose that name, but perhaps it's because of the little brook that trickles through the town.
Throughout the day on September 6, several events have been scheduled to commemorate the heritage of this village, with horse-drawn carriages, entertainment and self-guided tours. Visitors to Brooklin can see some of the historic buildings, such as the old Brooklin Mill, which today houses a hardware store and small engine repair shop, and a former stable currently being used by the W.J. Medland and Son Ltd business.
Like so many Ontario villages, Brooklin is no exception in its contribution to this wonderful province of Ontario. At one time, Brooklin was known as being the smallest town in the world to have a senior A lacrosse team. In 1968 the Redmen senior A lacrosse team won the esteemed Mann Cup, and again in 1969, and the team went on to win the cup again in 1985, 1987, 1988 and 1990. The Mann Cup: Morley Kells would like to forget about this.
Recognition should also be given to community leaders such as Dr John McKinney and John Dryden.
I would like to ask the members of the Legislature to join me in congratulating the residents of Brooklin on their 150th celebration.
Author Brian WickPosted on 2020-08-16 Categories 1990-1999
It's a Gas!
On June 25, 1995, the Consumers Gas Company in Whitby, along with their authorized dealer, Advantage Air Care in Brooklin, was pleased to inform Maureen and Gord Stevens that they were the winners of the draw for a brand new furnace installation to the value of $3,500.00. The arrival of natural gas to Brooklin was announced on July 20, 1995, at the lighting of the torch ceremony in front of the Luther Vipond arena, where Maureen was given the honour of "throwing" the switch. Mayor Tom Edwards attended the ceremony, along with Consumers Regional General Manager, Paddy Davis. Because Maureen and Gord were one of the first 100 natural gas customers in Brooklin, the happy couple also received a coupon from Uxbridge Nurseries Ltd. for a free 2 ft Spruce Tree. This tree was planted in the back yard of their house on Queen Street and is now a 30 ft beauty! Maureen and Gord were sorry they couldn't take the tree with them when they moved to Kimberly Drive.
Author Jennifer Bailey HudginsPosted on 2020-05-25 2020-08-17 Format ImageCategories 1990-1999
About the Cows
Posted by Charlton
These wonderful cow paintings have become a huge part of the cafe culture. Trevor spotted them outside Pot Of Gold Antiques on Old Wooler Road which is where we'd bought some chairs and little vintage dessert plates. Mary Postar, the proprietor of the shop didn't know much about the cow paintings other than that they were salvaged from a barn somewhere around Oshawa. Friends urged me to take a pass on my original intent to have rotating art exhibitions and buy these big beauties for the cafe instead. So, the day before opening they were delivered, and yes, they were perfect.
They've been very popular and real conversation starters. Within a few days of opening, a local farmer approached me and said he believed they could be from a farm in Brooklin, Ontario that was demolished to make way for a new subdivision and the 407 highway. Since then several customers have recognized the paintings and indeed it's been confirmed that they were originally hung on the barn exterior of Roybrook Farms in Brooklin, owned by renowned Holstein breeder Roy Ormiston. Indeed the first gent to shed some light on their provenance brought me a copy of 'The Chosen Breed' which holds plenty of information on Roy Ormiston and his cows including his legendary 'white cow'.
Another local farmer has told me he thinks he knows who painted these wonderful beasts and I'm hoping he'll return with the artist's name so we can give credit where due! (My dad would like to have brass plaques made and mounted on the 'frames' of each painting, giving names to the cows, to their home and to the artist!)
Now, one final thing – the bull on the right has horns which seems okay, but so does the cow on the left and people are asking if that's 'correct'. So, I'm wondering – can you tell this city girl!? And I'd also welcome any more information on the story of these paintings and their subjects! Use the email link on the left to contact me or go to our facebook page and share your comments!
Author Brian WickPosted on 2019-01-24 2020-08-17 Categories 1990-1999
The History of Brooklin's Post Offices
W. J. Medland and Son Feed Store
The Brooklin Concretes softball team
Grand Opening of Cullen Gardens
BHS on Facebook
This house at 5 Vipond Road is said to be haunted. Anyone know why?
**Funny Phone Call** In the middle of the night, when your phone rings, you expect something terrible has happened. Dad woke up and bolted down the stairs when he heard the phone ringing late one night. I heard him pick up phone, clear his throat and say a hesitant "Hello". All I heard him say after that was, "Yes." He started laughing and hung up the phone. Mom and I were both awake anxious to hear what the call was about. Dad said the man on phone, whose voice he didn't recognize asked if Dad had any dry cows. Always being alert to possibly selling a cow or two, especially a dry one, which is a cow that hasn't calved, he told the man he had dry cows. The man replied, "Well, they're likely thirsty and probably want some water." My father loved a good joke and was a prankster who loved to play tricks on people. This call was a definite "Gotcha". To let you know, Dad did figure out who called so late at night.
**Long Ago Wintertime Saturday Night Entertainment** On Saturday nights in the winter when Dad was a young boy that he would take my Grandpa and Grandma Nelson out with the horse and cutter or sleigh. They would wear warm clothes and have many thick wool blankets over their legs to protect themselves from the bitter cold and blowing snow. He would take them along the road, round the corner of the barnyard and head up the hill through a snowy field for a regular Saturday night visit with his aunt and uncle who lived in the farmhouse on the top of the hill. The entertainment on Saturday nights was to listen to the hockey game broadcasted that night. Did you notice I wrote, listened not watched? This was in the 1930s or 1940s. The NHL had a six team league. The broadcast was over the radio. He told me everyone sat near the radio with one ear facing the radio to hear the voices better. At intermission between the first and second periods, his aunt Alice made a special treat for everyone to eat. She would put popcorn kernels in a wire basket and shake them over the wood stove in the kitchen. The kernels would make fluffy popcorn. She would also warm some caramel and drizzle it over the warm popcorn. It was a treat for him to have every Saturday during the hockey game. And that is why Dad always made popcorn every Saturday night during the first intermission of the hockey game. We would always take a small bowl of popcorn into Grandma Nelson to have while she watched the game on her own television. She would say something like, "I thought I smelled popcorn." The tradition continues. I still love eating popcorn when watching hockey games on television. © Stephen Nelson 2022
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,900 |
Robert Butler (born 19 June 1967) is a British Conservative Party politician who has been Member of Parliament (MP) for Aylesbury since the 2019 general election. He served as Parliamentary Under-Secretary of State for Prisons and Probations between September and October 2022.
Early life
Butler was born in Aylesbury, Buckinghamshire. His earliest years were spent in Bedgrove, until his family moved to Bicester. He attended the University of Sheffield, where he studied French and Economics.
Early career
Butler's professional life began as a TV presenter at the BBC and later Channel 5, where he presented that channel's lunchtime news from its launch in 1997 until the end of 2004. In 2005, he founded a communication and lobbying consultancy, which worked with large and small companies around the world, such as the private healthcare company Bupa.
In 2010, he joined the lobbying firm Pagefield at its launch, as an associate partner and he was still listed as a specialist partner in their senior advisory team at the time of the 2019 election. During that time Pagefield worked with many clients including tobacco giants British American Tobacco, Philip Morris International, arms manufacturer and systems advisors BAE Systems, and the government of Azerbaijan, while its sister company Pagefield Global Counsel provides public relations services to clients such as the kingdom of Saudi Arabia in relation to the Saudi Arabian-led intervention in Yemen.
Prior to his election to Parliament, Butler was also a director of His Majesty's Prison and Probation Service.
Parliamentary career
Along with every Conservative candidate standing at the 2019 general election, Butler signed a personal pledge to support party leader Boris Johnson's Withdrawal Agreement. The MPs' pledges and the "oven-ready" Brexit deal were central tenets of the Conservative election campaign. In 2019, Butler was elected to succeed Sir David Lidington as MP for Aylesbury, a safe seat for the Conservative Party since 1929.
While never opposing it in votes or other action, Butler has criticised the HS2 project in words, calling it an "unwanted and ludicrously expensive railway". Butler has served on the Justice Select Committee since March 2020 and is a member of the Armed Forces Parliamentary Scheme for 2020 to 2022.
On 13 June 2022, Butler was appointed Parliamentary Private Secretary to Liz Truss, the secretary of state for foreign, Commonwealth and development affairs. He resigned this position on 7 July 2022 amid the July 2022 United Kingdom government crisis.
References
External links
1967 births
British broadcaster-politicians
Living people
UK MPs 2019–present
Conservative Party (UK) MPs for English constituencies
People from Aylesbury
People from Bicester
British television presenters
Alumni of the University of Sheffield | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,125 |
At any age, from the teenage years to middle age, acne is debilitating, self-esteem-robbing and troublesome. Acne is the most common of all skin conditions worldwide. There are several types of acne, but acne pustules are among the most severe.
Acne pustules are often found in clusters on the skin and usually affect the face, neck or back.
Acne pustules are round, medium-sized bumps that are characterized by a white or yellow dot in the middle, which is an indication of pus. There may also be a red area around the pustule due to the infection under the skin.
Acne pustules can be painful, difficult to get rid of on your own, and can be hard to treat.
The initial desire of most people who have acne is to squeeze or pop. Just don't do it. Squeezing and popping leave an open wound on your face, which can easily become infected with bacteria, leading to more acne and, eventually, scarring. Squeezing also pushes the infection farther down under the skin, leading to more pustules and more infection. It's easy to see how touching, squeezing and popping can quickly get out of hand.
The most useful DIY treatments contain benzoyl peroxide or salicylic acid, but over-the-counter products rarely have the strength needed to completely clear up the problem.
If your acne pustules are a consistent and serious problem, a visit to Allure Skin & Laser is definitely the best solution. We will formulate a treatment plan along with a combination of products, to not only treat your severe acne, but also help make your skin soft and moisturized.
Don't touch your face during the day, as that transfers germs, bacteria and dirt.
Don't pop a pustule, as you can easily make one acne pustule turn into three.
Do wash your face daily with a doctor-recommended cleanser.
The highly trained team of professionals at Allure Skin & Laser understands how acne affects your life. We are here to help!
Call for a treatment consultation, today: (630) 818-7546. | {
"redpajama_set_name": "RedPajamaC4"
} | 3,944 |
IFC Markets Corp. is an online broker, headquartered in the BVI, offering trading in currency pairs, precious metals and CFDs (Contracts for Difference) on stocks (American, British, Russian, German, Chinese and Japanese), commodity futures, and continuous CFDs on indices and commodities. The company operates through its own analytical trading platform, NetTradeX, as well as MetaTrader 4. In 2013, the company launched its PCI technology, which gives clients the ability to create, analyse and trade an unlimited number of custom trading instruments.
A mobile version of the site was launched recently, since mobile devices have become an inseparable part of daily life.
História
IFC Markets foi fundada em 2006 como uma parte da IFCM Group. A empresa foi licenciada pela Comissão de Serviços Financeiros de Ilhas Virgens Britânicas em junho de 2014 sob o número do Certificado SIBA/L/14/1073. IFC Markets (IFCM) é licenciada a usar a estação comercial de nova geração NetTradeX desde 2006. Atualmente ela atende a mais de 80.000 clientes em 60 países em plena conformidade com os padrões internacionais de serviços de corretagem.
Em 2007 IFCM Group incorpora NetTradeX Corp.- o desenvolvedor de software comercial avançado, registrado em Seychelles sob o número 022316, bem como o licenciado de I-Securities Global Ltd. para a utilização da plataforma comercial MetaTrader 4.
Em 2013, depois de muitos anos de pesquisa e colaboração, o Método GeWorko e Instrumentos Compostos Pessoais são finalmente introduzidos na plataforma comercial NetTradeX.
A partir de Setembro de 2014, a empresa detém o Seguro de indenização profissional com a AIG EUROPE LIMITED sob o Certificado número P/080408/2012/7.
Informação sobre regulamentação
IFC Markets é incorporada nas Ilhas Virgens Britânicas (BVI) sob o número de registro 669838. A empresa é licenciada pela Comissão de Serviços Financeiros de Ilhas Virgens Britânicas (BVI FSC), Certificado No. SIBA/L/14/1073, para realizar operações de investimento fora das seguintes categorias:
Categoria 1: Negociação em Investimentos
Sub-categoria B: Negociação como principal
Categoria 2: Organização de ofertas em investimentos
Modelo de negócio
O modelo de negócios da IFC Markets é baseado em relações transparentes e de confiança com o cliente. A empresa recebe cotações dos principais fornecedores de bancos- fornecedores de liquidez nas praças de mercados ECN e formas de fluxo de cotações para os clientes com melhores preços. No terminal comercial, o cliente observa e negocia com os melhores preços de bid e ask para cada instrumento financeiro. Como resultado, as ordens de clientes são executadas ao preço do provedor de liquidez que oferece atualmente o melhor preços de Compra e Venda.
Plataformas comerciais
IFC Markets oferece duas plataformas comerciais on-line: NetTradeX e MetaTrader 4. NetTradeX opera em PC, iOS, Android e Windows Mobile. MetaTrader 4 está disponível em PC, Mac OS, iOS e Android. Ambos NetTradeX e MetaTrader 4 fornecem programas comerciais (Expert Advisors).
PCI Tecnologia
A empresa trabalhado muito sobre o lançamento da tecnologia PCI (Instrumento Composto Pessoal), integrado no terminal comercial NetTradeX. A tecnologia é baseada no método GeWorko e permite a criação, análise e negociação do número ilimitado de instrumentos.
Concorrentes
Hoje em dia, existem muitos corretores de Forex e CFD no mercado financeiro e alguns deles são os principais concorrentes da IFC Markets.
FXCM
Admiral Markets
FxPro
IronFx
Alpari
Oanda
eToro
XTB
SaxoBank
IG
See also
Personal Composite Instrument
GeWorko Method
Financial services
Copyright © 2017 by Tom Doyle
All rights reserved.
Published in the United States by Ballantine Books, an imprint of Random House, a division of Penguin Random House LLC, New York.
BALLANTINE and the HOUSE colophon are registered trademarks of Penguin Random House LLC.
Photo credits can be found on this page.
LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA
Names: Doyle, Tom
Title: Captain Fantastic: Elton John's stellar trip through the '70s / Tom Doyle.
Description: First edition. | New York: Ballantine Books, [2017] | Includes bibliographical references and index.
Identifiers: LCCN 2016034659 (print) | LCCN 2016035358 (ebook) | ISBN 9781101884188 (hardcover: alk. paper) | ISBN 9781101884201 (ebook)
Subjects: LCSH: John, Elton. | Rock musicians—England—Biography.
Classification: LCC ML410.J64 D69 2017 (print) | LCC ML410.J64 (ebook) | DDC 782.42166092 [B]—dc23
LC record available at https://lccn.loc.gov/2016034659
Ebook ISBN 9781101884201
randomhousebooks.com
_Book design by Simon M. Sullivan, adapted for ebook_
Cover design: David G. Stevenson
Cover photograph: © Anwar Hussein/WireImage
# Contents
Cover
Title Page
Copyright
Prologue
Chapter 1: A Long, Long Time
Chapter 2: Elton John Has Arrived
Chapter 3: Reborn on the West Coast
Chapter 4: A Well-Known Gun
Chapter 5: Hope You Don't Mind
Chapter 6: Hercules
Chapter 7: No Superman Gonna Ruin My Plans
Chapter 8: All Aboard the Starship
Chapter 9: High in the Snowy Mountains
Chapter 10: From the End of the World to Your Town
Chapter 11: Drift of the Fantastic Voyage
Chapter 12: Burning Out His Fuse
Chapter 13: An Audience with the King
Chapter 14: Floating into Space
Chapter 15: Super Czar
Chapter 16: Quack!
Epilogue
Dedication
Acknowledgments
Discography
Bibliography
Photo Credits
By Tom Doyle
About the Author
# PROLOGUE
IF YOU WERE TO TAKE a peek at random entries in his diary, 1969 seemed to have been a fairly humdrum year for Reg Dwight. A moon-faced twenty-one-year-old, going on twenty-two, from the northwest London suburb of Pinner, he cut a vaguely Beatleish figure in his bowl haircut and Lennon-inspired "penny round" glasses. His pleasures were simple: getting up early on Sunday mornings to play soccer; reading his collections of magazines on World War II and the Old Master painters; going to the cinema and returning home to jot down his thoughts on the films he'd seen.
> January 17, Sunday: Saw:—"Lady In Cement"—good, "Secret Life of An American Wife"—lousy.
Music, however, was his obsession. Reg's poky bedroom in the small flat at 30A Frome Court, Pinner Road, where he lived with his mother, Sheila, and stepfather, Fred, was a virtual record library—approximately fourteen hundred singles, three hundred LPs, and one hundred EPs, which he lovingly played and cleaned and cataloged. There was an expression for people like him, who were utterly entranced by the sounds magically emanating from revolving black plastic discs: "It's called vinyl in the blood," he'd proudly state. His mum laughed and called him an old hoarder. Reg wouldn't ever let anyone borrow any of his records, for fear of them being scratched. He would get annoyed if anyone else even touched them.
To feed his obsession, he helped out, unpaid, at Musicland, a large record shop on the corner of Berwick and Noel streets in Soho, the seedy and thrillingly boho heart of London. Reg would drop in, have a cup of tea, then get behind the counter. Often he'd wait there until late in the evening for the deliveries to come in, especially the American imports. In October '68, he'd hung around for hours to get his hands on a copy of Jimi Hendrix's head-spinning double LP _Electric Ladyland_. In April of that year, he stayed on until nine o'clock at night in eager anticipation of the arrival of Leonard Cohen's stark second album, _Songs from a Room_.
There was a fetishistic element to Reg's record collecting. He much preferred the thicker American cardboard covers to the flimsier British ones. He'd been delighted to discover that Laura Nyro's wildly eclectic, piano-driven _Eli and the Thirteenth Confession_ came with a lyric sheet perfumed with scented ink. He'd admit to anyone that when he was a solitary child, records had been his friends, and so they remained.
Sometimes, the outside world wasn't so friendly. In a violent disruption of his unremarkable everyday existence, one night, returning to Pinner from Soho, diffident, bespectacled Reg had been beaten up.
> April 12, Saturday: Went into Musicland. Got "duffed up" on the way home. Went straight to bed.
Ten days later, perhaps not coincidentally, he became the proud owner of a boxy vehicle, which would ferry him around in comparative safety.
> April 22, Tuesday: Got home tonight to find that Auntie Win and Mum had bought me a car—Hillman Husky Estate—Superb!!
In reality, Reg's life wasn't as ordinary as it seemed. The year before, he'd begun living a strange, parallel existence as Elton John: his stage name—and increasingly exaggerated persona. Stepping out of the shadows of Bluesology, the band in which for three years he'd been an enthusiastic, if often teeth-grindingly frustrated, member, hammering away in the corner on his Vox Continental organ, he harbored a strong desire—highly unlikely you'd think if you were to take just one look at him—to become a pop star. Sometimes this shy and funny individual would suddenly erupt with excitement at the very thought.
"I'm going to be a star, do you hear me?" he'd loudly declare, with comic drama, as if assuming the role of a brattish character in a hackneyed rags-to-riches Hollywood biopic. "A _star_!"
But stardom was proving agonizingly tough to achieve. He had started off 1969 on a promising high with the release in January of his second single, "Lady Samantha," a ghostly heavy rock ballad centered on a sad, spectral female protagonist wafting supernaturally across the hillsides and spooking the locals, as dreamed up by his lyric-writing partner, eighteen-year-old Bernie Taupin. It had been what was termed a "turntable hit," pulling in an encouraging number of radio plays, before entirely failing to rouse the interest of the record-buying public.
There was an almost schizophrenic nature to the songs that Bernie and Elton had been writing for the past thirteen months as payroll employees of Dick James Music—the former on £10 a week, the latter £15, since he was required to sing and play on their demonstration recordings. Half of these songs were in tune with the post-psychedelic, head-in-the-clouds mood of the times, bearing typical "Wow, man" titles such as "A Dandelion Dies in the Wind" and "Tartan-Colored Lady." The other half were far straighter and more commercial-minded, written with a view to being sold off to chart singers looking for material. These, which the pair invariably viewed as "stinkers," were usually intensely lovelorn: "You'll Be Sorry to See Me Go," "There's Still Time for Me," "When the First Tear Shows."
One song from this latter crop, "I Can't Go on Living Without You," a frothy and upbeat pop soul track in the style of Dusty Springfield, was selected in February as a potential UK entry for the Eurovision Song Contest, that annual pancontinental clash of oompah tunes and overwrought ballads. "I Can't Go on Living Without You," in competition with five other songs from British writers, was performed by the diminutive Scottish singing sensation Lulu on BBC1 that month and witnessed by a televisual audience of close to twenty million.
It came in sixth out of six in the ensuing postcard ballot. Even more depressing, the winning entry, the gibberish lovey-dovey anthem "Boom Bang-a-Bang," went on to scoop the Eurovision prize for Britain. For Elton and Bernie, it was a sign that they were paddling in too-shallow waters. They resolved never again to write such pop pap.
Instead, they threw themselves into the making of what was to become the first Elton John album, _Empty Sky_. Recorded in nocturnal downtime sessions at the DJM studio in New Oxford Street, it was a happy time for all involved—plotting and scheming their bright shiny futures in breaks at the Wimpy restaurant along the road, eating dinner after hours at the Golden Egg café, walking through the near-deserted streets at four in the morning to doss down at the flat that producer Steve Brown's father was given as a perk of his job working for the Salvation Army. Reg always slept on the sofa, drifting off with a head filled with amazed wonder at how the tracks were shaping up.
Hearing the playback of the propulsive, faintly trippy title song of _Empty Sky,_ eight and a half minutes long, took him aback. To his young ears, it was the greatest thing he'd ever done or possibly even ever heard. But, in truth, from the outside perspective, Reg still seemed to be fumbling to work out exactly who he wanted Elton to be. The songs bore obvious reverberations of the sounds of his vinyl collection back at Frome Court—the loose swing of the Rolling Stones, the wandering flute reveries of Traffic, the rootsy modernist Americana of the Band.
One song, however, the hymnlike "Skyline Pigeon," gently floating along on ornate harpsichord phrasings and churchy organ, stood out from the rest. Its lyric was inspired by the views from a twelfth-century clock tower in Lincolnshire that as a child Bernie would climb to watch the sun slipping below the horizon. It was written from the imagined perspective of a captive bird longing to be turned loose and to soar high and far away. As a pining ballad from a couple of jobbing songwriters who spent half their time chained down penning on-spec middle-of-the-road compositions, it was hard not to read it as a yearning for creative freedom.
Falling into neither of the fashionable musical categories of the time—woozy psychedelia and weighty rock—"Skyline Pigeon" was something else entirely. Perhaps it was a pre-echo of a time to come, a pared-down singer-songwriter approach that just might belong to the decade to follow.
Reg becoming Elton. The first photo session: Hampstead Heath, London, January 22, 1968.
Even if creatively reborn as Elton, in physical reality, Reg was still awkwardly coming to grips with his pop star guise. Shortish at five feet eight inches and prone to dumpiness, he struggled with his weight. His glasses constantly slipped down his snub nose. His stubby fingers looked nothing like those that might belong to a piano player.
The January before, he'd been photographed for the first time as Elton, looking like a bank clerk who'd experienced a drunken epiphany of sixties liberation before sweeping down Carnaby Street to randomly buy clothes for a "far out" makeover. A leopard print fedora. A flamboyant fur coat with a ludicrously hairy collar that looked as though it might have things living in it. He'd also taken to wearing shirts cut from nursery curtain material featuring popular children's book characters.
Yet remarkably, as 1969 progressed, his celebrity, in sometimes bizarre ways, appeared to be on the rise, as he incredulously noted in his diary:
> April 30, Wednesday: Offer to open a carwash in Cricklewood—what!! Stayed in tonight. My glasses broke.
To promote "Lady Samantha," he'd been interviewed by the UK teen magazine _Jackie_. "I always wanted to be famous," he confessed, even though he wasn't really.
"He says he can be unbearably moody and self-pitying," revealed the uncredited _Jackie_ writer, "but usually manages to laugh himself out of it."
Those around him knew this to be true. His black moods would suddenly descend, but they would blow away just as quickly. It wasn't hard for him to cheer himself up. Simple pleasures once again.
At the yearly Pinner Fair in the last week of May 1969, Reg amused himself at the various "roll-up roll-up" stalls. At one, where the challenge was to throw Ping-Pong balls into jam jars, ever competitive, he quickly succeeded twice, his prize being two unlucky piscine creatures cruelly forced to swim around in small transparent plastic bags of water.
> May 28, Wednesday: Went to the fair with Mick and Pat: I won a coconut and two goldfish!!—John and Yoko!!
The next day, John and Yoko died. _"Very upset!!"_ scribbled Reg.
A week later, _Empty Sky_ was released. It struggled to sell four thousand copies before, for now, disappearing completely.
—
EXACTLY FIVE AND a half years after that day at Pinner Fair—November 28, 1974—a limousine was gliding through the streets of Manhattan. Inside, heading for Madison Square Garden, were the now legally named Elton Hercules John and John Lennon. The latter was almost paralyzed by fear at the prospect of his first appearance onstage in more than two years. The mood in the car was heavy and silent, as if its passengers were traveling to a funeral rather than a sold-out gig.
John and Elton had first met the previous year, in October 1973, during what was to become known as Lennon's "Lost Weekend" in Los Angeles. Separated from Yoko Ono, he'd enjoyed a second adolescence—or perhaps suffered a premature midlife crisis—in the romantic company of the Lennons' personal assistant, May Pang, while getting out of his mind with the dangerously spirited likes of Keith Moon and Harry Nilsson.
Elton had been a brief visitor to the deranged sessions for Lennon's _Rock'n'Roll_ album, where the band members were worryingly loaded on vodka and cocaine while producer Phil Spector, his temples throbbing from heavy amyl nitrate use, had grown frighteningly out of control. Elton had quickly left, realizing the scene was a "hairy" one. The recording of the album would later collapse after Spector terrified everyone by firing a bullet into the studio ceiling.
In these past few months of 1974, in New York, John and Elton had become collaborators and drug buddies: recording the former's latest single, "Whatever Gets You thru the Night," at the Record Plant on West Forty-Fourth Street and spending hours holed up in a suite at the Sherry-Netherland hotel on Fifth Avenue snorting cocaine, laughing their asses off, trading surreal wisecracks, and indulging in bitchy gossip about anyone and everyone.
At the same time, the pair were suffering the notorious paranoia-inducing side effects of the stimulant. One evening, there was a knock at the door. In his wired state, it felt to Elton as if it took him ten minutes to pad his way over to it and put his eye to the peephole.
"It's Andy Warhol!" he whispered to John in disbelief.
Lennon began waving his arms wildly. "Fuck off!" he exclaimed, though under his breath. "Don't let him in!"
Their paranoia was later ramped skyward when a group of cops arrived at the hotel suite and warned Elton about an anonymous call they'd received from a crazy person saying that he had a gun, he was in the hotel, and he was hunting for him. They warned the star to be super-vigilant. This utterly shook both John and Elton, but Lennon in particular, since he was preparing in less than two weeks to make such a high-profile public appearance at the Madison Square Garden show.
The reason Lennon was putting himself through such agony was because of a wager. At the session for the maddeningly catchy and dance-floor-ready "Whatever Gets You thru the Night," where Elton had contributed harmony vocals and thumping, funky piano to complement the ex-Beatle's lead vocal and guitar, he'd bet John that the record would be Lennon's first solo American number one. To Lennon, whose last big single had been "Imagine," in '71, which itself had reached only number three, it seemed like a remote possibility. Elton had made John promise that if he was proved right, John would appear with him onstage to perform the song. When the record reached the U.S. top spot on November 16, Lennon was forced to honor the deal.
To prepare himself, eight days ahead of the gig, Lennon and May Pang flew up the coast to check out Elton's show at Boston Garden, where to his ears the fevered cries of the fans sounded like an echo of Beatlemania. Having grown ever more cartoonish and outrageous in his stagewear, Elton had appeared onstage in a jumpsuit covered in multihued feathers that made him look like a chicken repeatedly plunged into pots of rainbow-colored paints. Topping it for his encore, he'd come out wearing a bikini adorned with an enormous heart, which, he fancied, made him look like an outsized chocolate box.
"Ah, so this is what it's fucking like nowadays," John said afterward, grinning.
Come the night of the Madison Square Garden show on Thanksgiving Day, sitting tensely in the limo, Lennon was forced to face the fact that his stage fright had grown acute in recent years, owing to his lack of regular live performance. He'd been vomiting most of the previous night and unable to sleep, following a failed attempt to numb his nervousness with cocaine and champagne.
As Elton and the band took to the stage, out front no one in the audience had a clue that Lennon was about to appear. No one, that was, except for Yoko Ono, sitting in a seat she'd requested where she couldn't be seen from the stage. Ono had sent good luck cards and a single white gardenia each to John and Elton backstage. Lennon affixed his to the lapel of his black suit jacket, before he was forced to once again throw his guts up through sheer terror. He didn't know that his estranged wife was in the crowd. He was relieved to think that she wasn't coming to the show.
"Otherwise I know I'd never be able to go out there," he admitted.
Eleven songs in, Elton, sitting at his glittering grand piano and daringly resplendent in tight white trousers striped in lime gold and mauve, affixed with suspenders pulled up over his surprisingly toned and hirsute chest, turned to the crowd. "Seeing as it's Thanksgiving," he began, his voice audibly shaky, "we thought we'd make tonight a little bit of a joyous occasion by inviting someone up with us onto the stage. And, uh, I'm sure he will be no stranger to anybody in the audience when I say, it's our great privilege and _your_ great privilege to see and hear...Mr. John Lennon!"
In the wings, Lennon clung to Bernie Taupin, pleading with him, "You gotta come on with me." Then, addressing the roadies, friends, and hangers-on standing around him, John suddenly seemed resigned to his fate, as if he was a petrified World War I foot soldier readying himself to clamber up a trench ladder to face a storm of bullets. "Oh well," he cried, "here we go, over the hill."
The roaring response from the crowd as Lennon stepped onto the stage was instantly deafening. The walls of the Garden seemed to shudder with the volume of the squealing, cheering, hooting, and stomping. The PA system hanging from the ceiling swung hazardously. There followed an ovation, which took almost ten minutes to die down before the musicians could launch into the rattling introduction of "Whatever Gets You thru the Night."
Next, Elton introduced "one of the best songs ever written," "Lucy in the Sky with Diamonds," a faithful rendition of which, save for the addition of a reggae skank to its third chorus, was his current single. During the number, Lennon's tension showed and his voice sounded raw and constricted when Elton eased back from the microphone to allow him to take the spotlight in the later choruses. At the song's conclusion, John spoke to the audience for the first time.
"Ah...I'd like to thank Elton and the boys for having me on tonight," he said. "We've been trying to think of a number to finish with so as I could get out of here and be sick...and we thought we'd do a number of an old estranged fiancé of mine called Paul. This is one I never sang. It's an old Beatle number and we _just_ about know it."
John and Elton and the band proceeded to thunder their way through "I Saw Her Standing There," track one on the first Beatles album, _Please Please Me,_ released eleven years before. Back playing his beloved rock'n'roll, Lennon's performing spirit was reawakened. As he departed the stage, no one could know that it was for the very last time.
Elton and John Lennon (in his final live performance), Madison Square Garden, New York, Thanksgiving Day 1974.
From her vantage point in the crowd, Yoko had felt that her husband looked lonely up there. Too many bows, taken too quickly. Afterward Lennon felt guilty that he hadn't been as moved by the experience as those around him were. But as soon as Yoko stepped into the backstage area, as John later admitted, he felt his strong emotions for her stir once again. Two months later, the couple got back together, and Yoko immediately became pregnant. Elton was later asked to be godfather to their son, Sean.
Reg Dwight's goldfish were long dead and gone. But—unimaginable five and a half years previously—Elton John, the coked-up, bikini-wearing superstar, had played a pivotal role in reuniting the actual John and Yoko.
—
"HOW WEIRD IS that?" Elton marvels, his eyes widening behind sea-blue lenses as he casts his mind back down the decades. "I mean, who would've known?"
It was one of many highly surreal moments that typified his 1970s. "John was a force of nature," he states. "A _fucking great force of nature._ "
Four decades later Elton is sitting in the narrow dining room at the back of his townhouse in Holland Park, west London. On the walls surrounding him are a gallery's worth of modern art originals, including a framed white rosette by Tracey Emin bearing the provocative legend "Action Cunt" and a thirteen-foot photographic print by Sam Taylor-Wood titled "Wrecked," a re-creation of Leonardo da Vinci's _The Last Supper_ with the central figure of Christ replaced by a bare-breasted woman.
Enthusiasm pours out of him as he remembers the giddy highs of the first decade of his orbiting fame. He talks assuredly and fast fast fast, his words crashing into one another in their rush to escape his mouth. His voice is deep and rounded, bordering on basso profundo. He is a prolific and enthusiastic swearer: No one says "fuck" with quite as much gusto. He is in decent shape, given the excesses of his past, the only apparent hangover being an unsettling gasp for air when he's at his most effusive.
Having spent the 1960s as an unfulfilled earthbound observer, in the 1970s, Elton achieved vertical takeoff. Once his delayed launch had taken place, from 1970 to '76, he was unstoppable, scoring seven consecutive number one albums in America, along with fourteen Top Ten singles.
It was a dizzying and unpredictable trip, which found him hanging out with old Hollywood royalty, traveling on his private jet the _Starship,_ and becoming ever more extroverted as he increasingly relied upon cocaine to fuel his fantastic voyage. His 1970s story includes encounters and friendships with a supporting cast of characters such as Elvis Presley, Bob Dylan, Stevie Wonder, Mae West, Groucho Marx, Katharine Hepburn, and Princess Margaret.
"Fucking hell," he astonishedly notes at one point, "you could write books and books and books about it."
The series of interviews I conducted with Elton that are featured in _Captain Fantastic_ were originally commissioned by Britain's biggest-selling rock magazine, _Mojo,_ to take a retrospective look at the heady era that was his 1970s. Most of these conversations, rich in details and anecdotes, were never published, owing to page constraints. When I realized that I had to exclude so much great material from my articles—and that even Elton's story of meeting Elvis in 1976 (when both were suffering declines in differing ways) would have to go untold for the time being—the idea for this book was born.
To me, there were major aspects of Elton's life and career in the 1970s that had been subsequently eclipsed by his mainstream celebrity status. Not least that it has been generally forgotten that Elton was both as cool and as musically influential in that decade as David Bowie, Pink Floyd, Led Zeppelin, or the Rolling Stones—none of whom sold as many albums as he did during the seventies.
Much of his musical genre hopping during that decade can be credited to his wide-ranging and eclectic listening tastes. The mind of the music obsessive he remains to this day still boggles when he talks about the 1960s and '70s cultural big bang that he witnessed and then became a major part of.
"From the early sixties to the mid-seventies," he enthuses, "it was an explosion of incredible creativity that I don't think there's been in music, in art, in theater, in cinema, whatever. In those fifteen years, music just changed completely. There were ten or twelve albums a week that you could buy. You were listening to everything. You were listening to Ravi Shankar, you were listening to _Hot Rats_ by Frank Zappa, you were listening to _Astral Weeks_ by Van Morrison, you were listening to the Band, to Leon Russell, to the Grateful Dead, to Jefferson Airplane. It was fucking _astonishing_. It was like having a fucking injection of great drugs every two or three minutes with the music that was going on."
Having spent those nascent years lost in music in his bedroom at Frome Court, when Reg became Elton and a star, in the United States he went on a determined pilgrimage to the various recording studio and concert hall locations of the music that had fired his imagination.
"One of the first things I did when I went to America was to go to Memphis and kiss the eight-track machine at Stax Records," he remembers. "I did my visit to Motown. I did my visit to the Apollo. I went to the shrines of where music had come from. Music for me is just as exciting, but there'll never be an exciting time like that again. 'Cause I was young and it was all happening. I was lucky to be born at that time."
—
WHEN YOU TALK to Elton, Reg is never far away.
Elton's success in the 1970s came at a time when many musicians were changing their birth names to something more starlike. But for some, including him, it proved far harder to change their personality.
"I didn't get any confidence until I started performing onstage and let that buried half of me come out," he confesses. "The timid boy that I was, I continued to be offstage. I was comfortable on the stage but not very comfortable off of it. Although I was having a ball, you're still stuck with the insecure, nervous person inside. Being successful doesn't cure it. In fact, it makes things a little worse because then the difference between your stage persona and your actual, normal persona is so far removed."
If he can rattle on and on with enormous enthusiasm about music, one to one, Elton still seems to find general small talk harder. This lingering shyness appears to be evidence that inside, he is still very much Reginald Kenneth Dwight, who never stopped being a reticent, slightly wary individual, despite having utterly transformed himself. "I thought I was getting rid of that shy boy," he states. "But you know what? You're still stuck with that shy boy. You're still stuck with the same shit inside of you."
This extraordinary metamorphosis, and the conflict between the dual sides of his personality, Reg versus Elton, were to characterize the star's entire early career. As the true successor to the Beatles in terms of sheer popular phenomenon, he experienced a thrilling but turbulent flight. _Captain Fantastic_ traces the skyscraping accomplishments and plummeting lows of Elton's 1970s, from the height of his stacked heels to the depths of his depressions. It is the tale of the bashful kid who turned into a superhero.
"Fuck! My life," he says, revealing that familiar grin, "has been incredible."
Reg (second from left) trying to look cool with the struggling Bluesology, London, July 22, 1965.
TO FLY ALL THIS WAY to California, across the Atlantic from England in a jumbo jet, to the land of freedom, adventure, and rock'n'roll, only to end up on a red bloody London bus. He'd wanted to roar off in a Cadillac or something, instead of trundling toward Hollywood, taking "fucking forever" to get there. He was totally embarrassed, totally pissed off.
He hadn't even wanted to come. It was his music publisher and record label boss, Dick James, who persuaded him to make the trip. Dick was fifty and an old-school music biz figure, with his bald head and business suits and thick-rimmed glasses. He'd made his fortune publishing the songs of the Beatles, so he knew a thing or two.
The problem was that Elton's second album, recorded at great expense and titled simply _Elton John,_ had been released in April in Britain and hadn't fared much better than his first, _Empty Sky,_ which had tanked. As the pages of the 1970 calendar began to blow away, he was running out of options.
He'd been sorely tempted to take up an offer from Jeff Beck, the swaggering star guitarist who had been kicked out of the Yardbirds in '66 for his repeated no-shows and fizzing tantrums. Since then, the drably named Jeff Beck Group had made two albums that sold well in America, before falling apart in a cloud of petty arguments and ego huffs.
One night in July at the Speakeasy club in London, the rock star hangout just north of Oxford Street, Beck had caught a show by Elton's new three-piece band. Impressed by what he saw, Beck came up with a proposal: Back me and we'll tour the States. Then Elton heard the terms of the deal—for every booking, Beck would take 90 percent of the $10,000 fee. Elton and the band would share just 10 percent.
Nevertheless, he thought, _Wow, a thousand dollars a night. Still sounds like a lot of money._
Dick James talked him out of it.
"You'll be a bigger star in America than Jeff Beck in a year's time," he insisted.
"I thought, _Oh, Dick, you're so stupid,_ " Elton remembers.
Now it was August and he'd touched down in Los Angeles, thrilled to his fingertips to actually be in America. Exiting the airport, he'd been greeted by the sight of the double-decker parked outside. His face fell and it quickly had to be explained to him by Norman Winter, his new U.S. publicist, that the bus was a surprise stunt he'd planned for him. It would carry Elton and the band into L.A. and fanfare his arrival in style.
A screaming message in huge white letters on a black banner ran almost the entire length of it: ELTON JOHN HAS ARRIVED.
Dutifully, Elton had stepped onto the rear platform of the open-backed London bus. It really didn't feel as though he'd "arrived." But if he had, it had taken him a long, long time.
—
THE VERY FIRST thing Reg could remember was sitting at the piano.
In his blurry memory, his gran Ivy lifts him onto her knee and he immediately starts banging the keys. In the days that follow, no one can keep him away from the seemingly captivating instrument. One day his infantile pounding somehow gives way to his working out the chords and elegant top line of "The Skater's Waltz."
His mum, Sheila, is flabbergasted. Reg is only three but it quickly becomes apparent that he has quite a gift.
"I just sat down," he remembers, "and I could pick out a tune very easily."
By the time he was four, in 1951, Reg had a mass of bubbly hair, which made him look cherubic and a bit like Shirley Temple. A typical preschooler, he was prone to tantrums, though the piano seemed to be a reliable source of calm for him. If he was kicking and screaming, his father placing him on its stool always cooled his hysteria.
There was music all around the house: from the wireless, from the cabinet-sized, varnished-wood-encased radiogram (part radio, part gramophone) on which Sheila and his dad, Stanley, would play their records. Stanley, who'd performed as a trumpeter in a dancehall band called the Millermen, favored the cool piano jazz of George Shearing, the dreamy percussive melodies of Charlie Kunz. Sheila preferred the pop sounds of the early fifties: Johnnie Ray, Rosemary Clooney, Frankie Laine. Reg's favorite musician quickly became Winifred Atwell, the Caribbean pianist who could play anything from classical to honky-tonk. He was fascinated by how Atwell wasn't the least bit snooty when it came to her musical tastes. He'd excitedly listen to her finishing a piece on a concert grand and saying, "And now I'm going across to my other piano," which turned out to be a battered old upright bought from a junk shop.
By age six, Reg had developed quite a repertoire and was fast becoming the center of the entertainment. If Stanley and Sheila had friends coming over in the evening, they would put him to bed during the day for an afternoon nap so that he could stay up later and play piano for them all. He'd transpose the proto rock'n'roll of the Super-Sonics' "New Guitar Boogie Shuffle" and its B side, "The Sheik of Araby," into jaunty piano tunes. Then he'd slow the tempo into "Butterflies" by Patti Page, or "Wish You Were Here" by Eddie Fisher, before maybe picking the pace back up with Jo Stafford's "Diamonds Are a Girl's Best Friend."
But apart from all of this fun and showing off, there was unease at home. Reg never felt real love from his father.
Stanley had been a flight lieutenant in the Royal Air Force in the latter years of World War II, promoted to squadron leader in 1953. As a result, he was away from home for extended periods of time. Reg would dread his returning for the weekends. His dad seemed snobbish and stiff. He'd tell the boy off for kicking a ball around the garden, fearing he'd damage the plants. He wouldn't allow him to eat celery because the crunching irritated him. Reg grew up feeling suppressed by him and ultimately afraid of him.
Stanley and Sheila had met through the RAF, where she was working as an office clerk. But even when Reg was still very young, their marital bond was already beginning to fray. Worse, their disputes seemed to be having a destabilizing effect on him. His mother described him as a bag of nerves.
"A bag of nerves?" he echoes. "Yeah, I was on tenterhooks as a child. Maybe because when your parents don't get on, you're always worried there's gonna be a row. So it drove me towards music even more. Sitting in your room listening to the radio. Looking at your records, _studying_ them. Looking at the little numbers, writing them down, who wrote the B side. It was like having a university course in music. I was fascinated by watching records go round the turntable. I remember twelve-inch seventy-eights which classical music used to be on. You could break records in those days. It was a _tragedy_ when you broke a record."
Looking to develop her son's natural talent, Sheila found a private piano tutor for Reg when he was seven. Mrs. Jones would teach him classical pieces and encourage him to practice for three hours each day. For the most part, he hated the passages he was forced to learn. He loathed the sad and dissonant night music of Bartók, though he found himself falling for the prettier melodies of Bach and Chopin.
That year, he gave his first public performance, of sorts, at the wedding of his soccer-player cousin Roy Dwight. The band booked to play at the reception turned up late, and to fill in, Reg sat at the piano and entertained the guests until the adult musicians arrived. There was a British law at the time forbidding anyone to play for a paying audience until the age of thirteen. It was a pity, really, thought Sheila, because otherwise Reg would quite likely be hailed as a child prodigy.
All the while, his father remained a remote and forbidding figure. Still, it seems that in some ways, Stanley tried to connect with his music-obsessed only son. For Reg's ninth birthday, his dad bought him a copy of Frank Sinatra's _Songs for Swingin' Lovers!_ The boy wasn't exactly won over by this gift, however, indicating that the difficulties in the father-son relationship now cut both ways. Later, the tough-to-please Reg would moan that what he'd really wanted was a bike.
Pop music had dramatically moved on by the mid-fifties, and replacing the jazz standards and crooning balladeers was the souped-up sound of rock'n'roll. Shifting with the trends, in 1956, a year after it had been number one in Britain, Sheila brought home "Shake, Rattle and Roll" by Bill Haley and His Comets. As much as Reg loved it, he preferred its flip side, "A.B.C. Boogie," and another disc his mother had bought, the stark and eerily mournful "Heartbreak Hotel" by Elvis Presley. The same week he first heard these records, he was at a barbershop, waiting to have his hair cut and flicking through a copy of _Life_ magazine, when he saw a photograph of the quiffed and impossibly cool Presley.
Now aged ten and beginning to buy records for himself, Reg was more compelled by the edgier, electrifying 45s of Little Richard and Jerry Lee Lewis, both wild-eyed and savage piano players with buzz-shocked hair. His mother found these sounds far too raucous and headache-inducing, and so Reg was forced to play the discs in his bedroom, miming in the mirror, a suburban preteen whose overfed reflection mock-singing back at him looked nothing like his skinny and snake-hipped idols.
Alone in his room, he'd stare at his records for hours. Most of the label designs were dark and dull, so he was drawn to the more colorful ones: the orange background and golden stars of Polydor, the yellows and blacks and bewildered-looking lion of MGM, the RCA Victor logo of a dog staring curiously into the horn of a gramophone as he whirled around the turntable.
But Sheila was worried that Reg seemed a self-absorbed and lonely child. Later, he'd admit that he'd longed for brothers and sisters and claim that his father was against the idea. To Sheila, the young Reg was a "terribly sad person. I used to sit there crying my eyes out when he was a child."
—
IT WAS MUSIC that rescued him. Turning eleven, Reg moved up to Pinner County Grammar School, a bustling high school facility housed in an Art Deco–style building where he had access to a proper Steinway piano and was invited by his music teacher to audition for a part-time scholarship at the Royal Academy of Music, the esteemed London conservatory founded more than a century before. Almost effortlessly sailing through the test, he was assigned to classes at the Academy in Marylebone, near Regent's Park, every Saturday morning.
There his music teacher, Helen Piena, was astonished when she performed a four-page Handel sonata for Reg and he played it back to her, note for note, having instantly memorized the piece. When it came to music, his mind was like a tape recorder. Yet soon it became clear that he really had no interest in sight-reading pages of musical notation. His talent was for playing by ear. He would learn the passages he was given to play, mentally dissecting their structures, before disobediently improvising his own embellishments to the melodies, already a songwriter in his soul.
At home, he played the piano for hours, until forced to stop by his parents or when the complaints from the neighbors became impossible to ignore. But as he grew older, a rebellious streak began to surface. Later he would boast that he'd regularly skip the classes at the Royal Academy and sit on a Circle Line subway train, looping around and around London, before coming back on the Metropolitan Line to the Northwood Hills station closest to home. In truth, if he had been a chronic truant, he'd have been thrown out of the Academy. Instead, lacking the discipline to learn the musical theory required to become a concert pianist, he remained just above average among the other precocious young musicians.
Rock'n'roll had dizzied his schoolboy head, and even as a tubby kid in short trousers and too-tight blazer, he would wow his fellow students at Pinner County Grammar by pumping out "Great Balls of Fire" on the Steinway. Having seen Buddy Holly perform in concert, he began wearing glasses in imitation of him in an effort to look hep. After eighteen months of wearing them constantly, he realized he couldn't see without them.
Stanley Dwight, meanwhile, really wasn't happy about the musical direction in which his son was heading. Reg was baffled by his vehement response—after all, Stanley had been a part-time musician himself; he should understand. Sent to an RAF post two hundred miles north of Pinner, where notably Reg and Sheila didn't follow him, he would pen stern and angry missives back to his wife warning her to tame the apparently gone-feral youth. "Reggie must give up this idea of becoming a pop musician," he wrote in one letter. "He's turning into a wild boy."
During Stanley's protracted absence, Sheila fell for an amiable and laid-back local painter and decorator, Fred Farebrother, whose name Reg, with his absurdist sense of humor (inspired by BBC Radio's surrealist comedy troupe the Goons), reversed, calling him Derf.
In the otherwise terrible winter of 1962 in England, remembered for years to come as the Big Freeze, Stanley and Sheila divorced, and at fifteen, Reg was free to be whoever he wanted to be.
—
IF IN 1963 you found yourself in Northwood, on the northwest edge of London, on a weekend evening and thirsty for some alcoholic refreshment, you may well have wandered into the large, detached Northwood Hills pub opposite the tube station. Inside, if you were brave enough to venture through the slightly more genteel saloon to the public bar, the domain of the more committed drinker, you would have discovered that it was packed. More so, a local could have told you, than it had been for a long time.
The reason would have become quickly apparent. At an upright piano positioned near the window, sporting a ginger-toned Harris Tweed sports jacket, his hair short and neat, you would see "Reggie," the Northwood Hills's resident piano player. He would be knocking out pretty standard pub song fare: rowdy wartime sing-along "Roll out the Barrel," music hall throwbacks like "My Old Man (Said Follow the Van)," maybe "When Irish Eyes Are Smiling" for the more boozily nostalgic or maudlin.
Then suddenly, amid the seething crowd, a pint accidentally spilled would lead to an angry word, which would lead to a punch thrown. "When there were fights, there were _fights,_ " Elton recalls. "So when I was singing, if there was a fight that did break out, I was out the window. Even though I was shit-scared, I knew I could jump out the window, wait for it all to calm down, and then get back as soon as possible. 'Cause music helps to sort these situations. At sixteen years of age and being quite insecure, it gave me an inner steel."
It also gave him an opportunity to extend his range. Often, between the crowd-pleasing tunes, he'd slip in something current, like Bruce Channel's recent number one, the howling "Hey Baby," or turn in something soulful, like Ray Charles's yearning "I Can't Stop Loving You."
Fred, or Derf, now introduced by Reg to everyone as his stepfather, attended these pub performances every weekend, his encouragement of the teenager's musical aspirations drawing them closer. Reg earned only a pound a night from the gig, but at the end of each set, Fred passed through the drinkers, pushing a money box under their noses and asking for tips for the young pianist. And so, nightly, Reg earned north of a tenner, sometimes totting up thirty-five quid a week (the equivalent of almost $400 today), at a time when the average weekly wage was only £12. "That paid for my amplifier and my electric piano," he remembers, "and gave me the experience of dealing on a solo basis."
He'd already been in a group, of a kind. Hanging out at the youth club in his local church hall back when he was thirteen and fourteen, he began to encounter like-minded sorts including Stu Brown, his cousin's boyfriend, who fancied himself a guitarist and singer.
Reg told Stu he was a piano player. Stu instantly erupted in cruel laughter, since rotund Reg looked far from the picture of a rock'n'roller. Hurt and annoyed, but determined to prove himself, Reg took to the keys and began to mimic Jerry Lee Lewis with surprising intensity and passion. Brown was instantly silenced, and together the pair began to hatch a plan to form a band.
Dumbfounding everyone, at the first gig played in the church hall by the freshly named Corvettes, Reg did his full Jerry Lee, aggressively heeling away the piano stool midnumber and standing up to play. The shocked reaction this provoked among the audience quietly thrilled and astonished him. From then on, kicking the piano stool away would become his signature onstage trick.
—
AS TEENAGE BANDS often do, the Corvettes fell apart within a year and a half, owing to a lack of gigs, shoddy amplification, and a procession of woefully out-of-tune church hall pianos. Having experienced the highs of being part of a group, Reg was back playing alone at the Northwood Hills.
Within the year, though, he bumped back into Stu Brown, who—along with the Corvettes' bassist, Geoff Dyson—had deepened his knowledge of the blues, at the time acutely in vogue. Stu suggested to Reg that he come play with them. Armed with his recently bought Hohner Pianet electric piano and an amp to ensure he would be heard on a level way above a decrepit church hall piano, Reg joined his first proper band, completed by drummer Mick Inkpen. Cribbing and bastardizing their name from jazz guitarist Django Reinhardt's 1949 album _Djangology,_ the quartet became Bluesology.
More in tune stylistically with the painfully hip rhythm and blues of UK singer Georgie Fame than the scream pop of the Beatles, Bluesology added brass to their lineup and, as time passed, bagged regular gigs at local pubs and then a host of cool clubs in Soho. A purist R&B and soul band, they would knock out numbers by Jimmy Witherspoon, and later Eddie Floyd and Otis Redding.
Reg switched from his Hohner Pianet to the more R&B-friendly Vox Continental organ. But he knew he was a fairly bad organist and, worse, the instrument kept breaking down. Often he felt like a fraud: weighing nearly two hundred pounds and stuck stage left behind his keyboard when all he wanted to do was be the lead singer—a role for the most part taken (aside from the odd Reg-sung number) by the taller, slimmer, more traditionally frontman-like Stu Brown. Soul and blues were all very well, but in his heart Reg still loved the manic charge of rock'n'roll.
He'd recently gone to see the Beatles perform live at London's Hammersmith Odeon and it had proved an intoxicating experience, witnessing the emotionally heightened scenes both outside and inside the venue. "Seeing the Beatles was kind of like seeing God in a way," he laughs. "Just actually seeing them live and being in their presence. Police in the street. It was just chaos everywhere they went. That was the exciting thing about it, just to have a glimpse at the moment. Being in an audience that frenzied. You couldn't hear anything for the noise."
Sick of school and more than ever dedicated to pop music, in 1964, Reg quit Pinner County Grammar School only two weeks before he was due to take his crucially important A-level exams. Longing to be in the eye of the capital's music scene, he managed to land himself a job as a mailroom clerk, delivery boy, and general dogsbody at Mills Music, a sheet music publisher on Denmark Street. He soon became a well-known face in the area, dashing up and down the parade of instrument shops and recording studios that was at the time London's equivalent of Tin Pan Alley.
It was here that he first encountered Caleb Quaye, an aspiring guitarist similarly working as a runner for Paxton's Music in Old Compton Street. The two immediately hit it off, though Caleb constantly ribbed Reg, nicknaming him Billy Bunter and telling him that he thought Bluesology's band name was pretentious. Like most, he couldn't imagine how this plump, myopic teenager, however funny and likable, could ever set foot on a stage.
But even if outwardly tentative, Reg was made of sturdy stuff. In the spring of 1965, Bluesology played a gig at the Elms Club in the northwestern London suburb of South Harrow, where they found themselves facing a crowd of hostile, greasy-haired bikers. Sneering at the band's soulful R&B, most of the rabble retreated to the back of the hall. Some of them then began angrily revving their motorcycles at the entrance, as if readying themselves to ride into the venue. Others threatened to smash the place up if Bluesology didn't play rock'n'roll. The shaken band plowed on regardless, though Reg was sorely tempted to launch into his beloved Jerry Lee Lewis or Little Richard numbers.
Bluesology were growing in confidence, and in July '65, they notched it up a gear. First, they released their debut single, having landed a deal with Fontana Records to put out an original song, "Come Back Baby," written—and, more significantly, sung—by Reg. The band sounded shaky and amateurish on vinyl, performing this loungy bossa nova with a generic lyric written from the perspective of a jilted, pining lover, although the song's ornate structural twists nonetheless revealed its writer's deft ways with an arrangement.
It was to prove a flop. But not before Reg heard himself on the airwaves for the first time. Driving in his car, he happened to catch a DJ on Radio Luxembourg give "Come Back Baby" a spin. "Hey, that's me singing, folks!" he exclaimed aloud. In a shrewd move, Reg had also managed to sell the song to his bosses at Mills Music for £500 ($5,500 today).
Then Bluesology landed a second lucky break. On July 22, they entered a Battle of the Bands competition at the Gaumont State cinema in Kilburn, north London. After performing, and while waiting for the results, they were approached by an impressed booking agent, Roy Tempest, who asked them if they'd consider backing U.S. soul artists on British tours. They immediately agreed, but were quickly deflated in the weeks afterward when the musical director for Wilson Pickett—just about to release his signature hit "In the Midnight Hour"—wasn't similarly impressed by their playing.
It was to prove a minor setback, however, and Tempest soon had Bluesology on the road, slogging away backing various American acts. The Chicago soul singer Major Lance was so taken by how meticulously the band parroted his songs that he found he didn't have to change a note. The hulking Washington, D.C., crooner Billy Stewart was a fine touring partner, although the band would often have to stop on the M1 motorway while he leisurely and copiously urinated like a racehorse. Georgie Fame, at the time riding high with a run of R&B chart hits, had recently covered Stewart's hit "Sitting in the Park." One night at the Ricky-Tick in Windsor, Reg watched, astounded, as the American soul man dealt with an infuriating heckler.
"Someone said, 'We want Georgie Fame!' " he remembers. "He fucking jumped off the stage and chased him. I thought, Fuck, you're a brave man shouting at Billy Stewart. He was _huge_. He must have been twenty-four stone"—more than three hundred pounds.
Meanwhile, Patti LaBelle and the Blue Belles provided a different sort of education—part musical, since the singer (who'd endured a hard-knocks upbringing in Philadelphia) was a tough taskmistress, and part social, when it came to the ways of the road. LaBelle often invited Bluesology to play cards with her and used her gambling wiles to win back their wages from them. Sometimes, overcome with guilt, she would take them back to the girls' rented flat and cook dinner for them.
Reg was quickly disillusioned by his time backing "Patti LaBelle and her Blue Bellies," as they were billed one night, finding himself playing dreary standards such as "Danny Boy" and "Somewhere over the Rainbow." It was a taste of a life as a cabaret musician, a life he was to come to despise.
Other bookings only served to show him how quickly musical stars could fall. Supporting the Ink Spots, who'd managed to carve out a fine career as a doo-wop group in the fifties, he'd watch younger audiences grow bored and wander off when they played their old hits.
Day by day, night after night, it amounted to a demanding, seemingly endless trek, even for a band with the boundless energy of teenagers. One long shift might find them schlepping between three bookings, from the Cavendish in Sheffield to Tito's in Stockton to the Latino in South Shields. Or they'd play a trio of gigs in different London clubs in the same evening: the Ready Steady Go to the All Star to the Flamingo. Another night, in Manchester, it would be the Oasis to the Twisted Wheel and then back to the Oasis for a late-night slot. Show after show after show with the latest American soul act arriving in the country—Arthur Alexander or the Exciters or Solomon Burke or Doris Troy.
Next it was off to Hamburg, shadowing the steps of the pre-fame Beatles, living in shabby rooms above seamy venues and playing nightly sets where Reg would sing rude words to his songs for any unsuspecting non-English-speaking Germans. And then to St. Tropez and the Papagayo Club, where Brigitte Bardot had been known to shimmy tantalizingly in a short dress on the dance floor and where Reg gave himself an electric shock with his faulty equipment, requiring a sedative shot in the ass from a doctor as the other band members stood around his bed.
Still, ever the record obsessive, his mind was on other, more musical shocks to the system. "I remember playing the Papagayo Club with Bluesology when _Revolver_ came out and 'Reach Out I'll Be There' by the Four Tops," he says, his memory rewinding back to the hot August of 1966. "I couldn't believe both records were so brilliant. It was like, _Fuck...what?_ "
By December, still only nineteen years old, Reg was appearing on the same stage as one of his formative heroes, opening with Bluesology for Little Richard at the Saville Theatre in London. Once again, though, the group faced an audience of enraged, blues-hating rockers, who threw motorcycle parts at them during their set while chanting, "Off...off...off!"
When the headliner took the platform, Reg was astounded to witness his idol at such close proximity, standing like a returning king atop his piano, pulsating with energy, a vision in sequins and light.
He decided right there and then. His mission from now on, however long it might take him, was to become a bedazzling rock'n'roll piano star.
—
STILL THE NEVER-ENDING road of exhausting gigs as a wage slave stretched out ahead of him. He'd share bills with other bands like the Move from Birmingham and sense their drive and determination and realize that Bluesology lacked these qualities, and were instead held together only by loyalty and a shared need to earn money as they motored, crammed in the back of the van, from one gig to another.
"We'd play in places like Balloch in Scotland," he remembers. "We'd arrive and say, 'Why's the stage ten feet tall?' And they'd say, 'So that the fucking people can't get to the band.' You'd see that there would be two sides to the audience. Everyone would be in two halves and you'd just wait for the fight to break out."
Into this hectic and road-weary scene stepped Long John Baldry. Earning his piratical nickname for his unusual height—six foot seven, almost a foot taller than Reg—Baldry was highly regarded as a singer and leading face of the British blues movement. His most recent venture, fronting the three-vocalist lineup of Steampacket alongside Julie Driscoll and Rod Stewart, had collapsed when the band had been offered a residency at the Papagayo Club in St. Tropez, and for financial reasons, Stewart had been cut out of the deal and left behind. Down in the south of France, the effervescent one-man party that was Baldry drank himself to a standstill, missing gigs by the second week. In the end, when Steampacket was sacked from the engagement, it was Bluesology who ended up stepping in.
After a subsequent Bluesology show back in London, at the swish Cromwellian club in South Kensington, where there was gambling upstairs and dancing below, Long John Baldry began sizing them up as his new group. Their prospects at this point were not great: Their second Fontana single, the similarly Reg-fronted "Mr. Frantic," a potboiling R&B tune that in spite of its title sounded oddly listless, had failed to chart. And so they took up the offer to back Baldry, who reshaped them in the image of Steampacket, encouraging Stu Brown to drop his guitar playing to concentrate on vocals and bringing in another tall, blond singer, Alan Walker, to form a three-man front line. This reconfiguration reenergized the group. After the first few gigs, even Reg considered the new lineup to be not a bad little band.
Baldry was great company: a complete hoot, drinking and loudly playing his blues 45s on the record-player system recently installed in their van as they drove from here to there up and down the country. Onstage, eccentric and brandy-fueled, he would sometimes baffle the crowd by stopping a song to tell a joke or berate someone in the audience. He was also openly camp at a time when homosexuality was still illegal in the UK. Utterly naive, Reg didn't even realize Baldry was gay.
Though he was now making a living as a musician, playing the organ sidestage behind three charismatic vocalists, Reg remained quietly frustrated. He wanted to be a lead singer, but he realized that given his pudgy appearance, no one was likely to give him the job.
Maybe he should concentrate on writing songs, he thought. It might be the only way that he was going to get anywhere.
—
THE AD APPEARED in the June 17, 1967, edition of the _New Musical Express._ Reg was on tour in Newcastle when he picked up a copy and spotted it: "Liberty wants talent. Artistes/Composers, Singers-Musicians to form new group. Call or write Ray Williams for appointment or mail audition tape or disc to 11 Albemarle Street, London W1. Tel: Mayfair 7362."
He secured an appointment with Williams and went into the Liberty Records offices for an audition, nervously parking himself at their upright piano. Perhaps a touch bizarrely, given the fact that he'd already sung on two records with Bluesology, he fell back on his Northwood Hills pub routine, performing a selection of Jim Reeves songs including "He'll Have to Go" and "I Love You Because" and even exuberantly paddling out "Mammy" by Al Jolson.
Williams thought Reg had an unusual and interesting voice, no matter how plain and un-pop-star-like he appeared. Reg, for his part, had zero confidence in his own image. He later remembered that on that day he felt he looked like "a lump of porridge."
A test recording session was booked. Walking through the door of DJM Studio, Reg was met by the sight of Caleb Quaye, his onetime fellow music publisher runner, now a sound engineer, who at first didn't recognize "Billy Bunter" in his longer hair and slightly groovier getup. When he realized who the artist he'd be working with actually was, Quaye burst out laughing. But his merriment was cut short when he heard Reg sing. To Quaye's ears, his almost androgynous vocal tones sounded like Sandie Shaw.
Ray Williams played the recordings to his bosses at Liberty. They were not won over. Breaking the bad news of the rejection to Reg, he suggested an alternative idea.
He said, "Well, I've got all these lyrics on my desk from this guy in Lincolnshire."
"Bernie became the brother I always wanted." Elton and Bernie, the hopeful songwriters.
BEFORE THEY EVEN MET, they'd written twenty songs together.
Later they would look back and view the fact that they'd both responded to the Liberty ad as a twist of fate. Especially since Bernie Taupin hadn't himself even managed to mail the letter including examples of his song lyrics. He had got as far as stuffing them into an envelope and addressing it, but then left it on the mantelpiece at the Taupins' farmhouse in the small Lincolnshire village of Owmby-by-Spital (population: 300) in England's East Midlands. Somehow, the letter had been temporarily lost, tucked behind a clock, and lay there for two weeks before Bernie's mother found it and popped it in the post.
Like Reg, seventeen-year-old Bernie had always been something of a loner. As a boy he loved the poetry his mother and maternal grandfather would read to him. His childhood imagination was flooded by the long, fantastical verses of Coleridge's "The Rime of the Ancient Mariner" and the adventures of the gallant knight Lochinvar in Sir Walter Scott's "Marmion," causing him to charge around the hills surrounding his home wielding a wooden sword.
The Taupins were not a musical family, but Bernie's ears were caught by the sounds of Johnny Cash and Marty Robbins he heard by turning the radio dial to the American Forces Network station. Robbins's country-and-western murder ballad "El Paso" was a formative favorite, with its tale of a jealous cowboy who shoots his love rival in a tussle for the affections of the wicked Feleena. Bernie played Robbins's 1959 album _Gunfighter Ballads and Trail Songs_ until it was worn out. These were not just epic stories, but epic stories set to song.
Inspired by the words of these songs, eleven-year-old Bernie penned what he fancied was a short book on the history of the American West, albeit only six or seven pages long, and even sent it to a London publisher. A representative gently responded with the words "Dear Mr. Taupin, there's nothing we can do with your book right now." Encouragingly, though, they added that the youngster appeared to have a flair for writing and that he should pursue it.
Bernie kept these ambitions to himself and left school at fifteen, falling into a procession of menial jobs, including working at a printing press inking leaflets for local fetes and horse shows and laboring on a farm where he was tasked with forking the carcasses of diseased chickens into an incinerator. All the while, back in the kitchen at his family's farmhouse, he would _tap tap tap_ away, writing poems on an old typewriter.
As 1966 turned to 1967, his verses of juvenilia began to take on more psychedelic hues, with titles such as "Swan Queen of the Laughing Lake" and "The Year of the Teddy Bear." Later he was rightly to regard these as terrible rip-offs of the spaced-out hello-trees-hello-sky lyrical fashion of the day. Significantly, though, the week that Bernie spotted the Liberty Records ad in the _New Musical Express,_ the number one single in Britain was Procol Harum's "A Whiter Shade of Pale," a faux-classical dream song with a lyric by Keith Reid, a member of the band who wrote their songs' words but—interestingly—didn't play an instrument.
Thanks to his mother's forcing Bernie's hand by sending the letter to Liberty, Ray Williams got back in touch. "When you happen to be in Mayfair," Williams wrote, "pop in and see me."
Of course, farm boy Bernie was never simply passing through Mayfair. But he took Williams up on his offer and jumped on a train to travel the hundred fifty miles south to London. Climbing the stairs to the offices on Albemarle Street that Liberty shared with Gralto, the company that published the songs of the Hollies, he was thrilled to pass their singer-guitarist Graham Nash on the stairs, looking impressively with-it in his worn-out tweed jacket and high-waisted jeans.
Williams played matchmaker, giving Bernie's lyrics to Reg, who began to fit these screeds of poetry to music over the following weeks. Having spent years studying pop records, for him, turning words on a page into songs was a breeze. It quickly proved to be a highly productive hookup, which set in place the unusual remote writing approach that was to characterize their partnership.
During moonlighting demo sessions at DJM's makeshift office turned studio, nicknamed the Gaff by everyone who frequented it, Reg would record these collaborations, with Caleb Quaye behind the mixing console. At one session in late July 1967, Bernie turned up, arriving unfashionably early, intimidated by the prospect of meeting Reg and hanging out with these "swinging London" types. In an effort to mask his insecurity and perhaps to fit in, the diminutive lyricist, just short of five feet five, hid his eyes behind trendy sunglasses. One musician he encountered there turned to him and said, "Great shades, man, can I try them on?" Bernie had no idea what he was talking about.
Reg turned up and invited him for a coffee around the corner at the Lancaster Grill on Tottenham Court Road. Bernie was surprised to discover that his songwriting partner was not the deeply cool figure he'd imagined him to be. "He was definitely not Granny Takes a Trip," Bernie laughs, referring to the modish King's Road boutique of the day. "Just _Granny_. I was expecting someone very hip, plugged-in. And sure, he was out there playing in all those great clubs of the time, backing all these wonderful American artists. But he had none of the pretensions or the airs of many of the other people that circulated around at that time.
"I just thought he was a really nice guy," he adds. "He was very, very friendly. A little shy. Very caring, but very awkward. We were _both_ very awkward. It very much eased me because I thought, _Oh good, I'm not feeling sort of substandard here_. 'Cause I was pretty green. I was very much a fish out of water."
Reg himself, not yet emboldened by his Elton persona, was typically tentative and hellishly nervous about meeting his co-writer. "I was wondering who would turn up," he says, "what he would be like, whether he would be fucking horrible or would I like him. He was incredibly young. We hit it off straightaway. It was like a kismet thing."
Back in the studio, hearing songs that featured his words actually coming out of the speakers, Bernie was utterly enthralled. He thought to himself, _Wow, this is, like, the real thing_. The first creatively successful product of the collaboration was "Scarecrow," a midpaced and lightly bouncy piano ballad given a skeletal arrangement with badly played shaker and tambourine. Although the lyric was convoluted and largely nonsensical, with clunking lines such as "like moths around a lightbulb, your brain is still bleeding," it was a start.
Encouraged by the work of his new songwriting team, Ray Williams pressed a few copies of "Scarecrow" onto acetate discs with a view to possibly selling the rights to his office neighbors Gralto. Bernie took a copy back to Putney, south London, where he was staying with his aunt and uncle, and proudly played it for them. "That's a memory you can't erase," he enthuses. "I mean, it was so exciting."
Other recordings followed: "Velvet Fountain" with its earnestly delivered opening line "Do you believe in fairies?"; the fanciful full-band treatment of the self-consciously trippy "A Dandelion Dies in the Wind" with its "purple clouds" and "golden rain." Reg and Bernie quickly took up an after-hours residency at the DJM studios while learning their craft and defining their roles. It was clear that Bernie was no musician—Caleb Quaye howled with laughter during one session when the lyricist tried to play a tambourine, wildly out of time, looking as if he was trying to swat a fly. Similarly, Reg, who admitted he had always struggled with having the confidence to express his feelings on the page, was no lyric writer, as proved when he offered up a self-penned song called "The Witches' House": "I go to the witches' house / I go there when I can / Me and Molly Dickinson in my delivery van."
Session by session, they strove onward, secretly recording at the studio without the knowledge of the staff at DJM. Until, fatefully, they were caught. The studio was housed on the first floor of the building at 71-75 New Oxford Street above a branch of the Midland Bank. An arrangement had been put in place where the security guard downstairs at the bank had to be notified if there was to be a nocturnal session in the offices above.
One night, for whatever reason, the message wasn't relayed and the security guard was forced to call and wake up Ronald Brohn, DJM's studio manager, to alert him to the goings-on. Brohn raced over there. Caleb Quaye looked up from the mixing desk to see the enraged Brohn standing in the doorway of the control room. The engineer quickly hid his joint and was loudly bawled out.
The next morning Quaye faced what he would come to refer to as the Great Purge. Dick James and his son and employee Stephen demanded to know what the hell had been happening. Quaye, certain he was about to be sacked, not least since he had been furtively requisitioning checks to pay for rented equipment and session musicians, nervously began to list the names of the artists and bands he had been recording.
"That's it," barked Dick James. "I'm throwing them all out. It's over."
"Well, you can throw them out and you can sack me if you want," Quaye replied. "But I've got these two guys...I think you ought to listen to their stuff."
Sweating, Quaye threaded the tape onto a reel-to-reel machine and played a selection of Reg and Bernie's songs. Five or six tunes in, Dick James's mood turned from enraged to intrigued. Never one to miss a potentially rewarding business opportunity, he asked Quaye to set up a meeting with the pair.
It felt to the sheepish Reg and Bernie as if they were being summoned to the headmaster's office. Astonishingly, though, Dick James offered to sign them to a deal as songwriters. On November 7, 1967, they inked their names to the contract.
Bernie immediately moved in with Reg at Frome Court so that the two could concentrate on their writing. Notably, though, their collaborative process didn't change. Each preferred to work alone: Bernie would write or type lyrics, Reg would take them away and come up with the chords and melodies.
"It was one of the happiest times of my life," remembers the singer. "We were inseparable. He turned me on to different music, I turned him on to different music. We were buying records, we were going to the cinema, we were going to gigs. It all revolved around artistic things.
"For me, he became the mate that I could hang out with, and the brother I always wanted."
—
MEANWHILE REG WAS still out on the road for sporadic dates with Long John Baldry, as part of an expanded nine-piece lineup that now included Caleb Quaye on guitar, a sax player named Elton Dean, and Marsha Hunt, the future face of the musical _Hair,_ on additional vocals. Together, they performed as the Long John Baldry Show.
By this point, Baldry had turned his back on the blues and refashioned himself as something of a crooner, remarkably scoring a number one hit in Britain with the hokey orchestrated showstopper "Let the Heartaches Begin," sung in a trembling, croaky voice in imitation of Tom Jones or Engelbert Humperdinck, after he had downed a copious amount of Courvoisier in the studio.
Onstage, wearing bottle-green suits and frilly-cuffed shirts, the band would be forced to mime to a tape of the hit, since it was impossible to re-create its string-soaked arrangement in the live environment. Reg hated every minute of it. Having gone from playing in chic clubs on the UK circuit, he now found himself in far cheesier cabaret venues, setting up his equipment during bingo sessions and performing while people were eating fish and chips.
His changeable outlook was not improved by the fact that he had begun popping amphetamine-based diet pills in an effort to slim down. Baldry found they made Reg aggressive and short-tempered, as well as increasingly huffy and prone to perfectionism. If anyone messed up onstage or played out of tune, Reg would boil over. During the sets, he made catty comments behind Baldry's back, though loud enough for the others to hear, causing them to break up with laughter. One night, he stood up midshow and started moaning and ranting and kicking his malfunctioning amplifier while dressed in an enormous fur coat. To Quaye's eyes, Reg looked like a furious Winnie-the-Pooh, storming around the stage and attacking the gear.
Then, on Christmas Eve 1967, someone came along to brighten Reg's mood. At the Cavendish club in Sheffield, he met a local girl, Linda Woodrow, an ash-blonde three years his senior and four inches taller.
In matters of the heart, Reg was still a naïf. He'd had one girlfriend, or at least love interest, back in his pub-piano-playing days at the Northwood Hills when an older girl, a twenty-year-old blonde named Nellie, had taken a shine to him. More exotic still, it transpired that she was a gypsy, living in a caravan that was moved on from place to place by the police every few weeks.
He visited her on-site, following her direction to "turn left at the third field" on the approach into Southall in west London. Bucking his preconceptions about those who chose an itinerant life, he found her caravan clean and Nellie herself very welcoming. Though not as welcoming as he'd have perhaps liked. After their dalliance, he remained a virgin.
Linda Woodrow was a more sophisticated individual than Nellie—privately educated, fond of high fashion, and well known on the Sheffield music scene, thanks to her shifts spinning records at the local ice rink. Her companion at the Long John Baldry gig at the Cavendish was another DJ, Chris Crossley, nicknamed the Mighty Atom by virtue of his vertically challenged stature of four feet eight inches. Mingling with the crowd and chatting at the bar after the show, the band members assumed Crossley to be Linda's boyfriend. That was until the others noticed that she and Reg were deep in conversation.
They became friends, then a romantic item. In London, the relationship quickly intensified, with Linda visiting Reg on the weekends at Frome Court. A lack of privacy ensured their romance had certain limits, however, since Reg was of course living there with Sheila and Fred, not to mention sharing the bunk beds in his small room with Bernie. Before long, though, Linda had moved to London and rented a flat of her own.
During these first months of 1968, Reg's frustrations with touring life as a bit-part player in the Long John Baldry Show began to deepen. He even briefly considered quitting as a musician altogether and started scanning the small ads in the papers looking for a job, working in a record shop, or doing anything really apart from being a supper-club organist.
At the same time, Stephen James had been shopping Reg and Bernie's songs around the record companies with a view to having them covered by name artists. In their reactions to the tapes, more than one A&R man voiced an opinion that surely the singer on these recordings—with his smooth delivery and impressive range—was strong enough to front the songs himself. It seemed an interesting idea to Stephen James, which he conveyed to his father. As a result, DJM offered to sign Reg as a recording artist to a five-year deal.
Reg was overcome with happiness. But there was one aspect of the arrangement that troubled him—namely, the thought that few people were likely to buy a record by a singer with the acutely unglamorous name of Reg Dwight.
Flying back down from Scotland after a gig at Green's Playhouse in Glasgow, Reg was restless on the plane. He wandered up the aisle toward sax player Elton Dean. "I'm leaving the band," he bluntly announced. "To become a pop star. Is it all right if I call myself Elton Dean?"
Dean was flummoxed. "That's a bit strong, Reg," he replied.
On the airport bus taking them back into London, Reg was scribbling names on a piece of paper. If he couldn't be Elton Dean, he thought, then maybe he could borrow the first name of his current frontman and use it as a surname.
He was later to view it as a eureka moment. That was it. He would become Elton John.
"I was never happy with the name Reg anyway," he says. "It's an old-fashioned name. It wasn't the name I wanted as a kid. Changing it was just, psychologically, a big boost for me."
—
NOT THAT, INITIALLY, it made much difference to his commercial fortunes. Stephen James cut a two-single deal with Philips, licensing the DJM recordings to them, and the first Elton John single, "I've Been Loving You," was swiftly released, on March 1, 1968. Although the song was credited to both Elton and Bernie, in truth, the latter's credit came as a result of the former's generosity, since he'd written both the music and lyrics himself. But it showed. "I've Been Loving You" was not too much of a stylistic stretch from Long John Baldry's "Let the Heartaches Begin," a ho-hum easy-listening ballad that quickly sank.
Everything was moving fast for him, however, both professionally and personally. Linda Woodrow had found a basement flat for her and Elton to share, at 29 Furlong Road, Islington, north London.
"Furlong Road...the first time I moved away from home," he ruminates. "Really didn't enjoy that much."
In truth, it proved to be one of the most painful periods of the singer's life.
To help pay the rent, Bernie came as part of the household setup, moving into the bedroom, while the newly domesticated couple inhabited the living room.
Elton and Linda made an odd pair—her with the posh accent and stylish clothes, him trying to dress like a rock star on a limited budget with his enormous fur coat. There were other, more problematic differences that set them apart. Linda didn't seem to share Elton's absurdist sense of humor or, worse, his taste in music. He'd later claim she'd constantly dismiss the songs he was writing, destroying him inside. She preferred the passé pop jazz of Buddy Greco and Mel Tormé.
But, at twenty-one, he finally lost his virginity. Not long after, according to Elton, Linda told him she was pregnant. Overcome with a sense of old-fashioned propriety, though not exactly getting down on one knee, he mumbled to her, "Well, I suppose we should get married, then." A date was set, for the third week of June.
As time passed, Elton began to view the impending wedding as an oncoming train, speeding toward him while he was tied to the tracks. One afternoon in the kitchen at Furlong Road, he turned on—but didn't light—the gas oven. He placed a pillow inside and rested his head on it.
Bernie found him. "I ended up pulling his head out," he remembers.
It was, says Elton, only a semi-serious attempt to take his own life: "Um...kind of half and half. I'd backed myself into a wall by saying I was gonna get married. I didn't want to get married. It was a cry for help. You know inside you're making the wrong move. You deal with it by being preposterous." As suicide attempts go, it was indeed preposterous: He'd only turned the gas oven dial to "low," and he'd left the windows open.
It didn't stall the plans for the wedding, however. On a halfhearted stag night in the first week of June, Bernie and Long John Baldry took Elton to the Bag O'Nails club in Soho. The three proceeded to get uproariously drunk. Baldry, who was supposed to be best man at the ceremony, asked Elton if he'd even yet booked the hall for the reception. Elton broke down in tears.
Baldry seized the opportunity to speak his mind. "If you marry this woman, you'll destroy two lives, yours and hers," he warned him. More importantly, Baldry, an unashamedly gay man, sensed there was a deeper root to his friend's dilemma.
"He said, 'For fuck's sake, you're more in love with Bernie than you are this woman,' " Elton recalls. " 'For God's sake, come to your senses.' "
Elton and Bernie woozily stumbled back to Furlong Road sometime around four in the morning, waking Linda. Then Elton, with drunken bravado, decided to tell her that he wanted to call off the wedding. Bernie hastily retreated to his room and locked the door as the argument between the unhappy couple exploded on the other side. A little while later, Elton knocked and said, "I'm coming in there with you." He spent the rest of the night on the floor.
In spite of Long John Baldry's assumptions about Elton and Bernie, both insist their relationship was never a sexual one. But the incident seemed to strengthen their brotherly bond. The morning after the bust-up with Linda, Fred Farebrother arrived in his van to pick up their belongings, and they returned together to Frome Court.
"It was," says Elton, in remembering the lucky escape he had from choking domesticity, "a turning point in my life."
—
FROM HERE, THE two continued to spend their days hacking away in the studio, trying to write songs that would keep Dick James happy. Their own, more esoteric offerings were meanwhile piling up: the groovy sixties swinging "The Angel Tree," the mournful psych melodrama of "Tartan-Colored Lady," the surrealist romance of "When I Was Tealby Abbey," the brazenly Beatlesque "Regimental Sgt Zippo." Bit by bit, they were getting somewhere, even if none of these recordings were ever to be officially released.
Elsewhere, with their pop songs designed to be hawked to other artists, there was little but frustration. Tune after tune was rejected and the pair would despondently take the train back home to Pinner, growing increasingly disillusioned with the music business. "There were tremendous, tremendous highs and lows," Bernie says, "where we got our hopes up, only to have them dashed."
Sometimes they found themselves so near to greatness but yet so far. When playing on a session at Abbey Road for the novelty troupe the Barron Knights, a starstruck Elton, with Bernie in tow, met Paul McCartney, who sat down at a piano and sang for them the next Beatles single, "Hey Jude." It blew their heads apart. "It was like, _'Eeeeee!'_ " says Elton. At the same time, says Bernie, "it was kind of disheartening to see all the movers and shakers existing in this fabulous world that we weren't really a party to."
But there were at least a few promising signs that the pair's fortunes might be on the rise. A new promotions man at DJM, Steve Brown, a comparative longhair in a company of "straights," heard what Elton and Bernie were doing and urged them to write from their hearts. Taking this approach, they came up with "Skyline Pigeon," which both felt was their greatest songwriting achievement yet. Further reinforcement came from Roger Cook and Roger Greenaway, writers of hits for the Fortunes and Gene Pitney, who tried to bring attention to the John/Taupin songs. Cook covered "Skyline Pigeon" and released it on Columbia in August 1968. The very same month, another version of the song was recorded for Pye by the clean-cut would-be pop star Guy Darrell.
Night shifts at DJM produced Elton's debut LP, _Empty Sky,_ unsuccessfully promoted by ads on the backs of a hundred London buses. It wasn't a complete failure, however. In America, the California group Three Dog Night, an outfit always hungry for songs, virtually photocopied "Lady Samantha" for their second album, _Suitable for Framing,_ released in the summer of 1969 and bringing in some much-needed funds for Elton and Bernie. When the album reached number ten on the _Billboard_ chart, Elton got his hands on a copy of the trade mag and proudly underlined the entry.
The flopping of _Empty Sky_ didn't dampen the hopes of Dick James, who firmly believed that he had found someone special in Elton. Instead, he chose to up the ante. The next Elton John record, he decided, would be recorded with a far higher budget of £6,000, and at a better-equipped studio.
There was already a new song to hang the next album on. One morning at Frome Court, over breakfast, Bernie had rapidly scribbled a lyric that read like an intimate, almost bashful message to an unnamed object of his affections, accepting his failings and even poverty and instead offering up only these lines on the page. "It's the voice of someone who hasn't experienced love in any way," he says. "It's a very virginal song." Betraying its swift execution as part of an everyday routine, the piece of paper on which he wrote "Your Song" was stained with egg.
On Monday, October 27, 1969, it took Elton only ten minutes to conjure up its chords and melody. "Your Song" was quickly demoed with him alone at the piano, at a faster clip than the version that would be released and sung in a hushed tone that was almost feminine.
It was clearly an enormous step forward, and it galvanized the DJM team. The relatively inexperienced Steve Brown stepped aside as producer, approaching Gus Dudgeon, who had recently overseen the imaginative production of David Bowie's first Top Five hit, "Space Oddity." Dudgeon had already heard of Elton, having been aware of the _Empty Sky_ advertising campaign.
"I remember seeing ads on the backs of buses," he recalled. "That kind of registered with me. Hello, we've got a record company that's actually working hard on the behalf of an artist. So when I got the call I thought, _Oh, this is the guy that they're pushing really hard_."
Dudgeon set to work by drafting in the arranger of the inventive strings on "Space Oddity," Paul Buckmaster. Sessions were booked at the state-of-the-art Trident Studios tucked away in St. Anne's Court in Soho, where the Beatles had decamped from Abbey Road to record "Hey Jude" and parts of _The White Album,_ drawn there by its eight-track tape machine.
Elton was told that the plan was to record the songs live with an orchestra. Naturally, this added a nervy edge to the proceedings. "I had to play the piano with all these brilliant session musicians," he says. "And if I fucked up...The fear element was great."
There was a sonic leap between _Empty Sky_ and the album that was to become _Elton John_. Most of this aural advancement was centered on the pristine and sparkling piano sound. With it, and these new songs, Elton seemed to catch something that was in the air: the burgeoning trend for piano balladeering at the dawning of the 1970s. It was a sound that was suddenly appearing all around him, on the records he was listening to both before and after making _Elton John_ —the glassy piano textures of Joni Mitchell on _Ladies of the Canyon,_ the grandeur of Larry Knechtel's playing on Simon & Garfunkel's _Bridge over Troubled Water,_ the delicacy of Leon Russell's meandering arpeggios and gently pulsing chords on "A Song for You."
On _Elton John,_ you could hear the Saturday mornings of the musician's youth spent playing the works of the great composers. The classical influence was heavy, and unusual for the times. In contrast to the shoestring budget of his first LP, the making of this second album was meticulously plotted, Elton remembers, "like a military operation," making full use of the latest developments in studio technology. It was a time when recordings were becoming lusher and the possibilities in hi-fi sound were opening up.
Coloring the picture, Paul Buckmaster's orchestral arrangements were inspired and unorthodox, lending soaring drama to the schoolboy crush tale of "First Episode at Hienton" and an adrenaline rush to "The King Must Die," Bernie's monarch-toppling homage to the vibrant historical novels of Mary Renault. Elton was captivated by watching the eccentric figure of Buckmaster at work, particularly when the arranger made strange mouth noises to convey the odd buzzing string effect he was after for the beginning of "Sixty Years On." The result sounded like a chorus of wonked-out wasps.
Lyrically, Bernie's contributions went from high to low. The words of the rousing "Take Me to the Pilot," he would later admit, meant "fuck all." In contrast, he painted a vivid picture of a devilish southern harlot who "milked the male population clean" in "No Shoe Strings on Louise," as brought to life on the microphone by Elton in an overly chewy impersonation of Mick Jagger.
For the most part, everyone's inspirations flowed freely. Even Elton felt moved to put pen to paper, adding the lines of racial solidarity to the coda of the brilliant and powerful gospel epic "Border Song." Released as the first single from the new album, vexingly, it floundered. But not before Elton was invited to perform it on BBC1's _Top of the Pops_. On the Thursday night the show aired, he watched it on the color television in Dick James's office, sweating anxiously as he viewed this broadcast version of himself. Still, it was amazing. This was really happening. He was on TV.
A music industry showcase at the Revolution Club in London's Mayfair launched _Elton John_ two weeks before its April 10 release, where the singer appeared as part of his new trio along with drummer Nigel Olsson and bassist Dee Murray. Elton, in his Mickey Mouse T-shirt and round John Lennon glasses, tried his hardest, kicking away the piano stool, battering the tambourine off his butt. But the response from the music business cognoscenti was muted.
"He's got no chance," one invited booking agent was overheard saying. "I just can't see him onstage at Madison Square Garden."
Following this first proper attempt at flight, Elton landed with a bump. It was back to the day job. Unable to earn a living wage from his own songs and recordings, he was forced to pay the bills by resorting to anonymously singing an array of cover versions on cheap hits compilation albums for budget labels—most prolifically for Hallmark's _Top of the Pops_ series, the covers of which featured a parade of young ladies whose clothes appeared to be in the process of falling off.
At the very least, it spotlit his versatility. He convincingly belted out Stevie Wonder's "Signed, Sealed, Delivered (I'm Yours)." He aped the quivering tones of Bee Gee Robin Gibb's mawkish solo single "Saved by the Bell," in perfect imitation of its dreadful warble. In spite of his skin color, he enthusiastically threw himself at the Heptones' ska version of "To Be Young, Gifted and Black." So distinctive was his voice that any listener familiar with Elton's own records could easily have identified him within a line or two.
Still, for a few months, everything was an uphill struggle. When they performed in Paris, opening for the smooth bossa nova grooves of Sérgio Mendes and Brasil '66, the crowd booed and lobbed tomatoes at Elton and the band. A stand-alone single—the only half convincing Stonesy hip-thruster "Rock and Roll Madonna"—was released in June and instantly died in the light.
Booked to perform on August 14, 1970, at the Yorkshire Folk, Blues & Jazz Festival in Krumlin in the north of England, Elton, Nigel, and Dee arrived on the makeshift site, in a valley in the middle of rural nowhere, to discover the organizers in hippie disarray. Despite its being the height of the British summer, it was bitingly cold. In front of the stage, most of the twenty-five-thousand-strong crowd lay stoned on the ground, cocooned in sleeping bags or failing to maintain their core temperature under thin sheets of plastic. No actual running order of groups had been arranged. Pink Floyd was said to have been booked, but they in fact knew nothing about it. Backstage, the Pretty Things and the Groundhogs were squabbling about who was going on when.
An exasperated Elton declared, "Oh, I'm going to go on now!" It was eight in the evening, and almost as soon as the trio walked onto the stage, the heavens opened. In yellow overalls, aluminum-colored boots, and a Donald Duck–style bib, Elton rose to the sodden occasion, passing out cups of warming brandy to the drenched front rows. He realized that if he jumped around, really not caring what he was doing, then at least he would keep his own ass warm.
The audience roared at the sight of this bespectacled court jester dancing in the rain. He booted the piano stool away and they howled even louder. The wilder he was, the more they loved him.
It was the moment he knew he was on to something.
Seven days later he flew to America, ready to take on anything.
—
EARLIER THAT YEAR, Russ Regan, the U.S. executive of Uni Records and the man famed for rebranding the Pendletones as the Beach Boys, was at the Continental Hyatt on Sunset Boulevard one morning, having breakfast with Roger Greenaway. He looked up to see Lennie Hodes, DJM's New York representative, wandering over.
"Russ, wow, I've got a package for you," Hodes said, disappearing up to his room to grab it. On his return, handing over a record mailer containing a copy of _Empty Sky,_ he explained to Regan, "Now this artist was just released by Bell Records and we've shopped him around and so far we haven't got any offers to pick up his contract."
Greenaway obviously knew whom Hodes was talking about. He told Regan that Elton John was going to be a star.
In truth, Bell Records had turned down the opportunity to release _Empty Sky_ before five other East Coast American labels did the same. After Regan coaxed the full truth out of Hodes, he thought to himself, _Well, if it's been turned down by people, forget it_. He really wasn't interested. But as a favor to Lennie, whom he liked, he played _Empty Sky_ when he got back to the Uni office.
"Y'know, I'm not gonna say I thought he was a superstar at that moment in time," Regan states. "But I said, 'This guy's really good.' "
Regan called Hodes, who told him that he could license Elton's records in the States for no advance. It seemed like pretty much a no-lose deal. In the weeks that followed, as the company began making plans to release _Empty Sky,_ the just completed and far slicker _Elton John_ arrived.
"I listened to it in my office," says Regan. "I looked up and I said, 'Thank you! Thank you! How lucky can one man get to have a piece of product like this?' I put the phones on hold. There were about thirty employees at the time at Uni Records. I said, 'Everybody, come on in, you gotta hear something.' So they all came in. They all sat on the floor. And after the album was over it was like, 'My God, what?! This is unreal.' We were so elated to have this product...and that was the beginning."
Regan wanted to bring Elton and the band over to Los Angeles and have them play a string of shows at the Troubadour. He insisted it had to be done on DJM's money, though. The trip would cost around £5,000—the equivalent of $46,000 today.
Back in London, weighing up the proposition, Dick and Stephen James looked at each other and made a decision: It's our last shot on Elton.
—
AND SO IT was he found himself on the trundling red London bus, heading for Sunset Boulevard and the Continental Hyatt. He and the band were so mortified by this gimmicky arrival that they were trying to duck below the windows out of sight of passersby.
Russ Regan, on the bus beside them, was effusive, constantly beaming and repeatedly declaring, "I love you guys." Elton, his sulkiness giving way to nervousness and babbling, kept launching into his _Goon Show_ voices to break the tension.
As they pulled toward West Hollywood, their saucer eyes were met by the vision of the America they'd grown up watching on television: gas stations, hamburger stands, doughnut joints, 77 Sunset Strip.
It was true that Elton hadn't even wanted to come. But now he was here, and even if the trip turned out to be a dud, at least he could go shopping for records.
"I didn't feel the time was right, and I was completely and utterly wrong," he admits. "When I got to America, the last thing I expected was what happened at the Troubadour."
"I was leaping on the piano. People were going, Oh my God." Wowing the Troubadour, August 1970.
IT WAS THE TWENTY-FIRST DAY of the eighth month of the new decade. The second year of Nixon's doomed presidency and the fourth week at number one for "(They Long to Be) Close to You" by the Carpenters. The temperature in Los Angeles was hovering around 75 degrees.
Up and down Sunset Strip, past the Continental Hyatt where Elton and the others were checking in, moved the everyday parade of the supercool, the misfits, and the bums, while here and there hippies on the sidewalks hawked copies of the _Free Press_ for twenty-five cents to cruising drivers: "Don't be a creep, buy a _Freep_."
Nine miles southeast of the Hyatt, at the Hall of Justice, the Manson trial was into its second month. Weirding out those forced to attend it, glazed female members of the Family hung around outside the building, wearing sheathed hunting knives, Xs burned with a soldering iron onto their foreheads in imitation of the facial carvings of their dark-eyed leader. By night, they slept in bushes or a parked van.
The day before, August 20, Manson, who'd spent the past two months either staring for hours at Judge Charles Older or making wild, head-game proclamations, had taken the stand for the first time. Unstrung and angry, he'd railed about his treatment in jail, calling it humiliating and "like kicking a dead man." Today the prosecution revealed they had two witnesses who would testify that on March 23, 1969, Manson had visited the home of Sharon Tate at 10500 Cielo Drive, then occupied by the record producer Terry Melcher, before the killings twelve months ago. The revelation blew a hole in Manson's defense. He'd claimed he'd never been anywhere near the place.
Elsewhere in the city, unrest was stirring over Vietnam. Eight days later, fourteen miles southeast of the Hyatt, in Laguna Park, police would tear-gas Mexican American antiwar protesters, resulting in a riot breaking out in the surrounding streets. Amid the fog of violent confusion, four people were killed.
As edgy and dangerous as it was, the atmosphere in L.A. didn't affect Elton in his music-headed bubble, his eyes filled with stars and stripes. Eight of them had flown over from London, lining up in front of the red bus for a photo opportunity. Elton stood in the foreground, smiling sheepishly in his beard and tight-fitting jeans. Down the line ran a procession of faces whose expressions ranged from sunny grins to mild bemusement or insouciance: Bernie, Dee Murray, a kneeling Nigel Olsson, and the London-chic and slightly dandified trio of DJM sleeve designer and photographer David Larkham, Steve Brown, and Elton's now manager Ray Williams, over whose left shoulder road manager Bob Stacey peeked at the camera.
For weeks, the people at Uni had been forcefully pumping up expectation in the city ahead of Elton's appearance. Publicist Norman Winter had adopted a bold strategy: Let's treat him as if he's Elvis opening in Vegas rather than an unknown artist hitting town for the first time. Amazingly, it had worked, and as a result, Elton was all over local radio, with posters in every record store.
That night, Elton went to the Troubadour, the five-hundred-capacity hipster hangout on Santa Monica Boulevard, to check out the Dillards, the Missouri bluegrass group who'd recently gone electric. Forever the fanboy, he was "knocked out" by them, but shocked to learn that his support act at the club was to be David Ackles, the former child actor turned purveyor of intense theatrical songs delivered in moody baritone. Elton, a huge admirer, immediately tried to have the bill inverted, but to no avail. "We could not believe we were playing over David Ackles," he says. "He was one of our heroes."
With a few days to kill before the opening Troubadour show on Tuesday, the first of a mind-boggling six-night residency, Elton rented a Mustang convertible to get around. His main priority was to visit record stores and stock up on American discs actually bought in the United States. Meanwhile back at the hotel, more mundane matters prevailed. Nigel Olsson, lacking a hair dryer to blow-dry his long and carefully tended locks, had Ray Williams call a friend, Joanna Malouf, to ask to borrow one. She wasn't at home, but her flatmate Janis Feibelman was. Soon after, Janis arrived at the Hyatt with her sister Maxine in tow, who instantly caught Bernie's eye.
The next day, everyone, apart from an increasingly nervous Elton, went on a road trip to Palm Springs. Alone and stewing in his hotel room, his anxiety pulling his mood downward into an almighty huff, he called Dick James in London and—quite rightly—moaned that Ray Williams had abandoned him.
He didn't have long to fret, however. As a Uni Records artist, Elton now shared the roster with Neil Diamond, and so the company arranged for him to go and visit his new labelmate at his house off Coldwater Canyon for some encouragement before the first Troubadour show. Upon his arrival, Elton's nervousness began to get the better of him and he seemed painfully shy and socially awkward. Diamond thought: _This kid is never going to make it._
Day by day, the pressure in Elton's mind had been building. The night before the show, he suddenly blew, standing up in the middle of a packed restaurant and saying that was it, he was going home. The enormity of what he'd let himself in for was throwing his head into severe turmoil.
Not that he really needed to worry. Sound-checking at the Troubadour the next day, his mood changed. He instantly felt at home. The band was clearly polished and more than ready for the show.
"We were like a new engine," he says. "We'd done our mileage. We were run in."
Here he was onstage at the venue where Lenny Bruce had been arrested for obscenity in '62, where the Byrds had found one another in '64, where Joni Mitchell and Neil Young had made their starmaking debuts, and where he was now playing a piano that Laura Nyro had touched only two weeks before. Everything around him seemed to blur his twin realities as fan and performer and serve to both unnerve and empower him. Reg may have been feeling jumpy and wired about the whole affair, but Elton was now supremely confident.
Walking into the venue midafternoon, Uni marketing man Rick Frio was taken aback: "The three guys were onstage and the first thing I thought was that they were playing the record behind them. There was so much music coming out of those three fellas that it was incredible." He immediately called Russ Regan back at the office. "I said, 'We're home free, it's gonna work.' 'Cause up till that point, we were doubtful. I mean, we had never seen them, had never even met them, and all we had was the record. It was gangbusters from then on."
—
THE NIGHT OF the show, the Troubadour was packed. Uni's determined push ensured a respectable smattering of celebrities seated around tables in the club, including Quincy Jones, Mike Love of the Beach Boys, Gordon Lightfoot, Danny Hutton of Three Dog Night, and the formidable folk blues singer Odetta, all waiting for the appearance of this twenty-three-year-old nobody Elton John.
"It was very hot and smoky and a great vibe," he says. "I honestly think they weren't expecting what they were gonna see."
Come ten o'clock, Neil Diamond walked out onto the stage to say a few introductory, if oddly noncommittal, words. "Folks," he began, "I've never done this before, so please be kind to me. I'm like the rest of you—I'm here because of having listened to Elton John's album. So I'm going to take my seat with you now and enjoy the show."
Then Elton stepped into the light, colorful and alive and a startling contrast to the half-lit and glum-looking individual on the cover of his eponymous LP. He sat down at the piano in an outfit designed by Tommy Roberts of London's Mr. Freedom boutique: yellow bell-bottomed coveralls with a grand piano appliquéd on the back, a long-sleeved black T-shirt bearing white stars, and, to complete this outlandish look, white boots affixed with green bird wings.
At first, the crowd did get what they'd perhaps come expecting to see, as he launched solo into "Your Song" before Dee and Nigel slid in to join him on the third verse. But as early as the second number, he began to transform amid a pummeling and gutsy-voiced "Bad Side of the Moon." He was up and away. In his mind, he was competing with the Rolling Stones, not mild-mannered singer-songwriters trapped behind a piano. By song three, "Sixty Years On," which dramatically built from delicate piano arpeggios to thunderous instrumental passages, a far cry from their more introspective stylings on vinyl, he knew he had them.
"With a three-piece band," he points out, "there's no way you can just sit there and interpret those songs à la record, because it was an orchestral album. So we went out and did the songs in a completely different way and extended them and extemporized and just blew everyone away."
As the show rolled on, he fueled the intensity, through "Border Song," "Country Comfort," "Take Me to the Pilot," and then, to make the point explicit that here was an apparently meek character with a rock'n'roll heart, "Honky Tonk Women." Firing into the set closer, "Burn Down the Mission," he kicked the stool away and lunged into the vamping sections, launching his heels into the air for a series of handstands as he stretched the song over the ten-minute mark with detours into Elvis's "My Baby Left Me" and the Beatles' "Get Back." A shocked Neil Diamond was cheering so loudly that he spilled his drink.
"I was leaping on the piano," recalls Elton, still thrilled by the memory. "People were going, 'Oh my God.' Right place, right time, and you seize those opportunities."
—
IT WAS THE performance that made him. Russ Regan was amazed to discover that within the space of forty-five minutes, he'd landed himself a star. "I knew we were going all the way," he says. "I just knew it." The Troubadour's owner, Doug Weston, was similarly astonished. Having witnessed scores of landmark debut performances at his club, he reckoned "no one had captured the town as completely and thoroughly."
But still, afterward in the dressing room, Elton fell to earth and some of Reg's awkwardness returned. Uni publicist Norman Winter brought Quincy Jones backstage and introduced Elton to him as "a genius." Elton was horrified. Later he angrily tore a strip off Winter: "Never do that to me again." People were telling him he was the greatest. But inside, he didn't feel like the greatest.
"I don't think I ever believed the hype," he says.
Crashing down to earth backstage at the Troubadour.
The day after the show, he was interviewed by _Rolling Stone_ for the first time and came across as self-effacing and "oddly subdued...almost fragile" to writer David Felton. "I don't want the big star bit," he declared. "I can't bear that bit. What I want is just to do a few gigs a week and really get away from everything and just write, and have people say, 'Oh, Elton John? He writes good music.' "
Of course, he protested too much. But this strange admission did reveal the schism that was to widen as his career progressed: the desire for musical credibility versus the bright lure of the showbiz spotlight.
—
THE SECOND NIGHT, he looked up from the piano halfway through "Burn Down the Mission" and there in the second row sat the long-silver-haired figure of Leon Russell staring back at him. "I nearly fucking shit myself," Elton laughs. "Leon was such a striking-looking man and my biggest influence at the time, without question."
Meeting him in the dressing room after the show, Elton was relieved to discover that instead of being annoyed that he'd cribbed some of his eccentric rock'n'roll piano player act, Russell was friendly and complimentary and even invited Elton to his house the next day. He turned up suffering with a throat that was ragged from two nights of belting it out. Russell gave him a tip: Mix one spoonful of honey and one spoonful of cider vinegar with the hottest water you can take, gargle it for a minute, spit it out, then do it again and again. "I've done it from that day," Elton says.
On the morning before the third show, August 27, came the confirmation, via the printed word, on page 22 of the _Los Angeles Times,_ that something vital had happened that first night at the Troubadour. Their highly regarded rock critic, Robert Hilburn, had submitted a review of the show that no one could ignore:
> Rejoice. Rock music, which has been going through a rather uneventful period recently, has a new star. He's Elton John, a 23-year-old Englishman whose United States debut Tuesday night at the Troubadour was, in almost every way, magnificent.
>
> His music is so staggeringly original that it is obvious he is not merely operating within a given musical field. He has, to be sure, borrowed from country, rock, blues, folk and other influences, but he has mixed them in his own way. The resulting songs are so varied in texture that his work defies classification into any established pattern.
>
> Beyond his vocals, melodies and arrangements, there is a certain sense of the absurd about John as a performer that is reminiscent of the American rock stars of the mid-1950s. Only someone with that wild, uninhibited view of his music would dare ask the audience to sing along—something that is almost never done any more—or drop to his knees, like Jerry Lee Lewis used to do. The audience...roared its approval.
>
> By the end of the evening there was no question about John's talent and potential. Tuesday night at the Troubadour was just the beginning. He's going to be one of rock's biggest and most important stars.
Elton was floored. "It was a turbo review," he says. "It spread to New York, Chicago...it really kick-started our career and in a hugely quick way."
Calls started coming in from the promoter Bill Graham, and from the producers at _The Ed Sullivan Show_.
The review didn't just cement Elton's reputation. Its glowing praise bolstered his confidence and reinforced his self-belief. From the third show on, he came further out of his shell, displaying a campiness onstage that he had previously hidden from view.
That day he'd enjoyed a trip to Disneyland, where Uni had managed to lay on the celebrity treatment for him, ensuring he was whisked to the head of the lines. He left having bought a pair of Mickey Mouse ears. That night at the Troubadour, in combination with a pair of shorts, he wore them to perform "Your Song." It was a glimpse of the Elton of the future.
—
IN THE DAYS that followed, the Los Angeles music community further embraced Elton and Bernie. Danny Hutton arranged for them to visit his friend, the drug-damaged Brian Wilson, at the time slowly reconnecting with his music by writing seven of the twelve songs on the Beach Boys' upcoming album _Sunflower,_ set to be released on the last day of August.
As they arrived with Hutton and his girlfriend, the actress June Fairchild, at the gated entrance of Wilson's Bel Air mansion, Elton's and Bernie's minds were reeling. Danny pressed the intercom and Brian answered, jokily singing the hook of "Your Song" manically sped up: "I-hope-you-don't-mind-I-hope-you-don't-mind-I-hope-you-don't-mind."
"He was not well at the time," says Elton. "His wife, Marilyn, was fabulous: 'You wanna hamburger, Brian?' We had dinner and the dining room was filled with sand. He went upstairs to introduce us to the kids, woke them up. 'This is Elton John, I-hope-you-don't-mind.'
"Bernie and I were freaking out. I'm from Pinner, he's from Lincolnshire. We hadn't taken a drug in our lives."
After dinner, Brian led them into his home recording studio to play them the master tape of "Good Vibrations." Not more than ten seconds in, he pressed the Stop button, confusedly, saying, "No, that's not right." Then he tried to sell Elton his grand piano. At four in the morning, they left, completely disoriented. "I mean, we were absolutely in awe of this man," Elton says, "but freaking out because we'd never been in such a weird situation."
Weirdness abounded throughout this eye-opening California trip. Another night, Elton drove in the Mustang up to Hutton's place high on Lookout Mountain Avenue in Laurel Canyon, the oh-so-hip artistic enclave that in the past had drawn the likes of Orson Welles and Natalie Wood to its leafy calm and in recent years had provided creative dropout sanctuary for the Byrds, the Mamas & the Papas, Joni Mitchell, and Crosby, Stills, Nash and Young. In these worrying and jumpy times, though, as the hippies were tipping toward dangerous hedonism, the Canyon had increasingly become a magnet for unsettling freaks and drug-peddling criminals. Worse, the collective drug paranoia of these overindulging artists had been rendered horrifically real by the brutality of the Manson murders.
Blissfully tuned out from these disturbing frequencies, there at Hutton's house Elton met Van Dyke Parks, the cerebral, bespectacled lyricist for the Beach Boys' aborted _Smile_ album. They had dinner and Elton played Hutton's piano to entertain them. They stayed up all night, and sometime after seven in the morning, he got back in the Mustang.
Driving down the hill on Laurel Canyon Boulevard, heading home to the Hyatt, Elton felt strangely energized. His time spent in Los Angeles had seen him grow up and get wise. He'd met some of his heroes, but he'd also rubbed shoulders with people he considered to be "con men and hipsters." Elton realized he could see through them. Inside, he pledged never to end up like these sad music-biz hustlers.
He thought to himself: _God, I was so naive a week ago. And you know what? It's really weird. I've never stayed up till seven in the morning in my life. I really feel good. I must be excited._
"Years later," he says, "Danny told me that they'd put cocaine in my food. I'd no idea the first time I did cocaine."
The trio that sounded like an orchestra: (left to right) Nigel Olsson, Dee Murray, Elton.
BERNIE TAUPIN'S HEAD was full of western stories. The allure of frontier days had begun for him in the fifties with the flickering black-and-white TV images of Roy Rogers, the Lone Ranger, the Range Rider, and Lash LaRue, all saving the day and riding into the sunset. Beamed into his home on a rural English farm, these evocative visions made a powerful childhood impression, bringing adventure, as he puts it, to "the dreariness of your upbringing." Later, outgrowing the more cartoonish capers of these on-screen heroes, he'd been drawn more toward grittier tales of the American West—trailing the intrepid gold prospectors through the snow in Johnny Horton's 1960 hit record "North to Alaska," or discovering that sometimes, as in the songs of Marty Robbins, the outlaw might be hanged.
For his part, Elton absolutely hated westerns. They bored him to tears. If a cowboy film came on TV, he would immediately switch it off.
But Elton of course loved the Band and their romantic visions of bygone America, and so he could relate to Bernie's lyrics about the West: the stagecoach fugitive caught by the Pinkertons in "Ballad of a Well-Known Gun," the Confederate army enlister of "My Father's Gun," the hopeless fiery uprising of the poor and broken folk in "Burn Down the Mission." These were clearly written in homage to Robbie Robertson's songs for the Band, though, brimful of Elton's musical character, they could hardly be dismissed as plagiarism. "I wouldn't say it was a blatant rip-off," says Bernie, "because, God, if only I could have ripped off so well."
So rich were Bernie's impressions of the United States that he had actually penned these lyrics before ever setting foot on American soil. Song by song, they made up Elton's third long-player, _Tumbleweed Connection_. Thanks to the tough contract with DJM, which required two LPs a year, even by the time Bernie and Elton reached America, they had another album in the bag.
It had been recorded in March 1970, at Trident Studios, almost dovetailing with the sessions for _Elton John_. In contrast to that record's lush orchestrations, however, _Tumbleweed Connection_ was a more pared-back and earthy affair. The main reason for this was the involvement of Caleb Quaye's new band, Hookfoot (including _Empty Sky_ drummer Roger Pope), who lent the songs their hard-edged country rock swing and rolling soul grooves. "Even though it had some orchestration," Elton points out, "it was far more funky."
The only track on _Tumbleweed Connection_ that echoed the baroque arrangements of the _Elton John_ album was "Come Down in Time," which delicately swelled from harpist Skaila Kanga's gently plucked introduction into an elaborate Paul Buckmaster score for strings and woodwind. Bernie's eerie and almost otherworldly lyric concerned potential lovers who never quite manage to meet—kept apart by either physical remove or the distance of time or, perhaps, given the way the female character seems to haunt the narrator, bereavement. Subtle and open-ended, these were Taupin's most sophisticated stanzas to date. " 'Come Down in Time,' " says Elton, "is an _astonishing_ lyric for someone who's not even twenty."
For the most part, though, _Tumbleweed Connection_ was soaked in American influences. "Where to Now St. Peter?" drifted along on dreamy California verses reminiscent of Joni Mitchell. "Amoreena" was a yearning country soul love song with images of cornfields and cattle towns and a heart as big as Texas. "My Father's Gun," both lyrically and musically, made the reference to the Band explicit, coming across like a slowed, if no less rousing, take on "The Night They Drove Old Dixie Down," complete with Preservation Hall–style brass. "Burn Down the Mission" was the epic six-and-a-half-minute closer, moving from gospel verses and choruses into the driving up-tempo instrumental passages that Elton used to full effect in his live show.
Given that Bernie had never been to America when he wrote the words for these songs, there was the odd incongruity, particularly in "Country Comfort," which convincingly sounded as if it was born in the southern United States, except for the glaringly British inclusion of a hedgehog in the final verse. Similarly, the sepia-toned gatefold sleeve of _Tumbleweed Connection_ depicted Elton and Bernie hanging out together on an old-time train platform, as if beamed back to pioneering railroad days. Only on closer inspection did the station's metallic plate ads for the _Daily Telegraph,_ Swan Vestas matches, and Cadbury's chocolate reveal the scene to be a very English one, shot by David Larkham on the Bluebell Railway steam train heritage line running through the southern English county of Sussex.
Nonetheless, even if it was recorded in London in the cold spring of 1970, where the winter snow had turned to icy drizzle, _Tumbleweed Connection_ was very much an album with its soul in America—and one set to resonate with the country's vast populace of record buyers.
—
AFTER THE HEAD-REELING highs of the Los Angeles shows in August, Elton, Bernie, and the touring party flew up the coast to San Francisco for a comparatively muted date before an audience of the city's music biz tastemakers at the Bay Arena on September 8 and then jetted across the country for two East Coast gigs, in Philadelphia at the Electric Factory on the eleventh and twelfth.
Surprisingly, even following the triumphs of the Troubadour appearances, there was a lingering doubt in some quarters at Uni Records as to whether Elton had the stuff to turn the hype into record sales. Russ Regan tried to tune out the naysayers who questioned his comparatively lavish spending on his newly favored artist. They had even begun to call the British singer-songwriter "Regan's Folly." On the road in Philly, Regan was on the phone to Uni's financial controller back out west when the latter let slip this derogatory in-office nickname. Regan exploded, furious, and yelled at the guy. Later, he'd say he felt he was on the verge of dropping dead of a heart attack, right there and then in his room at the Philadelphia Holiday Inn.
Come showtime at the Electric Factory, Regan was no less pumped up. Neither was a gung-ho Elton with, in his mind, nothing to lose: "We just blew the place apart." The audience response was immense, and physical. Midway through the show, Uni's Rick Frio worried that the floor of the fifteen-hundred-capacity venue might collapse: "It felt almost dangerous. I'd never been in a concert like that where you thought the floor was gonna cave in." Regan, flying on adrenaline and vindication, grabbed the promoter by the neck and shouted, "You see what I mean?!"
If the Troubadour gigs had wooed the critics and Elton's fellow musicians, the Electric Factory shows were the moment when it became clear that he could equally captivate a real ticket-buying audience. Around eleven o'clock the morning after the first show, Regan was awoken by a call from someone in the New York offices of MCA, Uni's distributor, wondering what the hell had happened in Philly. They'd just had orders from various record stores totaling five thousand albums. Regan fell back asleep, only to be woken again two hours later by another call saying orders had just come in for another five thousand.
A swift comedown from their Philadelphia buzz, New York, a week later, was a downer. Booked to play a lunchtime promotional show at the Playboy Club, Elton turned up late, midafternoon, and ended up performing in this chichi environment to a thinning crowd of journalists, most of whom had gone back to work. It was a disaster. Emotionally drained after the performance, he burst into tears.
For Bernie, New York was terrifying, a far cry from the sunlit, if paranoid, L.A. He felt as if he had descended into "the bowels of hell." The first night they arrived, booked into the Loews Midtown, the police shot someone outside his hotel window. "For me," Taupin says, "with my background and upbringing, just learning to come to terms with New York was absolutely devastating." The next day, shocked and numbed, he wrote a lyric, "Mona Lisas and Mad Hatters," which seemed to scorn the city and its swarming occupants, dwarfed by what he saw as light-blocking skyscrapers.
"If you dissect it," he argues, "it's not a put-down. It's a song about being pretty scared of New York."
It was also the sound of an alienated farm boy trying to get his head around his new circumstances and surroundings, a theme he would return to again and again.
—
ARRIVING BACK IN London, Elton was supercharged. The first public display of this newfound confidence was at his opening slot for folk-rockers Fotheringay on October 2 at the Royal Albert Hall.
The wheels for this pivotal London show had been set in motion three months earlier, in the dark and uncertain days of July in London when he'd thought about giving up—which, after America, already seemed an age ago. Elton had been booked as a pianist and singer on a jobbing session for the Warlock Music publisher. Headed by Joe Boyd, Warlock's Boston-born owner and producer who'd done much to advance the progress of hippiefied folk with Fairport Convention and the Incredible String Band, the sessions were an attempt to rerecord some of the Warlock writers' songs in a more commercial vein, with a view to selling them to other artists.
Elton was keen to get involved, being particularly fond of the achingly melancholic songs of Boyd's troubled protégé Nick Drake. He turned up for the two-day session at Sound Techniques Studio in Chelsea and efficiently laid his contributions down on tape, rendering in particular Drake's "Time Has Told Me" and "Saturday Sun" in a country rock style close to that of the recently recorded but still unreleased _Tumbleweed Connection_. The other singer booked to appear on the recordings was Linda Peters (later to marry Fairport Convention's Richard Thompson and become his artistic partner). Peters had never been in a studio before, and to calm her jangling nerves, she proceeded to get wasted on Valium and wine. "I don't remember much about it," she later admitted. "Except that Elton had to hold me up to the microphone."
Also playing on the session was Jerry Donahue, the guitarist who'd recently joined with Fairport Convention's departed vocalist Sandy Denny to form Fotheringay. Chatting to Donahue at the session, Elton asked about the possibility of his being first on the bill at the band's upcoming show at the prestigious Albert Hall. Donahue felt Elton was a "sensitive" player and talked the other members of the group into giving him the gig. He would come to sorely regret the fact that Fotheringay knew very little about Elton's stage show: "We had no idea what he had in mind, that he was going to do the most incredible rock'n'roll show."
Walking tall after his American adventures, Elton barreled onto the stage and took command of the venerable domed Victorian venue with his stool-kicking, piano-bashing act, upstaging Fotheringay even before they'd managed to reach the stage. After his set, a reeling and shaken Sandy Denny approached Elton backstage with the words "How are we supposed to follow that?"
The truth was they couldn't, and they didn't. Fotheringay's whimsical electric folk sounded hopelessly weak in comparison to what had come before, and the show was one of the chief factors in their subsequent breakup three months later. That night, understanding the awkward situation he'd created, Elton slipped out of the building before Fotheringay had even begun their ill-starred set.
—
HE WASN'T OVERLY cocky, though. He knew that his return to America later that month would prove whether the ripples created by the Troubadour and Electric Factory shows would turn into significant waves, or dissipate as quickly as they had appeared.
In October in the States, "Take Me to the Pilot" was released as the next single, with "Your Song" bafflingly banished to the B side in favor of the more up-tempo A side. In the end, radio DJs made their own decisions about the tracks, flipping the single to air the far more affecting ballad. Russ Regan was driving on the Hollywood Freeway when "Your Song" suddenly popped up on the local AM radio station KHJ. He was so moved that he had to pull the car over. "I cried like a baby," he says. "It was just very emotional to me that I felt that I was gonna be vindicated."
That same month, the _Elton John_ album entered the U.S. chart and began a slow climb. Elton was thrilled to see his name in the _Billboard_ listings, alongside George Harrison's post-Beatles triple album splurge _All Things Must Pass_. Harrison even sent a telegram congratulating him. To Elton, with success came "a sigh of relief."
This second U.S. jaunt started out on a flat note, however, with a series of poorly attended shows at the street mission turned rock hall, the Boston Tea Party, on October 29–31. Undeterred—and finding his road legs that would never seem to tire—Elton crisscrossed the States, sharing stages with Leon Russell, the Byrds, Poco, the Kinks, and Eric Clapton's new, dangerously drugged-out group Derek and the Dominos. There was much in the way of touring life camaraderie with the other musicians, but also fierce, if friendly, competition.
"Every time I was second on the bill," Elton remembers, "whether it was with Leon or the Kinks or whoever, my thing was, 'I'm gonna go on and you're gonna have to follow me, 'cause we're gonna tear the fucking house down.' And I would say ninety-nine percent of the time, everyone who came on and followed me _did_. And I stood there and went, 'Fuck, yeah.' You think you're good and then they go on and they're even better than you are.
"Derek and the Dominos were on fire by that time. But I remember playing in Chicago at the Auditorium Theatre and _we_ were on fire. Then Eric came on and I thought, 'Well, good luck.' And they were _fucking_ incredible and your respect for someone goes up so much."
Every day seemed to reveal a jaw-dropping moment, not least when Elton and the group returned to the Electric Factory in Philadelphia in the first week of November. Unbeknownst to them, the Band, playing on the seventh in Worcester, Massachusetts, put their stage time forward so they could fly down the coast in their private plane to catch Elton's show.
That night, he was in feverish form, ripping up his shirt and throwing the torn pieces to the crowd. Afterward, as he was cooling off, the members of the Band casually wandered backstage. The sight of them seemed so utterly unreal that Elton turned woozy.
He felt he might lose control of his bodily functions and perhaps suffer a potentially calamitous, bowel-loosening mishap similar to his reaction when he'd spotted Leon Russell in the audience at the Troubadour. Repeating one of his graphic favorite phrases, Elton recalls, "I nearly shit myself. They just walked into the dressing room. They were _such_ a huge influence."
Tentatively, he put a copy of the freshly pressed _Tumbleweed Connection_ on a turntable. It was clearly a record that was very much in debt to the Band. They told him they loved it. Talk turned to whether he'd maybe like to come up to their spiritual home of Woodstock to record, maybe write a song for them. Elton managed to play it cool, but inside, Reg was screaming with joy. It seemed as if it was in a different life that he'd excitedly waited behind the counter at the Musicland record shop in London to bag a copy of the Band's self-titled second album. In reality, it had been only fourteen months before.
—
WHEN ELTON RETURNED to Los Angeles, it was for the coronation of the piano king. Upscaling from the Troubadour, he headlined the three-thousand-seat Santa Monica Civic Auditorium, with Ry Cooder and Odetta opening for him.
Elton arrived onstage in rectangular shades, a towering brown top hat, and a black cape. Throwing the cape off to reveal a yellow jumpsuit adorned with a huge Donald Duck badge and a smaller pink-faced, red-nosed plastic clown mask just above his groin, he sat down at the piano and started solo and slowly, with "I Need You to Turn To" and "Your Song," before gearing up through "Bad Side of the Moon," "Country Comfort," and "Sixty Years On." He was so keen to perform a dramatic new song, "Indian Sunset," which Bernie had been moved to write after visiting a Native American reservation, that he read the words from a piece of paper as he sang. A film crew was in attendance to capture every moment, their cameras almost blocking the view of the paying audience.
Under the hot stage lights, Elton was boiling, performing under layers of clothing that he discarded as the show progressed, in a comedic striptease, taking off his jumpsuit to reveal another underneath before he peeled it away to end up in a long Fillmore West sweatshirt matched with mauve tights that Bernie's now-girlfriend Maxine Feibelman had dared him to wear.
During "Burn Down the Mission," enthralled by his handstand acrobatics and silver-booted keyboard kicking, the audience rushed to the lip of the stage, overwhelming the security guards. When the crowd howled for an encore, Elton was forced to return to the stage, instill calm, and then explain that he had nothing left to perform. He ended by busking his way through a cover of John Lennon's "Give Peace a Chance."
Elton had brought the house down. But not everyone was bowled over by this arch and outlandish display, which was sometimes at odds with the serious tone of the songs. In his review for _Billboard,_ Eliot Tiegel perceptively reasoned, "Elton John faces a major decision in his short career. Does he abandon his valid musical skills in favor of being a 'stage freak' using unnecessary physical tricks?" Even Robert Hilburn worried that the theatrics were in danger of overshadowing the music, cautiously revising his effusive write-up of only two months earlier in the _Los Angeles Times:_ "Some felt John was trying to use gimmicks to further his career. They felt he should stick to the music, that the fancy clothes and exaggerated Jerry Lee antics were signs of a desperate desire for success."
Later Elton defended himself against the accusation of careerism. "It wasn't desperation to be successful," he countered. "I just wanted to get away from the things that everyone else was doing. I could have come out on the stage in a pair of Levi's and a cowboy shirt. But I would have been bored to death. I just couldn't do it."
In truth, this determined dive into the dress-up box was a reaction against his shy and restricted childhood and the inferiority complex of his teens. He was making up for lost time and freeing himself. At the age of twenty-three, he was only now really beginning to live.
Moreover, by playing up the theatrics and not taking himself too seriously, he was having a ball.
"I went the more humorous route," he reasons, "because (a) I was stuck at the fucking piano and (b) I never saw myself as a sex symbol."
—
THE STARS WERE aligning for him in other ways as well. The airwaves were suddenly alive with his music. The rise of stereo FM radio—a high-fidelity sonic wonder compared to the tinny qualities of AM—was, as luck would have it, perfectly suited to the glossy production values of his records. Giant steps had been made in the field of studio production in the past few years, with the rapid move from eight- to sixteen-track recording. Gus Dudgeon's recording skills ensured that both _Elton John_ and _Tumbleweed Connection_ were on the cutting edge of these audio advances, with their pristine piano and orchestra reproductions and Elton's warm, full-bodied vocals.
"We were there when FM started to break," says Elton. "You put it on in the car and you went, 'Fuck, this sounds so much better than AM.' It was innovative. It was in stereo, and they didn't have a playlist like the AM stations. It was a changing time and it was fucking exciting." Soon, a name was coined for this new, FM-friendly musical genre—Album-Oriented Rock, or AOR. It was to prove vital to Elton's career.
In the States, live-to-air FM performances were becoming the vogue, making listeners feel as if they were in the studio or concert hall rather than on the receiving end of remote, crackly transmissions. Elton had already played live on KPPC in Pasadena, the first FM underground rock station in California, but it was on the East Coast that he would give his most memorable radio performance, on New York's WABC-FM.
November 17 saw Elton, Dee, and Nigel gather at producer Phil Ramone's A&R Studios on West Forty-Eighth Street, the three wearing headphones, as if making a record, albeit in front of an audience of more than a hundred invited guests. The station's silky-voiced DJ, Dave Herman, made the introduction: "Would you welcome, very warmly, those of you at home, those of you here, Mr. Elton John."
At first, there were sound problems in the studio, which Elton addressed in very polite, almost affected English tones after "Your Song": "Can you turn the piano down? It's very loud and I can't hear what I'm singing in the cans." He quickly loosened up, however, teasing the listeners at the end of "Country Comfort" by saying that Bernie "did the Palais Glide during that number naked throughout the audience. So if any of you heard squeals of delight, it was Bernie there."
Later, he further lightened up and revealed more of his self-deprecating humor, explaining after "Border Song," "We've played that so much, now we call it the 'Boredom Song.' " After kicking into "Bad Side of the Moon" and a gutsy-voiced "Take Me to the Pilot," he sounded genuinely overwhelmed by the reaction of the audience, whose collective cheers and whoops sounded as if they were produced by a far larger crowd. WABC quickly took the decision to cancel its commercials and news bulletins, allowing the band to run straight through, and "Burn Down the Mission" passed the ten-minute mark, segueing into the full-tilt rock'n'roll mash-up of "My Baby Left Me" and "Get Back." At the end, having hammered his fingertips raw through the sheer force of his playing, Elton left blood on the piano keys. To quote Herman's closing assessment of this storming performance, it was "outtasight."
The WABC gig exists as a sonic document of the excitement stirred up by Elton's 1970 live shows. Initially, as a key indicator of how popular he was becoming in America, enterprising souls taped the show from the radio and it was widely bootlegged on vinyl in various editions with titles such as _Very Live,_ _Knockin' 'Em Dead Alive,_ and _Live E Jay_. In the end, DJM Records was forced to attempt to nullify these counterfeits with its own release of the show, titled (because of the differences in date stylings) _11-17-70_ in the United States and _17-11-70_ in the UK. In keeping with its cool-giving black-market origins, the cover was intended to look like an actual bootleg, with an unfussy monochrome image of Elton in midflight, standing bent over the piano.
"It was never meant to be a live album, it was meant to be a broadcast, but the playing on it was phenomenal," Elton points out, with none of the immodesty this statement suggests, and more the buzzed enthusiasm of a fan. "It's one of the best live albums I've ever heard."
Three days after the WABC show, he opened for Leon Russell at two shows at the Fillmore East on the Lower East Side. Startling Elton, from the opening line of "Your Song" it was apparent that the audience knew every word, which they sang back at him. Although neither Elton nor Bernie was aware of his presence during the show, in the crowd that night was Bob Dylan. Later, possibly their greatest idol came backstage to say hi. "Bernie and I were just like, 'Fuck!' " says Elton. "Dylan said, 'I love that song about My Father's Gun.' We were like, 'Uh...uh.' Dylan has an aura about him. It's not frightening. It's just...foo, blimey."
Nigel Olsson was a witness to this overwhelming encounter: "They went nuts. They couldn't believe it. Bernie was almost in tears. Bob's there with a little briefcase and glasses and looked like an accountant."
The next night, an impressed Dylan returned, bringing along his wife, Sara, Paul Simon, and John Phillips of the Mamas & the Papas. The news reached Britain and a _Melody Maker_ report appeared the next week with the cred-bestowing headline DYLAN DIGS ELTON!
Meanwhile the demand on the West Coast was such that two more shows were booked in California for early December, at the Anaheim Convention Center on the fourth and the Swing Auditorium in San Bernardino on the fifth. It was all happening so fast. Astoundingly, Aretha Franklin released a powerful cover version of "Border Song," fully realizing its gospel intent, and it entered the Top Forty. Elton was approached by director Hal Ashby, who'd seen him in concert, to star as the death-obsessed Harold Parker Chasen in his next film, the black comedy _Harold and Maude,_ which the singer felt was an ambitious step too far for him to contemplate. "To do films properly," he reasoned at the time, "you've got to work at it full-time, devote all your energy to it."
To cap the year, Elton was invited by NBC TV to appear on _The Andy Williams Show,_ in the same episode as another of his formative heroes, Ray Charles. Williams was quietly amazed that Elton turned up wearing a cape and an earring. Elton performed "Your Song" alone before, for the show's finale, side by side, he and Charles, respectively playing black and white grand pianos, traded verses on Stevie Wonder's "Heaven Help Us All" along with the host and a kaftan-wearing Mama Cass, the audience clapping in time.
He had turned his fortunes around in a way that had been unimaginable at the beginning of 1970, the year that changed everything for him. The words of the opening song of _Tumbleweed Connection_ were ringing loud and true: In Old West parlance, he was now a well-known gun.
Walking down the street together one day in New York, unrecognized by passersby, Bernie turned to Elton, offering him sage words:
"I think you'd better savor your anonymity now," he told his friend. "It'll be gone soon enough."
The bashful star returns home. Elton with his mother, Sheila, and stepfather, Fred, Frome Court, Pinner, 1971.
IN AMERICA, it was easy to feel like a star. But back under the raincloud skies of London, surrounded everywhere by reminders of who you'd been and who you truly were, it was a lot harder. In reality, it was Reg standing on the pavement outside the forbiddingly trendy indoor Kensington Market in west London, afraid to enter. Inside lay stall after stall of seventies fashion hipness manned by the unapproachably cool staff—racks of regal-looking red velvet coats edged with gold brocade, a sea of suede jackets with shiny sovereign buttons, piles of corduroy caps in whatever color you fancied.
His friend June—wife of his pal Marc Bolan, who in the first month of 1971, after years of sideline sixties struggle, had finally risen to number two in the UK chart with T. Rex and "Ride a White Swan"—grabbed Elton and forced him to walk through the doors. "I was so self-conscious," he says. "June would take me by the hand."
In Britain, in terms of real fame, Elton was still pretty much nowhere. Leaving Los Angeles, he'd enjoyed a send-off from a group of fans at the airport. Arriving back at Gatwick, he'd walked down the quiet corridors entirely unnoticed. It was the suddenness of it all that took him aback. At the same time, he now harbored a real determination to make it work for him back home.
He wouldn't have to wait long. Word of Elton's U.S. triumphs had crossed the Atlantic, not least with the _Melody Maker_ report of Dylan gracing him with his presence in the audience in New York. When the weekly music paper duly ran an interview with Elton in their first issue of 1971, the bitingly funny character that his friends knew, possessed of a mercilessly sharp tongue, first surfaced in print. Discussing his favorite subject—new records—he passed catty comment on the current exploits of the former Animals singer Eric Burdon, now fronting an almost all-black American funk band called War. "Have you got Eric Burdon's new one, Black Man's Burdon?" Elton was quoted as saying. "There's one track I like. But he should have been born black and given us all a rest."
What followed suggested that Burdon didn't take the comment at all well. Three weeks later, Elton and the band were booked to appear at MIDEM, the yearly European music business shindig held in Cannes on the French Riviera. Preceding him on the bill were Burdon and War. Both acts were assigned strictly timed fifteen-minute showcase slots. But, rebelliously digging deeper and deeper into their soulful grooves, and despite the screaming efforts of the organizers to halt their set, War went on, and on, finally leaving the stage after more than an hour. Pacing around, fuming, Elton stormed out of the theater.
The next night, he was persuaded to return. But before the trio could finish their short appearance, a too-hasty backstage operative stupidly brought the stage curtain down in the middle of their last number. Embarrassed for a second night running, Elton's fury boiled over. He hacked his way out in front of the curtain and seethingly addressed the audience.
"Whoever organized this thing is a fucking idiot!" he bawled into the microphone, to the sound of sympathetic applause.
As Reg, he'd kicked his amplifier onstage back in his days with Long John Baldry. But this was his first public temper tantrum as Elton. It wouldn't be his last.
—
FEBRUARY 1971 PROVED he wasn't a flash in the pan in the States, with _Elton John_ hitting number four in the first week, followed by _Tumbleweed Connection_ reaching number five the next. Now that he had two Top Five albums in America, British radio DJs were forced to take notice. By the middle of the month, "Your Song" was sitting at number seven on the UK singles chart and he was back (with a proper hit this time after the phantom pregnancy of "Border Song") on _Top of the Pops_.
"Thank God we did happen in Britain as well," he says. "It would've been horrible to happen in America and not happen in Britain, because obviously that's where you were born and that's where you live. But it all fell into place."
Not without a cost, however. In order to capitalize on the surge of interest in Elton, the first third of the year saw the singer hurtling north, south, east, and west, all over the UK, to fulfill the demand in bookings. Given what he'd just undergone in the States, it proved too much for him to handle. A slew of Scottish and Welsh dates were blown out in February, followed by a cutting back of the schedule for March. These were his first gigs canceled, on medical orders, on account of stress. "I was on the edge of a nervous breakdown," he admitted at the time. "Now I've got to have three holidays a year."
The centerpiece of these shows was a high-status headliner at the Royal Festival Hall on London's South Bank, on March 3, featuring an orchestra conducted by Paul Buckmaster. If there was a lingering suspicion that some writers in the British music press had decided that Elton was getting above himself following his U.S. success and with his perceived level of onstage pomp, then it was confirmed when _Melody Maker_ ridiculed the Royal Festival show: "Elton John has shown what a musical dwarf he is...it was sad, the man, this living myth, darling of the Americans...struggling like a pygmy."
Sometimes, his growing flamboyance did in fact make him look faintly ridiculous. Closing the interrupted UK tour at the Fairfield Halls in Croydon, Elton stamped and hopped around the stage in too-tight red coveralls with a spangly purple bow tie held with a ribbon around his neck. This ill-fitting getup gave him less the appearance of a cutting-edge, fashion-advancing rock star than of a toe-curlingly overkeen children's entertainer.
A film crew from London Weekend Television captured the show for a half-hour program focusing on Elton and Bernie, aired as part of the _Aquarius_ arts series. Backstage, either a bit stoned or affecting the air of someone who was, Taupin falteringly tried to sum up his unstarry role in the operation: "Yeah, well, I live in a fantasy world," he offered. "I'm just sort of happy the way I am. And I want to stay that way, y'know. I don't want to sort of be anything pr..." He paused to think. "I mean not that I don't want to be prolific, but I don't want to be the sort of savior of modern writing. I just want to write what I want to write and if it's appreciated, y'know, people don't have to know me."
Bernie really meant what he said. He was genuinely happy to remain in the shadows. One subsequent scene in the documentary showed Elton standing center stage, presenting the visibly reluctant lyricist to the crowd. "I thought it only fair that I should introduce Bernie Taupin who never really faces his public," he announced. "And without Bernie there wouldn't be any songs anyway." Hiding a bottle of beer under his plaid coat, his eyes shielded behind shades, Bernie lifted them momentarily to eyeball the audience before grinning, bashfully waving, and then getting offstage as quickly as possible.
It was becoming a unique and strange existence for Bernie, more or less constantly touring with Elton and living a rock star lifestyle without—the odd introduction to the audience aside—ever being under the stage lights. Bernie insists he didn't ever crave fame, even if he enjoyed its offstage perks. "Truth be known," he says, "I was probably more interested in living that grass roots rock'n'roll-style life than Elton actually was. Only I was living it without having to perform."
While Elton was driving on with his career, Bernie had the space and freedom to get on and enjoy his life. Only eight months after he had met Maxine Feibelman in L.A. on the first U.S. trip, their relationship had moved to a significant new stage. In April, two days before the next U.S. tour was due to begin, they were married, at the Holy Rood Catholic Church, back in the Lincolnshire village of Market Rasen.
The wedding day was a clear indication of how quickly Bernie and Elton had become rock star news. The bride and groom wore white, though Elton outshone both in a suit bejeweled with rhinestone flowers in yellow, red, and blue and a silver silk top hat. Photographers and reporters pressed together outside the church as policemen tried to direct the flow of traffic and fans hung around hoping for autographs. It was a measure of how so much had changed in such a short time. Only a year before, no one in the media would have known them or cared.
Dick James bought the happy couple a silver Mini Cooper as a wedding present, and together the newlyweds were due to move to nearby Tealby into a two-bedroom semidetached cottage that Bernie whimsically named Piglet-in-the-Wilds. The couple honeymooned in the States—fishing on the Mississippi, visiting Civil War battle sites, driving to Dodge City and Tombstone.
A three-month American tour lay ahead and, already exhausted before it began, Elton's mind turned to thoughts of quitting. But he'd come so far. There was no stopping now.
—
AT THE SAME time, he was in danger of blowing it by flooding the market with too many albums, released too closely together. Before the American tour of spring '71 came another, the soundtrack to director Lewis Gilbert's film _Friends._ Concerning a young couple, Paul and Michelle, who meet on a flight to France, become lovers, and unsuccessfully try to set up a home together, the movie was panned at the time for its apparently gratuitous sex scenes. The soundtrack had been rush-recorded in three weeks by Elton at Trident Studios in London the previous summer, squeezed into the break between the first two American trips.
Bernie had skimmed through the script and, before Elton had even seen it, written three lyrics. There were five new John/Taupin songs featured, which cut between the orchestrated pop of _Elton John_ and the country soul of _Tumbleweed Connection_—the title track, "Seasons," and "Michelle's Song" were serviceable but a tad schmaltzy, while "Can I Put You On" and "Honey Roll" were both Band-style rock swingers. Adorning and blending the songs together were Paul Buckmaster's arrangements and symphonic instrumentals, but taken as a whole, _Friends_ amounted to an unremarkable effort.
Paramount Pictures was keen, not least because of Elton's growing status in America, to put out the _Friends_ soundtrack LP on their own label. As producer, Gus Dudgeon felt that it should be issued as a five-track "maxi single" at best, but it was released as a full album in a lurid fuchsia sleeve featuring an illustration of the film's kissing protagonists. Elton hated the cover and described it as a "fucking pink massacre" that to his mind might be the garish color of a dress worn by the showy English romantic novelist Barbara Cartland. The album sold poorly in the UK, and even though it scraped into the U.S. Top Forty, as far as its creators were concerned, it was a flop.
"They issued about six hundred thousand copies," Elton said of Paramount Records in the United States. He then quipped, "Little did they know that they were going to get five hundred and ninety-nine thousand of them back."
It was an album too far. He'd put out four LPs in the States in a year, three of them in the previous six months. Nevertheless, he was still riding high, with _Elton John_ and _Tumbleweed Connection_ both certified gold. This American tour, beginning in April, would take them everywhere, from the coasts to the heartland.
They began in New York, with three nights in the familiar environment of the Fillmore East, before setting out for Maryland, Illinois, Michigan, Ohio, Nebraska, Oregon, California, Arizona, Colorado, Texas, Louisiana, Florida, Missouri, Tennessee, Kentucky, and Georgia. "We covered everywhere," says Elton. "We did a lot of stuff in the south and a lot of [musicians] wouldn't go there."
He may have been worn out but there was a steely resolve within Elton. If he was going to truly break through in all parts of America, he knew the only way to do it was to put in the hours and the road miles. "You get up," he says, "and you do every radio broadcast, you do every print interview. There was so much going on. You worked so hard. Anyone can be successful in New York and in Los Angeles and Chicago. But there's a lot of country in between."
It wasn't, however, an ascent without rough moments in sometimes choppy air. Fame, for Elton, or maybe for Reg, was difficult to acclimate to. If a fan approached him when he was shopping in a record store, offering his compliments, he would say thank you, shuffle uneasily, and go red in the face. Others were more troubling in their gushing displays of love. He had worryingly begun to attract crazier and more disturbed devotees. As he was leaving the San Francisco Civic Auditorium after a gig there on May 9, one unhinged male fan clung to his car, pleading, "I must go home with you! Let me be a person!" Helpless and upset, Elton watched the poor guy sobbing on the ground as they drove away.
As the tour progressed, it was clear that in some quarters of the press, the knives were out for Elton. The hype had come to haunt him. In the _Seattle Post-Intelligencer,_ after catching a show at the city's Arena on April 24, the critic Stephen Chensvold lambasted it as "glossed-over and well-promoted garbage...ludicrous." In _Melody Maker,_ the paper's Los Angeles correspondent, Jacoba Atlas, went further, claiming that the latest tour was nose-diving fast: "Elton John seems to be having trouble with the middle part of the USA. His concerts have not been selling out, and in the words of one observer, 'He's dead in New York. And everyone knows New York is the center of popular opinion.' "
Those last words inflamed the promoter Bill Graham, who felt he had to bluntly and unemotionally respond by writing a letter to the music paper: "The report that Elton John was dead in the USA is not true. Elton was alive and well at the Fillmore East. He played to three packed houses in April. I consider Elton one of the truly great entertainers working today."
Counterattacking the cynics, Elton made the cover of _Rolling Stone_ on June 10, 1971, gazing up at the camera in shorts and star-patterned boots and a T-shirt that tipped the hat to his songwriting partner with the legend BERNIE TAUPIN FUNKY MONKEY. Inside, among various articles characteristically aimed at "heads"—one examining the aftermath of the shooting of unarmed student protesters at Kent State University the year before, another lamenting a "5-Ton Grass Bust on the High Seas" in San Francisco Bay—Elton cut a comparatively straight and uncontroversial figure. The feature cast him as a celibate hard worker, someone too preoccupied by his daily travails and domestics to even think about rock'n'roll indulgences or a sex life.
"I've got no time for love affairs," he claimed in the article. "You wake up in the morning—even if you have a day off—and the phone will ring: 'Can you come into the office? There's something I want to talk to you about.' Your solicitor will phone you up, for a start, or your accountant, or your manager, or your publicist—somebody will phone you up. Then you have the day-to-day things to worry about, like your car will go wrong so you have to take it in. Or the stove will blow up. It's amazing how many things go wrong in life."
Elsewhere, his quotes further revealed his downcast mood, his glass seemingly half empty. He couldn't come to terms with the sales or acclaim, and worse, couldn't imagine his career lasting. "I've got to do everything in three years," he insisted. "After three years you just have to assume it's gonna go down. Realistically I don't think I can be any more popular than I am now. And I don't want to sort of work that hard while I'm, you know, going down and getting less money and working myself dead. I just want to quit at the top. Not quit, but quit working hard." This wasn't the kind of admission typical of a performer whose popularity was still rapidly on the rise. Already, Elton was sounding hopelessly weary.
For now, there was no sign of the hard work letting up. Crowning the tour were two shows at Carnegie Hall in New York on June 11 and 12, on the days immediately after the _Rolling Stone_ cover hit the newsstands. Unknown to Elton, his mother had flown over from England for the gigs. The first that he knew about it was when he looked up to see Sheila dancing at the side of the stage. Afterward there was a party thrown in a suite decked out with carnival-themed decor at the Essex House hotel on Central Park South. Guests including Bette Midler and the rarely seen Sly Stone, resplendent in his voluminous Afro and gold lamé suit, toasted Elton's success.
It had been a challenging tour, and still it wasn't over. There were another four dates left—Cleveland, Providence, Columbia, Harrisburg—before he could finally go home.
—
HE RETURNED TO the UK, and fifteen million television screens, in an oddly low-key way. Unannounced and unexplained, he appeared with Marc Bolan and T. Rex on _Top of the Pops_ as they ripped through "Get It On," set to be Bolan's second British chart topper of the year—although the record was renamed "Bang a Gong (Get It On)" in the States, where it was Bolan's sole Top Ten hit, to avoid confusion with a then current song of the same name by the U.S. jazz rock band Chase.
Stage right at the _Top of the Pops_ appearance, Elton enthusiastically mimed the piano part for a record he hadn't actually played on, as the curly-mopped Bolan peacocked and licked his lips, in silver jacket and pink jeans, a star of glitter glued to his left cheek. Elton appeared to be in his element, up there onstage with his cool friends, and he ended the song standing upright after having nudged, rather than kicked, the piano stool away, possibly not wanting to be seen to be even trying to upstage the luminous Bolan.
It was the dawning of glam rock and Elton was there to witness it firsthand. It was also the beginning of a playful sales rivalry between the two rising stars. "A wonderful, brilliantly inventive man," Elton says of Bolan. "When Marc was coming in at number one, he'd say, 'Darling, I sold a hundred thousand records in an hour today!' He was so competitive, but in a nice way."
Music trends were moving fast, which played well with Elton's high-octane creativity and low boredom threshold, but sometimes made it hard for his audience to keep up with him. At the Garden Party, a one-day outdoor festival held at the Crystal Palace Bowl, south London, playing on a bill alongside Yes and Fairport Convention, Elton bamboozled and ultimately bored the rapidly dwindling crowd by playing nine new and unheard songs set to appear on his next album. "I remember it dying a death," he said later with a cringe. "People said, 'Oh God.' "
His career and creative decisions were becoming slightly chaotic. He was in effect a man without a plan, maniacally trying to cope with his touring schedule and the demands of his two-albums-a-year contract with DJM. Someone needed to oversee everything and plot the day-to-day scheduling. Since breaking away from his manager, Ray Williams, on his return from the first U.S. trip, it had been left to Dick James to provide nominal representation for Elton. Now he needed a devoted manager. As it turned out, the answer lay in someone who was already very close to him.
Elton had first met John Reid the year before when he'd dropped in to the London offices of EMI Records in Manchester Square to cheekily scrounge free vinyl copies of the latest releases. Reid was a music-obsessed Scot, at twenty-one two years younger than Elton, who had managed to transcend his upbringing as the son of a Paisley laborer to become the head of the UK operation of the EMI-distributed Tamla Motown Records. He made quite an impression on the singer.
Reid was clearly driven—he'd quit Scotland and his studies in marine engineering to pursue a career in the music industry. Once in the capital, he'd sold suits at a branch of Austin Reed tailors before landing a job at EMI's Ardmore & Beechwood publishing wing. Obviously a bright spark, he'd quickly progressed to Motown, where part of his job was deciding which releases from the American label would have the best chance of charting in the British market.
He'd enjoyed an early win in February 1970, picking the three-year-old and at the time largely forgotten "The Tears of a Clown" by Smokey Robinson and the Miracles out of the back catalog, releasing it as a single, and then watching it sail to number one in the UK. Blindsided by this development, Motown in the United States was then forced to reissue the track. Reid evidently had good ears, and his eyes were firmly fixed on the bigger picture.
Elton next saw Reid when he came along to the anticlimactic Revolution Club showcase for his second album in March 1970. But it was in San Francisco in September of that year, high on the victories of the Troubadour shows, that he first spent real time with the young label manager, in California on a work trip to attend the tenth anniversary celebrations for Motown. It was in this liberated city that the pair's growing friendship took a perhaps surprising turn.
Privately, Elton had always suspected he was gay. Traveling from Pinner into London's west end on underground trains in his days as a jobbing writer for DJM, he would find himself eyeing the guys, but not the girls. His relationships with the opposite sex had always been slightly awkward and doomed to failure. He was developing a conspicuous taste in ever campier attire, both onstage and off. In a fumbled, clumsy way, as was revealed much later, he had even once tried getting it on with his songwriting partner. "He made his affections known," Bernie coyly admitted years after the fact. "When I started laughing, it sort of broke the ice. He got over it very quickly."
It took Reid to confirm Elton's sexual proclivities. Returning from a highly significant night with the Scot in San Francisco, he admitted to Steve Brown, "I'm definitely gay." Elton had clearly fallen hard for the man from Tamla Motown, whom Ray Williams and the others had mischievously nicknamed Pamela Motown.
It was wholly liberating for Elton, at the relatively late age of twenty-three, to finally accept the true nature of his sexuality. Bravely, he didn't attempt to keep it hidden from those closest to him. When he got back to London, he came out to his family. Elton remembers that Sheila wasn't particularly shocked.
"Not really," he says. "She said, 'Yeah, well, we thought so, anyway.' Coming out to your parents is always traumatic. But they're not stupid. Parents are not daft. They know. But it's still traumatic. I had no resistance at all from any of my family members and from my friends, only support.
"That's all I cared about. I said, 'If my mum can accept it and my family can accept it, then I don't give a toss about anybody.' I was extremely lucky because other people don't get that sometimes. Y'know, a lot of people come out to their parents and they get rejected. But my mum has always been a modern-thinking woman. She's always been supportive, all the way through."
In London, Elton and Reid set up home together, arousing no suspicion in anyone outside the inner circle. It wasn't, after all, unusual for two young men to share a flat. Elton didn't carry himself with the mincing, limp-wristed demeanor of the archetypical "comedy" 1970s gay man, and his often colorful dress sense was easily attributed to his pop star status. He wasn't, for instance, nearly as conspicuously effeminate as the married Marc Bolan. Reid, meanwhile, was entirely straight in his appearance, with his short, side-parted hair and business suits.
Their flat at 384 The Water Gardens, a stylish apartment block on Edgware Road in west London, was a discreetly elegant and fashionable abode: modernist furniture, expensive piano, carpeted lift that opened directly into the living space. Showing her approval, Sheila supported her only son in leaving the parental sanctuary of Frome Court and beginning this new life. "We've helped him all we could," she told an inquisitive reporter in October 1971, before revealing that she was sometimes embarrassed by Elton's flamboyance. "I did nag him about his clothes and his hair, but then I had been living in this suburban place. Now I go up to London and meet his friends and he looks fine."
In fact it was Sheila who first suggested to Elton that John Reid might become his manager. Reid, for his part, was initially reluctant to take on the role: He had a highly promising career within EMI and great things were expected of him. Eventually he relented, quitting his tenure at Motown and becoming a salaried employee of DJM.
Dick James was very happy about the whole affair, quipping to his son Stephen, "Who else can we rely on to get Elton out of bed in the morning than the guy he's in bed _with_?"
Stephen James was slightly more wary of Reid and his increasingly hawklike manner. He suspected that as each contract option came up, Reid would seize more power as manager in his renegotiations: "I felt he was only out for himself and that we weren't going to get any loyalty from him." James Jr. also noticed that Elton was becoming more confrontational, emboldened by his personal and professional relationship with Reid—"If we said black, he'd say white."
Even though he'd assumed the role of manager, as far as Reid was concerned, he'd been thrown into the deep end: "It was ridiculous. I was twenty-one at the time. I didn't have any money, I had no real experience." Looking on, their friends noticed changes in the couple as their tastes in clothes and hairstyles began subtly to morph. For a time, Elton even combed his thinning locks into a Reid-ish side flick.
But although he now had a boyfriend, there was a side to Elton that still felt very much alone. The singer was so driven and passionate when it came to his career that somehow he couldn't quite settle into the personal partnership. "Even though I was in a relationship with John Reid, I felt lonely," he admits. "All I had in my life was my music. Not a bad thing to have. But, yeah, there was a loneliness."
—
IT WAS TIME to go back into the studio, and for the first time, the songs had been harder to come by. On the eponymous LP and _Tumbleweed Connection,_ Elton and Bernie had been working from a stockpile of material. For the next full studio album, the evocatively titled _Madman Across the Water,_ they'd had to start from scratch.
The unrelenting touring, particularly in the States, where he'd completed three treks in twelve months, had two audible effects on the new record. First, many of the songs featured Elton singing in a sometimes overly mannered American accent. Second, Elton had clearly been influenced by having shared the stage with heavier-sounding bands during his recent tours, and the resulting album was in parts more in tune with the fashionable progressive rock of the time.
_Madman Across the Water_ endured a troubled birth. Elton was tired and overworked, but he also felt that the normally dependable Paul Buckmaster was distracted during the sessions at Trident. The arranger turned up at the studio with no score penned for the grandly sweeping title track. For those songs he had actually prepared for, Buckmaster seemed to be bungling, knocking a pot of ink over his sheet music with eighty session musicians sitting waiting as Elton nervously bit his nails.
The singer finally flounced out of a session in a fit of pique after an argument with Dick James, who reckoned that one of the tracks had lost its direction and could benefit from being rerecorded. A week later, Elton turned up at DJM, shamefaced, with a cassette of the reworked track in his pocket to play for James's approval.
In spite of the difficulties involved in its making, _Madman Across the Water_ proved that Elton and Bernie were still very much on a creative roll. An album of weighty ballads and midpaced rockers, lyrically it came over as a travelogue that documented—sometimes directly, sometimes obliquely—Elton and Bernie's experiences touring throughout the States. "Holiday Inn" read as if it had been scribbled on a scrap of paper midflight, with its detailing of landing in Boston, then moving from gate to limo to hotel, filled with ennui yet somehow strangely beyond fatigue. The episodic "Tiny Dancer," set to become one of the duo's most enduring songs, was Taupin's open love letter to Maxine, its title inspired by how his petite wife, now employed on the road as a stagewear seamstress, would stand gently swaying to Elton's performances. On record, though, it was slightly marred by its singer's chewy vocal approximation of a southern American accent.
Elsewhere, the album was populated by a cast of imagined and decaying American characters, from the aging wino "Razor Face" to the homeless and ever moving onetime criminal and druggie in "Rotten Peaches." More tangential was the odd tale of the fictitious Alvin Tostig, who names his son "Levon," and employs him to sell balloons in town, as depicted in a David Larkham illustration in the lyric booklet stapled into the record's gatefold sleeve. Intriguingly, "All the Nasties" featured Elton, through the words of Bernie, addressing the press and ruminating on whether they might ever inquire about a private matter not made explicit in the lyric. No one picked up on the fact that it was a song hinting at the secret of Elton's sexuality.
But it was the expansive title track of _Madman Across the Water_ that proved the standout. Unfolding over close to six minutes from its opening piano and trippy reverse reverb acoustic guitar effects into a hypnotic groove, it gradually built its intense atmosphere, ebbing and flowing and aided by Buckmaster's stunning orchestrations, to the high drama of its chorus, before fading to nothing and beginning again. The playfully elusive lyric—seemingly the scattered thoughts of someone with failing mental health—was wide open to interpretation, not least because Taupin refused to explain it. Some listeners were later to erroneously believe that the madman across the water was Richard Nixon.
Wrapped in expensive and elaborate artwork depicting its title and the name of its artist embroidered into denim, the look and sound of _Madman Across the Water_ chimed with the fashions of 1971. Moreover, its warm and detailed production was very much suited to the burgeoning desire among record buyers for top-quality hi-fi sound. Even more importantly perhaps, it was FM radio ready.
Unbelievably, then, it stiffed in the UK, not quite reaching the Top Forty of the album chart, while far more successfully peaking at number eight in the States. In Britain, it seemed, the press backlash against Elton—seen to be sucking up to the Americans by spending most of his time there—had affected his sales. Appearing on BBC TV's newly launched _The Old Grey Whistle Test,_ designed to showcase more serious rock as opposed to pop music, he couldn't help but gripe. "During the year I've had my fair share of bitchiness, which I really can't stand," he moaned on the show.
In his homeland, the lackluster performance of _Madman Across the Water_ was viewed as the plummeting fall of Elton John after the rapid rise. "I like the songs on the album," he later commented. "I don't like my vocal performance. I knew after that album there had to be a change."
—
AT THE SAME time, those close to him believe this is where Elton first floated off on his star trip, pulling away from the band on the road, shielded from them by the increasingly protective John Reid. In truth, though, there was an element of survival instinct in his sudden remove and apartness. Particularly whenever he returned to the American West Coast, his arrival seemed to attract a swarm of pushers and groupies and music biz hangers-on. These were days of lax security at concert venues and sometimes it seemed as if anyone could just wander backstage and into the dressing room. "It really was a sort of open-door situation," says Bernie. "Between the wheelers and dealers and the guys in the satin jackets from the record companies...it was just such a scene."
A very different scene greeted them when they flew south across the equator in October for the first tour of Australia. Instantly, Elton felt that the island continent was behind the times compared to Europe and America. Touching down in Perth, having had his hair dyed orange with touches of green behind his ears, in a look that was more proto-punk than glam, Elton and the touring party began to attract stares and hostile comments. As they stood in line at customs, one local woman cried out, "What's this? A bloody traveling circus?"
They were more welcome in other quarters. No less unlikely a figure than the Dean of Perth (a leading cleric of the Anglican Church of Australia) invited them all to a reception to be held that evening. Elton was hopelessly beat after the long flight, and so someone from his camp made a call wondering if the function could possibly be moved to the following night. The next anyone heard was when the local TV news channel broadcast a damning item declaring, "Elton John snubs the Dean of Perth."
It was a bad omen for what would turn out to be a disastrous tour. Antipodean promoters, it seemed, hadn't yet got to grips with how to stage modern rock shows, and so the venues for the gigs were random and strange—a soccer stadium in Perth, tennis centers in Adelaide and Melbourne, a detour to New Zealand to play a speedway track in Auckland.
More bizarrely, at the airport in Sydney ahead of the final show, Elton was treated as a threat to polite society. He arrived wearing a denim jacket on which were pinned and stitched dozens of badges and patches—a sheriff's star, an insignia with the words SUPER SCHMUCK, images of Donald Duck, Mickey Mouse, and Porky Pig. But the authorities took exception to a large white button bearing the meaningless and daft legend, in growing sizes of font, BITCH BITCH BITCH, along with others they considered to be sexually suggestive. In the end, Elton agreed to cover them up with Band-Aids to "save hassle." Nevertheless, the luggage of these apparent freaks was thoroughly searched while they were aggressively interrogated about whether any of them were marijuana smokers.
Their last gig, at the Randwick Racecourse on October 31, was the worst of a terrible run. A canopy of canvas over the top of the stage collapsed and blew away in high winds and the show had to be put on hold. Ever the troupers, Elton and the band, freezing in overcoats, then performed "Waltzing Matilda," regarded as Australia's unofficial national anthem, to a chorus of accompanying voices rising up from the crowd as the rain battered down.
—
OVERRELEASING AND OVERTOURING made 1971 a tough year for Elton. At the end of it, his nerves shot, his fingers raw, he felt "like a plate of jelly."
Shopping in London's upmarket food and department store Fortnum and Mason in the first week of December, he threw another hissy fit in public. The sales assistant at the till recognized him—wow, it was Elton John. She then took a look at his checkbook, which had of course been issued under the name of Reginald Dwight, and wouldn't allow him to make the transaction, unaware that that was actually his real name. The matter was quickly resolved, but still, he felt angry and embarrassed. "Fuck this," he snapped, stepping out onto the pavement and turning to John Reid. "I'm going to change my name."
On December 8, Dick James arranged a meeting for the singer with his lawyer to make the application for a name change by deed poll. Asked if he wanted to give himself a new middle name, he said, on impulse, "Hercules."
When Elton broke the news to his mum, she went berserk. It hadn't been a problem telling her he was gay. But this was changing, forever, the birth name she had given him.
"You can call me Herc, if you like," Elton teased her.
"Fancy calling yourself after Steptoe's horse," she huffed, referring to the long-suffering nag in the BBC's then popular sitcom _Steptoe and Son,_ about two rag-and-bone men who lived in a junkyard.
But her son was determined to make the name change absolute. From here on in he insisted that everyone call him Elton. He would even tear up any letters that arrived in the post addressed to Reg Dwight.
But as he points out: "Changing your name, it doesn't alter anything."
Nonetheless, in future times, when Reg would have a wobble, he would take psychological strength from having legally assumed the identity of his far more glamorous, far more self-assured alter ego: Elton Hercules John.
Opposites—and outsiders—attract. Backstage at the Shaw Theatre in London: Princess Margaret and Elton (with her husband, Antony Armstrong-Jones, the Earl of Snowdon, in the background).
IF, IN 1972, YOU PEERED through its spiked iron gates at its weathered, ivy-clad stone walls, you wouldn't immediately think that the Château d'Hérouville had the makings of a hit factory. But the gently decaying property, built more than two centuries earlier, already held a place in French artistic folklore. Lying twenty-four miles northwest of Paris, the château in the Val d'Oise made its first impression upon a creative work as the Castle d'Hérouville in a hunting scene in _Modeste Mignon,_ the 1844 novel by Honoré de Balzac. Around the same time, the Polish-born composer Frédéric Chopin and his lover George Sand, the promiscuous, androgynous female writer, often met there in secret. Adding to the myth of the château, after their deaths, their ghosts were said to haunt its halls.
Toward the end of that century, in the last torturous weeks of his life in the summer of 1890, Vincent van Gogh could sometimes be seen from one of its picture windows, painting in the surrounding fields. Reportedly he daubed a lost image of the château itself, but it was certainly in this pastoral setting that the tormented artist produced his last great work, the vivid if gloomy _Wheatfield with Crows_. It's believed it was in these same fields that Van Gogh shot himself on July 27, before stumbling to the nearby village of Auvers-sur-Oise and dying two days later.
Most recently, in the 1960s, the château had been bought by the French film composer Michel Magne, who transformed the top floor of its south wing into a recording facility he named Strawberry Studio. The location had become something of a magnet for the sixties counterculture in northern France. Jane Fonda, in the country filming _Barbarella_ in 1968, brought Magne a sapling that he planted in the garden, which was now growing into a twisted tree.
The summer of '71 saw the Grateful Dead staying at the château when the California psychedelicists were booked to play a nearby festival. The gig was canceled, and so instead they staged an impromptu show on the grounds of the château for the local villagers. Less than keen, for obvious drug-related reasons, to have French police attend this "happening," the band made arrangements through the mayor to have fire officers provide security for the event. A welcoming meal was laid on for the firemen. Mischievously, and potentially dangerously, the Dead spiked their wine and fruit juice with LSD.
A splendid time, however, was apparently had by all. The night ended with naked firefighters frolicking in the château's outdoor swimming pool.
—
HOW ELTON CAME to discover the Château d'Hérouville was purely through financial necessity. Now that the money was pouring in, he was being subjected to the British government's punishing taxation of high earners at the marginal rate of 75 percent. His accountants advised Elton that writing and recording overseas—the same trick the Rolling Stones would pull in '72 by making _Exile on Main Street_ in Villefranche-sur-Mer—would ease some of the financial pain.
Besides, Elton believed that getting out of London and fixing his mind on his creativity might alleviate some of the stresses he'd experienced making _Madman Across the Water_. In this way, he was to prove a pioneer of what was to become known as residential recording—making music in a beautiful place out in the country. It was an experience that both Dylan and the Band in the Big Pink house in West Saugerties, New York, and Traffic in rural England had previously benefited from. "I think we were inspired," says Bernie, "by Traffic going off to make records in little cottages in the country."
As producer, Gus Dudgeon was sent to France to scout potential locations. He initially chanced upon a villa in the south of the country, which he figured he could turn into a makeshift studio. In fact, he'd already ordered a hundred fifty mattresses to sonically deaden its walls when someone told him about the château outside Paris. "The minute I arrived there," he remembered, "I thought, _This has got to be the place_. _This has got to be ten times better than renting a villa and basically building a bloody studio in the middle of it._ "
Elton and the band arrived there in January 1972 with a new addition to the lineup, guitarist Davey Johnstone. The twenty-year-old Scot, who already looked the hirsute rocker part with his long wavy blond hair, was an impressively versatile musician—a brief member of the progressive folk group Magna Carta who'd gone on to play acoustic guitar, mandolin, and sitar on _Madman Across the Water_. Elton was keen to use his touring band on an entire album for the first time, and to pare back the orchestrations for a more focused sound. Having added Johnstone to the faithful core of Murray and Olsson, together the augmented group traveled to the château.
Once there, Elton and Bernie settled into a songwriting modus operandi that was both spontaneous and intense. The château was divided into two wings: one housed the studio, the other the living quarters. It was in the latter that the communal breakfast area became an improvised writing zone for Elton, with a piano and drum kit and amps dotted around the dining room. The duo arrived at the château with only two songs prepared, but within three days they had written another nine, at a breakneck pace that would come to define their albums made at Hérouville. Dudgeon remembered one morning watching Elton—within the space of half an hour—virtually autocompose a gorgeously atmospheric new song from a lyric Bernie had written called "Rocket Man."
"All the gear would be set up near the breakfast table," Elton remembers. "Bernie would be typing upstairs."
"Our approach was very, very immediate," says Taupin. "I remember sitting on the edge of my bed just scrolling out stuff and tearing it off and going on to something else."
"Bernie would bring down the lyric, I'd write the song," Elton explains. "The band would get up, join in with me and then we'd go over the courtyard to the studio and record it. It was a pretty sensible way of doing things. It was very, very casual and very quick. It was such a creative hive of industry."
It was also sometimes an acutely spooky environment to work in. Visitors to the château were convinced that the place was indeed haunted. Later, when David Bowie recorded _Pin Ups_ and _Low_ there, he was terrified by the eerie atmosphere in the master bedroom, particularly one cold and dark corner of it that seemed to suck in any natural or electric light.
"It had a very strange vibe about it," says Bernie of the château. "It was beautiful and tranquil, but at the same time there were definitely some sort of ethereal things in the air around there."
"It _was_ haunted," believes Davey Johnstone. "Almost every day somebody would be tapped on the shoulder when they were walking down the giant staircase."
The heavy imbibing of marijuana by the band members likely enhanced their tuning in to these peculiar vibes. Not for the still drug-free Elton, though, even if he would join the others in indulging in fine French wines.
"The band were doing drugs...puffing," he says. "And we'd have a glass of wine. But we couldn't afford not to be reasonably clean-living. We were doing too much work."
It was an idyllic setup. Elton found himself playing the studio's Steinway piano beneath a thirteenth-century chandelier while occasionally glancing up to take in the lovely rural scenes outside the window. No matter how loud the band played, with no neighbors nearby, they could open all the studio's windows as they blasted away. Once a take had been completed, they enjoyed rib-rattlingly loud playbacks through the enormous speakers.
It was a loose and productive time, in what they came to call their honky château. As the recording progressed, it was clear that these new songs were so strong that they wouldn't need to be embellished by strings. The singer, privately relieved, was thinking, _No one can turn around and say, "Oh, it's Elton John with his hundred-piece orchestra again."_
—
ALL THE SAME, the first thing he did when he returned to London was to make a second, already booked appearance at the Royal Festival Hall on February 5 with a full army of classical musicians. Whereas his previous performance there eleven months before found him backed by many of the string players who'd appeared on his Trident recordings, this time around the plan was for him to up the ante by fronting the eighty-piece Royal Philharmonic Orchestra.
As it turned out, the show was something of a trial. Elton suspected that the classically trained musicians weren't giving their all because they considered a pop show to be unworthy of their talents. Conducting them, Paul Buckmaster found himself the subject of snippy remarks during rehearsals.
"Can I please have a bit of quiet?" Buckmaster asked the musicians at one point.
"Well," came the sneering reply from one of their number, "if you got your fucking hair cut, perhaps you could _hear_ quiet."
That night, Elton, sparkling in a silver jacket made up of rhombus shapes, opened the show solo before being joined by the band—including Davey Johnstone in his debut performance—for a more traditional rock set. After a break, having undergone a costume change, he reappeared in a cream-colored long-tailed tuxedo to much applause and whistling before doffing his silver top hat. He sat down at the piano in front of the orchestra and slipped into "Your Song." It was stirring stuff, but not everyone onstage appeared moved by it. As he led into "Take Me to the Pilot," some of the orchestra members sat around looking snooty and bored. Conscious of their hostile attitude, Elton became tense and uncomfortable, backed by what he perceived to be an ensemble of musical snobs. He ended an hour later with a lovely orchestrated version of "Goodbye," the closing track of _Madman Across the Water,_ but he left the stage feeling deflated.
Afterward, now irate, once again Elton said more than he should have to the press. "I thought the orchestra were a bunch of cunts," he snapped at one journalist. "They made snide remarks. I sunk a lot of money into that concert and I'll never do it again." Ray Coleman in _Melody Maker_ reckoned that the show was contrived: "Majestic occasion though it was...not the mindblower we perhaps expected. And it raised the hoary question: does pop want, need, or benefit from such an uneasy hybrid?"
It was a fair point, and clearly a dilemma that Elton was tussling with himself. Fifteen days later, he turned in a more stripped-back performance at the Shaw Theatre in Euston, at a benefit for the National Youth Theatre attended by Princess Margaret and her husband, Lord Snowdon. Elton and the Queen's younger sister hit it off immediately, prompting her to invite the singer and the band to dinner at Kensington Palace. Margaret had a reputation for enjoying a drink and was a heavy smoker, but more significantly, perhaps, was said behind closed doors to have a wicked sense of humor. As such, she and Elton were a natural fit, in both their shared love of hooting laughter, and likely in another way that wasn't at the time publicly apparent. "I always felt she was very lonely," Elton says. "She wasn't wild. She loved music, played the piano. I found her to be kinda lonely and always very, very sweet to me."
If this blossoming friendship would in the years to come usher Elton behind the private doors of the royal family, it was further evidence that no matter how the critics turned their noses up at him, he was fast becoming a key figure in popular British culture. A hero's welcome greeted him at his virtual homecoming show at Watford Town Hall, seven miles north of Pinner, on February 24. But with this rising profile, Elton also became a target for those with dangerous agendas.
Two days earlier—and only three weeks after the notorious incident known as "Bloody Sunday" in which twenty-six unarmed civil rights protesters were shot in Derry, Northern Ireland, by British soldiers—the Irish Republican Army had carried out a revenge attack by detonating an incendiary device hidden in a car parked outside the UK army barracks in the small Hampshire town of Aldershot, southwest of London. Seven civilian staff were killed. Now Britain was nervous and on high alert.
Partway through the Watford Town Hall show, with the gig in full swing, an organizer walked onto the stage and whispered into Elton's ear that a call had been made to the venue by someone claiming to be from the IRA, warning that a bomb had been planted in the building. Elton was anxiously forced to step away from the piano stool to make way for a policeman, Inspector O'Connor, who sat down and announced to the suddenly panicked audience that the hall had to be evacuated. As was often the case in those tense and uncertain days, though, the threat proved to be a hoax.
Fame was sometimes frightening, sometimes frustrating, but while he was now shining brightly, Elton refused to forget those musicians from his past now left in his shadow. By 1972, Long John Baldry had become a mess, drinking more and more as his career increasingly lost its direction. The previous year, both Elton and Baldry's other protégé, Rod Stewart, had individually produced tracks for a new Baldry album, _It Ain't Easy,_ designed to get the singer back on track. It turned out to be Baldry's most successful record, not least when the grinding single "Don't Try to Lay No Boogie-Woogie on the King of Rock and Roll," reconnecting him with his blues roots, became an airplay hit on FM radio in the States. On a subsequent tour of America, though, Baldry, trying to mask his insecurities, hit the bottle even harder.
A subsequent, patchier album, _Everything Stops for Tea,_ found Elton and Rod taking a vinyl side apiece in terms of production, with the former overseeing a novelty take on the New Orleans standard "Iko Iko." Such was their loyalty to their former bandleader that both far more famous singers appeared on _Top of the Pops_ to perform the song with Baldry. At an after-show party, he was approached and interviewed by a writer from _Disc_ magazine and was generous about Rod before—with affection but likely a touch of jealousy—poking fun at Elton.
"I always knew Rod was going to become something very, very special," Baldry pronounced. "But how could one predict that a boy with an overweight problem...I mean he is a bit broad across the beam, our Reg. Who would have thought that this strange boy with his myopic lenses and fat arse...could turn out to be one of the pop sensations of all time?"
Of course, Elton dealt out his share of bitchy comments about other people to the press, so he had to take it. He couldn't stop himself, it seemed, from blurting out wicked put-downs of other artists. Though it was surprising when one of his targets became his former hero numero uno, Jerry Lee Lewis.
In April, he caught a show by Lewis at the London Palladium. Excitedly, Elton turned up in his drape jacket, only to be jeered at by the aging Teddy Boys in the crowd who recognized him. Ironically, after a run of flop singles in the sixties, Lewis had at the time abandoned rock'n'roll and detoured into a country music career. To Elton's eyes, when he stepped onto the Palladium stage, his idol had turned jaded and toothless. "He could have wiped the audience out," he said afterward. "But he just sat there and played country-and-western numbers as if to say, 'Fuck you.'
"All these old rock stars are the same," he added, dismissing the thirty-six-year-old. But indeed Jerry Lee Lewis must now have seemed practically antique to the twenty-five-year-old Elton—someone belonging to the past, just as he was looking to the future.
—
WHEREVER HE WENT now, Elton always had to deal with a certain degree of hassle. If it wasn't Teddy Boys mocking him at a gig, it was airport security personnel meticulously pulling apart his luggage looking for drugs. When he touched down in Los Angeles in April ahead of his next U.S. tour, four pairs of his eight-inch-heeled platform stage boots were dismantled in the belief that he was using them to smuggle illegal substances into the country. "It's just that it's such a new style, we haven't caught up with it yet," one customs agent apologized to him, before telling a reporter that the star had been "very cooperative."
As he hit town, "Rocket Man" was released as his next single. It was soon filling the radio airwaves. Driving one day down Sunset Boulevard, Elton heard a DJ on KHJ compare it to Rod Stewart's latest release, the strutting rocker "You Wear It Well."
"That's Rod Stewart's new record," the announcer said in his link. "He surely must be the number one male vocalist around at the moment. I've had lots of phone calls saying that Elton John is. Well, I think Rod Stewart just about beats Elton John."
Sitting at the wheel as he cruised the Strip, Elton was fizzing. Rightly so, perhaps, since the two records were completely different. "You Wear It Well" was a cocksure groover. "Rocket Man" was something else entirely, filled with alienation and a sense of sad adventure.
Its lyric had come to Bernie as he was motoring through the Lincolnshire countryside to his parents' house, the first lines arriving fully formed in his head. Immediately the rest of the song began to flash through his mind. He quickly pushed the accelerator to the floor to get to his destination to grab a pen and paper before he forgot them forever.
A vividly evocative mid-tempo ballad, lent further atmosphere by David Hentschel's plaintive synthesizer parts and Davey Johnstone's eerie, echoing slide guitar (not to mention the band members' meshed harmonies, which were to become a sonic trademark of Elton's subsequent records), "Rocket Man" was set in an imagined future where the narrator's day job involves lonely flights to Mars. Of course, it might have been one long metaphor, with its references to being "high as a kite" and "burning out his fuse" easily interpreted as drug references.
In this way, "Rocket Man" of course echoed David Bowie's veiled tale of the astronaut as mind-fried junkie in "Space Oddity," released three years before and also produced by Gus Dudgeon. But for Taupin, its inspiration had come from the forlorn, pollution-choked commentary of "A Day in the Life of a Tree" from the Beach Boys' 1971 album _Surf's Up_. However, he did later admit that he cribbed the title from the mournful song "Rocket Man" by Florida's baroque pop band Pearls Before Swine.
None of this, however, calmed an apparently livid Bowie, not least when—after "Rocket Man" rose to number six in the United States—"Space Oddity" (which hadn't initially been a hit in America) was rereleased in 1973 and didn't fare as well, reaching only number fifteen. It was unfairly judged by many to be an Elton rip-off. Bowie's wife, Angie, tried to cool his fury with the words "Other people can sing about space travel too."
"Rocket Man" seemed to be everywhere, not least because Uni Records had linked its release to the launch of Apollo 16. Cleverly, if opportunistically, the record label took out music trade press ads that read: "On the morning of April 16, 1972, Apollo 16 was launched into orbit on a journey to the moon. A few mornings earlier Uni Records launched a new Elton John single into the worldwide orbit. What a trip! Both launchings bound to set new records."
The publicity stunt tie-in prompted an invite from NASA for Elton, Bernie, and the band to visit their Manned Spacecraft Center in Houston, Texas, on April 28, the day after the Apollo 16 spacecraft splashed down in the Pacific Ocean. For four hours, the party were shown around the Center by Apollo 15 Command Module Pilot Al Worden and even given the opportunity to "fly into space" in a simulator.
"Rocket Man"—with its bracketed LP addendum "(I Think It's Going to Be a Long, Long Time)"—set up the arrival of Elton's fifth studio album, titled _Honky Château_ in tribute to the French hideaway. Not only was it a far less ornate and more direct offering than _Madman Across the Water,_ it also exhibited Elton's knack for musical genre hopping, opening with the infectious Louisiana swing of "Honky Cat" and closing with the glam rock swish of "Hercules." In between, it often sounded like a record made in New Orleans rather than northern France—"Susie (Dramas)" could have been written and produced by Allen Toussaint, "Amy" had some of Dr. John's voodooish swampiness, and "Salvation" (originally earmarked as the first single) was rousing, multivoiced, light-seeking gospel.
Elsewhere, the lyric of "Slave" revisited the violent uprising of "Burn Down the Mission" from a shackled perspective, and the aptly named, slow-lane "Mellow" featured the keening electric violin of the French musician Jean-Luc Ponty, as if channeling the ghosts of the château.
One track, however, stuck out through its sheer daftness. "I Think I'm Going to Kill Myself" belied its fatalistic title by being a jaunty tongue-in-cheek moment of light relief in which the singer cast himself as a teenager imagining the reactions to his death after he shoots himself. To highlight the incongruous frivolity, its middle section featured the sound of the hairy alumnus of the Bonzo Dog Doo-Dah Band, "Legs" Larry Smith, merrily tap-dancing.
Elton, like the Beatles before him, who had cast the lampooning ensemble in a strip club scene in _Magical Mystery Tour_ in 1967, was a huge fan of the band their aficionados called the Bonzos. Having mutated from a rabble of art schoolers playing twenties jazz into a surrealist rock band, by 1972—in part because of the deep depressions and heavy tranquilizer use of their frontman, Vivian Stanshall—the Bonzo Dog Doo-Dah Band was effectively over, after a final and wryly named contractual obligation album, _Let's Make Up and Be Friendly_.
"Legs" Larry Smith, given his nickname as a result of his early appearances with the band, clacking around in tap shoes as the self-invented character Mr. Wonderful, had subsequently become the group's drummer. Now he was at loose ends. Fortuitously, looking to inject some humor into his upcoming American shows, Elton called him up and invited him on tour.
"I said, 'But, darling, I don't have a thing to wear!' " Smith remembers. "And Elton said, 'Well, look, go and do what you wanna do and bring it over to the States and away we go.' So I got two outfits made up and I flew over first-class to New York."
Smith, with his mock-effete actorly demeanor and acute sense of the absurd, was to further bolster Elton when it came to the flashy peculiarities of his stage costumes. "I'd already been established as a kind of flamboyant eccentric with my clothes," Smith says. "He was still in the closet in terms of gaydom." But with Larry's help, albeit in the guise of camp theatricality, Elton was able to throw those doors wide open.
Arriving in the States and at Elton's hotel, Larry paraded around in the outfit he'd already dreamed up for his tap-dancing appearance in the show for "I Think I'm Going to Kill Myself." He called it the Triple Wedding. "I had a shiny chrome crash helmet with a wedding cake couple glued on the top, and another two, one on each shoulder. A three-quarter-length tunic-waisted jacket. White flares with diamanté running down the seams. Wonderful silver lamé shoes from Chelsea Cobbler, beautifully made with diamanté round the platform soles. Silk gloves, a naughty doggy dog shit tattooed on my chest and, flowing off of each shoulder, forty yards of white netting."
It was quite a getup, and one that clearly tickled Elton. From then on, Larry was Elton's closest companion on the mammoth American and British tour that would run, on and off, for the next seven months. "He was the only person who kept me sane," Elton said of Larry in '72. "I get terribly bored when things are too serious. I want to get into funny things, so I might as well have a bit of comedy in my act."
Larry encouraged Elton to position a framed photograph of the sugary-sweet Hollywood legend Doris Day on top of his piano, which at a certain point in the set the singer would clamber up to kiss. Going further, Smith wondered if he could devise a "Singin' in the Rain" song-and-dance duet for the pair of them.
"You must be mad," Elton told Larry. "They'll wonder what the fuck's going on." It could easily have pushed a rock audience too far and sent them heading for the beer and hamburger stands, or worse, the exit doors. But Smith pointed out that "Singin' in the Rain" now had an added resonance, having been featured in Stanley Kubrick's _A Clockwork Orange_ the year before, most notoriously in a rape scene. By featuring it in the show, Smith argued, they could be "double hip." Elton decided to give it a go.
As it turned out, the audiences laughed and roared and lapped it up. In the routine, tour manager Marvin Tabolsky sat at the piano and mimed to a prerecorded backing tape as Larry and Elton, in macs, capes, and fedoras, performed a skit reminiscent of the golden days of screen musicals, albeit with added risquéness.
"Gee, Larry, I wish I could dance like you," Elton said onstage to his partner. "I'm sure I'd get all the girls."
"Gee, Elton," Larry responded. "I wish I could play the piano like you. Because I know I'd get all the boys."
"Which he loved, of course," Smith points out. "Because he was still inside that closet with the door firmly locked."
—
IN OVERSIZED YELLOW glasses spelling out Z-O-O-M!, back in Britain, Elton appeared on _Top of the Pops_ as "Rocket Man" made number two, his biggest UK hit yet. Backstage there was much cause for celebration since, after all his trials in Britain, here was proof that Elton was now a bona fide pop star.
For Davey Johnstone, it was quite an event, coming only five months after he'd joined the band. "We're all in one of these cheesy dressing rooms," he remembers, "and John Reid says, 'Let's have some champagne.' So he's opening it and talking at the same time and the cork goes straight up his nose. It was brilliant. It was like one of those great Spinal Tap moments."
His status upped, and outgrowing the London flat, Elton moved with Reid to a large two-story bungalow with an outside swimming pool at 14 Abbots Drive in the showbiz enclave of the Wentworth Estate in Virginia Water, Surrey, west of London. Here he enjoyed filling the house with the spoils of the high-earning star: Pricey artworks covered the white walls, a fifties jukebox sat in the game room, a suit of armor guarded the stairway, and the driveway was soon crammed with nose-to-tail luxury cars, including a de rigueur Rolls-Royce.
Far from being a messy rock star abode littered with empty bottles and overflowing ashtrays, the bungalow was conspicuously spic-and-span. Visitors noted that it seemed almost like a show home. Sheila and Fred married in May and moved into a property in the garden. The idea was that his mother and now legal stepfather could look after the house (which Elton, in a familiar theme, had taken to calling Hercules) and the two dogs he'd recently acquired—Brian, a spaniel, and Bruce, a German shepherd—whenever Elton was away, which was of course often.
After the painless birth of _Honky Château,_ the making of the next album turned out to be a slog. Everyone else was raring to go, but Elton wasn't in the right frame of mind. The workhorse was worn out, and in June, he almost had to drag his bones back to the château.
Once he got there, Elton remained in a very peculiar state, as if he was moving in slow motion. He felt he was on the verge of a nervous breakdown. He said to Gus Dudgeon, "I can't make this album." The producer was unflappable. He told the exhausted singer, "All right, we'll do it in September."
But once relieved of this commitment, Elton changed his mind. He was planning to go on vacation in July and he thought, _Let's get it over and done with before then_. Even so, he knew it wasn't a particularly healthy way to approach a creative endeavor. For most of the recording, his temper was foul, and he was constantly quarreling with everyone. There were angry phone calls made to Dick James back in London. He and Elton didn't talk to each other for four months afterward.
Even Bernie, who'd completed his lyrics for the new album beforehand, stayed away from the studio for the most part this time around. When he arrived one day, without warning, Elton was walking across the château's lawn and nearly jumped out of his skin when he saw Taupin sitting behind a bush, enjoying a moment of quiet contemplation in the garden. Still, he was a welcome visitor to what had been a rough session. The band delighted in seeing Taupin's reactions to the new songs as they blasted out of the studio's speakers.
"Bernie might've thought, 'This would be a ballad' and it'd be a steaming rocker," says Davey Johnstone. "Or he might think, 'This'll be really up-tempo,' and it'd be the opposite. We all would be watching him."
At the end of a trying month, Elton's sixth album was in the can, even if he'd made it in a numbed daze. Running on fumes, he landed back in Los Angeles for his planned vacation, ranting worryingly as he walked through the airport. Those around him suspected he was cracking up before their eyes. He was subsequently diagnosed with glandular fever.
"I just was exhausted," he says. "It was plain exhaustion from working nonstop. You move so fast your body after a while says stop it. I never had any breakdowns in that time. If I was ill, I was ill. I was just ill because I was fucking working hard. It was a wake-up."
He cooled his aching mind on the coast, at a beach house he'd rented in Malibu. Joining him for the summer was Bryan Forbes, the British director of films including _Whistle Down the Wind_ and _Séance on a Wet Afternoon_. Forbes, who also wrote scripts and novels, was a polymath who owned a bookshop back in Virginia Water. One day Elton had wandered in and they'd struck up a conversation that quickly turned to a friendship. Elton became a regular visitor to the house nearby, Seven Pines, that Forbes shared with his actress wife, Nanette Newman, where the singer admired the older couple's books and art and elegant décor.
Sometimes, suburban-born Reg felt—as is often the case with the nouveau riche—that he lacked refinement. Under the influence of Forbes, he began to cultivate his tastes. At the same time, 1972 marked the year that Elton truly began to spend, spend, spend—a compulsion that would come to publicly define him in the minds of some. Over the summer he went on daily money-burning shopping trips around Los Angeles. At the end of the holiday, he'd managed to fill sixty-seven suitcases and thirty-two trunks with his purchases of records, books, clothes, and jewelry. What's more, there were clearly still piles of money heading his way. In July, _Honky Château_ became his first number one U.S. album.
Making the most of his first break in ages, Elton, with Bernie in tow, went to see Alice Cooper at the open-air Hollywood Bowl. They were astounded by how ridiculously and thrillingly large-scale the show was. A helicopter flew over and rained hundreds of pairs of paper panties from the sky. Bernie remembers himself and Elton "acting like kids in the audience. They threw posters out at the end and we were jumping up trying to get them."
Through Forbes and his film industry connections, Elton threw himself into L.A. life, even mixing with stars of old Hollywood. He was introduced to the still raunchy and sharp Mae West, then on the cusp of her seventy-ninth birthday. Elton would take her out for afternoon tea and together he and West would marvel at the people who'd come up to her, showering her with praise. "It was very touching," he remembers. "I sat there while Mae West was talking, thinking, _Fuck, I can't believe where I am_. All I did was listen. You don't interrupt."
Then one day Bryan Forbes told Elton that he had a surprise guest coming over for dinner that evening. Although it was the height of a baking summer, the visitor was insisting the fire be lit for his arrival. It was Groucho Marx.
"Groucho arrived in an overcoat and was _so_ not friendly," Elton remembers. " 'When are we gonna eat? It's too cold in here.' We're all thinking, _Oh fucking hell_. Very uncomfortable for about half an hour. Then suddenly, of course, it was a big joke."
Once everyone had relaxed, Groucho said to Elton, "They tell me you're number one. But I'd never heard of you until I went into my office this morning and said I was having dinner with Elton John. They all fainted. After that I lost what remaining respect I have for you."
Even at eighty-one, Groucho's sense of humor was as razor-edged as ever and there was still very much a twinkle in his eye. Although his silver screen heyday was far behind him and he'd suffered a patchy career as a TV host throughout the sixties, he remained an open-minded individual who moved with the times. In 1968, in preparation for a role in the panned hippie satire _Skidoo,_ in which he played a mob boss named God who tuned in, turned on, and dropped out, Marx had taken acid with writer and onetime Merry Prankster Paul Krassner. At one point during their trip, they were listening to Bach's Cantata No. 7 when Groucho began hallucinating "beautiful visions of Gothic cathedrals." He asked Krassner, "Do you think Bach knew he was doing that?"
Marx was still a provocative figure. In 1971 he caused a furor when he gave an interview to the underground San Francisco paper _Take One_ and declared, "I think the only hope this country has is Nixon's assassination." While it made him a hero to the country's war-protesting youth, it also ensured that he had an FBI file opened on him as a potentially dangerous subversive.
That summer, Elton and Groucho got along so well that they ended up hanging out together on a number of occasions. One night, the singer and John Reid took Marx to see a theater performance of _Jesus Christ Superstar_. "Along the way," says Elton, "he stopped in a bar and picked up two girls. He was the biggest flirt I've ever met." As the theater lights dimmed, Marx barked, "Does it have a happy ending?" Later, during the crucifixion scene, he loudly declared, "This is sure to offend the Jews."
"Funniest fucker," Elton says. "He never could figure out why I was called Elton John and said I should be called John Elton. I've got a Marx Brothers poster signed by him—'To John Elton from Marx Groucho.' "
As their friendship developed, Elton often found himself impaled on the sharp end of Groucho's jokes. One night, laughing but exasperated, Elton held up his hands in mock surrender. "Don't shoot me," he protested, "I'm only the piano player."
Elton thought to himself, _That's a good line_. He stored it away for later.
—
THE U.S. TOUR resumed at the end of September. Inspired by his Hollywood summer, Elton and Larry had added even more razzle-dazzle to the show by the time it made its way back to California for two arena gigs before a total of thirty-five thousand people at the Forum in Los Angeles. There was also added intensity to Elton's playing, as he forcefully battered the piano keys, resulting in his bursting a finger wide open on the second night and having to rush backstage to stem the bleeding.
He returned to massed cheers and offered the words "Even if I had only one finger left, I'd play for you." He then thanked the audience for the "most fantastic night of my life."
Singin' in the glittering rain of confetti: Elton and "Legs" Larry Smith cackling among the chorus girls.
For "I Think I'm Going to Kill Myself," "Legs" Larry Smith made his entrance onstage followed by two little people dressed as U.S. marines holding his wedding train. "I came out blowing kisses," Smith recalls, while also remembering that during the number, twenty thousand dollars' worth of confetti showered on the performers. "I was in heaven," he says. "It was just a joy to do."
Not everyone in the band thought these preposterous theatrics were a great idea, though. "Our attitude was always _Oh God...cringe,_ " Davey Johnstone admits. "We're going, 'What? He doesn't need this...' "
All the while, on the road, Legs was whispering into Elton's ear: "It's wonderful...carry on...get bigger...get crazier."
Elton certainly would. He was only just getting going.
Writing, always writing. The ever prolific Elton and Bernie, London, 1973.
THE QUEEN MOTHER HAD NEVER SEEN anything like it. From her vantage point in the royal box of the London Palladium on the evening of Monday, October 30, 1972, she applauded politely as she watched Elton emerge from the wings in teetering stacked-heel boots, white hexagonal glasses, and a silver suit patriotically striped in red and blue. He took a bow and sat down at a white grand piano.
"Did I hear a snigger from the audience?" he said into the microphone, suddenly aware of how outlandish he must have appeared in such a stuffy environment as the great British upper-crust tradition of the televised Royal Variety Performance.
He'd been advised that he should play a couple of his best-known hits—maybe "Your Song," maybe "Rocket Man." Elton thought, _Boring!_ He was in a mood for mischief. Nine years before, back in '63, John Lennon had urged the poshos to "rattle your jewelry." Elton planned to offer up something more bizarre.
He began by pumping out the opening chords to "I Think I'm Going to Kill Myself" before launching into its oddly happy-go-lucky suicidal lyric. Then, in the instrumental break that followed the first chorus, Legs swanned onto the stage in his Triple Wedding outfit, his fists gripping bunches of multicolored balloons. He released them, untied, into the audience and they flew erratically around the stalls making farting noises. From where he was sitting, Elton could see women in tiaras mouthing, "Ooh, ooh."
Legs, in his long brown locks and Zapata mustache, crash helmet, and white wedding shoulder veils, beamed and tap-danced, then blew kisses and waved and disappeared sidestage. Behind his humongous drum kit, Nigel Olsson, who considered Smith a "bloody twit," shot a withering look across at Dee Murray and thought, _I hope this is a dream._
Next, Elton introduced one of the songs he'd written and recorded during his depleted summer at the château. "This is about all the rock'n'roll records of the 1960s," he announced, "and it's called 'Crocodile Rock.' " It was his new single released three days before—a kitschy and naggingly catchy take on the dance craze singles of the past.
At its close, Elton and the band, including a returning Legs—who'd respectfully removed his crash helmet—lined up at the front of the stage to collectively bow to the audience and, as was the tradition, in the direction of the royal presence. It looked as if they'd thoroughly enjoyed themselves. In truth, Elton had found the whole affair "horrendous."
"Princess Margaret has told me she thinks it's four hours of boredom," Elton indiscreetly told a reporter later. "I know what she means."
They'd all had to fly home from California especially for this royal appointment, since there was still another month left of the American tour. Jet-lagged backstage at the Palladium—with its limited space and a full roster of acts: the Jackson 5, the British female impersonator Danny La Rue, and the toothy, tickling-stick-wielding Liverpudlian comedian Ken Dodd among them—Elton was forced to share a dressing room with Jack Jones and Liberace. The latter gave him a demonstration of how he could light up his suit with a hidden button. Now _there_ was an idea. There seemed to be no limit to how brilliantly ludicrous stage costumes could be.
—
THAT TIRING AND testing evening aside, Elton was keen to focus his sights back on Britain, feeling that he'd neglected his native land in recent years with his virtually incessant American touring. "Driving to Bolton isn't quite as glamorous as driving to Santiago," he pointed out to the _New Musical Express,_ strangely choosing a South American city he'd never actually visited to illustrate his point. "But we have to get our finger out."
His success in America was obviously hugely rewarding, but Britain in 1972, with the rise of glam rock, was really where it was at in terms of trailblazing music and fashion. In August, at the Rainbow Theatre in north London, Elton had witnessed David Bowie's biggest show yet in his futuristic incarnation as Ziggy Stardust, supported by the equally outré sci-fi rock'n'roll of Roxy Music. He emerged exhilarated—his throat raw from actually screaming with excitement—if a touch envious.
"That was a turning point," he says. "It was like, Fuck, the bar's been raised."
Everyone was trying to outdo one another, not least, in their teasing battle of one-upmanship, Elton and the now UK-chart-dominating Marc Bolan. For the T. Rex singer's birthday the previous September, Elton had cheekily sent a gift of a life-sized blow-up photographic image of himself. Returning the gesture, but going further, in March, for Elton's twenty-fifth, Bolan had given him the silver disc for his "Jeepster" British number two hit and, similarly, an enlarged reproduction of his own image. "Of course, his was much bigger than mine," Elton notes. Indeed it was—nearly thirty feet tall and delivered to the Wentworth bungalow in a moving van. It had to be left out in the garden.
On a similarly grand scale, Marc Bolan had recently completed a movie, _Born to Boogie,_ directed by Ringo Starr and released via Apple Films. Elton appeared with him in two scenes shot in the basement studio of Apple's offices in Savile Row—doing his best vamping Little Richard impersonation as Marc screamed "Tutti Frutti" and Starr thumped the drums beside him, and playing on "Children of the Revolution" with Bolan's head sticking out of a hole in a mock grand piano where the strings should have been. But, viewing himself in exaggerated dimensions on-screen at the film's London premiere in December, Elton couldn't help but wince. "I look like a fucking gorilla," he said. "So ugly."
His self-consciousness would regularly return to unsettle him, and it was with mixed feelings he returned to Pinner County Grammar School just before Christmas, following an invitation to play a concert at his alma mater. He fretted in the days leading up to it: "What will they think of my act? Because it was a bit wild."
He certainly didn't dress down for the occasion, rolling up in his Ferrari in a fox fur cape adorned with a diamond brooch, a purple suit, and matching glasses. Many of the teachers he'd known at the school he'd left only eight years before were still employed there. If they all looked much the same, the metamorphosis of Reg was astonishing.
The school kids turned feral upon Elton's arrival and older pupils were forced to link arms to form a human chain to protect him as he pushed through the halls to the student lounge. Once safely there, he requested only a cup of tea and presented them with a color TV set for the common room. The concert was, of course, a roaring success. Blowing away his clouds of self-doubt, there was perhaps no greater validation.
"When I drove away," he later remembered, "I thought, _You've made it. You've arrived_. It was a nice feeling."
—
AS 1973 DAWNED, Elton was in the mood for adventure. Even before the album recorded the previous summer at the château had been released, he had to start work on its successor. The château was temporarily unavailable as a result of legal wrangles over its ownership, and so he decided to continue to explore the concept of recording on location by looking farther afield. Following in the fresh footprints of the Rolling Stones, who'd just recorded _Goats Head Soup_ at reggae producer Byron Lee's Dynamic Sounds Studio in Kingston, Jamaica, Gus Dudgeon flew out to check out the facility and see if it was suitable for the making of Elton's next album.
En route, the producer's luggage was lost by the airline, and so he found himself wandering through the sweltering streets in the clothes he'd been wearing back in England: thick leather trousers, big padded woolen jacket. "I didn't dare go in any shops," he remembered, " 'cause I was quite convinced somebody was gonna cut my head off or yell, 'Get out of here, honky...' "
Dudgeon arrived at Dynamic Sounds and was given a playback, through the studio's booming sound system, of some of the reggae tracks recorded there. He was mightily impressed. "The biggest bass I've ever heard in my life," he marveled. "It was fantastic. I thought, _Well, this is a pretty cool place_. So I went back and said, 'Yeah, it's great, let's go there.' "
Landing in Kingston on January 23, the day after the high-profile boxing match at the city's National Stadium in which George Foreman had taken only two rounds to beat Joe Frazier and become the world heavyweight champion, the party arrived in a swarming city bristling with excitement and danger. As they drove through the streets, Elton was captivated by the sight of tumbledown record store shacks at the side of the road, but he couldn't get his head around the scenes of third-world poverty passing just beyond the window of the car.
Quickly overwhelmed by the sensory assault of Jamaica, he bunkered himself into his room at the Pink Flamingo hotel. Adding to the discomfiting, threatening air of the city was his discovery that Astrid Lundstrom, the girlfriend of Rolling Stones bassist Bill Wyman, had been raped at knifepoint by an intruder in the very same room only three weeks before. Terrified to leave the hotel, Elton rented a Fender Rhodes electric piano, sat down with a pile of Bernie's recently completed lyrics, and, in three days, wrote twenty-one songs.
His weed-loving band, meanwhile, staying in the north coastal resort of Ocho Rios, were off sampling the herbal pleasures Jamaica had to offer. It was here in Jamaica that Elton first tried marijuana, if in a more unusual form. "I wasn't smoking at the time," he says. "So I did liquid ganja, which was like Newcastle Brown Ale. It was fucking brilliant. I'd recommend it highly."
One night, there was cause for celebration: the release on January 26 of Elton's sixth album—named, after his retort to Groucho Marx, _Don't Shoot Me I'm Only the Piano Player_. To toast the album's arrival into the world, he and the band threw a small party in the dining room of the Pink Flamingo. Elton went to bed earlier than the others, leaving them to their carousing, only to suddenly burst back into the dining room, naked under a sheet. While dropping off to sleep, he'd discovered a huge centipede crawling across his body and fled his room in wimpish terror.
When the sessions began at Dynamic Sounds, everyone—including Dudgeon, who'd perhaps not examined the setup closely enough on his initial visit—was shocked to discover that the studio was in fact run-down and woefully inadequate. Having been used to playing Steinway grand pianos, Elton sat down at the studio's battered Yamaha upright and tried to bash away at a rocking, up-tempo new song he'd just completed called "Saturday Night's Alright for Fighting."
"Elton was hitting the piano and cockroaches were flying out of it," Bernie remembers.
Worse, there was a distinct lack of urgency among the studio staff, which was entirely at odds with the band's normally highly productive work rate. "I mean, Jamaica's a really lovely place," says Elton. "But, God knows...the Stones must've brought all their equipment there. We just came into this studio and it was like, Oh my God. And, y'know, everything was on Caribbean time. 'Well, we need a Leslie speaker.' 'Oh...oh yeah...two or three days is okay?' "
Davey Johnstone, for his part, knew they were in trouble when Nigel Olsson was setting up his drum kit and he heard one engineer tell another, "Carlton, get the mike." "We went, Oh fuck," Johnstone says. "We used twenty mikes on the drums even in those days, y'know. It was like, Oh, we're in deep shit here."
Even the toilet at the studio was troubling. "It was just crusted in yellow," Dudgeon recalled. "I've never seen a more disgusting lavatory in my life. We used to go in and hover over it. You didn't dare sit on the bloody thing."
The fact that everyone in the band was ludicrously stoned helped to take the edge off. Trying out the sad, slow New Orleans jazz of another new song, "Sweet Painted Lady," Johnstone suddenly came up with an idea for a complementary part he thought he could play on his banjo. He popped open its case, pulling the instrument out by the neck, only for the body of it to fall away, having been wrecked by airport baggage handlers. Everyone dissolved into giggles. "Elton falls off his piano, Nigel falls out of the drums," Johnstone remembers. "We're completely out of it. I was kind of staring at this poor banjo that's completely fucking trashed."
Together they plowed on, in far from ideal circumstances. The fact that Dynamic Sounds was situated in a compound circled with barbed wire to deter would-be thieves made for an ugly creative environment that was uninspiring in itself. But Elton and the band's visit also coincided with a strike by workers at Dynamic Sounds' on-site record plant, ensuring that their journey to work every day became extremely unnerving. Some of the strikers banged angrily on the sides of the band's Volkswagen van. Others wielded blowpipes, forcefully puffing through the vehicle's open windows what turned out to be crushed fiberglass. Everyone's skin broke out in nasty red rashes.
They persevered for a few days, making a determined effort to capture "Saturday Night's Alright for Fighting" before eventually nailing what felt like a great take of the song. But when Elton and the others entered the control room to hear what they'd just recorded, it sounded horribly tinny, like a swarm of angry wasps or, in Dudgeon's words, "thirty million very small, blaring, distorted transistor radios."
It was hopeless; they were forced to give up. But even trying to leave the island turned into a nightmare. A dispute broke out with the studio owners over who was to settle the hotel bills. As a result, in a standoff, all of the band's equipment and rental cars were impounded behind the gates of the recording facility.
Back in Ocho Rios, the band sat around the pool and tried to work their way, as Dudgeon remembered, "through five sacks full of dope...and never actually succeeded. It was the first time I actually left some dope behind."
Escaping to the airport in a taxi, Elton was alarmed when his driver swerved into a sudden diversion through a sugarcane field. The cabbie was only taking a highly unorthodox shortcut, but in his hyper-paranoid state, Elton thought he was being taken away to be killed.
—
_Don't Shoot Me I'm Only the Piano Player_ was an absolute commercial triumph—becoming Elton's first transatlantic number one album—but a creative muddle, probably as a result of its agitated creation. Interestingly, then, even though he had been exhausted while writing and recording it, _Don't Shoot Me_ boasted more up-tempo songs than any of his previous LPs, as if the singer had been trying to use his own music to summon up energy from somewhere within him.
Nevertheless, a lot of it was fairly uninspired stuff—"Teacher I Need You" sounded like a 1950s jukebox 45 refracted through glam rock, "Midnight Creeper" was an unspectacular imitation of the Rolling Stones. "Crocodile Rock," with its falsetto "la-la-la-la-la" hook, owed much to Pat Boone's "Speedy Gonzales" and would later prompt an out-of-court settlement with that song's publisher.
In effect, the album was a hastily recorded homage to the rock'n'roll records that had inspired Elton as a youth. The cover even harked back to the fifties, with its image of a greaser and his jive-skirted date standing at a cinema box office beneath a backlit electric sign for the imaginary main feature, "DON'T SHOOT ME I'M ONLY THE PIANO PLAYER" STARRING ELTON JOHN. To their right, a poster for the Marx Brothers' _Go West_ hinted at the title's inspiration.
Elton himself thought the album was "ultra-pop" but "disposable" and, expecting it to be ripped apart by the critics, was surprised when it was widely praised. In _Rolling Stone,_ Stephen Holden declared that "the heart of the album is a sequence of American movie fantasies whose chief aim is to delight...engaging entertainment and a nice step forward in phase two of Elton John's career."
The real beauties on _Don't Shoot Me_ were to be found where the tempo slowed. "I'm Gonna Be a Teenage Idol" found its inspiration in the tale of Marc Bolan's reinvention from cross-legged acoustic-guitar-playing starchild to teen-scream pop star. The dreamy "High Flying Bird" was a heartbroken ballad par excellence, which hinted that the narrator had lost his partner through suicide or possibly murder. Best of all was "Daniel," an achingly sentimental and compellingly mysterious song, written from one brother to another, that in March '73 became Elton's follow-up single to "Crocodile Rock."
Bernie had written the lyric for "Daniel" the summer before at the Château d'Hérouville, having read an article in _Time_ magazine about a Vietnam vet confined to a wheelchair after being injured in the Tet offensive. Returning to America, he'd been hailed as a hero by everyone around him in his small Texas town. Troubled by the attention, the soldier longed instead to slip away unnoticed to a quiet life on his farm. Taupin could clearly relate to this desire for a peaceful rural existence away from fuss and hubbub.
The lyricist transformed the story into a song about a pained soul who wants to disappear, as watched by his recounting sibling, conflating the narrative with Bernie's own memory of his brother's move to Spain to study there in 1968. Crucially, though, obscuring the meaning of the song, when Elton came to write the melody, he scrubbed out a final verse that he felt made the song overlong but that in fact explained the story. As a result, "Daniel" was interpreted by listeners as being about a family dispute, or even as a gay love song. "It was really a brother's reflection on his elder brother leaving to find peace of mind somewhere else," says Bernie. "If you know what the story is, then you go back and reference it against the lyric, perhaps it makes a little bit more sense."
The release of "Daniel" was to cause a temporary rift between Elton and DJM. To Dick James's ears, the slow-paced track didn't sound like a hit, particularly coming off the back of "Crocodile Rock," but Elton put his foot down. When James refused to advertise the record's release in the music press, the singer stubbornly said he'd pay for the promo himself. "I can't believe it," Elton protested at the time. "He says the single isn't commercial and it's coming out at the wrong time. He says it will harm the sales of the album. But I want it out. We've reached a compromise where if it's a hit, he'll pay for the advertising. But not before."
With the quarrel having gone public, Dick James was forced to defend himself in an interview in the music trade press. "This is a very one-sided viewpoint," he fumed. "It's untrue to say I don't like 'Daniel.' It's a beautiful, fantastic number, one of the best Elton and Bernie have written. Steve Brown, Stephen James and myself all came to the same conclusion independently about not releasing 'Daniel.' We are releasing 'Daniel' as a single solely because of the pressure from Elton."
Upon its release, "Daniel" made number four in the UK and number two in the States. Elton had been proved right. From here on in, his relationship with DJM would be characterized by a series of skirmishes and outright battles.
—
EVERY SINGLE OR album he put out now seemed destined to fly into the Top Five. If Elton seemed super-confident, even invincible, then he would sometimes make funny, self-deprecating remarks in interviews about his appearance, saying he couldn't possibly compete with the likes of David Bowie or Mick Jagger when it came to their slinky stagewear. "I haven't got the figure for it," he admitted. "I'd look like Donald Dumpling from Dover. So I try and make people grin a bit."
At the same time, further souring their fairly remote association, he took a dig at Bowie. "I know David has always wanted to be Judy Garland," he declared. "Well, I'm the Connie Francis, then, of rock'n'roll."
The year before, "Judy" and "Connie" had met for tea one afternoon in Los Angeles, in a summit intended to reach some kind of détente between the two. It didn't go well. "We had tea and cakes," Bowie remembered, in his slightly passive-aggressive account of their head-to-head. "We asked each other how we found America. After a polite half hour, I made my apologies, declining another cuppa. We didn't exactly become pals, not really having that much in common, especially musically."
Elton was clearly an anomaly when it came to the accepted look and sound of the cool 1973 pop star. He was comedic when you were supposed to be enigmatic, and he was, as was now becoming increasingly evident, balding. "I think he was very confused about his identity," says Annie Reavey, the costume designer Elton drafted in as his pop star status began to rocket. "The easiest way to cover up that confusion was to become almost like a cartoon character."
As unlikely as it might once have seemed, Elton was now attracting a devoted army of teenage fans. "If they scream at me," he quipped at the time, "it's probably in horror." His spring tour of '73 in the UK was filled with chaotic scenes. These were the days before professional security firms were brought in by promoters to protect stars from their more ardent and determined fans, with most venues usually staffed by local bouncers often ill equipped to cope. Sometimes, particularly if Elton was performing on a low stage, it made for alarming encounters with his adoring female admirers.
"To be onstage and see some sixteen-stone girl hurtling towards you is a frightening sight," he only half joked at the time. "I don't know what it takes for a girl to get it into her head that she must touch me. When they grab hold of you, it takes about six guys to get them off the stage. There were times when I thought the fans would rip us apart."
In Glasgow, he was trapped inside the Green's Playhouse for over an hour after the show when the shrieking devotees outside refused to disperse. In Newcastle, ten policemen were sent to the City Hall to protect him, though the promoter still broke an ankle in the melee as Elton tried to force himself through the crush of fans to make his getaway. In London, one girl tried to jump in the band's car just as it was pulling away. Nigel Olsson remembers, "I was trying to...in the nicest possible way, say, 'No, you can't come with us.' I had to kind of push her out. The fan deal got out of hand. They were nuts."
Over two nights at the 1,850-capacity Edmonton Sundown Theatre in north London, as girls fainted in the front rows, Elton chose to rise to the occasion, appearing onstage in a quilted cloak of many colors over a green satin suit, then throwing himself into the shows and drinking in the madness surrounding him. A week later, running a news story with a headline bearing a slightly incredulous exclamation point, _Melody Maker_ declared, NOW ELTON'S A TEEN IDOL!
What had once made him seem an improbable pop star was what was now making him unique.
"I did feel like a bit of an outsider," he says. "But y'know what? It helped me. There was only one Elton."
—
OFFSTAGE, IT WAS record-obsessed Reg who followed Elton's successes with the delight of the super-keen pop watcher. He carefully made sure the releases didn't clash with other singles or albums that might impede their progress on the charts. Shrewdly, he had delayed the arrival in the shops of _Don't Shoot Me I'm Only the Piano Player_ until January 1973 to avoid the big pre-Christmas possible contenders of Neil Young's _Journey Through the Past_ and Carole King's _Rhymes & Reasons_ (while getting a two-month jump on Led Zeppelin's _Houses of the Holy_ ). "When mine came out, there was nothing else," he stressed with some satisfaction.
Every Tuesday morning, when the trade magazines were published, he would sit with his nose in _Billboard_ and _Record World_ and _Cashbox,_ studying the hotly tipped hits and the chart rises and falls. He had instant opinions about everyone and their latest singles: Gilbert O'Sullivan's "Get Down" ("Great, right?"), the Faces' "Cindy Incidentally" ("Good"), Roxy Music's "Pyjamarama" ("A good record that won't make it in the States"), Donny Osmond's "The Twelfth of Never" ("Dreck"), the Carpenters' "Sing" ("Double dreck").
All of this baffled Bernie. "I had no interest in what was going on with our records in the charts," he says. "Back then I just was not even aware of it. It never crossed my mind to pick up _Billboard_ and see what our album was doing. Elton would tell me. He was totally the opposite, infatuated by that whole thing."
Whereas the bands of the 1960s, not least the Beatles and the Rolling Stones, had been mercilessly ripped off in bad deals and through shady or naive advice, Elton and John Reid were completely on it when it came to business. As Stephen James had predicted, as time passed, Reid had pulled away from Dick James to better Elton's contracts. In August 1972, he'd left his job at DJM as Elton's salaried manager and set up his own company, John Reid Enterprises, with an office in Soho.
His first act had been to boldly review the DJM deals, bringing in the New York lawyer John Eastman—brother-in-law of Paul McCartney, who in his Beatles days had married Eastman's sister Linda. The attorney advised Reid that the publishing deal with DJM USA, in which the split between the company and the songwriters was 70-30, could reasonably be deemed unfair. Dick James, who had of course invested not only money but also confidence in Elton in his struggling years, was hurt. No further action was taken by Elton and John Reid, not least because they needed James for the time being, while the IRS was assessing the singer's U.S. tax status. But it was a warning shot across the bow.
Even with this arguably reduced royalty rate in America, Elton was still rolling in money. Investing further in art, he bought two etchings by Rembrandt and six paintings by the British pop artist David Hockney, the latter batch as gifts for Reid. The singer was known to ostentatiously waltz into Tower Records on Sunset Strip and sometimes spend as much as six thousand dollars on records and tapes in one quick splurge. For Elton, this was less a vulgar extravagance than a desire to live only for today. "I'm not decadent," he insisted. "My attitude to money is that tomorrow I could be knocked down by a number nine bus or something, so I might as well spend it."
Money meant freedom and money meant power, and the two combined in early 1973 with the launch of Elton's own label, the Rocket Record Company. Although his contract with DJM prohibited him from recording for Rocket himself—at least for the time being—it was of course a fanboy's dream to own such an enterprise.
The notion had first been floated during a drunken night at the château the year before. With the vin rouge flowing, Davey Johnstone had admitted that he wanted to make his own album but couldn't find a deal. It was Elton who first slurringly suggested they should "start our own fucking label!" Everyone said, "Yeah!" The next morning, all with aching heads, the conversation was brought back up again with the words "Were we serious?"
They were, and a plan was put into action. The board of the new label would comprise Elton, Bernie, John Reid, Gus Dudgeon, and Steve Brown. Elton bluntly and enthusiastically set out the ethos of Rocket in an interview with _Disc:_ "What we are offering is undivided love and devotion, a fucking good royalty for the artist and a company that works its bollocks off."
Of course, forming their own label had proved disastrous for the Beatles, with the money-sucking hole that was Apple Records. Elton and Reid were shrewder—Rocket was funded through a deal with MCA Records, and the recording artists, even if enjoying a generous royalty rate, would receive modest advances against sales. Still, there were early signs of nepotism and an Apple-like blinkered A&R policy. Two of Rocket's first album releases were Davey Johnstone's _Smiling Face_ and _If It Was So Simple_ by folk rockers Longdancer, featuring Nigel Olsson's singer/guitarist brother, Kai.
The birth of Rocket seemed to warrant not just one, but three parties. The first on March 25, pantingly described by _Melody Maker_ as "the biggest name-dropping rave-up of recent times," was a booze-soaked bender thrown on a boat, the _John D,_ moored on the Thames. Among the guests were Ringo Starr, Harry Nilsson, Rod Stewart and the Faces, Cat Stevens, and Paul Simon, who were entertained, in typical 1970s fashion, by a female stripper.
The second bash was less star-filled, if a touch stranger. On May 3, Elton chartered a train, filled it with Rocket employees and journalists, and took it the seventy miles from London's Paddington Station to the picturesque Cotswolds town of Moreton-in-Marsh. En route, the drinking began, and one carriage even housed a disco. Upon their arrival, while a comparatively dressed-down Elton in a plaid shirt and white Oxford bags signed autographs for fans and then awkwardly attempted to run away from them like a newborn foal in platform boots, a brass band led the two hundred fifty guests to a local hall where a medieval banquet and gallons of champagne were laid on. Afterward, Longdancer and another Rocket signing, Mike Silver, played short sets before Elton and the band joined them for a triumphant, well-oiled jam session.
Rocket was launched in Los Angeles in some style at Universal Studios on a back lot used to film westerns. A mock cowboy gunfight opened the proceedings. Elton was brazenly attired in white satin shorts, knee socks, a green-and-blue-striped jacket, heart-shaped glasses, and a floppy hat under which his hair was dyed orange and pink. He bashed out "Crocodile Rock" and Jerry Lee Lewis's "Whole Lotta Shakin' Goin' On" for the trendy Hollywood set, joined on backing vocals by Dusty Springfield and his old pal from "Patti LaBelle and her Blue Bellies," Nona Hendryx.
There were multiple parties for Rocket, but by contrast, at least initially, there were few hits. The one notable exception was the UK number-thirteen-charting "Amoureuse," an atmospheric ballad by the new label's most significant signing, Kiki Dee. The twenty-six-year-old, born Pauline Matthews in West Yorkshire, had already been something of a teenage wow who'd recorded for Fontana in the UK and in the States as the first British artist signing to Motown, without yielding any real commercial success. Through Motown, she'd met John Reid and, in turn, Elton.
By the early seventies, Kiki's early promise remained unfulfilled. Moreover, Elton saw in her someone who was shy and likely bruised by her experiences of relative failure—something he could of course relate to, given his own past. "She's got a great voice," he enthused at the time. "She needs her confidence built back up again."
"I suppose I was quite insecure," says Dee. "But when I was onstage and when I sang I felt confident. I think a lot of performers are like that. I saw that Elton had a shy side as well."
When Rocket was just a wine-induced plot the year before, Elton and John Reid had courted Dee by inviting her around to the Water Gardens flat, on the same night that their guests included Neil Young and Elton's mother. At one point, attempting to fetch some champagne glasses from the kitchen, Dee fumbled with the sliding door on a cupboard and embarrassed herself. "All the glasses fell on the floor and broke," she remembers. "Elton came in and burst out laughing."
Only eleven days separated Kiki and Elton in age, and so, through their shared diffidence and sometimes fumbling awkwardness, they came to develop an almost brother-sister bond. Elton co-produced, with Clive Franks, Kiki's first Rocket album, _Loving and Free_. "I got signed at sixteen," she points out, "so for me it was the first time that I was working with people of my own age, really. I'd always been the kid before, with all these grown-ups. So suddenly Elton and the band and John became my family."
Kiki was invited to sing backup on Elton's spring '73 tour of Italy—a jaunt that further emphasized the bedlam around him. In Genoa, two hundred ticketless and frenzied fans attacked the carriage of the train carrying the band into the city ahead of the show at the ten-thousand-capacity Palasport di Genova indoor arena. On the street in Rome, there was a scuffle between the ever protective John Reid and a paparazzo attempting to snap candid shots of Elton. One fellow photographer caught the ugly tussle on film: the manager blurringly launching himself at the defiant snapper as the concerned singer looked on. It wouldn't be the last time Reid's pugnacious Scottish temperament was to get him into trouble.
—
AS FRUSTRATING AS the failed Jamaican trip had been, it nevertheless produced close to two dozen still unrecorded songs for the next album. And so, when Elton and the others returned to the Château d'Hérouville in May, the subsequent sessions were utterly energized. "He was on some kind of amazing long-term roll," said Gus Dudgeon, "which just didn't stop."
To Bernie's mind, being back at the château after the horrors of Jamaica was paradise. As Davey Johnstone remembers, the heavy-puffing atmosphere of Kingston was transported back to northern France. "Tons of hash," he says. "Rolling joints like there was no tomorrow. I think it reflects in the music. There's not a lot of uptightness to it, y'know."
At odds with this laid-back vibe, however, was "Saturday Night's Alright for Fighting," the most aggressive track Elton and Bernie had written to date, which was successfully completed at the studio. More reminiscent of the Stones or the Who than the balladeering piano man many still thought Elton to be, it was driven by Johnstone's gnarly guitar riff. Taupin took its lyrical inspiration from his teenage nights at the Aston Arms in the small Lincolnshire town of Market Rasen watching almighty ruckuses break out among the drunks.
The recording process of the song was as atypical as its sound and found Elton stepping away from his piano—which was later overdubbed—to stand up and sing at the microphone and aggressively marshal the band through the take. Nigel Olsson remembers Elton stomping around the live room shouting, "Come on, you bastards!"
In a concentrated three-week period, they laid down track after track after track. It was becoming clear that this next album would be a double. Bernie and Elton began throwing around ideas for titles: _Vodka and Tonic;_ _Silent Movies Talking Pictures_ (strangely deemed too camp by the singer); and—jokily, since the perma-stoned guitarist was forever experimenting and piling on the overdubs— _How Many Guitar Sounds Can Davey Johnstone Get on This?_
"Saturday Night's Alright for Fighting" was released in July as a single ahead of the still-untitled double album, and issued in a picture sleeve that spelled out the song's title in a dagger tattoo design and featured Elton in the guise of an antisocial thug swigging from a bottle of wine. Some radio stations instantly banned the record, understandably deeming it an incitement to violence. "I wouldn't want to be blamed for provoking anybody into a fight," says Bernie. "But at the same time it says a lot about the power of the song."
It hit number seven in the UK, though strangely only number twelve in America. But again, Elton had confounded expectations. Interviewed in the dressing room at _Top of the Pops,_ he spoke about how he was "becoming fed up with the singer-songwriter records. They drive me mad. I've always fought against the Elton John Syndrome."
Elton believed that the public's perception of him was one of a safely noncontroversial piano-playing balladeer, and he was defiantly determined to kick against that image.
But even if 1973 so far had seen him seize full control of his career, his thoughts were once again turning to giving it all up before the public lost interest in him. He had designs on concentrating on Rocket full-time, on becoming solely a music businessman.
Elton had achieved a level of fame previously inconceivable to him. Yet now, his greatest desire was to "gradually fade myself out."
Realistically, it wasn't going to happen. It was too soon to begin his descent. The double album was given a title: _Goodbye Yellow Brick Road._
He was just about to ascend to the next flight level.
A vision in rhinestone-encrusted glasses and feathers. Outlandish Elton in '73.
THE 136-FOOT-LONG JET was an Air Force One for the rock star elite. As the very first Boeing 720 to roll off the production line back in 1960, it had been flown for thirteen years by United Airlines before being sold to Ward Sylvester of Contemporary Entertainment, manager of TV caperers the Monkees and teen idol Bobby Sherman. From here, after a major interior refit, the now private plane was renamed the _Starship._
July 1973 saw it chartered by Led Zeppelin, after a turbulent and anxious flight for the band down the coast from San Francisco to Los Angeles in a tiny Dassault Falcon eight-seater forced their manager Peter Grant to think bigger. That summer it became for the British megagroup what their tour manager called "a floating gin palace"—the notorious scene of sexual shenanigans with groupies as generous lines of cocaine were snorted through rolled-up hundred-dollar bills.
Stepping through the _Starship_ 's forward boarding door, you entered a front lounge with galley kitchen and passed through a club room with revolving leather armchairs before walking into the Grand Salon with its maroon shag carpet. Here a thirty-foot couch ran along the inside fuselage facing a bar in which an electric organ had been installed for musicianly in-flight entertainment. To offset the ravages of boozy touring life, packs of vitamin pills were laid out on the counter. These could be washed down by beer or the potent cocktail of your choice.
At the rear of the plane lay what was nicknamed the Hippie Room, filled with beanbags and a low-level sofa in its own unique airborne crash pad arrangement. Behind it there was a bedroom featuring a shower cubicle and queen-sized water mattress covered with a throw of white fake fur.
As soon as he set eyes on it in August 1973, Elton fell in love with the _Starship_ and its trashy approximation of high class. Among its state-of-the-art luxury features was a video player, to go with its well-stocked film library of everything from Marx Brothers films to hardcore pornography.
"I remember showing my parents _Deep Throat_ on the _Starship_ while they were having their lunch," Elton says mischievously, conjuring up a mind-boggling image of a highly unusual family meal scenario at thirty thousand feet.
However, as he didn't possess an entourage on the scale of Led Zeppelin's (or their extraordinary appetites), often when Elton and the band were in midflight, they felt they were rattling around inside the plane, with too few of them really to fill the rarefied environment. Sometimes there were only eight people in a jet that could more than comfortably hold forty.
Still, for major league musicians, the _Starship_ revolutionized high-level touring. No longer did they have to endure the all too familiar, energy-depleting run of hotel–sound check–gig–hotel–sound check–gig–hotel. Instead, a band could base themselves in the major U.S. metropolises—New York, Los Angeles, Chicago, Houston—and commute back and forth to shows in the smaller cities.
Such an operation was befitting Elton's biggest U.S. tour to date. From the middle of August to the end of October, he would play forty-four dates at arena and stadium level. In keeping with this upward scaling, he decided that his show was now going to be "Liberaceized." More elaborate capes and suits were ordered, but it was in the specs department that he decided to go super-large. The singer had been commissioning the L.A. eyeglass designers Optique Boutique to conjure up ever wilder designs, and for this tour, the company's Dennis Roberts outdid himself with one massive pair costing five thousand dollars that spelled out Elton's name in flashing lights.
"I had a battery pack," Elton recalls, "which was like a fucking milk crate. I'd sort of stagger on the stage like Quasimodo and then, singing...my nose would be squashed by the weight of the glasses."
Sometimes the enormity of the shows took him by surprise and was more than he could physically handle. In Kansas City, at the Arrowhead Stadium, he leaped off the twelve-foot-high stage only to suddenly realize that he couldn't get back up onto it. A little shamefacedly, he had to sprint around to the backstage area before he could make his triumphant reappearance.
All roads, however, were leading to the Hollywood Bowl on September 7. It was a show designed to go over the top of the very top itself.
—
NO EXPENSE, IT seems, was spared in the high-profile buildup to Elton's appearance at the 17,500-capacity 1920s amphitheater set deep in the Hollywood Hills. It was only three years since he had first set foot in Los Angeles with his mind reeling. Now on Sunset Strip a mammoth billboard advertising the show, featuring an illustrated image of the singer as a thirties song-and-dance man in top hat and tails, smiled down on passersby.
On the evening of the gig, a Friday, the same cartoon image, stretched to wide-screen, stared out from the back of the shell-like stage at the Hollywood Bowl: Elton as Fred Astaire, flanked by chorus girls. Outside, scalpers were selling tickets for anything up to $500. Inside, as the sun fell, the audience slipped into their seats.
Come stage time, none other than Linda Lovelace, the star of Elton's beloved _Deep Throat,_ stepped up to the microphone to open the show. "Ladies and gentlemen, I'd like to welcome you to the Hollywood Bowl," she began with breathy, giggly excitement. "On this spectacular night, we hope to revive some of the glamour that has all but disappeared from show business. We're very lucky in having this evening with us many distinguished guests from all parts of the world, none of whom would dare to miss this show tonight."
Then, down a sparkling stairway came a parade of look-alikes, resembling the cover of _Sgt. Pepper_ brought to life: the Queen, Elvis Presley, Frankenstein, the Pope, the Beatles, Batman and Robin, Groucho Marx, Mae West...
"Lastly," Lovelace went on, "the gentleman you've all been waiting for and the costar of my next movie. The biggest, most colossal, gigantic, fantastic Elton John!"
He made his entrance at the top of the stairs in white fur chaps and matching wide-brimmed bolero hat. As he descended the steps, the 20th Century Fox theme boomed and trumpeted through the PA. The lids of five mock grand pianos, painted red, orange, silver, blue, and pink, were raised, spelling out E-L-T-O-N. From inside them were released four hundred white doves, flying out from the stage and into the night. More or less. Some of the birds, frightened by the bright lights and noise, refused to leave the comfort of their opened piano cages. Sidestage, John Reid was going nuts, shouting, "I want them out _now_!" Bernie found himself sitting inside the body of one of the dummy instruments trying to fling reluctant white doves into the air.
Elton slammed into "Elderberry Wine" from _Don't Shoot Me I'm Only the Piano Player_ before moving through a sixteen-song set comprising his biggest hits and still unheard numbers from the upcoming _Goodbye Yellow Brick Road_. By "Saturday Night's Alright for Fighting," as one lost dove winged around the stage in a panic, he was up on his feet, cajoling the crowd, before leaping atop his pink-satin-covered piano and throwing himself around in Jaggeresque dance moves.
As much as Elton was now visibly in his element, Bernie wasn't entirely convinced by his partner's dress-up antics. "Bernie fucking _loathed_ it all," Elton admits. "I think it did hurt the music. People thought it was more style than substance...which I disagreed with, obviously, because the music was there. But I understood where they were coming from."
"There were certain areas where I was less than enamored by his wardrobe," Bernie confesses. "Elton's larger-than-life persona has been probably detrimental to him as a musician. Because when you become a star of that magnitude, eighty percent of what people want to know about you, or hear about you, is what you're wearing, how you're living, how many cars you have and how much money you have."
But there was no stopping the pantomime. Similarly eccentric scenes filled his first show at Madison Square Garden in New York two weeks later, where he appeared once again in his daring white rodeo cowboy getup. At the close of the show, for "Your Song," the audience expressed their approval in a way Elton had never experienced before, as they lit matches and thousands of delicate flames flickered in the darkness.
He was now, to use the phrase John Lennon coined during the heady early days of the Beatles, at the toppermost of the poppermost. If Elton privately doubted he could remain at this level for any great length of time, he wasn't alone. After the Madison Square show, one music business insider was heard to mutter, "I don't know how he can keep it up. But for the moment we're all people who gain from it."
Unsurprisingly, his punishing schedule was again causing his temper to flare unpredictably. One night in a restaurant, as Dee Murray remembered, for no apparent reason Elton stood up and started yelling at the band, "You're all nothing but a load of bastards," before walking out. He returned a minute later and continued to heckle the bewildered members of the group before flouncing off for good.
After a show at the Nassau Coliseum in New York, Elton threw an almighty strop that was to embarrass him for years to come. He stamped onto the _Starship,_ ahead of takeoff for the short flight to Boston, parking himself in a chair in the front lounge of the plane. From the Grand Salon came the sound of someone playing "Crocodile Rock" on the bar's built-in organ. Elton was gently encouraged by those around him to go back and join the party and check out the guest keyboardist.
"I was in a shit mood," he says. "I'm sitting up at the front of the plane sulking and everyone's saying, 'We've got a surprise for you.' 'Fuck off! I don't want a fucking surprise.' In the end, they were crying and they said, 'You've got to come back...it's Stevie Wonder.' " Elton, suddenly coming to his senses, realized he was being "an asshole."
There was more in the way of unwanted drama later in the tour, at Baltimore Civic Center Arena on September 30. From the stage, Elton could see security personnel harassing fans and refusing to allow them to get up and dance. Then, one girl ran to the front of the stage to take a photograph and a beefy guard grabbed her and threw her aside. Furious, Elton stopped the show, jumped off the stage, and started goading the guards through the microphone. "You should be at home minding your babies," he shouted, to the sound of cheers. Pissed off by this very public put-down, the security people deserted their posts. Freed from their control, around five hundred audience members dangerously stormed the stage, threatening its collapse.
Down south one night in Atlanta, Georgia, as the tour drew to a close, the exhilaratingly demented Iggy Pop and the Stooges, an Elton fave, were playing a late-night show in the city at a club called Richard's. Partway through their reliably ferocious set, someone in a gorilla costume leaped onto the stage and started grooving along, to the fist-pumping delight of the manic crowd.
A stoned Pop was disturbed by the interloper, though, somehow getting it into his head that he was a speed-freak biker in cunning disguise come to attack the band. He was tempted to knock him out with a punch. Then the stage invader removed his gorilla mask to reveal himself. It was Elton. With surprising strength, he hoisted Iggy up into his arms and the part superstar, part gorilla stomped his cares away to the sound of proto punk rock.
—
MEANWHILE THE PROCESSION of look-alikes that had opened the momentous Hollywood Bowl show was reproduced, with a different cast, on the _Goodbye Yellow Brick Road_ double album. In Bernie's mind, key parts of it were images of old Hollywood reflected in modern songs. "It's all of these characters collapsing into one," he says. "They're falling off the silver screen."
It was an album populated by the stars and archetypes Bernie and Elton had viewed from their seats in cinema stalls throughout their childhoods. At the Embassy in North Harrow back in the fifties, young Reg had seen Roy Rogers make a personal appearance with his faithful horse, Trigger. Now, on the new album, this heroic Trucolor cowboy was remembered in a song that took his name and painted him in a far more romantic light than Bernie's previous depictions of the harsh Old West. A timeworn nine-to-five working narrator, numbed by adulthood, gets home, closes his curtains, and relives youthful thrills, watching old Roy Rogers movies on TV as his wife and child sleep in another room. In "Sweet Painted Lady," the harlot-with-a-heart protagonist mirrored the makeup-caked saloon hookers of cinematic westerns. Elsewhere, "The Ballad of Danny Bailey (1909–34)" was an elegy to a gunned-down gangster who very much resembled the mobster models of Jimmy Cagney or Edward G. Robinson.
In an album of many standouts, "Candle in the Wind" was to become as enduring as the legendary status of its tragic subject. Bernie, like millions of others the world over, had been captivated by the glamour and fragility of Marilyn Monroe. For the lyricist's twenty-first birthday in May '71, Elton had bought him one of Monroe's dresses, housed in an illuminated Perspex display case. The evocative title of the song quoted a line that record executive Clive Davis had used to describe Janis Joplin (although the lyricist says he was also aware of the 1960 dystopian play of the same name by Russian writer Aleksandr Solzhenitsyn). Bernie took the phrase and built from it a memorial to Monroe from "the young man in the twenty-second row" seeing the film star as someone "more than sexual," who had been acutely vulnerable and exploited by the movie business and the media.
Later the song's ubiquity and heavily emotional tone made Taupin feel that the lyric of "Candle in the Wind" had been one-dimensionally interpreted as being written by someone utterly in awe of the dead star. "To be quite honest I was not _that_ enamored with Marilyn," he insists. "What I was enamored with was the idea of fame and youth and somebody being cut short in the prime of their life. The song could've been about James Dean, it could've been about Montgomery Clift, it could've been about Jim Morrison...how we glamorize death. How we immortalize people."
For his part, Elton, while understanding the power of the track, "thought we were gonna get groans from people." Instead, "Candle in the Wind" was to become possibly the most famous song in Elton and Bernie's catalog.
Throughout _Goodbye Yellow Brick Road,_ there was a recurring theme of sympathy for the troubled outsider. In "All the Girls Love Alice," the promiscuous sixteen-year-old central character gets lost in the murk of the gay scene and ends up dead in a subway. The nihilistic boozer of "Social Disease," wasted from morning to night, at first seems to be happy in his alcoholic haze, until he reveals that he feels his days are aimless and hopeless.
All of this was a desire on Bernie's part to add grit to his and Elton's songs, partly in the wake of the throwaway pop of _Don't Shoot Me I'm Only the Piano Player_. "I wanted a little angst in what we were doing," he says. "I didn't want us to be just thought of as this pure pop machine."
Elton, musically, was also darkening the tone, particularly with the double album's two-part, eleven-minute opener, "Funeral for a Friend / Love Lies Bleeding." The first half, a mournful symphony for synthesizer played by David Hentschel, was written by Elton when he was in a fog of depression and imagining music for his own wake: "I got very down one day. I'm hung up on things like that. I like tearful, plodding music." The second part is far angrier, a thundering up-tempo song of heartbreak with hints of almost suicidal torment.
In fact, the death count on _Goodbye Yellow Brick Road_ was high—Danny Bailey, Alice, Marilyn, and in "Funeral for a Friend," even the singer himself. Only later would Elton realize quite how bleak the record was lyrically, when a DJ in Philadelphia brought it to his attention, saying, "Hey, your new album...Bernie is so bitter these days."
"I listened to it again," Elton reflected at the time, "and it's true. It's a very depressive album, although I'd never thought of it like that before."
At the same time, there was enough in the way of pop sensibility on _Goodbye Yellow Brick Road_ to ensure that the hits from the album kept coming. In "Bennie and the Jets," Gus Dudgeon used crowd noise sound effects—lifted in the introduction from the recording of Elton's '72 Royal Festival Hall show and in the outro from the audience recorded at a Jimi Hendrix gig, mixed in with the band hooting and clapping off-time in the studio at the château—to create a faux live recording. This elaborate sound design backdropped Elton's most soul-oriented creation yet, slipping between full voice and falsetto over a staccato groove in a hypnotic paean to what Bernie called a "robotized, futuristic rock'n'roll band" fronted by a "butch girl" in electric boots and mohair suit.
Best of all was the title track of _Goodbye Yellow Brick Road,_ an affecting ballad that managed to balance itself between jaded sentimentality and soaring optimism while sounding like something Lennon and McCartney might have written in 1967, updated with the production values of 1973. Its lyric was the perfect summation of the tale Taupin had previously explored in "Mona Lisas and Mad Hatters" and "Honky Cat"— the country-born innocent drawn to the alluring flame of city life. In "Goodbye Yellow Brick Road," he has become the plaything of a high society figure, and he is defiantly escaping the gilded cage to return to his farm and plow.
For Bernie, it was a song that had its roots in his first experiences of coming to London back in '67: "It was about me being the country kid coming to town and being a little out of my depth. It was a sort of Dick Whittington tale—going to the city, making it big, but knowing that reality lay back where he came from. It just ended up being a song that seemed to echo those feelings of homesickness that I experienced in my first few months down in London. I was torn between the potential glamour of the bright lights and my country roots."
One of three unarguably classic songs on the double album, "Goodbye Yellow Brick Road" sat alongside "Candle in the Wind" and "Bennie and the Jets" as proof of the now gold standard quality of Elton and Bernie's compositions. At the same time, with its edgy characters and moments of censor-baiting profanity (one "shit," a "bitch," and a couple of "jerk-offs"), the expansive album certainly wasn't just light fare for the pop kids. Like most double LPs, it was experimental and sometimes messy, but it also displayed an impressive array of musical styles and much in the way of light and shade. In short, it was Elton and Bernie's greatest accomplishment up to that point.
As such, it needed a suitably iconic cover. When it came time to hastily conjure up the artwork, DJM sleeve designer David Larkham remembered an advertisement image he'd seen in the portfolio of illustrator Ian Beck: "A guy was gazing wistfully at a poster on a wall and the idea...sort of epiphany...just gelled." The concept developed into a vivid graphic of Elton on a run-down city pavement stepping, with platform boots the color of Judy Garland's ruby slippers, into a poster of Oz.
It depicted someone magically passing from a drab environment into a brighter otherworld. As a visual metaphor, it perfectly captured in one colorful image everything Elton had experienced in the past four years.
—
HOLLYWOOD WAS HERE, there, and everywhere and even followed him home. One Sunday afternoon Elton was sitting by the swimming pool at his Wentworth bungalow with a friend, the Scottish TV comedian Stanley Baxter. That week, Bryan Forbes and Nanette Newman had none other than Katharine Hepburn as a guest at their nearby home. The highly spirited star of _The Philadelphia Story_ and _The African Queen,_ then sixty-six, liked to swim, and she asked the couple about the availability of nearby pools. Elton casually mentioned that the actress could come by and use his anytime she liked.
"I never had gates at my property," he remembers. "Suddenly we see this bicycle riding up the lawn and Stanley Baxter's face... _'Oh, fuuuck...'_ 'Cause that was his idol. There was a dead frog in my pool and I've got a phobia about frogs, so I wouldn't go in it. And she got this huge fucking leaf, dove in, got to the bottom, and threw it out.
"I went, 'How could you _do_ that?' And she went, 'Character!' She'd walk in the house and go, 'Your furniture's all wrong. You need that over there. This there.' An incredible woman."
All of this was becoming strangely normal to Elton—a sign that he was finally adjusting to his own stardom. Back over in Los Angeles, the cover of _Goodbye Yellow Brick Road_ was being revealed as an enormous billboard teasingly painted day by day, piece by piece, on Sunset Strip. As publicity gimmicks went, it was good. But the record company felt they could go one better.
Journalists in New York and Los Angeles were invited to what was promised to be a cutting-edge technological, bicoastal video press conference via satellite. Bernie on the East Coast said a few words of introduction in New York, followed by Gus Dudgeon in L.A. Then Elton appeared on screens in both locations, broadcasting from his room at the Holiday Inn somewhere in the middle of America, and began to take questions from the assembled writers.
"It was supposedly coming by satellite," Dudgeon remembered. "In actual fact he was sitting in a room about twenty feet away that we made look like a Holiday Inn. They were like, 'Wow, this is great!' Then we started putting up interference on the screen. I'm saying, 'Well, I'm very sorry, ladies and gentlemen, I think we're losing contact with Elton now.' It's going, _crrrrr,_ y'know, blizzard across his face."
Once the conference was over, there was much excited babble among the journalists as they made their way over to the buffet table. Then, Elton, dressed down and unnoticed at first, stepped among them and started helping himself to the sandwiches. One writer suddenly spotted the star and said, "Wait a minute, fuck, that's him standing there."
"It was a classic piece of hype," said Dudgeon. "It worked a treat. And of course we got fantastic press out of it."
A subsequent playback of _Goodbye Yellow Brick Road_ in a screening suite at Universal Studios in L.A. didn't go quite as smoothly. To ensure that journalists heard the album in the highest possible quality, an expensive sound system was brought into the room, with a slide show of images from the album's artwork set to accompany the music. As the lights dimmed and the opening overture of "Funeral for a Friend" filled the air, it was clear that there was something wrong with the sound.
John Reid was incensed. When the first side of the album concluded with "Bennie and the Jets" and the lights went up, he jumped out of his seat and harangued the sound engineer. "Can't you get the fucking thing together?" he snapped. The stressed-out tech fired back, "It's not us, it's the fucking tape." Reid smacked him in the mouth, drawing blood. There was a rush forward by people trying to hold the manager back, but he managed to throw a few more punches at the engineer before being pulled away. As he left, he shouted back at the techs, "Bloody cunts!"
Challenged about the incident in a subsequent issue of _Rolling Stone,_ Reid was unapologetic and only a touch repentant: "I don't make excuses. I'm not particularly proud of it. But any time anything like this has happened, it's been in defense of Elton or Bernie, not for personal reasons."
In the end, _Goodbye Yellow Brick Road_ received mixed reviews from the critics. Roy Carr in the _New Musical Express_ oddly considered the album to be "like an old sweater...crafted for maximum enjoyment and quite indispensable," while declaring it "exquisite and by far the finest mass appeal album to have emanated from Downtown Oz." Stephen Davis in _Rolling Stone_ was unconvinced: "A massive double-record exposition of unabashed fantasy, myth, wet dreams and cornball acts...too fat to float, artistically doomed by pretention but redeemed commercially by a couple of brilliant tracks."
No bad review could sink _Goodbye Yellow Brick Road,_ though. In the second week of November, in the States, it hit number one, for the first of an unbroken eight-week run. In December, it reached number one in the UK. In both markets, it was Elton's second chart-topping album of 1973.
—
SITTING AT A black grand piano in the living room of the Wentworth bungalow—his hair dyed a Ziggyish orange, his accent suddenly affecting curiously cut-glass, upper-class English tones—Elton, talking to Bryan Forbes, insisted that in some ways, the astonishing fame he'd achieved was something a part of him had always anticipated.
"I knew I'd be famous one day," he said, in the mannered speaking voice that with his disheveled bohemian image gave him the air of disgraced royalty. "I mean, I was always convinced. I knew I'd probably have to wait till I was fifty-three. But I just knew. It was the only thing that kept me going...this ambition."
The scene was an interview shot for Forbes's documentary _Elton John and Bernie Taupin Say Goodbye Norma Jean and Other Things,_ which took the director a full year to cut from eighteen hours of footage before being screened by ITV in Britain and ABC TV in the States. Given exclusive access to the French recording sessions for _Goodbye Yellow Brick Road_ and the Hollywood Bowl show, the TV special offered unique insight—thanks in large part to the director's and artist's personal friendship—into Elton's extraordinary circumstances and state of mind in 1973.
"At twenty-six, he walks confidently on five-inch heels where others fear to tread," Forbes stated in his verbose introductory voiceover, which had the knowing tone of the insider. "Sometimes as bright and unyielding as the diamonds he wears on his fingers. Sometimes plunged deep into self-critical gloom. Extravagant and generous, seeking fame one moment, determined to reject it the next. The life and soul of the party. The party destroyer. The genuine article. The superstar who does his own hoovering."
Cutting between the glitziness and intensity of Elton's live shows and footage of him kicking a ball around his garden with his dogs, the film was designed to depict him as a runaway musical sensation with a fairly normal home life. Ultimately, though, the documentary couldn't help but reveal his insecurities, whether through his own admissions or the words of others.
In the film, Elton appeared skinny—if not quite on the virtually emaciated level of Jagger or Bowie at the time—but said he felt fat. If he ate even one slice of white bread, he joked, "I'm instantly the size of the Crystal Palace."
"He's had darker moods since he's made it into the pop world than he ever did before," confessed his mother, before adding what might later have seemed a disclaimer when it came to his private life. "He's always been a very quiet boy, though...never been a boy to have the gay life or anything."
Dick James, meanwhile, encapsulated in two sentences the Reg-to-Elton transfiguration he'd witnessed firsthand. "I think he's a paradox in himself to a great extent," he said. "He's an introvert that projects a tremendous extrovert performance and image."
For Christmas, this quiet boy turned tremendous extrovert brought Hollywood back to Britain, a country sorely lacking in glamour during the grim winter of 1973. Prime Minister Edward Heath was locked in a steely battle with the National Union of Mineworkers as the country was slowly being crippled by high inflation rates and the government's capping of public sector wage rises. Boldly, the miners began a work-to-rule policy that was painfully depleting the nation's coal stocks. In a drive to counter the industrial action, on December 13, Heath announced that a three-day workweek to conserve energy in the UK would come into force at the end of the year. Nonetheless, the shortages would result in power cuts and blackouts throughout the country.
Before their arrival, then, there was possibly no better time to light up the capital. Five days before Christmas, Elton began a five-gig stint at Hammersmith Odeon in west London that echoed the Beatles' festive residency there back in 1964. He even made his own contribution to the canon of yuletide records with a jaunty Phil Spectorish romp replete with clanging tubular bells titled "Step into Christmas," though it was set to be eclipsed in Britain by Slade's cheerful thumper "Merry Xmas Everybody."
The Hammersmith Odeon shows were a series of daft and loose performances. Debuting with the lineup for the first time as a full-time member was Ray Cooper, a chiseled, intense percussionist who would throw his tambourine high in the air and even contributed a bizarre duck call solo to the middle of "Honky Cat." Getting fully into the spirit, Elton played an instrumental piano rendition of "Rudolph the Red-Nosed Reindeer," recalling the pub-playing days of the teenage Reggie back at the Northwood Hills.
At a key point, styrofoam snow tumbled down onto the stage, threatening to throw the whole show into amateur pantomime chaos. The fake blizzard was copious and far too thick. It covered everyone and everything and jammed up the piano keys, much to the cackling delight of the band.
"I'm available for weddings, Christmas parties, everything," Elton the styrofoam snowman laughed, at the close of what had been a momentous year. "Ten pounds an hour."
Elton venturing further and further out there with the ludicrously extravagant costumes while on tour in Australia in 1974.
HE WAS WANDERING past a table at the back of the studio when he noticed there was a line of white powder on top of it. Elton was still so inexperienced when it came to drugs, he really wasn't sure it was what he suspected it might be. He asked John Reid, "What on earth is that?" Reid told him it was cocaine. Elton figured he might as well take a little sniff.
He had started drinking more because it loosened him up, made him feel safer, more secure. Everyone else was taking drugs, and to his mind, he was the outsider. He didn't smoke, so he couldn't share with them even a puff on a joint. He was sick of missing out on all the action.
That first line of coke broke down all of his remaining barriers. It made him chatty chatty chatty. He could really start communicating now.
It felt slightly dangerous, too, and he was sick of being a goody-goody.
More important, Elton thought the fact that he could now snort away with the others made him finally accepted by them. All of a sudden, he was in with the in crowd. He was part of the Class A crew. He'd finally arrived.
"I did get into drugs 'cause I wanted to join the gang," he admits. "My band were doing drugs so far ahead. I was so naive."
It was January 1974 and Elton and the others were high—and getting higher still—in the Colorado Rockies at the remote studio location of the Caribou Ranch. Back in the first week of September '73, when he'd been playing nearby at the Coliseum in Denver, he'd gone up to the ranch to check out the studio, which its owner, Jim Guercio, producer of the band Chicago, had opened the year before. Elton liked some of the records that had already been made there: the heavy blues and alien talk-box vocal sounds of Joe Walsh's "Rocky Mountain Way" (from his vividly titled 1973 album _The Smoker You Drink, the Player You Get_ ), the head-nod groove of Rick Derringer's "Rock and Roll, Hoochie Koo."
Caribou was tough to get to, which was one of its main attractions. From Denver, you drove the thirty-five miles to Boulder in the foothills of the Rockies. Then you took the Boulder Canyon road, slowly ascending almost three thousand feet on the winding route to Nederland. Nine miles on, you passed through the gates of the ranch, set within four thousand acres of mountain land, motoring along another two and a half miles of driveway before you finally reached its doors.
The setup was rustic but lush, with a series of cabins dotted around the building that housed the studio. But life on the ranch, particularly in winter, often caused an extreme climate shock to visiting bands, when the snow was two feet deep and sixty-mile-an-hour winds were whipping it around. Recreation for the hardy might involve scooting around on a convoy of snowmobiles.
There was much in the way of comfort, however, to offset the weather conditions. The residential lodges were furnished with antiques and brass four-poster beds, and movies could be piped directly to TV sets in the cabins. The studio itself was hunting lodge cum seventies chic: mounted deer heads, leather armchairs, thick brown carpets, dark pine walls. Built on a former Native American burial ground, the ranch, like the Château d'Hérouville, was believed to be filled with ghosts. One of its cabins, in particular, Running Bear, was said to be "haunted as hell."
Cut off from everything, Caribou was the perfect place for rock stars to hide away and get down to serious business, whether that be intensive recording or intensive snorting, or often a combination of the two. As such, the sessions for the next Elton album were his and the band's most hedonistic yet.
"A very, very apt word is hedonistic," confirms Bernie. "It was in the winter and we always used to say there was more snow inside than outside."
—
EVEN AFTER THE outpouring of songs for _Goodbye Yellow Brick Road,_ for Elton and Bernie, it seemed as if there was still plenty of creative flow left. The follow-up album, which would come to be titled _Caribou,_ was written and recorded at the ranch in a lightning eight days. Much of this accelerated activity, though, was down to the sheer pressure to get the record done and dusted before impending tours of Japan, Australia, and New Zealand.
Up at the ranch, Elton looked in the mirror and saw a bloated zombie staring back. In the past few months, he'd put on forty-five pounds through drinking over half a bottle of whisky a day. At twenty-seven, his hair was rapidly thinning, and he worried that his body was on the verge of collapse. He felt like death warmed over.
Cocaine helped to snuff the hangovers and push him through the process, but still, the making of _Caribou_ was excruciating. His mood swings, likely worsened by his newfound narcotic dabbling, were more extreme than ever. As he had been during the recording of _Don't Shoot Me I'm Only the Piano Player,_ he was in such a weird mental state that the project lost three days in between songwriting and recording while the singer pulled himself together. Once work resumed, the backing tracks were cut in only two and a half days.
_Caribou_ was an album made on autopilot. Its creators may have been shattered, but at the same time they knew they could knock another record out. "It's a very uneven album," Elton says, looking back. "It was an album that we had to get out under contract."
Bernie and Elton were at this point, the singer confesses, "quite cocky." Some of their songs were flippant and a touch bizarre, such as "Grimsby," a tongue-in-cheek attempt to romanticize the unspectacular east coast English fishing town, sarcastically talking up its culinary delights of pies and peas and cotton candy. "Stinker" was a bluesy sludge as bad as its title, being the confessions of a tramplike figure who lives in a hole in the ground. Highlighting their misguided, coked-up overconfidence, Elton and Bernie even joked about calling the album _Stinker_.
Worst of all was the intentional lyrical gibberish of "Solar Prestige a Gammon," its verses rendered in a hammy operatic Italian voice before it bounced into a jaunty, jokey Euro pop chorus. In the minds of the writers, there was method behind this madness, as well as madness behind the method. Over the past few years, some of Elton and Bernie's songs had been wildly misinterpreted by overthinking listeners as possessing hidden religious subtexts—the "nailed to my love" line in "I Need You to Turn To" from _Elton John_ was said to be about the crucifixion of Jesus Christ; "Border Song" with its "holy Moses I have been deceived" refrain was seen by some as anti-Semitic.
The pair decided to have a little nose-thumbing fun. "We just thought, We're gonna write this load of old rubbish and just put it out," Elton says. "That was us coasting a bit. When you're at number one, you can do anything you want. The very reason we did it was to say, 'Listen, no one's gonna read anything into this.' But, fucking hell, if there's not the name of five fishes in it..."
Amid its dippy wordplay, "Solar Prestige a Gammon" happened to list five species of fish—"lantern," "salmon," "cod," and the intentionally misspelled "sardin" and "turbert." This was decoded by some fans as being a direct reference to Christ's Feeding of the Four Thousand. "We got more letters on that song 'cause of the five fishes," the singer remembers. "It was, Oh this has _gotta_ be religious."
It seemed there was no way to suppress the code hunters. With uncanny coincidence, one overzealous fan even managed to unscramble the nonsense title of "Solar Prestige a Gammon" as an anagram of "Elton's Program Is a Game."
For an album written and recorded under duress, _Caribou_ still contained some great tracks. "The Bitch Is Back" was a tongue-in-cheek soul-tinged rocker poking fun at Elton's reputation as the king of the wicked put-downs, which referred to his "sniffing pots of glue" before—with some irony, considering his current inebriated state—claiming to be in reality "stone cold sober." "I've Seen the Saucers," in Elton's estimation "not the greatest song in the world," was better than he thought it was—an elegant ballad sung from the imagined viewpoint of a true UFO believer. "Ticking" was an evocative short story in song, the tale of a quiet individual who suddenly snaps and commits a mass shooting in a bar in Queens, New York, killing fourteen people before being gunned down by police.
There was one song on _Caribou,_ though, that towered above the others. "Don't Let the Sun Go Down on Me," a slow burner that exploded into a pleading, emotionally charged chorus, was in Elton's mind his homage to the Beach Boys, while in truth it sounded like no one other than himself. On the morning he was writing it, a passing Nigel Olsson, heading to bed after staying up all night, heard him playing the song and offered two simple words of appreciation: "Number one," said the drummer.
However, attempting to record the vocal for "Don't Let the Sun Go Down on Me" at the ranch, eighty-six hundred feet above sea level in thinning air, proved to be a challenge for Elton, provoking one of his now notorious tantrums. "A hard song to sing," he points out, in his defense. "It's a naked vocal for the first half. Your voice is sticking out like a sore thumb. I couldn't get the vocal right. In the end, I went, 'You can send this fucking song to Engelbert Humperdinck! And if he doesn't like it, tell him to send it to Lulu!' "
After finally completing the take, Elton walked into the control room of the studio and said to Gus Dudgeon, "If you put this on the album, I'll sodding well shoot you." He hated the song, hated his vocal on it. A year later, "Don't Let the Sun Go Down on Me" was nominated at the Grammys for Best Pop Vocal Performance—Male.
But not before this imagined tribute to the Beach Boys was completed in Los Angeles with the addition of harmonies by the band's Carl Wilson and Bruce Johnston, along with a dramatic horn arrangement by Del Newman that allowed the track to soar. Recorded in a sulk, "Don't Let the Sun Go Down on Me" was nevertheless—if not, as Olsson predicted, number one—a U.S. number two hit.
Still, it was quite a risky track to put out as a single, being five and a half minutes long. Before releasing it, driving along in his car one day, Elton played a cassette of the song for Rod Stewart. The two friends shared a similar sense of absurd humor—giving each other the drag queen alter ego names of Sharon (Elton) and Phyllis (Rod)—and were always in sharp, if good-natured competition with each other.
Elton sat at the wheel, waiting for Rod's reaction, feeling antsy throughout the two minutes before the chorus of "Don't Let the Sun Go Down on Me" finally kicked in.
_Fucking long single,_ thought Elton, _and so slow_.
Rod turned to him and drily asked, "Ballad, is it?"
—
FROM JAPAN, IN February, where he was mobbed everywhere he went, he traveled back to Australia before moving on to New Zealand. His first tour in this part of the world in October '71 had been ill-starred. As this second trip would prove, it was a destination where Elton would appear to be cursed.
It started well enough. Even though he was drained and appeared "sedate" to onlookers, he managed to be chirpy when answering reporters' questions upon his arrival. One journalist wondered how he had changed since the last time he was in Australia. "I'm the _balding_ Elton John now," the singer quipped.
There hadn't been much development of the Australian concert circuit in the intervening three and a half years, and so again Elton was booked to play an odd mix of sports stadiums and racetracks. Only this time around, he was selling them out and breaking attendance records.
On the tour, he was carrying with him an eye-watering $200,000 worth of personal baggage, in elaborate trunks that folded out into a series of traveling wardrobes filled with costumes and shoes and hats and glasses. He planned to unveil two new outfits in particular during this jaunt down under. The first was a tight black Lurex zip-up suit strung with dozens of small red, orange, blue, and green balls, while others sprouted from his shoulders on piano wire. The second involved a voluminous arrangement of outsized feathers—in tribute to the 1920s–'30s exotic dancer Josephine Baker—that made him look like a psychedelic peacock.
The shows were slick for the slower songs and utterly forceful for the harder rockers, with Elton nightly attacking "Crocodile Rock" and "Saturday Night's Alright for Fighting" in particular. One elated reviewer in Melbourne called him a "genius in feathers" and said that his "piano-playing is comparable to the venom of Jerry Lee Lewis, with at times the delicate touch of a classical pianist."
It was in New Zealand that the tour hit trouble, ahead of Elton's show at the Western Springs Stadium on February 28, where he was to play to a crowd of thirty-five thousand—a shade over one percent of the country's entire population at the time. The afternoon before, a reception was thrown for him by his antipodean label, Festival Records, at a cream-colored, colonial-style pavilion on the grounds of the Parnell Rose Gardens in the city of Auckland. Outside, the party was greeted by the performance of a traditional Maori haka, or war dance. It was only too fitting a prelude for what was to follow.
Inside, as the guests milled around and knocked back free drinks, the bar quickly ran out of beer. Requesting a whisky, John Reid was told that, sorry, there wasn't any in stock. The manager immediately got into an argument with the event's organizer, Kevin Williams, shouting at him, "You're an incompetent!" Williams offered Reid a glass of champagne instead. Reid threw it into his face.
Ten minutes later, at the bar, Judith Baragwanath, a model and writer for Auckland's _Sunday News,_ challenged Reid about his behavior. "How could you do that to anyone?" she demanded. "You rotten little bastard." Reid later claimed that she—or perhaps someone else—then added, "You little poof." In a blind rage, and what he described as "a reflex action," the manager punched her in the left eye. Elton, on the other side of the room, was unaware of the commotion.
Later that same night, at an after-party following American pop star David Cassidy's show at Auckland Town Hall, a colleague of Baragwanath's from the _Sunday News,_ the reporter David Wheeler, was allegedly heard muttering that Reid and the others were now "marked men." Reid got wind of this and advised Elton it would be best if they all left. Instead, an enraged Elton decided to accost Wheeler, grabbing him by the collar and hissing in his ear, "You no-good son of an Irish leprechaun. You've threatened my manager?" The journalist protested, "I don't know what you're talking about." Elton was on the verge of hitting him. But Reid got in there first, smacking Wheeler to the ground before laying a few kicks in for good measure.
They made a hasty exit, speeding off back to their hotel. Once there, they received a call from the head of David Cassidy's security personnel warning them that there was now a carload of angry men out cruising the streets looking for them. No one left the safety of their rooms for the rest of the night.
The next day, the police arrived, looking for John Reid. He was arrested and charged with two counts of assault, and refused bail. Only when his attorneys protested that the manager had to be present at the Auckland show that night was Reid temporarily freed, after shelling out five thousand Australian dollars. Incredibly, given the violence and drama that had preceded it, the gig at Western Springs Stadium, before the largest crowd in New Zealand concert history, was a resounding success.
The following morning, Reid was due back in court, and Elton with him, on a charge of assaulting Wheeler. After a twenty-minute hearing at Auckland Magistrates Court, the singer, tamely dressed in a gray suit, was released when the fracas was deemed the result of a misunderstanding. He was charged fifty Australian dollars to cover prosecution costs, and before he left, he signed autographs for fans waiting in the courthouse.
Reid didn't get off as lightly, not least since it was revealed that Judith Baragwanath had suffered a black eye, as had David Wheeler, though the latter's list of injuries extended to chipped and cracked teeth and bruising. In an effort to try to reduce his punishment, the manager had already promised to pay civil damages to both victims out of court, but nonetheless the judge noted "the continuity of the offenses" while adding that Reid had shown "an ill-mannered, arrogant indifference to people in the way he dealt with them." He was sentenced to twenty-eight days in Auckland's Mount Eden Jail.
In the end, Elton completed the remaining dates in Australia before flying home without his manager and partner.
—
MEANWHILE, OVER IN the States, something entirely unexpected was happening. In Detroit, at the R&B station WJLB, the twenty-year-old late-night DJ Donnie Simpson picked "Bennie and the Jets" out of the tracklist of _Goodbye Yellow Brick Road_ and began giving it regular spins on the air. Initially, his program controller, Jay Butler, was nervous about this—WJLB catered mainly to a black audience, who he worried might balk at the idea that the station was suddenly playing a song by Elton John, someone from a very white, very pop background. Butler called a few black record stores in the Detroit area. Their managers told him that _Goodbye Yellow Brick Road_ was an album already being bought by many of their customers.
Three days later, "Bennie and the Jets" became the song most requested by callers to WJLB. Jay Butler phoned Elton's promotion man, Pat Pipolo, giving him the surprising news and telling him he was going to talk to Rosalie Trombley, the program director at CKLW in Windsor, Ontario. The Canadian AM station's far-reaching broadcasts could be heard throughout the American Midwest, making it highly influential when it came to the Top Forty in both the black and white markets. Hearing from Butler the reaction to "Bennie and the Jets" in Detroit, Trombley called Pipolo. She advised him to release the track as a single, saying it would be a guaranteed hit not just on the pop chart but also on the R&B chart.
No one at the label, or Elton himself, was sure about this. "Candle in the Wind" was already lined up to be the next release. Elton asked Pipolo: "Are you prepared to put your career on the line?" The promo man replied, "Well, no, not really, but I think we should release it as a single. I think you'll be an R&B artist as well as a pop artist."
Pipolo stuck his neck out and "Bennie and the Jets" was released as an A side on February 4, 1974. Within weeks, it had risen to number one on the _Billboard_ chart and, amazing everyone involved, number fifteen on the R&B chart.
"I'm a part black man," jokes Elton. "I'm sure there's a part of me that is black, because I've always loved black music. I mean, come on, you're a white kid from fucking Pinner and you're on the black chart. It was validation from the music people that I loved the most. It was one of the nicest things that happened."
"If you look back at that point in time," says Bernie, " 'Bennie and the Jets' wasn't what was going on radio-wise in general. We set a trend, or broke the mold."
—
AFTER AUSTRALIA AND New Zealand, Elton was wiped out. There was no way he could face the European tour due to begin in April, so all of its dates were canceled. Instead, he took a very much needed vacation, booking himself into the John Gardiner Tennis Ranch on Camelback Mountain near Paradise Valley, Arizona. Spending close to a month there, and on the tennis courts every day, he lost twenty-eight pounds.
Elton returned to the UK in May, fitter, happier, and completely revitalized. He was itching to get back onstage and so seized the opportunity to play a benefit for Watford Football Club, the team he'd supported since childhood and continued to go to watch play whenever he could. Since the previous November he'd even lent his name as a vice president to the club. At the time, Watford, stuck in the Third Division, was suffering serious money troubles. "I wouldn't like to see them go under," Elton told reporters before the show. "I'll do everything in my power to save them." Although high profile, this benefit concert was almost a token gesture. He started to wonder if he might somehow get involved with the club on a deeper financial level.
On the afternoon of May 5, thirty-one thousand fans (five times the number of people who normally came to Watford matches) crammed into the team's Vicarage Road stadium. Elton arrived onstage in a gold-and-black-striped jacket and matching trousers reflecting both the team's playing colors and their nickname, the Hornets. His eyes shielded behind oversized white shades that looked less like sunglasses than ski goggles, he sat down at his grand piano—which was covered in silks of rainbow hues fringed with yellow, pink, and red feather boas—and let rip. It was clear from the get-go that his energy was back at peak level.
Among the hits, he found time to air for the first time his take on the Beatles' "Lucy in the Sky with Diamonds" and, when dark clouds above produced a passing shower, to lead the crowd in a rendition of "Singin' in the Rain." Then he introduced a special guest, the tufty-haired, white-clad Rod Stewart. Together they launched into a harmonizing version of "Country Comfort" (to which Stewart himself had given a raspy treatment four years earlier on his album _Gasoline Alley_ ).
Out in the crowd, the fans crushed forward and swayed dangerously as girls got up on their boyfriends' shoulders and waved tartan scarves. As soon as the song was over, two St. John's ambulance men rushed onto the scaffolding stage and urged the people at the front barrier to move back. Rod departed after rousing renditions of Chuck Berry's "Sweet Little Rock 'n' Roller" and Jimi Hendrix's "Angel," leaving Elton to finish with a walloping "Saturday Night's Alright for Fighting." In the director's box, the slightly hysterical wife of an American record executive was running up and down the aisles, loudly exclaiming, "He's my generation's Sinatra!"
More sedate was a show on May 18 at the Royal Festival Hall in London in aid of the Invalid Children's Society, and at the request of Princess Margaret, who watched her pop star friend perform a retrospective set that took in rarely heard songs including "Skyline Pigeon" from _Empty Sky_ and "Love Song" from _Tumbleweed Connection_. As a thank-you, and clearly understanding Elton's preposterous tastes, the Princess gifted him a pair of stuffed leopards.
By this point, _Goodbye Yellow Brick Road_ had been in the U.S. Top Ten for more than half a year. _Caribou_ was due to be released in June. In London, Elton went into the BBC studios of _The_ _Old Grey Whistle Test_ and, solo at the piano, prerecorded a showcase of two songs, "Ticking" and "Grimsby," from the upcoming album, to be screened in July, by which time he would be elsewhere.
Upon his release from jail in New Zealand, John Reid had gone directly to the United States and begun to negotiate a new record contract for Elton that was highly ambitious in its demands. The singer, due to join him, this time around decided to travel to America at a slower pace and in luxurious style.
In the last week of June, Elton arrived at Southampton dock to board the SS _France_ for a transatlantic voyage. On the quayside, to send the ship on its way, a brass band of schoolchildren tootled and banged their way through "Yellow Submarine."
It was a leisurely five-day crossing. But there was still work to do. By the time he got to New York, Elton wanted to have all of the songs written for the next album. It was planned to be an early days autobiography in the form of a long-playing record—one that found Elton and Bernie drowning in nostalgia for a time that wasn't in fact that long ago but now seemed so far away.
The only way to fly. Elton and Bernie and entourage ahead of takeoff on the '74 American tour.
OUT ON THE ATLANTIC, Elton had time to reflect. Here he found a moment of pause, an oasis of calm amid the madness.
At midday, every day, he'd enter the SS _France_ 's Salon Debussy, the luxury liner's First Class music room, with its grand piano, gold-lacquered walls, bronzed panels, and statue of a flute-playing girl by the French sculptor Hubert Yencesse. So popular was the Salon that Elton could secure only a two-hour slot to work there each day, during the lengthy lunchtime of an opera singer traveling onboard who'd blocked out the entire time. One day, Elton arrived only to find the ship's concert pianist sitting on the stool he'd prearranged to occupy. To his embarrassment, he was forced to eject her, and she flounced off to another music room directly above. For two hours, briefly interrupting Elton's tranquillity, the sounds of battling pianos filled the corridors.
From his stately position amid this splendor and elegance in the summer of 1974, Elton transported his thoughts back to his struggling days of the late sixties. Laid out in front of him were the sheets of Bernie's latest lyrics, which delved into the duo's failures, doubts, and heartaches during that frustrating time.
"It did feel a long time ago," Elton says. "Those days were the innocent days. They were the tough days of really, really hard graft...really, really trying. The disappointments...will they ever end? Will you ever get a lucky break? Those memories stay with you forever."
The idea behind the next album was that the pair would revisit and document their story up to the release of _Empty Sky_ in June 1969. The springboard was a lyric Taupin had written that starred both of them in alter-ego roles as Captain Fantastic and the Brown Dirt Cowboy: Elton the child, suppressed by a restrictive father, who grows up to discover his "real" self as a superhero-like figure; Bernie the backwoods westerns obsessive wondering if there might be a place for him in the city; the pair of them meeting, pooling their talents, and beginning "a long and lonely climb."
"It was such an autobiographical song that the others fell into place," says Bernie. "The majority of the songs are pictures seen through both our eyes and experiences." Page after page, their tale unfolded. The music business of their early years was depicted as a quasi-Biblical tower populated by greedy and lustful sinners. In an attempt to survive within it, as the Tin Pan Alley Twins, they peddle songs to be sung in pubs and cabaret summer seasons. Surrounded by Kings Road dandies, the duo are broke and despondent, living a life in black and white while everyone around them seems to exist in vibrant Swinging Sixties color.
Late nights in London are spent gazing through the grimy windows of a cheap café, watching drunks and prostitutes stumble by, backgrounded by the flashing blue lights of police cars. The brain-fried, out-of-his-depth country boy Bernie takes the weekend trains home to the safety of his former rural environment, only to return to Frome Court and Reg and a suburban work week of washing dishes and shaving with dull razor blades while optimistically writing endless songs.
Elsewhere, there was room for confession. One evocative lyric spoke of the days of Elton's engagement to a domineering fiancée, leading him inexorably toward unhappy marriage and piling debt, the partnership exploding in a drunken breakup following wise words from a friend. It was a song that alluded to suicide, whether metaphorical or actual.
The real romance in these songs—heightened for the purposes of lyrical drama—was the platonic one between the writers: penning childish songs about scarecrows and dandelions, laboring on, laughing through their worries for the future.
In those two-hour bursts on the SS _France,_ Elton tackled the lyrics chronologically as he composed the chords and melodies. "It was about _us,_ " he says, "so I felt involved in the actual meaning of the songs. You're writing something that's about your life. It was personal. It just flowed out of me. I wrote them in running order, so you could see the landscape coming up."
Aboard the liner, there was also much downtime for fun and frivolousness. Accompanying Elton on the trip was his friend Tony King, the general manager of Rocket, who'd previously worked for the Beatles at Apple. Also joining them on the journey to New York was John Lennon's first wife, Cynthia, and their son, Julian, then eleven, en route to visit his dad. Their days on the ship were frittered away in very pleasant circumstances, as Elton recorded in his diary:
> June 22: At 3:30pm, play squash with Tony. He is just beginning and I am not much better. But we do quite well and attract an audience who quickly pick up a few tips on the lesser art of the game.
There were rounds of backgammon and laps swum in the cold saltwater indoor swimming pool. Now on a health kick, Elton was trying to observe a no-alcohol, low-carbohydrate diet. In this environment, however, with its lavish banquets of food and beverages, it was tough to uphold. There were drinks with the captain in the Riviera Bar. There was dressing up in suits and bow ties, in an approximation of a perfect English gentleman, for fine dining on caviar and pepper steak in the Chambord Room.
Still, even with his success and riches, Elton, sporting streaks of green in his hair, was made to feel like a low-class upstart by some of his fellow travelers. During dinner one evening, he heard a posh voice loftily announce, "That man over there is Elton John. He is very famous, but I have never heard of him."
Among the singer's group, this prompted much laughter. Together, though, they could outbitch anyone, and they were usually to be found casting a withering eye around them at their snooty fellow passengers.
_"You can tell the continental people from the Americans by looking at their clothes,"_ Elton noted in his diary. _"Why do large American ladies squeeze themselves into dresses that show every inch of flab?"_
In truth, even if he looked like a deviant glam rocker, by this point Elton could buy and sell virtually everyone else on the SS _France_. Midvoyage, more incredible news came through from America, which he duly recorded in his journal.
> I am whisked away for a ship-to-shore telephone call—"Caribou is now platinum. Congratulations. Roger and out."
—
RESTED, REFRESHED, AND lighter in body and spirit, Elton docked in New York. There was much to do. Aside from promoting the already runaway _Caribou,_ he was set to help launch the U.S. career of Kiki Dee with her second Rocket album, _I've Got the Music in Me_.
Owing to his crammed schedule, Elton had been forced to take a back seat in the making of the LP—recorded at the Jimi Hendrix–founded Electric Lady Studios in Manhattan and produced by Gus Dudgeon and Clive Franks. He had nevertheless made his presence felt in an encouraging and sometimes comedic way. Kiki had found herself intensely nervous when getting ready to record the vocal for the driving, ecstatic title track in the presence of such backing soul talents as Cissy Houston and Joshie Jo Armstead (whose combined supporting-role credits included Elvis Presley, Otis Redding, and Bob Dylan).
"I remember Elton running around the studio with his trousers round his ankles, just to make me laugh," she says. "I got the vocal after that. We always had that kind of relationship where physically he'd jump on me and tickle me."
Kiki was clearly Elton's favorite protégée. In ostentatious displays of his generosity, he took the singer on shopping trips in Manhattan, where he loved to blow thousands of dollars on her. "He used to be a bit overwhelming, if I'm honest," she says. "You'd be in some store in New York, and he'd bring over this beautiful black dress and you'd think, Oh my God. He'd get you putting it on and he'd have bought all these things before you knew it. It was crazy. So sweet. There was a bit of a twinkle in his eye when he was doing it. Being able to do it must have been a huge thrill."
There was no sign of his financial momentum slowing, either, since each record he released sold more than the one before. But if _Caribou_ was a hit with the record-buying public, the critics noticed that Elton's quality control had slipped. Al Rudis in the _Chicago Sun-Times_ stated, "Nowhere is the magic moment that stands out in shining splendor, that demands, 'Listen to me.' " Tom Nolan in _Rolling Stone_ was more pointed, calling Elton "a maestro who has presented a series of attractive aural surfaces. The trouble with surface is that it wears thin."
The reviews for _Caribou_ stung Elton. "I thought it would get slagged off because it seemed time for something of mine to get slagged off," he reasoned, a touch bruised, in _Melody Maker_. "I'm just sitting back and taking it. I really think some reviewers are just deaf."
The most brutal condemnation of the album would come from none other than its producer. " _Caribou_ is a piece of crap," thundered Gus Dudgeon. "The sound is the worst, the songs are nowhere, the lyrics weren't that good, the singing wasn't all there, the playing wasn't that great, the production is just plain lousy." But Elton's ever-growing army of fans didn't seem to notice or care that _Caribou_ was rushed and sloppy. In both the United States and the United Kingdom, it reached number one.
The fact that his stock was now unbelievably high made John Reid's task of negotiating a new record deal for Elton a relative breeze. Still, the fearless, tenacious manager exceeded everyone's expectations.
Since _Don't Shoot Me I'm Only the Piano Player,_ Elton's releases had been moved over in the States from Uni Records to the label's parent company, MCA. Now that the sketchily detailed distribution deal Dick James had cut with Russ Regan at Uni in 1970 was due to end, MCA's president, Mike Maitland, was determined to keep Elton on the label. In fact, he knew losing the company's most successful artist would leave him shamefaced. "We would have survived," Maitland confessed. "But it could have crippled us for a while."
Reid was clearly in a winning position, but, bravely, he sought no outside advice in the renegotiations. During the previous couple of years, he later admitted, the dense legalese of recording contracts had provided his "bedtime reading" material.
There was one other significant suitor when it came to signing Elton. Over the past three years, David Geffen had built up his Asylum Records to become a protective stable for such artists as Joni Mitchell, Tom Waits, the Eagles, and, since 1973, Bob Dylan. Now he had his sights firmly set on the premier singer-songwriter of the early seventies. Geffen sidled up to John Reid at a party, telling him, "I've signed Bob Dylan. Next I'm going to sign Elton and then we're all going to take over the world."
Reid was wary, and keen to stick with MCA, knowing that changing horses in the midstream of Elton's fast-flowing career was highly risky. "I'm superstitious about changing labels," he later told a reporter. "I don't think you should do it unless something is seriously wrong."
Before anything could be properly discussed, however, Reid and Maitland had to cut a deal with Dick James in London allowing MCA to retain the rights to distribute Elton's back catalog in North America while DJM kept the copyrights. James agreed to the arrangement; he knew that Elton was moving on. Holding on to the catalog—which MCA would of course do their best to continue to exploit—was James's reward for all of his years of belief and investment.
Reid and Maitland put together a fifty-five-page document that guaranteed Elton a total of $8 million from MCA over the five-year period beginning in 1976 (when the DJM contract would be fulfilled), along with an unprecedented 28 percent royalty rate—almost twice what the highest-earning artist might expect to nail down. It was the most lucrative recording contract the music industry had ever seen. In _Billboard,_ Maitland called it simply "the best deal anyone ever got."
To announce this record-breaking agreement between MCA and Elton John, on June 19, 1974, full-page ads appeared not in the music business trade press, as was the norm, but in _The New York Times_ and the _Los Angeles Times_. Even Elton, never one for understatement, thought that this move seemed an immodest step too far.
"I couldn't believe the amount of money involved," he said a few years later, noting that it "was the start of the multi-million[-dollar] deal. It made the record business more vulgar and I was partly responsible for that."
It was a complete turnaround from the swindling deals of the sixties. From here on in, seventies artists and their managers would seek out ever more enormous advances for their recording services. But maybe it had gone too far. Rock stars were now as rich as royalty and business tycoons, with money to burn on houses and planes and cars and girls (or boys) and, of course, heaps of white powder.
—
BRINGING HIS HEALTH kick to an abrupt end, cocaine was to prove the fuel for the mutual appreciation club that was Elton John and John Lennon. Hanging with the former Beatle in Manhattan in that summer of '74 was the biggest hero-meeting thrill yet for Elton. "He's probably the first big star I instantly fell in love with," he admitted at the time. "It usually takes me six or seven meetings with someone, 'cause I'm very withdrawn. But he's so easy to get on with."
Lennon felt much the same about Elton, instantly drawn toward his acerbic English humor and wholly impressed by his skyscraping talents. When he'd first heard "Your Song," Lennon was blown away, publicly stating that Elton was "the first new thing that's happened since we happened." At a time when the erstwhile Beatle was going through a patchy phase, both creatively and commercially, he was astounded by Elton's constant presence on the charts and on the radio. Lennon joked about how even death couldn't possibly increase the amount of airplay Elton received. "You get played enough," John laughed. "If you ever die, I'll throw my radio out the window."
Their first time in the studio together, on the sessions for Lennon's _Walls and Bridges,_ Elton felt the pressure. "You're in there with John Lennon," he points out, "you better fucking perform."
Trying to sing along to Lennon's idiosyncratic vocal phrasing for the two tracks he appeared on—"Surprise Surprise (Sweet Bird of Paradox)" and "Whatever Gets You thru the Night"—was hellishly tricky for Elton, and took hours. "People were leaving the room," he reckoned. "Razor blades were being passed out."
It was Lennon's turn to be nervous when he accepted Elton's invitation to the Caribou Ranch in Colorado, where the sessions for the next album were due to commence in July. The plan was to record two of Lennon's songs: the relatively obscure "One Day (at a Time)," a cut from his 1973 album _Mind Games,_ and "Lucy in the Sky with Diamonds." Far removed mentally from his Beatles days by this point, Lennon couldn't even remember the chords for "Lucy." Davey Johnstone had to gently remind him.
The normally unflappable Gus Dudgeon was starstruck by Lennon, seeing "his charisma as a glow of light." Still, there were other more pressing concerns when it came down to the business of actually recording—Lennon found himself acutely short of breath when trying to sing in the high-altitude studio environment. "He had to keep rushing to the oxygen tank," Dudgeon remembered.
Lennon stayed on at Caribou for a few idyllic days, riding horses and shopping for cowboy boots in nearby Boulder, before he left Elton and the others to get on with the making of the record. The autobiographical album was now to be called _Captain Fantastic and the Brown Dirt Cowboy,_ and the team had learned from the mistakes they'd made on _Caribou_. An entire month was set aside for its recording, making for a comparatively smooth and painless process, as reflected in the music emanating from the speakers, which was shaping up to be possibly Elton's best yet.
Only when recording the vocal for "Someone Saved My Life Tonight"—the raw and moving song that beamed Elton back to the desperate days of Linda Woodrow and Furlong Road—was there any real tension in the studio. At the microphone, he delivered the nearly seven-minute ballad with fitting tenderness. Dudgeon, however, listening in the control room, felt the singer could wring more passion and power out of it, and he kept rewinding the tape and pushing Elton harder and harder.
An embarrassed and irritated Davey Johnstone took the producer aside and said, "Don't you think you should take it easy on the guy? Don't you know what he's singing about? He's talking about attempting suicide."
"That's a fucking hard song to sing," Elton admits. "Gus went, 'Ah.' "
Dudgeon was horrified: "I made him sing the most unbelievably personal things over and over again to get a bloody note right or get a bit of phrasing together."
When Elton listened to the playback of "Someone Saved My Life Tonight," he was overcome with emotion.
"He had to leave the room," Dudgeon remembered. "He just couldn't take it."
—
WHEN SEPTEMBER ROLLED around, it was time to get back onto the _Starship_. This time, as with the recording of _Captain Fantastic and the Brown Dirt Cowboy,_ Elton was far better prepared. He'd even solved the problem of his bleeding, piano-bashing fingers, painting them before the shows with the New-Skin liquid bandage solution used by bowlers, which added a protective film to his long-suffering digits.
It had been six months since the eventful tour of Australia and New Zealand, with only three sporadic dates since—the longest break he'd had from the road in years. But Elton was making up for it with this upcoming U.S. jaunt. He'd be appearing before a total of more than three-quarters of a million people over the next ten weeks.
The tour started with a sweep through the southern states, the band's initial base being a hotel on Bourbon Street in New Orleans. On the first night, in Dallas at the ten-thousand-capacity Convention Center, the show went without a hitch, and Elton left the stage feeling it was perhaps the best gig he'd ever played.
But despite this slicker level of operation, there was often a sense that it was all getting wilder and almost uncontrollable. In Houston, he was given a police escort to the Hofheinz Pavilion that was both dramatic and dangerous, zooming in the wrong direction down one-way streets, causing motorists coming the other way to hastily pull over to the curb. In Mobile, at the Municipal Auditorium, he couldn't hear himself sing and kicked over a monitor. Backstage, he was raging and almost didn't return for the encore, until it was clear that the howling crowd wouldn't let him get away without coming back for more.
In Los Angeles, a total of seventy thousand tickets for four shows at the Forum had sold out in six hours. On the opening night, before an audience that included Ringo Starr, Barbra Streisand, Diana Ross, and Elizabeth Taylor, Elton once again got into an argument with the security guards, who were pushing fans out of the aisles and making them sit back down in their seats. He arrived for the encore carried on the muscular shoulders of his new personal bodyguard, Jim Morris, 1973's Mr. America, and cried, "This is your concert...come down!" Thousands rushed toward the stage. Afterward, the Forum's manager Jim Appel threatened to have the singer arrested for dangerously provoking the crowd and endangering his staff. The next night, the security detail was doubled.
During the run at the Forum, Elton, always a frustrated disc jockey, took over from regular host Richard Kimball for a two-hour live broadcast on L.A.'s FM station KMET. He was a natural, introducing himself as "EJ the DJ," playing records by John Lennon, Joe Cocker, Little Feat, Aretha Franklin, and of course Kiki Dee. He read out advertisements hawking hair restorer ("If you're going bald like me...") and Licorice Pizza ("Gives you a good run for your money") and referred to himself, as if he wasn't himself, as "that little punk...that looks like a bank clerk...I hate him."
He then talked up his favorite music retail store in the world, Tower Records on Sunset Strip, while at the same time jokingly advertising his latest album. "Because I'm doing this commercial, they're paying me seven million dollars!" he goofed. "They've put a stack of my _Caribou_ albums just inside the door. From today to Sunday midnight, they're paying compensation to everyone who falls over them."
—
STANDING IN FRONT of the _Starship,_ Elton, the band, and the entire entourage lined up for photographer Terry O'Neill. The exterior of the plane had been resprayed in red and a royal blue flecked with white stars to go along with the stenciled words ELTON JOHN BAND TOUR 1974 and the MCA Records logo on the tail.
Elton and Bernie posed in the foreground, dressed in white—the singer in a Panama hat and a faux-military shirt and leaning on an accessorizing walking cane, looking like a benign South American despot; the lyricist in spotless overalls with a patch bearing his name, like a body shop mechanic who'd never actually done a proper day's work. Behind them gathered everyone else: John Reid, Kiki Dee, Ray Cooper, and a couple of dozen others. Their number had certainly swelled since the '73 tour. The message the photograph sent out was clear: We are now massive.
Inside the plane, Elton posed for an O'Neill photo shoot in the rear bedroom, bare-chested and appearing to be naked since his lower half was tucked under the white fur blanket, his eyes peering through large round glasses surrounded by concentric circles dotted with small diamonds, resembling scientific models of orbiting jewel planets. It was the most explicitly sexual image he'd ever projected, tipping toward the homoerotic. But just in case anyone got the wrong idea, a copy of _Penthouse_ sat on the bed beside an issue of _Esquire_.
In San Francisco, he managed to stir up more fuss. Constantly plugged in the preceding days as an upcoming guest DJ on KFRC, Elton had to call in sick, laid low with food poisoning after eating a dodgy crab omelet. Embarrassed by the no-show—and with unnecessary melodrama—the station's morning host, Don Rose, announced to listeners, "Elton is ill. I won't say gravely ill. How ill we don't know. There's a doctor examining him in his suite at the Fairmont right at this very moment. We will keep you informed." The news, and the leak of Elton's hotel details, prompted a flood of calls to both the Fairmont and the radio station, from concerned fans worried that first, the concert in Oakland that night was to be canceled, and second, the singer was at death's door. Ever the workhorse, Elton rose from his bed, and the show went on.
All the while, his latest hit, "The Bitch Is Back," pumped out of radios everywhere, even if some DJs were reluctant to read out the song's title in their introductions and more conservative stations banned the record or, ridiculously, bleeped out the word "bitch" every time it appeared, which was often. Of course, ever the provocateur, Elton found this hilarious.
If he felt untouchable in his bubble of super-fame, it was becoming evident to others as well. _Rolling Stone_ 's Ben Fong-Torres joined Elton on the road for an upcoming cover story, the writer observing that the people around the star now seemed to be overly protective of him. "Somewhere between him and the outside," Fong-Torres wrote in the subsequent article, "there are forces which don't seem to understand the nature of Elton John, and the nature of his success." No doubt this was meant as a dig at the gatekeeper that was John Reid, and in his interview with Elton, the writer probed deeper into the nature of the relationship between the singer and his manager, particularly the insinuations surrounding the fact that the two shared a house in England.
"He's just my manager," Elton insisted, seemingly unruffled by the topic, before stating that the team who worked with him were almost like a second family to him. "Everything around us is incestuous, and that's probably why there might be a lot of talk about us. I have a close circle of friends who just aren't in the public—sort of like Elvis and his...motorbike people."
If Elton had ducked the line of inquiry, he had in turn highlighted a different truth: There was now indeed something distinctly Elvis-like about the operation. The '74 tour was all about excess, a glitzy display of scale. In Vancouver, Elton and the band drove directly into the Pacific Coliseum in a cavalcade of seven silver limos. A local reviewer breathlessly described him as "the absolute best since the Beatles."
There was no higher praise. At Madison Square Garden on Thanksgiving Day, Elton made the connection explicit with the ecstatically received arrival of John Lennon onto his stage. Afterward, Elton, the temporary king of New York, threw a party at the five-star Pierre Hotel on Fifth Avenue. There, Reg finally got his chance to drop his Elton persona for a minute to say to Lennon, "You must get tired of hearing this, but your music changed my life."
Lennon's face lit up. "You're right," he said. "I do hear that a lot. But I never get tired of hearing it."
Later that night, there was an ugly scene that mirrored the fracas in New Zealand, albeit less violently. The blond wife of a radio DJ, for no clear reason, apparently accosted Elton and called him a fag. Once again, Reid erupted, though this time kept his fists to himself. "This is my party," he yelled at the DJ, "and I'm ordering you and your slag wife to get the fuck out right now!"
"Do me a favor!" Elton shouted at the couple as security personnel hustled them out of the building. "Drop fucking dead!"
While the homophobic insult had been mindless, drunken abuse, it touched a nerve with Elton and John Reid, their relationship being very much a furtive one.
—
THEY RETURNED TO Britain, and the following month, at his second run of Christmas shows at the Hammersmith Odeon back in London, the crowd standing in the stalls raised their eyes to the upper circle to see Elton perform a daredevil act. Suspended on a wire, he was followed by a spotlight through the darkness of the theater as he flew superhero-style from the balcony to the stage. Cheers erupted, the stage lights went up, and as if in a flash, there he was, sitting at the piano stage right.
It was a feat worthy of the great magicians and, of course, a trick, performed with the use of a dummy. The pantomime had once again come to town, although this time the production values had been noticeably upped. The stage and Elton's piano were covered in a festive crimson, with a stairway leading up to the riser where Nigel Olsson and Ray Cooper performed, alongside the Muscle Shoals horn section. The final gig, on Christmas Eve, was broadcast by the BBC, and showed the singer—in a feathery-shouldered silver getup—barely able to control his laughter as he performed a southern soul take on "White Christmas" and fake snow showered the audience.
He had much cause to be very happy. The year 1974 ended with his first singles compilation, _Elton John: Greatest Hits_ —a ten-tracker spanning "Your Song" to "Don't Let the Sun Go Down on Me"—sitting at number one in Britain and the States. The album would lodge itself at the top of the charts in both countries for close to three months.
—
IF EXPERIENCE HAD taught him anything, it was that it was time to stop and gather himself. And so 1975 began with the still unreleased album _Captain Fantastic and the Brown Dirt Cowboy_ in the bag, and space to breathe.
Aside from his own soar-away run of singles and albums—and even though it had continued to pursue a slightly wonky signing policy—the Rocket Record Company had proved to be highly successful. Some of the label's decisions were a touch bewildering, though, such as its signing of a thirteen-year-old Welsh balladeer named Maldwyn Pope, discovered by the Radio One DJ John Peel. While clearly talented, the teen prodigy—whom Elton nicknamed Blodwyn Pig (after the hairy British blues rock band)—came across, with his high, unbroken voice amid the orchestrations of his debut 1974 single, "I Don't Know How to Say Goodbye," as an uneasy mix of Nick Drake and Donny Osmond.
Far more lucrative had been Rocket's signing of Neil Sedaka. By the early seventies, the career of the U.S. pop star and songwriter was in the doldrums, having been virtually sunk by the wave of Beatlemania that hit America in the sixties. Elton and John Reid first met Sedaka in 1973, and they were astounded when he told them that he currently had no record deal in the States. When the pair said that he was welcome to join Rocket, Sedaka offered to sign with them for no advance. "We couldn't believe our luck," Elton remembered in the immediate aftermath. "We sort of ran into a corner and laughed and said, 'This guy must be an idiot.' "
But Neil Sedaka was no idiot. Elton had proved that flamboyant piano-playing singer-songwriters made for big business in the States, and Sedaka wanted some of that magic fairy dust to rub off on him. Rocket repositioned Sedaka, always seen as a singles artist, into the albums market. The resulting record, _Sedaka's Back,_ shipped half a million in America, with his comeback single, "Laughter in the Rain," hitting number one.
Elton sold Sedaka hard in interviews, making the latter quip that the former—who was taking a decent cut of his record sales—was "the most expensive publicist in the world." Sedaka was ecstatic about the rise in his fortunes that his association with Elton brought about, letting it be known that "in one year I went from making $50,000 to $6 million."
Amassing wealth now from more than one source, Elton bought himself a house in Los Angeles. A Moorish-style mansion at 1400 Tower Grove Road, set amid the eucalyptus and chaparral of Benedict Canyon, it looked out over Beverly Hills and had previously been occupied by Greta Garbo and _Gone with the Wind_ producer David O. Selznick. The purchase saw Elton planting a domestic flag in California soil, but it wasn't long before he was questioning the wisdom of his move.
One morning he woke to find a girl fan sitting at the edge of his bed. Confused and slightly panicked, he reached for his glasses and asked her, "Who are _you_?"
"Oh, you don't know me," she airily replied.
The girl, who'd somehow managed to silently intrude into the house, was gently ushered off the property by the singer. _Christ, she could have had a gun,_ thought Elton, reminded of the Manson Family murders of Sharon Tate and her friends only six years before and just over a mile away.
Visitors to the Benedict Canyon house said that it had a lonely, almost spooky air. In those first months of 1975, a gloom seemed to descend on Elton, accentuating his natural moodiness. In L.A., in February, he agreed to appear on the inaugural episode of Cher's eponymous TV show for CBS. On set, Elton was grouchy. It took him eight takes to nail a version of "Lucy in the Sky with Diamonds," dressed in a pointy wizard hat with his gold-and-purple-tinted, diamond-framed glasses glinting under the studio lights. Then he was joined by Cher, far more soberly dressed in lilac shirt and brown tank top, for a part-harmonizing, part-shouting "Bennie and the Jets."
"Cher, I'd just like to say that I really enjoyed doing the show," he told the host at the song's conclusion in a prearranged spiel. "I didn't do it just for the hundred dollars. D'you know, you're the sort of person that in fifty years' time will still be going strong..."
This was the slightly clunky segue into a skit, set at the "Final Curtain Rest Home for Aged Performers," in which Elton appeared alongside a comically decrepit Cher (fake sagging breasts nearly touching her navel under a lavender satin top) and Bette Midler (ludicrously balloon-assed in tight pink pants). The singer, in a bald skullcap stitched with gray wings of remaining hair, arrived in a motorized green glitter wheelchair, imploring the others in a Monty Pythonesque screech, "Let's turn on the TV set and watch the show we did fifty years ago!," causing them to collapse into giggles.
The camera zoomed into the screen to show a set filled with balloons and a white-top-hatted Elton, looking like a cross between Liberace and Willy Wonka, as he launched with Cher and Midler into a showbizzy medley of songs including "Proud Mary," "Ain't No Mountain High Enough," and "Never Can Say Goodbye."
It was very daft, very 1975, and appeared to have been a riot to film. In truth, Elton and Cher had been arguing during the dummy runs for the cameras. "He said some very unkind things," Cher recalled. "I got very upset and I began to cry. 'Damn it, Elton,' I said. 'Who needed this aggravation?' "
It was typical of his outbursts: He would blow up and then quietly stew, before repenting. The next day he turned up on set with a gift for Cher of a star sapphire on a gold chain.
—
AROUND THE SAME time, Elton almost managed to fall out with his old friend Rod Stewart. Two years before, Stewart had told him that he'd been approached by the Who to make a cameo appearance in their upcoming movie.
"They're going to do a film of _Tommy,_ " said Rod.
"Oh, no, not a film now," Elton groaned, feeling that the band's rock opera about a deaf, dumb, and blind kid who becomes a messianic figure—which they had released as an album in 1969 and played a large segment from in their sets ever since—was in danger of being done to death. "Bloody hell, what are they going to do next?" he wondered. "It'll be a cartoon series soon."
Elton advised Rod against committing to the film. "I said, 'I wouldn't do it if I were you,' " he laughs. "Initially there were rumors that the film wasn't going to be that great."
The year after, Elton received a call from the Who's Pete Townshend, asking him to perform the same song in the film he'd offered to Rod, "Pinball Wizard," before informing him that Ken Russell ( _Women in Love, The Devils_ ) was on board to direct. Elton immediately said yes.
"You don't say no to Pete Townshend," he argues, in defense of this volte-face. "I said, 'Absolutely.' The Who have always been one of my favorite bands. You have to look up to your peers and when they ask you to do something like that, you step up to the plate."
At the Who's Ramport Studios in south London in April 1974, Elton rerecorded "Pinball Wizard," using his own band and Gus Dudgeon as producer, while slipping a reference to the group's debut single "I Can't Explain" into the song's extended outro. He reveled in spitting out the lyric, written from the perspective of the Local Lad character who is dazzled and infuriated by the pinballing skills of the sensory-deprived Tommy Walker.
Elton in _Tommy_ as the outsized Local Lad, with the Who erupting around him.
Filming for the sequence took place over three days at the Kings Theatre in the Southsea resort in Portsmouth, a venue more used to staging variety bills for summer vacationers. Backed by the Who, even though they hadn't played on the recording, Elton teetered on four-and-a-half-foot-high Dr. Martens boots, supported by calipers allowing for only very rigid and dangerous movement. He mugged and grimaced his way through the song, playing a petite keyboard attached to a pinball machine, and gestured in mock anger at Roger Daltrey as the titular hero racking up phenomenal scores, before the band inevitably trashed their equipment.
Showing his dedication to the cinematic version of _Tommy,_ Elton turned up at all three premieres of the film, in the UK, Australia, and the United States. There, up on the silver screen, for all the world to see, was perhaps coy Reg's greatest transformation of all, into a ten-foot-tall bobble-hatted thug.
"Of course it became an iconic scene in the movie," Elton points out, "with the fucking boots and clinging onto the pinball machine for dear life.
"Rod," he mischievously notes, almost as an afterthought, "has never forgiven me for it."
The leader of the bigger band: (clockwise from bottom left) Elton, Davey Johnstone, Ray Cooper, James Newton Howard, Caleb Quaye, Roger Pope, Kenny Passarelli.
STANDING IN FRONT of a glass-bodied baby grand piano, _Soul Train_ host Don Cornelius turned to the parade of funky-looking audience members standing around it and joked, "Okaaaay, everybody's waiting for my first concerto." Stepping over to its matching transparent stool, he curiously eyed the piano, a relatively alien instrument to appear on a TV show more used to driving drums, syncopated horns, booty-shaking bass, and a procession of gutsy singers. "Now, let me see...which way does it go?" the host wondered, parking himself down, facing the wrong way from the piano, playing the class clown and musical dunce. The prank produced a ripple of laughter among the assembly before Cornelius stood back up again to address the camera and the viewers at home.
"No, on the serious side," he said, "this is especially for a very, very gifted young man, who has combined absolute genius as musician-songwriter with a sort of psychedelic outlook on life, that causes everybody that comes near him to have a lot of fun, besides be thoroughly entertained. If you will, gang, a warm welcome for one of the world's greatest...Mr. Elton John."
To much applause, Elton climbed onto the platform, in a wide-lapeled brown pin-striped suit, scarlet shirt, and black fedora with noir voodooish feathers tucked into its glittering headband. Elton had obviously given much thought to what he should wear on the prestigious show nicknamed the black _American Bandstand_. But in reality, there was possibly too much of the Harlem pimp about his flashy attire.
"Allrrrright!" Cornelius grinned. "Where'd you get that suit, brother?"
"Sears and Roebuck," the singer chuckled, name-checking the department store chain that was in truth way below his expensive tastes, and eliciting good-natured guffaws from those around him.
It was May 17, 1975, and Elton was only the third white artist invited onto the show, after Motown's Funk Brothers studio band guitarist Dennis Coffey in '72 and the long-haired Canadian groover Gino Vannelli three months before. Still, there was something about this reticent Englishman that singled him out as absolutely the whitest performer ever to appear on _Soul Train_. Nonetheless, if a touch self-effacing, Elton was visibly at home on the set of a show he watched every week when he was in America.
Before he performed, he took some questions from the audience. "My name is Jolanna Toussaint," said one girl in a blue dress. "I'd like to know...of all the songs you recorded, which one is your favorite?"
"That's a difficult one," Elton deliberated. "Um...I sorta like the ones I've written the most recently. I like 'Don't Let the Sun Go Down on Me.' " [ _Claps and "yeah"s from the audience._ ] "And I like 'Bennie and the Jets.' " [ _More "yeah"s._ ]
Diana Bruner, one of the resident _Soul Train_ dancers, wanted to know, "Did you start singing from childhood?"
"No, I've only been singing for about five or six years," Elton fibbed, when in truth it had been almost ten years since he'd fronted the first Bluesology single. "I used to be a pianist in a band." Lending himself some indisputable soul cool, he added, "When I first started professionally I used to back Patti LaBelle and Major Lance and people like that. [ _"All right!" shouted someone in the crowd_.] I'm just learning to sing, y'know. I'm having a good time."
Elton laughed and cheekily stuck out his tongue, and then Cornelius asked him to introduce his latest single.
"It's a tribute to the music of Philadelphia and also to a lady called Billie Jean King who used to play tennis for the Philadelphia Freedoms team," he told the host. "We thought it'd be nice to write a song about the tennis team and the music of Philadelphia, 'cause it's given so many people a lot of pleasure."
Taking to the piano with the heart-stirring string intro of "Philadelphia Freedom" swirling around him, Elton began to sing over the backing track of his latest, never-more-soulful 45. His microphone bounced before his nose as the flimsy stage shook under the feet of the surrounding dancers.
This landmark TV appearance of the pumping R&B track, featuring a sweeping orchestral arrangement by the Philly Soul master Gene Page, was ultimate proof that Elton could expertly turn his hand to an array of musical genres. His soul band touring days had indeed proved highly instructive; this music was in his very bones.
"Philadelphia Freedom" was also unique in the sense that Elton had given Bernie the song's title and asked him to come up with a lyric for a tailor-made single. In the past year, Elton had become a tennis fanatic, and close friends with King, who'd even come up onstage to sing with him at the Spectrum in Philadelphia the previous December. At first, Taupin struggled with his partner's request, stumped by the challenge to write something about either tennis or the Pennsylvanian city. In the end, he came up with what he described as "an esoteric song about being free." Later, the uplifting sound and spirit of "Philadelphia Freedom" was to render it a modern patriotic U.S. anthem.
Once again getting the jump on David Bowie—who'd later perform "Golden Years" and "Fame" on the show—Elton beat his rival onto _Soul Train_ by seven months. But the elaborate sound of "Philadelphia Freedom" was to cause a ruckus with the producers of another key television program back home in Britain, _Top of the Pops_. For years, a bizarre Musicians' Union ruling had forced artists to rerecord their hits especially for TV, typically within the tight space of a half-day studio session, before they were allowed to mime to them on-screen. Some performers dutifully complied with the demand; others went into the studio, did nothing except put their feet up for four hours, and then lip-synced to a minimally remixed version of the original recording.
But the idea of having to unpick and redo "Philadelphia Freedom," likely backed by the notoriously bad _Top of the Pops_ orchestra, was abhorrent to Elton, and so, given his star power, he refused. Canceling his upcoming appearance on the show, and standing up for both himself and the notion of quality control, he put out a flinty statement that read: "The Elton John Band and their producer Gus Dudgeon are in the habit of spending a great deal of time and love perfecting each number they record. It is completely impossible to reproduce such labor at short notice."
Aside from this strop, there was a deep irony to the fact that "Philadelphia Freedom" was the first and last single to be released under the collective name of the Elton John Band. Two months after it appeared, in a surprising and shocking move, Elton fired his loyal and long-standing rhythm section of Nigel Olsson and Dee Murray. "I just felt we'd gone as far as we could go," he says. "Something had to change musically. It was a really hard thing for me to do."
Like someone stuck in an unhappy marriage, Elton had been brooding over the problem for months, feeling a knot of anxiety in his stomach, along with a tantalizing ripple of excitement that came with the possibility of freeing himself up to work with new musicians. He'd never sacked anyone before in his life. Worse, in a move he would come to intensely regret, he did it over the phone, breaking the news to each musician individually. Olsson was in Los Angeles, Murray was on holiday in the Caribbean. Neither took the news well, but the latter in particular felt deeply hurt and for a time wouldn't even speak to Elton.
"We were never made to feel inferior at all, until later," said Olsson, revealing how the ever ascending star and the players beside him had floated further and further apart as the 1970s had progressed. In the end, the drummer admitted, it had become exhausting working on the Elton John production line: "Towards the latter days, y'know, it was just...churning it out."
—
FOR ELTON, HIS albums came in cycles, each with a beginning and ending point. _Empty Sky_ in '69 to _Madman Across the Water_ in '71 was broadly his orchestrated balladeer period. _Honky Château_ in '72 to _Goodbye Yellow Brick Road_ in '73 found him recording with his own live band and genre hopping through New Orleans funk, thumping glam, dance craze rock'n'roll, and impressively authentic-sounding soul. The move to Caribou Studios in '74 had resulted in a more polished sound, and now he wanted to explore that direction further. But if _Captain Fantastic and the Brown Dirt Cowboy_ was in its creator's head the final album of this current cycle, it was a very fine way indeed for it to close.
Of all the Elton John albums, _Captain Fantastic_ was the most Beatlesque—from Bernie's lyrical ability to imbue the everyday with a certain romance or view it through a surrealistic prism, to Elton's unmannered and brilliant harmony-backed vocals, to Gus Dudgeon's warm and subtly modernist production, whether feeding the piano through an electronic harmonizer or coaxing the band to add layers of artful overdubs.
Some of the songs were episodic and moved in Lennon and McCartneyish fashion through various inventive passages—such as the title track's gear shifting from sparse country verses to rolling funky bridges to the rock propulsion of the choruses, complete with synthesized jet noises that seared from left to right across the speakers. "Tower of Babel" was by turns stark and then gently groovy, as it portrayed the lyric's morality-free scenes of music biz hustlers, letches, and druggies. "Bitter Fingers" found the jobbing songwriter of days gone by at the piano, picking out arpeggios, before boiling over in vexed anger in a deceptively Eurovision-like chorus. The nearly eleven-minute melding of "We All Fall in Love Sometimes" and "Curtains," recorded by the band in a single take, was effectively a short film featuring the young songwriters Elton and Bernie in flashback, with their "naive notions that were childish, simple tunes that tried to hide it."
Other tracks sustained one mood throughout, but were no less vivid. Gene Page and his Philly Soul strings returned to document Taupin's memories of his London-to-Lincolnshire weekend escape acts in "Tell Me When the Whistle Blows." There was desperation and fury in the disco-tinged poverty rock of "(Gotta Get a) Meal Ticket." The breezy easy-listening atmosphere of "Writing" was filled with the ennui and quiet desperation of the aspiring songsmith.
_Captain Fantastic_ wasn't an album written with hits in mind, but nevertheless it produced one. "Someone Saved My Life Tonight," Elton's close-to-suicide confession, as conveyed by Bernie, who'd pulled his friend's head out of the gas oven that day back in '68 in Furlong Road, was one of their greatest ballads yet. In it, Linda Woodrow, with perhaps unfair grotesquerie, was painted as a "princess perched in her electric chair," and Long John Baldry was "Sugar Bear," the individual responsible for the "sweet freedom whispered in my ear." Even for the listener who didn't know the full story, "Someone Saved My Life Tonight" was clearly a highly emotional and touching song, which spoke directly to anyone trapped in a dead-end or abusive relationship. In July, it scaled the U.S. chart to number four.
The Beatleisms on _Captain Fantastic_ didn't stop with its music. For its cover, Taupin and DJM's David Larkham commissioned the pop artist Alan Aldridge to create a colorful illustrated fantasia that stretched across the record's gatefold sleeve. On the front cover, Elton the silver-mask-wearing superhero sits astride a tipped-over grand piano amid a menagerie of strange, unearthly creatures. On the back, in a continuation of the otherworldly scene, a smiling Bernie reclines in a bubble with his songwriting book while a white dove surreally bearing the face of his wife, Maxine, sits on his knee. Above the lyricist's head, the band members, with a significance that in reflection surely wasn't lost on anyone, fly off into the sky in space-age spheres.
The grand design of _Captain Fantastic_ extended to a poster, lyrics booklet, and scrapbook (containing old press cuttings, diary entries, and memorabilia) all tucked away within its sleeve, which made the album a highly desirable artifact for fans, although a pricey one. Upon its release in Britain on May 19, 1975, it retailed at £3.25 ($21 today), the most expensive single album released in the country up to that point. In the press, DJM's Stephen James defended accusations that he was fleecing record buyers by saying, "The new album had a very costly packaging, and two sixteen-page booklets and a poster. When you add up the real cost of those, they're actually getting them cheap."
Taken as a whole, from its music to its artwork, _Captain Fantastic and the Brown Dirt Cowboy_ was Elton's creative high-water mark of the 1970s. Gus Dudgeon certainly thought so, declaring, "There's not one song on it that's less than incredible. Elton sings better than he's ever sung. From every conceivable point of view, it's better. Therefore, it adds up to being the best."
Before the release, though, Elton worried that the record wasn't commercial-sounding. He thought it would flop. In the end, he was proven spectacularly wrong. _Captain Fantastic_ made American rock music history by being the first album ever to debut at number one, and it stayed there for seven weeks.
"It was a pinnacle for me," Elton later reflected. "It was a time when you couldn't switch on a radio in America without hearing one of my songs."
But while he sometimes allowed himself to enjoy this towering accomplishment, at the same time he fretted that he was now wildly overexposed and that the public might be in danger of getting sick of the sight and sound of him.
"People do get cheesed off," he said. "I was getting cheesed off hearing myself as well."
—
ELTON CHOSE THE Netherlands as the location to assemble the new band, on a vast film studio soundstage just outside of Amsterdam. Filling the shoes of Dee Murray as bassist was a twenty-six-year-old American named Kenny Passarelli, a native of Colorado and a regular on sessions at Caribou Studios, who came on the recommendation of Joe Walsh. Helping to reproduce the layers of keyboards featured on _Captain Fantastic_ was the twenty-four-year-old Los Angeles–born James Newton Howard.
Elsewhere, perhaps gallingly for Nigel Olsson, the new drummer was effectively the original drummer, Roger Pope, who'd played on _Empty Sky_ and tracks on _Tumbleweed Connection_ and _Madman Across the Water_ and who'd toured the States opening for Elton as part of the Kiki Dee Band. He wasn't the only familiar face to return: Caleb Quaye was drafted in as guitarist to supplement Davey Johnstone, along with Jeff "Skunk" Baxter of Steely Dan and the Doobie Brothers. Having first begun touring back in 1970 without a guitarist, Elton now had three.
It was typical of the maximalist mood of the mid-1970s. In the past, for Elton, less was more. Now, it seemed, more was more. "The old band...used to rattle on," he reckoned. "I've always wanted to be part of a good driving rock'n'roll band."
Each day, this expansive troupe was bused from the Amsterdam Hilton to the soundstage for rigorous ten-hour rehearsals in preparation for Elton's biggest UK show to date, on June 21 in front of seventy-two thousand people at London's Wembley Stadium. The mood within the band was high on their collective musicianship and high on everything else. "The rehearsals were a giant party, man," says Passarelli.
Other rock stars came to hang out—the entertainingly unpredictable Keith Moon, and Ringo Starr, the latter one day sitting in with the band behind the drum kit, delighting them when they found themselves playing "Lucy in the Sky with Diamonds" with an actual Beatle. Starr was at a loose end in his career and wondered aloud whether there was any chance he could join the touring group. There was much agonizing within the ranks, but in the end, with Pope confirmed, Starr was gently let down.
Over those ten days in Holland there was much work and little sleep, thanks to cocaine and alcohol in plentiful supply. The bar bill for the band and crew at the Hilton ran into the thousands. "It was a hell of a great time," Passarelli remembers. "We partied like it was 1999. All the road crew were going, 'Oh my God, how are we gonna pay our bar bill?' And Elton just wrote the check for the entire thing."
The Wembley gig was designed to be the greatest display yet of Elton's monumental success in his homeland. As such, a full day's lineup of supporting artists was announced, including the Eagles and the Beach Boys. The latter group, who might reasonably have been considered hopelessly passé by the mid-seventies, were in fact at the time riding a wave of nostalgia for the sixties, and so triumphed on the hot summer's day with a hit-stuffed set that ended with encores of "Surfin' U.S.A." and "Fun, Fun, Fun."
The Beach Boys proved an almost impossible act to follow, even for Elton. He opened strongly, with "Funeral for a Friend / Love Lies Bleeding" running into "Rocket Man," "Candle in the Wind," "Philadelphia Freedom" and onward to a first-half-ending, double-featured Beatles tribute of "Lucy in the Sky with Diamonds" and "I Saw Her Standing There."
But then, moving into the second half, Elton made a slightly ominous announcement. "We have a new album out called _Captain Fantastic and the Brown Dirt Cowboy,_ " he reminded the massive audience. "I'm sorry it's £3.25, but I'll tell you about that later. We're going to do the whole of the album, and usually it bores everyone to tears if you play things people don't know. But we're going to take the chance anyway. This is the whole of the _Captain Fantastic_ album. Here we go..."
Whether down to miscalculation or hubris, it was a terrible decision. As magnificent a studio album as it was, _Captain Fantastic_ had only been out for just over a month and wasn't exactly crowd-pleasing fare for the beery hordes who'd just been punching the air to the Beach Boys. Perhaps anticipating this problem, the introduction to the opening title track had been reconfigured as a country hoedown, losing its laid-back charm, before settling back into the down-tempo pace of the recorded version.
Only a few numbers into this portion of the set, the audience began to exit the stadium by the thousands. Onstage, Elton could see what was happening, but was powerless to stop it. To halt the performance of the new album midway through would be an ignominious acknowledgment of failure, and so he and the band drove on. Encores of "Pinball Wizard" and "Saturday Night's Alright for Fighting" came too late, by which time the audience was embarrassingly sparse.
Afterward, there was a post-show party attended by Paul and Linda McCartney, Ringo Starr, Harry Nilsson, and Billie Jean King. But Elton knew he had misjudged and blown the show. The next week, a damning headline in _Melody Maker_ crowed, BEACH BOYS' CUP RUNNETH OVER; ELTON LEFT TO PICK UP THE EMPTIES.
More than just a poorly received gig, it was a sign that Elton's fantastic voyage was beginning to drift off course.
Hungover at the Caribou Ranch, Colorado, July 1975.
THE FIRST THING HE DID to annoy the Rolling Stones was land in a helicopter directly behind their stage, and before almost forty thousand of their fans, at the Hughes Stadium in Fort Collins, Colorado. It was the afternoon of Saturday, July 19, and Elton had been asked to make a guest appearance with the band on the thirty-fourth show of their Tour of the Americas '75.
The prospect of appearing onstage with the Stones—whom he'd loved since his late teens—was obviously hugely appealing to Elton, and anyway the gig was only a short flight away from Caribou, where he was recording, so he could easily rotor in. But then, as he sometimes tended to forget when measuring himself against his former heroes turned contemporaries, in terms of record sales, he was the bigger star in the States by this point. The Stones perceived his grand helicopter landing as a showboating stunt and an attempt to upstage them.
Everything went downhill from there. In a pre-show confab, Elton confessed to Mick Jagger and Keith Richards that the only Stones number he really knew how to play was "Honky Tonk Women," which turned out to be their opening song. And so it was agreed that he would join them from the start of their set.
Under drizzling clouds, Elton, in a beige cowboy hat and blue bomber jacket bearing the white logo of the Los Angeles Dodgers baseball team, nervously chewed gum and knocked back scotch as he followed Mick Jagger—his face over-made-up and his body gaunt beneath a shocking pink jacket—up the stairs to the stage. The group's introduction music, Aaron Copland's "Fanfare for the Common Man," blared out toward the bleachers.
They kicked into "Honky Tonk Women," Elton pounding away on a grand piano. Once the song was over, Elton left the stage. But as he remembers it, later on in the set a member of the Rolling Stones' road crew walked up to him and told him that Billy Preston, the band's keyboard player for the tour, had asked if he could come back on and jam along for a few more tunes. Elton returned to the stage, thinking he could play in the shadows, but the Stones made it clear he really wasn't welcome. Mick was now in the mood to mock their guest, pointedly introducing him with the words "On the piano so far we've had Reg from Watford."
Looking on, Kenny Passarelli sensed that Elton being on his stage suddenly threatened Jagger. "Forty thousand people are staring at the guy playing keyboards," he says. "I think Mick didn't want that diversion."
"We should have kicked him off the stage but we didn't," Jagger said later, putting his reluctance down to good old-fashioned English manners.
Post-show, the Stones were due to fly back to Caribou with Elton following his invitation to an after-hours barbecue. But the mood backstage was frosty and it became obvious that the invitation would not be accepted. "All I know," says Passarelli, "is that at one point we were told all the Caribou people had to leave because Mick and Keith were upset that Elton had worn out his welcome."
The singer left in the helicopter in a black mood, which had worsened by the time everyone was in the limo and heading back up the winding road to the ranch. At Caribou, Elton became more upset, then worryingly distressed. Passarelli and Davey Johnstone talked him down from his panic.
The singer finally went to bed, Passarelli set off for a nightclub in Nederland, and Johnstone dropped acid. By the time the bassist arrived back at Caribou the next morning, he found the guitarist sitting in a wheelchair, staring out at the surrounding fields, hallucinating visions of Native American battles going on all around him. "He said, 'Look, Kenny! Look over there!' " says Passarelli. "And I was going, 'Oh, no...' "
Elton got up and somehow managed to coax the guitarist, in his altered state, to do some work with him. "He was tripping," marvels Elton. "He had the guitar and I was telling him what to play at seven o'clock in the morning in bright sunshine." Together the pair wrote "Grow Some Funk of Your Own," a full-on rock song with a not coincidentally Jagger/Richards–like swagger. If Elton couldn't join the Stones, he would try to beat them.
It was a scenario typical of the highly intense and drugged-out making of the album that would become _Rock of the Westies_. "Everybody was pretty jacked up," admits Bernie. "That period was pretty much the apex of our abuse."
"Musically, I just wanted to make the band a little bit more raucous," says Elton. "I wanted it heavier." This direction was initially to cause a rift with Gus Dudgeon, particularly since they'd all just come off the layered and considered _Captain Fantastic and the Brown Dirt Cowboy_. For the creation of its successor, the team's cocaine use skyrocketed, which could be heard in the teeth-grinding performances of its songs, rehearsed for a week and then cut live.
" _Rock of the Westies_ turned out to be cryptic and almost punkish by our standards," Bernie says. "Nobody really knew what the fuck anything was about, but I quite liked that. Songs bled into other songs. The riffs were sort of archaic and kind of fun."
The expanded band—minus guitarist Jeff "Skunk" Baxter, who'd stayed only for the disastrous Wembley show before returning to the Doobie Brothers as a full-time member—really dug their nails into the songs, as was evident on the six-minute cokey funk-rock jam of "Medley (Yell Help, Wednesday Night, Ugly)." Returning to Bernie's space-age themes, "Dan Dare (Pilot of the Future)" was a faster and grittier take on "Bennie and the Jets" with a Leon Russell swing and a nonsense lyric that included the single-entendre line "Holy cow...my eyes never saw a rocket that was quite that size."
_Rock of the Westies_ was high on grooves, if low on melodies. "Street Kids" and "Hard Luck Story" were hyperactive band workouts that must have sounded great blasting out through Caribou's studio speakers. But in the cold light of day and the inevitable comedown, they blew away like the haze from chain-smoked cigarettes. "Feed Me" took a decent stab at smooth Steely Dan–style jazz-rock, but by the closing "Billy Bones and the White Bird" with its ever more insistent "Check it out! Check it out!" hook line, the overstimulation of the players was obvious, even if their shared enthusiasm didn't similarly affect the listener.
Elton fancied "Dan Dare (Pilot of the Future)" for the first single, though "Grow Some Funk of Your Own" was similarly considered for release to radio. There were perhaps two more obvious contenders, though. "Island Girl" was an up-tempo ode to a Jamaican amazon cum femme fatale turning tricks in the streets of New York for johns unsuspecting of quite what they were letting themselves in for. Tellingly, given the high-octane circumstances of the album's recording, there was only one ballad in the tracklist. "I Feel Like a Bullet (in the Gun of Robert Ford)" name-checked Jesse James's assassin in an oblique Taupin lyric written from the viewpoint of someone with a strong urge to kill a dying relationship. It was the first indication, albeit gently obscured, that all was not well with Bernie and Maxine.
For his part, Kenny Passarelli felt that during the recording of _Rock of the Westies,_ Elton was also "going through a bunch of changes." Desperate to stay thin in the rock star spotlight, the singer had taken to subsisting on a not entirely nutritious diet consisting almost solely of avocados and Diet Dr Pepper. In the cover shot of the album, taken by Terry O'Neill during a visit to the ranch, he gazes at the lens through hexagonal shades, his image markedly toned down—unshaven in a blue Harris Tweed deerstalker cap and striped rugby shirt. If at first glance the image made Elton appear open and relaxed, a closer look revealed him to be drawn and washed out.
In contrast, in their back cover lineup picture and individual inner sleeve portraits, the band are the very picture of what it was to be a rock musician in 1975: bare-chested in unbuttoned waistcoats and denim shirts, groin-crushing jeans, alligator boots, and leather chaps.
—
THE LEADER OF this gang of cocaine cowboys returned to California in August. On the ninth, from the Santa Monica Civic Auditorium, Elton cohosted, along with Diana Ross, the debut ceremony of the Rock Music Awards, broadcast live by CBS. The concept, cooked up by producer Don Kirshner, was to rival the more pop-oriented Grammys, focusing instead on the harder and more progressive sounds of the likes of Led Zeppelin and the Who.
Elton and Diana Ross arrived onstage from inside a billow of dry ice, balancing on the back of a rickety motorized golf cart contraption bearing a cheap-looking metallic backdrop. The former, in a silver suit, and the latter, in a canary yellow fur coat, walked up to a podium that magically rose from the floor.
"Good evening, everybody in TV land," said Elton. "I'm Captain Fantastic."
"And I'm General Delivery," said Diana. "Better known as Big Bird."
"So I can tell her to 'cluck off' during the program," wisecracked Elton. He then added, "Stay tuned. It gets worse."
Over the course of the show, awards were handed out to Joni Mitchell, Linda Ronstadt, the Eagles, Bad Company, Bob Dylan, and the Who for _Tommy_. Elton's jokes got better, such as when he made a cheeky reference to _Jaws_ being the new film starring Linda Lovelace. He even picked up a gong for himself when he was named Rock Personality of the Year. Not everyone was won over by Elton's performance, however, and John Leonard, the cultural critic of _The New York Times,_ belittled him in print as "the Bob Hope of the counter-culture."
Sixteen days later, to mark the fifth anniversary of his life-changing Troubadour shows, Elton returned to the club for six charity gigs over three nights supporting the UCLA Eye Institute (a research facility to prevent blindness) founded by MCA's originator, Jules Stein. On the evening of the opening gala performance, the scene outside was very different from Elton's debut there a half decade before. Santa Monica Boulevard was closed to traffic, and a procession of stars rolled up to the door in limousines: Tony Curtis, Mae West, Hugh Hefner, Cher, Ringo Starr.
Elton turned up onstage bearded, much as he had been at the original shows. Surveying the older Hollywood crowd, he carped in a Lennonesque way, "Here's one you can all tap your wheelchairs to." It was his opportunity to introduce the new band to an American audience. The verdict, particularly within the confines of the club, was that they were polished and punchy, but way too loud.
—
IN L.A., BACK in his Benedict Canyon mansion, Elton began to withdraw into himself once again. He would suddenly become terribly and stubbornly depressed, lying in bed for two whole days, wallowing and wondering if all this work was worth anything, as darker and darker thoughts ran through his mind. If John Reid tried to coax him out of his black hole, Elton would spurn his help. Sometimes his friends tried to cheer him up. They'd temporarily pull him out of his nosedive, then he'd plummet back down. Even though he was in a relationship with Reid and surrounded by people who loved him, sometimes Elton still felt deeply alone. "Depressions are very strange," he admitted in 1975. "They can come on in the most unlikely places and for no apparent reason."
There was little time to mope, however. The U.S. tour supporting the soon-to-be-released _Rock of the Westies_ began on September 29 at the San Diego Sports Arena and was scheduled to run for just one month, if a rigorous one. Cutting back on the visuals, the new show concentrated instead on the music and stretched to three hours, not including a half-hour interval while everyone, band and audience, caught their breath. The gigs spanned Elton's entire catalog so far, ten albums, returning now little-heard songs such as "I Need You to Turn To" and "Levon" to the set. Elton being Elton, he paraded around in a white jumpsuit and another piano-key-patterned one in blue. But overall, the staging was comparatively understated.
In a way, this band was too conspicuously cool for Elton to rely on the theatrics. If the idea was that reining in the visual histrionics would shine a light on his songcraft and charisma, it worked. One local reviewer who witnessed the show at the Convention Center in Las Vegas on October 2 noted, "No one cared if the sets were too long, or too noisy. It was a night with a legend along the lines of the Beatles, Judy Garland, or Nat King Cole."
Being three hours long, it was a performance that required some chemically aided stamina. At points on the tour, certain members of the band would arrange to meet two cocaine dealers who flew in from Miami with a suitcase full of narcotics. Thousands of dollars changed hands and the tour rolled on, until the next supply was required. As a top-up, just before they went onstage, the musicians were handed little vials of coke by a member of the road crew with the words "Here you go, boys. Do a good show."
A handy coke break came twelve songs into the set when the lights dimmed and the dry ice pumped out to set the atmosphere for the synthesized overture of "Funeral for a Friend." Under the cloak of stage darkness, the players who were partial could snort a line or two off the top of their amps before dynamically smashing back in with "Love Lies Bleeding."
It was a turbocharged and heady time, and adding to this, October 20–26 was declared to be Elton John Week in Los Angeles, set to culminate with two massive outdoor shows at Dodger Stadium. In his most lavish gesture yet, Elton decided to splash out more than a hundred thousand dollars chartering a Pan Am plane as the _Rock of the Westies Express_ and flying a hundred thirty people over from England for the occasion, including his extended family, friends, and Rocket employees.
During the flight, everyone was fed steak Diane, filled with champagne, and handed an entertainment pack of jokes and puzzles and sugary sticks of Blackpool rock. Upon arrival in Los Angeles, as everyone filed out of the jet and made their way down the wheeling access ladder, Elton was there to greet them, reserving the biggest cuddle for his squealing mum, Sheila. A convoy of Rolls-Royces and Cadillacs was there to collect the entire gathering and ferry them to their rooms at the Beverly Hills Holiday Inn.
The next day the party was divided into two for some sightseeing larks. A film crew from London Weekend Television followed Sheila and Fred and Elton's Auntie Peg as they were given the tour at Universal Studios, shrieking on fake-collapsing bridges and rubbernecking past sets used in _The Sting_ and _Earthquake_. The other half were meanwhile cruising out onto the Pacific aboard John Reid's recently purchased yacht, the wryly named _Madman_ (as painted on its side in the florid lettering featured on the cover of _Madman Across the Water_).
Interviewed by LWT's Russell Harty at the Benedict Canyon house, Elton looked exhausted and seemed ambivalent when the subject of Reid's yacht was brought up. "I'm a bit frightened of the sea," he offered, "and after seeing _Jaws_ I'll probably never go near a beach again. I'd rather play tennis than go out on the boat. I mean, after five minutes, I say, 'Well, what else does it do? So it floats? Ducks float.' I'm very restless. That's why people say I work too hard."
On October 21, six thousand fans stopped the traffic on Hollywood Boulevard in anticipation of witnessing the singer unveil his star on the Walk of Fame—the first time in 1,662 ceremonies that the street had to be closed because of the volume of spectators. Many impatiently yelled, "We want Elton!" Others waved customized T-shirts bearing the words YOU'RE BETTER OFF DEAD IF YOU HAVEN'T HEARD ELTON.
To discover that his name was to be permanently embedded only a few steps from those of Groucho Marx, Greta Garbo, and Jean Harlow was obviously an enormous deal for Elton. He dressed to impress for the ritual that would cement his showbiz immortality, arriving in a lime-green suit and bowler hat, riding a golf cart customized with a faux windshield of two large stars studded with lightbulbs.
"I'm very, very honored, being British, to have my star on Hollywood Boulevard," he said into the microphone as he stood at a lectern. Then, hunkering down onto the sidewalk, he peeled the cover off the waiting slab with the words "I now declare this supermarket open. Oh, I'm sorry, wrong place, uh. This is more nerve-racking than doing a concert, I tell ya..."
—
FROM THIS VERTIGINOUS height, there was only farther to fall. Wrecked by cocaine and overwork, in reality, Elton was in the grip of an acute emotional crisis.
Two days before the first show at Dodger Stadium, his family members and some of the band were sitting around the pool at the Benedict Canyon house when a visibly distraught Elton emerged from his room and made a very dramatic appearance.
Seeing stars: Elton makes his mark on the Hollywood Walk of Fame, Los Angeles, October 21, 1975.
He had just swallowed sixty Valium tablets.
"I'm gonna be dead in an hour," he announced. "I've taken sleeping pills."
He stumbled past everyone and threw himself into the swimming pool.
As suicide attempts went, this was far more serious than the cry-for-help gas oven incident back in Furlong Road.
"It's craziness," he says, flatly. "My life was crazy. It's...not being able to get your feelings across and not being willing to deal with them in a mature way. Instead you deal with it in an immature way. And the pressure, I mean...fuck."
After he was dragged out of the pool, medics quickly arrived, pumped the singer's stomach, and sped him to the hospital. For the rest of the night, Elton slipped in and out of consciousness.
Standing by the pool, numb, Sheila turned to Caleb Quaye, Elton's oldest friend, and said, "Oh, Caleb, can't you talk to him?" Quaye couldn't look Elton's mother in the eye, feeling complicit in her son's drug taking.
Then, with an impeccable sense of bathos, the tormented singer's seventy-five-year-old grandmother, Ivy, sighed and sadly said, "I suppose we've all got to go home now."
—
IF, AS JOHN UPDIKE famously wrote, "celebrity is a mask that eats into the face," then Elton was the mask now threatening to eat into the face of Reg Dwight. He had gone too far too fast. It was way too much for him to handle.
Or maybe it wasn't. On the day following his suicide bid, there were long conversations as to whether the Dodger Stadium shows would, or should, go ahead. Incredibly, the singer was deemed by his doctors to be fit to perform.
This flirtation with death only seemed to empower Elton. Before the sound check at the stadium, he charged around the field playing soccer with the band, tackling Kenny Passarelli so hard the bassist thought he'd suffered a broken nose. "I got the ball from him and the next thing I knew I was on the ground," says Passarelli. "He elbowed me. He was the most competitive guy. It was unbelievable. He didn't want to lose at anything."
It seems, for Elton, there was much in the way of defiance and denial involved. To see him onstage at the piano during the sound check, in his striped T-shirt and sweatpants, apparently brimming with energy, no outsider could possibly have guessed at the inner turmoil he had been suffering only two days before.
There was much riding on these shows, and for Elton, failure was not an option. No other artist had been allowed to perform at the stadium after a concert by the Beatles there in 1966 had resulted in near riots. In their wake, Dodger bosses had nixed the idea of future concerts. These two gigs, before a total of 110,000 people over two days, had been the result of six months of negotiations between the baseball team's owners and Elton's U.S. agent, Howard Rose.
The tickets sold out immediately. Some fans applying for them had simply addressed their envelopes to "Elton John, Los Angeles," and somehow the mail had been delivered.
Rock shows had come a long way since the sixties and the prototypical high-level touring days of the Beatles. In addition, lessons had been learned from the horrendous crowd control problems of Woodstock, the Isle of Wight, and the murderous Altamont. No expense was spared at Dodger Stadium when it came to solving the typical difficulties of large-scale concerts—bad sound, poor catering, woeful toilet facilities.
On the Saturday morning of the first show, October 25, the gates were opened two and a half hours early to accommodate around ten thousand fans who'd turned up at dawn to secure a place close to the front of the stage. As the stadium filled up, beach balls were punched around the crowd and people took turns being tossed high in the air by others holding tightly stretched blankets.
Backstage, none other than a lager-sipping, gray-haired but still dapper Cary Grant hung out with Elton and Billie Jean King. Out front, as a peculiar warm-up act, the celebrity California car dealer Calvin Coolidge "Cal" Worthington, renowned for appearing alongside an array of wild animals in his manic TV spots, brought a lion onstage. The bewildered creature and the crowd eyed one another uncertainly. Following this, there were muted responses to the opening musical acts, Emmylou Harris and Joe Walsh. This was an audience keenly waiting for the main attraction.
Late afternoon, in bright sunlight, roadies pulled aside the white curtains at the front of the stage and Elton—in white bell-bottoms, bowler hat, and spangled aqua T-shirt—opened solo with "Your Song." The thousands who'd been lying down on the field stood up and cheers rose into the air as the star's piano slid on a hydraulic platform from the rear to the front of the stage.
Up in the stands, being filmed by the LWT camera crew, Sheila waved a scarf and sang along with Fred, gushing, "He's sensational, isn't he?" As the song closed, she began to cry. She appeared overcome with joy, only later admitting to painfully mixed emotions. Watching her troubled son onstage, she felt he "looked terrible...I was so worried."
From the wings, Bernie swigged a beer and watched the audience's overwhelming reaction, seeing it as "American exuberance, y'know. Just fists in the air and yelling and taking your top off and swirling it around your head and just, basically, floating on a breeze of ganja."
The band joined Elton for "Burn Down the Mission" before ripping through "Country Comfort," "Levon," "Rocket Man," "Hercules," and even the rarely heard "Empty Sky." There was a short costume change intermission before he returned wearing the outfit that had been especially tailored for him for these shows by the designer Bob Mackie: a Dodgers uniform in their traditional blue and white, customized in hundreds of light-reflecting sequins and the words ELTON 1 stitched on the back. The singer hopped onto the blue-carpeted lid of his grand piano and expertly volleyed a few balls into the crowd with a baseball bat.
From here, the rest of the show was a home run. Elton dropped to his knees to play Davey Johnstone's guitar with his teeth in an aping of David Bowie and Mick Ronson's provocative ritual. He introduced the band members as their names flashed up on the electronic scoreboard. Bernie and Billie Jean King joined the backing singers, all dressed in white gas station attendant uniforms with ESSO patches on their chests. Bizarrely, kilted dancers appeared onstage to lead Elton in a Highland fling. He was joined by the Reverend James Cleveland and his forty-five-piece gospel choir, and for "Don't Let the Sun Go Down on Me," with perfect timing, the sky began to darken.
By the closing encore numbers of the marathon three-and-a-half-hour, thirty-one-song set, the stadium's lights had flared into life and he stormed into "Saturday Night's Alright for Fighting" and "Pinball Wizard" as mad hippies and teen fans frantically danced together in the crowd.
After all the drama of the week, the Dodger shows were a complete blast. When it was all over, the details of the gigs were hopelessly blurry in Elton's mind. "I remember Cary Grant being there," he said afterward. "I remember crying coming offstage."
Elton had viewed _Captain Fantastic_ debuting at number one in the U.S. chart as the peak of his success. But for Bernie, it was the triumphs at Dodger Stadium.
Howling to the sky: Dodger Stadium, L.A.
"I think we were at the absolute pinnacle of his fame in the seventies," he states. "I mean, he wasn't a very happy chap during that period, or especially at those gigs. But they were great gigs."
All the while, the hits kept stacking up. In the first week of November, "Island Girl" reached number one in America, ending the three-week reign of Neil Sedaka's "Bad Blood," which itself featured Elton on backing vocals. Effectively, he'd knocked himself off the top of the charts. The following week, _Rock of the Westies_ became his latest U.S. chart topper in an unbroken seven-album run.
But if business was booming, Elton's personal life was in the process of falling apart. Two days after the Dodger shows, John Reid threw a party in Los Angeles. Elton disappeared partway through and Terry O'Neill found him crying in an alleyway, saying he and Reid had split.
Theirs had always been a turbulent relationship. "My arguments with Elton get so bad that we've ended up knocking one another around," Reid admitted. "I've given him more than one black eye." For his part, Elton, in a telling use of words, later confessed that Reid had been "more unfaithful than I liked." The singer's recent problems, it seemed, had been exacerbated by his deeply unhappy internecine partnership.
Back in England after their personal—if not commercial—separation, Reid (who had recently expanded his operation to manage the messy business affairs of the fast-rising UK rock band Queen) moved to a house in Montpelier Square, Knightsbridge. Elton, meanwhile, had bought Woodside, a £400,000 Queen Anne–style eight-bedroom redbrick mansion in the countryside near Windsor, west of London. It had been built in the sixteenth century by Henry VIII, officially to house his surgeon, though legend had it that the king in fact kept a mistress there.
Elton's notion to make such a grand upward move had become fixed in his mind after he visited Ringo Starr at Tittenhurst Park, the Georgian country house set amid seventy-two acres that the Beatles drummer had bought from John Lennon when the latter left for New York in 1971. Wandering its grounds, Elton had been struck by "this feeling of complete freedom and privacy."
He had toyed with the idea of permanently relocating to Los Angeles, but his moody months locked away in Benedict Canyon had made him think twice. "I thought seriously about staying in the States," he said at the time, "but...I simply couldn't face it. Anyway, I've now made enough money to happily live in Britain...whatever the taxman may take from me."
Woodside itself was set within a comparatively modest, if still wildly expansive, thirty-seven acres of land boasting three lakes. The singer sent Sheila and Fred to check out the property while he was readying himself for the _Rock of the Westies_ tour. In fact, his stepfather's handyman skills were soon to become useful, as he was put in charge of the mansion's refurbishment. Amid the fallout of an explosive time, once again, Elton was turning to his family for practical support.
In late '75, he moved in and, with his magpie-like attraction to the glint of bright and shiny things, soon filled the rooms with antiques and artworks and vintage pinball machines and an extensive record library to rival the BBC's. Before long, Woodside began to resemble a rock star approximation of Aladdin's Cave. So stuffed were its interiors, the only place left to hang a Rembrandt etching was in the garage.
Happiness returned to Elton in his semirural seclusion. There was much at Woodside to entertain him: a squash court, billiards room, indoor swimming pool, and private cinema. Far removed from his roots, he had now become lord of the manor with a resident staff to serve him hand and foot.
Less than three miles away sat Windsor Castle, the chief English residence of the Queen away from Buckingham Palace. In recent years, Elton's connection with the royals had grown stronger. In early November, he accompanied Princess Margaret to the movie premiere of Neil Simon's _The Sunshine Boys_. He had already given a private performance for the Queen Mother at her royal lodge in Windsor Great Park.
Improvising at the royal piano as he picked his way through "Your Song," he'd changed the line "I'd buy a big house" to "I'd buy Windsor Castle, Your Majesty, where we both could live..."
Although the purchase of the firmly off-the-market Windsor Castle was obviously beyond him, Elton's spending power had never been more in evidence than in 1975. He'd continue to surprise the people around him with unexpected gifts—an $800 mandolin for Davey Johnstone, which the guitarist had seen in a New York music store and Elton knew he'd secretly coveted; a Rolls-Royce for his American agent, Howard Rose; a $23,000 raccoon coat for his secretary. For his thirtieth birthday back in January, Elton had given Rod Stewart Rembrandt's sketch for _The Adoration of the Shepherds_.
A _Time_ magazine journalist, David DeVoss, accompanying Elton on a shopping trip to Cartier in London, was astonished by the way the star blew the equivalent of $4,300—almost $20,000 today—on a gold bracelet and necklaces, three gold cigarette lighters, a duffel bag, and four briefcases, in a mere fifteen minutes.
That Christmas, all of the employees at John Reid Enterprises received a Cartier watch from the singer. "Well, it's so easy at Cartier," Elton casually told the management company's director, Geoffrey Ellis. "They've got everything I want and they wrap it all up nicely for you."
In making his own plans for Christmas, Elton decided to rent an enclave of houses in St. James, Barbados, and treat his friends and band members to a Caribbean vacation. It was a long overdue and desperately needed respite. John Reid traveled with the party, proving that his and Elton's working partnership—albeit one that still involved plenty of lively arguments—could survive the collapse of their private relationship. There was, it seemed, no need to estrange themselves from each other. After all, business was booming.
Each morning Elton shocked the others to life with a full-volume airing of the disco reworking of "Babyface" by the Wing and a Prayer Fife and Drum Corps. The days were leisurely frittered away swimming in the sea, water-skiing on it, and paragliding above it. Then there was the drinking—heavy, heavy drinking. There were Bloody Marys for breakfast, rum punches at lunch, and piña coladas to smooth out the rest of the day.
It was a final act of largesse in a year full of it. Still, Elton could afford such generosity. In 1975 he had been solely responsible for 2 percent of global record sales. Put simply, one in every fifty albums sold in the world that year was an Elton John record.
Onstage in America in '76: letting rip, knowing it was the "last" tour.
TWO ELTON JOHNS STOOD BEFORE THE CAMERAS—one rendered in wax, the other grinning in barely disguised disbelief at the supposed likeness of him to his right. It was March 7, 1976, and the singer had received the dubious honor of being the first rock star since the Beatles to become the subject of his own dummy at Madame Tussaud's wax museum in London. The real Elton looked great, even if he appeared to have dressed that morning in the dark: fur coat slung on over a striped sweater and jogging pants. The replica Elton was all wrong, looking in his shiny three-piece suit like a slightly creepy, visually impaired fiftysomething showbiz impresario.
Still, it was a celebrity landmark of sorts, and in Britain, Elton had kicked off the year by doing the rounds of the mainstream media. Four weeks before, he'd appeared as a guest on _Parkinson,_ the biggest talk show in the land. Its genial and laid-back northern host Michael Parkinson asked Elton about fame. "One thing that interests me," the host began, in his unhurried, circumlocutory way, "is how do you cope with all this sort of hero worship, y'know, all the kids who come and see you and no doubt storm the dressing rooms and all this sort of nonsense?"
"You have to live with it," said Elton breezily, before revealing more of his true state of mind. "You become terribly paranoid. But you have to fight it. You can lose touch."
Parkinson then redirected his line of questioning to the immense sums of money the singer was now earning. "I think people have got the wrong idea of me," Elton averred. "They've only got to shake my hand and they think I'm gonna give them a Rolls-Royce Corniche. I'm very, very grateful for the vast amount of money that I'm paid for what I do. But I don't stand with a whip and force people to go into shops and buy my records. I mean, I could release a record tomorrow that will just plummet down the charts.
"In fact," he added, with a smile, "I've got a single out now that isn't doing too well."
The audience laughed. But it was true. Having hemmed and hawed over the choice of the second single to release from _Rock of the Westies,_ Elton had hedged his bets with a double A-side featuring "Grow Some Funk of Your Own" paired with "I Feel Like a Bullet (in the Gun of Robert Ford)." In the States, it had struggled to number fourteen. In the UK, it had failed to chart entirely.
Back to workaday reality and the ever-waiting coal face—and in an effort to shake up the process, having now made three records at Caribou Ranch—Elton traveled to Toronto and Eastern Sound Studio to begin the sessions for his eleventh album. Not in the mood for superstar fuss, Elton fancied walking to work every day for the short distance between his hotel and the studio. It was a nice idea, but it quickly proved impractical when he was chased down the street by a mob of fans. He took to wearing a face-obscuring hockey mask in the streets, but that only made him more oddly conspicuous, and his devotees soon twigged who this weird serial-killer-looking figure actually was.
Following his split with John Reid and his roller-coasting highs and lows of the previous year, Elton was in a strange, self-pitying mood. Any attempt to revive some kind of a love life seemed clumsy and doomed to fail. He took to playing 10cc's achingly plaintive single "I'm Not in Love" over and over: "I'd sob like a baby because someone or other had taken my fancy and it was totally wrong." Frequently he'd turn up late at the studio. When he did arrive, he was more often than not in an inexplicable sulk. "It was a difficult personal time in my life," Elton says, "probably because the drugs began to escalate. But then we went and made _Blue Moves_."
During that album's creation, heartache was in the air. Bernie was in Barbados and absent from the sessions, his marriage to Maxine on the rocks. When he sent over the lyrics he'd completed for the new record, it was clear that the turbulence in his private life had thrown him into a state of trauma that informed almost everything he'd written.
"It's almost impossible at times," says Bernie, "not to put a great deal of yourself into what you are writing. _Blue Moves_ was the first time I wrote about the disillusionment of love and marriage and, y'know, let my pain be released."
"Bernie was a mess at that time," says Elton. "I rejected some of the songs for that album because the breakup with Maxine was going on and, y'know, they were so down I couldn't sing them."
In secret, or so they believed, Maxine Taupin and Kenny Passarelli had begun an affair. "Nobody knew," the bassist insists. "The affair had not come out. But it was subconsciously there. What was happening was that Bernie and Maxine's marriage was falling apart. It was too much money, too young, and too much fame.
"To rethink it with a mature mind, I would have walked away from all of that. But everything was crumbling—Elton and John Reid, Bernie and Maxine. So the lyrics were heavy and it was dark and sad. All these covert dramas going on...that becomes part of the energy of the music."
"Covert" was perhaps the key word. In "Between Seventeen and Twenty," a suspicious Taupin wondered aloud whether it might be a "close friend" now sharing a bed with his wife, while at the same time admitting his own infidelities and blaming them on the fact that his rock'n'roll lifestyle had led him astray. Recording the song, Passarelli kept mum and, squirming inside, dutifully played its bass line.
Elsewhere, for once breaking their standard modus operandi of writing apart, the previous summer, Bernie had visited Elton at his L.A. home and heard him playing a mournful piano part for which the singer already had the beginnings of a melody and even a first line, "What have I got to do to make you love me?" As Elton sang it, a title immediately appeared in Taupin's mind: "Sorry Seems to Be the Hardest Word."
It was a unique instance of the pair employing a more traditional writing approach, and it was fitting in the sense that the song seemed to capture on the page the romantic agonies that both were experiencing. An almost unbearably honest cri de coeur, "Sorry Seems to Be the Hardest Word" was to become the centerpiece of the aptly titled double album _Blue Moves._
Even if the lyrics were bleak, the party-time recording atmosphere carried over from Caribou to Eastern Sound: mimosas in the morning, followed by lines of coke and frenzied creativity. Still, the cocktail of intoxicants only served to upend Elton's moods. After the band ran once through a funky rocker called "Bite Your Lip (Get Up and Dance!)," the singer stood up from his piano and decisively declared, "That's a wrap. That'll be a hit."
When Gus Dudgeon informed him that it had only been a run-through to get the sounds right, Elton flatly refused to do another take. He was the star, it was his record, and he wouldn't hear another word said about the matter.
—
OVER IN ENGLAND, John Reid's behavior was becoming similarly erratic. One day he arrived back in London from a trip to Los Angeles only to find that because of a mix-up, his chauffeur wasn't at the airport waiting for him. The manager jumped into a cab and arrived at the Mayfair offices of John Reid Enterprises in a raging state. Surprising and terrifying everyone, he sacked his entire staff and ordered them out onto the street.
Across town, the company's director Geoffrey Ellis was in the middle of a dull tax advisory meeting when he was interrupted by a call from Reid telling him what he had just done. Ellis rushed back to the offices to find the receptionist, the only member of the staff remaining, trembling under her desk while still trying to operate the switchboard.
"We can do the royalty statements ourselves, with just our secretaries," Reid told a disbelieving Ellis. "We don't need anyone else." Ellis quietly had a word with the shell-shocked telephonist and the couple of members of staff still bewilderedly lingering on the pavement outside, telling them not to worry, he would sort everything out. Over the next few days, one by one, the workers returned. "No more," Ellis later noted, "was said about the incident."
It really wasn't a time to be sacking everyone who worked on the management team. Elton had a British tour that was about to begin. Its name, Louder than Concorde, had come from a quip Princess Margaret had made to Elton about the deafening volume of his amplified piano playing.
Minus the touring luxuries that came with traveling around the States, it was designed to be a no-frills affair, on every level. For years, the UK music press had been grousing that Elton had deserted Britain to concentrate on his extensive touring of America. As a long-delayed response, the Louder than Concorde dates were to find him playing theater- and civic-hall-sized shows in small UK towns and cities—Preston, Hanley, Dundee, Taunton—which normally fell off the touring map. Given the relatively short distances to cover, Elton and the band would be traveling not by plane or luxury coach but in cars, albeit a fleet of vintage Daimlers.
It was a month of pints of ale in woody country pubs, stop-off walks on bleak Northumberland moors, ice cream cones in Edinburgh, plates of fish and chips in greasy spoon cafés in Blackpool. It was resolutely un-starlike and as stripped down as the stage set. But at the same time there seemed to be endless photographs taken during the tour of Elton looking bleary-eyed, shattered, or asleep with a drink in his hand backstage while the party raged on around him.
A review of the opening night at the Grand Theatre in Leeds appeared in _Melody Maker_ acknowledging the very different tone to the tour: "This was Elton being Elton, with no trimmings. There were no stage props or gimmicks in sight, not a whiff of dry ice, and not even a back projection screen."
It wasn't, however, a tour without incident. At the Kings Hall in Belle Vue, Manchester, in a grim echo of Watford Town Hall back in '72, there was an IRA bomb scare in the middle of the show. All five thousand members of the audience—along with Elton and the band, who weren't allowed to return to their dressing room—were forced to stand in the rain for half an hour while the venue was scoured for suspect packages. Once it was declared safe for them to reenter the arena, the show recommenced with an added edge of electrified defiance.
Nineteen days later, in Newcastle, Kenny Passarelli almost got himself arrested, drunk and dancing on top of the hotel bar. The bassist's father was a police officer back in the States, and so when members of the local constabulary arrived, he somehow wrongheadedly thought this gave him special privileges. He screamed, "Fuck off! My father's a cop too. Leave me alone!" Eventually, he was dragged off to bed, rather than jail, to sleep it off.
For the greater part, the UK leg of the Louder than Concorde tour offered proof that even without his dress-up antics and usual pizzazz, Elton and his songs and his well-drilled band could still dazzle. Moreover, the unfussy day-to-day circumstances of its operation were a return to something approximating normality and his pre-fame touring. It was a reminder of a life he once knew.
—
NOTABLY, BERNIE WAS missing from the entourage of the British tour. Year by year, with their downtime spent entirely apart, a distance between the two songwriters had begun to grow. Their brotherly bond had started to unravel. Bernie graphically described Elton at the time as being "Santa Claus one minute, the Devil incarnate the next."
Not that Taupin in 1976 was the model of emotional stability. Down the years, he had slowly acquired a drinking problem that was now becoming serious. "I was very much a lazy bastard hanging backstage drinking all the Jack Daniel's," he says. "I didn't have anything else to do."
Although he was now extraordinarily wealthy, Taupin had struggled to find a role for himself beyond his writing partnership with Elton. In 1971, he had recorded a poorly selling eponymous spoken word album on Elektra Records, on which he read his lyrics and poetry over backing music provided by Caleb Quaye and Davey Johnstone. More successfully, at least in creative terms, the following year he had turned producer for David Ackles's highly dramatic, heavily orchestrated, and critically acclaimed third album, _American Gothic_. In May '76, he had just published a book of his collected lyrics, _The One Who Writes the Words for Elton John,_ illustrated by various collaborators including John Lennon, Ringo Starr, Alice Cooper, and Joni Mitchell.
But his was often an oddly aimless existence filled with much empty time. After the disintegration of his marriage, Bernie was sent into what he describes as a "tailspin." Occupying a spartan house on North Doheny Drive in West Hollywood, he lived by night and slept through the day, waking to reach over to his bedside refrigerator, crack open a beer, drink half of it, and then top it up with vodka.
In the evenings he hung out with the rock star clique who had earned themselves the nickname the Hollywood Vampires, their chosen lair a table on a balcony at the Rainbow Bar and Grill on Sunset Strip. Its loose membership, aside from Bernie and his now close buddy Alice Cooper, numbered Harry Nilsson, Mickey Dolenz, and Keith Moon. "Bernie wasn't an alcoholic," Cooper stresses. "I was. Bernie could turn it on and off."
Aside from alcohol and his ubiquitous coke habit, Taupin dabbled in hallucinogenics (LSD, mushrooms) and even on a couple of occasions dangerously flirted with heroin. "I always had the good sense to recognize things that could potentially destroy me," he says with a laugh. "There's nothing fucking heroic about it at all. It's just the stupidity of youth."
These indulgences threw the lyricist into a delicate physical and mental state. "That horrible paranoia that's making you shake," he says. "That awful feeling you get after being up for three days and you can't sleep. Putting towels across the window, y'know. Don't want to see the sun. 'Oh God...makes my skin crawl.' "
It slowly began to dawn on this narcotic vampire that it was perhaps time for him to step back into the light.
—
SOON TO SWELL both Bernie and Elton's bank accounts was the release of _Blue Moves_ as the first album of the lucrative MCA deal John Reid had negotiated in '74. Before that could happen, there was a final record owed to DJM. In typical contractual obligation style, it was to be a live album, titled _Here and There_. As much as it was a stopgap release, it highlighted the key differences in Elton's live performances between 1972 and 1974.
The first vinyl side was culled from the '72 Royal Festival Hall performance in London, though notably from the earlier band portion of the set before the sniffy Royal Philharmonic Orchestra members arrived onstage to unsettle him. The token hit inclusion of "Crocodile Rock" aside, it was the sound of the delicate singer-songwriter of "Skyline Pigeon" and "Love Song." The flip side, meanwhile, featured a selection of numbers from the '74 Madison Square Garden show that had guest-starred John Lennon. The visceral excitement of that momentous gig had been successfully captured on vinyl, as evidenced by the wild screams heard over "Funeral for a Friend" as Elton stepped onto the stage, and the squeals of delighted recognition that similarly greeted the opening bars of "Rocket Man" and "Bennie and the Jets."
In the end, _Here and There_ served as a contrasting chronicle of the more considered appreciation of '72 Elton in England and the crazed reaction to '74 Elton in the States. It made it clear that British and American audiences expected different things of him.
The year 1976 marked the culmination of the bicentennial celebrations in the United States and so the plans for the American leg of the Louder than Concorde tour—now given the tongue-in-cheek bracketed addendum (But Not Quite as Pretty)—were appropriately far more ambitious. New outfits were designed for the singer: one that gave him the appearance of a glammed-up Uncle Sam in a red-and-white-striped jacket and voluminous top hat; another saved for Madison Square Garden, where he was to be decked out as a bespectacled Statue of Liberty.
America demanded big moves and showmanship and Elton rose to the challenge. But before the tour kicked off, he was to be given a thought-provoking audience with the very man who had set the rock'n'roll agenda.
—
IT WAS FINALLY going to happen. Two decades after being that tubby nine-year-old kid miming to rock'n'roll records in his bedroom mirror, Elton was going to meet Elvis Presley.
Staggeringly, they were meeting on almost equal terms—Elvis obviously a living legend, but in terms of record sales power, Elton currently the biggest star in America. Appropriately, he took along his mother, Sheila, the person who'd blown his young mind by introducing him to Presley's music in the first place.
It was the last week of June 1976 and Elton and the band were rehearsing in Washington, D.C., for the imminent start of the two-month-long tour. The call came through confirming that yes, Elton was very much welcome to come along to Presley's Sunday show at the Capital Center in Largo, Maryland, only a half hour's drive away. More incredibly, yes, the King would be happy to meet with him before the show.
Arriving there that night, Elton and Sheila were ushered backstage and then—amping up their nerves—left waiting. Finally, they were led into Elvis's inner sanctum. Instantly, they were shocked by Presley's pallid complexion and corpulent figure. No longer merely portly, he had gone to fat: a ballooned phantom in a garish white, gold, and lilac suit. Black hair dye dripped down his forehead.
Elton looked into the eyes of the King and felt there was "nothing there." Sheila was so stunned she couldn't say a word. "It was sad," says Elton. "It was very clumsy. 'Cause what can you say? This is the man who started it all, basically."
The two traded compliments about each other's music. Elton asked Elvis if he could request "Heartbreak Hotel," but it wasn't in the set of songs the band had planned for the night's show. At the end of the brief and awkward summit, in reality it was Reg, not Elton, who asked Elvis for his autograph.
Only minutes before showtime, Elton and his mother were protectively escorted by cops into the arena. As the house lights were killed for the familiar boom of Presley's walk-on music, Strauss's "Also Sprach Zarathustra" (better known as the theme from _2001: A Space Odyssey_ ), they took their seats, stage right, in the second row. A ripple of excitement passed through the audience at the sighting of Elton. One police officer lingered in the aisle, just in case the English star was hassled by any of the other concertgoers.
The past few years had seen Elvis in and out of the hospital, afflicted by various health problems, which were a worrying by-product of his expanding size and addictions to prescription drugs. His performances were now notoriously erratic. One night he would have his wits about him and there would be flashes of the brilliance of his heyday. The next night, he would slur his way through the songs or forget the words and even the names of the band members as he introduced them.
Tonight, both sides of the unpredictable performer were on display. Grinning and attempting some of his old dance moves, he sounded tired and gaspy in "Fever," but then he managed to summon from somewhere his still extraordinary voice for the nape-tingling operatic crescendo of "America the Beautiful," causing Elton to stand up and applaud madly. Elvis hit the big notes at the end of "Hurt," his latest single, but then he seemed halfhearted and uninterested in "Hound Dog." He ended, as he always did these days, with "Can't Help Falling in Love," draping cheap scarves around the necks of his wide-eyed female devotees in the front row.
Elton was riveted. In both gratifying and disturbing ways, it was an utterly compelling spectacle to witness. "It was someone who was in a complete drug haze giving nylon scarves away to these fans," remembers Elton. "And yet it was still, in a way, magical."
Afterward, Sheila turned to her son and said, "He'll be dead...give him six months."
"Well," Elton reflects, "it was a year."
—
THE PRESLEY SHOW threw up some uncomfortable truths and parallels for Elton. Three nights later, he was set to appear on the same stage at Maryland's Capital Center, and, like his former idol, he, too, was still subjecting himself to a grinding tour schedule while increasingly propping himself up with drugs. He was finding it more and more difficult even to hold a conversation with someone unless he'd had a sizable toot.
"I was nowhere near as bad as he was," he says. "Y'know, I was thin, I was all right, I was functioning very well."
In some ways, Elton was thankful to be grounded by the ego-deflating and sometimes mocking treatment he was given back home by the press in Britain. "In the long run," he reasons, "it's better than being told, 'You're wonderful, you're wonderful, you're wonderful,' all the time. Stars are treated like royalty in America. They surround themselves by yes people and their reality goes out the window."
Nevertheless, Elton's audience with the King in decline had been a warning. Something had to give. He knew that if he didn't make a big change in his life and career, he could easily go the same way.
Having seen firsthand the physically wrecking effects that chemicals and constant touring could have on someone, Elton would announce onstage at the penultimate gig of his seven-night stint at Madison Square Garden that at the age of just twenty-nine, he was quitting. It had been a magnificent and often astonishing era, but he had decided it was over.
For now, however, with the tour still ahead, he chose to keep this decision to himself.
—
KNOWING IT WAS the last tour, he decided to utterly throw himself into it and have an absolute ball. The keyboard gymnastics were more extreme. He jumped on top of the glittering piano and catwalked along its theatrically extended length. He dressed up as a clown and hung stuffed cloth bananas and carrots around his neck. From Washington, D.C., to Detroit to Cincinnati, every night was Mardi Gras.
Sometimes, though, the stadium-proportioned crowds were overly drunk and overly rowdy. In Detroit, at the Pontiac, a flying bottle hit Elton, provoking him to rant at the audience. Later the same night, narrowly missing the shoulder of the singer, a pair of hurled binoculars struck Kenny Passarelli in the chest. The bassist thought he'd been shot. From the side of the stage, John Reid urged him to "keep playing...keep playing."
Still, even after those tougher shows, once they were back in the sky aboard the _Starship,_ everyone was giddy with exhilaration. One night, someone presented Elton with a pair of white roller skates. As the jet hit cruising altitude and the seatbelt signs were extinguished, he gleefully glided up and down the plane, as everyone hooted around him.
As he traveled around the country, Elton attracted other celebrities into his orbit. In Chicago, Hugh Hefner threw a party for him at the Playboy Mansion and someone snapped a shot in which the singer pretended to be transfixed by the sight of the cleavage of one of the bunnies as he signed her wrist sweatband. In Philadelphia, Elizabeth Taylor hopped onto the _Starship_ along with the seventeen-year-old Michael Jackson. "Michael was the sweetest kid," says Elton. "I got to know Elizabeth Taylor very well. I could say, 'Ah, you old cow, how you doing?' Everyone treats stars so reverently and they don't want that."
Midway through the sets, Kiki Dee would arrive on the stage to perform "I've Got the Music in Me" before being joined by Elton for the duet that was fast on its way to becoming his biggest hit of the year. "Don't Go Breaking My Heart" had been recorded, almost as an afterthought, during the _Blue Moves_ sessions, and released as a stand-alone single.
Conceived as a Marvin Gaye / Tammi Terrell–style soul number, the song had come to Elton one day as he sat in Eastern Sound playing a Wurlitzer electric piano. He'd added a gobbledygook lyric and then played it over the phone to Bernie in Barbados. Bernie quickly scribbled down his playfully romantic pop rhymes and sent them up to Toronto, where Elton recorded a demo version of the tune duetting with himself, singing the female parts in a comic-voiced higher register. Elton and Bernie's lighthearted attitude to this throwaway piece of pop fluff was underlined by the fact that as its writers they chose to credit themselves under the punning pseudonyms of Ann Orson / Carte Blanche (or "an 'orse and cart(e) blanche")—an inside joke that was too obscure for most outsiders to grasp.
Kiki Dee recorded her parts three and a half thousand miles away in London. "I didn't say at the time that we'd done it separately," she points out. "I just kept it low-key. I kind of thought it would be more fun for people to think that we were standing there together in the studio, like on the video."
The highly memorable cut-price promo film for the song was shot in one take in London before Elton departed for the American tour. In it, he and Kiki goofed around at the microphone, the former in a loose-fitting checked suit, the latter in a pink getup that gave her the look of a children's TV presenter. "If I'd known that it would have become such an iconic video," she says, laughing, "I might have been more concerned about what I was wearing. A jumpsuit with tight ankles and little espadrilles. It was a bit Noddy [the petite wooden puppet character created by English children's author Enid Blyton]."
Although Motownish in its intent, in truth, "Don't Go Breaking My Heart" was unabashedly corny, while at the same time being so infuriatingly catchy it proved to be radio gold. It topped the American charts for the whole of August and for six straight weeks back in Britain, as Elton's first-ever UK number one.
A more cred-giving record was the one Elton broke at Madison Square Garden between August 10 and 17, bettering—by one show—the Rolling Stones' six nights at the venue in '75. Raking in $1.25 million and playing to 137,900 people, it was effectively an unofficial East Coast equivalent of L.A.'s Elton John Week the year before. For those seven days, the singer owned New York. After hours he would hang out with Manhattan's hippest of the hippest. "The seas parted for Elton," says Bernie, "and there was Bianca [Jagger] and Andy [Warhol]."
Elton was in no mood for anyone to rain on his parade, and after John Rockwell wrote in _The New York Times_ that the opening Madison Square Garden show in his opinion had "offered wallpaper music of the most banal sort," the singer verbally attacked him live on air on WNEW-FM. "If you're listening now, you asshole," Elton spat, "come down here and I'll destroy you. I'll rip you to bits."
It was a public unraveling that showed he was almost at the end of his tether. Before showtime on the second-to-last gig, Elton gathered his band around him in the dressing room at Madison Square Garden. He gave them the shocking news that he was quitting live performance.
Before he could even get the words out, he started crying. He told them, "I just can't do this anymore. I love you all, and this is the greatest, but I have to take some time off." Sweetening the bitter pill, he then said he was giving them all a whole year's salary in advance.
That night, Elton announced to the nearly twenty-thousand-strong audience, to audible gasps, "You won't see me for a while, but I'll be back...someday."
—
IN ANNOUNCING HIS retirement from the stage, Elton preempted the personal revelation that was to result in a steep decline in his record sales in the United States. For years, no journalist had summoned the nerve to ask him outright about his sexuality. In a _Playboy_ interview that had appeared at the beginning of the year, the writer Eugenie Ross-Leming pushed him on the subject, asking him what he made of the trend toward androgyny in the rock scene. Did Elton, she wondered, "get off on the bisexuality scene?" He ducked the question. "I really don't know what to say about it," he blankly responded.
The day after the final Madison Square Garden show, Cliff Jahr of _Rolling Stone_ spoke to Elton in his suite at the Sherry-Netherland. At one point, Jahr asked the singer, "Can we get personal? Should we turn off the tape?"
"I knew what was coming," Elton recalls. "There were obviously rumors around." Elton said there was no need for Jahr to switch off his recorder. The interviewer then asked, "What about Elton when he comes home at night? Does he have love and affection?"
In a circuitous way, Elton said that he did have "a certain amount of sex" and that he craved to be loved. "I don't know what I want to be, exactly," he stated. "I'm just going through a stage where _any_ sign of affection would be welcome on a sexual level. I'd rather fall in love with a woman eventually because I think a woman probably lasts much longer than a man. But I really don't know. I haven't met anybody that I would like to settle down with—of either sex."
"You're bisexual?" asked Jahr.
"There's nothing wrong with going to bed with somebody of your own sex," Elton stressed, if at first not specifically answering the question, before deciding to throw all remaining caution to the wind. "I think everybody's bisexual to a certain degree. I don't think it's just me."
It hadn't been an outright admission of his homosexuality, and during the interview he even lied about having had a relationship with John Reid. But some kind of truth was now out there. "I had no problems about doing it," Elton reflects. "It felt as if the time was right to actually say it. I was just thrilled to get it out."
In reality, Elton was in danger of being "outed" anyway. In September, a month before the issue of _Rolling Stone_ appeared with him on the cover beside the words ELTON'S FRANK TALK—THE LONELY LIFE OF A SUPERSTAR, David Bowie had given an interview to _Playboy_ in which the enmity between the two stars had spilled out onto the page. Bowie had previously dismissed Elton as a "token queen." Now Bowie—who had himself admitted to his own, as it would turn out, token bisexuality back in 1972—leveled a loaded accusation at Elton. "I consider myself responsible for a whole new school of pretensions," he said. "They know who they are. Don't you, Elton? Just kidding. No, I'm not."
But whereas Bowie's blurred sexuality had a cool-giving effect, a large part of Elton's audience were conservative Middle Americans. Still, even if coming out damaged his record sales in the immediate aftermath, as he views the period through the distance of time, Elton remembers it somewhat differently, valuing above all the personal liberty it won him.
"A few radio stations were a bit upset," he says, "and people burnt my records. But you know what? It was a very small price to pay for the freedom that it gave me."
—
IN THE UK, the singer's confession didn't cause much of a fuss, with the normally scandal-hungry _News of the World_ running a story under the uncharacteristically sympathetic headline ELTON: MY LOVE LIFE ISN'T SO STRANGE. In fact, the admission seemed to endear him all the more to the British public. He noticed that when he was out and about, more people waved at him in the street.
This unburdening seemed to put Elton in a liberated and reckless mood. When he arrived onstage for what he considered to be absolutely his last show, at Edinburgh's Playhouse on September 17 for a long-scheduled and unavoidable solo gig as part of the city's Festival of Popular Music (being filmed for an ABC TV special in the States), he had obviously been hitting the bottle.
His inebriated state, though, didn't seem to affect his performance. In fact, alone at the piano, he attacked the set list's mix of his best-known hits and deeper album cuts with visible relish, while throughout the show he kept topping up his Bloody Mary from a jug. "I don't want the people at home to think I'm an alcoholic," he proclaimed into the microphone. "I want them to _know_ I'm an alcoholic."
The longer the show rolled on, the drunker he got. During a wild and frenetic "Saturday Night's Alright for Fighting," he climbed up onto the piano, picked up a tartan scarf, and tied it around his balding cranium, larked around, tossed his microphone to the floor, and handed his piano stool out to the front row. He was flying on vodka, clearly hammered, but having something that looked suspiciously like the time of his life. "All right, lunacy prevails this evening," he laughed.
In marked contrast, his eleventh album, released five weeks later, was a far more serious affair. If Elton's first double, _Goodbye Yellow Brick Road,_ had been the soundtrack of his ascension to stardom, _Blue Moves_ was the sound of the comedown. Taupin had addressed the death of fame in key songs: "Cage the Songbird" was about the last hours of Edith Piaf; "Idol" spoke of the decay of a fifties rock star (unmistakably Elvis Presley) while pointing out that all celebrities seem to end up drowning in their own despair.
Although _Blue Moves_ was musically adventurous in its freewheeling, jazzy stylings, it sometimes made for uneasy listening. Anyone hearing the bleak lyric of "Someone's Final Song," for instance, might have had real cause to worry about Taupin's mental state, since it was about a writer penning his conclusive lyric before presumably killing himself, or as Bernie later elucidated, "blowing their brains out." With no little understatement, the lyricist later observed, "I think I sank too much into depressive excess on that album."
Altogether, it made for an uncommercial package, even down to its cover reproduction of a painting—_The Guardian Readers,_ by the Irish artist Patrick Proktor, which Elton had bought at an exhibition—featuring largely faceless, topless men sunbathing on blue-rendered parkland grass. If it was somehow timely for the singer in the wake of his _Rolling Stone_ interview, then its vaguely homoerotic nature ensured that a competition to give away copies of the album in _The Sun_ newspaper was nixed. "They wouldn't do it because they said, 'There's no women in this painting,' " Elton incredulously noted at the time. "I didn't realize before."
As artistically successful as _Blue Moves_ was, it made it only to number three in America, ending Elton's unbroken run of seven number one albums. "In a way, it was a relief," he says. "I'm someone who studies the charts and I just thought, _This is not going to go on forever_. I was prepared for it and I knew that after it _had_ stopped, it was a matter of just finding your place."
—
BACK HOME IN England, Elton found something to ground him.
When Stanley Dwight, on his rare weekends off from the RAF, used to take young Reg to see Watford Football Club play, it had sparked a lifelong devotion. Wherever he was in the world, Elton always had to find out the latest Watford score. In New York, in the spring of '75, on the day that the team were playing their final match of the season, the singer had run into a shop, asking to use their phone to call England and find out the result. When he was told that Watford had lost 3–2 to Walsall and been relegated to the Fourth Division, Elton immediately sat down and began weeping. "They must have thought I was mad," he noted, not unreasonably.
The symbolic title of vice president of the club that he'd been given back in 1973—purely because he was the team's most famous fan—suddenly wasn't enough for him. In June of '76 he became the chairman of the heavily indebted team, bringing with him his financial heft. His hugely ambitious aim, he declared, was to bring the team up from the Fourth to the First Division.
More important, perhaps, it gave Elton something to focus on now that his touring life had come to an end. "It wasn't about just throwing money at the club," he insists. "Even though I paid a lot of money, you couldn't have bought that amount of happiness. It gave me another interest outside of music. Because at that point I had no other interest outside of music."
It also brought him sharply back down to earth. No one that Elton dealt with at Watford FC was even remotely impressed by his fame. In fact, in that no-nonsense British way, they seemed determined to treat him like everyone else. When the team's nineteen-year-old star player Keith Mercer was trying, and failing, to grow a mustache, Elton boasted that he himself could grow one in a week. The singer returned a week later with a full beard. Mercer took one look at the star's furry face and playfully jibed him with the words "When you can grow that on your face, it's a shame you can't do the same on your head."
Elton loved this kind of banter, and he even cheerfully put up with the insulting chants that would be weekly bellowed at him by the supporters of Watford's opposing teams. They'd repeatedly sing, "Elton John...is a homosexual!" or "He's bald—he's queer—he takes it up the rear!" To the tune of the Cockney standard "My Old Man (Said Follow the Van)," they'd chorus, "Don't sit down when Elton's around / Or you'll get a penis up your arse!"
To the singer, this behavior was simply "good-natured English abuse." It shook him out of any last remaining traits of starriness. Over the past seven years, he had slowly lost touch with reality. Now here he was on the receiving end of a major jolt of it.
In America, he had been given a glimpse of what might have easily become his future. He knew that it wasn't one he wanted. "I mean," he said, "who wants to be a forty-five-year-old entertainer in Las Vegas like Elvis?"
Together but alone: John Reid and Elton after their personal, but not professional, split.
ELTON WAS AT HOME watching _Top of the Pops_ when the Sex Pistols exploded onto the screen. Seeing Johnny Rotten thrillingly howl his way through "Pretty Vacant," he suddenly felt ancient. Understanding better than most the vagaries of musical fashions, he knew something vital had changed that might well render him redundant. Elton looked at the punks and thought, _What a fucking state they're in_. But at the same time he could relate to them, remembering those days during the glam rock era when he'd dyed his hair orange and green and his eyebrows pink.
He'd turned thirty in March 1977 and it had left him feeling old and tired and slightly vulnerable. Entering your thirties, at a time when pop music was still considered not to be a viable long-term career option, seemed like the beginning of the end. Adding weight to the notion that the sun might be setting on his relevance, he was winning music industry prizes that somehow felt like retirement clocks: Favorite Male Artist at the American Music Awards, an Ivor Novello songwriter's award for the now glaringly old-fashioned "Don't Go Breaking My Heart."
Picking up the Best Singer gong at the Capital Radio Awards in London, he really didn't feel he deserved it and said so in his acceptance speech. "I think this award," he declared, "should go to Elvis Costello."
All of his energies were being thrown into Watford FC. In April, he boldly sacked their current manager, Mike Keen, and replaced him with the Lincoln City Football Club's boss, Graham Taylor. Elton's mother saw the positive effect that being involved with the soccer team was having on her son. "He's mixing with ordinary people," Sheila pointed out. "He's never been more happy."
At the same time, only eight months after he'd dramatically quit live performance, he couldn't stay away from the stage. In May, he agreed to give six relatively intimate performances at the three-thousand-seat Rainbow Theatre in London as part of the celebrations for the Queen's Jubilee year, marking her quarter century on the throne. He viewed the shows, billed as the Elton John and Ray Cooper Charity Gala, as an opportunity to dust off songs he hadn't performed in years.
The first half of the set featured him alone at the piano, reeling off "The Greatest Discovery" from _Elton John,_ "Where To Now St. Peter?" from _Tumbleweed Connection,_ and "Sweet Painted Lady" from _Goodbye Yellow Brick Road,_ along with a cover of Marvin Gaye's "I Heard It Through the Grapevine." He was then joined for the second half—from "Funeral for a Friend" through the likes of "Bennie and the Jets" and "Sorry Seems to Be the Hardest Word"—by Cooper on percussion, whirling around from timpani to vibes and throwing himself into his signature arm-flinging tambourine playing routine.
It was, Elton realized, a fresh and nonflashy way for him to approach concerts. Maybe he could further explore this in the future.
The Queen's first cousin, forty-year-old Princess Alexandra, attended the opening night at the Rainbow. According to Geoffrey Ellis of John Reid Enterprises, who was sitting beside her in the front row, the minor royal didn't seem to know how she should act at a pop concert—whether to clap along with the rest of the audience during the songs or appear more regally reserved. Later, at a backstage party, meeting Elton, she seemed equally unsure of exactly what to say to him. She was impressed by the stamina he seemed to summon up for the two-and-a-half-hour show. Did he take some sort of a drug? she wondered. Cocaine, perhaps?
"I couldn't believe it," said Elton, recklessly relaying the anecdote to reporters after the event. "I was so stunned. I'm not sure what I replied."
When the story appeared in the newspapers, Elton was forced to apologize to Princess Alexandra through the pages of the _Daily Telegraph_. "I very much hope I haven't embarrassed the princess," he said. "I thought it was very amusing and that's why I repeated it. Of course I don't take cocaine." Later, choosing his words carefully, if revealing more, he explained, "I told her that I don't take cocaine before I go onstage—which is the truth."
This awkward apology showed just how much Elton valued his position as a royally endorsed rock star. A week before the Sex Pistols tried to subvert the Jubilee celebrations by performing on a boat sailing up and down the River Thames, resulting in the arrest of their manager, Malcolm McLaren, Elton was performing on a variety bill at the BBC-televised Royal Windsor Big Top Show for the Queen and Prince Philip. The ideological distance between him and the punk generation was seemingly immense.
Not that he wasn't down with the kids in other ways. On the afternoon of Friday, June 17, he was at home in Woodside, lying on his bed watching a Wimbledon tennis match on TV, when his housekeeper came in to say there was a bunch of students outside, from Shoreditch College in Surrey, who'd buzzed the intercom at the front gates. Their leader, Ken Hall, had informed her that they'd booked the American soul singer Jimmy Helms for their valedictory ball to be held that evening but at the last minute, Helms had canceled. Was there any chance at all that Elton would consider coming along and performing?
Elton immediately passed back a surprising message: Yes, of course I'll do it. On two conditions—that they find him a decent grand piano, and that they not tell the press. "I wasn't doing anything that night, so I thought, _Why not?_ " Elton remembered. "I admired their nerve."
The students were stunned. Returning to college, they were deflatingly informed by their tutors that there was no way, even for Elton John, that they would be allowed to wheel one of the institution's two grand pianos, kept in the chapel, into the main hall where the ball was being staged. Thinking on their feet, the students asked if the singer could instead perform in the chapel itself. And so this quickly became the alternative arrangement.
At 6:30, Elton's personal assistant Bob Halley turned up, checked out the piano and the sound system the students had borrowed from the hired disco, and gave the setup the OK. Elton pulled up in his Rolls-Royce at 9:30, and in an effort not to spoil the surprise for the unsuspecting audience, the organizers sneaked him in through the back door of the chapel.
Everyone was taken aback by how casual and friendly he was. Elton sat down at the piano, took out his Barclays Bank checkbook, and began to write out a rough set list on the back of it. As he did a quick sound check, the rumors of his impending appearance began to spread around the college, building to a buzz of anticipation and disbelief.
Still, come showtime, the chapel wasn't even full, many students believing the news to be some kind of practical joke. Then Ken Hall stepped onto the stage to make the introduction. "Ladies and gentlemen," he said, "I give you, for the first time ever at Shoreditch College...Elton John."
Resounding cheers went up as he arrived onstage, and students rushed in to pack the 150-capacity chapel to fire regulation breaking point. Onstage, in a green-striped black jacket, blue track pants, and an olive flat cap, Elton rolled through "Crocodile Rock," "Daniel," "Rocket Man," "Your Song," and more, in a nearly two-hour set. "I was astonished," the college's vice principal, Peter Lacey, later recalled, "by how well he could communicate without any props or lights."
Elton encored with "Bennie and the Jets," stopping midway to pick up the lid of the grand piano and pretend that he was about to hurl it to the back of the stage. The music lecturer's face turned scarlet, thinking his precious grand piano was about to be trashed. In the end, the singer respectfully replaced the lid and finished the song to yells and applause.
Afterward, Elton accepted an invitation to have a post-show drink with a bunch of students in one of their rooms. There, he knocked back their beer and Glenfiddich whisky, regaled them with his tale of meeting Elvis Presley, and defaced a poster of himself hanging on the wall with the words "Watch out, old four eyes is back."
Never once was there any mention of his receiving any payment for the performance. His mum was right. He was enjoying hanging out with everyday people.
—
THROUGHOUT THE YEAR, though, hits were hard to come by, particularly in the States in the wake of his "bisexual" confession. Nothing else released from _Blue Moves_ made much of an impression: "Bite Your Lip (Get Up and Dance!)" limped to number twenty-eight in both the United Kingdom and the United States; the lightsome funk of "Crazy Water" climbed one position higher to number twenty-seven in Britain but wasn't even released in America.
In August, Elton flew to New York and threw a party at the nautically themed One Fifth Avenue restaurant to try to revive interest in the Rocket Record Company, including, in truth, his own fading passion for the project. He and Kiki Dee performed at the event, but it was becoming increasingly evident that the label was drifting rudderless. "No one was at the center of things," said Kiki at the time. "Maybe it wasn't a hungry label."
During the trip, Elton hooked up with Rod Stewart to discuss an idea the former had for a film, which he imagined might star the two of them. He envisaged the movie, to be titled _Jet Lag,_ as a buddy caper featuring the pair as tax-avoiding rock stars who flew around the world, living on their private planes. It would take the spirited rivalries of their real lives and blow them up to comedic proportions.
Rod humored his friend but thought it was a "totally barking idea." The film was destined never to be made, and, if anything, the notion only served to highlight just how aimless Elton was.
—
AS 1977 WORE on, he began to be increasingly depressed by the sight in the mirror of his fast-disappearing hairline. In autumn, Elton traveled to Paris for the first of a series of hair transplant procedures. At the clinic of Dr. Pierre Pouteaux, he underwent a long and painful five-hour surgery during which, under local anesthetic, squares of healthy hair were cut from the back of his scalp and grafted onto the top of his head.
Comically, if agonizingly, as a dazed Elton emerged from the clinic to step into his waiting car, he managed to bang his head on the edge of its door, dislodging some of the patches of freshly grafted hair. Meekly, he was forced to turn around and go back inside to have them reattached.
News of his hair transplant made the newspapers, but Elton wasn't the slightest bit embarrassed or ashamed to have people know he had subjected himself to the operation. "It's all one hundred percent vanity," he cheerfully confessed. "But I'm thrilled with the result."
Having spent most of the year resolutely dressed down, he was of two minds as to whether to agree to appear as a guest on _The Muppet Show._ "I don't want to do those crazy, flamboyant costumes with all the big feathers and big glasses and stuff," he told their creator, Jim Henson. In the end, he relented and dug some of his most outlandish costumes out of storage. Over three days in October, at Elstree Studios northwest of London, he filmed sketches and songs for the half-hour show, clearly having a fine old time in the process.
In his bejeweled white skullcap and multicolored ostrich plumes, Elton fronted the show's house band, Dr. Teeth and the Electric Mayhem, for "Crocodile Rock," amid a swamp scene where puppet reptiles performed the falsetto "la-la-la" hook line before hungrily dragging him from his piano stool and into the water. During a dressing room skit, the Muppets' backstage gofer Scooter thumped a piano as he let the singer hear a new song he'd "written" that sounded strangely like a pub-time rendition of "Bennie and the Jets."
"Isn't that the worst song you've ever heard, Elton?" asked a mock-appalled Kermit.
"Well...I didn't think so when I wrote it," he deadpanned.
In a lemon and sea-green suit and black bowler hat encircled with piano keys, Elton played it straight for "Goodbye Yellow Brick Road," and even the normally manic and flailing drummer Animal managed to pin down the beat. Finally, to cap the lot, in a spangly pink jumpsuit that revealed just how slim he'd become, he sang "Don't Go Breaking My Heart" with Miss Piggy. "Eat your heart out, Kiki!" growled the porcine diva in the direction of the camera.
During the taping, as he later related, Elton had to do eleven takes of "Don't Go Breaking My Heart" because he kept cracking up with laughter. In an off-camera ad-lib, the voice and animator of Miss Piggy, Frank Oz, suddenly declared, "I'm not used to working with amateurs!" before making her flounce off the set. "Those puppets," said Elton, "are human."
The next month, Elton changed his mind about touring. Since coming off the road, Davey Johnstone and James Newton Howard had formed a band, China, who were subsequently signed to Rocket. As they were rehearsing in Los Angeles, a call came through from Elton in England saying he wanted to do some dates with China both opening for him and backing him during his own performances. He flew to L.A. and rehearsed with the band for three weeks, although, confusingly, now with apparent reluctance.
A charity show to debut this new arrangement was booked at London's 12,500-capacity Wembley Empire Pool arena for November 3. On the day of the gig, the singer was in a terrible mood, throwing a strop backstage when some caps he'd ordered weren't delivered. That evening, in a black leather jacket and matching beret, he turned up onstage and immediately fluffed the introduction to "Better Off Dead" owing to an onstage sound problem.
As the show progressed, he looked and sounded tired, and even while cajoling the crowd into a sing-along, he appeared to be merely going through the motions. Thirteen songs in, talking as he gently picked out the opening chords of "Sorry Seems to Be the Hardest Word," he made another of his dramatic announcements.
"Uh, I'd just like to say something," he began. "Um, it's very hard to put it in words, really. But I haven't been touring for a long time, and it's been a painful decision whether to come back on the road or not."
There were cheers and shouts of "Yeah!" from the audience.
"And I'm really enjoying tonight," he continued. "Thank you very much. But...I've made a decision tonight. This is going to be the last show. All right?"
There were cries of "No!" from the crowd.
"There's a lot more to me than playing on the road," Elton concluded, "and this is the last one I'm gonna do."
Onstage, Ray Cooper respectfully applauded the singer's surprise declaration, while the rest of the band seemed stunned as they joined him on the song. Backstage, John Reid was pacing up and down and ranting. Elton hadn't warned him of his decision. "I've got to discuss this whole thing with him," a blindsided Reid told a pack of journalists afterward.
Anyone watching the show, however, could see that Elton was burned out. To his mind, everything had grown uncontrollably enormous. He was tired of seeing his fans crushed up against security barriers and thinking, _Well, this might be great for me, but is it great for them?_ He was wearied by how upset he would become if he felt he'd underperformed during a show, and how he'd mentally beat himself up afterward. He wanted somehow to start his life and career all over again.
It was another year, another retirement announcement. This one, however, had an air of finality about it.
—
IF THERE WAS a sense that everything was winding down, the arrival of his second singles compilation, _Elton John's Greatest Hits Volume II,_ seemed to mark the end of the second phase of his success. Coming only three years after his first hits compendium, the album chronologically swept up all the 45s from "The Bitch Is Back" to "Grow Some Funk of Your Own." It was a remarkable feat, really, to warrant two "best of" collections within the first eight years of your career, though a scan down the tracklist of this secondary volume only revealed the diminishing returns of his output.
Elton still wanted to make records, but he now wanted to create them in different ways. His thoughts returned to soul music. He decided to forget the idea of playing the piano, or even really being a songwriter, for the time being. Purely as a singer, he wanted to put himself in the hands of a record producer.
—
OVER THE PREVIOUS decade, Thom Bell had earned himself a reputation as the Phil Spector of Philly Soul, producing a chain of hits for the likes of the Delfonics, the Stylistics, and the Spinners. Elton put into action a plan to work with Bell, and sessions were booked for Kaye-Smith Studios in Seattle, where the producer was currently living. Before they began, Bell kept asking the singer if he was entirely sure he wanted to relinquish control of the recording process. The singer said he was. "He was tired," Bell remembered, "and he was smart enough to say, 'Listen, man, I'm gonna get someone else to do it for me this time.' "
In the end, inevitably, Elton brought two songs of his own to the table. The first, "Nice and Slow," was based on a lascivious Bernie Taupin lyric. The second, "Shine On Through," found Elton stepping for the first time outside his songwriting partnership with Taupin. The singer had been visiting his friend Gary Osborne, the lyricist who'd co-written "Amoureuse" for Kiki Dee back in '72, at his Hampstead home when he'd played him a piano part and sung a melody for a song idea that Elton said Bernie couldn't come up with any words for. He asked Osborne if he fancied trying to write something for it, and in time the song was fashioned into a slow-grooving gospel ballad.
Apart from these two compositions, Elton relied on Thom Bell to provide the material. The producer drafted his songwriting nephew LeRoy Bell for three other romantically minded songs, "Mama Can't Buy You Love," "Are You Ready for Love," and the tantalizingly titled "Three Way Love Affair."
In the Seattle studio, Elton tried to deliver these songs in his soul falsetto voice. Bell bravely told the singer that to his ears, his higher register wasn't strong enough. "Thom Bell taught me a lot," Elton said later. "How to breathe properly and use my voice in a lower register." The sessions seemed to go with a swing, six tracks were recorded, and an ABC TV crew invited into the control room at Kaye-Smith caught a listening party in full flow with the musicians, along with the singer and John Reid, dancing and clapping along to the playbacks.
When Elton returned to England, Bell traveled to Sigma Sound Studios in Philadelphia to add orchestrations. But when he heard them, the singer felt the tracks were overproduced and "saccharine" and, frustrating everyone, canceled the further sessions planned for the new year. For now, the tapes of the Thom Bell sessions were shelved.
—
THROUGHOUT 1977, ELTON and Bernie's physical remove from each other—the former largely in Britain, the latter remaining in the United States—had gradually turned into a state of estrangement.
Having experienced a moment of clarity in the midst of his drinking and drugging, Bernie had gone to Mexico for an extended break in an effort to dry out. He'd been followed there by Alice Cooper, himself fresh from a period in a rehab clinic and in a straight, if fragile, condition. Cooper would entertain Taupin with stories of the various oddball characters he'd met in the sanatorium cleaning up his act. "Bernie," he said, "I was in a writer's paradise."
Together, the pair began throwing around ideas for songs to be co-written about this concept, which would result in the 1978 Alice Cooper album _From the Inside_. Meanwhile, back in the UK, Elton found himself writing more and more with Gary Osborne.
"Because of the geographical thing," says Bernie, "we were being musically seduced elsewhere. We were dabbling in creating things with other people."
"I think when it initially happened," Elton says, "we were a little jealous of each other. That's just human, because of the love we have for each other. It's kind of like letting your wife sleep with another man. Y'know, it's like, [ _sheepishly_ ] 'Oh, it's okay.' But you had to get over that.
"The two of us were too close," he reasons. "Even though we had our own lives, it was necessary for us to find our own bases."
And so, ten years after they'd first met, Elton and Bernie split. Incredibly, both say there were no angry scenes during this breakup period. "Never had an argument with him," says Elton. "We're not those kind of people. I could never shout and scream at Bernie 'cause I love him too much."
"No, we never argued about it," Bernie confirms. "It was continental drift. _Blue Moves_ was sort of the apex. It was our Mount Everest in a way. We'd gone to the top. I'm sure drugs, alcohol, the geographical thing, it all contributed. The base core of it was that I don't know if we knew what we wanted to do next. Or even if we _could_ do it."
—
AT THE OUTSET of 1978, Elton entered the Mill Studio, situated in the village of Cookham on the River Thames, west of London, to begin work on his first album without Bernie. Gus Dudgeon in fact owned the facility, but Elton decided to co-produce the record himself along with Clive Franks, although of course Dudgeon would enjoy some remuneration in the process via their renting of the studio. The arrangement suited Gus fine, who felt that he and Elton had exhausted their collaboration. "The challenge had gone," said the producer.
The sessions carried on, in sporadic bursts, as the months went by. Everyone was in high spirits, not least a coked-up Elton who was also drinking as much as two bottles of brandy a day and, now that he had taken up smoking, puffing on joint after joint. His songwriting process with Gary Osborne was very different from the one he'd built with Bernie. The two would generally write in the same room, riffing ideas and coming up with titles and phrases.
It had been nearly two years since Elton had thrown himself so deeply into the creative process. In what would turn out to be an unwitting assessment of the quality of much of the material, though, he said at the time he felt he was suffering from "writer's diarrhea."
Nonetheless the first product of these sessions was in fact a John/Taupin song written, though unrecorded, for _Blue Moves_. "Ego" was a forceful, elaborately arranged rock song that aimed for New Wave but landed closer to the baroque style of Queen, with a lyric that found Elton inhabiting the persona of a demented fame-and-money-hungry superstar figure.
Talking about the track when it was released as a single in March 1978, Elton pulled no punches in exposing who, in his mind, the song was directed at. "It's dedicated to the Jaggers and Bowies of this world—and especially to Mr. McCartney," he crowed. "David Bowie is a pseudointellectual, and I can't bear pseudointellectuals. And McCartney's music has gone so far down the tubes, I can't believe it. They just all annoy me."
Only later did he realize that in "Ego," he might as well have been singing about himself. Fittingly, an ambitious promotional video was commissioned for the song, at great expense, overseen by Michael Lindsay-Hogg, director of the Beatles' _Let It Be_ and the Rolling Stones' _Rock and Roll Circus._
In the video, a glasses-free, contacts-wearing Elton in dark suit and pink tie stands mouthing the song's lyric with punkish aggression in front of neon signs unfortunately bearing the words "Elton" and "Ego" divided by a burning Olympic flame. Intercut were scenes of photographers flocking around an unseen star at a press reception and staged flashbacks of a child actor, John Emberton, resembling the young Reg, traveling on a train and acting in a school production of _Romeo and Juliet_.
As a slightly desperate attempt to remake himself for the postpunk age, it wasn't Elton's finest hour. Premieres of the video held in cinemas in London and Los Angeles didn't stir up any real interest in the endeavor, and "Ego" stalled at number thirty-four in both the British and American charts.
Elton was furious about the failure of the single, and in an interview with _The Sun_ newspaper he even questioned the accuracy of the chart compiled by the British Market Research Bureau for the BBC, lambasting it as "highly inaccurate...everybody in the business knows it's ridiculous. But far too few people have had the courage to say so. Until something is done about it, I've decided to withdraw my record company's advertising from any publication printing the BMRB chart."
By now, the title "Ego" was beginning to seem all too appropriate. The following week, John Reid, on a trip to Australia, called the offices of Rocket to chew out the label's pluggers for their failure to get the song's video screened on _Top of the Pops_. "It was all done in a fit of anger," said Reid in reflection, once he'd cooled down. "I have apologized for screaming so loudly."
In April, Elton was profiled in _The Sunday Times_ in the UK and seemed determined to underline his significance and integrity at a time when both appeared to be slipping away. "Most people have completely the wrong idea of me," he bristled. "I turned down one offer of a million dollars to do a week in Las Vegas. I didn't even think about it. I'm looked upon as one of the artists in this country who has the least credibility and I think I have the most credibility."
—
ALL OF THIS seemed to smack of wild insecurity worsened by cocaine-induced paranoia. If outwardly he was displaying much bravado and bluster, within himself he was feeling acutely vulnerable.
One Sunday in August, Elton was at home, in a peculiar frame of mind, filled with strangely morbid thoughts, when he began playing a haunting, compellingly dreamy refrain on the piano. A line kept returning to him over and over: "Life isn't everything."
On the same day, Rocket's seventeen-year-old courier Guy Burchett was killed in a motorcycle accident. Elton learned the awful news on Monday when he went into the office. In his head, he immediately titled the composition "Song for Guy" in tribute to the teenager cut down in his prime.
The coincidence of the tragedy with his own dark ruminations struck Elton as eerie. In reality, the circumstances surrounding the writing of the song said more about the singer's precarious mental state in the summer of 1978. He later wrote in a _Billboard_ advertisement for the subsequent single release of "Song for Guy": "I imagined myself floating into space and looking down at my own body. I was imagining myself dying."
_Elton and Ray Cooper outside the Summer Palace, Leningrad, USSR, May 1979._
EIGHT YEARS AND SEVEN WEEKS after he'd stormed the Troubadour, it was a very different Elton John who tentatively stepped onto the stage at the Plaza hotel in New York for his first live performance in eleven months. He was in an extremely wobbly state. His knee was quaking so badly he was forced to rest his foot on one of the piano's pedals to try to quell the tremors. It didn't work. "It just kept shaking," he confessed afterward.
Elton was the surprise solo guest performer for two hundred fifty delegates who'd flown in from all over America for the October 1978 MCA Records national convention. He kicked off with "Bennie and the Jets," enthusiastically crooning "Ooh yeah" over the song's opening piano stabs, before comically adding, "Notice how I've toned down my act?" He was in fine voice. No one in the audience would have been able to tell how terrified he actually was.
That was until he forgot the words in the third verse of "Sixty Years On," replacing them with an ad-libbed "doo-doo-doo-dah-dah." The record company employees laughed and clapped in support. "Just testing," Elton joked. Still, his uneasiness then made him mumble some of the lines in "Daniel." "We're gonna do a couple of songs from the new album now," he announced, forgetting he was onstage alone. "As I have never sung them in public, I should probably be dreadful."
He wasn't, and the soulful yearning of "Shine On Through," followed later by the delicate ballad "Shooting Star" segueing into "Song for Guy," all walked tall alongside his best-known songs.
_A look designed to kill flashy Elton. The cover shoot for_ A Single Man, _Long Walk, Windsor Great Park._
"I'm nervous as hell about the album and I'm nervous as hell coming on tonight," he admitted to the industry crowd. "Thanks very much, MCA. We've had our ups and downs...and I'm sort of on a downstroke, I think, really. Perhaps I got a little bit too cozy in what I was doing, and too safe. And it's a good thing that we had a two-year layoff, 'cause I'm excited again." His enthusiasm was infectious. Everyone left convinced that his comeback album would be a massive hit.
As if trying to obliterate his former flamboyance, the cover of _A Single Man_ featured him dressed in oddly funereal fashion. Resting on a cane in black greatcoat, top hat, and jackboots, he gave the appearance of someone about to attend a Victorian burial, an impression heightened by the fact that he was positioned on the dramatic Long Walk road leading up to Windsor Castle. He stared seriously at the camera, specsless and somber.
Inside the gatefold sleeve, he posed in tweed cap and jacket at the wheel of his vintage Jaguar, looking every inch the country squire off to do a spot of clay pigeon shooting. In finally shaking off his now old-fashioned glam rock past—and having given up trying to keep pace with the New Wave pack—his new look was that of someone belonging to an imagined rock aristocracy.
_A Single Man,_ however, was a patchy effort. The ballads were strong, but there was an abundance of weak-to-middling tracks. The barely veiled gay flirtation tale told in "Big Dipper" (audaciously featuring backup vocals from Watford FC players) wasn't as great an idea as it had probably seemed in the studio. "Return to Paradise" was a song of summer romance rendered in an easy-listening style that would have made Reg cringe if it had been in Long John Baldry's cabaret set back in '67. Later in the running order, "Georgia" was a decent gospel revival rouser in the style of the Band, though the lead single, "Part-Time Love," was a blandly bouncy ode to the apparent joys of infidelity, with even a returning Paul Buckmaster orchestrating on autopilot.
There was something—or someone—missing. If the title _A Single Man_ could easily have been construed as barbed, in seeming to refer to his "divorce" from Bernie, Elton insists it wasn't. "No," he avers, before adding a less decisive "not really."
Bernie, for his part, didn't perceive the album's name as a gibe. "I don't think there was anything bitter in that title," he says. "I suppose it could have been conceived [that way], but I know it wasn't. Obviously it could be seen as [meaning] our artistic marriage had dissolved and he was in fact a single man. But the truth of it was he actually wasn't because he was working with other people. He'd just remarried."
_A Single Man_ did decent business, shipping gold in the States in October and turning platinum the following month, Elton's confession about his sexuality two years earlier apparently having faded in the minds of the more conservative American record buyers. In the end, it was "Song for Guy" rather than "Part-Time Love" that provided its standout hit in the UK, with the gently entrancing, largely instrumental bossa nova carrying a wistful quality—lent poignancy by its sad backstory—that found it peaking at number four in Britain, although it made no impression whatsoever in the United States.
Buoyed as he was by his comeback, Elton was still determined not to resume a cosseted superstar lifestyle. He'd even begun to travel alone, amazing himself with the thrill of being able to board a flight, jump into a taxi, and check himself into a hotel entirely unaided. "A few years ago, I would have had it all done for me," he mused. "I thought, _Here you are at the age of thirty-one and at last you can do it on your own, you stupid sod_."
He was feeling alive with renewal and the invigorating freedom of something approximating normal life. He realized now that he had been locked away for years, like a prize tiger.
—
THEN, ON NOVEMBER 9, before leaving Woodside to travel to Paris, he collapsed.
He'd been sitting down, talking on the phone. He stood up and his legs gave way. His PA, Bob Halley, called an ambulance.
"I had terrible pains in my chest, arms, and legs," Elton later remembered in horror. "I couldn't breathe. I could hardly move for the pain."
He was dashed to the cardiac wing of the Harley Street Clinic in central London, having suffered a suspected heart attack. Someone leaked the information and this became translated into absolute confirmation of his condition in a news bulletin read out on BBC radio: "Pop star Elton John has been rushed to hospital after suffering a heart attack."
Almost immediately, journalists flocked like hungry crows on the pavement outside the clinic. Inside, Elton was being subjected to a battery of tests. Eventually the diagnosis came through: He'd keeled over from a panic attack brought on by nervous exhaustion.
He really hadn't been looking after his health in recent months. Aside from the high intake of alcohol and other stimulants, his eating habits had become wildly erratic: One minute he'd be dieting, drinking too much coffee, and popping vitamins; the next he'd be wolfing down a tub of ice cream.
"This has really shaken me," he told a reporter as he left the clinic. "When you're used to nonstop tours across the States and so on, you start to think you're superhuman. Then something like this happens and you realize you're not. Sometimes you've got to slow down like everyone else."
—
ABSOLUTELY THE LAST thing he needed to do was book an extensive tour for 1979, which is exactly what he did. After three months of rest following his collapse, he was itching to get back onstage. "I was wrong," he said, in explaining how he suddenly regretted retiring from live performance. "I didn't know how much I would miss it."
Nonetheless, he had no desire to re-form the band and return to the stadiums. Instead, he revived the two-man operation consisting of himself and Ray Cooper that he felt had worked so successfully at the Rainbow. This was a show that didn't rely on glitziness or volume. In rehearsals, stripping the songs back to their basic constituents, Elton looked back on the decade that was nearly over and recognized just what he and Bernie had achieved together. He rediscovered his back catalog and in the process "realized what great lyrics and songs they were."
In all his years of touring, Elton had sorely neglected Europe, and now he decided to remedy the situation. Two preliminary comeback shows—the first he'd given for paying crowds in the fifteen months since the halfhearted Wembley Empire Pool gig with China in November '77—were booked for the Stockholm Concert Hall in the Swedish capital on February 5 and 6.
Those first gigs were tough for him. Later he admitted that up there onstage he'd suffered another attack of the shakes, and for the first part of the set, without Cooper, he felt "bloody lonely." But he offered a positive post-show assessment to the press: "It was a good start and it's going to get better."
From here the tour moved through Copenhagen, Hamburg, the Hague, Rotterdam, Amsterdam, Mannheim, Munich, Berlin, Cologne, Paris, Antwerp, Dusseldorf, Wiesbaden, Lausanne, Nice, Barcelona, and Madrid in the space of five weeks. For someone who had just suffered an exhaustion-provoked health scare, it was a crazily intensive schedule, and one that said everything about Elton's obsessive drive to perform, perform, perform.
He hit the UK in March and gathered momentum, playing Glasgow, Edinburgh, Newcastle, Preston, and Belfast before arriving in London for six shows at the Theatre Royal, Drury Lane. Every night, he'd play for a generous, if energy-sapping, three hours. After the last, triumphant gig, he threw a fancy dress party at the Legends nightclub. Elton turned up in a curly blond wig and told everyone he'd come as Rod Stewart.
Ten days later, at the gig in Oxford, the balding, thick-mustache-sporting Vladimir Kokonin slipped in among the audience. As the deputy director of Gosconcert, the Russian state's music promoter, he was there mingling with the crowd at the Oxford Theatre on something of an under-the-radar mission.
After watching the extensive, highly entertaining, and—most important—nonprovocative show, the next day Kokonin called his superior back home and gave his approval. A phone call was then made from the offices of Gosconcert in Moscow to the promoter Harvey Goldsmith in England. The organization would be honored to invite Elton John to become the first major Western rock star ever to perform in the USSR.
—
ELTON HAD FIRST suggested the idea over lunch with Harvey Goldsmith and John Reid. He wanted to play places he'd never been to before and wondered aloud, "What about Russia?" Goldsmith said he'd see what he could do. The promoter wrote a lengthy letter detailing the request, which was passed via the British Foreign Office to the Russian Ministry of Culture. This in turn had resulted in Kokonin's trip to Oxford.
The planned tour was an arrangement that suited both sides. There was a controversy brewing over the fact that Russia had been given the opportunity to stage the 1980 Olympics. In light of this, Elton's visit was planned as a cross-cultural warm-up to the international sporting event. At the same time, it marked a softening of the USSR's hard-line attitudes, both in allowing Elton to perform in the first place and in permitting Western media to glimpse Russian life, or at least an edited version of it. For Elton, it was both a new adventure and a massive publicity coup. The only Western pop star who'd previously performed in Russia—three years before, in 1976—was the wholesome British singer Cliff Richard, hardly the kind of international name to attract worldwide press headlines.
In a way, Elton's invitation to perform behind the Iron Curtain was something of a backhanded compliment—having been completely vetted, he was deemed not to be a subversive figure. There was no way, for instance, that the far more dangerous Rolling Stones would have been afforded the same opportunity. For now, it seemed, Elton's announcement of his "bisexuality" was unknown or overlooked. If he had outwardly declared himself to be gay, then his visit to a country where a homosexual act carried with it the threat of a five-year prison sentence would surely have been nixed.
On May 20, the Elton John party boarded an Aeroflot flight from London to Moscow, the singer traveling with Ray Cooper, John Reid, Bob Halley, Harvey Goldsmith, Geoffrey Ellis, and his mother and stepfather. Western journalists invited onto the tour included David Wigg of the _Daily Mail_ and Robert Hilburn, the _Los Angeles Times_ writer of Elton's career-starting "turbo review" of the debut Troubadour show. Joining them to film a documentary of the excursion were Ian La Frenais and Dick Clement, better known as the writers of the British sitcom _The Likely Lads_ than as directors.
Elton was nervous on the plane. He had no idea what might happen during the upcoming concerts and he was fretting about how he would be received by Russian audiences. At the same time, this was one of his main motivations for making the trip. "I didn't know what to expect," he said. "That makes you play harder."
He arrived in a country still under the control of Leonid Brezhnev, the Russian ruler who had held power since 1964 and who'd trampled the cultural freedoms gently instituted by his comparatively liberal predecessor Nikita Khrushchev. The Soviet Union in 1979 was suffering tight state control under a Politburo whose membership, in Brezhnev's policy of "mature socialism," consisted mostly of septuagenarians. An agricultural crisis had resulted in meat and dairy shortages and long queues outside largely empty shops. Understandably, amid the general populace, alcoholism was rife.
From Moscow, Elton and the others boarded the Red Arrow night train, which would carry them the four hundred miles northwest to Leningrad. "The Russians wouldn't let us fly," said Elton. "We had to go by train. We presumed there was something they didn't want us to see." To compensate, the Gosconcert organizers added to the train an opulently furnished coach normally reserved for high-ranking Soviet officials.
Shadowing them closely on the journey were two women from Gosconcert, along with an inscrutable young man named Sacha who everyone suspected was a member of the KGB. Before the train set off, fans on the platform threw their Elton John albums in through its windows to have them signed and returned. This slightly desperate sight caused Elton and Sheila to weep, suddenly overcome by the emotion of it all.
—
TICKETS FOR THE show on May 21, the first of four at the 3,800-seat Oktyabrsky Hall in Leningrad, were officially on sale for the equivalent of $9—$31 today—though on the black market they were fetching up to twenty times as much. The official program for the events revealed just how the Soviet officials viewed the star, with the stiffly chosen words "Audiences are specially attracted to the lyrical ballads and folk songs performed by R. Dwight." In the printed running order of the songs, "Rocket Man" had been translated as "Cosmonaut."
Surveying the crowd as he stepped onto the stage and bade them good evening ( _"Dobry vecher"_ ), Elton could see row upon row of middle-aged to elderly officials and their wives. The real, younger fans were positioned toward the rear of the venue or upstairs in the balcony. Dressed in a flat cap and green shirt, his blue velvet trousers tucked into his boots in Cossack fashion, he sat down at the piano and picked out the opening arpeggio refrain of "Your Song" to enthusiastic if self-consciously restrained cheers from the back of the auditorium and polite applause from the front.
It was clear from the first few numbers that the Elton devotees knew all the words to his songs, having learned them from their bootleg records or mass-copied poor-quality tapes that had originated from Soviet soldiers recording West German radio broadcasts while stationed in East Germany. Still, the crowd reception remained muted until the singer launched into his first set-ending version of "I Heard It Through the Grapevine," prompting wilder roars. Elton thanked the audience in Russian, with a polite _"spasibo."_ Between songs, girls tentatively approached the stage, proffering bouquets of flowers and even requesting midshow autographs, something that was never done in the West.
The second set managed to achieve real liftoff when, partway through the opening "Funeral for a Friend," panels rose on the stage to reveal Ray Cooper's enormous percussion setup and he began to animatedly thump a pair of timpani. The Russian officials in the crowd appeared uneasy, unsure of exactly how to deal with this audience of now excitable youths. Rock'n'roll had finally arrived in the Soviet Union, albeit in the slightly neutered form of grand piano and acoustic percussion.
Then the erupting moment came with the rambunctious introduction of "Saturday Night's Alright for Fighting," as Elton kicked away his piano stool and stood up to give the Russians their first taste of his full Jerry Lee Lewis routine. Kids rushed the stage. From here, he moved into "Pinball Wizard," ending the song with the impishly improvised addition of "Back in the USSR," even though he didn't really know the lyric. "It just came to me," he said, "and I was singing it before I realized I didn't know any of the words."
It seemed to be all over in a flash and Elton was dizzied by the experience. "I'm knocked out," he told Hilburn afterward. "This has to be my biggest achievement as an artist. I'm at a loss for words."
In the street three floors below his dressing room, thousands of ticketless fans broke through the security barriers and police cordon shouting, "Elton, Elton!" The singer waved at them from his window and threw tulips, his eyes brimming.
—
FOR THE NEXT show, officials tried to impose two rules on Elton after the scenes they'd witnessed on the opening night. First, they objected to his knocking over his piano stool, on the grounds that he was damaging Russian property (not to mention inciting the crowd to dangerous enthusiasm), and second, he must not sing "Back in the USSR." He complied with the first request and ignored the second. During the gig, whenever the fans stepped into the aisles, Soviet guards pushed them back into their seats. Onstage, Cooper urged them to "Come on!"
After the shows, everyone hung out in the "hard currency" bar of their hotel, where only Western money could be spent. There one night Elton confessed to Geoffrey Ellis that he had struck up a "close friendship" with Sacha, the aloof and good-looking suspected secret police agent. (Later, in Moscow, Elton would be shocked when Sacha brought his wife and children backstage to meet him.) One evening, as a great deal of champagne and vodka was being downed, Elton, with Ray Cooper on drums and live soundman Clive Franks stepping in on bass, entertained everyone in the restaurant with a lively disco-grooved take on "I Heard It Through the Grapevine."
The next morning, Elton awoke with a pounding hangover and missed a planned trip to the Winter Palace, leaving Cooper to attend alone. (Getting wind of the story, one British newspaper ran a headline that dramatically blared ELTON SNUBS RUSSIANS IN WINTER PALACE REVOLT.) Later that day, however, the star managed to rouse himself for a tour of the Summer Palace, where in the treasury of the Hermitage Pavilion he was shown a collection of jewelry and gold artifacts that of course appealed to his expensive tastes.
Arriving in Moscow on May 25 ahead of the first of the quartet of shows at the slightly smaller twenty-five-hundred-capacity Rossiya State Central Concert Hall, Elton was surprised to find that the Russian capital was a far prettier city than he'd imagined. To his eyes, it looked very much like Manchester.
In Moscow, there was a full program of activities set out for him during the days. He visited the still-under-construction Olympic Stadium and attended a soccer match between Dynamo Moscow and an opposing team culled from the ranks of the Red Army. Appearing as a guest on the austere set of a Russian talk show, he was asked innocuous questions about his image ("I wear clothes to express my feelings at the moment, and my moods change") and the influence his visit to the USSR might have on his music ("I think I've gleaned something from the atmosphere to write something").
The shows in Moscow were even more rigidly controlled, though on the final night, May 28, he clearly had nothing to lose. The performance was being broadcast live on the BBC back in Britain using a complex satellite relay system that went via Ukraine, and the pair were in fiery form. Leading into "Pinball Wizard," Cooper hoisted his percussion mallet aloft and banged a huge gong, which brought an ovation. Elton toasted the audience with a tumbler of vodka poured from a bottle he had kept atop his piano throughout the show. Downing the drink in one swig, he threw the empty glass over his left shoulder to smash on the stage.
—
AS THE AEROFLOT jet taking them home lifted off the ground the next day, Elton and Bob Halley began screaming. The takeoff had revealed that their seats weren't secured to the floor. They had both been tipped backward by the thrust.
Happily and gratefully back on terra firma in London, Elton, invigorated by his Russian experience, faced what he felt was a highly negative interrogation by the British press corps. He was asked if he now supported Communism.
"I'm against bigotry and prejudice and persecution," he responded. "But if that stopped me playing my music, I wouldn't play here because of the [far-right political party] National Front or the campaign against homosexuality. You don't go in with guns blazing, saying, 'I want this and that.' You've got to approach things gently. I'd be very presumptuous to consider myself an ambassador of any sort, but I'm glad to do my bit."
He seemed keen to sing the praises of the USSR. "It was one of the most memorable and happy tours I've been on," he said. "The country is not dark, gray, or drab. It's very beautiful and the people are very warm. The only negative experiences I had were two or three hangovers from vodka."
Elton had clearly made an impression on the Soviet officials, who sanctioned the release of _A Single Man_ on the state record label Melodiya, though they retitled it _Poet_ and omitted—for their lyrical content, not their inferior musical qualities—both "Part-Time Love" and "Big Dipper."
The _Daily Mail_ had even given him a dynamic new nickname: Super Czar.
In the years to come, however, Elton would look back on photographs of himself taken in Russia and feel he looked prematurely aged and unhappy in them: "I look twenty or thirty years older. You can see how sad I am. I see those pictures and think, _How could I have looked at myself and not seen that there was something desperately wrong?_ "
—
IF ELTON REMAINED emotionally unstable, it was a condition reflected in his next, unarguably poor album.
The singer had first met its producer, Pete Bellotte, in his days at the Top Ten club in Hamburg back in 1966, when he'd been with Bluesology and Bellotte had appeared on the same bill as the rhythm guitarist in Linda Laine and the Sinners. Since '72, Bellotte had been living in Munich and had forged a partnership with the electronic music pioneer Giorgio Moroder that had resulted in a string of innovative disco records including Donna Summer's 1977 hit "I Feel Love." When Elton had picked up a copy of that record, he'd been surprised to find the name of his old friend among the credits.
The two reconnected when Bellotte came backstage after one of the Drury Lane shows to say hello. It was there he first suggested that the pair of them might work together on a disco album. Elton was open to the idea, but only if Bellotte wrote and produced the music and he could appear only as a singer.
That summer of '79, Elton was in the south of France when he received a call from Bellotte saying the backing tracks were ready. Elton flew to Germany and in an eight-hour session at Musicland Studios recorded his vocal contributions to what would become the much-maligned _Victim of Love,_ rush-released only two months later in October.
Conceived as a nonstop dance party marrying disco with rock, the thirty-five-minute album was the shortest of Elton's career. It began with an eight-minute version of Chuck Berry's "Johnny B. Goode," which tried to update the rock'n'roll classic for the age of the underlit dance floor. Later in the unrelenting running order, the unconvincing rebel groove of "Born Bad" and the unfortunately titled "Thunder in the Night" further dragged down the proceedings.
Most of Elton's contemporaries had made their disco-influenced records a year before—Rod Stewart ("Da Ya Think I'm Sexy?"), the Rolling Stones ("Miss You")—and even then perhaps a little too late. As a result, _Victim of Love_ sounded like the death rattle of an expiring musical fad. The album made the Top Twenty in Australia and Norway but tanked everywhere else.
"It didn't do my career a lot of good," Elton admitted. "I don't regret doing it whatsoever. I wanted to make a record that people could dance to without taking the needle off. I can understand why it wasn't successful. I enjoyed it, [but] it was self-indulgent. I'm not ashamed of it. I'm not going to hide that record in the cupboard."
The critics were brutal. " _Victim of Love_ hasn't a breath of life," declared Stephen Holden in _Rolling Stone_. "There's no getting around it," Lester Bangs stated in _The Village Voice,_ "Elton's got problems."
—
ON THE UPSIDE, earlier in the year, in June, almost two years after they were recorded, three tracks from the sessions with Thom Bell had been released as an EP. The extracted single "Mama Can't Buy You Love" had produced Elton's biggest American hit in three years, repositioning him as a Top Ten artist when it reached number nine.
Off the back of this success, Elton returned with Ray Cooper to the States for his first major tour there since 1976, naming it Back in the USSA. As he had done in Europe and Russia, he faced the American audiences without his trademark spectacles. "The big glasses and the weird clothes were a way of hiding my shyness," he explained. "Since I started wearing contact lenses I've had to overcome that. I was forced to look people in the face, and you have to find confidence from somewhere to do that."
The tour started well, with two dates in Tempe, Arizona, and three in Berkeley successfully completed. But then, onstage at the Universal Amphitheatre in Los Angeles, Elton fainted. A stomach bug had been making the rounds of the crew and it had finally caught up with the singer. He left the stage for ten minutes, was advised to force himself to throw up, and then returned to complete the three-hour show.
In New York, four weeks later, he played eight shows at the Palladium. The Elton-and-Ray double act was a well-oiled routine by this point. But not everyone was won over. Writing in _The New York Times,_ Robert Palmer denounced the show as having "a supper club ambience...he's just about ready for Las Vegas."
And still the tour rolled ever onward, finishing with two weeks of concerts in Australia, which took him almost to the end of the year. The final gig tally of 1979 was impressive, if perhaps ill advised: Between February and December Elton had played a health-defying 125 shows.
—
THOSE CLOSEST TO him could see that he was propping himself up with booze and drugs. Only one person was brave enough to confront him about it.
On Boxing Day 1979, five days before the seventies ended, Elton turned up at Watford Football Club's Vicarage Road stadium to watch the team play Luton Town. He had a self-imposed rule that he would never take cocaine during the matches.
"But I could still do half a bottle of scotch," he laughs.
Elton had been up all night and looked terrible. After arriving at the stadium, he had a quick shave. Then the club manager he'd appointed, Graham Taylor, gestured to the singer to follow him into his office.
"I want to see you, Elton," said Taylor, holding a bottle of brandy.
"Here you are, fucking drink this," the manager angrily implored him. "It's what you want, isn't it? For fuck's sake, what's wrong with you? Look at the state of you. Get yourself together."
Elton was terrified by the outburst. Taylor then turned calm and conciliatory.
"He sat me down and said, 'Listen, you seem to want this more than anything else in life,' " remembers Elton. "I could take it from him 'cause I respected him. And he was only saying it out of friendship."
It was the eve of a new decade. As far as Elton's self-destructive lifestyle was concerned, there were two distinctly different ways it could go.
The whisky-drinking Donald Duck onstage in Central Park.
THE SONG SOUNDED LIKE A VIVID CONFESSION. In it, the singer depicts himself facedown in a cocaine-smeared mirror, where he's remained for twenty-four hours. He's virtually stupefied, but nevertheless he pledges his devotion to the drug, even though he knows this is a habit that has gone way too far. He snorts another line, which sends a chemical jolt to his run-down brain.
Titled "White Lady White Powder," it was the first track on side two of _21 at 33,_ Elton's first new album of the 1980s, and one of the songs that marked the surprise lyrical return of Bernie Taupin. These were words of explicit divulgence written from the memory of Bernie, now clean, sung by someone who was not. Elton had arrived at the fork in the road where he could have decided to follow a route to a healthier lifestyle. Instead he chose to venture further down the path of heavy indulgence.
Theirs was a tentative reunion, and "White Lady White Powder" was one of three new collaborations that made the album, selected from the ten songs Elton and Bernie had completed in the summer of '79 in Grasse on the French Riviera. "The first song that we wrote when we got back together was a thing called 'Two Rooms at the End of the World,' " remembers Bernie.
"That was quite a good song," he adds, "but...most of [the new material] was shit."
As if awkwardly trying to revive a failed marriage, Elton insisted that the pair's working relationship remain open. He still demanded the right to see other songwriting partners. "It was necessary for us to go off and write with other people," he insists.
In the three years since _Blue Moves,_ Bernie had continued to seem adrift in terms of a career plan. After his 1978 album with Alice Cooper, he'd collaborated on a coffee table book with the photographer Gary Bernstein, _Burning Cold,_ contributing romantic stanzas that probably few actually read, since they were accompanied by _Playboy_ -style shots of a model named Kay Sutton York. More recently, Bernie had readied a second solo album for release in 1980. This time around, rather than reading out his verses over music as he had done on his eponymous 1971 debut, he stepped up to the microphone to respectably sing on the apparently self-aggrandizingly named _He Who Rides the Tiger_.
Posing on the moody black-and-white cover with bow tie undone, leaning on a pool table like an upmarket shark, Bernie came across as far less cocksure in its Eaglesish songs of life in the California fast lane. One in particular, "Approaching Armageddon," was baldly autobiographical and found him casting his mind back over the last ten years and realizing how his life had irrevocably changed. From here he surveyed a landscape littered with rock casualty friends who'd succumbed to whisky and heroin, and he broached the subject of his failed marriage and its temporarily destructive effect on his writing. "Ten years on I'm wiser," he concluded, "but I'm still a farmer's son."
There had been no bust-up between himself and Elton and so there was no need for them to apologize to each other. Instead, when Bernie arrived in the south of France (along with his new wife, the fashion model Toni Russo), he and his estranged songwriting partner slipped back into their old familiar routine, even if both had been dented by their experiences in different ways.
The record _21 at 33_ came out in May 1980, its name derived from a creative accounting that took in all of Elton's albums—including live LPs, compilations, and the belated 1975 U.S. release of _Empty Sky_ —to tally it up as his twenty-first record, released in his thirty-third year. Recorded at Super Bear Studios near Nice, the album had to be completed in Los Angeles owing to technical problems; there, an increasingly unreliable Elton often didn't turn up, leaving Clive Franks to shoulder much of the burden of their co-production credits. Still, there was a mood of nostalgia and reconciliation in the air, and even Dee Murray and Nigel Olsson made respective cameos on bass and drums.
Although not exactly a return to Elton's peak form of the early seventies, it was a step in the right direction. The opener, "Chasing the Crown," burst out of the speakers with a Roxy Music–ish art rock swagger and a Taupin lyric voicing the imagined thoughts of an omnipresent character who has swept through various moments in history, from the building of the Great Wall of China to the Boston Tea Party, in his quest for dominance. It was something of a misleading start, and apart from the deceptively upbeat "White Lady White Powder," much of _21 at 33_ relied on ballads ranging from the wishy-washy ("Little Jeannie" and "Dear God," both co-written with Gary Osborne) to the quietly graceful ("Sartorial Eloquence," written with the British singer and gay rights activist Tom Robinson).
By this point, though, the critics seemed to be deaf to even Elton's better efforts. "Ever since 1975...Elton has sounded confused, bitter, exhausted," Ken Tucker wrote in _Rolling Stone_. "We're now into the fifth year of the Elton John crisis, and frankly some of us here on the Elton watch are getting worried."
In the UK, the singer really put his back into the promotion of _21 at 33,_ throwing open the doors of his Woodside mansion to the media. At the same time, a gossip column report in the _Daily Express_ noted with some glee that his mum, Sheila, had recently moved away to Brighton on the English south coast and wouldn't now be on call to attend to his domestic chores. "Who will water Elton's newly transplanted thatch?" the writer bitchily mock-agonized.
In July, Elton invited BBC Radio One's Andy Peebles into his home. The broadcaster was given a guided tour by its proud owner, who showed off his now gargantuan record collection, which Elton, or an echo of the teenage Reg, explained that he still cleaned and cataloged himself. On a trip to the bathroom, Peebles was surprised to find a Rembrandt etching hanging on its wall.
Elton said he never worried about intruders at the mansion, joking that "the constant Dorothy Squires records blasting out of the loudspeakers tend to drive people away." Later, he revealed that he'd begun rehearsing for an upcoming American tour with Nigel Olsson and Dee Murray. Ten years on, he had come full circle. His new band was basically his old band.
His mood of reflection lingered when he was interviewed for a BBC TV series titled _Best of British_. With characteristic honesty, he spoke about how he now viewed his 1970s. "Great from a career point of view," he said, "but from a personal point of view...terrible. A lot of the time, I was a complete mess. Around 1975, especially, I started acting like a real spoiled brat. There are parts of it I can't even remember."
—
RETURNING TO THE Troubadour a decade on perhaps seemed a touch overly sentimental, and so, on August 25, 1980, the tenth anniversary to the day of that life-changing event, Elton marked the occasion by playing a set at the Palomino, the country music honky-tonk venue in North Hollywood. In remembrance of a more distant era, he even performed Jim Reeves's "He'll Have to Go," a staple of the sixteen-year-old Reggie's pub piano sets back at the Northwood Hills.
To Elton, this show rang with more resonance than the Hollywood-star-packed fifth-anniversary gig at the Troubadour in his troubled year of 1975. "I guess that's because I always figured that I'd be around for five years," he told Robert Hilburn afterward. "But there were times when I wondered if I'd make it through for ten."
The set list for the upcoming U.S. tour similarly harked back to his heyday, with inclusions of "Tiny Dancer" from _Madman Across the Water,_ "All the Girls Love Alice" from _Goodbye Yellow Brick Road,_ and the still affecting "Someone Saved My Life Tonight" from _Captain Fantastic and the Brown Dirt Cowboy_. If the shows were something of a victory lap, then it was one he had earned, although the tickets didn't always sell out and some radio station programmers seemed to have forgotten his name. Up there under the lights, the flashiness of old had been reinstated in his stagewear: a Spanish toreador jacket here, a hotel doorman suit festooned with piano keys there.
Even if he didn't have the box-office pull he had once enjoyed, there was one surefire way to attract attention: stage a free concert on a massive scale. In a deal organized with the New York City authorities and with sponsorship from Calvin Klein, it was announced that Elton was to play the Great Lawn in Central Park on Saturday, September 13.
As promotional stunts went, it was peerless, and moreover, it was to benefit the city. Revenue raised from merchandise sold that day—$75,000, as it turned out—was to go to the restoration of key areas of the park, including those that would be trampled by his fans. Forty-eight hours before the show, people were already camping out on the Great Lawn and roping off choice areas for themselves, even as the stage was being constructed.
On the afternoon of the momentous gig, the sun had burned away the early autumnal haze of morning cloud, and viewed from the stage, close to half a million people were packed together as far as the eye could see. Backstage, Elton had been knocking back whisky, and as a result, when he arrived onstage come three o'clock, he was in feisty form, if looking chunkier than he had a year before. The original three-piece band comprising himself, Dee Murray, and Nigel Olsson was augmented by James Newton Howard on keyboards, and guitarists Tim Renwick and Richie Zito, both new faces found during the sessions for _A Single Man_ and _21 at 33_.
As the foreboding chords of "Funeral for a Friend" echoed off the skyline surrounding the park, giving way to "Love Lies Bleeding," Elton rocked back and forth at the piano in a sporty lime-green, azure, and black outfit adorned with yellow circular mirrors and completed by a white Stetson, making him look like a cowboy out for a glamorous jog.
To gain a better vantage point, people began to climb the trees that edged the immense area while the singer rolled out the hits: "Goodbye Yellow Brick Road," "Rocket Man," "Philadelphia Freedom," "Sorry Seems to Be the Hardest Word."
At the end of the first set, he reemerged in his piano-patterned suit and peaked cap. Facing the greatest number of people he had ever performed before, Elton was now determined to stun. He attacked "Saturday Night's Alright for Fighting" with throat-shredding passion, and partway through, he booted away the piano stool. It was the move that had served him well since the first time Reggie had stunned the church hall audience with the Corvettes back in Pinner. Over the years it had enlivened and sometimes saved his shows. Today, it provoked mad cheers from the massive crowd. Girls jumped onto the shoulders of their boyfriends and waved Union Jack flags and howled in the direction of the stage.
Hitting the opening chord stab of "Bennie and the Jets," Elton stood up from the piano and turned to the audience to further stir them up. But as the song kicked in, he scanned his eyes around the stage looking angry about some onstage sound problem and started maniacally barking the song's verses and screeching the choruses in his demented Monty Python voice. In his extreme effort to entertain a crowd this vast, he was in danger of overdoing it. When he moved into the piano solo at the end, it was too intense, too long, and rescued only when the band launched back in to close the nearly ten-minute extended rendition.
"We're gonna do a song written by a friend of mine who I haven't seen for a long time," he said, introducing his cover of "Imagine." Elton looked across the park in the direction of the Dakota, adding, "He only lives just over the road. He hasn't made a record for ages but he's doing one at the moment." Grinning and riffing on Lennon's lyric, he sang, "You might say I'm a screamer." Each time he hit the line "Imagine all the people," he turned again to look incredulously back at the audience, an unimaginable number of people to be here watching a man playing a piano.
The scene astounded Pat Pipolo, the MCA promotion man who'd been with Elton since the beginning of his American campaign ten years before. "All you saw was a sea of faces," he remembered. "I just said to myself, _My God, from the Troubadour to this._ Unreal. Just unreal."
When Elton left the stage ahead of the final act, there was a lengthy pause. The ultimate costume change seemed to be taking forever. Behind the stage, aided by his PA, Bob Halley, the singer was struggling into the costume especially designed by Bob Mackie for the Central Park show. But in the heat of the moment, Elton couldn't remember how to juggle his limbs into it: He was trying to force his legs into armholes and arms into leg holes.
At last he returned to the stage, to gasps and wild applause, dressed as Donald Duck. In his blue bib and bird-beaked cap, fat white tail and yellow legs with enormous webbed feet, he could barely walk. He even had trouble sitting down at the piano. He had been drinking all day, and the whisky seemed to take effect as he began to sing "Your Song," dissolving into fits of giggles and punctuating key lines with an enthusiastic "Quack!"
No one, not least the singer himself, could ever have predicted that ten years after its release, the sullen-looking man on the cover of the _Elton John_ album would have been performing its signature song for close to half a million people in Central Park while dressed as a cartoon duck.
—
WHEN IT WAS all over, an after-show bash was held aboard the SS _Peking,_ a ship docked at the South Street Seaport on the East River. Elton turned up in a striped blazer and straw boater accessorized with a badge that read BITCH. He hung out with Calvin Klein, Andy Warhol, and, for the first time in years, John Lennon and Yoko Ono. He was understandably exhausted after the show, however, and didn't stay at the party long.
Wiped out after the Central Park gig, arriving at the postshow party on the SS _Peking_.
—
HIS PROFILE DULY raised, in the days and weeks that followed, Elton seemed to accept every media invitation, especially those that put him back on American TV screens. Four nights after Central Park, he was interviewed by Tom Snyder on NBC's _The Tomorrow Show,_ appearing in a colorful variation of the doorman outfit patterned with arrows that he reckoned made him look like "a freaked-out parking attendant."
Had it sunk in yet, Snyder wondered, how many people had turned out to watch Elton's show in the park? "There was a party afterwards," the singer explained. "I went there for about three seconds and blinked and then it began to hit home. So many people want to come and say 'Fantastic!' and everything. I just went back to the hotel and stayed on my own and, y'know, got out a dirty book." He erupted with mischievous laughter.
"If you were to start off today to be a rock star," Snyder went on, "could you do again what you've done?"
"A lot of what has happened to me," Elton mused, "was being in the right place at the right time. Just sheer fortune. I got pressured into coming to America the first time where it all happened. I've been lucky. I've been through a lot of things in ten years. I wouldn't change anything. Even the terrible times and the depressing times."
A month later, during a ramshackle and amateurish episode of Phil Donahue's talk show, filmed in Chicago and syndicated nationally, Elton remained in a ruminative frame of mind. "I look back on the last ten years," he said, "and I think, Cor, you're thirty-three, and I've just got the enthusiasm to start touring again and to make records. And I think, after that ten years...and a hell of a lot has happened...thank God I've got some enthusiasm left. If it all ended tomorrow, I would be quite happy selling records in a record store. As long as I was involved in music."
In Los Angeles, on _The Tonight Show_ with Johnny Carson, Elton admitted that after Russia's invasion of Afghanistan in December 1979, there was no way he would consider returning to play in the Soviet Union. "I was very disappointed when they invaded Afghanistan," he deadpanned, "because I was growing some of my best pot there."
On November 6, he returned to the Forum in L.A. for four consecutive sold-out dates. Backstage one night, Elton was introduced to Sting, lead singer of the Police, the latest British act to break in America. In a photo opportunity, the two posed together, the older star symbolically endorsing this younger challenger. Looking back, Elton would wince when realizing how ludicrous he must have looked at that moment, dressed as he was in a Minnie Mouse costume. "Minnie Mouse was pretty bad," he cringes. "Sting is looking at me as if to say, 'Fucking hell, what's going on here?' "
It was the point when Elton realized that maybe, just maybe, his stagewear had reached the point of ridiculousness. "For me, up until 1976, it was just completely done on instinct," he says. "Then it became a question of 'What do I wear?' I think everything you do at the beginning of a career on impulse is exciting. Then you have those five years at the top, which are done on adrenaline and instinct and you find your place, and then it's not adrenaline and instinct anymore. You lose that innocence, you lose that naïveté, you lose that edge. Then you make mistakes. Those five years, I never thought about anything. And then when you start thinking, that's when things start to go wrong."
—
HE WAS IN Australia, on the final leg of the 1980 tour, flying from Brisbane to Melbourne. His private plane touched down and John Reid boarded it to tell Elton the shocking news: John Lennon had been murdered in New York.
"I just didn't believe it," he said afterward. "It didn't sink in."
Numb and unsure what to do, he arranged for a special service to be held at St. Patrick's Cathedral in Melbourne. There he sang Psalm 23 ("The Lord is my shepherd / I shall not want...").
He knew that Yoko would be deluged with cards and messages of condolence, so instead, he simply sent her and his godson Sean an enormous chocolate cake with a message: "Love from Elton."
Onstage, on December 11 at Melbourne's aptly named Memorial Drive, he introduced his version of "Imagine" with the words "This week the worst thing in the world happened. This is a song written by an incredible man." After performing the peace anthem, he was overwhelmed with sadness and had to briefly leave the stage.
Bernie was in Los Angeles when he heard about Lennon's death. He couldn't watch the news, couldn't read a paper. Instead, he picked up his pen and wrote a lyric titled "Empty Garden (Hey, Hey Johnny)," imagining the New York venue where Lennon had played his last show. Later, Elton set it to music, though, finding the song too emotional to perform onstage, he did so only rarely—most memorably two years later, with Yoko and Sean in attendance, on the stage at Madison Square Garden, where he felt the loss of Lennon even more.
—
THERE WAS NO Elton John tour in 1981, although he did stage one private concert as a special request. That year, Prince Andrew was set to turn twenty-one, and Elton was asked to perform at the party, to be held at Windsor Castle. He chose to appear with Ray Cooper in the stripped-down two-man routine. Sound-checking inside the castle's ballroom, Elton surveyed the rows of empty golden chairs awaiting the extended membership of the royal family. He suddenly thought, _Oh, Christ_. He'd never been more nervous about the prospect of any concert in all his years of performing.
Come showtime, Elton was now petrified, arriving onstage to be faced by the intimidating sight of the royals gazing up at him (including Prince Charles and his fiancée, Lady Diana Spencer), along with their four hundred guests. The show, however, went without a hitch. As with that starmaking debut show at the Troubadour, when Elton was under pressure, he often played his best. At the back of the hall, soundman Clive Franks, wearing a black dinner suit rented for the occasion from the High Street tailor Moss Bros., expertly oversaw the proceedings. Partway through the show, Princess Margaret crept up beside Franks to sneak a cigarette.
At the end of the set, Elton and Ray took their bows and a twenties-style jazz band replaced them on the platform. Out on the floor, Diana Spencer asked Elton if he'd like to dance with her. Together they giggled and shuffled and kicked their way through improvised steps vaguely resembling the Charleston. After a buffet dinner in an adjoining room, Princess Anne led the singer back into the ballroom, where a DJ was now spinning records. It was, thought Elton, the quietest disco on earth.
He was dancing with Princess Anne when he heard a cut-glass-accented voice in his ear inquire, "Can we join you?" Elton turned to face the Queen, and just at that moment, the DJ cut into Bill Haley's "Rock Around the Clock." And so it was that Reg found himself bopping with Her Majesty Queen Elizabeth II—dressed in peach, her handbag dangling from her arm, her diamond tiara glittering under the ballroom lights—to one of the records he'd mouthed along to back in his bedroom as a shy, chubby youngster.
—
NOW, IT SEEMED, he'd seen and done it all.
Elton had been extraordinarily famous for almost eleven years. It had left him a touch unhinged, and an uncertain future stretched out before him.
He was still indulging in dangerous habits, even though he was fully aware of the damage they might cause him.
He still didn't know quite who he really was or quite where he was going.
He'd work it all out, eventually.
# EPILOGUE
"I REALLY DIDN'T SORT OUT my personal persona until I got sober, to be honest with you, till I was forty-three."
In the dining room of his Holland Park townhouse, Elton is talking about his struggles with his identity and his difficult journey through the 1980s, which resulted, in July 1990, in his undergoing rehab at the Parkside Lutheran Hospital in Chicago.
"My life was just totally built about music," he says, "and the amount of work that I did was astonishing. I'm not complaining 'cause I loved every minute of it. But my whole life really was about work and it wasn't about...y'know..."
He hesitates, trying to find the right words.
"I was still stuck," he continues. "I had this huge, successful career and then this very unsure private life. I was very immature. There were a lot of complications.
"One thing I'm grateful for [about] the drugs was that in the end they didn't kill me. The only reason is 'cause I still worked while I was doing them. I didn't sit at home. I still made albums, I still went on tour. The work probably suffered for it. It absolutely _did_ suffer for it. I'll hold my hand up and say it. But at least I worked."
Elton has joked in the past that no one told him that "the 'me' decade"—the phrase coined by writer Tom Wolfe for the seventies—actually ended in 1979. Instead, he barreled through the subsequent ten years ruled by his cocaine and alcohol addictions while fulfilling a characteristically unrelenting recording and touring schedule.
Through his burgeoning desire to start a family and his growing belief that he could live a "straight" life, the singer surprised the world by becoming engaged to, and in 1984 marrying, the German recording engineer Renate Blauel. Their union would last four years. By 1986, though, Elton's now acute coke and booze habits had begun to result in messy performances both onstage and, with that year's _Leather Jackets_ album, in the studio.
"I was so coked out," he admits. "I made that album in Holland. I was just on coke all the time. Some good songs, but I'm ashamed of where I was on that album."
During his live shows of the period, Elton became increasingly agitated about both his playing and the reactions of the crowds. "That's the worst thing about performing when you've done a line," he points out. "You're so paranoid that you're too fast, too slow, [wondering] whether the audience likes it. It's like, _Fucking hell, awww_.
"Even though I was a fucking nightmare in the eighties," he adds, "I still loved my music enough to haul my arse out of bed and go and play it. Sometimes not very well, unfortunately. But how would I know? 'Cause I can't remember half of it."
What, for him, constituted a "line" of cocaine?
"A line for me was an ounce...half an ounce. I couldn't do [just] a line. I know what I'm like...I have to have everything _now._ So with drugs I'd have to have half an ounce. Two or three days up at a time. I think four days in Australia once."
At points, Elton was in serious danger of losing the plot. In one highly memorable incident, after having stayed up for days on end at the Inn on the Park hotel in London, in a deluded state, he phoned his office to ask if someone could "do something" about how windy it was outside.
Year by year, he gradually began to isolate himself in his drug taking, to the point where he ended up taking cocaine, sometimes as alarmingly frequently as every four minutes, when he was entirely alone. "Towards the end I was in this house...and I was upstairs just doing it on my own in my bedroom. It's like, _Urgh._ There would be six months when I was perfectly clean, but there were times when I wasn't. So it was up and down. It was very off-kilter one moment, back on track the next. It was all over the place."
Considering the extent of his drug use, it's incredible, really, that Elton came out the other end without having done himself permanent physical damage.
"Yeah, it's only a little nose," he laughs. "I don't know how it's still fucking there."
—
THERE WERE OTHER low points. In 1987, _The Sun_ newspaper in the UK seemed to pursue a vendetta against Elton, printing entirely unfounded front-page stories about him. One claimed that he had paid underage rent boys for sex. Another, bizarrely, stated that he'd had the voice boxes of the guard dogs at his home surgically removed because their barking at night was disturbing his sleep. He successfully sued the paper for £1 million in damages.
The following year, in a mood to purge, Elton decided to get rid of the mountains of possessions cluttering his Woodside mansion. Almost two thousand personal items were put up for auction at Sotheby's in London, raising more than $8 million: four entire catalogs' worth of jewelry, glasses, furniture, and ornaments, plus virtually all of his stage costumes. The owners of the Hard Rock Cafe in Los Angeles bought the light-up ELTON specs he'd worn onstage at the Hollywood Bowl for almost $17,000. A rep for the Dr. Martens footwear company purchased his outsized _Tommy_ boots for close to $30,000.
Through it all, his presidency of Watford FC helped him maintain some kind of equilibrium. Under his stewardship, the team rose swiftly through the leagues. "It was an incredible, stabilizing thing," he stresses. "Going from the Fourth Division to the First, it was magical. Being around people who cared about me and who were very honest, y'know. 'I don't like your new record.' 'Why are you wearing that?' I didn't find it offensive. I just found them down to earth. Without that, fuck knows what would've happened. I would've gone completely off the rails."
Moreover, his association with the Watford club underlined another truth about Elton—at heart he isn't a loner, and he thrives when he's part of a team. "My whole career's been involved in camaraderie," he says. "When you _are_ successful, the actual feeling of it and the sharing of it, it's just so incredible. I don't think I'd have enjoyed myself so much without having the relationship with Bernie. Y'know, if I'd have been writing songs on my own, it would not have been the same. The fact that we are a team has enhanced it."
—
AFTER _21 at 33_ in 1980, his and Bernie's writing partnership was fully recemented on 1983's _Too Low for Zero,_ with its hits "I'm Still Standing" and "I Guess That's Why They Call It the Blues." Nevertheless, their output continued to be wildly inconsistent in terms of its quality control right up until the turn of the millennium. They are divided when it comes to which album of the post-seventies, pre-2000 era they consider to be their worst. Elton thinks it's undoubtedly 1986's hopelessly weak and ultimately forgettable _Leather Jackets_. "Gus Dudgeon did his best," he says, grinning, "but you can't work with a loony."
" _Leather Jackets_ was just awful," Bernie agrees, while rating 1997's _The Big Picture_ slightly below it. "I think that is probably the worst album we ever made. Some of the songs on there are actually not bad songs. It's just that the production is abysmally cold and technical.
"I have no regrets, because we're in good company," the lyricist decides, breezily. "We made some horribly crappy records, but then so did a lot of our contemporaries. The Stones, McCartney, even John [Lennon] made some toilet flushers. That's nothing to be ashamed of. If something interests you, try it. If you fall flat on your ass, then you admit it. And we've done that countless times."
In the summer of 2000, Elton and Bernie sat on the balcony of the singer's mountainside house on Mont Boron in Nice and began reminiscing about the thirty-three years that had passed since they met in 1967. Together, they came to a grim realization when it came to the fruits of their collaboration since reestablishing their working relationship with _21 at 33_.
"We said, 'It's not been fucking good enough,' " Elton remembers. "We made a vow with each other on that balcony. Let's start making albums we can be proud of again. I made albums when I didn't want to make albums, 'cause the record company wanted me to. And I thought, _I can't do that anymore_. I said, 'Let's just start again.' "
The duo's renaissance began with _Songs from the West Coast_ in 2001, a return to their warmly produced 1970s sound. Among its standout songs were "I Want Love," "This Train Don't Stop There Anymore," and the affecting "American Triangle," written about the beating, torture, and murder of the twenty-one-year-old gay student Matthew Shepard in Wyoming in 1998.
This sense of revitalized creativity continued through _Peachtree Road_ in 2004, the sparse and reflective _The Diving Board_ nine years later, and the upbeat and rockier _Wonderful Crazy Night_ in 2016. Meanwhile, in 2006, the pair produced a sequel to _Captain Fantastic and the Brown Dirt Cowboy_. But whereas the lyrics on the original album had tackled his and Bernie's years of pre-fame struggle, its successor, titled _The Captain & the Kid,_ was concerned with the dizzying effects of their startling success in the seventies.
One song in particular, "Tinderbox," dealt with how Elton and Bernie's relationship had become combustible by 1976. "It's very, very dangerous to live in each other's pockets," says Bernie, "because eventually you rub up together too much and fireworks start. There was a point where everywhere you went, it was Elton John, Elton John...You couldn't sneeze without hearing one of our songs. Elton was playing stadiums, our records were coming in at number one. Where do you go from there?
"If you become that obscenely popular," he reasons, "you hit the bridge and you either fall off it or you limp across to the other side. You're never gonna continue with that phenomenal success. Nobody does. It's impossible. But you can survive and reimagine yourself and remain artistically viable. In a way, that is a huge relief."
Looking back, Elton finds it utterly remarkable that his and Bernie's partnership _has_ managed to survive. "Even though we live so far apart," he says, "we're so in sync after all this time. There's a natural telepathy between us. We've never, ever screamed or shouted. Which is extraordinary when you look at some of the great partnerships that have fallen afoul of each other because of jealousies and egos. It's never been the case with him and me."
Of all the songs the pair have written, the singer feels that the reflections of their early days in "We All Fall in Love Sometimes" on _Captain Fantastic_ best sums up their relationship.
"I find it very difficult to listen to," says Elton, turning emotional. " 'We All Fall in Love Sometimes' is about two people whose love for each other goes beyond...I dunno...I'm kinda misting up when I say it now. We love each other and we're not close to each other, but I can't imagine my life without him being in it."
—
IN EFFECT, ELTON and Bernie have become the characters the latter dreamed up for them on _Captain Fantastic and the Brown Dirt Cowboy_. On his ranch near Santa Ynez in California, the Stetson-wearing lyricist lives the life of the modern cowhand: raising cutting horses for riding competitions, attending rodeos, and involving himself in the Professional Bull Riders organization. Unlike in the 1970s, he's very rarely recognized by fans in his day-to-day life. His is an enviable existence involving enormous wealth and relative anonymity.
"Luckily now, my face has become less recognizable, thank God," he says with a laugh. "Up to the point of the _Blue Moves_ album, my face was on every album cover very, very prominently and I was very visible at the shows. Back then, it was as if I was a performer, because if I went in record stores or clothes stores or on the street, people actually recognized me. Now, that's dissipated.
"My name obviously still gets recognized. Hand over your credit card and it's like an American Express commercial—you don't know my face but you know my name. And I'm really very happy for that. I could never, never, never be in the position Elton is. I'd kill myself, y'know, because I'm just so, so happy that I can just pretty much go anywhere and do anything and live my life."
As Captain Fantastic, meanwhile, Elton remains the superstar who jets around the world, performing a still incredible number of shows each year (and still heckling heavy-handed security guards from the stage when he feels they're stopping his fans from enjoying themselves: most recently in June 2016 in Leicester, England, where he lambasted the bouncers as "pricks").
In terms of his personal life post–John Reid, in the early eighties Elton was in a relationship with Gary Clarke, an Australian twelve years his junior (and the subject of the 1982 hit "Blue Eyes"), while at the other end of that decade, his boyfriend was the real estate agent Hugh Williams from Atlanta, Georgia, whose example he followed in deciding that he had to enter rehab.
Then, in the autumn of 1993, a mutual friend brought a Canadian advertising executive named David Furnish along to a dinner party Elton was throwing at his Woodside mansion, where the other guests included Richard Gere, Sylvester Stallone, and Princess Diana. Elton and Furnish were immediately attracted to each other. The next day, Elton called David and they met in London for dinner alone, becoming inseparable from that point on.
Furnish seemed to offer the stability that Elton had been craving, resulting in their civil partnership in 2005 and their wedding in 2014, the year that same-sex marriage became legal in the UK. Elton had always wanted to be a dad, which seemed to him somehow incompatible with his life as a gay man. Through surrogacy, the couple are now the fathers of two sons, Zachary and Elijah.
Elton and David's marriage has not been without its controversies—not least in recent years when Furnish took control of his husband's business affairs, causing Elton to jokingly nickname him Yoko. But Elton admits that David has given him some perspective on his past achievements, which, in his drive to push ever onward with his career, the singer is often reluctant to look back on.
"You can't stop and pat yourself on the back," he insists. "[But] David is always saying, 'You've got to do that sometimes.' "
These days, Elton is a far more centered and happy individual. The addictive side of his nature now finds satisfaction in the far less dangerous pursuit of collecting art and photographs. At the same time, his previous experiences and altruistic impulses have down the years seen him, as the self-styled Uncle Elt, throwing his arms around those who have suffered similar troubles, whether it be George Michael or Rufus Wainwright. In this way, he's proved himself to be something of a caregiver for the damaged and famous, as difficult as that has sometimes been for him.
"You get concerned about people," he explains. "When I was doing a load of drugs, George Harrison tried to help me and I just went, 'Woo, fuck off.' You don't want to know. It doesn't mean to say you don't like them as a person. But you might not wanna be near them because they know what you're doing and they're right."
Arguably, as Bernie points out, Elton's greatest achievement other than his musical legacy has been the creation of his nonprofit AIDS Foundation in 1992, which has over the years raised in excess of $200 million for care and educational programs.
"Personally, I don't think he's given enough credit for it," says Bernie. "But I don't think that's what he's in it for, so I'm not gonna make a stink about it. In my mind, he's the other Bono. One of the sad things about it is that I still think that AIDS has this stigma about it. It's like, 'Okay, your foundation makes an incredible contribution,' but there's still that little stigma—it's about AIDS, it's not about world relief. Somebody's got to do it, y'know.
"There are kids today that only see him in a certain light—that Elton John as he is today, larger than life, Sir Elton, mega lifestyle. But, I mean, he's an _extraordinary_ individual."
—
IT WAS THE singer's charitable work that led him in 1998 to be knighted by his former dancing partner Queen Elizabeth II. Elsewhere, despite the protestations of his younger self, Elton has indeed ended up doing residencies in Las Vegas, most notably the Red Piano show, in which his performance of "Someone Saved My Life Tonight" was backdropped by a David LaChapelle film that even featured a facsimile of the gas oven back at Furlong Road in which the singer halfheartedly tried to end his life. From time to time, Elton will insist that he plans to cut back on the number of live performances he gives. Now, as then, still impossibly driven, he never really does.
Since _Songs from the West Coast,_ he's given up worrying about his chart positions. He admits it was tough. "I've not been interested in singles," he says. "We'd got to a point where we were trying to do stuff because the record company were saying, 'We need a single, we need a single.' And it's hard to let go of that situation when you've been successful for years and used to having hit singles, especially in America. But there comes a point where you have to admit that you're not gonna get played on the radio in America because it's ageist. There's a whole stream of different music come along now. And you have to face up to it."
Ultimately, one thing still propels him forward: his sheer love of music. "It's never died," he says. "It burns as bright in me now as it did when I first started. I still get the same kick going into a record store. I still can go in and come out with an armful of stuff that I don't have and I still get the same excitement. I refuse to get free ones, I'll fucking buy it. And if I like it, I'll buy fifty of them and say to people, 'Listen to this.' "
Elsewhere, in terms of regrets? He's had a few.
"If I could relive it, the first five or six years of my career I would do it exactly the same," he states. "If you had the chance of doing the next ten to fifteen years again, you'd make some alterations. Absolutely. You'd make some visual alterations, you'd make some personal alterations. But, the first five or six years, I don't regret any of it."
—
LET US LEAVE him now, then: Reginald Kenneth Dwight from Pinner, insecure music addict trapped inside the body of Sir Elton Hercules John, international rock legend, recovered drug addict, gay rights champion, and now husband, father, and pillar of the establishment. At this time of his life, looking back, he sometimes thinks about how he will be remembered.
"An overview of my career is usually...glasses...homosexuality...tantrums," he concludes with a laugh, considering the sheer ridiculousness of it all. "But the music was pretty phenomenal, y'know."
And as Reg would no doubt tell you, the records are all that matter in the end.
To Brian Doyle and memories of 1976: the Rover behind the shop, _Captain Fantastic_...on a stolen tape, burns on the dashboard, fires to light
# ACKNOWLEDGMENTS
FOR THEIR GENEROUS INTERVIEW TIME and for quotes that made their way into this book, thanks to Elton John, Bernie Taupin, Alice Cooper, Kiki Dee, Rick Frio, Davey Johnstone, David Larkham, Nigel Olsson, Kenny Passarelli, Pat Pipolo, Annie Reavey, Russ Regan, "Legs" Larry Smith, and the late Gus Dudgeon. For additional transcripts, thanks to Nick De Grunwald and George Scott.
For still continuing to employ me on the leaky boat that is music journalism and allowing me to disappear (again) to write this book, thanks to Danny Eccleston, Phil Alexander, Andrew Male, Jenny Bulley, Ian Harrison, Mark Wagstaff, Matt Turner, Matt Mason, Ted Kessler, Niall Doherty, Paul Stokes, Chris Catchpole, Sam Inglis, and David Glasper. To my writer pals, let's keep on keepin' on: Sylvia Patterson (the best music journalist in the known universe), Simon Goddard (whose suggested title for this book was both hilarious and unprintable), Dorian Lynskey, John Aizlewood, Craig McLean, Pat Gilbert, Keith Cameron, Mark Blake, Andrew Perry, Alexis Petridis, Andy Fyfe, John Harris, Eamonn Forde, Louise Millar and Amy Raphael.
To all at Penguin Random House in New York who helped to make this happen, including the brilliant Susanna Porter (who instantly "got" this book), Priyanka Krishnan, Greg Kubie, Alexandra Coumbis, Emily Hartley, and Janet Wygal.
To the ever cool customer that is my agent, Kevin Pocklington, at Jenny Brown Associates, especially for reminding me that I'd had this book idea and pushing me to get cracking on it.
Big up to all my nonwriterly mates: the inspirational "Blue" Anth Brown, Mike Brown, Derek Hood (for reminding me Elton was sitting at the Wurly every time I went up to the loft), Steve Aungle, Steve Wilkins (for his Shoreditch College super-sleuthing), all the Daves—Dave Black, Dave Scott, Dave Tomlinson—Alan Shaw, Robbie and Parm Gunn-Hamilton, Aidan Rose, Syann Gilroy, Nick Walker, Nick Roberts, Jon Mills, Sean Cooney, James Hall, Paul Esposito, Douglas Anderson, Duncan Jordan, Craig Stevens ("Deadline!"), Ben Gregor, and Chris Metzler.
Thanks to my very occasional _Last of the Summer Wine_ walking partners Jon Bennett (with me every step of the way) and Craig McNeil ("Mon the Horse!"). For helping me in a more general way, even if it was just listening to me bang on, I thank Matt Everitt, Gordon Thomson, Matt Delargy, Neil Jaworski, Clare Hollister, Ian Beck, and Ross Bennett. Thanks also to Kevin Smalley, Caroline Theakstone, and Joe Medina at Getty Images and to Laura Watts at Rex.
To my family up in Scotland: Heather, Brian, Caroline, Ryan, Lauren, Jimmy, and John, and my uncle Jim Herschell for lending me his Elton LPs when I was a nipper. And to the still saintly-patient Karen for her bathtime reading of the pages of this book (and just basically _everything_ ) and to the ever gorgeous Amelia, for bringing me food and beer and making "hilarious" "jokes" about my gray hair and "big nose," and for putting up with me saying she's turning into a fourteen-year-old goth/emo...which she is, by the way. Love both of you to the moon and back.
# DISCOGRAPHY
# Albums · _1969–1980_
## EMPTY SKY
Empty Sky · Val-hala · Western Ford Gateway · Hymn 2000 · Lady What's Tomorrow · Sails · The Scaffold · Skyline Pigeon · Gulliver · Hay-Chewed · Reprise
US: MCA MCA-2130, JANUARY 1975 UK: DJM DJLPS 403, JUNE 1969
## ELTON JOHN
Your Song · I Need You To Turn To · Take Me to the Pilot · No Shoe Strings on Louise · First Episode at Hienton · Sixty Years On · Border Song · The Greatest Discovery · The Cage · The King Must Die
US: UNI 73090, AUGUST 1970 UK: DJM DJLPS 406, APRIL 1970
## TUMBLEWEED CONNECTION
Ballad of a Well-Known Gun · Come Down in Time · Country Comfort · Son of Your Father · My Father's Gun · Where To Now St. Peter? · Love Song · Amoreena · Talking Old Soldiers · Burn Down the Mission
US: UNI 73096, OCTOBER 1970 UK: DJM DJLPS 410, OCTOBER 1970
## FRIENDS—ORIGINAL SOUNDTRACK RECORDING
Friends · Honey Roll · Variation on Friends Theme (The First Kiss) · Seasons · Variation on Michelle's Song (A Day in the Country) · Can I Put You On · Michelle's Song · I Meant to Do My Work Today (A Day in the Country) · Four Moods · Seasons Reprise
US: PARAMOUNT PAS-6004, MARCH 1971 UK: PARAMOUNT SPFL 269, MARCH 1971
## 11-17-70 (17-11-70 IN UK)
Take Me to the Pilot · Honky Tonk Women · Sixty Years On · Bad Side of the Moon · Burn Down the Mission (including My Baby Left Me · Get Back)
US: UNI 93105, APRIL 1971 UK: DJM DJLPS 414, APRIL 1971
## MADMAN ACROSS THE WATER
Tiny Dancer · Levon · Razor Face · Madman Across the Water · Indian Sunset · Holiday Inn · Rotten Peaches · All the Nasties · Goodbye
US: UNI 93120, NOVEMBER 1971 UK: DJM DJLPH 420, NOVEMBER 1971
## HONKY CHÂTEAU
Honky Cat · Mellow · I Think I'm Going to Kill Myself · Susie (Dramas) · Rocket Man (I Think It's Going to Be a Long, Long Time) · Salvation · Slave · Amy · Mona Lisas and Mad Hatters · Hercules
US: UNI 93135, MAY 1972 UK: DJM DJLPH 423, MAY 1972
## DON'T SHOOT ME I'M ONLY THE PIANO PLAYER
Daniel · Teacher I Need You · Elderberry Wine · Blues for My Baby and Me · Midnight Creeper · Have Mercy on the Criminal · I'm Going to Be a Teenage Idol · Texan Love Song · Crocodile Rock · High Flying Bird
US: MCA MCA-2100, JANUARY 1973 UK: DJM DJLPH 427, JANUARY 1973
## GOODBYE YELLOW BRICK ROAD
Funeral for a Friend/Love Lies Bleeding · Candle in the Wind · Bennie and the Jets · Goodbye Yellow Brick Road · This Song Has No Title · Grey Seal · Jamaica Jerk-Off · I've Seen That Movie Too · Sweet Painted Lady · The Ballad of Danny Bailey (1909–1934) · Dirty Little Girl · All the Girls Love Alice · Your Sister Can't Twist (But She Can Rock 'n Roll) · Saturday Night's Alright for Fighting · Roy Rogers · Social Disease · Harmony
US: MCA MCA2-10003, OCTOBER 1973 UK: DJM DJLPD 1001, OCTOBER 1973
## CARIBOU
The Bitch Is Back · Pinky · Grimsby · Dixie Lily · Solar Prestige a Gammon · You're So Static · I've Seen the Saucers · Stinker · Don't Let the Sun Go Down on Me · Ticking
US: MCA MCA-2116, JUNE 1974 UK: DJM DJLPH 439, JUNE 1974
## ELTON JOHN: GREATEST HITS
Your Song · Daniel · Honky Cat · Goodbye Yellow Brick Road · Saturday Night's Alright for Fighting · Rocket Man (I Think It's Going to Be a Long, Long Time) · Bennie and the Jets · Don't Let the Sun Go Down on Me · Border Song · Crocodile Rock
US: MCA MCA-2128, NOVEMBER 1974 UK: DJM DJLPH 442, NOVEMBER 1974
## CAPTAIN FANTASTIC AND THE BROWN DIRT COWBOY
Captain Fantastic and the Brown Dirt Cowboy · Tower of Babel · Bitter Fingers · Tell Me When the Whistle Blows · Someone Saved My Life Tonight · (Gotta Get a) Meal Ticket · Better Off Dead · Writing · We All Fall in Love Sometimes · Curtains
US: MCA MCA-2142, MAY 1975 UK: DJM 22094, MAY 1975
## ROCK OF THE WESTIES
Medley (Yell Help · Wednesday Night · Ugly) · Dan Dare (Pilot of the Future) · Island Girl · Grow Some Funk of Your Own · I Feel Like a Bullet (in the Gun of Robert Ford) · Street Kids · Hard Luck Story · Feed Me · Billy Bones and the White Bird
US: MCA MCA-2163, OCTOBER 1975 UK: DJM DJLPH 464, OCTOBER 1975
## HERE AND THERE
Skyline Pigeon · Border Song · Honky Cat · Love Song · Crocodile Rock · Funeral for a Friend/Love Lies Bleeding · Rocket Man (I Think It's Going to Be a Long, Long Time) · Bennie and the Jets · Take Me to the Pilot
US: MCA MCA-2197, APRIL 1976 UK: DJM DJLPH 473, APRIL 1976
## BLUE MOVES
Your Starter For...· Tonight · One Horse Town · Chameleon · Boogie Pilgrim · Cage the Songbird · Crazy Water · Shoulder Holster · Sorry Seems to Be the Hardest Word · Out of the Blue · Between Seventeen and Twenty · The Wide-Eyed and Laughing · Someone's Final Song · Where's the Shoorah? · If There's a God in Heaven (What's He Waiting For?) · Idol · Theme from a Non-Existent TV Series · Bite Your Lip (Get Up and Dance!)
US: MCA MCA2-11004, OCTOBER 1976 UK: THE ROCKET RECORD COMPANY ROSP 1, OCTOBER 1976
## ELTON JOHN: GREATEST HITS VOLUME II
The Bitch Is Back · Lucy in the Sky with Diamonds · Sorry Seems to Be the Hardest Word · Don't Go Breaking My Heart · Someone Saved My Life Tonight · Philadelphia Freedom · Island Girl · Grow Some Funk of Your Own · Levon · Pinball Wizard
US: MCA MCA-1690, SEPTEMBER 1977 UK: DJM DJH 20520, SEPTEMBER 1977
## A SINGLE MAN
Shine On Through · Return to Paradise · I Don't Care · Big Dipper · It Ain't Gonna Be Easy · Part Time Love · Georgia · Shooting Star · Madness · Reverie · Song for Guy
US: MCA MCA-3065, OCTOBER 1978 UK: THE ROCKET RECORD COMPANY TRAIN 1, OCTOBER 1978
## VICTIM OF LOVE
Johnny B. Goode · Warm Love in a Cold World · Born Bad · Thunder in the Night · Spotlight · Street Boogie · Victim of Love
US: MCA MCA-5104, OCTOBER 1979 UK: THE ROCKET RECORD COMPANY HISPD 125, OCTOBER 1979
## 21 AT 33
Chasing the Crown · Little Jeannie · Sartorial Eloquence · Two Rooms at the End of the World · White Lady White Powder · Dear God · Never Gonna Fall in Love Again · Take Me Back · Give Me the Love
US: MCA MCA-5121, MAY 1980 UK: THE ROCKET RECORD COMPANY HISPD 126, MAY 1980
# Singles · _1968–1981_
### I've Been Loving You · Here's to the Next Time
UK: PHILIPS BF1643, MARCH 1968
### Lady Samantha · All Across the Havens
UK: PHILIPS BF1739, JANUARY 1969
### It's Me That You Need · Just Like Strange Rain
UK: DJM DJS205, MAY 1969
### Border Song · Bad Side of the Moon
US: UNI 55246, APRIL 1970 UK: DJM DJS217, MARCH 1970
### Rock and Roll Madonna · Grey Seal
UK: DJM DJS222, JUNE 1970
### Your Song · Into the Old Man's Shoes (UK), Take Me to the Pilot (US)
US: UNI 55265, OCTOBER 1970 UK: DJM DJS233, OCTOBER 1970
### Friends · Honey Roll
US: UNI 55277, MARCH 1971 UK: DJM DJS244, APRIL 1971
### Levon · Goodbye
US: UNI 55314, NOVEMBER 1971
### Tiny Dancer · Razor Face
US: UNI 55318, FEBRUARY 1972
### Rocket Man · Susie (Dramas) (US), Holiday Inn · Goodbye (UK)
US: UNI 55328, APRIL 1972 UK: DJM DJX501, APRIL 1972
### Honky Cat · Slave (US), Lady Samantha · It's Me That You Need (UK)
US: UNI 55343, JULY 1972 UK: DJM DJS269, JULY 1972
### Crocodile Rock · Elderberry Wine
US: MCA 40000, NOVEMBER 1972 UK: DJM DJS271, OCTOBER 1972
### Daniel · Skyline Pigeon
US: MCA 40046, MARCH 1973 UK: DJM DJS275, MARCH 1973
### Saturday Night's Alright for Fighting · Jack Rabbit · Whenever You're Ready (We'll Go Steady Again)
US: MCA 40105, JULY 1973 UK: DJM DJX502, JULY 1973
### Goodbye Yellow Brick Road · Screw You (retitled Young Man's Blues in US)
US: MCA 40148, OCTOBER 1973 UK: DJM DJS285, OCTOBER 1973
### Step into Christmas · Ho Ho Ho (Who'd Be a Turkey at Christmas?)
US: MCA 65018, NOVEMBER 1973 UK: DJM DJS290, NOVEMBER 1973
### Candle in the Wind · Bennie and the Jets
UK: DJM DJS297, FEBRUARY 1974
### Bennie and the Jets · Harmony
US: MCA 40198, FEBRUARY 1974
### Don't Let the Sun Go Down on Me · Sick City
US: MCA 40259, MAY 1974 UK: DJM DJS302, MAY 1974
### The Bitch Is Back · Cold Highway
US: MCA 40297, SEPTEMBER 1974 UK: DJM DJS322, SEPTEMBER 1974
### Lucy in the Sky with Diamonds · One Day at a Time
US: MCA 40344, NOVEMBER 1974 UK: DJM DJS340, NOVEMBER 1974
### Philadelphia Freedom · I Saw Her Standing There
US: MCA 40364, FEBRUARY 1975 UK: DJM DJS354, FEBRUARY 1975
### Someone Saved My Life Tonight · House of Cards
US: MCA 40421, JUNE 1975 UK: DJM DJS385, JUNE 1975
### Island Girl · Sugar on the Floor
US: MCA 40461, SEPTEMBER 1975 UK: DJM DJS610, SEPTEMBER 1975
### Grow Some Funk of Your Own · I Feel Like a Bullet (in the Gun of Robert Ford)
US: MCA 40505, JANUARY 1976 UK: DJM DJS629, JANUARY 1976
### Pinball Wizard · Harmony
UK: DJM DJS652, MARCH 1976
### Don't Go Breaking My Heart · Snow Queen
US: MCA / THE ROCKET RECORD COMPANY 40585, JUNE 1976 UK: THE ROCKET RECORD COMPANY ROKN512, JUNE 1976
### Sorry Seems to Be the Hardest Word · Shoulder Holster
US: MCA / THE ROCKET RECORD COMPANY 40645, NOVEMBER 1976 UK: THE ROCKET RECORD COMPANY ROKN517, NOVEMBER 1976
### Bite Your Lip (Get Up and Dance!) · Chicago
US: MCA / THE ROCKET RECORD COMPANY 40677, JANUARY 1977 UK: THE ROCKET RECORD COMPANY ROKN526, JANUARY 1977
### Crazy Water · Chameleon
UK: THE ROCKET RECORD COMPANY ROKN 521, FEBRUARY 1977
### Ego · Flinstone Boy
US: MCA 40892, MARCH 1978 UK: THE ROCKET RECORD COMPANY ROKN538, MARCH 1978
### Part Time Love · I Cry at Night
US: MCA 40973, NOVEMBER 1978 UK: THE ROCKET RECORD COMPANY XPRES1, OCTOBER 1978
### Song for Guy · Lovesick
US: MCA 40993, MARCH 1979 UK: THE ROCKET RECORD COMPANY XPRES5, NOVEMBER 1978
### Are You Ready for Love · Mama Can't Buy You Love · Three Way Love Affair
US: MCA 13921, APRIL 1979 UK: THE ROCKET RECORD COMPANY XPRES1312, APRIL 1979
### Mama Can't Buy You Love · Strangers (UK), Three Way Love Affair (US)
US: MCA 41042, JUNE 1979 UK: THE ROCKET RECORD COMPANY XPRES20, JUNE 1979
### Victim of Love · Strangers
US: MCA 41126, SEPTEMBER 1979 UK: THE ROCKET RECORD COMPANY XPRES21, SEPTEMBER 1979
### Johnny B. Goode · Thunder in the Night (UK), Georgia (US)
US: MCA 41159, NOVEMBER 1979 UK: THE ROCKET RECORD COMPANY XPRES24, NOVEMBER 1979
### Little Jeannie · Conquer the Sun
US: MCA 41236, MAY 1980 UK: THE ROCKET RECORD COMPANY XPRES32, MAY 1980
### Sartorial Eloquence (retitled Don't You Wanna Play This Game No More in US) · White Man Danger · Cartier
US: MCA 41293, AUGUST 1980 UK: THE ROCKET RECORD COMPANY XPRES41, AUGUST 1980
### Dear God · Tactics
UK: THE ROCKET RECORD COMPANY XPRES45, NOVEMBER 1980
### I Saw Her Standing There · Whatever Gets You thru the Night · Lucy in the Sky with Diamonds
UK: DJM DJS10965, MARCH 1981
# BIBLIOGRAPHY
Appleford, Steve. _The Rolling Stones: Rip This Joint—The Stories Behind Every Song_ (USA: Avalon Travel Publishing, 2001).
Bernardin, Claude, and Tom Stanton. _Rocket Man: Elton John from A–Z_ (Westport, CT: Praeger, 1996).
Black, Susan. _Elton John in His Own Words_ (London: Omnibus, 1993).
Bright, Spencer. _Essential Elton_ (London: Andre Deutsch, 1998).
Buckley, David. _Elton John: The Biography_ (London: Andre Deutsch, 2010).
Bugliosi, Vincent, and Curt Gentry. _Helter Skelter: The True Story of the Manson Murders_ (New York: W. W. Norton, 1974).
Cassata, Mary Anne. _The Elton John Scrapbook_ (New York: Citadel Press, 2002).
Clarke, Gary. _Elton,_ My _Elton_ (London: Smith Gryphon, 1995).
Crimp, Susan, and Patricia Burstein. _The Many Lives of Elton John_ (London: Robert Hale, 1992).
Davis, Stephen. _Hammer of the Gods: Led Zeppelin Unauthorised_ (New York: William Morrow, 1985).
———. _Old Gods Almost Dead: The 40-year Odyssey of the Rolling Stones_ (London: Aurum Press, 2002).
Ellis, Geoffrey. _I Should Have Known Better_ (London: Thorogood, 2005).
Flynn, Paul. _Dream Ticket: Elton John Across Four Decades_ (Lithonia, GA: HST Management, 2004).
Gambaccini, Paul. _Elton John and Bernie Taupin_ (London: Star Books, 1974).
Goodall, Nigel. _Elton John: A Visual Documentary_ (London and New York: Omnibus, 1993).
Guinn, Jeff. _Manson: The Life and Times of Charles Manson_ (London: Simon and Schuster, 2014).
Guralnick, Peter. _Careless Love: The Unmaking of Elvis Presley_ (London: Abacus, 1999).
Hayward, Keith. _Tin Pan Alley: The Rise of Elton John_ (London: Soundcheck, 2013).
———. _From Tin Pan Alley to the Yellow Brick Road_ (Bedford, UK: Wymer, 2015).
Heylin, Clinton. _No More Sad Refrains: The Life and Times of Sandy Denny_ (London: Omnibus, 2011).
Houghton, Mick. _I've Always Kept a Unicorn: The Biography of Sandy Denny_ (London: Faber and Faber, 2015).
Humphries, Patrick. _A Little Bit Funny: The Elton John Story_ (London: Aurum Press, 1998).
John, Elton. _Love Is the Cure_ (London: Hodder and Stoughton, 2012).
———, and Bernie Taupin. _Two Rooms: Elton John and Bernie Taupin in Their Own Words_ (London: Boxtree, 1991).
Kanfer, Stefan. _Groucho: The Life and Times of Julius Henry Marx_ (New York: Alfred A. Knopf, 2000).
Myers, Paul. _It Ain't Easy: Long John Baldry and the Birth of British Blues_ (Canada: Greystone, 2007).
Newman, Gerald, with Joe Bivona. _Elton John_ (New York: Signet Books, 1976).
Norman, Philip. _Elton_ (London: Hutchinson, 1991).
Nutter, David. _Elton: It's a Little Bit Funny_ (London and New York: Penguin, 1977).
O'Neill, Terry. _Two Days That Rocked the World_ (London: ACC Editions, 2015).
Pang, May, and Henry Edwards. _Loving John_ (London: Corgi, 1983).
Peebles, Andy. _The Elton John Tapes_ (London: BBC, 1981).
Quaye, Caleb, with Dale A. Berryhill. _A Voice Louder than Rock & Roll_ (USA: Vision Publishing, 2006).
Randall, Lucian, and Chris Welch. _Ginger Geezer: The Life of Vivian Stanshall_ (London: Fourth Estate, 2001).
Rosenthal, Elizabeth J. _His Song: The Musical Journey of Elton John_ (New York: Billboard Books, 2001).
Shaw, Greg. _Elton John: A Biography in Words and Pictures_ (New York: Sire Books, 1976).
Stein, Cathi. _Elton John_ (UK: Futura, 1975).
Stewart, Rod. _Rod: The Autobiography_ (London: Arrow Books, 2012).
St. Michael, Mick. _Elton John_ (London: Bison Group, 1994).
Tatham, Dick, and Tony Jasper. _Elton John_ (London: Octopus, 1976).
Taupin, Bernie. _A Cradle of Haloes_ (London: Aurum Press, 1988).
———. _The One Who Writes the Words for Elton John_ (London: Jonathan Cape, 1976).
Toberman, Barry. _Elton John: A Biography_ (London: Weidenfeld and Nicolson, 1988).
Tobler, John. _Elton John: 25 Years in the Charts_ (London: Hamlyn, 1995).
Unknown. _Scraps,_ insert booklet of _Captain Fantastic and the Brown Dirt Cowboy_ album (DJM Records, 1975).
Walker, Michael. _Laurel Canyon_ (New York: Faber and Faber, 2006).
# PHOTO CREDITS
1. Val Wilmer/Getty Images
2. Steve Morley/Getty Images
3. George Harris/Associated Newspapers/Rex/Shutterstock
4. Michael Ochs Archives/Getty Images
5. Ed Caraeff/Morgan Media/Getty Images
6. Ed Caraeff/Morgan Media/Getty Images
7. Michael Ochs Archives/Getty Images
8. John Olson/Getty Images
9. Michael Putland/Getty Images
10. Michael Ochs Archives/Getty Images
11. Michael Ochs Archives/Getty Images
12. Terry O'Neill/Getty Images
13. Robert Knight Archive/Getty Images
14. Terry O'Neill/Getty Images
15. Michael Ochs Archives/Getty Images
16. Michael Ochs Archives/Getty Images
17. Terry O'Neill/Getty Images
18. Bob Thomas/Getty Images
19. Terry O'Neill/Getty Images
20. Robin Platzer/Getty Images
21. Evening Standard/Getty Images
22. Richard Blanchard/Getty Images
23. Terry O'Neill/Getty Images
24. Chris Morris/Rex/Shutterstock
25. Ron Galella/Getty Images
# BY TOM DOYLE
The Glamour Chase: The Maverick Life of Billy MacKenzie
Man on the Run: Paul McCartney in the 1970s
Captain Fantastic: Elton John's Stellar Trip Through the '70s
# ABOUT THE AUTHOR
TOM DOYLE is an acclaimed music journalist, author, and long-standing contributing editor to _Q._ His work has also appeared in _Mojo,_ _The Guardian, Billboard, The Times,_ and _Sound on Sound._ Over the years he has been responsible for key magazine profiles of Paul McCartney, Elton John, Yoko Ono, Keith Richards, U2, Madonna, Kate Bush, and R.E.M., among many other artists. He is the author of _The Glamour Chase: The Maverick Life of Billy MacKenzie,_ and _Man on the Run: Paul McCartney in the 1970s._ He lives in London, England.
@Tom_Doyle_
| {
"redpajama_set_name": "RedPajamaBook"
} | 8,216 |
import sys
import trace
# create a Trace object, telling it what to ignore, and whether to
# do tracing or line-counting or both.
tracer = trace.Trace(
ignoredirs=[sys.prefix, sys.exec_prefix],
trace=1,
count=1,
outfile='.results',
timing=True)
# run the new command using the given tracer
tracer.run("""
import class_a
a = class_a.A()
for x in range(10):
a.repeat()
a.sleeping()
a.sleeping_half_second()
a.sleeping_more_one_seconds()
""")
# make a report, placing output in the current directory
r = tracer.results()
r.write_results(show_missing=True, coverdir="tracer")
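The same Trace/run/report pattern can be exercised without the external `class_a` module by tracing a small local function — a minimal, self-contained sketch (the function and its argument are just illustrative):

```python
import sys
import trace

def fib(n):
    # deliberately recursive so several lines execute more than once
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# trace=0 suppresses the per-line echo; count=1 still records execution counts
tracer = trace.Trace(ignoredirs=[sys.prefix, sys.exec_prefix], trace=0, count=1)
value = tracer.runfunc(fib, 5)

results = tracer.results()
# results.counts maps (filename, lineno) -> number of times that line ran
executed = sum(results.counts.values())
```

With `trace=1` as in the snippet above this one, each executed line is also echoed to stdout, and `write_results()` can additionally emit annotated `.cover` listings into `coverdir`.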
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,955 |
What is the furthest north one can get without flying or taking special cruises?
(How) Can I hire an instructor as a tourist driver on the Nürburgring without renting a car?
When travelling through Dagestan by car, how far can risks of crime, harassment or political/cultural trouble be managed?
Where to hide your money in high-risk areas?
Can you still do train surfing in India?
How can I find safety records for extreme sports in New Zealand?
Are there tours to the UTA Flight 772 Memorial in Niger?
How far can you realistically travel across potentially rough water in an inflatable dinghy?
Is it possible to go Robinson Crusoe in the Bahamas?
Can I spend some time in a real prison as a tourist attraction?
Are there organized multiday cave trips in Europe?
What is the longest distance by car ferry?
I just noticed that there is a ferry from mainland Spain to Tenerife. I already knew of the ferry to Iceland. My question is whether there are ferries that travel longer distances than these two.
Where do they speak Dutch?
I would like to visit Antarctica, but worry a polar bear may chase after me. How can I avoid being attacked by a polar bear?
Extreme rope swinging - is this offered commercially at the location shown in 'World's Largest Rope Swing' video?
When are the first 'regular' space tourist commercial flights actually due to start?
Huashan Cliffside Path: Accessible for tourists?
Shark diving: Where can I do it in Europe or South America?
What is the LEAST visited country by tourists?
What on earth has yet to be explored?
What's the deepest underwater tour available?
How can I travel to the North Pole, cheaply?
Are some parts of Iraq currently possible to visit for the brave, adventurous, and open-minded traveller?
Can a person fly to the Moon as a tourist?
Roughly how long does the Pamir Highway take to do?
How can I visit Antarctica?
How can I find a guide that will take me safely through the Amazon jungle? | {
"redpajama_set_name": "RedPajamaC4"
} | 1,631 |
{"url":"http:\/\/engineeringlibrary.org\/reference\/helical-springs-air-force-stress-manual","text":"# Helical Springs\n\nThis page provides the section on helical springs from the \"Stress Analysis Manual,\" Air Force Flight Dynamics Laboratory, October 1986.\n\n## Nomenclature\n\n b = width of section\n D = diameter\n fs = calculated shear stress\n G = modulus of elasticity in shear\n n = number of active spring coils\n P = axial load\n r = radius\n \u03b4 = spring deflection\n\n### 1.5.4 Helical Springs\n\nThe primary stresses in the wire of a helical spring are due to torsion. Section 1.5.4.1 treats helical springs composed of round wire, and those composed of square wire are treated in Section 1.5.4.2.\n\n#### 1.5.4.1 Helical Springs of Round Wire\n\nFigure 1-69 shows a helical spring made of round wire under an axial load, P. If the spring radius (r) is much greater than the wire diameter (D), the wire may be treated as a straight round beam under a torsional load, Pr, as indicated in Figure 1-69. Superposing the stress due to torsion of the wire on the uniform shear stress due to direct shear (4P\/\u03c0D^2), the following equation for the maximum shear stress in the spring may be obtained:\n\n$$f_{smax} = { 16 ~Pr \\over \\pi D^3 } \\left( 1 + { D \\over 4 r } \\right)$$\n(1-83)\n\nIn the cases of heavy coil springs composed of wire with a relatively large diameter, D, in comparison to r, the initial curvature of the spring must be accounted for. 
This is done in the following equation:\n\n$$f_{smax} = { 16 ~Pr \\over \\pi D^3 } \\left( { 4m - 1 \\over 4m - 4 } + { 0.615 \\over m } \\right)$$\n(1-84)\n\nwhere\n\n$$m = { 2 r \\over D }$$\n(1-85)\n\nThis equation reduces to Equation (1-83) as r\/D becomes large.\n\nThe total deflection (\u03b4) of a round spring of n free coils is given by\n\n$$\\delta = { 64 ~Pr^3 n \\over G ~D^4 }$$\n(1-86)\n\nThis equation neglects the deflection due to direct shear which is given by\n\n$$\\delta_s = { 8 ~P R n \\over G ~d^2 }$$\n(1-87)\n\nThis portion of the deformation, however, is generally negligible compared to the value of \u03b4 given by Equation (1-86) and is thus generally ignored.\n\nAll of the equations in this section apply to both compression and tension springs, and in both cases the maximum shear stress occurs at the inside of the wire.\n\n#### 1.5.4.2 Helical Springs of Square Wire\n\nFigure 1-70 shows a helical spring made of square wire under an axial load, P. The maximum shear stress in the square wire is given by\n\n$$f_{smax} = { 4.80 ~Pr \\over b^3 } \\left( { 4m - 1 \\over 4m - 4 } + { 0.615 \\over m } \\right)$$\n(1-88)\n\nwhere\n\n$$m = { 2 r \\over b }$$\n(1-89)\n\nThe total deflection of such a spring is given by\n\n$$\\delta = { 44.5 ~P r^3 n \\over G ~b^4 }$$\n(1-90)\n\nwhere n is the number of active or free coils in the spring. This equation neglects the deflection due to direct shear as did Equation (1-86). 
However, the deflection due to direct shear is normally negligible compared to that given by Equation (1-90).","date":"2021-09-19 20:50:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.746271550655365, \"perplexity\": 1368.366592119981}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780056900.32\/warc\/CC-MAIN-20210919190128-20210919220128-00566.warc.gz\"}"} | null | null |
{"url":"https:\/\/doc.cgal.org\/4.14\/Spatial_searching\/classCGAL_1_1Search__traits__2.html","text":"CGAL 4.14 - dD Spatial Searching\nCGAL::Search_traits_2< SearchGeomTraits > Class Template Reference\n\n#include <CGAL\/Search_traits_2.h>\n\n## Definition\n\nThe class Search_traits_2 can be used as a template parameter of the kd tree and the search classes.\n\nTemplate Parameters\n SearchGeomTraits must be a model of the concept SearchGeomTraits_2, for example Simple_cartesian or Simple_cartesian.\nIs Model Of:\n\nSearchTraits\n\nRangeSearchTraits\n\nSearch_traits_3<SearchGeomTraits_2>\nSearch_traits<NT,Point,CartesianConstIterator,ConstructCartesianConstIterator,Dim>\nExamples:\nSpatial_searching\/circular_query.cpp, Spatial_searching\/distance_browsing.cpp, Spatial_searching\/iso_rectangle_2_query.cpp, Spatial_searching\/nearest_neighbor_searching.cpp, Spatial_searching\/searching_sphere_orthogonally.cpp, Spatial_searching\/searching_with_circular_query.cpp, Spatial_searching\/splitter_worst_cases.cpp, Spatial_searching\/using_fair_splitting_rule.cpp, and Spatial_searching\/weighted_Minkowski_distance.cpp.\n\n## Types\n\ntypedef Dimension_tag< 2 >\u00a0Dimension\nDimension type.\n\ntypedef SearchGeomTraits::FT\u00a0FT\nNumber type.\n\ntypedef SearchGeomTraits::Point_2\u00a0Point_d\nPoint type.\n\ntypedef SearchGeomTraits::Iso_rectangle_2\u00a0Iso_box_d\nIso box type.\n\ntypedef SearchGeomTraits::Circle_2\u00a0Sphere_d\nSphere type.\n\ntypedef SearchGeomTraits::Cartesian_const_iterator_2\u00a0Cartesian_const_iterator_d\nAn iterator over the Cartesian coordinates.\n\ntypedef SearchGeomTraits::Construct_cartesian_const_iterator_2\u00a0Construct_cartesian_const_iterator_d\nA functor with two function operators, which return the begin and past the end iterator for the Cartesian coordinates. 
More...\n\ntypedef SearchGeomTraits::Construct_iso_rectangle_2\u00a0Construct_iso_box_d\nFunctor with operator to construct the iso box from two points.\n\ntypedef SearchGeomTraits::Construct_center_2\u00a0Construct_center_d\nFunctor with operator to construct the center of an object of type Sphere_d.\n\nFunctor with operator to compute the squared radius of an object of type Sphere_d.\n\ntypedef SearchGeomTraits::Construct_min_vertex_2\u00a0Construct_min_vertex_d\nFunctor with operator to construct the vertex with lexicographically smallest coordinates of an object of type Iso_box_d.\n\ntypedef SearchGeomTraits::Construct_max_vertex_2\u00a0Construct_max_vertex_d\nFunctor with operator to construct the vertex with lexicographically largest coordinates of an object of type Iso_box_d.\n\n## \u25c6\u00a0Construct_cartesian_const_iterator_d\n\ntemplate<typename SearchGeomTraits >\n typedef SearchGeomTraits::Construct_cartesian_const_iterator_2 CGAL::Search_traits_2< SearchGeomTraits >::Construct_cartesian_const_iterator_d\n\nA functor with two function operators, which return the begin and past the end iterator for the Cartesian coordinates.\n\nThe functor for begin has as argument a Point_d. 
The functor for the past the end iterator, has as argument a Point_d and an int.","date":"2022-07-07 07:50:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.48136675357818604, \"perplexity\": 6082.283229561227}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656104683708.93\/warc\/CC-MAIN-20220707063442-20220707093442-00609.warc.gz\"}"} | null | null |
package com.linkedin.pinot.integration.tests;
import com.linkedin.pinot.common.utils.FileUploadUtils;
import com.linkedin.pinot.common.utils.ZkStarter;
import com.linkedin.pinot.controller.helix.ControllerTestUtils;
import com.linkedin.pinot.tools.query.comparison.QueryComparison;
import com.linkedin.pinot.tools.query.comparison.SegmentInfoProvider;
import com.linkedin.pinot.tools.query.comparison.StarTreeQueryGenerator;
import com.linkedin.pinot.util.TestUtils;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileReader;
import java.io.IOException;
import java.sql.Timestamp;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.commons.compress.archivers.ArchiveException;
import org.apache.commons.compress.utils.IOUtils;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.ExternalView;
import org.apache.helix.model.IdealState;
import org.apache.helix.tools.ClusterStateVerifier;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
/**
* Integration test for Star Tree based indexes: - Sets up the Pinot cluster and creates two tables,
* one with default indexes, and another with star tree indexes. - Sends queries to both the tables
* and asserts that results match. - Query to reference table is sent with TOP 10000, and the
* comparator ensures that response from star tree is contained within the reference response. This
* is to avoid false failures when groups with same value are truncated due to LIMIT or TOP N.
*/
public class StarTreeClusterIntegrationTest extends ClusterTest {
private static final Logger LOGGER =
LoggerFactory.getLogger(StarTreeClusterIntegrationTest.class);
private static final int TOTAL_EXPECTED_DOCS = 115545;
private final String DEFAULT_TABLE_NAME = "myTable";
private final String STAR_TREE_TABLE_NAME = "myStarTable";
private final String TIME_COLUMN_NAME = "DaysSinceEpoch";
private final String TIME_UNIT = "daysSinceEpoch";
private final String RETENTION_TIME_UNIT = "DAYS";
private final int RETENTION_TIME = 3000;
private static final int SEGMENT_COUNT = 12;
private static final long TIMEOUT_IN_MILLISECONDS = 30 * 1000;
private static final long TIMEOUT_IN_SECONDS = 3600;
private final File _tmpDir = new File("/tmp/StarTreeClusterIntegrationTest");
private final File _segmentsDir = new File("/tmp/StarTreeClusterIntegrationTest/segmentDir");
private final File _tarredSegmentsDir = new File("/tmp/StarTreeClusterIntegrationTest/tarDir");
private StarTreeQueryGenerator _queryGenerator;
private File _queryFile;
/**
* Start the Pinot Cluster: - Zookeeper - One Controller - One Broker - Two Servers
* @throws Exception
*/
private void startCluster() throws Exception {
startZk();
startController();
startBroker();
startServers(2);
}
/**
* Add the reference and star tree tables to the cluster.
* @throws Exception
*/
private void addOfflineTables() throws Exception {
addOfflineTable(DEFAULT_TABLE_NAME, TIME_COLUMN_NAME, TIME_UNIT, RETENTION_TIME,
RETENTION_TIME_UNIT, null, null);
addOfflineTable(STAR_TREE_TABLE_NAME, TIME_COLUMN_NAME, TIME_UNIT, RETENTION_TIME,
RETENTION_TIME_UNIT, null, null);
}
/**
* Generate the reference and star tree indexes and upload to corresponding tables.
* @param avroFiles
* @param tableName
* @param starTree
* @throws IOException
* @throws ArchiveException
* @throws InterruptedException
*/
private void generateAndUploadSegments(List<File> avroFiles, String tableName, boolean starTree)
throws IOException, ArchiveException, InterruptedException {
BaseClusterIntegrationTest.ensureDirectoryExistsAndIsEmpty(_segmentsDir);
BaseClusterIntegrationTest.ensureDirectoryExistsAndIsEmpty(_tarredSegmentsDir);
ExecutorService executor = Executors.newCachedThreadPool();
BaseClusterIntegrationTest.buildSegmentsFromAvro(avroFiles, executor, 0, _segmentsDir,
_tarredSegmentsDir, tableName, starTree, null);
executor.shutdown();
executor.awaitTermination(TIMEOUT_IN_SECONDS, TimeUnit.SECONDS);
for (String segmentName : _tarredSegmentsDir.list()) {
LOGGER.info("Uploading segment {}", segmentName);
File file = new File(_tarredSegmentsDir, segmentName);
FileUploadUtils.sendSegmentFile(ControllerTestUtils.DEFAULT_CONTROLLER_HOST,
ControllerTestUtils.DEFAULT_CONTROLLER_API_PORT, segmentName, new FileInputStream(file),
file.length());
}
}
/**
* Waits for total docs to match the expected value in the given table. There may be a delay
* between segment upload and the broker routing tables being updated, so this polls until the
* count matches or the deadline passes.
* @param tableName
* @param expectedRecordCount
* @param deadline
* @throws Exception
*/
private void waitForTotalDocsToMatch(String tableName, int expectedRecordCount, long deadline)
throws Exception {
int actualRecordCount;
do {
String query = "select count(*) from " + tableName;
JSONObject response = postQuery(query);
actualRecordCount = response.getInt("totalDocs");
String msg =
"Actual record count: " + actualRecordCount + "\tExpected count: " + expectedRecordCount;
LOGGER.info(msg);
Assert.assertTrue(System.currentTimeMillis() < deadline,
"Failed to read all records within the deadline. " + msg);
Thread.sleep(2000L);
} while (expectedRecordCount != actualRecordCount);
}
/**
* Wait for External View to be in sync with Ideal State.
* @return true if the External View converged to the Ideal State within the timeout, false otherwise
*/
private boolean waitForExternalViewUpdate() {
final ZKHelixAdmin helixAdmin = new ZKHelixAdmin(ZkStarter.DEFAULT_ZK_STR);
ClusterStateVerifier.Verifier customVerifier = new ClusterStateVerifier.Verifier() {
@Override
public boolean verify() {
String clusterName = getHelixClusterName();
List<String> resourcesInCluster = helixAdmin.getResourcesInCluster(clusterName);
LOGGER.info("Waiting for external view to update for resources: {} startTime: {}",
resourcesInCluster, new Timestamp(System.currentTimeMillis()));
for (String resourceName : resourcesInCluster) {
IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
ExternalView externalView = helixAdmin.getResourceExternalView(clusterName, resourceName);
LOGGER.info("Checking resource {},\n IS:{} \n EV:{}", resourceName, idealState, externalView);
if (idealState == null || externalView == null) {
return false;
}
Set<String> partitionSet = idealState.getPartitionSet();
for (String partition : partitionSet) {
Map<String, String> instanceStateMapIS = idealState.getInstanceStateMap(partition);
Map<String, String> instanceStateMapEV = externalView.getStateMap(partition);
if (instanceStateMapIS == null || instanceStateMapEV == null) {
return false;
}
if (!instanceStateMapIS.equals(instanceStateMapEV)) {
return false;
}
}
LOGGER.info("External View updated successfully for {},\n IS:{} \n EV:{}", resourceName,
idealState, externalView);
}
LOGGER.info("External View updated successfully for {}", resourcesInCluster);
return true;
}
};
return ClusterStateVerifier.verifyByPolling(customVerifier, TIMEOUT_IN_MILLISECONDS);
}
/**
* Replace the star tree table name with reference table name, and add TOP 10000. The TOP 10000 is
* added to make the reference result a super-set of star tree result. This will ensure any groups
* with equal values that are truncated still appear in the reference result.
* @param starQuery
*/
private String convertToRefQuery(String starQuery) {
String refQuery = StringUtils.replace(starQuery, STAR_TREE_TABLE_NAME, DEFAULT_TABLE_NAME);
return (refQuery + " TOP 10000");
}
@BeforeClass
public void setUp() throws Exception {
startCluster();
addOfflineTables();
BaseClusterIntegrationTest.ensureDirectoryExistsAndIsEmpty(_tmpDir);
List<File> avroFiles = BaseClusterIntegrationTest.unpackAvroData(_tmpDir, SEGMENT_COUNT);
_queryFile = new File(TestUtils.getFileFromResourceUrl(BaseClusterIntegrationTest.class
.getClassLoader().getResource("OnTimeStarTreeQueries.txt")));
generateAndUploadSegments(avroFiles, DEFAULT_TABLE_NAME, false);
generateAndUploadSegments(avroFiles, STAR_TREE_TABLE_NAME, true);
Thread.sleep(15000);
// Ensure that External View is in sync with Ideal State.
if (!waitForExternalViewUpdate()) {
Assert.fail("Cluster did not reach stable state");
}
// Wait until all docs are available, this is required because the broker routing tables may not
// be updated yet.
waitForTotalDocsToMatch(DEFAULT_TABLE_NAME, TOTAL_EXPECTED_DOCS,
System.currentTimeMillis() + 1500000L);
waitForTotalDocsToMatch(STAR_TREE_TABLE_NAME, TOTAL_EXPECTED_DOCS,
System.currentTimeMillis() + 1500000L);
// Initialize the query generator
SegmentInfoProvider dictionaryReader =
new SegmentInfoProvider(_tarredSegmentsDir.getAbsolutePath());
List<String> dimensionColumns = dictionaryReader.getDimensionColumns();
List<String> metricColumns = dictionaryReader.getMetricColumns();
Map<String, List<String>> columnValuesMap = dictionaryReader.getColumnValuesMap();
_queryGenerator = new StarTreeQueryGenerator(STAR_TREE_TABLE_NAME, dimensionColumns,
metricColumns, columnValuesMap);
}
/**
* Given a query string for star tree: - Get the result from star tree cluster - Convert the query
* to reference query (change table name, add TOP 10000) - Get the result from reference cluster -
* Compare the results and assert that result of star tree is contained in reference result. NOTE:
* This method of testing is limited in that it cannot detect cases where a valid entry is missing
* from star tree result (to be addressed in future).
* @param starQuery
* @param expectNonZeroDocsScanned
*/
public void testOneQuery(String starQuery, boolean expectNonZeroDocsScanned) {
try {
JSONObject starResponse = postQuery(starQuery);
if (expectNonZeroDocsScanned) {
int numDocsScanned = starResponse.getInt("numDocsScanned");
String message = "Zero Docs Scanned for query: " + starQuery;
Assert.assertTrue((numDocsScanned > 0), message);
}
String refQuery = convertToRefQuery(starQuery);
JSONObject refResponse = postQuery(refQuery);
boolean result = QueryComparison.compare(starResponse, refResponse, false);
String message = "Result mis-match for Query: " + starQuery + "\nStar: "
+ starResponse.toString() + "\nRef: " + refResponse.toString();
Assert.assertTrue(result, message);
} catch (Exception e) {
LOGGER.error("Exception caught when executing query {}", starQuery, e);
}
}
@AfterClass
public void tearDown() throws Exception {
stopBroker();
stopController();
stopServer();
stopZk();
FileUtils.deleteDirectory(_tmpDir);
}
@Test(enabled = false)
public void testGeneratedQueries() {
for (int i = 0; i < 1000; ++i) {
String starQuery = _queryGenerator.nextQuery();
testOneQuery(starQuery, false);
}
}
@Test
public void testHardCodedQueries() {
BufferedReader queryReader = null;
try {
queryReader = new BufferedReader(new FileReader(_queryFile));
String starQuery;
while ((starQuery = queryReader.readLine()) != null) {
testOneQuery(starQuery, true);
}
} catch (IOException e) {
throw new RuntimeException(e);
} finally {
IOUtils.closeQuietly(queryReader);
}
}
/**
* Test that when metrics have predicates on them, we still get
* correct results, ie correctly fall back on non-StarTree based execution.
*/
@Test
public void testPredicateOnMetrics() {
String query;
// Query containing predicate on one metric only
query = "SELECT SUM(DepDelayMinutes) FROM myStarTable WHERE DepDelay > 0\n";
testOneQuery(query, false);
// Query containing predicate on multiple metrics
query = "SELECT SUM(DepDelayMinutes) FROM myStarTable WHERE DepDelay > 0 AND ArrDelay > 0\n";
testOneQuery(query, false);
// Query containing predicate on multiple metrics and dimensions
query = "SELECT SUM(DepDelayMinutes) FROM myStarTable WHERE DepDelay > 0 AND ArrDelay > 0 AND OriginStateName = 'Massachusetts'\n";
testOneQuery(query, false);
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,502 |
Q: How to make a heads-up notification on Android? I'm trying to show a notification to the user when a service that runs in the background receives an FCM parcel. The FCM parcel is properly received and parsed, however:
The notification I get on my phone only shows up as a default importance (https://notifee.app/react-native/docs/android/appearance#default) notification.
Here's my code:
// create notification
NotificationCompat.Builder builder = new NotificationCompat.Builder(this, "default")
.setSmallIcon(R.mipmap.ic_launcher_round)
.setContentTitle(title)
.setContentText(body)
.setPriority(NotificationManager.IMPORTANCE_HIGH);
NotificationManager notificationManager = getSystemService(NotificationManager.class);
notificationManager.createNotificationChannel(new NotificationChannel("default", "default", NotificationManager.IMPORTANCE_HIGH));
NotificationManagerCompat.from(this).notify(0, builder.build());
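A: Two different constants are being mixed here. `NotificationCompat.Builder.setPriority()` expects one of the `NotificationCompat.PRIORITY_*` values (which drive heads-up behaviour on devices below Android O), while `NotificationManager.IMPORTANCE_HIGH` is a channel-importance constant with a different numeric meaning. On Android O and above it is the channel's importance that decides heads-up display, and a channel's importance cannot be raised after the channel was first created — so if a "default" channel was ever registered with lower importance, you need to reinstall the app or switch to a new channel id. A sketch of the corrected setup (the channel id "default_high" is just an illustrative new id):

```java
// Register a channel whose importance allows heads-up display (Android O+).
NotificationChannel channel = new NotificationChannel(
        "default_high", "Default", NotificationManager.IMPORTANCE_HIGH);
getSystemService(NotificationManager.class).createNotificationChannel(channel);

NotificationCompat.Builder builder = new NotificationCompat.Builder(this, "default_high")
        .setSmallIcon(R.mipmap.ic_launcher_round)
        .setContentTitle(title)
        .setContentText(body)
        // NotificationCompat constant, not NotificationManager.IMPORTANCE_HIGH
        .setPriority(NotificationCompat.PRIORITY_HIGH)
        // pre-O devices also need sound/vibration for heads-up display
        .setDefaults(NotificationCompat.DEFAULT_ALL);

NotificationManagerCompat.from(this).notify(0, builder.build());
```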
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,603 |
Whatever home remodeling you may want to accomplish with High Resolution Industrial Pipe Furniture #8 Industrial Pipe Side Table, the following initial checklist can help you achieve an impressive outcome with your remodeling project. It is vital not to skip any of the steps described below when planning your redecoration task.
What part of your home do you want to renovate? Renovation can take various forms (Industrial Pipe Furniture), and you need to decide whether you want to redo your whole property or just a section of it, for example the bathroom or living room.
The next step would be to determine who is going to execute the makeover. Unfortunately, there is no one solution that fits all situations for this topic High Resolution Industrial Pipe Furniture #8 Industrial Pipe Side Table. If you feel overwhelmed by the project then employing a professional is the most effective option. In the event it is only a moderate update and you have the skill for it, then it is okay to do it yourself.
The subsequent consideration is your spending budget. Creating a budget at the beginning is critical to finishing the job on schedule with a great result. In case you are inefficient with budgeting, it is better to employ an interior designer for the project.
Should you work with a home remodeling expert High Resolution Industrial Pipe Furniture #8 Industrial Pipe Side Table, be sure you discuss things like paint, style, quality of materials and other details to obtain your expected outcome.
"redpajama_set_name": "RedPajamaC4"
} | 3,502 |
Q: mvc:resources mapping a specific jar file I am using Spring MVC, and in the configuration I am using the following line to make some Javascript files available to my application:
<mvc:resources mapping="/resources/**" location="/,classpath:/META-INF/" />
However, I am concerned that this may lead to security issues, unforeseen circumstances, etc. because every single jar on my classpath (and there are many!) that has a META-INF folder will be accessible from /resources/ to a user who is poking around.
Is there a way to only map the location to a specific jar file. In my case, it would be Spring-JS jar located at the following location (relative to the root of my Eclipse workspace):
/Jars/spring 3.0.5/spring-js-2.0.7.RELEASE.jar
Any help would be appreciated. Thanks in advance.
A: You should instead restrict the specific locations exposed. For eg, for spring-js jar you can do this:
<mvc:resources mapping="/resources/**" location="classpath:/META-INF/web-resources" />
I doubt if you can specify the exact jar though.
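For completeness, if you later move to Java-based configuration (available from Spring 3.1), the same restriction can be expressed as follows — treat this as a sketch, assuming Spring-JS ships its assets under `META-INF/web-resources` as in the XML example:

```java
@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        // Expose only the web-resources folder, not every jar's META-INF contents
        registry.addResourceHandler("/resources/**")
                .addResourceLocations("classpath:/META-INF/web-resources/");
    }
}
```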
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,099 |
Finance & economicsAug 1st 2009 edition
Spain's property market
Tricks and mortar
The central bank makes life a little easier for lenders
A soaring problem
SPANISH banks have been doing their best to shield themselves from the bursting of the country's property bubble. By buying properties before the loans on them go bad, lenders can mask their worst bets. Restructuring loans has the same effect. Help is now at hand from an unlikely source: the normally sober Bank of Spain. In July the central bank circulated guidance that relaxed provisioning rules on risky mortgages.
The change makes some sense. Until now, banks have had to make provision for the full value of high-risk loans—those above 80% of the property's value—after two years of arrears. That was far too demanding, since property values rarely fall to zero. Under the new rules, they only have to allow for the difference between the value of the loan and 70% of the property's market value.
Yet the timing is terrible, mainly because the move follows heavy lobbying from the banks. The Bank of Spain maintains that the net effect on the system will be neutral since it is also tightening rules for bad consumer loans. But the impression is that Spain's central bank—one of the few to emerge from the crisis in credit—has moved the goalposts to help banks deal with the onslaught of bad property loans.
For Spain's two largest banks, Banco Santander and BBVA, which have diversified abroad and reported decent second-quarter results this week, the new guidance will probably have only a marginal effect. But it will be a boon to smaller lenders with greater exposure to risky loans. Iñigo Vega, an analyst at Iberian Equities, estimates that the new rules would relieve banks of the need to make provisions of about €22 billion ($31 billion) in coming months (assuming non-performing loans reach 8% by the end of 2010). To put that into context, Spain's savings banks, which are heavily exposed to developers, are expected to make profits of only €16 billion before provisions this year.
The new accounting guidelines will help Spanish lenders smooth out the effects of the property bust over time. But the risk is that the problems are merely postponed. The ratio of bad loans to the total, property included, has tripled to 4.6% over the past 12 months as unemployment appears to head inexorably towards 20%.
The true picture is worse still. Commercial banks have bought about €10 billion in debt-for-property swaps, according to UBS. Spain's savings banks do not disclose the figure. Assume it is similar to their commercial peers and reclassify all these property purchases as bad loans, and then the non-performing loan ratio would be 5.7% (before any further adjustments for loan restructuring). Deferring losses to mañana doesn't change the extent of the difficulties facing Spain's financial system.
This article appeared in the Finance & economics section of the print edition under the headline "Tricks and mortar" | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 144 |
{"url":"https:\/\/dsp.stackexchange.com\/questions\/1738\/how-to-remove-stains-from-images?noredirect=1","text":"# How to remove stains from images?\n\nI have this extremely distorted and stained image\n\nIs it possible to remove this stain ? Could image inpainting help ?\n\nEDIT : Another image\n\nAfter applying anisotropic diffusion and representing image with imagesc(MATLAB)\n\nI tried inpainting however the result isn't good enough\n\nIs there anyway i could improve this output?\n\n\u2022 Well thats what image processing is about making a computer to do it for me \u2013\u00a0vini Mar 16 '12 at 17:26\n\u2022 In your image the stains are obviously on a separate plane of focus. Therefore, I would think about how to decompose the image by focus planes. Throw in a little inpainting and you should you be done :) \u2013\u00a0Emre Mar 16 '12 at 17:49\n\u2022 @vini Do you have several images or just this one? I can get \"good\" results by creating a mask manually and inpainting. Is that a solution you'd be interested in? \u2013\u00a0Lorem Ipsum Mar 17 '12 at 16:17\n\u2022 You can't just clean the windows? \u2013\u00a0endolith Mar 19 '12 at 19:53\n\u2022 The images seem to be double exposed as well as stained. Is this the case? \u2013\u00a0Charna Mar 30 '12 at 20:11\n\nThis is not a complete and crisp answer however, i am leaving you with at least some approach for you to fight with. (I would be very glad to know if you have results).\n\nTake a look at these questions:\n\nThey are essentially trying to solve the same problem.\n\nThere are two parts of the problem,\n\na. Identifying the spot\/stain b. Replacing the stain with what would have been in the place of occlusion.\n\nThe nature of the question is trying to solve exact problem (in some sense).\n\nThis is not trivial thing. However, in both questions there are some unique pattern that you can exploit.\n\n1. 
In all cases, the superimposing element which is required to be removed called here as (stain, glare, bright spot), overlay has a unique and distinct hue\/color which distinguishes itself from regular objects\/scene.\n\n2. In most cases, this hue\/color of the overlay fades away into the regular scene. The actual resultant color does change - however, it is better to model overlay as single intensity and color with successively reducing transperency Hence you can say resultant pixel $$P[x,y] = (1-\\alpha[x,y])*S[x,y] + \\alpha[x,y] * OverlayHue$$ $$\\tilde S[x,y] = (P[x,y] - OverlayHue * \\tilde \\alpha[x,y])\/(1-\\tilde \\alpha[x,y])$$ where $P[x,y]$ is observed image, and $S[x,y]$ is desired occlusion free image. Note, that alpha can be arbitrarily varied over pixels, but $OverlayHue$ of overlay is considered almost constant. $\\tilde S[x,y]$ and $\\tilde \\alpha[x,y]$ are estimated values by your algorithm for the respective quantities.\n\n3. The OverlayHue value can be independently estimated by manually segmenting pixel regions where Stain or Flash is clearly dominating.\n\n4. You can assume that \\alpha[x,y] is consistent across all channels (i.e. R,G,B) . Hence you can identify individual components as follows: $$\\tilde S_R[x,y] = (P_R[x,y] - OverlayHue_R * \\tilde \\alpha[x,y])\/(1-\\tilde \\alpha[x,y])$$ $$\\tilde S_G[x,y] = (P_G[x,y] - OverlayHue_G * \\tilde \\alpha[x,y])\/(1-\\tilde \\alpha[x,y])$$ $$\\tilde S_B[x,y] = (P_B[x,y] - OverlayHue_B * \\tilde \\alpha[x,y])\/(1-\\tilde \\alpha[x,y])$$\n\n5. You can see that when $\\alpha$ is close to 1 implies that overlay is completely occluding the scene and hence no estimate of $\\tilde S$ can be good, you should avoid that and keep some reference value to iterate it over time.\n\n6. You still have more variables than equation unfortunately, and that is due to the physical nature of pixels. A given color could have resulted either due to the pixel's own property or due to stain\/glare. 
Best bet is that you start with identified pixels where you know that $\\alpha$ is 1 and then gradually decay down guess for $\\alpha$ reducing successively. Over some iteration you can find the patterns.\n\n7. Also, in order to finally estimate pure black spots, You can also apply smoothing constraint over neighborhood pixels (i.e. $\\tilde S[x,y]$ as well as $\\tilde \\alpha[x,y]$).\n\nThis may not be perfect solution, but may be better than most obvious than pixel level clipping or playing around with saturation etc. I sincerely request you do try this in your end and show us results (my workbench is currently in a mess so i couldn't do it!)\n\nHope this helps.","date":"2021-05-05 22:42:41","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6189756989479065, \"perplexity\": 1100.86935489132}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243988696.23\/warc\/CC-MAIN-20210505203909-20210505233909-00012.warc.gz\"}"} | null | null |
The Blomberg-Fritsch affair (German: Blomberg-Fritsch-Krise) was a pair of connected scandals in early 1938 which led to the German armed forces (Wehrmacht) being subordinated to Adolf Hitler. As the Hossbach Memorandum shows, Hitler had been dissatisfied with these two highest-ranking military commanders and regarded them as too hesitant about the war preparations he demanded.
Blomberg's marriage
The affair began when a police officer reported that the young woman whom Minister of War Werner von Blomberg had married on 12 January 1938 had appeared in pornographic photographs and therefore did not have a clean criminal record. This was a breach of the officers' code of conduct that Blomberg himself had drawn up, and it came as a shock to Hitler. The head of the Luftwaffe, Hermann Göring, had been Blomberg's best man, and Hitler had been a witness at the wedding. Hitler ordered Blomberg to have the marriage annulled in order to avoid scandal and preserve the army's honor. Blomberg refused to annul the marriage, and when Göring threatened to make his wife's past public, he resigned on 27 January 1938.
Fritsch
The events surrounding Blomberg's marriage inspired Hermann Göring and Heinrich Himmler to engineer a similar affair against the commander-in-chief, Werner von Fritsch. Göring did not want von Fritsch to succeed Blomberg and become his superior. Himmler wanted to weaken the Wehrmacht and its predominantly aristocratic leadership in order to strengthen his Schutzstaffel, and especially the Waffen-SS, as a rival to the regular German army.
A few days later, Himmler and the SS accused Fritsch of being homosexual. A police report was produced which the Gestapo had already shown Hitler in 1935. At that time Hitler had dismissed it and ordered it destroyed.
It is claimed that General Ludwig Beck urged Fritsch to carry out a military coup, but Fritsch refused and resigned on 4 February 1938. He was succeeded by Walther von Brauchitsch, whom Fritsch himself had recommended for the post.
Reorganization
Hitler used the situation to transfer the war minister's duties to a new organization, the Oberkommando der Wehrmacht (OKW), and Wilhelm Keitel became the new head of the OKW on 4 February 1938. This weakened the traditional Oberkommando des Heeres (OKH), which was now subordinated to the OKW.
Hitler also used the situation to replace several generals and ministers with more loyal men, and he assumed a greater degree of actual control over the Wehrmacht, of which he was already formally the commander. Some senior military leaders protested against these changes, not least Generaloberst Ludwig Beck, who circulated a memorandum that was also signed by Generaloberst Gerd von Rundstedt.
Aftermath
It soon became known that the accusations were false. The police report concerned a Rittmeister von Frisch. Himmler nevertheless produced a witness who supported the accusation. The Wehrmacht demanded that a court of honor composed of officers review the Blomberg-Fritsch affair. The hearings were presided over by Hermann Göring personally.
Himmler's witness claimed to recognize von Fritsch as an officer he had seen engaged in a homosexual act. The witness, Otto Schmidt, was a prostitute from Munich with a long criminal record who had been bribed to support the accusation; his main criminal activity consisted of spying on and blackmailing homosexuals.
The members of the German officer corps were shocked by the treatment of von Fritsch, and at the next hearing Himmler, Göring, and even Hitler could have come under pressure from them. The successful annexation of Austria (Anschluss) shortly afterwards silenced the critics. Generaloberst Beck resigned on 18 August 1938, and Generaloberst von Rundstedt was permitted to resign in October 1938.
Acquittal
Otto Schmidt later withdrew his accusation, but he was murdered. Von Fritsch was acquitted on 18 March, but the damage had been done. He was never reinstated as commander-in-chief.
Many members of the Wehrmacht never acted on their disgust at this episode. They felt bound by their personal oath to Hitler from 1934, an oath which, ironically, Blomberg himself had ordered. From then on the army was a comparatively loyal instrument of Hitler; this led to its destruction, and to his.
External links
http://www.dhm.de/lemo/html/nazi/innenpolitik/fritschblom/
Events of 1938
Nazi Germany | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,886 |
The University of Tennessee Foundation is seeking a Director of Development in Memphis, TN.
Under the supervision of the Associate Vice Chancellor for Development and Alumni Affairs or their designee, the ideal candidate will establish and maintain strong and effective relationships with UTHSC's major internal and external constituencies, which include the Chancellor and his leadership team; the Dean of the College of Pharmacy and their faculty; members of related academic, administrative, research, and clinical programs; volunteers; and alumni.
To learn more, or to apply online, visit the University of Tennessee website. | {
"redpajama_set_name": "RedPajamaC4"
} | 2,167 |
Normally you remove an old site when you have a new one to replace it. We haven't developed a new site—yet.
We will be adding our work over time. There is no agreed-upon order or schedule. There is, however, a strong communication goal to demonstrate the breadth of how we work through images and words. We'll share successes and even some failures.
In addition to our client work, we'll also be integrating our teaching, writing and experimentation.
Today we tread a new path. To do this we've created a sparse visual system—a handful of typefaces and a minimal grid. We're excited to see where this takes us.
"redpajama_set_name": "RedPajamaC4"
} | 5,601 |
#ifndef ZETASQL_PUBLIC_FUNCTIONS_HASH_H_
#define ZETASQL_PUBLIC_FUNCTIONS_HASH_H_

#include <cstdint>
#include <memory>
#include <string>

#include "absl/base/attributes.h"
#include "absl/strings/string_view.h"

namespace zetasql {
namespace functions {
// Hashes inputs using a particular algorithm.
class Hasher {
public:
enum Algorithm {
kMd5 = 0,
kSha1 = 1,
kSha256 = 2,
kSha512 = 3,
};
// Creates a new instance of the hasher for the given algorithm.
static std::unique_ptr<Hasher> Create(Algorithm algorithm);
virtual ~Hasher() {}
// Returns the hash of the input bytes. Calling this method concurrently
// on the same object is not thread-safe.
ABSL_MUST_USE_RESULT virtual std::string Hash(absl::string_view input) = 0;
};
// Computes the fingerprint of the input bytes using the farmhash::Fingerprint64
// function from the FarmHash library (https://github.com/google/farmhash).
int64_t FarmFingerprint(absl::string_view input);
} // namespace functions
} // namespace zetasql
#endif // ZETASQL_PUBLIC_FUNCTIONS_HASH_H_
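To make the factory pattern behind this interface concrete, here is a self-contained toy sketch of the same shape (this is not the real zetasql implementation; the `ToyHasherBase`/`ToyHasher` names and the `std::hash`-based backend are invented purely for illustration — a real backend would delegate to an MD5/SHA implementation and return raw digest bytes):

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <string>

// Toy stand-in for the Hasher interface above: one concrete subclass per
// algorithm, selected by the static Create() factory.
class ToyHasherBase {
 public:
  enum Algorithm { kToy = 0 };

  static std::unique_ptr<ToyHasherBase> Create(Algorithm algorithm);
  virtual ~ToyHasherBase() {}
  virtual std::string Hash(const std::string& input) = 0;
};

namespace {
class ToyHasher : public ToyHasherBase {
 public:
  std::string Hash(const std::string& input) override {
    // Deterministic within a process; NOT a cryptographic digest.
    return std::to_string(std::hash<std::string>{}(input));
  }
};
}  // namespace

std::unique_ptr<ToyHasherBase> ToyHasherBase::Create(Algorithm algorithm) {
  switch (algorithm) {
    case kToy:
    default:
      return std::make_unique<ToyHasher>();
  }
}
```

Callers then hold only the abstract `ToyHasherBase` pointer, mirroring how `Hasher::Create` hides the concrete algorithm implementations.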
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,094 |
Q: htaccess rewrite links and duplicate it I have this htaccess code
Options +FollowSymLinks
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} ^einfogarden.com$
RewriteRule (.*) http://www.einfogarden.com/$1 [R=301,L]
RewriteCond %{THE_REQUEST} \s/+single\.php\?title=([^\s&]+) [NC]
RewriteRule ^ %1/? [R=302,L,NE]
## Adding a trailing slash
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{THE_REQUEST} \s/+(.*?)[^/][?\s]
RewriteRule [^/]$ %{REQUEST_URI}/ [L,R=302]
# convert %20 to -
RewriteRule "^(\S*) +(\S* .*)$" $1-$2 [L,NE]
RewriteRule "^(\S*) (\S*)$" $1-$2 [L,R=302,NE]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+?)/?$ single.php?title=$1 [NE,L,QSA]
This should remove single.php?title= from the URL and replace the %20 encodings, and it works correctly.
But I have two problems: 1. it breaks the CSS on my website; 2. if you click any link (except the home page) you get something like this: http://www.einfogarden.com/%D9%81%D9%88%D8%A7%D8%A6%D8%AF-%D8%A7%D9%84%D8%AC%D8%B1%D8%AC%D9%8A%D8%B1/single.php?title=%D9%81%D9%88%D8%A7%D8%A6%D8%AF%20%D8%A7%D9%84%D8%B1%D9%85%D8%A7%D9%86%20%D8%A7%D9%84%D8%B1%D8%A7%D8%A6%D8%B9%D8%A9
It duplicates part of the link.
A: In your .htaccess you are rewriting relative to the URL of the current page.
So yourwebsite.com/home becomes yourwebsite.com/home/home.
I edited your code a little; using this will work fine.
RewriteCond %{REQUEST_FILENAME} -f [NC,OR]
RewriteCond %{REQUEST_FILENAME} -d [NC]
RewriteRule ^(.*?)$ $1 [L]
RewriteCond %{REQUEST_URI} !^/cache
RewriteCond %{REQUEST_URI} !^/images
RewriteCond %{REQUEST_URI} !.*\.(css|jpg|gif|zip|js)
RewriteRule ^(.*)/(.*)/?$ index.php?page1=$1&page2=$2 [L]
RewriteRule ^(.*) index.php?page1=$1 [L]
#RewriteRule ^(.*)/?$ http://www.google.com/ [R]
This is an easy way to use SEO URLs.
Each RewriteRule maps part of the URL to a new query variable.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,045 |
\section{Introduction}
Let $\fg = \mathfrak{sl}_{n+1}(\C)$ with index set $I=\{1,\ldots,n\}$, and fix a triangular decomposition
$\fg = \fn_+ \oplus \fh \oplus \fn_-$.
Denote by $\varpi_i$ ($i \in I$) the fundamental weights.
Let $\fg[t]=\fg\otimes \C[t]$ be the associated current algebra.
For $m \in I$ and a sequence $\bm{\ell}=(\ell_1, \ell_2,\ldots,\ell_p)$ of nonnegative integers,
we define a $\fg[t]$-module $V_m(\bm{\ell})$ by
\[ V_m(\bm{\ell}) = V(\ell_1 \varpi_m) *V(\ell_2\varpi_m) * \cdots * V(\ell_p\varpi_m).
\]
Here $V(\gl)$ is the simple $\fg$-module with highest weight $\gl$,
and $*$ denotes the fusion product defined by Feigin and Loktev in \cite{MR1729359}.
We may assume without loss of generality that $\ell_1 \ge \ell_2 \ge \cdots \ge \ell_p$, that is, $\bm{\ell}$ is a partition.
In \cite{MR3296163}, Chari and Venkatesh have introduced a large family of indecomposable $\fg[t]$-modules
(with $\fg$ a general simple Lie algebra) indexed by a sequence of partitions, in terms of generators and relations.
In this note, we will show that the fusion product $V_m(\bm{\ell})$ is isomorphic to a module belonging to their family.
More explicitly, we show the following defining relations of $V_m(\bm{\ell})$.\\[-5pt]
\par
\noindent
{\bfseries Theorem.}\ \
{\itshape
Let $m \in I$ and $\bm{\ell} = (\ell_1\ge\cdots\ge\ell_p)$ be a partition.
Set $L_i = \ell_i +\cdots +\ell_{p-1}+\ell_p$ for $1 \le i \le p$ and $L_i = 0$ for $i>p$.
Then $V_m(\bm{\ell})$ is isomorphic to the
$\fg[t]$-module generated by a vector $v$ with relations
\begin{align*}
\fn_+[t]v=0, \ \ \ (h\otimes t^s) v&= \gd_{s0}L_1\langle h,\varpi_m\rangle v \text{ for }
h \in \fh, \ s \in \Z_{\ge 0},\nonumber\\
(f_\ga\otimes \C[t])v&= 0 \text{ for } \ga \in \gD_+ \text{ with } \langle h_\ga,\varpi_m\rangle = 0,\nonumber\\
f_\ga^{L_1+1}v&=0 \text{ for } \ga \in \gD_+ \text{ with } \langle h_\ga, \varpi_m\rangle =1, \nonumber\\
(e_\ga\otimes t)^{s}f_\ga^{r+s}v&=0 \text{ for } \ga \in \gD_+, r,s \in \Z_{>0} \text{ with } \langle h_\ga, \varpi_m
\rangle = 1,\nonumber\\
&\hspace{40pt} r+s \ge 1+kr + L_{k+1} \text{ for some } k\in\Z_{>0}.
\end{align*}
Here $\gD_+$ is the set of positive roots, $ h_\ga$ is the coroot corresponding to $\ga$, and $e_\ga$ and $f_\ga$ are
root vectors corresponding to $\ga$ and $-\ga$ respectively.}
\par
This theorem for $\fg = \mathfrak{sl}_2$ has been proved in \cite{MR1988973} and \cite{MR3296163}.
In the case $p=2$, this can be found in \cite{MR3336341} and \cite{Fourier} (see also \cite{CSVW}).
Let us explain the motivation behind the theorem.
For that we consider the case $\fg = \mathfrak{sl}_2(\C)$ for a moment.
Let $(\ell_1\ge \ell_2), (r_1\ge r_2)$ be partitions of a positive integer $\ell$.
By the well-known Clebsch-Gordan formula
\[ V(\ell\varpi_1)\otimes V(r\varpi_1) = V\big(|\ell-r|\varpi_1\big) \oplus \cdots \oplus
V\big((\ell+r-2)\varpi_1\big) \oplus V\big((\ell+r)\varpi_1\big),
\]
we see that there exists a surjective $\fg$-module homomorphism
\[ V(\ell_1\varpi_1)\otimes V(\ell_2\varpi_1) \twoheadrightarrow V(r_1\varpi_1) \otimes V(r_2\varpi_1)
\]
if and only if $\ell_2 \ge r_2$.
This surjection implies that the difference of their characters can be written as a sum of characters of simple $\fg$-modules.
Since the characters of simple $\fg$-modules are known as Schur functions,
this property is called \textit{Schur positivity}.
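For a concrete instance (an elementary check, included here for illustration), take $\ell = 4$ and compare the partitions $(2,2)$ and $(3,1)$: the Clebsch-Gordan formula gives
\[ V(2\varpi_1)\otimes V(2\varpi_1) \cong V(4\varpi_1)\oplus V(2\varpi_1)\oplus V(0), \ \ \
V(3\varpi_1)\otimes V(\varpi_1) \cong V(4\varpi_1)\oplus V(2\varpi_1),
\]
so since $2 \ge 1$ there is a surjection from the first module onto the second, and the difference of their characters is $\ch V(0)$.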
Generalization of the surjection to a more general $\fg$ and more general $\fg$-modules has been studied in
\cite{MR2358610, MR2369890, MR3248990,MR3226992}.
In particular when $\fg = \mathfrak{sl}_{n+1}$, it follows from \cite{MR3248990} (see also \cite{MR2369890})
that for $m \in I$ and two partitions $(\ell_1\ge\cdots\ge \ell_p)$, $(r_1\ge\cdots \ge r_p)$ of a positive integer $\ell$,
there exists a surjective $\fg$-module homomorphism
\begin{equation}\label{eq:surjection}
V(\ell_1\varpi_m) \otimes \cdots \otimes V(\ell_p\varpi_m) \twoheadrightarrow V(r_1\varpi_m) \otimes \cdots \otimes
V(r_p\varpi_m)
\end{equation}
if $\ell_i + \cdots +\ell_p \geq r_i + \cdots +r_p$ holds for each $1\le i \le p$.
Fourier and Hernandez have raised the following question in the introduction of \cite{MR3226992}: Can surjections
such as (\ref{eq:surjection}) be obtained from surjective $\fg[t]$-module homomorphisms between the
corresponding fusion products?
(Recall that the $\fg$-module structures of a tensor product and a fusion product are the same.)
By inspecting the defining relations of the theorem we obtain the following corollary,
which gives a positive answer to their question in our setting.
\begin{Cor}
Let $m \in I$, and $\bm{\ell}=(\ell_1 \geq \cdots \geq \ell_p)$, $\bm{r}=(r_1 \geq \cdots \geq r_p)$ be two partitions of a
positive integer $\ell$.
We assume that $\ell_i + \cdots +\ell_p \geq r_i + \cdots +r_p$ holds for each $1 \le i \le p$.
Then there exists a surjective $\fg[t]$-module homomorphism
from $V_m(\bm{\ell})$ onto $V_m(\bm{r})$.
\end{Cor}
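Let us sketch how the corollary follows from the theorem. Set $L_i(\bm{\ell}) = \ell_i + \cdots + \ell_p$ and $L_i(\bm{r}) = r_i + \cdots + r_p$; the assumption says $L_i(\bm{\ell}) \ge L_i(\bm{r})$ for all $i$, with $L_1(\bm{\ell}) = L_1(\bm{r}) = \ell$. Then every defining relation of $V_m(\bm{\ell})$ is satisfied by the cyclic generator of $V_m(\bm{r})$: the first three families of relations involve only $L_1$, and in the last family (with $r,s$ as in the theorem) the inequality $r+s \ge 1 + kr + L_{k+1}(\bm{\ell})$ implies $r+s \ge 1 + kr + L_{k+1}(\bm{r})$, so the corresponding vectors already vanish in $V_m(\bm{r})$. Sending generator to generator therefore defines the desired surjective homomorphism.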
It would be an interesting problem to generalize the theorem to a more general $\fg$ or more general modules.
These will be studied elsewhere.
The organization of this paper is as follows.
We fix basic notations in Subsection \ref{Subsection:preliminary},
and recall the definition of fusion products in Subsection \ref{Subsection:Fusion_product}.
By \cite{MR2964614} $V_m(\bm{\ell})$ can be realized as a $\fg[t]$-submodule of a module over the affine Lie algebra $\hfg$,
which is recalled in Subsection \ref{Subsection:Realization}.
In Subsection \ref{Subsection:Chari-Venkatesh}, we recall some results in \cite{MR3296163} needed for the proof of
the main theorem, and show one technical lemma.
Then we prove the theorem in Section \ref{Section:proof} by determining the defining relations recursively
using the realization, in which we apply the method used in \cite{MR3120578}.
\section{Preliminaries}
\subsection{Simple Lie algebra, current algebra, and affine Kac-Moody Lie algebra of type
{\boldmath$A$}}\label{Subsection:preliminary}
Let $\fg = \mathfrak{sl}_{n+1}(\C)$ with index set $I=\{1,\ldots,n\}$.
We fix a triangular decomposition $\fg = \fn_+ \oplus \fh \oplus \fn_-$.
Let $\ga_i$ and $\varpi_i$ ($i\in I$) be simple roots and fundamental weights respectively.
We use the labeling in \cite[Section 4.8]{MR1104219}.
For convenience we set $\varpi_0=0$.
Let $\gD$ be the root system, $\gD_+$ the set of positive roots, $W$ the Weyl group with simple reflections
$\{s_i\mid i\in I\}$ and longest element $w_0$,
and $(\ , \ )$ the unique non-degenerate
$W$-invariant symmetric bilinear form on $\fh^*$ satisfying $(\ga,\ga) = 2$ for all $\ga \in \gD$.
Let
\[ \gt = \ga_1 + \cdots + \ga_{n-1}+\ga_n
\]
be the highest root.
For each $\ga \in \gD$, let $ h_\ga$ be its coroot, $\fg_\ga$ the corresponding root space,
and $e_{\ga} \in \fg_\ga$ a root vector satisfying $[e_\ga,e_{-\ga}] = h_\ga$.
We also use the notations $f_\ga = e_{-\ga}$ for $\ga \in \gD_+$, $h_i = h_{\ga_i}$, $e_i = e_{\ga_i}$ and $f_i = f_{\ga_i}$.
Denote by $P$ the weight lattice, by $P_+$ the set of dominant integral weights,
and by $V(\gl)$ ($\gl \in P_+$) the simple $\fg$-module with highest weight $\gl$.
For $i \in I$, set
\[ i^* = n+1-i \in I.
\]
Note that $w_0(\varpi_i) = -\varpi_{i^*}$ holds.
Given a Lie algebra $\fa$, its \textit{current algebra} $\fa[t]$ is defined by the tensor product $\fa \otimes \C[t]$
equipped with the Lie algebra structure given by
\[ [x\otimes f(t), y \otimes g(t)] = [x,y] \otimes f(t)g(t).
\]
For $k\in \Z_{> 0}$, let $t^k\fa[t]$ denote the ideal $\fa \otimes t^k\C[t] \subseteq \fa \otimes \C[t]$.
Let $\hfg = \fg\otimes \C[t,t^{-1}] \oplus \C c \oplus \C d$ be the nontwisted affine Lie algebra associated with $\fg$.
Here $c$ is the canonical central element and $d$ is the degree operator.
Note that $\fg$ and $\fg[t]$ are naturally considered as Lie subalgebras of $\hfg$.
Let $\hI = I \sqcup \{0\}$, and define Lie subalgebras $\hfh$, $\hfn_+$, and $\hfb$ as follows:
\[ \hfh = \fh \oplus \C c \oplus \C d, \ \ \hfn_+ = \fn_+ \oplus t\fg[t], \ \
\hfb = \hfh \oplus \hfn_+.
\]
We also define $\hfh_{\mathrm{cl}} = \fh \oplus \C c$.
We often consider $\fh^*$ (resp.\ $\hfh_{\mathrm{cl}}^*$) as a subspace of $\hfh^*$ by setting
\[ \langle c,\gl\rangle = \langle d,\gl\rangle = 0 \text{ for } \gl \in \fh^* \ \ \ \big(\text{resp.}\
\langle d,\gl\rangle = 0 \text{ for } \gl \in \hfh_{\mathrm{cl}}^*\big).
\]
Let $\hat{\gD}$ be the root system of $\hfg$, $\hP$ the weight lattice, $\hP_+$ the set of dominant integral weights,
and $\hat{W}$ the Weyl group with simple reflections $\{s_i \mid i \in \hI\}$.
Denote by $\gd\in \hP$ the null root, and by $\gL_0 \in \hP$ the unique element satisfying
\[ \langle \fh,\gL_0\rangle = \langle d, \gL_0 \rangle = 0, \ \ \ \langle c, \gL_0 \rangle = 1.
\]
Let $\ga_0 = \gd -\gt$, $e_0 = f_{\gt} \otimes t$ and $f_0 = e_{\gt} \otimes t^{-1}$.
Given an integrable $\hfg$-module $M$ and $i \in \hI$, define a linear automorphism $\Phi_i^M$ on $M$ by
\[ \Phi^M_i = \mathrm{exp}(f_i)\mathrm{exp}(-e_i)\mathrm{exp}(f_i),
\]
see \cite[Lemma 3.8]{MR1104219}.
For each $w\in \hW$ fix a reduced expression $w=s_{i_k}\cdots s_{i_1}$, and set
$\Phi^M_w=\Phi^M_{i_k}\cdots \Phi^M_{i_1}$.
Then $\Phi^M_w$ satisfies
\begin{equation*
\Phi^M_w(M_\mu) = M_{w(\mu)} \ \text{ for } \mu \in \hP, \ \ \ \Phi_w^M \hfg_\ga (\Phi_w^M)^{-1}
= \hfg_{w(\ga)} \text{ for } \ga \in \hat{\gD}.
\end{equation*}
In particular by considering the adjoint representation, an algebra automorphism on $U(\hfg)$
is defined for each $w \in \hW$, which is denoted by $\Phi_w$.
Note that $\Phi^M_w$ for $w \in W$ is also defined on a finite-dimensional $\fg$-module $M$.
Define $t_\gl \in \mathrm{GL}(\hfh^*)$ for $\gl \in P$ by
\begin{equation*}
t_{\gl}(\mu) = \mu + \langle c,\mu \rangle \gl - \big((\mu, \gl) +
\frac{1}{2}(\gl, \gl)\langle c, \mu \rangle\big)\gd,
\end{equation*}
see \cite[Chapter 6]{MR1104219}.
Let $T(P) = \{t_\gl \mid \gl \in P\}$ and $\widetilde{W} = W \ltimes T(P)$,
which is called the \textit{extended affine Weyl group}.
Here $w \in W$ and $t_\gl \in T(P)$ satisfy $wt_\gl w^{-1}=t_{w(\gl)}$.
For $i \in \hI$, let
\begin{equation*}
\tau_i = t_{\varpi_i}w_{i,0}w_0 \in \widetilde{W}
\end{equation*}
where $w_{i,0}$ is the longest element of $W_{\varpi_i}$,
the stabilizer of $\varpi_i$ in $W$.
We have
\begin{equation}\label{eq:rule_of_tau}
\tau_i(\gd) = \gd, \ \ \ \tau_i(\ga_j) = \ga_{\ol{i+j}}, \ \text{ and } \ \tau_i(\varpi_j+\gL_0) \equiv \varpi_{\ol{i+j}}+
\gL_0 \ \text{ mod } \Q\gd \ \text{ for } j \in \hI
\end{equation}
where $\ol{i+j} \equiv i+j$ mod $n+1$.
Set $\gS = \{\tau_i \mid i\in \hI\}$.
It is known that $\widetilde{W} = \hW\rtimes \gS $.
We define an action of $\gS$ on $\hfg$ by letting $\tau_i$ act as a Lie algebra automorphism determined by
\begin{align*}
\tau_i(e_j) = e_{\ol{i+j}}, \ \ \tau_i(f_j) = f_{\ol{i+j}} \text{ for } j \in \hI, \ \
\langle \tau_i(h), \tau_i(\gl)\rangle = \langle h,\gl\rangle \text{ for } h \in \hfh, \gl \in \hfh^*.
\end{align*}
\subsection{Fusion product}\label{Subsection:Fusion_product}
Let us recall the definition of the fusion product introduced in \cite{MR1729359}.
Note that $U(\fg[t])$ has a natural $\Z_{\ge 0}$-grading defined by
\[ U(\fg[t])^k = \{X \in U(\fg[t]) \mid [d,X] = kX\}.
\]
Let $\gl_1,\ldots,\gl_p$ be a sequence of elements of $P_+$, and $c_1,\ldots,c_p$ pairwise distinct complex numbers.
We define a $\fg[t]$-module structure on $V(\gl_i)$ as follows:
\[ \big(x\otimes f(t)\big) v = f(c_i)xv \ \text{ for } x \in \fg, f(t) \in \C[t], v \in V(\gl_i).
\]
Denote this $\fg[t]$-module by $V(\gl_i)_{c_i}$.
Let $v_i$ be a highest weight vector of $V(\gl_i)$.
Then the $\fg[t]$-module $V(\gl_1)_{c_1} \otimes \cdots \otimes V(\gl_p)_{c_p}$ is
generated by $v_1 \otimes \cdots \otimes v_p$ (see \cite{MR1729359}),
and the grading on $U(\fg[t])$ induces a filtration on $V(\gl_1)_{c_1}\otimes \cdots \otimes V(\gl_p)_{c_p}$ by
\[ \Big(V(\gl_1)_{c_1}\otimes \cdots \otimes V(\gl_p)_{c_p}\Big)^{\leq k} = \sum_{r \leq k}
U(\fg[t])^r(v_1\otimes \cdots \otimes v_p).
\]
Now the $\fg[t]$-module obtained by taking the associated graded is denoted by
\[ V(\gl_1) * \cdots * V(\gl_p),
\]
and called the \textit{fusion product} of $V(\gl_1),\ldots,V(\gl_p)$.
Though the definition depends on the parameters $c_i$, we omit them for the notational convenience.
All fusion products appearing in this paper do not depend on the parameters up to isomorphism.
Note that, by definition, we have
\[ V(\gl_1) * \cdots * V(\gl_p) \cong V(\gl_1) \otimes \cdots \otimes V(\gl_p)
\]
as a $\fg$-module.
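For example, let $\fg = \mathfrak{sl}_2$. One checks that $V(\varpi_1) * V(\varpi_1) \cong V(2\varpi_1) \oplus V(0)$ as a $\fg$-module, where the summand $V(2\varpi_1) = U(\fg)(v_1 \otimes v_2)$ sits in degree $0$ and the remaining trivial summand $V(0)$ sits in degree $1$.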
\subsection{Another realization of fusion products}\label{Subsection:Realization}
Kirillov-Reshetikhin modules for $\fg[t]$ are $\fg[t]$-modules defined in terms of generators and relations,
which have been introduced in \cite{MR2238884}.
In \cite{MR2964614} the fusion products of Kirillov-Reshetikhin modules for $\fg[t]$
were studied when $\fg$ is of type $ADE$, and a new realization of these modules using Joseph functors was given.
When $\fg$ is of type $A$, a Kirillov-Reshetikhin module is just the evaluation module at $t=0$ of $V(k\varpi_i)$
with $k\in \Z_{> 0}$ and $i \in I$,
and hence their fusion products are what we are studying in this note.
In this subsection we will reformulate the result of \cite{MR2964614} in type $A$ in a different way
(see Remark \ref{Rem}).
This formulation has previously been used in \cite{MR3120578}, and is more suitable for later use since we can apply Lemma
\ref{Lem:proceed} stated below.
First we introduce several notations.
Assume that $V$ is a $\hfg$-module and $D$ is a $\hfb$-submodule of $V$.
For $\tau \in \gS$, denote by $F_\tau V$ the pull-back $(\tau^{-1})^*V$ with respect to the Lie algebra
automorphism $\tau^{-1}$ on $\hfg$, and define a $\hfb$-submodule $F_\tau D \subseteq F_\tau V$ in the obvious way.
For $i \in \hI$ let $\hfp_i$ denote the parabolic subalgebra $\hfb \oplus \C f_i \subseteq \hfg$,
and set $F_iD= U(\hfp_i)D \subseteq V$ to be the $\hfp_i$-submodule generated by $D$.
Finally we define $F_w D$ for $w \in \widetilde{W}$ as follows:
let $\tau \in \gS$ and $w' \in \hat{W}$ be the elements such that $w=w'\tau$, and choose a reduced
expression $w'=s_{i_k} \cdots s_{i_1}$.
Then we set
\[ F_{w}D = F_{i_k}\cdots F_{i_1} F_\tau D\subseteq F_\tau V.
\]
Though the definition depends on the choice of the expression of $w'$, we use $F_w$ by an abuse of notation.
For $\gL \in \hP_+$ let $\hV(\gL)$ be the simple highest weight $\hfg$-module with highest weight $\gL$.
Denote by $\C_{\gL}$ the $1$-dimensional $\hfb$-submodule of $\hV(\gL)$ spanned
by a highest weight vector.
Note that $F_\tau\hV(\gL) \cong \hV(\tau\gL)$ and $F_\tau \C_{\gL} \cong \C_{\tau\gL}$ for $\tau \in \gS$.
Let
\begin{equation*}
\hfb' = \hfb\cap \fg[t]= \fh \oplus \hfn_+.
\end{equation*}
Now \cite[Theorem 6.1]{MR2964614} is reformulated as follows.
\begin{Thm}\label{Thm:realization}
Let $\bm{\ell} = (\ell_1 \ge \cdots\ge\ell_p)$ be a partition, and $m_1,\ldots,m_p$ a sequence of elements
of $I$.
As a $\hfb'$-module, we have
\begin{align*}
V(\ell_1\varpi_{m_1}) * \cdots& * V(\ell_p\varpi_{m_p})\\
&\cong F_{t_{-\varpi_{m_1^*}}}\Big(\C_{(\ell_1-\ell_2)\gL_0} \otimes \cdots
\otimes F_{t_{-\varpi_{m_{p-1}^*}}}\big(\C_{(\ell_{p-1}-\ell_p)\gL_0} \otimes F_{t_{-\varpi_{m_p^*}}}\C_{\ell_p\gL_0}\big)\!
\cdots \!\Big).
\end{align*}
\end{Thm}
\begin{Rem}\label{Rem}\normalfont
In \cite[Theorem 6.1]{MR2964614} the right-hand side is defined in terms of Joseph functors, but it can easily be proved to
be isomorphic to the right-hand side of Theorem \ref{Thm:realization} as follows.
By the universality of Joseph functors, there exists a surjection between two modules.
Moreover their characters coincide by \cite[Corollary 6.2]{MR2964614} and \cite[Theorem 5]{MR1887117},
and hence they are isomorphic. (See \cite[a paragraph below Lemma 5.2]{MR3120578} for more detail,
in which a similar argument is given.)
\end{Rem}
For $i \in \hI$, let $\hfn_i$ be the nilradical of $\hfp_i$.
More explicitly, $\hfn_i$ is defined as follows:
\[ \hfn_i = \bigoplus_{\ga \in \gD_+\setminus \{\ga_i\}} \C e_\ga \oplus t\fg[t] \ (i \in I), \
\hfn_0 = \fn_+ \oplus \bigoplus_{\ga \in \gD\setminus\{-\gt\}} \C (e_\ga \otimes t) \oplus (\fh\otimes t)\oplus
t^2\fg[t].
\]
The following lemma is useful to determine defining relations of modules constructed using $F_w$'s.
For the proof, see \cite[Lemma 5.3]{MR3120578}.
\begin{Lem}\label{Lem:proceed}
Let $V$ be an integrable $\hfg$-module, $T$ a finite-dimensional $\hfb$-submodule of $V$,
$i \in \hI$ and $\xi \in \hP$ such that $\langle h_i, \xi \rangle \ge 0$.
Assume that the following conditions hold:\\
{\normalfont(i)}
$T$ is generated by a weight vector $v \in T_\xi$ satisfying $e_i v=0$.\\
{\normalfont(ii)}
There is an $\ad(e_i)$-invariant left $U(\hfn_i)$-ideal $\mathcal{I}$ such that
\[ \mathrm{Ann}_{U(\hfn_+)}v = U(\hfn_+)e_i + U(\hfn_+)\mathcal{I}.
\]
{\normalfont(iii)} We have $\ch F_i T = \mathcal{D}_i \ch T$, where $\mathrm{ch}$ denotes the character with respect to
$\hfh$, and $\mathcal{D}_i$ is the Demazure operator defined by
\[ \mathcal{D}_i(f) = \frac{f-e^{-\ga_i}s_i(f)}{1-e^{-\ga_i}}.
\]
Let $v' = f_i^{\langle h_i, \xi \rangle} v$. Then we have
\[ \mathrm{Ann}_{U(\hfn_+)} v'= U(\hfn_+)e_i^{\langle h_i, \xi \rangle+1} + U(\hfn_+)\Phi_i( \mathcal{I}).
\]
\end{Lem}
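For instance (a standard computation, included for convenience), if $\mu \in \hP$ satisfies $\langle h_i, \mu \rangle = m \ge 0$, then directly from the definition
\[ \mathcal{D}_i(e^{\mu}) = \frac{e^{\mu} - e^{\mu-(m+1)\ga_i}}{1-e^{-\ga_i}} = e^{\mu} + e^{\mu-\ga_i} + \cdots + e^{\mu-m\ga_i},
\]
the character of an $(m+1)$-dimensional $\ga_i$-string through $\mu$.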
\subsection{Presentation by Chari and Venkatesh}\label{Subsection:Chari-Venkatesh}
Following \cite{MR3296163}, we introduce some notations.
For $r,s \in\Z_{\ge 0}$, let
\[ \mathbf{S}(r,s)=\Big\{(b_j)_{j\ge 0}\Bigm| b_j \in \Z_{\ge 0}, \ \sum_j b_j = r, \ \sum_j jb_j = s\Big\}.
\]
Note that $\mathbf{S}(0,s) = \emptyset$ if $s > 0$, and if $(b_j)_{j\ge 0} \in \mathbf{S}(r,s)$ then
$b_j = 0$ for $j>s$.
For $x \in \fg$ and $r,s\in \Z_{\ge 0}$, define a vector $x(r,s) \in U(\fg[t])$ by
\[ x(r,s) = \sum_{(b_j)_{j\ge 0}\in \mathbf{S}(r,s)} (x\otimes 1)^{(b_0)}(x\otimes t)^{(b_1)} \cdots
(x\otimes t^s)^{(b_s)},
\]
where for $X \in \fg[t]$, $X^{(b)}$ denotes the divided power $X^b/b!$.
We understand $x(r,s) = 0$ if $\mathbf{S}(r,s)=\emptyset$.
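For instance, $\mathbf{S}(2,2) = \{(1,0,1,0,\ldots),\ (0,2,0,\ldots)\}$, so that
\[ x(2,2) = (x\otimes 1)(x\otimes t^2) + (x\otimes t)^{(2)} = (x\otimes 1)(x\otimes t^2) + \tfrac{1}{2}(x\otimes t)^2.
\]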
For $\ga \in \gD_+$, define Lie subalgebras $\mathfrak{sl}_{2,\ga}$ and $\fb_{\ga}$ of $\fg$ by
\[ \msl_{2,\ga}= \C e_\ga \oplus \C h_\ga \oplus \C f_{\ga},\ \ \ \fb_{\ga}=\C e_\ga\oplus \C h_\ga.
\]
We also define a Lie subalgebra $\hfm_\ga$ of $\msl_{2,\ga}[t]$ by
\[ \hfm_\ga = t\msl_{2,\ga}[t] \oplus \C f_\ga.
\]
By \cite{MR502647} (see also \cite[Lemma 1.3]{MR1850556}), we have
\begin{align}\label{eq:Garland}
(e_\ga\otimes t)^{(s)}f_{\ga}^{(r+s)} - &(-1)^sf_{\ga}(r,s)\in U(\hfm_\ga)t\fb_\ga[t].
\end{align}
For $k \in \Z_{\ge 0}$, let $\mathbf{S}(r,s)_k$ (resp.\ ${}_k\mathbf{S}(r,s)$) be the subset of $\mathbf{S}(r,s)$ consisting
of elements $(b_j)_{j\ge0}$, satisfying
\[ b_j =0 \ \text{ for } \ j \ge k \ \ \ (\text{resp.\ } b_j = 0 \ \text{ for } \ j<k).
\]
For $x\in \fg$, define a vector $x(r,s)_k$ and ${}_kx(r,s)$ by
\begin{align*}
x(r,s)_k &= \sum_{(b_j)_{j\ge 0}\in \mathbf{S}(r,s)_k} (x\otimes 1)^{(b_0)}(x\otimes t)^{(b_1)} \cdots
(x\otimes t^{k-1})^{(b_{k-1})},\\
{}_kx(r,s)&= \sum_{(b_j)_{j\ge0}\in {}_k\mathbf{S}(r,s)} (x\otimes t^k)^{(b_k)}(x\otimes t^{k+1})^{(b_{k+1})} \cdots
(x\otimes t^{s})^{(b_s)}.
\end{align*}
The following was proved in \cite{MR3296163}.
\begin{Lem}\label{Lem:Chari-Venkatesh}
{\normalfont(i)} Let $x \in \fg$. If $r,s,k\in \Z_{>0}$ and $K \in \Z_{\ge 0}$ satisfy $r+s\ge kr+K$, then
\[ x(r,s)={}_kx(r,s) + \sum_{(r',s')} x(r-r',s-s')_k\cdot {}_kx(r',s'),
\]
where the sum is over all pairs $r',s' \in \Z_{\ge 0}$ such that $r' < r, s'\le s$ and $r'+s' \ge kr' + K$.\\
\noindent{\normalfont(ii)} Let $\ga \in \gD_+$, $V$ be an $\mathfrak{sl}_{2,\ga}[t]$-module, $v \in V$
and $K \in \Z_{\ge 0}$.
Assume that $e_\ga\otimes \C[t]$ and $h_\ga\otimes t\C[t]$ act trivially on $v \in V$.
Then,
\[ (e_\ga \otimes t)^{s}f_{\ga}^{r+s}v = 0 \text{ for all } r,s \in \Z_{>0} \text{ with }
r+s \ge 1 + kr + K \text{ for some } k\in \Z_{>0}
\]
if and only if
\[ {}_kf_{\ga}(r,s)v=0 \ \text{ for all } r,s,k \in \Z_{>0} \text{ with } r+s \ge 1 + kr + K.
\]
\end{Lem}
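Since all the factors $x\otimes t^j$ commute for a fixed $x$, the splitting of a tuple in $\mathbf{S}(r,s)$ into its parts supported on $j<k$ and on $j\ge k$, which underlies part (i) of the lemma, can be checked numerically by substituting rational values for the $x\otimes t^j$. A sketch (not part of the paper; the sum below runs over all $0\le r'\le r$, $0\le s'\le s$, and pairs excluded by support constraints contribute $0$ because the corresponding index sets are empty):

```python
from fractions import Fraction
from itertools import product
from math import factorial
import random

def terms(r, s, lo, hi):
    """Tuples (b_0,...,b_s) with sum(b_j)=r, sum(j*b_j)=s and
    b_j = 0 outside the index window lo <= j < hi."""
    return [b for b in product(range(r + 1), repeat=s + 1)
            if sum(b) == r
            and sum(j * v for j, v in enumerate(b)) == s
            and all(v == 0 or lo <= j < hi for j, v in enumerate(b))]

def ev(r, s, t, lo=0, hi=10**9):
    """The (windowed) vector x(r,s) with each x⊗t^j replaced by the number t[j]."""
    total = Fraction(0)
    for b in terms(r, s, lo, hi):
        term = Fraction(1)
        for j, v in enumerate(b):
            term *= Fraction(t[j]) ** v / factorial(v)  # divided power X^(v)
        total += term
    return total

random.seed(1)
r, s, k = 3, 4, 2
t = [random.randint(1, 9) for _ in range(s + 1)]
lhs = ev(r, s, t)                       # x(r,s)
rhs = sum(ev(r - a, s - b, t, hi=k) * ev(a, b, t, lo=k)
          for a in range(r + 1) for b in range(s + 1))
print(lhs == rhs)                       # True: the splitting is exact
```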
The following proposition plays an important role in the next section.
\begin{Prop}\label{Prop2}
Let $\ga \in \gD_+$. If $r,s,k \in \Z_{>0}$ and $K \in \Z_{\ge 0}$ satisfy $r+s \ge kr + K$,
then we have
\[ \big[ e_\ga, \ {}_kf_{\ga}(r,s)\big] \in \sum_{(r',s')}U(t\mathfrak{sl}_{2,\ga}[t]){}_k\bfa(r',s') +
U(t\msl_{2,\ga}[t])t\fb_\ga[t],
\]
where the sum is over all pairs $r',s' \in \Z_{> 0}$ such that $r'+s'\ge kr'+K$.
\end{Prop}
\begin{proof}
First we introduce some notation.
We write $\mathfrak{f}_\ga = \C f_\ga$ here.
Define Lie subalgebras $\hfm_\ga^h$, $\mathfrak{f}_\ga[t]_{< k}$ and $\ff_\ga[t]_{<k}^h$ by
\[ \hfm_\ga^h = \hfm_\ga \oplus \C h_\ga, \ \ \ \ff_\ga[t]_{<k} = \bigoplus_{j=0}^{k-1} \C (f_\ga \otimes t^j), \ \ \
\ff_{\ga}[t]_{<k}^h = \ff_{\ga}[t]_{<k} \oplus \C h_\ga.
\]
Since
\[ \hfm_\ga^h = \ff_{\ga}[t]_{<k}^h \oplus t^k\ff_{\ga}[t] \oplus t\fb_\ga[t],
\]
we have by the PBW theorem that
\[ U(\hfm_\ga^h) = \ff_{\ga}[t]_{<k}^hU(\hfm_\ga^h) \oplus U\big(t^k\ff_{\ga}[t]\oplus t\fb_\ga[t]\big).
\]
Denote by $p$ the projection
\[ U(\hfm_\ga^h) \twoheadrightarrow U\big(t^k\ff_\ga[t] \oplus t\fb_\ga[t]\big)
\]
with respect to this decomposition.
It follows from Lemma \ref{Lem:Chari-Venkatesh} (i) that
\begin{equation}\label{eq:image_of_p}
p\big(\bfa(r',s')\big) = {}_k\bfa(r',s').
\end{equation}
Denote by $\mathcal{I}$ the left $U(t\msl_{2,\ga}[t])$-ideal in the assertion.
Now we begin the proof of the proposition.
By (\ref{eq:Garland}), it follows that
\[ \big[e_\ga,\ (e_\ga\otimes t)^{(s)}f_\ga^{(r+s)}\big] -
(-1)^s\big[e_\ga, \ \bfa(r,s)\big] \in U(\hfm_\ga^h)t\fb_\ga[t].
\]
By applying $p$ to this, we have
\begin{align}\label{eq:containment2}
p\Big(\big[e_\ga,\ (e_\ga\otimes t)^{(s)}f_\ga^{(r+s)}\big]\Big) -
(-1)^sp&\Big(\big[e_\ga, \ \bfa(r,s)\big]\Big)\in U\big(t^k\ff_\ga[t]\oplus t\fb_\ga[t]\big)t\fb_\ga[t]\subseteq \mathcal{I}.
\end{align}
The following calculation is elementary:
\begin{align*}
\big[e_\ga, \ (e_\ga\otimes t)^{(s)}f_\ga^{(r+s)}\big] = (e_\ga\otimes t)^{(s)}\big[e_\ga, \, f_\ga^{(r+s)}\big]
= (h_\ga+r-s-1)(e_\ga\otimes t)^{(s)}f_\ga^{(r+s-1)}.
\end{align*}
Note that the pair $(r-1,s)$ satisfies the condition $(r-1)+s \ge k(r-1)+K$ since $k \in \Z_{>0}$.
By (\ref{eq:Garland}), the above equality implies
\begin{align}\label{eq:containment}
p\Big(\big[e_\ga, \ (e_\ga\otimes t)^{(s)}f_\ga^{(r+s)}\big]\Big) \in&\ p\Big(\C(e_\ga\otimes t)^{(s)}
f_\ga^{(r+s-1)}\Big)\nonumber\\
\subseteq&\ p\Big(\C \bfa(r-1,s) + U(\hfm_\ga)t\fb_\ga[t]\Big)\\ =&\ \C\, {}_k\bfa(r-1,s) +
U\big(t^k\ff_\ga[t] \oplus t\fb_\ga[t]\big)t\fb_\ga[t] \subseteq \mathcal{I},\nonumber
\end{align}
where the equality holds by (\ref{eq:image_of_p}).
On the other hand, we have by Lemma \ref{Lem:Chari-Venkatesh} (i) that
\begin{align}\label{eq:pro_equal}
p\Big(\big[e_\ga, \ \bfa(r,s)\big]\Big) = p \Big( \big[e_\ga,\ {}_k\bfa(r,s)\big]\Big) + \sum_{(r',s')}
p\Big(\big[ e_\ga, \ \bfa(r-r',s-s')_k\cdot {}_k\bfa(r',s')\big]\Big).
\end{align}
Since $\big[e_\ga,\ {}_k\bfa(r,s)\big] \in U(t^k\mathfrak{sl}_{2,\ga}[t])$,
it follows that $p\Big(\big[e_\ga, \ {}_k\bfa(r,s)\big]\Big) = \big[e_\ga, \ {}_k\bfa(r,s)\big]$, and
\begin{align*}
p\Big(\big[ e_\ga, \ \bfa(r&-r',s-s')_k\cdot {}_k\bfa(r',s')\big]\Big)\\
\hspace{-5pt}=&\, p\Big(\big[e_\ga, \ \bfa(r-r',s-s')_k\big]{}_k\bfa(r',s')\Big) + p\Big(\bfa(r-r',s-s')_k
\big[e_\ga, \ {}_k\bfa(r',s')\big]\Big)\\
\hspace{-5pt}\in&\, p\Big(U(\hfm_\ga^h){}_k\bfa(r',s')\Big) + 0 = U\big(t^k\ff_\ga[t] \oplus t\fb_\ga[t]\big){}_k\bfa(r',s')
\subseteq \mathcal{I}.
\end{align*}
Hence (\ref{eq:pro_equal}) implies
\begin{align*}
p\Big(\big[e_\ga, \ \bfa(r,s)\big]\Big) - \big[e_\ga, \ {}_k\bfa(r,s)\big]
\in \mathcal{I}.
\end{align*}
Now $\big[e_\ga, \ {}_k\bfa(r,s)\big] \in \mathcal{I}$ follows from this, together with (\ref{eq:containment2}) and
(\ref{eq:containment}).
The proof is complete.
\end{proof}
\section{Main theorem and proof}\label{Section:proof}
Let $m \in I$ and $\bm{\ell} = (\ell_1\ge\cdots\ge\ell_p)$ be a partition,
and denote by $V_m(\bm{\ell})$ the fusion product $V(\ell_1\varpi_m) * \cdots * V(\ell_p\varpi_m)$.
Set $L_i = \ell_i + \cdots +\ell_p$ for $1\le i \le p$, and $L_i = 0$ for $i>p$.
As mentioned in the introduction, the main theorem of this note is the following.
\begin{Thm}\label{Thm:Main2}
The fusion product $V_m(\bm{\ell})$ is isomorphic to the
$\fg[t]$-module generated by a vector $v$ with relations
\begin{align*}
\fn_+[t]v=0, \ \ \ (h\otimes t^s) v&= \gd_{s0}L_1\langle h,\varpi_m\rangle v \text{ for }
h \in \fh, \ s \in \Z_{\ge 0},\nonumber\\
\big(f_\ga\otimes \C[t]\big)v&= 0 \text{ for } \ga \in \gD_+ \text{ with } \langle h_\ga,\varpi_m\rangle = 0,\nonumber\\
f_\ga^{L_1+1}v&=0 \text{ for } \ga \in \gD_+ \text{ with } \langle h_\ga, \varpi_m\rangle =1, \nonumber\\
(e_\ga\otimes t)^{s}f_\ga^{r+s}v&=0 \text{ for } \ga \in \gD_+, r,s \in \Z_{>0} \text{ with } \langle h_\ga, \varpi_m
\rangle = 1,\nonumber\\
&\hspace{40pt} r+s \ge 1+kr + L_{k+1} \text{ for some } k\in\Z_{>0}.
\end{align*}
\end{Thm}
\begin{Rem}\normalfont
In \cite{MR3296163}, the authors have introduced a collection of $\fg[t]$-modules $V(\bm{\xi})$
(with $\fg$ a general simple Lie algebra) indexed by
a $|\gD_+|$-tuple of partitions $\bm{\xi} = (\xi^\ga)_{\ga \in \gD_+}$ satisfying $|\xi^\ga|=\langle h_\ga,\gl\rangle$
for some $\gl \in P_+$.
In their terminology, the theorem says that $V_m(\bm{\ell})$ is isomorphic to $V(\bm{\xi})$ where
$\bm{\xi} = (\xi^\ga)_{\ga\in \gD_+}$ with
\[ \xi^\ga = \begin{cases} \bm{\ell} & \text{if }\langle h_\ga,\varpi_m\rangle = 1,\\
0 & \text{if }\langle h_\ga,\varpi_m\rangle = 0.
\end{cases}
\]
\end{Rem}
The proof of the theorem will occupy the rest of this paper.
Fix $m \in I$ and $\bm{\ell}$ from now on.
By Theorem \ref{Thm:realization}, we have
\begin{align}\label{eq:isom}
V_m(\bm{\ell})
\cong F_{t_{-\varpi_{m^*}}}\Big(\C_{(\ell_1-\ell_2)\gL_0}\otimes \cdots \otimes F_{t_{-\varpi_{m^*}}}
\big(\C_{(\ell_{p-1}-\ell_p)\gL_0}
\otimes F_{t_{-\varpi_{m^*}}}\C_{\ell_p\gL_0}\big)\!\cdots\!\Big)
\end{align}
as $\hfb'$-modules.
We shall determine defining relations of the right-hand side recursively.
In the sequel, we write $\tau= \tau_m$ and $\gs = w_0w_{m,0}$ (see Subsection \ref{Subsection:preliminary}).
Note that
\[ \gs(\varpi_m) = w_0(\varpi_m) = -\varpi_{m^*} \ \text{ and } \ t_{-\varpi_{m^*}} = \gs t_{\varpi_m}\gs^{-1} = \gs\tau
\]
hold.
Let $\gs=s_{i_{\ell(\gs)}}\cdots s_{i_2}s_{i_1}$ be a reduced expression of $\gs$, and set $\gs_j =
s_{i_j}\cdots s_{i_2}s_{i_1}$ for $0\le j \le \ell(\gs)$.
For $a \in \{0,\pm 1\}$, define a subset $\gD[a] \subseteq \gD$ by
\[ \gD[a] = \{ \ga \in \gD\mid \langle h_\ga,\varpi_m\rangle = a\}.
\]
Note that $\gD[\pm 1] \subseteq \pm \gD_+$, and
\begin{equation}\label{eq:small_remark}
\ga \in \gD[a] \text{ if and only if } \langle \gs(h_\ga),\varpi_{m^*}\rangle
= -a.
\end{equation}
We also write $\gD[\ge\! 0] = \gD[0] \sqcup \gD[1]$, etc.
It should be noted that, since $\gs$ is the shortest element such that $\gs(\varpi_m) = -\varpi_{m^*}$,
for every $1 \le j \le \ell(\gs)$ we have
\begin{equation}\label{eq:positive2}
\langle h_{i_j},\gs_{j-1}(\varpi_m)\rangle = 1 \text{ and } \gs_{j-1}^{-1}(\ga_{i_j}) \in \gD[1].
\end{equation}
Define a parabolic subalgebra $\fp_{\varpi_m}$ of $\fg$ by
\[ \fp_{\varpi_m} = \bigoplus_{\ga \in \gD[\ge 0]} \C e_\ga \oplus \fh = \bigoplus_{\ga \in \gD[0] \cap \gD_+} \C f_\ga
\oplus \fb.
\]
For $1\le q \le p$ and $0\le j \le \ell(\gs)$, let $V(q,j)$ be the $\hfb$-module
\[ F_{\gs_j\tau}\Big(\C_{(\ell_q-\ell_{q+1})\gL_0} \otimes F_{t_{-\varpi_{m^*}}}\Big(\cdots\otimes F_{t_{-\varpi_{m^*}}}
\big(\C_{(\ell_{p-1}-\ell_p)\gL_0}
\otimes F_{t_{-\varpi_{m^*}}}\C_{\ell_p\gL_0}\big)\!\cdots\!\Big)\!\Big).
\]
\begin{Prop}\label{Prop*}
For every $q$ and $j$, there exists a nonzero vector $v_{q,j}$ in $V(q,j)$ whose $\hfh_{\mathrm{cl}}$-weight is
$L_q\gs_j(\varpi_m)+\ell_q\gL_0$, such that $V(q,j)$ is generated by $v_{q,j}$ as a $\hfb'$-module and
\begin{align}\label{eq:annihilators}
\mathrm{Ann}_{U(\hfn_+)}v_{q,j}= \sum_{\begin{smallmatrix}\ga \in \gD[-1] \\ \gs_j(\ga) \in \gD_+\end{smallmatrix} }
U(\hfn_+)& e_{\gs_j(\ga)}^{L_q+1} + \sum_{\begin{smallmatrix}\ga \in \gD[\ge0] \\ \gs_j(\ga) \in \gD_+\end{smallmatrix} }
U(\hfn_+) e_{\gs_j(\ga)} \\&+\sum_{\ga\in\gD[-1]}\sum_{(r,s,k)}
U(\hfn_+){}_ke_{\gs_j(\ga)}(r,s) +U(\hfn_+)\Phi_{\gs_j}\big(t\fp_{\varpi_m}[t]\big), \nonumber
\end{align}
where the sum for $(r,s,k)$ is over all $r,s,k \in \Z_{>0}$ such that $r+s \ge 1+ kr + L_{k+q}$.
\end{Prop}
For a while we assume this proposition, and give a proof to Theorem \ref{Thm:Main2}.
Denote by $T_q$ the index set of $(r,s,k)$ in (\ref{eq:annihilators}), that is,
\[ T_q=\big\{(r,s,k)\in \Z_{>0}^3\bigm| r+s \ge 1 + kr + L_{k+q}\big\}.
\]
Since $\langle h_\ga, \varpi_{m^*}\rangle = -\langle h_{\gs^{-1}(\ga)}, \varpi_m\rangle$, we see that
for $\ga \in \gD_+$, $\gs^{-1}(\ga) \in \gD[-1]$ is equivalent to $\langle h_\ga,\varpi_{m^*}\rangle = 1$.
Hence (\ref{eq:isom}) and Proposition \ref{Prop*} with $q = 1$ and $j = \ell(\gs)$ imply that
there exists a nonzero vector $v'$ in $V_m(\bm{\ell})$ whose $\fh$-weight is $-L_1\varpi_{m^*}$, such that $V_m(\bm{\ell})$ is
generated by $v'$ and
\begin{align}\label{eq:annihilators_of_1}
\mathrm{Ann}_{U(\hfn_+)}v' = \sum_{\ga \in \gD_+}
U(\hfn_+)& e_{\ga}^{L_1\langle h_\ga, \varpi_{m^*}\rangle+1}\\
&+ \sum_{\ga \in \gD[-1]}\sum_{(r,s,k) \in T_1}U(\hfn_+){}_ke_{\gs (\ga)}(r,s)
+U(\hfn_+)\Phi_{\gs}(t\fp_{\varpi_m}[t]),\nonumber
\end{align}
where the first summation in the right-hand side is obtained using (\ref{eq:small_remark}).
Since $V_m(\bm{\ell})$ is a finite-dimensional $\fg$-module, $\Phi_{w_0}^{V_m(\bm{\ell})}$ is defined.
Set $v''= \Phi^{V_m(\bm{\ell})}_{w_0}(v') \in V_m(\bm{\ell})_{L_1\varpi_m}$,
and $\hfm_+=\Phi_{w_0}(\hfn_+) = \fn_-[t] \oplus t\fb[t]$.
Since each $\gD[a]$ is stable by $w_0\gs = w_{m,0}$ and $\gD[-1] = -\gD[1]$, it follows that
\begin{align*}
\mathrm{Ann}_{U(\hfm_+)}v'' = \sum_{\ga \in \gD_+} U(\hfm_+)f_\ga^{L_1\langle h_\ga,\varpi_m\rangle+1}
+ \sum_{\ga \in \gD[1]}\sum_{(r,s,k)\in T_1}U(\hfm_+){}_kf_{\ga}(r,s)
+U(\hfm_+)t\fp_{\varpi_m}[t].
\end{align*}
Let $M$ be the $\fg[t]$-module generated by a vector $v$ with relations in Theorem \ref{Thm:Main2}.
By Lemma \ref{Lem:Chari-Venkatesh} (ii), $v$ satisfies
\[ {}_kf_\ga(r,s)v=0 \ \text{ for } \ga \in \gD[1], (r,s,k) \in T_1.
\]
Then we see from the above description of $\mathrm{Ann}_{U(\hfm_+)}v''$ that
there exists a surjective $\hfm_+$-module homomorphism
from $V_m(\bm{\ell})$ to $M$ mapping $v''$ to $v$.
On the other hand, since
\[ V_m(\bm{\ell}) \cong V(\ell_1\varpi_m) \otimes \cdots \otimes V(\ell_p\varpi_m)
\]
as a $\fg$-module,
we have $V_m(\bm{\ell})_{\mu} = 0$ if $\mu > L_1\varpi_{m}$, which implies $\fn_+v''=0$.
Then again by Lemma \ref{Lem:Chari-Venkatesh} (ii), $v''$ satisfies $(e_\ga \otimes t)^sf_\ga^{r+s}v'' = 0$ for
$\ga \in \gD[1]$ and $r,s$ with $(r,s,k) \in T_1$ for some $k \in \Z_{>0}$,
and we also see that there exists a surjective $\fg[t]$-module homomorphism from $M$ to $V_m(\bm{\ell})$ mapping $v$ to $v''$.
Hence $V_m(\bm{\ell}) \cong M$ holds, and the theorem is proved.
The rest of this paper is devoted to proving Proposition \ref{Prop*}.
Define a left $U(\hfn_+)$-ideal $\mathcal{I}(q,j)$ by the right-hand side of (\ref{eq:annihilators}).
We prove the assertion by induction on $(q,j)$.
When $q=p$ and $j = 0$,
\[ V(p,0) = F_\tau\C_{\ell_p\gL_0} \cong \C_{\ell_p(\varpi_m+\gL_0)}
\]
is a $1$-dimensional module with $\hfh_{\mathrm{cl}}$-weight $\ell_p(\varpi_m+\gL_0)$ on which $\hfn_+$ acts trivially.
Hence in order to verify the assertion in this case, it suffices to show that $\mathcal{I}(p,0) = U(\hfn_+)$.
The containment $\mathcal{I}(p,0) \subseteq U(\hfn_+)$ is obvious,
and $\fn_+ + t\fp_{\varpi_m}[t] \subseteq \mathcal{I}(p,0)$ is easily seen.
Moreover since $L_{1+p} = 0$, $(1,s,s) \in T_p$ for every $s\in \Z_{>0}$,
and hence we have
\[ {}_se_\ga(1,s) = e_\ga \otimes t^s \in \mathcal{I}(p,0) \ \text{ for } \ga \in \gD[-1], \ s \in \Z_{>0}.
\]
Hence $U(\hfn_+) \subseteq \mathcal{I}(p,0)$ holds.
Next we shall prove that, if the assertion for $(q,j-1)$ holds, then that for
$(q,j)$ also holds.
We write $i = i_j$ for short.
We have $V(q,j) = F_{i}V(q,j-1)$,
and the $\hfh_{\mathrm{cl}}$-weight of $v_{q,j-1}$ is $L_q\gs_{j-1}(\varpi_m)+\ell_q\gL_0$.
Moreover $e_iv_{q,j-1}=0$ holds by (\ref{eq:positive2}).
Set $v_{q,j} = f_i^{L_q}v_{q,j-1}$.
Since $V(q,j)$ is a submodule of an integrable module,
it follows from the representation theory of $\mathfrak{sl}_2$ that
\[ v_{q,j} \neq 0, \ \ \ f_iv_{q,j} = 0,\ \text{ and } \ e_i^{L_q}v_{q,j}
\in \C^\times v_{q,j-1}.
\]
Hence we have
\[ V(q,j)=F_iV(q,j-1) = U(\hfp_i)v_{q,j-1} = U(\hfp_i)v_{q,j} = U(\hfb')v_{q,j},
\]
and the cyclicity of $V(q,j)$ is proved.
Moreover it is obvious that the $\hfh_{\mathrm{cl}}$-weight of $v_{q,j}$ is $L_{q}\gs_j(\varpi_m)+\ell_q\gL_0$.
It remains to prove $\mathrm{Ann}_{U(\hfn_+)}(v_{q,j}) = \mathcal{I}(q,j)$.
Let $\mathcal{J}$ be the left $U(\hfn_{i})$-ideal defined by
\begin{align*}
\mathcal{J}= \sum_{\begin{smallmatrix}\ga \in \gD[-1]\\ \gs_{j-1}(\ga) \in \gD_+\end{smallmatrix}}
U(\hfn_i)&e_{\gs_{j-1}(\ga)}^{L_q+1} + \sum_{\begin{smallmatrix}\ga \in \gD[\ge0]\\ \gs_{j-1}(\ga) \in
\gD_+\setminus \{\ga_i\}
\end{smallmatrix}}
U(\hfn_i)e_{\gs_{j-1}(\ga)}\\&+\sum_{\ga\in\gD[-1]}\sum_{(r,s,k)\in T_q}
U(\hfn_i){}_ke_{\gs_{j-1}(\ga)}(r,s) +U(\hfn_i)\Phi_{\gs_{j-1}}\big(t\fp_{\varpi_m}[t]\big).
\end{align*}
By the induction hypothesis we have
\begin{align*}
\mathrm{Ann}_{U(\hfn_+)}v_{q,j-1} =\mathcal{I}(q,j-1)= U(\hfn_+)e_{i}+U(\hfn_+)\mathcal{J}.
\end{align*}
It suffices to show that $V(q,j-1)$, $v_{q,j-1}$ and $\mathcal{J}$ satisfy the conditions (i)--(iii) in Lemma
\ref{Lem:proceed}.
Indeed if they satisfy the conditions, it follows from the lemma that
\begin{align*}
\mathrm{Ann}_{U(\hfn_+)}v_{q,j} = U(\hfn_+)e_i^{L_{q}+1} + U(\hfn_+)\Phi_i(\mathcal{J})= \mathcal{I}(q,j),
\end{align*}
as required.
The condition (i) follows from the induction hypothesis, and the condition (iii) is proved by \cite[Theorem 5]{MR1887117},
or \cite[Corollary 2.13 and Lemma 3.2(ii)]{MR2964614}.
In order to show the condition (ii) we need to prove that $\mathcal{J}$ is $\mathrm{ad}(e_i)$-invariant.
For that, we first verify for $\ga \in \gD \setminus \{-\ga_i\}$ that
\[ \ad(e_i)U\big(t\fg_\ga[t]\big) \subseteq \mathcal{J}.
\]
If $\ga + \ga_i \notin \gD$, this is obvious.
Assume that $\ga + \ga_i \in \gD$.
Then it follows from (\ref{eq:positive2}) that
\[ \ga + \ga_i = \gs_{j-1}\big(\gs_{j-1}^{-1}(\ga) + \gs_{j-1}^{-1}(\ga_i)\big) \in \gs_{j-1}\big(\gD[\ge\!0]\big).
\]
Since $[e_{\ga+\ga_i},e_\ga] = 0$ holds, this implies
\[ \ad(e_i)U\big(t\fg_\ga[t]\big) \subseteq U(t\fg_\ga[t])t\fg_{\ga+\ga_i}[t] \subseteq \mathcal{J},
\]
as required.
In a similar manner, $\ad(e_i)U\big(\C e_\ga\big) \subseteq \mathcal{J}$ for $\ga \in \gD_+$ is also proved.
In addition, $\ad(e_i)\big(t\fh[t]\big)= t\fg_{\ga_i}[t] \subseteq \mathcal{J}$ follows from (\ref{eq:positive2}).
Now combining these facts with Proposition \ref{Prop2}, $\ad(e_i)\big(\mathcal{J}\big) \subseteq \mathcal{J}$ is proved.
Finally it remains to prove that the assertion for $\big(q+1,\ell(\gs)\big)$ implies that for $(q,0)$.
Note that
\begin{align*}
V(q,0) = F_\tau \Big(\C_{(\ell_q-\ell_{q+1})\gL_0} \otimes V\big(q+1,\ell(\gs)\big)\Big)
\cong \C_{(\ell_q-\ell_{q+1})(\varpi_m+\gL_0)} \otimes (\tau^{-1})^*V\big(q+1,\ell(\gs)\big).
\end{align*}
Let $z$ be a basis of $\C_{(\ell_q-\ell_{q+1})(\varpi_m+\gL_0)}$ and set
$v_{q,0} = z \otimes (\tau^{-1})^*v_{q+1,\ell(\gs)}$,
where $(\tau^{-1})^*v_{q+1,\ell(\gs)}$ is the image of $v_{q+1,\ell(\gs)}$ under the linear isomorphism
$V\big(q+1,\ell(\gs)\big) \to (\tau^{-1})^*V\big(q+1,\ell(\gs)\big)$.
By (\ref{eq:rule_of_tau}), the $\hfh_{\mathrm{cl}}$-weight of $v_{q,0}$ is
\begin{align*}
(\ell_q-\ell_{q+1})(\varpi_m+\gL_0&) + \tau(-L_{q+1}\varpi_{m^*}+\ell_{q+1}\gL_0)\\ &= (\ell_q-\ell_{q+1})(\varpi_m+\gL_0)
+ \tau\big(-L_{q+1}(\varpi_{m^*}+\gL_0) + (\ell_{q+1}+L_{q+1})\gL_0\big)\\
&= (\ell_q-\ell_{q+1})(\varpi_m+\gL_0) + \big(-L_{q+1}\gL_0+ (\ell_{q+1}+L_{q+1})(\varpi_m+\gL_0)\big)\\
&= L_q\varpi_m + \ell_q\gL_0.
\end{align*}
Moreover since $\hfn_+$ acts trivially on $z$, we have
\begin{align}\label{eq:final_ann}
\mathrm{Ann}_{U(\hfn_+)}(v_{q,0}) = \tau\Big(\mathrm{Ann}_{U(\hfn_+)}(v_{q+1,\ell(\gs)})\Big)
=\tau\Big(\mathcal{I}\big(q+1,\ell(\gs)\big)\Big).
\end{align}
By noting $\tau \gs = t_{\varpi_m}$, we see that
\begin{equation}\label{eq:tau2}
\tau(e_{\gs(\ga)}\otimes t^s) = e_\ga \otimes t^{s-\langle h_\ga,\varpi_m\rangle} \ \text{ for } \ga \in \gD,
s \in \Z_{\ge 0}.
\end{equation}
\begin{Lem}\label{Lem:elementary_lemma}
For $\ga \in \gD[-1]$ and $r,s,k \in \Z_{>0}$ we have
\[ \tau\big({}_ke_{\gs(\ga)}(r,s)\big) = {}_{k+1}e_{\ga}(r,s+r).
\]
\end{Lem}
\begin{proof}
It is easily seen that the map
\[ {}_k\mathbf{S}(r,s) \ni (b_j)_{j\ge 0} \mapsto (b_j')_{j\ge0}\in{}_{k+1}\mathbf{S}(r,s+r)
\]
defined by $b_0' = 0$ and $b_j' = b_{j-1}$ for $j \ge 1$ is bijective.
Then the assertion is proved from (\ref{eq:tau2}).
\end{proof}
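The shift $(b_j)_{j\ge0}\mapsto(b_j')_{j\ge0}$ can be checked on small examples (a sketch, not part of the paper; ${}_k\mathbf{S}(r,s)$ is represented by tuples of length $s+1$, and each image tuple is padded with zeros on the right to reach length $s+r+1$):

```python
from itertools import product

def kS(r, s, k):
    """Elements of S(r,s) supported on indices j >= k, as tuples of length s+1."""
    return {b for b in product(range(r + 1), repeat=s + 1)
            if sum(b) == r
            and sum(j * v for j, v in enumerate(b)) == s
            and all(v == 0 for j, v in enumerate(b) if j < k)}

r, s, k = 2, 6, 2
# b'_j = b_{j-1}: prepend one zero, then pad to length (s + r) + 1.
shift = {(0,) + b + (0,) * (r - 1) for b in kS(r, s, k)}
print(shift == kS(r, s + r, k + 1))  # True: the shift is a bijection
```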
Using (\ref{eq:tau2}) and the above lemma, we see that
\begin{align}\label{eq:tauI}
\tau\Big(\mathcal{I}\big(q+1,\ell(\gs)\big)\Big) =
\sum_{\ga \in \gD[1]} U(\hfn_+&)\Big\{\C(f_\ga\otimes t)^{L_{q+1}+1}
\\ +&\sum_{(r,s,k)\in T_{q+1}}\C\, {}_{k+1}f_\ga(r,s+r)\Big\}
+ U(\hfn_+)\big(\fn_+ + t\fp_{\varpi_m}[t]\big).\nonumber
\end{align}
On the other hand, we have
\begin{align}\label{eq:Iq0}
\mathcal{I}(q,0) = \sum_{\ga \in \gD[1]} \sum_{(r,s,k)\in T_q}U(\hfn_+){}_{k}f_\ga(r,s)
+ U(\hfn_+)\big(\fn_+ + t\fp_{\varpi_m}[t]\big).
\end{align}
It is easily checked that
\[ \big\{(r,s+r,k+1)\bigm| (r,s,k) \in T_{q+1}\big\} = \{(r,s,k) \in T_q\mid k>1\},
\]
which implies
\begin{equation}\label{eq:easy_equal}
\sum_{(r,s,k)\in T_{q+1}}U(\hfn_+) {}_{k+1}f_\ga(r,s+r) = \sum_{\begin{smallmatrix} (r,s,k)\in T_q \\ k > 1
\end{smallmatrix}}U(\hfn_+) {}_{k}f_\ga(r,s).
\end{equation}
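The identity between the two index sets can be confirmed over a finite box once $L_i$ is computed from a sample partition (a sketch, not part of the paper; the partition $(4,2,1)$ and the box bounds are arbitrary choices). Equivalently, for $k>1$ one checks that $(r,s,k)\in T_q$ exactly when $s>r$ and $(r,s-r,k-1)\in T_{q+1}$:

```python
ell = [4, 2, 1]                      # a sample partition l_1 >= l_2 >= l_3
L = lambda i: sum(ell[i - 1:])       # L_i = l_i + ... + l_p, and 0 for i > p

def in_T(q, r, s, k):
    """(r, s, k) lies in T_q iff r + s >= 1 + k*r + L_{k+q}."""
    return r + s >= 1 + k * r + L(k + q)

q = 1
ok = all(in_T(q, r, s, k) == (s > r and in_T(q + 1, r, s - r, k - 1))
         for r in range(1, 9) for s in range(1, 15) for k in range(2, 8))
print(ok)  # True
```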
We shall prove for each $\ga \in \gD[1]$ that
\begin{align}\label{eq:final1}
(f_\ga\otimes \,t)^{L_{q+1}+1}&\in \sum_{(r,s,k)\in T_q}U(\hfn_+){}_{k}f_\ga(r,s), \ \text{ and}\\
\label{eq:final2}
{}_1f_\ga(r,s) &\in U(\hfn_+)\Big(\C(f_\ga\otimes t)^{L_{q+1}+1} + \fn_+[t] + t\fh[t]\Big)\ \text{ for } r,s
\text{ with } (r,s,1) \in T_q.
\end{align}
By comparing (\ref{eq:tauI}) and (\ref{eq:Iq0}) and using (\ref{eq:easy_equal}), we see that these imply
\[ \tau\Big(\mathcal{I}\big(q+1,\ell(\gs)\big)\Big) = \mathcal{I}(q,0),
\]
and then by (\ref{eq:final_ann}) we have $\mathrm{Ann}_{U(\hfn_+)}(v_{q,0}) = \mathcal{I}(q,0)$,
which completes the proof.
Setting $r=s=L_{q+1}+1$, we have
\[ {}_1f_\ga(r,s) = (f_\ga\otimes t)^{L_{q+1}+1},
\]
and hence (\ref{eq:final1}) follows.
Assume that $r,s$ satisfy $(r,s,1) \in T_q$, which implies $s \ge L_{q+1}+1$.
Since ${}_1f_\ga(r,s) = 0$ if $r >s$, we may assume $r\le s$.
If we apply the Lie algebra automorphism $\tau\circ\Phi_{\gs}$ to (\ref{eq:Garland}) with $s$ replaced by $s-r$, we have
from Lemma \ref{Lem:elementary_lemma} that
\[ e_{\ga}^{(s-r)}(f_\ga\otimes t)^{(s)} - (-1)^{s-r}{}_1f_\ga(r,s) \in U(\hfn_+)\big(\fn_+[t] \oplus t\fh[t]\big).
\]
Hence (\ref{eq:final2}) also holds.
\section*{Acknowledgement}
The author is supported by JSPS Grant-in-Aid for Young Scientists (B) No.\ 25800006.
The year 2011 was declared the International Year of Forests by the United Nations (UN). The aim was to raise awareness of the importance of forests for society and to strengthen efforts toward sustainable forest management, the sustainable development of forests, and the protection of all types of forests for the benefit of present and future generations.

The importance of forests for people
Forests, as the most complex ecosystems on the planet, are of enormous importance to society, and their usefulness far exceeds their statistically reported share of gross domestic product. Forests play an important role, for example, in purifying water and air, in protecting soil against erosion and strong wind, and in protecting biological diversity. For many countries, especially developing ones, forests represent one of the most important means of development and of combating poverty. As such, the International Year of Forests was intended to draw attention to the fact that forests are an important component of development and a crucial link to many other issues at the global level. At the international level, the International Year of Forests aimed to consolidate the worldwide discussion on forests; at the regional level, to find ways of providing the tools and skills needed for forest management; and at the ecological level, to seek a holistic approach to the management of the world's resources.

The International Year of Forests 2011 followed the International Year of Biodiversity 2010. Throughout the year, foresters around the world prepared many events and materials about forests and forestry for the public, similar to the so-called Weeks of Forests.

External links
UN Forum
Hand is a direct target of Tinman and GATA factors during Drosophila cardiogenesis and hematopoiesis
Zhe Han
Department of Molecular Biology, University of Texas Southwestern Medical Center at Dallas, 6000 Harry Hines Boulevard, Dallas, TX 75390, USA
Eric N. Olson*
*Author for correspondence (e-mail: eric.olson@utsouthwestern.edu)
Zhe Han, Eric N. Olson; Hand is a direct target of Tinman and GATA factors during Drosophila cardiogenesis and hematopoiesis. Development 1 August 2005; 132 (15): 3525–3536. doi: https://doi.org/10.1242/dev.01899
The existence of hemangioblasts, which serve as common progenitors for hematopoietic cells and cardioblasts, has suggested a molecular link between cardiogenesis and hematopoiesis in Drosophila. However, the molecular mediators that might link hematopoiesis and cardiogenesis remain unknown. Here, we show that the highly conserved basic helix-loop-helix (bHLH) transcription factor Hand is expressed in cardioblasts, pericardial nephrocytes and hematopoietic progenitors. The homeodomain protein Tinman and the GATA factors Pannier and Serpent directly activate Hand in these cell types through a minimal enhancer, which is necessary and sufficient to drive Hand expression in these different cell types. Hand is activated by Tinman and Pannier in cardioblasts and pericardial nephrocytes, and by Serpent in hematopoietic progenitors in the lymph gland. These findings place Hand at a nexus of the transcriptional networks that govern cardiogenesis and hematopoiesis, and indicate that the transcriptional pathways involved in development of the cardiovascular, excretory and hematopoietic systems may be more closely related than previously appreciated.
Hand, tinman, pannier, serpent, Drosophila, Heart development, Hematopoiesis, Lymph gland, Transcription regulation
The fruit fly, Drosophila melanogaster, has a simple open circulatory system composed of circulating blood cells (hemocytes) and a dorsal vessel surrounded by pericardial cells. The dorsal vessel is a contractile tube lined by a layer of myoepithelial vascular cells called cardioblasts. The anterior part, called the aorta, functions as a major blood vessel; the posterior part, called the heart, pumps hemocytes through the aorta into the body cavity. The pericardial cells flanking the aorta and heart are excretory cells, so-called pericardial nephrocytes. Anterior to the pericardial nephrocytes, there are two pairs of cell clusters flanking the aorta, which comprise the lymph and ring gland. The lymph gland is made up of hematopoietic progenitor cells that generate all three blood cell types in the adult. The cardioblasts, pericardial nephrocytes and the lymph gland hematopoietic progenitors all arise from the same cardiac mesoderm that is specified by signaling pathways involving bone morphogenetic protein (Bmp), Decapentaplegic (Dpp), Wingless (Wg) and fibroblast growth factor (Fgf) (Cripps and Olson, 2002; Evans et al., 2003), hinting at a possible link between cardiogenesis and hematopoiesis.
Several transcription factors have been shown to play key roles in cardiogenesis and hematopoiesis in flies and vertebrates. The Drosophila NK-type homeobox gene tinman (tin), the earliest marker of the cardiac lineage, is initially expressed in the entire mesoderm before becoming restricted to the dorsal mesoderm and later to the cardiac mesoderm, in response to ectodermal Dpp and Wg signals. After all the cardiac cell types are specified, tin expression is extinguished in many cardiac cell types and maintained in only a subset of cardiac and pericardial cells (Han et al., 2002; Han and Bodmer, 2003). In tin mutant embryos, the entire cardiogenic region and the lymph gland fail to form (Bodmer, 1993; Mandal et al., 2004), indicating the essential role of Tinman in early specification of the cardiac and hematopoietic lineages.
There are several NK-type homeobox genes in vertebrates, which are named Nkx2.3-Nkx2.10 (Evans, 1999). Nkx2.5 is expressed in the early cardiac crescent and continues to be expressed throughout heart development. Mouse embryos lacking Nkx2.5 show early cardiac defects and arrested cardiogenesis before looping morphogenesis (Lyons et al., 1995). Furthermore, overexpression of a dominant-negative form of Nkx2.5 in Xenopus blocks cardiogenesis (Grow and Krieg, 1998) and mutations in Nkx2.5 cause congenital heart disease in humans (Schott et al., 1998). As tin is no longer expressed in hematopoietic progenitors after stage 13, its function in hematopoiesis is limited to the early specification of the cardiogenic mesoderm containing the progenitor cells for the lymph gland (Mandal et al., 2004).
Members of the GATA family of zinc-finger transcription factors play crucial roles in both cardiogenesis and hematopoiesis in Drosophila and vertebrates. The Drosophila GATA factor Pannier is expressed in the cardiac mesoderm as well as the overlaying ectoderm and functions primarily in cardiogenesis. Embryos lacking pannier (pnr) show a dramatic reduction of cardiac progenitor cells (Gajewski et al., 1999; Alvarez et al., 2003; Klinedinst and Bodmer, 2003). In vertebrates, GATA4, GATA5 and GATA6 are expressed in the cardiogenic region. Loss-of-function assays in mouse, Xenopus and zebrafish have shown that these GATA factors are required for myocardial differentiation and normal heart development (Molkentin et al., 1997; Gove et al., 1997; Reiter et al., 1999). Another Drosophila GATA factor Serpent (Srp) functions mainly in hematopoiesis. It is expressed in all hematopoietic progenitors formed in the head mesoderm and the lymph gland. In serpent (srp) mutant embryos, hematopoiesis from both the head mesoderm and the lymph gland is inhibited (Lebestky et al., 2000; Mandal et al., 2004), indicating that Serpent plays an essential role in hematopoietic progenitor cell specification. In vertebrates, GATA1, GATA2 and GATA3 play fundamental roles in various aspects of hematopoietic development (Tsai et al., 1994; Ting et al., 1996; Ferreira et al., 2005). It is likely that the functions of Pannier and Serpent in cardiogenesis and hematopoiesis, respectively, reflect the highly conserved but simplified developmental processes in Drosophila compared with vertebrates.
Several transcription factors that are directly regulated by Tinman and Pannier have been identified, including Mef2 and even-skipped, through enhancer mutagenesis studies (Gajewski et al., 1997; Gajewski et al., 1998; Nguyen and Xu, 1998; Knirr and Frasch, 2001; Han et al., 2002). These studies have begun to establish a transcriptional network that governs Drosophila cardiogenesis. In this network, Tinman and Pannier function in parallel as key cardiogenic factors at the top of the hierarchy. Although several transcription factors, such as Lozenge (Lz) and Glial-cells-missing (Gcm), appear to act 'downstream' of Serpent, there is as yet no evidence for direct activation of these genes by Serpent.
The Drosophila Hand gene encodes a highly conserved basic helix-loop-helix (bHLH) transcription factor. Interestingly, Hand is the only gene identified so far that is expressed in a specific pattern in all the cardioblasts, pericardial nephrocytes and hematopoietic progenitors in the lymph gland (Kolsh and Paululat, 2002). The vertebrate Hand genes have been shown to play essential roles during heart development (Srivastava et al., 1995; Srivastava et al., 1997; Yamagishi et al., 2001; McFadden et al., 2005). Hand genes have also been shown to be expressed during heart development in Xenopus, zebrafish and Ciona (Sparrow et al., 1998; Yelon et al., 2000; Davidson and Levine, 2003). The conserved cardiac expression patterns of Hand genes across vast evolutionary distances suggest that these genes play conserved roles during cardiogenesis and may be regulated by conserved genetic pathways.
In an effort to understand the position of Hand in the genetic networks that govern cardiogenesis and hematopoiesis, we searched for and identified the cis-regulatory region of the Drosophila Hand gene. We describe a minimal Hand enhancer that completely recapitulates endogenous Hand expression in cardioblasts, pericardial nephrocytes and lymph gland prehemocytes. This enhancer contains consensus binding sites for the NK factor Tinman and the GATA factors Pannier and Serpent, which are conserved across evolutionarily divergent Drosophila species. Mutagenesis of these consensus binding sites shows that Hand is directly activated by Tinman and Pannier in the heart, and by Serpent in the lymph gland. Overexpression of Tinman, Pannier or Serpent induces ectopic Hand in muscle progenitors, dorsal vessel and hematopoietic progenitors, respectively, indicating that Hand is activated separately by Tinman, Pannier and Serpent in distinct cell types. These findings place Hand at a central position to link the transcriptional networks that govern cardiogenesis and hematopoiesis.
Drosophila strains
The following mutant stocks were used: tinEC40 (Bodmer, 1993), pnrVX6 (Ramain et al., 1993), srpneo45 (the Bloomington stock center). Different Drosophila species were provided by the Tucson species center. Overexpression of transgenes was accomplished by using the Gal4-UAS system (Brand and Perrimon, 1993). The following fly lines were used: twi-Gal4;24B-Gal4 (Greig and Akam, 1993), UAS-tin (Ranganayakulu et al., 1998), UAS-pnr (Gajewski et al., 1999), UAS-Srp (Waltzer et al., 2002), UAS-TinEnR (Han et al., 2002), UAS-PnrEnR (Klinedinst and Bodmer, 2003). Oregon-R was used as the wild-type reference strain.
Generation of transgenic fly lines
The various Hand enhancer fragments (Fig. 2A) were PCR amplified and subcloned into pC4LZ (containing the lacZ reporter gene) or pPelican (containing the GFP reporter gene) (Barolo et al., 2000), using SphI/XhoI or KpnI/NotI sites, respectively. The constructs were injected according to standard procedures. Germline transformed, transgenic flies were selected by red eye color (w+) and maintained as homozygotes. At least four independent transgenic lines were analyzed for each construct.
Immunohistochemistry and microscopy
Embryos from different lines were collected and stained with various antibodies as previously described (Han et al., 2002). The following primary antibodies were used: mouse anti-β-galactosidase 1:300 (Promega); rat anti-Eve 1:200 (from D. Kosman); rabbit anti-Tinman 1:500 (from R. Bodmer); rabbit anti-Dmef2 1:1000 (from B. Peterson); rabbit anti-GFP 1:2000 (Abcam); and rabbit anti-Srp 1:500 (from R. Reuter). Cy2, Cy3, Cy5 or Biotin-conjugated secondary antibodies (from Jackson Lab) were used to recognize the primary antibodies. Images were obtained with a Zeiss LSM510-meta confocal microscope or a Leica DMRXE compound microscope.
Electrophoretic mobility shift assays
GST-Tin and GST-Pnr fusion proteins were prepared according to standard procedures. Complementary oligonucleotides containing the Tin or GATA consensus site were radiolabeled by Klenow fill-in reactions and used as probes. Complementary oligonucleotides containing wild-type consensus binding sites or binding-site mutations were used as non-labeled competitors to compete for the binding of the GST fusion proteins in the presence of the radiolabeled probe. After a 30-minute incubation of protein, probe and competitor oligonucleotides at 4°C, the products were electrophoresed in 7.5% non-denaturing polyacrylamide gels at 4°C. The sense-strand DNA sequences of the oligonucleotides used are shown below, with wild-type and mutated consensus binding sites in parentheses: Tin1, TTT CCA AAA AGG (CACTTAA) TTA ATC AAA CCC; Tin2, TTT CTG AAG CAC (CACTTAG) ACA CTT GTC TCT; Tin3, CTT TTT ATA AAG (TCAAGTG) CTT TTG TTT CTT; Tin4/G5, ATA ATA AAC AAA (CAATTGA) (GATA) TCT ACG CCC CAG; G1, CTC TTG TGT TCA (TATC) TAA AAC CAG ATT; G2, GCG TCT GCG GTT (TATC) ACT TCC GAA ATT; G3, CCA TTA GGA ATA (TATC) TAC AAT CAA TCG; G4, CAA TCG AGT TTT (TATC) TGC GGA TTA CAA; Tin1m, TTT CCA AAA AGG (CATCCAA) TTA ATC AAA CCC; Tin2m, TTT CTG AAG CAC (CATCCAG) ACA CTT GTC TCT; Tin3m, CTT TTT ATA AAG (TCGGATG) CTT TTG TTT CTT; Tin4m, ATA ATA AAC AAA (CATCCGA) (GATA) TCT ACG CCC CAG; G1m, CTC TTG TGT TCA (TCCC) TAA AAC CAG ATT; G2m, GCG TCT GCG GTT (TCCC) ACT TCC GAA ATT; G3m, CCA TTA GGA ATA (TCCC) TAC AAT CAA TCG; G4m, CAA TCG AGT TTT (TCCC) TGC GGA TTA CAA; G5m, ATA ATA AAC AAA (CAATTGA) (GGGA) TCT ACG CCC CAG.
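A design principle of the competition assay is that each mutant competitor differs from its wild-type counterpart only within the consensus binding site. As an illustrative check (not part of the original methods), the mismatch positions between the Tin1 and Tin1m sense strands listed above can be computed directly; positions are 0-based, and the CACTTAA consensus occupies positions 12-18:

```python
# Compare a wild-type competitor oligo with its mutant version and
# report the 0-based positions at which they differ. Sequences are the
# Tin1 and Tin1m sense strands from Materials and methods (spaces removed).
def mismatch_positions(wt, mut):
    assert len(wt) == len(mut), "oligos must be the same length"
    return [i for i, (a, b) in enumerate(zip(wt, mut)) if a != b]

tin1  = "TTTCCAAAAAGGCACTTAATTAATCAAACCC"   # (CACTTAA) at positions 12-18
tin1m = "TTTCCAAAAAGGCATCCAATTAATCAAACCC"   # (CATCCAA) at positions 12-18

diffs = mismatch_positions(tin1, tin1m)
print(diffs)                                 # [14, 15, 16]
# Every mutated base falls inside the consensus site, as intended.
assert all(12 <= i <= 18 for i in diffs)
```

The same check applies to each of the other wild-type/mutant pairs.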
Transfection assays
Cell transfection and luciferase assays were performed as described (Han et al., 2004). Reporter plasmid (100 ng) and 100 ng of each activator plasmid were used. The Hand-luciferase reporter was generated by cloning the minimal Hand enhancer identified in this study into the pGL3 vector (Promega). Tin-pAc5.1, Pnr-pAc5.1 and SrpNC-pAc5.1 were generated by cloning the full-length tin, pnr or srp cDNAs, respectively, into the pAc5.1-HisA vector (Invitrogen). Luciferase activities are expressed as mean±s.d. from three experiments.
Expression of Hand in cardioblasts, pericardial nephrocytes and lymph gland hematopoietic progenitors
We and others (Moore et al., 2000; Kölsch and Paululat, 2002) have identified the Drosophila Hand gene by homology to its vertebrate orthologs. Like its vertebrate orthologs, Drosophila Hand is expressed in a specific pattern in the cardiogenic mesoderm. Hand expression is initiated in the cardiogenic region at late stage 12, immediately following the differentiation of Even-skipped (Eve)-positive mesodermal progenitors into segmentally repeated Eve pericardial cells (EPCs) and DA1 muscles, which marks the completion of the progenitor cell divisions that give rise to the cardioblasts and pericardial nephrocytes (Han and Bodmer, 2003). Cardiac expression of Hand is initially weak and segmental, but becomes strong in most cardioblasts and pericardial cells from stage 13 (Fig. 1A-C). At the end of embryogenesis, when the heart is completely formed, Hand is expressed in all the cardioblasts, which also express Dmef2 (Fig. 1D-F), and in all the pericardial nephrocytes that express even-skipped (eve) (Fig. 1D-F).
At stage 15, tin is expressed in four of the six cardioblasts in each hemisegment from segment A1 to A5, in all the Eve-positive pericardial cells and in all cardioblasts from segment T2 to T3, but not in the lymph gland (Fig. 1G). Hand expression is detected in all the Tinman-positive cardiac cells (Fig. 1H). Hand is likely to be expressed in all the pericardial nephrocytes, as all Zfh-1-positive pericardial cells express Hand (Fig. 1I). odd-skipped (odd) is expressed in both the lymph gland hematopoietic progenitor cells and a subset of pericardial nephrocytes (Fig. 1J). Hand expression is also detected in all the Odd-skipped-positive hematopoietic progenitors and pericardial nephrocytes (Fig. 1K). In addition, Hand is co-expressed with Serpent in all the lymph gland progenitors (Fig. 1L). The secreted extracellular protein Pericardin (Prc) labels the ring gland and the extracellular matrix surrounding the pericardial nephrocytes (Fig. 1M). Hand expression is not detected in the ring gland, but Hand-expressing cells are surrounded by Prc from segment T2 to A6 (Fig. 1N-O). Hand expression also appears in the visceral mesoderm (Fig. 1A-C; data not shown), the garland cells (data not shown) and a subset of central nervous system cells (data not shown).
The expression pattern of Hand is fully recapitulated by the Hand enhancer-driven reporter gene in the Drosophila heart. (A-F) Hand is strongly expressed in the developing cardioblasts that express Mef2 (blue) and a subset of pericardial nephrocytes that express Even-skipped (red). Hand expression is also detected in the visceral mesoderm (arrowhead in A) and the lymph gland (arrowhead in D). (G,H) All the Tinman-positive cardioblasts and pericardial cells (red) express Hand. (I) All pericardial cells labeled by Zfh1 (red) express Hand. (J,K) Hand is also expressed in all pericardial cells that express Odd-skipped (red), including the lymph gland pre-hemocytes and the pericardial nephrocytes. (L) All lymph gland hematopoietic progenitor cells that express Serpent (red) also express Hand. (M-O) The extracellular protein Pericardin (red), expressed by pericardial nephrocytes, encloses Hand-expressing pericardial cells (N); Hand-expressing lymph gland hematopoietic progenitor cells (arrowhead) do not express Pericardin, and Pericardin-positive ring gland cells (arrow) do not express Hand. In all panels, Hand transcripts were detected by in situ hybridization and are labeled in green. Other cardiac and hematopoietic markers are labeled in red as indicated. (A-C) Lateral views of stage 13 embryos; (D-O) dorsal views of stage 15-16 embryos. Anterior is towards the left in all panels.
Cardiac and hematopoietic expression of Hand is controlled by a 513 bp enhancer
To search for cis-regulatory elements capable of conferring the specific expression pattern of Hand in cardioblasts, pericardial nephrocytes and lymph gland hematopoietic progenitors, we generated a series of reporter genes containing lacZ and the hsp70 basal promoter linked to genomic fragments within a 13 kb genomic region encompassing the gene, and examined reporter gene expression in transgenic embryos. As shown in Fig. 2A, we identified a 513 bp minimal enhancer, referred to as the Hand cardiac and hematopoietic (HCH) enhancer, between exons 3 and 4 of the Hand gene (see Fig. 2B for the sequence), which was both necessary and sufficient to direct lacZ expression in the entire embryonic heart and lymph gland in a pattern identical to that of the endogenous Hand gene (Fig. 2C, parts a-c). Further deletions of this enhancer caused either partial or complete loss of activity (data not shown). The 513 bp HCH enhancer showed the same expression pattern in the heart and lymph gland as the larger genomic fragments that were positive for enhancer activity (data not shown). We conclude that this enhancer fully recapitulates the temporal and spatial pattern of Hand transcription in the distinct cell types derived from the cardiogenic region.
Replacing the lacZ reporter gene with a GFP reporter made it possible to examine HCH enhancer activity after embryogenesis. HCH-GFP is expressed in embryos in the same pattern as Hand transcripts (Fig. 2C, part d). After embryogenesis, the enhancer activity remains strong in the lymph gland, cardioblasts and pericardial nephrocytes of larvae (Fig. 2C, part e), and GFP expression persists in the heart throughout the Drosophila life cycle (data not shown).
Identification of the minimal Hand cardiac and hematopoietic (HCH) enhancer. (A) The Hand gene, located on chromosome 2, contains four exons. The 13 kb genomic region containing the Hand-coding sequence was screened for expression in the embryonic heart. A 513 bp minimal cardiac enhancer (called HCH) was identified between exons 3 and 4 of the Hand-coding sequence. The top eight genomic regions were assayed using a lacZ reporter, and the bottom three genomic regions were assayed using a GFP reporter. B, BamHI; R, EcoRI; S, SalI; X, XhoI. (B) DNA sequence of the HCH enhancer with Tinman- and GATA-binding sites highlighted in yellow and blue, respectively. (C) The cardiac and hematopoietic expression pattern driven by the HCH enhancer, shown by lacZ staining (red in b, yellow in c), is identical to that of the endogenous Hand transcripts (green in a, yellow in c). The HCH enhancer can also drive GFP expression in the same pattern in embryos (d) and in the lymph gland (arrow) and heart (arrowhead) in larva (e). (C, parts a-d) Dorsal views of stage 16 embryos. (C, part e) A living first-instar larva. Anterior is towards the left in all panels.
Conservation of the Hand enhancer among different Drosophila species
As Hand expression in the heart is conserved from Drosophila to humans, we reasoned that evolutionary conservation of cis-regulatory sequences in the minimal HCH enhancer could guide us towards the identification of upstream activators of Hand transcription. We therefore searched for the Hand enhancer sequence in other Drosophila species, taking advantage of the fact that the enhancer is located between two conserved exons, which allowed us to perform PCR of genomic DNA using a series of nested primers. Sequence alignment of genomic DNA obtained by PCR from four other Drosophila species (D. sechellia, D. yakuba, D. erecta and D. virilis, in order of increasing evolutionary distance from D. melanogaster) showed that the sequence of the HCH enhancer was conserved in these different species, whereas the surrounding sequence was divergent. Alignment of the sequences with homology to the HCH enhancer revealed four consensus sequences for binding of Tinman (CAC/ATTNA/G) and five potential GATA (GATAA/T) binding sites (Fig. 3A; see Fig. S1 for sequences and alignments). The spacing between these consensus sequences is similar from D. melanogaster to D. erecta, except that one of the five GATA sites is not conserved in D. erecta, indicating possible redundancy of the consensus binding sites. In D. virilis, although the spacing and order of the consensus sites differ from those of D. melanogaster because of variable intervening sequences, the number of consensus sites is the same as in D. melanogaster, suggesting the importance of these consensus binding sites.
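The degenerate Tinman (CAC/ATTNA/G) and GATA (GATAA/T) consensus sequences can be written as regular expressions and scanned on both strands of a sequence. The sketch below is an illustration, not the search tool used in the study; the motif patterns come from the text, and the example sequences are the Tin1 and G1 oligonucleotides from the Materials and methods (the full HCH enhancer sequence appears only in Fig. 2B and is not reproduced here):

```python
import re

# Degenerate consensus motifs quoted in the text, written as regexes:
# Tinman CAC/ATTNA/G -> CA[CA]TT[ACGT][AG]; GATA GATAA/T -> GATA[AT].
TIN = re.compile(r"CA[CA]TT[ACGT][AG]")
GATA = re.compile(r"GATA[AT]")

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of an ACGT sequence."""
    return seq.translate(COMP)[::-1]

def find_sites(seq, motif):
    """Return (forward-strand position, strand, matched sequence) for
    every motif hit on either strand of seq."""
    hits = [(m.start(), "+", m.group()) for m in motif.finditer(seq)]
    rc = revcomp(seq)
    n = len(seq)
    # A reverse-strand match ending at position e of the reverse
    # complement starts at n - e on the forward strand.
    hits += [(n - m.end(), "-", m.group()) for m in motif.finditer(rc)]
    return sorted(hits)

# Example sequences: the Tin1 and G1 sense-strand oligos from
# Materials and methods (spaces removed).
tin1 = "TTTCCAAAAAGGCACTTAATTAATCAAACCC"
g1 = "CTCTTGTGTTCATATCTAAAACCAGATT"

print(find_sites(tin1, TIN))   # [(12, '+', 'CACTTAA')]
print(find_sites(g1, GATA))    # [(11, '-', 'GATAT')]
```

Scanning both strands matters here: the G1-G4 oligos carry TATC, the reverse complement of GATA, so their GATA sites are found only on the minus strand.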
The Hand cardiac enhancer is conserved among distinct Drosophila species. (A) Schematic diagrams of Hand enhancers identified in distinct Drosophila species and the positions of Tinman and GATA consensus binding sites. (B, parts a-d) The HCH enhancer from D. virilis can drive GFP expression (green) in the same pattern as that of the D. melanogaster HCH enhancer. Lateral view of a stage 13 embryo (B, parts a,b) and dorsal view of a stage 15 embryo (B, parts c,d) are shown. (B, parts a-d) vir-HCH-GFP is in green, Mef2 is shown in blue and Even-skipped in red. Anterior is towards the left in all panels.
To examine whether the HCH enhancer-like sequence of other Drosophila species could confer cardiac and lymph gland expression of Hand, we tested the D. virilis HCH enhancer sequence (vir-HCH), because it is the most evolutionarily divergent Hand enhancer sequence we identified. The vir-HCH enhancer sequence directed GFP expression in a pattern identical to that of the D. melanogaster Hand enhancer (Fig. 3B, parts a-d). These findings suggest that the regulation of Hand expression in the heart and lymph gland is evolutionarily conserved among Drosophila species that diverged ∼60 million years ago.
Tinman, Pannier and Serpent bind directly to the consensus sites in the HCH enhancer
To test for binding of Tinman protein to the Tinman consensus-binding sites in the HCH enhancer, we performed gel mobility shift assays with a GST-Tinman fusion protein and a radiolabeled probe corresponding to the first Tinman consensus site (Tin1). GST-Tinman bound avidly to this site, and binding could be competed by unlabeled oligonucleotides corresponding to any of the Tinman consensus-binding sites in the HCH enhancer (Fig. 4A). We then tested whether the GATA factor Pannier could bind to the GATA consensus sites. Using a radiolabeled probe containing the second GATA consensus site (G2), we found that a GST-Pannier fusion protein could bind this probe, and binding could be competed by unlabeled oligonucleotides corresponding to any of the GATA consensus sites in the HCH enhancer (Fig. 4B). Next, we tested whether the same GATA consensus sites could be bound by the hematopoietic GATA factor Serpent. As expected, both forms of the Serpent protein (SrpNC and SrpC) bound the radiolabeled probe containing the second GATA consensus site (G2), and binding could be competed by any of the unlabeled GATA consensus sites in the HCH enhancer (Fig. 4C). Mutation of the Tinman and GATA consensus sites severely diminished their ability to compete for binding of the corresponding proteins to the labeled probes (Fig. 4A-C). As not all of these consensus-binding sites are conserved in all the Drosophila species, yet all were bound by the corresponding proteins, it is likely that some of the NK and GATA consensus-binding sites are functionally redundant.
Activation of the HCH enhancer by Tinman, Pannier and Serpent in Drosophila S2 cells
To examine whether the HCH enhancer could be activated by Tinman and GATA factors in vitro, we generated a luciferase reporter construct using the HCH enhancer and tested it in Drosophila S2 cells. Remarkably, Tinman activated this enhancer over 100-fold, whereas Pannier and Serpent activated the enhancer approximately sixfold (Fig. 4D). Although previous studies suggested that Tinman and Pannier function synergistically to activate cardiac gene expression (Gajewski et al., 1998), we did not detect significant synergy between these factors on the HCH enhancer when they were transfected simultaneously (Fig. 4D).
To show that activation occurred specifically through binding of the three transcription factors to their consensus-binding sites, we mutated the Tinman- and GATA-binding sites in the HCH enhancer. Tinman could still activate the HCH enhancer with all the GATA-binding sites mutated, but could not activate the enhancer with all the Tinman-binding sites mutated, whereas Pannier or Serpent could activate the enhancer with the Tinman-binding sites mutated, but not with the GATA-binding sites mutated (Fig. 4D). An enhancer with both the Tinman- and GATA-binding sites mutated could not be activated by any of the three factors (Fig. 4D). These results further support the conclusion that the HCH enhancer is a direct transcriptional target of Tinman, Pannier and Serpent.
Binding to and activation of the HCH enhancer by Tinman, Pannier and Serpent. (A) Gel shift assays were performed using GST-Tinman protein and a radiolabeled probe corresponding to the Tin1 site in the HCH enhancer. Competition assays were performed using a 50-fold molar excess of unlabeled oligonucleotide corresponding to the wild-type Tin1, Tin2 or Tin3 sites or mutant sites, as indicated. (B) Gel shift assays were performed using GST-Pannier protein and a radiolabeled probe corresponding to the second GATA site (G2) in the HCH enhancer. Competition assays were performed using a 10- or 50-fold molar excess of unlabeled oligonucleotide corresponding to the wild-type GATA sites, or a 50-fold excess of unlabeled mutant GATA sites, as indicated. Similar results were obtained for all the GATA sites; an experiment with the G2, G3 and G4 sites is shown. (C) Gel shift assays were performed using in vitro translated Serpent protein and a radiolabeled probe corresponding to the second GATA site (G2) in the HCH enhancer. Competition assays were performed using a 50-fold molar excess of unlabeled oligonucleotide corresponding to the wild-type or mutant GATA sites. (D) S2 cells were transfected with luciferase reporters controlled by the wild-type or mutated HCH enhancers. As indicated, Tinman activates the HCH enhancer over 100-fold, whereas Pannier or Serpent activates the HCH enhancer approximately sixfold. The three factors do not show significant synergy when added simultaneously. Mutation of the Tinman sites (HCH-4T) specifically abolishes activation by Tinman, whereas mutation of the GATA sites specifically abolishes activation by Pannier or Serpent. The HCH enhancer with both Tinman and GATA sites mutated (HCH-4T5G) cannot be activated by any of these three transcription factors.
Ectopic expression of Tinman, Pannier and Serpent induces distinct expansion of Hand expression
To further investigate the potential of Tinman, Pannier and Serpent to activate the HCH enhancer, we overexpressed these three transcription factors in the mesoderm of Drosophila embryos and examined the expression of the HCH-GFP reporter. Surprisingly, ectopic expression of Tinman in the mesoderm using the twi-Gal4; 24B-Gal4 driver strongly induced GFP expression in all the somatic muscles, in a pattern nearly identical to that of Mef2 (Fig. 5D-F). Interestingly, ectopic Tinman did not cause any significant over-proliferation of cardioblasts or pericardial nephrocytes (Fig. 5D-F), indicating that Tinman alone is not sufficient to induce cardiogenesis. We did not detect an inhibitory effect of Tinman on lymph gland progenitor development, as was reported in a previous study (Mandal et al., 2004), probably owing to different experimental conditions.
In contrast to Tinman, Pannier overexpression in the mesoderm using the same Gal4 driver induced the formation of ectopic Mef2-positive cardioblasts (indicated by arrows in Fig. 5H,I), as shown in a previous study (Klinedinst and Bodmer, 2003). Ectopic expression of HCH-GFP was also detected in all the extra cardioblasts (Fig. 5G-I). The expanded HCH-GFP pattern also showed that more pericardial nephrocytes were induced by ectopic Pannier (indicated by the arrowhead in Fig. 5G-I). We did not detect supernumerary Eve-positive pericardial cells (Fig. 5I), but Odd-positive pericardial cells were significantly increased in number (data not shown). Although ectopic expression of HCH-GFP was detected randomly in a few muscle cells, this effect was insignificant compared with the ectopic HCH-GFP expression induced by Tinman. We did not detect an expansion of the lymph gland when Pannier was overexpressed in the mesoderm.
Unlike Tinman or Pannier, ectopic Serpent driven by twi-Gal4; 24B-Gal4 did not induce any cardioblasts or pericardial nephrocytes, but instead repressed their formation (indicated by arrows in Fig. 5J-L). By contrast, the cell clusters forming the lymph gland (identified by their position, shape and Hand-GFP expression) were significantly expanded by ectopic Serpent (indicated by arrowheads in Fig. 5J-L). Furthermore, in embryos with ectopic mesodermal Serpent, pericardial nephrocytes around the aorta and heart often failed to align along the dorsal vessel, and instead formed cell clusters like the hematopoietic progenitors in the lymph gland (Fig. 5J,L), suggesting a cell fate transformation from pericardial nephrocytes to hematopoietic progenitors. A gain-of-function study of Srp using a mef2-Gal4 driver showed similar results with Odd as a marker (Mandal et al., 2004).
Ectopic Tinman, Pannier or Serpent induces ectopic Hand expression in somatic muscles, cardioblasts/pericardial nephrocytes or hematopoietic progenitors. (A-C) Wild-type HCH-GFP is expressed in all the Mef2-expressing cardioblasts and Eve-expressing pericardial cells. (D-F) Overexpression of Tinman using twi-Gal4; 24B-Gal4 induces HCH-GFP expression in the somatic muscle cells that express Dmef2 (arrows indicate ectopic HCH-GFP in the muscle cells). (G-I) Overexpression of pannier in the mesoderm using twi-Gal4; 24B-Gal4 induces the formation of extra cardioblasts (indicated by arrows) and pericardial nephrocytes (indicated by arrowheads), and produces an expanded heart. Expression of HCH-GFP is detected in all the ectopic heart cells. (J-L) Overexpression of Serpent using twi-Gal4; 24B-Gal4 reduces the number of cardioblasts and pericardial nephrocytes (indicated by arrows), but induces ectopic hematopoietic progenitor cells, as shown by the expanded lymph gland (indicated by arrowheads). HCH-GFP expression is detected in all the cells of the expanded lymph gland. HCH-GFP also reveals the clustering of the pericardial nephrocytes, which normally form a line, indicating a cell fate transformation from pericardial nephrocytes to hematopoietic progenitor cells. All panels show dorsal views of stage 16 embryos carrying the HCH-GFP reporter (green) and are labeled with Mef2 antibody in red and Eve antibody in blue. Anterior is towards the left.
Tinman, Pannier and Serpent are required for HCH enhancer activity in different tissues
As ectopic expression of Tinman, Pannier and Serpent caused distinct changes in Hand expression, we sought to determine whether Tinman, Pannier or Serpent is required in different cell types for Hand expression. A previous study suggested an important role for Tinman and Pannier in the specification of cardiogenic and hematopoietic progenitors (Mandal et al., 2004). Here, we examined HCH-GFP expression in tin, pnr and srp mutants (Fig. 6B-D), using a balancer chromosome carrying actin-GFP and aging collected embryos for 24 hours before observation to allow easy identification of homozygous embryos or larvae. In tin-null mutants (tinEC40), the lymph gland and heart fail to form and no HCH-GFP expression was detected (Fig. 6B), indicating that Tinman is required for the specification of the progenitor cells for both cardiogenesis and hematopoiesis. In pnr-null mutants (pnrVX6), HCH-GFP expression was detectable only in a few cells that may be remnants of the abnormally formed heart and lymph gland (Fig. 6C), indicating that Pannier is required for the majority of cardiac and hematopoietic cells to form. By contrast, in an allele of serpent (srpneo45) that specifically abolishes Srp expression in hemocyte precursors, most of the cardioblasts and pericardial nephrocytes formed, but HCH-GFP expression was no longer detected in the lymph gland (Fig. 6D), indicating that HCH enhancer activity in this cell type depends on Srp.
As the dependence of the HCH enhancer on Tinman and Pannier could result secondarily from defects in tin or pannier mutant embryos, we established a system to test the requirement of Tinman and Pannier for activation of Hand expression specifically in the cells that express Hand. Using the HCH enhancer, we generated HCH-Gal4 transgenic flies, which drive a UAS-GFP reporter in a pattern identical to that of the endogenous Hand gene (data not shown). We then overexpressed dominant-negative forms of Tinman or Pannier specifically in the Hand-expressing cardiac and hematopoietic cells using the HCH-Gal4 driver. The dominant-negative forms of Tinman (Tin-EnR) and Pannier (Pnr-EnR) were made by fusing the Engrailed repression domain (EnR) to the Tinman or Pannier DNA-binding domain (Han et al., 2002; Klinedinst and Bodmer, 2003). Overexpression of Tin-EnR in the Hand-expressing cells nearly abolished HCH-GFP expression in cardioblasts and pericardial nephrocytes, and also reduced HCH-GFP in the lymph gland, although less dramatically (Fig. 6E), indicating that dominant-negative Tinman suppresses HCH activity more efficiently in cardiac cells than in hematopoietic cells. Overexpression of Pnr-EnR in the Hand-expressing cells using HCH-Gal4 abolished most of the HCH activity in the heart and lymph gland (Fig. 6F), indicating that dominant-negative Pannier is able to suppress HCH activity efficiently in both heart and lymph gland, probably by competing with both endogenous Pannier and Serpent for binding to the HCH enhancer. Ectopic expression of Tin-EnR or Pnr-EnR in the Hand-expressing cells did not ablate these cells in embryos, but rather appeared to induce cell fate changes that we are currently investigating (data not shown).
Tinman, Pannier and Serpent are required for HCH enhancer activity during development. (A) HCH-GFP is expressed in the lymph gland (arrow) and the heart (arrowhead) in the first-instar larva. (B) In homozygous tin mutant larvae, the lymph gland and heart fail to form and no HCH-GFP is detected. (C) Only residual activity of the HCH enhancer is detected in homozygous pannier mutant larvae, in which no lymph gland is formed (indicated by arrow) and the few surviving cardiac cells fail to fuse at the dorsal midline (indicated by arrowheads). (D) In homozygous serpent mutant larvae, the lymph gland does not form (indicated by the arrow), but most cardiac cells form and express HCH-GFP (indicated by arrowhead). (E) In first-instar larvae that express a dominant-negative form of Tinman in the Hand-expressing cells using the HCH-Gal4 driver, HCH-GFP expression is dramatically suppressed in the heart (arrowhead), and less dramatically suppressed in the lymph gland (arrow). (F) Overexpression of a dominant-negative form of Pannier in the Hand-expressing cells using the HCH-Gal4 driver dramatically suppresses HCH-GFP expression in both the heart (arrowhead) and the lymph gland (arrow). All panels are dorsal views of embryos/larvae carrying HCH-GFP, with anterior towards the left.
Functional analysis of Tinman and GATA consensus-binding sites in the HCH enhancer
To further assess the importance of Tinman and GATA factors for activation of the Hand enhancer in vivo, we generated transgenic flies carrying the HCH enhancer with various combinations of binding-site mutations. Mutation of any single Tinman or GATA consensus-binding site, or combinations of single Tinman-site and single GATA-site mutations, did not alter enhancer activity (data not shown), suggesting that the enhancer is robustly activated through redundant Tinman and GATA sites. Therefore, we mutated all four Tinman consensus-binding sites simultaneously and examined enhancer activity in transgenic flies. This mutant enhancer (HCH-4T) retained the ability to direct expression of GFP in a majority of cardiac cells and all the lymph gland cells (Fig. 7D-F). However, the overall GFP expression level was reduced compared with the wild-type HCH enhancer (compare Fig. 7E with Fig. 7B). HCH-4T-GFP expression was also frequently missing or dramatically reduced in the Tinman-positive cardioblasts (indicated by parallel arrows in Fig. 7D-F) and pericardial nephrocytes (indicated by joined arrows in Fig. 7D-F). These data indicate that direct binding of Tinman to this enhancer is required for its full activity, and that the Tinman consensus-binding sites are most crucial for mediating enhancer activity in the cells with higher levels of Tinman. These data also suggest that activation of this enhancer by other factors can support its activity at a reduced level in most of the heart and at a normal level in the lymph gland.
To examine the in vivo function of the GATA consensus-binding sites, we generated transgenic flies carrying the HCH enhancer with all five GATA sites mutated (HCH-5G). Interestingly, this mutant enhancer activated GFP expression only in Tinman-positive cardiac cells (Fig. 7G-I). The expression pattern of HCH-5G-GFP was almost identical to that of Tinman (Fig. 7G). The level of GFP expression in these Tinman-positive cardioblasts and pericardial nephrocytes (Fig. 7H,I) was the same as that of the wild-type HCH enhancer (compare Fig. 7H with Fig. 7B). The absence of HCH-5G-GFP activity in the Tinman-negative cardioblasts and pericardial nephrocytes indicates that binding of Pannier to the consensus GATA sites is necessary to activate Hand expression in Tinman-negative cardioblasts and pericardial cells. Similarly, the absence of HCH-5G-GFP in the lymph gland hematopoietic progenitors (Fig. 7G) indicates that binding of Serpent to the consensus GATA-binding sites is required for Hand expression in the lymph gland hematopoietic progenitors.
To test whether the Tinman and GATA sites are necessary for all Hand expression in the cardioblasts, pericardial nephrocytes and lymph gland hematopoietic progenitors, we created a mutant HCH enhancer with all four Tinman-binding sites and all five GATA-binding sites mutated. This enhancer, HCH-4T5G, was completely devoid of activity in cardioblasts, pericardial nephrocytes and the lymph gland (Fig. 7J-L), demonstrating that the activation of Hand in these three closely linked cell types is absolutely dependent on the binding of Tinman, Pannier and Serpent to the Hand cardiac and hematopoietic (HCH) enhancer.
Regulation of Hand expression in the heart and lymph gland
In this study, we have identified a 513 bp minimal Hand cardiac and hematopoietic (HCH) enhancer that is necessary and sufficient to drive reporter expression in cardiac cells and lymph gland hematopoietic progenitors. This enhancer contains conserved consensus binding sites for the NK factor Tinman and the GATA factors Pannier and Serpent, which bind and directly activate this enhancer.
The homeobox-containing protein Tinman is essential for the formation of the cardiac mesoderm, from which the heart and blood progenitors arise (Bodmer, 1993). However, its potential late functions remain unknown. It is believed that Tinman is not required for the entirety of heart development in flies, because it is not maintained in all the cardiac cells at late stages. Our data reveal at least one function for late-embryonic Tinman expression, which is to maintain Hand expression. The fact that ectopic Tinman can turn on Hand expression dramatically in the somatic muscles is striking and suggests the existence of a Tinman co-factor in muscle cells that can cooperate with Tinman to activate Hand expression; this co-factor would not be expected to be expressed in pericardial cells or the lymph gland. This co-factor should also be expressed in Drosophila S2 cells, as transfected Tinman can increase activity of the HCH enhancer in S2 cells by more than 100-fold. The generally reduced activity of the HCH enhancer that results from mutation of the Tinman-binding sites also suggests that Tinman activity is required to fully activate the Hand enhancer.
Requirement of the Tinman and GATA sites for activity of the HCH enhancer during development. (A-C) The wild-type HCH enhancer drives GFP expression strongly in the lymph gland hematopoietic progenitors (indicated by arrowhead in A), cardioblasts (indicated by arrowheads in B and C, labeled in C by Mef2 antibody in blue) and the pericardial nephrocytes (arrows in B and C indicate a subset of pericardial nephrocytes labeled in C by Eve antibody in red). (D-F) The HCH enhancer with all four Tinman-binding sites mutated (HCH-4T) drives GFP expression in a similar pattern to the wild-type HCH, but at a lower level. The activity of this enhancer is not affected in the lymph gland (indicated by the arrowhead in D), but is frequently missing from the Tinman-positive cardioblasts (parallel arrows in D-F; the four Tinman-positive cardioblasts in each hemisegment come from a common progenitor cell), as well as from the Tin/Eve-positive pericardial nephrocytes (joined arrows in D-F; two Tin/Eve-positive pericardial cells are formed in each hemisegment). (G-I) The HCH enhancer with all five GATA-binding sites mutated (HCH-5G) fails to drive GFP expression in the lymph gland (indicated by arrowhead in G). In contrast to HCH-4T-GFP, HCH-5G-GFP is expressed in a pattern identical to that of Tinman (shown by Tinman antibody in red, which totally overlaps with the HCH-5G-GFP pattern in green). (H,I) Higher-magnification panels co-labeled with Mef2 in blue and Eve in red show that HCH-5G-GFP is expressed in only four of the six cardioblasts (parallel arrows) and the two Eve pericardial cells (joined arrows) in each hemisegment. (J-L) Mutation of both the Tinman- and GATA-binding sites totally abolishes HCH enhancer activity in the lymph gland, cardioblasts and pericardial nephrocytes. Each row of panels shows a different enhancer's activity in green, as indicated. The left column (A,D,G,J) shows dorsal views of stage 15 embryos carrying the enhancer-GFP (green) and labeled by Tinman antibody in red.
The right two columns are dorsal/lateral views of three hemisegments of stage 14 embryos carrying different enhancer GFP (green) and co-labeled with DMef2 antibody (blue) and Eve antibody (red). Anterior is towards the left in all panels.
Although Pannier and Serpent bind to the same consensus sites, these GATA factors produce distinct phenotypes when overexpressed in the mesoderm. Ectopic Pannier induces cardiogenesis, shown by the increased number of cardioblasts and pericardial nephrocytes, but does not affect the lymph gland hematopoietic progenitors. Ectopic Serpent, however, induces ectopic lymph gland hematopoietic progenitors, but reduces the number of cardioblasts and pericardial cells. Interestingly, pericardial cells with ectopic Serpent expression have a tendency to form cell clusters resembling the lymph gland progenitors, suggesting a partial cell fate transformation. These results suggest that Pannier functions as a cardiogenic factor, whereas Serpent functions as a hematopoietic factor. Although both can activate Hand expression, Pannier and Serpent activate the HCH enhancer in different cell types. This assumption is also supported by the specific expression patterns of Serpent and Pannier in late embryos. We and others (Mandal et al., 2004) have detected Serpent specifically in the lymph gland hematopoietic progenitors but not in any cardiac cells. Pannier expression in the cardiogenic region of late embryos is not clear because of interference from the high-level Pannier expression in the overlying ectoderm. However, we examined the lymph gland in late-stage embryos and did not detect any Pannier expression in these cells (data not shown). Together with the evidence from loss-of-function and gain-of-function experiments with Serpent, we conclude that the HCH-5G-GFP transgene is not expressed in the lymph gland because Serpent could not bind to the mutant enhancer in the lymph gland cells, whereas the lack of HCH-5G-GFP expression in cardiac cells is due to the inability of Pannier to bind the mutant enhancer in these cardiac cells.
As tin and pnr are not expressed in all the cardiac cells of late-stage embryos but the Hand-GFP transgene is expressed in these cells, it is likely that additional factors control Hand expression in the heart. One group of candidates is the T-box family. As the Doc1, Doc2 and Doc3 genes (Drosophila orthologs of vertebrate Tbx5) are expressed in the Svp-positive cardioblasts where tin is not expressed (Lo and Frasch, 2001), whereas H15 and midline (Drosophila orthologs of vertebrate Tbx20) are expressed in most of the cardiac cells in late embryos (Miskolczi-McCallum et al., 2005; Qian et al., 2005), it is likely that the T-box genes activate Hand expression in cells that do not express tin and pannier. However, the enhancer lacking GATA and Tinman sites has no activity, indicating that the additional factors that may activate Hand expression in the heart and lymph gland also require these crucial Tinman and GATA sites, probably through protein interactions between Tinman and the GATA factors.
Evolution of the HCH enhancer
We have identified putative Hand enhancers from divergent Drosophila species. In most of these species, the entire 513 bp Hand enhancer region is highly conserved. However, the D. virilis HCH enhancer does not exhibit highly conserved sequence between the consensus binding sites, even though it has a similar number of consensus binding sites for both Tinman and Pannier. The fact that this D. virilis enhancer can also drive reporter gene expression in the heart indicates that these Tinman and GATA-binding sites are the crucial elements for enhancer activity. Besides the enhancers with all Tinman or all GATA binding sites mutated, we also generated transgenic flies carrying one or two mutations of the Tinman or GATA-binding sites. None of these transgenic lines shows significant changes in enhancer activity (data not shown), indicating that this enhancer is robustly activated by Tinman, Pannier and Serpent through functionally redundant binding sites. These data also explain why the Hand enhancers from different Drosophila species have different numbers of Tinman or GATA-binding sites.
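Operationally, counting consensus binding sites like those discussed above is a degenerate-motif scan over the enhancer sequence. The sketch below shows one minimal way to do such a scan in Python; it is not the authors' method, and both the toy sequence and the use of the common GATA-factor consensus WGATAR (W = A/T, R = A/G) are illustrative assumptions.

```python
# Illustrative sketch: scan a DNA sequence for degenerate consensus
# binding sites written in IUPAC ambiguity codes.
import re

# IUPAC nucleotide ambiguity codes mapped to regex character classes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "W": "[AT]", "S": "[CG]",
         "K": "[GT]", "M": "[AC]", "N": "[ACGT]"}

def motif_to_regex(motif):
    """Translate an IUPAC consensus motif into a regular expression."""
    return "".join(IUPAC[base] for base in motif.upper())

def find_sites(seq, motif):
    """Return 0-based start positions of (possibly overlapping) motif
    matches on the forward strand, using a zero-width lookahead."""
    pattern = re.compile("(?=" + motif_to_regex(motif) + ")")
    return [m.start() for m in pattern.finditer(seq.upper())]

# Toy sequence containing two forward-strand WGATAR sites
print(find_sites("CCTTATCAGATAAGGAGATAG", "WGATAR"))  # -> [7, 15]
```

A real count would also scan the reverse complement and use the species-appropriate Tinman/NK consensus, which is not specified here.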
Interestingly, Hand expression is also dependent on GATA factors in vertebrates. We have previously described an enhancer necessary and sufficient to direct cardiac expression of the mouse Hand2 gene, which contains two essential GATA-binding sites (McFadden et al., 2000). Thus, we propose that the Hand genes are directly regulated by GATA factors in an evolutionarily conserved developmental pathway in both Drosophila and mice. Although no functional NK binding sites were identified in the mouse Hand2 enhancer, there are perfectly matched NK consensus sites in the Hand2 locus that may function in a redundant or refined way to regulate Hand2 expression (Z.H. and E.N.O., unpublished).
Identification of Hand as a common target of transcriptional cascades that govern cardiogenesis and hematopoiesis
In mammals, the adult hematopoietic system originates from the yolk sac and the intra-embryonic aorta-gonad-mesonephros (AGM) region (Medvinsky and Dzierzak, 1996). The AGM region is derived from the mesodermal germ layer of the embryo in close association with the vasculature. Indeed, the idea of the hemangioblast, a common mesodermal precursor cell for the hematopoietic and endothelial lineages, was proposed nearly 100 years ago without clear in vivo evidence. Recently, this idea was substantiated by the identification of a single progenitor cell that can divide into a hematopoietic progenitor cell in the lymph gland and a cardioblast cell in the dorsal vessel in Drosophila (Mandal et al., 2004). In addition to providing the first evidence for the existence of the hemangioblast, this finding also suggested a close relationship between the Drosophila cardiac mesoderm, which gives rise to cardioblasts, pericardial nephrocytes and pre-hemocytes, and the mammalian cardiogenic and AGM region, which gives rise to the vasculature (including cardiomyocytes), the excretory systems (including nephrocytes), as well as adult hematopoietic stem cells (Evans et al., 2003). In fact, in both Drosophila and mammals, the specification of the cardiogenic and AGM region requires the input of Bmp, Wnt and Fgf signaling (Cripps and Olson, 2002; Evans et al., 2003). In addition to the conserved role of the NK and GATA factors, GATA co-factors (U-shaped in Drosophila and Fog in mice) also play important roles in cardiogenesis and hematopoiesis in both Drosophila and mammals (Fossett et al., 2001; Sorrentino et al., 2005). Recent studies have shown that the Notch pathway is required for both cardiogenic and hematopoietic progenitor specification in Drosophila (Han and Bodmer, 2003; Mandal et al., 2004), as well as for mammalian embryonic vascular development (Fischer et al., 2004). It is likely that Notch also plays an important role in mammalian hematopoiesis.
A model for the position of Hand in the transcriptional networks that control cardiogenesis and hematopoiesis. Both cardiogenesis and hematopoiesis occur in the cardiac mesoderm, which is specified by signaling pathways (Dpp, Wg and Fgf) from the overlying ectoderm, through direct or indirect transcription activation of the genes encoding Tinman, Pannier and Serpent, which also affect one another at the transcriptional level. Tinman and Pannier directly activate the genes encoding Hand and other transcription factors such as Mef2 and Eve in cardioblasts and pericardial nephrocytes in the transcriptional network that controls cardiogenesis. Serpent activates Hand and probably genes encoding other transcription factors such as Lz (Lozenge) and Gcm (Glial cells missing) in the transcriptional network that controls hematopoiesis. Notch is required for the specification of both cardiogenic and hematopoietic progenitors during its early phase of mesodermal expression, and for inhibiting myocardial cell fate while promoting pericardial and hematopoietic cell fate during its late phase of mesodermal expression. Ush (U-shaped) cooperates with Pannier and Serpent in cardiogenesis and hematopoiesis. Solid arrows indicate verified direct transcription activation if pointing to a gene, or verified requirement for a certain cell type formation if pointing to a cell type; broken arrows indicate unverified or indirect gene activation if pointing to a gene, or proposed requirement for certain cell type formation if pointing to a cell type; broken lines indicate different cell types in which a transcription factor is expressed and may have functions.
In this study, we found that Drosophila Hand is expressed in cardioblasts, pericardial nephrocytes and pre-hemocytes, and is directly regulated by conserved transcription factors (NK and GATA factors) that control both cardiogenesis and hematopoiesis. The bHLH transcription factor Hand is highly conserved in both protein sequence and expression pattern in almost all organisms that have a cardiovascular system. In mammals, Hand1 is expressed at high levels in the lateral plate mesoderm, from which the cardiogenic region and the AGM region arise, in E9.5 mouse embryos (Firulli et al., 1998). Functional studies of Hand1 and Hand2 using knockout mice have demonstrated the essential role of Hand genes during cardiogenesis (Srivastava et al., 1995; Srivastava et al., 1997; Yamagishi et al., 2001; McFadden et al., 2005), whereas the functional analysis of Hand genes during vertebrate hematopoiesis has not yet been explored. It will be interesting to determine whether mammalian Hand genes are also regulated in the AGM region by GATA1, GATA2 and GATA3 (vertebrate orthologs of Drosophila Serpent), and whether they play a role in mammalian hematopoiesis.
In summary, this study places Hand at a pivotal point linking the transcriptional networks that govern cardiogenesis and hematopoiesis, as shown in Fig. 8. As the Hand gene family encodes highly conserved bHLH transcription factors expressed in the cardiogenic region of widely divergent vertebrates and probably in the AGM region in mouse, these findings open an avenue for further exploration of the conserved transcriptional networks that govern both cardiogenesis and hematopoiesis, by studying the regulation and functions of Hand genes in vertebrate model systems.
We are especially grateful to our late colleague Dr Junyoung Oh, who initiated these studies. We thank R. Schulz, R. Bodmer, the Bloomington stock center and the Tucson species center for fly stocks. We also thank R. Reuter, B. Paterson and the University of Iowa Hybridoma Bank for antibodies; Xiumin Li and Jiang Wu for technical support; A. Diehl for graphics; and J. Page for editorial assistance. Z.H. was supported by a postdoctoral fellowship from The American Heart Association and E.N.O. was supported by grants from The National Institutes of Health and from the Donald W. Reynolds Cardiovascular Clinical Research Center, Dallas, Texas; and from the Robert A. Welch Foundation.
Alvarez, A. D., Shi, W., Wilson, B. A. and Skeath, J. B. (). Pannier and pointedP2 act sequentially to regulate Drosophila heart development.
Barolo, S., Carver, L. A. and Posakony, J. W. (). GFP and beta-galactosidase transformation vectors for promoter/enhancer analysis in Drosophila. Biotechniques.
Bodmer, R. (). The gene tinman is required for specification of the heart and visceral muscles in Drosophila.
Brand, A. H. and Perrimon, N. (). Targeted gene expression as a means of altering cell fates and generating dominant phenotypes.
Cripps, R. M. and Olson, E. N. (). Control of cardiac development by an evolutionarily conserved transcriptional network.
Davidson, B. and Levine, M. (). Evolutionary origins of the vertebrate heart: specification of the cardiac lineage in Ciona intestinalis. Proc. Natl. Acad. Sci. USA.
Evans, C. J., Hartenstein, V. and Banerjee, U. (). Thicker than blood: conserved mechanisms in Drosophila and vertebrate hematopoiesis. Dev. Cell.
Evans, S. M. (). Vertebrate tinman homologues and cardiac differentiation. Semin. Cell Dev. Biol.
Ferreira, R., Ohneda, K., Yamamoto, M. and Philipsen, S. (). GATA1 function, a paradigm for transcription factors in hematopoiesis. Mol. Cell Biol.
Firulli, A. B., McFadden, D. G., Lin, Q., Srivastava, D. and Olson, E. N. (). Heart and extra-embryonic mesodermal defects in mouse embryos lacking the bHLH transcription factor Hand1.
Fischer, A., Schumacher, N., Maier, M., Sendtner, M. and Gessler, M. (). The Notch target genes Hey1 and Hey2 are required for embryonic vascular development. Genes Dev.
Fossett, N., Tevosian, S. G., Gajewski, K., Zhang, Q., Orkin, S. H. and Schulz, R. A. (). The Friend of GATA proteins U-shaped, FOG-1, and FOG-2 function as negative regulators of blood, heart, and eye development in Drosophila.
Gajewski, K., Kim, Y., Lee, Y. M., Olson, E. N. and Schulz, R. A. (). D-mef2 is a target for Tinman activation during Drosophila heart development. EMBO J.
Gajewski, K., Kim, Y., Choi, C. Y. and Schulz, R. A. (). Combinatorial control of Drosophila mef2 gene expression in cardiac and somatic muscle cell lineages. Dev. Genes Evol.
Gajewski, K., Fossett, N., Molkentin, J. D. and Schulz, R. A. (). The zinc finger proteins Pannier and GATA4 function as cardiogenic factors in Drosophila.
Gove, C., Walmsley, M., Nijjar, S., Bertwistle, D., Guille, M., Partington, G., Bomford, A. and Patient, R. (). Over-expression of GATA-6 in Xenopus embryos blocks differentiation of heart precursors.
Greig, S. and Akam, M. (). Homeotic genes autonomously specify one aspect of pattern in the Drosophila mesoderm.
Grow, M. W. and Krieg, P. A. (). Tinman function is essential for vertebrate heart development: elimination of cardiac differentiation by dominant inhibitory mutants of the tinman-related genes, XNkx2-3 and XNkx2-5.
Han, Z. and Bodmer, R. (). Myogenic cell fates are antagonized by Notch only in asymmetric lineages of the Drosophila heart, with or without cell division.
Han, Z., Fujioka, M., Su, M., Liu, M., Jaynes, J. B. and Bodmer, R. (). Transcriptional integration of competence modulated by mutual repression generates cell-type specificity within the cardiogenic mesoderm.
Han, Z., Li, X., Wu, J. and Olson, E. N. (). A myocardin-related transcription factor regulates activity of serum response factor in Drosophila.
Klinedinst, S. L. and Bodmer, R. (). Gata factor Pannier is required to establish competence for heart progenitor formation.
Knirr, S. and Frasch, M. (). Molecular integration of inductive and mesoderm-intrinsic inputs governs even-skipped enhancer activity in a subset of pericardial and dorsal muscle progenitors.
Kolsch, V. and Paululat, A. (). The highly conserved cardiogenic bHLH factor Hand is specifically expressed in circular visceral muscle progenitor cells and in all cell types of the dorsal vessel during Drosophila embryogenesis.
Lebestky, T., Chang, T., Hartenstein, V. and Banerjee, U. (). Specification of Drosophila hematopoietic lineage by conserved transcription factors.
Lo, P. C. and Frasch, M. (). A role for the COUP-TF-related gene seven-up in the diversification of cardioblast identities in the dorsal vessel of Drosophila. Mech. Dev.
Lyons, I., Parsons, L. M., Hartley, L., Li, R., Andrews, J. E., Robb, L. and Harvey, R. P. (). Myogenic and morphogenetic defects in the heart tubes of murine embryos lacking the homeo box gene Nkx2-5.
Mandal, L., Banerjee, U. and Hartenstein, V. (). Evidence for a fruit fly hemangioblast and similarities between lymph-gland hematopoiesis in fruit fly and mammal aorta-gonadal-mesonephros mesoderm.
McFadden, D. G., Charite, J., Richardson, J. A., Srivastava, D., Firulli, A. B. and Olson, E. N. (). A GATA-dependent right ventricular enhancer controls dHAND transcription in the developing heart.
McFadden, D. G., Barbosa, A. C., Richardson, J. A., Schneider, M. D., Srivastava, D. and Olson, E. N. (). The Hand1 and Hand2 transcription factors regulate expansion of the embryonic cardiac ventricles in a gene dosage-dependent manner.
Medvinsky, A. and Dzierzak, E. (). Definitive hematopoiesis is autonomously initiated by the AGM region.
Miskolczi-McCallum, C. M., Scavetta, R. J., Svendsen, P. C., Soanes, K. H. and Brook, W. J. (). The Drosophila melanogaster T-box genes midline and H15 are conserved regulators of heart development.
Molkentin, J. D., Lin, Q., Duncan, S. A. and Olson, E. N. (). Requirement of the transcription factor GATA4 for heart tube formation and ventral morphogenesis.
Moore, A. W., Barbel, S., Jan, L. Y. and Jan, Y. N. (). A genome-wide survey of basic helix-loop-helix factors in Drosophila.
Nguyen, H. T. and Xu, X. (). Drosophila mef2 expression during mesoderm development is controlled by a complex array of cis-acting regulatory modules.
Qian, L., Liu, J. and Bodmer, R. (). Neuromancer Tbx20-related genes (H15/midline) promote cell fate specification and morphogenesis of the Drosophila heart.
Ramain, P., Heitzler, P., Haenlin, M. and Simpson, P. (). Pannier, a negative regulator of achaete and scute in Drosophila, encodes a zinc finger protein with homology to the vertebrate transcription factor GATA-1.
Ranganayakulu, G., Elliott, D. A., Harvey, R. P. and Olson, E. N. (). Divergent roles for NK-2 class homeobox genes in cardiogenesis in flies and mice.
Reiter, J. F., Alexander, J., Rodaway, A., Yelon, D., Patient, R., Holder, N. and Stainier, D. Y. (). Gata5 is required for the development of the heart and endoderm in zebrafish.
Schott, J. J., Benson, D. W., Basson, C. T., Pease, W., Silberbach, G. M., Moak, J. P., Maron, B. J., Seidman, C. E. and Seidman, J. G. (). Congenital heart disease caused by mutations in the transcription factor NKX2-5.
Sorrentino, R. P., Gajewski, K. M. and Schulz, R. A. (). GATA factors in Drosophila heart and blood cell development.
Sparrow, D. B., Kotecha, S., Towers, N. and Mohun, T. J. (). Xenopus eHAND: a marker for the developing cardiovascular system of the embryo that is regulated by bone morphogenetic proteins.
Srivastava, D., Cserjesi, P. and Olson, E. N. (). A subclass of bHLH proteins required for cardiac morphogenesis.
Srivastava, D., Thomas, T., Lin, Q., Kirby, M. L., Brown, D. and Olson, E. N. (). Regulation of cardiac mesodermal and neural crest development by the bHLH transcription factor, dHAND.
Ting, C. N., Olson, M. C., Barton, K. P. and Leiden, J. M. (). Transcription factor GATA-3 is required for development of the T-cell lineage.
Tsai, F. Y., Keller, G., Kuo, F. C., Weiss, M., Chen, J., Rosenblatt, M., Alt, F. W. and Orkin, S. H. (). An early haematopoietic defect in mice lacking the transcription factor GATA-2.
Waltzer, L., Bataille, L., Peyrefitte, S. and Haenlin, M. (). Two isoforms of Serpent containing either one or two GATA zinc fingers have different roles in Drosophila haematopoiesis.
Yamagishi, H., Yamagishi, C., Nakagawa, O., Harvey, R. P., Olson, E. N. and Srivastava, D. (). The combinatorial activities of Nkx2.5 and dHAND are essential for cardiac ventricle formation.
Yelon, D., Ticho, B., Halpern, M. E., Ruvinsky, I., Ho, R. K., Silver, L. M. and Stainier, D. Y. (). The bHLH transcription factor hand2 plays parallel roles in zebrafish heart and pectoral fin development.
Supplemental Figure 1
Fig. S1. DNA sequences and alignment of HCH enhancers from five Drosophila species. Tinman- and GATA-binding sites are highlighted in yellow and blue, respectively.
In animals, rabies is a viral zoonotic neuroinvasive disease that causes inflammation in the brain and is usually fatal. Rabies, caused by the rabies virus, primarily infects mammals. In the laboratory it has been found that birds can be infected, as well as cell cultures from birds, reptiles and insects. The brains of animals with rabies deteriorate. As a result, they tend to behave bizarrely and often aggressively, increasing the chances that they will bite another animal or a person and transmit the disease. Most cases of humans contracting the disease from infected animals occur in developing nations. In 2010, an estimated 26,000 people died from rabies, down from 54,000 in 1990.
Stages of disease
Three stages of rabies are recognized in dogs and other animals.
The first stage is a one- to three-day period characterized by behavioral changes and is known as the prodromal stage.
The second stage is the excitative stage, which lasts three to four days. It is this stage that is often known as furious rabies due to the tendency of the affected animal to be hyperreactive to external stimuli and bite at anything near.
The third stage is the paralytic or dumb stage and is caused by damage to motor neurons. Incoordination is seen due to rear-limb paralysis, while drooling and difficulty swallowing are caused by paralysis of the facial and throat muscles. Unable to swallow, the animal lets saliva pour from its mouth; because the virus is most concentrated in the throat and cheeks, this saliva is heavily contaminated, making bites the most common route of transmission. Death is usually caused by respiratory arrest.
Mammals
Bats
Bat-transmitted rabies occurs throughout North and South America, but it was first closely studied in Trinidad in the West Indies, where rabid bats were taking a significant toll on livestock and humans alike. In the ten years from 1925 to 1935, 89 people and thousands of livestock died from it—"the highest human mortality from rabies-infected bats thus far recorded anywhere."
In 1931, Dr. Joseph Lennox Pawan of Trinidad in the West Indies, a government bacteriologist, found Negri bodies in the brain of a bat with unusual habits. In 1932, Dr. Pawan discovered that infected vampire bats could transmit rabies to humans and other animals. In 1934, the Trinidad and Tobago government began a program of eradicating vampire bats, while encouraging the screening off of livestock buildings and offering free vaccination programs for exposed livestock.
After the opening of the Trinidad Regional Virus Laboratory in 1953, Arthur Greenhall demonstrated that at least eight species of bats in Trinidad had been infected with rabies; including the common vampire bat, the rare white-winged vampire bat, as well as two abundant species of fruit bats: the Seba's short-tailed bat and the Jamaican fruit bat.
Recent sequencing data suggest that recombination events in an American bat led the modern rabies virus to gain the head of a G-protein ectodomain thousands of years ago. This change occurred in an organism that carried both rabies and a separate carnivore virus. The recombination resulted in a cross-over that gave rabies new success across hosts, since the G-protein ectodomain, which controls binding and pH receptors, was now suited to carnivore hosts as well.
Cats
In the United States, domestic cats are the most commonly reported rabid animal. Between 200 and 300 cases are reported annually; in 2017, 276 cats with rabies were reported. In every year since 1990, reported cases of rabies in cats have outnumbered cases of rabies in dogs.
Cats that have not been vaccinated and are allowed access to the outdoors have the most risk for contracting rabies, as they may come in contact with rabid animals. The virus is often passed on during fights between cats or other animals and is transmitted by bites, saliva or through mucous membranes and fresh wounds. The virus can incubate from one day up to over a year before any symptoms begin to show. Symptoms have a rapid onset and can include unusual aggression, restlessness, lethargy, anorexia, weakness, disorientation, paralysis and seizures. Vaccination of felines (including boosters) by a veterinarian is recommended to prevent rabies infection in outdoor cats.
Cattle
In cattle-raising areas where vampire bats are common, fenced-in cows often become a primary target for the bats (along with horses), due to their easy accessibility compared to wild mammals. In Latin America, vampire bats are the primary reservoir of the rabies virus, and in Peru, for instance, researchers have calculated that over 500 cattle per year die of bat-transmitted rabies.
Vampire bats have been extinct in the United States for thousands of years (a situation that may reverse due to climate change, as the range of vampire bats in northern Mexico has recently been creeping northward with warmer weather), thus United States cattle are not currently susceptible to rabies from this vector. However, cases of rabies in dairy cows in the United States have occurred (perhaps transmitted by bites from canines), leading to concerns that humans consuming unpasteurized dairy products from these cows could be exposed to the virus.
Vaccination programs in Latin America have been effective at protecting cattle from rabies, along with other approaches such as the culling of vampire bat populations.
Coyotes
Rabies is common in coyotes, and can be a cause for concern if they interact with humans.
Dogs
Rabies has a long history of association with dogs. The first written record of rabies is in the Codex of Eshnunna (), which dictates that the owner of a dog showing symptoms of rabies should take preventive measure against bites. If a person was bitten by a rabid dog and later died, the owner was fined heavily.
Almost all of the human deaths attributed to rabies are due to rabies transmitted by dogs in countries where dog vaccination programs are not sufficiently developed to stop the spread of the virus.
Horses
Rabies can be contracted in horses if they interact with rabid animals in their pasture, usually through being bitten (e.g. by vampire bats) on the muzzle or lower limbs. Signs include aggression, incoordination, head-pressing, circling, lameness, muscle tremors, convulsions, colic and fever. Horses that experience the paralytic form of rabies have difficulty swallowing, and drooping of the lower jaw due to paralysis of the throat and jaw muscles. Incubation of the virus may range from 2–9 weeks. Death often occurs within 4–5 days of infection of the virus. There are no effective treatments for rabies in horses. Veterinarians recommend an initial vaccination as a foal at three months of age, repeated at one year and given an annual booster.
Monkeys
Monkeys, like humans, can get rabies; however, they do not tend to be a common source of rabies. Monkeys with rabies tend to die more quickly than humans. In one study, 9 of 10 monkeys developed severe symptoms or died within 20 days of infection. Rabies is often a concern for individuals travelling to developing countries as monkeys are the most common source of rabies after dogs in these places.
Rabbits
Despite natural infection of rabbits being rare, they are particularly vulnerable to the rabies virus; rabbits were used to develop the first rabies vaccine by Louis Pasteur in the 1880s, and continue to be used for rabies diagnostic testing. The virus is often contracted when attacked by other rabid animals and can incubate within a rabbit for up to 2–3 weeks. Symptoms include weakness in limbs, head tremors, low appetite, nasal discharge, and death within 3–4 days. There are currently no vaccines available for rabbits. The National Institutes of Health recommends that rabbits be kept indoors or enclosed in hutches outside that do not allow other animals to come in contact with them.
Red Pandas
Although rare, cases of rabies in red pandas have been recorded.
Skunks
In the United States, there is currently no USDA-approved vaccine for the strain of rabies that afflicts skunks. When cases are reported of pet skunks biting a human, the animals are frequently killed in order to be tested for rabies. It has been reported that three different variants of rabies exist in striped skunks in the north and south central states.
Humans exposed to the rabies virus must begin post-exposure prophylaxis before the disease can progress to the central nervous system. For this reason, it is necessary to determine whether the animal, in fact, has rabies as quickly as possible. Without a definitive quarantine period in place for skunks, quarantining the animals is not advised as there is no way of knowing how long it may take the animal to show symptoms. Destruction of the skunk is recommended and the brain is then tested for presence of rabies virus.
Skunk owners have recently organized to campaign for USDA approval of both a vaccine and an officially recommended quarantine period for skunks in the United States.
Wolves
Under normal circumstances, wild wolves are generally timid around humans, though there are several recorded cases of wolves acting aggressively toward humans. The majority of fatal wolf attacks have historically involved rabies, which was first recorded in wolves in the 13th century. The earliest recorded case of an actual rabid wolf attack comes from Germany in 1557. Though wolves are not reservoirs for the disease, they can catch it from other species. Wolves develop an exceptionally severe aggressive state when infected and can bite numerous people in a single attack. Before a vaccine was developed, bites were almost always fatal. Today, wolf bites can be treated, but the severity of rabid wolf attacks can sometimes result in outright death, or a bite near the head may make the disease act too fast for the treatment to take effect.
Rabid attacks tend to cluster in winter and spring. With the reduction of rabies in Europe and North America, few rabid wolf attacks have been recorded, though some still occur annually in the Middle East. Rabid attacks can be distinguished from predatory attacks by the fact that rabid wolves limit themselves to biting their victims rather than consuming them. Moreover, predatory attacks can sometimes continue for months or years, whereas rabid attacks usually end after about a fortnight. Victims of rabid wolves are usually attacked around the head and neck in a sustained manner.
Other placental mammals
The most commonly infected terrestrial animals in the United States are raccoons, skunks, foxes, and coyotes. Any bites by such wild animals must be considered a possible exposure to the rabies virus.
Most cases of rabies in rodents reported to the Centers for Disease Control and Prevention in the United States have been found among groundhogs (woodchucks). Small rodents such as squirrels, hamsters, guinea pigs, gerbils, chipmunks, rats, mice, and lagomorphs like rabbits and hares are almost never found to be infected with rabies, and are not known to transmit rabies to humans.
Marsupial and monotreme mammals
The Virginia opossum (a marsupial, unlike the other mammals named above, which are all eutherians/placental), has a lower internal body temperature than the rabies virus prefers and therefore is resistant but not immune to rabies. Marsupials, along with monotremes (platypuses and echidnas), typically have lower body temperatures than similarly sized eutherians.
Birds
Birds were first artificially infected with rabies in 1884; however, infected birds are largely, if not wholly, asymptomatic, and recover. Other bird species have been known to develop rabies antibodies, a sign of infection, after feeding on rabies-infected mammals.
Transport of pet animals between countries
Rabies is endemic to many parts of the world, and one of the reasons given for quarantine periods in international animal transport has been to try to keep the disease out of uninfected regions. However, most developed countries, pioneered by Sweden, now allow unencumbered travel between their territories for pet animals that have demonstrated an adequate immune response to rabies vaccination.
Such countries may limit movement to animals from countries where rabies is considered to be under control in pet animals. There are various lists of such countries. The United Kingdom has developed a list, and France has a rather different list, said to be based on a list of the Office International des Epizooties (OIE). The European Union has a harmonised list. No list of rabies-free countries is readily available from OIE.
In recent years, canine rabies has been practically eliminated in North America and Europe due to extensive and often mandatory vaccination requirements. However, it is still a significant problem in parts of Africa, parts of the Middle East, parts of Latin America, and parts of Asia. Dogs are considered to be the main reservoir for rabies in developing countries.
However, the recent spread of rabies in the northeastern United States and further may cause a restrengthening of precautions against movement of possibly rabid animals between developed countries.
See also
Prevalence of rabies
Rabies transmission
Rabies vaccine
The building at Ritterstraße 11 in Esslingen am Neckar is a residential and commercial building from the late 19th century.
History and description
The building was erected in 1898/99 to plans by Hermann Falch as a bank and residential building and was initially used by the Esslinger Actien-Bank, which had been founded in 1889 and was taken over by Stahl & Federer in 1910. The banking offices were on the ground floor, which features large round-arched windows. An apartment for the bank director was located on the first floor. The building now serves as the administrative headquarters of the Württembergische Landesbühne.
With its use of Renaissance and late Gothic architectural forms, the building stands out from its neo-Baroque surroundings. The facade facing Ritterstraße has a gabled central projection, and at the corner with Küferstraße there is a richly decorated ashlar oriel. Apart from the Esslingen city coat of arms, emblems of trade and industry serve as architectural ornament. While the vault, the windows and the staircase survive in their original condition from the time of construction, doors and stucco ceilings in the Expressionist style attest to a renovation carried out in 1921.
Building in Esslingen am Neckar
Cultural monument in Esslingen am Neckar
Built in the 19th century
Esslingen
package pain
import (
"encoding/xml"
"github.com/fgrid/iso20022"
)
type Document00200102 struct {
XMLName xml.Name `xml:"urn:iso:std:iso:20022:tech:xsd:pain.002.001.02 Document"`
Message *PaymentStatusReportV02 `xml:"pain.002.001.02"`
}
func (d *Document00200102) AddMessage() *PaymentStatusReportV02 {
d.Message = new(PaymentStatusReportV02)
return d.Message
}
// Scope
// The PaymentStatusReport message is sent by an instructed agent to the previous party in the payment chain. It is used to inform this party about the positive or negative status of an instruction (either single or file). It is also used to report on a pending instruction.
// Usage
// The PaymentStatusReport message is exchanged between an agent and a non-financial institution customer to provide status information on instructions previously sent. Its usage will always be governed by a bilateral agreement between the agent and the non-financial institution customer.
// The PaymentStatusReport message can be used to provide information about the status (e.g. rejection, acceptance) of the initiation of a credit transfer, a direct debit, as well as on the initiation of other customer instructions (e.g. PaymentCancellationRequest).
// The PaymentStatusReport message refers to the original instruction(s) by means of references only or by means of references and a set of elements from the original instruction.
// The PaymentStatusReport message can be used in domestic and cross-border scenarios.
// The PaymentStatusReport message exchanged between agents and non-financial institution customers is identified in the schema as follows: urn:iso:std:iso:20022:tech:xsd:pain.002.001.02
type PaymentStatusReportV02 struct {
// Set of characteristics shared by all individual transactions included in the status report message.
GroupHeader *iso20022.GroupHeader5 `xml:"GrpHdr"`
// Original group information concerning the group of transactions, to which the status report message refers to.
OriginalGroupInformationAndStatus *iso20022.OriginalGroupInformation1 `xml:"OrgnlGrpInfAndSts"`
// Information concerning the original transactions, to which the status report message refers.
TransactionInformationAndStatus []*iso20022.PaymentTransactionInformation1 `xml:"TxInfAndSts,omitempty"`
}
func (p *PaymentStatusReportV02) AddGroupHeader() *iso20022.GroupHeader5 {
p.GroupHeader = new(iso20022.GroupHeader5)
return p.GroupHeader
}
func (p *PaymentStatusReportV02) AddOriginalGroupInformationAndStatus() *iso20022.OriginalGroupInformation1 {
p.OriginalGroupInformationAndStatus = new(iso20022.OriginalGroupInformation1)
return p.OriginalGroupInformationAndStatus
}
func (p *PaymentStatusReportV02) AddTransactionInformationAndStatus() *iso20022.PaymentTransactionInformation1 {
newValue := new(iso20022.PaymentTransactionInformation1)
p.TransactionInformationAndStatus = append(p.TransactionInformationAndStatus, newValue)
return newValue
}
Stanier Class 5 4-6-0 5231 (British Railways no. 45231) is a preserved British steam locomotive. In preservation, it has carried the names 3rd (Volunteer) Battalion The Worcestershire and Sherwood Foresters Regiment and The Sherwood Forester, though it never carried either of these in service.
Service
5231 was built by Armstrong-Whitworth in 1936 for the London, Midland and Scottish Railway. It spent most of its early career at Patricroft shed, working mainly to North Wales and Leeds. After nationalisation in 1948, it was renumbered 45231 by British Railways.
45231 was transferred to Northampton in October 1954, but was only officially there for a month — such allocation changes were often only carried out on paper — and then transferred to Aston, where it remained for nine years. 45231 was officially transferred to Rugby in February 1963, but was moved a short time later (July) to Chester. It stayed at Chester until closure of Chester shed in April 1967 when 45231 was then transferred to Speke Junction and finally Carnforth, where 45231 lasted until the last day of steam on BR in August 1968.
It was sold by BR directly into preservation and was restored at Carnforth to LMS livery.
This locomotive was one of a class of 842, of which only four carried names: 45154 Lanarkshire Yeomanry, 45156 Ayrshire Yeomanry, 45157 The Glasgow Highlander, and 45158 The Glasgow Yeomanry.
Preservation
After being initially preserved at Carnforth, 5231 became associated with the preserved Great Central Railway (GCR) in Leicestershire. In 1973, it hauled the official opening train between Loughborough and Quorn. Never having carried a name in BR service, the locomotive was nonetheless named 3rd (Volunteer) Battalion The Worcestershire and Sherwood Foresters Regiment at Quorn on 9 May 1976. It was taken out of service the following year for an overhaul.
5231 was overhauled in Cornwall, and the work was completed in 1988, when it returned to the GCR. It was then moved to the Nene Valley Railway from 1989 until 1993, when it returned to the GCR for the filming of Shadowlands. 5231 was sold to the GCR in late 1996, who repainted it in BR lined black in 1997. It also acquired a new set of nameplates, this time more simply The Sherwood Forester. A very similar name was carried by LMS Royal Scot Class (4)6112 Sherwood Forester.
45231 emerged from an overhaul in 2005 and shortly afterwards was moved by road to the Mid Hants Railway. It eventually undertook a proving run on Sunday 26 June from Alton to Fratton before entering mainline service later in the year. 45231 returned to traffic in 2013 after another major overhaul was carried out at Carnforth MPD. In March 2015, 45231 paid a visit to the Llangollen Railway to attend the Steel Steam & Stars IV gala, which ran over two weekends, Friday 6 to Sunday 8 March and Friday 13 to Sunday 15 March. As the railway is not connected to the national network, the locomotive had to be moved by road from Carnforth, and the entrance to the yard at Llangollen was very tight. During the first weekend of the gala, the locomotive was failed with cylinder problems and was not able to take part in the remainder of the gala; the engine returned to Carnforth shortly after the gala finished.
The locomotive was, until May 2015, owned by Bert Hitchen and after his death, the locomotive was taken into the care of the Hitchen family who looked after the engine until November 2015, when Jeremy Hosking purchased the locomotive from the Hitchen family. 45231 is currently based at Crewe Diesel TMD alongside a number of other mainline certified locomotives to help with charter trains. Its current mainline certificate expires in 2020 with the boiler certificate running out in 2023.
Fame in Preservation
45231 was used in the 40th anniversary special of the Fifteen Guinea Special on 10 August 2008. It worked alongside fellow class engine 45407 when they double headed the train from Carlisle back as far as Blackburn. 45231 then worked the train alone back to Liverpool Lime Street via Wigan.
2013 marked the 45th anniversary of the end of regular steam on British Railways in August 1968, and because 11 August was on a Sunday, it was fitting that the special was to run on the exact day 45 years after the 1968 run. 45231 was one of the chosen engines to work two special one-off railtours in August. The first trip was on Wednesday 7 August, working one of Statesman Rail's Fellsman trains which was renamed for the occasion "The Fifteen Guinea Fellsman", which also had the 1T57 headboard fitted alongside the regular Fellsman headboard. It ran in double-headed form with sister engine 44932 from Lancaster to Carlisle and back via the Settle-Carlisle line both ways.
Then on the Sunday of the same week, 11 August (the final day of regular steam on BR in 1968), 45231 was given the honour of double heading once again with sister 44932, but this time acting as pilot engine, and it carried the 1T57 Headboard for the occasion. Also for the trip, its "Sherwood Forester" nameplates were removed as the majority of Black 5s in BR days, apart from five, did not have nameplates fitted. 45231 & 44932 worked the Carlisle to Manchester via Hellifield and Darwen leg of the Railway Touring Company's "Fifteen Guinea Special" 45th anniversary train. Other engines that played roles in this special were a third sibling engine no 45305 (allocated to the original train in 1968 but replaced by 45110) which worked from Liverpool to Manchester and return via Warrington Central and 70013 Oliver Cromwell which worked the Manchester to Carlisle leg via Hellifield.
TPWS Isolation Incident
On 2 October 2015, The Sherwood Forester was working a West Coast Railways (WCR) special through Doncaster when it was noticed that its Train Protection & Warning System (TPWS) had been isolated by the footplate crew. Previously, isolation of TPWS on a WCR-operated locomotive had been a factor in the 2015 Wootton Bassett SPAD incident and the resultant suspension of WCR's access to the network. As a result, in November 2015 a further prohibition notice was issued to WCR by the Office of Rail and Road, suspending further steam services operated by the company.
External links
Great Central Railway page
Railuk database
45231
Preserved London, Midland and Scottish Railway steam locomotives
Individual locomotives of Great Britain
Railway locomotives introduced in 1936
Standard gauge steam locomotives of Great Britain
A UK gene therapy company – Gyroscope Therapeutics Ltd – raises £50.4 million in funding for retinal disease treatment
by Dr. Gearóid Tuohy
1st October 2019
Gyroscope Therapeutics Ltd., based in Stevenage, UK, has announced the successful completion of a £50.4 million investment to advance clinical development of an investigational gene therapy treatment for dry age-related macular degeneration (dry AMD). The company is built on studies and scientific publications which link chronic local inflammation and activation of the complement system to the pathogenesis of AMD. In addition, the gene therapy is intended to provide a sustained drug-delivery modality, in ophthalmic and non-ophthalmic indications, in partnership with surgical device technologies that are also backed by Syncona Ltd., a healthcare life sciences investment company.
Syncona has invested £48 million in the current funding round, which, according to the largest shareholder, brings total investment to about £82 million since Gyroscope's founding. In addition, Cambridge Innovation Capital (CIC) reports that it has invested £2.4 million in the same funding round. Gyroscope is focused on developing a gene therapy treatment (GT005), being evaluated in the Phase I/II FOCUS trial, to treat dry age-related macular degeneration. According to the company, the new capital is to be used to advance clinical development of the company's investigational gene therapy, a recombinant non-replicating adeno-associated viral (AAV) vector encoding a human complement factor. The financing is also intended to support a manufacturing platform and the company's subretinal delivery system, which may additionally be licensed to multiple commercial partners. The clinical trial is formally entitled "FocuS: An Open Label First in Human Phase 1/2 Multicentre Study to Evaluate the Safety, Dose Response and Efficacy of GT005 Administered as a Single Subretinal Injection in Subjects With Macular Atrophy Due to AMD". The study is a non-randomized interventional trial aiming to recruit patients at seven sites: Bristol Eye Hospital, Moorfields Eye Hospital, Manchester Eye Hospital, Nottingham University Hospital, Oxford University Hospital and two Southampton hospitals. The study is estimated to complete its primary outcome measures in February 2021.
Mr. Chris Hollowood, chief investment officer and chairman of Gyroscope stated that the company "have brought this novel medicine to the clinic in less than two and half years from formation. In conjunction they have built a surgical platform alongside that has the potential to improve the therapeutic's safety, efficacy and consistency as well as increase the number of patients that can potentially benefit. We are pleased to provide the team the support to build on this momentum and potentially bring the first medicine for dry-AMD to patients around the world."
Q: How to replace spaces with "_" in the data between brackets using shell script? I want to replace the spaces in the data that appears between brackets, using a shell script.
My input line is:
2012-05-21 06:37:16 M NumberOfHwEntitiesMismatch Cabinet=1
(SAU that is not configured detected.)
I want my output to be:
2012-05-21 06:37:16 M NumberOfHwEntitiesMismatch Cabinet=1
(SAU_that_is_not_configured_detected.)
Please suggest something.
A: Using awk, split on "(" and then use gsub to replace space with underscore in the second field.
Example:
$ awk -F\( '{gsub(" ","_", $2);print $1"("$2}' <<< "2012-05-21 06:37:16 M NumberOfHwEntitiesMismatch Cabinet=1 (SAU that is not configured detected.)"
2012-05-21 06:37:16 M NumberOfHwEntitiesMismatch Cabinet=1 (SAU_that_is_not_configured_detected.)
(This assumes that your input has only one set of brackets.)
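If a line may contain more than one set of brackets (or you prefer sed), a GNU sed substitution loop handles every group; this is a sketch that assumes the brackets are never nested:

```shell
# GNU sed loop: each pass replaces the first remaining space that sits
# between "(" and ")"; the "ta" branch repeats until no substitution occurs.
printf '%s\n' '2012-05-21 06:37:16 M NumberOfHwEntitiesMismatch Cabinet=1 (SAU that is not configured detected.)' |
  sed -E ':a; s/(\([^)]*) ([^)]*\))/\1_\2/; ta'
# -> 2012-05-21 06:37:16 M NumberOfHwEntitiesMismatch Cabinet=1 (SAU_that_is_not_configured_detected.)
```

Because each pass restarts the search from the left of the line, every parenthesised group gets its spaces replaced, one space per iteration.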
\section{Introduction}
We consider the problem of sparse linear classification, where the output depends upon a linear combination of a small subset of features. This is a core problem in high-dimensional statistics~\citep{hastie2015statistical} where the number of features $p$ is comparable to or exceeds the number of samples $n$. In such settings, sparsity can be useful from a statistical viewpoint and can lead to more
interpretable models. We consider the typical binary classification problem with samples $(\mathbf{x}_i, y_i), i = 1, \ldots, n$, features $\mathbf{x}_i \in \mathbb{R}^p$, and outcome $y_i \in \{-1,+1\}$. In the spirit of best-subset selection in linear regression~\citep{miller2002subset}, we consider minimizing the empirical risk (i.e., a surrogate for the misclassification error) while penalizing the number of nonzero coefficients:
\begin{equation} \label{eq:l0vanilla}
\min \limits_{ \boldsymbol{\beta} \in \mathbb{R}^p } \frac{1}{n} \sum_{i=1}^n f \left( \langle \mathbf{x}_i, \boldsymbol{\beta} \rangle , y_i \right) + \lambda_0 \| \boldsymbol{\beta} \|_0,
\end{equation}
where $f: \mathbb{R} \times \{-1,+1\} \to \mathbb{R}$ is the loss function (for example, hinge or logistic loss). The term $\| \boldsymbol{\beta} \|_0$ is the $\ell_0$ (pseudo)-norm of $\boldsymbol\beta$ which is equal to the number of nonzeros in $\boldsymbol{\beta}$, and $\lambda_0>0$ is a regularization parameter which controls the number of nonzeros in $\boldsymbol{\beta}$. We ignore the intercept term in the above and throughout the paper to simplify the presentation. Problem \eqref{eq:l0vanilla} is known to be NP-Hard and poses computational challenges~\citep{natarajan1995sparse}. In this paper, we introduce scalable algorithms for this optimization problem using techniques based on both continuous and discrete optimization, specifically, mixed integer programming~\citep{wolsey1999integer}.
There is an impressive body of work on obtaining approximate solutions to Problem~\eqref{eq:l0vanilla}: popular candidates include greedy (a.k.a. stepwise) procedures~\citep{bahmani2013greedy},
proximal gradient methods~\citep{blumensath2009iterative}, among others. The $\ell_{1}$-norm~\citep{tibshirani1996regression} is often used as a convex surrogate to the $\ell_0$-norm, leading to a convex optimization problem. Nonconvex continuous penalties (such as MCP and SCAD)~\citep{zhang2010nearly}
provide better approximations of the $\ell_0$-penalty but lead to nonconvex problems, for which gradient-based methods~\citep{gong2013general,boyd2014,li2015accelerated} and coordinate descent~\citep{ncvreg,sparsenet} are often used. These algorithms may not deliver optimal solutions for the associated nonconvex problem. Fairly recently, there has been considerable interest in exploring Mixed Integer Programming (MIP)-based methods~\citep{wolsey1999integer,bestsubset,ustun2016supersparse,sato2016feature} to solve variants of Problem~\eqref{eq:l0vanilla} to optimality. MIP-based methods create a branch-and-bound tree that simultaneously leads to feasible solutions and corresponding lower-bounds (a.k.a. dual bounds). Therefore, these methods deliver optimality certificates for the nonconvex optimization problem. Despite their appeal in delivering nearly optimal solutions to Problem~\eqref{eq:l0vanilla}, MIP-based algorithms are usually computationally expensive compared to convex relaxations or greedy (heuristic) algorithms~\citep{fastbestsubset,hastie2020}---possibly limiting their use in time-sensitive applications that arise in practice.
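To fix ideas, when the average loss $g(\boldsymbol{\beta}) := \frac{1}{n} \sum_{i=1}^n f \left( \langle \mathbf{x}_i, \boldsymbol{\beta} \rangle , y_i \right)$ is differentiable with an $L$-Lipschitz gradient (as for the logistic loss), a proximal gradient (IHT-style) iteration for Problem~\eqref{eq:l0vanilla} takes the closed form
\begin{equation*}
\boldsymbol{\beta}^{t+1} \in H_{\sqrt{2\lambda_0/L}}\Big( \boldsymbol{\beta}^{t} - \tfrac{1}{L} \nabla g(\boldsymbol{\beta}^{t}) \Big),
\end{equation*}
where the hard-thresholding operator $H_{c}$ sets to zero every coordinate with magnitude smaller than $c$ and leaves the remaining coordinates unchanged.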
The vanilla version of best-subset selection is often perceived as a gold-standard for high-dimensional sparse linear regression, when the signal-to-noise ratio (SNR) is high. However, it suffers from overfitting when the SNR becomes moderately low~\citep{friedman2001elements,lowsnr,hastie2020}. A possible way to mitigate this shortcoming is by imposing additional continuous regularization---see for example,~\citet{lowsnr,fastbestsubset} for studies in the (linear) regression setting. Thus, we consider an extended family of estimators which combines $\ell_0$ and $\ell_q$ (for $q \in \{1, 2\}$) regularization:
\begin{equation} \label{eq:mainlagrangian}
\min \limits_{\boldsymbol{\beta} \in \mathbb{R}^p}~~~\frac{1}{n} \sum_{i=1}^n f \left( \langle \mathbf{x}_i, \boldsymbol{\beta} \rangle , y_i \right) + \lambda_0 \| \boldsymbol{\beta} \|_0 + \lambda_q \| \boldsymbol{\beta} \|_q^q,
\end{equation}
where the regularization parameter $\lambda_0 \geq 0$ explicitly controls the sparsity in $\boldsymbol\beta$, and $\lambda_q \geq 0$ controls the amount of continuous shrinkage on the nonzero coefficients of $\boldsymbol\beta$ (for example, the margin in linear SVM).
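For example, with the hinge loss and $q = 2$, Problem~\eqref{eq:mainlagrangian} specializes to an $\ell_0$-penalized soft-margin linear SVM:
\begin{equation*}
\min \limits_{\boldsymbol{\beta} \in \mathbb{R}^p}~~~\frac{1}{n} \sum_{i=1}^n \max\left(0, \, 1 - y_i \langle \mathbf{x}_i, \boldsymbol{\beta} \rangle \right) + \lambda_0 \| \boldsymbol{\beta} \|_0 + \lambda_2 \| \boldsymbol{\beta} \|_2^2,
\end{equation*}
where the ridge term plays the usual margin-controlling role of the SVM regularizer.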
In what follows, for notational convenience, we will refer to the combination of regularizers $\lambda_0 \| \boldsymbol{\beta} \|_0$ and $ \lambda_q \| \boldsymbol{\beta} \|_q^q$, as the $\ell_0$-$\ell_q$ penalty.
For flexibility, our framework allows for both choices of $q \in \{1, 2\}$, and the value of $q$ needs to be specified a-priori by the practitioner. When $q=2$, Problem~\eqref{eq:mainlagrangian} seeks to deliver a solution $\boldsymbol\beta$ with few nonzeros (controlled by the $\ell_{0}$-penalty) and a small $\ell_{2}$-norm (controlled by the ridge penalty).
Similarly, when $q=1$, we seek a model $\boldsymbol\beta$ that has a small $\ell_{1}$-norm and a small $\ell_0$-norm. If $\lambda_{1}$ is large, the $\ell_1$-penalty may also encourage zeros in the coefficients. Note that the primary role of the $\ell_0$-penalty is to control the number of nonzeros in $\boldsymbol\beta$; and that of the $\ell_1$-penalty is to shrink the model coefficients. In our numerical experiments we observe that both choices of $q\in \{1, 2\}$ work quite well, with no penalty uniformly dominating the other. We refer the reader to~\citet{lowsnr} for complementary discussions in the regression setting.
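To see how the two forms of regularization interact, it is instructive to examine the univariate problem that arises in coordinate-wise minimization of~\eqref{eq:mainlagrangian}. For $q=2$, a short computation shows that
\begin{equation*}
\hat{\beta}(z) := \operatorname*{argmin}_{\beta \in \mathbb{R}} ~ \frac{1}{2}(\beta - z)^2 + \lambda_0 \mathbf{1}[\beta \neq 0] + \lambda_2 \beta^2
\;=\; \begin{cases} z/(1+2\lambda_2) & \text{if } |z| \geq \sqrt{2\lambda_0(1+2\lambda_2)}, \\ 0 & \text{otherwise}, \end{cases}
\end{equation*}
that is, the ridge term shrinks the surviving coefficient while the $\ell_0$-penalty sets the selection threshold. For $q=1$, the analogous solution soft-thresholds $z$ by $\lambda_1$ and is set to zero whenever $|z| < \lambda_1 + \sqrt{2\lambda_0}$.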
A primary focus of our work is to propose new scalable algorithms for solving Problem~\eqref{eq:mainlagrangian},
with certificates of optimality (suitably defined). Problem~\eqref{eq:mainlagrangian} can be expressed using MIP formulations.
However, these formulations lead to computational challenges for off-the-shelf commercial MIP solvers (such as Gurobi and CPLEX). To this end, we propose a new MIP-based algorithm that we call ``integrality generation'', which allows for solving instances of Problem~\eqref{eq:mainlagrangian} with $p\approx 50,000$ (where $n$ is small) to optimality within a few minutes.\footnote{Empirically, we observe the runtime to depend upon the number of nonzeros in the solution. The runtime can increase if the number of nonzeros in an optimal solution becomes large---see Section~\ref{sec:experiments} for details.} This appears to be well beyond the capabilities of state-of-the-art MIP solvers, including recent MIP-based approaches, as outlined below. To obtain high-quality solutions for larger problem instances, in times comparable to the fast $\ell_1$-based solvers~\citep{glmnet}, we propose approximate algorithms based on coordinate descent (CD)~\citep{wright2015coordinate} and local combinatorial optimization,\footnote{The local combinatorial optimization algorithms are based on solving MIP problems over restricted search-spaces; and are usually much faster to solve compared to the full problem~\eqref{eq:mainlagrangian}.} where the latter leads to higher quality solutions compared to CD. Our CD and local combinatorial optimization algorithms are publicly available through our fast C++/R toolkit \texttt{L0Learn}: on CRAN at \url{https://cran.r-project.org/package=L0Learn} and also at \url{https://github.com/hazimehh/L0Learn}.
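For concreteness, one standard way to express Problem~\eqref{eq:mainlagrangian} as a MIP introduces a binary indicator $z_j \in \{0,1\}$ for each coefficient, together with a big-$M$ constant $\mathcal{M}$ assumed large enough that some optimal solution satisfies $\|\boldsymbol{\beta}\|_\infty \leq \mathcal{M}$:
\begin{equation*}
\min_{\boldsymbol{\beta} \in \mathbb{R}^p, \, \mathbf{z} \in \{0,1\}^p} ~ \frac{1}{n} \sum_{i=1}^n f \left( \langle \mathbf{x}_i, \boldsymbol{\beta} \rangle , y_i \right) + \lambda_0 \sum_{j=1}^p z_j + \lambda_q \| \boldsymbol{\beta} \|_q^q \quad \text{s.t.} \quad |\beta_j| \leq \mathcal{M} z_j, ~~ j = 1, \ldots, p.
\end{equation*}
This formulation involves $p$ binary variables, which is the main source of computational difficulty; the IGA mitigates this burden by enforcing integrality on only a subset of the coordinates of $\mathbf{z}$ at a time.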
From a statistical viewpoint, we establish new upper bounds on the estimation error for solutions obtained by globally minimizing $\ell_0$-based estimators~\eqref{eq:mainlagrangian}. These error bounds (rates) appear to be better than current known bounds for $\ell_1$-regularization; and have rates similar to the optimal minimax rates for sparse least squares regression~\citep{raskutti_wainwright}, achieved by $\ell_0$-based regression procedures.
\textbf{Related Work and Contributions:}
There is a vast body of work on developing optimization algorithms and understanding the statistical properties of various sparse estimators~\citep{hastie2015statistical,stats-HDD}.
We present a brief overview of work that relates to our paper.
\textbf{Computation:}
An impressive body of work has developed fast algorithms for minimizing the empirical risk regularized with convex or nonconvex proxies to the $\ell_0$-norm, e.g., \citet{glmnet,ncvreg,sparsenet,NesterovComposite,shalev2012proximal}. Below, we discuss related work that directly optimize objective functions involving an $\ell_0$ norm (in the objective or as a constraint).
Until recently, global optimization with $\ell_0$-penalization was rarely used beyond $p = 30$ as popular software packages for best-subset selection (for example, \texttt{leaps} and \texttt{bestglm}) are unable to handle larger instances. \citet{bestsubset}~demonstrated that $\ell_0$-regularized regression problems could be solved to near-optimality for $p \approx 10^3$ by leveraging advances in first-order methods and the capabilities of modern MIP solvers such as Gurobi. \citet{bertsimas2017logistic,sato2016feature} extend the work of \citet{bestsubset} to solve $\ell_0$-regularized logistic regression by using an outer-approximation approach that can address problems with $p$ in the order of a few hundreds. \citet{bertsimas2017sparse} propose a cutting plane algorithm for the $\ell_0$-constrained least squares problem with additional ridge regularization---they can handle problems with $n \approx p$, when the feature correlations are low and/or the amount of ridge regularization is taken to be sufficiently large. \citet{BertsimasSparseClassification} adapt \citet{bertsimas2017sparse}'s work to solve classification problems (e.g., with logistic or hinge loss). The approach of \citet{BertsimasSparseClassification} appears to require a fairly high amount of ridge regularization for the cutting plane algorithm to work well.
A separate line of research investigates algorithms to obtain feasible solutions for $\ell_0$-regularized problems. These algorithms do not provide dual bounds like MIP-based algorithms, but can be computationally much faster. These include: (i) first-order optimization algorithms based on hard thresholding, such as Iterative Hard Thresholding (IHT) \citep{blumensath2009iterative} and GraSP \citep{bahmani2013greedy}, (ii) second-order optimization algorithms inspired by the Newton method such as NTGP \citep{yuan2017}, NHTP \citep{zhou2019global}, and NSLR \citep{wang2019fast}, and (iii) coordinate descent methods based on greedy and random coordinate selection rules \citep{BeckSparsityConstrained, randomCDL0}.
\citet{fastbestsubset} present algorithms that offer a \emph{bridge} between MIP-based global optimization and good feasible solutions for $\ell_0$-regularized problems, by using a combination of CD and local combinatorial optimization. The current paper is similar in spirit, but makes new contributions. We extend the work of~\citet{fastbestsubset} (which is tailored to the least squares loss function) to address the more general class of problems in~\eqref{eq:mainlagrangian}. Our algorithms can deliver solutions with better statistical performance (for example, in terms of variable selection and prediction error) compared to the popular fast algorithms for sparse learning (e.g., based on $\ell_1$ and MCP regularizers). Unlike heuristics that simply deliver an upper bound, MIP-based approaches attempt to solve~\eqref{eq:mainlagrangian} to optimality.
They can (i) certify via dual bounds the quality of solutions obtained by our CD and local search algorithms; and (ii) improve the solution if it is not optimal. However, as off-the-shelf MIP solvers do not scale well, we present a new method: the \emph{Integrality Generation Algorithm} (IGA) (see Section~\ref{sec:MIO}), which allows us to \emph{solve} (to optimality) MIP instances larger than those handled by current
methods~\citep{bertsimas2017logistic,BertsimasSparseClassification,sato2016feature}.
The key idea behind our proposed IGA is to solve a sequence of relaxations of~\eqref{eq:mainlagrangian} by allowing only a subset of variables to be binary. On the contrary, a direct MIP formulation for~\eqref{eq:mainlagrangian} requires $p$ many binary variables; and can be prohibitively expensive for moderate values of $p$.
\textbf{Statistical Properties:}
Statistical properties of high-dimensional linear regression have been widely studied \citep{candes-sparse-estimation, raskutti_wainwright, bic-tsybakov, candes2007dantzig, lasso-dantzig}. One important statistical performance measure is the $\ell_2$-estimation error defined as $\| \boldsymbol{\beta}^{*} - \hat{\boldsymbol{\beta}} \|^2_2$, where $\boldsymbol{\beta}^{*}$ is the $k$-sparse vector used in generating the true model and $\hat{\boldsymbol{\beta}}$ is an estimator. For regression problems, \citet{candes-sparse-estimation, raskutti_wainwright} established a $(k/n)\log(p/k)$ lower bound on the $\ell_2$-estimation error. This optimal minimax rate is known to be achieved by a global minimizer of an $\ell_0$-regularized estimator~\citep{bic-tsybakov}.
It is well known that the Dantzig Selector and Lasso estimators achieve a $(k/n)\log(p)$ error rate~\citep{candes2007dantzig, lasso-dantzig} under suitable assumptions for the high-dimensional regression setting.
Compared to regression, there has been limited work on deriving estimation error bounds for classification tasks. \citet{tarigan} study margin adaptation for $\ell_1$-norm SVM. A sizable amount of work focuses on
the analysis of generalization error and risk bounds~\citep{greenshtein2006best,vdg_linear_models}.
\citet{vssvm} study variable selection consistency of a nonconvex penalized SVM estimator, using a local linear approximation method with a suitable initialization. Recently, \citet{L1-SVM} proved a $(k/n) \log(p)$ upper bound for the $\ell_2$-estimation error of $\ell_1$-regularized support vector machines (SVM), where $k$ is the number of nonzeros in the estimator that minimizes the population risk. \citet{Wainwright-logreg} show consistent neighborhood selection for high-dimensional Ising models using an $\ell_1$-regularized logistic regression estimator. \citet{one-bit} show that one can obtain an error rate of $(k/n)\log(p/k)$ for 1-bit compressed sensing problems. In this paper, we present (to our knowledge) new $\ell_{2}$-estimation error bounds for a (global) minimizer of Problem~\eqref{eq:l0vanilla}---our framework applies to a family of loss functions including the hinge and logistic loss functions.
\textbf{Our Contributions:} We summarize our contributions below:
\begin{itemize}
\item We develop fast first-order algorithms based on cyclic CD and local combinatorial search to (approximately) solve Problem \eqref{eq:mainlagrangian} (see Section \ref{sec:algorithm}). We prove a new result which establishes the convergence of cyclic CD under an asymptotic linear rate. We show that combinatorial search leads to solutions of higher quality than IHT and CD-based methods. We discuss how solutions from the $\ell_0$-penalized formulation, i.e., Problem~\eqref{eq:mainlagrangian}, can be used to obtain solutions to the cardinality constrained variant of~\eqref{eq:mainlagrangian}. We open source these algorithms through our sparse learning toolkit \texttt{L0Learn}.
\item We propose a new algorithm: IGA, for solving Problem~\eqref{eq:mainlagrangian} to optimality. On some problems, our algorithm reduces the time for solving a MIP formulation of Problem~\eqref{eq:mainlagrangian} from the order of hours to seconds, and it can solve high-dimensional instances with $p \approx 50,000$ and small $n$. The algorithm is presented in Section \ref{sec:MIO}.
\item We establish upper bounds on the squared $\ell_2$-estimation error for a cardinality constrained variant of Problem~\eqref{eq:mainlagrangian}. Our $(k/n)\log(p/k)$ upper bound matches the optimal minimax rate known for regression.
\item On a series of high-dimensional synthetic and real data sets (with $p \approx 10^5$), we show that our proposed algorithms can achieve significantly better statistical performance in terms of prediction (AUC), variable selection accuracy, and support sizes, compared to state-of-the-art algorithms (based on $\ell_1$ and local solutions to $\ell_0$ and MCP regularizers).
Our proposed CD algorithm compares favorably in terms of runtime compared to current popular toolkits~\citep{glmnet,ncvreg,bahmani2013greedy,zhou2019global} for sparse classification.
\end{itemize}
\subsection{Preliminaries and Notation} \label{sec:formulation-notation}
For convenience, we introduce the following notation:
$$g(\boldsymbol{\beta}) \eqdef \frac{1}{n} \sum_{i=1}^n f \left( \langle \mathbf{x}_i, \boldsymbol{\beta} \rangle , y_i \right)~~~\text{and}~~~G(\boldsymbol{\beta}) \eqdef g(\boldsymbol{\beta}) + \lambda_1 \| \boldsymbol{\beta} \|_1 + \lambda_2 \| \boldsymbol{\beta} \|_2^2.$$
Problem \eqref{eq:mainlagrangian} is an instance of the following (more general) problem:
\begin{equation} \label{eq:lagrangian}
\min \limits_{ \boldsymbol{\beta} \in \mathbb{R}^p } P(\boldsymbol{\beta}) \eqdef G(\boldsymbol{\beta}) + \lambda_0 \| \boldsymbol{\beta} \|_0.
\end{equation}
In particular, Problem \eqref{eq:mainlagrangian} with $q=1$ is equivalent to Problem \eqref{eq:lagrangian} with $\lambda_2 = 0$. Similarly, Problem \eqref{eq:mainlagrangian} with $q=2$ is equivalent to Problem \eqref{eq:lagrangian} with $\lambda_1 = 0$. Problem~\eqref{eq:lagrangian} will be the focus of our algorithmic development.
We denote the set $\{1,2,\dots,p\}$ by $[p]$ and the canonical basis for $\mathbb{R}^p$ by $\boldsymbol{e}_1, \dots, \boldsymbol{e}_p$. For $\boldsymbol{\beta}\in \mathbb{R}^p$, we use $\text{Supp}(\boldsymbol{\beta})$ to denote the support of $\boldsymbol{\beta}$, i.e., the indices of its nonzero entries. For $S \subseteq [p]$, $\boldsymbol{\beta}_S \in \mathbb{R}^{|S|}$ denotes the subvector of $\boldsymbol{\beta}$ with indices in $S$. Moreover, for a differentiable function $g(\boldsymbol{\beta})$, we use the notation $\nabla_S g(\boldsymbol{\beta})$ to refer to the subvector of the gradient $\nabla g(\boldsymbol{\beta})$ restricted to coordinates in $S$. We let $\mathbb{Z}$ and $\mathbb{Z}_{+}$ denote the set of integers and non-negative integers, respectively. A convex function $g(\boldsymbol\beta)$ is said to be $\mu$-strongly convex if $\boldsymbol\beta \mapsto g(\boldsymbol\beta)-\mu\|\boldsymbol\beta\|_{2}^2/2$ is convex. A function $h(\boldsymbol{\beta})$ is said to be Lipschitz with parameter $L$ if $\| h(\boldsymbol{\beta}) - h(\boldsymbol{\alpha}) \|_2 \leq L \| \boldsymbol{\beta} - \boldsymbol{\alpha} \|_2$ for all $\boldsymbol{\beta}, \boldsymbol{\alpha}$ in the domain of the function.
\subsection{Examples of Loss Functions Considered} \label{sec:supportedloss}
In Table \ref{table:lossfunctions}, we give examples of popular classification loss functions that fall within the premise of our algorithmic framework and statistical theory. The column ``{FO \& Local Search}" indicates whether these loss functions are amenable to our first-order and local search algorithms (discussed in Section \ref{sec:algorithm}).\footnote{That is, these algorithms are guaranteed to converge to a stationary point (or a local optimum) for the corresponding optimization problems.} The column ``{MIP}" indicates whether the loss function leads to an optimization problem that can be solved (to optimality) via the MIP methods discussed in Section \ref{sec:MIO}.
Finally, the column ``Error Bounds" indicates if the statistical error bounds (estimation error) discussed in Section \ref{sec:error-bound} apply to the loss function.
\begin{table}[tb]
\centering
\begin{tabular}{@{}llccc@{}}
\toprule
Loss & $f(\hat{v},v)$ & FO \& Local Search & MIP & Error Bounds \\ \midrule
Logistic & $\log(1+e^{- \hat{v} v })$ & \cmark & \cmark & \cmark \\
Squared Hinge & $\max(0, 1 - \hat{v} v )^2$ & \cmark & \cmark & \xmark \\
Hinge & $\max(0, 1 - \hat{v} v )$ & \ \cmark* & \cmark & \cmark \\ \bottomrule
\end{tabular}
\caption{Examples of loss functions we consider. ``*'' denotes that our proposed first-order and local search methods apply after smoothing the non-smooth loss function as in \citet{nesterov2012efficiency}.}
\label{table:lossfunctions}
\end{table}
\section{First-Order and Local Combinatorial Search Algorithms}
\label{sec:algorithm}
Here we present fast cyclic CD and local combinatorial search algorithms for obtaining high-quality local minima (we make this notion precise later) for Problem~\eqref{eq:lagrangian}. Our framework assumes that $g(\boldsymbol\beta)$ is differentiable and has a Lipschitz continuous gradient. We first present a brief overview of the key ideas presented in this section, before diving into the technical details.
Due to the nonconvexity of~\eqref{eq:lagrangian}, the quality of the solution obtained depends on the algorithm---with local search and MIP-based algorithms leading to solutions of higher quality. The fixed points of the algorithms considered satisfy certain necessary optimality conditions for~\eqref{eq:lagrangian}, leading to different classes of local minima.
In terms of solution quality, there is a hierarchy among these classes.
We show that for Problem~\eqref{eq:lagrangian}, the minima corresponding to the different algorithms satisfy the following hierarchy:
\begin{equation}\label{eq:hierarchy}
\text{MIP Minima} ~\subseteq~ \text{Local Search Minima} ~\subseteq~ \text{CD Minima} ~\subseteq~ \text{IHT Minima}.
\end{equation}
The fixed points of the IHT algorithm contain the fixed points of the CD algorithm. As we move to the left in the hierarchy, the fixed points of the algorithms satisfy stricter necessary optimality conditions. At the far left of the hierarchy are the global minimizers, which can be obtained by solving a MIP formulation of~\eqref{eq:lagrangian}.
Our CD and local search algorithms can run in times comparable to the fast $\ell_1$-regularized approaches~\citep{glmnet}. These algorithms can lead to high-quality solutions that can be used as warm starts for MIP-based algorithms. The MIP framework of Section~\ref{sec:MIO} can be used to certify the quality of these solutions via dual bounds, and to improve over them (if they are sub-optimal).
In Section \ref{section:cd}, we introduce cyclic CD for Problem~\eqref{eq:lagrangian} and study its convergence properties. Section~\ref{section:localsearch} discusses how the solutions of cyclic CD can be improved by local search and presents a fast heuristic for performing local search in high dimensions. In Section \ref{sec:algo-IHT}, we discuss how our algorithms can be used to obtain high-quality (feasible) solutions to the {\emph{cardinality constrained}} counterpart of~\eqref{eq:lagrangian}, in which the complexity measure $\| \boldsymbol\beta \|_0$ appears as a constraint and not a penalty as in~\eqref{eq:lagrangian}.
Finally, in Section \ref{section:L0Learn}, we briefly present implementation aspects of our toolkit \texttt{L0Learn}.
\subsection{Cyclic Coordinate Descent: Algorithm and Computational Guarantees} \label{section:cd}
We describe a cyclic CD algorithm for Problem \eqref{eq:lagrangian} and establish its convergence to stationary points of~\eqref{eq:lagrangian}.
\textbf{Why cyclic CD?} We briefly discuss our rationale for choosing cyclic CD. Cyclic CD has been shown to be among the fastest algorithms for fitting generalized linear models with convex and nonconvex regularization (e.g., $\ell_1$, MCP, and SCAD) \citep{glmnet,sparsenet, ncvreg}. Indeed, it can effectively exploit sparsity and active-set updates, making it suitable for solving high-dimensional problems (e.g., with $p \sim 10^6$ and small $n$). Algorithms that require evaluation of the full gradient at every iteration (such as proximal gradient, stepwise, IHT or greedy CD algorithms) have difficulties in scaling with $p$~\citep{nesterov2012efficiency}. In an earlier work, \citet{randomCDL0} proposed random CD for problems similar to \eqref{eq:lagrangian} (without an
$\ell_1$-regularization term in the objective). However, recent studies have shown that cyclic CD can be faster than random
CD~\citep{BeckConvergence,gurbuzbalaban2017cyclic,fastbestsubset}.
Furthermore, for $\ell_0$-regularized regression problems, cyclic CD is empirically seen to obtain solutions of higher quality (e.g., in terms of optimization and statistical performance) compared to random CD~\citep{fastbestsubset}.
\textbf{Setup.} Our cyclic CD algorithm for~\eqref{eq:lagrangian} applies to problems where $g(\boldsymbol{\beta})$ is convex, continuously differentiable, and non-negative. Moreover, we will assume that the gradient of $\boldsymbol\beta \mapsto g(\boldsymbol{\beta})$ is coordinate-wise Lipschitz continuous, i.e., for every $i \in [p]$, $\boldsymbol{\beta} \in \mathbb{R}^p$ and $s \in \mathbb{R}$, we have:
\begin{equation} \label{eq:coordinateLipschitz}
| \nabla_i g(\boldsymbol{\beta} + \boldsymbol{e}_i s) - \nabla_i g(\boldsymbol{\beta} ) | \leq L_i |s|,
\end{equation}
where $L_i > 0$ is the Lipschitz constant for coordinate $i$. This assumption leads to the block Descent Lemma \citep{bertsekas2016nonlinear}, which states that
\begin{equation} \label{eq:blockdescent}
g(\boldsymbol{\beta} + \boldsymbol{e}_i s) \leq g(\boldsymbol{\beta}) + s \nabla_i g(\boldsymbol{\beta}) + \frac{L_i}{2} s^2.
\end{equation}
Several popular loss functions for classification fall under the above setup. For example, logistic loss and squared hinge loss satisfy~\eqref{eq:coordinateLipschitz} with $L_i = {\| \boldsymbol X_i \|_2^2}/{4n}$ and $L_i = {2 \| \boldsymbol X_i \|_2^2}/{n}$, respectively (here, $\mathbf{X}_i$ denotes the $i$-th column of the data matrix $\mathbf X$).
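For instance, these constants can be computed directly from the column norms of the data matrix $\mathbf{X}$; the NumPy sketch below is our own illustration (the function name is ours, not part of \texttt{L0Learn}):

```python
import numpy as np

def coordinate_lipschitz(X, loss="logistic"):
    """Coordinate-wise Lipschitz constants L_i of the gradient of
    g(beta) = (1/n) sum_i f(<x_i, beta>, y_i).

    logistic loss:      L_i = ||X_i||_2^2 / (4 n)
    squared hinge loss: L_i = 2 ||X_i||_2^2 / n
    """
    n = X.shape[0]
    col_sq_norms = np.sum(X ** 2, axis=0)  # ||X_i||_2^2 for each column i
    if loss == "logistic":
        return col_sq_norms / (4.0 * n)
    if loss == "squared_hinge":
        return 2.0 * col_sq_norms / n
    raise ValueError("unsupported loss: " + loss)
```

These $L_i$ are exactly the quantities used to pick the constants $\hat{L}_i > L_i$ in the CD algorithm below.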
\textbf{CD Algorithm.} Cyclic CD~\citep{bertsekas2016nonlinear} updates one coordinate at a time (with others held fixed) in a cyclical fashion.
Given a solution $\boldsymbol{\beta} \in \mathbb{R}^p$, we attempt to find a new solution by changing the $i$-th coordinate of $\boldsymbol\beta$---i.e., we find $\boldsymbol\alpha$ such that ${\alpha}_j = {\beta}_j$ for all $j \neq i$ and $\alpha_i$ minimizes the one-dimensional function $\beta_{i} \mapsto P(\boldsymbol{\beta})$. However, for the examples we consider (e.g., logistic and squared hinge losses), this one-dimensional minimization has no closed-form solution, which makes exact minimization computationally inefficient compared to the case where $g$ is the squared error loss~\citep{fastbestsubset}. Using~\eqref{eq:blockdescent}, we instead consider a quadratic upper bound $\tilde{g}(\boldsymbol\alpha; \boldsymbol\beta)$ for $g(\boldsymbol\alpha)$ as follows:
\begin{equation}\label{defn-tilde-g}
g(\boldsymbol{\alpha}) \leq \tilde{g}(\boldsymbol\alpha; \boldsymbol\beta) \eqdef g(\boldsymbol{\beta}) + (\alpha_i - \beta_i) \nabla_i g(\boldsymbol{\beta}) + \frac{\hat{L}_i}{2} (\alpha_i - \beta_i)^2,
\end{equation}
where $\hat{L}_i$ is a constant which satisfies $\hat{L}_i > L_i$. (For notational convenience, we hide the dependence of $\tilde{g}$ on $i$.)
Let us define the function
$$\psi(\boldsymbol\alpha) = \sum_{i} \psi_{i} (\alpha_{i})~~~\text{where}~~~\psi_{i}(\alpha_i) = \lambda_0 \mathbb{1}(\alpha_i \neq 0) + \lambda_1 |\alpha_i| + \lambda_2 \alpha_i^2.$$
Adding $\psi(\boldsymbol\alpha)$ to both sides of equation~\eqref{defn-tilde-g}, we get:
\begin{equation} \label{eq:Ptilde}
P(\boldsymbol{\alpha}) \leq \widetilde{P}_{\hat{L}_i}(\boldsymbol{\alpha};\boldsymbol{\beta}) \eqdef \tilde{g}(\boldsymbol\alpha;\boldsymbol{\beta}) + \psi(\boldsymbol\alpha).
\end{equation}
We can approximately minimize $P(\boldsymbol{\alpha})$ w.r.t.~$\alpha_{i}$ (with other coordinates held fixed)
by minimizing its upper bound $\widetilde{P}_{\hat{L}_i}(\boldsymbol{\alpha};\boldsymbol{\beta})$ w.r.t.~$\alpha_i$. A solution $\hat{\alpha}_{i}$ for this one-dimensional optimization problem is given by
\begin{align} \label{eq:cdupperbd}
\hat{\alpha}_{i} \in \argmin_{\alpha_{i}}~\widetilde{P}_{\hat{L}_i}(\boldsymbol{\alpha};\boldsymbol{\beta})~~=~~\argmin_{\alpha_i} ~ \frac{\hat{L}_i}{2} \left( \alpha_i - \left(\beta_i - \frac{1}{{\hat{L}_i }} {\nabla_i g(\boldsymbol{\beta} )} \right) \right)^2 + \psi_{i} (\alpha_{i}).
\end{align}
Let $\hat{\boldsymbol\alpha}$ be a vector whose $i$-th component is $\hat{\alpha}_{i}$, and $\hat{\alpha}_{j} = \beta_{j}$ for all $j \neq i$. Note that $P(\hat{\boldsymbol{\alpha}}) \leq P({\boldsymbol{\beta}})$---i.e., updating the $i$-th coordinate via~\eqref{eq:cdupperbd} with all other coefficients held fixed, leads to a decrease in the objective value $P({\boldsymbol{\beta}})$. A solution of~\eqref{eq:cdupperbd} can be computed in closed-form; and is given by the thresholding operator $T: \mathbb{R} \to \mathbb{R}$ defined as follows:
\begin{equation} \label{eq:thresholding}
T(c; \boldsymbol{\lambda}, \hat{L}_i) = \begin{cases}
\frac{\hat{L}_i}{\hat{L}_i + 2\lambda_2} \Big( |c| - \frac{\lambda_1}{\hat{L}_i} \Big) \text{sign}(c) \quad & \text{ if } \frac{\hat{L}_i}{\hat{L}_i + 2\lambda_2} \Big( |c| - \frac{\lambda_1}{\hat{L}_i} \Big) \geq \sqrt{\frac{2 \lambda_0}{\hat{L}_i + 2 \lambda_2}} \\
0 \quad & \text{~~otherwise}
\end{cases}
\end{equation}
where $c = \beta_i - \nabla_i g(\boldsymbol{\beta} )/\hat{L}_i$ and $\boldsymbol{\lambda} = (\lambda_0, \lambda_1, \lambda_2)$.
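In code, the operator in~\eqref{eq:thresholding} is a simple scalar function. The following Python sketch (our own illustration, with hypothetical names) mirrors the formula term by term:

```python
import math

def threshold(c, lam0, lam1, lam2, L_hat):
    """Thresholding operator T(c; lambda, L_hat): the minimizer over alpha of
    (L_hat/2)*(alpha - c)**2 + lam0*1[alpha != 0] + lam1*|alpha| + lam2*alpha**2."""
    shrink = L_hat / (L_hat + 2.0 * lam2)     # ridge shrinkage factor
    val = shrink * (abs(c) - lam1 / L_hat)    # soft-thresholded magnitude
    if val >= math.sqrt(2.0 * lam0 / (L_hat + 2.0 * lam2)):
        return math.copysign(val, c)          # keep: clears the L0 threshold
    return 0.0
```

For $\lambda_1 = \lambda_2 = 0$ this reduces to hard-thresholding at level $\sqrt{2\lambda_0/\hat{L}_i}$, and for $\lambda_0 = \lambda_2 = 0$ to the familiar soft-thresholding rule.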
Algorithm 1 below summarizes our cyclic CD algorithm.
\begin{tcolorbox}[colback=white]
\centering
\textbf{Algorithm 1: Cyclic Coordinate Descent (CD)}
\begin{itemize}
\item \textbf{Input:} Initialization $\boldsymbol{\beta}^{0}$ and constant $\hat{L}_i > L_i$ for every $i \in [p]$
\item \textbf{Repeat for $l = 0, 1, 2, \dots$ until convergence: } \begin{enumerate}
\item $i \gets 1 + (l \mod p)$ and ${\beta}^{l+1}_j \gets {\beta}^{l}_j$ for all $j \neq i$
\item Update ${\beta}^{l+1}_i \gets \argmin_{\beta_i} \widetilde{P}_{\hat{L}_i}({\beta}^{l}_1, \dots, \beta_i, \dots, {\beta}^{l}_p; \boldsymbol{\beta}^{l})$ using \eqref{eq:thresholding} with $c~=~{\beta}^{l}_i~-~{\nabla_i g(\boldsymbol{\beta}^{l} )}/{\hat{L}_i }$
\end{enumerate}
\end{itemize}
\end{tcolorbox}
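For concreteness, the sketch below instantiates Algorithm 1 for the logistic loss. It is deliberately simplified (our own illustrative code, not the \texttt{L0Learn} implementation, which additionally exploits sparsity and active-set updates): each coordinate's gradient is recomputed from scratch.

```python
import numpy as np

def _threshold(c, lam0, lam1, lam2, L_hat):
    # Closed-form coordinate update: the thresholding operator T defined above.
    val = L_hat / (L_hat + 2.0 * lam2) * (abs(c) - lam1 / L_hat)
    if val >= np.sqrt(2.0 * lam0 / (L_hat + 2.0 * lam2)):
        return val * (1.0 if c >= 0 else -1.0)
    return 0.0

def cyclic_cd_logistic(X, y, lam0, lam1=0.0, lam2=0.0, n_cycles=100):
    """Algorithm 1 for g(beta) = (1/n) sum_i log(1 + exp(-y_i <x_i, beta>))."""
    n, p = X.shape
    # Coordinate-wise Lipschitz bounds L_i = ||X_i||^2 / (4n); pick L_hat_i > L_i.
    L_hat = np.sum(X ** 2, axis=0) / (4.0 * n) * 1.01 + 1e-12
    beta = np.zeros(p)
    for _ in range(n_cycles):
        for i in range(p):
            z = y * (X @ beta)
            sig = 0.5 * (1.0 - np.tanh(0.5 * z))   # stable 1 / (1 + exp(z))
            g_i = -np.dot(X[:, i], y * sig) / n    # i-th gradient coordinate
            c = beta[i] - g_i / L_hat[i]
            beta[i] = _threshold(c, lam0, lam1, lam2, L_hat[i])
    return beta
```

Since each coordinate update minimizes a majorizer of $P$ that is tight at the current iterate, the objective decreases monotonically along the iterations.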
\textbf{Computational Guarantees.} The convergence of cyclic CD has been extensively studied for certain classes of continuous objective functions, e.g., see \citet{Tseng2001,bertsekas2016nonlinear,BeckConvergence} and the references therein. However, these results do not apply to our objective function due to the discontinuity in the $\ell_0$-norm. In Theorem~\ref{theorem:cdconvergence}, we establish a new result which shows that cyclic CD (Algorithm 1) converges at an asymptotic linear rate,
and we present a characterization of the corresponding solution. Theorem~\ref{theorem:cdconvergence} is established under the following assumption:
\begin{asu} \label{assumption:strongconvexity}
Problem \eqref{eq:lagrangian} satisfies at least one of the following conditions:
\begin{enumerate}
\item Strong convexity of the continuous regularizer, i.e., $\lambda_2 > 0$.
\item Restricted Strong Convexity: For some $u \in [p]$, the function $\boldsymbol\beta_{S} \mapsto g(\boldsymbol{\beta}_S)$ is strongly convex for every $S \subseteq [p]$ such that $|S| \leq u$. Moreover, $\lambda_0$ and the initial solution $\boldsymbol{\beta}^{0}$ are chosen such that $P(\boldsymbol{\beta}^{0}) < u \lambda_0$.
\end{enumerate}
\end{asu}
\begin{theorem} \label{theorem:cdconvergence}
Let $\{ \boldsymbol{\beta}^{l} \}$ be the sequence of iterates generated by Algorithm 1. Suppose that Assumption \ref{assumption:strongconvexity} holds, then:
\begin{enumerate}
\item The support of $\boldsymbol\beta^l$ stabilizes in a finite number of iterations, i.e., there exists an integer $N$ and support $S \subset [p]$ such that $\text{Supp}(\boldsymbol{\beta}^{l}) = S$ for all $l \geq N$.
\item Let $S$ be the support as defined in Part 1. The sequence $\{ \boldsymbol{\beta}^{l} \}$ converges to a solution $\boldsymbol{\beta}^{*}$ with support $S$, satisfying:
\begin{equation}
\begin{aligned} \label{eq:CWminima}
& \boldsymbol{\beta}^{*}_S \in \argmin_{\boldsymbol{\beta}_S} G(\boldsymbol{\beta}_S) \\
& |{\beta}^{*}_i | \geq \sqrt{\frac{2 \lambda_0}{\hat{L}_i + 2 \lambda_2}} \quad \text{ for } i \in S \\
\text{and}~~~~~~& |\nabla_i g(\boldsymbol{\beta}^{*} )| - \lambda_1 \leq \sqrt{2 \lambda_0 (\hat{L}_i + 2 \lambda_2)} \quad \text{ for } i \in S^c.
\end{aligned}
\end{equation}
\item
Let $S$ be the support as defined in Part 1. Let us define $H(\boldsymbol\beta_{S})\eqdef g(\boldsymbol{\beta}_S) + \lambda_2 \| \boldsymbol{\beta}_S \|_2^2$.
Let $\sigma_{S}$ be the strong convexity parameter of $\boldsymbol\beta_{S} \mapsto H(\boldsymbol\beta_{S})$, and let $L_S$ be the Lipschitz constant of $\boldsymbol\beta_{S} \mapsto \nabla_{S} H(\boldsymbol\beta_{S})$. Denote $\hat{L}_{\max} = \max_{i \in S} \hat{L}_i $ and $\hat{L}_{\min} = \min_{i \in S} \hat{L}_i $. Then, there exists an integer $N'$ such that the following holds for all $t \geq N'$:
\begin{equation} \label{eq:cdrate}
{P(\boldsymbol{\beta}^{(t+1)p}) - P(\boldsymbol{\beta}^{*})} \leq \left( 1 - \frac{\sigma_S}{\gamma} \right)
\left(P(\boldsymbol{\beta}^{t p}) - P(\boldsymbol{\beta}^{*}) \right),
\end{equation}
where $\gamma^{-1} = 2 \hat{L}_{\max} (1 + |S| {L_S^2} \hat{L}^{-2}_{\min}).$
\end{enumerate}
\end{theorem}
We provide a proof of Theorem \ref{theorem:cdconvergence} in the appendix. The proof is different from that of CD for $\ell_0$-regularized regression \citep{fastbestsubset} since we use inexact minimization for every coordinate update, whereas \citet{fastbestsubset} use exact minimization. At a high level, the proof proceeds as follows. In Part 1, we prove a sufficient decrease condition which establishes that the support stabilizes in a finite number of iterations. In Part 2, we show that under Assumption \ref{assumption:strongconvexity}, the objective function is strongly convex when restricted to the stabilized support. After restriction to the stabilized support, we obtain convergence from standard results on cyclic CD \citep{bertsekas2016nonlinear}. In Part 3, we show an asymptotic linear rate of convergence for Algorithm~1. To establish this rate, we extend the linear rate of convergence of cyclic CD for smooth strongly convex functions by \citet{BeckConvergence} to our objective function (note that due to the presence of the $\ell_1$-norm, our objective is not smooth even after support stabilization).
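The magnitude and gradient conditions in~\eqref{eq:CWminima} are straightforward to verify numerically for a candidate solution. Below is a sketch of such a checker (our own helper; the remaining argmin condition over the support is assumed to hold, e.g., because the candidate came from a converged solver):

```python
import numpy as np

def is_cd_stationary(beta, grad, L_hat, lam0, lam1, lam2, tol=1e-8):
    """Check the support conditions of the CD fixed points:
    |beta_i| >= sqrt(2*lam0 / (L_hat_i + 2*lam2))         for i in S,
    |grad_i| - lam1 <= sqrt(2*lam0 * (L_hat_i + 2*lam2))  for i in S^c."""
    S = beta != 0
    lo = np.sqrt(2.0 * lam0 / (L_hat + 2.0 * lam2))
    hi = lam1 + np.sqrt(2.0 * lam0 * (L_hat + 2.0 * lam2))
    return bool(np.all(np.abs(beta[S]) >= lo[S] - tol)
                and np.all(np.abs(grad[~S]) <= hi[~S] + tol))
```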
\textbf{Stationary Points of CD versus IHT:}
The conditions in \eqref{eq:CWminima} describe a fixed point of the cyclic CD algorithm and are necessary optimality conditions for Problem~\eqref{eq:lagrangian}. We now show that the stationary conditions~\eqref{eq:CWminima} are strictly contained within the class of stationary points arising from the IHT algorithm \citep{blumensath2009iterative,BeckSparsityConstrained,bestsubset}. Recall that IHT can be interpreted as a proximal gradient algorithm, whose updates for
Problem~\eqref{eq:lagrangian} are given by:
\begin{align} \label{eq:iht}
\boldsymbol{\beta}^{l+1} \in \argmin_{\boldsymbol{\beta}}~ \left\{ \frac{1}{2\tau} \| \boldsymbol{\beta} - (\boldsymbol{\beta}^{l} - \tau \nabla g(\boldsymbol{\beta}^{l})) \|_2^2 + \psi(\boldsymbol{\beta}) \right\},
\end{align}
where $\tau > 0$ is a step size. Let $L$ be the Lipschitz constant of $\boldsymbol\beta \mapsto \nabla g(\boldsymbol\beta)$, and let $\hat{L}$ be any constant satisfying $\hat{L} > L$. Update~\eqref{eq:iht} is guaranteed to converge to a stationary point if $\tau = 1/\hat{L}$ (e.g., see \citealt{PenalizedIHT,fastbestsubset}). Note that $\tilde{\boldsymbol\beta}$ is a fixed point for~\eqref{eq:iht} if it satisfies \eqref{eq:CWminima} with $\hat{L}_i$ replaced with $\hat{L}$. The component-wise Lipschitz constant $L_i$ always satisfies $L_i \leq L$. For high-dimensional problems, we may have $L_i \ll L$ (see discussions in \citealt{BeckSparsityConstrained,fastbestsubset} for problems where $L$ grows with $p$ but $L_i$ is constant). Hence, the CD optimality conditions in~\eqref{eq:CWminima} are more restrictive than IHT---justifying a part of the hierarchy mentioned in~\eqref{eq:hierarchy}. An important practical consequence of this result is that CD may lead to solutions of higher quality than IHT.
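For comparison, one IHT update~\eqref{eq:iht} can be written as a vectorized application of the same thresholding rule, but with a single global constant $\hat{L}$ (a sketch with illustrative names; the prox of the separable penalty $\psi$ decomposes coordinate-wise):

```python
import numpy as np

def iht_step(beta, grad, L_hat, lam0, lam1, lam2):
    """One proximal-gradient (IHT) step for P = g + psi with tau = 1/L_hat.
    Every coordinate uses the *same* global constant L_hat (> L)."""
    c = beta - grad / L_hat
    shrink = L_hat / (L_hat + 2.0 * lam2)
    vals = shrink * (np.abs(c) - lam1 / L_hat)
    keep = vals >= np.sqrt(2.0 * lam0 / (L_hat + 2.0 * lam2))
    return np.where(keep, vals * np.sign(c), 0.0)
```

Because the same $\hat{L}$ is used for every coordinate, while CD uses the (typically much smaller) per-coordinate constants $\hat{L}_i$, the fixed-point conditions of this update are weaker than those of CD.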
\begin{remark}
A solution $\boldsymbol\beta^*$ that satisfies the CD or IHT stationarity conditions is a local minimizer in the traditional sense used in nonlinear optimization.\footnote{That is, given a stationary point $\boldsymbol\beta^*$, there is a small $\epsilon>0$ such that any $\boldsymbol\beta$ lying in the set $\|\boldsymbol\beta - \boldsymbol\beta^*\|_2 \leq \epsilon$ will have an objective that is at least as large as the current objective value $P(\boldsymbol\beta^*)$.} Thus, we use the terms stationary point and local minimizer interchangeably in our exposition.
\end{remark}
\subsection{Local Combinatorial Search} \label{section:localsearch}
We propose a local combinatorial search algorithm to improve the quality of solutions obtained by Algorithm 1. Given a solution from Algorithm~1, the idea is to perform small perturbations to its support in an attempt to improve the objective. This approach has recently been shown to be very effective (e.g., in terms of statistical performance) for $\ell_0$-regularized regression~\citep{fastbestsubset}, especially in difficult statistical settings (highly correlated features, or $n$ small compared to $p$).
Here, we extend the approach of~\citet{fastbestsubset} to the general loss functions discussed in Section~\ref{section:cd}.
Since we consider a general loss function, exact local minimization becomes computationally expensive---we thus resort to an approximate minimization scheme. This makes our approach different from the least squares setting considered in~\citet{fastbestsubset}.
Our local search algorithm is iterative. It performs the following two steps at every iteration $t$:
\begin{enumerate}
\item \textbf{Coordinate Descent}: We run cyclic CD (Algorithm 1) initialized from the current solution, to obtain a solution $\boldsymbol{\beta}^{t}$ with support $S$.
\item \textbf{Combinatorial Search}: We attempt to improve $\boldsymbol{\beta}^{t}$ by
making a change to its current support $S$ via a \emph{swap} operation. In particular, we search for two subsets of coordinates $S_1 \subset S$ and $S_2 \subset S^c$, each of size at most $m$, such that removing coordinates $S_1$ from the support, adding $S_2$ to the support, and then optimizing over the coefficients in $S_2$, improves the current objective value.
\end{enumerate}
To present an optimization formulation for the combinatorial search step (discussed above), we introduce some notation. Let $U^S$ denote a $p \times p$ matrix whose $i$-th row is $\boldsymbol{e}_i^T$ if $i \in S$ and zero otherwise. Thus, for any $\boldsymbol{\beta} \in \mathbb{R}^{p}$, $(U^S \boldsymbol{\beta})_i = \beta_i$ if $i \in S$ and $(U^S \boldsymbol{\beta})_i = 0$ if $i \notin S$. The combinatorial search step solves the following optimization problem:
\begin{equation} \label{eq:swaps}
\min_{S_{1}, S_{2}, \boldsymbol{\beta}} ~~ P(\boldsymbol{\beta}^{t} - U^{S_1} \boldsymbol{\beta}^{t} + U^{S_2} \boldsymbol{\beta}) ~~~~~ \text{s.t. } ~~~~ S_1 \subset S, S_2 \subset S^c, |S_1| \leq m, |S_2| \leq m,
\end{equation}
where the optimization variables are the subsets $S_{1}$, $S_{2}$ and the coefficients of $\boldsymbol\beta$ restricted to $S_{2}$.
If there is a feasible solution $\hat{\boldsymbol{\beta}}$ to \eqref{eq:swaps} satisfying $P(\hat{\boldsymbol{\beta}}) < P(\boldsymbol{\beta}^{t})$, then we move to $\hat{\boldsymbol{\beta}}$. Otherwise, the current solution $\boldsymbol{\beta}^{t}$ cannot be improved by swapping subsets of coordinates, and the algorithm terminates. We summarize the algorithm below.
\begin{tcolorbox}[colback=white]
\centering
\textbf{Algorithm 2: CD with Local Combinatorial Search}
\begin{itemize}
\item \textbf{Input:} Initialization $\hat{\boldsymbol{\beta}}^{0}$ and swap subset size $m$.
\item \textbf{Repeat for $t = 1, 2, \dots$: } \begin{enumerate}
\item ${\boldsymbol{\beta}}^{t} \gets$ Output of cyclic CD initialized from $\hat{\boldsymbol{\beta}}^{t-1}$. Let $S \gets \text{Supp}(\boldsymbol{\beta}^{t})$.
\item Find a feasible solution $\hat{\boldsymbol{\beta}}$ to \eqref{eq:swaps} satisfying $P(\hat{\boldsymbol{\beta}}) < P(\boldsymbol{\beta}^{t})$.
\item If Step 2 succeeds, then set $\hat{\boldsymbol{\beta}}^{t} \gets \hat{\boldsymbol{\beta}}$. Otherwise, if Step 2 fails, \textbf{terminate}.
\end{enumerate}
\end{itemize}
\end{tcolorbox}
Theorem~\ref{theorem:localsearch} shows that Algorithm 2 terminates in a finite number of iterations and provides a description of the resulting solution.
\begin{theorem} \label{theorem:localsearch}
Let $\{ \boldsymbol{\beta}^{t} \}$ be the sequence of iterates generated by Algorithm~2. Then, under Assumption~\ref{assumption:strongconvexity}, $\boldsymbol{\beta}^{t}$ converges in finitely many steps to a solution $\boldsymbol{\beta}^{*}$ (say). Let $S = \text{Supp}(\boldsymbol{\beta}^{*})$. Then, $\boldsymbol{\beta}^{*}$ satisfies the stationary conditions in~\eqref{eq:CWminima} (see Theorem~\ref{theorem:cdconvergence}). In addition, for every $S_1 \subset S$ and $S_2 \subset S^c$ with $|S_1| \leq m$, $|S_2| \leq m$, the solution $\boldsymbol{\beta}^{*}$ satisfies the following condition:
\begin{equation} \label{eq:inescapable}
P(\boldsymbol{\beta}^{*}) \leq \min_{\boldsymbol{\beta}} ~~ P(\boldsymbol{\beta}^{*} - U^{S_1} \boldsymbol{\beta}^{*} + U^{S_2} \boldsymbol{\beta}).
\end{equation}
\end{theorem}
Algorithm 2 improves the solutions obtained from Algorithm~1. This observation along with the discussion in Section~\ref{section:cd}, establishes the hierarchy of local minima in \eqref{eq:hierarchy}. The choice of $m$ in Algorithm 2 controls the quality of the local minima returned---larger values of $m$ will lead to solutions with better objectives. For a sufficiently large value of $m$, Algorithm~2 will deliver a global minimizer of Problem \eqref{eq:lagrangian}. The computation time of solving Problem~\eqref{eq:swaps} increases with $m$. We have observed empirically that small choices of $m$ (e.g., $m=1$) can
lead to a global minimizer of Problem~\eqref{eq:lagrangian} even for some challenging high-dimensional problems where the features are highly correlated (see the experiments in Section \ref{sec:experiments}).
Problem~\eqref{eq:swaps} can be formulated using MIP---this is discussed in Section~\ref{sec:MIO}, where we also present methods to solve it for large problems. The MIP-based framework allows us to (i) obtain good feasible solutions (if they are available) or (ii) certify (via dual bounds) that the current solution cannot be improved by swaps corresponding to size $m$. Note that due to the restricted search space, solving~\eqref{eq:swaps} for small values of $m$ can be much easier than solving Problem~\eqref{eq:lagrangian} using MIP solvers. A solution $\boldsymbol\beta^*$ obtained from Algorithm~2 has an appealing interpretation: being a fixed point of~\eqref{eq:inescapable}, $\boldsymbol\beta^*$ cannot be improved by locally perturbing its support. This serves as a certificate describing the quality of the current (locally optimal) solution $\boldsymbol\beta^*$.
In what follows, we present a fast method to obtain a good solution to Problem~\eqref{eq:swaps} for the special case of $m=1$.
\textbf{Speeding up Combinatorial Search when $m=1$:}
For Problem~\eqref{eq:swaps}, we first check if removing variable $i$, without adding any new variables to the support, improves the objective, i.e., $P(\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t}) < P(\boldsymbol{\beta}^{t})$. If the latter inequality holds, we declare a success in Step~2 (Algorithm~2). Otherwise, we find a feasible solution to Problem~\eqref{eq:swaps} by solving
\begin{equation} \label{eq:contmin}
\min_{\beta_j}~~ G(\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t} + \boldsymbol{e}_j \beta_j )
\end{equation}
for every pair $S_{1}=\{i\}$ and $S_{2}=\{j\}$. Performing the full minimization in~\eqref{eq:contmin} for every $(i,j)$ can be expensive---so we propose an approximate scheme that works well in our numerical experiments.
We perform a \emph{few} proximal gradient updates by applying the thresholding operator defined in \eqref{eq:thresholding} with the choice $\hat{L}_j = L_j$, $\lambda_0 = 0$, and using $\beta_j=0$ as an initial value, to approximately minimize~\eqref{eq:contmin}---this helps us identify if the inclusion of coordinate $j$ leads to a success in Step~2.
The method outlined above requires approximately solving Problem~\eqref{eq:contmin} for $|S|(p-|S|)$ many $(i,j)$-pairs (in the worst case). This cost can be further reduced if we select $j$ from a small subset of coordinates outside the current support $S$, i.e.,
$j \in J \subset S^c$, where $|J| < p-|S|$.
We choose $J$ so that it corresponds to the $q$ largest (absolute) values of the gradient $|\nabla_{j} g(\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t})|$, $j \in S^c$. As explained in Section~\ref{sec:choice-J}, this choice of $J$ ensures that we search among coordinates $j \in S^c$ that lead to the maximal decrease in the current objective with one step of a proximal coordinate update initialized from $\beta_{j}=0$.
We summarize the proposed method in Algorithm 3.
\begin{figure}[tb]
\begin{tcolorbox}[colback=white]
\begin{itemize}
\item[] \textbf{Algorithm~3: Fast Heuristic for Local Search when $m=1$}
\item \textbf{Input}: Restricted set size $q \in \mathbb{Z}_{+}$ such that $q \leq p - |S|$.
\item \textbf{For every $i \in S$}:
\begin{enumerate}
\item If $P(\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t}) < P(\boldsymbol{\beta}^{t})$ then \textbf{terminate} and return $\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t}$.
\item Compute $\nabla_{S^c} g(\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t})$ and let $J$ be the set of indices of the $q$ components with the largest values of $|\nabla_j g(\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t})|$ for $j \in S^c$.
\item For every $j \in J$:
Solve $\hat{\beta}_j \in \argmin_{\beta_{j} \in \mathbb{R}} G(\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t} + \boldsymbol{e}_j \beta_j)$ approximately by iteratively applying the thresholding operator in~\eqref{eq:thresholding} (with $\hat{L}_j = L_j$ and $\lambda_0 = 0$).
If $P(\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t} + \boldsymbol{e}_j \hat{\beta}_j) < P(\boldsymbol{\beta}^{t})$, \textbf{terminate} and return $\boldsymbol{\beta}^{t} - \boldsymbol{e}_i {\beta}_i^{t} + \boldsymbol{e}_j \hat{\beta}_j$.
\end{enumerate}
\end{itemize}
\end{tcolorbox}
\end{figure}
The cost of applying the thresholding operator in step 3 of Algorithm 3 is $\mathcal{O}(n)$.\footnote{Assuming that the derivative of $f(\cdot,v)$ w.r.t.\ the first argument can be computed in $\mathcal{O}(1)$, which is the case for the common loss functions we consider here.} For squared error loss, the cost can be improved to $\mathcal{O}(1)$ by reusing previously computed quantities from CD (see \citealt{fastbestsubset} for details). Our numerical experience suggests that using the above heuristic with values of $q \approx 0.05 \times p$ often leads to the same solutions returned by full exhaustive search. Moreover, when some of the features are highly correlated, Algorithm 2 with the heuristic above (or solving Problem~\eqref{eq:swaps} exactly) performs better in terms of variable selection and prediction performance compared to state-of-the-art sparse learning algorithms (see Section~\ref{exp:varysamples} for numerical results).
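To make the steps of Algorithm 3 concrete, the following is a minimal numpy sketch of one pass of the heuristic, specialized to squared error loss $g(\boldsymbol\beta)=\frac{1}{2}\|\mathbf{y}-\mathbf{X}\boldsymbol\beta\|_2^2$ with $\lambda_1=\lambda_2=0$ (so the thresholding update reduces to a gradient step on the selected coordinate). All function and variable names are ours and this is not part of the \texttt{L0Learn} implementation.

```python
import numpy as np

def swap_search(X, y, beta, lam0, q=5, n_prox=5):
    """One pass of the fast swap heuristic (Algorithm 3) for squared error
    loss g(b) = 0.5*||y - X b||^2 and penalty lam0*||b||_0 (lam1 = lam2 = 0).
    Returns an improved coefficient vector, or `beta` unchanged if no
    improving deletion/swap is found."""
    p = X.shape[1]
    L = (X ** 2).sum(axis=0)                      # coordinate Lipschitz constants

    def obj(b):
        return 0.5 * np.sum((y - X @ b) ** 2) + lam0 * np.count_nonzero(b)

    support = np.flatnonzero(beta)
    for i in support:
        b_drop = beta.copy()
        b_drop[i] = 0.0
        if obj(b_drop) < obj(beta):               # Step 1: deleting i already helps
            return b_drop
        r = y - X @ b_drop                        # residual without coordinate i
        grad = -X.T @ r                           # gradient of g at b_drop
        outside = np.setdiff1d(np.arange(p), support)
        J = outside[np.argsort(-np.abs(grad[outside]))[:q]]  # Step 2: restricted set
        for j in J:                               # Step 3: try swapping i -> j
            bj = 0.0
            for _ in range(n_prox):               # a few proximal (gradient) updates
                bj -= (X[:, j] @ (X[:, j] * bj - r)) / L[j]
            b_swap = b_drop.copy()
            b_swap[j] = bj
            if obj(b_swap) < obj(beta):
                return b_swap
    return beta
```

On a toy instance where the current solution uses a spurious feature, one pass swaps it for the signal feature as soon as the restricted set $J$ contains the latter.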
\subsection{Solutions for the Cardinality Constrained Formulation} \label{sec:algo-IHT}
Algorithms~1 and 2 deliver good solutions for the $\ell_0$-penalized problem~\eqref{eq:lagrangian}.
We now discuss how they can be used to obtain solutions to the cardinality constrained version:
\begin{align} \label{eq:constrainedopt}
\min_{\boldsymbol{\beta} \in \mathbb{R}^p} G(\boldsymbol{\beta})~~~~\text{s.t.}~~~~\| \boldsymbol{\beta} \|_0 \leq k,
\end{align}
where $k$ controls the support size of $\boldsymbol\beta$. While the unconstrained formulation~\eqref{eq:lagrangian} is amenable to fast CD-based algorithms, some support sizes are often skipped as $\lambda_0$ is varied. For example, if we decrease $\lambda_0$ to $\lambda'_0$, the support size of the new solution can differ by more than one, even if $\lambda'_0$ is taken to be arbitrarily close to $\lambda_0$. On the other hand, formulation~\eqref{eq:constrainedopt} can typically return a solution with any desired support size,\footnote{Exceptions can happen if for a subset $T \subset [p]$ of size $k$, the minimum of $\boldsymbol\beta_{T}
\mapsto G(\boldsymbol{\beta}_{T})$ has some coordinates in $\boldsymbol\beta_{T}$ exactly set to zero. \label{footnote:degenerate}} and it may be preferable over the unconstrained formulation in some applications due to its explicit control of the support size.
Suppose we wish to obtain a solution to Problem~\eqref{eq:constrainedopt} for a support size $k$ that is not available from a sequence of solutions from~\eqref{eq:lagrangian}.
We propose to apply the IHT algorithm on Problem~\eqref{eq:constrainedopt}, initialized by a solution from Algorithm 1 or 2. This leads to the following update sequence
\begin{equation} \label{eq:ihtcard}
\boldsymbol{\beta}^{l+1} \gets \argmin_{ \|\boldsymbol{\beta} \|_0 \leq k } \left\{\frac{1}{2 \tau} \left\| \boldsymbol{\beta} - \left(\boldsymbol{\beta}^{l} - \tau \nabla g(\boldsymbol{\beta}^{l}) \right) \right\|_2^2 + \lambda_1 \| \boldsymbol{\beta} \|_1 + \lambda_2 \| \boldsymbol{\beta} \|_2^2 \right\},
\end{equation}
for $l \geq 0$, with initial solution $\boldsymbol\beta^0$ (available from the $\ell_0$-penalized formulation) and step size $\tau >0$ (e.g., see Theorem~\ref{theorem:ihtconvergence}). We note that update \eqref{eq:ihtcard} typically returns a solution of support size $k$, but there can be degenerate cases where a support size of $k$ is not attainable (see footnote \ref{footnote:degenerate}).
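Because the penalty in~\eqref{eq:ihtcard} is separable, one update can be computed in closed form: form the gradient step $\mathbf{c} = \boldsymbol\beta^{l} - \tau \nabla g(\boldsymbol\beta^{l})$, solve each one-dimensional penalized problem, and keep the $k$ coordinates whose one-dimensional solutions decrease the objective the most. A minimal numpy sketch of one such update (names are ours):

```python
import numpy as np

def iht_step(beta, grad, k, tau, lam1=0.0, lam2=0.0):
    """One IHT update: solve
        min (1/(2 tau))||b - c||_2^2 + lam1||b||_1 + lam2||b||_2^2
        s.t. ||b||_0 <= k,
    where c = beta - tau*grad. The objective is separable, so each
    coordinate's 1-D minimizer is a soft-threshold followed by a shrink,
    and the cardinality constraint keeps the k coordinates with the
    largest per-coordinate objective decrease relative to zero."""
    c = beta - tau * grad
    # unconstrained 1-D minimizer per coordinate
    b = np.sign(c) * np.maximum(np.abs(c) - tau * lam1, 0.0) / (1.0 + 2.0 * tau * lam2)
    # saving of using b_i instead of 0 in coordinate i
    gain = (c ** 2) / (2 * tau) - ((b - c) ** 2 / (2 * tau)
                                   + lam1 * np.abs(b) + lam2 * b ** 2)
    keep = np.argsort(-gain)[:k]
    out = np.zeros_like(beta)
    out[keep] = b[keep]
    return out
```

For $\lambda_1=\lambda_2=0$ this reduces to the classical hard-thresholding step that retains the $k$ largest entries of $\mathbf{c}$ in absolute value.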
\citet{BeckSparsityConstrained} has shown that greedy CD-like algorithms perform better than IHT for a class of problems similar to \eqref{eq:constrainedopt} (without $\ell_1$-regularization). However, it is computationally expensive to apply greedy CD methods to~\eqref{eq:constrainedopt} for the problem sizes we study here. Note that since we initialize IHT with a solution from Problem~\eqref{eq:lagrangian}, it converges rapidly to a high-quality feasible solution for Problem~\eqref{eq:constrainedopt}.
Theorem~\ref{theorem:ihtconvergence}, which follows from~\citet{lowsnr}, establishes that IHT is guaranteed to converge, and provides a characterization of its fixed points.
\begin{theorem} \label{theorem:ihtconvergence}
Let $\{ \boldsymbol{\beta}^{l} \}$ be a sequence generated by the IHT algorithm
updates~\eqref{eq:ihtcard}. Let $L$ be the Lipschitz constant of $\nabla g(\boldsymbol{\beta})$ and let $\hat{L}> L$. Then, the sequence $\{ \boldsymbol{\beta}^{l} \}$ converges for a step size $\tau ={1}/{\hat{L}}$. Moreover, $\boldsymbol{\beta}^{*}$ with support $S$ is a fixed point of~\eqref{eq:ihtcard} iff $\| \boldsymbol{\beta}^{*} \|_0 \leq k$,
\begin{align*}
\boldsymbol{\beta}^{*}_S \in \argmin_{\boldsymbol{\beta}_S}~G(\boldsymbol{\beta}_S) \label{eq:ihtstationary} \quad \text{ and }
\quad |\nabla_{i} g(\boldsymbol{\beta}^{*})| \leq \delta_{(k)} \quad \text{ for } i \in S^c,
\end{align*}
where $\delta_{j}=|\hat{L} \beta^{*}_{j} - {\nabla_{j} g(\boldsymbol{\beta}^{*})}|$ for $j \in [p]$ and
$\delta_{(k)}$ is the $k$th largest value of $\{\delta_{j}\}_{1}^{p}$.
\end{theorem}
Lemma~\ref{lemma:comparison} shows that if a solution obtained by Algorithm 1 or 2 has a support size $k$, then it is a fixed point for the IHT update~\eqref{eq:ihtcard}.
\begin{lemma} \label{lemma:comparison}
Let $\gamma>1$ be a constant and $\boldsymbol{\beta}^{*}$ with support size $k$ be a solution for
Problem~\eqref{eq:lagrangian} obtained by using Algorithm 1 or 2 with $\hat{L}_i = \gamma L_i$. Set $\hat{L} = \gamma L$ and $\tau = {1}/{\hat{L}}$ in \eqref{eq:ihtcard}. Then, $\boldsymbol{\beta}^{*}$ is a fixed point of the update~\eqref{eq:ihtcard}.
\end{lemma}
The converse of Lemma \ref{lemma:comparison} is not true, i.e., a fixed point of update~\eqref{eq:ihtcard} may not be a fixed point for Algorithm 1 or 2---see our earlier discussion around hierarchy~\eqref{eq:hierarchy}.\footnote{Assuming that we obtain the same support sizes for both the constrained and unconstrained formulations.} Thus, we can generally expect the solutions returned by Algorithm 1 or 2 to be of higher quality than those returned by IHT.
The following summarizes our procedure to obtain a path of solutions for Problem~\eqref{eq:constrainedopt} (here, we assume that $(\lambda_1, \lambda_2)$ are fixed and $\lambda_0$ varies):
\begin{enumerate}
\item Run Algorithm 1 or 2 for a sequence of $\lambda_0$ values to obtain a regularization path.
\item To obtain a solution to Problem~\eqref{eq:constrainedopt} with a support size (say $k$) that is not available in Step~1, we run the IHT updates~\eqref{eq:ihtcard}. The IHT updates are initialized with a solution from Step~1 having a support size smaller than $k$.
\end{enumerate}
As the above procedure uses high-quality solutions obtained by Algorithm 1 or 2 as advanced initializations for IHT, we expect to obtain notable performance benefits as compared to using IHT alone for generating the regularization path. Also, note that if a support size $k$ is available from Step 1, then there is no need to run IHT (see Lemma~\ref{lemma:comparison}).
\subsection{L0Learn: A Fast Toolkit for $\ell_0$-regularized Learning} \label{section:L0Learn}
We implemented the algorithms discussed above in $\texttt{L0Learn}$: a fast sparse learning toolkit written in C++ along with an R interface. We currently support the logistic and squared-hinge loss functions,\footnote{This builds upon our earlier functionality for least squares loss~\citep{fastbestsubset}.} but the toolkit can be expanded and we intend to incorporate additional loss functions in the future. Following~\citet{glmnet,fastbestsubset}, we used several computational tricks to speed up the algorithms and improve the solution quality---these include: warm starts, active sets, correlation screening, a (partially) greedy heuristic to cycle through the coordinates, and efficient methods for updating the gradients by exploiting sparsity.
Note that two close values of $\lambda_0$ can lead to the same solution of Problem~\eqref{eq:lagrangian}.
To avoid this issue, we dynamically select a sequence of $\lambda_0$-values---this is an extension of an idea appearing in~\citet{fastbestsubset} for the least squares loss function.
\section{Mixed Integer Programming Algorithms}
\label{sec:MIO}
We present MIP formulations and a new scalable algorithm, the Integrality Generation Algorithm (IGA), for solving Problem \eqref{eq:lagrangian} to optimality. Compared to off-the-shelf MIP solvers (e.g., the commercial solver Gurobi), IGA leads to certifiably optimal solutions in significantly reduced computation times. IGA also applies to the local search problem of Section~\ref{section:localsearch}. We remind the reader that while Algorithm~1 (and Algorithm~2 for small values of $m$) delivers good feasible solutions quickly, it does not provide certificates of optimality (via dual bounds). The MIP framework can be used to certify the quality of solutions obtained by Algorithms 1 or 2 and potentially improve upon them.
\subsection{MIP Formulations}\label{sec: basic-MIO}
Problem \eqref{eq:lagrangian} admits the following MIP formulation:
\begin{equation} \label{eq:MIP}
\min\limits_{ \boldsymbol{\beta}, \mathbf{z}} ~~ G(\boldsymbol{\beta}) + \lambda_0 \sum\limits_{i=1}^{p} z_i ~~~~ \text{s.t.} ~~~~ |\beta_i| \leq \mathcal{M} z_i,~ z_i \in \left\{0,1\right\}, ~ i \in [p],
\end{equation}
where $\mathcal{M}$ is a constant chosen large enough so that some optimal solution $\boldsymbol{\beta}^{*}$ to \eqref{eq:mainlagrangian} satisfies $\| \boldsymbol{\beta}^{*} \|_{\infty} \leq \mathcal{M}$. Algorithm~1 and Algorithm~2 (with $m=1$) can be used to obtain good estimates of $\mathcal{M}$---see \citet{bestsubset,lowsnr} for possible alternatives on choosing $\mathcal{M}$.
In~\eqref{eq:MIP}, the binary variable $z_i$ controls whether $\beta_i$ is set to zero or not.
If $z_i = 0$ then $\beta_i =0$, while if $z_i=1$ then $\beta_{i} \in [-{\mathcal M},{\mathcal M}]$ is allowed to be `free'. For the hinge loss with $q=1$, the problem is a Mixed Integer Linear Program (MILP).
For $q=2$ and the hinge loss function,~\eqref{eq:MIP} becomes a Mixed Integer Quadratic Program (MIQP). Similarly, the squared hinge loss leads to a MIQP (for both $q\in \{1, 2\}$).
MILPs and MIQPs can be solved by state-of-the-art MIP solvers (e.g., Gurobi and CPLEX) for small-to-moderate sized instances. For other nonlinear convex loss functions such as logistic loss, two approaches are possible: (i)~nonlinear branch and bound \citep{belotti2013mixed} or (ii)~outer-approximation in which a sequence of MILPs is solved until convergence (for example, see \citealt{BertsimasSparseClassification,bertsimas2017sparse,sato2016feature}).\footnote{In this method, one obtains a convex piece-wise linear lower bound to a nonlinear convex problem. In other words, this leads to a polyhedral outer approximation (aka outer approximation) to the epigraph of the nonlinear convex function.} We refer the reader to~\citet{lee2011mixed} for a review of related approaches.
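As an illustration, Problem~\eqref{eq:MIP} with the hinge loss and $\lambda_1=\lambda_2=0$ can be written as a MILP by introducing slack variables $\xi_i \geq \max(0,\, 1 - y_i \langle \mathbf{x}_i, \boldsymbol\beta\rangle)$. The sketch below assembles this big-$\mathcal M$ MILP and hands it to SciPy's HiGHS-based \texttt{scipy.optimize.milp}; it is meant only to convey the formulation (it is independent of the solver used in our experiments, and all names are ours).

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def sparse_hinge_milp(X, y, lam0, M):
    """Big-M MILP for the lam0-penalized hinge-loss problem (eq:MIP with
    lam1 = lam2 = 0). Decision vector: [beta (p), z (p, binary), xi (n)].
        minimize (1/n) sum_i xi_i + lam0 sum_j z_j
        s.t.     y_i <x_i, beta> + xi_i >= 1,   xi_i >= 0
                 |beta_j| <= M z_j,             z_j in {0,1}."""
    n, p = X.shape
    cost = np.concatenate([np.zeros(p), lam0 * np.ones(p), np.ones(n) / n])
    # hinge constraints: y_i x_i^T beta + xi_i >= 1
    A1 = np.hstack([y[:, None] * X, np.zeros((n, p)), np.eye(n)])
    # big-M link: beta_j - M z_j <= 0 and -beta_j - M z_j <= 0
    A2 = np.hstack([np.eye(p), -M * np.eye(p), np.zeros((p, n))])
    A3 = np.hstack([-np.eye(p), -M * np.eye(p), np.zeros((p, n))])
    cons = [LinearConstraint(A1, 1.0, np.inf),
            LinearConstraint(A2, -np.inf, 0.0),
            LinearConstraint(A3, -np.inf, 0.0)]
    lb = np.concatenate([-M * np.ones(p), np.zeros(p), np.zeros(n)])
    ub = np.concatenate([M * np.ones(p), np.ones(p), np.full(n, np.inf)])
    integrality = np.concatenate([np.zeros(p), np.ones(p), np.zeros(n)])
    res = milp(cost, constraints=cons, integrality=integrality,
               bounds=Bounds(lb, ub))
    beta, z = res.x[:p], np.round(res.x[p:2 * p]).astype(int)
    return beta, z, res.fun
```

On a small separable instance where only the first feature carries the labels, the MILP selects exactly that feature and incurs only the $\lambda_0$ cost.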
The local combinatorial search problem in \eqref{eq:swaps} can be cast as a
variant of~\eqref{eq:MIP} with additional constraints; and is given by the following MIP:
\begin{subequations} \label{eq:localsearcMIP}
\begin{align}
\min\limits_{ \boldsymbol{\beta}, \mathbf{z}, \boldsymbol{\theta}} \quad & G(\boldsymbol{\theta}) + \lambda_0 \sum\limits_{i=1}^{p} z_i \\
\text{s.t.}~~& \boldsymbol{\theta} = \boldsymbol{\beta}^{t} - \sum_{i \in S} \boldsymbol{e}_i \beta^{t}_i (1 - z_i) + \sum_{i \in S^c} \boldsymbol{e}_i \beta_i \label{eq:dummyvar}\\
& |\beta_i| \leq \mathcal{M} z_i, ~~ i \in S^c \\
& \sum_{i \in S} z_i \geq |S| - m \label{eq:cut1}\\
& \sum_{i \in S^c} z_i \leq m \label{eq:cut2} \\
& z_i \in \left\{0,1\right\}, ~~ i \in [p]
\end{align}
\end{subequations}
where $S = \text{Supp}(\boldsymbol{\beta}^{t})$. The binary variables $z_i, i \in [p]$ perform the role of \emph{selecting} the subsets $S_1 \subset S$ and $S_2 \subset S^c$ (described in \eqref{eq:swaps}). Particularly, for $i \in S$, $z_i = 0$ means that $i \in S_1$---i.e., variable $i$ should be removed from the current support.
Similarly, for $i \in S^c$, $z_i = 1$ means that $i \in S_2$---i.e., variable $i$ should be added to the support in~\eqref{eq:localsearcMIP}.
Constraints~\eqref{eq:cut1} and~\eqref{eq:cut2} enforce $|S_1| \leq m$ and $|S_2| \leq m$, respectively. The constraint in~\eqref{eq:dummyvar} forces the variable $\boldsymbol{\theta}$ to be equal to $\boldsymbol{\beta}^{t} - U^{S_1} \boldsymbol{\beta}^{t} + U^{S_2} \boldsymbol{\beta}$.
Note that~\eqref{eq:localsearcMIP} has a smaller search space compared to the full formulation in \eqref{eq:MIP}---there are additional constraints in \eqref{eq:cut1} and \eqref{eq:cut2}, and the number of free continuous variables is $|S^c|$, as opposed to $p$ in \eqref{eq:MIP}. This reduced search space usually leads to notable reductions in the runtime compared to solving Problem \eqref{eq:MIP}.
\subsection{Scaling up the MIP via the Integrality Generation Algorithm (IGA)}\label{sec:integrality-gen}
While state-of-the-art MIP solvers and outer-approximation based MILP approaches~\citep{BertsimasSparseClassification,sato2016feature} lead to impressive improvements over earlier MIP-based approaches, they often have long run times when solving high-dimensional classification problems with large $p$ and small $n$ (e.g., $p=50,000$ and $n=1000$). Our proposed algorithm, IGA, can solve~\eqref{eq:MIP} to \emph{global} optimality, for high-dimensional instances that appear to be beyond the capabilities of current MIP-based approaches.
Loosely speaking, IGA solves a sequence of MIP-based relaxations or subproblems of Problem~\eqref{eq:MIP}; and exits upon obtaining a global optimality certificate for~\eqref{eq:MIP}.
The aforementioned MIP subproblems are obtained by relaxing a subset of the binary variables $\{z_{i}\}_{1}^{p}$ to lie within $[0,1]$, while the remaining variables are retained as binary. Upon solving the relaxed problem and examining the integrality of the continuous $z_i$'s, we create another (tighter) relaxation by allowing more variables to be binary---we continue in this fashion till convergence. The algorithm is formally described below.
The first step in our algorithm is to obtain a good upper bound for Problem~\eqref{eq:MIP}---this can be obtained by Algorithm 1 or 2. Let $\mathcal{I}$ denote the corresponding support of this solution.
We then consider a relaxation of Problem \eqref{eq:MIP} by allowing
the binary variables in $\mathcal I^c$ to be continuous:
\begin{subequations} \label{eq:PMIO}
\begin{align}
\min\limits_{\boldsymbol{\beta}, \mathbf{z}} \quad & G(\boldsymbol{\beta}) + \lambda_0 \sum\limits_{i=1}^{p} z_i \\
\text{s.t.}~~~& |\beta_i| \leq \mathcal{M} z_i, ~~ i \in [p] \\
& z_i \in [0,1], ~~ i \in \mathcal{I}^c \\
& z_i \in \left\{0,1\right\}, ~~ i \in \mathcal{I}.
\end{align}
\end{subequations}
The relaxation~\eqref{eq:PMIO} is a MIP (and thus nonconvex). The optimal objective of Problem~\eqref{eq:PMIO} is a lower bound to Problem~\eqref{eq:MIP}. In formulation~\eqref{eq:PMIO}, we place integrality constraints on $z_i, i \in \mathcal{I}$---all remaining variables $z_{i}$, $i \in {\mathcal I}^c$ are continuous. Let $\boldsymbol{\beta}^{u}, \mathbf{z}^{u}$ be the solution obtained from the $u$-th iteration of the algorithm. Then, in iteration $(u+1)$, we set $\mathcal{I} \gets \mathcal{I} \cup \{ i \ | \ z^{u}_i \neq 0, i \in {\mathcal I}^c \}$ and solve Problem~\eqref{eq:PMIO} (with warm-starting enabled). If at some iteration $u$, the vector $\mathbf{z}^{u}$ is integral, then solution $\boldsymbol{\beta}^{u}, \mathbf{z}^{u}$ must be optimal for Problem \eqref{eq:MIP} and the algorithm terminates. We note that along the iterations, we obtain tighter lower bounds on the optimal objective of Problem~\eqref{eq:MIP}. Depending on the available computational budget, we can decide to terminate the algorithm at an early stage with a corresponding lower bound. The algorithm is summarized below:
\begin{tcolorbox}[colback=white]
\centering
\textbf{Algorithm 4: Integrality Generation Algorithm (IGA)}
\begin{itemize}
\item Initialize $\mathcal{I}$ to the support of a solution obtained by Algorithm 1 or 2.
\item \textbf{For $u=1,2,\dots$} perform the following steps till convergence: \begin{enumerate}
\item[1.] Solve the relaxed MIP~\eqref{eq:PMIO} to obtain a solution $\boldsymbol{\beta}^{u}, \mathbf{z}^{u}$.
\item[2.] Update $\mathcal{I} \gets \mathcal{I} \cup \{ i \ | \ z^{u}_{i} \neq 0, i \in {\mathcal I}^c \}$.
\end{enumerate}
\end{itemize}
\end{tcolorbox}
As we demonstrate in Section~\ref{sec:experiments}, Algorithm 4 can lead to significant speed-ups: it reduces the time to solve several sparse classification instances from the order of \emph{hours} to \emph{seconds}. This allows us to solve instances with $p \approx 50,000$ and $n \approx 1000$ within reasonable computation times. These instances are much larger than what has been reported in the literature prior to this work (see for example,~\citealt{sato2016feature,BertsimasSparseClassification}).
A main reason behind the success of Algorithm~4 is that Problem~\eqref{eq:PMIO} leads to a solution $\mathbf{z}$ with very few nonzero coordinates. Hence, a small number of indices are added to $\mathcal{I}$ in step 2 of the algorithm. Since $\mathcal{I}$ is typically small, \eqref{eq:PMIO} can often be solved significantly faster than the full MIP in \eqref{eq:MIP}---the branch-and-bound algorithm has a smaller number of variables to branch on.
Lemma~\ref{lemma:sparse} provides some intuition on why the solutions of~\eqref{eq:PMIO} are sparse.
\begin{lemma} \label{lemma:sparse}
Problem \eqref{eq:PMIO} can be equivalently written as:
\begin{subequations} \label{eq:lassolike}
\begin{align}
\min\limits_{ \boldsymbol{\beta}, \mathbf{z}_{\mathcal{I}}} \quad & G(\boldsymbol{\beta}) + \frac{\lambda_0}{\mathcal{M}} \sum_{i \in \mathcal{I}^c} |\beta_{i}| + \lambda_0 \sum\limits_{i \in \mathcal{I}} z_i \\
\text{s.t.}~~& |\beta_i| \leq \mathcal{M} z_i, ~~ i \in \mathcal{I}
\\ & |{\beta}_{i} | \leq \mathcal{M}, ~~~ i \in {\mathcal I}^c
\\ & z_i \in \left\{0,1\right\}, ~~ i \in \mathcal{I}.
\end{align}
\end{subequations}
\end{lemma}
We have observed empirically that in \eqref{eq:lassolike}, the $\ell_1$-regularization term $\sum_{i \in \mathcal{I}^c} |\beta_{i}|$ encourages sparse solutions, i.e., many of the components ${\beta}_i, i \in {\mathcal{I}^c}$ are set to zero. Consequently, the corresponding $z_i$'s in~\eqref{eq:PMIO} are mostly zero at an optimal solution. The sparsity level is controlled by the regularization parameter ${\lambda_0}/{\mathcal{M}}$---larger values will lead to more $z_i$'s being set to zero in \eqref{eq:PMIO}. Thus, we expect Algorithm~4 to work well when $\lambda_0$ is set to a sufficiently large value (to obtain sufficiently sparse solutions). Note that Problem \eqref{eq:lassolike} is different from the Lasso as it involves additional integrality constraints.
\textbf{Optimality Gap and Early Termination:}
Each iteration of Algorithm 4 provides an improved lower bound to Problem~\eqref{eq:MIP}. This lower bound, together with a good feasible solution (e.g., obtained from Algorithm~1 or 2), yields an optimality gap: given an upper bound $\text{UB}$ and a lower bound $\text{LB}$, the MIP optimality gap is defined as $(\text{UB} - \text{LB})/\text{LB}$. This gap serves as a certificate of (approximate) global optimality. In particular, an early termination of Algorithm~4 leads to a solution with an associated certificate of optimality.
\textbf{Choice of $\mathcal I$:} The performance of Algorithm~4 depends on the initial set $\mathcal{I}$. If the initial $\mathcal{I}$ is close to the support of an optimal solution to~\eqref{eq:MIP}, then our numerical experience suggests that Algorithm~4 can terminate within a few iterations. Moreover, our experiments (see Section \ref{exp:varysamples}) suggest that
Algorithm~2 can obtain an optimal or a near-optimal solution to Problem~\eqref{eq:MIP} quickly, leading to a high-quality initialization for $\mathcal{I}$.
In practice, if at iteration $u$ of Algorithm 4, the set $\{ i \ | \ z^{u}_{i} \neq 0 \}$ is large, then we add only a small subset of it to $\mathcal{I}$ (in our implementation, we choose the $10$ largest fractional $z_i$'s). Alternatively, while expanding $\mathcal I$, we can use a larger cutoff for the fractional $z_{i}$'s, i.e., we can take the indices $\{ i \ | \ z^{u}_{i} \geq \tau \}$
for some value of $\tau \in (0,1)$.
This usually helps in maintaining a small size in $\mathcal{I}$, which allows for solving the MIP subproblem \eqref{eq:PMIO} relatively quickly.
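For concreteness, the set-update bookkeeping in Step 2 (with the cap on the number of fractional indices mentioned above) can be sketched as follows; the relaxed MIP solve itself is delegated to whatever solver is available, so only the index-expansion logic is shown (names are ours):

```python
import numpy as np

def expand_integer_set(I, z, max_add=10, tol=1e-8):
    """Step 2 of Algorithm 4: grow the set of integrality constraints.
    Given the current index set `I` and the relaxed solution `z`, add the
    (at most `max_add`) indices outside `I` with the largest nonzero
    (possibly fractional) z-values. Returns the enlarged set and a flag
    indicating whether z is already integral (so IGA may terminate)."""
    z = np.asarray(z, dtype=float)
    integral = bool(np.all(np.minimum(z, 1.0 - z) <= tol))
    outside = np.array(sorted(set(range(len(z))) - set(I)))
    nz = outside[z[outside] > tol]            # nonzero relaxed z_i outside I
    new = nz[np.argsort(-z[nz])[:max_add]]    # largest fractional values first
    return sorted(set(I) | set(new.tolist())), integral
```

The IGA loop then alternates between solving the relaxation \eqref{eq:PMIO} over the current set $\mathcal I$ and calling this update, stopping as soon as the integrality flag is raised.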
\textbf{IGA for Local Combinatorial Search:} While the discussion above was centered around the full MIP~\eqref{eq:MIP}---the IGA framework (and in particular, Algorithm 4) extends, in principle, to the local combinatorial search problem in~\eqref{eq:localsearcMIP} for $m\geq 2$.
\section{Statistical Properties: Error Bounds}
\label{sec:error-bound}
We derive non-asymptotic upper bounds on the coefficient estimation error for a family of $\ell_0$-constrained classification estimators (this includes the loss functions discussed in Section~\ref{sec:supportedloss}, among others). For our analysis, we assume that $( \mathbf{x}_i,y_i)$, $i \in [n]$, are i.i.d. draws from an unknown distribution $\mathbb{P}$. Using the notation of Section \ref{sec:algorithm}, we consider a loss function $f$ and define its population risk $\mathcal{L}(\boldsymbol{\beta}) = \mathbb{E} \left( f \left( \langle \mathbf{x}, \boldsymbol{\beta} \rangle ; y \right) \right)$, where the expectation is w.r.t.~the (population) distribution~$\mathbb{P}$. We let $ \boldsymbol{\beta}^*$ denote a minimizer of the risk, that is:
\begin{equation} \label{def-beta0}
\boldsymbol{\beta}^* \in \argmin\limits_{ \boldsymbol{\beta} \in \mathbb{R}^{p}}~\mathcal{L}(\boldsymbol{\beta}):= \mathbb{E} \left( f \left( \langle \mathbf{x}, \boldsymbol{\beta} \rangle ; y \right) \right).
\end{equation}
In the rest of this section, we let $k = \| \boldsymbol{\beta}^*\|_0$ and $R=\| \boldsymbol{\beta}^*\|_2$---i.e., the number of nonzeros and the Euclidean norm (respectively) of $\boldsymbol\beta^*$.
We assume $R\ge 1$.
To estimate $\boldsymbol\beta^*$, we consider the following estimator:\footnote{We drop the dependence of $\hat{\boldsymbol\beta}$ on $R,k$ for notational convenience.}
\begin{equation} \label{learning-l0}
\hat{\boldsymbol\beta} ~~ \in~~ \argmin \limits_{ \substack{ \boldsymbol{\beta} \in \mathbb{R}^p \\ \| \boldsymbol{\beta} \|_0 \le k, \ \| \boldsymbol{\beta} \|_2 \le 2R } } \ \ \frac{1}{n} \sum_{i=1}^n f \left( \langle \mathbf{x}_i, \boldsymbol{\beta} \rangle ; y_i \right),
\end{equation}
which minimizes the empirical loss with a constraint on the number of nonzeros in $\boldsymbol\beta$ and a bound on the $\ell_2$-norm of $\boldsymbol\beta$. The $\ell_2$-norm constraint in~\eqref{learning-l0} makes $\boldsymbol\beta^*$ feasible for Problem~\eqref{learning-l0}; and ensures that $\hat{\boldsymbol\beta}$ lies in a bounded set (which is useful for the technical analysis).
Section \ref{sec: framework} presents the assumptions we need for our analysis---see the works of~\citet{L1-SVM}, \citet{Wainwright-logreg} and \citet{quantile-reg} for related assumptions in the context of $\ell_1$-based classification and quantile regression procedures. In Section~\ref{sec:theory-main-results}, we establish a high probability upper bound on $\| \hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^*\|_2^2$.
\subsection{Assumptions}\label{sec: framework}
We first present some assumptions for establishing the error bounds.
\textbf{Loss Function.}
We list our basic assumptions on the loss function.
\begin{asu} \label{asu1}
The function $t \mapsto f( t; y)$ is non-negative, convex, and Lipschitz continuous with constant $L$, that is $| f(t_1; y) - f(t_2; y) | \le L | t_1 -t_2 |, \ \forall t_1, t_2$.
\end{asu}
We let $\partial f(t;y)$ denote a subgradient of $t \mapsto f(t;y)$---i.e.,
$f(t_2; y) - f(t_1; y) \ge \partial f(t_1;y) (t_2 -t_1), \ \forall t_1, t_2$. Note that the hinge loss satisfies Assumption~\ref{asu1} with $L=1$; a subgradient is given by $\partial f(t;y) = -\mathbf{1} \left( 1 - y t \ge 0\right) y$. The logistic loss also satisfies Assumption~\ref{asu1} with $L=1$; its subgradient coincides with the gradient.
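For intuition, the Lipschitz claim for the hinge loss $f(t;y)=\max(0,\, 1-yt)$ can be verified directly. Since $u \mapsto \max(0,u)$ is $1$-Lipschitz and $y \in \{-1,1\}$,
\begin{align*}
| f(t_1;y) - f(t_2;y) |
= \bigl| \max(0,\, 1 - y t_1) - \max(0,\, 1 - y t_2) \bigr|
\le \bigl| (1 - y t_1) - (1 - y t_2) \bigr| = |y|\, |t_1 - t_2| = |t_1 - t_2|,
\end{align*}
so the Lipschitz property holds with $L=1$.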
\textbf{Differentiability of the Population Risk.}
The following assumption is on the uniqueness of $\boldsymbol{\beta}^*$ and differentiability of the population risk $\mathcal{L}$.
\begin{asu} \label{asu2}
Problem~\eqref{def-beta0} has a unique minimizer. The population risk $\boldsymbol\beta \mapsto {\mathcal L}(\boldsymbol\beta)$ is twice continuously differentiable, with gradient $\nabla \mathcal{L}(\boldsymbol\beta)$ and Hessian $\nabla^2 \mathcal{L}(\boldsymbol\beta)$. In particular, the following holds:
\begin{equation}\label{gradient-relation}
\nabla \mathcal{L}(\boldsymbol\beta) = \mathbb{E}\left( \partial f \left( \langle \mathbf{x}, \boldsymbol\beta \rangle ; y \right) \mathbf{x} \right).
\end{equation}
\end{asu}
When $f$ is the hinge loss, \citet{lemma2} discuss conditions under which Assumption \ref{asu2} holds. In particular, Assumption (A1) in~\citet{lemma2} requires that the conditional density functions of $\mathbf{x}$ for the two classes are continuous and have finite second moments. Under this assumption, the Hessian $\nabla^2 {\mathcal L}(\boldsymbol\beta)$ is well defined and continuous in $\boldsymbol\beta$. Under Assumption (A4) of~\citet{lemma2}, the Hessian is positive definite at an optimal solution; therefore, Problem~\eqref{def-beta0} has a unique solution. We refer the reader to~\citet{Wainwright-logreg,vdg_linear_models} for discussions pertaining to logistic regression.
\indent {\bf Restricted Eigenvalue Conditions.}
Assumption \ref{asu4} is a \textit{restricted eigenvalue condition} similar to that used in regression problems \citep{lasso-dantzig,stats-HDD}. For an integer $\ell>0$, we assume that the quadratic forms associated with the Hessian matrix $\nabla^2 {\mathcal L}(\boldsymbol{\beta}^*)$ and the covariance matrix $n^{-1}\mathbf{X}^T\mathbf{X}$ are respectively lower-bounded and upper-bounded on the set of $2\ell$-sparse vectors.
\begin{asu} \label{asu4} Let $\ell>0$ be an integer. Assumption \ref{asu4}$(\ell)$ is said to hold if there exist constants $\kappa(\ell), \lambda(\ell) >0 $ such that almost surely the following holds:
$$ \kappa(\ell) \le \inf \limits_{ \mathbf{z} \neq 0, \| \mathbf{z} \|_0 \le 2 \ell } \left\{ \frac{ \mathbf{z}^T \nabla^2\mathcal{L}(\boldsymbol{\beta}^*) \mathbf{z} }{ \|\mathbf{z}\|_2^2 } \right\}~~~~~~~\text{and}~~~~~\lambda(\ell) \ge \sup \limits_{ \mathbf{z} \neq 0, \| \mathbf{z} \|_0 \le 2\ell } \left\{ \frac{ \| \mathbf{X} \mathbf{z} \|_2^2 }{ n \|\mathbf{z}\|_2^2 } \right\}.
$$
\end{asu}
In the rest of this section, we consider Assumption \ref{asu4} with $\ell=k$.
Assumption (A4) in \citet{L1-SVM} for linear SVM is similar to our
Assumption~\ref{asu4}. For logistic regression, related assumptions appear in the literature, e.g., Assumptions A1 and A2 in~\citet{Wainwright-logreg} (in the form of a dependency and an incoherence condition on the population Fisher information matrix).
\textbf{Growth condition.}
As $\boldsymbol{\beta}^*$ minimizes the population risk, we have $\nabla \mathcal{L}(\boldsymbol{\beta}^*) = 0$. Under the above regularity assumptions and when Assumption \ref{asu4}$(k)$ is satisfied, the population risk is lower-bounded by a quadratic function in a neighborhood of $\boldsymbol{\beta}^*$. By continuity, we let $r(k)$ denote the maximal radius for which the following lower bound holds:
\begin{equation}\label{growth-cond-defn}
r(k) = \max \left\{ r>0 \ \Bigg\rvert \ \mathcal{L}(\boldsymbol{\beta}^* + \mathbf{z} ) \ge \mathcal{L}(\boldsymbol{\beta}^*) + \frac{1}{4} \kappa(k) \| \mathbf{z} \|_2^2~~~\forall \mathbf{z}~~~\text{s.t.}~~~\| \mathbf{z} \|_0 \le 2 k , \| \mathbf{z} \|_2 \le r \right\}.
\end{equation}
Below we make an assumption on the growth condition.
\begin{asu} \label{asu5}
Let $\delta \in (0,1/2)$. We say that Assumption \ref{asu5}($\delta$) holds if the parameters $n,p,k,R$ satisfy: $$\frac{24L}{\kappa(k)} \sqrt{\frac{\lambda(k)}{n} \left( k \log\left( Rp/k \right) + \log\left( 1/\delta \right) \right) } < r(k),$$
and the following holds: $({k}/{n}) \log(p/k) \le 1$ and $7n e \le 3L \sqrt{\lambda(k) } p \log \left( p/k \right)$.
\end{asu}
Assumption~\ref{asu5} is similar to the scaling conditions in \citet[Theorem~1]{Wainwright-logreg} for logistic regression. \citet{quantile-reg} also makes use of a growth condition in their analysis of $\ell_1$-sparse quantile regression problems.
Note that although Assumption~\ref{asu5} is not required to prove Theorem~\ref{restricted-strong-convexity}, we will need it to derive the error bound of Theorem~\ref{main-results}. We now proceed to derive an upper bound on coefficient estimation error.
\subsection{Main Result}\label{sec:theory-main-results}
We first show (in Theorem~\ref{restricted-strong-convexity}) that the loss function $f$ satisfies a form of restricted strong convexity~\citep{M-estimators} around $\boldsymbol\beta^*$, a minimizer of~\eqref{def-beta0}.
\begin{theorem} \label{restricted-strong-convexity}
Let $\mathbf{h} = \hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^*$, $\delta \in (0,1/2)$, and $\tau = 6L \sqrt{\frac{\lambda(k)}{n} \left( k\log\left( Rp/k \right) + \log\left( 1/\delta \right) \right) }$. If Assumptions \ref{asu1}, \ref{asu2}, \ref{asu4}($k$) and \ref{asu5}($\delta$) are satisfied, then with probability at least $1 - \delta$, the following holds:
\begin{equation}\label{conclude-theorem-restricted-strong-convexity}
\begin{aligned}
\frac{1}{n} \sum_{i=1}^n f \left( \langle \mathbf{x}_i, \boldsymbol{\beta}^* + \mathbf{h} \rangle ; y_i \right) - \frac{1}{n} \sum_{i=1}^n f \left( \langle \mathbf{x}_i, \boldsymbol{\beta}^* \rangle ; y_i \right) \\
~~~~ \ge \frac{1}{4} \kappa(k) \left\{ \|\mathbf{h}\|_2^2 \wedge r(k) \|\mathbf{h}\|_2 \right\} - \tau \| \mathbf{h} \|_2\vee \tau^2.
\end{aligned}
\end{equation}
\end{theorem}
Theorem~\ref{restricted-strong-convexity} is used to derive the error bound presented in Theorem~\ref{main-results}, which is the main result in this section.
\begin{theorem} \label{main-results}
Let $\delta \in \left(0, 1/2\right)$ and assume that Assumptions \ref{asu1}, \ref{asu2}, \ref{asu4}($k$) and \ref{asu5}($\delta$) hold. Then the estimator $\hat{\boldsymbol{\beta}}$ defined as a solution of Problem \eqref{learning-l0} satisfies with probability at least $1-\delta$:
\begin{equation}\label{error-bound-thm-eqn}
\| \hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^*\|_2^2 \lesssim L^2 \lambda(k)\widetilde{\kappa}^2 \left( \frac{k \log(Rp/k)}{n} + \frac{\log(1/ \delta)}{n} \right)
\end{equation}
where $\widetilde{\kappa} = \max \{1/\kappa(k), 1\}$.
\end{theorem}
In~\eqref{error-bound-thm-eqn}, the symbol ``$\lesssim$'' stands for ``$\leq$'' up to a universal constant. The proof of Theorem~\ref{main-results} is presented in Appendix \ref{sec: appendix_main-results}. The rate appearing in Theorem~\ref{main-results}, of the order of $(k/n) \log (p/k)$, is the best known rate for a (sparse) classifier, and it coincides with the optimal scaling in the case of regression. In comparison, \citet{L1-SVM} and \citet{Wainwright-logreg} derived a bound scaling as $(k/n) \log(p)$ for
$\ell_1$-regularized SVM (with hinge loss) and logistic regression, respectively. Note that the bound in Theorem~\ref{main-results} holds for any sufficiently small $\delta>0$. Consequently, by integration, we obtain the following result in expectation.
\begin{cor} \label{main-corollary}
Suppose Assumptions \ref{asu1}, \ref{asu2}, \ref{asu4}($k$) hold, and Assumption \ref{asu5}($\delta$) holds for all sufficiently small $\delta$. Then:
$$ \mathbb{E} \| \hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^* \|_2^2 \lesssim L^2 \lambda(k) \widetilde{\kappa}^{2} \frac{k \log(Rp/k)}{n}, $$
where $\widetilde{\kappa}$ is defined in Theorem~\ref{main-results}.
\end{cor}
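As a sketch of the integration step behind Corollary~\ref{main-corollary} (our reconstruction, not taken verbatim from the appendix; $T$ is a shorthand we introduce and constants are absorbed into ``$\lesssim$''): writing the tail bound of Theorem~\ref{main-results} with $T = C L^2 \lambda(k) \widetilde{\kappa}^2$ and $t_0 = \frac{T}{n}\, k \log(Rp/k)$, the high-probability statement rearranges to $\mathbb{P}( \| \hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^* \|_2^2 \ge t ) \le \exp(-\frac{n}{T}(t - t_0))$ for $t \ge t_0 + \frac{T}{n}\log(1/\delta_0)$, and then

```latex
% Integrating the tail bound (assuming delta_0 bounded away from 0
% and k log(Rp/k) >~ 1, so lower-order terms are absorbed):
\mathbb{E}\,\| \hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^* \|_2^2
 = \int_0^{\infty} \mathbb{P}\left( \| \hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^* \|_2^2 \ge t \right) \mathrm{d}t
 \le t_0 + \frac{T}{n}\log\frac{1}{\delta_0}
   + \int_0^{\infty} e^{-\frac{n}{T} s}\, \mathrm{d}s
 = t_0 + \frac{T}{n}\left( \log\frac{1}{\delta_0} + 1 \right)
 \lesssim \frac{T}{n}\, k \log\!\left(\frac{Rp}{k}\right),
```

which matches the rate in the corollary once $T$ is unpacked.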
\section{Experiments} \label{sec:experiments}
In this section, we compare the statistical and computational performance of our proposed algorithms versus the state of the art, on both synthetic and real data sets.
\subsection{Experimental Setup} \label{section:expsetup}
{\bf{Data Generation}.} For the synthetic data sets, we generate a multivariate Gaussian data matrix $\mathbf{X}_{n \times p} \sim \text{MVN}(\mathbf{0}, \boldsymbol\Sigma)$ and a sparse vector of coefficients $\boldsymbol{\beta}^{\dagger}$ with $k^{\dagger}$ nonzero entries, such that $\beta^{\dagger}_i=1$ for $k^{\dagger}$ equi-spaced indices $i \in [p]$. Every coordinate $y_i$ of the outcome vector $\mathbf{y} \in \{ -1, 1 \}^n$ is then sampled independently from a Bernoulli distribution with success probability:
$P(y_{i} = 1 | \mathbf{x}_{i} ) =(1 + \exp ( {-s \langle \boldsymbol{\beta}^{\dagger} , \mathbf{x}_{i} } \rangle ))^{-1},$
where $\mathbf{x}_i$ denotes the $i$-th row of $\mathbf{X}$, and $s$ is a parameter that controls the signal-to-noise ratio. Specifically, smaller values of $s$ increase the variance in the response ${y}$, and when $s \to \infty$ the generated data becomes linearly separable.
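As a minimal illustration (in Python/NumPy rather than the authors' R and MATLAB tooling; the function and variable names below are ours, not from the paper), the data-generating process above can be sketched as:

```python
import numpy as np

def generate_classification_data(n, p, k, s, rho, seed=0):
    """Sketch of the synthetic setup: X ~ MVN(0, Sigma) with
    Sigma_ij = rho^|i-j|, a k-sparse beta with equi-spaced ones, and
    y_i = +1 with probability 1/(1 + exp(-s <beta, x_i>)), else -1."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])  # AR-style covariance
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[np.linspace(0, p - 1, num=k, dtype=int)] = 1.0  # equi-spaced support
    prob_plus = 1.0 / (1.0 + np.exp(-s * (X @ beta)))    # P(y_i = +1 | x_i)
    y = np.where(rng.random(n) < prob_plus, 1, -1)
    return X, y, beta
```

Larger $s$ pushes `prob_plus` toward $0$ or $1$, so the classes become (nearly) linearly separable, matching the discussion of the signal parameter above.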
\textbf{Algorithms and Tuning.}
We compare our proposal, as implemented in our package \texttt{L0Learn}, with: (i) $\ell_1$-regularized logistic regression (\texttt{glmnet} package, \citealp{glmnet}), (ii) MCP-regularized logistic regression (\texttt{ncvreg} package, \citealp{ncvreg}), and (iii) two packages for sparsity-constrained minimization (based on hard thresholding): \texttt{GraSP} \citep{bahmani2013greedy} and \texttt{NHTP} \citep{zhou2019global}. For \texttt{GraSP} and \texttt{NHTP}, we use cardinality-constrained logistic regression with ridge regularization---that is, we optimize Problem \eqref{eq:constrainedopt} with $\lambda_1 = 0$. Tuning is done on a separate validation set under the fixed design setting; i.e., we use the same features used for training but a new outcome vector $\mathbf{y}'$ (independent of $\mathbf{y}$).
The tuning parameters are selected so as to minimize the loss function on the validation set (e.g., for regularized logistic regression, we minimize the unregularized negative log-likelihood).
We use $\ell_0$-$\ell_q$ as a shorthand to denote the penalty $\lambda_0 \| \boldsymbol\beta \|_0 + \lambda_q \| \boldsymbol\beta \|_q^q$ for $q \in \{1, 2\}$.
For all penalties that involve 2 tuning parameters---i.e., $\ell_0$-$\ell_q$ (for $q \in \{1, 2\}$), MCP, \texttt{GraSP}, and \texttt{NHTP}---we sweep the parameters over a two-dimensional grid. For our penalties, we choose $100$ $\lambda_0$ values as described in Section~\ref{section:L0Learn}. For \texttt{GraSP} and \texttt{NHTP}, we sweep the number of nonzeros between $1$ and $100$. For $\ell_0$-$\ell_1$, we choose a sequence of $10$ $\lambda_1$ values between $a$ and $b$, where $a$ corresponds to a zero solution and $b=10^{-4}a$. Similarly, for $\ell_0$-$\ell_2$, \texttt{GraSP}, and \texttt{NHTP}, we choose $10$ $\lambda_2$ values between $10^{-4}$ and $100$ for the experiment in Section~\ref{exp:varysamples}, and between $10^{-8}$ and $10^{-4}$ for that in Section~\ref{exp:largescale}. For MCP, the sequence of $100$ $\lambda$ values is set to the default values selected by \texttt{ncvreg}, and we vary the second parameter $\gamma$ over $10$ values between $1.5$ and $25$. For the $\ell_1$ penalty, the grid of 100 $\lambda$ values is set to the default sequence chosen by \texttt{glmnet}.
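The two-dimensional tuning grids described above can be sketched as follows (Python; geometric spacing of the $\lambda$ sequences is our assumption, as the text does not specify the spacing, and the helper names are ours):

```python
import numpy as np

def lambda0_path(lambda0_max, num=100, ratio=1e-3):
    """num lambda_0 values decreasing geometrically from lambda0_max
    down to ratio * lambda0_max."""
    return np.geomspace(lambda0_max, ratio * lambda0_max, num=num)

def two_param_grid(lambda0_max, l2_lo=1e-4, l2_hi=100.0, num_l2=10):
    """Cartesian product of the lambda_0 path with num_l2 lambda_2 values,
    mimicking the two-dimensional sweep described in the text."""
    l0_vals = lambda0_path(lambda0_max)
    l2_vals = np.geomspace(l2_lo, l2_hi, num=num_l2)
    return [(l0, l2) for l0 in l0_vals for l2 in l2_vals]
```

Each pair in the returned grid would then be fitted and scored on the validation set.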
\textbf{Performance Measures.} We use the following measures to evaluate the performance of an estimator $\hat{\boldsymbol{\beta}}$:
\begin{itemize}
\item \textbf{AUC}: The area under the curve of the ROC plot.
\item \textbf{Recovery F1 Score}: This is the F1 score for support recovery, i.e., it is the harmonic mean of precision and recall: F1 Score = $2 P R/(P + R)$, where $P$ is the precision given by $ |\text{Supp}(\hat{\boldsymbol{\beta}}) \cap \text{Supp}(\boldsymbol{\beta}^{\dagger})|/ | \text{Supp}(\hat{\boldsymbol{\beta}})|$, and $R$ is the recall given by $ |\text{Supp}(\hat{\boldsymbol{\beta}}) \cap \text{Supp}(\boldsymbol{\beta}^{\dagger})|/ | \text{Supp}(\boldsymbol{\beta}^{\dagger})|$ . An F1 Score of $1$ implies full support recovery; and a value of zero implies that the supports of the true and estimated coefficients have no overlap.
\item \textbf{Support Size}: The number of nonzeros in $\hat{\boldsymbol{\beta}}$.
\item \textbf{False Positives}: This is equal to $| \text{Supp}(\hat{\boldsymbol{\beta}}) \setminus \text{Supp}(\boldsymbol{\beta}^{\dagger})|$.
\end{itemize}
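These measures can be computed directly from the supports; a brief sketch (Python, with helper names that are ours):

```python
import numpy as np

def support_metrics(beta_hat, beta_true, tol=0.0):
    """Recovery F1 score, support size, and false positives,
    following the definitions in the text."""
    S_hat = set(np.flatnonzero(np.abs(beta_hat) > tol))
    S_true = set(np.flatnonzero(np.abs(beta_true) > tol))
    tp = len(S_hat & S_true)
    precision = tp / len(S_hat) if S_hat else 0.0
    recall = tp / len(S_true) if S_true else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return {"f1": f1,
            "support_size": len(S_hat),
            "false_positives": len(S_hat - S_true)}
```

An F1 score of $1$ then corresponds to full support recovery, and $0$ to disjoint supports, as stated above.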
\begin{figure}[tb]
\centering
{\small High Correlation Setting ($\Sigma_{ij} = 0.9^{|i-j|}$)}
\includegraphics[scale=0.36]{Experiments/E09-p1000-k25-s1-Logistic}
\caption{{\small{Performance for varying $n\in [100, 10^3]$ and $\Sigma_{ij} = 0.9^{|i-j|}, p=1000, k^{\dagger} = 25, s=1$. In this high-correlation setting, our proposed algorithm (using local search) for the $\ell_0$-$\ell_q$ (for both $q \in \{1, 2\}$) penalized estimators---denoted as L0L2 (CD w. Local Search) and L0L1 (CD w. Local Search) in the figure---seems to outperform state-of-the-art methods (MCP, $\ell_1$, \texttt{GraSP} and \texttt{NHTP}) in terms of both variable selection and prediction. The improvement in variable selection performance (higher F1 score and smaller support size) is more notable than that in AUC.
Local search with CD shows benefits compared to its variants that do not employ local search, the latter denoted by L0L1 (CD) and L0L2 (CD) in the figure.}}}
\label{fig:highcorr}
\end{figure}
\subsection{Performance for Varying Sample Sizes} \label{exp:varysamples}
In this experiment, we fix $p$ and vary the number of observations $n$ to study its effect on the performance of the different algorithms and penalties. We hypothesize that when the statistical setting is difficult (e.g., features are highly correlated and/or $n$ is small), good optimization algorithms for $\ell_0$-regularized problems lead to estimators that can significantly outperform estimators obtained from convex regularizers and common (heuristic) algorithms for nonconvex regularizers. To demonstrate our hypothesis, we perform experiments on the following data sets:
\begin{itemize}
\item \textbf{High Correlation}: $\Sigma_{ij} = 0.9^{|i-j|}, p=1000, k^{\dagger} = 25, s=1$
\item \textbf{Medium Correlation}: $\Sigma_{ij} = 0.5^{|i-j|}, p=1000, k^{\dagger} = 25, s=1$
\end{itemize}
\begin{figure}[tb]
\centering
{\small Medium Correlation Setting ($\Sigma_{ij} = 0.5^{|i-j|}$)}
\includegraphics[scale=0.36]{Experiments/E05-p1000-k25-s1-Logistic}
\caption{Performance for varying $n$ and $\Sigma_{ij} = 0.5^{|i-j|}, p=1000, k^{\dagger} = 25, s=1$. In contrast to Figure \ref{fig:highcorr}, this is a medium-correlation setting. Once again Algorithm~2 (CD with local search) for the $\ell_0$-$\ell_1$ and $\ell_0$-$\ell_2$ penalties seems to perform quite well in terms of variable selection and prediction performance, though its improvement over the other methods appears to be less prominent compared to the high-correlation setting in Figure~\ref{fig:highcorr}. In this example, local search does not seem to offer much improvement over pure CD (i.e., Algorithm~1).}
\label{fig:mediumcorr}
\end{figure}
For each of the above settings, we use the logistic loss function and
consider $20$ random repetitions. We report the averaged results for the high correlation setting in Figure \ref{fig:highcorr} and for the medium correlation setting in Figure \ref{fig:mediumcorr}.
Figure \ref{fig:highcorr} shows that in the high correlation setting, Algorithm~2 (with $m=1$) for the $\ell_0$-$\ell_2$ penalty and the $\ell_{0}$-$\ell_{1}$ penalty---denoted by L0L2 (CD w. Local Search) and L0L1 (CD w. Local Search) (respectively) in the figure---achieves the best support recovery and AUC across different sample sizes $n$.
We note that in this example, the best F1 score falls below $0.8$ for $n$ smaller than $600$---suggesting that none of the algorithms can do full support recovery.
For larger values of $n$, the difference between Algorithm~2 and the other methods becomes more pronounced in terms of F1 score, suggesting an important edge in variable selection performance. Moreover, our algorithms select the smallest support size (i.e., the most parsimonious model) for all $n$. In contrast, the $\ell_1$ penalty (i.e., L1) selects significantly larger support sizes (exceeding 100 in some cases) and suffers in terms of support recovery. It is also worth mentioning that the other $\ell_0$-based algorithms, i.e., Algorithm 1, \texttt{NHTP} and \texttt{GraSP}, outperform MCP and L1 in terms of support recovery (F1 score) and support sizes. The performance of \texttt{NHTP} and \texttt{GraSP} appears to be similar in terms of support sizes and F1 score, though \texttt{NHTP} appears to perform better than \texttt{GraSP} in terms of AUC.
In Figure~\ref{fig:mediumcorr}, for the medium correlation setting, we see that Algorithms 1 and 2 perform similarly: local search (with CD) performs similarly to CD (without local search)---Algorithm~1 can recover the correct support with around $500$ observations. In this case, the performance of MCP becomes similar to that of the $\ell_0$-$\ell_{q}$ penalized estimators in terms of AUC, though there are notable differences in terms of variable selection properties. Compared to Figure~\ref{fig:highcorr}, we observe that $\ell_1$ performs better in this example, but it still appears to be generally outperformed by the other algorithms on all measures. We also observe that \texttt{NHTP}'s performance is comparable to the best methods in terms of F1 score and AUC. However, it results in models that are denser than those of L0L1 and L0L2 (both Algorithms~1 and 2). That being said, we will see in Section~\ref{sec:timings} that in terms of running time, Algorithm~1 has a significant edge over \texttt{NHTP}. Furthermore, for larger problem instances (Section~\ref{exp:largescale}) the performance of \texttt{NHTP} suffers in terms of variable selection compared to the methods we propose in this paper. \texttt{NHTP} appears to perform better than \texttt{GraSP} across all metrics in this setting.
Figures~\ref{fig:highcorr} and~\ref{fig:mediumcorr} suggest that when the correlations are high, local search with $\ell_0$ can notably outperform competing methods (especially, in terms of variable selection). When the correlations are medium to low, the differences across the different nonconvex methods become less pronounced. The performance of nonconvex penalized estimators (especially, the $\ell_0$-based estimators) is generally better than $\ell_1$-based estimators in these examples.
\subsection{Performance on Larger Instances} \label{exp:largescale}
We now study the performance of the different algorithms
for some large values of $p$ under the following settings:
\begin{itemize}
\item \textbf{Setting 1}: $\boldsymbol\Sigma = \mathbf{I}, n=1000, p=50,000, s=1000$, and $k^{\dagger} = 30$
\item \textbf{Setting 2}: $\Sigma_{ij} = 0.3$ for $i \neq j$, $\Sigma_{ii} = 1$, $n=1000, p=10^5, s=1000$, and $k^{\dagger} = 20$.
\end{itemize}
Table~\ref{table:largeexp} reports the results available from Algorithms~1 and 2 ($m=1$) versus the other methods, for different loss functions.
\begin{table}[tb]
\centering
\hspace{3.5cm} Setting 1 \hspace{5.5cm} Setting 2 \\
\begin{tabular}{ccc}
\begin{tabular}{lcc|}
\toprule
{Penalty/Loss} & FP & $\| \hat{\boldsymbol{\beta}} \|_0$ \\
\midrule
$\ell_0$-$\ell_2$/Logistic (Algorithm 1) & $0.0 \pm 0.0$ & $ 30.0 \pm 0.0 $ \\
$\ell_0$-$\ell_1$/Logistic (Algorithm 1) & $0.0 \pm 0.0$ & $ 30.0 \pm 0.0 $ \\
$\ell_0$-$\ell_2$/Logistic (Algorithm 2) & $0.0 \pm 0.0$ & $ 30.0 \pm 0.0 $ \\
$\ell_0$-$\ell_1$/Logistic (Algorithm 2) & $0.0 \pm 0.0$ & $ 30.0 \pm 0.0 $ \\
$\ell_0$-$\ell_2$/Sq. Hinge (Algorithm 1) & $ 2.8 \pm 0.3 $ & $ 32.8 \pm 0.3 $ \\
$\ell_1$/Logistic & $ 617.2 \pm 8.3 $ & $ 647.2 \pm 8.3 $ \\
MCP/Logistic & $0.0 \pm 0.0$ & $ 30.0 \pm 0.0 $ \\
NHTP/Logistic & $ 44.7 \pm 5.6 $ & $ 74.7 \pm 5.6 $ \\
GraSP/Logistic & $ 50.5 \pm 5.6 $ & $ 80.5 \pm 5.6 $ \\
\bottomrule
\end{tabular}
\begin{tabular}{cc}
\toprule
FP & $\| \hat{\boldsymbol{\beta}} \|_0$ \\
\midrule
$ 21.6 \pm 0.9 $ & $ 26.2 \pm 0.5 $ \\
$ 11.2 \pm 0.7 $ & $ 14.8 \pm 0.3 $ \\
$11.5 \pm 0.4$ & $ 14.6 \pm 0.3 $ \\
$ 11.2 \pm 0.4$ & $ 14.5 \pm 0.3 $ \\
$ 23.9 \pm 0.7 $ & $ 27.0 \pm 0.4 $ \\
$ 242.2 \pm 4.8 $ & $ 256.9 \pm 5.3 $ \\
$ 80.1 \pm 10.9 $ & $ 91.5 \pm 12.1 $ \\
$ 92.5 \pm 0.6$ & $ 100.0 \pm 0.0 $ \\
$ 45.3 \pm 3.0 $ & $ 54.4 \pm 2.8 $ \\
\bottomrule
\end{tabular}
\end{tabular}
\caption{Variable selection performance for different penalty and loss combinations, under high-dimensional settings. FP refers to the number of false positives. We consider ten repetitions and report the averages (and standard errors) across the repetitions.}
\label{table:largeexp}
\end{table}
In Settings 1 and 2 above, all methods achieve an AUC of 1 (approximately), and the main differences across the methods lie in variable selection. For Setting 1, both Algorithms~1 and 2 applied to the logistic loss, as well as \texttt{ncvreg} for the MCP penalty (with logistic loss), correctly recover the support. In contrast, $\ell_1$ captures a large number of false positives, leading to large support sizes. Both \texttt{NHTP} and \texttt{GraSP} have a considerable number of false positives and result in large support sizes.
In Setting 2, none of the algorithms correctly recovers the support. This setting is more difficult than Setting 1, as it has higher correlation and a larger $p$. In Setting 2, we observe that CD with local search can have an edge over plain CD (see logistic loss with the $\ell_0$-$\ell_2$ penalty). In terms of few false positives and compactness of the model, CD with local search appears to be the winner, with Algorithm~1 being quite close. Both Algorithms~1 and 2 appear to work better than the other methods in this setting.
We note that in Setting 2, our proposed algorithms select supports with roughly 3 times fewer nonzeros than the MCP penalized problem, and 10 times fewer nonzeros than those delivered by the $\ell_1$-penalized problem. For both settings, our proposed methods offer important improvements over {\texttt{NHTP}} and {\texttt{GraSP}} as well.
\subsection{Performance on Real Data Sets}\label{sec:real-datasets}
We compare the performance of $\ell_1$ with $\ell_0$-$\ell_q$ ($q \in \{1,2\}$) regularization, using the logistic loss function.\footnote{We tried MCP regularization using \texttt{ncvreg}; however, it ran into convergence issues and did not terminate in over an hour. We also tried \texttt{NHTP}, but it ran into convergence issues and took more than $2$ hours to solve for a single solution in the regularization path. Moreover, based on our experiment in Section \ref{sec:timings}, \texttt{GraSP} is slower than \texttt{NHTP} and cannot handle such high-dimensional problems. Therefore, we do not include MCP, \texttt{NHTP}, and \texttt{GraSP} in this experiment.} We consider the following three binary classification data sets taken from the NIPS 2003 Feature Selection Challenge \citep{nipschallenge}:
\begin{itemize}
\item \textbf{Arcene}: This data set is used to distinguish cancerous from non-cancerous patterns in mass-spectrometric data. The data matrix is dense with $p = 10,000$ features. We used 140 observations for training and 40 observations for testing.
\item \textbf{Dorothea}: This data set is used to distinguish active chemical compounds in a drug. The data matrix is sparse with $p=100,000$ features. We used 805 observations for training and 230 observations for testing.
\item \textbf{Dexter:} The task here is to identify text documents discussing corporate acquisitions. The data matrix is sparse with $p = 20,000$ features. We used 420 observations for training and 120 observations for testing.
\end{itemize}
We obtained regularization paths for $\ell_1$ using \texttt{glmnet} and for $\ell_0$-$\ell_2$ and $\ell_0$-$\ell_1$ using both Algorithm 1 and Algorithm 2 (with $m=1$). In Figure~\ref{fig:aucvssupp}, we plot the support size versus the test AUC for $\ell_1$ and Algorithm 2 (with $m=1$) using $\ell_0$-$\ell_2$.
To avoid overcrowded plots, the results for our other algorithms and penalties are presented in Appendix~\ref{appendix-sec:expts}.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.5]{Experiments/AUC_Arcene_Swaps}
\includegraphics[scale=0.5]{Experiments/AUC_Dorothea_Swaps}
\includegraphics[scale=0.5]{Experiments/AUC_Dexter_Swaps}
\caption{Plots of the AUC versus the support size for the Arcene, Dorothea, and Dexter data sets. The green curves correspond to logistic regression with $\ell_1$ regularization. The other curves correspond to logistic regression with $\ell_0$-$\ell_2$ regularization (using Algorithm~2 with $m=1$) for different values of $\lambda_2$ (see legend). }
\label{fig:aucvssupp}
\end{figure}
For the Arcene data set, $\ell_0$-$\ell_2$ with $\lambda_2 = 1$ (i.e., the red curve) outperforms $\ell_1$ (the green curve) for most support sizes, and it reaches a peak AUC of 0.9 whereas $\ell_1$ does not exceed 0.84. The other choices of $\lambda_2$ also achieve a higher peak AUC than $\ell_1$ but do not uniformly outperform it. A similar pattern occurs for the Dorothea data set, where $\ell_0$-$\ell_2$ with $\lambda_2 = 1$ achieves a peak AUC of 0.9 at around 100 features, whereas $\ell_1$ needs around 160 features to achieve the same peak AUC. For the Dexter data set, $\ell_0$-$\ell_2$ with $\lambda_2 = 10^{-2}$ (the blue curve) achieves a peak AUC of around 0.98 using fewer than 10 features, whereas $\ell_1$ requires around 40 features to achieve a similar AUC. In this case, larger choices of $\lambda_2$ do not lead to any significant gains in AUC, probably because this data set has a higher signal than the previous two. Overall, we conclude that for all three data sets, Algorithm~2 for $\ell_0$-$\ell_2$ can achieve higher AUC values with (much) smaller support sizes.
\subsection{Timings}\label{sec:timings}
In this section, we compare the running time of our algorithms versus several state-of-the-art algorithms for sparse classification.
\subsubsection{Obtaining good solutions: Upper Bounds}\label{sec:good-upperbounds}
We study the running time of fitting sparse logistic regression models, using the following packages: our package \texttt{L0Learn}, \texttt{glmnet}, \texttt{ncvreg}, and \texttt{NHTP}.\footnote{We also considered \texttt{GraSP}, but we found it to be several orders of magnitude slower than the other toolkits, e.g., it takes 3701 seconds at $p=10^5$. All the other toolkits require less than $70$ seconds at $p=10^5$. Therefore, we did not include \texttt{GraSP} in our timing experiments.} In \texttt{L0Learn}, we consider both Algorithm~1 (denoted by L0Learn 1) and Algorithm~2 with $m=1$ (denoted by L0Learn 2). We generate synthetic data as described in Section \ref{section:expsetup}, with $n=1000, s=1, \boldsymbol\Sigma=I$, $k^\dagger=5$, and we vary $p$. For all toolkits except \texttt{NHTP},\footnote{\texttt{NHTP} does not have a parameter to control the tolerance for convergence.} we set the convergence threshold to $10^{-6}$ and solve for a regularization path with 100 solutions. For \texttt{L0Learn}, we use a grid of $\lambda_0$-values varying between $\lambda_0^{\text{max}}$ (the value which sets all coefficients to zero) and $0.001 \lambda_0^{\text{max}}$. For \texttt{L0Learn} and \texttt{NHTP}, we set $\lambda_2 = 10^{-7}$. For \texttt{glmnet} and \texttt{ncvreg}, we compute a path using the default choices of tuning parameters. The experiments were carried out on a machine with a 6-core Intel Core i7-8750H processor and 16GB of RAM, running macOS 10.15.7, R 4.0.3 (with vecLib's BLAS implementation), and MATLAB R2020b. The running times are reported in Figure~\ref{fig:Rruntimes}.
Figure~\ref{fig:Rruntimes} indicates that \texttt{L0Learn}~1 (Algorithm 1) and \texttt{glmnet} achieve the fastest runtimes across the $p$ range. For $p > 40,000$, \texttt{L0Learn}~1 (Algorithm 1) runs slightly faster than \texttt{glmnet}. \texttt{L0Learn}~2 (Algorithm 2) is slower due to the local search, but it can obtain solutions in reasonable times, e.g., less than a minute for 100 solutions at $p = 10^5$. It is important to note that the toolkits optimize different objective functions, and the speed-ups in \texttt{L0Learn} are partly due to the nature of $\ell_0$ regularization, which selects fewer nonzeros than $\ell_1$ or MCP regularization.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.6]{Experiments/Timings.png}
\caption{Runtimes to obtain good feasible solutions: Time (s) for obtaining a regularization path (with 100 solutions) for different values of $p$ (as discussed in Section~\ref{sec:good-upperbounds}). L0Learn 1 runs Algorithm 1 (CD), while L0Learn 2 runs Algorithm 2 (CD with Local Search).}
\label{fig:Rruntimes}
\end{figure}
\subsubsection{Timings for MIP Algorithms: Global Optimality Certificates}
We compare the running time of solving Problem~\eqref{eq:MIP} by Algorithm 4 (IGA) versus solving~\eqref{eq:MIP} directly, i.e., without using IGA. We consider the hinge loss and take $q=2$. We use Gurobi's MIP solver for our experiments. We set $\boldsymbol\Sigma=\mathbf{I}$ and $k^{\dagger}=5$ as described in Section \ref{section:expsetup}. We then generate $\epsilon_{i} \stackrel{\text{iid}}{\sim} N(0, \sigma^2)$ and set $y_i = \text{sign}(\mathbf{x}_{i}'\boldsymbol\beta^{\dagger} + \epsilon_i)$, where the signal-to-noise ratio $\text{SNR}=\text{Var}(\mathbf{X} \boldsymbol\beta^{\dagger})/\sigma^2~=~10$. We set $n=1000$ and vary $p\in \{10^4, 2\times10^4, 3\times10^4, 4\times10^4, 5\times10^4\}$.
For a fixed $\lambda_2 = 10$, $\lambda_0$ is chosen such that the final (optimal) solution has 5 nonzeros. The parameter $\mathcal{M}$ is set to $1.2 \|\tilde{\boldsymbol\beta}\|_{\infty}$, where $\tilde{\boldsymbol\beta}$ is the warm start obtained from Algorithm~2. We note that in all cases, the optimal solution recovers the support of the true solution $\boldsymbol{\beta}^{\dagger}$.
For this experiment, we use a machine with a 12-core Intel Xeon E5 @ 2.7 GHz and 64GB of RAM, running OSX 10.13.6 and Gurobi v8.1. The timings are reported in Table~\ref{table:miptimings}.
\begin{table}[tb]
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
Toolkit/$p$ & 10000 & 20000 & 30000 & 40000 & 50000 \\ \midrule
IGA (this paper) & 35 & 80 & 117 & 169 & 297* \\
Gurobi & 4446 & 21817 & - & - & - \\ \bottomrule
\end{tabular}
\caption{Runtimes to certify global optimality: Time (s) for solving an $\ell_0$-regularized problem with the hinge loss function to optimality. ``-'' denotes that the algorithm does not terminate in a day. ``*'' indicates that the algorithm is terminated with a 0.05\% optimality gap.}
\label{table:miptimings}
\end{table}
The results indicate significant speed-ups. For example, for $p=20,000$ our algorithm terminates in 80 seconds, whereas Gurobi takes 6 hours to solve~\eqref{eq:MIP} (to global optimality). For larger values of $p$, Gurobi cannot terminate in a day, while our algorithm terminates to optimality in a few minutes. It is worth noting that, empirically, IGA attains low running times when sparse solutions are desired ($< 30$ nonzeros in our experience). This observation also applies to the state-of-the-art MIP solvers for sparse regression, e.g., see \citet{bestsubset, bertsimas2017sparse, hazimeh2020sparse}. For denser solutions, IGA can still be useful for obtaining optimality gaps, but certifying optimality with a very small optimality gap is expected to take longer.
\section{Conclusion}
We considered the problem of linear classification regularized with a combination of the $\ell_0$ and $\ell_q$ (for $q \in \{1,2\}$) penalties. We developed both approximate and exact algorithms for this problem. Our approximate algorithms are based on coordinate descent and local combinatorial search. We established convergence guarantees for these algorithms and demonstrated empirically that they can run in times comparable to the fast $\ell_1$-based solvers. Our exact algorithm can solve to optimality high-dimensional instances with $p \approx 50,000$. This scalability is achieved through the novel idea of integrality generation, which solves a sequence of mixed integer programs with a small number of binary variables, until converging to a globally optimal solution. We also established new estimation error bounds for a class of $\ell_0$-regularized classification problems and showed that these bounds compare favorably with the best known bounds for $\ell_1$ regularization. We carried out experiments on both synthetic and real data sets with $p$ up to $10^5$. The results demonstrate that our $\ell_0$-based combinatorial algorithms can have a significant statistical edge (in terms of variable selection and prediction) compared to state-of-the-art methods for sparse classification, such as those based on $\ell_1$ regularization or simple greedy procedures for $\ell_0$ regularization.
There are multiple promising directions for future work. From a modeling perspective, our work can be generalized to structured sparsity problems based on $\ell_0$ regularization \citep{hazimeh2021grouped}. Recent work shows that specialized BnB solvers can be highly scalable for $\ell_0$-regularized regression \citep{hazimeh2020sparse}. One promising direction is to scale our proposed integrality generation algorithm further by developing specialized BnB solvers for the corresponding MIP subproblems.
\acks{\sloppy We would like to thank the anonymous reviewers for their comments that helped improve the paper.
The authors acknowledge research funding from the Office of Naval Research [Grants ONR-N000141512342 and ONR-N000141812298 (Young Investigator Award)], and the National Science Foundation [Grant NSF-IIS-1718258].}
The Peruvian district of Cacatachi is one of the fourteen districts that make up the Province of San Martín in the Department of San Martín, part of the San Martín Region of Peru.
From the hierarchical standpoint of the Catholic Church, it belongs to the Prelature of Moyobamba, a suffragan of the Metropolitan See of Trujillo, entrusted by the Holy See to the Archdiocese of Toledo in Spain.
Geography
The capital is located at 295 , 12 km north of Tarapoto, alongside the Fernando Belaúnde Terry highway. Its coordinates are 6°29'40" south latitude and 76°27'57" west longitude.
Etymology
Its name comes from the Quechua terms CACA = land and TACHI = flat, which would mean Flat Land.
See also
Territorial organization of Peru
San Martín Region
Map of San Martín
References
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,226 |
\chapter{}
\label{Appendix}
\section{Discrete harmonic functions}
The purpose of this chapter is to present the proofs of the maximum
principle and the Harnack inequality (Theorem \ref{ERharnack}) for discrete
harmonic functions. The Harnack inequality was used in Section \ref{ERsecreg}
(in the proof of Lemma \ref{ERl4}) to construct the regeneration times.
These inequalities are due to Kuo and Trudinger \cite{KT}.
For the purpose of self-containedness, we will give the complete proofs of these
estimates. We follow the arguments in \cite{KT}, adding to it some extra
details.
Recall the definitions of $a$, $L_a$, $b$ and $b_0$ in Section \ref{SeTriid1}.
We consider discrete difference operators $L_a$
such that
\[\sum_y a(x,y)=1, \quad\forall x,\]
and $a(x,y)> 0$ only if $|x-y|=1$, denoted $x\sim y$.
We assume that $L_a$ is uniformly elliptic with constant
$\kappa\in(0,\frac{1}{2d}]$, that is,
\[
a(x,y)\ge \kappa \text{ for any $x, y$ such that $x\sim y$}.
\]
For $r>0, x\in\mathbb{R}^d$, let $B_r(x)=\{z\in\mathbb{Z}^d: |z-x|<r\}$. We also write
$B_r(o)$ as $B_r$.
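Before the proofs, a quick numerical sanity check may be useful. The sketch below (Python; the simple random-walk kernel $a(x,y)=1/(2d)$ on a two-dimensional box is our choice of example, not from the text) computes a discrete harmonic function by iterated neighbor averaging and verifies that the interior values stay between the boundary extremes, in line with the maximum principle proved next.

```python
import numpy as np

def solve_discrete_dirichlet(boundary, iters=2000):
    """Jacobi iteration for L_a u = 0 with a(x,y) = 1/4 in d = 2:
    each interior value becomes the average of its four nearest
    neighbors, while the boundary values are held fixed."""
    u = boundary.copy()
    n, m = u.shape
    interior = (slice(1, n - 1), slice(1, m - 1))
    for _ in range(iters):
        # Right-hand side is fully evaluated before assignment (Jacobi step)
        u[interior] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# Box with non-negative boundary data
g = np.zeros((12, 12))
g[0, :] = 1.0    # top edge
g[-1, :] = 0.2   # bottom edge
u = solve_discrete_dirichlet(g)

boundary_vals = np.concatenate([u[0, :], u[-1, :], u[:, 0], u[:, -1]])
interior = u[1:-1, 1:-1]
# Discrete maximum principle: interior extremes bounded by boundary extremes
assert interior.max() <= boundary_vals.max() + 1e-9
assert interior.min() >= boundary_vals.min() - 1e-9
```

Since averaging preserves the range of the data, the interior never overshoots the boundary values, which is exactly the (homogeneous, $g \equiv 0$) content of Theorem~\ref{AMP} in this toy case.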
\subsection{Maximum principle}
For any bounded set $E\subset\mathbb{Z}^d$, let
$\partial E=\{y\in E^c:x\sim y \text{ for some }x\in E\}$,
$\bar{E}=E\bigcup\partial E$ and $\mathop{\rm diam} E=\max\{|x-y|_\infty: x,y\in E\}$.
\begin{theorem}\cite[Theorem 2.1]{KT}\label{AMP}
Let $E\subset\mathbb{Z}^d$ be bounded and $u$ be a function on $\bar{E}$.
For $x\in E$, define
\[
I_u(x)=\{s\in\mathbb{R}^d: u(x)-s\cdot x\ge u(z)-s\cdot z, \forall z\in\bar{E}\}.
\]
If
\[L_a u(x)\ge -g(x)\]
for all $x\in E$ such that $I_u(x)=I_u(x,E,a)\neq \emptyset$, then
\[
\max_E u\le
C\mathop{\rm diam}(\bar{E})
\big(\sum_{x\in E,\, I_u(x)\neq \emptyset}|g(x)|^d\big)^{1/d}+\max_{\partial E}u,
\]
where $C$ is a constant determined by $d, \kappa$ and $b_0\mathop{\rm diam} E$.
\end{theorem}
{\it Proof:}~
Without loss of generality, we assume $g\ge 0$ and
\[
\max_E u=u(x_0)>\max_{\partial E}u
\]
for some $x_0\in E$. Otherwise, there is nothing to prove.
For $s\in \mathbb{R}^d$ such that
\begin{align}\label{A*1}
|s|_\infty
&\le[u(x_0)-\max_{\partial E}u]/(d\mathop{\rm diam}\bar{E})\nonumber\\
&=:R=R(u,E),
\end{align}
we have
\[
u(x_0)-u(x)\ge s\cdot(x_0-x)
\]
for all $x\in\partial E$, which implies that $\max_{z\in\bar{E}}\,[u(z)-s\cdot z]$ is achieved in $E$.
Hence $s\in \bigcup_{x\in E}I_u(x)$ and the cube
\[
Q_R:=\{x:|x|_{\infty}< R\}\subset \bigcup_{x\in E}I_u(x).
\]
For any $p\in\mathbb{R}^d$, set
\[
f(p)=\big(|p|^{d/(d-1)}+\mu^{d/(d-1)}\big)^{1-d},
\]
where $\mu>0$ is a constant to be fixed later.
Since for any $x\in E$, $I_u(x)\subset\mathbb{R}^d$ is bounded and closed, we
can choose
$p_x\in I_u(x)$ so that
\[
|p_x|=\min_{p\in I_u(x)}|p|.
\]
Then
\[
f(p_x)=\max_{p\in I_u(x)}f(p).
\]
Thus
\begin{equation}\label{A*2}
\int_{Q_R}f(s)\, \mathrm{d} s\le \int_{\bigcup_{x\in E}I_u(x)}f(s)\, \mathrm{d} s
\le
\sum_{x:I_u(x)\neq\emptyset}f(p_x)|I_u(x)|,
\end{equation}
where $|I_u(x)|$ denotes the Lebesgue measure of $I_u(x)$.
Further,
we will show that, for any $x\in E$ with $I_u(x)\neq\emptyset$,
\begin{equation}\label{A*14}
|I_u(x)|\le (2/\kappa)^{d}[g(x)+b(x)p_x]^d.
\end{equation}
To this end, we fix an $x\in E$ with $I_u(x)\neq\emptyset$
and set
\[
w(z)=u(z)-p_x(z-x), \quad\forall z\in\bar{E}.
\]
Then $w(x)\ge w(z)$ for all $z\in\bar{E}$ and
\begin{equation}\label{A*3}
I_u(x)=I_w(x)+p_x.
\end{equation}
Since for any $q\in I_w(x)$ and $i=1,\ldots,d$,
\[
w(x)-w(x\pm e_i)\ge\mp q_i,
\]
we obtain (by ellipticity and by $w(x)\ge w(z)$, $\forall z\in\bar{E}$)
\begin{align*}
0\le \kappa|q|_\infty
&\le\sum_{y}a(x,y)(w(x)-w(y))\\
&=-L_a u+b(x)p_x\\
&\le g(x)+b(x)p_x.
\end{align*}
Hence
\[
I_w(x)\subset \big[\frac{-g(x)-b(x)p_x}{\kappa},\frac{g(x)+b(x)p_x}{\kappa}\big]^d
\] and
\[
|I_u(x)|\stackrel{(\ref{A*3})}{=}|I_w(x)|\le (2/\kappa)^{d}[g(x)+b(x)p_x]^d.
\]
(\ref{A*14}) is proved.
(\ref{A*14}) and (\ref{A*2}) yield
\[
\int_{Q_R}f(s)\, \mathrm{d} s
\le
(\dfrac{2}{\kappa})^d\sum_{x:I_u(x)\neq\emptyset}f(p_x)[g(x)+b(x)\cdot p_x]^d.
\]
Since by H\"{o}lder's inequality,
\[
g(x)+|b(x)||p_x|
\le
\big[(\frac{g(x)}{\mu})^d+|b(x)|^d\big]^{1/d} \big[\mu^{d/(d-1)}+|p_x|^{d/(d-1)}\big]^{(d-1)/d},
\]
we get
\begin{equation}\label{A*4}
\int_{Q_R}f(s)\, \mathrm{d} s
\le
(\frac{2}{\kappa})^d\sum_{x:I_u(x)\neq\emptyset}
\big[(\frac{g(x)}{\mu})^d+|b(x)|^d\big].
\end{equation}
On the other hand, by H\"{o}lder's inequality,
\[
f(s)=(|s|^{d/(d-1)}+\mu^{d/(d-1)})^{1-d}\ge 2^{2-d}(|s|^d+\mu^d)^{-1}.
\]
Thus
\begin{align}\label{A*5}
\int_{Q_R}f(s)\, \mathrm{d} s
\ge
\int_{B_R}f(s)\, \mathrm{d} s
&\ge
2^{2-d}\int_{B_R}(|s|^d+\mu^d)^{-1}\, \mathrm{d} s\nonumber\\
&=2^{2-d}\frac{\mathcal{O}_d}{d}\log[(\frac{R}{\mu})^d+1],
\end{align}
where $\mathcal{O}_d$ is the area of the unit sphere in $\mathbb{R}^d$.
Finally, combining (\ref{A*4}) and (\ref{A*5}) and putting
\[
\mu:=[\sum_{x:I_u(x)\neq\emptyset}g(x)^d]^{1/d},\]
we conclude that
\[
\kappa^d 2^{2-2d}\frac{\mathcal{O}_d}{d}\log[(\frac{R}{\mu})^d+1]
\le 1+(b_0\mathop{\rm diam}\bar{E})^d.
\]
Recalling the definition of $R=R(u,E)$ in (\ref{A*1}), the theorem follows.
\qed
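As a purely illustrative aside (the grid, the random balanced weights and the iterative solver below are a toy construction of ours, not part of the argument), the case $g\equiv 0$ of the theorem reduces to the classical bound $\max_E u\le\max_{\partial E}u$, which is easy to check numerically for a balanced operator on a two-dimensional box:

```python
import random

# Toy check of the g = 0 case of the maximum principle: for the balanced
# operator L_a u(x) = sum_i a(x,e_i)[u(x+e_i)+u(x-e_i)-2u(x)] on a 2-d box,
# a solution of L_a u = 0 with prescribed boundary data satisfies
# max_E u <= max_{partial E} u.  We solve by sweeping the fixed-point
# relation u(x) = a1 [u(x+e1)+u(x-e1)] + a2 [u(x+e2)+u(x-e2)],
# where 2 a1 + 2 a2 = 1 (balanced, elliptic weights).

rng = random.Random(0)
n = 8
a1 = {(x, y): 0.1 + 0.3 * rng.random() for x in range(1, n) for y in range(1, n)}

# Boundary data x^2 - y on the boundary of [0, n]^2, zero initial guess inside.
u = {(x, y): (x * x - y if x in (0, n) or y in (0, n) else 0.0)
     for x in range(n + 1) for y in range(n + 1)}
for _ in range(2000):                      # Gauss-Seidel sweeps
    for x in range(1, n):
        for y in range(1, n):
            a, b = a1[(x, y)], 0.5 - a1[(x, y)]
            u[(x, y)] = (a * (u[(x + 1, y)] + u[(x - 1, y)])
                         + b * (u[(x, y + 1)] + u[(x, y - 1)]))

interior_max = max(v for (x, y), v in u.items() if 0 < x < n and 0 < y < n)
boundary_max = max(v for (x, y), v in u.items() if x in (0, n) or y in (0, n))
assert interior_max <= boundary_max
```

The sweep converges because each update is a strict convex combination of neighboring values, so the interior maximum can never exceed the boundary maximum.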
By the same argument as in the proof of Theorem \ref{Cmvi} (Section \ref{SeTriid1}),
Theorem~\ref{AMP} and Lemma~\ref{Cmvilemma} imply
\begin{theorem}[Mean-value inequality]\label{Amvi}
For any function $u$ on $\bar{B}_R$ such that
\[
L_a u\ge 0, \quad x\in B_R
\]
and any $\sigma\in (0,1)$, $0<p\le d$, we have
\[
\max_{B_{\sigma R}}u\le C\norm{u^+}_{B_R,p},
\]
where $C$ depends on $\sigma, p, \kappa, d$ and $b_0R$.
\end{theorem}
\subsection{Harnack inequality}
\begin{theorem}[Harnack inequality]\cite[Corollary
4.5]{KT}\label{ERharnack}
Let $u$ be a
non-negative function on $B_R$, $R>1$. If
\[
L_a u=0
\]
in $B_R$,
then for any $\sigma\in (0,1)$ with $R(1-\sigma)>1$, we have
\[
\max_{B_{\sigma R}}u\le C\min_{B_{\sigma R}}u,
\]
where $C$ is a positive constant depending on $d, \kappa, \sigma$ and $b_0 R$.
\end{theorem}
\begin{lemma}\label{Ahlemma}
Suppose $u$ is a non-negative function on $\bar{B}_R$ that satisfies
\[ L_a u\le 0\]
in $B_R$. Then for any $\sigma\le\tau<1$,
\begin{equation}\label{Aehlemma}
\min_{B_{\tau R}}u\ge C\min_{B_{\sigma R}}u,
\end{equation}
where $C$ depends on $\kappa,d, \sigma,\tau$ and $b_0R$.
\end{lemma}
{\it Proof:}~
Recall the definition of $\eta=\eta_R(x)$ in Lemma \ref{Cmvilemma}.
We will first show that there exists a constant $\beta=\beta(\sigma, b_0R, \kappa)$ such
that
\begin{equation}\label{A*10}
L_a \eta\ge -(2^\beta+\beta^3) R^{-3} \qquad\text{ in }B_R\setminus B_{\sigma R}.
\end{equation}
If $R-1\le |x|< R$, then $\eta(x)\le (2/R)^\beta\le 2^\beta R^{-3}$ for
$\beta\ge 3$.
Hence for $\beta\ge 3$,
\[L_a\eta\ge -\eta\ge -2^\beta R^{-3}.\]
If $\sigma R\le |x|<R-1$, then $y\in B_R$ for all $y\sim x$.
For $i=1,\ldots, d$, the third derivative $D_i^3\eta$ of $\eta$ with respect to
$x_i$ satisfies
\begin{align*}
|D_i^3\eta| &=\big|4\beta(\beta-1)x_iR^{-4}\eta^{1-3/\beta}
[3(1-|x|^2 /R^{2})-2(\beta-2)x_i^2/R^2]\big|\\
&\le 4\beta(\beta-1)(2\beta-1)R^{-3},
\end{align*}
and so, by Taylor's expansion,
\[
\eta(x+e)-\eta(x)\ge \nabla\eta(x)\cdot e+\frac{1}{2}e^{T}D^2\eta(x)e-\frac{8}{6}\beta^3 R^{-3}.
\]
Thus
\begin{align*}
L_a\eta(x)
&=\sum_{e}a(x,e)(\eta(x+e)-\eta(x))\\
&\ge \nabla\eta\cdot b(x)+\frac{1}{2}\sum_e a(x,x+e)e^{T}D^2\eta(x)e-\frac{4}{3}\beta^3 R^{-3}.
\end{align*}
Noting that
\[
\nabla\eta\cdot b(x)
\stackrel{\eqref{Afifth}, \eta\le 1}{\ge}
-2(b_0R)\beta R^{-2}\eta^{1-2/\beta},
\]
and, for $\sigma R\le |x|<R-1$,
\begin{align*}
&\sum_e a(x,x+e)e^{T}D^2\eta(x)e\\
&=\sum_{i=1}^d(a(x,x+e_i)+a(x,x-e_i))D_{ii}\eta(x)\\
&=2\beta R^{-2}\eta^{1-2/\beta}\sum_{i=1}^d\big(a(x,x+e_i)+a(x,x-e_i)\big)\big(\frac{2(\beta-1)x_i^2}{R^2}-(1-\frac{|x|^2}{R^2})\big)\\
&\ge
2\beta R^{-2}\eta^{1-2/\beta}[4\kappa(\beta-1)\sigma^2-1],
\end{align*}
we have
\[
L_a\eta
\ge
[4\kappa(\beta-1)\sigma^2-1-2b_0R]R^{-2}\eta^{1-2/\beta}
-\frac{4}{3}\beta^3R^{-3}.
\]
Hence \eqref{A*10} also holds for $\sigma R\le |x|<R-1$ if we take
\[
\beta\ge 1+\frac{1+2b_0R}{4\kappa\sigma^2}.
\]
\eqref{A*10} is proved.
Next, let $m_\sigma:=\min_{B_{\sigma R}}u$ and $w:=m_\sigma\eta-u$.
Then
\begin{equation}\label{A*11}
\max_{B_{\tau R}}w\ge (1-\tau^2)^\beta m_\sigma-m_\tau.
\end{equation}
Since $w\le 0$ in $B_{\sigma R}\bigcup B_R^c$ and
\[
L_a w\stackrel{\eqref{A*10}}{\ge}
-(2^\beta+\beta^3)m_\sigma R^{-3} \qquad\text{in }B_R\setminus B_{\sigma R},
\]
we get by the maximum principle that
\begin{equation}\label{A*12}
\max_{B_R}w
\le
C_1 m_\sigma R^{-1},
\end{equation}
where $C_1$ depends on $\kappa,d,\sigma$ and $b_0R$.
By \eqref{A*11} and \eqref{A*12},
\[
[(1-\tau^2)^\beta-\frac{C_1}{R}]m_\sigma\le m_\tau.
\]
Therefore, \eqref{Aehlemma} holds if $R$ satisfies
\[R>\frac{2C_1}{(1-\tau^2)^\beta}.\]
For $R\le\frac{2C_1}{(1-\tau^2)^\beta}$, it follows by iteration (noting $\kappa
u(x)\le u(y)$ for $x\sim y$) that
\[
\kappa^{2C_1(1-\tau^2)^{-\beta}}m_\sigma\le m_\tau.
\]
\eqref{Aehlemma} is proved.\qed\\
For any $z\in\mathbb{Z}^d$ and any
$n=(n_1,\ldots,n_d)\in\mathbb{N}^d$ , we let
\[
N(z,n):=(z+\prod_{i=1}^d[0,n_i-1])\cap\mathbb{Z}^d.
\]
We say that $N(z,n)$ is \textit{nice} if $n$ satisfies $\max_{i,j}|n_i-n_j|\le 1$.
Call $|n|_\infty$ the \textit{length} of the nice rectangle $N(z,n)$. Intuitively, a nice rectangle is ``nearly a cube''.
\begin{proposition}\label{Aprop0}
Let $u$ be a nonnegative function on $\bar{B}_R, R>0$ such that
\[
L_a u\le 0 \quad\text{ in }B_R.
\]
Suppose $r\in(0, R/(7\sqrt{d})]$ and
$N=N(z,n)\subset Q_r$ is a nice rectangle in $Q_r$.
Then there exists a constant
$\delta=\delta(d,\kappa,b_0R)\in(0,1)$ such that, if
$\Gamma\subset B_R$ satisfies
\[|\Gamma\cap N|\ge \delta|N|,\]
then
\[
\min_{N'}u\ge C\min_{\Gamma}u,
\]
where
$N'=(z+\prod_{i=1}^d[-n_i,2n_i-1])\cap\mathbb{Z}^d$
and $C$ depends on $\kappa,d, \sigma, \tau$ and $b_0R$.
\end{proposition}
{\it Proof:}~
When $|n|_\infty=1$, $N$ is a singleton, and the proposition follows by
iteration (noting that $\kappa u(x)\le u(y)$ for any $x\sim y$). So we only
consider the case when the length of $N$ is $\ge 2$.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{boxes}
\caption{$N$ is the rectangle in the center. $B_{h}(O_N)$ is the small circle.}
\label{Afig2}
\end{figure}
Denote the center of $N$ by $O_N=z+(\frac{n_1-1}{2},\cdots,\frac{n_d-1}{2})\in(\frac{1}{2}\mathbb{Z})^d$.
Setting $h:=\min_{i}n_i/2$, we have
\[
B_h(O_N)\subset N\subset B_{2\sqrt{d}h}(O_N).
\]
Since $h\ge\frac{|n|_\infty-1}{2}\ge\frac{|n|_\infty}{4}$, we have
\[
N'\subset B_{3|n|_\infty\sqrt{d}/2}(O_N)
\subset B_{6\sqrt{d}h}(O_N).
\]
Suppose for some $\delta\in(0,1)$,
\[|\Gamma\cap N|\ge \delta|N|.\]
Let $u_\Gamma:=\min_\Gamma u$ and $v:=u_\Gamma-u$. Then $L_a v\ge 0$ and $v^+|_\Gamma=0$.
By Theorem \ref{Amvi},
\begin{align*}
\max_{B_{h/2}(O_N)}v
&\le C\frac{1}{|B_{h}|}\sum_{B_h(O_N)}v^+\\
&\le C\frac{|B_h(O_N)\setminus\Gamma|}{|B_h|}\max_{B_h(O_N)}v\\
&\stackrel{|B_h|\ge C|N|}{\le}
C\frac{|N\setminus\Gamma|}{|N|}\max_{B_h(O_N)}v
\le C_2(1-\delta)\max_{B_h(O_N)}v,
\end{align*}
where $C_2$ depends on $\kappa,d$ and $b_0R$.
Taking $\delta=\delta(\kappa, d,b_0R)$ big enough such that $C_2(1-\delta)\le 1/2$, we get
\[\max_{B_{h/2}(O_N)}v
\le \frac{1}{2}\max_{B_h(O_N)}v.\]
Hence
\[
u_\Gamma-\min_{B_{h/2}(O_N)}u\le\frac{1}{2}(u_\Gamma-\min_{B_h(O_N)}u).
\]
Therefore, noting that (since $r\le R/(7\sqrt{d})$) $B_{7\sqrt{d}h}(O_N)\subset B_R$,
\[
u_\Gamma\le 2\min_{B_{h/2}(O_N)}u
\stackrel{\text{Lemma \ref{Ahlemma}}}{\le}
C \min_{B_{6\sqrt{d}h}(O_N)}u\le C \min_{N'}u,
\]
with $C$ depending on
$\kappa,d$ and $b_0R$.\qed\\
\begin{lemma}\label{Ahlemma2}
Let $u$ be a nonnegative function on $\bar{B}_R, R>0$ such that
\[
L_a u\le 0 \quad\text{ in }B_R.
\]
Let $r\in(0, R/(7\sqrt{d})]$.
Then for any $\Gamma\subset Q_r$,
there exists a subset $\Gamma_\delta\supset\Gamma$ of $Q_r$ such that
either $\Gamma_\delta=Q_r$ or $|\Gamma_\delta|>\delta^{-1}|\Gamma|$ holds,
and
\[
\min_{\Gamma_\delta}u\ge\gamma\min_\Gamma u.
\]
Here the constant $\gamma$ depends only on $\kappa, d$ and $b_0R$, and $\delta$ is the same as in Proposition \ref{Aprop0}.
\end{lemma}
{\it Proof:}~
We will construct $\Gamma_\delta$ through a cube decomposition procedure.
Observe that any nice rectangle with length $l\ge 2$ can be decomposed into (at most $2^d$) smaller
disjoint nice rectangles whose lengths are either $\lfloor \frac{l}{2}\rfloor$ or $\lfloor \frac{l}{2}\rfloor+1$.
With a slight abuse of terminology, we say that such a decomposition is \textit{nice}. Note that a nice decomposition
need not be unique.
For any $\Gamma\subset Q_r$, set
\[\mathcal{N}=\mathcal{N}(\Gamma):=\{N:N \text{ is nice and }|\Gamma\cap N|\ge \delta |N|\}.\]
Now perform cube decompositions on $Q_r$ as follows. Assume that we have an imaginary ``bag''.
In the first step, we put $Q_r$ into our ``bag'' if $Q_r\in\mathcal{N}$, and decompose $Q_r$ nicely (into at most $2^d$ nice rectangles) otherwise.
In the second step, we repeat the same procedure on each of the remaining
rectangles, i.e., we
put a rectangle into our ``bag'' if it is in $\mathcal{N}$, and
decompose a rectangle (with length $\ge 2$) nicely if it is not in
$\mathcal{N}$. Repeat this procedure as often as necessary, and stop when there is
nothing left to decompose or
all the remaining rectangles are singletons in $Q_r\setminus\Gamma$.
The process ends within a finite number of steps.
Denote the collection of the rectangles in our ``bag'' by $\mathcal{N}_0$ ($\subset\mathcal{N}$).
For $N\in\mathcal{N}_0$ with $N\neq Q_r$, we denote by $N^{-1}$ its \textit{prior}, i.e., the rectangle whose nice decomposition produced $N$ in the previous step. Set $Q_r^{-1}=Q_r$ and
\[
\Gamma_\delta:=\bigcup_{N\in\mathcal{N}_0}N^{-1}.
\]
Recall the definition of $N'$ in Proposition \ref{Aprop0}.
For any $N\in\mathcal{N}_0$, since $|\Gamma\cap N|\ge \delta |N|$ and $N^{-1}\subset N'$,
by Proposition \ref{Aprop0} we have
\[
\min_{N^{-1}}u
\ge\min_{N'}u
\ge\gamma\min_\Gamma u.
\]
Hence,
\[
\min_{\Gamma_\delta}u\ge\gamma\min_\Gamma u.
\]
Moreover, note that $\Gamma_\delta=Q_r$ when $\mathcal{N}_0=\{Q_r\}$. If instead
$\mathcal{N}_0\neq\{Q_r\}$, we have
\[
|\Gamma\cap N^{-1}|<\delta|N^{-1}|\quad\text{for all } N\in\mathcal{N}_0.
\]
Therefore, if $\mathcal{N}_0\neq\{Q_r\}$, \begin{align*}
|\Gamma|=\Abs{\bigcup_{N\in\mathcal{N}_0}(\Gamma\cap N)}
&\le \Abs{\bigcup_{N\in\mathcal{N}_0}(\Gamma\cap N^{-1})}\\
&<\sum_{N^{-1}:N\in\mathcal{N}_0}\delta\abs{N^{-1}}=\delta\abs{\Gamma_\delta}.
\end{align*}
Our proof is complete. \qed\\
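The nice-decomposition step used in the proof above can be sketched as follows (an illustrative toy implementation with helper names of our own; pieces are recorded only by their side lengths):

```python
from itertools import product
from math import prod

# Sketch of a nice decomposition: each side n_i of a nice rectangle is split
# into parts n_i//2 and n_i - n_i//2, producing at most 2^d disjoint nice
# rectangles whose lengths are floor(l/2) or floor(l/2)+1, where l is the
# length (maximal side) of the original rectangle.

def nice_decompose(sides):
    per_axis = [[p for p in (n // 2, n - n // 2) if p > 0] for n in sides]
    return list(product(*per_axis))        # one tuple of sides per piece

sides = (5, 5, 4)                          # a nice rectangle in d = 3
pieces = nice_decompose(sides)

assert sum(prod(p) for p in pieces) == prod(sides)    # the pieces tile N
assert all(max(p) - min(p) <= 1 for p in pieces)      # each piece is nice
l = max(sides)
assert all(max(p) in (l // 2, l // 2 + 1) for p in pieces)
```

The volume identity holds because the per-axis splits factor the product: $\prod_i n_i=\prod_i(\lfloor n_i/2\rfloor+\lceil n_i/2\rceil)$.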
\noindent{\it Proof of Theorem \ref{ERharnack}:}\\
We only consider the case when $\sigma<1/(7\sqrt{d})$.
For any $\Gamma\subset Q_{\sigma R}$, if $|Q_{\sigma R}|\le\delta^{-s}|\Gamma|$ for some $s\in\mathbb{N}$,
then we have
\[
m:=\min_{Q_{\sigma R}}u\ge\gamma^s\min_\Gamma u
\]
by Lemma \ref{Ahlemma2} and iteration.
Hence for $t\ge 0$, putting $\Gamma^t:=\{x\in Q_{\sigma R}: u(x)\ge t\}$, we get
\begin{equation}\label{A*13}
m\ge\gamma^{\lceil\log_\delta(|\Gamma^t|/|Q_{\sigma R}|)\rceil}t
\ge \left(\frac{|\Gamma^t|}{|Q_{\sigma R}|}\right)^{\log_\delta\gamma}\gamma t.
\end{equation}
Note that $q:=\log_\gamma \delta>0$, since $\gamma,\delta\in(0,1)$.
Therefore, for any $p\in(0,q)$,
\begin{align*}
\frac{1}{|Q_{\sigma R}|}\sum_{Q_{\sigma R}}u^p
&=m^p+\frac{1}{|Q_{\sigma R}|}\sum_{Q_{\sigma R}}\int_m^\infty pt^{p-1}1_{u\ge t}\, \mathrm{d} t\\
&=m^p+\int_m^\infty pt^{p-1}\frac{|\Gamma^t|}{|Q_{\sigma R}|}\, \mathrm{d} t\\
&\stackrel{(\ref{A*13})}{\le}
m^p+\int_m^\infty pt^{p-1}(\frac{m}{\gamma t})^q\, \mathrm{d} t\le C m^p,
\end{align*}
where $C$ depends on $\kappa, d$ and $b_0R$. Combining this and Theorem \ref{Amvi}, the Harnack inequality for $\sigma<1/7\sqrt{d}$ is proved.
The case $\sigma\ge 1/(7\sqrt{d})$ then follows by a chaining argument.
\qed
\chapter{Invariance Principle for Random Walks in Balanced Random Environment}
\label{CLT chapter}
This chapter is devoted to the proofs of Theorem~\ref{CLT1} and Theorem~\ref{CLT2}.
The structure of this chapter is as follows. In Section \ref{SePeEn} we construct the ``periodized environments" as in \cite{Sz1, ZO}, and show that the
proof of $Q\sim P$ can be reduced to the proof of the inequality (\ref{CPhi0}).
Using the maximum principle (Theorem \ref{CMP}), we then prove (\ref{CPhi0})
in Section \ref{SeMP} under the assumptions of Theorem \ref{CLT1}(i). In Section
\ref{SePercE}, which is devoted to the iid setup,
we prove Theorem \ref{CLT2}(i) using percolation tools. Section \ref{SeTran} is
devoted
to the proof of the transience of the RWRE for $d\geq 3$, thus providing
a proof of Theorem \ref{CLT1}(ii). In Section \ref{SeTriid}, we will show
a modified maximum principle for balanced difference operators, and use it to prove
Theorem \ref{CLT2}(ii).
Throughout, $C$ denotes a generic positive constant that may depend on
dimension only, and whose value may change from line to line.
\section {The periodized environments}\label{SePeEn}
As in \cite{Sz1, ZO}, the following periodic structure of the environment
is introduced.
Let $ \Delta_N (x_0)=\{x\in \mathbb{Z}^{d}: |x-x_0|_{\infty}\le N\} $ be the cube centered at $x_0$ of length $2N$. Let $\Delta_N=\Delta_N(o)$. For any $ x\in\mathbb{Z}^{d} $, set
$$ \hat{x}:=x+(2N+1)\mathbb{Z}^{d}\in \mathbb{Z}^{d}/(2N+1)\mathbb{Z}^{d}. $$
For any fixed $ \omega \in \Omega $, we define $\omega^{N}$ by setting
$ \omega^{N}(x)=\omega(x) $ for $ x \in \Delta_{N} $
and $\omega^N (y)=\omega^N (x)$ for $y \in \mathbb{Z}^{d} $ whenever $\hat{y}=\hat{x}$. Let $ \Omega^{N}=\{\omega^{N}: \omega\in\Omega\}$. Let $ \{X_{n,N}\} $ denote the random walk on $ \mathbb{Z}^{d} $ in the environment $ \omega^{N} $. Then $ \{\hat{X}_{n,N}\} $ is an irreducible finite-state Markov chain,
hence it possesses a unique invariant probability measure, which can
always be written in
the form
\[ \dfrac{1}{(2N+1)^d}\sum_{x\in\Delta_{N}}\Phi_{N}(x)\delta_{\hat{x}} . \]
Here $\Phi_N$ is some function on $\Delta_N$ and $(2N+1)^{-d}\Phi_{N}(\cdot)$
sums to $1$, so that $\Phi_N$ can be interpreted as a density with
respect to the uniform measure on $\Delta_N$.
Define
\[ Q_{N}=Q_{N,\omega}=\dfrac{1}{(2N+1)^{d}}\sum_{x\in\Delta_{N}}\Phi_N (x)\delta_{\theta^{x}\omega^{N}} \]
as a probability measure on $\Omega^N$.
Then, for any $x\in \Delta_N$,
\begin{align*}
\sum_{y\in\Delta_N} Q_N(\theta^y \omega^N)M(\theta^y \omega^N, \theta^x\omega^N)
&=\sum_{y\in\Delta_N} \frac{\Phi_N (y)}{(2N+1)^d}\omega^N(y,x)\\
&= \frac{\Phi_N (x)}{(2N+1)^d}=Q_N (\theta^x \omega^N).
\end{align*}
This implies that
$Q_N$ is the invariant probability measure
(with respect to the kernel $M$) of the Markov chain
$\{\bar{\omega}^{N}(n)\}$ on $\Omega^{N}$.
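For concreteness, the density $\Phi_N$ can be computed numerically on a small example (a toy balanced environment of our own choosing, in dimension $d=2$ with $N=2$; power iteration converges because $2N+1$ is odd, so the periodized chain is aperiodic):

```python
import random

# Toy computation of Phi_N: on the torus (Z/(2N+1)Z)^2 we pick a balanced
# environment (omega(x, e_i) = omega(x, -e_i)), iterate the law of the
# periodized chain X-hat to its invariant measure, and read off the density
# Phi_N = (2N+1)^d * (invariant measure), which is positive and averages to 1.

rng = random.Random(1)
N = 2
L = 2 * N + 1
a1 = {(x, y): 0.1 + 0.3 * rng.random() for x in range(L) for y in range(L)}
# a1[x] = omega(x, e_1) = omega(x, -e_1); omega(x, +-e_2) = 1/2 - a1[x].

mu = {p: 1.0 / L**2 for p in a1}           # start from the uniform law
for _ in range(3000):
    nxt = {p: 0.0 for p in a1}
    for (x, y), m in mu.items():
        a, b = a1[(x, y)], 0.5 - a1[(x, y)]
        nxt[((x + 1) % L, y)] += m * a
        nxt[((x - 1) % L, y)] += m * a
        nxt[(x, (y + 1) % L)] += m * b
        nxt[(x, (y - 1) % L)] += m * b
    mu = nxt

phi = {p: L**2 * m for p, m in mu.items()}            # the density Phi_N
assert abs(sum(phi.values()) / L**2 - 1.0) < 1e-9     # averages to 1
assert min(phi.values()) > 0                          # strictly positive
```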
We will show that $Q_N$ converges weakly to some measure $ Q $ with good
properties. To do this, we first introduce a sequence of measures
\[
P_{N}=P_{N,\omega}=\dfrac{1}{(2N+1)^{d}}
\sum_{x\in\Delta_{N}}\delta_{\theta^{x}\omega^{N}},
\]
which, by the multidimensional ergodic theorem (see Theorem (14.A8) in \cite{Ge}
and also Theorem 1.7.5 in \cite{Kr}), converges weakly to $P$, $P$-a.s.
Let $ \{\omega_{\gamma}^{N}\}_{\gamma=1}^{k} $ denote the set of distinct states in $ \{ \theta^{x}\omega^{N}\}_{x\in \Delta_{N}} $ and $ C_{N}(\gamma):=\{x\in \Delta_{N}: \theta^{x}\omega^{N}=\omega_{\gamma}^{N}\} $.
Set,
for any finite subset $E\subset\mathbb{Z}^d$,
\[
\lVert f\rVert_{E,j}:=(|E|^{-1}\sum_{x \in E} |f(x)|^{j})^{\frac{1}{j}}.
\]
Since
$ \, \mathrm{d} Q_N/\, \mathrm{d}
P_N=\sum_{\gamma=1}^{k}\delta_{\omega_{\gamma}^{N}}|C_{N}(\gamma)|^{-1}\sum_{
x\in C_{N}(\gamma)}\Phi_{N}(x)=:f_{N}$, we have that, for any measurable
function $g$ on $\Omega$,
\begin{align}\label{Q_n0}
|Q_N g|
&\le (\int f_{N}^{\alpha} \, \mathrm{d} P_N)^{\frac{1}{\alpha}}(\int |g|^{\alpha'} \, \mathrm{d} P_N)^{\frac{1}{\alpha'}} \nonumber\\
&\le \big( \frac{1}{|\Delta_N |}\sum_{\gamma =1}^{k}\sum_{x \in C_N (\gamma)}\Phi_N (x)^{\alpha} \big)^{\frac{1}{\alpha}}(\int |g|^{\alpha'} \, \mathrm{d} P_N)^{\frac{1}{\alpha'}}
\nonumber\\
&= \lVert \Phi _N \rVert _{\Delta_N, \alpha}(P_{N}|g|^{\alpha'})^{\frac{1}{\alpha'}},
\end{align}
where
$\alpha'$ is the H\"older conjugate of $\alpha$, i.e., $1/\alpha+1/\alpha'=1$,
and we used H\"older's inequality in both inequalities.
Since $\Omega$ is compact with respect to the product topology,
along some subsequence $N_k\to\infty$,
$\{Q_{N_k}\}$ converges weakly to a limit, denoted $Q$.
Assume for the moment that
\begin{equation}\label{CPhi0}
\varlimsup_{N\to\infty}\lVert \Phi _N \rVert _{\Delta_N, \alpha}
\le C, \quad P\text{-a.s.}
\end{equation}
We then show that, for a.e. $\omega\in\Omega$,
\begin{equation}\label{CLTf2}
Q\ll P .
\end{equation}
Indeed, let $A\subset \Omega$ be measurable.
Let $ \rho $ denote a metric on the Polish space $ \Omega $.
For any closed subset
$ F \subset A $, $ \delta >0 $,
introduce the function $f(\omega)=[1-\rho (\omega, F)/\delta ]^+$
which is supported on
$ F_{\delta}=\{\omega \in \Omega: \rho (\omega, F) <\delta\}$.
Then, by (\ref{Q_n0}), (\ref{CPhi0}),
\begin{equation*}
Q F\le \varlimsup_{N\to\infty}Q_N f \le C (P f^{\alpha'})^{\frac{1}{\alpha'}}\le C (P F_{\delta})^{\frac{1}{\alpha'}} .
\end{equation*}
Letting $ \delta\downarrow 0 $, we get $ Q F \le C (P F)^{\frac{1}{\alpha'}}$.
Taking the supremum over all closed subsets $ F \subset A $, one concludes that
$Q A \le C (P A)^{\frac{1}{\alpha'}}$, which proves (\ref{CLTf2}).
Once we have (\ref{CLTf2}), it is standard to check, using ellipticity,
that $ \bar{\omega}(n) $ is ergodic with respect to $ Q $ and $Q\sim P$
(see \cite{Sz1, ZO}). (Thus,
by the ergodic theorem, $Q$ is uniquely determined by $Qg=\mathop{\rm lim}_{n\to\infty} E\sum_{j=0}^{n-1}g(\bar{\omega}_j)/n$ for every bounded measurable $g$. Hence $Q$ is \textit{the} weak limit of $Q_N$.) Therefore, to prove the invariance principle it suffices to prove (\ref{CPhi0}).
Sections \ref{SeMP} and \ref{SePercE} are devoted to the proof of (\ref{CPhi0})
under the assumptions of Theorems \ref{CLT1} and \ref{CLT2}, respectively.
\section {Maximum principle and proof of Theorem \ref{CLT1}(i)}\label{SeMP}
Throughout this section, we fix an $\omega\in \Omega$.
For any bounded set $E \subset \mathbb{Z}^{d}$, let $\partial E =\{y \in E^{c}: \exists x\in E, |x-y| _{\infty}=1\}$, $\bar{E}=E \bigcup \partial E$ and $\mathop{\rm diam}(E)=\max\{|x-y| _{\infty}: x, y\in E\}$. For any function $f$ defined on $\bar{E}$,
let $ L_{\omega} $ denote the operator
\begin{equation}\label{operator}
(L_{\omega}f )(x)=\sum_{i=1}^{d}\omega(x, e_i)[f(x+e_i)+f(x-e_i)-2f(x)], \quad
x\in E .
\end{equation}
The following discrete maximum principle is an adaptation of Theorem 2.1 of
\cite{KT}.
\begin{theorem}[Maximum Principle]\label{CMP}
Let $E\subset \mathbb{Z}^d$ be bounded,
and let $u$ be a function on $ \bar{E} $. For all
$x\in E$, assume
$ \varepsilon(x)>0 $ and define
$$ I_{u}(x):=\{s \in \mathbb{R}^{d}: u(x)-s\cdot x \ge u(z)-s\cdot z, \forall
z \in\bar{E}\} .$$
If $ L_{\omega} u(x) \ge -g(x)$ for all $x \in E$ such that $I_u (x)\ne
\emptyset$,
then
\begin{equation}\label{Cmpremark}
\max_{E} u \le
C\mathop{\rm diam}{\bar{E}}
\bigg(\sum_{\substack{
x\in E\\
I_u (x)\ne \emptyset}
}|\frac{g}{\varepsilon}|^d\bigg)^{\frac{1}{d}}
+\max_{\partial E}u .
\end{equation}
In particular, \[ \max_{E} u \le
C\mathop{\rm diam}{\bar{E}}\cdot |E|^{\frac{1}{d}} \lVert
\frac{g}{\varepsilon}\rVert _{E,d}+\max_{\partial E}u .\]
\end{theorem}
{\it Proof:}~ See the proof of Theorem 2.1 in \cite{KT}.\qed\\
Define the stopping times $ \tau_0=0 $, $ \tau_1 =\tau :=\min \{j \ge 1: |X_{j,N}-X_{0,N}|_{\infty}> N\} $ and
$ \tau_{j+1}=\min \{n>\tau_j : |X_{n,N} -X_{\tau_j, N}|_{\infty}> N\} $.
\begin{lemma}\label{CLTE}
Let $\omega^{N}$ and $\{X_{n,N}\}$ be as in Section \ref{SePeEn}, and let $\tau$ be as defined above. Then
there exist constants $c>0$ and $C<1$ such that, for all $N$ large,
$$E_{\theta^{x}\omega^{N}}^{o} (1-\frac{c}{N^2})^{\tau}\leq C <1 .$$
\end{lemma}
{\it Proof:}~
Since the environment is balanced, $ X_{n,N} $ is a martingale under $P_{\theta^{x}\omega^{N}}^{o}$,
and it follows from Doob's inequality that for any $K \ge 1 $,
\begin{align*}
P_{\theta^{x}\omega^{N}}^{o} \{\tau \le K\}
& \le
2 \sum_{i=1}^{d} P_{\theta^{x}\omega^{N}}^{o}\{\sup_{n \le K} X_{n, N}(i) \ge N+1\}\\
& \le
\frac{2}{N+1} \sum_{i=1}^{d}E_{\theta^{x}\omega^{N}}^{o} X_{K, N} (i)^{+}
\le \frac{2d}{N+1} \sqrt{K},
\end{align*}
where $ X_{n,N} (i) $ is the $i$-th coordinate of $ X_{n,N} $. Hence
\begin{equation*}
E_{\theta^{x}\omega^{N}}^{o} (1-\frac{c}{N^2})^{\tau}
\le (1-\frac{c}{N^2})^{K} +\frac{2d}{N+1}\sqrt{K} .
\end{equation*}
Taking $c=16 d^2$ and $K=N^2/16d^2$, we get
$E_{\theta^{x}\omega^{N}}^{o} (1-\frac{c}{N^2})^{\tau}\le e^{-1}+2^{-1} .$ \qed
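The two-term bound at the end of the proof is easy to evaluate numerically (an illustration only; the function name is ours):

```python
from math import sqrt

# With c = 16 d^2 and K = N^2/(16 d^2), the proof gives
#   E (1 - c/N^2)^tau  <=  (1 - c/N^2)^K + 2 d sqrt(K) / (N + 1),
# and for large N the right-hand side is close to exp(-1) + 1/2 < 1.

def exit_time_bound(d, N):
    c = 16 * d * d
    K = N * N / c
    return (1 - c / N**2) ** K + 2 * d * sqrt(K) / (N + 1)

b = exit_time_bound(d=2, N=1000)
assert b < 0.87            # close to exp(-1) + 1/2 = 0.8679...
```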
\begin{theorem}\label{CPhi}
\begin{equation}\label{CPhi1}
\lVert \Phi_N \varepsilon\rVert _{\Delta_N , \beta}\le C,
\end{equation}
where $\beta=d'=d/(d-1)$.
\end{theorem}
{\it Proof:}~
Let $c$ be the same
constant as in the previous lemma. For any function $ h\ge 0 $ on $\Delta_N $,
\begin{align*}
&\lVert \Phi_N \cdot h\rVert _{\Delta_N , 1}\\
&=
\frac{c}{N^2}\sum_{x \in \Delta_N}\frac{\Phi_N (x)}{|\Delta_N |}
\sum_{m\ge 0}E_{\omega^N}^{x}\sum_{\tau_m\le j <\tau_{m+1}}(1-\frac{c}{N^2})^{j}h(\hat{X}_{j,N})\\
&\le
\frac{c}{N^2}\sum_{x \in \Delta_N}\frac{\Phi_N (x)}{|\Delta_N |}\sum_{m\ge 0}E_{\omega^N}^{x}(1-\frac{c}{N^2})^{\tau_m}
E_{\omega^N}^{\hat{X}_{\tau_m , N}}\sum_{j=0}^{\tau -1}h(\hat{X}_{j,N})\\
&\le
\frac{c}{N^2}\sum_{x \in \Delta_N}\frac{\Phi_N (x)}{|\Delta_N |}\sum_{m\ge 0}\big[\sup_{y\in \Delta_N}E_{\omega^N}^{y}(1-\frac{c}{N^2})^{\tau}\big]^{m}\cdot \sup_{y\in \Delta_N}E_{\omega^N}^{y}
\sum_{j=0}^{\tau -1}h(\hat{X}_{j,N}).
\end{align*}
Since the function $f(x)=E_{\omega^N}^{x} \sum_{j=0}^{\tau -1}h(\hat{X}_{j,N})$ satisfies
\begin{equation}
\left\{
\begin{array}{rl}
L_{\omega^N}f(x)=-h(x), & \text{if } x\in\Delta_N\\
f(x)=0, & \text{if } x\in\partial \Delta_N,
\end{array}
\right.
\end{equation}
we can apply the maximum principle (Theorem \ref{CMP}) and get
$$\sup_{y\in \Delta_N}E_{\omega^N}^{y}\sum_{j=0}^{\tau -1}h(\hat{X}_{j,N}) \le C N^2
\lVert
\frac{h}{\varepsilon}
\rVert _{\Delta_N ,d}.$$
This, together with Lemma \ref{CLTE} and $\sum_{x \in \Delta_N}\Phi_N
(x)/|\Delta_N |=1$, yields
$$\lVert \Phi_N \cdot h\rVert _{\Delta_N , 1}
\le C \lVert \frac{h}{\varepsilon}\rVert _{\Delta_N ,d}.$$
Hence by the duality of norms,
\begin{equation*}
\lVert \Phi_N \varepsilon\rVert _{\Delta_N , \beta}=\sup_{\norm{h/\varepsilon}_{\Delta_N, d}=1}\norm{
\Phi_N h} _{\Delta_N, 1}\le C . \quad \mbox{\qed}
\end{equation*}
\noindent{\it Proof of (\ref{CPhi0}) under the assumption of Theorem \ref{CLT1}(i) :}\\
Assume that
\begin{equation}\label{asm}
\mathrm{E}\varepsilon(o)^{-p}< \infty
\mbox{ for some } p>d.
\end{equation}
Take $\alpha=(1-1/d+1/p)^{-1}$. We use H\"older's inequality
and Theorem \ref{CPhi}
to get
\[
\lVert \Phi_N\rVert _{\Delta_N,\alpha}\le \lVert \Phi_N \varepsilon\rVert _{\Delta_N , \beta}\lVert \varepsilon^{-1}\rVert _{\Delta_N, p}
\le C \lVert \varepsilon^{-1}\rVert _{\Delta_N, p} .
\]
By the multidimensional ergodic theorem,
\begin{equation*}
\mathop{\rm lim}_{N \to \infty}\lVert \varepsilon^{-1}\rVert _{\Delta_N, p}=(\mathrm{E} \varepsilon(o)^{-p})^{\frac{1}{p}}<\infty,
\quad P\text{-a.s.} \qed
\end{equation*}
\begin{remark}
Without the assumption (\ref{asm}), the conclusion
\eqref{CPhi0} may fail.
To see the difficulty,
let $$A=A(\omega, \varepsilon_0)=\{x:\min_i \omega(x, e_i)<\varepsilon_0\}.$$
By (\ref{CPhi1}) we have
$$\lVert \Phi_N 1_{A^c}\rVert _{\Delta_N, \beta}\le \lVert \Phi_N \frac{\varepsilon}{\varepsilon_0}\rVert _{\Delta_N, \beta} \le \frac{C}{\varepsilon_0}.$$
In order to proceed as before, we need to show that
$\varlimsup_{N\to\infty}\lVert \Phi_N 1_{A}\rVert _{\Delta_N, \alpha} \le C$
for some $1<\alpha\le\beta$.
As Bouchaud's trap model
\cite{Bou,BAC} shows,
this is not always the case. However, if $P\{\max_{|e|=1}\omega(o,e)\ge\xi_0\}=1$,
then for $x\in A$, we have, using that the environment is balanced,
some control of $\Phi_N (x)$
by $\Phi_N|_{A^c}$ (see Lemma \ref{CPhicontrol}).
Further,
in the iid case, $A$ corresponds to a `site percolation' model,
whose cluster sizes can be estimated.
We will show in the next section that these properties lead
to a proof of (\ref{CPhi0}) in the iid setup, without moment assumptions.
\end{remark}
\section{A percolation estimate and proof of Theorem \ref{CLT2}(i)}\label{SePercE}
In this section we consider the RWRE in the iid setting where
$\max_{|e|=1}\omega(x,e)\ge \xi_0$ for all $x\in\mathbb{Z}^d$ and all $\omega\in\Omega$.
We begin by introducing some terminology.\\
The \textit{$l^1$-distance} (graph distance) from $x$ to $y$ is defined as
$$d(x,y)=|x-y|_1=\sum_{i=1}^{d}|x_i-y_i|.$$
Note that $|x|_{\infty}\le |x|_1 \le d|x|_{\infty}$.
In an environment $\omega$, we say that a site $x$ is \textit{open} if $\min_i \omega(x, e_i)<\varepsilon_0$ and \textit{closed} if $\min_i \omega(x, e_i)\ge\varepsilon_0$, and that an edge of $\mathbb{Z}^d$ is open if both of its endpoints are open.
Here $\varepsilon_0>0$ is a constant whose value is to be determined.
An edge is called closed if it is not open.
Let $A=A(\omega)$ denote the subgraph of $\mathbb{Z}^d$ obtained
by deleting all closed edges and
closed sites.
We call $A(\omega)$ a \textit{site percolation}
with parameter $p=p(\varepsilon_0)=P\{\min_i \omega(x, e_i)< \varepsilon_0\}$.
A \textit{percolation cluster} is a connected component of $A$.
(Although here a percolation cluster is defined as a graph, we also use it as
a
synonym for its set of vertices.)
The $l^1$ diameter of a percolation cluster $B$ is defined as $l(B)=\sup_{x\in B, y\in \partial B}d(x,y)$.
For $x\in A$, let $A_x$ denote the percolation cluster that contains $x$ and let $l_x$ denote its diameter. Set $A_x=\emptyset$ and $l_x=0$ if $x\notin A$.
We let $\varepsilon_0$ be small enough that, almost surely, $l_x<\infty$ for all $x\in \mathbb{Z}^d$.\\
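As a concrete illustration of these definitions (the open set below is a hand-picked toy example of ours, not derived from an environment), the cluster $A_x$ and its diameter $l_x$ can be computed by breadth-first search:

```python
from collections import deque

# Toy computation of a percolation cluster A_x and its diameter l_x:
# BFS over nearest-neighbor open sites; the boundary is taken in the
# |.|_infty sense and distances in l^1, as in the text.

open_sites = {(0, 0), (1, 0), (1, 1), (2, 1)}

def cluster(x):
    """Connected component of x in the open subgraph."""
    seen, queue = {x}, deque([x])
    while queue:
        cx, cy = queue.popleft()
        for p in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if p in open_sites and p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

A_o = cluster((0, 0))
boundary = {(cx + dx, cy + dy) for cx, cy in A_o
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)} - A_o
l_o = max(abs(ax - bx) + abs(ay - by)
          for ax, ay in A_o for bx, by in boundary)

assert A_o == open_sites     # the four open sites form one cluster
assert l_o == 5              # e.g. from (0,0) to the boundary site (3,2)
```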
We call a sequence of sites $(x^1, \cdots, x^n)$ a \textit{path} from $x$ to $y$ if $x^1=x$, $x^n=y$ and
$|x^j-x^{j+1}|=1$
for $j=1,\cdots, n-1$. Let
$$\square=\{(\kappa_1,\cdots, \kappa_d)\in \mathbb{Z}^d: \kappa_i=\pm 1\}.$$
We say that a path $(x^1, \cdots, x^n)$ is a \textit{$\kappa$-path}, $\kappa\in \square$, if
$$\omega(x^j, x^{j+1}-x^j)\ge\xi_0$$ and
$\kappa_i(x^{j+1}-x^j)_i\ge 0$ for all $i=1,\cdots, d$ and $j=1,\cdots, n-1$.
Observing that every site has at least two neighbors (in opposite directions) to which the transition probabilities are $\ge \xi_0$, we obtain the following property of the structure of the balanced environment:
\begin{itemize}
\item For any $x\in A$ and any $\kappa\in \square$, there exists a $\kappa$-path from $x$ to some $y\in\partial A_x$, and this path is contained in $\bar{A}_x$.
\end{itemize}
This property gives us a useful inequality.
\begin{lemma}\label{CPhicontrol}
For $x\in A\cap \Delta_N$, if $l_x \le N$, then
\begin{equation}\label{CLTf7}
\Phi_N (x) \le \xi_0 ^{-l_x} \sum_{y \in \partial A_x \cap \Delta_N} \Phi_N (y).
\end{equation}
\end{lemma}
{\it Proof:}~ Suppose that $A_x\neq\emptyset$ (otherwise the proof is trivial). Since $l_x \le N$, $\bar{A}_x\subset \Delta_N (x)$. Note that at least one of the $2^d$
corners of $\Delta_N (x)$ is contained in $\Delta_N$. Without loss of generality, suppose that
$v=x+(N,\cdots,N)\in \Delta_N$. Then there is a $(1,\cdots, 1)$-path in $\bar{A}_x$ from $x$ to
some $y\in \partial A_x\cap \Delta_N$,
as illustrated in the following figure:
\begin{center}
\begin{tikzpicture}[scale=.2]
\fill[gray!30!white][rotate=45] (10, 1.5) ellipse (4.5 and 4.3);
\draw[step=1, gray, very thin] (1.5,3.5) grid (10.5, 12.5);
\draw[dash pattern=on 2pt off 3pt on 4pt off 4pt] (-5,-5) rectangle (15, 15) node[right=1pt]{$v$};
\draw (2,4) rectangle (22, 24);
\draw[thick] (5,5)--(5,6)--(7,6)--(7,7)--(8,7)--(8,10)--(9,10)--(9,11)--(10,11)node[right=1pt]{$y$};
\draw[left=1pt] (5,5) node{$x$};
\draw (3,10) node[right=1pt]{$A_x$};
\draw (23, 14) node[right=1pt]{$\Delta_N$};
\draw (-5, 5) node[left=1pt]{$\Delta_N(x)$};
\end{tikzpicture}
\end{center}
Recalling that $\Phi_N$ is the invariant measure for $\{\hat{X}_{n,N}\}$ defined in Section \ref{SePeEn}, we have
\begin{align*}
\Phi_N (y)
&=
\sum_{z\in \Delta_N} \Phi_N (z) P_{\omega^N}^{d(x,y)} (\hat{z}, \hat{y})\\
&\ge
\Phi_N (x) P_{\omega^N}^{d(x,y)}(\hat{x}, \hat{y})
\ge \Phi_N (x) \xi_0^{l_x}.
\end{align*}
Here $P^m_{\omega^N} (\hat{z}, \hat{y})$ denotes the $m$-step transition probability
of $\{\hat{X}_{n,N}\}$ from $\hat{z}$ to $\hat{y}$.
\qed
Let $S_n=\{x: |x|_{\infty}=n\}$ denote the boundary of $\Delta_n$.
Let $x\to y$ be the event that $y\in \bar{A}_x$
and $o\to S_n$ be the event that $o\to x$ for some $x\in S_n$.
The following theorem, which is the site percolation version of the combination of
Theorems 6.10 and 6.14 in \cite{GG}, gives an exponential bound
on the diameter of the cluster containing the origin, when $p$ is small.
\begin{theorem}\label{perc}
There exists a function $\phi(p)$ of $p=p(\varepsilon_0)$ such that
$$P\{o\to S_n\}\le C n^{d-1} e^{-n\phi(p)}$$
and $\mathop{\rm lim}_{p\to 0}\phi(p)=\infty$.
\end{theorem}
Let $A_x(n)$ denote the connected component of $A_x\cap \Delta_n(x)$ that contains $x$ and set
\[q_n=P\{o\to S_n\}.\]
The proof of Theorem
\ref{perc} will proceed by showing some (approximate) subadditivity
properties of $q_n$. We thus recall the following subadditivity lemma:
\begin{lemma}
\label{Clem-subadd}
If a sequence of finite numbers $\{b_k: k\ge 1\}$ is subadditive, that is,
$b_{m+n}\le b_m +b_n$ for all $m,n$,
then $\mathop{\rm lim}_{k\to\infty}b_k/k=\inf_{k\in\mathbb{N}} b_k/k$.
\end{lemma}
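A minimal numeric illustration of the lemma (with the concrete subadditive sequence $b_k=\sqrt{k}$, an example of our own):

```python
from math import sqrt

# Fekete's subadditivity lemma in action: b_k = sqrt(k) is subadditive
# (sqrt(m+n) <= sqrt(m) + sqrt(n)), and b_k / k = k^{-1/2} decreases to its
# infimum, which equals the limit (here 0).

def b(k):
    return sqrt(k)

assert all(b(m + n) <= b(m) + b(n) + 1e-12
           for m in range(1, 40) for n in range(1, 40))
ratios = [b(k) / k for k in range(1, 2001)]
assert all(ratios[i] > ratios[i + 1] for i in range(len(ratios) - 1))
assert min(ratios) == ratios[-1]     # infimum approached along the tail
```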
\noindent{\it Proof of Theorem \ref{perc}:}
We follow the proof given by Grimmett in \cite{GG} in the bond percolation case.
By the BK inequality (\cite{GG}, p.~38),
$$q_{m+n}\le \sum_{x\in S_m} P\{o\to x\} P\{x\to x+S_n\}.$$
But $P\{o\to x\}\le q_m$ for $x\in S_m$
and $P\{x\to x+S_n\}=q_n$ by translation invariance. Hence we get
\begin{equation}\label{Csub1}
q_{m+n}\le |S_m|q_m q_n.
\end{equation}
By exchanging $m$ and $n$ in (\ref{Csub1}),
\begin{equation}\label{Csub11}
q_{m+n}\le |S_{m\wedge n}|q_m q_n.
\end{equation}
On the other hand, let $U_x$ be the event that
$x\in \overline{A_o(m)}$
and let $V_x$ be the event that
$\overline{A_x(n)}\cap S_{m+n}\neq\emptyset$.
We use the FKG inequality (\cite{GG}, p.~34) to find that
$$q_{m+n}\ge P\{U_x\}P\{V_x\}\quad \mbox{ for any $x\in S_m$}.$$
However, $\sum_{x\in S_m}P\{U_x\}\ge q_m$, which implies that
$$\max_{x\in S_m}P\{U_x\}\ge \frac{q_m}{|S_m|} .$$
Let $\gamma_n=P\{\overline{A_o(n)}\cap\{x:x_1=n\}\neq\emptyset\}$; then $P\{V_x\}\ge\gamma_n$.
Moreover, $\gamma_n\le q_n\le 2d\gamma_n$.
Hence
\begin{equation*}
q_{m+n}\ge \frac{q_m q_n}{2d |S_m|},
\end{equation*}
and then
\begin{equation}\label{Csub2}
q_{m+n}\ge \frac{q_m q_n}{2d |S_{m\wedge n}|}.
\end{equation}
Note that $|S_m|\le C_d m^{d-1}$. Letting
$$b_k =\log q_k +\log C_d +(d-1)\log (2k),$$
one checks
using (\ref{Csub11})
that the sequence $\{b_k\}$ is subadditive.
Similarly by (\ref{Csub2}), $\{-\log q_k +\log (2d C_d) +(d-1)\log (2k)\}$
is subadditive.
Thus, using Lemma \ref{Clem-subadd},
$$\phi (p):=-\mathop{\rm lim}_{k\to\infty}\frac{1}{k}\log q_k$$
exists and
\begin{equation}\label{Csub3}
\log q_k +\log C_d +(d-1)\log (2k)\ge -k\phi (p),
\end{equation}
\begin{equation}\label{Csub4}
-\log q_k +\log (2d C_d) +(d-1)\log (2k)\ge k\phi(p).
\end{equation}
The first part of the theorem follows from (\ref{Csub4}); the second part
follows by noting that, for fixed $k$, $q_k \downarrow 0$ as
$p\downarrow 0$, so that (\ref{Csub3}) forces $\phi (p)\to \infty$. \qed
\begin{remark}
It follows from Theorem \ref{perc}
that
\begin{equation}\label{Clo}
P\{l_o \ge n\} \le P\{o\to S_{\lfloor n/2d\rfloor}\}\le C e^{\phi(p)} n^{d-1} e^{-n\phi(p)/2d}.
\end{equation}
With (\ref{Clo}) and the Borel-Cantelli lemma, one concludes
that, P-almost surely, $l_x\le N$ is true for all
$x\in \Delta_N$ when $N$ is sufficiently large and $p$ is such that $\phi(p)>0$.
Hence the inequality (\ref{CLTf7}) holds for all $x\in\Delta_N$ when $N$
is large.
\end{remark}
\noindent{\it Proof of (\ref{CPhi0}) under the assumption of Theorem \ref{CLT2}(i):}
By H\"older's inequality,
$$\frac{1}{|\Delta_N|}\sum_{y\in \partial A_x \cap \Delta_N} \Phi(y)\le
\lVert \Phi_N 1_{\partial A_x}\rVert _{\Delta_N, \beta}
\big(\frac{|\partial A_x|}{|\Delta_N|}\big)^{1-1/\beta} ,$$
so when $N$ is large enough we have by Lemma \ref{CPhicontrol} that for any $x\in A\cap \Delta_N$,
\begin{equation}\label{CLTf8}
\Phi_N (x)\le \xi_0^{-l_x} |\partial A_x|^{1-1/\beta } |\Delta_N |^{1/\beta}
\lVert \Phi_N 1_{\partial A_x}\rVert _{\Delta_N, \beta}.
\end{equation}
Hence for any $\alpha \in (1, \beta)$,
\begin{align*}
&\lVert \Phi_N 1_A\rVert _{\Delta_N,\alpha}^\alpha\\
&\le
\frac{1}{|\Delta_N|}\sum_{x\in A\cap\Delta_N}\big(\xi_0^{-l_x} |\partial A_x|^{1-1/\beta}
|\Delta_N |^{1/\beta}\lVert \Phi_N 1_{\partial A_x}\rVert _{\Delta_N, \beta} \big)^\alpha\\
&\le
\left[\frac{1}{|\Delta_N|}\sum_{x\in A\cap\Delta_N}
\big(\xi_0^{-l_x} |\partial A_x|^{1-1/\beta}|A_x|^{1/\beta}\big)^{\alpha (\beta/\alpha)'}\right]
^{1-\alpha/\beta}\\
&\qquad\times \left[\frac{1}{|\Delta_N|}\sum_{x\in A\cap\Delta_N}
\big(\frac{|\Delta_N|^{1/\beta}\lVert \Phi_N 1_{\partial A_x}\rVert _{\Delta_N, \beta}}
{|A_x|^{1/\beta}}\big)^{\beta}
\right]^{\alpha/\beta}\\
&=
\left[\frac{1}{|\Delta_N|}\sum_{x\in A\cap\Delta_N}
\big(\xi_0^{-l_x} |\partial A_x|^{1-1/\beta}|A_x|^{1/\beta}\big)^{\alpha\beta/(\beta -\alpha)}\right]
^{1-\alpha/\beta}\\
&\qquad\times
\left(\sum_{x\in A\cap\Delta_N}
\frac{\lVert \Phi_N 1_{\partial A_x}\rVert _{\Delta_N, \beta}^{\beta}}{|A_x|}
\right)^{\alpha/\beta},
\end{align*}
where we used (\ref{CLTf8}) in the first inequality and H\"older's inequality in the second.
Observe that
\begin{equation}\label{CPhiaverage}
\sum_{x\in A\cap\Delta_N} \dfrac{\lVert \Phi_N 1_{\partial A_x}\rVert _{\Delta_N, \beta}^{\beta}}{|A_x|}
\le \sum_{i=1}^n \lVert \Phi_N 1_{\partial A_i}\rVert _{\Delta_N, \beta}^{\beta}
\le 2d \lVert \Phi_N 1_{\partial A}\rVert _{\Delta_N, \beta}^{\beta}
\le C\varepsilon_0^{-\beta},
\end{equation}
where $A_1, \cdots, A_n$ are the distinct clusters that intersect $\Delta_N$.
On the other hand, the multidimensional ergodic theorem gives
\begin{align}\label{percergodic}
&\mathop{\rm lim}_{N\to\infty}\frac{1}{|\Delta_N|}\sum_{x\in A\cap\Delta_N}
\big(\xi_0^{-l_x} |\partial A_x|^{1-1/\beta}|A_x|^{1/\beta}\big)^{\alpha\beta/(\beta -\alpha)}\nonumber\\
&= E \big(\xi_0^{-l_o} |\partial A_o|^{1-1/\beta}|A_o|^{1/\beta}\big)^{\alpha\beta/(\beta -\alpha)}
\le
C E \big(\xi_0^{-l_o} l_o^d\big)^{\alpha\beta/(\beta -\alpha)} \quad \mbox{P-a.s.,}
\end{align}
which by (\ref{Clo}) is finite when $\varepsilon_0$ is small.
\qed
\section{Transience in general ergodic environments}\label{SeTran}
In this section we will prove (ii) of Theorem \ref{CLT1} by an argument similar
to that in
\cite{ZO}. The main differences in our method are that we use a stronger control of the hitting time
(Lemma \ref{Ctau}), and that we apply a mean value
inequality (Theorem \ref{Cmvi}) instead of the discrete Harnack
inequality used in \cite{ZO}.
\begin{lemma}\label{Ctau}
Let $\{X_n\}$ be a
random walk in a balanced environment
$\omega$ such that $\omega(x,o)=0$ for all $x$. For
any $r>0$, define $\tau=\tau(r)=
\inf\{n: |X_n|>r\}$. Then $E_\omega^o \tau\le (r+1)^2$.
\end{lemma}
{\it Proof:}~ Observe that $\{|X_n|^2-n\}$ is a (quenched) martingale with respect to
$\{\mathcal{F}_n=\sigma(X_1,\cdots,X_n)\}$.
Thus by optional stopping, $0= E_\omega^o[|X_\tau|^2-\tau]\le (r+1)^2-E_\omega^o\tau$. \qed
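In one dimension the bound of Lemma \ref{Ctau} is attained: for the simple random walk on $\mathbb{Z}$ and integer $r$, $E^o\tau(r)=(r+1)^2$ exactly. The following sketch (a numerical check, not part of the proof) solves the linear system $f(x)=1+\frac{1}{2}f(x-1)+\frac{1}{2}f(x+1)$ on $\{-r,\dots,r\}$ with $f(\pm(r+1))=0$:

```python
import numpy as np

def expected_exit_time(r):
    """Solve f(x) = 1 + (f(x-1)+f(x+1))/2 on {-r,...,r},
    with f(+-(r+1)) = 0; returns f(0) = E^0 tau(r)."""
    n = 2 * r + 1                     # interior states -r, ..., r
    A = np.zeros((n, n))
    rhs = np.ones(n)
    for i in range(n):
        A[i, i] = 1.0
        if i > 0:
            A[i, i - 1] = -0.5        # interior left neighbour
        if i < n - 1:
            A[i, i + 1] = -0.5        # interior right neighbour
        # neighbours at +-(r+1) are absorbed and contribute 0
    f = np.linalg.solve(A, rhs)
    return f[r]                       # index r corresponds to the origin

for r in [1, 5, 20]:
    print(r, expected_exit_time(r))   # matches (r+1)**2
```

The exact solution of the system is $f(x)=(r+1)^2-x^2$, so the printed values are $(r+1)^2$ up to rounding.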
To prove Theorem \ref{CLT1}(ii), we shall make use of the following mean-value inequality,
which is a modification of Theorem 3.1 in \cite{KT}. Let $B_{r}(z)=\{x\in\mathbb{Z}^d: |x-z| <r\}$.
We shall also write $B_r (o)$ as $B_r$; recall the definition of $L_\omega$ in
\eqref{operator}.
\begin{theorem}\label{Cmvi}
For any function $u$ on $\bar B_R (x_0)$ such that
$$L_{\omega} u =0, \quad x \in B_R (x_0)$$
and any $\sigma\in (0,1)$, $0<p\le d$, we have
\[
\max_{B_{\sigma R}(x_0)}u\le C \lVert
\frac{u^+}{\varepsilon^{d/p}}\rVert _{B_R (x_0), p},
\]
where $C$ depends on $\sigma$, $p$ and $d$.
\end{theorem}
We postpone the proof of Theorem \ref{Cmvi} to
the next section, and now prove Theorem \ref{CLT1}(ii).
\noindent{\it Proof of Theorem \ref{CLT1}(ii):}
Note that the transience of the random walk would not change
if we considered the walk restricted
to its jump times. That is, the transience or recurrence of the random walk
in an environment $\omega$ is the same as in an environment
$\tilde \omega$,
where $\tilde \omega$ is defined by
$\tilde{\omega}(x,e)=\omega (x,e)/(1-\omega(x,o))$. Therefore, in the sequel we assume $\omega(x,o)=0$
for all $x$ and almost all $\omega$.
Let $K$ be any constant that is at least 3.
We denote $B_{K^i}(x)$ by $B^i(x)$ and define
$\tau_i:=\inf \{n: |X_n|> K^i\}$.
Our approach is to bound the (annealed) expected number of visits to the origin by
the walk; this requires some a priori bounds on the moments of
$\varepsilon(o)^{-1}$.\\
For any $z\in\partial B^i$ and $y\in B^{i-1}$, note that $v(x):=E_\omega^x (\mbox{\# visits at $y$ before $\tau_{i+2}$})$ satisfies $L_\omega v (x)=0$ for
$x\in B^{i+2}\setminus\{y\}$. We then have, for $p\in(0,d]$,
\begin{align}\label{CLTf11}
& E_{\theta^{y}\omega}^{z} (\mbox{ \# visits at $o$ before $\tau_{i+1}$})
\nonumber\\
&\le
E_{\omega}^{z+y}(\mbox{\# visits at $y$ before $\tau_{i+2}$})\nonumber\\
&\le
\max_{x\in B^{i-1}(z)}E_{\omega}^{x}(\mbox{\# visits at $y$ before $\tau_{i+2}$})\nonumber\\
&\le
C\Bnorm{\frac{E_{\omega}^{x}(
\mbox{\# visits at $y$ before $\tau_{i+2}$})}{\varepsilon_\omega (x)^{d/p}}
} _{B_{2K^{i-1}}(z), p}\nonumber\\
&\le
C\Bnorm{ \frac{E_{\omega}^{x}
(\mbox{\# visits at $y$ before $\tau_{i+2}$})}{\varepsilon_\omega (x)^{d/p}}
} _{B^{i+2}, p},
\end{align}
where we used Theorem \ref{Cmvi} in the third inequality. Take $p=d/q$ (without loss of
generality, we always assume that $q< d$).
Then, by (\ref{CLTf11}) and Lemma \ref{Ctau},
\begin{align}\label{CLTf12}
& \sum_{y\in B^{i-1}}E_{\theta^{y}\omega}^o
(\mbox{ \# visits at $o$ in $[\tau_i,\tau_{i+1})$})\nonumber\\
&\le
C\sum_{y\in B^{i-1}} \left[
\frac{1}{|B^{i+2}|}\sum_{x\in B^{i+2}}
\frac{
E_{\omega}^{x}
(\mbox{\# visits at $y$ before $\tau_{i+2}$})^{d/q}}
{\varepsilon_\omega (x)^d}
\right]^{q/d}\nonumber\\
&\le
C K^{-iq}\sum_{y\in B^{i-1}}\sum_{x\in B^{i+2}}
\frac{E_{\omega}^{x}(\mbox{\# visits at $y$ before $\tau_{i+2}$})}
{\varepsilon_\omega (x)^q}
\nonumber\\
&=
C K^{-iq}\sum_{x\in B^{i+2}}
\frac{E_{\omega}^{x}(\mbox{\# visits at $B^{i-1}$ before $\tau_{i+2}$})}
{\varepsilon_\omega (x)^q}\nonumber\\
&\le
C K^{-iq}\sum_{x\in B^{i+2}}\frac{E_{\omega}^{x}\tau_{i+2}}{\varepsilon_\omega (x)^q}\nonumber\\
&\le
C K^{(2-q)i} \sum_{x\in B^{i+2}} \varepsilon_\omega(x)^{-q}.
\end{align}
Taking expectations and using translation invariance, we have
\begin{equation*}
\mathbb{E}^o(\mbox{\# visits at $o$ in $[\tau_i, \tau_{i+1})$})
\le C K^{(2-q)i} E \varepsilon^{-q}.
\end{equation*}
Therefore, if $E \varepsilon^{-q}<\infty$ for some $q>2$, then
\begin{equation*}
\mathbb{E}^o (\mbox{\#
visits at $o$})\le C E \varepsilon^{-q}
\sum_{i=1}^{\infty}K^{(2-q)i}<\infty .
\end{equation*}
This proves Theorem \ref{CLT1}(ii) for $\{\Omega, P\}$ such that $\omega (x,o)=0$ for all $x$
and almost all $\omega$.
As mentioned earlier,
the general case follows by
replacing
$\varepsilon$ with $\varepsilon/(1-\omega(o,o))$.\qed\\
\begin{remark}
It is natural to expect
that arguments similar to those in the proof of the invariance principle also work for
proving the transience in the iid case. Namely, one may hope to control
$P_\omega^x\{\mbox{visit $o$ in $[\tau_i,\tau_{i+1})$}\}$ using some mean value inequality (like Theorem \ref{Cmvi}), and to use percolation arguments to handle
``bad sites''
where the ellipticity constant $\varepsilon$ is small.
This suggests considering walks that jump from bad sites to
good sites. In \cite{KT2}, Kuo and Trudinger
proved a maximum principle and mean value inequality for balanced operators
in general meshes, which may be applied to balanced walks with possibly
big jumps. However, their estimates,
in the presence of a small ellipticity
constant, are not strong enough. To overcome this issue,
we will prove a modified maximum principle that involves only
big exit probabilities, and then use it to prove the transience
in the iid case with no moment assumptions.
\end{remark}
\section{Transience in iid environments}\label{SeTriid}
In this section we prove a modified maximum principle for
balanced environments.
We then prove Theorem \ref{CLT2}(ii) using the corresponding mean
value inequality (Theorem \ref{Cmvi2})
and percolation arguments.
\subsection{Difference operators}\label{SeTriid1}
Following \cite{KT2}, we introduce general difference operators.
Let $a$ be a nonnegative function on $\mathbb{Z}^d\times \mathbb{Z}^d$
such that for any $x$,
$a(x,y)> 0$
for only finitely many $y$.
Define the linear operator $L_a$ acting on the set of functions
on $\mathbb{Z}^d$ by
\[L_a f(x)=\sum_y a(x,y)(f(y)-f(x)).\]
We say that $L_a$ is \textit{balanced} if
\begin{equation}\label{CLTe7}
\sum_y a(x,y)(y-x)=0.
\end{equation}
Throughout this section we assume that $L_a$ is a probability operator, that is,
\[\sum_y a(x,y)=1.\]
For any finite subset $E\subset\mathbb{Z}^d$, define its boundary
\[E^b=E^b(a)=\{y\notin E: a(x,y)>0 \text{ for some } x\in E
\},
\]
and set
\begin{equation}
\label{CLTeq-ofernew}
\tilde{E}=E\cup E^b.
\end{equation}
Define the upper contact set of $u$ at $x\in E$ as
$$I_u(x)=I_u(x,E,a)=\{s\in\mathbb{R}^d: u(x)-s\cdot x\ge u(z)-s\cdot z \text{ for all }z\in\tilde{E}\}.$$
Set
\begin{align*}
&h_x=h_x(a)=\max_{y: a(x,y)>0}\abs{x-y},\\
&b(x)=\sum_y a(x,y)(y-x), \mbox{ and }b_0=\sup|b|.
\end{align*}
Note that $b_0=0$ when $L_a$ is balanced.
The following lemma is useful in the proofs of various mean
value inequalities. It is similar to Theorem 2.2 in \cite{KT2},
except that the proof in \cite{KT2} contains several unclear
passages, e.g., in the inequality above (2.23) in \cite{KT2},
and so we provide a complete proof.
Throughout, we set $u^+=u\vee 0$.
\begin{lemma}\label{Cmvilemma}
Fix $R>0$. Let $\eta(x)=\eta_R(x):=(1-\abs{x}^2/R^2)^\beta 1_{|x|<R}$ be a
function on $\mathbb{R}^d$.
For any function $u$ on $B_R$ such that
$L_a u\ge 0$ in $B_R$ and any $\beta\ge 2$,
we let $v=\eta u^+$.
Then, for any $x\in B_R$ with $I_v (x)=I_v(x, B_R, a)\neq\emptyset$,
\[L_a v(x)\ge -C(\beta, b_0R) \eta^{1-2/\beta}R^{-2} h_x^2 u^+,\]
where $C(\beta,b_0R)$ is a constant that depends only on $\beta$ and $b_0R$.
\end{lemma}
{\it Proof:}~
We only need to consider the nontrivial case where $v\not\equiv 0$.
For $s=s(x)\in I_v(x)$, recalling the definition of $I_v$, one has
$$|s|\le 2v(x)/(R-|x|).$$
Note that $I_v(x)\neq\emptyset$ implies $u(x)> 0$.
If further $R^2-|x|^2\ge 4R \abs{x-y}$,
computations as in \cite[pg. 426]{KT2}
reveal that
\begin{align}
2^{-\beta}
&\le
\frac{\eta(y)}{\eta(x)}\le 2^\beta,\label{Cfirst}\\
\abs{\eta(x)-\eta(y)}
&\le
\beta 2^\beta R^{-1} \eta(x)^{1-1/\beta}|x-y|,\label{Csecond}\\
\abs{\eta(x)-\eta(y)-\nabla\eta(x)(x-y)}
&\le
\beta (\beta-1)2^{\beta} R^{-2}\eta(x)^{1-2/\beta}|x-y|^2,\label{Cthird}\\
|s|
&\le
4 \eta^{1-1/\beta}R^{-1}u,\label{Cfourth}
\end{align}
where
\begin{equation}\label{Afifth}
\nabla\eta=-2\beta x R^{-2}\eta^{1-1/\beta}
\end{equation}
is the gradient of $\eta$.
Following \cite{KT2}, we set $w(z)=v(z)-s\cdot (z-x)$.
By the definition of $s$, we have
$w(x)\ge w(z)$ for all $z\in \tilde E$.
Then
\begin{align}\label{A*6}
v(x)-v(y)=\frac{\eta(x)}{\eta(y)}(v(x)-v(y))
&+\frac{\eta(y)-\eta(x)}{\eta(y)}s(x-y)\\
&+\frac{\eta(y)-\eta(x)}{\eta(y)}(w(x)-w(y)).\nonumber
\end{align}
Consider first $x$ such that $R^2-|x|^2\ge 4Rh_x$.
By (\ref{Csecond}), for any $y$ such that $a(x,y)>0$,
\begin{equation}\label{A*7}
\frac{\eta(y)-\eta(x)}{\eta(y)}(w(x)-w(y))
\le \beta 2^\beta R^{-1}h_x\eta(x)^{-1/\beta}\frac{\eta(x)}{\eta(y)}(w(x)-w(y)).
\end{equation}
Since
\begin{align*}
&\sum_y a(x,y)\frac{\eta(x)}{\eta(y)}(w(x)-w(y))\\
&=\sum_y a(x,y) \Big[\frac{\eta(x)}{\eta(y)}\big(v(x)-v(y)\big)
+\frac{\eta(x)-\eta(y)}{\eta(y)}s (y-x)\Big]+s\cdot b(x),
\end{align*}
by \eqref{A*6}, \eqref{A*7} and noting $R^{-1}\eta^{-1/\beta}h_x\le 1/4$, we obtain
\begin{align}\label{A*8}
&\sum_y a(x,y)(v(x)-v(y))\nonumber\\
&\le
(1+\beta 2^\beta R^{-1}h_x\eta(x)^{-1/\beta})
\sum_y a(x,y)\Big[\frac{\eta(x)}{\eta(y)}\big(v(x)-v(y)\big)+\frac{\eta(x)-\eta(y)}{\eta(y)}s (y-x)+s\cdot b(x)\Big]
\nonumber\\
&\le
\beta 2^{\beta-1}\big[\sum_y a(x,y) \frac{\eta(x)}{\eta(y)}\big(v(x)-v(y)\big)+
4(\beta 2^{2\beta}+b_0R)\eta^{1-2/\beta}R^{-2}h_x^2u\big],
\end{align}
where we used \eqref{Cfirst}, \eqref{Csecond} and \eqref{Cfourth} in the last inequality.
Moreover, recalling that $u(x)>0$ (because $I_v(x)\neq\emptyset$),
\begin{align}\label{A*9}
& \sum_y a(x,y) \frac{\eta(x)}{\eta(y)}\big(v(x)-v(y)\big)\nonumber\\
&=
\sum_y a(x,y)\left [\eta(x)\big(u(x)-u^+(y)\big)+\big(\eta(x)-\eta(y)\big)u(x)+\frac{(\eta(x)-\eta(y))^2}{\eta(y)}u(x)\right]\nonumber\\
&\stackrel{a\geq 0}{\le}
-\eta(x)L_a u(x)+\sum_y a(x,y) \left[\big(\eta(x)-\eta(y)\big)u(x)+\frac{(\eta(x)-\eta(y))^2}{\eta(y)}u(x)\right]\nonumber\\
&\stackrel{L_au\ge 0}{\le}
\sum_y a(x,y) \left[\big(\eta(x)-\eta(y)-\nabla\eta(x)(x-y)\big)u(x)+\frac{(\eta(x)-\eta(y))^2}{\eta(y)}u(x)\right]\nonumber\\
&\qquad\qquad-\nabla\eta(x)b(x)u(x)\nonumber\\
&\le
(2^{3\beta+1}+b_0R)\beta^2\eta^{1-2/\beta}h_x^2R^{-2}u,
\end{align}
where we used (\ref{Cfirst}), (\ref{Csecond}), (\ref{Cthird}) and (\ref{Afifth}) in the last inequality.
Hence, by (\ref{A*8}) and (\ref{A*9}), we conclude that
\begin{equation*}
-L_a v
\le
(2^{3\beta+1}+b_0R)\beta^32^\beta \eta^{1-2/\beta}R^{-2}h_x^2 u
\end{equation*}
holds in $\{x: R^2-|x|^2\ge 4Rh_x, I_v(x)\neq\emptyset\}$.
On the other hand, if $R^2-|x|^2<4Rh_x$, then $\eta^{1/\beta}\le 4h_x/R$. Thus by the fact that $u(x)>0$,
we have $-L_a v\le v(x)\le 16\eta^{1-2/\beta} R^{-2} h_x^2 u$. \qed\\
\noindent{\it Proof of Theorem \ref{Cmvi}:}
Since $L_\omega$ is a balanced operator ($b_0=0$) and $h_x=1$ in this case,
by the above lemma,
\[L_\omega v\ge -C(\beta) \eta^{1-2/\beta}R^{-2} u\] for $x\in B_R$ such that $I_v(x)\neq\emptyset$,
where $C(\beta)$ depends only on $\beta$.
Applying Theorem \ref{CMP} to $v$ and taking $\beta=2d/p\ge 2$,
we obtain
\begin{align*}
\max_{B_R} v
&\le
C \Bnorm{ \eta^{1-2/\beta} \frac{u^+}{\varepsilon}} _{B_R, d}
=C\Bnorm{ v^{1-p/d}\frac{(u^+)^{p/d}}{\varepsilon}}_{B_R, d}\\
&\le
C(\max_{B_R} v)^{1-p/d}\Bnorm{\frac{u^+}{\varepsilon^{d/p}}} _{B_R, p}^{p/d}.
\end{align*}
Hence
\[
\max_{B_R} v \le C \Bnorm{\frac{u^+}{\varepsilon^{d/p}}} _{B_R, p},
\]
and then
\[
\max_{B_{\sigma R}}u\le (1-\sigma^2)^{-2d/p}\max_{B_{\sigma R}}v
\le C(\sigma, p, d) \Bnorm{\frac{u^+}{\varepsilon^{d/p}}} _{B_R, p}. \text{\qed}
\]
\subsection{A new maximum principle and proof of Theorem \ref{CLT2}(ii)}
For any fixed environment $\omega\in\Omega$, let $\varepsilon_0>0$ be a constant to be determined,
and define site percolation as in Section \ref{SePercE}. Recall that for $x\in\mathbb{Z}^d$, $A_x$
is the percolation cluster that contains $x$ and $l_x$ is its $l^1$-diameter.
As mentioned in the introduction, the transience would not change if we
considered the walk restricted to its
jump times. Without loss of generality, we assume that $\omega(x,o)=0$ for all $x$, $P$-almost surely.
Recall the definition of $\square$ and $\kappa$-path for $\kappa\in\square$ in Section \ref{SePercE}. Note
that under our assumption, $\max_i \omega(x, e_i)\ge 1/(2d)$, so we take $\xi_0=1/(2d)$ in the definition of
$\kappa$-paths.
For each $\kappa\in\square$, we pick a site $y_\kappa=y(x, \kappa)\in \partial A_x$ such that
\[d(x, y_\kappa)=\max_{\substack{y:
\exists \text{ $\kappa$-path in $\bar{A}_x$ }\\\text{ from $x$ to $y$}
}
} d(x,y)\]
and let $\Lambda_x\subset \bar{A}_x$ be the union of (the points of the) $\kappa$-paths from $x$ to $y_\kappa$ over all $\kappa\in\square$.
From the definition of $y_\kappa$ one can conclude that
\begin{itemize}
\item For any $q\in\mathbb{R}^d$, we pick a $\kappa=\kappa_q\in \square$ such that
\[q_j \kappa_j\le 0 \text{ for all }j=1,\cdots, d.\]
Then $(y_\kappa-x)_j q_j\le 0$ for all $j=1,\cdots, d$. Moreover, for $i\in\{1,\cdots, d\}$, $q_i>0$ implies $y_\kappa-e_i\notin \Lambda_x$, and $q_i<0$
implies $y_\kappa+e_i\notin \Lambda_x$.
\end{itemize}
In the sequel we let $\tau_{\Lambda_x}=\inf\{n>0: X_n\notin \Lambda_x\}$ and
\[a(x,y)=P_\omega^x \{
X_{\tau_{\Lambda_x}}=y
\}.
\]
By the fact that $X_n$ is a (quenched)
martingale, it follows that $L_a$ is a balanced operator.
For the statement of the next theorem,
recall the definition of $\tilde{E}$ in \eqref{CLTeq-ofernew}.
\begin{theorem}\label{Cmp2}
Let $E\subset\mathbb{Z}^d$ be bounded.
Let $u$ be a function on $\tilde{E}$. If $L_a u(x)\ge -g(x)$ for all $x\in E$ such that $I_u(x)=I_u(x,E,a)\neq \emptyset$,
then
$$\max_E u\le
\frac{d\mathop{\rm diam} \tilde{E}}{\varepsilon_0}
\bigg(\sum_{\substack{
x\in E\\I_u (x)\ne \emptyset
}
}
\abs{g(x)(2d)^{l_x}}^d
\bigg)^{\frac{1}{d}}
+\max_{E^b}u .$$
\end{theorem}
{\it Proof:}~ Without loss of generality, assume $g\ge 0$ and
$$\max_E u=u(x_0)>\max_{E^b}u$$
for some $x_0\in E$. Otherwise, there is nothing to prove.
For $ s\in \mathbb{R}^{d} $ such that $ |s|_{\infty} \le [u(x_0)-\max_{E^b}u]/(d\mathop{\rm diam} \tilde{E}) $,
we have \[ u(x_0)-u(x) \ge s \cdot (x_0-x) \] for all $ x \in E^b $,
which implies that
$\max_{z\in\tilde{E}}u(z)-s\cdot z$
is achieved in $E$. Hence
$ s \in \bigcup_{x \in E} I_{u}(x) $ and
\begin{equation}\label{CLTe1}
\left[-\dfrac{u(x_0)-\max_{E^b }u}{d\mathop{\rm diam} \tilde{E}}, \dfrac{u(x_0)-\max_{E^b}u}{d\mathop{\rm diam} \tilde{E}}\right]^{d} \subset \bigcup_{x\in E} I_{u}(x) .
\end{equation}
Further, if $s\in I_u(x)$, we set
$$w(z)=u(z)-s(z-x).$$
Then $w(z)\le w(x)$ for all $z\in \tilde{E}$ and
\begin{equation}\label{CLTe2}
I_u(x)=I_w(x)+s.
\end{equation}
Since for any $q\in I_w(x)$, there is $\kappa=\kappa_q\in\square$ such that
$$q_j(x-y_\kappa)_j\ge 0 \text{ for } j=1,\cdots, d,$$ we have
$$ w(x)-w(y_\kappa\pm e_i)\ge q(x-y_\kappa\mp e_i)\ge \mp q_i.$$
Moreover, for any $i\in\{1,\cdots, d\}$, if $q_i>0$, then $y_\kappa-e_i\notin \Lambda_x$ and we have
$w(x)-w(y_\kappa-e_i)\ge |q_i|$. Similarly, if $q_i<0$, then $y_\kappa+e_i\notin \Lambda_x$ and
$w(x)-w(y_\kappa+e_i)\ge |q_i|$. We conclude that
$$|q_i|\le \frac{\sum_{y} a(x, y)(w(x)-w(y))}{\min\limits_{\pm}\{a(x, y_\kappa\pm e_i)\}}.$$
On the other hand, from the construction of $\Lambda_x$ we obtain (noting that
$y_\kappa\in\partial A_x$)
$$a(x, y_\kappa\pm e_i)\ge (\frac{1}{2d})^{l_x} \varepsilon_0.$$
Hence, since $L_a$ is balanced,
$$|q_i|\le
\frac{(2d)^{l_x}}{\varepsilon_0}\sum_y a(x,y)(w(x)-w(y))
=
\frac{(2d)^{l_x}}{\varepsilon_0}(-L_a u)
\le \frac{(2d)^{l_x}}{\varepsilon_0} g$$
for all $i$. Therefore
\begin{equation}\label{CLTe3}
I_w(x)\subset [-(2d)^{l_x}\varepsilon_0^{-1} g, (2d)^{l_x}\varepsilon_0^{-1} g]^d.
\end{equation}
Combining (\ref{CLTe1}), (\ref{CLTe2}) and (\ref{CLTe3}), we conclude that
\begin{equation*}
\left(\dfrac{u(x_0)-\max_{E^b}u}{d\mathop{\rm diam} \tilde{E}}\right)^d\le
\sum_{\substack{
x\in E\\I_u (x)\ne \emptyset
}
}
\abs{g(x)(2d)^{l_x}\varepsilon_0^{-1}}^d.
\quad\quad\quad\quad\quad
\quad\quad\quad\quad\quad
\mbox{\qed}
\end{equation*}
As with Theorem \ref{Cmvi}, we have a corresponding mean value inequality.
\begin{theorem}\label{Cmvi2}
For any function $u$ on $B_R$ such that
$$L_a u=0, \quad x\in B_R$$
and any $\sigma\in (0,1)$, $0<p\le d$, we have
\[
\max_{B_{\sigma R}} u\le C\big(\frac{\mathop{\rm diam} \tilde{B}_R}{\varepsilon_0 R}\big)^{d/p}
\norm{[l_x^2 (2d)^{l_x}]^{d/p}u^+}_{B_R, p},
\]
where $C$ depends on $\sigma, p$ and $d$.
\end{theorem}
{\it Proof:}~ By the same argument as in the proof of Theorem \ref{Cmvi}, Lemma \ref{Cmvilemma} and
Theorem \ref{Cmp2} imply Theorem \ref{Cmvi2}. \qed\\
Having established
Theorem \ref{Cmvi2}, we can now
prove the transience of random walks in a balanced
iid environment for $d\geq 3$.
\noindent{\it Proof of Theorem \ref{CLT2}(ii)}:
Let $K$ be any constant $\ge 4$ and define $B^i, \tau_i$ as in Section \ref{SeTran}.
Let $\Omega_i=\{\omega\in\Omega: l_x\le K^{i-1} \mbox{ for all $x\in B^{i+2}$}\}$.
For any $\omega\in\Omega_i$, $z\in\partial B^i$ and $y\in B^{i-1}$, note that
$u(x):=P_\omega^x \{\mbox{visit $y$ before $\tau_{i+2}$}\}$ satisfies
\[L_a u(x)=0\]
for $x\in B_{2K^{i-1}}(z)$. By a similar argument as in (\ref{CLTf11}), we have
\begin{align*}
& P_{\theta^y\omega}^z \{\mbox{visit $o$ before $\tau_{i+1}$}\}1_{\omega\in\Omega_i}\\
&\le
\max_{x\in B^{i-1}(z)}P_\omega^x \{\mbox{visit $y$ before $\tau_{i+2}$}\}1_{\omega\in\Omega_i}\\
&\le
C \varepsilon_0^{-d}\norm{
[l_x^2 (2d)^{l_x}]^d P_\omega^x \{\mbox{visit $y$ before $\tau_{i+2}$}\}
}_{B_{2K^{i-1}}(z), 1}\\
&\le
C \varepsilon_0^{-d}\abs{B^{i+2}}^{-1}
\sum_{x\in B^{i+2}}l_x^{2d} (2d)^{dl_x} P_\omega^x \{\mbox{visit $y$ before $\tau_{i+2}$}\} ,
\end{align*}
where in the second inequality, we applied Theorem \ref{Cmvi2} with
$p=1$ and used the fact that $\mathop{\rm diam} \tilde{B}_{2K^{i-1}}\le 3K^{i-1}$ when $\omega\in\Omega_i$. Hence
\begin{align}\label{CLTe5}
&\sum_{y\in B^{i-1}}P_{\theta^y\omega}^o
\{\mbox{visit $o$ in $[\tau_i, \tau_{i+1})$}\}
1_{\omega\in\Omega_i}\nonumber\\
&\le
C \varepsilon_0^{-d}\abs{B^{i+2}}^{-1}
\sum_{x\in B^{i+2}}l_x^{2d} (2d)^{dl_x} E_\omega^x
(\mbox{\# visits at $B^{i-1}$ before $\tau_{i+2}$})\nonumber\\
&\stackrel{\text{Lemma }\ref{Ctau}}{\le}
C \varepsilon_0^{-d}K^{(2-d)i}\sum_{x\in B^{i+2}}l_x^{2d} (2d)^{dl_x}.
\end{align}
Since
\begin{align}\label{CLTe6}
&\sum_{y\in B^{i-1}}P_{\theta^y\omega}^o
\{\mbox{visit $o$ in $[\tau_i, \tau_{i+1})$}\}\nonumber\\
&\le \sum_{y\in B^{i-1}}P_{\theta^y\omega}^o
\{\mbox{visit $o$ in $[\tau_i, \tau_{i+1})$}\}1_{\omega\in\Omega_i}
+\abs{B^{i-1}}1_{\omega\notin\Omega_i},
\end{align}
taking $P$-expectations on both sides of (\ref{CLTe6}) and using (\ref{CLTe5}) we get
\[
\mathbb{P}^o \{\mbox{visit $o$ in $[\tau_i,\tau_{i+1})$}\}
\le
C \varepsilon_0^{-d}K^{(2-d)i}El_o^{2d}(2d)^{dl_o}+P\{\omega\notin\Omega_i\}.
\]
By (\ref{Clo}), we can take $\varepsilon_0$ to be small enough such that
$El_o^{2d}(2d)^{dl_o}<\infty$ and $\sum_{i=1}^\infty P\{\omega\notin\Omega_i\}<\infty$.
Therefore when $d\ge 3$,
\[\sum_{i=1}^\infty \mathbb{P}^o \{\mbox{visit $o$ in
$[\tau_i,\tau_{i+1})$}\}<\infty. \quad\quad\quad\quad\quad\quad
\mbox{\qed}\]
\section{Concluding remarks}
While Bouchaud's trap model (see \cite{Bou,BAC}) provides an example of an
(iid)
environment where local traps can destroy the invariance
principle, it is interesting to note that
a counterexample to Theorem \ref{CLT2} in the ergodic setup can also be constructed.
Namely, let $d\ge 2$ and, for $x\in\mathbb{Z}^d$, write $z(x)=(x_2,\cdots, x_d)\in\mathbb{Z}^{d-1}$.
Let $\{\varepsilon_z\}_{z\in\mathbb{Z}^{d-1}}$ be i.i.d random variables with support in $(0, 1/2)$ and set
\begin{equation}
\omega(x, e)=\left\{
\begin{array}{rl}
\varepsilon_{z(x)}, & \text{if } e=\pm e_1,\\
\dfrac{1-2\varepsilon_{z(x)}}{2(d-1)}, & \text{otherwise}.
\end{array}
\right.
\end{equation}
It is easy to verify that $\{X_t^n\}_{t\ge 0}$ satisfies the quenched invariance principle, but that
the limiting covariance may degenerate if the tail of $\varepsilon_z$ at $0$ is sufficiently heavy.
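A minimal sketch checking that the environment defined above is indeed a balanced probability transition row at every site (the helper name \texttt{omega\_row} and the sample values are our own, purely for illustration):

```python
def omega_row(eps, d):
    """Transition probabilities at a site whose column variable is eps,
    for the ergodic example above: +-e_1 each get weight eps, and the
    remaining 2(d-1) directions share the rest equally."""
    assert 0 < eps < 0.5 and d >= 2
    w = {}
    w[(1,) + (0,) * (d - 1)] = eps            # +e_1
    w[(-1,) + (0,) * (d - 1)] = eps           # -e_1
    for i in range(1, d):
        for s in (1, -1):
            e = [0] * d
            e[i] = s
            w[tuple(e)] = (1 - 2 * eps) / (2 * (d - 1))
    return w

w = omega_row(0.1, 3)
total = sum(w.values())
drift = [sum(p * e[i] for e, p in w.items()) for i in range(3)]
print(total, drift)   # row sums to 1 (up to rounding), with zero drift
```

Balancedness holds for every value of $\varepsilon_{z(x)}\in(0,1/2)$, since each direction is paired with its opposite at equal weight.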
\chapter{Einstein Relation for Random Walks in Balanced Random Environment}
\label{ER chapter}
In this chapter we will give
the proof of the Einstein relation \eqref{Einstein relation} in the context of random walks in a balanced uniformly elliptic iid random environment. As mentioned in Section~\ref{IER}, our proof consists of proving Theorem~\ref{ER1} and Theorem~\ref{ER2}.
We will prove Theorem \ref{ER1} in Section \ref{ERsec1}.
In Section \ref{ERsecreg}, we will present our new construction of the regeneration times. Furthermore, we will show in Section \ref{ERsecmo} that these regeneration times have good moment properties. Section \ref{ERsecpro} is devoted to the proof of Theorem \ref{ER2}, using the regeneration times and arguments similar to \cite[pages 219-222]{GMP}.
Throughout this chapter, we assume
\textit{the environment $P$ is iid, balanced, and uniformly
elliptic with ellipticity constant $\kappa>0$.} Recall that we have obtained in Section~\ref{SePeEn} an ergodic measure $Q$ for the process $\bar\omega(n)$. By the ergodic theorem, we get
\[
\bm{D}=\big(2E_Q\omega(o,e_i)\delta_{ij}\big)_{1\le i,j\le d},
\]
where $\bm D$ is the covariance matrix defined at the beginning of Section~\ref{IER}.
\section{Proof of Theorem \ref{ER1}}\label{ERsec1}
\begin{lemma}\label{ERl1}
For any $t>0$ and any bounded continuous functional $F$ on $C([0,t],\mathbb{R}^d)$,
\[
\mathop{\rm lim}_{\lambda\to 0}E_{\omega^\lambda}
F(\lambda X_{s/\lambda^2};0\le s\le t)
=
EF(N_s+D_\ell s;0\le s\le t),
\]
where $(N_s)_{s\ge 0}$ is a $d$-dimensional Brownian motion with covariance matrix $\bm D$.
\end{lemma}
{\it Proof:}~
We first consider the Radon-Nikodym derivative of the measure $P_{\omega^\lambda}$ with respect to
$P_\omega$. Put
\[
G(t, \lambda)=G(t,\lambda;X_\cdot):=\log\prod_{j=1}^{\lceil t\rceil}[1+\lambda\ell\cdot(X_j-X_{j-1})].
\]
Then
\[
E_{\omega^\lambda} F(X_s: 0\le s\le t)
=
E_\omega F(X_s: 0\le s\le t)e^{G(t,\lambda)}.
\]
In particular, taking $F\equiv 1$, we have
\begin{equation}\label{ERe0}
E_\omega e^{G(t,\lambda)}=1
\end{equation}
for any $\lambda\in (0,1)$ and $t>0$.
Moreover, by the inequality
$a-\frac{a^2}{2}\le \log (1+a)\le a-\frac{a^2}{2}+\frac{a^3}{3}$ for $a>0$, we get
\begin{align}\label{ERe1}
G(t,\lambda)
&=
\sum_{j=1}^{\lceil t \rceil}\log(1+\lambda\ell\cdot(X_j-X_{j-1}))\nonumber\\
&=\sum_{j=1}^{\lceil t \rceil}
\left[\lambda\ell\cdot(X_j-X_{j-1})-\frac{\lambda^2\big(\ell\cdot(X_j-X_{j-1})\big)^2}{2}\right]
+\lambda^2\lceil t\rceil H(\lambda)\nonumber\\
&=\lambda X_{\lceil t \rceil}\cdot\ell
-\frac{\lambda^2}{2}\sum_{j=1}^{\lceil t \rceil}\big(\ell\cdot(X_j-X_{j-1})\big)^2
+\lambda^2\lceil t\rceil H(\lambda),
\end{align}
where the random variable $H(\lambda)=H(\lambda;X_\cdot)$ satisfies $0\le H\le \lambda/3$.
Setting
$h(\omega)=\sum_{i=1}^d\omega(o,e_i)\ell_i^2$,
\[
\left(
\sum_{j=1}^n \Big[\big(\ell\cdot(X_j-X_{j-1})\big)^2-2h(\theta^{X_{j-1}}\omega)\Big]
\right)_{n\ge 0}
\]
is a martingale sequence with bounded increments. Thus $P_\omega$-almost surely,
\[
\mathop{\rm lim}_{n\to\infty}\frac{1}{n}\sum_{j=1}^n
[\big(\ell\cdot(X_j-X_{j-1})\big)^2-2h(\theta^{X_{j-1}}\omega)]=0.
\]
Further,
by the ergodic theorem, $P\otimes P_\omega$-almost surely,
\begin{equation}\label{ERe2}
\mathop{\rm lim}_{\lambda\to 0}
\lambda^2\sum_{j=1}^{\lceil t/\lambda^2 \rceil}
\big(\ell\cdot(X_j-X_{j-1})\big)^2
=
\mathop{\rm lim}_{\lambda\to 0}
\lambda^2\sum_{j=1}^{\lceil t/\lambda^2 \rceil} 2h(\theta^{X_{j-1}}\omega)
=
2tE_Q h.
\end{equation}
We deduce from (\ref{ERe1}) and (\ref{ERe2}) that
\[
e^{G(t/\lambda^2,\lambda)}
=\exp[\lambda X_{t/\lambda^2}\cdot\ell-tE_Q h+O_{\lambda,X_\cdot}(1)],
\]
where $O_{\lambda,X_\cdot}(1)$ denotes a quantity that depends on $\lambda$ and
$X_\cdot$, and $O_{\lambda,X_\cdot}(1)\to 0$ $P_\omega$-almost surely as
$\lambda\to 0$.
By Theorem~\ref{LaThm},
$(\lambda X_{s/\lambda^2})_{s\ge 0}$ converges weakly (under $P_\omega$) to $(N_s)_{s\ge 0}$.
Hence for $P$-almost all $\omega$,
\begin{equation}\label{ERe3}
F(\lambda X_{s/\lambda^2};0\le s\le t)e^{G(t/\lambda^2,\lambda)}
\end{equation}
converges weakly (under $P_\omega$) to
\[
F(N_s:0\le s\le t)\exp(N_t\cdot\ell-tE_Qh).
\]
Next, we will prove that for $P$-almost every $\omega$, this convergence is also in $L^1(P_\omega)$.
It suffices to show that the class
$(e^{G(t/\lambda^2,\lambda)})_{\lambda\in (0,1)}$ is uniformly integrable under $P_\omega$, $P$-a.s.
Indeed, for any $\gamma>1$, it follows from (\ref{ERe1}) and the estimate on $H(\lambda)$ that
\begin{align*}
&\gamma G(t/\lambda^2,\lambda)\\
&\le G(t/\lambda^2, \gamma\lambda)
+
\frac{(\gamma^2-\gamma)\lambda^2}{2}
\sum_{j=1}^{\lceil t/\lambda^2 \rceil}
\big(\ell\cdot(X_j-X_{j-1})\big)^2
+\gamma\lambda^2\lceil t/\lambda^2\rceil H(\lambda)\\
&<
G(t/\lambda^2, \gamma\lambda)
+\gamma^2 (t+1).
\end{align*}
Hence for $\gamma>1$ and all $\lambda\in (0,1)$,
\[
E_\omega \exp(\gamma G(t/\lambda^2,\lambda))
\le
e^{\gamma^2 (t+1)}E_\omega \exp(G(t/\lambda^2, \gamma\lambda))
\stackrel{(\ref{ERe0})}{=}e^{\gamma^2 (t+1)},
\]
which implies the uniform integrability of $(e^{G(t/\lambda^2,\lambda)})_{\lambda\in (0,1)}$.
So the $L^1(P_\omega)$-convergence of (\ref{ERe3}) is proved and (for $P$-almost every $\omega$)
we have
\begin{align*}
&\mathop{\rm lim}_{\lambda\to 0}
E_{\omega^\lambda}F(\lambda X_{s/\lambda^2};0\le s\le t)\\
&=\mathop{\rm lim}_{\lambda\to 0}
E_{\omega}F(\lambda X_{s/\lambda^2};0\le s\le t)e^{G(t/\lambda^2,\lambda)}\\
&=E \big[F(N_s:0\le s\le t)\exp(N_t\cdot\ell-t E_Qh)\big].
\end{align*}
The lemma follows by noting that $tE_Qh=E(N_t\cdot\ell)^2/2$ and that, by
Girsanov's formula,
\[
E \big[F(N_s:0\le s\le t)\exp(N_t\cdot\ell-E(N_t\cdot\ell)^2/2)\big]
=
E F(N_s+D_\ell s;0\le s\le t).
\]\qed
\begin{lemma}\label{ERl2}
For any $\lambda\in (0,1), t\ge 1/\lambda^2, p\ge 1$ and any balanced environment $\omega$,
\[
E_{\omega^\lambda}\max_{0\le s\le t}|X_s|^p\le C_{p,d}(\lambda t)^p.
\]
Here we use $C_{p,d}$ to denote constants which depend only on $p$ and the dimension $d$, and
which may differ from line to line.
\end{lemma}
{\it Proof:}~
Since the drift of $\omega^\lambda$ at $X_n, n\in\mathbb{N}$, is
\begin{align*}
E_{\omega^\lambda}(X_{n+1}-X_n|X_n)
&=\sum_{|e|=1}\omega(X_n,e)(1+\lambda e\cdot\ell)e\\
&=\lambda\sum_{i=1}^d 2\omega(X_n,e_i)\ell_i e_i=:\lambda d_\omega(X_{n}),
\end{align*}
we get that
\begin{equation}\label{ERe4}
Y_n:=\lambda\sum_{i=1}^{n}d_\omega(X_{i-1})-X_n
\end{equation}
is a $P_{\omega^\lambda}$-martingale
with bounded increments. By the Azuma-Hoeffding inequality, we get that for any $p\ge 1$,
\[
E_{\omega^\lambda}\max_{1\le i\le n}|Y_i|^p
\le
C_{p,d}n^{p/2}.
\]
Hence
\[
E_{\omega^\lambda}\max_{1\le i\le n}|X_i|^p
\le
2^p(E_{\omega^\lambda}\max_{1\le i\le n}|Y_i|^p+\lambda^p n^p)
\le
C_{p,d}\lambda^p n^p
\]
for any $n\ge 1/\lambda^2$.
The same inequality is true (with different $C_{p,d}$) if we replace $n\in\mathbb{N}$ with
any $t\in\mathbb{R}$ such that $t\ge 1/\lambda^2$.
\qed\\
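The local drift identity used at the start of the proof of Lemma \ref{ERl2}, $E_{\omega^\lambda}(X_{n+1}-X_n\mid X_n)=\lambda\sum_i 2\omega(X_n,e_i)\ell_i e_i$, can also be checked mechanically. A sketch with an arbitrary balanced row (all specific numbers below are illustrative assumptions):

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
# a balanced row: omega(x, e_i) = omega(x, -e_i)
w_pos = rng.uniform(0.05, 0.2, size=d)
w_pos /= 2 * w_pos.sum()              # weights over all 2d unit vectors sum to 1
ell = np.array([0.6, 0.8, 0.0])       # |ell| = 1, ell_1 > 0
lam = 0.01

# tilted weights omega^lambda(x, +-e_i) = omega(x, e_i)(1 +- lam * ell_i)
units = [np.eye(d)[i] * s for i in range(d) for s in (1, -1)]
weights = [w_pos[i] * (1 + lam * s * ell[i]) for i in range(d) for s in (1, -1)]

drift = sum(p * e for p, e in zip(weights, units))
claimed = lam * np.array([2 * w_pos[i] * ell[i] for i in range(d)])
print(np.allclose(drift, claimed), abs(sum(weights) - 1) < 1e-12)
```

The opposite directions $\pm e_i$ contribute $\omega_i(1+\lambda\ell_i)-\omega_i(1-\lambda\ell_i)=2\lambda\omega_i\ell_i$ to the $i$-th drift component, and the tilted row remains a probability vector.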
\textit{Proof of Theorem \ref{ER1}:}
Note that Lemma \ref{ERl1} implies that
$\lambda X_{t/\lambda^2}$ (under the law $P_{\omega^\lambda}$) converges weakly to
$N_t+D_\ell t$ as $\lambda\to 0$.
When $t\ge 1$, the uniform integrability of $(\lambda X_{t/\lambda^2})_{\lambda\in (0,1)}$
under the corresponding measures $P_{\omega^\lambda}$, as shown in Lemma
\ref{ERl2},
then yields that this convergence is also in $L^1$.\qed
\section{Regenerations}\label{ERsecreg}
\subsection{Auxiliary estimates}
For the rest of this section, we assume that $\ell_1=\ell\cdot e_1>0$.
Let
\[
\lambda_1:=\big(\lceil(2\lambda\ell_1)^{-1}\rceil\big)^{-1}/2,
\]
so that $0.5/\lambda_1$ is an integer. Note that
\[
\frac{1}{2\lambda\ell_1}\le \frac{1}{2\lambda_1}<\frac{1}{2\lambda\ell_1}+1.
\]
For any $n\in\mathbb{Z}, x\in\mathbb{Z}^d$, call
\[
\mathcal{H}_n^x=\mathcal{H}_n^x(\lambda,\ell):=
\{y\in\mathbb{Z}^d: (y-x)\cdot e_1=n/\lambda_1\}
\]
\textit{the $n$-th level} (with respect to $x$).
Denote the hitting time of the $n$-th level by
\[
T_n=T_n(X_\cdot):=\inf\{t\ge 0: (X_t-X_0)\cdot e_1=n/\lambda_1\}, n\in\mathbb{Z}.
\]
Also set
\[
T_{\pm 0.5}:=\inf\{t\ge 0: (X_t-X_0)\cdot e_1=\pm 0.5/\lambda_1\}.
\]
Since $\ell_1>0$, the random walk is transient in the $e_1$ direction.
Thus $(T_n)_{n\ge 0}$ are finite $P_{\omega^\lambda}$-almost surely.
\begin{proposition}\label{ERprop5}
For any $n,m\in\mathbb{Z}^+$ and any balanced environment $\omega$,
\[
P_{\omega^\lambda}(T_n<T_{-m})
=
\dfrac{1-q_\lambda^m}{1-q_\lambda^{m+n}},
\]
where $q_\lambda:=(\frac{1-\lambda\ell_1}{1+\lambda\ell_1})^{1/\lambda_1}$.
\end{proposition}
{\it Proof:}~
Observe that $(X_n\cdot e_1)_{n\ge 0}$ is a lazy random walk on $\mathbb{Z}$, where the ratio of the probability of a left jump to that of a right jump equals $(1-\lambda\ell_1)/(1+\lambda\ell_1)$.
Hence
for $i,j\in\mathbb{Z}^+$,
\[
P_{\omega^\lambda}(\tilde{T}_i<\tilde{T}_{-j})
=\frac{1-(\frac{1-\lambda\ell_1}{1+\lambda\ell_1})^j}
{1-(\frac{1-\lambda\ell_1}{1+\lambda\ell_1})^{i+j}},
\]
where $\tilde{T}_k:=\inf\{n\ge 0: (X_n-X_0)\cdot e_1=k\}, k\in\mathbb{Z}$.
The proposition follows by noting that $T_n=\tilde{T}_{n/\lambda_1}$. \qed
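For the reader's convenience, the gambler's-ruin formula can be verified directly. Writing $\rho:=\frac{1-\lambda\ell_1}{1+\lambda\ell_1}$, the function
\[
h(k):=\frac{1-\rho^{k+j}}{1-\rho^{i+j}}, \qquad -j\le k\le i,
\]
satisfies the boundary conditions $h(-j)=0$, $h(i)=1$, and is harmonic for the embedded (non-lazy) walk,
\[
\frac{1+\lambda\ell_1}{2}\,h(k+1)+\frac{1-\lambda\ell_1}{2}\,h(k-1)=h(k),
\]
since $\frac{1+\lambda\ell_1}{2}\rho+\frac{1-\lambda\ell_1}{2}\rho^{-1}=1$. Laziness changes only the time scale, not the exit probabilities, so $h(0)=P_{\omega^\lambda}(\tilde{T}_i<\tilde{T}_{-j})$.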
\begin{lemma}\label{ERl3}
For all $\lambda\in(0,1), t>0, m\in\mathbb{N}$ and any balanced environment $\omega$ with ellipticity constant
$\kappa\in(0,1/(2d))$,
\[
P_{\omega^\lambda}(T_m\ge t/\lambda_1^2)
\le
2 e^{-t\kappa^2/(2m)}.
\]
\end{lemma}
{\it Proof:}~
First, note that if $Z$ is a real-valued random variable with zero mean and supported on $[-c,c]$, then for $\theta>0$,
$Ee^{\theta Z}\le \exp{(\frac{1}{2}\theta^2c^2)}$. (By the convexity of $x\mapsto e^{\theta x}$,
$e^{\theta Z}\le \frac{c+Z}{2c}e^{\theta c}+\frac{c-Z}{2c}e^{-\theta c}$. Taking expectations on both sides gives $Ee^{\theta Z}\le\cosh(\theta c)\le e^{\theta^2c^2/2}$.)
Recall the definition of $Y_n$ in (\ref{ERe4}). Since $Y_n\cdot e_1$ is a $P_{\omega^\lambda}$-martingale with increments bounded by $2$, for $\theta>0$,
\begin{align*}
&E_{\omega^\lambda}(e^{\theta Y_{n+1}\cdot e_1}|X_i,i\le n)\\
&=
e^{\theta Y_n\cdot e_1}E_{\omega^\lambda}[e^{\theta(Y_{n+1}-Y_n)\cdot e_1}|X_i,i\le n]
\le
e^{\theta Y_n\cdot e_1+2\theta^2}.
\end{align*}
Hence
\[
\exp{(\theta Y_n\cdot e_1-2n\theta^2)}
\]
is a $P_{\omega^\lambda}$-supermartingale.
By the optional stopping theorem and ellipticity,
\begin{align*}
1
& \ge
E_{\omega^\lambda} \exp[\theta Y_{T_m}\cdot e_1-2 T_m\theta^2]\\
& \ge
E_{\omega^\lambda}
\exp[\theta(2\lambda\ell_1\kappa T_m-X_{T_m}\cdot e_1)-2T_m\theta^2].
\end{align*}
Letting $\theta=\kappa\lambda\ell_1/2$
in the above inequality and noting that
$X_{T_m}\cdot e_1=m/\lambda_1$, we obtain
\begin{align*}
1 &\ge
E_{\omega^\lambda}\exp\big((\kappa\lambda\ell_1)^2T_m/2-\kappa\lambda\ell_1 m/(2\lambda_1)\big)\\
&\ge
E_{\omega^\lambda}\exp(\kappa^2\lambda_1^2T_m/2-\kappa m),
\end{align*}
where we used $\lambda_1\le \lambda\ell_1\le 2\lambda_1$ in the second
inequality.
Hence by H\"{o}lder's inequality,
\[
E_{\omega^\lambda}\exp(\kappa^2\lambda_1^2T_m/(2m)-\kappa)\le 1.
\]
Therefore,
\[
P_{\omega^\lambda}(T_m\ge t/\lambda_1^2)
\le
e^{\kappa-\kappa^2 t/(2m)}
<2 e^{-\kappa^2 t/(2m)}. \qed
\]
\begin{proposition}\label{ERprop2}
There exists a constant $C_0=C_0(\kappa,d)>0$ such that
\[
P_{\omega^\lambda}(\max_{0\le s\le T_1}|X_s|\ge C_0/\lambda_1)<0.5.
\]
\end{proposition}
{\it Proof:}~
By Lemma \ref{ERl2} and Lemma \ref{ERl3}, for any $m\ge 1$,
\begin{align*}
&P_{\omega^\lambda}(\max_{0\le s\le T_1}|X_s|\ge m/\lambda_1)\\
&\le
P_{\omega^\lambda}(T_1\ge \sqrt{m}/\lambda_1^2)
+P_{\omega^\lambda}(\max_{0\le s\le\sqrt{m}/\lambda_1^2}|X_s|\ge m/\lambda_1)\\
&\le
2e^{-\sqrt{m}\kappa^2/2}+C/\sqrt{m},
\end{align*}
which is less than $0.5$ if $m$ is large enough.\qed
\begin{lemma}\label{ERl4}
There exists a constant $c_1\in (0,1]$ such that for any $\lambda\in (0,1)$, $x\in\mathbb{Z}^d$
and balanced environment $\omega$,
\begin{equation}\label{ERe5}
P_{\omega^\lambda}^{x}(X_{T_1}=\cdot)
\ge
c_1
P_{\omega^\lambda}^{x+0.5e_1/\lambda_1}(X_{T_{0.5}}=\cdot|T_{0.5}<T_{-0.5}).
\end{equation}
\end{lemma}
{\it Proof:}~
For any $x\in\mathbb{Z}^d$, let
\[
\mathcal{H}_{0.5}^x
:=\{y\in\mathbb{Z}^d: (y-x)\cdot e_1=0.5/\lambda_1\}.
\]
Fix $w\in\mathcal{H}_1^x$. Then the function
\[
f(z):=
P_{\omega^\lambda}^z(X_\cdot \text{ visits $\mathcal{H}_1^x$ for the first time at }w)
\]
satisfies
\[
L_{\omega^\lambda}f(z)=0
\]
for all $z\in\{y: (y-x)\cdot e_1<1/\lambda_1\}$.
By the Harnack inequality for discrete harmonic functions (see Theorem
\ref{ERharnack} in the Appendix, applied here with $a=\omega^\lambda$,
$R=0.5/\lambda_1$ and $b_0\le\lambda$), there exists a constant $C_2$ such that,
for any $y, z\in \mathcal{H}_{0.5}^x$ with $|z-y|<0.5/\lambda_1$,
\[
f(z)\ge C_2 f(y).
\]
Hence, for any $z\in \mathcal{H}_{0.5}^x$ such that
$|z-(x+0.5e_1/\lambda_1)|<C_0/\lambda_1$, we have
\begin{equation}\label{ERe27}
f(z)\ge C_2^{2C_0} f(x+0.5e_1/\lambda_1).
\end{equation}
Therefore,
\begin{align*}
P_{\omega^\lambda}^x(X_{T_1}=w)
&\ge
\sum_{|y-x|<C_0/\lambda_1}
P_{\omega^\lambda}^x(X_{T_{0.5}}=y)P_{\omega^\lambda}^y(X_{T_{0.5}}=w)\\
&\stackrel{\eqref{ERe27}}{\ge}
C P_{\omega^\lambda}^x(|X_{T_{0.5}}-x|<C_0/\lambda_1)P_{\omega^\lambda}^{x+0.5e_1/\lambda_1}
(X_{T_{0.5}}=w)\\
&\ge
c_1 P_{\omega^\lambda}^{x+0.5e_1/\lambda_1}
(X_{T_{0.5}}=w|T_{0.5}<T_{-0.5})
\end{align*}
where in the last inequality we used the facts that (by Proposition \ref{ERprop2})
\[
P_{\omega^\lambda}^x(|X_{T_{0.5}}-x|<C_0/\lambda_1)>\frac{1}{2}
\]
and
\[
P_{\omega^\lambda}^{x+0.5e_1/\lambda_1}(T_{0.5}<T_{-0.5})>\frac{1}{2}.\qed
\]
\subsection{Construction of the regeneration times}
Let
\[
\mu_{\omega^\lambda,1}^x(\cdot)=P_{\omega^\lambda}^{x+0.5e_1/\lambda_1}
(X_{T_{0.5}}=\cdot|T_{0.5}<T_{-0.5}).
\]
Recall that $c_1$ is the constant in Lemma \ref{ERl4}. For any $\beta\in (0,c_1)$, we set
\[
\mu_{\omega^\lambda,0}^x(\cdot)=\mu_{\omega^\lambda,0}^{x,\beta}(\cdot)
:=
\big[P_{\omega^\lambda}^x(X_{T_1}=\cdot)-\beta\mu_{\omega^\lambda,1}^x(\cdot)\big]/(1-\beta).
\]
Then by (\ref{ERe5}), both $\mu_{\omega^\lambda,1}^x$ and $\mu_{\omega^\lambda,0}^x$ are probability measures on $\mathcal{H}^x_{0.5}$ and
\[
P_{\omega^\lambda}^x(X_{T_1}=u)=\beta\mu_{\omega^\lambda,1}^x(u)+(1-\beta)\mu_{\omega^\lambda,0}^x(u).
\]
For any $\mathcal{O}\in\sigma(X_1,X_2,\ldots, X_{T_1}), x\in\mathbb{Z}^d$ and $i\in\{0,1\}$, put
\begin{align}\label{ERe6}
\nu_{\omega^\lambda,i}^x(\mathcal{O})
&=
\nu_{\omega^\lambda,i}^{x,\beta}(\mathcal{O})\nonumber\\
&:=
\sum_y
\big[i\mu_{\omega^\lambda,1}^x(y)+(1-i)\mu_{\omega^\lambda,0}^x(y)\big]
P_{\omega^\lambda}^x(\mathcal{O}|X_{T_1}=y).
\end{align}
Notice that under the environment measure $P$,
\[
\nu_{\omega^\lambda,1}^x(X_{T_1}\in\cdot)=\mu_{\omega^\lambda,1}^x(\cdot)
\]
is independent of
$\sigma(\omega_y:y\cdot e_1\le x\cdot e_1)$.\\
We will now define the regeneration times.
We first sample a sequence $(\epsilon_i)_{i=1}^\infty\in\{0,1\}^\mathbb{N}$ of iid Bernoulli random variables according to the law $Q_\beta$ defined by
\[
Q_\beta(\epsilon_i=1)=\beta \text{ and } Q_\beta(\epsilon_i=0)=1-\beta.
\]
Then, fixing $\epsilon:=(\epsilon_i)_{i=1}^\infty$, we will define a new law $P_{\omega^\lambda,\epsilon}$
on the paths as follows (see Figure \ref{ERfig0}). For $x\in\mathbb{Z}^d$, set
\[P_{\omega^\lambda,\epsilon}^x(X_0=x)=1.\]
Assume that the $P_{\omega^\lambda,\epsilon}^x$-law for finite paths of
length $\le n$ is defined. For any path $(x_i)_{i=0}^{n+1}$ with $x_0=x$, define
\begin{align*}
&P_{\omega^\lambda,\epsilon}^{x}
(X_{n+1}=x_{n+1},\ldots, X_{0}=x_0)\\
&:=
P_{\omega^\lambda,\epsilon}^{x}(X_I=x_I,\ldots, X_0=x_0)
\nu_{\omega^\lambda,\epsilon_J}^{x_I}(X_{n+1-I}=x_{n+1},\ldots, X_1=x_{I+1}),
\end{align*}
where
\[
J=J(x_0,\ldots,x_n):=\max\{j\ge 0: \mathcal{H}_{j}^{x_0}\cap\{x_i, 0\le i\le n\}\neq\emptyset\}
\]
is the highest level visited by $(x_i)_{i=0}^{n}$ and
\[
I=I(x_0,\ldots,x_n):=\min\{0\le i\le n: x_i\in\mathcal{H}_J^{x_0}\}
\]
is the hitting time to the $J$-th level. By induction, the law $P_{\omega^\lambda,\epsilon}^x$
is well-defined for paths of all lengths.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{law}
\caption{The law $\bar P_{\omega^\lambda,\epsilon}$ for the walks.}
\label{ERfig0}
\end{figure}
Note that under $P_{\omega^\lambda,\epsilon}^x$ the path $(X_n)_{n\ge 0}$ is not a Markov chain, but
the law of $X_\cdot$ under
\[
\bar{P}_{\omega^\lambda}^x=\bar{P}_{\omega^\lambda,\beta}^{x}:=Q_\beta\otimes P_{\omega^\lambda,\epsilon}^x
\]
coincides with $P_{\omega^\lambda}^x$. That is,
\[
\bar{P}_{\omega^\lambda}^x(X_\cdot\in\cdot)
=
P_{\omega^\lambda}^x(X_\cdot\in\cdot).
\]
Denote by
$\bar{\mathbb P}_\lambda=\bar{\mathbb P}_{\lambda,\beta}:=P\otimes\bar{P}_{\omega^\lambda,\beta}$ the law of the triple
$(\omega,\epsilon, X_\cdot)$. Expectations with respect to $\bar{P}_{\omega^\lambda}^x$ and $\bar{\mathbb P}_\lambda$ are denoted by
$\bar{E}_{\omega^\lambda}^x$ and $\bar{\mathbb E}_\lambda (=\bar{\mathbb E}_{\lambda,\beta})$, respectively.
Next, for a path $(X_n)_{n\ge 0}$ sampled according to $P_{\omega^\lambda,\epsilon}^o$, we will define the regeneration times. See Figure \ref{ERfig2} for an illustration.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{defreg}
\caption{The definition of a regeneration time.}
\label{ERfig2}
\end{figure}
To be specific, put $S_0=0, M_0=0$,
and define inductively
\begin{align*}
&S_{k+1}=\inf\{T_{n+1}: n/\lambda_1\ge M_k \text{ and }\epsilon_n=1\},\\
&R_{k+1}=S_{k+1}+T_{-1}\circ\theta_{S_{k+1}},\\
&M_{k+1}=X_{S_{k+1}}\cdot e_1+N\circ \theta_{S_{k+1}}, \qquad k\ge 0.
\end{align*}
Here $\theta_n$ denotes the time shift of the path, i.e, $\theta_n X_\cdot=(X_{n+i})_{i=0}^\infty$,
and
\[
N:=\inf\{n/\lambda_1: n/\lambda_1>(X_i-X_0)\cdot e_1 \text{ for all }i\le T_{-1}\}.
\]
Set
\begin{align*}
&K:=\inf\{k\ge 1: S_k<\infty, R_k=\infty\},\\
&\tau_1:=S_K,\\
&\tau_{k+1}=\tau_k+\tau_1\circ\theta_{\tau_k}.
\end{align*}
We call $(\tau_k)_{k\ge 1}$ the ($\beta$-)\textit{regeneration times}.
Intuitively, under $\bar{P}_{\omega^\lambda}^x$, whenever the walker visits a new level $\mathcal{H}_i, i\ge 0$,
he flips a coin $\epsilon_i$.
If $\epsilon_i=0$ (or $1$), he then walks
following the law $\nu_{\omega^\lambda,0}$ (or $\nu_{\omega^\lambda,1}$) until he hits the $(i+1)$-th level. The regeneration time $\tau_1$ is defined to be the first time of visiting a new level
$\mathcal{H}_k$ such that the outcome $\epsilon_{k-1}$ of the previous
coin-tossing is ``$1$" and the path will never backtrack to the level
$\mathcal{H}_{k-1}$ in the
future. See Figure \ref{ERfig1}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{regenerations}
\caption{In this picture, $K=2$, $X_{\tau_1}\cdot e_1=5/\lambda_1$, $M_1=4/\lambda_1$.}
\label{ERfig1}
\end{figure}
\subsection{The renewal property of the regenerations}
The regeneration times possess good renewal properties in the following sense:
\begin{enumerate}
\item
Since the ratio of the probabilities of a left-jump and a right-jump of the lazy random walk $(X_n\cdot e_1)_{n\ge 0}$ on $\mathbb{Z}$ is $(1-\lambda\ell_1)/(1+\lambda\ell_1)$,
the law of $(X_{\tau_n}\cdot e_1)_{n\ge 1}$ does not depend on the
environment $\omega$. (Indeed, if we only observe
the chain $(X_n\cdot e_1)_{n\ge 0}$ at the times when it \textit{moves} and forget about its laziness, we get a random walk on $\mathbb{Z}$ with probabilities $(1-\lambda\ell_1)/2$ and $(1+\lambda\ell_1)/2$ of jumping to the left and to the right, respectively.) Furthermore, under $\bar{P}_{\omega^\lambda}$, the inter-regeneration distances
$(e_1\cdot X_{\tau_1}\circ\theta_{\tau_n})_{n=1}^\infty$ in the direction $e_1$ are iid random variables which are independent of $X_{\tau_1}\cdot e_1$, and
\[
\bar{P}_{\omega^\lambda}(e_1\cdot X_{\tau_1}\circ\theta_{\tau_n}\in\cdot)
=
\bar{P}_{\omega^\lambda}(X_{\tau_1}\cdot e_1\in\cdot|T_{-1}=\infty), n\ge 1.
\]
\item
For $k\ge 0$, define
\begin{align*}
&\tilde{S}_{k+1}:=\inf\{T_n: n/\lambda_1\ge M_k \text{ and }\epsilon_n=1\},\\
&\tilde{\tau}_1:=\tilde{S}_K,\\
&\tilde{\tau}_{k+1}:=\tau_k+\tilde{\tau}_1\circ\theta_{\tau_k}.
\end{align*}
Note that for $k\ge 1$,
\begin{align*}
&S_{k}=\tilde S_{k}+T_1\circ\theta_{\tilde S_k},\\
&X_{\tau_k}\cdot e_1
=X_{\tilde{\tau}_k}\cdot e_1+1/\lambda_1.
\end{align*}
Conditioned on $X_{\tilde{\tau}_k}=x$, the law of $X_{\tau_k}$ is
$\mu_{\omega^\lambda,1}^x$, which is independent (under the environment measure $P$) of
$\sigma(\omega_y:y\cdot e_1\le x\cdot e_1)$. Moreover, after time $\tau_k$,
the path will never visit $\{y:y\cdot e_1\le x\cdot e_1\}$. Thus the movement of the path
after time $\tau_k$ is independent (under $\bar{\mathbb P}_\lambda$)
of $(X_n)_{n\le \tilde{\tau}_k}$, and therefore, we expect
\[
(\tilde{\tau}_1\circ\theta_{\tau_k})_{k\ge 1}
\]
to be iid random variables under $\bar{\mathbb P}_\lambda$.
See Proposition \ref{ERprop1} for a rigorous proof.
\item
Although the inter-regeneration distances $(X_{\tau_1}\circ\theta_{\tau_k})_{k\ge 1}$
and $(\tilde{\tau}_1\circ\theta_{\tau_k})_{k\ge 1}$ are both iid sequences,
the inter-regeneration times $(\tau_1\circ\theta_{\tau_k})_{k\ge 1}$ are not even independent.
However, letting
\[
\Delta_k:=T_1\circ\theta_{\tilde{\tau}_k}=\tau_k-\tilde{\tau}_k \text{ for }k\ge 1,
\]
we can show that for every $k\ge 1$, $\lambda_1^2 \Delta_k$ is bounded by a constant plus an
exponential random variable.
So $\Delta_k$ is much less than $\tau_1\circ\theta_{\tau_k}$, which is roughly $C/(\beta\lambda_1^2)$
(as will be shown in Proposition \ref{ERprop3}).
In this sense, the inter-regeneration times $\tau_1\circ\theta_{\tau_k}$ are
\textit{almost} iid if $\beta$ is sufficiently small.
\end{enumerate}
The rest of this subsection is devoted to the proof that
$(\tilde{\tau}_1\circ\theta_{\tau_k})_{k\ge 1}$ are iid
(Proposition \ref{ERprop1}) and
that the $\Delta_k$'s are dominated by iid random variables of order $1/\lambda^2$
(Proposition \ref{ERprop4}).
We introduce the $\sigma$-field
\[
\mathcal{G}_k
:=
\sigma\big(
\tilde{\tau}_k, (X_i)_{i\le\tilde{\tau}_k},(\omega_y)_{y\cdot e_1\le X_{\tilde{\tau}_k}\cdot e_1}
\big).
\]
\begin{lemma}\label{ERl5}
For any appropriate measurable sets $B_1, B_2$
and any event
\[
B:=\{(X_i)_{i\ge 0}\in B_1, (\omega_y)_{y\cdot e_1>-1/\lambda_1}\in B_2\},
\]
we have, for $k\ge 1$,
\[
\bar{\mathbb P}_\lambda(B\circ\bar{\theta}_{\tau_k}|\mathcal{G}_k)
=
\frac{E_P\big[
\sum_y
\mu_{\omega^\lambda,1}(y)
\bar{P}_{\omega^\lambda}^y(B\cap\{T_{-1}=\infty\})\big]}
{
E_P\big[
\sum_y
\mu_{\omega^\lambda,1}(y)
\bar{P}_{\omega^\lambda}^y(T_{-1}=\infty)\big]
}.
\]
Here $\bar{\theta}_n$ is the shift defined by
\[
B\circ\bar{\theta}_n
=
\{(X_i)_{i\ge n}\in B_1, (\omega_y)_{(y-X_n)\cdot e_1>-1/\lambda_1}\in B_2\}.
\]
\end{lemma}
{\it Proof:}~
For simplicity, let us consider the case $k=1$.
We use $\theta^n$ to denote the shift of the $\epsilon$-coins, i.e.,
$\theta^n \epsilon_\cdot=(\epsilon_i)_{i\ge n}$.
For any $A\in\mathcal{G}_1$,
\begin{align*}
&\bar{\mathbb P}_\lambda(B\circ\bar{\theta}_{\tau_1}\cap A)\\
&=
E_{P\otimes Q_\beta}\big[
\sum_{k\ge 1,x}P_{\omega^\lambda,\epsilon}
(A\cap\{\tilde{S}_k<\infty,R_k=\infty, X_{\tilde{S}_k}=x\}\cap B\circ\bar{\theta}_{S_k})
\big]\\
&=
E_{P\otimes Q_\beta}\big[
\sum_{k\ge 1,x,y}P_{\omega^\lambda,\epsilon}
(A\cap\{\tilde{S}_k<\infty,X_{\tilde{S}_k}=x\})
\nu_{\omega^\lambda,1}^x(X_{T_1}=x+y)\\
&\qquad\qquad\qquad\qquad\qquad\times
P_{\omega^\lambda,\theta^{k+1}\epsilon}^{x+y}(B\cap\{T_{-1}=\infty\})\big].
\end{align*}
Note that in the last equality,
\[P_{\omega^\lambda,\epsilon}
(A\cap\{\tilde{S}_k<\infty,X_{\tilde{S}_k}=x\})\]
is
$\sigma\big((\epsilon_i)_{i\le k},(\omega_z)_{(z-x)\cdot e_1\le 0}\big)$-measurable,
whereas
\[\nu_{\omega^\lambda,1}^x(X_{T_1}=x+y)P_{\omega^\lambda,\theta^{k+1}\epsilon}^{x+y}(B\cap\{T_{-1}=\infty\})\]
is
$\sigma\big((\epsilon_i)_{i\ge k+1}, (\omega_z)_{(z-x)\cdot e_1>0}\big)$-measurable for $y\in\mathcal{H}_1^x$.
Hence they are independent under $P\otimes Q_\beta$ and we have
\begin{align}\label{ERe7}
&\bar{\mathbb P}_\lambda(B\circ\bar{\theta}_{\tau_1}\cap A)\\
&=
\sum_{k\ge 1}\bar{\mathbb P}_\lambda
(A\cap\{\tilde{S}_k<\infty\})
E_P\big[
\sum_y
\nu_{\omega^\lambda,1}(X_{T_1}=y)
\bar{P}_{\omega^\lambda}^y(B\cap\{T_{-1}=\infty\})\big].\nonumber
\end{align}
Taking $B$ to be the whole event space, we get
\begin{equation}\label{ERe8}
\bar{\mathbb P}_\lambda(A)=
\sum_{k\ge 1}\bar{\mathbb P}_\lambda
(A\cap\{\tilde{S}_k<\infty\})
E_P\big[
\sum_y
\mu_{\omega^\lambda,1}(y)
\bar{P}_{\omega^\lambda}^y(T_{-1}=\infty)\big].
\end{equation}
(\ref{ERe7}) and (\ref{ERe8}) yield that
\[
\bar{\mathbb P}_\lambda(B\circ\bar{\theta}_{\tau_1}|A)
=
\frac{E_P\big[
\sum_y
\mu_{\omega^\lambda,1}(y)
\bar{P}_{\omega^\lambda}^y(B\cap\{T_{-1}=\infty\})\big]}
{
E_P\big[
\sum_y
\mu_{\omega^\lambda,1}(y)
\bar{P}_{\omega^\lambda}^y(T_{-1}=\infty)\big]
}
.\]
The lemma is proved for the case $k=1$. The general case $k>1$ follows by
induction. (The reasoning for the induction step is the same, although the
notation becomes more cumbersome.)\qed
The following proposition is an immediate consequence of the lemma.
\begin{proposition}\label{ERprop1}
Under $\bar{\mathbb P}_\lambda$, $\tilde{\tau}_1,\tilde{\tau}_1\circ\theta_{\tau_1},\ldots,
\tilde{\tau}_1\circ\theta_{\tau_k},\ldots$ are independent random variables. Furthermore,
$(\tilde{\tau}_1\circ\theta_{\tau_k})_{k\ge 1}$ are iid with law
\[
\bar{\mathbb P}_\lambda(\tilde{\tau}_1\circ\theta_{\tau_k}\in\cdot)
=
\frac{
E_P\big[
\sum_y
\mu_{\omega^\lambda,1}(y)
\bar{P}_{\omega^\lambda}^y(\tilde{\tau}_1\in\cdot,T_{-1}=\infty)\big]
}
{
E_P\big[
\sum_y
\mu_{\omega^\lambda,1}(y)
\bar{P}_{\omega^\lambda}^y(T_{-1}=\infty)\big]
}.
\]
\end{proposition}
Note that the inter-regeneration times $(\tau_1\circ\theta_{\tau_k})_{k\ge 1}$ are not independent.
However, the differences between $\tau_1\circ\theta_{\tau_k}$ and
$\tilde{\tau}_1\circ\theta_{\tau_k}, k\ge 1$ are controlled by iid exponential random variables.
For any $x\in\mathbb{Z}^d$ and $t\ge 0$,
\begin{align*}
&\nu_{\omega^\lambda,1}^x(\lambda_1^2T_1\ge t)\\
&\stackrel{\eqref{ERe6}}{=}
\sum_y \mu_{\omega^\lambda,1}^x(y)P_{\omega^\lambda}^x(\lambda_1^2T_1\ge t|X_{T_1}=y)\\
&\stackrel{\eqref{ERe5}}{\le} c_1^{-1}
\sum_y P_{\omega^\lambda}^x(X_{T_1}=y)P_{\omega^\lambda}^x(\lambda_1^2T_1\ge t|X_{T_1}=y)\\
&=
c_1^{-1}P_{\omega^\lambda}^x(\lambda_1^2T_1\ge t)
\stackrel{\text{Lemma }\ref{ERl3}}{\le} 2c_1^{-1}e^{-t\kappa^2/2}.
\end{align*}
Hence for $k\ge 1$,
\[
P_{\omega^\lambda,\epsilon}(\lambda_1^2\Delta_k\ge t|X_i,i\le \tilde{\tau}_k)=
\nu_{\omega^\lambda,1}^{X_{\tilde{\tau}_k}}(\lambda_1^2T_1\ge t)
\le
2c_1^{-1}e^{-t\kappa^2/2},
\]
which implies that $\lambda_1^2\Delta_k$ is stochastically dominated by an exponential random
variable (with rate $\kappa^2/2$) plus a constant $c_2:=2\kappa^{-2}\log(2/c_1)$.
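This domination can be checked directly: for any $t\ge 0$,
\[
P_{\omega^\lambda,\epsilon}(\lambda_1^2\Delta_k\ge c_2+t\,|\,X_i,i\le\tilde{\tau}_k)
\le 2c_1^{-1}e^{-(c_2+t)\kappa^2/2}
=e^{-t\kappa^2/2}
\]
by the choice $c_2=2\kappa^{-2}\log(2/c_1)$, and the right side is exactly the tail of $c_2$ plus an exponential random variable with rate $\kappa^2/2$.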
Thus we conclude:
\begin{proposition}\label{ERprop4}
Enlarging the probability space if necessary, one can couple $(\Delta_k)_{k\ge 1}$
with an iid sequence $(\xi_k)_{k\ge 1}$ such that each $\xi_k$ is the sum of $c_2$ and an exponential random
variable with rate $\kappa^2/2$, and that
\[
\lambda_1^2\Delta_k\le \xi_k, \text{ for all }k\ge 1.
\]
Therefore, for any $n\ge 1$,
\begin{equation}\label{ERe9}
\tilde{\tau}_1+\sum_{i=1}^{n-1}\tilde{\tau}_1\circ\theta_{\tau_i}
\le
\tau_n
\le
\tilde{\tau}_1+\sum_{i=1}^{n-1}\tilde{\tau}_1\circ\theta_{\tau_i}+\sum_{i=1}^n\xi_i/\lambda_1^2.
\end{equation}
\end{proposition}
\section{Moment estimates}\label{ERsecmo}
Throughout this section, we assume that
\[\ell\cdot e_1>0.\]
Set $\tau_0=0$. We will show that the typical values of $e_1\cdot X_{\tau_1}\circ\theta_{\tau_k}$
and $\tau_1\circ\theta_{\tau_k}$, $k\ge 0$, are $C/(\beta\lambda)$ and $C/(\beta\lambda^2)$, respectively.
\begin{theorem}\label{ERregdist}
Let $\omega$ be an elliptic and balanced environment. If $\lambda>0$
and $\beta>0$ are small enough,
then
\[
\bar{E}_{\omega^\lambda}\exp(\beta\lambda_1 X_{\tau_1}\cdot e_1/2)<12.
\]
\end{theorem}
{\it Proof:}~
For $0\le k\le K-1$, set
\[
L_{k+1}=\inf\{n\ge \lambda_1 M_k: \epsilon_n=1\}-\lambda_1 M_k+1.
\]
Then $L_1$ is the number of coins tossed to get the first `$1$' and
\[
X_{S_1}\cdot e_1=L_1/\lambda_1.
\]
Moreover, for $1\le k\le K-1$, let
\[
N_k=N\circ\theta_{S_k}.
\]
Then
\[
(X_{S_{k+1}}-X_{S_k})\cdot e_1=N_{k}+L_{k+1}/\lambda_1, \quad k\ge 1.
\]
So
\begin{equation}\label{ERe10}
X_{\tau_1}\cdot e_1=\sum_{i=1}^K L_i/\lambda_1+\sum_{i=1}^{K-1}N_i.
\end{equation}
First, we will compute the exponential moment of $L_i, i\le K$.
Since $(L_i)_{i\ge 1}$ depends only on the coins $(\epsilon_i)_{i\ge 0}$,
it is easily seen that they are iid geometric random variables with
parameter $\beta$. Hence for $i\ge 1$
(noting that $(1-\beta)e^{\beta/2}< e^{-\beta/2}<1$),
\[
\bar{E}_{\omega^\lambda}
[e^{\beta L_i/2}]
=
\sum_{n=0}^\infty e^{\beta(n+1)/2}(1-\beta)^n\beta
=
\dfrac{\beta e^{\beta/2}}{1-(1-\beta)e^{\beta/2}}.
\]
If $\beta>0$ is small enough, we have
\begin{equation}\label{ERe11}
\bar{E}_{\omega^\lambda}
[e^{\beta L_i/2}]<3.
\end{equation}
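(Indeed, a Taylor expansion at $\beta=0$ gives
\[
\frac{\beta e^{\beta/2}}{1-(1-\beta)e^{\beta/2}}
=\frac{\beta\big(1+\tfrac{\beta}{2}+O(\beta^2)\big)}{\tfrac{\beta}{2}+O(\beta^2)}
\longrightarrow 2
\quad\text{as }\beta\to 0,
\]
so any bound strictly larger than $2$, in particular $3$, holds for all small $\beta$.)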
Next, we will compute the exponential moment of $N_i, i\le K-1$.
By Proposition \ref{ERprop5}, putting
\[
p_\lambda:=\bar{P}_{\omega^\lambda}(T_{-1}=\infty)
=1-q_\lambda,
\]
we have
\begin{align*}
&\bar{P}_{\omega^\lambda}(N=(n+1)/\lambda_1)\\
&=
\bar{P}_{\omega^\lambda}(T_n<T_{-1}<T_{n+1})\\
&=
\bar{P}_{\omega^\lambda}(T_n<T_{-1})-
\bar{P}_{\omega^\lambda}(T_{n+1}<T_{-1})\\
&=
\frac{p_\lambda}{1-q_\lambda^{n+1}}-\frac{p_\lambda}{1-q_\lambda^{n+2}}
=\dfrac{q_\lambda^{n+1}p_\lambda^2}{(1-q_\lambda^{n+1})(1-q_\lambda^{n+2})},
\quad n\ge 0.
\end{align*}
Observe that, conditioned on $K$, the random variables
$(N_i)_{1\le i<K}$
are iid under $\bar{P}_{\omega^\lambda}$.
Hence
\begin{align*}
\bar{P}_{\omega^\lambda}(N_i=(n+1)/\lambda_1|K>i)
&=\bar{P}_{\omega^\lambda}(N=(n+1)/\lambda_1|T_{-1}<\infty)\\
&=
\dfrac{q_\lambda^{n}p_\lambda^2}{(1-q_\lambda^{n+1})(1-q_\lambda^{n+2})}\le q_\lambda^n,
\end{align*}
and
\[
\bar{E}_{\omega^\lambda}
[e^{\beta\lambda_1 N_i/2}|K>i]
\le
\dfrac{e^{\beta/2}}{1-e^{\beta/2}q_\lambda}.
\]
Noting that $\mathop{\rm lim}_{\lambda\to 0}q_\lambda=e^{-2}$, we can take both $\lambda$ and $\beta$ to be small enough such that
\begin{equation}\label{ERe12}
\bar{E}_{\omega^\lambda}
[e^{\beta\lambda_1 N_i/2}|K>i]
<\frac{1}{4q_\lambda}.
\end{equation}
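The limit $\mathop{\rm lim}_{\lambda\to 0}q_\lambda=e^{-2}$ used here follows from
\[
\log q_\lambda
=\frac{1}{\lambda_1}\log\frac{1-\lambda\ell_1}{1+\lambda\ell_1}
=\frac{-2\lambda\ell_1+O(\lambda^3)}{\lambda_1}
\longrightarrow -2,
\]
since $\lambda_1/(\lambda\ell_1)\to 1$ as $\lambda\to 0$ by the definition of $\lambda_1$.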
Finally, note that, under $\bar{P}_{\omega^\lambda}=Q_\beta\otimes
P_{\omega^\lambda,\epsilon}^o$,
$K$ is a geometric random variable with success parameter $p_\lambda$, and
$(L_i)_{1\le i\le K}$ and $(N_i)_{1\le i\le K}$ are iid sequences when
conditioned on $K$. Therefore, by (\ref{ERe10}), (\ref{ERe11}) and
(\ref{ERe12}),
\[
\bar{E}_{\omega^\lambda}\exp(\beta\lambda_1 X_{\tau_1}\cdot e_1/2)
\le
\bar{E}_{\omega^\lambda} \frac{3^K}{(4q_\lambda)^{K-1}}
=\sum_{n=0}^\infty \frac{3^{n+1}}{(4q_\lambda)^n}q_\lambda^np_\lambda
<12
\]
if both $\beta,\lambda>0$ are small enough. \qed
\begin{corollary}\label{ERcor1}
For $t\ge 1$ and small enough $\lambda, \beta>0$,
\[
\bar{P}_{\omega^\lambda}(\beta\lambda_1^2\tau_1\ge t)
\le 14\exp(-\kappa^2\sqrt{t}/4).
\]
\end{corollary}
{\it Proof:}~
By Lemma \ref{ERl3} and Theorem \ref{ERregdist},
\begin{align*}
&\bar{P}_{\omega^\lambda}(\beta\lambda_1^2\tau_1\ge t)\\
&\le
\bar{P}_{\omega^\lambda}(\beta\lambda_1^2T_{\lceil\sqrt{t}/\beta\rceil}\ge t)
+\bar{P}_{\omega^\lambda}(T_{\lceil\sqrt{t}/\beta\rceil}<\tau_1)\\
&\le
2\exp(-\frac{\kappa^2t/\beta}{2(\sqrt{t}/\beta+1)})
+\bar{P}_{\omega^\lambda}(\lceil\sqrt{t}/\beta\rceil/\lambda_1<X_{\tau_1}\cdot e_1)\\
&\le
2e^{-\kappa^2\sqrt{t}/4}+12e^{-\sqrt{t}/2}
\le
14 e^{-\kappa^2\sqrt{t}/4}. \qed
\end{align*}
It follows from Corollary \ref{ERcor1} and Lemma \ref{ERl5}
(and noting that $P_{\omega^\lambda}(T_{-1}=\infty)=p_\lambda>1/2$) that,
for $k\ge 1$,
\begin{equation}\label{ERe13}
\bar{\mathbb P}_\lambda
(\beta\lambda_1^2\tau_1\circ\theta_{\tau_k}\ge t)
\le
28\exp(-\kappa^2\sqrt{t}/4).
\end{equation}
Hence by Theorem \ref{ERregdist}, Corollary \ref{ERcor1} and \eqref{ERe13},
we conclude that, for any $p\ge 1, k\ge 0$, there exists a constant
$C(p)<\infty$ such that
\begin{align}
&\bar{\mathbb E}_\lambda
(\beta\lambda_1^2\tau_1\circ\theta_{\tau_k})^p
<C(p),\label{ERe14}\\
&\bar{\mathbb E}_\lambda
(\beta\lambda_1 X_{\tau_1}\circ\theta_{\tau_k})^p
<C(p).\label{ERe15}
\end{align}
Moreover, since $\bar{\mathbb P}_\lambda$-almost surely,
\[
v_\lambda\cdot e_1=\mathop{\rm lim}_{n\to\infty}\frac{X_{\tau_n}\cdot e_1}{\tau_n},
\]
by \eqref{ERe9}
and the law of large numbers, we
have
\begin{equation}\label{ERe16}
L^{\beta,\lambda}:=\dfrac{\bar{\mathbb E}_\lambda[e_1\cdot X_{\tau_1}\circ\theta_{\tau_1}]}
{\bar{\mathbb E}_\lambda[\tilde{\tau}_1\circ\theta_{\tau_1}]+E\xi_1/\lambda_1^2}
\le
v_\lambda\cdot e_1
\le
\dfrac{\bar{\mathbb E}_\lambda[e_1\cdot X_{\tau_1}\circ\theta_{\tau_1}]}
{\bar{\mathbb E}_\lambda[\tilde{\tau}_1\circ\theta_{\tau_1}]}=:R^{\beta,\lambda}.
\end{equation}
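Indeed, \eqref{ERe16} follows from \eqref{ERe9} together with the strong law of large numbers applied to the iid sequences $(e_1\cdot X_{\tau_1}\circ\theta_{\tau_k})_{k\ge 1}$, $(\tilde{\tau}_1\circ\theta_{\tau_k})_{k\ge 1}$ and $(\xi_k)_{k\ge 1}$: $\bar{\mathbb P}_\lambda$-almost surely,
\[
\frac{X_{\tau_n}\cdot e_1}{\tau_n}
\ge
\frac{X_{\tau_n}\cdot e_1/n}
{\big(\tilde{\tau}_1+\sum_{i=1}^{n-1}\tilde{\tau}_1\circ\theta_{\tau_i}+\sum_{i=1}^{n}\xi_i/\lambda_1^2\big)/n}
\longrightarrow
L^{\beta,\lambda},
\]
and dropping the $\xi_i$-terms in the denominator gives the matching upper bound $R^{\beta,\lambda}$.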
\begin{proposition}\label{ERprop3}
When $\lambda, \beta>0$ are small enough,
\begin{equation}\label{ERe17}
\bar{\mathbb E}_\lambda[\tilde{\tau}_1\circ\theta_{\tau_1}]\ge \frac{C}{\beta\lambda_1^2}.
\end{equation}
\end{proposition}
{\it Proof:}~
By the definition of $L_i, i\ge 1$, we get
\begin{equation}\label{ERe18}
\bar{\mathbb E}_\lambda [e_1\cdot X_{\tau_1}\circ\theta_{\tau_1}]
\ge
\bar{\mathbb E}_\lambda L_1/\lambda_1
\ge
\frac{1}{\beta\lambda_1}.
\end{equation}
On the other hand, Lemma \ref{ERl2} implies that
\[
|v_\lambda|\le C\lambda \text{ for all }\lambda\in(0,1).
\]
This, together with \eqref{ERe16} and \eqref{ERe18}, yields
\[
\bar{\mathbb E}_\lambda[\tilde{\tau}_1\circ\theta_{\tau_1}]+E\xi_1/\lambda_1^2
\ge \dfrac{C}{\beta\lambda_1^2}.
\]
Recalling (see Proposition \ref{ERprop4}) that $\xi_1$ is the sum of the constant $c_2$ and an exponential random variable with rate $\kappa^2/2$, so that $E\xi_1<\infty$ is a constant independent of $\beta$ and $\lambda$, (\ref{ERe17}) then follows by taking $\beta$ sufficiently small.\qed
Note that, by \eqref{ERe16} and \eqref{ERe17},
\begin{equation}\label{ERe19}
R^{\beta,\lambda}\le (1+C\beta)L^{\beta,\lambda}\le C\lambda.
\end{equation}
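To see the first inequality in \eqref{ERe19}, note that by \eqref{ERe17},
\[
\frac{R^{\beta,\lambda}}{L^{\beta,\lambda}}
=1+\frac{E\xi_1/\lambda_1^2}{\bar{\mathbb E}_\lambda[\tilde{\tau}_1\circ\theta_{\tau_1}]}
\le 1+C\beta;
\]
the second inequality then follows from \eqref{ERe16} and the bound $|v_\lambda|\le C\lambda$ of Lemma \ref{ERl2}.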
\section{Proof of the Einstein relation}\label{ERsecpro}
\begin{lemma}\label{ERl6}
Assume $\ell\cdot e_1>0$. Then when $\beta>0$ and $\lambda>0$ are small enough,
there exists a constant $C$ such that
\[
\left|
\dfrac{\bar{\mathbb E}_\lambda X_{\tau_n}\cdot e_1}{\lambda\bar{\mathbb E}_\lambda\tau_n}
-\frac{v_\lambda\cdot e_1}{\lambda}
\right|\le C\beta+\frac{C}{n} \quad \text{ for all }n\ge 2.
\]
\end{lemma}
{\it Proof:}~
For $n\ge 2$, since
\[
\dfrac{\bar{\mathbb E}_\lambda X_{\tau_n}\cdot e_1}{\bar{\mathbb E}_\lambda\tau_n}
\ge
\dfrac{(n-1)\bar{\mathbb E}_\lambda[e_1\cdot X_{\tau_1}\circ\theta_{\tau_1}]}
{\bar{\mathbb E}_\lambda\tau_1+(n-1)\big(\bar{\mathbb E}_\lambda[\tilde{\tau}_1\circ\theta_{\tau_1}]+E\xi_1/\lambda_1^2\big)},
\]
and
\[
\dfrac{\bar{\mathbb E}_\lambda X_{\tau_n}\cdot e_1}{\bar{\mathbb E}_\lambda\tau_n}
\le
\dfrac{\bar{\mathbb E}_\lambda X_{\tau_1}\cdot e_1+(n-1)\bar{\mathbb E}_\lambda[e_1\cdot X_{\tau_1}\circ\theta_{\tau_1}]}
{(n-1)\bar{\mathbb E}_\lambda[\tilde{\tau}_1\circ\theta_{\tau_1}]},
\]
by the moment bounds \eqref{ERe14}, \eqref{ERe15}, \eqref{ERe17} and \eqref{ERe18}, we have
(for small $\beta$ and $\lambda$)
\[
\frac{L^{\beta,\lambda}/\lambda}{C/(n-1)+1}
\le
\dfrac{\bar{\mathbb E}_\lambda X_{\tau_n}\cdot e_1}{\lambda\bar{\mathbb E}_\lambda\tau_n}
\le
\frac{C}{n-1}+\frac{R^{\beta,\lambda}}{\lambda}.
\]
Hence when $\beta>0$ and $\lambda>0$ are small enough and $n\ge 2$, by \eqref{ERe16},
\[
\left|\dfrac{\bar{\mathbb E}_\lambda X_{\tau_n}\cdot e_1}{\lambda\bar{\mathbb E}_\lambda\tau_n}
-\frac{v_\lambda\cdot e_1}{\lambda}\right|
\le
\frac{C}{n-1}+\frac{R^{\beta,\lambda}}{\lambda}-\frac{L^{\beta,\lambda}/\lambda}{C/(n-1)+1}
\stackrel{(\ref{ERe19})}{\le}
C\beta+\frac{C}{n-1}.
\]
The lemma is proved.\qed
\begin{lemma}\label{ERthm3}
Assume $\ell\cdot e_1>0$.
Let $\alpha_n=\alpha_n(\beta,\lambda):=\bar{\mathbb E}_\lambda\tau_n$. Then when $\beta>0$ and $\lambda>0$ are small enough,
\[
\left|
\dfrac{\bar{\mathbb E}_\lambda X_{\tau_n}\cdot e_1}{\lambda\alpha_n}
-\dfrac{\bar{\mathbb E}_\lambda X_{\alpha_n}\cdot e_1}{\lambda\alpha_n}
\right|\le \frac{C}{n^{1/4}}\quad\text{ for all }n\in \mathbb{N}.
\]
\end{lemma}
Note that, by \eqref{ERe14} and \eqref{ERe17},
\begin{equation}\label{ERe20}
\frac{Cn}{\beta\lambda^2}
\le \alpha_n \le
\frac{C(1)n}{\beta\lambda^2}.
\end{equation}
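For $n\ge 2$ (as is the case below), \eqref{ERe20} follows from \eqref{ERe9}, \eqref{ERe14} and \eqref{ERe17}: since $\alpha_n=\sum_{k=0}^{n-1}\bar{\mathbb E}_\lambda[\tau_1\circ\theta_{\tau_k}]$,
\[
\frac{C(n-1)}{\beta\lambda_1^2}
\le
(n-1)\,\bar{\mathbb E}_\lambda[\tilde{\tau}_1\circ\theta_{\tau_1}]
\le
\alpha_n
\le
\frac{C(1)n}{\beta\lambda_1^2},
\]
where $n-1\ge n/2$, and $\lambda_1$ is comparable to $\lambda$ because $\lambda_1\le\lambda\ell_1\le 2\lambda_1$.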
{\it Proof:}~
Assume that both $\lambda$ and $\beta$ are sufficiently small.
First, for any $\rho\in(0,1)$,
\begin{align}\label{ERe21}
&\bar{\mathbb E}_\lambda
[|X_{\alpha_n}-X_{\tau_n}|1_{|\tau_n-\alpha_n|\le \rho\alpha_n}]\\
&\le
\bar{\mathbb E}_\lambda
\big[\max_{(1-\rho)\alpha_n\le s\le (1+\rho)\alpha_n}|X_s-X_{\alpha_n}|\big]\stackrel{\text{Lemma \ref{ERl2}}}{\le}
C\rho\lambda\alpha_n.\nonumber
\end{align}
Second,
\begin{align}\label{ERe22}
&\bar{\mathbb E}_\lambda
[|(X_{\alpha_n}-X_{\tau_n})\cdot e_1|1_{|\tau_n-\alpha_n|> \rho\alpha_n}]\nonumber\\
&\le
\sqrt{\bar{\mathbb E}_\lambda[|(X_{\alpha_n}-X_{\tau_n})\cdot e_1|^2]
\bar{\mathbb P}_\lambda(|\tau_n-\alpha_n|> \rho\alpha_n)}\nonumber\\
&\stackrel{\text{Lemma \ref{ERl2}, \eqref{ERe15}}}{\le }
Cn(\beta\lambda)^{-1}\sqrt{\bar{\mathbb P}_\lambda(|\tau_n-\alpha_n|> \rho\alpha_n)}.
\end{align}
Furthermore, we can show that
\begin{equation}\label{ERe23}
\bar{\mathbb P}_\lambda(|\tau_n-\alpha_n|> \rho\alpha_n)\le C/(n\rho^2).
\end{equation}
Indeed, put
\[
A_n:=\tilde{\tau}_1+\sum_{i=1}^{n-1}\tilde{\tau}_1\circ\theta_{\tau_i}
\]
and $B_n:=A_n+\sum_{i=1}^n\xi_i/\lambda_1^2$. Then by (\ref{ERe9}), we have
$A_n\le \tau_n\le B_n$. Thus
\[
A_n-\bar{\mathbb E}_\lambda A_n-Cn/\lambda^2
\le
\tau_n-\alpha_n
\le
B_n-\bar{\mathbb E}_\lambda B_n+Cn/\lambda^2.
\]
Hence, by \eqref{ERe20} and by taking $\beta>0$ small enough, we get
\begin{align*}
\bar{\mathbb P}_\lambda(\tau_n-\alpha_n> \rho\alpha_n)
&\le
\bar{\mathbb P}_\lambda(B_n-\bar{\mathbb E}_\lambda B_n\ge \rho\alpha_n/2)\\
&\le
\frac{\mathop{\rm Var} B_n}{(\rho\alpha_n/2)^2},
\end{align*}
and
\begin{align*}
\bar{\mathbb P}_\lambda(\tau_n-\alpha_n<-\rho\alpha_n)
&\le
\bar{\mathbb P}_\lambda(A_n-\bar{\mathbb E}_\lambda A_n\le -\rho\alpha_n/2)\\
&\le
\frac{\mathop{\rm Var} A_n}{(\rho\alpha_n/2)^2}.
\end{align*}
Since (recalling Proposition \ref{ERprop1})
\[
\mathop{\rm Var} A_n=\mathop{\rm Var} \tilde{\tau}_1+(n-1)\mathop{\rm Var}\tilde{\tau}_1\circ\theta_{\tau_1}
\stackrel{(\ref{ERe14})}{\le }
Cn(\beta\lambda^2)^{-2}
\]
and
\[
\mathop{\rm Var} B_n
\le
2\big(\mathop{\rm Var} A_n+\mathop{\rm Var}(\sum_{i=1}^n\xi_i/\lambda_1^2)\big)
=
2\mathop{\rm Var} A_n+Cn/\lambda_1^4
\le
Cn(\beta\lambda^2)^{-2},
\]
we conclude that
\[
\bar{\mathbb P}_\lambda(|\tau_n-\alpha_n|> \rho\alpha_n)
\le \frac{Cn(\beta\lambda^2)^{-2}}{(\rho\alpha_n/2)^2}
\le C/(n\rho^2).
\]
This completes the proof of \eqref{ERe23}.
Finally, combining (\ref{ERe21}), (\ref{ERe22}) and (\ref{ERe23}), we obtain
\[
\left|
\dfrac{\bar{\mathbb E}_\lambda X_{\tau_n}\cdot e_1}{\lambda\alpha_n}
-\dfrac{\bar{\mathbb E}_\lambda X_{\alpha_n}\cdot e_1}{\lambda\alpha_n}
\right|
\le
C\rho+\frac{C}{\rho\sqrt{n}}.
\]
The lemma follows by taking $\rho=\frac{1}{n^{1/4}}$.\qed\\
\noindent{\it Proof of Theorem \ref{ER2}:}\\
First, we will show that when $\lambda\in(0,1)$ is small enough, for any $t\ge 1$,
\begin{equation}\label{ERe26}
\left|
\dfrac{\bar{\mathbb E}_\lambda X_{t/\lambda^2}\cdot e_1}{t/\lambda}
-\frac{v_\lambda\cdot e_1}{\lambda}
\right|
\le
\frac{C}{t^{1/5}}.
\end{equation}
Note that if $\ell\cdot e_1=0$, then
$(X_n\cdot e_1)_{n=0}^\infty$ is a martingale and $\bar{\mathbb E}_\lambda X_n\cdot e_1=v_\lambda\cdot e_1=0$ for all $n$.
Hence we only consider the non-trivial case $\ell\cdot e_1\neq 0$. Without loss of generality, assume $\ell\cdot e_1>0$.
By Lemma \ref{ERl2}, the left side of \eqref{ERe26}
is uniformly bounded for all $t\ge 1$ and $\lambda\in (0,1)$. So it suffices to prove \eqref{ERe26} for all sufficiently large $t>0$ and sufficiently small $\lambda>0$.
When $t>0$ is sufficiently large and $\lambda>0$ is small enough, we let
\begin{equation}\label{ERe24}
\beta=\beta(t)=t^{-1/5}
\end{equation}
and let $n=n(t,\lambda)$ be the integer that satisfies
\[
\alpha_n\le \frac{t}{\lambda^2}<\alpha_{n+1}.
\]
By \eqref{ERe20}, the existence of $n(t,\lambda)$ is guaranteed. Moreover, since $t/\lambda^2<\alpha_{n+1}\le C(1)(n+1)/(\beta\lambda^2)$,
\begin{equation}\label{ERe25}
n\ge Ct\beta= Ct^{4/5}.
\end{equation}
Since
\begin{align*}
&\left|
\dfrac{\bar{\mathbb E}_\lambda X_{\alpha_n}\cdot e_1}{\lambda\alpha_n}-\dfrac{\bar{\mathbb E}_\lambda X_{t/\lambda^2}\cdot e_1}{t/\lambda}
\right|\\
&\le \frac{1}{\lambda\alpha_n}\bar{\mathbb E}_\lambda|X_{\alpha_n}-X_{t/\lambda^2}|+
\bar{\mathbb E}_\lambda|X_{t/\lambda^2}|\Big(\frac{1}{\lambda\alpha_n}-\frac{\lambda}{t}\Big)\\
&\le \frac{1}{\lambda\alpha_n}
\bar{\mathbb E}_\lambda
\big[\max_{\alpha_n\le s<\alpha_{n+1}}|X_{\alpha_n}-X_s|\big]+
\bar{\mathbb E}_\lambda[\max_{0\le s<\alpha_{n+1}}|X_s|]
\frac{\lambda \bar{\mathbb E}_\lambda[\tau_1\circ\theta_{\tau_n}]}{(\lambda\alpha_n)^2},
\end{align*}
by Lemma \ref{ERl2}, \eqref{ERe14} and \eqref{ERe20}, we obtain
\begin{equation*}
\left|
\dfrac{\bar{\mathbb E}_\lambda X_{\alpha_n}\cdot e_1}{\lambda\alpha_n}-
\dfrac{\bar{\mathbb E}_\lambda X_{t/\lambda^2}\cdot e_1}{t/\lambda}
\right|
\le
\frac{C}{n}.
\end{equation*}
Combining Lemma \ref{ERl6}, Lemma \ref{ERthm3} and the above inequality, we conclude that
if $t$ is sufficiently large and $\lambda>0$ is sufficiently small,
then
\[
\left|
\dfrac{\bar{\mathbb E}_\lambda X_{t/\lambda^2}\cdot e_1}{t/\lambda}
-\frac{v_\lambda\cdot e_1}{\lambda}
\right|\le C\beta+\frac{C}{n^{1/4}}
\le
\frac{C}{t^{1/5}}.
\]
Here we used \eqref{ERe24} and \eqref{ERe25} in the last inequality. This proves \eqref{ERe26}.
The analogous estimate for the remaining directions $e_2,e_3,\ldots,e_d$ can be
obtained by the same argument. Our proof of Theorem \ref{ER2} is complete.
\qed
\chapter{Limiting Velocity in Mixing Random Environment}
\label{LV chapter}
This chapter is devoted to the proof of Theorem~\ref{LVthm2}. The organization of the proof is as follows. In Section \ref{seccomb}, we prove a refined
version of \cite[Lemma 3]{Ze}. With this combinatorial result, we will prove the CLLN \eqref{ICLLN} in Section \ref{seclln}, using coupling arguments. In Section \ref{sechke}, using coupling, we obtain
heat kernel estimates, which is later used in
Section \ref{secunique} to show the uniqueness of the non-zero limiting velocity.
Throughout this chapter, we assume that \textit{the environment is uniformly elliptic with ellipticity constant $\kappa$ and satisfies $(G)$}. We use $c, C$ to denote finite positive constants that depend only on
the dimension $d$ and the environment measure $P$ (and implicitly, on the parameters $\kappa,r$ and $\gamma$ of the environment). They may differ from line to line.
We denote by $c_1,c_2,\ldots$ positive constants which are fixed throughout, and which depend only on $d$ and the measure $P$. Let $\{e_1,\ldots,e_d\}$ be the natural basis of $\mathbb{Z}^d$.
\section{A combinatorial lemma and its consequences}\label{seccomb}
In this section we consider the case that $\mathbb{P}(\varlimsup_{n\to\infty}X_n\cdot e_1/n>0)>0$. We will adapt the arguments in \cite{Ze} and prove that with positive probability,
the number of visits to the $i$-th level $\mathcal{H}_i=\mathcal{H}_i(X_0):=\{x:x\cdot e_1=X_0\cdot e_1+ i\}$ grows
slower than $Ci^2$.
An important ingredient of the proof is a refinement of a combinatorial lemma of Zerner \cite[Lemma 3]{Ze} about deterministic paths.
We say that a sequence $\{x_i\}_{i=0}^{k-1}\in (\mathbb{Z}^d)^{k}$, $2\le k\le\infty$, is a \textit{path} if
$|x_i-x_{i-1}|=1$ for $i=1,\cdots, k-1$. For $i\ge 0$ and an infinite path $X_\cdot=\{X_n\}_{n=0}^\infty$ such that $\sup_n X_n\cdot e_1=\infty$, let
\[T_i=\inf\{n\ge 0: X_n\in\mathcal{H}_i\}.\]
For $0\le i<j$ and $k\ge 1$, let $T_{i,j}^1:=T_i$ and define
recursively
\[
T_{i,j}^{k+1}=\inf\{n\ge T_{i,j}^k: X_n\in\mathcal{H}_i \text{ and } n<T_j\}\in \mathbb{N}\cup \{\infty\}.
\]
That is, $T_{i,j}^k$ is the time of the $k$-th visit to $\mathcal{H}_i$ before hitting
$\mathcal{H}_j$. Let
\[
N_{i,j}=\sup\{k: T_{i,j}^k<\infty\}
\]
be the total number of visits to $\mathcal{H}_i$ before hitting
$\mathcal{H}_j$.
As in \cite{Ze}, for $i\ge 0, l\ge 1$, let
\[
h_{i,l}=T_{i,i+l}^{N_{i,i+l}}-T_i
\]
denote the time spent between the first and the last visits to $\mathcal{H}_i$ before hitting $\mathcal{H}_{i+l}$.
For $m,M, a\ge 0$ and $l\ge 1$, set
\[
H_{m,l}=\sum_{i=0}^{l-1}N_{m+i,m+l}/(i+1)^2
\]
and
\[
E_{M,l}(a)=\frac{\#\{0\le m\le M: h_{m,l}\le a \text{ and } H_{m,l}\le a\}}{M+1}.
\]
Note that $E_{M,l}(a)$ decreases in $l$ and increases in $a$.
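To make these combinatorial quantities concrete, here is a small illustrative sketch (not part of the proof; the helper \texttt{level\_stats} and the toy path are our own) computing $N_{i,j}$, $h_{m,l}$ and $H_{m,l}$ for a finite nearest-neighbour path on $\mathbb{Z}$, where the level of a site is simply its value.

```python
# Illustration only: level-visit statistics for a finite path on Z started at 0.

def level_stats(path, m, l):
    """Return (h_{m,l}, H_{m,l}) for a finite path, levels given by site value.

    N_{i,j} = number of visits to level i strictly before first hitting level j;
    h_{m,l} = time between first and last visit to level m before hitting m+l;
    H_{m,l} = sum_{i=0}^{l-1} N_{m+i,m+l} / (i+1)^2.
    """
    def T(j):  # first hitting time of level j
        return next(n for n, x in enumerate(path) if x == j)

    def N(i, j):  # visits to level i before hitting level j
        return sum(1 for n, x in enumerate(path) if x == i and n < T(j))

    visits_to_m = [n for n, x in enumerate(path) if x == m and n < T(m + l)]
    h = visits_to_m[-1] - visits_to_m[0]
    H = sum(N(m + i, m + l) / (i + 1) ** 2 for i in range(l))
    return h, H

# Toy path 0,1,0,1,2,3: level 0 is visited at times 0 and 2 before level 2
# is first hit at time 4, so h_{0,2}=2 and H_{0,2} = 2/1 + 2/4 = 2.5.
print(level_stats([0, 1, 0, 1, 2, 3], 0, 2))  # → (2, 2.5)
```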
The following lemma is
a minor adaptation of \cite[Lemma 3]{Ze}.
\begin{lemma}\label{LVl5}
For any path $X_\cdot$ with $\varlimsup_{n\to\infty}X_n\cdot e_1/n>0$,
\begin{equation}\label{LVe27}
\sup_{a\ge 0}\inf_{l\ge 1}\varlimsup_{M\to\infty}E_{M,l}(a)>0.
\end{equation}
\end{lemma}
{\it Proof:}~
Since $\varlimsup_{n\to\infty}n/T_n=\varlimsup_{n\to\infty}X_n\cdot e_1/n>0$,
there exist an increasing sequence $(n_k)_{k=0}^\infty$ and $\delta<\infty$ such that
\[
T_{n_k}<\delta n_k \text{ for all }k.
\]
Thus for any $m$ such that $n_k/2\le m\le n_k$,
\begin{equation}\label{LV*17}
T_m\le 2\delta m.
\end{equation}
Set $M_k=\lceil n_k/2\rceil$, where $\lceil x\rceil\in\mathbb{N}$ denotes the
smallest integer which is not smaller than $x$. Then for all $k$ and
$1<l<\lfloor n_k/2 \rfloor$,
\begin{align}\label{LVe28}
\sum_{m=0}^{M_k} H_{m,l}
&=\sum_{i=0}^{l-1}\Big(\sum_{m=0}^{M_k}N_{m+i,m+l}\Big)/(i+1)^2\nonumber\\
&\le \sum_{i=0}^{l-1} T_{M_k+l}/(i+1)^2
\stackrel{(\ref{LV*17})}{\le} 4\delta (M_k+l).
\end{align}
By the same argument as on pages 193--194 of \cite{Ze}, we will show that there exist
constants $c_1, c_2>0$ such that
\begin{equation}\label{LV*18}
\inf_{l\ge 1}\varlimsup_{k\to\infty}
\frac{\#\{0\le m\le M_k: h_{m,l}\le c_1 \}}{M_k+1}>c_2.
\end{equation}
Indeed, if (\ref{LV*18}) fails,
then for any $u>0$,
\[
\varlimsup_{k\to\infty}\dfrac{\#\{0\le m\le M_k: h_{m,l}\le u\}}{M_k+1}\longrightarrow 0
\]
as $l\to\infty$ (note that this quantity is non-increasing in $l$). Hence,
one can find a sequence $(l_i)_{i\ge 0}$ with $l_{i+1}>l_i, l_0=0,$
such that for all $i\ge 0$,
\begin{equation}\label{LV*19}
\varlimsup_{k\to\infty}\dfrac{\#\{0\le m\le M_k, h_{m,l_{i+1}}\le 6\delta l_i\}}{M_k+1}<\frac{1}{3}.
\end{equation}
On the other hand, for $i\ge 0$
\begin{align}
&\varlimsup_{k\to\infty}\dfrac{\#\{0\le m\le M_k, h_{m,l_i}\ge 6\delta l_i\}}{M_k+1}\nonumber\\
&\le
\varlimsup_{k\to\infty}\frac{1}{(M_k+1)6\delta l_i}\sum_{m=0}^{M_k}(T_{m+l_i}-T_m)\nonumber\\
&\le
\varlimsup_{k\to\infty}\frac{l_i T_{M_k+l_i}}{6\delta l_i(M_k+1)}
\stackrel{(\ref{LV*17})}{\le}\frac{1}{3}. \label{LV*20}
\end{align}
By (\ref{LV*19}) and (\ref{LV*20}), for any $i\ge 0$,
\begin{equation}\label{LV*21}
\varlimsup_{k\to\infty}\dfrac{\#\{0\le m\le M_k, h_{m,l_{i+1}}> h_{m,l_i}\}}{M_k+1}
\ge \frac{1}{3}.
\end{equation}
Therefore, for any $j\ge 1$, noting that
\[
\sum_{i=0}^{j-1}1_{h_{m,l_{i+1}}>h_{m,l_i}}\le N_{m,m+l_j}\le H_{m,l_j},
\]
we have
\begin{align*}
\frac{j}{3}&\stackrel{(\ref{LV*21})}{\le}
\varlimsup_{k\to\infty}
\sum_{i=0}^{j-1}\dfrac{\#\{0\le m\le M_k, h_{m,l_{i+1}}> h_{m,l_i}\}}{M_k+1}\\
&\le\varlimsup_{k\to\infty}
\frac{1}{M_k+1}\sum_{m=0}^{M_k}H_{m,l_j}\stackrel{(\ref{LVe28})}{\le} 4\delta,
\end{align*}
which is a contradiction if $j$ is large. This proves (\ref{LV*18}).
It follows from (\ref{LV*18}) that, for any $l\ge 1$, there is a subsequence
$(M'_k)$ of $(M_k)$ such that
\[
\frac{\#\{0\le m\le M'_k: h_{m,l}\le c_1 \}}{M'_k+1}>c_2
\]
for all $k$.
Letting $c_3=9\delta/c_2$, we have that when $k$ is large enough,
\[
\frac{1}{M'_k+1}\sum_{m=0}^{M'_k}1_{h_{m,l}\le c_1, H_{m,l}>c_3}
\le
\frac{1}{c_3(M'_k+1)}\sum_{m=0}^{M'_k}H_{m,l}
\stackrel{(\ref{LVe28})}{\le}
\frac{c_2}{2}.
\]
Hence for any $l>1$ and large $k$,
\begin{align*}
E_{M_k',l}(c_1\vee c_3)
&\ge \frac{1}{M'_k+1}\sum_{m=0}^{M'_k}1_{h_{m,l}\le c_1,H_{m,l}\le c_3}\\
&=
\frac{1}{M'_k+1}\sum_{m=0}^{M'_k}(1_{h_{m,l}\le c_1}-1_{h_{m,l}\le c_1,H_{m,l}> c_3})
\ge \frac{c_2}{2}.
\end{align*}
This proves the lemma; in fact, it does so with explicit constants.\qed\\
For $i\ge 0$, let $N_i=\mathop{\rm lim}_{j\to\infty} N_{i,j}$ denote the total number of visits to $\mathcal{H}_i$.
With Lemma \ref{LVl5}, one can deduce that with positive probability, $N_i\le C(i+1)^2$ for all $i\ge 0$:
\begin{theorem}\label{LVthm3}
If $\mathbb{P}(\varlimsup_{n\to\infty}X_n\cdot e_1/n>0)>0$, then there exists a constant
$c_5$ such that
\[\mathbb{P}(R=\infty)>0,\]
where $R$ is the stopping time defined by
\begin{align*}
R&=R_{e_1}(X_\cdot, c_5)\\
&:=
\inf\{n\ge 0: \sum_{i=0}^n 1_{X_i\in\mathcal{H}_j}>c_5(j+1)^2 \text{ for some }j\ge 0\}\wedge D,
\end{align*}
and $D:=\inf\{n\ge 1: X_n\cdot e_1\le X_0\cdot e_1\}$.
\end{theorem}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{intersection}
\caption{On $\{R=\infty\}$, the path visits the $i$-th level no more than
$c_5(i+1)^2$ times.}
\end{figure}
Note that for any $L>0$ and a path $(X_i)_{i=0}^\infty$ with $X_0=o$,
\begin{align}\label{LVe32}
\sum_{\substack{y: y\cdot e_1\le -L\\0\le i\le R}}e^{-\gamma d(y,X_i)}
&\le
\sum_{j=0}^\infty (\#\text{visits to $\mathcal{H}_j$ before time $R$})e^{-\gamma (j+L)}\nonumber\\
&\le
C\sum_{j=0}^\infty c_5(j+1)^2e^{-\gamma(j+L)}\le Ce^{-\gamma L}.
\end{align}
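The series appearing in \eqref{LVe32} can be checked numerically. The sketch below (an illustration only, with the sample choice $\gamma=1$) confirms that $\sum_{j\ge 0}(j+1)^2e^{-\gamma(j+L)}=S(\gamma)\,e^{-\gamma L}$, where $S(\gamma)=\frac{1+e^{-\gamma}}{(1-e^{-\gamma})^3}<\infty$.

```python
import math

def weighted_tail(gamma, L, terms=200):
    # partial sum of sum_{j>=0} (j+1)^2 * exp(-gamma*(j+L)); terms decay fast
    return sum((j + 1) ** 2 * math.exp(-gamma * (j + L)) for j in range(terms))

gamma = 1.0
x = math.exp(-gamma)
S = (1 + x) / (1 - x) ** 3            # closed form of sum_{j>=0} (j+1)^2 x^j
assert abs(weighted_tail(gamma, 0) - S) < 1e-9
for L in (1, 5, 10):
    # the sum factors exactly as exp(-gamma*L) * S(gamma)
    assert abs(weighted_tail(gamma, L) - math.exp(-gamma * L) * S) < 1e-9
```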
Hence on the event $\{R=\infty\}$, by (\ref{LVe32}) and $(G)$,
the trajectory $(X_i)_{i=0}^\infty$ is ``almost
independent'' of the environments $\{\omega_x:x\cdot e_1\le -L\}$ when $L$ is large.
This fact will be used in our definition of the regeneration times in Section \ref{seclln}.
To prove Theorem \ref{LVthm3}, we need the following lemma.
Recall that $r,\gamma$ are parameters of the environment measure $P$.
Let $S$ be a countable set of finite paths.
With an abuse of notation, we
also use $S$ to denote the event
\begin{equation}\label{LV2e*}
\bigcup_{(x_i)_{i=0}^N\in S}\{X_i=x_i \text{ for }0\le i\le N\}.
\end{equation}
\begin{lemma}\label{LVc2}
Let $a>0$ and $A\subset\Lambda\subset\mathbb{Z}^d$.
Suppose $S\neq\emptyset$ is a countable set of finite paths
$x_\cdot=(x_i)_{i=0}^N, N<\infty$ that satisfy $d(x_\cdot, \Lambda)\ge r$ and
\[
\sum_{y\in A, 0\le i\le N}e^{-\gamma d(y,x_i)}\le a.
\]
Then, $P$-almost surely,
\begin{equation}\label{LVe31}
\exp(-Ca)\le\frac{E_P [P_\omega(S)|\omega_x: x\in\Lambda]}{E_P [P_\omega(S)|\omega_x: x\in\Lambda\setminus A]}\le\exp(Ca).
\end{equation}
\end{lemma}
{\it Proof:}~
We shall first show that for any $(x_i)_{i=0}^N\in S$, $P$-almost surely,
\begin{align}\label{LVe40}&E_P[P_\omega(X_i=x_i,0\le i\le N)|\omega_y:y\in\Lambda]\nonumber\\&\le \exp(Ca)E_P[P_\omega(X_i=x_i,0\le i\le N)|\omega_y:y\in\Lambda\setminus A].\end{align}
Note that when $\Lambda^c$ is a finite subset of $\mathbb{Z}^d$, (\ref{LVe40})
is an easy consequence of $(G)$. For general $\Lambda$, we let
\[
\Lambda_n=\Lambda\cup\{x:|x|\ge n\}.
\]
When $n$ is sufficiently big, $(G)$ implies that
\begin{equation*}
\frac{E_P[P_\omega(X_i=x_i,0\le i\le N)|\omega_y:y\in\Lambda_n]}{E_P[P_\omega(X_i=x_i,0\le i\le N)|\omega_y:y\in\Lambda_n\setminus A]}\le\exp(Ca).
\end{equation*}
Since $\Lambda_n\downarrow \Lambda$ as $n\to\infty$,
(\ref{LVe40}) follows by taking $n\to\infty$ in the above inequality.
Summing over all $(x_i)_{i=0}^N\in S$ on both sides of (\ref{LVe40}), we conclude that
$P$-almost surely,
\[
E_P[P_\omega(S)|\omega_y:y\in\Lambda]
\le \exp(Ca)E_P[P_\omega(S)|\omega_y:y\in\Lambda\setminus A].
\]
The upper bound of (\ref{LVe31}) is proved. The lower bound follows likewise.\qed
Now we can prove the theorem. Our proof is a modification
of the proof of Theorem 1 in \cite{Ze}:\\
\noindent\textit{Proof of Theorem \ref{LVthm3}:}
It follows by Lemma \ref{LVl5} that there exists a constant $c_4>0$ such that
\begin{equation}\label{LV*22}
\mathbb{P}(\inf_{l\ge 1}\varlimsup_{M\to\infty}E_{M,l}(c_4)>0)>0.
\end{equation}
For $l>r$, $k\ge 0$ and $z\in\mathbb{Z}^d$ with $z\cdot e_1=r$, let $B_{m,l}(z,k,c)$ denote the event
\[
\{N_{m+r,m+l}=k,X_{T_{m+r,m+l}^k}=X_{T_m}+z,H_{m+r,l-r}\le c\}.
\]
Note that on the event $\{h_{m,l}\le c_4\text{ and }H_{m,l}\le c_4\}$, we have
\begin{align*}
T_{m+r,m+l}^{N_{m+r,m+l}}-T_m
&\le h_{m,l}+\sum_{i=0}^r N_{m+i,m+l}\\
&\le c_4+\sum_{i=0}^r (i+1)^2c_4\le (1+r)^3c_4,
\shortintertext{and}
H_{m+r,l-r}
&\le \sum_{i=0}^{l-r-1}(r+1)^2N_{m+r+i,m+l}/(r+i+1)^2\\
&\le (r+1)^2c_4=:c_5.
\end{align*}
Hence
$
\{h_{m,l}\le c_4\text{ and }H_{m,l}\le c_4\}\subset \bigcup_{|z|,k\le (r+1)^3c_4}B_{m,l}(z,k,c_5),
$
and
\[
\mathop{\rm lim}_{l\to\infty}\varlimsup_{M\to\infty}E_{M,l}(c_4)
\le
\sum_{|z|,k\le (r+1)^3c_4}
\varlimsup_{l\to\infty}\varlimsup_{M\to\infty}\frac{1}{M+1}
\sum_{m=0}^M 1_{B_{m,l}(z,k,c_5)}.
\]
Thus by (\ref{LV*22}), for some $k_0$ and $z_0$ with $z_0\cdot e_1=r$,
\begin{equation}\label{LVe29}
\mathbb{P}(\varlimsup_{l\to\infty}\varlimsup_{M\to\infty}
\frac{1}{M+1}\sum_{m=0}^M 1_{B_{m,l}(z_0,k_0,c_5)}>0)>0.
\end{equation}
In what follows, we write $B_{m,l}(z_0,k_0,c_5)$ simply as
$B_{m,l}$.
For any $l>r$ and any fixed $i\le l-1$, let $m_j=m_j(l,i):=i+jl$; that is, $(m_j)_{j\ge 0}$ enumerates
the residue class of $i$ modulo $l$.
Now take any $j\in \mathbb{N}$. Observe that for any event $E=\{1_{B_{m_{j-1},l}}=\cdot,\ldots,1_{B_{m_0,l}}=\cdot\}$
and $x\in \mathcal{H}_{m_j}$,
\begin{align}\label{LV*1}
\MoveEqLeft P_\omega(\{X_{T_{m_j}}=x\}\cap E\cap B_{m_j,l})\\
&\le
P_\omega(\{X_{T_{m_j}}=x\}\cap E)
P_\omega^{x+z_0}(D>T_{l-r},H_{0,l-r}\le c_5).\nonumber
\end{align}
Moreover, for any $x\in \mathcal{H}_{m_j}$, there exists a countable set $S$ of finite paths $(x_i)_{i=0}^N$ that satisfy
$m_j+r\le x_i\cdot e_1\le m_j+l$ and $\#\{k\le N: x_k\in\mathcal{H}_i(x_0)\}\le c_5(i+1)^2$ for
$0\le i\le N$, such that
\begin{align*}
&\{X_0=x+z_0, D>T_{l-r},H_{0,l-r}\le c_5\}\\
&=\cup_{(x_i)_{i=0}^N\in S}\{X_i=x_i
\text{ for }0\le i\le N\}.
\end{align*}
Noting that (by the same argument as in (\ref{LVe32})) for any $(x_i)_{i=0}^N\in S$,
\[
\sum_{\substack{y:y\cdot e_1\le m_j\\i\le N}}e^{-\gamma d(y,x_i)}\le Ce^{-\gamma r},
\]
by Lemma \ref{LVc2} we have
\begin{align*}
&E_P[P_\omega^{x+z_0}(D>T_{l-r},H_{0,l-r}\le c_5)|\omega_y:y\cdot e_1\le m_j]\\
&\le
\exp{(Ce^{-\gamma r})}\mathbb{P}(D>T_{l-r},H_{0,l-r}\le c_5).
\end{align*}
Thus for $j\ge 0$ and $l>r$,
\begin{align*}
&\mathbb{P}(E\cap B_{m_j,l})\\
&\stackrel{(\ref{LV*1})}{\le}
\sum_{x\in\mathcal{H}_{m_j}}
E_P \big[P_\omega(\{X_{T_{m_j}}=x\}\cap E)P_\omega^{x+z_0}(D>T_{l-r},H_{0,l-r}\le c_5)\big]\\
&\le
\exp{(Ce^{-\gamma r})}
\sum_{x\in\mathcal{H}_{m_j}}
\mathbb{P}(\{X_{T_{m_j}}=x\}\cap E)\mathbb{P}(D>T_{l-r},H_{0,l-r}\le c_5)\\
&=
C\mathbb{P}(E)
\mathbb{P}(D>T_{l-r},H_{0,l-r}\le c_5).
\end{align*}
Hence, for any $j\ge 0$ and $l>r$,
\begin{equation*}
\mathbb{P}(1_{B_{m_j,l}}=1|1_{B_{m_{j-1},l}},\ldots,1_{B_{m_0,l}})
\le
C\mathbb{P}(D>T_{l-r},H_{0,l-r}\le c_5),
\end{equation*}
which implies that $\mathbb{P}$-almost surely,
\begin{equation}\label{LVe30}
\varlimsup_{n\to\infty}
\frac{1}{n}\sum_{j=0}^{n-1} 1_{B_{m_j,l}}
\le C\mathbb{P}(D>T_{l-r},H_{0,l-r}\le c_5).
\end{equation}
Therefore, $\mathbb{P}$-almost surely,
\begin{align*}
\varlimsup_{l\to\infty}\varlimsup_{M\to\infty}
\frac{1}{M+1}\sum_{m=0}^M 1_{B_{m,l}}
&\le \varlimsup_{l\to\infty}\frac{1}{l}\sum_{i=0}^{l-1}
\varlimsup_{M\to\infty}\frac{l}{M+1}
\sum_{\substack{0\le m\le M\\m\text{ mod }l=i}} 1_{B_{m,l}}\\
&\stackrel{(\ref{LVe30})}{\le} \mathop{\rm lim}_{l\to\infty}
C\mathbb{P}(D>T_{l-r},H_{0,l-r}\le c_5)\\
&=C\mathbb{P}(D=\infty, \sum_{i=0}^\infty N_i/(i+1)^2\le c_5).
\end{align*}
This and (\ref{LVe29}) yield
$\mathbb{P}(D=\infty, \sum_{i=0}^\infty N_i/(i+1)^2\le c_5)>0$.
The theorem follows.\qed
\section{The conditional law of large numbers}\label{seclln}
In this section we will prove the conditional law of large numbers \eqref{ICLLN}, using
regeneration times and coupling.
Given the dependence structure of the environment, we want
to define regeneration times in such a way that what happens after a regeneration time has little
dependence on the past. To this end, we will use the ``$\epsilon$-coins" trick introduced in \cite{CZ1} and the stopping time $R$ to define the regeneration times.
Intuitively, at a regeneration time, the past and the future movements have nice properties. That
is, the walker has walked straight for a while without paying attention to the environment, and his
future movements have little dependence on his past movements.
We define the $\epsilon$-coins $(\epsilon_{i,x})_{i\in\mathbb{N}, x\in \mathbb{Z}^d}=:\epsilon$
to be iid random variables with distribution
$Q$ such that
\[Q(\epsilon_{i,x}=1)=d\kappa \text{ and }Q(\epsilon_{i,x}=0)=1-d\kappa.\]
For fixed $\omega$ and $\epsilon$, $P_{\omega,\epsilon}^x$ is the law of the Markov chain $(X_n)$ such that $X_0=x$ and, for any $e\in\mathbb{Z}^d$
with $|e|=1$,
\[
P_{\omega,\epsilon}^x(X_{n+1}=z+e|X_n=z)
=\frac{1_{\epsilon_{n,z}=1}}{2d}+\frac{1_{\epsilon_{n,z}=0}}{1-d\kappa}[\omega(z,z+e)-\frac{\kappa}{2}].
\]
Note that the law of $X_\cdot$ under $\bar{P}_\omega^x=Q\otimes P_{\omega,\epsilon}^x$ coincides with its
law under $P_\omega^x$. Sometimes we also regard
$P_{\omega,\epsilon}^x (\cdot)$ as a measure on sets of paths, without indicating the specific random path.
Denote by
$\bar{\mathbb{P}}=P\otimes Q\otimes P_{\omega,\epsilon}^o$ the law of the triple
$(\omega, \epsilon, X_\cdot)$.
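As a sanity check on the $\epsilon$-coin decomposition, the following sketch (illustration only; $d=2$, $\kappa=0.1$ and the kernel $\omega$ are hypothetical choices of ours) verifies that the remainder kernel $\big(\omega(z,z+e)-\frac{\kappa}{2}\big)/(1-d\kappa)$ is a genuine probability vector, and that averaging over the coin recovers $\omega$ exactly.

```python
# Illustration: one site of a uniformly elliptic kernel in d=2 with kappa=0.1.
d, kappa = 2, 0.1
omega = {(1, 0): 0.4, (-1, 0): 0.2, (0, 1): 0.3, (0, -1): 0.1}  # all >= kappa

uniform = {e: 1.0 / (2 * d) for e in omega}
remainder = {e: (omega[e] - kappa / 2) / (1 - d * kappa) for e in omega}

# remainder is a probability kernel since omega(e) >= kappa > kappa/2
assert all(p >= 0 for p in remainder.values())
assert abs(sum(remainder.values()) - 1) < 1e-12

# mixing with coin weight Q(eps=1) = d*kappa reproduces omega exactly
mixture = {e: d * kappa * uniform[e] + (1 - d * kappa) * remainder[e]
           for e in omega}
assert all(abs(mixture[e] - omega[e]) < 1e-12 for e in omega)
```

This is precisely why the walk under $\bar{P}_\omega^x$ has the same law as under $P_\omega^x$: conditionally on the coin, each step is either uniform or drawn from the remainder kernel, and the mixture is $\omega$.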
Now we define the regeneration times in the direction $e_1$.
Let $L$ be a fixed number which is sufficiently large.
Set $R_0=0$. Define inductively for $k\ge 0$:
\begin{align*}
&S_{k+1}=\inf\{n\ge R_k: X_{n-L}\cdot e_1>\max\{X_m\cdot e_1: m<n-L\},\\
&\qquad\qquad\qquad \epsilon_{n-i, X_{n-i}}=1, X_{n-i+1}-X_{n-i}=e_1 \text{ for all }1\le i\le L\},\\
&R_{k+1}=R\circ\theta_{S_{k+1}}+S_{k+1},
\end{align*}
where $\theta_n$ denotes the time shift of the path, i.e.,
$\theta_n X=(X_{n+i})_{i=0}^\infty$.
Let \[K=\inf\{k\ge 1: S_k<\infty,R_k=\infty\}\] and $\tau_1=\tau_1(e_1,\epsilon,X_\cdot):=S_K.$
For $k\ge 1$, the ($L$-)regeneration times are defined inductively by \[\tau_{k+1}=\tau_1\circ\theta_{\tau_k}+\tau_k .\]
By an argument similar to that of \cite[Lemma 2.2]{CZ1}, we can show:
\begin{lemma}\label{LVl7}
If $\mathbb{P}(\mathop{\rm lim}_{n\to\infty}X_n\cdot e_1/n=0)<1$, then
\begin{equation}\label{LVe19}
\mathbb{P}(A_{e_1}\cup A_{-e_1})=1.
\end{equation}
Moreover, on $A_{e_1}$, $\tau_i$'s are $\bar{\mathbb P}$-almost surely finite.
\end{lemma}
{\it Proof:}~
If $\mathbb{P}(\mathop{\rm lim}_{n\to\infty}X_n\cdot e_1/n=0)<1$, then
\[
\mathbb{P}(\varlimsup_{n\to\infty}X_n\cdot e_1/n>0)>0\quad\text{ or }\quad
\mathbb{P}(\varlimsup_{n\to\infty}X_n\cdot (-e_1)/n>0)>0.
\]
Without loss of generality, assume that
\[\mathbb{P}(\varlimsup_{n\to\infty}X_n\cdot e_1/n>0)>0.\]
It then follows from Theorem \ref{LVthm3} that $\mathbb{P}(R=\infty)>0$.
We want to show that
$R_k=\infty$ for all but finitely many $k$'s.
For $k\ge 0$,
\begin{align*}
&\bar{\mathbb P}(R_{k+1}<\infty)\\
&= \bar{\mathbb P}(S_{k+1}<\infty, R\circ\theta_{S_{k+1}}<\infty)\\
&=\sum_{n,x}\bar{\mathbb P}
(S_{k+1}=n, X_n=x, R\circ\theta_n <\infty)\\
&=\sum_{n,x}E_{P\otimes Q}
\big[P_{\omega,\epsilon}(S_{k+1}=n,X_n=x)
P_{\omega,\theta^n\epsilon}^x(R<\infty)\big],
\end{align*}
where $\theta^n\epsilon$ denotes the time shift of the coins $\epsilon$,
i.e. $(\theta^n\epsilon)_{i,x}=\epsilon_{n+i,x}$.
Note that $P_{\omega,\epsilon}(S_{k+1}=n,X_n=x)$ and
$P_{\omega,\theta^n\epsilon}^x(R<\infty)$ are independent under the measure $Q$,
since the former is a function of $\epsilon$'s before time $n$, and the latter
involves $\epsilon$'s after time $n$.
It then follows by induction that
\begin{align*}
&\bar{\mathbb P}(R_{k+1}<\infty)\\
&=\sum_{n,x}E_P
\big[\bar{P}_\omega(S_{k+1}=n,X_n=x)
\bar{P}_\omega^x(R<\infty)\big]\\
&=\sum_{n,x}E_P\big[\bar{P}_\omega(S_{k+1}=n,X_n=x)
E_P[\bar{P}_\omega^x(R<\infty)|\omega_y: y\cdot e_1\le x\cdot e_1-L]
\big]\\
&\stackrel{(\ref{LVe32}), \text{Lemma }\ref{LVc2}}{\le} \bar{\mathbb P}(R_k<\infty)\exp{(e^{-cL})}\bar{\mathbb P}(R<\infty)\\
&\le [\exp{(e^{-cL})}\bar{\mathbb P}(R<\infty)]^{k+1},
\end{align*}
where we used in the second equality the fact that $\bar{P}_\omega(S_{k+1}=n,X_n=x)$
is $\sigma(\omega_y: y\cdot e_1\le x\cdot e_1-L)$-measurable.
Hence, by taking $L$ sufficiently large and by the Borel-Cantelli Lemma, $\bar{\mathbb P}$-almost surely,
$R_k=\infty$ except for finitely many values of $k$.
Let $\mathcal{O}_{e_1}$
denote the event that the sign of $X_n\cdot e_1$ changes infinitely often.
It is easily seen that (by the ellipticity of the environment)
\begin{align*}
&\mathbb{P}(\mathcal{O}_{e_1}\cup A_{e_1}\cup A_{-e_1})=1
\shortintertext{and}
&\mathcal{O}_{e_1}\subset \{\sup_n X_n\cdot e_1=\infty\}.
\end{align*}
However, on $\{\sup_n X_n\cdot e_1=\infty\}$, given that
$R_k$ is finite, $S_{k+1}$ is also finite.
Hence $\tau_1$ is $\bar{\mathbb P}$-almost surely finite on $\{\sup_n X_n\cdot e_1=\infty\}$,
and so are the regeneration times $\tau_2,\tau_3\ldots$.
Therefore,
\[
\mathbb{P}(\mathcal{O}_{e_1})=\bar{\mathbb P}(\mathcal{O}_{e_1}\cap\{\tau_1<\infty\}).
\]
Since $\mathcal{O}_{e_1}\cap\{\tau_1<\infty\}=\emptyset$, we get $\mathbb{P}(\mathcal{O}_{e_1})=0$. This gives (\ref{LVe19}). \qed\\
When $\mathbb{P}(R=\infty)>0$, we let
\[
\hat{\mathbb P}(\cdot):=\bar{\mathbb P}(\cdot|R=\infty).
\]
The following proposition is a consequence of Lemma \ref{LVc2}.
\begin{proposition}
Assume $\mathbb{P}(R=\infty)>0$.
Let $l>r$ and
$\Lambda\subset\{x:x\cdot e_1<-r\}.$
Then for any $A\subset\Lambda\cap\{x: x\cdot e_1<-l\}$ and $k\in\mathbb{N}$,
\begin{equation}\label{LVprop1}
\exp(-Ce^{-\gamma l})
\le
\dfrac{E_P\big[\bar{P}_\omega\big((X_i)_{i=0}^{\tau_k}\in\cdot, R=\infty\big)|\omega_y:y\in\Lambda\setminus A]}
{E_P\big[\bar{P}_\omega\big((X_i)_{i=0}^{\tau_k}\in\cdot, R=\infty\big)|\omega_y:y\in\Lambda]}
\le
\exp(Ce^{-\gamma l}).
\end{equation}
Furthermore, for any $k\in\mathbb{N}$ and $n\ge 0$, $\hat{\mathbb P}$-almost surely,
\begin{equation}\label{LVprop2}
\exp(-e^{-cL})
\le
\frac{\hat{\mathbb P}\big((X_{\tau_n+i}-X_{\tau_n})_{i=0}^{\tau_{n+k}-\tau_n}\in\cdot|X_{\tau_n}\big)}
{\hat{\mathbb P}\big((X_i)_{i=0}^{\tau_k}\in\cdot\big)}
\le
\exp(e^{-cL}).
\end{equation}
\end{proposition}
{\it Proof:}~
First, we shall prove (\ref{LVprop1}).
By the definition of the regeneration times, for any finite path $x_\cdot=(x_i)_{i=0}^N, N<\infty$, there exists an event
$G_{x_\cdot}\in\sigma(\epsilon_{i,X_i},X_i: i\le N)$
such that $G_{x_\cdot}\subset\{R>N\}$ and
\[
\{(X_i)_{i=0}^{\tau_k}=(x_i)_{i=0}^N, R=\infty\}
=G_{x_\cdot}\cap\{R\circ\theta_N=\infty\}.
\]
(For example, when $k=1$, we let
\[
G_{x_\cdot}=\bigcup_{j=1}^\infty
\{(X_i)_{i=0}^N=(x_i)_{i=0}^N, S_j=N, R>N\}.
\]
Then $\{(X_i)_{i=0}^{\tau_1}=(x_i)_{i=0}^N, R=\infty\}
=G_{x_\cdot}\cap\{R\circ\theta_N=\infty\}.$)
For $n\in\mathbb{N}$, we let
\[
E_n:=G_{x_\cdot}\cap\{R\circ\theta_N\ge n\}.
\]
Note that $E_n\in\sigma(\epsilon_{i,X_i},X_i:i\le N+n)$ can be interpreted (in the sense of (\ref{LV2e*}))
as a set of paths with lengths $\le N+n$. Also note that $E_n\subset\{R>N+n\}$.
Then by Lemma \ref{LVc2} and (\ref{LVe32}), we have
\[
\exp(-Ce^{-\gamma l})
\le
\dfrac{E_P\big[\bar{P}_\omega\big(E_n)|\omega_y:y\in\Lambda\setminus A]}
{E_P\big[\bar{P}_\omega\big(E_n\big)|\omega_y:y\in\Lambda]}
\le
\exp(Ce^{-\gamma l}).
\]
(\ref{LVprop1}) follows by letting $n\to\infty$.
Next, we shall prove (\ref{LVprop2}).
Let $x\in\mathbb{Z}^d$ be any point that satisfies
\[
\bar{\mathbb P}(X_{\tau_n}=x)>0.
\]
By the definition of the regeneration times,
for any $m\in\mathbb{N}$,
there exists an event
$G_m^x\in\sigma\{\epsilon_{i,X_i},X_i: i\le m\}$
such that $\bar{P}_\omega(G_m^x)$ is
$\sigma(\omega_y:y\cdot e_1\le x\cdot e_1-L)$-measurable, and
\[
\{\tau_n=m,X_m=x, R=\infty\}=G^x_m\cap\{R\circ\theta_m=\infty\}.
\]
Thus
\begin{align}\label{LV2e2}
&\bar{\mathbb P}\big((X_{\tau_n+i}-X_{\tau_n})_{i=0}^{\tau_{n+k}-\tau_n}\in\cdot,X_{\tau_n}=x, R=\infty\big)\nonumber\\
&=\sum_{m}\bar{\mathbb P}\big((X_{\tau_n+i}-X_{\tau_n})_{i=0}^{\tau_{n+k}-\tau_n}\in\cdot, \tau_n=m,X_m=x,R=\infty\big)\nonumber\\
&=
\sum_{m}E_P\big[\bar{P}_\omega(G_m^x)
\bar{P}_\omega^x((X_i-x)_{i=0}^{\tau_k}\in\cdot,R=\infty)\big]\nonumber\\
&\stackrel{(\ref{LVprop1})}{\le}
\exp(Ce^{-\gamma L})\sum_{m}\bar{\mathbb P}(G_m^x)
\bar{\mathbb P}\big((X_i)_{i=0}^{\tau_k}\in\cdot, R=\infty\big).
\end{align}
On the other hand,
\begin{align}\label{LV2e11}
\bar{\mathbb P}(X_{\tau_n}=x, R=\infty)
&=\sum_{m} E_P[\bar{P}_{\omega}(G^x_m)\bar{P}_\omega^x(R=\infty)]\nonumber\\
&\stackrel{(\ref{LVprop1})}{\ge}\exp(-Ce^{-\gamma L})
\sum_{m}
\bar{\mathbb P}(G_m^x)\bar{\mathbb P}(R=\infty).
\end{align}
By (\ref{LV2e2}) and (\ref{LV2e11}), we have (note that $L$ is sufficiently big)
\[
\hat{\mathbb P}\big((X_{\tau_n+i}-X_{\tau_n})_{i=0}^{\tau_{n+k}-\tau_n}\in\cdot|X_{\tau_n}=x\big)
\le
\exp(e^{-cL})\hat{\mathbb P}\big((X_i)_{i=0}^{\tau_k}\in\cdot\big).
\]
The right side of (\ref{LVprop2}) is proved. The left side of (\ref{LVprop2}) follows likewise.
\qed
The next lemma describes the dependence of a regeneration on its remote past.
It is a version of Lemma 2.2 in \cite{CZ2}. (The denominator is omitted in the
last equality in \cite[page 101]{CZ2}, which is corrected here, see the equality in (\ref{LV*8}).)
Set $\tau_0=0$.
Denote the truncated path between $\tau_{n-1}$ and $\tau_n-L$ by
\[
P_n=(P_n^i)_{0\le i\le \tau_{n}-\tau_{n-1}-L}:=(X_{i+\tau_{n-1}}-X_{\tau_{n-1}})_{0\le i\le \tau_n-\tau_{n-1}-L}.
\]
Set
\begin{align*}
W_n &=(\omega_{x+X_{\tau_{n-1}}})_{x\in P_n}=:\omega_{X_{\tau_{n-1}}+P_n},\\
F_n &=X_{\tau_n}-X_{\tau_{n-1}},\\
J_n &=(P_n,W_n,F_n,\tau_n-\tau_{n-1}).
\end{align*}
For $i\ge 0$, let $h_{i+1}(\cdot|j_i,\ldots,j_1):=\hat{\mathbb{P}}(J_{i+1}\in\cdot|J_{i},\ldots,J_1)|_{J_{i}=j_i,\ldots,J_1=j_1}$ denote the transition kernel of $(J_n)$. Note that when $i=0$, $h_{i+1}(\cdot|j_i,\ldots,j_1)=h_1(\cdot|\emptyset)=\hat{\mathbb P}(J_1\in\cdot)$.
\begin{lemma}\label{LVl4}
Assume $\mathbb{P}(R=\infty)>0$, $0\le k\le n$. Then $\hat{\mathbb P}$-almost surely,
\begin{equation}\label{LVe26}
\exp{(-e^{-c(k+1)L})}
\le
\frac{h_{n+1}(\cdot|J_n,\ldots,J_1)}{h_{k+1}(\cdot|J_n,\ldots,J_{n-k+1})}
\le \exp{(e^{-c(k+1)L})}.
\end{equation}
\end{lemma}
{\it Proof:}~
For $j_m=(p_m,w_m,f_m,t_m)$, $m=1,\ldots,n$, let
\begin{align*}
&\bar{x}_m:=f_1+\cdots+f_m,\\
&\bar{t}_m:=t_1+\cdots+t_m,\\
&B_{p_1,\ldots,p_m}:=\{R=\infty, P_i=p_i\text{ for all }i=1,\ldots,m\},\\
\text{ and }\quad
& \omega_{p_1,\ldots,p_m}:=(\omega_{\bar{x}_{i-1}+p_i})_{i=1}^m.
\end{align*}
First, we will show that for any $1\le k\le n$,
\begin{equation}\label{LV*8}
h_{k+1}(\cdot|j_k,\ldots,j_1)
=\frac{E_P\big[\bar{P}_\omega^{\bar{x}_k}(J_1\in \cdot,R=\infty)|\omega_{p_1,\ldots,p_k}\big]}
{E_P\big[\bar{P}_\omega^{\bar{x}_k}(R=\infty)|\omega_{p_1,\ldots,p_k}\big]}
\Big|_{\omega_{p_1,\ldots,p_k}=(w_i)_{i=1}^k}.
\end{equation}
By the definition of the regeneration times, there exists an event
\[
G_{p_1,\ldots,p_k}\in\sigma(X_{i+1},\epsilon_{i,X_i}, 0\le i\le \bar{t}_k-1)
\]
such that
\begin{equation}\label{LV*7}
B_{p_1,\ldots,p_k}=G_{p_1,\ldots,p_k}\cap\{R\circ\theta_{\bar{t}_k}=\infty\}.
\end{equation}
On the one hand, for any $\sigma(J_k,\ldots, J_1)$-measurable function $g(J_k,\ldots,J_1)$,
\begin{align}\label{LV*15}
&E_{\bar{\mathbb P}}\big[h_{k+1}(\cdot|J_k,\ldots,J_1)g(J_k,\ldots, J_1)1_{B_{p_1,\ldots,p_k}}\big]\nonumber\\
&=E_{\bar{\mathbb P}}\big[g1_{B_{p_1,\ldots,p_k}}1_{J_{k+1}\in \cdot}\big]\nonumber\\
&=
E_P [g1_{B_{p_1,\ldots,p_k}}\bar{P}_\omega(J_{k+1}\in\cdot,B_{p_1,\ldots,p_k})]\nonumber\\
&\stackrel{(\ref{LV*7})}{=}
E_P \big[g1_{B_{p_1,\ldots,p_k}}\bar{P}_\omega(G_{p_1,\ldots,p_k})\bar{P}_\omega^{\bar{x}_k}(J_1\in \cdot,R=\infty)\big].
\end{align}
On the other hand, we also have
\begin{align}\label{LV*16}
& E_{\bar{\mathbb P}}\big[h_{k+1}(\cdot|J_k,\ldots,J_1)
g(J_k,\ldots, J_1)1_{B_{p_1,\ldots,p_k}}\big]\nonumber\\
&=
E_P \big[
h_{k+1}(\cdot|J_k,\ldots,J_1)g1_{B_{p_1,\ldots,p_k}}
\bar{P}_\omega(B_{p_1,\ldots,p_k})\big]\nonumber\\
&\stackrel{(\ref{LV*7})}{=}
E_P \big[
h_{k+1}(\cdot|J_k,\ldots,J_1)g1_{B_{p_1,\ldots,p_k}}
\bar{P}_\omega(G_{p_1,\ldots,p_k})\bar{P}_\omega^{\bar{x}_k}(R=\infty)\big].
\end{align}
Comparing (\ref{LV*15}) and (\ref{LV*16}) and observing that on $B_{p_1,\ldots,p_k}$,
$\bar{P}_\omega(G_{p_1,\ldots,p_k})$ and all functions of $J_1,\ldots,J_k$
are $\sigma(\omega_y: y\in\bar{x}_{i-1}+p_i, i\le k)$-measurable, we obtain that on $B_{p_1,\ldots,p_k}$, $P$-almost surely,
\begin{equation*}
h_{k+1}(\cdot|J_k,\ldots,J_1)
=\frac{E_P\big[\bar{P}_\omega^{\bar{x}_k}(J_1\in \cdot,R=\infty)|\omega_{\bar{x}_{i-1}+p_i},i\le k\big]}
{E_P\big[\bar{P}_\omega^{\bar{x}_k}(R=\infty)|\omega_{\bar{x}_{i-1}+p_i},i\le k\big]}.
\end{equation*}
Noting that
\[
B_{p_1,\ldots,p_k}\cap\{\omega_{p_1,\ldots,p_k}=(w_i)_{i=1}^k\}
=
\{J_i=j_i,1\le i\le k\},
\]
(\ref{LV*8}) is proved.
Next, we will prove the upper bound in \eqref{LVe26}.
When $n\ge k\ge 1$, by formula (\ref{LV*8}) and (\ref{LVprop1}), we have
\begin{align}\label{LV3e2}
&h_{n+1}(\cdot| j_n,\ldots,j_1)\nonumber\\
&=
\frac{E_P[\bar{P}_\omega^{\bar{x}_n}(J_1\in\cdot,R=\infty)|\omega_{p_1,\ldots,p_n}]}
{E_P\big[\bar{P}_\omega^{\bar{x}_n}(R=\infty)|\omega_{p_1,\ldots,p_n}\big]}\bigg|_{\omega_{p_1,\ldots,p_n}=(w_i)_{i=1}^n}\nonumber\\
&\le
\frac{\exp(Ce^{-\gamma(k+1)L})
E_P[\bar{P}_\omega^{\bar{x}_n}(J_1\in\cdot,R=\infty)|\omega_{\bar{x}_{i-1}+p_i},n-k+1\le i\le n]}
{\exp(-Ce^{-\gamma(k+1)L})
E_P[\bar{P}_\omega^{\bar{x}_n}(R=\infty)|\omega_{\bar{x}_{i-1}+p_i},n-k+1\le i\le n]}\nonumber\\
&\qquad
\big|_{\omega_{p_1,\ldots,p_n}=(w_i)_{i=1}^n}\nonumber\\
&=
\exp(2Ce^{-\gamma(k+1)L})
\frac{E_P[\bar{P}_\omega^{\bar{x}_n-\bar{x}_{n-k}}(J_1\in\cdot,R=\infty)|\omega_{p_{n-k+1},\ldots, p_n}]}
{E_P[\bar{P}_\omega^{\bar{x}_n-\bar{x}_{n-k}}(R=\infty)|\omega_{p_{n-k+1},\ldots, p_n}]}\nonumber\\
&\qquad
\big|_{\omega_{p_{n-k+1},\ldots,p_n}=(w_i)_{i=n-k+1}^n}\nonumber\\
&\stackrel{(\ref{LV*8})}{=}
\exp(2Ce^{-\gamma(k+1)L})h_{k+1}(\cdot|j_{n},\ldots,j_{n-k+1}),
\end{align}
where we used the translation invariance of the measure $P$ in the last but one equality.
When $k=0$ and $n\ge 1$, by formula \eqref{LV*8} and \eqref{LVprop1},
\begin{align}\label{LV3e1}
h_{n+1}(\cdot| j_n,\ldots,j_1)
&\le
\frac{\exp(Ce^{-\gamma L})E_P[\bar{P}_\omega^{\bar{x}_n}(J_1\in\cdot,R=\infty)]}
{\exp(-Ce^{-\gamma L})E_P[\bar{P}_\omega^{\bar{x}_n}(R=\infty)]}\nonumber\\
&=\exp(2Ce^{-\gamma L})\hat{\mathbb P}(J_1\in\cdot)\nonumber\\
&=\exp(2Ce^{-\gamma L})h_1(\cdot|\emptyset).
\end{align}
When $k=n=0$, \eqref{LVe26} is trivial.
Hence, combining \eqref{LV3e2} and \eqref{LV3e1}, the upper bound in (\ref{LVe26}) follows as we take $L$ sufficiently big. The lower bound follows likewise.\qed
\begin{lemma}\label{LVl6}
Suppose that a sequence of non-negative random variables $(X_n)$ satisfies
\[
a\le \frac{\, \mathrm{d} P(X_{n+1}\in\cdot|X_1,\ldots, X_n)}{\, \mathrm{d} \mu}\le b
\]
for all $n\ge 1$, where $a\le 1\le b$ are constants and $\mu$ is a probability measure. Let $m_\mu\le \infty$ be
the mean of $\mu$. Then almost surely,
\begin{equation}\label{LVe21}
a m_\mu\le\varliminf_{n\to\infty}
\frac{1}{n}\sum_{i=1}^n X_i
\le
\varlimsup_{n\to\infty}
\frac{1}{n}\sum_{i=1}^n X_i
\le b m_\mu.
\end{equation}
\end{lemma}
Before giving the proof, let us recall the ``splitting representation" of random variables:
\begin{proposition}\cite[Page 94]{Tho}\label{LVprop}
Let $\nu$ and $\mu$ be probability measures. Let $X$ be a random variable with law $\nu$. If
for some $a\in(0,1)$,
\[
\frac{\, \mathrm{d}\nu}{\, \mathrm{d}\mu}\ge a,
\]
then, enlarging the probability space if necessary, we can find independent
random variables $\Delta, \pi, Z$ such that
\begin{itemize}
\item[i)] $\Delta$ is Bernoulli with parameter $1-a$, i.e., $P(\Delta=1)=1-a$,
$P(\Delta=0)=a$;
\item[ii)] $\pi$ is of law $\mu$, and $Z$ is of law $(\nu-a\mu)/(1-a)$;
\item[iii)] $X=(1-\Delta)\pi+\Delta Z$.
\end{itemize}
\end{proposition}
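The splitting can be checked directly on a toy discrete example (the measures $\mu$, $\nu$ and the constant $a$ below are our own illustration, not taken from the text): writing $\nu=a\mu+(1-a)\rho$ with $\rho=(\nu-a\mu)/(1-a)$, a $\nu$-distributed $X$ is realized as $X=(1-\Delta)\pi+\Delta Z$ with $\Delta\sim\mathrm{Bernoulli}(1-a)$.

```python
# Illustration: splitting representation for discrete laws on {0,1}.
a = 0.5
mu = {0: 0.5, 1: 0.5}
nu = {0: 0.3, 1: 0.7}            # dnu/dmu equals 0.6 and 1.4, both >= a

rho = {x: (nu[x] - a * mu[x]) / (1 - a) for x in nu}   # law of Z
assert all(p >= 0 for p in rho.values())
assert abs(sum(rho.values()) - 1) < 1e-12

# the Bernoulli mixture reconstructs nu exactly
mix = {x: a * mu[x] + (1 - a) * rho[x] for x in nu}
assert all(abs(mix[x] - nu[x]) < 1e-12 for x in nu)
```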
\noindent{\it Proof of Lemma \ref{LVl6}}:\\
By Proposition \ref{LVprop}, enlarging the probability space if necessary, there are
random variables $\Delta_i,\pi_i,Z_i,i\ge 1$,
such that for any $i\in\mathbb{N}$,
\begin{itemize}
\item
$\Delta_i$ is Bernoulli with parameter $(1-a)$, and $\pi_i$ is of law $\mu$;
\item
$\Delta_i, \pi_i$ and $Z_i$ are mutually independent;
\item
$(\Delta_i,\pi_i)$
is independent of
$\sigma(\Delta_k,\pi_k,Z_k: k<i)$;
\item
$X_i=(1-\Delta_i)\pi_i+\Delta_i Z_i$.
\end{itemize}
Note that since $X_i$'s are supported on $[0,\infty)$, $\pi_i\ge 0$ and $Z_i\ge 0$ for all $i\in\mathbb{N}$.
Thus by the law of large numbers, almost surely,
\[
\varliminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^n X_i\ge
\mathop{\rm lim}_{n\to\infty}\frac{1}{n}\sum_{i=1}^n (1-\Delta_i)\pi_i=a m_\mu.
\]
This proves the first inequality of (\ref{LVe21}).
If $m_\mu=\infty$, the last inequality
of (\ref{LVe21}) is trivial. Assume that $m_\mu<\infty$.
Let $(\tilde{\Delta}_i)_{i\ge 1}$ be an iid Bernoulli sequence with
parameter $1-b^{-1}$ such that every $\tilde{\Delta}_i$ is independent of
all the $X_n$'s.
By a similar splitting procedure, we can construct non-negative
random variables $\tilde{\pi}_i,\tilde{Z}_i, i\ge 1$,
such that $(\tilde{\pi}_i)_{i\ge 1}$ are iid with law $\mu$, and
\[
\tilde{\pi}_i=(1-\tilde{\Delta}_i)X_i+\tilde{\Delta}_i\tilde{Z}_i.
\]
Letting $Y_i=(1-b^{-1}-\tilde{\Delta}_i)X_i 1_{X_i\le i}$, we will first show that
\begin{equation}\label{LVe22}
\mathop{\rm lim}_{n\to\infty}\frac{1}{n}\sum_{i=1}^n Y_i=0.
\end{equation}
By Kronecker's Lemma, it suffices to show that
\[
\sum_{i=1}^\infty \frac{Y_i}{i} \text{ converges.}
\]
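Here Kronecker's lemma is applied in the following standard form: if $(y_i)_{i\ge 1}$ is a real sequence such that $\sum_{i=1}^\infty y_i/i$ converges, then
\[
\mathop{\rm lim}_{n\to\infty}\frac{1}{n}\sum_{i=1}^n y_i=0.
\]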
Observe that $(\sum_{i=1}^n Y_i/i)_{n\in\mathbb{N}}$ is a martingale sequence.
Moreover, for all $n\in\mathbb{N}$,
\begin{align*}
E\big(\sum_{i=1}^n \frac{Y_i}{i}\big)^2
= \sum_{i=1}^n EY_i^2/i^2
&\le \sum_{i=1}^\infty EX_i^2 1_{X_i\le i}/i^2\\
&\le b \sum_{i=1}^\infty E\tilde{\pi}_i^2 1_{\tilde{\pi}_i\le i}/i^2\\
&= b\int_0^\infty x^2 (\sum_{i\ge x}\frac{1}{i^2})\, \mathrm{d}\mu\\
&\le C\int_0^\infty x\, \mathrm{d}\mu=Cm_\mu<\infty.
\end{align*}
By the $L^2$-martingale convergence theorem, $\sum Y_i/i$
converges a.s. and in $L^2$. This proves (\ref{LVe22}).
Since
\[
\sum_i P(Y_i\neq (1-b^{-1}-\tilde{\Delta}_i)X_i)
\le
\sum_i P(X_i>i)
\le
b \sum_i P(\pi_1>i)\le b m_\mu<\infty,
\]
by the Borel-Cantelli lemma, it follows from (\ref{LVe22}) that
\[
\mathop{\rm lim}_{n\to\infty}\frac{1}{n}\sum_{i=1}^n (1-b^{-1}-\tilde{\Delta}_i)X_i=0, \quad\text{a.s.}
\]
Hence almost surely,
\begin{equation*}
m_\mu=\mathop{\rm lim}_{n\to\infty}\frac{1}{n}\sum_{i=1}^n \tilde{\pi}_i
\ge
\varlimsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^n (1-\tilde{\Delta}_i)X_i
=\varlimsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^n b^{-1} X_i.
\end{equation*}
The last inequality of (\ref{LVe21}) is proved.\qed
\begin{theorem}\label{LVlln}
There exist two deterministic numbers $v_{e_1},v_{-e_1}\ge 0$ such that $\mathbb{P}$-almost surely,
\begin{equation}\label{LVe25}
\mathop{\rm lim}_{n\to\infty}\frac{X_n\cdot e_1}{n}=v_{e_1} 1_{A_{e_1}}-v_{-e_1}1_{A_{-e_1}}.
\end{equation}
Moreover, if $v_{e_1}>0$, then $E_{\hat{\mathbb P}}\tau_1<\infty$ and
$\mathbb{P}(A_{e_1}\cup A_{-e_1})=1$.
\end{theorem}
{\it Proof:}~
We only consider the nontrivial case that $\mathbb{P}(\mathop{\rm lim} X_n\cdot e_1/n=0)<1$,
which by Lemma \ref{LVl7} implies
$\mathbb{P}(A_{e_1}\cup A_{-e_1})=1$.
Without loss of generality, assume $\mathbb{P}(\varlimsup_{n\to\infty}X_n\cdot e_1/n>0)>0$.
We will show that on $A_{e_1}$,
\[
\mathop{\rm lim}_{n\to\infty}X_n\cdot e_1/n=v_{e_1}>0, \text{ $\mathbb{P}$-a.s.}
\]
By (\ref{LVprop2}) and Lemma \ref{LVl6}, we obtain that
$\mathbb{P}(\cdot|A_{e_1})$-almost surely,
\begin{align}
\exp{(-e^{-cL})}E_{\hat{\mathbb P}}X_{\tau_1}\cdot e_1
&\le \varliminf_{n\to\infty}\frac{X_{\tau_n}\cdot e_1}{n}\nonumber\\
&\le \varlimsup_{n\to\infty}\frac{X_{\tau_n}\cdot e_1}{n}
\le \exp{(e^{-cL})}E_{\hat{\mathbb P}}X_{\tau_1}\cdot e_1,\label{LVe23}\\
\exp{(-e^{-cL})}E_{\hat{\mathbb P}}\tau_1
&\le \varliminf_{n\to\infty}\frac{\tau_n}{n}
\le \varlimsup_{n\to\infty}\frac{\tau_n}{n}
\le \exp{(e^{-cL})}E_{\hat{\mathbb P}}\tau_1. \label{LVe24}
\end{align}
Note that (\ref{LVe23}), (\ref{LVe24}) hold even if
$E_{\hat{\mathbb P}}X_{\tau_1}\cdot e_1=\infty$ or $E_{\hat{\mathbb P}}\tau_1=\infty$.
But it will be shown later that under our assumption, both of them are finite.
We claim that
\begin{equation}\label{LVe5}
E_{\hat{\mathbb P}}X_{\tau_1}\cdot e_1<\infty.
\end{equation}
To see this, let $\Theta:=\{i: X_{\tau_k}\cdot e_1=i \text{ for some }k\in\mathbb{N}\}$.
Since the $\tau_i$'s are finite on $A_{e_1}$,
there exists (recall that $\tau_0=0$) a sequence $(k_n)_{n\in\mathbb{N}}$ such that
$X_{\tau_{k_n}}\cdot e_1\le n<X_{\tau_{k_n+1}}\cdot e_1$ for all $n\in\mathbb{N}$ and
$\mathop{\rm lim}_{n\to\infty}k_n=\infty$.
Hence for $n\ge 1$,
\[
\frac{\sum_{i=1}^n 1_{i\in \Theta}}{n}\le
\frac{k_n+1}{X_{\tau_{k_n}}\cdot e_1}, \quad\text{ $\hat{\mathbb P}$-a.s.}
\]
Then, $\hat{\mathbb P}$-a.s.,
\[
\varlimsup_{n\to\infty}
\frac{\sum_{i=1}^n 1_{i\in \Theta}}{n}
\le
\varlimsup_{n\to\infty}\frac{n}{X_{\tau_n}\cdot e_1}.
\]
Let $B_k=\{\epsilon_{k,X_k}=0, X_{k+1}-X_k=e_1, \epsilon_{k+i,X_{k+i}}=1,X_{k+i+1}-X_{k+i}=e_1
\text{ for all }1\le i\le L\}$. Then
\[
\bar{P}_\omega(B_k)
\ge
(d\kappa)^L(1-d\kappa)(\frac{\kappa}{2})(\frac{1}{2d})^L
\stackrel{1\ge 2d\kappa}{>}(\frac{\kappa}{2})^{L+2}.
\]
Observe that by the definition of the regeneration times, for $n> L+1$,
\begin{align*}
&\{T_{n-L-1}=k,X_k= x-(L+1)e_1, R>k\}\cap B_k\cap\{R\circ\theta_{k+L+1}=\infty\}\\
&\subset\{R=\infty, n\in\Theta, T_n=k+L+1,X_{T_n}=x\}.
\end{align*}
Hence for $n> L+1$,
\begin{align*}
& \hat{\mathbb P}(n\in \Theta)\\
&\ge
\sum_{k\in\mathbb{N},x\in\mathcal{H}_n}
\hat{\mathbb P}(B_k\cap\{T_{n-L-1}=k,X_k= x-(L+1)e_1,R\circ\theta_{k+L+1}=\infty\})\\
&\ge \sum_{k\in\mathbb{N},x\in\mathcal{H}_n}
E_P \big[P_\omega\big(T_{n-L-1}=k,X_k= x-(L+1)e_1,R>k\big)(\frac{\kappa}{2})^{L+2}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times
P_\omega^x(R=\infty)\big]/\mathbb{P}(R=\infty).
\end{align*}
Since by (\ref{LVprop1}) and the translation invariance of $P$,
\[
E_P\big[P_\omega^x(R=\infty)|\omega_y:y\cdot e_1\le x\cdot e_1-L-1\big]
\ge
\exp(-e^{-cL})\mathbb{P}(R=\infty),
\]
we have for $n>L+1$,
\begin{align}\label{LV2e10}
& \hat{\mathbb P}(n\in \Theta)\nonumber\\
&\ge
(\frac{\kappa}{2})^{L+2}\exp(-e^{-cL})
\sum_{k\in\mathbb{N},x\in\mathcal{H}_n}\mathbb{P}(T_{n-L-1}=k,X_k= x-(L+1)e_1,R>k)\nonumber\\
&\ge (\frac{\kappa}{2})^{L+2} e^{-1}\mathbb{P}(R=\infty).
\end{align}
Hence
\begin{align*}
\frac{C}{E_{\hat{\mathbb P}}X_{\tau_1}\cdot e_1}
\stackrel{(\ref{LVe23})}{\ge} E_{\hat{\mathbb P}}\varlimsup_{n\to\infty}\frac{n}{X_{\tau_n}\cdot e_1}
&\ge E_{\hat{\mathbb P}}\varlimsup_{n\to\infty}
\frac{\sum_{i=1}^n 1_{i\in \Theta}}{n}\\
&\ge \varlimsup_{n\to\infty}E_{\hat{\mathbb P}}\frac{\sum_{i=1}^n 1_{i\in \Theta}}{n}\\
&\stackrel{(\ref{LV2e10})}{\ge} (\frac{\kappa}{2})^{L+2} e^{-1}\mathbb{P}(R=\infty)>0.
\end{align*}
This gives (\ref{LVe5}).
Now we can prove the theorem.
By (\ref{LVe23}) and (\ref{LVe24}),
\begin{align}\label{LV*9}
\exp{(-2e^{-cL})}\frac{E_{\hat{\mathbb P}}X_{\tau_1}\cdot e_1}{E_{\hat{\mathbb P}}\tau_1}
&\le \varliminf_{n\to\infty}\frac{X_{\tau_n}\cdot e_1}{\tau_{n+1}}\nonumber\\
&\le \varlimsup_{n\to\infty}\frac{X_{\tau_{n+1}}\cdot e_1}{\tau_n}
\le \exp{(2e^{-cL})}\frac{E_{\hat{\mathbb P}}X_{\tau_1}\cdot e_1}{E_{\hat{\mathbb P}}\tau_1},
\end{align}
$\mathbb{P}(\cdot|A_{e_1})$-almost surely.
Further, by the fact that $|X_i|\le i$ and the obvious inequalities
\begin{equation*}
\varliminf_{n\to\infty}\frac{X_{\tau_n}\cdot e_1}{\tau_{n+1}}
\le
\varliminf_{n\to\infty}\frac{X_n\cdot e_1}{n}
\le
\varlimsup_{n\to\infty}\frac{X_n\cdot e_1}{n}
\le
\varlimsup_{n\to\infty}\frac{X_{\tau_{n+1}}\cdot e_1}{\tau_n},
\end{equation*}
we have that
\begin{equation*}
\varlimsup_{n\to\infty}
\Bigl\lvert \frac{X_n\cdot e_1}{n}-
\frac{E_{\hat{\mathbb P}}X_{\tau_1}\cdot e_1}{E_{\hat{\mathbb P}}\tau_1}\Bigr\rvert
\le \exp{(2e^{-cL})}-1, \text{ $\mathbb{P}(\cdot|A_{e_1})$-a.s.}
\end{equation*}
Therefore, $\mathbb{P}(\cdot|A_{e_1})$-almost surely,
\[
\mathop{\rm lim}_{n\to\infty} \frac{X_n\cdot e_1}{n}
=
\mathop{\rm lim}_{L\to\infty}
\frac{E_{\hat{\mathbb P}}X_{\tau_1^{(L)}}\cdot e_1}{E_{\hat{\mathbb P}}\tau_1^{(L)}}:=v_{e_1},
\]
where $\tau_1$ is written as $\tau_1^{(L)}$ to indicate that it is an $L$-regeneration time.
Moreover, our assumption $\mathbb{P}(\varlimsup_{n\to\infty}X_n\cdot e_1/n>0)>0$ implies that
$v_{e_1}>0$ and (by (\ref{LV*9}))
\[
E_{\hat{\mathbb P}}\tau_1<\infty.
\]
Our proof is complete.\qed\\
If $v_{e_1}>0$, then it follows by (\ref{LVe24}) that
\begin{equation}\label{LVetau}
E_{\hat {\mathbb P}}\tau_n\le CnE_{\hat{\mathbb P}}\tau_1<\infty.
\end{equation}
Observe that although Theorem \ref{LVlln} is stated for $e_1$, the previous arguments, if properly modified, still work if one replaces
$e_1$ with any $z\in\mathbb{R}^d\setminus\{o\}$. So Theorem \ref{LVlln} is true for the general case. That is, for any $z\neq o$, there exist two deterministic constants $v_z, v_{-z}\ge 0$ such that
\[
\mathop{\rm lim}_{n\to\infty}\frac{X_n\cdot z}{n}=v_z1_{A_z}-v_{-z}1_{A_{-z}}
\]
and that $\mathbb{P}(A_z\cup A_{-z})=1$ if $v_z>0$.
Then, by the same argument as in \cite[page 1112]{Go}, one concludes that the
limiting velocity $\mathop{\rm lim}_{n\to\infty}X_n/n$ can take at most two antipodal values.
This proves the first part of Theorem \ref{LVthm2}.
\section{Heat kernel estimates}\label{sechke}
The following heat kernel estimates are crucial for the proof of the uniqueness of the
non-zero velocity in the next section. Although in the mixing case we do not have iid
regeneration slabs, we know (by Lemma \ref{LVl4}) that a regeneration slab has little dependence on its remote
past. This allows us to use coupling techniques to get the same heat
kernel estimates as in \cite{Be}:
\begin{theorem}\label{hke}
Assume $v_{e_1}>0$. For $x\in\mathbb{Z}^d$ and $n\in \mathbb{N}$,
we let
\[Q(n,x):=\hat{\mathbb P}(x \text{ is visited in }[\tau_{n-1},\tau_n)).\]
Then for any $x\in\mathbb{Z}^d$ and $n\in \mathbb{N}$,
\begin{align}
&\hat{\mathbb P}(X_{\tau_n}=x)\le Cn^{-d/2},\label{LVehke}\\
&
\sum_{x\in\mathbb{Z}^d}Q(n,x)^2\le C(E_{\hat{\mathbb P}}\tau_1)^2 n^{-d/2}.\label{LVehke2}
\end{align}
\end{theorem}
By Lemma \ref{LVl4}, we have for $n\ge 2$ and $1\le k\le n-1$, $\hat{\mathbb P}$-almost surely,
\begin{align}\label{LVnew1}
\frac{h_{k+1}(\cdot|J_{n-1},\ldots,J_{n-k})}{h_k(\cdot|J_{n-1},\ldots,J_{n-k+1})}
&=
\frac{h_{k+1}(\cdot|J_{n-1},\ldots,J_{n-k})}{h_n(\cdot|J_{n-1},\ldots,J_1)}
\frac{h_n(\cdot|J_{n-1},\ldots,J_1)}{h_k(\cdot|J_{n-1},\ldots,J_{n-k+1})}\nonumber\\
&\ge\exp(-e^{-c(k+1)L}-e^{-ckL})\nonumber\\
&\ge 1-e^{-ckL}
\end{align}
for large $L$. Hence for $n\ge 2$ and $1\le k\le n-1$, we can define a (random) probability measure $\zeta_{n,k}^{J_{n-1},\ldots,J_{n-k}}$ that satisfies
\begin{align}\label{LVnew2}
\MoveEqLeft
h_{k+1}(\cdot|J_{n-1},\ldots,J_{n-k})\\
&=e^{-ckL}\zeta_{n,k}^{J_{n-1},\ldots,J_{n-k}}(\cdot)+(1-e^{-ckL})h_k(\cdot|J_{n-1},\ldots, J_{n-k+1}).\nonumber
\end{align}
To prove Theorem \ref{hke}, we will first construct a sequence of random variables $(\tilde J_i, i\in\mathbb{N})$ such that for any $n\in\mathbb{N}$,
\begin{equation}\label{LVnew0}
(\tilde J_1,\ldots,\tilde J_n)\sim \hat{\mathbb P}(J_1\in\cdot,\ldots, J_n\in\cdot),
\end{equation}
where ``$X\sim\mu$'' means ``$X$ is of law $\mu$''.
\subsection{Construction of the $\tilde J_i$'s}
Our construction consists of three steps:
\noindent{\it Step 1.}
We let $\tilde J_1, \tilde J_{2,1}, \tilde \Delta_{2,1}$ be independent random variables such that
\[
\tilde J_1\sim h_1(\cdot|\emptyset),\quad \tilde J_{2,1}\sim h_1(\cdot|\emptyset)
\]
and $\tilde \Delta_{2,1}$ is Bernoulli with parameter $e^{-cL}$. Let $\tilde Z_{2,1}$ be independent of $\sigma(\tilde J_{2,1}, \tilde \Delta_{2,1})$ such that
\[
P(\tilde Z_{2,1}\in\cdot|\tilde J_1)=
\zeta_{2,1}^{\tilde J_1}(\cdot).
\]
Setting $\tilde J_{2}:=(1-\tilde \Delta_{2,1})\tilde J_{2,1}+\tilde\Delta_{2,1}\tilde Z_{2,1}$, by \eqref{LVnew2} we have
\[
(\tilde J_1, \tilde J_2)\sim
\hat{\mathbb P}(J_1\in\cdot,J_2\in\cdot).
\]
\noindent{\it Step 2.}
For $n\ge 3$, assume that we have constructed $\tilde J_1$ and $(\tilde J_{i,1}, \tilde\Delta_{i,j}, \tilde Z_{i,j}, 1\le j<i\le n-1)$ such that
\[
(\tilde J_1,\ldots, \tilde J_{n-1})
\sim
\hat{\mathbb P}(J_1\in\cdot,\ldots,J_{n-1}\in\cdot),
\]
where for $2\le j\le i<n$,
\[
\tilde J_{i,j}:=(1-\tilde\Delta_{i,j-1})\tilde J_{i,j-1}+\tilde\Delta_{i,j-1}\tilde Z_{i,j-1}
\]
and
\[
\tilde J_i:=
\tilde J_{i,i}.
\]
Then we define $\tilde J_{n,1}$ and $(\tilde\Delta_{n,k}, \tilde Z_{n,k}, 1\le k\le n-1)$ to be random variables such that conditioning on the values of $\tilde J_1$ and $(\tilde J_{i,1}, \tilde\Delta_{i,j}, \tilde Z_{i,j}, 1\le j<i\le n-1)$,
\begin{itemize}
\item $(\tilde J_{n,1}, \tilde\Delta_{n,k}, \tilde Z_{n,k}, 1\le k\le n-1)$ are conditionally independent;
\item The conditional distribution of $\tilde J_{n,1}$ is $h_1(\cdot|\emptyset)$;
\item For $1\le k\le n-1$, the conditional distributions of $\tilde Z_{n,k}$ and $\tilde\Delta_{n,k}$ are $\zeta_{n,k}^{\tilde J_{n-1},\ldots, \tilde J_{n-k}}(\cdot)$ and Bernoulli with parameter $e^{-ckL}$, respectively.
\end{itemize}
\noindent{\it Step 3.} For $2\le k\le n$, set
\begin{align*}
&\tilde J_{n,k}:=(1-\tilde\Delta_{n,k-1})\tilde J_{n,k-1}+\tilde\Delta_{n,k-1}\tilde Z_{n,k-1}\\
\text{and }&\tilde J_n:=\tilde J_{n,n}.
\end{align*}
Then (by \eqref{LVnew2}) almost surely,
\begin{equation}\label{LV3e3}
P(\tilde J_{n,k}\in\cdot|\tilde J_{n-1},\ldots,\tilde J_1)=h_k(\cdot|\tilde J_{n-1},\ldots,\tilde J_{n-k+1}).
\end{equation}
It follows immediately that
\begin{equation*}
(\tilde J_1,\ldots,\tilde J_n)\sim \hat{\mathbb P}(J_1\in\cdot,\ldots, J_n\in\cdot).
\end{equation*}
Therefore, by induction, we have constructed $(\tilde J_i, i\in\mathbb{N})$ such that \eqref{LVnew0} holds for all $n\in\mathbb{N}$.
In what follows, with abuse of notation, we will identify $\tilde J_i$ with
$J_i$ and simply write $\tilde J_{i,j}, \tilde \Delta_{i,j}, \tilde Z_{i,j}$ as
$J_{i,j}, \Delta_{i,j}$ and $Z_{i,j}$, $1\le j<i$. We still use $\hat{\mathbb
P}$ to denote the law of the random variables in the enlarged probability space.
\begin{remark}
To summarize, we have introduced random variables $J_{i,j}, \Delta_{i,j},
Z_{i,j}$, $1\le j<i$ such that for any $n\ge 2$,
\begin{align*}
&J_{n,2}=(1-\Delta_{n,1})J_{n,1}+\Delta_{n,1}Z_{n,1},\\
&\ldots,\\
&J_{n,n-1}=(1-\Delta_{n,n-2})J_{n,n-2}+\Delta_{n,n-2}Z_{n,n-2},\\
&J_n=J_{n,n}=(1-\Delta_{n,n-1})J_{n,n-1}+\Delta_{n,n-1}Z_{n,n-1}.
\end{align*}
Intuitively, we flip a sequence of ``coins'' $\Delta_{n,n-1},\ldots,\Delta_{n,1}$ to determine whether $J_1,\ldots,J_{n-1}$ are in the ``memory'' of $J_n$. For instance, if
\[
\Delta_{n,n-1}=\cdots=\Delta_{n,n-i}=0,
\]
then $J_n=J_{n,n-i}$ does not ``remember'' $J_1,\ldots, J_i$ (in the sense that
\[
\hat{\mathbb P}(J_{n,n-i}\in\cdot|J_{n-1},\ldots, J_1)
=
h_{n-i}(\cdot|J_{n-1},\ldots, J_{i+1}).
\]
See \eqref{LV3e3}).
\end{remark}
\subsection {Proof of Theorem \ref{hke}}
For $1<i\le n$, let $I_n(i)$ be the event that
$\Delta_{i,i-1}=\ldots=\Delta_{i,1}=0$ and
$\Delta_{m,m-1}=\ldots=\Delta_{m,m-i}=0$ for all
$i<m\le n$.
Note that on $I_n(i)$,
\begin{equation}\label{LVnew3}
J_i=J_{i,1}\text{ and }J_m=J_{m,m-i}
\text{ for all }i<m\le n.
\end{equation}
Setting
\[M_n:=\{2\le i\le n: I_n(i) \text{ occurs}\},\]
we have
\begin{lemma}\label{LVliid}
For $n\ge 2$, let $H$ be a nonempty subset of $\{2,\ldots, n\}$.
Conditioning on the
event $\{M_n=H\}$, the sequence $(J_i)_{i\in H}$ is iid and independent of
$(J_i)_{i\in \{1,\ldots,n\}\setminus H}$.
\end{lemma}
\noindent{\it Proof of Lemma \ref{LVliid}:}
From our construction it follows that for any $i>1$, $J_{i,1}$ is independent
of
\[
\sigma(\Delta_{k,j}, 1\le j<k)
\vee
\sigma(J_l, 1\le l<i)
\vee
\sigma(J_{m,m-i},m>i).
\]
Hence by \eqref{LVnew3}, for any $i\in H$ and any appropriate measurable sets $(V_j)_{1\le j\le n}$,
\begin{align*}
&\hat{\mathbb P}(J_j\in V_j, 1\le j\le n|M_n=H)\\
&=\hat{\mathbb P}(J_{i,1}\in V_i)
\hat{\mathbb P}(J_j\in V_j, 1\le j\le n, j\neq i|M_n=H).
\end{align*}
By induction, we get
\begin{align*}
&\hat{\mathbb P}(J_j\in V_j, 1\le j\le n|M_n=H)\\
&=\prod_{i\in H}\hat{\mathbb P}(J_{i,1}\in V_i)
\hat{\mathbb P}(J_j\in V_j, 1\le j\le n, j\notin H|M_n=H).
\end{align*}
The lemma is proved.\qed
\noindent{\it Proof of Theorem \ref{hke}:}
By Lemma \ref{LVliid}, for $i\in H$ and all $j\in\{1,\ldots, d\}$,
\[
\hat{\mathbb P}\big(X_{\tau_i}-X_{\tau_{i-1}}=(L+1)e_1\pm e_j|M_n=H\big)
=
\hat{\mathbb P}(X_{\tau_1}=(L+1)e_1\pm e_j)>0,
\]
where the last inequality is due to ellipticity.
Hence arguing as in \cite[pages 736, 737]{Be}, using Lemma~\ref{LVliid} and the heat kernel estimate for bounded iid random walks in $\mathbb{Z}^d$, we get that for any $x\in\mathbb{Z}^d$,
\[
\hat{\mathbb P}(\sum_{i\in H}X_{\tau_i}-X_{\tau_{i-1}}=x|M_n=H)
\le C|H|^{-d/2},
\]
where $|H|$ is the cardinality of $H$.
Hence, for any subset $H\subset\{2,\ldots,n\}$ such that $|H|\ge n/2$,
\begin{align}\label{LVe13}
& \hat{\mathbb P}(X_{\tau_n}=x|M_n=H)\nonumber\\
&=\sum_y \hat{\mathbb P}(\sum_{i\in H}X_{\tau_i}-X_{\tau_{i-1}}=x-y,
\sum_{i\in \{1,\ldots,n\}\setminus H}X_{\tau_i}-X_{\tau_{i-1}}=y|M_n=H)\nonumber\\
&=\sum_y
\bigg[\hat{\mathbb P}\big(\sum_{i\in H}X_{\tau_i}-X_{\tau_{i-1}}=x-y|M_n=H\big)\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\times
\hat{\mathbb P}(\sum_{i\in \{1,\ldots,n\}\setminus H}X_{\tau_i}-X_{\tau_{i-1}}=y|M_n=H)\bigg]\nonumber\\
&\le C n^{-d/2},
\end{align}
where we used Lemma~\ref{LVliid} in the second equality.
On the other hand,
\begin{align*}
|M_n|
&\ge n-\sum_{i=2}^n \bigg(
1_{\Delta_{i,i-1}+\cdots+\Delta_{i,1}>0}+\sum_{m=i+1}^n
1_{\Delta_{m,m-1}+\cdots+\Delta_{m,m-i}>0}
\bigg)\\
&=n-\sum_{i=2}^n
1_{\Delta_{i,i-1}+\cdots+\Delta_{i,1}>0}-\sum_{m=2}^n\sum_{i=2}^{m-1}
1_{\Delta_{m,m-1}+\cdots+\Delta_{m,m-i}>0}\\
&\ge n-2\sum_{m=2}^n K_m,
\end{align*}
where $K_m:=\sup\{1\le j<m:\Delta_{m,j}=1\}$. Here we follow the convention
that $\sup\emptyset=0$.
Since $K_m$'s are independent, and for $m\ge 2$,
\begin{align*}
E e^{K_m} &= \sum_{j=0}^{m-1} e^j \hat{\mathbb P}(K_m=j)\\
&\le \sum_{j=1}^{m-1} e^j \hat{\mathbb P}(\Delta_{m,j}=1)+1\\
&\le \sum_{j=1}^\infty e^j e^{-cjL}+1\to 1 \text{ as $L\to\infty$},
\end{align*}
we can take $L$ to be large enough such that $E e^{K_m}\le e^{1/8}$ for all
$m\ge 2$ and so
\begin{align}\label{LVe14}
\hat{\mathbb P}(|M_n|<n/2)
&\le \hat{\mathbb P}(K_2+\cdots+K_n>n/4)\nonumber\\
&\le e^{-n/4}E e^{K_2+\cdots +K_n}
\le e^{-n/8}.
\end{align}
By (\ref{LVe13}) and (\ref{LVe14}), inequality (\ref{LVehke}) follows
immediately.
Furthermore, since
\begin{align*}
& Q(n,x)\\
&= \sum_y \hat{\mathbb P}(X_{\tau_{n-1}}=y)
\hat{\mathbb P}(x \text{ is visited in }[\tau_{n-1},\tau_n)|X_{\tau_{n-1}}=y)\\
&\stackrel{\text{Lemma }\ref{LVl4}}{\le}
C\sum_y \hat{\mathbb P}(X_{\tau_{n-1}}=y)\hat{\mathbb P}((x-y)\text{ is visited during }[0,\tau_1)),
\end{align*}
by H\"{o}lder's inequality we have
\begin{align*}
& Q(n,x)^2\\
&\le C
\big[\sum_y\hat{\mathbb P}\big((x-y)\text{ is visited during }[0,\tau_1)\big)\big]\\
&\qquad\qquad\times\big[\sum_y\hat{\mathbb P}(X_{\tau_{n-1}}=y)^2
\hat{\mathbb P}\big((x-y)\text{ is visited during }[0,\tau_1)\big)\big]\\
&\le CE_{\hat{\mathbb P}}\tau_1 \sum_y \hat{\mathbb P}(X_{\tau_{n-1}}=y)^2
\hat{\mathbb P}\big((x-y)\text{ is visited during }[0,\tau_1)\big).
\end{align*}
Hence
\begin{align*}
& \sum_x Q(n,x)^2\\
&\le CE_{\hat{\mathbb P}}\tau_1
\sum_y \big[\hat{\mathbb P}(X_{\tau_{n-1}}=y)^2
\sum_x\hat{\mathbb P}\big((x-y)\text{ is visited during }[0,\tau_1)\big)\big]\\
&\le C(E_{\hat{\mathbb P}}\tau_1)^2 \sum_y \hat{\mathbb P}(X_{\tau_{n-1}}=y)^2\\
&\stackrel{(\ref{LVehke})}{\le} C(E_{\hat{\mathbb P}}\tau_1)^2 n^{-d/2}\sum_y \hat{\mathbb P}(X_{\tau_{n-1}}=y)
=C(E_{\hat{\mathbb P}}\tau_1)^2 n^{-d/2}.
\end{align*}
Theorem \ref{hke} is proved.\qed
\section{The uniqueness of the non-zero velocity}\label{secunique}
In this section we will show that in high dimensions ($d\ge 5$), there
exists at most one non-zero velocity. The idea is the following.
Consider two random walk paths: one starts at the origin, the other starts near
the $n$-th regeneration position of the first path. By L\'evy's martingale convergence
theorem, the second path is ``more and more transient'' as $n$ grows (Lemma \ref{LVl1}).
On the other hand, by heat kernel estimates, when $d\ge 5$, two ballistic walks in opposite directions
will grow further and further apart from each other (see Lemma \ref{LVl3}), thus they are almost independent.
This contradicts the previous fact that starting at the $n$-th regeneration point of the first path will prevent the second path from being transient in the opposite direction.\\
Set
$\delta=\delta(d):=\frac{d-4}{8(d-1)}$. (The reason for choosing this constant
will become clear in (\ref{LV2e6}).)
For any finite path $y_\cdot=(y_i)_{i=0}^M, M<\infty$, define $A(y_\cdot, z)$ to be the
set of paths $(x_i)_{i=0}^N, N\le\infty$ that satisfy
\begin{itemize}
\item[1)] $x_0=y_0+z$;
\item[2)] $d(x_i,y_j)>(i\vee j)^\delta$ if $i\vee j>|z|/3$.
\end{itemize}
The motivation for the definition of $A(y_\cdot, z)$ is as follows.
Note that for two paths $x_\cdot=(x_i)_{i=0}^N$ and $y_\cdot=(y_i)_{i=0}^M$ with $x_0=y_0+z$, if $i\vee j\le |z|/3$, then
\[
d(x_i,y_j)
\ge d(x_0,y_0)-d(x_0,x_i)-d(y_0,y_j)
\ge |z|-i-j
\ge |z|/3.
\]
Hence, for $(x_i)_{i=0}^N\in A(y_\cdot, z)$,
\begin{align}\label{LVe10}
\sum_{i\le N,j\le M}e^{-\gamma d(x_i,y_j)}
&\le
\sum_{0\le i,j\le |z|/3}e^{-\gamma |z|/3}+\sum_{i\vee j>|z|/3}e^{-\gamma(i^\delta+j^\delta)/2}
\nonumber\\
&\le
(\frac{|z|}{3})^2 e^{-\gamma |z|/3}+(\sum_{i=0}^\infty e^{-\gamma i^\delta/2})^2<C.
\end{align}
This gives us (by (G)) an estimate of the interdependence between
$\sigma\big(\omega_x: x\in (x_i)_{i=0}^N\big)$ and $\sigma\big(\omega_x: x\in (y_i)_{i=0}^M\big)$.
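The bound on the second sum in (\ref{LVe10}) rests on the elementary observation that, by condition 2) in the definition of $A(y_\cdot,z)$, for $i\vee j>|z|/3$,
\[
d(x_i,y_j)>(i\vee j)^\delta=i^\delta\vee j^\delta\ge\frac{i^\delta+j^\delta}{2},
\]
so that $e^{-\gamma d(x_i,y_j)}\le e^{-\gamma(i^\delta+j^\delta)/2}$.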
In what follows, we use
\[
\tau'_\cdot=\tau_\cdot(-e_1,\epsilon,X_\cdot)
\]
to denote the regeneration times in the $-e_1$ direction.
Assume that there are two opposite nonzero limiting velocities in directions $e_1$ and $-e_1$, i.e.,
\[
v_{e_1} \cdot v_{-e_1}>0.
\]
We let $\check{\mathbb P}(\cdot):=\mathbb{P}(\cdot|R_{-e_1}=\infty)$.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{paths2}
\caption{$X_\cdot\in A(Y_\cdot^n, z)$.
When $i\vee j>|z|/3$, the distance between the point $Y_j^n$ on the ``backward path'' and $X_i$ is at least $(i\vee j)^\delta$.
\label{LVfig:1}
\end{figure}
\begin{lemma}\label{LVl3}
Assume that there are two nonzero limiting velocities in the directions $e_1$ and $-e_1$. We sample $(\epsilon,\tilde{X}_\cdot)$ according to $\hat{\mathbb P}$ and let
$\tilde{\tau}_\cdot=\tau_\cdot (e_1,\epsilon,\tilde{X}_\cdot)$ denote its regeneration times.
For $n\ge 1$, we let
\[
Y_\cdot^n=(Y_i^n)_{i=0}^{\tilde{\tau}_n}:=(\tilde{X}_{\tilde{\tau}_n-i})_{i=0}^{\tilde{\tau}_n}
\]
be the reversed path of $(\tilde{X}_i)_{i=0}^{\tilde{\tau}_n}$.
If $|z|$ is large enough, $d\ge 5$ and $n\ge 1$, then
\begin{equation}\label{LVe8}
E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
\big(X_\cdot\in A(Y_\cdot^n, z)\big)>C>0.
\end{equation}
\end{lemma}
{\it Proof:}~
Let
\[m_z:=\lfloor|z|^{1/2}\rfloor.\]
Then
\begin{align}
&E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
\big(X_\cdot\notin A(Y_\cdot^n, z)\big)\nonumber\\
&\le
E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}(\tau'_{m_z}\ge |z|/3)+
\hat{\mathbb{P}}(\tilde{\tau}_n-\tilde{\tau}_{n-m_z}\ge |z|/3)\label{LV*12}\\
&\quad +E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
(d(X_i,Y_\cdot^n)\le i^\delta\text{ for some } i>\tau'_{m_z})\label{LV*13}\\
&\quad +E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
(d(\tilde{X}_{\tilde{\tau}_n-j},X_\cdot)\le j^\delta\text{ for some }j>\tilde{\tau}_n-\tilde{\tau}_{n-m_z}).\label{LV*14}
\end{align}
We will first estimate (\ref{LV*12}). By the translation invariance of the environment measure,
\[
\check{\mathbb P}^x(\tau'_{m_z}\ge |z|/3)=\check{\mathbb P}(\tau'_{m_z}\ge |z|/3)
\text{ for any }x\in\mathbb{Z}^d.
\]
Hence
\begin{equation}\label{LV2e4}
E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}(\tau'_{m_z}\ge |z|/3)
=
\check{\mathbb P}(\tau'_{m_z}\ge |z|/3)
\le
\frac{3E_{\check{\mathbb P}}\tau'_{m_z}}{|z|}
\stackrel{(\ref{LVetau})}{\le}
C(E_{\check{\mathbb P}}\tau'_1)|z|^{-1/2}.
\end{equation}
Similarly,
\begin{equation}\label{LV2e5}
\hat{\mathbb P}(\tilde{\tau}_n-\tilde{\tau}_{n-m_z}\ge |z|/3)
\stackrel{(\ref{LVprop2})}{\le}
\exp{(e^{-cL})}\hat{\mathbb P}(\tau_{m_z}\ge |z|/3)\le C(E_{\hat{\mathbb P}}\tau_1) |z|^{-1/2}.
\end{equation}
To estimate (\ref{LV*13}) and (\ref{LV*14}), for $i\ge 1, n\ge j\ge 1$, we let
\begin{align*}
&Q'(i,x)=\check{\mathbb P}(x\text{ is visited in }[\tau'_{i-1},\tau'_i)),\\
&\tilde{Q}(j,x)=\hat{\mathbb P}(X_{\tau_n}+x \text{ is visited in}[\tau_{n-j},\tau_{n-j+1})).
\end{align*}
Note that by arguments that are similar to the proof of Theorem \ref{hke},
one can also obtain the heat kernel estimate
(\ref{LVehke}) for $Q'(i,x)$ and $\tilde{Q}(j,x)$.
For $l>0$, let $B(o,l)=\{x\in\mathbb{Z}^d: d(o,x)\le l\}$.
Recall the definition of the $r$-boundary in Definition \ref{LVdef1}.
By the translation invariance of the environment measure,
\[
\check{\mathbb P}^y(X_i=y+z)
=\check{\mathbb P}(X_i=z) \text{ for any }y,z\in\mathbb{Z}^d \text{ and }i\in\mathbb{N}.
\]
Hence
\begin{align*}
&E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
(d(X_i,\tilde{X}_\cdot)\le i^\delta\text{ for some } i>\tau'_{m_z})\\
&\le
\sum_{i\ge m_z}\sum_{y\in\partial_1 B(o,i^\delta)}\sum_x
E_{\hat{\mathbb P}}\big[\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
(\tilde{X}_{\tilde{\tau}_n}+z+x\text{ is visited in }[\tau'_i,\tau'_{i+1}))\\
&\qquad\qquad\qquad\qquad\qquad\times 1_{\tilde{X}_{\tilde{\tau}_n}+z+x+y\in Y_\cdot^n}\big]\\
&=
\sum_{i\ge m_z}\sum_{y\in\partial_1 B(o,i^\delta)}\sum_x
\check{\mathbb P}(x\text{ is visited in }[\tau'_i,\tau'_{i+1}))
\hat{\mathbb P}(\tilde{X}_{\tilde{\tau}_n}+z+x+y\in Y_\cdot^n)\\
&=\sum_{i\ge m_z}\sum_{y\in\partial_1 B(o,i^\delta)}\sum_{j\le n}\sum_x
Q'(i,x)\tilde{Q}(j,x+z+y).
\end{align*}
By the heat kernel estimates and H\"{o}lder's inequality,
\begin{align*}
\sum_{j\le n}\sum_x Q'(i,x)\tilde{Q}(j,x+z+y)
&\le
\sqrt{\sum_x Q'(i,x)^2}\sum_{j\le n}\sqrt{\sum_x \tilde{Q}(j,x+y)^2}\\
&\le
C(E_{\check{\mathbb P}}\tau'_1)i^{-d/4}
\sum_{j\le n}(E_{\hat{\mathbb P}}\tau_1)j^{-d/4}\\
&\stackrel{d\ge 5}{\le}
Ci^{-d/4}E_{\check{\mathbb P}}\tau'_1 E_{\hat{\mathbb P}}\tau_1.
\end{align*}
Thus
\begin{align}\label{LV2e6}
&E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
(d(X_i,\tilde{X}_\cdot)\le i^\delta\text{ for some } i>\tau'_{m_z})\nonumber\\
&\le
C\sum_{i\ge m_z}\sum_{y\in\partial_1 B(o,i^\delta)}i^{-d/4}
E_{\check{\mathbb P}}\tau'_1 E_{\hat{\mathbb P}}\tau_1\nonumber\\
&\le
C\sum_{i\ge m_z}i^{(d-1)\delta}i^{-d/4}E_{\check{\mathbb P}}\tau'_1 E_{\hat{\mathbb P}}\tau_1
\le C
|z|^{-(d-4)/8}
E_{\check{\mathbb P}}\tau'_1 E_{\hat{\mathbb P}}\tau_1,
\end{align}
where we used $d\ge 5$ and $\delta=\frac{d-4}{8(d-1)}$ in the last inequality.
Similarly, we have
\begin{align}\label{LV2e7}
&E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
(d(\tilde{X}_{\tilde{\tau}_n-j},X_\cdot)
\le j^\delta\text{ for some }j>\tilde{\tau}_n-\tilde{\tau}_{n-m_z})\nonumber\\
&\le
C |z|^{-(d-4)/8}E_{\check{\mathbb P}}\tau'_1 E_{\hat{\mathbb P}}\tau_1.
\end{align}
Combining (\ref{LV2e4}), (\ref{LV2e5}), (\ref{LV2e6}) and (\ref{LV2e7}), we conclude that
\begin{equation*}
E_{\hat{\mathbb P}}\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z}
\big(X_\cdot\in A(Y_\cdot^n, z)\big)>C>0,
\end{equation*}
if $|z|$ is large enough and $d\ge 5$.\qed\\
Let
\[
T^o=\inf\{i\ge 0: X_i\cdot e_1<0\}.
\]
For every fixed $\omega\in\Omega$ and $P_{\omega,\epsilon}^o$-almost every
$X_\cdot$,
\[
P_{\omega,\theta^n\epsilon}^{X_n}(T^o=\infty)1_{T^o>n}=P_{\omega,\epsilon}^o (T^o=\infty|X_1,\ldots,X_n),
\]
and so by L\'evy's martingale convergence theorem,
\[
\mathop{\rm lim}_{n\to\infty}P_{\omega,\theta^n\epsilon}^{X_n}(T^o=\infty)1_{T^o> n}= 1_{T^o=\infty},
\quad\text{$P_{\omega,\epsilon}^o$-almost surely}.
\]
Hence, for $(\omega, \epsilon,\tilde{X}_\cdot)$ sampled according to
$\hat{\mathbb P}$,
\[
\mathop{\rm lim}_{n\to\infty}
P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}}(T^o=\infty)=1, \quad\text{$\hat{\mathbb P}$-almost surely}.
\]
It then follows by the dominated convergence theorem that
\begin{equation}\label{LV3e7}
\mathop{\rm lim}_{n\to\infty}
E_{\hat{\mathbb P}}
P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}}(T^o<\infty)
=0.
\end{equation}
\begin{lemma}\label{LVl1}
For any $z\in\mathbb{Z}^d$,
\begin{equation}\label{LVe2}
\mathop{\rm lim}_{n\to\infty}
E_{\hat{\mathbb P}}
P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z}(T^o<\infty)
=0.
\end{equation}
\end{lemma}
{\it Proof:}~
For $n>|z|$, obviously
\[(\tilde{X}_{\tilde{\tau}_n}+z)\cdot e_1>0.\]
This together with ellipticity yields
\[
P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}}(T^o<\infty)
\ge{(\frac{\kappa}{2})^{|z|}}
P_{\omega,\theta^{\tilde{\tau}_n+|z|}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z}(T^o<\infty).
\]
Hence using \eqref{LV3e7},
\[
\mathop{\rm lim}_{n\to\infty}
E_{\hat{\mathbb P}}
P_{\omega,\theta^{\tilde{\tau}_n+|z|}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z}(T^o<\infty)
=0.
\]
On the other hand, noting that $\{R>\tau_1\}=\{R=\infty\}$,
\begin{align*}
&E_{\hat{\mathbb P}}
P_{\omega,\theta^{\tilde{\tau}_n+|z|}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z}(T^o<\infty)\\
&= \sum_{m,x}
E_{P\otimes Q}[P_{\omega,\theta^{m+|z|}\epsilon}^{x+z}(T^o<\infty)
P_{\omega, \epsilon}^o (R>\tau_1,\tau_n=m,X_m=x)]/\mathbb{P}(R=\infty)\\
&=\sum_{m,x}
E_{P\otimes Q}[P_{\omega,\theta^{m}\epsilon}^{x+z}(T^o<\infty)
P_{\omega, \epsilon}^o (R>\tau_1,\tau_n=m,X_m=x)]/\mathbb{P}(R=\infty)\\
&=
E_{\hat{\mathbb P}}P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z}(T^o<\infty),
\end{align*}
where we used the independence (under $Q$) of $P_{\omega,
\theta^m\epsilon}^{x+z}(T^o<\infty)$ and
$P_{\omega,\epsilon}^o(R>\tau_1,\tau_n=m, X_m=x)$ in the second to last
equality. The conclusion follows.
\qed\\
\noindent\textit{Proof of the uniqueness of the non-zero velocity when $d\ge 5$, as stated in Theorem \ref{LVthm2}:}
If the two antipodal velocities are both non-zero, we assume that
\[v_{e_1}\cdot v_{-e_1}>0.\]
Sample $(\omega,\epsilon_\cdot,\tilde{X}_\cdot)$ according
to $\hat{\mathbb P}$.
Henceforth, we take $z=z_0$ such that (\ref{LVe8}) holds
and
\[z_0\cdot e_1<-L.\]
We will prove Theorem \ref{LVthm2} by showing that
\begin{equation}\label{LVcontradiction}
E_{\hat{\mathbb P}}
P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z_0}(T^o<\infty)>C
\end{equation}
for all $n>|z_0|$, which contradicts (\ref{LVe2}).
First, let $\mathcal{G}$ denote the set of finite paths $y_\cdot=(y_i)_{i=0}^M$ that satisfy $y_M=0, M<\infty$.
Then
\begin{align}\label{LVe11}
& E_{\hat{\mathbb P}}
P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z_0}(T^o<\infty)\\
&\ge
E_{\hat{\mathbb P}}
P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z_0}
\big((X_i)_{i=0}^{T^o}\in A(Y_\cdot^n, z_0),T^o<\infty\big)\nonumber\\
&=
\sum_{y_\cdot=(y_i)_{i=0}^M\in \mathcal{G}}
E_{\hat{\mathbb P}}
[P_{\omega,\theta^M\epsilon}^{y_0+z_0}
\big((X_i)_{i=0}^{T^o}\in A(y_\cdot, z_0),T^o<\infty\big)
1_{Y_\cdot^n=y_\cdot}]\nonumber\\
&=
\frac{1}{\mathbb{P}(R=\infty)}
\sum_{y_\cdot\in \mathcal{G}}
\sum_{\substack{N<\infty\\(x_i)_{i=0}^N\in A(y_\cdot, z_0)}}
E_{P\otimes Q}
[P_{\omega,\theta^M\epsilon}^{y_0+z_0}
\big((X_i)_{i=0}^{T^o}=x_\cdot\big)
P_{\omega,\epsilon}(Y_\cdot^n=y_\cdot)].\nonumber
\end{align}
By the definition of the regeneration times, for any finite path $y_\cdot=(y_i)_{i=0}^M$,
there exists an event $G_{y_\cdot}$ such that $P_{\omega,\epsilon}(G_{y_\cdot})$ is
$\sigma(\epsilon_{i,y_i}, \omega_{y_j}:0\le i\le M, 0\le j\le M-L)$-measurable and
\[
\{Y_\cdot^n=y_\cdot\}=\{(\tilde{X}_i)_{i=0}^{\tilde{\tau}_n}=(y_{M-j})_{j=0}^M\}
=G_{y_\cdot}\cap\{R\circ\theta_M=\infty\}.
\]
Hence, for any $y_\cdot=(y_i)_{i=0}^M\in \mathcal{G}$ and
$x_\cdot=(x_i)_{i=0}^N\in A(y_\cdot,z_0)$, $N<\infty$,
\begin{align}\label{LV2e8}
& E_{P\otimes Q}
[P_{\omega,\theta^M\epsilon}^{y_0+z_0}\big((X_i)_{i=0}^{T^o}=x_\cdot, R_{-e_1}>N\big)
P_{\omega,\epsilon}(Y_\cdot^n=y_\cdot)]\nonumber\\
&=E_P[
\bar{P}_\omega^{y_0+z_0}\big((X_i)_{i=0}^{T^o}=x_\cdot, R_{-e_1}>N\big)
\bar{P}_\omega(G_{y_\cdot})
\bar{P}_\omega^{y_0}(R=\infty)]\nonumber\\
&\stackrel{(\ref{LVprop1})}{\ge}
CE_P[
\bar{P}_\omega^{y_0+z_0}\big((X_i)_{i=0}^{T^o}=x_\cdot, R_{-e_1}>N\big)
\bar{P}_\omega(G_{y_\cdot})]
\bar{\mathbb P}(R=\infty),
\end{align}
where we used in the equality that $(\epsilon_{i,x})_{i\ge 0, x\in\mathbb{Z}^d}$
are iid and in the inequality the fact that
\[
\bar{P}_\omega^{y_0+z_0}\big((X_i)_{i=0}^{T^o}=x_\cdot, R_{-e_1}>N\big)
\bar{P}_\omega(G_{y_\cdot})
\]
is $\sigma(\omega_v: v\cdot e_1\le y_0\cdot e_1-L)$-measurable (note that $z_0\cdot e_1<-L$).
Further, by Lemma \ref{LVc2} and (\ref{LVe10}), we have
\begin{align}\label{LV2e9}
&E_P[
\bar{P}_\omega^{y_0+z_0}\big((X_i)_{i=0}^{T^o}=x_\cdot, R_{-e_1}>N\big)
\bar{P}_\omega(G_{y_\cdot})]\nonumber\\
&\ge
C\bar{\mathbb P}^{y_0+z_0}\big((X_i)_{i=0}^{T^o}=x_\cdot, R_{-e_1}>N\big)
\bar{\mathbb P}(G_{y_\cdot}).
\end{align}
Note that
\begin{align}\label{LVe12}
\bar{\mathbb P}(G_{y_\cdot})\bar{\mathbb P}(R=\infty)
&\stackrel{(\ref{LVprop1})}{\ge}
CE_P
[\bar{P}_\omega (G_{y_\cdot})\bar{P}_{\omega}^{y_0}(R=\infty)]
\nonumber\\
&=C\bar{\mathbb P}(Y_\cdot^n=y_\cdot)
\ge
C\hat{\mathbb P}(Y_\cdot^n=y_\cdot).
\end{align}
Therefore, by (\ref{LVe11}), (\ref{LV2e8}) and (\ref{LV2e9}),
\begin{align*}
& E_{\hat{\mathbb P}}
P_{\omega,\theta^{\tilde{\tau}_n}\epsilon}^{\tilde{X}_{\tilde{\tau}_n}+z_0}(T^o<\infty)\\
&\ge
C\sum_{y_\cdot\in \mathcal{G}}
\sum_{\substack{N<\infty\\(x_i)_{i=0}^N\in A(y_\cdot, z_0)}}
\bar{\mathbb P}^{y_0+z_0}\big((X_i)_{i=0}^{T^o}=x_\cdot, R_{-e_1}>N\big)
\bar{\mathbb P}(G_{y_\cdot})\bar{\mathbb P}(R=\infty)\\
&\stackrel{(\ref{LVe12})}{\ge}
C\sum_{y_\cdot\in \mathcal{G}}
\sum_{\substack{N<\infty\\(x_i)_{i=0}^N\in A(y_\cdot, z_0)}}
\bar{\mathbb P}^{y_0+z_0}\big((X_i)_{i=0}^{T^o}=x_\cdot, R_{-e_1}>N\big)
\hat{\mathbb P}(Y_\cdot^n=y_\cdot)\\
&\ge
CE_{\hat{\mathbb P}}
\check{\mathbb P}^{\tilde{X}_{\tilde{\tau}_n}+z_0}
(X_\cdot\in A(Y_\cdot^n, z_0))\stackrel{\text{Lemma }\ref{LVl3}}{>}C.
\end{align*}
This proves (\ref{LVcontradiction}).\qed
\chapter{Introduction}
\label{intro_chapter}
\section{An introduction to RWRE}\label{Section II}
Let $\mathcal{M}=\mathcal{M}_1(V)$ be the space of all probability measures on
$V=\{v\in\mathbb{Z}^d: |v|\le 1\}$,
where $|\cdot|$ denotes the $l^2$-norm. We equip $\mathcal{M}$ with the weak topology on probability measures, which makes it into a Polish space, and equip $\Omega=\mathcal{M}^{\mathbb{Z}^d}$ with the induced Polish structure. Let $\mathcal{F}$ be the Borel
$\sigma$-field of $\Omega$ and $P$ a probability measure on $\mathcal{F}$.
A random \textit{environment} is an element $\omega
=\{\omega(x, v)\}_{x\in{\mathbb{Z}^d}, v\in V}$ of $\Omega$. The random environment is called \emph{balanced} if \[P\{\omega(x, e_i)=\omega(x,-e_i) \mbox{ for all $i$ and all $x\in\mathbb{Z}^d$}\}=1,\]
and \textit{elliptic} if $P\{\omega(x,e)>0 \mbox{ for all $|e|=1$ and all $x\in\mathbb{Z}^d$}\}=1$. We say that the random environment is \emph{uniformly elliptic} with ellipticity constant $\kappa$ if $P\{\omega(x,e)>\kappa \mbox{ for all $|e|=1$ and all $x\in\mathbb{Z}^d$}\}=1$.
The random walk in the
random environment $\omega\in\Omega$ (RWRE)
started at $x$ is the Markov
chain $\{X_n\}$ with state space $\mathbb{Z}^d$, whose law $P_\omega^x$ on
$(\mathbb{Z}^d)^\mathbb{N}$ is specified by
\begin{align*}
&P_\omega^x\{X_0=x\}=1,\\
&P_\omega^x\{X_{n+1}=y+v | X_n=y\}=\omega(y, v), \quad v\in V.
\end{align*}
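As a complement to this definition, the quenched law can be simulated directly. The following Python sketch is purely illustrative (the function names and the environment law, normalized positive weights on a finite box, are our own hypothetical choices); for simplicity it uses only the $2d$ nearest-neighbour steps and omits the lazy step $v=o$.

```python
import random

def sample_iid_environment(L, seed=0):
    """Sample a hypothetical iid, uniformly elliptic environment on the
    box {-L, ..., L}^2: each site gets a probability vector over the four
    nearest-neighbour steps, obtained by normalising weights >= 0.1."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    env = {}
    for x in range(-L, L + 1):
        for y in range(-L, L + 1):
            w = [0.1 + rng.random() for _ in steps]
            s = sum(w)
            env[(x, y)] = [wi / s for wi in w]
    return env, steps

def quenched_walk(env, steps, n, seed=1):
    """Run n steps of the chain P_omega^o started at the origin,
    in the fixed (quenched) environment env."""
    rng = random.Random(seed)
    path = [(0, 0)]
    for _ in range(n):
        x = path[-1]
        r = rng.random()
        for step, p in zip(steps, env[x]):
            r -= p
            if r <= 0:
                break
        path.append((x[0] + step[0], x[1] + step[1]))
    return path

env, steps = sample_iid_environment(L=25)
path = quenched_walk(env, steps, n=20)
```

Averaging such runs over many independently sampled environments approximates the annealed law $\mathbb{P}^o$ defined below.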
Let $\mathcal{G}$ be the $\sigma$-field generated by cylinder functions. The probability distribution $P_\omega^x$ on $((\mathbb{Z}^d)^\mathbb{N}, \mathcal{G})$ is called the \textit{quenched law}.
Note that for each $G\in\mathcal{G}$, $P_\omega^x (G) : \Omega\to [0,1]$ is an $\mathcal{F}$-measurable function.
The joint probability distribution $\mathbb{P}^x$ on $\mathcal{F}\times\mathcal{G}$:
\[
\mathbb{P}^x (F\times G)=\int_{F} P_\omega^x (G)P(\, \mathrm{d}\omega), \qquad F\in\mathcal{F},\, G\in\mathcal{G},
\]
is called the \textit{annealed} (or \textit{averaged})
law. Expectations with respect to $P_\omega^x$ and $\mathbb{P}^x$ are denoted
by $E_\omega^x$ and $\mathbb{E}^x$, respectively. We also write $\mathbb{P}^o$ as $\mathbb{P}$, where
$o=(0,\cdots, 0)$ is the origin.
For $\omega\in\Omega$, set
\[\omega_x=\big(\omega(x,e)\big)_{|e|=1}.\]
Define the spatial shifts $\{\theta^y\}_{y\in\mathbb{Z}^d}$ on $\Omega$ by
$(\theta^y\omega)_x=\omega_{x+y}$. We say that the random environment is \textit{ergodic}
if the measure $P$ is ergodic with respect to the group of shifts $\{\theta^y\}$.
A special case is when the probability vectors $(\omega_x)_{x\in\mathbb{Z}^d}$
are independent and identically distributed (\textit{iid}).
Setting $\bar{\omega}(n)=\theta^{X_n}\omega$,
the process $ \bar{\omega}(n) $ is a Markov chain under
$ \mathbb{P}^{o} $ with state space $ \Omega $ and transition kernel \[ M(\omega',\, \mathrm{d}\omega)=\sum_{i=1}^{d}[\omega'(o,e_i)\delta_{\theta^{e_i}\omega'}+
\omega'(o,-e_i)\delta_{\theta^{-e_i}\omega'}]+\omega'(o,o)\delta_{\omega'}. \]
$\big(\bar{\omega}(n)\big)_{n\in\mathbb{N}}$ is often referred to as the ``\textit{environment viewed from the point of view of the particle}'' process.
For $t\ge 0$, let
\[
X_t=X_{\lfloor t\rfloor}+(t-\lfloor t\rfloor)(X_{\lfloor t\rfloor+1}-X_{\lfloor t\rfloor}).
\]
We say that the \textit{quenched invariance principle} of the RWRE holds if,
for $P$-almost every $\omega\in\Omega$ and some deterministic vector
$v\in\mathbb{R}^d$ (called the {\it limiting velocity}),
the $P_\omega^o$ law of the path $\{(X_{tn}-tnv)/\sqrt{n}\}_{t\geq 0}$ converges
weakly to a Brownian motion,
as $n\to \infty$. For $\ell\in S^{d-1}$, we say that the RWRE is
\emph{ballistic} in the direction $\ell$ if
\[
\varliminf_{n\to\infty}\frac{X_n\cdot\ell}{n}>0,\quad\mathbb{P}\mbox{-a.s.}
\]
\section{Structure of the thesis}
In this thesis, we study the diffusive and ballistic behaviors of random walks in random environment in $\mathbb{Z}^d, d\ge 2$.
The organization of the thesis is as follows.
Section~\ref{IOverview} gives an overview of the previous results in the study of the ballisticity, the central limit theorems (CLT), and the Einstein relation of RWRE. The three subsections in Section~\ref{Iresults} state the main results in this thesis and discuss the ideas of their proofs.
Chapters \ref{LV chapter}, \ref{CLT chapter} and \ref{ER chapter} are devoted to the proofs of our three main results:
In Chapter~\ref{LV chapter}, we consider the limiting velocity of random walks in strong-mixing random Gibbsian environments in $\mathbb{Z}^d, d\ge 2$.
Based on regeneration arguments, we will first provide an alternative proof of Rassoul-Agha's conditional law of large numbers (CLLN) for mixing environment \cite{R-A3}.
Then, using coupling techniques, we show that there is at most one nonzero limiting velocity in high dimensions ($d\ge 5$).
Chapter~\ref{CLT chapter} proves the quenched invariance principles (Theorem~\ref{CLT1} and Theorem~\ref{CLT2}) for random walks in elliptic and balanced environments.
We first prove an invariance principle (for $d\ge 2$) and the
transience of the random walks when $d\ge 3$ (recurrence when $d=2$)
in an ergodic environment which is not uniformly elliptic but satisfies
certain moment condition. Then, using percolation arguments, we
show that under (not necessarily uniform) ellipticity, the above results hold
for random
walks in iid balanced environments.
Chapter~\ref{ER chapter} gives the proof of the Einstein relation in the context
of random walks in a balanced uniformly elliptic iid random environment. Our
approach combines a change of measure argument of Lebowitz and Rost \cite{Le}
and the regeneration argument of Gantert, Mathieu and Piatnitski \cite{GMP}. The
key step of our proof is the construction of a new regeneration structure.
\section{Overview of previous results}\label{IOverview}
\subsection{Ballisticity}
The ballistic behavior of the RWRE in dimension $d\ge 2$ has been extensively
studied.
For random walks in iid random environment in dimension $d\ge 2$,
the Kalikow's 0-1 law \cite{Ka81} states that for any direction $\ell\in S^{d-1}$,
\[\mathbb{P}(A_\ell\cup A_{-\ell})\in\{0,1\}\]
where
$A_{\pm\ell}=\{\mathop{\rm lim}_{n\to\infty}X_n\cdot\ell=\pm\infty\}$.
It is believed that for any direction $\ell$ and any $d\ge 2$,
a stronger 0-1 law is true:
\[
\mathbb{P}(A_\ell)\in\{0,1\} \tag{0-1 Law}.
\]
When $d=2$, this $0$-$1$ law was proved by Zerner and Merkel \cite{ZM}. The question whether the 0-1 law holds for iid random environment in dimensions $d\ge 3$ is still open. (It is known that some strong mixing condition is necessary for the 0-1 law to hold, as the counterexample in \cite{BZZ} shows.)
Much progress has been made in the study of the limiting velocity $\mathop{\rm lim}_{n\to\infty}X_n/n$ of random walks in iid environment, see \cite{ZO} for a survey.
For one-dimensional RWRE, the law of large numbers (LLN) was proved in \cite{So}.
For $d\ge 2$, a conditional law of large numbers (CLLN) was proved in \cite{SZ, Ze} (see \cite[Theorem 3.2.2]{ZO} for the full version). It states that $\mathbb{P}$-almost surely, for any direction $\ell$,
\[
\mathop{\rm lim}_{n\to\infty}\frac{X_n\cdot\ell}{n}=v_\ell 1_{A_\ell}-v_{-\ell}1_{A_{-\ell}}
\tag{CLLN}
\]
for some deterministic constants $v_\ell, v_{-\ell}\ge 0$ (we set $v_\ell=0$ if
$\mathbb{P}(A_\ell)=0$).
This was achieved by considering the regenerations of the random walk path.
Hence for $d\ge 2$, the 0-1 law would imply the LLN. Recall that when $d\ge 3$,
the 0-1 law is one of the main open questions in the study of RWRE.
Nevertheless, in high dimensions ($d\ge 5$), Berger \cite{Be} showed
that the limiting velocity can take at most one non-zero value, i.e.,
\begin{equation}\label{Berger}v_\ell v_{-\ell}=0.\end{equation}
It is of interest to consider environments whose law $P$ is not iid but rather
ergodic (under possibly appropriate mixing conditions). Of special interest is
the environment that is produced by a Gibbsian particle system (which we call
the \textit{Gibbsian environment}) and satisfies Dobrushin-Shlosman's
strong-mixing condition IIIc in \cite[page 378]{DS}, see
\cite{R-A1,R-A2,CZ1,CZ2, R-A3} for related works.
An important feature of this model is that the influence of the environments in
remote locations decays exponentially as the distance grows. (We won't give the
definitions of the Gibbsian environment and the strong-mixing
condition in this thesis. For their definitions, we refer to \cite[pages
1454-1455]{R-A1}. We remark that our results only assume a mixing condition
(G), which is defined on page \pageref{LVdef1}. It is known that (G) is a
property of the strong-mixing Gibbsian environment, cf. \cite[Lemma 9]{R-A1}.)
In \cite{R-A1}, assuming a ballisticity condition (Kalikow's condition) which implies that the
event of escape in a direction has probability $1$, Rassoul-Agha proved the LLN for the strong-mixing Gibbsian environment, using the invariant measure of the ``environment viewed from the point of view of the particle'' process $\big(\bar\omega(n)\big)$.
In \cite{R-A3}, Rassoul-Agha also obtained the CLLN for the strong-mixing Gibbsian environment, under an analyticity condition (see Hypothesis (M) in \cite{R-A3}).
Comets and Zeitouni proved the LLN for environments with a weaker cone-mixing assumption ($\mathcal{A}1$) in \cite{CZ1}, but under some conditions about ballisticity and the uniform integrability of the regeneration times (see ($\mathcal{A}5$) in \cite{CZ1}).
\subsection{Central Limit Theorems}
In recent years, there has been much interest in the study
of invariance principles and transience/recurrence
for random walks in random environments (on the
$d$-dimensional lattice $\mathbb{Z}^d$)
with non-uniformly
elliptic transition probabilities. Much of this work has
been in the context of reversible models, either for walks on percolation
clusters or for the random conductance model, see
\cite{Bar04,SS05,MR05,BB,MaP07,Ma08,BarDe10}. In those cases,
the main issue is the transfer of annealed estimates (given e.g.
in \cite{DFGW89} in great generality) to the quenched setting, and the control
of the quenched mean displacement of the walk.
On the other hand, in these models the reversibility of the walk
provides for explicit expressions for certain invariant measures
for the environment viewed from the point of view of the particle.
The non-reversible setup has proved to provide many additional, and
at this point insurmountable,
challenges, even in the uniformly elliptic iid setup, see
\cite{Zrev} for a recent account.
In \cite{Sz3}, Sznitman shows that his condition (T') implies ballisticity, the
LLN, and a directional annealed central limit theorem. The proof uses
regeneration times and a renormalization argument and does not employ the
process of the environment viewed from the point of view of the particle. (We
remark that weaker forms of the condition (T') exist, see \cite{Sz3, DR1, DR2,
BDR}. Recently it was shown in \cite{BDR} that polynomial decay of some exit
probabilities implies (T').)
Further, it was shown by Berger and Zeitouni \cite{BZei} and Rassoul-Agha and Sepp\"{a}l\"{a}inen \cite{R-AS} that in the ballistic case, an annealed invariance principle is equivalent to a quenched invariance principle, under appropriate moment conditions on the regeneration times (these conditions are satisfied in all cases where a ballistic annealed CLT has been proved).
When the walk is not ballistic, the regeneration structure employed in \cite{Sz2} is not available. Several classes of non-ballistic models were considered in the literature: balanced environment (see the definition in Section~\ref{Section II}),
environment whose sufficiently high-dimensional projection is a simple random walk \cite{BSZ}, and isotropic environment which is a small perturbation of the simple random walk \cite{BK, BolZei, SZei}. Historically, the first to be considered was the balanced environment, investigated by Lawler \cite{La}, which we describe next, as a good part of the thesis deals with that environment:
\begin{theorem}[\cite{La},\cite{ZO}]\label{LaThm}
Assume the random environment is ergodic, balanced and uniformly elliptic. Then $P$-almost surely, the $P_\omega$ law of the rescaled path $\lambda X_{\cdot/\lambda^2}$ converges weakly to a Brownian motion on $\mathbb{R}^d$ with a non-degenerate diagonal covariance matrix. Moreover, the RWRE is recurrent for $d=2$ and transient for $d\ge 3$, $P$-almost surely.
\end{theorem}
In this case, a-priori estimates of the Alexandrov-Bakelman-Pucci
type give enough control that allows one to prove the existence
of invariant measures (for the environment viewed from
the point of view of the particle), and the fact that the walk
is a (quenched) martingale together with ergodic arguments yield
the invariance principle (obviously, control of the quenched
mean displacement,
which vanishes, is automatic). The establishment of recurrence (for $d=2$)
and transience (for $d\geq 3$) requires some additional
arguments, due to Kesten and Lawler, respectively, see
\cite{ZO} for details.
\subsection{Einstein relation}
In 1905, Einstein \cite[pp. 1-18]{Einstein} investigated the movement of suspended particles in a liquid under the influence of an
external force. He established the following linear relation between the diffusion constant $D$ and the
\textit{mobility} $\mu$:
\[
D\sim T\mu,
\]
where $T$ is the absolute temperature, and $\mu$ is defined as the limiting
ratio between the velocity (under the external force) and the force, as the
force goes to zero.
More precisely, the Einstein relation (ER) describes the relation between
the response of a system to a perturbation and its diffusivity at equilibrium. It states that the derivative of the velocity (with respect to the strength of the perturbation) equals the diffusivity:
\[
\mathop{\rm lim}_{\lambda\to 0}\mathop{\rm lim}_{t\to\infty}\frac{E_\lambda X_t/t}{\lambda}=D, \tag{ER}
\]
where $(X_t)_{t\ge 0}\in (\mathbb{R}^d)^{\mathbb{R}_+}$ denotes the random
motion of the particle, $\lambda$ is the size of the perturbation, $D$ is the
diffusion constant of the equilibrium state, and $E_\lambda$ is the annealed
measure of the perturbed media.
General derivations of this principle assume reversibility.
Recently, there has been much interest in studying the Einstein relation for reversible motions
in random media, see \cite{Le,KO,GMP,BHOZ}.
In \cite{Le}, Lebowitz and Rost proved a weak form of the Einstein relation for a wide class of random motions in random media:
\[
\mathop{\rm lim}_{\lambda\to 0}E_\lambda \frac{X_{t/\lambda^2}}{t/\lambda}=D \quad \forall t>0.
\]
In \cite{KO}, the ER is verified for random walks in random conductance, where the conductance is only allowed to take two values. The approach of \cite{KO} is an adaptation of the perturbation argument and transience estimates in \cite{Loulakis}. For random walks on Galton-Watson trees, the ER is proved by \cite{BHOZ}. Their approach uses recursions due to the tree structure and renewal arguments. Recently, Gantert, Mathieu and Piatnitski \cite{GMP} established the ER for random walks in random potential, by combining the argument in \cite{Le} with good moment estimates of the regeneration times.
The Einstein relation for random motions in the non-reversible zero speed
set-up, e.g., random walks in balanced random environments (RWBRE), is a
challenging problem. (In general one expects correction terms in (ER) due to the
non-reversibility of the walk.)
\section{Our results}\label{Iresults}
In this section we will state the main results in the thesis and explain the
ideas of their proofs. The actual proofs will be presented in the following
chapters.
Our contributions are in three directions: CLLN and regeneration structures for
RWRE in Gibbsian environments, quenched invariance principles for balanced
elliptic (but non uniformly elliptic) environments, and ER for balanced iid
uniformly elliptic environments.
\subsection{Limiting velocity for mixing random environment}\label{ILV}
Recall first the definition of an $r$-Markov environment (see \cite{CZ2}).
\begin{definition}\label{LVdef1}
For $r\ge 1$, let $\partial_r V=\{x\in\mathbb{Z}^d\setminus V: d(x, V)\le r\}$ be the $r$-boundary of $V\subset\mathbb{Z}^d$.
A random environment $(P,\Omega)$ on $\mathbb{Z}^d$ is called $r$-Markov if
for any finite $V\subset\mathbb{Z}^d$,
\[
P\big((\omega_x)_{x\in V}\in \cdot|\mathcal{F}_{V^c}\big)
=P\big((\omega_x)_{x\in V}\in \cdot|\mathcal{F}_{\partial_r V}\big), \text{ $P$-a.s.,}
\]
where $d(\cdot,\cdot)$ denotes the $l^1$-distance and $\mathcal{F}_{\Lambda}:=\sigma(\omega_x:x\in\Lambda)$.
\end{definition}
We say that an $r$-Markov environment $P$ {\it satisfies condition (G)} if there
exist
constants $\gamma , C<\infty$ such that for all finite subsets $\Delta\subset V\subset\mathbb{Z}^d$ with $d(\Delta,V^c)\ge r$, and $A\subset V^c$,
\[
\frac{\, \mathrm{d} P\big((\omega_x)_{x\in\Delta}\in\cdot|\eta\big)}
{ \, \mathrm{d} P\big((\omega_x)_{x\in\Delta}\in\cdot|\eta'\big)}
\le
\exp{(C\sum_{x\in A,y\in\Delta}e^{-\gamma d(x,y)})}\tag{G}
\]
for $P$-almost all pairs of configurations $\eta,\eta'\in\mathcal{M}^{V^c}$ which agree on $V^c\setminus A$.
Here
\[
P\big((\omega_x)_{x\in\Delta}\in\cdot|\eta\big)
:=P\big((\omega_x)_{x\in\Delta}\in \cdot|\mathcal{F}_{V^c}\big)\big|_{(\omega_x)_{x\in V^c}=\eta}.
\]
We remark that $r$ and $\gamma$ are used as parameters of the environment throughout the thesis.
Recall that by Lemma 9 in \cite{R-A1}, the strong-mixing Gibbsian environment
satisfies (G).
Obviously, every finite-range dependent environment also satisfies (G).
Our main theorem concerning the mixing environments is:
\begin{theorem}\label{LVthm2}
Assume that $P$ is uniformly elliptic and satisfies \emph{(G)}.
Then there exist two deterministic constants $v_+, v_-\ge 0$
and a vector $\ell$ such that
\begin{equation}\label{ICLLN}
\mathop{\rm lim}_{n\to\infty} \frac{X_n}{n}=v_+\ell 1_{A_\ell}-v_-\ell 1_{A_{-\ell}},
\end{equation}
and $v_+=v_-=0$ if $\mathbb{P}(A_\ell\cup A_{-\ell})<1$.
Moreover, if $d\ge 5$, then
there is at most one non-zero velocity. That is,
\begin{equation}\label{Iunique}
v_+ v_-=0.
\end{equation}
\end{theorem}
We remark here that for the finite-range dependent
case, the CLLN is proved in \cite{ZO}.
\eqref{ICLLN} is a minor extension of Rassoul-Agha's CLLN in \cite{R-A3}. He
assumes slightly more than strong-mixing, which in turn is slightly stronger
than our condition (G). Our proof is very different from the proof in
\cite{R-A3}, which is based on a large deviation principle in \cite{R-A2}. The
main contribution of our proof of \eqref{ICLLN} is a new definition of the
regeneration structure, which enables us to divide a random path in the mixing
environment into ``almost iid'' parts.
With this regeneration structure, we will use the ``$\epsilon$-coins'' introduced
in \cite{CZ1} and coupling arguments to prove the CLLN. This regeneration
structure will also be used in the proof of \eqref{Iunique}.
Display \eqref{Iunique} is an extension of Berger's result \eqref{Berger} from
the iid case to our case (G), which includes the strong-mixing case.
In \cite{Be}, assuming that $\mathbb{P}(A_\ell)>0$ for a direction $\ell$, Berger coupled the iid environment $\omega$ with a transient (in the direction $\ell$) environment $\tilde\omega$ and a ``backward path", such that $\tilde\omega$ and $\omega$ coincide in the locations off the path.
Using heat kernel estimates for random walks with iid increments, he showed that if $v_\ell v_{-\ell}>0$ and $d\ge 5$, then with positive probability, the random walk in $\tilde\omega$ is transient in the $-\ell$ direction without intersecting the backward path, which contradicts $\tilde{\omega}$ being transient in the direction $\ell$.
The difficulties in applying this argument to mixing environments are that the regeneration slabs are not
iid, and that unlike the iid case, the environments visited by two disjoint paths are not independent. To overcome these difficulties,
we will construct an environment (along with a path) that is ``very transient'' in $\ell$, and show that the ballistic walks in the opposite direction $-\ell$ will move further and further away from the given path (see Figure \ref{LVfig:1} in Section \ref{secunique}). The key ingredient here is a heat kernel estimate, which we will obtain in Section \ref{sechke} using coupling arguments.
\subsection{Invariance principle for RWBRE}
As mentioned above, Lawler \cite{La} proved the invariance principle under the uniform ellipticity assumption.
We explore the extent to which the uniform ellipticity assumption can be dropped.
Surprisingly, in the iid case, we can show that no assumptions of uniform ellipticity are needed at all.
Let
\begin{equation}
\label{CLTepsdef}
\varepsilon(x)=\varepsilon_{\omega}(x):=
[\prod_{i=1}^{d}\omega(x,e_i)]^{\frac{1}{d}}.
\end{equation}
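In words, $\varepsilon(x)$ is the geometric mean of the $d$ probabilities assigned to the positive coordinate directions; for the simple random walk in $d=2$ it equals $1/4$ at every site. A minimal Python sketch (the names and the second example are ours):

```python
def epsilon(omega_x, d):
    """Geometric mean of omega(x, e_1), ..., omega(x, e_d); omega_x maps
    unit vectors (as tuples) to transition probabilities."""
    prod = 1.0
    for i in range(d):
        e_i = tuple(1 if j == i else 0 for j in range(d))
        prod *= omega_x[e_i]
    return prod ** (1.0 / d)

# simple random walk in d = 2: omega(x, e) = 1/4 for every |e| = 1
srw = {(1, 0): 0.25, (-1, 0): 0.25, (0, 1): 0.25, (0, -1): 0.25}

# a balanced but non-symmetric example site
bal = {(1, 0): 0.3, (-1, 0): 0.3, (0, 1): 0.2, (0, -1): 0.2}
```

The moment condition $E\varepsilon(o)^{-p}<\infty$ then controls how close these geometric means are allowed to come to zero under $P$.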
Our first main result
is that
if $E\varepsilon(o)^{-p}< \infty$ for some $p>d$, then
the quenched invariance principle
holds and moreover, the RWRE is transient $P$-almost surely if
$d\geq 3$. (Recurrence for $d=2$ under
the condition $E\varepsilon(o)^{-p}<\infty$
follows from the quenched invariance principle and ergodicity by an
unpublished argument of Kesten
detailed in
\cite[Page 281]{ZO}. Note that this argument cannot be used to
prove transience in dimensions $d\geq 3$,
even given an invariance principle, since in higher dimensions
the invariance principle does not give useful
information on the range of the
random walk; the behavior of the
range is a crucial element in Kesten's argument.)
\begin{theorem}\label{CLT1}
Assume that the random environment is ergodic, elliptic
and balanced.
\begin{enumerate}
\item[(i)] If $E\varepsilon(o)^{-p}< \infty$ for some $p>d\ge 2$,
then
the quenched invariance principle holds with a
non-degenerate diagonal limiting covariance matrix.
\item[(ii)] If $E[(1-\omega(o,o))/\varepsilon(o)]^q< \infty$ for some $q>2$ and $d\ge 3$, then the RWRE is transient $P$-almost surely.
\end{enumerate}
\end{theorem}
\noindent
That some integrability condition on the tail of $\varepsilon(o)$ is needed
for part (i) to hold
is made clear by the (non-Gaussian) scaling limits of random walks in
Bouchaud's trap model, see \cite{Bou,BAC}.
In fact, it follows from that example that Theorem \ref{CLT1}(i), or even an annealed
version of the CLT, cannot hold in general with $p<1$.
The proof of Theorem \ref{CLT1} is based on a sharpening of the arguments in
\cite{La,Sz1,ZO}; in particular, refined versions of the maximum
principle for walks in balanced environments (Theorem \ref{CMP})
and of a mean value inequality (Theorem \ref{Cmvi}) play a crucial role.
When the environment is iid and elliptic, our second main result
is that if $|X_{n+1}-X_n|=1$ a.s., then the quenched invariance principle holds.
Moreover, the RWRE is $P$-almost surely transient when $d\ge 3$.
The proofs combine percolation arguments with Theorem \ref{CLT1}.
\begin{theorem}\label{CLT2}
Assume that the random environment is iid, elliptic and
balanced.
\begin{enumerate}
\item[(i)] If $P\{\max_{|e|=1}\omega(o,e)\ge \xi_0\}=1$ for some positive constant $\xi_0$, then the
quenched invariance principle holds with a non-degenerate limiting covariance.
\item[(ii)] When $d\ge 3$, the RWRE is transient $P$-almost surely.
\end{enumerate}
\end{theorem}
Because the
transience or recurrence of the random walk does not change
if one considers the walk restricted
to its jump times,
one concludes, using Kesten's argument and the invariance
principle (compare with
Theorem \ref{CLT1}), that
for $d=2$ a random walk in a balanced elliptic iid
random environment is recurrent $P$-a.s.
Our proof of the invariance principles, like that of \cite{La}, is based
on the approach of the
``environment viewed from the point of view of the particle''.
Since $\{X_n\}$ is a (quenched) martingale,
standard arguments (see the proof of Theorem 6.2 in \cite{BB}) show that
the quenched invariance principle holds
whenever an invariant
measure $Q\sim P$ of $\{\bar{\omega}(n)\}$ exists.
The approach of Lawler \cite{La}, which is a discrete version of the argument of Papanicolaou and Varadhan \cite{PV}, is to construct such a measure as the limit of invariant measures of periodized environments. We will
follow this strategy using, as in \cite{Sz1,ZO}, variants of \cite{KT} to derive estimates on solutions of linear elliptic difference
equations. In the iid setup of
Theorem~\ref{CLT2}, percolation estimates are used to control
pockets of the environment where those estimates are not strong enough.
For the proof of the transience in the ergodic case,
we use a mean value inequality and follow \cite{ZO}.
To prove the transience in the iid case, we employ percolation
arguments together with
a new maximum principle (Theorem \ref{Cmp2}) for walks with (possibly)
big jumps.
\begin{remark}
Recently, Berger and Deuschel \cite{BD} have generalized our ideas and extended
the quenched invariance principle to the general non-elliptic case where the environment is only required to be iid and ``genuinely $d$-dimensional''.
\end{remark}
\subsection{Einstein relation for RWBRE}\label{IER}
In this subsection we will present the Einstein relation for random walks in uniformly elliptic balanced iid random environment. Recall that by Theorem~\ref{LaThm}, for $P$-almost every $\omega$, $(\lambda X_{t/\lambda^2})_{t\ge 0}$ converges weakly (as $\lambda\to 0$) to a Brownian motion with a non-degenerate covariance matrix, which we denote by $\bm{D}$.
For $\lambda\in (0,1)$ and a fixed direction
\[
\ell=(\ell_1,\ldots,\ell_d)\in S^{d-1},
\]
define the perturbed environment $\omega^\lambda$ of $\omega\in\Omega$ by
\[
\omega^\lambda(x,e)=(1+\lambda\ell\cdot e)\omega(x,e).
\]
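Two elementary consequences of this definition are worth recording. Because the environment is balanced, the perturbed weights at each site still sum to one, and the local drift of $\omega^\lambda$ at a site is exactly $\lambda\,(2\omega(x,e_i)\ell_i)_{1\le i\le d}$, the quantity whose environment average appears in $D_\ell$ below. The following Python sketch checks both facts at a single hypothetical site (all names and numerical values are our own illustrative choices):

```python
import math

def perturb(omega_x, ell, lam):
    """omega^lambda(x, e) = (1 + lam * (ell . e)) * omega(x, e)."""
    return {e: (1.0 + lam * sum(li * ei for li, ei in zip(ell, e))) * p
            for e, p in omega_x.items()}

# a hypothetical balanced, elliptic probability vector at one site (d = 2)
omega_o = {(1, 0): 0.3, (-1, 0): 0.3, (0, 1): 0.2, (0, -1): 0.2}
ell = (1 / math.sqrt(2), 1 / math.sqrt(2))  # a unit direction
lam = 0.05

pert = perturb(omega_o, ell, lam)
total = sum(pert.values())  # balance kills the first-order term: still 1
drift = tuple(sum(p * e[i] for e, p in pert.items()) for i in range(2))
# local drift = lam * (2 * omega(o, e_i) * ell_i)_i at a balanced site
predicted = tuple(lam * 2 * omega_o[(1, 0) if i == 0 else (0, 1)] * ell[i]
                  for i in range(2))
```

For $\lambda<1$ every perturbed weight stays positive, so $\omega^\lambda$ is again an elliptic environment.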
Since $\omega^\lambda$ satisfies Kalikow's condition (see (0.7) in \cite{SZ}),
it follows from \cite[Theorem 2.3]{SZ} that there exists a deterministic constant $v_\lambda\in\mathbb{R}^d$ such that
\[
\mathop{\rm lim}_{t\to\infty} \frac{X_t}{t}=v_\lambda,
\quad \text{ $P\otimes P_{\omega^\lambda}^o$-almost surely}.
\]
Our main result is the
following mobility-diffusivity relation:
\begin{equation}\label{Einstein relation}
\mathop{\rm lim}_{\lambda\to 0}\frac{v_\lambda}{\lambda}=D_\ell,
\end{equation}
where
\[
D_\ell:=\bm{D}\ell=(2E_Q\omega(o,e_i)\ell_i)_{1\le i\le d}\in \mathbb{R}^d.
\]
Our proof of the Einstein relation \eqref{Einstein relation} consists of proving the following two theorems:
\begin{theorem}\label{ER1}
Assume that the environment $P$ is iid, balanced and uniformly elliptic. Then for $P$-almost every $\omega$ and for any $t\ge 1$,
\begin{equation*}
\mathop{\rm lim}_{\lambda\to 0}
E_{\omega^\lambda}\frac{X_{t/\lambda^2}}{t/\lambda}=D_\ell.
\end{equation*}
\end{theorem}
\begin{theorem}\label{ER2}
Assume that the environment $P$ is iid, balanced and uniformly elliptic. Then for all sufficiently small $\lambda\in (0,1)$ and any $t\ge 1$,
\[
\left|E_PE_{\omega^\lambda}\frac{X_{t/\lambda^2}}{t/\lambda}-\frac{v_\lambda}{\lambda}\right|
\le
\frac{C}{t^{1/5}}.
\]
\end{theorem}
Our proof of Theorem \ref{ER1} is an adaptation of the argument of Lebowitz and Rost \cite{Le} (see also \cite[Proposition 3.1]{GMP}) to the discrete setting. Namely, using a change of measure argument, we will show that the scaled process $\lambda X_{t/\lambda^2}$ converges (under the law $P_{\omega^\lambda}$) to a Brownian motion with drift $tD_\ell$, which yields Theorem \ref{ER1}.
For the proof of Theorem \ref{ER2}, we want to follow the strategy of Gantert, Mathieu and Piatnitski \cite{GMP}. Arguments in the proof of \cite[Proposition 5.1]{GMP} show that if we can construct a sequence of random times $(\tau_n)_{n\in\mathbb{N}}$ (called the {\it regeneration times}) that
divides the random path into iid (under the annealed measure) pieces, then good moment estimates of the regeneration times yield Theorem \ref{ER2}.
In the construction of the regeneration times in \cite{GMP}, a heat kernel estimate \cite[Lemma 5.2]{GMP} for reversible diffusions is crucially employed.
However, due to the lack of reversibility, we do not have a good heat kernel estimate for RWRE. In this thesis, we construct the regeneration times differently, so that they divide the random path into ``almost iid'' parts. Moreover, our regeneration times have good moment bounds, which lead to a proof of Theorem \ref{ER2}. The key ingredients in our construction are Kuo and Trudinger's \cite{KT} Harnack inequality for discrete harmonic functions and the ``$\epsilon$-coins'' trick introduced by Comets and Zeitouni \cite{CZ1}.
Salix calcicola is a species of willow first described by Merritt Lyndon Fernald and Wieg. Salix calcicola belongs to the genus Salix (the willows) and the family Salicaceae. No subspecies are listed in the Catalogue of Life.
\section{Introduction}
\label{intro}
Stability is ubiquitous in biology, ranging from
physicochemical homeostasis in cellular microenvironments to ecological
constancy and resilience
\cite{cannon1929organization,gorshkov1994physical,justus2008ecological}.
It is noteworthy that not only can the stability phenomenon arise in normal living systems,
but it can also happen in abnormal organisms such as cancer.
As a large family of diseases with abnormal cell growth,
cancer is generally acknowledged to be a malignant progression
accompanied by a series of stability-breaking changes (\emph{e.g.}, genomic instability)
within otherwise normal organisms \cite{hanahan2011hallmarks}.
However, some recent studies reveal the other side of cancer.
An interesting \emph{phenotypic equilibrium}
was reported in some cancers
\cite{chaffer2011normal,gupta2011stochastic,yang2012dynamic}.
That is, the population composed of different cancer cells will tend to a fixed
equilibrium of phenotypic proportions over time regardless of initial states
(Fig. 1). These findings provided new insights into the research on cancer heterogeneity.
\begin{figure}
\begin{center}
\includegraphics[width=1.2\textwidth]{Fig1-eps-converted-to.pdf}
\caption{The phenotypic equilibrium of cancer cells. The figure is generated from the
data (SW620 colon cancer cell line) in \cite{yang2012dynamic}. In this experiment, two cellular phenotypes
were identified: cancer stem cells (CSCs) and non-stem cancer cells (NSCCs). It is shown that
no matter where the initial state is (as four different cases shown in the figure),
the CSCs proportion will converge to a fixed proportion as time passes.
The same is true for NSCCs proportion.
This phenomenon is termed \emph{phenotypic equilibrium} \cite{gupta2011stochastic}.}
\end{center}
\end{figure}
The experimental works also stimulate theoreticians to put forward reasonable models for interpreting
the phenotypic equilibrium \cite{gupta2011stochastic,zapperi2012cancer,
dos2013possible,dos2013noise,zhou2013population,wang2014dynamics,zhou2014multi,zhou2014nonequilibrium}.
In particular, it was reported that the intrinsic interconversion between different cellular phenotypes,
also called \emph{phenotypic plasticity} \cite{french2012complex,meacham2013tumour}, could play a crucial role in stabilizing the
mixture of phenotypic proportions in cancer. As a pioneering work, Gupta \emph{et al} proposed a discrete-time Markov chain model to
describe the phenotypic transitions in breast cancer cell lines \cite{gupta2011stochastic}. In their model, three phenotypes were
identified: stem-like cells (S), basal cells (B) and luminal cells (L). The phenotypic transitions among them can be captured by
the transition probability matrix as follows:
\begin{linenomath*}
\begin{equation}
P=\left(\begin{array}{ccc}
1-P_{S\rightarrow B}-P_{S\rightarrow L} & P_{S\rightarrow B} & P_{S\rightarrow L} \\
P_{B\rightarrow S} & 1-P_{B\rightarrow S}-P_{B\rightarrow L} & P_{B\rightarrow L} \\
P_{L\rightarrow S} & P_{L\rightarrow B} & 1-P_{L\rightarrow S}-P_{L\rightarrow B} \\
\end{array}\right),
\label{Matrix1}
\end{equation}
\end{linenomath*}
where $P_{i\rightarrow j}$ represents the probability of the transition from phenotype $i$ to $j$.
According to the limiting theory of discrete-time finite-state Markov chains, there exists a unique equilibrium
distribution $\vec{\pi}=(\pi_S, \pi_B, \pi_L)$ such that $\vec{\pi}=\vec{\pi}P$, provided $P$ is irreducible
and aperiodic \cite{seneta1981non}. The Markov chain will converge to $\vec{\pi}$ regardless of where it begins.
By fitting the Markov chain model to their experimental data, the equilibrium proportions of
stem-like, basal and luminal cells were predicted by the equilibrium distribution $\pi_S, \pi_B, \pi_L$ respectively.
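As a numerical illustration of this limiting behavior, the short sketch below computes the equilibrium distribution of a three-state chain by iterating $\vec{\pi}\leftarrow\vec{\pi}P$. The transition probabilities are placeholders chosen only for illustration; they are not the fitted values of \cite{gupta2011stochastic}.

```python
import numpy as np

# Placeholder transition probabilities among S, B, L (illustration only;
# not the values fitted in the experiment).
P = np.array([
    [0.60, 0.25, 0.15],   # S -> S, B, L
    [0.10, 0.70, 0.20],   # B -> S, B, L
    [0.05, 0.15, 0.80],   # L -> S, B, L
])

def stationary(P, tol=1e-12, max_iter=100_000):
    """Iterate pi <- pi P; converges when P is irreducible and aperiodic."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

pi = stationary(P)   # predicted equilibrium proportions (pi_S, pi_B, pi_L)
```

Restarting the iteration from any other probability vector returns the same $\vec{\pi}$, mirroring the independence of the initial state seen in Fig. 1.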
Even though the Markov chain model fitted the experimental results in breast cancer cell lines very well,
Zapperi and La Porta \cite{zapperi2012cancer} questioned the validity of the phenotypic transitions
and gave an alternative explanation to the phenotypic equilibrium, which was based on
the conventional cancer stem cell (CSC) model with imperfect CSC biomarkers.
Moreover, Liu \emph{et al} showed that the negative feedback mechanisms of
non-linear growth kinetics of cancer cells can also control the balance between
different cellular phenotypes \cite{liu2013nonlinear}.
These works suggested that the phenotypic plasticity may not be the only explanation to the phenotypic
equilibrium. To further reveal the mechanisms giving rise to
the phenotypic equilibrium, it is more convincing to study the models
integrating the phenotypic plasticity with the other conventional cellular processes
of cancer. Motivated by this, a series of works discussed the phenotypic equilibrium
by establishing the models coordinating with both
hierarchical cancer stem cell paradigm and phenotypic plasticity
\cite{dos2013possible,dos2013noise,zhou2013population,wang2014dynamics,zhou2014multi,zhou2014nonequilibrium}.
In these works, the phenotypic equilibria were intimately related to the stable steady-state
behavior of the corresponding ordinary differential equations (ODEs) models. In other
words, if one can model the dynamics of the phenotypic proportions as the following
system of ODEs
\begin{linenomath*}
$$\frac{d\vec{x}}{dt}=\vec{F}(\vec{x}),$$
\end{linenomath*}
the unique stable fixed point $\vec{x}^*$ (if it exists) corresponds to the equilibrium
proportions.
The aforementioned works have showed that the phenotypic equilibrium can be explained by
different concepts of stabilities in different models. Thus a natural
question is whether there exists a unified framework to
harmonize the equilibrium distribution of the Markov chain model and the stable
steady-state behavior of the ODEs model. In this study, we try to address this issue
by establishing a multi-phenotype branching (MPB) model \cite{athreya1972branching}.
On one hand, our model integrates the phenotypic plasticity with
the cellular processes (such as cell divisions) that have extensively been studied in cancer biology.
On the other hand, the model is stochastic and closer to the reality with finite population size
\cite{dingli2007symmetric,antal2011exact}. Based on this model, it is shown that the ODEs
model can be derived by taking the expectation of our model.
More specifically, the ODEs model is just the \emph{proportion equation} of
the MPB model. Besides, the Markov chain model is also shown to be
closely related to our model. That is, the Kolmogorov forward equation of the continuous-time Markov chain model
is a special case of the proportion equation provided that the division rates of stem-like, basal and luminal
cells are the same. Interestingly, ``same doubling time'' of the three phenotypes was just observed
in Gupta \emph{et al}'s experiment when they used the Markov chain model to explain the phenotypic equilibrium
\cite{gupta2011stochastic}, which is in line with our theoretical prediction.
Moreover, our result also shows that one should be more cautious about
the application of the Markov chain in modeling cell-state dynamics in larger time scales, since the Markov chain
model takes no account of different capabilities of divisions by cancer stem cells and non-stem cancer cells.
More importantly, by showing \emph{almost sure convergence} of
the MPB model, the stationarity of the Markov chain model and the stability of the ODEs model
can be unified as the average-level stability of our model. Note that
the almost sure convergence indicates the path-wise stability of stochastic samples,
providing a more profound explanation to the phenotypic equilibrium. In other words,
the phenotypic equilibrium is actually rooted in the stochastic nature of (almost) every path sample;
the average-level stability just follows from it by averaging all the stochastic samples.
Furthermore, it is also shown that, not only can the model with phenotypic plasticity give rise to the
path-wise convergence, but the conventional cancer stem cell model without phenotypic plasticity
can also lead to the convergence under certain conditions.
This echoes the works \cite{zapperi2012cancer, liu2013nonlinear}
that the phenotypic plasticity is not the only explanation to the phenotypic equilibrium.
The paper is organized as follows. The model is presented in Section 2.
Main results are shown in Section 3. Conclusions are in Section 4.
\section{Model}
\label{Model}
\subsection{Assumptions}
In this section we give the assumptions of our model. Consider a population composed of different cancer cell phenotypes.
For pure theoretical investigations, the number of the phenotypes can be any $n$ in general \cite{zhou2014multi,jiang2014cell}.
However, to better illustrate our theoretical results on the basis of more concrete biological background, enlightened by
\cite{gupta2011stochastic}, we here focus on the specific model consisting of three phenotypes:
stem-like cells (S), basal cells (B) and luminal cells (L). The main assumptions are listed as follows:\\
\emph{1}. Stem-like cells can perform three types of divisions: symmetric division, symmetric differentiation and asymmetric division
\cite{morrison2006asymmetric, dalerba2007cancer,todaro2010colon}.
That is, a stem-like cell can divide into two identical stem-like cells (symmetric division) or two identical differentiated cancer cells
(symmetric differentiation); it can also divide into a stem-like cell and a differentiated cancer cell (asymmetric division).
\begin{itemize}
\item symmetric division: S $\overset{\alpha_{S}P_1}{\longrightarrow}$ S+S;
\item symmetric differentiation: S $\overset{\alpha_{S}P_2}{\longrightarrow}$ B+B or S $\overset{\alpha_{S}P_3}{\longrightarrow}$ L+L;
\item asymmetric division: S $\overset{\alpha_{S}P_4}{\longrightarrow}$ S+B or S $\overset{\alpha_{S}P_5}{\longrightarrow}$ S+L.
\end{itemize}
$\alpha_S$ is the division rate (or termed synthesis rate \cite{liu2013nonlinear}),
with the meaning that a stem-like cell will wait an exponentially distributed time
with rate $\alpha_S$ (\emph{i.e.} with expectation $1/\alpha_S$) and then perform one particular type of division with probability $P_i$ (note that $\sum_{i=1}^5P_i=1$).
Suppose the waiting time and the division strategy are independent of each other;
then the product of $\alpha_S$ and $P_i$ governs the reaction rate of the corresponding division type. \\
\emph{2}. For non-stem cancer cells, \emph{i.e.} basal or luminal cells, we assume that not only can they undergo symmetric divisions
a limited number of times, but they can also perform phenotypic conversions. To illustrate this,
let us take B phenotype as an example. Suppose a
newly-born B cell can divide at most $m$ times. If we denote $B_i$ as the B cell that has already divided $i$ times, then we have
the following hierarchical structure:
\begin{itemize}
\item $\textrm{B}_0$ $\overset{\alpha_{B}}{\longrightarrow}$ $\textrm{B}_1$+$\textrm{B}_1$;
\item ...
\item $\textrm{B}_{m-1}$ $\overset{\alpha_{B}}{\longrightarrow}$ $\textrm{B}_m$+$\textrm{B}_m$;
\item $\textrm{B}_m$ $\overset{\alpha_{B_m}}{\longrightarrow}$ $\emptyset$.
\end{itemize}
$\alpha_{B}$ is the division rate, and $\alpha_{B_m}$ is the death rate of $\textrm{B}_m$.
Moreover, assume that a B cell can convert into an S cell (termed \emph{de-differentiation} \cite{marjanovic2013cell})
by phenotypic plasticity. Let the de-differentiation rate of $\textrm{B}_i$ be $\beta_{B_i}$, then we have
\begin{itemize}
\item $\textrm{B}_0$ $\overset{\beta_{B_0}}{\longrightarrow}$ S;
\item ...
\item $\textrm{B}_m$ $\overset{\beta_{B_m}}{\longrightarrow}$ S.
\end{itemize}
For simplicity, it is often assumed that $\beta_{B_0}=...=\beta_{B_m}$ \cite{wang2014dynamics},
denoted as $\beta_{B}$ for short. Meanwhile, note that a B cell can also convert into an L cell \cite{gupta2011stochastic}.
Since the biological mechanisms of the phenotypic conversions between different non-stem cancer cells are still
poorly understood, for simplicity it is assumed that the phenotypic transitions between B and L can only happen
in the same hierarchical level. That is, supposing that a newly-born L cell can also
divide at most $m$ times, $L_i$ is the L cell that has already divided $i$ times, then we have
\begin{itemize}
\item $\textrm{B}_i$ $\overset{\gamma_{B}}{\longrightarrow}$ $\textrm{L}_i$;
\end{itemize}
$\gamma_{B}$ is the transition rate. In fact, this assumption implies
$\textrm{B}$ ${\longrightarrow}$ $\textrm{L}$ with constant rate $\gamma_B$ overall,
which is in line with the assumption in \cite{gupta2011stochastic}.
For luminal cells, similarly, their cellular
processes are shown as follows:
\begin{itemize}
\item $\textrm{L}_i$ $\overset{\alpha_L}{\longrightarrow}$ $\textrm{L}_{i+1}$+$\textrm{L}_{i+1}$~~~($0\leq i\leq m-1$);
\item $\textrm{L}_m$ $\overset{\alpha_{L_m}}{\longrightarrow}$ $\emptyset$;
\item $\textrm{L}_i$ $\overset{\beta_{L}}{\longrightarrow}$ S~~~($0\leq i\leq m$);
\item $\textrm{L}_i$ $\overset{\gamma_{L}}{\longrightarrow}$ $\textrm{B}_i$~~~($0\leq i\leq m$).
\end{itemize}
\subsection{Multi-phenotypic branching (MPB) model}
Based on the cellular processes listed in the last subsection, we can model this cellular system
as a continuous-time Markov process in the discrete state space of cell numbers (Chapter 11 in \cite{beard2008chemical}).
If we let $X_1$ be the cell number of S phenotype,
$\vec{X}_2=(X^{(0)}_2, X^{(1)}_2,...,X^{(m)}_2)^T$ be the vector representing the cell numbers of $\textrm{B}_i$ cells,
and $\vec{X}_3=(X^{(0)}_3, X^{(1)}_3,...,X^{(m)}_3)^T$ be the vector representing the cell numbers of $\textrm{L}_i$ cells,
then the dynamics of $\vec{X}=(X_1, \vec{X}_2,\vec{X}_3)^T$ can be modeled as a multi-phenotype branching process
\cite{athreya1972branching}.
If we define $\textrm{Pr}(\vec{x};t)$ to be the probability of $\vec{X}=\vec{x}$ at time $t$,
according to the theory of \emph{Chemical Master Equation} (CME), the rate of change of $\textrm{Pr}(\vec{x};t)$
is equal to the transitions into $\vec{x}$ minus the transitions out of it, \emph{i.e.}
\begin{linenomath*}
\begin{equation}
\frac{d\textrm{Pr}(\vec{x};t)}{dt}=\sum_{\vec{x}'\neq \vec{x}}T_{\vec{x}'\rightarrow
\vec{x}}\textrm{Pr}(\vec{x}';t)-\sum_{\vec{x}'\neq \vec{x}}T_{\vec{x}\rightarrow \vec{x}'}\textrm{Pr}(\vec{x};t),
\label{CME}
\end{equation}
\end{linenomath*}
where $T_{\vec{x}'\rightarrow \vec{x}}$ is the transition rate from $\vec{x}'$ to $\vec{x}$ and
$T_{\vec{x}\rightarrow \vec{x}'}$ is the transition rate from $\vec{x}$ to $\vec{x}'$
(see \ref{appendix1} for more details).
In next section we will show that the ODEs model and the Markov chain model can be
derived from our model. For convenience we term our multi-phenotype branching model
the \emph{MPB model}.
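To make the stochastic dynamics behind Eq. (\ref{CME}) concrete, the following sketch runs an exact Gillespie-type simulation of a simplified variant of the model: the $m$-level division hierarchy and the death of terminal cells are omitted, and all rate values are illustrative placeholders. It is a minimal sketch, not the simulation scheme used for the figures.

```python
import random

def gillespie(n0, rates, t_max, seed=0):
    """Exact stochastic simulation of a simplified S/B/L branching model.

    Simplification: the m-level division hierarchy and the death of the
    terminal cells B_m, L_m are omitted; only divisions and phenotypic
    transitions are kept.
    """
    rng = random.Random(seed)
    S, B, L = n0
    aS, aB, aL = rates['aS'], rates['aB'], rates['aL']
    P1, P2, P3, P4, P5 = rates['P']
    bB, bL, gB, gL = rates['bB'], rates['bL'], rates['gB'], rates['gL']
    t = 0.0
    while t < t_max and S + B + L > 0:
        # (propensity, state change) for every reaction channel
        events = [
            (aS * P1 * S, (1, 0, 0)),    # S -> S + S   (symmetric division)
            (aS * P2 * S, (-1, 2, 0)),   # S -> B + B   (symmetric differentiation)
            (aS * P3 * S, (-1, 0, 2)),   # S -> L + L
            (aS * P4 * S, (0, 1, 0)),    # S -> S + B   (asymmetric division)
            (aS * P5 * S, (0, 0, 1)),    # S -> S + L
            (aB * B, (0, 1, 0)),         # B -> B + B
            (aL * L, (0, 0, 1)),         # L -> L + L
            (bB * B, (1, -1, 0)),        # B -> S       (de-differentiation)
            (bL * L, (1, 0, -1)),        # L -> S
            (gB * B, (0, -1, 1)),        # B -> L
            (gL * L, (0, 1, -1)),        # L -> B
        ]
        total = sum(r for r, _ in events)
        if total == 0.0:
            break
        t += rng.expovariate(total)      # exponential waiting time
        u = rng.random() * total
        acc = 0.0
        for r, (dS, dB, dL) in events:   # pick a channel proportionally to rate
            acc += r
            if u <= acc:
                S, B, L = S + dS, B + dB, L + dL
                break
    return S, B, L
```

Running many such samples and averaging the phenotypic proportions recovers the mean dynamics discussed in the next section.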
\section{Results}
\subsection{Deterministic equations derived from the MPB model}
\label{section3.1}
To relate our MPB model to the ODEs model, we consider the mean dynamics of the MPB model
by averaging all the stochastic samples of it.
Let $\langle{\vec{X}}\rangle$ be the expectation of $\vec{X}$, that is, for each component
we define
\begin{linenomath*}
$$\langle X_i \rangle:=\sum_{\vec{x}}x_i\textrm{Pr}(\vec{x}; t).$$
\end{linenomath*}
We multiply both sides of Eq. (\ref{CME}) by $x_i$ and then sum over all $\vec{x}$:
\begin{linenomath*}
\begin{equation*}
\sum_{\vec{x}}x_i\frac{d\textrm{Pr}(\vec{x};t)}{dt}=\sum_{\vec{x}}x_i\left(\sum_{\vec{x}'\neq \vec{x}}T_{\vec{x}'\rightarrow
\vec{x}}\textrm{Pr}(\vec{x}';t)-\sum_{\vec{x}'\neq \vec{x}}T_{\vec{x}\rightarrow \vec{x}'}\textrm{Pr}(\vec{x};t)\right).
\end{equation*}
\end{linenomath*}
For S cells:
\begin{linenomath*}
\begin{equation}
\frac{d\langle X_1\rangle}{dt}=\alpha_S\left(P_1-P_2-P_3\right)\langle X_1\rangle+\beta_{B}\sum_{i=0}^{m}\langle X^{(i)}_2\rangle
+\beta_{L}\sum_{i=0}^{m}\langle X^{(i)}_3\rangle.
\end{equation}
\end{linenomath*}
For B cells:
\begin{linenomath*}
\begin{equation}
\begin{cases}
\frac{d\langle X^{(0)}_2\rangle}{dt}=\alpha_S\left(2P_2+P_4\right)\langle X_1\rangle-
\left(\alpha_{B}+\beta_{B}+\gamma_{B}\right)\langle X^{(0)}_2\rangle
+\gamma_{L}\langle X^{(0)}_3\rangle;\\
\frac{d\langle X^{(i)}_2\rangle}{dt}=2\alpha_{B}\langle X^{(i-1)}_2\rangle-
\left(\alpha_{B}+\beta_{B}+\gamma_{B}\right)\langle X^{(i)}_2\rangle
+\gamma_{L}\langle X^{(i)}_3\rangle~~~~(1\leq i\leq m-1);\\
\frac{d\langle X^{(m)}_2\rangle}{dt}=2\alpha_{B}\langle X^{(m-1)}_2\rangle-
\left(\alpha_{B_m}+\beta_{B}+\gamma_{B}\right)\langle X^{(m)}_2\rangle
+\gamma_{L}\langle X^{(m)}_3\rangle.
\end{cases}
\end{equation}
\end{linenomath*}
For L cells:
\begin{linenomath*}
\begin{equation}
\begin{cases}
\frac{d\langle X^{(0)}_3\rangle}{dt}=\alpha_S\left(2P_3+P_5\right)\langle X_1\rangle-
\left(\alpha_{L}+\beta_{L}+\gamma_{L}\right)\langle X^{(0)}_3\rangle
+\gamma_{B}\langle X^{(0)}_2\rangle;\\
\frac{d\langle X^{(i)}_3\rangle}{dt}=2\alpha_{L}\langle X^{(i-1)}_3\rangle-
\left(\alpha_{L}+\beta_{L}+\gamma_{L}\right)\langle X^{(i)}_3\rangle
+\gamma_{B}\langle X^{(i)}_2\rangle~~~~(1\leq i\leq m-1);\\
\frac{d\langle X^{(m)}_3\rangle}{dt}=2\alpha_{L}\langle X^{(m-1)}_3\rangle-
\left(\alpha_{L_m}+\beta_{L}+\gamma_{L}\right)\langle X^{(m)}_3\rangle
+\gamma_{B}\langle X^{(m)}_2\rangle.
\end{cases}
\end{equation}
\end{linenomath*}
Then it is not difficult to see that the dynamics of $\langle\vec{X}\rangle$ can be
captured by a system of linear ODEs,
\begin{linenomath*}
\begin{equation}
\frac{d \langle\vec{X}\rangle}{d t}=G\langle\vec{X}\rangle,
\label{ODE1}
\end{equation}
\end{linenomath*}
where
\begin{linenomath*}
\begin{equation}
G=[g_{ij}]_{(2m+3)\times(2m+3)}=\left(\begin{smallmatrix}
\alpha_S\left(P_1-P_2-P_3\right) & \beta_{B} & \cdots & \beta_{B} & \beta_{L} & \cdots & \beta_{L} \\
\alpha_S\left(2P_2+P_4\right) & -\left(\alpha_{B}+\beta_{B}+\gamma_{B}\right) & 0 & \cdots & \gamma_{L} & \cdots & 0 \\
0 & 2\alpha_{B} & -\left(\alpha_{B}+\beta_{B}+\gamma_{B}\right) & 0 & \cdots & \cdots & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\
\end{smallmatrix}\right).
\label{Matrix2}
\end{equation}
\end{linenomath*}
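For concreteness, the matrix $G$ can be assembled programmatically. The sketch below builds the full $(2m+3)\times(2m+3)$ matrix with the state ordered as $(X_1, X^{(0)}_2,\dots,X^{(m)}_2, X^{(0)}_3,\dots,X^{(m)}_3)$; the example parameter values are the illustrative rates used later for Fig. 2, not fitted quantities.

```python
import numpy as np

def build_G(m, aS, P, aB, aBm, bB, gB, aL, aLm, bL, gL):
    """Assemble G of Eq. (6); state order (X1, B_0..B_m, L_0..L_m)."""
    P1, P2, P3, P4, P5 = P
    n = 2 * m + 3
    G = np.zeros((n, n))
    G[0, 0] = aS * (P1 - P2 - P3)        # net self-renewal of S
    G[1, 0] = aS * (2 * P2 + P4)         # S feeds B_0
    G[m + 2, 0] = aS * (2 * P3 + P5)     # S feeds L_0
    for i in range(m + 1):
        b, l = 1 + i, m + 2 + i          # column/row indices of B_i and L_i
        G[0, b], G[0, l] = bB, bL        # de-differentiation back to S
        G[b, l], G[l, b] = gL, gB        # B_i <-> L_i transitions
        G[b, b] = -((aB if i < m else aBm) + bB + gB)
        G[l, l] = -((aL if i < m else aLm) + bL + gL)
        if i >= 1:                       # divisions move cells one level down
            G[b, b - 1] = 2 * aB
            G[l, l - 1] = 2 * aL
    return G

G = build_G(m=10, aS=0.8, P=(0.3, 0.2, 0.2, 0.15, 0.15),
            aB=0.6, aBm=0.3, bB=0.1, gB=0.05,
            aL=0.7, aLm=0.3, bL=0.13, gL=0.2)
```

A quick sanity check on the construction: each column of $G$ sums to the net production rate of the corresponding state ($\alpha_S$ for S, $\alpha_B$ for $\textrm{B}_i$ with $i<m$, $-\alpha_{B_m}$ for $\textrm{B}_m$, and similarly for L), since the transition terms cancel within each column.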
Furthermore, it should be noted that Eq. (\ref{ODE1}) describes the cell number dynamics of each phenotype at each hierarchical level.
If we denote $X_2=\sum_{i=0}^m X^{(i)}_2$ and $X_3=\sum_{i=0}^m X^{(i)}_3$ as the total cell numbers of B and L phenotypes
respectively, then it is often the dynamics of $\vec{X}^*=(X_1, X_2, X_3)^T$ that interests people. That is,
\begin{linenomath*}
\begin{small}
\begin{equation}
\begin{cases}
\frac{d\langle X_1\rangle}{dt}=\alpha_S\left(P_1-P_2-P_3\right)\langle X_1\rangle+\beta_{B}\langle X_2\rangle+\beta_{L}\langle X_3\rangle;\\
\frac{d\langle X_2\rangle}{dt}=\alpha_S\left(2P_2+P_4\right)\langle X_1\rangle+(\alpha_B-\beta_B-\gamma_B)\langle X_2\rangle
+\gamma_{L}\langle X_3\rangle-(\alpha_B+\alpha_{B_m})\langle X^{(m)}_2\rangle;\\
\frac{d\langle X_3\rangle}{dt}=\alpha_S\left(2P_3+P_5\right)\langle X_1\rangle+\gamma_{B}\langle X_2\rangle
+(\alpha_L-\beta_L-\gamma_L)\langle X_3\rangle-(\alpha_L+\alpha_{L_m})\langle X^{(m)}_3\rangle.\\
\end{cases}
\label{ODEx}
\end{equation}
\end{small}
\end{linenomath*}
We can see that Eq. (\ref{ODEx}) is not a closed system in $\langle \vec{X}^* \rangle$ alone: it also depends on $\langle X^{(m)}_2\rangle$ and
$\langle X^{(m)}_3\rangle$ separately. Technically this is due to the limited capability of divisions of the B and L phenotypes.
In the limit of large $m$, or when $m$ is relatively large in comparison to observational time scales
(\emph{e.g.} $t\lessapprox m$), Eq. (\ref{ODEx}) can approximately be
expressed as a linear system of $\langle \vec{X}^* \rangle$:
\begin{linenomath*}
\begin{equation}
\frac{d \langle\vec{X}^*\rangle}{d t}\approx G^*\langle\vec{X}^*\rangle,
\label{ODExx}
\end{equation}
\end{linenomath*}
where
\begin{linenomath*}
\begin{equation}
G^*=[g^*_{ij}]_{3\times3}=\left(\begin{smallmatrix}
\alpha_S\left(P_1-P_2-P_3\right) & \beta_{B} & \beta_{L}\\
\alpha_S\left(2P_2+P_4\right) & \alpha_B-\beta_B-\gamma_B & \gamma_{L}\\
\alpha_S\left(2P_3+P_5\right) & \gamma_{B} & \alpha_L-\beta_L-\gamma_L
\end{smallmatrix}\right).
\label{Matrixx}
\end{equation}
\end{linenomath*}
In this way the model reduces to the three-phenotypic model investigated in \cite{zhou2013population}.
However, Eq. (\ref{ODEx}) should be adopted for describing larger time scales (\emph{e.g.} $t\gg m$).
Since it is inconvenient to analyze Eq. (\ref{ODEx}) directly,
we will show later that analyzing Eq. (\ref{ODE1}) is quite helpful for
the understanding of Eq. (\ref{ODEx}), especially in the study of the phenotypic equilibrium.
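The stabilization of proportions under Eq. (\ref{ODExx}) is easy to check numerically. The sketch below integrates $d\langle\vec{X}^*\rangle/dt = G^*\langle\vec{X}^*\rangle$ by forward Euler with illustrative (not fitted) rates: although the cell numbers grow exponentially, the proportions settle to the same vector from very different initial states.

```python
import numpy as np

# Illustrative rate constants (not fitted values) plugged into G* of Eq. (8).
aS, aB, aL = 0.8, 0.6, 0.7
P1, P2, P3, P4, P5 = 0.3, 0.2, 0.2, 0.15, 0.15
bB, bL, gB, gL = 0.1, 0.13, 0.05, 0.2

Gstar = np.array([
    [aS * (P1 - P2 - P3), bB,           bL],
    [aS * (2 * P2 + P4),  aB - bB - gB, gL],
    [aS * (2 * P3 + P5),  gB,           aL - bL - gL],
])

def proportions_at(x0, t_end, dt=1e-3):
    """Forward-Euler integration of d<X*>/dt = G* <X*>; return proportions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        x = x + dt * (Gstar @ x)
    return x / x.sum()

p_from_pure_S = proportions_at([20.0, 0.0, 0.0], t_end=40.0)
p_from_pure_L = proportions_at([0.0, 0.0, 20.0], t_end=40.0)
```

Both runs end at (numerically) the same proportion vector, illustrating the independence of the equilibrium from the initial state.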
\subsection{Proportion equation: Bridging the MPB model and the ODEs model}
Since Eq. (\ref{ODE1}) describes the dynamics of the absolute numbers of different cellular
phenotypes, we term it the \emph{number equation}. However, to investigate the phenotypic
equilibrium, we are more concerned about the dynamics of the relative numbers (\emph{i.e.} proportions)
of different cellular phenotypes. Let $\vec{p}$ be the vector representing the proportions of different
cellular phenotypes. By replacing $\langle \vec{X}\rangle$ in Eq. (\ref{ODE1}) with $\vec{p}$,
we have the equation governing the phenotypic proportions as follows (see \ref{appendix2})
\begin{linenomath*}
\begin{equation}
\frac{d \vec{p}}{d t}=G\vec{p}-\vec{p} e^TG\vec{p},
\label{ODE2}
\end{equation}
\end{linenomath*}
where $e=(1,...,1)^T$. We term Eq. (\ref{ODE2}) the \emph{proportion equation}. It is noteworthy that
the stable steady-state behavior of Eq. (\ref{ODE2}) just corresponds to the phenotypic equilibrium
investigated in \cite{wang2014dynamics, zhou2014multi}.
The proportion equation thus connects the MPB model and the ODEs model in previous literature,
implying that the ODEs model can be seen as the average-level counterpart
of the stochastic MPB model. To show the stability of Eq. (\ref{ODE2}),
we have the following theorem (see \ref{appendix2+} for the proof):
\newtheorem{theorem}{Theorem}
\begin{theorem}
There exists a unique positive stable fixed point $\vec{\mu}$ in Eq. (\ref{ODE2})
provided that $G$ is irreducible
\footnote{Strictly speaking, for completing the theorem
it is necessary to add a small perturbation to the initial state in rare cases,
see \ref{appendix2+}}.
\label{Thm1}
\end{theorem}
Theorem \ref{Thm1} shows that the deterministic population dynamics of cancer cells
will tend to an equilibrium mixture of phenotypic proportions
as time passes. Besides, let $\vec{p}^*$ be the proportion vector of $\vec{X}^*$,
\emph{i.e.}
\begin{linenomath*}
$$\vec{p}^*=(p^*_1, p^*_2, p^*_3)=(p_1, \sum_{i=0}^m p^{(i)}_2, \sum_{i=0}^m p^{(i)}_3).$$
\end{linenomath*}
Given $\lim_{t\rightarrow \infty}\vec{p}=\vec{\mu}$ (Theorem \ref{Thm1}),
\begin{linenomath*}
$$\lim_{t\rightarrow \infty}\vec{p}^*=\lim_{t\rightarrow \infty}(p_1, \sum_{i=0}^m p^{(i)}_2, \sum_{i=0}^m p^{(i)}_3)=
(\mu_1, \sum_{i=0}^m \mu^{(i)}_2, \sum_{i=0}^m \mu^{(i)}_3)=\vec{\mu}^*.$$
\end{linenomath*}
Thus we have the following result for $\vec{p}^*$:
\newtheorem{corollary}{Corollary}
\begin{corollary}
Under the same condition as in Theorem \ref{Thm1}, $\vec{p}^*$ will tend to a fixed positive vector $\vec{\mu}^*$ as $t\rightarrow\infty$.
\label{cor1}
\end{corollary}
Corollary \ref{cor1} indicates the phenotypic equilibrium of the three-phenotypic model in Eq. (\ref{ODEx}).
Moreover, it should be pointed out that, the results in Theorem \ref{Thm1} and Corollary \ref{cor1}
can be seen as the average-level stabilities following from the path-wise convergence of the MPB model,
which will be discussed in Sec. \ref{sectionIII}.
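For computation it is worth noting how $\vec{\mu}$ can be obtained: at a fixed point of Eq. (\ref{ODE2}), $G\vec{\mu}=(e^TG\vec{\mu})\vec{\mu}$, so $\vec{\mu}$ is the Perron--Frobenius eigenvector of $G$ normalized to sum one, and $e^TG\vec{\mu}$ is the long-run growth rate. The sketch below carries this out by power iteration for the reduced matrix $G^*$ of Eq. (\ref{Matrixx}), with illustrative rates.

```python
import numpy as np

# Illustrative rates (not fitted values) plugged into G* of Eq. (8).
aS, aB, aL = 0.8, 0.6, 0.7
P1, P2, P3, P4, P5 = 0.3, 0.2, 0.2, 0.15, 0.15
bB, bL, gB, gL = 0.1, 0.13, 0.05, 0.2

Gstar = np.array([
    [aS * (P1 - P2 - P3), bB,           bL],
    [aS * (2 * P2 + P4),  aB - bB - gB, gL],
    [aS * (2 * P3 + P5),  gB,           aL - bL - gL],
])

# Shift G* so all entries are nonnegative; power iteration then converges
# to the Perron eigenvector, which the shift leaves unchanged.
shift = max(0.0, -Gstar.diagonal().min()) + 1.0
A = Gstar + shift * np.eye(3)
v = np.full(3, 1.0 / 3.0)
for _ in range(5000):
    v = A @ v
    v /= v.sum()
mu = v                                    # equilibrium proportions
growth = float(np.ones(3) @ Gstar @ mu)   # e^T G* mu: long-run growth rate
```

The returned $\vec{\mu}$ is exactly the fixed point of the proportion equation restricted to the reduced system.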
\subsection{The Markov chain model as a special case of the proportion equation}
Note that the Markov chain model Eq. (\ref{Matrix1}) is discrete-time and the MPB model is continuous-time;
to compare the two models in the same time scale, we turn our attention from discrete-time Markov chain
to continuous-time Markov chain. Consider the standard model of continuous-time Markov chain. That is, let
$P_i(t)$ be the probability of the Markov chain being in state $i$ at time $t$,
its dynamics can be captured by the Kolmogorov forward equation:
\begin{linenomath*}
\begin{equation}
\frac{d \vec{P}(t)}{d t}=Q^T\vec{P}(t),
\label{MC}
\end{equation}
\end{linenomath*}
where the $Q$-matrix $Q=[q_{ij}]_{3\times 3}$ satisfies
\begin{linenomath*}
\begin{align}
q_{ij}\geq 0 ~~ \forall i\neq j,
\label{Q1}
\end{align}
\end{linenomath*}
\begin{linenomath*}
\begin{align}
q_{ii}=-\sum_{j:j\neq i}q_{ij}.
\label{Q2}
\end{align}
\end{linenomath*}
We now discuss the relation between $\vec{P}(t)$ and $\vec{p}^*$.
By replacing $\langle \vec{X}^*\rangle$ in Eq. (\ref{ODExx}) with $\vec{p}^*$,
we obtain the proportion equation governing $\vec{p}^*$
\footnote{The derivation of Eq. (\ref{ODE*}) is similar to that of Eq. (\ref{ODE2}), see \ref{appendix2}.}
\begin{linenomath*}
\begin{equation}
\frac{d \vec{p}^*}{d t}=G^*\vec{p}^*-\vec{p}^* e^TG^*\vec{p}^*,
\label{ODE*}
\end{equation}
\end{linenomath*}
where $e=(1,1,1)^T$ and $G^*$ is given in Eq. (\ref{Matrixx}). If we let the sum of each column of $G^*$ be the same, \emph{i.e.}
\begin{linenomath*}
$$\alpha_S=\alpha_B=\alpha_L=\kappa,$$
\end{linenomath*}
then Eq. (\ref{ODE*})
becomes
\begin{linenomath*}
\begin{equation}
\frac{d \vec{p}^*}{d t}=(G^*-\kappa I)\vec{p}^*,
\label{ODE3}
\end{equation}
\end{linenomath*}
where $I$ is identity matrix.
If we denote $H=(G^*-\kappa I)^T$, it can be shown that $H$ satisfies the
conditions (\ref{Q1}) and (\ref{Q2}) for the $Q$-matrix (see \ref{appendix2}).
In other words, the Kolmogorov forward equation Eq. (\ref{MC}) is a
special linear case of the nonlinear proportion equation Eq. (\ref{ODE*}).
This relation implies that, when the division rates of the three phenotypes are the same,
the dynamics of the phenotypic proportions can equivalently be captured
by the Markov chain model where only the phenotypic transitions are accounted for.
Otherwise, the Markov chain model may oversimplify the phenotypic dynamics with unequal division rates.
Interestingly, it was reported in Gupta \emph{et al}'s experiment that the subpopulations of S, B and
L phenotypes have the same ``doubling time'' \cite{gupta2011stochastic}, which justified
their application of the Markov chain model.
However, as mentioned in the end of Sec. \ref{section3.1}, Eq. (\ref{ODExx}) is
valid only in relatively short time scale. For larger time scales,
it is unreasonable to model the three-phenotypic dynamics by the Markov chain model
taking no account of different capabilities of divisions by cancer stem cells (unlimited) and
non-stem cancer cells (limited), even if they have the same division rate.
Therefore, one should be cautious about the application of the Markov chain in modeling
cell-state dynamics.
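The $Q$-matrix property of $H=(G^*-\kappa I)^T$ under equal division rates can also be verified numerically; the sketch below uses illustrative rates with $\alpha_S=\alpha_B=\alpha_L=\kappa$ and checks the two defining conditions.

```python
import numpy as np

# Illustrative rates with equal division rates alpha_S = alpha_B = alpha_L.
kappa = 0.8
aS = aB = aL = kappa
P1, P2, P3, P4, P5 = 0.3, 0.2, 0.2, 0.15, 0.15
bB, bL, gB, gL = 0.1, 0.13, 0.05, 0.2

Gstar = np.array([
    [aS * (P1 - P2 - P3), bB,           bL],
    [aS * (2 * P2 + P4),  aB - bB - gB, gL],
    [aS * (2 * P3 + P5),  gB,           aL - bL - gL],
])

H = (Gstar - kappa * np.eye(3)).T

# Condition 1: off-diagonal entries of H are nonnegative.
off_diag_ok = all(H[i, j] >= 0 for i in range(3) for j in range(3) if i != j)
# Condition 2: each row of H sums to zero (each column of G* sums to kappa).
row_sums = H.sum(axis=1)
```

Both checks pass because every column of $G^*$ sums to the common division rate $\kappa$: the transition terms cancel, leaving $\alpha_S$, $\alpha_B$, $\alpha_L$ in the three columns respectively.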
\subsection{Path-wise convergence of the MPB model}
\label{sectionIII}
We have seen that the MPB model provides a unified framework for the ODEs model and Markov chain model.
In this section, we will show the path-wise convergence of the MPB model, which provides a much stronger notion of
stability: both the stable steady-state behavior of the ODEs model and the equilibrium distribution of the Markov chain model
then follow as average-level stabilities of the MPB model.
Much attention has long been paid to the limit theorems of multi-type branching processes by mathematicians
\cite{athreya1968some,kesten1967limit,janson2004functional,yakovlev2010limiting}.
Here we are not going to discuss the rigorous mathematical theory in general
(which is the focus of another work of ours \cite{jiang2014cell}). Instead
we are more interested in the specific results related to the phenotypic equilibrium, \emph{i.e.}
the conditions under which $\vec{p}$ converges to a positive vector $\vec{\mu}$.
Unlike the $\vec{p}$ in Theorem \ref{Thm1}, the $\vec{p}$ here is stochastic.
The ``convergence'' here means \emph{almost sure convergence}.
That is, if the convergence of $\vec{p}$ holds, almost all the stochastic paths
will tend to a fixed equilibrium (also termed \emph{path-wise convergence}).
We present our main results in the following two theorems (see \ref{appendix3} for the proofs and mathematical details):
\begin{theorem}
If $G$ in Eq. (\ref{Matrix2}) is irreducible and its Perron-Frobenius eigenvalue is positive,
then $\vec{p}$ will tend to a fixed positive vector $\vec{\mu}$ almost surely as $t\rightarrow\infty$
conditioned on non-extinction of the population.
\label{Thm2}
\end{theorem}
\begin{theorem}
Assume that \\
(1) all the phenotypic transition rates are zero, i.e. $\beta_B$, $\beta_L$ and
$\gamma_B$, $\gamma_L$ are zero;\\
(2) $\alpha_S>0$, $\alpha_B>0$ and $\alpha_L>0$;\\
(3) $P_i>0$ ($1\leq i\leq 5$) and $P_1>P_2+P_3$;\\
then $\vec{p}$ will tend to a fixed positive vector $\vec{\mu}$ almost surely as $t\rightarrow\infty$
conditioned on non-extinction of stem-like cells.
\label{Thm3}
\end{theorem}
The above two theorems are applicable to different cases. Theorem \ref{Thm2} corresponds to the case with phenotypic plasticity,
since the irreducibility of $G$ is satisfied as long as the conversions
between different phenotypes can happen. In contrast, Theorem \ref{Thm3} corresponds to the case without
phenotypic plasticity, since all the phenotypic transition rates are assumed to be zero.
Interestingly, even though the assumptions of the two theorems are basically different,
both of them can lead to the path-wise convergence of $\vec{p}$. Furthermore, it is easy to see that the path-wise convergence of
$\vec{p}^*$ is implied by Theorems \ref{Thm2} and \ref{Thm3}:
\begin{corollary}
$\vec{p}^*$ will tend to a fixed positive vector $\vec{\mu}^*$ almost surely as $t\rightarrow\infty$
under the conditions in either Theorem \ref{Thm2} or Theorem \ref{Thm3}.
\label{cor2}
\end{corollary}
Figs. 2 and 3 illustrate the path-wise convergence of $\vec{p}^*$ implied by Theorems \ref{Thm2} and \ref{Thm3}
respectively by using stochastic simulations (\ref{appendix4} shows the simulations for $\vec{p}$ in details).
In both cases, even though all the stochastic paths fluctuate at the beginning of the process,
the proportions of S, B and L cells eventually converge to their equilibrium proportions as time passes.
Since the path-wise convergence indicates the stability of (almost) every stochastic sample,
the convergence of the mean dynamics just follows from it by averaging all the stochastic samples
(see lower panels of Figs. 2 and 3). Note that both the Kolmogorov forward equation of the Markov chain and the ODEs model
can be seen as the mean dynamics of the phenotypic proportions; their stabilities just correspond to the
average-level stabilities of the MPB model, which can be seen as direct results of the path-wise convergence.
In this way, the path-wise convergence provides a deeper understanding to the phenotypic equilibrium
from the stochastic point of view.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Fig2-eps-converted-to.pdf}
\caption{Stochastic simulations for the case with phenotypic plasticity (Theorem \ref{Thm2}). Upper panel shows the stochastic path-wise dynamics of the phenotypic proportions of S (blue), B (black) and L (red). The initial numbers of S, B and L cells are assumed to be 20, 0 and 0 respectively, that is, the initial proportions of S, B and L cells are 100\%, 0\% and 0\%.
According to the assumptions in Theorem \ref{Thm2}, we set $m=10$;
$\alpha_S=0.8$, $P_1=0.3$, $P_2=0.2$, $P_3=0.2$, $P_4=0.15$, $P_5=0.15$;
$\alpha_B=0.6$, $\alpha_{B_m}=0.3$, $\beta_B=0.1$, $\gamma_B=0.05$;
$\alpha_L=0.7$, $\alpha_{L_m}=0.3$, $\beta_L=0.13$, $\gamma_L=0.2$.
Thirty stochastic samples for each phenotype were produced. It is shown that even though the stochastic paths fluctuate at the beginning of the process, the proportions of S, B and L phenotypes eventually path-wisely tend to their equilibrium proportions respectively. Lower panel shows the mean dynamics of the phenotypic proportions by averaging all the thirty samples shown in upper panel.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Fig3-eps-converted-to.pdf}
\caption{Stochastic simulations for the case without phenotypic plasticity (Theorem \ref{Thm3}).
The initial cell numbers of S, B and L cells are also assumed to be 20, 0 and 0 respectively.
According to the assumptions in Theorem \ref{Thm3}, we set $m=10$;
$\alpha_S=0.8$, $P_1=0.5$, $P_2=0.14$, $P_3=0.16$, $P_4=0.1$, $P_5=0.1$;
$\alpha_B=0.4$, $\alpha_{B_m}=0.3$, $\beta_B=0$, $\gamma_B=0$;
$\alpha_L=0.45$, $\alpha_{L_m}=0.3$, $\beta_L=0$, $\gamma_L=0$.
Ten stochastic samples for each phenotype were produced. Upper panel shows the path-wise convergence of the phenotypic proportions. Lower panel shows the average-level stability of the mean dynamics.}
\end{center}
\end{figure}
To close this section, it is worth emphasizing that, according to Theorem \ref{Thm3}, the
phenotypic equilibrium can still happen in the paradigm of conventional cancer stem cell theory.
The assumptions in Theorem \ref{Thm3} together indicate the cellular hierarchy proposed by the cancer stem cell
theory \cite{dalerba2007cancer}. That is, cancer stem cells (S cells)
are capable of differentiation into other more committed non-stem cancer cells (B and L cells)
but not vice versa. In this way, cancer stem cells are at the apex of this cellular hierarchy.
Moreover, the assumption ``$P_1>P_2+P_3$'' implies the \emph{dominance} of S phenotype during the growth of the population.
To show this, note that $\alpha_S\left(P_1-P_2-P_3\right)$ is the eigenvalue corresponding to S phenotype of $G$ in
Eq. (\ref{matrix3}), which is the only positive eigenvalue of $G$ provided ``$P_1>P_2+P_3$''
(see \ref{appendix3}). In other words, instead of the phenotypic plasticity, Theorem \ref{Thm3} also
gives an alternative explanation to the phenotypic equilibrium in the framework of the cancer stem cell theory,
as long as the cancer stem cell phenotype is dominant in the population. However, it is interesting to see that
the convergence rate of the case in Fig. 2 is faster than that of the case in Fig. 3, even though they both
give rise to the path-wise convergence. This suggests that perhaps the convergence rate (rather than the convergence itself)
could serve as an indicator to distinguish the models with and without phenotypic plasticity, which might be another
meaningful research topic in the future.
\section{Conclusions}
In this study, we have presented a multi-phenotype branching model of cancer cells.
On one hand, this model can serve as an underlying model from which the ODEs model and
the Markov chain model can be deduced. On the other hand, the almost sure convergence of the
model enhances our understanding of the phenotypic equilibrium, from average-level stability
to path-wise convergence. Furthermore, our results have indicated that, even though the
phenotypic plasticity facilitates the phenotypic equilibrium, it is not indispensable in some cases.
It has been shown that the conventional cancer stem cell model can also stabilize the mixture of
the phenotypic proportions, providing an alternative explanation to the phenotypic equilibrium.
Moreover, it should be noted that even though this work is focused on
the issue of cancer, our methods can conveniently be applied to
more general cell population dynamics \cite{jiang2014cell}.
To further reveal the biological mechanisms of the phenotypic equilibrium,
more detailed dynamic models of cancer cells are needed. For instance,
the hypothesis of cooperation among cancer cells has been put forward
\cite{axelrod2006evolution}. In particular, self-sufficiency of certain growth signals
of cancer cells supports the concept of mutualism and could be an important mechanism
supporting the phenotypic equilibrium. Therefore, the models of capturing the
interactions among cancer cells, \emph{e.g.} evolutionary game models \cite{nowak2004evolutionary},
could be a promising research direction in the future.
Furthermore, the genetic and epigenetic state networks \cite{huang2012molecular,wang2013phage} of cancer will enable us to explore the
molecular mechanisms of the phenotypic equilibrium, which are poorly understood.
The network methods have successfully been used to investigate the processes of cellular pluripotent reprogramming \cite{wang2012global}
and epithelial-mesenchymal transitions (EMT) \cite{jolly2014towards}.
Since EMT could play a key role in regulating the phenotypic heterogeneity in cancer \cite{may2011epithelial},
further studies on it should be another important task in future work.
\section*{Acknowledgements}
D. Z. acknowledges the generous sponsorship from
the National Natural Sciences Foundation of China (No. 11401499),
the Natural Science Foundation of Fujian Province of China (No. 2015J05016),
and the Fundamental Research Funds for the Central Universities (No. 20720140524).
Y. N. is supported by National Natural Science Foundation of China (No.11401594) and the New Teachers' Specialised
Research Fund for the Doctoral Program from Ministry of Education of China (No.20120162120096).
Satya Bhusan Burman (born May 1907) was an Indian judge and former Chief Justice of Orissa High Court.
Career
Burman was born in 1907. He studied at Mitra Institution, Kolkata and at Presidency College, and took his law degree from Hazra Law College, University of Calcutta. Burman became a Barrister-at-Law from Lincoln's Inn, London, and was called to the English Bar in 1932. Burman was appointed additional Judge of the Orissa High Court on 3 February 1958. He was elevated to the post of Chief Justice of the Orissa High Court in 1967 on the proposal of Sudhi Ranjan Das, Chief Justice of India. Justice Burman retired from the judgeship on 30 April 1969.
References
1907 births
Year of death missing
Judges of the Orissa High Court
Chief Justices of the Orissa High Court
Members of Lincoln's Inn
University of Calcutta alumni
Presidency University, Kolkata alumni
Indian barristers
Q: For this page, why do I have to use display: inline-block instead of display: inline? http://christianselig.com/wp/
For the main nav, if I use display: inline, they're displayed as blocks. I added display: inline-block on a whim, and it worked. scratches head
A: Define display: inline-block or float: none on your main nav li, because your .main-navigation li currently has float: left:
nav.main-navigation li {
display: inline-block;
float: none;
vertical-align: top;
}
The result is:
A: I have checked your HTML and CSS, and it is not valid according to web standards, which creates many issues: if you fix one problem, another arises. Kindly validate the code.
nav.main-navigation li {
display: inline-block;
float: none;
}
You can add this to your CSS and it works fine, but the navigation merges with your banner headings.
Remember, a 2'x4' HO model train layout is small and does not provide the flexibility to grow your layout. One of the main reasons model train beginners tend to start off with a 4'x8' HO train layout is that building it is very easy and it also provides enough flexibility to grow your layout.
Some model train enthusiasts prefer to use a single model train on their layout that serves multiple purposes, from picking up passengers on their way to work all the way up to being used as a freight train carrying supplies for the town. If you are building a layout to show it as a night model, this layout is an amazing inspiration. There is something about toy and model trains that can transform a grown man into a version of his 10-year-old self — wide-eyed, goofy-grinned, literally bursting with excitement.
I'll admit it: Growing up, I used to love when my father set up his childhood trains around the holidays. After entering the San Diego Model Railroad Museum, I rediscovered what had captivated me as a child. Whether you are a model railroad hobbyist, a parent with train-happy kids, someone looking to relive childhood memories or just plain curious, this unique museum is definitely worth checking out.
Inside the museum, you'll find the largest indoor model railroad display in the world (at 27,000 square feet).
4'x8' plywood sheets are readily available and do not take up an incredible amount of space. This is because, after gaining experience, beginners soon realize that 2'x4' is too small for a proper model railroad layout.
Set up the grid paper scale based on your own preference; I always find it better to use big sheets of paper, like those found at elementary schools.
The most important aspect of HO train layouts that are customized for a single train is the switchers.
I would sit for hours watching the Lionel train snake through a toy village, mesmerized by the roar of the engine and the sound of the whistle — all the while eagerly waiting for my turn to operate the train.
And so when I heard that there was a museum dedicated to model railroading right here in San Diego, I was dubious.
Within minutes, I became a version of my 10-year-old self — wide-eyed, goofy-grinned, bursting with excitement. There are four main exhibits that depict railroads of the Southwest in O, HO and N scales (that's model railroad speak for size), as well as a toy train gallery with the ever-popular Lionel type trains. The museum also includes a children's educational and entertainment area, and a separate library stuffed with books, magazines and photographs related to prototype and model railroading.
If you currently have grandchildren or know teachers that work at elementary schools, try to ask them for larger grid paper. Choosing road switchers, such as the diesel-based RS3 and EMD GP series or the smaller Mogul and Atlantic types for steam, is a crucial decision for HO railroading. I found myself just short of running from exhibit to exhibit, searching for the trains, almost giggling when they passed by. 2'x4' is not a bad size for an N or Z gauge starter layout but will not do an HO gauge justice. It is just like getting your hair cut: once it's cut you cannot change it, but if the barber starts off cutting slowly, you can manage and customize it as you see fit. The prospect of spending time inside on a beautiful day staring at trains did not sound entertaining; I just didn't understand the draw. Many are works in progress, so you'll often find club members working on the models, always ready to chat with visitors about the project or model railroading in general.
\section{Introduction}
\noindent
Distal theories and structures were introduced by Simon~\cite{SimonDistal} as a way to distinguish those NIP theories which are in some sense \emph{purely unstable}, i.e., where absolutely no stable behavior of any kind occurs.
We sometimes think of distality as meaning: everything in sight is completely controlled by linear orders, either overtly or covertly.
Any o-minimal theory is distal, and the $p$-adic fields are distal as well.
In an o-minimal structure, everything is controlled by the obvious underlying linear order. In the $p$-adics, there is no underlying linear order, however everything is still controlled in some sense by the totally ordered value group (up to a finite residue field).
A non-example is the theory of algebraically closed valued fields (ACVF).
Indeed, the interpretable residue field is an algebraically closed field, a purely stable structure which is not being controlled by any linear order.
\medskip\noindent
More recently, Chernikov, Galvin and Starchenko showed that a strong Szemer\'{e}di-type
regularity lemma and other combinatorial results hold
in all distal structures~\cite{ChernikovGalvinStarchenko,ChernikovStarchenko}.
Consequently, there has been increased interest in classifying which NIP structures are distal, as well as classifying which NIP structures have distal expansions.
In this paper we prove that a particular structure, \emph{the asymptotic couple $(\Gamma_{\log},\psi)$ of the ordered valued differential field $\mathbb{T}_{\log}$ of logarithmic transseries}, is distal.
Asymptotic couples arise as the value groups of certain types of valued differential fields: the so-called \emph{asymptotic fields}. See~\cite{ADAMTT} for the full story.
We now define the object $(\Gamma_{\log},\psi)$:
\medskip\noindent
Throughout, $m$ and $n$ range over $\N=\{0,1,2,\ldots\}$. Let $\bigoplus_n\R e_n$ be a vector space over $\R$ with basis $(e_n)$. Then $\bigoplus_n\R e_n$ can be made into an ordered group using the usual lexicographical order, i.e., by requiring for nonzero $\sum_i r_ie_i$ that
\[
\textstyle \sum r_ie_i>0\ \Longleftrightarrow\ r_n>0\ \ \text{for the least $n$ such that $r_n\neq 0$.}
\]
Let $\Gamma_{\log}$ be the above ordered abelian group $\bigoplus_n\R e_n$. It is often convenient to think of an element $\sum r_ie_i$ as the vector $(r_0,r_1,r_2,\ldots)$. We follow Rosenlicht~\cite{rosenlicht} in taking the function
\[
\psi:\Gamma_{\log}\setminus\{0\} \to\Gamma_{\log}
\]
defined by
\[
(\underbrace{0,\ldots,0}_{n},\underbrace{r_n}_{\neq 0},r_{n+1},\ldots) \mapsto (\underbrace{1,\ldots,1}_{n+1},0,0,\ldots)
\]
as a new primitive, calling the pair $(\Gamma_{\log},\psi)$ an \emph{asymptotic couple} (the asymptotic couple of $\mathbb{T}_{\log}$). In~\cite{gehretQE,GehretNIP}, the model theory of $(\Gamma_{\log},\psi)$ is studied in detail. There, $(\Gamma_{\log},\psi)$ is construed as an $\L_{\log}$-structure for a certain first-order language $\L_{\log}$. In this paper we continue the study of the theory $T_{\log} = \Th_{\L_{\log}}(\Gamma_{\log},\psi)$. The main result is the following:
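\medskip\noindent
To illustrate the definition (a worked example added here for concreteness), consider the element $\gamma = 2e_1 - e_2 = (0,2,-1,0,0,\ldots)\in\Gamma_{\log}$. Its first nonzero coordinate sits at index $n=1$, so the definition of $\psi$ gives

```latex
\[
\psi(\gamma)\ =\ \psi\big((0,2,-1,0,0,\ldots)\big)\ =\ (\underbrace{1,1}_{n+1\,=\,2},0,0,\ldots)\ =\ e_0+e_1.
\]
```

Note that $\psi(\gamma)$ depends only on the index of the first nonzero coordinate of $\gamma$, that is, only on the archimedean class of $\gamma$.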
\begin{theorem}
\label{tlogdistalthm}
$T_{\log}$ is distal.
\end{theorem}
\noindent
An immediate consequence of Theorem~\ref{tlogdistalthm} and Proposition~\ref{distalNIP} below is:
\begin{cor}
\label{tlogNIP}
$T_{\log}$ is NIP.
\end{cor}
\noindent
This provides a new proof of the main result from~\cite{GehretNIP}. The original proof that $T_{\log}$ is NIP in~\cite{GehretNIP} involved a counting-types argument which invoked a consistency result of Mitchell and used the fact that the statement ``$T_{\log}$ is NIP'' is absolute. The appeal of the new proof of Corollary~\ref{tlogNIP} is that it is algebraic, and avoids any set-theoretic black boxes by taking place entirely within ZFC.
\medskip\noindent
Theorem~\ref{tlogdistalthm}, together with~\cite[Corollary 6.3]{ChernikovStarchenko}, also has the amusing consequence:
\begin{cor}
No model of $T_{\log}$ interprets an infinite field of positive characteristic.
\end{cor}
\noindent
We believe that no model of $T_{\log}$ interprets a field of characteristic zero either, although we leave that story for another time and place.
\medskip\noindent
In Section~\ref{distalNIPsection} we recall some definitions and basic facts around distality and NIP.
We also state and prove a general criterion for showing that a theory of a certain form is distal. This criterion is based on one developed by Hieronymi and Nell~\cite{Hieronymi_Nell}. There they use it to show that certain ``pairs'' such as $(\R;0,+,\cdot,<,2^{\Z})$ are distal. We are able to adapt it for use in our setting due to certain superficial syntactic similarities between our structure and their pairs.
\medskip\noindent
In Section~\ref{ACsection} we discuss the basics of $H$-asymptotic couples with asymptotic integration. We also define the language $\L_{\log}$ and discuss the theory $T_{\log}$. We restate some useful facts about models of $T_{\log}$ which were established in~\cite{gehretQE,GehretNIP}.
In Section~\ref{AClemmas}, we go on to prove some additional lemmas concerning the behavior of indiscernible sequences in models of $T_{\log}$.
\medskip\noindent
In Section~\ref{spreadout} we introduce a concept of an indiscernible sequence $(a_i)_{i\in I}$ in a model of $T_{\log}$ being \emph{spread out} by a parameter $b$. Roughly speaking, this means that the sequence $(a_i-b)_{i\in I}$ is sufficiently widely distributed in the convex hull of the $\Psi$-set. We then proceed to show that in such a situation, there is a certain desirable monotone interaction between the translated sequence and the $\Psi$-set (Lemma~\ref{seqmonotone}). This is one of the key steps in the proof of distality of $T_{\log}$.
\medskip\noindent
In Section~\ref{extensions} we prove several finiteness results concerning finite rank extensions of the underlying groups of models of $T_{\log}$, and their relationship to the functions $\psi, s$, and $p$ from $\L_{\log}$.
\medskip\noindent
In Section~\ref{distalproof} we bring everything together to prove Theorem~\ref{tlogdistalthm}.
\medskip\noindent
Finally, in Section~\ref{dpranksection} we prove that $T_{\log}$ is not strongly dependent (Theorem~\ref{TACnotstrong}).
This section does not rely on any of the previous sections.
We include this result to contrast it with Theorem~\ref{tlogdistalthm}. This also illustrates that distal structures can still be quite complicated: among NIP structures, being \emph{not strongly dependent} is more complicated than the tamer notions of \emph{strongly dependent}, having \emph{finite $\DP$-rank}, and being \emph{$\DP$-minimal}.
\subsection*{Ordered set conventions} By ``ordered set'' we mean ``totally ordered set''.
\medskip\noindent
Let $S$ be an ordered set. Below, the ordering on $S$ will be denoted by $\leq$, and a subset of $S$ is viewed as ordered by the induced ordering. We put $S_{\infty}:= S\cup\{\infty\}$, $\infty\not\in S$, with the ordering on $S$ extended to a (total) ordering on $S_{\infty}$ by $S<\infty$.
Suppose that $B$ is a subset of $S$. We put $S^{>B}:= \{s\in S: s>b \text{ for every } b\in B\}$ and we denote $S^{>\{a\}}$ as just $S^{>a}$; similarly for $\geq, <$, and $\leq$ instead of $>$.
For $A\subseteq S$ we let
\[
\operatorname{conv}(A)\ :=\ \{x\in S: a\leq x\leq b \text{ for some } a,b\in A\}
\]
be the \textbf{convex hull of $A$} in $S$, that is, the smallest convex subset of $S$ containing $A$. For $A\subseteq S$ we put
\[
A^{\downarrow} \ := \ \{s\in S: s\leq a \text{ for some } a\in A\},
\]
which is the smallest downward closed subset of $S$ containing $A$.
\medskip\noindent
We say that $S$ is a \textbf{successor set} if every element $x\in S$ has an \textbf{immediate successor} $y\in S$, that is, $x<y$ and for all $z\in S$, if $x<z$, then $y\leq z$. For example, $\N$ and $\Z$ with their usual orderings are successor sets. We say that $S$ is a \textbf{copy of $\Z$} if $(S,<)$ is isomorphic to $(\Z,<)$.
\subsection*{Ordered abelian group conventions} Suppose that $G$ is an ordered abelian group. Then we set $G^{\neq}:= G\setminus\{0\}$. Also $G^{<}:= G^{<0}$; similarly for $\geq, \leq$, and $>$ instead of $<$. We define $|g|:= \max(g,-g)$ for $g\in G$. For $a\in G$, the \textbf{archimedean class} of $a$ is defined by
\[
[a] \ := \ \{g\in G : |a|\leq n|g| \text{ and } |g|\leq n|a| \text{ for some } n\geq 1\}.
\]
The archimedean classes partition $G$. Each archimedean class $[a]$ with $a\neq 0$ is the disjoint union of the two convex sets $[a]\cap G^{<}$ and $[a]\cap G^{>}$. We order the set $[G]:= \{[a]:a\in G\}$ of archimedean classes by
\[
[a]<[b] \ :\Longleftrightarrow \ n|a|<|b| \text{ for all } n\geq 1.
\]
We have $[0]<[a]$ for all $a\in G^{\neq}$, and
\[
[a]\leq [b] \ \Longleftrightarrow \ |a|\leq n|b| \text{ for some } n\geq 1.
\]
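\medskip\noindent
For example (an illustration using the group $\Gamma_{\log}$ from the introduction): in $\Gamma_{\log}$ we have $[e_1]<[e_0]$, since under the lexicographic order

```latex
\[
n|e_1|\ =\ (0,n,0,\ldots)\ <\ (1,0,0,\ldots)\ =\ |e_0| \qquad\text{for all } n\geq 1,
\]
```

while $[e_0+e_1]=[e_0]$, since $|e_0|\leq |e_0+e_1|\leq 2|e_0|$. More generally, the archimedean class of a nonzero element of $\Gamma_{\log}$ is determined by the index of its first nonzero coordinate, so the nonzero classes form the strictly decreasing chain $[e_0]>[e_1]>[e_2]>\cdots$.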
\subsection*{Model theory conventions} In general we adopt the model theoretic conventions of Appendix B of~\cite{ADAMTT}. In particular, $\L$ can be a many-sorted language. For a complete $\L$-theory $T$, we will sometimes consider a model $\M\models T$ and a cardinal $\kappa(\M)>|\L|$ such that $\M$ is $\kappa(\M)$-saturated and every reduct of $\M$ is strongly $\kappa(\M)$-homogeneous. Such a model is called a \textbf{monster model} of $T$. In particular, every model of $T$ of size $\leq\kappa(\M)$ has an elementary embedding into $\M$. All variables are finite multivariables. By convention we will write ``indiscernible sequence'' when we mean ``$\emptyset$-indiscernible sequence''.
\subsection*{Sequence conventions}
Suppose that $(a_i)_{i\in I}$ is a sequence of distinct elements from some set indexed by a linear order $I$. Given a subset or subsequence $A\subseteq (a_i)$, we let $I^{>A}$ denote the index set
\[
I^{>A}\ :=\ \bigcap_{a_{i_0}\in A}\{i\in I:i>i_0\}\ \subseteq \ I.
\]
Similarly for $I^{<A}$. Furthermore, given $I_0\subseteq I$ we denote by $A\cap I_0$ the set
\[
A\cap I_0 \ := \ \{a_i\in A: i\in I_0\} \ \subseteq \ A.
\]
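\medskip\noindent
For instance (a small illustration of this notation): if $I=\Z$ and $A = \{a_{-1},a_3\}$, then

```latex
\[
\Z^{>A}\ =\ \{i\in\Z: i>3\}, \qquad A\cap\N\ =\ \{a_3\}.
\]
```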
\section{Distality and NIP}
\label{distalNIPsection}
\noindent
This section contains all of the general model-theoretic content we need for this paper. This includes a definition of distality, a criterion for proving that theories of a certain form are distal, and a proof that distal theories are NIP.
\emph{Throughout this section $\L$ is a language and $T$ is a complete $\L$-theory.}
\subsection*{Definition of distality}
\emph{In this subsection we fix a monster model $\M$ of $T$. We also let $I_1,I_2$ range over infinite linearly ordered index sets.} The definitions do not depend on the choice of this monster model. We define \emph{distality} in Definition~\ref{distaldef} below in terms of ``upgradability'' of a certain indiscernible sequence configuration. In practice, this seems to be one of the more convenient definitions to work with, and it is the only one we use in this paper. For other equivalent definitions of distality see~\cite{SimonDistal} or~\cite[Chapter 9]{SimonNIP}.
\begin{definition}
\label{distaldef}
Given $I_1$ and $I_2$, we say that $T$ is \textbf{$I_1,I_2$-distal} if for every $A\subseteq \M$, for every $x$, and for every indiscernible sequence $(a_i)_{i\in I}$ from $\M_x$, if
\begin{enumerate}
\item $I = I_1+(c)+I_2$, and
\item $(a_i)_{i\in I_1+I_2}$ is $A$-indiscernible,
\end{enumerate}
then $(a_i)_{i\in I}$ is $A$-indiscernible.
We say $T$ is \textbf{distal} if $T$ is $I_1,I_2$-distal for every $I_1$ and $I_2$.
Finally, we say that an $\L$-structure $\bm{M}$ is \textbf{distal} if $\Th(\bm{M})$ is distal.
\end{definition}
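\noindent
As a non-example (a standard sketch, included here for illustration): no stable theory with an infinite model is distal. For instance, in a monster model of $\Th(\mathbb{Q};+)$, let $(a_i)_{i\in I}$, $I = I_1+(c)+I_2$, be a sequence of $\mathbb{Q}$-linearly independent elements; such a sequence is indiscernible, since types in divisible torsion-free abelian groups are determined by $\mathbb{Q}$-linear relations. Take $A = \{a_c\}$. Then $(a_i)_{i\in I_1+I_2}$ is $A$-indiscernible, as no tuple from it satisfies a nontrivial linear relation with $a_c$, but the full sequence is not:

```latex
\[
\models\ a_c = a_c, \qquad\text{while}\qquad \models\ a_i\neq a_c \quad\text{for all } i\in I_1+I_2,
\]
```

so the formula $x = a_c$ distinguishes $a_c$ from every other member of the sequence.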
\noindent
It is also convenient to define what it means for a formula $\varphi(x;y)$ to be distal:
\begin{definition}
\label{distaldefformula}
Given $I_1$ and $I_2$, we say a formula $\varphi(x;y)$ is \textbf{$I_1,I_2$-distal}
if for every $b\in\M_y$ and every indiscernible sequence $(a_i)_{i\in I}$ from $\M_{x}$ such that
\begin{enumerate}
\item $I = I_1+(c)+I_2$, and
\item $(a_i)_{i\in I_1+I_2}$ is $b$-indiscernible,
\end{enumerate}
then
$
\models \varphi(a_c;b)\leftrightarrow \varphi(a_i;b)
$
for every $i\in I$.
We say that the formula $\varphi(x;y)$ is \textbf{distal} if it is $I_1,I_2$-distal for every $I_1$ and $I_2$.
\end{definition}
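\noindent
As a quick positive example (a sketch): in a monster model of the theory of dense linear orders without endpoints, the formula $\varphi(x;y):\ x<y$ is distal. Indeed, a nonconstant indiscernible sequence $(a_i)_{i\in I}$, $I=I_1+(c)+I_2$, is strictly monotone, say increasing (the decreasing case is symmetric). If $(a_i)_{i\in I_1+I_2}$ is $b$-indiscernible, then all its members lie on the same side of $b$, and hence so does $a_c$:

```latex
\[
a_i<b \ \ (i\in I_1+I_2)\ \Longrightarrow\ a_c<a_{i_2}<b \ \ (i_2\in I_2),
\qquad
b<a_i \ \ (i\in I_1+I_2)\ \Longrightarrow\ b<a_{i_1}<a_c \ \ (i_1\in I_1).
\]
```

Thus $\models\varphi(a_c;b)\leftrightarrow\varphi(a_i;b)$ for all $i\in I$.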
\noindent
It is well known that when checking distality, either for an individual formula $\varphi(x;y)$ or an entire theory, one is free to use any specific $I_1$ and $I_2$ one wishes.
To make this sentiment precise we have introduced the provisional terminology ``$I_1,I_2$-distal'' which is not standard; see Lemmas~\ref{phixydistalequivalences} and~\ref{equivdistaldefs}.
We exploit this freedom in the proof of Distal Criterion~\ref{distal_multi} below.
\begin{lemma}
\label{phixydistalequivalences}
The following are equivalent for a formula $\varphi(x;y)$:
\begin{enumerate}
\item $\varphi(x;y)$ is distal;
\item $\varphi(x;y)$ is $I_1,I_2$-distal for some $I_1$ and $I_2$;
\end{enumerate}
\end{lemma}
\begin{proof}
\emph{The Standard Lemma}~\cite[Lemma 5.1.3]{TentZiegler} allows one to convert a counterexample of $I_1,I_2$-distality into a counterexample of $J_1,J_2$-distality, where $J_1,J_2$ are two other infinite linear orders. The details are left to the reader.
\end{proof}
\begin{lemma}
\label{equivdistaldefs}
The following are equivalent:
\begin{enumerate}
\item $T$ is distal;
\item there are $I_1$ and $I_2$ such that $T$ is $I_1,I_2$-distal;
\item every $\varphi(x;y)\in\L$ is distal;
\item there are $I_1$ and $I_2$ such that every $\varphi(x;y)\in\L$ is $I_1,I_2$-distal.
\end{enumerate}
\end{lemma}
\begin{proof}
(1)$\Rightarrow$(2)$\Rightarrow$(4)$\Leftrightarrow$(3) follow by definition and Lemma~\ref{phixydistalequivalences}. For (3)$\Rightarrow$(1), assume $T$ is not distal. Then there are $I_1$ and $I_2$ such that $T$ is not $I_1,I_2$-distal. This failure of $I_1,I_2$-distality is witnessed by some $x,y$, some sequence $(a_i)_{i\in I_1+(c)+I_2}$ from $\M_x$, some formula $\varphi(x_1,\ldots,x_n;y)$ where each $x_i$ is similar to $x$, and some parameter $b\in\M_y$. By adjusting this counterexample, through a combination of joining outer elements of the sequence with the parameter $b$, and/or grouping elements of the sequence together to create a new `thickened' sequence, one arrives at a formula $\varphi(x';y')$ (same formula, possibly different presentation of free variables) which is not distal. The argument is routine and left to the reader, although a word of caution is in order: in general $I_1$ and $I_2$ are not dense linear orders, and they may or may not have endpoints, etc. So the reduction as described above really depends on $I_1$ and $I_2$ and where the elements from $(a_i)$ which witness the failure of distality are located on these sequences.
\end{proof}
\noindent
Lemma~\ref{equivdistaldefs} permits us to work with \emph{any} $I_1,I_2$ we wish. It will be convenient for us to work with $I_1,I_2$ of a special form:
\begin{definition}
We say that a linear order $I = I_1+(c)+I_2$ is in \textbf{distal configuration at $c$} if $I_1$ and $I_2$ are infinite, $I_1$ does not have a greatest element, and $I_2$ does not have a least element.
\end{definition}
\noindent
Working with sequences in distal configuration is primarily used in the proof of Distal Criterion~\ref{distal_multi} and Proposition~\ref{pnonconstantinfty} below. It is not clear how to remove the assumption of distal configuration from these arguments, at least without making things more complicated.
\subsection*{A criterion for distality}
\label{distal_crit_sec}
\noindent
To set the stage for Distal Criterion~\ref{distal_multi} below, we now consider an extension $\L(\mathfrak{F}):=\ \L\cup\mathfrak{F}$ of the language $\L$ by a set $\mathfrak{F}$ of new unary function symbols involving sorts which are already present in $\L$. We also consider $T(\mathfrak{F})$, a complete $\L(\mathfrak{F})$-theory extending $T$. Given a model $\bm{M}\models T$ we denote by $(\bm{M},\mathfrak{F})$ an expansion of $\bm{M}$ to a model of $T(\mathfrak{F})$.
For a subset $X$ of a model $\bm{M}$, we let $\langle X \rangle$ denote the $\L$-substructure of $\bm{M}$ generated by $X$. If $\bm{M}$ is a submodel of $\bm{N}$, we let $\bm{M}\langle X \rangle$ denote $\langle M \cup X \rangle\subseteq\mathbf{N}$. For this subsection we also fix a monster model $\M$ of $T(\mathfrak{F})$. Note that $\M\!\upharpoonright\!\L$ is then a monster model of $T$.
\medskip\noindent
Distal Criterion~\ref{distal_multi} is a many-sorted, many-function generalization of~\cite[Theorem 2.1]{Hieronymi_Nell}. We give a proof below. \emph{In the statement of~\ref{distal_multi} and its proof, $x,x',x_i,y,z,w,w_i$, etc. are variables.}
\begin{distalcriterion}[Hieronymi-Nell]
\label{distal_multi}
Suppose $T$ is a distal theory and the following conditions hold:
\begin{enumerate}
\item The theory $T(\mathfrak{F})$ has quantifier elimination.
\item For every $\mathfrak{f}\in\mathfrak{F}$, every model $(\bm{N},\mathfrak{F}) \models T(\mathfrak{F})$, every substructure $\bm{M} \subseteq \bm{N}$ such that $\frak{g}(M) \subseteq M$ for all $\frak{g}\in\frak{F}$, every $x$, and every $c \in N_x$, there is a $y$ and $d \in \frak{f}\big(\bm{M}\langle c\rangle\big)_y$ such that
\[
\frak{f}\big(\bm{M}\langle c\rangle\big) \subseteq \big\langle \frak{f}(M),d \big\rangle.
\]
\item For every $\frak{f}\in\frak{F}$, the following holds: suppose that $x'$ is an initial segment of $x$, $g,h$ are $\L$-terms of arities $xy$ and $x'z$ respectively, $b_1\in \M_y$, and $b_2\in\frak{f}(\M)_z$. If $(a_i)_{i\in I}$ is an indiscernible sequence from $\frak{f}(\M)_{x'}\times \M_{x\setminus x'}$ such that
\begin{enumerate}
\item $I = I_1+(c)+I_2$ is in distal configuration at $c$, and $(a_i)_{i\in I_1+I_2}$ is $b_1b_2$-indiscernible, and
\item $\frak{f}\big(g(a_i,b_1)\big) = h(a_i,b_2)$ for every $i\in I_1+I_2$,
\end{enumerate}
then $\frak{f}\big(g(a_c,b_1)\big) = h(a_c,b_2)$.
\end{enumerate}
Then $T(\mathfrak{F})$ is distal.
\end{distalcriterion}
We refer the reader to Figure~\ref{bookkeepingfigure} which illustrates the bookkeeping being done in the proof of Distal Criterion~\ref{distal_multi}.
\begin{proof}
Fix an infinite linear order $I=I_1+(c)+I_2$ which is in distal configuration at $c$.
By (1) and Lemma~\ref{equivdistaldefs}, it is enough to show that every quantifier-free $\L(\mathfrak{F})$-formula is $I_1,I_2$-distal.
For a quantifier-free $\L(\mathfrak{F})$-formula $\psi$, let $e(\psi)$ denote the number of occurrences in $\psi$ of symbols from $\mathfrak{F}$; we proceed by induction on $e(\psi)$.
If $e(\psi) = 0$, distality of $\psi$ follows from the assumption that $T$ is distal.
Let $e>0$ and suppose that for all quantifier-free $\L(\mathfrak{F})$-formulas $\psi'$ with $e(\psi')<e$, $\psi'$ is distal.
Let $\psi(x;y)$ be a quantifier-free $\L(\mathfrak{F})$-formula with $e(\psi) = e$.
We will show that $\psi(x;y)$ is $I_1,I_2$-distal.
Take an indiscernible sequence $(a_i)_{i \in I}$ from $\M_{x}$ and $b \in \M_y$ such that $I = I_1+(c)+I_2$ and $(a_i)_{i \in I_1+I_2}$ is $b$-indiscernible.
\begin{figure}[h!]
\caption{Bookkeeping in the proof of Distal Criterion~\ref{distal_multi}}
\label{bookkeepingfigure}
\begin{center}
\begin{tikzpicture}[x=1.5cm,y=1cm]
\draw (-5,0)--(-.15,0); \draw (5,0)--(.15,0);
\node at (0,0) {$\bullet$}; \node at (0,-.4){$c$};
\draw (-.5,-.2)--(-.5,0); \node at (-.5,-.4){$i_0$};
\draw (-1.5,0)--(-1.5,.2); \draw (-1.33,0)--(-1.33,.2); \draw (-.9,0)--(-.9,.2); \draw (-.8,0)--(-.8,.2);
\draw [decoration={brace, raise=0.4cm},decorate] (-1.52,0) -- (-.78,0)
node [pos=0.5,anchor=north,yshift=1.05cm] {$a^*_-$};
\draw (.9,0)--(.9,.2); \draw (1.33,0)--(1.33,.2); \draw (1.5,0)--(1.5,.2);
\draw [decoration={brace, raise=0.4cm},decorate] (.88,0) -- (1.52,0)
node [pos=0.5,anchor=north,yshift=1.05cm] {$a^*_+$};
\draw (-3.4,0)--(-3.4,-.2); \node at (-3.4,-.4){$u_q$};
\draw (-4.8,0)--(-4.8,-.2); \node at (-4.8,-.4){$u_1$};
\node at (-4.1,-.4){$\cdots$};
\draw (2.4,0)--(2.4,-.2); \node at (2.4,-.4){$v_1$};
\draw (4.4,0)--(4.4,-.2); \node at (4.4,-.4){$v_r$};
\node at (3.4,-.4){$\cdots$};
\draw [decoration={brace, mirror, raise=0.5cm},decorate] (-.7,-.13) -- (.8,-.13)
node [pos=0.5,anchor=north,yshift=-0.55cm] {$I'$};
\draw [decoration={brace, mirror, raise=0.5cm},decorate] (-3.3,-.9) -- (-.15,-.9)
node [pos=0.5,anchor=north,yshift=-0.55cm] {$I_1^{>a_u}$};
\draw [decoration={brace, mirror, raise=0.5cm},decorate] (.15,-.9) -- (2.3,-.9)
node [pos=0.5,anchor=north,yshift=-0.55cm] {$I_2^{<a_v}$};
\draw [decoration={brace, mirror, raise=0.5cm},decorate] (-5,-1.7) -- (-.15,-1.7)
node [pos=0.5,anchor=north,yshift=-0.55cm] {$I_1$};
\draw [decoration={brace, mirror, raise=0.5cm},decorate] (.15,-1.7) -- (5,-1.7)
node [pos=0.5,anchor=north,yshift=-0.55cm] {$I_2$};
\end{tikzpicture}
\end{center}
\end{figure}
Since $e>0$, there is an $\L$-term $g$ and some $\frak{f} \in \mathfrak{F}$ such that the term $\frak{f}\big(g(x;y)\big)$ appears in $\psi$.
In other words, there is a quantifier-free $\L(\mathfrak{F})$-formula $\psi'(x;y;z)$ such that $e(\psi')<e$ and
\[
\psi(x;y)\ =\ \psi'\big(x;y;\frak{f}(g(x;y))\big).
\]
Let $\bm{M}$ be the $\L(\mathfrak{F})$-substructure of $\M$ generated by $\{a_i:i \in I_1+I_2\}$ with reduct $\bm{M}_{\L}:=\bm{M}\!\upharpoonright\!\L$.
By (2) applied to $\bm{M}\subseteq \M\!\upharpoonright\!\L$, there is $d \in \frak{f}(\bm{M}_{\L}\langle b \rangle)_w$ for some $w$ such that
\[
\tag{A}
\frak{f}\big(\bm{M}_{\L}\langle b \rangle\big)\ \subseteq\ \big\langle \frak{f}(M),d\big\rangle.
\]
By (A), we have:
\[
\tag{B} \text{for every $i\in I_1+I_2$,}\quad\quad \mathfrak{f}\big(g(a_i;b)\big)\in \big\langle \mathfrak{f}(M),d\big\rangle
\]
Next, take $q,r\in\mathbb{N}$ and $u_{1}<\cdots<u_{q}\in I_1$ and $v_{1}<\cdots<v_{r}\in I_2$ such that
$d$ is in the $\L(\frak{F})$-structure generated by $a_ua_vb$
where $a_u := (a_{u_1},\ldots,a_{u_q})$ and $a_{v}:=(a_{v_1},\ldots,a_{v_r})$.
Next, take $i_0\in I_1^{>a_u}$. Then applying (B) to this $i_0$, there is an $\L$-term $h$ and $\L(\mathfrak{F})$-terms $t_1,\ldots,t_l$ (all of the form $\mathfrak{f}(s_i)$ for $\L(\frak{F})$-terms $s_i$) such that
\[
\mathfrak{f}\big(g(a_{i_0};b)\big)\ =\ h\big(t_1(a_u,a_{i_0},a_v,a^*),\ldots,t_l(a_u,a_{i_0},a_v,a^*),d\big)
\]
where $a^*$ is a tuple of new parameters from $(a_i)_{i\in I_1+I_2}$ not yet mentioned (i.e., disjoint from $a_ua_va_{i_0}$). By $a_ua_vbd$-indiscernibility of $(a_i)_{i\in (I_1^{>a_u}+I_2^{<a_v})}$, we can arrange that $i_0$ is the largest index in $I_1$ among all indices specified so far (by sliding the elements $a^*\cap I_1^{>a_{i_0}}$ up to $I_2$). Now, let $a^*_-:= a^*\cap I_1$ and $a^*_+:= a^*\cap I_2$.
Define $I':=I_1^{>a_ua^*_-}+(c)+I_2^{<a_va^*_+}$, and note that $i_0\in I'$ and that $I'$ is also in distal configuration at $c$.
Since $(a_i)_{i\in I'\setminus(c)}$ is $a_ua_va^*bd$-indiscernible, it follows that
\[
\tag{C} \text{for every $i\in I'\setminus (c)$,}\quad\mathfrak{f}\big(g(a_i;b)\big)\ =\ h\big(t_1(a_u,a_i,a_v,a^*),\ldots,t_l(a_u,a_i,a_v,a^*),d\big).
\]
For each $i\in I'$, set
\[
a_i' \ :=\ \big( t_1(a_u,a_i,a_v,a^*),\ldots, t_l(a_u,a_i,a_v,a^*),a_i\big).
\]
Note that $(a_i')_{i\in I'}$ is an indiscernible sequence from $\frak{f}(\M)^l\times \M_{x}$ and that $(a_i')_{i\in I'\setminus (c)}$ is $a_ua_va^*bd$-indiscernible. Then by (C) and (3), it follows that
\[
\tag{D} \mathfrak{f}\big(g(a_c;b)\big)\ =\ h\big(t_1(a_u,a_c,a_v,a^*),\ldots,t_l(a_u,a_c,a_v,a^*),d\big).
\]
Finally, we note that
\begin{align*}
&\models \psi(a_c;b) \\
\Longleftrightarrow \quad& \models \psi'\big(a_c; b ; \mathfrak{f}(g(a_c;b))\big) \quad \text{by definition of $\psi'$}\\
\Longleftrightarrow \quad& \models \psi'\big(a_c; b ;h(t_1(a_u,a_c,a_v,a^*),\ldots,t_l(a_u,a_c,a_v,a^*),d)\big) \quad\text{by (D)}\\
\Longleftrightarrow \quad& \models \psi'\big(a_i; b ;h(t_1(a_u,a_i,a_v,a^*),\ldots,t_l(a_u,a_i,a_v,a^*),d)\big)\quad (\text{for $i\in I'\setminus (c)$}) \\
&\text{by inductive hypothesis: $e(\psi') = e\big(\psi'(x;y;h(w_1,\ldots,w_l,w))\big)<e$} \\
&\text{where $w_1,\ldots,w_l$ and $w$ are variables of the appropriate sort and} \\
&\text{the partition of this last formula is $(w_1\cdots w_l x;yw)$} \\
\Longleftrightarrow\quad& \models \psi'\big(a_i; b ; \mathfrak{f}(g(a_i;b))\big)\quad (\text{for $i\in I'\setminus (c)$}) \quad\text{by (C)}\\
\Longleftrightarrow\quad&\models \psi(a_i;b)\quad (\text{for $i\in I'\setminus (c)$}).
\end{align*}
This finishes the proof.\qedhere
\end{proof}
\subsection*{Connection to NIP}
Distality was first introduced as a property which a NIP theory may or may not have~\cite{SimonDistal}.
In this paper, we have defined what it means for an arbitrary theory to be distal. This does not actually give us any additional generality since Proposition~\ref{distalNIP} below shows that every distal theory is necessarily NIP. However, this does allow us to use distality as a means for establishing that a theory is NIP, although there are more direct ways to do this (see Remark~\ref{provingNIPremark}).
\emph{Let $\M$ be a monster model of $T$.}
\begin{definition}
We say that a partitioned $\L$-formula $\varphi(x;y)$ has the \textbf{non-independence property} (or \textbf{is NIP}) if for every $b\in\M_y$ and for every indiscernible sequence $(a_i)_{i\in I}$ from $\M_x$, there is $\varepsilon\in\{0,1\}$ and an index $i_0\in I$ such that
\[
\models \varphi(a_i;b)^{\varepsilon}\quad \text{for every $i\in I^{>i_0}$.}
\]
We say that $T$ \textbf{is NIP} if every partitioned $\L$-formula is NIP.
\end{definition}
\medskip\noindent
It is known that distality implies NIP (e.g., see~\cite[Remark 2.6]{ChernikovStarchenko}).
The proof of this fact that we include below was
communicated to us by the authors of~\cite{Hieronymi_Nell} and uses only the definitions of distality and NIP given in this paper:
\begin{prop}
\label{distalNIP}
If $T$ is distal, then $T$ is NIP.
\end{prop}
\begin{proof}
Suppose $\varphi(x;y)$ is not NIP. Then there is an indiscernible sequence $(c_n)_{n<\omega}$ in $\M_x$ and $b\in\M_y$ such that $\models \varphi(c_n;b)$ iff $n$ is even. Now define $d_n:= (c_{2n},c_{2n+1})\in\M_x\times\M_x$ and note that the sequence $(d_n)_{n<\omega}$ satisfies
\begin{enumerate}
\item the sequence $(d_{n,m})_{(n,m)\in\omega\times 2}$ with the lexicographical ordering on $\omega\times 2$ is indiscernible, and
\item for every $n$, $\models \varphi(d_{n,0};b)\wedge \neg \varphi(d_{n,1};b)$.
\end{enumerate}
By \emph{The Standard Lemma}~\cite[Lemma 5.1.3]{TentZiegler}, there is a $b$-indiscernible sequence $(e_i)_{i\in \Q}$ in $\M_x\times\M_x$ such that $\operatorname{EM}\!\big((e_i)_{i\in \Q}/b\big) = \operatorname{EM}\!\big((d_n)_{n<\omega}/b\big)$.
In particular:
\begin{enumerate}
\item the sequence $(e_{i,m})_{(i,m)\in \Q\times 2}$ with the lexicographical ordering on $\Q\times 2$ is indiscernible, and
\item for every $i$, $\models \varphi(e_{i,0};b)\wedge \neg \varphi(e_{i,1};b)$.
\end{enumerate}
Finally, for each $i\in\Q$ define the following element of $\M_x$:
\[
a_i\ :=\ \begin{cases}
e_{i,0} & \text{if $i\neq 0$} \\
e_{i,1} & \text{if $i=0$.}
\end{cases}
\]
We claim the sequence $(a_i)_{i\in\Q}$ witnesses that $\varphi(x;y)$ is not distal. Indeed:
\begin{enumerate}
\item $\Q=(-\infty,0)+(0)+(0,+\infty)$ is in distal configuration at $0$,
\item $(a_i)_{i\in (-\infty,0)+(0,+\infty)}$ is $b$-indiscernible, but
\item $\models \varphi(a_{1};b)\wedge\neg\varphi(a_0;b)$. \qedhere
\end{enumerate}
\end{proof}
\begin{remark}
\label{provingNIPremark}
Our route for proving Corollary~\ref{tlogNIP} goes through Proposition~\ref{distalNIP} and Theorem~\ref{tlogdistalthm}, which uses Distal Criterion~\ref{distal_multi}, a generalization of~\cite[Theorem 2.1]{Hieronymi_Nell}.
If our only goal in this paper was to establish Corollary~\ref{tlogNIP}, we could have taken a slightly more direct path by generalizing in a similar way
~\cite[Theorem 4.1]{dependentpairs} to obtain a ``NIP Criterion''. In which case, we could then use ``NIP versions'' of the results in Sections~\ref{AClemmas} and~\ref{spreadout} to establish Corollary~\ref{tlogNIP}.
\end{remark}
\section{Asymptotic couples and $T_{\log}$}
\label{ACsection}
\noindent
In this section we give a summary of the basics of $H$-asymptotic couples with asymptotic integration, and we describe the language $\L_{\log}$ and the theory $T_{\log} = \Th_{\L_{\log}}(\Gamma_{\log},\psi)$.
\subsection*{Overview of asymptotic couples}
An \textbf{asymptotic couple} is a pair $(\Gamma,\psi)$ where $\Gamma$ is an ordered abelian group and $\psi:\Gamma^{\neq}\to\Gamma$ satisfies for all $\alpha,\beta\in\Gamma^{\neq}$,
\begin{itemize}
\item[(AC1)] $\alpha+\beta\neq 0\Longrightarrow \psi(\alpha+\beta)\geq \min \big(\psi(\alpha),\psi(\beta)\big)$;
\item[(AC2)] $\psi(k\alpha) = \psi(\alpha)$ for all $k\in\Z^{\neq}$, in particular, $\psi(-\alpha) = \psi(\alpha)$;
\item[(AC3)] $\alpha>0 \Longrightarrow \alpha+\psi(\alpha)>\psi(\beta)$.
\end{itemize}
If in addition for all $\alpha,\beta\in\Gamma$,
\begin{itemize}
\item[(HC)] $0<\alpha\leq\beta\Longrightarrow \psi(\alpha)\geq\psi(\beta)$,
\end{itemize}
then $(\Gamma,\psi)$ is said to be of \textbf{$H$-type}, or to be an \textbf{$H$-asymptotic couple}.
\medskip\noindent
\emph{For the rest of this subsection, $(\Gamma,\psi)$ is an $H$-asymptotic couple.} By convention, we extend $\psi$ to all of $\Gamma$ by setting $\psi(0):=\infty$. Then $\psi(\alpha+\beta)\geq \min\big(\psi(\alpha),\psi(\beta)\big)$ holds for all $\alpha,\beta\in\Gamma$, and $\psi:\Gamma\to\Gamma_{\infty}$ is a (non-surjective) convex valuation on the ordered abelian group $\Gamma$.
The following basic fact about valuations is used often:
\begin{fact}
If $\alpha,\beta\in\Gamma$ and $\psi(\alpha)<\psi(\beta)$, then $\psi(\alpha+\beta) = \psi(\alpha)$.
\end{fact}
\noindent
For $\alpha\in\Gamma^{\neq}$ we define $\alpha' := \alpha+\psi(\alpha)$. The following subsets of $\Gamma$ play special roles:
\[
(\Gamma^{\neq})':= \{\gamma':\gamma\in\Gamma^{\neq}\},\quad(\Gamma^{>})' := \{\gamma':\gamma\in \Gamma^{>}\},\quad (\Gamma^{<})' := \{\gamma': \gamma\in\Gamma^{<}\},\quad \Psi:=\{\psi(\gamma):\gamma\in\Gamma^{\neq}\}.
\]
We think of the map $\id+\psi:\Gamma^{\neq}\to\Gamma$ as \emph{the derivative}; this is because asymptotic couples arise in nature as the value groups of certain valued differential fields, in which case $\id+\psi$ is induced by an actual derivation. When \emph{antiderivatives} exist, they are unique:
\medskip
\begin{fact}
~\cite[Lemma 6.5.4(iii)]{ADAMTT}
The map $\gamma\mapsto \gamma'=\gamma+\psi(\gamma):\Gamma^{\neq}\to\Gamma$ is strictly increasing.
\end{fact}
\noindent
This allows us to talk about \emph{asymptotic integration}:
\begin{definition}
If $\Gamma = (\Gamma^{\neq})'$, then we say that $(\Gamma,\psi)$ has \textbf{asymptotic integration}. Suppose $(\Gamma,\psi)$ has asymptotic integration. Given $\alpha\in\Gamma$ we let $\int\alpha$ denote the unique $\beta\in\Gamma^{\neq}$ such that $\beta'=\alpha$ and we call $\beta = \int\alpha$ the \textbf{integral} of $\alpha$. This gives us a function $\int:\Gamma\to\Gamma^{\neq}$ which is the inverse of $\gamma\mapsto \gamma':\Gamma^{\neq}\to\Gamma$.
\end{definition}
\noindent
\emph{We now further assume that $(\Gamma,\psi)$ has asymptotic integration.} A function closely related to $\int$ is the \textbf{successor function} $s:\Gamma\to\Psi$ defined by $\alpha\mapsto s(\alpha):=\psi(\int\alpha)$. The successor function gets its name from its behavior on the $\Psi$-set of the asymptotic couple $(\Gamma_{\log},\psi)$. More precisely:
\begin{example}
The asymptotic couple $(\Gamma_{\log},\psi)$ is of $H$-type and has asymptotic integration. The functions $\int$ and $s$ behave as follows:
\begin{enumerate}
\item (Integral) For $\alpha = (r_0,r_1,r_2,\ldots)\in\Gamma_{\log}$, take the unique $n$ such that $r_n\neq 1$ and $r_m = 1$ for $m<n$. Then
\[
\alpha = (\underbrace{1,\ldots,1}_n,\underbrace{r_n}_{\neq 1},r_{n+1},r_{n+2},\ldots)\ \mapsto\ \textstyle\int\alpha = (\underbrace{0,\ldots,0}_n,r_n-1,r_{n+1},r_{n+2},\ldots)
\]
\item (Successor) For $\alpha = (r_0,r_1,r_2,\ldots)\in\Gamma_{\log}$, take the unique $n$ such that $r_n\neq 1$ and $r_m=1$ for $m<n$. Then
\[
\alpha = (\underbrace{1,\ldots,1}_n,\underbrace{r_n}_{\neq 1},r_{n+1},r_{n+2},\ldots)\ \mapsto\ s(\alpha) = (\underbrace{1,\ldots,1}_{n+1},0,0,\ldots)
\]
\end{enumerate}
\end{example}
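\medskip\noindent
To illustrate the two formulas with a concrete instance (all values below are computed directly from the maps just given): take $\alpha = (1,1,3,2,0,0,\ldots)\in\Gamma_{\log}$, so $n=2$. Then
\[
\textstyle\int\alpha\ =\ (0,0,2,2,0,\ldots)\quad\text{and}\quad s(\alpha)\ =\ (1,1,1,0,\ldots).
\]
As a sanity check, since $s(\alpha) = \psi(\int\alpha)$ by definition of $s$, we have
\[
\textstyle\big(\int\alpha\big)'\ =\ \int\alpha+\psi\big(\int\alpha\big)\ =\ (0,0,2,2,0,\ldots)+(1,1,1,0,\ldots)\ =\ \alpha,
\]
as it should be.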
\noindent
We conclude this subsection with some general facts about the $s$-function which we will need later:
\begin{fact}
\label{sfacts}
Let $\alpha,\beta\in\Gamma$. Then
\begin{enumerate}
\item\label{sinvconvex} if $\alpha\in s(\Gamma)$, then $s^{-1}(\alpha)\cap(\Gamma^{>})'$ and $s^{-1}(\alpha)\cap(\Gamma^{<})'$ are convex in $\Gamma$,
\item\label{succid} (Successor Identity) if $s\alpha<s\beta$, then $\psi(\alpha-\beta) = s\alpha$,
\item\label{poptop} if $\alpha\in (\Gamma^{<})'$ and $n\geq 1$, then $\alpha+(n+1)(s\alpha-\alpha)\in(\Gamma^{>})'$.
\end{enumerate}
\end{fact}
\begin{proof}
(\ref{sinvconvex}) is~\cite[Corollary 3.6]{gehretQE}, (\ref{succid}) is~\cite[Lemma 3.4]{gehretQE}, and (\ref{poptop}) is~\cite[Lemma 3.10]{GehretLiouville}.
\end{proof}
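\medskip\noindent
As an illustration of the Successor Identity~(\ref{succid}) in $(\Gamma_{\log},\psi)$: take $\alpha = (1,0,0,\ldots)$ and $\beta = (1,1,0,\ldots)$. By the successor formula from the example above, $s\alpha = (1,1,0,\ldots) = s^20$ and $s\beta = (1,1,1,0,\ldots) = s^30$, so $s\alpha<s\beta$ (the elements $s^n0$ enumerate the $\Psi$-set of $\Gamma_{\log}$ in increasing order). Moreover, $\alpha-\beta = (0,-1,0,\ldots) = \int\tilde{\alpha}$ for $\tilde{\alpha} = (1,0,0,\ldots)$ by the integral formula, whence
\[
\psi(\alpha-\beta)\ =\ \psi\big(\textstyle\int\tilde{\alpha}\big)\ =\ s(\tilde{\alpha})\ =\ (1,1,0,\ldots)\ =\ s\alpha,
\]
as the Successor Identity predicts.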
\subsection*{The theory $T_{\log}$}
Let $\L_{AC}$ be the ``natural'' language of asymptotic couples; $\L_{AC} = \{0,+,-,<,\psi,\infty\}$ where $0,\infty$ are constant symbols, $+$ is a binary function symbol, $-$ and $\psi$ are unary function symbols, and $<$ is a binary relation symbol. We consider an asymptotic couple $(\Gamma,\psi)$ as an $\L_{AC}$-structure with underlying set $\Gamma_{\infty}$ and the obvious interpretation of the symbols of $\L_{AC}$, with $\infty$ as a default value:
\[
-\infty \ = \ \gamma+\infty \ = \ \infty+\gamma \ = \ \infty+\infty \ = \ \psi(0) \ = \ \psi(\infty) \ = \ \infty
\]
for all $\gamma\in\Gamma$.
\medskip\noindent
Let $T_{AC}$ be the $\L_{AC}$-theory whose models are the divisible $H$-asymptotic couples with asymptotic integration such that
\begin{enumerate}
\item $\Psi$ as an ordered subset of $\Gamma$ has least element $s0$,
\item $s0>0$,
\item $\Psi$ as an ordered subset of $\Gamma$ is a successor set,
\item for each $\alpha\in\Psi$, the immediate successor of $\alpha$ in $\Psi$ is $s\alpha$, and
\item $\gamma\mapsto s\gamma:\Psi\to\Psi^{>s0}$ is a bijection.
\end{enumerate}
It is clear that $(\Gamma_{\log},\psi)$ is a model of $T_{AC}$. For a model $(\Gamma,\psi)$ of $T_{AC}$, we define the function $p:\Psi^{>s0}\to\Psi$ to be the inverse to the function $\gamma\mapsto s\gamma:\Psi\to\Psi^{>s0}$. We extend $p$ to a function $\Gamma_{\infty}\to\Gamma_{\infty}$ by setting $p(\alpha):=\infty$ for $\alpha\in\Gamma_{\infty}\setminus\Psi^{>s0}$.
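\medskip\noindent
In $(\Gamma_{\log},\psi)$, these axioms are transparent: by the successor formula, $s^n0 = (\underbrace{1,\ldots,1}_{n},0,0,\ldots)$ for $n\geq 1$, the $\Psi$-set is $\{s^n0:n\geq 1\}$ enumerated in increasing order, and $p$ simply deletes a trailing $1$:
\[
p\big(\underbrace{1,\ldots,1}_{n+1},0,0,\ldots\big)\ =\ (\underbrace{1,\ldots,1}_{n},0,0,\ldots)\qquad (n\geq 1).
\]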
\medskip\noindent
Next, let $\L_{\log} = \L_{AC}\cup\{s,p,\delta_1,\delta_2,\delta_3,\ldots\}$ where $s,$ $p$, and $\delta_n$ for $n\geq 1$ are unary function symbols. All models of $T_{AC}$ are considered as $\L_{\log}$-structures in the obvious way, again with $\infty$ as a default value, and with $\delta_n$ interpreted as division by $n$.
\medskip\noindent
We let $T_{\log}$ be the $\L_{\log}$-theory whose models are the models of $T_{AC}$.
These are some of the main results concerning $T_{\log}$ from~\cite[\S 5]{gehretQE}:
\begin{thm}
\label{Tlogknownthms}
The $\L_{\log}$-theory $T_{\log}$
\begin{enumerate}
\item has a universal axiomatization,
\item has quantifier elimination,
\item is complete,
\item is decidable, and
\item is model complete.
\end{enumerate}
\end{thm}
\noindent
We shall also need the following facts about models of $T_{\log}$. \emph{For Facts~\ref{lemma6.8gehretQE} and~\ref{cor6.5gehretQE} we let $(\Gamma,\psi)\models T_{\log}$.}
\begin{fact}
\label{lemma6.8gehretQE}
\cite[Lemma 6.8]{gehretQE}
$\Psi$ is a linearly independent subset of $\Gamma$ as a vector space over $\Q$.
\end{fact}
\begin{fact}
\label{cor6.5gehretQE}
\cite[Corollary 6.5]{gehretQE}
Let $n\geq 1$, $\alpha_1<\cdots<\alpha_n\in\Psi$, and let $\alpha = \sum_{j=1}^n q_j\alpha_j$ for $q_1,\ldots,q_n\in\Q^{\neq}$. Then
\begin{enumerate}
\item $\sum_{j=1}^nq_j\neq 1\Longrightarrow s(\alpha) = s0$,
\item $\sum_{j=1}^nq_j=1\Longrightarrow s(\alpha) = s(\alpha_1)$.
\end{enumerate}
\end{fact}
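\medskip\noindent
For example, in $(\Gamma_{\log},\psi)$ take $\alpha_1 = (1,0,0,\ldots)$ and $\alpha_2 = (1,1,0,\ldots)$ from $\Psi$. For $\alpha = \alpha_1+\alpha_2 = (2,1,0,\ldots)$ we have $q_1+q_2 = 2\neq 1$, and indeed the successor formula gives $s(\alpha) = (1,0,0,\ldots) = s0$; for $\alpha = 2\alpha_1-\alpha_2 = (1,-1,0,\ldots)$ we have $q_1+q_2 = 1$, and indeed $s(\alpha) = (1,1,0,\ldots) = s(\alpha_1)$.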
\section{Some indiscernible lemmas}
\label{AClemmas}
\noindent
\emph{In this section $\M$ is a monster model of $T_{\log}$ with underlying set $\Gamma_{\infty}$. Furthermore, $I = I_1+(c)+I_2$ is an ordered index set with infinite $I_1$ and $I_2$, and $i,j,k$ range over $I$.} In this section we will handle most of the cases that will arise when verifying hypothesis (3) from Distal Criterion~\ref{distal_multi} in our proof of distality for $T_{\log}$.
\medskip\noindent
In terms of~\ref{distal_multi}(3), Lemma~\ref{nonconstantconstant} will handle most of the cases where the sequence $\big(g(a_i,b_1)\big)$ which gets plugged into $\frak{f}$ is nonconstant, but the output sequence $\big(h(a_i,b_2)\big)$ is constant. The arguments essentially use the fact that $\psi$ is a convex valuation on $\Gamma^{\neq}$, and that $s$ behaves in a similar topological manner to $\psi$.
\begin{lemma}
\label{nonconstantconstant}
Let $(a_i)_{i\in I}$ be a nonconstant indiscernible sequence from $\Gamma$ and suppose $(b,b')\in\Gamma\times\Psi$ is such that $(a_i)_{i\in I_1+I_2}$ is $bb'$-indiscernible. Then:
\begin{enumerate}
\item if $\psi(a_i-b) = b'$ for all $i\neq c$, then $\psi(a_c-b) = b'$;
\item if $s(a_i-b) = b'$ for all $i\neq c$, then $s(a_c-b) = b'$;
\item for $\frak{f}\in\{\psi, s\}$, it cannot be the case that $\frak{f}(a_i-b) = \infty$ for infinitely many $i$;
\item it cannot be the case that $p(a_i-b) = b'$ for infinitely many $i$.
\end{enumerate}
\end{lemma}
\begin{proof}
For (1), assume $\psi(a_i-b) = b'$ for all $i\neq c$. By $bb'$-indiscernibility, the sequence $(a_i-b)_{i\in I_1+I_2}$ is contained entirely within the convex set $\Gamma^<\cap \psi^{-1}(b')$ or it is contained entirely within the convex set $\Gamma^{>}\cap\psi^{-1}(b')$. In either case, $a_c-b$ is also contained in the same convex set by monotonicity of $(a_i-b)_{i\in I}$. Thus $\psi(a_c-b) = b'$.
The argument for (2) is similar to the argument for (1), except that the two convex sets are $(\Gamma^{<})'\cap s^{-1}(b')$ and $(\Gamma^{>})'\cap s^{-1}(b')$ (see Fact~\ref{sfacts}(\ref{sinvconvex})).
(3) and (4) follow from the assumption that $(a_i)$ is nonconstant.
\end{proof}
\noindent
Lemma~\ref{nonconstantnonconstant} will handle the cases where both sequences $\big(g(a_i,b_1)\big)$ and $\big(h(a_i,b_2)\big)$ are nonconstant:
\begin{lemma}
\label{nonconstantnonconstant}
Let $(a_ia_i')_{i\in I}$ be an indiscernible sequence from $\Gamma\times\Psi$ such that $(a_i)$ and $(a_i')$ are each nonconstant, and suppose $b\in\Gamma$ is such that $(a_ia_i')_{i\in I_1+I_2}$ is $b$-indiscernible. Then:
\begin{enumerate}
\item if $\psi(a_i - b) = a_i'$ for all $i\neq c$, then $\psi(a_c-b) = a_c'$;
\item if $s(a_i - b) = a_i'$ for all $i\neq c$, then $s(a_c-b) = a_c'$;
\item if $p(a_i-b) = a_i'$ for all $i\neq c$, then $p(a_c-b) = a_c'$.
\end{enumerate}
\end{lemma}
\begin{proof}
Without loss of generality, we will assume that $(a_i')$ is strictly increasing.
For (1), we first observe that if $i<j$, then $\psi(a_i-a_j) = a_i'$. Indeed, by indiscernibility, it suffices to show this for $i<j$ both from $I_1+I_2$. Given such $i<j$, we have
\[
\psi(a_i-a_j)\ =\ \psi\big((a_i-b)+(b-a_j)\big)\ =\ \min(a_i', a_j')\ =\ a_i'.
\]
Next, suppose $j\in I_2$, and note that
\[
\psi(a_c-b)\ =\ \psi\big((a_c-a_j)+(a_j-b)\big)\ =\ \min(a_c',a_j')\ =\ a_c'.
\]
For (2), we first claim that if $i<j$, then $\psi(a_i-a_j) = a_i'$. By indiscernibility, it suffices to show this for $i<j$ both from $I_1+I_2$. Given such $i<j$, we have $a_i' = s(a_i-b)<s(a_j-b) = a_j'$. By the Successor Identity (Fact~\ref{sfacts}(\ref{succid})), we have
\[
a_i' = s(a_i-b)\ =\ \psi\big((a_i-b) - (a_j-b)\big)\ =\ \psi(a_i-a_j).
\]
Next, suppose $j<k$ are both from $I_2$. Then by monotonicity of $(a_i-b)_{i\in I}$ and $b$-indiscernibility of $(a_i)_{i\in I_1+I_2}$,
\[
s(a_c-b)\ \leq\ s(a_j-b)\ <\ s(a_k-b).
\]
By the Successor Identity we have
\[
a_c'\ =\ \psi(a_c-a_k)\ =\ \psi\big((a_c-b)-(a_k-b)\big)\ =\ s(a_c-b).
\]
Finally for (3), we note that if $i \in I_1+I_2$, then $a_i-b \in \Psi^{>s0}$ and so
\[
a_i-b = sp(a_i-b) = s(a_i').
\]
From this we see that $a_i-s(a_i') = b$ and, in particular, $(a_i-s(a_i'))_{i \in I_1+I_2}$ is constant. By indiscernibility of $(a_ia_i')_{i \in I}$, we have $a_c - s(a_c') = b$ and so $p(a_c-b) = ps(a_c')$. As $a_i'\in \Psi$ for all $i \in I_1+I_2$ we have $a_c' \in \Psi$, and so $ps(a_c')= a_c'$.
\end{proof}
\noindent
Lemma~\ref{constantconstant} will handle the cases where both sequences $\big(g(a_i,b_1)\big)$ and $\big(h(a_i,b_2)\big)$ are constant. This case is trivial, but it does implicitly rely on the behavior of $\L$-terms, where $\L:=\{0,+,-,<,(\delta_n)_{n<\omega},\infty\}\subseteq\L_{\log}$:
\begin{lemma}
\label{constantconstant}
Given $\frak{f}\in\{\psi,s,p\}$, let $g,h$ be $\L$-terms of arities $n+k$ and $m+l$ respectively with $m\leq n$, let $b_1\in \M^k$ and $b_2\in \frak{f}(\M)^l$, and let $(a_i)_{i\in I}$ be an indiscernible sequence from $\frak{f}(\M)^m\times \M^{n-m}$ such that
\begin{enumerate}
\item $(a_i)_{i\in I_1+I_2}$ is $b_1b_2$-indiscernible,
\item $\frak{f}\big(g(a_i,b_1)\big) = h(a_i,b_2)$ for every $i\neq c$,
\item $\big(g(a_i,b_1)\big)_{i\in I_1+I_2}$ is a constant sequence.
\end{enumerate}
Then $\frak{f}\big(g(a_c,b_1)\big) = h(a_c,b_2)$.
\end{lemma}
\begin{proof}
First, suppose $h(a_i,b_2) = \infty$ for every $i\neq c$. Then either $\infty$ occurs in the term $h$, or in the tuple $b_2$ in a ``non-dummy'' manner, or in one of the first $m$ coordinates of $a_i$ in a ``non-dummy'' manner. In any of these cases, it follows that $h(a_c,b_2) = \infty$.
Otherwise, suppose there is $b\in\Gamma$ such that $h(a_i,b_2) = b$ for every $i\neq c$. Then the $\L$-term $h$ essentially computes a $\Q$-linear combination of its arguments, and $\infty$ does not get involved at all. Thus $\big(h(a_i,b_2)\big)_{i\in I}$ is monotone by indiscernibility of $(a_i)_{i\in I}$. In particular, $h(a_c,b_2) = b$.
A similar argument shows that for an arbitrary $i_0\in I_1+I_2$, $g(a_c,b_1) = g(a_{i_0},b_1)$. Thus
\[
\frak{f}\big(g(a_c,b_1)\big)\ =\ \frak{f}\big(g(a_{i_0},b_1)\big)\ =\ h(a_{i_0},b_2)\ =\ h(a_c,b_2). \qedhere
\]
\end{proof}
\noindent
The following lemma greatly simplifies the sequence $\big(h(a_i,b_2)\big)$. \emph{We no longer assume that $j$ and $k$ range over $I$.}
\begin{lemma}
\label{RHSsimplification}
Let $h(x,y)$ be an $\L$-term of arity $m+n$, $b\in \M^n$ and $(a_i)_{i\in I}$ an indiscernible sequence from $\Psi_{\infty}^m$, with $a_i = (a_{i,1},\ldots,a_{i,m})$. Assume that $h(a_i,b)\in\Psi_{\infty}$ for infinitely many $i$. Then one of the following is true:
\begin{enumerate}
\item $h(a_i,b) = \infty$ for every $i$;
\item there is $\beta\in\Psi$ such that $h(a_i,b) = \beta$ for every $i$;
\item there is $l\in\{1,\ldots,m\}$ such that $h(a_i,b) = a_{i,l}$ for every $i$.
\end{enumerate}
\end{lemma}
\begin{proof}
If one of the components of $b$ which corresponds to a free variable which actually occurs in $h$ is $\infty$, then $h(a_i,b) = \infty$ for every $i$. Similarly if the constant $\infty$ occurs in the term $h$. Thus we may assume for the remainder of the proof that none of the components of $b$ are $\infty$ and that $\infty$ does not occur in the term $h$. We may then write $h(x,b) = (\textstyle\sum_{j=1}^m q_jx_j)+c$ where $c\neq \infty$ is a $\Q$-linear combination of the components of $b$ and $q_1,\ldots,q_m\in\Q$. We consider three disjoint cases:
\textbf{Case 1:} \emph{There is $i_0 \in I$ with $h(a_{i_0},b) = \infty$.} Then there must be $j \in \{1,\ldots, m\}$ with $a_{i_0,j} = \infty$, and so $a_{i,j} = \infty$ for every $i$. We conclude that $h(a_i,b) = \infty$ for every $i$.
\textbf{Case 2:} \emph{$h(a_i,b) \neq \infty$ for every $i$, and there are distinct $i_0,i_1 \in I$ and $\beta \in \Gamma$ with $h(a_{i_0},b) = h(a_{i_1},b)= \beta$.} We see then that $\sum_{j=1}^m q_ja_{i_0,j}=\sum_{j=1}^m q_ja_{i_1,j}$ and so $\sum_{j=1}^m q_ja_{i,j}=\sum_{j=1}^m q_ja_{i',j}$ for every $i, i'\in I$ by indiscernibility. We conclude that $h(a_i,b) = \beta$ for every $i$ and since $h(a_i,b)\in\Psi_{\infty}$ for infinitely many $i$, we see that $\beta \in \Psi$.
\textbf{Case 3:} \emph{$h(a_i,b) \neq \infty$ for every $i$, and $h(a_i,b) \neq h(a_{i'},b)$ for all distinct $i$ and $i'$.}
We will first clean up the summation by removing constant and redundant sequences. By indiscernibility, there is $m_0\geq 1$ and natural numbers $1\leq \eta(1)<\cdots<\eta(m_0)\leq m$ such that
\begin{enumerate}
\item for every $j\in \{\eta(1),\ldots,\eta(m_0)\}$, the sequence $(a_{i,j})_{i\in I}$ is nonconstant,
\item for every $j,j'\in \{\eta(1),\ldots,\eta(m_0)\}$ such that $j\neq j'$, $a_{i,j}\neq a_{i,j'}$ for every $i$, and
\item given $j\in \{1,\ldots,m\}\setminus\{\eta(1),\ldots,\eta(m_0)\}$, either
\begin{enumerate}
\item the sequence $(a_{i,j})_{i\in I}$ is constant, or
\item there is $j'\in \{\eta(1),\ldots,\eta(m_0)\}$ such that $a_{i,j} = a_{i,j'}$ for every $i$.
\end{enumerate}
\end{enumerate}
By rearranging the components of $(a_i)$ and the $q_j$'s, we may assume for the rest of the proof that $\eta(j) = j$ for $j=1,\ldots,m_0$.
Next, for $j\in \{1,\ldots,m_0\}$, define
\[
\textstyle \tilde{q}_j := \sum_{j'\in A} q_{j'}, \ \text{where } A:= \{j': a_{i,j} = a_{i,j'} \text{ for every $i$}\}
\]
and
\[
\textstyle \tilde{c} := c+ \sum_{j\in B} a_{i_0,j}, \ \text{where } B:= \{j: (a_{i,j})_{i\in I} \text{ is a constant sequence}\} \ \text{and $i_0\in I$ is some fixed index.}
\]
We now have that for every $i$,
\[
\textstyle h(a_i,b) = \big(\sum_{j=1}^{m_0}\tilde{q}_ja_{i,j}\big) + \tilde{c},
\]
and for every $i,i'$ and $j,j'\in\{1,\ldots,m_0\}$, $a_{i,j}\neq a_{i',j'}$ whenever $(i,j)\neq (i',j')$.
Now choose distinct $i_0,i_1,\ldots,i_{m_0+1}\in I$ such that $h(a_{i_k},b)\in\Psi$ for $k=0,\ldots,m_0+1$. We have for each $k\in \{1,\ldots,m_0+1\}$ that
\[
\textstyle \big(\sum_{j=1}^{m_0}\tilde{q}_ja_{i_0,j}\big) -h(a_{i_0},b) \ = \ -\tilde{c} \ = \ \big(\sum_{j=1}^{m_0}\tilde{q}_ja_{i_k,j}\big) -h(a_{i_k},b)
\]
and so
\[
\tag{$\ast$} \textstyle h(a_{i_k},b) \ = \ \big(\sum_{j=1}^{m_0}\tilde{q}_ja_{i_k,j}\big) - \big(\sum_{j=1}^{m_0}\tilde{q}_ja_{i_0,j}\big) + h(a_{i_0},b).
\]
Since $h(a_{i_k},b)$ and $h(a_{i_0},b)$ are distinct elements of $\Psi$, and $a_{i_0,1},\ldots,a_{i_0,m_0}, a_{i_k,1},\ldots,a_{i_k,m_0}$ are also distinct elements from $\Psi$, we deduce from the $\Q$-linear independence of $\Psi$ (Fact~\ref{lemma6.8gehretQE}) that
\[
h(a_{i_k},b) \ \in \ \{a_{i_0,1},\ldots,a_{i_0,m_0}, a_{i_k,1},\ldots,a_{i_k,m_0}\}.
\]
We claim that there is at least one $k\in\{1,\ldots,m_0+1\}$ such that $h(a_{i_k},b)\in \{a_{i_k,1},\ldots,a_{i_k,m_0}\}$. Suppose not. Then there is a function $\sigma:\{1,\ldots,m_0+1\}\to \{1,\ldots,m_0\}$ such that $h(a_{i_k},b) = a_{i_0,\sigma(k)}$. As $h(a_{i_k},b)\neq h(a_{i_{k'}},b)$ for all $k,k'\in\{1,\ldots,m_0+1\}$ such that $k\neq k'$, we must have that $\sigma$ is injective, a contradiction. Therefore we can take $k\in\{1,\ldots,m_0+1\}$ and $l\in \{1,\ldots,m_0\}$ such that $h(a_{i_k},b) = a_{i_k, l}$. In particular, $a_{i_k,l}\neq 0$ and so $a_{i_0,l}\neq 0$. Again from $(\ast)$, the $\Q$-linear independence of $\Psi$, and the fact that $a_{i_0,1},\ldots,a_{i_0,m_0}, a_{i_k,1},\ldots,a_{i_k,m_0}$ are all distinct, we deduce that $h(a_{i_0},b) = a_{i_0, l}$, that $\tilde{q}_l = 1$, and that $\tilde{q}_j = 0$ for $j\neq l$. From this, we deduce that $\tilde{c}=0$, and so $h(a_i,b) = a_{i,l}$ for every $i$.
\end{proof}
\section{Spread out sequences}
\label{spreadout}
\noindent
\emph{In this section $\M$ is a monster model of $T_{\log}$ with underlying set $\Gamma_{\infty}$. Furthermore, $I$ is an infinite ordered index set, $i,j$ range over $I$, and $(a_i)_{i\in I}$ is a strictly increasing indiscernible sequence from $\Gamma$.}
\begin{definition}
Given $a,b\in \conv(\Psi)$, we write $a\ll b$ if $s^na<b$ for every $n$.
If there is $b\in\Gamma$ such that $(a_i-b)_{i\in I}\subseteq \conv(\Psi)$, and for every $i<j$, $s0\ll a_i-b\ll a_j-b$, then we say that $(a_i)_{i\in I}$ is \textbf{spread out by $b$}.
\end{definition}
\noindent
Intuitively, the idea behind $(a_i)_{i\in I}$ being spread out by $b$ is that upon translating $(a_i)_{i\in I}$ by $b$, all elements of the sequence $(a_i-b)_{i\in I}$ are now in the convex hull of the $\Psi$-set, and each element of the sequence ``lives on'' its own copy of $\Z$ (or rather, the convex hull of a copy of $\Z$). In Figure~\ref{spreadoutfigure} we suppose $(a_i)_{i\in I} = (a_n)_{n<\omega}$ is an indiscernible sequence spread out by $b$, and we illustrate the positions of $(a_n-b)_{n<5}$. Furthermore, each element $a_i-b$ has a ``nearest'' element of the $\Psi$-set, namely $ps(a_i-b)$. The next lemma shows that these nearest elements do not depend on $b$.
\begin{figure}[h!]
\caption{Five elements of a sequence being spread out by $b$.}
\label{spreadoutfigure}
\begin{center}
\begin{tikzpicture}
\draw (0,0)--(15,0);
\tikzmath{\x = .7; \y = .75; \z =2.85;
\w =1/(1-\y);}
\draw (.5,-.4)--(.5,.4);
\node at (.5,-.6) {\small$0$};
\node at (.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y*\y^1-\z*\y^1*\x^0, -.6) {\small$s0$};
\foreach \a in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20} {
\begin{scope}
\foreach \b in {1,2,3,4,5,6} {
\begin{scope}
\draw (.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y*\y^\b-\z*\y^\b*\x^\a, -.4*\y^\b*\x^\a)--(.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y*\y^\b-\z*\y^\b*\x^\a, \z*\y^\b*\x^\a);
\end{scope}}
\end{scope}}
\foreach \a in {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20} {
\begin{scope}
\foreach \b in {2,3,4,5,6} {
\begin{scope}
\draw (.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y^\b+\z*\y^\b*\x^\a, -.4*\y^\b*\x^\a)--(.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y^\b+\z*\y^\b*\x^\a, \z*\y^\b*\x^\a);
\end{scope}}
\end{scope}}
\node (A0) at (.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y*\y^2-\z*\y^2*\x^2-\y^2*\x^2*.4, 0) {\tiny$\bullet$}; \node at ([yshift=-.3cm]A0) {\small$a_0-b$};
\node (A1) at (.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y*\y^3-\z*\y^3*\x^1-\y^3*\x^1*.2, 0) {\tiny$\bullet$}; \node at ([yshift=-.3cm]A1) {\small$a_1-b$};
\node (A2) at (.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y^4+\z*\y^4*\x^1, 0) {\tiny$\bullet$}; \node at ([yshift=-.3cm]A2) {\small$a_2-b$};
\node (A3) at (.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y^5+\z*\y^5*\x^1+\y^5*\x^1*.2, 0) {\tiny$\bullet$}; \node at ([yshift=-.3cm]A3) {\small$a_3-b$};
\node (A4) at (.5+2*\y*\z+\w*\y*\y*\z*2-2*\z*\w*\y*\y^6-\z*\y^6*\x^0+\y^6*\x^0*.4, 0) {\tiny$\bullet$}; \node at ([yshift=-.3cm]A4) {\small$a_4-b$};
\end{tikzpicture}
\end{center}
\end{figure}
\begin{lemma}
Suppose $b_0,b_1\in\Gamma$ are such that $(a_i)_{i\in I}$ is spread out by both $b_0$ and $b_1$. Then for every $i<j$,
\[
ps(a_i-b_0)\ =\ ps(a_i-b_1)\ =\ p\psi(a_i-a_j).
\]
\end{lemma}
\begin{proof}
Suppose $(a_i)$ is spread out by $b\in\{b_0,b_1\}$, and let $i<j$ be arbitrary. Then by definition $a_i-b\ll a_j-b$, and so by the Successor Identity,
\[
\psi(a_i-a_j)\ =\ \psi\big((a_j-b)-(a_i-b)\big)\ =\ s(a_i-b).
\]
Applying $p$ to both sides gives $ps(a_i-b) = p\psi(a_i-a_j)$ for $b\in\{b_0,b_1\}$, which yields the displayed identities.
\end{proof}
\noindent
If $(a_i)$ is spread out by $b$, then the sign of the difference $(a_i-b)-ps(a_i-b)$ can depend on $i$ and $b$, although in a very \emph{dependent} way (see Figure~\ref{seqmonotonefigure}):
\begin{lemma}
\label{seqmonotone}
Suppose $(a_i)_{i\in I}$ is spread out by $b$. Let
\[I^*:=
\begin{cases}
I & \text{if $I$ does not have a greatest index} \\
I^{<d} & \text{if $I$ has a greatest index $d$.}
\end{cases}
\]
Then the function $i\mapsto a_i-ps(a_i-b):I^*\to\Gamma$ is either constant, strictly increasing, or strictly decreasing.
\end{lemma}
\begin{proof}
Fix indices $i_0<j_0$ from $I^*$ and fix $\star\in\{=,<,>\}$ such that
\[
\big(a_{i_0}-ps(a_{i_0}-b)\big)\ \star\ \big(a_{j_0}-ps(a_{j_0}-b)\big).
\]
Next, let $j<k$ be arbitrary indices from $I^*$, and fix an index $d\in I$ which is greater than $j_0$ and $k$. Note that the sequence $(a_i)_{i\in I^{<d}}$ is $a_d$-indiscernible. Thus
\begin{align*}
\big(a_{i_0} - ps(a_{i_0}-b)\big)\ \star\ \big(a_{j_0} - ps(a_{j_0}-b)\big) \ &\Longleftrightarrow\ \big(a_{i_0} -p\psi(a_{i_0}-a_d)\big)\ \star\ \big(a_{j_0}-p\psi(a_{j_0}-a_d)\big) \\
&\Longleftrightarrow \ \big(a_{j} -p\psi(a_{j}-a_d)\big)\ \star\ \big(a_{k}-p\psi(a_{k}-a_d)\big) \\
&\Longleftrightarrow \ \big(a_{j} - ps(a_{j}-b)\big)\ \star\ \big(a_{k} - ps(a_{k}-b)\big).
\end{align*}
Thus the function $i\mapsto a_i-ps(a_i-b):I^*\to\Gamma$ is either constant, strictly increasing, or strictly decreasing, depending on $\star$.
\end{proof}
\begin{figure}[h!]
\caption{To illustrate Lemma~\ref{seqmonotone} we apply $ps$ to the sequence $(a_n-b)_{n<5}$ from Figure~\ref{spreadoutfigure}}
\label{seqmonotonefigure}
\begin{center}
\begin{tikzpicture}[x=1.1cm,y=1cm]
\tikzmath{\x = .7; \y = .2; \z =6;}
\foreach \b in {0,1,2,3,4} {
\begin{scope}
\draw (1.5,-2*\b)--(13.5,-2*\b);
\node at (1.25,-2*\b) {$\dots$};
\node at (13.75,-2*\b) {$\dots$};
\draw (7.5, -2*\b-.2*\z*\y)--(7.5, -2*\b+\z*\y);
\foreach \a in {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30} {
\begin{scope}
\draw (7.5+\z-\z*\x^\a, -2*\b-.2*\z*\y*\x^\a)--(7.5+\z-\z*\x^\a, -2*\b+\z*\y*\x^\a);
\draw (7.5-\z+\z*\x^\a, -2*\b-.2*\z*\y*\x^\a)--(7.5-\z+\z*\x^\a, -2*\b+\z*\y*\x^\a);
\end{scope}}
\end{scope}}
\node (A0) at (6.4, 0) {$\bullet$}; \node at ([yshift=-.3cm]A0) {$a_0-b$};
\draw[<-, bend right=45, shorten <=.1cm]
(7.5,0) to node[yshift= .2cm] {$ps$} (A0);
\node (A1) at (6.8, -2) {$\bullet$}; \node at ([yshift=-.3cm]A1) {$a_1-b$};
\draw[<-, bend right=35, shorten <=.1cm]
(7.5,-2) to node[yshift= .2cm, xshift = -.1cm] {$ps$} (A1);
\node (A2) at (7.5, -4) {$\bullet$}; \node at ([yshift=-.3cm]A2) {$a_2-b$};
\node (A3) at (8.2, -6) {$\bullet$}; \node at ([yshift=-.3cm]A3) {$a_3-b$};
\draw[->, bend right=35, shorten >=.1cm]
(A3) to node[yshift= .2cm, xshift = .1cm] {$ps$} (7.5,-6);
\node (A4) at (8.6, -8) {$\bullet$}; \node at ([yshift=-.3cm]A4) {$a_4-b$};
\draw[->, bend right=45, shorten >=.1cm]
(A4) to node[yshift= .2cm, xshift = .1cm] {$ps$} (7.5,-8);
\end{tikzpicture}
\end{center}
\end{figure}
\noindent
The following is the key proposition:
\begin{prop}
\label{pnonconstantinfty}
Suppose $I = I_1+(c)+I_2$ is in distal configuration at $c$. Further suppose that $b\in \Gamma$ is such that $(a_i)_{i\in I_1+I_2}$ is $b$-indiscernible. If $p(a_i-b) = \infty$ for every $i\in I_1+I_2$, then $p(a_c-b) = \infty$.
\end{prop}
\begin{proof}
We have a few cases to consider.
\textbf{Case 1:} \emph{There is $i_0\neq c$ such that $a_{i_0}-b>\Psi$.} Then by $b$-indiscernibility and monotonicity, $a_i-b>\Psi$ for every $i\in I$, and so $a_c-b\not\in\Psi$, whence $p(a_c-b) = \infty$.
\textbf{Case 2:} \emph{There is $n$ and $i_0\neq c$ such that $a_{i_0}-b\leq s^n0$.} Again by $b$-indiscernibility and monotonicity, we have that $a_i-b<s^n0$ for every $i$. Assume towards a contradiction that $p(a_c-b)\neq\infty$. Then $a_c-b = s^m0$ for some $1<m<n$, and so for $i\neq c$, $a_i-b<s^m0$ iff $i<c$, contradicting the indiscernibility of $(a_i-b)_{i\in I_1+I_2}$.
We may now assume by monotonicity that $(a_i-b)_{i\in I}\subseteq \Gamma^{>(s^n0)_n}\cap \Psi^{\downarrow}$. In particular, $s0\ll a_i-b$ for every $i$.
\textbf{Case 3:} \emph{The sequence $\big(ps(a_i-b)\big)_{i\in I_1+I_2}$ takes the constant value $b'$.} Then by $b$-indiscernibility and monotonicity, either $a_i-b<b'$ for all $i$, or $a_i-b>b'$ for all $i$. In either case, $a_c-b\neq b'$; however, $ps(a_c-b) = b'$. Thus $a_c-b\not\in \Psi$, and so $p(a_c-b) = \infty$.
\textbf{Case 4:} \emph{The sequence $\big(ps(a_i-b)\big)_{i\in I_1+I_2}$ is strictly increasing.} Then the sequence $\big(ps(a_i-b)\big)_{i\in I}$ is strictly increasing, and it follows from $b$-indiscernibility of $(a_i)_{i\in I_1+I_2}$ and $I$ being in distal configuration at $c$ that $(a_i)_{i\in I}$ is spread out by $b$. By Lemma~\ref{seqmonotone} and $b$-indiscernibility of $(a_i)_{i\in I_1+I_2}$, the function $i\mapsto a_i-ps(a_i-b):I\to\Gamma$ is either constant, strictly increasing, or strictly decreasing. By $b$-indiscernibility and the assumption that $p(a_i-b) = \infty$ for every $i\in I_1+I_2$, it follows that this function takes values entirely below $b$, or entirely above $b$. Thus $a_c-ps(a_c-b)\neq b$, and so $a_c-b\not\in \Psi$.
\end{proof}
\section{Extensions}
\label{extensions}
\noindent
\emph{In this section, $(\Gamma,\psi)$ is a model of $T_{\log}$.} Here we will prove the relevant facts that will allow us to later verify condition (2) in Distal Criteria~\ref{distal_multi}, with $(\Gamma,\psi)$ playing the role of ``$(\bm{N},\frak{F})$''. In~\ref{distal_multi}(2), we are allowed to assume the substructure $\bm{M}$ of $\bm{N}$ is closed under every function from $\frak{F}$.
\medskip\noindent
For the case $\frak{f} = \psi$, we do not need the subgroup $\Gamma_0$ of $\Gamma$ to be closed under any of the functions $\psi$, $s$, or $p$:
\begin{prop}
\label{finite_psi_ext}
Suppose $\Gamma_0$ is a divisible ordered subgroup of $\Gamma$. Given $c_1,\ldots,c_m\in\Gamma\setminus\Gamma_0$, we have
\[
\textstyle \# \big(\psi((\Gamma_0+\sum_{i=1}^m\Q c_i)^{\neq})\setminus \psi(\Gamma_0^{\neq})\big)\ \leq\ m.
\]
In particular,
there is $n\leq m$ and distinct
\[
\textstyle
d_1,\ldots,d_n\ \in\ \psi\big((\Gamma_0+\sum_{i=1}^m\Q c_i)^{\neq}\big)\setminus \psi(\Gamma_0^{\neq})
\]
such that
\[
\textstyle
\psi\big((\Gamma_0+\sum_{i=1}^m\Q c_i)^{\neq}\big)\ \subseteq\ \big(\!\bigoplus_{\alpha\in \psi(\Gamma_0^{\neq})}\Q\alpha\big)\! \oplus\! \big(\!\bigoplus_{j=1}^n\Q d_j\big).
\]
\end{prop}
\begin{proof}
This follows by induction on $m$. To simplify notation, we will show the inductive step only for $m=1$. Let $c\in\Gamma\setminus\Gamma_0$. If $\psi(\Gamma_0+\Q c) = \psi(\Gamma_0)$, then we are done. Otherwise, suppose that $\psi(\Gamma_0+\Q c) \neq \psi(\Gamma_0)$. As $\psi$ is constant on archimedean classes, we must have that $[\Gamma_0+\Q c] \neq [\Gamma_0]$. By \cite[Lemma 2.4.4]{ADAMTT}, there is $c^* \in \Gamma_0+\Q c$ with $\left[\Gamma_0+\Q c\right] = \left[\Gamma_0\right] \cup \big\{[c^*]\big\}$ and so
$
\psi\left(\Gamma_0+\Q c\right) = \psi(\Gamma_0) \cup \big\{\psi(c^*)\big\}.
$
\end{proof}
\noindent
For the case $\frak{f} = s$, we will need the subgroup to be closed under the functions $\psi$ and $s$, as the following example illustrates:
\begin{example}
Suppose $(\Gamma,\psi)$ has an element $\alpha\in\Psi$ such that $\alpha>s^n0$ for every $n$. Fix such an element $\alpha$. Let $\Gamma_0$ be the divisible ordered subgroup of $\Gamma$ generated by
\[
\{s^n0:n\geq 1\}\cup\{\alpha_1-\alpha_0: \alpha_1,\alpha_0\in\Psi\ \&\ s0 \ll \alpha_0<\alpha_1\}.
\]
By Fact~\ref{cor6.5gehretQE} we have
$s(\Gamma_0) = \{s^n0:n\geq 1\}\subseteq\Gamma_0$.
However $\Psi\subseteq \Gamma_0+\Q\alpha$ and thus
\[
\#\big(s(\Gamma_0+\Q\alpha)\setminus s(\Gamma_0)\big)\ =\ \#(\Psi\setminus \{s^n0:n\geq 1\}) \ = \ \infty.
\]
\end{example}
\begin{prop}
\label{finite_s_ext}
Suppose $\Gamma_0$ is a divisible ordered subgroup of $\Gamma$ such that $s(\Gamma_0)\subseteq\Gamma_0$ and $\psi(\Gamma_0^{\neq})\subseteq \Gamma_0$. Given $c_1,\ldots,c_m\in\Gamma\setminus\Gamma_0$, we have
\[
\textstyle \#\big(s(\Gamma_0+\sum_{i=1}^m\Q c_i)\setminus s(\Gamma_0)\big) \ \leq \ m+1.
\]
In particular,
there is $n\leq m+1$ and distinct
\[
\textstyle
d_1,\ldots,d_n\ \in\ s\big(\Gamma_0+\sum_{i=1}^m\Q c_i\big)\setminus s(\Gamma_0)
\]
such that
\[
\textstyle
s\big(\Gamma_0+\sum_{i=1}^m\Q c_i\big)\ \subseteq\ \big(\!\bigoplus_{\alpha\in s(\Gamma_0)}\Q\alpha\big)\! \oplus\! \big(\!\bigoplus_{j=1}^n\Q d_j\big).
\]
\end{prop}
\begin{proof}
Suppose $c_1,\ldots,c_m\in\Gamma\setminus\Gamma_0$ and set $\Gamma_1:=\Gamma_0+\sum_{i=1}^m\Q c_i$.
Assume towards a contradiction that there are $m+2$ distinct elements in $s(\Gamma_1)\setminus s(\Gamma_0)$. Choose $e_1,\ldots,e_{m+2} \in \Gamma_1$ such that $s(e_1)<\ldots<s(e_{m+2})$ and such that $s(e_i) \not\in s(\Gamma_0)$ for each $i=1,\ldots,m+2$. By the Successor Identity, we have that $\psi(e_{i+1}-e_i) = s(e_i)$ for each $i = 1,\ldots,m+1$. The closure assumptions on $\Gamma_0$ imply that $(\Gamma_0,\psi)$ is an asymptotic couple with asymptotic integration, and so $s(\Gamma_0) = \psi(\Gamma_0)$. Thus there are $m+1$ distinct elements in $\psi(\Gamma_1) \setminus \psi(\Gamma_0)$, contradicting Proposition \ref{finite_psi_ext} above. \qedhere
\end{proof}
\begin{prop}
\label{finite_p_ext}
Suppose $(\Gamma_0,\psi)\preccurlyeq(\Gamma,\psi)$. Given $c_1,\ldots,c_m\in\Gamma\setminus\Gamma_0$, we have
\[
\textstyle \# \big(p(\Gamma_0+\sum_{i=1}^m\Q c_i)\setminus p(\Gamma_0)\big)\ \leq \ m+1.
\]
In particular, there is $n\leq m+1$ and distinct
\[
\textstyle d_1,\ldots,d_n \ \in\ p\big(\Gamma_0+\sum_{i=1}^m\Q c_i\big)\setminus p(\Gamma_0)
\]
such that
\[
\textstyle p\big(\Gamma_0+\sum_{i=1}^m\Q c_i\big)\ \subseteq\ \big(\!\bigoplus_{\alpha\in p(\Gamma_0), \alpha\neq\infty}\Q\alpha\big)\!\oplus\!\big(\!\bigoplus_{j=1}^n\Q d_j\big)\cup\{\infty\}.
\]
\end{prop}
\begin{proof}
Suppose $c_1,\ldots,c_m\in\Gamma\setminus\Gamma_0$ and set $\Gamma_1:=\Gamma_0+\sum_{i=1}^m\Q c_i$. Assume towards a contradiction that there are $m+2$ elements $e_1,\ldots,e_{m+2} \in \Gamma_1$ such that $p(e_i) \in p(\Gamma_1)\setminus p(\Gamma_0)$ for each $i$ and such that $p(e_i) \neq p(e_j)$ for all $i \neq j$. Then $e_i$ is in $\Psi$ for each $i$ and, as $s$ is injective on $\Psi$, we have that $s(e_i) \neq s(e_j)$ for all $i \neq j$. As $p(\Gamma_0) = s(\Gamma_0) \cup \{\infty\} = (\Psi \cap \Gamma_0)\cup \{\infty\}$, we have that $s(e_i) \not\in s(\Gamma_0)$ for each $i$, contradicting Proposition \ref{finite_s_ext}.
\end{proof}
\section{Proof of Theorem~\ref{tlogdistalthm}}
\label{distalproof}
\noindent
In this section we prove Theorem~\ref{tlogdistalthm} by applying Distal Criterion~\ref{distal_multi}. In the language of~\ref{distal_multi}, the role of $T$ will be played by the reduct $T:=T_{\log}\!\upharpoonright\!\L$, where $\L = \{0,+,-,<,(\delta_n)_{n<\omega},\infty\}$. The $\L$-theory $T$ is essentially the same as the theory of ordered divisible abelian groups, except that it contains the element $\infty$ which serves as a default value ``at infinity''. It follows that $T$ is o-minimal and therefore it is distal by~\cite[Lemma 2.10]{SimonDistal}, since o-minimal theories are $\DP$-minimal.
\medskip\noindent
We now construe $T_{\log}$ as $T_{\log} = T(\frak{F})$, with $\frak{F} = \{\psi,s,p\}$. In particular, $\L_{\log} = \L(\frak{F})$. By~\cite[Theorem 5.2]{gehretQE}, $T(\frak{F})$ has quantifier elimination, which is condition (1) in~\ref{distal_multi}. Verifying condition (2) in~\ref{distal_multi} involves three cases: $\frak{f} = \psi$, $s$, and $p$. These cases are handled respectively by Propositions~\ref{finite_psi_ext},~\ref{finite_s_ext}, and~\ref{finite_p_ext}. Concerning Proposition~\ref{finite_p_ext}, recall that $T_{\log}$ has quantifier elimination and a universal axiomatization (Theorem~\ref{Tlogknownthms}). Thus, if $(\Gamma,\psi)\models T_{\log}$, and $\Gamma_0\subseteq\Gamma$ is a divisible subgroup closed under $s,\psi,$ and $p$, then $\Gamma_0$ is the underlying set of an elementary substructure of $(\Gamma,\psi)$.
\medskip\noindent
Finally, we will show how to verify condition (3) in~\ref{distal_multi}. Fix a monster model $\M$ of $T$ with underlying set $\Gamma_{\infty}$. Let $\frak{f}\in \frak{F}$, let $g,h$ be $\L$-terms of arities $n+k$ and $m+l$ respectively with $m\leq n$, let $b_1\in\M^k$ and $b_2\in\frak{f}(\M)^l$, and let $(a_i)_{i\in I}$ be an indiscernible sequence from $\frak{f}(\M)^m\times\M^{n-m}$ such that
\begin{enumerate}[(a)]
\item $I=I_1+(c)+I_2$ is in distal configuration at $c$, and $(a_i)_{i\in I_1+I_2}$ is $b_1b_2$-indiscernible, and
\item $\frak{f}\big(g(a_i,b_1)\big) = h(a_i,b_2)$ for every $i\in I_1+I_2$.
\end{enumerate}
Our job is to show that $\frak{f}\big(g(a_c,b_1)\big) = h(a_c,b_2)$.
We have several cases to consider:
\textbf{Case 1:} \emph{$\big(g(a_i,b_1)\big)_{i\in I_1+I_2}$ is a constant sequence.} In this case, $\frak{f}\big(g(a_c,b_1)\big) = h(a_c,b_2)$ follows from Lemma~\ref{constantconstant}.
For the remainder of the proof, we assume that \emph{$\big(g(a_i,b_1)\big)_{i\in I_1+I_2}$ is not a constant sequence.} In particular, the symbol $\infty$ does not play a non-dummy role in $g(a_i,b_1)$, so the $\L$-term $g(x,y)$ is essentially a $\Q$-linear combination of its arguments. By grouping these $\Q$-linear combinations, we get $b\in\Gamma$, and a nonconstant indiscernible sequence $(a_i')_{i\in I}$ from $\M$ such that
\begin{enumerate}[(a)]
\setcounter{enumi}{2}
\item $g(a_i,b_1) = a_i'-b$ for every $i\in I$,
\item $(a_ia_i')_{i\in I}$ is an indiscernible sequence from $\frak{f}(\M)^m\times \M^{n-m+1}$,
\item $(a_ia_i')_{i\in I_1+I_2}$ is $b_1b_2b$-indiscernible, and
\item $\frak{f}(a_i'-b) = h(a_i,b_2)$ for every $i\in I_1+I_2$.
\end{enumerate}
Our job now is to show that $\frak{f}(a_c'-b) = h(a_c,b_2)$. Since $\frak{f}(\M)\subseteq \Psi_{\infty}$ for each $\frak{f}$, by Lemma~\ref{RHSsimplification} we get three more cases:
\textbf{Case 2:} \emph{$h(a_i,b_2)=\infty$ for every $i\in I$.} By Lemma~\ref{nonconstantconstant}(3), this case cannot happen for $\frak{f}\in\{\psi,s\}$. If $\frak{f} = p$, then this case is handled by Proposition~\ref{pnonconstantinfty}.
\textbf{Case 3:} \emph{There is $\beta\in\Psi$ such that $h(a_i,b_2) = \beta$ for every $i\in I$.} If $\frak{f} = \psi$ or $\frak{f} = s$, then this case is handled by Lemma~\ref{nonconstantconstant}(1) or (2). By Lemma~\ref{nonconstantconstant}(4), this case cannot happen for $\frak{f} = p$.
\textbf{Case 4:} \emph{There is $l\in \{1,\ldots,m\}$ such that $h(a_i,b_2) = a_{i,l}$ for every $i\in I$.} This case is handled by Lemma~\ref{nonconstantnonconstant}.
This completes the verification of condition (3) in~\ref{distal_multi} and so we are done with our proof of Theorem~\ref{tlogdistalthm}.
\section{DP-rank}
\label{dpranksection}
\noindent
In this section, we weigh in on the $\DP$-rank of $T_{\log}$. In contrast to distality, a notion of \emph{pureness}, $\DP$-rank gives rise to a certain measure of \emph{diversity} (in the sense that $\DP$-rank measures the diversity of realizations of types as viewed by external parameters; see the Introduction to~\cite{dprkadditivity}). Below we show that $T_{\log}$ is not strongly dependent, and so its $\DP$-rank is quite large. This also underscores the point of view that distality is \emph{not} to be taken as a notion of tameness.
For a concise definition of $\DP$-rank, $\DP$-minimality, and strongly dependent theories, see~\cite{Usvyatsov}.
See also~\cite[Chapter 4]{SimonNIP} for more information.
We will not define these concepts here and will instead use Proposition~\ref{goodrickprop} as a black box for establishing our negative results.
\begin{thm}
\label{TACnotstrong}
$T_{\log}$ is not strongly dependent. Therefore it is not $\DP$-minimal and does not have finite $\DP$-rank.
\end{thm}
\noindent
It is sufficient to show that $T_{\log}$ is not \emph{strong}, since if a theory is strongly dependent, then it is strong (see~\cite{Goodrick}). To do this, we will use the following criterion:
\begin{prop}\cite[2.14]{Goodrick}
\label{goodrickprop}
Suppose that $\bm{M} = (M;+,<,\ldots)$ is an expansion of a densely-ordered abelian group. Let $\bm{N}$ be a saturated model of $\Th(\bm{M})$, and suppose that for every $\epsilon>0$ in $\bm{N}$ there is an infinite definable discrete set $X\subseteq\bm{N}$ such that $X\subseteq (0,\epsilon)$. Then $\Th(\bm{M})$ is not strong.
\end{prop}
\begin{proof}[Proof of Theorem~\ref{TACnotstrong}]
Let $\bm{N}$ be a saturated model of $T_{\log}$. The infinite definable set $\Psi_{\bm{N}}$ is discrete and has the property that for every $\alpha\in\Psi_{\bm{N}}$, the set $\Psi_{\bm{N}}^{>\alpha}$ is also infinite and discrete. Let $\epsilon>0$ and take $\alpha\in\Psi_{\bm{N}}$ such that $\big(\alpha + 2(s\alpha-\alpha)\big) - \alpha = -2\int\alpha<\epsilon$. Note that then $\alpha+2(s\alpha-\alpha)>\Psi_{\bm{N}}$ by Fact~\ref{sfacts}(\ref{poptop}). The definable infinite discrete set $X:= \Psi_{\bm{N}}^{>\alpha}-\alpha$ has the desired property.
\end{proof}
\section*{Acknowledgements}
\noindent
The authors thank Matthias Aschenbrenner, Artem Chernikov, Lou van den Dries, John Goodrick, Philipp Hieronymi, and Travis Nell for various conversations and correspondences around the topics in this paper.
The first author is supported by the National Science Foundation under Award No. 1703709.
\bibliographystyle{amsplain}
\section{Related Work}
\label{sec:RW}
We discuss related works on learning structured representations, CAD shapes, CSG trees, and overfit models.
Aside from these, it is worth mentioning that there are heuristic-based techniques attempting to solve a similar problem as ours, e.g.,
using RANSAC. The most relevant work in this category to DualCSG is InverseCSG \cite{du2018inversecsg}, which requires additional
knowledge about each shape including the number of surface segments and pre-defined primitive types. Moreover, different parameter
settings are used for different shapes. Hence in this section, we focus on learning techniques that use the same setup across
shape categories to better contrast our work with the most comparable methods.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/differ_last_op_2.pdf}
\caption{To reconstruct a ``negative S" shape, CSG-Stump~\cite{ren2021csgstump} can only approximate it by a union of small pieces (a) or by subtracting, from a box, a union of basic primitives (b). Similarly, CAPRI-Net~\cite{Capri_Yu} must employ a union of convexes to approximate the S (c). None of these are natural or compact solutions. Only DualCSG can reconstruct the most \emph{compact} solution (d) as the S can be well produced by its general residual branch using quadric primitives.}
\label{fig:s_example}
\vspace{-3mm}
\end{figure}
\vspace{-8pt}
\paragraph{Structured models.}
A shape can be represented as a set of primitives or parts assembled together.
Primitive fitting to point clouds has been extensively studied in \cite{li2011globfit,li2019fit,le2021cpfn}.
For shape abstraction, cuboids \cite{abstractionTulsiani17} and
superquadrics \cite{Paschalidou2019CVPR} have been employed, and 3D Gaussian local functions are used for template fitting \cite{genova2019learning}.
Cuboids have also been used to estimate object parts and their relations from a single RGB image using a convolutional-recursive auto-encoder.
More complex sub-shapes have been learned for shape assembly such as elementary 3D structures~\cite{deprelle2019learning}, implicit convex \cite{deng2020cvxnet,bspnet} and neural star components \cite{kawana2020neural}, as well as \emph{parts} in the form of learnable parametric patches \cite{sharma2020parsenet}, moving or deformable primitives \cite{liu2018physical,zou20173d,Yao_2021_ICCV, Paschalidou2021CVPR}, point clouds \cite{li2020learning}, or a part-aware latent space \cite{dubrovina2019composite}.
However, none of these techniques directly addresses reconstructing a CSG tree for a given 3D shape which is our problem of interest.
\vspace{-8pt}
\paragraph{Deep CAD models.}
Synthesizing and editing CAD models are difficult tasks since they tend to have many sharp features and various topologies.
Learning-based shape programs have been designed to perform these tasks by providing easy-to-use tools and editing capabilities \cite{tian2019learning,ellis2019write,cascaval2022differentiable}.
In addition, due to the complex topology and geometry of CAD models, certain representations are often used for CAD models.
Boundary Representations (B-Reps) are quite common for modeling CAD shapes and there are previous attempts to \emph{reverse engineer} such representations given an input mesh or point cloud \cite{xu2021inferring}. For example, BRepNet \cite{lambourne2021}, UV-Net \cite{jayaraman2021}, and SBGCN \cite{jones2021} offer network architectures capable of working with B-Reps and their topological information through message passing.
\vspace{-8pt}
\paragraph{Learning CSG.}
Learning CSG representations, e.g., primitive
assembly~\cite{JoinABLe} and sketch analysis~\cite{SketchGen,Free2CAD}, has become an emerging topic of geometric deep learning. While
most approaches are supervised, e.g., CSG-Net \cite{sharma2018csgnet}, SPFN \cite{li2019fit}, ParseNet \cite{sharma2020parsenet},
DeepCAD \cite{Wu_2021_ICCV}, and Point2Cyl~\cite{Point2Cyl}, there have been several recent attempts at unsupervised CSG tree
reconstruction, especially under the class-agnostic setting, resorting to neural implicit representations~\cite{imnet, OccNet, DeepSDF,bspnet}.
UCSG-Net~\cite{kania2020ucsg} is a relatively early method for reconstructing CSG trees with arbitrary assembly orders. The learning task is
difficult due to the order flexibility, but can be made feasible by limiting the primitives to boxes and spheres only. More success in terms of
reconstruction quality and compactness of the CSG trees has been obtained by learning {\em fixed-order\/} assemblies, including
DualCSG.
BSP-Net~\cite{bspnet} learns plane primitives whose half-spaces are assembled via intersections to obtain convexes, followed by a union
operation, to reconstruct concave shapes. CAPRI-Net~\cite{Capri_Yu} extends BSP-Net by adding quadric surface primitives and a difference
operation after primitive unions.
CSG-Stump~\cite{ren2021csgstump} also follows a fixed CSG assembly order while including an inverse layer to model shape
complements. The complement operation helps attain generality of their CSG reconstructions, in theory, but the inverse layer is
non-differentiable and the difference operations can only be applied to basic primitives, which can severely compromise the compactness
and quality of the reconstruction.
In Fig.~\ref{fig:s_example}, we use a 2D example to contrast how CSG-Stump, CAPRI-Net, and DualCSG can reconstruct a shape
whose natural construction involves subtracting an S shape.
\vspace{-8pt}
\paragraph{Overfit models.}
Overfitting to the geometry of a shape is a common approach. It has been used for applications such as compression \cite{davies2020overfit}, reconstruction \cite{williams2019deep}, and representing level of details \cite{takikawa2021neural} as it significantly helps to recover intricate details and features.
The underlying geometry of a NeRF is essentially overfit (i.e., fixed) to the shape/scene, although the primary task of NeRF is novel view synthesis \cite{yariv2021volume,mildenhall2021nerf}.
Following a similar principle, to replicate fine geometric details, DualCSG constructs a CSG tree for a given object by optimizing a small neural network along with a randomly initialized feature code (Fig.~\ref{fig:pipe-line}). We follow this optimization/overfitting procedure since we did not find a learned prior on CAD shapes to be useful or generalizable, owing to the structural and topological diversity of CAD shapes.
\section{Discussion, limitation, and future work}
\label{sec:future}
We present DualCSG, a simple yet effective idea, for unsupervised learning of general and compact CSG tree representations of 3D CAD objects.
Extensive experiments on the ABC and ShapeNet datasets demonstrate that our network outperforms state-of-the-art methods both in reconstruction quality and compactness. We also have ample visual evidence that the CSG trees obtained by our method tend to be more natural than those produced by prior approaches.
Our network does not generalize over a shape collection; it ``overfits'' to a single input shape and is in essence an optimization to find a CSG assembly. While arguably limiting, this is not entirely unjustified, since the CAD shapes we seek to handle, i.e., those from ABC, do not appear to possess sufficient generalizability in their primitive assemblies. Another limitation, shared with CAPRI-Net, is the lack of an explicit compactness loss to enforce minimal CSG trees.
In addition, incorporating interpretable CSG operations into the network tends to cause gradient back-propagation issues and limits the reconstruction accuracy of small details such as decorative curves on chair legs.
Besides addressing the above limitations, we would like to extend our method to structured CAD shape reconstruction from images and free-form sketches.
Another interesting direction for future work is to scale the primitive assembly optimization from CAD parts to indoor scenes.
\section{Introduction}
\label{sec:intro}
CAD shapes have played a central role in the advancement of geometric deep learning, with most neural models to date trained on datasets such as
ModelNet~\cite{3DShapeNet}, ShapeNet~\cite{chang2015shapenet}, and PartNet~\cite{partnet} for classification, reconstruction, and generation tasks.
These shape collections all possess well-defined category or class labels, and more often than not, the effectiveness of the data-driven methods is tied
to how well the class-specific shape features can be learned.
Recently, the emergence of datasets of CAD {\em parts\/} and {\em assemblies\/} such as ABC~\cite{ABC} and Fusion360~\cite{Fusion360} has fueled
the need for learning shape representations that are {\em agnostic to class labels\/}, without any reliance on class priors. Case in point, the ABC dataset
does not provide any category labels, while another challenge to the ensuing representation learning problem is the rich topological varieties exhibited by
the CAD shapes.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/teaser_1.pdf}
\caption{Comparing CSG trees and shape reconstructions obtained by our network, DualCSG, and by CAPRI-Net, the current state of the art. A natural CSG construction necessitates
a difference operation involving a complex (residual) shape, which DualCSG can predict with compactness (three primitives) and quality. CAPRI-Net can only build it using convexes,
requiring unnecessarily many primitives and resulting in poor reconstruction quality.}
\label{fig:teaser}
\end{figure}
{\em Constructive solid geometry\/} (CSG) is a classical CAD representation; it models a 3D shape as a recursive assembly of solid
primitives, e.g., cuboids, cylinders, etc., through Boolean operations including union, intersection, and difference. Of particular note is the indispensable
role the difference operation plays when modeling holes and high-genus shapes, as shown in Fig.~\ref{fig:teaser}. Recently, there have been
increased interests in 3D representation learning using CSG~\cite{Wu_2021_ICCV,sharma2020parsenet,sharma2018csgnet,kania2020ucsg,Capri_Yu,ren2021csgstump,bspnet}, striving for generality, compactness, and reconstruction quality of the learned models.
\begin{figure*}
\centering
\includegraphics[width=0.99\linewidth]{figs/pipeline.pdf}
\caption{For a given 3D CAD shape $S$ (shown at the right end), our network DualCSG optimizes both its network parameters and a feature code to reconstruct $S$ under an occupancy loss. The network parameters define a CSG assembly over a set of quadric primitives. The assembly is built using two branches: a cover branch (top), producing shape $S_C$, and a residual branch (bottom), producing shape $S_R$. Primitives learned by the cover branch all define convex spaces, e.g., the half space for a plane, and the space enclosed by a bi-infinite cylinder. In the residual branch, the primitive prediction results in both convexes and inverses (or complements) of convexes. Intersections and a union are applied to obtain the cover shape $S_C$ and the residual shape $S_R$, each optimized with its own occupancy loss, and the
recovered shape is obtained via a difference operation.}
\label{fig:pipe-line}
\end{figure*}
In terms of primitive counts, a direct indication of compactness of the CSG trees, and reconstruction quality, CAPRI-Net~\cite{Capri_Yu}
represents the state of the art. However, it is {\em not\/} a general neural model, e.g., it is unable to represent CAD shapes whose assembly
necessitates {\em nested difference\/} operations (i.e., needing to subtract a part that itself requires primitive differences to build; see the CAD model in the first column of Fig.~\ref{fig:pc2csg_abc} for an example).
Since both operands of the (single) difference operation in CAPRI-Net can only model intersections and unions, their network cannot produce
a natural and compact CSG assembly for relatively complex CAD shapes with intricate concavities and topological details, such as the CAD part
shown in Fig.~\ref{fig:teaser}.
In this paper, we present DualCSG, a novel neural network composed of two {\em dual and complementary branches\/} for unsupervised learning of CSG
tree representations of 3D CAD shapes. As shown in Fig.~\ref{fig:pipe-line}, our network follows a {\em fixed-order\/} CSG assembly, like most
previous unsupervised CSG representation learning models~\cite{Capri_Yu,ren2021csgstump,bspnet}. The key difference to all of them however, is that our
network has a dedicated branch, the {\em residual branch\/}, to assemble the, potentially complex, {\em complement\/} or residual shape that is to be subtracted from an
overall cover shape. In turn, the cover shape is modeled by the other branch, the {\em cover branch\/}.
The two branches both construct a union of primitive intersections, but the residual branch also
learns {\em primitive inverses\/} while operating over the complement space.
Architecturally, the two branches almost replicate each other, but they have independent network parameters.
\rz{Given the challenge of unsupervised learning of CSG tree assemblies amid significant structural diversity among CAD shapes, our
network is not designed to learn a unified model over a shape collection. Rather, it {\em overfits\/} a given 3D CAD shape by optimizing a
compact CSG assembly of quadric surface primitives to approximate the shape. The learning problem is still challenging
since the number, selection, and assembly of the primitives are unknown and involve a complex search space.}
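To make the primitive representation concrete, the following minimal sketch (ours, not the paper's implementation; the coefficient values are illustrative assumptions) evaluates an implicit quadric of the form $ax^2+by^2+cz^2+dx+ey+fz+g$, where a query point lies inside the primitive when the value is at most zero. A plane half-space and a sphere are both instances of this single equation.

```python
# Hypothetical sketch of implicit quadric primitives of the form
#   q(x, y, z) = a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z + g,
# where q(p) <= 0 means point p lies inside the primitive.
# The coefficient vectors below are illustrative, not learned values.

def quadric(coeffs, p):
    a, b, c, d, e, f, g = coeffs
    x, y, z = p
    return a*x*x + b*y*y + c*z*z + d*x + e*y + f*z + g

# Unit sphere centered at the origin: x^2 + y^2 + z^2 - 1 <= 0.
sphere = (1, 1, 1, 0, 0, 0, -1)
# Half-space below the plane z = 0.5: z - 0.5 <= 0.
halfspace = (0, 0, 0, 0, 0, 1, -0.5)

inside = lambda coeffs, p: quadric(coeffs, p) <= 0

print(inside(sphere, (0, 0, 0)))     # True: the origin is inside the sphere
print(inside(sphere, (2, 0, 0)))     # False: outside the sphere
print(inside(halfspace, (0, 0, 1)))  # False: above the plane z = 0.5
```

A learned assembly would select and intersect many such primitives; the point of the single equation is that planes, spheres, cylinders, and other quadrics all share one parameterization.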
In contrast to CAPRI-Net, our method is {\em provably general\/}, meaning that any CSG tree can be
converted into an equivalent DualCSG representation. Our dual-branch network is fully differentiable and can be trained end-to-end
with only the conventional occupancy loss for neural implicit models~\cite{imnet,bspnet,Capri_Yu}. With both operands of the final difference
operation capable of learning general CAD assemblies, our network excels at representing complex and high-genus CAD shapes which challenge state-of-the-art
methods, as shown in Fig.~\ref{fig:teaser}. We demonstrate both quantitatively and qualitatively that our network, when trained on ABC~\cite{ABC} or ShapeNet~\cite{chang2015shapenet}, produces CSG reconstructions with superior quality, more natural trees, and better quality-compactness tradeoff than all existing alternatives, including BSP-Net~\cite{bspnet}, CSG-Stump~\cite{ren2021csgstump},
and CAPRI-Net~\cite{Capri_Yu}.
\section{Method}
\label{sec:method}
To reconstruct high-genus and complex target shapes with various concavities, one effective approach is to produce a volume containing the entire target shape and subtract a residual volume from it. Here, one can observe an analogy to CNC machining in mechanical engineering. In this setting, the target shape can be constructed by producing the cover and residual shapes via two separate cover and residual branches and then subtracting the two to produce the final result.
By learning how to reconstruct cover and residual shapes separately, details such as complex concavities are better recovered by subtraction in the final stage.
In this work, called DualCSG, we reconstruct a given 3D shape $S$ by producing shape $S_C$ that contains or \emph{covers} the given shape $S$ along with a \emph{residual} shape $S_R$ that if it gets subtracted from $S_C$, $S$ will be produced (i.e., $S\approx S_C-S_R$). $S_C$ and $S_R$ are obtained by a set of CSG operations. Sampled from a Gaussian distribution, the input to DualCSG is a feature code that is optimized along with the network's parameters to reconstruct a given 3D shape $S$ and the output is a set of occupancy values that are optimized to fit $S$. More details about the network are explained in Section \ref{sec:DCSG}.
CAPRI-Net \cite{Capri_Yu} also tries to perform a shape difference at the end by predicting a set of primitives that undergo a fixed order of intersection, union, and difference operations to obtain the target shape. However, this sequence is not \emph{general} and cannot support all shapes (\rz{see supplementary material}).
In addition, CAPRI-Net first uses an encoder that is pre-trained on the entire dataset before it is optimized/overfit to each input shape. In contrast, we avoid using an encoder in DualCSG, as we found that it improves neither convergence accuracy nor speed. Since DualCSG is optimized for a single shape, it benefits from a smaller network with fewer parameters compared to CAPRI-Net (see Table~\ref{tab:mesh2csg_abc}).
\if 0
\begin{prop}
\label{prop:capri}
The fixed order introduced by CAPRI-Net is not general; i.e., it cannot cover all possible permutations of CSG operations applied to convex shapes.
\end{prop}
\vspace{-1mm}
\begin{prop}
\label{prop:dCSG}
The operation sequence in DualCSG is general; i.e., it is able to reproduce any CSG sequence.
\end{prop}
\begin{proof} Please see the proofs in the supplementary material.
\end{proof}
\fi
There are approaches such as CSG-Stump \cite{ren2021csgstump} that utilize a fixed CSG order of inverse, intersection, and union. This sequence is theoretically able to reproduce any CSG sequence, since the difference operation can be achieved by inverse followed by intersection. However, this way, difference operations can only be applied to basic primitives at the early levels of the CSG sequence. To reproduce a shape such as the S in a box illustrated in Fig.~\ref{fig:s_example}, CSG-Stump has to approximate it either by assembling a union of many small primitive pieces, as in (a), or by subtracting a union of basic primitives (e.g., circles, boxes, and triangles), as in (b), instead of applying the natural difference between a box and an S shape. Therefore, CSG-Stump fails to produce \emph{compact} results and causes irregular artifacts/bumps on the reconstructed surface. CAPRI-Net does apply the difference as the last operation, but uses a union of convex shapes to approximate the S shape, as in (c); thus the solution of CAPRI-Net is not \emph{compact} either. Only DualCSG produces the \emph{compact} solution in (d), by introducing two separate CSG branches: the cover branch produces the box and the residual branch produces the S shape, since the residual branch is capable of producing arbitrary and complex residual shapes.
Formally, in DualCSG, we utilize two branches: \emph{cover branch} that produces cover shape $S_C$, which covers the target shape $S$ with a combination of convex shapes (Fig.~\ref{fig:pipe-line} Top) and \emph{residual branch} (Fig.~\ref{fig:pipe-line} Bottom) that produces the residual shape $S_R$ that is subtracted from $S_C$ to produce $S$.
To support generality, the set of primitives in the residual branch includes both convex primitives and complementary primitives (see Section~\ref{sec:primrep}). This allows incorporating difference operations at early stages of the residual branch using inverse and intersection operations. Such alteration produces complex residual shapes that can be used to reconstruct general, high-genus, and complex concave shapes. \rz{See the supplementary material for a formal proof of the generality of DualCSG.}
\subsection{DualCSG}
\label{sec:DCSG}
$S_C$, the output of the cover branch, is the union of convex shapes. $S_R$, the output of the residual branch, can be quite complex since it is the union of possibly concave and convex shapes. This property, which was missing in CAPRI-Net, significantly empowers the network, as many complicated shapes can be constructed by taking the difference between a simple convex shape and an arbitrary concave or convex shape (see Figures \ref{fig:pipe-line} and \ref{fig:csg_tree_abc}).
In DualCSG, we only optimize a feature code and network weights to fit the target shape (similar to Auto-decoder in DeepSDF~\cite{DeepSDF}). This is efficient since DualCSG is a light network that quickly converges to each shape. Starting from a feature code, we pass it to the primitive prediction network and generate two matrices that hold the primitives' parameters. Each matrix is used to determine the \emph{approximated signed distance} (ASD) of a set of query points sampled in the space in which the shape is embedded. These two sets of ASD values are separately passed to cover and residual branches, and each branch has an intersection and a union layer. The cover branch (Fig.~\ref{fig:pipe-line} Top) and the residual branch (Fig.~\ref{fig:pipe-line} Bottom) generate point occupancy values indicating whether a point is inside or outside $S_C$ and $S_R$ respectively. The difference between $S_C$ and $S_R$ forms the final shape.
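As a minimal illustration of the final difference step $S \approx S_C - S_R$ (an illustrative NumPy sketch, not the authors' implementation; boolean occupancies stand in for the network's soft indicators), a query point belongs to the final shape exactly when it is inside the cover shape and outside the residual shape:

```python
import numpy as np

def difference_occupancy(inside_cover, inside_residual):
    """Occupancy of the final shape S = S_C - S_R.

    A query point is inside the final shape exactly when it is
    inside the cover shape and outside the residual shape.
    """
    inside_cover = np.asarray(inside_cover, dtype=bool)
    inside_residual = np.asarray(inside_residual, dtype=bool)
    return inside_cover & ~inside_residual

# Four query points covering all (in cover, in residual) combinations;
# only the point inside S_C but outside S_R survives the subtraction.
in_c = np.array([True, True, False, False])
in_r = np.array([True, False, True, False])
final_occ = difference_occupancy(in_c, in_r)
```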
\subsection{Primitive Representation}
\label{sec:primrep}
We use a more general primitive form for the residual branch in comparison with the cover branch to generate complex residual shapes. This is one of the key differences between our method and CAPRI-Net~\cite{Capri_Yu}. The primitive prediction network (an MLP) receives a code of size 256 and outputs two matrices $\nv{P}_C\in \mathbb{R}^{p\times7}$ (fed to the cover branch) and $\nv{P}_R\in \mathbb{R}^{p\times7}$ (fed to the residual branch), each contains parameters of $p$ primitives (see Fig.~\ref{fig:pipe-line}). Primitives in $\nv{P}_C$ are represented by a quadric equation same as CAPRI-Net\cite{Capri_Yu}:
\begin{equation}
|a|x^2+|b|y^2+|c|z^2+dx+ey+fz+g=0,
\label{eq:pos_primi}
\end{equation}
where the first three coefficients are constrained to be positive to represent convex primitives.
In DualCSG, while half of the primitives in $\nv{P}_R$ are the same as Equation~(\ref{eq:pos_primi}), we require the other half to be inverse convex primitives by constraining the first three coefficients to be negative:
\begin{equation}
-|a|x^2-|b|y^2-|c|z^2+dx+ey+fz+g=0.
\label{eq:neg_primi}
\end{equation}
We show two primitives produced by Equations~(\ref{eq:pos_primi}) and~(\ref{eq:neg_primi}) along with a 2D visualization of the \textit{cross-section} near the surface of each primitive (Fig.~\ref{fig:two_quadratic_functions}). Considering the universal space as a cube, for convex primitives (Fig.~\ref{fig:two_quadratic_functions} (a)), the ASDs of query points enclosed by the curved surface are negative, meaning that these points are inside the primitive. For the inverse primitive (Fig.~\ref{fig:two_quadratic_functions} (b)), on the other hand, the ASDs of query points enclosed by the curved surface are positive, meaning that the complement space is inside the primitive. The negative constraints in Equation~(\ref{eq:neg_primi}) allow the residual shape to become complex, with detailed concavities.
For reconstruction, $n$ points near the shape's surface are sampled and their ASD to all primitives is calculated similar to CAPRI-Net \cite{Capri_Yu}. For each point $\nv{q}_j = (x_j, y_j, z_j)$, its ASD is captured in matrix $\nv{D} \in \mathbb{R}^{n\times p}$ as: $\nv{D}_C(j,:) = \nv{Q}(j,:) \nv{P}_C^T$ and $\nv{D}_R(j,:) = \nv{Q}(j,:)\nv{P}_R^T$, where $\nv{Q}(j,:) = (x_j^2, y_j^2, z_j^2, x_j, y_j, z_j, 1)$ is the $j$th row of $\nv{Q}$.
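The ASD computation above can be sketched in a few lines of NumPy (an illustrative sketch, not the released implementation; the primitive counts and random parameters are placeholders). Cover primitives constrain the quadratic coefficients $(a,b,c)$ to be non-negative, while half of the residual primitives flip them to be non-positive:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 4  # number of query points and primitives (tiny, for illustration)

# Raw primitive parameters (a, b, c, d, e, f, g) per primitive.
P_raw = rng.normal(size=(p, 7))

# Cover branch: force a, b, c >= 0 (convex quadrics, Eq. for P_C).
P_C = P_raw.copy()
P_C[:, :3] = np.abs(P_C[:, :3])

# Residual branch: first half convex, second half inverse (a, b, c <= 0).
P_R = P_raw.copy()
P_R[: p // 2, :3] = np.abs(P_R[: p // 2, :3])
P_R[p // 2:, :3] = -np.abs(P_R[p // 2:, :3])

# Query matrix Q with rows (x^2, y^2, z^2, x, y, z, 1).
q = rng.uniform(-0.5, 0.5, size=(n, 3))
Q = np.concatenate([q**2, q, np.ones((n, 1))], axis=1)

# Approximated signed distances: D(j, :) = Q(j, :) P^T.
D_C = Q @ P_C.T
D_R = Q @ P_R.T
print(D_C.shape, D_R.shape)  # (5, 4) (5, 4)
```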
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/two_quadratic_functions.pdf}
\caption{Visualizations of convex primitive shown in (a) and primitive inverse shown in (b). $n$ denotes the normal direction pointing outside the surface.}
\label{fig:two_quadratic_functions}
\vspace{-3mm}
\end{figure}
\subsection{DualCSG Branches}
We briefly discuss details of our DualCSG branches, following notations similar to BSP-Net~\cite{bspnet} and CAPRI-Net~\cite{Capri_Yu}. We input the ASD matrix $\nv{D}_C$ into the cover branch and $\nv{D}_R$ into the residual branch, and output vectors $\nv{a}_C$ and $\nv{a}_R$, indicating whether query points are inside/outside the cover shape $S_C$ and residual shape $S_R$, respectively. Each branch contains an intersection layer and a union layer adopted from BSP-Net~\cite{bspnet}. In addition, to encourage the difference operation between the cover and residual shapes to produce the final shape, we use the same difference loss as CAPRI-Net~\cite{Capri_Yu}, which encourages $S_C$ from the cover branch to cover the volume occupied by the ground-truth shape and $S_R$ from the residual branch to subtract a meaningful residual volume.
Our DualCSG branches aim to produce two vectors indicating query points inside/outside cover shape $S_C$ and residual shape $S_R$ from the predicted primitives.
The CSG operation order in each branch is the same as BSP-Net~\cite{bspnet}, which contains an intersection layer and a union layer. Note that the weights of the two branches are not shared.
During training, the inputs to intersection layers are two ASD matrices $\nv{D}_R \in \mathbb{R}^{n \times p}$ and $\nv{D}_C \in \mathbb{R}^{n \times p}$.
Primitives involved in forming intersected shapes are selected by two learnable matrices $\nv{T}_C \in \mathbb{R}^{p \times c}$ and $\nv{T}_R \in \mathbb{R}^{p \times c}$, where $c$ is the number of intersected solid shapes.
We obtain $\nv{Con}_C \in \mathbb{R}^{n\times c}$ from the intersection layer; query point $\nv{q}_j$ is inside the intersected solid shape $i$ only when $\nv{Con}_C(j,i)=0$ ($\nv{Con}_R$ is defined analogously, with subscripts $R$):
\begin{equation}
\label{eqa:hard_intersection}
\nv{Con}_C = \text{relu}(\nv{D}_C)\nv{T}_C
\hspace{0.35cm}
\begin{cases}
0 & \text{in,} \\
> 0 & \text{out.}
\end{cases}
\end{equation}
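A minimal sketch of this hard-stage intersection layer (plain NumPy, not the authors' code; the binary selection matrix `T` stands in for the learned selection weights): since $\text{relu}$ zeroes out negative ASDs, a row of `Con` is zero exactly when the point is inside every selected primitive.

```python
import numpy as np

def intersection_layer(D, T):
    """Con = relu(D) @ T.

    D : (n, p) approximated signed distances (negative = inside primitive).
    T : (p, c) selection matrix (binary in the hard stage).
    Con(j, i) == 0 iff point j is inside every primitive selected for
    intersected shape i, i.e., inside the intersection.
    """
    return np.maximum(D, 0.0) @ T

# Two primitives, one intersected shape selecting both of them.
D = np.array([[-0.2, -0.1],   # point 0: inside both primitives
              [-0.3,  0.4]])  # point 1: outside the second primitive
T = np.array([[1.0], [1.0]])
Con = intersection_layer(D, T)
```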
Then all the shapes obtained by the intersection operation are combined by two union layers to find the cover shape $S_C$ as well as the residual shape $S_R$.
The inside/outside indicators of the combined shapes are stored in the vectors $\nv{a}_C \in \mathbb{R}^{n\times 1}$ and $\nv{a}_R \in \mathbb{R}^{n\times 1}$, indicating whether a point is inside/outside the cover and residual shapes, respectively.
Similar to \cite{Capri_Yu}, $\nv{a}_C$ and $\nv{a}_R$ are computed in a multi-stage fashion ($\nv{a}^+$ and $\nv{a}^*$ for early and final stages). Specifically, $\nv{a}^+_C$ is obtained by the following equation:
\begin{equation}
\begin{aligned}
&\nv{a}^+_C(j) = \\
&\mathscr{C}(\sum_{1 \leq i \leq c} \nv{W}_C(i) \mathscr{C}(1 - \nv{Con}_C(j,i)))
\hspace{0.1cm}
\begin{cases}
1 & \approx \text{in,} \\
<1 & \approx \text{out,}
\end{cases}
\label{label_a_+}
\end{aligned}
\end{equation}
where $\nv{W}_C \in \mathbb{R}^{c}$ is a learnable weighting vector and $\mathscr{C}$ is a clip operation to $[0, 1]$, and $\nv{a}^+_R$ is defined similarly with $\nv{W}_R$ and $\nv{Con}_R$. In later stages, $\nv{a}^*_C$ and $\nv{a}^*_R$ are obtained by finding $\text{min}$ of each row of $\nv{Con}_C$ and $\nv{Con}_R$:
\begin{equation}
\nv{a}^*_C(j) = \min_{1 \leq i \leq c} (\nv{Con}_C(j,i))
\hspace{0.35cm}
\begin{cases}
0 & \text{in,} \\
> 0 & \text{out.}
\end{cases}
\label{label_a_*}
\end{equation}
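Both union stages can be sketched as follows (illustrative NumPy only; `W` plays the role of the learnable weighting vector, here simply set to ones). The soft stage saturates near 1 for inside points, while the hard stage takes a row-wise minimum of `Con`:

```python
import numpy as np

def union_soft(Con, W):
    """Early-stage union: a+(j) = clip(sum_i W(i) * clip(1 - Con(j, i))).

    Values of 1 mean the point is (approximately) inside the union;
    values below 1 mean (approximately) outside.
    """
    inside_each = np.clip(1.0 - Con, 0.0, 1.0)   # ~1 inside shape i
    return np.clip(inside_each @ W, 0.0, 1.0)

def union_hard(Con):
    """Final-stage union: a*(j) = min_i Con(j, i); 0 = inside, >0 = outside."""
    return Con.min(axis=1)

Con = np.array([[0.0, 0.7],    # point 0: inside shape 0 -> inside the union
                [0.5, 0.9]])   # point 1: outside both -> outside the union
W = np.array([1.0, 1.0])
a_plus = union_soft(Con, W)
a_star = union_hard(Con)
```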
\subsection{Loss Functions and Training Strategy}
We utilize the same reconstruction loss and multi-stage training as CAPRI-Net's to facilitate differentiable operations in the early stage and gradually achieve good results in the end.
The following loss function is used in our method:
\begin{equation}
L = L_{rec} + L_{\nv{T}} + L_{\nv{W}},
\end{equation}
where $L_{rec}$ is the reconstruction loss applied to $\nv{a}_C$ and $\nv{a}_R$; it forces the subtracted result between the cover and residual shapes to be close to the input shape. $L_{\nv{T}}$ and $L_{\nv{W}}$ are the losses applied to the intersection- and union-layer weights.
Note that, differently from CAPRI-Net~\cite{Capri_Yu}, we have network weights from both DualCSG branches: $\nv{T} = [\nv{T}_C, \nv{T}_R]$ and $\nv{W} = [\nv{W}_C, \nv{W}_R]$. However, $L_{\nv{T}}$, $L_{\nv{W}}$, and the difference loss are the same as in CAPRI-Net~\cite{Capri_Yu}.
Please refer to more details on the losses in the supplementary material.
\section{Results}
\label{sec:result}
In our experiments, we use two public datasets ABC~\cite{ABC} and ShapeNet~\cite{chang2015shapenet}. On each dataset, we test the quality of producing CSG operations from two inputs: mesh and point cloud. We present qualitative and quantitative results of our experiments to demonstrate the effectiveness of DualCSG.
\subsection{Training Details}
\label{sec:train_detail}
Since the methods we compare with require an additional, time-consuming optimization at test time to achieve satisfactory results (e.g., 30 min per shape for CSG-Stump), we randomly selected a moderately sized subset of shapes as the test set for evaluation: 500 shapes from ABC, and 50 from each of the 13 categories of ShapeNet (650 shapes in total). In addition, we ensured that 80\% of the selected shapes from ABC have genus larger than two, with more than 10K vertices, to include complex structures. Experiments were performed on an Nvidia GeForce RTX 2080 Ti GPU.
In our experiments, we set the number of maximum primitives as $p=512$ and the number of maximum intersections as $c=32$ for each branch to support complex shapes. Since both these numbers are half of the setting in CAPRI-Net, the size of our CSG layers is half of CAPRI-Net's. The size of our latent code for all input types is 256, and a two-layer MLP is used to predict the parameters of the primitives from the input feature code.
We train DualCSG per shape by optimizing the latent code, primitive prediction network, intersection layer, and union layer.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/mesh2csg_abc.png}
\caption{Comparing results for 3D meshes in ABC.}
\label{fig:mesh2csg_abc}
\end{figure}
\subsection{Mesh to CSG Representation}
\label{sec:3d_ae}
Given a 3D mesh, the task is to learn an accurate and compact CSG representation for this shape. To do so, we first sample 24,576 points around the shape's surface (i.e., with a distance up to $1/64$) and 4,096 random points in 3D space. All 28,672 points are then scaled into the range $[-0.5, 0.5]$; these points, along with their occupancy values, are used to optimize the network.
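The sampling scheme above can be sketched as follows, using an analytic sphere SDF as a stand-in for the input mesh (a sketch only, not the authors' pipeline, which labels occupancy against the mesh itself; the point counts follow the text):

```python
import numpy as np

rng = np.random.default_rng(1)
RADIUS = 0.35  # placeholder "shape" inside the normalized cube

def sdf_sphere(pts, r=RADIUS):
    # Analytic SDF standing in for a mesh: negative inside the shape.
    return np.linalg.norm(pts, axis=1) - r

# 24,576 near-surface samples: surface points offset by up to 1/64.
n_near = 24576
dirs = rng.normal(size=(n_near, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
offsets = rng.uniform(-1 / 64, 1 / 64, size=(n_near, 1))
near = dirs * (RADIUS + offsets)

# 4,096 uniform random samples over the normalized cube [-0.5, 0.5]^3.
far = rng.uniform(-0.5, 0.5, size=(4096, 3))

pts = np.concatenate([near, far])               # 28,672 points in total
occ = (sdf_sphere(pts) < 0).astype(np.float32)  # 1 = inside, 0 = outside
print(pts.shape, occ.shape)  # (28672, 3) (28672,)
```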
We compare DualCSG with BSP-Net~\cite{bspnet}, CSG-Stump~\cite{ren2021csgstump} and CAPRI-Net~\cite{Capri_Yu}, which output structured parametric primitives. For a fair comparison, we optimize all of these networks with the same number of iterations. BSP-Net, CSG-Stump, and CAPRI-Net are pre-trained on the training set provided in CAPRI-Net before optimization to achieve better initialization. Note that CSG-Stump uses different network settings for shapes from ABC (with shape differences) and ShapeNet (without shape difference); we therefore follow the same settings in our comparisons. For each shape, BSP-Net takes about 15 min, CSG-Stump about 30 min, and CAPRI-Net about 3 min to converge. The training process in DualCSG runs 12,000 iterations for each stage, taking about 5 minutes per shape.
\vspace{-8pt}
\paragraph{Evaluation Metrics.}
Quantitative metrics for shape reconstruction are symmetric Chamfer Distance (CD), Normal Consistency (NC), Edge Chamfer Distance \cite{bspnet} (ECD), and Light Field Distance \cite{LFD} (LFD). For ECD, we set the threshold for normal cross products to 0.1 for extracting points close to edges. CD and ECD are computed on 8K sample points on the surface and multiplied by 1,000. For LFD, we render each shape at ten different views and measure the Light Field Distances. In addition, we compare the number of primitives \#P to evaluate the compactness of shapes since all CSG-based modeling methods predict some primitives that are combined with intersection operations.
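For concreteness, one common convention for the symmetric Chamfer Distance (mean squared nearest-neighbour distance in both directions, scaled by 1,000 as in our tables) can be written as follows; this is an illustrative sketch, not necessarily the exact evaluation script:

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer Distance between point sets A (n, 3) and B (m, 3).

    Uses squared nearest-neighbour distances averaged in both
    directions, multiplied by 1,000 as in the reported tables.
    """
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # (n, m) pairwise
    return 1000.0 * (d2.min(axis=1).mean() + d2.min(axis=0).mean())

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.1, 0.0], [1.0, 0.0, 0.0]])
cd = chamfer_distance(A, B)
```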
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/mesh2csg_shapenet.png}
\caption{Comparing results for 3D meshes in ShapeNet.}
\label{fig:mesh2csg_shapenet}
\end{figure}
\vspace{-8pt}
\paragraph{Evaluation and Comparison.}
We provide visual comparisons on representative examples from the ABC dataset in Fig.~\ref{fig:mesh2csg_abc} and the ShapeNet dataset in Fig.~\ref{fig:mesh2csg_shapenet}; more results can be found in the supplementary material.
Our method consistently reconstructs more accurate shapes with geometric details and concavities. BSP-Net simply assembles convex shapes to fit the target shape and obtain less compact results without a difference operation. CSG-Stump tends to use considerably more difference operations to reconstruct shapes. This also causes the shapes' surface to be carved by many redundant primitives (i.e., lack of compactness). In addition, since CSG-Stump does not support difference operations between complex shapes, it fails to reconstruct small holes or intricate concavities. As the intermediate shapes subtracted in CAPRI-Net's fixed CSG sequence are only unions of convex shapes, it fails to reproduce target shapes with complex concavities.
\input{tables/mesh2csg_abc}
\input{tables/mesh2csg_shapenet}
As shown in Table~\ref{tab:mesh2csg_abc}, DualCSG achieves the best reconstruction quality and compactness in all metrics on the ABC dataset compared to other methods. As shown in Table~\ref{tab:mesh2csg_shapenet}, the reconstruction accuracy of DualCSG on ShapeNet is mostly better than CAPRI-Net (except for NC). DualCSG may use slightly more but comparable primitives than CAPRI-Net on shapes with complex cavities to produce better details and concavities, such as the chair back in Fig.~\ref{fig:mesh2csg_shapenet}.
\input{tables/notrain}
\vspace{-8pt}
\paragraph{Ablation study.}
\rz{DualCSG uses an auto-{\em decoder\/} network architecture~\cite{DeepSDF}, requiring no pre-training. In contrast, CAPRI-Net adopts the traditional auto-encoder network with pre-training. To remove potential impacts of pre-training from the evaluation, we compare DualCSG to CAPRI-Net under the same auto-decoder setting, where the encoder of CAPRI-Net is removed along with the pre-training. The results are shown in Table~\ref{tab:notrain}, where we can see that CAPRI-Net still under-performs compared to DualCSG, as well as to its version with pre-training as provided in Table~\ref{tab:mesh2csg_shapenet}. We believe that the reason is that the limitation of only using convex shapes by CAPRI-Net requires a better network parameter initialization from the pre-training step, while DualCSG can quickly overfit to the test shape from scratch owing to its more general representation capabilities.}
\begin{figure*}[t!]
\centering
\includegraphics[width=.99\linewidth]{figs/csg-tree_comparison_v1.pdf}
\caption{Comparing learned CSG Trees of 3D examples from ABC shown in the first row and ShapeNet in the second row.}
\label{fig:csg_tree_abc}
\end{figure*}
\vspace{-8pt}
\paragraph{CSG Trees Comparison.}
Our network can learn to reconstruct a 3D shape by producing a \rz{plausible} CSG tree for a given shape without any direct CSG supervision as shown in Fig.~\ref{fig:csg_tree_abc}. We simplify CSG trees from BSP-Net, CAPRI-Net, and DualCSG by hiding their primitives to better illustrate their learned CSG modeling process.
Fig.~\ref{fig:csg_tree_abc} clearly reveals the limitations of the other methods. BSP-Net combines many convex shapes (20 in this case) to construct the final shape, which uses many primitives and loses compactness. CSG-Stump uses a limited set of pre-defined primitives, such as boxes and spheres, limiting its reconstruction accuracy and requiring a large number of primitives (53 for this shape; we only show a subset here). Besides, CSG-Stump uses a primitive complement layer (illustrated as operation $C$) to form the subtraction operation for ABC shapes, which can result in many redundant difference operations. CAPRI-Net can only subtract a union of convex shapes to approximate intricate concavities, which limits its reconstruction accuracy and generality.
In contrast, shapes produced by our DualCSG are composed of fewer intermediate shapes in comparison with other methods, which makes our CSG tree more compact while achieving better reconstruction quality.
Note that the desired task is not just to obtain a fewer number of \emph{parts} with any complexity, but rather to obtain fewer simple \emph{primitives} or \emph{convex shapes} that are constructed via CSG operations. If the former objective is favorable, other methods such as Neural Parts \cite{Paschalidou2021CVPR} might be able to produce fewer parts.
\input{tables/pc2csg_abc}
\begin{figure*}[!t]
\centering
\includegraphics[width=.98\linewidth]{figs/pc2csg.pdf}
\caption{Comparing CSG representation learning from 3D point cloud in ABC shown in column 1-6 and ShapeNet in column 7-12.}
\label{fig:pc2csg_abc}
\end{figure*}
\subsection{Point Clouds to CSG}
In our last experiment, we reconstruct CAD shapes from point clouds, each containing 8,192 points. For each input point, we sample $8$ points along its normal, with perturbations sampled from a Gaussian distribution $(\mu = 0, \sigma=1/64)$. If a sampled point lies in the direction opposite to the normal vector, its occupancy value is 1; otherwise it is 0. This way, we sample $65,536$ points to fit the network to each shape. Similar to the mesh-to-CSG experiment, we only optimize the latent code, the primitive prediction network, and the selection matrices.
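The normal-based sampling can be sketched as follows (an illustrative sketch, not the authors' code; a toy sphere point cloud stands in for the real input, and the sign of the offset $t$ encodes "opposite to the normal direction"):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy input: 8,192 surface points on a sphere with outward normals
# (on a sphere the outward normal is simply the radial direction).
m = 8192
normals = rng.normal(size=(m, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
surface = 0.35 * normals

# For each point, sample 8 offsets t ~ N(0, 1/64) along its normal.
k = 8
t = rng.normal(0.0, 1 / 64, size=(m, k, 1))
samples = surface[:, None, :] + t * normals[:, None, :]

# Occupancy label: 1 if the sample lies opposite to the normal
# direction (t < 0, i.e., behind the surface), else 0.
occ = (t[..., 0] < 0).astype(np.float32)

samples = samples.reshape(-1, 3)   # 65,536 fitting points in total
occ = occ.reshape(-1)
print(samples.shape, occ.shape)  # (65536, 3) (65536,)
```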
Quantitative comparisons in Table~\ref{tab:pc2csg_abc} and visual comparisons in Fig.~\ref{fig:pc2csg_abc} show that our network outperforms BSP-Net, CSG-Stump, and CAPRI-Net in different reconstruction similarity and compactness metrics on ABC dataset. Additional ShapeNet results can be found in the supplementary material.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,655 |
Q: Echo and Cat output problem I'm trying to run this script:
#!/bin/bash
for file in /home/dima/Downloads/dir/*
do
if [ -f "$file" ]
then
echo "SSS::################====================#####################" >> allfiles.txt
        cat "$file" >> allfiles.txt
echo "################====================#####################" >> allfiles.txt
fi
done
It works but strings that echo printed can be seen only in the end of file when I need them between the outputs of "cat". How to fix that?
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,642 |
{"url":"http:\/\/www.skimwp.org\/cannot-determine\/cannot-determine-no-bounding-box-latex.php","text":"Home > Cannot Determine > Cannot Determine No Bounding Box Latex\n\n# Cannot Determine No Bounding Box Latex\n\nasked 3 years ago viewed 22847 times Related 26Create list of all external files used by master LaTeX document?7Error 'No bounding' box for PDF image11LaTeX Error: Cannot determine size of graphic Incidentally, you can change this in TexShop from the \"Typeset\" menu. asked 2 years ago viewed 21819 times active 7 months ago Get the weekly newsletter! Figuring out why I'm going over hard-drive quota Java precedence for multiple + and - operators How to tar.gz many similar-size files into multiple archives with a size limit If I news\n\nLaTeX Error: Cannot determine size of graphic in Pictures\/logo.png (no BoundingBox)2No BoundingBox in LaTeX for a JPEG figure1Latex \\includegraphics error: Cannot determine size of graphic in bleh.jpg ( no BoundingBox )1Figures Requirement Guide Cause latex only supports vector graphics (read: eps) \u2013Stephan202 Apr 8 '09 at 21:36 1 For what it's worth, JPG is quite possibly the worst image format to use when Word or phrase for \"using excessive amount of technology to solve a low-tech task\" What is a unifier? have a peek at this web-site\n\nWhy is using let inside a for loop so slow on Chrome? Draw some mountain peaks Was there no tax before 1913 in the United States? Make sure that is ticked to run pdflatex instead of latex. about us tour help blog chat data legal privacy policy work here advertising info mobile contact us feedback Technology Life \/ Arts Culture \/ Recreation Science Other Stack Overflow Server Fault\n\nbmpsize package can also used to replace extractbb. \u2013Leo Liu May 8 '11 at 15:15 5 Thank you for the solution provided. How can a Cleric be proficient in warhammers? 
asked 5 years ago viewed 223917 times active 9 months ago Visit Chat Linked 8 Error including a .png: Cannot determine size of graphic 1 Latex Error: Cannot determine the size LaTeX Error: Cannot determine size of graphic in Pictures\/logo.png (no BoundingBox)1Cannot determine the size of graphic1Error of \u201ccannot determine size of graphics\u201d Hot Network Questions Count trailing truths Why is this\n\nHow do I fix this? Working with DVI is often faster and compatible with more packages. The following one worked for me: \\usepackage[dvipdfmx]{graphicx} \\usepackage{bmpsize} \u2013zeroos May 12 '15 at 20:56 Just to add: specificying natwidth and natheight also solved a problem I had with jpgs http:\/\/tex.stackexchange.com\/questions\/94869\/error-including-a-png-cannot-determine-size-of-graphic Now change the extension in the tex file from \\includegraphics[width=0.8\\textwidth]{tiger.png} to \\includegraphics[width=0.8\\textwidth]{tiger.eps} share|improve this answer answered May 10 '15 at 9:56 NKN 276115 add a comment| up vote 1 down vote\n\nThanks. This Site asked 3 years ago viewed 42618 times active 2 years ago 16 votes \u00b7 comment \u00b7 stats Linked 77 Cannot determine size of graphic 3 pdflatex, dvips: cannot determine size of Edit: Here is the preamble: \\documentclass[dvips,sts]{imsart} \\usepackage{graphicx} \\usepackage{float} \\begin{document} \\begin{figure}[h!] \\centering \\includegraphics[width = \\textwidth]{simLinkError.pdf} \\caption{Blah} \\label{fig:sim1} \\end{figure} \\end{document} graphics pdf texworks share|improve this question edited Jul 17 '13 at 14:02 egreg It probably has to do with what compiler you are using.\n\nWelcome to TeX.SX! http:\/\/skimwp.org\/cannot-determine\/cannot-determine-size-of-graphic-in-latex.php Why is using let inside a for loop so slow on Chrome? Draw some mountain peaks Word or phrase for \"using excessive amount of technology to solve a low-tech task\" If a ring R with 1 has characteristic 0. 
How can I prove its value?\n\n\u2022 Type H for immediate help. ...\n\u2022 Ph.D.\n\u2022 I typed the following in Texmaker \\begin{document} \\begin{figure}[!ht] \\centering \\includegraphics[scale=1]{figure} \\end{figure} \\end{document} The error I get is !\n\u2022 share|improve this answer answered Apr 8 '09 at 21:45 Tom 10.3k42941 yeah.\n\u2022 What to do?\n\u2022 How to say 'can' in Spanish?\n\nIf you use the command pdflatex (making pdf directly) Then the system can read the file and determine its natural size automatically. GraphicsConvertor on a mac will do that for you easily. Whereas a PDF includes DPI and size, a JPEG has only a size in terms of pixels. ( I know this is not the answer you wanted, but it's probably better More about the author Prove that the following statements for a ring R are equivalent: Does The Amazing Lightspeed Horse work, RAW?\n\nshare|improve this answer answered Jan 31 at 1:08 IsaacS 21316 add a comment| protected by Community\u2666 Dec 2 '14 at 13:59 Thank you for your interest in this question. What is the source of your figure? \u2013Hans-Peter E. I get the same errors for both.\n\n## more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed\n\nTax Free when leaving EU through the different country How to justify Einstein notation manipulations without explicitly writing sums? Can I use verb \"to split\" in meaning to \"to run\"? How to justify Einstein notation manipulations without explicitly writing sums? Interconnectivity Actual meaning of 'After all' Can a player on a PC play with a a player on a laptop?\n\nnatwidth=... (not width= as that tries to scale to that size but still needs the natural size. Browse other questions tagged pdf latex eps or ask your own question. 
Kristiansen Sep 17 '13 at 19:29 in some .eps files, the bounding box information is at the end, rather than at the beginning where it really belongs. (and latex click site You are however able to state the natural size of the images using natwidth and natheight which will make latex compile without error.\n\nFiguring out why I'm going over hard-drive quota Java precedence for multiple + and - operators \u0108u oni estas \"en\" a\u016d \"sur\" foto? How can I trust that this is Google? Is adding the \u2018tbl\u2019 prefix to table names really a problem? In fact, I could compile it fine with MacTex on my machine.\n\nThe difference between \"an old,old vine\" and \"an old vine\" How to justify Einstein notation manipulations without explicitly writing sums? If so, please switch to pdflatex.exe as you are importing PNG file. \u2013kiss my armpit Feb 10 '13 at 18:16 Well, I am using TexMaker and I usually compile This is my pillow Is it safe to use cheap USB data cables? Not the answer you're looking for?\n\nRelated 3Included LaTeX figures do not show in dvi but do in pdf3Dimension Preserving JPEG to EPS Conversion1LaTeX porting *.eps images with eps2pdf and german umlauts (mutated vowel)321Inserting a pdf file It is often better to take the JPEG and convert it into a PDF (on a mac) or EPS (on a PC). Join them; it only takes a minute: Sign up Here's how it works: Anybody can ask a question Anybody can answer The best answers are voted up and rise to the Please check that there is no inclusion of epsfig, it is deprecated.\n\nHow can a Cleric be proficient in warhammers? 
more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed I'm working on debian using emacs graphics errors png share|improve this question edited Jan 22 '13 at 15:51 Martin Schr\u00f6der 11.2k43196 asked Jan 22 '13 at 15:02 user24726 41112 marked as Yet another electrical box fill question Can You Add a Multiple of a Matrix Row to itself?","date":"2019-03-20 22:04:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4851512014865875, \"perplexity\": 4544.255435462175}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-13\/segments\/1552912202471.4\/warc\/CC-MAIN-20190320210433-20190320232433-00131.warc.gz\"}"} | null | null |
\section{Introduction}
\label{sec:intro}
In recent years binary black hole (BBH) observations have become a mainstay of gravitational wave (GW)~\citep{gw} detection~\citep{gwtc-1,gwtc-2,gwtc-3}.
Observable BBH signals are produced during the late stages of the decay of a bound orbit of two black holes (BHs): these observations are typically short-lived, and have a distinctive morphology, produced by the final orbits (the ``inspiral'' phase), the merger of the two BHs, and finally the ``ringdown'' of the final, merged BH.
Most BBHs are expected to have circularized before merging, due to the loss of orbital energy during the inspiral phase~\citep{circular_1,circular_2,Abadie_2010}; therefore, current analyses of GW data and parameter estimation of GW sources have focused on binaries with circular orbits.
Eccentricity was not detected~\citep{burst_search_2,eccentricity_search_2,eccentricity_search_3} in the O1 and O2 observing runs of LIGO/Virgo~\citep{ligo,virgo}, while a high-mass BBH event in O3a, GW190521~\citep{gw190521,gw190521_2}, shows evidence that it may have been both highly eccentric and dynamically formed. This evidence comes from parameter estimation analyses~\citep{gw190521_dynamical,gw190521_dynamical_2} using the spin-aligned eccentric waveform approximants \texttt{SEOBNRE}~\citep{gw190521_dynamical,SEOBNRE_1,SEOBNRE_2} and \texttt{TEOBResumS}~\citep{gw190521_dynamical_2,TEOBResumS_2}, as well as numerical relativity simulations~\citep{gw190521_dynamical_3}.
There are several dynamical-formation scenarios~\citep{dynamical_1,iso_spins_1,dynamical_3,dynamical_4,dynamical_5,dynamical_6,dynamical_7} which support the formation and merger of binary systems while retaining eccentricity throughout their lifetime.
One of the formation processes of these eccentric BH binaries is a gravitational radiation driven capture.
Captures are expected to take place in galactic nuclei, where the central super massive BH creates a steep density cusp of stellar-mass BHs.
This can provide a suitable environment for eccentric BBH formation~\citep{bhencounter}.
In accordance with the conditions of the initial orbital energy $E_{\rm orbital}$ and the initial angular momentum $L_{\rm initial}$, the motion of the binary systems can be classified as bound (circular or elliptical) and unbound (parabolic or hyperbolic) orbits~\citep{hyperbolic,encounter_gw_production_1,encounter_gw_production_2,parabolic}.
Gravitationally unbound interactions, or encounters, occur when $E_{\rm orbital} \geq 0$ without direct capture, and the trajectory of one component of the system will be either parabolic or hyperbolic relative to the other.
The GW signals produced from such encounters do not resemble those from bound systems, but will instead produce a strong burst of radiation as the objects have their closest encounter.
The process may be considered a gravitational analogue of Bremsstrahlung~\citep{gw_brem_1}.
In this work we will focus on parabolic encounters~\citep{parabolic_waveform} with $E_{\rm orbital}=0$.
BH encounters that correspond to parabolic orbits at infinity merge quickly if their initial angular momentum is not sufficiently large~\citep{bhencounter}. We use the term ``parabolic black hole capture'' to refer to the systems which form and merge in this way.
In other cases, the two BHs pass by each other, and only merge in the very distant future.
We expect the GW signal from BH capture to be detectable both in the current generation of detectors, and planned future detectors~\citep{bh_encounter_detection_1,bh_encounter_detection_2}.
The expected event rate for hyperbolic BH encounters in galactic nuclei, $\sim 0.9\ {\rm Gpc}^{-3}\ {\rm yr}^{-1}$~\citep{bh_encounter_detection_3}, is comparable to estimates for other sources of GW signals, independent of any detector.
GW signals from BBH events have a characteristic morphology: during the inspiral, before the two BHs merge, the signal is sinusoidal, with a frequency that grows as the orbital radius decreases.
The remainder of the signal is produced from the merger, and the ringdown.
As the total mass of the binary increases the frequency at which the merger occurs decreases, and consequently, for high mass systems the merger may occur close to the lowest frequency the detector is capable of successfully observing.
In such a scenario the signal might appear to have little or no inspiral due to the lower frequency limit of ground-based detectors.
The waveforms of BH encounters, or parabolic BH captures, will often have no inspiral, or only a small number of inspiral cycles.
As a result it is not implausible that an event may be misinterpreted as a high-mass BH coalescence when in fact it was a BH capture.
An example of an observation which fits these conditions is GW190521, which has been interpreted as a BBH coalescence with total mass around $142\ \ensuremath{\mathrm{M}_{\rm \odot}}$~\citep{gw190521,gw190521_2,gw190521_4,gw190521_5,gw190521_3}.
The use of unmodelled ``burst'' searches has been proposed for these highly eccentric mergers~\citep{burst_search,burst_search_2,burst_search_3}, though such searches are less able to dig deep into the noise for signals, and their sensitivity is hard to quantify~\citep{search_comment}.
Therefore it is probable that GW190521-like signals will be detected using analyses designed to identify and analyse BBH signals in the future, rather than those designed for more exotic waveform morphologies.
At present, parameter estimation analyses using an approximant for parabolic capture signals are not viable, due to a lack of sufficiently flexible waveform models.
We therefore ask if analyses using BBH waveform models will produce different, and distinguishable, results when used to analyse both BBH and parabolic capture signals.
In order to do this we conducted two sets of analyses: one analysing simulated BBH signals injected into simulated noise, and one analysing simulated parabolic capture signals in simulated noise.
We then compare the posterior probability distributions of the BBH and parabolic capture injections.
The conventional approach for parameter estimation used in GW analysis uses Bayesian inference and stochastic sampling, which usually requires a large amount of computation.
Therefore, we employed a neural network that can quickly produce a posterior under the BBH model while maintaining reasonably high accuracy. Comparative studies of such neural network approaches and conventional Bayesian parameter estimation techniques have shown that a properly trained neural network is capable of emulating a posterior distribution to an acceptable level of accuracy~\citep{vitamin}.
Given the large amount of simulated data required for our study, we used the network to obtain the posteriors for each simulation.
We compare the mass, distance and merger time posterior distributions using the Jensen-Shannon (JS) divergence~\citep{jsd_1} to quantify any differences between these posteriors for the simulated population of black hole captures and BBH signals.
The outline of this paper is as follows.
In section \ref{sec:data}, we describe the production of the simulated parabolic BH capture and BBH signals which are used for the analysis.
In section \ref{sec:methods}, we describe the parameter estimation method, and the use of a deep learning approach to improve computing speed.
The results of this analysis are presented in section \ref{sec:result}.
Finally, the main outcomes of the work and possible future directions are summarised in section \ref{sec:conclusion}.
\section{Mock data creation}
\label{sec:data}
Recent advances in numerical relativity~\citep{parabolic_waveform} have allowed the production of gravitational waveforms for unequal mass BH encounters under the parabolic approximation, which were used in this work.
The generated waveforms are applicable to non-spinning pairs of BHs with relative velocities up to $10$--$20\ \%$ of the speed of light.
The four waveforms we used to produce our mock data are from parabolic BH captures with mass ratios $q \in \{ 1, 4, 8, 16\}$.
While these waveforms contain a merger and a ringdown, they lack the characteristic sinusoidal morphology of the inspiral.
\begin{figure*}
\centering
\vspace{0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=0.5pt
\subfigcapskip=-5pt
\subfigure[]{
\label{plt:noisefree_bbh}
\includegraphics[width=\columnwidth]{./figure/noisefree_BBH.pdf}}
\subfigure[]{
\label{plt:noisy_bbh}
\includegraphics[width=\columnwidth]{./figure/noisy_BBH.pdf}}
\subfigure[]{
\label{plt:noisefree_be}
\includegraphics[width=\columnwidth]{./figure/noisefree_BE.pdf}}
\subfigure[]{
\label{plt:noisy_be}
\includegraphics[width=\columnwidth]{./figure/noisy_BE.pdf}}
\caption{Examples of BBH and parabolic BH capture injections used in this work. The panels on the left show the signal before injection, and those on the right the signal whitened against the power spectrum of the simulated detector noise.
In the top row (a) represents a typical high-mass BBH signal for a system with $m_{1}=78\ \ensuremath{\mathrm{M}_{\rm \odot}}$, $m_{2}=72\ \ensuremath{\mathrm{M}_{\rm \odot}}$ at a distance of $1400\ {\rm Mpc}$; (b) depicts the signal from (a) whitened.
In the bottom row (c) represents a parabolic BH capture signal for a system with mass ratio $q = 1$ and a total mass of $150\ \ensuremath{\mathrm{M}_{\rm \odot}}$ at a distance of $5000\ {\rm Mpc}$, with (d) depicting it whitened. All waveforms in this paper correspond to a three-detector configuration, while here we only show the signal in the H1 detector.}
\label{plt:wfm}
\end{figure*}
We used the \texttt{Minke}~\citep{minke} python package to generate the parabolic capture injections.
\texttt{Minke} is a toolkit designed to produce injection sets for signals derived from numerical relativity simulations, and performs the appropriate rescaling required to produce waveforms for systems with any total mass, and at any luminosity distance.
The signal is then projected onto the detector's antenna pattern, and time shifted for the corresponding detector.
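The mass and distance rescaling that \texttt{Minke} performs exploits the scale invariance of vacuum numerical-relativity waveforms: in geometric units the strain amplitude scales as $M_{\rm total}/d_{\rm L}$ and the time axis as $M_{\rm total}$. The sketch below is not \texttt{Minke}'s actual implementation (the function name and constants are ours); it only illustrates this leading-order scaling:

```python
import numpy as np

def rescale_nr_waveform(t_geom, h_geom, total_mass, distance_mpc):
    """Rescale a dimensionless NR waveform (time in units of M, strain
    in units of r*h/M) to physical units for a given total mass (solar
    masses) and luminosity distance (Mpc).  Leading-order scaling only;
    a real pipeline also handles tapering and detector projection."""
    G = 6.674e-11          # m^3 kg^-1 s^-2
    c = 2.998e8            # m / s
    M_sun = 1.989e30       # kg
    Mpc = 3.086e22         # m

    m_sec = total_mass * M_sun * G / c**3   # total mass in seconds
    m_m = total_mass * M_sun * G / c**2     # total mass in metres
    d = distance_mpc * Mpc

    t_phys = t_geom * m_sec    # time axis stretches with total mass
    h_phys = h_geom * m_m / d  # amplitude scales as M / d
    return t_phys, h_phys
```

Doubling the total mass therefore doubles the strain amplitude and stretches the waveform in time, while doubling the distance halves the amplitude.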
The process for producing the mock data set using \texttt{Minke} is as follows.
For each of the four mass ratios we considered, we chose a luminosity distance and a sky location for the parabolic capture waveforms which produced a posterior probability distribution, when analysed by \texttt{VItamin}, that was visually similar to the posterior from a high-mass BBH signal. The posterior of a typical BBH has the following features: obvious peaks and narrow marginal distributions, indicating that the corresponding parameters have been inferred well under the BBH model.
The maxima of the marginal distributions from BH captures are not required to be in the same locations as a typical BBH.
The visual method essentially checks that the posterior distributions neither rail against, nor are compressed towards one of the edges of the prior for a particular parameter.
This led us to choose the waveform parameters shown in Table~\ref{tab:injection}.
The simulated data are created with a fixed total mass of $150\ \ensuremath{\mathrm{M}_{\rm \odot}}$, and a luminosity distance $d_0$ in $[100,8000]\ {\rm Mpc}$.
Here, and elsewhere in this work, masses are quoted in the detector frame.
The right ascension and declination, $\alpha$, $\delta$, and the waveform polarisation, $\psi$ were distributed uniformly for all waveforms.
The detectors we used are LIGO Handford (H1), LIGO Livingston (L1)~\citep{ligo}, and Virgo (V1)~\citep{virgo}.
\begin{table}
\centering
\begin{tabular}[t]{lcc}
\toprule
Parameter & Injection\\
\hline
$m_{\rm total}$ (M$_\odot$) & \multicolumn{1}{c}{150} \\
$d_{\rm L}$ (Mpc) & \multicolumn{1}{c}{$d_{0}$} \\
$t_{0}$ (s) & \multicolumn{1}{c}{0.22}\\
$\alpha$ & [0, $2\pi$]\\
$\delta$ & [$-\pi/2$, $\pi/2$]\\
$\psi$ & [0, $2\pi$]\\
\hline
duration (s)& \multicolumn{1}{c}{1} \\
$\rm t_{start}$ (GPS time)& \multicolumn{1}{c}{1126259642} \\
$\rm t_{ref}$ (GPS time)& \multicolumn{1}{c}{1126259642.5} \\
detector network & \multicolumn{1}{c}{H1, L1, V1} \\
\bottomrule
\end{tabular}
\caption{
The injections of parabolic BH capture mock data used for \texttt{VItamin} analysis.
We list the start time $t_{\rm start}$, the reference time $t_{\rm ref}$ in GPS time and the fixed merger time $t_0$ = $0.22\ s$, where the merger time in GPS time $t_{\rm merger}$ = $t_{\rm ref}$ + $t_0$.
Here, and elsewhere in this work, masses are quoted in the detector frame.
}
\label{tab:injection}
\end{table}
The start time $t_{\rm start}$ was specified when generating each mock signal, while the merger time $t_0$ and signal length could not be.
For the parabolic BH capture waveforms with a mass ratio of $1$, $4$, $8$, $16$, the length of the raw data produced by \texttt{Minke} ranges from $0.88\ {\rm s}$ to $1.34\ {\rm s}$.
We manually truncated the timeseries, or padded it with zeros, to fit the one-second analysis constraint of \texttt{VItamin}.
At the same time, we put the signal's peak at $t_0 = 0.22\ {\rm s}$, which lies within the pre-trained prior range $[0.15, 0.35]\ {\rm s}$ expected by \texttt{VItamin}.
To make the network function properly, signals were processed in the same way as the training data.
\texttt{VItamin} requires that the input data are whitened using the detector amplitude spectral density (ASD) and combined with zero-mean, unit-variance Gaussian noise. This whitening was adopted mainly to scale the data appropriately for input to the neural network; we employed the aLIGO zero-detuning\footnote{See \url{https://dcc.ligo.org/T1800044-v5/public}}, high-power design sensitivity ASD for H1 and L1, and the Advanced Virgo\footnote{See \url{https://dcc.ligo.org/LIGO-P1200087-v42/public}} ASD for V1. Examples of BBH and parabolic BH capture injections are shown in Figure~\ref{plt:wfm}.
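The whitening step can be illustrated with a simplified frequency-domain sketch (our own helper, not the exact \texttt{VItamin} preprocessing; it omits the windowing and band-passing a production pipeline would apply). Dividing the Fourier transform of the data by the ASD, with an appropriate normalisation, yields a time series whose noise component has unit variance:

```python
import numpy as np

def whiten(strain, asd_of_f, fs):
    """Whiten a time series: divide its one-sided Fourier transform by
    the detector ASD (a callable mapping frequency in Hz to ASD in
    1/sqrt(Hz)) and normalise so that detector noise comes out as
    zero-mean, unit-variance white noise."""
    n = len(strain)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    white_hf = np.fft.rfft(strain) / asd_of_f(freqs) * np.sqrt(2.0 / fs)
    return np.fft.irfft(white_hf, n=n)
```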
\section{Analysis}
\label{sec:methods}
The most widely used approach to analysing BBH signals uses Bayesian inference~\citep{bayes_1,bayes_2}.
One popular analysis pipeline is \texttt{Bilby}~\citep{bilby,bilby_2}, a modular python package; however, it is computationally intensive because it uses stochastic sampling techniques to estimate the posterior.
Instead, we used \texttt{VItamin}~\citep{vitamin} to produce rapid posterior estimates for each injection.
Because we focus on high-mass systems, we first used the pre-training function provided by \texttt{VItamin} to expand its prior parameter space to that of Table~\ref{tab:prior_vtm}.
Once we obtained a posterior that is visually similar to that of a BBH, we applied Bayesian inference to the corresponding signal using both non-spinning and spinning BBH templates. For both inferences we employed the \texttt{IMRPhenomPv2}~\citep{imr_template,imr_template_present} approximant\footnote{\texttt{IMRPhenomPv2} has 6 parameters to model the spins of the BBH system. In order to produce non-spinning waveforms from it we set all 6 spins to zero.} because it was used to train \texttt{VItamin}.
We carried out these analyses using \texttt{Bilby} as corroboration for the \texttt{VItamin} analysis.
\subsection{Bayesian Inference}
\label{sec:bayes}
The probability distribution on a set of parameters, conditional on the measured data, can be determined using Bayes Theorem, which can be represented as
\begin{equation}\label{eq:bayes_theorem}
p(x|y) = \frac{p(y|x) p(x)}{p(y)},
\end{equation}
where $x$ are the parameters, $y$ are the observed data, $p(x)$ is the prior on the parameters, $p(y)$ is the probability of the data, $p(x|y)$ is the posterior, and $p(y|x)$ is the likelihood.
GW parameter estimation analyses typically require the exploration of a very large parameter space, while analysing a large volume of data.
To address this it is typical to use a stochastic sampler to reconstruct the posterior.
This sampling can be done with a variety of techniques, including Nested Sampling and Markov chain Monte Carlo~\citep{montecarlo_1,mcmc_application_1,mcmc_application_2} methods.
The popular software packages used by LIGO parameter estimation analyses are \texttt{LALInference} and \texttt{Bilby}, which offer multiple sampling methods.
We used \texttt{Bilby}, a Bayesian inference library for GW astronomy, as an interface for the \texttt{dynesty}~\citep{dynesty} sampler.
Once the appropriate posteriors had been obtained in section~\ref{sec:vitamin}, one example of the corresponding signals was regenerated with a 4-second data segment and a sampling rate of $1024\ {\rm Hz}$ for a more precise analysis. We used the \texttt{dynesty} sampler, and both spinning and non-spinning BBH waveforms drawn from the \texttt{IMRPhenomPv2} model, to perform parameter estimation on the data. The priors we used are shown in Table~\ref{tab:prior_non-spinnning} and Table~\ref{tab:prior_spinning}.
We then calculated the Bayes factors $K$ for non-spinning BBH-template and spinning BBH-template against the noise.
The Bayes factor is defined as
\begin{equation}\label{eq:bayes_factor}
K = \frac{p(y|x,H_1)}{p(y|x,H_2)} = \frac{\int p(x_1|H_1)p(y|x_1,H_1)dx_1}{\int p(x_2|H_2)p(y|x_2,H_2)dx_2},
\end{equation}
where $H_1$ and $H_2$ are two different hypotheses, with $K > 1$ indicating greater support for the $H_1$ hypothesis.
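In practice, $\ln K$ is computed from the log evidences returned by the sampler for each hypothesis, with their reported uncertainties combined in quadrature under the assumption that the two evidence estimates are independent. A minimal helper (ours, not part of \texttt{Bilby}) might look like:

```python
import math

def log_bayes_factor(log_z1, log_z2, err1=0.0, err2=0.0):
    """Return (ln K, uncertainty): the log Bayes factor between two
    hypotheses given their log evidences ln Z1 and ln Z2, combining the
    sampler-reported uncertainties in quadrature (this assumes the two
    evidence estimates are independent)."""
    return log_z1 - log_z2, math.sqrt(err1 ** 2 + err2 ** 2)
```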
Finally, we used the median recovered values from the parabolic capture signal posterior to create injections using two BBH models (one spinning, and one non-spinning).
These two recovered signals were compared with the corresponding parabolic capture, demonstrating the ability of a parabolic capture to mimic a high-mass BH when analysed with the \texttt{IMRPhenomPv2} waveform approximant.
\subsection{VItamin analysis}
\label{sec:vitamin}
\texttt{VItamin} is a recently proposed network for BBH signals based on a conditional variational autoencoder~\citep{CAVE_1,CAVE_2}, which has been shown to produce samples describing the posterior distribution six orders of magnitude faster than the traditional Bayesian approach~\citep{vitamin}.
The network uses the non-spinning BBH approximant \texttt{IMRPhenomPv2}.
It omits the six additional parameters required to model the spins of the BBH system and produces posteriors on the component masses $m_1$ and $m_2$, the luminosity distance $d_{\rm L}$, the time of coalescence $t_0$, the binary inclination $\theta_{jn}$, the right ascension $\alpha$, and the declination $\delta$.
The phase at coalescence $\phi_0$ and the GW polarisation angle $\psi$ are internally marginalized out.
For each parameter we used a uniform prior, with the exception of the declination and inclination parameters for which we used priors which were uniform in $\cos (\delta)$ and $\sin\theta_{jn}$.
The corresponding prior ranges are defined in Table~\ref{tab:prior_vtm}.
The initial prior range of \texttt{VItamin} focuses on low-mass BH binaries, and the upper limit of the component mass is $80\ \ensuremath{\mathrm{M}_{\rm \odot}}$.
However, we trained the network with a customized prior, increasing the maximum component mass to $160\ \ensuremath{\mathrm{M}_{\rm \odot}}$ to deal with the high-mass BBH system in this study.
The BBH signals used as training and test data were produced using \texttt{IMRPhenomPv2}, with a minimum cutoff frequency of $20\ {\rm Hz}$.
The training procedure only needed to be performed once, and took $\mathcal{O}(1)$ day to complete.
The resulting trained network could then quickly generate samples describing the posterior distribution, which was proved to achieve the same accuracy of results as trusted benchmark analyses used within the LIGO-Virgo Collaboration~\citep{vitamin}.
For BBH signals, GW data are usually sampled at rates between $1$ and $16\,\mathrm{kHz}$, depending upon the mass of the binary.
We have chosen a low sampling rate of $256\ {\rm Hz}$ for the \texttt{VItamin} network, in order to decrease the computational time required to train it.
We observed that the main frequency component of BH capture signals with a total mass of $150\ \ensuremath{\mathrm{M}_{\rm \odot}}$ is around $80\ {\rm Hz}$, below the Nyquist frequency $f_{\rm Nyquist}=128\ {\rm Hz}$; thus the signal is well covered by the sampled band.
\begin{table}
\centering
\begin{tabular}[t]{lccc}
\toprule
Parameter & min & max & prior\\
\hline
$m_{1, 2}$ (M$_\odot$) & 30 & 160 & uniform\\
$d_{\rm L}$ (Mpc) & 1000 & 3000 & uniform\\
$t_{0}$ (s) & 0.15 & 0.35 & uniform\\
$\alpha$ & 0 & $2\pi$ & uniform\\
$\delta$ & $-\pi/2$ & $\pi/2$ & cosine\\
$\theta_{jn}$ & 0 & $\pi$ & sine\\
\hline
spins & \multicolumn{2}{c}{0} & - \\
duration (s)& \multicolumn{2}{c}{1} & - \\
$\rm t_{start}$ (GPS time)& \multicolumn{2}{c}{1126259642.0} & - \\
$\rm t_{ref}$ (GPS time)& \multicolumn{2}{c}{1126259642.5} & - \\
detector network & \multicolumn{3}{c}{H1, L1, V1} \\
\bottomrule
\end{tabular}
\caption{The priors and fixed parameter values used on non-spinning BBH model parameters for \texttt{VItamin} analysis.}
\label{tab:prior_vtm}
\end{table}
For all parabolic BH capture waveforms, we used one set of sky-location injections ($\alpha$, $\delta$, and $\psi$) containing 100 samples.
We only adjusted the injected $d_{\rm L}$ to obtain the appropriate posteriors, which were produced rapidly by the \texttt{VItamin} network once the data had been input.
The injected $d_{\rm L}$ is a critical factor due to the effect of waveform scaling.
It must be chosen such that the injected waveform has an SNR which would be detectable, and produces a plausible posterior distribution which might be mistaken for that of a high-mass BBH.
\begin{table}
\centering
\begin{tabular}[t]{lccc}
\toprule
Parameter & min & max & prior\\
\hline
$m_{1, 2}$ (M$_\odot$) & 30 & 160 & uniform\\
$d_{\rm L}$ (Mpc) & 1000 & 3000 & uniform\\
$t_{0}$ (s) & 0.15 & 0.35 & uniform\\
$\alpha$ & 0 & $2\pi$ & uniform\\
$\delta$ & $-\pi/2$ & $\pi/2$ & cosine\\
$\theta_{jn}$ & 0 & $\pi$ & sine\\
$\psi$ & 0 & $\pi$ & uniform\\
$\phi$ & 0 & $2\pi$ & uniform\\
\hline
spins & \multicolumn{2}{c}{0} & - \\
duration (s)& \multicolumn{2}{c}{4} & - \\
$\rm t_{start}$ (GPS time)& \multicolumn{2}{c}{1126259642.0} & - \\
$\rm t_{ref}$ (GPS time)& \multicolumn{2}{c}{1126259644.5} & - \\
detector network & \multicolumn{3}{c}{H1, L1, V1} \\
\bottomrule
\end{tabular}
\caption{The priors and fixed parameter values used on non-spinning BBH model parameters for \texttt{Bilby} analysis. In this analysis we use a 4-second duration timeseries.}
\label{tab:prior_non-spinnning}
\end{table}
\begin{table}
\centering
\begin{tabular}[t]{lccc}
\toprule
Parameter & min & max & prior\\
\hline
$m_{1, 2}$ (M$_\odot$) & 30 & 160 & uniform\\
$d_{\rm L}$ (Mpc) & 1000 & 3000 & uniform\\
$t_{0}$ (s) & 0.15 & 0.35 & uniform\\
$\alpha$ & 0 & $2\pi$ & uniform\\
$\delta$ & $-\pi/2$ & $\pi/2$ & cosine\\
$\theta_{jn}$ & 0 & $\pi$ & sine\\
$\psi$ & 0 & $\pi$ & uniform\\
$\phi$ & 0 & $2\pi$ & uniform\\
$a_{1,2}$ & 0 & 0.99 & uniform\\
$\theta_{1,2}$ & 0 & $\pi$ & sine\\
$\Delta\phi$ & 0 & $2\pi$ & uniform\\
$\phi_{JL}$ & 0 & $2\pi$ & uniform\\
\hline
duration (s)& \multicolumn{2}{c}{4} & - \\
$\rm t_{start}$ (GPS time)& \multicolumn{2}{c}{1126259642.0} & - \\
$\rm t_{ref}$ (GPS time)& \multicolumn{2}{c}{1126259644.5} & - \\
detector network & \multicolumn{3}{c}{H1, L1, V1} \\
\bottomrule
\end{tabular}
\caption{The priors and fixed parameter values used on spinning BBH model parameters for \texttt{Bilby} analysis. In this analysis we use a 4-second duration timeseries.}
\label{tab:prior_spinning}
\end{table}
\section{Results}
\label{sec:result}
In the \texttt{VItamin} recovery study of parabolic BH captures using the non-spinning BBH approximant \texttt{IMRPhenomPv2}, we did obtain posteriors that look superficially like BBH posteriors.\footnote{The posterior probability distribution of a parabolic BH capture which mimics a BBH, as determined by \texttt{VItamin}, can be seen in Figure~\ref{vtm_be}. As a comparison, we also display the \texttt{VItamin} posterior of a typical high-mass BBH in Figure~\ref{vtm_bbh}.}
The $d_{\rm L}$ injections used to produce such posteriors are recorded in Table~\ref{tab:search_result}.
We calculated the optimal SNR in each detector, which is defined as
\begin{equation}
\label{eq:snr}
\rho_{\rm opt} = 2{\left [\int_{f_{\min}}^{f_{\max}}\frac{|h^{2}(f)|}{S_{h}(f)}\ {\rm d}f \right ]^{\frac{1}{2}}},
\end{equation}
where $h(f)$ is the Fourier transform of the (time-domain) GW signal, $S_{h}(f)$ is the one-sided noise spectral density in units of $\rm Hz^{-1}$,
and $f_{\min}\leq f \leq f_{\max}$ correspond to the frequency band of the instrument.
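On the discrete one-sided frequency grid of the data, this integral can be evaluated numerically; a minimal sketch using the trapezoidal rule (our own helper, not the implementation used in the analysis):

```python
import numpy as np

def optimal_snr(freqs, h_f, psd):
    """Optimal matched-filter SNR, rho_opt = 2 * sqrt( integral of
    |h(f)|^2 / S_h(f) df ), evaluated with the trapezoidal rule on a
    discrete one-sided frequency grid."""
    integrand = np.abs(h_f) ** 2 / psd
    integral = np.sum((integrand[:-1] + integrand[1:]) * np.diff(freqs)) / 2.0
    return 2.0 * np.sqrt(integral)
```

With a flat unit PSD and unit-amplitude $h(f)$ over a unit frequency band, the integral is 1 and the SNR evaluates to 2, which is a convenient sanity check.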
We found that, while adjusting the injected $d_{\rm L}$, waveforms with a higher mass ratio required a smaller luminosity distance to produce a detectable SNR.
We also conducted parameter estimation using \texttt{Bilby} on one parabolic capture signal, with the posterior distribution sampled by \texttt{dynesty}.
\footnote{The posterior probability distributions for a parabolic capture signal analysed by \texttt{Bilby} are shown in Figures~\ref{blb_be} and \ref{blb_be_spin}, using the non-spinning and spinning models respectively.}
Both the spinning and non-spinning BBH templates based on \texttt{IMRPhenomPv2} yield high log Bayes factors against noise, $\ln{K} = 134.620 \pm 0.168$ and $117.134 \pm 0.155$ respectively, strongly supporting the hypothesis that the signal is a BBH merger.
As a comparison, the reference BBH data give a log Bayes factor $\ln{K} = 157.363 \pm 0.209$ against the noise hypothesis in the non-spinning BBH analysis.
The log Bayes factor between the spinning and non-spinning BBH templates is then $\ln{K} = 17.486 \pm 0.229$, which is comparatively small.
This illustrates that it is very difficult for the Bayes factor to distinguish between the spinning and non-spinning models in this case.
We then generated the signal corresponding to the median values of each waveform parameter's posterior and compared it with the original signal in Figure~\ref{signal_comparison}, where the whitened waveforms of the parabolic BH capture signal and the recovered BBH signal are overlaid.
Here we can see the strong similarity in the merger--ringdown phase.
\begin{table}
\centering
\begin{tabular}{lll}
\toprule
mass ratio $q$ & $d_{\rm L}$ injection (Mpc)\\ \hline
1 & 5000 \\
4 & 2000 \\
8 & 1500 \\
16 & 500 \\
\bottomrule
\end{tabular}
\caption{The $d_{\rm L}$ injections used to produce \texttt{VItamin} posteriors that are visually similar to those of a BBH.
The injected distance decreases as the mass ratio $q$ increases.}
\label{tab:search_result}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{./figure/signal_comparison.pdf}
\caption{The whitened parabolic BH capture and the whitened recovered BBH signals at the H1 detector. The waveform with mass ratio $q = 1$ was injected with a total mass of $150\ \ensuremath{\mathrm{M}_{\rm \odot}}$ at a distance of $5000\ {\rm Mpc}$. Parameter estimation was then performed on the signal using the non-spinning and spinning BBH model \texttt{IMRPhenomPv2} with \texttt{Bilby}.
Here we present the waveform corresponding to the median values of each parameter's posterior distribution. }
\label{signal_comparison}
\end{figure*}
Furthermore, we can assess whether the resemblance is more than superficial by reference to a statistic.
To do this we calculate the JS divergence~\citep{jsd_1} between the posterior distributions obtained by analysing injected BBH signals and injected parabolic capture waveforms.
If the posterior distributions from an injected BBH and a parabolic capture signal are not statistically distinguishable they will have a small JS divergence, and we can infer that the use of the incorrect waveform model in the analysis would not be detected.
The JS divergence is a symmetrised and smoothed measure of the distance between two probability distributions $p(x)$ and $q(x)$ defined as
\begin{equation}
D_{\rm JS}(p\mid q)=\frac{1}{2}\left [ D_{\rm KL}(p\mid s) +D_{\rm KL}(q\mid s)\right ],
\end{equation}
where $s=\frac{1}{2}(p+q)$ and $D_{\rm KL}$ is the Kullback--Leibler divergence between the distributions $p(x)$ and $q(x)$, expressed as
\begin{equation}
D_{\rm KL}(p\mid q)=\int p(x) \log_{2}\left (\frac{p(x)}{q(x)} \right )dx.
\end{equation}
The JS divergence lies in the range $[0, 1]$; a greater value indicates a larger difference between the posteriors from the two signals, which could therefore be more easily distinguished.
The two JS divergences we considered are:
\begin{enumerate}
\item $\ensuremath{D_{{\rm JS, noise}}}$: the divergence between posteriors of reference BBH signal with different white noise realisations.
\item $\ensuremath{D_{{\rm JS, ref}}}$: the divergence between posteriors of parabolic capture and reference BBH signal, with the same noise realisation.
\end{enumerate}
$\ensuremath{D_{{\rm JS, noise}}}$ reflects the volatility of the \texttt{VItamin} results under different white-noise realisations, whereas $\ensuremath{D_{{\rm JS, ref}}}$ represents the bias of the \texttt{IMRPhenomPv2} template when modelling parabolic BH capture signals.
We expect $\ensuremath{D_{{\rm JS, ref}}}$ to be noticeably greater than $\ensuremath{D_{{\rm JS, noise}}}$; in that case, calculating $\ensuremath{D_{{\rm JS, ref}}}$ could be considered a fast approach to distinguishing BBH and parabolic BH capture events.
\begin{table*}
\centering
\begin{tabular}{lccccccccc}
\hline
mock signal & $m_1$ (\ensuremath{\mathrm{M}_{\rm \odot}}) & $m_2$ (\ensuremath{\mathrm{M}_{\rm \odot}}) & $d_{\rm L}$ (Mpc) & $t_0$ (s) & $\alpha$ & $\delta$ & $\psi$ & $\theta_{jn}$ & network SNR \\ \hline
parabolic BH capture m1 & 75 & 75 & 5000 & 0.22 & 0.89 & -0.94 & 1.54 & - & 11.13\\
recovered BBH & 76 & 68 & 1624 & 0.25 & 1.69 & 1.20 & - & 1.33 & 10.76\\
\hline
parabolic BH capture m4 & 120 & 30 & 2000 & 0.22 & 0.89 & -0.94 & 1.54 & - & 7.63\\
recovered BBH & 88 & 75 & 2278 & 0.26 & 4.69 & 1.21 & - & 1.76 & 4.38\\
\hline
parabolic BH capture m8 & 133.3 & 16.7 & 1500 & 0.22 & 0.89 & -0.94 & 1.54 & - & 9.93\\
recovered BBH & 98 & 83 & 1804 & 0.26 & 4.66 & 1.23 & - & 1.78 & 6.77\\
\hline
parabolic BH capture m16 & 141.2 & 8.8 & 500 & 0.22 & 0.89 & -0.94 & 1.54 & - & 13.90\\
recovered BBH & 104 & 90 & 1647 & 0.26 & 1.94 & 1.24 & - & 1.30 & 11.86\\
\hline
reference BBH & 78 & 72 & 1400 & 0.22 & 0.89 & -0.94 & 1.54 & 1.51 & 11.27\\
\hline
\end{tabular}
\caption{The injections of mock signals used for the JS divergence analysis, including each parabolic BH capture, its recovered BBH, and the reference BBH. For each parabolic BH capture, we took the average peak value of the \texttt{VItamin} posterior as the recovered injection.
The inefficiency and bias introduced by analysing the parabolic BH capture signal with the non-spinning BBH model \texttt{IMRPhenomPv2} can be seen clearly, as waveforms with a higher mass ratio were recovered with a higher total mass and a lower luminosity distance while retaining a detectable SNR. NB: $\psi$ is marginalized in the \texttt{VItamin} inference, so we used $\psi=0$ for the injection. $\theta_{jn}$ is not an effective parameter for the parabolic BH capture waveform. We also note that the start time $t_{\rm start} = 1126259642.0$ and the reference time $t_{\rm ref}=1126259642.5$ are in GPS time, and the merger time $t_{\rm merger}$ = $t_{\rm ref}$ + $t_0$.
}\label{tab:bias}
\end{table*}
We then created mock data for the JS divergence analysis.
For each parabolic BH capture waveform, we produced one signal which generates a posterior visually similar to that of a BBH.
We analysed 100 noise realisations with the same signal injected, and produced injections at the same sky location.
As a result, by considering the same injection time $t_0$, the antenna pattern is the same for each waveform. The injections and corresponding posterior peaks from the recovery are presented in Table~\ref{tab:bias}.
For reference, we also analysed 100 noise realisations with BBH signals, where the signal parameters: total mass, right ascension $\alpha$, declination $\delta$, and merger time $t_0$, are the same as the parabolic BH capture.
$d_{\rm L}$ was changed in order to scale the BBH signal's amplitude to be similar to the parabolic BH capture by visual comparison.
Its injections and corresponding posterior peaks from the recovery are also shown in Table~\ref{tab:bias}.
In this way, we created a situation where two similar-looking GW events, one BBH and one parabolic capture were observed.
We then computed the JS divergence between their posteriors, to measure their similarity.
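The JS divergence between two posteriors can be estimated directly from their samples. The sketch below is our own illustration (the paper does not state its binning scheme or logarithm base, so the common 100-bin grid and base 2 are assumptions):

```python
import numpy as np

def js_divergence(samples_p, samples_q, n_bins=100):
    """Estimate the JS divergence (base 2, bounded by 1) between two 1-D
    posterior sample sets by binning them on a common grid."""
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p = np.histogram(samples_p, bins=edges)[0].astype(float)
    q = np.histogram(samples_q, bins=edges)[0].astype(float)
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)  # mixture distribution

    def kl(a, b):
        mask = a > 0  # 0 * log(0) = 0 by convention
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical sample sets give exactly zero, and well-separated posteriors approach the base-2 upper bound of one.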
We note that while eight parameters can be inferred for an input signal by \texttt{VItamin}, the three parameters which showed the greatest JS divergences were the component masses, $m_1$, $m_2$, and the merger time, $t_0$.
Having computed the JS divergence for all 100 pairs of signals, we looked at the distribution of the divergences in Figure~\ref{plt:jsd_ref}, where the three subplots represent the JS divergences of the components masses $m_1$, $m_2$, and the merger time $t_0$.
The distribution of $\ensuremath{D_{{\rm JS, noise}}}$ is generally close to zero, suggesting that the effect of noise on \texttt{VItamin}'s posterior is rather limited, as we hoped.
We calculated $D_{90}$, the 90\% confidence interval of $p(\ensuremath{D_{{\rm JS, noise}}})$, and used this as a threshold.
The percentage of $\ensuremath{D_{{\rm JS, ref}}}$ higher than this threshold can then indicate how far the distribution of $\ensuremath{D_{{\rm JS, ref}}}$ is from the noise benchmark.
The related result is recorded in Table~\ref{tab:90confidence}.
The $D_{90}$ are 0.121, 0.134, 0.309 for $m_1$, $m_2$, $t_0$ respectively.
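The threshold construction just described reduces to a percentile and an exceedance fraction. A minimal sketch (the helper name is ours, not from the paper's code):

```python
import numpy as np

def exceedance_fraction(d_js_noise, d_js_ref, quantile=90.0):
    """Return the noise benchmark D_90 (the `quantile`-th percentile of the
    noise-only JS divergences) and the fraction of the comparison
    divergences that lie above it."""
    d90 = np.percentile(d_js_noise, quantile)
    frac = float(np.mean(np.asarray(d_js_ref) > d90))
    return d90, frac
```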
Though the $D_{90}$ of $t_0$ is noticeably larger than those of the component masses, there is a large gap between the distributions of $\ensuremath{D_{{\rm JS, noise}}}$ and $\ensuremath{D_{{\rm JS, ref}}}$ for this parameter: the percentage above threshold reaches 100\ \% for three waveforms and 97\ \% for the remaining one.
The greatest difference between the posteriors comes from the bias of the BBH approximant \texttt{IMRPhenomPv2}.
For the same injected $t_0$ = 0.22\ s, a BBH signal is recovered with a peak value of 0.22\ s, but a parabolic BH capture is more likely to be recovered slightly later, with a peak value of 0.25\ s or 0.26\ s (see Table~\ref{tab:bias}).
Therefore, this bias can be demonstrated through the JS divergence analysis and used to test whether a signal is a parabolic BH capture.
We also find that, for $m_1$, the average percentage of $\ensuremath{D_{{\rm JS, ref}}}$ above $D_{90}$ is 79.5\ \%, making it more discriminative than $m_2$.
Parabolic captures with mass ratios of 8 and 16 can be distinguished fairly well from BBH signals, and the lowest percentage of their divergences above the threshold is also as high as 85\ \%.
This means we can have great confidence in distinguishing the two types of signals when analysing them with a BBH waveform.
However, under a more realistic detection scenario, we have no access to the true parameters of the signal. Thus, in addition to calculating $\ensuremath{D_{{\rm JS, ref}}}$, we should also compare the posteriors of the parabolic BH capture and its recovered signal, and look for the evidence of the bias.
Therefore, the recovered peak values were taken as the average over the 100 samples and used as injections for the non-spinning BBH model \texttt{IMRPhenomPv2} with the same noise realisations.
The injections are recorded in Table~\ref{tab:bias}.
The new JS divergence we introduce is:
\begin{enumerate}
\item $\ensuremath{D_{{\rm JS, recover}}}$: the JS divergence between the posterior of a parabolic BH capture and its recovered BBH signal injected using the recovered peak values, with the same noise realisation.
\end{enumerate}
$\ensuremath{D_{{\rm JS, recover}}}$ describes the effect of recovering parabolic BH capture.
If the input signal is actually a BBH merger, its recovered signal should have a very similar posterior probability density distribution.
In this case, the distribution of $\ensuremath{D_{{\rm JS, recover}}}$ is close to zero but slightly raised by the noise, whose effect is represented by $\ensuremath{D_{{\rm JS, noise}}}$.
For other signals, however, if the difference between the posteriors of the signal and its recovered counterpart is large, then $\ensuremath{D_{{\rm JS, recover}}}$ can be used as a discriminating criterion.
We plotted the distributions of $\ensuremath{D_{{\rm JS, recover}}}$ and compared it with the noise benchmark in Figure~\ref{plt:jsd_inj}.
The three subplots represent the JS divergences of $m_1$, $m_2$, and $t_0$.
However, the distribution of $\ensuremath{D_{{\rm JS, recover}}}$ almost overlaps with that of $\ensuremath{D_{{\rm JS, noise}}}$ for the three parameters, which suggests a high similarity between the posteriors of the parabolic BH capture and the recovered high-mass BBH.
We also determined the percentage of $\ensuremath{D_{{\rm JS, recover}}}$ higher than $D_{90}$ and recorded it in Table~\ref{tab:90confidence}.
$\ensuremath{D_{{\rm JS, recover}}}$ has a much lower percentage than $\ensuremath{D_{{\rm JS, ref}}}$ above $D_{90}$.
Except for the waveform with mass ratio of 4, the highest percentage of $\ensuremath{D_{{\rm JS, recover}}}$ above the threshold is only $32\ \%$.
The waveform with mass ratio of 4 is unusual compared to others, with the average percentage reaching $57.3\ \%$. Since we do not know \textit{a priori} what the mass ratio of the waveform is in the real analysis, we must consider all the waveforms equally, so this value would not be enough to provide support that the evidence of the bias has been found.
In addition, $\ensuremath{D_{{\rm JS, ref}}}$ has a good performance on $t_0$, while $\ensuremath{D_{{\rm JS, recover}}}$ is difficult to tell apart from $\ensuremath{D_{{\rm JS, noise}}}$ as it has an average percentage of $20.5\ \%$ above $D_{90}$.
\begin{figure*}
\centering
\vspace{0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=0.5pt
\subfigcapskip=-5pt
\subfigure[]{
\label{plt:jsd_ref_t0}
\includegraphics[width=\columnwidth]{figure/t0_be_ref.pdf}}
\subfigure[]{
\label{plt:jsd_ref_m1}
\includegraphics[width=\columnwidth]{figure/m1_be_ref.pdf}}
\subfigure[]{
\label{plt:jsd_ref_m2}
\includegraphics[width=\columnwidth]{figure/m2_be_ref.pdf}}
\caption{
Distributions of the JS divergence between parabolic BH capture and reference BBH $\ensuremath{D_{{\rm JS, ref}}}$ shown as the outline histograms and the JS divergence between reference BBHs with different noise $\ensuremath{D_{{\rm JS, noise}}}$, shown as the shaded histogram.
The JS divergence analysis is performed for four waveforms with mass ratio $q$ = 1, 4, 8, 16 and three parameters $m_1$, $m_2$, and $t_0$ respectively.
We evaluated our distinguishing method in terms of stability and effectiveness.
The former is illustrated by the very low distributions of $\ensuremath{D_{{\rm JS, noise}}}$, whose 90\ \% upper limits, shown as dashed lines, are 0.121, 0.134, and 0.309 for $m_1$, $m_2$, and $t_0$ respectively. The latter is demonstrated by the large gap between the distributions of $\ensuremath{D_{{\rm JS, noise}}}$ and $\ensuremath{D_{{\rm JS, ref}}}$, especially for the JS divergence of $t_0$. The exact values are presented in Table~\ref{tab:90confidence}. This demonstrates that our approach works well.}
\label{plt:jsd_ref}
\end{figure*}
\begin{figure*}
\centering
\vspace{0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=0.5pt
\subfigcapskip=-5pt
\subfigure[]{
\label{plt:jsd_inj_t0}
\includegraphics[width=\columnwidth]{figure/t0_be_inj.pdf}}
\subfigure[]{
\label{plt:jsd_inj_m1}
\includegraphics[width=\columnwidth]{figure/m1_be_inj.pdf}}
\subfigure[]{
\label{plt:jsd_inj_m2}
\includegraphics[width=\columnwidth]{figure/m2_be_inj.pdf}}
\caption{Distributions of the JS divergence between parabolic BH capture and its recovered BBH $\ensuremath{D_{{\rm JS, recover}}}$ shown as the outline histograms, and the JS divergence between reference BBHs with different noise $\ensuremath{D_{{\rm JS, noise}}}$ shown as a shaded histogram. The JS divergence analysis is performed for four waveforms with mass ratio $q$ = 1, 4, 8, 16 and three parameters $m_1$, $m_2$, and $t_0$ respectively.
Here we considered the application of our distinguishing method in more realistic scenarios and evaluated it in terms of stability and effectiveness. The stability is the same as before, with very low distributions of $\ensuremath{D_{{\rm JS, noise}}}$, whose 90\ \% upper limit is represented by a dashed line. However, the distributions of $\ensuremath{D_{{\rm JS, noise}}}$ and $\ensuremath{D_{{\rm JS, recover}}}$ almost overlap, which suggests that the two types of signal cannot be well distinguished in this situation (more information is presented in Table~\ref{tab:90confidence}).}
\label{plt:jsd_inj}
\end{figure*}
\begin{table}
\centering
\begin{tabular}{ccccclll}
\cline{1-5}
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{mass ratio} & $t_0$ & $m_1$ & $m_2$ & & & \\ \cline{1-5}
\multicolumn{1}{c|}{\multirow{4}{*}{$\ensuremath{D_{{\rm JS, ref}}}$}} & \multicolumn{1}{c|}{1} & 100\% & 49\% & 11\% & & & \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{4} & 97\% & 74\% & 52\% & & & \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{8} & 100\% & 95\% & 86\% & & & \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{16} & 100\% & 100\% & 100\% & & & \\
\multicolumn{1}{c|}{\multirow{4}{*}{$\ensuremath{D_{{\rm JS, recover}}}$}} & \multicolumn{1}{c|}{1} & 9\% & 28\% & 24\% & & & \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{4} & 46\% & 62\% & 64\% & & & \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{8} & 19\% & 32\% & 27\% & & & \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{16} & 8\% & 32\% & 27\% & & & \\ \cline{1-5}
\end{tabular}
\caption{The percentage of $\ensuremath{D_{{\rm JS, ref}}}$ and $\ensuremath{D_{{\rm JS, recover}}}$ higher than the noise threshold for parabolic BH capture waveform with mass ratio of $1, 4, 8$, and $16$. The threshold is represented by $\ensuremath{D_{{\rm JS, noise}}}$ at $90\ \%$ confidence level, which is $0.309, 0.121, 0.134$ for $t_0$, $m_1$, and $m_2$ respectively.}\label{tab:90confidence}
\end{table}
Apart from being used for the JS divergence analysis, Table~\ref{tab:bias} also reveals patterns in the injected and recovered parameters. First, for parabolic BH capture waveforms with mass ratios of $1, 4, 8$, and $16$, the total mass is recovered as $144, 163, 181$, and $194$ $\ensuremath{\mathrm{M}_{\rm \odot}}$ respectively. The recovered total mass thus tends to rise as the mass ratio $q$ increases, and it is much higher than the injected $150$ $\ensuremath{\mathrm{M}_{\rm \odot}}$ when the mass ratio is greater than $1$.
We also find that the recovered mass ratios are $1.12, 1.17, 1.18$, and $1.16$ respectively, which are all close to one.
For comparison, GW190521 has a mass ratio of $1.29$, which is broadly consistent with our results.
The sensitive distance decreases with the increase in the mass ratio $q$.
For equal-mass BH binaries, eccentric sources are thought to be much closer than BBH sources with a circular orbit in inspiral.
Another discrepancy worth highlighting is that the recovered merger times $t_0$ are all about 0.04\ s later than the injected value. We suspect that this is caused by the mathematical fit of the BBH model to the capture signal, but we will investigate the underlying pattern in future work.
\section{Summary and discussion}
\label{sec:conclusion}
In this work, we proposed the possibility that current approaches to GW analysis could misclassify a parabolic BH capture signal as a BBH signal.
We then demonstrated a scenario under which this could occur, and devised a statistical method to distinguish them.
We injected parabolic BH capture waveforms to produce mock data using \texttt{Minke}, a tool developed for characterising burst searches, which allowed us to make injections with a customised distribution.
The main difficulty is that it is impossible to predict how a signal will be inferred under a biased multi-parameter model, and traditional Bayesian inference is computationally expensive.
To overcome this we adopted \texttt{VItamin}, a neural network based on the BBH model, and retrained it to fit high-mass BBH signals, which reduced the cost of each parameter estimation to a very low level.
This greatly helped us to continuously adjust the injection parameters of the parabolic BH capture and finally obtain the appropriate posterior probability.
We then performed confirmatory parameter estimation using the \texttt{dynesty} sampler, the results of which also had strong statistical support.
Here we summarize our main conclusions in more detail.
We have established that there are scenarios in which a parabolic BH capture could be recovered as a spinning (non-spinning) BBH signal with high statistical support, a log Bayes factor of $\ln{K}=134.6\ (111.7)$, compared to a noise hypothesis.
This type of signal is likely to be mistaken for a high-mass BBH by LIGO and Virgo.
Therefore it would be valuable to be vigilant to this possibility when a high-mass BBH system is identified in an analysis, otherwise future GW events may be misclassified.
This should be considered in cases where the waveform seems to lack a clear inspiral phase.
In this study, we have built a rapid approach to describe the difference between the posteriors of BBH and parabolic BH capture signals and distinguish them.
This approach is based on a neural network, \texttt{VItamin}, and compares the distributions of the JS divergences of three parameters, $m_1$, $m_2$, and $t_0$, from the two types of GW signals with the noise benchmark $D_{90}$.
Its validity has been demonstrated by the JS divergence between the parabolic BH capture and the reference BBH, $\ensuremath{D_{{\rm JS, ref}}}$, which has 79.5\ \%, 62.3\ \%, and 99.3\ \% of samples above $D_{90}$ for $m_1$, $m_2$, and $t_0$.
However, in a more realistic detection scenario, our analysis does not yield evidence that two types of GW events are distinguishable with the current BBH Bayesian inference.
This is a result of the lower value of the JS divergence between the parabolic BH capture and its recovered BBH, $\ensuremath{D_{{\rm JS, recover}}}$, with only 38.5\ \%, 35.5\ \%, and 20.5\ \% of samples above $D_{90}$ for $m_1$, $m_2$, and $t_0$.
The result of our analysis would therefore not allow us to make an identification of a GW190521-like signal.
As a result, the parabolic BH capture could not be distinguished from a BBH by the current quasi-circular BBH analysis, which highlights the importance of a good BH capture approximant in the future.
We have identified the patterns in the injected and recovered parameters. For the four waveforms, there is a tendency for the recovered total mass to rise as the mass ratio increases; only the one from the equal-mass system has a recovered total mass close to the injection of 150 $\rm{M}_{\odot}$, and the total masses of the others are recovered with much higher values. The recovered mass ratios are all close to one, which we also see in GW190521 with a mass ratio of 1.29. In contrast to the pattern observed with the total mass, the sensitive distance decreases as the mass ratio increases. We also note that the recovered merger times are all offset by around 0.04\ s compared to the injected value.
The research in this paper constitutes a comparatively novel use of deep learning in GW data analysis.
A typical Bayesian analysis of the kind used in this study takes 8 to 14 hours, while the neural network requires around $50$ seconds.
For each waveform, there were about four iterations on average before determining the appropriate $d_{\rm L}$ injection, and each iteration gave 100 Bayesian posteriors corresponding to the combinations of sky location.
A total of $1,600$ inferences were performed in this stage.
Once the posteriors which mimic a BBH were obtained, we selected one signal from each waveform and analysed it with 100 noise realisations, as well as the reference BBH signal, to construct the JS divergence distributions; this stage comprised 500 inferences.
BBH signals injected from the recovered peaks of the BH capture signals were then inferred with the same noise realization sets.
This last step required 400 inferences and constructed the distribution of $\ensuremath{D_{{\rm JS, recover}}}$ to finally describe the difference between BBH and BH capture signals.
Overall, the use of a neural network saved around $2.7\times 10^4$ hours when performing $2,500$ parameter estimation analyses.
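As a rough sanity check of this figure (taking the midpoint of the quoted 8 to 14 hour range is our assumption):

```python
# ~2,500 analyses at 8-14 h each with nested sampling vs ~50 s with the network.
n_analyses = 2500
bayes_hours = 11.0             # midpoint of the quoted 8-14 h range (assumption)
network_hours = 50.0 / 3600.0  # ~50 s per inference
saved = n_analyses * (bayes_hours - network_hours)
assert 2.5e4 < saved < 3.0e4   # consistent with the quoted ~2.7e4 hours
```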
Because of computational cost limitations in training, the \texttt{VItamin} network has not been trained to take into account the spins of the BBH model.
One promising signature of the BH binary formation environment is the angular distribution of BH spins~\citep{distinguish_spins}.
Binaries formed through dynamical interactions are expected to have isotropic spin orientations~\citep{iso_spins_1,iso_spins_2,iso_spins_3,iso_spins_4,iso_spins_5} whereas systems formed from pairs of stars born together are more likely to have spins preferentially aligned with the binary orbital angular momentum~\citep{aligned_spins_2,aligned_spins_3,aligned_spins_4,aligned_spins_5}.
When modelling the BH capture data, the six additional spin parameters, as intrinsic properties of a binary, are expected to play an important role in distinguishing binary formation channels, allowing the kind of more precise search that has been performed in real data analyses.
We will return to this subject in future work.
The prior range of the component masses in \texttt{VItamin} can be expanded, and the sampling rate raised, to cover more BH capture samples.
These events are principally low-frequency sources, and as such are ideal candidates not only for the Einstein Telescope~\citep{ET_1,ET_2}, which aims to achieve much greater low-frequency sensitivity than current detectors, but also for deci-Hz detectors such as DECIGO~\citep{DECIGO_1,DECIGO_2}.
The misclassification is expected to be eliminated with their ability to observe at much lower frequencies, removing the ambiguity between unobserved low frequency inspiral cycles and a total lack of inspiral.
The detection rate of BH captures is dependent on the initial mass function of stars in galactic nuclei and the mass of the most massive BHs, therefore future observations can constrain both the average star formation properties and upper mass of BHs in galactic nuclei~\citep{bhencounter}.
\section*{Acknowledgements}
We would like to thank Charlie Hoy, Juan Calderon Bustillo, and Rossella Gamba for their comments on the manuscript, and suggestions, in addition to many discussions within the parameter estimation and burst groups of the LIGO, Virgo, and KAGRA collaborations.
DW and ISH were supported by STFC grants ST/V001736/1 and ST/V005634/1.
YB was supported by IBS under the project Code No. IBS-R018-D1 and by the National Research Foundation of Korea (NRF) grant funded by the Korea government(MSIT) (No. NRF-2021R1F1A1051269).
GK and YB are supported by the KISTI National Supercomputing Center with supercomputing resources and technical supports (KSC-2020-CRE-0352).
ZZ was supported by the National Natural Science Foundation of China under Grants Nos. 11633001, 11920101003 and 12021003, the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB23000000 and the Interdiscipline Research Funds of Beijing Normal University.
We are grateful for the support of our colleagues in the LIGO-Virgo Compact Binary Coalescence Parameter Estimation working group.
This work has been assigned LIGO document control number LIGO-P2200064.
\section*{Data Availability}
The data and code underlying this article are available in Zenodo at \url{https://doi.org/10.5281/zenodo.6384509}.
\bibliographystyle{mnras}
\chapter*{Abstract}
Subshifts are sets of configurations over an infinite grid defined by a set of forbidden patterns. In this thesis, we study two-dimensional subshifts of finite type ($2$D SFTs), where the underlying grid is $\mathbb Z^2$ and the set of forbidden patterns is finite. We are mainly interested in the interplay between the computational power of $2$D SFTs and their geometry, examined through the concept of expansive subdynamics. $2$D SFTs with expansive directions form an interesting and natural class of subshifts that lie between dimensions $1$ and $2$. An SFT that has only one non-expansive direction is called extremely expansive. We prove that in many aspects, extremely expansive $2$D SFTs display the totality of behaviours of general $2$D SFTs.
For example, we construct an aperiodic extremely expansive $2$D SFT and we prove that the emptiness problem is undecidable even when restricted to the class of extremely expansive $2$D SFTs. We also prove that every Medvedev class contains an extremely expansive $2$D SFT and we provide a characterization of the sets of directions that can be the set of non-expansive directions of a $2$D SFT. Finally, we prove that for every computable sequence of $2$D SFTs with an expansive direction, there exists a universal object that simulates all of the elements of the sequence. We use the so called hierarchical, self-simulating or fixed-point method for constructing $2$D SFTs which has been previously used by G\'{a}cs, Durand, Romashchenko and Shen.
\tableofcontents
\cleardoublepage
\pagenumbering{arabic}
\chapter{Historical overview}
This thesis is about two-dimensional subshifts of finite type ($2$D SFTs), and more specifically, the behaviour of 2D SFTs with respect to a dynamical-geometrical notion called expansive subdynamics.
The mathematical study of 2D SFTs began with the paper of Wang \cite{wang}. A \dfn{Wang tile set} consists of a finite number of unit squares with coloured edges, which are called \dfn{tiles}. A valid tiling is a way to fill the entire plane with tiles such that the squares are edge-to-edge and such that the colors of abutting edges are the same. Wang asked the following question about Wang tile sets, which is called the \dfn{tiling problem}: Does there exist an algorithm that takes as input an arbitrary Wang tile set and decides whether it admits a valid tiling? He conjectured that the answer to this question is positive and proved that the problem is closely related to the problem of the existence of an \dfn{aperiodic tile set}, that is, a tile set that admits some valid tiling but no periodic valid tiling.
However, Berger \cite{berger} proved that this is not the case. In fact, he proved that the tiling problem is undecidable. In addition, his proof contained an explicit construction of an aperiodic tile set. Later, several authors have given alternative constructions of aperiodic tile sets and proofs of the undecidability of the tiling problem \cite{robinson,jarkkosmall,jarkkoundec}.
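While the tiling problem asks about the entire plane, the analogous question for a finite $n \times n$ square is decidable by exhaustive search. A minimal backtracking sketch (our illustration; tiles are represented as (north, east, south, west) colour tuples):

```python
def tiles_square(tiles, n):
    """Return True iff the Wang tile set `tiles` (a list of
    (north, east, south, west) colour tuples) admits a valid tiling of an
    n-by-n square, matching colours on abutting edges."""
    grid = [[None] * n for _ in range(n)]

    def fits(t, r, c):
        north, east, south, west = t
        if c > 0 and grid[r][c - 1][1] != west:   # east colour of left neighbour
            return False
        if r > 0 and grid[r - 1][c][2] != north:  # south colour of upper neighbour
            return False
        return True

    def place(k):
        if k == n * n:
            return True
        r, c = divmod(k, n)  # fill cells row by row
        for t in tiles:
            if fits(t, r, c):
                grid[r][c] = t
                if place(k + 1):
                    return True
        grid[r][c] = None  # backtrack
        return False

    return place(0)
```

By a compactness (K\"onig's lemma) argument, a tile set admits a tiling of the plane if and only if it tiles every finite square, so these finite searches semi-decide the non-existence of a tiling.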
There is an alternative way of looking at and talking about the same problem. Let $\A$ be a finite set, called the \dfn{alphabet}. A (two-dimensional, or $2$D) \dfn{configuration} is a map $c \colon \mathbb Z^2 \to \A$. The set of all configurations $\A^{\mathbb Z^2}$ is called the \dfn{full shift}. A \dfn{pattern} is a map $p \colon D \to \A$, where $D \subseteq \mathbb Z^2$ is a \emph{finite} set. Let $\mathcal{F}$ be a set of \dfn{forbidden} patterns. The corresponding ($2$D) \dfn{subshift} $X_{\mathcal{F}}$ is the set of all configurations that avoid the patterns of $\mathcal{F}$: for all finite $D \subseteq \mathbb Z^2$ and $c \in X_{\mathcal{F}}$, $c\restr{D} \notin \mathcal{F}$. If $X=X_{\mathcal{F}}$ for some \emph{finite} set of forbidden patterns, then it is called a 2D subshift of finite type (SFT). In this thesis, we will only talk about $2$D subshifts and SFTs, so that we will usually omit the dimension, except in statements of theorems.
It is not difficult to see that the set of valid tilings of a Wang tile set is an SFT. In addition, for every SFT, we can construct a Wang tile set whose set of valid tilings is, in some sense, equivalent to the given SFT. The tiling problem can thus be rephrased as the \dfn{emptiness problem} for SFTs: Given a finite set of forbidden patterns, can we algorithmically decide whether $X_{\mathcal{F}} \neq \emptyset$? The undecidability of the tiling problem is then immediately translated into the undecidability of the emptiness problem for SFTs.
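The finite check underlying these definitions, namely whether a given finite window avoids every forbidden pattern, is easy to make concrete. A sketch (the dictionary representation of patterns is our choice):

```python
def avoids(window, forbidden):
    """True iff the finite window (a dict (x, y) -> symbol) contains no
    translate of any forbidden pattern (each also a dict (x, y) -> symbol,
    assumed non-empty)."""
    for pat in forbidden:
        anchor = next(iter(pat))                 # one fixed cell of the pattern
        for (cx, cy) in window:                  # try to match the anchor here
            tx, ty = cx - anchor[0], cy - anchor[1]
            if all(window.get((x + tx, y + ty)) == s
                   for (x, y), s in pat.items()):
                return False                     # forbidden pattern occurs
    return True
```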
Wang tiles and forbidden patterns give a geometrical definition of SFTs, but there also exists an equivalent dynamical definition. First of all, the full shift can be endowed with the product topology of the discrete topology on $\A$. This gives rise to a compact, metrizable topological space. The \dfn{horizontal} and \dfn{vertical shifts}, which consist in moving a configuration one step to the left and up, respectively, are continuous with respect to this topology and obviously commute. This defines a $\mathbb Z^2$ action over the full shift and we can study it using the usual tools of topological dynamics.
For example, one can prove that subshifts are exactly the closed, shift-invariant subsets of the full shift, or, equivalently the subsystems of the full shift. SFTs correspond to the chain-mixing subsystems of the full shift. More importantly, for the purposes of this thesis, we can study $2$D SFTs from the point of view of their \dfn{expansive subdynamics}. This notion was defined by Boyle and Lind \cite{expsubd} as a tool for studying multidimensional dynamical systems by looking at the (lower-dimensional) actions induced by the subgroups of the original action. Intuitively, this is the same as when we look at the lower-dimensional projections of a surface in order to understand some of its properties.
The general definition of expansive subdynamics and the main results of \cite{expsubd} fall out of the scope of this thesis. However, for $2$D subshifts there exists an equivalent, natural geometrical definition. Let $X \subseteq \A^{\mathbb Z^2}$ be a subshift, $l\in\Rb\defeq \mathbb R \sqcup \{\infty\}$ a slope and $\lin{l}\subset\mathbb R^{2}$ the corresponding line that passes through the origin. We say that $l$ is an \dfn{expansive direction} of $X$ if there exists a finite shape $V\subset\mathbb R^2$ such that, for all $x,y \in X$,
\[x\restr{(\lin{l}+V) \cap \mathbb Z^2}=y\restr{(\lin{l}+V) \cap \mathbb Z^2}\impl x=y~.\]
In other words, there exists a fixed width $b >0$ such that every configuration of $X$ is uniquely defined by its restriction to the strip of slope $l$ and width $b$ that goes through the origin (in fact, by shift invariance, by any strip). Geometrically, this means that in $X$ the ($2$D) information of the configuration is \xpr{packed} inside the one-dimensional strip of slope $l$. In some sense, even though $X$ is a two-dimensional object, it is determined by a one-dimensional strip, so that subshifts with directions of expansiveness are somewhere between dimensions $1$ and $2$.
A direction that is not expansive is called \dfn{non-expansive}. Let $\NE(X)$ be the set of non-expansive directions of $X$.
Boyle and Lind proved that $\NE(X)$ is closed in the one-point compactification topology of $\Rb$ and that $\NE(X) \neq \emptyset$ if and only if $X$ is infinite. Since finite subshifts are rather trivial, the most restricted non-trivial case with respect to non-expansive directions is the case when $X$ has a unique direction of non-expansiveness. We call such a subshift \dfn{extremely expansive}. Extremely expansive SFTs form the main object of interest in this thesis. We prove that in many aspects, extremely expansive SFTs are computationally as powerful as general SFTs.
Before stating the results, we find it useful to talk about another class of SFTs with few non-expansive directions, namely those that arise from deterministic tile sets. A tile set is called \dfn{NW-deterministic} (the initials stand for North and West) if every tile is uniquely determined by the colors of its top and left sides~\cite{nilpind}. Similarly, we can define SW, SE and NE deterministic tile sets (S and E stand for South and East, respectively). A tile set is called \dfn{4-way deterministic} if it is SW, NW, SE and NE deterministic~\cite{karipapasoglou}. One can easily see that for the SFT associated to a 4-way deterministic tile set, every direction $l$ other than the vertical and horizontal ones (slopes $\infty$ and $0$, respectively) is expansive. Guillon, Kari and Zinoviadis recently proved~\cite{pierreunpub} that the vertical and the horizontal direction must indeed be non-expansive unless the associated SFT is in some sense trivial, namely vertically or horizontally periodic.
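These determinism conditions are simple finite checks on a tile set. For NW-determinism, with the same hypothetical (north, east, south, west) tuple representation one might use for Wang tiles:

```python
def is_nw_deterministic(tiles):
    """A tile set is NW-deterministic if the (north, west) colour pair
    determines the tile uniquely."""
    seen = {}
    for t in tiles:
        north, east, south, west = t
        key = (north, west)
        if key in seen and seen[key] != t:
            return False  # two distinct tiles share top and left colours
        seen[key] = t
    return True
```

4-way determinism amounts to running the same check for the four corner pairs $(N,W)$, $(N,E)$, $(S,W)$ and $(S,E)$.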
We can now start stating the results of the thesis. The first result concerns the existence of an aperiodic extremely expansive SFT. As mentioned earlier, for the unrestricted case, there exist various constructions of aperiodic SFTs. Kari and Papasoglou \cite{karipapasoglou} have constructed an aperiodic 4-way deterministic tile set. According to what was said in the previous paragraph, the SFT associated to this tile set has exactly two non-expansive directions, the vertical and the horizontal one. We prove that
\begin{theorem}
There exists an aperiodic extremely expansive $2$D SFT.
\end{theorem}
Of course, our construction does not use a 4-way deterministic tile set. It might seem that this result is strictly better than the one using 4-way deterministic tile sets, since we have one fewer non-expansive direction. However, there exists a small nuance here: 4-way deterministic tile sets give rise to SFTs with so-called \dfn{bounded radii of expansiveness}, while our construction does not have this property. In addition, in \cite{pierreunpub} it is also proved that every aperiodic SFT with bounded radii of expansiveness must have at least two non-expansive directions. Therefore, the 4-way deterministic construction is also optimal in the class of SFTs with bounded radii of expansiveness, and it might be more precise to say that the two results are incomparable.
As mentioned already, the first aperiodic tile set was originally constructed in order to prove that the tiling problem is undecidable. Kari \cite{nilpind} proved that the tiling problem for NW-deterministic tile sets is undecidable. In addition, Lukkarila \cite{lukkarila} used the 4-way deterministic tile set of Kari and Papasoglou in order to prove that the tiling problem is undecidable for 4-way deterministic tile sets as well. As the reader has probably guessed already, we prove that
\begin{theorem}
The emptiness problem of extremely expansive $2$D SFTs is undecidable. More precisely, the emptiness problem is undecidable for $2$D SFTs such that the vertical direction is the only non-expansive direction.
\end{theorem}
One should understand the previous statement in the following sense: even if one is given an SFT $X$ (as a finite set of forbidden patterns) and is given the additional information that $X$ is either empty or extremely expansive (and in this case $\NE(X)=\{\infty\}$), even then it is not possible to decide whether $X=\emptyset$. In other words, it is not possible to algorithmically separate the sets of forbidden patterns that define empty SFTs from those that define non-empty extremely expansive SFTs.
The third result can be considered a stronger version of the undecidability of the emptiness problem. We prove that there exist extremely expansive SFTs whose configurations are computationally as complicated as possible.
In order to describe this result, we need to introduce some classical notions of computation theory. For the purposes of this introduction, a \dfn{computable function} will mean a function $f \colon \A^{\mathbb N} \to \A^{\mathbb N}$ such that there exists a Turing Machine that outputs $f(c)$ when its input tape initially contains $c$ (\textit{i.e.}, it outputs $f(c)$ with oracle $c$). Using an effective enumeration of $\mathbb Z^2$, it is possible to talk about computable functions with domain or range $\A^{\mathbb Z^2}$, and in general $\A^\mathbb M$, where $\mathbb M$ is any effectively enumerable set.
We say that $d \in \A^{\mathbb M}$ is \dfn{reducible} to $c \in \A^{\mathbb M'}$ if there exists a computable function $f$ such that $f(c)=d$. This means that $c$ is computationally at least as complicated as $d$, since it is possible to obtain $d$ using $c$ and a computable function. A subset $Y \subseteq \A^{\mathbb M}$ is called \dfn{Medvedev} reducible to $X \subseteq \A^{\mathbb M'}$ if every point of $Y$ is reducible to some point of $X$. Intuitively, we can compute any point of $Y$ with the help of a suitable point of $X$ and a computable function. The relation of Medvedev reducibility is a pre-order on subsets.
Two sets are called Medvedev equivalent if they are Medvedev reducible to each other. This is an equivalence relation, whose equivalence classes are called \dfn{Medvedev degrees}. There exists a partial order on the set of Medvedev degrees given by the natural lift of the Medvedev reducibility pre-order. Computable sets are the least element of this order and, in a certain sense, the higher a set is in this hierarchy, the more difficult it is to compute a point of this set relative to the sets that lie lower in the hierarchy. The survey \cite{hinman} contains a thorough study of Medvedev degrees.
A set $X \subseteq \A^{\mathbb M}$ is called \dfn{effectively closed} if its complement is semi-decidable. Effectively closed sets form the so-called $\Pi_{0}^{1}$ sets and they play a very important role in computation theory. It is easy to see that SFTs are effectively closed, even though there exist many effectively closed sets (and even effectively closed subshifts) that are not SFTs. However, Simpson \cite{simpson} proved that every effective Medvedev degree (\textit{i.e.}, the Medvedev degree of an effectively closed set) contains a 2D SFT. Therefore, in some sense, not only is the emptiness problem undecidable for 2D SFTs, but their points can be as difficult to compute as possible. We improve this result to the extremely expansive case:
\begin{theorem}
Every effective Medvedev degree contains an extremely expansive 2D SFT. In other words, for every effectively closed set $Z\subseteq \A^{\mathbb M}$, there exists an extremely expansive 2D SFT $Y$ that is Medvedev equivalent to $Z$.
\end{theorem}
In fact, we prove something stronger, giving a complete characterization of the so-called \dfn{Turing degrees} of $Y$ relative to those of $Z$, but it is not necessary to go into these details here.
The next result is of a dynamical flavour and does not concern extremely expansive SFTs, but sequences of SFTs with a common rational direction of expansiveness. It also uses the notion of simulation, which is of central importance in the proofs of the previous results and, in general, for the whole thesis, even though it has not been mentioned until now.
We say that a subshift $X \subseteq \A^{\mathbb Z^2}$ \dfn{simulates} a subshift $Y \subseteq \B^{\mathbb Z^2}$ with parameters $(S,T)$ if there exists a $\B$-colouring of the $S \times T$ blocks of $X$ with the following property: every configuration of $X$ can be partitioned in a unique way into $S \times T$ rectangles such that, when we colour these rectangles with the $\B$-colouring, we obtain a configuration of $Y$. Conversely, every configuration of $Y$ can be obtained in this way.
This is weaker than the notion of simulation that we actually use, but it follows from it, suffices to state the result, and is much easier to explain. It corresponds to the definitions in \cite{drs}.
It was proved in \cite{laffite} that for every computable sequence of SFTs, there exists a single SFT that simulates all of them. This is a surprising and remarkably strong result. We prove a version of it in the case where all the SFTs of the sequence have a common, rational expansive direction (which without loss of generality we assume to be the horizontal one):
\begin{theorem}
Let $X_0,X_1, \ldots$ be a computable sequence of 2D SFTs such that $0 \notin \NE(X_i)$, for all $i \in \mathbb N$. Then, there exists a 2D SFT $X$ such that $X$ simulates $X_i$ for all $i \in \mathbb N$ and $0 \notin \NE(X)$.
\end{theorem}
We note that there cannot exist a 2D SFT with an expansive direction that simulates all 2D SFTs with the same expansive direction, because this would imply the decidability of the emptiness problem for extremely expansive SFTs, according to an argument of Hochman \cite{hochmanuniv}.
The final result of the thesis answers a natural question which arises immediately after the construction of an extremely expansive SFT. As stated already, the unique non-expansive direction of the SFT that we construct is the vertical one. Which other directions can be the unique direction of non-expansiveness for 2D SFTs? Obviously, we can achieve any rational direction by rotating with elements of $SL_2(\mathbb Z)$, but can we do more? More generally, what are the sets of directions that can be the set of non-expansive directions of a 2D SFT?
Hochman \cite{nexpdir} proved that for general 2D subshifts (not necessarily of finite type, or even effective), any closed set of directions can be the set of non-expansive directions, while any direction can be the unique direction of non-expansiveness. Recall that Boyle and Lind proved that the sets of non-expansive directions must be closed, so it turns out that in the case of general subshifts this necessary topological condition is also sufficient.
In the case of SFTs, there is an additional necessary condition, namely that the set of non-expansive directions be \dfn{effectively closed}, which is equivalent to saying that its complement is the union of an increasing, computable sequence of open intervals. It turns out that this condition is also sufficient for 2D SFTs:
\begin{theorem}
A set of directions $\NE$ is the set of non-expansive directions of a 2D SFT if and only if it is effectively closed. More precisely, a direction $l$ is the unique direction of non-expansiveness of a 2D SFT if and only if it is computable.
\end{theorem}
This answers Question~11.2 in Boyle's Open Problems for Symbolic Dynamics \cite{opsd}.
Using our methods, we could easily prove Theorems~1-3 for SFTs whose unique direction of non-expansiveness is $l$, where $l$ is any computable direction. This is a stronger version of the results, which we do not prove for lack of space. In any case, once one has mastered our method, it is possible to prove various new results and variants of already proved ones. Since this method is as important as (if not more important than) some of our results, it is probably worth saying a few words about its history, too.
It is the so-called \dfn{fixed-point tile} or \dfn{self-simulating} method for constructing 2D SFTs. It was first described by Kurdyumov \cite{kurdyumov} in order to give a counterexample to the Positive Rates conjecture, even though only a sketch of a proof was included in that paper. It was G\'{a}cs \cite{gacs1} who elaborated Kurdyumov's idea into a full refutation of the Positive Rates conjecture and formalized the notion of a hierarchy of simulating SFTs (he talks about 1D cellular automata, but this does not make a big difference). Later, he significantly improved his construction and the result in a notoriously lengthy and difficult paper \cite{gacs}. Gray's reader guide to that paper \cite{gray} and the description therein of self-simulation and of the problems one encounters when trying to construct a self-simulating SFT are also a very useful exposition of the ideas of G\'{a}cs and Kurdyumov. It was not until the work of Durand, Romashchenko and Shen \cite{drs} that the method became accessible to a broader mathematical audience. They work in the framework of 2D SFTs, which allows for a clearer, geometrical description of the basic ideas.
G\'{a}cs' construction did not have any direction of expansiveness, because it was a non-reversible cellular automaton. Nonetheless, it had the horizontal direction as a direction of \xpr{semi-expansiveness}. On the other hand, the construction of Durand, Romashchenko and Shen had neither directions of expansiveness nor directions of \xpr{semi-expansiveness}. A large part of this thesis consists in making their construction expansive in the horizontal direction. We need to introduce some tricks in order to do this, but once we achieve it, self-simulation and a previous result of Hochman immediately give an extremely expansive aperiodic SFT. Something similar was also done in \cite{zinoviadis1}, but the construction of that paper was significantly easier, because it dealt with non-reversible cellular automata, so that only a direction of \xpr{semi-expansiveness} was needed. Our current construction can be seen as an improvement of the construction of that paper, and using it we can easily recover its main result, which was a characterization of the numbers that can appear as the topological entropy of a (not necessarily reversible) CA.
One thing that all the constructions have in common, including ours, is that they are complicated and rather difficult to explain (for the writer) and to understand (for the reader). This is unavoidable to some degree, and the author's personal opinion is that there does not exist a \xpr{perfect} way to write them. Either the exposition is very formal, covering all details and defining every little thing, which is the road that we have chosen, or the construction is informal, in which case it is not clear what exactly the constructed SFT is, over which alphabet it is defined, etc., which is the choice made by Durand, Romashchenko and Shen. Taking the middle road, as was more or less done by G\'{a}cs, does not help very much, either.
Our opinion is that the best thing is to be familiar with \emph{all} the constructions and use them accordingly. On the one hand, the constructions of Durand, Romashchenko and Shen are convincing for someone already familiar with the technique, and they allow one to explain a new idea concisely and efficiently, as was recently done in \cite{drs2}; on the other hand, our more formal presentation can be used to acquire mastery of the technique by dealing with all the unexpected little problems that arise during the construction, and to convince those readers who want to understand all the details.
Let us now describe the structure of the thesis:
In Chapter~\ref{s:prelim}, we give the basic definitions that we will need throughout the thesis. In Chapter~\ref{s:simul}, we define the precise notion of simulation that we will use and give some of its properties. We believe that some of the results of this chapter are of independent interest. In Chapter~\ref{c:programming}, we describe a pseudo-programming language that will be used to describe 2D SFTs in a concise way. In Chapter~\ref{construction}, we construct a family of SFTs (depending on the parameters $S,T$) with $0$ as a direction of expansiveness which are, in some sense, universal: they can simulate every SFT with $0$ as a direction of expansiveness, provided that its alphabet size is small compared to $S,T$ and that it can be computed fast compared to $S,T$. This family of SFTs is of great importance for all subsequent constructions. This is the part of the thesis where we modify the construction of Durand, Romashchenko and Shen so as to make it reversible. In Chapter~\ref{s:hierarchy}, we prove Theorems~1-4. The constructions and the proofs all follow the same pattern, but we give as many details as possible for all of them, for reasons of completeness. Finally, in Chapter~\ref{sec:expdir}, we prove Theorem~5. This proof is a modification of the proof of the result in \cite{nexpdir}. We try to explain what the differences between that construction and ours are and why the changes that we make are necessary.
Finally, let us mention that all of the aforementioned results have been obtained in collaboration with Pierre Guillon during various visits by him to Turku, as well as by the author to Marseille. Currently, a series of joint papers is in preparation that will contain even more applications of our method. Theorem~1 has also appeared in \cite{zinoviadis2}, even though, for lack of space, most of the details of the construction do not appear in that paper.
\chapter{Preliminaries}\label{s:prelim}
\section{Basic definitions}\label{s:encoding}
We will denote by $\mathbb Z$, $\mathbb N$, ${\mathbb N}_1$, $\mathbb Q$ and $\mathbb R$ the sets of integers, non-negative integers, positive integers, rational and real numbers, respectively, by $\co ij$ and $\cc ij$ the integer intervals $\{i,\ldots,j-1\}$ and $\{i,\ldots,j\}$, respectively, while $[\varepsilon,\delta]$ will denote an interval of real numbers. If $f,g \colon \mathbb N \to \mathbb N$, then we will use the classical $f \in O(g)$ notation to denote that $f(n) \le c g(n)$, for some constant $c$ and all $n \in \mathbb N$.
If $f \colon X \pto Y$ is a partial function, then its \dfn{domain} $\mathcal D(f) \subseteq X$ is the set of elements of $X$ whose image through $f$ is defined. Two partial functions are equal when they have the same domain and they agree on their common domain. If $f \colon X \pto Y$ and $g \colon Y \pto Z$ are partial functions, then $g\circ f \colon X \pto Z$ is the partial function defined in the usual way (\textit{i.e.}, $g(f(w))$ does not exist if either $w \notin \mathcal D(f)$ or $f(w) \notin\mathcal D(g)$).
A \dfn{partial permutation} is a bijection over its domain onto its range, \textit{i.e.}, an injective partial map.
In the following, when defining a partial function, it will be implicit that any non-treated argument has an undefined image, and that saying that two partial functions are equal means in particular that their domains are the same.
If $Z\subset X$ and $f:X\pto Y$, we may abusively consider $f\restr{Z}$ as a partial map from $X$ to $Y$ whose domain is $Z\cap\mathcal D(f)$.
For $m\ge n$, $\vec T\defeq(T_i)_{0\le i<m}$ and $\vec t\defeq(t_i)_{0\le i< n}$ such that for all $i$, $0\le t_i<T_i$, we denote by $\anib[\vec T]{\vec t}\defeq\sum_{0\le i < n}t_i\prod_{0\le j \le i}T_j$ the numeric value represented by the adic representation $\vec t$ in base $\vec T$. In general, $T_i$ and $t_i$ may belong to $\mathbb R$, not necessarily to $\mathbb N$.
By convention, if $\vec t$ has length $0$, then $\anib[\vec T]{\vec t}\defeq0$.
Similarly, for a sequence $\seq T\defeq(T_i)_{i\in\mathbb N}$, we write $\anib[\seq T]{\vec t}\defeq\anib[T_{\co0n}]{\vec t}$. For a sequence $\seq t \defeq(t_i)_{i \in \mathbb N}$, we write $\anib[\seq T]{\seq t}\defeq\lim\anib[T_{\co0n}]{t_{\co{0}{n}}}$, when this limit exists.
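As a small illustration (not part of the formal development), the following Python sketch implements the formula for $\anib[\vec T]{\vec t}$ literally, including the convention for an empty $\vec t$; the name \texttt{adic\_value} is ours.

```python
def adic_value(T, t):
    """Numeric value of the adic representation t in base T,
    computed literally as sum_{0<=i<n} t_i * prod_{0<=j<=i} T_j."""
    assert len(t) <= len(T)
    total, prefix = 0.0, 1.0
    for i, ti in enumerate(t):
        assert 0 <= ti < T[i]        # digit condition 0 <= t_i < T_i
        prefix *= T[i]               # prefix = T_0 * ... * T_i
        total += ti * prefix
    return total                     # 0.0 when t is the empty tuple
```

For instance, $\anib[(2,3)]{(1,2)} = 1\cdot2 + 2\cdot2\cdot3 = 14$.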
An \dfn{alphabet} is any finite set, whose elements are often called \dfn{symbols}.
If $\A$ is an alphabet, $\A^*\defeq\bigcup_{n\in\mathbb N}\A^n$ denotes the set of finite \dfn{words} over $\A$, and $\A^{**} \defeq \bigcup_{m\in\mathbb N}{(\A^*)^m}$ the set of finite tuples of words. (Notice that the notation $\A^{**}$ is a little ambiguous as it could also stand for the set $\bigcup_{m\in\mathbb N}{(\A^m)^*}$. Obviously, the two interpretations are isomorphic, but they are different objects.) The \dfn{empty} word is denoted by $\motvide \in \A^*$.
If $w \in \A^n$, we write $w=w_0\cdots w_{n-1}$, and call $\length{w}\defeq n$ the \dfn{length} of $w$.
If $\vec u\in (\A^*)^m$, we write $\vec u=(u_0,\ldots,u_{m-1})$, and $\length{\vec u}\defeq(\length{u_0},\ldots,\length{u_{m-1}})$.
For every $i \in \mathbb N$, we define the projection $\pi_i$ as a partial function $\pi_i \colon \A^{**} \pto \A^{*}$: $\pi_i(\vec u)=u_i$ if $\vec u \in (\A^{*})^m$ with $m\geq i$ (and $\pi_i(\vec u)$ is undefined otherwise).
A \dfn{field} is a projection $\pi_i$ together with a label \field, written in typewriter font.
The notion of fields is simply a convenient way of talking about tuples of words. The names of the fields will be chosen so as to reflect the role that the field plays in the construction.
We denote by $\mathbb N^*\defeq\bigcup_{m\in\mathbb N}\mathbb N^m$ the set of integer tuples of any dimension, where $m$ is the \dfn{dimension} of the tuple $\vec k\defeq(k_0,\ldots,k_{m-1})\in\mathbb N^m$. Let $\A^{\vec{k}}\defeq\A^{k_0}\times\ldots\times\A^{k_{m-1}}$;
any subalphabet of $\A^{\vec{k}}$ is said to have \dfn{constant lengths}.
We will mainly use the special alphabets $\haine n\defeq\{0,\ldots,n-1\}$, for $n\in\{2,3,4,5\}$. Of course, instead of $\haine5$ we could use any alphabet with five letters. However, since some letters will have a fixed role throughout the thesis, it is better to fix the notation and get used to these roles.
The non-negative integers can be easily embedded into $\haine2^*$ thanks to the injection $n\mapsto\anib n$, which gives the shortest binary representation of $n \geq 1$. $\norm n\defeq\length{\anib n}=\spart{\log_2n}+1$ is the \dfn{length of $n$}. By definition, $\anib{0} \defeq \motvide$ and $\norm{0} \defeq 0$. Conversely, if $u \in \haine2^*$, then $\bina u$ is the number represented by $u$ in the binary representation system: for all $u\in\haine2^*$, $\anib{\bina u}$ is the suffix of $u$ that is obtained after removing the initial $0$s. (The \xpr{lower bar} is applied before the \xpr{top bar}.)
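These maps are easy to implement; the following Python sketch (function names ours) mirrors the definitions of $\anib{\cdot}$, $\norm{\cdot}$ and $\bina{\cdot}$ above.

```python
def anib(n):
    """Shortest binary representation of n; the empty word for n = 0."""
    return "" if n == 0 else format(n, "b")

def norm(n):
    """Length |anib(n)| of n."""
    return len(anib(n))

def bina(u):
    """Number represented by the binary word u (leading 0s are ignored)."""
    return int(u, 2) if u else 0
```

In particular, $\anib{\bina u}$ is recovered from $u$ by stripping the initial $0$s.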
We will also need to embed some finite sets in $\haine2^*$. For instance, we will say that $\{-1,+1\}$ is $\haine2$ by identifying $-1$ with $0$ and $+1$ with $1$. Finite alphabets of bigger cardinality can be embedded into $\haine2^k$, for some suitable $k$.
Now, with a view to computing functions with many arguments, we are going to use the symbol $2$ to encode tuples into words.
If $\vec u\in(\haine5^*)^m$ for some $m\in\mathbb N$, then $\Chi{\vec u}$ is defined as the concatenation
\begin{equation*}
\Chi{u_0}\Chi{u_1}\ldots\ldots\Chi{u_{m-1}}\in\haine3^*,
\end{equation*}
where $\Chi v\defeq2\double{v}$, and $v\mapsto\double v$ is some monoid injection (\textit{i.e.}, code) from $\haine5^*$ to $\haine2^*$. In this thesis, we will use the code defined by $\double0\defeq000$, $\double1\defeq001$, $\double2\defeq010$, $\double3\defeq011$, $\double4\defeq100$.
Note that the structure of the encoding of word tuples depends only on $\length{\vec{u}}$.
We can also define $\Chi{\seq u}\defeq\Chi{u_0}\Chi{u_1}\ldots\in\haine3^\mathbb N$ for $\seq u\in\haine5^{\mathbb N}$.
Let us now prove a basic fact about $\Chi{\cdot}$. Namely, for every $\vec{k} \in \mathbb N^*$, there exists an easily computable function that gives the positions of the $2$s in encodings of $\haine5^{\vec{k}}$ and the positions of the encodings of the components of a letter.
\begin{fact}\label{f:encodings}
Let $M \in \mathbb N$ and $\vec{k} \in \mathbb N^M$. For all $0 \le i <M$, let us define $l_{\vec{k},i}\defeq3\sum_{j=0}^{i-1}{k_j}+i$. Then, for all $\vec{u} \in \haine5^{\vec{k}}$:
\begin{enumerate}
\item $\norm{\Chi{\vec u}}=l_{\vec{k},M}$,
\item ${\Chi{\vec u}}_{\co{l_{\vec{k},i}}{l_{\vec{k},i+1}}}= 2 \double{\pi_i(\vec{u})}$
\end{enumerate}
\end{fact}
These statements correspond to what Durand, Romashchenko and Shen refer to when they write that ``the TM knows the place where such and such information is held in the encoding''.
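To make the encoding and Fact~\ref{f:encodings} concrete, here is a Python sketch (names ours) of $\Chi{\cdot}$ with the code $\double{\cdot}$ fixed above, together with the position function $l_{\vec k,i}$.

```python
DOUBLE = {"0": "000", "1": "001", "2": "010", "3": "011", "4": "100"}

def chi(words):
    """Encoding of a tuple of words over {0,...,4}:
    each word w contributes the block 2 double(w)."""
    return "".join("2" + "".join(DOUBLE[c] for c in w) for w in words)

def l(k, i):
    """Position l_{k,i} = 3 * (k_0 + ... + k_{i-1}) + i of the i-th block."""
    return 3 * sum(k[:i]) + i
```

For $\vec u=(01,4)$ and $\vec k=(2,1)$, the encoding has length $l_{\vec k,2}=11$, and its two blocks start at positions $l_{\vec k,0}=0$ and $l_{\vec k,1}=7$.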
Symbol $3$ will be used in Subsection~\ref{s:turing} to encode the start and the end of the tape of a Turing machine.
Symbol $4$ will be used in order to construct alphabets with constant lengths.
In the computation, we indeed want words of various lengths to be able to represent the same objects.
For this, we define $\sh[l]u\defeq4^{l-\length u}u$, for every $l\in\mathbb N$ and $u\in\haine4^*$ with $\length{u} \le l$ ($\sh[l]u$ is undefined otherwise).
For instance, $\sh[\norm n]{\anib n}=\anib n$ for any integer $n\in\mathbb N$, and the encoding $\sh[l]\motvide=4^l$ of the empty word is a sequence of $4$s.
It is clear that the partial function \[\papp
{\mathbb N\times\haine4^*}{4^*\haine4^*}{(l,u)}{\sh[l]u}\] is injective (over its domain) and surjective; let us write $\hs w\in\haine4^*$ for the longest suffix in $\haine4^*$ of a word $w\in4^*\haine4^*$, in such a way that $\hs{\sh[l]u}=u$ for any $l\geq \length{u}$ and $u\in\haine4^*$.
These two maps can be adapted to vectors in the obvious way: $\sh[\vec k]{\vec u}\defeq(\sh[k_0]{u_0},\ldots,\sh[k_{m-1}]{u_{m-1}})$ for any $\vec k\in\mathbb N^m$, $m\in\mathbb N$ and $\vec u\defeq(u_0,\ldots,u_{m-1})\in\haine4^{*m}$.
Note that this is defined if and only if $\vec k\ge\length{\vec u}$.
Similarly, $\hs{\vec w}\defeq(\hs{w_0},\ldots,\hs{w_{m-1}})$ for any $\vec w\defeq(w_0,\ldots,w_{m-1})\in(4^*\haine4^*)^m$.
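The padding maps can be sketched in Python as follows (names ours); note that $\hs{\cdot}$ only strips the initial run of $4$s, since a word $u\in\haine4^*$ contains no $4$.

```python
def sh(l, u):
    """Pad u on the left with 4s up to length l (undefined if |u| > l)."""
    assert len(u) <= l
    return "4" * (l - len(u)) + u

def hs(w):
    """Longest suffix of w containing no 4, recovering u from sh(l, u)."""
    i = len(w)
    while i > 0 and w[i - 1] != "4":
        i -= 1
    return w[i:]
```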
Recall that a partial permutation is simply an injective partial map. If $\alpha:\haine4^{**}\pto\haine4^{**}$ is a partial permutation that preserves the number of fields (\textit{i.e.}, $\alpha((\haine4^*)^l) \subseteq (\haine4^*)^l$ for all $l \in \mathbb N$), we can transform it into an equivalent permutation that also preserves the lengths:
\[\pappl[\sh\alpha]{(4^*\haine4^*)^*}{(4^*\haine4^*)^*}{\vec w}{\sh[\length{\vec w}]{\alpha(\hs{\vec w})}~.}\]
\begin{remark}\label{sharpization}~
\begin{itemize}
\item $\sh{\alpha}$ is also a partial permutation.
\item The restriction of $\alpha$ to any subalphabet is implemented by that of $\sh\alpha$ to large enough words
\[\forall \vec{u}\in\haine4^{**},\forall
\vec k\ge\max\{\length{\vec{u}},\length{\alpha(\vec{u})}\},\sh\alpha(\sh[\vec k]{\vec{u}})=\sh[\vec k]{\alpha(\vec{u})}~.\]
\end{itemize}\end{remark}
\begin{proof}
For the first part, assume that $\sh\alpha(\vec w)=\sh\alpha(\vec{w'})$. This implies that $\length{\vec w}=\length{\vec{w'}}$. In addition, $\alpha(\hs{\vec{w}})=\hs{\sh\alpha(\vec w)}=\hs{\sh\alpha(\vec w')}=\alpha(\hs{\vec{w'}})$. Since $\alpha$ is a partial permutation, this implies that $\hs{\vec{w}}=\hs{\vec{w'}}$. Therefore, $\vec{w}=\sh[\length{\vec{w}}]{\hs{\vec{w}}}=\sh[\length{\vec{w'}}]{\hs{\vec{w'}}}=\vec{w'}$.
For the second part, let $\vec{u}\in\haine4^{**}$ and $\vec k\ge\max\{\length{\vec{u}},\length{\alpha(\vec{u})}\}$. Then, $\sh[\vec{k}]{\vec{u}}$ and $\sh[\vec{k}]{\alpha(\vec{u})}$ exist and $\length{\sh[\vec{k}]{\vec{u}}}=\length{\sh[\vec{k}]{\alpha(\vec{u})}}=\vec{k}$. Therefore, $\sh{\alpha}(\sh[\vec{k}]{\vec{u}})=\sh[\vec{k}]{\alpha(\hs{\sh[\vec{k}]{\vec{u}}})}=\sh[\vec{k}]{\alpha(\vec{u})}$.
\end{proof}
In the rest of the thesis, we will often implicitly use Remark~\ref{sharpization}, both to construct partial permutations that preserve the lengths of the fields and to state and prove things about them. It allows us to describe the behaviour of a partial permutation $\alpha$ and then translate this result into the behaviour of $\sh{\alpha}$, provided that the lengths of the fields are sufficiently large, thus omitting the (confusing) $\hs{\cdot}$ and $\sh{\cdot}$ symbols.
Let $i_1, \ldots, i_l$ be a set of fields, and $w\in\haine5^*$. Then,
\begin{equation*}
\emp[w]{i_1,\ldots,i_l} \defeq \set{\vec{u}}{\haine5^{**}}{\hs{\pi_{i_k}(\vec{u})}=w, \text{ for } k=1, \ldots,l}
\end{equation*}
is the set of all symbols that have fields $i_1,\ldots, i_l$ equal to $w$ (up to the application of $\hs{\cdot}$).
If $n\in\mathbb N$, let
\begin{equation*}
\emp[n]{i_1,\ldots,i_l}\defeq\set{\vec u}{\haine5^{**}}{\bina{\hs{\pi_{i_k}(\vec u)}}=n, \text{ for } k = 1, \ldots, l}
\end{equation*}
be the set of all symbols that have the value $n$ (in binary form) in the fields $i_1,\ldots,i_l$.
\section{Computation}
\subsection{Turing machines}\label{s:turing}
The reader is assumed to be familiar with classical concepts in computability theory. We just fix some terminology and give a variant of a definition of Turing machines, imposing some additional technical restrictions which, however, do not restrict the computational power.
A \dfn{Turing machine} (TM) is a partial (\xpr{global}) map $\mathcal{M}$ from $\haine4^\mathbb Z\times Q\times\mathbb Z$ into itself, where $Q\subset\haine2^*$ is a finite set of \dfn{states} containing the \dfn{initial} state $0$ and the \dfn{accepting} state $\motvide$, and depending on a partial \dfn{transition map} $\delta_\mathcal{M}:\haine4\times Q\setminus\{\motvide\}\pto\haine4\times Q\times\{-1,+1\}$ such that:
\[\mathcal{M}(z,q,j)=\soit{(z,q,j)&\si q=\motvide\\(z',q',j')& \text{ otherwise, where } (z'_j,q',j'-j)=\delta_\mathcal{M}(z_j,q)\\ & \text{ and } z'_i=z_i, \forall i\ne j~,}\]
for any $(z,q,j)\in\haine4^\mathbb Z\times Q\times\mathbb Z$, which will sometimes be called a \dfn{machine configuration}, the first component being the \dfn{tape content}, the second the (head) \dfn{internal state}, the third the \dfn{head position}.
The model of TM that we use satisfies the following assumptions, which, as can be easily seen, do not restrict the computational power of TM.
\begin{itemize}
\item There is only one tape, from which the TM reads the input and on which it writes the output.
\item The internal states are words of $\haine2^*$ (this is just a semantic restriction).
\item All machines have the same initial and accepting states $0$ and $\motvide$, respectively.
\item The global map is still defined after having accepted, and is then equal to the identity.
\item There is no precise rejecting state (instead, we use undefined transitions over non-accepting states).
\item In every accepting transition, the head disappears and moves to the right. In other words, every accepting transition is of the form $\delta(q,a)=(a',\motvide,+1)$. This is a technical assumption which simplifies the construction of an IPPA that simulates $\mathcal{M}$ in Section~\ref{Jarkko}.
\end{itemize}
We denote by $\mathcal{M}^t$ the $t$-th power of the (global) map $\mathcal{M}$. If
$\mathcal{M}^t(\pinf 3. \Chi{\vec{u}}3^{\infty},0,0)=
(\pinf 3. \Chi{\vec{u'}}3^{\infty},\motvide,j)$, for some $t \in \mathbb N, j \in \mathbb Z$, then we say that $\mathcal{M}$ \dfn{halts} over (or \dfn{accepts}) \dfn{input} $\vec{u}\in\haine5^{**}$, and \dfn{outputs} $\vec{u'}\in\haine5^{**}$, and we define $f_\mathcal{M}(\vec{u})\defeq \vec{u'}$ and $t_\mathcal{M}(\vec{u})$ as the minimal $t$ for which this holds (if this never holds, or if $\pinf 3. \Chi{\vec{u}}3^{\infty}$ is rejected, then $t_\mathcal{M}(\vec{u})$ is undefined).
Notice that $f_{\mathcal{M}}(\vec{u})$ is well-defined, since when the accepting state $\motvide$ appears, the machine configuration is no more modified.
We say that $\mathcal{M}$ \dfn{computes} the partial map $f_\mathcal{M}:\haine5^{**}\pto\haine5^{**}$, with time \dfn{complexity} \[\appl[t_\mathcal{M}]\mathbb N\N n{\max_{\length{\Chi{\vec{u}}}=n}t_\mathcal{M}(\vec{u})~,}\]
where, by definition, the $\max$ is taken only over \emph{accepted} inputs. $t_{\mathcal{M}}$ is well-defined since there are only finitely many accepted inputs of each length.
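The global map can be sketched in Python as follows (not part of the formal development; representing the tape as a sparse dictionary, with default symbol $3$ outside the written part, is our choice).

```python
def tm_step(delta, tape, q, j):
    """One application of the global map M(z, q, j)."""
    if q == "":                      # accepting state: identity from now on
        return tape, q, j
    a = tape.get(j, "3")             # unwritten cells hold the symbol 3
    if (a, q) not in delta:          # undefined transition: the image is undefined
        raise KeyError((a, q))
    a2, q2, d = delta[(a, q)]        # d is -1 or +1
    tape = dict(tape)                # copy, so that the map is functional
    tape[j] = a2
    return tape, q2, j + d
```

For instance, a machine whose only transition is the accepting transition $\delta(3,0)=(1,\motvide,+1)$ halts in one step, after which the machine configuration is no longer modified.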
\subsection{Computability}\label{ss:comput}
A partial function $f:\haine5^{**}\pto\haine5^{**}$ is called \dfn{computable} if there exists a TM $\mathcal{M}$ such that $f=f_\mathcal{M}$.
Recall that integers (and finite sets) can be identified with words, hence allowing us to talk about computable maps between Cartesian products involving $\mathbb N$ and finite sets.
We also say that a set $X \subseteq \haine4^{**}$ is \dfn{computable} if its characteristic function $\iota_X\colon \haine4^{**} \to \haine2$ is computable, and that it is \dfn{computably enumerable} if it is the domain of a computable
function.
We will say that a partial function $f:X\pto\haine5^{**}$, with $X\subset\haine5^{**}$ is \dfn{computable} if both $X$ and the extension of $f$ to $\haine5^{**}$ (by not defining images outside of $X$) are computable.
A partial function $\Phi:\haine2^\mathbb N\pto\haine2^{\mathbb N}$ is called \dfn{computable} if there exists a TM $\mathcal{M}$ such that $x\in\mathcal D(\Phi)$ if and only if for all $n\in\mathbb N$, there exists $m\in\mathbb N$ such that $f_\mathcal{M}(x_{\co{0}{m}},n)$ is defined, in which case it is equal to $\Phi(x)_n$. Finally, by parametrizing $\mathbb Z$ with $\mathbb N$, we can talk about computable functions $\Phi:\haine2^\mathbb Z\pto\haine2^{\mathbb Z}$. An equivalent definition is that $\Phi:\haine2^\mathbb Z\pto\haine2^{\mathbb Z}$ is computable if there exists a TM $\mathcal{M}$ such that $x\in\mathcal D(\Phi)$ if and only if for all $n\in\mathbb N$, there exists $m\in\mathbb N$ such that $f_\mathcal{M}(x_{\co{-m}{m}},n)$ is defined, in which case it is equal to $\Phi(x)_n$.
Since $\mathbb R$ can be identified with $\haine2^{\mathbb N}$, we can also talk about computable functions of real numbers. A partial function $\Psi:\mathbb R\pto\mathbb R$ is \dfn{computable} if there exists a computable function $f \colon \mathbb R \times \mathbb N \to \mathbb Q$ such that $\abs{\Psi(x)-f(x,n)} < 2^{-n}$, for all $x \in \mathcal D(\Psi)$ and $n \in \mathbb N$. This is the classical definition of computability for real functions; it says that we can compute arbitrarily good approximations of $\Psi(x)$.
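For instance, for $\Psi=\sqrt{\cdot}$, the approximating function $f(x,n)$ can be realized by bisection, as in the following Python sketch (names ours; the input is assumed rational).

```python
from fractions import Fraction

def sqrt_approx(x, n):
    """Rational q with |sqrt(x) - q| < 2**-n, found by bisection."""
    lo, hi = Fraction(0), Fraction(max(x, 1))   # invariant: lo**2 <= x <= hi**2
    while hi - lo >= Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= x:
            lo = mid
        else:
            hi = mid
    return lo                                   # sqrt(x) lies in [lo, hi]
```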
If $\mathcal{M}$ is a TM, let
\begin{equation*}
\X_{\mathcal{M}}\defeq \set {z}{\haine2^\mathbb N}{\forall t\in\mathbb N,\mathcal{M}^t(\uinf3.z,0,0) \text{ exists and is not in } \haine4^{\mathbb Z}\times\{\motvide\}\times\mathbb Z}
\end{equation*}
be the set of one-sided binary sequences over which $\mathcal{M}$ runs for an infinite amount of time.
We say that a subset $X\subset\haine2^{\mathbb N}$ is \dfn{effectively closed} (or $\Pi^0_1$) if $\Chi{X}=\X_{\mathcal{M}}$ for some TM $\mathcal{M}$, or equivalently if the set of words that do not prefix any sequence in it is computably enumerable.
This can be extended to sets of sequences that can be encoded with words, in particular over finite alphabets: a subset $X\subset\prod_{t\in\mathbb N}\A_t$, where $\A_t$ is a finite subalphabet of $\haine5^*$, is \dfn{effectively closed} if $\Chi X=\X_{\mathcal{M}}$ for some program $\mathcal{M}$ (we encode every finite alphabet with $\haine2^k$, for some suitable $k$ which depends on $t \in \mathbb N$).
$\mathcal{M}$ is called \dfn{polynomial} if $t_{\mathcal{M}} \in O(P)$, for some polynomial $P$. A partial function $f$ is called \dfn{polynomially computable} if $f_{\mathcal{M}}=f$ for some polynomial TM $\mathcal{M}$. It is easy to see that the class of (polynomially) computable functions with this version of TM corresponds to the classical one.
Analogously, $X$ is a polynomially computable set if its characteristic function $\iota_X$ is polynomially computable.
We say that a function (or sequence) $f$ is \dfn{polynomially checkable} if $f(n)$ can be computed in time $O(P(\log{f(n)}))$, for some polynomial $P$. The terminology comes from the fact that, even though $f$ might not be polynomially computable, its graph (\textit{i.e.}, the set of pairs element-image) is a polynomially computable set. For example, $f(n) = 2^{2^n}$ is a polynomially checkable sequence even though it is not polynomially computable.
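The example can be made concrete in Python: computing $f(n)=2^{2^n}$ requires writing $2^n+1$ binary digits, but membership in its graph can be decided in time polynomial in the length of the candidate value (a sketch; names ours).

```python
def f(n):
    """f(n) = 2**(2**n); its binary expansion has 2**n + 1 digits."""
    return 2 ** (2 ** n)

def check(n, m):
    """Decide whether m == f(n): m equals 2**(2**n) iff its binary
    expansion is a 1 followed by exactly 2**n zeros.  This takes time
    polynomial in |anib(f(n))| = 2**n + 1, even though not in n."""
    return format(m, "b") == "1" + "0" * (2 ** n)
```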
Instead of a universal TM, we use the following essentially equivalent fact:
\begin{fact}
There exists an injection that associates to each TM $\mathcal{M}$ a \dfn{program} $p_{\mathcal{M}} \in \haine4^{*}$ such that
if we denote by $Q_p$ the state set of the TM corresponding to program $p$,
then
\begin{itemize}
\item The language $\sett{p_{\mathcal{M}}}{\mathcal{M} \text{ is a TM}} \subseteq \haine4^*$ is polynomially decidable.
\item The characteristic function $(p,q)\mapsto\iota_{Q_p}(q)$ that checks whether $q \in Q_p$ is polynomially computable.
\item The \xpr{universal} transition rule \[\pappl[\delta_\U]{\haine4\times\haine4^*\times\haine4^*}{\haine4\times\haine4^*\times\{-1,+1\}}{(a,q,p_\mathcal{M})}{\delta_\mathcal{M}(a,q)~}\]
is polynomially computable.
\item In addition, $\card{Q_p} \le \length{p}$. (We can assume that $p$ contains a list of the states of $Q_p$.)
\end{itemize}
\end{fact}
We will use the following notations: If $p$ is the program of a TM that computes a reversible function $f$, then $p^{-1}$ will denote the program of the inverse function $f^{-1}$ (it will always be computable in our constructions). Also, $t_p$ and $\X_p$ will be used to denote $t_{\mathcal{M}_p}$ and $\X_{\mathcal{M}_p}$, where $\mathcal{M}_p$ is the TM that corresponds to the program $p$.
The first examples of polynomially computable functions, which will be most useful in the sequel, are the encodings presented in Subsection~\ref{s:encoding}.
Clearly, $\sh[\cdot]\cdot$ and its (right) inverse $\hs\cdot$ are polynomially computable.
Moreover, the projections $\pi_i:\haine5^{**}\to\haine5^*$, for $i\in\mathbb N$, are polynomially computable, and so are the functions $(\vec{k},i) \mapsto l_{\vec{k},i}$ (as defined in Fact~\ref{f:encodings}) and $\Chi{\cdot}$.
\subsection{Degrees}
In the following, $\mathbb M$ and $\mathbb M'$ can stand for either $\mathbb N$ or $\mathbb Z$.
Two sets $X,Y \in \haine2^{\mathbb M}$ are \dfn{computably homeomorphic} if there exists a computable bijection between them.
We say that $d\in\haine2^{\mathbb M'}$ is \dfn{Turing-reducible} to $c\in\haine2^{\mathbb M}$ if $d = \Phi(c)$, for some computable function $\Phi$.
This yields a preorder over configurations, whose equivalence classes are called \dfn{Turing degrees}. If $d$ is Turing-reducible to $c$, then in a computational sense, $c$ is more complicated than $d$. A \dfn{cone} over degree $d$ is the set of Turing degrees that are higher than $d$.
Moreover, we say that subset $Y\subset\haine2^{\mathbb M'}$ is \dfn{Medvedev-reducible} to subset $X\subset\haine2^{\mathbb M}$ if there is a computable partial function $\Phi:\haine2^{\mathbb M}\pto\haine2^{\mathbb M'}$ such that $\dom(\Phi) \supseteq X$ and $\Phi(X)\subseteq Y$. This also yields a pre-order over sets, whose equivalence classes are called \dfn{Medvedev degrees}.
Finally, we say that subset $Y\subset\haine2^{\mathbb M'}$ is \dfn{Mu\v cnik-reducible} to subset $X\subset\haine2^\mathbb M$ if every point of $X$ is Turing-reducible to some point of $Y$ (but not in a uniform way, as in Medvedev-reducibility). This again yields a pre-order over sets, whose equivalence classes are called \dfn{Mu\v cnik degrees}.
Medvedev and Mu\v cnik degrees of a set are an attempt to formalize the notion of how computationally difficult it is to compute a point of the set.
Of course, computable homeomorphism implies having the same Turing degrees, which implies Medvedev-equivalence, which in turns implies Mu\v cnik-equivalence.
We do not get too much into details, but the notion holds in the large setting of effective topological spaces (see for instance \cite{gacshoyruprojas}).
\section{Symbolic dynamics}
$\A^{\Z^d}$ is the set of $d$-dimensional \dfn{configurations}, endowed with the product of the discrete topology, and with the \dfn{shift} dynamical system $\sigma$, defined as the action of $\mathbb Z^d$ by $(\sigma^\vec{i})_{\vec{i} \in \mathbb Z^d}$, where $\sigma^\vec{i}(x)_\vec{k}\defeq x_{\vec{i}+\vec{k}}$ for any configuration $x\in\A^{\Z^d}$ and any $\vec{i},\vec{k}\in\mathbb Z^d$.
A \dfn{pattern} over a (usually finite) \dfn{support} $D\subset\mathbb Z^d$ is a map
$p\in\A^D$.
Two patterns $u_1\colon D_1 \to \A$ and $u_2 \colon D_2 \to \A$ are called \dfn{disjoint} if $D_1$ and $D_2$ are disjoint shapes of $\mathbb Z^d$. If $u_1,u_2$ are disjoint, let $u_1\vee u_2$ be the pattern over shape $D_1\sqcup D_2$ defined by $(u_1\vee u_2)(\vec{i})=u_{j}(\vec{i})$, if $\vec{i}\in D_j$, $j=1,2$. Inductively, we can define $\bigvee_{1\le i\le k}u_i$, when $u_1,\ldots,u_k$ are mutually disjoint patterns.
Let $E,D\subset\mathbb Z^2$ be two shapes, and $u\in\A^D$ be a 2D pattern. We denote by $u_E$ the restriction of $u$ to $D\cap E$ (this is a pattern with support $D \cap E$).
If $I \subseteq \mathbb Z$ and $(c_i)_{i \in I}$ is a family of configurations of $\A^{\mathbb Z}$, $|(c_i)_{i \in I}$ denotes the (possibly infinite) pattern $u \colon \mathbb Z \times I \to \A$ such that $u_{\mathbb Z \times \{i\}} = c_i$, for all $i \in I$. Here we implicitly identify patterns on horizontal strips up to vertical translation. Formally, the domains of $u_{\mathbb Z \times \{i\}}$ and $c_i$ are not the same.
If $I = \co{0}{n}$, then $|(c_0,\ldots,c_{n-1})$ is the horizontal strip of width $n$ obtained by putting $c_0, \ldots, c_{n-1}$ on top of each other (in this order). If $I = \mathbb Z$, then we obtain a configuration in $\A^{\mathbb Z^2}$.
Let $x \in \A^{\Z^d}$ and $\vec S\defeq(S_0,\ldots,S_{d-1}) \in {\mathbb N}_1^{d}$. The \dfn{$\vec S$-bulking} (or higher-power representation) of $x$ is the configuration $\bulk[\vec S]x\in(\A^{S_0\times\ldots\times S_{d-1}})^{\mathbb Z^d}$ such that for any $\vec i=(i_0,\ldots,i_{d-1})\in\mathbb Z^d$,
\begin{equation*}
{\bulk[\vec S]x}_{\vec i}\defeq x_{\co{i_0S_0}{(i_0+1)S_0}\times\ldots\times\co{i_{d-1}S_{d-1}}{(i_{d-1}+1)S_{d-1}}}.
\end{equation*}
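In the 1D case, the bulking operation can be sketched directly (configurations modelled as Python functions $\mathbb Z\to\A$; this is only an illustration of the definition above, not part of the formal development):

```python
# Minimal sketch of S-bulking in dimension 1: bulk(x, S) groups S
# consecutive cells of x into a single super-cell.

def bulk(x, S):
    """Return the S-bulking of a 1D configuration x : Z -> A."""
    return lambda i: tuple(x(i * S + k) for k in range(S))

# Example: x_j = j mod 3, bulked with S = 2.
x = lambda j: j % 3
y = bulk(x, 2)
# y_0 = (x_0, x_1) and y_{-1} = (x_{-2}, x_{-1}).
```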
A ($d$-dimensional) \dfn{subshift} is a closed set $X\subset\A^{\Z^d}$ such that $\sigma^\vec{i}(X)=X$ for all $\vec{i}\in\mathbb Z^d$.
Equivalently, $X$ is a subshift if and only if there exists a family of patterns $\mathcal{F}\subset\bigcup_{D\subfini\mathbb Z^d}{\A^{D}}$ such that
\begin{equation*}
X=\set x{\A^{\mathbb Z^d}}{\forall \vec{i}\in \mathbb Z^d,\forall D\subfini\mathbb Z^d,\sigma^{\vec{i}}(x)\restr{D}\notin\mathcal{F}}.
\end{equation*}
If $\mathcal{F}$ can be chosen finite, we say that $X$ is a \dfn{subshift of finite type} (SFT).
If $\mathcal{F}$ can be chosen computably enumerable, then $X$ is called an \dfn{effective subshift}.
A continuous map $\Phi$ from subshift $X$ to subshift $Y$ is a \dfn{morphism} if $\Phi\sigma=\sigma\Phi$. If it is surjective, then it is a \dfn{factor map}, and $Y$ is a \dfn{factor} of $X$ (this defines a preorder); if it is bijective, then it is a \dfn{conjugacy}, and $X$ and $Y$ are \dfn{conjugate} (this defines an equivalence relation). A subshift $Y \subseteq \A^{\mathbb Z^d}$ is called \dfn{sofic} if it is a factor of some SFT, which is then called a \dfn{cover} for $Y$.
A configuration $x \in \A^{\mathbb Z^d}$ is called \dfn{periodic} with \dfn{period} $\vec{j} \neq \vec{0}\in \mathbb Z^d$ if $\sigma^\vec{j}(x)=x$. A subshift $X$ is called \dfn{aperiodic} if it does not contain any periodic configurations.
Abusing notation, we use the notations $\emp[w]{i_1,\ldots,i_l}$ and $\emp[n]{i_1, \ldots, i_l}$ (where $w \in \haine5^{**}$ and $n \in \mathbb N$) also for configurations. For example, if $c \in (\haine5^{**})^{\mathbb Z}$, we will say that $c \in \emp[w]{i_1,\ldots,i_l}$ if $c_i \in \emp[w]{i_1,\ldots,i_l}$ for all $i \in \mathbb Z$. Finally, for $N \in \mathbb N$ and $n \in \co{0}{N}$, let
\begin{equation*}
\per[n,N]{i_1,\ldots,i_l} \defeq \{c \in (\haine5^{**})^{\mathbb Z} \colon \bina{\hs{\pi_{i_k}(c_j)}}=j+n \mod N, \text{ for all } j \in\mathbb Z, 1 \le k \le l\}
\end{equation*}
be the set of all configurations such that
\begin{equation*}
\pi_{i_k}(c)=.\dinf{(n\ldots(N-1)01\ldots(n-1))}, \text{ for } 1 \le k \le l,
\end{equation*}
where for all $w$, $.\dinf{w}$ denotes the configuration $c$ which satisfies that $c_{\co{j\length{w}}{(j+1)\length{w}}}=w$, for all $j \in \mathbb Z$.
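As a small illustration (a sketch that ignores the encoding layers $\hs\cdot$ and $\bina\cdot$), the counter values carried by such a configuration can be generated directly:

```python
# Position j of the configuration carries the value (j + n) mod N, so
# reading one period from position 0 yields the word n (n+1) ... (N-1) 0 1 ... (n-1).

def counter_word(n, N):
    """One period of the counter configuration, read from position 0."""
    return [(j + n) % N for j in range(N)]
```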
\section{Cellular automata}\label{s:ca}
A (1D) \dfn{partial cellular automaton} (PCA) is a partial (\xpr{global}) continuous function $F:\A^{\Z}\pto\A^{\Z}$ whose domain is an SFT, and such that $F\sigma=\sigma F$.
Equivalently by some extension of the so-called Curtis-Lyndon-Hedlund theorem, there exist a \dfn{neighbourhood} $V\subfini\mathbb Z$ and a partial \dfn{local rule} $f:\A^{V}\pto\A$ such that for all $z\in\A^{\Z}$, $F(z)$ is defined if and only if $f(z\restr{i+V})$ is defined for all $i\in\mathbb Z$, in which case $F(z)_i \defeq f(z\restr{i+V})$.
If $V \subseteq \cc{-r}{r}$, then $r$ is called a \dfn{radius} of the PCA. The radius of a PCA is not uniquely determined.
A PCA is called \dfn{reversible} (RPCA) if it is injective. In this case, it is known that there exists another RPCA, denoted by $F^{-1}$, such that $FF^{-1}$ and $F^{-1}F$ are restrictions of the identity, and $\dom(F^{-1})=F(\A^{\Z})$ (the argument for this is similar to the one in \cite{hedlund}). In particular, there exist so-called inverse radius and inverse local rule.
If $r$ is both a radius and an inverse radius for an RPCA $F$, we call it a \dfn{bi-radius} for $F$.
In the rest of the paper, we only consider RPCA with bi-radius $1$. This is not a significant restriction, since these PCA and RPCA exhibit the whole range of computational and dynamical properties of general PCA and RPCA.
For $t\in\mathbb N$, the $t^{\text{th}}$-\dfn{order range} of $F$ is the (sofic) subshift $\Omega_F^t\defeq F^t(\A^{\Z})\cap F^{-t}(\A^{\Z})$ and its \dfn{limit set} is the (effective) subshift $\Omega_F\defeq\Omega_F^\infty\defeq\bigcap_{t\in\mathbb N}\Omega_F^t$, containing all the configurations that are \emph{not} \dfn{ultimately rejected} (either in the past or the future).
There is a canonical way to associate a 2D SFT $\orb F$ to an RPCA $F$: it consists of the infinite space-time diagrams of the configurations that are not ultimately rejected. Formally, $\orb F\defeq\sett{\orb[x]F}{x\in\Omega_F}$, where $\orb[x]F\defeq|(F^t(x))_{t\in\mathbb Z}\in\A^{\Z^2}$ for any $x\in\Omega_F$.
One can see that $\orb F$ is conjugate to the $\mathbb Z^2$-action of $(F,\sigma)$ over $\Omega_F$.
Note nevertheless that the same SFT may correspond to distinct RPCA (if the RPCA have different transient phases, \textit{i.e.}, they reject some configurations after different amounts of steps).
A pattern $w\in\A^D$, with $D\subset\mathbb Z^2$, is \dfn{locally valid} for $f$ if for any $(i,t)\in D$ such that $C\defeq(i+\cc{-1}1)\times\{t-1\}\subset D$, we have $w_{(i,t)}=f(w\restr C)$. Note that, in general, this notion depends on the local rule and not only on the RPCA. By compactness, if there exist locally valid square patterns of arbitrarily large height and width, then $\orb F\ne\emptyset$, \textit{i.e.}, there are configurations which are never rejected. If $x\in F^{-t}(\A^{\Z})$, then $|(x,F(x),\ldots,F^t(x))$ is a \dfn{locally valid horizontal strip} of height $t+1$.
The notion of a locally-valid horizontal strip depends only on the RPCA and not on the local rule, \textit{i.e.}, it is a ``global'' notion.
For every $m\in \mathbb N$, $\vec{\delta}=(\delta_0,\ldots,\delta_{m-1})\in \{-1,0,1\}^m$,
we define the shift product $\sigma^{\vec{\delta}}
= \sigma^{\delta_0} \times \ldots \times \sigma^{\delta_{m-1}}$.
A \dfn{partial partition} (cellular) \dfn{automaton} (PPA) is a PCA $F=\sigma^{\vec\delta}\circ\alpha$ over some alphabet $\A=\A_0\times\ldots\times \A_{m-1}$, where $\alpha$ is (the parallel synchronous application of) a partial permutation of $\A$. $-\delta_i$ is called the \dfn{direction} of field $i$. The (counter-intuitive) \xpr{$-$} is due to the fact that the normal definition of $\sigma$ shifts everything to the left, while we are used to thinking of the positive direction as going to the right. So, if we want to have a field with speed $+1$, then we should apply $\sigma^{-1}$ to it.
Every PPA is an RPCA with bi-radius $1$,
and conversely every RPCA is essentially a PPA (see for instance \cite[Proposition~53]{jarkkoppa}).
Note, however, that the inverse of a PPA is not, formally, exactly a PPA: the permutation is performed after the shifts, in the form $\alpha^{-1}\circ\sigma^{-\vec{\delta}}$. Nevertheless, it is conjugate, via $\alpha$, to the corresponding PPA.
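One step of a PPA, together with its inverse, can be sketched on a finite cyclic configuration (a toy approximation of $\mathbb Z$; the permutation `alpha` and the directions below are made-up examples, not taken from the text):

```python
from itertools import product

def ppa_step(config, alpha, deltas):
    """One PPA step on a cyclic configuration: apply the partial
    permutation alpha to every cell, then shift field f by deltas[f]
    (field f has direction -deltas[f], as explained in the text)."""
    n, m = len(config), len(deltas)
    permuted = [alpha[cell] for cell in config]   # undefined cell -> KeyError
    # sigma^{delta}(y)_i = y_{i+delta}, applied field-wise
    return [tuple(permuted[(i + deltas[f]) % n][f] for f in range(m))
            for i in range(n)]

def ppa_step_inverse(config, alpha, deltas):
    """Inverse step: undo the shifts, then apply alpha^{-1}."""
    n, m = len(config), len(deltas)
    unshifted = [tuple(config[(i - deltas[f]) % n][f] for f in range(m))
                 for i in range(n)]
    inv = {v: k for k, v in alpha.items()}
    return [inv[cell] for cell in unshifted]

# Toy example: two binary fields, alpha swaps (0,0) <-> (1,1), directions (1,-1).
alpha = {c: c for c in product((0, 1), repeat=2)}
alpha[(0, 0)], alpha[(1, 1)] = (1, 1), (0, 0)
config = [(0, 0), (1, 0), (0, 1), (1, 1)]
step = ppa_step(config, alpha, (1, -1))
```

Reversibility shows up as the round trip `ppa_step_inverse(ppa_step(c, ...), ...) == c` wherever the permutation is defined.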
In order to define families of PPA that are somehow uniform, we consider the corresponding objects acting on infinite alphabets.
A \dfn{partial partition automaton with infinite alphabet} (IPPA) is a partial map $F:(\haine5^{*m})^\mathbb Z\pto(\haine5^{*m})^\mathbb Z$, where $m\in\mathbb N$, $F=(\sigma^{\delta_0}\times\ldots\times\sigma^{\delta_{m-1}})\circ\alpha$, the $\sigma^{\delta_j}$ are shifts over infinite $\haine5^*$ (that is $\sigma(y)_i=y_{i+1}$ for any $y\in(\haine5^*)^\mathbb Z$ and $i\in\mathbb Z$), and $\alpha \colon \haine5^{*m} \to \haine5^{*m}$ is a partial (infinite) \dfn{permutation}.
By restricting the domain and the co-domain of an IPPA to finite subsets of $\haine5^{*m}$, we obtain normal (finite) PPA. In our constructions, the permutation $\alpha$ will always be length-preserving and the restriction will be taken over an alphabet of the form $\haine5^{\vec{k}}$.
If $F \colon \A^{\mathbb Z} \to \A^{\mathbb Z}$ and $G \colon \B^{\mathbb Z} \to \B^{\mathbb Z}$ are PCA, then we say that $G$ is a \dfn{factor} of $F$ if there exists a continuous map $H \colon \A^{\mathbb Z} \to \B^{\mathbb Z}$ such that $GH=HF$. If $F$ and $G$ are RPCA and $F$ factors onto $G$, then it is easy to see that $\orb{F}$ factors onto $\orb{G}$ through the map that sends $\orb{x}$ to $\orb{H(x)}$, for all $x \in \A^{\mathbb Z}$. However, the notion of factoring for RPCA is stronger, since it also takes into account the transient times of the RPCA, \textit{i.e.}, the number of steps for which the image of an ultimately rejected configuration is defined before it is rejected (which are not relevant in the corresponding 2D SFTs).
Let $F_0,\ldots,F_{n-1}$ be RPCA such that $\dom(F_i) \cap \dom(F_j) = \emptyset$, for all $i \neq j$. Then, $\bigsqcup_{i\in \co{0}{n}}F_i$ denotes the map with domain $\bigsqcup_{i \in \co{0}{n}}\dom(F_i)$ that agrees with $F_i$ on $\dom(F_i)$, for all $i \in \co{0}{n}$. $\bigsqcup_{i\in \co{0}{n}}F_i$ is not always an RPCA, since there might be a configuration that is not in any $\dom(F_i)$ but that is locally everywhere in the domains (which are SFTs). However, $\bigsqcup_{i\in \co{0}{n}}F_i$ is an RPCA if the $\dom(F_i)$ are over pairwise disjoint alphabets, and this will always be the case in this paper. In this case, $\Omega_{\bigsqcup_{i\in \co{0}{n}}F_i}=\bigsqcup_{i \in \co{0}{n}} \Omega_{F_i}$ and $\orb{\bigsqcup_{i\in \co{0}{n}}F_i}=\bigsqcup_{i \in \co{0}{n}}\orb{F_i}$.
\subsection{Expansiveness}
The projective line $\Rb\defeq\mathbb R\sqcup\{\infty\}$
is seen as the set of \dfn{slopes} to the \emph{vertical} direction.
Here, quite unconventionally, the horizontal direction is represented by $\infty$ and the vertical one by $0$.
The relevance of this choice will appear later, but in any case it does not affect any set-theoretical, topological or computable property because the inversion map over $\Rb$ is a computable homeomorphism.
The projective line $\Rb$ admits a natural effective topology if seen as the quotient of the circle by central symmetry: a subset is effectively closed if the corresponding subset of the circle is effectively closed as a subset of $[0,1]^2$. This topology is equivalent to the one-point compactification of $\mathbb R$ and makes $\Rb$ a compact metric space.
Let $X$ be a 2D subshift, $l\in\Rb$ a slope and $\lin{l}\subset\mathbb R^{2}$ the corresponding vectorial line.
We say that direction $l$
is \dfn{expansive} for $X$ if there exists a bounded shape $V\subset\mathbb R^2$
such that, for all $x,y \in X$,
\[x\restr{(\lin{l}+V)\cap \mathbb Z^2}=y\restr{(\lin{l}+V) \cap \mathbb Z^2}\impl x=y~.\]
We denote by $\NE(X)$ the {set of \dfn{non-expansive} directions} (\textit{i.e.}, the set of directions that are not expansive).
The terminology comes from the fact that if $l=p/q$ is rational (or infinite), then $l$ is expansive for $X$ if and only if the dynamical system $(X,\sigma^{(p,q)})$ is expansive, in the classical sense of expansive dynamical systems.
Expansive directions were first introduced by Boyle and Lind \cite{expsubd} in a more general setting.
The following fact is a particular case of \cite[Theorem~3.7]{expsubd}.
\begin{proposition}\label{p:atleastone}
Let $X$ be a 2D subshift.
Then, $\NE(X)$ is closed. In addition, $\NE(X)$ is empty if and only if $X$ is finite.
\end{proposition}
We say that $X$ is \dfn{extremely expansive} if $\card{\NE(X)}=1$, which is, according to Proposition~\ref{p:atleastone}, the most constrained non-trivial case.
In the case of SFTs (actually, of all effective subshifts), we have an additional restriction on the set of non-expansive directions that comes from computation theory, as is usually the case, see \cite{projsft, entrsft}.
A direction $l \in\Rb$ can be represented as the pair of coordinates of the intersection of the line $\lin{l}$ with the unit circle. This gives two (symmetric with respect to the origin) representations for each direction which are computably equivalent. Computability questions about expansive directions can then be transferred to computability questions about pairs of real numbers, which we already know how to deal with.
It can be noted that effectively closed subsets that do not contain $\infty$ are exactly the effectively closed subsets of $\mathbb R$.
The restriction map from $\Rb$ (with the above-defined effective topology) onto $\mathbb R$ is actually computable, and it can be noted that the pre-image of an effectively closed set by a computable function is effectively closed.
\begin{lemma}\label{lem:nonexpsftrestr}
Let $X$ be a 2D SFT. Then, $\NE(X)$ is effectively closed.
\end{lemma}
In particular, if an SFT $X$ has a unique direction of non-expansiveness, then this direction must be computable.
\begin{proof}
The statement follows from the following two facts: First, it is semi-decidable whether a direction is expansive, \textit{i.e.}, there exists a TM that takes as input a rational direction and halts if and only if the direction is expansive. This follows from \cite[Lemma 3.2]{expsubd}. Secondly, it is semi-decidable whether two \emph{expansive} directions belong to the same expansive component. (The expansive component of an expansive direction is the largest connected set that includes the direction and is included in the set of expansive directions. One can see that it is always an open interval.) This follows from \cite{nasu154}, as described in \cite[Appendix C]{opsd}.
Having these two facts in mind, it is not difficult to see that the following algorithm enumerates a sequence of intervals whose union is the complement of $\NE(X)$: For each rational direction, check whether it is expansive. Every time you find an expansive direction, check whether it is in the same component as one of the expansive directions that you have already found. Every time this is the case, output the whole interval of directions between them.
\end{proof}
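The enumeration described in the proof above can be sketched as follows. Here `is_expansive` and `same_component` stand for the two semi-decision procedures invoked in the proof; they are *assumed as oracles* (for an actual SFT they would only halt on positive instances, so this terminating loop is only an idealized sketch):

```python
from fractions import Fraction

def enumerate_expansive_intervals(is_expansive, same_component, steps):
    """Yield open rational intervals whose union exhausts the complement of NE(X),
    dovetailing over rational slopes p/k with |p| <= k <= steps."""
    found = []  # expansive rational directions discovered so far
    for k in range(1, steps + 1):
        for p in range(-k, k + 1):
            l = Fraction(p, k)
            if l in found or not is_expansive(l):
                continue
            # output the interval between l and every known direction
            # lying in the same expansive component
            for l2 in found:
                if same_component(l, l2):
                    yield (min(l, l2), max(l, l2))
            found.append(l)

# Toy oracles for the (hypothetical) case NE(X) = {0}: every nonzero slope
# is expansive, and two slopes share a component iff they have the same sign.
is_exp = lambda l: l != 0
same_comp = lambda a, b: (a > 0) == (b > 0)
intervals = list(enumerate_expansive_intervals(is_exp, same_comp, 2))
```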
A subshift $Y$ is called \dfn{extremely-expansively sofic} if there exists an extremely expansive SFT that factors onto $Y$. Since expansive directions are not preserved through block maps, an extremely-expansively sofic subshift need not be extremely expansive itself. In fact, as we will see, there exist extremely-expansively sofic subshifts that do not have any direction of expansiveness.
\begin{lemma}\label{lem:basicstuffaboutNE}
Let $X_0,X_1,\ldots$ be 2D subshifts over the same alphabet $\A$.
\begin{itemize}
\item If $X_0\subseteq X_1$, then $\NE(X_0)\subseteq\NE(X_1)$.
\item If $\NE(X_0)\cap\NE(X_1)=\emptyset$, then $X_0\cap X_1$ is a finite subshift.
\item If $\bigsqcup_wX_w$ is a closed disjoint (possibly uncountable) union, then $\NE(\bigsqcup_wX_w)=\bigcup_w\NE(X_w)$.
\end{itemize}\end{lemma}
\begin{proof}
The first claim follows immediately from the definitions.
For the proof of the second claim, we have that $\NE(X_0 \cap X_1) \subseteq \NE(X_0) \cap \NE(X_1) = \emptyset$ according to the first claim. Therefore, $\NE(X_0 \cap X_1) = \emptyset$, and since $X_0 \cap X_1$ is a subshift, Proposition~\ref{p:atleastone} gives that it is finite.
Finally, for the last claim, the inclusion $\NE(X_w) \subseteq \NE(\bigsqcup_wX_w)$ comes from the first point.
For the other inclusion, assume that $l\in\NE(\bigsqcup_wX_w)$. Then, there exist $x,y\in\bigsqcup_wX_w$ which coincide over an open half-plane $H_l \subseteq \mathbb R^2$ of slope $l$ and disagree somewhere outside it. The orbits of $x$ and $y$ under the shift action have a common limit point $z$, which belongs to the intersection $X_x \cap X_y$, where $X_x$ and $X_y$ denote the subshifts of the union that contain $x$ and $y$, respectively. By disjointness, we get that $X_x=X_y=X_{w'}$, for some $w'$, which means that $l \in \NE(X_{w'})$.
\end{proof}
If $F$ is an RPCA, then we denote $\NE(F)\defeq\NE(\orb F)$.
It is straightforward that the horizontal direction (which according to our definition is $\infty$) is expansive for $F$. It is not much more complicated to see that, if the bi-radius is $1$,
$\NE(F) \subseteq [-1,1]$ (directions around the horizontal are expansive).
Conversely, it can be shown that, up to a recoding, every 2D SFT for which the horizontal direction is expansive is equal to $\orb F$, for some RPCA $F$.
\chapter{Simulation}\label{s:simul}
\section{Simulation}
If $S,T\in{\mathbb N}_1$ and $Q\in\mathbb Z$, we say that RPCA $F:\A^{\Z}\pto\A^{\Z}$ \dfn{$(S,T,Q)$-simulates} RPCA $G:\B^{\Z}\pto\B^{\Z}$ if there is a partial continuous \dfn{decoding} surjection $\Phi:\A^{\Z}\pto\B^{\Z}$ such that $\sigma\Phi=\Phi\sigma^S$, $G\Phi=\Phi\sigma^QF^T$, $G^{-1}\Phi=\Phi\sigma^{-Q}F^{-T}$ and the \dfn{simulating} subshift $\rock\Phi\defeq\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t(\dom(\Phi))$ is a disjoint union.
In other words, $1$ step of $G$ is encoded into $T$ steps of $F$, up to some shift by $Q$, and the intermediary steps used are not valid encodings.
We write $F\simu[S,T,Q,\Phi]G$, or, when some parameters are clear from the context or not so important, $F\simu[S,T,\Phi]G$, $F\simu[S,T,Q]G$, $F\simu[S,T]G$, or $F\simu G$ (each time this symbol is used, $F$ and $G$ are meant to be RPCA).
We remind the reader that according to our notations, $\sigma\Phi=\Phi\sigma^S$ and $G\Phi=\Phi\sigma^QF^T$ and $G^{-1}\Phi=\Phi\sigma^{-Q}F^{-T}$ imply that the domains of the two partial functions are identical. This is in fact crucial for understanding the notion of simulation and it will be used extensively in the proofs and constructions to come. For example, this means that the equality $G\Phi=\Phi\sigma^QF^T$ does not immediately imply $G^{-1}\Phi=\Phi\sigma^{-Q}F^{-T}$, because the domains of $G^{-1}\Phi$ and $\Phi\sigma^{-Q}F^{-T}$ might be different (if we only had the equality $G\Phi=\Phi\sigma^QF^T$, it could happen that $x \in \dom(G^{-1}\Phi)$ but $x \notin F^T\sigma^Q(\A^\mathbb Z)$).
In fact, one can see that the pair of conditions $G\Phi=\Phi\sigma^QF^T$ and $G^{-1}\Phi=\Phi\sigma^{-Q}F^{-T}$ is equivalent to the triple of conditions $G\Phi=\Phi\sigma^QF^T$, $\dom(G\Phi)=\dom(\Phi\sigma^{-Q}F^{-T})$ and $\dom(G^{-1}\Phi) = \dom(\Phi F^{T} \sigma^Q)$.
$F$ \dfn{exactly simulates} $G$ if $\Phi$ is actually bijective.
In other words, there exists a well-defined \dfn{encoding} function $\Phi^{-1}:\B^{\Z}\to\dom(\Phi)$.
$F$ \dfn{completely simulates} $G$ if, besides, $\Omega_F\subset\rock\Phi$.
In other words, every bi-infinite orbit of $F$ will eventually encode some orbit of $G$.
Actually, in our constructions we will even have the stronger property $\dom(F^{t'})\subset\rock\Phi$, for some $t' \in \mathbb Z$.
\begin{remark}\label{r:simsynchr}~\begin{enumerate}
\item $\dom(\Phi)=\sigma^S(\dom(\Phi))$.
\item $F\simu[S,T,DS]G$ if and only if $F\simu[S,T,0]\sigma^{D}G$.
\item For any $s\in\co0S,t\in\co0T$, $\bulk[S]{\sigma^sF^t(\dom(\Phi))}$ is an SFT.
\item Since the union $\rock\Phi$ is disjoint, there exists a shape $U\subfini\mathbb Z$ such that for any $x\in\rock\Phi$, $x\restr U$ determines the (unique) $s\in\co0S$ and $t\in\co0T$ such that $x\in\sigma^sF^t(\dom(\Phi))$.
\end{enumerate}\end{remark}
\begin{proof}
The first two claims follow immediately from the definitions.
For the third claim, notice that since $\Phi$ is continuous and $\sigma \Phi = \Phi \sigma^S$, $\dom(\Phi)$ is the domain of a PCA over $\bulk[S]{\A^{\mathbb Z}}$, so it is an SFT. Since $\sigma$ and $F$ are invertible maps and the property of being an SFT is preserved under invertible maps, we have that $\bulk[S]{\sigma^sF^t(\dom(\Phi))}$ is an SFT for all $s \in \co{0}{S}$ and $t \in \co{0}{T}$.
The last claim follows easily from the disjointness using a classical compactness argument.
\end{proof}
We can prove an analogue of the Curtis-Lyndon-Hedlund theorem for decoding and encoding functions.
\begin{remark}\label{r:simhedlund}
The decoding function $\Phi$ admits a {neighbourhood} $V\subfini\mathbb Z$ and a partial \dfn{bulked local rule} $\phi:\A^{V}\pto\B$ such that for all $x\in\A^{\Z}$, $\Phi(x)$ is defined if and only if $\phi(x\restr{iS+V})$ is defined for any $i\in\mathbb Z$, in which case the latter is equal to $\Phi(x)_i$.
\\
If the simulation is exact, the encoding function $\Phi^{-1}$ admits a {neighbourhood} $V\subfini\mathbb Z$ and a partial \dfn{unbulked local rule}, abusively noted $\phi^{-1}:\B^{V}\pto\A^S$ such that for all $y\in\B^\mathbb Z$, $\Phi^{-1}(y)$ is defined if and only if $\phi^{-1}(y\restr{i+V})$ is defined for any $i\in\mathbb Z$, in which case the latter is equal to $\Phi^{-1}(y)_{\co{iS}{(i+1)S}}$.
\end{remark}
Exact complete vertical (\textit{i.e.}, $Q=0$) simulation is stronger than most notions found in the literature. In particular:
\begin{itemize}
\item $\orb F$ simulates $\orb G$ in the sense of \cite{drs}.
\item The $\mathbb Z^2$-action $(F,\sigma)$ over the limit set $\Omega_F$ (or the 2D SFT $\orb F$) is conjugate to a suspension of $\Omega_G$ in the sense of a homeomorphism \[\appl[\Psi]{\Omega_F}{\Omega_G\times\co0S\times\co0T}x{(\Phi F^{-t}\sigma^{-s}(x),s,t),~\text{where}~F^{-t}\sigma^{-s}(x) \in \dom(\Phi)}\]
\item The $\mathbb Z^2$-action $(G,\sigma)$ over the limit set $\Omega_G$ (or the 2D SFT $\orb G$) is conjugate to the $\mathbb Z^2$-action $(F^T,\sigma^S)$ restricted to $\dom(\Phi) \cap \Omega_F$ (see \cite{gacs});
\item $G$ is a sub-automaton of a rescaling of $F$, so that $F$ simulates $G$ according to the definition of simulation given in \cite{ollingersimulation}. While it is not necessary to formally define this notion of simulation, we can intuitively say that rescaling corresponds to the role of parameters $S$ and $T$ in our definition, while the sub-automaton condition corresponds to the decoding function $\Phi$. We notice, however, that Ollinger's definition is more general than ours, since it does not require $\rock\Phi$ to be a disjoint union, while the simulated automaton can also be rescaled.
\end{itemize}
But the definition above also involves the transient part: every locally valid horizontal strip of height $t+1$ for $G$ gives a locally valid horizontal strip of height $Tt+1$ for $F$.
The following facts about our notion of simulation follow directly from the definition:
\begin{itemize}
\item Each kind of simulation is a conjugacy invariant.
\item If $F$ simulates $G$\resp{exactly}, then it simulates\resp{exactly} any of its subsystems (but clearly, completeness is not preserved). If $F$ factors onto $G$, then $F \simu[1,1,0] G$ completely.
\item $F\times G\simu[1,1,0]F$ completely if $G$ does not have empty domain. Also $F\times G\simu[1,1,0]F$ exactly if $G$ includes a singleton subsystem (recall that $G$ is a PCA, so that it does not necessarily have periodic points). The simulation is simultaneously exact and complete if $G$ is a singleton system.
\item The surjectivity of $\Phi$ implies that only systems with empty domain can be simulated by systems with empty domain.
\end{itemize}
We will mainly focus on \dfn{non-trivial} simulations: this means that $S,T>1$ and $G$ does not have empty domain.
\begin{remark}\label{r:locvalid}
If $F\simu[S,T,Q,\Phi]G$ non-trivially, then for all $j \in \cc{0}{T}$,
$F^{j}(\dom(G\Phi))=\sigma^{-Q}F^{-(T-j)}(\dom(G^{-1}\Phi)) \neq \emptyset$.
\end{remark}
More specifically, a configuration \xpr{in the middle} of the work period (\textit{i.e.}, when $j = \ipart{T/2}$) has at least $\ipart{T/2}$ forward and backward images; in other words, it belongs to $\Omega_F^{\ipart{T/2}}$.
The following lemma states that the limit sets correspond, in the case of complete simulation. It is a more mathematical and detailed version of the earlier comment that a locally valid horizontal strip of height $t+1$ in $G$ gives a locally valid horizontal strip of height $Tt+1$ in $F$ (provided that the strip is simulated).
\begin{lemma}\label{l:simlim}
Assume $F\simu[S,T,Q,\Phi]G$.
\begin{enumerate}
\item\label{i:simo} If $j\in\mathbb N\sqcup\{\infty\}$, then
\[\rock[j]\Phi\defeq\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t\Phi^{-1}(\Omega_G^j)
\] is a disjoint union and a subshift, included in $\Omega_F^{(j-1)T+1}$. In addition, $\rock[j]\Phi\supset\rock[j+1]\Phi$ and $\rock[\infty]\Phi=\bigcap_{j\in\mathbb N}\rock[j]\Phi$.
\item $\Omega_F\supset\rock[\infty]\Phi$.
\item\label{i:limlim} If the simulation is complete, then $\Omega_F=\rock[\infty]\Phi$.
\end{enumerate}\end{lemma}
\begin{proof}~\begin{enumerate}
\item It is clear that $\rock[j]\Phi$ is a disjoint union and a subshift, each subset in the union being (syntactically) included in one of the subsets in the expression of $\rock\Phi$.
Assume that $F\simu[S,T,Q,\Phi]G$ for some $Q\in\mathbb Z$.
Now,
\begin{eqnarray*}
\Phi^{-1}(\Omega_G^j)&=&
\dom(G^j\Phi)\cap\dom(G^{-j}\Phi)
\\&=&
\dom(\Phi\sigma^{jQ}F^{jT})\cap\dom(\Phi\sigma^{-jQ}F^{-jT})
\\&\subset&
\dom(F^{jT})\cap\dom(F^{-jT})=\Omega_F^{jT}~.
\end{eqnarray*}
Hence, for any $s\in\co0S$ and any $t\in\co0T$, $\sigma^sF^t\Phi^{-1}(\Omega_G^j)\subset\Omega_F^{jT-T+1}$.
The other claims follow from the definitions.
\item It is obvious from the previous point that $\bigcap_{j\in\mathbb N}\Omega_F^{jT}\supset\bigcap_{j\in\mathbb N}\rock[j]\Phi=\rock[\infty]\Phi$.
\item Conversely, assume $x\in\Omega_F$, so that clearly $\forall k\in\mathbb Z,F^k(x)\in\Omega_F$. By completeness, there exist $y\in\dom(\Phi)$ and $s\in\co0S,t\in\co0T$ such that $\sigma^sF^t(y)=x$. Disjointness and a direct induction give that for all $k\in\mathbb Z$, $F^k(y)\in F^{k\bmod T}\sigma^{Q\lfloor k / T \rfloor}(\dom(\Phi))$. In particular, for all $j\in\mathbb Z$, $G^j\Phi(y)=\Phi F^{jT}\sigma^{jQ}(y)$ is defined. This gives that $\Phi(y)\in\Omega_G$, so $x \in \sigma^{-s}F^{-t}\Phi^{-1}(\Omega_G)= \sigma^{S-s}F^{T-t}\Phi^{-1}(\Omega_G)$.
\end{enumerate}\end{proof}
The following remark links the periodic points of the simulating and simulated systems. It is essential for proving aperiodicity of the subshifts that we construct. The same result appears in \cite{drs,twobytwo}, even though the argument essentially goes back to the kite-and-dart tile set of Penrose. We give a slightly more general version of the usual result, which also takes the shift by $Q$ into consideration.
\begin{remark}\label{r:penrose}
If $F\simu[S,T,Q]G$ completely, then $\orb F$ admits a configuration with period $(s-lQ,t)$ if and only if $\orb G$ admits a configuration with period $(k,l)$, where $s=kS$ and $t=lT$.
\end{remark}
We will only use the case $Q=0$, for which it is intuitively clear that it holds. When $Q \neq 0$, one has to keep in mind that for every $T$ time steps of a configuration of $F$, the simulated configuration is shifted $Q$ steps to the left.
\section{Nested simulations}
In the sequel, we will be most interested in infinite sequences of simulations of the form: $F_0 \simu F_1 \simu F_2 \simu \ldots$. This looks like a formidable task, since every RPCA of the sequence must contain the information about an infinite number of configurations and update this information within a determined time, but, as the results of this section will imply, an infinite sequence of simulations gives RPCA with very useful properties. The construction of these sequences forms the basic part of our constructions and will be done in the following chapters.
If $\vec{S}=(S_i)_{ 0 \le i \le n-1}$ is a sequence of numbers, then $\vec{1S}$ is the sequence whose first element is equal to $1$ with the elements of $\vec{S}$ shifted by one after it.
If $\vec{S}=(S_i)_{ 0 \le i \le n-1}$ and $\vec{T}=(T_i)_{ 0 \le i \le n-1}$ are finite sequences of non-zero numbers, then $\vec{1S/T}$ is the sequence $(S_{i-1}/T_i)_{0 \le i \le n-1}$, where $S_{-1}\defeq 1$.
A short calculation shows that
\begin{equation*}
\anib[\vec{1S/T}]{\vec{Q}}\prod T_i = \sum_{0 \le i \le n-1} \left(Q_i\prod_{0 \le j<i} S_j \prod_{i < j \le n-1} T_j\right).
\end{equation*}
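To fix ideas, this identity can be checked numerically under the reading of $\anib[\vec b]{\vec Q}$ as the weighted sum $\sum_i Q_i\prod_{j\le i}b_j$ (this reading and the concrete \texttt{anib} below are illustrative assumptions, not part of the construction); for $n=2$ it reduces to $Q_0T_1+S_0Q_1$, matching the composite shift $QT'+SQ'$ derived below. A minimal sketch:

```python
from fractions import Fraction
from math import prod

def anib(b, Q):
    """Assumed reading: weighted sum  sum_i Q[i] * b[0] * ... * b[i]."""
    total, weight = Fraction(0), Fraction(1)
    for bi, qi in zip(b, Q):
        weight *= bi
        total += qi * weight
    return total

S = [4, 3, 5]          # S_0, S_1, S_2
T = [2, 7, 6]          # T_0, T_1, T_2
Q = [1, 2, 3]          # Q_0, Q_1, Q_2

# the vector 1S/T: its i'th entry is S_{i-1}/T_i, with S_{-1} = 1
oneS_over_T = [Fraction(([1] + S)[i], T[i]) for i in range(len(T))]

lhs = anib(oneS_over_T, Q) * prod(T)
rhs = sum(Q[i] * prod(S[:i]) * prod(T[i + 1:]) for i in range(len(Q)))
assert lhs == rhs
```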
\begin{lemma}\label{l:simtrans}
Simulation\resp{exact, complete, exact and complete} is a preorder.
\\ More precisely, if $F_0\simu[S_0,T_0,Q_0,\Phi_0]F_1\simu[S_1,T_1,Q_1,\Phi_1]\ldots\simu[S_{n-1},T_{n-1},Q_{n-1},\Phi_{n-1}]F_n$\resp{exactly, completely} for some $n\in\mathbb N$, then $F_0\simu[S,T,Q,\Phi]F_n$\resp{exactly, completely}, where
\begin{equation*}
(S,T,Q,\Phi) = (\prod S_i,\prod T_i,{\anib[\vec{1S/T}]{\vec{Q}}\prod T_i},\Phi_{n-1}\cdots\Phi_0).
\end{equation*}
\end{lemma}
The products range from $0$ to $n-1$. If there were no shifts in the simulations (\textit{i.e.}, if $Q_i=0$ for all $i$), the above statement would be more or less trivial. Even in the presence of shifts, the proof is essentially a simple verification.
\begin{proof}~\begin{itemize}
\item Clearly $F\simu[1,1,0,\id]F$.
\item Now suppose
$F\simu[S,T,Q,\Phi]G\simu[S',T',Q',\Phi']H$.
Then it is clear that $\sigma\Phi'\Phi=\Phi'\sigma^{S'}\Phi=\Phi'\Phi\sigma^{S'S}$ and $H\Phi'\Phi=\Phi'\sigma^{Q'}G^{T'}\Phi=\Phi'\Phi\sigma^{QT'+SQ'}F^{T'T}$.
Moreover:
\begin{eqnarray*}
\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t(\mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi))
&\supset&
\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t\Phi^{-1}(\bigsqcup_{\begin{subarray}c0\le t'<T'\\0\le s'<S'\end{subarray}}\sigma^{s'}G^{t'}(\mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi')))\\
&=&\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\bigsqcup_{\begin{subarray}c0\le t'<T'\\0\le s'<S'\end{subarray}}\sigma^sF^t\Phi^{-1}(\sigma^{s'}G^{t'}(\mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi')))\\
&=&\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\bigsqcup_{\begin{subarray}c0\le t'<T'\\0\le s'<S'\end{subarray}}F^{t+t'T}\sigma^{s+s'S+t'Q}(\mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi'\Phi))\\
&=&\bigsqcup_{\begin{subarray}c0\le t<TT'\\0\le s<SS'\end{subarray}}\sigma^sF^t(\mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi'\Phi))\eqdef\rock{\Phi'\Phi}~.
\end{eqnarray*}
This proves that $F\simu[SS',TT',QT'+SQ',\Phi'\Phi]H$.
\item If $\Phi$ and $\Phi'$ are bijections, then $\Phi'\Phi$ is also a bijection.
\item If both simulations are complete, then by Point \ref{i:limlim} of Lemma \ref{l:simlim},
\begin{eqnarray*}
\Omega_F&=&\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t\Phi^{-1}(\Omega_G)\\
&\subset&\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t\Phi^{-1}(\rock{\Phi'})\\
&=&\rock{\Phi'\Phi}
\end{eqnarray*}
\item A direct induction gives the expected results.
\end{itemize}\end{proof}
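The bookkeeping of the composite parameters $(SS',TT',QT'+SQ')$ can be sanity-checked on a drastically simplified toy model, where configurations are rationals, every system is a translation, and decoding is division by $S$ (the model and all names are illustrative assumptions, not the RPCA setting):

```python
from fractions import Fraction

# Toy model (illustrative assumptions, not the RPCA setting): configurations
# are rational numbers, the shift sigma is x -> x + 1, and a system F is the
# translation x -> x + a.  A decoding function Phi grouping S cells into one
# is modelled by x -> x / S: it satisfies Phi o sigma^S = sigma o Phi, and
# the simulation relation G o Phi = Phi o sigma^Q o F^T forces G to be the
# translation by (Q + T*a) / S.

def simulated(a, S, T, Q):
    """Translation constant of the system simulated by x -> x + a."""
    return (Q + T * a) / Fraction(S)

a = Fraction(3)          # F translates by 3
S0, T0, Q0 = 4, 5, 2     # parameters of the first simulation
S1, T1, Q1 = 3, 7, 6     # parameters of the second simulation

g = simulated(a, S0, T0, Q0)            # F simulates G ...
h = simulated(g, S1, T1, Q1)            # ... and G simulates H

# composing in one step with the parameters of the lemma
h_direct = simulated(a, S0 * S1, T0 * T1, Q0 * T1 + S0 * Q1)

assert h == h_direct
```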
Similarly to a single simulation, which involves decomposing each configuration according to the shifted grid on which the encoding is read, a sequence of simulations involves a nested decomposition, which gives a full skeleton inside each configuration, as expressed by the following lemma. Here, and in the following, we use gothic letters to denote sequences, but the corresponding normal letters to denote the elements of the sequences. Also, if $\seq S$ is an infinite sequence and $n \in \mathbb N$, then $\seq{S}_{\co{0}{n}}$ is the finite prefix of length $n$ of $\seq{S}$. Finally, if $(\Phi_i)_{i \in \mathbb N}$ is a sequence of decoding functions, then $\Phi_{\co{0}{n}}$ will be the decoding function $\Phi_{n-1} \cdots \Phi_0$.
\begin{lemma}\label{l:nonvide}~\begin{enumerate}
\item\label{i:infsim} If $F_0\simu[S_0,T_0,\Phi_0]F_1\simu[S_1,T_1,\Phi_1]\ldots\simu[S_{n-1},T_{n-1},\Phi_{n-1}]F_n\simu[S_n,T_n,\Phi_n]\ldots$ and $j\in\mathbb N\sqcup\{\infty\}$, then
\begin{eqnarray*}
\rock[j]{\seq\Phi} &\defeq &\bigcap_{n\in\mathbb N}\rock[j]{\Phi_{\co0n}}\\ & = &\bigsqcup_{\begin{subarray}c\seq t\in\prod_{i\in\mathbb N}\co0{T_i}\\\seq s\in\prod_{i\in\mathbb N}\co0{S_i}\end{subarray}}\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{\seq{s}_{\co0n}}}F_0^{\anib[\seq T]{\seq{t}_{\co0n}}}\Phi_0^{-1}\cdots\Phi_{n-1}^{-1}(\Omega_{F_n}^j)
\end{eqnarray*}
is a disjoint union and a subshift.
\\
In addition, $\rock[j]{\seq\Phi}\supset\rock[j+1]{\seq\Phi}$ and $\rock[\infty]{\seq\Phi}=\bigcap_{j\in\mathbb N}\rock[j]{\seq\Phi}$.
\item\label{i:nonempty} If, besides, all simulations are nontrivial, then $\rock[2]{\seq\Phi}=\rock[\infty]{\seq\Phi}\subset\Omega_{F_0}$ is uncountable.
\item If the simulations (in the hypothesis of Point \ref{i:infsim}) are complete, then $\rock[\infty]{\seq\Phi}=\Omega_{F_0}$.
\item\label{i:skel} If the sequence $(\Phi_n)_{n \in \mathbb N}$ is computable, then the map $x \in \rock[\infty]\Phi \to (s_i,t_i)_{i \in \mathbb N}$, where $(s_i,t_i)_{i \in \mathbb N}$ is the (unique) sequence such that $x\in\bigcap_n\sigma^{\anib[\seq S]{\seq{s}_{\co0n}}}F_0^{\anib[\seq T]{\seq{t}_{\co0n}}}\mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi_{\co{0}{n}})$, is computable.
\end{enumerate}\end{lemma}
Point \ref{i:nonempty} implies nonemptiness of $\Omega_{F_0}$ and $\orb{F_0}$, and of any $\Omega_{F_n}$, since all those statements can be applied to the sequence starting from $n$.
Point \ref{i:skel} states that we can always recover the skeleton from a valid configuration. In particular the skeleton map is continuous.
\begin{proof}~\begin{enumerate}
\item By Lemma \ref{l:simtrans} and compactness, it is clear that $\rock[j]{\seq\Phi}$ is a subshift. The equality is easy to check. We can see that the union is disjoint: if $(\seq s,\seq t)\ne(\seq{s'},\seq{t'})$, say $(s_m,t_m)\ne(s'_m,t'_m)$, then $\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{\seq{s}_{\co0n}}}F_0^{\anib[\seq T]{\seq{t}_{\co0n}}}\Phi_0^{-1}\cdots\Phi_{n-1}^{-1}(\Omega_{F_n}^j)$
is included in $\sigma^{\anib[\seq S]{\seq{s}_{\co0m}}}F_0^{\anib[\seq T]{\seq{t}_{\co0m}}}\Phi_0^{-1}\cdots\Phi_{m-1}^{-1}(\Omega_{F_m}^j)$, which is, according to Lemma~\ref{l:simtrans}, disjoint from
$\sigma^{\anib[\seq S]{\seq{s}'_{\co0m}}}F_0^{\anib[\seq T]{\seq{t}'_{\co0m}}}\Phi_0^{-1}\cdots\Phi_{m-1}^{-1}(\Omega_{F_m}^j)$ which includes \\$\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{\seq{s}'_{\co0n}}}F_0^{\anib[\seq T]{\seq{t}'_{\co0n}}}\Phi_0^{-1}\cdots\Phi_{n-1}^{-1}(\Omega_{F_n}^j)$
~.
\item
Since for any $n\in\mathbb N$ and $m\ge n$, $F_n\simu[S_n\cdots S_{m-1},T_n\cdots T_{m-1},\Phi_{m-1}\cdots\Phi_n]F_{m}$, then Point \ref{i:simo} of Lemma \ref{l:simlim} says that $\Omega_{F_n^{T_n\cdots T_{m-1}+1}}\supset\rock[2]{\Phi_{\co nm}}~.$
If the simulations are nontrivial, then $T_n\cdots T_{m-1}\to\infty$ when $n$ is fixed and $m \to \infty$, and
$\Omega_{F_n}\supset\bigcap_{m\in\mathbb N}\Omega_{F_n^{T_n\cdots T_{m-1}+1}}\supset\bigcap_{m\in\mathbb N}\rock[2]{\Phi_{\co nm}}=\rock[2]{\Phi_{\co n{\infty}}}~.$
Injecting this inclusion in the definition of $\rock[\infty]{\Phi_{\co0n}}$ gives that
$\rock[\infty]{\seq\Phi}\supset\bigcap_{m\ge 0}\rock[2]{\Phi_{\co0m}}\supset\rock[2]{\seq\Phi}.$
The converse is trivially true, and Point \ref{i:limlim} of Lemma \ref{l:simlim} already tells us that $\rock[\infty]{\seq\Phi}\subset\Omega_{F_0}$.
\\
Moreover, since $F_n\simu[S_nS_{n+1},T_nT_{n+1}]F_{n+2}$ with $T_nT_{n+1}\ge 4$, then Remark \ref{r:locvalid} gives that $\Omega_{F_n}^2$ is non-empty.
Therefore, each of the uncountably many subsets in the disjoint union expressing $\rock[2]{\seq\Phi}$ is a decreasing intersection of non-empty closed sets, hence non-empty by compactness.
\item If $n\in\mathbb N$ is such that $F_0\simu[S_0\cdots S_{n-1},T_0\cdots T_{n-1},\Phi_{n-1}\cdots\Phi_0]F_n$ completely, then
by Point \ref{i:limlim} of Lemma \ref{l:simlim}, $\Omega_{F_0}=\rock[\infty]{\Phi_{\co0n}}$.
\item This follows from repeated application of Remark~5.4 and the fact that $\Phi_{\co{0}{n}}$ is a decoding function for all $n \in \mathbb N$.
\end{enumerate}\end{proof}
The following extends Lemma \ref{l:nonvide} (which can be recovered by taking the $\B_i$ to be singletons). In this case, every RPCA simulates a disjoint union of RPCA, each of which simulates a disjoint union of RPCA, and so on. In this way, we obtain an \xpr{infinite tree} of simulations. Along any branch of this tree, Lemma~\ref{l:nonvide} holds, but, more importantly, something similar is true even when we take all the (possibly uncountably many) branches of this tree together.
\begin{lemma}\label{l:nonvides}~\begin{enumerate}
\item\label{i:uncsim} Let $(\B_n)_{n\in\mathbb N}$ be a sequence of finite alphabets, such that for any word $u\in\prod_{i<n}\B_i$ of length $n\in\mathbb N$, there exist $S_u,T_u, Q_u \in \mathbb N$, a decoding function $\Phi_u$ and a RPCA $F_u$ such that
$F_u\simu[S_u,T_u,\Phi_u]\bigsqcup_{b\in\B_n}F_{ub}$.
Let $\rocks[j]z{\seq\Phi}\defeq\bigcap_{n\in \mathbb N}\rock[j]{\Phi_{z_{\co0n}}}$ for all $j\in\mathbb N\sqcup\{\infty\}, z\in\prod_{i\in\mathbb N}\B_i$.
Then, for any $j\in\mathbb N \sqcup \{\infty\}$ and any closed $Y\subset \prod_{i\in\mathbb N}\B_i$,
$\rocks[j]Y{\seq\Phi}\defeq\bigsqcup_{z\in Y}\rocks[j]z{\seq\Phi}$
is a disjoint union and a subshift, and $\rocks[2]Y{\seq\Phi}=\rocks[\infty]Y{\seq\Phi}\subset \Omega_{F_\motvide}$.
\item Besides, the set $Z\defeq\set z{\prod_{i\in\mathbb N}\B_i}{\rocks[\infty]z{\seq\Phi}\ne\emptyset}$ corresponding to nested nontrivial, non-empty simulations is closed.
If the simulations are complete, then $\rocks[2]Z{\seq\Phi}=\rocks[\infty]Z{\seq\Phi}=\Omega_{F_\motvide}$.
\end{enumerate}\end{lemma}
In the above statement, the notation $\Phi^z_{{\co0n}}$ stands for the composition $\Phi_{z_{\co{0}{n}}}\cdots\Phi_{z_0}\Phi_{\motvide}$, which is the decoding function from $F_{z_{\co{0}{n}}}$ onto $F_{\motvide}$.
\begin{proof}~\begin{enumerate}
\item
Point \ref{i:nonempty} of Lemma \ref{l:nonvide} gives that $\rocks[\infty]z{\seq\Phi}\ne\emptyset$ if $F_{z_{\co0n}}\simu F_{z_{\cc{0}{n+1}}}$ nontrivially for any $n\in\mathbb N$, \textit{i.e.}, all these RPCA have non-empty domain. The converse is obvious.
\\
By the same distributivity of decreasing intersections over unions as for Point \ref{i:infsim} of Lemma~\ref{l:nonvide}, it can be easily seen that
\[\rocks[j]{Y}{\seq\Phi}=\bigcap_{n\in\mathbb N}\bigsqcup_{u\in\mathcal L_n(Y)}\bigsqcup_{\begin{subarray}c0\le t<\prod_{i<n}T_{u_{\co0i}}\\0\le s<\prod_{i<n}S_{u_{\co0i}}\end{subarray}}\sigma^sF_\motvide^t\Phi_\motvide^{-1}\Phi_{u_0}^{-1}\cdots\Phi_{u}^{-1}(\Omega_{F_u}^j)~,\]
which is a decreasing intersection of finite unions of subshifts, and we have $\rocks[2]z{\seq\Phi}=\rocks[\infty]z{\seq\Phi}\subset\Omega_{F_\motvide}$ for all $z\in Y$.
\item If $F_u\simu\bigsqcup_{a\in\B_n}F_{ua}$ completely, then Point \ref{i:limlim} of Lemma \ref{l:simlim} gives
\[\Omega_{F_u}=\bigsqcup_{\begin{subarray}c0\le t<T_u\\0\le s<S_u\end{subarray}}F_u^t\sigma^s\Phi_u^{-1}(\bigsqcup_{a\in\B_n}\Omega_{F_{ua}})~.\]
An immediate induction gives for any $n\in\mathbb N$,
\[\Omega_{F_\motvide}=\bigsqcup_{u\in\mathcal L_{n+1}(Z)}\bigsqcup_{\begin{subarray}c0\le t<\prod_{i<n}T_{u_{\co0i}}\\0\le s<\prod_{i<n}S_{u_{\co0i}}\end{subarray}}\sigma^sF_\motvide^t\Phi_\motvide^{-1}\Phi_{u_0}^{-1}\cdots\Phi_{u_{\co0n}}^{-1}(\Omega_{F_u})~.\]
Being true for any $n$, this gives the result.
\end{enumerate}\end{proof}
Lemmas~\ref{l:nonvide} and~\ref{l:nonvides} can be seen as extensions of Lemma~\ref{l:simlim} in the case of an infinite nested simulation. The following lemma can be seen as such an extension of Remark~\ref{r:penrose}.
\begin{lemma}\label{lem:aperiodichierarchy}
If $F_0\simu[S_0,T_0]F_1\simu[S_1,T_1]\ldots\simu[S_{n-1},T_{n-1}]F_n\simu[S_{n},T_{n}]\ldots$ completely, with $S_n,T_n>1$ for any $n\in\mathbb N$, then $\orb{F_0}$ is aperiodic.
\end{lemma}
In particular, either $\Omega_{F_n} = \emptyset$ ($= \orb{F_n}$) for all $n \in \mathbb N$, or $\Omega_{F_n}$ (and $\orb{F_n}$) is aperiodic and uncountable, for all $n \in \mathbb N$.
\begin{proof}
From Lemma~\ref{l:simtrans}, $F_0\simu[S_0\cdots S_{n-1},T_0\cdots T_{n-1}]F_n$ completely.
By Remark~\ref{r:penrose}, $\orb{F_0}$ cannot have any nontrivial period smaller than $S_0\cdots S_{n-1}$ horizontally or smaller than $T_0\cdots T_{n-1}$ vertically.
Since these two products go to infinity, there cannot exist any periodic point.
\end{proof}
In fact, it follows from the proof that it is enough that one of the products $\prod_{i \in \mathbb N} S_i$ and $\prod_{i \in \mathbb N} T_i$ is infinite.
It is well known that a non-empty, aperiodic 2D SFT is uncountable. Lemma~\ref{l:nonvide} gives some additional information about how uncountability occurs in the case of an infinite nested simulation.
\section{Expansiveness and simulation}
The following lemmas highlight the relation between the notions of simulation and expansive directions. This section slightly extends Section~5 in \cite{nexpdir}: the lemmas correspond to Lemma~5.1 and Lemma~5.3 there, which examine how the so-called ``shape of prediction'' evolves.
This also motivates the choice of considering the horizontal direction as $\infty$, which will make many future expressions clearer.
\begin{lemma}\label{lem:relsimulexp}
Suppose $F\simu[S,T,Q]G$ exactly.
Then $\NE(F)\supseteq\frac1{T}\left(Q+S\NE(G)\right)$.
Moreover, if the simulation is complete, then $\NE(F)=\frac1{T}\left(Q+S\NE(G)\right)$.
\end{lemma}
In particular, taking $(S,T,Q,\Phi)=(1,1,Q,\id)$ gives $\NE(\sigma^{-Q}G)=\NE(G)+Q$, and taking $(S,T,Q,\Phi)=(1,T,0,\id)$ gives $\NE(G)=\frac1T\NE(G^T)$.
\begin{proof}
Let us consider the matrix $M\defeq\left[\begin{array}{cc}S&Q\\0&T\end{array}\right]$ as acting over $\mathbb R^2$.
Consider a slope $l\in\Rb$, $\lin l\subset\mathbb R^2$ the corresponding vectorial line, $\lin l'\defeq M\lin l$ the vectorial line corresponding to slope $\frac STl+\frac QT$.
Roughly, $\lin l'$ for $F$ corresponds to $\lin l$ for $G$.
\begin{itemize}
\item Consider a finite shape $W'\subset\mathbb R^2$, $U$ and $f$ the neighbourhood and local rule of $F$, $V$ and $\phi^{-1}$ those of $\Phi^{-1}$, as defined in Remark \ref{r:simhedlund}. Without loss of generality, we can assume that $U = \cc{-uS}{uS}$, for some $u \in \mathbb N$.
Let $W\defeq M^{-1}W'+(T\cc{-u}{u} + V + \co{-Q}{0}) \times \{0\} + [-1,2[\times[0,1[$.
If $l\in\NE(G)$, then there exist configurations $x \neq y\in\Omega_G$ such that $\orb[x]G\restr{\lin l+W}=\orb[y]G\restr{\lin l+W}$.
Then, $\orb[\Phi^{-1}(x)]F \neq \orb[\Phi^{-1}(y)]F$, but we claim that
\begin{equation*}
\orb[\Phi^{-1}(x)]F\restr{\lin l'+W'}=\orb[\Phi^{-1}(y)]F\restr{\lin l'+W'}.
\end{equation*}
Since $W'$ was an arbitrary finite shape, this implies that $\frac STl+\frac QT \in \NE(F)$, which proves that $\NE(F)\supseteq\frac1{T}\left(Q+S\NE(G)\right)$.
Let us proceed with the proof of the claim. Let $(p_1,p_2) \in \lin l'+W'$ and write $p_1\eqdef mS+r$, $p_2\eqdef nT+q$ and $n \eqdef m'S+r'$, where $m,r,n,q,m',r' \in \mathbb Z$, $0\le r,r'<S$ and $0\le q<T$. Intuitively, we can think that $(p_1,p_2)$ belongs to the encoding of the $m$'th letter of $\sigma^{-m'Q}G^n(x)$ and $\sigma^{-m'Q}G^n(y)$.
More precisely, a straightforward computation shows that
\begin{equation*}
M^{-1}(p_1,p_2)=(m-Qm'+r/S-r'/{S}+q/T,n+q/T),
\end{equation*}
so that $(m-Qm',n) \in M^{-1}(p_1,p_2)+ [-1,2[ \times [0,1[$. This, in turn, implies that $(m-Qm',n) + (T\cc{-u}{u}+\co{-Q}{0}+V) \times \{0\}$ is included in $\lin l+ W$, so that
\begin{multline*}
\orb[x]{G}\restr{(m-Qm',n)+(T\cc{-u}{u}+\co{-Q}{0}+V)\times \{0\}} = \\
= \orb[y]{G}\restr{(m-Qm',n)+(T\cc{-u}{u}+\co{-Q}{0}+V)\times \{0\}}.
\end{multline*}
Using the facts that $V$ is the neighbourhood of $\phi^{-1}$ and that $\Phi^{-1}$ \xpr{blows-up} letters into blocks of size $S \times T$ with an additional shift of $Q$ for every vertical time step, we deduce that
\begin{multline*}
\orb[\Phi^{-1}(x)]{F}\restr{(\co{(m-Qm')S}{(m-Qm'+1)S}+T\cc{-uS}{uS}+\co{-QS}{0}+nQ)\times \{nT\}} = \\
=\orb[\Phi^{-1}(y)]{F}\restr{(\co{(m-Qm')S}{(m-Qm'+1)S}+T\cc{-uS}{uS}+\co{-QS}{0}+nQ)\times \{nT\}}.
\end{multline*}
Notice that $nQ - Qm'S = r'Q$. Now, using the fact that $T\cc{-uS}{uS}$ is a neighbourhood for $f^q$ and $\co{-QS}{0}$ for $\sigma^{-r'Q}$, we obtain that
\begin{multline*}
\sigma^{-r'Q}F^q\left(\orb[\Phi^{-1}(x)]{F}\right)\restr{\co{mS+r'Q}{(m+1)S+r'Q} \times \{nT\}} = \\
=\sigma^{-r'Q}F^q\left(\orb[\Phi^{-1}(y)]{F}\right)\restr{\co{mS+r'Q}{(m+1)S+r'Q} \times \{nT\}}.
\end{multline*}
The last equality implies that $\orb[\Phi^{-1}(x)]{F}\restr{(p_1,p_2)} = \orb[\Phi^{-1}(y)]{F}\restr{(p_1,p_2)}$, because
\begin{eqnarray*}
& & \sigma^{-r'Q}F^q\left(\orb[\Phi^{-1}(x)]{F}\right)\restr{\co{mS+r'Q}{(m+1)S+r'Q} \times \{nT\}}
\\&=&
\orb[\Phi^{-1}(x)]{F}\restr{\co{mS}{(m+1)S} \times \{nT+q\}}
\\&=&
\orb[\Phi^{-1}(x)]{F}\restr{\co{mS}{(m+1)S} \times \{p_2\}}
\end{eqnarray*}
and $p_1 \in \co{mS}{(m+1)S}$.
\item Consider a finite shape $W\subset\mathbb R^2$, $U$ the synchronizing shape as defined in Remark \ref{r:simsynchr}, $V$ and $\phi$ the neighbourhood and local rule of $\Phi$ as defined in Remark \ref{r:simhedlund}, and $W'\defeq MW+(V\cup U)\times\{0\}-\co0S\times\co0T$.
If $\frac STl+\frac QT\in\NE(F)$, then there exist configurations $x \neq y\in\Omega_F$ such that $\orb[x]F\restr{\lin l'+W'}=\orb[y]F\restr{\lin l'+W'}$.
By Remark \ref{r:simsynchr} and completeness of the simulation, there exist common $s\in\co0S$ and $t\in\co0T$ such that $x'\defeq\sigma^{-s}F^{-t}(x)$ and $y'\defeq\sigma^{-s}F^{-t}(y)$ are in $\mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi)$.
It follows easily from the definitions that $\orb[x']F\restr{\lin l'+MW+V\times\{0\}}= \orb[y']F\restr{\lin l'+MW+V\times\{0\}}$.
By injectivity of $\Phi$, $\orb[\Phi(x')]G$ and $\orb[\Phi(y')]G$ are also distinct, but we claim that they coincide in $\lin l+W$. Since $W$ is an arbitrary finite shape, this implies that $l \in \NE(G)$, which proves that $\NE(F)\subseteq\frac1{T}\left(Q+S\NE(G)\right)$.
Let $(p_1,p_2) \in \lin l+W$. Then, $M(p_1,p_2)+V\times\{0\}\subset \lin{l'} +MW+V\times\{0\}$; it follows from this that
\begin{equation*}
\orb[x']F\restr{(p_1S+p_2Q+V,p_2T)}= \orb[y']F\restr{(p_1S+p_2Q+V,p_2T)}.
\end{equation*}
In addition, we have that
\begin{eqnarray*}
\orb[\Phi(x')]G\restr{(p_1,p_2)}&=&
G^{p_2}\Phi(x')_{p_1}
\\&=&
\Phi\sigma^{p_2Q}F^{p_2T}(x')_{p_1}
\\&=&
\phi(\sigma^{p_2Q}F^{p_2T}(x')\restr{p_1S+V})
\\&=&
\phi(\orb[x']F\restr{(p_1S+V+p_2Q,p_2T)})
\end{eqnarray*}
The same holds for $y'$, and since, as we have noticed earlier, the final expression is the same for $x'$ and $y'$, we get that $\orb[\Phi(x')]G\restr{(p_1,p_2)} = \orb[\Phi(y')]G\restr{(p_1,p_2)}$, as claimed.
\end{itemize}\end{proof}
Lemmas \ref{l:simtrans} and \ref{lem:relsimulexp} can be combined to obtain expansive directions in nested simulations, which will be used extensively in Section~\ref{sec:expdir}.
\begin{lemma}\label{lem:iterrelsimulexp}
If $F_0\simu[S_0,T_0,D_0S_0]F_1\simu[S_1,T_1,D_1S_1]\ldots\simu[S_{n-1},T_{n-1},D_{n-1}S_{n-1}]F_{n}$ exactly and completely, and all these RPCA have bi-radius $1$, then
\begin{equation*}
\NE(F_0)\subseteq\anib[\vec{1S/T}]{\vec{S}\vec{D}}+\left(\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]=\anib[\vec{S/T}]{\vec{D}}+\left(\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]~.
\end{equation*}
\end{lemma}
\begin{proof}
We already noted that an RPCA $F_n$ with bi-radius $1$ satisfies $\NE(F_n)\subseteq[-1,1]$.
From Lemma \ref{l:simtrans}, we know that $F_0\simu[S,T,Q]F_n$ exactly and completely, where $(S,T,Q) = (\prod S_i,\prod T_i,{\anib[\vec{1S/T}]{\vec S \vec D}\prod T_i})$ and from Lemma \ref{lem:relsimulexp}, we deduce that:
\[\NE(F_0)=\anib[\vec{1S/T}]{\vec S \vec D}+\left(\prod_{i<n}\frac{S_i}{T_i}\right)\NE(F_n)\subseteq \anib[\vec{1S/T}]{\vec S \vec D}+\left(\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]~.\]
Also, by definition we have that $\anib[\vec{1S/T}]{\vec S \vec D}=\anib[\vec{S/T}]{\vec D}$.
\end{proof}
In the limit case of an infinite nested simulation, we obtain the following proposition, which slightly extends Theorem~5.4 in \cite{nexpdir}.
\begin{proposition}\label{prop:hochman}
If $F_i \simu[S_i,T_i,D_iS_i] F_{i+1}$ exactly and completely, for all $i \in \mathbb N$, then
\begin{equation*}
\NE(F_0)\subseteq\anib[\seq S/\seq T]{\seq D}+\left(\inf_{n\in\mathbb N}\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]~.
\end{equation*}
In particular, if the simulations are non-trivial and $\prod_{i<n}S_i/T_i$ converges to $0$, then $\NE(F_0)=\{\anib[\seq S/\seq T]{\seq D}\}$.
\end{proposition}
\begin{proof}
From Lemma \ref{lem:iterrelsimulexp}, we know that \begin{equation*}
\NE(F_0)\subseteq\bigcap_{n\in\mathbb N}\left(\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]+\anib[\seq{S}_{\co{0}{n}}/\seq{T}_{\co{0}{n}}]{\seq{D}_{\co{0}{n}}},
\end{equation*}
which gives the desired inclusion when $n$ goes to $\infty$.
For the second claim, if all the simulations are non-trivial, then from Lemma \ref{l:nonvide} we know that $\orb{F_0}$ is uncountable, hence by Proposition \ref{p:atleastone}, it has at least one non-expansive direction. In addition, by the first claim and the assumption $\prod_{i<n}S_i/T_i \to 0$, we know that $\NE(F_0) \subseteq \{\anib[\seq S/\seq T]{\seq D}\}$, and we must actually have equality.
\end{proof}
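As an illustration of the second claim (again assuming the weighted-sum reading $\anib[\vec b]{\vec D}=\sum_i D_i\prod_{j\le i}b_j$, which is not spelled out here), for the constant parameters $S_i=2$, $T_i=4$ the error term $\prod_{i<n}S_i/T_i=2^{-n}$ vanishes, and the unique candidate slope is the real with binary expansion $0.D_0D_1D_2\ldots$; a sketch:

```python
from fractions import Fraction

def anib(b, D):
    """Assumed reading: weighted sum  sum_i D[i] * b[0] * ... * b[i]."""
    total, weight = Fraction(0), Fraction(1)
    for bi, di in zip(b, D):
        weight *= bi
        total += di * weight
    return total

# constant parameters S_i = 2, T_i = 4, so b_i = S_i/T_i = 1/2 and the
# error term prod_{i<n}(S_i/T_i) = 2^-n vanishes: the candidate slope is
# sum_i D_i 2^-(i+1), i.e. the real with binary expansion 0.D_0 D_1 D_2 ...
D = [1, 0, 1, 1]
b = [Fraction(1, 2)] * len(D)
assert anib(b, D) == Fraction(0b1011, 2 ** 4)   # 0.1011 in binary
```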
\section{Explicit simulation}\label{sub:simulconv}
In the previous sections of this chapter, we defined a notion of simulation and then proved some facts about it which suggest that it is a good choice. However, we have not yet given any non-trivial example of a simulation, nor explained how one could be constructed. For example, the decoding function $\Phi$ could a priori be anything.
The simulations that we construct all have the same basic \xpr{form}. We call these simulations \dfn{explicit}, because the simulated configuration is explicitly written letter by letter in the simulating configuration. In order to make this more precise, we need to give some more definitions and notations.
Let us fix some fields $\texttt{Addr}$, $\texttt{Addr}_{+1}$, $\texttt{Clock}$ and $\texttt{Clock}_{+1}$ (in fact, these are just distinct numbers that we use to project letters on). These are sometimes called \emph{coordinate} fields. For $s \in \co{0}{S}$ and $t \in \co{0}{T}$, let $\gra{s}{t}{S}{T} \defeq \per[s,S]{\texttt{Addr},\texttt{Addr}_{+1}} \cap \emp[t]{\texttt{Clock},\texttt{Clock}_{+1}}$. In $\gra{s}{t}{S}{T}$, the values of $\texttt{Addr}$ grow by $1$ modulo $S$ from left to right, the value of $\texttt{Clock}$ is constant and equal to $t$, and the cell at the origin has $\texttt{Addr}$ equal to $s$. This is the usual way to break up a configuration into blocks, with one small difference. Normally, we only need the fields $\texttt{Addr}$ and $\texttt{Clock}$ to do this. However, since we are using PPA, we need to have some right- (or left-) moving copies of these fields in order to check their compatibility. Having this in mind, we define $\gra{s}{t}{S}{T}$ in the above way, since it will make notation a little lighter later on. The union $\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\gra{s}{t}{S}{T}$ is disjoint.
In addition, let $\grs{s}{S}\defeq \per[s,S]{\texttt{Addr},\texttt{Addr}_{+1}}$. In $\grs{s}{S}$, we do not care about the value of $\texttt{Clock}$ (or if it is even constant). Clearly, $\gra{s}{t}{S}{T} \subseteq \grs{s}{S}$. For $c \in \grs{s}{S}$ and $i \in \mathbb Z$, the pattern
\begin{equation*}
\col{i}{c}=c_{\co{-s+iS}{-s+(i+1)S}}
\end{equation*}
is called a \dfn{colony} of $c$. Clearly, $(\col{i}{c})_{i \in \mathbb Z}=\bulk[S]{\sigma^{-s}(c)}$ and in $\col{i}{c}$, the value of $\texttt{Addr}$ (and $\texttt{Addr}_{+1}$) grows from $0$ to $S-1$ from left to right.
This is the natural way to break a configuration into colonies of size $S$. Now, we are going to use every colony to encode one letter of the simulated configuration. For this, we have to define the appropriate decoding function.
Let $\tilde{\phi} \colon (\haine5^{*})^* \pto \haine5^{**}$ be the following function, which is the basis of all the decoding functions that we will use: Let $w \in (\haine5^*)^*$ be a \emph{word} over the infinite alphabet $\haine5^*$ (we look at $w$ as a finite part of some $1$D configuration over $\haine5^*$). If $\hs{w}= \Chi{\vec{u}}3^{\length{w}-\length{\Chi{\vec{u}}}}$, where $\vec{u} \in \haine5^{**}$ (we look at $\vec{u}$ as a tuple of elements of $\haine5^*$), then we define $\tilde{\phi}(w)=\vec{u}$.
Notice that $\Chi{\vec{u}} \in \haine3^*$. In other words, $w$ is equal to $\Chi{\vec{u}}$ up to appending some $3$s at the end of $\Chi{\vec{u}}$ (this gives a word in $\haine4^*$) and then adding some $4$s in front of every \emph{letter} of $\Chi{\vec{u}}3^{\length{w}-\length{\Chi{\vec{u}}}}$ (which gives a word in $(\haine5^{*})^*$). Unless $w$ has this very specific form, $\tilde{\phi}(w)$ is not defined.
$\tilde{\phi}$ is well-defined because $\Chi{\cdot}$ is an injection and because $3$ does not appear as a letter of $\Chi{\vec{u}} \in \haine3^*$. A necessary condition so that $\tilde\phi(w)=\vec{u}$ is that $\length{w} \geq \length{\Chi{\vec{u}}}$.
Let $\field$ be a new field and $\tilde{\phi}_{\field} \colon (\haine5^{**})^* \pto \haine5^{**}$ be defined as $\tilde{\phi} \pi_{\field}$. $\tilde{\phi}_{\field}$ can read words over letters with many fields by ignoring the other fields and using $\tilde{\phi}$ on $\field$.
We can extend $\tilde{\phi}$ in a natural way to a map $\tilde{\Phi} \colon (\haine5^*)^{\mathbb Z} \pto (\haine5^{**})^{\mathbb Z}$ as follows: for all $c \in (\haine5^{*})^{\mathbb Z}$ and $i \in \mathbb Z$, $\tilde{\Phi}(c)_i=\tilde{\phi}(c\restr{\co{iS}{(i+1)S}})$. Similarly, $\tilde\phi_{\field}$ can be naturally extended to a map $\tilde{\Phi}_{\field} \colon (\haine5^{**})^{\mathbb Z} \pto (\haine5^{**})^{\mathbb Z}$.
The idea is that every configuration will be divided into colonies using the coordinate fields, and then $\tilde{\phi}_{\field}$ will be used on every colony so as to obtain a letter. Putting these letters together, we obtain the simulated configuration.
Formally, a decoding function $\Phi$ will be equal to $\tilde{\Phi}_\field{\restr{\Sigma}}$, where $\Sigma \subseteq \grs{0}{S}$, for some $S$ that is large enough. If $b_i = \tilde{\phi}_\field(\col{i}{c})$, then $\Phi(c)= \tilde{\Phi}_\field(c)=(b_i)_{i \in \mathbb Z}$. We call $b_i$ the \dfn{simulated letter} of the $i$'th colony, and the letters of $c$ are the \dfn{simulating letters}.
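A drastically simplified toy version of this colony decoding can be sketched as follows (the encoding \texttt{chi}, the absence of $4$-prefixes and of the other fields are all simplifying assumptions made for illustration; only the role of the trailing $3$s as padding is kept):

```python
# Toy sketch of colony decoding.  Simplifying assumptions: a hypothetical
# injective encoding chi over {0,1,2}, no "4"-prefixes, a single field.
# Only the trailing 3s, which pad an encoding to the colony length S, remain.

def chi(letter):
    # hypothetical injective encoding: a "2" marker followed by the binary
    # representation of the letter
    return "2" + format(letter, "b")

def decode_colony(colony):
    """Inverse of 'chi followed by 3-padding'; None on malformed colonies."""
    word = colony.rstrip("3")
    if "3" in word:                      # 3s may only appear as padding
        return None
    for letter in range(2 ** (len(colony) - 1)):
        if chi(letter) == word:
            return letter
    return None

def decode(config, S):
    """Cut config (a finite string whose position 0 has address 0) into
    colonies of length S and decode each one into a simulated letter."""
    colonies = [config[i:i + S] for i in range(0, len(config), S)]
    letters = [decode_colony(col) for col in colonies]
    return None if None in letters else letters

S = 6
config = "210333" + "210013" + "213333"   # colonies encoding letters 2, 9, 1
assert decode(config, S) == [2, 9, 1]
assert decode("333333", S) is None        # the empty word is not an encoding
```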
The decoding functions that we will use in our constructions will \emph{always} be of the form ${\tilde{\Phi}_{\field}}{\restr{\Sigma}}$, where $\Sigma \subseteq \grs{0}{S}$. For such functions, we immediately obtain two of the conditions of a decoding function of a simulation:
\begin{remark}\label{twoconditionsofsimulation}
Let us fix a field list $\C=[\texttt{Addr},\texttt{Addr}_{+1},\texttt{Clock},\texttt{Clock}_{+1},\texttt{Tape}]$, $S \in {\mathbb N}_1$ and vectors $\vec{k}, \vec{k'} \in \mathbb N^{*}$ such that the following inequalities hold:
\[\both{
k_\texttt{Addr}\ge\norm S\\
k_\texttt{Tape}\ge 1\\
S \geq \length{\Chi{\haine5^{\vec{k'}}}}
~,}\]
Let $\Sigma \defeq (\haine5^{\vec{k}})^{\mathbb Z}\cap \grs{0}{S}\cap \tilde{\Phi}_{\texttt{Tape}}^{-1}((\haine5^{\vec{k'}})^{\mathbb Z})$. Then $\Phi \defeq {\tilde{\Phi}_{\texttt{Tape}}}{\restr{\Sigma}} \colon (\haine5^{\vec{k}})^{\mathbb Z} \to (\haine5^{\vec{k'}})^{\mathbb Z}$ is surjective and $\Phi \sigma^S = \sigma \Phi$.
In addition, for every $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$, we are free to choose the values of the anonymous fields of a pre-image in any way we like.
\end{remark}
In the above remark, $\Sigma$ contains those configurations over $\haine5^{\vec{k}}$ that are well-structured (\textit{i.e.}, divided into colonies with the origin having address $0$) and such that in the $i$'th colony we have the encoding of a letter of $\haine5^{\vec{k'}}$, for all $i \in \mathbb Z$.
\chapter{The programming language}\label{c:programming}
\section{Definitions and basic permutations}
In our constructions, we want to use permutations that are computed fast. It is not possible to formally state what fast means, but polynomially computable and, more generally, polynomially checkable permutations are fast enough. This is a common feature of all self-similar and hierarchical constructions, and the reasons why it is needed are explained very thoroughly in \cite{gray}. For our purposes, it is enough to describe a pseudo-programming language, with which we will write \xpr{programs} that are interpreted as permutations $\alpha \colon \haine5^{**} \pto \haine5^{**}$.
Let us start describing this programming language: It has four types, \dfn{terms} (that are denoted $t,t'\ldots$), \dfn{valuations} (that are denoted $v,v',\ldots$), \dfn{conditions} (that are denoted $q,q',\ldots$) and \dfn{permutations} (that are denoted $\alpha,\alpha',\ldots$). Each type is semantically interpreted as a different kind of mathematical object. Terms are interpreted as maps $t \colon \haine5^{**} \pto \haine5^{*}$. They represent some word information that can be extracted from a tuple. Valuations are interpreted as functions $v \colon \haine5^{**} \pto \mathbb N$. Valuations represent numerical information that can be extracted from tuples. Conditions are predicates over $\haine5^{**}$, or equivalently maps $q \colon \haine5^{**} \pto \{0,1\}$. Finally, permutations are, rather predictably, interpreted as (partial) permutations $\haine5^{**} \pto \haine5^{**}$, which will be used to define IPPA.
Let us describe each type in more detail. We are not going to try to give a formal definition of the programming language, since it would be unnecessarily complicated. It would involve a global induction on the various types, starting from some basic objects and taking a closure under some inductive operations. Instead, we will simply list the objects that we are actually going to use in the rest of the thesis. The proofs that they are polynomially computable are often trivial and will be omitted in most cases.
\paragraph{Terms}
\begin{itemize}
\item Every word $w \in \haine5^{*}$ is a term (understood as the constant function);
\item for all $i \in \mathbb N$, the projection $\pi_i$ of the $i$'th field is a term;
\item if $t$ is a term, then $\Chi{t}$ is also a term ($\Chi{t}(\vec{u})=\Chi{t(\vec{u})}$, for all $\vec{u}$ in $\haine5^{**}$);
\item if $v$ is a valuation and $t$ is a term, then $t\restr{v}$ is also a term, where $t\restr{v}(\vec{u}) \defeq t(\vec{u})\restr{v(\vec{u})}$. In other words, $t\restr{v}$ uses $v$ as a pointer for $t$ and it gives the letter at the $v(\vec{u})$'th position of $t(\vec{u})$.
\end{itemize}
\paragraph{Valuations}
\begin{itemize}
\item Every natural $n \in \mathbb N$ is a valuation, understood as a constant function;
\item if $t$ is a term, then $\length{t}$ is a valuation;
\item For all vectors $\vec{k} \in \mathbb N^{*}$ and $i \in \mathbb N$, the function $l_{\vec{k},i}$ defined in Fact~\ref{f:encodings} is a valuation.
\item If $\seq{S} \colon \mathbb N \to \mathbb N$ is a sequence of numbers and $v$ a valuation, then $S_v$ (where $S_v(\vec{u}) \defeq S_{v(\vec{u})}$) is also a valuation. (In general, the complexity of this valuation depends on the complexity of $\seq{S}$ and it is not polynomially computable if $\seq{S}$ is not.)
\item Basic arithmetical operations (addition, subtraction, multiplication, etc.) of valuations are still valuations.
\end{itemize}
In fact, we will need the following, more general version of the third bullet:
\begin{itemize}
\item For all valuations $v$, vector sequences $\vec{k} \colon \mathbb N \to \mathbb N^M$ and $i \in \mathbb N$, $l_{\vec{k}_v,i}$ (where $l_{\vec{k}_v,i}(\vec{u}) \defeq l_{\vec{k}_{v(\vec{u})},i}(\vec{u})$) is also a valuation. In this version, the vector whose structure $l_{\vec{k}_v,i}$ gives depends on the input letter. Of course, if $\vec{k}$ is not a polynomially computable sequence, then neither is $l_{\vec{k}_v,i}$.
\end{itemize}
A \dfn{vector valuation} is a collection $\vec{v}=(v_i)_{0 \le i \le M-1}$ of valuations, for some $M \in \mathbb N$. Vector valuations are used to obtain lengths of alphabets in a polynomially computable way.
\paragraph{Conditions}
\begin{itemize}
\item If $v_1, v_2$ are valuations, then $v_1 \geq v_2$ is a condition whose interpretation is clear;
\item if $t_1, t_2$ are terms, then $t_1 = t_2$ is a condition;
\item if $t,t_1$ are terms and $(Q_w)_{w \in \haine5^{*}}$ is a sequence of subsets of $\haine5^*$, then $t_1 \in Q_t$ is a condition. ($\vec{u}$ satisfies $t_1 \in Q_t$ if $t_1(\vec{u}) \in Q_{t(\vec{u})}$.)
\item if $t$ is a term and $i_1, \ldots, i_n$ are fields, then $\emp[t]{i_1, \ldots, i_n}$ is a condition (that is true for $\vec{u}$ if and only if $\vec{u} \in \emp[t(\vec{u})]{i_1, \ldots, i_n}$);
\item $\halt{p}{v}{t}$ is a condition, where $\vec{u}$ satisfies $\halt{p}{v}{t}$ if and only if the TM defined by program $p$ does not stop within $v(\vec{u})$ steps over term $t(\vec{u})$;
\item boolean operations of conditions are also conditions.
\end{itemize}
\paragraph{Permutations}
\begin{itemize}
\item For every condition $q$, $\chekk[q]$ is a permutation. $\chekk[q](\vec{u})$ is equal to $\vec{u}$ if and only if $\vec{u}$ satisfies $q$ (and is undefined otherwise). This is an involution.
\item For every valuation $v$ and field $i \in \mathbb N$, $\incr[v;i]$ is a permutation defined in the following way: Let $\vec{u} \in \haine5^{**}$ and define $\vec{u'}$ in the following way: $u'_j\defeq u_j$ for all $j\ne i$, and $u'_i\defeq\sh[\length{u_i}]\gamma(u_i)$, where $\gamma(w)\defeq\anib{\bina w+1\bmod v(\vec u)}$ when $\bina w<v(\vec u)$ (undefined otherwise);
then $\incr[v;i](\vec u)\defeq\vec{u'}$ if $v(\vec u)=v(\vec{u'})$ (undefined otherwise).
Essentially $\incr[v;i]$ adds $1$ modulo $v(\vec{u})$ to the $i$'th field of $\vec{u}$. The additional complications are due to the fact that we want this rule to always be reversible (which would not necessarily be true if $v(\vec{u'})$ is not equal to $v(\vec{u})$) and length preserving (which is the reason that we use the strange $\gamma$ function).
\item $\alpha_\U[t;\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}]$ is a permutation for every term $t$ and fields $\texttt{Tape}$, $\texttt{Head}_{-1}$, $\texttt{Head}_{+1}$. We direct the reader to Section~\ref{Jarkko} for the definition of this permutation, since it uses a permutation that is defined and examined therein.
\item Let $t$ be a term and $i$ be a field such that $t$ \dfn{does not depend} on $i$. In other words, if $\vec{u},\vec{u'} \in \haine5^{**}$ and $\pi_j(\vec{u})=\pi_j(\vec{u'})$ for all $j \neq i$, then $t(\vec{u})=t(\vec{u'})$.
Then, $\rite[t;i]$ is a permutation defined as follows: Let $\vec{u} \in \haine5^{**}$. $\rite[t;i](\vec{u})$ is defined if and only if $\hs{u_i}= \motvide$. In this case, all fields remain the same except for $i$ which becomes equal to $\sh[\length{u_i}]{t(\vec{u})}$.
Essentially, we check that the field $i$ is empty and then write $t(\vec{u})$ on it, while preserving the lengths. The condition that $t$ does not depend on $i$ is essential to ensure reversibility.
$\rite[t;i]^{-1}$ first checks that the $i$'th field is equal to $t(\vec{u})$ and then empties it, while preserving the lengths. This is a way to reversibly erase some information from a letter, namely compare it with some other place of the letter where the same information is held.
\item For all fields $i,i'$, $\exch[i,i']$ is a permutation defined as follows: Let $\vec{u} \in \haine5^{**}$. $\exch[i,i'](\vec{u})$ is defined if and only if $\length{u_i}=\length{u_{i'}}$. In this case, all fields are unchanged except for $i$ and $i'$ whose values are exchanged.
This is a length-preserving involution.
\item For every condition $q$ and permutation $\alpha$, $\algorithmicif\ q\ \algorithmicthen\ \alpha$ is a permutation. On input $\vec{u}\in \haine5^{**}$, it applies $\alpha$ if condition $q(\vec{u})$ is satisfied and $q(\vec{u})=q(\alpha(\vec{u}))$. If $q(\vec{u})$ is satisfied and $q(\vec{u}) \neq q(\alpha(\vec{u}))$, then it is not defined on $\vec{u}$ (this ensures reversibility). Finally, if $q(\vec{u})$ is \emph{not} satisfied, it is equal to the identity.
\item The composition of permutations is also a permutation. In constructions, we will denote the composition $\alpha_2 \circ \alpha_1$ by writing $\alpha_2$ below $\alpha_1$.
\end{itemize}
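To fix intuitions, here is a minimal Python sketch of $\incr[v;i]$ (an illustrative toy model, not part of the formal language: fields are modelled as bit-strings, the valuation $v$ as an arbitrary function of the tuple, and the names \texttt{bina}/\texttt{anib} mirror the functions of the thesis).

```python
def bina(w):
    """Value of a bit-string read as a binary number (empty string reads as 0)."""
    return int(w, 2) if w else 0

def anib(n, length):
    """Write n in binary, padded to a fixed length (a 'sharp' inverse of bina)."""
    return format(n, "b").zfill(length)

def incr(v, i, u):
    """Partial permutation incr[v; i]: add 1 modulo v(u) to the i'th field,
    preserving lengths.  Undefined when the field holds a value >= v(u), or
    when the increment would change the valuation (the reversibility guard)."""
    if bina(u[i]) >= v(u):
        return None                       # out of range: undefined
    u2 = list(u)
    u2[i] = anib((bina(u[i]) + 1) % v(u), len(u[i]))
    u2 = tuple(u2)
    if v(u2) != v(u):
        return None                       # valuation changed: undefined
    return u2
```

On its domain, $\incr[v;i]$ is a bijection: for a constant valuation it simply cycles the field through the $v(\vec u)$ residues, which is what makes it usable as a clock or address counter below.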
In the definition, we check that the values of the valuations, terms and conditions that are given as parameters do not change. This is a technical point that ensures that they are interpreted as reversible functions.
In all our constructions, these conditions will easily be satisfied because the valuations, terms and conditions will either be constant or depend on fields that are not modified by the rule at hand.
If we were giving a complete, formal description of a language, then this would be the point where by a large, tedious induction we would prove that, given some natural conditions on the parameters, every permutation of the language is polynomially computable, or, more precisely, polynomially computable in its parameters (this means that its complexity is a polynomial of the complexity of its parameters) and that short programs exist for the permutations. Namely, the size of the program is $O(p_{t,v, \ldots})$, where $t,v$ etc. are the parameters of the permutation.
We can also prove that the size of a program of a permutation is approximately the same as the size of the program of its inverse.
\section{Conventions about defining IPPA}\label{s:conv}
In the first part of this chapter, we gave a short exposition of the programming language that will be used in the rest of the thesis in order to define permutations of $\haine5^{**}$. However, in order to define a PPA, the number of fields and the directions of the fields also have to be fixed.
Recall that we want to define PPA, \textit{i.e.}, RPCA of the form $F = \sigma^{\vec\delta}\circ\alpha$, where $\vec\delta \in \{-1,0,+1\}^M$ is the shift vector and $\alpha$ is a partial permutation of $\A=\A_0 \times \ldots \A_{M-1}$, for some $M \in \mathbb N$. In our case, $F$ will always be the restriction of an IPPA, \textit{i.e.}, $\A$ will be equal to $\haine5^{\vec{k}}$, for some $\vec{k} \in \mathbb N^M$ and $\alpha \defeq \beta\restr{\haine5^{\vec{k}}}$ will be the restriction of some (infinite) permutation $\beta$ defined in the programming language.
We will use the following conventions when constructing such PPA:
\begin{itemize}
\item We first give a list of so-called \dfn{explicit} field labels. Such a list will often be noted in the form $\C\defeq[\field_{e},\ldots,\field'_{e'}]$, where $e,e' \in \{-1,0,+1\}$. The subscripts $e,\ldots,e'$ correspond to the \emph{directions} of the fields (if the direction is equal to $0$, then it will be omitted). The field list is a tuple of pairwise different natural projections, that are used by the permutation, together with their directions, that will be used by the shift. (The labels of the fields will make the permutations more understandable than the corresponding indices $i,i',\ldots$). The field list is not fixed, so in fact for every field list, we give a different permutation, even though they only differ in the enumeration of the fields.
The permutation is assumed to reject any element of $\haine5^{**}$ that does not involve all field numbers in the list, but note that it does not reject tuples that have more fields; the so-called \dfn{anonymous} fields, that are not in the list, are not modified by the permutation (but they might be used by some other PPA with which we compose). This allows us to define some simple PPA with few fields and then use them as \xpr{building blocks} in order to build more complicated ones in the following sense: the complicated PPA has more fields than the simple one, but, if it does not \xpr{touch} any of its fields, its behaviour on those fields is described by the corresponding behaviour of the building block.
If $\C$ and $\C'$ are two lists of field labels, then $\C \cup \C'$ is the list that contains the fields of $\C$ and $\C'$. Usually, the lists will be disjoint, so that we will use the notation $\C \sqcup \C'.$
\item After giving the field list, we describe an (infinite) permutation using the programming language defined in the first part of this chapter.
\item Then, we need to fix $M \in \mathbb N$ and $\vec{k} \in \mathbb N^M$. If we do not care about the existence of anonymous fields, then we always assume that $M$ is some number greater than or equal to the largest natural appearing in the field list $\C$. In this way, we ensure that the configurations will not be rejected simply because the program tries to access a field that is not there.
When we do not want anonymous fields to exist (for example, when we want to achieve exactness of a simulation), then we assume that the field list $\C$ is equal to $[0,\ldots,M-1]$ and we choose this $M$ for the number of fields.
In any case, after choosing $M$, we fix some vector $\vec{k}\in \mathbb N^M$ satisfying some appropriate conditions (which are case-specific).
\item Finally, we need to define the directions of the fields. However this has already been done in the definition of the field list with the use of the subscripts $e,e'$ etc. The directions of the anonymous fields can be anything. In fact, our statements will be true for \emph{all} directions of the anonymous fields, since we will not refer to them.
\end{itemize}
\chapter{The universal simulator}\label{construction}
In this chapter, our aim is to construct an RPCA (a family of RPCA in fact, depending on some parameters) that can simulate every other RPCA that satisfies some conditions. This is done in Lemma~\ref{universal}. This RPCA is extremely helpful and it will be part of all our subsequent constructions. Since it is difficult to overstress the importance of this RPCA, we will give a step-by-step description of its construction with as many details as possible.
In Section~\ref{sec:structure}, we will embed a periodic rectangular grid in every configuration. This is a standard procedure in hierarchical constructions and it will allow us to partition every configuration into colonies and use the decoding function $\tilde\Phi$. In Section~\ref{Jarkko}, we will make a slight digression and show how we can simulate any TM with an RPCA in real-time. This is needed in order to preserve the expansiveness of the horizontal direction. Then, in Section~\ref{sec:singlepermutation}, we construct an RPCA to simulate an RPCA whose direction vectors are null (all its fields are still). There are some tricks involved in this phase, mainly having to do with deleting the previous simulated letter and synchronizing the computations. Then, in Section~\ref{sec:singleshift}, we construct an RPCA that can simulate any RPCA whose permutation is the identity \textit{i.e.}, any shift. Finally, in Section~\ref{sec:universal}, we construct the universal IPPA $\unive$ that can simulate any RPCA, when it is restricted to the appropriate alphabet.
\section{Imposing a periodic structure}\label{sec:structure}
Let $\C[\coordi]=[\texttt{Addr},\texttt{Addr}_{+1},\texttt{Clock},\texttt{Clock}_{+1}]$.
\begin{itemize}
\item $\texttt{Clock}$ and $\texttt{Addr}$ are meant to localize the cell in its macrocell, and they correspond to the projections involved in the definition of explicit simulation in Section~\ref{sub:simulconv}.
\item $\texttt{Clock}_{+1}$ and $\texttt{Addr}_{+1}$ are used to communicate with the neighbour cells, so that consistency between the $\texttt{Clock}$ and $\texttt{Addr}$ fields is achieved.
\end{itemize}
\begin{algo}{coordi}{\coordi}{v_\texttt{MAddr}, v_\texttt{MClock}}
\STATE{$\chekk[\bina{\pi_{\texttt{Addr}_{+1}}}=\bina{\pi_\texttt{Addr}}$ \AND $\bina{\pi_{\texttt{Clock}_{+1}}}=\bina{\pi_\texttt{Clock}}]$} \label{al:coordi:chek} \COMMENT{Check left-neighbour information coherence.}
\STATE{$\incr[v_\texttt{MAddr};\texttt{Addr}_{+1}]$}\label{al:coordi:incadd} \COMMENT{Increment $\texttt{Addr}_{+1}$ so that the right neighbour can check coherence.}
\STATE{$\incr[v_\texttt{MClock};\texttt{Clock}]$}\label{al:coordi:incage} \COMMENT{Update \texttt{Clock}.}
\STATE{$\incr[v_\texttt{MClock};\texttt{Clock}_{+1}]$}\label{al:coordi:incage1} \COMMENT{Update $\texttt{Clock}_{+1}$.}
\end{algo}
By the discussion of Chapter~\ref{c:programming}, we know that $\coordi[v_{\texttt{MAddr}},v_{\texttt{MClock}};\C[\coordi]]$ is polynomially computable with respect to its parameters $v_{\texttt{MAddr}}$ and $v_{\texttt{MClock}}$.
In practice, the two valuation parameters $v_{\texttt{MAddr}}$ and $v_{\texttt{MClock}}$ will be constant over the alphabet of the PPA, in which case the behaviour will be described by the following:
\begin{lemma}\label{koo}
Let us fix a field list $\C[\coordi]\in\mathbb N^4$ and integers $S,T\in{\mathbb N}_1$.\\
Let $F$ be the IPPA defined by the permutation $\coordi[S,T;\C[\coordi]]$ and directions $\vec{\nu}_{\coordi}$ given by the label indices, and let $\vec k\in\mathbb N^*$ be a vector satisfying:
\[\both{
k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S\\
k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T
~.}\]
Let $c \in (\haine5^{\vec{k}})^{\mathbb Z}$. Then, $c \in F^{-2}((\haine5^{\vec{k}})^{\mathbb Z})$ if and only if there exist $s \in \co{0}{S}$ and $t \in \co{0}{T}$
such that $c \in \gra{s}{t}{S}{T}$. In this case, $F(c) \in \gra{s}{t+1 \mod T}{S}{T}$.
\end{lemma}
In the previous statement, $S$ and $T$ should be understood as the \dfn{width} and \dfn{height} of the macrocells. Notice, also, that the statement holds for all vectors $\vec{k} \in \mathbb N^*$ that satisfy the inequalities, which means that there can be other fields in the alphabet. This means that if we use $\coordi$ together with other rules that do not change the values of the fields in $\C[\coordi]$, the statement of the lemma will still be true.
The restrictions about the lengths of $\vec{k}$ ensure that fields are large enough that we can write the binary representation of $S$ and $T$ on them.
\begin{proof}
We prove the stronger claim that if $F^2(c)$ exists, then there exist $0\le s<S$ and $0\le t<T$ such that for all $n \in \mathbb Z$, $\bina{\pi_{\texttt{Addr}}(c_n)}=\bina{\pi_{\texttt{Addr}_{+1}}(c_n)}=s+n \bmod S$ and $\bina{\pi_{\texttt{Clock}}(c_n)}=\bina{\pi_{\texttt{Clock}_{+1}}(c_n)}=t$.
Suppose that $\bina{\pi_{\texttt{Addr}}(c_n)} \neq \bina{\pi_{\texttt{Addr}_{+1}}(c_n)}$ or $\bina{\pi_{\texttt{Clock}}(c_n)} \neq \bina{\pi_{\texttt{Clock}_{+1}}(c_n)}$, for some $n \in \mathbb Z$. Then, line~\ref{al:coordi:chek} would not be defined at cell $n$, so that $F(c)$ would not exist, which is a contradiction.
Suppose, then, that there exists $n \in \mathbb Z$ with $\bina{\pi_{\texttt{Addr}}(c_{n+1})} \neq \bina{\pi_{\texttt{Addr}}(c_n)} +1 \bmod S$. Line~\ref{al:coordi:incadd} and the fact that $\texttt{Addr}_{+1}$ is a right-going field imply that $\bina{\pi_{\texttt{Addr}_{+1}}F(c)_{n+1}} = \bina{\pi_{\texttt{Addr}}(c_n)} +1 \bmod S$. Then, line~\ref{al:coordi:chek} is not defined at cell $n+1$ of $F(c)$, since $\bina{\pi_{\texttt{Addr}_{+1}}F(c)_{n+1}} = \bina{\pi_{\texttt{Addr}}(c_n)} +1 \bmod S \neq \bina{\pi_{\texttt{Addr}}(c_{n+1})}$. Therefore $F^2(c)$ does not exist, which contradicts the hypothesis. Similarly, we can prove that $\bina{\pi_{\texttt{Clock}}(c_n)} = \bina{\pi_{\texttt{Clock}}(c_{n+1})}$, for all $n \in \mathbb Z$. Thus, the stronger claim we made at the beginning of the proof is true.
If $\bina{\pi_{\texttt{Addr}}(c_0)}=s$ and $\bina{\pi_{\texttt{Clock}}(c_0)}=t$, then the previous claim implies that for all $n \in \mathbb Z$, $\bina{\pi_{\texttt{Addr}}(c_n)}=s+n \bmod S$ and $\bina{\pi_{\texttt{Clock}}(c_n)}=t$. Furthermore, since the value of $\texttt{Addr}$ is not changed by $F$ and the value of $\texttt{Clock}$ is increased by $1 \mod T$ every time step by line~\ref{al:coordi:incage}, we have that $\bina{\pi_{\texttt{Addr}}F(c)_n}=s+n \bmod S$ and $\bina{\pi_{\texttt{Clock}}F(c)_n}=t+1 \bmod T$, for all $n \in \mathbb Z$.
\end{proof}
In general, when using IPPA, we have to use a similar rule every time we want to impose some horizontal restriction on the configuration. Namely, we have to use an additional right-moving (or left-moving, it does not make a difference) field, and then we need $2$ steps in order to verify that the field is constant.
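This two-step rejection mechanism can be made concrete with the following Python sketch (an illustrative toy, not part of the formal construction: cells are dictionaries with hypothetical field names \texttt{Addr}, \texttt{Addr1}, \texttt{Clock}, \texttt{Clock1}, and the configuration is cyclic for finiteness).

```python
def coordi_step(cfg, S, T):
    """One step of the IPPA defined by coordi[S, T] on a cyclic configuration.

    cfg is a list of cells; Addr1 and Clock1 are the right-going copies.
    Returns the next configuration, or None when the permutation is undefined
    at some cell (the configuration is rejected)."""
    new = []
    for c in cfg:
        # chekk: left-neighbour information must be coherent.
        if c["Addr1"] != c["Addr"] or c["Clock1"] != c["Clock"]:
            return None
        new.append({"Addr": c["Addr"],
                    "Addr1": (c["Addr1"] + 1) % S,     # incr[S; Addr1]
                    "Clock": (c["Clock"] + 1) % T,     # incr[T; Clock]
                    "Clock1": (c["Clock1"] + 1) % T})  # incr[T; Clock1]
    # Partial shift: the right-going fields move one cell to the right.
    L = len(cfg)
    return [{"Addr": new[n]["Addr"], "Clock": new[n]["Clock"],
             "Addr1": new[(n - 1) % L]["Addr1"],
             "Clock1": new[(n - 1) % L]["Clock1"]} for n in range(L)]
```

On a well-formed configuration the clock advances by $1 \bmod T$ at each step, while an incoherent address pattern may survive one step but is rejected at the next, which is why Lemma~\ref{koo} looks at $F^{-2}$.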
All of the rules we construct will factor onto $\coordi[S,T;\C[\coordi]]$, for some $S,T \in {\mathbb N}_1$. The following remark will give the disjointness condition in the definition of simulation.
\begin{remark}\label{thirdconditionofsimulation}
Assume that $F\colon \A^{\Z} \pto \A^{\Z}$ factors onto $\coordi[S,T;\C[\coordi]]$ through the factor map $H$ and let $\gr stF \defeq H^{-1}(\gra{s}{t}{S}{T})$. Then, the union $\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\gr stF$ is disjoint and $F(\gr{s}{t}{F}) \subseteq \gr{s}{t+1 \mod T}{F}$.
Therefore, if $\Phi \colon \A^{\Z} \pto \B^{\Z}$ satisfies that $\mathcal{D}(\Phi) \subseteq \gr{0}{0}{F}$, then the union $\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}F^t\sigma^s(\mathcal{D}(\Phi))$ is disjoint.
\end{remark}
$\gr{s}{t}{F}$ implicitly depends on the factor map $H$. However, in applications, $H$ will be equal to $\pi_{\C[\coordi]}$ so that no ambiguity arises by omitting it.
\section{Simulating TM with IPPA}\label{Jarkko}
The IPPA $\coordi$ allows us to divide every configuration into colonies with a periodic clock. We want to use this space-time structure in order to do computations within the \dfn{work-periods} (the \xpr{time} between two subsequent steps where the clock is $0$). We are going to introduce the elements needed for this one by one, since, hopefully, this will make some of the ideas clearer. First, let us show how to simulate TMs in real time with PPA.
For all programs $p=p_{\mathcal{M}} \in \haine4^*$, we construct an IPPA that simulates $\mathcal{M}$ in real-time.
This section is inspired by \cite{morita}.
Let ${\C}_\U\defeq[\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}]$.
The key to maintaining reversibility is to keep track of the history of the computation. Some kind of \emph{archive} of each past step is shifted in the direction opposite to the head, in order for the head to always have space to write the new history.
Recall the definition of the function $\U$ from Subsection~\ref{s:turing}. Let $\gamma_\U[p] \colon (\haine4^*)^3 \pto (\haine4^*)^3$ be defined by the following transitions: $(a,h_{-1},h_{+1})$ is mapped to
\begin{itemize}
\item $(a',\Chi{a,q,\delta},\Chi{a,q,\delta})$, if $(h_\delta,h_{-\delta})=(q,\motvide)$ ($\texttt{Head}_{\delta}$ contains the TM head) and $\U(a,q,p)=(a',\motvide,+1)$. If $\texttt{Head}_{\delta}$ contains a head and the transition is an accepting one, then we write an encoding of the last transition on the $\texttt{Head}$ fields, modify $\texttt{Tape}$ and the TM heads disappear. Here, the assumption that the TM head (which has disappeared) moves to the right is convenient to ensure injectivity.
\item $(a',h_{-1}',h_{+1}')$, where $(h_{\delta'}',h_{-\delta'}')=(q',\Chi{a,q,\delta})$ if $(h_\delta,h_{-\delta})=(q,\motvide)$ and $\U(a,q,p)=(a',\motvide,\delta')$. If the transition is not an accepting one, then $\texttt{Tape}$ is modified, the TM head is written on the appropriate $\texttt{Head}$ field and on the other $\texttt{Head}$ field we write an encoding of the transition and of the position of the head before the transition.
\item $(a,h_{-1},h_{+1})$ if $h_{-1},h_{+1}\notin Q_p\setminus\{\motvide\}$. If none of the $\texttt{Head}$ fields contains a TM head, then do nothing.
\end{itemize}
It is not difficult (by a tedious case enumeration) to see that $\gamma_\U[p]$ is a partial permutation and \emph{polynomially computable}, thanks in particular to the disjointness of $Q_p\subset\haine2^*$ and $\Chi{\haine4\times Q_p\times\{-1,1\}}\subset2\haine3^*$. Basically, $\gamma_\U[p]$ identifies the accepting state $\motvide$ with the absence of state (for which it just performs the identity). In other cases, it prevents having two (non-accepting) head states at the same cell; then it applies the transition rule and sends the new state in the correct direction (depending on $\delta$), while sending an archive of the last performed operation in the opposite direction.
At the moment that the accepting state appears, it just sends two (identical) archives in opposite directions (there is no new state to send).
We say that $c\in (\haine4^{**})^{\mathbb Z}$ \dfn{represents} $(z,q,j)\in \haine4^{\mathbb Z}\times Q\times\mathbb Z$ of the machine $\mathcal{M}$ corresponding to program $p$
if:
\begin{itemize}
\item For all $i \in \mathbb Z$, $\pi_{\texttt{Tape}}(c_i)=z_i$
\item $(\pi_{\texttt{Head}_{-1}}(c_j),\pi_{\texttt{Head}_{+1}}(c_j)) \in \{q\} \times \{\motvide\} \cup \{\motvide\} \times \{q\}$;
\item For all $i\ne j$ and $\delta\in\{-1,+1\}$, $\pi_{\texttt{Head}_{\delta}}(c_i)\notin Q_p\setminus\{\motvide\}$
\item For all $i\ne j$ and $\delta\in\{-1,+1\}$, if $\pi_{\texttt{Head}_{\delta}}(c_i)\neq\motvide$, then $\delta$ has the sign of $i-j$.
\end{itemize}
Intuitively, this means that in $c$ there is at most one (non-accepting) head at position $j$, no head elsewhere, and nothing (represented by $\motvide$, like the accepting state) in $\texttt{Head}_{-1}$ on its right nor in $\texttt{Head}_{+1}$ on its left. The possible archives go away from the head position. We can thus see that the head will never move into a cell where there is an archive, so that one of the transitions of $\gamma_{\U}[p]$ will always be applicable.
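This behaviour can be illustrated by the following Python sketch (a toy model with ad hoc encodings that are ours, not those of the thesis: head states are strings, archives are tuples, the empty word is \texttt{""}, the configuration is cyclic, and the accepting-state case of $\gamma_\U$ is omitted for brevity).

```python
def gamma(cell, delta, states):
    """gamma_U on one cell (a, h_minus, h_plus), non-accepting transitions only."""
    a, hm, hp = cell
    for d, h, other in ((-1, hm, hp), (+1, hp, hm)):
        if h in states:                     # a TM head sits in field Head_d
            if other != "" or (a, h) not in delta:
                return None                 # blocked or halting: undefined
            a2, q2, d2 = delta[(a, h)]
            arch = ("arch", a, h, d)        # archive of the performed transition
            if d2 == +1:
                return (a2, arch, q2)       # head goes right, archive goes left
            return (a2, q2, arch)           # head goes left, archive goes right
    return cell                             # no head: identity

def ppa_step(cfg, delta, states):
    """Apply gamma everywhere, then shift Head_{-1} left and Head_{+1} right."""
    new = [gamma(c, delta, states) for c in cfg]
    if any(c is None for c in new):
        return None
    L = len(cfg)
    return [(new[n][0],
             new[(n + 1) % L][1],           # Head_{-1} has direction -1
             new[(n - 1) % L][2])           # Head_{+1} has direction +1
            for n in range(L)]
```

Running a right-moving machine in this model, one can watch the head advance one cell per step while one archive per step trails away from it, exactly as in the representation above.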
Formally, we have the following lemma about the behaviour of $\gamma_\U[p]$.
\begin{lemma}\label{l:tmsim}
Let us fix a field list $\C[\U]\in\mathbb N^3$ and a program $p=p_{\mathcal{M}} \in \haine4^*$.\\
Consider the IPPA $F$ defined by permutation $\sh{\gamma_\U[p]}$ and directions $\vec{\nu}_{\U}$ given by the label indices.\\
Let $\vec k\in\mathbb N^*$ be a vector satisfying:
\[\both{
k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge \norm{\Chi{\haine4\times Q_p\times\{-1,+1\}}}\\
k_\texttt{Tape} \ge 1
~.}\]
Let $c \in (\haine5^{\vec{k}})^{\mathbb Z}$ and suppose that $\hs{c}$ represents configuration $(z,q,j)\in \haine4^{\mathbb Z}\times Q\times\mathbb Z$ of $\mathcal{M}$. Then, for all $t \in \mathbb N$, $\hs{F^t(c)}$ represents $\mathcal{M}^{t}(z,q,j)$.
\end{lemma}
As in Lemma~\ref{koo}, the inequalities about the lengths of $\vec{k}$ simply state that the fields are long enough. Using Lemma~\ref{sharpization}, we will omit the $\sh{\cdot}$ and $\hs{\cdot}$ from $\sh{\gamma_\U[p]}$ and $\hs{c}$ in the following proof, since they are only used to make $F$ have constant lengths.
\begin{proof}
We will prove the claim for $t=1$; the general claim then follows by induction.
Suppose, first, that $\mathcal{M}(z,q,j)$ does not exist. This means that $\delta_\mathcal{M}(z_j,q)$ does not exist, or equivalently, that $\U(z_j,q,p)=\U(c_j.\texttt{Tape},q,p)$ does not exist. From the definition of $\gamma_\U[p]$ and the fact that $c$ represents $(z,q,j)$, we have that ${\gamma_{\U}[p]}(c_j)$ does not exist, which implies that $F(c)$ does not exist.
Suppose, then, that $\mathcal{M}(z,q,j)=(z',q',j')$ exists. This means that $\U(z_j,q,p)=(z'_j,q',j'-j)$, and $z'_i=z_i$ for any $i\ne j$.
By assumption, for any $i \in \mathbb Z$, $(\pi_{\texttt{Tape}}(c_i),\pi_{\texttt{Head}_{-1}}(c_i),\pi_{\texttt{Head}_{+1}}(c_i)) = (z_i,h_{-1,i},h_{+1,i})$ for some $h_{-1,i},h_{+1,i} \in Q_p \cup \Chi{\haine4\times Q_p\times\{-1,+1\}} \cup \{\motvide\}$.
Moreover, for any $i\ne j$, and $\delta\in\{-1,+1\}$, $h_{\delta,i}\notin Q_p$, so the identity rule is applied. After applying the shifts, it gives that for any $i<j-1$,
\begin{equation*}
(\pi_{\texttt{Tape}}F(c_i),\pi_{\texttt{Head}_{-1}}F(c_i),\pi_{\texttt{Head}_{+1}}F(c_i))=(z_i,h_{-1,i+1},h_{+1,i-1}) = (z'_i,h_{-1,i+1},\motvide)
\end{equation*}
with $h_{-1,i+1}\notin Q_p$, and for any $i>j+1$,
\begin{equation*}
(\pi_{\texttt{Tape}}F(c_i),\pi_{\texttt{Head}_{-1}}F(c_i),\pi_{\texttt{Head}_{+1}}F(c_i))=(z_i,h_{-1,i+1},h_{+1,i-1})=(z'_i,\motvide,h_{+1,i-1})
\end{equation*}
with $h_{+1,i-1}\notin Q_p$.
Now, assume $(h_{-1,j},h_{+1,j})=(q,\motvide)$ and that $\U(z_j,q,p)=(z'_j,q',-1)$ (the other cases can be dealt with in a similar way).
Then the transition $(z_j,q,\motvide) \to (z'_j,q',\Chi{z_j,q,-1})$ is applied by $\gamma_\U[p]$. After the application of $\gamma_\U[p]$ and the shifts, we obtain
\begin{eqnarray*}
(\pi_{\texttt{Tape}}F(c_j),\pi_{\texttt{Head}_{-1}}F(c_j),\pi_{\texttt{Head}_{+1}}F(c_j)) &=& (z'_j,\motvide,\motvide),\\
(\pi_{\texttt{Tape}}F(c_{j-1}),\pi_{\texttt{Head}_{-1}}F(c_{j-1}),\pi_{\texttt{Head}_{+1}}F(c_{j-1})) &=& (z'_{j-1},q',\motvide) \text{ and,} \\
(\pi_{\texttt{Tape}}F(c_{j+1}),\pi_{\texttt{Head}_{-1}}F(c_{j+1}),\pi_{\texttt{Head}_{+1}}F(c_{j+1})) &=& (z'_{j+1},\motvide,\Chi{z_j,q,-1}).
\end{eqnarray*}
All conditions are hence satisfied for $F(c)$ to represent $\mathcal{M}(z,q,j)$.
\end{proof}
Note that due to the parallel nature of IPPA, some configurations may involve several machine heads, and valid simulations may take place in parallel, provided that there is enough space between them so that the archive and the heads do not collide.
For this reason, we need to give a \xpr{finite version} of the previous lemma.
\begin{lemma}\label{Turing}
Let us fix a field list $\C[\U] \in\mathbb N^3$ and a program $p=p_{\mathcal{M}} \in \haine4^*$.\\
Consider the IPPA $F$ defined by permutation $\sh{\gamma_\U[p]}$ and directions $\vec{\nu}_{\U}$ given by the label indices.
Let $\vec k\in\mathbb N^*$ be a vector satisfying:
\begin{align*}
&k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge \norm{\Chi{\haine4\times Q_p\times\{-1,+1\}}}\\
&k_\texttt{Tape} \ge 1.
\end{align*}
Let $c \in (\haine5^{\vec{k}})^{\mathbb Z}$, $c'=\hs{c}$ and assume that there exists $n \in \mathbb N$ such that the set
\begin{equation*}
J\defeq\set{j}{\mathbb Z}{{(\pi_{\texttt{Head}_{-1}}(c'_j),\pi_{\texttt{Head}_{+1}}(c'_j))}\ne(\motvide,\motvide)}
\end{equation*}
satisfies that for any $j\neq j'\in J$, we have $\abs{j'-j}>2n$, and that for all $j \in J$, $(c'_j.\texttt{Head}_{-1},c'_j.\texttt{Head}_{+1}) \in Q_p \times \{\motvide\} \cup \{\motvide\} \times Q_p$. For $j \in J$, let $q_j \defeq c'_j.\texttt{Head}_{-1}$ if $(c'_j.\texttt{Head}_{-1},c'_j.\texttt{Head}_{+1}) \in Q_p \times \{\motvide\}$ and $q_j \defeq c'_j.\texttt{Head}_{+1}$ if $(c'_j.\texttt{Head}_{-1},c'_j.\texttt{Head}_{+1}) \in \{\motvide\} \times Q_p$.
Then, $F^n(c)$ exists if and only if $(z^j,q'_j,j')\defeq\mathcal{M}^n(c'.\texttt{Tape},q_j,j)$ exists, for all $j\in J$.
In addition:
\begin{itemize}
\item $\pi_{\texttt{Tape}}F^n(c)_{\cc{j-n}{j+n}}=z^j_{\cc{j-n}{j+n}}$, for all $j \in J$;
\item $\pi_{\texttt{Tape}}F^n(c)_i=\pi_{\texttt{Tape}}(c_i)$ if $i\notin J+\cc{-n}n$;
\item $(\pi_{\texttt{Head}_{-1}}F^n(c)_{j'},\pi_{\texttt{Head}_{+1}}F^n(c)_{j'}) \in \{(q'_j,\motvide)\} \cup \{(\motvide,q'_j)\}$, for all $j \in J$;
\item $(\pi_{\texttt{Head}_{-1}}F^n(c)_i,\pi_{\texttt{Head}_{+1}}F^n(c)_i) \notin Q_p\times \{\motvide\} \cup \{\motvide\} \times Q_p$, if $i \notin \sett{j'}{j \in J}$.
\end{itemize}
\end{lemma}
\begin{proof}
First note that the identity is always applied when the head is absent; in particular it is applied outside $J+\cc{-t}t$ at time $t\in\mathbb N$ (because $F$ has radius $1$) and initially the heads are only in the positions in $J$.
According to the assumptions, for all $j\in J$,
the configuration ${c'}^j$ that is obtained by turning all $(c_i.\texttt{Head}_{-1},c_i.\texttt{Head}_{+1})$ to $(\motvide,\motvide)$ except at position $j$ represents $(c'.\texttt{Tape},q_j,j)$. Thanks to Lemma \ref{l:tmsim}, for all $0 \le t \le n$, $F^t({c'}^j)$ exists if and only if $\mathcal{M}^t(c'.\texttt{Tape},q_j,j)$ exists.
In that case, since ${c'}^j$ coincides with $c'$ over interval $\cc{j-2n}{j+2n}$ and since the radius is $1$, a simple induction can show that $F^t({c'}^j)$ coincides with $F^t(c')$ over interval $\cc{j-2n+t}{j+2n-t}$.
Lemma~\ref{l:tmsim} hence gives the main claim.
Conversely, suppose that $F^t({c'}^j)$ is undefined for some $j\in J$ with $t\le n$ minimal. Then, $F^{t-1}({c'}^j)$ exists, and by Lemma \ref{l:tmsim} involves a unique (non-accepting) head, in some cell $j'\in\cc{j-t}{j+t}$.
Therefore, $\gamma_\U[p](F^{t-1}({c'}^j)_i)$ is defined for any $i\ne j'$.
This means that $\gamma_\U[p](F^{t-1}({c'}^j)_{j'})$ is undefined; we have already noted that this is equal to $\gamma_\U[p](F^{t-1}(c')_{j'})$, which proves that $F^t(c')$ is undefined.
\end{proof}
Lemma~\ref{Turing} will be used in the following way: Every configuration will be divided into colonies by $\coordi$. Initially (when the clock is equal to $0$), inside every colony there will be exactly one TM head, at the leftmost cell of the colony. These machines will perform some computation for a small amount of time compared to the width of the colonies (the $S$ of Lemma~\ref{koo}), so that the heads will not meet. Lemma~\ref{Turing} will immediately imply that at the end of the computation, in every colony, $\texttt{Tape}$ contains the output of the computation. Finally, the output of the computation will be copied onto some new field and then the computation will be run backwards (remember that $\gamma_{\U}[p]$ is a permutation).
We are now ready to give the details of the definition of the permutation $\alpha_\U[t;\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}]$: Let $\vec{u} \in \haine5^{**}$ and define $\vec{u'}$ in the following way:
\begin{equation*}
(u'.\texttt{Tape},u'.\texttt{Head}_{-1},u'.\texttt{Head}_{+1})=\sh{\gamma_{\U}[t(\vec{u})]}(u.\texttt{Tape},u.\texttt{Head}_{-1},u.\texttt{Head}_{+1}),
\end{equation*}
and $u'_j\defeq u_j$ for all $j\notin\{\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}\}$.
Then,
\begin{equation*}
\alpha_\U[t;\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}](\vec u)\defeq\vec{u'},
\end{equation*}
if $t(\vec{u})=t(\vec{u'})$ (and it is undefined otherwise).
Again, the definition gets a little more complicated due to the need to preserve the lengths, to have arbitrarily many fields and to ensure reversibility. When $t=\pi_{\texttt{Prog}}$, where $\texttt{Prog}$ is a new field, then the condition $t(\vec{u})=t(\vec{u'})$ is always satisfied.
\section{Computing the simulated permutation}\label{sec:singlepermutation}
Let $\C[\compute]= \C[\U]\sqcup[\texttt{NTape}]$.
\begin{itemize}
\item $\texttt{Head}_{-1},\texttt{Head}_{+1}$ are used to simulate a TM with the rule of Section~\ref{Jarkko}.
\item The output of this computation is written on $\texttt{NTape}$ and then the computation is reversed (the Bennett trick, see~\cite{bennett}).
\end{itemize}
\begin{algo}{compute}{\compute}{v_\texttt{Addr},v_\texttt{Clock},v_\texttt{Alarm},t_\texttt{Prog},t_\revprog}
\IF{$v_\texttt{Clock}=0$ \AND $v_{\texttt{Addr}}=0$}
\STATE $\rite[0;\texttt{Head}_{-1}]$ \label{al:com:wrhead}
\COMMENT{Write the machine initial state in the left head field.}
\ENDIF
\IF{$0\le v_\texttt{Clock}<v_\texttt{Alarm}$}
\STATE $\sh{\gamma_{\U}[t_\texttt{Prog};\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}]}$ \COMMENT{Run the machine in order to compute the permutation.} \label{al:com:posforcom}
\ELSIF{$v_\texttt{Clock}=v_\texttt{Alarm}$}
\STATE{$\chekk[\pi_{\texttt{Head}_{-1}},\pi_{\texttt{Head}_{+1}}\notin Q_{t_\texttt{Prog}}\setminus\{\motvide\}]$} \COMMENT{Check that the computation halted.} \label{al:com:poscheckhalt}
\STATE{$\rite[\texttt{Tape};\texttt{NTape}]$} \COMMENT{Copy the output onto a different tape.} \label{al:com:posbennet}
\STATE{$\exch[\texttt{Head}_{-1},\texttt{Head}_{+1}]$} \COMMENT{The directions of the fields of $F_{\U}[p]^{-1}$ are opposite to those of $F_{\U}[p]$.} \label{al:com:posexch}
\ELSIF{$v_\texttt{Alarm}<v_\texttt{Clock}\le2v_\texttt{Alarm}$}
\STATE{$\sh{\gamma_{\U}[t_\texttt{Prog};\texttt{Tape},\texttt{Head}_{+1},\texttt{Head}_{-1}]^{-1}}$} \COMMENT{Unwind the computation in order to delete the archive.}\label{al:com:posbaccom}
\ENDIF
\IF{$2v_\texttt{Alarm}\le v_\texttt{Clock}<3v_\texttt{Alarm}$}
\STATE $\sh{\gamma_{\U}[t_\revprog;\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}]}$ \COMMENT{Compute the inverse of the permutation, in order to recover \texttt{Tape}.} \label{al:com:negforcom}
\ELSIF{$v_\texttt{Clock}=3v_\texttt{Alarm}$}
\STATE{$\chekk[\pi_{\texttt{Head}_{-1}},\pi_{\texttt{Head}_{+1}}\notin Q_{t_\revprog}\setminus\{\motvide\}]$} \COMMENT{Check that the computation halted.} \label{al:com:negcheckhalt}
\STATE{$\rite[\texttt{Tape};\texttt{NTape}]^{-1}$} \label{al:com:negbennet}\COMMENT{Empty $\texttt{NTape}$.}
\STATE{$\exch[\texttt{Head}_{-1},\texttt{Head}_{+1}]$} \label{al:com:negexch}\COMMENT{Reverse the directions again.}
\ELSIF{$3v_\texttt{Alarm}<v_\texttt{Clock}\le4v_\texttt{Alarm}$}
\STATE{$\sh{\gamma_{\U}[t_\revprog;\texttt{Tape},\texttt{Head}_{+1},\texttt{Head}_{-1}]^{-1}}$} \label{al:com:negbaccom}\COMMENT{Unwind the second computation, too.}
\ENDIF
\IF{$v_\texttt{Clock}=4v_\texttt{Alarm}$ \AND $v_\texttt{Addr}=0$}
\STATE $\rite[0;\texttt{Head}_{-1}]^{-1}$\label{al:com:delhead}
\COMMENT{Erase the machine initial state.}
\ENDIF
\end{algo}
$\compute[v_\texttt{Addr},v_\texttt{Clock},v_\texttt{Alarm},t_\texttt{Prog},t_\revprog;\C[\compute]]$ is polynomially computable with respect to its parameters.
Note that, depending on the value of $v_\texttt{Addr}$, only a few of these permutations are applied.
In applications, the three parameters $v_\texttt{Alarm},t_\texttt{Prog},t_\revprog$ will be constant. $v_\texttt{Alarm}$ contains a natural number that controls how long the computation lasts. $t_\texttt{Prog}$ and $t_\revprog$ are interpreted as the program and the reverse program (\textit{i.e.}, the program of the inverse IPPA) of the IPPA that we want to simulate.
We are now able to uniformly simulate RPCA with \xpr{radius $0$} (null direction vector).
\begin{lemma}\label{behavior}
Let us fix a field list $\C[\coordi] \sqcup \C[\compute] \in\mathbb N^8$, vectors $\vec{k}, \vec{k'}\in\mathbb N^*$, integers $S, T \in {\mathbb N}_1$, $t_0, U\in\mathbb N$ and programs $p,p^{-1}\in\haine4^*$ of a partial permutation $\alpha:\haine5^{**}\pto\haine5^{**}$ and its inverse $\alpha^{-1}$, respectively, and let $G$ be the IPPA corresponding to the permutation $\alpha$ and the null direction vector.
Consider the IPPA $F$ with directions $\vec{\nu}_{\coordi \cup \compute}$, and permutation
\begin{equation*}
\coordi[S,T]\circ\compute[\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}}-t_0,U,p,p^{-1}]~,
\end{equation*}
and assume that the following inequalities hold:
\[\both{
U\ge\max\{t_p({\haine5^{\vec{k'}}}),t_{p^{-1}}({\haine5^{\vec{k'}}})\}\\
S\ge\max\{2U,\norm{\Chi{\haine5^{\vec{k'}}}}\}\\
T\ge4U+t_0\\
k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S\\
k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T\\
k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_\texttt{Tape},k_\texttt{NTape}\ge1.
}\]
Then, $F\restr{(\haine5^{\vec k})^\mathbb Z}\simu[S,T,0,\Phi]G\restr{(\haine5^{\vec k'})^\mathbb Z}$, where $\Phi\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr\Sigma}$ and
$\Sigma\defeq(\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{0}{S}{T}\cap \emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}((\haine5^{\vec{k'}})^\mathbb Z).$
\end{lemma}
The number $t_0$ should be understood as a delay before which to apply this rule, and $U$ as the maximal time that we allow for the (forward and backward) computation.
\begin{proof}
Remarks~\ref{twoconditionsofsimulation} and \ref{thirdconditionofsimulation} imply that $\Phi$ is surjective, $\Phi \sigma^S = \sigma \Phi$ and that $\dom(\Phi)=\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}F^t\sigma^s(\dom(\Phi))$ is a disjoint union.
Therefore, in order to prove the simulation we only have to prove that $G \Phi = \Phi F^T$. This is equivalent (since we are talking about partial functions) to the facts that if $\Phi(c)=b$, then $F^T(c)$ exists if and only if $G(b)$ exists, and in that case $\Phi F^T(c)=G(b)$.
We are actually going to prove the following stronger fact:
\begin{fact}\label{fact:permutation}
If $c \in (\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, then $F^{4U}(c)$ exists if and only if $G(b)$ exists, and in that case $F^{4U}(c) \in \gra{0}{t_0+4U}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(G(b))$.
\end{fact}
Since the only rule applied outside $t_0 \le \texttt{Clock} \le t_0 + 4U$ is $\coordi$, Fact~\ref{fact:permutation} implies that if $\Phi(c)=b$, then $\Phi F^T(c)=G(b)$, which concludes the proof of the lemma.
For the rest of the proof, let $c \in (\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$.
Suppose, first, that $F^{4U}(c)$ exists.
\begin{itemize}
\item $\texttt{Clock}=t_0$: Initially, $c \in \emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}$. Line~\ref{al:com:wrhead} writes the initial head of the TM on $\texttt{Head}_{-1}$ of the leftmost cell of every colony.
\item $t_0 \le \texttt{Clock} < t_0+U$: Only the permutation of line~\ref{al:com:posforcom} is applied. Together with the directions of $\vec{\nu}_{\compute}$, this implies that we apply $U$ steps of the IPPA $F_{\U}[p]$ on configuration \\
$(\pi_{\texttt{Tape}}(c),\pi_{\texttt{Head}_{-1}}(c),\pi_{\texttt{Head}_{+1}}(c))$. This configuration has a starting TM head at the leftmost cell of every colony, and since $c \in \Phi^{-1}(b)$, we have that in the $i$'th colony the input of the TM is $\Chi{b_i}$.
\item $\texttt{Clock}=t_0+U$: Line~\ref{al:com:poscheckhalt} checks that nowhere on the tape does a head of the TM defined by $p$ appear. This means that all the TMs have accepted within the first $U$ steps and, since $p$ is the program of $\alpha$, we obtain that $\alpha(b_i)$ exists for all $i \in \mathbb Z$, or equivalently that $G(b)$ exists.
\end{itemize}
Therefore, if $F^{4U}(c)$ exists, then $G(b)$ exists.
For the other direction, suppose that $G(b)$ exists, or, equivalently, that $\alpha(b_i)$ exists, for all $i \in \mathbb Z$.
\begin{itemize}
\item $\texttt{Clock}=t_0$: By assumption, $c \in \emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}$. Line~\ref{al:com:wrhead} writes the initial state of the TM on the $\texttt{Head}_{-1}$ field of the leftmost cell of every colony.
\item $t_0 \le \texttt{Clock} < t_0+U$: Only the permutation of line~\ref{al:com:posforcom} is applied. Together with the directions of $\vec{\nu}_{\compute}$, this implies that at each step, we apply the IPPA $F_{\U}[p]$ on the configuration \\
$(\pi_{\texttt{Tape}}(c),\pi_{\texttt{Head}_{-1}}(c),\pi_{\texttt{Head}_{+1}}(c))$. There is a TM head at the leftmost position of every colony and the input in the $i$'th colony is equal to $\Chi{b_i}$. In other words, $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i$, for all $i \in \mathbb Z$.
\item $\texttt{Clock}=t_0+U$: Since $\alpha(b_i)$ is defined for all $i \in \mathbb Z$, $U \geq t_p(\haine5^{\vec{k'}})$ and $S \geq 2U$, we can see that the conditions of Corollary~\ref{Turing} are satisfied with $n=U$. This means that the computation of the TM in every colony has accepted and that the output of the computation is written on the $\texttt{Tape}$ of every colony. In other words, $\tilde{\phi}_{\texttt{Tape}}(\col{i}{F^{U}(c)})=\alpha(b_i)$, for all $i \in \mathbb Z$.
The check of line~\ref{al:com:poscheckhalt} is true, since by assumption all of the TMs have accepted before $\texttt{Clock}=t_0+U$, and when a TM halts its head disappears. Line~\ref{al:com:posbennet} copies the contents of $\texttt{Tape}$ on $\texttt{NTape}$. Therefore, after the application of line~\ref{al:com:posbennet}, we have that $\tilde{\phi}_{\texttt{NTape}}(\col{i}{F^{U}(c)})=\alpha(b_i)$, for all $i \in \mathbb Z$.
Finally, line~\ref{al:com:posexch} swaps the fields $\texttt{Head}_{-1}$ and $\texttt{Head}_{+1}$. This can be thought of as \xpr{reversing} the directions of these fields. We do this because we want to reverse the computation done by $F_{\U}[p]$ and in order to achieve this, it is not enough to apply $\sh{\gamma_{\U}[p]^{-1}}$, but we also need to use directions $-\vec{\nu}_{\compute}$.
\item $t_0+U+1 \le \texttt{Clock} \le t_0+2U$: Only the permutation of line~\ref{al:com:posbaccom} is applied. Together with the fact that the shift directions have been reversed and that the fields $\texttt{Head}_{-1}$, $\texttt{Head}_{+1}$ have also been exchanged in the rules of lines \ref{al:com:posforcom} and \ref{al:com:posbaccom}, this implies that the IPPA $(F_{\U}[p])^{-1}$ is applied for $U$ time steps on the configuration $(F_{\U}[p])^U(\pi_{\texttt{Tape}}(c),\pi_{\texttt{Head}_{-1}}(c),\pi_{\texttt{Head}_{+1}}(c))$.
Therefore,
$(\pi_{\texttt{Tape}}F^{2U}(c),\pi_{\texttt{Head}_{-1}}F^{2U}(c),\pi_{\texttt{Head}_{+1}}F^{2U}(c))=(\pi_{\texttt{Tape}}(c),\pi_{\texttt{Head}_{-1}}(c),\pi_{\texttt{Head}_{+1}}(c)).$
In other words, the computation has been \xpr{run backwards} until the beginning, but the output of the computation is on $\texttt{NTape}$. This is the trick used by Bennett in \cite{bennett} to simulate arbitrary TMs with reversible ones.
At this point, $F^{2U}(c) \in \emp[\motvide]{\texttt{Head}_{+1}}$, $\texttt{Head}_{-1}$ is empty except at the leftmost cell of every colony, where it contains the initial state $0$, and finally, $\phi_{\texttt{Tape}}(\col{i}{F^{2U}(c)})=b_i$ and $\phi_{\texttt{NTape}}(\col{i}{F^{2U}(c)})=\alpha(b_i)$, for all $i \in \mathbb Z$.
\item $t_0+2U \le \texttt{Clock} < t_0+3U$: Only the permutation of line~\ref{al:com:negforcom} is applied. Together with the directions of $\vec{\nu}_{\compute}$, this implies that at each step, we apply the IPPA $F_{\U}[p^{-1}]$ on the configuration \\
$(\pi_{\texttt{NTape}}(c),\pi_{\texttt{Head}_{-1}}(c),\pi_{\texttt{Head}_{+1}}(c))$. Notice that we use $\texttt{NTape}$ as the TM tape and we use the program $p^{-1}=t_{\revprog}$.
\item $\texttt{Clock}=t_0+3U$: Since $\alpha^{-1}(\alpha(b_i))$ is defined for all $i \in \mathbb Z$, $U \ge t_{p^{-1}}(\haine5^{\vec{k'}})$ and $S \geq 2U$, the conditions of Corollary~\ref{Turing} are satisfied with $n=U$. This implies that
$\phi_{\texttt{NTape}}(\col{i}{F^{3U}(c)})=b_i$, for all $i \in \mathbb Z$.
The check of line~\ref{al:com:negcheckhalt} is true, since all of the TMs have accepted before $\texttt{Clock}=t_0+3U$. Line~\ref{al:com:negbennet} copies the contents of $\texttt{Tape}$ on $\texttt{NTape}$. Since, at this point, these fields are equal in every cell, this is equivalent to emptying the field $\texttt{NTape}$ (in a reversible way, though). Therefore, after applying this permutation, $F^{3U}(c) \in \emp[\motvide]{\texttt{NTape}}$.
We still have to empty the $\texttt{Head}$ fields, too. For this, we have to run the computation backwards. Line~\ref{al:com:negexch} swaps the fields $\texttt{Head}_{-1}$ and $\texttt{Head}_{+1}$, \xpr{reversing} the directions of these fields.
\item $t_0+3U+1 \le \texttt{Clock} \le t_0+4U$: Only the permutation of line~\ref{al:com:negbaccom} is applied. Together with the fact that the shift directions have been reversed and that the head fields inside the rules are also exchanged, this implies that the IPPA $F_{\U}[p^{-1}]^{-1}$ is applied for $U$ time steps on the configuration $F_{\U}[p^{-1}]^{U}(\pi_{\texttt{NTape}}F^{2U}(c),\pi_{\texttt{Head}_{-1}}F^{2U}(c),\pi_{\texttt{Head}_{+1}}F^{2U}(c))$.
Therefore,
$(\pi_{\texttt{Tape}}F^{4U}(c),\pi_{\texttt{Head}_{-1}}F^{4U}(c),\pi_{\texttt{Head}_{+1}}F^{4U}(c))=(\pi_{\texttt{NTape}}F^{2U}(c),\pi_{\texttt{Head}_{-1}}F^{2U}(c),\pi_{\texttt{Head}_{+1}}F^{2U}(c)).$
Notice that now we are using $\texttt{Tape}$ as the tape of the TM, while during the forward computation we used $\texttt{NTape}$. This is not a problem, though, because the two fields were equal at the end of the forward computation at step $3U$.
At this point, we have that $\phi_{\texttt{Tape}}(\col{i}{F^{4U}(c)})=\alpha(b_i)$, for all $i \in \mathbb Z$. Also, in the $\texttt{Head}$ fields, there exists the initial state $0$ of the TM on $\texttt{Head}_{-1}$ of the leftmost cell of every colony, while the rest of them are empty.
\item Finally, line~\ref{al:com:delhead} deletes the initial state, and we get that $F^{4U}(c) \in \emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}$.
\end{itemize}
Therefore, we have proved that if $G(b)$ exists, then $F^{4U}(c)$ exists and $F^{4U}(c) \in (\haine5^{\vec k})^\mathbb Z \cap \gra{0}{t_0+4U}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{NTape}} \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(\alpha(b))$, which finishes the proof of the lemma.
\end{proof}
In a nutshell, this is how the construction works. First, use the program $p$ to compute $\alpha$. At the end of this phase, $\texttt{Tape}$ contains $\alpha(b)$ (in the colonies). Copy $\alpha(b)$ onto $\texttt{NTape}$ and in the second phase, run the computation backwards so as to erase all auxiliary information written by the TM during the computation. At the end of the second phase, $\texttt{Tape}$ contains $b$ and $\texttt{NTape}$ contains $\alpha(b)$. In the third and fourth phases of the construction, perform the reverse of what was done in the first two phases, while exchanging the roles of $\texttt{NTape}$ and $\texttt{Tape}$. First, use $p^{-1}$ with tape field $\texttt{NTape}$ so as to compute $\alpha^{-1}(\alpha(b))=b$, then copy $\texttt{Tape}$ onto $\texttt{NTape}$ (thus emptying $\texttt{NTape}$) and then perform the computation backwards. At the end, $\texttt{NTape}$ is again empty and $\texttt{Tape}$ contains $\alpha(b)$ and everything was done in a reversible way.
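The four phases above can be sketched in a few lines of code. This is our own simplified model, not the paper's formal IPPA: the mutually inverse single-step maps \texttt{step} and \texttt{inv\_step} stand in for the rules $\gamma_{\U}[p]$ and $\gamma_{\U}[p^{-1}]$, the permutation $\alpha$ is \texttt{step} iterated $U$ times, and the archive that the real construction keeps in the head fields is implicit, because \texttt{step} is already invertible.

```python
def four_phase(step, inv_step, b, U):
    """Bennett-style four-phase evaluation: leaves alpha(b) on `tape` and
    an empty `ntape`, using only the reversible maps step / inv_step
    (a sketch of the compute rule, not the paper's IPPA)."""
    tape, ntape = b, None
    # Phase 1: forward computation; Tape = alpha(b).
    for _ in range(U):
        tape = step(tape)
    ntape = tape                      # rite[Tape; NTape]: copy the output
    # Phase 2: unwind phase 1; Tape = b again, the archive is consumed.
    for _ in range(U):
        tape = inv_step(tape)
    # Phase 3: run the inverse program forward on NTape:
    # NTape = alpha^{-1}(alpha(b)) = b.
    for _ in range(U):
        ntape = inv_step(ntape)
    assert ntape == tape              # both tapes now hold b ...
    ntape = None                      # ... so rite^{-1} empties NTape reversibly
    # Phase 4: unwind phase 3 (legal on Tape, since the tapes were equal);
    # Tape ends up holding alpha(b).
    for _ in range(U):
        tape = step(tape)
    return tape, ntape
```

For instance, with `step = lambda x: (x + 1) % 16` and $U=3$, the procedure returns $(b+3) \bmod 16$ with the auxiliary tape empty.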
Notice that for all $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$ and all $c \in \Phi^{-1}(b)$, the values of the fields in $\C[\coordi] \sqcup \C[\compute]$ of $c$ are uniquely determined. This implies that if there are no anonymous fields, or if the values of the anonymous fields are determined by the fields of $\C[\coordi] \sqcup \C[\compute]$, then the simulation is also exact.
\section{Shifting}\label{sec:singleshift}
Let $\C[\shift]\defeq[\texttt{Tape},\texttt{Tape}_{-1},\texttt{Tape}_{+1}]$.
$\texttt{Tape}_{+1}$ and $\texttt{Tape}_{-1}$ are used to exchange the information of $\texttt{Tape}$ between colonies.
In the following algorithm, $M \in {\mathbb N}_1$ has to be thought of as the number of fields in the simulated alphabet, $\vec{\nu} \in \{-1,0,+1\}^M$ as the vector of directions of the simulated IPPA, and $\vec{k'} \colon \haine5^{**} \pto \mathbb N^M$ is a vector valuation that gives the lengths of the alphabet of the simulated IPPA.
$\vec{k'}$ represents the field lengths of the simulated letters, whose information is then \xpr{known} to all the letters of the simulating IPPA.
\begin{algo}{shift}{\shift}{M,\vec{\nu},\vec{k'},v_\texttt{Addr},v_\texttt{Clock},v_\texttt{MAddr}}
\IF{$v_\texttt{Clock}=0$ \OR $v_\texttt{Clock}=v_\texttt{MAddr}$}
\FOR{$0\le i<M$}
\IF{$l_{\vec{k'},i}\le v_\texttt{Addr}<l_{\vec{k'},i+1}$} \label{al:shi:encoding}
\STATE{$\exch[\texttt{Tape},\texttt{Tape}_{\vec{\nu}_i}]$} \label{al:shi:movetape} \COMMENT{Letters are moved to the corresponding moving fields, and back after $v_\texttt{MAddr}$ steps.}
\ENDIF
\ENDFOR
\ENDIF
\end{algo}
This is polynomially computable in the parameters.
\begin{lemma}\label{parshift}
Let us fix a field list $\C[\coordi]\sqcup\C[\shift]\in\mathbb N^7$, an integer $M\in{\mathbb N}_1$, a direction vector $\vec{\nu} \in \{-1,0,+1\}^M$, a vector $\vec{k'} \in \mathbb N^M$ , a vector $\vec{k} \in \mathbb N^{*}$ and integers $S,T\in {\mathbb N}_1$, $t_0\in\mathbb N$.
Consider the IPPA $F$ defined by directions $\vec{\nu}_{\coordi \sqcup \shift}$, given by the label indices, and permutation
\begin{equation*}
\coordi[S,T] \circ \shift[M,\vec{\nu},\vec{k'},\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}}-t_0,S],
\end{equation*}
and assume that the following inequalities hold:
\[\both{
S\geq\norm{\Chi{\haine5^{\vec{k'}}}}\\
T\geq t_0+S\\
k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\geq\norm S\\
k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\geq\norm T\\
k_\texttt{Tape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\geq 1
~.}\]
Then $F\restr{(\haine5^{\vec k})^\mathbb Z}\simu[S,T,0,\Phi]\sigma^{-\vec\nu}\restr{(\haine5^{\vec{k'}})^{\mathbb Z}}$, where $\Phi\defeq\tilde{\Phi}\restr\Sigma$ and $\Sigma\defeq(\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{0}{S}{T}\cap \emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}}\cap\tilde{\Phi}^{-1}((\haine5^{\vec{k'}})^\mathbb Z)$.
\end{lemma}
Recall that, by convention, the directions of the fields are the opposite of the shift that is actually applied.
\begin{proof}
Again, by the definition of $\Phi$ and Remarks~\ref{twoconditionsofsimulation} and~\ref{thirdconditionofsimulation}, we know that $\Phi$ is surjective, $\Phi \sigma^S = \sigma \Phi$ and that $\dom(\Phi)\defeq\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}F^t\sigma^s(\dom(\Phi))$ is a disjoint union.
Therefore, we only have to show that $\sigma^{-\vec{\nu}} \Phi = \Phi F^T$, which is equivalent to showing, since $\sigma^{-\vec{\nu}}(b)$ is defined for all $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$, that if $\Phi(c)=b$, then $\Phi F^T(c)= \sigma^{-\vec{\nu}}(b)$.
As in the proof of Lemma~\ref{behavior}, we are going to prove the following stronger fact:
\begin{fact}\label{fact:shift}
If $c \in (\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T}\cap\emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, then $F^S(c) \in \gra{0}{t_0+S}{S}{T} \cap \emp[\motvide]{\texttt{Tape}_{-1}=\texttt{Tape}_{+1}} \cap \tilde{\Phi}^{-1}(\sigma^{-\vec{\nu}}(b))$.
\end{fact}
\begin{itemize}
\item $\texttt{Clock}=t_0$: By assumption, $c \in \emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}}$.
Lines~\ref{al:shi:encoding} and \ref{al:shi:movetape} copy the encodings of the fields of $b$ on the correct $\texttt{Tape}$, in the following sense:
Since $c \in \Phi^{-1}(b)$, $\pi_{\texttt{Tape}}(\col{i}{c})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\Chi{\pi_j(b_i)}$, for all $i \in \mathbb Z$ and $0 \le j < M$. Line~\ref{al:shi:movetape} moves the letter in $\texttt{Tape}$ to $\texttt{Tape}_{\vec{\nu}_j}$ when $l_{\vec{k'},j}\le \texttt{Addr}<l_{\vec{k'},j+1}$, or in other words, moves the encoding of the $j$'th field of $b_i$ onto $\texttt{Tape}_{+1}$ if $j$ is a right-moving field and onto $\texttt{Tape}_{-1}$ if $j$ is a left-moving field.
This means that after the application of line \ref{al:shi:movetape}, we have that \\
$\pi_{\texttt{Tape}_{\vec{\nu}_j}}(\col{i}{c})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1} }}=\Chi{\pi_j(b_i)}$, while $\pi_{\texttt{Tape}_{-\vec{\nu}_j}}(\col{i}{c})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\pi_{\texttt{Tape}}(\col{i}{c})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\motvide$, for all $i \in \mathbb Z$ and $0 \le j < M$.
\item $t_0 \le \texttt{Clock} < t_0+S$: No permutation is applied during these time steps. Only the $\texttt{Tape}$ fields are shifted to the corresponding direction.
\item $\texttt{Clock}=t_0+S$: Every symbol that was part of the encoding of the $j$'th field of $b$ has travelled $S$ steps in the direction indicated by $\vec{\nu}_j$. This means that before the application of line~\ref{al:shi:movetape}, we have that $\pi_{\texttt{Tape}_{\vec{\nu}_j}}(\col{i}{F^S(c)})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\Chi{\pi_j(b_{i-\vec{\nu}_j})}$, while \\
$\pi_{\texttt{Tape}_{-\vec{\nu}_j}}(\col{i}{F^S(c)})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\pi_{\texttt{Tape}}(\col{i}{F^S(c)})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\motvide$, for all $i \in \mathbb Z$ and $0 \le j < M$.
Line~\ref{al:shi:movetape} moves the letters from the fields $\texttt{Tape}_{-1}$ and $\texttt{Tape}_{+1}$ back to $\texttt{Tape}$. Therefore, after the application of line~\ref{al:shi:movetape}, we have that $F^S(c) \in \emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}}$ and $\Phi(F^S(c))=\sigma^{-\vec{\nu}}(b)$, which concludes the proof of the lemma.
\end{itemize}
\end{proof}
In this proof, it is of great importance that all the letters of $b$ have the same lengths, because this implies that the $j$'th field of every letter of $b$ is encoded at the same positions inside every colony. In fact, the reason that we deal only with alphabets of constant lengths is that this shifting procedure can work so easily.
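Because all letters share the same field lengths, the net effect of the shifting procedure is exactly the field-wise shift $\sigma^{-\vec{\nu}}$. A minimal sketch of that net effect (our own illustration on a finite cyclic configuration, assuming the convention $\sigma^{v}(b)_i = b_{i+v}$, so that field $j$ of the new letter at cell $i$ comes from cell $i - \vec{\nu}_j$):

```python
def fieldwise_shift(config, nu):
    """sigma^{-nu} applied field by field: the j'th field of the letter at
    cell i is taken from cell i - nu[j].  In the construction this is
    realised by parking each field on Tape_{nu_j} for S steps and letting
    it drift with the shift, before moving it back to Tape."""
    n = len(config)
    return [tuple(config[(i - nu[j]) % n][j] for j in range(len(nu)))
            for i in range(n)]
```

For example, with letters of two fields and directions $(+1,-1)$, the first field of every letter moves one colony to the right while the second moves one colony to the left.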
\section{Simulating any fixed rule}\label{sec:universal}
In this section, we will use Lemmas~\ref{behavior} and~\ref{parshift} to construct an IPPA that can simulate non-trivially any PCA, when restricted to an appropriate finite subalphabet.
Let $\C[\unive]\defeq \C[\compute] \cup \C[\shift] \in \mathbb N^{6}$ ($\C[\compute]$ and $\C[\shift]$ share the field $\texttt{Tape}$).
\begin{algo}{unive}{\unive}{M,\vec{\nu},\vec{k'},v_\texttt{Addr},v_\texttt{Clock},v_\texttt{MAddr},v_\texttt{Alarm},t_\texttt{Prog},t_\revprog}
\STATE{$\compute[v_\texttt{Addr},v_\texttt{Clock},v_\texttt{Alarm},t_\texttt{Prog},t_\revprog]$} \label{al:uni:comp}
\STATE{$\shift[M,\vec\nu,\vec{k'},v_\texttt{Addr},v_\texttt{Clock}-4v_\texttt{Alarm},v_\texttt{MAddr}]$} \label{al:uni:shift}
\end{algo}
This is easily seen to be polynomially computable in the parameters.
Notice that $\shift$ and $\compute$ are used at \xpr{different time steps}, \textit{i.e.}, at different values of $v_\texttt{Clock}$. $\compute$ starts being used when $v_\texttt{Clock}=0$ and, by the definition of $\compute$, is equal to the identity when $v_\texttt{Clock} \geq 4v_\texttt{Alarm}$, while $\shift$ starts being used when $v_\texttt{Clock}=4v_\texttt{Alarm}$ (it has a delay of $4v_\texttt{Alarm}$). Formally, this means that for every value of $v_\texttt{Clock}$, at most one of the rules $\compute$ and $\shift$ is not equal to the identity.
The following lemma is the fruit of all our efforts until now. It provides an IPPA that can simulate any PCA when restricted to a sufficiently large alphabet.
\begin{lemma}\label{universal}
Let us fix a field list $\C[\unive] \sqcup \C[\coordi] \in\mathbb N^{10}$, an integer $M\in{\mathbb N}_1$, programs $p,p^{-1}\in\haine2^*$ of a partial permutation $\alpha:\haine5^{**}\pto\haine5^{**}$
and its inverse $\alpha^{-1}$, respectively, a direction vector $\vec{\nu}\in \{-1,0,+1\}^M$, a vector $\vec{k'} \in \mathbb N^M$, a vector $\vec{k} \in \mathbb N^{*}$ and integers $S,T,t_0,U\in\mathbb N$.
Let $G=\sigma^{-\vec{\nu}}\alpha$ and let $F$ be the IPPA defined by directions $\vec{\nu}_{\coordi \sqcup \unive}$, given by the label indices, and permutation
\begin{equation*}
\coordi[S,T]\circ \unive[M,\vec{\nu},\vec{k'},\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}}-t_0,S,U,p,p^{-1}]~,
\end{equation*}
and assume that the following inequalities hold:
\[\both{
U\ge\max\{t_p({\haine5^{\vec{k'}}}),t_{p^{-1}}({\haine5^{\vec{k'}}})\}\\
S\ge\max\{2U,\norm{\Chi{\haine5^{\vec{k'}}}}\}\\
T\ge4U+S+t_0\\
k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S\\
k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T\\
k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_\texttt{Tape},k_\texttt{NTape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\ge1.
}\]
Then, $F\restr{(\haine5^{\vec k})^\mathbb Z}\simu[S,T,0,\Phi]G\restr{(\haine5^{\vec{k'}})^{\mathbb Z}}$ completely, where $\Phi\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma}}$ and
\begin{equation*}
\Sigma\defeq(\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}((\haine5^{\vec{k'}})^\mathbb Z).
\end{equation*}
\end{lemma}
\begin{proof}
Notice that line~\ref{al:uni:comp} together with $\coordi$ make up the permutation whose behavior is described in Lemma~\ref{behavior}, while line~\ref{al:uni:shift} and $\coordi$ make up the permutation of Lemma~\ref{parshift}. Also, as already noted, for any value of $\texttt{Clock}$, at most one permutation of lines~\ref{al:uni:comp} and~\ref{al:uni:shift} is not equal to the identity.
Like in the proofs of Lemmas~\ref{behavior} and~\ref{parshift}, we can easily see that we only have to show that if $\Phi(c)=b$, then $\Phi F^T(c) = G(b)$, and that this follows from the following stronger fact:
\begin{fact}\label{fact:shiftandpermutation}
If $c \in (\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, then $F^{4U+S}(c)$ exists if and only if $G(b)$ exists, and in that case $F^{4U+S}(c) \in \gra{0}{t_0+4U+S}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(G(b))$.
\end{fact}
Indeed, according to Fact~\ref{fact:permutation}, we have that $F^{4U}(c)$ exists if and only if $\alpha(b)$ exists, or, equivalently, if and only if $G(b)$ exists, and in this case
\begin{equation*}
F^{4U}(c) \in \gra{0}{t_0+4U}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\\ \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(\alpha(b)).\end{equation*}
Fact~\ref{fact:shift} implies that
$F^{4U+S}(c) \in \gra{0}{t_0+4U+S}{S}{T}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(\sigma^{-\vec{\nu}}\alpha(b)),$
which concludes the proof of the lemma, since $G(b)=\sigma^{-\vec{\nu}}\alpha(b)$.
\end{proof}
$\unive$ is a rule (in fact a family of rules, since it depends on parameters) that can simulate any PPA. Every IPPA $F$ that we construct later will factor onto $\unive$. It might have some additional fields for which we apply a different rule, and this rule might even take into consideration the fields of $\C[\unive]$, but none of these other rules is going to \emph{change} the fields of $\C[\unive]$. Therefore, by projecting onto $\C[\unive]$, we will immediately obtain that $F$ simulates $G$, even though the simulation might not be exact.
However, if $\vec{k}$ does not have any anonymous fields, then the simulation is exact, since all the fields of $\C[\coordi] \sqcup \C[\unive]$ are uniquely determined by $\Phi$.
\subsection{Satisfying the inequalities}
Lemma~\ref{universal} is true only under the assumption that the following set of inequalities are satisfied:
\[\both{
U\geq\max\{t_p({\haine5^{\vec{k'}}}),t_{p^{-1}}({\haine5^{\vec{k'}}})\}\\
S\geq\max\{2U,\norm{\Chi{\haine5^{\vec{k'}}}}\}\\
T\ge4U+S+t_0\\
k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\geq\norm S\\
k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\geq\norm T\\
k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\geq \length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_\texttt{Tape},k_\texttt{NTape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\geq 1
~.}\]
We will denote this set of inequalities by $\I(\vec{k},\vec{k'},S,T,U,t_0,p,p^{-1})$. When $t_0=0$, \textit{i.e.}, when the computation starts at $\texttt{Clock}=0$, we will omit it and write $\I(\vec{k},\vec{k'},S,T,U,p)$, instead. Let us explain again intuitively why each inequality is needed:
\begin{itemize}
\item $k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S$: The fields $\texttt{Addr}$ and $\texttt{Addr}_{+1}$ have to be large enough so that we can write the binary representation of $S$ in them.
\item $k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T$: The fields $\texttt{Clock}$ and $\texttt{Clock}_{+1}$ have to be large enough so that we can write the binary representation of $T$ in them.
\item $U\ge\max\{t_p({\haine5^{\vec{k'}}}),t_{p^{-1}}({\haine5^{\vec{k'}}})\}$: We have to run the TM long enough so that the computation of $p$ on the encoded letters halts.
\item $S\ge\max\{2U,\norm{\Chi{\haine5^{\vec{k'}}}}\}$: The colonies have to be wide enough so that we can encode the letters of $\haine5^{\vec{k'}}$ in them. In addition, they have to be wide enough so that the heads of the computation do not \xpr{collide}.
\item $T\ge4U+S+t_0$: $T$ has to be large enough so that the computation and the shifting are done before the next working period starts.
\item $k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p\cup Q_{p^{-1}})\times\{-1,+1\}}}$:
The head fields have to be large enough so that states of $\gamma_{\U}[p]$ and $\gamma_{\U}[p^{-1}]$ can be written on them.
\item $k_\texttt{Tape},k_\texttt{NTape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\ge1$. Empty fields are of no use, in general.
\end{itemize}
\begin{remark}
If $p,p^{-1}\in \haine2^*$, $\vec{k'} \in \mathbb N^M$ and $t_0 \in \mathbb N$ are fixed, then we can choose $\vec{k},U,S$ and $T$ such that the inequalities of Lemma~\ref{universal} are satisfied.
\end{remark}
\begin{proof}
Since $\vec{k'}, p$ and $t_0$ are fixed, given $U,S,T$ we can choose $\vec{k} \defeq \vec{k}_{U,S,T}$ such that all of the inequalities except for the first three are satisfied as equalities. Then, given $S$ and $U$, we can choose $T \defeq T_{S,U}$ such that the third inequality is satisfied as an equality. Similarly, given $U$, we can choose $S \defeq S_U$ such that the second inequality is satisfied. Finally, $U$ can be chosen independently from the rest of the parameters, since it only depends on $p$ and $\vec{k'}$, which are fixed.
\end{proof}
In later constructions, the choice of $U$ will not be so straightforward, as $\vec{k'}$ will depend on $\vec{k}$. In this case, we first fix $G$ and then look for the suitable values of the parameters. The situation becomes trickier when the simulated RPCA depends on the choice of the parameters, as will be the case in the following chapters. Then, we have to be careful not to fall into a circular argument.
\chapter{Infinite hierarchies}\label{s:hierarchy}
For every PPA $G$ and sufficiently large $S,T$, Lemma~\ref{universal} shows that it is possible to construct a PPA $F$ that $(S,T,0)$-simulates $G$. In addition, the simulation can be made exact. We also want to make it complete. The most direct way is to restrict $F$ to $\tilde{\Phi}_{\texttt{Tape}}^{-1}(\dom{G})$, which, by definition, makes the simulation complete. However, this is not good, because it is a radius-$S$ SFT condition and, if we wanted to have an infinite nested simulation, we would have to impose an infinite number of such restrictions, so that the subshift we would obtain would not be an SFT.
The idea, which is the basic idea behind all hierarchical constructions, is that if the simulated alphabet is determined in an easy way by the simulating alphabet, then it is possible to design a simple IPPA that ensures that the simulating configuration is in $\tilde{\Phi}_{\texttt{Tape}}^{-1}(\dom{G})$.
\section{Son-father checks}
The first thing is to check that the simulated letter, which is written in an encoded form bit by bit in $\texttt{Tape}$, has the correct structure, \textit{i.e.}, it is the encoding of a letter with the correct number of fields and lengths.
$\vec{k'} \colon \haine5^{**} \pto \mathbb N^M$ is a vector valuation that gives the lengths of the simulated alphabet as a function of the lengths of the simulating alphabet. In applications, it will be easily computable from every letter of the simulating alphabet, or, in other words, the information about the structure of the simulated letter will be known to all of the letters of the simulating IPPA.
\begin{algo}{chekka}{\chekka}{M,v_\texttt{Addr},t_\texttt{Tape},\vec{k'}}
\IF{$v_\texttt{Addr}\ge l_{\vec{k'},M}$}
\STATE{$\chekk[t_\texttt{Tape}=3]$} \COMMENT{On the right side of the encoding, \texttt{Tape}\ is $3$.}
\ENDIF
\FOR{$0\le i<M$}
\IF{$v_\texttt{Addr}=l_{\vec{k'},i}$}
\STATE{$\chekk[t_\texttt{Tape}=2]$} \COMMENT{Field separators are at the expected positions.}
\ELSIF{$l_{\vec{k'},i}<v_\texttt{Addr}<l_{\vec{k'},i+1}$}
\STATE{$\chekk[t_\texttt{Tape}\in\haine2]$} \COMMENT{Proper field encodings are binary.}
\ENDIF
\ENDFOR
\end{algo}
This permutation is polynomially computable in its parameters. (In every case that we use it, it will be easily checkable that the parameters are polynomially computable.)
The following lemma follows simply by inspection of the definition of $\Chi{\cdot}$ and ${\chekka}[M,v_\texttt{Addr},t_\texttt{Tape},\vec{k'}]$:
\begin{lemma}\label{lem:fixalpha}
Let us fix a field list $[\texttt{Addr},\texttt{Tape}]\in\mathbb N^2$, an integer $M\in{\mathbb N}_1$, a vector $\vec{k'} \in \mathbb N^M$, $S\in{\mathbb N}_1$ and a vector $\vec{k} \in \mathbb N^*$.
Let $F$ be the IPPA defined by a null direction vector and permutation $\chekka[M,\bina{\pi_{\texttt{Addr}}},\pi_{\texttt{Tape}},\vec{k'}]$, and assume that the following inequalities hold:
\[\both{
S \geq \norm{\Chi{\haine5^{\vec{k'}}}}\\
k_\texttt{Addr}\ge\norm S\\
k_\texttt{Tape}\ge1
~.}\]
Let $c \in (\haine5^{\vec k})^{\mathbb Z} \cap \grs{s}{S} \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, where $s \in \co{0}{S}$ and $b \in (\haine5^{**})^{\mathbb Z}$. Then, $F(c)$ exists if and only if $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$ and in this case $F(c)=c$.
\end{lemma}
In other words, if a configuration is split into colonies using the $\texttt{Addr}$ field and every colony has the encoding of some letter on its $\texttt{Tape}$ tape, then $\chekka[M,\bina{\pi_{\texttt{Addr}}},\pi_{\texttt{Tape}},\vec{k'}]$ ensures that this encoded letter belongs in $\haine5^{\vec{k'}}$.
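To make the structure check concrete, here is a minimal Python sketch of the check performed by $\chekka$, under the assumption that $l_{\vec{k'},i}=\sum_{j<i}(k'_j+1)$ (each field preceded by its separator); the function name and the exact layout convention are illustrative, not part of the construction.

```python
def check_structure(tape, kp):
    """Sketch of the chekka check: `tape` is a word over {'0','1','2','3'}
    and kp = (k'_0, ..., k'_{M-1}) are the field lengths.  We assume the
    layout l_i = sum_{j<i} (k'_j + 1): a separator '2' at every position
    l_i, binary digits strictly inside each field, and padding '3' from
    l_M onwards."""
    # cumulative positions l_0, ..., l_M
    l = [0]
    for kj in kp:
        l.append(l[-1] + kj + 1)
    M = len(kp)
    for v, t in enumerate(tape):
        if v >= l[M]:
            if t != '3':                  # right of the encoding: '3'
                return False
        elif v in l[:M]:
            if t != '2':                  # separator at expected position
                return False
        elif t not in '01':               # proper field content is binary
            return False
    return True

# Field lengths (2, 1): separator, two bits, separator, one bit, padding.
assert check_structure("2012033", (2, 1))
assert not check_structure("2011033", (2, 1))  # missing separator at l_1
```

Note that, exactly as in the algorithm, each position is checked independently of the others, so the test can be carried out by every cell of a colony locally.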
We can also check that some field $i$ in the simulated letter has a prescribed prefix (given by a term $t$).
\begin{algo}{hier}{\hier}{M,v_\texttt{Addr},t_\texttt{Tape},\vec{k'},i,t}
\IF{$l_{\vec{k'},i}<v_\texttt{Addr}\le l_{\vec{k'},i}+\length{\Chi{t}}$}
\STATE{$\chekk[t_\texttt{Tape}=\Chi{t}\restr{v_\texttt{Addr}-l_{\vec{k'},i}}]$}
\ENDIF
\end{algo}
\begin{lemma}\label{lem:fixfield}
Let us fix a field list $[\texttt{Addr},\texttt{Tape}]\in\mathbb N^2$, an integer $M\in{\mathbb N}_1$, a field $i\in\co0M$, a vector $\vec{k'}\in \mathbb N^M$, $S \in \mathbb N$, a term $t \colon \haine5^{**} \to \haine5^{*}$ and a vector $\vec{k} \in \mathbb N^*$.
Let $F$ be the IPPA defined by a null direction vector and permutation $\hier[M,\bina{\pi_\texttt{Addr}},\texttt{Tape},\vec{k'},i,t]$, and assume that the following inequalities hold:
\[\both{
S \geq \norm{\Chi{\haine5^{\vec{k'}}}}\\
k_\texttt{Addr}\ge\norm S\\
k_\texttt{Tape}\ge1
~.}\]
Let $c \in (\haine5^{\vec k})^{\mathbb Z} \cap \grs{s}{S} \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, where $0 \le s <S$ and $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$, and assume that $t(c_n)=t(c_{n'})\defeq t_c$ for all $n,n' \in \mathbb Z$.\\
Then, $F(c)$ exists if and only if $\pi_i(b_j)\restr{\co{0}{\norm{t_c}}}=t_c$ for all $j \in \mathbb Z$, and in this case $F(c)=c$.
\end{lemma}
We implicitly assume that if $l > \norm{w}$, where $w \in \A^{*}$, then $w_l=\motvide$.
In other words, if all letters of $c$ have the \xpr{same idea} about what $\pi_i(b_j)$ should be, then, they can check in one step that this indeed happens. In practice, $t$ will usually be equal to $\pi_{\field}$, where $\field$ is a horizontally constant field, so that the condition $t(c_n)=t(c_{n'})$ will be true. In this case, we just check that $\pi_\field(c_n)$ is a prefix of $\pi_{\field}(b_j)$, for all $j,n \in \mathbb Z$.
Lemmas~\ref{lem:fixalpha} and \ref{lem:fixfield} correspond to what in \cite{drs} is achieved by mentioning that ``the TM knows at which place the information of every field is held''. For many people, this argument is one of the most confusing things in that construction. This is the reason why we have tried to explain this point as clearly as possible and show exactly how the cells of the simulating IPPA can collectively check that some constant information of the simulating alphabet is the same in the simulated alphabet. In fact, we use a general term $t$ in Lemma~\ref{lem:fixfield}, which essentially allows us to impose any (polynomially computable) condition on the simulated alphabet.
\section{Self-simulation}\label{sself}
We are now ready to construct a self-simulating RPCA. This is the simplest and first example of nested simulation. We just check that the simulated letter has the same lengths as the simulating ones and that some \xpr{hierarchical} fields (which contain the values $p,p^{-1},U,S,T$ that are fixed in Lemma~\ref{universal}) have the same value in the simulated letter as in the simulating ones (where their value is already fixed).
Let $\C[\texttt{Self}]\defeq\C[\unive]\sqcup [\texttt{MAddr},\texttt{MClock},\texttt{Alarm},\texttt{Prog},\revprog] \in \mathbb N^{15}$.
$\texttt{MAddr}$ and $\texttt{MClock}$ are used to obtain the values of $v_{\texttt{MAddr}}$ and $v_\texttt{MClock}$. Similarly, $\texttt{Prog},\revprog$ and $\texttt{Alarm}$ are used to obtain the values $t_{\texttt{Prog}},t_{\revprog}$ and $v_{\texttt{Alarm}}$. All of these fields will be horizontally constant.
\begin{algo}{selfs}{\texttt{Self}}{M,\vec\nu}
\IF{$\bina{\pi_\texttt{Clock}}=0$}
\STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:self:initial}
\STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},(\length{\pi_j})_{j<M}]$} \COMMENT{Check that the lengths of the simulated letter are the same}\label{al:self:alph}
\FOR{$i\in\{\texttt{MAddr},\texttt{MClock},\texttt{Alarm},\texttt{Prog},\revprog\}$}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\texttt{Tape},(\length{\pi_j})_{j<M},i,\pi_i]$} \COMMENT{Check that the hierarchical fields of the simulated letter are the same}\label{al:self:hiera}
\ENDFOR
\ENDIF
\STATE{$\unive[M$,$\vec{\nu}$,$(\length{\pi_j})_{j<M}$,$\bina{\pi_\texttt{Addr}}$,$\bina{\pi_\texttt{Clock}}$, $\bina{\pi_\texttt{MAddr}}$, $\bina{\pi_\texttt{MClock}}$, $\bina{\pi_\texttt{Alarm}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$}\COMMENT{The alphabet is as expected; we can simulate.}
\STATE{$\coordi[\bina{\pi_\texttt{MAddr}},\bina{\pi_\texttt{MClock}}]$}
\label{al:self:unive}
\end{algo}
In the next lemma, we do not want to have any anonymous fields, but only those fields that are used in $\texttt{Self}$. There are $15$ fields in $\C[\texttt{Self}]$, so we take the field list $[0,\ldots,14]$, which means that we assign a number in $\co{0}{15}$ to every field of $\C[\texttt{Self}]$ in some arbitrary (but fixed) way. Once we have done this, the corresponding vector of directions is also well-defined.
\begin{lemma}\label{self}
Let us fix the field list $\C[\texttt{Self}]\defeq[0,\ldots,14]$, the corresponding direction vector $\vec{\nu}_{\texttt{Self}}$, integers $S,T,U\in{\mathbb N}_1$ and vector $\vec k\in\mathbb N^{15}$. Let $F$ be the IPPA with directions $\vec{\nu}_{\texttt{Self}}$ and permutation $\texttt{Self}[15,\vec{\nu}_{\texttt{Self}}]$ and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively.
Let $F_{\vec{k},S,T,U}$ be the restriction of $F$ to the subalphabet
\begin{equation*}
\A_{\vec k,S,T,U}\defeq\haine5^{\vec{k}}\cap\emp[S]{\texttt{MAddr}}\cap\emp[T]{\texttt{MClock}}\cap\emp[U]{\texttt{Alarm}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog},
\end{equation*}
and assume that the following inequalities are satisfied:
\[\both{
\I(\vec{k},\vec{k},S,T,U,p,p^{-1})\\
k_\texttt{Prog} \geq\norm p\\
k_\revprog \geq\norm{p^{-1}}\\
k_\texttt{MAddr} \geq \norm{S}\\
k_{\texttt{MClock}} \geq \norm{T}\\
k_{\texttt{Alarm}} \geq \norm{U}
~.}\]
Then, $F_{\vec{k},S,T,U}\simu[S,T,0,\Phi]F_{\vec{k},S,T,U}$ completely exactly, where $\Phi\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma}}$ and \begin{equation*}
\Sigma\defeq\A_{\vec k,S,T,U}^{\mathbb Z}\cap\gra{0}{0}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap\tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{\vec k,S,T,U}^{\mathbb Z}).
\end{equation*}
\end{lemma}
It is important to notice that $F$ is a fixed rule that does not depend on $\vec{k},S,T,U$. Therefore, its program $p$ is a \emph{fixed} word which we can \xpr{feed} to itself by restricting the alphabet to $\emp[p]{\texttt{Prog}}$. This is the basic idea of self-simulation. Notice also that the fields for which we do a hierarchical check are exactly those that are fixed in the definition of $\A_{\vec k,S,T,U}$. We need to ensure that these fields have the correct value in the simulated letter. The correct way to do this is to check that the value of the simulated letter is in a good relation with the values in the letters of the simulating IPPA. Here, the relation is simply equality. Later it will be something more complicated.
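The \xpr{feeding the fixed program to itself} step can be illustrated by a small, purely illustrative Python sketch; here `encode` is a stand-in for the real program encoding and the field names are assumptions, not part of the construction.

```python
# The rule F is fixed in advance, so its program p is a fixed word.
# Restricting the alphabet to letters whose Prog field equals p is how
# F is "fed" its own description.

def encode(rule_description):
    """Stand-in for the real encoding of a program as a word."""
    return rule_description.encode().hex()

P = encode("Self[15, nu_Self]")   # fixed once and for all

def in_restricted_alphabet(letter):
    """A letter (here a dict of fields) survives the restriction iff its
    Prog field carries the fixed program P."""
    return letter.get("Prog") == P

assert in_restricted_alphabet({"Prog": P, "Level": 0})
assert not in_restricted_alphabet({"Prog": "deadbeef"})
```

The point, as in the lemma, is that $P$ is computed once from the fixed rule and then hard-wired into the alphabet, so no circularity arises.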
We will try to give as many details as possible in the following proof because it will serve as a prototype for the rest of the hierarchical simulations.
\begin{proof}
We have to show three things: First of all, that $F_{\vec{k},S,T,U}$ $(S,T,0)$-simulates $F_{\vec{k},S,T,U}$ with decoding function $\Phi$ (simulation), second, that $\Phi$ is injective (exactness) and, finally, that $\Omega_{F_{\vec{k},S,T,U}} \subseteq \rock{\Phi}$ (completeness).
For the simulation part, let $b \in \A_{\vec k,S,T,U}^{\mathbb Z}$ and $c \in \A_{\vec k,S,T,U}^{\mathbb Z}\cap\gra0{0}{S}{T}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \tilde{\Phi}^{-1}(b)$. By definition, $c$ is not rejected by the checks of lines~\ref{al:self:initial},\ref{al:self:alph} and~\ref{al:self:hiera}.
Indeed, line~\ref{al:self:initial} checks that the fields $\texttt{Head}_{-1}$, $\texttt{Head}_{+1}$, $\texttt{Tape}_{-1}$, $\texttt{Tape}_{+1}$, $\texttt{NTape}$ are empty, which is true since
\begin{equation*}
c \in \gra{0}{0}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}.
\end{equation*}
Line~\ref{al:self:alph} checks that the lengths of the fields of $b$ and $c$ are the same, while the checks of line~\ref{al:self:hiera} check that $b$ and $c$ have the same values in the fields $\texttt{MAddr}, \texttt{MClock}, \texttt{Alarm}, \texttt{Prog}$ and $\revprog$, which are true by definition.
Since $c$ is not rejected by these checks and $F_{\vec{k},S,T,U}$ is a subrule of
\begin{equation*}
\coordi[S,T]\circ \unive_{15,\vec{\nu}_{\texttt{Self}}}[(\length{\pi_j})_{j<M},\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}},S,T,U,p,p^{-1}]
\end{equation*}
and since, by assumption, the inequalities of Lemma~\ref{universal} are satisfied and $p$ is the program of $\texttt{Self}[15,\vec{\nu}_{\texttt{Self}}]$, Lemma~\ref{universal} gives that $F_{\vec{k},S,T,U}$ $(S,T,0)$-simulates $F_{\vec{k},S,T,U}$ with decoding function $\Phi$.
For the exactness part, we have already noted various times that the values of the fields in $\C[\unive]$ are uniquely determined for all $c \in \Phi^{-1}(b)$. For the hierarchical fields (\textit{i.e.}, $\texttt{MAddr}, \texttt{MClock}, \texttt{Alarm}, \texttt{Prog}, \revprog$) the values are fixed for all $c \in \A_{\vec k,S,T,U}^{\mathbb Z}$. In addition, there do not exist any anonymous fields (since we chose $M=15$). Therefore, $\Phi$ is injective and the simulation is exact.
In order to show that the simulation is complete, we will first show that if $c \in \gra{0}{0}{S}{T} \cap F_{\vec{k},S,T,U}^{-T}(\A_{\vec k,S,T,U}^{\mathbb Z})$, then $c \in \Phi^{-1}(\A_{\vec k,S,T,U}^{\mathbb Z})$.
Indeed, line~\ref{al:self:initial} checks that $c \in \gra{0}{0}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}$ (in the sense that if this is not true, then $F_{\vec{k},S,T,U,p}$ would not be defined, so there would be a contradiction). According to Lemma~\ref{lem:fixalpha}, line~\ref{al:self:alph} checks that for every colony $\col{i}{c}$, $\pi_{\texttt{Tape}}(\col{i}{c})$ has the structure of the encoding of a letter in $\haine5^{\vec{k}}$. (We cannot immediately say that $\pi_{\texttt{Tape}}(\col{i}{c})$ is the encoding of a letter in $\haine5^{\vec{k}}$ because there are some triplets that are not used by $\Chi{\cdot}$. So for example, if the first three letters in the $\texttt{Tape}$ tape are $111$, then $\pi_{\texttt{Tape}}(\col{i}{c})$ is not the encoding of a letter in $\haine5^{\vec{k}}$, even though the $2$'s and $3$'s are in the correct positions.) In addition, since $F_{\vec{k},S,T,U}^{T}$ exists and the inequalities $\I(\vec{k},\vec{k},S,T,U,p,p^{-1})$ are satisfied, this means that the computation of $p$ on input $\pi_\texttt{Tape}(\col{i}{c})$ halts, therefore for all $i \in \mathbb Z$, $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i \in \haine5^{**}$. Lemma~\ref{lem:fixalpha} now implies that $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i \in \haine5^{\vec{k}}$.
Finally, line~\ref{al:self:hiera} checks that $b_i \in \emp[S]{\texttt{MAddr}}\cap\emp[T]{\texttt{MClock}}\cap\emp[U]{\texttt{Alarm}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}$ by checking that the hierarchical fields of $b_i$ have the same values as the corresponding fields of the letters of $c$ (notice that the hierarchical fields are constant for the letters of $c$, so that Lemma~\ref{lem:fixfield} applies). Summarizing, we have that $b \in \A_{\vec k,S,T,U}^{\mathbb Z}$, so that $c \in \A_{\vec k,S,T,U}^{\mathbb Z}\cap\gra{0}{0}{S}{T} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap\tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{\vec k,S,T,U}^{\mathbb Z})$.
Finally, if $c \in F_{\vec{k},S,T,U}^{-2T}(\A_{\vec k,S,T,U}^{\mathbb Z})$, then $F_{\vec{k},S,T,U}^t\sigma^s(c) \in \gra{0}{0}{S}{T}$, for some $s\in \co{0}{S}$ and $t \in \co{0}{T}$. Therefore, $F_{\vec{k},S,T,U}^t\sigma^s(c) \in \gra{0}{0}{S}{T} \cap F_{\vec{k},S,T,U}^{-T}(\A_{\vec k,S,T,U}^{\mathbb Z})$, so that $F_{\vec{k},S,T,U}^t\sigma^s(c) \in \Phi^{-1}(\A_{\vec k,S,T,U}^{\mathbb Z})$. This implies that \\
$\Omega_{F_{\vec{k},S,T,U}} \subseteq F_{\vec{k},S,T,U}^{-2T}(\A_{\vec k,S,T,U}^{\mathbb Z}) \subseteq \bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}F_{\vec{k},S,T,U}^t\sigma^s(\operatorname{dom}(\Phi))$, which means that the simulation is also complete.
\end{proof}
\subsection{Satisfying the inequalities}
It is not as straightforward to see that the inequalities $\I(\vec{k},\vec{k},S,T,U,p,p^{-1})$ can be satisfied as it was for Lemma~\ref{universal}, because in this case $\vec{k'}$ is equal to $\vec{k}$, which means that we cannot fix $\vec{k'}$ and then choose $\vec{k},S,T,U$ sufficiently big.
\begin{remark}\label{rem:inequselfsimi}
We can find $\vec{k},S,T,U$ such that the inequalities of Lemma~\ref{self} are satisfied. In addition, for all $\epsilon > 0$, $S / T$ can be made larger than $1 -\epsilon$. (Intuitively, the macro-tiles can be made as close to a square as we want.)
\end{remark}
\begin{proof}
We have to satisfy the following inequalities:
\[\both{
U\ge\max\{t_p({\haine5^{\vec{k}}}),t_{p^{-1}}({\haine5^{\vec{k}}})\}\\
S\ge\max\{2U,\norm{\Chi{\haine5^{\vec{k}}}}\}\\
T\ge 4U+S\\
k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S\\
k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T\\
k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_\texttt{Tape},k_\texttt{NTape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\ge1\\
k_\texttt{Prog}=\norm p\\
k_\revprog=\norm{p^{-1}}\\
k_\texttt{MAddr} = \norm{S}\\
k_{\texttt{MClock}} = \norm{T}\\
k_{\texttt{Alarm}} = \norm{U}
~.}\]
For all $S,T,U$, let us choose $\vec{k}\defeq\vec{k}_{S,T,U}$ such that all of the inequalities except the first three are equalities. Then, $\norm{\Chi{\haine5^{\vec{k}}}}\le P_1(\log{S},\log{T},\log{U})$ and $\max\{t_p({\haine5^{\vec{k}}}),t_{p^{-1}}({\haine5^{\vec{k}}})\} \le P_2(\log{S},\log{T},\log{U})$, for some polynomials $P_1,P_2$. These follow by definition of $\Chi{\cdot}$ and $\vec{k}_{S,T,U}$ and the fact that the program $p$ is fixed and has polynomial complexity.
Therefore, it is enough to find $S,T,U$ that satisfy the following inequalities:
\[\both{
U\ge P_2(\log{S},\log{T},\log{U})\\
S\ge\max\{2U,P_1(\log{S},\log{T},\log{U})\}\\
T\ge 4U+S
~.}\]
For all $S,U$, let us choose $T \defeq T_{S,U}=S+4U$. Also, for all $S,S_0,r$, let us choose $U \defeq U_{S,S_0,r}=\log^r(S+S_0)$. Then, the third inequality is satisfied and the other two are written as follows:
\[\bothrl{
\log^r(S+S_0)\ge &P_2(\log{S},\log(S+4\log^r(S+S_0)),\log(\log^r(S+S_0)))\\
S\ge&\max\{2\log^r(S+S_0),\\
&P_1(\log{S},\log(S+4\log^r(S+S_0)),\log(\log^r(S+S_0)))\}.}\]
There exist \emph{fixed} $r,S_0 \in \mathbb N$ such that the first inequality is satisfied for all $S$, because $P_2$ is a fixed polynomial (hence its degree is a fixed number). Let us choose such $r,S_0$. Then, if $S$ is sufficiently large, the second inequality is also satisfied (since $r,S_0$ are now fixed), because the right-hand side grows only polylogarithmically in $S$, which finishes the proof of the claim.
For the second claim, $S/T=\frac{S}{S+4\log^r(S+S_0)}$, which can be made larger than $1-\epsilon$ by choosing $S$ sufficiently large.
\end{proof}
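The argument of the proof can also be checked numerically. In the sketch below, `P1` and `P2` are hypothetical stand-ins for the fixed polynomials of the proof (their coefficients are arbitrary assumptions); with $U=\log^r(S+S_0)$ and $T=S+4U$, the three inequalities hold once $S$ is large enough.

```python
import math

def P1(x, y, z):
    return (x + y + z) ** 2 + 10   # hypothetical polynomial bound

def P2(x, y, z):
    return (x + y + z) ** 3 + 10   # hypothetical polynomial bound

r, S0 = 4, 1024  # fixed once the degree of P2 is known

def inequalities_hold(S):
    U = math.log(S + S0) ** r      # U = log^r(S + S0)
    T = S + 4 * U                  # T = S + 4U
    lS, lT, lU = math.log(S), math.log(T), math.log(U)
    return (U >= P2(lS, lT, lU)
            and S >= max(2 * U, P1(lS, lT, lU))
            and T >= 4 * U + S)

# The inequalities hold for all sufficiently large S.
assert all(inequalities_hold(math.exp(e)) for e in (100, 150, 200))
```

Since $U$ grows like $\log^r S$ while the right-hand sides grow like fixed powers of $\log S$, the threshold on $S$ depends only on the degree of `P2`, mirroring the choice of $r,S_0$ in the proof.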
\begin{corollary}
There exists an RPCA $G$ such that $\orb{G}$ is non-empty, aperiodic and $\NE(G)=\{0\}$.
\end{corollary}
\begin{proof}
Let $G \defeq F_{\vec{k},S,T,U} \colon \A_{\vec k,S,T,U}^{\mathbb Z} \to \A_{\vec k,S,T,U}^{\mathbb Z}$, for some parameters that satisfy $\I(\vec{k},\vec{k},S,T,U,p,p^{-1})$. This is possible, according to Remark~\ref{rem:inequselfsimi}. By definition, we have that $S < T$.
It is not difficult to see that $G^{-1}(\A_{\vec k,S,T,U}^{\mathbb Z})$ is nonempty. Then, Lemmas~\ref{self}, \ref{lem:aperiodichierarchy}, \ref{l:nonvide} and Proposition~\ref{prop:hochman} imply that $\orb{G}$ is non-empty, uncountable, aperiodic and $\NE(G)=\{0\}$.
\end{proof}
This finishes the construction of an extremely-expansive, aperiodic 2D SFT. Once we have achieved self-simulation, extreme expansiveness follows immediately from Proposition~\ref{prop:hochman}.
\section{Hierarchical simulation}
We now want to construct more general nested hierarchical simulations, where the parameters of the simulation might vary in every simulation level. This structure is more flexible than a simple self-simulating RPCA, and it will be more useful in the various applications.
Let us fix the field list $\C[\hsim] \defeq \C[\unive] \sqcup [\texttt{Level},\texttt{Prog},\revprog]$ and let $\vec{\nu}_{\hsim}$ be the corresponding vector of directions.
\begin{itemize}
\item $\texttt{Prog}$ and $\revprog$ are used as in the previous section.
\item $\texttt{Level}$ is used to obtain the values of $v_{\texttt{MAddr}}, v_{\texttt{MClock}}$ and $v_{\texttt{Alarm}}$, not through a direct projection, as in the previous case, but in a polynomially computable way.
\end{itemize}
In the following, let $\seq S,\seq T, \seq U \in \mathbb N^{\mathbb N}$ be sequences of integers and let $\vec{k} \colon \mathbb N \to \mathbb N^M$ be a \emph{sequence} of vectors depending on $n$. (It can give rise to a vector valuation by using $\bina{\pi_\field}$ as the index of the sequence.)
\begin{algo}{hsim}{\hsim}{M,\vec\nu,\vec{k},\seq S,\seq T,\seq U}
\IF{$\bina{\pi_\texttt{Clock}}=0$}
\STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:hier:empty}
\STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1}]$} \COMMENT{Check that the lengths of the simulated letter are correct} \label{al:hier:alph}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\texttt{Prog},\pi_{\texttt{Prog}}]$} \COMMENT{$\texttt{Prog}$ of the simulated letter is the same}\label{al:hier:prog}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\revprog,\pi_\revprog]$} \COMMENT{$\revprog$ is also the same}\label{al:hier:revprog}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\texttt{Level},\anib{\bina{\pi_\texttt{Level}}+1}]$} \COMMENT{$\bina{\texttt{Level}}$ of the simulated letter increases by $1$}\label{al:hier:lev}
\ENDIF
\STATE{$\unive[M$, $\vec{\nu}$, $\vec{k}_{\bina{\pi_\texttt{Level}}}$, $\bina{\pi_\texttt{Addr}}$ ,$\bina{\pi_\texttt{Clock}}$, $\seq{S}_{\bina{\pi_\texttt{Level}}}$, $\seq{T}_{\bina{\pi_\texttt{Level}}}$, $\seq{U}_{\bina{\pi_\texttt{Level}}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$} \COMMENT{Simulate}\label{al:hier:unive}
\STATE{$\coordi[\seq{S}_{\bina{\pi_\texttt{Level}}},\seq{T}_{\bina{\pi_\texttt{Level}}}]$}
\end{algo}
We will now construct a nested simulation of RPCA where the simulation parameters are different at every level.
\begin{lemma}\label{lem:nestsimul}
Let $\seq U,\seq S,\seq T$ be polynomially checkable sequences of integers. Let us fix the field list $\C[\hsim]\defeq [0,\ldots,12]$, the corresponding \emph{fixed} direction vector $\vec{\nu}_{\hsim}$ and a polynomially checkable sequence of $13$-tuples $\seq{\vec{k}}\in(\mathbb N^{13})^{\mathbb N}$.
Let $F$ be the IPPA with directions $\vec{\nu}_{\hsim}$ and permutation $\hsim[13,\vec{\nu}_{\hsim},\vec{k},\seq S,\seq T, \seq U]$ and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively.
For all $n \in \mathbb N$, let $F_n$ be the restriction of $F$ to the subalphabet
\begin{equation*}
\A_{n}\defeq \haine5^{\vec{k}_n}\cap\emp[n]{\texttt{Level}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog},
\end{equation*}
and assume that the following inequalities hold for all $n \in \mathbb N$:
\[\both{
\I(\vec{k}_n,\vec{k}_{n+1},S_n,T_n,U_n,p,p^{-1})\\
k_{n,\texttt{Prog}}\geq\norm p\\
k_{n,\revprog}\geq\norm{p^{-1}}\\
k_{n,\texttt{Level}} \geq \norm{n}
~.}\]
Then, $F_n\simu[S_n,T_n,0,\Phi_n]F_{n+1}$ completely exactly, where $\Phi_n\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma_n}}$ and $\Sigma_n \defeq \A_{n}^{\mathbb Z} \cap \gra{0}{0}{S_n}{T_n}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{n+1}^{\mathbb Z})$.
\end{lemma}
The proof is very similar to the proof of Lemma~\ref{self}. There exist some differences, though. For example, we do not have the fields $\texttt{MAddr}, \texttt{MClock}$ and $\texttt{Alarm}$: their values are computed with the aid of field $\texttt{Level}$, so we perform a hierarchical check for $\texttt{Level}$. Apart from that, the proof follows the same pattern.
Another difference, which will be important when we prove that the inequalities can be satisfied, is that the program is not fixed once we fix $M$ and $\vec{\nu}$, as in the self-similar case, but depends on $\vec{k},\seq{S},\seq{T}$ and $\seq{U}$. Therefore, its complexity also depends on these parameters. More precisely, $t_p(\A_{n}) = P(\length{\A_n}+t_{\vec{k}}(n)+t_{\seq{S}}(n)+t_{\seq{T}}(n)+t_{\seq{U}}(n))$, for some polynomial $P$ that does not depend on the parameters. This is due to the fact that the program consists of a bounded number (independent of $n$) of polynomially computable functions and a bounded number of calls to the parameters. Similarly, $\length{p}= O(\length{p_{\vec{k}}}+\length{p_{\seq{S}}}+\length{p_{\seq{T}}}+\length{p_{\seq{U}}})$. (The same things hold for $p^{-1}$.)
\begin{proof}
Let us fix $n \in \mathbb N$.
We have to show three things: that $F_n$ $(S_n,T_n,0)$-simulates $F_{n+1}$ with decoding function $\Phi_n$ (simulation), that $\Phi_n$ is injective (exactness) and that $\Omega_{F_{n}} \subseteq \operatorname{dom}(\Phi_n)$ (completeness).
For the simulation part, let $b \in \A_{n+1}^{\mathbb Z}$ and $c \in \A_{n}^{\mathbb Z}\cap\gra{0}{0}{S_n}{T_n} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \Phi_n^{-1}(b)$. By definition, $c$ is not rejected by the checks of lines~\ref{al:hier:empty}, \ref{al:hier:alph}, \ref{al:hier:prog}, \ref{al:hier:revprog} and \ref{al:hier:lev}.
Since $c$ is not rejected by these checks and $F_{n}$ is a subrule of
\begin{equation*}
\coordi[S_n,T_n]\circ\unive[13,\vec{\nu}_{\unive},(\length{\pi_j})_{j<M},\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}},S_n,T_n,U_n,p,p^{-1}]
\end{equation*}
and, by assumption, the inequalities of Lemma~\ref{universal} are satisfied by $\vec{k}_n$ and $\vec{k}_{n+1}$, and $p$ is the program of $\hsim[13,\vec{\nu}_{\hsim},\vec{k},\seq S,\seq T, \seq U]$, Lemma~\ref{universal} gives that $F_{n}$ $(S_n,T_n,0)$-simulates $F_{n+1}$ with decoding function $\Phi_n$.
For the exactness part, we have already noted various times that the values of the fields in $\C[\unive]$ are uniquely determined for all $c \in \Phi^{-1}(b)$. For the hierarchical fields (\textit{i.e.}, $\texttt{Level}, \texttt{Prog}, \revprog$) the values are fixed for all $c \in \A_{n}^{\mathbb Z}$. In addition, there do not exist any anonymous fields (since we chose $M=13$). Therefore, $\Phi_n$ is injective and the simulation is exact.
For the completeness part, we will only show that if $c \in \gra{0}{0}{S_n}{T_n} \cap F_{n}^{-T}(\A_{n}^{\mathbb Z})$, then $c \in \Phi^{-1}(\A_{n+1}^{\mathbb Z})$. Having shown this, it is easy to conclude that the simulation is complete using the same argument as in the proof of Lemma~\ref{self}.
Indeed, line~\ref{al:hier:empty} checks that $c \in \gra{0}{0}{S_n}{T_n}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}$. According to Lemma~\ref{lem:fixalpha}, line~\ref{al:hier:alph} checks that for every colony $\col{i}{c}$, $\pi_{\texttt{Tape}}(\col{i}{c})$ has the structure of the encoding of a letter in $\haine5^{\vec{k}_{n+1}}$. In addition, since $F_{n}^{T_n}(c)$ exists and the inequalities $\I(\vec{k}_{n},\vec{k}_{n+1},S_n,T_n,U_n,p,p^{-1})$ are satisfied, this means that the computation of $p$ on input $\pi_\texttt{Tape}(\col{i}{c})$ halts, therefore for all $i \in \mathbb Z$, $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i \in \haine5^{**}$. Lemma~\ref{lem:fixalpha} now implies that $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i \in \haine5^{\vec{k}_{n+1}}$.
Finally, lines~\ref{al:hier:lev}, \ref{al:hier:prog} and \ref{al:hier:revprog} check that $b_i \in \emp[n+1]{\texttt{Level}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}$.
Summarizing, we have that $b \in \A_{n+1}^{\mathbb Z}$, so that $c \in \A_{n}^{\mathbb Z}\cap\gra0{0}{S_n}{T_n} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap\tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{n+1}^{\mathbb Z}) = \Sigma_n$, or equivalently that $c \in \Phi^{-1}(\A_{n+1}^{\mathbb Z})$.
\end{proof}
\subsection{Satisfying the inequalities}
Let us now show that the inequalities of Lemma~\ref{lem:nestsimul} can be satisfied:
\begin{remark}\label{rem:inequhiera}
We can find $\vec{k} \in (\mathbb N^{13})^{\mathbb N}$ and $\seq{S},\seq{T},\seq{U} \in {\mathbb N}_1^{\mathbb N}$ such that the inequalities of Lemma~\ref{lem:nestsimul} are satisfied. In addition, $\prod_{i \in \mathbb N} S_i/T_i$ can be made either equal to $0$ or different from $0$, depending on the choice.
\end{remark}
We have to deal with two problems, which were not present in the previous cases: First, there is an infinite set of inequalities, since there is also an infinite sequence of RPCA, and they must all be satisfied simultaneously. Second, the size of the program and the complexity of the permutations depend on the choice of the parameters $\seq S$ and $\seq T$.
\begin{proof}
We have to satisfy the following inequalities, for all $n \in \mathbb N$:
\[\both{
U_n\ge\max\{t_p({\haine5^{\seq{\vec{k}}_{n+1}}}),t_{p^{-1}}({\haine5^{\seq{\vec{k}}_{n+1}}})\}\\
S_n\ge\max\{2{U}_n,\norm{\Chi{\haine5^{\vec{k}_{n+1}}}}\}\\
T_n\ge 4{U}_n+S_n\\
k_{n,\texttt{Prog}}\geq\norm p\\
k_{n,\revprog}\geq\norm{p^{-1}}\\
k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_{n,\texttt{Addr}},k_{n,\texttt{Addr}_{+1}}\ge\norm{S_n}\\
k_{n,\texttt{Clock}},k_{n,\texttt{Clock}_{+1}}\ge\norm{T_n}\\
k_{n,\texttt{Tape}},k_{n,\texttt{NTape}},k_{n,\texttt{Tape}_{-1}},k_{n,\texttt{Tape}_{+1}}\ge1\\
k_{n,\texttt{Level}}=\norm{n}
~.}\]
For all $n \in \mathbb N$ and $\seq{S},\seq{T}$ and $\seq{U}$, let us choose $\vec{k}_n\defeq\vec{k}_{n,\seq{S},\seq{T},\seq{U}}$ such that the last four inequalities are satisfied as equalities. Then, we can see that
\begin{equation*}
\norm{\Chi{\haine5^{\vec{k}_n}}}\le P_1(\log{S_n},\log{T_n},\log{n},k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}},k_{n,\revprog},k_{n,\texttt{Prog}}),
\end{equation*}
for some polynomial $P_1$.
We claim that $\length{p} \le c(\length{p_{\vec{k}}}+\length{p_{\seq{S}}}+\length{p_{\seq{T}}}+\length{p_{\seq{U}}})$, for some constant $c$. (The same holds for $p^{-1}$ and we can assume that the constant $c$ is the same.) This is because, as we have already noticed, the program of $p$ uses a fixed number of polynomial operations and a bounded number of calls to the parameters $\length{p_{\vec{k}}}$, $\length{p_{\seq{S}}}$, $\length{p_{\seq{T}}}$, $\length{p_{\seq{U}}}$.
For the same reason, we have that
\begin{multline*}
\max\{t_p({\haine5^{\vec{k}_n}}),t_{p^{-1}}({\haine5^{\vec{k}_n}})\} \le P_2(\log{S_n},\log{T_n},\log{n},k_{n,\texttt{Head}_{-1}},\\
k_{n,\texttt{Head}_{+1}},k_{n,\revprog},k_{n,\texttt{Prog}},t_{\vec{k}}(n),t_{\seq S}(n),t_{\seq T}(n),t_{\seq U}(n)),
\end{multline*}
for some \emph{fixed} polynomial $P_2$ that does not depend on the parameter sequences.
Therefore, it is enough to find sequences $\vec{k},\seq{S},\seq{T},\seq{U}$ that satisfy the following inequalities, for all $n \in \mathbb N$:
\[\bothrl{
k_{n,\texttt{Prog}}, k_{n,\revprog}\geq & c(\length{p_{\vec{k}}}+\length{p_{\seq{S}}}+\length{p_{\seq{T}}}+
\length{p_{\seq{U}}})\\
k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}}\geq & \length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
U_n\ge & P_2(\log{S_{n+1}},\log{T_{n+1}},\log(n+1),\\
&k_{n+1,\texttt{Head}_{-1}}, k_{n+1,\texttt{Head}_{+1}},k_{n+1,\revprog},k_{n+1,\texttt{Prog}},\\ &t_{\vec{k}}(n+1),t_{\seq S}(n+1),t_{\seq T}(n+1),t_{\seq U}(n+1))\\
S_n \ge & \max\{2U_n,P_1(\log{S_{n+1}},\log{T_{n+1}},\log(n+1),\\
& \length{p_{\vec{k}}},\length{p_{\seq{S}}},\length{p_{\seq{T}}},\length{p_{\seq{U}}})\}\\
T_n \ge & 4U_n+S_n
~.}\]
Recall that in the above inequalities, $Q_p$ and $Q_{p^{-1}}$ depend on the choice of parameter sequences.
We will show two ways to do this. The first one does not give an extremely-expansive SFT, because $\prod_{i<n}S_i/T_i$ does not converge to $0$, while the second one does.
\begin{enumerate}
\item For all sequences $\seq S$ and $\seq U$, let us choose $T_n \defeq T_{n,\seq S,\seq U}=S_n+4U_n$. Also, for all $n_0,r$ and $Q \geq 2$, let us choose $U_{n,n_0,r}\defeq U_n=(n+n_0)^r$, $S_{n,n_0}\defeq S_n=Q^{n+n_0}$, $k_{n,\texttt{Prog}}=k_{n,\revprog}=n_0Qr$ and $k_{n,\texttt{Head}_{-1}}=k_{n,\texttt{Head}_{+1}}=n_0$ for all $n \in \mathbb N$.
Then, the last inequality is satisfied by definition. In addition, for all $n_0,r,Q$, we have that $\length{p_{\seq{S}}} \le \norm{c_1n_0Q}$, $\length{p_{\seq{U}}} \le \norm{c_2rn_0}$ and $\length{p_{\seq{T}}}$, $ \length{p_{\vec{k}}} \le \norm{c_3rn_0Q}$, for some \emph{constants} $c_1,c_2,c_3$.
This is true because the sequence $(n+n_0)^r$ is uniformly (polynomially) computable in $n,n_0,r$, which means that there exists an algorithm that takes as input $(n,n_0,r)$ and outputs $(n+n_0)^r$, for \emph{all} values of $n,n_0$ and $r$. If we use the program for this algorithm together with a description of $n_0$ and $r$, then we obtain a program of length bounded by $\norm{c_2rn_0}$ for the sequence $((n+n_0)^r)_{n \in \mathbb N}$. A similar argument holds for the sequence $Q^{n+n_0}$.
Since this algorithm works for \emph{all} choices of $n_0,Q,r$, it means that $Q_p$ and $Q_{p^{-1}}$ are actually \emph{fixed}.
In addition, all of the algorithms are polynomially computable, which means that
\begin{equation*}
t_{\seq S}(n), t_{\seq T}(n), t_{\vec{k}}(n) \le P_3(\log{Q^{n+n_0}}), t_{\seq U}(n) \le P_4(\log(n+n_0)^r),
\end{equation*}
for some \emph{fixed} polynomials $P_3,P_4$.
Therefore, substituting these in the inequalities above and doing some regrouping of the terms in parentheses (that is omitted), the inequalities that need to be satisfied are written as follows:
\[\bothrl{
n_0Qr \geq & c'\log(n_0Qr)\\
n_0 \geq & c'\\
(n+n_0)^r\ge & P_5(\log{Q^{n+n_0+1}},\log(n+n_0+1)^r,\log(n+1))\\
Q^{n+n_0}\ge & \max\{2(n+n_0+1)^r, P_6(\log{Q^{n+n_0+1}},\log(n+1))\}
~,}\]
for some polynomials $P_5,P_6$ and constant $c'$ that do not depend on $r,n_0$ or $Q$.
Since $c'$ is fixed, the first two inequalities are true for all but a finite number of triples $(n_0,Q,r)$. Without loss of generality, we assume that they are always true.
We can choose $n_Q$ and $r$ such that the third inequality is true for all $n \in \mathbb N$ and all $n_0 \geq n_Q$, because the right-hand side of the inequality is bounded by a fixed polynomial of $(n+n_0)$ and $r$, while the left-hand side grows like $(n+n_0)^r$. With $r$ fixed, we can also find $n'_Q$ such that the fourth inequality is satisfied for all $n \in \mathbb N$ and all $n_0 \geq n'_Q$, because the left-hand side grows exponentially in $(n+n_0)$ while the right-hand side grows only polynomially (since $r$ is fixed). By choosing $n_0=\max\{n_Q,n'_Q\}$, we satisfy both inequalities for all $n$ simultaneously.
Note that $\prod_{i \in \mathbb N}S_i/T_i = \prod_{i \in \mathbb N}\left(1+4(i+n_0)^r/Q^{i+n_0}\right)^{-1} \neq 0$, since $\sum_{i \in \mathbb N}(i+n_0)^r/Q^{i+n_0}$ converges. Therefore, if we choose the sequences like this, we do not obtain a unique direction of non-expansiveness, but rather a cone of non-expansive directions.
\item For all $n_0 \in \mathbb N$ and $Q\geq 2$, let us choose $S_{n,n_0}\defeq S_n=Q^{n+n_0}$, $T_{n,n_0}\defeq T_n=2S_n$ and $U_{n,n_0}\defeq U_n=\frac{S_n}{2Q}$, $k_{n,\texttt{Prog}}=k_{n,\revprog}=n_0Q$ and $k_{n,\texttt{Head}_{-1}}=k_{n,\texttt{Head}_{+1}}=n_0$ for all $n \in \mathbb N$. We can use a similar argumentation as in previous case to show that it is enough to satisfy the following inequalities:
\[\both{
n_0Q \geq c\norm{Qn_0}\\
\frac{Q^{n+n_0}}{4}\ge P_3(\log{Q^{n+n_0+1}},\log(n+1))\\
Q^{n+n_0}\ge\max\{\frac{Q^{n+n_0+1}}{2Q},P_4(\log{Q^{n+n_0+1}},\log(n+1))\}
~,}\]
for some polynomials $P_3,P_4$ and constant $c$ that do not depend on $n_0$ and $Q$.
Obviously, for all $Q$ these inequalities are satisfied when $n_0$ is sufficiently large.
In this case, $\prod_{i \in \mathbb N}S_i/T_i= \prod_{i \in \mathbb N}S_i/2S_i = 0$, therefore the corresponding SFT is extremely expansive.
\end{enumerate}
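The description-length argument used in case (1) above can be illustrated with a toy sketch (purely for intuition, not part of the construction; all names are ours): a single fixed program text plus binary encodings of the two parameters describes the whole sequence $((n+n_0)^r)_{n \in \mathbb N}$, so the description length is a constant plus $O(\log n_0 + \log r)$, matching the bound $\norm{c_2rn_0}$.

```python
def description(n0, r):
    """Toy Kolmogorov-style description of the sequence ((n + n0)**r)_n:
    one fixed program text plus binary encodings of the two parameters.
    Its length is a constant plus O(log n0 + log r)."""
    fixed_program = "lambda n, n0, r: (n + n0) ** r"  # the same for ALL n0, r
    return fixed_program + "|" + bin(n0) + "|" + bin(r)

def run(desc, n):
    """Recover the n-th value of the sequence from the description alone."""
    prog_text, n0_bits, r_bits = desc.split("|")
    f = eval(prog_text)  # the fixed program part
    return f(n, int(n0_bits, 2), int(r_bits, 2))
```

Doubling the bit-lengths of $n_0$ and $r$ only adds a few characters to the description, which is exactly the uniform-computability point made in the proof.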
\end{proof}
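The two parameter choices of the previous proof can also be sanity-checked numerically. The following sketch (illustrative only; $Q,r,n_0$ are sample values, not the ones forced by the actual inequalities) computes the partial products $\prod_{i<n}S_i/T_i$ in both cases.

```python
Q, r, n0 = 2, 2, 10  # sample parameters; the proof's actual n0 may need to be larger

def partial_products(T_of, n_max):
    """Partial products prod_{i<n} S_i/T_i with S_i = Q**(i+n0)."""
    prod, out = 1.0, []
    for i in range(n_max):
        S = float(Q ** (i + n0))
        prod *= S / T_of(i, S)
        out.append(prod)
    return out

# Case 1: T_n = S_n + 4*U_n with U_n = (n+n0)**r -> the partial products
# stabilize at a positive limit (only a cone of non-expansive directions).
case1 = partial_products(lambda i, S: S + 4 * (i + n0) ** r, 60)

# Case 2: T_n = 2*S_n -> the partial product equals 2**(-n) and tends to 0
# (extremely expansive SFT).
case2 = partial_products(lambda i, S: 2 * S, 60)
```

With these sample values, `case1` converges to a positive constant while `case2` decays geometrically to $0$, in line with the dichotomy stated in the proof.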
In both cases, we have a lot of freedom in choosing the sequences. In the previous proof, we described just two of the possible choices; they are enough for the results we want to obtain and they illustrate the basic ideas of the proof needed in any possible case.
\section{Universality}
Let $\C[\texttt{Other}]=[\texttt{OTape}_{-1},\texttt{OTape},\texttt{OTape}_{+1}]$ and let us fix the field list $\C[\intru]=\C[\hsim]\sqcup\C[\texttt{Other}]$ and the corresponding direction vector $\vec{\nu}_{\intru}$.
For any $n$, consider an RPCA $G_n$ with permutation $\alpha_n \colon (\haine5^{l_n})^3 \to (\haine5^{l_n})^3$ over $\C[\texttt{Other}]$. This is not a strict restriction in itself: all RPCA can be represented in this way, up to a simple alphabet renaming and use of Remark~\ref{sharpization}.
If the sequence of permutations $(\alpha_n)_{n \in \mathbb N}$ is polynomially computable, we can build a PPA that simulates $G_n$, for all $n \in \mathbb N$.
\begin{algo}{intru}{\intru}{M,\vec{\nu},\vec{k},\seq S,\seq T,\seq U, \seq\alpha}
\STATE{$\alpha_{\bina{\texttt{Level}}}[\C[\texttt{Other}]]$} \COMMENT{$G_n$ on the $\C[\texttt{Other}] $ fields.}\label{al:intru:gn}
\IF{$\bina{\pi_\texttt{Clock}}=0$}
\STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:univ:empty}
\STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1}]$} \label{al:intru:alph}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\texttt{Prog}
\pi_{\texttt{Prog}}]$}\label{al:intru:prog}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\revprog
\pi_\revprog]$} \label{al:intru:revprog}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\texttt{Level}
\anib{\bina{\pi_\texttt{Level}}+1}]$} \label{al:intru:lev}
\ENDIF
\STATE{$\unive[M$, $\vec{\nu}$, $\vec{k}_{\bina{\pi_\texttt{Level}}}$, $\bina{\pi_\texttt{Addr}}$, $\bina{\pi_\texttt{Clock}}$, $\seq{S}_{\bina{\pi_\texttt{Level}}}$, $\seq{T}_{\bina{\pi_\texttt{Level}}}$, $\seq{U}_{\bina{\pi_\texttt{Level}}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$} \COMMENT{Simulate}\label{al:intru:unive}
\STATE{$\coordi[\seq{S}_{\bina{\pi_\texttt{Level}}},\seq{T}_{\bina{\pi_\texttt{Level}}};\C[\coordi]]$}
\end{algo}
The only difference of this rule with $\hsim$ is that it has 3 additional fields (which implies that $\vec{k}$ will be chosen in $(\mathbb N^{16})^{\mathbb N}$) and that we apply $\alpha_{\bina{\texttt{Level}}}$ onto the field list $\C[\texttt{Other}]$ \emph{independently} from what we do on $\C[\hsim]$.
\begin{lemma}\label{lem:univppa}
Let $\seq U,\seq S,\seq T$ be polynomially checkable sequences of integers and $\alpha$ a polynomially computable sequence of permutations.
Let us fix the field list $\C[\intru]\defeq [0,\ldots,15]$, the corresponding \emph{fixed} direction vector $\vec{\nu}_{\intru}$ and a polynomially checkable sequence of $16$-uples $\seq{\vec{k}}\in(\mathbb N^{16})^\mathbb N$. Let $F$ be the IPPA with directions $\vec{\nu}_{\intru}$ and permutation $\intru[16,\vec{\nu}_{\intru},\vec{k},\seq S,\seq T, \seq U;\C[\intru]]$ and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively.
For all $n \in \mathbb N$, let $F_n$ be the restriction of $F$ to the subalphabet
\begin{equation*}
\A_{n}\defeq \haine5^{\vec{k}_n}\cap\emp[n]{\texttt{Level}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog},
\end{equation*}
and assume that the following inequalities hold:
\[\both{
\I(\vec{k}_n,\vec{k}_{n+1},S_n,T_n,U_n,p)\\
k_{n,\texttt{Prog}}\geq\length p\\
k_{n,\revprog}\geq\length{p^{-1}}\\
k_{n,\texttt{Level}} \geq \norm{n}\\
k_{n,\texttt{OTape}}=k_{n,\texttt{OTape}_{-1}}=k_{n,\texttt{OTape}_{+1}}\geq l_n
~.}\]
If $\Omega_{G_n} \neq \emptyset$, then $F_n$ completely $(S_n,T_n,0)$-simulates $F_{n+1}$ with decoding function $\Phi_n=\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma_n}}$, where
$\Sigma_n \defeq \A_{n}^{\mathbb Z} \cap \gra{0}{0}{S}{T}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \Phi^{-1}(\A_{n+1}^{\mathbb Z}).$
The simulation is exact if and only if $\Omega_{G_n}$ is a singleton. In addition, if $c \in F_n^{-1}(\A_{n}^{\mathbb Z})$, then $G_n \pi_{\C[\texttt{Other}]}(c)=\pi_{\C[\texttt{Other}]}F_n(c)$.
\end{lemma}
As mentioned before, the proof is very similar to the proof of Lemma~\ref{lem:nestsimul}. Therefore, we are going to omit most of the details and only stress those points where there is a difference.
\begin{proof}
Let us fix $n \in \mathbb N$.
For the first claim, we have to show that $F_n$ $(S_n,T_n,0)$-simulates $F_{n+1}$ with decoding function $\Phi_n$ (simulation), that $\Phi_n$ is an injection if and only if $\Omega_{G_n}$ is a singleton and that $\Omega_{F_{n}} \subseteq \mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi_n)$ (completeness).
For the simulation part, let $b \in \A_{n+1}^{\mathbb Z}$. We can find $c \in \A_{n}^{\mathbb Z}$ that simulates $b$: we choose $c \in \Phi^{-1}(b) \subseteq \A_{n}^{\mathbb Z}\cap\gra{0}{0}{S}{T}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}$ such that $\pi_{\C[\texttt{Other}]}(c) \in \Omega_{G_n}$ (this is possible by the assumption that $\Omega_{G_n} \neq \emptyset$). Then, it is easy to see that $c$ simulates $b$, because $\C[\texttt{Other}]$ is only \xpr{touched} by $\alpha_{\bina{\texttt{Level}}}\defeq \alpha_n$, $p$ is the program of $\intru[16,\vec{\nu}_{\intru},\vec{k},\seq S,\seq T, \seq U;\C[\intru]]$ and $G_n^{T}(\pi_{\C[\texttt{Other}]}(c))$ exists.
For the exactness part, as usual $\pi_{\C[\hsim]}(c)$ is uniquely determined by $b$. However, $\pi_{\C[\texttt{Other}]}(c)$ can be chosen independently from $b$ to be any element of $\Omega_{G_n}$, so that the simulation is exact if and only if $\Omega_{G_n}$ is a singleton.
Finally, for the completeness part, an argument almost identical to the argument in the proof of Lemma~\ref{lem:nestsimul} shows that if $c \in \gra{0}{0}{S}{T} \cap F_{n}^{-T}(\A_{n}^{\mathbb Z})$, then $c \in \Phi^{-1}(\A_{n+1}^{\mathbb Z})$. As we know, this is enough to show that the simulation is complete.
The second claim, that if $c \in F_n^{-1}(\A_{n}^{\mathbb Z})$, then $G_n \pi_{\C[\texttt{Other}]}(c)=\pi_{\C[\texttt{Other}]}F_n(c)$ is straightforward from the definition of $F_n$, since the only rule that \xpr{touches} the fields $\C[\texttt{Other}]$ is $G_n$.
\end{proof}
\begin{remark}\label{rem:univenonempt}
\begin{enumerate}
\item $\Omega_{F_0} \neq \emptyset$ if and only if $\Omega_{G_n} \neq \emptyset$, for all $n \in \mathbb N$.
\item If $\Omega_{F_0} \neq \emptyset$, then $F_0$ completely simulates $G_n$ for all $n \in \mathbb N$.
\end{enumerate}
\end{remark}
\begin{proof}
\begin{enumerate}
\item If $\Omega_{G_n} = \emptyset$ for some $n \in \mathbb N$, then $\Omega_{F_n} = \emptyset$, so that since $F_0$ simulates $F_n$ (by transitivity of simulation), we obtain that $\Omega_{F_0}=\emptyset$ by Lemma~\ref{lem:aperiodichierarchy}.
If, on the other hand, $\Omega_{G_n} \neq \emptyset$ for all $n \in \mathbb N$, then Lemma~\ref{l:nonvide} gives that $\Omega_{F_0} \neq \emptyset$.
\item If $\Omega_{F_0} \neq \emptyset$, then the second claim of Lemma~\ref{lem:univppa} implies that $F_n$ factors onto $G_n$. Since $F_0$ simulates $F_n$, for all $n \in \mathbb N$, we obtain that $F_0$ simulates $G_n$, for all $n \in \mathbb N$.
\end{enumerate}
\end{proof}
\begin{remark}\label{rem:univextrexp}
Even if $\prod_{i \in \mathbb N} S_i/T_i =0$, $F_0$ is not necessarily extremely expansive, since we might have non-expansive directions coming from the $G_n$ part. However,
in the special case where $\Omega_{G_n}$ is a singleton for all $n \in \mathbb N$, all the simulations are exact and it is straightforward to see that $\NE(F_0) = \{0\}$, because Proposition~\ref{prop:hochman} applies.
\end{remark}
\subsection{Satisfying the inequalities}
\begin{remark}\label{rem:inequunivppa}
We can find $\vec{k} \in (\mathbb N^{16})^{\mathbb N}$ and $\seq{S},\seq{T},\seq{U} \in {\mathbb N}_1^{\mathbb N}$ such that the inequalities of Lemma~\ref{lem:univppa} are satisfied and $\prod_{i \in \mathbb N}S_i/T_i=0$.
\end{remark}
We only state the case $\prod_{i \in \mathbb N}S_i/T_i=0$ (even though we could also make it $\neq 0$), because it is what we will need and use in the applications.
\begin{proof}
Let us write explicitly the inequalities that we need to satisfy:
\[\both{
U_n\ge\max\{t_p({\haine5^{\seq{\vec{k}}_{n+1}}}),t_{p^{-1}}({\haine5^{\seq{\vec{k}}_{n+1}}})\}\\
S_n\ge\max\{2{U}_n,\length{\Chi{\haine5^{\vec{k}_{n+1}}}}\}\\
T_n\ge 4{U}_n+S_n\\
k_{n,\texttt{Prog}}\geq\length p\\
k_{n,\revprog}\geq\length{p^{-1}}\\
k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_{n,\texttt{Addr}},k_{n,\texttt{Addr}_{+1}}\ge\norm{S_n}\\
k_{n,\texttt{Clock}},k_{n,\texttt{Clock}_{+1}}\ge\norm{T_n}\\
k_{n,\texttt{Tape}},k_{n,\texttt{NTape}},k_{n,\texttt{Tape}_{-1}},k_{n,\texttt{Tape}_{+1}}\ge1\\
k_{n,\texttt{Level}} = \norm{n}\\
k_{n,\texttt{OTape}}=k_{n,\texttt{OTape}_{-1}}=k_{n,\texttt{OTape}_{+1}}= l_n
~.}\]
For all $\seq{S},\seq{T}$ and $\seq{U}$, let us choose $\vec{k}_n\defeq\vec{k}_{n,\seq{S},\seq{T},\seq{U}}$ such that the last five inequalities are satisfied as equalities. Then, we have the crucial inequality
\begin{equation*}
\norm{\Chi{\haine5^{\vec{k}_n}}}\le P_1(\log{S_n},\log{T_n},\log{n},l_n,k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}},k_{n,\revprog},k_{n,\texttt{Prog}}),
\end{equation*}
where $l_n$ is the size of the alphabet of $\alpha_n$.
The other crucial inequality of the proof of Lemma~\ref{rem:inequhiera} also holds without any essential changes:
\begin{multline*}
\max\{t_p({\haine5^{\vec{k}_{n+1}}}),t_{p^{-1}}({\haine5^{\vec{k}_{n+1}}})\} \le P_2(\log{S_n},\log{T_n},\log{n},l_n,k_{n,\texttt{Head}_{-1}},\\
k_{n,\texttt{Head}_{+1}},k_{n,\revprog},k_{n,\texttt{Prog}},t_{\vec{k}}(n),t_{\seq S}(n),t_{\seq T}(n),t_{\seq U}(n))
\end{multline*}
for some polynomial $P_2$ that does not depend on the parameters.
This holds because the permutation applied consists in a number of polynomial operations (recall that $\alpha$ is polynomially computable and fixed for this specific construction) and a bounded number of calls to $\seq S$, $\seq T$,$\seq U$ and $\vec{k}$.
Also, since $\alpha$ is polynomially computable, $l_n$ (which is part of the output of $\alpha_n$) is also bounded by a polynomial of $n$ so that we can \xpr{remove} $l_n$ from the right-hand side of the previous inequalities and \xpr{incorporate} it in the polynomials $P_1,P_2$.
From this point on, the proof is identical to the proof of Remark~\ref{rem:inequhiera}. (We are free to chose whether $\prod_{i \in \mathbb N}S_i/T_i$ is equal to $0$ or not.)
\end{proof}
\subsection{Domino problem}
\begin{theorem}
It is undecidable whether an extremely expansive SFT is empty.
\end{theorem}
\begin{proof}
Let $\mathcal{M}$ be an arbitrary TM with program $p'$. For all $n \in \mathbb N$, we define $\alpha_n$ as follows:
Let $l_n =1$, and let $\alpha_n(0,0,0)=(0,0,0)$ if $\halt{p'}{n}{0^n}$ is true (\textit{i.e.}, if $\mathcal{M}$ does not halt within $n$ steps); $\alpha_n$ is undefined in all other cases.
$(\alpha_n)_{n \in \mathbb N}$ is a polynomially computable sequence of permutations. $\Omega_{G_n}$ is a singleton, equal to $\{\dinf 0 \}$, if and only if $\mathcal{M}$ does not halt within $n$ steps. Otherwise $\Omega_{G_n}$ is empty.
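This reduction can be sketched operationally as follows (a toy model with our own names, where the TM is represented only by its halting time, with `None` meaning it never halts):

```python
def alpha_sequence(halting_time):
    """Partial permutations of the reduction: alpha_n is the identity on the
    single letter (0,0,0) as long as the machine has not halted within n
    steps, and has empty domain afterwards."""
    def alpha_n(n, letter=(0, 0, 0)):
        machine_halted = halting_time is not None and halting_time <= n
        if machine_halted or letter != (0, 0, 0):
            return None  # undefined: Omega_{G_n} is empty
        return letter    # Omega_{G_n} is the singleton {0^infinity}
    return alpha_n

def sft_nonempty(halting_time, levels):
    """The nested SFT is nonempty iff every level's permutation is defined
    (here checked only up to a finite number of levels)."""
    alpha_n = alpha_sequence(halting_time)
    return all(alpha_n(n) is not None for n in range(levels))
```

In the toy model, `sft_nonempty` mirrors the equivalence of the proof: the hierarchy survives all levels exactly when the machine never halts, which is why emptiness is undecidable.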
Let us construct the sequence of RPCA $(F_n)_{n \in \mathbb N}$ as in Lemma~\ref{lem:univppa}, corresponding to the $\alpha$ and $\vec{k}, \seq S, \seq T, \seq U$ that satisfy the inequalities and for which $\prod_{i \in \mathbb N} S_i/T_i=0$. Then, Remark~\ref{rem:univenonempt} implies that $\Omega_{F_0}$ (equivalently, $\orb{F_0}$) is non-empty if and only if $\Omega_{G_n}$ is non-empty for all $n$, which is equivalent to $\mathcal{M}$ not halting over input $0^{\infty}$.
In addition, Remark~\ref{rem:univextrexp} implies that if $\Omega_{F_0}$ is non-empty, then $\NE(F_0)=\{0\}$.
Therefore, for every TM $\mathcal{M}$, we have constructed a 2D SFT $\orb{F_0}$ that is non-empty if and only if $\mathcal{M}$ does not halt over input $0^{\infty}$ and if $\orb{F_0} \neq \emptyset$, then $\orb{F_0}$ is extremely expansive. This concludes the proof of the undecidability.
\end{proof}
It follows from the previous proof that we have actually proved the following: let $A$ be the family of forbidden patterns that define empty SFTs, and let $B$ be the family that defines non-empty extremely expansive SFTs (with unique direction of non-expansiveness $\infty$). There does not exist a recursively enumerable set $X$ that contains $B$ and is disjoint from $A$. In other words, if an algorithm correctly recognizes all non-empty extremely-expansive SFTs, then it must also (falsely) recognize some empty SFT.
\subsection{Intrinsic universality}
The second application concerns the universality properties of RPCA.
\begin{theorem}\label{c:intru}
For any computably enumerable set of non-empty PPA, there exists a PPA that completely simulates all of them.
\end{theorem}
\begin{proof}
First of all, we can assume that all the PPA are over the field list $\C[\texttt{Other}]$ with the corresponding directions. This is true because we can encode, in polynomial time, all the left-moving fields into a unique left-moving field, and similarly for the other types of fields. Then, saying that a set of PPA is computably enumerable is equivalent to saying that the corresponding set of permutations that define these PPA is computably enumerable.
In addition, for every computably enumerable set of PPA $X$ (over the field list $\C[\texttt{Other}]$), there exists a \emph{polynomially computable sequence} $(G_n)_{n\in\mathbb N}$ of PPA that contains exactly the elements of $X$. Equivalently, there exists a \emph{polynomially computable sequence} of permutations $(\alpha_n)_{n \in \mathbb N}$ that contains exactly the permutations of the PPA in $X$.
(Let $g$ be a fixed element of $X$. The polynomial algorithm of $(\alpha_n)_{n \in \mathbb N}$ takes as input $n$, runs the algorithm that enumerates $X$ for $n$ steps and sets $\alpha_n$ equal to the last permutation that was output. If no permutation has yet been output, then $\alpha_n$ is set equal to $g$.)
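The parenthetical construction can be sketched as follows (a toy model with our own names, where the c.e. set is represented by `enum_up_to(n)`, the finite list of elements its enumerator has output after $n$ steps):

```python
def total_sequence(enum_up_to, g):
    """Turn an enumeration of X into a total sequence (alpha_n) containing
    exactly the elements of X: run the enumerator for n steps and keep the
    last element produced, defaulting to a fixed element g of X."""
    def alpha(n):
        produced = enum_up_to(n)
        return produced[-1] if produced else g
    return alpha
```

Every element of $X$ appears in the sequence (it is the last output at the step where it is enumerated), and nothing outside $X$ ever appears, which is all the universality argument needs.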
If we use this sequence $\alpha$ and sequences $\vec{k}, \seq S, \seq T, \seq U$ that satisfy the inequalities to define the sequence $(F_n)_{n \in \mathbb N}$ as in Lemma~\ref{lem:univppa}, then, since by assumption $\Omega_{G_n} \neq \emptyset$ for all $n \in \mathbb N$, Remark~\ref{rem:univenonempt} implies that $F_0$ completely simulates $G_n$, for all $n \in \mathbb N$.
\end{proof}
Theorem~\ref{c:intru} applies, up to a conjugacy, to computably enumerable sets of nonempty RPCA.
In some sense, it gives a deterministic version of the result in \cite{lafitteweiss}.
The same result is not true for the non-computably-enumerable set of all nonempty RPCA, thanks to an argument by Hochman \cite{hochmanuniv} and Ballier \cite{balliermedvedev}.
Nevertheless, the corollary applies to the family of all reversible (complete) cellular automata, since the family of RCA is computably enumerable.
Unfortunately, it gives an RPCA (partial CA) that simulates all RCA (full CA) instead of an RCA. The existence of an RCA that simulates all RCA seems to be a much more difficult question and is still open (see for instance \cite{guillaume1}).
\section{Synchronizing computation}\label{s:comput}
We now introduce one more trick in our construction: the encoding of an infinite sequence inside an infinite nested simulation by encoding increasing finite prefixes of the infinite sequence inside the alphabets of the RPCA of the nested simulation.
Let $\C[\syncomp]\defeq\C[\unive] \sqcup [\texttt{MHist},\texttt{MHist}_{+1},\texttt{Prog},\revprog]$. In this simulation, we do not use a field $\texttt{Level}$ in order to store the parameter $n$. Instead, it will be obtained as the length of field $\texttt{MHist}$. $p'$ is the program of a TM. It is used to reject some nested simulation sequences, depending on the infinite sequence that is stored (through its increasing finite prefixes) in the alphabets of the RPCA.
\begin{algo}{syncomp}{\syncomp}{M,\vec{\nu},\vec{k},\seq S,\seq T,\seq U,p'}
\STATE{$\chekk[\pi_\texttt{MHist}=\pi_{\texttt{MHist}_{+1}}]$}\label{al:syncomp:mhistconsistency}
\IF{$\bina{\pi_\texttt{Clock}}=0$}
\STATE{$\chekk[\halt{p'}{\length{\texttt{MHist}}}{\texttt{MHist}}]$}\label{al:syncomp:medvedev}
\STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:syncomp:empty}
\STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1}]$} \COMMENT{Check that the lengths of the simulated letter are correct} \label{al:syncomp:alph}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\texttt{Prog}
\pi_{\texttt{Prog}}]$} \COMMENT{$\texttt{Prog}$ of the simulated letter is the same}\label{al:syncomp:prog}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\revprog
\pi_\revprog]$} \COMMENT{$\revprog$ is also the same}\label{al:syncomp:revprog}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\texttt{MHist},\pi_{\texttt{MHist}}]$} \COMMENT{$\texttt{MHist}$ of the simulating letters is a prefix of $\texttt{MHist}$ of the simulated}\label{al:syncomp:infinitesequence}
\ENDIF
\STATE{$\unive[M$, $\vec{\nu}$, $\vec{k}_{\length{\texttt{MHist}}}$, $\bina{\pi_\texttt{Addr}}$, $\bina{\pi_\texttt{Clock}}$, $\seq{S}_{\length{\texttt{MHist}}}$, $\seq{T}_{\length{\texttt{MHist}}}$, $\seq{U}_{\length{\texttt{MHist}}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$} \label{al:syncomp:unive}
\STATE{$\coordi[\seq{S}_{\length{\texttt{MHist}}},\seq{T}_{\length{\texttt{MHist}}}]$}
\end{algo}
\begin{lemma}\label{l:mhist}
Let $\seq S,\seq T,\seq U$ be polynomially checkable sequences of integers and $p'$ be the program of a TM. Let us fix the field list $\C[\syncomp]\defeq [0,\ldots,13]$, the corresponding \emph{fixed} direction vector $\vec{\nu}_{\syncomp}$ and a polynomially checkable sequence of $14$-uples $\seq{\vec{k}}\in(\mathbb N^{14})^{\mathbb N}$. Let $F$ be the IPPA with directions $\vec{\nu}_{\syncomp}$ and permutation
\begin{equation*}
\syncomp[14,\vec{\nu}_{\syncomp},\vec{k},\seq S,\seq T, \seq U,p']
\end{equation*}
and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively.
For all $w \in \haine2^{*}$, let
$S_w\defeq S_{\length{w}}$, $T_w \defeq T_{\length{w}}$, $U_w \defeq U_{\length{w}}$
and $F_w$ be the restriction of $F$ to the subalphabet
\begin{equation*}
\A_{w}\defeq \haine5^{\vec{k}_{\length{w}}}\cap\emp[w]{\texttt{MHist},\texttt{MHist}_{+1}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog},
\end{equation*}
and assume that the following inequalities hold for all $n \in \mathbb N$:
\[\both{
\I(\vec{k}_n,\vec{k}_{n+1},S_n,T_n,U_n,p,p^{-1})\\
k_{n,\texttt{Prog}}\geq\length p\\
k_{n,\revprog}\geq\length{p^{-1}}\\
k_{n,\texttt{MHist}}=k_{n,\texttt{MHist}_{+1}}\geq n
~.}\]
Then, $F_w\simu[S_w,T_w,0,\Phi_w]\bigsqcup_{\begin{subarray}{c}a\in \haine2\end{subarray}}{F_{wa}}$ completely and exactly, where
\begin{equation*}
\Sigma_w \defeq \A_{w}^{\mathbb Z} \cap \gra{0}{0}{S_w}{T_w} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \tilde{\Phi}^{-1}(\bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa}^{\mathbb Z}), \text{ and } \Phi_w\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma_w}}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $w \in \haine2^*$ and $\length{w}=n$. By definition, $S_w\defeq S_n$, $T_w\defeq T_n$ and $U_w \defeq U_n$. If $p'$ halts on input $w$ within $n$ steps, then the check of line~\ref{al:syncomp:medvedev} will reject every configuration, which means that $\mathcal D}%\DeclareMathOperator*{\dom}{dom(F_{w}) = \emptyset$. But, in this case, $wa$ will also be rejected by $p'$ within $n+1$ steps, for all $a \in \haine2$, so that $\mathcal D}%\DeclareMathOperator*{\dom}{dom(\bigsqcup_{\begin{subarray}{c}a\in \haine2\end{subarray}}{F_{wa}}) = \emptyset$, too. By definition, the empty PCA strongly, completely simulates itself for all possible choices of the simulating parameters, so that the claim is true in this case.
Suppose, then, that $p'$ does not halt on input $w$ within $n$ steps. Then, the check of line~\ref{al:syncomp:medvedev} is always true, so that we can ignore it in the rest of the proof. As in the previous proofs, we have to show three things: that $F_{w}$ $(S_n,T_n,0)$-simulates $\bigsqcup_{\begin{subarray}{c}a\in \haine2\end{subarray}}{F_{wa}}$ with decoding function $\Phi_{w}$ (simulation), that $\Phi_{w}$ is injective (exactness) and that $\Omega_{F_{w}} \subseteq \mathcal D}%\DeclareMathOperator*{\dom}{dom(\Phi_w)$ (completeness).
For the simulation, it is easy to see that if $b \in \A_{wa}^{\mathbb Z}$, where $a \in \haine2$ and $c \in \Phi_w^{-1}(b)$, then $c$ is not rejected by the checks of lines~\ref{al:syncomp:mhistconsistency},\ref{al:syncomp:empty},\ref{al:syncomp:alph},\ref{al:syncomp:prog}, \ref{al:syncomp:revprog} and \ref{al:syncomp:infinitesequence}. Then, simulation follows easily from the choice of the program $p$ and Lemma~\ref{universal}.
Exactness is also direct: the values of all the fields of $c$ are uniquely determined by $b$ and the form of $\Phi_w$.
Completeness also follows the general pattern of the previous proofs, but there is a small difference: we can show that if $c \in \gra{0}{0}{S_n}{T_n} \cap F_{w}^{-2T}(\A_{w}^{\mathbb Z})$ (the difference is that we have $2T$ instead of $T$ in the exponent), then $c \in \Phi_{w}^{-1}(\bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa}^{\mathbb Z})$. This is enough to ensure completeness of the simulation.
Indeed, if
\begin{equation*}
c \in \gra{0}{0}{S_n}{T_n} \cap F_{w}^{-T}(\A_{w}^{\mathbb Z}),
\end{equation*}
then lines~\ref{al:syncomp:empty},\ref{al:syncomp:prog},\ref{al:syncomp:revprog} and \ref{al:syncomp:infinitesequence} ensure that
\begin{equation*}
c \in \A_{w}^{\mathbb Z} \cap \gra{0}{0}{S_n}{T_n}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \Phi^{-1}((\bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa})^{\mathbb Z}).
\end{equation*}
Let $b \in (\bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa})^{\mathbb Z}$ be such that $c \in \Phi^{-1}(b)$. The problem is that we still cannot know that $\pi_{\texttt{MHist}}(b)$ is the same in all cells, because line~\ref{al:syncomp:infinitesequence} only checks that at every cell $i$, $\pi_{\texttt{MHist}}(c)=w$ (which we know that it is constant) is a prefix of $\pi_{\texttt{MHist}}(b_i)$. However, we could still have that $\pi_{\texttt{MHist}}(b_i)=w0$ and $\pi_{\texttt{MHist}}(b_j)=w1$, for some $i\neq j$. This is why we need to take $2T$ steps instead of $T$ steps.
Indeed, since $F_w^{2T}(c)$ exists, $F^2(b)$ also exists, and line~\ref{al:syncomp:mhistconsistency} ensures that $\pi_{\texttt{MHist}}(b_i)=\pi_{\texttt{MHist}_{+1}}(b_i)=\pi_{\texttt{MHist}}(b_j)$, for all $i,j \in \mathbb Z$.
The argument for this is similar to the argument used in the proof of Lemma~\ref{koo}.
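This consistency argument can be illustrated with a toy model (entirely our own, finite and cyclic for simplicity): the $\texttt{MHist}_{+1}$ copy shifts by one cell per step, so the check of line~\ref{al:syncomp:mhistconsistency} compares each cell's word with its neighbour's, forcing all words to agree.

```python
def consistent_after_one_step(words):
    """words[i] models the MHist field of cell i on a finite cyclic
    configuration. The MHist_{+1} copy arrives shifted by one cell, so the
    per-cell equality check passes only if all the words are equal."""
    n = len(words)
    shifted = [words[(i - 1) % n] for i in range(n)]  # copy from the neighbour
    return all(words[i] == shifted[i] for i in range(n))
```

In the toy model a configuration passes the check exactly when a single word is repeated everywhere, which is the uniformity that the extra $T$ steps buy in the actual proof.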
Therefore, $b \in \bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa}^{\mathbb Z}$ and this concludes the proof of the Lemma.
\end{proof}
\subsection{Satisfying the inequalities}
\begin{remark}\label{rem:inequcomputa}
We can find $\vec{k} \in (\mathbb N^{14})^{\mathbb N}$ and $\seq{S},\seq{T},\seq{U} \in {\mathbb N}_1^{\mathbb N}$ such that the inequalities of Lemma~\ref{l:mhist} are satisfied and $\prod_{i<n} S_i/T_i =0$.
\end{remark}
\begin{proof}
The proof is almost identical to the proof of Remark~\ref{rem:inequunivppa} and is omitted. We just make a few comments:
First of all, the inequalities depend on $w \in \haine2^{*}$, but in fact, if $\length{w}=\length{w'}$, then we have exactly the same inequalities for $w$ and $w'$, so that actually the inequalities can be translated to a set of inequalities that depend on $n$.
Second, notice that line~\ref{al:syncomp:medvedev} is computable in polynomial time, and since the program $p'$ is fixed in advance, its contribution to $\norm{\A_w}$, $t_p$ and $\length{p}$ is constant and does not depend on the choice of parameters.
Finally, we can choose $k_{w,\texttt{MHist}} \defeq n$, (where $n \defeq \length{w}$) which means that this field only contributes a polynomial of $n$ to the various inequalities, so that it can be \xpr{incorporated} into the polynomials and the problem can be reduced to the cases that have already been dealt with.
\end{proof}
\subsection{Realizing computational degrees}
The statement of Lemma~\ref{l:mhist} falls exactly into the situation described in Lemma~\ref{l:nonvides}. For all $n \in \mathbb N$, let $\B_n=\haine2$. Then, for all $w \in \haine2^n (= \prod_{i < n} \B_i)$, we have defined $S_w,T_w, F_w$ and $\Phi_w$ such that $F_w$ exactly, completely $(S_w,T_w,0)$-simulates $\bigsqcup_{b\in\B_n}F_{wb}$.
The check of line~\ref{al:syncomp:medvedev} forces that if $z \in \prod_{i \in \mathbb N} \B_i$, then $\rocks[\infty]z{\seq\Phi}\ne\emptyset$ if and only if $\halt{p'}{\infty}{z}$ is true, or in other words, $z \in \X_p$. Indeed, we have that $\mathcal D}%\DeclareMathOperator*{\dom}{dom(F_{z_{\co{0}{n}}})=\emptyset$ for some $n$ if and only if $\halt{p'}{\infty}{z}$ is not true, or in other words, if and only if $p'$ halts over $z$ within $n$ steps.
Therefore, Lemma~\ref{l:nonvides} implies that $\Omega_{F_{\motvide}}= \rocks[\infty]Z{\seq\Phi} = \bigsqcup_{z\in \X_p}\rocks[\infty]z{\seq\Phi}$.
\begin{lemma}\label{l:comphomeo}
For any effectively closed subset $Z\subset\haine2^\mathbb N$, there exists an extremely expansive RPCA $F$ and a computable, left-invertible map from $\Omega_{F}$ onto the Cartesian product $Z\times\haine2^\mathbb N$.
\end{lemma}
One could even prove that the computable map is two-to-one, and almost one-to-one for any reasonable (topological or measure-theoretic) notion.
Also, this SFT can be effectively constructed from $Z$.
\begin{proof}
We construct $\syncomp[14,$ $\vec{\nu}_{\syncomp}$, $\vec{k}$, $\seq S$, $\seq T$, $\seq U$, $p']$ for some sequences that satisfy the inequalities and a program $p'$ that recognizes $Z$. In addition, assume that $\prod_{i<n} S_i/T_i =0$.
It follows from Lemma \ref{l:nonvides} that
\begin{equation*}
\Omega_{F_{\motvide}}=\bigsqcup_{z \in \X_p}\rocks[\infty]z{\seq\Phi}=
\bigsqcup_{\begin{subarray}c z\in \X_p\\\seq t\in\prod_{i\in\mathbb N}\co0{T_i}\\\seq s\in\prod_{i\in\mathbb N}\co0{S_i}\end{subarray}}\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{s_{\co0n}}}F_{\motvide}^{\anib[\seq T]{t_{\co0n}}}\Phi_{z_0}^{-1}\cdots\Phi_{z_{\co{0}{n}}}^{-1}(\Omega_{F_n}).
\end{equation*}
Consider the map that associates, to each configuration $x \in \Omega_{F_{\motvide}}$, the unique triple $(z,\seq s,\seq t)$ such that $x\in\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{s_{\co0n}}}F_{\motvide}^{\anib[\seq T]{t_{\co0n}}}\Phi_{z_0}^{-1}\cdots\Phi_{z_{\co{0}{n}}}^{-1}(\Omega_{F_n})$.
This map is computable, since, for all $n\in\mathbb N$, $S_n$ is for instance given by $\bina{\pi_\texttt{Clock}\Phi_0\Phi_1\cdots\Phi_{n-1}}$ and $z_{\co{0}{n}}$ is given by $\pi_{\texttt{MHist}}\Phi_0\Phi_1\cdots\Phi_{n-1}$.
Conversely, from the triple $(z,\seq{s},\seq{t})$, one can construct a configuration in $\Omega_{F_{\motvide}}$, as explained in \cite[Proposition 4.2]{gacs}.
The result follows from the obvious computable homeomorphisms between $\Omega_F$ and $\orb F$, and between $\prod_{i\in\mathbb N}\co0{S_i}\times\co0{T_i}$ and $\haine2^{\mathbb N^2}$.
Finally, $\NE(F_{\motvide}) = \{0\}$, because $\orb{F_{\motvide}}=\bigsqcup_{z \in \X_p} \orb{F_{\rocks[\infty]z{\seq\Phi}}}$ and $\NE(\orb{F_{\rocks[\infty]z{\seq\Phi}}}) = \{0\}$, due to the exact complete simulation and $\prod_{i<n} S_i/T_i =0$.
\end{proof}
The following was proven in \cite{simpson} for general 2D SFT. Here, we can also restrict the set of non-expansive directions.
\begin{theorem}\label{t:comphomeo}
For any effectively closed subset $X$, there exists a Medvedev-equivalent extremely expansive 2D SFT whose Turing degrees are the cones above the Turing degrees of $X$.
\end{theorem}
A fortiori, all Medvedev (and Mu\v cnik) degrees contain an extremely expansive SFT.
\begin{proof}
It is enough to notice that $\Omega_{F}$ and $\orb{F}$ are computably homeomorphic.
\end{proof}
The second component in the computable homeomorphism cannot easily be taken out: it is pointed out in \cite{vanierdegrees} that all aperiodic subshifts admit a cone of Turing degrees (that is, one degree and all degrees above it).
Let us make some final comments. In this chapter, we draw mainly on the work of Durand, Romashchenko and Shen \cite{drs}. Reading that paper, one has the feeling that its construction can be carried out in a reversible way, except for the exchange of information. Working out the details needed to make that intuition precise is (as this chapter proves) messy and sometimes even tedious, but we obtain results for which no alternative proof is known. We also believe that our construction can shed some light on the construction of Durand, Romashchenko and Shen. More specifically, we always write explicitly the inequalities that need to be satisfied, and for each one we explain at least once why it is needed. Also, we construct the rules \xpr{once and for all} and then prove that they have the desired behaviour, instead of following their more informal approach, where a rule is created and then modified into a new rule for which it is taken for granted that the previous argumentation still holds.
\chapter{Expansive directions}\label{sec:expdir}
In Lemma~\ref{lem:nonexpsftrestr}, we described a necessary condition for the set of non-expansive directions of an SFT: if $X$ is an SFT, then $\NE(X)$ is effectively closed.
In this section, we are going to show that this is in fact a characterization of sets of non-expansive directions of SFTs.
\begin{theorem}\label{thm:nonexpansive}
If $\NE_0 \subseteq \Rb$ is effectively closed, then there exists
an SFT $X \subseteq \A^{\mathbb Z^2}$ such that $\NE(X)=\NE_0$.
\end{theorem}
This is mentioned as Open Problem~11.1 in Mike Boyle's Open Problems for Symbolic Dynamics \cite{opsd}. We only answer the first part of that problem, since our constructions do not have any SFT direction. The second part of the problem, concerning 2D SFT with an SFT direction is much more difficult to answer, since it is inextricably related to the expansiveness of RCA.
It is enough to prove this for sets of non-expansive directions that are included in $[-1,1]$, or even in $[0,r]$ for some $0 < r <1$, because we can cover the set of directions with a finite number of rotations of $[-1,1]$ (and of $[0,r]$). Therefore, even though the fact that we are using PPA (for which $\NE(F) \subseteq [-1,1]$) might seem problematic, it is not.
The key idea consists in constructing subshifts with a unique direction of non-expansiveness through a nested simulation of RPCA, so that we can use Lemma~\ref{prop:hochman}.
This idea was introduced in \cite{nexpdir}, in a non-effective way;
we will try to emphasize the obstacle that has to be overcome when trying to \xpr{SFTize} this construction.
\section{Directive encoding}\label{subsection:direncoding}
Proposition~\ref{prop:hochman} states that, if we manage to implement a certain kind of nested simulation, then we will obtain a subshift with a unique direction of non-expansiveness, equal to $\anib[\seq S/\seq T]{\seq D}$, where $\seq S,\seq T\in{\mathbb N}_1^\mathbb N$ and $\seq D\in\mathbb Z^\mathbb N$.
\cite[Lemma~5.6]{nexpdir} shows that all directions can be written in this form (when the sequences $\seq S, \seq T, \seq D$ are allowed to be chosen without any constraints).
But the sequences of nested simulations that are possible with our SFT construction are more constrained: for example, the sequences $\seq S$ and $\seq T$ must be polynomially checkable.
This immediately imposes some restrictions: for example, it implies that $S_i$ cannot grow like an exponential tower of height $i$. This is not excluded in the construction of Hochman, since he takes $S$ \xpr{sufficiently large}, in order to make some \xpr{error term} sufficiently small. A large part of our construction is devoted to showing that we can satisfy these restrictions at the same time, or, in other words, that the error terms can be made sufficiently small even if $\seq S$ grows relatively slowly. At the same time, we have to take care of some technical details.
Let us begin the construction by giving some additional necessary definitions:
To any vector $\vec\varepsilon\in\mathbb R_+^n$ and any \dfn{directive word} $\vec d\defeq(D_i,W_i)_{0\le i<n}\in(\mathbb N^2\setminus\{(0,0)\})^n$, where $n\in\mathbb N$, we associate the direction interval $\Theta_{\vec\varepsilon}(\vec d)\defeq(\prod_{0\le i<n}R_i)[-1,1]+\anib[\vec R]{\vec D}\subset\Rb$, where $R_i\defeq1/(D_i+W_i+1+\varepsilon_i)\le1/2$; recall that $\anib[\vec R]{\vec D}\defeq\sum_{0\le i<n}D_i\prod_{0\le j\le i}R_j$. It follows immediately by the definition that $\Theta_{\vec\varepsilon}(\vec d)=R_0(\Theta_{\varepsilon\restr{\co1n}}(\vec d\restr{\co1n})+D_0)$ for any $\vec d=(D_i,W_i)_{0\le i < n}$.
We extend these definitions for infinite sequences in the natural way: To any sequence $\seq\varepsilon\in\mathbb R_+^\mathbb N$ and any \dfn{directive sequence} $\seq d\defeq(D_n,W_n)_{n\in\mathbb N}\in(\mathbb N^2\setminus\{(0,0)\})^\mathbb N$ we associate the direction $\theta_{\seq\varepsilon}(\seq d)\defeq\anib[\seq R]{\seq D}\in\mathbb R$, the unique element of $\bigcap_{n\in\mathbb N}\Theta_{\varepsilon\restr{\co0n}}((D_i,W_i)_{0\le i<n})$ (uniqueness follows from the fact that $R_i \le 1 /2$, for all $i \in \mathbb N$).
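These definitions are straightforward to compute with. The following Python sketch (ours, purely for illustration; all names are our own) computes the radii $R_i$, the expansion $\anib[\vec R]{\vec D}$ and the direction interval $\Theta_{\vec\varepsilon}(\vec d)$ for a finite directive word, and makes the recursion $\Theta_{\vec\varepsilon}(\vec d)=R_0(\Theta_{\varepsilon\restr{\co1n}}(\vec d\restr{\co1n})+D_0)$ easy to check numerically.

```python
from math import prod

def radii(word, eps):
    # R_i = 1/(D_i + W_i + 1 + eps_i); each R_i <= 1/2 because D_i + W_i >= 1
    return [1.0 / (D + W + 1 + e) for (D, W), e in zip(word, eps)]

def anib(R, D):
    # anib_R(D) = sum_{i<n} D_i * prod_{j<=i} R_j
    return sum(D[i] * prod(R[:i + 1]) for i in range(len(D)))

def theta_interval(word, eps):
    # Theta_eps(word) = (prod_i R_i) * [-1, 1] + anib_R(D), returned as (low, high)
    R = radii(word, eps)
    centre = anib(R, [D for D, _ in word])
    radius = prod(R)
    return centre - radius, centre + radius
```

Since each $R_i\le1/2$, the interval for a word of length $n$ has radius at most $2^{-n}$, which is what makes the direction $\theta_{\seq\varepsilon}(\seq d)$ well defined in the infinite case.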
If $F_0\simu[S_0,T_0,D_0S_0]F_1\simu[S_1,T_1,D_1S_1]\ldots$ and for all $n\in\mathbb N$, $T_n=(D_n+W_n+1+\varepsilon_n)S_n$, then observe that Proposition~\ref{prop:hochman} can be seen as saying that $\NE(F_0)=\{\theta_{\seq\varepsilon}(\seq D,\seq W)\}$. This is the point of contact between this section and the rest of the thesis.
\cite[Lemma 5.6]{nexpdir} can now be reformulated as follows.
\begin{lemma}\label{lem:hochgeomstuff}
For all $x \in [0,1]$, there exist a sequence $\seq\varepsilon\in\mathbb R_+^\mathbb N$ and a directive sequence $\seq d$ such that $\theta_{\seq\varepsilon}(\seq d) = x$.
\end{lemma}
We refine this statement in two ways: first, we will show that the sequence $\seq\varepsilon$ can be fixed, and second, we will restrict the alphabet of acceptable directive sequences to $\mathbf{\mathcal{B}}\defeq\{(0,1),(1,1),(1,0)\}$. By doing that, we will \xpr{lose} a small part near the right endpoint of the interval $[0,1]$.
\begin{lemma}\label{lem:geomstuff}
Let $\seq\varepsilon\in[0,\sqrt2-1]^\mathbb N$ be a sequence. Then,
\begin{equation*}
\theta_{\seq\varepsilon}(\mathbf{\mathcal{B}}^\mathbb N)
=[0,\anib[(1/(2+\varepsilon_i))_{i\in\mathbb N}]{111\ldots}].
\end{equation*}
\end{lemma}
Though the interval $[0,1[$ cannot be covered fully with a fixed, non-trivial sequence, the convergence of the sequence $(\varepsilon_n)_{n \in \mathbb N}$ to $0$ can be sped up suitably, in order to realise any number arbitrarily close to $1$.
\begin{proof}
Let us prove by induction over $n\in\mathbb N$ that for any such sequence $\seq\varepsilon$, we have that
\begin{equation*}
\theta_{\varepsilon\restr{\co0n}}(\mathbf{\mathcal{B}}^n)=[-\prod_{i<n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{i<n}]{1\ldots12}]\supset[0,\frac1{\sqrt2}].
\end{equation*}
The base of the induction follows from $\varepsilon_0 \le \sqrt{2}-1$.
Let us assume that the inductive hypothesis is true for some $n\in\mathbb N$, and prove it for $n+1$.
Note that $\anib[(1/(2+\varepsilon_i))_{i < n+1}]{1\ldots12}=\frac1{2+\varepsilon_0}\left(1+\anib[(1/(2+\varepsilon_i))_{1 \le i<n+1}]{1\ldots12}\right)$.
By the induction hypothesis (which we can apply to the truncated sequence $(\varepsilon_n)_{n \geq 1}$, since it satisfies the assumption, too) and $\varepsilon_0\le\sqrt2-1$, we have
$\anib[(1/(2+\varepsilon_i))_{i < n+1}]{1\ldots12}\ge\frac1{1+\sqrt2}\left(1+\frac1{\sqrt2}\right)=\frac1{\sqrt2}$.
The set $\theta_{\varepsilon\restr{\co{0}{n+1}}}(\mathbf{\mathcal{B}}^{n+1})$ can be decomposed, in terms of the first directive letter, into a union of three intervals
\begin{eqnarray*}
\frac1{2+\varepsilon_0}\theta_{\varepsilon\restr{\cc1n}}(\mathbf{\mathcal{B}}^n)\cup\frac1{3+\varepsilon_0}(\theta_{\varepsilon\restr{\cc1n}}(\mathbf{\mathcal{B}}^n)+1)\cup\frac1{2+\varepsilon_0}(\theta_{\varepsilon\restr{\cc1n}}(\mathbf{\mathcal{B}}^n)+1)
~.
\end{eqnarray*}
Following the induction hypothesis, the first interval is equal to
\begin{equation*}
\frac1{2+\varepsilon_0}[-\prod_{1\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{1\le i \le n}]{1\ldots12}]
=[-\prod_{0\le i\le n}\frac1{2+\varepsilon_i},\frac{1}{2+\varepsilon_0}\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}].
\end{equation*}
The second interval is equal to
\begin{equation*}
\frac1{3+\varepsilon_0}(1+[-\prod_{1\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}])=[\frac1{3+\varepsilon_0}(1-\prod_{1\le i\le n}\frac1{2+\varepsilon_i}),\frac1{3+\varepsilon_0}\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}].
\end{equation*}
The third interval is equal to
\begin{equation*}
\frac1{2+\varepsilon_0}(1+[-\prod_{1\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}])=[\frac1{2+\varepsilon_0}-\prod_{0\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{0\le i\le n}]{1\ldots12}].
\end{equation*}
It is clear that the smallest point of these three intervals is $-\prod_{0\le i\le n}\frac1{2+\varepsilon_i}$ (the last two intervals are in $\mathbb R_+$), and the largest is $\anib[(1/(2+\varepsilon_i))_{0\le i\le n}]{1\ldots12}$ (the first interval is obtained through a translation by $-1$ of the third one, or through a homothecy by $\frac{2+\varepsilon_0}{3+\varepsilon_0}$ of the second one).
We proved earlier that $\anib[(1/(2+\varepsilon_i))_{i < n+1}]{1\ldots12}\ge\frac1{1+\sqrt2}\left(1+\frac1{\sqrt2}\right)=\frac1{\sqrt2}$. It follows from this that the upper bound $\frac1{2+\varepsilon_0}\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}$ of the first interval is larger than $\frac{1}{(2+\varepsilon_0)\sqrt2}$, while the lower bound of the second interval is smaller than $\frac{1}{3+\varepsilon_0}$, which is at most $\frac{1}{(2+\varepsilon_0)\sqrt2}$, as can be easily verified.
In other words, there is no hole between these two intervals.
Using the same arguments, one can easily see that the upper bound of the second interval is $\frac1{3+\varepsilon_0}(1+\anib[(1/(2+\varepsilon_i))_{1\le i\le n+1}]{1\ldots12})$, which is larger than $\frac{1+1/\sqrt2}{3+\varepsilon_0}$ while the lower bound of the third interval is smaller than $\frac{1}{2+\varepsilon_0}$, which is smaller than $\frac{1+1/\sqrt2}{3+\varepsilon_0}$, since $\sqrt2-1+\varepsilon_0\frac1{\sqrt2}>0$. There is no hole here either, and globally, we get the full interval $\left[-\prod_{0\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{0\le i\le n}]{1\ldots12}\right]$.
The proof of the statement is finished by observing that $\prod_{0\le i<n}\frac1{2+\varepsilon_i}\to0$ and $\anib[(1/(2+\varepsilon_i))_{i<n}]{1\ldots12}\to\anib[(1/(2+\varepsilon_i))_{i\in\mathbb N}]{111\ldots}$.
\end{proof}
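The inductive step of the previous proof can be checked numerically. In the following illustrative Python sketch (ours, not part of the construction), the claimed interval and the three sub-intervals obtained from the first directive letter are computed for a given finite prefix of $\seq\varepsilon\in[0,\sqrt2-1]^\mathbb N$; one can then verify that consecutive sub-intervals overlap and that their union is the claimed interval.

```python
from math import prod

def full_interval(eps):
    # claimed value of Theta_{eps|n}(B^n):
    # [-prod_i 1/(2+eps_i), anib_{(1/(2+eps_i))_i}(1...12)]
    if not eps:
        return (-1.0, 1.0)
    R = [1.0 / (2 + e) for e in eps]
    digits = [1] * (len(eps) - 1) + [2]
    high = sum(digits[i] * prod(R[:i + 1]) for i in range(len(eps)))
    return (-prod(R), high)

def images(eps):
    # the three sub-intervals coming from the first directive letter
    lo, hi = full_interval(eps[1:])
    e0 = eps[0]
    return [
        (lo / (2 + e0), hi / (2 + e0)),              # letter (0,1): no shift
        ((lo + 1) / (3 + e0), (hi + 1) / (3 + e0)),  # letter (1,1): shift by 1
        ((lo + 1) / (2 + e0), (hi + 1) / (2 + e0)),  # letter (1,0): shift by 1
    ]
```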
\section{Computing directions}
Lemma~\ref{lem:nonexpsftrestr} stated that the set of non-expansive directions of an SFT is \emph{effectively} closed, which means that there exists a program which takes as (infinite) input the description of a direction in $\Rb$ and halts (after having read finitely many bits of the input) if and only if the direction is expansive.
In Subsection~\ref{ss:comput}, it was suggested that a good way to represent directions, in order to compute with them, is by the two coordinates of an intersection point with the unit circle. Each slope then has two (opposite) valid representations.
When restricting to closed subsets of $\mathbb R \subseteq \Rb$ (\textit{i.e.}, when we are not talking about the horizontal direction), the notion of an effectively closed set of directions is the same with the above representation as with the usual representation of $\mathbb R$. This is due to the facts that the functions $\sin$ and $\cos$ and their inverses are computable and that the function $x \mapsto 1/x$ is uniformly continuous away from $0$.
The following remark states that directive sequences give another, equivalent representation for directions.
\begin{remark}\label{rem:compslope}
Let $\seq\varepsilon \in \mathbb R^{\mathbb N}$ be computable. Then, $\theta_{\seq\varepsilon}$ is a computable function.
\end{remark}
The computation is actually uniform in $\seq\varepsilon$, in the sense that it could be considered as part of the input.
\begin{proof}
This follows from the fact that the radius of $\Theta_{\varepsilon\restr{\co0n}}(\vec d)$ is at most $2^{-n}$, since $R_i \le 1/2$ for all $i \in \mathbb N$ and every directive word $\vec d$.
\end{proof}
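Concretely, the computation behind Remark~\ref{rem:compslope} just reads more and more letters of the directive sequence: after $n$ letters, the midpoint of the direction interval is within $\prod_{i<n}R_i\le2^{-n}$ of $\theta_{\seq\varepsilon}(\seq d)$. A Python sketch of this approximation (with names of our own choosing) could look as follows.

```python
from math import prod

def theta_approx(d, eps, n):
    # midpoint and radius of Theta_{eps|n}(d|n); the radius prod_i R_i is
    # at most 2**-n, so the midpoint is within 2**-n of theta_eps(d)
    R = [1.0 / (D + W + 1 + e) for (D, W), e in zip(d[:n], eps[:n])]
    centre = sum(d[i][0] * prod(R[:i + 1]) for i in range(n))
    return centre, prod(R)
```

The successive approximations are nested, which is exactly what makes $\theta_{\seq\varepsilon}$ computable from (a program for) $\seq d$ and $\seq\varepsilon$.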
Remark~\ref{rem:compslope} implies that effectively closed sets of slopes can be equivalently described by an effectively closed set of directive sequences. This is the computational description of directive sequences that we are going to use in the next chapter.
\section{Realization of sets of non-expansive directions}
Let $\C[\reali]=\C[\unive]\sqcup[\texttt{MHist},\texttt{MShift},\texttt{MShift}_{+1},\texttt{Prog},\revprog]$.
The following permutation will also be parametrized by an effectively closed set $\NE_0 \subseteq [0,1/2]$, which is represented by the program $p'$ of the TM that recognizes $\NE_0$ as a set of directive sequences. $\NE_0$ is the set of non-expansive directions that we are trying to realize.
We identify the set $\mathbf{\mathcal{B}} \defeq \{(0,1),(1,1),(1,0)\}$ with $\haine3$ (through any bijection). If $a \in \mathbf{\mathcal{B}}$, then $D_a, W_a$ will denote the projections of $a$ onto the first and second coordinate, respectively.
In the following algorithm, $p'$ is the program of a TM that recognizes some set of non-expansive directions.
\begin{algo}{reali}{\reali}{M,\vec{\nu},p',\seq{\vec{k}},\seq S,\seq U}
\STATE{$\chekk[\pi_\texttt{MShift}=\pi_{\texttt{MShift}_{+1}}]$}\label{al:reali:mhistconsistency}
\IF{$\bina{\pi_\texttt{Clock}}=0$}
\STATE{$\chekk[\halt{p'}{\length{\texttt{MHist}}}{\texttt{MHist}}]$}\label{al:reali:medvedev}
\STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:reali:empty}
\STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1}]$} \label{al:reali:alph}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\texttt{Prog},\pi_{\texttt{Prog}}]$}\label{al:reali:prog}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\revprog,\pi_\revprog]$}\label{al:reali:revprog}
\STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\texttt{MHist},\pi_{\texttt{MHist}}\pi_{\texttt{MShift}}]$}\label{al:reali:infinitesequence}
\ENDIF
\IF{$\bina{\pi_\texttt{Clock}}=0$}
\STATE{$\exch[\texttt{Tape},\texttt{Tape}_{+1}]$}\label{al:reali:startshift}
\ENDIF
\IF{$\bina{\pi_\texttt{Clock}}=D_{\texttt{MShift}}S_{\length{\texttt{MHist}}}$}
\STATE{$\exch[\texttt{Tape},\texttt{Tape}_{+1}]$}\label{al:reali:stopshift}
\ENDIF
\STATE{$\unive[M$, $\vec{\nu}$, $\vec{k}_{\length{\texttt{MHist}}}$, $\bina{\pi_\texttt{Addr}}$, $\bina{\pi_\texttt{Clock}}-D_{\texttt{MShift}}S_{\length{\texttt{MHist}}}$, $S_{\length{\texttt{MHist}}}$, $S_{\length{\texttt{MHist}}}(D_{\texttt{MShift}}+W_{\texttt{MShift}}+1)+4U_{\length{\texttt{MHist}}}$ $,U_{\length{\texttt{MHist}}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$}\label{al:reali:unive}
\STATE{$\coordi[\seq{S}_{\length{\texttt{MHist}}},\seq{S}_{\length{\texttt{MHist}}}(D_{\texttt{MShift}}+W_{\texttt{MShift}}+1)+4\seq{U}_{\length{\texttt{MHist}}}]$}\label{al:reali:coordi}
\end{algo}
This is like the simulation of the computation degrees, except that we keep a more complicated register in the $\texttt{MHist}$ field and we use the value of $\texttt{MShift}$ to perform a ``macro-shift'' before the simulation. We also note that $\seq{T}$ is not given as a parameter of the construction. Instead, it is determined by the sequences $\seq{S}, \seq{U}$ and the value of the field $\texttt{MShift}$.
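To illustrate the ``macro-shift'' of lines~\ref{al:reali:startshift} and \ref{al:reali:stopshift}, here is a toy Python model (ours; the real rule acts on configurations over $\mathbb Z$, which we replace by a cyclic tape for the sake of the sketch): the contents of $\texttt{Tape}$ are copied onto a moving layer, drift one cell to the right per time step, and are copied back after $D\cdot S$ steps, so that each encoding has moved by exactly $D$ macro-cells of width $S$.

```python
def macro_shift(tape, D, S):
    # Toy model: the encoding is copied onto a moving layer (Tape_{+1}) at
    # Clock = 0, drifts one cell to the right per time step, and is copied
    # back at Clock = D*S.  A cyclic tape stands in for a configuration on Z.
    moving = list(tape)
    for _ in range(D * S):
        moving = [moving[-1]] + moving[:-1]  # everything moves one cell right
    return moving
```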
\begin{lemma}\label{lem:nexpdirsimul}
Let $\seq S,\seq U$ be polynomially checkable sequences of integers and $p'$ be the program of a TM. Let us fix the field list $\C[\reali]\defeq [0,\ldots,14]$, the corresponding direction vector $\vec{\nu}_{\reali}$ and a polynomially checkable sequence of $15$-tuples $\seq{\vec{k}}\in(\mathbb N^{15})^\mathbb N$. Let $F$ be the IPPA with directions $\vec{\nu}_{\reali}$ and permutation
$\reali[14,\vec{\nu}_{\reali},p',\vec{k},\seq S,\seq U]$
and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively.
For all $w \in \mathbf{\mathcal{B}}^*$ and $a \in \mathbf{\mathcal{B}}$, let $\vec{k}_{w,a} \defeq \vec{k}_{\length{w}}$, $S_{w,a}\defeq S_{\length{w}}$, $U_{w,a} \defeq U_{\length{w}}$, $T_{w,a} \defeq S_{w,a}(D_a+W_a+1)+4U_{w,a}$ and $F_{w,a}$ be the restriction of $F$ to the subalphabet
\begin{equation*}
\A_{w,a}\defeq \haine5^{\vec{k}_{\length{w}}}\cap\emp[w]{\texttt{MHist}}\cap\emp[a]{\texttt{MShift},\texttt{MShift}_{+1}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}.
\end{equation*}
Assume that the following inequalities hold, for all $w\in\mathbf{\mathcal{B}}^*$ and $a,a' \in \mathbf{\mathcal{B}}$:
\[\both{
U_{w,a}\ge\max\{t_p({\haine5^{\seq{\vec{k}}_{wa,a'}}}),t_{p^{-1}}({\haine5^{\seq{\vec{k}}_{wa,a'}}})\}\\
S_{w,a}\ge\max\{2U_{w,a},\length{\Chi{\haine5^{\vec{k}_{wa,a'}}}}\}\\
k_{w,a,\texttt{Addr}},k_{w,a,\texttt{Addr}_{+1}}\ge\norm{S_{w,a}}\\
k_{w,a,\texttt{Clock}},k_{w,a,\texttt{Clock}_{+1}}\ge\norm{T_{w,a}}\\
k_{w,a,\texttt{Head}_{-1}},k_{w,a,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_{w,a,\texttt{Tape}},k_{w,a,\texttt{NTape}},k_{w,a,\texttt{Tape}_{-1}},k_{w,a,\texttt{Tape}_{+1}}\ge1\\
k_{w,a,\texttt{Prog}}\geq\length p\\
k_{w,a,\revprog}\geq\length{p^{-1}}\\
k_{w,a,\texttt{MHist}} \geq\length{w}\\
k_{w,a,\texttt{MShift}}=k_{w,a,\texttt{MShift}_{+1}}\geq 1
~.}\]
Then, $F_{w,a}$ completely exactly simulates $\bigsqcup_{\begin{subarray}{c}a'\in \mathbf{\mathcal{B}}\end{subarray}}{F_{wa,a'}}$ with parameters $(S_{w,a},T_{w,a},D_aS_{w,a},\Phi_{w,a})$, where $\Phi_{w,a}=\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma_{w,a}}}$ and
$\Sigma_{w,a} \defeq \A_{w,a}^{\mathbb Z} \cap \gra{0}{0}{S_{w,a}}{T_{w,a}}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}
\cap \Phi^{-1}(\bigsqcup_{\begin{subarray}{c} a' \in \mathbf{\mathcal{B}} \end{subarray}}\A_{wa,a'}^{\mathbb Z}).$
\end{lemma}
The only difference between this proof and the proof of Lemma~\ref{l:mhist} is that we shift all the encodings $D_aS_{w,a}$ cells (equivalently, $D_a$ macro-cells) to the right before starting the simulation.
Also, it is not difficult to see that the usual inequalities hold: $t_p(\A_{n}) = t_{p^{-1}}(\A_{n})=P(\length{\A_n}+t_{\vec{k}}(n)+t_{\seq{S}}(n)+t_{\seq{T}}(n)+t_{\seq{U}}(n))$ and $\length{p},\length{p^{-1}} = O(\length{p_{\vec{k}}}+\length{p_{\seq{S}}}+\length{p_{\seq{T}}}+\length{p_{\seq{U}}})$. Recall that $p'$ is fixed in advance, so it is a constant as far as complexity is concerned.
\begin{proof}
If $p'$ halts on input $w$ within $\length{w}$ steps, then the check of line~\ref{al:reali:medvedev} will reject every configuration, which means that $F_{w,a} = \emptyset$. But, in this case, $wa$ will also be rejected by $p'$ within $\length{wa}$ steps, for all $a \in \mathbf{\mathcal{B}}$, so that $\bigsqcup_{\begin{subarray}{c}a'\in \mathbf{\mathcal{B}}\end{subarray}}{F_{wa,a'}} = \emptyset$, too. By definition, the empty PCA strongly and completely simulates itself for all possible choices of the simulation parameters, so that the claim is true in this case.
Suppose, then, that $p'$ does not halt on input $w$ within $\length{w}$ steps. Then, the check of line~\ref{al:reali:medvedev} is always true, so that we can ignore it in the rest of the proof. As in the previous proofs, we have to show three things: that $F_{w,a}$ $(S_{w,a},T_{w,a})$ simulates $\bigsqcup_{\begin{subarray}{c}a'\in \mathbf{\mathcal{B}}\end{subarray}}{F_{wa,a'}}$ with decoding function $\Phi_{w,a}$ (simulation), that $\Phi_{w,a}$ is injective (exactness) and that $\Omega_{F_{w,a}} \subseteq \mathcal{D}(\Phi_{w,a})$ (completeness).
For the simulation, it is easy to see that if $b \in \A_{wa,a'}^{\mathbb Z}$, where $a' \in \mathbf{\mathcal{B}}$ and $c \in \Phi_{w,a}^{-1}(b)$, then $c$ is not rejected by the checks of lines~\ref{al:reali:mhistconsistency}, \ref{al:reali:empty}, \ref{al:reali:alph}, \ref{al:reali:prog}, \ref{al:reali:revprog} and \ref{al:reali:infinitesequence}.
Then, line~\ref{al:reali:startshift} copies \emph{all} the info bits onto $\texttt{Tape}_{+1}$. During the next $S_{w,a}D_a$ steps, no permutation is applied. The only thing happening to the configuration is that the encodings that are in $\texttt{Tape}_{+1}$ travel to the right at the speed of one cell per time step. After $S_{w,a}D_a$ steps, they are copied back to the $\texttt{Tape}$ tape by line~\ref{al:reali:stopshift}. Every letter has travelled exactly $S_{w,a}D_a$ cells to the right, which corresponds to $D_a$ macro-cells. Formally, $\Phi(F^{D_aS_{w,a}}(c))=\sigma^{-D_a}(b)$.
Then, from Fact~\ref{fact:shiftandpermutation} and since the only rule applied from $\texttt{Clock}=D_aS_{w,a}$ is $\coordi \circ \unive$, we obtain that $\Phi(F^{D_aS_{w,a}+S_{w,a}+4{U}_{w,a}}(c))=$ $F_{wa,a'}(\sigma^{-D_a}(b))$. After $\texttt{Clock}=D_aS_{w,a}+S_{w,a}+4{U}_{w,a}$, nothing else changes in the configuration until $\texttt{Clock}$ becomes $0$ again. Line~\ref{al:reali:coordi} ensures that $\texttt{Clock}$ goes from $0$ to $(D_a+W_a+1)\seq{S}_{w,a}+4\seq{U}_{w,a}$. This concludes the proof of the simulation part.
Exactness of the simulation is easy to see. The values of all the fields of $c \in \Phi_{w,a}^{-1}(b)$ are uniquely determined by $b$ and $\Sigma_{w,a}$.
For the completeness, we show that if $c \in \gra{0}{0}{S_{w,a}}{T_{w,a}} \cap F_{w,a}^{-2T_{w,a}}(\A_{w,a}^{\mathbb Z})$, then $c \in \Phi_{w,a}^{-1}(\bigsqcup_{\begin{subarray}{c} a' \in \mathbf{\mathcal{B}} \end{subarray}}\A_{wa,a'}^{\mathbb Z})$.
Indeed, if $c \in \gra{0}{0}{S_{w,a}}{T_{w,a}} \cap F_{w,a}^{-T_{w,a}}(\A_{w,a}^{\mathbb Z})$, then lines~\ref{al:reali:empty},\ref{al:reali:prog},\ref{al:reali:revprog} and \ref{al:reali:infinitesequence} ensure that
$c \in \A_{w,a}^{\mathbb Z} \cap \gra{0}{0}{S_{w,a}}{T_{w,a}}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \Phi^{-1}((\bigsqcup_{\begin{subarray}{c} a' \in \mathbf{\mathcal{B}} \end{subarray}}\A_{wa,a'})^{\mathbb Z}).$
Let $b \in (\bigsqcup_{\begin{subarray}{c} a' \in \mathbf{\mathcal{B}} \end{subarray}}\A_{wa,a'})^{\mathbb Z}$ be such that $c \in \Phi^{-1}(b)$. We still cannot know that $\pi_{\texttt{MShift}}(b_i)$ is the same for all $i \in \mathbb Z$.
We deal with this problem in a similar way as in Section~\ref{s:comput}: since $F_{w,a}^{2T_{w,a}}(c)$ exists, $F^2(b)$ also exists, and line~\ref{al:reali:mhistconsistency} ensures that $\pi_{\texttt{MShift}}(b_i)=\pi_{\texttt{MShift}_{+1}}(b_i)=\pi_{\texttt{MShift}}(b_j)$, for all $i,j \in \mathbb Z$.
Therefore, $b \in \bigsqcup_{\begin{subarray}{c} a' \in \mathbf{\mathcal{B}} \end{subarray}}\A_{wa,a'}^{\mathbb Z}$ and this concludes the proof.
\end{proof}
\subsection{Satisfying the inequalities}
Unlike the previous cases, the set of inequalities that we want to satisfy does not depend on $n$, but instead on a word $w \in \mathbf{\mathcal{B}}^*$ and $a,a' \in \mathbf{\mathcal{B}}$. However, we will now see that the inequalities can be translated into inequalities about the polynomially checkable sequences $\seq{S},\seq{U}$.
\begin{remark}
We can find $\vec{k} \in (\mathbb N^{15})^{\mathbb N}$ and $\seq{S}, \seq{U} \in \mathbb N^{\mathbb N}$ such that the inequalities of Lemma~\ref{lem:nexpdirsimul} are satisfied.
In addition, we can have $\varepsilon_n \defeq 4U_n / S_n < \sqrt{2}-1$, for all $n \in \mathbb N$, and $\theta_{\seq\varepsilon}(\mathbf{\mathcal{B}}^{\mathbb N}) \supseteq [0,1/2]$.
\end{remark}
\begin{proof}
Let $w \in \mathbf{\mathcal{B}}^*$ and $a,a' \in \mathbf{\mathcal{B}}$. Let $n \defeq \length{w}$ be the length of $w$. Let us write again the inequalities:
\[\both{
U_{w,a}\ge\max\{t_p({\haine5^{\seq{\vec{k}}_{wa,a'}}}),t_{p^{-1}}({\haine5^{\seq{\vec{k}}_{wa,a'}}})\}\\
S_{w,a}\ge\max\{2U_{w,a},\length{\Chi{\haine5^{\vec{k}_{wa,a'}}}}\}\\
k_{w,a,\texttt{Prog}}\geq\length p\\
k_{w,a,\revprog}\geq\length{p^{-1}}\\
k_{w,a,\texttt{Head}_{-1}},k_{w,a,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_{w,a,\texttt{Addr}},k_{w,a,\texttt{Addr}_{+1}}\ge\norm{S_{w,a}}\\
k_{w,a,\texttt{Clock}},k_{w,a,\texttt{Clock}_{+1}}\ge\norm{T_{w,a}}\\
k_{w,a,\texttt{Tape}},k_{w,a,\texttt{NTape}},k_{w,a,\texttt{Tape}_{-1}},k_{w,a,\texttt{Tape}_{+1}}\ge1\\
k_{w,a,\texttt{MHist}} \geq\length{w}\\
k_{w,a,\texttt{MShift}}=k_{w,a,\texttt{MShift}_{+1}}\geq 1
~.}\]
According to the definitions, $\vec{k}_{w,a} = \vec{k}_n$, $S_{w,a}=S_n$, $U_{w,a}=U_n$ and $T_{w,a}=S_n(D_a+W_a+1)+4U_n$. Therefore, we can write the above inequalities as follows:
\[\both{
U_n\ge\max\{t_p({\haine5^{\seq{\vec{k}}_{n+1}}}),t_{p^{-1}}({\haine5^{\seq{\vec{k}}_{n+1}}})\}\\
S_{n}\ge\max\{2U_{n},\length{\Chi{\haine5^{\vec{k}_{n+1}}}}\}\\
k_{n,\texttt{Prog}}\geq\length p\\
k_{n,\revprog}\geq\length{p^{-1}}\\
k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\
k_{n,\texttt{Addr}},k_{n,\texttt{Addr}_{+1}}\ge\norm{S_{n}}\\
k_{n,\texttt{Clock}},k_{n,\texttt{Clock}_{+1}}\ge\norm{S_n(D_a+W_a+1)+4U_n}\\
k_{n,\texttt{Tape}},k_{n,\texttt{NTape}},k_{n,\texttt{Tape}_{-1}},k_{n,\texttt{Tape}_{+1}}\ge1\\
k_{n,\texttt{MHist}} \geq n\\
k_{n,\texttt{MShift}}=k_{n,\texttt{MShift}_{+1}}\geq 1
~.}\]
Unlike the previous proofs, we cannot choose $\vec{k}_{\seq{S},\seq{T}} \in \mathbb N^{\mathbb N}$ such that all of the inequalities except the first two are satisfied as equalities. This is because the inequalities about $\texttt{Clock}$ and $\texttt{Clock}_{+1}$ depend on $a \in \mathbf{\mathcal{B}}$ and not only on $n$. However, $D_a+W_a \le 2$, for all $a\in \mathbf{\mathcal{B}}$, so that we can replace these inequalities with $k_{n,\texttt{Clock}},k_{n,\texttt{Clock}_{+1}}\ge\norm{3S_n+4U_n}$ and show that this new set of inequalities can be satisfied. (Here, it is essential that $\mathbf{\mathcal{B}}$ is a finite set. Bounding $D_a+W_a$ from above is one of the reasons that we had to do a little more work with the directive sequences.)
The rest of the proof follows the usual pattern. We choose $S_n = Q^{n+n_0}$ and $U_n=(n+n_0)^r$ for some suitable values of $n_0,r$ and $Q$.
For the second claim, let us recall more specifically in which order $n_0,r$ and $Q$ are chosen. In the proof of Remark~\ref{rem:inequhiera}, we showed that there exist $n_0$ and $r$ that work for \emph{every} $Q$. Therefore, by choosing $S_n = Q^{n+n_0}$ and $U_n=(n+n_0)^r$, for some sufficiently large $Q$, we can make $\varepsilon_n \defeq 4U_n / S_n$ smaller than $\sqrt{2}-1$ and $\anib[(1/(2+\varepsilon_i))_{i\in\mathbb N}]{111\ldots}$ larger than $1/2$.
\end{proof}
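The choice made in the proof can also be tested numerically. The following Python sketch (the concrete values $Q=4$, $n_0=4$, $r=2$ are ours, for illustration only) computes $\varepsilon_n=4U_n/S_n$ for $S_n=Q^{n+n_0}$ and $U_n=(n+n_0)^r$, together with a truncated (hence lower-bounding, since all terms are positive) value of $\anib[(1/(2+\varepsilon_i))_i]{111\ldots}$, so that one can check that $\seq\varepsilon$ stays below $\sqrt2-1$ while the limit exceeds $1/2$.

```python
from math import prod

def eps_seq(Q, n0, r, N):
    # eps_n = 4 * U_n / S_n with S_n = Q**(n+n0) and U_n = (n+n0)**r
    return [4 * (n + n0) ** r / Q ** (n + n0) for n in range(N)]

def anib_ones_lower(eps):
    # truncated (lower-bounding) value of anib_{(1/(2+eps_i))_i}(111...)
    R = [1.0 / (2 + e) for e in eps]
    return sum(prod(R[:i + 1]) for i in range(len(eps)))
```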
For all $w,a$, we have $T_{w,a}= (D_a+W_a+1)S_{w,a}+4U_{w,a}=(D_a+W_a+1+4U_{w,a}/S_{w,a})S_{w,a}=(D_a+W_a+1+\varepsilon_{w,a})S_{w,a}$, where $\varepsilon_{w,a} < \sqrt{2}-1$, so that we are in the situation described in Lemma~\ref{lem:geomstuff} and $\theta_{\seq\varepsilon}(\mathbf{\mathcal{B}}^{\mathbb N}) \supseteq [0,1/2]$.
Since $\NE_0 \subseteq [0,1/2]$ by assumption, this means that every direction in $\NE_0$ is representable as a sequence in $\theta_{\seq\varepsilon}(\mathbf{\mathcal{B}}^{\mathbb N})$.
\subsection{Realization}
For all $z \in \X_{p'}$, we have a sequence of complete, exact simulations given by
\begin{equation*}
F_{z_{\co{0}{n}},z_n} \simu[S_{z_{\co{0}{n}},z_n},T_{z_{\co{0}{n}},z_n},D_{z_n}S_{z_{\co{0}{n}},z_n}] F_{z_{\co{0}{n+1}},z_{n+1}},
\end{equation*}
for all $n \in \mathbb N$.
Therefore, according to Lemma~\ref{l:nonvides}, we have that $\Omega_F= \bigsqcup_{z \in \X_{p'}} \rocks[\infty]z{\Phi}$ and $\orb{F_{\motvide}}=\bigsqcup_{z \in \X_{p'}} \orb{F_{\rocks[\infty]z{\Phi}}}$. In addition, we know that $\NE(\orb{F_{\rocks[\infty]z{\Phi}}})=\{\theta_{\seq\varepsilon}(z)\}$, by Lemma~\ref{lem:iterrelsimulexp}, and that every direction in $\cc{0}{1/2}$ can be represented in such a way, by Lemma~\ref{lem:geomstuff}. Finally, we know by Lemma~\ref{lem:basicstuffaboutNE} that $\NE(\orb{F_{\motvide}})= \bigcup_{z \in \X_{p'}} \NE(\orb{F_{\rocks[\infty]z{\Phi}}}) = \{\theta_{\seq\varepsilon}(z) : z \in \X_{p'}\} = \NE_{p'}=\NE_0$.
Therefore, for every effectively closed set of directions included in $[0,1/2]$ and recognized by a TM with program $p'$, we have constructed a 2D SFT with exactly this set as its set of non-expansive directions. According to our previous discussion, this is enough to realize arbitrary effectively closed sets of directions as the sets of non-expansive directions of 2D SFTs.
This concludes the proof of Theorem~\ref{thm:nonexpansive}.
\chapter*{Conclusion and Open Questions}
We have provided a general method for constructing extremely expansive 2D SFTs and we have shown that this class of 2D SFTs has very rich computational, dynamical and geometrical properties. At the same time, our method throws some light on the essence of self-similar and hierarchical constructions, and we hope that it might help to better understand previous works with hierarchical constructions, especially \cite{drs}. (On the other hand, the difficulty of \cite{gacs} only partly comes from the hierarchical simulation, so our work is certainly not sufficient to explain this construction better.)
Regarding future work, we believe that the following questions about extremely expansive 2D SFTs are very natural: First of all, can (a variant of) our method produce a \emph{minimal} extremely expansive SFT? Is the emptiness problem undecidable for \emph{minimal} extremely expansive SFTs? Recently, Durand and Romashchenko \cite{drs2} described a method for constructing minimal (but not extremely expansive) SFTs and answered the second question positively. It seems that their technique can be readily generalized to our framework. Second, is it possible to realize all effective subshifts, in the sense of \cite{projsft, drs, aubrun}, with 2D extremely expansive SFTs? This would be an improvement of the results of \cite{drs, aubrun}, since it would (in some sense) further lower the dimension of the realizing subshift by one. Third, is it possible to construct extremely expansive SFT covers for square substitutions? This question goes back to Mozes \cite{mozes}, who constructed SFT covers for square substitutions without any directions of expansiveness. Recently, Ollinger and Legloannec \cite{legloannec} constructed 4-way deterministic covers. We believe that the answer to this question is also positive. Finally, and this is certainly the most interesting, but also the most difficult question: is it possible to use our method in order to construct reversible, self-simulating CA, \textit{i.e.}, is it possible to turn the partial rules with which we have been working in this thesis into complete rules, while at the same time keeping the good properties of self-simulation? This could find an application to the problem of the undecidability of expansiveness for reversible CA.
.class Lcom/mediatek/settings/CellBroadcastSettings$5;
.super Ljava/lang/Object;
.source "CellBroadcastSettings.java"
# interfaces
.implements Landroid/content/DialogInterface$OnClickListener;
# annotations
.annotation system Ldalvik/annotation/EnclosingMethod;
value = Lcom/mediatek/settings/CellBroadcastSettings;->showEditChannelDialog(Lcom/mediatek/settings/CellBroadcastChannel;)V
.end annotation
.annotation system Ldalvik/annotation/InnerClass;
accessFlags = 0x0
name = null
.end annotation
# instance fields
.field final synthetic this$0:Lcom/mediatek/settings/CellBroadcastSettings;
.field final synthetic val$channelName:Landroid/widget/EditText;
.field final synthetic val$channelNum:Landroid/widget/EditText;
.field final synthetic val$channelState:Landroid/widget/CheckBox;
.field final synthetic val$oldChannel:Lcom/mediatek/settings/CellBroadcastChannel;
# direct methods
.method constructor <init>(Lcom/mediatek/settings/CellBroadcastSettings;Landroid/widget/EditText;Landroid/widget/EditText;Landroid/widget/CheckBox;Lcom/mediatek/settings/CellBroadcastChannel;)V
.locals 0
.parameter
.parameter
.parameter
.parameter
.parameter
.prologue
.line 395
iput-object p1, p0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
iput-object p2, p0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$channelName:Landroid/widget/EditText;
iput-object p3, p0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$channelNum:Landroid/widget/EditText;
iput-object p4, p0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$channelState:Landroid/widget/CheckBox;
iput-object p5, p0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$oldChannel:Lcom/mediatek/settings/CellBroadcastChannel;
invoke-direct/range {p0 .. p0}, Ljava/lang/Object;-><init>()V
return-void
.end method
# virtual methods
.method public onClick(Landroid/content/DialogInterface;I)V
.locals 17
.parameter "dialog"
.parameter "whichButton"
.prologue
.line 397
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$channelName:Landroid/widget/EditText;
invoke-virtual {v1}, Landroid/widget/EditText;->getText()Landroid/text/Editable;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/Object;->toString()Ljava/lang/String;
move-result-object v11
.line 398
.local v11, name:Ljava/lang/String;
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$channelNum:Landroid/widget/EditText;
invoke-virtual {v1}, Landroid/widget/EditText;->getText()Landroid/text/Editable;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/Object;->toString()Ljava/lang/String;
move-result-object v14
.line 399
.local v14, num:Ljava/lang/String;
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$channelState:Landroid/widget/CheckBox;
invoke-virtual {v1}, Landroid/widget/CheckBox;->isChecked()Z
move-result v9
.line 400
.local v9, checked:Z
const-string v10, ""
.line 401
.local v10, errorInfo:Ljava/lang/String;
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
#calls: Lcom/mediatek/settings/CellBroadcastSettings;->checkChannelName(Ljava/lang/String;)Z
invoke-static {v1, v11}, Lcom/mediatek/settings/CellBroadcastSettings;->access$300(Lcom/mediatek/settings/CellBroadcastSettings;Ljava/lang/String;)Z
move-result v1
if-nez v1, :cond_0
.line 402
new-instance v1, Ljava/lang/StringBuilder;
invoke-direct {v1}, Ljava/lang/StringBuilder;-><init>()V
invoke-virtual {v1, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
move-object/from16 v0, p0
iget-object v3, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
const v4, 0x7f0d0103
invoke-virtual {v3, v4}, Lcom/mediatek/settings/CellBroadcastSettings;->getString(I)Ljava/lang/String;
move-result-object v3
invoke-virtual {v1, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v10
.line 404
:cond_0
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
#calls: Lcom/mediatek/settings/CellBroadcastSettings;->checkChannelNumber(Ljava/lang/String;)Z
invoke-static {v1, v14}, Lcom/mediatek/settings/CellBroadcastSettings;->access$400(Lcom/mediatek/settings/CellBroadcastSettings;Ljava/lang/String;)Z
move-result v1
if-nez v1, :cond_1
.line 405
new-instance v1, Ljava/lang/StringBuilder;
invoke-direct {v1}, Ljava/lang/StringBuilder;-><init>()V
invoke-virtual {v1, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
const-string v3, "\n"
invoke-virtual {v1, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
move-object/from16 v0, p0
iget-object v3, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
const v4, 0x7f0d0102
invoke-virtual {v3, v4}, Lcom/mediatek/settings/CellBroadcastSettings;->getString(I)Ljava/lang/String;
move-result-object v3
invoke-virtual {v1, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v10
.line 407
:cond_1
const-string v1, ""
invoke-virtual {v10, v1}, Ljava/lang/String;->equals(Ljava/lang/Object;)Z
move-result v1
if-eqz v1, :cond_4
.line 408
invoke-static {v14}, Ljava/lang/Integer;->valueOf(Ljava/lang/String;)Ljava/lang/Integer;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/Integer;->intValue()I
move-result v13
.line 409
.local v13, newChannelId:I
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$oldChannel:Lcom/mediatek/settings/CellBroadcastChannel;
invoke-virtual {v1}, Lcom/mediatek/settings/CellBroadcastChannel;->getChannelId()I
move-result v2
.line 410
.local v2, tempOldChannelId:I
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
move-object/from16 v0, p0
iget-object v3, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$oldChannel:Lcom/mediatek/settings/CellBroadcastChannel;
invoke-virtual {v3}, Lcom/mediatek/settings/CellBroadcastChannel;->getKeyId()I
move-result v3
#calls: Lcom/mediatek/settings/CellBroadcastSettings;->checkChannelIdExist(II)Z
invoke-static {v1, v13, v3}, Lcom/mediatek/settings/CellBroadcastSettings;->access$1500(Lcom/mediatek/settings/CellBroadcastSettings;II)Z
move-result v1
if-nez v1, :cond_3
.line 411
invoke-interface/range {p1 .. p1}, Landroid/content/DialogInterface;->dismiss()V
.line 412
new-instance v12, Lcom/mediatek/settings/CellBroadcastChannel;
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$oldChannel:Lcom/mediatek/settings/CellBroadcastChannel;
invoke-virtual {v1}, Lcom/mediatek/settings/CellBroadcastChannel;->getKeyId()I
move-result v1
invoke-direct {v12, v1, v13, v11, v9}, Lcom/mediatek/settings/CellBroadcastChannel;-><init>(IILjava/lang/String;Z)V
.line 414
.local v12, newChannel:Lcom/mediatek/settings/CellBroadcastChannel;
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$oldChannel:Lcom/mediatek/settings/CellBroadcastChannel;
const/4 v3, 0x0
invoke-virtual {v1, v3}, Lcom/mediatek/settings/CellBroadcastChannel;->setChannelState(Z)V
.line 415
invoke-virtual {v12}, Lcom/mediatek/settings/CellBroadcastChannel;->getChannelId()I
move-result v16
.line 416
.local v16, tempNewChannelId:I
const/4 v1, 0x2
new-array v15, v1, [Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;
.line 417
.local v15, objectList:[Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;
const/4 v7, 0x0
new-instance v1, Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;
const/4 v4, -0x1
const/4 v5, -0x1
const/4 v6, 0x0
move v3, v2
invoke-direct/range {v1 .. v6}, Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;-><init>(IIIIZ)V
aput-object v1, v15, v7
.line 419
const/4 v1, 0x1
new-instance v3, Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;
const/4 v6, -0x1
const/4 v7, -0x1
invoke-virtual {v12}, Lcom/mediatek/settings/CellBroadcastChannel;->getChannelState()Z
move-result v8
move/from16 v4, v16
move/from16 v5, v16
invoke-direct/range {v3 .. v8}, Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;-><init>(IIIIZ)V
aput-object v3, v15, v1
.line 421
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
move-object/from16 v0, p0
iget-object v3, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->val$oldChannel:Lcom/mediatek/settings/CellBroadcastChannel;
#calls: Lcom/mediatek/settings/CellBroadcastSettings;->updateChannelToDatabase(Lcom/mediatek/settings/CellBroadcastChannel;Lcom/mediatek/settings/CellBroadcastChannel;)Z
invoke-static {v1, v3, v12}, Lcom/mediatek/settings/CellBroadcastSettings;->access$1600(Lcom/mediatek/settings/CellBroadcastSettings;Lcom/mediatek/settings/CellBroadcastChannel;Lcom/mediatek/settings/CellBroadcastChannel;)Z
move-result v1
if-eqz v1, :cond_2
.line 422
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
#calls: Lcom/mediatek/settings/CellBroadcastSettings;->setCellBroadcastConfig([Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;)V
invoke-static {v1, v15}, Lcom/mediatek/settings/CellBroadcastSettings;->access$1100(Lcom/mediatek/settings/CellBroadcastSettings;[Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;)V
.line 432
.end local v2 #tempOldChannelId:I
.end local v12 #newChannel:Lcom/mediatek/settings/CellBroadcastChannel;
.end local v13 #newChannelId:I
.end local v15 #objectList:[Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;
.end local v16 #tempNewChannelId:I
:goto_0
return-void
.line 424
.restart local v2 #tempOldChannelId:I
.restart local v12 #newChannel:Lcom/mediatek/settings/CellBroadcastChannel;
.restart local v13 #newChannelId:I
.restart local v15 #objectList:[Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;
.restart local v16 #tempNewChannelId:I
:cond_2
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
#calls: Lcom/mediatek/settings/CellBroadcastSettings;->showUpdateDBErrorInfoDialog()V
invoke-static {v1}, Lcom/mediatek/settings/CellBroadcastSettings;->access$1200(Lcom/mediatek/settings/CellBroadcastSettings;)V
goto :goto_0
.line 427
.end local v12 #newChannel:Lcom/mediatek/settings/CellBroadcastChannel;
.end local v15 #objectList:[Lcom/android/internal/telephony/gsm/SmsBroadcastConfigInfo;
.end local v16 #tempNewChannelId:I
:cond_3
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
move-object/from16 v0, p0
iget-object v3, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
const v4, 0x7f0d0105
invoke-virtual {v3, v4}, Lcom/mediatek/settings/CellBroadcastSettings;->getString(I)Ljava/lang/String;
move-result-object v3
#calls: Lcom/mediatek/settings/CellBroadcastSettings;->displayMessage(Ljava/lang/String;)V
invoke-static {v1, v3}, Lcom/mediatek/settings/CellBroadcastSettings;->access$1400(Lcom/mediatek/settings/CellBroadcastSettings;Ljava/lang/String;)V
goto :goto_0
.line 430
.end local v2 #tempOldChannelId:I
.end local v13 #newChannelId:I
:cond_4
move-object/from16 v0, p0
iget-object v1, v0, Lcom/mediatek/settings/CellBroadcastSettings$5;->this$0:Lcom/mediatek/settings/CellBroadcastSettings;
#calls: Lcom/mediatek/settings/CellBroadcastSettings;->displayMessage(Ljava/lang/String;)V
invoke-static {v1, v10}, Lcom/mediatek/settings/CellBroadcastSettings;->access$1400(Lcom/mediatek/settings/CellBroadcastSettings;Ljava/lang/String;)V
goto :goto_0
.end method
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,516 |
package com.zobot.client.packet.definitions.serverbound.play
import com.zobot.client.packet.Packet
case class EntityAction(entityId: Int, actionId: Int, jumpBoost: Int) extends Packet {
override lazy val packetId = 0x15
override lazy val packetData: Array[Byte] =
fromVarInt(entityId) ++
fromVarInt(actionId) ++
fromVarInt(jumpBoost)
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,099 |
On Friday the poker world learned of the latest innovation to hit the tournament circuit, an interesting new buy-in and payout format dubbed "Multi-Prize Pool Poker." Multi Prize Pool Poker was developed by UK poker pro Roberto Romanello and will make its debut at the Dusk Till Dawn Poker Club in London, England on January 10, 2013.
I have to admit that when I read the headline I was expecting an "innovation" something along the lines of the ISPT's combination of live and online tournaments, or the International Poker Federation's Duplicate Poker, but reading further into Multi Prize Pool Poker the idea seems plausible and is probably the best innovation to hit the poker world since Rush Poker.
So, what is Multi Prize Pool Poker? The game basically boils down to the buy-in amount and the prize-structure, as it allows players to make three successively larger buy-ins, with each tier allowing the player to compete for that percentage of the prize pool. In its simplest terms, Multi Prize Pool Poker can best be described by thinking of it like a cash game where you have side pots.
For instance, a Multi Prize Pool tournament could have buy-in levels of $100, $200, and $300. A player can choose to buy-in to tier 1 for $100; they could buy-in to the first two tiers for $300; or they could buy into all three tiers by ponying-up $600. The tier or tiers a player buys into determines what percentage of the prize-pool they are eligible to receive.
If the first tier in the above example received 100 entrants, all 100 entrants are eligible for the $10,000 prize-pool. If 50 of those players also bought into tier 2 only those 50 players would be eligible for the $5,000 prize-pool of tier 2 (which is on top of the $10,000 prize-pool). Finally, if a further 10 of these players decided to buy into the third tier they alone would be eligible for the additional $1,000 in prize money up for grabs in tier 3.
What makes Multi Prize Pool Poker so interesting is that there would essentially be a 10-man Sit & Go inside of the larger tournament, as well as a 50-player tournament; meaning more bubbles and more chances to cash in the event.
If the winner of the tournament only registered for the first tier, they are only eligible for that part of the prize-pool (essentially the main pot, using our cash game example). If the winner registered for all three tiers, they would win the first-place prize from each of the three prize-pools of $10,000, $5,000, and $1,000.
If the winner registered only for Tier 1, a player who registered for all three tiers and finished second would win the top prizes from the $5,000 and $1,000 prize-pools. If that runner-up only registered for Tier 1 and Tier 2, Tier 3's top prize would still be up for grabs, going to the highest finisher among the 10 players who registered for that level.
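The tier-settlement logic described above can be sketched in a few lines of Python. The function names and the per-tier prize lists below are made up for illustration; a real event would derive each tier's prize list from its entries and rake.

```python
# Sketch of Multi Prize Pool payouts: each tier pays out independently,
# and a finisher collects from every tier they registered for.
# Names and prize figures here are illustrative, not an official structure.

def tier_payouts(finish_order, tier_entrants, tier_prizes):
    """finish_order: players listed best-to-worst; tier_entrants: set of
    players registered for this tier; tier_prizes: prizes, best first."""
    eligible = [p for p in finish_order if p in tier_entrants]
    # zip truncates, so only the top len(tier_prizes) eligible players cash
    return dict(zip(eligible, tier_prizes))

def settle(finish_order, tiers):
    totals = {}
    for entrants, prizes in tiers:
        for player, prize in tier_payouts(finish_order, entrants, prizes).items():
            totals[player] = totals.get(player, 0) + prize
    return totals

# Winner "A" bought tier 1 only; "B" bought all three tiers; "C" bought two.
tiers = [
    ({"A", "B", "C"}, [6000, 3000, 1000]),  # tier 1 prize list
    ({"B", "C"},      [3500, 1500]),        # tier 2 prize list
    ({"B"},           [1000]),              # tier 3 prize list
]
print(settle(["A", "B", "C"], tiers))
```

Even though "A" wins the event outright, "B" banks more in total by taking first place in the side prize-pools of tiers 2 and 3.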
As you can see, the possibilities for this format are endless, and it offers pros a better chance to cash (although they will have to pay more) without giving them an edge like reentry tournaments offer.
I'm actually excited for the Multi Prize Pool Poker debut in January, eagerly anticipating the poker community's reaction to the innovative format. | {
"redpajama_set_name": "RedPajamaC4"
} | 3,288 |
using System;
using System.IO;
namespace James.Data.Grinding
{
/// <summary>
/// Logs grinding progress to the console; subclasses override the
/// protected hooks to implement the actual per-row work.
/// </summary>
public class ConsoleDataRowGrinder : IDataRowGrinder
{
public void BeforeGrinding()
{
Console.WriteLine("Beginning to grind.");
BeforeGrinding(Console.Out);
}
protected virtual void BeforeGrinding(TextWriter console)
{ }
public void GrindRow(dynamic row)
{
GrindRow(row, Console.Out);
}
protected virtual void GrindRow(dynamic row, TextWriter console)
{ }
public void AfterGrinding()
{
AfterGrinding(Console.Out);
Console.WriteLine("Done grinding.");
}
protected virtual void AfterGrinding(TextWriter console)
{ }
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,149 |
Eurafrasia, Eurasiafrica, Africa-Eurasia, Afro-Eurasia, Afroeurasia, Euroafricasia, Asieuroafrica, the Euro-Asian-African continent, or simply Eufrasia, is the largest supercontinent on Earth. It lies in the Eastern Hemisphere and part of the Western Hemisphere and comprises Africa and Eurasia (the latter made up of Europe and Asia). It is considered a "tricontinental assembly". Also called the "Old World" or the "Old Continent", it spans more than 85 million km² out of a total of nearly 150 million km² of emerged land on the planet, making it the world's largest region. In 2008 it was home to about 86% of the world's population, and it has been the most populated area throughout human history. Isaac Asimov calculated its total population and area by adding up the figures for each of the continents involved, and commented: "Disregarding the Suez Canal, one can go from the Cape of Good Hope to the Bering Strait, or to Portugal or Lapland, without crossing salt water; so that set of lands forms a single continent."
"Eurafrasia" is a strictly geographical neologism. The concept is related to other similar terms, such as ecumene or "World-Island", the latter coined by the English geographer and geopolitical thinker Halford John Mackinder, in whose work "The Geographical Pivot of History" it first appeared. Mackinder defined it as the great continuous continental landmass. This terminology, along with the others, is frequently used in a great many geopolitical texts and articles.
Related terms
The following concepts are related to the name "Eurafrasia", although they differ on certain points:
Ecumene (from the Greek οἰκουμένη, "inhabited land"): a concept from classical antiquity referring to the world as it was then known, which was limited to Europe and parts of Asia and Africa. Several historians, such as Marshall Hodgson, Alfred Kroeber, Arnold Toynbee, William McNeill and Leften Stavrianos, revived this ancient term to refer to the agrarian civilizations of the fourth millennium, which were in contact with one another. It was Hodgson, however, who began to use "Afro-Eurasia" in connection with "ecumene".
Old World: a term tied to the Age of Discovery, in contrast with the New World, represented by the Americas. The term has become outdated, however, and the name "Eurafrasia" is preferred to cover the three continents, since it reflects the constant relationship among them. William McNeill uses this term as a synonym for "Afro-Eurasia", although others reject it as Eurocentric.
World-Island: a concept coined by the English geographer Halford John Mackinder in a theory presented in his article "The Geographical Pivot of History". Mackinder defines the "World-Island" as a continuous continental mass, which technically excludes islands such as Great Britain. He also regarded it as the center of the world and a privileged region in terms of wealth and population. While Marco Valigi, Gabriele Natalizzia and Carlo Frappi consider "Eurafrasia" a synonym of World-Island, other authors point out that the difference between the two concepts is that the former includes all the islands considered part of Africa, Europe and Asia. For his part, Isaac Asimov, in his essay "The Island of the World", defends this name and, while he finds the acronym "Eurafrasia" ridiculous, he confesses that he was tempted to propose it.
Arnold Toynbee used this term to refer to the "complex of interconnected continents". According to the American historian Ross E. Dunn, the scant recognition of the term "Eurafrasia" and its variants is due to the "myth of the continents", according to which there exist seven landmasses separated by intercontinental waters; this led to a dogmatism that prevented South America and North America from being considered a single continent in the 1950s. However, both Dunn and David Christian, of San Diego State University, consider the concept indispensable for studying historical or social phenomena that took place beyond the borders of Asia, Europe and Africa, as in the case of the Roman Empire or the Silk Road.
Geology
Although Eurafrasia is considered to consist of two or three separate continents, it is not a supercontinent in the strict sense. Rather, it is the largest part of the current supercontinent cycle. According to Christian, studying the geological development of Afro-Eurasia makes it possible to see it as one large structure with a history of its own beyond human history.
The oldest part of Eurafrasia is the Kaapvaal Craton, which, together with Madagascar and parts of India and western Australia, formed part of the first supercontinent, Vaalbara or Ur, around three billion years ago. Since then, supercontinents have repeatedly assembled and broken apart. After the breakup of Pangaea two hundred million years ago, the North American and Eurasian plates formed Laurasia, while the African plate remained in Gondwana, from which the Indian plate later detached. The latter collided with southern Asia and began the formation of the Himalayas; in the same period it also fused with the Australian plate. The Arabian plate separated from Africa thirty million years ago and collided with the Iranian plate between nineteen and twelve million years ago, allowing the formation of the Alborz and Zagros mountain ranges. After this initial connection of the three continents, the Betic corridor closed, together with the Gibraltar Arc, a little less than six million years ago, joining North Africa with Iberia. For this reason, Solé Sabarís states that the study of the geology of Spain constitutes a fundamental field for studying the development of Eurafrasia. This caused the Mediterranean basin to dry out, producing the Messinian salinity crisis. Eurasia and Africa then separated again: the Zanclean flood of 5.33 million years ago returned the waters to the Mediterranean Sea through the Strait of Gibraltar, and the rifting of the Gulf of Suez accentuated the division between Africa and the Arabian plate.
Today, Africa is connected to Asia only by a land bridge (divided by the Suez Canal at the Isthmus of Suez) and is separated from Europe by the Strait of Gibraltar and the Strait of Sicily. The paleogeologist Ronald Blakey has described the next 15 to 100 million years of tectonic development as fairly settled and predictable. In that time, Africa is expected to continue drifting northward. The Strait of Gibraltar will close within six hundred thousand years and the Mediterranean Sea will evaporate. No supercontinent will form in this time, although the geological record is full of sudden changes in tectonic activity that make projections into the future "very, very speculative". There are three possibilities, called Novopangaea, Amasia and Pangaea Ultima. In the first two, the Pacific Ocean closes and Africa remains fused with Eurasia, but this supercontinent splits as Africa and Europe head west; in the last, Europe, Asia and Africa rotate eastward and the Atlantic Ocean closes.
Subdivisions
Eurafrasia is divided at the Suez Canal into Africa and Eurasia; the latter can be subdivided into Europe and Asia. For historical and cultural reasons, it has also been divided into Eurasia-North Africa and sub-Saharan Africa.
Asia
Western Asia
Central Asia
East Asia
South Asia
North Asia
Southeast Asia
Europe
Northern Europe
Western Europe
Central Europe
Eastern Europe
Southern Europe
Africa
North Africa
West Africa
Central Africa
East Africa
Southern Africa
Extreme points
Listed below are the extreme points of Eurafrasia, that is, the geographical locations that lie farthest in each cardinal direction within the supercontinent. It has been proposed that they be calculated by taking into account the extreme points of the continents that make it up.
Eurafrasia (including islands)
North - Cape Fligely (Rudolf Island, Franz Josef Land, Russia)
South - Cape Agulhas, South Africa
West - Santo Antão, Cape Verde
East - Big Diomede, Russia
Eurafrasia (mainland)
North - Cape Chelyuskin, Russia
South - Cape Agulhas, South Africa
West - Cap-Vert Peninsula, Senegal
East - Cape Dezhnev, Russia
Maps
See also
Annex: Political division of Eurafrasia
Annex: Political division of Africa
Annex: Political division of Asia
Annex: Political division of Europe
References
External links
Supercontinents
Acronyms
Neologisms
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,148 |
Prescinseûa, also called quagliata or cagliata, is a dairy product typical of the province of Genoa, whose name comes from rennet (caglio in Italian; presù in Genoese). It has a consistency halfway between yogurt and curd cheese, and it is used to prepare the famous focaccia col formaggio, the torta pasqualina and almost all of the savory pies typical of Liguria, as well as barbagiuai (fried pumpkin ravioli).
History
The first references date from 1383, and from 1413 a law of the Republic of Genoa designated prescinseûa as the only tribute the Genoese could pay to the Doge.
It is believed to be a product that reached Genoa from the East.
Preparation
It is obtained by letting 2 liters of fresh milk rest in a pot for 48 hours. After this time, a quarter of the milk (half a liter) is taken from the pot and heated to 40-50 °C, and 5 grams of rennet are added. This is then mixed back into the milk and left to rest for 4 hours.
Companies producing prescinseûa
Centro Latte Tigullio, in Rapallo (Genoa).
Lylag, in Genoa.
Notes
External links
Quagliata ligure, on the official agriculture website of the Region of Liguria (in Italian)
Cuisine of Genoa
Cheeses of Italy
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,578 |
Q: codesnippets not showing correctly in VS 2015 Update 2 While developing C# projects, the installed Visual C# snippets (like ctor, prop, ...) no longer work in Visual Studio 2015 with Update 2.
When pressing Ctrl+K, X, instead of showing all snippets (including the Visual C# snippets) I only get the default snippets for ASP.NET MVC 4:
mvcaction4 and mvcpostaction4
I have tried resetting the environment (tools->import and export settings) and also: devenv /resetuserdata.
CodeSnippet manager shows all code snippets, the files are there correctly
(C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC#\Snippets\1031\Visual C#)
It seems as if visual studio doesn't recognize the project environment (C#) anymore.
Rest of Intellisense is working correctly, just not showing the right code snippets.
UPDATE: Installation repair and a complete uninstall and new install of visual studio 2015 Update 2 didn't help.
A: This problem is solved after updating to Visual Studio 2015 Update 3.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,272 |
1. Combine the Dried Yeast with the warmed milk in a mixing bowl to dissolve the yeast, cover the mixing bowl with plastic wrap and allow it to prove for 30 minutes in a warm place.
2. Once proved. Add in the Whey Protein 360, Egg Yolks, Salt and Stevia and stir thoroughly until a dough has formed. Cover once again, and prove for another 30 minutes.
3. Lightly flour a work surface, grab your dough and divide it into 6 even balls.
4. Now lightly dust your hands in flavour, roll the dough ball in your hands until you rich a good doughnut thickness, joining the two ends to form your doughnut ring.
5. To cook, turn your fryer up to 180c and gently slide in your doughnuts using a slotted spoon. Cook for 2 minutes each side, remove and then drain on a paper towel to remove any excess oil.
6. If icing or topping, allow the doughnuts to cool for 10 minutes before applying any icing. | {
"redpajama_set_name": "RedPajamaC4"
} | 735 |
Q: Redirecting User After Iframe Source Change? I am coding in javascript; i know it's not a neat solution.
Here is my code. I am using an overflow iframe. I want the user to be redirected after the iframe source changes. But my code is not redirecting the user after the iframe source change.
Please note: I want the iframe to be placed on a page, and I want to check the history of the link inside the iframe to detect an iframe source change, not a page source change. I have no control over the content of the iframe. It's a cross-domain iframe. For example:
http://freestarbucksgiftcard4u.blogspot.in/2012/04/stalk-overflow-testing.html
Also, the link inside the iframe is random, so it can't be determined in advance. I can just compare it with the history to detect a change.
Iframe code:
<div class="offerlink"
style="overflow: hidden; width: 467px; height: 321px; position: relative;" id="i_div"><iframe name="i_frame" src="http://www.villanovau.com/form/pm/070809_vu_pm_save/?source=193664zv1&utm_source=Quigo&utm_medium=PPC&utm_campaign=VU_PM&WT.mc_id=4321" style="border: 0pt none ; left: -518px; top: -274px; position: absolute; width: 1242px; height: 616px;" scrolling="no"></iframe></div>
Javscript code:
var Delay = 0;
var AppearDelay = 10;
var oldHistLength = history.length;
var once_per_session=1;
var unknown=true;
function setcookie() {
if (unknown==false){
document.cookie="alerted=yes"
}
}
function get_cookie(Name) {
var search = Name + "="
var returnvalue = "";
if (document.cookie.length > 0) {
offset = document.cookie.indexOf(search)
if (offset != -1) { // if cookie exists
offset += search.length
// set index of beginning of value
end = document.cookie.indexOf(";", offset);
// set index of end of cookie value
if (end == -1)
end = document.cookie.length;
returnvalue=unescape(document.cookie.substring(offset, end))
}
}
return returnvalue;
}
setInterval ( "checkHistory()", 1000 );
function checkHistory()
{
if (oldHistLength != history.length)
{
redirect();
oldHistLength = history.length;
}
}
$(document).ready(function()
{
$('.offerlink').click(function()
{
setTimeout('redirect()', Delay*1000);
});
});
This solution is not working for me. Please help me in debugging this javascript.
A: If you have control over the contents generated in the iframe, you can just do like this:
<body onload="top.location.href='newpath.html'">
or a nicer scripted version
<script type="text/javascript">
window.onload=function(){top.location.href='newpath.html';}
</script>
in the page that is loaded inside the iframe.
best,
Michael
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,567 |
Q: complex if condition using e.target in modal.js i was just looking at the source of modal.js and came across the following difficulty: i don't understand the if statement below very well:
if (this.$element[0] !== e.target && !this.$element.has(e.target).length)
ok i do understand parts of it , but i don't entirely understand it .
i did see a few helpful resources that did help me a bit , but i still can't completely understand that if statement :
i saw this thread .
i also read W3C .
but i still can't completely grasp whats happening .
so heres what i understand so far .
in the below line :
if (this.$element[0] !== e.target && !this.$element.has(e.target).length)
this.$element[0] is a native JS HTML element and we are checking to see if it matches e.target , what about the next condition ?
!this.$element.has(e.target).length
what are we checking for here ?? has(e.target) ? i have never seen that before .
i have seen something like this.$element.hasClass('classname') , but what's with the has(e.target).length ?
I would be really grateful , if somebody could explain with an example , what that line is doing
the line can be found on git too , line 135.
Thank you.
Gautam.
A: It is checking that it contains the element or not.
refer: event.target
Hope this helps
A: So my question was what is the below line doing:
if (this.$element[0] !== e.target && !this.$element.has(e.target).length)
lets evaluate it piece by piece ,
this.$element[0] - is an HTML element.
e.target - is an HTML object on which an event was executed eg. click , hover etc.
So the first condition is
if this particular element this.$element[0] is not equal to !== the element on which the event occurred e.target .
and the code looks as follows: this.$element[0] !== e.target
now coming to the secound condition:
!this.$element.has(e.target).length
! - the not operator
this.@element - a Jquery element
has - A jquery method/function
length - returns the length .
So basically we are saying: this.$element.has(e.target) filters the jQuery set down to the elements that contain e.target as a descendant. .length then gives the number of elements left in that set: 0 if the clicked element is not inside this.$element, 1 (or more) if it is. Since 0 is falsy, !this.$element.has(e.target).length is true exactly when the click happened outside this.$element.
So the whole condition reads: if the clicked element is not the modal itself and is not inside the modal, then dismiss the modal.
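To make this concrete, here is a minimal sketch of what .has(e.target).length evaluates to. It is only an illustration: plain objects stand in for DOM nodes, and jQuery itself is not used.

```javascript
// jQuery's .has(target) filters a set down to the elements that CONTAIN
// `target` as a descendant; .length on the result is then 0 or 1 here,
// i.e. falsy or truthy. Plain objects stand in for DOM nodes below.
function hasDescendant(descendants, target) {
  return descendants.includes(target) ? [target] : []; // jQuery-like set
}

const okButton = { id: 'ok' };        // lives inside the modal
const backdrop = { id: 'backdrop' };  // lives outside the modal
const modalDescendants = [okButton];

// Click inside the modal: length is 1 (truthy), so !length is false and
// the whole if-condition fails -> the modal stays open.
const inside = hasDescendant(modalDescendants, okButton).length;   // 1

// Click outside (e.g. the backdrop): length is 0 (falsy), so !length is
// true and, together with target !== modal, the modal gets dismissed.
const outside = hasDescendant(modalDescendants, backdrop).length;  // 0

console.log(inside, outside);
```

So .length is just the element count of the filtered set, used as a boolean.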
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,516 |
\section{Introduction}
Gravitation is the weakest force in nature, although it is the dominant force in the large-scale universe. The theory of gravity is classical in origin, while the other fundamental forces describing microscopic aspects of nature are quantum mechanical. There are several attempts to unify gravity with the forces of the Standard Model. The search to unify gravitation and electromagnetism has a long history. The first studies were carried out by Faraday \cite{Faraday} and then by Maxwell \cite{Maxwell}, Heaviside \cite{Heaviside1}, \cite{Heaviside2}, Weyl \cite{Weyl}, Kaluza-Klein \cite{KK}, among others. A formal analogy between the gravitational and the electromagnetic fields led to the notion of Gravitoelectromagnetism (GEM) to describe gravitation. GEM is based on the profound analogy between Newton's law for gravitation and Coulomb's law for electricity. There are also studies based on Einstein's General Relativity (GR) that focus on gravitoelectromagnetism. For example, the Lense-Thirring effect showed that in GR a rotating massive body creates a gravitomagnetic field \cite{LT}. The GEM theory emerges from the Einstein theory of gravity (GR) in the linear approximation, i.e., $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$, where $h_{\mu\nu}$ is the perturbation to linear order. The main structure of GEM emerges as given later in Eq. (1) to Eq. (4). The GEM potential is related to $h_{\mu\nu}$ \cite{Mashhon}. In addition, the structure of GEM has a close relationship to the theory of electromagnetism.
There are three different ways to construct GEM theory: (1) based on the similarity between the linearized Einstein and Maxwell equations \cite{Mashhon}; (2) based on an approach using tidal tensors \cite{Filipe}; and (3) based on the decomposition of the Weyl tensor into gravito-magnetic (${\cal B}_{ij}=\frac{1}{2}\epsilon_{ikl}C^{kl}_{0j}$) and gravito-electric (${\cal E}_{ij}=-C_{0i0j}$) components \cite{Maartens}. A Lagrangian formulation for GEM has been developed \cite{Khanna} using the Weyl tensor approach. GEM allows scattering processes with gravitons as an intermediate state, like the photon in electromagnetic scattering. The theory of Gravitoelectromagnetism has been extended from a theory of classical gravity to a quantized theory \cite{Khanna} that allows a perturbative approach to calculating phenomena in gravity. These provide reasonable results in some areas of gravity. In contrast, the theory developed by Fierz and Pauli \cite{FP} was for a massive spin-2 field on flat space-time. However, this theory suffered from dangerous pathologies, such as the absence of a good massless limit, among others, and was later discarded when a new approach to the gravitational field was advanced. This paper is devoted to the study of gravitational Bhabha scattering using the Lorentz-violating framework of GEM.
Lorentz violation can emerge in models unifying gravity with quantum physics, such as string theory \cite{Samuel}. Tiny violations of Lorentz and CPT symmetries may be detected experimentally near the Planck scale, $\sim 10^{19}\, \mathrm{GeV}$. The study of Lorentz violation as an extension of the Standard Model (SM) has been undertaken. The Standard Model Extension (SME) is an extensive theoretical framework that includes the SM and all possible operators that break Lorentz symmetry \cite{SME1}, \cite{SME2}. The SME is divided into two parts: (i) a minimal extension, which has operators with dimensions $d\leq 4$ and preserves conventional quantization, hermiticity, gauge invariance, power-counting renormalizability, and positivity of the energy, and (ii) a non-minimal version of the SME associated with operators of higher dimensions.
Another interesting way to investigate Lorentz violation is to modify the interaction vertex, i.e., to add a new non-minimal coupling term to the covariant derivative. The non-minimal coupling term may be CPT-odd or CPT-even. There are several applications of this new interaction term: its effect on the cross section of electron-positron scattering has been investigated \cite{Casana1}, the modification of the Dirac equation in the non-relativistic regime has been analyzed \cite{Casana2}, radiative generation of the CPT-even gauge terms of the SME has been constructed \cite{Casana3}, the CPT-even aether-like Lorentz-breaking term has been generated in extended Lorentz-breaking QED \cite{Petrov1}, \cite{Petrov2}, effects induced on the magnetic and electric dipole moments have been investigated \cite{Casana4}, and Lorentz violation in Bhabha scattering in electromagnetic theory at finite temperature has been studied \cite{Our2}, among others. In this paper Bhabha scattering in the non-minimal coupling framework for GEM is analyzed at finite temperature. The Lorentz-violating parameter belongs to the gravity sector of the SME and has dimension five, i.e., it is part of the non-minimal version of the SME. The temperature in stars would indicate the magnitude of the total contribution of the Lorentz-violating term to the cross section. It is important to note that there is a similar term in the non-minimal version of the electromagnetic sector of the SME, which shows the similarity between these two theories. The Thermo Field Dynamics (TFD) formalism is used to introduce finite temperature effects in order to estimate the variation of the cross section for GEM.
TFD is a thermal quantum field theory formalism \cite{Umezawa1}, \cite{Umezawa2}, \cite{Umezawa22}, \cite{Khanna1}, \cite{Khanna2}. Its basic elements are: (i) the doubling of the original Fock space, composed of the original space and a fictitious (tilde) space, and (ii) the Bogoliubov transformation, which is a rotation between these two spaces. The original and tilde spaces are related by a mapping, the tilde conjugation rules. The physical variables are described by non-tilde operators. As a consequence, the propagator is written in two parts: $T = 0$ and $T\neq 0$ components. TFD is a natural formalism to describe systems in equilibrium at finite temperature.
This paper is organized as follows. In section II, the GEM theory in its Lagrangian formalism is presented. In section III, the GEM Lagrangian with Lorentz-violating term is considered. In section IV, a brief introduction to TFD formalism is presented. In section V, the differential cross section for Bhabha scattering for GEM including Lorentz-violating parameter at finite temperature is calculated. In section VI, some concluding remarks are presented.
\section{GEM and its lagrangian formalism}
The Gravitoelectromagnetic (GEM) theory describes the dynamics of the gravitational field. In flat space-time the Maxwell-like equations of GEM are given as
\begin{eqnarray}
&&\partial^i{\cal E}^{ij}=-4\pi G\rho^j,\label{01}\\
&&\partial^i{\cal B}^{ij}=0,\label{02}\\
&&\epsilon^{( i|kl}\partial^k{\cal B}^{l|j)}+\frac{\partial{\cal E}^{ij}}{\partial t}=-4\pi G J^{ij},\label{03}\\
&&\epsilon^{( i|kl}\partial^k{\cal E}^{l|j)}+\frac{\partial{\cal B}^{ij}}{\partial t}=0,\label{04}
\end{eqnarray}
where ${\cal E}_{ij}$ is the gravitoelectric field and ${\cal B}_{ij}$ is the gravitomagnetic field that are defined in terms of the Weyl tensor components ($C_{ijkl}$), i.e., ${\cal E}_{ij}=-C_{0i0j}$ and ${\cal B}_{ij}=\frac{1}{2}\epsilon_{ikl}C^{kl}_{0j}$. Here $G$ is the gravitational constant, $\epsilon^{ikl}$ is the Levi-Civita symbol, $\rho^j$ is the vector mass density and $J^{ij}$ is the mass current density. The symbol $(i|\cdots|j)$ denotes symmetrization of the first and last indices, i.e., $i$ and $j$.
The GEM fields ${\cal E}$ and ${\cal B}$, with components ${\cal E}^{ij}$ and ${\cal B}^{ij}$, are defined as (details are given in \cite{Khanna})
\begin{eqnarray}
{\cal E}&=&-\mathrm{grad}\,\varphi-\frac{\partial \tilde{\cal A}}{\partial t},\\
{\cal B}&=&\mathrm{curl}\,\tilde{\cal A},
\end{eqnarray}
where $\tilde{\cal A}$, with components ${\cal A}^{\mu\nu}$, is a symmetric rank-2 tensor field, the gravitoelectromagnetic tensor potential, and $\varphi$ is the GEM vector counterpart of the electromagnetic scalar potential $\phi$. The tensor fields, ${\cal E}_{ij}$ and ${\cal B}_{ij}$, are elements of a rank-3 tensor, the gravitoelectromagnetic tensor, $F^{\mu\nu\alpha}$, defined as
\begin{eqnarray}
F^{\mu\nu\alpha}=\partial^\mu{\cal A}^{\nu\alpha}-\partial^\nu{\cal A}^{\mu\alpha},
\end{eqnarray}
where $\mu, \nu,\alpha=0, 1, 2, 3$. The non-zero components of ${F}^{\mu\nu\alpha}$ are ${F}^{0ij}={\cal E}^{ij}$ and ${F}^{ijk}=\epsilon^{ijl}{\cal B}^{lk}$ where $i, j=1, 2, 3$. Using the gravitoelectromagnetic tensor the Maxwell-like equations, eqs. (\ref{01})-(\ref{04}), are written in a covariant form as
\begin{eqnarray}
\partial_\mu{F}^{\mu\nu\alpha}&=&4\pi G{\cal J}^{\nu\alpha},\\
\partial_\mu{\cal G}^{\mu\langle\nu\alpha\rangle}&=&0,
\end{eqnarray}
where ${\cal G}^{\mu\nu\alpha}$ is the dual GEM tensor, that is defined as
\begin{eqnarray}
{\cal G}^{\mu\nu\alpha}=\frac{1}{2}\epsilon^{\mu\nu\gamma\sigma}\eta^{\alpha\beta}{F}_{\gamma\sigma\beta},
\end{eqnarray}
and ${\cal J}^{\nu\alpha}$ is a rank-2 tensor that depends on the mass density, $\rho^i$, and the current density $J^{ij}$. With these definitions the GEM Lagrangian is written as
\begin{eqnarray}
{\cal L}_G=-\frac{1}{16\pi}{F}_{\mu\nu\alpha}{F}^{\mu\nu\alpha}-G\,{\cal J}^{\nu\alpha}{\cal A}_{\nu\alpha}.\label{L_G}
\end{eqnarray}
This Lagrangian formalism is constructed using the symmetric gravitoelectromagnetic tensor potential $A_{\mu\nu}$ as the fundamental field that describes the gravitational interaction. Although $A_{\mu\nu}$ has symmetry properties similar to those of $h_{\mu\nu}$, the tensor defined in Einstein gravity in the weak-field approximation, our approach is different, since the nature of $A_{\mu\nu}$ is quite different from that of $h_{\mu\nu}$. An essential difference is that the tensor potential is connected directly with the description of the gravitational field in flat space-time; it has nothing to do with the perturbation of the space-time metric.
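A simple consistency check follows from these definitions: since $F^{\mu\nu\alpha}$ is antisymmetric in its first two indices, taking the divergence of the field equation gives
\begin{eqnarray}
\partial_\nu\partial_\mu F^{\mu\nu\alpha}=0\quad\Rightarrow\quad\partial_\nu{\cal J}^{\nu\alpha}=0,
\end{eqnarray}
i.e., conservation of the mass-current source, the GEM analogue of current conservation in electromagnetism.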
\section{GEM with Lorentz-violating term}
The main objective of this paper is to calculate the differential cross section for Bhabha scattering using the graviton-fermion interaction described by the Lagrangian
\begin{eqnarray}
{\cal L}&=&-\frac{1}{16\pi}F_{\mu\nu\alpha}F^{\mu\nu\alpha}-\bar{\psi}\left(i\gamma^\mu \overleftrightarrow{D_\mu}-m\right)\psi,\label{eq1}
\end{eqnarray}
where the first term is the GEM lagrangian and the second term is the Dirac lagrangian. Here $\psi$ is the fermion field with $\bar{\psi}=\psi^\dagger \gamma_0$, $m$ is the fermion mass, $\gamma^\mu$ are the Dirac matrices and $D_\mu$ is the covariant derivative. To study Lorentz violation effects in the graviton-fermion interaction, the usual covariant derivative is modified by a non-minimal coupling term, i.e.,
\begin{eqnarray}
\overleftrightarrow{D_\mu}=\overleftrightarrow{\partial_\mu}-\frac{1}{2}igA_{\mu\nu}\overleftrightarrow{\partial^\nu}+\frac{1}{4}\bigl(k^{(5)}\bigl)_{\mu\nu\alpha\lambda\rho}\gamma^\nu F^{\alpha\lambda\rho},\label{der}
\end{eqnarray}
where $g=\sqrt{8\pi G}$ is the gravitational coupling constant and $\bigl(k^{(5)}\bigl)_{\mu\nu\alpha\lambda\rho}$ is a tensor that belongs to the gravity sector of the non-minimal SME with mass dimension $d=5$ \cite{QG-K}. The unit of this parameter is therefore ${{\rm GeV}^{4-d}}$. Since the action is dimensionless, the lagrangian ${\cal L}$ in eq. (\ref{eq1}) has dimension ${\rm GeV}^4$, so the tensor potential $A_{\mu\nu}$ has dimension ${\rm GeV}^1$. Here the Weyl formulation is used to investigate the flat space-time treatment of the theory of gravitation, i.e. GEM. This formulation is similar to the case of fermions with electromagnetic interactions. The correspondence between Lorentz violation effects for the electromagnetic (EM) field and for the weak-field gravitational field, i.e. GEM, in the minimal version of the SME has been studied \cite{QG}. In this paper the similarity between GEM with Lorentz violation and the non-minimal part of the electromagnetic sector of the SME is utilized.
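As a quick dimensional check (with $\Gamma$ denoting a generic product of gamma matrices): since $[A_{\mu\nu}]={\rm GeV}$ one has $[F^{\alpha\lambda\rho}]={\rm GeV}^2$, while a fermion bilinear carries $[\bar{\psi}\Gamma\psi]={\rm GeV}^3$, so the Lorentz-violating piece of eq. (\ref{der}) contributes a term of dimension
\begin{eqnarray}
\bigl[\bigl(k^{(5)}\bigl)F\,\bar{\psi}\Gamma\psi\bigl]={\rm GeV}^{-1}\cdot{\rm GeV}^{2}\cdot{\rm GeV}^{3}={\rm GeV}^{4}
\end{eqnarray}
to the lagrangian, as required, consistent with $\bigl[\bigl(k^{(5)}\bigl)\bigl]={\rm GeV}^{4-d}={\rm GeV}^{-1}$ for $d=5$.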
Using eq. (\ref{der}) the interaction part of the lagrangian becomes
\begin{eqnarray}
{\cal L}_I&=&-\frac{g}{4}A_{\mu\nu}\left(\bar{\psi}\gamma^\mu\partial^\nu\psi-\partial^\mu\bar{\psi}\gamma^\nu\psi\right)-\frac{1}{4}\bigl(k^{(5)}\bigl)_{\mu\nu\alpha\lambda\rho} F^{\alpha\lambda\rho}\bar{\psi}\sigma^{\mu\nu}\psi,
\end{eqnarray}
where $\sigma^{\mu\nu}=\frac{i}{2}\left(\gamma^\mu\gamma^\nu-\gamma^\nu\gamma^\mu\right)$ and by definition $A\overleftrightarrow{\partial^\mu}B\equiv\frac{1}{2}\left(A\partial^\mu B-\left(\partial^\mu A\right)B\right)$. The first term describes the usual interaction between gravitons and fermions and the second term is a new interaction that leads to Lorentz violation. This new interaction describes the non-minimal coupling between the GEM field and the fermion bilinear. It is similar to the non-minimal coupling between the electromagnetic field and the fermion bilinear \cite{Kost_H}. Then the vertices are
\begin{eqnarray}
\bullet&\rightarrow& -\frac{ig}{4}\left(\gamma^\lambda p_1^\rho+p_2^\lambda\gamma^\rho\right)\equiv V^{\lambda\rho}_{(0)}\\
\circ&\rightarrow& -\frac{1}{2}\bigl(k^{(5)}\bigl)^{\mu\nu\alpha\lambda\rho}\sigma_{\mu\nu} q_\alpha\equiv V^{\lambda\rho}_{(1)}.
\end{eqnarray}
Here the momentum transfer $q_\alpha$ is taken as $q_\alpha=(\sqrt{s},0)$, with $\sqrt{s}$ being the center-of-mass energy.
The main interest of this paper is to study the graviton-fermion interaction at finite temperature. In the next section thermal quantum field theory is introduced.
\section{TFD formalism}
In this section a brief introduction to the TFD formalism is given. TFD is a real-time formalism of quantum field theory at finite temperature. It is obtained when a thermal vacuum, or ground state, $|0(\beta)\rangle$, is defined. The thermal average of an observable is given by its vacuum expectation value in an extended Hilbert space. There are two basic ingredients needed to construct the TFD formalism: (a) the doubling of the degrees of freedom in a Hilbert space and (b) the Bogoliubov transformations. This doubling is defined by the tilde ($^\thicksim$) conjugation rules, associating each operator in $S$ with two operators in $S_T$, where the expanded space is $S_T=S\otimes \tilde{S}$, with $S$ being the standard Hilbert space and $\tilde{S}$ the fictitious Hilbert space. For an arbitrary operator ${\cal A}$ the standard doublet notation is
\begin{eqnarray}
{\cal A}^a=\left( \begin{array}{cc} {\cal A}\\
\xi\tilde{{\cal A}}^\dagger \end{array} \right),
\end{eqnarray}
where $\xi =+1(-1)$ for bosons (fermions). The Bogoliubov transformation introduces a rotation in the tilde and non-tilde variables and thermal quantities. The Bogoliubov transformations are different for fermions and bosons.
Consider fermions with $c_p^\dagger$ and $c_p$ being the creation and annihilation operators, respectively, in the standard Hilbert space, and $\tilde{c}_p^\dagger$ and $\tilde{c}_p$ the corresponding operators in the tilde space. For fermions the Bogoliubov transformations are
\begin{eqnarray}
c_p&=&\mathsf{u}(\beta) c_p(\beta) +\mathsf{v}(\beta) \tilde{c}_p^{\dagger }(\beta), \label{f1}\\
c_p^\dagger&=&\mathsf{u}(\beta)c_p^\dagger(\beta)+\mathsf{v}(\beta) \tilde{c}_p(\beta),\label{f2}\\
\tilde{c}_p&=&\mathsf{u}(\beta) \tilde{c}_p(\beta) -\mathsf{v}(\beta) c_p^{\dagger}(\beta),\label{f3} \\
\tilde{c}_p^\dagger&=&\mathsf{u}(\beta)\tilde{c}_p^\dagger(\beta)-\mathsf{v}(\beta)c_p(\beta),\label{f4}
\end{eqnarray}
where $\mathsf{u}(\beta) =\cos \theta(\beta)$ and $\mathsf{v}(\beta) =\sin \theta(\beta)$. The anti-commutation relations for creation and annihilation operators are similar to those at zero temperature and are given as
\begin{eqnarray}
\left\{c(k, \beta), c^\dagger(p, \beta)\right\}&=&\delta^3(k-p),\nonumber\\
\left\{\tilde{c}(k, \beta), \tilde{c}^\dagger(p, \beta)\right\}&=&\delta^3(k-p),\label{ComF}
\end{eqnarray}
and other anti-commutation relations are null.
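For later use, note that the standard TFD identification for fermions fixes $\mathsf{v}^2(\beta)$ to be the Fermi-Dirac distribution, $\mathsf{v}^2(\beta)=\left(e^{\beta\omega}+1\right)^{-1}$, with $\mathsf{u}^2(\beta)=1-\mathsf{v}^2(\beta)$. Then
\begin{eqnarray}
\mathsf{u}^2(\beta)-\mathsf{v}^2(\beta)=1-\frac{2}{e^{\beta\omega}+1}=\frac{e^{\beta\omega}-1}{e^{\beta\omega}+1}=\tanh\Bigl(\frac{\beta\omega}{2}\Bigl),
\end{eqnarray}
which is the origin of the thermal factors of the form $\tanh^2(\beta|k_0|/2)$ that appear in the transition amplitudes below.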
Now consider bosons with $a_p^\dagger$ and $a_p$ being the creation and annihilation operators, respectively, in the standard Hilbert space, and $\tilde{a}_p^\dagger$ and $\tilde{a}_p$ the corresponding operators in the tilde space. The Bogoliubov transformations are then
\begin{eqnarray}
a_p&=&\mathsf{u}'(\beta) a_p(\beta) +\mathsf{v}'(\beta) \tilde{a}_p^{\dagger }(\beta), \\
a_p^\dagger&=&\mathsf{u}'(\beta)a_p^\dagger(\beta)+\mathsf{v}'(\beta) \tilde{a}_p(\beta),\\
\tilde{a}_p&=&\mathsf{u}'(\beta) \tilde{a}_p(\beta) +\mathsf{v}'(\beta) a_p^{\dagger}(\beta), \\
\tilde{a}_p^\dagger&=&\mathsf{u}'(\beta)\tilde{a}_p^\dagger(\beta)+\mathsf{v}'(\beta)a_p(\beta),
\end{eqnarray}
where $\mathsf{u}'(\beta) =\cosh \theta(\beta)$ and $\mathsf{v}'(\beta) =\sinh \theta(\beta)$. Algebraic rules for thermal operators are
\begin{eqnarray}
\left[a(k, \beta), a^\dagger(p, \beta)\right]&=&\delta^3(k-p),\nonumber\\
\left[\tilde{a}(k, \beta), \tilde{a}^\dagger(p, \beta)\right]&=&\delta^3(k-p),\label{ComB}
\end{eqnarray}
and other commutation relations are null.
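Similarly, the standard identification for bosons fixes $\mathsf{v}'^2(\beta)$ to be the Bose-Einstein distribution,
\begin{eqnarray}
\mathsf{v}'^2(\beta)=\frac{1}{e^{\beta\omega}-1},\qquad \mathsf{u}'^2(\beta)-\mathsf{v}'^2(\beta)=\cosh^2\theta(\beta)-\sinh^2\theta(\beta)=1,
\end{eqnarray}
and it is this Bose factor that appears in the thermal part of the graviton propagator below.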
An important note: in the TFD formalism the propagator is written in two parts, one describing the flat space-time contribution and the other displaying the thermal effect. Here our interest is in the graviton propagator at finite temperature, which is given as
\begin{eqnarray}
\langle 0(\beta)|\mathbb{T}\left[A_{\mu\nu}(x)A_{\rho\lambda}(y)\right]|0(\beta)\rangle=i\int \frac{d^4k}{(2\pi)^4}e^{-ik(x-y)}\Delta_{\mu\nu\rho\lambda}(k,\beta),\label{prop}
\end{eqnarray}
where $\mathbb{T}$ is the time ordering operator and $\Delta_{\mu\nu\rho\lambda}(k,\beta)=\Delta_{\mu\nu\rho\lambda}^{(0)}(k)+\Delta_{\mu\nu\rho\lambda}^{(\beta)}(k)$ with
{\small
\begin{eqnarray}
\Delta_{\mu\nu\rho\lambda}^{(0)}(k)&=& \frac{\eta_{\mu\rho}\eta_{\nu\lambda}+\eta_{\mu\lambda}\eta_{\nu\rho}-\eta_{\mu\nu}\eta_{\rho\lambda}}{2k^2}\,\tau_0\nonumber\\
\Delta_{\mu\nu\rho\lambda}^{(\beta)}(k)&=&-\frac{\pi i\delta(k^2)}{e^{\beta k_0}-1}\left( \begin{array}{cc}1&e^{\beta k_0/2}\\ e^{\beta k_0/2}&1\end{array} \right)(\eta_{\mu\rho}\eta_{\nu\lambda}+\eta_{\mu\lambda}\eta_{\nu\rho}-\eta_{\mu\nu}\eta_{\rho\lambda}),
\end{eqnarray}
}
where $\Delta_{\mu\nu\rho\lambda}^{(0)}(k)$ and $\Delta_{\mu\nu\rho\lambda}^{(\beta)}(k)$ are zero and finite temperature parts respectively and
\begin{eqnarray}
\tau_0=\left( \begin{array}{cc}1 & 0 \\ 0 & -1\end{array} \right).
\end{eqnarray}
\section{The differential cross section - Bhabha scattering }
Our interest is to calculate the cross section at finite temperature for the process $e^-(p_1)+e^+(p_2)\rightarrow e^-(q_1)+e^+(q_2)$ with one-graviton exchange, including Lorentz-violating terms. The Feynman diagrams that describe this process are given in FIG. 1. The Lorentz-violating terms make only a small contribution to the cross section relative to the GEM contribution. Until the Lorentz violation becomes significant, higher-order contributions are not expected to be large.
\begin{figure}[h]
\includegraphics[scale=0.3]{LV_Bhabha.png}
\caption{GEM Bhabha Scattering with one graviton exchange. Here, $\bullet$ represent the usual GEM vertex and $\circ$ represent the new vertex due to the Lorentz violation.}
\end{figure}
The calculation is carried out in the center of mass frame (CM) where we have
\begin{eqnarray}
p_1&=&(E,\vec{p}),\quad\quad p_2=(E,-\vec{p}),\nonumber\\
q_1&=&(E,\vec{p'})\quad\quad\mathrm{and}\quad\quad q_2=(E,-\vec{p'}),
\end{eqnarray}
where $|\vec{p}|^2=|\vec{p'}|^2=E^2$, $\vec{p}\cdot\vec{p'}=E^2\cos\theta$ and
\begin{eqnarray}
p_1\cdot p_2=q_1\cdot q_2&=&2E^2, \quad p_1\cdot q_1=E^2(1-\cos\theta),\nonumber\\
p_1\cdot q_2=q_1\cdot p_2&=&E^2(1+\cos\theta), \quad p_2\cdot q_2=E^2(1-\cos\theta).
\end{eqnarray}
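For reference, the Mandelstam invariants built from these products are, in the massless limit,
\begin{eqnarray}
s&=&(p_1+p_2)^2=2\,p_1\cdot p_2=4E^2,\nonumber\\
t&=&(p_1-q_1)^2=-2\,p_1\cdot q_1=-2E^2(1-\cos\theta),
\end{eqnarray}
so that $\sqrt{s}=2E$ is the total CM energy.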
The differential cross section is defined as
\begin{eqnarray}
\frac{d\sigma}{d\Omega}=\left(\frac{\hbar^2 c^2}{64\pi^2 s}\right)\cdot\frac{1}{4}\sum_{spins}\bigl|{\cal M(\beta)}\bigl|^2,\label{cs}
\end{eqnarray}
where $s$ is the square of the CM energy and ${\cal M}(\beta)$ is the S-matrix element at finite temperature. In addition, an average over the spins of the incoming particles and a sum over the spins of the outgoing particles is included.
The transition amplitude for GEM Bhabha scattering is calculated as
\begin{eqnarray}
{\cal M}(\beta)=\langle f,\beta| \hat{S}^{(2)}| i,\beta\rangle,
\end{eqnarray}
with $\hat{S}^{(2)}$, the second order term, of the $\hat{S}$-matrix that is defined as
\begin{eqnarray}
\hat{S}&=&\sum_{n=0}^\infty\frac{(-i)^n}{n!}\int dx_1dx_2\cdots dx_n \mathbb{T} \left[ \hat{{\cal L}}_{I}(x_1) \hat{{\cal L}}_{I}(x_2)\cdots \hat{{\cal L}}_{I}(x_n) \right],
\end{eqnarray}
where $\hat{{\cal L}}_{I}(x)={{\cal L}}_{I}(x)-\tilde{{\cal L}}_{I}(x)$ describes the interaction. The thermal states are
\begin{eqnarray}
| i,\beta\rangle&=&c_{p_1}^\dagger(\beta)d_{p_2}^\dagger(\beta)|0(\beta)\rangle, \nonumber\\
| f,\beta\rangle&=&c_{p_3}^\dagger(\beta)d_{p_4}^\dagger(\beta)|0(\beta)\rangle ,
\end{eqnarray}
with $c_{p_j}^\dagger(\beta)$ and $d_{p_j}^\dagger(\beta)$ being creation operators. The transition amplitude becomes
\begin{eqnarray}
{\cal M}(\beta)&=&\frac{(-i)^2}{2!}\int d^4x\,d^4y\langle f,\beta|({\cal L}_I{\cal L}_I-\tilde{\cal L}_I\tilde{\cal L}_I)| i,\beta\rangle\nonumber\\
&=&\Bigl({\cal M}_0(\beta)+{\cal M}_\kappa(\beta)+{\cal M}_{\kappa\kappa}(\beta)\Bigl)-\left(\tilde{\cal M}_0(\beta)+\tilde{\cal M}_\kappa(\beta)+\tilde{\cal M}_{\kappa\kappa}(\beta)\right),
\end{eqnarray}
where ${\cal M}_0(\beta)$ is the Lorentz-invariant matrix element, ${\cal M}_\kappa(\beta)$ is the term linear in the Lorentz violation and ${\cal M}_{\kappa\kappa}(\beta)$ is second order in the Lorentz-violating parameter. This last term will be ignored, since its contribution to the cross section is of fourth order in the Lorentz-violating parameter and is thus very small compared with the contribution of the ${\cal M}_\kappa(\beta)$ term. Note that there are similar expressions for the matrix elements that include tilde operators.
The fermion field is written as
\begin{eqnarray}
\psi(x)=\int dp\, N_p\left[c_p u(p)e^{-ipx}+d_p^\dagger v(p)e^{ipx}\right],
\end{eqnarray}
where $c_p$ and $d_p$ are annihilation operators for electrons and positrons, respectively, $N_p$ is the normalization constant, and $u(p)$ and $v(p)$ are Dirac spinors. The Lorentz invariant part of the transition amplitude becomes
{\small
\begin{eqnarray}
&&{\cal M}_0(\beta)=-\frac{ig^2}{16} N\int d^4x\,d^4y\,\int d^4p(\mathsf{u}^2-\mathsf{v}^2)^2\langle 0(\beta)|\mathbb{T}[A_{\mu\nu}(x)A_{\rho\lambda}(y)]|0(\beta)\rangle\nonumber\\
&\times& \Bigl[\bar{u}(q_1)(\gamma^\mu p_1^\nu+q_1^\mu\gamma^\nu) u(p_1)\bar{v}(p_2)(\gamma^\rho p_2^\lambda +q_2^\rho\gamma^\lambda) v(q_2)e^{-ix(p_2-p_1)}e^{iy(q_2-q_1)}\nonumber\\
&-&\bar{u}(q_1)(\gamma^\mu q_1^\nu+p_1^\mu\gamma^\nu) v(p_1)\bar{v}(q_2)(\gamma^\rho q_2^\lambda +p_2^\rho\gamma^\lambda) u(p_2)e^{ix(q_1+p_1)}e^{-iy(q_2+p_2)} \Bigl],
\end{eqnarray}}
where Bogoliubov transformations eqs. (\ref{f1})-(\ref{f4}) are used. With $\mathsf{u}(\beta) =\cos \theta(\beta)$ and $\mathsf{v}(\beta) =\sin \theta(\beta)$ we get $(\mathsf{u}^2-\mathsf{v}^2)^2= \tanh^2(\frac{\beta |k_0|}{2})$, where $k_0=\omega$. Using the graviton propagator definition at finite temperature, given in eq. (\ref{prop}), and the definition of the four-dimensional delta function,
\begin{eqnarray}
&&\int d^4x\,d^4y\,e^{-ix(p_2-p_1+k)}e^{-iy(q_1-q_2-k)}=\delta^4(p_2-p_1+k)\delta^4(q_1-q_2-k),
\end{eqnarray}
the transition amplitude is written as
\begin{eqnarray}
{\cal M}_0(\beta)&=&-\frac{ig^2}{16}\Bigl[\bar{u}(q_1)(\gamma^\mu p_1^\nu+q_1^\mu\gamma^\nu) u(p_1)D_{\mu\nu\rho\lambda}(p_1-q_1)\bar{v}(p_2)(\gamma^\rho p_2^\lambda +q_2^\rho\gamma^\lambda) v(q_2)\nonumber\\
&-& \bar{u}(q_1)(\gamma^\mu q_1^\nu+p_1^\mu\gamma^\nu) v(p_1)D_{\mu\nu\rho\lambda}(q_1+p_1)\bar{v}(q_2)(\gamma^\rho q_2^\lambda +p_2^\rho\gamma^\lambda) u(p_2)\Bigl]\nonumber\\
&\times&\tanh^2\Bigl(\frac{\beta E_{CM}}{2}\Bigl),
\end{eqnarray}
where $|(p_1-q_1)_0|=|(q_1+p_1)_0|=E_{CM}$ has been used, with $E_{CM}$ being the energy of the CM and
\begin{eqnarray}
D_{\mu\nu\rho\lambda}(k)\equiv \Delta(k)\,(\eta_{\mu\rho}\eta_{\nu\lambda}+\eta_{\mu\lambda}\eta_{\nu\rho}-\eta_{\mu\nu}\eta_{\rho\lambda})
\end{eqnarray}
with
\begin{eqnarray}
\Delta(k)&=&\frac{1}{k^2}\left( \begin{array}{cc}1 & 0 \\
0 & -1\end{array} \right)-\frac{2\pi i\delta(k^2)}{e^{\beta k_0}-1}\left( \begin{array}{cc}1&e^{\beta k_0/2}\\ e^{\beta k_0/2}&1\end{array} \right).\label{delta}
\end{eqnarray}
In a similar way the linear term in the Lorentz violating parameter becomes
\begin{eqnarray}
{\cal M}_\kappa(\beta)&=&\frac{g}{4}\Bigl[\bar{u}(q_1)(\gamma^\mu p_1^\nu+q_1^\mu\gamma^\nu) u(p_1)D_{\mu\nu\rho\lambda}(p_1-q_1)\bar{v}(p_2) V^{\lambda\rho}_{(1)} v(q_2)\nonumber\\
&-& \bar{v}(q_1)(\gamma^\mu q_1^\nu+p_1^\mu\gamma^\nu) u(p_1)D_{\mu\nu\rho\lambda}(q_1+p_1)\bar{u}(p_2) V^{\lambda\rho}_{(1)} v(p_2)\Bigl]\nonumber\\
&\times&\tanh^2\Bigl(\frac{\beta E_{CM}}{2}\Bigl).
\end{eqnarray}
For evaluating the differential cross section the relevant quantity is $|{\cal M}|^2=\sum{\cal M}{\cal M}^*$, where the sum is over spins. Then
\begin{eqnarray}
\bigl|{\cal M}(\beta)\bigl|^2=\bigl|{\cal M}_0(\beta)+{\cal M}_\kappa(\beta)\bigl|^2.
\end{eqnarray}
This calculation is accomplished using the completeness relations:
\begin{eqnarray}
\sum_{spins} u(p_1)\bar{u}(p_1)&=&\slashed{p}_1+m, \nonumber\\
\sum_{spins} v(p_1)\bar{v}(p_1)&=&\slashed{p}_1-m.
\end{eqnarray}
In addition, the relation
\begin{eqnarray}
\bar{v}(p_2)\gamma_\alpha u(p_1)\bar{u}(p_1)\gamma^\alpha v(p_2)=\mathrm{tr}\left[\gamma_\alpha u(p_1)\bar{u}(p_1)\gamma^\alpha v(p_2)\bar{v}(p_2)\right]
\end{eqnarray}
is used. Henceforth the electron mass is ignored, since all the momenta are much larger than the electron mass, i.e., the ultra-relativistic limit.
Then the differential cross section at finite temperature becomes
\begin{eqnarray}
\left(\frac{d\sigma}{d\Omega}\right)_T&=&\frac{g^4E^8}{4096\pi^2 s}\Bigl\{\Delta_1^2\left(1952\cos\theta+460\cos 2\theta+32\cos 3\theta+\cos 4\theta+1651\right)\nonumber\\
&+&32\Delta_2^2\left(\cos 2\theta+3\right)+128\Delta_1\Delta_2\left(\cos\theta+7\right)\cos^4(\theta/2)\nonumber\\
&+&\frac{32 \bigl(k^{(5)}\bigl)^2}{g^2}\bigl[-\Delta_1^2\Bigl(\cos^2\theta(1-\cos\theta)+24(1-\cos\theta)^3+10(1-\cos\theta)^2(\cos\theta-1)\Bigl)\nonumber\\
&-&120\Delta_2^2+\Delta_1\Delta_2\Bigl(2(10+9\cos\theta)+20(1+\cos\theta)\Bigl)\bigl]\Bigl\}\tanh^4\left(\frac{\beta E_{CM}}{2}\right),
\end{eqnarray}
where $\Delta_1\equiv\Delta_1(p_1-q_1)$ and $\Delta_2\equiv\Delta_2(p_1+q_1)$ are defined from eq. (\ref{delta}) and are written explicitly as
\begin{eqnarray}
\Delta_1&=&\frac{1}{(p_1-q_1)^2}\left( \begin{array}{cc}1 & 0 \\
0 & -1\end{array} \right)-\frac{2\pi i\delta((p_1-q_1)^2)}{e^{\beta (p_1-q_1)_0}-1}\left( \begin{array}{cc}1&e^{\beta (p_1-q_1)_0/2}\\ e^{\beta (p_1-q_1)_0/2}&1\end{array} \right)
\end{eqnarray}
\begin{eqnarray}
\Delta_2&=&\frac{1}{(p_1+q_1)^2}\left( \begin{array}{cc}1 & 0 \\
0 & -1\end{array} \right)-\frac{2\pi i\delta((p_1+q_1)^2)}{e^{\beta (p_1+q_1)_0}-1}\left( \begin{array}{cc}1&e^{\beta (p_1+q_1)_0/2}\\ e^{\beta (p_1+q_1)_0/2}&1\end{array} \right).
\end{eqnarray}
Here it is considered that the beam is perpendicular to the background, i.e., $\bigl(k^{(5)}\bigl)_{\mu\nu\alpha\lambda\rho}\,p^\rho=0$.
In the zero-temperature limit, $\tanh^4\left(\frac{\beta E_{CM}}{2}\right)\rightarrow 1$, $\Delta_{1}\rightarrow \frac{i}{2(p_1-q_1)^2}$ and $\Delta_{2}\rightarrow \frac{i}{2(p_1+q_1)^2}$. Then the differential cross section is
\begin{eqnarray}
\left(\frac{d\sigma}{d\Omega}\right)&=&\left(\frac{d\sigma}{d\Omega}\right)_{GEM}\Biggl[1+\frac{\bigl(k^{(5)}\bigl)^2}{g^2{\cal O}}\Bigl(128\cos^2\theta\sin^6(\theta/2) -4924\cos\theta + 1552 \cos 2\theta\nonumber\\
&-& 630 \cos 3\theta +140\cos 4\theta +14\cos 5\theta+ 3876\Bigl)\Biggl],\label{odd1}
\end{eqnarray}
where ${\cal O}\equiv 1864\cos\theta+540\cos 2\theta+56\cos 3\theta+5\cos 4\theta+1631$. Here $\left(\frac{d\sigma}{d\Omega}\right)_{GEM}$ is the differential cross section for the GEM field \cite{AFS}, Lorentz invariant case, and is given by
\begin{eqnarray}
\left(\frac{d\sigma}{d\Omega}\right)_{GEM}=-\frac{g^4E^4}{n\,\pi^2\,s}\frac{\left(1864\cos\theta+540\cos 2\theta+56\cos 3\theta+5\cos 4\theta+1631\right)}{(\cos\theta-1)^2},
\end{eqnarray}
with $n$ being a numerical factor, $n\approx 6.6\times 10^4$.
The finite-temperature corrections are likely to be small, but they may be measurable in some cases. This would probe the role of the Lorentz-breaking components of the transition operators.
\section{Conclusions}
The standard model of particle physics and the electromagnetic field have been found to be Lorentz covariant at all energies probed so far. It is believed that the same holds for the gravitational field, both in the Einstein theory and in the gravitoelectromagnetic field. The gravitational field, moreover, has been present over much larger time scales. A particular question to ask is: has Lorentz invariance been valid at all times for systems in a gravitational field, or in a field consistent with the quantum fields? This leads to considering the consequences of a violation of Lorentz covariance. The present study is directed at investigating the role of temperature in such a violation. Lorentz violation at finite temperature is studied for gravitoelectromagnetism (GEM). GEM is a gravitational theory obtained from the Einstein field equations in the linear approximation and, as stated earlier, has a close resemblance to the electromagnetic theory. The Thermo Field Dynamics formalism is used to calculate the differential cross section for graviton exchange at finite temperature in the presence of Lorentz violation. It is well known that the GEM field with Lorentz violation is similar to the electromagnetic field in the non-minimal version of the SME. The present study gives a brief look at this aspect and at possible expectations for experimental results. It is conceivable that the interior of stars may show results that will corroborate or discount the presence of Lorentz violation in the gravitational field, and possibly in the standard model. Our results show that the differential cross section for Bhabha scattering depends on temperature, and details are presented. The variation of the cross section with the Lorentz-violating term in the starting Lagrangian raises the question of its impact when the temperature changes, such as in the interior of stars. This impact would have a different magnitude depending on the temperature, and will help us understand the role of the Lorentz-violating term depending on the nature of the star and its internal temperature.
In addition, our results are calculated in the CM frame. However, the coefficients in the CM frame are not constant, because all experiments with beams involve non-inertial laboratories on the Earth, which rotates in the standard Sun-centered inertial frame (SCF). The CM-frame coefficients therefore need to be converted to SCF coefficients, as discussed in \cite{Kost_H}, \cite{Kost2002}, \cite{Kost1998}.
\section*{Acknowledgments}
It is a pleasure to thank V. A. Kosteleck\'y for useful remarks about the Lorentz-violating coefficient for GEM field.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 3,411 |
Q: Problem with UITableView when returning. I have a view controller with a table view; when I click on a cell I navigate to another view controller with another table view.
I tried to use viewDidAppear and viewWillAppear in the first view controller, so that when I come back to it from the second view controller, one of these methods would be called.
The problem is that neither method is called when I return to this view controller.
This is how I push the second view controller:
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath{
ProfileViewController2 *tmp = [[ProfileViewController2 alloc]initWithType:indexPath.row string:[self.array2 objectAtIndex:indexPath.row]];
[[self navigationController] pushViewController:tmp animated:YES];
[tmp release];
[self.mytableview deselectRowAtIndexPath:indexPath animated:YES];
}
A: viewWillAppear and viewDidAppear are notoriously shaky on iOS.
If you are working with UITableViews we always put the code we need to run before the table gets loaded into the
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
function. It's a slight hack but works very well as this is the first function out of the UITableViewDataSource protocol called.
Alternatively, you can call
[tmp viewDidAppear:YES];
After pushing the view controller and before releasing tmp to force the function being called.
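Rather than invoking the callback by hand, a more conventional fix is to override viewWillAppear: in the first view controller and refresh the table there. This is a sketch using standard UIKit lifecycle methods; the mytableview property is taken from the question. UIKit calls this method again when the navigation controller pops back to the view, provided the override calls super:

```objectivec
// In the first view controller (the one that owns mytableview).
- (void)viewWillAppear:(BOOL)animated {
    // Forgetting to call super is a common reason these
    // lifecycle methods appear "not to fire".
    [super viewWillAppear:animated];

    // Refresh whatever may have changed while the second
    // view controller was on screen.
    [self.mytableview reloadData];
}
```

If the method is still not called, check that the view controller is actually on the UINavigationController's stack and that the selector is spelled exactly viewWillAppear: (a missing colon or wrong signature means UIKit never finds it).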
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,922 |
NYU graduate worker union backs Israel boycott by big margin (Electronic Intifada)
Charlotte Silver, Activism and BDS Beat, 22 April 2016
New York University's graduate student worker union has voted by a large margin to join the international boycott, divestment and sanctions movement in support of Palestinian rights.
The Graduate Student Organizing Committee (GSOC), part of United Auto Workers Local 2110, also voted for stewards and delegates, defying the parent union's declaration last week of an election by acclamation after disqualifying more than half of GSOC's candidates.
After 38 percent of the union's more than 2,000 members cast ballots this week, GSOC announced that the results are "clear evidence of a strong mandate for those elected" and a sign of members' "commitment to the democratic process."
Many of the candidates who had been disqualified were elected with strong margins.
Backing BDS
Two-thirds voted "Yes" to a question on whether GSOC should join the BDS movement until Israel complies with international law and respects Palestinian rights.
The petition that triggered the referendum, signed by more than 300 members, calls on NYU and the UAW's national organization – known in US labor parlance as the international – to "withdraw their investments from Israeli state institutions and international companies complicit in the ongoing violation of Palestinian human and civil rights."
It also urges NYU to close its program at Tel Aviv University, arguing that the partnership violates NYU's non-discrimination policy.
Israel has frequently barred entry or harassed visitors who are Muslim or of Arab or other Middle Eastern ancestry, including US citizens.
The UAW international nullified a similar BDS resolution passed by University of California graduate student workers, UAW Local 2865, in 2014. The local union is currently appealing the nullification to the UAW Public Review Board.
The president of UAW Local 2110 in New York had tried to persuade GSOC to postpone this week's BDS referendum pending the outcome of the California case.
In addition, 58 percent, or 366 GSOC members, voted to adhere to the academic boycott, agreeing to refrain from participating in research and programs sponsored by Israeli universities.
"This historic endorsement of BDS by GSOC at NYU occurs in the wake of growing momentum for the movement across university campuses and labor unions nationwide," Shafeka Hashash, a member of the GSOC for BDS caucus, said in apress release.
"NYU's GSOC referendum set an important precedent for both solidarity with Palestine and for union democracy," the press release added.
"The referendum success is indicative of the traction the movement is gaining across university campuses, and increasingly among graduate students," stated Maya Wind, another member of GSOC for BDS.
Wind had initially been disqualified as a candidate by the local, but was elected to a steward position with 32 percent of the votes. Only one other candidate received as large a share of the votes.
A week ago, City University of New York doctoral students passed a resolution in favor of a boycott of Israeli institutions complicit in abuses of Palestinian rights.
In denouncing the Doctoral Student Council's vote, CUNY Chancellor James B. Milliken reiterated the school's opposition to BDS.
"We are disappointed by this vote from one student group, but it will not change CUNY's position," Millikin said.
The two recent BDS victories in New York come at a time when state legislators are pushing for a crackdown on Palestine solidarity activism.
"The most far-reaching, unconstitutional anti-BDS bills in the country are currently under consideration by the New York legislature," according to the legal defense group Palestine Legal.
In January, New York lawmakers introduced bills in the lower house and senate that would require state officials to publish a blacklist of supporters of the BDS movement.
The state senate passed its version of the law, which applies to boycotts of any nation allied to the US, and is currently awaiting the governor's signature.
The proposed laws would bar those on the blacklist from working with state agencies. The bill would also prohibit state pension funds from investing in companies engaged in politically motivated boycotts of Israel.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,205 |
Yulia Mikhailova
Mesa P
Hiroshima City University
We all know too well that in contrast to natural sciences, history has no axiomatic theory, no postulates accepted by everyone. There used to be a time when historians believed they were able to deal with objective historical facts. The age of modernity gave birth to two main political/ideological interpretations of facts: the bourgeois liberal version and the Marxist one. In our age of post-modernity history is believed not to be the study of the "past as such", but to exist rather in the form of interpretations invented/imagined by scholars and narratives retold to audience.
I believe that, on the one hand, this paradigm of post-modernity has increased the dependency of a historian on his/her society and culture, the academic community and the audience the discourse is designed for. On the other hand, because of rapidly developing globalization, historians often change their countries of residence and employment and are urged to be flexible and adaptable to new environments, in other words, to be international in their way of thinking. Thus, they may be confronted with the problem of building a credo of their own, including ethical and social commitments, which are essential for genuine research. Can this be achieved and how? What factors contribute to the process? To what extent is a historian bound up with his/her national identity and past? I would like to address these issues using my own experience of conducting research and teaching on Japanese history in such different societies as the former Soviet Union, Australia and Japan. I will attempt to demonstrate that in spite of the fact that we all live in the age of post-modernity in the broad sense, the topics historians choose for their research tend to depend on the actual problems a particular society wants to solve, while the approaches they adhere to may follow a more independent logic of the development of historical thought. For example, the acute interest in the study of Japanese cultural and intellectual history among Soviet scholars was a sort of escape from officially sponsored Marxism. The concentration of Australians on the study of minorities or feminism in Japan is related to problems of contemporary Australian society itself. The current preference of Japanese historians to see history first of all as the result of competition between personal or group ambitions, and not as a manifestation of "great historical laws", may reflect the ongoing political and bureaucratic friction in that country.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,677 |