Q: iOS Flashcard swipe logic I'm trying to build a simple flashcard app for my university project and I can't figure out the swipe logic. I am trying to swipe between two UIViews using a pan gesture.
The first image is my swipe logic.
In the second image I dragged the red flashcard to the left and lines 104 and 105 are called. On line 105 I am trying to change the var card to equal self.flashcard2, which is the green flashcard.
In the third image I start to drag the green flashcard. When I start dragging line 84 is called making the red flashcard show up and move around.
Images 1, 2 and 3
Does anyone know how to fix this problem or have any suggestions for a better way of swiping the flashcards? Sorry if my description was confusing. Just comment if you need a better explanation. Also sorry for the picture quality. I can't share three photos until I have 10 rep points.
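A: Since the referenced images and line numbers aren't visible here, this is a guess at the setup, but the symptom (the red card reappearing when you drag the green one) usually means the drag handler still moves the old view after the swap. One fix is to keep a single `topCard` reference that always points at the card currently on top, attach the pan recognizer to the controller's view rather than to a specific card, and only swap which view `topCard` points at in the animation's completion handler. The outlet names `flashcard1`/`flashcard2` below are assumptions matching the question:

```swift
import UIKit

class FlashcardViewController: UIViewController {
    // Assumed outlets matching the question's two cards
    @IBOutlet var flashcard1: UIView!   // red card
    @IBOutlet var flashcard2: UIView!   // green card

    // Always points at whichever card is currently on top
    private var topCard: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        topCard = flashcard1
        flashcard2.isHidden = true
        // Attach the recognizer to the controller's view, not to a card,
        // so it keeps working after the top card changes
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        view.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        let translation = gesture.translation(in: view)
        switch gesture.state {
        case .changed:
            // Move only the current top card while dragging
            topCard.transform = CGAffineTransform(translationX: translation.x, y: 0)
        case .ended, .cancelled:
            if abs(translation.x) > 100 {
                // Swiped far enough: animate off-screen, then swap cards
                let direction: CGFloat = translation.x < 0 ? -1 : 1
                UIView.animate(withDuration: 0.25, animations: {
                    self.topCard.transform = CGAffineTransform(
                        translationX: direction * self.view.bounds.width, y: 0)
                }, completion: { _ in
                    self.topCard.isHidden = true
                    self.topCard.transform = .identity
                    // Swap only after the animation finishes, so the
                    // old card can't respond to the next drag
                    self.topCard = (self.topCard == self.flashcard1)
                        ? self.flashcard2 : self.flashcard1
                    self.topCard.isHidden = false
                })
            } else {
                // Not far enough: snap the card back into place
                UIView.animate(withDuration: 0.2) { self.topCard.transform = .identity }
            }
        default:
            break
        }
    }
}
```

The key design point is that nothing outside the completion handler ever reassigns which card is "current", so a drag that starts mid-animation can't grab the wrong view. For more than two cards, the same pattern generalizes to an array with an index instead of two hard-coded outlets.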
FAGG stands for:
the ICAO code of the South African airport George
an abbreviation of the Austrian Fern- und Auswärtsgeschäftegesetz (Distance and Off-Premises Transactions Act)
an abbreviation
The Trump /Syria Conundrum: Will Trump Deliver Deep State's World War?
We have entered uncharted territory, with the future of humanity at stake.
By Larry Chin
Theme: Intelligence, Terrorism, US NATO War Agenda
In appearance, Trump's April 6, 2017 missile attack on Syria is the first step towards a regime change, a massive regional conquest, and World War 3. In appearance, the event marked a point of no return for Trump's presidency.
Murder in the name of propaganda
What is clear is that the chemical attack was a false flag operation, staged for war propaganda purposes, with intelligence operatives, the Deep State, the political elite, and the corporate media working in unison.
Syrian jets hit a rebel position. Chemicals used by the rebels (US-backed terrorists, CIA) were stockpiled at the target. The target was hit.
Out of this came the fabricated narrative that President Assad released a chemical weapons attack on civilians, despite the Syrian government having no sane political reason to do so. The Assad regime had recently been given assurances by the Trump administration and Secretary of State Rex Tillerson suggesting that a regime change was not in the Trump administration's plans, and that "the Syrian people would decide" Assad's fate. There was also no sane reason for Russia to have participated in or condoned a chemical weapons attack on civilians.
Syrian-Russian anti-terror operations were succeeding, and Putin was looking forward to cooperation with the new Trump administration.
The White Helmets, who play a starring role in the subsequently staged "humanitarian crisis" spectacle, are US/UK-backed terrorist/intelligence assets and propaganda agents.
All signs point to a setup using tired old tricks. A repeat of previous similar staged "humanitarian crisis" pretexts, from the "incubator babies" to "Aleppo Boy". Elements of the current #SyriaHoax include clumsy staging of the "chemical attack", starring a doctor who is a known US-backed terrorist, dead and injured people laughing (because they don't think the cameras are running), and actors playing "dead children" opening their eyes. The Twitter page of a fabricated 7-year-old Syrian girl, Alabed Bana, is ridiculous propaganda that is promoted by major Fake News media, including Jake Tapper of CIA-connected CNN.
As written by Michel Chossudovsky:
While there is no evidence that president Al-Assad ordered the chemical weapons attack, there is ample evidence –including a comprehensive UN report– that the opposition "rebels" (supported by US-NATO) have since 2012 stockpiled and used chemical weapons against Syrian civilians as well as SAA soldiers.
There is also evidence that Washington and its allies had previously planned and supported "False Flag" chemical weapons attacks perpetrated by the "rebels" (including the 2012 East Ghouta attacks) with a view to incriminating the Damascus government.
As Ron Paul and many others warned, the propaganda must be questioned. According to the sources of former CIA operative Ray McGovern and numerous other current and former intelligence officers, the White House was extensively briefed, and CIA director Mike Pompeo "played it straight". Trump "knew", or should have known, "was persuaded", and ordered the missile strike anyway.
National Security Adviser H.R. McMaster and Defense Secretary Mattis pushed for the strike. White House advisor and son-in-law Jared Kushner and daughter Ivanka Trump reportedly also pushed for the strike. White House advisor Steve Bannon advised against the strike, and was rebuked.
Who ordered this operation to frame Assad?
Was it the Deep State and corrupt players, and the unhinged warmonger and Al-Qaeda/ISIS favorite John McCain, who visited Syria in February?
Was it to force the Trump administration to act?
Was it the Trump administration itself, setting up a multi-pronged deception?
Was it conducted by other forces or governments (Israel, etc.)?
What did Russia know, if anything? Who benefits?
Trump fired 59 Tomahawk missiles on Shayrat air base. Trump continued entertaining Chinese president Xi Jinping in Palm Beach while the operation unfolded. Nine civilians were killed in the attack and several more were wounded.
Donald Trump committed mass murder, and is now officially a war criminal. What is also a fact is that the strike and the resulting narrative politically benefit terrorism (ISIS, Al-Qaeda, and the CIA) in opposition to Syria, Russia and their allies, who are fighting terrorism. It was also a glorious moment for the Deep State and the New World Order.
Both Russia and Syria were warned in advance. No Russians lost their lives. Only 23 missiles hit their mark (why, and where did the others go?). The damage was "relatively minor". The runway was left intact. Syrian military planes were able to conduct sorties from the same runway against rebel targets the following morning.
In his statement following the attack, Trump spoke of the "slow and brutal death of beautiful babies, cruelly murdered in a barbaric act", and declared that there was no question of Assad's guilt. He left no room even for investigations or questions. Tillerson, former head of Exxon-Mobil (and major player in past wars from which Exxon-Mobil benefited greatly) stated that there will be "no role for (Assad) to govern the Syrian people", and "there was no doubt Assad was behind the chemical attack". Defense Secretary Mattis also declared "no doubt" that Assad did it. Trump also stated that his attitude towards Assad "has changed very much" due to the alleged chemical attack. And to further exacerbate tensions, the Pentagon was ordered to begin an investigation into Russia's role behind the chemical attack.
Was Trump's attack "real"—or a one-time fake retaliation, in response to a fake chemical attack? Was Trump acting on his own, or was he deceived or forced to order the retaliatory strike? Was Trump naïve and ignorant enough to be manipulated by the Deep State propaganda, and act recklessly out of emotion?
Or was he playing a game?
Trump's crazed 180 degree turn
The world remains confused about the attack and its meaning. Even more baffling is Trump's apparent total reversal of principle.
According to former State Department counterterrorism coordinator Daniel Benjamin, Trump has
"completely flipped from one day to the next on the issue of significant national security that he's been talking about for years…I have very little confidence that he is thinking one, two or three moves ahead, which is what a president has to do."
Why did Trump take what appears to be a sudden and unwarranted step towards regime change in Syria, globalism, and a World War 3 confrontation with Russia? Why, after having won the presidency by advocating the opposite?
Why this, from the anti-globalist, anti-regime-change, anti-corruption candidate who (unlike the deeply corrupt, neocon-backed, war-loving Hillary Clinton and her network) was believed to be the president who would not start World War 3?
Why this, from someone who knows he won the election because the vast majority of American voters responded to the message that "Americanism, not globalism, will be our credo"?
Why this, from a man who lambasted the Obama administration for meddling in Syria, and said attacking Syria was a bad idea at least 45 times? Why this, following months of insistence—over the violent opposition of the Deep State, Obama/Clinton Democrats, the Fake News corporate Media and endless "Russiagate" noise— that cooperation with Russia was critical to Middle East stability and world peace? Trump and his administration are now leading the anti-Russia and anti-Putin propaganda effort, launching daily attacks at Putin via the corporate media.
Why this, mere days since Rex Tillerson stated that Assad's future would be decided by the Syrians, and regime change was not planned?
The missile attack on Syria sparked immediate outrage from the largely anti-war Trump base, which denounced the strikes as a gigantic betrayal.
A typical example:
Veterans and alt-right turn against Trump
Most of the leading alternative media voices supporting Trump are, and remain, staunchly anti-war, including Mike Cernovich, Paul Joseph Watson, Stefan Molyneux, Jack Posobiec, Lee Stranahan, and many others. Some are ready to "get off the Trump train". Even Trump's blindest followers are going into contortions coming up with rationalizations.
Richard Spencer condemned Trump for his betrayal, and organized an anti-war rally in front of the White House. (This rally was violently attacked by pro-war Soros-funded Antifa thugs.)
Adding to the confusion is the fact that the Trump presidency, despite being embattled, was winning on many fronts. Much had been, up until April 6, moving in his favor.
The Deep State's all-out war on Trump was being met with effective pushback from those in the Trump camp. Simultaneously, the Deep State is being fought independently by whistleblowers, including WikiLeaks.
"Russiagate" is backfiring. Attention has been turned to Obama/Democrats illegally spying on Trump and private citizens, to leaks, and to numerous corruption scandals possibly implicating current and former Democrats and Republicans.
Investigations into Hillary Clinton, the Clinton Foundation, the Obama administration, and the Democratic National Committee remain open and ongoing, with the FBI and the Justice Department sitting on thousands of pieces of incriminating evidence.
Large numbers of human trafficking and pedophilia arrests are taking place nationwide, under the direction of Attorney General Jeff Sessions, towards the exposure of the larger pedophilia network and political blackmail apparatus that controls much of Washington (Pedogate/Pizzagate).
The destructive activities of Antifa and anarchist groups funded and controlled by neoliberals and Democrats aligned with Clinton and Obama have failed to turn public opinion against Trump.
The intelligence-controlled establishment corporate media aligned against Trump is losing on all fronts to independent and social media sources, and is now derided as Fake News.
Trump's presidency is a guiding example inspiring anti-globalist nationalist movements around the world, supported by Nigel Farage and the forces behind Brexit, and nationalists such as Marine Le Pen in France. These movements are gaining momentum.
Many elements of Trump's Make America Great Again (MAGA) agenda are being put into place, despite obstruction and violent opposition by adversaries.
Most importantly, Trump had been fully supported by his base, through every difficulty.
Does it make any sense for Trump to squander these gains and alienate his vital support base? Why did he suddenly do the one thing that would lead to his demise: an act of war that only a globalist neocon of the highest order would pull off; the one thing that can unite both Left and Right against him?
Why does it seem that Trump is abandoning MAGA altogether?
Why would he toss away an alliance with Russia that would have ensured, if not at least contributed greatly, to a true world peace?
We are left with speculation.
Scenario 1: Trump has lost to Deep State
In this interpretation, Trump has given up the fight against globalism. He has "been flipped". He has sold out. He has surrendered to intimidation from the Deep State.
The Syrian false flag was set up by the Deep State, and Trump was ordered to finish the war of conquest begun with 9/11, or face consequences. He was threatened into doing exactly what Hillary Clinton wants done. Forced to abandon principles. Forced to commit political suicide. Forced to abandon his base, and accept being a politically isolated, impeachable lame duck president. The Trump presidency is left to collapse from both external attack and internal undermining. The Deep State triumphs, and the Bush/Clinton criminal network gets the last laugh.
A variation on the theory: Trump has cut a deal to end the civil war that has grown too damaging, threatening the existence of the system itself. Both the Deep State and Trump prefer to end the fight.
In exchange for allowing the war to take place, Trump will be lavished with the rewards of all of the corrupt White House puppets who preceded him, as long as he follows orders. If he does a good job, he will be the next ceremonial "war president". He will get some of his domestic agenda passed, be permitted to live, be treated "normally" by the media, and ride off into the sunset. But MAGA, and the "swamp draining" of Flynn, Bannon, etc., is over.
What appears to be the White House's sudden "shift to the center" smacks of this capitulation.
Scenario 2: McMaster, and Deep State White House coup d'état
Is Trump in danger, is he in control, or is he an isolated dupe?
In explosive new stories broken by Mike Cernovich, national security adviser H.R. McMaster, acolyte of disgraced former CIA Director David Petraeus, has taken over the National Security Council, is manipulating the intelligence reports given to Trump, and wants 150,000 ground troops in Syria. McMaster is plotting with Petraeus and is purging all who oppose a ground war in Syria.
In this consolidation of power, Trump loyalist K.T. McFarland has been removed. Sebastian Gorka may be next to be removed by McMaster.
McMaster is also close with scandalized former Obama national security adviser Susan Rice, and it is reported that Rice herself pushed McMaster to remove Steve Bannon from the NSC. Bannon, who is against regime change, is gone.
Petraeus himself was considered by Trump for the NSC post, following the ouster of Michael Flynn by the Deep State, before McMaster got the job.
In the words of Cernovich, it is now "Trump supporters out, pro-war Petraeus puppets in".
These disturbing new developments, and the McMaster takeover of foreign policy, come in the wake of months of White House infiltration and sabotage.
Despite continuous warnings of Trump loyalists for the past months, Trump chose to surround himself with enemies and those who seek to undermine his original agenda: Republican/Bush neocons, Obama/Clinton infiltrators ("West Wing Democrats"), Deep State operatives, globalists, Goldman Sachs denizens. He has filled the "swamp" more than he has "drained" it.
In the internal civil war between Trump loyalists and the globalists, the Trump loyalists appear to be losing, leaving Trump isolated and surrounded by seasoned criminals, saboteurs, and spies. National security adviser Flynn, the most powerful operational member of what was the inner circle, was effectively forced to resign, perhaps not coincidentally after spearheading operations against the Deep State. The influence of advisor Steve Bannon, leading force behind the anti-globalist/anti-establishment agenda, has been greatly diminished, and there were rumors that he has considered leaving the White House.
Meanwhile, the influence and power of the CIA/Clinton/Obama/Soros/Goldman Sachs/Israel-connected "West Wing Democrats" (Jared Kushner, Ivanka Trump, Gary Cohn, and Dina Powell) is on the rise. Poisonous Bush neocons such as Vice President Mike Pence and Reince Priebus, and the neocon generals, remain securely in place. Trump has shown no signs that he will fire family members, despite Kushner's questionable background, and regardless of his leaks of anti-Bannon stories to MSNBC. He continues to praise Priebus, despite evidence of sabotage, and despite the fact that the recently fired Priebus aide Katie Walsh was caught leaking anti-Trump stories to the enemy Fake News media.
Now comes word that the Trump presidency is "reset" towards a "centrist" globalist platform and the end of MAGA; the end of the "deconstruction" and reform efforts represented by Bannon.
According to former CIA operative Robert David Steele, this is Trump's "Bay of Pigs" moment, in which Trump either reverses course and opposes the Deep State forces pressuring and manipulating him, or he gives in to them—thus rendering himself an immediate lame duck president. This is the moment in which Trump must take on his enemies, at risk to his life. Or not.
Scenario 3: Trump playing high-stakes "4-D chess" games
Here is the "optimistic" scenario.
Was the missile strike itself a limited one-time staged propaganda deception—a fake response to a fake chemical attack— done with the back channel cooperation of the Russians and Syrians, who were warned in advance? The fact that no Russians were killed, "relatively minor" collateral damage was done, and the runway was left intact all suggest this possibility.
It was a noisy show of military strength and resolve, professional wrestling-style theater, to pave the way for Rex Tillerson's April 11, 2017 diplomatic trip to Russia, which will focus on the Syrian crisis. It was also a symbolic "shock and awe" gesture, meant to impress and intimidate visiting Chinese president Xi Jinping, to send a message to North Korea, and to shape future US-China relations, including tensions in the South China Sea and US-Chinese trade.
The provocative noise is a planetary game of "chicken" ("peace through strength", "don't mess with us") that will improve the US advantage in negotiations, and ultimately result in future victory. There will be tensions, but no world war. According to Mattis, "tensions with Russia will not spiral out of control".
This version of events is embraced by Steve Pieczenik, as well as Alex Jones and Roger Stone. Stone does not believe that the limited strike signals the beginning of a wider war. But both Jones and Stone agree that if Trump does widen the war, "he is done", and Trump "becomes George W. Bush", and "part of the Bush-Clinton-Bush-Obama continuum".
Here is Mike Cernovich's astute analysis:
Trump, Syria and the 4-D Chess hypothesis
Numerous other possible "4-D chess" scenarios are put forth by Scott Adams. This is the mythical Trump, keeping allies and foes alike off balance, many steps ahead.
It was also a "wag the dog" distraction, designed to erase Russiagate from news headlines (he has now "proven" that he is not "buddy buddy with Putin"), quell the media, temporarily quell the political opposition, and rally heretofore skeptical world leaders.
Trump also benefits from a "Bush Iraq-9/11 moment", and becomes a "war president", avenging an atrocity committed by an evildoer. Even if he loses the support of much of his base, perhaps Trump believes that it can be replaced by new support from those who might warm to him "in a time of war", against current boogeyman Assad.
Jack Posobiec speculates that Trump and Putin might have cut some sort of back channel deal.
Wikileaks' Julian Assange wonders whether, if the end game is that Russia pulls out of Syria, the quagmire will be left for the other nations.
Would Russia go along with the removal of Assad, along with a carving up of Syria that includes Russia?
But these questions cast doubt:
Why would Russia give up its military bases without a war, give up its extensive, entrenched and vital oil-related interests in the region, and give the Anglo-American empire absolute primacy in the region?
Scenario 4: Trump as neocon Trojan Horse
Is Trump, in fact, a neocon globalist, in the long line of neocon globalists (Bush-Clinton-Bush-Obama/Clinton), who is now casting aside his populism, because it is no longer useful? Is Trump controlled by higher elites?
Has Trump been lying to the Russians and Putin for months about cooperation, while preparing major military moves against them? Has Trump used the Russians to get rid of ISIS elements, while plotting to get rid of the Russians once they stop being useful?
Is Trump, in fact, out to prove that he can outmuscle the "weak" Obama/Clinton globalist agenda, and do them one better, by 1) actually conquering Syria, 2) beating Russia and China, and driving them out of the Middle East, 3) taking the Grand Chessboard, and also 4) scaring China into compliance? Is this the "peace through strength" total war that Trump and his generals envision?
Trump is also a fervent militarist, who "loves generals" and loves the idea of wielding American military muscle and supremacy. This is evidenced by his incessant fawning overtures to Pentagon and CIA, and surrounding himself with generals and warriors. He is genuinely a fan of the intelligence community and law enforcement. As it probably pains him that any factions within these institutions oppose him, he is eager to win them over.
Trump has never stopped accusing the Obama/Clinton regime of "weakness". The Trump foreign policy will therefore display "strength".
Trump must certainly know that war is insanely lucrative, and an economic multiplier that could boost a stagnant US economy. He is pushing a huge increase in military spending.
War makes it all happen.
Who benefits from regime change in Syria? Who benefits from confrontation with Russia and World War 3? None other than the Deep State, the CIA and the New World Order.
There is also the oil. And the pipelines and transit routes.
How could Trump and oil man Tillerson ignore the Grand Chessboard and its spoils?
Has World War 3 already begun?
Judging by the latest provocations, the Trump administration is foaming at the mouth for war:
White House accuses Russia of cover-up in Syria chemical attack
AP: Senior US official says US has concluded that Russia knew in advance of chemical attack
US forces on Jordanian border, standing by with Jordanian special forces
McMaster calls for Syria regime change
US weighs saturation strike on Syrian government
Trump discussed with king of Jordan Sunni/Kurd coalition to stabilize Syria
Russia warns Trump they will respond with force if Syria red lines crossed again
Boris Johnson spearheads diplomatic drive to get Russian forces out of Syria
UN ambassador Nikki Haley: getting Assad out not only priority in Syria
Tillerson: no role for Assad in Syria
In statements given during April 9 press interviews, Tillerson repeated that defeating the Islamic State remains the top focus, and that the strike has "not changed US priorities towards ousting Assad". Haley repeatedly declared that regime change in Syria is a priority and "inevitable". McMaster promised that fighting ISIS and ousting Assad are "simultaneous", and did not rule out additional strikes.
On every front, severe damage is quickly being done to US-Russian relations, too much to be forgiven or easily reversed. There are few voices of reason, and none from within the Trump administration.
Secretary of State Tillerson makes the Trump administration's first official trip to Russia on April 11. (If a state of war exists between the US and Russia, would Tillerson be making this trip at all?)
The result of this negotiation could decide the fate of humanity.
But is it smoke and mirrors? Is it all a moot point now?
US military forces are ready. According to Jack Posobiec's well-placed sources in the military, there will be boots on the ground June 1 or earlier.
Copyright © Larry Chin, Global Research, 2017
Articles by: Larry Chin
A stipule is a structure, usually laminar, that forms on each side of the leaf base of a vascular plant (Tracheobionta). Stipules are usually asymmetric and are, in a way, mirror images of each other. Stipules can be:
free or lateral: not adhering to the petiole, attached only to the stem;
adnate, petiolar or vaginal: fused to the petiole along a greater or lesser length;
interpetiolar or cauline: two stipules of opposite leaves fused at their point of contact;
intrapetiolar or axillary: two stipules of the same leaf fused above the petiole;
oppositifoliate or opposite: two stipules of the same leaf fused around the side opposite the petiole;
sheathing: in the same sense as ochrea.
Stipules can appear as leaf-like organs, spines, glands, hairs or scales. Their presence is related to the anatomy of the node that bears the leaf.
See also: Leaf
References
Plant morphology
\section*{Acknowledgments}
\medskip
We thank R. Radhakrishnan for in-depth discussions.
We also thank C. Ness, F. Peters and A. Los for useful suggestions.
The work is supported by the Swedish Research Council (grant no.\ VR 2014--5001)
and University of Campania `L. Vanvitelli' under the programme ``VALERE: VAnviteLli pEr la RicErca'' project: SEND.
\bibliographystyle{apsrev4-2}
\small
Catacombs of the Black Vatican is the ninth studio album by the American heavy metal band Black Label Society. It was released on April 8, 2014 by the record label E1 Music. In Europe the material was distributed by Mascot Records. The album debuted at number 5 on the Billboard 200 chart in the United States, selling 26,000 copies within a week of its release. Barely two months later the album had sold a total of 50,000 copies.
It was the band's first album recorded with former Breaking Benjamin drummer Chad Szeliga.
Track listing
Based on the source material.
{|
|valign="top"|
Standard edition
"Fields of Unforgiveness" – 03:12
"My Dying Time" – 03:22
"Believe" – 03:44
"Angel of Mercy" – 04:14
"Heart of Darkness" – 03:39
"Beyond the Down" – 02:54
"Scars" – 04:13
"Damn the Flood" – 03:18
"I've Gone Away" – 03:51
"Empty Promises" – 05:16
"Shades of Gray" – 06:28
|width="10"|
|valign="top"|
Bonus tracks
"Dark Side of the Sun" – 05:19
"The Nomad" – 04:26
Bonus tracks – Australian edition
"Dark Side of the Sun" – 05:19
"Hell and Fire" – 04:24
Bonus tracks – digipack edition
"Dark Side of the Sun" – 5:22
"Blind Man" – 4:36
|}
Personnel
Based on the source material.
Zakk Wylde – vocals, guitar, keyboards, production, mixing, photography
John DeServio – bass guitar, mixing, co-production
Chad Szeliga – drums
Adam Klump – sound engineering, mixing, co-production
Peter A. Barker – mastering
Justin Reich – photography
Derek Sherinian – guest keyboards
Greg Locascio – guest vocals
References
External links
Cover art
Black Label Society albums
Albums released in 2014
# Darkest Before Dawn
### #3 The Veil Series
## Pippa DaCosta
DARKEST BEFORE DAWN
#3 The Veil Series
* * *
Copyright Pippa DaCosta 2014
All rights reserved.
No part of this book may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without written permission from the author, except for the use of brief quotations in a book review.
* * *
ISBN: 1500996572
ISBN-13: 978-1500996574
All characters and events in this publication, other than those clearly in the public domain, are fictions and any resemblance to real persons, living or dead, is purely coincidental.
Version 1.1 Oct 15th, 2014. (Updated Dec 9th, 2014)
* * *
www.theveilseries.co.uk
www.pippadacosta.com
## 1
# Chapter One
It's not every night a bloodied and disheveled Prince of Hell shows up on my doorstep with an orphan girl, demanding I keep her safe before vanishing into thin air. But that's exactly what happened when I first met Dawn.
I'd worked up a sweat scrubbing demon blood out of my suede boots. The day hadn't gone well. My work as a _free_ lance Enforcer had seemed like a great idea at the time, especially the 'free' part. The Institute had answers I needed, but I was beginning to feel more and more like their blunt instrument. Demons hear _Enforcer_ and don't want to sit and talk about their options. I killed more demons than I talked down, and being half-demon myself, my choice of profession gnawed away at my resolve. I was having a crisis, which was part of the reason I was scrubbing my boots with all the gusto of someone trying to wipe clean a guilty conscience.
Jonesy, my cat, wove around my ankles, determined to distract me, but it was the delectable voice of the Prince of Greed that finally caught my attention. I flicked my hair out of my eyes, tossed my ruined boot and scrub brush into the kitchen sink, and glared across the lounge at the TV.
On screen, Akil had poured all of his raw masculinity and charisma into a relaxed posture at the end of a plush crimson couch. He'd dressed impeccably in a dark suit that probably cost the same as a year's rent for my new apartment. He hadn't aged a day in the fifteen years I'd known him and still managed to pull off the slick thirty-something routine with masterful perfection. Never mind that he was an immortal chaos demon, spat out of creation at the same time as the earth. Nobody cared about that. All they saw was a professional businessman who had an answer for everything and could charm the scales off a snake.
"Not all demons are good, of course." He smiled, and the woman interviewing him raised her plucked eyebrows. "That wasn't what I was implying. I wanted to merely stress that demons are as varied and diverse as people." Whatever he'd been asked, he wasn't in the least perturbed. You couldn't ruffle his princely feathers as easily as that. I should know. I'd ruffled his feathers—or rather, his leathery, lava-veined wings—once or twice.
Akil's host drew a tight smile across her lips. "What about yourself?" She uncrossed her shapely legs, shuffled back in her high-backed seat, and then re-crossed her legs again. A murmur rippled through the unseen audience. Akil's smile hitched up at one corner, and a few feminine jeers from the audience lifted the mood. The host smiled and tucked her hair behind her ear. "Well? Everyone wants to know why you decided to come forward as the spokesperson for the demon community."
"Jenny." He purred her name like it was forbidden. I arched an eyebrow as Jenny squirmed in her seat. "It was necessary. Someone had to do something. Things couldn't go on as they were. The good people of Boston need answers. They need to know we're not terrifying monsters, just... misunderstood."
I snorted a laugh.
Jenny glanced at her audience and back to Akil. "Many of us here have seen the rather blurry news footage of you protecting Boston from the... Lah-Kar–"
"Larkwrari demon." Akil helpfully provided the correct pronunciation, the word rolling off his tongue with an ancient accent I'd never fully pinned down. Given his bronze skin tone and hazel eyes, people often assumed he was Italian or perhaps from somewhere further afield, somewhere hot and exotic. They were right about that. Before he'd come out as full-blood demon, very few had witnessed his true appearance and lived to describe him in detail, although there were a few pixelated images currently going viral on the internet. The women swooning in the audience would run screaming if they knew him as Mammon, Prince of Greed.
"Yes. That was two months ago," Jenny said. "Are we likely to see more events such as that one in Boston Gardens?"
"It's highly unlikely. That situation was extreme..."
I bowed my head and turned my back on the TV. I'd been at the Gardens during the 'event' they spoke of. In fact, Akil wouldn't have been able to save the city without me. But where he'd walked into the spotlight afterward, I'd slunk into the shadows. I hadn't seen him since. Nor had I seen or heard from Stefan, the half-demon who had caused the tear in the veil which protects this world from the netherworld, thereby letting the Larkwrari demon through. When it was all over and I realized I was on my own, I'd agreed to work freelance for the Institute as long as they stayed out of my life. So far, so good. On the surface, everything was fine, but scratch off the veneer, and I still struggled to cope with the emotional fallout from that day.
Jonesy, my cat, leapt onto the kitchen counter and nudged my arm with a rumbling purr. I tickled behind his ear. "I know, buddy. Don't worry. I'm not going anywhere." I'd abandoned Jonesy once before, around the time Akil had torched my old apartment building in an effort to flush me out of hiding. Yeah, not all demons are good. I'd yet to meet a good demon, and yet the people seemed to buy what Akil was selling.
Two booms against my front door frightened Jonesy enough for him to skitter off the counter and dart under the coffee table. They were the kind of knocks the police give you before kicking your door in.
I already knew who stood outside my apartment. His radiating warmth seeped beneath the door and crept inside my lounge. The framed modern art adorning each of my apartment walls served a purpose: the anti-elemental symbols locked down elemental power. Plus, he couldn't get in without a personal invite. Even knowing I was protected, I still felt a trickle of fear raise the fine hairs on my arms. I liked to call it fear because the alternative—desire—didn't sit well with my human half.
"Go away," I called. The Akil on the pre-recorded TV interview was still busy charming his audience. He had them laughing now, smiles all 'round. Even Jenny had warmed up to him. A dash of color brightened her cheeks. I grabbed the remote and switched off the TV.
"Muse, this is important," Akil said, his voice muffled behind the closed door.
"Then call me." I moved a few steps toward the door and stopped. "I tried to call you, and you ignored me." That had grated. It had taken me weeks to pluck up the courage to call him so we could meet and talk about Subject Beta, about the princes, about everything he should have told me before, and he'd blanked me like one of his fangirls.
"You need to open this door." That delicious voice eased beneath my defences and wove into my thoughts.
My chest tightened, and I clenched a fist over my heart. I had the essence of a demon shrink-wrapped around my soul, a vengeful necrotic parasite feeding and polluting my insides. I preferred to call the thing a parasite because, if I used his name, it made it too real, too fucked up. Akil probably had the means of removing it. When he'd tried, I'd shut him down. Letting him help felt too much like trusting him, and that was something I could never do again. I had no desire to trade one demon's hold for another. Occasionally, the dark thing hitching a ride in me decided to make its presence known, and now was one of those times.
"Akil, please... just go." I winced as the dark pulsed out of time with my heart.
"I need your help."
Dammit. He knew just how to push my buttons. "You're a capable guy. Figure it out." I moved close enough to the door that I could reach a hand out to open it, but I held back, fingers twitching.
"I have. That's why I'm here. You don't need to invite me in. Just open the door."
I wasn't inviting him in. I'd tried that once. He'd subsequently attempted to kill me. We had a complicated relationship.
I reached for the door handle as my demon unfurled inside me, awakened by Akil's presence. Her purr rumbled through me, making her desires perfectly clear. Everything about Akil flicked her switches, but I was the one calling the shots. Plus inside my apartment, she could no more manifest outside my skin than he could call his power. The symbolic artwork on my walls held her back.
When I opened the door, the verbal assault I'd prepared fizzled away in a gasp. Akil's torn claret shirt hung askew, and his suit pants were blood-splattered. He had scuff marks across his cheek and forehead. Blood dribbled down the side of his face. His normally hazel eyes brimmed with liquid fire. All of that I could have dealt with, but it was the young girl cowering behind his leg that surprised me the most. Her wide chocolate eyes peeked out at me as she clutched a stuffed rabbit to her chest, its faux fur matted with blood.
"What did you do?" I growled at Akil.
He narrowed his flame-filled eyes at me and then crouched down to face the little girl. Clasping her upper arms, Akil looked her in the eyes and said, "I'm sorry you witnessed... that. I had no choice. Muse will protect you. She's more formidable than she looks." I gasped, open-mouthed, at the two of them.
The little girl blinked and clutched her bunny tight against her chest.
"I must leave you now." He smiled and toned down the fire in his eyes. He couldn't do much about the blood and his general disheveled state, but she didn't seem to notice. "Do as Muse says. Promise me."
"Okay. Will you come back?" she asked in a tiny, mouse-like voice.
Akil took too long to answer. I glared at him. "Yes, he'll come back," I snapped.
Straightening, Akil gave the girl a slight shove in my direction. She took a few steps inside and peered over her rabbit at my lounge as though looking at an alien world.
I flung my attention back to Akil. "What the hell, Akil?" I hissed, reining back my tone to avoid rousing my neighbors.
"Do the right thing, Muse." The softness of his tone set off alarm bells in my mind. "I know you will."
"You can't just turn up after two months and dump a little girl on me. I don't know how to look after children. What am I supposed to do? Who is she? Why do you look like you've gone ten rounds with a Hellhound?"
Akil ran a hand through his mussed hair, and I saw it tremble. How could I not? Akil didn't behave like this. He was the suave bastard on TV, not the beaten-up wreck at my door. "Just keep the demons away from her."
I gulped back a rising knot of panic. "What? Why are demons after her? She's just a little girl."
"As were you. Once." He glanced down the hall. A door-lock rattled; one of my neighbors had decided to investigate the commotion in the hallway.
"Akil..." I warned, lowering my voice to a stage whisper, "are you telling me she's a half blood?"
He met my stare. "Do the right thing."
"Is everything all right, Charlie dear?" my neighbor, Rosaline, asked, her English accent neat and clean. I poked my head around the door and gave her a sweet smile. A delightful sixty-something widow, she couldn't help caring too much about the lost cause next door – me. We'd bonded over tea. She made a mean lemon drizzle cake.
"Everything's fine, Rosa. I was just talking with my friend here... Not to worry. I'm sorry if we disturbed you."
"No-no..." She grinned and gave me a quaint royal wave. "As long as you're okay, my dear. Oh, would you mind taking a look at my television? I can't seem to change the channels. All I get is the Discovery channel, and I've had just about enough of rampaging wildebeest for one day."
"Yup, sure thing. Will do..." I waved and watched her plod back inside her apartment. When I turned to face Akil again, he'd made himself scarce.
I uttered a curse and then remembered my young guest and cursed again for swearing in front of a child. The little girl didn't seem to hear anyway. She wore a slip of a dress, several sizes too big for her skinny little body. Her socks were mismatched, and her black patent leather shoes scuffed. I moved around her. She blinked wide doe-eyes up at me. Her flushed cheeks, pink lips, curly mouse hair, and oval face suggested an age of eight or nine years, and I inwardly cringed. I had no idea what I was supposed to do with her. Thankfully, the demon smacking into my apartment window distracted me from that thought.
I jerked around and saw a dark shadow slam against the window, leaving oily imprints on the glass and rattling the frame. Another clattering boom against the adjacent window snapped my attention across the lounge. Claws scratched at the glass, setting my teeth on edge. I couldn't quite see the demons—too human to focus on their ethereal forms—but whatever they were, they didn't appear to be able to break through. My symbols worked their magic. I had a few seconds of smug satisfaction and then I heard a raucous cry coming from my bedroom. Jonesy blurred across the floor with a yowl, and following behind came a heaving cloud of black smoke. I'd left the bedroom window open.
My demon came to me like a blast of hot air from an oven. She'd already been lurking at the back of my mind, now she butted up against my skin. The protection symbols prevented me from summoning all of her. I couldn't use my element, but I had enough fire in my veins to see the prehistoric creature inside the miasmic shadow. I'd seen it before. They patrolled the night sky in the netherworld, and they also made an appearance in most dinosaur reference books. Palaeontologists called them pterosaurs, better known as pterodactyls. Demons called them _venatores – hunters._
It teetered forward on its winged arms and legs, claws scratching against my hardwood floors, and cast its beady-eyed glance around. It let out an ear-piercing screech. The little girl squeaked behind me and scurried into the corner of the room where she ducked down and tried to hug herself into a tiny, insignificant ball.
I pinned the hunter in my sights and snatched a kitchen knife from the rack. We were equally matched in height—which isn't saying much—although its claws and beak full of razor-edged teeth gave it the distinct advantage. It screeched at me, the brittle sound like a clatter of cymbals.
"I already have demon blood on my boots," I growled. "I'd really prefer it if I didn't have to wash it off my walls as well."
It swung its elongated head and tried to get a fix on the girl behind me. Skittering to one side, it flapped its wings and snapped its jaws, unconcerned by my threat. Another of its companions slammed against the lounge window, jarring the glass. The hunter jerked its head, acknowledging its companion's idiocy. I used its distraction and bolted around behind it. Attacking it head on would get me a face full of sharp teeth. Snatching its left wing, I used my own momentum to swing around behind it. Its beak swung around after me, the two of us pirouetting before I plunged the kitchen knife into its leathery hide. I still had hold of its wing and yanked as it bucked away. The knife slid out with a _sloosh._ Blood spurted. Its beak snapped at me, close enough to taste the fish-oil stench of its breath. I recoiled, ducked, and as it snapped over my head, I thrust the knife into its neck and tugged its throat out with a grunt of exertion. The hunter whipped around, wings flailing and claws tearing at the gaping wound. It stumbled and staggered about the lounge, rearranging my furniture, and collapsed across my coffee table.
I dashed for the bedroom and slammed the window closed. Outside, the dark sky writhed with hunters. Any witnesses would see a cloud of black smoke against the night sky. Nothing too alarming.
I stepped back from the window and became acutely aware of the cooling demon blood plastering my top against my skin. I grimaced and walked gingerly back into the lounge, clothes chafing. The hunter still lay sprawled across my coffee table, its blood dripping off the edges and pooling on my floor. How to dispose of a demon in Boston? Call the Institute, but that would mean answering a lot of questions about who my little guest was.
She'd gone. The corner she'd been cowering in was empty, and my apartment door hung ajar. I lunged for the door and remembered I was covered in blood. Quickly, I tossed the knife into the kitchen sink and tore off my clothes while retrieving some jeans and a tank top from my bedroom. I was still tugging on my boots and doing up my fly as I stumbled from my apartment and hurried down the stairs.
Akil had left her with me, and within the space of five minutes, I'd lost her. If she got outside, the hunters would tear into her. I staggered down the last few steps and brushed by Lacy, another of my neighbors.
"Hey, Charlie, are yah okay?" her Boston accent chimed.
"Yeah, all good..." I tossed a wave over my shoulder, heading for the main door and then stopped and turned. "Did you see a little girl come by here?"
Lacy gaped at me. She was dressed for a night out in matching tartans and lace-up Doc Marten boots. Her faux fur jacket was so white it would have glowed under UV. Not much shocked Lacy, but she'd lost her voice now. I'd forgotten to wash the blood off my face. She gestured at me, mouth open. "Is that...?"
"Oh, it's not real." I grinned brightly. "I was playing dead with my... erm niece. Y'know. Ketchup." Family members played dead with ketchup and kids, didn't they? I was sure I'd seen it on TV.
She screwed up her face, not believing me for a second. "Yeah, she went outside. Do you need some help?"
"Nope. I'm fine. We're fine. Which way did she go?"
"Toward Sidewalk Cafe."
"Thanks." I didn't wait around for more questions and just hoped I'd remembered to shut my apartment door. If anyone saw the demon draped over my coffee table, I'd have a whole lot of explaining to do, not to mention losing my deposit. I'd only been in the apartment a month and was technically meant to be making a good impression.
Early Friday evenings in Boston were as busy as weekday rush hours. I lived in the heart of South Boston, a rejuvenated district currently undergoing something of a popularity revival. Southies liked the friendly neighborhood atmosphere of the place and feared the desirable ambiance had attracted too many well-to-dos who would spoil what made the place special. I couldn't comment, being a newbie myself, but I did like the close-knit community. It felt like home, and for me, that was a damn miracle.
The many cafes and bars of East Broadway were opening for the evening, but the sidewalk was still clear enough for me to spot the little girl weaving her way through the tourists and after-work crowds. I glanced up at the sky and immediately saw the flock of hunters passing overhead. They had all the finesse of the black smoke from _Lost_ , and I winced. If this went public, my boss at the Institute, Adam Harper, would lock me down and take away my freelance status. I had to control this. Dealing with Demons was, after all, my day job.
I didn't have my Beretta Pico sidearm or my Enforcer ID. I just looked like a crazy half-dressed woman with blood on her face chasing down a little girl. Could the situation get any worse? Breaking into a run, I raced through the crowd, muttering apologies as I brushed a few arms and bumped a few shoulders. I caught glimpses of the girl's ringlets and shiny black shoes, but she was quickly pulling away, able to thread herself through the crowd unnoticed.
A hunter's clattering battle cry trilled above, right before it dove toward the sidewalk. Someone screamed, also noticing what they'd see as a peculiar cloud rushing downward. I saw the hunter, its wings tucked in, beak open. It would slam into the little girl and make short work of her fragile human flesh. I couldn't let that happen. I summoned my demon's strength, releasing my mental hold and allowing her influence to flood through my body. She broke over me, pooled fire in my heart, and flushed my veins with ethereal energy. Still running, I lifted a hand and called to the heat slumbering in the buildings on either side of the street. Boston, like all cities, was a reservoir of heat. Human activity generated more than enough heat for me to play with. Answering the call, my element sloughed off the buildings and flooded the earth at my feet. Spooling it around my arm, I cast it outward, sending a whip-like tendril of fire over the heads of the crowd. Flames licked over the hunter and washed over the body of the beast, embracing every inch of it. It screamed an air-shattering cry and then tumbled out of the sky and thumped against the sidewalk, narrowly missing the unsuspecting crowd.
I didn't have time to explain to the gawping people what was happening. They would already know it was demon related. The news and events of late had prepped them, but that probably didn't make it any easier to witness.
I dropped off the sidewalk and ran along the road, casting another bolt of fire into the sky where a second writhing mass of darkness dive-bombed the fleeing girl. "Hey!" If I could get her to stop, I could turn and deal with the hunters in one go.
She veered left down a narrow, one-way street. The malicious black smoke funneled after her. A quick glance behind told me we were virtually alone. I called all of my demon and let her ride over my flesh, consuming every part of me. My one ruined wing burst from my back. My element draped me in flame. I stopped, planted my feet firmly on the cobbles, and thrust my hands skyward, launching with them a storm of orange and blue flames. The hunters scattered, but chaos fire has an intent all of its own, and they soon found tendrils of flame licking up their limbs. Jagged fragments of pain thumped me in the chest. I grunted. My power stalled. _Damn parasite_. With a snarl, I doubled my efforts. The black cloud burst apart from within and lit up the sky in a mass of fire strikes. Burned hunters slapped against the road. Some bounced off cars, setting off half a dozen alarms. I'd never been very good at subtle.
I finished off a few stragglers with some well-aimed fireballs and then jogged down the street, shaking off my demon with each step, returning to my normal, if slightly disheveled state. When I finally found the girl curled tightly into the crook of an old tree, I was myself again, complete with blood splatters.
I saw the whites of her eyes and tried to offer her my best, most friendly smile. In the distance, sirens announced the arrival of the authorities, and no doubt the Institute would be included in that response. I crouched down and offered her my hand.
"It's okay. The man who brought you to me, Akil—he was right. I'll keep you safe, but you gotta stay close to me."
She blinked and hugged her bunny.
I needed to get back to my apartment where the symbols would hide us both from demons. If I could get home and clean up the mess waiting for me, then maybe the girl might open up and explain just what the freakin' hell was going on. I'd call Akil too. I had no idea what he expected me to do that he couldn't, and his 'do the right thing' explanation wouldn't cut it.
"What's your name?"
She blinked again, and her lips tightened. She didn't trust me, and I couldn't blame her. I had no idea what she'd witnessed with Akil, but given the fifteen minutes we'd spent together, I'd have a hard time trusting anyone if I was her.
"Shall we do this properly?" I shuffled a bit closer. "My human name is Charlie, but my real name is Muse." I held out my hand, inviting her to shake it.
"That's a funny name." A slight netherworldly accent slurred her words.
"Yeah, a not-so-funny guy gave it to me."
"I have a funny name too."
"Oh, and what's your name?"
"Dawn." She held out her rabbit. "This is Missus Floppy."
"Dawn is a lovely name." I shook Floppy's paw and then Dawn's tiny, cold hand. "I'm very pleased to meet you both. Would you like to meet my cat, Jonesy? He loves tickles behind his ears."
Dawn clutched her bunny against her chest once more and smiled. "Okay, Miss Muse."
# Chapter Two
Adam only left the safety of the Institute complex when the world was about to end or I was involved. I wasn't surprised when he filed in behind the cleanup crew. I leaned against the kitchen cabinets, arms crossed, watching the blue-overall-clad Institute employees surround the dead demon in the middle of my lounge and set about removing its carcass and copious amounts of drying blood from my apartment.
Adam gave the room a visual assessment, his gaze lingering on the framed symbols as though inspecting them for any errors. He took his time, observing his crew doing what they did best. He would look at me when necessary, not before. While I waited, I watched him, knowing he could feel my gaze crawl over him. A substantial man, both in demeanor and presence, he dressed casually in blue jeans and a blue-striped shirt. Suits weren't him, despite spending the majority of his days behind a desk. His graying hair should have been too long for a man of his middle years, but he somehow made it look distinguished. His fawn-colored eyes instantly disarmed anyone who didn't know him. He'd smile and ask you how your day was, right before he went for the jugular. He and I didn't get along.
Finally, after five minutes of rising tension, Adam turned those deceptively warm eyes on me. "I assume you're the fire demon who ran down the street in plain sight of half a dozen CCTV cameras and upward of fifty witnesses?"
Usually, he'd wait until he had me in his office before laying down the Institute law. Tonight, I was getting the no-holds-barred treatment.
Jonesy sat next to me on the kitchen counter, twitching tail dangling over the edge. My cat was an excellent judge of character.
"Would you prefer I let the flock of hunter demons eat the unsuspecting commuters?"
"I'd prefer discretion, Muse."
One of the blue-suit guys moved toward my bedroom. I tensed. "Nothing in there. It's all out here." The guy glanced at Adam, who nodded, and returned to the tacky pool of dark blood spreading across my floor.
Adam arched an eyebrow and crossed the room to my kitchenette. "I'm loath to think you're hiding something from us."
"There's nothing left to hide, Adam." I made a point of meeting his stare. He wouldn't think I was laying it on thick. This was how we always danced.
"Have you heard from Stefan?"
Now, I did flick my gaze away. "No."
"David Ryder?"
"No."
Stefan and Ryder had vanished after the event at Boston Gardens, and it remained an open wound between me and the Institute. In fact, I believed Adam only kept me on to see if either Stefan or Ryder resurfaced around me. They hadn't. The last time I'd seen Stefan, he'd accused me of killing his sister. He thought I'd deliberately drugged him to subdue his demon and believed I'd sided with his nemesis, Akil. I'd left Stefan with Ryder as he struggled to contain his demon half, and I'd helped Akil drive the Larkwrari demon back through the tear in the veil. Ryder would keep Stefan safe. Either that, or Stefan would lash out and kill him. Given the madness that had come over Stefan since his lengthy stay in the netherworld, I hadn't ruled it out. That thought—among many others—kept me awake at night.
"Need I remind you, we have authority over your living arrangements and career?"
I ground my teeth. Hate is such a strong word. I liked to think myself incapable of true hate, but I was only half-human, and my demon hated Adam Harper with every netherworldly cell in her body. It was only because I'd made a deal with Ryder not to torch the Institute or spontaneously ignite Adam that I'd refrained from doing both.
"Why were the hunter demons here?" he asked.
I shrugged. That was a good question, and the sudden change of direction caught me off guard. "They must have been sent by someone who knew where I lived."
"Wouldn't your protection symbols hide you from any such threat?" He nodded toward my framed prints with the swirling interwoven markings.
He was right. Those symbols kept me off the demon radar. "What can I say? They found me. I dealt with it."
Any number of demons could have sent the hunters after me. Demons despised my half-blood nature, detested Enforcers, and had all taken my general lack of willingness to die as an affront to their demon egos. Hell, even Akil had sent demon-nasties after me in the past, although he appeared to have resolved his homicidal tendencies since I'd literally sucked the life out of him. My immortal brother could have sent them, but I'd learned assassins weren't his style. Valenti was more likely to run me through with a sword. He liked his sibling-rivalry up close and personal.
I shuddered and shoved thoughts of my half-brother to the back of my mind. Of all the crap I had to deal with, I really didn't need the specter of Val occupying my thoughts.
Besides, the hunters hadn't been after me. They'd wanted the girl, and Akil had led them straight here. Adam wasn't to know that, so I played dumb and shouldered the blame.
He waited for me to offer up some sort of explanation that he was happy with, but when it became clear, after several minutes of silence, that I had no intention of elaborating, he made his excuses to leave. "Next time, Muse, dial down the fires from hell. I have enough trouble trying to manage demon sightings all over the city. I don't need one of my Enforcers in the headlines, especially a hybrid."
"Yes boss," I grumbled with zero conviction.
It took the Institute team an hour to wipe my lounge clean. Glad to see the back of them, I hurried them out the door with the disinfectant still drying and immediately checked on Dawn. She sat perched on the end of my bed, legs dangling over the edge, and didn't look as though she'd moved since I'd told her to stay put and stay quiet.
"Who was that?" She followed close behind me as I returned to my now-spotless lounge.
"They're not the type of people you want to be getting involved with, given their history with half bloods." I glanced down at Dawn. She stood inside my personal space, peering up at me, Missus Floppy loose in her hand. "That is what you are, right?"
"What's a half blood?"
Okay, we really needed to talk. "Are you hungry?" I asked with a smile.
She nodded.
"Chill out on the couch, and I'll make us some food." She skewed her wary gaze to the couch, regarding it suspiciously. "Sit. It won't bite."
She crossed the lounge with tight steps and hitched herself onto the couch. Curling herself into a tight ball, she sank into the cushions as though hoping they'd swallow her up.
I flicked on the TV and channel surfed to something non-offensive, watching Dawn's eyes widen to absorb the images.
I checked my fridge for food and found it distinctly lacking. I couldn't cook. I'd tried it once. Or rather, Akil had attempted to teach me, but I'd struggled with the whole idea of heating up a stove when I could use my element. Needless to say, toast is flammable, and eggs explode when heated using chaos energy. Who knew? Akil had found it highly amusing while I'd considered myself a failure. Things had changed since then, but I still shied away from cooking.
Two microwave meals it was then.
"You're safe here, Dawn." I prepared the frozen meals. "As long as you stay inside these markings, the demons can't find you."
"They did before."
I glanced back at her. She was watching a wildlife program about chipmunks, overlaid with dramatic music. "The symbols only work on higher demons, the big guys with conscious thoughts. Some of the lesser ones can still get through, if they know what they're looking for. Plus, I left the window open. Don't do that. It gives them an in. Same with the front door. So we just have to stay here until we figure out what's going on."
The microwave pinged. I managed to turn the desiccated peas, carrots and shoe-leather meat onto plates so they looked partially edible and carried them over to Dawn.
She didn't bother with cutlery and dove right in with her fingers.
"Careful, it's hot." She didn't seem to care. Eyes darting between chipmunks and her plate of food, she tucked in as though I'd served her a gourmet meal. I watched her closely, finding myself transfixed by this quiet little girl. Why did Akil have her? What had happened to him? Why leave her with me? Why were the hunters after her? I wanted to demand answers from her, but I wasn't that heartless. The interrogation could wait.
I reached out and swept a lock of her curly hair behind her ear. A trickle of my element seeped outward, as it sometimes did around demons. It happened often enough that I barely noticed it. It wasn't invasive, just a curious touch, but Dawn jerked back and glared at me as though I'd slapped her.
I snatched my hand back. "It's okay." I'd felt a little stirring of the energy slumbering inside her. She was a half blood. Had she been full-demon, my skin would have crawled by now, plus she wouldn't have been able to enter my apartment. Now that I'd sensed the power in her, I knew for certain she was like me. "We're the same, you and me."
"The man who saved me, he says you're strong."
_He would,_ I thought. Demons only care for power, and considering Akil was the Prince of Greed, he liked nothing better than overflowing chaotic energies. "He _saved_ you?"
She blinked. "He told me not to tell you."
That sounded more like Akil. I smiled. "It's okay. You can trust me."
She shook her head, ringlets bobbing. I wasn't going to push it. Not yet. But I needed answers. If Akil was using this little girl to get to me, I'd take my overflowing chaotic energies and use them to go nuclear on his ass.
"Do you think you can trust Akil?"
She shook her head. Good girl. "He's strong too." Her eyes unfocused, and what little color she had drained from her face. "I don't want to go back," she whispered.
"You don't have to go anywhere you don't want to. I promise you that." I took her dainty hand in mine and gave it a squeeze. She squeezed back, eyes glistening. There were memories in my head just like hers. I knew what it meant to be a half blood abomination among demons. If Dawn had endured half of what I'd been subjected to, she was lucky to be alive, never mind coherent.
The parasitic demon knotted around my heart tightened. I sucked in a sharp breath, tugging my hand from Dawn's to clench it against my chest. It never let me forget its existence.
My cell phone rang, providing a welcome distraction from the hideous creature hitching a ride inside me. I left Dawn watching the chipmunks and answered the call.
"Charlie, it's Detective Coleman." His fast footfalls punctuated the background drone of traffic. A car door slammed. "Dead demon call just came in. I'm about to head down to a penthouse in Battery Wharf to seal it off—the usual—and thought you'd want to know."
"Hey," I drawled. "I'm fine. Thanks for asking." Coleman worked homicide at Boston PD, but he also got burdened with cases of suspected demon involvement. I was on his speed dial as the phone-a-friend for anything suspiciously inhuman. "Why would I want to know?" Battery Wharf was an exclusive luxury apartment complex. Not somewhere you'd expect a demon to turn up dead, but things were changing. Demons were everywhere, so the press said.
"Well, for one, you're the Institute, and I'm obliged to tell the Institute when one of your ki— when a demon turns up dead."
I caught that little slip of the tongue but let it go. "Noted. And?"
"You're acquainted with the apartment owner. Akil Vitalis."
# Chapter Three
I left Dawn in Rosa's doting hands and rode along with Coleman to Battery Wharf.
Akil owned property all over Boston. I knew of a handful of apartments and townhouses, but I wagered that he had dozens more hidey-holes he hadn't told me about. He'd spent much of the last eighty years building up a financial portfolio consisting of mostly property, but also shares in several corporations. The Prince of Greed had his finger on Boston's financial pulse, much to the irritation of the Institute, who had so far failed to catch him embroiled in anything illegal. Luckily for Akil, being a demon wasn't against the law. Yet. Akil was meticulous when it came to his business persona. So much so that, when I trapped him on the other side of the veil for six months, his businesses continued to operate without the head of the snake. When he got back, he stepped back into his suit—no tie—as though nothing had happened.
But something had happened at Battery Wharf, something he hadn't planned for. His luxurious penthouse apartment with its floor-to-ceiling windows, marble tiles, granite countertops and hardwood floors, had been scorched by fire. Nothing had actually gone up in flames, from what I could tell, but something had flash-burned through the lounge, dusting the décor with soot. Smokey imprints swirled across an otherwise crisp white ceiling.
Coleman followed me, watching my reaction. He knew I had a 'relationship' with Akil, but he didn't know the details. I'd told Coleman exactly what the Boston PD needed to know. Akil was a Prince of Hell and not to be fucked with unless they wanted to get their fingers burned. Prior to my revelation, they—like everyone else in this city—thought Akil was every bit the charming and successful businessman who happened to have volunteered for the role of demon ambassador. Nothing was ever that simple with Akil.
A few uniformed cops trailed in behind while I wandered. Outside the windows, Boston Harbor glistened in the sunlight. I'd spent many an hour on that terrace, watching the boats below. This apartment was different from the others he owned. Akil only came here when he needed time to think. This was his city bolthole. Modern furniture married with chic accents, creating a timeless quality where old married new, much like its owner.
I scanned the lounge area. Two empty wine glasses stood proudly on the coffee table. On the floor, as though kicked off in a hurry, lay a pair of women's high-heeled shoes. My sightline followed the discarded shoes to the body sprawled in the doorway to the master bedroom. Thankfully, not Akil's body. He was immortal, but I'd learned his human vessel wasn't. When Coleman had mentioned a body, I'd guarded myself against the possibility it could be Akil. When I found myself looking at the slippery gray skin of a female demon lying face down, a relieved sigh slipped from my lips.
I crouched down beside the corpse, noting the fin running down her spine and the four-inch stab wound in her lower back. I tilted my head to get a good look at her face. She had whiskers, like those on a catfish, and her lips pulled back into a cod-like grin. Considering her human vessel had been stunning, her demon was grotesque. I knew her as Carol-Anne. She'd once tried to kill me—what demon hadn't?
"Your thoughts?" Coleman tucked his hands in his coat pockets. In the bright apartment, his face had the same creamy pallor as the dead demon's.
I straightened and stepped around her body into the bedroom. "Her name is Carol-Anne. I knew her. She's the owner of The Voodoo Lounge, a demon club in Charlestown." The bedroom's white ceilings, pale blue walls, and mahogany floors gave the impression of understated luxury. The bed sheets were knotted, pillows scattered. I stepped onto a plush white rug and felt it squelch beneath the soles of my boots. Pools of water glistened on the hardwood floors. I touched the bed sheets and rubbed the moisture from my fingertips. "She was a water elemental. Pretty high on the food chain."
The last time I'd seen her, she'd been on her knees in front of Levi, the Prince of Envy. He'd been sent to collect and escort me to my father, Asmodeus. I'd managed to delay Levi's plans, but the threat still hung over my head like Damocles' sword.
"The water damage seeped through to the below apartment. That's how we got the call. She's a water elemental, huh. And what element is Akil Vitalis again?"
He already knew the answer, but I played his game. "Fire."
Coleman scratched his chin. I'd been teaching him the finer points of elemental chaos demons, at least the details I knew, which were woefully lacking since Akil wouldn't take my calls. "How well do you know her?"
I glanced back at Coleman standing in the doorway. He was tall and wiry thin, trading strength for speed, and coiled as tightly as a spring. He regarded me as though he might have to slap some cuffs on me, and that prospect didn't please him. Thankfully, as this was an all-demon incident, I outranked him. It didn't stop his cop instincts throwing up warning flags though.
"Not very. When I went off the rails a few months ago, I went to the Lounge looking for help. She was there. She put me in touch with someone. That's all I saw of her."
Coleman dug into his coat pocket and took out a pack of gum. He popped two pieces into his mouth and chewed. A week ago, he'd declared he was giving up coffee. The gum helped. "Why's she here?"
"I have no idea." The fact she'd been on Levi's payroll and now lay face down in Akil's apartment didn't bode well. The princes weren't supposed to meddle in the machinations of their princely brethren. And it looked as though Akil had gone beyond meddling, straight into provocation. Why? Did it have something to do with the half blood girl in my apartment?
"Charlie... C'mon." He arched an eyebrow, warning me that he'd recognize a lie. "You know Akil..."
I glanced at the bed and wondered why Carol-Anne would have been here, in the bedroom. My mind jumped to all sorts of conclusions, some of them graphic, none appealing. "Maybe they're an item. Hell, I don't know." Thoughts of exactly what Akil and Carol-Anne might have been up to in the bed lodged in my head. I grimaced. Water and fire didn't mix, but that didn't mean it hadn't happened. I had fire in my veins and had been briefly involved with Stefan, an ice demon. It should have been wrong—every part of my half-demon nature should have been repelled by Stefan—but damn, it had felt so right. Akil and Carol-Anne? I shivered. Stranger things had happened. Not that I should have cared. Akil could do what he pleased. It was none of my business.
Coleman gave up waiting for me to explain and shook his head, lifting his hands in surrender. "Well, as this is demon, it's not my problem. I thought I'd do you the favor of giving you first refusal before the Institute comes down on this like the NSA at a hackers' convention. Have you seen Akil recently?"
My memory flashed on the image of an amber-eyed and disheveled Akil at my door. "Not since the Garden event." I'd been lied to enough that lying to anyone else grated against my better judgment, but Coleman didn't need to get involved.
He held my gaze, trying to stare the truth out of me. Cops must have that universal expression stamped into their DNA. He couldn't be sure I was lying, but it wasn't his call to make. Adam, however, would grill me until my juices ran clear.
"Thanks for calling me."
Coleman nodded. "Sure."
I did my duty and called the Institute while Coleman listened in and then waited for them to swoop down before the press got wind of it and Akil's sexy-as-sin picture could be plastered all over the tabloids. He wouldn't like that. His halo would slip in the eyes of the Boston public. Never mind that his tarnished halo hung on devil horns. I needed to speak to him. Leaving bodies in his apartment wasn't his style, but then neither was saving little half-blood girls. Unless you counted me. Did saving young impressionable girls twice in fifteen years make it a trend if the rescuer is immortal?
I dialed his number on my cell but didn't get a reply. I called Rosa and checked on Dawn. She was sleeping, and Rosa was happy to watch her for me. Hanging up, I nodded at the Enforcers inviting themselves into Akil's apartment and readied myself for the interrogation.
# Chapter Four
Adam was thorough. He suspected I was lying and tried to talk me round in circles, but I'd been in the hot seat before and knew how he operated. I argued that the hunters showing up and the dead demon in Akil's apartment happened to be a coincidence. He knew it was bullshit, of course, but I didn't falter, and eventually, he released me. I asked him for a week off, citing stress as my motive. He just about choked on his reply, but he couldn't refuse me, not without risking me quitting on him. Again. For some reason, he didn't want to get rid of me, and I couldn't quit, not if I was going to find out the truth about Subject Beta—a.k.a. me.
The fresh bite in the air and bleeding sky told me it was almost dawn when I got home. After apologizing profusely to Rosa, I found Dawn asleep in my bed—a tiny fetal bump beneath the sheets. I collected a spare pillow and quilt and sprawled on the couch, holding out little hope that sleep would come. _It_ didn't allow me to rest. As the quiet of my apartment settled over me and the comforting sounds of Boston faded into the background, the parasitic demon clutching my heart awoke.
He crawled out from my insides like a spider nursing its web. Seeking tendrils of darkness groped through my mind and invaded my thoughts. Alive, Damien had been a wretched, blood-thirsty murderer who derived pleasure from pain. I'd been sold to him, a worthless half blood, a plaything, and he'd used me in every way imaginable and in some ways unimaginable. Akil had saved me the first time, but when Damien returned, he'd found new ways to torture me. He'd tied his very essence to mine, and when I'd killed him, his soul—or whatever the fetid thing inside of his carcass had been—had ported over to me. It struck at my heart, sunk its barbed claws in, and made itself a new home.
Nights were impossible. My dreams, when they came, resembled poisonous recipes of blood, desire, and savagery. They weren't tangible enough to hold on to when I woke, and for that I was grateful, but their filthy residue lingered during the day, soiling my thoughts, and they were getting worse.
Dawn touched my face, abruptly waking me. I heard the echoes of my scream in my ears and felt wetness on the pillow. As I blinked, a few more tears escaped. Dawn climbed onto the couch and wriggled beneath the quilt beside me. She snuggled close, head tucked beneath my chin. I didn't try to stop her. After a few minutes, the tremors rippling through me slowed, and the tears dried on my cheeks. A little of my element curled around her, drawing her close. She couldn't have known about the demonic tumor inside of me, but she didn't need to. She would have her own horrors stalking her dreams. I closed my eyes and prayed that the pain and horror of Dawn's past was over, that she'd escaped at a younger age than me. I hoped she would never have to endure half the things I did.
# Chapter Five
Leaving Dawn in my apartment while I went looking for Akil made my gut squirm, but taking her with me would be an equally bad idea. Reminding myself she was safer inside than anywhere else, I left her with the TV and some snacks and locked the door behind me. I hesitated. If I took her with me, I'd at least know she was safe, but if she still had the hunters after her, we'd soon find ourselves fending off a repeat attack, and Adam wouldn't let it slide again. If the Institute got hold of her, she could kiss any hope of freedom goodbye.
I rented a car with cash, removed the battery from my cell, and tucked my Beretta Pico gun into its holster inside my coat. You can't be too careful when the Institute has eyes on you. It took all morning driving around Boston to tick off the properties I knew belonged to Akil. Most, he'd rented out. Others were empty. It had been a long shot, but there was one last thing I could do.
Summoning a demon is easy. It's what you do with them when you have them that's tricky. A little blood, a focal point, and an invitation extended to their many names is all it takes. It only works for higher demons, and more often than not, they're mighty pissed at being yanked out of their daily routine. Summoning a prince was tantamount to inviting a great white shark into a cage in your front room and then getting into the cage with it. I tried not to make a habit of summoning demons, mostly because I spent my life running from them. But Akil had gone AWOL, leaving me little choice.
I couldn't summon Akil in my apartment, not with Dawn there. I wanted frank answers to direct questions and suspected Akil would lie with Dawn hovering around my legs. I also needed some way of tempering his power, just in case he took offense at my summoning him like a pet. There was only one other place I could go where there were protection symbols on the walls and where prying eyes couldn't penetrate: Stefan's old workshop.
Stefan had restored cars when he wasn't stalking misbehaving demons for the Institute. He kept a workshop not far from Ryder's place. I'd visited both in the last couple of months, but neither Stefan nor Ryder had returned, and the workshop remained untouched with tools strewn around. An old Dodge Charger hunkered in the center, waiting for someone to put it back together again.
I used the key Ryder had given me almost a year ago and opened up the workshop only to find it empty. I hesitated in the doorway. I'd expected to see the chassis of the Dodge in the middle of the floor and the walls plastered with various tools and equipment. But it was all gone. Venturing inside, I checked the office and found it stripped bare. No furniture, just scuffed walls and dust bunnies. I walked through the back door and into the den. I already knew it would be empty, but seeing it naked where before it had been an Aladdin's cave of weapons seemed so final. Even the symbols that should have been spray painted across the walls had been scrubbed off and painted over. The bare bulbs illuminated an empty windowless room. My heart sank.
_He's not coming back..._
I'd always assumed Stefan would return to Boston. He'd been through hell—literally. Trapped beyond the veil with his impeccable control slipping, the death of his sister, and after my so-called betrayal, it was to be expected that he would need time out to regain control of his demon. But he would come back. Now though... He certainly wasn't coming back to his workshop. The place was cold, all traces of his life gone. Slouching, I puffed out a sigh. I missed him more than he'd ever know.
With heavy steps, I turned and yelped.
Stefan leaned against the doorframe, brittle-blue eyes sparkling. His faded blue jeans sported a few frayed tears that could pass as deliberately fashionable. But knowing him, they were probably the result of a demon getting too close for comfort. His Timberland boots had been scuffed raw. A midnight blue V-neck sweater hugged his athletic physique. His clothes were casual, but his stance was not. He'd crossed his arms over his chest and glared a narrow-eyed stare. The rakish smile I'd come to love was nowhere in sight. Instead, his lips were pursed into a thin line. His platinum blond hair was shorter than I remembered but still long enough to slide my fingers through. A memory of doing just that distracted me. I blinked rapidly before skipping my gaze away.
"What are you doing here?" His cold voice reminded me of how we'd last seen one another. We'd fought. I'd flung fire, and he'd thrown ice-daggers.
"I... er..." I'd come to summon Akil. The truth was a bad idea. He thought I was in cahoots with Akil when nothing could have been further from the truth. I should lie, tell him anything, but the thought of lying to Stefan just felt outright wrong. I'd never lied to him and didn't want to start now. "I..."
"Never mind," Stefan bowed his head, releasing me from his penetrating stare. "You should leave. I'll need your key."
I looked down at the key in my hand. Just a key, but it felt as though it should mean something more, like I was letting the last piece of him go. "Stefan—"
"Nothing you can say will change anything that happened, so don't waste your breath."
My throat tightened. How had things gotten so bad between us? "I was going to say, I'm pleased to see you're okay..."
He shoved away from the door and, within a few strides, stood in front of me. He plucked the key from my hand and met my eyes. "I'm not okay, Muse." His hand closed into a fist. "Don't tell the Institute you've seen me."
"I wouldn't—"
He turned and strode out the room, leaving me staring at the empty doorway. I shivered as a thread of cold air unraveled around me. I hadn't seen him for two months, I hadn't even known if he was alive, and that was our reunion conversation? Like hell, it was.
I jogged after him, anger flaring heat through my veins. "Hey." He stopped, boots scuffing the dusty floor, but he didn't turn. My breath misted, a reminder of the volatile nature of his element. It hadn't always been that way. "I don't deserve this." Was that a tremor in my voice? So much for conviction.
His shoulders tensed. I found myself readying for an attack by spilling a little heat into my fingers. The temperature in the workshop plummeted. The air I breathed tingled through my clenched teeth and burned my lungs.
He turned his head, but didn't look at me. It was more of a cursory acknowledgement. He hesitated, about to speak. Whatever he had on the tip of his tongue, he let it rest there and walked away for the second time.
"Stefan, wait..." I followed and stepped out onto the narrow backstreet, shielding my eyes from the sun's glare. A late 60's style Dodge Charger had been parked outside the workshop, leaving just enough room for cars to pass behind it. It had a glossy new coat. "Are you leaving for good?"
He tugged open the driver's door. I caught a glimpse of black leather seats with red piping before he got inside and slammed the door behind him. I wanted to yank that door open and yell at him to demand he listen to me, just for a few seconds, just long enough to make him understand why I'd done the things I had. But I didn't move. We would fight. It was clear that nothing I could say would end well.
He turned the engine over, and the throaty V8 grumbled to life. He was leaving. I might never see him again, and yet I didn't have it in me to stop him. Maybe because he was right. I shouldn't have brought Akil back. Never mind that I'd had to, to free Stefan. I shouldn't have pumped Stefan full of a drug that inhibited his demon (also done to protect him). I should have stopped my owner from killing Stefan's sister (as though I hadn't tried).
The car growled as he turned it around at the end of the dead-end street then cruised back to where I stood. He opened his door and climbed out enough to peer over the roof at me, expression harsh, eyes cold. "I'm sorry we met, Muse. Don't come looking for me. It's not safe."
_I'm sorry we met..._ I tried not to reveal how deeply his words cut and shrugged a regret-laden shoulder. "Fine." Did he hate me that much? An emotional knot tightened my throat. I clamped my mouth closed, pinching my quivering lip between my teeth.
He waited, perhaps expecting more of a fight. He was right. No words could change the past. He glanced away, looking toward the main street, the exit, his way out. The time for redemption slipped past, and he ducked back inside the car. Had he glanced at me, I might have found the courage to say something to stop him, but he hadn't glanced back. He didn't even say goodbye.
He gunned the engine and spun the rear tires on the Dodge before it hooked and lunged away from the workshop, away from me. At the end of the street, the tail lights blinked red, and the engine roared once more before he peeled the car into traffic and disappeared out of sight.
I trembled and blinked back brimming tears. Screw him. I didn't need him. I didn't need anyone. It wasn't as though I cared about him or regularly dreamed about the cooling touch of his element easing through the blazing heat of mine. I certainly didn't want to remember how it felt to have his protective embrace pulling me close or how my name tumbled breathlessly from his lips when we lost ourselves in one another.
With a snarl, I turned and slammed a fist into the workshop door. Pain lanced up my arm. I hissed and spat my anger until most of it had fizzled away, leaving me nursing bruised knuckles as I trudged back to my car.
# Chapter Six
Stuck in traffic, I jabbed at the buttons on the radio, trying to find some music that might take my mind off the bitterness Stefan had left behind. A track with a fast beat and minimal lyrics did the trick. Drumming my fingers on the steering wheel, I squinted through the drizzle on the windshield at the red tail lights blooming ahead of me. The wipers sloshed back and forth, adding an intermittent squeak. An autumn storm hunkered over the city. The oppressive gray skies suited my mood.
My plans hadn't changed, but if I was going to summon Akil I'd need somewhere secure and private to do it. The only place left was my apartment. Even if I did manage to summon him, I couldn't trust his answers. _Focus on the facts_ , I told myself. I knew someone or something had attacked Akil. It took a lot to wear him down to the point where he didn't bother to heal himself. Or he was faking it. If I assumed everything I'd seen and heard in those few minutes he'd introduced Dawn had been true, then he'd either stolen the girl from someone who'd squared up to him, or he'd been more interested in protecting Dawn than himself. My instincts told me he was protecting her, but my instincts were all screwed up when it came to Akil.
I grumbled a frustrated noise and jabbed at the radio again.
I knew Carol-Anne and Akil had been having something of a civilized conversation, enough to warrant a glass of wine together. She'd kicked off her shoes, so things had gotten cozy. They'd moved the party to the bedroom and made it as far as the bed, if the drenched sheets were any indication. But something had gone wrong—at least for Carol-Anne. They'd fought... Mm, that didn't ring true. Carol-Anne was formidable, but she wasn't in the same league as Akil. She shouldn't have been able to rough him up. I'd seen him slam her down on The Voodoo Lounge bar without so much as dislodging an immaculate hair. But they _had_ fought in his apartment. The scorch marks and sodden floors testified to that. _So Akil kills Carol-Anne, and then shows up at my place with a little girl who just happens to be a half blood?_ Had Dawn been with them the whole time? Or had Akil collected her after killing Carol-Anne? And what did he expect me to do with her that he couldn't already do himself? He had access to resources on both sides of the veil, whereas I only had my wit and penchant for trouble. I was missing something.
The traffic inched forward. My windshield wipers squeaked. I eased my rental car into motion, rolling closer to the car in front of me. Leaning away from the door, I tried to get a better view of what caused the jam. The side window exploded inward. Shattered glass dashed my face and pummeled my clothes. I let out a startled squeal as a clawed hand the size of my head reached in and made a grab for my arm. I lunged away, twisted in the seat, and angled myself so I could shove off the door and shimmy backward into the passenger seat. The thick arm plunged inside the car again. A purely demon growl bubbled up my throat. I kicked out, striking the hand with the heel of my boot. It recoiled and then struck again, snatching knobby fingers around my ankle and yanking me toward the window.
The gearshift dug into my lower back, twisting me awkwardly, grinding against my spine. I spat a curse and kicked the hand with my free foot. The demon made a wet snarling sound, something like a bathtub full of water gurgling down a drain, and then drove his gnarled face through the window. He grinned, his gaping mouth too large for his misshapen pug-face. The passenger door wrenched open. A cool breeze wafted over my face, dashing my hot cheeks with rain. Briefly, my mind registered the rain tasted salty, then a dry, pitted arm hooked around my neck—skin rough like tree bark—and hauled me backward out the car. My leg slipped from the boot the pug-faced demon had hold of. I had a moment of weightlessness, right before my new assailant slammed me down on the hood of a car. My head thumped against metal. My teeth jarred. A net of white noise cascaded in front of my vision. Unconsciousness loomed. I blinked and tried to refocus, but the vast bulk of the rough-skinned demon blotted out the daylight. He pinned me beneath one branch-like forearm and leaned all of his weight into my chest. My ribs compressed. I tried to heave him off me, but my puny human hands barely closed around the girth of his muscles.
"Bears-the-flood," he growled around yellow teeth.
"Huh?" I grunted.
"Where half blood?" Clearly this demon didn't have much need to polish his human speech. He was probably fresh from the netherworld.
_Screw this._ I met his flat eyes and blazed heat through my limbs. My demon woke, her awareness fixed as sharp as lasers on our attacker. "You'd better hope you just look like you're made of wood..."
I inhaled, drawing in heat. Woody must have sensed me soaking up the elemental energy. He straightened, easing off my chest, and eyed me curiously. He'd underestimated me. One of these days, the demons would stop making that mistake, then I'd be screwed. But not today. I grabbed my gun, flicked the safety off, aimed, and fired in the time it took Woody to blink. A hole punched through his cheek, cricking his head back. He growled, and swung his stare back to me. I fired again. The gun jumped, and the round smacked into its chest. Another shot. He staggered back, arms flailing, eyes wild. Considering I'd just planted three bullets in his head and chest, he didn't seem all that concerned.
I fired once more for luck. He bumped back against my rental car and let out a roar that barreled down the jam-packed street. Bullets clearly did little more than piss him off. I lifted my free hand and coiled a thread of energy around my fingers. A smile hooked into my lips. A trickle of glee shivered through me, further arousing my demon-half. I flung a bolt of heat at Woody's chest. Fire blanched over him on impact. He wailed like a banshee and ran, slamming into stationary cars and ricocheting off moving ones. My smile died as I saw a member of the public using his cellphone to film the entire screw-up from inside his car. Dammit, Adam would be on my back again.
The remaining pug-faced demon sprang onto the top of my car, denting the roof as he landed on all fours and bellowed a roar loud enough to rattle the car windows. He looked a lot like a gorilla, if they came hairless, sporting forked tongues and blood-soaked eyes. He lifted my boot, waggled it, and launched it at my head with surprising accuracy.
I ducked, swept my left hand in a tight circle, and coiled energy around me. Fire burst into existence, gathered up my arm, and enclosed my hand. I aimed the gun in my right hand, figuring I might as well hit pug-face with all I had. The occupants of the car I was sprawled on gawked through their windshield. At least they weren't filming. I winked, the trickle of glee swelling into something more akin to lust for the hunt.
Pug-face sprang. I lashed out, casting a line of liquid heat, thrusting behind it a rush of energy that slammed into the demon mid-leap. The gunshot cracked the air. The demon lit up like a bonfire and jerked as the bullet punched through his gut. Pug-face landed hard against the side of the car, jolting me and the passengers, then dropped with a dull thud against the road.
Sliding off the hood, I counted two flaming demons sprawled on the road. Black smoke churned skyward. Several onlookers had left their vehicles, but none ventured too close, not while the fires still blazed. I shook off the tingling excitement, retrieved my boot, and made a quick exit. There was only so much damage control I could do, and no amount of bull crap about escaped animals from the zoo was going to mask my very public display of power.
It didn't take long to reach Southie on foot. I hadn't been far away when the demons had attacked. In all likelihood, whoever pulled their strings had been watching the main routes around the last known location of their hunters. Whoever it was, they obviously suspected Dawn was in my care. Maybe someone who knew I was connected to Akil?
I walked around the block a few times and took a few random shortcuts, checking over my shoulder for tails. Demons didn't do subtle. They stood out in a crowd simply by trying too hard to be human. They moved with a fluid grace, each step, each gesture, weighed and measured. They stalked like predators fixed on their prey. I wasn't being followed.
By the time I reached my apartment building, my soaked clothes stuck to me, and my hair clung to my face. My back and head ached, muscles tightening as bruises bloomed. Being half demon didn't make me any less squishy. When my demon rode me, I was tough, my human skin and clothes protected beneath her otherworldly armor of elemental energy. But that's only when I was powered-up. Otherwise, I was just as fragile as everyone else. It was a trait my old owner had enjoyed exploiting. That same owner now pulsed inside me.
I jogged up the steps in my apartment building to my floor and dug into my pocket for my keys.
"Hey, Firecracker."
My stride faltered. Ryder leaned against the wall outside my apartment, looking every inch the returning tomcat. He chewed on a toothpick, thumbs tucked into his cargo pants pockets. The black shirt might have looked smart on anyone else, but he'd somehow managed to crease it so the cotton resembled crepe paper. Even the creases had creases. He'd grown his hair out since I'd last seen him. Mocha locks scuffed his old-soul eyes and curled around his stubble-dashed cheeks. He looked older than his mid-thirties, due in part to life's assault on him. I'd never asked about his past, and he didn't ask about mine, but I had eyes, I'd seen a story on his face, and I'd listened when he thought I wasn't.
"What the hell are you doing here?" I unlocked my apartment door.
"Nice to see you too."
"They're watching," I grumbled, shoving open the door and suggesting he enter with a short hand gesture.
"When aren't they?" He sauntered inside. His shirt bunched around a concealed gun at the small of his back, and I wondered if he was here on business. Ryder looked like something the cat dragged in, but he'd been one of the best Enforcers Boston had until he'd vanished with Stefan.
Closing the door, I tossed my keys on the kitchen counter top. The TV was on. An empty bowl and plate sat on the couch. I checked the bedroom and found Dawn sitting below the window, teasing Jonesy with a thread of cotton. She looked up and acknowledged me with a tight smile. My cat didn't acknowledge my presence, his allegiance decided.
When I turned around, Ryder stood behind me, eyebrow arched. He plucked the toothpick free and pointed it at Dawn. "Whose kid is she?"
"That's what I'm trying to figure out." I eased the door closed and returned to the kitchen to fix myself a coffee. Thick. Black. With a mountain of sugar.
"You look like crap, Muse."
I could trust Ryder not to mince his words. "Thanks." I grabbed two mugs and clattered about my little kitchen. I liked to take my frustrations out on inanimate objects.
"You have blood on your cheek."
I swiped at my face. Ryder mirrored me, indicating I should aim higher. "You shouldn't be here." I peered into the shiny kettle at my reflection, spotted a splatter of demon blood below my eye, licked my thumb, and wiped it off. "If the Institute see you—if they think you're here..." I shook my head. "I've got enough to deal with right now. Speaking of which..." The water in the kettle simmered. Leaning back against the counter, I swept my damp hair off my cheeks. "Is there such a thing as salty rain?"
He managed to simultaneously frown and smile. "How the fuck should I know? Do I look like a weatherman?"
An unexpected pang of loneliness assaulted me before I could chase it away. Damn, I'd missed his surly no-bullshit stance on life. "Dammit, Ryder. Where have you been?"
He slid his gaze away and moseyed around my apartment. "Nice place." It didn't take long before he noticed something on the floor. He crouched down and scratched at the wood grain. Rising, he cleaned the substance from beneath his nail and flicked his gaze back to me. "Make a habit of inviting demons over?"
"Yeah, actually. Wednesdays are movie nights. They bring the snacks." His lopsided grin drew the slither of a smile across my lips. "Ryder, I could have done with you around." I hadn't realized how much he'd meant to me until he'd pulled the vanishing act. Ryder was still technically my handler, and mentor. While he'd reported my progress to the Institute, he'd also taught me how to shoot, where to aim on various demons, and how many f-words you could ram into a single sentence. Since the only other teacher I'd had was Akil, Ryder's no-holds-barred method of teaching had been... enlightening.
He pretended to admire the framed symbols on my walls. "Nah, you're fine. You don't need me. Never did."
I dropped my gaze. He didn't know about my former owner caged inside me. I could never tell him. His loyalties lay with the Institute, and the fact I was compromised wasn't something I wanted Adam knowing. "Are you staying?"
"Can't."
I swallowed an unexpected sadness. When had I become so lonely? "It's Stefan, isn't it?"
Ryder drew in a breath and winced before meeting my gaze. "He ain't doin' so good."
I'd gathered that when he'd turned his workshop into a walk-in freezer. I clenched my jaw and clamped my hands against the edge of the counter top. "When are you leaving?"
"Tonight." He scratched his chin with his thumb, frowning. "Are you in some kinda trouble?"
"Not yet."
He glanced at my closed bedroom door. Dawn's giggles bubbled from behind it. "Shit, Muse. Is Akil still sniffing around?"
"Actually, no. He's MIA."
Ryder's expression darkened as it always did when Akil's name came up. "Tell me what's goin' on."
I told him everything that had happened since Akil had appeared with Dawn. He listened, making the obligatory noises when I mentioned Akil. By the time I'd finished, the frown on his face had turned disapproving. "Don't trust her."
"Dawn?" I scowled. "She's just a little girl."
"A little girl dumped on your door by a Prince of Hell. Whatever he did to get her, it weren't pretty if he was cut up."
I bit back my denial. Ryder was right. As much as I wanted to believe Akil had left her with me for good reasons, there was no denying his absence was suspicious. "She's a half blood, like me."
Ryder nodded. "All the more reason to stay clear. You've been through enough. You don't need to deal with Akil's baggage. Hand her over to the Institute."
A ripple of heat bloomed inside me. I ignored it for the sake of friendship. "How can you say that? You know what they'll do to her, right? You said it yourself once. You know what they did to Stefan. He despises them."
"He hates Adam and what the Institute did to him under his father's orders. There's the difference. She's dangerous—"
"Like I was—"
"Like you all are. Fuck, Muse. I saw you pump a Prince of Hell full of enough power to turn him into something not even demon—something... god-like. I watched you and Stefan go at each other, calling god-knows what from the netherworld, and I've seen what it's done to him. Half bloods are dangerous. Don't try to sell me some crap about control. That lil' girl you got in there..." He jabbed the toothpick at the door. "The safest place for her is behind bars at the Institute."
I placed my coffee very carefully down on the counter. My element pulsed in an unpleasant wave, breaching my control and then receding as I drove it back. "I can't believe I'm hearing this from you." They'd locked me behind bars. Twice. I would have despised them for that even without my history of abuse.
"What if she can call the kind of power you can, huh? What if Akil knows that, and he's left her here to hurt you like a goddamn Trojan horse?"
"He wouldn't. He's different."
Ryder gave me a sharp look. "Don't defend Akil."
I pressed my lips together and ground my teeth. "He really wouldn't hurt me." Was I convincing myself as much as I was Ryder?
"Why? Tell me why he wouldn't screw with you again? You're his weakness. You can drain him dry. You think he wants you walkin' around lordin' that much power over him?"
"You said it yourself. Yeah, I can drain him, and I can feed him more energy than he can handle. In the netherworld, you didn't see what I did. I... I killed a lot of demons, and it changed how he looks at me." I tried not to think about how I'd turned a crowd of demons to ash, mostly because of how I found the memory to be disturbingly comforting.
Ryder blinked, but my revelation only gave him a few seconds pause. "Jesus, Muse. I didn't come here to hear you spout off about Akil's sudden change of heart. He's all demon and a stone-cold killer, or have you forgotten how he murdered your friend? Sword through the chest because the guy was in the way? You saw it, Muse, with your own eyes. You read that blade and saw Akil kill Sam."
A shattering pain sparked across my chest, drawing a hiss from between my teeth. Heady emotions often gave my parasitic owner a waking jab. The guilt I felt over Sam's death provided more than enough emotional fuel.
I pressed my hand over my heart and tried to suck in a breath around the pain. "Get out."
"Sure," Ryder said, mistaking my grimace as one of anger. He headed for the door. "Y'know, I told Stefan you aren't what he thinks." He tugged the door open and glanced back at me. "Don't make a liar out of me."
The second the door closed behind him, I fell against the counter. My trembling arms barely held me up. I couldn't catch my breath. With each throb, the tightness around my heart increased, shortening each gasp until the edges of my vision darkened. I willed myself not to collapse, not with Dawn there. I had to keep it together. But the dark hungered. Echoes of Damien's laughter resounded through my head. I'd stabbed him, torn out his throat, and burned his body from the inside out, and still he'd laughed. My stomach hitched, trying to eject my breakfast and with it the demon rotting away my soul. _Damn him back to hell._
Ryder burst back into my apartment. "Institute. And they ain't for me." He strode across the lounge, eyes pinched with concern as he saw me struggling to stand upright. "Shit, Muse, what the–"
"I'm fine."
He dug into his pocket and handed me a set of keys and a wad of cash. "Take my car," he said, softer this time. "It's the beat-up Mustang parked 'round the back. They'll be tracking yours. Don't use ATM cards. From what you tell me, that lil' girl is too hot to stay here. Get her out of town. Do what you've gotta do."
He squeezed my shoulder. Ryder didn't do physical expressions of friendship; this was serious. "What about you?" I found my voice. "They'll take you in."
"They know everything already." His lips twisted as though he'd tasted something foul. "I'll tell 'em I was waitin' for you to come back. I got it covered. Go." He'd been keeping the Institute informed of Stefan's progress. I should have known. I could see the truth in his eyes. He wasn't happy about it, but it was his job, and Ryder was, first and foremost, an Enforcer.
After grabbing Dawn, the three of us hurried out of my apartment and down the hall to the fire exit. I shoved open the door and blinked as the stairwell lights flicked on.
Ryder hung back. "You'd better know what you're doing, Muse."
"Don't tell them." I clutched Dawn's hand in mine. "Don't tell them what's going on. Not yet. Let me figure out how to keep her safe. If I can't, I'll take her in."
Footfalls hammered on the stairs. Ryder frowned but nodded a silent agreement, then dug into another pocket and tossed me a cell phone. "Emergencies only. I'm in the contacts. Call if yah need me." He strode away to face his colleagues.
_Beat up_ were two words for the '66 Mustang. Another couple were: _scrap metal_. Once pale blue, now sporting three rust-red donor car doors, it sat on fat Firestone tires and chipped chrome rims. The interior hadn't fared much better. Embossed ponies galloped over torn leather seats. Knots of loose wires dangled below the dash, and when I turned on the ignition, the dials didn't respond.
I turned the keys and prayed the engine had seen more loving care than the body. The V8 grumbled to life. The car coughed, belched a puff of black smoke, and found its rhythm.
Dawn whimpered in the passenger seat. "Hang on," I told her, catching a glimpse of two Enforcers in the rear-view mirror. I eased the car away from the curb, trying to appear inconspicuous. They noticed us immediately. Both started running. One palmed a gun. The other chinned a cellphone.
Ramming the car into gear, I jammed the throttle open and lurched the Mustang forward. Ryder had taught me a few things about driving fast. We'd often raced each other to demon incursions. I was no street-racer, but I could handle a little excessive speed.
"Put the belt on." Wrenching on the steering wheel, I swung the car onto Dorchester Street. The Mustang loped into lane. Tired suspension gave the car an unhealthy amount of body roll and threatened to break away the rear end. I planted the throttle and accelerated hard. A few cars behind us, a silver Ford Taurus swung out of the side street and carved its way through the light traffic.
"Dammit."
Dawn secured her belt and hunkered down in the passenger seat, her rabbit pulled close.
"It's gonna be fine," I muttered, mostly to myself. By running, I'd already ticked off the Institute. Just as long as they didn't know why I was running, Dawn might stay off their radar. Running was her only chance at a normal life.
At an intersection, I swung the car left, bumping over the uneven road surface and fighting the steering wheel. We sped on, through a roundabout, following Old Colony Avenue. Parked cars choked the roadside, while ahead, a stream of traffic slowed my approach to the expressway. I glanced in the mirror, spotted my tail screeching through the intersection, dropped a gear, and bumped the Mustang over the inlaid stones between the lanes. The car bucked and twitched into oncoming traffic. Dawn let out a squeak. I swung the wheel and peeled back into the correct lane, planting my foot to the floor. With a throaty roar, the Mustang gobbled up the road.
Behind, the gray Ford knotted in traffic. I wove around slower cars. The greenery of Joe Moakley Park opened up to my left, the railway tracks and expressway to the right. Just a few minutes more and we'd be on R93 out of town.
A wall of red tail lights flared ahead. I slammed on the brakes, sensed the car breaking loose, and pumped the pedal to try to control the skid. The fat tires squealed and bellowed smoke. I turned the wheel into the slide, prayed we didn't hit anything, and held my breath. The Mustang rocked to a halt sideways behind the jammed traffic, body panels untouched.
I puffed out a breath. "You okay?"
Dawn chewed her lip and nodded. A squeal of tires snapped my attention back to the road. The incoming Ford slammed on its brakes and attempted to pin us in. I caught a glimpse of the driver. Jenna Sparks. Enforcer. A colleague. We'd traded small talk a few times in the Institute cafeteria. Raven-black hair, cut close against a face too sharp to be considered beautiful. Fine cheekbones, full lips, fearless brown eyes. And she was tenacious as hell.
Ramming the Mustang into reverse, I twisted in my seat, and plowed the car backward off the road. We trundled over a grass median and down onto a slip road. The undercarriage let out a nasty cry as it scuffed the road. Yanking on the wheel, I swung the Mustang around, so we faced the right direction, and dug around for first gear.
The Taurus bumped over the median after us.
The gears made a mangled, gnarling sound, protesting at the rough handling. I pumped the clutch and forced the stick into gear. Any gear. The Mustang sprang forward, throwing me back into the seat. Engine roaring, we built up some speed. But so did the Ford. As it drew up alongside, I shoved Dawn's head down. "Stay down."
Jenna glared at me through the window. She pointed, suggesting I might like to pull over. Yeah, that wasn't going to happen. I tightened my grip on the steering wheel and rammed the Mustang against her immaculate Ford. Screaming metal-on-metal briefly deafened me. The Ford veered off before coming right back and sideswiping us.
"Hold on!" I locked my hand around the parking brake and tugged. The rear end of the Mustang locked up and swung us sideways, helped by a quick jerk of the wheel. The Ford sailed on. We came to a halt facing the wrong way into oncoming traffic. I found a gear, planted the throttle once more, and played chicken with startled drivers until we burst out onto the Old Colony again, heading back toward my neighborhood.
Taking random turns, pushing the car to its limits, I only slowed when my mirrors were free of Enforcer vehicles. By then, we'd carved up half of South Boston. I'd lost the Institute for now, but they had my scent. I had to get out of the city and fast.
# Chapter Seven
We got onto the expressway headed north. "You okay?" I asked Dawn. She hardly moved, just gazed out of the window, shoulders slouched while her thoughts were clearly elsewhere. "It's not always like this."
"Why did we run from them?"
Focusing on the road ahead, I considered how best to describe an international company that controlled and killed demons without terrifying her. "The Institute protects people from demons. They're good at what they do. But if you happen to be a demon, or even half demon, they're not the most friendly bunch."
She fell quiet. I prodded the radio, but like most of the Mustang's instruments, it was dead. As we lost the daylight, I considered my options. Unknown demons wanted Dawn. Akil had left her with me for a reason. I wasn't getting any help from anyone. It was just the girl and me. Ryder believed she was a trap. I couldn't blame him for thinking the worst. Akil was fond of ulterior motives. But Dawn was a half blood. Akil knew I'd understand.
"We're going to one of Akil's houses. I've not been there for a while." Years, in fact. "I... used to live there when I was younger. It's nice. We'll be safe at Blackstone." _Maybe_ , I added silently. "Are you hungry?"
She shook her head and leaned against the door. Her mop of curls obscured her face, but I saw her lip quiver and tore my gaze away. What horrors had she seen at the hands of demons? Had she had a succession of owners like me? Did they beat her and violate her?
"Do you want to talk?" I asked carefully.
"No, Muse."
Something behind her voice sounded suspiciously like a warning. I backed off. I hadn't spoken of my past, not in detail, not to anyone. When Damien had returned, I'd told the Institute what they needed to know. Not even Akil knew it all. Demons are nothing if not vicious.
Akil's rural house was situated in Salem, New Hampshire, a forty-minute drive north of Boston, tucked inside the embrace of an ancient forest, like a fortress behind a barricade of trees. You'd never know Blackstone existed. Elaborate black iron gates designed to look like creeping vines guarded the mile long driveway. I knew every inch of those gates intimately. I'd crafted them. There was a track around the back, if a person was willing to drive another twenty minutes out of her way. The gates were padlocked, as I suspected they would be, so we took the long way around.
Designed by a European architect back in the early 1990's, Blackstone had matured well. Given its forest location, you'd expect to see a sprawling cabin, not the glass-fronted modern structure with its wing-like profile. Built into a slight incline, the split-levels cascaded down to a small lake. As we approached, much of the building's sophisticated design was buried beneath darkness. The Mustang's headlights raked over the stone and timber walls. Dawn eyed the sprawling structure with suspicion.
"Wait here." I left the Mustang. Gravel crunched under my boots as I headed for the side door. None of the lights came on, and besides the whisper of a breeze through the trees, the house and forest were quiet.
I knocked and rang the bell, but I didn't expect an answer. Walking around the double garages, I ventured into the dark around the back of the house and fumbled around a log-pile in the hope the spare key was where it always used to be. I'd been locked out before. Akil regularly disappeared without warning and after I'd spent a night on the doorstep, we'd stashed a key. Shifting logs around, I found it and returned to the side door.
Once inside, I flicked the lights on and entered the alarm code: the date I'd stepped through the veil for the first time and begun my human life. The musty air was cold and still, the big house as empty as a mausoleum. I ventured through each of the rooms, flicking on lights as I went. Sheets covered the furniture. A thick layer of dust coated what had once been smooth granite, polished marble floors, and glass surfaces.
I'd learned how to be human inside Blackstone's walls. My first good memories were forged in the grounds. Dawn would be safe at Blackstone as I had been.
Dawn wandered cautiously through the ground floor while I booted up the heating and security systems. The house was wired up like a bank vault and built with protective elemental symbols etched into the foundations. Discreet CCTV cameras fed images back to a basement control room. Six bedrooms, five bathrooms, three reception rooms, vast kitchen, deck and basement game room. Blackstone was more than enough house for Dawn and me. Too much, I realized, when I tried to find her again.
I eventually located her standing in the lounge, her tiny body dwarfed by the huge black granite fireplace. "Hey."
"This is his home?"
"Akil's? Yes. When he saved me from Damien—my owner—this was where he brought me. It's real nice in daylight. Probably seems a bit daunting right now." I crouched down beside her. Her gaze absorbed the room, eyes curious. "Nobody will find us here. Tomorrow, I'll go into town and grab us some clothes and groceries."
She turned in a slow circle. Her eyes darted as she assessed every inch of the lounge. "Why don't you call him Mammon?"
Because I tried not to associate the two. And Mammon scared the hell out of me. "It's the name he's chosen for when he's in human form. A long time ago, he went by _Ah-keel_. Now he's shortened it. Modernized it, I guess."
"But he's not human."
It wasn't a question, but I confirmed it anyway. "No. Not at all human."
"What are we?"
I smiled warmly. "I like to think of myself as human. But we're halfway between demon and human. We're both and neither. We're different and lucky. We get to choose what we want to be."
"I don't feel lucky."
I hesitated, struggling to find the right words. Our gazes met. She patiently waited for me to elaborate. "Dawn, you don't have to tell me anything if you don't want to, but I think I know what you've been through. It's okay. Things will be better now. Akil saved me, and I think he's done the same for you."
"But he's..." Her eyes focused over my shoulder, her gaze distant. "He hurt my owner," she said quietly.
She'd seen him as Mammon. It explained the questions. "He is very dangerous. Don't ever forget that." I might like to heed my own warnings. "Dawn, what was your owner's name?"
"Carol-Anne."
I'd been right. Akil and Carol-Anne had fought. He'd taken Dawn for himself. _Prince of Greed, remember._ Not so long ago, he'd wanted my demon and the power repressed behind my weak human shell. Of course, he'd told me it was all for my benefit. He'd lied. Fifteen years ago, he'd stolen me from my owner just as he'd stolen Dawn from Carol-Anne. It seemed Akil was collecting half bloods.
I watched for any sign the girl was distraught at the memory of Akil killing Carol-Anne, but she blinked innocent eyes up at me with no trace of sorrow, just wide-eyed anticipation. "He set you free."
"Does that mean Akil's my owner now?"
"No." I smiled, forcing back a sudden urge to growl. "Nobody owns you. We don't have to be owned. You've been lied to, Dawn. We're strong—stronger than them." I lowered my voice to a conspiratorial whisper. "We're even more powerful than the princes."
Her eyes widened, and her little mouth parted in a silent 'O'. I smiled and gently squeezed her shoulder.
"But... But... I'm not... I don't..."
"It's okay. I was surprised too. I can help you, Dawn. I think that's why Akil brought you to me. We'll talk some more tomorrow. For now, let's get some rest."
She nodded and hugged her rabbit in the crook of her arm.
# Chapter Eight
I shouldn't have taken Dawn to the mall. I doubted my decision the entire way there, while checking the skies for hunters and the rear view mirrors for Institute cars. Yes, it was idiotic, but I understood what it was like to be caged. Returning to Blackstone roused memories of my rebirth as a gangly teenage girl. Akil had opened my eyes to the world. I wanted to do the same for Dawn, even if that meant putting her in danger. Freedom is only mourned by those who no longer have it. Those who've never known it don't have that luxury. She didn't know what she was missing, but I did, and I wasn't keeping it from her for another second.
These were the first days of the rest of her life. Later, I would teach her how to draw from the veil in order to protect herself. She didn't yet have the maturity to handle that much power, but I could show her what it meant to be a half blood. I would teach her how to look out for herself, and maybe, if we were lucky, we'd have some fun.
Salem was a sprawling town with suburban-style residential areas. Dominated by Canobie Lake, it also boasted one of the largest malls in New Hampshire, and that's where we were headed. Parking up at Rockingham Park Mall, I stashed my gun in the glove box, noticing Ryder's phone inside. A text message blinked onscreen. From Stefan.
Answer your phone
As Dawn's wide eyes drank in the sight of the mall, I checked the calls list and found four missed calls from Stefan and one voicemail. I sat back in the driver's seat and humphed a disgruntled noise. Stefan had made it clear he wanted nothing to do with me, and the cellphone was Ryder's. I shouldn't even be poking around his messages... Although he had given me the phone, so technically it wasn't snooping. Right?
I tucked the phone into my pocket and gave Dawn a bright smile. "C'mon, let's shop."
We didn't have much cash. Ryder had given me enough to survive for a few days, but I was fast eating through that. Once it was gone, bankcards were out of the question. I'd worry about it then. Right now, I had to teach a little girl about retail therapy. Hell knew she could do with the distraction. My first stop had been a new set of clothes for me: jeans and a lightweight V-neck white wool sweater. I'd hastily changed into both, stuffing my blood-splattered clothes back into the bag. I'd also grabbed a charger for Ryder's cell. The pre-weekend crowd had swelled, and I didn't fancy explaining to security why I looked like I'd been in a fight with a mincer. After securing Dawn a Hello Kitty dress more befitting a nine-year-old girl, complete with shoes that fit, we roamed the mall. For fifteen minutes, Dawn stuck to my side, wide eyes darting back and forth, absorbing the crisp white lines, glistening floors, and shining glass. When we arrived at the central staircase, her mouth fell open. Sunlight streamed in through the domed glass ceiling. Her eyes traced the graceful fall of the stairs until the steps flared at our feet like the train of a wedding dress.
"It's a palace." Without taking her eyes off the stairs, she reached up and took my hand. I closed my fingers around hers and felt the feather light touch of a smile lighten my lips. I'd done the right thing in bringing her here. Never mind the risk, it was worth it just to see the wonderment on her face. She said little as we walked, but her eyes glistened like jewels when we passed stores and joined streams of people.
Grande latte in one hand, half a dozen bags in the other, I waited in line to pay for the Krispy Kreme donuts I'd just spent the last five minutes convincing Dawn to try. She still stayed close, but occasionally, she'd break away and dash over to something that had caught her eye.
The tills chimed, and the chatter of the crowd ebbed and flowed around me. My thoughts wandered. Now that Dawn and I could relax, I needed a plan. The Institute suspected I was up to something, but they didn't know about Dawn. That was good. If she stayed at Blackstone, she'd be safe enough. I couldn't stay with her though. I might be able to wring another week's vacation out of Adam, citing post-traumatic stress from the garden incident. He'd buy it with suspicion, but he didn't have a choice. That gave me two weeks to figure out why the demons wanted Dawn and how I was going to shake them off our tails. The hunters had been the first wave, but the two demons that attacked me in traffic were smarter. They'd deliberately targeted me. Whoever wanted her had upped their game.
What was so important about Dawn? Half bloods were generally considered worthless abominations. Those not killed at birth were sold to demons further down the pecking order as playthings, curiosities. Only a handful of demons knew the truth about half bloods. Akil was one. A shard of pain twisted in my chest. Damien had been another, but he'd only figured it out with help. Carol-Anne may have known, but she was dead, so I could scratch her off the list of suspects.
The line to pay inched forward. I shuffled my bags around and took a generous sip of latte.
Stefan knew about half bloods, and of course the Institute were the foremost authority on half bloods this side of the veil. They'd studied Stefan like a lab-rat until he'd been old enough and strong enough to tell them where to shove their experiments. But the Institute didn't employ demons, and neither did Stefan.
The only place I could think I might discover something was Carol-Anne's club, The Voodoo Lounge. The club sat at the heart of Boston's demon population. She must have had acquaintances that could tell me something about Dawn or about why Carol-Anne had visited Akil. I'd met her demon doctor, Jerry, a few months before, when he'd tried to help me with some control issues. He'd seemed like a fairly reasonable guy, and with Carol-Anne gone, there would be a demon reshuffle in the hierarchy of that neighborhood.
I stepped up to the cashier's desk, digging my hand into my pocket for the cash to pay. A creeping sense of discomfort peeled across my skin, raising the fine hairs on my arms and sprinkling shivers down the nape of my neck. I froze and slowly turned my head. The line behind me consisted of bored shoppers. Someone rattled off a one sided conversation into a phone. A woman had hold of her two toddlers and was laying down the parental law. A man slouched near the back of the line, shifting awkwardly from foot to foot and peeking ahead, eager to get his shopping done.
I scanned further back, along the streams of shoppers flowing back and forth through the store.
"Ma'am?"
"Yeah." I dug out the cash and handed it over. What the shoppers behind me couldn't see was how I eased an elemental touch outward, reaching my senses beyond the apparent in search of the demon that had tripped my internal alarms.
Dawn twitched and swung her head around. She looked as though she might ask why I'd flicked the demon switch, but I shook my head.
Donuts paid for, I scooped up the bag and coffee and ushered Dawn from the store into the mall fairway. I kept walking, eyes scanning the crowd. The demon was here somewhere, and it was powerful. Shivers swept through me, adrenalin aiding my fight or flight response. I could feel its gaze on me, sense its penetrative touch, and a sickening deadweight of dread balled in my stomach. I recognized the touch of power. Fear rolled over me and scattered butterflies low in my stomach.
Dawn's nervous gaze checked mine every few steps. I mustered a smile, but she wasn't buying it.
I shoved past shoppers and wove between loitering groups, trying not to break into a run. A few people muttered in my wake. Hot coffee splashed over my hand, but I barely noticed. My heart thumped in my chest, and my breaths came fast. _Run._ I wanted to. But if the demon I sensed was who I thought it was, then running wasn't going to do a damned thing to save us.
"Muse?"
"It's okay. It's probably nothing." She would feel the touch too, but she might not recognize it as powerful.
I found the food court and planted Dawn at a McDonald's table close to the wall. I dropped into the chair opposite her, leaned back, and scanned the faces around me. Demons stalk. In a crowd like this one, they could be spotted simply by the way they moved. Seventy percent of human communication is non-verbal. We're constantly in motion. Demons don't understand the intricacies of being human, mostly because they don't spend long enough in their human-suits to care.
I watched the crowd for any sign of someone standing perfectly still or a figure walking toward me, chin down, eyes up, but I couldn't place any demon, and within five minutes, the sickening fear and crawling sensation passed.
I slumped in the chair and closed my eyes. When I reached for my lukewarm coffee, my hand shook.
I'd faced Hellhounds. I'd drained a Prince of Hell of his element. I'd summoned and controlled enough raw elemental energy to level a city. And I regularly tracked demons and bumped them back across the veil, or worse, but very little struck fear into my soul like my brother, Valenti.
"Is it gone?" Dawn asked.
I nodded and eyed her over my coffee. Her flushed cheeks and light fluttering breaths suggested fear, but the look in her eyes didn't. When she smiled, it wasn't the nervous flitter of a smile I'd seen from her before, but a pearly-white grin. There was almost a predatory gleam to her expression. She blinked and puffed out a sigh. "That was fun."
Fun? I chuckled. Right. She obviously hadn't met my brother. "We should get back to Blackstone."
# Chapter Nine
The drive back to Blackstone was slow going. I threaded my way around various backstreets and roads to nowhere in an attempt to flush out any tails. It wasn't likely to do me much good. Val didn't drive. Such mortal means of transportation were beneath him.
How had he found me? It might not be Val, I told myself. Whoever that demon was, he may not even have been there for me. Maybe a demon lived in Salem and fancied a coffee or a new pair of shoes. It could have been a coincidence. Nobody could know I was in Salem. Although if anyone could sense me, it would be Val, as we had the same blood in our veins, courtesy of our father, Asmodeus.
_Why would Val be here? Had it been Val? Why didn't he show himself?_ My brother didn't lurk. He was too proud for that. Had it been Val, he'd have just walked right up to me and said whatever he had to say. _No, it couldn't be him._
By the time we returned to Blackstone, I'd convinced myself the phantom demon hadn't been my brother. That didn't stop me from checking the tree line around the driveway and house as I emptied the groceries from the car.
At least inside we were relatively safe. Val couldn't enter the home without an invitation, and even if he got inside, the hidden marks on the walls would prevent him from calling his power. On those terms, I could rest easy.
Dawn broke into a huge grin at the sight of the donuts. She plucked a pink ring donut free and took a generous bite. Her expression exploded with a sugar rush of glee.
"Good, huh? I told you." Shrugging off my jacket, I placed Ryder's cell on the countertop. The lure of the voicemail message called to me. I tapped my nails on the counter and chewed my lip.
Dawn sat at the breakfast table, chewing loudly, licking sugar from her fingers. "Can I have another donut?" she mumbled through a mouthful.
"Sure."
"They are amazing. I've never eaten anything like this. Why are they round?" She continued in a breathless rush of words. "How are they made? They taste like chaos, don't they? What's the hole in the middle for?"
I fell quiet and let her talk. My gaze settled on Ryder's cell. The last conversation I'd had with Stefan replayed in my mind. How had it come to that? Did he hate me? The parasite around my heart twisted. I winced.
Dawn lifted her gaze. "Are you okay?"
"Yeah..." I sighed. "I was thinking about a friend. He's a half blood like us. He taught me what we really are."
"Is he good?"
"Yes." My smile fractured and crumbled away. "I think so."
"Like Akil?"
"Oh, Akil isn't good." I poured some orange juice into a glass. "Y'know, you're right to be wary of Akil. He's a very complicated demon. He tried to hurt me once, but my friend, Stefan, saved me. Stefan... sacrificed a lot for me."
"What happened?"
"He trapped Akil on the other side of the veil. Neither of them could get back." I ran my fingers down the outside of the glass of juice and gathered up beads of condensation. "Time works differently there." Dawn nodded. She understood. "Six months passed, but for him it was more like years, and... he'd changed."
"I don't like it there."
"No, the netherworld is a harsh place to survive in, especially for us." Stefan had spent the equivalent of two years fighting to survive. When he'd stepped through the veil, his control over his demon had been faultless. When he came back, his demon controlled him, and I suspected he liked it. There's a certain freedom that comes when you release the demon. Reason, apprehension, doubts, they all fade away to nothing. It's addictive, that freedom, and it's dangerous. "As half bloods, we are responsible for a great deal of power. If we don't control it, it controls us."
Dawn plucked a donut free of the box and held it up, but her gaze wandered, and her eyes glazed over. "My owner wanted me to release my demon. She said the princes would be pleased."
A jolt of alarm shot through me. "The princes?" _Plural? More than one?_ What I knew of them told me they never worked together. Ever.
Dawn nodded and took a bite out of the donut, muffling her next words. "She said I had to keep up with the others. If I was good, I could play with others like me."
"Others? Other half bloods?"
"I wasn't good." Dawn's gaze dropped. "I've never met the others—only you, Muse." She chomped the remainder of her donut and then with a grin asked if she could play with Missus Floppy.
I watched her run from the kitchen, in a hurry to get to her bunny. More half bloods. More princes. "Akil, you son-of-a-bitch, what the hell have you gotten me into?" In the absence of Akil's answers, there was only one other person who could help with a half-blood problem, but Stefan had made it clear what he thought of me.
I scooped up Ryder's cell from the countertop. I had to listen to it. Ryder had given me the phone. Maybe Stefan had been trying to contact me? I dialed the voicemail. " _Ryder, hey man, where are you? The workshop is empty. Muse was there..."_ Stefan paused. Was that a growl? When he continued, his voice had gained a jagged edge. _"I thought it would be easier. You were right. I can't do this. I need... Just call me."_
I replayed the message. Definitely a growl. He still had the demon brogue, a deeply gruff accent from his time in the netherworld. It hadn't been so apparent when I'd seen him at the workshop. He'd deliberately hidden it from me.
I lowered the cell and glared at it before scrolling through Ryder's contacts, breezing past Ryder's Spare—which I assumed would be the cell Ryder had on him—and hovered my thumb over Stefan's number. I suspected I knew how this call was going to go. It would be awkward, stilted, and painful. He'd tell me to get lost. He clearly didn't want anything to do with me. But I couldn't give up on him. We needed to talk. There was a time I'd have told him anything, and although it had been brief, our time together had meant something. He'd said the same, right before accusing me of plotting with Akil. Surely, if I could just get him to listen... If we could get past all the horror, which had somehow drowned us both...
My thumb twitched over the call button. He'd cleaned out the workshop. He'd told me not to contact him. _I'm sorry we met_. I clenched my jaw and ground my teeth. He believed I was his enemy.
I jabbed Stefan's number. The cell rang twice.
"Ryder, get your ass back here—"
"Stefan."
A brittle silence snapped down the line. For a second, I thought he'd hung up.
"Muse." He said my name slowly, as though savoring it. I heard humor in his voice and something else, something rich and heady like hunger. I shivered and heard his audible intake of breath. "Why do you have Ryder's cell?" The hunger had gone. His voice was flat. Cold. Controlled.
"We need to talk."
"I've said everything I need to say."
"Then shut up, and let me talk." I tapped my nails on the counter. "I just—"
"Where's Ryder?"
"At the Institute probably."
Stefan muttered a curse. "Why do you have his cell?"
"He helped me ditch the Institute."
"Why do you need to ditch them?"
"Can we talk?"
"What are we doing now?"
"This isn't talking," I grumbled. "It's an interrogation."
"What do you think is going to happen, Muse? That you're going to explain what you did and everything will go back to the way it was before?" The demon slur crept into his words, deepening his voice with a touch of power. "Nothing you can say will change the past. If you need someone to talk to, why don't you run back to Akil? I'm sure he'll welcome you with open arms."
"Stefan." I swallowed back the urge to scream at him. "I get that you're grieving, okay. But this isn't my fault."
He barked a laugh. "You're joking, right?"
I curled my fingers into my palm and clenched my hand into a fist. "What happened to us? I thought..." I drew in a deep breath. "I never wanted to hurt you."
"Then we're even. Get over it. Don't call me again."
"Stefan, wait. Can we meet? Please."
He fell quiet. I listened hard. Had he hung up?
"You know where I am. You've always known." He ended the call.
I threw the phone onto the counter and planted my hands either side of it. Goosebumps sprinkled up my arms. A trickle of power bloomed inside me, responding to the sudden chill in the air.
Of course I knew where he was. I'd always suspected he'd be at the lake house, tucked away in the White Mountains a few hours' drive north of Boston. I should have gone to him. I told myself I'd give him time. It had been two months. But time wasn't going to change anything. I knew that now. The longer this went on, the further apart we'd drift.
I straightened and glanced down the hall. Dawn was chatting to her bunny somewhere. I could hear her delicate, one-sided conversation.
I didn't want her to see how jaded Stefan was. She needed time away from demons to start building a life. I wasn't even sure Stefan would help her. For me, he wouldn't. Ryder would turn her over to the Institute. He'd made that clear. Akil had abandoned her with me. I was her only chance at freedom, and I would have to help her alone. I needed more information. Carol-Anne was the key. Someone at her club must know about Dawn. Tomorrow, I was going back to Boston.
# Chapter Ten
"Don't open the door for anyone. Don't go outside. Don't worry if I'm not back by the time it gets dark. I've left some games out for you. Don't stick your fingers in any sockets." I stood at the door, car keys in hand. Dawn had nodded dutifully to everything I'd said, but to be sure, I repeated, "Don't go outside." As long as she stayed inside, Val or any other higher demon couldn't get to her.
"I won't. Promise." She grinned. "Cross my heart, and hope to die. Bake a demon in a pie."
Good girl. She was learning.
I left her alone at Blackstone and felt the first inkling of what it might be like to be a parent. The worry. Dawn seemed so small and the world so intent on harming her. I didn't know a damned thing about bringing up kids. It's not something I'd ever thought about. As far as I knew, I couldn't have kids. Being half demon wrecked the necessary plumbing. But when I left Dawn all alone at Blackstone, the concern nearly had me turning around again and abandoning my visit to The Voodoo Lounge.
The drive back to Boston didn't take long, despite my lane-swapping and erratic driving to check if I was being followed.
Carol-Anne's club, The Voodoo Lounge, was closed until further notice according to the sign on the door. I left the car in the club's parking lot and walked the block to Jerry's veterinary clinic. The walk gave me enough time to clear my head and to think like an Enforcer. Jerry knew Carol-Anne. She'd referred to Jerry as _her_ property. _He must know something of use._
A few minutes later, I sat in Jerry's waiting room between a Siberian husky with a cone collar and a poodle with a bandaged paw. The potent odor of antiseptic tickled my nose, but it didn't mask the scents of dogs, cats, sawdust, and urine. The floor squeaked under the assistants' feet.
Jerry stalked through a back door behind the reception desk like death arriving at a wake. Did I imagine the animals falling silent? A cat hissed from behind me. I caught a glimmer of recognition in Jerry's gaze when he saw me. Towering over every other person in the room, he had to duck through doorways. His obscenely taut muscles strained against his shirt, but his tattoos were his most striking feature. The all-black interwoven anti-elemental symbols covered his face and arms, and I assumed they smothered the rest of him. They reminded me of New Zealand Maori tattoos. He'd told me once the tats kept him safe from demons. Combine the daunting tattoos, his size, the deep voice he dragged up from the depths of his soul, and he could make babies cry at ten paces.
Apparently though, he was the go-to guy for pet problems.
Jerry left the room a moment later, and the animals resumed their fidgeting. A nurse handed me a note telling me to meet him around the back in twenty minutes.
Discarded couch cushions, trashcans and fast-food bags choked the alley behind the clinic. Gulls keened overhead, and the Boston traffic hummed in the background as I waited for Jerry to emerge.
"I don't know anything," Jerry said by way of hello as he opened the back door. His bass voice rumbled around the alley. He closed the door and folded his arms, creating a wall of stubborn masculinity.
"C'mon, Jerry. Help me out." I gave him my most beguiling big eyes routine, but his flat expression didn't change.
"I already told the Institute what I know. I don't want you hanging around here, Muse. It's bad for business." He didn't mean the pedigrees inside the clinic.
Okay, time to cut the crap. "Why did Carol-Anne meet with Akil?"
Jerry ran a hand over his buzz cut hair. He sported more hair from the stubble on his chin than the hair on his head. "Is this on the record?"
I didn't answer immediately. If I pulled the Enforcer card, I could threaten to take him back to the HQ, and that was something he would avoid at all costs. Demon doctors had reputations to uphold, and it wouldn't do for him to be seen hand in hand—or in cuffs—with the Institute. But I liked Jerry. Hell knew why. I didn't want to lie to him.
"No, this is me asking because Akil has dumped me in the crap, and I'm trying to dig my way out of it."
Jerry narrowed his eyes, scrutinizing me for weaknesses. "She had a visit from a prince, the one that can't decide if he's a guy or a gal."
Oh hell, that was Leviathan. He liked to appear as male or female, often altering his appearance and gender from one moment to the next in a deliberate attempt to unbalance those around him. I locked my expression down. "When?"
"Three nights ago. Demon chatter spooked my clients. When the princes come out to play, the little guys run for the hills."
"What did he want?"
Jerry's shoulders bobbed in a shrug. "He'd come to collect something. Before you ask, no, I don't know what it was. Carol-Anne was nervous—a good nervous. Being one of his subjects, she jumped at any chance to please him. Water finds its way to water, right?"
I only knew of Levi because he'd tried to take me back to the netherworld. To my father. As it stood, I was on borrowed time. "Did you tell the Institute this?"
"Yeah, they were fishing for something juicy, so I gave it to them. There's nothing they can do anyway."
"Well, no, but Carol-Anne was found dead in Akil's apartment. They've been gunning for Akil for years. If they can prove he's broken human laws, they'll use it as an excuse to pool their resources and force him out of Boston."
A grin creased the tattoos around his mouth. "I'd pay to see that show."
Yeah, I worked for the Institute and doubted their abilities in a straight fight with a Prince of Hell. "Lucky for Akil, murdering another demon isn't against the law. Yet." Not that any prison could hold him.
"Are you sure Akil killed Carol-Anne?" One of Jerry's dark eyebrows quirked.
"I saw the crime scene. There wasn't any sign of a third person present, demon or otherwise, but no, I'm not sure about anything right now."
"So what's the crap he's dumped you in?"
I wondered how much to tell him. I could trust Jerry about as far as I could throw him. "Did Carol-Anne spend much time in the netherworld?"
"Not as much as most. She's like Akil. She _was_ like Akil. Damn, I know you didn't like her, but she wasn't all bad, y'know, for a demon." Jerry leaned back against the wall. "She kept the trouble-makers out of town and made sure those who came through were at least capable of living this side of the veil."
"She aided the demon immigrants?"
"Yeah. You know what it's like. They don't just turn up on this side of the veil and walk into human lives. They need help blending in. Carol-Anne was a big part of that."
Maybe she helped half bloods too. "Is there anyone she worked with who might have a grudge against her? Someone who might want her out the way?"
"Demons? No. They needed her. People? Maybe. Ever since Akil came out as all-demon, folks around here have started asking questions, pointing fingers. The streets aren't safe for demons right now. I heard talk of a vigilante group—"
"Did you ever see a little girl with her? About eight or nine years old?"
Jerry didn't move. He breathed and blinked, but otherwise, he tried exceptionally hard not to give himself away. I waited.
"Maybe. Her niece or cousin or something."
"When we first met, you told me you'd only ever seen one half blood before, a man so ruined that even you couldn't save him. Was that true?"
He sighed. "Carol-Anne brought him to me. Poor bastard. I did what I could..."
"Why did Carol-Anne have a half-dead half blood?"
He dragged a hand across his chin and cast his gaze skyward. "Y'know, I don't ask questions of demons. That's how I've survived this long. I just patch 'em up and send them on their way."
"That's the only other half blood you've seen besides me?"
He met my gaze for a few beats and didn't say anything. "I'd love to help you, Muse. I would. I like you, even though your ice-cold friend turned my place into Santa's grotto. How is he, by the way? I saw what happened at the gardens, heard a few things on the grapevine afterward..."
"As far as I know, he's fine. I haven't seen him."
"Tricky things, half bloods." He shoved off the wall and opened the back door to his clinic. "Take care of yourself, Muse. I know one thing. If there are princes involved, you should stay out of their way." Jerry's smile softened his hard-as-nails persona.
"I wish I could." I smiled my own half-hearted smile. "Maybe they should stay outta my way?"
He chuckled, the sound of his laughter soft and delicious. "Maybe."
# Chapter Eleven
The weather turned during the trip back to Salem. Blue October skies dulled to a dirty gray, and the wind blustered enough to sweep fallen leaves across Blackstone's driveway. I climbed from the Mustang and took a moment to check my Pico handgun before tucking it neatly in its holster. The wind whipped my hair about my face. As soon as I lifted my gaze, an elemental touch crept around my ankle and coiled up my leg.
It wasn't a touch I recognized, but it was demon, and that could not be a good thing. I closed the car door, kept my hands free at my sides, and walked calmly toward the house. Each step that made contact with the gravel brought with it another explorative touch. I couldn't see elemental energy unless I summoned my demon, but I could feel it. Each elemental touch varies depending on the demon. Like a handshake. But this one was different. The element was different. It didn't have the tingle of ice or the warmth of fire. Nor did it have the smooth sensation of water. The more it probed, the more my skin crawled. Whatever element it was, it tripped my human senses into fight or flight mode.
I reached out a hand for the door handle, and the metaphysical touch of fire skittered down my back. My skin prickled, and my chest tightened. _That_ touch, I knew.
"Sister."
I gasped and clutched at the door handle, safety so close.
"Face me while I address you." His voice was rich with so much power he could whisper and silence a room.
Self-preservation screamed at me to dash inside, slam the door in his face and hide, but he was as fast as wildfire. He'd be on me as soon as my human body broadcast my intentions. It took every molecule of courage I possessed to turn and face my brother. He stood a few strides away, rapier pointed against Jenna's back. The Enforcer's wide blue eyes pleaded. She was on her knees, hands bound behind her back, mouth gagged with a strip of leather. She wasn't escaping, and from the panic in her eyes, she knew it.
Val's thin lips flirted with his perpetual smile that never quite made it to his molten silver eyes. He was adorned in various snug-fitting leathers—likely skinned from demons by his own hand. I counted two daggers sheathed inside his long, demon-skin coat. He'd have more concealed on him.
The wind tugged at the long braid of white hair cast over one shoulder and mussed his snow-white bangs. His eyes locked on me, rooting me to the earth. If I looked into his eyes long enough, I'd feel the touch of his element crawling inside me. Both born of fire, we shared the same blood. The same prince was our father. I'd always feared Val knew me better than I knew myself. When he looked upon me, it was always with disdain, as though he'd searched my soul and found me wanting.
I pinched my lips together and drew in air through my nose. "Alright." My fingers twitched at my sides. "Let's talk."
"Call the half blood to you."
I swallowed. "What half blood?"
Val's fine eyebrows furrowed. "Muse..." He jabbed the tip of the sword into Jenna's back. She grunted and arched away from the point of the blade. "I have no qualms when it comes to killing humans. This female serves a purpose. She lives because you have something I require. Hand over the half blood, and I shall release this _Enforcer."_ He spat the word _Enforcer_ , disgusted that such a word should pass his lips.
I made the slightest of movements to reach for my gun when Val's eyebrow arched. I froze my hand. He observed with detached interest: a predator looking down from the top of the food chain. None of this mattered to him. It was merely a way to pass the time. His face held no hint of emotion, and despite the hint of a smile, he wasn't amused. Just indifferent. We were all ants to him, insignificant and fleeting.
But I had something he wanted, and as long as Dawn stayed inside, he couldn't get to her.
He stood statuesque against a gust of wind, hair and coat flailing. "Your options are slim. I kill the Enforcer. I kill you. I wait for the half blood to come out. I have an eternity of patience at my disposal. Or you call her, she leaves with me, and you and this female walk away."
"Why do you want her? I thought you were in the business of selling half bloods, not collecting them." The venom in my words surprised even me. "What's Dawn to you? Why do you even care what happens to her? You've never cared for anything, especially half bloods."
"Care?" he scoffed. "Is that what you believe me here for?" He tilted his head a degree and smiled. "You are a remarkable fool."
My heart thumped so loudly I was sure he could hear it. The pulsing pollutant inside me twisted and writhed. Gritting my teeth didn't subdue it, but I refused to show Val how screwed up I was. I could still draw from the veil and potentially drain him the way I had Akil, but not before he ran Jenna through.
"Thinking of draining me like you did Mammon?" A single eyebrow ticked as he caught me flinching. "Yes, I am aware of your talents. Very little surprises me."
"Must be dull."
His fine eyes narrowed. "If you attempt to siphon my element, I will end this miserable human's existence and savor your subsequent death. Think carefully, Muse. The half-blood girl is nothing to you."
"But she is to you." Why would Val want her? What was I missing? My fingers tingled with the electric dance of energy. I wanted to summon my element, pluck it right out of him and drain him of every last drop. If I could do it to a prince, surely I could do it to my brother.
"I am her custodian. She belongs to another. You are in my way."
"Carol-Anne, her owner, is dead."
"I will not explain my actions to the likes of you. Call her out, and I will let you live."
He could kill me. He wanted to. My brother's beautiful eyes sparkled with the knowledge of a hundred ways he'd end my life. He would have calculated how long it would take to run Jenna through and cross the distance between us. I could run, but he was faster. I could summon my element, but he'd be on me the second I flicked the demon switch in my mind. He already knew I was flirting with power. He'd feel the heat shifting in the earth beneath us. That sword of his would find a home in my chest before I could blink.
"Does our father, Asmodeus, know you're doing this?"
Val tensed. His amusement fizzled away, leaving his expression stone cold. "His name upon your lips insults him. You are not worthy."
"Why? Does it bug you, Brother, that we share the same blood?"
He threw Jenna face down on the dirt and planted a boot on her back, pinning her. She let out a muffled cry that he quickly silenced with the point of his sword against the back of her neck. A gust of wind rippled his coat and teased through his hair. He looked every part the netherworldly brother who had stalked my dreams since my childhood. He had sold me to the demons. He was the one who orchestrated my life of slavery. It all started with him. That knowledge helped steel me.
"Forget Jenna. Kill me. That's what you want. That's what you've always wanted. My life offends you. It must be even worse now that Asmodeus wants me..." A thought brightened my fear-addled mind, realization widening my eyes. "Ah, you can't kill me, can you? _Father_ wants me alive."
Val's pale face contorted in a manner that no human face could mimic. His cheeks hollowed, jaw lengthening, eyes sinking as his glare gathered shadows. A snarl rolled leisurely across his lips. His disgust for me was evident in that ripple of his lips. I'd always known he hated me, but the magnitude of revulsion on his face stoked my natural fear. He shoved back, away from Jenna, and lowered the sword to his side. "The very fact I must converse with you is beneath me. I would prefer to run my sword through your pitiful flesh and be done with you."
At least I knew I was right. I snickered. "You have to share Asmodeus's affection with a half blood. Ouch. I bet that's dented your rep back home, huh?"
"Affection?" He spat. "Your ignorance insults me." Val's outline blurred. I blinked, trying to refocus, but it wasn't my eyes fooling me. He was changing, revealing his true form. Snarling lips rippled over crescent fangs. Vast, glossy black wings burst from his back and arched either side of him, reaching out to enclose us like the night closing in. His clothes fizzled away, revealing moon white skin and hair in perfect contrast to those midnight wings.
I thrust my demon into my body and took a step forward. Watching through demon eyes, I witnessed the full glory of my demon brother. He appeared more human than most full-blood demons, more human than me in all my demon glory, a trait that should have rendered him weak among his own kind. His defined immortal body glowed from within, turning him into a terrifying yet awe-inspiring beacon of power. His power swelled in the air—not just the element we shared, but something more potent. Raw chaos energy. It pressed against my skin and slid across my tongue while I breathed it in. His presence filled a space bigger than the driveway. He was everywhere, reaching through me, around me. Jesus, he wasn't just any demon, he was the firstborn son of Asmodeus, and I was out of my league.
He hunched low, spread his sleek wings wide, and hissed.
My breath caught. My fire spluttered. I was nothing next to him. He was immortal. Ancient. And I'd seriously pissed him off.
A gunshot shattered my trance. Val flinched and recoiled, distracted by the splash of blood burning crimson on his milk-white chest. He skewered Jenna with his blazing glare. She must have worked her restraints free, because she had a gun cupped in both hands, aimed squarely at his chest. He moved faster than I could track, morphing into a blur of black wings and tumultuous energy. He threw Jenna down against the hood of the Mustang and loomed over her. She struggled to bring the gun down between them. He let her writhe for a few seconds, a sliver of delight lifting his lips, before he clamped her wrist against the hood of the car behind her and grinned into her face. She instantly stopped fighting. Her eyes widened, and her lips parted. Her whole body slumped as though the steel I'd seen in her had melted away. Her tongue darted across her lips. That wasn't defiance in her eyes. Desire simmered there now.
I launched myself at him. He couldn't kill me. I had the advantage. At least in theory.
Val didn't even look up. His dark wings swept back, corralling me into their embrace. The second their duck-down softness touched the fire of my flesh, the fight drained out of me. I dropped to my knees, smothered in a warm, comforting darkness. Sleep suddenly seemed like the best idea I'd had in days. I yearned to close my eyes, curl up in the caress of his wings, and drift in an ocean of black.
The parasite on my heart pulsed. I jerked. Its poison inside my soul leached outward, wrapping around my limbs and bleeding strength into numb muscles. I gulped a lungful of cloying power-ridden air and summoned fire. Heat devoured my flesh, broke over me, and spilled outward in a hungry wave.
Only when Val's wings opened and released me could I see again. He turned, swept his wings behind him, and lunged. His shoulder hit me square in the chest. I let out a grunt and tried to twist away, but he clutched my arms, and we both tumbled to the ground. He fell on me, crouching like an animal prowling over its fallen kill, wings high above him. The wind whipped his hair about his face. His eyes burned with a light so white, it almost appeared fragile. I knew I should be fighting him, but my body didn't want to obey the mental screams to hit the bastard with everything I had.
He smiled a true smile, not the ghost-like smiles he'd favored me with before. This was real. It was wicked. Divine. Delicious.
He leaned in so that his terrible eyes were all I could see. "Sister..."
His voice plucked desire from my depths and summoned it to the surface of my flesh where it flushed across my skin. No, this wasn't right... He lowered himself against me. Where his fevered demon skin met the lava-veins of mine, a static blast of power scattered through me. I bucked and twitched, fighting, recoiling, wanting, but not wanting. I shoved at him then sunk my fingers into his shoulders and pulled him against me. He was doing this to me. I didn't desire him. I couldn't stand to be within two feet of him. I was scared of him. Of everything about him. He terrified the broken little girl who cowered inside of me. Oh, but I wanted his wings around me. To feel him ease inside me and fill me up until I came crying his name.
My lips tingled. The urge to taste him, to ease my swollen lips, pushed at my denials. Need pulsed between my legs.
"Half-blood filth..." he whispered into my ear. His lips brushed my skin, feather-light. "I have you. I may not be permitted to kill you, but there are other pursuits worse than death, as you are aware. I will make you beg for more. Only when I am the air you breathe, the thoughts in your mind, the sensation embracing your skin, will I discard you."
No-No! My eyes fluttered closed. I was drowning in him and had no hope of escaping the weight of power pushing into my pores and suffocating my thoughts. The dark inside burned through my veins, but it wasn't enough. I wasn't demon enough to beat my brother.
"Muse...?" Dawn.
I turned my head and saw her standing by the back door of the house. Her rabbit hung limp by her side. Her ringlets bobbed around her head. Then Val's wing came down, and I couldn't see anything but the embrace of darkness. It felt like home.
# Chapter Twelve
Cold water thrown in your face must be one of the most unpleasant ways to wake up. The splash shocked me like a jolt of electricity. I sat bolt upright. My demon immediately tried to burst from my skin but instead butted up against it. Where the hell was I?
Jenna planted a hand on her hip and glared down at me.
My cheek stung. Had she hit me too?
"Good. I thought you were never going to wake up." She glanced about us, prompting me to do the same.
We were in the lounge at Blackstone. "Dawn?" I shoved off the couch and managed two steps before my vision flooded with black. I fell back into the couch and blinked my vision clear. My head throbbed, and my stomach rolled.
"If Dawn's the little girl, she's gone. The demon took her."
Rubbing my temples, I groaned. "When?"
"Hours ago. You've been out cold most of the night."
Val took her back to the netherworld. I promised to protect her, and failed. I sunk my hands into my hair and spat out a curse. "I have to go after her."
Jenna rolled her eyes. "Yeah, well, better you than me. That's one demon I don't want to see again unless he's stone cold dead." A dash of color touched her cheeks, and considering she had a healthy bronze glow to her skin, it took a lot for her to blush. She licked her lips and skewed a glance at me. We shared a flicker of recognition. She'd felt his power too. "What is he?" she asked, a hint of reverence lifting her voice.
"I have no idea. I thought he was like me. All hellfire. But he's not..." He had heat. He blazed white with an abundance of it, but he also messed with my head on a level I didn't even want to acknowledge. He wielded a different kind of heat. A body of heat. The heat of wanton desire. I recalled, in gut-churning precision, the touch of his nakedness, and it felt wrong, so very wrong, but my body didn't think so. Holy hell, I'd lusted after my own brother. Saliva pooled in my mouth. My empty stomach flipped at the thought of what might have happened. I growled, disgusted and appalled with myself. "Did he do... anything to me?"
Jenna's brown eyes met mine with a peculiar fierceness. Her brow tightened, and her lips pressed closed. "No. He fled with the girl."
"Val is a full-blood demon. He's Asmodeus's son," I said softly, remembering Stefan's words. "Asmodeus is the Prince of Lust." Of course Val would screw with my head. It was in his DNA. It wouldn't have been so bad if I didn't actually _want_ to experience his touch all over again. How could my own body betray my mind? I'd lusted after my brother, and by the look on Jenna's face, so had she. "Are you okay?"
She'd glazed over, focused on a spot somewhere beyond the lounge windows. "Yeah..." Running a hand back through her hair, she shook herself, and met my gaze. "Thanks to you. If you hadn't jumped him when you did, I'd have let him do anything to me right there on the car." She grimaced and turned away. Jenna was a fighter. I'd seen her take down demons twice her size with roundhouse kicks and short, sharp jabs. She was an imposing figure and a damn good Enforcer. Val'd had her salivating at his feet in seconds.
"You're lucky to be alive." So was I. It was only the ominous, if distant, presence of my father that prevented Val from killing me. If a demon as powerful as Val was concerned with adhering to the wishes of our father, just how bad was Daddy-dearest?
Jenna drew in a deep breath and nodded. "So what now, Muse?" She plucked her gun from inside her coat and checked the chamber. "I need to report this to the Institute." Ejecting the magazine, she gave it a once-over and rammed it home again.
I winced. There was no way I could stop her from talking, short of tying her up.
She took my delay to reply as her cue to explain. "I followed you up here. Your brother caught me watching you at the mall." She licked her lips again and wiped a hand across her mouth. "The Institute knows I'm here. If I don't check in, they'll send a team."
Akil would already be pissed that Enforcers had crawled over every inch of his Boston apartment. If I brought them to Blackstone, he'd probably send the Hellhounds after me.
"Okay. Alright. I just need... some time." Val followed Jenna here from Boston. I was sure of it. He couldn't have found Dawn or me hidden inside Blackstone's walls, so he'd tracked Jenna after she'd tried to tail me. Maybe I should have listened to Ryder. Had I handed Dawn over to the Institute, at least she'd be on this side of the veil. She'd have a chance. In the netherworld, where Val had surely taken her, they'd tear her apart and remake her into something damaged beyond repair.
"You wanna tell me what's going on?" Jenna asked stiffly. "Why you ran from us in Boston? Who the girl is? What you're hiding?"
I didn't have much choice now. I'd lost Dawn. I had to get her back. Could I go to the netherworld and face my brother alone? I wasn't a coward, but I did have some sense of self-preservation. And even if I got her back, what then? Val would come after me. I didn't relish the idea of another family reunion. I needed help. "Stick around, and you'll get your answers. I have to summon a Prince of Hell. Wanna help?"
Jenna looked at me as though she wasn't sure if she'd heard me right. "Don't do things by halves, do you Muse?"
"Not anymore." I grinned.
# Chapter Thirteen
I glued a candle to a salad plate, using its own molten wax to stick it fast, and placed it on the floor in the center of the lounge. We'd pushed the furniture up against the walls, giving me plenty of space to work.
Jenna handed me a kitchen knife. "You've done this before?" She swallowed with an audible click, wiped her hands on her leggings, and stepped back.
"Summoned a demon? Yeah, a few times. I've never summoned a prince though."
"You think he'll tell you about Dawn?"
Crouched beside the candle, I dragged the sharp edge of the knife across my palm, wincing as it stung. Blood pooled inside my clenched hand and dripped onto the plate. "Maybe. I don't know. We'll see. Stay back. I don't know how he'll react to you. Don't say or do anything unless I tell you to."
She backed up against the wall. "What if it goes wrong?"
Her voice quivered. She was afraid, and so she should be. The princes were formidable, myth-like nightmares. Few this side of the veil had ever seen one. Fearing the Seven Princes of Hell was healthy. She didn't know I'd given up being afraid of Akil, that he needed me to pump him full of power and I needed him to free me of the demon consuming my soul. We had a mutually beneficial relationship. "I can control him."
"Yeah, but what if..."
"If it goes wrong, blow out the candle. With the focal point gone, there's nothing to anchor him here."
"And he can't go full-demon on us, right?" Seeing her haunted wide-eyed look, I began to doubt having her here was a good idea. "We tried to summon a high-ranking demon once at the Institute. It was a bloodbath. If he's, y'know, all demon, he might try to—" Her hand hovered over her sidearm.
"Not with the symbols in these walls. He'll just be Akil. Relax, Jenna, you're making me nervous."
I turned my attention to the candle and watched the flame writhe on its wick. "Mammon, One of The Seven, a First, Prince of Greed, Guardian of the Dark, Son of Chaos, _Master of Lies_," —my own addition; he'd earned it— "I, Muse, invite you to share with me this place and time. You will not harm me. By our element, I summon you."
The air trembled. An electric thrum of energy danced around us, invisible, but distinct enough to vibrate against my skin. I straightened slowly and glanced behind me. I'd been caught out before. It was daylight outside, but inside, the shadows lengthened, crawled up the walls, and consumed the light, plunging us into shades of gray. This was new.
Jenna caught my eye. I gave her a reassuring nod. She stood still, breathing slowly, waiting.
The electric charge strumming the air tightened across my skin. The fine hairs on my arms and down my neck prickled. He was coming. I swallowed. I wasn't afraid. He'd be pissed I'd summoned him, especially in front of a witness. Well, he'd have to swallow his pride. I'd had enough of fishing for answers to questions I didn't understand. It was time for him to tell me the truth.
Reality peeled apart in front of me, opening a jagged tear between this world and the netherworld. A blast of superheated energy rolled over me. I staggered back and shielded my face. When it passed, Mammon knelt on one knee in the center of the room. Horned head bowed, he held his leathery multi-jointed wings extended, their tips brushing the walls. Dust rained from his obsidian body. His corded muscles shimmered with a slick layer of energy. Darkness throbbed around him, remnants of the netherworld air clinging to its prince.
Shit. I hadn't expected him to appear as a demon. Say what you will about demons, but they know how to make an entrance. This was his house. Perhaps he'd rigged it so only he could summon his power inside the walls.
Mammon lifted his head. His eyes swirled like pools of lava. Red embers fizzled across his cheeks and skittered across his square jaw before settling beneath his skin. He pulsed with fire, veins throbbing red. Sweltering heat poured off him and over me. Perspiration beaded at my hairline. I wondered how Jenna was holding up but dared not look away. Looking away would be a sign of submission.
"Blackstone..." His coarse voice resonated, grumbling around the room and through my thoughts. "You brought her here..." Those fire-filled eyes narrowed on me. I didn't have a hope of reading his expression. His face resembled a human man's but hardened and exaggerated, as though carved from black granite.
"Mammon." I inclined my head. It wouldn't hurt to offer some respect.
His head jerked. He sniffed the air and swung his head to the side. Jenna stood perfectly still, hands flattened against the wall. Mammon's rumbling laughter filled the room. She cringed, but stood firm. Jenna wasn't the type to run. If she did run, Mammon would likely lunge for her. He dragged his gaze back to me and finally straightened. His chest glistened with blood. Streams of it ran down his thighs and pooled at his feet.
He grunted, acknowledging what I'd seen, and then shook himself all over, beating his wings. Hot ash blasted my face. I hissed and buried my face beneath the crook of my arm. Only when the heat passed could I look up. He'd hunkered down. His immense body trembled. His flesh tore open, rippling and contorting. Chaos energy licked over me, tugging on my demon. Mammon's presence faded, and Akil fell forward, naked and bleeding, landing on his hands and knees.
I dashed to his side and dropped to my knees. "What happened?" He rolled his eyes up to me. I cupped his cheek in my hand. He leaned against my touch, seeking comfort. "Akil... please... Tell me what's happening." He shivered, teeth clenched. Jesus, who had done this to him? I pulled him close and cradled his head against my chest.
"Did you do the right thing by the girl?" he growled.
I brushed tendrils of his blood-soaked hair away from his face. "I lost her. Val has her." He closed his eyes and shuddered. I pulled him against me. "Dammit, Akil. Tell me what's happening." I couldn't even summon my own power to help heal him. The marks in the walls prevented it. I tried anyway, but my demon pushed against my skin, unable to break free.
"Find her."
"I will. What does Val want with her?" I closed my arms around him and listened to his ragged breathing. The metallic odor of blood and the burned rubber smell of the netherworld burned my nose and throat.
"She is too powerful." Pain wracked him. He jolted in my arms, teeth locked. "You must get her away from your brother. He will deliver her to Levi."
"I will..." I stroked my hand down his arm and felt him wince. "Who's doing this to you?" He tensed, his muscles turning to stone in my arms. His voice was failing, fading in and out. I clasped my hands on his cheeks and searched his half-closed eyes. "Akil, stay awake. Who's hurting you?"
"Levi..." His eyes closed.
"Akil..." He fell limp in my arms, but he breathed. He wasn't dead. Not yet. If his vessel died, Akil as I knew him would be gone. Mammon would craft himself another avatar, but he wouldn't be Akil. I'd lose him. I needed him to get Damien out of me. I needed him, dammit.
"Muse, the candle..." Jenna approached.
The little candle flame flickered, as though disturbed. It twisted and writhed on its wick and then, with one final splutter, snuffed out. Immediately, Akil's weight lifted. His body dissolved right there on my lap. He just misted away to nothing. I snatched at my own breath and pressed a hand to my chest where the parasite throbbed around my heart. Akil was dying. Val had Dawn. Levi was torturing Akil, likely for information on Dawn's whereabouts. I couldn't save them both alone. I needed help. I needed the smartest, most badass demon hunter this side of the veil.
Jenna looked to me for our next move. "We need back-up." I climbed to my feet and strode from the room. I collected Ryder's cell and car keys and left the house with Jenna in tow.
We drove in silence toward the Salem mall to collect Jenna's car. Fury burned through my veins, flaring hotter as my thoughts darkened. I would protect Dawn. I'd promised her that much. And Akil... How dare Levi torture him? He had no right. Akil was infallible. A smug bastard he might be, but he didn't deserve that. What if Levi killed Mammon's vessel, killed Akil?
"Akil means a lot to you." Jenna leaned an arm against the passenger door and watched the leafy tree line blur past.
I glared ahead and tried to imagine what she thought she'd seen: a Prince of Hell dying in my arms and no doubt the terror in my voice. "No. I need him. There's a difference," I replied flatly. She'd witnessed Akil's true form and looked him right in the eyes. More than that, she'd held his stare. That took balls. She obviously had a pair. I liked her all the more.
She turned her head and watched me. "What on this earth do you need a Prince of Hell for, Muse?"
Good question. The answer was mine to keep.
"What the hell are you?"
I had some witty retort about a half-baked mistake, but it twisted in my mouth and died on my tongue. She'd seen me go demon. I'd deterred Val's magnificent wings, albeit briefly. I'd bowed to a Prince of Hell and gathered him into my arms. She'd probably read the reports from the Garden event, where I'd funneled pure energy into Akil. As far as she was concerned, I had a Prince of Hell on a leash. She didn't know I had another demon shrink-wrapped around my heart or that I could wipe out a city if I put my mind to it.
I shrugged a shoulder. "I'm just me. Caught in a storm."
Jenna watched me closely. She was an Enforcer, trained to eliminate the demon threat. So was I, technically, but I was demon first. My allegiance didn't rest with the Institute, and it never would. I worked for them—for Adam—because I needed answers. Jenna saw through my act and witnessed the demon in me staring right back at her.
She didn't say another word, and once at the mall, she retrieved her car. We drove back to Boston in convoy. Her car loomed in my mirrors the whole time.
# Chapter Fourteen
We pulled up outside the lake house a little after midday. Jenna climbed from her car, talking into her cellphone. She shivered, reached inside the car, and pulled out a buff colored trench coat and shrugged it on. She strode away from her car like a woman in charge. I'd reassessed my impression of her over the last few hours. She was Institute, through-and-through. She probably never doubted herself while scrubbing demon blood out of her boots. She and Ryder would get along like a house on fire.
I took a few moments to absorb the serene surroundings. Sunlight sparkled on the lake to my right. The body of water lay embraced by sentinel pines as far as the eye could see. Frost-brittle grass crunched under my boots as we approached the white weatherboard house. Here, I'd learned how to draw from the veil. Here, Stefan had lied to me. Here, Akil had tried to kill me. Here, hidden in the metal memories of a sword, I'd witnessed Akil murder my friend. For somewhere so beautiful, it held many ugly memories. But there was good too. Among the embrace of trees, Stefan had taught me how to summon my element from beyond the veil. He'd opened my eyes to the truth.
As we approached the house, Jenna ended her call.
"What did you tell the Institute?"
She tucked her hair behind an ear. "That I'm still tailing you. Which I am."
"You didn't mention Akil or Dawn?"
"Not yet."
She would though, and soon. "Did you tell them where we are?" We stepped up onto the wraparound porch and stopped outside the side-door.
"No." She frowned at our surroundings. "I don't know where we are. In the middle of moose country, by the looks of it." She tightened her coat around her. "Is it always this cold here?"
I knocked on the door. There were no other cars parked alongside the house. Maybe nobody was home. I'd decided not to call ahead. It would only make the inevitable conversation worse, if such a thing were possible.
"Who we meeting?" Jenna asked, stamping her feet and breathing into her cupped hands.
"A friend."
"Another prince?" She arched an eyebrow.
I smiled. "No, I don't generally socialize with the princes if I can help it. They tend to want my head on a stake."
"Unless it's Akil."
I winced. Even mentioning Akil's name here set my teeth on edge. Nerves fluttered in my chest, stirring my parasitic hitchhiker. Speaking Akil's name around Stefan felt like throwing gasoline on a bushfire. This wasn't going to go well. I knew that and tried to steel myself against the inevitable.
I opened my mouth, about to ask Jenna to let me do the talking, when Stefan jerked open the door and rested his forearm on the jamb. He skipped an analytical gaze from me to Jenna. His piercing eyes narrowed a fraction. Jenna made a small, pleasantly surprised noise in her throat.
"Jenna..." He smiled easily. "How's the wrist?"
Flirtatious laughter pealed from her lips. "Fine, thank you. It took a good few months to heal. Messed up my aim for weeks. Had I known we were seeing you, Stef, I'd have brought a tub of Ben and Jerry's."
My demon bristled. If I had hackles, they'd have shot up. An angry hiss sounded at the back of my throat. I cleared it with a cough and gritted my teeth to prevent any more demon noises escaping me. I locked a bright smile on my face. "You know each other?"
Stefan turned his back on us and strode into the open-plan lounge. "Sure, Jenna and I were... friends before I had a surprise vacation in the netherworld."
I heard the hesitation in that word: _friend_. Whatever they had together was none of my business anyway. It's not like I had what some might call a relationship with Stefan or like I went to hell and back for him.
I crossed the threshold and immediately felt the press of the protective symbols painted on the walls. My demon shrank back, which would likely be a good thing, considering that I was having trouble keeping my bright smile alive and the various colorful curse words from breaching my lips. My demon was jealous. And surprisingly, so was I. Jenna knew what ice cream he liked. It bothered me. I didn't even know if he liked books or movies or what his favorite color was. If he liked take-out food or fancy restaurants. It hit me hard, the realization that Jenna knew him better than I did. They'd been friends. The one friend I'd had, Akil had killed. I had colleagues. I had acquaintances. I had lovers. But nothing real. Nothing lasting.
Jenna closed the door behind us. "I didn't know you had a place in the mountains, Stef."
Good. I knew something she didn't. Oh, god, what was I doing? This was ridiculous. So Stefan and Jenna may have been an item. It didn't matter. Of course he had a life before me. What did I expect? Besides, there were other, more important, priorities.
Stefan paused beside one of the two patterned couches huddled around a pine coffee table. His fingers danced lightly on the back of the cushions. "There's a lot you don't know, Jenna." He slid his gaze to me. "What do you want, Muse?" The pale blue of his shirt complimented his dazzling eyes. He'd rolled the sleeves up like it wasn't bitterly cold inside the house. Loose-fitting jeans implied a casual and calm persona. Almost. His power simmered around him. I couldn't see it, but it crackled in the air like an electrical current.
"I need to talk with you." To do this, to convince him to help me retrieve Dawn and Akil, I had to reel in my emotions. No matter what he said, I couldn't let my feelings rule me or allow my demon to distract me. _Forget the past_, I told myself. I needed him to help me. Forget everything that broiled between us, simmering at the tips of my fingers, tugging on my control, demanding to be free.
"I thought we'd already had this conversation." Cool. Calm. No hint of anger. We were testing each other. That was good. At least I hoped so. I couldn't quite tell if, below the chilled-out exterior, he was about to launch into a verbal assault. He couldn't summon his demon, not inside the house. Maybe, if we stayed like this, we could have a civilized conversation, our demons packed away for a battle another day.
Jenna had stilled beside me. She couldn't see the power he carried, but her instincts would be prodding her subconscious. I froze the perfect expression of indifference on my face and looked at her. She glanced at me, back at Stefan, read between the lines, and sighed. "You know what, I'll take a walk around outside. It's warmer out there."
I kept my head bowed and listened to her leave. I might have felt guilty for pushing her away, if I hadn't remembered whom she worked for. The door clicked closed behind her, and I was alone with Stefan. The weight of words unsaid virtually suffocated me.
"Can't look me in the eye?" Stefan asked. "Guilty much?"
Anger sparked to life in my veins. I snuffed it out, closed my eyes, and took a deep, steadying breath. He would not bait me. When I opened them again, Stefan had moved a few strides closer. I opened my mouth, unsure where to start. I remembered what it was like to trace my finger down the line of his jaw and tease my fingertips across his lips. He looked back at me, unblinking, locked motionless as though encased in ice. A sliver of a smile hitched up one corner of his lips. It was subtle, just a hint of humor.
"Well?" he growled, his demon accent rich and deep as molasses. He'd been hiding the accent from me at the workshop and from Jenna. My demon did a curious little purr inside my mind, rolling over like a cat falling over herself at the feet of her master. She wanted a piece of him. We both did.
I swallowed, waging my own internal battle to stay focused. "I listened to the message you left for Ryder."
"Prying? Wow, that's low."
The anger was back, all my own. My demon still wallowed in the restrained power he radiated. Torn inside, I battled on two emotional fronts. "I've had a rough few days. Cut me some slack." Pinching the bridge of my nose, I sighed. _Stay calm. Don't fight._
He crossed his arms and regarded me coolly. Whatever he saw, it drained the tension from his body. "Alright, you've got ten minutes. Say what you gotta say and leave."
"I... er wanted to check that you were okay."
His gaze skipped away. He smiled. "Have you heard from Ryder?"
"No. He's a big boy. He can look after himself. _Are_ you okay, Stefan?"
"Of all the things you could ask, that's your first?" He brushed a hand across his chin. "When the Larkwrari came through the veil, Muse, at the garden, it tore me apart. I held the veil open too long. It poured enough raw power into me that I should have leveled the city." His eyes told me he'd wanted to. I knew that feeling. "Or died."
I glared back at him. "Hurts like a bitch, doesn't it?" My gaze said, _yes I've taken in that much power and nearly died from it. What do you want? A fuckin' medal?_
He laughed softly. "Look at you, Muse. Standing here, thinking you're something hot. Must feel nice to have a prince on the end of your strings. Do you burn for him, Muse?"
He was hurting. I'd let him off the things he'd said at the statue, and I'd let it go now, but a time would come when I wasn't going to let it slide so easily. "Don't do this, Stefan. You're better than this."
"And how would you know? You had a few sharp things to say to me that night on the pier, right before I stepped through the veil for you. What was it? You never wanted to see me again. Well, you almost got your wish. Didn't think it through though, did you?" He smiled. His words were born of anger, but his tone was flat.
I watched his face. The smug smile sat firmly on his lips, and laughter danced in his eyes. I didn't see anger in his expression. He hid it perfectly. I was walking on thin ice. I lowered my voice, "You know I didn't want to hurt you."
His gaze wandered briefly. Perhaps the memories clouded his thoughts. Was he faltering?
I approached the back of the couch, deliberately keeping a barrier between us. "You said... you thought about me while you were trapped in the netherworld... That I meant something to you."
He chuckled and sank his hands into his hair, sweeping the blonde locks back from his face. His eyes had brightened. His smile tightened, almost twisting into a sneer. He moved back and paced behind the opposite couch. "You did, once." He threw a glance my way, accusation in his eyes.
"Stefan..." I whispered. My breath misted in front of me. "I never stopped thinking about you."
His glacial eyes blazed. "Were you thinking of me when you screwed Akil? Every time since? I lost years, Muse. For you. To keep that psycho-prince away from you and Nica. The second you get back, you're all over him. I wouldn't even care if it was just time. Two years, pfft"—he tossed his hands in the air—"it's nothing. But I lost so much more than that. So, please forgive me," he growled, "if I seem a bit tense."
"I know you're grieving. I miss Nica too."
"You have no idea." He snarled, turned on his heel, and stalked into the kitchen, out of sight.
I growled and shut down the acidic rage simmering in my gut. I wanted to march in there, yell at him, scream that I needed Akil to get Damien out of me, that I wasn't screwing Akil, to yell that I never wanted any of this. My hands clenched, itching to throw something, a few punches, maybe some crockery.
Slowly, I entered the kitchen and hung back in the doorway.
Stefan leaned forward against the countertop, his back to me, hands splayed on the surface, head bowed. "You can't be here. I can't stand this." His words grated as though he'd dragged them through hell to speak them.
"I'm sorry." His shoulder muscles tensed beneath his shirt. I'd once slid my fingers over those broad shoulders and pulled him close, so close I didn't know where he ended and I began. "For everything you think I did. Every day, I wanted to get to you, Stefan. I worked for Adam, did everything he asked. I hated every second of it, but I did it to get my demon back to go after you." He had to listen to me, to hear my words. I couldn't live with this any longer. Either way, he had to know my side. I had enough fetid darkness devouring my insides. I didn't need or deserve his hatred. I needed him to understand in a way I hadn't even realized until that moment.
"It doesn't matter." The muscles in his braced arms flexed like cables under strain.
A swell of emotion clogged my throat. "I'd have done anything to get to you. I never gave up. They said you were dead. I knew you weren't." I blinked back tears. The truth hurt. "I knew the netherworld would try to kill me. I knew Akil was there. Val too. I didn't care. Even after the things Damien did, how he destroyed me all over again, I came for you." Tremors rolled through my body. Memories flowed forth. I'd not dealt with the fallout from my time in the netherworld. The horrors I'd endured were still there, too close to the surface of my thoughts to hide from. My demon snarled at my weakness, despising my emotional humanity, but I didn't care. If Stefan would just look at me, he'd see the truth exposed on my face. "Please believe me. I never gave up on you, Stefan."
"You..." His breath sawed out of him. Veins of electric blue ice sparked across the countertop and snapped up the windows, cracking the glass. "You must."
He shouldn't have been able to draw his power. But he was. The air in the kitchen hardened against my skin. Ice frosted on my lips. I breathed in the burning cold. My throat tightened. My bangs collected diamond-ice. Frost dusted my lashes.
Stefan's outline rippled. My focus phased out as translucent wings of ice sprouted from his back. The sunlight cascading through the window refracted through those wings and scattered countless shards of light across the kitchen. The entire room sparkled. He turned. His demon glared at me through crystalline eyes. I couldn't see all of his transformation, just a superimposed ghost of what he truly was. Man and demon vied for the same space. My vision blurred. Power trembled beneath my feet and rippled through the air.
I stole a small backward step. Stefan abruptly appeared in front of me, filling my vision. He lifted a hand to touch my face. Dry-ice spiraled from his skin, rising like smoke. When his fingers brushed my cheek, the touch burned. I hissed and turned away. He was everywhere at once, the air I breathed, the thoughts in my mind, the embracing cold shrinking around me, closing into a deadly embrace. I couldn't breathe. Ice crawled across my tongue and down my throat. Instincts screamed at me to run. I looked into his eyes and fell into the power swirling there, caught in his crippling beauty.
"Back off!" Ryder pressed a gun to Stefan's temple. Swirling ice vapor teased around the muzzle, dusting the barrel as it crept toward the grip.
Stefan's eyes bored into mine. His cold leeched the heat from my body, drawing it out of my flesh. I was dimly aware of my seizure-like shivering, but it didn't seem to matter. My demon kept trying to break through. Her attempts felt like nothing more than a fly bumping against a window. Wrapped in ice, I was solid. Hard. Unbreakable.
"Stefan. You know I'll do it. Back away from Muse." Ryder cocked the hammer.
Stefan smiled. Ice cracked away from his lips. He moved back. Those glorious wings chimed. Snow trailed from their crystal-feathered edges.
Ryder grabbed me by the shoulder and shoved me back. "Get out!"
I stumbled backward through the kitchen doorway and bumped into Jenna, standing rigid, gun out, aimed at Stefan.
Stefan gleamed in the sunlight. Glorious. Godly. I couldn't tear my gaze away. The touch of winter coiled up my legs and slithered around my waist.
"Get it under control, pretty boy," Ryder drawled, the gun a few inches from Stefan's head. His grip trembled. Ice clawed over his hand.
Stefan still looked at me. The smile had frozen on his lips. He hadn't even blinked. And I knew now, it was a mask. His demon had complete control. He could draw from the veil with a single thought and kill us all where we stood. He might not even need the veil to do it. Just how powerful was he?
"Go." His demon voice splintered the layer of ice smothering everything in the kitchen. The windows shattered. Ice and glass exploded.
I ducked away and fled through the door with Jenna hot on my heels. Panting, we stopped by the Mustang and waited for Ryder to emerge from the open door.
Stefan had lost control. No wonder Ryder had said half bloods were dangerous. He'd been dealing with this.
"Is Ryder going to be okay?" Jenna asked. I nodded, teeth chattering too much to speak. She shrugged off her coat and handed it to me. I shook my head. "Take it before you get hypothermia or something," she said.
"N-no. I'm okay." He wouldn't have hurt me, would he? Was that what he'd been doing? Was he killing me with cold?
Jenna threw her coat around my shoulders and bunched it beneath my chin. She smiled. "You're as stubborn as he is. Deal with it." She plucked her cell from the pocket and jabbed speed-dial.
"What are you d-doing?"
"I have to report this. He's out of control."
Ryder beat me to it. "No you don't. They already know." He ambled over, holstering the gun inside his jacket. It was Stefan's gun, the Desert Eagle, complete with entwined scorpions etched into the grip. Ryder saw me eyeing the weapon. "He recognizes it. Helps ground him. Dunno for how much longer though."
"Ryder... Why d-didn't you tell me?"
He nodded at the Mustang. "Get inside the car. I'll crank the heater up an' get you warm."
Obliging, I sat shivering in the passenger seat. Jenna sat in the back, eyes trained on the house, hand resting on her thigh, ready to go for her gun.
Ryder rested a wrist on the steering wheel, turned toward me, and peered at me in that way he did when I screwed up and we both knew it. "What are you doin' here?"
"I needed to talk to him."
"He could've killed you."
"I didn't know he was... like that."
Ryder raised his eyebrows. "You were in Boston when Stefan brought a snowstorm in summer, right? You remember the dragon-demon eyeing us all for lunch? Don't kid yourself, Muse. Did you think he was goin' to be able to brush it off?"
My shoulders dropped. I sank in the seat. "Yeah," I said in a small voice. "He's Stefan."
"Not any more, he ain't."
My demon shifted and resettled. "He just needs time."
Ryder looked out of the windshield at the house. It looked serene in the sunlight, as it had when we'd arrived. "Yeah, well, the Institute hasn't decided what to do with him."
"He can't go back to Adam."
"Nope. They aren't daft. Stefan inside that place again? He'll make Damien's killing spree look like a fuckin' Sunday stroll. The ward-symbols don't stop him. Dunno why. They used to."
Even Akil couldn't get through those symbols. Could he? He'd summoned his true form back at Blackstone. What if the princes weren't affected by the symbols? What did that make Stefan? As powerful as Akil?
"You've been up here all this time. Guarding him? Protecting him?"
"Both. Until Adam issues the order to have him... dealt with."
"Killed."
Ryder didn't reply.
Would Ryder do it? Yeah, of course he would. He wouldn't hesitate.
I pulled Jenna's coat tighter around me. The shivers were subsiding, leaving me exhausted and miserable. "Maybe I could..." I shrugged. "I don't know... help him somehow?"
Ryder's left eyebrow shot up. "What, like you did just then? Muse, no offense, but you're the last thing he needs. You push all his demon buttons. Always have."
"Yeah, but I couldn't bring my demon to that showdown. It wasn't a fair fight. You don't get it." I glanced back at Jenna. She glared at the house, fingers twitching on her thigh. She wouldn't hesitate to kill either. "Nobody gets it. I have a demon in me. Like him. I control it every second of every day. Mostly. Not always, but more often than not. Stefan and me, we're the same."
"Right, and that's why I don't want you in there. I got enough trouble controlling one fucked up half blood, I don't need two." Ryder gave me the cold, hard, military-grade stare. "Don't even think it."
I sighed and looked away, wandering my gaze along the tree line. "I lost Dawn, the little girl. Val has her in the netherworld. Akil is–"
"In Boston, doin' his thing."
"What?" I snapped my head around.
"Yup. Demon chatter has him back on top, butterin' up Boston's too-rich-to-care and fending off fangirls too stupid to live."
"Are you kidding me?" I shrieked. Jenna spat a disgusted curse in the back. "He was bleeding-out in my arms a few hours ago."
Ryder gave me a discerning glare. "You do too much lookin' with your eyes when it comes to Akil."
"You think he played me?" I scowled.
"Well, geez, give the lil' firecracker a certificate." Ryder smiled. "I love yah, Muse, like a sister. But for fuck sake, stay away from Akil. He'll screw you over every which way but Sunday."
I clamped my mouth closed and frowned at the galloping pony motif on the Mustang's dashboard. Had Akil set me up to go into the netherworld, guns blazing, in a bid to rid himself of his competition? Levi. If he thought I had that much clout, he was going to be seriously disappointed. What about Dawn? He said she was powerful. He wanted her tucked neatly away, nice and safe, with me so he could manipulate us both later?
I puffed out a sigh. "Screw Akil. I have to find Dawn. I promised her I would." When I faced Ryder, his face had softened. "I can't let her live like I did, Ryder. She doesn't deserve that. Nobody does."
He nodded, understanding, and dragged a hand down his face. "You came here to ask Stefan for help?" He didn't need me to reply. He slumped back in the driver's seat. "Hate to break it to yah, Muse, but Stefan ain't much use to anyone. And if he can't control that thing inside him, he's as good as dead."
Stefan couldn't help. That left only one option for back up. And I was going to kick his demon-ass for screwing with me.
# Chapter Fifteen
Akil was playing hard to get. Surprise, surprise.
I swung by his office and got blanked by the tight-lipped receptionist. I left a dozen messages on the three numbers he used. I tried his various houses. Nada. With each rebuttal, I fumed. He was screwing me over again. I'd kill him. I'd suck the life right out of him and shove him in Boston Harbor and see how he liked it. Twisted, sociopathic bastard.
By the time I got home, Jonesy glared green eyes at me, declaring me unfit to own a cat because I hadn't fed him for what felt like weeks. He twitched his tail and turned his nose up as I begged his forgiveness and unlocked my apartment. I was about to step inside when I heard laughter coming from two doors down. I knew that honeyed laughter. I'd suck the fire out of his veins and hand him over to Stefan.
I stalked up the hall and rapped my knuckles on Lacy's door, foot tapping.
Lacy answered, bright smile plastered across her face, half empty glass of red in one hand, hair mussed. "Oh, Charlie, you didn't tell me you knew Akil." She gushed. Lacy didn't gush. She wore wellington boots to nightclubs and white to funerals. She was not a fangirl.
I leaned out and peered around Lacy to pin Akil in my sights. He'd draped himself over her leather couch, looking like the cat that got the cream. His shirt gaped at the collar, revealing a tempting V of bronze skin. That smile could melt glaciers.
I put some serious heat behind my glower. How could he be sitting there, drinking her wine, warming her cheeks, while I thought he was dying in the netherworld?
Lacy giggled. "He was waiting for you, so I invited him in. I couldn't just leave him on the doorstep." She fanned her face, her black nail polish stark against her pale skin. She lowered her voice and said softly, "He's hot... I can't wait to upload this to my timeline." She yanked down her top, revealing the rise of her breast. "He signed me."
"Akil!" I snarled.
Lacy jumped. "It's cool. We're just friends, like. Right, Akil?" She turned and yelped. He stood an inch from her, towering over her young and impressionable person.
He handed her his wine glass. "Of course." He eased by her, deliberately brushing against her body.
She practically melted in a puddle of estrogen right there.
I stepped back, let him by, and watched Lacy give him bedroom eyes as he sauntered down the hall. "Go take a cold shower. Then call me when the wine has worn off so we can talk demon-protection."
She pointed a finger-gun at me. "Gotcha. You have awesome friends, Charlie."
I rolled my eyes but smiled. "Sleep it off."
"I won't be sleeping...." She let the door swing closed.
Akil leaned against the wall outside my open apartment door, arms crossed, eyes alight.
I planted a hand on my hip. "If you think I'm stupid enough to invite you in, think again, pal."
"That is not what I was thinking."
"I don't even want to know." I stalked down the hall, entered my apartment, and slammed the door in his face. Jonesy nudged my ankles. "Stupid, goddamn demons. I think he's dying. I'm the one trying to recruit Stefan to launch a raid on the netherworld. What the freakin' hell...? Ryder was so right. I should listen to Ryder more often. He has Akil pegged." I slammed cupboards and muttered a whole pin-board of expletives.
Only once Jonesy was chomping on his food did I stare at the closed door. Akil was still out there on the landing. I felt his heat creeping under the door. He wouldn't knock to get my attention. That was beneath him. He'd wait. Let me stew. He had an eternity to wait. Well, he could damn well wait. I'd die of old age before I let him in.
He knocked.
My thoughts came to a screeching halt.
"Muse, we need to talk."
I marched to the door and yanked it open. "You can start by telling me why you aren't dead."
He had the gall to look sorry, but it didn't reach those eyes. It never did. "Immortal?" He blinked.
"Ha-feckin'-ha. What, you got a sense of humor while Levi beat the shit out of you?"
Fire flashed in his eyes. "Careful."
"Or what?" Oh yeah. I was ready for this. _Bring it on, Prince of Hell._ I spread my arms. "You can't get me in my apartment, mister-I'm-dying-in-your-arms-come-save-me." I finger-walked the air. "So just mosey on back to the hole you crawled out of. I'm done."
His lips quirked. "Is that all you've got?"
"Hell, no. Stay away from Lacy. She's sweet. You're like the anti-sweet, whatever the word is for that."
"The devil."
I gaped then, quick as wildfire, asked, "Are you?"
"No, he's in Hollywood."
"Huh?"
"Satan."
I blinked and shook my head. "Whatever. Where was I?"
"Insulting me."
"Oh my god, yes. You're impossible. You're a murdering son-of-bitch. You get off on pain and control and fucking up peoples' lives. You lie through your teeth. You wouldn't know the truth if it crawled up your ass and bit you on the balls." _Oh yeah, I could get used to this._ "You have no idea about personal space. You're in my face the whole time. It's wrong. People don't do that. People respect each other's boundaries. You stomp on boundaries. Also, you snore."
His eyes had softened to a green-flecked hazel. He worked his lips, as though trying to swallow something wriggly and alive. Inside, he was laughing and trying damned hard not to.
Anger fizzled beneath my skin. "You think this is funny?" I snorted. "You would. Your sense of humor is so dark, even the lesser demons don't get you."
"Charlie, dear, are you okay?" Rosaline's fine English accent stopped my tirade.
Akil choked off a laugh. "She's okay, Rosaline. Just voicing a few grievances. How is the television?"
"Oh, it's working perfectly. You're such a nice young man. I'm always telling Charlie how she should find herself a nice man."
Nice? I poked my head around the door. "Rosaline, you have no idea."
"You're very kind, Rosaline." Akil flashed her a smile. "But Mu-Charlie is too good for me."
Rosaline placed her hand over her ample bosom. "Oh, modest too." She was wearing lipstick. She never wore lipstick.
I retreated to my apartment and threw my hands in the air. "You fixed her TV?" We glared at one another as Rosaline's mutterings wafted down the hall. "You coerced my neighbors into inviting you in?"
"Yes."
"You're evil." I meant it. "If you hurt them, ever, I'll bring the fire down on you so hard you'll never be tangible again."
"Don't talk dirty to me, Muse." He stepped over the threshold and into my apartment. "I just might take you up on your promises."
My jaw just about hit the floor. He was in my apartment. I hadn't invited him, and yet, there he was. Panic scurried through the debilitating effects of shock. I mentally swatted it aside. Had I mentioned something about the truth, his ass, and other parts of his anatomy? I'd really said those words together in the same sentence to a Prince of Hell. I blinked a few times, swallowed carefully, and flicked my hair out of my face. "You spoke to my landlord too?" I said, voice pitched too high to be nonchalant.
"Yes, I did. He said I could visit at any time. Polite gentleman, don't you think?"
My heart fluttered so fast I thought it might burn itself out. "What is wrong with people? Don't they watch the news? You're a demon."
"But I'm so charming on television." He quirked an eyebrow, and I had a hint right there of his grand plans.
This was unacceptable. "Get out."
"No." He closed the door behind him.
"Unlike my neighbors, I know what you are. I meant all those things."
"I don't doubt it."
"I don't want you here."
"Yes you do."
I snorted. "I really don't."
He moved so fast that all I got was a face of displaced air and then I was staring at his chest. I rammed both palms against him and shoved back. "No. No, I'm not doing this." I backed away. "You were bleeding-out in my arms, Akil!"
"You wanted answers. So ask questions."
I pressed my back against the cool apartment wall, grateful for the rigid stability. He stood in my quaint kitchen, looking somewhere between intrigued and mildly bored, the picture of sophisticated elegance in his obscenely expensive suit. Cufflinks reflected the light while his eyes captured it. Immortal. Ageless. Infinite. So toxic, he should come with a danger-of-death warning sign.
"What's Subject Beta?" I asked. Akil's eyes widened. He hadn't expected that question, and that's why I'd asked it. I hadn't forgotten about the file I'd skimmed on Adam's desk the night Nica died. I'd tried to meet with Akil during the last two months, and he'd eluded me. Until now.
He blinked, erasing all traces of his surprise. "Part of an Institute initiative to reproduce and utilize half-bloods."
My mouth fell open again. I scratched around my own head for my voice, but it was gone. It took me a few moments of blustering to say, "I'm sorry, what?"
"Subject Alpha is Stefan. You are Subject Beta. There are two others in Boston that I'm aware of."
My mouth worked, but no sound came out. My brain backpedaled. "Why?" I spoke so quietly I wondered if I'd spoken at all.
"You—or more specifically half bloods in general—are their secret weapon."
"You're lying?"
"No. I rarely lie. I merely manipulate the truth. I don't need to lie about the Institute. Everything about them offends. They meddle in the affairs of demons and believe themselves above the laws of nature. It is abhorrent." He delivered the last word with a disgusted snarl.
I hadn't expected this. Not only was Akil in all likelihood telling the truth—shocker—but the Institute had been watching me since I was born? "But... When? Val... Sold me... I thought..."
"Valenti controls the half bloods. All of them. He always has. You were to be sold to a nameless demon. That demon had plans to hand you over to the Institute in exchange for unhindered travel to this side of the veil. Val discovered the betrayal. He retrieved you and killed the demon who dared trade with humans."
I pressed my hands against the cool walls on either side of me. "Okay..." I'd always known my brother was responsible for my troubled upbringing, but I hadn't realized the Institute had a vested interest in me from such an early age. They'd bargained with demons to get their hands on me. How far did their reach extend? Was Akil in on it? I lifted my gaze. "Where do you come into all of this?"
"Do you mean how I saved you, taught you how to be human, gave you a lust for life, and you threw it all back in my face? That part?"
"No." Just had to remind me, didn't he. "How do you know all this?"
"How do you know about Operation Typhon?"
I narrowed my eyes at him. He knew everything in that handsome head of his. I'd been so afraid of him before, I'd never asked the horrifying questions. I hadn't wanted to know about the past. The future was all I needed. The problem was, you can't have one without the other.
"I read a file on Adam's desk."
He inclined his head and looked at me through dark lashes. "The ancient Greeks believed Typhon the father of all monsters."
"Adam." His name slotted into place like the missing piece of jigsaw.
Akil didn't reply. He didn't need to. "I make it my business to observe the Institute closely. I knew about you before you were born, when you were an idea, a chess piece on a figurative board. When Valenti decimated the demon for bartering one of his half bloods, I was intrigued. I asked myself why demons and humans alike were squabbling over half bloods. I watched. I learned. I waited."
"While I suffered?"
He held my gaze and moistened his lips. "I'm not perfect."
Laughter bubbled up and burst from me, turning from something light and jovial, to dark and menacing. The putrid thing knotted around my heart tightened. I choked back the insane hilarity and clenched a fist to my chest. I never expected the truth to hurt. My legs wobbled. I bowed over and planted my hands on my thighs, trying to control my breathing. I was used to being a worthless half blood, a demon plaything. I didn't expect anything from demons, only the worst they could do to me. But Adam had tried to buy me? And to know Akil had watched me all those years. While I'd been shoved from demon to demon, he'd eyed me from a distance. Everyone seemed to know all about me while I stumbled about in the dark, groping for answers.
"Who was my mother?" I asked softly, waiting for the tightness in my chest to pass.
"I don't know. I suspect she succumbed during your birth or was killed soon after. Your father, Asmodeus, is brutal."
I looked up. "You knew about my existence before I was born, but you don't know who she was? Can you tell me anything about her?"
"No. I had other priorities. I avoid stalking human women, present company excluded, especially females under the wing of another prince."
"Do you think I'm maybe like her?" I didn't know why I asked. I'd never really thought about her before. I'd wondered, briefly, what it might have been like to have had a normal childhood, but pining over a past that could never have been wasn't for me. The future, that was me. I lived for tomorrow. I didn't ask questions. I didn't care. One foot in front of the other, always moving forward, always running headlong toward hope.
"I think..." he lowered his voice to a soothing timbre. "If she was anything like you, she'd have rained hell down on them all."
"You're damn right she would have." I straightened and pushed off the wall. I staggered somewhat, but Akil knew better than to reach for me. His dark eyes watched me approach him, drinking in my stride. I stopped inside his personal space, placed a hand on his chest, and stood on tiptoes to plant a chaste kiss on his cheek.
He turned his head and looked down at me as though trying to solve an exasperating puzzle. Surprise muddied the amusement in his eyes.
"You should try the truth on for size more often. It suits you."
# Chapter Sixteen
I woke slowly, wrapped in body-hugging warmth. Opening my eyes, I watched dust motes dance in the sunlight pouring through the windows. The world was quiet, my thoughts soft and content. Jonesy purred in my ear. I had a few glorious moments where my ignorance really was bliss, and then reality punched me in the gut. Like when you wake and think it's a weekend, but it's not. It's a weekday, and you should have been at work an hour ago. Multiply that sensation by a hundred.
Akil lay behind me. His warm spicy scent filled my head and tingled on my lips. I tensed and stopped purring. I'd been purring? Gulping, I took mental stock of the situation. I had all my clothes on. We were on top of the bed covers, not tangled up in each other's limbs beneath them. I sighed out the breath I'd been holding. Why were we on my bed? Why hadn't he said anything? Was he awake?
His breath fluttered at my neck, sending a shiver trickling down my spine. He wasn't touching me, but his heat radiated as though I'd curled up in front of an open fire. A tickling tendril of heat crawled across my back, down over my hip, and wove around my thigh.
In all the years I'd lived with Akil, I'd never once woken up next to him in the morning. I'd woken to an empty bed. Every. Single. Time.
We'd talked last night until my voice was hoarse and my head full of Adam's treachery. Akil knew more than I could have imagined. Adam had learned about half bloods after falling for a demon held captive by the Institute: Yukki Onna, Stefan's mother. Adam did the dirty, she became pregnant, and gave birth to Stefan at the Institute under the watchful eyes of the Institute scientists. Adam vowed to protect Stefan, knowing his newborn son would be killed if Yukki returned with him to the netherworld. Adam planned to eventually return him to Yukki, but Stefan was no normal child. He had power, power Adam could control, mold, and utilize for the benefit of the Institute. Stefan was a weapon. Half bloods were more powerful than Adam or the Institute could have imagined. Operation Typhon was born.
I must have fallen asleep on the couch. Did I fall asleep on Akil? Had he carried me into the bedroom? My demon stretched inside me, casting out a ripple of heat. Akil's breath shortened. He was awake, all right.
"Shouldn't you be pulling the wings off flies or something?" I quipped. Fake bravado made substantial armor.
"What would that accomplish?" His breath brushed my shoulder.
"Why are you here?" I couldn't turn to look at him. If I did that, I didn't trust what might happen next.
"Did you know you purr in your sleep?"
"That's not me. It's my demon."
"It's... adorable."
I jolted upright. He lay on his side, head propped up on an arm, thankfully fully clothed, although his creased shirt rode up his waist revealing a tantalizing glimpse of bronze skin. "Okay, what's going on? This isn't you. First, you start telling the truth, and now you're all..." I gestured, grasping for the right word, "Nice. A Prince of Hell doesn't do nice. Where's the real Akil?"
"Lay back down, and I'll reacquaint you with him."
I sprang from the bed and wobbled to my feet. "Urgh, how long was I asleep?"
"Twelve hours and fifteen minutes."
I hadn't dreamed. The suffocating nightmares hadn't stalked me. I'd slept a whole twelve hours without waking in cold sweats with my own screams ringing in my ears. I pressed a hand to my chest. Damien was still in there, but he was quiet, still. Dormant. Had Akil's presence somehow subdued him?
"What did you do?"
A frown touched his face. "I was the perfect gentleman. You appear to have a very low opinion of me."
"Well, duh. You killed my friend. You tried to kill me. I'm more surprised that you aren't laying the power on thick and summoning my demon out of me so you two can tango."
He gestured with a flick of his hand. "Your delightful symbols won't permit me."
Did that mean he couldn't draw his power or just that he couldn't draw _her_ out of me?
He straightened in much the same way big cats do, his languid movements unhurried and graceful. Once on his feet, he eyed me curiously. His hair was mussed and scruffy from twelve hours sprawled on the bed. His shirt gaped open and hung crooked on his muscular frame. He looked slept in, vulnerable, and very human, and it messed with my notion of the flawless, infallible, Prince of Hell.
He stepped closer, invading my space, and laid his hand over mine resting on my chest. "He consumes you from the inside." The honeyed roll of his words failed to mask the gravity of their meaning. "I know. I feel him. I witness his dark polluting your brilliance. If you wait too long, his essence will adhere to yours, and you will never be free of him. He will destroy the light in you."
I blinked up at Akil. "To get him out, I have to let you in... let you do the same to me as he did."
He inclined his head. "Not the same. It would be glorious. Would you rather he fester inside you?"
"I'd prefer to be free of both of you."
He trailed the fingertips of his left hand down the side of my face while keeping his right hand clasped over mine on my chest. "An infusion is a wonderful thing: an eternal bond between paired demons." He leaned in closer, shutting out my awareness of anything but him. A smile twitched his lips but didn't settle. The sparkling touch of restrained energy fizzled between us. A welcome warmth spread through my body, loosening muscle and chasing away the fears rattling in my head. Recognizing the seductive spell he'd cast, I balked and stepped back, but he moved with me, backing me against the wall. My demon reached for him, her snarl bubbling in my head when she could no more wrap herself around him than I could push him away. His fingers played lightly on my chin, tilting my face up. A lick of power strummed through me. "I will scrub the memory of Damien from your mind and soul. It will be..." His eyelids fluttered. The tip of his tongue darted across his lower lip. "...true ecstasy. You'd be free."
"Free of him, not you..."
He slammed a hand against the wall beside my head, rattling the windows. I flinched and gritted my teeth, but I held fast. He would not win this battle of wills. His rumbling growl was pure demon. Power bubbled up from the depths of his immortal presence and simmered dangerously in his eyes. He snared my gaze with his and bowed his head low enough that I could have flicked out my tongue to taste him. I reined my desires back, hands fisted at my sides. My demon paced behind mental bars. She wanted to pounce on him, tear his clothes off, and taste every delectable inch of his perfect body. She'd ride high on the ecstasy he offered. She didn't care for feelings or loyalty, and she didn't give a damn about control. She wanted to screw him until we shattered. I struggled to separate the sensations vying for supremacy. Her needs, my wants, the desire strumming beneath my skin—it all conspired to break me apart.
"You want me," he breathed. "All of me. You like what I am. You blame your demon—a weak excuse, Muse." With every word, his lips brushed mine. Heat sizzled beneath the feather-light touch. An impulsive urge to dart my tongue across his lips almost broke me. I bit my tongue and hoped the pain might anchor me. To taste him, to devour him, to explore every delicious inch of him... I'd been there before. I knew him intimately, and that only made self-restraint all the more punishing.
"You lie to yourself, Muse." The power beneath his words plucked my fears away as though they were insignificant. "You forget, I know you better than any man, or demon. Better than you know yourself." Filaments of fire sparked in his eyes. "I see your soul. I see the heat of desire burning in your eyes. Your need is wet between your legs." He lifted his hand from my chest and drove it lower, leaving a trail of heat. His fingers flicked open my pants buttons, his words punctuating each release. "You. Want. To. Fuck. Me."
Lust speared through me. I couldn't deny anything he'd said. It was all true. I dropped my head back against the wall and closed my eyes. He was right. I burned for him. My skin fizzled. My heart drummed a relentless beat. It was only by a tiny thread of stubborn denial that I clung onto control, but that thread was unraveling, slipping through my fingers. _It's just sex,_ I told myself, _just like old times._ This was old territory. And yet it wasn't. I'd changed. Whatever this was, it was different and weighed down with meanings I didn't understand. I snapped my eyes open and clutched at his shirt, torn between pushing him away and dragging him close. I needed his hands to brand me and his lips to follow. The smile tugging on his lips told me he knew I'd lost.
"Akil..." _Stop..._ But I couldn't say it. My demon rattled her bars and roared in my head. If she would just shut the hell up and let me think...
He pinched my lower lip between his teeth, letting it spring free as I sucked in a breath. His wandering hand found the wetness between my legs. A reluctant groan peeled from me. His fingers roamed. I whimpered and felt him smile against my cheek. "I will bury myself inside you," – deeper – "and free you, Muse."
A pent-up groan bubbled at the back of my throat, not quite free, and I slumped against him. "Damn you..." I snarled.
"Say yes. Accept me. Now." He leaned into me, his body a wall of hard muscle and otherworldly strength. His blunt teeth nipped my lips, my chin. He knotted his free hand into my hair and held me still, dark irises blazing with a halo of amber. Raw elemental power wrapped around me in an unseen embrace. I couldn't think through the roaring need and wave after wave of heat. I was aflame inside, barely human, a beast consumed by desires. My hips rocked against his fingers as they dove in and eased out. I was losing my mind.
"Get. Out." The words hurt to say. I wasn't even sure they were mine.
"No." He growled the word in a rushed kiss. The spicy taste of him danced on my tongue. I wanted more. I sank my hand into his hair and plunged my tongue into a raw exploration of a kiss as though my life depended on it. I breathed him in, drank him down. His mouth roamed. His teeth nipped, sharper this time, rough with urgency.
A dart of control pierced my madness. Perhaps it was the memory of my owner's abuse or the remnants of him throbbing around my heart, but whatever its source, it blindsided me enough to release me from the crazed hunger. I planted both hands on Akil's chest and shoved him away. He staggered, snarled, and lunged. His heady scent danced on my tongue and sideswiped my denial. His arms clamped around me, biceps flexing. His body crowded mine. Steel and honey. So hard, so smooth. I teased my tongue down his jawline and down his neck where his pulse tapped a brisk tempo against my lips. I could almost taste the blood rushing through his veins. His muscles trembled beneath my hands. He wanted me. My body may have betrayed what I needed from him, but he was equally slipping into desire. It went deep, this madness. Did he realize how lost we both were? I tore his shirt open, nipped his shoulder, and dragged my nails over the taut biceps and down his toned arms. My demon roared. I growled, snarled, and snapped at her, driving her back. He was mine.
Akil tore my top over my head. His fingers speared into my hair and locked tight. He yanked me against him, releasing a savage groan against my neck that spilled a new wave of quivering heat through my body.
Jesus, he was unrefined, raw, and wild. I caught the gleam of the demon glaring back at me. Chaos caged. He wasn't a man. That being inside him, simmering just below his trembling flesh, was an eternal demon—not of this world, not belonging to humanity in any way. He was unreal, so far beyond my naive comprehension that I could no more hope to understand him than I could the workings of the universe.
"Stop," I breathed. My lust-soaked body belied my complaint.
"No. You need this. Don't deny it." He strangled a groan, the sound so wrought with bottled up lust that I couldn't help grinding my hips against his hardness. "Submit to me, body and mind," he ordered, fire in his eyes, power in his voice. An undoing spilled through my demon. Had it not been for my human half, I'd have been reduced to a mindless puddle of demon hunger at his command.
Where that molten stare wandered, my skin tingled. I couldn't breathe, couldn't see past desire. The smothering madness tore out all reason, but... "Akil?"
He growled low again, this time in warning. "Don't deny me." Need slurred his words. "Do not condemn yourself, Muse." His body shimmered behind a heat-haze. He pulled me to him, grinding the hardness of his cock against my hip. Another grunt of failing restraint slipped from him. I breathed out a pleasure-leaden moan and arched back. He held me tight, possessively, his hands smoldering against the small of my back. He trailed simmering kisses down my neck, lower, summoning fire in their wake. He hooked his fingers over my bra and tugged it aside. His lips burned. His tongue swirled around my nipple. I locked my hand in his hair and growled out a curse.
When his power sought mine, writhed through my skin and thrust toward my center, I immediately shrank away. Despair finally broke through the suffocating insanity. Akil would drive himself inside me, tear out Damien, and make himself at home. He would do to me the same as Damien had done: force his power inside and tear me open. I flinched and drew my humanity around me like a coat of armor. He couldn't have me like that. What Damien did to me was never happening again. Nobody could control me, beat me down, and drive me to my knees in submission. Ever again.
"I will never submit." My barely human speech betrayed my demon. She spoke through me despite the symbols on the wall. She was there, driving the lust out and glaring back at Akil's flame-filled eyes with nothing short of defiance.
I shoved his shoulder. He twisted back in against me, growling a warning.
"Stop." I clamped his face in both hands. "Just... stop. I'm not doing this with you."
He did stop. The world stopped too. I held his conflicted gaze and waited, not breathing. He wouldn't force me. He might be a beast inside that body, a creature of greed. He wanted, he yearned, he desired, but he'd never crossed that line. It was the only truth about him I could trust.
Finally, his eyes softened. He searched my face and with a sigh, planted a chaste kiss on my lips, and encircled me in his arms. The superheated lust we'd summoned dispersed. Seconds ticked on. Our panting breaths slowed, and our bodies cooled. The rapid beat of his heart corralled my runaway thoughts.
"I can't trust you." Humanity clipped my voice. I was back in the room and in control. But damn, he smelled divine: hot spices, sex and cinnamon. I drew in a deep breath and savored him because this wasn't happening again. He was my drug. An addiction. If I let him, he'd destroy me, and it was my choice to make.
He swallowed and stroked my hair back from my face, expression surprisingly calm. "You shouldn't."
"I don't think I can ever trust you."
"In which case, you had better become accustomed to sharing your soul with Damien." He peeled himself away from me. The sudden absence of heat released a wave of shivers through my body. He bent and scooped his shirt off the floor, gaze skipping to me as I watched him. He shrugged it over his shoulders, leaving it gaping, and gave me the raised eyebrow and wicked smile that sprinkled lust through my veins all over again. He knew the effect he had on me. Was there a Prince of Temptation? It should have been him. I slumped against the wall, cold but defiant. Akil raked a hand through his hair, eyes hardening. "The spirit which forms the soul can be changed, shaped, molded. Souls are akin to chaos in that respect. Make no mistake, your owner will destroy yours." He ruffled his hair, let his hand drop, and sucked in a wavering breath. "Your stubbornness will be your downfall."
"It's my choice to make." At least I didn't sound like I was about to collapse, even if I couldn't quite move away from the wall just yet.
Akil weighed my words. His smile had gone. While he buttoned up his shirt, I watched shadows gather into a frown on his face. "We have a half blood to find."
I nodded but couldn't find my voice until it occurred to me what had just happened. Ryder's words came back to haunt me. I did too much looking with my eyes, and Akil was made to seduce and manipulate. What had Akil told me once? His vessel was a trap, designed to lure and consume. The trembling of his formidable muscles, his sculpted masculinity, how impossibly perfect he appeared to be: it was an act. Even now, as a tiny bead of perspiration trailed idly over his rippled abs while his fingers worked the buttons closed. Every part of him was fake. Akil had played the 'nice' card in a bid to get me to submit to him.
My smile masked the downward tilt of my lips. "Oh, you're good."
He lifted his head, and his eyes narrowed, cutting me a scathing glance. "You must have me confused with another sociopathic demon."
"That's not what I meant, and you know it. All of this..." I flicked my hand. "The Prince Charming act... Mister Nice. You were screwing with my head." Yeah, it made sense now. _Butter me up with some truths, and then royally screw me over_. I lifted my chin and glared at him. "I thought we were beyond that crap. I guess I was wrong. You're still a lying son-of-a-bitch. You're not worth it."
Rage burned so quickly through his dark eyes that a blast of heat warmed my skin for a few seconds. A muscle jumped in his jaw. I'd clearly offended him, and my conviction stuttered.
"I've killed demons for lesser words, Muse." He turned and stalked toward the bedroom door.
Oh yeah, my words had hurt him. Well, that was unexpected. If he hadn't instigated the charming act to manipulate me, what had it all been about? Why was he being nice? I raked a hand through my hair. Nice I couldn't figure out. Nice disarmed me unlike anything else. Had he waltzed in here, all brutal orders and demands, I wouldn't have been surprised, and I'd have slapped him down much earlier. But he hadn't. Sure, he'd tricked his way into my apartment, but for him, that was par for the course. Something must have rattled him enough to bring him to me with answers on his lips.
I pushed away from the wall. "Akil, what happened in the netherworld? You were hurt. I didn't imagine that."
He stopped in the doorway, one hand resting on the jamb. He didn't look back. "Levi captured and tortured me for information on Dawn's whereabouts."
Just how powerful was Levi if he could reduce Akil to a bloody mess? "Did you tell him you gave her to me?" I asked quietly.
Akil's shoulders twitched. He chuckled, awakening the vestiges of desire tingling beneath my skin. "No. It takes a great deal more than physical pain to manipulate me."
"I lost Dawn anyway. You were tortured for nothing."
"On the contrary, I learned a great deal about Leviathan. Previously, I relied on Carol-Anne's ego to get what I wanted. I invited her to my apartment, feigning intrigue in her half-blood pet. Unfortunately, that didn't end well for her." Akil tilted his head and lifted his dark eyes to me. "Your brother took Dawn. As the custodian of half bloods, he will have returned her to Levi. And I know exactly where to find Leviathan." He paused, a thoughtful expression lightening his face. "Life really was quite tedious when the princes were forbidden to challenge one another." He didn't appear beaten. If anything, he made the fact he'd been tortured sound like foreplay.
It occurred to me that I'd been fooled by a bleeding Akil, as no doubt had Levi. "How did you get away?"
A wicked smile played on his lips, revealing sharp teeth. "You make the same mistake he did, assuming I was tortured under duress. Levi underestimates my abilities." His gaze told me never to screw him over, that he was the biggest, baddest, most manipulative demon out there. I believed him. And I'd just denied him a chunk of my soul.
I smiled right back.
# Chapter Seventeen
Akil broke the lock on the door into The Voodoo Lounge and gave it a brisk shove, almost taking it off at the hinges. I followed him inside the empty club. Within a few strides, a void of darkness swallowed me. I had a sense of space. Quiet yawned wide. I quickly spilled some of my element through me. My vision shimmered. Monochrome grays and black molded into the ghostly shapes of the bar and dance floors.
Akil's eyes glowed red in the dark. I liked to think of myself immune to most demon appearances, but still I flinched a little. Lacy wouldn't have been so quick to let him sign her chest had she seen that look. _Who needs horns and a tail when you've fire in your eyes?_
I trailed along behind him. He knew exactly where to go. During his torture he'd learned that Levi's lair overlaid The Voodoo Lounge. The netherworld exists in the same space as our world, just not in the same realm. The veil acts like tracing paper. The hard pencil lines on one side—our landscape—score through to the other side, creating a similar imprint. Related but different twin worlds. Boston is a barren, half-burned dead forest in the netherworld, but the landscape follows the same contours. Those points of reference don't change. Nobody had mapped exactly what locations in our world matched the netherworld's. Demons don't care for maps, and nothing human (besides a few half bloods) could skip through the veil to accurately survey the netherworld.
One of the back rooms in The Voodoo Lounge served as an entrance to Levi's personal warren—Akil's word. At the Lounge, Levi, the Prince of Envy had always been close. Just a veil away. We were about to hop through the veil and take a discreet look around for Dawn. I suspected there might be an element of revenge involved. Levi had taken Akil to his warren to torture him. Now Akil wanted back in on his own terms. But as long as we found Dawn, Akil could do whatever he wanted to Levi. I wouldn't hang around to watch.
"Are you going to tell me how cozy you and Carol-Anne got?" Despite whispering, my voice echoed through the empty club.
"Ah, you found her body. I suppose that means the Institute people were crawling all over my apartment."
"Yes. They didn't discover anything."
"Of course they didn't. Did you think I'd leave my financial accounts and plans for world domination where prying eyes could see them?"
I stumbled, alarmed, until he slowed and tossed a grin over his shoulder. Right. World domination. He was joking, wasn't he? "Were you sleeping with her?" I asked, determined to get a straight answer before he turned away.
He faced me, still smiling. "Why should it matter if I was sexually involved with Carol-Anne? I don't believe you and I are in a relationship, or are you about to correct me?"
This was awkward. I shifted from one foot to the other and wandered my gaze away. "Obviously, we're not."
His chuckle promptly stopped me from further inserting my foot into my mouth. "I wasn't, nor have I ever been, involved with Carol-Anne. I'd like to think you credit me with more intelligence and better taste."
I swallowed to try to moisten my suddenly dry throat. A change of subject was in order. "Detective Coleman called me when her body was discovered. He thinks I'm in cahoots with you."
"He's not the only one." Akil continued to stride across the dance floors, gait confident and shoulders proud.
He missed my eye-roll. "I'm not here for you. You think everything's about you. I'm doing this for Dawn."
"That is an important distinction."
Was he laughing at me again? I closed the distance between us as we hurried down the back hallway. Closed doors flanked either side of us. "Who else thinks I'm working with you?"
"They all believe you're working _for_ me. Levi. Valenti. The entire netherworld. Although, perhaps not your father. Most demons—including the princes—are ignorant of the significance..." He hesitated a breath. "And Stefan."
Stefan's name on his lips drove an icicle through my heart, as I suspected he knew it would. He was right. Stefan did think I was in bed with Akil. I let it go. Now was not the time. "What significance?"
"Can we have this discussion another time?"
"No, you always say that when I'm getting too close to the truth. And there may not be another time. You're slippery." He stopped so abruptly I walked right into him and almost fell over myself.
He turned. "Slippery?" Amber swirled in his eyes. In the dark hallway in the empty club, he did the whole demon-bad-ass look a little too well.
Raking my hand through my hair, I recovered from falling over him. "Yeah, y'know... Difficult. Tricksy. I want my answers now before you disappear or volunteer for torturing again. How did you come back from that all flawless and..." I cleared my throat. "Er—y'know—pretty." I didn't want to admit how mind-numbingly sexy he was. Admitting I found him attractive felt too much like handing him a small victory. I crossed my arms. "When you were wounded a few months ago, I had to reach through the veil for the power to heal you."
His eyes widened at the memory. He liked it. "You think I'm _pretty_?" he asked carefully.
Now I was smiling. Some words just didn't sound right coming from him. I made a mental note to coerce him into saying obscure words like fluffy... or marshmallow. I snickered.
"Is our situation amusing, Muse? We're about to break into the Prince of Envy's warren, and you're giggling like a child."
"I'm sorry." I coughed and shook myself. "I'm just enjoying this mutual ground I seem to be on with you." It did feel good, though, poking a tiger with a stick.
He pressed his lips into a thin line and peered down at me as I tried to wipe the smile off my face. Once I'd regained my composure, he said, "I've been... different since you shared your element with me above Boston gardens."
"Different?" The way he said that one word, savoring it, rolling it across his tongue, it was a good different. "Are you going to elaborate?"
"No. What do you mean by mutual ground, Muse?"
Hello, warning-tone. I'd touched a nerve. I eyed him the same way he scrutinized me. I'd been afraid of Akil for the majority of my human existence. Afraid, in awe of, bewildered by. But now, I wasn't the cowering half blood afraid of Akil's shadow. I hadn't been since I'd drained him, and I would never be again. It was exhilarating, empowering, and I wondered if this was what freedom felt like. Or was it something far more seductive... like power?
A smile broke across Akil's face as he clearly read my thoughts in my expression. "And therein rests the significance I spoke of."
"I don't understand."
"You will." He opened the door.
A wall of water hit me square in the chest, blasted over my head, and slammed me against the wall. I thrust an arm out, searching for Akil's reaching hand, missed, and gasped before the torrent of water tore me away from him and flushed me down the hall.
# Chapter Eighteen
I coughed, spluttered saltwater from my lungs, and lifted my head out of the puddle I appeared to be laying in. My hair clung to my face, obscuring my vision. I blinked. Green eyes the size of headlights glared at me through steel bars.
I yelped and scurried back. Pain sliced up my back, wrenching out another cry. I was caged on all sides, above and below. I couldn't stand, couldn't stretch my arms out without bumping the razor wire-wrapped bars. My demon reared up, but the second her intention to ride through me became clear, something rammed her back down and pinned her to the back of my mind with as much mental precision as a pin through a butterfly.
I snarled and clenched my fists against my temples. "Get out of my head!"
Wet laughter bubbled around the empty dance floor outside my cage.
I snapped my head up and glared through wet bangs at Leviathan. He filled the space between floor and ceiling with his serpentine bulk. A scaled tail coiled around my cage and disappeared down the hallway from which I'd been flushed. His huge upper-body resembled a human's insomuch as he had arms and a chest, but his head was all green-scaled sea-serpent, and those eyes pierced the gloom, spotlighting me in an emerald glow. I could never mistake the wet, sickly, touch of his mind inside mine.
His slick green snout snuffled at the bars. A forked tongue flicked out, tasting the air. I flinched away. Leaden pressure pulled at my arms. For a moment, I ignored it, more concerned with the demon the size of a school bus eyeing me up for lunch. But then I realized my hands appeared to move away from me of their own accord. _What the hell?_ It felt like waking in the night with a numb limb. Commands left my mind, but my arms didn't obey. They stretched out. I watched, sickened and horrified, as I reached out and closed my hands around the steel bars of my cage. Snatches of pain twinged up my arms. The razor wire bit into flesh. Blood streamed over the back of my hands, down my arms, and dripped from my elbows, but I couldn't let go. My knuckles whitened.
Leviathan's vast serpent body sashayed, which I took to be an expression of pleasure.
"Get the fuck out of me!"
I heaved my body back. My arms snapped taut, but my hands refused to let go. Images started to pile into my mind. I couldn't stop them. This was my so-called talent. _Make her bleed; Make her read._ I could see the past in metal, any metal, but for it to work, I had to seal the link with my own blood. I jolted as though hit with an electric current.
A limp little girl cowered in the same space as me. So tiny. Her wet ringlets matted against her head. She trembled, whimpered, and clutched her rabbit against her chest. I recognized her tatty dress and mismatched socks.
Dawn.
My well-maintained reservoir of rage boiled dry inside me. I screamed at Leviathan with everything I had, but when the bellow broke over me and boomed from my throat, it didn't sound like any scream I'd voiced before. It was a pure, unfiltered, demonic roar of fury.
Leviathan's grip on my mind and body relaxed. He rippled back, body and tail undulating like waves on the ocean. He finally shook his demon away and stood before me in his human suit. Clad in snug fitting leather and interlocking steel plates, he was dressed for battle. He bristled with daggers and swords. A braid of auburn hair fell to his thighs and twitched like a cat's tail. His eyes glowed green. Haughty cheekbones pulled his lips into thin lines. He should have been handsome, but something in his perfection screamed alien, and my human senses recoiled.
Breathing hard with the sound of my own demon scream still ringing around us, I plucked my hands free of the razor wire. He was out of my body for now. I seethed so much that the water I crouched in simmered where it lapped against my clothes. Oh, I'd kill him—once I figured out how to summon my demon before he could pin her down again. He was so going on my revenge list.
"Greetings, half blood. You have gained power, I see. Asmodeus will be pleased."
"Asmodeus can go fuck himself." My demon lent my voice a throaty resonance, adding a threatening weight to my words, even if it was all bluster. I didn't take well to being caged. It felt too familiar, and if the memories bubbling to the surface of my simmering thoughts were anything to go by, anger was all I had to protect myself from my past. _Demons do so like their cages._
Levi's thin lips twitched like eels. "Passionate too, and yet to look at, you're rather unremarkable. Physically and mentally fragile. Riddled with insecurity... However, I am beginning to understand why my courtly brethren have taken it upon themselves to take an interest in you. You were quite efficient dispatching the hunters I sent after Mammon and the lesser demons I subsequently sent for you. You obviously have hidden talents. Half bloods are quite the puzzle. I do so enjoy tempering your kind."
"You want to temper me? Let me out this cage, and we'll dance." He almost seemed to be considering it. "What? Asmodeus won't allow it? Are you his pet now? Aren't you meant to be a prince? You wanna talk about judging books by their covers? Did you take fashion tips from Legolas? You're all trussed up in leathers and blades, and yet I've not seen anything to imply you're an awe-inspiring Prince of Hell. From my humble cage, you look like a fantasy freak trying too hard."
Levi stalked closer and crouched in front of the bars, leathers creaking. He draped his long arms over his knees and cocked his head. A double-eyelid flickered across his eyes. He unashamedly raked his gaze all over me, and I felt the touch of it as though he rode his hands across my skin. It turned my stomach.
I spat excess saliva at his feet. "Coward. You couldn't handle me outside these bars. You're afraid of a lowly half blood, not even a full de—"
His hand shot through the bars and clamped around my throat, jerking me against the side of the cage. Razors cut into my cheek, my chin, neck, shoulder. If I could summon my demon, I'd tear open the veil and boil the water from his veins. But she was still strung up like a sacrifice inside my mind. She thrashed, but his mental grip held firm.
He shoved me back and watched me gasp air with no trace of emotion on his face. "If you were mine, I would take great pleasure in crushing your spirit." His double eyelids flickered again.
"But I'm not yours..." I wheezed, rubbing at my throat. "You really live up to your name, huh. Envious much?"
He straightened his lithe body. A shimmer of power washed over him, leaving him female. I smiled. I couldn't help it. What did he think he was going to accomplish by wearing a woman suit? She was just as unnaturally stunning, all wrapped up in leather and steel, like something out of Tolkien.
"Where is the young half blood?" she asked, siren-voice pitched high enough to rattle my skull.
I dabbed at the blood trickling down my cheek. "I don't know. I thought you had her. Wasn't that what all the blood-on-metal crap was about?"
"I did have her. I kept her in that very cage. My subject Carol-Anne was her guardian. Mammon wove a net of lies to entrap my subject and stole my half blood from me. He has quite the penchant for half bloods, it would seem. Carol-Anne should have expected as much from the Prince of Greed. She failed me and suffered the consequences. Her quick death was generous of me."
Levi killed Carol-Anne. It wasn't Akil? I hissed as saltwater washed over the cuts on my hands. The water level was rising. I searched around the gloomy dance floor. Where was Akil? Water dribbled in through various cracks in the walls and around closed doors. I hated water, having almost drowned twice. Plus, my demon didn't play well with water elementals.
I tried to maintain my bravado even as I shivered in my flooded cage. "I thought you princes couldn't meddle in each other's business, some sort of mutual agreement not to piss each other off."
She-vi smiled a dazzling smile. "When it is convenient. The old rules are rarely upheld. Titles are shifting. Battlements are crumbling. Laws are worthless when those who uphold them also break them. Mammon is an opportunistic hunter. Do you deny it, Mammon?"
Akil peeled from the shadows behind Levi like a wraith. Now that he'd revealed himself, I could sense his familiar warmth in the air. He moved with predatory grace, head dipped, eyes up and locked on me.
"This gift of yours is quite the feisty half blood," She-vi crooned.
I hissed at Akil. "Bastard." I wasn't surprised. I'd completely given up being surprised when Akil screwed me over.
He stopped beside She-vi, not blinking, barely moving. His dark eyes narrowed by the smallest of margins. His lips tightened, and his shoulders bowed. I'd have to have been blind to miss the obvious disappointment. He seemed to catch himself revealing too much and shook his head, rebuilding the stoic mask. He straightened his shoulders and turned to Levi. "Contrary to what you both believe, I didn't bring Muse here for you. My last visit to your warren was somewhat... restricted." He smiled, his teeth too white, their tips sharper than normal. "You will not be taking Muse anywhere. I advise you release her from the cage before we have ourselves a disagreement."
She-vi looked at him sharply. "Do you dare deny Asmodeus his blood-spawn?" She laughed. "Oh, but you are so weak, Mammon. You think to challenge me for that?" She waggled a finger dismissively in my direction. "She is uncontrollable, virtually worthless, and infected by a degenerate demon by way of an infusion. Have you not tired of her by now? Your alliance with this half blood is foolhardy."
She-vi was either too proud or too blinded by her misconceptions about Akil to recognize his reserved posture for what it really was. I'd spent enough time with him to know his stillness was a prelude to an attack. Like a cat ready to pounce, he had her locked in his amber-fringed sights. While she ranted about how pathetic I was and how stupid he was, he studied her weaknesses. How could she not see it? I didn't get a chance to follow that trail of thought. Ice tugged at my fingers. Its greedy touch burned my skin. I plucked my hand free and looked down. Delicate threads of ice spiraled around the bars of my cage. I puffed out a breath. It plumed in front of my face. She-vi's fluidic voice echoed around the dance floor as a thin layer of ice crusted across the top of the rising water. It fractured and refroze almost as regularly as my own breathing. As it refroze, it thickened.
I shifted onto my knees and peered through the steel bars at the shadows. If Levi had unpinned my demon, I could have reached out with all my senses, but all I had was my human skillset, which in the gloom, was practically useless.
Akil and Levi were too engrossed in discussions to notice the ice crawling up the walls. Akil's smile had turned smug. He'd bowed his head, eyebrow arched. She-vi had a few scarce moments to reel in her attitude before he would clamp a hand around her throat and lay down some demon badass. A large part of me wanted to watch, but the ice continued to build. I twisted around, wincing as my clothes snagged on the razor wire. Spider webs of hoarfrost trailed from the lights. Icicles lengthened. Surely the two princes would notice.
They did. But it was too late.
I saw Stefan at the same time as Levi and Akil noticed him emerging from the shadows. Blue eyes ablaze, red-leather coat sparkling with ice, he wasn't all demon, but wasn't far from it. Levi hissed a warning. The club exploded in ice. The world warped from liquid shadows to brittle, ice-white sculptures. In a fraction of a second, a shock of frost dashed across the walls, devoured the ceilings, washed over the bar, entombing both princes in crystal.
The bars of my cage shattered. Fragments of jagged steel blasted me. I hunkered down, curling into myself, sure the ice would consume me next.
"Muse..."
I lifted my head. Stefan held out a hand, his expression bleak and eyes fierce. This was it. The moment he'd finish what he started at the George Washington statue months ago. My parasite clenched around my heart, mimicking fear. Then Stefan smiled his crooked, wise-ass smile, and I let out a relieved breath. He was still Stefan. Not yet fully demon. I closed my hand around his. The chilling touch of ice wrapped around my wrist and threaded its way up my forearm.
He tugged me effortlessly to my feet. "C'mon... They won't stay frozen for long."
Akil and Levi resembled sculptures, one the striking representation of modern man, the other a warrior-woman—both frozen in the midst of a heated discussion. And both would be baying for blood once they thawed out. Streamlets of water cleaved valleys through Akil's sculpture. I'd wager you couldn't keep a fire demon frozen for long. Stefan slowed as we passed them. I couldn't quite see the expression on his face, but I heard his snarl.
"Do you know what you've done?" I whispered, hoping to distract him. Stefan and Akil had a history. They'd spent years battling in the netherworld. Demon to demon. There were a million reasons why Akil should suffer, and only two reasons he shouldn't. I needed him to free me of my demon hitchhiker. But more alarmingly, the thought of seeing him hurt knotted my insides with fear.
Stefan glanced back at my question and grinned. "Hell, yeah." His lust-for-life smile almost had me sobbing with relief. He really was Stefan. Stefan the Enforcer. The protector. The red coat, the swagger—he was back, just as he should be.
We burst from the Lounge into the night to find Stefan's gleaming Dodge parked half on the curb. I ducked inside the car. "They won't let this go."
In the driver's seat, Stefan gunned the engine, rammed the car into gear, planted the throttle, and swung away from the club, all in the space of about two seconds. "Let them come."
Coming from anyone else, that taunt might have been all bark and no bite, but there was nothing fake about the wildness in his eyes. Shit, he wanted the princes to come for him. I fumbled with the seatbelt. The last time I'd been in a car with Stefan, his driving had nearly killed me. Granted, we were being chased by Hellhounds at the time. I wasn't sure whether I'd prefer to be chased by the hounds or Princes of Hell.
"How'd you find me?"
"You still have Ryder's cellphone. He turned the GPS app on before giving it to you." Stefan caught my scowl. "He knew you were in trouble. He cares about you."
As we sped through the nighttime streets of Boston, I struggled to tear my gaze from Stefan. The light from the streetlights broke over his face. He appeared leaner somehow, unforgiving, refined. His eyes captured the light, fractured it, and splintered the color in his irises: aquamarine, amethyst, and sapphire. His eyes mirrored the colors of the veil.
After just a few minutes of white-knuckle driving, Stefan swung the car off the main stretch, bumped it up a curb, and stomped on the brakes. He flung open the door and jumped out. Did he always park cars like he'd stolen them? He tugged open my door, snatched my hand, and tugged me out. I yelped. "Hey—"
"Come with me." Eyes bright in the pseudo-dark of the city and breathing hard, he tugged me after him.
We passed through a gate into a leafy city park and climbed a dirt path to the top of a knoll. He released my hand as we crested the top. From our vantage point, the parkland sloped down to the water's edge. A glorious view of Boston Harbor sparkled in the distance. Beyond the inky strip of water some distance away, the high-rise buildings of the financial district glistened.
"Watch." Stefan descended between avenues of trees. An icy breeze whispered against my cheeks and kissed my lips. I smiled and pulled my coat tighter around me, wincing as my dozen or so cuts protested. It was beautiful, serene, an island of calm amidst the madness of my life.
Stefan lifted his arms, fingers rigid, coat rippling behind him as he walked. A carpet of ice bloomed beneath each step. Ice-strikes scattered and sparked in every direction, flooding the ground in white. Ice climbed trees, scampered up near-naked branches, and burst from their tips like crystal flowers. Snowflakes dallied in the air, but the sky was clear. They blinked into existence and danced around their master. He turned the world to winter with his every step and didn't look back. It was utterly surreal and completely spellbinding.
I ventured down the hill, almost falling on my ass too many times to count. Around me, the ice groaned, cracked, chimed, and sighed, drowning out the distant sounds of Boston. I shivered, teeth chattering, and summoned what heat I could find to fight the worst of the cold from my flesh.
Trees bowed over, weighed down by climbing vines of ice. Rime clawed at my boots. I had to pool more heat into my feet to keep it at bay. Stefan's ice was hungry, needy, like a living thing.
He reached the bay's edge at the bottom of a frozen avenue of trees. Ice gobbled up the black harbor waters ahead of him, only stopping its feast when he turned and smiled over his shoulder. I slipped, stumbled, somehow managed to stay upright, and cursed. One of his eyebrows arched. I scowled. "Hey, fire demon, okay? Ice messes with my chi."
He turned in a flurry of red coat and jogged back to me. "Well... C'mon... What do you think? You're impressed?"
"It's er..." I skipped my gaze over the avenue of ice while snowflakes landed on my lashes. Vapor rolled skyward, swirling and writhing higher to meet the flakes falling from a cloudless sky. The bitterly cold air tasted like minerals. He'd turned the park into a picture postcard of the netherworld. "It's stunning." _Like him._
He gripped my shoulders, startling a tiny gasp from my lips. "You told me what it was like. You said you'd never give up your demon, that she's a part of who you are. I thought you were nuts."
His enthusiasm fixed a genuine smile on my face. "Gee, thanks."
"But I get it now." His grip tightened. "The Institute tethered me."
"Listen, about that... I know some things about Adam you should probably hear."
He cut me off with a sharp glare. "Not now." He pressed a cool finger to my lips. "Later." He stilled. Doubt, or maybe hesitation, crossed his face. Before I could discern which, he clamped my face in both hands. His boreal eyes shone, and for a breathless moment, I thought he'd kiss me. He didn't. He closed his eyes and eased his ethereal touch into me.
My demon snapped to attention. She flung herself into my flesh, staggering me with enough wanton energy that for the briefest of moments, I utterly lost my mind. Like lightning in the dark, power jolted through me. I sucked in a sharp gasp. Fire broke over my skin. My element surged, knocking my humanity aside in its rush to meet this new, overflowing source of energy.
I tugged back and severed Stefan's contact. The power he'd shared shut down, leaving me breathless and disoriented. "Holy hell, Stefan." I licked my lips. "You're like..." _Like the sensation I get reaching through the veil and tapping into the great reservoir of energy in the netherworld._ He felt like raw chaos. My demon wanted to roll over at his feet. It was unnerving and deeply erotic. Raw chaos standing within reach, all wrapped up in Stefan, ready to be undone. Words failed me. It was too surreal, too impossible. Too demon.
I backed up, not trusting my own thoughts or his. "This is all... great, but Levi and Akil will be looking for us, and you've just painted the park in ice, so y'know, we're not exactly hard to find. I need to get to Dawn. I promised I would keep her safe. She's a half blood girl, and she..." He stood frozen still. "She..."
His wings snapped into existence, elaborate flourishes of ice arching either side of him. They were huge and damned distracting, especially since they sang like distant bells. I gawked. I couldn't help it. His wings always rendered me speechless.
"Call your demon, Muse." His voice dropped to demon-tones, rich, dark, and dangerous. My own darker-half did an odd little trill inside my head, further distracting me. I was having a hard time remembering my own name, let alone the fact we were meant to be running away from immortal bad guys.
I suspected if I did summon my demon, I'd be seeing more of Stefan than his fantastical wings, and I wouldn't have a hope in hell's chance of controlling my dark half. "I'm not sure that's a good idea." Look at me, the sensible one. What was the world coming to?
His lips quirked. He flicked his wrist and produced a blade of ice.
I hiked an eyebrow up. "So what is this? Suddenly you're getting all pissy again. Why? Because I won't play?"
He moved as quickly as Akil ever had and hooked an arm around my waist, pulling me against him. "We've got some crap to work out," he whispered against my cheek.
I turned my head. His lips brushed mine. A rising tide of need warmed me through. It would be easy to let go, to forget it all and throw my arms around his neck and drag him into a kiss. Dawn was out there. She needed me. Whatever this was, it wasn't helping anyone. I knew one way of ending our dance. Mention of a single name should do it. "Like you not believing me when I tell you I didn't go to the netherworld to save Akil? I'm not involved with him."
His grip tightened. "Lies, I can smell him on you."
"I've never lied to you." A growl underscored my words just enough to add a threat. "And I never will. I know what lies can do. Believe me or don't. It's the truth. The same as when I tell you I didn't mean to hurt you, or Nica."
He pressed the tip of the ice-sword under my chin. If it wasn't for the glimmer of humor sparkling in his eyes, I might have readied myself for the attack. "Words are cheap."
"You sound like Akil."
He bristled and pulled back, but the smile stayed. "Maybe I have reason to. Summon your demon, Muse. We're alone." He backed away. "You promised me a wild ride when you got your demon back, or was that a lie as well?"
I narrowed my eyes at him. "Back at the lake house, when you lost control, would you have hurt me?"
"I didn't mean for that to happen. I was... I just want to be free." He held out a hand. "Join me, just for a little while? C'mon, where's the fun in being different if we can't enjoy it?"
When he asked like that, I couldn't very well say no. "Oh, for hell's sake." I readied my stance in the snow and summoned my demon. She broke over me, enveloped my humanity in coal-black armored skin, and peered through my eyes at the ice-king before us. He'd shrugged off his humanity, clothes and all. Holy hell. He was nothing short of an angel, a deadly, razor-edged, diamond-eyed angel framed by wings of shattered crystal. Looking upon his true form, I had to wonder if he was ever meant to be a part of this world. He was clearly netherworldly, right down to the intricate fractals swirling beneath his skin. A curious chattering rattled through my teeth, the sound purely demon and one I had no hope of curbing.
"You want to fight it out? Fine." I didn't have a hope of beating him. He had more power rolling off of him than Akil. I unfurled my ragged wing and gave it a flick. Frankly, next to him, I looked like something that ought to be put out to pasture and shot. "Winner gets—" I flung a blast of fire at his face, turned on my heel, and darted toward the nearest frozen tree.
He snapped into existence in front of me, moving too fast for my eyes to track. I gasped, jerked back, and skidded, somehow managing to stay upright, but it wasn't pretty. He hunkered down and rolled the ice-sword in his hand. The glint of mischief in his eyes tugged a broad fang-filled smile across my lips. Backing up, I called to the slumbering city heat. It rushed into my body, burning away my doubts. Laughter pealed from my lips. I spun heat and energy around my arm and threw it over my skin, igniting a shield of fire. "Take your best shot, Frosty." Spreading my arms, I reveled in the embrace of my element.
He straightened, and took a few strides toward me. A short sharp jab stung me in the right butt cheek. I jumped. "Ow!" Ice daggers hovered in the air around me. He had all the fancy tricks. I flung my arms out, releasing a blast of heat in all directions, instantly melting his brittle daggers.
He summoned more with a chuckle. I swatted those like flies. We danced, his ice and my fire. After a while, I forgot the little girl I was meant to be saving and the princes, who by then had to be hunting us. I didn't care that cracking ice and raging fire would attract unwanted attention. I was lost in the freedom of the demon, riding by her side and happy to neglect reality while I played. Our game of fire and ice was a rollercoaster I had no control over. At some point, the moments blurred into a stream of motion and sensation. As demon, I knew only the thrill of the hunt, the chase, the capture, the wild and breathless anticipation. It was glorious, but in the clash of chaos, the threads of my tentative control unraveled.
We were laughing, teasing, snapping and growling like wolves at play: deadly, yet tamed. Ice rained, and flames spiraled around us. I was demon and free.
A wave of ice splinters rolled across the ground. My snarl quickly turned to laughter. He lunged. So did I. We clashed in a shower of opposing elemental sparks. "Freedom..." He panted. "Feels good, doesn't it."
I surged my element into him, seeking out the well of power at his center. He hissed as though burned and faltered, falling away from me. I had a few tricks up my sleeves too. I strode closer. He backed away. I pushed in deeper, seeking, reaching, entwining. He snarled and slammed into me. We tumbled to the ground in a panting, sizzling tangle, sprawled like spent lovers.
"What did you just do?" he asked, dragging his gaze along my thigh, down the concave of my waist, and over the rise of my breasts.
I felt his appraisal like a cool breeze across my hot demon skin, and an electric sizzle of power strummed through me, fluttering desire low in my abdomen. If he didn't notice the physical change in my demon body, he'd sense it. "Maybe I'll teach you sometime if you teach me how to control fire like you control ice."
He shook crushed ice from his hair. "I'd like to." His eyes turned serious as he reached a hand up and placed it carefully over my heart. His all-demon cool blue touch fizzled against my fire-veined skin. He sucked in a breath and toned down his own power, then gently eased his element through me, into the darkness slumbering at my core. I knew what he sensed as soon as I saw his handsome face cut into a scowl: Damien, my unwanted hitchhiker.
Stefan shrugged his demon off in one graceful roll of his shoulders. He was just Stefan again. Fully clothed, virtually normal but for the cerulean glare. I dropped my head back on the frozen earth and closed my eyes. My demon sauntered off to the back of my mind, dumping me back into human flesh. My clothes scratched against my skin, heavy and restricting. A small part of me pined for the freedom again. _Just let go..._
"It's the soul-lock you feel. When I stabbed and burned Damien, he didn't die. He made himself a new home in me." The words made it sound so simple. Nightmarish memories roiled.
Eyes still closed, I concentrated on the warmth of Stefan's hand resting on my chest. "I... didn't realize," he said. "That day was..." He didn't need to say it. I knew exactly what that day had been. I relived it constantly in my dreams.
"He lives in me. Every breath, every heartbeat, I share with him." I hadn't told anyone. Damien was my dirty secret. When I finally opened my eyes, I found Stefan watching me with a curious muddle of emotions on his face. Confusion, definitely. There was sympathy too.
"That's why you need Akil," he said softly.
"Yeah, Akil can fix me. But it's not that simple. I don't trust him. Part of me thinks he would take Damien's place. He's never said he wouldn't." The words began to flow easier now. It felt good to finally breath life into the fears I'd harbored for so long, as though sharing the horror relieved some of the burden. Is this what it felt like to have a friend? Someone I could share my secrets with? Someone who would listen without demanding something of me in return?
"You're right. Akil will take Damien's place if you let him. He's all demon. He wants you. He wouldn't let a little thing like free will get in his way." Stefan put on an alarmingly similar portrayal of Akil, right down to the netherworldly accent, " _It's too late, Muse. It is done, you must get over it_."
I laughed and punched Stefan on the shoulder. This really wasn't funny, and yet, with Stefan, I almost felt as though it could be. He chuckled dryly, but his light laughter faded as his gaze fell to his hand resting over my heart. An unhurried quiet fell over us. His hand rose and fell with the rhythm of my breathing, and I recalled when we'd last lain that close, lost in one another, as though nothing could ever tear us apart. How wrong I'd been.
"Muse..." His breath hitched. He looked away. "Those things I said that day at the statue—I was out of line." When he faced me, renewed fierceness burned in his eyes. "I was afraid, and angry... I wasn't thinking clearly. I still can't think straight. Since I've been back, I can't focus... The demon rides me the whole time. I..." He licked his lips and closed his eyes, exhaling a weary sigh.
The insults he'd hurled at me by the Washington statue had stung, not least because his words had cut close to the truth. I closed my hand around his. When he opened his eyes, the sadness on his face struck me like a physical blow. I understood. I always had. Fear clamped around my heart. Fear for him. For us. I tightened my grip on his hand. It would be okay. Together, we were stronger. I opened my mouth to tell him as much when a floodlight washed over us from above. A helicopter swooped low. The downdraft whipped up snow and ice, virtually blinding me. Stefan hissed and tore his hand from mine.
"DEMONS. STAY WHERE YOU ARE. YOU WILL NOT BE HARMED."
His wings flared wide. He rose to his feet with leisurely grace and walked into the helicopter's downdraft. The wind tore at his coat, but he stood proud, wings held high against the maelstrom. A dozen red laser spots danced on his coat. Black-clad Enforcers spilled from the trees. Stefan flashed me a smile, but it wasn't the light-hearted grin I loved. That smile was hungry and filled with sharp teeth. The sight of it spilled liquid ice into my veins. Before I could draw breath to warn the Enforcers, he dropped into a crouch, spread his wings, and snarled.
## 19
# Chapter Nineteen
"He killed seven Enforcers." Adam sat behind his desk, hands steepled in front of him. Jenna and Ryder flanked me. "Impaled two."
Squirming in my seat, I averted my eyes, focusing over Adam's shoulder. "You were going to kill him."
"That was not our intention." Adam's broad shoulders slouched. His bloodshot eyes spoke of the same level of weariness I felt in my bones. "We hoped to capture him."
"All the more reason for him to lash out."
"He murdered his colleagues in cold blood."
I didn't need reminding. Not only had I been there to witness Stefan's actions first hand, I'd been at the debriefing where a dozen monitors replayed the fantastical footage of a fire and ice demon throwing down in a winter wonderland in the center of Boston. I'd squirmed in my seat then too. Witnessing my demon half deliver the elaborate display of power in a room filled with Enforcers stamped a fat target on the back of my head. I'd felt their gazes burn into me. Adam had glowered at me—not the footage—during the entire meeting.
A kid had filmed the entire showdown on his cellphone. If that wasn't enough, I now sat in front of Adam's desk, hemmed in by two people who quite possibly had me on their hit lists. Jenna and Ryder hadn't said a word. Ryder wouldn't meet my eyes.
"What were you and Stefan discussing prior to my squad's arrival?" Adam asked.
I blinked and dragged my thoughts back into the room. He meant the part on the home movie where Stefan lay over me, hand on my chest. It had looked intimate on screen, and it was, but not in the way they'd all assumed.
Adam sat very still, radiating calm authority. The only time I'd seen him rattled was when I'd brought up the subject of Stefan's mother. He looked at me with those fatherly eyes, and I almost wished I could tell him all about my conversation with Stefan and how my sick bastard of an owner coiled around my heart even now. Adam looked like he'd listen—right before he'd shoot me up with PC34 and toss me in a cell.
"Sex," I said, hoping to disarm him.
His expression tightened. "After the vast amounts of energies the two of you had summoned, you wouldn't be talking about sex, you'd be having it."
My plan backfired. I cringed. A flush of heat warmed my face.
Adam removed his glasses. He pinched the bridge of his nose and exhaled a deep sigh. "I owe you an apology."
That got my attention. "Huh?"
He slid his glasses back on and leaned back in his seat. "You have more control than I've given you credit for. I underestimated you, Muse."
_Yah think?_ I frowned. He had to be going somewhere with this uncharacteristic praise.
"The fact is," he continued, "half bloods are all about one thing: control. Stefan has lost all control. He is no longer viable and must be destroyed."
"What?" I glanced at Ryder and Jenna. Both stared through stoic masks straight at Adam. They knew this was coming, and they didn't care. "You don't mean that. He thought you were going to hurt us. He didn't mean to kill anyone. He just reacted." Nobody in this room was listening to me. "He's your son, Adam. Doesn't that mean anything to you?"
"I lost Stefan when he stepped through the veil eight months ago."
A snarl curled my lip. "You lost Stefan the moment you decided to turn him into a weapon. How old was he when you abandoned him to the Institute scientists? Did he even know a normal life before then? Did he ever have a father who loved him?"
Ryder flicked his keen eyes to me. _Yeah, that's right. I know all about Stefan's past and more._
"That monster you toyed with in the park, Muse, wasn't my son," Adam replied, as cool as ice.
"He was the real Stefan, you ignorant bastard, and if you cared at all you'd realize that."
"Stefan wouldn't have killed innocent men and women doing their jobs, trying to protect the people of Boston." Adam cut his gaze to Ryder. "Ryder? Jenna? Would you disagree?"
"No," Ryder drawled. "He nearly killed me escaping the lake house."
This was a witch-hunt. "But he didn't kill you. He could have, and he didn't. He came for me."
Ryder twisted in the seat, jaw set and stare rigid. "Muse, that thing in the park ain't Stefan no more." His voice quivered with unreserved anger. "He's unstable. He killed good people back there."
"I'd have done the same, backed into a corner like that."
Ryder gave me a dry look. "No. You wouldn't. And you know it."
Maybe Ryder was right. Maybe I wouldn't have. What did that mean? Stefan had killed the Enforcers in an instant. After he'd smiled at me, a wave of fragmented ice burst from him, instantly freezing anything it came into contact with. The helicopter had pitched and plummeted into the harbor. The sounds of twisted metal were only matched by screams of agony from the fallen. His ice hadn't touched me. He knew exactly who to target. I'd watched, numbed by shock, and then he'd vanished, leaving behind a landscape of blood-stained snow.
They believed him out of control. What they didn't know—what nobody but me knew—was that he'd never been more in control. He knew exactly what he was doing. He was free.
I lifted my gaze to Adam and narrowed my sights on him. "Subject Alpha is no longer viable, huh? Put him down like a lab-rat, and then what? Buy another half blood for Operation Typhon? They're worthless anyway, right? Or maybe summon Yukki-Onna and screw her again. Keep her locked up here while she gives birth to your demon spawn."
Adam ground his teeth behind tight lips. He swallowed and spoke his next words with precise care. "Ryder, Jenna, you're dismissed. I want your written conclusion on my desk in an hour. Ryder, you will lead the team tasked with Stefan's termination."
The two Enforcers stood and left the office. Neither acknowledged me. I leaned forward in my chair. "Don't want them to know about your dirty little secret?" Adam wasn't squeaky clean in all of this. I wasn't going to let him sit back in that damn chair and order the death of his son like he was a saint, doing the heroic thing.
"Operation Typhon was a failed experiment. It was never approved."
I smiled. "Mm, so what happened? You went ahead anyway? Unofficial-like?"
He leaned in his chair and rested a hand on his desk, rubbing his fingers together as he considered his words carefully. "Who gave you this information?" He asked like he was enquiring how my day went, as though the answer was irrelevant. I knew that tone. It meant I was cutting close to the truth.
"There are two others like Stefan and me..."
Adam flinched. A smiled slashed across his lips. "What else do you know?"
"You named me Subject Beta while I was still an infant. You tried to buy me from some nameless demon, but my brother stopped the deal. You told me my employment was inevitable. You've been watching me, monitoring me, even when I was with Akil. I saw the pictures. You must have been waiting for Akil to tire of me or for me to get away from him before you made your move." I paused, giving him time to deny it. He looked back at me, reserved, calculating. I was right. Hate burned like bile in my throat. "I had five years on my own. What stopped you from recruiting me then?"
"You'd learned a great deal from Akil. Things I couldn't teach you, especially as Stefan was becoming difficult. I couldn't trust my son to teach you what was required. You weren't ready. We'd seen no evidence of your power. At that time, Stefan was our only benchmark. We didn't know if you possessed the same level of power as he did. So I continued to monitor you closely."
My heart fluttered. Everything Akil had told me was true. "How closely?"
"You were untested, raw and naive, and still very much under Akil's protection." He paused, steeled his gaze, and said, "I sent in a handler." He leaned forward. "The man you knew as Sam Harwood worked for me. His real name was Jason Bywater. Akil learned of my plans and killed him in front of Stefan as a warning to me, I suppose. Stefan knew nothing about Sam's real motives, but he reported the incident to me. I had hoped Sam—Jason would stay in place for several years. He was a good operative..."
As Adam's voice trailed off, the bottom fell out of my world. I was sitting there, bolstered by the facts, all geared up for a verbal fight, ready to pry the truth out of Adam, but he'd just slapped me down and wrenched my happy little five-year folly out from under me. Sam, my friend, a big part of the only _normal_ life I'd ever had, was a lie. Sam wasn't even called Sam. The guy I'd shared beers with, movie nights, weekends away—his whispered promises, gentle touches, and easy laughs were all lies. A jagged shard of emotional pain sliced through me. I had to get out of Adam's office before I killed him. My fingers itched. My demon snarled. I slowly, carefully, rose to my feet. Violent tremors twitched through me. The urge to burn Adam from the inside out very nearly tore my control out from under me. I could taste ashes on my tongue and feel the burn in my fingers. "You were right..." I growled, sounding more demon with each breath. "Be grateful I have control, Adam, because at this very moment, my instincts are screaming at me to boil your insides."
He didn't move, just sat behind his desk and observed with clinical detachment. "There's a war coming, Muse. My actions were justifiable in the grander scheme of things. You need to decide whose side you're on."
"I'd rather side with the demons than you." In that moment, I envied Stefan his freedom to kill.
I strode from Adam's office. Rage boiled the blood in my veins. The Institute buzzed around me: hallways filled with people devoted to protecting humans from the demon incursion. Phones shrilled too loudly. Snippets of conversations drifted by as I broke into a jog. I had to get away. I barged by several people and heard them hiss in pain. Elemental heat rolled off me. I had to get out.
"Muse." Ryder blocked my path.
I snapped my head up and tried to veer around him. He caught my arm and hissed, releasing me with a flurry of curses.
"Don't come near me." My head swirled. The demon roared. I staggered and ran.
"Muse! I have to stop him. Don't get in my way."
I burst from the warehouse building, trailing fire in my wake and almost cried with relief when my demon stepped into my skin. The sweet release she offered robbed me of the flood of emotion breaking over my mental barricade. Sprinting, hard and fast, I ran away from the Institute and their treachery. My five years of freedom was just another cage. An illusion. When would it be over? When did I get the chance to live my life as I wanted? When could I be free of demons and people like Adam, and Levi, and Val, my father, Damien, the netherworld, every-fucking-thing that wanted to kill me or screw me over? I just wanted the truth. Was that too much to ask? Did I not deserve it?
Sobs bubbled from my lips. I laughed a vicious peal of laughter. Dripping liquid fire, I ran harder. The dark inside bloomed, wrapping oil-slick tendrils of power through muscle and flesh, tightening, consuming, feeding. I staggered and fell against a car. Flames spilled across the hood and arced over the roof. The fire hungered, and seeking freedom, it roared higher. I danced back and admired the firelight licking the air. A cruel smile twisted my lips. The fire was free. I could taste its joy and sense its unfettered lust. It was free, and it called to me to join it. _Burn it. Burn them all._ A shout shattered my reverie. Backing up, I swept my heated gaze over the crowd surrounding me. I read the fear and disgust in their animated expressions and agitated states. But I couldn't hear them over the roar of the flames. Half-crazed whispers filtered through the raging inferno. _Burn them. Their hatred will know terror_. A flick of my wrist, and the fire would respond. They wouldn't escape. I could kill them all in an instant.
"Get away!" I snarled. Fire dripped from my fingertips.
I reached for the veil, unable to stop myself. I ached to feel the power coursing through my veins. I was a coward, seeking peace in oblivion, and I'd kill anyone who attempted to stop me.
I closed my eyes and dropped to my knees. The road bubbled around me. Reason told me to get up and get out of there. Enforcers would be here soon, and I'd kill them. I couldn't stop myself. I didn't want to. Suffocating madness embraced me. Laughter boiled inside my head and bubbled from my lips. Yes, this was what I needed. Why fight the inevitable? It was time to let go.
I fell forward onto my hands. Rivulets of flame spilled from my fingers, seeking fuel. Camera flashes jabbed at my vision like bee stings. Whispers, curses. A glass bottle shattered beside me. "Demon!" I snapped my head up, locked the vocal stranger in my sights, and lunged.
Akil scooped me up and flung me over his shoulder in a fireman's carry. A guttural roar tore from my lips. I let loose a flurry of talons and teeth, biting, kicking, snarling. My wing thrashed. The air tightened suddenly, squeezing from my lungs. Static energy washed over me, and in the next breath we were in the open-plan expanse of Akil's Battery Wharf apartment where I'd seen Carol-Anne's body.
He dropped me on my feet. A barrage of words spilled from him: something about control. I locked my hand into a fist, drew my arm back, and punched him in the face hard enough to feel bone crack beneath my knuckles. He grunted and reeled back. I threw a ribbon of fire at him. He spat blood and snarled while my obedient flames coiled around his waist and then disappeared as though absorbed by his clothes. Fire wouldn't work on him, but I was beyond rational thought. I lunged, hit him square in the chest, and slammed him against the granite bar. He huffed a foreign curse. I was on him. I sank my teeth into his throat and tasted hot, spiced blood. All my thoughts focused into a pinpoint of all-consuming rage. I was a machine with a single purpose.
Akil thrust his hands out and threw me back. I bounced off the couch, tumbled to the floor, sprang off my feet and dashed for him again. I wanted more blood, more destruction. I'd tear his throat out. I'd burn this apartment, this building, and everyone in it.
He backhanded me, snapping my head back. Pain burned across my cheek and jaw. In the momentary distraction, I lost my footing and fell. He stalked forward, eyes ablaze, hands fisted at his sides. I kicked his leg out from under him. He went down onto a knee, his muscles already tensing to spring forward. _Bring it. You lying bastard._
Twisting like an eel, I leveraged my wing under me and pushed off the floor. Akil pulled up short. Had I been thinking, I'd have realized he was capable of more than this sparring session, but I was too far gone to care. I swung a fist, he ducked, twisted, and struck viper fast, punching me in the gut. My breath whooshed out of me, but as I buckled, I saw my opportunity and hooked my arm around his neck, yanking him back against my chest. My teeth found his shoulder and pierced his hot flesh once more. He roared and bucked beneath me. Then a vise-like hand caught my leg and tugged. I clung on like a pit bull, fangs in too deep to be wrenched free. He danced back and rammed me into a wall. I snarled against his hot, spicy flesh, ground my teeth into the wound, and swallowed the gush of rich blood.
The stakes changed when he brought Mammon to the party. One minute I was clamped onto Akil's back, the next I sat lodged between Mammon's enormous wings. His blood boiled in my mouth, and finally, my humanity reasserted itself. I pried my teeth from his leathery skin, suddenly and acutely aware I was clinging to the back of a Prince of Hell.
He reached a muscular arm back, clamped a huge hand around my upper arm and shoulder, plucked me off him as easily as removing a tick, and tossed me aside. I hit the floor hard and rolled before twisting onto my front, breathless, bruised, snarling and snapping my rage.
Mammon, a mountain of rippling black muscle and sizzling embers, eyed me with delight. He shook his wings out and rolled his massive shoulders. Hot ash rose from the fire licking across his skin. When he took a step toward me, a low warning growl rumbled through my entire body. Ash rained from my skin. Red-hot hunger slithered across his otherwise black eyes. I shivered with a peculiar mix of adrenalin, desire, and unease. _What the fuck was I doing?_ This was real. I was here. In Akil's apartment. Facing off with a Prince of Hell. Mammon's infinitely dark eyes read me, waiting, almost baiting me with the promise of violence. I straightened up to my full height—he still loomed over me—and shook my demon off. She retreated with a wicked slice of laughter that pulled my human lips into a smile.
Mammon tilted his head. His tongue licked across black onyx lips, and he dissolved away, leaving a bedraggled and abused Akil behind. He reached behind his left shoulder and hissed a curse. "You almost tore my shoulder open." His voice was muffled, the words not quite clear. Crimson blood streamed down his neck and over his shoulder, soaking into his shirt, but the wounds had already closed. He pressed the ball of his thumb to his nose and frowned when his hand came away bloody. "And broke my nose." That'd be the reason he sounded as though he had the flu. I tasted his salty blood on my lips and suppressed a groan. My demon stalked my thoughts, asserting her desires over mine. To bloody another demon was a power trip. To bloody a prince had my demon panting with lust. Holy hell, I'd virtually propositioned Mammon, the outcome of which didn't bear thinking about.
I wiped my mouth, alarmed by the blood on my lips. "Why didn't you tell me Sam was an Institute spy?" My voice had turned cold and flat. Good. I didn't have it in me to outthink or outmaneuver Akil. Let him believe I didn't care. Splatters of blood speckled my clothes. The hot-copper smell filled my head and flipped my stomach over. I was a mess, through to the bone.
Akil gave me a bored look, as though disappointed I'd blame him. "It wasn't significant." He shrugged his jacket off and laid it over the kitchen counter. Peeling off his shirt, he balled it up and tossed it in the trash. I'd torn several deep gashes in his chest, taken a chunk out of his neck, and torn open his shoulder. Yeah, I'd completely lost it. But I'd come back. That was a good thing, right?
"What do you mean it wasn't significant? Don't pull that shit, Akil. I'm barely under control here."
"I'd noticed." He gripped the bridge of his nose and, with an audible grinding of bone, set it right with barely a twitch.
I swept a hand back to tuck my hair behind my ear. My fingers shook. Now that I'd noticed the tremors, my entire body decided to join in. I clamped my arms around me, trying to crush the quivers away. "You knew all along. Was that why you killed him?"
Akil's glances revealed an uncharacteristic concern as he rinsed a towel beneath a tap and wiped the blood from his face and chest. "His betrayal would make a convenient excuse. Would you think differently of me had his deceit been my singular motive?"
I blinked. _Yes, I would_ , I thought but didn't reply. The man Akil murdered had been a stranger to me, but Akil had still killed a man. Just because Sam worked for Adam didn't make it right.
"It matters not why I killed him, just that I did. If you're looking to absolve me, don't waste your time. I took pleasure in it. He was a pretender in your bed. My only mistake was not killing him sooner. The Institute tests my patience. They wanted you. They still do. Their insolence—"
"Is astounding. Yes, I know. You told me that once." I backed up and pressed a hot hand against my forehead. "Akil... You should have said."
"And ruin the only taste of freedom you've experienced?"
I dropped onto the edge of the couch, mouth open. Had he just told me he'd lied to protect my illusion of freedom? He looked back at me, eyes so damn understanding that it hurt to meet his gaze. I laced my hands in my hair and slumped forward. The world was wrong. Everything I relied on, the truths I'd come to cherish, were unraveling around me.
"The Institute has a hit out on Stefan," I grumbled. "He's gone wild. Dawn is probably terrified and alone somewhere in the netherworld. Levi wants to offer me up to Asmodeus on a platter. I'm losing control." I hissed in a sharp breath. "I was going to kill that man... on the street. It was all I could think of. I wouldn't have stopped at him either." I lifted my head and found Akil standing in front of me, holding two glasses of wine. Smears and dribbles of blood marred an otherwise perfect chest. Cuts crisscrossed his arms. He had blood in his hair and a smear across his cheek.
"I know."
"You stopped me."
"Yes." He hitched his bare bloody shoulder. "And experienced your wrath first hand. Which was... uncomfortably arousing."
I groaned and hid my face in my hands again. "Damien's soul-lock is getting stronger. I think he helped loosen the reins on my demon."
"Indeed, as I warned you he would." The couch shifted as he settled beside me.
I sat back, took the wine from Akil, and gulped it down without stopping for breath, then handed the empty glass back to him. His lips twitched. "Why me?"
He roamed his heated gaze across my bloody and torn clothing. "If you ceased battling your other half and embraced the truth of what you are, you'd have your answer."
"I'm just a wretched half-blood girl caught in a storm."
Akil tasted his wine and smiled. "Muse, you are the storm."
## 20
# Chapter Twenty
I woke in Akil's bed, thankfully alone. He'd found some clothes I must have left behind years ago —boot-cut, low-rise jeans and loose V-neck sweater—and left them out for me. The clothes were a tight fit. I'd gained some muscle in recent months. The fact he'd kept the clothes at all disturbed me on a level I didn't dare think about. I had a long list of things that were best not to dwell on. Losing control, failing Dawn, eyeing up Mammon for violent and bloody demon sex, breaking Akil's nose, and how I wanted to rip Adam's spine from his flesh and beat him with it.
I ditched my trashed clothes from the night before—retrieving Ryder's cellphone and my sidearm—showered, helped myself to breakfast, and found a note left on the kitchen counter. Akil had handwriting you'd expect to find in an antique tome, consisting of elaborate flourishes that seemingly flowed into one another with no room to breathe. I eventually deciphered his archaic patterns. He'd written _"be discreet"_ and _"keep control."_ Beside the note, he'd folded _The Boston Globe_ newspaper. A surprisingly sharp image of Stefan and me flexing our demon muscles adorned the front page. The headline read: Demons Slaughter Enforcers.
I headed home, my head filled with uncertainties and the residue of emotional fallout. Levi would come for me. Of that I was certain. My father unfortunately hadn't forgotten about me. Plus, I'd broken out of Levi's cage. He wasn't finished with me either. Dawn was, in all likelihood, with my psycho-brother, and I didn't know enough about Val to go after her. Even if I could find him, I wasn't strong enough to avoid his sex-on-legs mojo. If he tried that shit on Dawn, I'd go nuclear on his immortal ass. I could drain the fire from him. And technically, he wasn't permitted to kill me. Still, I needed a plan before I went charging after Val.
What do you do when you're swimming with sharks? You make sure you're the biggest, most badass thing in the pool. I was already a primetime topic—even made the front page. A demon celebrity for all the wrong reasons. Could I use that? Should I encourage it?
Arriving outside my apartment, I noticed the door was ajar and instinctively reached for my gun. Val? No, he couldn't enter without an invitation. Akil? He wouldn't bother with traditional means of entry, not now he had his own personal invite. He was more likely to appear while I was in the shower.
I gave the door a shove. It creaked open and revealed Stefan draped over my couch, boots propped up on the coffee table, and Jonesy curled in his lap, belly up, receiving tickles under the chin.
An unexpected surge of joy tugged a smile across my face. Then I remembered the circumstances surrounding our last meeting, and my smile died. I slammed the door closed and holstered my weapon. "Are you insane? They'll be watching my place."
"Well." He sighed and lifted the pliable Jonesy off his lap. "I wasn't about to interrupt your sleep-over at Akil's."
But he didn't look angry, more mildly amused. "How did you–?"
"You've still got Ryder's cell. Plus, someone snapped you going feral in the street and sent it to the news stations. Akil's image was blurred, but I don't know any other guys in suits who could throw a flaming demon over their shoulders and disappear into thin air."
I crossed my arms and tried to read him. His smile held a tight line, and his focus was off, as though he looked through me, not at me. I shifted my element and tried to gauge his power, but he was shut down, his demon behind lock and key.
"Akil kidnapped me."
"He stopped you from hurting anyone." He looked away, seeming to admire the framed symbols on my walls. "How many Enforcers did I kill?" His voice had leveled, like still waters with something dark lurking underneath.
My throat tightened. "Seven," I whispered.
A grimace tugged on his features, and at the same time, a brief lick of his element danced over my skin. "Something is very wrong with me. I can't stop the demon—"
A knock at my door startled a yelp out of me. Stefan's irises sharpened to an azure blue.
I held up a hand. "The Institute doesn't knock." I opened the door.
Lacy waved a newspaper in my face. "Holy crap, Charlotte Henderson, is this you? Are you a demon?" She grinned. "You must tell me everything. Do you have wings? I can't make it out in the pictures. They're all fuzzy, but it sure looks like you, just kinda... y'know, if you were burned to a crisp."
"Lacy, it's sort of a bad time."
She flicked her hair out of her face and gave me a knowing smile. "Is Akil here?"
I winced. "No."
"Aww, c'mon... You're, like, famous. You have demon friends. Did you kill those Enforcers?"
"No," I gasped.
"That's cool, I knew you wouldn't. When I read it, I thought, no way, that's not Charlie... Hey, who's the ice dude? There are more pictures online. They've enhanced them an' everything. Are his wings really made of ice? You two looked, kinda... y'know, cozy."
"Lacy, I'll answer all your questions. Okay? Just don't mention you know me, please. I mean it. I like it here. I don't want to have to move again."
"'Course, you're my friend, Charlie, and if I get to meet some hot guys, bonus. Are you an' Akil, like, an item? I was thinkin' maybe, if y'know, if it's cool, you could give him my number?"
Stefan opened the door wider, rested his forearm on the frame, and gave Lacy the 'you-have-to-be-kidding-me' stare. "It's Lacy, right?"
She blinked up at Stefan, mouth open, eyes wide. "Uh-huh."
"Akil likes to call hell his home. Maybe you've heard of Hell? Fire and brimstone? Eternal damnation? There are demons there who will tear off your skin and wear it as an apron. Akil is a Prince of Hell, so what do you think he'd like to do with your pretty skin, Lacy?"
She closed her mouth. "Equal rights for demons, dude. Don't discriminate." She showed Stefan her hand, palm out. He growled, low and threatening.
"Okay, you two... Lacy, please give me a few days for all this to settle down and for me to work out some kinks... and I'll happily teach you all about the dos and don'ts of demons. I highly recommend you don't piss them off."
"Sure." She grinned, glared at Stefan, and then stalked away.
Stefan slammed the door and loomed in front of me. "Do not tell me you invited Akil into your life again?" His eyes flashed electric blue.
"Do you think I'm an idiot?"
"She asked if he was here... _inside your apartment_."
_Now he gets pissy? Now he cares?_ I scowled back at him. "He coerced his way in via my landlord, alright? You know he's in denial about... everything."
He straightened and drew back. "You smell like him. It drives me crazy."
"I smell like him?" I cringed. "Do you know how weird that sounds?"
"I can't help it. My demon..." He growled again and turned his back on me. A coil of power I hadn't realized had tangled itself around me unraveled as he strode away. "My demon's messed up, Muse. He's in my head the whole time... I can't shut him out like I used to. The things he wants... I want..."
"I thought you said you wanted the freedom," I said quietly.
He stopped and bumped a clenched fist on the back of the couch. "I do, and that's the problem. The Enforcers, that was just the start."
"That was an accident."
He turned. "Was it? You seem so certain. I wasn't thinking. I just... acted. It wasn't until later, when I came down off the power-trip, that I knew I'd killed them." He scowled. His shoulders slumped. "And I didn't care."
"Don't say that."
"It's the truth. I used to know where the demon ended and I began, but the lines are blurred. I'm starting to want what he wants."
Everything he said felt familiar in ways I didn't want to admit to. "What does he want, exactly?"
His gaze danced about the room, as though seeking the right words, and he sucked in air between his teeth. The resulting hiss sounded almost demon. "Revenge... on the Institute." He closed his eyes. "My father, Akil, the netherworld. Everything." He twitched his head and dropped his chin. When he opened his eyes, the slippery touch of raw elemental energy crawled over me. "I'm not decided about you yet. We–I swing from revenge, to... something else."
"Maybe you just need to vent." My words were as pathetic as they sounded. No amount of venting—like we had at the park—was going to help.
His smile quirked. "Vent? I'm terrified of what I'm capable of, Muse. Right now, standing here, I know there's ice in the atmosphere miles above us. I can draw the cold down and freeze the air in your lungs before you could summon your demon to stop me. These symbols don't do anything. I don't need to draw from the veil. It's in here." He curled a hand over his chest. "You felt it."
I had when he'd driven his power into me in place of a kiss. My demon purred her pleasure. I mentally slapped her back. "Okay. Maybe I can help you. We could go to the mountains. Get away from everyone. You could throw your power around all you wanted without hurting anyone."
"I'd hurt you." His steeled gaze confirmed it. "I'm afraid of what we could do together. We're too dangerous. It's worse when I'm around you. My demon gets... distracted."
I knew that feeling. My demon was doing the same, purring and pacing, strutting around my head like a cat in heat. "I know you won't hurt me."
"I know one thing, I can't be around people. There's no way I can stay in Boston. Enforcers will be watching the lake house. There's only one place I can go where I can't hurt anyone."
My breath hitched. "The netherworld." He gritted his teeth and didn't deny it. "If you go back there, your demon will win." _I'll lose you again._ He couldn't go. I'd already experienced a world without Stefan, and it was an insipid place. He reminded me what it was like to be alive. He loved life. At least he used to. His dark half and his lust for mischief had twisted into a thirst for revenge.
"I think I have to."
I shook my head. "It took Akil ten years to make me human. If you go back there, you might never be human again."
"What choice do I have? I killed seven people, Muse. Seven. I stole their lives and I did it"—he clicked his fingers—"like that. I don't want to be that demon."
"Stefan—"
"No, Muse. If Ryder comes after me, I'll kill him." His voice fractured. "I will. My demon will. It's all the same."
I worried my lip between my teeth. "This isn't right."
He sunk his hands into his hair and blinked too-bright eyes up at the ceiling. "Tell me about what's right and fair. I fought my entire life to control this thing inside me, and I failed."
"No." I was losing him. He was going away from me. "You didn't fail. We just need to figure out a way to control him again." The words tumbled breathlessly from my lips. "Akil tamed me..."
"You were a young girl, Muse." His wan smile humored me. "He had years. You barely had enough control of your element to light a candle. I'm too far gone. I have too much power."
"I'm not giving up on you." _Shit._ Tears blurred my vision. I gulped the knot in my throat. "Just... just don't do anything. Not yet, okay? There must be a way."
"There is." His smile softened. "I walk away." This was goodbye.
I staggered back and bumped against the counter. It was all my fault. I'd driven him through the veil to begin with. I'd started everything. "No. Please don't. I'm sorry." Panic ripped away my fears and doubts, and the words I'd wanted to say since he'd returned spilled from my lips. "I tried to get to you. I would have followed you to the netherworld if I could. I only wanted to live when I told you to get rid of Akil. I didn't mean for this to happen. What you saw outside the lake house between me and Akil that night, it wasn't what it looked like. I didn't bring Akil back. I brought you back. Akil was just... there. I needed him for the bastard inside me. When you found me in the library with Nica, I tried to save her. Damien was too strong. I did everything I could. I tried to do the right thing. I'm not perfect. I fuck up—too often. I don't know what I'm supposed to do. I'm scared too. I don't know what's happening to me. I didn't... I couldn't..." Cool tears skipped unchecked down my cheeks.
He stood granite still, eyes glistening. I'd told him everything, the truth, all of it. But it wasn't enough. The truth couldn't change the past. He clenched his jaw and stayed demon-still, not daring to close the distance between us: just a room, but it felt like a canyon.
"Say something," I blurted. "Dammit, shout at me, curse me, fight me if you want to. Just tell me you won't go."
"You'll be alright." His eyes said ' _Goodbye.'_
"No, no, don't do this..." I stepped forward. He recoiled.
"Don't... Don't come near me." He held out a hand. "I'm..." His image shimmered, element surging. "Please, stay back, Muse."
This was insane. It was wrong. It wasn't fair. "You won't hurt me."
"I will," he snapped, and then more softly said, "I can't control it."
Gritting my teeth, I chained my emotions down. If he walked away now, I might never see him again. "Okay... Alright. But wait. Just... don't leave yet."
"If I wait, it will only get worse." His words settled around us like snowflakes.
I shook my head, dislodging tears. "Not yet. I..." I had to keep him there with me. I would find something to help him. Akil would know a way. I'd do anything. "I need your help. You asked me once for help. You wanted to kill my brother. You know what he's doing, don't you? You know he trades half-bloods."
"Yes," he replied gruffly. "I'd planned on asking for your help, but then I crossed the veil with Akil, and nothing mattered after that."
"Val has a little girl. Dawn. She was Levi's, but Akil stole her and dumped her on me. I can't abandon her. Val took her. She's a half blood like us. I have to get her back, and you're the only person I trust to help me."
He fluttered his eyes closed and inclined his head, fighting an internal battle. When he spoke again, his demon shadowed his words, lowering the timbre to a jagged growl. "If I do this, you let me go. You don't try to stop me."
I pursed my lips. I wouldn't let him go. I was as stubborn as he was committed. He just needed time to think it through. That was all. He'd change his mind. He'd battle his demon and win like I had all those years ago. He would because he was Stefan, and he was my hope that everything would work out in the end. He would survive because he always did. And if he could, then so could I. Two half bloods of opposing elements. Two hopeless dreamers.
He sighed. "I know where to find Valenti. I'll take you. We'll get Dawn. But you return with her to Boston, and leave me behind. That's the deal."
I held his arctic gaze with fire in my eyes. I'd leave him there over my dead body.
Two sharp knocks at the door shattered our negotiations. I swiped drying tears away and stalked to the door. I'd have to be curt with Lacy, lay down some scare tactics. I tugged open the door, a warning on my lips. Ryder shunted me aside. I reeled back and caught sight of the black-clad Enforcers outside my apartment, armed to the teeth behind riot gear.
"Easy... easy big fella..." Ryder trained a gun on Stefan. "No sudden moves, and nobody gets hurt."
There are moments in life when events undo in front of you like the thread of a scarf, and no matter how you try to stop them, it all falls apart as though it's inevitable. Fated. That moment in my apartment unraveled as time slowed. I watched, oddly detached, as Stefan's entire body sparkled with ice dust. He dropped his stance, lifted his lips in a snarl, and I knew he would kill Ryder, right there, in front of me. He'd do it before Ryder could pull the trigger. A glimmer of thought would end it. Ryder was a dead man.
Instinct drove me forward. No hesitation, no stumbling, or slipping. I grabbed Ryder's shoulder and yanked him back. He twisted, trying to shake me off. With dreadful certainty, I stepped in front of him. And the world blanched white for a second. Just a blink. When color bled back into my vision, I couldn't make sense of it. Warm liquid spluttered from my lips. Silence smothered me. The world was quiet. At peace. And Stefan stood in front of me, so close I could have leaned in and brushed his lips with mine. Ice dusted his face. His eyes sparkled like they had at the park when we'd both been free—just for a little while. He pressed a hand to my face and said something, but the words tore from me. I was falling. An intangible dark pulsed in my peripheral vision, an inescapable truth.
Stefan pulled away. I was so very cold. I wanted him back, but my mind drifted. Looming shadows devoured my untethered thoughts. His expression shattered. His face twisted with horror. He staggered. A slither of light slipped across the dagger of ice gripped in his hand. A splash of deep red coated the serrated blade. Streams of blood dribbled over his clenched fingers. My blood?
He'd stabbed me.
From behind, strong arms coiled around my waist. I sank to the floor, falling against the warmth of the man who smelled like gun oil and hot metal. Heavy eyelids fluttered, but I couldn't tear my gaze away from Stefan. If I looked away, he'd be gone. He lowered his gaze to the dagger in his hand, as though only just realizing what he'd done. When he said, "I'm sorry," I didn't hear the words, but I read them on his lips.
Dark lapped at my mind, breathing in and out, washing over me, drowning me. I clawed at the suffocating pressure and tried to fight my way back to the surface, but the pressure closed in, and after a while, my limbs wouldn't respond. I drifted. _So cold._ I drew my thoughts into the warmth inside me. And let go.
## 21
# Chapter Twenty-One
They'd kept me sedated. Of that I was damned certain. I drifted in and out of consciousness and growled at the white-coats when they came near me, but I was so very weak. I couldn't escape them. It felt like hours I'd been that way, but when I finally had enough strength to swing my trembling legs from the bed, my muscles clenched with atrophy, and I suspected it had been much longer. My body quivered under the effort of movement. Cool beads of perspiration broke over my skin. I could walk. I could escape.
My bed was the only one in the spearmint-green room. A wall of one-way glass drew my eye. I'd been there enough times to know I was inside the Institute's medical facility, and they were watching, always watching.
I tore the IV from my arm, applied pressure to staunch the blood, and searched my room for clothes. I couldn't very well escape in a flimsy hospital gown, but the room was barren, more like a morgue than a clinic.
Ryder burst through the door, and I nearly jumped out of my skin, not least because he carried a bunch of pink carnations and a grin so wide he must have gotten laid while I was out for the count.
I scowled at him. Ryder and flowers? "Did someone die?" My voice clawed its way out my parched throat. Shit, just how long had I been out?
He dumped the flowers on the end of the bed. "Yeah, you." He tucked his thumbs over his belt. Rocking back on his heels, he wandered his wide-eyed gaze over me and then muttered a curse and swept me into a bear hug.
"Ah, easy." A twinge of pain needled me in the ribs. "Ryder." He squeezed tighter. "Kinda need to breathe here."
He pulled back. "Dammit, Muse, you died in my arms." He swept my bangs back from my face and searched my eyes. "I tried to get you out the apartment so you could summon your demon, but the Institute took over. The fuckers told me you'd died. And here you are. Shit, don't do that again." He glared at me, eyes flicking back and forth, tracing my expression, as though committing every nuance of my face to memory. Finally, he seemed to realize he was still clinging to me and stepped back.
"Well, I'm okay." I tried to smile but winced instead. My side throbbed. I pressed a hand over the heat of the wound and felt the sticky bandage taped to my side from below my breast to my hip. "What happened?"
"You'll have a bastard of a scar."
I winced and tried to stretch out the pain. "Scars are my armor."
"You saved my ass, Muse." He scratched at his chin and then chewed on his thumbnail. "He was gunning for me. He'd have gutted me right there..."
_Stefan._ My vision blurred. I gripped the bedside table. The Enforcers had stormed my apartment. Stefan stabbed me. I steeled my thoughts, packed all the unwieldy emotions down into a reinforced mental box, and rammed the lid on. When I met Ryder's stare, my expression was blank.
"Did you kill him?"
"No. He opened the veil and stepped through. He's gone."
I blinked, heard my demon roaring, smacked her down into the box, and tied a pretty pink bow on top. Stefan had gone through the veil thinking he'd killed me. "How long ago?"
"Two weeks. The whole world thinks you died. The Institute released a statement. The demons who slaughtered their enforcers had been terminated. End of story. Charlotte Henderson is no more. Adam only told me you were alive two hours ago. I had thirteen days of thinking you'd bled out in my arms. Fuck, don't ever do that again, lil' firecracker."
"I'm dead?" I might not have believed it, if it wasn't for the excess moisture in Ryder's eyes.
"As good as."
I should have felt something. Anything. "I want to go home." Why was I numb?
He smiled. "Adam wanted to keep you here. Yeah, don't give me that look. I told him you'd be happier back home. At least, you wouldn't try to kill him. He's flexed some Institute muscle and forced your neighbors to sign a non-disclosure contract. If they talk about you or what they've seen, they'll find themselves in a whole world of trouble."
Ah, hell. I hadn't wanted to get them involved at all. I lifted a trembling hand and rubbed my eyes. "What about Akil?"
Ryder shivered. "He's dropped off the grid, probably gone back to hell. Good, fuckin' riddance. Psycho tracked me down, damn near broke my arm, and demanded I give him a second-by-second break-down of what happened." He rubbed at his arm. "He wasn't best pleased with what I had to tell him."
Akil thought I was dead. Stefan thought he'd killed me. Asmodeus would stop looking for me. So would Val, and Levi. I was... free? I touched my chest. No, not free. Damien still thrummed through me. But almost free. Was that even a thing? Almost free? No, you can't be almost free. Free is an absolute.
I sighed out a weary breath. Did Dawn think me dead too?
I'd been dead to the world for two weeks. Two weeks in netherworld time was more like two months. I'd seen Stefan's face. The horror and guilt had crushed him. "I need some time..."
"Sure yah do. Let me buy you a drink, though. Maybe a round of pool? Might even let you win this time."
He was getting all mushy again. "Jeez, Ryder, who knew you had a heart behind all that swagger?"
He screwed up his face. "Yeah. I know. Don't go spreadin' it around. Seriously though, we need to talk. The night after you died, some weird shit started goin' down. The number of demon incursions fell off the chart. The Institute number-crunchers said it was a glitch, but the sightings, events, they're dwindling by the day. Demon chatter says something's up beyond the veil. We've got a few demons in the cells, but they won't talk. They're scared."
As he explained, I reached out a probing thought and poked at the veil. Usually, it rippled in the back of my mind, a constant that I ignored like the background noise of the city. When I reached for it, it should have pulsed and rippled. But when I reached for it then, it shivered and settled like the waters of a millpond. It didn't move. Didn't dance. Didn't twitch.
I repressed the shock, kicked it into the same mental box with everything else, and nodded. "You're right. That's not normal. Something's up, and it doesn't feel good."
## 22
# Chapter Twenty-Two
A month after I'd been declared dead, I stood in the ladies' washroom at The Voodoo Lounge, frowning at my pale reflection. The music thumped the air and drummed the inside of my skull. I'd had a headache before I'd arrived at the Lounge. Now it was a pulsating mass of agony, as though Damien had shifted from my soul and settled behind my eyeballs. My stomach heaved. I gulped back pools of saliva. _How many drinks did I have?_ Demons in people-suits brushed by me and muttered various slurs. The club had been packed to bursting point every night since reopening two weeks ago. I was here to find out why the Lounge was bustling. And I was fucking it up.
My reflection looking back at me in the mirrors above the rows of sinks was a stranger. I had my straight-as-nails hair cut above the shoulder and dyed bottle-blond. A pink and black short skirt ensemble accentuated curves I didn't know I had. Lacy had assured me the outfit was as anti-Charlie Henderson as I could get. She was right. I hardly recognized myself. I had blue contacts in too. I'd melted the last pair when a demon got frisky a few days ago. He'd told me he'd rather live out his days in the Institute cells than go back across the veil. He'd fought like his life depended on it. Once, I would have killed him, but I let him go. Ryder didn't know.
What the hell was I doing here, surrounded by demons, and dressed-up like some demon fangirl?
My reflection frowned at me. Yeah, that was right. This was my idea. "Let's check out The Voodoo Lounge," I said. Word in the demon-chatter said the place was jumping since the veil had fallen quiet. The demons were getting twitchy, grouping together, flocking. _Safety in numbers._ The Enforcers wanted to know why. The last time I'd been in the Lounge, Levi had trapped me in a cage. Who was running this club, and exactly what had the demons spooked so badly they would rather die than go home? I also had a darker motive for my visit, something I'd yet to fully admit to myself. I was looking for trouble, itching for a fight. I hadn't slept properly since waking from my near-death experience. Blood soaked nightmares soiled my dreams. My demon hungered. So did I. If I didn't find trouble, I started it. Ryder had already called me out on being careless while on duty. My head wasn't in the game.
"Honey, too many flamin' Sambucas?" The woman—if one could call her that—might have appeared normal if not for the two furred tails twitching around the hem of her skirt and the fangs crowding her mouth. She laughed and left me hunched over the sink, trying to keep my churning stomach contents down.
Ryder didn't know I'd been getting acquainted on a nightly basis with a bottle of wine. Not that he'd judge me. We all had our vices. I recognized the signs. I was slowly sinking in quicksand. The weight of despair pulled me under. I should have been happy. Ninety percent of my problems had evaporated overnight. The world thought I was dead. I was now happy Carla Gordons, who dyed her hair sunshine yellow and wore cherry red lipstick. I didn't want it. I wanted to go home, curl up in my bed, and hide. Carla wasn't me. I wasn't a coward. I didn't run. I didn't hide. But I was running, and each time I reached for the bottle, I was hiding. The darkness inside me throbbed harder with every passing day. _The whispering dark..._ Damien's hideous laughter haunted me.
My phone chirped in my pocket. Snarling a curse, I answered it as yet another demon bumped into me and grunted something derisive. "Ryder..." His name was more a growl than an acknowledgement.
"Hey... Just checkin' in. Everything okay?"
"Peachy."
"You don't have to do this, Muse. I can send another Enforcer in." We both knew that wasn't happening.
"No, I got it." I needed to be here, among demons.
"Okay, I'll call in an hour. Get something we can use."
"Gotcha." I jabbed the end-call button, splashed water on my face, and left the bathroom, heading for the dance floors.
The night wore on, and my mood soured. I managed to get a few tidbits of information out of a semi-conscious demon slumped over a table. It was bad across the veil. The princes were laying down their laws. Pick your allegiance or die. The demons who could escape had, but not all possessed the skill to cross the veil without assistance. Some bartered with higher demons for safe passage, but many didn't make it that far. The princes forbade it. I tried to get out of him why the princes were battling when they'd had centuries of relative peace. He mumbled something indecipherable about the veil, titles crumbling, and the _fall of wrath_ , whatever that meant, before proceeding to drink himself under the table.
Frustrated, I wandered the dance floors, scanning the sea of demons. My demon paced back and forth inside my mind. I couldn't bring her to the show, not without revealing who I was. I could dye my hair and paint my nails pink all I liked, but my demon didn't change. A half blood was rare enough. A half blood missing a wing? They'd know me the second I dropped my humanity.
My phone buzzed in my jacket pocket. I plucked it free and ducked to the side of the dance floor, almost falling over a crate as the crowd spat me free. Except it wasn't a crate. I ignored Ryder's call and crouched down beside the metal cage.
"Dawn?"
She was curled up in the middle of the cage, her filthy slip of a dress barely covering her grazed knees. She blinked doe eyes out from behind matted bangs. "Muse?"
"Move back." I summoned a slither of heat into my palms, gripped the bars, and tried to pour it into the metal. White glyphs flared beneath my touch, sending my jolt of heat back up my arms. I cursed and let go. "It'll be okay. I'm getting you out." I wouldn't leave her there. If I had to tear the club down around us, she was getting out of that cage.
She hugged herself tighter. "No. He'll hurt me."
"Who?" I pressed my face close to the bars. How could anyone do this to a little girl? I knew the answer. I'd had it done to me. They didn't see a girl. Dawn was property. The demons in this club barely gave her a second glance. Just a half-blood toy in a cage, an abomination.
"He's inside my head. He drowns me, but I don't get wet. Just go. He'll hurt you too."
Drowns but doesn't get wet? A shiver trickled across my flesh as the memory of experiencing exactly that bubbled to the surface of my thoughts. She had to be referring to Leviathan. Beneath the multicolored lights and deafening beat of the music, I couldn't see anyone resembling Levi. The club was too full of demons and their mingling elements to even try to sense him. "Is he here, Dawn?"
She bit into her lip and nodded. "The one with the black wings brought me back here. I don't like him. I don't like any of them."
Black wings? _Valenti_. My twisted excuse for a brother had handed her back to Levi. "Did the black-winged-one hurt you Dawn?"
She shook her head. "He said it wasn't right for half bloods to be free." She blinked, dislodging a silent tear. "The water prince likes it here. He told me he likes to make the demons thirsty. I get so thirsty. All I want is water. Then he drowns me..." Her tiny body shivered. "But it's not real."
"I know, honey." I had to get her away from here. I searched the edges of the cage and found a padlock. Bolt cutters would get it open. Ryder would have those. "I can get you out, but I need tools. I'll be back real soon, okay? I promise."
Her eyes saddened, and she turned the torn rabbit over in her arms and hugged it close. "I want it to end. I want to hide."
I closed my fingers around the bars. "Hiding won't make it go away. Hiding just delays the inevitable. Unless we do something about it." I offered her a hopeful grin. "I let my friend down once and didn't get to him in time. That's never happening again. Do you hear? I'll be back later, and I'm getting you out. And if Levi shows up, I'll boil him dry."
She blinked and nodded. "Okay, Muse."
I called Ryder from the parking lot beside the club and told him everything. He picked me up in the tired Mustang ten minutes later and drove a few blocks away before parking.
"Okay, listen up..." Twisting in the driver's seat, he gave me the no-bullshit glare. "The club closes at four hundred hours. I've checked local security footage. By five, the staff have gone. The security is non-existent, probably because nobody is stupid enough to break into a club owned by a Prince of Hell. So that's what we're doing. We break in. Cut open the cage. Grab Dawn. And get the hell out of there before anyone knows we've been inside. You sure you can deal with Levi if he shows up?"
"Yup." Maybe, mostly... definitely. If I could suck the fire out of Akil, I could most definitely turn Levi to steam. I had to. I wasn't leaving without Dawn. "Sounds like a plan."
He grabbed a gun from the back seat, checked the chamber, magazine, and handed it to me. "You're toting demon-killing soft point rounds, etched with glyphs. One shot to the head will take down any lesser demon, but the princes are tough bastards. It'll slow a prince down, but it won't kill him."
Cool. "You tested it on a prince?"
"I planted a few in Akil a few months back, but he took 'em like they were bee stings. Happened a few blocks from here."
Oh. I ejected the magazine and examined the rounds, deliberately avoiding Ryder's keen stare. I'd healed Akil's wounds that night. Ryder's bullets were more effective than he realized.
"You up to this?"
"Huh? Yeah. Sure." The adrenalin had ousted much of the alcohol in my veins. A blast of fire from my demon would combust the rest. Unfortunately, my demon couldn't help my fragile state of mind.
I briefed Ryder on the conversation I'd had with the wasted demon. Until a few months ago, the princes hardly featured on the lips of demons at all. It seemed they'd thrown off their complacent attitude and gotten their hands dirty. Akil would be in the midst of it. Perhaps I should have been concerned, but given his recent behavior, he clearly wasn't as weak as he allowed everyone to believe. He was different, he'd said so himself.
Memories of the garden event summoned thoughts of Stefan. I quickly trampled those before my past dragged me down toward the yawning pit of despair I tried to crawl out of on a daily basis.
"Have you thought about what you're going to do once you have Dawn?" Ryder asked, anchoring my thoughts in the present.
"Take her home."
He gave me a look that said, _try again, firecracker_. "Don't blow your cover. If you take her back to your place, you might as well paint a target on the top of your building saying, _Muse isn't dead_."
"Damn." I hadn't thought that far ahead. Dawn was hot property. Every badass demon I knew wanted a piece of her.
"Hand her over to the Institute."
"Ryder, no way. We've been through this."
"Yeah, I know, and I'm still right. Look me in the eye, and tell me half bloods aren't dangerous."
He had me there. "Adam doesn't have a great track record with half bloods. He'll ruin her."
"Muse, c'mon... she's a little half-blood demon girl. If she survives to adulthood, it'll be a bloody miracle. You only made it because Akil kept you safe."
I flinched back. "I don't belong to Akil."
"That ain't what I said."
"Is that what you think?"
"Muse, don't dump your shit on me, alright. Dawn doesn't have an asshole like Akil besotted with her. She won't last on her own. Her only chance is the Institute. You know it's true. Stop trying to find her a happy ending. There ain't no happy endings for half bloods. The best place for her is the Institute."
"Did you know Adam has two other half bloods squirreled away somewhere? He's buying or breeding them, rearing them like pets, and turning them into weapons. You think he'd care about Dawn at all? He ordered the death of his own son. I'm not giving her to him. I'll take her away somewhere. I'll drop off the grid. I've got nothing else. I'll keep her safe, Ryder, and don't you dare try and take her from me."
I'd tested his loyalty to the Institute before. There were times he should have bundled me back to the men in white coats, but he hadn't. That didn't mean he wouldn't though. From the hardened military-grade stare he'd fixed on me, he clearly wasn't negotiating.
"The Institute can't have her," I said. "Dawn deserves a chance at freedom, and maybe you're right. Maybe half bloods don't get happy endings. That's why I have to give her what nobody else can. Freedom."
His lips parted, and those old eyes steeled. "Then you'll both die. If she doesn't get you killed, she'll blow a fuse one day and nuke the neighborhood."
My scowl tightened and brought a snarl to my lips. "You've given her up as a lost cause already, haven't you?"
"No. I want what's best for the girl. If you stopped thinking with your heart, you'd see that."
"Have you given up on me, too?"
He spat a curse. "I'm here, ain't I?"
"Yeah, now. What about tomorrow or next week when I screw up, which I will, because I'm only human. What then? You gonna tie me up and bundle me off to the Institute too?"
He narrowed his eyes. "You know the answer."
"Yeah, I do." Crossing my arms, I slumped in the seat. The answer was yes. Ryder was the only man I knew—demon or otherwise—who never lied about what he was. "That morning at my apartment, you would have shot and killed Stefan."
He flinched and turned his face away. "Yeah, and it would have been the right thing to do. I screwed up. Hesitated. It won't happen again." Slowly, he faced me once more. A muscle twitched in his jaw. "He nearly killed you."
A sharp smile sliced across my lips. "It must be easy, seeing in black and white." His glare contracted. "Sometimes, there is no right thing. Sometimes wrong wins, and that's okay. Life can't be distilled down to right and wrong. It's all about that messy gray area in between and how we deal with it. I sure hope I'm not the one staring down the barrel of your gun when you figure that out."
He humphed and glared out the window. It was going to be a long few hours until 5am.
# Chapter Twenty-Three
The last time I'd broken into the Lounge, I'd been with Akil, and Levi had flushed me down a hallway. I chalked that up to him having the element of surprise. This time I was ready. I had no wish to repeat that experience and planned to get Dawn out before the Prince of Envy realized we were on his turf. In all likelihood, he wouldn't be on the premises. Surely princes had better things to do than stalk empty clubs? If not, I was ready to flash-fry his ass.
Dawn's cage was right where I'd left it. Ryder's flashlight beam washed over her. She gripped the bars, eyes wide. Swirling protection symbols flared beneath her delicate hands, and a peculiar quiver of power jolted through me. _Not Levi's element._ I shook it off while Ryder cut the padlocks.
I swept my gaze across the shadows coating the dance floor, gun in hand, anti-prince rounds locked and loaded. I'd shoot the slippery bastard between the eyes before he could say, "Boo." Bam. No talking. No evil monologue. That's how things go wrong. Levi was too destructive. Give him an inch, and he'd take a mile. That wasn't going to happen. I'd pepper him full of bullets and hope they killed him. At the very least, it'd ruin his pretty human-suit and slow him down.
Dawn burst from the cage and clung to my leg. I sunk a hand into her hair. "It's okay, honey. We're gettin' out of here."
Breathless whispers slipped from her lips too quietly for me to hear. I crouched down. Her wide eyes pleaded. She shivered and whispered. "He's here..." Her breaths fluttered across my cheek.
Ryder dropped the bolt cutters. They clattered against the floor. "Fuck. Muse. What the fuck?"
He lifted his gun at arm's length. Tremors wracked his grip. He aimed at Dawn, then brought the weapon up, and tracked me with it as I straightened. Perspiration beaded his pale face. "This ain't me. I'm not doing this!"
Ushering Dawn behind my legs, I mumbled, "I know." I searched the dark. I couldn't see Levi, couldn't even sense him.
"Fuck, Muse. I can't—I can't stop it."
Ryder's disjointed hand angled the gun around. His arm followed until he held the muzzle at his temple. His expression twitched. He drew his lips back and snarled. "Get the fuck out of my head!" His eyes darted, searching for the source.
"Levi..." I growled. "Are you a coward now?" My voice bounced around the empty club. "Not going to show yourself?" Any second, I'd hear a gunshot, and Ryder would be gone. Adrenalin surged through my veins, threatening to pull the fire in its wake. "You don't need to kill my friend."
Ryder dropped to his knees with a strangled cry. He was fighting it. He'd die fighting if I didn't act fast.
Dawn whimpered and huddled in close. "Dammit, Levi, face me!"
Water vapor coalesced on the dance floor. I aimed my gun and narrowed my sights down the barrel as the steam adopted a female outline. The vapor spun up, like a reverse waterspout, and She-Vi stepped out. I pulled the trigger. The blast cracked the air. The gun kicked in my hand. The bullet splashed through She-Vi's forehead and smacked into the wall somewhere in the shadows behind.
She-Vi laughed.
A knot of dread tightened in my gut, even as I fired again, and again. Well, damn. I'd been so convinced the rounds would at least slow Levi down. It hadn't occurred to me they'd sail right through his watery vessel.
"Burn the bitch!" Ryder hissed.
She-Vi's brow jumped. "Summon your element, undead half blood, and I'll redecorate these premises with the contents of his skull."
My demon rattled her mental cage. She wanted out. The lure of the veil, albeit quiet, lingered within my reach. How quickly could I open it, draw from beyond, and throw flame at Levi? I was fast, but not bullet-fast. Given time, I could boil Levi dry from the inside out, but not before he killed Ryder. God, what had I done? I'd been so desperate to free Dawn, I hadn't even considered Ryder's vulnerability – his human mind.
"Half bloods are intriguing." She-Vi's singsong voice rippled through the dark. "Demon and human. Both and neither. Humans are puppets of flesh. All of them." Levi stalked to one side then the other, pacing, observing, weighing the three of us. "Humans dance for me." Her double-eyelids flickered. "They break like toys. I discard them, find more. Those break. They come here and drown themselves in drugs and alcohol to forget. Their fragile minds shatter. It's tiresome. But half bloods... Half bloods break and come back for more. My little half blood was learning well before Carol-Anne took it upon herself to flaunt my pet in front of Mammon. My little one, my Dawn, has spirit. There's no sweeter taste in the mind than a crushed spirit." She stopped pacing and lifted a hand, curling her fingers into a fist. "Like you, Muse. You were dead. The infamous half-blood daughter, Mother of Destruction. Ruined, spoiled, broken. Tortured..." She tasted the word on her lips. "And then deceased. Much was argued after your demise. Mammon believes it. Asmodeus believes it. The Court of Dark believes it. Nonetheless, here you stand: changed, hungry, raw. And yet you are the same petulant human, once again stealing what is mine. Valenti's sister, no less. And he had the gall to chastise me for losing my little pet. Your persistence is commendable. One might begin to believe the whispers cloying the air around you, Muse. Perhaps your father, Lust, is not wrong."
"Take me to Asmodeus," I said, hoping to bargain my way out. "I won't fight you. Just let Ryder and Dawn go." Levi had been tasked with my retrieval. I'd worry about my own chances of survival once my friends were safe.
She-Vi chuckled. "Why would I let them go? Dawn belongs to me. Half bloods must be owned. That is the way of things. And this puppet... Ryder? He is nothing. I would kill him now if his mind did not harbor such delicious intricacies. This one is a killer, a hard man, and yet so perfectly simple."
"Bitch!" Ryder snarled. "If you wanna mind-fuck me, come over here and let's get personal. Go deep, I like it rough. See what you find in there, princess."
She-Vi's body shimmered. In the next step, Levi was all masculine and muscle again. He cocked his head and observed Ryder curiously.
"Well, fuck me." Ryder spat a harsh laugh, gun muzzle grazing his temple. "Now if that ain't an instant turn-off, I don't know what is."
Levi's double-eyelids blinked, but otherwise he didn't move. Ryder appeared to fascinate him. Maybe it was his no-bullshit stance on life. He had a military past. Perhaps those memories intrigued Levi. While I coiled a slither of energy into my body, Ryder locked his fury-laden stare on Levi. To reach for the veil, I'd need to call my demon, but I didn't have time to do both. Levi would sense my demon as soon as she broke over my skin. He'd blow Ryder away.
Dawn's tiny hand slipped into mine, and a curl of slick energy touched my palm. I tightened my grip, afraid to look down at her for fear of catching Levi's attention.
"It's okay," Dawn said. Only she didn't. Her voice plunged through my thoughts like a beam through the dark. "I will unmake him." The crawling touch of her power dragged up my arm, spilling pins and needles in its wake. I flinched. The human part of me wanted to recoil and sprint away from her, as though something about her tiny body repelled me. Her power bloomed beneath my feet. A wash of energy rose over me, knocking me aside. One minute, I stood beside Dawn, my skin trying to crawl away from her, the next I was face down half a dance floor away, ears ringing, body dashed by countless needles of pain. I pushed up on my hands, wincing as a sudden pain scurried around my skull. Raw energy tickled my skin, itching madly. I had to fight not to dig my nails in and scratch the unwanted element out of me. Whatever element she wielded, my humanity ran screaming.
I twisted, half scrambling to my feet and saw Dawn, or rather, saw her demon. She was a nightmare of liquid green and oil black suspended a few feet off the ground, back arched, arms out. Dark-light bled through her emerald skin. A mass of oily, black barbed vines lashed about her head, each moving independently. Vines sprouted from her back, tangled around her, exploded outward, and knotted through the mangled miasmic cloud that had once been Levi. Dawn had literally unmade him, torn into him, pulled him apart, and thrown him back together into a frothing soup of blood, flesh, and water. The vines picked, plucked, and stabbed, working like a thousand needles.
My breath caught in my throat. I could taste her element like poison polluting the air. Panic rattled around my skull. Every muscle strummed tight with the need to run. It was only my stubborn need to protect Dawn that rooted me to the spot, that and my demon's morbid fascination.
In the dancing green light, I caught sight of Ryder pressed against the far wall. Gun clutched at his side, he cringed back from Dawn but didn't look away. I knew, without doubt, that he'd kill her if she lost control.
One of Dawn's vines snapped at me, cracking like a whip at my feet. I flinched back as more separated from the river and peeled toward me. "Dawn..." They still came, rippling above the dance floor, eager and hungry. "Dawn?" I back-pedaled. "Dawn!"
She didn't see or hear me. Her all-green eyes were locked on Levi's mess, her little head cocked to one side as her tendrils knitted parts of Levi back together and then unraveled him all over again.
The slick touch of her power snaked around my ankle and tugged my leg out from under me. I fell hard on my ass. The abhorrent crawl of Dawn's power yanked my demon out of me, plunging fire through my flesh. Dawn swung her blazing green eyes on me, dropping what was left of Levi. Blood and bone splashed across the dance floor. Her whip-like tendrils reared up behind her tiny body. Countless eels of energy hovered in the air, poised to strike.
"Dawn... I know you're in there, honey." My demon slur couldn't hide the quiver in my voice. "Don't let it rule you." Her dark energy swelled, replacing the air with a sour, elemental soup, heavy with energies not of this world. She was so little. How was she supposed to fight the desires of the demon riding her? Her demon might have been physically small, but the power she wielded wasn't. She hadn't even drawn from the veil.
Her element broke free and lunged for me. I thrust out a retort of fire, blasting her back. A gut-churning scream pierced the roar of my fire. Dammit, I was hurting her. I recalled my element, and realized my mistake as soon as the eels plunged through the fizzling embers and coiled around my legs. Her element, whatever dark power it was, knotted around my limbs and tightened. This couldn't be happening. Would she unmake me as she had Levi?
A gunshot punched through the air. Dawn collapsed with a cry. Her demon-form shattered. The black eels coiled around me splintered and dissolved, leaving behind a film of sticky black tar.
Shaking off my demon, I got to my feet and stumbled to Dawn as Ryder approached her. "Jesus, Ryder, you shot her." Blood pooled beneath her fragile body. I gathered her into my arms, snatched up her rabbit, and glared over her shoulder at Ryder. "Goddamn it, she's just a little girl."
He had the decency to look horrified before his training kicked in. "She's dangerous." He jerked a thumb at the pile of flesh and blood that had once been a Prince of Hell. "You wanna end up like Legolas over there?"
Dawn buried her head against my neck. This wasn't the time to rage at Ryder, but I'd be having a few sharp words with him once we were safe. "We have to get out of here." She'd killed a prince. Holy hell, killing a prince would surely trigger alarm bells in the netherworld. Plus, every demon in the local area would have sensed her power-drain.
Ryder offered me a hand. I ignored it and pulled Dawn close while staggering to my feet. "Let's get her to the car. I need to see how bad the wound is."
"I just grazed her," he grumbled. "I should have killed her."
I glared at him as we made a dash for the doors. "What the hell has gotten into you?"
"She was gonna kill you, Muse."
I uttered a string of colorful curses and shoved through the door of the club into a wall of bright light. The full weight of half a dozen spotlights blinded me. My demon reared up, poised for a fight. She tried to wrench my control away. I staggered back with a snarl, fighting instincts and alarm.
" _RELEASE THE HALF BLOOD_." A voice boomed somewhere behind the barricade of blinding light. Enforcers. A rich melodic growl bubbled up from my demon. I swung an accusing glare at Ryder. He'd set me up.
Hands raised, gun still palmed, he backed up. The bastard was retreating to join his ranks. "It's for the best."
Like hell it was. I glared hard into the lights and made out a handful of cars, maybe a dozen Enforcers, all armed, all ready to shoot me down if I made one wrong move.
"Muse..." Dawn mumbled.
"It's okay. It's going to be okay." And it would be okay. Because they weren't having her. "I want you pretend you're somewhere safe. Somewhere warm. Think of the beach."
"I've never been to a beach."
My heart broke for her, and I nearly dropped the reins of my demon right then. "When we get out of this, I promise to take you." I set her gently on the road outside The Voodoo Lounge and straightened to face my enemies.
" _STEP AWAY FROM THE HALF BLOOD."_
_Screw you._
I stepped around Dawn's vulnerable form and lifted my hands. My eyes had adjusted to the stark whiteness. I could see the Enforcers much clearer now. Some, I knew. Adam wasn't there. Ryder stood off to the left on the fringes of the cadre. His steely eyes watched me while the others appeared more interested in Dawn. Ryder knew where the real threat lay. He stilled, lifted his chin, and shouted an order.
In a second's thought, my demon slammed into me. I planted both feet and closed my eyes. The veil opened with a precise mental swipe, and freedom whispered in my ear. Yes, this was right. My fire danced with me, roaring loud, swallowing the sounds of gunfire. Bullets slapped against my molten skin. I tasted melted metal in the blaze. Doubt didn't exist. Fear had fled. When Damien's poison seeped out of its hiding place and flushed through my veins, it didn't matter. They would not have Dawn.
The veil pulsed, and with it, pleasure strummed through my quivering demon muscles. I had more, so much more to give. Levi's words, Mother of Destruction—words I'd almost missed—briefly flitted through the inferno of my mind before skipping out of reach. My demon cared nothing for those words. Or those people. Or Ryder. But she recognized the half blood cowering on the floor behind us. She recognized power, and Dawn's little human body threw off enough power to render my demon feral.
I worked fire and flame like a conductor directs an orchestra. A flick of a wrist, a glance, a twitch. It was easy, quick, and wondrous. Wildfire ran free. When the gas tanks on the cars blew, shrapnel pummeled my molten skin. I soaked up the pain, twisting it into pleasure. Screams, sirens, gunshots, alarms: they meant no more to me than birdsong at the break of dawn. I expected the madness, when it came, to be a violent thing. I'd thought Damien's embrace would shred my thoughts and flay my soul, but the truth couldn't be further from my fears. The insanity, the chaos, was instead peaceful. All I had to do, was let go. I wondered why I'd ever fought it.
Akil's words drifted through the placid lake of my thoughts, _'If you ceased battling your other half, and embraced the truth of what you are, you'd have your answer.'_
It became clear as I stood in front of Dawn and flushed flames through the street, washing them clean of Enforcers, that freedom was within my grasp. Once free, nobody could stop me. Not the Institute. Not Akil. I was the mother of fire, and fire destroys. I was destruction.
# Chapter Twenty-Four
Chewing on my thumbnail, I paced the tiny front room in Jerry's modest apartment. Dawn slept on a battered old couch, a blanket pulled up under her chin, bunny tucked under an arm. We'd fled the scene, and I'd called Jerry from a payphone only once I was sure the Enforcers hadn't sent backup after us. He hadn't asked questions, but he didn't need to. The street we'd left behind was ablaze. Fire crews had descended on The Voodoo Lounge. I'd walked away from a hellish nightmare of my own creation. There were bodies back there. I knew it. There had to be. When I'd summoned the fire, I'd let it gorge itself. To make matters worse, Damien's poison had crawled into my skin and stoked my lust for chaos.
I'd heard their screams...
What if Ryder had been one of them?
"You're going to wear a hole in my carpet."
Jerry's deeply delicious voice coaxed my thoughts back into the room. I looked down. There wasn't any carpet, just well-worn floorboards. Lifting my head, I fixed a neutral mask on my face and gave Jerry the picture of restrained stoicism. His backlit, muscular frame filled his kitchen doorway.
"Coffee?" He grumbled.
I nodded, not trusting my voice. I hadn't spoken to him, not since the call. I was afraid of what I might say. My gaze fell to Dawn. I'd been protecting her. And that would have been just fine, but it wasn't entirely true. Not all my motives had been as honorable. The demon shifted inside me, resettling, her urges sated. I'd let go. And I'd liked it.
Watching Dawn's chest rise and fall, a resolute calm settled over me. I couldn't go back to the Institute. I'd burned that proverbial bridge. I didn't want to anyway. Not like this, so close to madness. I would take Dawn, and we'd go away, just the two of us. But I didn't want to leave the life I'd made for myself. I liked my home. I enjoyed chatting with Rosa about her time in England. Lacy was like a breath of fresh air in my otherwise stale existence. After Stefan had blown my workshop to smithereens, I never thought I'd find somewhere to call home again, but Southie was as close as I was going to get. To keep Dawn safe, I'd have to walk away. To keep my neighbors and friends safe, I couldn't go back. What would Stefan do in my shoes? As soon as I wondered as much, I smiled. He'd already done it. He'd walked away to keep those he loved safe.
I moved to the kitchen doorway and leaned against the frame. Jerry's mountainous bulk filled the tiny galley-style room. I watched him fix two coffees, tracing my gaze over the swirl of tattoos marking his scalp. "You always been a demon doctor, Jerry?"
"Nah. I was a warrior in another life." His deep voice filled the kitchen just as well as his muscle-bound body. Warrior was an obscure word. I was about to ask him what he meant when he planted a steaming hot cup of coffee in my hand. "Get that in you."
I had to crane my neck to meet his eyes. For a warrior, he had curiously beautiful eyes. Beguiling. He regarded me with detached indifference. "Have you ever killed anyone?" I asked. It's not the sort of question you can ask in passing. _How was your day? Have you killed anyone lately?_ But Jerry was different. I might not have known him well, but I recognized strength when I saw it and not just physical strength either. He'd helped me before. He knew about half bloods. He'd seen a lot of things, knew a great deal about demons. There was more to him than a backstreet vet.
That fact was made all the more clear when he didn't react at all to my question. The mask of tattoos didn't move. He raised his mug, took a sip, glanced through the doorway behind me, and then leaned his bulk back against the countertop. "Well, I guess you aren't as dead as the Institute made out, huh?"
"Looks that way." I tasted the coffee. Strong. Black. It would deliver the kick of caffeine I'd surely need to keep marching forward. I'd have welcomed a shot of whiskey with it.
Jerry's gaze roamed over me, assessing my new post-death transformation, complete with blond hair and short pink skirt. "Not sure about the pink and black..."
I arched an eyebrow. "Says the man wearing a mesh tank-top over gray sweatpants."
He snorted a laugh but quickly sobered. "That lil' girl asleep on my couch is Carol-Anne's half blood, Muse. How'd you get her, and what happened at the Lounge just now?"
I flinched, not entirely surprised that Jerry knew who Dawn was. Clamping both hands around my mug, I brought it to my lips. Hot, aromatic steam wafted over my face. "I think I killed them," I mumbled. I'd said the words. They were out there, as though speaking them made the truth all the more real. I'd expected to be afraid of the facts, but a cold weight of acceptance settled in my gut. Was this what Stefan meant when he said he didn't care? A part of me cared. That part cared so much that I was afraid to acknowledge it for fear I might break down and let the demon in. I could crawl into the corners of my mind and hide while she took control. She wanted to. She hungered. It would be easier that way. I hide, and she wins.
Jerry slowly blinked. Even his eyelids were marked. "You've changed since you asked for help to control your demon months ago. You're not that same woman. I see that. There's steel in you now. If you killed, that's your burden. It's how you deal with it that will define you." His steady tone and even stare could only come from experience.
I nodded. "I think Levi might be dead."
His eyes narrowed to slits.
"Yeah." Keeping my gaze trained on him, I gulped coffee, and welcomed the heat searing my tongue. "I don't suppose that's going to go unnoticed for long."
He rubbed the palm of his hand over his shaved head. "You killed a Prince of Hell? A creature that can't be killed? An immortal chaos demon?"
My eyelids fluttered as I looked down. The lie felt right. Dawn didn't need the fallout from that coming down on her. If she had any hope of escaping all this crap, she'd need to stay off the demons' radar. "Yeah. He had it coming. Nobody puts half bloods in cages. Not anymore."
Jerry shifted, planted his coffee on the counter, and crossed his thick arms over his chest. "Shit. You really are something." A smirk broke out across his lips, brightening his eyes and lessening the effects of those intimidating tats. "You know what they've started calling you across the veil?"
Whore. Abomination. Filth. I'd heard it all. "I can guess."
"The Mother of Destruction."
Jerry's words slammed into me. I attempted to hide my reaction by freezing my expression somewhere between mild curiosity and indifference. The result probably looked as though I was having a stroke. Levi had called me the same. When demons start calling you the Mother of Destruction, shit gets real. Titles have power in the netherworld. They're not just words. They're a purpose.
I blinked and laughed. "That's insane."
"Yeah well, you're dead, so I guess you got a posthumous rep or something. Although, from what I hear, didn't you nuke a few hundred demons not so long ago?"
I recalled that event well. The ash-strewn images, boiling flames, and acrid smells stalked my dreams. I'd leveled a few netherworld buildings and turned on the Prince of Greed too. "Yeah." It wasn't something my human half was proud of. My demon, on the other hand...
"Alright. So let me get this straight." He lifted his hand and started checking off my sins on his fingers. "You ruined the Prince of Greed, one of the First chaos demons... You killed the Price of Envy, also immortal, although not-so-much. You nuked a flock of demons. Killed your owner. Wiped out a cadre of Enforcers?" He raised his eyebrows. "For such a little thing, you've got some serious issues."
I choked on a splinter of bitter laughter. It was so ludicrous that the only sane thing I could do was laugh. "You offering to be my therapist?"
"I would if I wasn't scared of you." He flashed me pearly white teeth.
A rich bubble of laughter burst from me. There I was, a tiny half-blood thing dressed in pink and black, standing in front of the formidable Jerry, and he's telling me he's afraid of me? I laughed so hard I had to put my coffee down. The demons believed me some kind of harbinger of destruction? Hilarity flirted with insanity. Laughter wracked me so damn hard my sides hurt, and my eyes watered.
"Laugh it up, Muse." Jerry spluttered between bursts of his own laughter. "'Cause once the princes realize you're alive, they're gonna be coming for you."
# Chapter Twenty-Five
The bus ride to Salem was a painfully slow experience. Dawn had receded into a quiet shell and refused to speak to me. I wasn't entirely sure if the silent treatment was due to what she'd done or my own monumental fuck-up. As I watched the scenery outside the bus windows change from urban sprawl to leafy green trees, I was also acutely aware that my brother would soon realize Levi was dead. Would he suspect his supposedly deceased half-sister? He knew about Blackstone—where Dawn and I were headed. It wouldn't take him long to find us. Once inside, we were safe. It was the only sanctuary left. I couldn't risk exposing my neighbors to the likes of Val. I needed to get away, to regroup and collect my thoughts. Blackstone was my last chance to figure out my next move. The Institute would be looking for us. Jenna had likely told them about Akil's house in the country. That meant I'd have to plan my next move quickly.
Security lighting puddled around Blackstone. Dawn and I had trudged up the driveway, wrung out, saying nothing. Her power coiled around her and throbbed like the dark thing clenching my heart. What a pair we made.
The night was quiet and calm. It soothed my wrung-out thoughts, but my demon stalked too close to the surface of my mind for comfort. The devil on my shoulder, she whispered, coerced, and tingled my human senses. I would need to shut her down if I wanted to pretend everything was fine and dandy. Another confrontation like the last could tip the scales of my control indefinitely.
I'd expected Blackstone to be empty, but as we rounded the bend in the driveway, I saw that someone was clearly home. A sleek, black and silver Lamborghini had gouged out four grooves in the loose gravel before being discarded outside the house. I glanced at the car as we passed. Low to the ground, shaped like an arrow, its sleek lines and undulating curves gave it the appearance of traveling a hundred miles an hour while parked.
Two steps past the Lambo, a wall of heat blasted across my skin. I jerked back, pink human flesh firing off pain receptors in my brain. A rich curse followed. If I taught her nothing else, Dawn would have a colorful new vocabulary. Between us and the house, a wall of almost tangible heat blocked our path. I could call my demon, but I really didn't want to risk having her back in my skin so soon.
_Let me out. Let me play. This heat is nothing_. _We hunger. We devour. We destroy._
I gritted my teeth and gave her the mental equivalent of a shove. _Back off, bitch. I'm in charge._ She snarled. I snarled. Before I could further entertain arguing with myself, I stepped into the wall of heat and drew it into my flesh with an inward breath. Once more, it came easily, eager to join the bubbling chaos simmering inside me. With the heat gone, Dawn followed in my footsteps, silent and calm. I sensed Akil's unique elemental touch slithering around my ankles. It was weak, though. My demon purred. I licked my dry lips. Yes, we would like for Akil to be here. A snarl crawled across my top lip.
"Muse?"
Dawn's quiet voice cooled the lust burning through me. I glanced back at her. So small. So fragile. So freakin' powerful she could unravel my DNA if I pissed her off. "It's okay." I mustered a smile. "I think Akil is here. Do you sense him?"
She nodded, big human eyes widening. Killers shouldn't look like little girls. Was it wrong that I could look her in the eyes and feel sorry for her while also fearing her? She was terror, camouflaged in the body of a nine-year-old. What must she be thinking? How would her young mind process what she'd done? Did she care?
After entering Blackstone with the hidden key, I followed the beckon of Akil's element and came to an abrupt halt in the lounge doorway. Dawn peeked from behind my leg and sucked in a tight yelp.
Mammon lay sprawled in front of a cold fireplace, wings draped over him like a black sheet over a corpse. The marble floor had cracked beneath him, likely from heat stress. The walls around the room bore the scars of an inferno. The ceiling had a layer of soot so thick it looked like the night sky. I could only assume the fragments of fabric and metal scattered here and there were the immolated remains of the furniture.
"Is he alive?" Dawn whispered.
"Yes." The sound of his bellows breathing confirmed it, but the lava veins tracing across his skin barely glowed. I inched closer when Dawn's hand on mine stopped me.
"It's okay," I said. "I don't think he'll hurt me." I would have welcomed my demon, but the symbols etched into the construction of Blackstone held her back. Sneaking up on an unconscious Prince of Hell while wrapped in my fragile humanity wasn't the best idea I'd had all day. One swipe of his hand could cave in my skull. "Dawn, it might be best if you went to your room. Do you remember where it is?"
She nodded and hurried out of the room. Only once she was safely out of earshot did I turn back to the Prince of Greed.
"Mammon..." I whispered.
Seeing him sprawled in front of the fireplace seemed deeply wrong, like birds on the ground or rivers flowing upstream. Mammon had always been a force of nature, a natural disaster that threatened with his mere presence. To see him face down on the floor and vulnerable disturbed both halves of me on a deeply primal level.
With heavy steps, I shirked around his wing tip and traced my gaze across his muscular shoulder, over his bicep, his forearm, hand... and flicked it to his open eyes. _Dead eyes. Black. Empty._
My breath caught, and my heart fluttered. What was wrong with him? How long had he been like this? Who could have hurt him? "Mammon?" I inched closer and crouched on my heels beside his hand. The heat rolling off him should have been unbearable, but I felt his power as little more than the warmth of the sun on a summer's day. "Akil?"
His black eyes blinked and widened. He snorted air, breathing it into him. The entire musculature of his body quivered. I had a moment to realize I should get out of his way, when he lunged with alarming speed. I sprang back, stumbled, and fell on my ass with a grunt. Mammon knocked me flat on my back. He braced powerful arms either side of my head. Rigid thighs fenced me in, and his vast obsidian body arched over me, muscles rippling, but he didn't touch me. Jesus, I'd never been so close to him while so completely human before. My head swirled, eyes stinging. Tears slipped over my lashes and dried on my cheeks. His gaze pulled me in while at the same time repelling me, urging me to look away.
Mammon bowed his head and inhaled at my neck. My skin briefly cooled as he drew the hot air into his lungs, but the heat quickly returned when he sighed the breath out again. His vast wings settled either side of us. I struggled to swallow, my mouth as dry as sandpaper while my throat burned. If he fell on me, he could easily crush me and would most certainly burn me.
I gave my demon a mental tug, but she butted up against invisible barriers. A ripple of power spilled through me, just enough that it no longer hurt to _see_ him, and a violent tremor shocked through his body. He snarled. Black lips undulated over fangs the size of my fingers. I told myself if he were going to kill me, he'd have done it already. And then it occurred to me that killing might not be the first thing on his mind. I flicked my gaze down the crevice between our bodies. _Oh shit._
"Okay, big guy, I can't summon my demon here, remember? I'm just little ol' me, crunchy on the outside, chewy in the middle. Please don't act on those thoughts in your head right now." I'd have shoved him back if his skin wouldn't have caused me third degree burns. I seized a breath of sweltering air and summoned some authority. After what I'd dealt with over the last hell-knows-how-long, I could sure as hell tame a sexed-up Mammon.
"Mammon, Prince of Greed." I held his stare, denying the headache punching through my skull. "Back off."
He thrust his head forward, too close. A blast of heat tightened the skin on my face. I cringed and turned away. Tremors rolled from the tips of my fingers to my toes. Okay, so maybe using the authority-voice had been a very bad idea. I'd forgotten he liked it when I fought him.
He pushed up, herculean arms acting like hydraulic rams to heave his bulk off of me. Sprawled on my back beneath him, I could do little but watch with a mixture of awe and fear as Mammon peeled apart. The hand that went to his head flickered from volcanic black to tanned bronze, claws receding and then punching from his fingers again. His body reshaped, drawing the parts of Mammon inside, and then remaking and reshuffling demon flesh into human skin. It took time. Seconds, minutes, I don't know how long. I couldn't move, inexplicably fascinated as lashings of power knotted together, peeled apart, then tangled into the shape of a man.
Akil collapsed, naked and trembling beside me. Perspiration glistened on his chest, beaded over slick muscles, and trickled into the valley of his navel. I forced my focus higher, where he rested the crook of his arm over his face, hiding his expression. His breath sawed through gritted teeth.
I blinked, stunned into silence. He was okay. At least he was alive. That was good, right? I got to my knees, pinching my clothes away from my sweat-soaked skin. The stifling air inside the house crowded me. I needed to get away, to get some cool air into my lungs.
"You were dead." Akil's barely human voice grated from the back of his throat. He turned his head toward me, and I wasn't sure if his face was wet with perspiration or tears. It had to be sweat because the alternative was unthinkable.
I opened my mouth to explain but found my voice had abandoned me. Where did I start? Stefan, Dawn, Levi... the dead Enforcers. He couldn't help me with any of it. They were my mistakes. My problems. Akil couldn't save me from myself. Somewhere down the line, I'd stopped expecting him to.
"I should go." I climbed onto unsteady legs and, wiping the dampness from my forehead, I stumbled for the door.
Akil choked on a dry laugh. The ragged sound of it stopped me a few steps from the doorway. Turning back, I swallowed hard. He still lay on the floor, a goddamn picture-perfect man, apart from the shivering and twitching and the haunted wrung-out look in his eyes when he turned them on me.
"You should stay. You need to stay."
"No." Staying was a terrible idea. Every second I lingered, the urge to wrap him in my arms grew more immediate. "I thought the house was empty. I didn't realize you were here. Quite honestly, I've not thought about you for weeks." I could talk the talk, but when I watched him drag himself to his feet, stagger and sway like a drunk, my conviction fell to pieces. My demon stalked too close to the surface. Raw emotion teased around the edges of my control. I battled old urges and shoved the demon back, only for her heat to spill through me again. She wanted to go to him, to dance in the fire. "I can't do this." I turned away.
Akil's solid embrace fell on me from behind. I immediately lashed out, only to find myself planted against the wall. His deliciously spicy, otherworldly scent burned my senses. I tensed to shove back, but he pinned me still, rigid naked muscles smothering me. A growl rumbled through him, like distant thunder. A warning. It stirred my instincts. My responding growl came easily. "I told your alter-ego Mammon to get the fuck off me. If you don't let me go, I'll fight like a demon until you do. In the condition you're in, I might even have a chance."
He bowed his head and sucked in a breath just as Mammon had done moments before. His broad chest expanded against my back. I _would_ fight him, but given his current state, I wasn't entirely sure if fighting would help me.
"Listen well, Muse." His words slurred behind a melodic accent, barely English, certainly demon, "I am revealing a fragment of my soul to you, here and now." He hesitated, as though waiting for me to interrupt or perhaps contemplating his next words. "I am chaos eternal. I desire everything this world and the next offers. I am greed. I hunger." A snarl punctured his words. "Oh, how I hunger... I want the pathetic mortals of this world to bow before me. I want all that they own, all they desire, every marvelous creation of theirs, but there is only one thing in this world that I need, and that, Muse, is you." He leaned closer, rapid breaths whispering on my neck.
A flush of heat washed over me. I twisted in his embrace and pressed my back against the cool, hard wall. Akil planted his hands either side of me. Amber-rimmed eyes bored into mine. I'd peered into Mammon's eyes in much the same position before, only now we were vertical instead of horizontal. "This can't end well, Akil. You know that." No matter what he said, it would always end the same. He'd try to evict Damien. He'd slip his power into the heart of me and seduce my soul.
He licked his lips and said very carefully around sharp teeth, "You are killing me, Muse."
A shiver trickled through me. Fear? Maybe. Desire, lust? Certainly. He was too close, crowding me, filling my senses and clouding my thoughts. In those moments, he was all I knew, my anesthetic, and it was bliss. I needed to forget. I wanted to push the pain of reality away, to drown the horror of my own capabilities in the overbearing presence of Akil. But if I let him, he'd steal the last thread of my humanity, pluck it right out of me, and toss it away. Did he know how close I was to losing my mind? Could he sense the lure of chaos whispering to me? I gently planted my hands on his slick chest and soaked up his feverish warmth. His body quivered, and those micro movements just about undid me. When I flicked my gaze to his face, the raw emotion I saw seared my conviction. He bowed his head and sunk his hand into my hair. He pressed his scalding cheek against mine. I couldn't slow my racing heart or pull back the sharp intakes of breath. I didn't want to.
"I lost you," he whispered. His lips brushed mine, and the promise of a kiss fizzled between us, so damn close I locked my teeth together, refusing to succumb. His element flushed over me, a rapid wash of heat that summoned a storm of emotion from the darkest depths of my half blood body. I gripped his broad shoulders, intent on shoving him off, but my arms wouldn't obey. I dug my nails in, hoping to hurt him, but his growl sent a wave of sparkling lust flooding through me. A short gasp escaped my lips as the reins of control slipped away. He nipped at my mouth and swept the tip of his tongue out, testing my resistance. I had none to give.
"Akil–" Desperation clipped my voice. I was about to break, and he knew it.
He lunged in and captured my mouth with his. That tiny part of me that knew this was wrong faded into the background, smothered beneath a roaring need to have him chase away the horrors stalking my thoughts. I laced my fingers into his hair and pulled him into the ravaging kiss. I attacked him as though starved. His lips burned, his teeth nipped, and his tongue swirled. This couldn't happen. In seconds, he'd try to dive inside me to dislodge the dark parasite coiled around my insides. He'd rip out my cancerous parasite and take my humanity with it. I couldn't let him do that. This wasn't right. So why wasn't I pushing him away? If I told him to stop—really told him—he would. Why wasn't I saying the words? That damned demon lust was too fresh. Too real. Was it her, or was it me? What was the point in fighting my nature? Lust was in my veins, part of my DNA. My demon father was the Prince of Lust. My humanity only went halfway, and my demon had hold of me like never before. She pushed at my control, leaning into my restraint. She had the scent of freedom now and refused to yield.
I broke the maddening kiss, breathless and trembling, and dropped my head back, closing my eyes. "I can't." He trailed scorching kisses across my jawline, fluttering them down the curve of my neck. His hand eased under my top, slid around my waist and clamped against my lower back. He tugged me against him with an animalistic groan and pulled me close. His naked body smothered mine. Even as I knew it was wrong, I melted against him with a shuddering moan.
"Akil..." I closed my eyes, not wanting to see the delicious plain of his chest or the way his arms tensed, muscles tightening. I could feel him though, the heated strength of him, the hardness of his body against the softness of mine. Nerves fluttered low, shortening my breath. "You suffocate me."
"Stop thinking and feel. Let me love you."
Finally, a bolt of anger fired through me, driving back the smothering desire. I shoved, half mad with lust. He leaned back, giving me space to breathe again. "Love?" I snarled. The wicked play of firelight in his eyes pooled wet warmth between my legs. He licked his lips and pinned me in a predatory glare, making it quite clear he had every intention of devouring me once the foreplay was over. What did he know about love? "I can't do this, Akil. Don't do this to me. You know my demon wants you. Don't tempt me like this. I'm not in my right mind. I'm losing control—"
His fingers speared into my hair, locking his palms against my cheeks, forcing me to glare into his eyes. "Stop lying to yourself. This isn't your demon's doing, and you know it. Let me make love to you. Permit me this." He molded his body against mine, driving the hardness of his erection against my hip. "Not as demon," he whispered. "No element. No power. Just as a man." His lips brushed mine. His breathless whispers sawed, rough with hunger. "I need to feel you as a man does a woman. You have no idea what it costs me to say these things to you. You cannot fathom what it means. I need you. I lay the truth before you. Would you turn me away? Right here and now, Muse, I am but a man."
My fragile heart stuttered. Tears welled in my eyes. His words burned like nothing else could. How did he know how to break me so completely? How could he know what I needed? I didn't want him a demon. But as a man? When his lips met mine again, he teased and explored with reverent hesitation. He eased my jacket from my shoulders, the heat of his touch seeping through my clothes to sizzle against my sensitive skin. This was my last chance to pull away, and the decision was mine. I couldn't deny what I felt for him. I wanted him, both halves of me wanted to hide from the world inside Akil's embrace. I could forget the hideous thing crippling my soul, forget the sins hooked into my conscience, forget how everything I touched turned to ash.
I rode my hands over the silken hardness of his chest, skipped my fingers over his shoulders and captured his face. "Damn you, Akil." Drilling my gaze into his, I was already lost. I fell into his kiss, locked my arms around his neck, and dragged him down. My demon purred her approval. Otherworldly heat sizzled beneath my skin, and where Akil's hands explored, desire sparked.
He gathered me in his arms. A flash of static energy sprinkled my flesh, and in the next moment, we were in the bedroom. I registered the dark wood and opulent furnishings in my peripheral vision before Akil's growl hooked into my wandering thoughts and drew me back to him. He backed me up to the bed, fingers teasing up my thighs, sinking beneath the hem of my skirt and riding higher. I dragged my nails down his back, smiling against his mouth as he tensed and bowed me against him. His power had gone, his element snuffed out. I touched his fevered flesh and felt only the trembling of a man in the throes of desire.
I pulled him down and whispered against his neck, "I'm not the same woman you screwed over, Akil. The woman you tried to force my demon from, she's long gone. Do you believe you can tame me?"
"No." His gruff reply was more growl than word, but I heard it clearly enough.
I shoved him back a few feet and watched with perverse delight how his body revealed his need to have me beneath him. Jesus, he couldn't be real. He was too damned delicious to be real. In the low light, his primal masculinity stole all that remained of reason from my mind. I slid my tongue across my lips. He tracked the tiny movement. Removing my clothes, deliberately taking my time, I basked under the heat of his gaze as it roamed and devoured. He trembled by the time I kicked my boots off and stood before him in all my human nakedness. There was a time he'd despised my humanity, wanted only my demon, but there was no sign of that now.
He stalked toward me, gathered me in his arms, and claimed me with a kiss. Arching against him, I threw my head back. His skillful tongue swirled down my neck. I was lost to lust, buried too deep in the madness to care about anything but Akil. Swirling his tongue around a nipple, he licked and teased, spurring my lust higher. He hitched my thigh around his hip, his fingers digging into my flesh before diving into the wetness of my core, stirring my needs into frenzy. Inhuman growls escaped me. I bucked against him, needing him inside. His dark laughter ratcheted my madness higher.
I speared my hands into his hair and snarled a warning. "Stop playing games."
His soft hazel eyes glistened with unspoken promises. His smile spoke of the wicked things his mind had conjured. He cupped my behind and lifted me against him before lowering me onto the bed. He prowled up my body, timeless wisdom burning in his eyes, but no power. His eyes had never been more honest. With the fire gone, he was just a man. I peered up at him through half-closed lashes, drenched with need. As Akil towered over me, the vision of male perfection, ageless, netherworldly, I saw a weakness in him I'd never witnessed before: a knowledge in his eyes coupled with a fraction of regret, not for me, but for himself. He noticed my expression change, but before I could voice what I thought I'd seen, he nudged my knees apart and plunged into me, arching my back and stealing a ragged groan of ecstasy from the depths of my ruined soul.
I woke entwined in Akil's arms, captured against the unyielding strength of his body. Sunlight streamed in through the wall of windows. Akil's steady breath betrayed him as awake, as did the press of his erection against my leg. I purred and stretched beneath the sheets, deliciously languid and broken. Peeling open heavy eyelids, I stilled. Akil's glare brought an abrupt end to my dreamy post-sex state. He stared down at me, head propped on his hand, face stern and eyes cold.
"What's wrong?" I asked.
"The Prince of Envy is dead. By your hand."
"How do you know that?"
He tapped his temple. "I hear them, my brethren. Their reach rarely extends beyond the veil, but the death of one of their own has them grieving. Their voices are distracting. It is part of the reason I spend my time here, away from their whispers." So he had Prince FM playing in his head. It didn't escape my attention how he'd referred to the princes as _them_, not _we_. He didn't include himself among them. Why? "They are furious," he added with a scowl.
"What makes you think I killed him?"
The corner of his lips—lips I'd nipped and teased last night—curled up. "Because I know you. When I last saw both you and Levi, he'd trapped you in a cage. Once Stefan's ice thawed, Levi was quite adamant he would draw you out, using Dawn as bait. Your apparent death didn't change his plans for the half blood girl. He was a fool, blinded by prejudice and thankfully quite ignorant of the power of half bloods. Are you going to deny your involvement in his demise? I'd like to listen to you try."
I quickly darted my gaze away and dropped my head back on the pillow. If he looked into my eyes, he'd see the lie. "Yeah, that was me."
"How?"
"It doesn't matter—"
He gripped my jaw and tried to pull me to face him, but I growled and jerked my chin free.
Muttering a demon curse, Akil rose from the bed. "Your timing is somewhat imprudent." Liquid sunlight flowed over the smooth skin of his back. I propped my head up, brazenly admiring how his muscles flexed and rolled.
I could still taste him on my lips, still feel the throb of his touch on my skin. "Levi deserved it. He had Dawn in a cage. He'd toyed with her like she was worthless." My voice fractured, prompting me to clear my throat. I'd been a demon plaything. Memories bubbled but didn't surface.
He turned to face me, his expression a hard mask of disapproval. I arched an eyebrow and allowed the sight of his nakedness to fend off the reality I'd been trying so hard to forget. It didn't take much effort to recall where on his body I'd teased my tongue, or dragged my nails down his honeyed skin. "Don't go."
Even as I said the words, he flicked his wrist and clothed himself in tailored suit and amethyst colored shirt. "You don't kill a Prince of Hell and walk away, Muse."
I sighed, mourning the loss of his body. "That's exactly what I did." Flinging the sheet back, I trailed a fingernail down the valley of my waist to crest my hip. His gaze wandered before he remembered himself and shot me a scowl. "Oh, c'mon." I scoffed. "They call me the Mother of Destruction. I was living up to expectations."
His eyes narrowed. "The Mother of Destruction? Who told you that?"
"Levi. Right before I kicked his not-so-immortal ass into the underworld."
Akil crossed his arms. A muscle jumped in his jaw as he ground his teeth. "You have no idea what you've done." Ah, but his lips fought a smile. "You killed a member of the Dark Court. A member of the Court hasn't fallen for a millennium. Not since the Queen..." His eyes glazed over for a few seconds. He shook his head and focused on me. "They aren't going to let this go unpunished, Muse." I'd seen Akil angry, and the expression on his face wasn't anger. The slant of his voice suggested pride. Being demon and a being of chaos, I imagined my crimes were tantamount to heroism in the netherworld. Chaos followed me wherever I went.
"I killed Enforcers too. I don't suppose Adam'll let that go unpunished either."
Akil's expression ticked, surprise widening his eyes before he shut it down. He spat out an ancient word that could only be a demon curse. "Your return and my... lapse." A curious rumble emanated from the back of his throat, not quite a growl. "This is... unexpected. When David Ryder told me how you'd died, I believed him. How is it possible he lied?" He growled, the surprise back in his eyes. "The Enforcer looked me in the eyes and lied. To me."
_A taste of your own medicine._ "No, he didn't lie. He believed I was dead. Everyone did."
Akil closed his eyes and sucked in a shuddering breath. He opened them again. "Why didn't you come to me?"
Because I no longer needed him. I sighed. This whole mess wasn't going away, despite my best attempts to pretend it was. "I thought you were in the netherworld. Were you here? All this time?" Had he been mourning me? Was that why he'd been virtually comatose when I'd found him?
He tilted his head curiously, perhaps really seeing me for the first time, taking in my bottle-blond hair, slim frame, and no doubt putting that image together with the black-hearted demon who killed a prince and set a dozen Enforcers ablaze.
I squirmed a little under his penetrating gaze. "What was I meant to do? I've got demons queuing up to slit my throat. Val, Levi, not to mention the vile bastard rooting around my soul. I had to keep that little girl safe, the girl you dumped on me, by the way. When you can't beat 'em, join 'em, right?" My fake bravado was almost enough to paint over the cracks in my fragile emotional state. Although the way Akil's gaze penetrated, I wondered if he could see right through those cracks into my swirling darkness. He couldn't know the gut-wrenching fear I was harboring for my waning humanity, could he?
"Where is the half-blood girl?"
"In the room down the hall."
A faint smile crept across his lips, and his attention wandered again. This time, I felt the skim of his gaze like the touch of his hands. He knew it. The hungry look in his eyes told me he'd like nothing better than to relive the erotic memories we shared. "You are a vision of temptation."
I returned his smile. I'd felt something in him as we'd lain together as man and woman. In all the years I'd slept with Akil, we had never reveled in one another like we had in those hours. Sex had always been a raw act, a physical need, not an emotional one. I'd never woken nestled protectively in his arms as I had moments ago. He'd never told me he wanted to love me the way a man does a woman. Last night was different on so many levels, and some of those levels terrified me. He had said he wanted to 'love' me and then corrected himself by adding 'make love to me'. I wasn't naive enough to believe he loved me. I'd been down that road before, but he felt something. His reverent touch had confirmed as much, and considering where I'd come from and who I was, my heart just about shattered with pride at being the tiny, insignificant half blood standing beside a Prince of Hell. My demon purred in agreement. I allowed the verbal equivalent to ripple at the back of my throat and watched Akil's gaze splinter with fire.
He nodded at my unspoken words as though sensing my thoughts. "There are matters I must tend to. The Dark Court asks after me. If the rumors are to be believed, the Mother of Destruction is not dead."
"Will you tell them I'm sprawled naked in your bed?" I purred.
His smile twitched. "In case you hadn't noticed, I am the master of half-truths. I can manipulate the Court. I've been doing it for years." He grinned, baring sharp white teeth before vanishing in a burst of static.
It didn't escape my attention that he hadn't answered my question.
# Chapter Twenty-Six
Time alone with my thoughts was the last thing I needed. It wasn't long before the anesthetic effects of sex with Akil wore off, and I was faced with some cold, hard facts. Fear of what I'd become gnawed my bones. My demon stalked happily around my head while guilt, remorse, and disgust churned my gut. I found a bottle of red wine in the cellar and sat at the breakfast table, glowering at the corked bottle and the empty glass beside it. I should have been stronger. I knew that. My demon was my responsibility. I was meant to be something better than this weak-willed woman I seemed to be, and yet I sought out means to forget what I was slowly becoming. _A monster._
I poured the wine. Akil was no different from the wine in that glass. He could offer me temporary reprieve, but it didn't solve any of the problems. In fact, he complicated matters with his dark words and even darker needs. I needed help, the kind that wouldn't come from drinking myself silly or sleeping with the sweet-talking Prince of Hell. If I was the Mother of Destruction, I was surely running headlong down the path of self-destruction with no means of escape.
I huffed out a breath and spread my hands on the countertop. Stefan was the only person who could possibly understand how much I terrified myself. Shit, the things he must have been thinking after killing those Enforcers. We were both so terribly damaged that our only hope had to be found in one another. If only he'd agreed to run away with me where nobody could hurt us and we couldn't hurt anyone. We could have fled, escaped all of this, but to what end? Dawn would have been trapped in a cage. I could never have left her. What ifs weren't going to change anything. Stefan was gone and, in all likelihood, had turned full-demon by now. The same fate awaited me if I didn't get a grip.
Was this Dawn's future? There had to be a way out for her. She was powerful beyond anything I'd ever witnessed before. She wielded an element I didn't even begin to understand and did so with deadly efficiency. Had it not been for Ryder's intervention, she might have killed me. That was a sobering thought. Akil had said she was powerful, but Dawn was something else. No wonder the princes squabbled over her. She could kill an immortal. That made her demon kryptonite. But that erroneous accolade now rested on my shoulders. I trailed my gaze down the dark hallway, knowing I should check in on her... but finding myself hesitating. She'd have questions, and my answers weren't going to be happy ones.
There were two other half bloods out there somewhere, subjects in the Institute's Operation Typhon. Were they just as damaged? Were they strong? Did they beat the system? Had they needed help like I did?
I picked up the wine glass and admired the swirl of burgundy liquid. I'd tried to help Stefan once. _Half bloods don't get happy endings._ I'd destroyed any hope I'd had with him, despite my best efforts. Was I just delaying Dawn's inevitable destruction? No, I had to believe the little girl asleep down the hall could have a good life. It was too late for Stefan and me but not for her.
I tasted the wine, let it roll around my tongue, and swallowed. _The Mother of Destruction..._ What did it mean?
An alarm chimed somewhere, alerting me to movement outside. I checked the CCTV feed on the little flat-screen TV in the kitchen and saw Jenna striding toward the back door. The Lambo was the only car in the drive, evidently Akil's. I sighed. They'd found me.
I answered the door before she had chance to ring the bell. Glass of wine in hand, bed-hair, and dressed in some ill-fitting old clothes of mine, I must have looked as bedraggled and bemused as I felt.
She arched an eyebrow. "I thought I might find you here."
I raked my gaze over her. Her jacket bunched around a sidearm at her hip. I leaned out and checked the driveway and then the pale blue wash of sky. "No backup?"
"Just me. May I come in?"
I couldn't summon my demon inside the house. She knew that. In a straight up fistfight, she'd probably win. I was fast. I had some tricks up my sleeves. Ryder taught me well, but I didn't have her years of training. "I don't think that's a good idea."
"Is Akil here?"
I leaned against the doorjamb and sipped my drink. "What do you want?"
"To talk with you."
I wondered if Akil had stashed any whiskey in the house. "How many did I kill?" I echoed the very same words Stefan had asked me.
"None, miraculously. But it was a close thing. Two are in the hospital. Their burns won't kill them, but..." She shrugged a shoulder.
I clutched the doorjamb as my vision wavered with relief. "Are you here to take me in peacefully? Avoid a firefight? Is that it?"
"No. I er..." She moistened her lips and looked away. "The Institute doesn't know I'm here, okay. Something's happened. I need your help."
I laughed. "Believe me, I am the last person on this earth you want helping you." I stepped back, giving her room to step inside. "If you knew what was good for you, you'd turn around and walk away. Get as far away from me as possible."
"I don't think I can," she said quietly.
She'd come all this way to talk to me? It didn't ring true, but I was beyond caring. I checked the tree line again, expecting to see Enforcers spilling from the forest. The fresh morning air was sweet on my tongue, the pine-scented breeze cool. I listened hard but heard only the undercurrent of the breeze. "Fine." I closed the door and showed her to the kitchen. "I'm having breakfast." I lifted my glass. "Want some?"
"Muse..."
I shrugged at her motherly tone of disapproval. "Yeah, I'm a wreck. I know it. You don't have to beat around the bush."
"What happened to you?" She tucked her hands into her pockets and squared her shoulders.
"The same old shit. Don't worry about me. I'll survive. I always do. What did you come all this way for?" I leaned against the breakfast table.
Jenna settled against the countertop, gaze evasive, body restless. "Do you remember when your brother tracked me to the mall in Salem then brought me here?"
I tapped my nails on the table. "Yes." It wouldn't be long before Val showed up again. If the Dark Court suspected I was alive, Val would soon hear about it. He wasn't a prince, but he was well connected.
"He er..." She swallowed and bowed her head. "When you tried to save me from him, after he... Y'know, when you were unconscious..." She shifted her stance and sighed. "Damn. Listen, I'm not easy, you understand? I don't usually..."
I narrowed my eyes. "Spit it out, Jenna."
"That day. Before he took your little girl and while you were out cold, Val... Ah, damn, he worked me over with whatever magic he has. Okay? I mean, he really went to town on me. Dammit, this is harder than I thought..."
"He got to you." I recalled how my brother had crowded Jenna against the car, smothering her with his netherworldy presence.
She sighed. Tears glistened in her eyes when she looked up. Until that moment, I'd never really felt much of anything for Jenna. She was the infallible Enforcer, Stefan's 'friend,' the type of woman I wanted to be. Driven, passionate, committed, perfect. Now, as I looked at her, I saw another life torn apart by demons and felt a tangible weariness drag me down. When would it end?
"My god, Muse," she whispered, "it was... wrong, but I wanted it. I still do. He –" Her throat moved as she swallowed. "He comes to me. He's been coming to me since that day. Jesus..." She chewed her lip. "He asks me things about you, the Institute, and I tell him because I can't bear for him to leave me without..." She swiped at a tear. "...without him screwing me."
"Oh, Jenna..." The bastard. "I'm so sorry."
She gave her head a few sharp shakes. "I thought maybe I could deal with it on my own and stop him somehow, but he's too strong."
I retrieved a glass from the cupboard and poured her a generous helping of wine.
Her hand trembled as she took it. "He came to me last night." Her focus wavered, memories clouding her eyes. "I can't help telling him things. The way I am with him, I'm not fully aware of what I'm doing... until afterward."
Lust was a madness. I knew it well. "What did he say?"
"He said Dawn was missing and that the Prince of Envy was dead. He knew in his blood it was you, and he asked me if you were alive. I told him." She gulped back a few mouthfuls of wine and wiped the back of her hand across her lips. "I'm so sorry. I had to find you. This was the only place I could think you'd go."
I spat out a curse. "What did you tell him about the Institute?"
She sobbed. "Everything. And what I didn't know, I found the answers to because I wanted to please him." She groaned. "It makes me sick, knowing what I've done, and I still want him. How can that be possible?"
"It's not you. It's his power. He's the Prince of Lust's first-born son. You didn't stand a chance, Jenna."
"He asked about Operation Typhon and half bloods, Muse." She saw me tense. "I didn't know what it was to begin with. I asked Ryder. He said it was a breeding program for demons. Something about creating weapons. He's the weapons guy. He should know, right? But he clammed up. I asked Adam. He denied it existed."
The fact that Ryder knew about Operation Typhon didn't entirely surprise me. His name had been all over the file I'd got a glimpse of. I trusted Ryder more than I trusted myself, but Jenna's words had me rethinking my opinion of my old friend.
Jenna's gaze said she knew more, and it wasn't going to be good. "Go on."
"I broke into Adam's office. This isn't me. I wouldn't have done it, but... I need him."
"It's okay." I shivered. Val's hideous power sickened me. "Tell me everything."
"I stole the file and gave it to him."
I groaned. "Did you read it?"
"Some of it. The Institute has been experimenting on half bloods in a big way. It's not just you and Stefan. There are others and more in other cities. But they're killers, Muse. It's terrible. They're caged animals, not really human."
Nausea pooled saliva in my mouth. I gulped it back, swallowing with it the rising tide of rage. "Val knows this..."
"His name is in that file, but they don't know much about him. I probably know more." She threaded her fingers through her hair.
"He controls the half bloods in the netherworld. Trades them like cattle." I downed my wine and refilled my glass. "When he learned that I was going to be sold to the Institute, he put a stop to it. I can only imagine what the Institute is doing offends him. Everything this side of the veil offends my brother. The fact he must breathe the same air as humans pisses him off."
"What will he do to the Institute?"
I met her gaze. "I have to worry about what he'll do to me before I can worry about the Institute. I killed Levi. I have Dawn, and we know he wants her back. He was working with Levi. I don't care enough about the Institute to help them dig themselves out of a hole of their own making."
She nodded, her eyes unfocused again as she chewed on her lower lip. "What am I going to do?"
"We'll think of something." I had no idea. Val was a terrible force to be reckoned with. One touch of his wings had rendered me unconscious. He'd dangled lust in front of my eyes, and I'd gladly thrown myself at his feet, as weak as a kitten. I couldn't imagine the horror Jenna had been living with, knowing he had her under his control and liking it. He'd be coming for me. And for Dawn.
"I need to check on Dawn." I left Jenna alone with her thoughts and wandered through the sprawling house, trying not to think about the physical and emotional numbness spreading through my body. I should be relieved. I hadn't killed anyone. That was good news. So why didn't I feel like shouting from the rooftop? Grim realization tugged the corners of my lips down. It wasn't a relief because I'd already accepted my demon was very capable of killing. Therefore, so was I. The lines between us were eroding.
Dawn's room was empty. The bed had been slept in and her bunny lay sprawled on the pillow, but she was gone. "Dawn?" The house was too damn big. I checked each room, my anxiety notching up a degree with each passing minute. She had to be here. She wouldn't have left. Not without the bunny. Nobody could get into Blackstone. The symbols kept all demons out, apart from Akil. I called her name, the pitch of my voice increasing as dread pooled in my gut.
The touch of Akil's element tugged through me as he called his power from somewhere inside the house. I turned on my heel and jogged back through the house until I found him in the kitchen, pinning Jenna to the wall, hand locked around her throat.
"Akil, put her down."
He snarled. "I don't take kindly to Enforcers on my property." Heat haze rippled the air around him.
Jenna's wide eyes locked on me. She groped for her gun, but Akil captured her hand and pinned that to the wall too. Leaning in closer, he breathed in through his nose, drawing her scent into him. "She has Valenti's scent on her." He swung his gaze back to me. Embers fizzled in his eyes, a sure sign he wasn't happy.
"I know." I sighed. "She came here for help."
He yanked her to him, growled through sharp teeth, then threw her to the floor between us. "She's his minion and the reason he waits outside."
I hissed, my fear for my brother like acid in my veins. "Val's outside?"
Jenna staggered to her feet, wild eyes finding me. "I'm sorry." She wheezed and spluttered, gasping air. "I couldn't have denied him even if I wanted to. I was supposed to lure you out, but I couldn't do it. Please believe me." She gave me a wretched stare, her self-disgust evident in the savage downturn of her lips.
I couldn't deal with her right now. Val was outside and... "Dawn's gone."
"Yes," Akil replied, smoothing back his hair. "Her departure was necessary."
"Oh god... What have you done?"
"I did the right thing, as I told you to do weeks ago. She's far too volatile to be allowed to roam free. If any of the other princes claimed her, they'd very quickly turn her against the rest of us. She is chaos, Muse. You saw as much when she killed Levi. Yes, I know it wasn't you. You might well be the Mother of Destruction, but that little girl is raw chaos inside the body of a nine-year-old human."
No, he couldn't be telling me this. This wasn't real. "Akil, what exactly have you done?" My hands clenched at my sides, fists aching as my muscles strained.
Firelight played in his eyes. "She's with the Institute."
My balance tilted out from under me, and my vision blurred. Staggering back, I fell against the countertop. "No. No, Akil. You aren't telling me this. The Institute?" I slumped forward and concentrated on my breathing because, if I didn't focus on something other than the rage bubbling up from the depths inside, I was going to lose control. I heard myself repeating "No" over and over, even as my element thrashed inside me. "She's just a little girl... Just a little thing... She deserved a shot at freedom, you son of a bitch."
"She's gone. There is nothing more to be done."
I swung my head around and snarled, welcoming my demon as close to the surface as she could get. "Get her back."
He blinked and held my stare. "This is not negotiable."
"You're afraid of her, aren't you?" I grinned. "The mighty Prince of Greed is afraid of a nine-year-old girl."
"She plucked an immortal chaos demon apart at the molecular level. Yes, I'm afraid of her. I happen to enjoy living, even after all these years. A whelp of a girl isn't going to threaten my existence."
"You selfish bastard. She wouldn't have hurt you. She only hurt Levi because he was going to kill Ryder and hand us both over to Asmodeus. She trusted you."
"That was her mistake." He arched an eyebrow and gave me a bored look. "Did you witness her killing Leviathan?" He saw the answer on my face. "Then you know how wild she is. Look past her human vulnerability, and see the demon inside her soul, Muse. It would be remiss of me to let a threat like that walk free."
She would have killed me. I already knew that, but I'd tried to convince myself that I'd understood her. "Do you have any idea what they'll do to her?"
"She is in the only place the demons cannot reach her."
I shoved off the counter and strode up to him, crowding his personal space. Jenna watched from the sidelines as I faced off with the Prince of Greed. "Take me there. Now."
"No." He glowered down at me, fiery eyes fierce with defiance. "If you go there, all you will accomplish is your own incarceration. Is that what you want?"
"I don't care about me, Akil. I'm already lost."
Jenna chose that moment to bolt for freedom. I cursed and lunged after her. Akil snatched my wrist, pulling me up short. "Let her go," he growled.
I tugged, but his unyielding grip held me fast. "Goddamn you, Akil. She'll tell Val where Dawn is. You think he's going to let the Institute stop him getting what he wants? He'll send Jenna in. She'll get Dawn for him. I have to stop him. I have to get her out of there before they ruin her like they did Stefan, like they did me. Let me go, Akil. Just let me go."
A flicker of acknowledgement narrowed his eyes. He exhaled a curse and released me. I ran after Jenna, only slowing as I approached the open back door. I couldn't see her outside, nor could I see my brother. But that didn't mean he wasn't there. I stepped out into the blazing sunshine, expecting to feel the touch of his power, but the absence of his element told me all I needed to know. He was gone. He could flit between vast distances just as well as Akil. He might already be at the Institute, manipulating Jenna.
I climbed into Akil's Lamborghini and jabbed the start button. The car burst into life with a hungry, resonating growl. The instruments lit up. I flicked the paddle shift into gear and spun the car, kicking up a wave of gravel in my wake. Akil could try to stop me at any time. We'd fight. I'd win.
The supercar twitched under my control, champing at the bit as I planted my foot to the floor. _I'm coming for you_ , _Dawn._
A plume of black smoke marred the pale blue sky over Boston. I spotted the cloud a few miles outside the city limits, but as I crawled the Lambo through the early morning traffic and the cloud billowed higher, an unsettling sense of dread crawled across my flesh. I flicked on the radio. It didn't take long to locate a news broadcast. A warehouse complex was ablaze. I flicked to another station. The reporter was giving a breakdown on an international company charged with policing the demons: the Institute.
Disregarding traffic laws, I demanded everything the Lambo could give me and plowed through the clogged main routes. Akil must have left Dawn at the Institute while I slept, no more than a few hours ago. What had happened in that time? This had to be Val's doing.
I abandoned the Lambo as close to the Institute as I could get, outside a barrier of fire trucks and fought my way through a crowd of onlookers. Black smoke blotted out the sun. Thick, rolling shadows rippled across the walking wounded huddled around ambulances. It was chaos. There had to be hundreds of people spilling from the Institute doors.
I searched the faces for any signs of Ryder or Adam but couldn't see them.
"Ma'am, you can't go any further." A stocky fireman in all his firefighting gear blocked my path.
"I work there. I can help."
"There's nothing you can do."
A crackle on his radio drew his attention. I skirted around him and bolted between two fire trucks. Two rigs had extended their ladders over the flat roof of the warehouse complex. I couldn't see any flames, but smoke rolled skyward with no sign of letting up. I called my element and immediately felt the blast of heat inside the building and something else, the sickly, abhorrent touch of Dawn's unique power.
A line of firefighters helped a steady stream of smoke-damaged people spill from the doors. I strode up to them, very much aware of my unassuming appearance. They weren't going to let me inside. I didn't need to see their eyes behind their visors to know that.
In the next step, I threw a second skin of flame across my body and plastered a crazy grin on my face. They all recoiled. The wounded scattered. One of the firefighters barked into his radio, "There's a woman, a-a demon woman coming your way."
"Have you vented the smoke?" I asked, my demon-voice barely more than a growl.
The guy peering back at me gave me a wary nod. "What you gonna do?" He hedged, clearly not sure if he should try to stop me from entering the building.
"Fire demon." I quirked an eyebrow. "I'm going to put the fire out." That was the plan, although I'd technically never tried to extinguish a fire anywhere near the size of the inferno raging inside the Institute. "You goin' to let me pass?"
He stepped aside.
Smoke immediately hindered my advance through the building. I pulled my element into me, not needing to add to the heat already pulsing against the walls. A few stragglers hurried by, coughing into rolled up shirts, their eyes wild with fear. Shoring myself up with a courage I didn't know I possessed, I kept low and ventured deeper into the building.
A closed door blocked my path. From the bubbling paint and terrible weight of heat throbbing the air, the inferno raged beyond. I planted my feet and called my demon, letting her slip inside my skin and protect my fragile human flesh beneath her lava-veined skin. My wing jutted against the ceiling. I drew it in behind me and sucked in a deep, smoldering breath. Time to see if I could tame the flames. Dawn was in there, alive, if the touch of her power was reliable. She'd be terrified.
I closed my hand around the door handle and shoved.
A wave of super-heated air blasted my body. Fire lunged for freedom and gobbled up the ceiling. I sent out a sharp flicker of power, curling my element around the wild roaring heat and drawing it to me in a motherly embrace. The wildfire coiled around me, answering my call. Power fizzled through my limbs and danced across my skin. A groan escaped me as the fire seeped into my demon flesh.
As I walked on, the flames danced around my blazing body, eager to please. The tunnel of fire beckoned. Blazing energy boiled across the walls and smothered the floors. The dark pollutant around my heart throbbed with the beat of the flames, devouring the rush of heat, feeding its addiction. I was walking a thin line of control. So damn thin it might as well have been a tightrope. My demon rode high, turning me into a beast of molten heat, the walking, living, breathing soul of fire. And I was hungry.
I stumbled against a wall, my hand sinking into the charred plywood. A snarl bubbled from my lips. Ahead, the heart of the fire beat for me, calling me closer. It hungered too. It wanted freedom. So did I. I rolled my shoulders and spread my ragged wing, absorbing the heat through every inch of my flesh. Fire lapped at the pleasure receptors in my brain, firing off my ingrained lust for chaos. It was wicked and divine. All the things my demon wanted, I wanted. I was demon.
I threw my head back and laughed. We walked on, demon and woman soaking up the power, reveling in the simplicity of madness.
Charred timbers rained around me. Ashes swirled in my firestorm. Walls collapsed, floors buckled, and I laughed. The roar of the fire gobbled up my laughter and raged higher. It taunted, beckoning me closer to its heart.
The Institute was lost. Nothing was coming back from this blaze. I felt a curious tease of pleasure ripple down my spine at that knowledge. I could help it on its way. Blast the building to ash. I'd done it before in the netherworld. I could raze the Institute, grind it to dust beneath my feet. If I reached for the veil, I'd feel what it truly meant to destroy. Giddy with power, I chuckled. It was what they deserved. They meddled with demons, cavorted with chaos. _Well, chaos always wins, you sons of bitches._
I sensed something human to my left, behind a blackened door. Someone was alive inside. The blaze tempted me. I didn't care for these people. They were nothing, fuel for the flames. But my humanity had not yet died. I stumbled against the door and almost tumbled inside as it gave way beneath my superheated flesh. Fire licked at my wing, teasing me further into chaos, but as I swept my gaze around the room I recognized the wall of books and the old, claw-footed desk.
Adam lay sprawled on the floor, a limp hand reaching toward the door. I cocked my head and looked down at the man I despised. I could hear his heartbeat flutter in his chest and his short rasping breaths. He would die here. All I need do was turn and leave. Demon desires tugged me away. The flames beckoned. The firelight's embrace called. I stayed in the doorway, unable to move.
"Adam..."
His fingers twitched. His heavy eyelids blinked. He rolled his eyes up to me and saw a fire-bathed demon looming over him. I smiled, baring fangs.
"Muse," he rasped, hand reaching.
A splintering crack above us snapped my attention to the ceiling. Fire pooled above. Melting plastic dripped to the floor, and then the entire ceiling ignited, flooding a wash of tumbling orange flame above us. I thrust an arm out and funneled the hungry fire through my fingers, down my arm, and into my body.
"Go!" I snarled.
He tried to heave his bulk off the floor but collapsed, breathless and unfocused. Dammit. It was taking all my control not to walk away. My demon wanted more than that. She wanted to bury him in flame. It was all he deserved. Nobody needed to know how I'd ushered him toward death. _But I'd know._
"Go, Adam. Go now–" A roar was the only warning before the ceiling and its framework collapsed, slamming me to the floor under the weight of debris. I clung onto consciousness despite a jagged tearing pain assaulting my senses. If I lost consciousness, the fire would roar back to life. I had hold of its reins for now.
I clawed at the floor, grating sharp claws through the melting carpet. Black boots. I blinked, and looked up at the man those boots belonged to. A firefighter peered down at me behind his visor and oxygen mask.
"It's a demon. Leave it behind," a voice barked through his radio.
"No!" Adam shouted. A wracking cough almost robbed him of his ability to walk. He leaned heavily on the firefighters hurrying him from the room. "Don't leave her."
I wanted to tell the firefighter to help me up. That I could stop this, but it only came out as a growl, probably cementing the notion in his head that I was little more than an animal. I locked my gaze on his eyes behind the soot obscured visor. _Help me._
The voice crackled through his radio again, telling the crew to fall back. The building was lost.
He muttered something that I missed and then crouched beside me and heaved the metal gantry high enough for me to wriggle out. He headed for the hall.
"Come with me," he said, voice muffled through all his breathing gear. His bright eyes pleading.
I shook my head. "There's a girl here." I could still feel Dawn's power. She wasn't far. "Go." I felt his gaze on my back as I walked into the flames.
I devoured the fire with every step. Dangerous laughter played in my mind the whole time. It was glorious. I couldn't escape the sensation of wonderment. Like a proud mother, I admired the destruction the fire wrought even as I corralled the wayward flames to me.
Dawn's power loomed up ahead in what had once been the cafeteria. As I rounded the ruined corridor, I saw her demon form suspended amid her threads of chaotic tendrils. Black eels of power whipped and thrashed around her, containing her inside a pulsing bubble of dark energy.
Her eyes flicked to me, and treachery burned there.
I tugged on the fire devouring the room and snuffed it out in one sweeping gesture. Smoke drifted and rolled between us, skirting her aura. I caught the unmistakable odor of burned flesh, but I couldn't see the bodies, just dunes of debris, fragments of ashes, papers, and shredded clothes.
"Dawn..." I stepped closer. A lash of power snapped out at me, clearly a warning. "It's okay. I won't hurt you."
"I was wrong." Her innocent voice had twisted beneath the riding power of the demon. She didn't sound like Dawn. She sounded like madness. "I trusted you and Akil. You said not to. You said he was bad. But you trust him, and I trust you."
The dark boiled around her, dangerous and deadly. I had no doubt she could kill me and probably do it in an instant. I might not even see death coming. "I'm sorry. I didn't know what he was planning."
"He said you should have brought me here. Is that what you were going to do? I don't like it here." Her face clouded with shadows. "I unmade them, Muse, and it felt good. That's not right, is it? Is that what you felt when you burned those people to save me? Am I meant to be empty? I'm scared." She sobbed, and then a grin slashed across her face. "I want to do it again."
I clamped my jaw closed. "Dawn, I can help you manage your... gift." Right, like I managed my own. "We have to leave. The building isn't safe. The fire still hungers. Let me get you out, Dawn. Let me save you."
She tilted her head to the side and assessed me. I shivered with the overflowing currents of energy breezing through my veins. Ashes rained from my skin. Around me, the fire whispered promises of destruction. If I didn't save myself soon, I'd be as lost as this building was.
She puffed out a sigh and slumped forward. Her demon fell back from her skin, leaving the vulnerable little girl behind.
"Come..." I shoved my demon back. She snapped and snarled as she fought me for freedom. Gathering Dawn against me, I guided her back the way I'd come. The flames had died down. Behind us, the walls groaned and the ground trembled.
# Chapter Twenty Seven
We stumbled out into a sunlit backstreet, coughing and wheezing smoke from our lungs. EMTs crowded, sirens wailed, people cried. I hugged Dawn close and snapped at the EMTs to get back. We were fine. Others needed their help more than we did. A grumble shook the air around us like thunder. A cloud of ash spluttered skyward as the warehouse complex collapsed in on itself. A savage spike of glee twitched my lips as I turned away. I shouldn't have enjoyed the destruction, but I did. Had it not been for Dawn, I'd have been dancing in the debris.
Dawn's voice reached me through the clamoring madness in my head. "I don't know what I am."
A street away from the simmering remains of the Institute, I planted her down on a curb, away from the crowds, near a closed grocery store, its graffiti-covered shutters pulled down. Crouching in front of her, I cupped her face in my filthy hands and smiled. "It's okay because you're not alone. I won't ever let you go again. I can help you manage your demon. We'll work together, two half bloods, just you and me." I'd lost Stefan. I wasn't losing her. I tucked her hair behind her ear. And maybe, if I could save her, I'd save myself too.
A delicate smile skipped across her lips.
"Muse. Back away. Do it slowly."
I swung my stare over my shoulder and fixed it on Ryder. He had a gun palmed in his right hand, finger hooked over the trigger and Dawn in his sights. Determination hardened his sharp eyes.
"Ryder..." I stood slowly, as he'd said, and turned my back on Dawn to face him. "What are you doing?"
"She killed everyone in the cafeteria, Muse. Jesus..." His hand trembled, aim wavering. He flexed his fingers and regained control of himself. "She pulled them apart, turned them into confetti."
"Ryder..." I licked my dry lips, my throat hoarse. "You can't do this. She's just a little girl." A quiver of power slithered through me. I looked to the right, across the street. Akil stood at the curb, hooded eyes locked on me.
"She ain't no little girl." Ryder blinked rapidly. His lips turned down, and he shook his head slowly. "The Institute is gone because of her. Do you know what that means? The only thing stopping the demons from flooding this city is dust. She's demon, and she's a killer."
"It wasn't her fault." I lifted my hands, palms out. "Akil..." I glanced back, but he'd gone. "Akil brought her here. He had no right."
"It doesn't matter who did what. She slaughtered them. Shit, Muse. I saw it all on the cameras. She's a monster. Half bloods don't get happy endings, Muse. You're too damn fucked up. Every single one. Even Stefan, in the end. Y'know, I thought you might be different, but it ain't possible. The demon inside you calls the shots. Doesn't it?"
Dawn stood beside me, her little hand resting on my thigh. She looked up at me with those doe-eyes, wise beyond her years. "I am a monster."
"No, honey." My heart stuttered to hear her say it. "You just need a friend, that's all."
"I don't want to be this way."
_I don't want to be that demon._ Stefan's words drifted back to me.
A whimper betrayed my internal battle between the need to save her and the need to let her go. I knew, deep in my bones, that Ryder was right. I'd witnessed her power, and she truly was a terrible thing. But I'd come so close to saving her, to freeing her. She was a half blood caught in a storm just like me.
"Please..." I stepped in front of Dawn, shielding her behind my legs. "I will take her away from this, from everyone, somewhere she can't hurt anyone. There must be a way to control her demon." Even as I said the words, I wasn't sure I believed them.
Akil flitted into existence in my peripheral vision. "The princes will find her. They were ignorant of the power residing in half bloods. That is no longer the case."
Ryder breathed hard. His wide eyes flicked between Akil and me, probably alarmed to find Akil backing him up. "Fuck, Muse, get outtah the way. Don't make me shoot you too."
"I can't let you do this, Ryder." I lowered my hands. He would shoot me. I'd always known it. If it came down to this, staring down the barrel of his gun, he would pull the trigger.
He trembled. A sheen of perspiration glistened on his face. He smiled, but it was a bitter, worn out ghost of a smile filled with regret. He snarled at Akil, "Get Muse out of here."
I shot my hand out, halting Akil as I drilled my stare into Ryder. "Akil, don't you dare."
"She can't be saved, Muse." Akil's smooth voice sounded entirely reasonable.
I smiled and tugged on my demon's reins. "Then neither can I." I tasted the flames on my lips, felt them lick across my body. _Let the demon win, and nothing can hurt me again._
Ryder fired. The bullet smacked into my shoulder, engulfing my entire right side in a blast of agony, spinning me. My demon recoiled, leaving me human. I collapsed face down on the road. The smell and taste of my own blood coated my nose and throat. And then, as if the world felt I hadn't been dealt enough of a challenge for one day, I looked up to see my brother in all his netherworldly glory leering down at me. Vast black wings draped me in shadow. His milky-white body gleamed like marble. "Thank you for delivering her to me, sister-mine."
"Dawn, run!" I screamed.
Ryder's gun rang out. A splash of crimson burst across my brother's chest before his flesh soaked up the wound. His muscles rippled and spat out the deformed slug. It bounced on the road between us, reminding me not to fuck with immortals.
Val stepped around me, apparently deciding I wasn't worth the time or energy. I tracked his formidable demon form as Ryder continued to empty a clip of bullets into him. They buzzed about Val, no more bothersome than flies.
Mammon barreled into Val – seemingly out of thin air. His obsidian muscles rippled as he tackled Val and shoved my brother through the storefront shutters with all the finesse of a wrecking ball. Inhuman growls, snarls, and roars resounded inside the store. There was no way on this earth Mammon would let Val take Dawn.
I struggled to roll onto my side and hook my legs under me. I couldn't stand. I could barely sit up. My entire right side throbbed with a mind-numbing pain. Blood soaked my clothes, gluing them to my feverish skin. Through the haze of agony, I fixed Dawn and Ryder in my sights. Ryder loomed over her, the muzzle of his gun inches from her forehead. His aim didn't waver. His hand had never been steadier. Dawn tilted her head up and looked into his eyes. She didn't run, didn't beg, didn't call her demon. She could have done all of those things. She could have killed us all without even drawing from the veil. She was chaos, but in that moment, chaos was controlled by a nine-year-old human girl. She blinked up at Ryder and said two words.
"Thank you."
He pulled the trigger.
"No!" I reached a hand out as the gunshot cracked through the air and echoed down the street. My demon tried to clamber into my skin, but physical and mental anguish drove her down.
Dawn fell back. Her tiny body crumpled in a heap at Ryder's feet. She lay still. The touch of her element had vanished. A strangled cry—somewhere between a scream and a growl—tore from my throat. I smothered the blazing pain under my rage and somehow managed to get to my feet, only for my legs to crumple, dropping me to my knees.
Ryder staggered under the weight of his own guilt and turned away from Dawn's body and from me. He gave a wrenching groan of agony. I didn't care. I wanted to gather Dawn's fragile body into my arms, but I couldn't get to her. I fell forward onto a hand, lifted my gaze through my hair, and whispered, "It's okay, Dawn. Everything is going to be okay. You're free now."
The tears came, sliding down my soot-covered cheeks. It wasn't right. It wasn't fair. She could have survived. She could have lived. She didn't deserve this.
After perhaps minutes, hours, I don't know, I was aware of Akil's warming presence close behind me.
"Valenti has fled," he said softly. "The Enforcers are coming."
I bristled. My demon prowled just below the surface. "Get away from me."
I expected an argument, but in the next breath he was gone. He had brought Dawn here to this hellhole. It was his fault. All of it. He could have stopped Ryder, and he hadn't. He'd wanted her dead too. Everyone wanted her dead. Nobody cared enough to try to help her. What was wrong with this world?
Eventually, the sounds of the city coaxed me back to reality. Ambulance sirens _bipped_ through the crowds a street away. Clouds of gray smoke billowed skyward, but the inferno devouring the Institute was dying. I felt its death in my veins.
I watched, detached and numb, as a black-clad firefighter walked toward me, helmet tucked under his arm. His short chestnut hair was plastered against his head. His face sported smudges of ash and soot. I might not have recognized him if not for the calm blue eyes: the same firefighter who'd helped me in Adam's office.
He crouched beside me, looked me over with a sensitive appraisal, and noted the blood soaking my top. He gave me the most heartbreaking smile. "There's no use in you dying here."
I wasn't sure I had the strength of mind to reply. Behind him, the EMTs wheeled a gurney closer, and behind them a handful of Enforcers bore down on us. Tears blurred my vision.
He tugged his gloves off and held out a hand. "C'mon, let's get that wound looked at."
I shook my head and bit my lip, trying to stop its quivering. "I can't leave. I promised her."
He glanced over my shoulder to where Dawn lay and nodded. "You're banged up pretty bad. Maybe it's time you looked after yourself?" His sincerity spoke of understanding. I examined his features, searching for hostility, but he wasn't Institute, and he wasn't demon either. He was just a normal guy who wanted to help, no strings attached, no ulterior motives. I closed my eyes, fearing what it meant to let Dawn go. I'd pinned more of my hopes on that little girl than I'd let myself believe.
As the firefighter closed his hand around mine and tugged me to my feet, a veil of forced indifference settled over me. I managed a few steps before falling against the ambulance crew. The blissful embrace of unconsciousness stole me away.
# Chapter Twenty-Eight
The Stone's Throw bar had never seen a crowd quite like it. I jostled through the throngs of people, taking note of the armed Enforcers among them. My arm throbbed; a week and it still burned like a bitch. The painkillers were wearing off. I'd been popping so many pills I virtually rattled as I walked.
Ben Stone acknowledged me and gestured to one of his new bartenders to fix me a whiskey. I probably shouldn't have been drinking while my veins were buzzing with drugs, but really, in the scheme of things, I had other things to worry about. Like being a single thread away from disaster.
While I waited at the bar, I couldn't help dragging my gaze across the symbols spray painted across the walls and ceiling. The Enforcers were here en masse, and judging by the incident wall set up along one side of the room, they were here to stay. A map of Boston sat center stage. I couldn't see much, tucked away in the corner as I was, but I noted the locations of a dozen or more fat red circles pimpling the map.
The bartender handed me my drink. I paid, brought it to my lips, and noticed a hushed quiet descending over the crowd. The crawling itch of dozens of pairs of eyes skittered down my back. I took a sip and welcomed the sweet heat of the alcohol as it burned my throat and eased the tiredness in my muscles. I took my time, and all the while, the silence settled over the crowd until only the mumblings from the TV disturbed the quiet.
Licking my lips, I placed my drink down and leaned my good arm on the bar. When I lifted my gaze, upward of seventy Enforcers glared back at me. _Way to make a girl feel uncomfortable._ At least I'd ditched the pink and black persona in favor of my more typical knee-high boots, skinny dark-washed jeans, and my '90s throwback leather jacket. They'd see the gun holstered at my hip. Not that it would do me any good against a mob of demon killers.
I don't know what they expected me to do. Sprout horns and a wing, and roar at them?
Adam Harper's deep voice broke the stalemate. "Muse, join us..."
His Enforcers collectively grumbled their displeasure, but none would argue with the boss. They slowly resumed their conversations, turning away from me in the hope I'd skulk off with my tail between my legs. If only it was that simple.
I carried my drink to where Adam's voice had originated from to find him standing with half a dozen others around two tables pushed together, strewn with maps of Boston. Ryder hung back, leaning against the far wall, thumb tucked over his camo-print pants. His untucked shirt bunched around his gun. He chewed on a toothpick while training all of his attention on the documents. We hadn't spoken since he'd executed Dawn a week ago, and that was perfectly fine with me.
"Are you up to speed, Muse?" Adam asked.
All eyes turned to me. I might have squirmed under their collective no-nonsense stares if any of this actually mattered to me. As it was, not even whiskey could warm my cold heart. I'd shut all the emotional shit away.
"No," I said, surprised at my steady tone. I slid my gaze to Adam. I hadn't seen him since the firefighters hauled his ass out of his burning office. He was lucky to be standing there. My fingers twitched with that knowledge. I'd promised Ryder once that I'd never hurt Adam or the Institute. Well, I'd kept that promise. For what it was worth.
I rolled my sore shoulder beneath my jacket. "I've been in and out of the clinic." They didn't need to know I meant Jerry's place and not the general hospital.
Ryder lifted his hooded gaze. He plucked the toothpick free, picked up a file, and tossed it across the table to me. Fat black letters printed across the cover spelled out the contents: Operation Typhon.
I reached out, instinctively seeking answers, and then paused, curling my fingers into my hand. "What's going on?"
Adam straightened. "We're down, but we're not out. Demons have renewed their incursions with vigor, scouting parties before an all-out invasion. They're breaching the veil in record numbers. Boston PD is swamped with reports. Witnesses are reporting vigilante groups setting themselves up as would-be Enforcers, believing we're not up to the task of protecting this city. We have emergency plans in place, including new premises and several inbound teams, but it's taking time, and the vigilantes are out for blood. They're getting themselves killed." Adam dislodged his glasses and pinched the bridge of his nose. "The calm of a few weeks ago was a lull. What we're experiencing now is the outer fringes of an incoming storm. I've received reports of half a dozen Class A demons in Boston, Muse."
I was a Class A. So was Akil. We used to be the only ones. It looked as though that was changing. "Shit just got real, huh?" I found it hard to sympathize with the assholes who brought this all on themselves.
Adam pushed his glasses back on and gave me the disapproving fatherly stare. "Some, we suspect, may be princes." He puffed out a sigh. "We need you."
One of my eyebrows hiked up of its own accord. "Is that so?" Bet they could have done with Stefan too. What a shame Adam ordered the death of his son and their best Enforcer.
"We need your connections." He tapped a black and white photo of Akil. "We need your expertise." He flicked open the cover of the Operation Typhon folder to reveal an image captured on the Institute's internal video network. A one winged demon stood in a hallway, arms out, wing flexed high above her head, summoning the fire like a magnet calls metal. "And we need your power."
Well, damn. My gaze hooked up on Ryder. He wasn't happy about this, and considering his rigid glower, I could assume it wasn't his idea.
"What makes you think I'm going to work for you after everything that's happened?" I met and held Adam's eyes.
"We're not the bad guys here, Muse. The demons are. If they go unchallenged, if it's true, and there are princes on this side of the veil, people will die. Good, normal, everyday people. This is just the beginning, and you know it. Do it for the people of Boston, for your neighbors, your friends. You're a good person. You know what this means, and you can do something to stop it." His Adam's apple bobbed as he swallowed.
I flicked my gaze across the stern faces of the others crowded around the table. I had no friends here. Most of these people would happily put a bullet in my head, Ryder included. But this wasn't about the Institute, not any more. The demons were coming. "I'll help you. But I'm not dishing out your justice, Adam. I'm not needlessly killing demons for you. I will provide you with intel about the princes, because if you're right and they're here, then you're gonna need more than me. You'll need something not far off divine intervention." Perhaps Dawn could have turned the tide. She'd had the power to kill an immortal. But she was gone.
Jenna eased into the group beside me. A warm smile briefly lightened her lips. I failed at hiding the sharp intake of breath. She nodded, understanding my hesitance. "Playing both sides, Muse. Just like you."
"That's a dangerous game, Jenna." My brother was not to be messed with. Once he tired of her, he'd destroy her in ways I didn't even want to think about.
"I know." She regarded her colleagues with pride bright in her eyes. "We're the frontline. If we fail, there'll be nothing left of Boston."
"Can we count you among us?" Adam asked.
"No." Their collective gasp brought a smile to my face. "But I'm not against you. I promise you that much."
The Enforcers talked strategy, but none would look me in the eyes. I listened, soaking up their camaraderie. I was no longer part of their world, but I never really had been. Always on the outside, that was me. Nowhere to call home. Half demon, half human, wholly fucked. I finished my drink, grateful for the warmth spreading through my otherwise cold soul, scooped up the Operation Typhon file, and moved to leave when Adam's heavy hand clamped around my good arm. He drew me to one side, away from his devoted employees.
"What you did, I won't forget it."
I flicked my gaze down to his hand, which he promptly removed as though I'd scolded him. "Just because I saved your ass, it doesn't make us best buddies. I'm not your hero, Adam. I'm your enemy. Don't ever doubt that." I turned away from him before I really told him what had crossed my mind back in his office. He was a smart guy. He'd have figured as much.
Outside, I stole a few moments to deliberately breathe the slightly briny air of Boston into my lungs and soak the ambience of the quiet street into my pores, letting the city sounds and smells subdue my rattling anxiety. I didn't imagine the creeping sense of unease. If Adam was right, then whatever was happening beyond the veil had reached a tipping point, and those demons who could get out were scrabbling for freedom. Unfortunately for them, the Boston streets weren't demon friendly. They never really had been, but now gangs and death squads awaited newly arrived demons. People would die. Both sides were losing what was fast becoming a bloody turf battle. The body count was rising. Before long, the press would catch on. The fuse was burning down to a whole load of explosive material and the Institute was woefully underprepared, outmanned, outgunned and vulnerable. I admired their tenacity even if it would get them all killed. Never let it be said the Enforcers were cowards. They knew they were fighting a losing battle, but they were going to take the demons down with them.
I closed my arms around the Operation Typhon file and hugged it to my chest. Inside, there would be information about the other half bloods. _They're like animals_ , Jenna had said. The Institute was going to need them, but could they be controlled? From what I'd learned about my half-blood comrades, the answer was no. We didn't have a great track record for control. At least if I stepped out of line, Ryder would shoot me down.
A tight sizzle of heat trickled down the back of my neck, alerting my human senses to the wholly demonic presence behind me. I could have ignored him and walked away, but walking away from the Prince of Greed wasn't an easy step to take.
"Am I going to find your name all over this file?" My voice carried through the crisp night air. Distantly, a siren wailed, but even the very real noises of the city couldn't detract from the netherworldly throb of power he radiated, especially when I felt the tease of his fingers through my hair.
I gasped and snapped my head around, expecting Akil to be standing right behind me. He emerged from the shadows of a blocked-up doorway, as though those curtains of darkness had created him. In the subdued light where color fled, red embers sizzled in his eyes. The fall of his expensive suit accentuated a body I'd recently become intimately reacquainted with. He didn't approach as I'd expected him to, but stood back, reading me, likely waiting for the accusations that burned on the tip of my tongue.
His gaze flicked to the file in my hand, a cursory glance, guarded with indifference. "The contents of the file are irrelevant, as is the past. The Boston Institute is rubble and ash. Their meddling delivered them their well-earned justice."
He was entirely too nonchalant. I'd bought the blasé bullshit from him for ten years. Not anymore. The infinitesimal widening of his eyes, the slight flare of his nostrils, the slippery smile that hardly touched his lips: they all added up to a hint of something like smug satisfaction.
I glanced at the closed doors to Stone's Throw. A murmuring undertone of voices drifted through the still night air. At any moment, my little chat with a Prince of Hell could be disturbed. My fraternizing with Class A demons wouldn't go down well. "There are upward of seventy Enforcers behind those doors. Any one of them would give their right arm to capture you, and you're standing not ten feet from their back door. Are you trying to get caught?"
His heated gaze stayed trained on me with laser-like intensity. "Do you really believe seventy or seven hundred Enforcers could capture and hold a demon of my caliber?"
And there were seven—scratch that—six smug-ass princes just like him eyeing up our world. Hell help us. I licked my lips and watched as the movement caught his molten gaze. He'd scored me with that gaze as we'd lain together, wrapped in the trappings of lovemaking. Was that what he was thinking? Undressing me with his eyes? I couldn't pretend the sex hadn't meant anything, but neither could he. The change between us simmered like an electrical current in my veins. I had the distinct impression I'd somehow dragged him down to my level, and he'd elevated me to his. And there we were, standing on mutual ground, eyeing each other with sharp intent and dark knowledge. I'd seen him lost to grief, heard him beg to be loved, and while he may not have said those exact words, I knew what I'd felt in his fevered kisses and urgent touch. In all likelihood, he didn't understand what was happening between us. An immortal chaos demon could not love. He was incapable of wrapping his egocentric mind around it. Love was impossible for him, and yet... Those things he felt, they were alien to him, and I bet that drove him wild. I let a satisfied smile sit easily on my lips. His fire-touched eyes narrowed, but his smile stayed, curious, uncertain. He looked at me as though I were a puzzle, and the very fact he couldn't figure me out drove him to distraction. Good. He could know exactly how it felt to have the one you love shut you down. He'd done it to me. I was foolish then, naive and weak. That half blood girl was long gone. Whatever I was becoming, I was on a par with him. Equal.
When I spoke, my tone implied an equally blasé and nonchalant attitude. I'd learned from the best. "Things are different."
"Indeed." His statuesque masculinity served to remind me of the demon I was facing off against. Akil's human vessel was a trap, but which one of us had been caught?
"You could have stopped all of this from happening, but you didn't lift a finger. It's your fault Dawn's dead. As sure as Ryder pulled the trigger, you killed her." My calmness didn't sound right to my ears, but the glassy undercurrent mirrored in my thoughts was exactly what I needed. It felt good not to care for his reply as though he couldn't hurt me, no matter what he said or did. I was beyond that.
"Dawn's demise was necessary." Still, he didn't move, and he watched me as though he might actually care about my reply.
"How did you get her in? Did you just drop her off at the door and shoo her inside like you did at my apartment?" She'd been so eager to learn, brimming over with wonderment. The little girl Akil had left in my care with her mismatched socks and tatty rabbit could have been saved.
"I gave her to David Ryder. In this instance, he and I were in agreement. Did you know he has a teenage daughter and an ex-wife?" Akil's lips tightened as he saw disbelief on my face. "You do not ask enough questions of those around you, Muse. I make it my business to intimately know my enemies. David Ryder is a private man with many secrets. Humans find it difficult to look innocence in the eye and see the potential for madness. He knew Dawn was dangerous but failed to acknowledge the depth of chaos corrupting her young mind. He hesitated to administer the delightful drug they use to subdue demons. Perhaps he listened to you and gave her the benefit of the doubt? As a father himself, I imagine he let his own feelings cloud his impeccable judgment. Regardless, he made a mistake. He is, after all, only human."
I briefly closed my eyes, understanding the stalwart determination in Ryder's expression right before he'd pulled the trigger. He'd been the one to take her in. The deaths inside the Institute were on his hands. Maybe he had listened to my incessant belief that she could be saved. I'd asked him not to judge her on what she was, and she'd killed within hours. I should have handed her over when I had the chance, when Ryder first asked me to. It would have been the right thing to do. But I'd let my own past obscure the truth of what she was.
I opened my eyes. "Don't you feel anything?" I asked quietly. "You sent a little girl to her death."
He dipped his chin and peered at me through dark lashes. A predator's glare. "She knew what she was. She acknowledged her fate and accepted it. If I feel anything, it's a sense of achievement."
"How could you be sure they wouldn't use her against you?"
"Chaos cannot be controlled."
I drew in a measured breath as the truth of his words presented itself to me. "You knew Ryder would hesitate. You knew she'd tear it down around them..." Not a question but a realization. "You handed her over like a Trojan horse." Ryder was right all along. He'd even said as much to me while standing in my kitchen right after Akil had left her with me. "You used a nine-year-old girl to bring down the Institute." It made sense now. "From the second you showed up on my doorstep telling me to _do the right thing_, you thought I'd give her to Adam, or at the very least, that they'd find her with me." Slippery son-of-a-bitch. This hadn't been about saving Dawn at all. He'd condemned her the second he stole her from Carol-Anne. _Kill two birds with one stone._ Dawn and the Institute. Both threats to his existence. And to look at him now, his understated confidence and the glint of infinite knowledge in his eyes, you'd think he'd done the world a favor.
He had never looked more reasonable than he did standing on that street outside Stone's Throw. "I believed you'd do what needed to be done."
"You don't know me at all, do you?" How could he think I'd subject any half blood to the Institute's attentions, especially a little girl? "You thought you'd leave her on my doorstep, and I'd send her to her death? What kind of monster do you think I am?"
"You're smarter than this, Muse. You are fully aware of the devastation half bloods can summon. Need I remind you of Stefan's downfall? You must, by now, appreciate your own potential. You should have handed her over to David Ryder. You should have done the right thing. You failed her by filling her head with hope when there was no hope for that half blood."
"You're insane."
"No. I'm right." He held my glare, eyes midnight black. "I could have delivered her into the hands of the Institute myself, but they'd treat her with less suspicion coming from you. When it became clear you had no intention of doing what must be done, I found other means to achieve the required outcome. Dawn was volatile, a threat. She could not be allowed to remain within Leviathan's grasp. Any one of the princes would have exploited her. I took advantage of an opportunity and put into motion the only possible outcome. Had I thought I could control her, I'd have secured her in my care long ago."
The same as he'd done with me: saved a wretched half blood girl from her abusive owner, mentored her, manipulated her... "And groomed her as your weapon later? Is that what you sought to do with me? Is that why you kept me all those years? Were you nurturing a weapon? Is that still your plan, even now?"
He broke the stare and looked away, perhaps searching for the right words. Whatever he was thinking, he barred it from his face. "Fire—our shared element—consumes. We are forever hungry, you and I. Fire devours, leaving nothing but ash, remnants devoid of life. Fire is the definitive destroyer. The demon you harbor has the potential to be wonderful in ways you do not yet fully understand. I see brilliance in you, but the time has passed where my attentions could be ignored. The princes know you now." The streetlight sparkled in his eyes and cut shadows across his face, making him appear leaner, sharper, harder. When those fire-lit eyes found me again, he swallowed. "Yes, you were to be my weapon. I am guilty of all those accusations and more. I find myself inexplicably disarmed in matters concerning you. I grew impatient and made mistakes... although those mistakes had the desired effect of awakening your latent abilities. Unfortunately, the resulting incarceration in the netherworld dampened my plans somewhat." He smiled. I didn't. With a sigh, he declared, "I admit my original intention was to use you as a weapon against my enemies. Is that truthful enough for you?"
At one time, his confession would have sent me spiraling into rage. The truth should have shocked me. I waited for the gut-wrenching fear to consume me, but nothing happened. The dark touch of Damien wrapped around my soul, gave an acknowledging squeeze. I'd not felt its touch since lying with Akil, but it was there now, an ever-present threat, a beast stalking the remnants of my shredded humanity, waiting, growing impatient. _Hungry._ My head was crowded with dark dreams and impossible wants. Some were my own desires. Others belonged to the netherworldly inhabitants of my body and mind.
I stole a step closer to Akil and would have walked right up to him had he not stepped back. I cocked my head, frowning. "When I saw you at Blackstone, by the fireplace, you were grieving, Akil. You believed me dead, and it cut you up. I know what I saw, and I know what I heard in your voice when you asked me to love you. What am I to you now? Just your weapon or something more?" My tone told him not to screw with me. I wasn't in the mood for lies, and this wasn't about hope, or some love-struck fanciful dreams. If Akil felt anything for me, I could use it.
I already knew the answer, but I waited for the lie he would surely tell and scrutinized his expression for any hint of him working to formulate a response. He eyed me steadily, breathing slowly. A muscle pulsed in his jaw. Amber fringed his eyes, revealing the demon poised inside his human avatar. At one time, it might have startled me to realize how he fought with the truth. He wanted to lie. I saw that much, but the foundation of our relationship had changed. Lying was no longer an option. We'd progressed too far beyond lies.
"I..." The words caught in his throat. He dipped his chin and lifted his glare, peering through his dark lashes, his gaze baking me in elemental heat. "You mean something else entirely." It pained him to say it. He couldn't have looked more disconcerted if he shuffled from foot to foot with his cap in his hand. As it was, he stood rigid, locking his vulnerabilities behind a mask of stubborn denial. He'd told me once the Prince of Greed didn't recognize denial. _He does now._
I smiled. There was no need for me to rage at him. It wouldn't change a thing. It didn't matter anyway. The past was irrelevant. The wretched half blood girl, the silly little whelp he'd planned to exploit, was now _something else entirely_. I'd outgrown him, and I had him exactly where I needed him: under my control.
From the slight pinch around his eyes, he hadn't expected to see the slow crawl of a smile slip across my lips. Acknowledgement darkened his gaze. He'd expected me to yell, to accuse him of using me. Maybe he wanted me to beat my fists against his chest. I might have done all those things once, but my smile told him more than he could have imagined. I'd accepted my fate. I knew what I was, what I was becoming, if I hadn't already crossed that line. I didn't fight the thing inside me—the half of me that danced in the dark—not anymore. Cool, hard acceptance shuttered my emotions, sealing them off from my humanity. I stared back at him, mirroring his guarded-yet-fractured mask of indifference. He now knew what it felt like to have a piece of his soul at the mercy of another. I had him. The tables had turned. A fundamental shift in our relationship had altered everything. He was the demon sprawled in front of his fireplace, drowning in grief. Those birds were on the ground again. Waters were once more running upstream. The Prince of Greed was mine.
"New titles are born. Old titles die." He inclined his head, as though bowing, submitting, subservient. "You are ready." And with that, he vanished in a burst of static.
# Chapter Twenty-Nine
I fumbled with my keys outside my apartment. What had just happened? Akil had admitted why he'd kept me safe and why he'd saved me all those years ago. Dawn's fate could easily have been mine. We'd both been pawns in Akil's game. I'd survived, whereas he'd led her like a lamb to the slaughter. All those years, I'd looked up at him with wide innocent eyes—the same way Dawn had looked at me. I'd have let him take my hand and walk me to my death once too. He'd only let me live because he believed me powerful yet pliable. I was his means to an end I didn't yet understand.
None of that was particularly surprising. Typically demon, Akil acted only in his best interests. But his admission that he'd felt something for me wasn't typical. He should have brushed my death off like a speck of dust on his impeccable attire, yet grief had ravaged him. He was chaos eternal, incapable of feeling much of anything besides hunger. But clearly he did feel. I wasn't comfortable with that revelation, and neither was he. Chaos demons were deadly enough without adding emotion to the mix.
The terrifying part was how he'd just left me. The finality of his words. The peculiar way he'd bowed out, as though saying farewell, as though his part in this game was over. _New titles are born. Old titles die._ He'd given me the truth, and yet he'd left me with a gut-churning sense of unease. As usual with Akil, his answers revealed more questions. Questions I wasn't sure I wanted the answers to. The way he'd looked at me: pride, acceptance, lust, love? Maybe even a little fear? A Prince of Hell feared me. What kind of monster did that make me?
With a growl, I shoved all thoughts of Akil into the broiling mass of emotion walled up in my head. I'd deal with it all later, after I'd drowned myself in a bottle of wine and forgotten everything for a little while. Damn Akil and his Machiavellian ways. The dark coiled around my heart gave a tight squeeze in agreement, briefly clenching my chest and shortening my breath. I snarled back at it and grumbled a few colorful words as I worked the correct key into the lock and shoved inside, too preoccupied to notice the door wasn't locked.
My grumbling curses froze on my lips when I looked up and saw Stefan leaning against my kitchen counter, open tub of Ben & Jerry's in one hand, spoon in the other, eyebrow raised, licking ice cream from his lips. My heart stuttered, my mouth fell open, and I knew I was dreaming. He could not be there, all smart-ass smiles and dazzling blue eyes.
"There's no Ben & Jerry's in the netherworld. It's a crime." His gravelly demon brogue instantly roused my demon half. She gave herself a mental shake, her visceral hungers and curious anxiety merging with mine.
I dropped the file. Papers spewed across the floor. My keys slipped from my hand and clattered beside my feet. I gawked at him, drinking in the wonderful sight of Stefan in my kitchen. His scuffed red leather coat had darkened to the color of dried blood. The cool blue shirt and loose, low-slung jeans seemed unremarkable beside the sword sheathed at his hip. When I noticed his crooked half smile that said 'you know what, it's gonna be okay,' it was too much. The emotional steel rods I'd driven through myself while facing off with Akil turned to liquid and drained away. My knees buckled. His arms swept around me before my addled brain could register the fact he'd moved. A cool snap of power arced between us and wrenched my breath away. He held me close, body pressed against mine. Wide-eyed, I stared with abandon and found myself falling into his brilliant gaze.
He chuckled softly, the demon turning the laughter wicked. "I've always wanted to catch a swooning woman. I just never thought it'd be you."
I barely registered his words as I reached a trembling hand up and touched his face with my fingertips then slipped them lightly down his jawline. He felt real. Not a ghost, but solid, warm, and very much alive. A spritz of energy fizzled up my fingers. I slid the tip of my finger across his lips. His mouth twitched around a smile. He was really here. I wanted to blurt out how I'd ached deep in my bones to see him one more time, how I'd wanted him back but dared not dream I'd see him again, and even if I did, how I was afraid of what he might have become. But my voice had abandoned me. I couldn't say a single word.
He raised an eyebrow. "You could have let me know you weren't dead."
I slid both hands over his face, committing the slightly abrasive texture beneath my fingers to memory. There were more fine lines than I remembered. My thumb brushed the corner of his lips, lips I wanted to kiss. But I was so afraid that if I did, the spell would be broken, and he'd be gone again. After losing him for the second time, and then losing Dawn, I'd come to understand that hope didn't belong to the likes of me. If Stefan couldn't beat his demon and I couldn't save a lost little half blood girl despite all my shallow promises, then what chance did I have?
But Stefan had beaten his demon. He must have. He was here and looking back at me with mild curiosity. The hope I'd given up on sparked back to life. I clutched at his coat, knuckles whitening.
"Speechless?" His words sounded like a purr. "What is the world coming to?"
My mouth moved. I tried to snatch at the words in my head. My tongue tried to wrap around the necessary sounds, but my brain appeared to have detached itself from my vocal cords. All I could do was swallow and blink like a dumbstruck fool.
He lowered his gaze. Fair lashes shuttered his eyes. His smile faded, and a jolt of panic snapped through me. He was going to say he had to go. He would say the words. I knew it. This wonderful moment was already ending. I couldn't let him go, not again.
I pulled him into a raw, desperate kiss, exposing my soul as my lips met his. I didn't care that he stilled against me, that his arm stiffened as though he would push me off him in the next breath. I needed to feel him, to taste how real he was. When his lips parted and his responding hunger molded with mine, I very nearly came undone. My legs were all but useless, but it didn't matter. Stefan held me like we'd never been apart, as though we'd never part again. I drove my fingers into his hair and pulled him so close I might gladly drown in him. Vaguely, I registered the clatter of the spoon against the floor. He slid his hand down the curve of my back and cradled me against him, hauling me in close enough that the heat of his body warmed the cold in mine.
He broke the kiss, only to roam his lips across my cheek. "You're crying." His cool breath tickled across the tracks of my tears.
I really was. "Please, don't stop. Don't say... anything." I couldn't bear to hear why he'd returned and why now, knowing it wouldn't be good. I never got a break. My world was one disaster after another—the Mother of freakin' Destruction—and this would be no different. _Half bloods don't get happy endings._ But I refused to hear it. I was not letting him go. I would not hear the terrible things he had to say. Instead, I wanted to forget it all: the horror of my own failure to save a little girl and the wretched realization of my own capabilities. Forget that I was cursed. Forget Akil's prophetic words...
"This is a mistake," he whispered.
"Don't." I growled, the sound borrowed from my demon.
His sharp breath hissed, and his entire body tightened with restraint. He fought even now, struggled with his own demon. I could see regret in his evasive gaze and the guarded expression on his face. He didn't want this. He would push me away. I fluttered my eyes closed, knowing in the next few seconds he would say something pertinent, and we'd pull apart. But for now—this singular moment—it was perfect. If I could have trapped time in a glass jar and kept it forever, I would have.
"I'm sorry." He captured my mouth once more, driving his tongue in deep. A demonic growl resounded through him, possessive and wild. Elemental energy simmered around us. A tantalizing quiver of chaos energy skipped across my flesh, sprinkling goose bumps in its wake. Stefan's hunger mirrored mine. He teased in his maddening way. Our bodies moved as one, thrust together as though inseparable. He tasted like ice cream and chaos. Sweet, delicious, and wonderfully alive. I purred my pleasure, demon and woman, both hungry. The chilling touch of his element coiled around us, igniting the fire slumbering inside me. My demon stretched beneath my skin, basking in the power he radiated.
He pulled away all too soon, withdrawing carefully, his gaze skittish and head bowed away from me. I let him go, even though every part of my muddled mind screamed for him to stay. Slumping against the counter, I touched my lips and tasted the chaos sprinkled there, fizzing like popping candy. Stefan moved away, turning his back on me, shoulders bowed. He didn't need to say a thing. His body said it for him. He didn't want this.
"Do you have any idea how difficult it is for an ice demon to light a campfire in the netherworld?" he asked, cool and calm, as though we hadn't just tried to devour one another.
"Huh?" The residue of arousal tingled across my skin. I licked my swollen lips and blinked rapidly. Muddled thoughts reeled about my head. Why was he talking about campfires? I swept a hand back through my hair and swallowed hard. Holy hell, he was really there... Not demon, not dead.
"Try impossible." His coat buckles rattled as he reached down and scooped up the ice cream tub, placing it on the side. "Which is a problem when trying to cook demon meat. You don't wanna know what raw Sasori demons taste like. Also, don't eat the dark meat. It's poisonous. I found that out the hard way." He leaned back, hands braced against the countertop either side of him. "You'd think after a few years there, I'd have learned a few things about surviving in the netherworld." A glint of light reflected off an elaborate fractal-etched rapier at his waist. Seeing him armed with a sword reminded me of how my brother liked to appear tooled up and ready for battle. "I'm craving food, real food, like ice cream. And coffee. And French fries." He ground out a restrained groan and raked a hand through his hair. "But mostly ice cream." The heated look in his eyes when he finally met my gaze told me food wasn't the only thing he craved, so why push me away?
"I don't think those things are classed as real food," I said in a quiet voice, bumbling along with the bizarre conversation while my thoughts still spun, and my body burned to have him close. His kiss hadn't been a half-hearted response to my advances. It could have gone further. I wanted it to. But apparently, he didn't. Like an idiot, I'd forced myself on him.
I averted my gaze and busied myself scooping up the contents of the file from the floor. _Your father is the Prince of Lust... You and Akil deserve each other..._ I could guess why he'd stopped things before they spiraled out of control. His words from months ago still wounded me, even after all this time. It shouldn't matter. Coming from anyone else, those words wouldn't have mattered.
I dumped the file on the coffee table, aware that a quiet tension simmered between us. He still threw off power—nothing like the embrace I'd felt minutes before when he'd been pressed against me, but enough to distract my demon.
Facing him, I flicked my hair out of my eyes and planted a hand on my hip, grateful for the kitchen counter between us. I didn't trust myself not to pounce on him and couldn't bear it if he pushed me away again. Who was I kidding? Stefan wasn't meant for the likes of me: the Mother of Destruction. He was better in every way. A shining star. I'd already ruined his life, killed his sister, and condemned him to hell. The longer he stayed with me, the more I'd destroy him.
"I think you should go." I couldn't meet his eyes. He'd see the truth there. I wanted him in every way a woman wants a man. My demon wanted him in ways I couldn't even wrap my thoughts around. The intensity of my own desire terrified me. Was it lust? Was that all it was? My father's legacy living in me? No. If it had been just lust, I could have escaped it. Lust was simple. Yes, it was madness, but it was an uncomplicated madness. This need to have Stefan close, to bury myself in his embrace, to hide in his arms and snowflake kisses, it wasn't demanding, or selfish, and it definitely wasn't simple. I wanted to share everything with him, to spend precious time with him, time when demons weren't trying to kill us, and the Institute wasn't watching. Just the two of us. I couldn't escape these feelings, and that made them all the more terrifying because clearly he didn't feel the same.
"Is that what you want?" he asked. "You come back from the dead, kiss me like the world's ending, and then tell me to leave?" He managed to sound amused.
I bit my lip and nodded. "It's not safe here... with me."
He wove around the kitchen counter and stopped in front of me, a wall of red leather and cool temptation. Light glinted off the sword's guard and I realized I'd seen it before. Kira-Kira. _His mother's sword._ I flicked my gaze up through my lashes and immediately lost my thoughts under the intensity of his eyes. The cool touch of power was back, prodding at my demon, taunting her with its proximity.
"I came back to warn you." Severity hardened his voice and dragged it down to a deep demon growl.
A knot of dread tightened in my gut. "How long have you known I was alive?" I didn't want to hear his warnings. Whatever it was, it would ruin everything. I knew it as certainly as I knew he would leave me again. He stood so close, and yet there was a chasm between us. He was already gone. We just hadn't said the words yet.
"A few weeks—netherworld time. I wasn't going to come at all. But the Princes..."
"Why?" My emotions bled through that one word. "Why weren't you going to come? I needed you. Everything is falling apart. I'm... I'm out of control, or I'm terribly in control. I can't tell which. I nearly killed Adam. I could have. I wanted to. You've no idea how I ached to see that bastard burn. And before, there was... Something happened... I thought I'd killed people."
He held my stare, his expression guarded, bordering on resigned. When he brushed my bangs from my eyes, an electric shower of sparks shivered beneath that lightest of touches. "But you didn't do either."
He was too close. All I could think about was the cold burn of desire and the overwhelming need to hold him. "It doesn't matter," I said quietly, "because when I believed I was a killer, I didn't care. I accepted it. I never would have believed I was capable..." My humanity was failing me. Damien's touch had already poisoned too much. I was drowning in the dark. I just didn't have the good grace to go under one last time. "Dawn's dead. I couldn't save her, just like Ryder said. In the end, she thanked him, right before he killed her." I searched his gaze for any sign of judgment, but all I saw was hard acceptance. "I know why you left. The same fate awaits me. But I can't run back to the netherworld. Not now. Not ever. There's nowhere for me to go, nowhere to run. There's no way out. I'm trapped against a wall. Ryder should have put a bullet between my eyes. I see it on his face when he looks at me. He knows the truth, Stefan. He's just waiting for me to fuck up. We don't get happy endings. I am destruction. Akil's weapon..." I paused, sucked in a breath, and said, "And I feel... nothing."
Stefan's expression finally registered a change and darkened. "Akil's weapon?" He tried to capture my gaze, but I flicked it away. "What do you me—"
"You're right." I ground out the words while biting back a knot of emotion. "This—us was wrong. It was a mistake, just like you said. Every time I'm with you, bad shit happens." I remembered his words and repeated them back to him, "I'm sorry we met." The longer he stayed, the more I'd ruin him. I'd drag him down into the darkness inside of me and drown us both. I suddenly knew with absolute clarity that I could never be a part of Stefan's life, not if he was going to survive. He sure as hell could beat this madness. He had the tenacity, the instincts, and the passion. But the likes of him weren't for me, not the Mother of Destruction. I'd ruined everything. I'd promised Dawn freedom and got her killed. Stefan's sister, sweet Nica, had died because of me. Stefan lost his whole world, because of me. Ryder'd had to execute a young girl because I'd failed to do the right thing. Destruction was my name. Holy hell, the demons knew me better than I did. Maybe Akil was right. I really was the monster he thought me to be. "You need to go."
"I'm not going anywhere." He turned away and picked up the Operation Typhon file. "Wanna know why?"
I didn't imagine the drop in temperature or the trickle of power dancing against my skin. He appeared to be controlled, but the leeching touch of his power said otherwise. If I reached an element touch out to him, I feared what I'd find. My humanity—what was left of it—tingled a warning, raising the hairs on the back of my neck.
He flicked open the file, eyes narrowing as he skimmed the contents. "I have debts that need paying. Wrongs to right. My father, for one. The Prince of Greed is another. Your brother. The Institute..."
His words sounded dangerously like revenge woven together with a thread of something I regularly coveted: madness. "The Boston Institute is gone," I said carefully.
His smile cut deeper. "Not while Adam lives."
"Stefan, this doesn't sound like you." My breath misted in the air. I hugged myself, bracing my hands against my upper arms.
"Doesn't it?" He lifted his head and pierced me with his ice-born glare. "I thought I'd killed you. When I realized what I'd done..." He dropped the file onto the table, gaze locked on me. "When I watched you die in Ryder's arms, it destroyed me."
There was that word again. Just a word. But it spilled fear into my veins and seemed to pull a cold blast of air into the room around me. Shivers quivered through my exhausted body. "But you're here. I'm here. What you thought you did doesn't matter. The past is irrelevant." I inwardly winced, realizing I'd paraphrased Akil's words. _The past is irrelevant. The Institute is insignificant._ It occurred to me that Akil had taken out the Institute just as half the netherworld demons decided to make Boston their new home. That couldn't be a coincidence. Was Stefan's presence here also just bad timing, or was he in some way connected to the change in the netherworld and the influx of demons?
"You told me once we're the products of our past..." Stefan said with a wistful air of sadness.
I had, when he'd first taken me to the lake house, before he revealed the depth of lies he'd told me to protect his sister, before a lot of things, none of them good. I didn't know what to say to him. Nothing I could say would change the weight of the past pushing down on us.
He was in front of me so suddenly I gasped and jolted back, shoving against his chest. Instincts demanded I escape, but his hands clamped down on my shoulders, fingers digging in. Pain bloomed in my wounded shoulder muscles. I flinched and tried to pull away. "Stefan, please... you're hurting me."
He bowed his head, pulling me tight against him. This wasn't anything like the warm embrace we'd shared moments before. His body trembled with chilled restraint. A blast of cold stole the breath from my lungs. A dusting of ice tightened my skin.
He brushed his cheek against mine, his demon purring, "I came to warn you. The netherworld is dying. The princes are rallying. You are not prepared." He sighed and slumped against me as though speaking the words relieved him. "I am the Prince of Wrath. And I've never needed your help more than I do right now."
I gasped and pulled back. His words pierced my soul like splinters of shattered ice, and everything I'd felt for him, every tiny flicker of hope I'd cherished, scattered, chased away by terror. His hands tightened on my shoulders, gaze drilling deep. The demon that was Stefan damned me beneath his glare. The colors of the veil danced in his eyes, but further inside, deeper, I witnessed a soul cowering behind a barricade of ice. Brittle fractals sparked across his cheek, lanced into his hair, and sliced across his lips, cracking, spitting, as it smothered his expression in a mask of lacy frost. The restrained power I'd felt pulsing inside him since his return suddenly broke out and washed over me. I groaned and arched back, torn between fighting him and the terrible urge to answer his power with my own. The flood of ethereal energy slammed my humanity down. My demon roared inside my head, thrashing against her restraints in a bid to devour the source of chaos inside him. I tried to swallow a wail of despair, but it tore from me in an anguished cry.
The Prince of Wrath glared down at me with terrifying certainty. He'd come for vengeance. On his father, on Akil, on anyone who'd ever wronged him. Including me.
I looked up into his diamond-eyes and knew it was too late for Stefan.
I'd already destroyed him.
The reaching tendrils of Akil's power encircled me before I even knew he was nearby. Ice and fire wove around me. The opposing elements tightened against my skin and vied for supremacy. I heard Stefan's snarl just as I was wrenched back out of his vice-like grip and pulled into a chasm of darkness. For the briefest of moments, I was nowhere. It was time enough for panic to clench around my heart and my demon-hitch-hiker to spill its poison through my veins. It was only the scent of cinnamon and cloves among the suddenly embracing warmth that prevented me from losing my mind to fear. _Akil._ I stumbled out of the dark and fell into his arms. Or I would have, had I not spun and slapped him so hard his teeth rattled.
I shoved off him and staggered back, trying to get my bearings as the room around us sharpened into focus. The lounge at Blackstone. It had undergone some re-decorating: new leather couches, a new coat of paint so fresh I could still smell it drying. My demon shunted my humanity to one side and snarled at her failed attempts to be free. I threw that snarl at Akil. "Take me back!"
He worked his jaw and fingered the flushed mark on his cheek. "You're getting stronger." He spoke with pride in his eyes.
I didn't have time to deal with his ego. Stefan needed me, and Akil had just stolen me out of his arms. "Take me back right now."
He arched a dark eyebrow, managing to look both bemused and haughty. "You are capable of many things, but suicide is not one of them."
I glowered. "Stefan wasn't going to hurt me."
He sighed, his shoulders slouched, and his eyes lost some of their luster. He appeared to age a few years in a few seconds. "He killed you once, Muse. In that very apartment. By some miracle, you came back to me. You're mortal, and I'm not making the same mistake twice."
I clutched at the cool leather of the couch I'd bumped into, needing something to keep me upright while my legs threatened to give out. My head still wasn't quite grounded, thanks to the unexpected reality-hop. An ache throbbed behind my eyes, and my parasite's sickly touch still burned in my veins. It was all I could do not to double over and hurl all over Akil's polished, marble floor.
I sucked in a deep breath. "I have to go back."
Akil shrugged off his jacket and tossed it over the opposite couch. He unbuttoned his shirt cuffs and rolled his sleeves up. All the while, his gaze seared me as though we were about to engage in combat. "You don't seem to understand the danger you're in. Let me be perfectly honest with you—"
"That'll be a first."
He ignored me. "No more half-truths. You _are_ a weapon. The princes are aware of this pertinent fact, due to your antics over the past few months. Titles are shifting. Half bloods are rising. An immortal prince dies. Another has his title ripped from him by an upstart half blood ice demon who doesn't know any better. The netherworld is dying. The veil weakens." Akil scooped up a TV remote from the coffee table and flicked on the vast ultra-thin TV mounted on the wall. "Lesser demons are bleeding through. And you, my Muse... are the eye of the storm."
I blinked, wondering if I should be feeling something, but my body was numb and my thoughts hushed. The TV played a newsfeed. I got a glimpse of the reporter, but no sound. Akil had muted the volume. I didn't need to hear what was being said because the Hellhound sprawled on the road outside a McDonalds restaurant really didn't need an introduction. There were a number of things very wrong with that picture. People aren't meant to be able to see Hellhounds. But that fact didn't appear to have reached the members of the crowd taking pictures with their smartphones. Also, Hellhounds don't die. From the glassy red eyes and lack of breathing, that one sure appeared to be dead.
"Oh."
"Indeed."
"What's going on?"
"The princes are coming. I've deterred them as long as possible, centuries in your time, longer in theirs. Unfortunately, the netherworld is dying. My home is no longer able to sustain the demons. And as with all immortals, the princes tire of that which they possess and hunger for that which they can acquire." He sighed. "Chaos is forever hungry."
This was big. Bigger than me. Bigger than the Institute, than Boston. "It's bad, isn't it?"
"It is. And inevitable."
I turned away from the TV and found him standing close enough that I had to look up to meet his gaze. "How much of this was your doing?"
"None." He swept a lock of hair from my face.
Why did I find that hard to believe? "Right. And I'm Mary Poppins. Wait while I get my umbrella, so I can beat you to death with it for never giving me a straight answer to anything. Cut the crap. Give it to me straight. I've earned that much from you."
He smiled. "You have. I told you once of the King and how the Queen killed him. You remember?" I nodded. "Good. The King and Queen—control and chaos—together maintained balance in the netherworld. When the Queen killed her counterpart, chaos reigned, and the beginning of the end of the netherworld was born. Unbeknownst to the remaining princes, the King lived in hiding. He was weak—"
"Okay, I've heard enough. Is this the part where you tell me you're the King? Because really, my head's already spinning from Stefan's revelation..."
He fought with a grin. "No. I'm flattered, but no. I'm _just demon_ remember."
"Just demon," I echoed and didn't believe it for a second. I'd thought Akil had killed Sam in a jealous rage, but I was wrong. He'd killed an Institute spy to protect me. I'd tried to point fingers at him, accusing him through rose-tinted glasses of being inhuman. Well, he was demon. I was just too much of a dumbass to accept it. And now he was telling me about a King who wasn't dead but had been weak, hiding on this side of the veil. So pinch me if I didn't quite believe him. Akil hid the truth in lies.
"Are you quite finished scowling?"
"Not by a long shot."
"As I was saying, the King was weak. He came here to regain his strength while the princes believed him dead. I know where he is. I helped him, in fact. We will need the King if we're to protect the human realm from the Princes."
I think I liked him better when he was wrapping me up in a bubble-wrap of lies. "I'm hearing a lot of plural talk in there."
"Well, you are the Mother of Destruction. I was hoping you might like to help save your city. But you do get a choice. Where you go, destruction follows. You merely need to choose which realm you reduce to rubble in your wake."
He made it sound as simple as whether I should have chocolate sprinkles on my cappuccino. "Yay. A win-win situation," I replied, dryly. This night was just getting better and better. "Okay, say for a second I take your word as the truth—which, by the way, I don't—why on this earth should I listen to you? You just used a little girl to wipe out the Institute. Convenient timing. From where I'm standing, that sure looks like you're on the side of the princes. Also, there is the fact you _are_ a prince with a reputation for manipulating the truth."
"Sacrifices must be made. The Institute was ill prepared. They played at being protectors, but it's not nearly enough."
"Are you telling me you did it for their own good?"
"Back any creature into a corner, and it will fight. Now the Institute gathers, galvanized. More of their ranks will come to Boston. They ready their soldiers. I disturbed the nest so that they'd wake in time to see the truth and prepare."
I pinched my lips together, biting back the urge to tell him people had died when he'd decided to rattle the Institute. He would tell me they were collateral damage. "And what do you get out of this? What does the slippery Prince of Greed gain? Because if I've learned anything, it's that you don't do anything unless it benefits you."
"I get my city back." He smiled a broad wolfish smile. "I have no desire to see this world burn. I'm content with playing these humans for the fools they are. Boston is mine, and I will not suffer any demon, prince or otherwise, who dares attempt to steal what is mine."
That sure sounded like the Akil I knew. I slumped against the couch, suddenly bone-tired. The news report, the plea from Adam to help the Institute, Stefan's breathless cry for help, and Akil telling me I'm somehow caught in the middle of it all were too much. "And Stefan? He's really a prince?"
Akil straightened, squaring his shoulders. "Impossibly, yes."
"You knew?"
"I did."
"For a long time?" I sighed as he nodded. "Why didn't you tell me?"
"He's beyond saving, Muse. He became prince not long after he believed he'd killed you. From what I understand, he laid waste to parts of the netherworld, attracting the attention of the princes. He made short work of Wrath. It should not be possible, a half blood as a prince..." He sucked in a breath, hissing air through his teeth. "Once Wrath fell, my brethren retreated. The damage was done. Had I told you, you'd have gone to him, and he'd have killed you again. Wrath is not just a name, Muse. It's a title. Wrath is his purpose. There's nothing you can do for him, but you can help stop the princes. Stefan will be among them, should he choose to be."
No, he wouldn't. He'd come to me, asking for help. He knew what he was, but he was still in there—that kiss hadn't been cold—and he needed me. I was sure of it. "Damn, Akil, this is a lot to wrap my head around. I need space to think."
"While you do that, may I suggest you help your former colleagues patrol the streets? The princes have begun summoning their lesser cousins. Their presence will rouse chaos, and where chaos reigns, the remaining princes will follow. Should chaos swell beyond control here, the veil will fall. Control the lesser demons, stem the flow, and buy yourselves time to regroup because, make no mistake, when the princes arrive, they will destroy Boston, and they won't stop at one city."
Stopping lesser demons was something I was definitely capable of. "And what are you going to do? Go and find this phantom King?"
He smirked. "You've already met him."
"I'm pretty sure I'd remember meeting the King of Hell. Forked tail, cloven hooves, goat legs, plays the fiddle?" Akil chuckled. "You're not going to tell me, are you?"
"No. While he is weak, it is better you do not know."
"You don't trust me?" I almost laughed when he frowned. "That's rich, coming from the Master of Lies." I snorted, then abruptly asked, "Is it you?"
"No. Again. You appear to be having difficulty hearing the truth."
"That's because, coming from you, truth and lies, right and wrong, they all sound the same."
"They are all a matter of perspective."
"Urgh..." I groaned. "I think I liked it better when you told me everything and nothing. Can we go back to that?"
"We are both too much changed to return to how things were."
I stood and raked my hands through my hair. "You know what would be handy right now? A half blood who could kill princes." I clicked my fingers. "Oh damn, you just sacrificed the best weapon we had against them." Scrunching my nose up, I asked, "Whose side are you on again?"
He gave me a sideways glance, arched an eyebrow, and twitched his lips. "Not the best weapon by far. I have that right here."
"Yeah, well, this blunt weapon of mass destruction is going back to Boston to find Stefan. The end of the world can wait. I can drive there, or you can take me back right now so I can at least try to convince the Prince of Wrath to fight on our side."
"Stefan is beyond listening to reason. His demon rules him."
"He'll make that call."
"I don't like it." A flicker of fire touched his eyes.
"I don't like you much either, but I can't seem to get rid of you, so how about we stop talking and start doing?"
Akil eyed me cautiously. "Stefan and I... The Prince of Wrath will not stop until his debts are paid." I read that as it was intended. Stefan would kill Akil. Stefan was more powerful with the weight of another world behind him. He had the potential, the motive, and no reason not to. When Akil and Stefan threw down, I had no doubt who would walk away. Stefan's only weakness was his mortality. I swallowed and denied those thoughts purchase.

"Then stay out of his way, at least until I can talk to him."
Akil's eyes sparkled while at the same time managing to rake me with a sympathetic gaze. "You can't save half bloods. The trappings and foibles of your humanity provide you with great strengths but also insurmountable weaknesses."
"Yeah, yeah, half bloods don't get happy endings. I get it. I've never been one to follow the rules." I flashed him a bright smile and held out my hand. "Take me back."
He glowered at my outstretched hand and made no move to take it. "The safest place for you is here."
"It's about time you trusted me, Akil. Isn't this what it's all been about? What were you keeping me for, if not to use against your enemies?"
His gaze softened. "Once, yes. Now I find myself in the alarming situation of fearing I may lose you again and caring."
I instantly shoved that unnerving revelation to the back of my mind, ramming it down into the existing mental box marked _'deal with this shit later.'_ I could not even begin to consider what his words meant. Not with everything else crowding my head.
I stood, grabbed his hand, and met his curiously pained expression. "Surely the Prince of Greed and the Mother of Destruction can kick some demon-ass back to the netherworld. It might mess up your street-cred, but I'm sure an ego the size of yours can take it. Once we've averted disaster, you can go back to being the slippery, back-stabbing son-of-a-bitch I know so well."
He allowed himself a faint smile. "Boston is mine. I protect what is mine with every weapon at my disposal. No member of the Dark Court will take that which I possess." The fierceness behind his words wasn't lost on me.
"Good. Hold onto that thought." The enemy of my enemy was my friend, and right then, Akil was the only friend I had. It wouldn't last. He wanted me so he could pry Damien out of my soul, take his place, and wield the weapon he'd been fashioning for himself since he'd first seen me all those years ago. Words like 'love' and 'care' were cheapened when falling from his lips. He was the spider in the web, but I saw him now.
He looked askance at me, narrowing his eyes. He was an ageless chaos demon, and he wasn't buying my thinly veiled enthusiasm. "Why do I feel as though I'm the one making a deal with a devil?" He closed his fingers around mine.
I flashed him a sharp-toothed smile. And now we were equal.
# Epilogue
**INSTITUTE CONFIDENTIAL SUBJECT REPORT**
FAO: VP Sabine Sturgill, New York Hub. Source: Adam Harper, HO, Boston Hub.
**OPERATION TYPHON UPDATE**
(Previous file destroyed due to security breach).
**SUBJECT EPSILON** – **a.k.a. Dawn.**
**STATUS** : Contained & holding.
* * *
Operation Typhon progresses despite the recent destruction of our Boston hub. We are now in possession of Subject Epsilon, a.k.a. Dawn. Regrettably, there were unavoidable casualties, due in part to how we acquired her. All necessary. Epsilon is alive and securely contained on site at the Middlesex Fells facility. From various reports by field Enforcer David Ryder, Epsilon exhibits an element as yet untapped, but which could prove vital if Class A demons do breach the veil, as reports suggest. While demon chatter claims Subject Beta (Muse) terminated the Prince of Envy, David Ryder has confirmed Epsilon was responsible. Epsilon has the potential to be an invaluable asset. Her detainment is of the utmost importance. Her current status must remain confidential. **Her continued existence is vehemently denied.**
**Note:** David Ryder is aware Epsilon lives. This was an unfortunate necessity as Enforcer Ryder played a large part in her capture. While his devotion and commitment to our cause continues to be exemplary, it may be necessary to apply emotional pressure. I advise a trace be planted among his estranged family should his devotion lapse.
* * *
**SUBJECT GAMMA**
STATUS: Contained & holding. No change.
**SUBJECT DELTA**
STATUS: Contained & holding. No change.
* * *
**SUBJECT BETA – Muse, Charlie Henderson.**
**STATUS:** Consistently volatile and unpredictable. Borderline demon. Beta's allegiances have yet to be proven. She has the potential to be a valuable ally, but her relationship with the Prince of Greed is undesirable. She will cooperate while she believes she is in control, but her actions of late border on needing a termination order. If it were not for her connections in the netherworld, we would have allowed her to perish at Subject Alpha's (Stefan Harper's) hand. Demon chatter indicates her father, Asmodeus, has shown an interest in acquiring her. She is currently under his 'protection.' This makes her useful, and invulnerable to all but her father. We will continue to rally Beta to our cause and utilize her connections among the demon hierarchy. However, should she lose control of her demon—which I believe to be an imminent threat—a termination order will be issued.
Note: Demon chatter refers to Beta as the Mother of Destruction. This title is not to be dismissed as idle gossip. I suspect there are events in her recent past of which we are unaware. These events have increased her standing among demons. I strongly advise Enforcers focus on extracting the meaning behind this recent shift in Beta's status.
* * *
**SUBJECT ALPHA – Stefan Harper.**
**STATUS:** Failed. Termination order in effect.
Note: Sabine, I am perfectly capable of neutralizing the threat Subject Alpha poses. I have no emotional connection to my son. Thank you for your offer, but your assistance, while of course appreciated, is not necessary. Subject Alpha will be terminated.
SIGNED: Adam Harper.
**END REPORT**
The Veil Series continues in Book #4 Drowning In The Dark.
Buy it here.
* * *
If you enjoyed Darkest Before Dawn please take a few moments to leave a review on Goodreads. Each review helps new readers discover the Veil Series.
* * *
Read on for an **exclusive scene from Stefan's point of view** and for an **excerpt from Drowning In The Dark, #4 in The Veil Series.**
# Darkest Before Dawn Stefan Bonus Scene
_Darkest Before Dawn Bonus Scene – A snippet from the 'lakehouse kitchen scene' as told from Stefan's point of view. Also available on The Veil Series website here._
She burns. Every part of me, each infinitesimal molecule which binds demon to human, recoils, and yet I want... more. I watch. Time is a brittle frozen thing, captured and halted in my hands. I see through a gauze of ice and witness all that she is, all she will be. A halfling. As am I. Yet even half-a-thing holds power, moreso, for the passion with which it seeks its missing piece, its opposite. I reach out a hand, pushing against the blanket of heat, denying pain its purchase. Fear burns bright in her wide eyes. She sees demon, hardened by ice. I see her. Muse. My contradiction, my opposite. Even as the proximity of her repels, I seek her embrace. I touch her face, skip my fingers down her cheek. She hisses; turns away, but does not run. Her demon wants. So does she. These thoughts, they are wrong. These desires, they will distract. She will devastate. I know all of this, I see it all in her eyes, but still I cannot pull away. Ice seeks to smother her fire. My element surges, hungry and eager to quash her threat. Power feeds through me, combusting inside and rising, threatening to drown us both; to smother, to kill. I know its wants; I want the same. Demon. Human. Lost somewhere between. Pulled apart, stretched thin. I can devastate her. I see enemy. Fire to my ice. She is predator. So am I.
Motives sundered, I am motionless, captured as surely in indecision as I am in ice. I could–should kill her. Here, now, she is weakened, restrained. She thinks me incapable. In that, she is wrong. I am glacial. And yet, despite it all... her death would shatter me. Demon. Human. I am captured between, crushed, amalgamated. I could not hurt her. Would not. Despite everything, she warms the cold in me. Her fire melts my resolve and my ice quells her fear. Webs of ice lace from my touch and skitter across her cheek. She looks into me, sees me, all of me. Muse has always witnessed the truth of me. The warmth of her skin, the rapid beat of her heart; she grounds me, offers a clear path through the squall blanketing my thoughts. I realize, with conviction, I will protect her from all who seek to harm her, from the Prince of Greed, from the netherworld, from the Institute, from my father, from herself. I can protect her from everything... but me.
We are enemies. Opposites. An immovable object and an unstoppable force. I am afraid of my own desires. She sees the stark truth of me, looks into the eyes of winter, and braves the storm. She is more than I can possibly know, more than I can hope to avoid. This moment, the past, the future, all funnel to the now, and I see a glimpse of what is to come. I see blood on a blade of ice. I know how this ends. And she sees–with her dark fire-touched eyes–she sees it too.
# Excerpt Chapter One
_An exclusive look at the first two chapters of 'Drowning In The Dark,' Book 4 in The Veil Series, out 27th Feb, 2015._
Demon claws sliced into my waist, sending sparks of pain dancing up my right side and stealing a ragged cry from my lips. I twisted away, more instinct than thought, and cracked my fist across the demon's brittle jaw. His face fractured like glass, which would have been a victory, had the shards of bone not pierced my knuckles. Jesus, it was like fighting barbed wire. I saw the right hook coming, his claws spread wide, and realized I may have underestimated my quarry and overestimated my current abilities. I ducked, snatched my dagger from its sheath at my ankle, and lunged upward, driving the blade deep into his gut. He grunted. My gaze met his opaque eyes. He grinned, slippery blue lips drawn back over jagged teeth. Hot blood spilled over my hand, but from the look of glee on his crumpled face, you'd think he'd won. I was missing something. His brittle laughter confirmed it.
"They're coming, half blood," he growled around his fangs.
"Yeah, I got the memo. The princes are coming, blah blah. Tell me something I don't know."
His hand shot out like a viper strike. I yanked the blade from his gut, recoiled from his scalpel-like claws, and arched away, but my balance wobbled. Overreaching, I staggered. My stomach flip-flopped. Fear churned my gut. The big grin on his bony face morphed into a hideous, toothy snarl. He lunged and slammed his not-so-lightweight body into me. My back hit the alley dirt, knocking the breath out of my lungs. This would be one of those times when calling the fire would solve my misbehaving demon problem. I could kill him in an instant. A flicker of a thought was all it would take. But I wouldn't stop there. The alley would look nice draped in fire. That overflowing dumpster back there would go up like the 4th of July. The buildings would catch next. My fire would lick the sky, devour the neighborhood, and gobble up every living thing in the immediate vicinity. Insane laughter bubbled through my thoughts.
The demon coiled his hands around my throat. His legs straddled me. I took a swipe at his arm with my blade. His skin peeled apart, blood dribbled, but he didn't loosen his grip. I sliced again, while my lungs burned. His grip on my throat tightened. My vision clouded. The edges of his half-broken face blurred. My demon snarled inside my mind and rattled her mental bars. _Let me out..._ she urged. _Let me play. We will make short work of this beast. We are destruction. We taste his death. Ashes in the air. Let us devour._ It was pretty crowded in my head. Next, my personal parasite spilled his poison into my veins. His darkness polluted my limbs, stoking my thirst for fire. I couldn't hold out much longer. The fire would come. My demon would break the reins of my control, and this time, I might not come back. This could be it: the very last time I held the reins of my control. Was it over so soon? Would I lose my battle in this alley?
Demon spittle dribbled onto my face. My head lolled to one side. Among the fog of impending unconsciousness, a dark figure walked toward me. I didn't need to see clearly to know him. His element flooded ahead of him. Heat. A terrible, breath-stealing, skin-crawling heat. Fire without the flames. The demon with his hands around my throat jerked his head up. His chokehold vanished as foreign words spilled from his lips. He scrambled off me, but stayed kneeling, skinny shoulders hunched.
Akil's image shimmered behind a veil of heat-haze. The air around his body rippled and strummed. He wore a double-breasted overcoat over his trademark suit, as though he might actually suffer from the cold on this chilly Boston evening. Only Akil could stalk back-alleys and still look like he'd stepped off the pages of GQ magazine.
As my demon attacker mumbled and growled in an ancient and exotic language, I concentrated on filling my lungs with air, ignoring the odors of mildew, fish, and urine. The air tasted pretty sweet to my oxygen-starved lungs.
"Return to the netherworld," Akil ordered, his tone level and direct. He didn't expect to be disobeyed. He stopped in front of the prostrate demon, handsome face perfectly neutral.
"It won't do any good, sire. They come. There is nothing there but death."
Akil's dark eyes flicked to me. I wiggled my fingers at him. It was all I could muster.
"Perhaps you misheard because I'm certain you didn't just deny a direct order from your prince." A smile flirted across Akil's lips, and fire brimmed the irises in his otherwise hazel eyes.
"No, sire." The demon ducked his head.
"Good." Akil flicked his fingers, and a ribbon of light rippled open beside him. _The veil._ "Be on your way."
"Now? B-But..."
Akil plucked the demon off his knees and shoved him through the twitching slither of light. The veil stitched itself closed moments later, and Akil turned to me. "Before you say a word about not needing my help, I observed your altercation for several minutes before intervening. Had it gone on any longer, I'm quite certain you would be dead."
"Dead is such a strong word." My voice came out littered with scratches and hitches, dashing my attempt at bravado. I rolled onto my side, wincing as the wound in my side flared, and climbed to my feet. Akil watched me stagger and right myself. He knew better than to help me.
"Nice coat. Do you always kick demon ass dressed like an Italian supermodel?" I brushed loose dirt from my jeans and tee. When I caught sight of the bloom of blood and the warm metallic scent of it hit me, I gulped back a knot of fear. It _had_ been too close.
Akil blinked into existence right in front of me. His heat wrapped me in a quilt-like embrace. I attempted to deny how his warmth soothed my rattled body and mind, but it was a losing battle. Exhausted, battered, bruised, and bleeding, I was in no condition to argue with him. I'd not seen him in weeks—not officially—but I knew he'd been on the streets, eager to kick any wayward demons back to the netherworld, or hell as it was fondly referred to. According to Akil, Boston was his city, and nobody would take it from him, not an influx of demons, and certainly not the other princes. I wasn't entirely surprised to see him. I'd had my suspicions he'd been watching me from afar.
He hooked a finger under my chin and tilted my head up. "Why did you allow that demon to best you?"
I fluttered my eyes closed, the disappointment on his face too much. "I'm afraid."
"Of what? Not him."
"Damien." My parasite. I opened my eyes in time to catch Akil's glare narrowing. "He constantly pulls on my control. And my demon... She's impatient. She whispers to me the whole time. If I let her go, Akil, I'm afraid I might not come back." I'd lost control a few weeks ago, almost killing an angry mob and nearly tearing Akil's arm off in the process. He'd stopped me from doing both, but it had been too close for comfort.
He drew his hand back. Our gazes locked for a few seconds before he dipped his lower, over my lips, my chest, to where his fingers peeled the sticky hem of my top away from my waist. "You know how to remove the soul-lock. I'm sure you don't need me to say it again."
Right, by letting Akil dig him out. I'd been thinking about it every night when I woke screaming, drenched in cold sweat, body aching and mind shattered beneath a flood of revolting images—Damien's memories. Yeah, I'd thought about it a lot while drowning myself in whiskey. Damien was killing me as sure as if he was standing over my shoulder, driving a dagger into my back. I needed Akil's help. I was losing this battle. I'd been losing it since the beginning. And I didn't have much time left.
"Could he ever come back?" I asked quietly. "The part of him that's in me, could it ever become solid again, flesh and blood real?"
Akil searched my face, delaying, until he finally gave me the truth. "Yes. There is a way. But you need not concern yourself with it. Without your consent, it could never happen." I gulped back the burn of disgust. I wanted my owner out, gone for good. I'd have gladly cut him out with a rusted razor blade if I could. "You cannot continue like this, Muse." Akil's deft fingers probed my side, drawing a hiss from my lips. "If you refuse to summon your demon, you will likely die the next time you find yourself in harm's way. I may not always be here to save you."
I bowed my head, simultaneously resting my forehead against his chest while he pressed his hand over the wounds and fizzled heat through my flesh. "I think... maybe... I guess..." I sighed. "You're right. I have to do something. I'm ready." His body tensed, and his hand over the wound stilled. "You need to take him out of me, Akil. Please. I can't live like this anymore."
He laced his fingers into my hair and tipped my head back. I could have fought him, but what was the point? We both knew this had to happen eventually. He didn't look as happy as I thought he would. He studied me, his sculpted face marred by suspicion.
"I expected you to, y'know, gloat or something. You've wanted this since he soul-locked me."
"Much longer, actually. But I–"
His teeth snapped together, and he jerked, as though struck, then shoved me away from him. I almost fell over my own feet trying to stay upright. Stumbling against the wall, I spluttered a curse. "What the hell?"
He'd spun around and faced the mouth of the alley, his back to me. I saw them then, six black-clad men and women, assault rifles raised and trained on Akil as they closed in. Laser dots bounced around on his back. I searched the roofline and spotted the snipers above us. Worse, more special-ops jogged in from my left behind Akil. And I recognized one instantly. Ryder led the smaller team, rifle shouldered and aimed at Akil's back.
"Shit, Akil, get out of here." I shoved off the wall and strode into the line of fire, exuding a confidence I didn't have. "Don't do this, Ryder." I called over the sound of hammering boots on asphalt. Akil would kill them all.
"Get outtah the way, Muse," Ryder barked. "We will shoot through you."
Akil's element lashed outward, surging past me and rushing toward Ryder's group. "Dammit, Ryder, you wanna be responsible for more deaths?"
"Ain't gonna happen." His men were closing fast. It would be a bloodbath. Five in his group, a couple on the roof, six approaching Akil from the front. It wouldn't be enough. A hundred wouldn't be enough. What the hell was Ryder thinking?
Akil's element spluttered beneath my feet. I felt it choke and gasped, spinning around to see Akil drop to one knee and brace himself against the ground, head bowed. Heat throbbed around him, beating the air in relentless waves. He should have been upright, smug and confident – at the worst, he could have called his true form Mammon – but something was very wrong. "Akil?"
The Enforcers gathered around him. His shoulders rose and fell as he breathed hard, but he made no move to attack them or protect himself. A deep inhuman-growl rumbled through him. He snapped his head up and scored a few Enforcers with his powerful glare, but it only seemed to make them more determined. They closed ranks, moving tighter.
I stole a few steps closer when Ryder grabbed my arm and pulled me to him. "Stay away if you know what's good for you." He shoved me back, fierce determination making his glare hard and cold.
"Ryder, he'll kill all of you. Are you insane?" Akil might be down now, but it was likely a trap. He was probably hoping to lure them in so he could catch them together. I strode forward. "Let him go before it's too late." I didn't want to see anyone hurt, especially Ryder. We'd had our differences, but he didn't deserve to screw it up like this. "You can't capture a Prince of Hell. Ryder, please, c'mon... before he brings Mammon..." My words trailed off as Akil's gaze found me. Lips pulled back in a snarl, eyes bright with amber, he glared at me, accusations burning in his gaze. What? Did he think I had something to do with this? "Akil... Don't hurt them. Let them go." Another growl rumbled through him.
"He's not going anywhere, Muse." Ryder raised his rifle, aimed, and pulled the trigger. The sharp crack bounced around the alley. Akil took the hit in the shoulder. He spun around, his body moving liquid fast, but it wasn't enough. They opened fire. The deafening noise of gunfire drowned out my shriek of alarm. I sprang forward, only for Ryder to grab me and shove me into the arms of three of his crew. I kicked, yanked, writhed, and bucked, but the goons held fast.
When the gunfire ceased, a horrible unearthly quiet settled over the alley. The smell of hot metal and acrid gun smoke burned my nostrils and laced my throat. Ragged breaths sawed out of me. I couldn't tear my gaze from the group huddled around a pool of blood. He couldn't be dead. Could he? Why hadn't he fought? Why didn't he summon Mammon? He'd once told me seven hundred Enforcers wouldn't be enough to take him down.
The crowd of special-ops parted. My knees buckled. Akil lay on his side, shredded clothes dark with blood. His glassy gaze stared into the middle-distance, seeing nothing. Blood dribbled from his parted lips. This couldn't be. My demon surged forward, driving a growl ahead of her and out of me.
Ryder turned to me. "Don't even think of bringing her to the party, Muse." He thumbed over his shoulder at the snipers above. I saw them and followed their aim to see the red fireflies dancing on my chest. "Unless you want your demon packed away for another day."
He glanced back, smiled, and nodded. "Job well done, everyone. Bag him, and let's get outtah here."
"You killed him," I snarled, battling with the terrible desire to spill fire into my veins and burn everyone in the alley—turn them to ash and dance with their remains in the breeze. It was insane, but that didn't make the thought any less appealing. "He was helping us drive the demons back." I clamped my teeth together, hissing each breath between them even as I felt my fangs lengthen. "Why do this?"
Ryder finally looked at me, and saw me—not another demon getting in his way, but me, once his friend. "Look," he lowered his voice, "he ain't dead. He's just chock-full of PC-Thirty-Four and a bit beaten up. He'll be pissed, for sure, but he won't be able to do a damn thing about it."
They'd drugged Akil. They'd _drugged_ a Prince of Hell. Panic speared through me. "Give him the antidote. Now. Before he comes 'round. Let him go. Do that, and you'll live. Otherwise, Ryder, when he wakes and realizes what you've done, you're a dead man. And not just you. Everyone here. Shit, maybe the whole city for all I know. Don't risk it. Walk away now. Tell Adam you failed."
Ryder beamed and backed up. "Hell, no. This is the best night of my life." He nodded in the direction of Akil's lifeless body. "That bastard deserves everything he gets, and now we have him. Happy days, Muse." He winked, and strode away.
"Ryder! Don't do this. You'll get them all killed. You can still make it right!" I kicked at the mountain of a man to my right, stamped on his instep and tried to clamp my sharp teeth down onto the hand gripping my shoulder. Ryder grumbled a warning. _Screw him._ I snapped my head back, caught something soft on the outside and bony inside, heard one of them spit a curse, and drove my elbow back. The blow, when it came, cracked across the back of my skull and sent me spiraling into darkness.
# Excerpt Chapter Two
Ben Stone eyed me from behind his bar, his hands busy drying glasses. "Bit early for whiskey, Charley."
"Bite me, Ben. I've had a rough night." I eased my sore body onto a barstool. "What time does Adam get here?"
"Seven-ish." He still eyed me, like a stepbrother trying to decide whether he should care or not. "I serve coffee now. With real beans. Maybe you'd prefer caffeine to alcohol?"
"No offense, but the syrup you serve isn't coffee." I glared. He really didn't want to push me. "I tried to take down a demon last night when he decided to wipe an alley floor with me and sharpen his claws on my insides. I then promptly had my Prince of Hell lover shot to shreds in front of me by my ex-friend and intend to speak with said ex-friend's boss in about"—I checked the clock on the wall behind the bar— "ten minutes. So would you just cut me some slack, and serve me a drink? I'm a big girl. I can handle whiskey at seven a.m."
"That's what I'm worried about."
"Your conscience is clear. You said your bit. Now, where's my drink?" Yes, I was being short with him. He didn't deserve it, but I'd had virtually zero sleep. I felt as though I'd been put through the wringer. Somewhere, there was a Prince of Hell fuming at the hands of the Institute. If he hadn't laid waste to their base of operations yet, he would soon. I had to find him. Fast. Adam was getting an earful the second he stepped through the Stone's Throw's doors.
Ben delivered my drink with a side order of judgmental expression. He knew I was a wreck. I knew I was a wreck. Surely we were past all the arched eyebrows and tut-tuts by now?
As the bar began to fill with Institute staff—most of them filing out the back to their temporary safe house—I wondered where Ryder had taken Akil. Obviously, the Institute had another base of operations somewhere, yet they still used Stone's Throw as an unofficial office. What had once been a forgotten bar Ryder and I frequented after work had turned into the Boston hub for all things demon hunting. The back wall looked like a psycho's pin-board, except the photos and maps were all demon related. The Enforcers rallied here, and Adam dropped by three days a week. Today just happened to be one of those days. I'd mostly avoided the days he graced the bar/office for fear I might boil his insides. In fact, I'd not been to the bar much at all since the events a few weeks ago when Ryder had shot a half blood girl in the head, thereby destroying her short, tragic life and driving possibly the final nail in the coffin of my control. The only thing keeping me sane was stalking the streets, killing demons who stepped out of line or bumping illegal demon-immigrants back through the veil. I didn't sleep. Not any more. _He_ was there, stalking my dreams. I was on a downward spiral, one I'd finally accepted I needed Akil's help to break free of. Well, that wasn't happening any time soon.
Ryder walked in with several Enforcers in tow, Jenna, the raven-haired, no-bullshit beauty, being one of them. The group clearly still buzzed from the previous night's exploits, bouncing on Enforcer happy-pills until they saw me. Ryder peeled away from them, wove around the empty tables, and hitched himself onto a stool beside mine.
I waited for him to comment on the whiskey in my glass. He picked up a coaster and teased the edges with his fingers, his smile dying. "A hundred demons came through the veil last week alone, and those are the ones we know about. New York caught or killed dozens more. We ain't got the luxury of being picky—not no more, Muse. We gotta use everything we have. If that means grabbing the Prince of Greed, we do it. One Prince down. Five to go."
Technically four, if you didn't count Stefan, the newly crowned Prince of Wrath. Ryder didn't know about Stefan's recent promotion. Few did. Akil knew. Would he tell the Institute? No. It wouldn't come to that. He wouldn't let it. Shit, Akil would make them pay if I didn't get to him and talk him down.
"Akil was helping us."
He lifted mocha-brown eyes to me and ran a hand through his hair. His chin bristled with stubble, but he looked good, in a don't-give-a-damn kinda way. His eyes were bright, his gaze sharp. I knew that look. He was ex-military, and he liked nothing better than to get his teeth into a mission and feel like he was doing the world a favor. His scuffed tan-leather jacket looked as though it had seen as much action as he had.
"I'm not getting into a bitching contest with you about Akil, Muse. He's fucked you over more times than I can count. He's the Prince of Greed, for fuck's sake. Get over your Stockholm Syndrome, and move on. You'll live longer."
His words hit me like a punch in the gut. How dare he sweep me up in a statement like that? He knew what Akil had done for me. I'd thought Ryder knew me, _really_ knew me, the way friends should. I snatched up my glass and threw whiskey in his face just as Adam walked through the door. Ryder spluttered, knocked the glass out of my hand, and stilled himself. His right hand clenched in a fist that trembled with the effort of restraint.
I shot to my feet, sneering into Ryder's face. Ryder's groupies loomed near the back of the bar, hands on their holstered weapons. Jenna included. "You bastard," I growled. "I thought you were different. I thought we understood each other."
"Get the fuck outtah my face, Muse, before I do something I'll regret." He delivered his threat with enough bravado to deter me, even with whiskey dripping from his chin.
"What happened to you?"
"Me?" He dragged a hand down his face and flicked moisture from his fingers. "We're at war, and you're on the wrong side. Get your shit together, or get out of Boston."
Adam's presence loomed to my left. He was a big guy, built like a lumberjack in Abercrombie & Fitch apparel. Casually classy. He loitered in my peripheral vision, radiating authority the way Akil radiated heat. Behind him, three Enforcers watched me like hawks hovering over their prey. Six others hung back. All they needed was an excuse, and I'd be full of bullet holes. I blinked—grossly outnumbered—and backed away from Ryder. This wasn't over. I threw him a glare that told him as much and then steeled myself against Adam's stare of abject disapproval.
Adam nodded once and beckoned me away from Ryder. Whiskey churned in my gut as I obliged. Ryder's words couldn't have hurt me more if he'd stabbed me in the chest. I knew things were bad between us, but I hadn't realized how deep his hatred went. I shouldn't have been surprised. I hated him right back for what he did to Dawn, the half blood girl I'd tried to save and he'd killed.
"Everything okay?" Adam pulled out a chair and gestured for me to sit. I snorted and crossed my arms. "Sit."
"No."
"Very well." He sat and leaned back in the chair, stretching his long legs beneath the table. "This is about Akil. Let me make something perfectly clear, Muse. You will not be seeing Akil unless you're under the influence of PC-Thirty-Four."
His words sucker-punched me right where Ryder's had already wounded me. My head spun, and my vision blurred. I sat in the chair and slumped forward, sinking my fingers into my hair. A dull ache throbbed up my right side, and the whiskey in my stomach threatened to force its way back up my throat. "I can't do that."
"This is not something we can negotiate. You're too volatile, and he's too valuable."
There was no way in hell I was letting Adam stick a needle in me and pump me full of PC34 again. Not going to happen. Ever. Not even for the demon who had saved me from myself on many occasions and in many different ways.
I lifted my head and despised the fact he'd see the tears brimming my eyes. I couldn't do a damn thing to stop them, so I snarled. "Akil was on our side. He's been on the streets like us. He no more wants the princes here than we do. What you've done... You don't understand how bad this is. He'll never let you live, Adam. He despises the Institute and how you meddle with demons. Until you did this, he's tolerated you, but that's not an option anymore. He'll destroy you."
"He's contained—and he's not going anywhere, Muse. Not for a very long time."
The thought of Akil strapped to a table and at the mercy of the Institute scientists was almost enough to tip my thin control over the edge. "Is he conscious?"
"Yes." Adam blinked slowly.
"Has he said anything?"
He didn't reply as he assessed me, obviously working over a few possible replies in his head before finally saying, "He's demanded to see you."
My heart flipped, but Adam's concerned expression trampled on the new-shoots of hope.
He sighed. "He believes you're involved in his capture. He claims the real reason you didn't summon your demon in that alley was to lure him into action. He's not saying much, but when he does, he's quite... vehement."
Fuck. I clamped my teeth together. I could see how, from Akil's point of view, it might look like I'd been involved. "And you haven't said anything to put him right?" Adam didn't reply. How could he sit there, so freakin' calm? If it wasn't for the anti-elemental symbols adorning the walls, I'd be dancing in the fire and giving him third degree burns by now. "How did you know he'd be in that alley?" I leaned back and crossed my arms, locking my trembling fingers into fists.
"Akil usually resurfaces around you. I had you watched."
"Where are you keeping him?"
"A secure facility."
"Is he... alright?"
"He's recovering from the assault better than expected, considering PC-Thirty-Four is subduing his demonic nature."
My jaw ached. A terrible pressure throbbed in my head. They could have killed him. Had they used etched bullets, they'd have destroyed his human avatar. Akil, as I knew him, would have died. Mammon would have lived. He was truly immortal. But I didn't care about Mammon. I cared for Akil more than I'd realized. They'd taken him from me. He was mine, and the Institute had ripped him out of my arms. Worse, they'd defiled a Prince of Hell. A demon growl rumbled up my throat.
Adam's eyes widened. "Do I need to be concerned about you, Muse?"
"I'd be concerned about your affairs, Adam. Best get that Will & Testament written up while you're still breathing." They had no idea what they'd captured. Akil wasn't just another demon. He was chaos eternal. A force of nature. "You're an idiot. You all are. You had a Prince of Hell working toward the same goals as you—a direct link to the others—and you've managed to royally fuck it up. After what you've done to him, he'll never help you. You won't get anything out of him. You might as well let him go before he escapes. Which he will. Trust me." I looked around the bar and allowed my stewing anger to raise my voice. "You're all as good as dead. You just don't know it yet."
A dozen Enforcers glared back at me. They hated me. All of them. Fine. I was done with them. With everything and everyone. Ryder didn't even look over. I got a great view of his back and knew exactly where I stood with him. I shook my head at Adam. "Don't come crawling back to me, Adam, when you have the princes breathing down your neck. It's over. I can't help you any more."
He nodded, not in the least concerned. He would be.
'Drowning In The Dark' #4 The Veil Series.
Click here to buy.
Add it to your Goodreads shelf here.
# Want more?
Visit The Veil Series website for exclusive access to character bios, Muse's blog, playlists and more...
www.theveilseries.co.uk
* * *
The Veil Series has a **Facebook** page where you can comment on the books, read character interviews, enjoy exclusive updates and artwork, and chat with likeminded readers and the author:
www.facebook.com/theveilseries
* * *
Join the mailing list by clicking here and get your free e-book, 'Wings Of Hope'.
* * *
If you enjoyed Darkest Before Dawn, please review the book. **Every review helps new readers discover the Veil Series.**
# About the Author
Born in Tonbridge, Kent in 1979, Pippa's family moved to the South West of England where she grew up among the dramatic moorland and sweeping coastlands of Devon & Cornwall. With a family history brimming with intrigue, complete with Gypsy angst on one side and Jewish survivors on the other, she draws from a patchwork of ancestry and uses it as the inspiration for her writing. Happily married and the mother of two little girls, she resides on the Devon & Cornwall border.
Contact me here:
* @pippadacosta
* pippadacosta
www.pippadacosta.com
pippadacosta@btinternet.com
# Also by Pippa DaCosta
Wings of Hope ~ The Veil Series Prequel Novella
Beyond The Veil (#1)
Devil May Care (#2)
Darkest Before Dawn (#3)
Drowning In The Dark (#4) - Coming Feb 27th, 2015.
* * *
www.theveilseries.co.uk
* * *
Get your free e-copy of 'Wings Of Hope' by signing up to The Veil Series mailing list, here.
# Acknowledgments
Thank you to my eternally patient husband, **Mat** , who must wonder when he gets his wife back. Big thanks to **Celairen Art** for her excellent work on The Veil Series covers. You rock! Thank you to my editor, **Karen** , at Red Adept, who has the same dry sense of humor as me, making for some entertaining edits. Thank you to my friend and fellow author, **Annaliese,** who without her trailblazing, The Veil Series may never have found its way to the thousands of readers who've enjoyed Muse's journey so far. And of course, thank you to you, **wonderful readers**. Thank you for your kind emails, your gushing reviews, and your endless support. I cannot thank you enough.
| {
"redpajama_set_name": "RedPajamaBook"
} | 3,647 |
Naoko Takahashi - (born May 6, 1972 in Gifu, Japan). Japanese long-distance runner. She was the first woman to break the 2-hour-20-minute barrier in the marathon, and she won the gold medal in that event at the Sydney 2000 Olympic Games.
Early career
She ran her first marathon in 1997 in Osaka, finishing 7th.
In 1998 she achieved her first major successes, winning the Nagoya Marathon and the marathon at the Asian Games in Bangkok. In the latter she ran 2:21:47, then the 5th-fastest time in history and the second-fastest in the world that year, behind Kenya's Tegla Loroupe, who had broken the world record in Rotterdam.
In 1999 an injury kept her out of competition.
In 2000 she returned in top form. On March 12 she won the Nagoya Marathon for a second time. She then prepared for the main event, the Sydney Olympic Games, training in the United States for three months at altitude with plenty of hill work, since the Olympic course had significant climbs.
Sydney 2000
The women's Olympic marathon was held on September 24, over a course running from Homebush, north of Sydney, to the Olympic Stadium. The pre-race favorites were the Kenyan world-record holder Tegla Loroupe, the Ethiopian 1996 Olympic champion Fatuma Roba, and Takahashi herself, who had posted an outstanding time in Bangkok in 1998.
The race started at 9:00 in the morning, and shortly after the start the Belgian Marleen Renders broke away. Behind her ran a pack of about 30 runners that gradually shed members. Renders was caught at the 12 km mark. At km 20 Takahashi took command of the race, leading a group that had been cut to thirteen runners. They passed the half marathon in 1:11:47.
The pace Takahashi set grew ever harder, and the group disintegrated. In the end only the Romanian Lidia Simon could hold the Japanese runner's pace, and the two passed the 30 km mark alone, with a comfortable lead over their pursuers.
Around km 35 Takahashi made the decisive move and dropped the Romanian, who was visibly exhausted. The Japanese ran the final kilometers alone and reached the stadium with a comfortable lead. On entering the stadium, however, she had a scare when she was obstructed by a spectator who rushed out to congratulate her, not realizing she still had a few meters left to run.
Once past the incident, she crossed the line in 2:23:14, improving the Olympic record held by the American Joan Benoit since Los Angeles 1984. The silver medal went to the Romanian Lidia Simon (2:23:22) and the bronze to the Kenyan Joyce Chepchumba (2:24:45).
Takahashi thus became the first Japanese woman to win an Olympic gold medal in athletics.
After the Games
On September 30, 2001 at the Berlin Marathon, Takahashi became the first woman in history to run under 2:20, winning the race in 2:19:46. That record was broken only a week later, however, by the Kenyan Catherine Ndereba at the Chicago Marathon (2:18:47).
In 2002 she won the Berlin Marathon for a second time, although with a slower mark (2:21:49).
She was not selected by her country for the Athens 2004 Olympic Games, where her compatriot Mizuki Noguchi would win the gold.
She returned at the Tokyo Marathon on November 20, 2005, taking the victory.
Results
Osaka Marathon 1997 - 7th (2:31:32)
Nagoya Marathon 1998 - 1st (2:25:48)
Asian Games, Bangkok 1998 - 1st (2:21:47)
Nagoya Marathon 2000 - 1st (2:22:19)
Sydney 2000 Olympic Games - 1st (2:23:14)
Berlin Marathon 2001 - 1st (2:19:46 WR)
Berlin Marathon 2002 - 1st (2:21:49)
Tokyo Marathon 2003 - 2nd (2:27:21)
Tokyo Marathon 2005 - 1st (2:24:39)
Personal bests
10,000 metres - 31:48.23 (Osaka, Jun 9, 1996)
Marathon - 2:19:46 (Berlin, Sep 30, 2001)
External links
linkoln.tripod.com/nt.html
Athletes of Japan
Sydney 2000 Olympic gold medalists
Olympic gold medalists of Japan
Olympic gold medalists in athletics
Athletes at the Sydney 2000 Olympic Games
Japanese competitors at the Sydney 2000 Olympic Games
Deportistas de Japón en los Juegos Olímpicos de Sídney 2000 | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,045 |
/**
* Playcraft Engine - (C)2012 Playcraft Labs, Inc.
* See licence.txt for details
*/
/**
* @class pc.DataResource
* @augments pc.Base
* @description
* A generic resource you can load data, such as JSON, XML or config files from a URL, just like an image or sound file.
* <p>
* To load a resource, use the pc.Loader to add a resource:
* <pre><code>
* pc.device.loader.add(new pc.DataResource('level1', 'data/level1.tmx'));
* </code></pre>
* <p>
* Once you have the resource loaded you can access the contents of the resource using the data member:
* <pre><code>
* var xmlData = pc.device.loader.get('level1').resource.data;
* </code></pre>
* <p>
* You can optionally provide a function to be called when the resource has finished loading or has an error.
* <pre><code>
* function onLevelDataLoaded(dataResource)
* {
* // dataResource.data
* }
* pc.device.loader.add(new pc.DataResource('level1', 'data/level1.tmx', onLevelDataLoaded));
* </code></pre>
* <p>
 * The Scrollia demo game has an example that loads the level.tmx file from the editor as a data resource which
* is passed to pc.Scene to construct entities and layers.
*/
pc.DataResource = pc.Base.extend('pc.DataResource',
{},
/** @lends pc.DataResource.prototype */
{
/** Data resource that has been loaded */
data:null,
/** HTTP request object used to load the data */
request:null,
/** src URL */
src:null,
/** Short name for this resource */
name: null,
/** boolean indicating whether the resource has been loaded yet */
loaded:false,
/** current callback when the resource has been loaded */
onLoadCallback:null,
/** current callback if an error occurs whilst loading the resource */
onErrorCallback:null,
/**
* Loads data from a remote (URI) resource.
* @param {String} name Name to give the resource
* @param {String} src URI for the data
* @param {function} [onLoadCallback] Function to be called once the image has been loaded
* @param {function} [onErrorCallback] Function to be called if the image fails to load
*/
init:function (name, src, onLoadCallback, onErrorCallback)
{
this._super();
this.src = pc.device.loader.makeUrl(src);
this.name = name;
this.onLoadCallback = onLoadCallback;
this.onErrorCallback = onErrorCallback;
this.request = new XMLHttpRequest();
this.request.onreadystatechange = this.onReadyStateChange.bind(this);
this.request.onload = this.onReadyStateChange.bind(this);
this.request.onloadend = this.onReadyStateChange.bind(this);
this.load();
},
/**
* Triggers an immediate load of the resource. Use only if you're manually loading a resource, otherwise
* the pc.Loader will automatically call load when it starts.
* @param {function} [onLoadCallback] Optional function called when the resource has finished loading
* @param {function} [onErrorCallback] Optional function called if the resource fails to load
*/
        load:function (onLoadCallback, onErrorCallback)
        {
            // Only overwrite the callbacks when new ones are supplied, so the
            // callbacks passed to the constructor survive load()/reload() calls
            // made with no arguments (init() calls load() without parameters).
            if (onLoadCallback) this.onLoadCallback = onLoadCallback;
            if (onErrorCallback) this.onErrorCallback = onErrorCallback;
            this.request.open('get', this.src);
            this.request.send(null);
        },
/**
* Force the reloading of a resource (by marking it not loaded and calling load
*/
reload:function ()
{
this.loaded = false;
this.load();
},
/**
* Called when the resource is loaded/ready. Generally this is used internally, and you should use the
* onLoadCallback function optionally pass to the load method or constructor
*/
onReadyStateChange:function()
{
if (this.loaded) return;
if (this.request.readyState == 4)
{
if (this.request.status == 200)
{
this.loaded = true;
this.data = this.request.responseText;
if (this.onLoadCallback)
this.onLoadCallback(this);
} else
if (this.request.status == 404)
{
this.warn('resource ' + this.src + ' error ' + this.request.status);
if (this.onErrorCallback)
this.onErrorCallback(this);
}
}
}
});
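The onReadyStateChange logic above can be exercised outside a browser by substituting a plain object for the XMLHttpRequest. The sketch below is a simplified, hypothetical stand-in (not part of the Playcraft API) that mirrors the readyState/status dispatch and shows a callback parsing the loaded text as JSON.

```javascript
// Minimal stand-in for pc.DataResource's onReadyStateChange logic,
// using a stubbed request object instead of a real XMLHttpRequest.
function handleReadyStateChange(resource) {
  if (resource.loaded) return;                    // ignore duplicate events
  if (resource.request.readyState !== 4) return;  // wait for DONE
  if (resource.request.status === 200) {
    resource.loaded = true;
    resource.data = resource.request.responseText;
    if (resource.onLoadCallback) resource.onLoadCallback(resource);
  } else if (resource.request.status === 404) {
    if (resource.onErrorCallback) resource.onErrorCallback(resource);
  }
}

// Simulate a successful load of a JSON level file.
const resource = {
  loaded: false,
  data: null,
  request: { readyState: 4, status: 200, responseText: '{"level": 1}' },
  onLoadCallback: (r) => { r.parsed = JSON.parse(r.data); },
  onErrorCallback: null,
};
handleReadyStateChange(resource);
console.log(resource.parsed.level); // → 1
```

A second call to `handleReadyStateChange(resource)` is a no-op because `loaded` is already true, which is the same guard the class uses.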
| {
"redpajama_set_name": "RedPajamaGithub"
} | 1,254 |
# How do you differentiate f(x) = (x^2-3x-6)/(e^x + 2) using the quotient rule?

Nov 1, 2016

$f'(x) = \dfrac{e^{x}\left(5x + 3 - x^{2}\right) + 4x - 6}{\left(e^{x} + 2\right)^{2}}$

#### Explanation:

You need to use the quotient rule:

$\dfrac{d}{dx}\left(\dfrac{u}{v}\right) = \dfrac{v\,\dfrac{du}{dx} - u\,\dfrac{dv}{dx}}{v^{2}}$

So with $f(x) = \dfrac{x^{2} - 3x - 6}{e^{x} + 2}$ we have

$f'(x) = \dfrac{\left(e^{x} + 2\right)\dfrac{d}{dx}\left(x^{2} - 3x - 6\right) - \left(x^{2} - 3x - 6\right)\dfrac{d}{dx}\left(e^{x} + 2\right)}{\left(e^{x} + 2\right)^{2}}$

$\therefore f'(x) = \dfrac{\left(e^{x} + 2\right)\left(2x - 3\right) - \left(x^{2} - 3x - 6\right)e^{x}}{\left(e^{x} + 2\right)^{2}}$

$\therefore f'(x) = \dfrac{e^{x}\left(2x - 3\right) + 2\left(2x - 3\right) - \left(x^{2} - 3x - 6\right)e^{x}}{\left(e^{x} + 2\right)^{2}}$

$\therefore f'(x) = \dfrac{e^{x}\left(2x - 3 - x^{2} + 3x + 6\right) + 4x - 6}{\left(e^{x} + 2\right)^{2}}$

$\therefore f'(x) = \dfrac{e^{x}\left(5x + 3 - x^{2}\right) + 4x - 6}{\left(e^{x} + 2\right)^{2}}$

| {"url": "https://socratic.org/questions/how-do-you-differentiate-f-x-x-2-3x-6-e-x-2-using-the-quotient-rule", "date": "2021-01-20 19:18:20"} | null | null |
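As an illustrative sanity check (not part of the original answer), the closed-form derivative can be compared against a central-difference approximation of f at a few points; all function names below are invented for the example.

```javascript
// f(x) = (x^2 - 3x - 6) / (e^x + 2) and its claimed derivative
// f'(x) = (e^x (5x + 3 - x^2) + 4x - 6) / (e^x + 2)^2.
const f = (x) => (x * x - 3 * x - 6) / (Math.exp(x) + 2);
const fPrime = (x) =>
  (Math.exp(x) * (5 * x + 3 - x * x) + 4 * x - 6) / Math.pow(Math.exp(x) + 2, 2);

// Central difference (f(x+h) - f(x-h)) / (2h); truncation error is O(h^2).
const numericDerivative = (fn, x, h = 1e-6) => (fn(x + h) - fn(x - h)) / (2 * h);

for (const x of [-2, 0, 1, 3]) {
  const diff = Math.abs(fPrime(x) - numericDerivative(f, x));
  console.log(`x=${x}: |analytic - numeric| = ${diff.toExponential(2)}`);
}
```

With h = 1e-6 the two values agree to well within 1e-5 at each test point, which is strong evidence the algebra above is correct (e.g. at x = 0 both give -1/3).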
There are some specific LinkedIn mistakes you should avoid if you are a marketing consultant looking to add more clients to your business.
As of January 2015, there were 332 million people on LinkedIn.
41 percent of U.S. internet users with a household income of more than $75,000 use LinkedIn.
In short, the platform presents a great way to network professionally. Therefore, it is a great resource if utilized properly.
When used improperly, though, you can actually undermine your success through the use of LinkedIn.
It's never good to use people only for what they can do for you. This is just as true on LinkedIn.
Don't ever send someone a connection invitation only to ask them to connect you with someone they know right away.
This will make them feel like you are using them as a means to an end. It implies you aren't interested in connecting with them for their own sake.
Don't do this. It makes people feel used and is in poor taste.
Wait…isn't the purpose of LinkedIn to network and grow your business?
It is a good way to connect and get a new client.
However, you shouldn't send someone a connection only to have access to jobs they might know about. This again makes people feel used.
Research a company yourself if you want to find out about their needs.
Don't ask someone else to give you information you could easily find yourself. It just looks bad and makes you appear lazy.
Okay. It might be acceptable to ask your closest friends to endorse your skills.
Whatever you do, though, don't ask someone you don't know that well to endorse you, even if you endorsed them.
This request makes you seem desperate, just don't do it.
It can be tempting to try your sales pitch on your LinkedIn connections, especially if you are in Business Development. Try to avoid this though, at least when you first make a connection.
If you must put your sales pitch out there, at least wait until you have been connected for more than a few days, ideally months.
Then, only present a soft sale. Present it as informational only and not a hard sales pitch.
If you know someone who knows someone, ask for an introduction. Don't use the InMail feature to connect to a connection's connection.
You might wonder why this is okay when earlier we said not to use people for their connections.
However, it's all about how you go about it. You have to be smart.
Don't ask for introductions immediately.
Don't push for that the first minute you are connected with someone. You can ask eventually, though.
When you ask the right way after enough time has passed, it isn't offensive.
This is especially true if you offer to introduce them to someone in return for the favor.
You might notice when looking over your connections that one of them (let's call him John Brown) is connected with someone you would like to know.
Don't reach out to that person directly, especially if you haven't asked John Brown first.
After all, wouldn't it be more effective to ask for the introduction from John Brown instead, like we just talked about?
You learn basic information about a person when you connect on LinkedIn. This information can include their email and phone number.
Don't ever use this information without a person's expressed permission.
For example, don't add someone to your newsletter subscriber list because you now know their email.
It's an annoying thing to do.
You can ask them if they want to be added. This will give them the opportunity to partake in something they might enjoy.
Just don't add them without asking. It's rude.
Find out how much business your connections want to conduct by asking them if they would like to talk business via LinkedIn up front.
Sending a business message to them without first checking if this is okay can be aggravating.
If they say they prefer not to conduct business, don't ask them for any referrals or introductions.
Assume they don't want to use their connections to help you.
It happens. Just be professional.
They might come around eventually. But, don't push them.
Some people want to connect with potential business associates immediately.
It can be smarter to show some patience, though.
For example, wait until the end of the process to connect with potential clients if you are going through proposal process.
You can connect easily with your new clients after you officially get the job.
Conversely, if you don't get the job, you can send a polite thank you note for being considered, and add a LinkedIn connection invitation to your thank you note.
This is a great way to connect and shows your professionalism when you aren't awarded the job you wanted.
In general, everyone is busy. You aren't the only one with a mile-long to-do list.
Keep this in mind when asking people for favors on LinkedIn.
Don't tax the kindness of business acquaintances, strangers or friends.
Although garnering a needed introduction might be important to you, remember that whoever you are asking this favor of is also busy.
Don't be a pest, and if someone does give you a referral or introduction, make sure you show them the appropriate amount of appreciation for their effort.
All 10 of the above blunders that people make when using LinkedIn come down to one thing: using others for your own gain.
Yes, LinkedIn is all about networking.
However, you have to remember that each LinkedIn profile represents a real person.
Be professional. Ask permission. Treat each person you connect with on LinkedIn as a professional you're dealing with in your business arena.
If you wouldn't ask someone for a certain favor in person, think twice before doing so over LinkedIn.
Simply avoid the mistakes listed above and you are well on your way to utilizing LinkedIn for your professional success, and making your LinkedIn profile worthwhile. | {
"redpajama_set_name": "RedPajamaC4"
} | 2,447 |
// Note: this implementation file assumes a matching header declaring GameGrid
// and its dependencies (Grid, MinuteClock, Block, ResourceManager, centerText,
// and the SFML types used below). The header name here is inferred, not shown
// in the original source.
#include "GameGrid.h"

using namespace sf;
using namespace std;

GameGrid::GameGrid(float x, float y, float width, float height,
    float completeX, float completeY, float tileGap, int _winWidth,
    int _infoTextY, ResourceManager *res) {
// Initialize a new Grid
grid = new Grid(x, y, width, height, completeX, completeY, tileGap);
// Initialize the MinuteClock
clock = MinuteClock();
// Obtain all the textures
textures["TX_BH21"] = res->getTexture("TX_BH21");
textures["TX_BH22"] = res->getTexture("TX_BH22");
textures["TX_BH23"] = res->getTexture("TX_BH23");
textures["TX_BH24"] = res->getTexture("TX_BH24");
textures["TX_BH25"] = res->getTexture("TX_BH25");
textures["TX_BH26"] = res->getTexture("TX_BH26");
textures["TX_BH31"] = res->getTexture("TX_BH31");
textures["TX_BH32"] = res->getTexture("TX_BH32");
textures["TX_BV21"] = res->getTexture("TX_BV21");
textures["TX_BV22"] = res->getTexture("TX_BV22");
textures["TX_BV23"] = res->getTexture("TX_BV23");
textures["TX_BV24"] = res->getTexture("TX_BV24");
textures["TX_BV25"] = res->getTexture("TX_BV25");
textures["TX_BV26"] = res->getTexture("TX_BV26");
textures["TX_BV31"] = res->getTexture("TX_BV31");
textures["TX_BV32"] = res->getTexture("TX_BV32");
// Obtain the game's resources for this object
Font font = res->getFont("FN_COPPER");
// Configure the infoText
winWidth = _winWidth;
infoTextY = _infoTextY;
textFont = font;
infoText.setFont(textFont);
infoText.setString("Time: 00:00:00 Moves: 0");
centerText(&infoText, (float)winWidth, (float)infoTextY);
// Set the remaining instance variables
numMoves = 0;
isGridCompleteVar = false;
}
GameGrid::~GameGrid() {
// Delete the Grid pointers
delete grid;
grid = NULL;
}
void GameGrid::reset() {
// Reset GameGrid instance variables, MinuteClock and Grid
clock.reset();
grid->reset();
numMoves = 0;
isGridCompleteVar = false;
}
void GameGrid::onMouseLeftClick(Vector2i mousePosition) {
// Process select Block when left mouse button clicked
grid->selectBlock((float)mousePosition.x, (float)mousePosition.y);
}
void GameGrid::onMouseMove(Vector2i mousePosition) {
// Process move Block when mouse is moved
grid->moveBlock((float)mousePosition.x, (float)mousePosition.y);
}
void GameGrid::onMouseLeftRelease(Vector2i mousePosition) {
// Process release Clock when left mouse button clicked
grid->releaseBlock(numMoves);
}
void GameGrid::update() {
// Update the MinuteClock
clock.tick();
// Update the infoText's drawing position and text
centerText(&infoText, (float)winWidth, (float)infoTextY);
infoText.setString("Time: " + clock.getTime() + " Moves: " +
to_string(numMoves));
// Check if Grid is complete
if (grid->isComplete()) {
isGridCompleteVar = true;
}
}
void GameGrid::draw(RenderWindow *window) {
// Draw the infoText
window->draw(infoText);
// Iterate over all Blocks on the grid
for (unsigned int i = 0; i < grid->getBlocks().size(); i++) {
// Obtain the current Block
Block block = grid->getBlocks()[i];
// Create a Sprite with the current Block and draw it
Sprite blockSprite;
blockSprite.setTexture(textures[block.getTextureName()]);
blockSprite.setPosition(block.getX(), block.getY());
window->draw(blockSprite);
}
}
void GameGrid::addBlock(string textureName, float size, float x, float y,
float width, float height, bool orientation, bool flag) {
// Add a new Block to the Grid
Block block = Block(textureName, size, x, y, width, height, orientation);
if (flag) {
block.flag();
}
grid->addBlock(block);
}
unsigned int GameGrid::getNumMoves() {
// Return the number of moves
return numMoves;
}
string GameGrid::getClockTime() {
// Return the time on MinuteClock
return clock.getTime();
}
bool GameGrid::isGridComplete() {
// Return if the grid is complete
return isGridCompleteVar;
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 1,078 |
Mozilla Joins Google and Facebook in Phasing Out Adobe Flash
Jonathan Vanian
A window on the Mozilla Firefox browser shows the browser has blocked the Adobe Flash plugin from activating. Sean Gallup—Getty Images
Another popular web browser has had it with Adobe Flash.
Mozilla said this week that it plans to gradually wean its Firefox web browser from Adobe's (ADBE) multimedia player. In August, Firefox will no longer support "certain Flash content" that it deems "not essential to the user experience," although Mozilla did not specify what type of Flash content it was referring to.
Mozilla will still support "legacy Flash content" for an unspecified time, but the company urged websites that use Flash or Microsoft (MSFT) Silverlight, another multimedia web player similar to Flash, for their videos or online games to adopt newer "HTML technologies as soon as possible."
In May, Google (GOOG) detailed its plans to end support of Flash for its Chrome web browser, and it hopes to completely rid itself of Flash advertisements by the beginning of 2017.
Get Data Sheet, Fortune's technology newsletter.
Google, like Adobe, is urging website operators to switch to the HTML5 coding language to display multimedia like video on their sites.
Flash is notoriously buggy and prone to many security vulnerabilities. Firefox believes that by ending support for Flash, its users will see "enhanced security, improved battery life, faster page load, and better browser responsiveness."
Still, Mozilla is not totally cutting ties with Adobe. Mozilla said it would "continue to work closely with Adobe to deliver the best possible Flash experience for our users" as it phases the multimedia player out, and said that an engineering partnership between the two companies has improved some performance and stability in Firefox when it displays Flash content.
Last summer, Facebook's (FB) chief security officer Alex Stamos urged Adobe via Twitter to disable Flash because of its security vulnerabilities.
In April, Adobe issued an emergency update to Flash after security researchers found a flaw that allowed hackers to distribute so-called ransomware to owners of Microsoft Windows personal computers. Ransomware is basically a form of malware that lets hackers block people from accessing their computer or related computer networks so that a hacker can demand payment in return for access.
In 2010, legendary Apple (AAPL) CEO Steve Jobs wrote a 1,700-word essay on Flash, laying out Apple's problems with the multimedia player, which he claimed hurt the "reliability and security of our iPhones, iPods and iPads."
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 457 |
'use strict';
var angular = require('angular');
process.env.appversion = require('../package.json').version;
require('angular-bootstrap');
require('angular-cache');
require('angular-jwt');
require('angular-resource');
require('angular-translate');
require('angular-translate-loader-partial');
require('angular-ui-router');
require('angular-validation-match');
require('mi-angular-alert-service');
require('mi-angular-vmp-auth-service');
var requires = [
'ui.bootstrap',
'angular-cache',
'angular-jwt',
'ngResource',
'pascalprecht.translate',
'ui.router',
'validation.match',
'mi.AlertService',
'mi.AuthService',
require('./components').name
];
angular.module('mi-vmpro-ui-app', requires)
// put jwt token into requests
.config(function Config($httpProvider, jwtInterceptorProvider) {
jwtInterceptorProvider.tokenGetter = ['config', 'CurrentUserService', 'AuthService', '$state', function (config, CurrentUserService, AuthService, $state) {
if (config.url.substr(config.url.length - 5) === '.html') {
return null;
}
if (angular.isUndefined(CurrentUserService.getAccessToken())) {
return null;
}
if (CurrentUserService.isExpired()) {
return AuthService.refresh(CurrentUserService.getRefreshToken(), CurrentUserService.getVideoManagerId()).then(
function (response) {
CurrentUserService.setResponseData(response);
return CurrentUserService.getAccessToken();
},
function (response) {
                        // what should happen if the refresh failed?
                        console.log('Security Module jwtInterceptor', response);
                        // just an idea ... since the service apparently refuses to cooperate
CurrentUserService.logout();
$state.go('app.security.login', {}, {'reload': true});
}
);
} else {
return CurrentUserService.getAccessToken();
}
}];
$httpProvider.interceptors.push('jwtInterceptor');
})
// redirect for unknown routes
.config(function ($urlRouterProvider, $locationProvider, $resourceProvider) {
$urlRouterProvider.otherwise(function ($injector) {
var $state, CurrentUserService;
CurrentUserService = $injector.get('CurrentUserService');
$state = $injector.get('$state');
if (CurrentUserService.isLoggedIn()) {
$state.go('app.dashboard');
} else {
CurrentUserService.logout();
$state.go('app.security.login');
}
});
$resourceProvider.defaults.stripTrailingSlashes = true;
})
// check routes for auth and redirect if needed
.run(function ($rootScope, $injector) {
$rootScope.$on('$stateChangeStart', function (event, toState) {
var requireAuth = toState.data.requireAuth;
if (requireAuth === false) {
return;
} else {
var $state, CurrentUserService;
CurrentUserService = $injector.get('CurrentUserService');
$state = $injector.get('$state');
if (!CurrentUserService.getAccessToken()) {
event.preventDefault();
CurrentUserService.logout();
$state.go('app.security.login', {}, {'reload': true});
}
}
});
})
    // translation stuff
.config(function ($translateProvider) {
$translateProvider.useSanitizeValueStrategy('escaped');
$translateProvider.useLoader('$translatePartialLoader', {
urlTemplate: '/i18n/{part}/{lang}.json'
});
// add translation table
$translateProvider
.registerAvailableLanguageKeys(['en', 'de'], {
'en_*': 'en',
'de_*': 'de'
})
.determinePreferredLanguage();
        /*
        The fallback language is not working ...
        $translateProvider.fallbackLanguage('en');
        The following workaround sets the preferred language to German ('de')
        if the detection failed or the detected language is not a known one.
        (The original condition used `language !== null`, which ran for every
        detected language and threw a TypeError when detection returned null.)
        */
        var language = $translateProvider.preferredLanguage();
        if (!language || !language.match(/(en|de).*/)) {
            return $translateProvider.preferredLanguage('de');
        }
})
.constant('ALERT_LEVELS', {
danger: {timeout: 10000},
warning: {timeout: 5000},
success: {timeout: 3000},
info: {timeout: 3000}
})
;
angular.bootstrap(document, ['mi-vmpro-ui-app']);
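The interceptor above defers to CurrentUserService.isExpired() to decide when to refresh. As a rough, framework-free illustration of that decision (the helper names below are invented, the token is unsigned, and production code must verify signatures), a JWT's exp claim can be checked like this:

```javascript
// Decode a JWT payload (no signature check) and decide whether a refresh
// would be needed, mirroring the isExpired() branch in the interceptor.
function decodeJwtPayload(token) {
  const payloadB64 = token.split('.')[1]
    .replace(/-/g, '+').replace(/_/g, '/');          // base64url -> base64
  return JSON.parse(Buffer.from(payloadB64, 'base64').toString('utf8'));
}

function isExpired(token, nowSeconds = Date.now() / 1000) {
  const { exp } = decodeJwtPayload(token);           // exp is in epoch seconds
  return exp !== undefined && exp <= nowSeconds;
}

// Build an unsigned demo token whose exp claim is epoch second 1000.
const header = Buffer.from(JSON.stringify({ alg: 'none' })).toString('base64');
const payload = Buffer.from(JSON.stringify({ exp: 1000 })).toString('base64');
const token = `${header}.${payload}.`;

console.log(isExpired(token, 2000)); // → true  (token expired at t=1000)
console.log(isExpired(token, 500));  // → false (still valid at t=500)
```

When `isExpired` returns true, the interceptor's flow is: refresh, store the new response via `CurrentUserService.setResponseData`, then return the fresh access token.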
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,843 |
A project manager many times must resolve conflicts within his team. What happens when he himself conflicts with another team member's view. It is easy to say that you must be objective about things, but very difficult to practice and put yourself out of this passionate situation.
I am wondering more and more on this and trying to see a reason behind this conflict. What is a conflict? A difference of opinion, a difference in work styles, Or a difference in understanding? Could be all three or just something else.
If I take malicious intent aside and keep that as an extreme case, then one baseline I have to think is that both parties have something in common, which is, to get the project/delivery delivered with quality and within budget.
My learning from practical experience says that a conflict usually arises when we assess the risks differently. In the end, we're only making decisions in binary Yes or No. Why two people say "Yes" & "No" to a common statement is only when they read the statement (or situation) differently. Why we read it differently? Because for every statement we try to think of failure modes (risks to fail) and then assess whether we should or should not do anything about it.
We keep doing this process in our subconscious mind all the time, without realizing that this process has become part of our conscience and we find someone's difference of opinion as naïve or stupid. I am not saying people don't make stupid decisions or take wrong turns, but it is usually because there are ill-informed or haven't thought of full scope or gamut of possibilities that come to others as natural.
Coming back to conflict management, the best way to keep yourself objective is to try to understand the perspective the other person is coming from. Some people have a tendency of being more risk-averse and some are more open to taking risks. If you keep yourself rigid with your perspective, then you're closing the door to potentially more effective arguments that may be helpful for your own good.
As a project manager, as much we're committed to the What and Why of the project, we may have to be more non-conformist about How. That would open a lot of opportunities for us to connect more with our teams and deliver quality.
Next Post:What makes a Bank's Lifestyle App tick? | {
"redpajama_set_name": "RedPajamaC4"
} | 3,279 |
package com.intellij.openapi.editor.ex.util;
import com.intellij.lexer.FlexAdapter;
import com.intellij.lexer.Lexer;
import com.intellij.openapi.application.ApplicationManager;
import com.intellij.openapi.diagnostic.Logger;
import com.intellij.openapi.editor.Document;
import com.intellij.openapi.editor.colors.EditorColorsScheme;
import com.intellij.openapi.editor.colors.TextAttributesKey;
import com.intellij.openapi.editor.event.DocumentEvent;
import com.intellij.openapi.editor.ex.DocumentEx;
import com.intellij.openapi.editor.ex.PrioritizedDocumentListener;
import com.intellij.openapi.editor.highlighter.EditorHighlighter;
import com.intellij.openapi.editor.highlighter.HighlighterClient;
import com.intellij.openapi.editor.highlighter.HighlighterIterator;
import com.intellij.openapi.editor.impl.EditorDocumentPriorities;
import com.intellij.openapi.editor.markup.TextAttributes;
import com.intellij.openapi.fileTypes.PlainSyntaxHighlighter;
import com.intellij.openapi.fileTypes.SyntaxHighlighter;
import com.intellij.openapi.progress.ProcessCanceledException;
import com.intellij.openapi.project.DumbAwareRunnable;
import com.intellij.openapi.project.Project;
import com.intellij.openapi.util.Comparing;
import com.intellij.psi.tree.IElementType;
import com.intellij.util.ArrayUtil;
import com.intellij.util.text.ImmutableCharSequence;
import com.intellij.util.text.MergingCharSequence;
import com.intellij.util.text.SingleCharSequence;
import com.intellij.util.ui.UIUtil;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;
import java.util.HashMap;
import java.util.Map;
public class LexerEditorHighlighter implements EditorHighlighter, PrioritizedDocumentListener {
private static final Logger LOG = Logger.getInstance("#com.intellij.openapi.editor.ex.util.LexerEditorHighlighter");
private HighlighterClient myEditor;
private final Lexer myLexer;
private final Map<IElementType, TextAttributes> myAttributesMap = new HashMap<>();
private final SegmentArrayWithData mySegments;
private final SyntaxHighlighter myHighlighter;
private EditorColorsScheme myScheme;
private final int myInitialState;
protected CharSequence myText;
public LexerEditorHighlighter(@NotNull SyntaxHighlighter highlighter, @NotNull EditorColorsScheme scheme) {
myScheme = scheme;
myLexer = highlighter.getHighlightingLexer();
myLexer.start(ArrayUtil.EMPTY_CHAR_SEQUENCE);
myInitialState = myLexer.getState();
myHighlighter = highlighter;
mySegments = createSegments();
}
protected SegmentArrayWithData createSegments() {
return new SegmentArrayWithData();
}
public boolean isPlain() {
return myHighlighter instanceof PlainSyntaxHighlighter;
}
@Nullable
protected final Document getDocument() {
return myEditor != null ? myEditor.getDocument() : null;
}
public final synchronized boolean checkContentIsEqualTo(CharSequence sequence) {
final Document document = getDocument();
return document != null && isInSyncWithDocument() && Comparing.equal(document.getImmutableCharSequence(), sequence);
}
public EditorColorsScheme getScheme() {
return myScheme;
}
protected Lexer getLexer() {
return myLexer;
}
@Override
public void setEditor(@NotNull HighlighterClient editor) {
LOG.assertTrue(myEditor == null, "Highlighters cannot be reused with different editors");
myEditor = editor;
}
@Override
public void setColorScheme(@NotNull EditorColorsScheme scheme) {
myScheme = scheme;
myAttributesMap.clear();
}
@NotNull
@Override
public HighlighterIterator createIterator(int startOffset) {
synchronized (this) {
if (!isInSyncWithDocument()) {
final Document document = getDocument();
assert document != null;
if(document instanceof DocumentEx && ((DocumentEx)document).isInBulkUpdate()) {
((DocumentEx)document).setInBulkUpdate(false); // bulk mode failed
}
doSetText(document.getImmutableCharSequence());
}
final int latestValidOffset = mySegments.getLastValidOffset();
return new HighlighterIteratorImpl(startOffset <= latestValidOffset ? startOffset : latestValidOffset);
}
}
private int packData(IElementType tokenType, int state) {
final short idx = tokenType.getIndex();
return state == myInitialState ? idx : -idx;
}
public boolean isValid() {
Project project = myEditor.getProject();
return project != null && !project.isDisposed();
}
private boolean isInSyncWithDocument() {
Document document = getDocument();
return document == null || document.getTextLength() == 0 || mySegments.getSegmentCount() > 0;
}
private static boolean isInitialState(int data) {
return data >= 0;
}
protected static IElementType unpackToken(int data) {
return IElementType.find((short)Math.abs(data));
}
// Incremental update: reuse unchanged leading segments, relex forward until the
// token stream re-synchronizes with the previously cached segments, then splice
// in the new segments and repaint only the damaged range.
@Override
public synchronized void documentChanged(DocumentEvent e) {
try {
final Document document = e.getDocument();
CharSequence text = document.getImmutableCharSequence();
if (document instanceof DocumentEx && ((DocumentEx)document).isInBulkUpdate()) {
myText = null;
mySegments.removeAll();
return;
}
if(mySegments.getSegmentCount() == 0) {
setText(text);
return;
}
myText = text;
int oldStartOffset = e.getOffset();
final int segmentIndex = mySegments.findSegmentIndex(oldStartOffset) - 2;
final int oldStartIndex = Math.max(0, segmentIndex);
int startIndex = oldStartIndex;
int data;
do {
data = mySegments.getSegmentData(startIndex);
if (isInitialState(data)|| startIndex == 0) break;
startIndex--;
}
while (true);
int startOffset = mySegments.getSegmentStart(startIndex);
int newEndOffset = e.getOffset() + e.getNewLength();
myLexer.start(text, startOffset, text.length(), myInitialState);
int lastTokenStart = -1;
int lastLexerState = -1;
IElementType lastTokenType = null;
while (myLexer.getTokenType() != null) {
if (startIndex >= oldStartIndex) break;
int tokenStart = myLexer.getTokenStart();
int lexerState = myLexer.getState();
if (tokenStart == lastTokenStart && lexerState == lastLexerState && myLexer.getTokenType() == lastTokenType) {
throw new IllegalStateException("Lexer is not progressing after calling advance()");
}
int tokenEnd = myLexer.getTokenEnd();
data = packData(myLexer.getTokenType(), lexerState);
if (mySegments.getSegmentStart(startIndex) != tokenStart ||
mySegments.getSegmentEnd(startIndex) != tokenEnd ||
mySegments.getSegmentData(startIndex) != data) {
break;
}
startIndex++;
lastTokenType = myLexer.getTokenType();
myLexer.advance();
lastTokenStart = tokenStart;
lastLexerState = lexerState;
}
startOffset = mySegments.getSegmentStart(startIndex);
int repaintEnd = -1;
int insertSegmentCount = 0;
int oldEndIndex = -1;
lastTokenType = null;
SegmentArrayWithData insertSegments = new SegmentArrayWithData();
while(myLexer.getTokenType() != null) {
int tokenStart = myLexer.getTokenStart();
int lexerState = myLexer.getState();
if (tokenStart == lastTokenStart && lexerState == lastLexerState && myLexer.getTokenType() == lastTokenType) {
throw new IllegalStateException("Lexer is not progressing after calling advance()");
}
lastTokenStart = tokenStart;
lastLexerState = lexerState;
lastTokenType = myLexer.getTokenType();
int tokenEnd = myLexer.getTokenEnd();
data = packData(myLexer.getTokenType(), lexerState);
if(tokenStart >= newEndOffset && lexerState == myInitialState) {
int shiftedTokenStart = tokenStart - e.getNewLength() + e.getOldLength();
int index = mySegments.findSegmentIndex(shiftedTokenStart);
if (mySegments.getSegmentStart(index) == shiftedTokenStart && mySegments.getSegmentData(index) == data) {
repaintEnd = tokenStart;
oldEndIndex = index;
break;
}
}
insertSegments.setElementAt(insertSegmentCount, tokenStart, tokenEnd, data);
insertSegmentCount++;
myLexer.advance();
}
final int shift = e.getNewLength() - e.getOldLength();
if (repaintEnd > 0) {
while (insertSegmentCount > 0 && oldEndIndex > startIndex) {
if (!segmentsEqual(mySegments, oldEndIndex - 1, insertSegments, insertSegmentCount - 1, shift)) {
break;
}
insertSegmentCount--;
oldEndIndex--;
repaintEnd = insertSegments.getSegmentStart(insertSegmentCount);
insertSegments.remove(insertSegmentCount, insertSegmentCount + 1);
}
}
if(repaintEnd == -1) {
repaintEnd = text.length();
}
if (oldEndIndex < 0){
oldEndIndex = mySegments.getSegmentCount();
}
mySegments.shiftSegments(oldEndIndex, shift);
mySegments.replace(startIndex, oldEndIndex, insertSegments);
if (insertSegmentCount == 0 ||
oldEndIndex == startIndex + 1 && insertSegmentCount == 1 && data == mySegments.getSegmentData(startIndex)) {
return;
}
myEditor.repaint(startOffset, repaintEnd);
}
catch (ProcessCanceledException ex) {
myText = null;
mySegments.removeAll();
throw ex;
}
catch (RuntimeException ex) {
throw new IllegalStateException("Error updating " + this + " after " + e, ex);
}
}
@Override
public void beforeDocumentChange(DocumentEvent event) {
}
@Override
public int getPriority() {
return EditorDocumentPriorities.LEXER_EDITOR;
}
private static boolean segmentsEqual(SegmentArrayWithData a1, int idx1, SegmentArrayWithData a2, int idx2, final int offsetShift) {
return a1.getSegmentStart(idx1) + offsetShift == a2.getSegmentStart(idx2) &&
a1.getSegmentEnd(idx1) + offsetShift == a2.getSegmentEnd(idx2) &&
a1.getSegmentData(idx1) == a2.getSegmentData(idx2);
}
public HighlighterClient getClient() {
return myEditor;
}
protected final synchronized void resetText(@NotNull CharSequence text) {
myText = null;
doSetText(text);
}
@Override
public void setText(@NotNull CharSequence text) {
synchronized (this) {
doSetText(text);
}
}
protected class TokenProcessor {
public void addToken(final int i, final int startOffset, final int endOffset, final int data, final IElementType tokenType) {
mySegments.setElementAt(i, startOffset, endOffset, data);
}
public void finish() {
}
}
private void doSetText(final CharSequence text) {
if (Comparing.equal(myText, text)) return;
myText = ImmutableCharSequence.asImmutable(text);
final TokenProcessor processor = createTokenProcessor(0);
final int textLength = text.length();
myLexer.start(text, 0, textLength, myInitialState);
mySegments.removeAll();
int i = 0;
while (true) {
final IElementType tokenType = myLexer.getTokenType();
if (tokenType == null) break;
int data = packData(tokenType, myLexer.getState());
processor.addToken(i, myLexer.getTokenStart(), myLexer.getTokenEnd(), data, tokenType);
i++;
myLexer.advance();
}
processor.finish();
if (textLength > 0 && (mySegments.mySegmentCount == 0 || mySegments.myEnds[mySegments.mySegmentCount - 1] != textLength)) {
throw new IllegalStateException("Unexpected termination offset for lexer " + myLexer);
}
if(myEditor != null && !ApplicationManager.getApplication().isHeadlessEnvironment()) {
UIUtil.invokeLaterIfNeeded(new DumbAwareRunnable() {
@Override
public void run() {
myEditor.repaint(0, textLength);
}
});
}
}
protected TokenProcessor createTokenProcessor(final int startIndex) {
return new TokenProcessor();
}
public SyntaxHighlighter getSyntaxHighlighter() {
return myHighlighter;
}
@NotNull
private TextAttributes getAttributes(IElementType tokenType) {
TextAttributes attrs = myAttributesMap.get(tokenType);
if (attrs == null) {
// let's fetch syntax highlighter attributes for token and merge them with "TEXT" attribute of current color scheme
attrs = convertAttributes(myHighlighter.getTokenHighlights(tokenType));
myAttributesMap.put(tokenType, attrs);
}
return attrs;
}
// Called to determine visual attributes of inserted character prior to starting a write action.
// TODO Should be removed when we implement typing without starting write actions.
@NotNull
public TextAttributes getAttributesForTypedChar(@NotNull Document document, int offset, char c) {
int startOffset = 0;
if (mySegments.getSegmentCount() > 0) {
final int segmentIndex;
try {
segmentIndex = mySegments.findSegmentIndex(offset) - 2;
}
catch (IndexOutOfBoundsException ex) {
throw new IndexOutOfBoundsException(ex.getMessage() + " Lexer: " + myLexer);
}
int startIndex = Math.max(0, segmentIndex);
int data;
do {
data = mySegments.getSegmentData(startIndex);
if (isInitialState(data)|| startIndex == 0) break;
startIndex--;
}
while (true);
startOffset = mySegments.getSegmentStart(startIndex);
}
CharSequence text = document.getImmutableCharSequence();
CharSequence newText = new MergingCharSequence(new MergingCharSequence(text.subSequence(0, offset), new SingleCharSequence(c)), text.subSequence(offset, text.length()));
myLexer.start(newText, startOffset, newText.length(), myInitialState);
IElementType tokenType = null;
while (myLexer.getTokenType() != null) {
if (myLexer.getTokenEnd() >= offset + 1) {
tokenType = myLexer.getTokenType();
break;
}
myLexer.advance();
}
return getAttributes(tokenType);
}
@NotNull
TextAttributes convertAttributes(@NotNull TextAttributesKey[] keys) {
TextAttributes attrs = new TextAttributes();
for (TextAttributesKey key : keys) {
TextAttributes attrs2 = myScheme.getAttributes(key);
if (attrs2 != null) {
attrs = TextAttributes.merge(attrs, attrs2);
}
}
return attrs;
}
@Override
public String toString() {
return getClass().getName() + "(" +
(myLexer.getClass() == FlexAdapter.class ? myLexer.toString() : myLexer.getClass().getName()) +
"): '" + myLexer.getBufferSequence() + "'";
}
public class HighlighterIteratorImpl implements HighlighterIterator {
private int mySegmentIndex = 0;
HighlighterIteratorImpl(int startOffset) {
try {
mySegmentIndex = mySegments.findSegmentIndex(startOffset);
}
catch (IllegalStateException e) {
throw new IllegalStateException("Wrong state of " + LexerEditorHighlighter.this, e);
}
}
public int currentIndex() {
return mySegmentIndex;
}
@Override
public TextAttributes getTextAttributes() {
return getAttributes(getTokenType());
}
@Override
public int getStart() {
return mySegments.getSegmentStart(mySegmentIndex);
}
@Override
public int getEnd() {
return mySegments.getSegmentEnd(mySegmentIndex);
}
@Override
public IElementType getTokenType(){
return unpackToken(mySegments.getSegmentData(mySegmentIndex));
}
@Override
public void advance() {
mySegmentIndex++;
}
@Override
public void retreat(){
mySegmentIndex--;
}
@Override
public boolean atEnd() {
return mySegmentIndex >= mySegments.getSegmentCount() || mySegmentIndex < 0;
}
@Override
public Document getDocument() {
return LexerEditorHighlighter.this.getDocument();
}
}
public SegmentArrayWithData getSegments() {
return mySegments;
}
}
require 'rails_helper'
RSpec.describe GeckoboardController, type: :controller do
context "IP & key restrictions" do
it "are enabled" do
expect(controller).to receive(:reject_untrusted_ips_and_without_key!)
get :leaderboard
end
end
end
I have a very mild form of this thing called dyscalculia, which is a dysfunction similar to dyslexia but also quite different.
Now when I was originally assessed, the psychologist at my then-college sent a copy of the report (with my permission) to my employer.
Some years later, and my employer has suddenly felt the need to start telling people in the office about it.
So as you can appreciate, I'm extremely offended: not only is it a huge breach of trust, it's also highly unprofessional on his part, and it casts unfounded doubt on my professional capacity.
To illustrate: He's been telling people that I have 'number dyslexia', which conjures up an image of someone who reads 6's and sees 9's, or reads 1,000 and sees 10,000. Now that's simply not the case, that's ACTUAL dyslexia.
What *I* have (which is in the report so he knows this) only affects mental arithmetic. As a result, even though I was diagnosed at the start of Foundation, I was never granted any extra time or special concessions throughout the entire AAT accounting qualification. So long as I have somewhere to write the numbers I'm working with, I'm as capable as anyone else.
I mean for pity's sake, I wrote an online article in 2007 about the maths behind cubic bézier curves which has stayed in the top 10 for the last 2 years!
If he has breached your right to sensitive personal data being kept confidentially, you may wish to instigate a grievance.
You could do this informally, or formally if you prefer. You might want to let him (or her) know how it has made you feel having to explain exactly what dyscalculia is to your work colleagues and lay it on as thick as you like.
Years ago, after I had been out sick on post operative recovery, a personnel officer rang my work colleague and asked him to pass a message to me regarding sick pay and went on to discuss my illness, my pay rate, my sick pay rate etc etc. I rang the Personnel Manager and complained, loudly and bitterly. I understand the personnel officer was disciplined and given a final written warning. Should have been sacked, if you ask me. Disgraceful carry on.
"redpajama_set_name": "RedPajamaC4"
} | 6,009 |
\section{Introduction}
\noindent
\par
Let $X$ be a smooth projective variety of dimension $n$ and let $L$ be an ample (resp. nef and big) line bundle on $X$.
Then the pair $(X,L)$ is called a polarized (resp. quasi-polarized) manifold.
\par
For such a pair $(X,L)$, the adjoint bundles $K_{X}+tL$, where $K_{X}$ is the canonical line bundle of $X$,
play an important role in the study of $(X,L)$
(see, for example, \cite[Chapters 7, 9, and 11]{BeSo-Book}).
In particular, it is important to know the value of $h^{0}(K_{X}+tL)$.
\par
In \cite[Conjecture 7.2.7]{BeSo-Book}, Beltrametti and Sommese proposed the following conjecture.
\begin{con}\label{Conjecture1}
Let $(X,L)$ be a polarized manifold of dimension $n$.
Assume that $K_{X}+(n-1)L$ is nef.
Then $h^{0}(K_{X}+(n-1)L)>0$.
\end{con}
In \cite[Theorem 2.4]{Fukuma06}, the author proved that Conjecture \ref{Conjecture1}
is true for the case where $\dim X=3$. (See also \cite{BrHo09}.)
Moreover we gave a classification of $(X,L)$ with $h^{0}(K_{X}+2L)=1$ (see \cite[Theorem 2.4]{Fukuma06}).
\par
In general, there is the following conjecture (\cite[Section 4]{Ambro}, \cite[Conjecture 2.1]{Kawamata}).
\begin{con}[Ambro, Kawamata]\label{Conjecture2}
Let $X$ be a complex normal variety,
$B$ an effective $\mathbb{R}$-divisor on $X$
such that the pair $(X,B)$ is KLT, and $D$ a Cartier divisor on $X$.
Assume that $D$ is nef, and that $D-(K_{X}+B)$ is nef and big.
Then $h^{0}(D)>0$.
\end{con}
Here we note that in \cite[Open problems, P.321]{LPS93} Ionescu proposed the same conjecture for the case where $X$ is smooth and $B=0$.
For Conjecture \ref{Conjecture2}, the following results have been obtained.
\begin{itemize}
\item [\rm (\ref{Conjecture2}.a)]
If $\dim X=2$, then Conjecture \ref{Conjecture2} is true (see \cite[Theorem 3.1]{Kawamata}).
\item [\rm (\ref{Conjecture2}.b)]
Let $X$ be a $3$-dimensional projective variety with at most canonical singularities such that $K_{X}$ is nef, and let $D$ be a Cartier divisor such that $D-K_{X}$ is nef and big. Then $h^{0}(D)>0$
(see \cite[Proposition 4.1]{Kawamata}).
\item [\rm (\ref{Conjecture2}.c)]
Let $(X,L)$ be a polarized manifold of dimension $3$.
Assume that $L^{3}>27$.
Then $h^{0}(K_{X}+L)>0$ if $K_{X}+L$ is nef (see \cite[Th\'eor\`eme 1.8]{Broustet09}).
\item [\rm (\ref{Conjecture2}.d)]
Let $X$ be a $4$-dimensional projective variety with at most Gorenstein canonical singularities. Assume that $D\sim -K_{X}$ is ample. Then $h^{0}(D)>0$
(see \cite[Theorem 5.2]{Kawamata}).
\item [\rm (\ref{Conjecture2}.e)]
Let $X$ be a smooth projective variety of dimension $3$ with $h^{1}(\mathcal{O}_{X})>0$, and $L$ a nef and big Cartier divisor on $X$
such that $K_{X}+L$ is nef.
Then $h^{0}(K_{X}+L)>0$ (see \cite[Theorem 4.2]{1.5}).
\item [\rm (\ref{Conjecture2}.f)]
Let $X$ be a smooth projective variety of dimension $3$ with $\kappa(X)\geq 0$, and $L$ an ample Cartier divisor on $X$.
Then $h^{0}(K_{X}+L)>0$ (see \cite[Theorem 3.2]{Fukuma10}).
\end{itemize}
If $K_{X}+L$ is nef, then by \cite{Shokurov86} there exists a positive integer $m$ such that $h^{0}(m(K_{X}+L))>0$.
More generally if $\kappa(K_{X}+L)\geq 0$, then $h^{0}(m(K_{X}+L))>0$ for some positive integer $m$.
So it is interesting to study the following problem, which was proposed in \cite[Problem 3.2]{Fukuma10}:
\begin{prob}\label{Problem2.8}
For any fixed positive integer $n$, determine the smallest positive integer
$p$, which depends only on $n$, such that the following {\rm ($*$)} is satisfied:
\begin{itemize}
\item [\rm ($*$)]
$h^{0}(p(K_{X}+L))>0$ for any polarized manifold $(X,L)$ of dimension $n$
with $\kappa(K_{X}+L)\geq 0$.
\end{itemize}
\end{prob}
Here we note that
by \cite[Theorem 2.8]{Fukuma10},
we see that $p=1$ if $X$ is a curve or surface.
\par
In order to study this problem, in \cite[Problem 5.2]{Fukuma08-3}, we introduced the following:
\begin{definition}
For any fixed positive integer $n$, we set
\begin{eqnarray*}
\mathcal{P}_{n}
&:=&\left \{\ \mbox{\rm $(X,L)$ : polarized manifold}\ |\ \mbox{\rm $\dim X=n$
and
$\kappa(K_{X}+L)\geq 0$}\right\}, \\
\mathcal{M}_{n}
&:=&\left\{\ r\in\mathbb{N}\ |\ h^{0}(r(K_{X}+L))>0\ \mbox{\rm for any $(X,L)\in{\mathcal{P}}_{n}$}\right\},\\
m(n)&:=&
\left\{
\begin{array}{ll}
\mbox{\rm {min}}\ \mathcal{M}_{n}
& \ \ \mbox{\rm if $\mathcal{M}_{n}\neq \emptyset$,} \\
\infty & \ \ \mbox{\rm if $\mathcal{M}_{n}=\emptyset$.}
\end{array}\right.
\end{eqnarray*}
\end{definition}
In this paper, as a first step, we mainly consider the case where $\dim X=3$.
\par
In \cite[Corollary 5.2]{Fukuma08-3}, we proved that $m(3)\leq 2$ holds.
Concretely, in \cite[Theorem 5.4 (2)]{Fukuma08-3}, we proved that if $\kappa(K_{X}+L)=3$,
then $h^{0}(2(K_{X}+L))\geq 3$.
Moreover in \cite[Theorem 5.4 (1)]{Fukuma08-3}, we announced that in this paper we will prove that $h^{0}(K_{X}+L)>0$ if $0\leq\kappa(K_{X}+L)\leq 2$.
\par
So in this paper, we will prove that $h^{0}(K_{X}+L)>0$ if $n=3$ and $0\leq \kappa(K_{X}+L)\leq 2$.
Moreover, we also study a lower bound of $h^{0}(m(K_{X}+L))$ if $\kappa(K_{X}+L)\geq 0$.
\par
The contents of this paper are the following:
In sections \ref{S2} and \ref{S3}, we will state some definitions and results which will be used later. In particular, in section \ref{S3}, we review the sectional geometric genus.
In section \ref{S4}, we will treat special cases.
If $\kappa(K_{X}+L)=1$ (resp. $2$), then there exists a polarized manifold $(M,A)$ such that $h^{0}(m(K_{X}+L))=h^{0}(m(K_{M}+A))$ for every positive integer $m$, together with a fiber space $f: M\to Y$ onto a normal projective variety $Y$ of dimension $1$ (resp. $2$) and an ample line bundle $H$ on $Y$ such that $K_{M}+A=f^{*}(H)$. (This $(M,A)$ is called a reduction of $(X,L)$; see Definition \ref{Definition1.7}.)
Hence it is important to consider the following case:
Let $(X,L)$ be a polarized manifold of dimension $n\geq 3$ and let $Y$ be a normal projective variety of dimension $1$ or $2$.
Assume that there exists a fiber space $f:X\to Y$ such that $K_{X}+L=f^{*}(H)$ for some ample line bundle $H$ on $Y$.
In section \ref{S4}, we consider such pairs $(X,L)$ and give a lower bound for $h^{0}(m(K_{X}+L))$.
In particular, we see that $h^{0}(K_{X}+L)>0$ in this case.
\par
In section \ref{S5}, we will study the case where $\dim X=3$.
In particular, we will give a lower bound of $h^{0}(m(K_{X}+L))$ for the following cases:
\begin{itemize}
\item [\rm (a)] $0\leq \kappa(K_{X}+L)\leq 2$ and $m\geq 1$.
\item [\rm (b)] $\kappa(K_{X}+L)=3$ and $m\geq 2$.
\end{itemize}
In particular we get $h^{0}(K_{X}+L)>0$ if $0\leq \kappa(K_{X}+L)\leq 2$ and
$h^{0}(2(K_{X}+L))\geq 3$ if $\kappa(K_{X}+L)=3$ (see also \cite[Theorem 5.4 (2)]{Fukuma08-3}).
Moreover we will also classify $(X,L)$ with $\kappa(K_{X}+L)=3$ and $h^{0}(2(K_{X}+L))=3$ or $4$ (see Theorems \ref{EC1} and \ref{EC2}).
\\
\par
In this paper, we shall mainly study a smooth projective variety $X$
over the field of complex numbers $\mathbb{C}$.
We will employ the customary notation in algebraic geometry.
\section{Preliminaries}\label{S2}
Here we list several results which will be used later.
\begin{Definition}\label{Definition1.7}
(i) Let $X$ (resp. $Y$) be an $n$-dimensional projective manifold, and $L$ (resp. $A$) an ample line bundle on $X$ (resp. $Y$).
Then $(X,L)$ is called a {\it simple blowing up of $(Y,A)$} if there exists a birational morphism $\pi: X\to Y$ such that $\pi$ is a blowing up at a point of $Y$ and $L=\pi^{*}(A)-E$, where $E$ is the $\pi$-exceptional effective reduced divisor.
\\
(ii) Let $X$ (resp. $M$) be an $n$-dimensional projective manifold, and $L$ (resp. $A$) an ample line bundle on $X$ (resp. $M$).
Then we say that $(M,A)$ is a {\it reduction of $(X,L)$} if
there exists a birational morphism $\mu: X\to M$ such that $\mu$ is a composition of simple blowing ups
and $(M,A)$ is not obtained by a simple blowing up of any polarized manifold.
The map $\mu: X\to M$ is called the {\it reduction map}.
\end{Definition}
\begin{Remark}\label{Remark1.7.1}
Let $(X,L)$ be a polarized manifold and let $(M,A)$ be a reduction of $(X,L)$.
Let $\mu: X\to M$ be the reduction map.
\begin{itemize}
\item [(i)]
If $(X,L)$ is not obtained by a simple blowing up of another polarized manifold, then $(X,L)$ is a reduction of itself.
\item [(ii)]
A reduction of $(X,L)$ always exists (see \cite[Chapter II, (11.11)]{Fujita-Book}).
\end{itemize}
\end{Remark}
\begin{Definition}\label{T-D1}
A quasi-polarized surface $(S,L)$ is said to be {\it $L$-minimal} if $LE>0$ for every $(-1)$-curve $E$ on $S$.
\end{Definition}
\begin{Lemma}\label{Lemma B}
Let $X$ be a complete normal variety of dimension $n$,
and let $D_{1}$ and $D_{2}$ be effective Cartier divisors on $X$.
Then $h^{0}(D_{1}+D_{2})\geq h^{0}(D_{1})+h^{0}(D_{2})-1$.
\end{Lemma}
\noindent{\em Proof.}
See \cite[Lemma 1.10]{Fukuma3} or \cite[15.6.2 Lemma]{Kollar95}. $\Box$
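As a sanity check (our illustration, not part of the cited proof), consider $X=\mathbb{P}^{1}$, where $h^{0}(\mathcal{O}_{\mathbb{P}^{1}}(d))=d+1$ for $d\geq 0$ and an effective divisor has degree $\geq 0$; the inequality of the lemma then holds with equality. A minimal Python sketch:

```python
def h0_p1(d):
    # h^0(P^1, O(d)) = d + 1 for d >= 0, and 0 otherwise
    return d + 1 if d >= 0 else 0

# Lemma: h^0(D1 + D2) >= h^0(D1) + h^0(D2) - 1 for effective D1, D2;
# on P^1 the bound (a + b + 1) >= (a + 1) + (b + 1) - 1 is an equality.
lemma_holds = all(
    h0_p1(a + b) >= h0_p1(a) + h0_p1(b) - 1
    for a in range(0, 8)
    for b in range(0, 8)
)
```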
\begin{Proposition}\label{GHIT}
Let $X$ be a projective variety of dimension $n$ and let $D_{i}$ be $\mathbb{Q}$-Cartier divisors on $X$ for $0\leq i\leq k$.
Assume that $n\geq 2$ and that $D_{i}$ is nef for every integer $i$ with $1\leq i\leq k$.
If $n_{1}+\cdots +n_{k}=n-1$ and $n_{1}\geq 1$, then we have
$$(D_{0}D_{1}^{n_{1}}\cdots D_{k}^{n_{k}})^{2}\geq
(D_{0}^{2}D_{1}^{n_{1}-1}\cdots D_{k}^{n_{k}})(D_{1}^{n_{1}+1}\cdots D_{k}^{n_{k}}).$$
\end{Proposition}
\noindent{\em Proof.}
See \cite[Proposition 2.5.1]{BeSo-Book}. $\Box$
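For instance, in the first nontrivial case $n=2$, $k=1$, the proposition reduces to the Hodge-type inequality $(D_{0}D_{1})^{2}\geq (D_{0}^{2})(D_{1}^{2})$ for $D_{1}$ nef. The following Python sketch (our illustration, using the standard intersection form on $\mathbb{P}^{1}\times\mathbb{P}^{1}$, where $(aF_{1}+bF_{2})\cdot(cF_{1}+dF_{2})=ad+bc$ and a class is nef exactly when both coordinates are $\geq 0$) verifies this over a range of divisor classes:

```python
import itertools

def inter(p, q):
    # Intersection of classes p = a*F1 + b*F2, q = c*F1 + d*F2 on P^1 x P^1:
    # F1^2 = F2^2 = 0 and F1.F2 = 1, so p.q = a*d + b*c.
    return p[0] * q[1] + p[1] * q[0]

# (D0.D1)^2 >= (D0^2)(D1^2) with D1 nef; here it amounts to
# (a*d + b*c)^2 - 4*a*b*c*d = (a*d - b*c)^2 >= 0.
inequality_holds = all(
    inter(d0, d1) ** 2 >= inter(d0, d0) * inter(d1, d1)
    for d0 in itertools.product(range(-3, 4), repeat=2)
    for d1 in itertools.product(range(0, 4), repeat=2)
)
```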
\begin{Proposition}\label{2P-T1}
Let $X$ be a normal projective surface and let $\pi: S\to X$ be a resolution of singularities of $X$.
Then $\chi(\mathcal{O}_{S})+h^{0}(R^{1}\pi_{*}(\mathcal{O}_{S}))=\chi(\mathcal{O}_{X})$.
In particular $\chi(\mathcal{O}_{S})\leq \chi(\mathcal{O}_{X})$ holds.
\end{Proposition}
\noindent
{\em Proof.}
By using Leray's spectral sequence for $\pi^{*}(\mathcal{O}_{X})$,
we have
$$\chi(\pi^{*}\mathcal{O}_{X})=\sum_{q\geq 0}(-1)^{q}\chi(R^{q}\pi_{*}(\pi^{*}\mathcal{O}_{X})).$$
Since $R^{q}\pi_{*}(\pi^{*}\mathcal{O}_{X})\cong R^{q}\pi_{*}(\mathcal{O}_{S})$
and $R^{q}\pi_{*}(\mathcal{O}_{S})=0$ for every integer $q$ with $q\geq 2$,
we have
$$\chi(\pi^{*}\mathcal{O}_{X})=\chi(\pi_{*}(\mathcal{O}_{S}))-\chi(R^{1}\pi_{*}(\mathcal{O}_{S})).$$
Here we also note that $\pi_{*}(\mathcal{O}_{S})=\mathcal{O}_{X}$ because $\pi$ is birational and $X$ is normal (see \cite[Corollary 11.4 in Chapter III]{Hartshorne}).
Moreover $\chi(R^{1}\pi_{*}(\mathcal{O}_{S}))=h^{0}(R^{1}\pi_{*}(\mathcal{O}_{S}))$ because $\dim\mbox{Supp}(R^{1}\pi_{*}(\mathcal{O}_{S}))\leq 0$.
Therefore since $\mathcal{O}_{S}=\pi^{*}(\mathcal{O}_{X})$, we get the assertion. $\Box$
\begin{Lemma}\label{4L-T4}
Let $X$ be a smooth projective variety of dimension $n$ and let $Y$ be a normal projective variety of dimension $m$ with $n>m\geq 1$.
Assume that $q(X)=q(Y)$ and there exists a fiber space $f: X\to Y$,
that is, $f$ is a surjective morphism with connected fibers.
Then for any resolution of singularities of $Y$, $\pi: Z\to Y$, we have $q(Z)=q(Y)$.
In particular, if $q(Y)\geq 1$, then the Albanese map of $Y$ can be defined.
\end{Lemma}
\noindent
{\em Proof.} By assumption, there exist smooth projective varieties $X_{1}$ and $Y_{1}$, birational morphisms $\mu_{1}: X_{1}\to X$ and $\nu_{1}: Y_{1}\to Y$,
and a fiber space $f_{1}:X_{1}\to Y_{1}$
such that $f\circ \mu_{1}=\nu_{1}\circ f_{1}$.
Here we note that $q(X)=q(X_{1})$ and $q(X_{1})\geq q(Y_{1})$.
Moreover $q(Y_{1})\geq q(Y)$ holds.
Hence we get $q(Y_{1})\geq q(Y)=q(X)=q(X_{1})\geq q(Y_{1})$ and we have $q(Y_{1})=q(Y)$.
On the other hand let $Z$ be any resolution of singularities of $Y$.
Then $q(Z)=q(Y_{1})$ because $Z$ is birationally equivalent to $Y_{1}$.
In particular, by \cite[(0.3.3) Lemma]{12} or \cite[Lemma 2.4.1 and Remark 2.4.2]{BeSo-Book}, the Albanese map of $Y$ can be defined.
Hence we get the assertion of Lemma \ref{4L-T4}. $\Box$
\section{Review on the sectional geometric genus}\label{S3}
In this section, we review the definition and some properties of the sectional
geometric genus of polarized manifolds, which will be used later.
\begin{Notation}\label{Notation1.1}
Let $X$ be a projective variety of dimension $n$
and let $L$ be a line bundle on $X$.
Let $\chi(tL)$ be the Euler-Poincar\'e characteristic of $tL$, where $t$ is an indeterminate.
Then we put
$$\chi(tL)=\sum_{j=0}^{n}\chi_{j}(X,L){t+j-1\choose j}.$$
\end{Notation}
\begin{Definition}\label{Definition1.2}
Let $X$ be a projective variety of dimension $n$
and let $L$ be a line bundle on $X$.
Then for every integer $i$ with $0\leq i\leq n$,
the {\it $i$-th sectional $H$-arithmetic genus $\chi_{i}^{H}(X,L)$} and
the {\it $i$-th sectional geometric genus $g_{i}(X,L)$ of $(X,L)$} are defined by the following:
\begin{eqnarray*}
\chi_{i}^{H}(X,L)&:=&\chi_{n-i}(X,L),\\
g_{i}(X,L)
&:=&(-1)^{i}(\chi_{i}^{H}(X,L)-\chi(\mathcal{O}_{X}))
+\sum_{j=0}^{n-i}(-1)^{n-i-j}h^{n-j}(\mathcal{O}_{X}).
\end{eqnarray*}
\end{Definition}
\begin{Remark}\label{Remark1.2.1}
\begin{itemize}
\item [(1)] Since $\chi_{n-i}(X,L)\in \mathbb{Z}$, we see that $\chi_{i}^{H}(X,L)$ and $g_{i}(X,L)$ are integers by definition.
\item [(2)]
If $i=0$, then $\chi_{0}^{H}(X,L)$ and $g_{0}(X,L)$ are equal to the degree of $(X,L)$.
\item [(3)]
If $i=1$, then $g_{1}(X,L)$ is equal to the sectional genus $g(X,L)$ of $(X,L)$.
\item [(4)]
If $i=n$, then $\chi_{n}^{H}(X,L)=\chi(\mathcal{O}_{X})$ and $g_{n}(X,L)=h^{n}(\mathcal{O}_{X})$.
\end{itemize}
\end{Remark}
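The coefficients $\chi_{j}(X,L)$ of Notation \ref{Notation1.1} can be read off from values of $\chi(tL)$: since the difference operator $\Delta f(t)=f(t)-f(t-1)$ satisfies $\Delta{t+j-1\choose j}={t+j-2\choose j-1}$, one gets $\chi_{j}(X,L)=(\Delta^{j}\chi)(0)$. The following Python sketch (our illustration) implements this and checks it against Riemann-Roch on a curve, $\chi(tL)=t\deg L+1-g$, where $\chi_{0}=\chi(\mathcal{O}_{X})$ and $\chi_{1}=\deg L$, in accordance with Remark \ref{Remark1.2.1} (2) and (4):

```python
from math import comb

def binomial_basis_coeffs(chi, n):
    # chi: the Hilbert polynomial t -> chi(tL); n = dim X.
    # Returns [chi_0, ..., chi_n] with chi(t) = sum_j chi_j * C(t+j-1, j),
    # via chi_j = (Delta^j chi)(0) = sum_{i=0}^{j} (-1)^i * C(j, i) * chi(-i).
    return [
        sum((-1) ** i * comb(j, i) * chi(-i) for i in range(j + 1))
        for j in range(n + 1)
    ]

# Riemann-Roch on a smooth curve of genus g with deg L = d:
d, g = 3, 2
coeffs = binomial_basis_coeffs(lambda t: t * d + 1 - g, 1)
# coeffs = [chi_0, chi_1] = [chi(O_X), deg L] = [1 - g, d]
```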
\begin{Theorem}\label{Theorem1.3}
Let $(X,L)$ be a quasi-polarized manifold with $\dim X=n$.
For every integer $i$ with $0\leq i\leq n-1$, we have
$$g_{i}(X,L)=\sum_{j=0}^{n-i-1}(-1)^{j}{n-i\choose j}h^{0}(K_{X}+(n-i-j)L)
+\sum_{k=0}^{n-i}(-1)^{n-i-k}h^{n-k}(\mathcal{O}_{X}).$$
\end{Theorem}
\noindent{\em Proof.}
See \cite[Theorem 2.3]{Fukuma3}. $\Box$
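As a consistency check (our illustration), take $n=1$ and $i=0$: for a smooth curve $X$ of genus $g$ and an ample line bundle $L$ of degree $d\geq 1$, the formula reads $g_{0}(X,L)=h^{0}(K_{X}+L)-h^{1}(\mathcal{O}_{X})+h^{0}(\mathcal{O}_{X})$, and Riemann-Roch gives $h^{0}(K_{X}+L)=g-1+d$ (since $h^{1}(K_{X}+L)=h^{0}(-L)=0$), so $g_{0}(X,L)=d$, the degree of $(X,L)$, as in Remark \ref{Remark1.2.1} (2):

```python
def g0_curve(g, d):
    # Theorem formula for n = 1, i = 0:
    #   g_0(X, L) = h^0(K_X + L) - h^1(O_X) + h^0(O_X).
    # Riemann-Roch on a genus-g curve with deg L = d >= 1:
    #   h^0(K_X + L) = deg(K_X + L) + 1 - g = (2g - 2 + d) + 1 - g = g - 1 + d,
    # because h^1(K_X + L) = h^0(-L) = 0.
    h0_adjoint = g - 1 + d
    return h0_adjoint - g + 1

# the 0-th sectional geometric genus is the degree of (X, L)
matches_degree = all(g0_curve(g, d) == d for g in range(0, 6) for d in range(1, 6))
```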
\\
\par The following theorem will often be used later.
\begin{Theorem}\label{Theorem1.4}
Let $(X,L)$ be a polarized $3$-fold.
Assume that $\kappa(K_{X}+L)\geq 0$.
Then $g_{2}(X,L)\geq h^{1}(\mathcal{O}_{X})$.
\end{Theorem}
\noindent
{\em Proof.}
See \cite[Theorem 3.3.1 (2)]{Fukuma05-2}. $\Box$
\begin{Notation}\label{B1}
Let $X$ be a projective variety of dimension $n$, let $i$ be an integer with $0\leq i\leq n-1$, and let $L_{1},\dots , L_{n-i}$ be line bundles on $X$.
Then $\chi(L_{1}^{t_{1}}\otimes\cdots\otimes L_{n-i}^{t_{n-i}})$ is a polynomial in $t_{1}, \dots ,t_{n-i}$ of total degree at most $n$.
So we can write $\chi(L_{1}^{t_{1}}\otimes\cdots
\otimes L_{n-i}^{t_{n-i}})$ uniquely as follows.
\begin{eqnarray*}
&&\chi(L_{1}^{t_{1}}\otimes\cdots\otimes L_{n-i}^{t_{n-i}}) \\
&&=\sum_{p=0}^{n}\sum
_{\stackrel{p_{1}\geq 0,\dots , p_{n-i}\geq 0}
{p_{1}+\cdots +p_{n-i}=p}}
\chi_{p_{1},\dots , p_{n-i}}(L_{1},\dots ,L_{n-i})
{t_{1}+p_{1}-1\choose p_{1}}\dots{t_{n-i}+p_{n-i}-1\choose p_{n-i}}.
\end{eqnarray*}
\end{Notation}
\begin{Definition}\label{B2}(\cite[Definition 2.1 and Remark 2.2 (2)]{Fukuma11})
Let $X$ be a projective variety of dimension $n$, let $i$ be an integer with $0\leq i\leq n$, and let $L_{1},\dots , L_{n-i}$ be line bundles on $X$.
\\
(1) The {\it $i$-th sectional $H$-arithmetic genus $\chi_{i}^{H}(X,L_{1},\dots , L_{n-i})$} is defined by the following:
\[
\chi_{i}^{H}(X,L_{1},\dots , L_{n-i})=
\left\{
\begin{array}{ll}
\chi_{\underbrace{1, \dots , 1}_{n-i}}(L_{1},\dots , L_{n-i})
& \mbox{if $0\leq i\leq n-1$,} \\
\chi(\mathcal{O}_{X}) & \mbox{if $i=n$.}
\end{array}\right. \]
\\
(2) The {\it $i$-th sectional geometric genus $g_{i}(X,L_{1},\dots , L_{n-i})$} is defined by the following:
\begin{eqnarray*}
g_{i}(X,L_{1},\dots , L_{n-i})
&=&(-1)^{i}(\chi_{i}^{H}(X,L_{1},\dots , L_{n-i})-\chi({\mathcal{O}}_{X})) \\
&&\ \ \ +\sum_{j=0}^{n-i}(-1)^{n-i-j}h^{n-j}({\mathcal{O}}_{X}).
\end{eqnarray*}
\end{Definition}
\begin{Remark}\label{B6}
(1) Let $X$ be a projective variety of dimension $n$ and
let $L$ be a line bundle on $X$.
Let $i$ be an integer with $0\leq i\leq n-1$.
Then
$$\chi_{i}^{H}(X,L,\dots , L)=\chi_{i}^{H}(X,L)$$
and
$$g_{i}(X,L,\dots , L)=g_{i}(X,L).$$
(See \cite[Corollary 2.1]{Fukuma11}.)
\\
(2)
Let $X$ be a smooth projective variety of dimension $n$, and let $L_{1},\dots , L_{n-1}$ be line bundles on $X$.
Then
$$g_{1}(X,L_{1}, \dots, L_{n-1})=1+\frac{1}{2}\left(K_{X}+\sum_{j=1}^{n-1}L_{j}\right)L_{1}\cdots L_{n-1}.$$
(See \cite[Corollary 2.7]{Fukuma11} or \cite[Proposition 6.1.1]{Fukuma08-4}.)
\end{Remark}
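For example (our illustration), on $X=\mathbb{P}^{2}$ with $L_{1}=\mathcal{O}(d)$ and $K_{X}=\mathcal{O}(-3)$, the formula in (2) gives $g_{1}(\mathbb{P}^{2},\mathcal{O}(d))=1+\frac{1}{2}(d-3)d=\frac{(d-1)(d-2)}{2}$, the genus of a smooth plane curve of degree $d$. A short Python check:

```python
from fractions import Fraction

def sectional_genus_p2(d):
    # g_1(P^2, O(d)) = 1 + (1/2) * (K_X + L) . L, with K_X = O(-3), L = O(d)
    # and O(a).O(b) = a*b on P^2.
    return 1 + Fraction((d - 3) * d, 2)

# agrees with the genus (d - 1)(d - 2)/2 of a smooth plane curve of degree d
agrees = all(
    sectional_genus_p2(d) == Fraction((d - 1) * (d - 2), 2)
    for d in range(1, 20)
)
```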
\begin{Theorem}\label{B18}
Let $X$ be a projective variety of dimension $n$ and let $i$ be an integer with $1\leq i\leq n$.
Let $A, B, L_{1}, \cdots , L_{n-i-1}$ be line bundles on $X$.
Then
\begin{eqnarray*}
&&\chi_{i}^{H}(X,A+B,L_{1},\cdots , L_{n-i-1})\\
&&=\chi_{i}^{H}(X,A,L_{1},\cdots , L_{n-i-1})+\chi_{i}^{H}(X,B,L_{1},\cdots , L_{n-i-1}) \\
&&\ \ \ -\chi_{i-1}^{H}(X,A,B,L_{1},\cdots , L_{n-i-1})
\end{eqnarray*}
and
\begin{eqnarray*}
&&g_{i}(X,A+B,L_{1},\cdots , L_{n-i-1}) \\
&&=g_{i}(X,A,L_{1},\cdots , L_{n-i-1})+g_{i}(X,B,L_{1},\cdots , L_{n-i-1}) \\
&&\ \ \ +g_{i-1}(X,A,B,L_{1},\cdots , L_{n-i-1})-h^{i-1}(\mathcal{O}_{X}).
\end{eqnarray*}
\end{Theorem}
\noindent{\em Proof.} See \cite[Corollary 2.4]{Fukuma11}. $\Box$
\begin{Proposition}\label{SKB}
Let $X$ be a smooth projective variety with $\dim X=n\geq 2$,
let $L_{1},\cdots ,L_{m}$ be nef and big line bundles on $X$ and let $L$ be a nef line bundle, where $m\geq 1$.
Then
\begin{eqnarray*}
&&h^{0}(K_{X}+L_{1}+\cdots +L_{m}+L)-h^{0}(K_{X}+L_{1}+\cdots +L_{m})\\
&&=\sum_{s=0}^{n-1}\sum_{(k_{1},\cdots,k_{n-s-1})\in A_{n-s-1}^{m}}
g_{s}(X, L_{k_{1}},\cdots, L_{k_{n-s-1}},L) \\
&&\ \ \ -\sum_{s=0}^{n-2}{m-1\choose n-s-2}h^{s}(\mathcal{O}_{X}).
\end{eqnarray*}
Here $A_{t}^{p}:=\left\{ (k_{1},\cdots , k_{t})\ |\ k_{l}\in \{ 1, \cdots , p\}, k_{i}<k_{j} \ \mbox{\rm if $i<j$}\right\}$, and we set
\[
\sum_{(k_{1},\cdots,k_{n-s-1})\in A_{n-s-1}^{m}}
g_{s}(X, L_{k_{1}},\cdots, L_{k_{n-s-1}},L)
=\left\{
\begin{array}{lc}
0 & \mbox{if $n-s-1>m$,} \\
g_{n-1}(X,L) & \mbox{if $s=n-1$.}
\end{array} \right. \]
\end{Proposition}
\noindent{\em Proof.}
See \cite[Theorem 5.1]{Fukuma08-3}. $\Box$
\section{Special cases}\label{S4}
In this section, we investigate the dimension of adjoint linear systems in some special cases.
First we prove the following.
\begin{Theorem}\label{3-1T1}
Let $(X,L)$ be a polarized manifold of dimension $n\geq 2$ and let $C$ be a smooth projective curve.
Assume that there exists a fiber space $f:X\to C$ such that $K_{X}+L=f^{*}(H)$ for some ample line bundle $H$ on $C$.
Then for every positive integer $m$
\[
h^{0}(m(K_{X}+L))\geq
\left\{
\begin{array}{lc}
(m-1)(g(C)-1)+mg(C) & \mbox{if $g(C)\geq 1$,} \\
m+1 & \mbox{if $g(C)=0$.}
\end{array} \right.
\]
In particular $h^{0}(K_{X}+L)>0$ holds.
\end{Theorem}
\noindent{\em Proof.}
In this case
\begin{eqnarray*}
h^{0}(m(K_{X}+L))
&=&h^{0}(f^{*}(mH)) \\
&=&h^{0}(mH) \\
&=&h^{1}(mH)+\deg(mH)+(1-g(C)).
\end{eqnarray*}
On the other hand, by \cite[Lemma 1.13]{Fukuma3}, we have $\deg H\geq 2g(C)-1$.
Hence if $g(C)\geq 1$, then
\begin{eqnarray*}
h^{0}(mH)&\geq& m(2g(C)-1)+1-g(C)\\
&=&(2m-1)g(C)-(m-1) \\
&=&(m-1)(g(C)-1)+mg(C).
\end{eqnarray*}
If $g(C)=0$, then $h^{1}(mH)=0$ and
$h^{0}(mH)=\deg(mH)+1\geq m+1$.
Therefore
\[
h^{0}(m(K_{X}+L))\geq
\left\{
\begin{array}{lc}
(m-1)(g(C)-1)+mg(C) & \mbox{if $g(C)\geq 1$,} \\
m+1 & \mbox{if $g(C)=0$.}
\end{array} \right.
\]
This completes the proof. $\Box$
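For example, if $g(C)=2$ and $m=3$, then $\deg H\geq 2g(C)-1=3$, so $\deg(3H)\geq 9>2g(C)-2$ and $h^{1}(3H)=0$. Hence
$$h^{0}(3(K_{X}+L))=h^{0}(3H)=\deg(3H)+1-g(C)\geq 8=(m-1)(g(C)-1)+mg(C),$$
with equality when $\deg H=3$.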
\begin{Corollary}\label{3-1T1C}
Let $(X,L)$ be a polarized manifold of dimension $n\geq 2$ and let $C$ be a smooth projective curve.
Assume that there exists a fiber space $f:X\to C$ such that $K_{X}+L=f^{*}(H)$ for some ample line bundle $H$ on $C$.
Then for every positive integer $m$
\[
h^{0}(m(K_{X}+L))\geq
\left\{
\begin{array}{lc}
m & \mbox{if $g(C)\geq 1$,} \\
m+1 & \mbox{if $g(C)=0$.}
\end{array} \right.
\]
\end{Corollary}
\begin{Theorem}\label{3-1T3}
Let $(X,L)$ be a polarized manifold of dimension $n\geq 2$ and let $C$ be a smooth projective curve.
Assume that there exists a fiber space $f:X\to C$ such that $K_{X}+L=f^{*}(H)$ for some ample line bundle $H$ on $C$.
\begin{itemize}
\item [\rm (1)]
If $g(C)\geq 1$ and $h^{0}(m(K_{X}+L))=m$ for some positive integer $m$, then
$g(C)=1$ and $\deg H=1$.
\item [\rm (2)]
If $g(C)=0$ and $h^{0}(m(K_{X}+L))=m+1$ for some positive integer $m$, then
$(C,H)\cong (\mathbb{P}^{1}, \mathcal{O}_{\mathbb{P}^{1}}(1))$.
\end{itemize}
\end{Theorem}
\noindent{\em Proof.}
(1) Assume that $g(C)\geq 1$ and $h^{0}(m(K_{X}+L))=m$.
Then by the proof of Theorem \ref{3-1T1} we have $g(C)=1$ and $\deg H=1$.
\\
(2) Assume that $g(C)=0$ and $h^{0}(m(K_{X}+L))=m+1$.
Then the proof of Theorem \ref{3-1T1} implies that $\deg H=1$, that is, $H=\mathcal{O}_{\mathbb{P}^{1}}(1)$.
Therefore $(C,H)\cong (\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(1))$.
So we get the assertion. $\Box$
\\
\par
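We also note that the boundary value in Theorem \ref{3-1T3} (1) actually occurs at the level of the base: if $g(C)=1$ and $\deg H=1$, then $\deg(mH)=m>0=2g(C)-2$, so $h^{1}(mH)=0$ and the Riemann--Roch theorem gives $h^{0}(m(K_{X}+L))=h^{0}(mH)=m$ for every positive integer $m$.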
Next we consider the following case.
\begin{Theorem}\label{3-1T2}
Let $(X,L)$ be a polarized manifold of dimension $n\geq 3$ and let $Y$ be a normal projective surface.
Assume that there exists a fiber space $f:X\to Y$ such that $K_{X}+L=f^{*}(H)$ for some ample line bundle $H$ on $Y$.
Then for every positive integer $m$
\[
h^{0}(m(K_{X}+L))\geq
\left\{
\begin{array}{lc}
{m+1\choose 2}-(m-1)\chi(\mathcal{O}_{Y}) & \mbox{if $\chi(\mathcal{O}_{Y})\leq 0$,} \\
{m\choose 2}+\chi(\mathcal{O}_{Y}) & \mbox{if $\chi(\mathcal{O}_{Y})>0$.}
\end{array} \right.
\]
In particular $h^{0}(K_{X}+L)>0$ holds.
\end{Theorem}
\noindent{\em Proof.}
In this case $h^{0}(m(K_{X}+L))=h^{0}(mH)$.
Here we note the following.
\begin{Claim}\label{CL1}
$h^{i}(mH)=0$ for $i=1, 2$.
\end{Claim}
\noindent
{\em Proof.}
Since $f^{*}(mH)-K_{X}=(m-1)K_{X}+mL=(m-1)(K_{X}+L)+L$ is ample,
we have $R^{i}f_{*}(f^{*}(mH))=0$ for every $i>0$
by \cite[Theorem 1.7]{Fukuma3}.
Hence by \cite[Exercise 8.1 page 252 in Chapter III]{Hartshorne} we have
$h^{i}(f^{*}(mH))=h^{i}(f_{*}f^{*}(mH))=h^{i}(mH)$.
Therefore for every $i>0$
\begin{eqnarray*}
h^{i}(mH)&=&h^{i}(f^{*}(mH)) \\
&=&h^{i}(m(K_{X}+L)) \\
&=&h^{i}(K_{X}+(m-1)(K_{X}+L)+L) \\
&=&0.
\end{eqnarray*}
This completes the proof of Claim \ref{CL1}. $\Box$
\\
\par
By Claim \ref{CL1}, we have $h^{0}(m(K_{X}+L))=h^{0}(mH)=\chi(mH)$.
Here we use Notation \ref{Notation1.1}.
Then $\chi_{0}(Y,H)=\chi(\mathcal{O}_{Y})$, $\chi_{1}(Y,H)=1-g(Y,H)$ and
$\chi_{2}(Y,H)=H^{2}$, where $g(Y,H)$ denotes the sectional genus of $(Y,H)$.
Let $\delta: S\to Y$ be a minimal resolution of $Y$.
Then there exist a smooth projective variety $X_{1}$,
a birational morphism $\mu_{1}:X_{1}\to X$ and
a fiber space $f_{1}:X_{1}\to S$ such that $f\circ \mu_{1}=\delta\circ f_{1}$.
\\
\\
(I) The case where $\chi(\mathcal{O}_{Y})\leq 0$.
\\
Then
\begin{eqnarray}
\chi(mH)-m\chi(H)
&=&\sum_{j=0}^{2}\chi_{j}(Y,H){m+j-1\choose j}-m\sum_{j=0}^{2}\chi_{j}(Y,H)
\label{T-EQ1}\\
&=&-(m-1)\chi(\mathcal{O}_{Y})+\left({m+1\choose 2}-m\right)H^{2}\nonumber\\
&\geq&{m+1\choose 2}-m-(m-1)\chi(\mathcal{O}_{Y}) \nonumber\\
&=&{m\choose 2}-(m-1)\chi(\mathcal{O}_{Y}). \nonumber
\end{eqnarray}
Therefore $\chi(mH)\geq m\chi(H)+{m\choose 2}-(m-1)\chi(\mathcal{O}_{Y})=mh^{0}(H)+{m\choose 2}-(m-1)\chi(\mathcal{O}_{Y})$.
\\
\par
Next we prove the following claim.
\begin{Claim}\label{CL3}
$h^{0}(H)>0$.
\end{Claim}
\noindent{\em Proof.}
Since $\chi(\mathcal{O}_{Y})\leq 0$ in this case, we have $h^{1}(\mathcal{O}_{Y})>0$. Because $h^{1}(\mathcal{O}_{X})=h^{1}(\mathcal{O}_{Y})$ holds, by Lemma \ref{4L-T4} we see that $Y$ has the Albanese map.
Let $\alpha: Y\to \mbox{Alb}(Y)$ be the Albanese map of $Y$
and let $h:=\alpha\circ f$.
Here we note that $\dim h(X)=1$ or $2$.
\\
(a) First we consider the case where $\dim h(X)=2$.
By \cite[Corollary 10.7 in Chapter III]{Hartshorne}
any general fiber $F_{h}$ of $h$ can be written as follows:
$F_{h}=\cup_{i=1}^{r}F_{i}$, where $F_{i}$ is a smooth projective variety
of dimension $n-2$.
We note that $F_{i}$ is a fiber of $f$ for every $i$.
Since $(K_{X}+L)|_{F_{i}}=f^{*}(H)|_{F_{i}}\cong \mathcal{O}_{F_{i}}$,
we have
$$
h^{0}((K_{X}+L)|_{F_{h}})
=\sum_{i=1}^{r}h^{0}(K_{F_{i}}+L_{F_{i}})
=\sum_{i=1}^{r}h^{0}(\mathcal{O}_{F_{i}})
>0.
$$
By \cite[Lemma 4.1]{1.5} we have $h^{0}(H)=h^{0}(K_{X}+L)>0$.
\\
(b) Next we consider the case where $\dim h(X)=1$.
Then we note that $h$ has connected fibers.
Let $F_{h}$ (resp. $F_{\alpha}$) be a general fiber of $h$ (resp. $\alpha$).
Then $f|_{F_{h}}:F_{h}\to F_{\alpha}$ is a fiber space such that
$K_{F_{h}}+L_{F_{h}}=f^{*}(H)|_{F_{h}}=(f|_{F_{h}})^{*}(H|_{F_{\alpha}})$.
Here we note that $F_{h}$ and $F_{\alpha}$ are smooth projective varieties.
Since $H$ is ample, so is $H_{F_{\alpha}}$ on $F_{\alpha}$.
Since $\dim F_{\alpha}=1$, by Theorem \ref{3-1T1}
we have $h^{0}(K_{F_{h}}+L_{F_{h}})>0$.
Therefore by \cite[Lemma 4.1]{1.5} we get $h^{0}(H)=h^{0}(K_{X}+L)>0$.
This completes the proof. $\Box$
\\
\par
Claim \ref{CL3} implies that by (\ref{T-EQ1})
\begin{eqnarray*}
\chi(mH)&\geq& mh^{0}(H)+{m\choose 2}-(m-1)\chi(\mathcal{O}_{Y}) \\
&\geq& m+{m\choose 2}-(m-1)\chi(\mathcal{O}_{Y}) \\
&\geq& {m+1\choose 2}-(m-1)\chi(\mathcal{O}_{Y}).
\end{eqnarray*}
\noindent
\\
(II) Next we consider the case where $\chi(\mathcal{O}_{Y})>0$.
First we prove the following lemma.
\begin{Lemma}\label{L1}
$\chi_{1}(Y,H)+\chi_{2}(Y,H)\geq 0$.
\end{Lemma}
\noindent{\em Proof.}
First we note that
$K_{X_{1}}+\mu_{1}^{*}(L)\geq \mu_{1}^{*}(K_{X}+L)=\mu_{1}^{*}f^{*}(H)=f_{1}^{*}\delta^{*}(H)$.
Hence
for a general fiber $F_{1}$ of $f_{1}$, we have $0<h^{0}((K_{X_{1}}+\mu_{1}^{*}(L))|_{F_{1}})=h^{0}(K_{F_{1}}+(\mu_{1}^{*}(L))_{F_{1}})$.
Hence we have $(f_{1})_{*}(K_{X_{1}/S}+\mu_{1}^{*}(L))\neq 0$.
By Hironaka's theory there exist a smooth projective variety $X_{2}$ and a birational morphism $\mu_{2}:X_{2}\to X_{1}$ such that
$$\mu_{2}^{*}f_{1}^{*}((f_{1})_{*}(K_{X_{1}/S}+\mu_{1}^{*}(L)))\to \mu_{2}^{*}(K_{X_{1}/S}+\mu_{1}^{*}(L)-D)-E_{2}$$
is surjective, where $D$ is an effective divisor on $X_{1}$ and $E_{2}$ is a $\mu_{2}$-exceptional effective divisor on $X_{2}$.
Since $(f_{1})_{*}(K_{X_{1}/S}+\mu_{1}^{*}(L))$ is weakly positive (\cite[Theorem A$^{\prime}$ in Appendix]{Fukuma1.5}),
we see that $\mu_{2}^{*}(K_{X_{1}/S}+\mu_{1}^{*}(L)-D)-E_{2}$ is pseudo effective (see the proof of (1) in \cite[Remark 1.3.2]{Fukuma1.5}).
Here we note that for every positive integer $p$ we have
$$0\leq (\mu_{2}^{*}(K_{X_{1}/S}+\mu_{1}^{*}(L)-D)-E_{2})\mu_{2}^{*}f_{1}^{*}\delta^{*}(H)(\mu_{2}^{*}\mu_{1}^{*}(pL))^{n-2}$$
because $H$ is ample.
On the other hand
\begin{eqnarray*}
&&(\mu_{2}^{*}(K_{X_{1}/S}+\mu_{1}^{*}(L)-D)-E_{2})\mu_{2}^{*}f_{1}^{*}\delta^{*}(H)(\mu_{2}^{*}\mu_{1}^{*}(pL))^{n-2}\\
&&=(K_{X_{1}/S}+\mu_{1}^{*}(L)-D)(f_{1}^{*}\delta^{*}(H))(\mu_{1}^{*}(pL))^{n-2}\\
&&\leq (K_{X_{1}/S}+\mu_{1}^{*}(L))(f_{1}^{*}\delta^{*}(H))(\mu_{1}^{*}(pL))^{n-2}.
\end{eqnarray*}
Since $K_{X_{1}}=\mu_{1}^{*}K_{X}+E_{1}$, where $E_{1}$ is a $\mu_{1}$-exceptional effective divisor on $X_{1}$,
we have
\begin{eqnarray*}
&&(K_{X_{1}/S}+\mu_{1}^{*}(L))(f_{1}^{*}\delta^{*}(H))(\mu_{1}^{*}(pL))^{n-2}\\
&&=(\mu_{1}^{*}(K_{X}+L)-f_{1}^{*}(K_{S})+E_{1})(f_{1}^{*}\delta^{*}(H))(\mu_{1}^{*}(pL))^{n-2} \\
&&=(f_{1}^{*}(\delta^{*}(H)-K_{S})+E_{1})(\mu_{1}^{*}f^{*}(H))(\mu_{1}^{*}(pL))^{n-2} \\
&&=f_{1}^{*}(\delta^{*}(H)-K_{S})(\mu_{1}^{*}f^{*}(H))(\mu_{1}^{*}(pL))^{n-2}\\
&&=f_{1}^{*}(\delta^{*}(H)-K_{S})(f_{1}^{*}\delta^{*}(H))(\mu_{1}^{*}(pL))^{n-2}.
\end{eqnarray*}
Here we take $p$ such that $\mbox{Bs}|\mu_{1}^{*}(pL)|=\emptyset$.
Then there exist $n-2$ general members $H_{1}, \dots , H_{n-2}$ in $|\mu_{1}^{*}(pL)|$ such that $H_{1}\cap \dots \cap H_{n-2}$ is a smooth projective surface $S_{1}$.
Then $f_{1}|_{S_{1}}:S_{1}\to S$ is a surjective morphism and we have
\begin{eqnarray*}
&&f_{1}^{*}(\delta^{*}(H)-K_{S})(f_{1}^{*}\delta^{*}(H))(\mu_{1}^{*}(pL))^{n-2}\\
&&=f_{1}^{*}(\delta^{*}(H)-K_{S})f_{1}^{*}(\delta^{*}(H))S_{1}\\
&&=(\deg f_{1}|_{S_{1}})(\delta^{*}(H)-K_{S})\delta^{*}(H).
\end{eqnarray*}
On the other hand, since $\chi_{2}(Y,H)=\chi_{2}(S,\delta^{*}(H))$ and $\chi_{1}(Y,H)=\chi_{1}(S,\delta^{*}(H))$,
we have
$(\delta^{*}(H)-K_{S})\delta^{*}(H)=2(\chi_{1}(S,\delta^{*}(H))+\chi_{2}(S,\delta^{*}(H)))=2(\chi_{1}(Y,H)+\chi_{2}(Y,H))$.
Hence we get the assertion. $\Box$
\\
\par
Therefore we get
\begin{eqnarray*}
h^{0}(mH)=\chi(mH)
&=& \chi_{0}(Y,H)+\chi_{1}(Y,H)m+\chi_{2}(Y,H){m+1\choose 2}\\
&=& \chi(\mathcal{O}_{Y})+m(\chi_{1}(Y,H)+\chi_{2}(Y,H))+\left({m+1\choose 2}-m\right)\chi_{2}(Y,H)\\
&\geq& \chi(\mathcal{O}_{Y})+{m\choose 2}.
\end{eqnarray*}
Therefore
$$h^{0}(m(K_{X}+L))\geq {m\choose 2}+\chi(\mathcal{O}_{Y}).$$
This completes the proof. $\Box$
\begin{Corollary}\label{3-1T2C}
Let $(X,L)$ be a polarized manifold of dimension $n\geq 3$ and let $Y$ be a normal projective surface.
Assume that there exists a fiber space $f:X\to Y$ such that $K_{X}+L=f^{*}(H)$ for some ample line bundle $H$ on $Y$.
Then for every positive integer $m$
\[
h^{0}(m(K_{X}+L))\geq
\left\{
\begin{array}{lc}
{m+1\choose 2} & \mbox{if $\chi(\mathcal{O}_{Y})\leq 0$,} \\
{m\choose 2}+1 & \mbox{if $\chi(\mathcal{O}_{Y})>0$.}
\end{array} \right.
\]
\end{Corollary}
\begin{Theorem}\label{3-1T4}
Let $(X,L)$ be a polarized manifold of dimension $n\geq 3$ and let $Y$ be a normal projective surface.
Assume that there exists a fiber space $f:X\to Y$ such that $K_{X}+L=f^{*}(H)$ for some ample line bundle $H$ on $Y$.
\begin{itemize}
\item [\rm (1)]
If $\chi(\mathcal{O}_{Y})\leq 0$ and $h^{0}(m(K_{X}+L))={m+1\choose 2}$ for some positive integer $m\geq 2$, then
$Y$ is smooth and $(Y,H)$ is a scroll over a smooth elliptic curve $C$ such that $H^{2}=1$.
\item [\rm (2)]
If $\chi(\mathcal{O}_{Y})>0$ and $h^{0}(m(K_{X}+L))={m\choose 2}+1$ for some positive integer $m\geq 2$, then one of the following holds.
{\rm (}Here let $\delta: S\to Y$ be the minimal resolution of $Y$.{\rm )}
\begin{itemize}
\item [\rm (2.0)]
$\kappa(S)=2$, $Y$ has at most canonical singularities with $h^{1}(\mathcal{O}_{Y})=0$ and $\chi(\mathcal{O}_{Y})=1$, and $H=K_{Y}+T$ with $H^{2}=1$, where $T$ is a nonzero torsion divisor.
\item [\rm (2.1)]
$\kappa(S)=1$ and there exists an elliptic fibration $h:S\to C$
over a smooth curve $C$ such that $g(C)=1$, $\chi(\mathcal{O}_{S})=1$,
$q(S)=1$ and $\delta^{*}(H)F=1$, where $F$ is a general fiber of $h$.
In this case $Y$ has only rational singularities.
\item [\rm (2.2)]
$\kappa(S)=1$ and there exists an elliptic fibration $h:S\to C$
over a smooth curve $C$ such that $g(C)=0$, $\chi(\mathcal{O}_{S})=1$, $q(S)=0$
and one of the following holds.
{\rm (}Here let $t$ be the number of multiple fibers.{\rm )}
\begin{center}
\begin{tabular}{|c|c|c|c|}
$p_{g}(S)$ & $\delta^{*}(H)F$ & $t$
& $(m_{1}, \dots , m_{t})$ \\ \hline
$0$ & $6$ & $2$ & $(2,3)$ \\
$1$ & $4$ & $2$ & $(2,4)$ \\
$0$ & $3$ & $2$ & $(3,3)$ \\
$0$ & $2$ & $3$ & $(2,2,2)$
\end{tabular}
\end{center}
\item [\rm (2.3)]
$S$ is a one point blowing up of an Enriques surface $S^{\prime}$
and $\delta^{*}(H)=\mu^{*}(H^{\prime})-E_{\mu}$, where $\mu:S\to S^{\prime}$
is the blowing up at a point $P$, $H^{\prime}$ is an ample line bundle on $S^{\prime}$ and $E_{\mu}$ is the exceptional divisor.
\item [\rm (2.4)]
$\kappa(S)=-\infty$ and $q(S)=0$.
In this case $Y$ has only rational singularities.
\end{itemize}
\end{itemize}
\end{Theorem}
\noindent{\em Proof.}
Let $\delta: S\to Y$ be the minimal resolution of $Y$.
\\
(I) The case where $\chi(\mathcal{O}_{Y})\leq 0$.
\\
Then $h^{0}(m(K_{X}+L))\geq {m+1\choose 2}-(m-1)\chi(\mathcal{O}_{Y})$ by Theorem \ref{3-1T2}.
\\
Assume that $h^{0}(m(K_{X}+L))={m+1\choose 2}$.
Then, since $m\geq 2$, by the proof of Theorem \ref{3-1T2}, we have $\chi(\mathcal{O}_{Y})=0$, $H^{2}=1$ and $h^{0}(H)=1$.
Hence by Claim \ref{CL1}
\begin{eqnarray*}
1=h^{0}(H)&=&\chi(H)\\
&=&\chi(\mathcal{O}_{Y})+(1-g(Y,H))+H^{2}\\
&=&2-g(Y,H).
\end{eqnarray*}
Hence $g(Y,H)=1$.
Moreover since $\chi(\mathcal{O}_{Y})=0$, we have $h^{1}(\mathcal{O}_{Y})>0$.
Then $g(S,\delta^{*}(H))=g(Y,H)=1$, and hence $K_{S}\delta^{*}(H)=-(\delta^{*}(H))^{2}<0$.
In particular $\kappa(S)=-\infty$. Since $\delta^{*}(H)$ is nef and big, we have $g(S,\delta^{*}(H))\geq h^{1}(\mathcal{O}_{S})$ by \cite[Theorem 2.1]{Fukuma97-1}.
Moreover because $h^{1}(\mathcal{O}_{S})\geq h^{1}(\mathcal{O}_{Y})$, we have $1=g(S,\delta^{*}(H))\geq h^{1}(\mathcal{O}_{S})\geq h^{1}(\mathcal{O}_{Y})>0$.
Hence $g(S,\delta^{*}(H))=h^{1}(\mathcal{O}_{S})$ and $h^{1}(\mathcal{O}_{S})=h^{1}(\mathcal{O}_{Y})=1$.
Here we note that $(S,\delta^{*}(H))$ is $\delta^{*}(H)$-minimal because $H$ is ample and $\delta$ is the minimal resolution.
Hence by \cite[Theorem 3.1]{Fukuma97-1}, we see that $(S,\delta^{*}(H))$ is a scroll over a smooth curve.
Then we can prove the following.
\begin{Claim}\label{CL2}
$\delta$ is the identity map.
\end{Claim}
\noindent
{\em Proof.}
Since $h^{1}(\mathcal{O}_{S})=h^{1}(\mathcal{O}_{Y})$, we see that $Y$ has the Albanese map by Lemma \ref{4L-T4}.
Then there exists an elliptic curve $C$ and morphisms $\alpha: Y\to C$ and $\alpha^{\prime}: S\to C$ such that $\alpha^{\prime}=\alpha\circ \delta$.
Here we note that $\alpha$ and $\alpha^{\prime}$ have connected fibers.
Since $S$ is a $\mathbb{P}^{1}$-bundle over $C$ with projection $\alpha^{\prime}$, we see that any fiber of $\alpha^{\prime}$ is irreducible.
Assume that $\delta$ is not the identity map.
Then $\mbox{Sing}(Y)\neq\emptyset$ and $\alpha^{\prime}$ has a reducible fiber.
But this is a contradiction.
Therefore $\delta$ is the identity map. $\Box$
\\
\par
Hence $S\cong Y$, that is, $Y$ is smooth, and $(Y,H)$ is a scroll over a smooth elliptic curve $C$.
In particular, there exists an ample vector bundle $\mathcal{E}$ on $C$ such that $Y=\mathbb{P}_{C}(\mathcal{E})$ and $H=H(\mathcal{E})$.
Then $c_{1}(\mathcal{E})=1$ because $H^{2}=1$.
Therefore we see that $\mathcal{E}$ is an indecomposable ample vector bundle on $C$.
\\
\\
(II) Assume that $\chi(\mathcal{O}_{Y})>0$.
\\
Then we have $h^{0}(m(K_{X}+L))\geq {m\choose 2}+1$.
We consider $(X,L)$ with
$h^{0}(m(K_{X}+L))={m\choose 2}+1$.
Then, since $m\geq 2$, by the proof of Theorem \ref{3-1T2} we obtain
$\chi(\mathcal{O}_{Y})=\chi_{0}(Y,H)=1$, $\chi_{1}(Y,H)+\chi_{2}(Y,H)=0$ and $H^{2}=\chi_{2}(Y,H)=1$.
Hence we have $g(Y,H)=1-\chi_{1}(Y,H)=2$.
\par
Hence we see that the quasi-polarized surface $(S,\delta^{*}(H))$
is $\delta^{*}(H)$-minimal with $g(S,\delta^{*}(H))=2$.
(Here we note that quasi-polarized surfaces of this type were studied in \cite{Bi-Fa-La06}.)
We also note that $\delta^{*}(H)^{2}=1$ and $K_{S}\delta^{*}(H)=1$.
\par
Next we study $(S,\delta^{*}(H))$ with $g(S,\delta^{*}(H))=2$.
\\
(II.a) Assume that $\kappa(S)=2$.
Since $(\delta^{*}H)^{2}=H^{2}=1$ and $\delta^{*}(H)K_{S}=HK_{Y}=1$,
we see that $S$ is minimal because $(S,\delta^{*}(H))$ is $\delta^{*}(H)$-minimal (see Definition \ref{T-D1}).
By the Hodge index theorem we have $\delta^{*}(H)\equiv K_{S}$
and $K_{S}^{2}=1$.
Then $h^{1}(\mathcal{O}_{S})=0$ and $h^{1}(\mathcal{O}_{Y})=0$.
On the other hand $K_{S}=\delta^{*}(K_{Y})+E_{\delta}$ holds, where $E_{\delta}$ is a $\delta$-exceptional divisor.
Here we note that $E_{\delta}$ is not always effective.
Hence $\delta^{*}(H-K_{Y})\equiv E_{\delta}$.
If $E_{\delta}\neq 0$, then $(E_{\delta})^{2}<0$ by Grauert's criterion (e.g. \cite[(2.1) Theorem in Chapter III]{BaHuPeVa04}).
But since $\delta^{*}(H-K_{Y})E_{\delta}=0$, this is impossible.
Therefore we have $E_{\delta}=0$ and $K_{S}=\delta^{*}(K_{Y})$.
Therefore $Y$ has at most canonical singularities.
Namely the singularities of $Y$ are at most rational double points.
Therefore $Y$ is Gorenstein and $K_{Y}$ is a Cartier divisor.
Since $\delta^{*}(H)\equiv \delta^{*}(K_{Y})$, we have $H\equiv K_{Y}$.
If $H=K_{Y}$, then $h^{2}(H)=h^{2}(K_{Y})=h^{0}(\mathcal{O}_{Y})=1$.
But this contradicts Claim \ref{CL1}.
Therefore $H=K_{Y}+T$, where $T$ is a nonzero torsion divisor.
\\
(II.b) Next we consider the case where $\kappa(S)=1$.
Here we use the results of \cite{LaTu98}.
Let $h:S\to C$ be its elliptic fibration.
Then, since $(\delta^{*}H)^{2}=1$ and $K_{S}\delta^{*}H=1$, the following cases are possible by \cite{LaTu98}.
\begin{itemize}
\item [(1)] $h$ has no multiple fibers (see \cite[Table 3.1]{LaTu98}).
\begin{itemize}
\item [\rm (1.1)] $g(C)=0$, $\chi(\mathcal{O}_{S})=3$, $q(S)=0$, $p_{g}(S)=2$ and $\delta^{*}(H)F=1$.
\item [\rm (1.2)] $g(C)=1$, $\chi(\mathcal{O}_{S})=1$, $q(S)=1$, $p_{g}(S)=1$ and $\delta^{*}(H)F=1$. (This is the type (2.1) in Theorem \ref{3-1T4}.)
\end{itemize}
\item [\rm (2)] The case listed in \cite[Table 4.1]{LaTu98}. (This is the type (2.2) in Theorem \ref{3-1T4}.)
\item [\rm (3)] $h$ has only one multiple fiber and its multiplicity is $2$.
In this case $g(C)=1$, $\chi(\mathcal{O}_{S})=0$, $q(S)=1$, $p_{g}(S)=0$ and $\delta^{*}HF=2$ (see the first case of \cite[Table 5.1]{LaTu98}).
\item [\rm (4)] The case listed in \cite[Table 5.2]{LaTu98}.
\end{itemize}
\begin{Lemma}\label{L3}
The cases {\rm (1.1)}, {\rm (3)} and {\rm (4)} above are impossible.
\end{Lemma}
\noindent{\em Proof.}
First we consider the case of (1.1).
In this case $\chi(\mathcal{O}_{S})=3>1=\chi(\mathcal{O}_{Y})$.
But this is impossible by Proposition \ref{2P-T1} because $\chi(\mathcal{O}_{Y})=\chi(\mathcal{O}_{X})$.
\par
Next we consider the case (3) above.
Since $q(S)=1$, $S$ has the Albanese fibration $\alpha: S\to B$, where $B$ is an elliptic curve.
In this case, since $C$ is also an elliptic curve, by the universality of the Albanese map we see that there exists a morphism $\lambda: B\to C$ such that $h=\lambda\circ \alpha$.
Because $\alpha$ and $h$ have connected fibers, we see that $\lambda$ is an isomorphism.
Namely we may assume that $\alpha=h$.
Moreover by Lemma \ref{4L-T4} the Albanese map of $Y$ can be defined, and let $\alpha_{Y}:Y\to B$ be its morphism.
But here $h$ is a quasi-bundle, so $\alpha$ is also a quasi-bundle.
(For the definition of quasi-bundle, see \cite[Definition 1.1]{Serrano91}.)
Hence $\delta$ is an isomorphism because $\alpha=\alpha_{Y}\circ \delta$.
Therefore $Y\cong S$.
But then $\chi(\mathcal{O}_{Y})=\chi(\mathcal{O}_{S})=0$ and this is a contradiction.
\par
Finally we consider the case (4).
Then by \cite[Proposition 5.1]{LaTu98}, $\delta^{*}H$ is ample.
Namely $\delta$ is an isomorphism.
But then $\chi(\mathcal{O}_{Y})=\chi(\mathcal{O}_{S})=0$ and
this is also impossible.
\par
This completes the proof of Lemma \ref{L3}. $\Box$
\\
(II.c) Next we consider the case where $\kappa(S)=0$.
Let $\mu:S\to S^{\prime}$ be the minimalization of $S$.
If $\delta$ is an isomorphism, then $\chi(\mathcal{O}_{S})=\chi(\mathcal{O}_{Y})=1$ and $S^{\prime}$ is an Enriques surface.
If $\delta$ is not an isomorphism, then
since $g(S,\delta^{*}(H))=2$, by \cite[Proposition 3.2]{Bi-Fa-La06}
we see that $S^{\prime}$ is either an Enriques surface or a K3-surface.
If $S^{\prime}$ is a K3-surface, then $\chi(\mathcal{O}_{S^{\prime}})=2$.
But by Proposition \ref{2P-T1} this is impossible because $\chi(\mathcal{O}_{Y})=1$ in this case.
Therefore $S^{\prime}$ is an Enriques surface.
\\
(II.d) Next we consider the case where $\kappa(S)=-\infty$.
By Proposition \ref{2P-T1} we see that $\chi(\mathcal{O}_{S})\leq \chi(\mathcal{O}_{Y})=1$.
Since $g(S,\delta^{*}(H))=2$, we have $q(S)\leq 2$ by \cite[Theorem 2.1]{Fukuma97-1}.
By Lemma \ref{4L-T4}, we have $q(Y)=q(S)$ and if $q(Y)\geq 1$, then there exist the Albanese map of $Y$, $\alpha_{Y}: Y\to \mbox{Alb}(Y)$, and a morphism $\beta : \mbox{Alb}(S)\to \mbox{Alb}(Y)$ such that $\alpha_{Y}\circ\delta=\beta\circ\alpha_{S}$ holds, where $\alpha_{S}: S\to \mbox{Alb}(S)$ is the Albanese map of $S$.
Then $\alpha_{S}(S)$ and $\alpha_{Y}(Y)$ are smooth curves and $\alpha_{S}$ and $\alpha_{Y}$ have connected fibers (see \cite[Lemma 2.4.5]{BeSo-Book}).
Hence $\alpha_{S}(S)\cong \alpha_{Y}(Y)$.
\\
(i) If $q(S)=2$, then $g(S,\delta^{*}(H))=q(S)$ implies that $(S,\delta^{*}(H))$ is a scroll over a smooth curve by \cite[Theorem 3.1]{Fukuma97-1}.
Here we note that $\delta$ is an isomorphism because $S$ is a $\mathbb{P}^{1}$-bundle over $\alpha_{S}(S)$.
But then $\chi(\mathcal{O}_{Y})=\chi(\mathcal{O}_{S})=-1$ and this is impossible.\\
(ii) Next we consider the case where $q(S)=1$.
Assume that $K_{S}+\delta^{*}(H)$ is not nef.
Then there exists an extremal rational curve $E$ on $S$ such that $(K_{S}+\delta^{*}(H))E<0$.
If $E$ is a $(-1)$-curve, then $(K_{S}+\delta^{*}(H))E\geq 0$
since $(S,\delta^{*}(H))$ is $\delta^{*}(H)$-minimal, so $E$ is not a $(-1)$-curve.
Hence $S$ is a $\mathbb{P}^{1}$-bundle over a smooth elliptic curve $C$ and $E$ is a fiber of this bundle because $q(S)=1$.
Let $f: S\to C$ be its morphism.
Moreover we see that $\delta^{*}(H)F=1$ for any fiber $F$ of $f$ because $(K_{S}+\delta^{*}(H))F<0$.
Then $g(S,\delta^{*}(H))=q(S)=1$.
But this contradicts $g(S,\delta^{*}(H))=g(Y,H)=2$.
Hence $K_{S}+\delta^{*}(H)$ is nef.
So we get $0\leq (K_{S}+\delta^{*}(H))^{2}=K_{S}^{2}+2K_{S}\delta^{*}(H)+(\delta^{*}(H))^{2}=3+K_{S}^{2}$, that is, $-3\leq K_{S}^{2}$.
On the other hand $K_{S}^{2}\leq 0$ and $K_{S}^{2}=0$ if and only if $S$ is minimal.
Hence $S$ is at most three points blowing up of a $\mathbb{P}^{1}$-bundle over $C$.
\\
(ii.1) Assume that $S$ is a $\mathbb{P}^{1}$-bundle over $C$.
Then $S\cong Y$ because every exceptional curve of $\delta$ is contained in a fiber of $\alpha_{S}$.
But this is impossible because $\chi(\mathcal{O}_{S})=0\neq1=\chi(\mathcal{O}_{Y})$.
\\
(ii.2) Assume that $S$ is one point blowing up of a $\mathbb{P}^{1}$-bundle over $C$.
Then $\alpha_{S}$ has one singular fiber $F_{1}$ and $F_{1}=C_{1}+C_{2}$, where each $C_{i}$ is a $(-1)$-curve and $C_{1}C_{2}=1$.
Since $\delta$ is the minimal resolution, we have $S\cong Y$.
But this is also impossible by the same reason as in (ii.1).
\\
(ii.3) Assume that $S$ is two point blowing up of a $\mathbb{P}^{1}$-bundle over $C$.
Then the following two cases possibly occur:\\
(ii.3.1) $\alpha_{S}$ has one singular fiber $F$ and $F=C_{1}+C_{2}+C_{3}$, where $C_{1}$ and $C_{3}$ are $(-1)$-curves and $C_{2}$ is a $(-2)$-curve such that $C_{1}C_{2}=1$, $C_{2}C_{3}=1$ and $C_{1}C_{3}=0$.
\\
(ii.3.2) $\alpha_{S}$ has two singular fibers $F_{1}$ and $F_{2}$ such that $F_{1}=C_{1}+C_{2}$, $F_{2}=C_{3}+C_{4}$, where each $C_{i}$ is a $(-1)$-curve with $C_{1}C_{2}=1$ and $C_{3}C_{4}=1$.\\
By the same argument as (ii.2), (ii.3.2) cannot occur.
So we consider the case (ii.3.1).
Then since $\delta$ is the minimal resolution, the exceptional curve of $\delta$ is $C_{2}$. So $Y$ has only rational singularities by Artin's criterion \cite[(3.2) Theorem in Chapter III]{BaHuPeVa04}.
But this is impossible because $\chi(\mathcal{O}_{S})=0\neq1=\chi(\mathcal{O}_{Y})$.
\\
(ii.4) Assume that $S$ is three point blowing up of a $\mathbb{P}^{1}$-bundle over $C$.
Then the following four cases possibly occur:\\
(ii.4.1) $\alpha_{S}$ has one singular fiber $F$ and $F=C_{1}+C_{2}+C_{3}+C_{4}$, where $C_{2}$, $C_{3}$ and $C_{4}$ are $(-1)$-curves and $C_{1}$ is a $(-3)$-curve such that $C_{1}C_{i}=1$ for every $i$ with $i=2, 3, 4$, $C_{j}C_{k}=0$ with $j,k\in\{ 2, 3, 4\}$ and $j\neq k$.
\\
(ii.4.2) $\alpha_{S}$ has one singular fiber $F$ and $F=C_{1}+C_{2}+C_{3}+C_{4}$, where $C_{1}$ and $C_{4}$ are $(-1)$-curves, and $C_{2}$ and $C_{3}$ are $(-2)$-curves such that $C_{i}C_{i+1}=1$ for every $i$ with $i=1, 2, 3$, $C_{j}C_{k}=0$ with $|j-k|\geq 2$.
\\
(ii.4.3) $\alpha_{S}$ has two singular fibers $F_{1}$ and $F_{2}$ such that $F_{1}=C_{1}+C_{2}+C_{3}$, $F_{2}=C_{4}+C_{5}$, where $C_{i}$ is a $(-1)$-curve for every $i\neq 2$ and $C_{2}$ is a $(-2)$-curve such that $C_{1}C_{2}=1$, $C_{2}C_{3}=1$, $C_{1}C_{3}=0$ and $C_{4}C_{5}=1$.
\\
(ii.4.4) $\alpha_{S}$ has three singular fibers $F_{1}$, $F_{2}$ and $F_{3}$ such that $F_{1}=C_{1}+C_{2}$, $F_{2}=C_{3}+C_{4}$ and $F_{3}=C_{5}+C_{6}$, where each $C_{i}$ is a $(-1)$-curve such that $C_{i}C_{i+1}=1$ with $i\in \{ 1, 3, 5\}$.
\\
By the same argument as above,
in these four cases we see that $\delta$ is an isomorphism
or $Y$ has rational singularities.
But this is impossible because $\chi(\mathcal{O}_{S})=0\neq \chi(\mathcal{O}_{Y})$.
\par
Therefore the case where $q(S)=1$ cannot occur.
By the above argument, we see that $q(S)=0$.
Then $\chi(\mathcal{O}_{S})=1=\chi(\mathcal{O}_{Y})$ and by Proposition \ref{2P-T1} we have $h^{0}(R^{1}\delta_{*}(\mathcal{O}_{S}))=0$.
So $Y$ has rational singularities.
This completes the proof. $\Box$
\section{Main results}\label{S5}
Let $(X,L)$ be a polarized manifold of dimension $3$.
In this section, we consider $h^{0}(m(K_{X}+L))$.
First by Theorems \ref{3-1T1} and \ref{3-1T2} we have the following.
\begin{Theorem}\label{3-2T1}
Let $(X,L)$ be a polarized manifold of dimension $3$.
\begin{itemize}
\item [\rm (1)]
Assume that $\kappa(K_{X}+L)=0$.
Then $h^{0}(m(K_{X}+L))=1$ for every positive integer $m$.
\item [\rm (2)] Assume that $\kappa(K_{X}+L)=1$.
Then for every positive integer $m$ the following holds.
\[
h^{0}(m(K_{X}+L))\geq
\left\{
\begin{array}{lc}
(m-1)(h^{1}(\mathcal{O}_{X})-1)+mh^{1}(\mathcal{O}_{X}) & \mbox{if $h^{1}(\mathcal{O}_{X})\geq 1$,} \\
m+1 & \mbox{if $h^{1}(\mathcal{O}_{X})=0$.}
\end{array} \right.
\]
\item [\rm (3)] Assume that $\kappa(K_{X}+L)=2$.
Then for every positive integer $m$ the following holds.
\[
h^{0}(m(K_{X}+L))\geq
\left\{
\begin{array}{lc}
{m+1\choose 2}-(m-1)\chi(\mathcal{O}_{X}) & \mbox{if $\chi(\mathcal{O}_{X})\leq 0$,} \\
{m\choose 2}+\chi(\mathcal{O}_{X}) & \mbox{if $\chi(\mathcal{O}_{X})>0$.}
\end{array} \right.
\]
\end{itemize}
\end{Theorem}
\noindent{\em Proof.}
Let $(M,A)$ be a reduction of $(X,L)$.
Here we note that $h^{0}(m(K_{X}+L))=h^{0}(m(K_{M}+A))$ for any positive integer $m$.
If $\kappa(K_{X}+L)=0$, then $(M,A)$ is a Mukai manifold, that is, $\mathcal{O}_{M}(K_{M}+A)=\mathcal{O}_{M}$ by \cite[Theorem 7.5.3]{BeSo-Book}.
This implies that $h^{0}(m(K_{X}+L))=h^{0}(m(K_{M}+A))=1$.
\par
If $\kappa(K_{X}+L)=1$ (resp. $2$), then by \cite[Theorem 7.5.3]{BeSo-Book} there exist a smooth projective curve $C$ (resp. a normal projective surface $Y$), and a fiber space $f:M\to C$ (resp. $M\to Y$) such that $K_{M}+A=f^{*}(H)$ for some ample line bundle $H$ on $C$ (resp. $Y$).
Moreover we have $h^{1}(\mathcal{O}_{X})=h^{1}(\mathcal{O}_{M})=h^{1}(\mathcal{O}_{C})$ (resp. $h^{i}(\mathcal{O}_{X})=h^{i}(\mathcal{O}_{M})=h^{i}(\mathcal{O}_{Y})$ for $i=0, 1, 2$ and $h^{3}(\mathcal{O}_{X})=0$).
Hence by Theorems \ref{3-1T1} and \ref{3-1T2} we get the assertion. $\Box$
\\
\par
Next we consider the case where $\kappa(K_{X}+L)=3$.
Then the following is obtained.
\begin{Theorem}\label{3-2T2}
Let $(X,L)$ be a polarized manifold of dimension $3$.
Assume that $\kappa(K_{X}+L)=3$.
Then we have
\[
h^{0}(m(K_{X}+L))
\geq \left\{
\begin{array}{lc}
\frac{1}{8}m^{3}+\frac{1}{4}m^{2}+1 & \mbox{if $m$ is even with $m\geq 2$,} \\
\frac{1}{8}m^{3}+\frac{1}{4}m^{2}+\frac{1}{8}m+1 & \mbox{if $m$ is odd with $m\geq 3$.}
\end{array} \right.
\]
\end{Theorem}
\noindent{\em Proof.}
Let $(M,A)$ be a reduction of $(X,L)$.
By assumption and \cite[Proposition 7.6.9]{BeSo-Book} we see that $K_{M}+A$ is nef.
\\
(I) The case where $m$ is even with $m\geq 2$.
\\
Then by Proposition \ref{SKB} we have the following.
\begin{eqnarray*}
h^{0}(m(K_{X}+L))
&=&h^{0}(m(K_{M}+A))\\
&=&h^{0}\left(\left(\frac{m}{2}+1\right)K_{M}+\frac{m}{2}A\right)
+g_{2}\left(M,\left(\frac{m}{2}-1\right)(K_{M}+A)+A\right)\\
&&\ \ \ -h^{1}(\mathcal{O}_{M})
+g_{1}\left(M,\left(\frac{m}{2}-1\right)(K_{M}+A)+A, \frac{m}{2}(K_{M}+A)\right).
\end{eqnarray*}
Since $((m/2)-1)(K_{M}+A)+A$ is ample and $\kappa(K_{M}+((m/2)-1)(K_{M}+A)+A)=\kappa(K_{M}+A)=3$, we have
$g_{2}(M,((m/2)-1)(K_{M}+A)+A)\geq h^{1}(\mathcal{O}_{M})$ by Theorem \ref{Theorem1.4}.
On the other hand, by Remark \ref{B6} (2) we have
\begin{eqnarray*}
&&g_{1}\left(M,\left(\frac{m}{2}-1\right)(K_{M}+A)+A, \frac{m}{2}(K_{M}+A)\right)\\
&&=1+\frac{1}{2}\left(K_{M}+\left(\frac{m}{2}-1\right)(K_{M}+A)+A+\frac{m}{2}(K_{M}+A)\right)\\
&&\ \ \ \times\left(\left(\frac{m}{2}-1\right)(K_{M}+A)+A\right)\left(\frac{m}{2}(K_{M}+A)\right)\\
&&=1+\frac{m^{2}(m-2)}{8}(K_{M}+A)^{3}+\frac{m^{2}}{4}(K_{M}+A)^{2}A.
\end{eqnarray*}
We also note that $(K_{M}+A)^{3}\geq 1$ and $(K_{M}+A)^{2}A\geq 1$.
\\
If $(K_{M}+A)^{2}A=1$, then by Proposition \ref{GHIT} we see that
$(K_{M}+A)A^{2}=1$ and $A^{3}=1$ because $(K_{M}+A)A^{2}>0$.
Hence $g_{1}(M,A)=2$.
Therefore by \cite[(1.10) Theorem and Section 2]{Fujita87-2} we see that $K_{M}=\mathcal{O}_{M}$ and $h^{0}(A)\geq 1$ since $\kappa(K_{M}+A)=3$.
On the other hand, we have
$$h^{0}(m(K_{M}+A))=h^{0}(mA)=\chi(mA)=\frac{1}{6}m^{3}A^{3}+\frac{1}{12}mc_{2}(M)A$$
because $h^{i}(mA)=h^{i}(K_{M}+mA)=0$ for every $i>0$.
Since $h^{0}(A)\geq 1$, we get
$$1\leq h^{0}(A)=\frac{1}{6}A^{3}+\frac{1}{12}c_{2}(M)A.$$
Hence $(1/12)c_{2}(M)A\geq 1-(1/6)A^{3}=5/6$.
So we obtain
\begin{eqnarray*}
h^{0}(m(K_{M}+A))
&=&\frac{1}{6}m^{3}A^{3}+\frac{1}{12}mc_{2}(M)A\\
&\geq& \frac{1}{6}m^{3}+\frac{5}{6}m.
\end{eqnarray*}
\noindent
\\
If $(K_{M}+A)^{2}A\geq 2$, then
\begin{eqnarray*}
h^{0}(m(K_{M}+A))
&\geq & 1+\frac{m^{2}(m-2)}{8}+2\frac{m^{2}}{4}\\
&=&\frac{1}{8}m^{3}+\frac{1}{4}m^{2}+1.
\end{eqnarray*}
Here we note that
$(1/6)m^{3}+(5/6)m-((1/8)m^{3}+(1/4)m^{2}+1)=(1/24)(m-2)((m-2)^{2}+8)\geq 0$.
So if $m$ is even with $m\geq 2$, then we have $h^{0}(m(K_{M}+A))\geq (1/8)m^{3}+(1/4)m^{2}+1$.
\\
(II) The case where $m$ is odd with $m\geq 3$.
\\
Here we use the following equality which is obtained from Proposition \ref{SKB}.
\begin{eqnarray*}
h^{0}(m(K_{X}+L))
&=&h^{0}(m(K_{M}+A))\\
&=&h^{0}\left(\left(\frac{m+1}{2}+1\right)K_{M}+\frac{m+1}{2}A\right)\\
&&\ \ \ +g_{2}\left(M,\left(\frac{m-1}{2}-1\right)(K_{M}+A)+A\right)-h^{1}(\mathcal{O}_{M})\\
&&\ \ \ +g_{1}\left(M,\left(\frac{m-1}{2}-1\right)(K_{M}+A)+A, \frac{m+1}{2}(K_{M}+A)\right).
\end{eqnarray*}
Since $(-1+(m-1)/2)(K_{M}+A)+A$ is ample and $\kappa(K_{M}+(-1+(m-1)/2)(K_{M}+A)+A)=\kappa(((m-1)/2)(K_{M}+A))=\kappa(K_{M}+A)=3$, we have
$g_{2}(M,(-1+(m-1)/2)(K_{M}+A)+A)\geq h^{1}(\mathcal{O}_{M})$ by Theorem \ref{Theorem1.4}.
On the other hand,
\begin{eqnarray*}
&&g_{1}\left(M,\left(\frac{m-1}{2}-1\right)(K_{M}+A)+A, \frac{m+1}{2}(K_{M}+A)\right)\\
&&=1+\frac{1}{2}\left(K_{M}+\left(\frac{m-1}{2}-1\right)(K_{M}+A)+A+\frac{m+1}{2}(K_{M}+A)\right)\\
&&\ \ \ \times\left(\left(\frac{m-1}{2}-1\right)(K_{M}+A)+A\right)\left(\frac{m+1}{2}(K_{M}+A)\right)\\
&&=1+\frac{m(m+1)(m-3)}{8}(K_{M}+A)^{3}+\frac{m(m+1)}{4}(K_{M}+A)^{2}A.
\end{eqnarray*}
If $(K_{M}+A)^{2}A=1$, then by the same argument as above we see that
$$h^{0}(m(K_{M}+A))\geq\frac{1}{6}m^{3}+\frac{5}{6}m.$$
\noindent
If $(K_{M}+A)^{2}A\geq 2$, then we have
\begin{eqnarray*}
h^{0}(m(K_{M}+A))
&\geq & 1+\frac{m(m+1)(m-3)}{8}+\frac{m(m+1)}{2}\\
&=&\frac{1}{8}m^{3}+\frac{1}{4}m^{2}+\frac{1}{8}m+1.
\end{eqnarray*}
Here we note that
$(1/6)m^{3}+(5/6)m-((1/8)m^{3}+(1/4)m^{2}+(1/8)m+1)=(1/24)(m-3)((m-(3/2))^{2}+23/4)\geq 0$.
So if $m$ is odd with $m\geq 3$, then we have $h^{0}(m(K_{M}+A))\geq (1/8)m^{3}+(1/4)m^{2}+(1/8)m+1$.
This completes the proof of Theorem \ref{3-2T2}. $\Box$
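As an aside (a mechanical sanity check, not part of the proof), the two polynomial factorizations used in cases (I) and (II) above, and the resulting comparisons of the bounds, can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

def even_case_gap(m):
    # (1/6)m^3 + (5/6)m - ((1/8)m^3 + (1/4)m^2 + 1)
    return F(1, 6) * m**3 + F(5, 6) * m - (F(1, 8) * m**3 + F(1, 4) * m**2 + 1)

def even_case_factored(m):
    # (1/24)(m-2)((m-2)^2 + 8)
    return F(1, 24) * (m - 2) * ((m - 2)**2 + 8)

def odd_case_gap(m):
    # (1/6)m^3 + (5/6)m - ((1/8)m^3 + (1/4)m^2 + (1/8)m + 1)
    return (F(1, 6) * m**3 + F(5, 6) * m
            - (F(1, 8) * m**3 + F(1, 4) * m**2 + F(1, 8) * m + 1))

def odd_case_factored(m):
    # (1/24)(m-3)((m-3/2)^2 + 23/4)
    return F(1, 24) * (m - 3) * ((m - F(3, 2))**2 + F(23, 4))

for m in range(2, 60):
    # the displayed factorization identities hold for all m
    assert even_case_gap(m) == even_case_factored(m)
    assert odd_case_gap(m) == odd_case_factored(m)
    # and the gaps are nonnegative in the relevant parity ranges
    if m % 2 == 0:
        assert even_case_gap(m) >= 0
    if m % 2 == 1 and m >= 3:
        assert odd_case_gap(m) >= 0
print("identities verified")
```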
\begin{Remark}\label{R1}
By Theorem \ref{3-2T2} we see that if $\kappa(K_{X}+L)=3$, then for every integer $m$ with $m\geq 2$, we have
$$h^{0}(m(K_{X}+L))\geq \frac{1}{8}m^{3}+\frac{1}{4}m^{2}+1.$$
\end{Remark}
If $\kappa(K_{X}+L)=3$ and $m=2$, then by Theorem \ref{3-2T2} or \cite[Theorem 5.4 (2)]{Fukuma08-3} we have $h^{0}(2(K_{X}+L))\geq 3$.
So it is interesting to study $(X,L)$ with $\kappa(K_{X}+L)=3$ and small $h^{0}(2(K_{X}+L))$.
The following results (Theorems \ref{EC1} and \ref{EC2}) give a classification of these $(X,L)$.
\\
\par
First we note the following which will be used later.
\begin{Proposition}\label{32P-T1}
Let $(X,L)$ be a polarized manifold of dimension $3$.
Then the following equalities hold.
\begin{eqnarray}
&&h^{0}(2K_{X}+2L)-h^{0}(2K_{X}+L) \label{m2.3.1}\\
&&=g_{2}(X,L)-h^{1}(\mathcal{O}_{X})+g_{1}(X,K_{X}+L,L), \nonumber \\
&&h^{0}(2K_{X}+2L)-h^{0}(K_{X}+L) \label{m2.3.2}\\
&&=g_{2}(X,K_{X}+L)-h^{1}(\mathcal{O}_{X})+g_{1}(X,K_{X}+L,L). \nonumber
\end{eqnarray}
\end{Proposition}
\noindent
{\em Proof.} These equalities are obtained from Proposition \ref{SKB}. $\Box$
\begin{Notation}\label{32N-T1}
Let $(X,L)$ be a polarized manifold of dimension $3$ and let $(M,A)$ be a reduction of $(X,L)$.
Set $d_{1}:=g_{2}(M,A)-h^{1}(\mathcal{O}_{M})$ and $d_{2}:=g_{2}(M,K_{M}+A)-h^{1}(\mathcal{O}_{M})$.
Then we see that
\begin{eqnarray*}
d_{2}-d_{1}
&=&\frac{1}{12}(K_{M}+A)(6K_{M}+6A)K_{M}+\frac{1}{12}c_{2}(M)K_{M}\\
&=&\frac{1}{12}(K_{M}+A)(6K_{M}+6A)K_{M}-2\chi(\mathcal{O}_{M}).
\end{eqnarray*}
Therefore
\begin{equation}
d_{2}-d_{1}+2\chi(\mathcal{O}_{M})=\frac{1}{2}(K_{M}+A)^{2}K_{M}.\label{m2.3.7}
\end{equation}
\end{Notation}
\begin{Theorem}\label{EC1}
Let $(X,L)$ be a polarized manifold of dimension $3$.
Assume that $\kappa(K_{X}+L)=3$.
Then $h^{0}(2(K_{X}+L))=3$ if and only if $(X,L)$ satisfies $L^{3}=1$,
$\mathcal{O}_{X}(K_{X})=\mathcal{O}_{X}$,
$h^{1}(\mathcal{O}_{X})=0$ and $h^{0}(L)=1$.
\end{Theorem}
\noindent{\em Proof.}
($\alpha$) Assume that $h^{0}(2(K_{X}+L))=3$.
\\
Let $(M,A)$ be a reduction of $(X,L)$.
Then by assumption we see that $K_{M}+A$ is nef and big.
First we prove the following claim.
\begin{Claim}\label{CL4}
$h^{0}(K_{M}+A)\leq 2$.
\end{Claim}
\noindent
{\em Proof.}
Assume that $h^{0}(K_{M}+A)\geq 3$.
Then by Lemma \ref{Lemma B}
we have $h^{0}(2(K_{M}+A))\geq 2h^{0}(K_{M}+A)-1\geq 5$.
This is a contradiction. $\Box$
\\
\par
By Proposition \ref{32P-T1} (\ref{m2.3.1}) and Theorem \ref{Theorem1.4}, we see that
\begin{eqnarray*}
3=h^{0}(2K_{M}+2A)
&\geq & g_{2}(M,A)-h^{1}(\mathcal{O}_{M})+g_{1}(M,K_{M}+A,A)\\
&\geq & g_{1}(M,K_{M}+A,A)\\
&=& 1+(K_{M}+A)^{2}A.
\end{eqnarray*}
Hence we have $(K_{M}+A)^{2}A\leq 2$.
On the other hand, since $1\leq (K_{M}+A)^{2}A$ we get
\begin{equation}
1\leq (K_{M}+A)^{2}A\leq 2. \label{m2.3.3}
\end{equation}
Namely the following holds.
\begin{equation}
2\leq g_{1}(M,K_{M}+A,A)\leq 3. \label{m2.3.4}
\end{equation}
Since $g_{1}(M,K_{M}+A,A)\leq 3$, by Proposition \ref{32P-T1} (\ref{m2.3.2})
we get
$h^{0}(2(K_{M}+A))-h^{0}(K_{M}+A)\leq g_{2}(M, K_{M}+A)-h^{1}(\mathcal{O}_{M})+3$.
By Claim \ref{CL4} and $h^{0}(2(K_{M}+A))=3$, we see that
\begin{eqnarray*}
3-2
&\leq&h^{0}(2(K_{M}+A))-h^{0}(K_{M}+A)\\
&\leq&g_{2}(M,K_{M}+A)-h^{1}(\mathcal{O}_{M})+3.
\end{eqnarray*}
Namely,
\begin{eqnarray}
g_{2}(M, K_{M}+A)-h^{1}(\mathcal{O}_{M})&\geq& -2. \label{m2.3.5}
\end{eqnarray}
From Proposition \ref{32P-T1} (\ref{m2.3.2}), (\ref{m2.3.4}) and the assumption that $h^{0}(2(K_{M}+A))=3$, we have
\begin{eqnarray*}
3&\geq&h^{0}(2(K_{M}+A))-h^{0}(K_{M}+A)\\
&=&g_{2}(M,K_{M}+A)-h^{1}(\mathcal{O}_{M})+g_{1}(M,K_{M}+A,A) \\
&\geq&g_{2}(M,K_{M}+A)-h^{1}(\mathcal{O}_{M})+2.
\end{eqnarray*}
Hence we have
\begin{eqnarray}
g_{2}(M, K_{M}+A)-h^{1}(\mathcal{O}_{M})&\leq& 1. \label{m2.3.6}
\end{eqnarray}
By (\ref{m2.3.4}) and Proposition \ref{32P-T1} (\ref{m2.3.1}), we have
\begin{eqnarray*}
3&\geq&h^{0}(2(K_{M}+A))-h^{0}(2K_{M}+A)\\
&=&g_{2}(M,A)-h^{1}(\mathcal{O}_{M})+g_{1}(M,K_{M}+A,A) \\
&\geq&g_{2}(M,A)-h^{1}(\mathcal{O}_{M})+2.
\end{eqnarray*}
Hence $1\geq g_{2}(M,A)-h^{1}(\mathcal{O}_{M})$.
From this and Theorem \ref{Theorem1.4} we have
\begin{equation}
d_{1}=0, 1. \label{m2.3.9}
\end{equation}
We also note that
\begin{equation}
-2\leq d_{2}\leq 1 \label{m2.3.10}
\end{equation}
by (\ref{m2.3.5}) and (\ref{m2.3.6}).
\\
\\
(I) If $(K_{M}+A)^{2}A=1$, then $(K_{M}+A)A^{2}=1$ and $A^{3}=1$ by Proposition \ref{GHIT}.
Therefore we get $g_{1}(M,A)=2$.
Since $\kappa(K_{M}+A)=3$, by \cite[(1.10) Theorem and Section 2]{Fujita87-2} we see that $K_{M}=\mathcal{O}_{M}$, $h^{1}(\mathcal{O}_{M})=0$ and $h^{0}(A)=1$.
By the Riemann-Roch theorem we get $\chi(tA)=(1/6)A^{3}t^{3}+(1/12)c_{2}(M)At$.
Since $h^{0}(2K_{M}+2A)=\chi(2K_{M}+2A)=\chi(2A)$,
we get $h^{0}(2K_{M}+2A)=(4/3)A^{3}+(1/6)c_{2}(M)A$.
Therefore $3=h^{0}(2K_{M}+2A)=(4/3)A^{3}+(1/6)c_{2}(M)A=(4/3)+(1/6)c_{2}(M)A$.
Namely $c_{2}(M)A=10$.
Here we note that $(M,A)\cong (X,L)$ because $A^{3}=1$.
\\
\\
(II) Next we assume that
\begin{equation}
(K_{M}+A)^{2}A=2. \label{m2.3.11}
\end{equation}
We will prove that this case cannot occur.
Since $(K_{M}+A)^{2}A=2$, by Proposition \ref{GHIT} we have
\begin{equation}
1\leq (K_{M}+A)^{3}\leq 4. \label{m2.3.12}
\end{equation}
By using (\ref{m2.3.7}), (\ref{m2.3.9}), (\ref{m2.3.10}), (\ref{m2.3.11}) and (\ref{m2.3.12}), we can determine the value of $\chi(\mathcal{O}_{M})$.
For example, assume that $d_{1}=0$ and $d_{2}=-2$.
Then $d_{2}-d_{1}=-2$ and $(K_{M}+A)^{2}K_{M}=4\chi(\mathcal{O}_{M})-4$ by (\ref{m2.3.7}).
Since $(K_{M}+A)^{2}A=2$, we have $(K_{M}+A)^{3}=4\chi(\mathcal{O}_{M})-2$.
By considering (\ref{m2.3.12}) we have $\chi(\mathcal{O}_{M})=1$.
By the same argument as this,
we can get the following list:
\begin{center}
\begin{tabular}{cccccc}\hline
$d_{1}$ & $d_{2}$ & $d_{2}-d_{1}$ & $(K_{M}+A)^{2}K_{M}$ & $(K_{M}+A)^{3}$ & $\chi(\mathcal{O}_{M})$\\
\hline
$0$ & $-2$ & $-2$ & $4\chi(\mathcal{O}_{M})-4$ & $4\chi(\mathcal{O}_{M})-2$ &
$1$ \\
$0$ & $-1$ & $-1$ & $4\chi(\mathcal{O}_{M})-2$ & $4\chi(\mathcal{O}_{M})$ & $1$ \\
$0$ & $0$ & $0$ & $4\chi(\mathcal{O}_{M})$ & $4\chi(\mathcal{O}_{M})+2$ & $0$ \\
$0$ & $1$ & $1$ & $4\chi(\mathcal{O}_{M})+2$ & $4\chi(\mathcal{O}_{M})+4$ & $0$ \\
$1$ & $-2$ & $-3$ & $4\chi(\mathcal{O}_{M})-6$ & $4\chi(\mathcal{O}_{M})-4$ &
$2$ \\
$1$ & $-1$ & $-2$ & $4\chi(\mathcal{O}_{M})-4$ & $4\chi(\mathcal{O}_{M})-2$ & $1$ \\
$1$ & $0$ & $-1$ & $4\chi(\mathcal{O}_{M})-2$ & $4\chi(\mathcal{O}_{M})$ & $1$ \\
$1$ & $1$ & $0$ & $4\chi(\mathcal{O}_{M})$ & $4\chi(\mathcal{O}_{M})+2$ & $0$
\end{tabular}
\end{center}
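As a purely arithmetic cross-check (outside the paper's argument), the last column of this list can be recovered mechanically from (\ref{m2.3.7}): each row's $\chi(\mathcal{O}_{M})$ is the unique integer for which $1\leq (K_{M}+A)^{3}=2(d_{2}-d_{1})+4\chi(\mathcal{O}_{M})+2\leq 4$:

```python
# For each pair (d1, d2), equation (m2.3.7) gives
#   (K_M + A)^2 K_M = 2(d2 - d1) + 4*chi,
# hence (K_M + A)^3 = (K_M + A)^2 K_M + (K_M + A)^2 A = 2(d2 - d1) + 4*chi + 2,
# and the constraint 1 <= (K_M + A)^3 <= 4 pins down the integer chi.

pairs = [(0, -2), (0, -1), (0, 0), (0, 1), (1, -2), (1, -1), (1, 0), (1, 1)]

def forced_chi(d1, d2):
    solutions = [chi for chi in range(-10, 11)
                 if 1 <= 2 * (d2 - d1) + 4 * chi + 2 <= 4]
    assert len(solutions) == 1  # exactly one integer chi fits
    return solutions[0]

chis = [forced_chi(d1, d2) for (d1, d2) in pairs]
print(chis)  # matches the last column of the table: [1, 1, 0, 0, 2, 1, 1, 0]
```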
By this list, we see that $(K_{M}+A)^{3}=2$ or $4$.
\par
Assume that $(K_{M}+A)^{3}=4$.
Then by Proposition \ref{GHIT} we have
\begin{eqnarray*}
4&=&((K_{M}+A)^{2}A)^{2} \\
&\geq & ((K_{M}+A)^{3})((K_{M}+A)A^{2}) \\
&\geq &4(K_{M}+A)A^{2}.
\end{eqnarray*}
Since $K_{M}+A$ is nef and big, we see that $(K_{M}+A)A^{2}\geq 1$.
Therefore $(K_{M}+A)A^{2}=1$.
But by Proposition \ref{GHIT}, we have
$1=((K_{M}+A)A^{2})^{2}\geq ((K_{M}+A)^{2}A)A^{3}=2A^{3}\geq 2$,
and this is impossible.
\par
Assume that $(K_{M}+A)^{3}=2$.
Then by Proposition \ref{GHIT} we have
\begin{eqnarray*}
4&=&((K_{M}+A)^{2}A)^{2} \\
&\geq & ((K_{M}+A)^{3})((K_{M}+A)A^{2}) \\
&=&2(K_{M}+A)A^{2}.
\end{eqnarray*}
Hence we have
$(K_{M}+A)A^{2}\leq 2$.
By Proposition \ref{GHIT} we see that
$((K_{M}+A)A^{2})^{2}\geq ((K_{M}+A)^{2}A)A^{3}=2A^{3}\geq 2$.
Therefore $(K_{M}+A)A^{2}=2$ and $A^{3}\leq 2$ because $(K_{M}+A)A^{2}\geq 1$.
But $(K_{M}+2A)A^{2}=(K_{M}+A)A^{2}+A^{3}=2+A^{3}$ is even, so $A^{3}$ is even; since $0<A^{3}\leq 2$, we have $A^{3}=2$.
Therefore
$((K_{M}+A)A^{2})^{2}=4=((K_{M}+A)^{2}A)A^{3}$ holds and
$K_{M}+A\equiv A$ by \cite[Corollary 2.5.4]{BeSo-Book} since $A$ is ample.
Namely $K_{M}\equiv 0$.
Now since $g_{1}(M,A,K_{M}+A)=1+(K_{M}+A)^{2}A=3$, we see that $h^{0}(2K_{M}+A)=-d_{1}$ by Proposition \ref{32P-T1} (\ref{m2.3.1}).
Since $d_{1}=0$ or $1$ by (\ref{m2.3.9}), we have $d_{1}=0$.
On the other hand, $h^{i}(K_{M}+K_{M}+A)=0$ for every integer $i$ with $i>0$ because $K_{M}+A$ is nef and big.
So by the Riemann-Roch theorem we have $h^{0}(2K_{M}+A)=\chi(2K_{M}+A)=\chi(A)=(1/6)A^{3}+(1/12)c_{2}(M)A$.
Since $A^{3}=2$, we have $c_{2}(M)A=-4$ if $d_{1}=0$.
Here we calculate $h^{0}(2(K_{M}+A))$.
Since $K_{M}+2A$ is ample, we have $h^{i}(2K_{M}+2A)=0$ for $i>0$.
Therefore
\begin{eqnarray*}
h^{0}(2(K_{M}+A))
&=&\chi(2(K_{M}+A))\\
&=&\chi(2A)\\
&=&\frac{4}{3}A^{3}+\frac{1}{6}c_{2}(M)A\\
&=&2.
\end{eqnarray*}
But this is impossible because we assumed that $h^{0}(2(K_{M}+A))=3$.
\\
($\beta$) Assume that $(X,L)$ satisfies $L^{3}=1$,
$\mathcal{O}_{X}(K_{X})=\mathcal{O}_{X}$,
$h^{1}(\mathcal{O}_{X})=0$ and $h^{0}(L)=1$.
Then $h^{0}(2K_{X}+L)=h^{0}(K_{X}+L)=h^{0}(L)=1$ and $h^{2}(\mathcal{O}_{X})=h^{1}(K_{X})=h^{1}(\mathcal{O}_{X})=0$.
Hence $g_{2}(X,L)=h^{0}(K_{X}+L)-h^{0}(K_{X})+h^{2}(\mathcal{O}_{X})=0$.
Moreover $g_{1}(X,K_{X}+L,L)=1+L^{3}=2$.
Therefore by Proposition \ref{32P-T1} (\ref{m2.3.1}) we have
\begin{eqnarray*}
h^{0}(2(K_{X}+L))
&=&h^{0}(2K_{X}+L)+g_{2}(X,L)-h^{1}(\mathcal{O}_{X})+g_{1}(X,K_{X}+L,L)\\
&=&3.
\end{eqnarray*}
This completes the proof. $\Box$
\begin{Remark}
(i) By Theorem \ref{EC1}, we see that if $\kappa(K_{X}+L)=3$ and $h^{0}(2(K_{X}+L))=3$, then $h^{0}(K_{X}+L)=1$.\\
(ii) There exists an example of $(X,L)$ which satisfies $\kappa(K_{X}+L)=3$ and $h^{0}(2K_{X}+2L)=3$.
See \cite[Example 3.1 (4)]{Fukuma10}.
\end{Remark}
Next we consider the case where $(X,L)$ satisfies $\kappa(K_{X}+L)=3$ and $h^{0}(2K_{X}+2L)=4$.
\begin{Theorem}\label{EC2}
Let $(X,L)$ be a polarized manifold of dimension $3$
and let $(M,A)$ be a reduction of $(X,L)$.
Assume that $\kappa(K_{X}+L)=3$.
Then $h^{0}(2(K_{X}+L))=4$ if and only if $(M,A)$ is one of the following.
\begin{itemize}
\item [\rm (1)] $K_{M}\equiv 0$, $A^{3}=2$, $\chi(\mathcal{O}_{M})=0$ and $h^{0}(A)=1$.
\item [\rm (2)] $(K_{M}+A)^{2}A=3$, $(K_{M}+A)^{3}=1$, $g_{2}(M,A)=h^{1}(\mathcal{O}_{M})=1$, $h^{2}(\mathcal{O}_{M})=0$, $h^{3}(\mathcal{O}_{M})=0$ and $(M,K_{M}+A)$ is birationally equivalent to a scroll over an elliptic curve.
\end{itemize}
\end{Theorem}
\noindent{\em Proof.}
($\alpha$) Assume that $h^{0}(2(K_{X}+L))=4$.
\\
First we prove the following claim.
\begin{Claim}\label{T-CL1}
One of the following holds:
\begin{itemize}
\item [\rm (i)] $g(M,A)=2$.
\item [\rm (ii)] $(M,A)$ satisfies {\rm (1)} in Theorem {\rm \ref{EC2}}.
\item [\rm (iii)] $(M,A)$ satisfies {\rm (2)} in Theorem {\rm \ref{EC2}}.
\end{itemize}
\end{Claim}
\noindent
{\em Proof.}
If $h^{0}(K_{M}+A)\geq 3$, then
by Lemma \ref{Lemma B}
we see that $h^{0}(2K_{M}+2A)\geq 2h^{0}(K_{M}+A)-1\geq 5$ and this is impossible.
Hence
\begin{equation}
h^{0}(K_{M}+A)\leq 2. \label{m2.3.8}
\end{equation}
We note that
\begin{equation}
1\leq (K_{M}+A)^{2}A. \label{m2.3.13}
\end{equation}
Since $g_{2}(M,A)\geq h^{1}(\mathcal{O}_{M})$ by Theorem \ref{Theorem1.4} and $g_{1}(M,K_{M}+A,A)=1+(K_{M}+A)^{2}A$, we have
\begin{eqnarray}
&&h^{0}(2K_{M}+2A)-h^{0}(2K_{M}+A) \label{m2.3.14}\\
&&\geq g_{1}(M,K_{M}+A,A) \nonumber \\
&&=1+(K_{M}+A)^{2}A \nonumber
\end{eqnarray}
and
\begin{equation}
(K_{M}+A)^{2}A\leq 3 \label{m2.3.15}
\end{equation}
by Proposition \ref{32P-T1} (\ref{m2.3.1}) since $h^{0}(2K_{M}+2A)=4$.
\\
\par
Here we divide the argument into three cases.
\\
(i) Assume that $(K_{M}+A)^{2}A=1$.
Then $(K_{M}+A)A^{2}=1$ and $A^{3}=1$ by Proposition \ref{GHIT}.
So we get $g(M,A)=2$ and this is the type (i) in Claim \ref{T-CL1}.
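For completeness (a standard computation, spelled out here as an aside): with $(K_{M}+A)A^{2}=A^{3}=1$, the sectional genus is

```latex
g(M,A) = 1 + \tfrac{1}{2}(K_{M}+2A)A^{2}
       = 1 + \tfrac{1}{2}\bigl((K_{M}+A)A^{2} + A^{3}\bigr)
       = 1 + \tfrac{1}{2}(1+1) = 2.
```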
\\
(ii) Assume that $(K_{M}+A)^{2}A=2$.
Then $g_{1}(M,K_{M}+A,A)=3$.
By Proposition \ref{GHIT}, we have $1\leq (K_{M}+A)^{3}\leq 4$.
Hence by Proposition \ref{32P-T1} (\ref{m2.3.1}) and Theorem \ref{Theorem1.4} we have
\begin{equation}
d_{1}=0, 1. \label{m2.3.16}
\end{equation}
By (\ref{m2.3.8}), Proposition \ref{32P-T1} (\ref{m2.3.2}) and the assumption $h^{0}(2K_{M}+2A)=4$ we have
\begin{eqnarray*}
2&\leq& h^{0}(2(K_{M}+A))-h^{0}(K_{M}+A) \\
&=&d_{2}+g_{1}(M,K_{M}+A,A)\\
&=&d_{2}+3.
\end{eqnarray*}
Namely we have
\begin{equation}
-1\leq d_{2}. \label{m2.3.17}
\end{equation}
By Proposition \ref{32P-T1} (\ref{m2.3.2}) and the assumption $h^{0}(2K_{M}+2A)=4$ we have
\begin{eqnarray*}
4&\geq& h^{0}(2(K_{M}+A))-h^{0}(K_{M}+A) \\
&=&d_{2}+3.
\end{eqnarray*}
Namely we have
\begin{equation}
1\geq d_{2}. \label{m2.3.18}
\end{equation}
So we get the following table by the same argument as in the proof of Theorem \ref{EC1}.
\begin{center}
\begin{tabular}{ccccccc}\hline
& $d_{1}$ & $d_{2}$ & $d_{2}-d_{1}$ & $(K_{M}+A)^{2}K_{M}$ & $(K_{M}+A)^{3}$ & $\chi(\mathcal{O}_{M})$\\
\hline
(2.1) & $0$ & $-1$ & $-1$ & $4\chi(\mathcal{O}_{M})-2$ & $4\chi(\mathcal{O}_{M})$ & $1$ \\
(2.2) & $0$ & $0$ & $0$ & $4\chi(\mathcal{O}_{M})$ & $4\chi(\mathcal{O}_{M})+2$ & $0$ \\
(2.3) & $0$ & $1$ & $1$ & $4\chi(\mathcal{O}_{M})+2$ & $4\chi(\mathcal{O}_{M})+4$ & $0$ \\
(2.4) & $1$ & $-1$ & $-2$ & $4\chi(\mathcal{O}_{M})-4$ & $4\chi(\mathcal{O}_{M})-2$ & $1$ \\
(2.5) & $1$ & $0$ & $-1$ & $4\chi(\mathcal{O}_{M})-2$ & $4\chi(\mathcal{O}_{M})$ & $1$ \\
(2.6) & $1$ & $1$ & $0$ & $4\chi(\mathcal{O}_{M})$ & $4\chi(\mathcal{O}_{M})+2$ & $0$ \\
\end{tabular}
\end{center}
\noindent
(ii.1) First we consider the case (2.4).
Then $(K_{M}+A)^{3}=2$.
By Proposition \ref{GHIT} we have
\begin{eqnarray*}
4&=&((K_{M}+A)^{2}A)^{2} \\
&\geq&((K_{M}+A)^{3})((K_{M}+A)A^{2}) \\
&=&2(K_{M}+A)A^{2}.
\end{eqnarray*}
Hence
$(K_{M}+A)A^{2}\leq 2$.
\\
(ii.1.1) If $(K_{M}+A)A^{2}=2$, then we also see that
\begin{eqnarray*}
4&\geq&((K_{M}+A)A^{2})^{2} \\
&\geq&(A^{3})((K_{M}+A)^{2}A) \\
&=&2A^{3}.
\end{eqnarray*}
Therefore
$A^{3}\leq 2$.
But since $(K_{M}+2A)A^{2}$ is even and $A^{3}>0$, we have $A^{3}=2$.
Hence $(A^{3})((K_{M}+A)^{2}A)=((K_{M}+A)A^{2})^{2}$.
By \cite[Corollary 2.5.4]{BeSo-Book} we have $K_{M}+A\equiv A$, that is, $K_{M}\equiv 0$.
In particular, $g_{2}(M,A)=g_{2}(M,K_{M}+A)$.
But since $d_{1}\neq d_{2}$ in the case (2.4), this is impossible.
\\
\\
(ii.1.2) If $(K_{M}+A)A^{2}=1$, then $A^{3}=1$ by Proposition \ref{GHIT}.
Hence we see that $g(M,A)=2$ and this is the type (i) in Claim \ref{T-CL1}.
\\
\\
(ii.2) Next we consider the cases (2.1), (2.3) and (2.5).
Then $(K_{M}+A)^{3}=4$.
Since $(K_{M}+A)^{2}A=2$ and $(K_{M}+A)^{3}=4$, Proposition \ref{GHIT} gives $4=((K_{M}+A)^{2}A)^{2}\geq ((K_{M}+A)^{3})((K_{M}+A)A^{2})=4(K_{M}+A)A^{2}$, hence $(K_{M}+A)A^{2}=1$, and again by Proposition \ref{GHIT} we have $1=((K_{M}+A)A^{2})^{2}\geq ((K_{M}+A)^{2}A)(A^{3})\geq 2A^{3}$.
Since $A^{3}>0$, this is impossible.
\\
\\
(ii.3) Next we consider the cases (2.2) and (2.6).
Then $(K_{M}+A)^{3}=2$.
By Proposition \ref{GHIT}, we have $(K_{M}+A)A^{2}\leq 2$ since $(K_{M}+A)^{2}A=2$.
\\
\\
(ii.3.1) If $(K_{M}+A)A^{2}=2$, then by the same argument as (ii.1.1) above, we have $K_{M}\equiv 0$.
In this case
$$h^{0}(2K_{M}+A)=\chi(2K_{M}+A)=\chi(A)=\frac{1}{6}A^{3}+\frac{1}{12}c_{2}(M)A$$and
$$h^{0}(2K_{M}+2A)=\chi(2K_{M}+2A)=\chi(2A)=\frac{4}{3}A^{3}+\frac{1}{6}c_{2}(M)A.$$
Since $(K_{M}+A)^{3}=2$ and $K_{M}\equiv 0$, we have $A^{3}=2$
and $g_{1}(M,K_{M}+A,A)=g(M,A)=3$.
By Proposition \ref{32P-T1} (\ref{m2.3.1}) we have
\begin{eqnarray*}
h^{0}(2K_{M}+A)
&=&h^{0}(2K_{M}+2A)-d_{1}-g_{1}(M,K_{M}+A,A)\\
&=&1-d_{1}.
\end{eqnarray*}
Hence, since $h^{0}(2K_{M}+A)\geq 0$ and $d_{1}\geq 0$ by Theorem \ref{Theorem1.4}, we get $d_{1}=0$ or $1$.
\\
\\
(ii.3.1.1) If $d_{1}=1$, then $h^{0}(2K_{M}+A)=0$ and $(1/6)A^{3}+(1/12)c_{2}(M)A=0$.
Therefore $h^{0}(2K_{M}+2A)=A^{3}=2$ and this is impossible.
\\
\\
(ii.3.1.2) If $d_{1}=0$, then $h^{0}(2K_{M}+A)=1$ and $(1/6)A^{3}+(1/12)c_{2}(M)A=1$.
Hence $h^{0}(2K_{M}+2A)=A^{3}+2=4$.
We note that $h^{i}(A)=0$ for every positive integer $i$
because $K_{M}\equiv 0$.
Hence $1=h^{0}(2K_{M}+A)=\chi(2K_{M}+A)=\chi(A)=h^{0}(A)$.
So this is the type (ii) in Claim \ref{T-CL1}.
\\
\\
(ii.3.2) If $(K_{M}+A)A^{2}=1$, then
\begin{eqnarray*}
1&=&((K_{M}+A)A^{2})^{2} \\
&\geq&((K_{M}+A)^{2}A)(A^{3}) \\
&=&2A^{3}
\end{eqnarray*}
and this is impossible.
\\
\\
(iii) Assume that $(K_{M}+A)^{2}A=3$.
Then $(K_{M}+A)^{3}\leq 9$ by Proposition \ref{GHIT} and $g_{1}(M,K_{M}+A,A)=1+(K_{M}+A)^{2}A=4$.
Since $h^{0}(2K_{M}+2A)=4$ in this case, we have $d_{1}=0$ by Proposition \ref{32P-T1} (\ref{m2.3.1}) and Theorem \ref{Theorem1.4}.
Moreover we see that $-2\leq d_{2}\leq 0$ by (\ref{m2.3.8}) and Proposition \ref{32P-T1} (\ref{m2.3.2}).
Since $d_{2}-d_{1}+2\chi(\mathcal{O}_{M})=(1/2)(K_{M}+A)^{2}K_{M}$ (see (\ref{m2.3.7})), we have
\begin{center}
\begin{tabular}{cccccc}\hline
& $d_{1}$ & $d_{2}$ & $d_{2}-d_{1}$ & $(K_{M}+A)^{2}K_{M}$ & $(K_{M}+A)^{3}$ \\
\hline
(3.1) & $0$ & $-2$ & $-2$ & $4\chi(\mathcal{O}_{M})-4$ & $4\chi(\mathcal{O}_{M})-1$ \\
(3.2) & $0$ & $-1$ & $-1$ & $4\chi(\mathcal{O}_{M})-2$ & $4\chi(\mathcal{O}_{M})+1$ \\
(3.3) & $0$ & $0$ & $0$ & $4\chi(\mathcal{O}_{M})$ & $4\chi(\mathcal{O}_{M})+3$ \\
\end{tabular}
\end{center}
First we consider the case (3.1).
Since $1\leq (K_{M}+A)^{3}\leq 9$, we have
$(\chi(\mathcal{O}_{M}), (K_{M}+A)^{3})=(1,3)$ or $(2,7)$.
\par
Next we consider the case (3.2).
Then we have $(\chi(\mathcal{O}_{M}),(K_{M}+A)^{3})=(0,1)$, $(1,5)$ or $(2,9)$.
\\
Finally we consider the case (3.3).
In this case, we get $(\chi(\mathcal{O}_{M}),(K_{M}+A)^{3})=(0,3)$ or $(1,7)$.
\\
(iii.1) Here we note that
if $(K_{M}+A)^{3}\geq 5$,
then by Proposition \ref{GHIT}
\begin{eqnarray*}
9&=&((K_{M}+A)^{2}A)^{2} \\
&\geq&((K_{M}+A)^{3})((K_{M}+A)A^{2}) \\
&\geq&5(K_{M}+A)A^{2}.
\end{eqnarray*}
and we have
$(K_{M}+A)A^{2}=1$ and $A^{3}=1$ by Proposition \ref{GHIT}.
Hence $g(M,A)=2$ and this is the type (i) in Claim \ref{T-CL1}.
\\
\\
(iii.2) Next we consider the case where $(K_{M}+A)^{3}=3$.
By Proposition \ref{GHIT}, we see that $(K_{M}+A)A^{2}\leq 3$.
\par
If $(K_{M}+A)A^{2}\leq 2$, then $A^{3}=1$ because $(K_{M}+A)^{2}A=3$.
But since $(K_{M}+2A)A^{2}$ is even, we see that $(K_{M}+A)A^{2}=1$ and $A^{3}=1$. Namely we have $g(M,A)=2$ and this is the type (i) in Claim \ref{T-CL1}.
\par
So we may assume that $(K_{M}+A)A^{2}=3$.
Then $((K_{M}+A)A^{2})((K_{M}+A)^{3})=((K_{M}+A)^{2}A)^{2}=9$.
Here we will prove the following lemma.
\begin{Lemma}\label{32L-T1}
Let $X$ be a smooth projective variety of dimension $3$.
Let $D_{1}$, $D_{2}$ and $D_{3}$ be divisors on $X$.
Assume the following:
\\
{\rm (1)} $D_{1}^{2}D_{3}>0$.
\\
{\rm (2)} $D_{3}$ is semiample and big.
\\
{\rm (3)} $(D_{1}^{2}D_{3})(D_{2}^{2}D_{3})=(D_{1}D_{2}D_{3})^{2}$.
\\
{\rm (4)} $D_{1}^{2}D_{3}=D_{2}^{2}D_{3}$.
\\
Then $(D_{1}-D_{2})D_{3}D=0$ holds for any divisor $D$ on $X$.
\end{Lemma}
\noindent
{\em Proof.}
By the assumption (2), there exists a smooth surface $S\in |mD_{3}|$
for some $m>0$.
Then by the assumption (3) we have $(D_{1}|_{S})^{2}(D_{2}|_{S})^{2}=((D_{1}|_{S})(D_{2}|_{S}))^{2}$.
So by the assumptions (1) and (4) we have $D_{1}|_{S}\equiv D_{2}|_{S}$.
In particular $(D_{1}|_{S})(D|_{S})=(D_{2}|_{S})(D|_{S})$ for any divisor $D$ on $X$.
Therefore $D_{1}D(mD_{3})=D_{2}D(mD_{3})$.
Hence we get the assertion. $\Box$
\\
\par
Since $K_{M}+A$ is semiample and big, we see that
$(K_{M}+A)^{2}D=A(K_{M}+A)D$ for any divisor $D$ on $M$ by Lemma \ref{32L-T1}.
Therefore $K_{M}D(K_{M}+A)=0$ for any divisor $D$ on $M$.
\par
Next we calculate $h^{0}(2K_{M}+2A)$ and $h^{0}(K_{M}+A)$.
Then by the Hirzebruch-Riemann-Roch theorem and the Kodaira vanishing theorem we have
$$h^{0}(2K_{M}+2A)=4+(1/6)c_{2}(M)A-3\chi(\mathcal{O}_{M}),$$
and
$$h^{0}(K_{M}+A)=(1/2)+(1/12)c_{2}(M)A-\chi(\mathcal{O}_{M}).$$
Since we are considering the case where $(K_{M}+A)^{3}=3$, we have $\chi(\mathcal{O}_{M})=0$ or $1$.
\par
If $\chi(\mathcal{O}_{M})=0$, then $4=h^{0}(2K_{M}+2A)=4+(1/6)c_{2}(M)A$.
Hence $c_{2}(M)A=0$.
But then $h^{0}(K_{M}+A)=1/2$ and this is impossible.
\par
If $\chi(\mathcal{O}_{M})=1$, then $(M,A)$ satisfies the case (3.1) and
$4=h^{0}(2K_{M}+2A)=4+(1/6)c_{2}(M)A-3\chi(\mathcal{O}_{M})=1+(1/6)c_{2}(M)A$.
Hence $c_{2}(M)A=18$ and $h^{0}(K_{M}+A)=1$.
On the other hand, by Theorem \ref{Theorem1.3} we have $1=h^{0}(K_{M}+A)=g_{2}(M,A)-h^{2}(\mathcal{O}_{M})+h^{3}(\mathcal{O}_{M})$.
Hence $g_{2}(M,A)=1+h^{2}(\mathcal{O}_{M})-h^{3}(\mathcal{O}_{M})$ and $d_{1}=\chi(\mathcal{O}_{M})=1$.
But $d_{1}=0$ in this case (3.1).
Hence this is also impossible.
\\
\\
(iii.3) Next we consider the case where $(K_{M}+A)^{3}=1$.
Then $(M,A)$ satisfies the case (3.2).
In particular $g_{2}(M,A)=h^{1}(\mathcal{O}_{M})$.
We also get $(K_{M}+A)^{2}K_{M}=(K_{M}+A)^{3}-(K_{M}+A)^{2}A=-2$, and $\chi(\mathcal{O}_{M})=0$ by the table above.
In particular $\kappa(M)=-\infty$ and $h^{3}(\mathcal{O}_{M})=0$.
Here we note $g_{1}(M,K_{M}+A)=1+(1/2)(3K_{M}+2A)(K_{M}+A)^{2}=1$.
We also note that $h^{1}(\mathcal{O}_{M})>0$ because $\kappa(M)=-\infty$ and $\chi(\mathcal{O}_{M})=0$.
Hence by \cite[(4.9) Corollary]{Fujita89} we have $h^{1}(\mathcal{O}_{M})=1$ and $(M,K_{M}+A)$ is birationally equivalent to $(V,H)$ which is a scroll over an elliptic curve because $K_{M}+A$ is nef and big.
This is the type (iii) in Claim \ref{T-CL1}.
\par
This completes the proof of Claim \ref{T-CL1}. $\Box$
\\
\par
Here we consider the case where $g(M,A)=2$.
In this case, by the classification of $(M,A)$ with $g(M,A)=2$ (\cite[(1.10) Theorem and Section 2]{Fujita87-2}), we see that $(M,A)$ satisfies $\mathcal{O}_{M}(K_{M})=\mathcal{O}_{M}$, $h^{1}(\mathcal{O}_{M})=0$, $h^{0}(A)>0$ and $A^{3}=1$.
\par
Then $h^{0}(A)=(1/6)A^{3}+(1/12)c_{2}(M)A$ and $h^{0}(2A)=(4/3)A^{3}+(1/6)c_{2}(M)A$.
Since $4=h^{0}(2K_{M}+2A)=h^{0}(2A)$, we have
$4=(4/3)A^{3}+(1/6)c_{2}(M)A=(4/3)+(1/6)c_{2}(M)A$.
Hence $c_{2}(M)A=16$.
But then $h^{0}(A)=3/2$ and this is impossible.
\par
Therefore $(M,A)$ is one of the types (1) and (2) in Theorem \ref{EC2}.
\\
\\
($\beta$) Assume that $(M,A)$ satisfies one of the types (1) and (2) in Theorem \ref{EC2}.
\\
($\beta$.1) Assume that $(M,A)$ satisfies the type (1) in Theorem \ref{EC2}.
Here we note that $h^{i}(A)=0$ for every positive integer $i$.
Then
\begin{eqnarray*}
h^{0}(A)
&=&\chi(A)\\
&=&\frac{1}{6}A^{3}+\frac{1}{12}c_{2}(M)A.
\end{eqnarray*}
Since $h^{0}(A)=1$, we have $c_{2}(M)A=8$.
Therefore
\begin{eqnarray*}
h^{0}(2K_{M}+2A)
&=&\chi(2K_{M}+2A)\\
&=&\chi(2A)\\
&=&\frac{4}{3}A^{3}+\frac{1}{6}c_{2}(M)A\\
&=&4.
\end{eqnarray*}
\noindent
\\
($\beta$.2) Assume that $(M,A)$ satisfies the type (2) in Theorem \ref{EC2}.
\par
First we note the following.
\begin{Claim}
$h^{0}(2K_{M}+A)=0$.
\end{Claim}
\noindent
{\em Proof.}
Since $(M,K_{M}+A)$ is birationally equivalent to a scroll $(V,H)$ over a smooth elliptic curve $B$,
there exist a smooth projective $3$-fold $T$ and birational morphisms $\mu: T\to M$ and $\nu: T\to V$ such that $\mu^{*}(K_{M}+A)=\nu^{*}(H)$.
Here we note that $V$ is smooth.
Then $h^{0}(2K_{M}+A)=h^{0}(\mu^{*}(2K_{M}+A))=h^{0}(K_{T}+\mu^{*}(K_{M}+A))=h^{0}(K_{T}+\nu^{*}(H))=h^{0}(\nu^{*}(K_{V}+H))=h^{0}(K_{V}+H)=0$.
This completes the proof. $\Box$
\\
\par
We also see that
$g_{1}(M,K_{M}+A,A)=1+(K_{M}+A)^{2}A=4$.
Hence from Proposition \ref{32P-T1} (\ref{m2.3.1}) we get
\begin{eqnarray*}
h^{0}(2(K_{M}+A))
&=&h^{0}(2K_{M}+A)+g_{2}(M,A)-h^{1}(\mathcal{O}_{M})+g_{1}(M,K_{M}+A,A)\\
&=&4.
\end{eqnarray*}
Therefore we get the assertion of Theorem \ref{EC2}. $\Box$
\begin{Remark}
By Theorem \ref{EC2}, we see that if $\kappa(K_{X}+L)=3$ and $h^{0}(2(K_{X}+L))=4$, then $h^{0}(K_{X}+L)=1$.
\end{Remark}
\begin{Example}
Here we give an example of this case.
\begin{itemize}
\item [\rm (1)] An example of the type (1) in Theorem \ref{EC2}.
In \cite[Theorem 1.1]{Beauville}, Beauville gave an example of
a polarized Calabi-Yau threefold $(X,L)$ such that
$h^{0}(L)=1$ and $L^{3}=2$. This is an example.
For details, see \cite[Theorem 1.1]{Beauville}.
\item [\rm (2)] An example of the type (2) in Theorem \ref{EC2}.
Let $C$ be an elliptic curve and let $\mathcal{E}$ be an ample vector bundle of rank $3$ on $C$ with $c_{1}(\mathcal{E})=1$.
Then $\mathcal{E}$ is indecomposable.
We note that such a vector bundle exists.
Let $M=\mathbb{P}_{C}(\mathcal{E})$ and $A=4H(\mathcal{E})-f^{*}(c_{1}(\mathcal{E}))$, where $f:M\to C$ is the natural map.
Then by \cite[Theorem 3.1]{Miyaoka87} we see that $A$ is ample,
and we also see that $(M,K_{M}+A)$ is a scroll over a smooth elliptic curve.
We can also check that $h^{0}(K_{M}+A)=h^{0}(H(\mathcal{E}))=1$, $h^{2}(\mathcal{O}_{M})=0$, $h^{1}(\mathcal{O}_{M})=1$, $g_{2}(M,A)=1$, $g_{1}(M,K_{M}+A,A)=4$ and $h^{0}(2K_{M}+A)=0$.
Therefore by Proposition \ref{32P-T1} (\ref{m2.3.1}) we have
$h^{0}(2K_{M}+2A)=h^{0}(2K_{M}+A)+g_{2}(M,A)-h^{1}(\mathcal{O}_{M})+g_{1}(M,K_{M}+A,A)=4$.
\end{itemize}
\end{Example}
{"url":"http:\/\/physics.stackexchange.com\/questions\/71928\/expressions-of-action-and-energy-momentum-tensor-in-bc-conformal-field-with-cent","text":"# Expressions of action and energy momentum tensor in bc conformal field with central charge equals one\n\nI have a question with conformal field theory in Polchinski's string theory vol 1 p. 51.\n\nFor $bc$ conformal field theory $$S=\\frac{1}{2\\pi} \\int d^2 z b \\bar{\\partial} c$$ $$T(z)= :(\\partial b) c: - \\lambda \\partial (: bc:)$$ with central charge $c=-3 (2 \\lambda-1)^2 +1 =1$. Introducing $\\psi$ and $\\bar{\\psi}$ to replace the anticommuting fields $b$ and $c$ as following $$b \\rightarrow \\psi =2^{-1\/2} (\\psi_1 + i \\psi_2 )$$ and $$c \\rightarrow \\bar{\\psi} =2^{-1\/2} (\\psi_1 - i \\psi_2)$$ It is claimed that $$S=\\frac{1}{4\\pi} \\int d^2 z \\psi_1 \\bar{\\partial} \\psi_1 + \\psi_2\\bar{\\partial} \\psi_2 (2.5.18b)$$ $$T=- \\frac{1}{2} \\psi_1 \\partial \\psi_1 -\\frac{1}{2} \\psi_2 \\partial \\psi_2 (2.5.18c)$$\n\nI cannot obtain the above expressions of $S$ and $T$. Here is my derivations. First I try to recover the anti-commuting characters of fields $b$ and $c$ by $\\psi_1$ and $\\psi_2$. For $$bc+cb=0$$ I have $$\\psi_1 \\psi_1 + \\psi_2 \\psi_2 =0$$ Then for the action $$S=\\frac{1}{4\\pi} \\int d^2 z \\left( \\psi_1 \\bar{\\partial} \\psi_1 + i \\psi_2 \\bar{\\partial} \\psi_1 -i \\psi_1 \\bar{\\partial} \\psi_2 + \\psi_2 \\bar{\\partial} \\psi_2 \\right)$$ (1) [Solved] Why the term $i \\psi_2 \\bar{\\partial} \\psi_1 -i \\psi_1 \\bar{\\partial} \\psi_2$ does not contribute to the action?\n\n(2) How to derive (2.5.18c)?\n\n-\nYou know, it's completely fine for you to ask multiple questions about Polchinski in the course of your studying; there's really no need to start every question with a disclaimer like \"another stupid question from...\" :) \u2013\u00a0 joshphysics Jul 22 at 23:02\nThank you! I will try! 
\u2013\u00a0 user26143 Jul 22 at 23:19\nFor (1), you could try integration by parts and anti-commutativity of the fermions. \u2013\u00a0 Heidar Jul 22 at 23:59\nIf $\\psi_1$ and $\\psi_2$ anti commutes, then I can let $i \\psi_2 \\bar{\\partial} \\psi_1 -i \\psi_1 \\bar{\\partial} \\psi_2$ disappears. But from $\\{b,c\\}=0$ I only get $\\psi_1 \\psi_1 + \\psi_2 \\psi_2=0$. \u2013\u00a0 user26143 Jul 23 at 0:19\nWhat do you get from $\\{b, b\\} = \\{c , c\\} = 0$? \u2013\u00a0 Prahar Jul 23 at 0:31","date":"2013-12-21 01:36:24","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7892016172409058, \"perplexity\": 603.0840955025228}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2013-48\/segments\/1387345774525\/warc\/CC-MAIN-20131218054934-00038-ip-10-33-133-15.ec2.internal.warc.gz\"}"} | null | null |
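For completeness, here is a sketch assembled from the hints in the comments (my own reading of them, not an answer posted on the page):

```latex
% Step 1: use b^2 = 0 and c^2 = 0 (the fields are Grassmann-valued):
%   2b^2 = (\psi_1 + i\psi_2)^2 = \psi_1^2 - \psi_2^2 + i\{\psi_1,\psi_2\} = 0,
%   2c^2 = (\psi_1 - i\psi_2)^2 = \psi_1^2 - \psi_2^2 - i\{\psi_1,\psi_2\} = 0.
% Adding and subtracting these gives \psi_1^2 = \psi_2^2 and \{\psi_1,\psi_2\} = 0.
% Together with \{b,c\} = 0, i.e. \psi_1^2 + \psi_2^2 = 0, this forces
%   \psi_1^2 = \psi_2^2 = 0, \qquad \psi_1\psi_2 = -\psi_2\psi_1,
% so \psi_1 and \psi_2 are genuinely anticommuting fields.
%
% Step 2 (question 1): integrate one cross term by parts,
%   \int d^2z\, \psi_1 \bar\partial \psi_2
%     = -\int d^2z\, (\bar\partial\psi_1)\,\psi_2
%     = \int d^2z\, \psi_2\, \bar\partial \psi_1 ,
% so i\psi_2\bar\partial\psi_1 - i\psi_1\bar\partial\psi_2 integrates to zero,
% giving (2.5.18b).
%
% Step 3 (question 2): central charge c = 1 corresponds to \lambda = 1/2, so
%   T = :(\partial b)c: - \tfrac{1}{2}\partial(:bc:)
%     = \tfrac{1}{2}:(\partial b)c: - \tfrac{1}{2}:b\,\partial c: .
% Substituting b = 2^{-1/2}(\psi_1 + i\psi_2), c = 2^{-1/2}(\psi_1 - i\psi_2)
% and using the anticommutation relations inside the normal ordering, the
% \psi_1\psi_2 cross terms cancel pairwise and the diagonal terms combine to
%   T = -\tfrac{1}{2}\,\psi_1\partial\psi_1 - \tfrac{1}{2}\,\psi_2\partial\psi_2 ,
% which is (2.5.18c).
```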
\section{Introduction}
\subsection{Broader context}
Gas bubbles dispersed in liquids provide surface area through which mass can be exchanged by diffusion. Ocean-atmosphere exchanges of \ce{CO2}, for example, are enhanced by bubble-mediated transfer in regions of the globe where high winds lead to high rates of wave breaking, as entrained air cavities break apart into small bubbles in the turbulent field under the breaking wave \citep{Deike2018,Reichl2020,Deike2022}. Further, many industrial processes involve facilitating gas transfer to a liquid through bubble interfaces \citep{Schludieter2021}. In both environmental and industrial scenarios, the breakage of bubbles by the turbulence of the bulk flow increases the total surface area through which transfers may occur and modulates the bubbles' dynamics.
Despite the ubiquity of bubble break-up across disciplines, the physics of bubble breaking in turbulence remains to be fully understood, as turbulent effects are often accompanied by buoyant effects and shear in the mean structure of the flow \citep{Risso1998}. Further, the fast dynamics of bubble pinching have, until recently, been difficult to measure experimentally, leaving open questions regarding the final portion of the break-up process \citep{Ruth2019}. These various challenges have led to a wide variability in the predictions of models for both the rate at which bubbles break and the sizes of bubbles they break into.
\subsection{Bubble break-up in turbulence}
We consider the break-up of a bubble with an effective diameter $d_0$, taken to be the diameter of a sphere with the same volume. Before considering the turbulent nature of the liquid around it, the bubble in a liquid is described by the density of the liquid and gas phases, $\rho$ and $\rho_\mathrm{g}$, their viscosities $\mu$ and $\mu_\mathrm{g}$, the acceleration due to gravity $g$, and the surface tension of the liquid-gas interface $\sigma$. When the carrier flow in which the bubbles are dispersed (with velocity $\vec{u}$) is turbulent, it is characterized by the dissipation rate of the turbulence $\epsilon$, which is the rate at which kinetic energy in turbulent fluctuations is dissipated to heat. The turbulence is comprised of fluctuating motions existing over a range of length scales, extending from larger motions near the integral length scale $L_\mathrm{int}$ (beyond which the velocity field becomes uncorrelated) down to the Kolmogorov scale $\eta$, at which turbulent motions are dissipated by the viscosity of the fluid \citep{Pope2000}.
With nine independent physical parameters which span three physical dimensions, we require six dimensionless parameters to describe the problem of bubble break-up in turbulence, for which we choose
\begin{gather}
\changed{\mathrm{We}_0 = \frac{C_2 \rho \epsilon^{2/3} d_0^{5/3}}{\sigma}}, \qquad \frac{d_0}{L_\mathrm{int}}, \qquad \frac{d_0}{l_\mathrm{cap}} = \sqrt{\frac{\rho g d_0^2}{\sigma}}, \nonumber \\
\mathrm{Re}_\mathrm{t} = \frac{\rho L_\mathrm{int} u'}{\mu}, \qquad \frac{\rho}{\rho_\mathrm{g}}, \qquad \frac{\mu}{\mu_\mathrm{g}}, \label{eq:dimensionless_numbers}
\end{gather}
where the subscript ``0'' indicates that a quantity refers to an initial condition. The \dan{size of the parent bubble relative to the capillary length scale $l_\mathrm{cap} = \sqrt{\sigma / (\rho g)}$} describes the relative importance of gravity and surface tension effects for the parent bubble. The large-scale turbulence Reynolds number $\mathrm{Re}_\mathrm{t}$ represents the separation of length scales in the turbulence. The bubble size relative to the integral length scale, $d_0/L_\mathrm{int}$, along with $\mathrm{Re}_\mathrm{t}$, describes the spatial separation between the bubble and the turbulence scales. With $\mathrm{Re}_\mathrm{t} \gg 1$, and with $\rho/\rho_\mathrm{g}$ and $\mu/\mu_\mathrm{g}$ both fixed constants $\gg 1$ for common liquid-gas configurations, we will neglect their impact in the rest of the experimental study. \changed{The Weber number of the parent bubble $\mathrm{We}_0$, which parameterizes the balance between turbulent stresses and surface tension, will be the main parameter of focus.}
For a bubble in the inertial subrange of the turbulence ($\eta \ll d_0 \ll L_\mathrm{int}$), the ratio of the inertial stresses arising from velocity gradients in the turbulence and surface tension stresses defines the Weber number, $\mathrm{We}(d) = C_2 \rho \epsilon^{2/3} d^{5/3} / \sigma$, with $C_2=2$, and is central in the analysis of bubble break-up \citep{Risso1998,Riviere2021jfm,Perrard2021BubbleFlow}. The definition of a critical Weber number for break-up $\mathrm{We}_\mathrm{c}$ yields the Hinze scale \citep{Hinze1955},
\begin{equation}
d_\mathrm{H} = \left(\frac{\mathrm{We}_\mathrm{c}}{2}\right)^{3/5} \left( \frac{\sigma}{\rho} \right)^{3/5} \epsilon^{-2/5}, \label{eq:Hinze_scale}
\end{equation}
\changed{and we typically use the ratio $d/d_\mathrm{H} = (\mathrm{We}/\mathrm{We}_\mathrm{c})^{3/5}$ in place of $\mathrm{We}$}. Estimates of $\mathrm{We}_\mathrm{c}$ vary, and generally involve either considerations of how likely a bubble is to break apart over some physically-relevant time or within some spatial observation window \citep{Hinze1955,Martinez-Bazan1999a,Risso1998,Riviere2021jfm}, or considerations of the shape of the bubble size distribution resulting from break-ups \citep{Deane2002}. Since $\mathrm{We}_\mathrm{c}$ is influenced by factors like the buoyancy and specificity of the turbulent flow, and since the turbulent stresses on a bubble are stochastic in nature, the Hinze scale as defined in \cref{eq:Hinze_scale} represents a soft limit for break-up. Different experimental and computational \dan{setups will} lead to a range of reported or inferred critical Weber numbers, which typically vary from 1 to 5 \citep{Riviere2021jfm,Risso1998,Hinze1955,Martinez-Bazan1999a,Vejrazka2018}. In this paper, we will use $\mathrm{We}_\mathrm{c} = 1$, consistent with our results and similar experiments in a turbulent flow forced by underwater pumps \citep{Vejrazka2018}. We note that the inertial stresses arising from the \dan{velocity slip} between the bubble and the surrounding liquid can be comparable to those \dan{associated with the turbulence's inherent} velocity gradients at the bubble scale \citep{Masuk2021}, that eddies smaller than the bubble can also contribute to deformation and break-up \citep{Luo1996,Qi2022}, and that the turbulent flow can trigger bubble shape oscillations \citep{Risso1998,Ravelet2011}. These factors will contribute to bubble deformation and break-up in ways that are not directly parameterized in the definition of $d_\mathrm{H}$.
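As a concrete numerical illustration (a minimal sketch, not code from this study), the Weber number and the Hinze scale of \cref{eq:Hinze_scale} can be implemented directly. The default water properties $\rho = 1000$ kg/m$^3$ and $\sigma = 0.072$ N/m are assumed here:

```python
def weber(d, eps, rho=1000.0, sigma=0.072, C2=2.0):
    """Turbulent Weber number We(d) = C2 * rho * eps^(2/3) * d^(5/3) / sigma."""
    return C2 * rho * eps ** (2.0 / 3.0) * d ** (5.0 / 3.0) / sigma

def hinze_scale(eps, rho=1000.0, sigma=0.072, We_c=1.0, C2=2.0):
    """Hinze scale [m]: the diameter at which We(d_H) = We_c."""
    return (We_c / C2) ** 0.6 * (sigma / rho) ** 0.6 * eps ** -0.4
```

By construction, $\mathrm{We}(d_\mathrm{H}) = \mathrm{We}_\mathrm{c}$ for any dissipation rate, which provides a quick self-check of the implementation.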
\changed{The bubble size distribution $N(d)$ gives the number density of bubbles with diameter $d$, and given the nature of experiments reported in this paper, we define it such that $N(d) \mathrm{d} d$ gives the total number of bubbles with diameters $\in (d,d+\mathrm{d}d)$.} \cite{Garrett2000} proposed that, for bubbles larger than the Hinze scale, a power-law scaling $N(d) \propto d^{-10/3}$ describes the steady-state bubble size distribution, assuming that the break-up rate scales with the turbulent frequency at the bubble size. This regime has since been reported in several experiments \citep{Deane2002,Blenkinsopp2010,Rojas2007} and simulations \citep{Deike2016,Wang2016,Gao2021BubbleCrests,Chan2021,Soligo2019,Riviere2021jfm,Mostert2021}. For smaller bubbles, the size distribution typically exhibits a shallower slope \citep{Deane2002,Blenkinsopp2010}, with fewer studies resolving this range of scales and some variation in the values that have been reported. The $N(d) \propto d^{-3/2}$ distribution for $d<d_\mathrm{H}$ has been observed experimentally \citep{Deane2002} and numerically \citep{Wang2016,Mostert2021} for bubbles under breaking waves, though the identification of a sub-Hinze power-law slope is additionally complicated by the transient nature of bubble disintegration \citep{Riviere2021jfm} and breaking wave \citep{Mostert2021} events. Recent work has identified the capillary pinching of gas ligaments created by turbulent deformations as an origin of sub-Hinze bubbles, with theoretical arguments relating to the timescale over which such pinching occurs supporting the $N(d) \propto d^{-3/2}$ sub-Hinze scaling \citep{Riviere2021cap}. Relating measured size distributions to theoretical scalings derived from break-up physics is complicated by the fact that bubbles' motions, and hence their residence time in some experimental domain, are dependent on their size and the characteristics of the turbulence they encounter \citep{Garrett2000}. 
\dan{Smaller bubbles, or bubbles in regions of more intense turbulence, rise more slowly than others, for example} \citep{Ruth2021}; accounting for these effects requires detailed knowledge of the size dependencies of the bubbles' motions.
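The two power-law regimes discussed above can be summarized in an idealized piecewise spectrum, continuous at the Hinze scale (an illustrative model form only; measured distributions are also shaped by the transient and transport effects just noted):

```python
def size_spectrum(d, d_H, N_H=1.0):
    """Idealized steady-state bubble size spectrum: N(d) proportional to
    d^(-3/2) below the Hinze scale and d^(-10/3) above it, matched at
    d = d_H where the amplitude is N_H."""
    x = d / d_H
    return N_H * x ** (-1.5 if x < 1.0 else -10.0 / 3.0)
```

The matching at $d = d_\mathrm{H}$ fixes the relative amplitude of the two branches, so a single prefactor $N_\mathrm{H}$ sets the overall normalization.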
\subsection{Child size distribution and break-up time scales}
In this work, we will employ experimental observations to describe bubble break-up over a range of spatial scales: we consider parent bubbles ranging in size from the Hinze scale to \changed{$d_0 = 8.3 d_\mathrm{H}$}, and investigate how they break up to produce child bubbles that may be orders of magnitude smaller than the Hinze scale. As volume is conserved in any break-up, we will work with bubble volumes $V= \pi d^3 / 6$ when discussing bubble break-up, denoting parent bubble volumes by $V=\Delta$ and child bubble volumes by $V=\delta$.
Expressions for a break-up kernel $f(\delta;\Delta)$, for which $f(\delta;\Delta) \mathrm{d} \delta$ gives the rate at which a parent bubble of volume $\Delta$ will break into a child bubble with volume $\in (\delta,\delta+\mathrm{d}\delta)$ in some turbulent flow, are informed by experiments and simulations on break-up. Most experimental studies have involved air bubbles in water under Earth's gravitational acceleration, with turbulence in the water generated by one or more jets \citep{Martinez-Bazan1999a,Vejrazka2018,Qi2020}, rotating blades \citep{Ravelet2011}, or by turbulent flow through a reactor or channel \citep{Andersson2006}. \cite{Risso1998} performed experiments on bubble break-up in microgravity to remove the effects of buoyancy, which also contributes to bubble deformation and break-up, and more recently, \cite{Riviere2021jfm} performed direct numerical simulations (DNSs) of bubble break-up without gravity, solving the full two-phase Navier--Stokes equations for a bubble subjected to homogeneous, \dan{isotropic} turbulence.
These studies have confirmed that the time over which a break-up occurs is controlled by both the turbulent scales and the bubble's oscillatory scales. \cite{Riviere2021jfm} showed that, as a bubble of size $d_0 \gg d_\mathrm{H}$ is introduced to turbulence, it first breaks up after a time comparable to the eddy turn-over time at its scale, $T_\mathrm{turb}(d_0) = \epsilon^{-1/3} d_0^{2/3}$. Experimental studies have shown that the time over which deformation occurs prior to break-up scales similarly \citep{Qi2020,Risso1998}. As the deformation of moderately-sized bubbles is also impacted by the surface tension, capillary dynamics remain important, as a bubble's natural oscillation \dan{frequency remains} apparent in its shape oscillations \citep{Risso1998,Ravelet2011,Perrard2021BubbleFlow}. Further, the turbulent turnover time is typically comparable to the capillary oscillation time at the parent bubble scale for air bubbles in water at moderate $d_0/d_\mathrm{H}$, which can lead to a resonance that aids break-up \citep{Risso1998,Ravelet2011}.
The break-up frequency $\omega$ is defined as the inverse of the typical time until a bubble undergoes a break-up, \changed{and is distinct from the (necessarily shorter) typical duration over which a break-up occurs}. \cite{Ravelet2011} showed that the distribution of the times until a bubble breaks mirrors the distributions of the times between severe shape deformations and the times between large instantaneous Weber numbers. The most energetic scales capable of deforming a bubble are those at the scale of the bubble, and experiments from which $\omega$ was extracted suggested that the break-up frequency initially increases with bubble size as the turbulence becomes more capable of counteracting surface tension, and then decreases for even larger bubbles, as the time required for a turbulent eddy to act across the bubble scale becomes longer \citep{Martinez-Bazan1999a}, though this analysis may have missed break-ups in which one child bubble size is close to the parent size \citep{Lehr2002}. Recent experiments from \cite{Qi2022} showed that eddies smaller than $d_0$ can also cause break-up, and other theoretical analyses have considered the action of a range of turbulent scales which may cause break-up. In such models, the product of the rate at which eddies of a given size interact with a bubble and each interaction's likelihood of causing break-up are integrated over a range of eddy sizes \citep{Prince1990,Tsouris1994,Luo1996,Lehr2002,Aiyer2019,Yuan2021}, causing the break-up frequency to increase with the bubble size as more turbulent scales contribute to break-up.
Various models for the child size distributions $p(\delta;\Delta)$ have been proposed, most of which assume that each break-up produces two bubbles. The child size distribution has been described with a $\cap$-shaped dependence on $\delta$---that is, the most likely outcome is to produce child bubbles that are comparable in \dan{size} to the parent bubble \citep{Martinez-Bazan1999b,Martinez-Bazan2010}; or with $\cup$- or W-shaped child size distributions, in which small bubbles are more likely to be produced than moderately-sized ones \citep{Qi2020,Riviere2021jfm,Vejrazka2018,Andersson2006,Tsouris1994,Luo1996,Lehr2002,Yuan2021}. Experimental and numerical evidence suggests that break-ups often produce just two child bubbles when $d_0/d_\mathrm{H}$ is close to 1 \citep{Vejrazka2018,Riviere2021jfm}. However, break-ups at larger $d_0/d_\mathrm{H}$ are more severe and often result in more than two child bubbles being formed in a single coherent event \citep{Vejrazka2018,Hinze1955,Riviere2021jfm}. \cite{Hill1996} developed generalized expressions for $p(\delta;\Delta)$ as products of power-law relations (each $\propto \delta^\alpha$) for $\alpha>-1$ and integer numbers of child bubbles, which by design satisfy constraints relating to the sizes of the bubbles formed. Their analysis was extended to \dan{break-ups with a non-integer average number of child bubbles} by \cite{Diemer2002}.
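To make the use of such kernels concrete, a single volume-conserving binary break-up can be sampled by inverse-CDF sampling of the smaller child volume from a power law (the exponent $\alpha$ and this particular kernel form are assumed for illustration, and are not the model developed later in this paper):

```python
import random

def sample_binary_breakup(Delta, alpha=-0.5, rng=random):
    """Sample one binary break-up of a parent of volume Delta.

    The smaller child volume is drawn from p(delta) proportional to
    delta^alpha on (0, Delta/2], with alpha > -1 (alpha < 0 gives a
    U-shaped distribution favoring small children); the second child
    takes the remaining volume, so each split conserves volume."""
    u = rng.random()
    # Inverse-CDF sampling: the CDF of delta^alpha is proportional to
    # delta^(alpha+1), so delta = (Delta/2) * u^(1/(alpha+1)).
    small = 0.5 * Delta * u ** (1.0 / (alpha + 1.0))
    return small, Delta - small
```

Repeated application of such a rule to the children generates a break-up cascade; volume conservation holds exactly at every split by construction.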
In the work discussed so far, the role of capillarity has been to counteract the turbulent stresses and prevent severe deformation, while also providing a resonance mechanism at moderate $d_0/d_\mathrm{H}$. However, more recent work has shown that capillarity also plays an important role late in the break-up process, even after a turbulent stress has decidedly overcome it. \cite{Andersson2006} showed that asymmetries in a deformed bubble shape can become more pronounced as a bubble breaks apart due to the variation in capillary pressure associated with the deformation. More recently, \cite{Riviere2021cap} showed that very small bubbles originate not from turbulent motions at very small scales, but rather from the capillary instabilities of ligaments arising from much larger-scale deformations.
\subsection{Outline of the paper}
\label{sec:problem_characterization}
\changed{In this work we address the problem of bubbles breaking up in forced turbulence, which is applicable to break-up under breaking waves and in industrial reactors. We probe a wide range of scales, with bubbles ranging in size from the Hinze scale to $d=8.3 d_\mathrm{H}$ (corresponding to $\mathrm{We}_0 = 34$). Further, we resolve the size distribution down to approximately an order of magnitude smaller than $d_\mathrm{H}$, enabling us to identify the way in which the sub-Hinze size distribution scales when there is a large separation between the Hinze scale and the bubbles which break.}
The experimental set-up, including the turbulence generation, is detailed in \Cref{sec:experiment}. The results on the disintegration of large air cavities are given in \Cref{sec:air_cavity_results}, spanning a wide range of $d_0/d_\mathrm{H}$. We demonstrate experimentally that an $N(d)\propto d^{-3/2}$ distribution below the Hinze scale is observed when the initial cavity size is much larger than the Hinze scale, supporting the notion that the capillary pinching dynamics proposed by \cite{Riviere2021cap} are effective at producing sub-Hinze bubbles. The dynamically-tracked individual bubble break-ups with moderate $d_0/d_\mathrm{H}$ and resulting child size distributions are discussed in \Cref{sec:dynamical}. In \Cref{sec:model} we develop a model for turbulent bubble break-up that unifies the turbulent inertial dynamics with the faster, capillary pinching dynamics responsible for sub-Hinze bubble production, \changed{ascribing these physical mechanisms to various components of a modeled child size distribution}. The model is informed by both experimental observations of the disintegrations of air cavities of various sizes and by experimental \dan{and numerical} observations of individual break-up events. Concluding remarks are given in \Cref{sec:breakup_conclusions}.
\section{Experimental setup}
\label{sec:experiment}
This paper presents the results of two separate, complementary experiments, both involving air bubbles breaking apart in forced water turbulence. In the first, we generate large cavities of air with sizes much larger than the Hinze scale (with $d_0/d_\mathrm{H}$ between 2.1 and 8.3) and measure the transient evolution of the bubble size distribution as the cavity disintegrates in successive break-ups. In the second experiment, we introduce moderately-sized bubbles (with $d_0/d_\mathrm{H}$ between 1 and 3) into the turbulence, and track the outcomes of their individual break-ups. The turbulence generation is identical in both set-ups.
\subsection{Turbulence generation and characterization}
Turbulence in a \SI{0.37}{m^3} water tank is generated by the convergence of eight turbulent jets created by four submerged water pumps, as sketched in \Cref{fig:exp_and_PIV_singleplane} (a) and described in greater detail in \cite{Ruth2021}. The flow from each pump is split into two parallel jets at a Y-junction, with the two outlets separated by \SI{7.8}{cm} and the centers of the Y-junctions forming the vertices of a \SI{25}{cm} square in the horizontal plane. \Cref{fig:exp_and_PIV_singleplane} (b) presents properties of the flow as characterized in the central plane ($y=0$) of the experiment with two-dimensional, two-component particle image velocimetry (PIV). The background gives the local fluctuation velocity $u' = \sqrt{({u'_x}^2+{u'_z}^2)/2}$, where $u'_i=\sqrt{\overline{(u_i-\overline{u_i})^2}}$ and overbars denote averaging in time. The fluctuation velocity $u'$ tends to be largest in the plane of the jets ($z \approx \SI{0.01}{cm}$) and in the region below their convergence zone ($x \approx y \approx 0$). PIV is performed in nine parallel planes, enabling the three-dimensional interpolation of turbulence quantities at any location within the measurement domain.
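The fluctuation-velocity definition above can be sketched in a few lines (plain Python for illustration; the actual analysis operates on PIV velocity fields):

```python
def fluctuation_velocity(ux_series, uz_series):
    """Local r.m.s. fluctuation velocity u' = sqrt((u'_x^2 + u'_z^2)/2),
    where u'_i is the temporal standard deviation of velocity component i
    at a point (a minimal stdlib sketch of the PIV post-processing)."""
    def rms_fluct(series):
        mean = sum(series) / len(series)
        return (sum((u - mean) ** 2 for u in series) / len(series)) ** 0.5
    upx, upz = rms_fluct(ux_series), rms_fluct(uz_series)
    return ((upx ** 2 + upz ** 2) / 2.0) ** 0.5
```

A steady flow gives $u' = 0$ regardless of its mean speed, since only the deviations from the temporal mean contribute.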
\begin{figure}
\centering
\begin{minipage}{.4\textwidth}
\centering
\begin{overpic}[width=1\linewidth]{figures/schematic_PIV.pdf}
\put(0,85){(a)}
\end{overpic}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\begin{overpic}[width=0.95\linewidth]{figures/PIV_field_characterization_singleplane.pdf}
\put(0,79.5){(b)}
\end{overpic}
\end{minipage}
\caption{The turbulence generation and characterization. (a) A sketch of the experiment (not to scale), consisting of a \SI{0.37}{m^3} tank of water in which four pumps, each split to two outlets, are arranged at the corners of a square in the horizontal plane. The turbulence is characterized with particle image velocimetry performed separately in nine parallel planes, with illumination provided by a laser sheet (shown in green). (b) Properties of the turbulent flow field in the central plane of the experiment. The background shows the local value of $u'$, denoted by the color given in the colorbar. The green dashed rectangle shows the field of view employed in the large air cavity disintegration experiments. The diameter of the black circles denotes the Hinze scale $d_\mathrm{H}$ at various $x$ and $z$. The length of the cyan rectangles denotes the integral length scale $L_\mathrm{int}$ at those locations.}
\label{fig:exp_and_PIV_singleplane}
\end{figure}
As described in \cite{Ruth2021}, we compute the integral length scale $L_\mathrm{int}$ locally at each point in the flow by integrating the spatial autocorrelation function. It varies spatially throughout the experimental volume, being shortest where the turbulence is strongest. The cyan lines in \Cref{fig:exp_and_PIV_singleplane} (b) denote the value of $L_\mathrm{int}$ at various locations in the central plane of the experiment: $L_\mathrm{int}$ is shortest near the convergence of the jets, and grows at lower and higher depths. With $u'$ and $L_\mathrm{int}$ calculated from the PIV data, we can then compute the local turbulence dissipation rate \dan{under the assumption of isotropy} with $\epsilon = C_\epsilon u'^3/L_\mathrm{int}$, with $C_\epsilon = 0.7$ \citep{Sreenivasan1997}, and the Kolmogorov microscale with $\eta = ((\mu/\rho)^3/\epsilon)^{1/4}$ \citep{Pope2000}. The Hinze scale $d_\mathrm{H}$, calculated using \cref{eq:Hinze_scale}, is denoted at various locations by the diameter of the black circles drawn in \Cref{fig:exp_and_PIV_singleplane} (b). The Hinze scale is smaller where the turbulence is more intense, meaning that more bubbles will be larger than the Hinze scale and susceptible to break-up at these locations. We refer to \cite{Ruth2021} for more details on the structure of the turbulence field and for maps of turbulent quantities outside of the central plane.
\subsection{Large cavity disintegration experiment}
\label{sec:exp_cavity}
For the experiment on large cavity break-ups, air cavities were produced following \cite{Landel2008} by placing a hollow hemispherical cup with $R=\SI{5}{cm}$ underwater, sketched in \Cref{fig:schematic_cavity} (a), and bubbling a known volume of air $V_0 = \pi d_0^3 / 6$ into it. Once bubbles in this cup have coalesced into a single air cavity, the cup is then inverted by rotating it rapidly half a revolution, such that the air inside is suddenly no longer constrained by the curved cup surface. The top surface of the initial volume of air roughly conforms to the curved inner surface of the cup. The large air cavity, having been suddenly exposed to stresses from the surrounding turbulence and its buoyant rise through the water, deforms and starts a complex sequence of break-ups, leading to its disintegration. The surface of the cup rotates with a speed around \SIrange{0.4}{0.9}{m/s}, and we have checked that this speed does not systematically impact the early stages of the bubble size distribution. \changed{Further, similar experiments run without turbulence yield very little break-up, as evidenced in \Cref{sec:quiescent_breakup_comparison}.}
\begin{figure}
\centering
\begin{minipage}{.4\textwidth}
\centering
\begin{overpic}[width=1\linewidth]{figures/schematic_cavity.pdf}
\put(0,86){(a)}
\end{overpic}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\begin{overpic}[width=1\linewidth]{figures/bubble_release_single_image.pdf}
\put(4,62){(b)}
\put(-6,5.5){(c)}
\end{overpic}
\end{minipage}
\caption{The experiment on large cavity disintegration. (a) Schematic of the experiment. Air is bubbled into an inverted hemispherical cup located just under the convergence of the turbulent jets, and the cup is rapidly rotated to expose the air to the turbulence. The experiment is lit from behind (not shown) and filmed with a high-speed camera. (b) One representative image of a cavity breaking apart, with the cup still slightly visible at the bottom of the image. (c) The characteristic length scales $\eta$, $d_\mathrm{H}$, $L_\mathrm{cap}$, and $L_\mathrm{int}$ taken in analyzing the data, the pixel size $\oldDelta x$ and the minimum bubble size considered $d_\mathrm{min}$, and the diameters of the air cavities studied (circles). Distributions of the turbulence quantities in the field of view in the center of the tank (within the green rectangle in \Cref{fig:exp_and_PIV_singleplane}) are also given in gray.}
\label{fig:schematic_cavity}
\end{figure}
The turbulent flow in the region of the tank imaged in this experiment is denoted by the green rectangle in \Cref{fig:exp_and_PIV_singleplane} (b). The turbulence varies spatially, so to simplify the analysis, we take $u' \approx \SI{0.2}{m/s}$, $L_\mathrm{int} \approx \SI{1.5}{cm}$, and $\eta \approx \SI{37}{\micro m}$ as characteristic values, which set $d_\mathrm{H} = \SI{3.2}{mm}$ and $\mathrm{Re}_\mathrm{t} = 3400$. These length scales are denoted in \Cref{fig:schematic_cavity} (c), which also gives the distribution of the length scales present in the field of view in the middle of the tank. The mean flow is downwards with $\overline{W} \approx \SI{-0.25}{m/s}$, largely counteracting the buoyant rise speed of larger bubbles. This enables the bubble population to linger in the measurement region for a sufficient period of time to image it over multiple large-scale eddy turn-over times $T_\mathrm{int} = L_\mathrm{int}/u' \approx \SI{0.075}{s}$. The cavities range in size between $d_0/d_\mathrm{H} = 2.12$ and 8.30. Data for each condition, as well as the number of runs recorded at each, are given in \Cref{tab:cavity_conditions}.
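As a consistency check on the characteristic values quoted here, the relations of the previous section can be evaluated directly (the water properties $\nu = 10^{-6}$ m$^2$/s, $\sigma = 0.072$ N/m, and $\rho = 1000$ kg/m$^3$ are assumed, as they are not stated explicitly in the text). With these inputs, $\epsilon \approx 0.37$ m$^2$/s$^3$, $\eta \approx 40$ \si{\micro m} (close to the $\approx \SI{37}{\micro m}$ quoted), and $d_\mathrm{H} \approx \SI{3.2}{mm}$:

```python
# Consistency check of the characteristic turbulence scales, using the
# relations eps = C_eps u'^3 / L_int, eta = (nu^3 / eps)^(1/4), and the
# Hinze scale with We_c = 1. Water properties are assumed.
u_prime = 0.2     # characteristic fluctuation velocity [m/s]
L_int = 0.015     # characteristic integral length scale [m]
C_eps = 0.7       # dissipation constant used in the text
nu, sigma, rho = 1.0e-6, 0.072, 1000.0   # assumed water properties (SI)

eps = C_eps * u_prime ** 3 / L_int                   # ~0.37 m^2/s^3
eta = (nu ** 3 / eps) ** 0.25                        # ~4.0e-5 m
d_H = 0.5 ** 0.6 * (sigma / rho) ** 0.6 * eps ** -0.4  # ~3.2e-3 m
T_int = L_int / u_prime                              # 0.075 s
```

The recovered $d_\mathrm{H}$ and $T_\mathrm{int}$ match the quoted values; the small discrepancy in $\eta$ reflects the approximate, spatially-varying nature of the characteristic values.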
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{l|ccccccc}
experiment & $d_0$ [\si{cm}] & runs & $\mathrm{We}_0$ & $d_0/d_\mathrm{H}$ & $d_0/\eta$ & $d_0 / L_\mathrm{int}$ & $d_0/l_\mathrm{cap}$ \\
\hline
cavity disintegration& 0.68 & 20 & 3.5 & 2.12 & 184 & 0.46 & 2.51\\
& 0.91 & 15 & 5.7 & 2.84 & 247 & 0.61 & 3.36\\
& 1.34 & 15 & 10.7 & 4.15 & 361 & 0.89 & 4.91\\
& 1.85 & 15 & 18.5 & 5.76 & 500 & 1.24 & 6.80\\
& 2.25 & 10 & 25.6 & 7.00 & 608 & 1.50 & 8.28\\
& 2.67 & 11 & 34.0 & 8.30 & 721 & 1.78 & 9.81\\
\hline
individual break-ups & 0.54 $\pm$ 0.17 & 162 & 3.1 $\pm$ 1.7 & 1.89 $\pm$ 0.64 & 156 $\pm$ 50 & 0.41 $\pm$ 0.14 & 1.99 $\pm$ 0.62\\
\end{tabular}
\caption{Conditions of the experiments. Characteristic values are given for each of the cavity disintegration cases. For the experiments on individual bubble break-up, the mean and standard deviation among the 162 recorded cases are given for each quantity.}
\label{tab:cavity_conditions}
\end{center}
\end{table}
One image is shown in \Cref{fig:schematic_cavity} (b). The cup is visible in the bottom of the image as it has not yet fully rotated out of the field of view. The measurement region, \changed{which spans \SI{15.8}{cm} in the $x$ direction and \SI{8.9}{cm} in the $z$ direction,} is illuminated from the back, and the disintegration of the cavity is filmed with a high-speed camera at \SI{500}{Hz} with a spatial resolution of \SI{38}{\micro m/pixel}. \changed{The field of view is much larger than all the bubbles considered, so it does not introduce a significant bias related to bubbles whose images extend partially outside the field of view.} Bubbles are detected with an image processing method described in \Cref{sec:cavity_bubble_detection}, and their \changed{effective diameters $d$ are determined as the equivalent diameter of a circle with the same area as the projected bubble image}. In analyzing the data, we consider only bubbles for which $d\geq \SI{400}{\micro m}$, for which the detection is less sensitive to the chosen image intensity threshold. Given the typical severe deformation and overlapping images of larger bubbles ($d \gtrsim \SI{6}{mm}$), we note that their sizes will tend to be over-estimated by this method. The air void fraction in the vicinity of the cavity is high enough that we are unable to track the dynamics of individual break-ups, so we restrict our analysis to the resulting bubble size distribution.
To account for the limited field of view in our experiments, we adjust the measured size distributions by keeping a record of bubbles which have left and entered the field of view. Those which leave are ``locked'' into the bubble record used in computing the size distributions, while those that enter the field of view are excluded from the calculation of the size distribution. This, along with a slight smoothing in $d$ and $t$ to account for the limited number of bubbles observed at early times or with small cavities, is described in greater detail in \Cref{sec:advection_correction}, and has only a limited impact on the results reported in this paper, as we do not consider the size distribution at late times.
\subsection{Individual break-up tracking experiment}
\label{sec:exp_dynamical}
In the second set of bubble break-up experiments, we dynamically track the individual break-ups of bubbles in the turbulent region. As sketched in \Cref{fig:schematic_dynamical} (a), bubbles are introduced to the bottom of the tank through a needle and rise to the turbulent region. Two cameras\changed{,} which are synchronized with a function generator\changed{,} film at \SI{1000}{fps}. They are oriented \SI{90}{\degree} from each other and their fields of view overlap in a measurement volume of approximately \SI{200}{cm^3}. The cameras are calibrated by mapping their pixels to the paths of the light rays reaching the pixels, following the method presented by \cite{Machicoane2019AImaging}. Then, following a method similar to that used in \cite{Ruth2021}, we identify the 3-D location of the bubbles which are simultaneously captured by each camera. The spatial resolution of each camera varies with the position of the bubble, but the typical values for the two cameras are \SI{28.9}{\micro m/pixel} and \SI{57.1}{\micro m/pixel}. An approximate lower bound for the size of the smallest resolved bubble is then $d_\mathrm{min} \approx \SI{200}{\micro m}$.
The trajectories taken by the bubbles are then determined using the Python package Trackpy \citep{trackpy}, which implements the algorithm from \cite{Crocker1996}. Such trajectories are shown in \Cref{fig:schematic_dynamical} (b). Using the three-dimensional map of the turbulence statistics obtained from PIV, we compute the bubble's size relative to the local Hinze scale $d/d_\mathrm{H}$ (computed with the local value of $\epsilon$) at each bubble location, which is encoded in the color in the figure. The mean dissipation rate at the break-up locations is $\epsilon = \SI{0.52}{m^2/s^3}$, with a standard deviation of \SI{0.21}{m^2/s^3}. The mean values and standard deviations of quantities describing the initial conditions for the break-ups studied in this experiment are given in \Cref{tab:cavity_conditions}.
\begin{figure}
\begin{minipage}{.4\textwidth}
\centering
\begin{overpic}[width=1\linewidth]{figures/schematic_dynamical.pdf}
\put(0,85){(a)}
\end{overpic}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\begin{overpic}[width=0.95\linewidth]{figures/dynamical_just_trajectory.pdf}
\put(0,66){(b)}
\end{overpic}
\end{minipage}
\caption{The experiment to obtain dynamic reconstructions of individual break-up events. (a) Schematic of the experiment. Air bubbles are introduced through a needle at the bottom of the tank and rise into the turbulence created by the jets. The bubbles are filmed with two high-speed cameras, enabling the determination of the 3-D bubble trajectories. (b) The trajectories of parent and child bubbles involved in one break-up event, \dan{and their projections onto the horizontal ($x-y$) plane}. The color corresponds to the bubble's size relative to the local Hinze scale, \changed{which varies spatially with $\epsilon$ as the bubble size is fixed}. \changed{The green dot denotes the first detected position of the parent bubbles; the red dots denote the final detected position of the child bubbles.}}
\label{fig:schematic_dynamical}
\end{figure}
From the bubble trajectories, we identify each time a bubble breaks apart, which occurs when a new trajectory (or trajectories) appears in the vicinity of a previously-existing bubble. As the tracking algorithm will initially link the parent bubble to only one of the child bubbles, the parent bubble trajectory is then split at this time, and both child bubbles are treated equally. These events are denoted by the gray lines connecting the ``end'' of one bubble to the ``beginning'' of another in \Cref{fig:schematic_dynamical} (b). Given the complex deformations involved in some break-ups, the method does not always resolve the fast splitting dynamics accurately; the break-up child size distributions we report, however, are not sensitive to the order of events occurring within one break-up event.
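The event identification described here can be sketched as follows (a deliberately simplified, hypothetical version: the actual processing works on linked Trackpy trajectories and handles multi-way splits and imperfect linking):

```python
def detect_breakups(tracks, max_dist):
    """Flag candidate break-up events: a new trajectory that begins within
    max_dist of a bubble present in the previous frame is paired with that
    bubble. `tracks` maps a bubble id to a list of (frame, x, y, z)
    samples sorted by frame. Returns (parent_id, child_id, frame) tuples."""
    events = []
    first_frame = min(t[0][0] for t in tracks.values())
    for cid, samples in tracks.items():
        f0, x0, y0, z0 = samples[0]
        if f0 == first_frame:
            continue                        # present from the start, not a child
        for pid, psamples in tracks.items():
            if pid == cid:
                continue
            # the candidate parent must exist in the frame before the child appears
            prev = [s for s in psamples if s[0] == f0 - 1]
            if prev:
                _, px, py, pz = prev[0]
                dist = ((x0 - px) ** 2 + (y0 - py) ** 2 + (z0 - pz) ** 2) ** 0.5
                if dist <= max_dist:
                    events.append((pid, cid, f0))
    return events
```

The choice of `max_dist` trades false pairings (too large) against missed events (too small); in practice it would be set by the bubble sizes and displacements between frames.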
\section{Size distribution evolution during the disintegration of a large air cavity}
\label{sec:air_cavity_results}
Here, we present experimental results on the disintegration of air cavities of various sizes from the experiment described in \Cref{sec:exp_cavity}. First, we qualitatively discuss the break-up of cavities in two illustrative cases, one close to the critical size for break-up, and one with a large separation of scales between the cavity and the Hinze scale. Then, we analyze the transient evolution of the bubble size distributions.
\subsection{Disintegration of cavities of increasing sizes}
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{figures/bubble_release_images_bubblerelease_turbulent_filldur5s_flowrate2sccmair_SDS0uM_fps500_backlight_zoomIn_v8.pdf}
\caption{The disintegration of an air cavity with $d_0/d_\mathrm{H} \approx 2.1$, involving just one break-up during the interval shown.}
\label{fig:bubble_release_small}
\end{figure}
The break-ups of two air cavities, one with $d_0 = \SI{0.68}{cm}$ and one with $d_0 = \SI{2.3}{cm}$, are shown in \Cref{fig:bubble_release_small} and \Cref{fig:bubble_release_big}, respectively. These correspond to non-dimensional sizes of $d_0/d_\mathrm{H} = 2.1$ and $7.0$, $d_0/L_\mathrm{int} = 0.46$ and $1.5$, and \dan{$d_0/l_\mathrm{cap} = 2.51$ and $8.28$}. As a reference, the constant values taken for $L_\mathrm{int}$ and $d_\mathrm{H}$ and the initial size of the cavity $d_0$ are denoted in the top-left corner of the first image. In both cases, the hemispherical cup which had constrained the bubble is visible at early times as it is rotated away.
In the disintegration of the smaller cavity, with $d_0/d_\mathrm{H} = 2.1$ (shown in \Cref{fig:bubble_release_small}), the bubble emerges from the cup with a moderate deformation caused by buoyancy and the surrounding turbulence. Within approximately one integral-scale turn-over time, the bubble becomes more elongated and breaks into two bubbles: one near the parent bubble in size, and the other slightly smaller than the Hinze scale. These two bubbles persist without breaking for at least $\sim 2$ more integral-scale turn-over times, at which point the smaller of the two bubbles is advected out of the field of view by the downwards mean flow.
The deformation to the larger cavity shown in \Cref{fig:bubble_release_big} is more severe, leading to a more complex sequence of events during its disintegration. Upon emerging from the rotating cup, the cavity is flattened due to buoyancy (as \dan{$d_0 / l_\mathrm{cap} = 8.28$} for this case), and turbulent deformations to the cavity shape on the order of the cavity size itself quickly develop. By $t/T_\mathrm{int} \approx 0.4$, the cavity consists of two lobes (each of which is significantly deformed), separated by a shrinking neck of air. By the time the neck has pinched apart ($t/T_\mathrm{int} \approx 0.7$), the two larger child bubbles stemming from the two lobes are accompanied by much smaller child bubbles (some with $d \ll d_\mathrm{H}$ and $d \ll d_0$) which were formed during the collapse of the air neck. The larger child bubbles themselves go on to further break apart in a chain of break-ups, some of which similarly involve small bubble production via the collapse of elongated air necks. Many small bubbles which are more than an order of magnitude smaller than the initial one are eventually visible. At much later times, the largest bubbles have risen out of the field of view, and the total air volume imaged is significantly decreased.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{figures/bubble_release_images_bubblerelease_turbulent_filldur18s_flowrate20sccmair_SDS0uM_fps500_backlight_zoomIn_v7.pdf}
\caption{The disintegration of an air cavity with $d_0/d_\mathrm{H} \approx 7.0$.}
\label{fig:bubble_release_big}
\end{figure}
\subsection{Transient evolution of the bubble size distributions}
The experiment was carried out with six values of $d_0 / d_\mathrm{H}$ between 2.1 and 8.3, with 10--20 runs taken at each condition, as given in \Cref{tab:cavity_conditions}. \dan{Note that the largest cavities exceed the integral length scale in size, so the typical turbulent stress at their spatial scale will be saturated relative to that predicted by the Kolmogorov scaling employed in the definition of the Hinze scale.} \Cref{fig:bubblerelease_transient_distributions_SDS0} shows the transient evolution of $\mathcal{N}(d/d_\mathrm{H}) = N(d) d_\mathrm{H}$ for each condition (ensemble-averaging the 10--20 runs). At early times, the distributions for all $d_0/d_\mathrm{H}$ exhibit a peak at $d_0/d_\mathrm{H}$, denoted by the vertical dotted lines. For the two smallest cavities (given in \Cref{fig:bubblerelease_transient_distributions_SDS0} panels (a-b)), for which no break-up was observed during many runs, only a small number of bubbles are formed over time, and the size distribution near the injection scale does not decrease appreciably with time.
Over time, as the larger cavities (given in \Cref{fig:bubblerelease_transient_distributions_SDS0} panels (c-f)) begin to disintegrate, the size distribution for $d<d_0$ begins to be built up. Even among these larger cavities which produce a considerable number of sub-Hinze bubbles, the increase in the number of sub-Hinze bubbles is much more pronounced for the cavities that are initially larger (evidenced by comparing curves for $d_0/d_\mathrm{H}=4.15$ and $d_0/d_\mathrm{H} = 8.30$, for example). This suggests that there is a large separation of scales between the sub-Hinze bubbles and the parent bubbles responsible for their creation; more simply, large bubbles are needed for the production of small bubbles. For the largest cavities, the size distribution for sub-Hinze bubbles eventually follows an $\mathcal{N}(d/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{\alpha_d}$ scaling, with $\alpha_d = -3/2$, sketched on all plots as the dashed line for reference. The final curves shown (for $t/T_\mathrm{int}=4$) might constitute an under-estimation for the bubble size distribution for smaller bubbles, since some of the bubbles which may break have risen out of the field of view by this time.
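For readers reproducing this analysis, the normalization convention for $\mathcal{N}(d/d_\mathrm{H}) = N(d)\, d_\mathrm{H}$ can be sketched in a few lines of Python. This is an illustrative sketch using synthetic diameters and an assumed Hinze scale, not the analysis code used for the figures:

```python
import numpy as np

def dimensionless_size_distribution(diameters, d_hinze, bins=20):
    """Estimate N(d) from a histogram, then form the dimensionless
    distribution script-N(d/d_H) = N(d) * d_H, which integrates
    (over d/d_H) to the number of bubbles observed."""
    counts, edges = np.histogram(diameters, bins=bins)
    N_d = counts / np.diff(edges)               # N(d): number density per unit d
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers / d_hinze, N_d * d_hinze, np.diff(edges) / d_hinze

# synthetic example: 500 lognormal "diameters" (arbitrary units)
rng = np.random.default_rng(0)
d = rng.lognormal(mean=-1.0, sigma=0.5, size=500)
x, script_N, dx = dimensionless_size_distribution(d, d_hinze=0.33)
total = float(np.sum(script_N * dx))            # recovers the bubble count, 500
```

Integrating the dimensionless distribution over $d/d_\mathrm{H}$ recovers the bubble count, consistent with the convention stated in the figure captions.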
\begin{figure}
\centering
\begin{overpic}[width=1\linewidth]{figures/bubblerelease_transient_distributions_SDS0.pdf}
\put(3,41){(a)}
\put(35,41){(b)}
\put(66.5,41){(c)}
\put(3,22){(d)}
\put(35,22){(e)}
\put(66.5,22){(f)}
\end{overpic}
\caption{Bubble size distributions during the disintegration of cavities with $d_0/d_\mathrm{H}$ between 2.1 and 8.3 and times up to $4 T_\mathrm{int}$ after the cavity is released into the turbulence. The size of the parent bubble is denoted by the dashed vertical line. Each distribution integrates to the average number of bubbles observed at that condition at that time. The eventual sub-Hinze power-law scaling exponent approaches $-3/2$, shown by the \dan{dashed black} line, as $d_0/d_\mathrm{H}$ is increased.}
\label{fig:bubblerelease_transient_distributions_SDS0}
\end{figure}
\changed{Now, we consider the size distributions averaged between $2 T_\mathrm{int}$ and $4 T_\mathrm{int}$. During these times, a significant number of break-ups have occurred (for larger $d_0/d_\mathrm{H}$), but a substantial portion of the bubbles have not yet left the field of view, and the bubble size distribution approaches a constant shape.} \Cref{fig:bubblerelease_final_size_distributions} (a) compares the size distributions over these times for each value of $d_0/d_\mathrm{H}$. For larger air cavities, the magnitude of $\mathcal{N}(d/d_\mathrm{H})$ is increased, and the sub-Hinze power-law distribution steepens. \changed{The same data is shown in panel (b), normalized by the cavity diameter $d_0$ instead of the Hinze scale. Larger cavity sizes yield a $\propto d^{-3/2}$ scaling for all bubble sizes.}
\begin{figure}
\centering
\begin{overpic}[width=0.9\linewidth]{figures/bubblerelease_final_size_distributions.pdf}
\put(10.5,46.5){(a)}
\put(72,46.5){(b)}
\put(72,21.5){(c)}
\end{overpic}
\caption{Time-averaged bubble size distributions. (a) The \changed{dimensionless} bubble size distribution averaged between $t/T_\mathrm{int}=2$ and $t/T_\mathrm{int}=4$ for cases with varying $d_0/d_\mathrm{H}$, denoted by the position of the colored notches along the bottom axis. Distributions for $d_0/d_\mathrm{H}<2$ are smoothed slightly to account for the small number of observations. \changed{(b) The bubble size distributions based on the diameter normalized by the initial cavity diameter $d_0$.} (c) The exponent $\alpha_d$ of a power-law fit to the sub-Hinze portion of each distribution, $\mathcal{N}(d/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{\alpha_d}$ for $d/d_\mathrm{H} < 1$, indicating that as $d_0/d_\mathrm{H}$ is increased, the sub-Hinze spectrum approaches a $\mathcal{N}(d/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{-3/2}$ scaling.}
\label{fig:bubblerelease_final_size_distributions}
\end{figure}
\Cref{fig:bubblerelease_final_size_distributions} (c) shows the power-law exponent fit to the sub-Hinze portion of the distributions in panel (a), $\mathcal{N}(d/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{\alpha_d}$ for $d/d_\mathrm{H} < 1$, for each case. As $d_0/d_\mathrm{H}$ is increased, \dan{an $\alpha_d = -3/2$} scaling is approached, indicated by the dashed black line. The size distribution is affected not only by the break-up physics, but is also steepened by the rising dynamics of the bubbles: as small bubbles rise more slowly than larger ones, they tend to linger in the field of view for longer, increasing their concentrations \citep{Garrett2000}.
Integrating the transient size distributions over the bubble diameter, the temporal evolution of the number of resolved bubbles \changed{$n$ (with the minimum resolvable size $d_\mathrm{min}=0.12 d_\mathrm{H}$)} is shown in \Cref{fig:num_vs_time_and_d0dH} (a). The gray shaded region denotes the times over which the bubble size distributions are averaged in \Cref{fig:bubblerelease_final_size_distributions} and \Cref{fig:num_vs_time_and_d0dH} (b).
\begin{figure}
\centering
\begin{overpic}[width=1\linewidth]{figures/num_vs_time_and_d0dH.pdf}
\put(3,42){(a)}
\put(52,42){(b)}
\end{overpic}
\caption{Evolution of the number of resolved bubbles (limited to $d/d_\mathrm{H} > 0.12$) with time and the initial cavity size. (a) Temporal evolution of the average number of all bubbles measured experimentally for different initial cavity sizes $d_0/d_\mathrm{H}$. Circles give the values employed in \cref{sec:model_m}. (b) The total number of bubbles (black), number of sub-Hinze bubbles (orange), and number of super-Hinze bubbles (purple) averaged between $2 T_\mathrm{int} \leq t < 4 T_\mathrm{int}$ (the region shaded in gray in (a)) as a function of the initial cavity size.}
\label{fig:num_vs_time_and_d0dH}
\end{figure}
\Cref{fig:num_vs_time_and_d0dH} (b) shows the number of resolved sub-Hinze, super-Hinze, and total bubbles, averaged over the time period considered, with the minimum resolved bubble size $d_\mathrm{min} \approx 0.12 d_\mathrm{H}$. Again, we see an increase in the number of bubbles formed with the initial size of the cavity. Further, the number of sub-Hinze bubbles increases with the parent bubble size more rapidly than the number of super-Hinze bubbles does, making sub-Hinze bubbles constitute a larger portion of the bubble size spectrum for larger $d_0/d_\mathrm{H}$. This is remarkable, since as $d_0/d_\mathrm{H}$ is increased, the span of bubble sizes constituting resolvable sub-Hinze bubbles ($d_\mathrm{min} < d < d_\mathrm{H}$) remains fixed, while the span of potential super-Hinze bubble sizes ($d_\mathrm{H} < d < d_0$) increases.
Taken together, \Cref{fig:bubblerelease_final_size_distributions,fig:num_vs_time_and_d0dH} are congruent with the capillary pinching mechanisms for sub-Hinze bubble production put forward by \cite{Riviere2021cap}. Our figures suggest that their formation relies on the break-up of cavities that are significantly larger than the Hinze scale: only larger values of $d_0/d_\mathrm{H}$ yield the $\mathcal{N}(d/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{-3/2}$ power-law scaling in the sub-Hinze bubble size distribution, and the dependence on $d_0$ of the number of sub-Hinze bubbles produced (shown in \Cref{fig:num_vs_time_and_d0dH} (b)) is steeper than that of the number of super-Hinze bubbles produced. \dan{We propose in the next section an explanation of the mechanisms leading to this dependence.}
\subsection{Capillary splitting of ligaments prepared by the turbulence produces small bubbles}
\label{sec:concurrent_mechanisms}
Visual observations of the large air cavities disintegrating provide clues into the mechanism responsible for the production of sub-Hinze bubbles: child bubbles much smaller than the Hinze scale are seen to originate from a Rayleigh-Plateau-like instability that occurs during the pinching apart of elongated fluid ligaments prepared by the turbulence. However, the turbulence is only able to deform bubbles that are large enough with respect to the Hinze scale that such ligaments might be created, since surface tension is effective at limiting the severity of deformations to smaller bubbles. These experimental observations parallel \dan{a} recent interpretation of DNSs of bubble break-up \citep{Riviere2021cap}.
Illustrative examples of bubble break-up are given in \Cref{fig:explanation_figure_small_fig}, which shows the typical break-ups of bubbles of two sizes: one is near the Hinze scale in size (a), and another is seven times larger than the Hinze scale (b). The smaller bubble, with $d_0/d_\mathrm{H} = 2.1$, is initially deformed into two comparably-sized lobes, and the neck separating the two splits at a single point to form two child bubbles, each of a scale similar to the parent's. Here, the parent bubble is small enough that surface tension is able to prevent significant deformation during the break-up.
\begin{figure}
\centering
\begin{overpic}[width=0.7\linewidth]{figures/explanation_figure_small_fig_annotated.pdf}
\put(-5,75){(a)}
\put(40,75){(b)}
\end{overpic}
\caption{Break-up of a bubble initially close to the Hinze scale in size (a, with $d_0/d_\mathrm{H} = 2.1$) and initially much bigger than the Hinze scale (b, with $d_0/d_\mathrm{H} = 7.0$). The deformation to the smaller bubble produces two comparably-sized lobes, which split apart to form two comparably-sized child bubbles. The deformation to the larger bubble also produces two comparably-sized lobes, but these are separated by a much more elongated filament of air. The unstable collapse of this filament produces the small "capillary" child bubble between the two larger ones. \changed{The small bubbles in the lower left of the image were produced in previous break-ups.}}
\label{fig:explanation_figure_small_fig}
\end{figure}
The larger bubble, with $d_0/d_\mathrm{H}=7.0$, is similarly deformed by the turbulence into two comparably-sized lobes prior to pinch-off. However, the filament of air separating the two just prior to pinch-off has become significantly more elongated than the neck in the break-up of the smaller bubble. This elongation opens the door to capillary instabilities along the filament during its collapse: in the instance shown in \Cref{fig:explanation_figure_small_fig} (b), the filament pinches apart at two separate points, leaving a small child bubble (with $d \ll d_\mathrm{H}$) between the two lobes.
\dan{The two examples of break-up discussed illustrate} two mechanisms present in the break-up of bubbles by turbulence. The first is the deformation of the parent bubble by a turbulent structure, likely on the spatial scale of the parent bubble itself. This brings the bubble to an unstable state consisting of two lobes (which will become what we call the "inertial" child bubbles) separated by a neck of air, which begins to pinch apart under capillarity. When the deformation to the bubble is severe enough, this ligament can take on an elongated, deformed shape. Its pinching can become unstable under a Rayleigh-Plateau-like mechanism, leading to the formation of small "capillary" bubbles.
\changed{\Cref{fig:instability_cases} shows an additional five instances of deformed ligaments undergoing a capillary instability to produce sub-Hinze bubbles. Many cases, especially those involving large parent bubbles, do not solely involve one ligament separated by two lobes; turbulent deformations cause the bubble shapes to be more irregular. However, in all instances, very small bubbles are produced as an air ligament involved in the turbulent deformation collapses unstably.}
\begin{figure}
\centering
\begin{overpic}[width=1\linewidth]{figures/instability_cases.pdf}
\end{overpic}
\caption{\changed{Five cases of sub-Hinze bubble production by the unstable collapse of a deformed ligament. Each row shows four snapshots in time, spaced 10, 4, 2, and \SI{0}{ms} before the time at which the sub-Hinze bubbles are first visible. The field of view in the final two columns is given by the blue square in the second column.}}
\label{fig:instability_cases}
\end{figure}
This \changed{description} is clearly a simplified understanding of the bubble pinch-off process, as it does not capture the redistribution of air due to a capillary pressure difference between lobes that may be responsible for the formation of small bubbles \citep{Andersson2006}, nor does it describe the "tearing off" of very small bubbles that we \changed{observe occurring to large parent bubbles at rates exceeding $1/T_\mathrm{turb}(d_0)$}. However, the framework serves as a bridge between the inertial deformations to a bubble by the turbulence and the later-time collapse dynamics instigated by capillarity. This understanding mirrors the description of bubble pinch-off in turbulence given in \cite{Ruth2019}, in which we showed that turbulence \dan{sets} an "initial" deformed bubble shape before the collapse dynamics overtake the turbulent dynamics. Once the inertial collapse of the neck becomes fast enough (equivalently, once the neck becomes small enough), however, the turbulence effectively "freezes" in place relative to the accelerating collapse dynamics. The end result is that the final stage of the pinching process---in this case, the production of small bubbles through the capillary instability of gas ligaments---is affected by the turbulence only insofar as the turbulence sets the "initial condition" on which the remainder of the process evolves under capillary and, eventually, inertial dynamics.
\FloatBarrier
\section{Individual break-up event dynamics}
\label{sec:dynamical}
So far, we have considered the transient size distributions $\mathcal{N}(d/d_\mathrm{H})$ that result from air cavities with $d_0 > d_\mathrm{H}$ being introduced to turbulence. In this section, we focus on individual break-up events, each tracked in three dimensions as described in \Cref{sec:exp_dynamical}; these events are the building blocks for the disintegration of larger cavities.
\dan{We will characterize the break-up events over their typical time scale, which is given by the eddy turn-over time at the parent bubble's scale, $T_\mathrm{turb}(d_0) = \epsilon^{-1/3} d_0^{2/3}$, following discussions from \citet{Risso1998,Martinez-Bazan1999a,Riviere2021jfm}.}
\subsection{Qualitative discussion of the break-up sequences}
One break-up producing $m=2$ child bubbles is shown in \Cref{fig:dynamical_intro_figure_binary}, and one producing $m=4$ bubbles is shown in \Cref{fig:dynamical_intro_figure_multiple}. In each, images throughout the break-up sequence are shown in (a-c), and the three-dimensional trajectories taken by the bubbles involved are shown in (d). At each point, the bubble's size is computed relative to the local Hinze scale, and $d/d_\mathrm{H}$ is encoded in the trajectory color. The spatial scale is given in terms of the integral length scale at the break-up location, $L_{\mathrm{int},0}$, showing that the bubble trajectories are resolved over multiple integral length scales. Panel (e) shows the dimensional diameters of the bubbles involved over time.
\begin{figure}
\centering
\begin{overpic}[width=0.9\linewidth]{figures/dynamical_intro_figure_binary.pdf}
\put(1,65){(a)}
\put(34,65){(b)}
\put(67,65){(c)}
\put(1,30){(d)}
\put(47,45){(e)}
\end{overpic}
\caption{One dynamically-tracked bubble break-up involving the production of $m=2$ child bubbles. (a-c) Images recorded by one of the two high-speed cameras throughout the sequence. (d) The trajectories taken by the bubbles involved, with their size relative to the Hinze scale encoded in the color. The green circle marks the first observation of the parent bubble, and the red circles mark the final observation of the child bubbles. The side length of the square shown is given in terms of the integral length scale at the break-up location. (e) The "family tree" for the single break-up, giving diameters of the bubbles present at each point in time. Fainter lines give the instantaneously-measured diameters, and straight lines give the median for each bubble, which is the quantity we consider in our analysis.}
\label{fig:dynamical_intro_figure_binary}
\end{figure}
\begin{figure}
\centering
\begin{overpic}[width=0.9\linewidth]{figures/dynamical_intro_figure_multiple.pdf}
\put(1,65){(a)}
\put(34,65){(b)}
\put(67,65){(c)}
\put(1,30){(d)}
\put(47,45){(e)}
\end{overpic}
\caption{One dynamically-tracked bubble break-up involving the production of $m=4$ child bubbles. (a-c) Images recorded by one of the two high-speed cameras throughout the sequence. (d) The trajectories taken by the bubbles involved, with their size relative to the Hinze scale encoded in the color. The green circle marks the first observation of the parent bubble, and the red circles mark the final observation of the child bubbles. The side length of the square shown is given in terms of the integral length scale at the break-up location. (e) The "family tree" for the single break-up, giving diameters of the bubbles present at each point in time. Fainter lines give the instantaneously-measured diameters, and straight lines give the median for each bubble, which is the quantity we consider in our analysis.}
\label{fig:dynamical_intro_figure_multiple}
\end{figure}
In the case of binary break-up, given in \Cref{fig:dynamical_intro_figure_binary}, the parent bubble enters the imaged volume from the foreground, and quickly encounters a region of more intense turbulence, where $d_0/d_\mathrm{H}$ increases. Eventually, having become deformed, the bubble pinches apart into two child bubbles, each of which is comparable in size to the parent. Both child bubbles persist in the field of view for at least a tenth of a second ($\sim$ an integral-scale turn-over time) without breaking.
In the more complex break-up shown in \Cref{fig:dynamical_intro_figure_multiple}, the parent bubble similarly traverses from a region of less intense turbulence to more intense turbulence, increasing the value of $d_0/d_\mathrm{H}$. Eventually, at $t=\SI{0.170}{s}$ (shown in panel (a)), the bubble becomes elongated in the vertical direction, and in a sequence of two rapid splitting events produces the three child bubbles that are visible at $t=\SI{0.192}{s}$ (shown in panel (b)). One is still larger than $d_\mathrm{H}$, one is of the order of $d_\mathrm{H}$, and the third, left between the two, is smaller than $d_\mathrm{H}$. The bubble of the order of the Hinze scale is still significantly deformed, the capillary dynamics involved with the break-up not yet having relaxed. A short time later, by $t=\SI{0.201}{s}$ (shown in panel (c)), an additional bubble has split from it, leaving a total of four child bubbles.
\subsection{Identification of break-up events}
\label{sec:dynamical_identification}
We identify bubble break-ups like the ones shown in \Cref{fig:dynamical_intro_figure_binary,fig:dynamical_intro_figure_multiple} as being sequences of bubble splitting events not exceeding the eddy turn-over time at the parent bubble scale, $T_\mathrm{turb}(d_0) = \epsilon^{-1/3} d_0^{2/3}$. To enforce this temporal constraint, we first construct a "family tree" of all splitting events recorded in one run. Then, if any bubble is present at a time $T_\mathrm{turb}(d_0) $ beyond the initial detected break-up of the first bubble (with diameter $d_0$), we truncate the family tree at that bubble, and start a new family tree with the same bubble (if it later breaks apart). After doing so, we store the sizes of the parent bubble and child bubbles, as well as the turbulence characteristics spatially interpolated at the initial break-up location. To remove spurious break-ups, we discard those for which the sum of the calculated volumes of the $m$ child bubbles is less than 50\% of, or more than 200\% of, the calculated volume of the parent bubble.
In total, we captured 162 bubble break-ups with this dynamical tracking approach that satisfy the volume-conservation criterion. \Cref{fig:dynamical_dim_nondim_sizedists} (a) shows the distributions of the break-up conditions (the Hinze scale at the break-up location and the parent bubble size) for the aggregated dataset, which we later break down by the parent bubble's size relative to the Hinze scale. The parent bubble diameter $d_0$ is typically slightly larger than the Hinze scale, as the distribution of $d_0$ \changed{(the green line)} is located just to the \changed{right} of that of $d_\mathrm{H}$ (the \changed{dashed red} line). Thus, the break-ups we capture in this experiment have $d_0/d_\mathrm{H} \approx 0.4 \text{--} 3.7$. The black curve shows the distribution of the sizes of child bubbles formed during break-ups, integrating to the average number of bubbles formed per break-up event.
\changed{To gauge the effect of inhomogeneity in the generated turbulence, we consider how the local turbulence intensity experienced by the bubble (in a Lagrangian sense) varies over timescales relevant to the bubble's break-up. Ideally, a bubble would not be advected through statistically inhomogeneous turbulence during the course of its break-up. Denoting the Hinze scale at the bubble's location at time $t$ as $d_\mathrm{H}(t)$, \Cref{fig:dynamical_dim_nondim_sizedists} (b) shows the Hinze scale at the break-up location $d_\mathrm{H}(t_0)$ as a function of the Hinze scale at the bubble's location one bubble-scale eddy turn-over time prior, $d_\mathrm{H}(t_0 - T_\mathrm{turb}(d_0))$ for the 52\% of observed break-ups in which the bubble is inside the volume resolved by PIV (in which we are able to compute $d_\mathrm{H}$) at this point in time. There is little difference between $d_\mathrm{H}(t_0)$ and $d_\mathrm{H}(t_0 - T_\mathrm{turb}(d_0))$, suggesting that the local turbulence characteristics experienced by the bubble do not change appreciably during the break-up, and that the turbulence is homogeneous over scales relevant to the break-up.}
\begin{figure}
\centering
\begin{overpic}[width=1\linewidth]{figures/dynamical_dim_nondim_sizedists.pdf}
\put(9,40){(a)}
\put(56.5,40){(b)}
\end{overpic}
\caption{Results on individual bubble break-ups. (a) Distributions of the Hinze scale at the break-up location (red), parent bubble sizes (green), and of the child bubble sizes (black). \changed{(b) The Hinze scale at the bubble's break-up position (vertical axis) as a function of the Hinze scale at the bubble's position one bubble-scale turn-over time prior to break-up (horizontal axis), for the 52\% of cases in which the bubble was in the volume resolved with PIV at this time.}}
\label{fig:dynamical_dim_nondim_sizedists}
\end{figure}
\begin{figure}
\centering
\begin{overpic}[width=0.889\linewidth]{figures/dynamic_nondim_childsizedists.pdf}
\put(4,47){(a)}
\put(67,38){(b)}
\end{overpic}
\caption{Dimensionless bubble break-up child size distributions for various approximate values of $d_0/d_\mathrm{H}$. The value given for each curve (which is denoted by the notch on the horizontal axis) corresponds to the mean value of $d_0/d_\mathrm{H}$ for that curve. (a) The distributions of child bubble diameter normalized by the Hinze scale. (b) The \changed{volumetric child size distribution}, with child bubble volumes normalized by the parent bubble volume, which is approximated as the sum of the resolved child bubble volumes.}
\label{fig:dynamic_nondim_childsizedists}
\end{figure}
\subsection{Child size distribution}
Now, we compute the dimensionless bubble child \changed{size} distributions conditioned on the approximate dimensionless parent bubble size, $\mathcal{P}_d(d/d_\mathrm{H};d_0/d_\mathrm{H})$. The data is averaged over three ranges of $d_0/d_\mathrm{H}$ ($[0.3, 1.55]$, $[1.55, 1.93]$, and $[1.93, 3.70]$), and results are shown in \changed{\Cref{fig:dynamic_nondim_childsizedists}} (a). As $d_0/d_\mathrm{H}$ is increased, the dependence of $\mathcal{P}_d$ on $d/d_\mathrm{H}$ becomes steeper. The dashed line gives the $\mathcal{P}_d(d/d_\mathrm{H};d_0/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{-3/2}$ scaling, which is approached for large $d_0/d_\mathrm{H}$ due to the production of small bubbles by capillary instabilities \citep{Riviere2021cap}. Qualitatively, the child size distribution for smaller parent bubbles is flatter near the Hinze scale, while that for larger parent bubbles increases more rapidly with decreasing bubble size as a power-law relationship.
\dan{Note that the child size distribution is defined so that it integrates to the average number of child bubbles formed.}
This representation of the child size distribution masks the large number of bubbles formed very close to the parent bubble size. To capture these small bubbles, we also compute the \changed{volumetric child size distribution}, normalized by the volume of the parent bubble $V_0$. Since the determination of the volumes of larger bubbles is difficult given their deformations, we approximate the parent bubble volume as the sum of the volumes of the child bubbles, and consider $(d/d_0)^3 \approx d^3 / \sum_{i=1}^m d_i^3$ \citep{Vejrazka2018}. The distribution of these dimensionless volumes is shown in \Cref{fig:dynamic_nondim_childsizedists} (b), exhibiting a $\cup$ shape that is not strongly dependent on $d_0/d_\mathrm{H}$ (though we again see increased small bubble production with larger $d_0/d_\mathrm{H}$). The large values of this distribution near 1 suggest that in many break-up events, small bubbles are "torn off" of the parent bubble, without inertial deformation producing multiple child bubbles of sizes comparable to that of the parent. We note that the resolution of our experiment (in which the smallest bubble we can detect is approximately \SI{200}{\micro m} in diameter) limits the number of bubbles detected.
\subsection{Small bubble production without significant inertial deformation}
\label{sec:end_pinching}
In many of the break-ups we observe in the large cavity disintegration and individual break-up experiments, small bubbles were seen to be "torn off" from a parent bubble, without an appreciable large-scale deformation to the parent bubble. These events are reminiscent of tip-streaming \citep{Montanero2020}. This phenomenon is evidenced by the right side of the $\cup$-shaped child size distributions shown in \Cref{fig:dynamic_nondim_childsizedists} (b), as a child bubble that is nearly the size of the parent is the signature of such break-ups. To understand these events, we present in \Cref{fig:dynamic_positions} a qualitative discussion of the dynamics of individual splitting events. For each splitting event, we compare the velocity of the parent bubble at break-up $\vec{v}_\mathrm{parent}$ (denoted by the gray arrow in panel (a)) to the displacement between the parent bubble's final position $\vec{x}_\mathrm{parent}$ (the gray circle) and the initial positions at which the child bubbles are detected $\vec{x}_\mathrm{child}$ (the black circles). The child bubble's initial detected position ahead of or behind the parent bubble, $\kappa = \vec{v}_\mathrm{parent} \cdot (\vec{x}_\mathrm{child} - \vec{x}_\mathrm{parent}) / (u' L_\mathrm{int})$, normalized by turbulence quantities, is then computed, and is plotted against the child bubble's size relative to the parent size in panel (b). The color of each marker denotes the size of the splitting event's parent bubble relative to the Hinze scale. The black line shows the expected value of $\kappa$ given the normalized child bubble size. Smaller child bubbles (with $d_\mathrm{child}/d_\mathrm{parent} < 0.6$, \dan{below which the mean value of $\kappa$ becomes negative}) tend to be left in the wake of the parent bubble ($\kappa<0$), while larger child bubbles tend to be produced ahead of the parent bubble $(\kappa > 0)$.
\begin{figure}
\centering
\begin{overpic}[width=0.9\linewidth]{figures/dynamic_positions.pdf}
\put(1,34){(a)}
\put(54,34){(b)}
\end{overpic}
\caption{Statistics of the positions of bubbles after splitting events. (a) A sketch of a splitting event involving small bubble production, including the parent bubble velocity at break-up and the final and initial positions, respectively, of the parent and child bubbles. (b) The initial child bubble position relative to the parent bubble's motion, $\kappa$, for each splitting event (circles), as well as the mean value conditioned on the normalized child bubble size (black line). $\kappa<0$ denotes bubble production behind the parent bubble, while $\kappa>0$ denotes bubble production ahead of the parent bubble.}
\label{fig:dynamic_positions}
\end{figure}
While the conceptual picture for break-up discussed in \Cref{sec:concurrent_mechanisms} describes the role of capillarity during break-ups involving large-scale deformations, it is likely that break-ups solely involving small bubble production are also regulated by capillarity: in these cases, a turbulent motion smaller than the parent bubble may succeed in producing a ligament which extends off of one side of the parent, and this ligament may pinch apart into many small bubbles in a capillary instability as it is retracted back into the bulk of the parent bubble. Specifically, \Cref{fig:dynamic_positions} suggests that the bulk of a bubble may often be swept forward by a turbulent eddy, and the trailing ligament may become unstable as it "catches up" with the rest of the parent bubble. Similar to the framework presented in \Cref{sec:concurrent_mechanisms}, the process is initiated by a turbulent deformation to the parent, and ends with the capillary instability of a ligament involved in the deformation.
\FloatBarrier
\section{A model for bubble break-up}
\label{sec:model}
\subsection{Physical ideas}
\label{sec:model_timescales}
The experiments presented in \Cref{sec:air_cavity_results,sec:dynamical}, taken together with the existing literature, point to \changed{three important time scales that must be considered in developing a population balance model: the inverse of the break-up frequency, the break-up duration, and the capillary pinching time.}
\changed{The longest of these is the typical duration until a break-up occurs---that is, the inverse of the break-up frequency, $1/\omega(d_0)$. This time scale will control how many break-up events will occur over a given time and will be a function of $d_0/d_\mathrm{H}$. The second timescale is that over which a break-up typically occurs, or the event duration (i.e., lasting from the start of the deformation until the child bubbles have all been formed), and will also be a function of $d_0/d_\mathrm{H}$. The break-ups taking the longest time will be those instigated by the largest eddies capable of causing break-up, which are taken to be those at the parent bubble's scale \citep{Luo1996}. Thus, an upper bound and typical scale of the break-up duration is taken to be the eddy turn-over time at the parent bubble's scale, $T_\mathrm{turb}(d_0) = \epsilon^{-1/3} d_0^{2/3}$, in agreement with experimental and numerical observations of the time over which bubbles are deformed prior to break-up \citep{Risso1998,Martinez-Bazan1999a,Riviere2021jfm}.
\dan{The final timescale we consider is that of the capillary instabilities of gas ligaments that produce a small child bubble of size $d$, which will occur over the capillary timescale of that child bubble, $T_\mathrm{cap}(d) = (\rho/\gamma)^{1/2} d^{3/2} / (2 \sqrt{3})$ \citep{Riviere2021cap}.}}
\dan{From these three relevant time scales, we define three types of events. At the shortest time, we define the individual binary \textit{splitting events}. For the production of bubbles with $d \ll d_\mathrm{H}$, we have $T_\mathrm{cap}(d) \ll T_\mathrm{turb}(d_0)$. At the eddy turn-over time, we define a \textit{break-up} as being a sequence composed of all the splitting events occurring in a time bounded by \changed{$\Delta T_\mathrm{break-up} = T_\mathrm{turb}(d_0)$}, which permits the production of more than two bubbles in a single event (similar to the definition used for drop break-ups by \cite{Solsvik2016}).} Finally, following the nomenclature from \cite{Hinze1955}, a \textit{disintegration} is a longer-duration process involving an arbitrary number of break-ups.
These timescales are sketched in \Cref{fig:family_tree_timescales}, which illustrates two break-up events that stem from a bubble of diameter $d_\mathrm{A}$ encountering turbulence. The deformation to the parent bubble that instigates the break-up is assumed to happen within a time $T_\mathrm{turb}(d_\mathrm{A})$ before the first bubble splits from the parent. Then, within an additional time bounded by $T_\mathrm{turb}(d_\mathrm{A})$, subsequent splitting events occur due to capillary instabilities arising from the deformation. One such instability produces a bubble with diameter $d_\mathrm{C}$, and the time over which this instability develops is set by the capillary timescale at the smaller child bubble size, $T_\mathrm{cap}(d_\mathrm{C})$. Later on, one of the child bubbles produced in the first break-up, with diameter $d_\mathrm{B}$, itself breaks up.
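For concreteness, the two timescales in the sketch can be evaluated directly. The snippet below uses illustrative air-in-water values (not our experimental parameters) and confirms the ordering $T_\mathrm{cap}(d) \ll T_\mathrm{turb}(d_0)$ for a sub-Hinze child of a centimetric parent:

```python
import numpy as np

def turbulent_timescale(d, epsilon):
    """Eddy turn-over time at scale d: T_turb(d) = epsilon^(-1/3) d^(2/3)."""
    return epsilon ** (-1.0 / 3.0) * d ** (2.0 / 3.0)

def capillary_timescale(d, rho, gamma):
    """Capillary time: T_cap(d) = (rho/gamma)^(1/2) d^(3/2) / (2 sqrt(3))."""
    return np.sqrt(rho / gamma) * d ** 1.5 / (2.0 * np.sqrt(3.0))

# Illustrative values: a 1 cm parent producing a 1 mm child bubble in water
# at a dissipation rate of 0.5 W/kg.
T_turb_parent = turbulent_timescale(0.01, epsilon=0.5)             # ~0.06 s
T_cap_child = capillary_timescale(0.001, rho=1000.0, gamma=0.072)  # ~1 ms
```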
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/family_tree_timescales.eps}
\caption{Sketch of two bubble break-ups and the associated timescales, with $T_\mathrm{turb}(d) = \epsilon^{-1/3}d^{2/3}$ and $T_\mathrm{cap}(d) = (\rho/\gamma)^{1/2} d^{3/2} / (2 \sqrt{3})$. The gray vertical lines denote the times associated with each of the two break-ups. The shaded region to the left bounds the time over which the deformation to the parent bubble is assumed to occur (the turbulent timescale at the parent bubble size), and the region to the right of the line bounds the time over which the subsequent splitting events are assumed to occur (also taken to be the same turbulent timescale). During the subsequent splitting events, the capillary timescale at the size of the smaller child bubble sets the time over which the splitting event occurs \citep{Riviere2021cap}. The time between break-ups is set by the inverse of the break-up frequency $\omega$ of the bubble which is to break, which we address later in the paper.}
\label{fig:family_tree_timescales}
\end{figure}
Using these ideas, we propose a population balance model that integrates these physical elements and models the evolution of a bubble size distribution with a Boltzmann transport equation using the bubble size as an internal coordinate. \dan{The population balance model considers a break-up rate kernel $f$, constructed from child size distributions computed through a Monte Carlo approach (constrained by results from experiments and DNSs, informing the number of children and the shape of the distribution) and a parent bubble break-up frequency taken from the literature. With the kernel defined, we integrate the model in time to simulate the evolution of the size distribution during a cavity disintegration and compare to our experimental data.}
\subsection{Population balance modeling}
\label{sec:population_balance_modeling}
In a confined region of homogeneous turbulence, the transient evolution of the absolute dimensionless \changed{volumetric bubble size distribution} $\mathcal{N}_V(\nd{V}) = N_V(V) V_\mathrm{H} = \mathcal{N}(d/d_\mathrm{H}) / (3 (d/d_\mathrm{H})^2)$, where $N_V(V)$ is the absolute dimensional \changed{volumetric size distribution}, $\nd{V} = V/V_\mathrm{H}$, and $\nd{t} = t/T_\mathrm{int}$, is described by
\begin{equation}
\deriv{\mathcal{N}_V(\nd{V},\nd{t})}{\nd{t}} = - \frac{\mathcal{N}_V(\nd{V},\nd{t})}{\avg{m}(\nd{V})} \int_0^{\nd{V}} \tilde{f}(\tilde{\delta};\nd{V}) \mathrm{d}{\tilde{\delta}} + \int_{\nd{V}}^\infty \mathcal{N}_V (\tilde{\Delta},\nd{t}) \tilde{f}(\nd{V};\tilde{\Delta}) \mathrm{d} \tilde{\Delta}, \label{eq:population_balance_abs}
\end{equation}
where the first term on the RHS gives the rate of consumption of bubbles of \dan{volume} $\nd{V}$ due to their break-ups, and the second term on the RHS gives the rate of production of bubbles of \dan{volume} $\nd{V}$ due to the break-ups of larger bubbles \citep{Martinez-Bazan2010}. The break-up kernel $\nd{f}(\tilde{\delta};\tilde{\Delta}) = f(\delta;\Delta) V_\mathrm{H} T_\mathrm{int}$ can be decomposed into a parent break-up frequency and a \changed{volumetric child size distribution} with $\nd{f}(\tilde{\delta};\tilde{\Delta}) = \nd{\omega}(\tilde{\Delta}) \nd{p}(\tilde{\delta};\tilde{\Delta})$, with the dimensionless break-up frequency $\nd{\omega}(\tilde{\Delta}) = \omega(d) T_\mathrm{int}$ and dimensionless \changed{volumetric child size distribution} $\nd{p}(\tilde{\delta};\tilde{\Delta}) = p(\delta;\Delta) V_\mathrm{H}$. Thus, we can move $\nd{\omega}(\tilde{\Delta})$ outside the integral in the first term on the RHS and invoke $\int_0^{\nd{V}} \nd{p}(\tilde{\delta};\nd{V}) \mathrm{d} \tilde{\delta} = \avg{m}(\nd{V})$, \changed{with $\avg{m}(\nd{V})$ the average number of bubbles formed in the break-up of a bubble of volume $\nd{V}$,} to express the bubble consumption term as simply $-\mathcal{N}_V(\nd{V},\nd{t}) \nd{\omega}(\nd{V})$. \dan{Note that we define $\nd{p}(\tilde{\delta};\tilde{\Delta})$ so that it integrates over $\tilde{\delta}$ to the average number of child bubbles formed by the break-up of a bubble of volume $\tilde{\Delta}$.}
\subsection{Construction of the child size distributions}
\label{sec:Monte_Carlo}
We develop a parameterization of the break-up \changed{volumetric child size distribution} $\nd{p}(\tilde{\delta};\tilde{\Delta})$ that accounts for child bubbles produced by both the slower inertial mechanism (occurring over the eddy turnover time) and the faster capillary pinching mechanism (occurring over the capillary timescale of the small child bubbles) \dan{using a Monte Carlo approach. We consider a set of rules constrained by our experimental and numerical observations describing the outcomes of individual break-up events, then aggregate the outcomes of these events into child size distributions.}
\subsubsection{Statistics on the number of child bubbles formed}
\label{sec:model_m}
A key step in modeling each break-up is to constrain the \dan{distribution of the} number of bubbles formed in each event. \dan{To this end, we first} consider the data from our dynamical experiments given in \cref{sec:dynamical}. \changed{The average number of child bubbles larger than the experimentally-resolvable minimum size $d_\mathrm{min}/d_\mathrm{H} \approx 0.07$, $\avg{m}$, is shown in \Cref{fig:dynamical_m_data} (a). As the parent bubble increases in size, more child bubbles are typically produced. Given the steep dependence of $\nd{p}(\tilde{\delta};\tilde{\Delta})$ on $\tilde{\delta}$, we must qualify each observation of $m$ with the minimum resolved bubble size to better enable comparisons between different experiments. For compactness, however, we take all $m$ values to be the number of resolved bubbles larger than $0.07 d_\mathrm{H}$ unless otherwise noted.} Our experimental observations of $\avg{m}$, binned by $d_0/d_\mathrm{H}$, are shown in the black squares, and the gray region around them bounds $\pm$ one half of a standard deviation around the mean.
\begin{figure}
\centering
\begin{overpic}[width=1\linewidth]{figures/dynamical_m_data.pdf}
\put(1,42){(a)}
\put(51,42){(b)}
\end{overpic}
\caption{Experimental data on the number of child bubbles formed in each break-up. (a) The average number of resolved bubbles (with $d_\mathrm{min}/d_\mathrm{H} = 0.07$) formed in each break-up event $\avg{m}$ as a function of the dimensionless parent bubble size. The shaded region shows $\pm$ one half of a standard deviation around the mean for our dynamical data. Open circles give data from the disintegration of the three largest cavities, and closed circles give those data with an adjustment for the differing spatial resolution. Open stars give data from DNSs from \cite{Riviere2021jfm}, and closed stars give those data with the spatial resolution adjustment. The open gray markers give data from experiments reported by \cite{Vejrazka2018}. The thick orange line is the parameterization given in \cref{eq:avg_m_parameterization}. (b) The p.d.f. of \dan{$m'/\avg{m'}=(m-m_\mathrm{min})/(\avg{m} - m_\mathrm{min})$} for the experiments (squares) and DNSs (stars), along with the exponential fit employed in the Monte Carlo simulations. }
\label{fig:dynamical_m_data}
\end{figure}
Next, to consider the number of bubbles produced in the break-ups of larger bubbles, we turn to data from the disintegration of the three largest cavities presented in \cref{sec:air_cavity_results}. To apply the same definition of the break-up duration, and under the assumption that the initial splitting event happens nearly instantly after the bubble is released into the turbulence, we define $\avg{m}$ for this dataset as the number of \changed{resolved} bubbles present after one eddy turnover time $T_\mathrm{turb}(d_0) = \epsilon^{-1/3} d_0^{2/3}$ has elapsed after the cavity release, which are denoted by the open circles in \cref{fig:num_vs_time_and_d0dH} (a) and \cref{fig:dynamical_m_data} (a). \changed{We invoke \cref{eq:number_adjustment} to apply a slight adjustment to these numbers in order to extrapolate results to the finer spatial resolution of the tracked break-up experiment, as discussed in \Cref{sec:number_adjustment}. The number of bubbles in the extrapolated range constitutes about $30\%$ of the ones in the observable range. These adjusted values are shown as the filled-in light blue circles in \Cref{fig:dynamical_m_data} (a).}
We have \dan{additionally} re-analyzed the DNSs of bubbles breaking in homogeneous, isotropic turbulence presented in \cite{Riviere2021jfm,Riviere2021cap}, tracking the bubble break-up events in a similar way to what has been done on the experimental data in \Cref{sec:dynamical}. From these DNSs, we can compute the average number of bubbles formed per event as a function of the parent bubble size, included in panel (a) as the red star markers. Open stars give the original observations, for which $d_\mathrm{min}/d_\mathrm{H} = 0.25$, while the filled-in stars give the number adjusted for the spatial resolution. \dan{Note that while we consider $\mathrm{We}_\mathrm{c}=1$ for the experimental data, the value of $d_\mathrm{H}$ for the DNS is given by $\mathrm{We}_\mathrm{c}=3$ \citep{Riviere2021jfm}.}
Finally, as a comparison, the open gray markers show the (un-adjusted) number of bubbles detected experimentally in break-ups by \cite{Vejrazka2018}, in which break-ups varied in $\epsilon$ and $d_0$ (which is denoted by the marker style). As shown in their paper, once collapsed to $d_0/d_\mathrm{H}$, the dependence on the dimensional bubble size nearly disappears.
The four datasets (our two experiments, those from \cite{Vejrazka2018}, and DNSs from \cite{Riviere2021jfm}) produce a coherent picture regarding the number of bubbles formed. When $d_0/d_\mathrm{H}$ is small, break-ups tend to be binary, producing on average 2 child bubbles \dan{after $T_\mathrm{turb}(d_0)$}. As $d_0/d_\mathrm{H}$ increases, the number of child bubbles increases. Surface tension is less effective at preventing the severe deformation of larger bubbles, leading to more complex deformed bubble shapes that yield a greater number of child bubbles. The orange curve in panel (a) shows a fit to the data of the form
\begin{equation}
\avg{m} = m_\mathrm{min} + \frac{(d_0/d_\mathrm{H})^{b_2}}{b_1}, \label{eq:avg_m_parameterization}
\end{equation}
where $m_\mathrm{min} = 2$ and the fit constants are $b_1 = 4$ and $b_2 = 2.3$.
\Cref{fig:dynamical_m_data} (b) compiles experimental and DNS data on the distribution of the number of child bubbles produced for increasing $d_0/d_\mathrm{H}$. The p.d.f.s of $m'/\avg{m'}$ are well-described by an exponential function $e^{-m'/\avg{m'}}$, with $m'=m-m_\mathrm{min}$ and $\avg{m'} = \avg{m}-m_\mathrm{min}$, for both the experiments (shown as the squares) and DNS (shown as the stars). Thus, for any parent bubble size we can write the p.d.f. of $m'$ as \changed{an exponential} distribution,
\begin{equation}
r(m';d_0/d_\mathrm{H}) = \frac{\exp(-m' / \avg{m'})}{\avg{m'}}, \qquad m'>0, \label{eq:m_pdf}
\end{equation}
with $\avg{m'}+m_\mathrm{min}$ the mean number of children, a function of the parent bubble size.
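The two results above, \cref{eq:avg_m_parameterization,eq:m_pdf}, fully specify how to draw the number of child bubbles for a break-up. A minimal sampling sketch (variable names are illustrative) is:

```python
import numpy as np

M_MIN = 2          # minimum number of child bubbles per break-up
B1, B2 = 4.0, 2.3  # fit constants in <m> = m_min + (d0/dH)^b2 / b1

def mean_children(d0_over_dH):
    """Average number of resolved child bubbles per break-up."""
    return M_MIN + d0_over_dH ** B2 / B1

def sample_num_children(d0_over_dH, rng):
    """Draw m by sampling m' from the exponential p.d.f. r(m') and rounding."""
    mean_mprime = mean_children(d0_over_dH) - M_MIN
    m_prime = rng.exponential(mean_mprime)
    return max(M_MIN, int(round(M_MIN + m_prime)))

rng = np.random.default_rng(0)
samples = [sample_num_children(4.0, rng) for _ in range(20000)]
# The sample mean approaches <m>(4.0) = 2 + 4^2.3 / 4, about 8.1.
```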
\subsubsection{A stochastic model for each break-up}
\label{sec:montecarlo_individual_breakup}
The Monte Carlo approach involves running many iterations of a stochastic model and developing a statistical representation of the aggregated results. Each discrete simulation of a break-up mirrors the physical processes involved: the bubble, sketched in \Cref{fig:montecarlo_explanation_sketch_crop} (a), is first deformed into two lobes, shown in panel (b), and then some number of capillary bubbles are created as the neck separating the lobes collapses \dan{to create the} two inertial child bubbles.
For each iteration \dan{(i.e., one simulated breakup)} at a given value of $\tilde{\Delta}$, we first define the number of bubbles $m$ that will be produced by picking a value of $m'$ from the distribution $r(m';d_0/d_\mathrm{H})$ given by \cref{eq:m_pdf}, adding $m_\mathrm{min} = 2$, and rounding to the nearest integer. We pick $\tilde{\delta}_\mathrm{min} = 0.07^3$ in order to match the experimental dataset on which the parameterization of $\avg{m}$ is based. As we will show, once the p.d.f.s have been constructed for this given value of $\tilde{\delta}_\mathrm{min}$, it will be straightforward to extend them to lower or higher values of $\tilde{\delta}_\mathrm{min}$.
For cases in which $m\geq 3$, the capillary mechanism produces $m' = m-2$ bubbles, whose sizes follow a $\propto \tilde{\delta}^{\alpha}$ distribution with $\alpha=-7/6$ \changed{(corresponding to the \dan{$\mathcal{P}_d(d/d_\mathrm{H};d_0/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{-3/2}$} scaling described by \cite{Riviere2021cap}, as distributions in diameter are related to those in volume by $ \mathcal{P}_d(d/d_\mathrm{H};d_0/d_\mathrm{H}) = 3 (d/d_\mathrm{H})^2 \nd{p}(\tilde{\delta};\tilde{\Delta})$ \citep{Martinez-Bazan2010,Qi2020})}. As is sketched in \Cref{fig:montecarlo_explanation_sketch_crop} (c), the \dan{volume} $\tilde{\delta}_{\mathrm{cap},i}$ of capillary bubble $i$ is picked from a power-law distribution with slope $\alpha$, bounded between $\tilde{\delta}_\mathrm{min}$ and the maximum allowable \dan{volume} for a capillary bubble given the previously-produced bubbles, $\tilde{\delta}_{\mathrm{cap,max},i}$. For the production of the first capillary bubble, we set $\tilde{\delta}_{\mathrm{cap,max},1} = \tilde{\Delta}$ (noting that the steep slope of \dan{$\mathcal{P}_d(d/d_\mathrm{H};d_0/d_\mathrm{H})$} with respect to $d/d_\mathrm{H}$ makes the production of capillary bubbles this large uncommon). For the production of the remaining capillary bubbles, we set $\tilde{\delta}_{\mathrm{cap,max},i} = \tilde{\Delta} - \sum_{j=1}^{i-1} \tilde{\delta}_{\mathrm{cap},j}$. At each step of the process, if $\tilde{\delta}_{\mathrm{cap},i}$ is greater than $\tilde{\delta}_{\mathrm{cap,max},i}/2$, we replace it with its complement $\tilde{\delta}_{\mathrm{cap,max},i} - \tilde{\delta}_{\mathrm{cap},i}$, such that for any splitting event, the smaller of the two bubbles produced does not further split.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figures/montecarlo_explanation_ppt.pdf}
\caption{Process of simulating one break-up for the Monte Carlo approach of a bubble of volume $\tilde{\Delta}$, shown in (a). (b) First, the bubble is taken to be deformed into two lobes, separated by a neck of gas. (c) Next, the sizes of the $m' = m-2$ capillary bubbles are picked from a $\delta_\mathrm{cap}^\alpha$ distribution. (d) Finally, the sizes of the two inertial bubbles $\tilde{\delta}_{\mathrm{inertial},i}$ are picked from a uniform distribution over the remaining parent bubble volume (that which has not gone to the capillary bubbles).}
\label{fig:montecarlo_explanation_sketch_crop}
\end{figure}
Once the \dan{volumes} of the $m'$ capillary bubbles are specified, we must determine the \dan{volumes} of the two inertial bubbles. To that end, we first compute the portion of the parent bubble volume that has gone to the capillary bubbles, $\chi_\mathrm{cap} = \sum_{i=1}^{m'} \tilde{\delta}_{\mathrm{cap},i} / \tilde{\Delta}$. The size of the first of the two inertial child bubbles $\tilde{\delta}_{\mathrm{inertial},1}$ is drawn uniformly from the remaining bubble volume, $(1-\chi_\mathrm{cap}) \tilde{\Delta}$, and the second is taken as its complement, $\tilde{\delta}_{\mathrm{inertial},2} = (1-\chi_\mathrm{cap}) \tilde{\Delta} - \tilde{\delta}_{\mathrm{inertial},1}$. Once this is done, the volumes of all child bubbles produced in this single break-up have been determined.
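The per-event procedure described above can be condensed into a short routine. The following is a minimal sketch (names are illustrative, and the number of children $m$ is passed in rather than sampled); it draws the capillary bubble volumes from the bounded power law and assigns the inertial pair from the remaining volume, so each simulated event conserves the parent volume exactly:

```python
import numpy as np

ALPHA = -7.0 / 6.0  # capillary child-size exponent in volume

def sample_power_law(a, b, alpha, rng):
    """Inverse-CDF sample from p(x) proportional to x^alpha on [a, b]."""
    u = rng.random()
    e = alpha + 1.0
    return (a ** e + u * (b ** e - a ** e)) ** (1.0 / e)

def simulate_breakup(Delta, m, delta_min, rng):
    """One stochastic break-up of a bubble of volume Delta into m children."""
    capillary = []
    remaining = Delta
    for _ in range(m - 2):
        delta = sample_power_law(delta_min, remaining, ALPHA, rng)
        # Keep the smaller of the pair produced by each splitting event.
        if delta > remaining / 2.0:
            delta = remaining - delta
        capillary.append(delta)
        remaining -= delta
    # The two inertial children split the leftover volume uniformly.
    d1 = rng.uniform(0.0, remaining)
    return capillary + [d1, remaining - d1]

rng = np.random.default_rng(1)
children = simulate_breakup(Delta=10.0, m=5, delta_min=0.07 ** 3, rng=rng)
# The 5 child volumes sum to the parent volume, 10.
```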
\subsubsection{Aggregation of simulated break-ups into child size distributions}
For a given value of $d_0/d_\mathrm{H}$ (or the equivalent normalized volume \dan{$\tilde{\Delta} = (d_0/d_\mathrm{H})^3$}), the process of simulating one break-up stochastically is repeated $n_\mathrm{MC} = 10^5$ times.
For each $\tilde{\Delta}$, the sizes of the bubbles produced in each of the $n_\mathrm{MC}$ events are aggregated, \changed{and the distribution of all these child bubbles defines the \changed{volumetric child size distribution} $\nd{p}(\tilde{\delta};\tilde{\Delta})$. The distribution is normalized such that $\int_0^{\tilde{\Delta}} \nd{p}(\tilde{\delta};\tilde{\Delta}) \mathrm{d} \tilde{\delta} = \avg{m}(\tilde{\Delta})$, with $\avg{m}(\tilde{\Delta})$ the average number of bubbles formed.} Since the size distribution is aggregated from geometrically-plausible break-ups, it itself must satisfy any constraints relating to the sizes of the bubbles produced. \Cref{fig:montecarlo_distributions_fit} (a) shows the \changed{volumetric child size distributions} for five values of $\tilde{\Delta}$. When $\tilde{\Delta}$ is small, the child size distribution is nearly uniform, as the capillary production mechanism is negligible for small bubbles; for moderate $\tilde{\Delta}$, the child size distribution exhibits a $\nd{p} \propto \tilde{\delta}^\alpha$ scaling for small bubbles, while remaining close to flat for bubbles near the parent bubble size. For even larger bubbles, for which the capillary production mechanism is the most effective, the entire distribution approaches a $\tilde{\delta}^\alpha$ scaling.
For each $\tilde{\Delta}$, we also obtain $\avg{\chi_\mathrm{cap}}(\tilde{\Delta})$, shown in \Cref{fig:montecarlo_distributions_fit} (b), by averaging the portion of the parent bubble volume going to the capillary child bubbles $\chi_\mathrm{cap}$ over the $n_\mathrm{MC}$ events. When $\tilde{\Delta} \ll 1$, $\chi_\mathrm{cap} \approx 0$, and essentially all of the parent bubble volume goes to the two inertial child bubbles. With larger $\tilde{\Delta}$, $\chi_\mathrm{cap}$ increases, reaching $\chi_\mathrm{cap} = 0.1$ at $\tilde{\Delta} = 60$. Even at $\tilde{\Delta} = 1000$, less than half of the parent bubble volume goes to the capillary bubbles.
We then fit each \changed{volumetric child size distribution} as a sum of two components, each stemming from one of the two mechanisms of child bubble production,
\begin{equation}
\nd{p}(\tilde{\delta};\tilde{\Delta}) = \underbrace{a(\tilde{\Delta}) \tilde{\delta}^{\gamma (\tilde{\Delta})}}_\text{inertial mechanism}
+ \underbrace{b (\tilde{\Delta}) \tilde{\delta}^\alpha}_\text{capillary mechanism}, \label{eq:child_size_dist_approx_twocomps}
\end{equation}
with $\alpha=-7/6$ set by the distribution from which the capillary bubbles are picked and $\gamma(\tilde{\Delta})$ chosen to match the aggregated Monte Carlo simulation data. The two remaining coefficients, $a(\tilde{\Delta})$ and $b(\tilde{\Delta})$, are constrained by the volume going to bubbles produced by each mechanism, leading to
\begin{align}
a(\tilde{\Delta}) &= (1 - \avg{\chi_\mathrm{cap}}) \left( \frac{ (\gamma + 2) \tilde{\Delta} }{\tilde{\Delta}^{\gamma+2} - \tilde{\delta}_\mathrm{min}^{\gamma+2}} \right),\\
b(\tilde{\Delta}) &= \avg{\chi_\mathrm{cap}} \left( \frac{(\alpha+2)\tilde{\Delta}}{\tilde{\Delta}^{\alpha+2} - \tilde{\delta}_\mathrm{min}^{\alpha+2}} \right).
\end{align}
The fits to each child size distribution with \cref{eq:child_size_dist_approx_twocomps} are shown as the faint, thick lines in \Cref{fig:montecarlo_distributions_fit} (a).
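By construction, the coefficients $a(\tilde{\Delta})$ and $b(\tilde{\Delta})$ guarantee that the first moment of the fitted distribution equals the parent volume. A minimal numerical check, using illustrative (not fitted) values of $\gamma$ and $\avg{\chi_\mathrm{cap}}$, is:

```python
import numpy as np

ALPHA = -7.0 / 6.0  # capillary exponent in volume

def child_size_dist(delta, Delta, delta_min, gamma, chi_cap):
    """Two-component fit p = a*delta^gamma (inertial) + b*delta^alpha (capillary)."""
    a = (1.0 - chi_cap) * (gamma + 2.0) * Delta / (
        Delta ** (gamma + 2.0) - delta_min ** (gamma + 2.0))
    b = chi_cap * (ALPHA + 2.0) * Delta / (
        Delta ** (ALPHA + 2.0) - delta_min ** (ALPHA + 2.0))
    return a * delta ** gamma + b * delta ** ALPHA

# Volume conservation: the first moment of p over [delta_min, Delta]
# must equal the parent volume Delta.
Delta, delta_min, gamma, chi_cap = 10.0, 0.07 ** 3, -0.2, 0.1
d = np.logspace(np.log10(delta_min), np.log10(Delta), 100001)
f = d * child_size_dist(d, Delta, delta_min, gamma, chi_cap)
volume = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(d))  # trapezoidal rule
# volume is (numerically) equal to Delta by construction of a and b.
```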
\begin{figure}
\centering
\begin{overpic}[width=1\linewidth]{figures/montecarlo_distributions_fit.pdf}
\put(14,39){(a)}
\put(59,39){(b)}
\put(59,19){(c)}
\end{overpic}
\caption{\changed{Volumetric child size distributions} constructed via the Monte Carlo approach. (a) \changed{Volumetric child size distributions} $\nd{p}(\tilde{\delta};\tilde{\Delta})$ for five values of the parent bubble size $\tilde{\Delta}$. Distributions compiled from the Monte Carlo simulations are given by the thin lines, while the thick fainter lines give the fits using \cref{eq:child_size_dist_approx_twocomps}. The two components of the fit form of the distribution are illustrated for $\tilde{\Delta} = 10$. (b) The average capillary fraction $\avg{\chi_\mathrm{cap}}$ calculated from the ensemble of simulations, as a function of the parent bubble size. (c) Fit values of the exponent $\gamma(\tilde{\Delta})$ employed in \cref{eq:child_size_dist_approx_twocomps}. Data for the curves in (b) and (c) and Python code to use them to construct $\nd{p}(\tilde{\delta};\tilde{\Delta})$ \dan{will be made available online}.}
\label{fig:montecarlo_distributions_fit}
\end{figure}
\Cref{fig:montecarlo_distributions_fit} (c) shows the evolution of the exponent $\gamma(\tilde{\Delta})$ describing the inertial production mechanism. Values of $\avg{\chi_\mathrm{cap}}(\tilde{\Delta})$ and $\gamma(\tilde{\Delta})$, which together contain all the necessary information about the child size distributions, are stored for many values of $\tilde{\Delta}$. To implement the child size distributions in a population balance model, we interpolate $\avg{\chi_\mathrm{cap}}(\tilde{\Delta})$ and $\gamma(\tilde{\Delta})$ for a given value of $\tilde{\Delta}$. Data for each curve and Python code to construct the child size distributions \dan{will be provided online at publication} for those wishing to implement the model we have constructed.
\subsection{Parameterization of the break-up frequency}
The next step is to parameterize how often the break-ups will occur. Here, using an approach that has been successfully applied to the break-up of oil droplets in turbulent jets \citep{Aiyer2019,Aiyer2020}, we integrate the effects of eddies smaller than the parent bubble size (each of \dan{dimensional} diameter $d_\mathrm{e}$) which contribute to break-up \citep{Prince1990,Tsouris1994}, yielding
\begin{equation}
\nd{\omega}(\tilde{\Delta}) = K \frac{d_\mathrm{H}}{L_\mathrm{int}} \int_0^{d_0/d_\mathrm{H}} \frac{\pi}{4} \left( \frac{d_0}{d_\mathrm{H}} + \frac{d_\mathrm{e}}{d_\mathrm{H}} \right)^2 \nd{u}_\mathrm{turb}(d_\mathrm{e}/d_\mathrm{H}) \left(\frac{d_\mathrm{e}}{d_\mathrm{H}} \right)^{-4} \Omega(d_\mathrm{e}/d_\mathrm{H};d_0/d_\mathrm{H}) \mathrm{d} (d_\mathrm{e}/d_\mathrm{H}), \label{eq:Aiyer_breakuprate}
\end{equation}
where $K$ is an order-1 constant we adjust, $\nd{u}_\mathrm{turb}(d_\mathrm{e}/d_\mathrm{H}) = C_2^{1/2} \epsilon^{1/3} d_\mathrm{e}^{1/3} / u' = C_2^{1/2} C_\epsilon^{1/3} (d_\mathrm{e}/d_\mathrm{H})^{1/3} (d_\mathrm{H}/L_\mathrm{int})^{1/3}$ is the dimensionless turbulent velocity scale of the eddy, $(d_\mathrm{e}/d_\mathrm{H})^{-4}$ is the approximate dimensionless eddy density \citep{Solsvik2016}, and $\Omega(d_\mathrm{e}/d_\mathrm{H};d_0/d_\mathrm{H})$ is the break-up efficiency given the eddy and bubble sizes. Neglecting viscous effects (given the low viscosity of air bubbles), the break-up efficiency, which gives the probability that an eddy has sufficient energy to overcome surface tension, is taken as the inverse of the exponential of the ratio between the \dan{average} change in surface energy associated with the break-up $E_\sigma(d_0)$ and the kinetic energy of the eddy $E_\mathrm{eddy}(d_\mathrm{e})$, \dan{$\exp(- E_\sigma(d_0) / E_\mathrm{eddy}(d_\mathrm{e}))$}. The \dan{average} surface energy change is given dimensionally by
\begin{equation}
E_\sigma(d_0) = \frac{\sigma \pi}{4} \left( \int_{\delta_\mathrm{min}}^{\dan{\pi d_0^3/6}} p(\delta; \dan{\pi d_0^3/6}) \delta^{2/3} \mathrm{d} \delta - d_0^2 \right) = \Gamma \pi \sigma d_0^2 / 4,
\end{equation}
with the proportional change in surface area due to break-up $\Gamma$ dependent on the form of the child size distribution according to
\begin{equation}
\Gamma(\tilde{\Delta}) = \frac{\int_{\tilde{\delta}_\mathrm{min}}^{\tilde{\Delta}} \nd{p}(\tilde{\delta};\tilde{\Delta}) \tilde{\delta}^{2/3} \mathrm{d} \tilde{\delta}}{\tilde{\Delta}^{2/3}} - 1.
\end{equation}
The kinetic energy of the eddy is given by $E_\mathrm{eddy}(d_\mathrm{e}) = (\pi/4) \rho d_\mathrm{e}^3 C_2 (\epsilon d_\mathrm{e})^{2/3}$. Expressed in our non-dimensional units, the break-up efficiency is then
\begin{equation}
\Omega(d_\mathrm{e}/d_\mathrm{H};d_0/d_\mathrm{H}) = \exp \left( - \frac{ \Gamma(\tilde{\Delta}) (d_0/d_\mathrm{H})^2}{\mathrm{We}_\mathrm{c} (d_\mathrm{e}/d_\mathrm{H})^{11/3}} \right), \label{eq:breakup_efficiency}
\end{equation}
\dan{with the critical Weber number $\mathrm{We}_\mathrm{c}$ necessary to link the scales of the bubble and the turbulence.}
With each component specified, \cref{eq:Aiyer_breakuprate} is evaluated numerically and is shown in \Cref{fig:fitted_model_breakup_frequency}, using $K=2$ picked through a comparison to the experimental data given in \Cref{sec:air_cavity_results}. The break-up rate increases as bubbles approach the Hinze scale and then plateaus due to two competing effects: while larger bubbles are susceptible to a wider range of turbulent scales that may cause break-up, they tend to break into many more bubbles than smaller ones do, leading to a greater surface energy term in \cref{eq:breakup_efficiency}. This means that while more eddies are interacting with the parent bubble, each is less likely to have sufficient energy to cause a break-up.
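\Cref{eq:Aiyer_breakuprate} can be evaluated with simple quadrature. The sketch below treats $\Gamma$ as a fixed input rather than computing it from the child size distribution, and uses illustrative parameter values; at fixed $\Gamma$ it reproduces the increase of the break-up rate with bubble size, since a wider range of eddies can deform larger bubbles:

```python
import numpy as np

def breakup_frequency(d0_over_dH, Gamma, dH_over_Lint, K=2.0, C2=2.0,
                      C_eps=0.7, We_c=1.0, n=4000):
    """Dimensionless break-up frequency: quadrature over eddy sizes d_e < d_0.

    Gamma (the proportional surface-area increase on break-up) is a fixed
    input here; in the full model it follows from the child size distribution.
    """
    # Start slightly above x = 0: tiny eddies are exponentially inefficient,
    # so the x^(-4) eddy density causes no divergence in practice.
    x = np.linspace(1e-3 * d0_over_dH, d0_over_dH, n)  # x = d_e / d_H
    u_turb = np.sqrt(C2) * C_eps ** (1 / 3) * x ** (1 / 3) * dH_over_Lint ** (1 / 3)
    efficiency = np.exp(-Gamma * d0_over_dH ** 2 / (We_c * x ** (11 / 3)))
    integrand = (np.pi / 4) * (d0_over_dH + x) ** 2 * u_turb * x ** -4.0 * efficiency
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return K * dH_over_Lint * integral

# Illustrative comparison: at fixed Gamma, a bubble twice the Hinze scale
# breaks more frequently than one at the Hinze scale.
w1 = breakup_frequency(1.0, Gamma=0.3, dH_over_Lint=0.1)
w2 = breakup_frequency(2.0, Gamma=0.3, dH_over_Lint=0.1)
```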
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figures/fitted_model_breakup_frequency.pdf}
\caption{The parent bubble break-up rate $\nd{\omega}$ as a function of its \dan{volume} $\tilde{\Delta}$, computed using the value of $d_\mathrm{H}/L_\mathrm{int}$ for our dataset. The black line shows the parent bubble break-up frequency given by \cref{eq:Aiyer_breakuprate}. The thicker gray line gives the inverse of the eddy turn-over time at the parent bubble scale, which is taken to be the upper limit in the duration of each break-up event. The dotted orange line gives the inverse of the capillary timescale at the parent bubble scale.}
\label{fig:fitted_model_breakup_frequency}
\end{figure}
The thicker gray line in \Cref{fig:fitted_model_breakup_frequency} gives the inverse of the turbulent turn-over time at the parent bubble scale, which we take to set the duration of each break-up event. The break-up frequency is thus consistent with the break-up duration, since $\nd{\omega}(\tilde{\Delta}) = \dan{\omega(d_0)} T_\mathrm{int}$ being strictly less than $T_\mathrm{int} / T_\mathrm{turb}(d_0)$ means that the typical duration of a break-up is never longer than the typical time between such \dan{break-ups. Finally,} the dotted orange line gives the inverse of the (dimensionless) capillary timescale at the parent bubble scale, $T_\mathrm{int}/T_\mathrm{cap}(d_0)$, showing that capillary effects happen faster than both the break-up duration and time between break-ups (up until the largest bubbles we consider). The capillary pinching events responsible for sub-Hinze bubble creation thus occur over even shorter durations, as the capillary timescales of the small child bubbles formed will be much faster than that of the parent bubble.
\subsection{Summary of parameters involved in the model}
\changed{To summarize, \Cref{tab:model_parameters} lists each parameter in the model and explains how each is determined.
\begin{table}
\changed{
\begin{tabularx}{\linewidth}{p{1.5cm} l | p{2cm} XX}
\multicolumn{1}{c}{Model element} & \multicolumn{1}{c}{Equation} & \multicolumn{1}{c}{Variable} & \multicolumn{1}{c}{Description} & \multicolumn{1}{c}{Constraints} \\
\hline
number of bubbles produced & \cref{eq:avg_m_parameterization} & $\Delta T_\mathrm{break-up} = T_\mathrm{turb}(d_0)$ & break-up duration (over which child bubbles are formed) & theory (\Cref{sec:model_timescales}), informed by experimental and numerical data \citep{Risso1998,Riviere2021jfm} \\
& & $b_1 = 4$ & prefactor for number of bubbles formed per break-up & fit to our experimental and numerical data (\Cref{fig:dynamical_m_data}) \\
& & $b_2 = 2.3$ & power-law exponent in parent volume for number of bubbles & \\
child size distribution shape & \cref{eq:child_size_dist_approx_twocomps} & $\alpha = -7/6$ & power-law exponent for the capillary contribution, corresponding to $N(d) \propto d^{-3/2}$ & theory \citep{Riviere2021cap} \\
& & $a(\tilde{\Delta})$ & magnitude of the inertial contribution & Monte Carlo simulation results (\Cref{fig:montecarlo_distributions_fit}) \\
& & $b(\tilde{\Delta})$ & magnitude of the capillary contribution & \\
& & $\gamma(\tilde{\Delta})$ & power-law exponent for the inertial contribution & \\
break-up frequency & \cref{eq:Aiyer_breakuprate} & $K = 2$ & break-up frequency prefactor& fit to transient experimental data, within the range suggested by \cite{Aiyer2019} \\
& & $C_2 = 2.0$ & $D_\mathrm{LL}(d) / (\epsilon d)^{2/3}$ in inertial subrange for HIT & \cite{Pope2000} \\
& & $C_\epsilon = 0.7$ & $\epsilon L_\mathrm{int} / u'^3$ for HIT & \cite{Sreenivasan1997} \\
& \cref{eq:breakup_efficiency} & $\mathrm{We}_\mathrm{c} = 1$ & critical Weber number & experimental break-up threshold
\end{tabularx}
\caption{Parameters involved in the bubble break-up model, their physical origin and the experimental/numerical data constraints.}
\label{tab:model_parameters}}
\end{table}
}
\subsection{Model comparison to transient air cavity disintegration data}
With $\nd{p}(\tilde{\delta};\tilde{\Delta})$ and $\nd{\omega}(\tilde{\Delta})$ now fully specifying $\nd{f}(\tilde{\delta};\tilde{\Delta})$, we can simulate the turbulent disintegration of the cavities we studied experimentally in \Cref{sec:air_cavity_results} by picking the appropriate initial condition for each (i.e., $\mathcal{N}(d/d_\mathrm{H})$ giving one bubble of size $d_0/d_\mathrm{H}$) and integrating \cref{eq:population_balance_abs} in time. \Cref{fig:fitted_model_comparisons} compares the experimental and modeled values of the dimensionless bubble \changed{size} distribution $\mathcal{N}(d/d_\mathrm{H})$ at $t/T_\mathrm{int} = 1$ and 3 for each value of $d_0/d_\mathrm{H}$, with $d_0/d_\mathrm{H}=2.1$ in panel (a) and $d_0/d_\mathrm{H}=8.3$ in panel (f).
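As a schematic illustration of such a time integration (a toy sketch, not the code used for this work), a discrete population balance of the generic form $\mathrm{d}N_i/\mathrm{d}t = -\omega_i N_i + \sum_j \omega_j\, m_j\, p_{ij}\, N_j$ can be advanced with a forward-Euler step; the bin values, rates and child-distribution matrix below are hypothetical placeholders for the model ingredients described above.

```python
def evolve(N0, omega, m, p, dt, n_steps):
    """Forward-Euler integration of a discrete population balance:
    dN_i/dt = -omega_i N_i + sum_j omega_j m_j p[i][j] N_j."""
    N = list(N0)
    nbins = len(N)
    for _ in range(n_steps):
        rate = [omega[j] * m[j] * N[j] for j in range(nbins)]   # child-production rate per parent bin
        birth = [sum(p[i][j] * rate[j] for j in range(nbins)) for i in range(nbins)]
        N = [N[i] + dt * (birth[i] - omega[i] * N[i]) for i in range(nbins)]
    return N

# toy example: three size bins; only the largest breaks up, into two children
omega = [0.0, 0.0, 1.0]          # break-up frequency per bin (toy values)
m     = [0.0, 0.0, 2.0]          # mean number of children per break-up
p = [[0.0, 0.0, 0.5],
     [0.0, 0.0, 0.5],
     [0.0, 0.0, 0.0]]            # children split evenly over the two small bins
N = evolve([0.0, 0.0, 1.0], omega, m, p, dt=1e-3, n_steps=1000)
```

With these toy inputs the parent bin decays as $e^{-t}$ while the two child bins fill at an equal rate, conserving the expected total number of bubbles.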
\begin{figure}
\centering
\begin{overpic}[width=1\linewidth]{figures/fitted_model_comparisons.pdf}
\put(2,61){(a)}
\put(56,61){(b)}
\put(2,41){(c)}
\put(56,41){(d)}
\put(2,21){(e)}
\put(56,21){(f)}
\end{overpic}
\caption{Comparisons of the experimental and modeled values of $\mathcal{N}(d/d_\mathrm{H})$ at $t/T_\mathrm{int} = 1$ and $3$ for each value of $d_0/d_\mathrm{H}$. The dotted vertical line gives the value of $d_0/d_\mathrm{H}$ for each condition. Dotted lines give the $\mathcal{N}(d/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{ -3/2}$ sub-Hinze scaling. Good agreement between the measured and modeled distributions is observed for the full range of $d_0/d_\mathrm{H}$ and times.}
\label{fig:fitted_model_comparisons}
\end{figure}
First, the model accurately reproduces the observed magnitudes of the size distributions near the Hinze scale, both in time and in the initial cavity size. Second, an $\mathcal{N}(d/d_\mathrm{H}) \propto (d/d_\mathrm{H})^{-3/2}$ scaling is approached for $d/d_\mathrm{H} < 1$ with larger $d_0/d_\mathrm{H}$, and this scaling is adopted more rapidly with larger cavities. With $d_0/d_\mathrm{H} = 2.1$ and 2.84, shown in panels (a) and (b), $\mathcal{N}(d/d_\mathrm{H})$ is flat near the Hinze scale at $t/T_\mathrm{int} = 1$, as the child size distributions for parent bubbles of these cavity sizes are largely flat (as shown in \Cref{fig:montecarlo_distributions_fit} (a)). With larger parent cavities, the sub-Hinze distribution steepens as the capillary mechanism contributes more significantly to the child size distributions for parent bubbles of these larger cavity sizes (as evidenced in the $\avg{\chi_\mathrm{cap}}(\tilde{\Delta})$ curve shown in \Cref{fig:montecarlo_distributions_fit} (b)).
\section{Conclusions}
\label{sec:breakup_conclusions}
\changed{In this paper, we used results from two sets of experimental measurements to describe the production of bubbles smaller than the Hinze scale by turbulent bubble break-up. We experimentally demonstrate that a $N(d) \propto d^{-3/2}$ scaling for bubbles smaller than the Hinze scale ($d < d_\mathrm{H}$) is obtained with the break-up of air cavities much larger than the Hinze scale subjected to forced turbulence, experimentally studying cavities up to $d_0 = 8.3 d_\mathrm{H}$ with accurate measurements of bubble sizes down to approximately $0.1 d_\mathrm{H}$. The $N(d)$ scaling we find is similar to the one reported in measurements and simulations of bubble size distributions under breaking waves \citep{Deane2002,Wang2016,Mostert2021}.
The small bubbles that are produced are significantly separated in size from the turbulent motions which are strong enough to cause break-up. Thus, the link between their sizes and the turbulent motions which do instigate break-up necessarily involves additional physics. Following \cite{Riviere2021cap}, we identify the capillary instability of deformed bubble ligaments which are involved in larger-scale turbulent deformations as the mechanism responsible for small bubble production. Crucially, significant small bubble production by this mechanism is limited to parent bubbles with $d_0 \gg d_\mathrm{H}$, as only bubbles much larger than the Hinze scale can become deformed to a severe enough extent to produce the ligaments from which the small bubbles originate.
The first piece of evidence we provide for this role of capillarity is visual: \Cref{fig:explanation_figure_small_fig,fig:instability_cases} show a number of instances of small bubbles being left behind after the collapse of gas ligaments. Second, the experimental $N(d) \propto d^{-3/2}$ scaling for $d < d_\mathrm{H}$ with $d_0 \gg d_\mathrm{H}$ is coherent with the $P(d) \propto d^{-3/2}$ scaling for the break-up child size distribution reported by \cite{Riviere2021cap}, who showed that the lifetime of ligaments before their collapse to produce a bubble of size $d$ coincides with the capillary time scale of a bubble of size $d$, $T_\mathrm{cap} \propto d^{-3/2}$.
We implemented these physical ideas in a population balance model of turbulent bubble break-up. The child size distributions describing individual break-up events were constructed with a Monte Carlo approach involving simulations of many break-ups. The statistics of each simulated break-up are prescribed by our understanding of the role of capillarity and additional experimental results on individual bubble break-up in which parent and child bubbles were tracked dynamically in three dimensions. The resulting expression for the child size distribution, \cref{eq:child_size_dist_approx_twocomps}, involves two components: one describes the effect of the large-scale deformation to a parent bubble by an energetic turbulent eddy, and the other describes the action of capillarity in producing small bubbles. Finally, the rate at which parent bubbles undergo break-ups was determined by integrating the action of eddies below the bubble's size, which all contribute to break-up. The complete model (consisting of the child size distributions and the parent break-up frequency) yields a good match to our transient experimental data.}
Along with the recent analysis of DNSs of bubble break-up in turbulence from \cite{Riviere2021jfm,Riviere2021cap}, this experimental work opens the door to a new understanding of the role of capillarity in turbulent bubble break-up, in which surface tension not only counteracts the initial turbulent deformation of a bubble but also leads to the formation of sub-Hinze bubbles through capillary instabilities that arise during the final stages of the break-up process.
\section*{Acknowledgements}
\label{sec:acknowledgements}
This work was supported by the NSF CAREER award 1844932 to L.D.
We declare no conflict of interest.
Enrique Llovet Sánchez (Málaga, 15 August 1917 - Madrid, 5 August 2010) was a Spanish writer (columnist, screenwriter, theatre critic and playwright) and diplomat. He used the pseudonym Marco Polo for his newspaper columns and received several awards for his multifaceted literary work: the Mariano de Cavia Prize (1958), the National Theatre Criticism Prize (1964), the National Radio and Television Prize (1965) and the «Azorín» National Literature Prize (1967). The Diputación de Málaga awards an Enrique Llovet Theatre Prize named in his honour.
Biography
The son of a doctor, he completed his early schooling at the Vicente Espinel secondary school in Málaga, on Calle Gaona, whose former pupils also included Picasso, Severo Ochoa, Ortega y Gasset, Blas Infante and Victoria Kent. He studied Law, Philosophy and Letters, Political Science and Economics, both in Spain (Granada, Seville and Madrid) and at the Sorbonne in Paris and Trinity College Dublin. From a young age he stood out for his literary contributions, beginning his career in journalism at the Málaga daily Sur. At the end of the Civil War, in which he had been on the winning side, he emerged as «one of the most complete and multifaceted intellectuals and creators». As a poet, he wrote the lyrics of Falangist hymns and contributed to «various publications of the insurgent side, such as Vértice or Legiones y Falanges», in addition to collaborating on several collections of «hagiographic-patriotic poems» and, from 1943, editing the Boletín de Información of the Falange's Foreign Service. From the 1950s onwards, however, he «progressively distanced himself from the regime, an evolution reflected in his controversial political articles published in ABC».
In 1950 he entered the Diplomatic School. He served as a diplomat in various world capitals, including Paris and Buenos Aires, from where he began writing as a correspondent. In 1956 he married Carmen Baeza, daughter of the Republican essayist, translator and diplomat Ricardo Baeza Durán.
The newspaper correspondent «Marco Polo»
Posted to the Spanish consulate in Tehran, he published, under the pseudonym «Marco Polo», reports on events in the Middle East and other journalistic pieces in the Spanish press (Blanco y Negro); these texts were translated by various media outlets in the United States, England, France, Italy and Germany. The experience culminated in his book Oriente Medio (Madrid, Arión, 1959). For his article «Grandeza y miseria del Oriente» he received the 1958 Mariano de Cavia Prize, awarded by the newspaper ABC.
«Yo te diré»: a very special song
In the early 1940s he wrote the screenplay «Los héroes de Baler», which later served as the basis for the film Los últimos de Filipinas (1945) and made famous the song «Yo te diré», whose lyrics he wrote for that picture. When Llovet died, the journalist Antonio Burgos wrote in his tribute: «He wrote one of the most beautiful habaneras ever composed, the one Nani Fernández sang in Los últimos de Filipinas»; and Antonio Mingote dedicated a drawing to him with the caption: «Yo te diré, Enrique Llovet, why my song names you without naming you». A few years later he published a novella inspired by the screenplay, with the same title as the film, Los últimos de Filipinas (Madrid, La novela del Sábado, 1954). After «Yo te diré» he wrote the lyrics of the song «Luna de España» for Celia Gámez.
His first work as a playwright, Don Pío descubre la primavera, was a humorous comedy written in collaboration with Tono (Antonio Lara) and staged in Madrid in 1946. The play attempted to create a new kind of humour, more intellectual in spirit and of greater imaginative reach.
From 1958 dates the humorous tale Operación C-1, illustrated by Mingote. In 1960 another of his works reached the stage, this time written on his own: the Christmas play Tururururú (1960), premiered at Madrid's Teatro de la Comedia by the company of Aurora Bautista and Teresa Berganza. His many screenplays include Aeropuerto, Cervantes, Simón Bolívar and the adaptation of Valle-Inclán's Divinas palabras.
From Samuel Bronston's team to TVE
He collaborated on the screenplay of El Cid, the epic film shot in 1961 under the direction of Anthony Mann about the story of Rodrigo Díaz de Vivar and the Poema de Mio Cid. The American Samuel Bronston produced it, and Llovet worked with him on other cinematic super-productions.
He also wrote for several TVE series, such as the Sonatas of Valle-Inclán. At TVE he directed 300 millones, a cultural and entertainment programme for all Spanish-speaking countries.
The critic and theatre theorist
Enrique Llovet was recognised as one of the most prestigious connoisseurs and theorists of the theatre of the second half of the twentieth century. A great reader with a lucid, analytical mind, he combined his knowledge of dramatic art with Andalusian humour. He began as a theatre critic at ABC, where for many years he published his assessments of the Madrid and national stage, opposite the alternative critical voice of Alfredo Marquerie at the daily Pueblo. Theirs were the two most influential voices in Spanish theatre criticism of the 1950s and 1960s, to which should be added the critical work of names such as José Monleón and Eduardo Haro Tecglen.
He also wrote for the newspaper Informaciones, began contributing to TVE and eventually published in El País. He combined his work as a critic with theatre theory in his books La formación del actor (1964) and Lo que sabemos del teatro (1967). Over his lifetime he published hundreds of articles of every kind, mainly on literature, theatre and culture in general, but he was also an analyst of social and political affairs. His articles on the «tercera» (opinion) page of ABC were answered on that same page by other great pens of Spanish culture.
He held the Tirso de Molina chair of theatre and edited various literary magazines. He paid attention to classical as well as contemporary theatre, to traditional forms and to new ones (the Teatro Experimental Independiente, or TEI). He taught and lectured at the Real Escuela Superior de Arte Dramático, the Universidad Autónoma de Madrid and elsewhere. A great many books on theatre, or on particular plays, carry prologues or introductory studies by Enrique Llovet. Throughout his work there runs a hierarchy of principles about what theatre should be and how plays are to be judged. His criticism rested on the classical principles of dramaturgy as well as on a «historical and social criterion» (relating theatre to society and its tendencies and movements) and an «impressionist criterion» (attending to the aesthetic emotions a work aroused in the spectator or the critic).
His work as a theatre critic did not stop him from casting a penetrating, literary eye over Spanish reality in its natural and historical-social configuration. He thus won the «Azorín» National Literature Prize with his book España viva (1967). Conceived as a literary and tourist guide, its originality lay in designating each region by a symbol: the Spain of glass, of the Ebro, of fruit, of the sun, of the Guadalquivir, of the conquistadors, of the castles, of bread, of the forests.
Adapter of classical theatre
More than as an author, Llovet's importance for the Spanish stage of the second half of the twentieth century lies in his adaptations of Spanish classics and his translations of foreign works for performance. As an adapter and dramaturg he worked with the best directors and producers of his time: Miguel Narros, Adolfo Marsillach, José Osuna, José Tamayo. As early as 1945 he adapted Tirso de Molina's comedy Don Gil de las calzas verdes, performed by Mercedes Prendes.
But it was with his adaptation of Molière's Tartuffe that he achieved a brilliant impact on the Spanish stage, one that went beyond the literary to reach the political, since the play offered a between-the-lines critique of the government of the day (1969).
It was premiered at Madrid's Teatro de la Comedia by the company of Adolfo Marsillach, the first great actor of Llovet's adaptations. Ten years later (Teatro Príncipe, Madrid, 1979) he used the same text to mount a political critique, from the stage, of a government now belonging to the democratic system.
Another of Llovet's most personal creations was the drama Sócrates (1972), centred on the person, the ideas and the end of the Greek philosopher. Based on Plato's Dialogues and Apology of Socrates, it was staged with a social and political intent. It premiered under the direction of Adolfo Marsillach, who toured several Spanish provinces with it in 1973. It amounted to a theatrical anticipation of the political change that was being heralded and that part of Spanish society was demanding.
With the Teatro Estable Castellano (TEC) and with José Tamayo
Also from Molière, he had staged a version of Las mujeres sabias (Les Femmes savantes) at Madrid's Teatro Español in 1967. From Shakespeare he adapted Medida por medida (Measure for Measure, 1969), and from Aristophanes, Lisístrata (1975). From 1978, Enrique Llovet took charge of the dramaturgy of the Teatro Estable Castellano, with which he premiered Chekhov's El tío Vania (Uncle Vanya, 1979) and Don Carlos, infante de España (1979). With the actor José María Rodero he premiered in 1979 his adaptation of Tolstoy's Historia de un caballo (Strider: The Story of a Horse), which returned to the stage years later with the actor Carlos Hipólito.
Highlights of the 1980s include his adaptation of Shakespeare's tragedy Antonio y Cleopatra (1980), premiered at the Roman theatre of Mérida under the direction of José Tamayo; Chekhov's La gaviota (The Seagull, 1981); and, in 1986, Pirandello's Enrique IV, again with Tamayo and with Rodero, followed by Tennessee Williams's Un tranvía llamado deseo (A Streetcar Named Desire, 1988). In the 1990s he also adapted a classic revived annually on Madrid's stages, José Zorrilla's Don Juan Tenorio.
In 1995 he received the gold medal of the Ateneo de Málaga and the Medal of the City of Málaga, awarded by the City Council. As early as 1987, the Culture and Education Department of the city's provincial council had created the Enrique Llovet Theatre Prize. In 2001 he gathered an anthology of his best articles on theatre theory in his book La magia del teatro.
Enrique Llovet died in Madrid on 5 August 2010. The city of Málaga has dedicated the street «Escritor Enrique Llovet» to him.
See also
Siege of Baler
Los últimos de Filipinas
Notes and references
External links
An interesting interview with Enrique Llovet himself about his most significant works (Operación C-1, España viva and the Tartufo), recorded at his home in Madrid in 2004. Two videos at Portalatino.com
COCTEAU, Jean: La voz humana (La Voix humaine, 1930).
Spanish version by Enrique Llovet.
20th-century Spanish diplomats
20th-century Spanish writers
Spanish film screenwriters
Spanish theatre critics
20th-century Spanish playwrights
Mariano de Cavia Prize winners
Members of FET y de las JONS
People from Málaga
Deaths in Madrid
\section*{Preface}
According to Wikipedia, ``a how-to is an informal, often short, description of how to accomplish a specific task. A how-to is usually meant to help non-experts, may leave out details that are only important to experts, and may also be greatly simplified from an overall discussion of the topic.''~\cite{wikipediahowto}. In some aspects this is also valid for this article. However, in this case the aim of this article is to provide some insight to the experts themselves, that is, physicists, who may use Monte Carlo event generators as black boxes to serve their purposes, either to calculate cross sections or to generate events to further simulate and investigate a future possible experimental analysis.
\addcontentsline{toc}{section}{Preface}
\section{Introduction}
Treatment of particle collisions in mechanics starts off relatively easy: we initially study elastic collisions of two spheres in one dimension, and we are asked to calculate the various momenta after a collision occurs. The next complication involves adding some inelasticity. This results in some energy loss e.g. through the balls sticking together and so on. The theoretical description of collisions of elementary particles starts off equally simply: the scattering of two electrons, for example, can be simulated at leading order in the perturbative picture, via the exchange of a photon, representing an elastic collision. However, ``Truth is stranger than Fiction, but it is because Fiction is obliged to stick to possibilities; Truth isn't.''~\cite{twain}. In the context of particle physics, to describe `Truth', i.e. Nature, in our `fictional' simulations, we need to model a multitude of effects using a series of approximations and models. To name but a few of these aspects:
\begin{itemize}
\item particles radiate, e.g. photons off electrons, gluons off quarks,
\item incoming particles can be confined in a bound state, e.g. quarks and gluons in protons,
\item higher-order corrections in perturbation theory are too laborious to compute beyond the first few orders,
\item the phase space of the final-state particles is huge and of variable dimensions,
\item and many effects cannot be described by perturbation theory and need to be modelled.
\end{itemize}
Many of the above effects have been incorporated into computer simulations using Monte Carlo techniques.
The large dimensionality of the phase space makes Monte Carlo integration the method of choice. The Markovian nature of the parton shower process can also be formulated as a Monte Carlo process. For different aspects of the simulation, several tools already exist on the ``market''. These serve many purposes, sometimes overlapping, following different approaches and methodologies. Without (and far from) being completely inclusive, some of these tools are:
\begin{itemize}
\item \texttt{MadGraph}: provides parton-level events for automatically generated processes that the user requests. At present, events can be produced at leading order and at next-to-leading order in QCD (via the MC@NLO method). The output can then be passed to a general-purpose event generator for showering and hadronization~\cite{Alwall:2011uj, Alwall:2014hca}.
\item \texttt{HERWIG++}~\cite{Bahr:2008pv, Arnold:2012fq, Gieseke:2011na}, \texttt{PYTHIA8}~\cite{Sjostrand:2006za,Sjostrand:2007gs}, \texttt{SHERPA}~\cite{Gleisberg:2008ta} and many more: these are general-purpose event generators that include in part some automation for generating processes at parton level as well as taking into account the effects of the parton shower, hadronization and the underlying event.
\item Many more (with apologies to their authors)!
\end{itemize}
For a relatively recent review of the detailed physics and the philosophy behind Monte Carlo event generators, I refer the reader to Ref.~\cite{Buckley:2011ms}. Here we wish to examine the minimal aspects of constructing a parton-level event generator, adding some hints at the end for how one can incorporate the more advanced features such as a parton shower, hadronization, the underlying event and including higher-order corrections. We will start with some preliminaries in the next section.
\section{Preliminaries}
\subsection{Monte Carlo integration}\label{sec:mcint}
This section has been adapted in part from Peter Richardson's CTEQ 2006 lectures\footnote{\url{http://www.ippp.dur.ac.uk/~richardn/talks/}.} as well as Mike Seymour's PhD thesis, Chapter 3.\footnote{\url{http://hepwww.rl.ac.uk/theory/seymour/thesis/}.}
Monte Carlo integration is based on a simple observation: the value of an integral can be recast as the average of the integrand:
\begin{equation}
I = \int_{x_1}^{x_2} \mathrm{d}x~ f(x) = (x_2 - x_1 ) \left< f(x) \right> \;.
\end{equation}
Consequently, this implies that if we take some, say $N$, values of $x$, distributed uniformly in $(x_1, x_2)$, then the average of $f(x)$ will be a good estimator of the integral, $I$. We can then write:
\begin{equation}
I \approx (x_2 - x_1) \frac{1}{N} \sum_{i=1}^N f(x_i) \;.
\end{equation}
We can draw the values $x_i$ randomly: if $\rho_i$ is a uniform random number in $(0,1)$, then we have:
\begin{equation}
x_i = (x_2 - x_1 ) \rho_i + x_1 \;.
\end{equation}
To estimate the accuracy of the calculation we can employ the so-called `Central Limit Theorem': the distribution of $\left< f(x) \right>$ will tend to a Gaussian with standard deviation $\sigma_\mathrm{MC} = \sigma / \sqrt{N}$, where $\sigma$ is the standard deviation of $f(x_i)$. Our inaccuracy simply decreases as $1/\sqrt{N}$. We often also define the weight: $W_i = (x_2 - x_1) f(x_i)$, and then the integral is simply the average of the weight:
\begin{equation} \label{eq:integral}
I \approx I_N = \frac{1}{N} \sum_{i=1}^N W_i \;.
\end{equation}
We also define the variance, $V_N \equiv \sigma^2$:
\begin{equation}\label{eq:mcerror}
V_N = \frac{1}{N} \sum_i W_i^2 - \left[ \frac{1}{N} \sum_i W_i \right]^2 \;,
\end{equation}
from which $\sigma_\mathrm{MC} = \sqrt{ V_N / N }$, and we finally arrive at the expression:
\begin{equation}
I \approx I_N \pm \sqrt{ \frac{V_N} { N } } \;.
\end{equation}
One can compare the convergence of the Monte Carlo integration technique to that of other common techniques. In $d$ dimensions the convergence of techniques such as the `Trapezium Rule', `Simpson's rule' and Gaussian quadrature goes as $\propto 1/N^{2/d}$, $\propto 1/N^{4/d}$ and $\propto 1/N^{(2m-1)/d}$ respectively. On the other hand, Monte Carlo integration always extends trivially to higher dimensions and converges as $\propto 1/\sqrt{N}$, and hence converges faster than the aforementioned methods for $d>4$, $d>8$ and $d>4m-2$ respectively. In typical LHC events we have $\mathcal{O}(1000)$ particles, resulting in $\mathcal{O}(3000)$ phase-space integrals. Monte Carlo integration is in fact the only viable option.
The biggest disadvantage of the Monte Carlo method is the relatively slow divergence in few dimensions. This can be tackled by `Importance Sampling', which we will discuss below. Its principal advantages over numerical quadrature can be summarised as:
\begin{itemize}
\item fast convergence in many dimensions,
\item arbitrarily complex integration regions,
\item small feasibility limit: the minimum number of functional evaluations which must be made for the method to work at all, in this case 2,
\item small growth rate: the smallest number of additional function evaluations needed to improve the current estimate, in this case 1: each additional point improves the estimate of the integral,
\item easy estimate of accuracy.
\end{itemize}
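To make the basic algorithm concrete, the following minimal sketch estimates an integral and its error exactly as in Eqs.~\ref{eq:integral} and~\ref{eq:mcerror}; the integrand and interval are arbitrary example choices, not taken from the text.

```python
import random

def mc_integrate(f, x1, x2, n=100_000, seed=1):
    """Estimate I = int_{x1}^{x2} f(x) dx and its Monte Carlo error."""
    rng = random.Random(seed)
    sum_w = sum_w2 = 0.0
    for _ in range(n):
        x = x1 + (x2 - x1) * rng.random()   # uniform point in (x1, x2)
        w = (x2 - x1) * f(x)                # event weight W_i
        sum_w += w
        sum_w2 += w * w
    i_n = sum_w / n                         # estimate I_N
    v_n = sum_w2 / n - i_n * i_n            # variance V_N
    return i_n, (v_n / n) ** 0.5            # I_N and sigma_MC

# example: integrate x^2 over (0, 1); the exact answer is 1/3
estimate, error = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

With $10^5$ points the returned error is of order $10^{-3}$, consistent with the $1/\sqrt{N}$ scaling discussed above.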
\subsection{Improving convergence of the Monte Carlo integration}\label{sec:jacob}
The accuracy of an integral calculated via the Monte Carlo integration method is given by $ \sqrt{ V_N / N } $. Thus one can simply increase the number of points to increase the accuracy. However, one can also look for ways to decrease $V_N$, e.g. by a method called `Importance Sampling'. The basic idea is to perform a Jacobian transformation so that the integrand is flatter in the new integration variable. This is equivalent to finding a transformation such that $V_N' < V_N$.
We begin by considering the simplest case encountered in particle physics. In cross section calculations we often encounter the so-called Breit-Wigner distribution, that describes the `peak' of a resonance:
\begin{equation}
F_\mathrm{BW}(m^2) = \frac{1}{ (m^2 - M^2)^2 + M^2 \Gamma^2 } \;,
\end{equation}
where $M$ would be the physical (on-shell) mass of the particle, $m$ is the off-shell mass and $\Gamma$ its width. An example of the distribution (made using $M=90$, $\Gamma = 10$) is shown in Fig.~\ref{fig:bw}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.75\linewidth]{bw.pdf}
\caption{An example of the Breit-Wigner distribution, made for $M=90$, $\Gamma=10$.}
\label{fig:bw}
\end{figure}
We then often encounter integrals of the form:
\begin{equation}\label{eq:bwint}
I = \int_{M_\mathrm{min}^2}^{M_\mathrm{max}^2} \mathrm{d} m^2 \frac{1}{ (m^2 - M^2)^2 + M^2 \Gamma^2 } \;.
\end{equation}
The transformation we wish to consider is $m^2 \rightarrow \rho$, where
\begin{equation}
m^2 = M\Gamma \tan \rho + M^2 \;,
\end{equation}
and the corresponding Jacobian is given by:
\begin{equation}
J = \left| \frac{ \partial m^2 } { \partial \rho }\right| = M\Gamma \sec ^2 \rho\;.
\end{equation}
Hence we have:
\begin{eqnarray}
I &=& \int_{\rho_\mathrm{min}}^{\rho_\mathrm{max}} \mathrm{d} \rho \left| \frac{ \partial m^2 } { \partial \rho }\right| \frac{1}{ (m^2 - M^2)^2 + M^2 \Gamma^2 } \nonumber\\
&=& \frac{1}{M\Gamma} \int_{\rho_\mathrm{min}}^{\rho_\mathrm{max}} \mathrm{d} \rho \;.
\end{eqnarray}
It is evident that in this case we have in fact reduced the variance to zero: $V_N' = 0$. In practice, few of the cases we need to deal with can be integrated exactly. For complicated integration regions, one can try to pick a function that approximates the behaviour of the function we want to integrate. A specific method, called multi-channel integration, handles situations where one is faced with multiple peaks in the phase space, so that a single Breit-Wigner transformation is not sufficient. The method can be automated and is used in all modern Monte Carlo event generators.
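The variance reduction from this transformation can be demonstrated numerically. The sketch below (using the example values $M=90$, $\Gamma=10$ of Fig.~\ref{fig:bw} and arbitrary integration limits) compares naive uniform sampling of Eq.~\ref{eq:bwint} with sampling flat in $\rho$, for which every weight comes out identical.

```python
import math
import random

M, Gamma = 90.0, 10.0
m2_min, m2_max = 60.0**2, 120.0**2

def bw(m2):
    """Breit-Wigner integrand."""
    return 1.0 / ((m2 - M * M) ** 2 + (M * Gamma) ** 2)

rng = random.Random(2)
n = 50_000

# (a) naive: sample m^2 uniformly in (m2_min, m2_max)
naive = [(m2_max - m2_min) * bw(m2_min + (m2_max - m2_min) * rng.random())
         for _ in range(n)]

# (b) importance sampling: m^2 = M*Gamma*tan(rho) + M^2, flat in rho
rho_min = math.atan((m2_min - M * M) / (M * Gamma))
rho_max = math.atan((m2_max - M * M) / (M * Gamma))
flat = []
for _ in range(n):
    rho = rho_min + (rho_max - rho_min) * rng.random()
    m2 = M * Gamma * math.tan(rho) + M * M
    jacobian = M * Gamma / math.cos(rho) ** 2      # |dm^2/drho|
    flat.append((rho_max - rho_min) * jacobian * bw(m2))

def mean_var(ws):
    mean = sum(ws) / len(ws)
    return mean, sum(w * w for w in ws) / len(ws) - mean * mean

est_naive, var_naive = mean_var(naive)
est_flat, var_flat = mean_var(flat)
```

Both estimators agree with the analytic result $(\rho_\mathrm{max}-\rho_\mathrm{min})/(M\Gamma)$, but the transformed weights are constant, so their variance vanishes (up to floating-point rounding).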
\subsection{Hit-or-Miss Monte Carlo}\label{sec:hitmiss}
There are two main aspects of the Monte Carlo method that make it ideal for use in constructing event generators: the close relationship between the numerical method and the physical process under study, both being `random' in some sense, and the ability to generate unweighted events.
In a similar way that a Monte Carlo integration of the sort described in Section~\ref{sec:mcint} is performed, one can perform a scan of the function $f(x)$ and collect a set of phase-space points, which effectively correspond to `events', along with their associated probabilities, corresponding to the weight of each in the integral. However, if we want to use these events, e.g. to perform an experimental analysis, then we must always carry the associated weight around for use in histograms, averages and so on. This can be inconvenient but also very inefficient: time may be wasted in some latter part of the simulation (e.g. detector simulation) to events that possess only a very small weight. The so-called `hit-or-miss' method aims to equalize the weights of different events as far as possible.
Since the weight of each event is proportional to the probability of it occurring, we can unweight the events by keeping only a fraction of them, according to their weights. We do this by finding the maximum weight which occurs in the integration region. This can be done while performing Monte Carlo integration. We choose to keep (`accept') each event with probability $f(x)/f_\mathrm{max}$. The rest are thrown away (`rejected'). All accepted events are given a weight $\left< f \right>$, calculated from the Monte Carlo integral over all generated events (not just the accepted events). The complete algorithm for integration and event generation is then:
\begin{enumerate}
\item Monte Carlo integration and scanning are performed: $N$ points are picked randomly according to some distribution, and their weights are accumulated into the sums $\sum_i W_i$ and $\sum_i W_i^2$. The cross section and corresponding error are computed according to Eqs.~\ref{eq:integral} and~\ref{eq:mcerror}. During this phase, the phase-space point which gives the maximum weight, $W_\mathrm{max}$, is stored.
\item Generating unweighted events via the `hit-or-miss' method: go through randomly chosen phase-space points and compare the probability of each, given by $W_i/W_\mathrm{max}$ to a random number $R \in (0,1)$. If $W_i/W_\mathrm{max} > R$, we `accept' the event, otherwise we reject it. This is done until we have collected the desired number of events, $N_\mathrm{events}$.
\end{enumerate}
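The two-stage algorithm above can be sketched in a few lines; the target function $f(x)=x^2$ on $(0,1)$ is an arbitrary example choice.

```python
import random

def generate_unweighted(f, x1, x2, n_scan=20_000, n_events=1_000, seed=3):
    """Two-stage generation: scan for W_max, then accept/reject events."""
    rng = random.Random(seed)
    # Step 1: integration/scanning phase, recording the maximum weight
    w_max = 0.0
    for _ in range(n_scan):
        x = x1 + (x2 - x1) * rng.random()
        w_max = max(w_max, (x2 - x1) * f(x))
    # Step 2: hit-or-miss -- accept each trial point with probability W_i / W_max
    events = []
    while len(events) < n_events:
        x = x1 + (x2 - x1) * rng.random()
        w = (x2 - x1) * f(x)
        if w / w_max > rng.random():
            events.append(x)                # an unweighted event
    return events, w_max

# example: generate events distributed as f(x) = x^2 on (0, 1)
events, w_max = generate_unweighted(lambda x: x * x, 0.0, 1.0)
```

The accepted points are then distributed according to $f(x)$ itself, with every event carrying the same weight. Note that in this sketch $W_\mathrm{max}$ is only estimated from the scan, so a point exceeding it during generation would in practice require updating the maximum.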
\subsection{Factorisation and the structure of event generators}
The complexity of an event is something that we (particle physicists) are all familiar with. This is exemplified in Fig.~\ref{fig:cmsevent}. Even if the `hard collision' is simple, we expect thousands of final state particles at hadron colliders. It is evident that this poses many challenges in simulating events: it is difficult or even impossible to construct an efficient algorithm but also hard to exactly calculate final-state distributions of hadrons.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.75\linewidth]{cmshiggscand.pdf}
\caption{Real CMS proton-proton collision events in which four high energy electrons are observed. The event shows characteristics expected from the decay of a Higgs boson but is also consistent with background Standard Model physics processes.}
\label{fig:cmsevent}
\end{figure}
It is fortunate that the probabilities for the separate `stages' of an event factorise in some well-motivated approximations. We will not examine these in detail here: instead, we illustrate a possible, and common, factorisation of an event with the help of schematic diagrams, following what a generic event generator does when producing a `full' event simulation. Figs.~\ref{fig:step1} to~\ref{fig:step5} demonstrate the various steps~\cite{Papaefstathiou:2011rc}. In each step, the new features are shown in red. In the present article we will only examine how step 1 is implemented in a numerical simulation.
\begin{enumerate}
\item{\textbf{Hard process generation,
Figure~\ref{fig:step1}}: The hard process is
generated by choosing a point on the phase space according to the
`hit-or-miss' method.}
\item{\textbf{Heavy resonance decay,
Figure~\ref{fig:step2}}: Heavy resonances
with narrow widths are
decayed before the parton shower. In this example the heavy
resonance could be a top quark, decaying to a $\ell \nu_{\ell}$
and a $b$-quark.}
\item{\textbf{Parton showers,
Figure~\ref{fig:step3}}: The incoming partons
are showered by evolving backwards to the incoming hadrons,
producing initial-state radiation. Any final-state particles
that are colour-charged also radiate, producing final-state
radiation.}
\item{\textbf{Multiple parton interactions,
Figure~\ref{fig:step4}}: Secondary
interactions between partons within the colliding hadrons,
modelled as QCD $2\rightarrow2$ interactions, are generated.}
\item{\textbf{Hadronization and hadron decays,
Figure~\ref{fig:step5}}: In the cluster
model, clusters are formed and hadrons are produced. Unstable
hadrons are subsequently decayed.}
\end{enumerate}
\label{app:mcillustration}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.55, angle=0]{step1.eps}
\caption[]{\textbf{STEP 1}: Generation of the hard process.}
\label{fig:step1}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.55, angle=0]{step2.eps}
\caption[]{\textbf{STEP 2}: Decay of heavy resonances.}
\label{fig:step2}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.55, angle=0]{step3.eps}
\caption[]{\textbf{STEP 3}: Parton showers.}
\label{fig:step3}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.55, angle=0]{step4.eps}
\caption[]{\textbf{STEP 4}: Multiple parton interactions.}
\label{fig:step4}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.55, angle=0]{step5.eps}
\caption[]{\textbf{STEP 5}: Hadronization and hadron decays.}
\label{fig:step5}
\end{figure}
\section{Exercises}
The exercises and solutions can be found at: \url{http://physik.uzh.ch/~andreasp/mc}.
\subsection{Particle physics input}
We first provide some basic formulae that we will employ in the exercises given in this section.
\subsubsection{$e^+e^- \rightarrow \gamma \rightarrow \mu^+ \mu^-$}
The steps for calculating the matrix element and hence differential cross section for this process are given, for example, in Ref.~\cite{Peskin:1995ev}, Ch. 5. Here we list the main steps in the calculation of $e^+e^- \rightarrow \mu^+ \mu^-$ in QED via photon exchange. The Feynman diagram for this process is shown in Fig.~\ref{fig:eemumu}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\linewidth]{eemumu.pdf}
\caption{Feynman diagram for $e^+e^- \rightarrow \mu^+ \mu^-$ in QED via photon exchange.}
\label{fig:eemumu}
\end{figure}
Using the QED Feynman rules, one can immediately write down the amplitude:
\begin{equation}
i \mathcal{M} = \bar{v}^{s'} (p') (-ie \gamma^\lambda) u^s(p) \left( \frac{ - i g_{\lambda\nu} } { q^2 }\right)\bar{u}^r (k) ( - ie \gamma^\nu ) v^{r'}(k') \;,
\end{equation}
where $s, s', r, r'$ are the spin indices. Writing them implicitly, the squared matrix element is given by
\begin{equation}
|\mathcal{M}|^2 = \frac{e^4}{q^4} ( \bar{v}(p') \gamma^\lambda u(p) \bar{u}(p) \gamma^\nu v(p')) ( \bar{u}(k) \gamma_\lambda v(k') \bar{v}(k') \gamma_\nu u(k)) \;.
\end{equation}
For simplicity, we can average over the electron and positron spins and sum over the muon spins:
\begin{equation}
\frac{1}{2} \sum_s \frac{1}{2} \sum_{s'} \sum_r \sum_{r'} |\mathcal{M}|^2 \;.
\end{equation}
Using completeness relations for the spinors we can write:
\begin{equation}
\frac{1}{4} \sum_\mathrm{spins} |\mathcal{M}|^2 = \frac{e^4} { 4 q^4 } \mathrm{Tr} [ \slashed{p}' \gamma^\lambda \slashed{p} \gamma^\nu ]\mathrm{Tr} [ \slashed{k} \gamma_\lambda \slashed{k}' \gamma_\nu ] \;,
\end{equation}
where we have neglected both the electron and muon masses. Using identities of traces of gamma matrices, one can show that:
\begin{equation}
\frac{1}{4} \sum_\mathrm{spins} |\mathcal{M}|^2 = \frac{8 e^4} { q^4 } \left[ (p\cdot k)(p'\cdot k') + (p \cdot k' ) ( p' \cdot k) \right] \;.
\end{equation}
Up to this point the matrix element squared is expressed in terms of invariant dot products. To obtain a more explicit formula we must specialise to a particular frame of reference and write down expressions for the four-vectors of the particles involved in the collision. These are shown in Fig.~\ref{fig:eemumukin}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{eemumukin.pdf}
\caption{Schematic diagram for the kinematic setup of the process $e^+e^- \rightarrow \mu^+ \mu^-$. The angle $\theta$ is defined between the incoming electron and the outgoing muon, both being particles.}
\label{fig:eemumukin}
\end{figure}
Using the four-vector explicit expressions we can express the invariants as:
\begin{eqnarray}
q^2 = (p+p')^2 = 4 E^2&,&\;\; p\cdot p' = 2 E^2,\nonumber \\
p\cdot k = p' \cdot k' = E^2 - E |\mathbf{k}| \cos \theta&,&\;\; p\cdot k' = p' \cdot k = E^2 + E |\mathbf{k}| \cos \theta \;,
\end{eqnarray}
where the angle $\theta$ is defined in the figure. At high enough energies we can neglect the lepton masses, $E=|\mathbf{k}|$ and:
\begin{equation}
\frac{1}{4} \sum_\mathrm{spins} |\mathcal{M} |^2 = e^4 ( 1 + \cos ^2 \theta ) \;.
\end{equation}
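One can verify this reduction numerically: evaluating the invariant-dot-product form of the squared matrix element with explicit four-vectors reproduces $e^4 (1 + \cos^2\theta)$ to machine precision. A short \verb+Python+ check (the numerical value chosen for $e$ is arbitrary and irrelevant for the comparison):

```python
import math

def dot(p, q):
    # Minkowski product with (+,-,-,-) metric
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def msq_invariant(E, theta, e=0.3):
    p  = (E, 0.0, 0.0,  E)                                  # incoming e-
    pp = (E, 0.0, 0.0, -E)                                  # incoming e+
    k  = (E,  E*math.sin(theta), 0.0,  E*math.cos(theta))   # outgoing mu-
    kp = (E, -E*math.sin(theta), 0.0, -E*math.cos(theta))   # outgoing mu+
    q = (p[0]+pp[0], p[1]+pp[1], p[2]+pp[2], p[3]+pp[3])
    q2 = dot(q, q)                                          # = 4 E^2
    return 8.0 * e**4 / q2**2 * (dot(p, k)*dot(pp, kp) + dot(p, kp)*dot(pp, k))

def msq_explicit(E, theta, e=0.3):
    return e**4 * (1.0 + math.cos(theta)**2)
```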
One can immediately plug the above expression into the relevant formula for the differential cross section for $2\rightarrow 2$ scattering:
\begin{equation}
\frac{ \mathrm{d} \sigma } { \mathrm{d} \Omega } = \frac{1} { 2 E_\mathcal{A} 2 E_\mathcal{B } | v_\mathcal{A} - v_\mathcal{B}| } \frac{ |\mathbf{k}| } { (2\pi)^2 4 E_\mathrm{cm} } |\mathcal{M} |^2\;,
\end{equation}
where $E_{\mathrm{cm}}$ is the centre of mass energy of the colliding particles, the difference $|v_\mathcal{A} - v_\mathcal{B}|$ is the relative velocity of the beams as viewed from the laboratory frame, $E_\mathcal{A}$, $E_\mathcal{B}$ their energies in that frame and $\mathrm{d} \Omega = \mathrm{d} \cos \theta~ \mathrm{d} \phi$ is the phase space factor. The result is then:
\begin{equation}\label{eq:qedxs}
\frac{ \mathrm{d} \sigma } { \mathrm{d} \Omega } = \frac{\alpha^2}{4 \hat{s} } ( 1 + \cos ^2 \theta ) \;,
\end{equation}
where $\alpha = e^2 / (4 \pi)$ is the QED running coupling. Since the expression does not depend on the angle $\phi$, we may integrate over it: this introduces a multiplicative factor of $2\pi$ on the RHS. We have also defined, $\hat{s} \equiv E_\mathrm{cm}^2$.
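A quick numerical cross-check of this formula is to integrate $\mathrm{d}\sigma/\mathrm{d}\Omega$ by Monte Carlo and compare with the analytic total, $\sigma = 4\pi\alpha^2/(3\hat{s})$. The sketch below uses $\alpha \simeq 1/137.036$ and the standard conversion factor $(\hbar c)^2 \simeq 3.894\times10^{8}$~pb~GeV$^2$; the values to use in the exercises are those of Appendix~\ref{app:constants}, so the numbers here are illustrative only.

```python
import math
import random

GEV2_TO_PB = 3.894e8          # (hbar c)^2 in pb GeV^2 (illustrative value)
ALPHA = 1.0 / 137.036         # QED coupling (illustrative value)

def qed_xs_mc(s_hat, n=50000, seed=2):
    # Monte Carlo integral of d(sigma)/d(Omega) = alpha^2/(4 s_hat) (1 + cos^2 theta)
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        c = 2.0 * rng.random() - 1.0                  # cos(theta), flat in (-1, 1)
        acc += ALPHA**2 / (4.0 * s_hat) * (1.0 + c * c)
    # phase-space volume: 2 from cos(theta), 2*pi from the trivial phi integral
    return acc / n * 2.0 * 2.0 * math.pi * GEV2_TO_PB

def qed_xs_analytic(s_hat):
    return 4.0 * math.pi * ALPHA**2 / (3.0 * s_hat) * GEV2_TO_PB
```

At $\sqrt{\hat{s}} = 90$~GeV this pure-QED cross section is about $10.7$~pb, far below the full $Z/\gamma$ result quoted later, since the $Z$ resonance is absent here.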
\subsubsection{$e^+e^-\rightarrow Z/\gamma \rightarrow \mu^+ \mu^-$}
The differential cross section for electroweak production of $\mu^+\mu^-$ at a lepton collider proceeds in much the same way as the one in QED. The main difference arises from the fact that the $Z$ boson couples with different strengths to left- and right-handed fermions~\cite{HalzenMartin}. Table~\ref{tb:couplings} shows the couplings of fermions to the $Z$ boson, in the form:
\begin{equation}
\mathcal{L}_{ffZ} = - \frac{g_W}{2 \cos \theta_W} \sum_f \bar{\psi}_f \gamma^\mu (V_f - A_f \gamma_5) \psi_f Z_\mu\;,
\end{equation}
where $g_W$ is the SU(2) coupling constant in the standard model, $\cos \theta_W$ is the cosine of the Weinberg angle, numerical values of which are found in Appendix~\ref{app:constants}, $\psi_f$ represents fermion $f$ and $Z_\mu$ is the $Z$ boson field.
\begin{table}[!htb]
\begin{center}
\begin{tabularx}{\linewidth}{XXXX}
\toprule
fermions & $Q_f$ & $V_f$ & $A_f$ \\ \midrule
u, c, t & $+\frac{2}{3}$ & $(+\frac{1}{2} - \frac{4}{3} \sin ^2 \theta_W )$ & $+\frac{1}{2}$ \\
d,s, b & $-\frac{1}{3}$ & $(-\frac{1}{2} + \frac{2}{3} \sin ^2 \theta_W )$ & $-\frac{1}{2}$ \\
$\nu_e$, $\nu_\mu$, $\nu_\tau$ & $0$ & $\frac{1}{2}$ & $+\frac{1}{2}$ \\
$e$, $\mu$, $\tau$ & $-1$ & $(-\frac{1}{2} + 2 \sin ^2 \theta_W )$ & $-\frac{1}{2}$ \\\bottomrule
\end{tabularx}
\end{center}
\caption{Couplings of fermions to the $Z$ boson, taken from Ref.~\cite{Ellis:1991qj}.}
\label{tb:couplings}
\end{table}
The difference is manifested in the resulting outgoing lepton distributions as an asymmetry between the forward and backward directions. While Eq.~\ref{eq:qedxs} contains only constant terms and terms proportional to the square of the cosine of the scattering angle, the inclusion of the $Z$ boson induces a term linear in $\cos \theta$:
\begin{equation}\label{eq:partonicxs}
\frac{ \mathrm{d} \sigma } { \mathrm{d} \Omega } = \frac{\alpha^2}{4 \hat{s} } \left[ A_0 ( 1 + \cos ^2 \theta ) + A_1 \cos \theta \right] \;,
\end{equation}
where $A_0$ and $A_1$ are given by:
\begin{eqnarray}
A_0 &=& Q_f^2 - 2 Q_f V_\mu V_f ~\chi_1 + (A_\mu^2 + V_\mu^2) (A_f^2 + V_f^2) ~\chi_2\;, \nonumber \\
A_1 &=& -4 Q_f A_\mu A_f ~\chi_1 + 8 A_\mu V_\mu A_f V_f ~ \chi_2\;,
\end{eqnarray}
where in turn, the functions $\chi_1$ and $\chi_2$ are given by:
\begin{eqnarray}
\chi_1 (\hat{s}) &=& \kappa \hat{s} ( \hat{s} - M_Z^2 ) / ( (\hat{s}-M_Z^2)^2 + \Gamma_Z^2 M_Z^2 ) \;, \nonumber \\
\chi_2 (\hat{s}) &=& \kappa^2 \hat{s}^2 / ( (\hat{s}-M_Z^2)^2 + \Gamma_Z^2 M_Z^2 ) \;, \nonumber \\
\kappa &=& \sqrt{2} G_F M_Z^2 / (4 \pi \alpha) \;,
\end{eqnarray}
A good test of whether the Monte Carlo integration is working is to compare the Monte Carlo cross section with the analytic result:
\begin{equation}\label{eq:sigmaee}
\sigma = \frac{4 \pi \alpha^2} { 3 \hat{s} } A_0 \;,
\end{equation}
where it is evident that the $\cos \theta$ term has dropped out, since it is odd in $\cos \theta$.
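These coefficient functions are straightforward to code up. In the sketch below the electroweak inputs ($M_Z$, $\Gamma_Z$, $G_F$, $\alpha$, $\sin^2\theta_W$) are set to commonly quoted values rather than the ones of Appendix~\ref{app:constants}, so the resulting cross sections will differ slightly from the numbers quoted in the exercises:

```python
import math

# Electroweak inputs -- commonly quoted values; the appendix values may differ.
MZ, GAMMAZ = 91.1876, 2.4952      # Z mass and width [GeV]
GF = 1.16637e-5                   # Fermi constant [GeV^-2]
ALPHA = 1.0 / 137.036             # QED coupling
SW2 = 0.2312                      # sin^2(theta_W)
GEV2_TO_PB = 3.894e8              # (hbar c)^2 [pb GeV^2]

# couplings for charged leptons (initial e and final mu are identical here)
Q_F, V_F, A_F = -1.0, -0.5 + 2.0 * SW2, -0.5
V_MU, A_MU = V_F, A_F

KAPPA = math.sqrt(2.0) * GF * MZ**2 / (4.0 * math.pi * ALPHA)

def chi1(s):
    return KAPPA * s * (s - MZ**2) / ((s - MZ**2)**2 + GAMMAZ**2 * MZ**2)

def chi2(s):
    return KAPPA**2 * s**2 / ((s - MZ**2)**2 + GAMMAZ**2 * MZ**2)

def a0(s):
    return (Q_F**2 - 2.0 * Q_F * V_MU * V_F * chi1(s)
            + (A_MU**2 + V_MU**2) * (A_F**2 + V_F**2) * chi2(s))

def sigma_pb(s):
    # total cross section: the cos(theta) term integrates to zero
    return 4.0 * math.pi * ALPHA**2 / (3.0 * s) * a0(s) * GEV2_TO_PB
```

Far below the $Z$ pole the $\chi$ functions vanish and $A_0 \rightarrow Q_f^2$, recovering the pure QED result, while on the pole the cross section is resonantly enhanced.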
\section{Exercise 1: lepton colliders}
In this exercise the aim is to produce a Monte Carlo event generator for $e^+e^- \rightarrow Z/\gamma \rightarrow \mu^+ \mu^-$. Of course the choice of `final' flavour is arbitrary, since we have neglected all lepton masses to this point. Note, however, that if one wants to consider $e^+e^- \rightarrow e^+ e^-$, then there exists a new $t$-channel diagram that is not included in the above expression.
The integration to obtain the cross section is in fact trivial, since we know how to integrate cosine functions analytically, and the $e^+e^-$ centre-of-mass energy, $\hat{s}$, is fixed, without requiring any Jacobian transformations to improve efficiency (i.e. there's no $\mathrm{d} m^2$ integral as in Eq.~\ref{eq:bwint}). Nevertheless, the exercise provides an insight to the basic building blocks of an event generator. The algorithm is given in Section~\ref{sec:hitmiss}. One thing to notice is that to obtain the cross sections in picobarn, one has to use the conversion factor in Table~\ref{tb:constants} in Appendix~\ref{app:constants}.
The example `solution' was written in \verb=Python=, and provides some basic plotting using \verb+Matplotlib+. A histogram of the only variable $\cos \theta$ is given. In this case this is an observable that we can measure, since we know both the direction of the incoming lepton and the outgoing lepton (between which this angle is defined). Moreover, the momenta are `set up' in the laboratory frame, which is equivalent to the centre-of-mass frame in this case.
Some suggestions for possible extensions:
\begin{itemize}
\item Check the cross section against the analytical formula. For example, at $E_\mathrm{cm} = 90$~GeV: $\sigma = 1060.82 \pm0.25$~pb versus the analytic result: $\sigma_\mathrm{analytic} = 1060.93$~pb.
\item Plot distributions of the energy of particles, or the pseudo-rapidity (in this case equal to the rapidity since we neglect the mass): $\eta = - \ln \tan (\theta / 2)$.
\item Investigate the forward-backward asymmetry: $A_\mathrm{FB} \equiv (\sigma_F - \sigma_B) / (\sigma_F + \sigma_B)$, where $\sigma_{F}$ and $\sigma_{B}$ are the forward ($\cos \theta > 0$) and backward ($\cos \theta < 0$) cross sections respectively.
\end{itemize}
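The forward-backward asymmetry suggested above can be explored with a small hit-or-miss generator for the single variable $\cos\theta$; here \verb+a0+ and \verb+a1+ stand for the values of $A_0$ and $A_1$ at the chosen $\hat{s}$ (arbitrary in this sketch):

```python
import random

def generate_costheta(a0, a1, n_events=5000, seed=3):
    # hit-or-miss sampling of d(sigma)/d(cos theta) ~ a0 (1 + c^2) + a1 c
    f = lambda c: a0 * (1.0 + c * c) + a1 * c
    # for a0 > 0 the parabola opens upwards, so the maximum is at an endpoint
    f_max = max(f(-1.0), f(1.0))
    rng = random.Random(seed)
    events = []
    while len(events) < n_events:
        c = 2.0 * rng.random() - 1.0
        if f(c) / f_max > rng.random():
            events.append(c)
    return events

def forward_backward_asymmetry(events):
    n_f = sum(1 for c in events if c > 0.0)
    n_b = len(events) - n_f
    return (n_f - n_b) / float(n_f + n_b)
```

For a pure $1+\cos^2\theta$ shape ($a_1 = 0$) the asymmetry vanishes up to statistics, while for $a_1 = a_0$ the analytic value is $A_\mathrm{FB} = 3a_1/(8a_0) = 0.375$.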
\section{Exercise 2: hadron colliders}
The previous exercise involved essentially a one-dimensional integral, over the angle $\theta$. For an electron-positron collision, this is always the case for a $2\rightarrow 2$ hard process. The next incremental complication arises for hard processes at hadron colliders. Since hadrons are not elementary particles, we have to consider collisions between their constituent quarks and gluons (``partons''), at high enough energies ($E\gg 1$~GeV). This results in the following considerations:
\begin{itemize}
\item The centre-of-mass energy of the colliding partons is not fixed, i.e. $\hat{s}$ is variable. Moreover, since the centre-of-mass frame and the laboratory frame (where observations are made) are not the same, the final-state particles need to be Lorentz-boosted from one frame to the other, in order to construct observable distributions.
\item We need to consider the distribution of momenta of the colliding partons inside the protons, as well as the different contributing quark flavours, characterised by the parton density functions. The parton density function for a parton of flavour $q$ carrying momentum fraction $x$ of the proton at momentum transfer $Q^2$ is denoted by $f_q(x,Q^2)$. This can be accessed via the \verb+LHAPDF+ library~\cite{Whalley:2005nh, lhapdfsite}. For more details on the \verb+LHAPDF+ interface, see Appendix~\ref{app:pdf}.
\item Due to the above two points, we now have essentially four variables that characterise the phase space: $\hat{s}$, the momentum fractions $x_{1,2}$ and the scattering angle $\theta$, plus one constraint allowing us to eliminate one: $\hat{s} = x_1 x_2 S$, where $S$ is the proton-proton centre of mass energy squared. This leaves us with a 3-dimensional phase space for the hard process at hadron colliders.
\item When summing over quark flavours, one has to note that the angle $\cos \theta$ is defined with respect to the incoming particle (as opposed to anti-particle) and the outgoing particle (as opposed to anti-particle). This implies that for example, in a collision of $u \bar{u}$, if $\theta$ is defined with respect to the positive $z$-axis, one must add a contribution for $\bar{u} u$, with $ \theta \rightarrow \pi - \theta$, resulting in the change $\cos \theta \rightarrow - \cos \theta$. Effectively this cancels out the asymmetric part of the distribution in a proton-proton collider (but not in a $p \bar{p}$ collider such as the Tevatron).
\item For the purposes of this exercise we will cut off the di-lepton invariant mass at some value, $Q_\mathrm{min}$. This will appear in the limits of the integrals we perform. For reasonable results, we will choose $Q_\mathrm{min} = 60$~GeV.
\item The matrix element squared has to be multiplied by a factor of $1/3$: this \textit{averages} over the initial quark-anti-quark colour configurations. If we also had quarks in the final state, we would need to \textit{sum} over their colours.
\end{itemize}
The partonic cross section of Eq.~\ref{eq:partonicxs} is still valid in the case of $q\bar{q} \rightarrow Z/\gamma \rightarrow \mu^+ \mu^-$, with the quark charges taken into consideration accordingly. However, we must now consider the hadronic cross section:
\begin{equation}
\frac{\mathrm{d} \sigma} { \mathrm{d} \hat{s} ~\mathrm{d} \cos \theta } = \sum_{q,q'}\int_0^1 \mathrm{d} x_1 \int_0^1\mathrm{d} x_2~ \delta ( \hat{s} - x_1 x_2 S )~ f_q(x_1, \hat{s}) f_{q'}(x_2, \hat{s}) ~\frac{\mathrm{d} \hat{\sigma}} {\mathrm{d} \cos \theta } \;,
\end{equation}
with $\mathrm{d} \hat{\sigma}/\mathrm{d} \cos \theta$ given by Eq.~\ref{eq:partonicxs}, and we have already made the replacement $Q^2 = \hat{s}$ for the PDF factorisation scale. The sum is written here generically over $q$ and $q'$, but should be taken over $q\bar{q}$ pairs for the process we are considering. The integral over the $\delta$-function can then be performed to eliminate one of the dependent variables. We eliminate $x_2$, and remove the integral over $x_1$ by turning it into a differential on the left-hand side:
\begin{eqnarray}
\frac{\mathrm{d} \sigma} { \mathrm{d} \hat{s} ~ \mathrm{d} x_1 ~\mathrm{d} \cos \theta } &=& \int_0^1\mathrm{d} x_2~ \delta \left( S x_1 \left( x_2 - \hat{s}/(S x_1) \right) \right)~ f_q(x_1, \hat{s}) f_{q'}(x_2, \hat{s}) ~\frac{\mathrm{d} \hat{\sigma}} {\mathrm{d} \cos \theta } \;, \nonumber \\
&=& \frac{1}{S x_1}~ f_q(x_1, \hat{s}) f_{q'}(x_2=\hat{s}/(S x_1), \hat{s}) ~\frac{\mathrm{d} \hat{\sigma}} {\mathrm{d} \cos \theta } \;.
\end{eqnarray}
We define $\tau \equiv \hat{s} / S$ and the rapidity of the outgoing di-lepton system:
\begin{equation}
y \equiv \frac{1}{2} \ln \left( \frac{E+p_z} { E-p_z } \right) = \frac{1}{2} \ln \left( \frac{x_1} {x_2} \right) \;,
\end{equation}
by which $x_{1,2} = \sqrt{\tau} \mathrm{e}^{\pm y}$, and:
\begin{equation}
\mathrm{d} x_1 \,\mathrm{d} \hat{s}/(S x_1) = \mathrm{d} \tau \,\mathrm{d} y\;.
\end{equation}
We finally arrive at:
\begin{equation}
\frac{\mathrm{d} \sigma} { \mathrm{d} \tau ~ \mathrm{d} y ~\mathrm{d} \cos \theta } = \sum_{q,q'} ~ f_q(x_1=\sqrt{\tau} \mathrm{e}^{+y}, \hat{s} = \tau S) f_{q'}(x_2= \sqrt{\tau} \mathrm{e}^{-y}, \hat{s}=\tau S) ~\frac{\mathrm{d} \hat{\sigma}} {\mathrm{d} \cos \theta }\;.
\end{equation}
The integration over the phase space can be performed via the Monte Carlo method by selecting $\tau$, $y$ and $\cos \theta$ randomly. Since we know we have a heavy resonance (the $Z$ boson) in the process, we can attempt to perform a Jacobian transformation as was described in Section~\ref{sec:jacob}. Note, however, that in this case the phase space is not flat after transformation since we have the photon contribution at low invariant masses, as well as the interference contribution. Nevertheless, the transformation is still useful and it is recommended. One can experiment with the parameters of the transformation relation to see if the variance can be decreased by clever choices. Hence, for random numbers $R_i \in (0,1)$, $i = 1,2$:
\begin{eqnarray}
\cos \theta &=& 2 R _1 - 1 \nonumber \\
y &=& (2 R_2 - 1 ) y_\mathrm{max} \;,
\end{eqnarray}
with the maximum value of the rapidity given by: $y_\mathrm{max} = - 0.5 \ln (\tau)$.
The $\tau$-integral has to be more carefully considered. Keeping the parameters $M_\mathrm{tr}$ and $\Gamma_\mathrm{tr}$ free for the moment:
\begin{equation}
\tau S = \hat{s} = M_\mathrm{tr} \Gamma_\mathrm{tr} \tan (\rho) + M_{\mathrm{tr}}^2 \;,
\end{equation}
with $\rho$ in $(\rho_\mathrm{min}, \rho_\mathrm{max})$, generated using random number $R_3 \in (0,1)$ via:
\begin{equation}
\rho = \rho_{\mathrm{min}} + (\rho_\mathrm{max} - \rho_\mathrm{min}) R_3 \;,
\end{equation}
where $\rho$ is limited by the choice of $Q_\mathrm{min}$ and the hadron centre-of-mass energy $\sqrt{S}$:
\begin{eqnarray}
\rho_\mathrm{min} = \tan^{-1} \left( \frac{Q_\mathrm{min}^2 - M_\mathrm{tr}^2} {\Gamma_\mathrm{tr} M_\mathrm{tr}}\right) \;, \nonumber \\
\rho_\mathrm{max} = \tan^{-1} \left( \frac{S - M_\mathrm{tr}^2} {\Gamma_\mathrm{tr} M_\mathrm{tr}}\right)\;.
\end{eqnarray}
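The mapping is easy to check in isolation: drawing $\rho$ uniformly and transforming yields $\hat{s}$ values that always lie in $(Q_\mathrm{min}^2, S)$ and cluster around the resonance. A \verb+Python+ sketch (each sample also carries a Jacobian weight proportional to $(\hat{s}-M_\mathrm{tr}^2)^2 + \Gamma_\mathrm{tr}^2 M_\mathrm{tr}^2$, not shown here):

```python
import math
import random

def sample_shat(S, q_min, m_tr, gamma_tr, n=2000, seed=4):
    # Breit-Wigner (arctangent) importance sampling of the s_hat integral
    rho_min = math.atan((q_min**2 - m_tr**2) / (gamma_tr * m_tr))
    rho_max = math.atan((S - m_tr**2) / (gamma_tr * m_tr))
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        rho = rho_min + (rho_max - rho_min) * rng.random()
        samples.append(m_tr * gamma_tr * math.tan(rho) + m_tr**2)
    return samples
```

With $M_\mathrm{tr} = M_Z$ and $\Gamma_\mathrm{tr} = \Gamma_Z$ the bulk of the samples land within a few widths of the $Z$ pole, which is exactly the point of the transformation.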
The integration can be performed in an equivalent way as in Exercise 1, and the maximum weight can be stored to perform the `hit-or-miss' unweighting of events. This is again, exactly equivalent to the case of lepton colliders. A final complication for the case of the hard processes at hadron colliders is boosting between the centre-of-mass frame (where the calculation of the partonic cross section was performed) into the lab frame. We already know the 4-momenta in the lab frame for the incoming partons:
\begin{eqnarray}
p_q^\mathrm{lab} &=& \frac{\sqrt{S}}{2} (x_1, 0, 0, x_1) \;, \nonumber \\
p_{q'}^\mathrm{lab} &=& \frac{\sqrt{S}}{2} (x_2, 0, 0, -x_2) \;,
\end{eqnarray}
where $\sqrt{S}$ is the hadronic centre-of-mass energy, so that $\sqrt{\hat{s}} = \sqrt{x_1 x_2 S} = E_\mathrm{cm}$ is the partonic one. The Lorentz boost factor along the $z$-axis between the lab and centre-of-mass frames can be calculated and is given by:
\begin{equation}
\beta = \frac{x_2 - x_1} {x_2 + x_1}\;,
\end{equation}
where $\beta = v/c$. And hence, the momenta in the centre-of-mass frame:
\begin{eqnarray}
p_\mu^\mathrm{com} &=& \frac{\sqrt{\hat{s}}}{2} (1,\sin \theta \cos \phi, \sin \theta \sin \phi, \cos \theta) \; \nonumber \\
p_{\bar{\mu}}^\mathrm{com} &=& \frac{\sqrt{\hat{s}}}{2} (1,-\sin \theta \cos \phi, -\sin \theta \sin \phi,- \cos \theta) \;,
\end{eqnarray}
(where $\phi$ has been generated randomly and uniformly using a random number $R_4 \in (0,1)$: $\phi = 2 \pi R_4$) can be transformed into those in the lab frame via a Lorentz boost along the $z$-direction:
\begin{equation}
p^\mathrm{lab} = ( \gamma p_0 - \gamma \beta p_3, p_1, p_2, -\gamma \beta p_0 + \gamma p_3 )\;,
\end{equation}
where $\gamma = 1/\sqrt{1 - \beta^2}$.
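This final step can be sketched as follows: construct the muon momenta in the centre-of-mass frame, boost them with the $\beta$ defined above, and check that the invariant mass of the pair is unchanged and that the total lab-frame momentum matches that of the incoming partons:

```python
import math

def boost_z(p, beta):
    # boost along z with the convention of the text:
    # p_lab = (g p0 - g b p3, p1, p2, -g b p0 + g p3)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma * p[0] - gamma * beta * p[3], p[1], p[2],
            -gamma * beta * p[0] + gamma * p[3])

def muons_in_lab(S, x1, x2, theta, phi):
    s_hat = x1 * x2 * S
    e = math.sqrt(s_hat) / 2.0
    st, ct = math.sin(theta), math.cos(theta)
    p_mu    = (e,  e * st * math.cos(phi),  e * st * math.sin(phi),  e * ct)
    p_mubar = (e, -e * st * math.cos(phi), -e * st * math.sin(phi), -e * ct)
    beta = (x2 - x1) / (x2 + x1)
    return boost_z(p_mu, beta), boost_z(p_mubar, beta)
```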
The solution to this exercise is provided as a \verb+Python+ program as well, and generates a set of histograms using the \verb+Matplotlib+ library.
Some suggestions for further investigations:
\begin{itemize}
\item Calculate the cross sections for di-lepton production via $Z/\gamma$ at proton-proton colliders at 8~TeV and 14~TeV using the \verb+cteq6l1+ PDF sets and compare to the \texttt{MadGraph} results: $\sigma(8~\mathrm{TeV})=(881.8\pm 1)$~pb and $\sigma(14~\mathrm{TeV})=(1684\pm 1.3)$~pb. Note that the minimum same-flavour lepton invariant mass was taken to be $60$~GeV and no other cuts were imposed on the leptons.
\item Consider the modifications necessary to simulate a $p\bar{p}$ collider.
\item The Les Houches file format allows one to write parton-level events and feed them into a general-purpose Monte Carlo for parton showering and hadronization. An explanation of the file format is given in Appendix~\ref{app:fileformat}. An example Les Houches file and an input file for running with \texttt{Herwig++} are attached.
\end{itemize}
\section{After the hard process}
Even though we will not go into the technical details of the implementation of the following steps in event generation, it is interesting to list some of the considerations necessary to perform them. The factorised view of Monte Carlo event generation has already been illustrated by Figs.~\ref{fig:step1} to~\ref{fig:step5}. Step-by-step, some points that need to be considered are:
\begin{enumerate}
\item The hard process can be $2\rightarrow N$, where $N$ is any number of particles.
\item Decays can be easily implemented on top of any process in a factorised way, given that the resonance is narrow enough. If this is the case, one can consider the decay of a massive resonance in its rest frame, and then boost the decay products into the lab frame according to the particle's boost in that frame.
\item Most parton showers are based on collinear and soft splitting kernels that capture the enhanced regions. There are two possibilities for parton showers, with some technical differences in the implementation: radiation from final-state particles or radiation from initial-state particles. The difference arises because initial-state particles need to `evolve' backwards to the incoming hadrons, whereas final-state particles have to evolve forward to hadrons.
\item At some scale, $\mathcal{O}(1~\mathrm{GeV})$, perturbation theory breaks down and a non-perturbative model needs to take over. The phenomenon is called hadronization. The outgoing quarks and gluons need to be treated through some model that groups them into QCD colour-singlets. This is done in \texttt{Herwig++}, for example, via a cluster model, and in \texttt{Pythia} via a string model. Another non-perturbative effect involves the interaction of multiple partons. This is modelled as multiple QCD $2\rightarrow 2$ interactions, where all the initial-state particles are evolved back to gluons (in \texttt{Herwig++}), so as to avoid issues related to the treatment of valence quarks in the fragmented protons.
\end{enumerate}
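To make point 2 above concrete, here is a toy two-body decay in \verb+Python+: a narrow resonance decays isotropically to two massless daughters in its rest frame, and the daughters are then boosted by the parent's lab-frame motion (only a boost along $z$ is shown, which suffices for a parent moving along the beam axis):

```python
import math
import random

def two_body_decay(parent, m_parent, seed=5):
    # isotropic decay to two massless daughters in the parent rest frame
    rng = random.Random(seed)
    ct = 2.0 * rng.random() - 1.0
    st = math.sqrt(1.0 - ct * ct)
    phi = 2.0 * math.pi * rng.random()
    e = m_parent / 2.0
    d1 = (e,  e * st * math.cos(phi),  e * st * math.sin(phi),  e * ct)
    d2 = (e, -d1[1], -d1[2], -d1[3])
    # boost both daughters by the parent's velocity along z
    beta = parent[3] / parent[0]
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    boosted = []
    for d in (d1, d2):
        boosted.append((gamma * (d[0] + beta * d[3]), d[1], d[2],
                        gamma * (d[3] + beta * d[0])))
    return boosted
```

Energy and momentum are conserved by construction, and each daughter stays massless; for a massive daughter one would replace $e = m_\mathrm{parent}/2$ by the appropriate two-body kinematics.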
\section{Conclusions}
We have presented a short introduction to Monte Carlo event generators and directly delved into two simple examples. Solutions to the exercises are provided, along with suggestions on how one can go beyond them.
\section{Acknowledgements}
The author would like to thank Christoph Grab and Nicolas Chanon for the opportunity to lecture at the ``Advanced Scientific Computing Workshop'' at ETH Z\"urich, as well as the students who attended the course and provided helpful feedback. Support is acknowledged in part by the Swiss National Science Foundation (SNF) under contract 200020-149517, by the European Commission through the ``LHCPhenoNet'' Initial Training Network PITN-GA-2010-264564, MCnetITN FP7 Marie Curie Initial Training Network PITN-GA-2012-315877 and by a Marie Curie Intra European Fellowship within the 7th European Community Framework Programme (grant no. PIEF-GA-2013-622071).
\bibliographystyle{JHEP.bst}
Mustafa Huseynovich Yagly-Ogly (15 August 1934, Kharkov – 2 February 1992, Kharkov) was a Soviet weightlifter, a three-time medallist of the USSR championships (1958, 1960, 1963) and a medallist of the 1960 European championship. Honoured Master of Sports of the USSR.
Biography
Mustafa Yagly-Ogly was born on 15 August 1934 in Kharkov. He began weightlifting at the age of 12 under Mikhail Svetlichny and later continued training under Alexander Smushkevich.
In 1958 and 1960 he was a bronze medallist of the USSR championship in the lightweight class. In 1960 he was included in the national team for the European championship in Milan, where he won a silver medal, losing only to the well-known Polish athlete Marian Zieliński.
From 1962 he competed in the light-heavyweight category. In 1963 he became a silver medallist of the USSR championship, held as part of the III Summer Spartakiad of the Peoples of the USSR.
He ended his sporting career in 1967 and subsequently taught at the Kharkov Institute of Transport Engineers.
He died on 2 February 1992 and was buried at the 3rd city cemetery of Kharkov.
Family
Hasan Yagly-Ogly (1927–1994), his brother, was also a Soviet weightlifter and a European champion (1957).
Notes
External links
Profile on the Lift Up website
Profile on the "Sport Russia" portal
Weightlifters of the USSR
Honoured Masters of Sports of the USSR
Buried at the 3rd city cemetery of Kharkov
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,591 |
The Cumbre Vieja ridge, or southern sector, is an active volcanic ridge on the island of La Palma in the Canary Islands. It is a volcanic structure that was reactivated around 0.123 Ma through the 21.5 km long N-S rift that builds up what is called the Cumbre Vieja ridge. This backbone extends in a roughly north-to-south direction, comprising the southern half of La Palma, with the summit crest and the flanks marked by dozens of craters and cones. The most recent eruption began on 19 September 2021 in a wooded area of the locality of Las Manchas known as Cabeza de Vaca. The lava flows quickly reached populated areas downhill, spread through settlements and banana plantations, destroyed thousands of buildings and ultimately spilled over steep cliffs into the ocean, enlarging the island in several places. The volcano quietened down on 13 December 2021, and on 25 December 2021 the local government declared the eruption over.
The seven historical eruptions on the island over the last 500 years have all occurred on this ridge.
See also
2021 La Palma volcanic eruption
1971 La Palma volcanic eruption
1949 La Palma volcanic eruption
Cumbre Vieja Natural Park
References
La Palma
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,168 |
Flavius Lupicinus was a politician and military commander of the Roman Empire in the 4th century.
In 359 Lupicinus was appointed magister equitum of Gaul by the emperor Julian the Apostate. In 360 he led auxiliary troops from Bononia to Rutupiae (Richborough) in Britain to fight the Scoti and Picts who had invaded.
In 364–367, under the emperor Jovian, Lupicinus was magister equitum of the East (Oriens).
The emperor Valens (364–378) made him consul in 367, together with Flavius Iovinus.
In 376 Lupicinus was comes rei militaris under Valens and fought the Goths and the Huns on the Danube. He had his headquarters in Marcianopolis in Moesia and took part in the Gothic War (376–382), in the battles against Fritigern and probably in the Battle of Adrianople on 9 August 378.
Literature
Birley, Anthony Richard, The Roman Government of Britain, Oxford University Press, 2005, ISBN 0-19-925237-8, pp. 425–426.
Burns, Thomas Samuel, Barbarians Within the Gates of Rome, Indiana University Press, 1994, ISBN 0-253-31288-4, pp. 24–26.
Jones, Arnold Hugh Martin, John Robert Martindale, John Morris, "Lupicinus 6", The Prosopography of the Later Roman Empire, volume 1, Cambridge University Press, 1992, ISBN 0-521-07233-6, pp. 520–521.
Sources
External links
"The Roman Custer: Emperor Valens and the Battle of Adrianople, AD378". Lupicinus
Imperial Roman consuls
Roman military commanders
Thrace
Moesia
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,948 |
{"url":"http:\/\/pololu.github.io\/jrk-g2-arduino\/classJrkG2Serial.html","text":"Jrk G2 library for Arduino\nJrkG2Serial Class Reference\n\n#include <JrkG2.h>\n\nInheritance diagram for JrkG2Serial:\n[legend]\nCollaboration diagram for JrkG2Serial:\n[legend]\n\n## Public Member Functions\n\nJrkG2Serial (Stream &stream, uint8_t deviceNumber=255)\n\nuint8_t\u00a0getDeviceNumber ()\nGets the serial device number this object is using.\n\nuint8_t\u00a0getLastError ()\n\nMotor control commands\nvoid\u00a0setTarget (uint16_t target)\n\nvoid\u00a0setTargetLowResRev (uint8_t target)\n\nvoid\u00a0setTargetLowResFwd (uint8_t target)\n\nvoid\u00a0forceDutyCycleTarget (int16_t dutyCycle)\n\nvoid\u00a0forceDutyCycle (int16_t dutyCycle)\n\nvoid\u00a0stopMotor ()\n\nuint16_t\u00a0getInput ()\n\nuint16_t\u00a0getTarget ()\n\nuint16_t\u00a0getFeedback ()\n\nuint16_t\u00a0getScaledFeedback ()\n\nint16_t\u00a0getIntegral ()\n\nint16_t\u00a0getDutyCycleTarget ()\n\nint16_t\u00a0getDutyCycle ()\n\nuint8_t\u00a0getCurrentLowRes ()\n\nbool\u00a0getPIDPeriodExceeded ()\n\nuint16_t\u00a0getPIDPeriodCount ()\n\nuint16_t\u00a0getErrorFlagsHalting ()\n\nuint16_t\u00a0getErrorFlagsOccurred ()\n\nJrkG2ForceMode\u00a0getForceMode ()\n\nuint16_t\u00a0getVinVoltage ()\n\nuint16_t\u00a0getCurrent ()\n\nJrkG2Reset\u00a0getDeviceReset ()\n\nuint32_t\u00a0getUpTime ()\n\nuint16_t\u00a0getRCPulseWidth ()\n\nuint16_t\u00a0getRawCurrent ()\n\nuint16_t\u00a0getEncodedHardCurrentLimit ()\n\nint16_t\u00a0getLastDutyCycle ()\n\nuint8_t\u00a0getCurrentChoppingConsecutiveCount ()\n\nuint8_t\u00a0getCurrentChoppingOccurrenceCount ()\n\nRAM settings commands\nvoid\u00a0setResetIntegral (bool reset)\n\nbool\u00a0getResetIntegral ()\n\nvoid\u00a0setCoastWhenOff (bool coast)\n\nbool\u00a0getCoastWhenOff ()\n\nvoid\u00a0setProportionalCoefficient (uint16_t multiplier, uint8_t exponent)\n\nuint16_t\u00a0getProportionalMultiplier ()\n\nuint8_t\u00a0getProportionalExponent ()\n\nvoid\u00a0setIntegralCoefficient 
(uint16_t multiplier, uint8_t exponent)\n\nuint16_t\u00a0getIntegralMultiplier ()\n\nuint8_t\u00a0getIntegralExponent ()\n\nvoid\u00a0setDerivativeCoefficient (uint16_t multiplier, uint8_t exponent)\n\nuint16_t\u00a0getDerivativeMultiplier ()\n\nuint8_t\u00a0getDerivativeExponent ()\n\nvoid\u00a0setPIDPeriod (uint16_t period)\n\nuint16_t\u00a0getPIDPeriod ()\n\nvoid\u00a0setIntegralLimit (uint16_t limit)\n\nuint16_t\u00a0getIntegralLimit ()\n\nvoid\u00a0setMaxDutyCycleWhileFeedbackOutOfRange (uint16_t duty)\n\nuint16_t\u00a0getMaxDutyCycleWhileFeedbackOutOfRange ()\n\nvoid\u00a0setMaxAccelerationForward (uint16_t accel)\n\nuint16_t\u00a0getMaxAccelerationForward ()\n\nvoid\u00a0setMaxAccelerationReverse (uint16_t accel)\n\nuint16_t\u00a0getMaxAccelerationReverse ()\n\nvoid\u00a0setMaxAcceleration (uint16_t accel)\n\nvoid\u00a0setMaxDecelerationForward (uint16_t decel)\n\nuint16_t\u00a0getMaxDecelerationForward ()\n\nvoid\u00a0setMaxDecelerationReverse (uint16_t decel)\n\nuint16_t\u00a0getMaxDecelerationReverse ()\n\nvoid\u00a0setMaxDeceleration (uint16_t decel)\n\nvoid\u00a0setMaxDutyCycleForward (uint16_t duty)\n\nuint16_t\u00a0getMaxDutyCycleForward ()\n\nvoid\u00a0setMaxDutyCycleReverse (uint16_t duty)\n\nuint16_t\u00a0getMaxDutyCycleReverse ()\n\nvoid\u00a0setMaxDutyCycle (uint16_t duty)\n\nvoid\u00a0setEncodedHardCurrentLimitForward (uint16_t encoded_limit)\n\nuint16_t\u00a0getEncodedHardCurrentLimitForward ()\n\nvoid\u00a0setEncodedHardCurrentLimitReverse (uint16_t encoded_limit)\n\nuint16_t\u00a0getEncodedHardCurrentLimitReverse ()\n\nvoid\u00a0setEncodedHardCurrentLimit (uint16_t encoded_limit)\n\nvoid\u00a0setBrakeDurationForward (uint8_t duration)\n\nuint8_t\u00a0getBrakeDurationForward ()\n\nvoid\u00a0setBrakeDurationReverse (uint8_t duration)\n\nuint8_t\u00a0getBrakeDurationReverse ()\n\nvoid\u00a0setBrakeDuration (uint8_t duration)\n\nvoid\u00a0setSoftCurrentLimitForward (uint16_t current)\n\nuint16_t\u00a0getSoftCurrentLimitForward 
()\n\nvoid\u00a0setSoftCurrentLimitReverse (uint16_t current)\n\nuint16_t\u00a0getSoftCurrentLimitReverse ()\n\nvoid\u00a0setSoftCurrentLimit (uint16_t current)\n\nLow-level settings and variables commands\nvoid\u00a0getEEPROMSettings (uint8_t offset, uint8_t length, uint8_t *buffer)\n\nvoid\u00a0getRAMSettings (uint8_t offset, uint8_t length, uint8_t *buffer)\n\nvoid\u00a0setRAMSettings (uint8_t offset, uint8_t length, uint8_t *buffer)\n\nvoid\u00a0getVariables (uint8_t offset, uint8_t length, uint8_t *buffer)\n\n## Protected Attributes\n\nuint8_t\u00a0_lastError = 0\n\n## Detailed Description\n\nRepresents a serial connection to a Jrk G2.\n\nFor the high-level commands you can use on this object, see JrkG2Base.\n\nDefinition at line 1679 of file JrkG2.h.\n\n## Constructor & Destructor Documentation\n\n JrkG2Serial::JrkG2Serial ( Stream & stream, uint8_t deviceNumber = 255 )\ninline\n\nCreates a new JrkG2Serial object.\n\nThe stream argument should be a hardware or software serial object. This class will store a pointer to it and use it to communicate with the Jrk. You should initialize it and set it to use the correct baud rate before sending commands with this class.\n\nThe deviceNumber argument is optional. If it is omitted or 255, the JrkG2Serial object will use the compact protocol. 
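To make the compact vs. Pololu protocol distinction concrete, here is a sketch of how a "Set target" command is framed on the wire in each protocol, following the serial protocol section of the Jrk G2 user's guide. JrkG2Serial performs this framing internally; the helper names below are hypothetical and for illustration only.

```cpp
#include <cstdint>
#include <vector>

// Compact protocol: the command byte carries the low 5 bits of the target,
// and one data byte carries the high 7 bits.
std::vector<uint8_t> frameSetTargetCompact(uint16_t target)
{
  uint8_t cmd = 0xC0 + (target & 0x1F);       // low 5 bits in the command byte
  uint8_t data = (target >> 5) & 0x7F;        // high 7 bits in the data byte
  return {cmd, data};
}

// Pololu protocol: 0xAA, the device number, then the same command byte
// with its most significant bit cleared, followed by the data byte.
std::vector<uint8_t> frameSetTargetPololu(uint8_t deviceNumber, uint16_t target)
{
  std::vector<uint8_t> f = frameSetTargetCompact(target);
  return {0xAA, deviceNumber, static_cast<uint8_t>(f[0] & 0x7F), f[1]};
}
```

The extra address byte is what lets multiple Jrk controllers share one serial bus, which is why the Pololu protocol requires a device number in the range 0 to 127.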
If it is a number between 0 and 127, it specifies the device number to use in the Pololu protocol, allowing you to control multiple Jrk controllers on a single serial bus.\n\nFor example, to use the first open hardware serial port to send compact protocol commands to one Jrk, write this at the top of your sketch:\n\nJrkG2Serial jrk(SERIAL_PORT_HARDWARE_OPEN);\n\nFor example, to use a SoftwareSerial port and send Pololu protocol commands to two different Jrk G2 controllers, write this at the top of your sketch:\n\n#include <SoftwareSerial.h>\nSoftwareSerial jrkG2Serial(10, 11);\nJrkG2Serial jrk1(jrkG2Serial, 11);\nJrkG2Serial jrk2(jrkG2Serial, 12);\n\nDefinition at line 1711 of file JrkG2.h.\n\n## Member Function Documentation\n\n void JrkG2Base::forceDutyCycle ( int16_t dutyCycle )\ninlineinherited\n\nForces the duty cycle of the Jrk to a value in the range -600 to +600.\n\nThe jrk will ignore the results of the usual algorithm for choosing the duty cycle, and instead set it to be equal to the value specified by this command, ignoring all motor limits except the maximum duty cycle parameters, and ignoring the 'Input invalid', 'Input disconnect', and 'Feedback disconnect' errors. This command will have an immediate effect, regardless of the PID period. 
The jrk will set its 'Integral' variable to 0 while in this mode.\n\nThis is useful if the jrk is configured to use feedback but you want to take control of the motor for some time, without respecting most motor limits.\n\nExample usage:\n\njrkG2.forceDutyCycle(250);\n\nThis function sends a \"Force duty cycle\" command to the Jrk G2, which clears the \"Awaiting command\" error bit.\n\nTo get out of this mode, use setTarget(), setTargetLowResFwd(), setTargetLowResRev(), forceDutyCycleTarget(), or stopMotor().\n\nDefinition at line 270 of file JrkG2.h.\n\n void JrkG2Base::forceDutyCycleTarget ( int16_t dutyCycle )\ninlineinherited\n\nForces the duty cycle target of the Jrk to a value in the range -600 to +600.\n\nThe Jrk will ignore the results of the usual algorithm for choosing the duty cycle target, and instead set it to be equal to the target specified by this command. The Jrk will set its 'Integral' variable to 0 while in this mode.\n\nThis is useful if the Jrk is configured to use feedback but you want to take control of the motor for some time, while still respecting errors and motor limits as usual.\n\nExample usage:\n\njrkG2.forceDutyCycleTarget(250);\n\nThis function sends a \"Force duty cycle target\" command to the Jrk G2, which clears the \"Awaiting command\" error bit.\n\nTo get out of this mode, use setTarget(), setTargetLowResFwd(), setTargetLowResRev(), forceDutyCycle(), or stopMotor().\n\nDefinition at line 238 of file JrkG2.h.\n\n uint16_t JrkG2Base::getAnalogReading ( JrkG2Pin pin )\ninlineinherited\n\nGets the analog reading from the specified pin.\n\nThe reading is left-justified, so 0xFFFE represents a voltage equal to the Jrk's 5V pin (approximately 4.8 V).\n\nReturns JrkG2InputNull if the analog reading is disabled or not ready or the pin is invalid.\n\nDefinition at line 602 of file JrkG2.h.\n\n uint8_t JrkG2Base::getBrakeDurationForward ( )\ninlineinherited\n\nGets the brake duration when switching from forward to 
reverse from the Jrk's RAM settings, in units of 5 ms.\n\nDefinition at line 1239 of file JrkG2.h.\n\n uint8_t JrkG2Base::getBrakeDurationReverse ( )\ninlineinherited\n\nGets the brake duration when switching from reverse to forward from the Jrk's RAM settings, in units of 5 ms.\n\nDefinition at line 1261 of file JrkG2.h.\n\n bool JrkG2Base::getCoastWhenOff ( )\ninlineinherited\n\nGets the \"Coast when off\" setting from the Jrk's RAM settings.\n\nDefinition at line 767 of file JrkG2.h.\n\n uint16_t JrkG2Base::getCurrent ( )\ninlineinherited\n\nGets the Jrk's measurement of the current running through the motor, in milliamps.\n\nDefinition at line 518 of file JrkG2.h.\n\n uint8_t JrkG2Base::getCurrentChoppingConsecutiveCount ( )\ninlineinherited\n\nGets the number of consecutive PID periods during which current chopping due to the hard current limit has been active.\n\nDefinition at line 673 of file JrkG2.h.\n\n uint8_t JrkG2Base::getCurrentChoppingOccurrenceCount ( )\ninlineinherited\n\nGets and clears the \"Current chopping occurrence count\" variable, which is the number of PID periods during which current chopping due to the hard current limit has been active, since the last time the variable was cleared.\n\nThis command is only valid for the Jrk G2 18v19, 24v13, 18v27, and 24v21. The Jrk G2 21v3 cannot sense when current chopping occurs so this command will always return 0.\n\nDefinition at line 688 of file JrkG2.h.\n\n uint8_t JrkG2Base::getCurrentLowRes ( )\ninlineinherited\n\nGets the most-significant 8 bits of the \"Current\" variable.\n\nThe Jrk G2 supports this command mainly to be compatible with older Jrk models. 
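The relationship between getCurrentLowRes() and getCurrent() described above is a simple byte truncation: the low-resolution value is the most significant 8 bits of the 16-bit milliamp reading. A minimal sketch (the helper name is hypothetical):

```cpp
#include <cstdint>

// The low-resolution current is the high byte of the 16-bit "Current"
// variable, so each low-res count represents 256 mA.
uint8_t currentLowResFromCurrent(uint16_t currentMilliamps)
{
  return currentMilliamps >> 8;
}
```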
In new applications, we recommend using getCurrent(), which provides a higher-resolution measurement.\n\nDefinition at line 410 of file JrkG2.h.\n\n uint8_t JrkG2Base::getDerivativeExponent ( )\ninlineinherited\n\nGets the exponent part of the derivative coefficient from the Jrk's RAM settings.\n\nDefinition at line 891 of file JrkG2.h.\n\n uint16_t JrkG2Base::getDerivativeMultiplier ( )\ninlineinherited\n\nGets the multiplier part of the derivative coefficient from the Jrk's RAM settings.\n\nDefinition at line 882 of file JrkG2.h.\n\n JrkG2Reset JrkG2Base::getDeviceReset ( )\ninlineinherited\n\nGets the cause of the Jrk's last full microcontroller reset.\n\nExample usage:\n\nif (jrk.getDeviceReset() == JrkG2Reset::Brownout)\n{\n\/\/ There was a brownout reset; the power supply could not keep up.\n}\n\nDefinition at line 532 of file JrkG2.h.\n\n bool JrkG2Base::getDigitalReading ( JrkG2Pin pin )\ninlineinherited\n\nGets a digital reading from the specified pin.\n\nA return value of 0 means low while 1 means high. In most cases, pins configured as analog inputs cannot be read as digital inputs, so their values will be 0. See getAnalogReading() for those pins.\n\nExample usage:\n\nif (jrk.getDigitalReading(JrkG2Pin::RC))\n{\n\/\/ Something is driving the RC pin high.\n}\n\nDefinition at line 628 of file JrkG2.h.\n\n int16_t JrkG2Base::getDutyCycle ( )\ninlineinherited\n\nGets the duty cycle variable.\n\nThe duty cycle variable is the duty cycle at which the jrk is currently driving the motor. A value of -600 means full speed reverse, while a value of 600 means full speed forward. A value of 0 means stopped (braking or coasting). The duty cycle could be different from the duty cycle target because it normally takes into account the Jrk's configurable motor limits and errors. 
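Given the -600 to +600 range described above, a duty cycle reading maps to a signed percentage of full speed with a single scaling step (hypothetical helper):

```cpp
#include <cstdint>

// Convert a Jrk duty cycle (-600..600) to a signed percentage of full
// speed: +100% is full forward, -100% is full reverse, 0% is stopped.
double dutyCyclePercent(int16_t dutyCycle)
{
  return dutyCycle * 100.0 / 600.0;
}
```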
The duty cycle can be overridden with forceDutyCycle().\n\nDefinition at line 400 of file JrkG2.h.\n\n int16_t JrkG2Base::getDutyCycleTarget ( )\ninlineinherited\n\nGets the duty cycle target variable.\n\nIn general, this is the duty cycle that the Jrk is trying to achieve. A value of -600 or less means full speed reverse, while a value of 600 or more means full speed forward. A value of 0 means stopped (braking or coasting). In no feedback mode (open-loop speed control mode), the duty cycle target is normally the target minus 2048. In other feedback modes, the duty cycle target is normally the sum of the proportional, integral, and derivative terms of the PID algorithm. In any mode, the duty cycle target can be overridden with forceDutyCycleTarget().\n\nIf an error is stopping the motor, the duty cycle target variable will not be directly affected, but the duty cycle variable will change\/decelerate to zero.\n\nDefinition at line 384 of file JrkG2.h.\n\n void JrkG2Base::getEEPROMSettings ( uint8_t offset, uint8_t length, uint8_t * buffer )\ninlineinherited\n\nGets a contiguous block of settings from the Jrk G2's EEPROM.\n\nThe maximum length that can be fetched is 15 bytes.\n\nExample usage:\n\n\/\/ Get the Jrk's serial device number.\nuint8_t deviceNumber;\njrk.getEEPROMSettings(0x28, 1, &deviceNumber);\n\nFor information on how the settings are encoded, see the Jrk G2 user's guide.\n\nDefinition at line 1357 of file JrkG2.h.\n\n uint16_t JrkG2Base::getEncodedHardCurrentLimit ( )\ninlineinherited\n\nGets the encoded value representing the hardware current limit the jrk is currently using.\n\nThis command is only valid for the Jrk G2 18v19, 24v13, 18v27, and 24v21. 
The Jrk G2 21v3 does not have a configurable hard current limit.\n\nDefinition at line 653 of file JrkG2.h.\n\n uint16_t JrkG2Base::getEncodedHardCurrentLimitForward ( )\ninlineinherited\n\nGets the encoded hard current limit for driving in the forward direction from the Jrk's RAM settings.\n\nThis command is only valid for the Jrk G2 18v19, 24v13, 18v27, and 24v21. The Jrk G2 21v3 does not have a configurable hard current limit.\n\nDefinition at line 1169 of file JrkG2.h.\n\n uint16_t JrkG2Base::getEncodedHardCurrentLimitReverse ( )\ninlineinherited\n\nGets the encoded hard current limit for driving in the reverse direction from the Jrk's RAM settings.\n\nThis command is only valid for the Jrk G2 18v19, 24v13, 18v27, and 24v21. The Jrk G2 21v3 does not have a configurable hard current limit.\n\nDefinition at line 1198 of file JrkG2.h.\n\n uint16_t JrkG2Base::getErrorFlagsHalting ( )\ninlineinherited\n\nGets the errors that are currently stopping the motor and clears any latched errors that are enabled. Calling this function is equivalent to reading the \"Currently stopping motor?\" column in the Errors tab of the configuration utility, and then clicking the \"Clear Errors\" button.\n\nEach bit in the returned register represents a different error. The bits are defined in the JrkG2Error enum.\n\nExample usage:\n\nuint16_t errors = jrk.getErrorFlagsHalting();\nif (errors & (1 << (uint8_t)JrkG2Error::NoPower))\n{\n\/\/ handle loss of power\n}\n\nIt is possible to read this variable without clearing the bits in it using getVariables().\n\nDefinition at line 454 of file JrkG2.h.\n\n uint16_t JrkG2Base::getErrorFlagsOccurred ( )\ninlineinherited\n\nGets the errors that have occurred since the last time this function was called. 
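Decoding these error registers is plain bit masking. The self-contained sketch below stubs out two members of the library's JrkG2Error enum; the bit positions shown are assumptions taken from the Jrk G2 user's guide and should be checked against your library version.

```cpp
#include <cstdint>

// Stub mirroring two members of the library's JrkG2Error enum.
// Assumed bit positions (per the Jrk G2 user's guide): bit 1 = no power,
// bit 2 = motor driver error.
enum class JrkG2Error : uint8_t
{
  NoPower = 1,
  MotorDriver = 2,
};

// Test whether a particular error bit is set in an error register value
// returned by getErrorFlagsHalting() or getErrorFlagsOccurred().
bool errorIsSet(uint16_t flags, JrkG2Error e)
{
  return flags & (1 << static_cast<uint8_t>(e));
}
```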
Unlike getErrorFlagsHalting(), calling this function has no effect on the motor.\n\nNote that the Jrk G2 Control Center constantly clears the bits in this register, so if you are running the Jrk G2 Control Center then you will not be able to reliably detect errors with this function.\n\nEach bit in the returned register represents a different error. The bits are defined in the JrkG2Error enum.\n\nExample usage:\n\nuint16_t errors = jrk.getErrorFlagsOccurred();\nif (errors & (1 << (uint8_t)JrkG2Error::MotorDriver))\n{\n\/\/ handle a motor driver error\n}\n\nIt is possible to read this variable without clearing the bits in it using getVariables().\n\nDefinition at line 483 of file JrkG2.h.\n\ninlineinherited\n\nGets the raw pulse rate or pulse width measured on the Jrk's FBT (tachometer feedback) pin.\n\nIn pulse counting mode, this will be the number of pulses on the FBT pin seen in the last N PID periods, where N is the \"Pulse samples\" setting.\n\nIn pulse timing mode, this will be a measurement of the width of pulses on the FBT pin. This measurement is affected by several configurable settings.\n\nDefinition at line 581 of file JrkG2.h.\n\n uint16_t JrkG2Base::getFeedback ( )\ninlineinherited\n\nGets the feedback variable.\n\nThe feedback variable is a raw, unscaled feedback value, representing a measurement taken by the Jrk of the output of the system. In analog feedback mode, the feedback is a measurement of the voltage on the FBA pin, where 0 is 0 V and 4092 is a voltage equal to the Jrk's 5V pin (approximately 4.8 V). In frequency feedback mode, the feedback is 2048 plus or minus a measurement of the frequency of pulses on the FBT pin. 
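Since frequency feedback is reported as 2048 plus or minus the speed measurement, the signed speed component can be recovered by subtracting the offset (hypothetical helper):

```cpp
#include <cstdint>

// In frequency feedback mode the feedback variable is centered on 2048,
// so subtracting 2048 yields the signed frequency measurement
// (positive in one direction, negative in the other).
int16_t frequencyFeedbackOffset(uint16_t feedback)
{
  return static_cast<int16_t>(feedback) - 2048;
}
```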
In feedback mode none (open-loop speed control mode), the feedback is always zero.\n\nDefinition at line 341 of file JrkG2.h.\n\n JrkG2ForceMode JrkG2Base::getForceMode ( )\ninlineinherited\n\nReturns an indication of whether the Jrk's duty cycle target or duty cycle are being overridden with a forced value.\n\nExample usage:\n\nif (jrk.getForceMode() == JrkG2ForceMode::DutyCycleTarget)\n{\n\/\/ The duty cycle target is being overridden with a forced value.\n}\n\nDefinition at line 500 of file JrkG2.h.\n\n uint16_t JrkG2Base::getInput ( )\ninlineinherited\n\nGets the input variable.\n\nThe input variable is a raw, unscaled value representing a measurement taken by the Jrk of the input to the system. In serial input mode, the input is equal to the target, which can be set to any value from 0 to 4095 using serial commands. In analog input mode, the input is a measurement of the voltage on the SDA pin, where 0 is 0 V and 4092 is a voltage equal to the Jrk's 5V pin (approximately 4.8 V). In RC input mode, the input is the duration of the last RC pulse measured, in units of 2\/3 us.\n\nDefinition at line 312 of file JrkG2.h.\n\n int16_t JrkG2Base::getIntegral ( )\ninlineinherited\n\nGets the integral variable.\n\nIn general, every PID period, the error (scaled feedback minus target) is added to the integral (also known as error sum). 
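The integral update described above, together with the clamping applied by the "Integral limit" setting, can be sketched as follows. This mirrors the documented behavior, not the firmware's exact implementation.

```cpp
#include <cstdint>

// One PID period's worth of integral (error sum) update: the error
// (scaled feedback minus target) is added to the integral, and the
// result is clamped so its absolute value never exceeds the integral
// limit setting.
int16_t updateIntegral(int16_t integral, uint16_t scaledFeedback,
                       uint16_t target, uint16_t integralLimit)
{
  int32_t error = static_cast<int32_t>(scaledFeedback) - target;
  int32_t sum = static_cast<int32_t>(integral) + error;
  int32_t limit = static_cast<int32_t>(integralLimit);
  if (sum > limit) sum = limit;
  if (sum < -limit) sum = -limit;
  return static_cast<int16_t>(sum);
}
```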
There are several settings to configure the behavior of this variable, and it is used in the PID calculation.\n\nDefinition at line 363 of file JrkG2.h.\n\n uint8_t JrkG2Base::getIntegralExponent ( )\ninlineinherited\n\nGets the exponent part of the integral coefficient from the Jrk's RAM settings.\n\nDefinition at line 852 of file JrkG2.h.\n\n uint16_t JrkG2Base::getIntegralLimit ( )\ninlineinherited\n\nGets the integral limit from the Jrk's RAM settings.\n\nDefinition at line 940 of file JrkG2.h.\n\n uint16_t JrkG2Base::getIntegralMultiplier ( )\ninlineinherited\n\nGets the multiplier part of the integral coefficient from the Jrk's RAM settings.\n\nDefinition at line 843 of file JrkG2.h.\n\n int16_t JrkG2Base::getLastDutyCycle ( )\ninlineinherited\n\nGets the duty cycle the Jrk drove the motor with in the last PID period.\n\nThis can be useful for converting the getRawCurrent() reading into milliamps.\n\nDefinition at line 664 of file JrkG2.h.\n\n uint8_t JrkG2Base::getLastError ( )\ninlineinherited\n\nReturns 0 if the last communication with the device was successful, and non-zero if there was an error.\n\nDefinition at line 128 of file JrkG2.h.\n\n uint16_t JrkG2Base::getMaxAccelerationForward ( )\ninlineinherited\n\nGets the maximum acceleration in the forward direction from the Jrk's RAM settings.\n\nDefinition at line 985 of file JrkG2.h.\n\n uint16_t JrkG2Base::getMaxAccelerationReverse ( )\ninlineinherited\n\nGets the maximum acceleration in the reverse direction from the Jrk's RAM settings.\n\nDefinition at line 1008 of file JrkG2.h.\n\n uint16_t JrkG2Base::getMaxDecelerationForward ( )\ninlineinherited\n\nGets the maximum deceleration in the forward direction from the Jrk's RAM settings.\n\nDefinition at line 1045 of file JrkG2.h.\n\n uint16_t JrkG2Base::getMaxDecelerationReverse ( )\ninlineinherited\n\nGets the maximum deceleration in the reverse direction from the Jrk's RAM settings.\n\nDefinition at line 1068 of file JrkG2.h.\n\n uint16_t 
JrkG2Base::getMaxDutyCycleForward ( )\ninlineinherited\n\nGets the maximum duty cycle in the forward direction from the Jrk's RAM settings.\n\nDefinition at line 1104 of file JrkG2.h.\n\n uint16_t JrkG2Base::getMaxDutyCycleReverse ( )\ninlineinherited\n\nGets the maximum duty cycle in the reverse direction from the Jrk's RAM settings.\n\nDefinition at line 1126 of file JrkG2.h.\n\n uint16_t JrkG2Base::getMaxDutyCycleWhileFeedbackOutOfRange ( )\ninlineinherited\n\nGets the maximum duty cycle while feedback is out of range from the Jrk's RAM settings.\n\nDefinition at line 962 of file JrkG2.h.\n\n uint16_t JrkG2Base::getPIDPeriod ( )\ninlineinherited\n\nGets the PID period from the Jrk's RAM settings, in milliseconds.\n\nDefinition at line 916 of file JrkG2.h.\n\n uint16_t JrkG2Base::getPIDPeriodCount ( )\ninlineinherited\n\nGet the \"PID period count\" variable, which is the number of PID periods that have elapsed. It resets to 0 after reaching 65535. The duration of the PID period can be configured.\n\nDefinition at line 428 of file JrkG2.h.\n\n bool JrkG2Base::getPIDPeriodExceeded ( )\ninlineinherited\n\nReturns true if the Jrk's most recent PID cycle took more time than the configured PID period. This indicates that the Jrk does not have time to perform all of its tasks at the desired rate. 
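Because the PID period count described above wraps to 0 after 65535, the number of periods between two readings is best computed with modular 16-bit arithmetic, which handles the wraparound automatically (hypothetical helper):

```cpp
#include <cstdint>

// Number of PID periods between two getPIDPeriodCount() readings.
// Unsigned 16-bit subtraction gives the correct answer even when the
// counter has wrapped past 65535 between the two reads.
uint16_t pidPeriodsElapsed(uint16_t previousCount, uint16_t currentCount)
{
  return static_cast<uint16_t>(currentCount - previousCount);
}
```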
Most often, this is caused by the configured number of analog samples for input, feedback, or current sensing being too high for the configured PID period.\n\nDefinition at line 420 of file JrkG2.h.\n\n uint8_t JrkG2Base::getProportionalExponent ( )\ninlineinherited\n\nGets the exponent part of the proportional coefficient from the Jrk's RAM settings.\n\nDefinition at line 813 of file JrkG2.h.\n\n uint16_t JrkG2Base::getProportionalMultiplier ( )\ninlineinherited\n\nGets the multiplier part of the proportional coefficient from the Jrk's RAM settings.\n\nDefinition at line 804 of file JrkG2.h.\n\n void JrkG2Base::getRAMSettings ( uint8_t offset, uint8_t length, uint8_t * buffer )\ninlineinherited\n\nGets a contiguous block of settings from the Jrk G2's RAM.\n\nThe maximum length that can be fetched is 15 bytes.\n\nExample usage:\n\n\/\/ Get the Jrk's feedback maximum setting.\nuint8_t buffer[2];\njrk.getRAMSettings(0x1F, 2, buffer);\nuint16_t feedbackMaximum = buffer[0] + (buffer[1] << 8);\n\nNote that this library has several functions for reading and writing specific RAM settings, and they are easier to use than this function.\n\nFor information on how the settings are encoded, see the Jrk G2 user's guide.\n\nDefinition at line 1379 of file JrkG2.h.\n\n uint16_t JrkG2Base::getRawCurrent ( )\ninlineinherited\n\nGets the Jrk's raw measurement of the current running through the motor.\n\nThis is an analog voltage reading from the Jrk's current sense pin. 
The units of the reading depend on what hard current limit is being used (getEncodedHardCurrentLimit()).\n\nDefinition at line 641 of file JrkG2.h.\n\n uint16_t JrkG2Base::getRCPulseWidth ( )\ninlineinherited\n\nGets the raw RC pulse width measured on the Jrk's RC input, in units of twelfths of a microsecond.\n\nReturns 0 if the RC input is missing or invalid.\n\nExample usage:\n\nuint16_t pulseWidth = jrk.getRCPulseWidth();\nif (pulseWidth != 0 && pulseWidth < 18000)\n{\n\/\/ Input is valid and pulse width is less than 1500 microseconds.\n}\n\nDefinition at line 562 of file JrkG2.h.\n\n bool JrkG2Base::getResetIntegral ( )\ninlineinherited\n\nGets the \"Reset integral\" setting from the Jrk's RAM settings.\n\nDefinition at line 731 of file JrkG2.h.\n\n uint16_t JrkG2Base::getScaledFeedback ( )\ninlineinherited\n\nGets the scaled feedback variable.\n\nThe scaled feedback is calculated from the feedback using the Jrk's configurable feedback scaling settings.\n\nDefinition at line 352 of file JrkG2.h.\n\n uint16_t JrkG2Base::getSoftCurrentLimitForward ( )\ninlineinherited\n\nGets the soft current limit when driving in the forward direction from the Jrk's RAM settings, in units of mA.\n\nDefinition at line 1296 of file JrkG2.h.\n\n uint16_t JrkG2Base::getSoftCurrentLimitReverse ( )\ninlineinherited\n\nGets the soft current limit when driving in the reverse direction from the Jrk's RAM settings, in units of mA.\n\nDefinition at line 1318 of file JrkG2.h.\n\n uint16_t JrkG2Base::getTarget ( )\ninlineinherited\n\nGets the target variable.\n\nIn serial input mode, the target is set directly with serial commands. 
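The raw RC pulse width returned by getRCPulseWidth() above is in twelfths of a microsecond, so dividing by 12 yields microseconds; the 18000 threshold in its example corresponds to 1500 us (hypothetical helper):

```cpp
#include <cstdint>

// Convert a getRCPulseWidth() reading (twelfths of a microsecond)
// to microseconds, truncating any fractional part.
uint16_t rcPulseWidthMicroseconds(uint16_t raw)
{
  return raw / 12;
}
```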
In the other input modes, the target is computed by scaling the input, using the configurable input scaling settings.\n\nDefinition at line 324 of file JrkG2.h.\n\n uint32_t JrkG2Base::getUpTime ( )\ninlineinherited\n\nGets the time since the last full reset of the Jrk's microcontroller, in milliseconds.\n\nExample usage:\n\nuint32_t upTime = jrk.getUpTime();\n\nDefinition at line 544 of file JrkG2.h.\n\n void JrkG2Base::getVariables ( uint8_t offset, uint8_t length, uint8_t * buffer )\ninlineinherited\n\nGets a contiguous block of variables from the Jrk G2.\n\nNote that this library has convenient functions for reading every variable provided by the Jrk. The main reason to use this function is if you want to read multiple variables at once for extra efficiency or to ensure that the variables are in a consistent state.\n\nThe maximum length that can be fetched is 15 bytes.\n\nExample usage:\n\n\/\/ Get the Jrk's last device reset and its up time.\nuint8_t buffer[5];\njrk.getVariables(0x1F, 5, buffer);\n\nFor information on how the variables are encoded, see the Jrk G2 user's guide.\n\nDefinition at line 1427 of file JrkG2.h.\n\n uint16_t JrkG2Base::getVinVoltage ( )\ninlineinherited\n\nGets the measurement of the VIN voltage, in millivolts.\n\nExample usage:\n\nuint16_t vin = jrk.getVinVoltage();\n\nDefinition at line 511 of file JrkG2.h.\n\n void JrkG2Base::setBrakeDuration ( uint8_t duration )\ninlineinherited\n\nSets the brake duration for both directions in the Jrk's RAM settings, in units of 5 ms.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1274 of file JrkG2.h.\n\n void JrkG2Base::setBrakeDurationForward ( uint8_t duration )\ninlineinherited\n\nSets the brake duration when switching from forward to reverse in the Jrk's RAM settings, in units of 5 ms.\n\nYou would normally configure this setting ahead of time using the 
Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1230 of file JrkG2.h.\n\n void JrkG2Base::setBrakeDurationReverse ( uint8_t duration )\ninlineinherited\n\nSets the brake duration when switching from reverse to forward in the Jrk's RAM settings, in units of 5 ms.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1252 of file JrkG2.h.\n\n void JrkG2Base::setCoastWhenOff ( bool coast )\ninlineinherited\n\nSets or clears the \"Coast when off\" setting in the Jrk's RAM settings.\n\nBy default, the Jrk drives both motor outputs low when the motor is stopped (duty cycle is zero), causing it to brake. If enabled, this setting causes it to instead tri-state both outputs, making the motor coast.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 749 of file JrkG2.h.\n\n void JrkG2Base::setDerivativeCoefficient ( uint16_t multiplier, uint8_t exponent )\ninlineinherited\n\nSets the derivative coefficient in the Jrk's RAM settings.\n\nThis coefficient is used in the Jrk's PID algorithm. 
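The multiplier/exponent form shared by the PID coefficient setters can be explored numerically. The helpers below are hypothetical (not part of the library): the decoder evaluates multiplier / 2^exponent, and the encoder picks the largest exponent whose multiplier still fits in 0..1023, which maximizes precision.

```cpp
#include <cstdint>
#include <cmath>
#include <utility>

// Decode a PID coefficient stored as multiplier / 2^exponent.
double pidCoefficient(uint16_t multiplier, uint8_t exponent)
{
  return multiplier / static_cast<double>(1ul << exponent);
}

// Find one valid (multiplier, exponent) encoding for a desired
// non-negative coefficient, preferring the largest usable exponent.
std::pair<uint16_t, uint8_t> encodePidCoefficient(double value)
{
  for (int e = 18; e >= 0; e--)
  {
    long m = lround(value * (1ul << e));
    if (m <= 1023)
    {
      return {static_cast<uint16_t>(m), static_cast<uint8_t>(e)};
    }
  }
  return {1023, 0};  // value too large to represent; saturate
}
```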
The coefficient takes the form:\n\nmultiplier \/ (2 ^ exponent)\n\nThe multiplier can range from 0 to 1023, and the exponent can range from 0 to 18.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 873 of file JrkG2.h.\n\n void JrkG2Base::setEncodedHardCurrentLimit ( uint16_t encoded_limit )\ninlineinherited\n\nSets the encoded hard current limit for both directions in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nThis command is only valid for the Jrk G2 18v19, 24v13, 18v27, and 24v21. The Jrk G2 21v3 does not have a configurable hard current limit.\n\nDefinition at line 1216 of file JrkG2.h.\n\n void JrkG2Base::setEncodedHardCurrentLimitForward ( uint16_t encoded_limit )\ninlineinherited\n\nSets the encoded hard current limit for driving in the forward direction in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nThis command is only valid for the Jrk G2 18v19, 24v13, 18v27, and 24v21. The Jrk G2 21v3 does not have a configurable hard current limit.\n\nDefinition at line 1156 of file JrkG2.h.\n\n void JrkG2Base::setEncodedHardCurrentLimitReverse ( uint16_t encoded_limit )\ninlineinherited\n\nSets the encoded hard current limit for driving in the reverse direction in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nThis command is only valid for the Jrk G2 18v19, 24v13, 18v27, and 24v21. 
The Jrk G2 21v3 does not have a configurable hard current limit.\n\nDefinition at line 1186 of file JrkG2.h.\n\n void JrkG2Base::setIntegralCoefficient ( uint16_t multiplier, uint8_t exponent )\ninlineinherited\n\nSets the integral coefficient in the Jrk's RAM settings.\n\nThis coefficient is used in the Jrk's PID algorithm. The coefficient takes the form:\n\nmultiplier \/ (2 ^ exponent)\n\nThe multiplier can range from 0 to 1023, and the exponent can range from 0 to 18.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 834 of file JrkG2.h.\n\n void JrkG2Base::setIntegralLimit ( uint16_t limit )\ninlineinherited\n\nSets the integral limit in the Jrk's RAM settings.\n\nThe PID algorithm prevents the absolute value of the integral variable (also known as error sum) from exceeding this limit. This can help limit integral wind-up. The limit can range from 0 to 32767.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 932 of file JrkG2.h.\n\n void JrkG2Base::setMaxAcceleration ( uint16_t accel )\ninlineinherited\n\nSets the maximum acceleration in both directions in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1022 of file JrkG2.h.\n\n void JrkG2Base::setMaxAccelerationForward ( uint16_t accel )\ninlineinherited\n\nSets the maximum acceleration in the forward direction in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 976 of file JrkG2.h.\n\n void 
JrkG2Base::setMaxAccelerationReverse ( uint16_t accel )\ninlineinherited\n\nSets the maximum acceleration in the reverse direction in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 999 of file JrkG2.h.\n\n void JrkG2Base::setMaxDeceleration ( uint16_t decel )\ninlineinherited\n\nSets the maximum deceleration in both directions in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1082 of file JrkG2.h.\n\n void JrkG2Base::setMaxDecelerationForward ( uint16_t decel )\ninlineinherited\n\nSets the maximum deceleration in the forward direction in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1036 of file JrkG2.h.\n\n void JrkG2Base::setMaxDecelerationReverse ( uint16_t decel )\ninlineinherited\n\nSets the maximum deceleration in the reverse direction in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1059 of file JrkG2.h.\n\n void JrkG2Base::setMaxDutyCycle ( uint16_t duty )\ninlineinherited\n\nSets the maximum duty cycle for both directions in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1139 of file JrkG2.h.\n\n void JrkG2Base::setMaxDutyCycleForward ( uint16_t duty )\ninlineinherited\n\nSets the maximum duty cycle in the forward direction in the Jrk's RAM 
settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1095 of file JrkG2.h.\n\n void JrkG2Base::setMaxDutyCycleReverse ( uint16_t duty )\ninlineinherited\n\nSets the maximum duty cycle in the reverse direction in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1117 of file JrkG2.h.\n\n void JrkG2Base::setMaxDutyCycleWhileFeedbackOutOfRange ( uint16_t duty )\ninlineinherited\n\nSets the maximum duty cycle while feedback is out of range in the Jrk's RAM settings.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 953 of file JrkG2.h.\n\n void JrkG2Base::setPIDPeriod ( uint16_t period )\ninlineinherited\n\nSets the PID period in the Jrk's RAM settings.\n\nThis is the rate at which the Jrk runs through all of its calculations, in milliseconds. Note that a higher PID period will result in a more slowly changing integral and a higher derivative, so the two corresponding PID coefficients might need to be adjusted whenever the PID period is changed.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 908 of file JrkG2.h.\n\n void JrkG2Base::setProportionalCoefficient ( uint16_t multiplier, uint8_t exponent )\ninlineinherited\n\nSets the proportional coefficient in the Jrk's RAM settings.\n\nThis coefficient is used in the Jrk's PID algorithm. 
The coefficient takes the form:\n\nmultiplier \/ (2 ^ exponent)\n\nThe multiplier can range from 0 to 1023, and the exponent can range from 0 to 18.\n\nExample usage:\n\n\/\/ Set the proportional coefficient to 1.125 (9\/(2^3)).\njrk.setProportionalCoefficient(9, 3);\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 795 of file JrkG2.h.\n\n void JrkG2Base::setRAMSettings ( uint8_t offset, uint8_t length, uint8_t * buffer )\ninlineinherited\n\nSets a contiguous block of settings in the Jrk G2's RAM.\n\nThe maximum length that can be written in a single command is 7 bytes over Serial, 13 bytes over I2C.\n\nExample usage:\n\n\/\/ Set the Jrk's feedback maximum setting.\nuint16_t feedbackMaximum = 1234;\nuint8_t buffer[2];\nbuffer[0] = feedbackMaximum & 0xFF;\nbuffer[1] = feedbackMaximum >> 8 & 0xFF;\njrk.setRAMSettings(0x1F, 2, buffer);\n\nNote that this library has several functions for reading and writing specific RAM settings, and they are easier to use than this function.\n\nFor information on how the settings are encoded, see the Jrk G2 user's guide.\n\nDefinition at line 1404 of file JrkG2.h.\n\n void JrkG2Base::setResetIntegral ( bool reset )\ninlineinherited\n\nSets or clears the \"Reset integral\" setting in the Jrk's RAM settings.\n\nIf this setting is set to true, the PID algorithm will reset the integral variable (also known as error sum) when the absolute value of the proportional term exceeds 600.\n\nWhen enabled, this can help limit integral wind-up, or the uncontrolled growth of the integral when the feedback system is temporarily unable to keep the error small. 
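The byte packing shown in the setRAMSettings() example above is little-endian (low byte first); these hypothetical helpers mirror that encoding for any 16-bit setting.

```cpp
#include <cstdint>

// Pack a 16-bit setting value into a 2-byte buffer, low byte first,
// matching the layout used by getRAMSettings()/setRAMSettings().
void packU16(uint16_t value, uint8_t *buffer)
{
  buffer[0] = value & 0xFF;         // low byte
  buffer[1] = (value >> 8) & 0xFF;  // high byte
}

// Reassemble a 16-bit value from the same 2-byte little-endian layout.
uint16_t unpackU16(const uint8_t *buffer)
{
  return buffer[0] | (static_cast<uint16_t>(buffer[1]) << 8);
}
```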
This might happen, for example, when the target is changing quickly.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 713 of file JrkG2.h.\n\n void JrkG2Base::setSoftCurrentLimit ( uint16_t current )\ninlineinherited\n\nSets the soft current limit for driving in both directions in the Jrk's RAM settings, in units of mA.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1332 of file JrkG2.h.\n\n void JrkG2Base::setSoftCurrentLimitForward ( uint16_t current )\ninlineinherited\n\nSets the soft current limit when driving in the forward direction in the Jrk's RAM settings, in units of mA.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1287 of file JrkG2.h.\n\n void JrkG2Base::setSoftCurrentLimitReverse ( uint16_t current )\ninlineinherited\n\nSets the soft current limit when driving in the reverse direction in the Jrk's RAM settings, in units of mA.\n\nYou would normally configure this setting ahead of time using the Jrk G2 Configuration Utility, but this function allows you to change it temporarily on the fly.\n\nDefinition at line 1309 of file JrkG2.h.\n\n void JrkG2Base::setTarget ( uint16_t target )\ninlineinherited\n\nSets the target of the Jrk to a value in the range 0 to 4095.\n\nThe target can represent a target duty cycle, speed, or position depending on the feedback mode.\n\nExample usage:\n\njrkG2.setTarget(3000);\n\nThis functions sends a \"Set target\" command to the jrk, which clears the \"Awaiting command\" error bit and (if the input mode is serial) will set the jrk's input and target variables.\n\nDefinition at line 152 of file JrkG2.h.\n\n void 
JrkG2Base::setTargetLowResFwd ( uint8_t target )\ninlineinherited\n\nSets the target of the Jrk based on a value in the range 0 to 127 that maps to a 12-bit target of 2048 or greater.\n\nIf the feedback mode is Analog or Tachometer, then the formula is Target = 2048 + 16 * value.\n\nIf the feedback mode is None (speed control mode), then the formula is Target = 2048 + (600 \/ 127) * value. This means that a value of 127 will set the duty cycle target to full-speed forward (600), while a value of zero will make the motor stop.\n\nExample usage:\n\njrkG2.setTargetLowResFwd(100);\n\nThis function sends a \"Set target low resolution forward\" command to the Jrk G2, which clears the \"Awaiting command\" error bit and (if the input mode is serial) will set the jrk's input and target variables.\n\nDefinition at line 209 of file JrkG2.h.\n\n void JrkG2Base::setTargetLowResRev ( uint8_t target )\ninlineinherited\n\nSets the target of the Jrk based on a value in the range 0 to 127.\n\nIf the value is zero, then this command is equivalent to the \"Stop motor\" command. Otherwise, the value maps to a 12-bit target less than 2048.\n\nIf the feedback mode is Analog or Tachometer, then the formula is Target = 2048 - 16 * value.\n\nIf the feedback mode is None (speed control mode), then the formula is Target = 2048 - (600 \/ 127) * value. This means that a value of 127 will set the duty cycle target to full-speed reverse (-600).\n\nExample usage:\n\njrkG2.setTargetLowResRev(100);\n\nThis function sends a \"Set target low resolution reverse\" command to the Jrk G2, which clears the \"Awaiting command\" error bit and (if the input mode is serial) will set the jrk's input and target variables.\n\nDefinition at line 182 of file JrkG2.h.\n\n void JrkG2Base::stopMotor ( )\ninlineinherited\n\nTurns the motor off.\n\nThis function sends a \"Stop motor\" command to the Jrk, which sets the \"Awaiting command\" error bit.
The Jrk will respect the configured deceleration limit while decelerating to a duty cycle of 0, unless the \"Awaiting command\" error has been configured as a hard error. Once the duty cycle reaches zero, the Jrk will either brake or coast.\n\nExample usage:\n\njrkG2.stopMotor();\n\nDefinition at line 289 of file JrkG2.h.\n\n## Member Data Documentation\n\n uint8_t JrkG2Base::_lastError = 0\nprotectedinherited\n\nZero if the last communication with the device was successful, non-zero otherwise.\n\nDefinition at line 1437 of file JrkG2.h.\n\nThe documentation for this class was generated from the following files:","date":"2020-02-22 12:50:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.34205836057662964, \"perplexity\": 6954.55059253277}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-10\/segments\/1581875145676.44\/warc\/CC-MAIN-20200222115524-20200222145524-00128.warc.gz\"}"} | null | null |
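The multiplier \/ (2 ^ exponent) fixed-point encoding documented above for setProportionalCoefficient (and the other PID coefficient setters) can be sketched in a few lines. This helper is not part of the Jrk library; the name and the simple rounding search are assumptions for illustration only:

```python
def encode_pid_coefficient(value, max_multiplier=1023, max_exponent=18):
    """Find (multiplier, exponent) with multiplier / 2**exponent closest to value.

    Mirrors the encoding described in the Jrk G2 docs: the coefficient is
    multiplier / (2 ** exponent), multiplier in 0..1023, exponent in 0..18.
    """
    best = None
    for exponent in range(max_exponent + 1):
        multiplier = round(value * (1 << exponent))
        if multiplier > max_multiplier:
            break  # larger exponents would overflow the 10-bit multiplier
        error = abs(value - multiplier / (1 << exponent))
        if best is None or error < best[0]:
            best = (error, multiplier, exponent)
    _, multiplier, exponent = best
    return multiplier, exponent

# The documentation's own example: 1.125 == 9 / (2 ** 3)
print(encode_pid_coefficient(1.125))
```

The returned pair is what you would pass to a call like `jrk.setProportionalCoefficient(multiplier, exponent)`.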
Ready to spice up your love life with Melbourne singles? Well, you have come to the right place.
At Spice of Life we have 1000's of singles in the Melbourne area seeking love and romance, friends and friendship, companionship and marriage, casual dating or a relationship.
Simply add your free profile and start meeting single people in the Melbourne area immediately.
By joining us you will have instant access to our online services, which will match you with compatible singles in your local Melbourne area and also match you up with singles who are looking for someone like you.
Before you know it you will be meeting up with numerous Melbourne singles, enjoying an active social life and potentially meeting your lifelong partner.
Why wait any longer? You have nothing to lose and everything to gain, so join now and start meeting singles in the Melbourne area today! | {
"redpajama_set_name": "RedPajamaC4"
} | 4,694 |
A rugged and hard-wearing plastic/alloy combo USB drive that protects your data and has a cap. With the option to print your logo on the front or back, they make a great promotional product. These USB drives let you protect your data from dust and debris.
USB Alpha Metallic Drive. Not your thing? Try these instead! | {
"redpajama_set_name": "RedPajamaC4"
} | 5,332 |
{"url":"https:\/\/www.physicsforums.com\/threads\/unit-circle.212993\/","text":"# Unit circle\n\n[SOLVED] unit circle\n\n## Homework Statement\n\nMy book contains the following problem:\n\nLet U be the multiplicative group $\{z \in C : |z| = 1\}$\n\n1) Let z_0 be in U. Show that $U z_0 = \{ z z_0 : z \in U \}$ is a subgroup of U, and compute U mod U z_0.\n2) To what group is U\/<-1> isomorphic?\n\n## The Attempt at a Solution\n\nI think 1) is so insanely trivial it is not worth asking. The answer is clearly the trivial group, right?\n\nMy book says that the answer to 2) is U, but it seems it should be the half-circle or the reals mod 2 or something. Why is it U?\n\nLast edited:\n\nDick","date":"2021-06-23 11:44:29","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9147495627403259, \"perplexity\": 829.1279862183849}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623488538041.86\/warc\/CC-MAIN-20210623103524-20210623133524-00586.warc.gz\"}"} | null | null |
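For question 2 in the thread above, the standard argument is that z → z² is a surjective homomorphism U → U whose kernel is ⟨-1⟩ = {1, -1}, so the first isomorphism theorem gives U/⟨-1⟩ ≅ U. The following is not a proof, only a numerical sanity check of the homomorphism property, the kernel, and surjectivity on sampled points of the circle:

```python
import cmath

def sq(z):
    # Candidate isomorphism U/<-1> -> U: the coset {z, -z} maps to z**2.
    return z * z

# Sample points on the unit circle U = {z : |z| = 1}.
samples = [cmath.exp(1j * k / 7.0) for k in range(20)]

for z in samples:
    # sq stays on the unit circle and is a homomorphism: sq(z*w) == sq(z)*sq(w).
    assert abs(abs(sq(z)) - 1.0) < 1e-12
    for w in samples:
        assert abs(sq(z * w) - sq(z) * sq(w)) < 1e-12
    # Both elements of the coset {z, -z} map to the same point.
    assert abs(sq(z) - sq(-z)) < 1e-12

# Kernel contains exactly {1, -1} among the obvious candidates.
assert abs(sq(1) - 1) < 1e-12 and abs(sq(-1) - 1) < 1e-12

# Surjective: every w in U is z**2 for a square root z, which also lies in U.
for w in samples:
    z = cmath.sqrt(w)
    assert abs(abs(z) - 1.0) < 1e-12 and abs(sq(z) - w) < 1e-12

print("U/<-1> -> U via z -> z^2 checks out on samples")
```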
{"url":"https:\/\/de.maplesoft.com\/support\/help\/maple\/view.aspx?path=Student\/Statistics\/ChiSquareGoodnessOfFitTest\/overview","text":"Student\/Statistics\/ChiSquareGoodnessOfFitTest\/overview - Help\n\nStudent[Statistics][ChiSquareGoodnessOfFitTest] Overview\n\noverview of the Chi Square Goodness Of Fit Test\n\nDescription\n\n \u2022 Chi-Squared Goodness of Fit Test is used to test how well the data in the observed sample reflects the data in the expected sample. The observed sample and the expected sample are given as histograms with the same set of bins. That is, each entry of observed and expected is a count of observations with particular characteristics (typically, the observations within a particular range). These characteristics are the same for corresponding entries of observed and expected.\n \u2022 Requirements for using Chi-Squared Goodness of Fit Test:\n 1 The goal is to test if the observed sample follows the same the distribution as of the expected sample.\n 2 The observed sample and the expected sample are given as histograms with the same set of bins.\n \u2022 The formula is:\n\n${X}^{2}={\\sum }_{i=1}^{N}\\frac{{\\left({\\mathrm{observed}}_{i}-{\\mathrm{expected}}_{i}\\right)}^{2}}{{\\mathrm{expected}}_{i}}$\n\n where $\\mathrm{observed}$\u00a0and $\\mathrm{expected}$\u00a0are the data in the observed sample and the expected sample respectively, $N$\u00a0is the sample size of the observed and the expected samples, and ${X}^{2}$\u00a0follows a Chi-Squared distribution with $N-1$\u00a0degrees of freedom.\n\nExample\n\nSam was playing a dice game. The rules of the game were: for each turn, the player rolls three dice and wins only if three sixes are rolled. After ten failures, Sam suspected that the dice may have been tampered with. Sam stole the dice and brought them home, where he rolled them 1000 times - the observed\u00a0sample.\n\nHe also had three dice he knew to be fair, and rolled those 1000 times - this is the expected\u00a0sample. 
For each roll he recorded the number of sixes. The results were as follows:\n\n observed expected 0 sixes 580 570 1 six 354 360 2 sixes 63 64 3 sixes 3 6\n\nNow he wants to test if the dice are fair or not.\n\n 1 Determine the null hypothesis:\n Null Hypothesis: The three dice are fair. (Observed sample dose not differ from expected sample.)\n 2 Substitute the information into the formula:\n $x=\\frac{{\\left(580-570\\right)}^{2}}{570}+\\frac{{\\left(354-360\\right)}^{2}}{360}+\\frac{{\\left(63-64\\right)}^{2}}{64}+\\frac{{\\left(3-6\\right)}^{2}}{6}=1.791063596$\n 3 Compute the p-value:\n $p-\\mathrm{value}=\\mathrm{Probability}\\left(X\u02c62>1.791063596\\right)=0.616881663760937$, \u00a0\u00a0\u00a0\u00a0\u00a0$X\u02c62\u02dc\\mathrm{ChiSquare}\\left(3\\right)$\n 4 Draw the conclusion:\n This statistical test does not provide enough evidence to conclude that the null hypothesis is false, so we fail to reject the null hypothesis.","date":"2021-01-20 09:19:21","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 9, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7268649339675903, \"perplexity\": 523.7737072412548}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, 
\"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-04\/segments\/1610703519984.9\/warc\/CC-MAIN-20210120085204-20210120115204-00039.warc.gz\"}"} | null | null |
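The worked dice example in the Maple help page above can be reproduced in a few lines. For 3 degrees of freedom the chi-squared survival function has a closed form in terms of the error function, so no statistics library is needed; this reproduces the page's X² ≈ 1.791063596 and p ≈ 0.616881663760937:

```python
import math

observed = [580, 354, 63, 3]
expected = [570, 360, 64, 6]

# X^2 = sum over bins of (observed_i - expected_i)^2 / expected_i
x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def chi2_sf_3df(x):
    """Survival function P(X > x) for ChiSquare(3).

    P(X > x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2),
    a standard closed form for 3 degrees of freedom.
    """
    return math.erfc(math.sqrt(x / 2.0)) + math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

p_value = chi2_sf_3df(x2)
print(x2, p_value)  # ~1.791063596 and ~0.6168816638
```

As in the document, p ≈ 0.617 is far above any usual significance level, so the test fails to reject the hypothesis that the dice are fair.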
Q: How can I reduce AD LDAP filter I am using an application through which I provide an LDAP filter to fetch some data from Microsoft AD.
The LDAP filter textbox is of a limited length and doesn't allow me to use a huge filter.
My LDAP filter has multiple membership checks. For instance:
(|(memberof=CN=Group1,OU=Users and Groups,OU=MyUsers,OU=abc,OU=us,DC=local,DC=com)
(memberof=CN=Group2,OU=qwe,OU=Users and Groups,OU=HQ,OU=xyz,OU=us,DC=local,DC=com)...and so on
Is there any way to reduce this filter? For example, if I add a common attribute to all of these users, can I then search for the users based on that attribute?
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,374 |
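The approach the question proposes - stamping the users with one shared attribute and filtering on it instead of OR-ing many memberof clauses - shrinks the filter dramatically. A sketch of the two filter strings (the marker attribute `extensionAttribute1` and its value are hypothetical; the group DNs are the ones from the question):

```python
group_dns = [
    "CN=Group1,OU=Users and Groups,OU=MyUsers,OU=abc,OU=us,DC=local,DC=com",
    "CN=Group2,OU=qwe,OU=Users and Groups,OU=HQ,OU=xyz,OU=us,DC=local,DC=com",
]

def memberof_filter(dns):
    # Builds (|(memberof=dn1)(memberof=dn2)...), the long OR-of-memberof form.
    clauses = "".join(f"(memberof={dn})" for dn in dns)
    return f"(|{clauses})"

long_filter = memberof_filter(group_dns)

# The equivalent short filter once every user carries a common marker attribute.
short_filter = "(extensionAttribute1=myapp-users)"

print(len(long_filter), "vs", len(short_filter))
```

The short form fits easily in a length-limited filter textbox; the trade-off is that the marker attribute has to be maintained as group membership changes.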
package org.fcrepo.utilities;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.SAXParser;
import javax.xml.transform.Source;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;
import org.apache.commons.pool.impl.SoftReferenceObjectPool;
import org.fcrepo.utilities.xml.PoolableDocumentBuilderFactory;
import org.fcrepo.utilities.xml.PoolableSAXParserFactory;
import org.fcrepo.utilities.xml.PoolableTransformerFactoryFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;
/**
*
* @author Edwin Shin
* @version $Id$
*/
public class XmlTransformUtility {
private static final Map<String, TimestampedCacheEntry<Templates>> TEMPLATES_CACHE =
new HashMap<String, TimestampedCacheEntry<Templates>>();
// A pool of namespace-aware DocumentBuilders
// Using a SoftReferenceObjectPool means that objects are created on
// demand after the pool is exhausted
//TODO how should the default values be configured?
private static final SoftReferenceObjectPool<DocumentBuilder> DOCUMENT_BUILDERS =
new SoftReferenceObjectPool<DocumentBuilder>(
new PoolableDocumentBuilderFactory(true, false));
private static final SoftReferenceObjectPool<TransformerFactory> TRANSFORM_FACTORIES =
new SoftReferenceObjectPool<TransformerFactory>(
new PoolableTransformerFactoryFactory());
private static final SoftReferenceObjectPool<SAXParser> SAX_PARSERS =
new SoftReferenceObjectPool<SAXParser>(
new PoolableSAXParserFactory(true, false));
/**
* Convenience method to get a new instance of a TransformerFactory.
* If the {@link #TransformerFactory} is an instance of
* net.sf.saxon.TransformerFactoryImpl, the attribute
* {@link #FeatureKeys.VERSION_WARNING} will be set to false in order to
* suppress the warning about using an XSLT1 stylesheet with an XSLT2
* processor.
*
* @return a new instance of TransformerFactory
*/
public static TransformerFactory getTransformerFactory() {
try {
return (TransformerFactory) TRANSFORM_FACTORIES.borrowObject();
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
public static void returnTransformerFactory(TransformerFactory factory) {
try {
TRANSFORM_FACTORIES.returnObject(factory);
} catch (Exception e) {
e.printStackTrace();
}
}
public static Transformer getTransformer() throws TransformerException {
return getTransformer(null);
}
public static Transformer getTransformer(Source src) throws TransformerException {
TransformerFactory factory = null;
Transformer result = null;
try {
factory = TRANSFORM_FACTORIES.borrowObject();
result = (src == null) ? factory.newTransformer()
: factory.newTransformer(src);
} catch (TransformerException e) {
throw e;
} catch (Exception e) {
e.printStackTrace();
} finally {
if (factory != null) {
try {
TRANSFORM_FACTORIES.returnObject(factory);
} catch (Exception e) {
e.printStackTrace();
}
}
}
return result;
}
/**
* Try to cache parsed Templates, but check for changes on disk
* @param src
* @return
*/
public static Templates getTemplates(File src) throws TransformerException {
String key = src.getAbsolutePath();
TimestampedCacheEntry<Templates> entry = TEMPLATES_CACHE.get(key);
// check to see if it is null or has changed
if (entry == null || entry.timestamp() < src.lastModified()) {
TransformerFactory factory = getTransformerFactory();
try {
Templates template = factory.newTemplates(new StreamSource(src));
entry = new TimestampedCacheEntry<Templates>(src.lastModified(), template);
} finally {
returnTransformerFactory(factory);
}
TEMPLATES_CACHE.put(key, entry);
}
return entry.value();
}
public static Templates getTemplates(StreamSource source)
throws TransformerException {
TransformerFactory tf = getTransformerFactory();
Templates result = null;
try {
result = tf.newTemplates(source);
} finally {
returnTransformerFactory(tf);
}
return result;
}
public static DocumentBuilder borrowDocumentBuilder() {
try {
return (DocumentBuilder) DOCUMENT_BUILDERS.borrowObject();
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
public static void returnDocumentBuilder(DocumentBuilder object) {
try {
DOCUMENT_BUILDERS.returnObject(object);
} catch (Exception e) {
e.printStackTrace();
}
}
public static Document parseNamespaceAware(File src)
throws Exception {
return parseNamespaceAware(new FileInputStream(src));
}
public static Document parseNamespaceAware(InputStream src)
throws Exception {
Document result = null;
DocumentBuilder builder =
(DocumentBuilder) DOCUMENT_BUILDERS.borrowObject();
try {
result = builder.parse(src);
} finally {
DOCUMENT_BUILDERS.returnObject(builder);
}
return result;
}
public static void parseWithoutValidating(InputStream in, DefaultHandler handler)
throws SAXException, IOException {
parseWithoutValidating(new InputSource(in), handler);
}
public static void parseWithoutValidating(InputSource in, DefaultHandler handler)
throws SAXException, IOException {
SAXParser parser = null;
try {
parser = (SAXParser) SAX_PARSERS.borrowObject();
} catch (Exception e) {
throw new RuntimeException("Error initializing SAX parser", e);
}
try {
parser.parse(in, handler);
} finally {
if (parser != null) {
try {
SAX_PARSERS.returnObject(parser);
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,465 |
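The Java class above leans on a single pattern throughout: borrow an expensive object (parser, factory) from a pool, use it, and return it in a finally block. A minimal language-neutral sketch of that borrow/use/return discipline follows (Python here, purely illustrative; the Apache Commons Pool used by the Java code adds richer semantics such as soft references and factory validation):

```python
import queue

class SimplePool:
    """Tiny borrow/return pool: creates new objects on demand when empty."""

    def __init__(self, factory):
        self._factory = factory
        self._idle = queue.SimpleQueue()

    def borrow(self):
        try:
            return self._idle.get_nowait()
        except queue.Empty:
            return self._factory()  # pool exhausted: make a new one

    def give_back(self, obj):
        self._idle.put(obj)

pool = SimplePool(factory=list)

obj = pool.borrow()      # freshly created: the pool started empty
try:
    obj.append("work")   # use the pooled object
finally:
    pool.give_back(obj)  # always return it, mirroring the Java finally blocks

assert pool.borrow() is obj  # the same instance is reused on the next borrow
```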
{"url":"https:\/\/www.physicsforums.com\/threads\/locally-lorentz.456014\/","text":"# Locally Lorentz\n\nGold Member\n\n## Main Question or Discussion Point\n\n\"Locally Lorentz\"\n\nMisner Thorne Wheeler, \"Gravitation\" asks \"What does it mean to say that the geometry of a sufficiently limited region of spacetime in the real physical world is Lorentzian?\"\n\nThey follow this up with two answers, neither of which appears to have much to do with the question. Instead, they give formulas to calculate proper time and proper distance. This barely scratches the surface of what it means to be Lorentz. Certainly, a body that takes an accelerated path through spacetime ages less than a body that takes an inertial path.\n\nBut what MTW fails to do is to get across a deeper understanding of the paradox involved, and more particularly the resolution to this paradox. In fact, I can't find a single example in MTW where they demonstrate any competent explanation or deeper understanding of Special Relativity. They seem to start and end with the notion that Special Relativity is completely summed up by one equation:\n\n$$s^2=-\\tau^2=x^2+y^2+z^2-t^2$$\n\nI have no problem with having one equation to sum up Special Relativity, I just think they chose the wrong one. As for me, I don't think that SR is summed up by the calculation of the space-time-interval between events. Instead, it is summed up by the Lorentz Transformation Equations.\n\nLet me offer a single example of what the Lorentz Transformations do. Then we can ask whether that operation represents a \"global\" or a \"local\" application of Lorentz, and whether it makes sense to say that physics is only \"locally\" Lorentz.\n\nHere is the example:\n\nI have a particle observer one foot away from a wall.
Roughly 1 nanosecond ago, light came from the wall which is currently being observed by our observer.\n\nThe space-time coordinates of this emission event are (ct, x) = (-1, 1); that is, 1 nanosecond ago, and one foot away.\n\nNow, our observer undergoes a tremendous acceleration toward the wall to 0.9999999959c. (This corresponds to a change in rapidity of 10; tanh(10)=0.9999999959.)\n\nThe event (-1,1) is transformed by Lorentz Transformation as\n\n$$\\left( \\begin{array}{cc} \\cosh (\\varphi ) & -\\sinh (\\varphi ) \\\\ -\\sinh (\\varphi ) & \\cosh (\\varphi ) \\end{array} \\right) \\left( \\begin{array}{c} -1 \\\\ 1 \\end{array} \\right)= \\left( \\begin{array}{c} -22026 \\\\ 22026 \\end{array} \\right)$$\n\nSo this event, which happened only 1 foot away, now happened 22026 feet away; that is, over four miles away.\n\nIn general with these large numbers, (velocities extremely close to c) it is a simple calculation, once you know the rapidity. If you have a rapidity of 100, then the multiple is 2*cosh(100)=2.68*10^43. With a rapidity change of 1000, the multiple is 2*cosh(1000)=1.97*10^434 feet (6.36*10^414 light years.)\n\nSo...\n\nIs this example local? The proper time between these two events was zero. The proper distance between these two events is zero. This would seem to be about as \"local\" as you can get.\n\nHowever, by applying the Lorentz Transformation to these two events, we were able to make them as far apart as we wanted in coordinate space and time. A googol googol googol googol times as big as the universe.
However, the proper time, and proper distance between these events remains zero.\n\nSo how can anyone justify even using the phrase \"locally Lorentz\"? If any physics is Lorentz at all, it can be stretched over arbitrarily large swaths of spacetime--not local at all.\n\n## Answers and Replies\n\nPAllen\nScience Advisor\nRe: \"Locally Lorentz\"\n\nMisner Thorne Wheeler, \"Gravitation\" asks \"What does it mean to say that the geometry of a sufficiently limited region of spacetime in the real physical world is Lorentzian?\"\n\nThey follow this up with two answers, neither of which appears to have much to do with the question. Instead, they give formulas to calculate proper time and proper distance. This barely scratches the surface of what it means to be Lorentz. Certainly, a body that takes an accelerated path through spacetime ages less than a body that takes an inertial path.\n\nBut what MTW fails to do is to get across a deeper understanding of the paradox involved, and more particularly the resolution to this paradox. In fact, I can't find a single example in MTW where they demonstrate any competent explanation or deeper understanding of Special Relativity. They seem to start and end with the notion that Special Relativity is completely summed up by one equation:\n\n$$s^2=-\\tau^2=x^2+y^2+z^2-t^2$$\n\nI have no problem with having one equation to sum up Special Relativity, I just think they chose the wrong one. As for me, I don't think that SR is summed up by the calculation of the space-time-interval between events. Instead, it is summed up by the Lorentz Transformation Equations.\n\nLet me offer a single example of what the Lorentz Transformations do. Then we can ask whether that operation represents a \"global\" or a \"local\" application of Lorentz, and whether it makes sense to say that physics is only \"locally\" Lorentz.\n\nHere is the example:\n\nI have a particle observer one foot away from a wall.
Roughly 1 nanosecond ago, light came from the wall which is currently being observed by our observer.\n\nThe space-time coordinates of this emission event are (ct, x) = (-1, 1); that is, 1 nanosecond ago, and one foot away.\n\nNow, our observer undergoes a tremendous acceleration toward the wall to 0.9999999959c. (This corresponds to a change in rapidity of 10; tanh(10)=0.9999999959.)\n\nThe event (-1,1) is transformed by Lorentz Transformation as\n\n$$\\left( \\begin{array}{cc} \\cosh (\\varphi ) & -\\sinh (\\varphi ) \\\\ -\\sinh (\\varphi ) & \\cosh (\\varphi ) \\end{array} \\right) \\left( \\begin{array}{c} -1 \\\\ 1 \\end{array} \\right)= \\left( \\begin{array}{c} -22026 \\\\ 22026 \\end{array} \\right)$$\n\nSo this event, which happened only 1 foot away, now happened 22026 feet away; that is, over four miles away.\n\nIn general with these large numbers, (velocities extremely close to c) it is a simple calculation, once you know the rapidity. If you have a rapidity of 100, then the multiple is 2*cosh(100)=2.68*10^43. With a rapidity change of 1000, the multiple is 2*cosh(1000)=1.97*10^434 feet (6.36*10^414 light years.)\n\nSo...\n\nIs this example local? The proper time between these two events was zero. The proper distance between these two events is zero. This would seem to be about as \"local\" as you can get.\n\nHowever, by applying the Lorentz Transformation to these two events, we were able to make them as far apart as we wanted in coordinate space and time. A googol googol googol googol times as big as the universe.
However, the proper time, and proper distance between these events remains zero.\n\nSo how can anyone justify even using the phrase \"locally Lorentz\"? If any physics is Lorentz at all, it can be stretched over arbitrarily large swaths of spacetime--not local at all.\nYou can say that the metric sums up everything because the complete Lorentz transform is derivable from the metric.\n\nI think the definition of locality you want to use is one phrased in terms of invariants: proper distance and proper time; also any region over which the metric deviation from Minkowski is insignificant. In your example, despite the huge coordinate distance, the metric remains flat over this region, so it is locally Lorentz.\n\n(At least this is my understanding).\n\nDale\nMentor\nYou can say that the metric sums up everything because the complete Lorentz transform is derivable from the metric.\n\nI think the definition of locality you want to use is one phrased in terms of invariants: proper distance and proper time; also any region over which the metric deviation from Minkowski is insignificant. In your example, despite the huge coordinate distance, the metric remains flat over this region, so it is locally Lorentz.\n\n(At least this is my understanding).\nIt is my understanding too.\n\nbcrowell\nStaff Emeritus\nScience Advisor\nGold Member\nRe: \"Locally Lorentz\"\n\nMTW isn't a book on SR, it's a book on GR. As PAllen points out, you can derive the Lorentz transformation from the metric or the metric from the Lorentz transformation. But in the context of GR, it's more natural to emphasize the metric, because a metric is what GR has. GR does not have frames of reference, except locally, so a transformation between frames of reference isn't a central concern.\n\n-Ben\n\nGold Member\nRe: \"Locally Lorentz\"\n\nMTW isn't a book on SR, it's a book on GR.
As PAllen points out, you can derive the Lorentz transformation from the metric or the metric from the Lorentz transformation. But in the context of GR, it's more natural to emphasize the metric, because a metric is what GR has. GR does not have frames of reference, except locally, so a transformation between frames of reference isn't a central concern.\n\n-Ben\nI know how to go from the Lorentz Transformation to the metric, but not the other way around.\n\nTo go from the LT's to the metric, you must either select two events which (a) you could travel between; i.e. the d\/c < t. or you can select two events (b) which you can't travel between, i.e. the distance\/speed of light > t.\n\nIn the first case, you apply the LT so that the two events are in the same position, but at different times. The time between those two events in this new reference frame is the proper time.\n\nIn the second case, you apply the LT so that the two events are in different positions, but at the same time. The distance in this reference frame is the proper distance.\n\n(by the way going from Lorentz transformation to the metric isn't so much derivation as definition.)\n\nThere is of course, a third case, where the distance\/speed of light = 1. In which case, you cannot transform the events so they happen at the same time, or the same place. You can make them arbitrarily close in space, and arbitrarily close in time, but never zero. For this relationship, the \"proper time\" and \"proper distance\" are both zero.\n\nI'm interested to see you go the other direction, deriving the Lorentz Transformation from the definition of proper time.\n\nLast edited:\nMentz114\nGold Member\nRe: \"Locally Lorentz\"\n\nIt is possible to find the transformation that leaves the proper length unchanged.
In 2 dimensions,\n\n$$\\left[ \\begin{array}{c} T' \\\\\\ X' \\end{array} \\right]= \\left[ \\begin{array}{cc} a & b \\\\\\ c & d\\end{array} \\right] \\left[ \\begin{array}{c} T \\\\\\ X \\end{array} \\right]$$\n\n\\begin{align*} ds^2 &= T'^2-X'^2=(aT+bX)^2-(cT+dX)^2\\\\ &= T^2(a^2-c^2)+X^2(b^2-d^2)+2XT(ab-cd) \\end{align*}\n\nIf\n\n$$a=d=cosh(r), \\ \\ b=c=sinh(r)$$\n\nthen $ds^2$ is invariant under the transformation.\n\nRe: \"Locally Lorentz\"\n\nHere is the example:\n\nI have a particle observer one foot away from a wall. Roughly 1 nanosecond ago, light came from the wall which is currently being observed by our observer.\n\nThe space-time coordinates of this emission event are (ct, x) = (-1, 1); that is, 1 nanosecond ago, and one foot away.\n\nNow, our observer undergoes a tremendous acceleration toward the wall to 0.9999999959c. (This corresponds to a change in rapidity of 10; tanh(10)=0.9999999959.)\n\nThe event (-1,1) is transformed by Lorentz Transformation as\n\n$$\\left( \\begin{array}{cc} \\cosh (\\varphi ) & -\\sinh (\\varphi ) \\\\ -\\sinh (\\varphi ) & \\cosh (\\varphi ) \\end{array} \\right) \\left( \\begin{array}{c} -1 \\\\ 1 \\end{array} \\right)= \\left( \\begin{array}{c} -22026 \\\\ 22026 \\end{array} \\right)$$\n\nSo this event, which happened only 1 foot away, now happened 22026 feet away; that is, over four miles away.\n\nIn general with these large numbers, (velocities extremely close to c) it is a simple calculation, once you know the rapidity. If you have a rapidity of 100, then the multiple is 2*cosh(100)=2.68*10^43. With a rapidity change of 1000, the multiple is 2*cosh(1000)=1.97*10^434 feet (6.36*10^414 light years.)\n\nSo...\n\nIs this example local? The proper time between these two events was zero. The proper distance between these two events is zero.
This would seem to be about as \"local\" as you can get.\nDo not mistake the location of the points on the manifold for the distance measure between those same points.\n\npervect\nStaff Emeritus\nScience Advisor\nRe: \"Locally Lorentz\"\n\nYou can think of it like this. Rotations are linear transformations that leave distances unchanged. The Lorentz transformation is a linear transformation that leaves the Lorentz interval unchanged.\n\nThere are transforms other than Lorentz transforms that leave the Lorentz interval unchanged. These, however, are combinations of the Lorentz transform with standard spatial rotations. If you restrict yourself to one space and one time dimension, there is only the Lorentz transform.\n\nYou might try reading \"Space Time Physics\" by Taylor & Wheeler for more background, they explain this in more detail, taking more time to do it.\n\nBut if you want to have a go at it yourself, it's not terribly hard to derive the Lorentz transforms from the invariance of the Lorentz interval. It's easiest if you choose units such that the speed of light, c, is equal to 1.\n\nYou just need to look for a linear transformation\n\nx' = ax + bt\nt' = fx + gt\n\nsuch that\n\nx'^2 - t'^2 = x^2 - t^2\n\nExpanding, you get\n\n(a^2 - f^2) x^2 + (2ab-2fg) xt + (b^2 - g^2) t^2 = x^2 - t^2\n\nFrom this you conclude a^2 - f^2 = 1 , b^2 - g^2 = -1, and ab=fg. It's easy enough to confirm that the Lorentz transforms (if they look unfamiliar, it's because they're the Lorentz transforms in geometric units where c=1, so all factors of c have been omitted) satisfy this.\n\na = gamma = 1\/sqrt(1-v^2)\nb = v*gamma = v\/sqrt(1-v^2)\nf = v*gamma = v\/sqrt(1-v^2)\ng = gamma = 1\/sqrt(1-v^2)\n\nIt's a little more work (more than I care to do, and I'm not sure where to refer you to as a reference) to show that there aren't any other solutions that aren't equivalent to the above.
(Replacing v by -v is something that falls in the category of equivalent, for instance).\n\nLast edited:\natyy\nScience Advisor\nRe: \"Locally Lorentz\"\n\nIf the metric is everywhere diag(-1,1,1,1) then it is Minkowskian.\n\nHowever, in GR, the metric obeys the Einstein field equations, and is specified to have signature 2. So it is not diag(-1,1,1,1) everywhere. However, there are coordinates where the metric is diag(-1,1,1,1) at any particular point, and the first derivatives of the metric also vanish (but not second derivatives), and so the metric is said to be locally Lorentz. The deviation from local Lorentzianess as one goes away from the point is specified by terms in Taylor series, so we understand exactly how locally Lorentz it is or isn't.\n\nRe: \"Locally Lorentz\"\n\nI think the definition of locality you want to use is one phrased in terms of invariants: proper distance and proper time\nIs there an invariant way of defining locality for lightlike directions?\n\nPAllen\nScience Advisor\nRe: \"Locally Lorentz\"\n\nIs there an invariant way of defining locality for lightlike directions?\nNot that I can think of, but even if you pick a coordinate system with lightlike basis vectors, you can discuss the size of region in terms of proper spacelike interval and proper timelike interval.\n\nGold Member\nRe: \"Locally Lorentz\"\n\nDo not mistake the location of the points on the manifold for the distance measure between those same points.\nThe only distances I know of are \"proper distance,\" \"proper time,\" and \"distance.\" The last distance has no adjective, and it refers to the Euclidean, observer dependent distance. 
(It is observer dependent, because two observers with different rapidities will observe different distances.)

The "distance measure," as you put it, is an ambiguous term, since relativistically traveling observers would measure different distances between the same two events.

Of course, I am referring to the Euclidean, observer-dependent distance, with the criteria that the space is "locally Lorentz." I applied a Lorentz transform to a "local" space, and find that for all intents and purposes, local is global.

Gold Member

Re: "Locally Lorentz"

You can think of it like this. Rotations are linear transformations that leave distances unchanged. The Lorentz transformation is a linear transformation that leaves the Lorentz interval unchanged.

There are transforms other than Lorentz transforms that leave the Lorentz interval unchanged. These, however, are combinations of the Lorentz transform with standard spatial rotations. If you restrict yourself to one space and one time dimension, there is only the Lorentz transform.

You might try reading "Space Time Physics" by Taylor & Wheeler for more background; they explain this in more detail, taking more time to do it.

But if you want to have a go at it yourself, it's not terribly hard to derive the Lorentz transforms from the invariance of the Lorentz interval. It's easiest if you choose units such that the speed of light, c, is equal to 1.

You just need to look for a linear transformation

x' = ax + bt
t' = fx + gt

such that

x'^2 - t'^2 = x^2 - t^2

Expanding, you get

(a^2 - f^2) x^2 + (2ab - 2fg) xt + (b^2 - g^2) t^2 = x^2 - t^2

From this you conclude a^2 - f^2 = 1, b^2 - g^2 = -1, and ab = fg. It's easy enough to confirm that the Lorentz transforms (if they look unfamiliar, it's because they're the Lorentz transforms in geometric units where c = 1, so all factors of c have been omitted) satisfy this.

a = gamma = 1/sqrt(1-v^2)
b = v*gamma = v/sqrt(1-v^2)
f = v*gamma = v/sqrt(1-v^2)
g = gamma = 1/sqrt(1-v^2)

It's a little more work (more than I care to do, and I'm not sure where to refer you to as a reference) to show that there aren't any other solutions that aren't equivalent to the above. (Replacing v by -v is something that falls in the category of equivalent, for instance.)

Thanks pervect. (Thank you, too, Mentz, but pervect has a few extra equal signs so I looked at his first.)

So the Lorentz transformation can be derived from the metric by asking "what set of transformations keep this quantity t^2 - x^2 constant?"

Vice versa, the metric is derived from the Lorentz transformation by asking "what is the minimum distance I can make these two events?" or "What would that clock measure, that goes between those two events?" or "What would that ruler measure, that goes between those two events?"

The two things seem almost equivalent, but actually, I think there is an almost ideological difference between the two approaches. Going from the LTs to the metric, I am assuming a global geometric feature of the universe. Just as when I turn around, the universe will appear to undergo a rotational transformation, when I accelerate, the universe will undergo a Lorentz transformation. Just as rotation applies equally to all objects at all distances, the Lorentz transformation applies to all objects at all points in space and time.
When I apply the Lorentz transformation to derive the metric, I am using a universally applicable theory and finding the value of some local quantity.

On the other hand, if you go from the metric to the LTs, you are only concerned with the geometric features of nearby objects, traveling fairly slowly, along geodesics within a gravitational field. The Lorentz transformations, themselves, do preserve these invariants, of the clock's time, or the ruler's length, but they are merely a mathematical curiosity, of little importance, since we are only concerned with what happens on earth.

bcrowell put it succinctly: "GR does not have frames of reference, except locally, so a transformation between frames of reference isn't a central concern." I think bcrowell is correct, but I have often seen people claim that transformation between frames is not even a "valid" concern; they overstep the boundaries and say that Lorentz transformations are only valid locally.

Re: "Locally Lorentz"

The only distances I know of are "proper distance," "proper time," and "distance." The last distance has no adjective, and it refers to the Euclidean, observer-dependent distance. (It is observer dependent, because two observers with different rapidities will observe different distances.)

The metric distance between two points on the manifold is observer independent.

The "distance measure," as you put it, is an ambiguous term, since relativistically traveling observers would measure different distances between the same two events.

That is not true; the metric distance is not ambiguous and is observer independent.

Of course, I am referring to the Euclidean, observer-dependent distance, with the criteria that the space is "locally Lorentz." I applied a Lorentz transform to a "local" space, and find that for all intents and purposes, local is global.

Then you clearly misapply the notion of 'locally Lorentzian'.

Gold Member

Re: "Locally Lorentz"

Not that I can think of, but even if you pick a coordinate system with lightlike basis vectors, you can discuss the size of a region in terms of proper spacelike interval and proper timelike interval.

I think I've seen somewhere, where they just take the space-time graph and rotate it 45 degrees, and then the transformation becomes a contraction/expansion on the horizontal axis, and vice versa on the vertical axis.

I think this constant-sized region here involves four distinct events. Two photons leave a spot, hit something to turn around, and meet again. That would enclose a region of space-time that would have constant area under Lorentz transformation.

But in general, no. If you have a photon going from one place to another, the best description of locality for it is zero.
Effectively, there is a direct interaction between the particles at the origin, and the particles at the destination, with one caveat: the destination event definitely happens after the source event.

This is totally off-topic from the idea of locally Lorentz, but there is another interaction that I find interesting and related: when the photon is produced in, say, a helium atom, this occurs when the electron falls into a lower shell, or perhaps when it hits the lower shell. Does the photon arise from the interaction of two particles, or does it arise from the acceleration of one particle? Is there a direct interaction between the proton and electron (one event?) or are they separated by a finite distance when they interact (two events?), or is the photon generated in a process that takes place in a region of time and space?

So (1) at the source, there are two events which occur at the same time, and (2) in between, there are events which definitely occur one after the other, and (3) at the destination, there are two events which occur at the same time.

Re: "Locally Lorentz"

I think I've seen somewhere, where they just take the space-time graph and rotate it 45 degrees, and then the transformation becomes a contraction/expansion on the horizontal axis, and vice versa on the vertical axis.

Sounds like light-cone coordinates.

PeterDonis
Mentor

Re: "Locally Lorentz"

If you have a photon going from one place to another, the best description of locality for it is zero. Effectively, there is a direct interaction between the particles at the origin, and the particles at the destination, with one caveat: the destination event definitely happens after the source event.

I don't think "direct interaction" is an apt description, because it implies that there's no difference in the physics along a photon worldline regardless of which pair of events along it I pick. The fact that the spacetime interval between any two events on a photon's worldline is zero does *not* imply that all events on that worldline are exactly the same in every physically relevant respect. So if I have two pairs of events, (A, B) and (A, C), that all lie on the same photon worldline, that does *not* imply that all the physics between A and B is exactly the same as all the physics between A and C.

Simple example: a source at the origin that emits spherical wavefronts of light, and two detectors, both lying along the same radial line from the origin, one at radius R and the other at radius 2R. At time t = 0 in the frame in which all three objects (the source and both detectors) are at rest (I'm assuming flat spacetime, no gravity or other forces involved), the source emits a spherical wavefront. It arrives at detector #1 at time t = R and at detector #2 at time t = 2R. So we have three events: emission (t = 0, r = 0), detection #1 (t = R, r = R), and detection #2 (t = 2R, r = 2R). The spacetime interval between emission and detection #1 is the same as between emission and detection #2 (both are zero); however, the intensity of light measured at detection #1 is four times that measured at detection #2 (inverse square law). (In quantum terms, we would say that the amplitude for detection of a photon at detection #1 is twice the amplitude for detection of a photon at detection #2; the intensity goes as the square of the amplitude.)
This difference, to me, means that saying "the locality is zero" for both pairs of events, or "direct interaction" between them, is not a good way of describing what's going on, because it gives no way of accounting for the difference in what's observed.

Gold Member

Re: "Locally Lorentz"

The metric distance between two points on the manifold is observer independent.

That is not true; the metric distance is not ambiguous and is observer independent.

Then you clearly misapply the notion of 'locally Lorentzian'.

There's really no point in saying "locally Lorentz": since the Lorentz transformations apply to every event in spacetime, it is either globally Lorentz, or not Lorentz at all. Perhaps MTW should use some other set of words to describe what they are talking about.

For instance, I would recommend talking about how, within the gravitational pull of a planet, the rate of proper time is a function of the distance from the planet. My suggestion would be to say that in this region, we have a situation where somehow, the geometry seems to differ from Lorentz in some fashion, for it is in these local regions where we find space-time to be curved. The particle, traveling on a straight path in its own coordinates, ends up traveling on a curved path in another body's coordinates. I think that the theory behind General Relativity is strong enough that it does not need to rely on ambiguously defined terms and attacking the fundamentals of Special Relativity. It should stand constructively on its foundations--not try to dismiss them as "only valid locally." On the global level, such slowing of proper time (implicit in general relativity) won't make a difference, because the end result, from a global perspective, is just the local slowing of the speed of light. It's no more paradoxical than having glass, with its index of refraction slowing the speed of light.

But claiming that physics is somehow "locally Lorentz" implies that physics is not "globally Lorentz." This is something that MTW tries to do throughout "Gravitation": dismiss Special Relativity as being somehow incompatible with General Relativity. However, I have not yet found any logic in any of their arguments. Only weird claims, like this "locally Lorentz" one.

Re: "Locally Lorentz"

JDoolin, you are misinformed; global Lorentz invariance only occurs in a flat spacetime.

For instance, I would recommend talking about how, within the gravitational pull of a planet, the rate of proper time is a function of the distance from the planet. My suggestion would be to say that in this region, we have a situation where somehow, the geometry seems to differ from Lorentz in some fashion, for it is in these local regions where we find space-time to be curved. The particle, traveling on a straight path in its own coordinates, ends up traveling on a curved path in another body's coordinates. I think that the theory behind General Relativity is strong enough that it does not need to rely on ambiguously defined terms and attacking the fundamentals of Special Relativity. It should stand constructively on its foundations--not try to dismiss them as "only valid locally." On the global level, such slowing of proper time (implicit in general relativity) won't make a difference, because the end result, from a global perspective, is just the local slowing of the speed of light.
It's no more paradoxical than having glass, with its index of refraction slowing the speed of light.

Sorry, but this is wrong.

But claiming that physics is somehow "locally Lorentz" implies that physics is not "globally Lorentz."

Which is the case.
In curved spacetime there is no global Lorentz invariance; there is only Lorentz invariance at the local level.

Gold Member

Re: "Locally Lorentz"

I don't think "direct interaction" is an apt description, because it implies that there's no difference in the physics along a photon worldline regardless of which pair of events along it I pick. The fact that the spacetime interval between any two events on a photon's worldline is zero does *not* imply that all events on that worldline are exactly the same in every physically relevant respect. So if I have two pairs of events, (A, B) and (A, C), that all lie on the same photon worldline, that does *not* imply that all the physics between A and B is exactly the same as all the physics between A and C.

Simple example: a source at the origin that emits spherical wavefronts of light, and two detectors, both lying along the same radial line from the origin, one at radius R and the other at radius 2R. At time t = 0 in the frame in which all three objects (the source and both detectors) are at rest (I'm assuming flat spacetime, no gravity or other forces involved), the source emits a spherical wavefront. It arrives at detector #1 at time t = R and at detector #2 at time t = 2R. So we have three events: emission (t = 0, r = 0), detection #1 (t = R, r = R), and detection #2 (t = 2R, r = 2R). The spacetime interval between emission and detection #1 is the same as between emission and detection #2 (both are zero); however, the intensity of light measured at detection #1 is four times that measured at detection #2 (inverse square law). (In quantum terms, we would say that the amplitude for detection of a photon at detection #1 is twice the amplitude for detection of a photon at detection #2; the intensity goes as the square of the amplitude.) This difference, to me, means that saying "the locality is zero" for both pairs of events, or "direct interaction" between them, is not a good way of describing what's going on, because it gives no way of accounting for the difference in what's observed.

Proper time and proper distance are distinct from what I would usually mean by distance. Obviously, it is only in the lab frame where the distances between these events are R and 2R, and the times between these events are R/c and 2R/c.

You are using what I would call the common definition of distance and time, which are observer dependent. I am perfectly fine with that, because it is what any ordinary person thinks of when he hears the word "distance."

By contrast, the metric distance between the source and destination events here is zero. This is the quantity that is unchanged by Lorentz transformation. What the "metric distance" represents is an invariant quantity relating the distance and time between two events. As far as semantics are concerned, I actually object to even calling this quantity a "distance," preferring the more abstract term "space-time interval," and your objection makes it clearer why.

Because there is a difference between these two situations, where the receiver is 2R away, and where the receiver is R away. Yet the space-time interval for the photons traveling the different distances is the same in both cases.

(Edit: By the way, I don't think there is any photon that is detected by both receivers. If it is detected by the first receiver, its energy is absorbed by the first receiver.
That's part of the reason why I think it is appropriate to say that a photon (a single quantum photon) is a direct interaction between particles at the source, and particles at the destination. I wasn't sure if this was a subtlety or just obvious. The subtlety comes in when you have interference: though the light goes through both slits and there is self-interference, you still don't actually have an event. The photon is still a direct interaction with the source and destination, but somehow modified by the interference device in a geometric way, but not an "event"-ful way.)

Last edited:

Gold Member

Re: "Locally Lorentz"

JDoolin, you are misinformed; global Lorentz invariance only occurs in a flat spacetime.

Sorry, but this is wrong.

Which is the case.
In curved spacetime there is no global Lorentz invariance; there is only Lorentz invariance at the local level.

But I disagree. The speed of light slows down in the region of gravitational fields. This is a locally non-Lorentz behavior. However, the Lorentz transformation equations operate on every event in space-time. They cannot be contained to "locally Lorentz."

On the other hand, it is easy to deal with a local slowing of the speed of light within the context of a globally Lorentz spacetime.

I think the trouble occurs when distance and time are confused with the proper time and proper distance (what you call the "metric distance"). You'll notice that within the Lorentz transformation equations, the metric distance is not to be found. Neither proper time nor proper distance of events appears in the operator or the inputs or the outputs of the equation. These notions are irrelevant to the geometry of spacetime at large.

However, in MTW's description of locally Lorentz, the definitions of proper space and proper time have a central relevance. They are following geodesics where distance is used from the planetary frame, while time is used from the falling object's frame, as modified by the planetary gravity. And it is within these coordinates that somehow some principle of least action is calculated, and in some sense, if you use the right variables, the particle's trajectory is straight.

Is this straightness an application of some "locally Lorentzian" property? I'm not entirely sure, but I'd rather see the variables defined, and the reasoning clearly outlined.

Re: "Locally Lorentz"

JDoolin, you are misinformed; global Lorentz invariance only occurs in a flat spacetime.

Sorry, but this is wrong.

Which is the case.
In curved spacetime there is no global Lorentz invariance; there is only Lorentz invariance at the local level.

That is it!

AB

PeterDonis
Mentor

Re: "Locally Lorentz"

As far as semantics are concerned, I actually object to even calling this quantity a "distance," preferring the more abstract term "space-time interval," and your objection makes it clearer why.

I agree, the term "spacetime interval" is clearer since it is unambiguous.

Because there is a difference between these two situations, where the receiver is 2R away, and where the receiver is R away. Yet the space-time interval for the photons traveling the different distances is the same in both cases.

Yes, it is. But there is clearly *some* invariant, frame-independent difference between the cases, since the physical observable (light intensity, or amplitude for photon detection in the quantum case) differs. What spacetime invariant corresponds to that physically observable difference? It can't be the distance, because that is, as you point out, frame-dependent. It can't be the interval because that's the same in both cases.
So whatever invariant *does* correspond to the physical difference, focusing on the spacetime interval (and the fact that it's zero for a lightlike interval) does not help in identifying what it is.

Gold Member

Re: "Locally Lorentz"

Alright, I'm coming around to a legitimate meaning of "locally Lorentz," but it involves defining some extra variables, and it preserves my meaning of "globally Lorentz." In fact, I think, if we clearly define our variables, both are true in different contexts, and both are false in the opposite context!

We can define $t_{local}$ by imagining a clock at a particular position in a gravitational field, suspended by a wall or a pole. This local time is a function of the clock's position in the gravitational field, but it is also a function of some external global observer-dependent time $t_{global}$.

The main point is we have three different sets of variables. The third set of variables is, of course, related to the geodesic of the object falling through the space.

The "locally Lorentz" behavior can then be described as

$$\Delta \tau^2=\Delta t_{local}^2-\Delta x_{local}^2$$

while the "globally Lorentz" behavior applies to the global coordinates:

$$\begin{pmatrix} c t_{global}' \\ x_{global}' \end{pmatrix}= \begin{pmatrix} \gamma & -\beta \gamma \\ -\beta \gamma & \gamma \end{pmatrix} \begin{pmatrix} c t_{global} \\ x_{global} \end{pmatrix}$$

Gold Member

Re: "Locally Lorentz"

I agree, the term "spacetime interval" is clearer since it is unambiguous.

Yes, it is. But there is clearly *some* invariant, frame-independent difference between the cases, since the physical observable (light intensity, or amplitude for photon detection in the quantum case) differs. What spacetime invariant corresponds to that physically observable difference? It can't be the distance, because that is, as you point out, frame-dependent. It can't be the interval because that's the same in both cases. So whatever invariant *does* correspond to the physical difference, focusing on the spacetime interval (and the fact that it's zero for a lightlike interval) does not help in identifying what it is.

The main invariant of importance in Lorentz transformations is the preservation of Maxwell's laws, which in turn preserves the observer-dependent speed of light.

Consider that the receiver is either chasing the emitter or vice versa. Whatever effects of relativistic Doppler happen to the emitter, you are assured that exactly the inverse effect will apply to the receivers. Even though the Doppler effect itself is frame dependent, the coupling of two inverse effects means there is a sort of invariance in the events themselves.

Naturally, since the same events happen regardless of reference frame, the digital read-outs on the intensity probes will read the same.
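The algebra discussed in this thread is easy to check numerically. The sketch below (my own illustration, not part of the thread) verifies the coefficient conditions a^2 - f^2 = 1, b^2 - g^2 = -1, ab = fg for the gamma/v*gamma solution, the invariance of t^2 - x^2 under that transformation, and the rapidity-10 example where the event (ct, x) = (-1, 1) maps to roughly (-22026, 22026):

```python
import math

# Coefficient conditions from the derivation, for an arbitrary v (here 0.6)
v = 0.6
gamma = 1 / math.sqrt(1 - v**2)
a, b, f, g = gamma, v * gamma, v * gamma, gamma
assert abs(a**2 - f**2 - 1) < 1e-12
assert abs(b**2 - g**2 + 1) < 1e-12
assert abs(a * b - f * g) < 1e-12

# Invariance of the interval t^2 - x^2 under x' = ax + bt, t' = fx + gt
t, x = 3.0, 2.0
tp = f * x + g * t
xp = a * x + b * t
print(tp**2 - xp**2, t**2 - x**2)  # both 5.0 up to rounding

# The rapidity example: boost with phi = 10 applied to the event (ct, x) = (-1, 1)
phi = 10.0
ct, xe = -1.0, 1.0
ct2 = math.cosh(phi) * ct - math.sinh(phi) * xe
x2 = -math.sinh(phi) * ct + math.cosh(phi) * xe
print(round(ct2), round(x2))  # roughly (-22026, 22026), since cosh(10)+sinh(10) = e^10
```

The numeric values of v, t, and x here are arbitrary test inputs; any values give the same invariance.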
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-51\/segments\/1575540518627.72\/warc\/CC-MAIN-20191209093227-20191209121227-00540.warc.gz\"}"} | null | null |
Q: Debye-Waller factor from X-ray diffraction data

I'm conducting an X-ray diffraction experiment for lab, but am new to solid-state physics and crystallography. I have to find the Debye-Waller factor (DWF) of Al at room temperature using X-ray powder diffraction. The value given in the International Tables for X-Ray Crystallography Vol. 3 is approximately $0.78\ \mathring{A}^{2}$.
I obtain the diffraction peaks using a diffractometer with $\text{Cu}K_{\alpha}$ radiation and a Ni filter at the receiving slit.
I find the integrated intensity $I$ relative to that of the first diffraction peak by finding the peaks' areas above the background. Since $$I=K\exp(-2B\sin^2\theta/\lambda^2)$$ then $$\log I=\log K-2B\frac{\sin^2\theta}{\lambda^2}$$ where $\theta$ is the diffraction angle, $\lambda$ the wavelength in angstroms of the radiation incident on the aluminum powder sample, and $B$ is the DWF. $K$ is a constant proportional to factors like the structure factor and Lorentz polarization factors. I then fit the plot of $\log I$ vs. $\frac{\sin^2\theta}{\lambda^2}$ with a linear function; since the slope of the fit is $-2B$, dividing its magnitude by $2$ gives the DWF.
The diffraction peaks I'm seeing are extremely similar to a standard reference plot of Al powder diffraction peaks.
However, when I actually plot $\log I$ vs. $\sin^2\theta/\lambda^2$ and fit, I get a DWF of $5.92\ \mathring{A}^{2}$. Due to this difference in magnitude, I feel like I'm doing something incorrectly when measuring the integrated intensity. Could someone offer a suggestion?
Thank you in advance.
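In code, the fitting step described in the question amounts to a straight-line fit of $\log I$ against $\sin^2\theta/\lambda^2$. The sketch below uses the standard Cu Kα $2\theta$ positions of the first few Al reflections, but the intensities are placeholder numbers for illustration, not measured data:

```python
import numpy as np

lam = 1.5406  # Cu K-alpha1 wavelength in angstroms

# 2-theta positions (degrees) of the first Al reflections for Cu K-alpha,
# with ILLUSTRATIVE integrated intensities (not real measurements)
two_theta = np.array([38.47, 44.74, 65.13, 78.23, 82.44])
intensity = np.array([100.0, 47.0, 22.0, 24.0, 7.0])

x = (np.sin(np.radians(two_theta / 2)) / lam) ** 2  # sin^2(theta)/lambda^2
y = np.log(intensity)

slope, intercept = np.polyfit(x, y, 1)
B = -slope / 2  # since log I = log K - 2B sin^2(theta)/lambda^2
print(f"B = {B:.2f} A^2")
```

With real data the same two lines of fitting code apply; only the intensity array changes.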
A: You have to correct the measured intensity by the Lorentz polarization factor, the atomic form factor (squared) and, most importantly, by the peak multiplicity. I assume that you have the total-absorption case, so the absorption correction is constant. The multiplicities for powder diffraction and fcc are:

8 (111), 6 (200), 12 (220), 24 (311), 8 (222), 6 (400), 24 (331), 24 (420)...
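As a sketch of this correction step (my own illustration of the answer, not the answerer's code): divide each measured intensity by multiplicity, the Lorentz-polarization factor, and $|f|^2$, then fit $\log I$ vs. $\sin^2\theta/\lambda^2$ as described in the question. The intensity and form-factor numbers below are placeholders; real form-factor values should be taken from tabulated data for Al:

```python
import numpy as np

lam = 1.5406  # Cu K-alpha wavelength, angstroms

# Per-peak data: 2-theta (deg), ILLUSTRATIVE measured intensities,
# fcc multiplicities, and PLACEHOLDER atomic form factors (look these up for Al)
two_theta = np.array([38.47, 44.74, 65.13, 78.23, 82.44])
I_meas = np.array([100.0, 47.0, 22.0, 24.0, 7.0])
mult = np.array([8, 6, 12, 24, 8])  # (111) (200) (220) (311) (222)
f_atom = np.array([8.95, 8.26, 6.76, 5.93, 5.69])

theta = np.radians(two_theta / 2)
# Standard Lorentz-polarization factor for a powder diffractometer
LP = (1 + np.cos(2 * theta) ** 2) / (np.sin(theta) ** 2 * np.cos(theta))

I_corr = I_meas / (mult * LP * f_atom**2)

x = (np.sin(theta) / lam) ** 2
slope, _ = np.polyfit(x, np.log(I_corr), 1)
print(f"B = {-slope / 2:.2f} A^2")
```

The key point of the answer is visible in the `mult` array: without dividing out the very different multiplicities (8 vs. 24), the raw intensities cannot fall on a single Debye-Waller line.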
There is much confusion among persons searching for Israel in ancient documents and texts. The answer is so simple that people pass by it without noticing it.
Israel was a satellite of Phoenicia, the Sea People, the trading nation of Tyre, and when Tyre sank into the sea during an earthquake, the documentation of Israel went to the bottom of the sea with it.
Here are the points to consider.
David stockpiled iron for the temple; iron is not available in Israel, so it had to come from someplace else. The only way to get it was by trading, and the only traders with the ships to transport it were the Phoenicians. Overland, it would take so many camels that "stockpiling" would be impossible.
David stockpiled copper. The only place it is found locally is much further south-west in the baking deserts, so inhospitable that the prisoners who were sentenced to work in the refining furnaces there had an average life span after arriving of just 2 weeks. There was a "buffalo hide" of copper excavated several years ago that was sent for analysis. It had come from the Black Hills of America, yet it was documented to be in strata from the time of Solomon. Copper was so scarce and valuable that when Jerusalem was sacked, the invaders took all the copper in it, cut it up into transportable pieces and took it home with them, including all the copper in the temple, the "great sea" water basin with the 12 bulls under it, and the columns of Boaz and Jachin. All of it probably ended up as military hardware for the massive armies of that time.
Hiram, king of Tyre, is documented to have sent cedars from Lebanon to David to build a house for him to live in. This was not their first contact. That is documented in extra-Biblical literature as being when David took his father, Jesse, and his mother, the daughter of the high priest of that time who had been placed with Jesse as an indentured servant, and who automatically became his secondary wife when David was conceived, whose conception became the basis for the animosity between David and his half brothers. This is the source of David's learning the harp, which was only used in the temple, his authority to bring the ark to him in Jerusalem, and his authority to write the songs (Psalms) to be sung in the temple. No one but a high priest had the authority to move the ark.
When David killed Goliath, he took Goliath's sword to his cousin-priests for safe-keeping, then went to retrieve it when he fled from King Saul. The priest in charge at that time allowed him to eat the bread of presentation, which was changed that day and which is only lawful for the priests to eat. This is the family of David's mother's origin, and the family that King Saul tried to wipe out when he found out that David had been helped by them, during which event only one priest escaped and stayed with David for the rest of his life to ensure his personal safety. That priest brought with him one of the 12 ephods in existence at that time, and wore it during his service to David, which is only lawful for a high priest to wear, and which only a high priest knew how to operate, as it was an interactive device, lighting up stones according to the answer to the question presented to it.
It was this friendship with Hiram, cultivated during the years David served under King Saul, that provided the plans for the temple that David bequeathed to Solomon. Those plans have been clearly identified as Phoenician, as was the architect who was hired to build it. The architect's mother was from the tribe of Dan, but his father was Phoenician and learned all the skills of a builder from that source.
The wealth that David left to Solomon did not come from the agricultural economy of Israel. The very most profit an agricultural economy can generate is 20 percent. It came from his partnering with Hiram, king of Tyre, which Hiram chose to continue under Solomon, giving his daughter to Solomon as a wife, and using Israelite men as sailors on his ships when Solomon's fleet was smashed by a storm on its maiden voyage out of the port toward the Indian Ocean. The amount of silver and other goods coming to Solomon from that partnership is listed in the chronicles of the kings to be of such quantity that only a major trading enterprise could have produced it, since neither silver nor gold is found in large quantities in Israel, although gold was abundant further west and in Africa, and we know that the Phoenicians traded on both coasts of Africa because of the apes they brought back.
The death of Solomon ended the alliance with the Phoenicians, and marked the fall of the nation of Israel, both with its splitting up into two warring kingdoms and the loss of the revenue from trade. The wars consumed large amounts of gold and silver just when the loss of trade stopped supplying it.
Anyone who wants to document the nation of Israel in its time of strength needs to go back to the records of the Phoenicians.
From the garden of Eden, God has desired all humanity to be unified. He made Adam and Eve to live in and take care of the garden provided for their sustenance (Genesis 1-2).
The problem of division arose when Satan led Adam and Eve to sin against God (Genesis 3). Because of Satan, the unity of humanity was jeopardized.
This became evident with the division between Cain and Abel, which resulted in the first murder. Even today God desires humanity be unified but Satan's influence is evident in the division that plagues our society.
Paul writing to the Galatian Christians noted the actions that promote such division, "…enmities, strife, jealousy, outbursts of anger, disputes, dissensions, factions, envying…" (Galatians 5:20-21).
"For you are all sons of God through faith in Christ Jesus. For all of you who were baptized into Christ have clothed yourselves with Christ. There is neither Jew nor Greek, there is neither slave nor free man, there is neither male nor female; for you are all one in Christ Jesus. And if you belong to Christ, then you are Abraham's descendants, heirs according to promise" (Galatians 3:26-29).
If there is to be peace in our society, all humanity must turn from living for Satan to living for God. As long as individuals live for Satan and manifest the deeds of the flesh (Galatians 5:19-21), we will have the destructive division that is plaguing our society.
Where has "agape" (Love) [the desire for the very best for others] gone? If "agape" (Love) was manifest by everyone in our society, we would not have all the troubles we are currently having.
May we all seek the true peace that comes from truly loving one another as taught in God's inspired word.
\section{\label{sec:Intro}Introduction}
The widespread use of quantum technologies is limited by the large and expensive cooling systems required for their implementations. The rapidly emerging field of quantum thermodynamics~\cite{landi_irreversible_2020,kosloff_quantum_2013,partovi_quantum_1989,ozdemir_quantum_2020,vinjanampathy_quantum_2016} paves the way for compact, fast, and efficient quantum refrigeration schemes for quantum devices~\cite{naseem_two-body_2020,abah_shortcut-adiabaticity_2020}. Pioneering studies are limited to cooling a single two-level system
(qubit or spin-$1/2$ particle)~\cite{pulse}. A critical question for practical quantum machines is whether and how such quantum refrigerators can cool down interacting multi-qubit systems. As a possible positive answer to this question, we propose a few-qubit quantum refrigerator with scalable advantages in its cooling efficiency and achievable minimum temperatures.
Early quantum refrigerator studies consider utilization of quantum coherence injected by external drives~\cite{pulse}, spectral bath filtering and periodically modulated interactions~\cite{drivenref}, and frequent measurement schemes~\cite{meas}.
The requirements of such proposals, namely precise quantum control~\cite{pulse,drivenref}, bath engineering~\cite{drivenref,meas}, and very rapid measurements~\cite{meas}, together with the lack of a precise determination of the energetic costs of quantum control and measurements, make them difficult to implement for practical applications. A more conventional cooling method for spin systems is known as algorithmic cooling~\cite{alg1,alg2,alg3}. How it can be part of a quantum algorithmic heat engine has recently been presented~\cite{kose_algorithmic_2019}. A continuous variant of algorithmic cooling, without an external work cost and allowing for a flexible working temperature range, has been proposed~\cite{smallref}. However, it relies on a three-body interaction among the qubits, which is not feasible for experimental realization. Intriguing proposals based on quantum coherence and entanglement to cool quantum systems~\cite{lutz2009,coh3} are challenging to use in refrigeration cycles. The cost of repeatedly preparing entangled states reduces their appeal for practical applications.
Recently, a scheme, closely related to algorithmic cooling idea of entropy transfer between different qubit systems, to thermalize a many-body system by repeated collisions has been proposed~\cite{ourpaper}. The random collisions are
one of the oldest routes considered for describing thermalization, introduced by Lord Rayleigh~\cite{rayleigh1891}. A massive
particle thermalizes after many random collisions by small projectiles in
thermal states. This mechanism explains the micromaser in
the blackbody radiator regime, where
the optical cavity is heated by thermal pump atoms~\cite{scully_quantum_1967}.
More recent studies showed that pump atoms in quantum coherent states could also be used to heat the micromaser~\cite{coh1,coh2,epl-pce}. A particularly
intriguing scenario is the scalable heating of the micromaser with the number of pump atoms, using a so-called spin-star system~\cite{epl-pce}. Spin-star configuration consists of a central qubit surrounded by $N$ ancilla qubits (cf.~Fig.~\ref{fig:spinstar}). The critical point is that the central qubit can be at a higher local temperature than the environment.
Here, we show that when the interaction between the central spin and the surrounding spins in a central spin model contains only the longitudinal spin components and is of ferromagnetic type (negative coupling coefficient), the central spin becomes locally colder than the environment. Accordingly, the $(N+1)$-qubit system can be envisioned as a quantum refrigerator, where the central qubit is the quantum refrigerant to cool other systems, specifically, an interacting
multi-qubit system. For that aim, it is necessary to contact the quantum refrigerant with the many-body system. The required refrigerant-system coupling can be performed within the collisional route to many-body thermalization~\cite{ourpaper}. Successful coupling needs matching refrigerant frequencies to transition frequencies of the many-body system. Therefore, our proposal can be envisioned as an all-qubit network with integrated quantum refrigerators (cf.~Fig.~\ref{fig:coll}).
Finally, we should clarify the similarities with algorithmic cooling. There is only a single bath (environment) in which the spin-star qubit structure is held. Such a quantum ``molecule'' has a central qubit in local thermal equilibrium, colder than the environment, due to the longitudinal ferromagnetic qubit-qubit ``bonding''. While the preparation of the initial thermal states is relatively easy in our scheme, we still need resetting and timing control in the quantum cooling network. As in algorithmic cooling, timing and control can be achieved by using qubits with different thermalization rates. Another significant advantage here is to have a readily integrable few-qubit refrigerator with a single-qubit refrigerant for compact, fast, and efficient cooling of a many-qubit system.
While our focus will be on thermalization with the Markovian collision model introduced in Ref.~\cite{ourpaper} for the rest of this paper, another recent work on Markovian collision models for many-body systems \cite{mark-coll} needs to be mentioned. Although it is based on couplings much stronger than the system Hamiltonian, constraining its range of possible implementations, it is promising to generate non-local Lindblad dissipators, which are necessary to thermalize many-body systems with non-local energy eigenstates, using multi-qubit quantum gates. The collision model of Ref.~\cite{ourpaper} is constrained to local collisions and it is not guaranteed to generate a Lindblad master equation with a positive definite Kossakowski matrix for many-body systems with entangled energy eigenstates.
The outline of our paper is the following. After giving a brief description of our spin-star model in Sec.~\ref{mod}, we will derive an analytical expression for the effective temperature of the central qubit with uniform Ising interaction between center and ancilla qubits in Sec.~\ref{th}. After working out how cold these interactions can get the central qubit, we will discuss a simple refrigeration cycle in Sec.~\ref{sec:refrigeratorCycle} and calculate its efficiency defined as the ratio of the energy taken from the central qubit to the total work spent in one cycle. The Section~\ref{many} will summarize the findings of our previous work on many-body systems~\cite{ourpaper} and clarify how it allows cooling of quantum many-body systems along with this paper. The Section \ref{ancilla-sec} will deal with the state of the ancilla qubits after thermalization and we will propose two possible ways to use the ancilla qubits to make our refrigerator proposal more efficient. We conclude in Sec.~\ref{sec:conclusion}. We investigate the quantum effects in our refrigerator with Heisenberg interaction and provide a brief quantum-classical comparison for our model in the appendix.
\section{Model system\label{mod}}
We consider a so-called ``spin-star'' system consisting of a single qubit surrounded by $N$ ancilla qubits, as illustrated in Fig.~\ref{fig:spinstar}. Interactions between the central qubit and the surrounding qubits are assumed to be identical, characterized by the coupling coefficient $g$. The energy gap of the qubit is denoted by $h$. We represent each qubit as an effective spin-$1/2$ particle and further assume that the qubit-qubit interactions involve only the $z$-components of the effective spins. The specification of the interaction direction is neither arbitrary nor made for simplicity. Transverse components cause correlations and entanglement in the eigenstates of the Hamiltonian, which is undesirable for our purpose of cooling the system. Further explanation of the harmful influence of transverse interactions on cooling is given in the appendix. The total Hamiltonian can be written as
\begin{equation}\label{eq:model}
\hat{H} = h\sum_{n=0}^{N}\hat{\sigma}_{z,n} + g~\hat{\sigma}_{z,0}\sum_{n=1}^{N}\hat{\sigma}_{z,n},
\end{equation}
where the indices $n=0$ and $n=1,2\dots N$ indicate the central qubit and surrounding qubits, respectively. $\hat\sigma_{z,0},\hat\sigma_{z,n}$ are the $z$-component Pauli spin operators.
As the Pauli spin operators are only for the $z$-components, the model can be considered a longitudinal Ising model~\cite{ising}, but with a spin-star configuration instead of a spin chain. Spin-star models are special cases of Richardson-Gaudin models, which are usually studied in the context of hyperfine interactions in semiconductor quantum dots~\cite{qdot1,qdot2} and as a toy model of non-Markovianity~\cite{nm-ss1,nm-ss2,nm-ss3}. However, the semiconductor quantum dot implementation of spin-star models is not relevant for our purposes, as it is based on Heisenberg interactions, which we discuss and rule out in the appendix. For a superconducting qubit implementation of our proposal, a generalization and re-configuration of the Chimera unit cell architecture used in D-Wave quantum annealers seems possible. This architecture makes use of orthogonally placed qubits overlapping each other and allows one to set couplers between horizontal and vertical qubits, which generate a longitudinal Ising interaction as we desire~\cite{dwave}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{spinstar-eps-converted-to.pdf}
\caption{Sketch of spin-star model consisting of a central spin-$1/2$ particle (blue sphere) surrounded by $N$ spin-$1/2$ particles (red spheres). Central spin is coupled to the surrounding spins with the same interaction coefficient $g$. The whole system is
in a homogeneous magnetic field $h$. We assume the interactions contain only longitudinal spin components, along the direction of the magnetic field. The spin-star model is used to describe an $(N+1)$-qubit quantum refrigerator, where each spin effectively represents a qubit with an energy gap $h$. When the system is in thermal equilibrium with an environment at temperature $T$, the central qubit is at a temperature lower than $T$. The central qubit can be used as the refrigerant to cool other quantum many-body systems (cf.~Fig.~\ref{fig:coll}).}
\label{fig:spinstar}
\end{figure}
\section{Results}
\label{sec:results}
In our numerical simulations, we will consider an artificial spin system, specifically a system of superconducting two-level systems (qubits). Efficient, compact, and fast cooling of such interacting superconducting qubits is a critical problem for practical quantum computation. Hence, we focus our range of parameters on this particular case, though our generic models, exact analytical results, and general conclusions apply to a broader class of physical systems. For brevity, we will refer to the ``effective spin'' representing a qubit simply as a ``spin'' in the following discussions. We take $\hbar=1$ and set $h=1\text{ GHz}$ for all of our calculations, as it is a typical order of magnitude for superconducting qubits~\cite{s-qubit}, and we will assume that $g$ can be of the order of $h$~\cite{coupling}.
\subsection{Thermal state for the spin-star model and effective temperature of the center qubit\label{th}}
\label{sec:results-centerSpinTemp}
The eigenstates of the Hamiltonian in Eq.~(\ref{eq:model}) are not entangled. Off-diagonal elements of the total density matrix vanish in the tensor product of the $z$-basis of each effective spin. Accordingly, we can treat the Ising spin-star model as a classical discrete system (with up and down spin states labeled by $z=+1$ and $z=-1$, respectively) and study the state probability distribution described by the diagonal elements of the total density matrix.
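As a quick numerical sanity check (a minimal sketch assuming Python with NumPy; the function name and parameters are illustrative), one can build the Hamiltonian of Eq.~(\ref{eq:model}) in the $z$ product basis and confirm that it is diagonal, i.e., that its eigenstates are unentangled product states:

```python
import numpy as np

def spin_star_hamiltonian(N, h, g):
    """Hamiltonian of the spin-star model in the 2^(N+1)-dimensional
    z product basis. Qubit 0 is the central qubit; 1..N are ancillas."""
    sz = np.diag([1.0, -1.0])
    I = np.eye(2)

    def op(single, site):
        # single-site operator embedded at `site` via Kronecker products
        mats = [single if k == site else I for k in range(N + 1)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = h * sum(op(sz, n) for n in range(N + 1))
    for n in range(1, N + 1):
        H += g * op(sz, 0) @ op(sz, n)
    return H

H = spin_star_hamiltonian(N=3, h=1.0, g=-1.0)
# H is diagonal in the product z-basis: no entanglement in eigenstates.
print(np.allclose(H, np.diag(np.diag(H))))  # True
```

The diagonality is what licenses the classical treatment of the state probability distribution in the remainder of this subsection.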
We consider the spin-star system immersed in a thermal environment at inverse temperature $\beta$, which is related to the environment temperature by $\beta = 1/k_B T$. We can define the partition function of the whole system by treating the up and down states of the central spin separately. Assuming the central spin is in the $z_0=\pm 1$ state, the partition function of a single ancilla spin equals that of a non-interacting spin with Hamiltonian eigenvalue $h\pm g$. The partition function of all ancilla spins is obtained simply by taking the $N^{\text{th}}$ power of the partition function of a single ancilla. Summing the partition functions of the ancilla spins for the up (down) state of the central spin with the factor $e^{-\beta h}$ ($e^{+\beta h}$), we find the partition function of the whole system to be
\begin{equation}
Z_{\text{tot}} = 2^{N}(e^{-\beta h}\cosh^N(\beta(g+h))+e^{\beta h}\cosh^N(\beta(h-g))).\label{ztot}
\end{equation}
The first term of Eq. (\ref{ztot}) corresponds to the up state of the central spin while the second corresponds to its down state. That remark allows us to give explicit expressions for the probabilities of the states of the central spin
\begin{eqnarray}\label{eq:populations}
P(z_0 = \pm 1) = \frac{2^N e^{\mp \beta h} \cosh^N(\beta(h\pm g))}{Z_{\text{tot}}}.
\end{eqnarray}
The effective (local) inverse temperature of the central qubit $\beta_\text{eff}$ as a function of its state populations is defined by
\begin{eqnarray}\label{betaeff}
\beta_{\text{eff}} &=& \frac{1}{2h}\ln\left(\frac{P(z_0 =-1)}{P(z_0 =1)}\right) \nonumber\\
&=&\frac{1}{2h}\left(2\beta h + N\ln\left(\frac{\cosh(\beta(h-g))}{\cosh(\beta(h+g))}\right)\right) \nonumber\\
&=& \beta + \frac{N}{2h}(\ln(\cosh(\beta(h-g)))-\ln(\cosh(\beta(h+g)))). \nonumber \\
\end{eqnarray}
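Equation~(\ref{betaeff}) can be checked against a brute-force enumeration of all $2^{N+1}$ classical spin configurations (a minimal Python/NumPy sketch; the parameter values are illustrative only):

```python
import numpy as np
from itertools import product

def beta_eff_analytic(beta, h, g, N):
    """Local inverse temperature of the central qubit (closed form)."""
    return beta + N / (2 * h) * (np.log(np.cosh(beta * (h - g)))
                                 - np.log(np.cosh(beta * (h + g))))

def beta_eff_brute_force(beta, h, g, N):
    """Enumerate all 2^(N+1) configurations z_n = +/-1 and accumulate
    the unnormalized probabilities of the two central-spin states."""
    P = {+1: 0.0, -1: 0.0}
    for conf in product([1, -1], repeat=N + 1):
        z0, anc = conf[0], conf[1:]
        E = h * sum(conf) + g * z0 * sum(anc)
        P[z0] += np.exp(-beta * E)
    return np.log(P[-1] / P[+1]) / (2 * h)

beta, h, g, N = 2.0, 1.0, -0.7, 5
print(np.isclose(beta_eff_analytic(beta, h, g, N),
                 beta_eff_brute_force(beta, h, g, N)))  # True
```

Note that for the ferromagnetic coupling $g<0$ used here the brute-force result indeed exceeds $\beta$, i.e., the central qubit is locally colder than the environment.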
\begin{figure}[t!]
\centering
\subfloat[\label{fig:2a}]{\includegraphics[width=\linewidth]{ising_refrigerator_teff_g-eps-converted-to.pdf}}
\qquad
\subfloat[\label{fig:2b}]{\includegraphics[width=\linewidth]{ising_refrigerator_teff_N-eps-converted-to.pdf}}
\caption{\label{fig:ising-teff}Ratio of the effective temperature $T_\text{eff}$ of the central qubit to the environment temperature $T$ in a longitudinal ferromagnetic Ising spin-star model with $h=1$ GHz for (a) $N=6$ ancilla qubits at different interaction strengths $g$, (b) $g=-h$ at different numbers of ancilla qubits $N$.}
\end{figure}
Taking the derivative of $\beta_{\text{eff}}$ with respect to the interaction strength $g$ here turns out to be insightful.
\begin{equation}
\frac{\partial \beta_{\text{eff}}}{\partial g} = \frac{-N\beta}{2h}(\tanh(\beta(h-g))+\tanh(\beta(h+g)))
\end{equation}
As $\tanh$ is a one-to-one odd function, setting the derivative to zero requires $h-g = -(h+g) = -h-g$, which is not satisfied for any value of $g$. Thus, $\beta_{\text{eff}}$ is a monotonic function of $g$ and evaluating the derivative for $g=0$ further shows that $\beta_{\text{eff}}$ is a monotonically decreasing function of $g$. For our purposes, this guarantees $\beta_{\text{eff}} > \beta$ when $g<0$, proving that our proposed setup manages to cool down the central qubit for ferromagnetic type interactions. Also, by monotonicity of $\beta_{\text{eff}}$ as a function of $g$, it keeps increasing while $g$ diverges towards $-\infty$, meaning that its limit at $-\infty$ is also its upper bound.
\begin{eqnarray}\label{eq:limitTeff}
\beta_{\text{max}} &=& \lim_{g \to -\infty} \beta_{\text{eff}} = \beta + \frac{N}{2h} \ln\left(\lim_{g \to -\infty} \frac{\cosh(\beta(h-g))}{\cosh(\beta(h+g))} \right) \nonumber \\
&=& \beta + \frac{N}{2h} \ln\left(e^{2\beta h} \right) = (N+1)\beta \label{betamax}
\end{eqnarray}
Fig.~\ref{fig:ising-teff} shows the ratio of the effective temperature $T_\text{eff}$ to the environment temperature $T$ for different interaction strengths $g$ and numbers of ancilla qubits $N$. The asymptotic theoretical limit of $T_\text{eff}$ in Eq.~(\ref{eq:limitTeff}) is approached faster with increasing $|g|$ in the low-$T$ regime, as shown in
Fig.~\ref{fig:2a}. Fig.~\ref{fig:2b} suggests that, towards $g\sim -h$, reasonably large values of $N$ can achieve an order-of-magnitude cooling of the central qubit relative to typical environment temperatures in superconducting circuits ($<20$ mK).
\subsection{A simple refrigeration cycle to cool the central qubit and its efficiency\label{sec:refrigeratorCycle}}
To cool the central qubit, we consider a cyclic transformation of the whole spin-star system in a single thermal environment.
The cycle begins with uncoupled qubits ($g=0$) in thermal equilibrium at the environment temperature $T$.
In the first step, the interaction is suddenly switched on so that there is no entropy change. At this stage, work is taken from the system, and there is no heat exchange with the environment.
The interacting qubits (spin-star system) are left to thermalize to $T$ in the second step. While the spin-star system is in thermal equilibrium with the environment at $T$, the central spin is not. The effective temperature of the central qubit is given by Eq. (\ref{betaeff}).
The third step consists of suddenly quenching the interaction ($g\rightarrow 0$) such that the state of the central qubit does not change. Under this assumption, the transitions and the associated changes in $T_\text{eff}$ are negligible. In general, preservation of the initial state under a sudden perturbation requires that switching the interaction on or off be much faster than any characteristic time scale of the system, which is $1/2h$ for the central qubit. This condition is relaxed in our case, as the longitudinal Ising interactions (cf.~Eq.~(\ref{eq:model})) cannot cause any excitations in the initial thermal state before the quench. We can still introduce a bound on the perturbation time $\tau$. In practice, the qubits may not be decoupled from the environment during the switching, and hence we require $\tau\ll 1/\kappa$, where $\kappa$ is the relaxation (thermalization) rate of the central qubit. Hence the central qubit remains cold at $T_\text{eff}$ for a duration set by the relaxation time $1/\kappa$. This
gives us a ``cooling window'' in which the central qubit can be used
as a refrigerant to cool a many-qubit system, by the collisional route to thermalization, as described in Sec.~\ref{many} following Ref.~\cite{ourpaper}.
The fourth step is the thermalization of non-interacting central and ancilla qubits by the environment, bringing the whole system back to the beginning of the refrigeration cycle.
The cooling of the central qubit is performed with an efficiency given by
\begin{equation}\label{effcy}
\varepsilon = \frac{E_s(\beta)-E_s(\beta_{\text{eff}})}
{W_{\text{cycle}}} = \frac{h(\tanh(\beta_{\text{eff}} h)
-\tanh(\beta h))}{W_{\text{cycle}}}
\end{equation}
where $E_s(\beta)=-h\tanh(\beta h)$ is the expectation of the bare central-qubit Hamiltonian at inverse temperature $\beta$, and $W_{\text{cycle}}$, defined in Eq.~(\ref{wcycle}), is the net work cost of turning the Ising interactions on and off. The interaction of the central qubit with the target many-body system at the end of the third step of the cycle does not affect the central qubit's cooling efficiency.
To calculate the efficiency defined in Eq.~(\ref{effcy}), we need the internal energy of the whole system at the end of each step of the cycle. The total energy is given by
\begin{equation}
E_{0}=-(N+1)h\tanh(\beta h)
\end{equation}
at the beginning of the cycle. After the sudden quench turning on the interaction, the state of the central qubit is preserved, while
the energy change equals the expectation of the interaction Hamiltonian in the initial state. As the state probability distributions of the qubits are independent, the total energy at the end of the first step, $E_{1}$, can be calculated as
\begin{equation}
E_{1}=-(N+1)h\tanh(\beta h) +\frac{gN(\cosh(2\beta h)-1)}{2\cosh^2(\beta h)}.
\end{equation}
We can calculate the energy of the interacting system in thermal equilibrium at the end of the second step by using the partition function in Eq.~(\ref{ztot}).
\begin{eqnarray}
&&E_{2} = -\frac{\partial \ln Z_{\text{tot}}}{\partial \beta} \nonumber \\
&&= \frac{-1}{e^{-\beta h}\cosh^N(\beta(g+h)) + e^{\beta h}\cosh^N(\beta(h-g))} \nonumber \\
&&\times(e^{-\beta h}\cosh^{N-1}(\beta(g+h))(N(g+h)\sinh(\beta(g+h))\nonumber\\
&&-h\cosh(\beta(g+h)))+e^{\beta h}\cosh^{N-1}(\beta(h-g))\nonumber\\
&&\times(N(h-g)\sinh(\beta(h-g))+h\cosh(\beta(h-g))))\nonumber\\
&&\label{etot}
\end{eqnarray}
Finally, we can calculate the total energy of the system, $E_{3}$ after turning off the interaction at the end of the third step by calculating the expectation of the interaction Hamiltonian and subtracting it from $E_{2}$.
\begin{eqnarray}
&&<\hat{H}_{\text{int}}> = \frac{-g}{\beta}\frac{\partial \ln Z_{\text{tot}}}{\partial g}\nonumber\\
&&=\frac{-gN}{e^{-\beta h}\cosh^N(\beta(g+h)) + e^{\beta h}\cosh^N(\beta(h-g))}\nonumber\\
&&\times(e^{-\beta h}\cosh^{N-1}(\beta(g+h))\sinh(\beta(g+h))\nonumber \\
&&-e^{\beta h}\cosh^{N-1}(\beta(h-g))\sinh(\beta(h-g)))\\
&&E_{3} = E_{2} - <\hat{H}_{\text{int}}> \label{eq2}
\end{eqnarray}
As Eqs.~(\ref{etot}) and (\ref{eq2}) are fairly long, we do not write down the explicit expression for the total work in a cycle and restrict ourselves to expressing it in terms of the energies at the different stages of the cycle.
\begin{eqnarray}
W_{\text{cycle}} &=& W_1 + W_2 = (E_{1}-E_{0})+(E_{3}-E_{2}) \nonumber\\
&=& \frac{gN(\cosh(2\beta h)-1)}{2\cosh^2(\beta h)} - <\hat{H}_{\text{int}}> \label{wcycle}
\end{eqnarray}
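The cycle quantities can be evaluated numerically without transcribing the long closed forms of Eqs.~(\ref{etot}) and (\ref{eq2}); in the minimal Python/NumPy sketch below (illustrative parameters), $E_2$ and $\langle\hat{H}_{\text{int}}\rangle$ are obtained from central finite differences of $\ln Z_{\text{tot}}$:

```python
import numpy as np

def cycle_quantities(beta, h, g, N):
    """Energies E_0..E_3, net work W_cycle, and efficiency of the
    refrigeration cycle. E_2 and <H_int> come from finite differences
    of ln Z_tot instead of the long closed forms."""
    def lnZ(b, gg):   # log of the total partition function Z_tot
        return N * np.log(2) + np.logaddexp(
            -b * h + N * np.log(np.cosh(b * (gg + h))),
            b * h + N * np.log(np.cosh(b * (h - gg))))

    eps = 1e-6
    E0 = -(N + 1) * h * np.tanh(beta * h)
    E1 = E0 + g * N * np.tanh(beta * h) ** 2      # sudden switch-on
    E2 = -(lnZ(beta + eps, g) - lnZ(beta - eps, g)) / (2 * eps)
    H_int = -(g / beta) * (lnZ(beta, g + eps)
                           - lnZ(beta, g - eps)) / (2 * eps)
    E3 = E2 - H_int                               # sudden switch-off
    W = (E1 - E0) + (E3 - E2)                     # W_cycle
    b_eff = beta + N / (2 * h) * (np.log(np.cosh(beta * (h - g)))
                                  - np.log(np.cosh(beta * (h + g))))
    effcy = h * (np.tanh(b_eff * h) - np.tanh(beta * h)) / W
    return E0, E1, E2, E3, W, effcy

E0, E1, E2, E3, W, eff = cycle_quantities(beta=2.0, h=1.0, g=-1.0, N=6)
print(W > 0 and 0 < eff < 1)  # True
```

For ferromagnetic $g<0$, the switch-on work $E_1-E_0$ is negative (work is extracted), while the net $W_{\text{cycle}}$ remains positive, as expected for a refrigerator.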
\begin{figure}[H]
\centering
\subfloat[\label{fig:3a}]{\includegraphics[width=\linewidth]
{ising_refrigerator_effcy_g-eps-converted-to.pdf}}
\qquad
\subfloat[\label{fig:3b}]{\includegraphics[width=\linewidth]
{ising_refrigerator_effcy_n-eps-converted-to.pdf}}
\caption{\label{fig:ising-eff}Efficiency $\varepsilon$ of the
refrigeration cycle defined in Eq. (\ref{effcy}) as a function of (a) interaction strength $g$ with $N=6$ and (b) number of ancilla qubits $N$ with $g=-h$ at different environment temperatures $T$ and for $h=1$ GHz.}
\end{figure}
The resulting efficiency for different numbers of ancilla qubits $N$ and different interaction strengths $g$ is plotted in Fig.~\ref{fig:ising-eff}. Fig.~\ref{fig:3a}
indicates that the efficiency decreases with increasing $|g|$. Comparing with Fig.~\ref{fig:2a},
we deduce that cooling to lower temperatures by increasing $|g|$ is not efficient.
A similar conclusion can be drawn for cooling by increasing $N$, after comparing Figs.~\ref{fig:2b} and~\ref{fig:3b}. An optimum strategy would be to use moderate $|g|$ and $N$ values, relative to the highest available ones, to cool to the target temperatures within acceptable efficiencies. For example, about an order of magnitude of cooling can be achieved at typical superconducting-qubit environment temperatures with $\sim 10\%$ efficiency for $g\sim -h/2$ and $N = 6$. In Sec.~\ref{ancilla-sec}, we will discuss exploiting the ancilla qubits to further increase the efficiency of the cooling cycle.
\subsection{Cooling of a many-body system with spin-star quantum refrigerators\label{many}}
We start the discussion of quantum many-body system cooling with a summary of the main results of Ref.~\cite{ourpaper}.
The proposed scheme in Ref.~\cite{ourpaper} addresses the general thermalization problem of a many-body system consisting of interacting qubits. The system qubits undergo repeated collisions with a set of ``bath'' qubits. The number of bath qubits depends on the number of transition frequencies of the many-body system. In practice, the scheme is suitable for cooling a small many-body system with a finite set of discrete eigenfrequencies. Fig.~\ref{fig:coll} shows a case where a two-qubit system is thermalized with the collision model. The system Hamiltonian is taken to be a longitudinal Ising model
\begin{eqnarray}\label{eq:targetModel}
H_\text{system}=\sum_{i=1}^2 h_i\sigma_{z,i}+J\sigma_{z,1}\sigma_{z,2},
\end{eqnarray}
which gives four transition frequencies $\omega_i$~\cite{ourpaper}. Here $h_i$ with $i=1,2$ are the resonant frequencies of the system qubits, and $J$ is the Ising coupling coefficient.
It is then sufficient to collide each system qubit with two bath qubits at different $\omega_i$. In the present case, where our purpose is to cool down the system, the bath qubits are the central qubits coming out of the spin-star refrigerators at the third stage of the refrigeration cycle described in Sec.~\ref{sec:refrigeratorCycle}. Different spin-star refrigerators at different $h_i\equiv \omega_i /2$ should be adjusted to cool down their
central qubits to the same $T_\text{eff}$ by using different $g_i$ (cf.~Eq.~(\ref{betaeff})).
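This tuning can be sketched concretely (Python/NumPy; the bisection routine and all parameter values are illustrative assumptions). Since the Pauli operators in Eq.~(\ref{eq:targetModel}) have eigenvalues $\pm 1$, the four transition frequencies are $\omega_i = 2|h_{1,2}\pm J|$; for each refrigerator with $h_i=\omega_i/2$ we then solve Eq.~(\ref{betaeff}) for the coupling $g_i$ that yields a common target $\beta_{\text{eff}}$:

```python
import numpy as np

def beta_eff(beta, h, g, N):
    # local inverse temperature of the central qubit, Sec. III A
    return beta + N / (2 * h) * (np.log(np.cosh(beta * (h - g)))
                                 - np.log(np.cosh(beta * (h + g))))

def tune_g(beta, h, N, beta_target, g_lo=-50.0, g_hi=0.0):
    """Bisect for g < 0 such that beta_eff equals beta_target.
    Requires beta < beta_target < (N + 1) * beta (the upper bound)."""
    for _ in range(200):
        g_mid = 0.5 * (g_lo + g_hi)
        if beta_eff(beta, h, g_mid, N) < beta_target:
            g_hi = g_mid   # not cold enough yet: strengthen |g|
        else:
            g_lo = g_mid
    return 0.5 * (g_lo + g_hi)

beta, N, beta_target = 1.0, 6, 3.0
h1, h2, J = 1.0, 1.3, 0.2        # target two-qubit Ising model
omegas = sorted({2 * abs(h1 + s * J) for s in (1, -1)}
                | {2 * abs(h2 + s * J) for s in (1, -1)})
g_list = [tune_g(beta, w / 2, N, beta_target) for w in omegas]
print([round(beta_eff(beta, w / 2, g, N), 6)
       for w, g in zip(omegas, g_list)])  # [3.0, 3.0, 3.0, 3.0]
```

Bisection is applicable because $\beta_{\text{eff}}$ is a monotonic function of $g$, as shown in Sec.~\ref{sec:results-centerSpinTemp}.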
\begin{figure}[t!]
\includegraphics[width=\linewidth]{cooling.png}
\caption{\label{fig:coll}Sketch of a Markovian collision model cooling a two-spin longitudinal Ising model described by the Hamiltonian in Eq.~(\ref{eq:targetModel}), with coupling strength $J$, using four spin-star quantum refrigerators labeled with $i=1..4$. Central qubits of the refrigerators are the refrigerants at effective inverse temperature $\beta_{\text{eff}}$. Central qubits are not resonant with the Ising model qubits, whose energy gaps are denoted by $h_1$ and $h_2$; instead, they are resonant with the transition frequencies $\omega_i$ ($i=1..4$) of the Ising model.
Spin-star model has longitudinal and homogeneous couplings $g_i$.}
\end{figure}
The derivation of the Markovian master equation in Lindblad form for a many-body collision model in Ref.~\cite{ourpaper} is based upon the set of standard assumptions of open quantum systems weakly coupled to large
reservoirs~\cite{breuer}. Starting with the Liouville-von Neumann equation for the system and environment coupling Hamiltonian $\hat{H}_I(t)$ in the interaction picture
\begin{eqnarray}
i\hbar\frac{\partial \rho}{\partial t} = [\hat{H}_I (t),\rho],
\end{eqnarray}
we integrate it over time. The zeroth- and first-order solutions for the system-bath density matrix $\rho$ are plugged into the expression, and a second-order time-dependent perturbative equation is obtained. Assuming negligible change in the bath and system states, neglecting the system-bath entanglement, and applying the secular approximation to the resulting equation yield the well-known Markovian master equation for a large bath weakly coupled to a system for a long time~\cite{breuer}.
The assumption of a large bath coupled to the system for a long time is in sharp contrast with short-time collisions with a single two-level system. Nevertheless, we showed that one can ignore the finite-time effects in the master equation under certain conditions~\cite{ourpaper}. First, the two-level system must be in resonance with one of the transition frequencies of the system. Second, the collisions must take a longer time than the inverse of the transition frequency in question~\cite{ourpaper}. The resulting equation is given for a single two-level target system. It can still be generalized to systems with arbitrarily many energy levels by interpreting the master equation in the subspace spanned by states separated by the resonant frequency, as all the off-resonant terms are neglected by the secular approximation. This leads to Lindblad dissipators in the following form for each collision
\begin{eqnarray}\label{eq:masterEqnManyBody}
D(\hat{\sigma}_{-},\hat{\sigma}_{+},\rho_s ) ~~&\propto&~~ (\rho_{gg}^{\text{bath}} (\hat{\sigma}_{-}\rho_{s}(t)\hat{\sigma}_{+} - \frac{1}{2}\{\hat{\sigma}_{+}\hat{\sigma}_{-},\rho_{s}(t)\}) \nonumber\\
&+& \rho_{ee}^{\text{bath}}(\hat{\sigma}_{+}\rho_{s}(t)\hat{\sigma}_{-} - \frac{1}{2}\{\hat{\sigma}_{-}\hat{\sigma}_{+},\rho_{s}(t)\})),\nonumber \\
&&
\label{me-tls}
\end{eqnarray}
where $\rho_{gg}^{\text{bath}}$ and $\rho_{ee}^{\text{bath}}$ are the ground and excited state populations of the colliding ``bath qubit'' (the central, refrigerant qubit of the spin-star system) whose resonance frequency $\omega_i$ coincides with one of the transition frequencies of the system. The jump operators $\sigma_{\pm}$ are for a system qubit. The density matrix of the many-qubit system is denoted by $\rho_s$.
Once the elimination of off-resonance terms is justified, the generalization to multiple transition frequencies is straightforward as the dissipators of collisions with different bath qubits are additive~\cite{ourpaper}. Each collision generates a term similar to
Eq.~(\ref{me-tls}), responsible for transitions between two states separated by the bath qubit's frequency. The collision model depicted in Fig.~\ref{fig:coll} for the target many-body system with Hamiltonian given by Eq.~(\ref{eq:targetModel}) gives rise to the master equation
\begin{eqnarray}\label{me-mbs}
\frac{d}{dt}\rho_s &\propto& \sum_{i=1}^{2}\sum_{\omega_i}\left(\rho_{g,\omega_i}D(\hat{\sigma}_{-i}^{\omega_i},\hat{\sigma}_{+i}^{\omega_i},\rho_s)\right.\nonumber\\
&+&\left.\rho_{e,\omega_i}D(\hat{\sigma}_{+i}^{\omega_i},\hat{\sigma}_{-i}^{\omega_i},\rho_s)\right),
\end{eqnarray}
where $\hat{\sigma}_{\pm i}^{\omega_i}$ are the single-qubit transition operators for the $i$-th bath qubit at resonance frequency $\omega_i$~\cite{ourpaper}. $\rho_{g/e,\omega_i}$ are the ground/excited state populations of the bath qubits with resonance frequencies $\omega_i$.
The thermal state of the target multi-qubit system is the unique equilibrium point of the collisional master equation, Eq.~(\ref{me-mbs}), when the generated transitions connect all of the states of the system~\cite{ggl}. The Kubo-Martin-Schwinger (KMS) conditions for the resulting master equation show that the target system's equilibrium temperature is the same as that of the refrigerant qubits $T_\text{eff}$ ~\cite{breuer,ourpaper}. In summary, we conclude that a thermalizing master equation can describe the interaction of the central qubits with the many-qubit system for the system to evolve into a thermal equilibrium state with the refrigerant central qubits out of spin-star refrigerators.
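A minimal numerical illustration of this thermalization is given below (Python/NumPy; a classical rate-equation reduction restricted to the populations, which is adequate here because the target model's energy eigenstates are product states; all rates and parameters are illustrative and dimensionless). Detailed balance with the refrigerant populations drives the two-qubit target of Eq.~(\ref{eq:targetModel}) to the Gibbs state at $\beta_{\text{eff}}$:

```python
import numpy as np
from itertools import product

# Populations-only reduction of the collisional dissipators for the
# two-qubit longitudinal Ising target. Illustrative parameters.
h1, h2, J, b_eff = 1.0, 1.3, 0.2, 3.0
states = list(product([1, -1], repeat=2))
E = {s: h1 * s[0] + h2 * s[1] + J * s[0] * s[1] for s in states}

W = np.zeros((4, 4))              # rate matrix, dp/dt = W p
for i, s in enumerate(states):
    for q in (0, 1):              # a collision flips qubit q
        t = list(s); t[q] = -t[q]; t = tuple(t)
        j = states.index(t)
        w = abs(E[t] - E[s])      # transition frequency of the flip
        p_e = 1.0 / (1.0 + np.exp(b_eff * w))   # refrigerant rho_ee
        rate = p_e if E[t] > E[s] else 1.0 - p_e
        W[j, i] += rate
        W[i, i] -= rate

p = np.full(4, 0.25)              # maximally mixed initial state
for _ in range(20000):            # crude Euler integration
    p += 0.01 * W @ p

gibbs = np.array([np.exp(-b_eff * E[s]) for s in states])
gibbs /= gibbs.sum()
print(np.allclose(p, gibbs, atol=1e-6))  # True: Gibbs state at b_eff
```

The excitation and de-excitation rates are weighted by the refrigerant populations at the matching frequency, exactly as in the dissipators above, so their ratio $e^{-\beta_{\text{eff}}\omega}$ enforces the KMS condition at $\beta_{\text{eff}}$.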
Although not depicted in our sketch, the effect of the environment at a temperature $T > T_{\text{eff}}$ during the collisions also needs to be considered in a real application. Despite this setback, an appropriate choice of collision times and strengths can still bring the target system to equilibrium at a temperature $T_{\text{eff}}<T_{\text{eq}}<T$ as the dissipators due to the environment and the refrigerant qubits are additive.
\subsection{Final state of ancilla qubits and using them to enhance cooling efficiency\label{ancilla-sec}}
So far, we were only interested in the central qubit and traced out the ancilla qubits in all of our calculations. We also defined the efficiency in Eq.~(\ref{effcy}) by excluding the energy change in the ancilla qubits. This may be a drawback for our proposal for large numbers of ancilla qubits and cooling to very cold temperatures because the work cost of the cycle in Eq.~(\ref{wcycle}) is roughly proportional to the number of ancilla qubits while the energy extracted from the central qubit gets more or less saturated in very cold temperatures. As a workaround to this problem, we propose two possible uses of the ancilla qubits to increase the cooling efficiency. The first one is to use them in collisions with the many-qubit system for a cooperative effect and the second one is to use them in a heat engine cycle to help with the work required for running the spin-star refrigerators.
\subsubsection{Cooperative cooling with ancilla qubits}
\label{sec:coopCool}
\begin{figure}[t!]
\centering
\subfloat[]{\includegraphics[width=\linewidth]{ising_refrigerator_polarization_g-eps-converted-to.pdf}}
\qquad
\subfloat[]{\includegraphics[width=\linewidth]{ising_refrigerator_polarization_n-eps-converted-to.pdf}}
\caption{\label{fig:ising-total}Ratio of the effective temperature $T_\text{eff,whole} = 1/k_B \beta_\text{eff,whole}$ of the whole spin-star system after turning off its Ising interactions, defined in Eq.~(\ref{beq}), to the environment temperature $T$ as a function of (a) interaction strength $g$ with $N=6$ and (b) number of ancilla qubits $N$ with $g=-h$. We take $h=1$ GHz.}
\end{figure}
Let's consider using the ancilla qubits together with the central qubit as the refrigerant of the spin-star refrigerator. The cooling dynamics of our scheme is described by a Markovian master equation in Eq.~(\ref{me-mbs}) with additive Lindblad dissipators for simultaneous collisions. When all the uncoupled qubits of the spin-star system in the third stage of the refrigerator cycle collide with a qubit of the target system simultaneously, the resulting master equation is a straightforward generalization of Eq.~(\ref{me-mbs}).
The coefficients of two Lindblad dissipators in Eq.~(\ref{me-mbs}) responsible for heating and cooling become the sum of excited and ground state populations of the spin-star qubits, respectively. Accordingly,
the multi-qubit system relaxes to a thermal state at temperature $T_\text{eff,whole}$ which now depends on $N$.
We can calculate $N_e$ and $N_g$ for a given set of spin-star qubits by using $N_e+N_g = N + 1$ and $N_e-N_g=\langle \hat{S}_z\rangle$ where
$\hat{S}_z = \sum_{n=0}^{N} \hat{\sigma}_{z,n}$ and
\begin{eqnarray}
\langle\hat{S}_z\rangle &=& \frac{-1}{\beta} \frac{\partial \ln Z_{\text{tot}}}{\partial h} = \frac{-2^N}{Z_{\text{tot}}}(e^{\beta h}\cosh^N (\beta(h-g))- \nonumber \\
&&e^{-\beta h}\cosh^N (\beta(h+g))+\nonumber \\
&&N(e^{-\beta h}\sinh(\beta(h+g))\cosh^{N-1} (\beta(h+g))+\nonumber \\
&&e^{\beta h}\sinh(\beta(h-g))\cosh^{N-1} (\beta(h-g)))).
\end{eqnarray}
$T_\text{eff,whole}$ is then given by
\begin{equation}\label{beq}
\beta_{\text{eff,whole}} = \frac{1}{k_B T_{\text{eff,whole}}} = \frac{1}{2h} \ln \left( \frac{N+1-\langle\hat{S}_z\rangle}{N+1+\langle\hat{S}_z\rangle}\right).
\end{equation}
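As a sanity check of Eq.~(\ref{beq}), the closed-form polarization above can be compared against a brute-force sum over spin configurations for small $N$. The sketch below is not part of the paper; it assumes $\hbar=k_B=1$ and the diagonal spin-star Hamiltonian $\hat{H} = h\hat{\sigma}_{z,0} + h\sum_n\hat{\sigma}_{z,n} + g\,\hat{\sigma}_{z,0}\sum_n\hat{\sigma}_{z,n}$.

```python
# Brute-force sanity check (illustrative, not from the paper) of Eq. (beq):
# exact <S_z> over all 2^(N+1) configurations of the diagonal spin star
# H = h*sz_0 + h*sum_n sz_n + g*sz_0*sum_n sz_n, with hbar = k_B = 1.
from itertools import product
from math import exp, log

def beta_eff_whole(N, h, g, beta):
    Z, Sz = 0.0, 0.0
    for s0, *anc in product((+1, -1), repeat=N + 1):
        m = sum(anc)                        # ancilla magnetization
        w = exp(-beta * (h * s0 + h * m + g * s0 * m))
        Z += w
        Sz += w * (s0 + m)
    Sz /= Z
    # Eq. (beq): polarization of the whole star -> effective inverse temperature
    return (1.0 / (2 * h)) * log((N + 1 - Sz) / (N + 1 + Sz))

# g = 0: N+1 free spins simply stay at the bath temperature
assert abs(beta_eff_whole(4, 1.0, 0.0, 0.7) - 0.7) < 1e-9
# ferromagnetic g = -h cools the collective polarization, more so for larger N
assert beta_eff_whole(4, 1.0, -1.0, 0.7) > 0.7
assert beta_eff_whole(6, 1.0, -1.0, 0.7) > beta_eff_whole(2, 1.0, -1.0, 0.7)
```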
Cooling of the many-qubit system with transition
frequencies $\omega_i$ requires collisions
with sets of spin-star refrigerant qubits with $2h_i=\omega_i$.
Each spin-star cluster, associated with a different $\omega_i$, must be at the
same $T_\text{eff,whole} = 1/k_B \beta_\text{eff,whole}$, which can be
satisfied by using $g_i$. Under this condition, $T_\text{eff,whole}$ will
be the temperature of the multi-qubit system in a steady state due to the
repeated simultaneous collisions with the sets of the spin-star qubits.
Fig.~\ref{fig:ising-total} shows $T_\text{eff}$ for an example, where $\omega_i = 2$ GHz
so that $h_i \equiv h = 1$ GHz for a particular set of spin-star qubits.
For a target $T_\text{eff}$ one can determine the required $g_i \equiv g$
from Fig.~\ref{fig:ising-total}. Comparison of Fig.~\ref{fig:ising-teff} with Fig.~\ref{fig:ising-total} indicates that using only central qubits
as the refrigerants of the spin-star quantum refrigerators yields colder $T_\text{eff}$ for the many-body system.
\begin{figure}[t!]
\centering
\subfloat[\label{fig:6a}]{\includegraphics[width=\linewidth]{ising_refrigerator_effcy_whole_g-eps-converted-to.pdf}}
\qquad
\subfloat[\label{fig:6b}]{\includegraphics[width=\linewidth]{ising_refrigerator_effcy_whole_n-eps-converted-to.pdf}}
\caption{\label{fig:effcy-total}Efficiency $\varepsilon_{\text{whole}}$ defined in Eq. (\ref{ewhole}) as a function of (a) interaction strength $g$ with $N=6$ and (b) number of ancilla qubits $N$ with $g=-h$ at different environment temperatures $T$. We take
$h = 1$ GHz.}
\end{figure}
As a concrete example of how limited this proposal is in terms of cooling the target many-body system, we observe from Fig.~\ref{fig:ising-total} that the ratio does not drop significantly below $0.5$ for reasonable coupling strengths, even with unrealistically large numbers of ancilla qubits. We therefore expect the relative advantage of using all the spin-star qubits to lie in the cooling efficiency. We define the efficiency of the cycle for cooperative cooling as
\begin{equation}\label{ewhole}
\varepsilon_{\text{whole}} = \frac{E_0-E_{3}}{W_{\text{cycle}}}
\end{equation}
where the numerator is the total energy loss of the spin-star system instead of the energy loss of the central qubit only as in Eq.~(\ref{effcy}), and the quantities $E_0$ and $E_3$ take the values calculated in Sec.~\ref{sec:refrigeratorCycle}. The resulting efficiency with all the spin-star qubits for different $N$ and $g$ is plotted in Fig.~\ref{fig:effcy-total}, which shows the anticipated increase in efficiency for all $N$ and $g$ compared to Eq.~(\ref{effcy}). By comparison with Fig.~\ref{fig:ising-eff}, the efficiency $\varepsilon_{\text{whole}}$ is several times higher than its counterpart $\varepsilon$ without the contribution of the ancilla qubits for most of the parameter choices. The increase of efficiency with the use of ancilla qubits is particularly high in Fig.~\ref{fig:6b}, up to an order of magnitude for $T=10~\text{mK}$, which corresponds to the regime $h \sim k_B T/\hbar$ and high numbers of ancilla qubits.
Based on our numerical results, we conclude that cooperative cooling with ancilla qubits always increases the efficiency but significantly raises the minimum achievable effective temperature, especially for high numbers of ancilla qubits, compared to the case where only the central qubit is used for cooling the target many-body system. This trade-off between cooling to very cold temperatures and efficiency, which manifests itself as the dynamical third law of both classical~\cite{3rdlaw-2} and quantum~\cite{3rdlaw-1} thermodynamics, is the main challenge of all refrigeration schemes, and it persists in our proposal. On the other hand, cooperative cooling makes the thermalization of the target many-body system at the temperature $T_\text{eff,whole}$ faster and more robust against the inevitable effects of the environment on the many-body system.
To address the trade-off between reaching very low temperatures and refrigeration with high efficiency, we also consider discarding some of the ancilla qubits. For this purpose, we calculate the expectation of the operator defined as $\hat{S}'_z = \sum_{n=1}^{N} \hat{\sigma}_{z,n}$ by expressing the total spin-star Hamiltonian and its partition function as
\begin{eqnarray}
\hat{H}_{\text{Ising}} &=& h_0~\hat{\sigma}_{z,0} + h_1\sum_{n=1}^{N}\hat{\sigma}_{z,n} + g~\hat{\sigma}_{z,0}\sum_{n=1}^{N}\hat{\sigma}_{z,n},\label{ising0} \\
Z_{\text{tot}} &=& 2^N (e^{-\beta h_0}\cosh^N(\beta(g+h_1))\nonumber \\
&&+e^{\beta h_0}\cosh^N(\beta(h_1-g))) .
\end{eqnarray}
We take $h_0=h_1=h$, which gives
\begin{eqnarray}
\langle\hat{S}'_z\rangle &=& \frac{-1}{\beta} \frac{\partial \ln Z_{\text{tot}}}{\partial h_1} = \frac{-2^N N}{Z_{\text{tot}}}(e^{-\beta h}\sinh(\beta(h+g))\nonumber \\
&&\cosh^{N-1}(\beta(h+g))+e^{\beta h}\sinh(\beta(h-g)) \nonumber \\
&&\cosh^{N-1} (\beta(h-g))).
\end{eqnarray}
As the spin-star Hamiltonian is symmetric with respect to permutations of ancilla qubits, all of the ancilla qubits have the same ground and excited populations, so that we can calculate the effective temperature of ancilla spins similarly to Eq.~(\ref{beq}) as
\begin{equation}
\beta_{\text{eff,ancilla}} = \frac{1}{k_B T_{\text{eff,ancilla}}} = \frac{1}{2h} \ln \left( \frac{1-\frac{\langle\hat{S}'_z\rangle}{N}}{1+\frac{\langle\hat{S}'_z\rangle}{N}}\right). \label{tenv}
\end{equation}
The resulting effective ancilla temperature is plotted in
Fig.~\ref{fig:ising-env}. It is always higher than the central qubit's effective temperature in Fig.~\ref{fig:ising-teff}, except for the trivial case of a single ($N=1$) ancilla qubit. Therefore, the excited population of the ancilla qubits is always greater than or equal to that of the central qubit.
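This ordering of effective temperatures can be verified numerically for small systems. The following sketch is illustrative (not from the paper), with $\hbar=k_B=1$ and the same diagonal spin-star Hamiltonian as before; it computes the central-qubit and per-ancilla polarizations by brute force and maps them to inverse temperatures as in Eq.~(\ref{tenv}).

```python
# Illustrative brute-force check (not from the paper) of the temperature
# ordering: the center spin ends colder than the ancilla spins for N > 1,
# and the two coincide in the symmetric N = 1 case. hbar = k_B = 1.
from itertools import product
from math import exp, log

def spin_star_betas(N, h, g, beta):
    Z = z0 = Sz_anc = 0.0
    for s0, *anc in product((+1, -1), repeat=N + 1):
        m = sum(anc)
        w = exp(-beta * (h * s0 + h * m + g * s0 * m))
        Z += w
        z0 += w * s0                  # center-spin polarization accumulator
        Sz_anc += w * m               # ancilla magnetization accumulator
    pol0, pol_anc = z0 / Z, Sz_anc / (N * Z)
    to_beta = lambda p: (1 / (2 * h)) * log((1 - p) / (1 + p))  # cf. Eq. (tenv)
    return to_beta(pol0), to_beta(pol_anc)

b_c, b_a = spin_star_betas(N=4, h=1.0, g=-1.0, beta=0.7)
assert b_c > b_a > 0.7                # center coldest; both colder than the bath
b_c1, b_a1 = spin_star_betas(N=1, h=1.0, g=-1.0, beta=0.7)
assert abs(b_c1 - b_a1) < 1e-12       # trivial N = 1 case: no ordering
```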
\begin{figure}[t!]
\centering
\subfloat[]{\includegraphics[width=\linewidth]{ising_refrigerator_polarization_env_g-eps-converted-to.pdf}}
\qquad
\subfloat[]{\includegraphics[width=\linewidth]{ising_refrigerator_polarization_env_n-eps-converted-to.pdf}}
\caption{\label{fig:ising-env}Ratio of the effective temperature of the ancilla qubits $T_{\text{eff,ancilla}} = 1/k_B \beta_{\text{eff,ancilla}}$ after turning off Ising interactions, defined in Eq.~(\ref{tenv}), to the environment temperature $T$ as a function of (a) interaction strength $g$ with $N=6$ and (b) number of ancilla qubits $N$ with $g=-h$. We take
$h = 1$ GHz.}
\end{figure}
Now, we can define an effective temperature of collective cooling when a number $n \leq N$ of the ancilla qubits are used as
\begin{equation}
\beta_{\text{eff,n}} = \frac{1}{k_B T_{\text{eff,n}}} = \frac{1}{2h} \ln \left( \frac{n+1-\frac{n\langle\hat{S}'_z\rangle}{N}+\tanh(\beta_{\text{eff}}h)}{n+1+\frac{n\langle\hat{S}'_z\rangle}{N}-\tanh(\beta_{\text{eff}}h)}\right).
\end{equation}
As $|\langle\hat{S}'_z\rangle/N|<\tanh(\beta_{\text{eff}}h)$, which can be seen by comparing
Figs.~\ref{fig:ising-teff} and~\ref{fig:ising-env}, we also have $\beta_{\text{eff}}>\beta_{\text{eff,n}}>\beta_{\text{eff,ancilla}}$. We can define the efficiency of the refrigeration cycle for the case of discarding some ancillae by ignoring the energy taken from these qubits, but the result is obvious: this efficiency lies between Eqs.~(\ref{effcy}) and (\ref{ewhole}).
\subsubsection{Ancilla qubits used as a cold bath for a quantum heat engine}
\label{sec:anc-engine}
Although using all qubits allows reasonable efficiency values in a specific temperature range, we propose another way of using the ancilla qubits to increase the efficiency. Since the central qubit's effective temperature decreases with the increasing number of ancilla qubits while the effective temperature of the ancilla qubits does not, we suggest that the central qubit be used to cool down a many-body system to a very cold temperature, while the ancilla qubits mimic a cold reservoir for an engine that ``recycles'' some of the work spent in the refrigeration cycle after the Ising interaction of the spin-star system is turned off in a thermalized state, which corresponds to the interval between the third and fourth steps of the cycle described in Sec.~\ref{sec:refrigeratorCycle}. Similar to the many-body cooling discussed in the previous section, the interaction of the ancilla qubits with this engine must take place on a timescale much smaller than the relaxation time of the qubits to the environment temperature. The efficiency of this proposal would depend on the type of engine in question, but a reasonable definition is
\begin{equation}
\varepsilon_{\text{re}} = \frac{h(\tanh(\beta_{\text{eff,center}} h)-\tanh(\beta h))}{W_{\text{cycle}}-W_{\text{engine}}}
\end{equation}
based on the efficiency definition in Eq.~(\ref{effcy}), to which it reduces when the engine contributes no work ($W_{\text{engine}}=0$).
To gain insight into how large $W_{\text{engine}}$ can get, it is useful to calculate the effective temperature of the ancilla qubits after tracing out the central qubit, by finding the ratio of the total ground and excited populations of the ancilla qubits. As all ancilla qubits are at the same effective temperature $\beta_{\text{eff,ancilla}}$, their collective effective temperature takes the same value; this argument also applies to cases where some of the ancilla qubits are discarded. Fig.~\ref{fig:ising-env} shows the equilibrium temperature when all of the ancilla qubits in the spin-star system are used for collisions with the engine as its artificial cold reservoir. The plot is somewhat similar to Fig.~\ref{fig:ising-total}, where the central qubit is included.
Now that we have some qualitative results on the effective temperature of the ancilla spins, we can comment in more detail on a possible engine working with them and its work production. As an analytically tractable~\cite{ottorev} and experimentally realizable model~\cite{iontrap}, we propose to use a quantum Otto engine with a harmonic oscillator as its working medium. For this engine, the environment would be the hot bath at the inverse temperature $\beta$ and the ancilla spins would be the cold bath at the inverse temperature $\beta_{\text{eff,ancilla}}$, using our previously proposed collision model~\cite{ourpaper}.
As the thermalization of a system happens asymptotically, with the number of collisions diverging to infinity, we assume that the number of ancilla spins $N$ is sufficiently large that they can bring the harmonic oscillator to their effective temperature with negligible deviation. To summarize the quantum Otto cycle: the harmonic oscillator, thermalized at the inverse temperature $\beta$ and the frequency $\omega_h$, is adiabatically driven to a lower frequency $\omega_c$, leading to a work extraction. Then, the harmonic oscillator is brought to the inverse temperature $\beta_{\text{eff,ancilla}}$ by collisions with the ancilla spins, and it is driven back to the frequency $\omega_h$, taking some work from outside and completing the cycle. However, we cannot suppress the effects of the environment at the inverse temperature $\beta$ during the adiabatic strokes in an experimental realization of this engine, so the adiabatic strokes must be implemented in short times for the effect of the environment on these steps to be negligible, making the strokes strongly non-adiabatic and reducing the efficiency~\cite{nonad-otto1}. Another widely studied modification of this cycle is to introduce squeezing in the hot reservoir~\cite{ottosq1}, which is shown to exceed the Carnot efficiency~\cite{ottosq2} and even reach unit efficiency for some choices of engine parameters~\cite{ottosq3}.
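For the idealized limit of this cycle (fully adiabatic strokes, no environment leakage), the per-cycle work and efficiency have the standard closed forms $W_{\text{engine}}=(\omega_h-\omega_c)(\bar n_h-\bar n_c)$ and $\eta=1-\omega_c/\omega_h$. The sketch below evaluates them for illustrative parameters, with the cold-bath inverse temperature standing in for $\beta_{\text{eff,ancilla}}$; all numerical values are assumptions, not taken from the paper.

```python
# Hedged sketch of the ideal (fully adiabatic) quantum Otto engine with a
# harmonic-oscillator working medium, hbar = k_B = 1. beta_c stands in for
# beta_eff,ancilla; all numerical parameters below are illustrative.
from math import exp

def n_bar(beta, omega):
    """Thermal occupation of a harmonic oscillator."""
    return 1.0 / (exp(beta * omega) - 1.0)

def otto_cycle(beta_h, beta_c, w_h, w_c):
    """Net extracted work and efficiency of the ideal quantum Otto cycle."""
    nh, nc = n_bar(beta_h, w_h), n_bar(beta_c, w_c)
    work = (w_h - w_c) * (nh - nc)   # W_engine; positive when beta_h*w_h < beta_c*w_c
    q_hot = w_h * (nh - nc)          # heat drawn from the hot bath
    return work, work / q_hot        # efficiency = 1 - w_c/w_h

W, eta = otto_cycle(beta_h=0.5, beta_c=1.5, w_h=2.0, w_c=1.0)
assert W > 0                         # positive-work condition satisfied
assert abs(eta - 0.5) < 1e-12        # Otto efficiency 1 - w_c/w_h
assert eta < 1 - 0.5 / 1.5           # below the Carnot bound 1 - T_c/T_h
```

In a realization with non-adiabatic strokes or environment leakage, $W_{\text{engine}}$ would fall below this ideal value.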
\section{Conclusion\label{sec:conclusion}}
In this work, we presented a way to cool down two-level systems using a finite number of ancilla spins and longitudinal ferromagnetic Ising interactions between center and ancilla spins. Our analytical calculations showed that the effective temperature of the center spin monotonically decreases with increasing magnitude of Ising interactions, and it asymptotically gets reduced by a factor of $N+1$, meaning that cooling the central spin by an order of magnitude with respect to its environment is not an unrealistic goal with currently available quantum technologies. We analyzed a simple refrigeration cycle in terms of its efficiency and proposed two different usages of ancilla spins after the refrigeration cycle to increase the efficiency. Based on our previous work \cite{ourpaper}, we also illustrated how our refrigerator for two-level systems could be a part of a many-body quantum system refrigerator, which is desirable in the context of quantum computation as pointed out in Refs.~\cite{ggl,metcalf2020} with different proposals of artificial environments for quantum many-body systems.
Blind Dog Surfboards: How strong is a hollow wood surfboard?
We are often asked, "How strong is it?" Our beach is really great: a wooden walkway for public access, and the best part is it has railings. So while checking out the surf, I'll put my longboard across the railings and sit, watching waves. Tom freaks out: "Won't it break?" Well, I haven't broken one yet, and I certainly wouldn't try this with a foam board.
Not only do we sit on our boards, we stand behind them. If you don't like it after 30 days, just give it back.
Here's a short video about board strength. Oh, and don't think they are tanks. This 9'5" shown here is 22 pounds and that Fish is 14 pounds.
On Second Thought: Refereeing in the Steelers/Bengals Week 2 Match
Posted on September 20, 2016 by Rebecca 10 comments
via Foxsports
There are so many things one could talk about in this game, good and bad. Some players would show up in both columns—Sammie Coates, for instance. His ability to stretch the field is awesome, but as the person formerly known as PaVaSteeler stated after the game, Coates appears to give inconsistent effort. However you view it, he certainly needed to have made more effort to wrest the ball from Dre Kirkpatrick. Sometimes your quarterback makes you look good, and sometimes you have to bail him out.
There was a whole lot to like about the defense, although the nit-packers will still shout, I'm sure, about how many yards they gave up and how few sacks they got.
But beyond the game itself there was a whole lot going on in terms of the officiating, with varying results. Everyone knew the refs were going to be extra vigilant, with the intention of preventing the sort of blood-bath which took place in the Wild Card game last January. As well they should. If I wanted to watch an all-out brawl, I wouldn't be turning to the NFL for my entertainment.
But in the end it turned out to be unnecessary. Whether it will be so again on Week 15 remains to be seen, although Steeler Nation will be pretty wroth should the refs not keep a particular eye on Vontaze Burfict. But Sunday's game was not marred by much in the way of extra-curricular activity, at least as far as I could tell. Ed Bouchette of the Pittsburgh Post-Gazette relayed the following story in yesterday's paper:
At one point Bengals cornerback Dre Kirkpatrick said something to Steelers guard Ramon Foster that actually can be printed in a family newspaper.
"Man," Kirkpatrick said, via Foster, "this is how this game is supposed to be played; a good, old-fashioned football game."
"And when you respect a team, the way we do," Foster told reporters, "and they respect us, that's the type of game you get. It was an old-school game."
Bouchette also mentioned that only ten penalties were accepted. (There were a few more which were declined, including, of course, the special teams play in which the Bengals committed three fouls.) But there was some amount of angst about the officiating nonetheless, including this query to Ray Fittipaldo in yesterday's Steelers chat:
Guest: Bad officiating both ways yesterday I thought, you?
Ray Fittipaldo: Yeah I didn't think it was a very good game for that crew. And as you mentioned there were some bad calls both ways.
So with that thought let's take a look at both the calls which were actually made and the ones which weren't. As I told a Bengals fan, I realize that I will be looking with Steelers-tinted glasses, but of course he will necessarily have a bit of an orange-striped cast to how he views it as well. This is what makes us fans.
And I won't be able to do so with the all-22 coaches film yet, as that doesn't come out until mid-week, but I will do my best with the broadcast film. Flagged plays in color of flagged team. Comments in italics.
10:31—Illegal shift penalty on CIN, declined. [It wasn't easy to see what was going on during the snap, but I didn't hear any complaints from the Bengals.]
7:00—False start penalty on PIT (Marcus Gilbert) [The commentators were busy wittering about the previous play and it wasn't even shown. Again, though, no complaints…]
3:46—TD, Xavier Grimble. [It wouldn't surprise me if there was some amount of grumbling (or grimbling, perhaps?) over whether this was a touchdown, especially in light of the incomplete ruling in the endzone later in the game against CIN. But watching it a couple of times, it is clear that the ball breaks the plane before either knee touches the ground or the ball comes loose. It was, of course, reviewed, as scoring plays are, and there shouldn't be any question about it.]
3:46—after the PIT extra point was kicked the announcers showed the play in slow motion, mentioning that 6'6″ Karlos Dunlap came close to getting a piece of it. [What the film also showed, and wasn't mentioned, was that Dunlap might well have been nicked for leverage, as he clearly pushed himself upward with the assistance of Ramon Foster's back.]
:13—pass to David Johnson for 5 yards. [Johnson was stopped by Rey Malaluga and Karlos Dansby, in both cases by smacking him in the helmet. No comment from anyone involved, and may have been considered incidental contact, but it didn't look great to me.]
15:00—hit on Ben Roethlisberger right after the throw by Karlos Dansby. [The commentators mention the hit "could get flagged" as it was a crown of the helmet hit into Ben's throat that made his head snap back. This is very likely one of the hits which sent him to the locker room temporarily—although we don't see it on the game broadcast. In the post-game presser Ben was asked whether his "trips" (plural) to the locker room were health or equipment related, and he said "both." I'm guessing he had at least two concussion tests, and I'm guessing that even though he apparently didn't show up concussed it possibly affected his clarity. He threw a fair number of wonky passes after that. The commentators also noted that the big QBs like Ben and Cam Newton don't get the flags the other QBs do. They continued to talk about it for the next couple of plays. The on-field announcer even mentioned that the Steelers keep track of these non-flagged hits in the QB room while they watch the game film.]
13:09—end around by Sammie Coates. Taken down by Adam Jones, who dives at Coates' knees. [Not cool. But the really not-cool part was Michael Johnson leading with his helmet, right into Sammie Coates' ear hole, as he was going down. No flag, no discussion.]
11:51—short pass to Darrius Heyward-Bey, who takes a vicious shot that knocks his helmet off from Adam Jones. [Commentator: "He recognizes this route right away and he breaks it up with a hit to the helmet." Um, okay…]
10:55—incomplete pass from Dalton—intended receiver "takes a shot after the ball passes him by." [Mike Mitchell made a "perhaps questionable late hit," hitting him on the shoulder with no head contact. Wasn't flagged, perhaps should have been.]
8:43—incomplete to Antonio Brown—announcers note "Brown is looking for a flag." He doesn't get it. Should he have? Well, the DB put his arm around AB's back in a way which would have been more appropriate had they been courting, but it was right as the ball arrived, and the ball was overthrown. Had that been flagged, I would definitely have felt it was a marginal call at best.
8:31—this is the infamous punt return which garnered three different CIN penalties. I couldn't see enough to say whether they all deserved being called, but that's an impressive amount of penalties to incur on a single play…
7:51—completion to Tyler Boyd. Ryan Shazier may or may not have had contact to Boyd's helmet. It was really difficult to see with the angles shown. No suggestion was made of an illegal hit.
5:59—unnecessary roughness penalty on CIN during punt. They didn't show it, but there was no suggestion the penalty was unwarranted.
5:05—pass incomplete to Eli Rogers. The covering DB from Cincy had his back to the ball and was all up in Rogers' grill. Presumably it wasn't really PI because the pass was short. But there was no way for the DB to know that.
1:30—pass incomplete to Sammie Coates. According to the announcers, Coates and Kirkpatrick were "tied up" about 10 yards downfield, but the officials "let them play" and the timing was thrown off, meaning the ball got farther downfield than Coates did. Seems like PI to me, but apparently not to the officials that day, at least for PIT.
1:24—unflagged holding by Ramon Foster, completion to DWill. Ben would possibly be dead now. I'm all in favor of that hold, but it should have been flagged.
:18—a little shove of A.J. Green after he ran out of bounds by Sean Davis. This wasn't flagged, and there wasn't a suggestion it should be, but it was extracurricular, as Green was clearly out of bounds as far as I could see.
11:43—horse collar tackle called on Javon Hargrave. No ambiguity there.
6:16—Dalton pass to Hill. Offensive holding called. There was also a bit of extracurricular activity after Hill was deflected out of bounds. James Harrison may have had something to do with it…
5:45—Delay of game penalty on CIN
3:57—incomplete to A.J. Green. PI called on Ross Cockrell. He stuck his arm out. Flag…
3:02—Artie Burns gets a PI on his contact with Brandon LaFell. Funny how much it looks like the incompletion in the 2nd quarter to Eli Rogers, except that the ball was overhead rather than short. This sets up Custer's (or Cameron's) Last Stand—first and goal at the one-yard line.
2:24—incomplete pass to Uzomah. This was controversial, but unchallenged by Marvin Lewis. Depending on the angle you look at, the receiver's knee was either down inside the end zone or on the white line. It was 2nd and goal. Had they punched it in on 3rd and goal no one would have said anything, but they didn't. Therefore the announcers took a good look at it later in the game and decided Lewis should have challenged. They felt it was a TD. Whether there was sufficient evidence to overturn the call on the field is interesting but moot.
1:36—Domata Peko grabs DWill's facemask on a run. No flag.
1:27—Bengals offsides penalty. They claimed it was Pouncey moving the ball which caused them to be offsides, but the officials didn't see it that way. However, the announcers showed Pouncey moving, so it looks like he got away with one.
1:08—Dre Kirkpatrick takes DWill down. By wrapping his arms around his neck. I don't get how using a guy's head to tackle him isn't a penalty if grabbing the back of his jersey neck is. Just sayin'…
12:16—pass to AB on sideline ruled a catch. The DBs thought he was out of bounds, but the official was a few feet away. Brown gets the first foot down about a foot inside and drags the other.
6:59—this non-PI call for AB was pretty bad. Karlos Dansby shoved AB to the ground in the end zone, according to the announcers, long before the ball was coming. The official is standing literally 3 feet away as AB lies face-down in the turf. This raises several questions. First of all, what is a linebacker doing covering AB? Second of all, why is it okay for him to shove the intended receiver to the ground? And finally, as has been speculated already, does AB's increasingly poor relationship with the referees have anything to do with him being unable to get a PI call? If so, perhaps he should shut his mouth and start sending flowers…
6:48—a good second or two after Dalton gets off a pass, Stephon Tuitt drills Andy Dalton, and Dalton's helmet comes off. The announcer first says "you're not supposed to hit the quarterback in the head" but then notes that Tuitt drilled him in the back. It still probably should have been called as a late hit, and wasn't.
3:34—Dalton pass to Bernard, runs it in for TD. James Harrison was being held like nobody's business, but as usual the refs don't make it their business either…
3:16—incomplete to AB. He was wearing Adam Jones. The announcers raved about how it was perfect coverage. It helps when you knock the receiver onto his backside. Amazing how often he misses the ball…
2:00—this is probably the most controversial call in the game. Tyler Boyd caught the ball, and as he was falling to the ground James Harrison knocked it loose. Robert Golden recovered it and headed upfield. The question is, was Boyd's knee down before the ball was popped out, or after? Dean Blandino, head of officiating for the NFL, actually put up a video explaining the result of the review (call on the field stands.) The issue was that whichever way it was called on the field was likely to stand, because there weren't good enough camera angles to say for sure what was happening during Boyd's fall. (This is despite the statements by the announcers that his right knee was definitely down, and the play was coming back.) If you just look at the play in real time (as the official who made the call was doing) it definitely looks like a forced fumble/recovery. Whether it actually was or not, I suppose we'll never know.
:14—delay of game on PIT. This was deliberate.
Remind me never to do this again. It took forever, mainly because I was trying very hard to actually be fair.
It was interesting, though. The QB hits offset, I suppose you could say, although I think there was another one on Ben that I couldn't see at all. The refs weren't calling helmet hits on other guys, apparently. The PIs definitely favored the Bengals. The remaining non-calls went both ways, although my impression was the Bengals got the better deal.
It's easy to say there are two calls which changed the course of the game, both in the Steelers' favor—the non-TD and the fumble near the end of the game which gave the ball back to the Steelers. The non-TD call took four points off the board for the Bengals. However, their coaching staff had the opportunity to challenge it but didn't, so it's hard to feel too sorry for them. The Boyd fumble that perhaps wasn't is less clear. Maybe the Bengals would have driven down the rest of the field for a touchdown, but save for the blown coverage on Bernard, the Steelers D had kept the Bengals out of the endzone all day.
The non-called PIs were also game-changers that killed drives in most cases. You can never know how things might have come out differently, as just noted in re the fumble question, but there's a reasonable chance one of them might have led to a touchdown instead of a punt.
All in all, I would say the officiating wasn't very good. But I certainly wouldn't say it clearly favored the Steelers over the Bengals, or vice versa. But feel free to analyze my findings and tell me differently.
tagged with Antonio Brown, Cameron Heyward, DeAngelo Williams, James Harrison, Mike Mitchell, Ryan Shazier
mtsnot
There seems to be more griping throughout the league than typical so far this season about officiating. Part of that I think is the focus on player safety rules changes causing much more focus to be put on the refs. That doesn't take away how I feel the game was officiated. I thought they missed way too much to be considered a good job. Hopefully the refs will settle down and do a better job as the season continues.
For an organization focused on player safety, an awful lot of dangerous hits are not getting flagged. Hopefully there will be letters and fines.
On the old site, I posted the five missed calls I had thought were most egregious but I forgot about Tuitt drilling Dalton in the back on a late hit.
This game represented some of the poorest, though not overly biased, officiating I have seen in the NFL. The only bias was the Steelers' ability to overcome the bad calls due to their timing, nature and moral superiority (I might be wearing black and gold glasses when I say that last part).
This is one of the issues that turns me off the NFL. I nearly quit watching in the past (but I was weak) and it will probably be the wheat based, agricultural by-product that puts the dromedary on permanent compensation.
Did I get them all?
Well, I'll give the refs some credit that no one was killed or seriously injured, even though Ol' Jupiter Pluvius has something to do with that. It's harder to get the traction you need for a head start to launch yourself in a downpour, and your cleats won't get caught in the turf so there are fewer broken ankles, etc;
As far as "the knee was down," people are ignoring the fact that a play is NOT automatically over when the knee is down. Let me explain in a way that even those CincyJungle folks can understand.
If you have the football and you slip and fall and no one touches you, you can get up and keep running. The play is not over until someone touches you when you are on the ground. Got that? Of course. We've all seen that.
The other side of that coin is that you can be flat on the ground, holding the ball in the air, and if a defender swats the ball out of your hand without anyone touching you, it's a fumble. After all, if no one touched you, the play is still alive and you can get up and run.
So, according to the rule, if Deebo hit the ball first and the ball began moving before he touched Boyd's body, the fumble technically began while the play was still alive, whether or not Boyd's knee was down.
The ruling on the field stands.
Props for use of the word "wroth" in an article about football 🙂
Bill S
September 20, 2016 12:25 pm
"I don't get how using a guy's head to tackle him isn't a penalty if grabbing the back of his jersey neck is. Just sayin'…"
I thought the EXACT SAME THING.
On Cockrell's PI the WR was tugging the back of his jersey, which is likely why he stuck his arm out.
And about 4-5 times way OOB the Bungals defenders were rag-dolling D-Will by rolling him over an extra time. Sort of reminiscent of a tackle last season by a Bengal.
Giambattista Giraldi, known as Cinzio (Lat. Geraldus Cinthius) (born 1504 in Ferrara; died 30 December 1573 in Ferrara), was an Italian poet, writer, philosopher, and physician.
Life
Giraldi studied at the University of Ferrara and became a professor of philosophy and medicine there. In 1543 Duke Ercole II d'Este appointed him his secretary, a post he held until the death of that prince in 1559, whom he glorified in an unfinished epic. Disputes with the privy secretary of Duke Alfonso II, Giovanni Battista Pigna (1530–1575), led him to give up his position and leave Ferrara. He went to Mondovì, where he became a professor of rhetoric, moved to Pavia in the same capacity in 1569, and finally returned to Ferrara, where he died on 30 December 1573.
The most notable of his works is the Degli Hecatommithi (100 novellas), drawn on extensively by Shakespeare, in which he sought to keep out anything offensive but showed little higher poetic gift or finer taste. Beyond these, his Tragedie met with the greatest acclaim. With Egle he also attempted the ancient genre of the satyr play. He published sonnets and canzoni under the title Le fiamme.
Works
Poemata (Basel 1540)
Egle (Ferrara 1546)
Le fiamme (Venice 1548, 2 vols.)
L'Ercole (Modena 1557)
Degli Hecatommithi (Mondovì 1565)
Tragedie (Venice 1582, 2 vols.)
Scritti estetici (Milan 1864, 2 vols.)
Taliban acknowledge struggle for recognition of Afghan govt
Team Las Vegas News September 7, 2022
ISLAMABAD (AP) — The Taliban-appointed foreign minister acknowledged Wednesday that the former insurgents' year-old government in Afghanistan remains isolated. But he claimed it is able to conduct business and trade internationally as if it were officially recognized on the global stage.
The remarks by Amir Khan Muttaqi underscored the struggles faced by the Taliban since they seized power and overthrew a Western-backed government in August 2021. They have since been trying to transition from insurgency and warfare to governing amid an economic downturn that has driven millions more Afghans into poverty and even hunger.
The international community, wary of the Taliban's harsh rule when they were last in power more than 20 years ago, has withheld official recognition and Afghanistan's assets abroad have been frozen.
Most foreign delegations visiting Afghanistan since the Taliban takeover have been bringing in humanitarian assistance, but the flow of foreign aid has slowed to a trickle.
"This is true that no country has made an announcement of official recognition of the new government of Afghanistan," Muttaqi told reporters in the capital, Kabul.
He insisted, however, that "whatever interaction" is taking place between the Taliban and other countries is "official."
"Maybe they have some issues," he added, referring to the Taliban's treatment of women, minorities and other matters.
The international community has demanded the Taliban uphold women's rights, allow girls to go to school beyond sixth grade, and revoke their ban on women's full access to society and the right to work in all fields.
There are also other demands, such as rights for ethnic minorities and the establishment of an inclusive government — all points on which the Taliban have not responded despite their initial promises to the contrary.
Other countries, Muttaqi said, "behave with us like an official government."
Hard-liners appear to hold sway in the Taliban establishment — a year since the Taliban takeover, teenage girls are still barred from school and women are required to cover themselves head-to-toe in public, with only their eyes showing.
Muttaqi pointed to his own participation in several regional conferences and meetings, including in Pakistan, Moscow, Turkey, Qatar and China, and said "many other countries' delegations have come and visited Afghanistan."
"These were all official trips," he added.
Find health data at Childstats.gov, a clearinghouse for kid numbers
About Andrew Van Dam
Andrew Van Dam of The Wall Street Journal previously worked at the AHCJ offices while earning his master's degree at the Missouri School of Journalism.
Time to add another link to your "federal data clearinghouses" folder, if you haven't already. Childstats.gov, published by the Federal Interagency Forum on Child and Family Statistics, synthesizes data from the CDC, NCHS, National Children's Survey, AHRQ, Census and other specialized programs.
Photo by nasa hq photo via Flickr
The site is anchored by its annual report, "America's Children: Key National Indicators of Well-Being," and the easy-to-navigate nature of its databases seems to have already inspired some discussion on Twitter, particularly in relation to child homelessness.
Many of the data tools are simply links to general surveys (like AHRQ's National Healthcare Cost and Utilization Project) that just happen to contain child-related information, but there are some more specifically relevant data sources, the best of which I've listed below.
Data Resource Center for Child & Adolescent Health
The National Children's Study
The National Center for Education Statistics
Find Youth Info (findyouthinfo.gov)
Census Child Care data
This entry was posted in Children, Government, Health data, Health journalism, Public records, Tools and tagged ahrq, car, CDC, census, data, Health data, Health journalism, nchs on September 2, 2011 by Andrew Van Dam.
State-by-state data, the plug-n-play version
We write about state-by-state federal health statistics a lot here, but acknowledge that they can sometimes require basic spreadsheet and database skills, not to mention an understanding of statistics.
That's where the National Center for Health Statistics' Stats of the States pages come in. They have piles of neatly packaged and ranked PDFs on things like "Kidney Disease Mortality by State" and "Percentage of Births Born Preterm by State," and they even tidy it all up further by giving each state its own fact sheet full of ranks and numbers.
This is a site for the curious, as well as for folks who just need quick, clean numbers. Data-savvy reporters will already have their own ways of accessing all of this basic information, and would probably rather not deal with the PDF-entrapped numbers anyway. But, for what it is, it does the job nicely.
Related AHCJ tip sheets
Using the Census for health reporting
Finding patterns and trends in health data: Pivot tables in spreadsheets
Looking at Health Indicators by Zip Code
This entry was posted in Health data, Health journalism and tagged data, Health data, hhs, nchs on November 16, 2010 by Andrew Van Dam.
Forum offers stats on well-being of elderly
AgingStats.gov is an often-overlooked federal clearinghouse of aging-related data from the Federal Interagency Forum on Age-Related Statistics. It focuses on summary reports.
Its latest effort, Older Americans 2010: Key Indicators of Well-Being (174-page PDF), summarizes 37 key indicators it believes are broadly relevant and easy to understand. By my count, 24 of those are explicitly health-related.
Everything is illustrated with an abundance of charts and maps, and an emphasis on bulleted summary and analysis helps keep things accessible. Those looking for a deeper dive into the summary numbers will want to head to the appendix.
As part of its health sections, the report contains seven "Health Status" indicators, including chronic health conditions, depressive symptoms, sensory impairments and oral health, and functional limitations.
It also includes eight "Health Risks and Behaviors" – things like diet, air quality, mammography and vaccinations – and nine "Health Care" indicators, including expenditures, prescription drugs and residential services.
The forum, which nobody seems to refer to by the acronym FIFARS, has been around since 1986. Participants include the Census Bureau, a number of Health and Human Services departments (AHRQ, CMS, NCHS and others), HUD, the Bureau of Labor Statistics, the Department of Veterans Affairs, the EPA, the Office of Management and Budget, and the Social Security Administration.
Thanks to AHCJ member Eileen Beal for suggesting this as a tool other members might find helpful.
This entry was posted in Government, Health data, Health journalism, Studies, Tools and tagged ahrq, databases, Health data, hhs, nchs on August 11, 2010 by Andrew Van Dam.
Reporters use county rankings for analysis
On Feb. 17, rankings of the relative health of counties in each American state were released by the Robert Wood Johnson Foundation and the University of Wisconsin. The rankings used data from 13 distinct (mostly federal) sources, including the National Center for Health Statistics, the Census Bureau and the Dartmouth Atlas. With that data, researchers computed eight separate composite scores, which were then weighted to produce one overall score. The ratings are navigated by clicking through a national map to the state and county level. Enough clicks will even bring you to the raw data itself. The site only compares counties within a state, not states against each other, because data collection varies from state to state and isn't always standardized.
It's a combination of data, analysis and an intuitive interface, and journalists have been quick to localize the story. Many reporters reached beyond the easy numbers ("our county is 67th!") to use the system for deeper stories.
For example, Robin Erb of the Detroit Free Press dissected the ratings process and how individual factors and disparities played into them before launching into the standard state breakdown.
Writing for Health News Florida, David Gulliver took a broader state view and considered how various socioeconomic factors played into the rankings of Florida counties. Gulliver's analysis:
The strong-performing coastal counties, like Collier, St. John's Sarasota, Charlotte, Palm Beach and Broward, all benefit from having heavy concentrations of retirees who have guaranteed health care access via Medicare. …
[Dr. Kevin Sherin, director of public health for Orange County] said that in Florida's tourism and service industries, workers tend to be transient and less likely to have insurance or consistent primary care.
He noted the low-ranked counties were some of the poorest in Florida, like Union and Bradford in the rural north, and Glades and Okeechobee, with heavy populations of migrant workers. Those counties also tend to have more people who speak only Spanish, Creole or other languages.
Gulliver localized the story on a county level for his Sarasota Health News site.
In USA Today, Mary Brophy Marcus took the national view and looked for broad trends and generalizations. Marcus' story was accompanied by a map by Frank Pompa highlighting each state's healthiest and least healthy counties.
This entry was posted in Health data, Public health, Public records, Studies, Tools and tagged census, dartmouth atlas, Health data, Health News Florida, nchs, Public health, robert wood johnson foundation, Sarasota Health News, usa today on February 19, 2010 by Andrew Van Dam.
Patient 2.0 empowers patients, worries doctors
Writing for Time, Bonnie Rochman digs into the ramifications of patients sharing information and tips online, an "empowerment movement" she calls "Patient 2.0." In the piece, she profiles the newly created Society for Participatory Medicine, which "encourages patients to learn as much as they can about their health and also helps doctors support patients on this data-intensive quest," as well as PatientsLikeMe.com, a free service which makes its money by selling anonymized patient information.
Photo by presta via Flickr.
One private-sector initiative already has about 50,000 patients inputting their symptoms and treatment regimens and updating details of their disease progression. Wonder how others are coping with your particular ailment? PatientsLikeMe.com spells it out via color-coded charts and graphs. "When you need help, privacy is a terrible thing," says Jamie Heywood, who co-founded PatientsLikeMe in 2004 before his brother died of Lou Gehrig's disease, or ALS.
Rochman demonstrated the strength of PatientsLikeMe in an anecdote in which data from the site's users allowed administrators to reach clear conclusions about the effectiveness of lithium in the treatment of ALS six months ahead of the formal clinical trials that were testing the same thing.
While medical professionals like those at the Society for Participatory Medicine have embraced the patient power movement, "plenty of doctors are worried about the quality of the information that is being assessed as well as patients' ability to understand it," Rochman wrote. A few have taken it upon themselves to fill the gaps, banding together to weigh in on the effectiveness of certain off-label treatments via Twitter, and to produce patient seminars on the reasons for clinical trials and the efficacy of various treatments.
NCHS: Patient 2.0 most popular use of health tech by far
The National Center for Health Statistics recently (Feb. 2) released statistics for the first half of 2009 on "Health Information Technology Use Among Men and Women Aged 18-64." The stats show that "searching for health information online" is still the only use of health information technology embraced by a majority of American adults.
The numbers:
From January through June 2009, 51% of adults aged 18-64 had used the Internet to look up health information during the past 12 months.
Over 3% of adults aged 18-64 had used an online chat group to learn about health topics in the past 12 months.
Among adults aged 18-64, women were more likely than men to look up health information on the Internet (58.0% versus 43.4%) and were also more likely to use online chat groups to learn about health topics (4.1% versus 2.5%).
From January through June 2009, almost 5% of adults aged 18-64 had communicated with a health care provider by e-mail in the past 12 months.
During the first 6 months of 2009, 6% of adults aged 18-64 requested a refill of a prescription on the Internet, and almost 3% had made an appointment with a health care provider in the past 12 months using the Internet.
Among adults aged 18-64, women were more likely than men to request a prescription refill on the Internet (6.6% versus 5.3%), make an appointment using the Internet (3.5% versus 1.8%), and communicate with a health care provider over e-mail (5.6% versus 4.2%).
This entry was posted in Health data, Hospitals, Hot Health Headline and tagged health information technology, nchs, patient 2.0, time on February 5, 2010 by Andrew Van Dam.
package depauw.edu.myro.original;
import java.util.Random;
//import com.sun.speech.freetts.*; // for text-to-speech
/**
* Miscellaneous methods for Myro/Java programs
*
* @author Douglas Harms
* @version 1.0
*/
public class MyroUtils
{
// initialized at declaration because the static constructor below is commented out
private static Random _randomSeq = new Random();
private static boolean _newCountDown = true;
private static long _startTime;
// private static Voice _voice;
// // static constructor
// static
// {
// String VOICE_NAME = "kevin16";
//
// // initialize random number sequence
// _randomSeq = new Random();
//
// // initialize timeRemaining
// _newCountDown = true;
//
// // initialize text-to-speech
// VoiceManager voiceManager = VoiceManager.getInstance();
// _voice = voiceManager.getVoice( VOICE_NAME );
// _voice.allocate();
// }
/**
* Cause the current thread to sleep for numSeconds.
*
* @pre numSeconds >= 0.0
*
* @param numSeconds The length of time to sleep.
*/
public static void sleep( double numSeconds )
{
assert numSeconds >= 0.0 : "numSeconds must be >= 0.0";
try
{
Thread.sleep( (int)(numSeconds * 1000.0) );
} catch (InterruptedException e) {}
}
/**
* Returns a random integer within a specified range.
*
* @pre low <= high
*
* @param low Low end of range
* @param high High end of range
* @return A uniformly distributed random int between low (inclusive) and high (inclusive)
*/
public static int randomInt( int low, int high )
{
assert low <= high : "low cannot be greater than high";
return _randomSeq.nextInt( high-low+1 ) + low ;
}
/**
* Returns a random double in the range 0.0 (inclusive) and 1.0 (exclusive).
*
* @return A uniformly distributed random double between 0.0 (inclusive) and 1.0 (exclusive)
*/
public static double randomDouble( )
{
return _randomSeq.nextDouble();
}
/**
* Controls a while-loop for a specific number of seconds.
*
* @param seconds number of seconds to loop
* @return true iff the specified number of seconds has not elapsed
*/
public static boolean timeRemaining( double seconds )
{
if( _newCountDown )
{
_startTime = System.currentTimeMillis();
_newCountDown = false;
}
if( System.currentTimeMillis() <= _startTime+seconds*1000 )
return true;
else
{
_newCountDown = true;
return false;
}
}
// /**
// * Speak the passed string using the speech synthesizer.
// *
// * @param message The string to speak.
// */
// public static void speak( String message )
// {
// _voice.speak( message );
// }
}
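For illustration, here is a minimal, self-contained sketch of how these helpers are typically used in a countdown loop. The class MyroUtilsDemo and its locally re-declared methods are purely illustrative stand-ins; a real Myro program would simply call MyroUtils.timeRemaining and MyroUtils.randomInt directly.

```java
import java.util.Random;

// Stand-alone demo; randomInt and timeRemaining are re-declared locally so the
// example compiles on its own, but they mirror the MyroUtils methods above.
class MyroUtilsDemo {
    private static Random rnd = new Random();
    private static boolean newCountDown = true;
    private static long startTime;

    static int randomInt(int low, int high) {
        // uniformly distributed in [low, high]
        return rnd.nextInt(high - low + 1) + low;
    }

    static boolean timeRemaining(double seconds) {
        if (newCountDown) {                  // first call of a countdown arms the timer
            startTime = System.currentTimeMillis();
            newCountDown = false;
        }
        if (System.currentTimeMillis() <= startTime + (long) (seconds * 1000)) {
            return true;
        }
        newCountDown = true;                 // re-arm for the next countdown loop
        return false;
    }

    public static void main(String[] args) {
        int ticks = 0;
        while (timeRemaining(0.2)) {         // loop body runs for roughly 0.2 seconds
            ticks++;
        }
        System.out.println("loop iterations: " + ticks + ", die roll: " + randomInt(1, 6));
    }
}
```

Note that timeRemaining is stateful: the first call arms the timer, and the call that observes expiry re-arms it, which is why it works as a while-loop condition but only for one countdown at a time.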
The 2018 United Soccer League season was the eighth season of the United Soccer League, the professional second-division soccer league of North America. It was made up of thirty-three teams (31 from the United States and 2 from Canada).
Background
On , the league announced that the schedule would include 34 matches for each team.
After the 2017 season, three teams left the league. The Vancouver Whitecaps 2 ceased operations, with the parent club Vancouver Whitecaps deciding to affiliate with the incoming Fresno FC franchise, while the Rochester Rhinos announced they would go on hiatus for the 2018 season, as did Orlando City B. In November, the Seattle Sounders 2 moved to Tacoma, to Cheney Stadium, home of baseball's Tacoma Rainiers.
On , the league announced a new franchise based in Nashville, Tennessee, with an inaugural season planned for 2018. It nevertheless took until the following July for the terms of this franchise to be announced; it inherited the colors and logo of the Nashville FC side then playing in the NPSL, one of the two North American fourth divisions. In September, the new franchise changed its identity, taking the name Nashville SC.
On , Las Vegas obtained a new franchise beginning with the 2018 season; its name was revealed only in , when it was announced that the franchise would be called the Las Vegas Lights following a public vote. On , Fresno also obtained a new USL franchise for the 2018 season, to be named Fresno FC.
On , the USL announced the arrival of a new reserve team, that of Atlanta United, subsequently named Atlanta United 2 and based in Gwinnett County, for the 2018 season. The Harrisburg Islanders became Penn FC on . The next day, on , the North Carolina FC franchise left the NASL to join the USL, and then, on , Indy Eleven also joined the USL for the 2018 season.
Atlanta United 2, Indy Eleven, Nashville SC and North Carolina FC joined the Eastern Conference, while Saint Louis FC left that conference to join the Western Conference. Finally, Fresno FC and the Las Vegas Lights joined the Western Conference.
The thirty-three participating franchises
Map
Coaches and stadiums
Coaching changes
Competition format
The thirty-three teams are divided into two conferences: the Western Conference (17 teams) and the Eastern Conference (16 teams).
All teams play thirty matches, exclusively against teams from their own conference.
The top eight teams in each conference qualify for the playoffs. In every round, the team with the better regular-season record hosts its opponent.
In the event of a tie, the following criteria separate the teams:
Number of wins
Overall goal difference
Number of goals scored
Points earned against the conference's top four teams
Fair-play ranking
Coin toss
Regular season
Western and Eastern Conference standings
Results
Playoffs
Rules
Sixteen teams qualify for the playoffs (eight per conference). The playoff format is single elimination. For every match, the team with the better regular-season record hosts its opponent.
The USL Championship final takes place on the field of the team with the best regular-season record. The final is a single match, with extra time and, if necessary, a penalty shoot-out to separate the teams.
Bracket
Results
First round
East
West
Conference semifinals
East
West
Conference finals
East
West
2018 USL Cup
Individual statistics
Top scorers
Source: USL
Top assisters
Source: USL
Individual awards
Annual awards
Team of the Year XI
Monthly awards
Player of the Month
G=Goals; A=Assists; GWG=Game-Winning Goal; S=Saves; CS=Clean Sheet
Weekly awards
Player of the Week
G=Goals; A=Assists; GWG=Game-Winning Goal; S=Saves; CS=Clean Sheet
Appendices
Notes
References
External link
Official website
Q: systemd deletes sub-cgroups started by other services I have a service (HTCondor batch system), which is started as service unit within cpu,cpuacct and memory cgroup slices (CentOS 7 @ 3.10.0-*).
The service starts sub-processes (~~> batch jobs) for which it creates sub-slices, i.e., subdividing its parent resources. Without further interfering, the started processes are in the sub-slices
wc -l /sys/fs/cgroup/cpu,cpuacct/system.slice/condor.service/tasks
19
wc -l /sys/fs/cgroup/cpu,cpuacct/system.slice/condor.service/*/tasks
29 /sys/fs/cgroup/cpu,cpuacct/system.slice/condor.service/condor_var_lib_condor_execute_slot1_2@batch0311.desy.de/tasks
22 /sys/fs/cgroup/cpu,cpuacct/system.slice/condor.service/condor_var_lib_condor_execute_slot1_3@batch0311.desy.de/tasks
22 /sys/fs/cgroup/cpu,cpuacct/system.slice/condor.service/condor_var_lib_condor_execute_slot1_4@batch0311.desy.de/tasks
...
and as cross-check, the processes have their corresponding cgroups also in their process info, e.g.,
cat /proc/58683/cgroup
11:perf_event:/
10:memory:/system.slice/condor.service/condor_var_lib_condor_execute_slot1_6@batch0311.desy.de
9:devices:/system.slice
8:blkio:/system.slice/condor.service/condor_var_lib_condor_execute_slot1_6@batch0311.desy.de
7:cpuset:/
6:freezer:/system.slice/condor.service/condor_var_lib_condor_execute_slot1_6@batch0311.desy.de
5:hugetlb:/
4:cpuacct,cpu:/system.slice/condor.service/condor_var_lib_condor_execute_slot1_6@batch0311.desy.de
3:pids:/system.slice/condor.service
2:net_prio,net_cls:/
1:name=systemd:/system.slice/condor.service
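As an aside, each line of that /proc/&lt;pid&gt;/cgroup listing has the fixed form hierarchy-id:controller-list:path, so it can be inspected mechanically. A small illustrative sketch (plain Java; the class and method names here are made up, not part of any systemd tooling):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Splits "hierarchy-id:controller-list:path" lines (the /proc/<pid>/cgroup
// format shown above) into a controller -> cgroup-path map.
class CgroupLines {
    static Map<String, String> parse(String procCgroupText) {
        Map<String, String> byController = new LinkedHashMap<>();
        for (String line : procCgroupText.split("\n")) {
            String[] fields = line.trim().split(":", 3);   // id : controllers : path
            if (fields.length == 3) {
                for (String controller : fields[1].split(",")) {
                    // comma-joined controllers (e.g. "cpuacct,cpu") share one path
                    byController.put(controller, fields[2]);
                }
            }
        }
        return byController;
    }
}
```

Applied to the output above, this maps e.g. "cpu" and "cpuacct" to the condor_var_lib_condor_execute sub-slice while "cpuset" maps to "/", which is exactly the asymmetry described below.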
AFAIS, systemd seems to be not aware of the sub-slices as systemd-cgls shows the processes directly beneath the the parent unit's cgroup
systemd-cgls
...
├─condor.service
│ ├─ 781 /bin/bash ...foo...
│ ├─ 1596 condor_starter -f -a slot1_4 ...baz...
Now, when adding a new unit, reloading the systemd daemons and starting the new unit, all the job sub-cgroups disappear and their processes get attached to the parent cgroup.
wc -l /sys/fs/cgroup/cpu,cpuacct/system.slice/condor.service/tasks
337 /sys/fs/cgroup/cpu,cpuacct/system.slice/condor.service/tasks
My assumption is that systemd is not aware of the sub-slices (guessing from systemd-cgls), while from the kernel's view these are proper cgroup slices. When starting the new unit, systemd notices the discrepancy from its expectations and 'cleans up'.
Can this behaviour somehow be avoided?
A: It looks like upstream since fixed this by specifying the Delegate= directive (commit 890186d82a – though specifying a subset of controllers would be a bit more elegant than simply true IMHO). If that update isn't propagated to the CentOS package, you can apply it locally with the following command:
systemctl set-property condor.service Delegate=true
A: The problem was that, by default, systemd assumes all sub-cgroups/slices are handled by itself and that a unit's processes have no control of their own.
When enabling delegation for a unit, systemd will not try to take control of the unit's sub-resources
[Service]
...
Delegate=true
(the [Slice] section might also be the right section, but apparently the right section depends on the release/kernel so #YMMV)
Note that the cgroups/slices shown by
systemd-cgls
and
systemd-cgtop
still differ: only systemd-cgtop shows the 'right' kernel view of cgroups, while systemd-cgls does not show any sub-hierarchy of slices even with delegation.
Q: Use a variable as a data source in an SSIS data flow In SSIS, I have a variable that takes its value from a query defined in an "Execute SQL Task" component. The variable's value is assigned correctly. What I then want to do, inside a "Data Flow" using an "ADO source" where the query is defined, is to use the previously assigned variable in the WHERE clause.
How can I do this? I have searched everywhere and can't find anything about it.
I have tried something like
SELECT col1,col2...
FROM tabla
WHERE col1 = @nombreVar.
A: In the data source (Source), choose the data access mode SQL Command, and in the WHERE clause indicate that you are going to use a parameter with a question mark ?
Select col1, col2, col3
from tabla
where col1 =?
Then click the Parameters button and there select the variable (User::NombreDeVariable), leaving Input as the direction
Q: Call onActivityResult for contact in OnCreate() Android I got this code from another question but I don't know how to call this onActivityResult() class in my onCreate() activity to display the first contact from my phone. Also, what does "if (requestCode == RQS_PICKCONTACT){" and "RQS_PICKCONTACT" stand for? Could someone please clarify?
public class MainActivity extends Activity {

    Button buttonReadContact;
    TextView textPhone;
    final int RQS_PICKCONTACT = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        buttonReadContact = (Button) findViewById(R.id.readcontact);
        textPhone = (TextView) findViewById(R.id.phone);
        buttonReadContact.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                // Start activity to get contact
                /*final Uri uriContact = ContactsContract.Contacts.CONTENT_URI;
                Intent intentPickContact = new Intent(Intent.ACTION_PICK, uriContact);
                startActivityForResult(intentPickContact, RQS_PICKCONTACT);
                */
                Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
                // BoD con't: CONTENT_TYPE instead of CONTENT_ITEM_TYPE
                intent.setType(ContactsContract.CommonDataKinds.Phone.CONTENT_ITEM_TYPE);
                startActivityForResult(intent, RQS_PICKCONTACT);
            }
        });
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (resultCode == RESULT_OK) {
            if (requestCode == RQS_PICKCONTACT) {
                Uri returnUri = data.getData();
                Cursor cursor = getContentResolver().query(returnUri, null, null, null, null);
                if (cursor.moveToNext()) {
                    int columnIndex_ID = cursor.getColumnIndex(ContactsContract.Contacts._ID);
                    String contactID = cursor.getString(columnIndex_ID);
                    int columnIndex_HASPHONENUMBER = cursor.getColumnIndex(ContactsContract.Contacts.HAS_PHONE_NUMBER);
                    String stringHasPhoneNumber = cursor.getString(columnIndex_HASPHONENUMBER);
                    if (stringHasPhoneNumber.equalsIgnoreCase("1")) {
                        Cursor cursorNum = getContentResolver().query(
                                ContactsContract.CommonDataKinds.Phone.CONTENT_URI,
                                null,
                                ContactsContract.CommonDataKinds.Phone.CONTACT_ID + "=" + contactID,
                                null,
                                null);
                        // Get the first phone number
                        if (cursorNum.moveToNext()) {
                            int columnIndex_number = cursorNum.getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER);
                            String stringNumber = cursorNum.getString(columnIndex_number);
                            textPhone.setText("0" + stringNumber);
                        }
                    } else {
                        textPhone.setText("NO Phone Number");
                    }
                } else {
                    Toast.makeText(getApplicationContext(), "NO data!", Toast.LENGTH_LONG).show();
                }
            }
        }
    }
}
A: When you call startActivityForResult(intent, requestCode), onActivityResult is called when the user comes back to the calling activity, with the following parameters:
requestCode
//You can start multiple activities by calling startActivityForResult so this value is to differentiate between them
resultCode
//This value is set by the called activity to indicate whether the intended operation was a success or not.
data
//this is an object of type Intent which contains data returned by called activity.
in your code when this part is executed:
Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
// BoD con't: CONTENT_TYPE instead of CONTENT_ITEM_TYPE
intent.setType(ContactsContract.CommonDataKinds.Phone.CONTENT_ITEM_TYPE);
startActivityForResult(intent, RQS_PICKCONTACT);
A new activity is started, and when the user comes back from that activity after selecting a contact, onActivityResult is called.
A: onActivityResult is called after you start the intent, i.e., once the user selects a contact.
You can change RQS_PICKCONTACT to any value you want, such as 2, 3, 4, or another number.
It simply serves as the requestCode identifier in onActivityResult, so you can handle each request as needed.
Elkhonon Goldberg (born 1946 in Riga) is a neuropsychologist and neurobiologist, working mainly on the lateralization of the human brain. His teacher was Aleksander Łuria.
Works
Elkhonon Goldberg. Contemporary Neuropsychology and the Legacy of Luria, Hillsdale, NJ: Lawrence Erlbaum, 1990.
Elkhonon Goldberg. The Executive Brain: Frontal Lobes and the Civilized Mind, NY: Oxford University Press, 2001; paperback 2002.
Elkhonon Goldberg. The Wisdom Paradox: How Your Mind Can Grow Stronger As Your Brain Grows Older, NY: Penguin, 2005; paperback 2006. UK edition: Free Press, Simon & Schuster, 2005.
American neurobiologists
American psychologists
Neuropsychologists
Born in 1946
\section{Introduction}
\label{sec:intro}
Latin hypercube designs are useful for numerical integration and emulation of computer experiments.
An $n\times p$ matrix is called a Latin hypercube design if each of its columns contains exactly one point in each of the $n$ bins $(0,1/n],(1/n,2/n],\cdots,((n-1)/n,1]$; that is, the design achieves univariate stratification.
\cite{mcKay1979a_compar} proposed a method to generate Latin hypercube designs.
\cite{stein1987large} showed that the variance of the sample mean based on Latin hypercube designs is reduced compared with independent and identically distributed sampling.
\cite{owen1992a_central} extended Stein's work and proved a central limit theorem.
\cite{loh1996on} provided some results about the multivariate central limit theorem and the convergence rate for the sample mean based on Latin hypercube designs.
Sliced Latin hypercube designs~\citep{qian2012sliced_Latin} are Latin hypercube designs that can be partitioned into several smaller Latin hypercube designs.
Such designs are appealing when computer simulations are carried out in batches, in multi-fidelity, or with both quantitative and qualitative variables.
In general, running a complex code under different parameter settings on different computers saves time; this practice is called running computer experiments in batches.
Each slice of a sliced Latin hypercube design can be assigned to one batch, so that both the design on each computer and the whole design achieve optimal one-dimensional uniformity.
Experiments with both quantitative and qualitative variables are also very common.
\cite{deng2015design} proposed a new type of designs, marginally coupled designs, for this problem.
Sliced Latin hypercube designs are also well suited to this problem: each slice of the design is assigned to one level combination of the qualitative variables.
While most existing methods generate designs with equal batch sizes,
in many applications sliced designs with unequal run sizes are needed.
For instance, when simulations are carried out from multiple computers, it is desirable to assign more runs to faster computers;
to integrate a computer model with one qualitative factor that is not uniformly distributed, it is most efficient to assign more runs to levels with higher probability;
to emulate computer experiments with tunable accuracy, it was suggested in \citet{he2017optimization} to use more low-accuracy runs than high-accuracy runs.
In this paper, we give, to the best of our knowledge, the first construction of sliced Latin hypercube designs that allow the run sizes to be chosen arbitrarily.
Before this work, \citet{Yuan:2017} and \citet{xu2018sliced} constructed sliced Latin hypercube designs with certain types of unequal run sizes.
Flexible sliced designs~\citep{kong2017flexible} allow flexibly chosen run sizes but are not Latin hypercube designs.
It is commonly believed that Latin hypercube designs with uncorrelated or nearly uncorrelated columns are more advantageous than average Latin hypercube designs~\citep{owen1994controlling}.
Inspired by the method of reducing correlations of equal-size sliced Latin hypercube designs~\citep{chen2018controlling},
we also provide an algorithm to reduce correlations of our proposed designs.
Numerical results suggest that this leads to improved performance in some circumstances.
The remainder of the article is organized as follows.
The constructions for sliced Latin hypercube designs with arbitrary run sizes are given in Section 2.
Section 3 provides an algorithm to reduce the column correlation of designs.
Section 4 gives some numerical illustrations.
Section 5 concludes this paper.
All proofs are deferred to the appendix.
\section{Construction}
\label{sec:con}
For $a \in R$, let $\lceil a\rceil$ denote the smallest integer no less than $a$.
We propose generating the sliced Latin hypercube design in $p$ dimensions with $t$ slices of sizes $n_1,\cdots,n_t$ by the following three steps.
\begin{itemize}
\item[Step 1:] Initialize $S_0=G_1=\cdots=G_t=\emptyset$.
\item[Step 2:] For $i$ from 1 to $n= \sum_{i=1}^t n_i$, let $S_{i,0} = S_{i-1} \cup \{i\}$ and compute
\[ \delta_i = \sum_{j=1}^{t} \left\{ \lceil n_j(i+1/2)/n\rceil - \lceil n_j(i-1/2)/n\rceil \right\}. \]
If $\delta_i > 0$, for $j$ from 1 to $\delta_i$,
let $k$ be the $j$th smallest integer of set $\{z:\lceil n_z(i+1/2)/n\rceil - \lceil n_z(i-1/2)/n\rceil = 1\}$
and $u$ be the smallest integer in $S_{i,j-1}$ such that $\lceil n_k(u-1/2)/n\rceil = \lceil n_k(i-1/2)/n\rceil$,
add $u$ to $G_k$, and let $S_{i,j} = S_{i,j-1} \setminus \{u\}$.
Let $S_i = S_{i,\delta_i}$ and continue to the next $i$.
\item[Step 3:] For $j$ from 1 to $t$, uniformly permute $G_j$ for $p$ times and obtain $h_{j,1},\cdots,h_{j,p}$ such that all permutations are carried out independently.
For $l$ from 1 to $p$, stack $h_{1,l},\cdots,h_{t,l}$ together, divide them by $n$, and subtract them by $1/(2n)$ to obtain the $l$th column of the design.
\end{itemize}
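As a concrete illustration, Steps 1--3 can be sketched in plain Python. The function names \texttt{slhd\_partition} and \texttt{slhd} are our own, not part of the paper, and exact integer arithmetic is used for the ceilings to avoid floating-point issues:

```python
import math
import random

def ceil_frac(num, den):
    # exact ceiling of num/den for positive integers
    return -(-num // den)

def slhd_partition(sizes):
    """Steps 1-2: split levels 1..n into groups G_1..G_t so that group k
    has exactly one level in each of its n_k equal-probability bins."""
    n, t = sum(sizes), len(sizes)
    S, G = [], [[] for _ in range(t)]      # S holds levels not yet assigned
    for i in range(1, n + 1):
        S.append(i)
        for k in range(t):                 # groups in increasing order of index
            lo = ceil_frac(sizes[k] * (2 * i - 1), 2 * n)   # ceil(n_k (i-1/2)/n)
            hi = ceil_frac(sizes[k] * (2 * i + 1), 2 * n)   # ceil(n_k (i+1/2)/n)
            if hi - lo == 1:               # group k closes a bin at level i
                u = next(u for u in S
                         if ceil_frac(sizes[k] * (2 * u - 1), 2 * n) == lo)
                G[k].append(u)
                S.remove(u)
    return G

def slhd(sizes, p, seed=0):
    """Step 3: permute each group independently for every column and map
    level h to the midpoint (h - 1/2)/n."""
    n, rng = sum(sizes), random.Random(seed)
    groups = slhd_partition(sizes)
    cols = []
    for _ in range(p):
        col = []
        for g in groups:
            h = g[:]
            rng.shuffle(h)
            col.extend(h)
        cols.append([(v - 0.5) / n for v in col])
    return [list(row) for row in zip(*cols)]   # n x p design, slices stacked
```

For instance, `slhd_partition([2, 5, 10])` yields $G_1=\{7,14\}$, $G_2=\{2,5,9,12,16\}$, and $G_3=\{1,3,4,6,8,10,11,13,15,17\}$, matching the worked example with $n_1=2$, $n_2=5$, $n_3=10$.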
To better understand the algorithm,
we now present a simple example.
\begin{example}\label{exa:1}
Consider $t = 3,$ $n_1=2$, $n_2=5$, $n_3=10$, $n=17$, and $p=3$.
Here, $(\delta_1,\cdots,\delta_n) = (0,1,2,0,1,0,2,0,2,2,0,1,0,1,1,0,3)$.
Since $\delta_1=0$, we have $S_1 = \{1\}$.
For $i=2$, $S_{2,0} = S_1 \cup \{2\} = \{1,2\}$, $\delta_i=1$,
$k=3$ is the only integer satisfying $\lceil n_k(i+1/2)/n\rceil - \lceil n_k(i-1/2)/n\rceil = 1$,
and $u=1$ is the smallest integer among the two integers satisfying $\lceil n_3(u-1/2)/n\rceil = \lceil n_3(i-1/2)/n\rceil$.
Thus, $S_2 = S_{2,1} = S_{2,0} \setminus \{1\} = \{2\}$ and we assign 1 to $G_3$.
For $i=3$, $S_{3,0} = \{2,3\}$, $\delta_i=2$,
and both $k=2$ and $k=3$ make $\lceil n_k(i+1/2)/n\rceil - \lceil n_k(i-1/2)/n\rceil = 1$.
We first set $k=2$ and find that $u=2$ is the smallest integer satisfying $\lceil n_2(u-1/2)/n\rceil = \lceil n_2(i-1/2)/n\rceil$.
Thus, $S_{3,1} = \{3\}$ and we assign 2 to $G_2$.
We then set $k=3$. Luckily, the only number in $S_{3,1}$, $u=3$, makes $\lceil n_3(u-1/2)/n\rceil = \lceil n_3(i-1/2)/n\rceil$.
Thus, $S_3 = S_{3,2} = \emptyset$ and we assign 3 to $G_3$.
After going through all $i$, we finally obtain $S_n=\emptyset$, $G_1=\{7,14\}$, $G_2=\{2,5,9,12,16\}$, and $G_3=\{1,3,4,6,8,10,11,13,15,17\}$.
Randomly permuting $G_1$, $G_2$ and $G_3$, we obtain $h_{1,1} = (7,14)$, $h_{2,1} = (12,2,16,9,5)$, and $h_{3,1} = (15, 6, 17, 11, 1, 13, 10, 3, 4, 8)$.
Thus, the first column of the final design is $(13, 27, 23, 3, 31, 17, 9, 29, 11, 33, 21, 1, 25, 19, 5, 7, 15)^{T}/34$.
Similarly, we can obtain other columns of the design.
\end{example}
\begin{remark}
The algorithm is valid only if in Step~2 there is at least one element in $S_{i,j-1}$ such that $\lceil n_k(u-1/2)/n\rceil = \lceil n_k(i-1/2)/n\rceil$.
Proposition~\ref{pro:set-non-empty:mid} below ensures this.
\end{remark}
\begin{proposition}
\label{pro:set-non-empty:mid}
For any $i = 1,\cdots,n$, $\delta_i >0$, and $j = 1,\cdots,\delta_i$,
there is at least one element of $S_{i,j-1}$ that makes $\lceil n_k(u-1/2)/n\rceil = \lceil n_k(i-1/2)/n\rceil$.
\end{proposition}
All of the proofs are given in the Appendix.
Theorem~\ref{the:slhd} below shows the generated designs are sliced Latin hypercube designs.
\begin{theorem}
\label{the:slhd}
Let $H$ denote an arbitrary column of a design generated from the proposed algorithm.
Then, (i) $H$ is a permutation of $\{1/(2n),3/(2n),\cdots,(2n-1)/(2n)\}$;
and (ii) for $i = 1,\cdots,t$, the $(\sum_{k=1}^{i-1} n_k +1)$th to the $(\sum_{k=1}^i n_k)$th entry of $H$ have exactly one element in each of the $n_i$ bins of $(0,1/n_i],\cdots,((n_i-1)/n_i,1]$.
\end{theorem}
\begin{remark}
In contrast to ``randomized'' Latin hypercube designs with entries that take arbitrary values in $[0,1]$, our algorithm yields ``midpoint'' Latin hypercube designs with entries that locate at the center of the bins of $(0,1/n],\cdots,((n-1)/n,1]$.
One can view our algorithm as assigning elements of the one-dimensional midpoint Latin hypercube design $\{1/(2n),\cdots,(2n-1)/(2n)\}$
to $G_1,\cdots,G_t$, such that each of the $n_i$ bins of $(0,1/n_i],\cdots,((n_i-1)/n_i,1]$ contains exactly one element of $G_i$ for $i=1,\cdots,t$.
We focus on midpoint designs because, unlike the case with equal run sizes, not every one-dimensional Latin hypercube design can be partitioned at will.
For instance, consider the case with $t=3$, $n_1=1$, $n_2=n_3=3$, $n=7$, and $H=(0.1,0.2,0.3,0.5,0.7,0.8,0.9)^T$.
It is not difficult to verify that each of the seven bins of $(0,1/7],\cdots,(6/7,1]$ contains exactly one point of $H$, but there is no partition of $H$ to $G_1$, $G_2$, and $G_3$ that fulfills the property of Theorem~\ref{the:slhd}(ii).
Interestingly, when $H = \{1/(2n), \cdots, (2n-1)/(2n)\}$, at least one valid assignment always exists,
and Theorem~\ref{the:slhd} is the first result indicating this.
Furthermore, from numerical results shown in Section~\ref{sec:sim}, midpoint Latin hypercube designs are usually as good as or even better than randomized Latin hypercube designs.
\end{remark}
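The impossibility claim in this counterexample, and the existence of a valid assignment for the midpoint grid, can both be checked by brute force. The sketch below uses our own helper names and is hardcoded to three groups, as in the remark:

```python
import math
from itertools import combinations

def stratified(pts):
    # True if the points have one element in each of the m bins
    # (0, 1/m], ..., ((m-1)/m, 1], where m = len(pts)
    m = len(pts)
    return sorted(math.ceil(v * m) for v in pts) == list(range(1, m + 1))

def has_partition(H, sizes):
    # brute force over all splits of H into three groups of the given sizes
    idx = set(range(len(H)))
    for a in combinations(sorted(idx), sizes[0]):
        rest = idx - set(a)
        for b in combinations(sorted(rest), sizes[1]):
            c = tuple(rest - set(b))
            groups = [[H[i] for i in g] for g in (a, b, c)]
            if all(stratified(g) for g in groups):
                return True
    return False

H = [0.1, 0.2, 0.3, 0.5, 0.7, 0.8, 0.9]        # the counterexample above
M = [(2 * i - 1) / 14 for i in range(1, 8)]    # the midpoint grid for n = 7
print(has_partition(H, (1, 3, 3)), has_partition(M, (1, 3, 3)))  # False True
```

The reason is visible by inspection: only $0.5$ falls in the middle bin $(1/3,2/3]$, so at most one of the two size-3 groups can be stratified, while the midpoint grid places three points in that bin and admits a valid split.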
\section{Reducing correlations}
\label{sec:corr}
\cite{chen2018controlling} gives a method to control column-wise correlations of sliced Latin hypercube designs.
In this section, we provide an algorithm to reduce the correlations between each column of the designs proposed in Section 2.
Let $D_{j,k}$ denote the $j$th slice of the $k$th column of $D$, a sliced design obtained from our algorithm in Section~\ref{sec:con}.
We can further reduce the correlations of $D$ using the following five steps.
\begin{itemize}
\item[Step 1:] For $j$ from $1$ to $t$, $k$ from $2$ to $p$, and $l$ from $1$ to $k-1$,
fit a simple linear regression model with $D_{j,l}$ being the response and $D_{j,k}$ being the only covariate besides the intercept,
and replace $D_{j,l}$ with the residual.
\item[Step 2:] For $j$ from $1$ to $t$, $k$ from $1$ to $p$, and $u$ from $1$ to $n_j$, use the $u$th smallest element of $G_j$, subtracted by $1/2$ and divided by $n$, to replace the $u$th smallest element of $D_{j,k}$.
\item[Step 3:] For $j$ from $1$ to $t$, $k$ from $p-1$ to $1$, and $l$ from $p$ to $k+1$,
fit a simple linear regression model with $D_{j,l}$ being the response and $D_{j,k}$ being the only covariate besides the intercept,
and replace $D_{j,l}$ with the residual.
\item[Step 4:] For $j$ from $1$ to $t$, $k$ from $1$ to $p$, and $u$ from $1$ to $n_j$, use the $u$th smallest element of $G_j$, subtracted by $1/2$ and divided by $n$, to replace the $u$th smallest element of $D_{j,k}$.
\item[Step 5:] Iterate Steps~1-4 nine more times.
\end{itemize}
Here, replacing $D_{j,l}$ with the residual means
$$D_{j,l}=D_{j,l}-(D_{j,k}-\bar{D}_{j,k})\rho(D_{j,k},D_{j,l})\sigma(D_{j,l})/\sigma(D_{j,k}),$$
where $\rho(D_{j,k},D_{j,l})$ is the sample correlation of $D_{j,k}$ and $D_{j,l}$, $\sigma(D_{j,k})$ and $\sigma(D_{j,l})$ are the standard deviations of the two vectors, and $\bar{D}_{j,k}$ is the mean of $D_{j,k}$ times a vector of ones.
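A minimal Python sketch of the residual replacement and of the rank-restoring Steps 2 and 4 follows; the helper names \texttt{take\_residual} and \texttt{rank\_restore} are ours, and the scaling convention chosen for $\sigma$ is irrelevant because it cancels in the ratio:

```python
def corr(x, y):
    # sample correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def take_residual(y, x):
    # y <- y - (x - mean(x)) * rho(x, y) * sigma(y) / sigma(x);
    # sigma's scaling cancels in the ratio, so root-sum-of-squares is used
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sdx = sum((a - mx) ** 2 for a in x) ** 0.5
    sdy = sum((b - my) ** 2 for b in y) ** 0.5
    b = corr(x, y) * sdy / sdx
    return [v - b * (a - mx) for v, a in zip(y, x)]

def rank_restore(col, grid):
    # Steps 2/4: the u-th smallest entry of col becomes the u-th smallest grid value
    order = sorted(range(len(col)), key=col.__getitem__)
    out, g = [0.0] * len(col), sorted(grid)
    for rank, idx in enumerate(order):
        out[idx] = g[rank]
    return out
```

For instance, with $D_{1,1}=(19,23,11,5,15,1)^T/26$ and $D_{1,2}=(15,23,11,5,1,19)^T/26$, \texttt{take\_residual} returns the residual vector $(0.7068, 0.7891, 0.4350, 0.2580, 0.6784, -0.0212)^T$ with $\rho = 0.2328$.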
\begin{remark}
\cite{chen2018controlling} controls the column-wise correlations of each slice separately and then combines the slices to obtain a sliced Latin hypercube design.
Similarly, we reduce the correlations within each slice and then combine the slices, so that each slice and the whole design remain Latin hypercube designs.
\end{remark}
\begin{remark}
The algorithm is said to converge if the root mean square correlation among columns, as defined in \cite{owen1994controlling}, stops decreasing.
The root mean square correlation is
\begin{equation*}
\rho_{\rm rms} (D)= \left(\frac{\sum_{1\leq j<k\leq p}(\rho(D_{:,j},D_{:,k}))^2}{p(p-1)/2}\right)^{1/2}
\end{equation*}
where $D$ is a design in $p$ factors, and $D_{:,j}$ and $D_{:,k}$ are the $j$th and $k$th columns of $D$, respectively.
We stop the above algorithm after 10 iterations because from our experience it already warrants convergence.
\end{remark}
We give an example to illustrate this algorithm.
\begin{example}
Consider $t=2$, $n_1=6$, $n_2=7$, and $p=3$. Using the algorithm in Section 2, we generate the initial design $D$:
\begin{equation}
\label{equ:initial-D}
D=1/26\times\left[
\begin{array}{cccccc|ccccccc}
19 & 23 & 11 & 5 & 15 & 1 & 25 & 9 & 7 & 3 & 17 & 13 & 21 \\
15 & 23 & 11 & 5 & 1 & 19 & 9 & 13 & 21 & 17 & 3 & 7 & 25 \\
11 & 15 & 19 & 5 & 23 & 1 & 17 & 21 & 9 & 25 & 7 & 13 & 3
\end{array}
\right]^T
\end{equation}
In Step 1, when $j=1$, $k=2$, and $l=1$, we have
$D_{1,1}=(19 ,23,11,5,15,1 )^T/26$ and $D_{1,2}=(15, 23, 11, 5, 1, 19)^T/26$.
Clearly,
$\sigma(D_{1,1})=\sigma(D_{1,2})=0.7189,\rho(D_{1,2},D_{1,1})=0.2328.$
Then, updating $D_{1,1}$ gives
\begin{equation*}
\begin{split}
D_{1,1}
&=D_{1,1}-(D_{1,2}-\bar{D}_{1,2})\rho(D_{1,2},D_{1,1})\sigma(D_{1,1})/\sigma(D_{1,2})\\
&=(0.7068, 0.7891, 0.4350 , 0.2580 , 0.6784, -0.0212)^T.
\end{split}
\end{equation*}
Similarly, when $j=1,k=3,l=1$, we have
$$D_{1,1}=(0.7632, 0.8196, 0.2606, 0.3710, 0.3170, 0.31464)^T.$$
Then, Step 1 gives that
\begin{equation*}
D=\left[
\begin{array}{ccc}
0.7633 & 0.5583 & 11/26 \\
0.8196 & 0.9218 & 15/26 \\
0.2606 & 0.5161 & 19/26 \\
0.3710 & 0.0900 & 5/26 \\
0.3170 & 0.1872 & 23/26 \\
0.3146 & 0.5727 & 1/26 \\
\hline
1.0273 & 0.3681 & 17/26 \\
0.4886 & 0.5476 & 21/26 \\
0.1816 & 0.7784 & 9/26 \\
0.3345 & 0.7271 & 25/26 \\
0.5279 & 0.0733 & 7/26 \\
0.4890 & 0.2656 & 13/26 \\
0.6050 & 0.8938 & 3/26
\end{array}
\right]
\end{equation*}
In Step 2, for $j = 1$, the grid values $(u-1/2)/n$ for $u \in G_1$ are $\{ 1/26, 5/26, 11/26, 15/26, 19/26, 23/26\}$;
thus $$D_{1,1}=(0.7632, 0.8196, 0.2606, 0.3710, 0.3170, 0.31464)^T$$ is replaced with
$$D_{1,1}=(19/26, 23/26, 1/26, 15/26, 11/26, 5/26)^T.$$
Similarly, we have
\begin{equation}
\label{equ:forward-D}
D=1/26\times\left[
\begin{array}{cccccc|ccccccc}
19 & 23 & 1 & 15 & 11 & 5 & 25 & 9 & 3 & 7 & 17 & 13 & 21 \\
15 & 23 & 11 & 1 & 5 & 19 & 9 & 13 & 21 & 17 & 3 & 7 & 25 \\
11 & 15 & 19 & 5 & 23 & 1 & 17 & 21 & 9 & 25 & 7 & 13 & 3
\end{array}
\right]^T
\end{equation}
after Step 2.
Finally, we obtain
\begin{equation}
\label{equ:final-D}
D=1/26\times\left[
\begin{array}{cccccc|ccccccc}
19 & 23 & 1 & 15 & 11 & 5 & 25 & 9 & 3 & 7 & 17 & 13 & 21 \\
15 & 23 & 11 & 1 & 5 & 19 & 13 & 9 & 17 & 21 & 3 & 7 & 25 \\
11 & 15 & 19 & 5 & 23 & 1 & 21 & 17 & 7 & 25 & 9 & 13 & 3
\end{array}
\right]^T
\end{equation}
Figure \ref{fig:reduce-corr} shows that the root mean square correlations of each slice and of the whole design decrease markedly.
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{reduce.eps}
\caption{Root mean square correlations of each slice and of the whole design over the iterations}
\label{fig:reduce-corr}
\end{figure}
\end{example}
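The decrease reported in the example can also be verified numerically. The sketch below (plain Python; \texttt{rho\_rms} implements the root mean square correlation defined above, and the $1/26$ scaling is dropped since correlations are scale-invariant) computes $\rho_{\rm rms}$ of the whole design before and after the correlation-reduction steps:

```python
def corr(x, y):
    # sample correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def rho_rms(cols):
    # root mean square of all pairwise column correlations
    p = len(cols)
    rs = [corr(cols[j], cols[k]) for j in range(p) for k in range(j + 1, p)]
    return (sum(r * r for r in rs) / len(rs)) ** 0.5

# columns of the initial and final 13-run designs from the example (times 26)
cols0 = [[19, 23, 11, 5, 15, 1, 25, 9, 7, 3, 17, 13, 21],
         [15, 23, 11, 5, 1, 19, 9, 13, 21, 17, 3, 7, 25],
         [11, 15, 19, 5, 23, 1, 17, 21, 9, 25, 7, 13, 3]]
cols1 = [[19, 23, 1, 15, 11, 5, 25, 9, 3, 7, 17, 13, 21],
         [15, 23, 11, 1, 5, 19, 13, 9, 17, 21, 3, 7, 25],
         [11, 15, 19, 5, 23, 1, 21, 17, 7, 25, 9, 13, 3]]
print(rho_rms(cols0), rho_rms(cols1))   # roughly 0.149 versus 0.083
```

The whole-design $\rho_{\rm rms}$ drops from about $0.149$ to about $0.083$, in line with the reduction displayed in the figure.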
\section{Numerical comparison}
\label{sec:sim}
We now demonstrate the usefulness of our proposed sliced designs in numerically integrating the two test functions used in \citet{qian2012sliced_Latin},
\begin{eqnarray*}
f_1(x) & = & \log(x_1x_2x_3x_4x_5), \\
f_2(x) & = & \log\left( x_1^{-1/2} + x_2^{-1/2} \right).
\end{eqnarray*}
Assume we have $t$ computers to evaluate the functions and, owing to time constraints, we can arrange at most $n_1,\cdots,n_t$ runs on the computers, respectively.
We have at least three choices to solve the problem, as follows.
First, we can use a single design with $n=\sum_{i=1}^t n_i$ runs and assign the runs randomly to the $t$ computers.
We consider using an ordinary Latin hypercube design~\citep{mcKay1979a_compar}, its midpoint modification, and its correlation-controlled extension~\citep{owen1994controlling} for this approach.
Second, we can independently generate $t$ Latin hypercube designs with $n_1,\cdots,n_t$ runs, respectively, and assign one design to each computer.
Third, we can use a flexible sliced design~\citep{kong2017flexible} or our newly proposed sliced Latin hypercube design and assign one slice to each computer.
The methods under comparison are as follows.
\begin{itemize}
\item[\textbf{RLH}] single randomized Latin hypercube design with $n$ runs;
\item[\textbf{MLH}] single midpoint Latin hypercube design with $n$ runs;
\item[\textbf{CLH}] single correlation-controlled Latin hypercube design with $n$ runs;
\item[\textbf{IMLH}] $t$ independent midpoint Latin hypercube designs with $n_1,\cdots,n_t$ runs, respectively;
\item[\textbf{ICLH}] $t$ independent correlation-controlled Latin hypercube designs with $n_1,\cdots,n_t$ runs, respectively;
\item[\textbf{FSD}] flexible sliced design in $t$ slices, and its $i$th slice contains $n_i$ runs;
\item[\textbf{SLH}] the proposed sliced Latin hypercube design in $t$ slices, and its $i$th slice contains $n_i$ runs;
\item[\textbf{CSLH}] the proposed sliced Latin hypercube design with reduced correlations in $t$ slices, and its $i$th slice contains $n_i$ runs.
\end{itemize}
Under all approaches, we estimate the mean function value using the averaged output value among completed computer trials.
We compare the methods using two scenarios.
In the first scenario, all of the functional evaluations terminate correctly and we obtain all $n$ output values.
In the second scenario, one random computer fails and we obtain all other output values.
For $f_1$, we assume $t=4$, $n_1=17$, $n_2=13$, $n_3=11$, and $n_4=7$;
for $f_2$, we assume $t=3$, $n_1=9$, $n_2=7$, and $n_3=6$.
We repeat the procedure 10,000 times and report the averaged root-mean-square estimation error in Table~\ref{tab:result}.
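To give a feel for the variance reduction at play, the following self-contained sketch (our own code, not the simulation behind the table) compares a single midpoint Latin hypercube design with iid sampling for estimating the mean of $f_2$ at $n=22$:

```python
import math
import random

def f2(x1, x2):
    return math.log(x1 ** -0.5 + x2 ** -0.5)

def mean_f2(points):
    return sum(f2(a, b) for a, b in points) / len(points)

def midpoint_lhd(n, rng):
    # each column is an independent random permutation of the midpoints (h + 1/2)/n
    cols = []
    for _ in range(2):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(h + 0.5) / n for h in perm])
    return list(zip(*cols))

rng = random.Random(0)
# high-accuracy reference value; 1 - random() keeps draws in (0, 1]
truth = mean_f2([(1 - rng.random(), 1 - rng.random()) for _ in range(200_000)])

n, reps = 22, 800
iid = [mean_f2([(1 - rng.random(), 1 - rng.random()) for _ in range(n)])
       for _ in range(reps)]
lhd = [mean_f2(midpoint_lhd(n, rng)) for _ in range(reps)]

def rmse(ests):
    return (sum((e - truth) ** 2 for e in ests) / len(ests)) ** 0.5

print(rmse(iid), rmse(lhd))   # the LHD error is several times smaller
```

Because $f_2$ is close to additive, the one-dimensional stratification removes most of the main-effect variance; this is exactly the mechanism that the sliced designs extend to each batch.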
\begin{table}
\caption{Root-mean-square estimation error on mean output value. \label{tab:result}}
\begin{center}
\begin{tabular}{cccccccccc}
Function & Scenario & RLH & MLH & CLH & IMLH & ICLH & FSD & SLH & CSLH \\
$f_1$ & 1 & $0.0487$& $0.0360$& $0.0360$& $0.1428$&$0.1428$& $0.0971$& $0.0360$& $0.0360$ \\
& 2 & $0.1941$& $0.1851$& $0.1845$& $0.1442$& $0.1442$& $0.1132$& $0.0958$& $0.0958$ \\
$f_2$ & 1 & $0.0121$& $0.0060$& $0.0041$& $0.0117$& $0.0110$& $0.0194$& $0.0061$& $0.0042$ \\
& 2 & $0.0363$& $0.0322$& $0.0319$& $0.0122$& $0.0112$& $0.0239$& $0.0099$& $0.0075$
\end{tabular}
\end{center}
\end{table}
As observed from the results, midpoint Latin hypercube designs are usually better than ordinary Latin hypercube designs.
Reducing correlation helps for $f_2$ but has no effect for $f_1$.
Both with and without reducing correlations, the proposed sliced designs perform the best for all functions and scenarios.
Single Latin hypercube designs are as good as the proposed designs in the first scenario but much worse in the second scenario.
Independent Latin hypercube designs and flexible sliced designs are inferior to the proposed designs in both scenarios.
These observations suggest that the proposed new designs, while allowing flexible run sizes, achieve the same level of variance reduction as ordinary sliced Latin hypercube designs.
\section{Conclusion}
\label{sec:conc}
We propose, to the best of our knowledge, the first construction of sliced Latin hypercube designs that allow the run sizes to be chosen arbitrarily.
Moreover, we provide an algorithm to reduce correlations of our proposed designs.
Numerical results suggest that this leads to improved performance in some circumstances.
Willo's welcome home
This year's pre-season has been a welcome return to IKON Park for Charlotte Wilson.
By Gabrielle Keegan, Carlton Media
FOR Charlotte Wilson, returning to IKON Park for pre-season after the past two years of disruption has felt like a homecoming.
"The last two years with COVID, we haven't been able to have the access to the club that we usually would," Wilson explained.
"It was really good to come back in, use the facilities and see the girls again."
Wilson cited being reunited with her Carlton teammates as the highlight of pre-season thus far, with an off-field focus on culture creating a tight-knit bond among the Game Changers.
"We've done a lot of culture work this pre-season – that's really impacted everyone," she said.
"I feel like we're all quite connected this year and have a really good understanding of what makes everyone tick.
"The off-field connection really transfers to an on-field connection."
This on-field connection is particularly evident within the backline, noted as one of the most cohesive in the competition.
Despite being only 20 years of age, Wilson has stood tall down back alongside senior players such as Mua Laloifi, Gab Pound, and skipper Kerryn Harrington, which has helped to fuel leadership ambitions of her own.
"I'd love to help out the younger girls coming through," she said.
"A few of them have a lot of questions – I don't always know the answer so sometimes I pass to Kez [Harrington], but between us we can usually find an answer."
Off the football field, Wilson has recently completed a Bachelor of Exercise and Sport Science.
"I don't feel like I've got any weight off my shoulders yet because I think I want to do more study," Wilson said of her future aspirations.
After completing her placements through the Carlton College of Sport with Steve Moore, Carlton's AFLW Strength and Conditioning Coach, Wilson has her eye on becoming a dietitian.
"I like the idea of being able to work in sport or outside of sport."
With no shortage of resources at her fingertips around the Club, Wilson is well set to make that a reality.
"You've got to get your foot in the door in the industry – that's what they say – you've got to know people.
"I think I know some people."
The IDB honors Peruvian chef Gastón Acurio
The ambassador of Peruvian cuisine will deliver the Cátedra Enrique Iglesias of Culture and Development on May 21, 2019. The Inter-American Development Bank (IDB) announced today that the chef Gastón Acurio was recognized with the Cátedra Enrique V. Iglesias of Culture and Development. This award was created by the organization to distinguish those who have excelled in their work of promoting the progress of Latin America and the Caribbean through the arts and culture.
IDB Launches GBP 500 Million 1.250% Benchmark Due December 2025
The transaction pays an annual coupon of 1.250% and matures on 15 December 2025. It priced with a spread of 40 basis points over the 2.000% UKT due September 2025, which represents a reoffer yield of 1.305% annually.
Inter-American Development Bank Prices AUD 500 Million 5-Year 'EYE' Bond
The Inter-American Development Bank ("IADB" or "IDB"), rated Aaa/AAA (Moody's/S&P), priced a new AUD500 million 5-year fixed rate Kangaroo offering under the Education, Youth, and Employment ("EYE") Bond program...
IDB selects CastleOak Securities to its Discount Note Dealer Group
To partner with the IDB to broaden the distribution of its debt securities.
Spain makes contribution to IDB's migration initiative
Event on migration analyzes impact on Latin America MADRID - Spain has made a $5 million contribution to help the Inter-American Development Bank (IDB) tackle urgent development challenges posed by the rise of transborder migrations in Latin America and the Caribbean. The contribution was announced at the conference Migration and Cities: The Road to Inclusive Integration held March 20, 2019 in Madrid.
Standard & Poor's affirms IDB's AAA/A-1+ ratings and upgrades SACP to 'aaa' from 'aa+'
Standard & Poor's has affirmed the Inter-American Development Bank's 'AAA' long-term and 'A-1+' short-term issuer credit rating with a stable outlook. Also, following a review under the revised criteria for multilateral lending institutions, the IDB's stand-alone credit profile was upgraded from 'aa+' to 'aaa', due to its extremely strong enterprise risk profile and very strong financial risk profile.
How diverse is Hollywood's workforce? How does diversity behind the scenes influence what's on the screen? What does it feel like working behind the scenes in the industry as a minority? Come join us in a panel discussion with a diverse group of agents, executives and managers in the entertainment industry to discuss topics ranging from their work experience to the impact of diversity on Hollywood.
Register: Click here to register through the Harvardwood website now.
Emily Song MBA '17 works on cross-border business initiatives between China and the US at the Creative Artists Agency (CAA). She works closely with NBA clients on cross-border opportunities in China. Prior to moving to the US, she worked as a TV host on China Central Television. Forbes named her in the 2018 "30 Under 30" list for Asia.
Jason Hafford is a Global Client Strategy Executive at CAA. He works on identifying new business opportunities for the agency and its clients with an emphasis on the international marketplace. Hafford advises on domestic and international initiatives including Cirque du Soleil, Credit Suisse, Econet Media, Ivanhoe Media, YES Bank, and Reliance Industries Limited, among others. He also works with film and television clients on cross-border opportunities in emerging entertainment markets.
A Southern California native, Nicole Torres AB '11 received her A.B. from Harvard before returning home to SoCal to complete her J.D. at USC Gould's School of Law. After passing the California bar she worked briefly in law before returning to her initial passion: acting. She is a proud member of Harvardwood and has been a Harvardwood Highlights profile writer since 2016.
Q: Associative Array check if login combination is correct This is my very first post and question on this website. I'm working on my school assignment now and I have to check if the login credentials are the same as the ones that are listed in my associative array. I have searched everywhere but I couldn't find an answer. Can someone please tell me how to do this?
This is my PHP code:
// Create associative array
$loginCombinations = array("Lucas"=>"lucas3284", "Bob"=>"bob9584", "Frits"=>"frits1842", "Kees"=>"kees1394", "Sjakie"=>"sjakie1953", "Bas"=>"bas6382", "Peter"=>"peter2391", "Robbie"=>"robbie1289", "Jan"=>"jan1462", "Tim"=>"tim9324");
// Create message (login succesful / login failed)
$message = "";
// Create foreach loop
foreach($loginCombinations as $username => $password)
{
}
and this is my HTML code:
<form action="login.php" method="get">
<table>
<tr>
<td>
</td>
<td>
<?php
echo $message;
?>
</td>
</tr>
<tr>
<td>
<label for="username">username</label>
</td>
<td>
<input type="text" id="username" name="username">
</td>
</tr>
<tr>
<td>
<label for="password">password</label>
</td>
<td>
<input type="password" id="password" name="password">
</td>
</tr>
<tr>
<td>
</td>
<td>
<input type="submit">
</td>
</tr>
</table>
</form>
A: First, change the GET method to POST, as in <form action="login.php" method="POST">, so the data is sent in the request payload rather than as GET parameters, which is better for security.
Then, put an if condition inside the foreach loop to check the credentials and echo a success or failure message accordingly.
<?php
$found = false;
foreach($loginCombinations as $username => $password){
if($_POST['username'] == $username && $_POST['password'] == $password){
echo "Yes, user found!!";
$found = true;
break;
}
}
if(!$found){
echo "No user found!!";
}
Update:
Add a name attribute to your submit button, say submit, like <input type="submit" name="submit">. Now, you will need to add an additional if condition to check whether the data was actually posted.
<?php
if(isset($_POST['submit'])){
$found = false;
foreach($loginCombinations as $username => $password){
if($_POST['username'] == $username && $_POST['password'] == $password){
echo "Yes, user found!!";
$found = true;
break;
}
}
if(!$found){
echo "No user found!!";
}
}
Update #2:
As pointed out by @Nigel Ren, since the array is keyed by username, you can simply check the key with isset and compare the stored password (note that using the submitted password itself as an array key would be wrong, since passwords are values, not keys).
if(isset($loginCombinations[$_POST['username']]) &&
   $loginCombinations[$_POST['username']] === $_POST['password']){
    echo "user found";
}else{
    echo "user not found";
}
A: As your array is indexed by the user name, there is no need to do a loop. First check if the user name element is set and then check the password for a match...
$userName = $_GET['username'] ?? '';
$message = "";
if ( isset($loginCombinations[$userName]) &&
$loginCombinations[$userName] === $_GET['password']) {
$message = "user login correct";
}
else {
$message = "user login incorrect";
}
Tag: Isis
KUWAITI GOVERNMENT STAFFER ARRESTED FOR ROLE IN ISIS CYBER WING
Source: National Cyber Security – Produced By Gregory Evans Kuwaiti police have detained a government worker on suspicion of proliferating the ideology of the Islamic State militant group (ISIS), the interior ministry said late Thursday. The suspect, identified as 26-year-old Kuwaiti national Othman Zain Nayef, had "used his office The…
ISIS hacker steals IDs from online retailer's customers for 'kill list'
The attack seemed like a garden-variety digital holdup. A computer intruder, calling himself the "Albanian hacker," left a message for the administrator of a website for an Illinois internet retailer: Pay two Bitcoins, or about $500 at the time, and the…
This hacker is fighting ISIS by spamming its Twitter accounts with porn
June 14, 2016
It started years ago, when at age 16 he bought his first computer, took it home and disassembled it. When he put the machine back together and it refused to run, a local big-box store tech guru taught the teen who…
U.S. deploying 250 more U.S. troops to Syria, launching cyberattacks on ISIS
April 25, 2016
President Obama said Monday he is sending 250 more U.S. military personnel to combat the Islamic State in Syria, bringing the total U.S. military force in Syria to about 300. "They're not going to be leading the fight on the ground, but they will be essential in providing the training…
U.S. military claims to be dropping 'cyber bombs' on ISIS
America's military forces are dropping "cyber bombs" on Islamic State terrorist groups for the first time, Deputy Defense Secretary Robert Work told reporters accompanying him on a military flight on Tuesday. The ISIS internet attacks, whatever the particulars really may be, are part of a stepped-up coordinated effort to put…
US begins to engage cyber war against Isis
April 6, 2016
The US has begun waging cyber warfare against Isis, defence secretary Ashton Carter has confirmed. As the Financial Times reports, Carter said he has issued orders to the US Cyber Command to launch online attacks against the fundamentalist group. Speaking to…
ISIS to unleash TENS OF MILLIONS of jihadi hackers on West in blitz worse than NUCLEAR WAR
The computer security expert, who invented the McAfee anti-virus software, claimed "fifteen to 25 percent" of the world's 1.6 billion Muslims are extremists, meaning ISIS could have an army of 400 million fanatical followers ready to strike at any minute. Computer…
ISIS Twitter Accounts Traced Back to UK Government by Hackers
Every computer and mobile phone logs onto the internet using an IP address, which is a type of identification number. The hacking collective showed Mirror Online details of the IP addresses used by a trio of separate digital jihadis to access…
Anonymous Is Hacking ISIS, But Warns Collaborating With US Government Is 'Deeply Stupid'
The hacking collective Anonymous is battling ISIS online, but one of its most important voices has warned members that collaborating with the U.S. war on terror would be "deeply stupid." The shadowy group issued a statement distancing itself from an offshoot, Ghost Security Group,…
ISIS Twitter Accounts, IP Addresses Connected To British Government, Hackers Claim
At least three social media accounts belonging to supporters of the Islamic State group have been traced back to the British government, according to the hacking group VandaSec. The group, which is comprised of four male teenagers, says it discovered that…
\section{Introduction}
In recent years the infrared behavior of gauge-variant Green's functions
of Yang-Mills theories has increasingly attracted interest. This fact is mainly
related to the existence of the Landau (or Coulomb) gauge confinement scenarios
proposed by Gribov~\cite{Gribov:1977wm} and Zwanziger~\cite{Zwanziger:1993dh}
on one hand and by Kugo and Ojima~\cite{Kugo:1979gm} on the other. The
interest was stimulated by the practical progress achieved over the years within
the Dyson-Schwinger equation (DSE) approach as pursued by Alkofer, von Smekal and
others (for an intermediate review see~\cite{Alkofer:2000wg}). Lattice gauge
theory is able to check these scenarios from first principles. For example, one
can compare lattice results with analytic and numerical solutions of the
(truncated) hierarchy of DSE, albeit within the limitations of finite lattice
discretisation and - even more importantly in this respect - of finite-volume
effects. One crucial test concerns the proposed infrared vanishing (diverging)
of the Landau gauge gluon (ghost) propagator. The closely related behavior of
the two propagators is intimately connected with an infrared fixed
point~\cite{von_Smekal:1997is,von_Smekal:1997vx} of the momentum subtraction (MOM)
scheme~\cite{Chetyrkin:2000fd} running QCD coupling
(see also e.g. \cite{Shirkov:2002gw}). So far,
only in two and three dimensions has it been possible to reach the expected asymptotics
in an unambiguous manner~\cite{Maas:2006qw,Cucchieri:2007uj,Maas:2007uv}.
In four dimensions, for $SU(2)$ as well as $SU(3)$ lattice gauge theory, the
ultimate decrease of the gluon propagator towards vanishing momentum has not
yet been established. This paper is devoted to this question but restricted to
the $SU(2)$ case.
A possible pattern of finite-volume deviations from the far-infrared behavior
of the gluon and ghost propagators has been pointed out thanks to the
formulation and solution of the DSE in a compact space-time~\cite{Fischer:2007pf}.
The sobering message is that truly infrared results can be expected only on
lattices of linear size $L=O(10 {\rm~fm})$.
However, in the DSE approach the Gribov ambiguity is assumed not to play
a relevant role, such that something comparable about the gauge-fixing
vulnerability of the propagators cannot be learned from DSE solutions.
Nevertheless, the restriction to the {\it fundamental modular region}
might also considerably change the structure of the DSE at finite
volume~\cite{Zwanziger:1993dh}.
In the present paper we study the question to what extent the finite-volume
effects observed in lattice calculations can be related to the
existence of Gribov copies and can be cured (for presently accessible volumes)
by a better treatment of the Gribov ambiguity,
i.e. systematically pursuing a restriction to the fundamental modular region.
The common hope
is that in the limit of infinitely large volume Gribov copy effects become
negligible. If this is true, then
the random choice of an arbitrary gauge copy in the Gribov region (which is
statistically equivalent to an average over all of them) should be the
physically adequate solution~\cite{Zwanziger:2003cf}.
In paper~\cite{Bogolubsky:2005wf} it has been noted that
enlarging the gauge orbits by nonperiodic $\mathbb{Z}(2)$ gauge
transformations (called ``$\mathbb{Z}(2)$ flips'') generically leads to larger
values of the gauge functional $F$. In this paper we continue to explore this
approach. Furthermore, within the traditional, continuous part of the
gauge-fixing problem, we systematically employ the simulated annealing algorithm.
Testing these two modifications, we find that in the range of linear lattice sizes
between $L \simeq 2 {\rm~fm}$ and $6.5 {\rm~fm}$ the choice among Gribov copies,
and therefore the optimization of the gauge-fixing method, is still important.
Our paper represents a systematic extension of the previous work,
where the $\mathbb{Z}(2)$ flips have been
studied for the first time~\cite{Bogolubsky:2005wf}. Besides being much less
volume dependent, the gluon propagator in the extended Landau gauge is found
flattened for momenta $p < 0.5 {\rm~GeV}$,
and there are first indications for a decrease towards the infrared limit.
Section II will give an introduction to the necessary technical details.
In Sec. III we discuss steps towards an optimal gauge-fixing strategy.
The Gribov copy effects at finite volumes are pointed out in Sec. IV.
In Sec. V all our results, obtained on various lattices with the respective
optimal strategy, are put together and we summarize our findings.
\section{General setup: extension of the Landau gauge}
Like many other investigators of the SU(2) gluon propagator we compute it
with Monte Carlo (MC) techniques on a lattice with periodic boundary conditions.
The standard Wilson single-plaquette action and the lattice definition for
the gauge potentials
\begin{equation}
A_{\mu}(x+\hat{\mu}/2) = A^b_{\mu}(x+\hat{\mu}/2)~\frac{\sigma^b}{2} = \frac{1}{2iag_0} \left(U_{x\mu} - U^{\dagger}_{x\mu}\right)
\label{gauge_potential}
\end{equation}
are adopted. In order to fix the Landau gauge for each lattice gauge field
$\{U\}$ generated by means of a MC procedure, the gauge functional
\begin{equation}
F[g]= \frac{1}{2} \sum_{x,\mu}
\mathrm{tr} \left( g(x) U_{x\mu} g^{\dagger}(x+\hat{\mu}) \right)
\label{gauge_functional}
\end{equation}
is iteratively maximized with respect to a gauge transformation $~g(x)~$
which is usually taken as a periodic field, too.
In order to approach the global maximum (related to the fundamental modular
region) as close as possible, we are using the simulated annealing (SA)
algorithm~\cite{Kirkpatrick:1983aa},
in combination with subsequent standard overrelaxation (OR). The latter
is applied in the final stage of the gauge-fixing procedure in order to finalize
the transformation to any required precision of the transversality
condition $~\partial_{\mu} A_{\mu} = 0$.
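For illustration only, the evaluation of the functional (\ref{gauge_functional}) can be sketched in a few lines of NumPy. The storage layout, the function name, and the per-link normalization $1/(4L^4)$ (chosen so that the maximum equals $1$, matching the magnitude of the values in Table~\ref{tab:gaugefunctional}) are our assumptions; this is not the production code used for the results of this paper.

```python
import numpy as np

def gauge_functional(U, g):
    """Per-link-normalized F[g] = (1/2) sum_{x,mu} Re tr( g(x) U_{x,mu} g^+(x+mu) ).

    U : complex array (L, L, L, L, 4, 2, 2) -- SU(2) link matrices U_{x,mu}
    g : complex array (L, L, L, L, 2, 2)    -- gauge transformation g(x)
    Periodic boundary conditions are supplied by np.roll; the normalization
    1/(4 L^4), an assumption of this sketch, makes the maximum equal to 1.
    """
    F = 0.0
    for mu in range(4):
        g_fwd = np.roll(g, -1, axis=mu)                 # g(x + mu-hat)
        # (g U g^dagger)_{ad} = g_{ab} U_{bc} conj(g_{dc}), batched over sites
        prod = np.einsum('...ab,...bc,...dc->...ad',
                         g, U[..., mu, :, :], g_fwd.conj())
        F += 0.5 * np.einsum('...aa', prod).real.sum()  # trace over color
    n_links = 4 * g.shape[0] * g.shape[1] * g.shape[2] * g.shape[3]
    return F / n_links
```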
A decade ago, SA was shown to be very efficient when dealing with
the maximally Abelian gauge (MAG)~\cite{Bali:1994jg,Bali:1996dm}.
In the latter case typically a huge number of local extrema of the gauge functional
is observed. The effectiveness of the SA algorithm in the case of the Landau gauge
remained quite unclear for a long time. In practice it was used for this
gauge in the first study of the ghost propagator~\cite{Suman:1995zg}.
In Ref.~\cite{Gutbrod:1996sq}, also for the Landau gauge, a comparison with other
algorithms was carried out. This comparative study came to the conclusion that
SA might not provide a real advantage. Today, the state of the art is that SA
is practiced in a hybrid form, mixed with microcanonical update steps. It is
repeatedly started from random gauge transformations $~g(x)~$ and ends with
OR, producing each time one gauge copy in the Gribov region. In a recent,
more thorough investigation~\cite{Schemel:2006xx} this version of the
SA algorithm was seen to become superior,
with growing lattice size, to the repeated application of
the pure OR algorithm. The efficiency was quantified by the ability to
produce a better (narrower) distribution of copies (local extrema) within less
or equal CPU time. The results of this study will be published
elsewhere~\cite{Schemel:2007xx}.
The SA algorithm, in the present context, generates a field of gauge transformations
$~g(x)~$ by MC iterations with a statistical weight proportional to
$~\exp{(F[g]/T)}~$. The ``temperature'' $~T~$ is a technical parameter which is
gradually decreased in order to maximize the gauge functional $F[g]$. In the
beginning, $~T~$ has to be chosen sufficiently large in order to allow traversing
the configuration space of $~g(x)~$ fields in large steps. It has been checked
that an initial value $~T_{\rm init}=1.5~$ is high enough. After each
quasiequilibrium sweep, including both heatbath and microcanonical updates,
$~T~$ has been decreased with equal step size until $~g(x)~$ is uniquely
captured in one basin of attraction. The criterion of success is that
during the following OR the violation of transversality decreases in a
monotonic manner for almost all applications of the compound algorithm.
This condition is reasonably satisfied for a final lower temperature value
$~T_{\rm final}=0.01~$~\cite{Schemel:2006xx}. The number of temperature steps
was chosen of the order $O(10^3)$.
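The temperature schedule described above can be illustrated by a generic annealing skeleton. The sketch below is a toy: a plain Metropolis accept/reject step stands in for the heatbath and microcanonical sweeps, and the objective is arbitrary; only the linear decrease of $T$ from $T_{\rm init}=1.5$ to $T_{\rm final}=0.01$ over $O(10^3)$ steps mirrors the procedure of the text.

```python
import numpy as np

def simulated_annealing(objective, propose, x0,
                        t_init=1.5, t_final=0.01, n_steps=1000, seed=0):
    """Maximize `objective` with statistical weight exp(F/T), lowering T
    linearly from t_init to t_final; a plain Metropolis step stands in
    for the heatbath/microcanonical sweeps of the real algorithm."""
    rng = np.random.default_rng(seed)
    x, F = x0, objective(x0)
    best_x, best_F = x, F
    for T in np.linspace(t_init, t_final, n_steps):
        x_new = propose(x, rng)
        F_new = objective(x_new)
        # accept uphill moves always, downhill moves with prob. exp(dF/T)
        if F_new >= F or rng.random() < np.exp((F_new - F) / T):
            x, F = x_new, F_new
            if F > best_F:
                best_x, best_F = x, F
    return best_x, best_F

# toy usage: anneal towards the maximum of cos(theta), starting far away
theta, F = simulated_annealing(np.cos,
                               lambda t, rng: t + 0.3 * rng.normal(), 3.0)
```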
The second novel feature of our gauge-fixing procedure compared to the standard
ones is the application of $\mathbb{Z}(2)$ flip transformations, the essence of
which is an extension of the gauge orbits for any MC generated lattice
configuration. We will abbreviate the extended gauge-fixing method as the FSA
(flip-SA) algorithm. There is room for its realization under various strategies
(see below) that can be chosen in order to save computing time. The flip
transformation was first considered in the context of Landau gauge fixing in
Ref.~\cite{Bogolubsky:2005wf}. For $SU(2)$ gauge theory, each flip transformation
consists of a simultaneous $\mathbb{Z}(2)$ flip of all links
$~U_{\nu}(x) \to - ~U_{\nu}(x)~$ throughout a 3D hyperplane at a given value of
the coordinate $~x_{\nu}$. This is just a particular case of a gauge transformation
which is not periodic but periodic modulo $\mathbb{Z}(2)$,
\begin{equation}
g(x+L\hat{\nu}) = z_{\nu} g(x)\,, \qquad z_{\nu}=\pm 1 \in \mathbb{Z}(2) \, .
\end{equation}
It is obvious that the above transformation of the gauge field leaves the gauge
field action as well as the path integral measure invariant
(note that this symmetry is unbroken in the confinement phase).
This would not be true anymore in a gauge theory with a
fundamental matter field. Therefore, the $\mathbb{Z}(2)$ flip transformation
cannot be applied to such models.
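In code, one flip is a single sign change on a hyperplane of links. The following sketch (our storage conventions) makes the invariance argument explicit: every plaquette touched by the hyperplane contains either none or exactly two of the flipped $\nu$-links, so the Wilson action is unchanged.

```python
import numpy as np

def z2_flip(U, nu, x_nu):
    """Z(2) flip: U_nu(x) -> -U_nu(x) for all sites of the 3D hyperplane
    with fixed coordinate x_nu.  Each plaquette contains either none or
    exactly two of the flipped links, so the plaquette action is unchanged.

    U : complex array (L, L, L, L, 4, 2, 2), direction index `nu` in 0..3.
    """
    V = U.copy()
    idx = [slice(None)] * 4
    idx[nu] = x_nu
    V[tuple(idx) + (nu,)] = -V[tuple(idx) + (nu,)]
    return V
```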
With respect to the flip transformation all gauge copies of one given
field configuration relative to the initial gauge can be split into
$~2^4=16$ sectors for $SU(2)$ gauge fields ($3^4=81$ sectors for $SU(3)$).
Within each of these sectors - all being present in the path integral measure -
different gauge copies are connected by continuous, strictly periodic gauge
transformations. With this new element, our gauge-fixing procedure consists
of two steps: the first one is to choose the best out of the $16$ flip sectors
and the second one with the help of SA is to find the gauge copy with the highest
value of the gauge functional while staying within the given sector. In practice,
both steps are performed in an intertwined manner, because the decision which is
the ``best'' sector in principle requires knowing the best copy of each sector.
It is immediately clear that this procedure allows one to find higher local maxima
of the gauge functional (\ref{gauge_functional}) than the traditional
gauge-fixing procedures. The latter by default choose for a given configuration
only one flip sector, and in most of the cases only one copy in this sector.
The sector taken is usually the one
randomly selected by the MC update algorithm. It is equivalent to averaging
over all flip sectors and therein over copies within the so-called Gribov region.
Obviously the two prescriptions to fix the Landau gauge, the traditional one
and the new one, are not equivalent. Indeed, for some modest lattice
volumes it has been shown in Ref.~\cite{Bogolubsky:2005wf}
that they give rise to different results for the gluon as well as the ghost
propagators. In the present paper for the gluon propagator we want to present
some numerical evidence that the results converge to each other in the large
volume limit. The ghost propagator under this extended Landau gauge fixing will
be addressed in a future publication.
The computations presented in this work have been done at rather strong coupling,
at $\beta\equiv 4/g_0^2 = 2.20$. The reason for this choice was to gain access
to a comparatively large physical volume. We fix the scale by taking the string
tension to be $\sigma$ = (440 MeV)$^2$ and adopting the lattice value
$\sqrt{\sigma} a = 0.469$ found in Ref.~\cite{Fingberg:1992ju}.
Thus, our largest lattice size $32^4$ has a physical size of about
$(6.5 {\rm~fm})^4$. In order to study the volume dependence we have calculated
the gluon propagator also for smaller lattices, such that we have sizes
ranging from $L^4=8^4$ to $32^4$.
\section{The quest for an optimal gauge-fixing strategy}
As a first step we have searched for an optimal strategy to find the best
gauge copy for each lattice size. On $16^4$ (and $24^4$) lattices we have
produced ensembles of $60$ ($46$) MC configurations. For each configuration
we created with the help of SA $5$ gauge copies as local maxima of the
gauge functional $~F~$ within each of the $16$ flip sectors, i.e. in total $80$
gauge copies per MC field configuration. In a production run we would like
to get along with considerably less copies per MC configuration.
This will become particularly important for $SU(3)$, where one has
to deal with $3^4=81$ different $\mathbb{Z}(3)$ sectors.
By $~\langle F_{ns}(nc) \rangle~$ let us denote the MC ensemble average
over the maximized functional values $~F~$ taken from
all $16$ sectors ($~ns = 16~$) or a random subset of $~ns < 16~$ flip sectors
and from the best of $~nc \le 5~$ gauge-fixed copies.
These copies are created sequentially, starting from new random periodic copies,
in each of the $ns$ chosen sectors, and the best one is stored.
The average $\langle F_{16}(5)\rangle~$ corresponds to the
largest accessible (best) functional values. Representing the largest
affordable computing effort it will serve as a reference value.
Table \ref{tab:gaugefunctional} shows the values for the different cases.
One sees that the functional values become larger, when all 16 flip
sectors are taken into account. The data clearly indicate that (for the given
volume) it is more important to scan all 16 sectors than to search for the
best copy in one (randomly chosen) sector.
But the improvement is much less dramatic
for the larger lattice size $24^4$ than for the $16^4$ lattice. The
reference values for the functional are very close for the two lattice
sizes in contrast to the cases $ns=1$ of one randomly chosen flip sector.
Moreover, we see that 5 random copies already seem to be optimal for both the
lattice sizes.
\begin{table*}
\begin{center}
\mbox{
\begin{tabular}{|c|c|c|c|}\hline
& & $\langle F_{ns}(nc) - F_0 \rangle$ &
$\langle F_{ns}(nc) - F_0 \rangle$ \\
$ns$ & $nc$ & for $16^4$ & for $24^4$ \\
\hline\hline
1 & 1 & $ 1(8) \cdot 10^{-5}$ & $ 25(4) \cdot 10^{-5}$ \\ \hline
1 & 5 & $ 6(8) \cdot 10^{-5}$ & $ 31(4) \cdot 10^{-5}$ \\ \hline\hline
16 & 1 & $ 32(9) \cdot 10^{-5}$ & $ 36(4) \cdot 10^{-5}$ \\ \hline
16 & 2 & $ 33(9) \cdot 10^{-5}$ & $ 38(4) \cdot 10^{-5}$ \\ \hline
16 & 3 & $ 34(9) \cdot 10^{-5}$ & $ 38(4) \cdot 10^{-5}$ \\ \hline
16 & 4 & $ 34(9) \cdot 10^{-5}$ & $ 39(4) \cdot 10^{-5}$ \\ \hline
16 & 5 & $ 34(9) \cdot 10^{-5}$ & $ 39(4) \cdot 10^{-5}$ \\ \hline
\end{tabular}
}
\end{center}
\caption{The average gauge functionals $\langle F_{ns}(nc)\rangle$ as explained in
the text and subtracted with $F_0=0.82800$. For the lattice sizes $16^4$ and $24^4$
the numbers of investigated MC configurations are $60$ and $46$, respectively.
The inverse coupling is $\beta=4/g_0^2=2.20$.
}
\label{tab:gaugefunctional}
\end{table*}
In Table \ref{tab:diff_gaugefunctional} we show additionally the deviations or
distances
$\Delta_{ns,ns^{\prime}}(nc,nc^{\prime})=
\langle F_{ns}(nc)-F_{ns^{\prime}}(nc^{\prime}) \rangle$
between would-be runs with different numbers $ns$ and $nc$. The $\Delta$-values
have quite small statistical errors since the differences are always computed
configuration by configuration.
\begin{table*}
\begin{center}
\mbox{
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
& & & & &
$\Delta_{ns,ns^{\prime}}(nc,nc^{\prime})$ & $\Delta_{ns,ns^{\prime}}(nc,nc^{\prime})$ \\
& $ns$ & $nc$ & $ns^{\prime}$ & $nc^{\prime}$ & for $16^4$ & for $24^4$ \\
\hline\hline
A & 16 & 5 & 1 & 5 & $2.8(1) \cdot 10^{-4}$ & $8.5(4) \cdot 10^{-5}$ \\ \hline\hline
B & 1 & 5 & 1 & 1 & $4.6(4) \cdot 10^{-5}$ & $5.3(2) \cdot 10^{-5}$ \\ \hline\hline
C & 16 & 5 & 16 & 1 & $1.3(1) \cdot 10^{-5}$ & $2.9(2) \cdot 10^{-5}$ \\ \hline
D & 16 & 5 & 16 & 2 & $4.9(8) \cdot 10^{-6}$ & $1.5(1) \cdot 10^{-5}$ \\ \hline
E & 16 & 5 & 16 & 3 & $2.5(5) \cdot 10^{-6}$ & $7.6(7) \cdot 10^{-6}$ \\ \hline
\end{tabular}
}
\end{center}
\caption{Distances
$\Delta_{ns,ns^{\prime}}(nc,nc^{\prime})=
\langle F_{ns}(nc)-F_{ns^{\prime}}(nc^{\prime}) \rangle$
as defined in the text.
The statistics and the inverse coupling are the same as quoted in
Table \ref{tab:gaugefunctional}.
}
\label{tab:diff_gaugefunctional}
\end{table*}
From this work as well as from our earlier experience we know that the
functional $F$ and the gluon propagator at small momenta are
anticorrelated (a more detailed description of this anticorrelation
will be published elsewhere). We wish to emphasize that the substantial
decrease of $\Delta$ with increasing volume shown in
Table \ref{tab:diff_gaugefunctional} does {\it not}
imply that the effect of improved gauge fixing on the propagator decreases
equally strongly.
Notice that the values in Table \ref{tab:diff_gaugefunctional} fall monotonically
from comparison A to comparison E for both lattice sizes.
For $24^4$ the variation covers only one order of
magnitude compared with two orders for $16^4$. The variation of the best copy
results ($nc,nc'=5$) comparing the best sector ($ns=16$) with the first random sector
($ns^{\prime}=1$) (comparison A) shows the sectors to differ much
more strongly from each
other on the smaller lattice than on the larger one. This indicates that,
concerning the gauge functional, the r\^ole of the flip sectors is
weakening with increasing volume. On the other hand the variation
between different copies within the same random flip sectors (case B) or within
the best sectors (C, D, E) becomes stronger the larger the lattice is. Therefore,
in order to distinguish the best sector we certainly need to generate more
gauge copies per sector the larger the lattice volume is. How many copies are
required within a given sector depends on the deviation from the reference value
one considers to be tolerable (compare with cases C, D, E).
These observations suggest a strategy to keep the total number of gauge copies
as low as possible, that have to be generated in order to guarantee a
certain prescribed closeness of the {\it average best gauge functional} to the
reference case. Since for the smaller lattice sizes
the functional values of $~F~$ for different gauge copies generated within
the best sector are scattered very closely to the maximal value in that sector,
we try to identify the best sector by gauge-fixing not more than one gauge copy
per sector. Actually this becomes difficult or even impossible for a larger volume.
After the best sector has been figured out, we could generate a few more gauge
copies for this particular sector only. In order to increase the probability not
to misidentify the best sector, compared to making only one gauge-fixing attempt
in all sectors, it is reasonable to perform a few more gauge fixings in a few
sectors that have already been recognized as good pretenders of being the best
sector.
In shorthand, we denote as ``$16 + 4$'' a strategy, where we first fix
one gauge copy in all $16$ sectors, and then fix a second, independent copy in
the $4$ best-candidate sectors, those with the highest ranking gauge functional
values of the gauge copy found first.
Taking again our data for the $24^4$ lattice with $80$ gauge copies per
configuration as the reference case to compare with, we checked the reliability
of such an improved strategy. We get a difference
$\langle F_{16}(5)-F_{16+4} \rangle = 1.9(2) \cdot 10^{-5}$, i.e.
almost the closeness to the reference case that was obtained with two gauge
copies in all sectors, although now, in the ``$16 + 4$'' strategy, a second copy
has been fixed in only $4$ out of $16$ sectors. As a compromise between the quality
and the need to limit the CPU time we have in practice chosen a strategy with
``$16 + 4 * 2$'' copies, i.e. in four selected sectors not one but two more
gauge copies are created. On our test ensemble of $46$ primary Monte Carlo
configurations we get a difference from the reference value
$\langle F_{16}(5)-F_{16+4*2} \rangle = 1.4(2) \cdot 10^{-5}$.
We have attempted to apply the same ``$16 + 4 * 2$'' strategy to $32^4$ lattices
as well. We have observed that for this lattice size the best sectors are not
so clearly distinguishable from the other sectors with generically lower values of
the gauge functional. For this reason we decided to produce an additional 16 copies,
one per sector. Thus we generated in total $40$ copies per MC configuration on
this lattice (``$16 * 2 + 4*2$''), instead of $80$.
For $12^4$ lattices we have blindly generated $5$ copies in each sector
(``$16 * 5$''), and for $8^4$ lattices just $3$ copies in each sector
(``$16 * 3$''). We found confirmation of the features observed for $16^4$ and
$24^4$ lattices as discussed in the beginning of this section.
Our produced ensembles of gauge-fixed field configurations are quoted in
Table~\ref{tab:statistics} together with the strategy used in each case.
$\langle F^{bc} \rangle$ is the average gauge functional for the best copy
(\bc) found by means of the preferential strategy at the given lattice size.
The difference $\langle F^{bc} - F^{fc} \rangle$
means the difference between the values achieved with the preferential
strategy (based always on access to all $16$ sectors) and the value found
for the first copy (\fc), i.e. for just one randomly chosen flip sector
and one copy. For comparison also
some values obtained with the standard OR method with $ns=1$ (i.e. no flips)
and $nc=1$ (one copy) are shown. The statistics for the OR procedure was
generally smaller but of the same order of magnitude as shown in the
second column of Table \ref{tab:statistics}.
\begin{table*}
\begin{center}
\mbox{
\begin{tabular}{|c|c|c|c|c|c|} \hline
$L$ & $\#$ & strategy & $\langle F^{bc} \rangle $ & $ \langle F^{bc} - F^{fc} \rangle $
& $\langle F^{fc}_{OR} \rangle$\\ \hline
8 & 200 & ``$16 * 3$'' & 0.82721(23) & 0.00298(7) & 0.82365(25) \\ \hline
12 & 200 & ``$16 * 5$'' & 0.82817(10) & 0.00077(2) & 0.82715(11) \\ \hline
16 & 60 & ``$16 * 5$'' & 0.82834(9) & 0.00028(1) & \\ \hline
16 & 180 & ``$16 + 4 * 2$'' & 0.82834(8) & 0.000244(6) & 0.82779(5) \\ \hline
24 & 46 & ``$16 * 5$'' & 0.82839(4) & 0.000085(4) & \\ \hline
24 & 300 & ``$16 + 4 * 2$'' & 0.82843(2) & 0.000132(2) & 0.82805(3) \\ \hline
32 & 247 & ``$16 * 2 + 4 * 2$'' & 0.82843(1) & 0.000075(1) & 0.82815(1) \\ \hline
\end{tabular}
}
\end{center}
\caption{Lattice sizes, statistics, gauge-fixing strategy employed and
the data on average values of the gauge functional $ F $.
The meaning of $F^{bc}$, $F^{fc}$ and of $F^{fc}_{OR}$ is explained in the text.}
\label{tab:statistics}
\end{table*}
\section{The gluon propagator: Gribov copy and finite-volume effects}
The gluon propagator is defined by
\begin{equation}
D_{\mu\nu}^{ab}(p)=\langle \widetilde{A}_{\mu}^a(k) \widetilde{A}_{\nu}^b(-k) \rangle
=\left( \delta_{\mu\nu} - \frac{p_{\mu}~p_{\nu}}{p^2} \right)
\delta^{ab} D(p)\,,
\label{gluonpropagator}
\end{equation}
where $\widetilde{A}(k)$ represents the Fourier transform of the gauge potentials
according to Eq. (\ref{gauge_potential}) after having fixed the gauge. The momentum
$p$ is given by $p_{\mu}=(2/a) \sin{(\pi k_{\mu}/L)}, ~~k_{\mu} \in (-L/2,L/2]$.
For $p \ne 0$, one gets
\begin{equation}
D(p) = \frac{1}{9} \sum_{a=1}^3 \sum_{\mu=1}^4 D^{aa}_{\mu\mu}(p) \; ,
\end{equation}
whereas at $p = 0$ the ``zero momentum propagator'' $D(0)$ is defined as
\begin{equation}
D(0) = \frac{1}{12} \sum_{a=1}^3 \sum_{\mu=1}^4 D^{aa}_{\mu\mu}(p=0) \; .
\end{equation}
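For a single gauge-fixed configuration, the estimators behind these definitions are straightforward. The sketch below maps an integer momentum vector to the physical momentum $p_{\mu}=(2/a)\sin{(\pi k_{\mu}/L)}$ and performs the color and Lorentz averaging; the Monte Carlo average, the volume normalization, and renormalization are deliberately omitted.

```python
import numpy as np

def lattice_momentum(k, L, a=1.0):
    """p_mu = (2/a) sin(pi k_mu / L) for integer k_mu in (-L/2, L/2]."""
    return (2.0 / a) * np.sin(np.pi * np.asarray(k, dtype=float) / L)

def D_scalar(A_tilde, p_is_zero=False):
    """Single-configuration estimator of D(p).

    A_tilde[a, mu] : Fourier mode A^a_mu(k), shape (3, 4).
    Divide by 9 for p != 0 (3 colors x 3 transverse polarizations)
    and by 12 at p = 0, as in the definitions above."""
    norm = 12.0 if p_is_zero else 9.0
    return float((np.abs(A_tilde) ** 2).sum() / norm)
```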
In order to compare with standard methods employed by other authors we have carried
out our own analysis with standard overrelaxation (OR) without $\mathbb{Z}(2)$ flips
and restricting always to the first gauge copy. The corresponding findings together
with our \bc--FSA results obtained with the ``$16*2+4*2$'' strategy on the largest
lattice $32^4$ are plotted in Fig.~\ref{fig:SA_vs_OR}.
\begin{figure*}
\vspace*{0.8cm}
\includegraphics[width=0.6\textwidth]{fig1.eps}
\caption{The lattice gluon propagator versus momentum for $\beta=2.20$ and
various lattice sizes obtained by OR in comparison with FSA results for $32^4$.}
\label{fig:SA_vs_OR}
\end{figure*}
We have convinced ourselves that the OR results for the $24^4$ lattice are in perfect
agreement with those recently obtained for a $22^4$ lattice and the same $\beta=2.20$
in Ref.~\cite{Cucchieri:2007uj}. A deviation of our FSA results in the infrared
($p < 0.4 {\rm~GeV}$) towards lower values of $D(p)$ becomes clearly visible.
As one might expect, due to the bias towards a larger gauge functional
in the case of the FSA algorithm (compared with the OR algorithm),
not only is the expectation value of the gluon propagator suppressed
at low momenta, but the statistical fluctuations of the gluon propagator
are also reduced. The effect is most clearly seen for the zero momentum
propagator $D(0)$.
In comparison to the finite-size dependence showing up after gauge fixing
with standard OR, our new FSA method provides results that are very stable under
variation of the lattice size. This is demonstrated in Fig.~\ref{fig:Gl_main},
which collects our main results. All data points nicely fall onto a universal curve.
Comparing the data for different lattice sizes entering
Fig.~\ref{fig:Gl_main}, one can see that the finite-volume effects for the
momenta shown in the figure are indeed small. This is particularly important
for the minimal nonzero (on-axis) momenta for each given lattice size, which
are {\it not excluded} from the plot.
\begin{figure*}
\vspace*{1cm}
\includegraphics[width=0.6\textwidth]{fig2.eps}
\caption{The gluon propagator obtained with FSA gauge fixing in the infrared region
for various lattice sizes, all simulated at $\beta=2.20$.}
\label{fig:Gl_main}
\end{figure*}
Notice that for lattices $24^4$ and $32^4$ all momenta with components
$k_{\mu}$ satisfying the condition~\cite{Leinweber:1998uu}
\begin{equation}
\sum_{\mu} k_{\mu}^2 - \left(\sum_{\mu} \frac{1}{2} k_{\mu} \right)^2 < 3
\end{equation}
are shown.
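This cut amounts to a one-line filter on the integer momentum vectors; the sketch below implements the inequality literally (the on-axis minimal momenta, which are kept in addition, are handled separately as described above).

```python
import numpy as np

def passes_cone_cut(k):
    """Keep k if sum_mu k_mu^2 - (sum_mu k_mu / 2)^2 < 3,
    i.e. if k lies close enough to the lattice diagonal."""
    k = np.asarray(k, dtype=float)
    return bool((k ** 2).sum() - (0.5 * k.sum()) ** 2 < 3.0)
```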
There is no significant breaking of rotational invariance for the momenta
included in the figure. Only in the case of OR is there one larger deviation
from rotational invariance: on the largest lattice, $32^4$, the propagator
values for momenta with components $k=(0,0,0,2)$ and $k=(1,1,1,1)$ differ
by less than 3 standard deviations. For the lattices $16^4$ and $24^4$ we
have found good agreement for both gauge-fixing algorithms.
In Figs. \ref{fig:SA_vs_OR} and \ref{fig:Gl_main} the data obtained with FSA on the
$32^4$ lattice show a tendency to decrease toward smaller values at the
smallest nonzero momentum. This is the first lattice result in favor of a
decreasing gluon propagator towards the infrared in four dimensions.
In both figures we have also shown the values of the gluon propagator
at zero momentum, $D(0)$, which has a monotonic downward volume dependence
(compare also Fig.~\ref{fig:Gl_D0})~\cite{Boucaud:2006pc}.
\begin{figure*}
\includegraphics[width=0.55\textwidth]{fig3.eps}
\caption{Lattice gluon propagator for zero momentum $D(0)$ obtained with the
FSA method as a function of the inverse lattice size.}
\label{fig:Gl_D0}
\end{figure*}
However, the value of $D(p \equiv 0)$ is expected to be affected by
stronger finite-volume (and Gribov ambiguity) effects than $D(p_{\rm min}
\to 0)$~\cite{Fischer:2007pf}.
We have also checked whether our result is consistent with
the expectation $~D(p~\to~0)=0$. Indeed, a fit restricted to the interval
$ 0 < p < 500$ MeV with the function
\begin{equation}
D(p)=p^{\,2 \alpha} \cdot (g_0 + g_1 \cdot p^2 ) \; ,
\end{equation}
works very well ($\chi^2/{\rm d.o.f.}=0.06$), with an exponent $\alpha = 0.09(1)$,
which is in qualitative agreement with the DSE result \cite{Lerche:2002ep}
$\kappa_D \equiv 1 + \alpha = 1.19$.
Although this cannot be taken too seriously,
our result lends some credit to the assumption that we are beginning to see
the gluon propagator decrease toward zero momentum.
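Since at fixed $\alpha$ the ansatz is linear in $(g_0,g_1)$, such a fit reduces to a one-dimensional scan over the exponent. The sketch below is an unweighted least-squares stand-in for the actual $\chi^2$ fit with statistical errors; it recovers the parameters from data of the assumed form.

```python
import numpy as np

def fit_infrared(p, D, alphas=np.linspace(0.0, 0.5, 501)):
    """Fit D(p) = p^(2 alpha) * (g0 + g1 p^2) by scanning alpha:
    at fixed alpha the model is linear in (g0, g1), so those come from
    a linear least-squares solve (np.polyfit); the alpha with the
    smallest residual wins.  Unweighted -- a stand-in for the real
    chi^2 fit with statistical errors."""
    best = None
    for alpha in alphas:
        y = D / p ** (2.0 * alpha)
        g1, g0 = np.polyfit(p ** 2, y, 1)      # y = g0 + g1 * p^2
        resid = float(((y - (g0 + g1 * p ** 2)) ** 2).sum())
        if best is None or resid < best[0]:
            best = (resid, alpha, g0, g1)
    return best[1], best[2], best[3]
```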
The replacement of the \fc--SA algorithm (i.e. with one copy $nc=1$ in one
random flip sector $ns=1$) by the \bc--FSA algorithm (with $16$ sectors under
control and the preferential strategy according to Table~\ref{tab:statistics})
leads to a systematic change of the resulting propagator which is
presented in Fig.~\ref{fig:change_vs_V_p}.
\begin{figure*}
\vspace*{1cm}
\includegraphics[width=0.7\textwidth]{fig4.eps}
\caption{The relative difference between the propagator values obtained from
\fc--SA (with one copy in one random flip sector) and obtained from
\bc--FSA as a function of the momentum $p$ for various lattice sizes.}
\label{fig:change_vs_V_p}
\end{figure*}
The figure shows that, for fixed lattice size, the relative deviation of the
FSA results for the gluon propagator from the simple SA results decreases with
increasing momentum, going to zero rather quickly within error bars.
Furthermore, for fixed physical momentum the
relative deviation goes to zero with increasing volume, indicating that
the two Landau gauge-fixing prescriptions (without and with flips) become
equivalent in the large-volume limit. On the other hand, if we compare the data
at the minimal momentum of every lattice, we find that the respective relative
deviation decreases only slowly with increasing lattice volume, indicating that
the effect of flip sectors on the minimal momentum will remain important for all
accessible lattices. The same conclusion holds, to a smaller extent, for the
next-to-minimal momentum.
\section{Conclusions}
In this paper we have reinvestigated the Landau gauge gluon propagator
on the lattice within $SU(2)$ pure Yang-Mills theory. Our main achievement
is the use of an improved gauge-fixing prescription which takes into account
$\mathbb{Z}(2)$ flip transformations equivalent to nonperiodic gauge
transformations as well as the use of the simulated annealing method in combination
with subsequent overrelaxation steps. Comparing with the exclusive use of standard
overrelaxation without applying flips we confirm clear Gribov copy effects for the
gluon propagator. More importantly, we observe that finite-size effects seem to
become suppressed for a gauge-fixing prescription providing copies closer to the
fundamental modular region. For the first time in the 4d $SU(2)$ case on symmetric
lattices we see a flattening or a signal for a turnover, giving access to a limit
$D(q \to 0)=0$ in agreement with DSE predictions and the confinement scenarios by
Zwanziger~\cite{Zwanziger:1993dh} and by Kugo and Ojima~\cite{Kugo:1979gm}.
\section*{ACKNOWLEDGEMENTS}
V.~G.~B., E.-M.~I, and M.~M-P. wish to thank Boris Martemyanov for useful
remarks and discussions. G.~B. is grateful to Jan M. Pawlowski and Holger
Gies for their interest and discussions.
This investigation has been partly supported by the Heisenberg-Landau
program of collaboration between the Bogoliubov Laboratory of Theoretical
Physics of the Joint Institute for Nuclear Research Dubna (Russia) and
German institutes and partly by the joint DFG-RFBR grant 436 RUS 113/866/0-1
and the RFBR-DFG grant 06-02-04014.
V.~G.~B. and V.~K.~M. acknowledge support by the RFBR grant 05-02-16306,
and V.~G.~B. is presently supported by the RFBR grant 07-02-00237.
G.~B. acknowledges support from DFG grants Re856/6-1 and Re856/6-2.
M.~M.-P. and E.-M.~I. appreciate the support from DFG under the grant
FOR 465 / Mu932/2-2.
Q: How can I remove the tick button in the Coding4Fun MessagePrompt? I have tried to hide the default check button inside the MessagePrompt, but I could not find any property for it; there is only the IsCancelVisible property for the cancel button. Now I want to create a custom OK button instead of the default check button. Please help.
A: Try this code; it works fine.
// remove all buttons
messagePrompt.ActionPopUpButtons.Clear();
For example, if you want your own button to be displayed below, use the following code:
var messagePrompt = new MessagePrompt
{
Title = "Simple Message",
Message = "This is a demo of the Coding4Fun MessagePrompt."
};
// remove all buttons
messagePrompt.ActionPopUpButtons.Clear();
// add your own
Button button;
messagePrompt.ActionPopUpButtons.Add(button = new Button()
{
Content = "Close"
});
// handle click state
button.Click += button_Click;

// finally, show the prompt
messagePrompt.Show();

The handler can then dismiss the prompt (keep messagePrompt reachable, e.g. in a field):

private void button_Click(object sender, RoutedEventArgs e)
{
    messagePrompt.Hide(); // close the prompt when the custom button is tapped
}

Enjoy!
\section*{Abstract}
{\bf
Topological aspects represent currently a boosting area in condensed matter physics.
Yet there are very few suggestions for technical applications of topological phenomena.
Still, the most important is the calibration of resistance standards by means of the
integer quantum Hall effect. We propose modifications of samples displaying
the integer quantum Hall effect which render the tunability of the Fermi velocity
possible by external control parameters such as gate voltages.
In this way, so far unexplored possibilities arise to realize devices such as
tunable delay lines and interferometers.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:intro}
\subsection{General context}
Subjecting a two-dimensional electron gas at low temperature to a strong perpendicular magnetic field results in the well-known quantization of the transverse conductivity
$\sigma_{xy} = \nu e^2/ h$ with $\nu \in \mathbb{N}$, which is called the integer quantum Hall effect \cite{klitz80} (IQHE). The remarkably high precision with which
the integer quantum Hall conductivity can be measured is attributed to its
relation to topological invariants \cite{thoul82,avron83,niu85,kohmo85,hatsu93,altla10}.
Shortly after the discovery of the IQHE another topological effect was measured and baptized
the fractional quantum Hall effect \cite{tsui82,laugh83} since Hall plateaus appear at fractional filling factors $\nu$. The discovery of the integer and fractional quantum Hall effects triggered
a steadily growing interest in topological phenomena in condensed matter physics.
The IQHE is a single-particle phenomenon \cite{laugh81,halpe82}; no interaction between the
electrons needs to be taken into account which facilitates its understanding greatly.
In the bulk, the interpretation of the IQHE is that
the filling factor $\nu$ equals the total Chern number of the filled Landau bands.
This Chern number is a topological invariant \cite{avron83,niu85,kohmo85} related to
the fundamental Berry phase \cite{berry84}. This warrants the high precision of resistance
measurements fulfilling Ohm's law without any non-linear corrections \cite{uhrig91}.
A closer understanding is gained if one realizes that the
actual charge currents are carried by gapless edge states \cite{hatsu93} which cross
the Fermi level. They have to exist
at the boundaries because the Chern number jumps across them \cite{berne13}.
The number of gapless edge states \cite{thoul82} corresponds to the Chern number $\nu$. Each of
these edge states can be seen as single-channel conductor \cite{butti85} propagating only in one
direction along the edge which therefore are called chiral edge states. They allow for adiabatic
transport \cite{beena91} because backscattering is forbidden which makes such transport
particularly interesting for applications.
It is fascinating that the IQHE can be put into a larger context of Chern insulators \cite{berne13}
which need not be induced by external magnetic fields. Complex kinetic Hamilton
operators on lattices, i.e., complex hoppings, can imply non-trivial Chern numbers
and concomitant edge modes as they appear in the IQHE. The seminal example is the
Haldane model \cite{halda88b}. Its quantum Hall effect is called anomalous because
no magnetic field is required.
The inclusion of the spin degree of freedom \cite{kane05a,kane05b} opens the possibility of the quantum spin Hall effect \cite{weng15,liu16,ren16}. The quantum anomalous Hall effect
\cite{chang13,kou14,chang15} as well as the quantum spin Hall effect \cite{ando13} have been realized experimentally.
\subsection{Present objective}
For clarity, we focus here on the IQHE and do not take the spin into account;
its inclusion is left to future research. The topological
protection of the chiral edge states and the complete suppression of backscattering in these edge
states suggest that the chiral edge states enable robust applications. Calibrating
resistance standards to extremely high precision is certainly a wonderful example.
Yet, in the present study we want to trigger research on \emph{further} applications.
We will investigate the Fermi velocity $v_\mathrm{F}$ occurring in the chiral edge states.
It represents the group velocity of electrical signal transmitted through the system.
Hence, it determines the speed of signal transmission. If it can be tuned it can
be used to influence the time signals need to cross the sample. In this way, certain delays
can be imposed and used for signal processing, for instance for interference measurements.
We emphasize that the Fermi velocity does not influence the widely studied
DC conductivity; in contrast to the majority of theoretical studies in the
literature, the DC conductivity is not the quantity of interest here.
Triggered by the observation that the Fermi velocity of edge states in Chern insulators
on lattices differs depending on the details of the edges \cite{redde16}
a systematic study of modifications of the edges of the generic Chern insulator
in the Haldane model revealed that the Fermi velocity can indeed be tuned over
orders of magnitude by changing external parameters such as gate voltages \cite{uhrig16}.
The key idea is to modify the edges by decorations such that local levels are created which are
brought in weak contact with the dispersive edge modes. The ensuing hybridization
leads to a weakly dispersing mode of which the Fermi velocity can be tuned by
changing the energy of the local modes. If the local levels are in resonance with the edge modes
the sketched mechanism is at work and a low Fermi velocity appears.
If they are out-of-resonance the hybridization is ineffective and the
edge states remain strongly dispersive. The tuning of the local decorated edge
modes can be achieved by gate voltages.
This fundamental idea has been carried over from the spinless Haldane model
to the spinful Kane-Mele model \cite{malki17b}. In this study, the effect of
disorder in the decorated Haldane model has been addressed as well.
It was shown that the Fermi velocity is robust against weak disorder
if the dispersion is not too flat, i.e., if the Fermi velocity is not
too low. Hence, in contrast to the naive expectation of complete robustness
due to the topological origin of the edge states, disorder changes the dispersion
of the modes and can deteriorate signal transmission \emph{beyond}
the DC conductivity.
As pointed out in the general context, tunable Fermi velocities open the
possibility of interesting applications such as delay lines or interference
devices. Unfortunately, the lattice systems known so far cannot yet be tailored
on the nanoscale to render the experimental verification of the theoretical proposal
possible. So far, solid state systems postulated by density-functional theory
can be envisaged to yield realizations in the future \cite{liu11,wu14,han15}.
Alternatively, intricate optical lattices may make proof-of-principle
realizations of tunable Fermi velocities possible \cite{jotzu14,aidel15}.
Yet, the search for different realizations is called for.
In particular, the high standard of designing nanostructures in semiconductor
systems suggests to look for such systems for the realization of tunable
dispersions of edge states.
This brings us back to the IQHE which is based on a semiconducting
interface generating a two-dimensional (2D) electron gas and a perpendicular
magnetic field. If one is able to tailor the boundaries of the 2D electron gas
in a way that mimics the decoration of 2D lattice models tunable
Fermi velocities become possible. Indeed, it has been proposed by one
of us that attaching bays to the boundaries of a Hall sample allows
us to generate local modes in the bay \cite{uhrig16}. If they are slightly opened
to the 2D bulk a weak hybridization is realized and the physics
established so far for lattice systems should carry over to the
IQHE. The basic geometry is sketched in Fig.\ \ref{fig:sample}.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{Panel (a): proposal of a decorated quantum Hall sample with tunable Fermi velocity. A
perpendicular magnetic field puts the two-dimensional electron gas in the quantum
Hall phase. Two independent gate voltages $V_\text{g1}$ and $V_\text{g2}$
change the potential of the blue bays at the upper boundary and of the green bays at the lower
boundary, respectively. The grey area is inaccessible to the electrons.
The size of the opening of the bays to the bulk 2DEG can be controlled
by a gate voltage $V_\text{go}$ as depicted in panel (b). The size of the opening
controls the degree of hybridization of the local mode within the bays and the
edge mode in the 2D bulk, see panel (c).}
\label{fig:sample}
\end{figure}
Currently, it is possible to implement bays in the submicrometer range in IQHE samples.
For instance, a single-electron source has been realized by coupling a quantum dot
to a 2DEG via quantum point contacts and a gate voltage setting the dot potential
\cite{feve07}. An additional gate voltage at the quantum point contacts is used to control the transmission, see Fig.\ \ref{fig:sample}(b)
so that the hybridization can be tuned as indicated in Fig.\ \ref{fig:sample}(c).
If such a coupled quantum dot is repeated periodically the geometry in Fig.\ \ref{fig:sample}(a)
is obtained. This proposed setup will be studied in the sequel as
an exemplary model for the realization of tunable Fermi velocities in the IQHE.
Below, we present calculations showing that the Fermi velocity $v_\mathrm{F}$
can be tuned by adding periodically arranged bays to an integer quantum Hall sample.
The paper is organized as follows. In Sect.\ \ref{sec:model} we specify the model
Hamiltonian describing the IQHE and the numerical approach to
compute the edge states and their dispersion.
Sect.\ \ref{sec:dispersion} illustrates step by step how the spectrum of the
decorated IQHE is structured. In particular, we focus on the effects of
the hybridization between the modes in the bays and the edge modes because
this is the mechanism altering the Fermi velocities.
The results for tuned Fermi velocities are presented in Sect.\ \ref{sec:tune}.
Finally, Sect.\ \ref{sec:conclusion} collects our findings and provides an outlook.
\section{Model and technical aspects}
\label{sec:model}
The present work is designated to illustrate the tunability of the Fermi velocity
on a proof-of-principle level. For the sake of clarity, we assume that the upper and the
lower boundary are sufficiently far away from each other so that the edge states
localized at the upper and at the lower boundary do not influence each other.
Practically, this means that the magnetic length $\ell_B=\sqrt{\hbar/(|e B|)}$
is significantly smaller than
the width $L_y$ of the quantum Hall sample, i.e., the external magnetic field must
be large enough. Then, it is not necessary to study a system of which
both boundaries are decorated. Hence, we focus here on a sample with quadratic bays
at the upper boundaries, but no decoration at the lower boundary which
is kept smooth. The precise shape of the bays does not matter for
our proof-of-principle calculations.
Within the colored area shown in the panels of Fig.\ \ref{fig:ez} the electrons
can move freely. Their dynamics is only governed by their kinetic energy. The boundaries
are supposed to be infinitely hard walls as indicated by thick black lines.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{fig2.pdf}
\caption{Sketch of the considered geometries of increasing complexity. Panel (a)
displays the standard IQHE sample without any decoration of the boundaries; its width
is denoted by $L_y$ and its total length by $L_x$. Periodic boundary conditions
in $x$-direction are assumed. \smash{Panel (b)} shows the single unit considered where
the dimensions of the bay and the coupled bulk are given. Note the opening of the
bay shown in green; its width is denoted by $L_o$. The total sample consists
of $N_x$ such units as shown in panel (c) so that $L_x=N_x L_{x\text{p}}$.}
\label{fig:ez}
\end{figure}
Applying a perpendicular magnetic field in $z$-direction, see panel (a) in Fig.\ \ref{fig:ez}, is
incorporated in the usual way by minimal coupling
\begin{equation}
\vec{p} \rightarrow \vec{p} - q \vec{A}
\end{equation}
where the charge reads $q = - |e|$ and $\vec{A}$ is the magnetic vector potential.
No electron-electron interactions are considered so that the full Hamilton
operator reads
\begin{equation}
H = \frac{1}{2m} \left( \vec{p} - q \vec{A} \right)^2
\end{equation}
where $m$ is the (effective) mass of the electrons.
The electrons are confined to the $xy$-plane; we do not consider their spin
degree of freedom. This can be justified because the two spin species $\uparrow$ and
$\downarrow$ are decoupled in the perpendicular magnetic field \cite{hwang93,parad10}.
Due to the translational invariance in the $x$-direction a Landau gauge is particularly
appropriate. We choose the Landau gauge in $x$-direction $\vec{A} = B (-y, 0, 0)$
so that the \smash{momentum $k_x$} remains manifestly conserved.
This leads to the continuum Hamilton operator
\begin{subequations}
\begin{align}
H_{\mathrm{bulk}} =& \frac{\hbar^2}{2m} \left[ \left( - \mathrm{i} \frac{\partial}{\partial x} + \frac{q B}{\hbar} y \right)^2 - \frac{\partial^2}{\partial y^2} \right]
\\
=& \frac{m \omega_{\mathrm{c}}^2}{2} \left( y + \mathrm{i} \ell_B^2 \frac{\partial}{\partial x} \right)^2 - \frac{\hbar^2}{2 m} \frac{\partial^2}{\partial y^2}
\label{eq:landau}
\end{align}
\end{subequations}
in real space where we use the definition of the cyclotron frequency
$\omega_\mathrm{c} = |e|B/m$ and the magnetic length $\ell_B = \sqrt{\hbar/|eB|}$.
It is implied that $x$ and $y$ take only values in the colored regions of the panels
in Fig.\ \ref{fig:ez} unless stated otherwise.
\subsection{Bulk system}
\label{sec:bulk}
Solving the Hamiltonian \eqref{eq:landau} in case of a bulk system
without any boundaries leads to the famous Landau levels with quantized energy values
\cite{landa30}
\begin{equation}
E_n = \hbar \omega_\mathrm{c} \left(n + 1/2 \right), \ n \in \mathbb{N} \ .
\label{eq:landau_energy}
\end{equation}
The corresponding wave functions are plane waves in
$x$-direction and Gaussians multiplied with Hermite polynomials in $y$-direction
\begin{equation}
\psi(n, k_x, y) =
N \mathrm{e}^{-(y-y_0)^2/2 \ell_B^2} H_n((y-y_0)/\ell_B) \mathrm{e}^{\mathrm{i} k_x x}
\label{eq:landau_wave}
\end{equation}
because the Hamiltonian corresponds to shifted harmonic oscillators in $y$-direction.
The wave functions are normalized by $N$, $H_n$ is the $n$th Hermite polynomial, and
$y_0 = k_x \ell_B^2$ determines the center of the wave function $\psi(n, k_x, y)$
in $y$-direction. These facts about the bulk Landau levels will be helpful for the
understanding of the more complicated situations and serve as a reference.
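These formulas are easy to sanity-check numerically. The sketch below (Python with NumPy; units $\ell_B = 1$; the grid range and the choice of levels $n = 0, 1$ are arbitrary) evaluates the transverse part of the wave function \eqref{eq:landau_wave} and verifies its orthonormality on a grid:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

l_B = 1.0  # magnetic length (unit of length)

def psi(n, k_x, y):
    """Transverse part of the Landau wave function: Gaussian times Hermite polynomial."""
    y0 = k_x * l_B**2                                     # guiding-center coordinate
    N = 1.0 / sqrt(sqrt(pi) * 2**n * factorial(n) * l_B)  # normalization constant
    coeff = np.zeros(n + 1)
    coeff[n] = 1.0                                        # select the physicists' H_n
    return N * np.exp(-(y - y0)**2 / (2 * l_B**2)) * hermval((y - y0) / l_B, coeff)

y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]
norm0 = np.sum(psi(0, 0.0, y)**2) * dy                    # should be 1
norm1 = np.sum(psi(1, 0.0, y)**2) * dy                    # should be 1
overlap = np.sum(psi(0, 0.0, y) * psi(1, 0.0, y)) * dy    # should vanish
```

The guiding-center shift $y_0 = k_x \ell_B^2$ enters only through the argument $y - y_0$, so the same check holds for any $k_x$ as long as the grid covers the shifted Gaussian.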
Below, we consider more and more details of the actual model depicted in
Fig.\ \ref{fig:ez}(c).
\subsection{Sample of finite width $L_y$}
\label{sec:hardwall}
Next, we consider a sample as shown in Fig.\ \ref{fig:ez}(a), i.e., of finite width
in $y$-direction, but with translational invariance along $x$ due to periodic boundary
conditions. A numerical treatment is required, which we introduce here. It is chosen
to be flexible enough to be extended subsequently to the decorated sample including
the bays.
For simplicity we set the effective electron mass
$m = 1$, Planck's constant $\hbar = 1$, and use $B$ henceforth for $|e| B $.
This amounts to setting $\omega_\text{c}=1$, i.e., to using $\hbar\omega_\text{c}$ as energy unit.
The resulting bulk Hamiltonian reads
\begin{equation}
H_{\mathrm{bulk}} = \frac{1}{2} \left[ \left( \frac{y}{\ell_B^2} +
\mathrm{i} \frac{\partial}{\partial x} \right)^2 - \frac{\partial^2}{\partial y^2} \right] .
\end{equation}
As displayed in Fig.\ \ref{fig:ez}(a) the boundary conditions in the $y$-direction
imply $V(y) = \infty$ for $|y| \ge L_y/2$. We use the same Landau gauge as before in the bulk system. In $x$-direction, we exploit the translational invariance using the plane wave ansatz
\begin{equation}
\label{eq:plane}
\psi(x,y) = \exp(ik_x x) \psi(y).
\end{equation}
This leads to the Hamilton operator which acts on $\psi(y)$
\begin{equation}
H_\mathrm{undec.~con.} =
\frac{1}{2} \left[ \left( \frac{y}{\ell_B^2} - k_x \right)^2 - \frac{\partial^2}{\partial y^2} \right]
\label{eq:H_cont_y}
\end{equation}
with $|y| \leq \frac{L_y}{2}$. We tackle this problem by discretizing the $y$ coordinate
by a mesh with \smash{distance $a$} between the points. It is understood that $a$ is much smaller
than any other physical length scale in the system, i.e., $\ell_B$ and $L_y$.
The resulting model resembles a tight-binding model, but we emphasize that its discrete
character is just due to the approximate treatment of the continuum. We make sure
that the discretization mesh is always fine enough so that the results are close
to the continuum values, see below.
So the discretized Hamiltonian, expressed in second quantization, which approximates the continuum operator \eqref{eq:H_cont_y} reads
\begin{align}
H_\mathrm{undec.~dis.} = \sum_y &\left[ \frac{1}{2} \left( \left( \frac{y}{\ell_B^2} - k_x \right)^2 + \frac{5}{2 a^2} \right) c_{y, k_x}^{\dagger} c_{y, k_x} - \frac{2}{3 a^2} c_{y+a, k_x}^{\dagger} c_{y, k_x}
\right. \nonumber \\
& \left. + \frac{1}{24 a^2} c_{y+2a, k_x}^{\dagger} c_{y, k_x} + \mathrm{h.c.} \right] - \frac{1}{24 a^2} c_{b(y), k_x}^{\dagger} c_{b(y), k_x}
\label{eq:boundaryterm}
\end{align}
where $c_{y, k_x}$ ($c_{y, k_x}^\dagger$) annihilates (creates) an electron with wave vector $k_x$
in $x$-direction at coordinate $y$. To this end, the second derivative is approximated
by the difference quotient
\begin{align}
\frac{\partial^2 \psi(y)}{\partial y^2} &\approx
\frac{1}{a^2} \left[ -\frac{1}{12} \psi(y - 2a) + \frac{4}{3} \psi(y - a)
- \frac{5}{2} \psi(y) + \frac{4}{3} \psi(y + a) - \frac{1}{12}\psi(y + 2 a) \right] .
\label{eq:differenz}
\end{align}
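A quick numerical check of this five-point stencil (Python; $f = \sin$ serves as an arbitrary smooth test function) confirms its fourth-order accuracy: halving $a$ reduces the error by roughly $2^4 = 16$.

```python
import numpy as np

def d2_5pt(f, x, a):
    """Five-point approximation of f''(x), coefficients (-1, 16, -30, 16, -1)/(12 a^2)."""
    return (-f(x - 2*a) + 16*f(x - a) - 30*f(x)
            + 16*f(x + a) - f(x + 2*a)) / (12.0 * a**2)

x0 = 1.0
exact = -np.sin(x0)                        # (sin)'' = -sin
err_a = abs(d2_5pt(np.sin, x0, 0.10) - exact)
err_half = abs(d2_5pt(np.sin, x0, 0.05) - exact)
ratio = err_a / err_half                   # ~ 16 for a fourth-order scheme
```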
This formula cannot be applied to values of $y$ which are close to a boundary
because the values $\psi(y + a)$ and $\psi(y + 2a)$ may not exist, see Fig.\ \ref{fig:spiegel}.
In fact, if $y_\text{bdry}$ is the value right at the boundary
(red site partly in the shaded area in
Fig.\ \ref{fig:spiegel}), then $\psi(y_\text{bdry})=0$
holds due to the hard-wall boundary condition
and one does not need a term at $y_\text{bdry}$. What is needed is an approximation of
the second derivative at \smash{$y_\text{bdry}-a$} for which \smash{$\psi(y_\text{bdry}+a)$}
is required. One could simply omit this term, but this omission would introduce
an error of the order of $a$
with respect to the continuum situation which we intend to approximate. Hence, we exploit
that $\psi(y_\text{bdry})=0$ and that a continuous function can be approximated by its
Taylor expansion around $y_\text{bdry}$. In linear order this implies
\smash{$\psi(y_\text{bdry}+a)\approx -\psi(y_\text{bdry}-a)$} which leads to the last term in
\eqref{eq:boundaryterm} where we used the symbol \smash{$b(y)=y_\text{bdry}-a$}
for the value of $y$ adjacent to the boundary. This improves the results roughly by
one order in $a$, especially at the important edges of the sample.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\columnwidth]{fig3.pdf}
\caption{Illustration of the approximation used in immediate vicinity of
a boundary in order to improve the approximation of the continuous system
by a discretized one, see main text.}
\label{fig:spiegel}
\end{figure}
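The gain from this mirroring can be checked on a solvable benchmark: a harmonic potential cut by a hard wall at $y=0$, whose exact ground-state energy is $3/2$ (the odd $n=1$ oscillator state). The sketch below (Python; units $\hbar=m=\omega=1$; the mesh spacing $a=0.1$ and outer wall position are chosen for illustration only) diagonalizes the discretized Hamiltonian with and without the boundary correction:

```python
import numpy as np

# Benchmark: harmonic potential V = y^2/2 cut by a hard wall at y = 0.
# The exact ground state is the odd n = 1 oscillator state with E = 3/2.
a, Y = 0.1, 8.0                       # mesh spacing and position of the far outer wall
y = np.arange(a, Y, a)                # interior sites; hard walls at y = 0 and y = Y
n = len(y)

def hamiltonian(mirror):
    """Discretized H = (y^2 - d^2/dy^2)/2 with the five-point stencil."""
    H = np.zeros((n, n))
    np.fill_diagonal(H, 0.5 * (y**2 + 2.5 / a**2))
    i = np.arange(n - 1)
    H[i, i + 1] = H[i + 1, i] = -2.0 / (3.0 * a**2)
    i = np.arange(n - 2)
    H[i, i + 2] = H[i + 2, i] = 1.0 / (24.0 * a**2)
    if mirror:                        # psi(-a) ~ -psi(+a) across each wall
        H[0, 0] -= 1.0 / (24.0 * a**2)
        H[-1, -1] -= 1.0 / (24.0 * a**2)
    return H

E0_plain = np.linalg.eigvalsh(hamiltonian(False))[0]
E0_mirror = np.linalg.eigvalsh(hamiltonian(True))[0]
```

With the mirror term the computed ground-state energy agrees with $3/2$ far better than without it, in line with the improvement by one order in $a$ stated above.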
In this way, we can very accurately compute eigen energies of the plain
Hall sample as function of $k_x$. In particular, we obtain the wanted
dispersion of the edge states. The level of complexity is illustrated by
Fig.\ \ref{fig:grid} where the discretization meshes are shown.
The calculation for the plain sample without any decoration only requires
to discretize the $y$-axis, shown in Fig.\ \ref{fig:grid}(a), because the
other spatial dependence is fully captured by the plane wave ansatz \eqref{eq:plane}.
This can be done very efficiently because only a relatively small number
of sites is required. But in order to be able to later include the bays
as shown in Fig.\ \ref{fig:grid}(d) we first re-calculate the sample without
bays by considering the grid in panel (b).
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\columnwidth]{fig4.pdf}
\caption{Sketches of the meshes used to capture the physics
of the quantum Hall sample with and without bays.
The strip geometry (a) and the rectangular geometry (b) are used to describe the
sample without bays. The decoupled bay (c) is considered to compute the energy spectrum
of the isolated bay as reference for the coupled system shown in panel (d).
The orange dashed lines indicate the respective unit cells.}
\label{fig:grid}
\end{figure}
\subsection{Fully discretized samples}
\label{sec:fully}
Enlarging the unit cell as shown in Fig.\ \ref{fig:grid}(b) leads to the
continuum Hamilton operator
\begin{equation}
H_\mathrm{(b)}= \frac{1}{2} \left( \frac{y^2}{\ell_B^4} +
2 \mathrm{i} \frac{y}{\ell_B^2} \frac{\partial}{\partial x} -
\frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} \right)
\end{equation}
with the periodic condition for the wave function
\begin{equation}
\label{eq:period}
\psi(x+L_{x\text{p}},y)=\exp(ik_x L_{x\text{p}})\psi(x,y).
\end{equation}
We stress that this condition allows us to determine the value
of $k_x$ only up to multiples of $2\pi/L_{x\text{p}}$,
as usual when a reduced unit cell in real space is considered.
In the approximate discretized system the first-order derivatives are expressed
by the central finite-difference quotient with fourth-order accuracy
\begin{align}
\frac{\partial \psi(x, y)}{\partial x} &\approx \frac{1}{a}
\left[\frac{1}{12} \psi(x - 2a, y) - \frac{2}{3} \psi(x - a, y)
+ \frac{2}{3}\psi(x + a, y) - \frac{1}{12}\psi(x + 2 a, y) \right]
\end{align}
wherever possible. Close to a hard-wall boundary the value \smash{$\psi(x + 2 a, y)$}
is not known because it refers to sites outside of the considered domain.
Then this term is simply omitted.
The improvement of the second derivative based on the
mirroring explained in Fig.\ \ref{fig:spiegel} can still be applied at hard
walls in $x$-direction. An analogous construction for the first-order
derivatives, however, is not possible
because the resulting correction terms would be local densities with
imaginary prefactors spoiling the hermiticity of the Hamiltonian.
Thus, the Hamiltonian $H_\mathrm{(b)}$ is discretized in both directions.
Expressed in second quantization it is given by
\begin{align}
H = \sum_{x,y} &\left[ \frac{1}{2} \left( \frac{y^2}{\ell_B^4} +
\frac{5}{a^2} \right) c_{x, y}^{\dagger} c_{x,y} - \frac{2}{3 a^2} c_{x,y+a}^{\dagger} c_{x,y} +
\frac{1}{24 a^2} c_{x,y+2a}^{\dagger} c_{x,y} \right. \nonumber \\
& \left. + \left( - \frac{2}{3 a^2} + \frac{\mathrm{i} 2 B y}{3 a} \right) c_{x+a,y}^{\dagger} c_{x,y}
+ \left( \frac{1}{24 a^2} - \frac{\mathrm{i} B y}{12 a} \right) c_{x+2a,y}^{\dagger} c_{x,y}
+ \mathrm{h.c.} \right] \nonumber \\
& - \frac{1}{24 a^2} c_{x,b(y)}^\dagger c_{x,b(y)}
- \frac{1}{24 a^2} c_{b(x),y}^\dagger c_{b(x),y}
\label{eq:ham_disc}
\end{align}
where $x$ and $y$ run over the discrete sites within the
colored areas in Fig.\ \ref{fig:grid}. The very last term
occurs at hard-wall boundaries in $x$-direction, i.e.,
treating the bays, improving the second derivatives. The periodicity condition
\eqref{eq:period} carries over to
\begin{equation}
c_{x+L_{x\mathrm{p}}, y} = c_{x, y} \mathrm{e}^{\mathrm{i} k_x L_{x\mathrm{p}}}
\end{equation}
in second quantization.
The Hamiltonian \eqref{eq:ham_disc} can be used to numerically calculate the spectrum for
any shape of the integer quantum Hall sample. We employ it below to
consider the finite strip without bays first, cf.\ Fig.\ \ref{fig:grid}(b), and
isolated bays, cf.\ Fig.\ \ref{fig:grid}(c), for reference purposes.
Finally, we pass on to the coupled system, cf.\ Fig.\ \ref{fig:grid}(d).
Then, we also have to include the effect of the gate voltages, see Fig.\ \ref{fig:sample}.
Gate voltage $V_\text{go}$ controls the size of the opening. This is implemented
in our calculation by the choice of the geometry, i.e., by the value of $L_\text{o}$.
Since we only consider bays at the upper boundary there is no $V_{g2}$ to study.
The gate voltage $V_{g1}$ is implemented by the Hamiltonian part
\begin{equation}
H_\mathrm{bays} = - \sum_{x, y \in\ \mathrm{bays}}
V_{\mathrm{g}1} c_{x,y}^\dagger c_{x,y}
\end{equation}
where we incorporated the value of the charge into $V_\text{g1}$, i.e.,
we use $V_\text{g1}$ for $|e|V_\text{g1}$.
For small values of $a$ the Hamiltonian \eqref{eq:ham_disc} corresponds to very
large, though sparsely populated matrices. We do not need all eigen values of them
because we focus on the energies of the lowest Landau level up to about the third
Landau level. In particular, the high-lying eigenvalues are strongly influenced
by the discretization and hence they are meaningless for the underlying continuum model.
In order to handle the diagonalization within given intervals of the spectrum
efficiently for large sparse matrices we employ the FEAST eigen value solver.
The FEAST algorithm \cite{poliz09} uses the quantum mechanical density matrix representation
and contour integration techniques to solve the eigenvalue problem within a given
search interval. Now, we are in the position to calculate the dispersion of the lowest
eigen states and thus also able to calculate the Fermi velocities being the
derivatives of the dispersion at the Fermi level.
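FEAST itself is not part of the standard Python scientific stack, but its interval-targeted diagonalization can be mimicked with shift-invert Lanczos (`scipy.sparse.linalg.eigsh` with a shift `sigma`). The sketch below (Python; units $\ell_B = 1$, hence $B=1$ and $\hbar\omega_\mathrm{c}=1$; an undecorated rectangular unit cell at $k_x = 0$; all geometry values are illustrative, not those used in the paper) discretizes the bulk operator with the fourth-order stencils — note that $-\tfrac{1}{2}\partial_x^2$ contributes real hoppings in $x$ alongside the imaginary ones from the $\mathrm{i}By\partial_x$ term — and extracts the eigenvalues closest to a target inside the lowest Landau band:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Illustrative geometry, units l_B = 1 (so B = 1 and hbar*omega_c = 1):
# undecorated rectangular unit cell, Bloch momentum k_x = 0.
a, Lxp, Ly = 0.1, 4.0, 10.0
Nx = int(round(Lxp / a))                       # periodic sites in x
y = np.arange(-Ly / 2 + a, Ly / 2 - a / 2, a)  # interior y sites, walls at +-Ly/2
Ny = len(y)

def idx(ix, iy):
    return ix * Ny + iy

H = sp.lil_matrix((Nx * Ny, Nx * Ny), dtype=complex)
for ix in range(Nx):
    for iy in range(Ny):
        i = idx(ix, iy)
        H[i, i] += 0.5 * (y[iy]**2 + 5.0 / a**2)
        if iy in (0, Ny - 1):                  # mirror correction next to the walls
            H[i, i] -= 1.0 / (24.0 * a**2)
        if iy + 1 < Ny:                        # -d^2/dy^2 hoppings
            H[i, idx(ix, iy + 1)] += -2.0 / (3.0 * a**2)
            H[idx(ix, iy + 1), i] += -2.0 / (3.0 * a**2)
        if iy + 2 < Ny:
            H[i, idx(ix, iy + 2)] += 1.0 / (24.0 * a**2)
            H[idx(ix, iy + 2), i] += 1.0 / (24.0 * a**2)
        # -d^2/dx^2 plus i*B*y*d/dx hoppings, periodic in x (no phase at k_x = 0)
        for step, t in ((1, -2.0 / (3.0 * a**2) + 2j * y[iy] / (3.0 * a)),
                        (2, 1.0 / (24.0 * a**2) - 1j * y[iy] / (12.0 * a))):
            j = idx((ix + step) % Nx, iy)
            H[j, i] += t
            H[i, j] += np.conj(t)

# Shift-invert Lanczos targets eigenvalues near sigma, in the spirit of FEAST's
# interval search; the lowest Landau band sits near hbar*omega_c/2 = 0.5.
E = eigsh(H.tocsc(), k=4, sigma=0.3, which='LM', return_eigenvectors=False)
```

The eigenvalues returned cluster at $\hbar\omega_\mathrm{c}/2$, as expected for guiding centers well inside the bulk; a shift-invert factorization plays the role that the contour-based projector plays in FEAST.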
\section{Dispersions in decorated quantum Hall samples}
\label{sec:dispersion}
So far, we analyzed the Landau levels in the bulk, see Sect.\ \ref{sec:bulk},
and we introduced the approximate Hamiltonians to describe hard-wall boundaries
of varying shapes, see Sects.\ \ref{sec:hardwall} and \ref{sec:fully}.
Here we present the results for geometries of increasing complexity. First, we address
the strip geometry, i.e., the sample without any bays. Then, we study the isolated bays
before we address the full coupled system, cf.\ Fig.\ \ref{fig:grid}.
For clarity, we focus on the lowest Landau levels.
\subsection{Strip geometry}
In the case of a hard-wall confining potential in $y$-direction, i.e.,
$V(y) = \infty$ for $|y| > L_y/2$, one still expects to find eigen values
and eigen states bearing similarities to the bulk solutions. For instance,
the eigen functions exponentially localized in the middle of the strip
hardly feel the hard-wall confining potential. Hence they closely resemble
the bulk functions \eqref{eq:landau_wave} and their energies are exponentially close
to the bulk Landau levels \eqref{eq:landau_energy}, see also below.
Moreover, the lowest eigen functions localized right at the boundary,
i.e., $k_x = \pm L_y/(2 \ell_B^2)$, equal the eigen function of the second Landau level $n = 1$.
This is so because the zero of the antisymmetric wave functions coincides with
the boundary \cite{yoshi02} as is well known from the text book problem
in quantum mechanics of a parabolic potential cut off at its apex by an infinite
potential step. Thus, the antisymmetric Hermite polynomials are solutions
which satisfy the boundary condition where they are localized.
The influence of the other boundary is exponentially small if $\ell_B \ll L_y$
which is the limit we presuppose. These special points are used to
verify the accuracy of the calculations based on the discretized model Hamiltonian
in comparison to the continuum solutions.
For the discretized description to approximate the continuum efficiently
in $y$-direction, the distance $a$ between sites must be small enough to capture
the dependence of the Hermite polynomials \eqref{eq:landau_wave} on $y$.
Since $H_n(y)$ has $n$ zeros on the root mean square length $\ell_B \sqrt{n + 1/2}$
we arrive at the constraint
\begin{equation}
\label{eq:y-constraint}
a\ll \ell_B \frac{\sqrt{n + 1/2}}{n+1} \approx \frac{\ell_B}{\sqrt{n+1}}.
\end{equation}
In $x$-direction the wave length set by $2\pi/k_x$ sets an upper limit of $a$
so that we have to require
\begin{equation}
\label{eq:x-constraint}
a\ll \frac{2\pi}{k_x}.
\end{equation}
While \eqref{eq:y-constraint} needs to be fulfilled in all our calculations,
\eqref{eq:x-constraint} is not required in the solution of \eqref{eq:boundaryterm},
i.e., if the system in Fig.\ \ref{fig:grid}(a) is considered, but only
if the fully discretized model introduced in Sect.\ \ref{sec:fully} is considered.
In addition to these numerical requirements, we argued that we want to
consider the case where the edge states at the upper and at the lower
boundaries do not interfere. This requires
\begin{equation}
\label{eq:bdry-independence}
\ell_B \frac{\sqrt{n + 1/2}}{n+1} \ll L_y
\end{equation}
on physical grounds. The left hand side is the root mean square of the
spatial extension of the $n$th Landau level in $y$-direction.
We focus on the lowest bands anyway so that $n=0$ and $n=1$ are the
relevant cases.
For concreteness, we henceforth use the values $\ell_B=1\mu$m, $a=0.01\ell_B$ and
$L_y=10\ell_B$. These values are in accordance with the above considerations
for numerical accuracy and independence (up to exponentially small corrections)
of the edge states.
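The mesh of Fig.\ \ref{fig:grid}(a) amounts to diagonalizing, for each $k_x$, a one-dimensional finite-difference Hamiltonian in $y$. The following minimal sketch (our illustration, not the code used for the figures; dimensionless units $\hbar=m=\omega_\mathrm{c}=\ell_B=1$, standard three-point second derivative with Dirichlet boundary conditions at the hard walls) reproduces the checks quoted above: the flat bulk Landau levels $n+1/2$ and the value $3/2$ for the $n=0$ state localized right at the boundary:

```python
import numpy as np

# Dimensionless units: hbar = m = omega_c = ell_B = 1, energies in hbar*omega_c.
L_y, a = 10.0, 0.02                          # strip width and mesh spacing (in ell_B)
y = np.arange(-L_y/2 + a, L_y/2 - a/2, a)    # interior points; psi = 0 on the walls
N = len(y)

def landau_levels(kx, n_levels=2):
    """Lowest eigen energies of H(kx) = -1/2 d^2/dy^2 + 1/2 (y - kx)^2
    on the strip with hard walls at y = +-L_y/2."""
    kinetic = (np.diag(np.full(N, 1.0/a**2))
               - np.diag(np.full(N - 1, 0.5/a**2), k=1)
               - np.diag(np.full(N - 1, 0.5/a**2), k=-1))
    potential = np.diag(0.5*(y - kx)**2)      # guiding center at y = kx * ell_B^2
    return np.linalg.eigvalsh(kinetic + potential)[:n_levels]

E_bulk = landau_levels(0.0)      # deep in the strip: bulk Landau levels n + 1/2
E_edge = landau_levels(L_y/2)    # oscillator centered right at the hard wall

print(E_bulk)      # approx [0.5, 1.5]
print(E_edge[0])   # approx 1.5: the n=0 edge state takes the n=1 bulk energy
```

The upturn of the dispersion towards the boundary follows from evaluating `landau_levels` on a grid of `kx` values between $0$ and $L_y/2$.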
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\columnwidth]{fig5.pdf}
\caption{The blue curves show the dispersions of the Landau levels of a strip of finite \smash{width
$L_y$}, see Fig.\ \ref{fig:sample}(a). The red dashed lines indicate the equidistant energy spectrum of the Landau levels in the bulk. The vertical dashed lines are located at
$k_x = \pm L_y/2 \ell_B^2$ indicating the states which are localized at the upper and lower
boundaries of the sample.}
\label{fig:landau}
\end{figure}
Considering the mesh in $y$-direction depicted in Fig.\ \ref{fig:grid}(a) we obtain
the results (blue solid curves) shown in Fig.\ \ref{fig:landau} where they
are compared to the bulk results \eqref{eq:landau_energy} (red dashed lines).
Clearly, for small wave vectors one obtains flat bands agreeing very well
with the bulk Landau level. Deviations occur only in the tenth digit of the eigen energies.
This is so because $k_x$ determines the position
of the harmonic oscillator in $y$-direction, cf.\ Eq.\ \eqref{eq:landau_wave}.
Closer to the boundaries, an upturn in energy occurs
because the electrons feel the hard-wall in their vicinity. As pointed out above,
the state $n=0$ right at the boundary acquires the energy of the Landau level $n=1$
because its wave function corresponds to half a harmonic oscillator \cite{yoshi02}.
This relation is fulfilled up to the fifth digit thanks to the improved treatment
of the second derivative at the boundary, see Fig.\ \ref{fig:spiegel}.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{fig6.pdf}
\caption{Left panel: Zoom of the dispersion bands in the IQHE. The vertical dashed green line
is located at $k_x = L_y/2 \ell_B^2$ marking the boundary of the sample. The colored dots
indicate the eigen energies of the corresponding eigen wave functions.
Right panel: Probability \smash{densities $|\psi(k_x, y)|^2$} of these eigen wave functions.}
\label{fig:landau_edge}
\end{figure}
The gradual change of the eigen wave functions upon increasing $k_x$
is illustrated in \mbox{Fig.\ \ref{fig:landau_edge}.}
The colored dots in the left panel indicate the energies and the $k_x$ values
of the eigen wave functions depicted in the right panel by solid lines of the
same color. The dashed lines of the same color display the corresponding
eigen functions in the bulk which remain of Gaussian shape throughout.
Note the increase of the peak of the eigen functions in the strip geometry
upon approaching the boundary (sequence red $\to$ black $\to$ yellow)
because the electron cannot enter the hard-wall.
\subsection{Rectangular geometry}
\label{sec:rectangle}
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{fig7.pdf}
\caption{Left panel: Zoom of the lowest eigen energies in the extended zone scheme
for $L_y = \SI{10}{\micro\meter}$ and $L_{x\mathrm{p}} = \SI{4.01}{\micro\meter}$
(the deviation from $\SI{4}{\micro\meter}$ is only due to the discretization).
Red symbols corresponds to occupied states while blue symbols represent unoccupied states.
The horizontal dashed green line indicates the chosen Fermi energy.
The thin vertical red lines show boundaries of the corresponding reduced zone scheme. By backfolding the energies into the green shaded area one obtains the representation of the reduced zone scheme which is shown in the right panel.}
\label{fig:folded}
\end{figure}
Next, we pass to the fully discretized model \eqref{eq:ham_disc} for the sample without bays,
see Fig.\ \ref{fig:grid}(b). This describes the same physics as the calculation
in the previous subsection. Still, we present exemplary results in Fig.\ \ref{fig:folded}
for two reasons. The first one is to illustrate that this calculation indeed reproduces
the results obtained previously on the mesh Fig.\ \ref{fig:grid}(a)
with sufficient accuracy. Comparing the results from mesh (a) with those from mesh
(b) in Fig.\ \ref{fig:grid} we find that their eigen energies agree up to the fifth digit.
Note that the calculation for mesh (a) requires
to deal with a vector space of dimension of the order of 1000 while the calculation for
\smash{mesh (b)} requires to deal with a vector space with dimension of the order of $10^6$.
The second reason is to obtain results for the undecorated sample, i.e., without bays,
as reference for the subsequent complete analysis. The main point is that the reduction
of the translational invariance by considering the enlarged rectangular unit cell in real space
of \smash{length $L_{x\text{p}}$} leads to a reduced zone scheme in $k_x$ space. The backfolded
branches of the dispersion are shown in the right panel of Fig.\ \ref{fig:folded}.
Since there is no real, physical reduction of the translational symmetry
the backfolded branches display level crossings at the boundaries and elsewhere
which are preserved as long as the physical translational symmetry is preserved.
Hence the backfolded branches can be unfolded again to yield the extended zone scheme
displayed in the left panel of Fig.\ \ref{fig:folded}. This shows the same results as
were obtained directly by the previous calculation based on mesh Fig.\ \ref{fig:grid}(a),
presented in Figs.\ \ref{fig:landau} and \ref{fig:landau_edge}.
For clarity, we have chosen in Fig.\ \ref{fig:folded} to consider a quantum
Hall sample of finite length. The length of the unit cell in real space
is given by $L_{x\mathrm{p}}$ and we fix the total number of these cells
to $N_x=50$. Of course, this value can easily be changed if needed.
Hence, there are $N_x$ different momenta $k_x$
in the reduced zone scheme. They are multiples of $2\pi/N_xL_{x\mathrm{p}}$
lying in the interval $\left[ -\pi/L_{x\mathrm{p}}, \pi/L_{x\mathrm{p}} \right]$.
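The bookkeeping of the reduced zone scheme is elementary; the following sketch (an illustration, not the authors' code) folds arbitrary momenta of the extended zone back into the first Brillouin zone $\left[-\pi/L_{x\mathrm{p}}, \pi/L_{x\mathrm{p}}\right)$ of the enlarged unit cell and generates the $N_x$ allowed momenta of the finite sample:

```python
import numpy as np

L_xp = 4.01          # length of the enlarged unit cell (micrometers)
N_x = 50             # number of unit cells, hence N_x allowed momenta
G = 2*np.pi/L_xp     # reciprocal lattice vector of the decoration

def fold(kx):
    """Map kx from the extended zone scheme into [-pi/L_xp, pi/L_xp)."""
    return (kx + G/2) % G - G/2

# The N_x discrete momenta of the finite sample in the reduced zone:
k_grid = -np.pi/L_xp + np.arange(N_x)*2*np.pi/(N_x*L_xp)

# A branch at kx = 1.2 pi/L_xp backfolds to -0.8 pi/L_xp:
print(fold(1.2*np.pi/L_xp) / (np.pi/L_xp))   # approx -0.8
```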
We want to focus on the filled lowest Landau level, i.e., filling factor $\nu=1$. Due to
the upturn of the lowest level upon approaching the boundaries of the sample this
filling factor requires to occupy all states with energies just below the flat region
of the second lowest level, see left panel of Fig.\ \ref{fig:folded}.
However, in order to
exclude any spurious effects of the energy levels of the second lowest Landau level
we set the Fermi level to a value slightly below the flat band of the Landau
level $n=1$, namely to $\epsilon_\text{F} = 1.4 \omega_{\mathrm{c}}$ as indicated
by the green dashed line in \mbox{Fig.\ \ref{fig:folded}.} This allows us to distinguish
unambiguously between occupied and unoccupied levels.
This procedure helps to identify our quantity of interest, the Fermi velocity,
i.e., the derivative of the dispersion with respect to $k_x$ at the Fermi level.
The ensuing minor deviation of
the filling factor $\nu$ from 1 is macroscopically irrelevant for large values of $L_y$.
\subsection{Isolated bays}
Before dealing with the complete system with bays coupled to the
quantum Hall sample we determine the energy spectrum of isolated bays
for later comparison. Note that we choose to consider quadratic bays for
calculational simplicity. But the underlying physics does not require a particular
shape of the bay, so samples decorated with, e.g., circular bays
will show the same physics with somewhat modified quantitative parameters.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\columnwidth]{fig8.pdf}
\caption{Discrete energy spectra of decoupled, i.e., isolated bays
as function of their size $L_\mathrm{b}$ are rendered by blue solid lines and blue
symbols for $\ell_B = \SI{1}{\micro\meter}$. The horizontal red dashed lines indicate
the equidistant Landau levels in the bulk for comparison.}
\label{fig:bay}
\end{figure}
For considering the isolated bays we treat the mesh shown in Fig.\ \ref{fig:grid}(c). The calculated energy spectrum as function of the length $L_\mathrm{b}$ is plotted in Fig.\
\ref{fig:bay}. Having the classical cyclotron picture of circular electronic orbits in mind
we choose $L_\mathrm{b} = 2 \ell_B$ as starting value. No smaller bay would allow
for a classical circular orbit. As expected the energies are larger than the
bulk Landau energies because the confinement due to the bays restricts the
motion of the electrons. Accordingly, increasing $L_\mathrm{b}$ lowers the energies because
enlarging the bays reduces the influence of the confining potential.
The lowest eigen energy of the bay reaches the energy gap between the two lowest Landau levels
at a bay size of $L_\mathrm{b} \approx 2.6 \ell_B$. Using the gate voltage $V_\text{g1}$ to
shift the energies in the bays relative to the rest of the sample offers a
possibility to tune a local mode in resonance to an edge mode. We will discuss
this in more detail in the next subsection.
Adding the decoupled bay to the unit cell, i.e., considering the model
shown in the \smash{panels (b)} and (c) of Fig.\ \ref{fig:grid} without any coupling
yields the eigen energies provided in \mbox{Sect.\ \ref{sec:rectangle}} plus the eigen energies
of the bays which do not disperse at all (not shown). They appear as completely flat
modes if plotted against $k_x$ due to their local nature in real space.
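The bay spectra can be reproduced by a small two-dimensional finite-difference calculation. The following sketch (our illustration, not the production code; dimensionless units $\hbar=m=\omega_\mathrm{c}=\ell_B=1$, Landau gauge $A_x=-y$ implemented via Peierls phases on the mesh) verifies the qualitative statements above: the lowest bay level lies above the bulk Landau level $1/2$, decreases upon enlarging the bay, and for $L_\mathrm{b}=3\ell_B$ lies within the gap below the second Landau level:

```python
import numpy as np

def bay_ground_state(L_b, a=0.1):
    """Lowest eigen energy of an isolated hard-wall bay of size L_b x L_b
    in a perpendicular field; units hbar = m = omega_c = ell_B = 1."""
    pts = np.arange(-L_b/2 + a, L_b/2 - a/2, a)   # interior mesh, psi = 0 on walls
    n = len(pts)
    idx = lambda i, j: i*n + j                    # i: x index, j: y index
    H = np.zeros((n*n, n*n), dtype=complex)
    for i in range(n):
        for j in range(n):
            H[idx(i, j), idx(i, j)] = 2.0/a**2    # on-site kinetic term
            if i + 1 < n:                         # hop in x: Peierls phase exp(-i a y)
                t = -0.5/a**2 * np.exp(-1j*a*pts[j])
                H[idx(i, j), idx(i + 1, j)] = t
                H[idx(i + 1, j), idx(i, j)] = np.conj(t)
            if j + 1 < n:                         # hop in y: no phase in Landau gauge
                H[idx(i, j), idx(i, j + 1)] = -0.5/a**2
                H[idx(i, j + 1), idx(i, j)] = -0.5/a**2
    return np.linalg.eigvalsh(H)[0]

e2, e3 = bay_ground_state(2.0), bay_ground_state(3.0)
print(e2, e3)   # both above the bulk value 0.5; e3 < e2
```

The mesh spacing $a=0.1\ell_B$ is coarser than in the main calculations but suffices for these qualitative checks.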
\subsection{Quantum Hall sample with coupled bays}
Now, we pass to the fully decorated sample where the bays are coupled
to the 2D electron gas in the strip, i.e., we consider the mesh in Fig.\ \ref{fig:grid}(d).
We switch on the coupling between the bays and the strip by gradually
increasing the opening $L_\mathrm{o}$ from zero to the maximum \smash{value $L_\text{b}$.}
The energy spectra are computed and tracked to understand how the coupling
influences the eigen states in general and the edge modes in particular.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{fig9.pdf}
\caption{Energy spectra of the lowest eigen states of a quantum Hall sample with
$L_y = \SI{10}{\micro\meter}$, $L_{x\mathrm{p}} = \SI{4.01}{\micro\meter}$, and
$\ell_B = \SI{1}{\micro\meter}$. The left panel shows the case of weakly coupled bays
because the opening $L_\text{o}$ is small. The middle panel shows a moderate coupling
while the right panel a rather strongly coupled case because the opening $L_\text{o}$
is increased step by step. Red symbols correspond to occupied states while blue symbols
depict unoccupied states; the dashed horizontal green line indicates the chosen Fermi level.
The shaded areas highlight the locations of two avoided crossings due to the hybridization
of local and dispersive modes.}
\label{fig:coupled}
\end{figure}
To this end, we depict three representative cases with openings
$L_\mathrm{o} = \left\lbrace 1 \ell_B, 2 \ell_B, 3 \ell_B \right\rbrace$
and a bay size $L_\mathrm{b} = 3 \ell_B$ in Fig.\ \ref{fig:coupled}.
They represent the cases of weak, moderate, and strong coupling of the bays.
Upon coupling the bays to the quantum Hall sample, i.e., for $L_\mathrm{o} \neq 0$,
the eigen states of the bays and the strip start to merge.
Energy crossings of local modes from the bays with dispersive edge modes
in absence of any coupling turn into avoided crossings once
the bays and the strip are coupled.
This represents a clear fingerprint of level repulsion.
Inspecting the three panels, one realizes that only the right moving
edge modes are influenced by the coupling of the bays. Only their energies
depend on the degree of coupling, i.e., on the size $L_\text{o}$ of the opening.
The left moving modes are spatially separated because they are localized
at the other boundary of the sample without decoration. Hence they are
influenced only exponentially weakly.
A nice example of the level repulsion between a (formerly) local bay mode
and a dispersive, right moving edge mode is seen in the middle of the panels in Fig.\
\ref{fig:coupled} around $k_x=0$. The relevant area is shaded in violet in the
left and the middle panel. An example of a corresponding
wave function is shown in the left panel of Fig.\ \ref{fig:psi}.
In the right panel of \mbox{Fig.\ \ref{fig:coupled}} the avoided crossing is
still present, but hardly discernible because the energies are
already very different due to the strong coupling.
In return, the left panel shows the character of an avoided level
crossing most clearly because the coupling of the bays is still small
and hence the hybridization between the bay modes and the strip modes is still
small.
Another, less obvious and thus surprising, origin of avoided level crossings
between dispersive edge modes and local modes results from the breaking of the translational
invariance and the concomitant backfolding. This mechanism induces hybridization
between local Landau levels and edge modes. An example is indicated by
a shaded area in the left panel at $k_x\approx0.4 \pi/L_{x\text{p}}$ and
in the middle panel at $k_x\approx0.8 \pi/L_{x\text{p}}$ of Fig.\ \ref{fig:coupled}.
Clearly, the effect is weaker
than the hybridization of edge modes and local bay modes.
This is so because the coupling of edge modes and local Landau levels
is a second order effect in the coupling of the bays to the strip.
The bay modes are involved only indirectly by virtual processes, see also
the right panel of Fig.\ \ref{fig:psi} where an exemplary wave function
is shown.
Similar effects were also found in the IQHE where different edge modes
start to mix with one another due to breaking the translational symmetry
by a step potential \cite{ventu11}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.75\columnwidth]{fig10.pdf}
\caption{Probability density $|\psi_n(x, y)|^2$ of two eigen states influenced by
the different avoided crossings. The left panel shows the hybridization between the
edge state of the Landau level $n=0$ with the local mode in the bay. The energy and
momentum of this state are indicated in the middle panel of Fig.\ \ref{fig:coupled}
by the open arrow.
The right panel shows the weak hybridization of the edge state with the second Landau level
$n=1$ mediated by the local mode in the bay.
The energy and momentum of this state are indicated in the middle panel of
Fig.\ \ref{fig:coupled} by the filled arrow.
The parameters of the geometry are $L_y = \SI{10}{\micro\meter}$, $L_{x\mathrm{p}} =
\SI{4.01}{\micro\meter}$, $L_\mathrm{b} = \SI{3}{\micro\meter}$, $L_\mathrm{o} =
\SI{2}{\micro\meter}$, and $\ell_B = \SI{1}{\micro\meter}$.}
\label{fig:psi}
\end{figure}
To support the interpretations given above, we plot the probability density
$|\psi(x, y)|^2$ for eigen states from the two avoided level crossings in
Fig.\ \ref{fig:psi}. The left panel shows a state built from an edge mode
and a local mode from the bays; its position in the energy spectrum
is indicated by an open arrow in the middle panel of Fig.\ \ref{fig:coupled}.
Clearly, the two constituents, the edge mode and the local mode in the bay
can be seen.
The right panel of Fig.\ \ref{fig:psi} shows a state built from an edge mode,
a local mode from the bays, and the next higher Landau level $n=1$;
its position in the energy spectrum
is indicated by a filled arrow in the middle panel of \mbox{Fig.\ \ref{fig:coupled}}.
Here, three states are involved and contribute to the eigen state
as can be discerned nicely. The contribution of the local mode in the bay
is much smaller than in the case shown in the left panel because it
contributes only as virtual state mediating the breaking of the translational
invariance.
\section{Tuning the Fermi velocity}
\label{sec:tune}
In the previous sections we developed a detailed understanding
of the energy spectra of quantum Hall sample decorated by bays.
Our ultimate goal is to study whether and how the Fermi velocity
$v_\text{F}$
can be tuned in such a decorated quantum Hall sample.
We highlight that the Fermi velocity $v_\text{F}$
represents the group velocity of the coherent quantum mechanical
propagation of electronic wave packets. It cannot be seen as
classical propagation of electrons along the (longer) boundaries
of the bays, see below.
Here we present quantitative results for the Fermi velocity and its
dependence on the parameters of the model.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{fig11.pdf}
\caption{Left panel: Fermi velocity $v_\mathrm{F}$ of the right moving modes
as function of the bay size $L_\mathrm{b}$
with $L_y = \SI{10}{\micro\meter}$, $L_{x\mathrm{p}} = \SI{4.01}{\micro\meter}$,
$\ell_B = \SI{1}{\micro\meter}$, and $L_\mathrm{o} = L_\mathrm{b}$. Right panel: Fermi \smash{velocity $v_\mathrm{F}$} as a function of the opening $L_\mathrm{o}$ of the bays
for various bay \smash{sizes $L_\mathrm{b}$} with \smash{$L_y = \SI{10}{\micro\meter}$,} $L_{x\mathrm{p}} =
\SI{4.01}{\micro\meter}$, and $\ell_B = \SI{1}{\micro\meter}$.}
\label{fig:vf_bay_Lo}
\end{figure}
First, we examine the dependence of $v_\text{F}$ on the size of the bays by
increasing $L_\mathrm{b}$ for maximally opened bays, i.e., for $L_\mathrm{o} = L_\mathrm{b}$.
The results are shown in the left panel of \mbox{Fig.\ \ref{fig:vf_bay_Lo}.}
For maximally opened bays $L_\mathrm{o} = L_\mathrm{b}$ the dispersions display no
flat region because the strong level repulsion induces
sizable momentum dependencies of most modes,
see right panel of Fig.\ \ref{fig:coupled}. Thus no strong dependence of the
Fermi velocity is expected in accordance with the left panel of Fig.\ \ref{fig:vf_bay_Lo}.
The complex interplay of many hybridizing levels makes it impossible to predict
precisely for which parameters $v_\mathrm{F}$ takes its minimum value.
However, the comparison of the left panel in Fig.\ \ref{fig:vf_bay_Lo} with Fig.\ \ref{fig:bay}
reveals that the Fermi velocity is indeed influenced when the local mode
in the bay approaches the Fermi level, here $1.4\omega_\text{c}$, which is the case
around $L_\mathrm{b} = 2.6 \ell_B$. Note that the Fermi velocity
is generally reduced, roughly by a factor 2, once the local modes have come down
in energy so that they reach the Fermi level.
The next parameter varied is the opening $L_\text{o}$ of the bay.
The right panel of Fig.\ \ref{fig:vf_bay_Lo} shows the results for various bay sizes. Note that the opening
cannot exceed the size of the bay, hence the curves stop at $L_\text{o}=L_\text{b}$.
All curves follow the general trend that the Fermi velocity is lowered upon
increasing the hybridization between local modes in the bays and the dispersive
edge modes. This is achieved by increasing the opening $L_\text{o}$.
An approximate reduction by a factor of 2 is achieved once the
local energy levels from the bay come down in energy, i.e., for large \smash{enough
$L_\text{b}$.} This reduction is not very impressive; in addition, the geometry
is fixed once the sample is grown and cannot be tuned on the fly.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\columnwidth]{fig12.pdf}
\caption{Fermi velocity $v_\mathrm{F}$ as function of the distance between the bays, i.e.,
$L_{x \mathrm{p}}$, see \mbox{Fig.\ \ref{fig:ez},} for various bay openings $L_\mathrm{o}$ with
$L_y = \SI{10}{\micro\meter}$, $L_{\mathrm{b}} = \SI{3}{\micro\meter}$, and
$\ell_B = \SI{1}{\micro\meter}$.}
\label{fig:vf_Lxp}
\end{figure}
The last geometric parameter whose influence on $v_\text{F}$ we study is
the distance between the bays, i.e., the size $L_{x\text{p}}$ of the decorated unit cell,
see Fig.\ \ref{fig:ez}. One could imagine that a certain resonance phenomenon occurs
for special values of $L_{x\text{p}}$. Generally, we expect that the influence of the
decorating bays decreases upon increasing $L_{x\text{p}}$ because the fraction
of decorated boundary decreases. Explicit results are shown in Fig.\ \ref{fig:vf_Lxp}.
Again, the dependence of $v_\text{F}$ is rather weak. The expected trend
that larger $L_{x\text{p}}$ reduces $v_\text{F}$ less
is clearly confirmed because the Fermi velocity approaches
its undecorated value of \smash{about $1\omega_\text{c}\ell_B$} upon increasing $L_{x\text{p}}$.
At small values of $L_{x\text{p}}$ we retrieve a reduction of the order of a
factor 2. But no resonance phenomena at particular values of the interbay distance
are found. We attribute this to the fact that none of the local modes in the bay
is truly in resonance with the edge modes.
In order to identify a suitable tuning parameter we resort to the results gained
for lattice models \cite{uhrig16,malki17b}. Three ingredients are important for
sizable changes of the Fermi velocity: (i) the local and the dispersive modes
must be in (or close to) resonance. (ii) There must be a parameter to tune and to
detune this resonance. (iii) The coupling of the modes should be rather small
so that they are sensitive to being or not being in resonance.
Translating these conclusions back to the IQHE, it appears that we have to use the
gate voltage $V_\text{g1}$ to control the resonance between the local modes
in the bays and the dispersive edge modes. It is obvious that one can shift
the bay modes by changing $V_\text{g1}$. An additional asset is that this can be
done on the fly so that one disposes of a true control knob for the speed of
signal transmission and hence for the delay time which can be tuned while
the signal processing is going on.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\columnwidth]{fig13.pdf}
\caption{Upper panel: The horizontal dashed line shows the set Fermi level while
the slanted solid lines depict the energy level in the isolated bays shifted by
the gate voltage. The vertical red line is a guide to the eye to link the resonance
visible in the upper panel to the strong response in the lower panel.
Lower panel: Fermi velocity $v_\mathrm{F}$ as function of the gate
\smash{voltage $V_\text{g1}$}
for $L_\text{b}=\SI{2}{\micro\meter}$,
$L_y = \SI{10}{\micro\meter}$, $L_{x\mathrm{p}} = \SI{4.01}{\micro\meter}$,
$L_\mathrm{o} = \SI{1}{\micro\meter}$, and $\ell_B = \SI{1}{\micro\meter}$.}
\label{fig:vf_pot1}
\end{figure}
The opening of the bays should not be large because the coupling and hence
the hybridization of the local and the dispersive modes should be rather weak.
Thus we choose the rather small value $L_\text{o}=\ell_B$ in Fig.\ \ref{fig:vf_pot1}.
In this figure, we plot the dependence of the Fermi velocity on
the gate voltage. For most values, the Fermi velocity does not deviate strongly from
its value of about $1\omega_\text{c}\ell_B$ in a sample without bays. But if the energy levels of
the local modes in the bays approach the dispersive edge mode at the Fermi level
they resonate and produce an avoided level crossing. In the region of the
avoided level crossing
the local mode and the dispersive one mix so that the formerly steep
crossing of the dispersion through the Fermi level becomes flat. Hence the Fermi velocity
is considerably suppressed. Note that the resulting resonance dips of
$v_\text{F}$ are rather narrow and can easily be used to (de)tune the velocity
by moderate changes of the applied external gate voltage.
In this fashion, changes of the Fermi velocity by factors 10 to 100 should
be realizable, similar to what was found in lattice models \cite{uhrig16,malki17b}.
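The shape of these resonance dips can be rationalized with a minimal two-level model (our heuristic sketch, not a derivation from the full calculation): a linear edge mode $\varepsilon(k)=v_0 k$ hybridizes with a single local level $\varepsilon_0$ via a matrix element $g$ (the symbols $v_0$, $g$, and the detuning $\delta=\varepsilon_0-\epsilon_\text{F}$ are our notation). Solving $(E-\varepsilon(k))(E-\varepsilon_0)=g^2$ at $E=\epsilon_\text{F}$ yields $v_\mathrm{F} = v_0\,\delta^2/(\delta^2+g^2)$: a dip towards zero at resonance whose width is set by the coupling $g$, in line with the observation that smaller openings give narrower and deeper dips:

```python
import numpy as np

def lower_band(k, eps0, v0=1.0, g=0.05):
    """Lower hybridized band of H(k) = [[v0*k, g], [g, eps0]]."""
    return 0.5*(v0*k + eps0) - np.sqrt((0.5*(v0*k - eps0))**2 + g**2)

def fermi_velocity(eps0, eF=0.0, v0=1.0, g=0.05, h=1e-7):
    """Slope of the lower band at its crossing with the Fermi level eF.
    The crossing follows from (eF - v0*k)(eF - eps0) = g^2; assumes eps0 > eF
    so that the crossing lies on the lower band."""
    k_star = (eF - g**2/(eF - eps0))/v0
    return (lower_band(k_star + h, eps0, v0, g)
            - lower_band(k_star - h, eps0, v0, g)) / (2*h)

delta = 0.1                   # detuning of the local level above the Fermi energy
print(fermi_velocity(eps0=delta))   # approx delta^2/(delta^2 + g^2) = 0.8
```

Close to resonance ($\delta \ll g$) the velocity is suppressed as $(\delta/g)^2$, which is why weakly coupled bays allow for the deepest dips.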
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\columnwidth,clip]{fig14.pdf}
\caption{Upper panel: The horizontal dashed line shows the set Fermi level while
the slanted solid lines depict the energy level in the isolated bays shifted by
the gate voltage. The vertical red lines are guides to the eye to link the resonance
visible in the upper panel to the strong response in the lower three panels.
Lower three panels: Fermi velocity $v_\mathrm{F}$ as function of the gate
\smash{voltage $V_\text{g1}$} for $L_\text{b}=\SI{3}{\micro\meter}$,
$L_y = \SI{10}{\micro\meter}$, $L_{x\mathrm{p}} = \SI{4.01}{\micro\meter}$,
$\ell_B = \SI{1}{\micro\meter}$, and three different values of
$L_\mathrm{o}$ as indicated.
\label{fig:vf_pot2}}
\end{figure}
Comparing Figs.\ \ref{fig:vf_pot1} and \ref{fig:vf_pot2} one realizes the
similarities of the curves. The width of the resonance dips is comparable
if the openings $L_\text{o}$ are the same, cf.\ Fig.\ \ref{fig:vf_pot1}
and the second lowest panel of Fig.\ \ref{fig:vf_pot2}.
Fig.\ \ref{fig:vf_pot2} illustrates very clearly that
larger openings lead to significantly broader dips which are less
deep. In return, smaller openings and thus less coupled bays lead
to narrower dips with significantly lower residual Fermi velocity
at the minimum. This minimum value of $v_\mathrm{F}$
depends on how flat the dispersion of the hybridized modes remains
as determined by the coupling strength: Weaker coupling implies
better localized hybridized modes with flatter dispersion.
Flatter modes allow for sharper dips to lower residual values of the
Fermi velocity. Note that the reduction of the Fermi velocity can reach a factor of
100 for narrow bay openings. A classical interpretation of slower
propagation due to the longer path along the boundaries of the
bays would explain a factor $3.75$ at best for $L_\text{o}=
\SI{0.5}{\micro\meter}$.
The positions of the resonance dips
depend on the energy levels of the modes in the bay so that
the bay size influences them strongly. In the smaller bays studied
in Fig.\ \ref{fig:vf_pot1}, the lowest bay level lies above the Fermi
level so that the gate voltage has to bring it down in order to observe
resonance. In the larger bays, studied
in Fig.\ \ref{fig:vf_pot2}, the lowest bay level lies below the Fermi
level while the second lowest above it.
So Fig.\ \ref{fig:vf_pot2} shows that several dips may occur, even
for different signs of the gate voltages.
All in all, it appears that the precise position of the dips is not
at the resonance of the energy levels of the decoupled bays, but at slightly
higher values of the gate voltage. We attribute this to the effects
of the hybridizing couplings which shift the local modes in the bays
downwards in energy.
\section{Conclusions}
\label{sec:conclusion}
Topologically protected edge states possess many theoretically appealing properties.
Still, avenues towards applications have not been followed by broad research.
The recent proposal of tunable Fermi velocities in Chern insulators and
spinful topological insulators for the realization of delay lines and interference
devices is a step in this direction. The purpose of the present study
was to show that no lattice models are required, but that semiconductor samples
with decorated boundaries show the same phenomena. This finding represents
a substantial step forward towards realization because of the extremely
high standard of designing and growing nanostructures for semiconductor devices.
We analyzed the dependence of the dispersion of the edge states in decorated
quantum Hall samples on various
parameters. The geometry of the sample sets the energy levels and partly
the degree of coupling between the decorating bays and the bulk of the
two-dimensional sample. Yet the geometric parameters do not allow for
a fine-tuning of the Fermi velocity, let alone quick changes of it in the course
of signal processing.
But gate voltages can achieve the wanted tunability. First, we found that the
local levels in the bays should be close in energy to the Fermi level in
the remainder of the quantum Hall sample so that the gate voltage applied to
the bays does not need to shift them to a large extent. Second, the coupling
between the bays and the rest of the sample should be rather weak to
have rather narrow and deep dips in the Fermi velocity if the local
modes are tuned into resonance to the dispersive edge states.
Then, the fundamental mechanism of mode mixing and level repulsion
leads to weakly dispersive eigen modes crossing the Fermi level.
This represents the key phenomenon for tunability.
Changes by up to two orders of magnitude appear possible.
In our calculations, the degree of coupling is a geometric parameter.
In practice, we propose to make it tunable as well by additional gate
electrodes which modify the width of the opening of the bays, cf.\
Ref.\ \cite{feve07}.
The calculations are based on discretizing the sample in real space
and mapping it to a tight-binding type of model. For fine enough meshes, reliable
results valid for the continuum case are obtained
as we could verify by comparison to analytic bulk solutions.
We increased the complexity of the considered geometry step by step in order
to gain a reliable understanding of the occurring physical phenomena.
The approach is flexible enough to be adapted to various geometries.
We considered quadratic bays, but any other shape is possible as well;
only small, quantitative changes are expected.
Here the focus was
on a proof-of-principle calculation to show that the anticipated physics
takes indeed place in the integer quantum Hall effect.
In view of experimental realizability, some aspects must be kept in
mind. First, the neglected interaction between the electrons may lead to
the formation of certain charge modulations at the boundary. On the one hand,
it is established that compressible and incompressible stripes form close
to the boundaries \cite{chklo92}. The incompressible stripes may
hinder the propagation of signals. On the other hand, if the filling
is tuned just below filling factor $\nu=1$, we expect that this
effect is avoided because no incompressible stripes are formed at the
edges. The final clarification, however, can only be reached by
an experimental study.
For concreteness, we showed calculations for
$\ell_B=\SI{1}{\micro\meter}$. This
value corresponds via $B=\hbar/(e\ell_B^2)$ to a magnetic field of
$0.66$mT and to an electron density of $3.2\cdot 10^7$cm$^{-2}$.
Both values are very small
compared to the values in generic quantum Hall setups
which have magnetic fields and electron densities
higher by about a factor $10^4$. Thus, for realization
one has to look for systems with high mobility at much smaller electron
densities or to make the geometric structures of the sample smaller, e.g.,
a factor 5 in linear dimensions yields a factor $25$ in the electron density
and in the magnetic field.
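These numbers follow directly from the quoted relation $B=\hbar/(e\ell_B^2)$; the following quick check (SI constants; we assume the quoted density includes the twofold spin degeneracy, i.e., $n=1/(\pi\ell_B^2)$, which reproduces the stated value) also confirms the factor-25 scaling:

```python
import math

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C

def field_and_density(ell_B):
    """Magnetic field B = hbar/(e ell_B^2) in Tesla and electron density for
    filling nu = 1 in cm^-2, assuming spin degeneracy 2: n = 1/(pi ell_B^2)."""
    B = hbar/(e*ell_B**2)
    n = 1.0/(math.pi*ell_B**2) * 1e-4    # convert m^-2 -> cm^-2
    return B, n

B, n = field_and_density(1e-6)           # ell_B = 1 micrometer
print(B*1e3, n)                          # approx 0.66 mT and 3.2e7 cm^-2

# Shrinking all lengths by a factor 5 raises B and n by a factor 25:
B5, n5 = field_and_density(0.2e-6)
print(B5/B, n5/n)                        # 25.0, 25.0
```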
An interesting alternative to standard semiconductors is the quantum
Hall effect in graphene. The relation between magnetic length $\ell_B$ and
magnetic field is the same \cite{brey06,abani07,delpl10,wang11c,stegm15}, but the relevant electron density $n$ is measured
relative to the semimetal so that small values are easily realized.
Due to the density-of-states linear in energy one has $n\propto
\epsilon_\text{F}^2$.
Furthermore, due to the perfect lattice structure a high mobility can
be expected. So the promising aim is to create non-trivial boundaries
with bays on the scale of $10$ to $1000$nm in graphene.
In conclusion, an experimentally realizable topological phase, the integer quantum Hall effect,
allows for tunable Fermi velocities if its edges are appropriately decorated. Gate voltages can
serve as control parameters for tuning. These findings should
encourage further research
to realize such systems on the laboratory level to ultimately pave the way towards real devices.
As an outlook we want to emphasize that the presented finding can be
extended in several ways as has been done for lattice models \cite{malki17b}.
The detrimental effects of disorder can be included to study
the robustness of the observed effects. Such investigations will help
to understand with which accuracy an experimental realization has to
be grown in order to be able to observe the predicted effects.
Without doubt, this constitutes an essential step toward applications.
Second, our findings can be extended
to spinful models without conceptual difficulties. If the spin is subject to
spin-orbit coupling the chiral edge modes will generically become helical modes
which opens up the promising field for applications in spintronics, for instance
realizing switchable spin diodes. Thus, many tantalizing research projects lie ahead.
\section*{Acknowledgements}
We acknowledge useful discussions with Manfred Bayer, Axel Lorke, Bruce Normand, and
Dirk Reuter.
\paragraph{Funding information}
One of the authors (MM) gratefully acknowledges financial support by the Studienstiftung des deutschen Volkes. This work was also supported by the Deutsche Forschungsgemeinschaft and the
Russian Foundation of Basic Research in the International Collaborative
Research Center TRR 160.
#include "El.hpp"
#include "./SVT/Normal.hpp"
#include "./SVT/Cross.hpp"
#include "./SVT/PivotedQR.hpp"
#include "./SVT/TSQR.hpp"
namespace El {
template<typename F>
Int SVT( Matrix<F>& A, Base<F> tau, bool relative )
{
DEBUG_ONLY(CSE cse("SVT"))
return svt::Normal( A, tau, relative );
}
template<typename F>
Int SVT( ElementalMatrix<F>& A, Base<F> tau, bool relative )
{
DEBUG_ONLY(CSE cse("SVT"))
// NOTE: This should be less accurate (but faster) than svt::Normal
return svt::Cross( A, tau, relative );
}
template<typename F>
Int SVT( Matrix<F>& A, Base<F> tau, Int relaxedRank, bool relative )
{
DEBUG_ONLY(CSE cse("SVT"))
// Preprocess with numSteps iterations of pivoted QR factorization
return svt::PivotedQR( A, tau, relaxedRank, relative );
}
template<typename F>
Int SVT( ElementalMatrix<F>& A, Base<F> tau, Int relaxedRank, bool relative )
{
DEBUG_ONLY(CSE cse("SVT"))
// Preprocess with numSteps iterations of pivoted QR factorization
return svt::PivotedQR( A, tau, relaxedRank, relative );
}
// Singular-value soft-thresholding based on TSQR
template<typename F,Dist U>
Int SVT( DistMatrix<F,U,STAR>& A, Base<F> tau, bool relative )
{
DEBUG_ONLY(CSE cse("SVT"))
return svt::TSQR( A, tau, relative );
}
#define PROTO_DIST(F,U) \
template Int SVT( DistMatrix<F,U,STAR>& A, Base<F> tau, bool relative );
#define PROTO(F) \
template Int SVT( Matrix<F>& A, Base<F> tau, bool relative ); \
template Int SVT( ElementalMatrix<F>& A, Base<F> tau, bool relative ); \
template Int SVT \
( Matrix<F>& A, Base<F> tau, Int relaxedRank, bool relative ); \
template Int SVT \
( ElementalMatrix<F>& A, Base<F> tau, Int relaxedRank, bool relative ); \
template Int svt::Cross \
( Matrix<F>& A, Base<F> tau, bool relative ); \
template Int svt::Cross \
( ElementalMatrix<F>& A, Base<F> tau, bool relative ); \
template Int svt::Cross \
( DistMatrix<F,VC,STAR>& A, Base<F> tau, bool relative ); \
template Int svt::PivotedQR \
( Matrix<F>& A, Base<F> tau, Int numSteps, bool relative ); \
template Int svt::PivotedQR \
( ElementalMatrix<F>& A, Base<F> tau, Int numSteps, bool relative ); \
template Int svt::TSQR \
( ElementalMatrix<F>& A, Base<F> tau, bool relative ); \
PROTO_DIST(F,MC ) \
PROTO_DIST(F,MD ) \
PROTO_DIST(F,MR ) \
PROTO_DIST(F,STAR) \
PROTO_DIST(F,VC ) \
PROTO_DIST(F,VR )
#define EL_NO_INT_PROTO
#include "El/macros/Instantiate.h"
} // namespace El
\section{Introduction}
For any complex 3D conformally flat manifold we can always find local coordinates $x,y,z$ such that the classical Hamiltonian takes the form
\begin{equation}\label{hamiltonian}
H=\frac{1}{\lambda(x,y,z)}(p_1^2+p_2^2+p_3^2)+V(x,y,z),\qquad
(x,y,z)=(x_1,x_2,x_3),
\end{equation}
i.e., the complex metric is
$ds^2=\lambda(x,y,z)(dx^2+dy^2+dz^2)$.
This system is {\bf superintegrable} for some potential $V$ if it
admits 5 functionally independent constants of the motion (the maximum
number possible) that are
polynomials in the momenta $p_j$. It is {\bf second order
superintegrable } if the constants of the motion are quadratic,
i.e., of the form
\begin{equation}\label{symmetry}
S=\sum a^{ji}(x,y,z)p_jp_i +W(x,y,z).\end{equation}
That is, $\{{ H},{S}\}=0$ where
\[ \{f,g\}=\sum_{j=1}^n(\partial_{x_j}f\partial_{p_j}g-\partial_{p_j}f\partial_{x_j}g)
\]
is the Poisson bracket for functions $f({\bf x},{\bf p}),g({\bf
x},{\bf p})$ on phase space \cite{WOJ,EVA,EVAN,FMSUW,FSUW,MSVW,CALO,CIMC}.
There is a similar definition of second order superintegrability
for quantum systems with formally self-adjoint Schr\"odinger and
symmetry operators whose classical analogs are those given above,
and these systems correspond one-to-one, \cite{KKM20061}. (In particular, the
terms in the Hamiltonian that are quadratic in the momenta are
replaced by the Laplace-Beltrami operator on the manifold, and Poisson
brackets are replaced by operator commutators in the quantum case.) Historically the most
important superintegrable system is the Euclidean space Kepler-Coulomb
problem where $V=\alpha/\sqrt{x^2+y^2+z^2}$. (Recall that this
system not only has angular momentum and energy as constants of the
motion but a Laplace vector that is conserved.) Second order superintegrable systems have remarkable properties. In
particular, every trajectory of a solution of the Hamilton equations
for such a system in 6-dimensional phase space lies on the intersection
of 5 independent constant of the motion hypersurfaces in that space,
so that the trajectory can be obtained by algebraic methods alone,
with no need to solve Hamilton's equations directly. Other common properties
include multiseparability (which implies multiintegrability, i.e.,
integrability in distinct ways) \cite{WOJ,EVA,EVAN,FMSUW, FSUW,MSVW,
CIMC,CALO,MPSTAN,GZLU,BDK} and the existence of a quadratic algebra of symmetries that
closes at order 6. The quadratic algebra in the quantum case gives
information relating the spectra of the constants of the motion,
including the Schr\"odinger operator.
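To make the defining condition $\{H,S\}=0$ concrete, the following sympy sketch (our notation, not taken from the text) verifies that the square of the angular momentum component $J_3$ is a second order constant of the motion for the Euclidean Kepler-Coulomb Hamiltonian, using the Poisson bracket exactly as defined above.

```python
import sympy as sp

x, y, z, px, py, pz, alpha = sp.symbols('x y z p_x p_y p_z alpha')
coords, momenta = (x, y, z), (px, py, pz)

def poisson(f, g):
    # {f, g} = sum_j (d_xj f * d_pj g - d_pj f * d_xj g)
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(coords, momenta))

# Kepler-Coulomb Hamiltonian in Cartesian coordinates (lambda = 1)
H = px**2 + py**2 + pz**2 + alpha/sp.sqrt(x**2 + y**2 + z**2)

# second order constant of the motion: square of angular momentum J_3
J3 = x*py - y*px
bracket = sp.simplify(poisson(H, J3**2))
print(bracket)  # -> 0
```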
Many examples of 3D superintegrable systems are known, although, in
distinction to the 2D case, they
have not been classified, \cite{GPS, KKMP, KKW, KKMW, RAN, KMWP}.
Here, we employ theoretical methods based on integrability
conditions to obtain a complete classification of Euclidean systems
with nondegenerate potentials. To make it clear how these systems
relate to general second order superintegrable systems we introduce
some terminology. A set of 2nd order symmetries for a classical superintegrable system is either linearly independent (LI) or
linearly dependent (LD). LI sets can be functionally independent (FI)
in the 6-dimensional phase space in two ways: they are strongly functionally independent (FI-S) if they are
functionally independent even when the potential is set equal to zero. They are
weakly functionally independent (FI-W) if the functional independence
holds only when the potential is turned on (example: the isotropic
oscillator). Otherwise they are functionally dependent (FD). An LI set is functionally linearly dependent (FLD) if it is linearly dependent at each regular point, but the linear dependence varies with the point. An LI set can be FLD in two ways. It
is weakly functionally linearly dependent (FLD-W) if the functional linear
dependence holds only with the potential turned off and strongly
functionally linearly dependent (FLD-S) if the functional linear dependence
holds even with the potential turned on. Otherwise the set is
functionally linearly independent (FLI). The Calogero and Generalized Calogero
potentials are FD and FLD-S \cite{KKM20061}. One property of FLD systems is that their potentials satisfy a first order linear partial differential equation, so they can be expressed in terms of a function of only two variables; in that sense they are degenerate. This paper is concerned with the classification of functionally linearly independent potentials. As shown in
\cite{KKM20051}, if a 3D second order superintegrable system is FLI then the potential $V$ must satisfy a system of coupled PDEs of the form
\begin{equation} \label{veqn1a} V_{22}=V_{11}+A^{22}V_1+B^{22}V_2+C^{22}V_3,\
V_{33}=V_{11}+A^{33}V_1+B^{33}V_2+C^{33}V_3,
\end{equation}
$$
V_{12}= A^{12}V_1+B^{12}V_2+C^{12}V_3,\
V_{13}= A^{13}V_1+B^{13}V_2+C^{13}V_3,$$
\begin{equation}\label{3Dnondegenerate} V_{23}= A^{23}V_1+B^{23}V_2+C^{23}V_3.\end{equation}
The analytic functions $A^{ij},B^{ij},C^{ij}$ are determined uniquely from the Bertrand-Darboux equations for the 5
constants of the motion and are analytic except for a finite number of poles.
If the integrability conditions for these equations are satisfied identically then the potential is said to be {\bf nondegenerate}. A nondegenerate potential (which is actually a vector space of potential functions) is characterized by the following property. At any regular point
${\bf x}_0=(x_0,y_0,z_0)$, i.e., a point where
the $A^{ij},B^{ij},C^{ij}$ are defined and analytic and the constants of the motion are functionally independent,
we can prescribe the values of
$V({\bf x}_0)$, $V_1({\bf x}_0)$,$V_2({\bf x}_0)$,$V_3({\bf x}_0)$,$V_{11}({\bf x}_0)$ arbitrarily and obtain a unique solution of (\ref{3Dnondegenerate}). Here,
$V_1=\partial V/\partial x$, $V_2=\partial V/\partial y$, etc. The 4 parameters for a nondegenerate potential (in addition to the usual additive constant) are the maximum number of parameters that can appear in a
superintegrable system. A FLI superintegrable system is {\bf degenerate} if the potential function satisfies additional restrictions in addition to equations (\ref{3Dnondegenerate}). These restrictions can arise in two ways, either as additional equations arising directly from the Bertrand-Darboux equations or as restrictions that occur because the integrability conditions for equations (\ref{3Dnondegenerate}) are not satisfied identically. In any case, the number of free parameters for a degenerate potential is strictly fewer than 4. In this sense the nondegenerate potentials are those of maximal symmetry, though the symmetry is not meant in the traditional Lie group or Lie algebra sense. Nondegenerate potentials admit no nontrivial Killing vectors.
Our concern in this paper is the classification of all 3D FLI
nondegenerate potentials in complex Euclidean space. In \cite{KKM20071} we
have begun the study of fine structure for second order 3D
superintegrable systems, i.e., the structure and classification theory
of systems with various types of degenerate potentials.
Our plan of attack is as follows. First we give a brief
review of the fundamental equations that characterize second order
FLI systems with nondegenerate potential in a 3D conformally flat
space. Then we review the structure theory that has been worked out
for these systems, including multiseparability and the existence of a
quadratic algebra. We will recall the fact that all such systems are
equivalent via a St\"ackel transform to a superintegrable system on
complex Euclidean 3-space or on the complex 3-sphere. Thus a
classification theory must focus on these two spaces. Due to the
multiseparability of these systems we can use separation of variables
theory to help attack the classification problem. In \cite{KKM20052} we showed
that associated with each of the 7 Jacobi elliptic coordinate generically
separable systems for complex Euclidean space there was a
unique superintegrable system with a separable eigenbasis in these
coordinates. Thus the only remaining systems were those that separated
in nongeneric orthogonal coordinates alone, e.g., Cartesian coordinates, spherical
coordinates, etc. The possible nongeneric separable coordinates are
known \cite{ERNIE} so, in principle, the classification problem could be solved. Unfortunately, that still left so many specific
coordinate systems to check that classification was a practical
impossibility. Here we present a new attack on the problem, based on
characterizing the possible
superintegrable systems with nondegenerate potentials as points on an algebraic variety.
Specifically, we determine a variety in 10 variables subject to six
quadratic polynomial constraints. Each point on the
variety corresponds to a superintegrable system. The Euclidean group
$E(3,\bb C)$ acts on the variety such that two points determine the same
superintegrable system if and only if they lie on the same leaf of
the foliation. The differential equations describing the spacial
evolution of the system are just those induced by the Lie algebra of
the subgroup of Euclidean translations. A further simplification is
achieved by writing the algebraic and differential equations in an
explicit
form so that they transform irreducibly according to representations
of the rotation subgroup $SO(3,\bb C)$. At this point the equations
are simple enough to check directly which superintegrable systems
arise that permit separation in a given coordinate system. We show that in addition to the 7 superintegrable systems corresponding to separation in one of the generic separable coordinates, there are exactly 3 superintegrable systems that separate only in nongeneric coordinates. Furthermore, for every system of orthogonal separable coordinates in complex Euclidean space there corresponds at least one nondegenerate superintegrable system that separates in these coordinates. The method of proof of these results should generalize to higher dimensions.
\section{Conformally flat spaces in three dimensions} Here we review some basic results about 3D second order superintegrable systems in conformally flat spaces.
For each such space there always exists a local coordinate system
$x,y,z$ and a nonzero function $\lambda(x,y,z)=\exp G(x,y,z)$ such that the
Hamiltonian is (\ref{hamiltonian}).
A
quadratic constant of the motion (or generalized symmetry) (\ref{symmetry})
must satisfy
$
\{{ H},{ S}\}=0,
$
i.e.,
\begin{equation}\label{killingtensors}\begin{array}{lll}
a_i^{ii}&=&-G_1a^{1i}-G_2a^{2i}-G_3a^{3i}\\
2a^{ij}_i+
a_j^{ii}&=&-G_1a^{1j}-G_2a^{2j}-G_3a^{3j},\quad i\ne j\\
a^{ij}_k+a^{ki}_j+a^{jk}_i&=&0,\quad i,j,k\ {\rm distinct}\end{array}
\end{equation}
and
\begin{equation}
W_k=\lambda\sum_{s=1}^3a^{sk}
V_s,\quad k=1,2,3
.\label{potc}
\end{equation}
(Here a subscript $j$ denotes differentiation with respect to $x_j$.)
The requirement that $\partial_{x_\ell} W_j=\partial_{x_j}
W_\ell,\ \ell\ne j$ leads from
(\ref{potc}) to the
second order Bertrand-Darboux partial differential equations for the potential.
\begin{equation}\label{BertrandDarboux}
\sum_{s=1}^3\left[V_{sj}\lambda a^{s\ell}-V_{s\ell}\lambda a^{sj}+
V_s\left(
(\lambda a^{s\ell})_j-(\lambda
a^{sj})_\ell\right)\right]=0.
\end{equation}
For second order superintegrability in 3D there must
be five functionally independent constants of the motion (including the Hamiltonian itself). Thus the Hamilton-Jacobi
equation admits four additional constants of the
motion:
\[
{ S}_h=\sum_{j,k=1}^3a^{jk}_{(h)}p_kp_j+W_{(h)}={ L}_h+W_{(h)},\qquad h=1,\cdots,4.
\]
We assume that the four functions ${ { S}}_h$ together with ${H}$ are functionally linearly
independent in the six-dimensional phase space. In \cite{KKM20051} it is shown that the matrix of the 15 B-D equations for the potential has rank at least 5, hence we can solve for the second derivatives of the potential in the form (\ref{veqn1a}).
If the matrix has rank $>5$ then there will be
additional conditions on the potential, of the form
$D^{1}_{(s)}V_1+D^{2}_{(s)}V_2+D^{3}_{(s)}V_3=0$, and the potential will depend on fewer parameters. Here the $A^{ij},B^{ij},C^{ij},D^{i}_{(s)}$ are
functions of $x$, symmetric in the superscripts, that can be calculated explicitly.
Suppose now that the superintegrable system is such that the rank is exactly 5 so that the relations are only
(\ref{veqn1a}). Further, suppose the integrability conditions for system (\ref{veqn1a}) are satisfied identically.
In this case the potential is nondegenerate.
Thus, at any point ${\bf x}_0$, where the $A^{ij}, B^{ij}, C^{ij}$ are
defined and analytic, there is a unique solution $V({\bf x})$ with
arbitrarily prescribed values of $V_1({\bf x}_0), V_2({\bf x}_0),V_3({\bf x}_0),V_{11}({\bf x}_0)$ (as well as the value of $V({\bf x}_0)$ itself.)
The points ${\bf x}_0$ are called {\it regular}.
Assuming that $V$ is nondegenerate, we
substitute the requirement (\ref{veqn1a}) into the
B-D equations (\ref{BertrandDarboux}) and obtain three equations for the
derivatives $a^{jk}_i$.
Then we can equate coefficients of $V_1,V_2,V_3,V_{11}$ on each side of the conditions $\partial_1V_{23}=\partial_2V_{13}
=\partial_3V_{12}$, $\partial_3V_{23}=\partial_2V_{33}$, etc., to obtain
integrability conditions, the simplest of which are
\begin{equation}\label{int11}
A^{23}=B^{13}=C^{12},\ B^{12}-A^{22}=C^{13}-A^{33},\
B^{23}=A^{13}+C^{22},\ C^{23}=A^{12}+B^{33}.
\end{equation}
It follows that the 15 unknowns can be expressed linearly in terms of the 10
functions
\begin{equation}\label{10terms} A^{12},A^{13},A^{22},A^{23},A^{33}, B^{12}, B^{22},B^{23},B^{33}, C^{33}.\end{equation}
In general, the integrability conditions satisfied by the potential
equations take the following form. We introduce the vector
${\bf w}=\left( V_1, V_2, V_3,
V_{11}\right)^{\rm T}$,
and the matrices
${\bf A}^{(j)}$, $j=1,2,3$, such that
\begin{equation}\label{int21}
\partial_{x_j}{\bf w}={\bf A}^{(j )}{\bf w}\qquad j=1,2,3.
\end{equation}
The integrability conditions for this system are
\begin{equation}\label{int31}
A^{(j)}_i-A^{(i)}_j=A^{(i)}A^{(j)}-A^{(j)}A^{(i)}\equiv [A^{(i)},A^{(j)}].
\end{equation}
The integrability conditions (\ref{int11}) and (\ref{int31}) are analytic expressions in $x_1,x_2,x_3$ and must hold identically.
Then the system has a solution $V$ depending on 4 parameters (plus an arbitrary additive parameter).
Using the nondegenerate potential condition and the B-D equations we can solve for
all of the first partial derivatives $a^{jk}_i$ of a quadratic
symmetry to obtain the 18 basic symmetry equations, (27) in \cite{KKM20051},
plus the linear relations (\ref{int11}).
Using the linear relations we can express
$C^{12},C^{13},C^{22},C^{23}$ and $B^{13}$ in terms of the remaining
$10$ functions. Each $a^{jk}_i$ is a linear combination of the
$a^{\ell m}$ with coefficients that are linear in the 10 variables
and in the $G_s$.
Since this system of first order partial differential equations
is involutive, the general
solution for the 6 functions $a^{jk}$ can depend on at most 6
parameters, the values $a^{jk}({\bf x}_0)$ at a fixed regular point
${\bf x}_0$. For the integrability conditions
we define the vector-valued function
\[
{\bf h}(x,y,z)=\left(
a^{11},a^{12},a^{13},a^{22},a^{23},a^{33}\right)^{\rm T}
\]
and directly compute the $6\times 6$ matrix functions ${\cal A}^{(j)}$ to get the first-order system
$
\partial_{x_j}{\bf h}={\cal A}^{(j )}{\bf h},$ $ j=1,2,3$.
The integrability conditions for this system are
\begin{equation}\label{int5c}
{\cal A}^{(j)}_i{\bf h}-{\cal A}^{(i)}_j{\bf h}={\cal A}^{(i)}{\cal A}^{(j)}{\bf h}-{\cal A}^{(j)}{\cal A}^{(i)}{\bf h}\equiv [{\cal A}^{(i)},{\cal A}^{(j)}]{\bf h}.
\end{equation}
By assumption we have
5 functionally linearly independent symmetries, so at each regular
point the solutions sweep out a 5 dimensional subspace of the 6
dimensional space of symmetric matrices. However, from the conditions
derived above there seems to be no obstruction to construction of a 6
dimensional space of solutions. Indeed in \cite{KKM20051} we show that this
construction can always be carried out.
\begin{theorem} $ (5\Longrightarrow 6)$ Let $V$ be a nondegenerate potential corresponding to
a conformally flat space in 3 dimensions that is superintegrable,
i.e., suppose $V$ satisfies the equations (\ref{veqn1a}) whose
integrability conditions
hold identically, and there are 5 functionally
independent constants of the motion. Then the space of second order symmetries
for the Hamiltonian ${ H}=(p^2_x+p^2_y+p^2_z)/\lambda(x,y,z)+V(x,y,z)$
(excluding multiplication by a constant) is of
dimension $D= 6$.
\end{theorem}
Thus, at any regular point $(x_0,y_0,z_0)$, and given constants
$\alpha^{kj}=\alpha^{jk}$, there is exactly one symmetry ${ S}$ (up to an additive constant) such
that $a^{kj}(x_0,y_0,z_0)=\alpha^{kj}$. Given a set of $5$ functionally
independent 2nd order symmetries ${\cal L}=\{{ S}_\ell:\ell=1,\cdots, 5\}$ associated with the
potential, there is always a $6$th second order symmetry ${S}_6$ that is
functionally dependent on $\cal L$, but linearly independent.
Since the solution space of the symmetry equations is of
dimension $D=6$, it follows that the integrability conditions for these
equations must be satisfied identically in the $a^{ij}$.
As part of the analysis in reference \cite{KKM20051} we used the integrability
conditions for these equations and for the
potential to derive the following:
\begin{enumerate} \item
An expression for each of the first partial derivatives $\partial_\ell A^{ij}$, $\partial_\ell B^{ij}$, $\partial_\ell C^{ij}$,
for the $10$ independent functions
as homogeneous polynomials
of order at most two in the $A^{i'j'}$, $B^{i'j'}$, $C^{i'j'}$. There are
$30=3\times 10$ such expressions in all.
(In the case $G\equiv 0$ the full set of conditions can be written in
the convenient form (\ref{Zde}), (\ref{Yde}).)
\item Exactly 5 quadratic identities for the
$10$ independent functions, see (31) in \cite{KKM20051}. In Euclidean space
these identities take the form $I^{(a)} - I^{(e)}$ in (\ref{ideal})
of the present paper.
\end{enumerate}
In references \cite{KKM20051} we studied the structure of the spaces of third, fourth and sixth order symmetries (or constants of the motion) of $H$. Here the {\bf order} refers to the highest order terms in the momenta. We established the following results.
\begin{theorem} Let $V$ be a superintegrable nondegenerate
potential on a conformally flat space. Then
the space of third order constants of the motion is 4-dimensional
and is spanned by Poisson brackets $R_{jk}=\{S_j,S_k\}$ of the second order constants of
the motion. The
dimension of the space of fourth order symmetries is $21$ and is spanned by second order polynomials in the 6 basis symmetries $S_h$. (In particular, the Poisson brackets $ \{R_{jk},S_\ell\}$ can be expressed as second order polynomials in the basis symmetries.)
The dimension of the space of
sixth order symmetries is $56$ and is spanned by third order polynomials in the 6 basis symmetries $S_h$. (In particular the products $R_{jk}R_{\ell h}$ can be expressed by third order polynomials in the 6 basis symmetries.)
\end{theorem}
There is a similar result for fifth order constants of the motion, but it follows directly from the Jacobi identity for the Poisson bracket. This establishes the quadratic algebra structure of the space of constants of the motion: it is closed under the Poisson bracket action.
{}From the general theory of variable separation for Hamilton-Jacobi
equations \cite{ERNIE, MIL88} and the structure theory for Poisson brackets of second order constants of the motion, we established the following result \cite{KKM20052}.
\begin{theorem}\label{3Dmultiseparable} A superintegrable system with nondegenerate
potential in a 3D conformally flat space is
multiseparable. That is, the Hamilton - Jacobi equation for the system can be solved by additive separation of variables in more than one orthogonal coordinate system.
\end{theorem}
The corresponding Schr\"odinger eigenvalue equation for the quantum systems can be solved by multiplicative separation of variables in the same coordinate systems.
Finally, in \cite{KKM20052} we studied the St\"ackel transform for 3D systems,
an invertible transform that maps a nondegenerate superintegrable system on one conformally flat manifold to a nondegenerate superintegrable system on another manifold. Our principal result was
\begin{theorem}
Every superintegrable system with nondegenerate potential on a 3D conformally flat space is equivalent under the St\"ackel transform to a superintegrable system on either 3D flat space or the 3-sphere.
\end{theorem}
\section{Generic separable coordinates for Euclidean spaces}
Now we turn to the classification of second order nondegenerate superintegrable systems in 3D complex Euclidean space. A subclass of these systems can be obtained rather easily from separation of variables theory. To make this clear we recall some facts about generic
elliptical coordinates in complex Euclidean $n$ space and their
relationship to superintegrable systems with nondegenerate potentials
(see \cite{KKWMPOG} for more details).
Consider a second order superintegrable system of the form
$H=\sum_{k=1}^n p_k^2+V({\bf x})$
in Euclidean $n$ space, expressed in Cartesian coordinates $x_k$. In
analogy with the 3D theory, the potential is nondegenerate if it satisfies a system of equations of the form
\begin{equation}\label{nondegenerate} V_{jj}-V_{11} = \sum_{\ell=1}^nA^{jj,\ell}({\bf x})V_\ell,\quad j=2,\cdots ,n,\end{equation}
$$ V_{kj}= \sum_{\ell=1}^nA^{kj,\ell}({\bf x})V_\ell,\quad 1\le k<j\le n,$$
where all of the integrability conditions for this system of partial
differential equations are identically satisfied, \cite{KKM20041,KKM20051}.
There is an important subclass of such nondegenerate superintegrable systems that can be constructed for all $n\ge 2$, based on their relationship to variable separation in generic Jacobi elliptic coordinates.
The
prototype superintegrable system which is nondegenerate in $n$ dimensional flat
space has the Hamiltonian
\begin{equation} \label{prototype}
H=\sum ^n_{i=1}(p^2_i+ \alpha x^2_i + \frac{\beta _i}{ x^2_i} )+\delta.
\end{equation}
This system is superintegrable with nondegenerate potential and a basis of
$n(n+1)/2$ second order symmetry operators given by
$$P_i=p^2_i+\alpha x^2_i+ \frac{\beta _i}{ x^2_i},\quad
J_{ij}=(x_ip_j-x_jp_i)^2+\beta _i \frac{x^2_j}{ x^2_i} + \beta _j
\frac{x^2_i}{ x^2_j},\quad i\neq j.
$$
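The stated symmetries of the prototype Hamiltonian can be checked symbolically. The sympy sketch below (our notation) verifies $\{H,P_1\}=0$ and $\{H,J_{12}\}=0$ for $n=3$; the additive constant $\delta$ is dropped since it commutes with everything.

```python
import sympy as sp

xs = sp.symbols('x1 x2 x3')
ps = sp.symbols('p1 p2 p3')
alpha, b1, b2, b3 = sp.symbols('alpha beta1 beta2 beta3')

def pb(f, g):
    # canonical Poisson bracket in the coordinates xs with momenta ps
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(xs, ps))

# prototype Hamiltonian H = sum_i (p_i^2 + alpha x_i^2 + beta_i / x_i^2)
H = sum(p**2 + alpha*q**2 + b/q**2
        for q, p, b in zip(xs, ps, (b1, b2, b3)))

P1 = ps[0]**2 + alpha*xs[0]**2 + b1/xs[0]**2
J12 = (xs[0]*ps[1] - xs[1]*ps[0])**2 \
      + b1*xs[1]**2/xs[0]**2 + b2*xs[0]**2/xs[1]**2

c1 = sp.simplify(pb(H, P1))
c2 = sp.simplify(pb(H, J12))
```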
Although there appear to be ``too many'' symmetries, all are functionally dependent on a subset of $2n-1$ functionally independent symmetries. A crucial
observation is that the corresponding Hamilton-Jacobi equation $H=E$ admits additive
separation in $n$ generic elliptical coordinates.
$$x^2_i=c^2 {\Pi ^n_{j=1}(u_j-e_i)}/{ \Pi _{k\neq i}(e_k-e_i)}
$$
simultaneously {\it for all} values of the parameters with $e_i\neq e_j$ if $i\neq j$ and $i,j=1,\cdots,n$. (Similarly the quantum problem $H\Psi = E\Psi$ is superintegrable and admits multiplicative separation.) Thus the equation is multiseparable
and separates in a continuum of elliptic coordinate systems (and in
many others besides). The $n$ involutive symmetries characterizing a fixed elliptic separable system are polynomial functions of the $e_i$, and requiring separation for all $e_i$ simultaneously sweeps out the full $n(n+1)/2$ space of symmetries and uniquely determines the nondegenerate potential. The infinitesimal distance in Jacobi
elliptical coordinates $u_j$ has the form
\begin{equation}\label{ellipticmetric}ds^2=-\frac{c^2}{ 4} \sum ^n_{i=1}
\frac{\Pi _{j\neq i}(u_i-u_j)}{ \Pi ^n_{k=1}(u_i-e_k)} du^2_i
=-{c^2\over 4} \sum ^n_{i=1} \frac{\Pi _{j\neq i}(u_i-u_j)}{ P(u_i)}
du^2_i,
\end{equation}
where $P(\lambda )=\Pi ^n_{k=1}(\lambda -e_k)$. However, it is well
known that (\ref{ellipticmetric}) is a flat space metric for any
polynomial $P(\lambda)$ of order $\le n$ and that each choice of such
a $P(\lambda)$ defines an elliptic type multiplicative separable
solution of the Laplace - Beltrami eigenvalue problem (with constant
potential) in complex Euclidean $n$-space, \cite{ERNIE}. The distinct cases are
labeled by the degree of the polynomial and the multiplicities of its
distinct roots. If for each distinct case we determine the most
general potential that admits separation for all $e_i$ compatible with
the multiplicity structure of the roots, we obtain a unique
superintegrable system with nondegenerate potential and $n(n+1)/2$
second order symmetries, \cite{KKWMPOG, KKM20052}. These are the generic superintegrable systems. (Thus, for $n=3$ there are 7 distinct cases for $-\frac14\ P(\lambda)$:
$$ (\lambda-e_1)(\lambda-e_2)(\lambda-e_3),\ (\lambda-e_1)(\lambda-e_2)^2,\ (\lambda-e_1)^3,\ $$
$$ (\lambda-e_1)(\lambda-e_2),\ (\lambda-e_1)^2,\ (\lambda-e_1),\ 1,$$
where $e_i\ne e_j$ for $i\ne j$. The first case corresponds to Jacobi elliptic coordinates.)
The number of distinct generic superintegrable systems for each integer $n\ge 2$ is
$\sum_{j=0}^np(j)
$,
where $p(j)$ is the number of integer partitions of $j$.
All of the generic separable systems, their potentials and their
defining symmetries can be obtained from the basic Jacobi elliptic
system in $n$ dimensions by a complicated but well defined set of
limit processes \cite{ KKM20052, KKWMPOG,Bocher}. In addition to these generic superintegrable systems there is an undetermined number of nongeneric systems. For $n=2$ all the systems have been found, and now we give the results for $n=3$.
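The count $\sum_{j=0}^n p(j)$ is easy to tabulate. A short script (a sketch, our code) reproduces the $7$ generic systems for $n=3$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, smallest=1):
    """Number of integer partitions of n into parts >= smallest."""
    if n == 0:
        return 1
    return sum(p(n - part, part) for part in range(smallest, n + 1))

def generic_systems(n):
    # number of distinct generic superintegrable systems in dimension n
    return sum(p(j) for j in range(n + 1))

print(generic_systems(3))  # -> 7 (p(0)+p(1)+p(2)+p(3) = 1+1+2+3)
```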
We review some of the details from reference \cite{KKM20052} to show how each of
the generic separable systems in three dimensions uniquely determines a nondegenerate
superintegrable system that contains it.
We begin by
summarizing the full list of orthogonal separable systems in complex Euclidean
space and the associated symmetries. (All of these systems have been classified, \cite{ERNIE}, and all can be obtained from the ultimate generic
Jacobi elliptic coordinates by limiting processes \cite{Bocher, KMR}.) Here, a ``natural'' basis
for first order symmetries (Killing vectors) is given by
$ p_1\equiv p_x$, $ p_2\equiv p_y$,$ p_3\equiv p_z$, $
J_1=yp_z-zp_y$,$ J_2=zp_x-xp_z$, $J_3=xp_y-yp_x$
in the classical case and
$ p_1=\partial_x$, $ p_2=\partial_y$, $ p_3=\partial_z$, $
J_1=y\partial_z-z\partial_y$, $ J_2=z\partial_x-x\partial_z$, $ J_3=x\partial_y-y\partial_x$
in the quantum case. (In the operator characterizations for the
quantum case, the classical product of two constants of the motion is replaced
by the symmetrized product of the corresponding operator symmetries.)
The free Hamiltonian is $H_0=p_1^2+p_2^2+p_3^2$. In each case below
we list the coordinates. The constants of the motion that characterize
these coordinates can be found in \cite{KKM20052}. We use the bracket
notation of Bocher \cite{Bocher} to characterize each separable system.
$$ [2111]\quad
x^2 =c^2 {(u-e_1)(v-e_1)(w-e_1)\over (e_1-e_2)(e_1-e_3)},\quad
y^2 =c^2 {(u-e_2)(v-e_2)(w-e_2)\over (e_2-e_1)(e_2-e_3)}$$
$$z^2 =c^2 {(u-e_3)(v-e_3)(w-e_3)\over (e_3-e_1)(e_3-e_2)}$$
$$ [221]\quad
x^2+y^2=-c^2\left[\frac{(u-e_1)(v-e_1)(w-e_1)}{(e_1-e_2)^2}\right]
$$
$$-\frac{c^2}{e_1-e_2}\left[
(u-e_1)(v-e_1)+(u-e_1)(w-e_1)+(v-e_1)(w-e_1)\right],
$$
$$ (x-iy)^2=c^2\frac{(u-e_1)(v-e_1)(w-e_1)}{e_1-e_2},\quad z^2=c^2\frac{(u-e_2)(v-e_2)(w-e_2)}{(e_2-e_1)^2}.$$
$$ [23] \quad
x-iy={1\over 2}c( {u^2+v^2+w^2\over uvw}- {1\over 2}
{u^2v^2+u^2w^2+v^2w^2\over u^3v^3w^3}),$$
$$z={1\over 2}c({uv\over w} + {uw\over v} + {vw\over u}),\quad
x+iy=cuvw.
$$
$$ [311] \quad
x={c\over 4}(u^2+v^2+w^2+{1\over u^2}+{1\over v^2}+{1\over w^2})+{3\over 2}c,$$ $$y=-{c\over 4} {(u^2-1)(v^2-1)(w^2-1)\over uvw},\quad
z=i{c\over 4} {(u^2+1)(v^2+1)(w^2+1)\over uvw}.$$
$$ [32] \quad
x+iy=uvw,\quad x-iy=-({uv\over w}+{uw\over v}+{vw\over u}),\quad
z={1\over 2}(u^2+v^2+w^2).$$
$$ [41] \quad
x+iy=u^2v^2+u^2w^2+v^2w^2-{1\over 2}(u^4+v^4+w^4),\
x-iy=c^2(u^2+v^2+w^2),\
z=2icuvw.$$
$$[5] \quad
x+iy=c(u+v+w),\quad
x-iy={c\over 4}(u-v-w)(u+v-w)(u+w-v),$$
$$z=-{c\over 4}(u^2+v^2+w^2-2(uv+uw+vw)).$$
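As a consistency check on the $[2111]$ formulas above, the sympy sketch below verifies symbolically that the squared radius collapses to a linear function of the Jacobi elliptic coordinates (this linear identity is derived by the code, not quoted from the text):

```python
import sympy as sp

u, v, w, e1, e2, e3, c = sp.symbols('u v w e1 e2 e3 c')

# the [2111] (Jacobi elliptic) coordinate formulas
x2 = c**2*(u - e1)*(v - e1)*(w - e1)/((e1 - e2)*(e1 - e3))
y2 = c**2*(u - e2)*(v - e2)*(w - e2)/((e2 - e1)*(e2 - e3))
z2 = c**2*(u - e3)*(v - e3)*(w - e3)/((e3 - e1)*(e3 - e2))

# x^2 + y^2 + z^2 = c^2 (u + v + w - e1 - e2 - e3)
residual = sp.simplify(x2 + y2 + z2 - c**2*(u + v + w - e1 - e2 - e3))
print(residual)  # -> 0
```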
\noindent We summarize the remaining degenerate separable coordinates:
{\bf Cylindrical type coordinates.} All of these have one symmetry in common:
$L_1=p^2_3.$ The 7 systems are, polar, Cartesian, light cone, elliptic,
parabolic, hyperbolic and semihyperbolic.
{\bf Complex sphere coordinates.}
These all have the symmetry
$L_1=J^2_1+J^2_2+J^2_3$ in common. The 5 systems are spherical, horospherical,
elliptical, hyperbolic, and semi-circular parabolic.
{\bf Rotational types of coordinates.} There are 3 of these systems,
each of which is characterized by the fact that the momentum terms in one defining symmetry
form a perfect square whereas the other two are not squares.
In addition to these orthogonal coordinates, there is a class of
nonorthogonal heat-type separable coordinates that are related to
the embedding of the heat equation in two dimensions into three
dimensional complex Euclidean space \cite{ERNIE}. These coordinates are not present in real Euclidean space, only in real Minkowski spaces. The coordinates have no bearing on our further analysis, as they do not occur in nondegenerate systems in three dimensions: they are characterized by an element of the Lie algebra $p_1+ip_2$ (not squared, i.e., a Killing vector), so they cannot occur for a nondegenerate system.
Note that the first $7$ separable systems are ``generic,'' i.e.,
they occur in one-, two- or three-parameter families, whereas the
remaining systems are special limiting cases of the generic ones. Each of the $7$ generic Euclidean separable systems depends on a
scaling parameter $c$ and up to three parameters $e_1,e_2,e_3$. For
each such set of coordinates there is exactly one nondegenerate
superintegrable system that admits separation in these coordinates
{\it simultaneously for all values of the parameters $c,e_j$}.
Consider the system $[23]$, for example. If a nondegenerate
superintegrable system separates in these coordinates for all values of
the parameter $c$, then the space of second order symmetries must
contain the $5$ symmetries
$$ { H}=p_x^2+p_y^2+p_z^2+V,\quad { S}_1=J_1^2+J_2^2+J_3^2+f_1,\quad
{ S}_2=J_3(J_1+iJ_2)+f_2,$$
$${ S}_3=(p_x+ip_y)^2+f_3,\quad {S}_4=p_z(p_x+ip_y)+f_4.$$
It is straightforward to check that the $12\times 5$ matrix of coefficients of the
second derivative terms in the $12$ Bertrand-Darboux equations
associated with symmetries ${S}_1,\cdots, { S}_4$ has rank 5
in general. Thus, there is at most one nondegenerate superintegrable
system admitting these symmetries. Solving the Bertrand-Darboux
equations for the potential we find the unique solution
$$
V({\bf x}):=\alpha(x^2+y^2+z^2)+\frac{\beta}{(x+iy)^2}+\frac{\gamma
z}{(x+iy)^3}+\frac{\delta (x^2+y^2-3z^2)}{(x+iy)^4}.$$
Finally, we can use the symmetry conditions for this potential to
obtain the full $6$-dimensional space of second order
symmetries. This is superintegrable system III in the theorem below. The other six cases yield corresponding results.
\begin{theorem} Each of the $7$ ``generic'' Euclidean separable
systems determines a unique nondegenerate superintegrable system
that permits separation simultaneously for all values of the
scaling parameter $c$ and any other defining parameters $e_j$. For
each of these systems there is a basis of $5$ (strongly) functionally
independent and $6$ linearly independent second order symmetries.
The corresponding nondegenerate potentials and basis of symmetries are:
\begin{equation} {\bf \rm I\ } [2111] \qquad V={\alpha _1\over x^2} + {\alpha _2\over y^2} + {\alpha _3\over z^2}
+\delta (x^2+y^2+z^2),
\end{equation}
$$
{\cal P}_i=p ^2_{x_i}+\delta x^2_i+ {\alpha _i\over x^2_i},\qquad
{\cal J}_{ij}=(x_ip_{x_j}-x_jp_{x_i})^2+\alpha _i {x^2_j\over x^2_i} + \alpha _j
{x^2_i\over x^2_j},\quad i\geq j.
$$
\begin{equation} {\bf\rm II\ } [221] \qquad
V=\alpha (x^2+y^2+z^2)+ \beta {x-iy\over (x+iy)^3} + {\gamma \over (x+iy)^2}
+ {\delta \over z^2},
\end{equation}
$${\cal S}_1=J\cdot J+f_1,\quad {\cal S}_2=p^2_z+f_2,\quad {\cal S}_3=J^2_3+f_3,
$$
$$
{\cal S}_4=(p_x+ip_y)^2+f_4,\quad {\cal S}_5=(J_2-iJ_1)^2+f_5.
$$
\begin{equation} {\bf\rm III\ }[23] \qquad
V=\alpha (x^2+y^2+z^2)+ {\beta \over (x+iy)^2} + {\gamma z\over (x+iy)^3}+
{\delta (x^2+y^2-3z^2)\over (x+iy)^4},
\end{equation}
$$
{\cal S}_1=J\cdot J+f_1,\quad {\cal S}_2=(J_2-iJ_1)^2+f_2,\quad {\cal S}_3=J_3(J_2-iJ_1)+f_3,
$$
$$
{\cal S}_4=(p_x+ip_y)^2+f_4,\quad {\cal S}_5=p_z(p_x+ip_y)+f_5.
$$
\begin{equation} {\bf \rm IV\ } [311] \qquad
V=\alpha (4x^2+y^2+z^2)+ \beta x +{\gamma \over y^2} + {\delta \over z^2},
\end{equation}
$${\cal S}_1=p^2_x+f_1,\quad {\cal S}_2=p^2_y+f_2,\quad {\cal S}_3=p_zJ_2+f_3,
$$
$$ {\cal S}_4=p_yJ_3+f_4,\quad {\cal S}_5=J^2_1+f_5.
$$
\begin{equation} {\bf\rm V\ } [32] \qquad
V=\alpha (4x^2+y^2+z^2)+\beta x+{\gamma \over (y+iz)^2} +
{\delta (y-iz)\over (y+iz)^3},
\end{equation}
$$ {\cal S}_1=p^2_x+f_1,\quad {\cal S}_2=J^2_1+f_2,\quad {\cal S}_3=(p_z-ip_y)(J_2+iJ_3)+f_3,
$$
$$ {\cal S}_4=p_zJ_2-p_yJ_3+f_4,\quad {\cal S}_5=(p_z-ip_y)^2+f_5.
$$
\begin{equation} {\bf\rm VI\ } [41] \
V=\alpha \left(z^2-2(x-iy)^3+4(x^2+y^2)\right)+\beta \left(2(x+iy)-3(x-iy)^2\right)+\gamma (x-iy)+
{\delta \over z^2},
\end{equation}
$${\cal S}_1=(p_x-ip_y)^2+f_1,\quad {\cal S}_2=p^2_z+f_2,\quad {\cal S}_3=p_z(J_2+iJ_1)+f_3,
$$
$$ {\cal S}_4=J_3(p_x-ip_y)-{i\over 4}(p_x+ip_y)^2+f_4,\quad {\cal S}_5=(J_2+iJ_1)^2+4ip_zJ_1+f_5.
$$
\begin{equation}{\bf\rm VII\ } [5] \qquad
V=\alpha (x+iy)+\beta (\frac{3}{ 4}(x+iy)^2+\frac{1}{ 4}z)+\gamma ((x+iy)^3+
\frac{1}{ 16}(x-iy)+\frac{3}{ 4}(x+iy)z)
\end{equation}
$$
+\delta (\frac{5}{ 16}(x+iy)^4+\frac{1}{ 16}(x^2+y^2+z^2)+
\frac{3}{ 8}(x+iy)^2z),
$$
$${\cal S}_1=(J_1+iJ_2)^2+2iJ_1(p_x+ip_y)-J_2(p_x+ip_y)+\frac{1}{ 4}(p^2_y-p^2_z)
-iJ_3p_z+f_1,$$
$${\cal S}_2=J_2p_z-J_3p_y+i(J_3p_x-J_1p_z)-\frac{i}{ 2}p_yp_z+f_2,\quad
{\cal S}_3=(p_x+ip_y)^2+f_3,$$
$${\cal S}_4=J_3p_z+iJ_1p_y+iJ_2p_x+2J_1p_x+\frac{i}{ 4}p^2_z+f_4,\quad
{\cal S}_5=p_z(p_x+ip_y)+f_5.$$
\end{theorem}
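As an independent sanity check (ours, done with sympy rather than the authors' Maple computations), one can verify directly that the symmetries listed for system I Poisson-commute with its Hamiltonian. Note the weights $\alpha_i$, $\alpha_j$ in ${\cal J}_{ij}$: these are exactly what the bracket computation requires.

```python
# Sketch: verify with sympy that two of the listed second-order symmetries
# of system I Poisson-commute with H.  This is our own check, not code
# from the paper.
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
a1, a2, a3, d = sp.symbols('alpha1 alpha2 alpha3 delta')
q, p = [x, y, z], [px, py, pz]

def pb(f, g):
    """Canonical Poisson bracket {f, g}."""
    return sum(sp.diff(f, q[i])*sp.diff(g, p[i])
               - sp.diff(f, p[i])*sp.diff(g, q[i]) for i in range(3))

V = a1/x**2 + a2/y**2 + a3/z**2 + d*(x**2 + y**2 + z**2)
H = px**2 + py**2 + pz**2 + V

P1 = px**2 + d*x**2 + a1/x**2                          # {cal P}_1
J21 = (x*py - y*px)**2 + a1*y**2/x**2 + a2*x**2/y**2   # {cal J}_{21}

assert sp.simplify(pb(H, P1)) == 0
assert sp.simplify(pb(H, J21)) == 0
```

The same bracket computation shows why the weights in ${\cal J}_{ij}$ must be $\alpha_i$, $\alpha_j$: with any other coefficients the momentum terms fail to cancel.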
In \cite{KKM20052} we proved the far-from-obvious fact that {\it no other} nondegenerate superintegrable system
separates in {\it any} special case of ellipsoidal coordinates, i.e., for fixed values of the parameters.
\begin{theorem}\label{eucgen} A 3D Euclidean nondegenerate superintegrable system admits
separation in a special case of the generic coordinates [2111],
[221], [23], [311], [32], [41] or [5], respectively, if and only if
it is equivalent via a Euclidean transformation to system [I], [II], [III], [IV], [V], [VI] or [VII],
respectively.
\end{theorem}
This does not settle the problem of classifying all 3D nondegenerate
superintegrable systems in complex Euclidean space, for we have not
excluded the possibility of such systems that separate only in
degenerate separable coordinates. In fact we have already studied two
such systems in \cite{KKM20051}:
$$[O]\quad V(x,y,z)=\alpha x+\beta y+\gamma z+\delta(x^2+y^2+z^2).
$$
\begin{equation} [OO]\quad V(x,y,z)=\frac{\alpha}{2}(x^2+y^2+\frac14 z^2)+\beta x+\gamma y +\frac{\delta}{z^2}.
\end{equation}
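A basis of symmetries for $[O]$ is not listed here, but it is easy to see where the Cartesian separation comes from. As a hedged illustration (the symmetry $S$ below is our own natural choice, not quoted from the paper), each coordinate direction carries an obvious second-order symmetry:

```python
# Sketch (our own illustrative check): for V_O the operator
# S = p_x^2 + alpha*x + delta*x^2 Poisson-commutes with H,
# reflecting separation in Cartesian coordinates.
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
al, be, ga, de = sp.symbols('alpha beta gamma delta')
q, p = [x, y, z], [px, py, pz]

pb = lambda f, g: sum(sp.diff(f, q[i])*sp.diff(g, p[i])
                      - sp.diff(f, p[i])*sp.diff(g, q[i]) for i in range(3))

H = px**2 + py**2 + pz**2 + al*x + be*y + ga*z + de*(x**2 + y**2 + z**2)
S = px**2 + al*x + de*x**2   # hypothetical Cartesian symmetry for [O]

assert sp.simplify(pb(H, S)) == 0
```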
\section{Polynomial ideals}
In this section we introduce a very different way of studying and
classifying superintegrable systems, through polynomial ideals. Here we
confine our analysis to 3D Euclidean superintegrable systems with
nondegenerate potentials. Thus we can set $G\equiv 0$ in the 18
fundamental equations for the derivatives $\partial_ia^{jk}$. Due to the
linear conditions (\ref{int11}) all of the functions
$A^{ij},B^{ij},C^{ij}$ can be expressed in terms of the 10 basic
terms (\ref{10terms}).
Since the fundamental equations admit 6 linearly independent
solutions $a^{hk}$ the integrability conditions
$\partial_ia^{hk}_\ell=\partial_\ell a^{hk}_i$ for these equations must
be satisfied identically. As follows from \cite{KKM20051}, these conditions
plus the integrability conditions (\ref{int31}) for the potential
allow us to compute the 30 derivatives $\partial_\ell D^{ij}$ of the
10 basic terms (equations (\ref{diffconds}) in what follows). Each is a quadratic polynomial in the 10 terms. In
addition there are 5 quadratic conditions remaining, equation (31) in \cite{KKM20051} with $G\equiv 0$.
These 5 polynomials determine an ideal $\Sigma'$. Already we see that
the values of the 10 terms at a fixed regular point must uniquely
determine a superintegrable system. However, choosing those values
such that the 5 conditions $I^{(a)}$-$I^{(e)}$, listed below, are satisfied will
not guarantee the existence of a solution, because the conditions
may be violated for values of $(x,y,z)$ away from the chosen regular
point. To test this we compute the derivatives $\partial_i\Sigma'$
and obtain a single new condition, the square of the quadratic expression $I^{(f)}$, listed below.
The polynomial $I^{(f)}$ extends the ideal. Let $\Sigma\supset \Sigma'$ be the ideal generated by the 6
quadratic polynomials, $I^{(a)},\cdots, I^{(f)}$:
\begin{eqnarray}\label{ideal}
I^{(a)} &=& -A^{22}B^{23} + B^{23}A^{33} + B^{12}A^{13}
+ A^{23}B^{22} - A^{12}A^{23} - A^{23}B^{33} \\
I^{(b)} &=& (A^{33})^2 + B^{12}A^{33} - A^{33}A^{22}
- A^{12}B^{33} - A^{13}C^{33} + A^{12}B^{22}\nonumber \\
& & {}\qquad
- B^{12}A^{22} + A^{13}B^{23} - (A^{12})^2 \nonumber\\
I^{(c)} &=& B^{23}C^{33} + B^{12}A^{33} + (B^{12})^2
+ B^{22}B^{33} - (B^{33})^2 - A^{12}B^{33} - (B^{23})^2\nonumber \\
I^{(d)} &=& -B^{12}A^{23} - A^{33}A^{23} + A^{13}B^{33}
+ A^{12}B^{23}\nonumber \\
I^{(e)} &=& -B^{23}A^{23} + C^{33}A^{23} + A^{22}B^{33}
- A^{33}B^{33} + B^{12}A^{12} \nonumber\\
I^{(f)} &=& A^{13}C^{33} + 2A^{13}B^{23} + B^{22}B^{33}
- (B^{33})^2 + A^{33}A^{22} - (A^{33})^2 \nonumber\\
& & {}\quad + 2A^{12}B^{22}
+ (A^{12})^2 - 2B^{12}A^{22} + (B^{12})^2
+ B^{23}C^{33} - (B^{23})^2 - 3(A^{23})^2.\nonumber
\end{eqnarray}
It can be verified with the Gr\"obner basis package of Maple that $\partial_i\Sigma
\subseteq \Sigma$, so that the system is closed under
differentiation!
This leads us to a fundamental result.
\begin{theorem} Choose the 10-tuple (\ref{10terms}) at a regular
point, such that the 6 polynomial identities (\ref{ideal}) are satisfied. Then there exists one and only one Euclidean
superintegrable system with nondegenerate potential that takes on these values at a point.
\end{theorem}
We see that all possible nondegenerate 3D Euclidean superintegrable
systems are encoded into the 6 quadratic polynomial identities. These
identities define an algebraic variety that generically has dimension 6, though
there are singular points, such as the origin $(0,\cdots,0)$, where the
dimension of the tangent space is greater. This result gives us the
means to classify all superintegrable systems.
An issue is that many
different 10-tuples correspond to the same superintegrable system. How
do we sort this out? The key is that the Euclidean group $E(3,{\bb C})$ acts as a
transformation group on the variety and gives rise to a foliation. The
action of the translation subgroup is determined by the derivatives
$\partial_kD^{ij}$ that we have already determined \ (and will list below). The action of the
rotation subgroup on the $D^{ij}$ can be determined from the behavior
of the canonical equations (\ref{veqn1a}) under rotations. The local
action on a 10-tuple is then given by 6 Lie derivatives that are a
basis for the Euclidean Lie algebra $e(3,{\bb C})$. For ``most'' 10-tuples
${\bf D}_0$ on the 6 dimensional variety the action of the Euclidean
group is locally transitive with isotropy subgroup only the identity element. Thus the group action on such points
sweeps out a solution surface homeomorphic to the 6 parameter $E(3,{\bb C})$ itself. This
occurs for the generic Jacobi elliptic system with potential
$$
V=\alpha(x^2+y^2+z^2)+\frac{\beta}{x^2}+\frac{\gamma}{y^2}+\frac{\delta}{z^2}.
$$
At the other extreme the isotropy subgroup of the origin
$(0,\cdots,0)$ is $E(3,{\bb C})$ itself, i.e., the point is fixed under the group
action. This corresponds to the isotropic oscillator with potential
$$ V=\alpha(x^2+y^2+z^2)+\beta x+\gamma y+\delta z.
$$
More generally, the isotropy subgroup at ${\bf D}_0$ will be $H$ and
the Euclidean group action will
sweep out a solution surface homeomorphic to the homogeneous space
$E(3,{\bb C})/H$ and define a unique superintegrable system. For example, the isotropy subalgebra formed by the
translation and rotation generators $\{P_1,P_2,P_3,J_1+iJ_2\}$
determines a new superintegrable system $[A]$ with potential
$$ V=\alpha \left((x-iy)^3+6(x^2+y^2+z^2)\right) + \beta\left( (x-iy)^2+2(x+iy)\right)
+ \gamma (x-iy) + \delta z.
$$
Indeed, each class of St\"ackel equivalent Euclidean
superintegrable systems is associated with a unique isotropy
subalgebra of $e(3,{\bb C})$, although not all subalgebras occur. (Indeed,
there is no isotropy subalgebra conjugate to $\{P_1,P_2,P_3\}$.) One way to
find all superintegrable systems would be to determine a list of all
subalgebras of $e(3,{\bb C})$, defined up to conjugacy, and then for each
subalgebra to determine if it occurs as an isotropy
subalgebra. Then we would have to resolve the degeneracy problem in which
more than one superintegrable system may correspond to a single
isotropy subalgebra.
To begin our analysis of the ideal $\Sigma$ we first determine how the rotation subalgebra $so(3,{\bb C})$ acts on the 10 variables (\ref{10terms}) and their derivatives, and decompose the representation spaces into $so(3,{\bb C})$-irreducible pieces.
The $A^{ij}$, $B^{ij}$ and $C^{ij}$ are 10 variables that,
under the action of rotations, split into two irreducible blocks
of dimension 3 and 7.
\begin{eqnarray}
X_{+1} &=& A^{33} + 3B^{12} - 2A^{22} + i(3A^{12} + B^{33} + B^{22}) \\
X_0 &=& -\sqrt2(C^{33}+2A^{13}+B^{23}) \\
X_{-1} &=& - A^{33} - 3B^{12} + 2A^{22} + i(3A^{12} + B^{33} + B^{22})
\end{eqnarray}
\begin{eqnarray}
Y_{+3} &=& A^{22} + 2B^{12} + i(B^{22} - 2A^{12}) \\
Y_{+2} &=& \sqrt6(A^{13}-B^{23}+2iA^{23}) \\
Y_{+1} &=& \frac{\sqrt3}{\sqrt5} \Bigl(3A^{22} - 2B^{12} - 4A^{33}
+ i(B^{22} - 2A^{12} - 4 B^{33})\Bigr) \\
Y_0 &=& \frac2{\sqrt5}\Bigl(2C^{33}-A^{13}-3B^{23}\Bigr) \\
Y_{-1} &=& \frac{\sqrt3}{\sqrt5} \Bigl(2B^{12} + 4A^{33} - 3A^{22}
+ i(B^{22} - 2A^{12} -4B^{33})\Bigr) \\
Y_{-2} &=& \sqrt6(A^{13}-B^{23}-2iA^{23}) \\
Y_{-3} &=& - A^{22} - 2B^{12} + i(B^{22}-2A^{12})
\end{eqnarray}
Quadratics in the variables can also be decomposed into irreducible
blocks. There are 2 one-dimensional representations, 3 of dimension 5,
1 of dimension 7, 2 of dimension 9 and 1 of dimension 13.
\begin{eqnarray}
Z^{(1a)}_0 &=& X_0^2-2X_{-1}X_{+1} \\
Z^{(1b)}_0 &=& Y_0^2-2Y_{-1}Y_{+1}+2Y_{-2}Y_{+2}-2Y_{-3}Y_{+3} \\[5mm]
Z^{(5a)}_{\pm2} &=& X_{\pm1}^2 \\
Z^{(5a)}_{\pm1} &=& \sqrt2 X_0 X_{\pm1} \\
Z^{(5a)}_{0} &=& \frac{\sqrt2}{\sqrt3}(X_0^2+X_{-1}X_{+1}) \\[5mm]
Z^{(5b)}_{\pm2} &=& Y_{\pm1}^2 - \frac{\sqrt{10}}{\sqrt3} Y_0Y_{\pm2}
+ \frac{\sqrt5}{\sqrt3} Y_{\mp1}Y_{\pm3} \\
Z^{(5b)}_{\pm1} &=& \frac{1}{\sqrt3} Y_0Y_{\pm1}
- \frac{\sqrt5}{\sqrt2} Y_{\mp1}Y_{\pm2}
+ \frac{5}{\sqrt6} Y_{\mp2}Y_{\pm3} \\
Z^{(5b)}_{0} &=& \frac{\sqrt2}{\sqrt3} Y_0^2
- \frac{\sqrt3}{\sqrt2} Y_{-1}Y_{+1}
+ \frac{5}{\sqrt6} Y_{-3}Y_{+3} \\[5mm]
Z^{(5c)}_{\pm2} &=& X_{\mp1}Y_{\pm3}
+ \frac{1}{\sqrt{15}} X_{\pm1}Y_{\pm1}
- \frac{1}{\sqrt3} X_0Y_{\pm2} \\
Z^{(5c)}_{\pm1} &=& \frac{1}{\sqrt5} X_{\pm1}Y_0
- \frac{2\sqrt2}{\sqrt{15}} X_0Y_{\pm1}
+ \frac{\sqrt2}{\sqrt3} X_{\mp1}Y_{\pm2} \\
Z^{(5c)}_{0} &=& - \frac{\sqrt3}{\sqrt5} X_0Y_0
+ \frac{\sqrt2}{\sqrt5} X_{-1}Y_{+1}
+ \frac{\sqrt2}{\sqrt5} X_{+1}Y_{-1}
\end{eqnarray}
There is one 7-dimensional representation with highest weight vector
\begin{equation}
Z^{(7)}_{+3} = X_0Y_{+3} - \frac{1}{\sqrt3} X_{+1}Y_{+2}\,,
\end{equation}
two 9-dimensional representations with highest weight vectors
\begin{eqnarray}
Z^{(9a)}_{+4} &=& Y_{+2}^2 - \frac{2\sqrt3}{\sqrt5} Y_{+1}Y_{+3} \\
Z^{(9b)}_{+4} &=& X_{+1}Y_{+3}
\end{eqnarray}
and one 13-dimensional representation
\begin{equation}
Z^{(13)}_{+3} = Y_{+3}^2\,.
\end{equation}
A linear combination of representations of the same
dimension is another representation and if we define
\begin{eqnarray}
Z_{m} &=& 2Z^{(5a)}_{m} - 5Z^{(5b)}_m + 5Z^{(5c)}_m\,,\qquad
\mbox{for $m=-2,-1,0,+1,+2$.} \\
W_0 &=& 8Z^{(1a)}_0 - 5Z^{(1b)}_0\,,
\end{eqnarray}
the algebraic variety defining the nondegenerate superintegrable
systems is given by
\begin{equation}
\label{eqn:quadidents}
Z_m=W_0=0\qquad \mbox{for $m=-2,-1,0,+1,+2$.}
\end{equation}
If $J_x$, $J_y$ and $J_z$ are Lie derivatives corresponding to
rotation about the $x$, $y$ and $z$ axes, we define
\[
J_+ = iJ_x + J_y\,, \quad J_- = iJ_x - J_y
\quad\mbox{and}\quad J_3 = iJ_z\,.
\]
Then
\begin{eqnarray}
J_+f_m &=& \sqrt{(l-m)(l+m+1)}f_{m+1} \\
J_-f_m &=& \sqrt{(l+m)(l-m+1)}f_{m-1}\nonumber \\
J_3f_m &=& mf_m\nonumber
\end{eqnarray}
where $f_m$ is taken as one of $X_m$, $Y_m$, $Z_m$ or $W_0$.
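The relations above are the standard $so(3)$ ladder conventions. As a quick numerical illustration (ours, not part of the paper), one can realize them as matrices on a $(2l+1)$-dimensional multiplet and confirm the commutation relations, e.g. for $l=3$, matching the 7-dimensional $Y$ block:

```python
# Sketch: build matrix representatives of J_+, J_-, J_3 on a spin-l
# multiplet using the ladder formulas above, and check the so(3)
# commutation relations.
import numpy as np

def ladder_ops(l):
    dim = 2*l + 1
    ms = np.arange(-l, l + 1)                       # weights m = -l..l
    Jp = np.zeros((dim, dim))
    for i, m in enumerate(ms[:-1]):
        Jp[i + 1, i] = np.sqrt((l - m)*(l + m + 1))  # J_+ f_m -> f_{m+1}
    Jm = Jp.T                                        # J_- in this real basis
    J3 = np.diag(ms.astype(float))
    return Jp, Jm, J3

Jp, Jm, J3 = ladder_ops(3)                           # the 7-dim Y block
assert Jp.shape == (7, 7)
assert np.allclose(Jp @ Jm - Jm @ Jp, 2*J3)          # [J_+, J_-] = 2 J_3
assert np.allclose(J3 @ Jp - Jp @ J3, Jp)            # [J_3, J_+] = J_+
assert np.allclose(J3 @ Jm - Jm @ J3, -Jm)           # [J_3, J_-] = -J_-
```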
Derivatives of the $X_m$ and $Y_m$ are quadratics in these
variables. The derivatives of the $X_m$ are linear combinations
of the quadratics from the representations of dimensions 1 and 5.
In particular,
\begin{equation}\label{Xde}
\partial_iX_j \in \{2Z^{(5a)}_m+5Z^{(5b)}_m:m=0,\pm1,\pm2\}
\cup\{Z^{(1a)}_0\}\,.
\end{equation}
Hence the quadratic identities (\ref{eqn:quadidents}) can be
used to write these derivatives as a sum of terms each of degree
at least 1 in the $X_m$. This means that whenever all of the
$X_m$ vanish at a point, their derivatives also vanish and hence
the set $\{X_{-1},X_0,X_{+1}\}$ is a relative invariant.
The derivatives of the $Y_m$ are linear combinations
of the quadratics from the representations of dimensions 5
and 9.
\begin{equation}
\partial_iY_j \in \{2Z^{(5a)}_m+5Z^{(5b)}_m: -2\le m\le +2\}
\cup \{5Z^{(9a)}_m-24Z^{(9b)}_m: -4\le m\le +4\}\,.
\end{equation}
Hence they can be written as a sum of terms each of degree
at least 1 in the $Y_m$, so
$$\{Y_{-3},Y_{-2},Y_{-1},Y_{0},Y_{+1},Y_{+2},Y_{+3}\}$$ is a relative
invariant set.
Note that from the dimension of the spaces containing the derivatives
of the $X_m$ and $Y_m$, there must be at least 3 linear relations
among the derivatives of the $X_m$ and 7 among the derivatives
of the $Y_m$.
In a similar way we can find relative invariant sets
of quadratics carrying a representation of the Lie
algebra $so(3,{\bb C})$.
For example, the following are relative invariant sets.
\begin{eqnarray} \label{relinv}
R_1 &=& \{X_{-1},X_0,X_{+1}\}, \\
R_2 &=& \{Y_{-3},Y_{-2},Y_{-1},Y_{0},Y_{+1},Y_{+2},Y_{+3}\},\nonumber \\
R_3 &=& \{4Z^{(5a)}_m-15Z^{(5b)}_m:m=0,\pm1,\pm2\}\cup\{Z^{(1a)}_0\},\nonumber \\
R_4 &=& \{3Z^{(5a)}_m-5Z^{(5b)}_m:m=0,\pm1,\pm2\}\cup\{Z^{(1a)}_0\},\nonumber \\
R_5 &=& \{8Z^{(5a)}_m-5Z^{(5b)}_m:m=0,\pm1,\pm2\},\nonumber \\
R_6 &=& R_5 \cup \{5Z^{(9a)}_m+6Z^{(9b)}_m:m=0,\pm1,\pm2,\pm3,\pm4\}.\nonumber
\end{eqnarray}
Recall that the known superintegrable nondegenerate potentials are
\begin{eqnarray}\label{ndpotentials}
V_{I} &=& \alpha(x^2+y^2+z^2) + \frac\beta{x^2}
+ \frac\gamma{y^2} + \frac\delta{z^2}, \\
V_{II} &=& \alpha(x^2+y^2+z^2) + \frac{\beta(x-iy)}{(x+iy)^3}
+ \frac{\gamma}{(x+iy)^2} + \frac\delta{z^2},\nonumber \\
V_{III} &=& \alpha(x^2+y^2+z^2) + \frac{\beta}{(x+iy)^2}
+ \frac{\gamma z}{(x+iy)^3}
+ \frac{\delta(x^2+y^2-3z^2)}{(x+iy)^4},\nonumber \\
V_{IV} &=& \alpha(4x^2+y^2+z^2) + \beta x
+ \frac\gamma{y^2} + \frac{\delta}{z^2},\nonumber \\
V_{V} &=& \alpha(4z^2+x^2+y^2) + \beta z
+ \frac\gamma{(x+iy)^2} + \frac{\delta(x-iy)}{(x+iy)^3},\nonumber \\
V_{VI} &=& \alpha(4x^2+4y^2+z^2-2(x-iy)^3)
+ \beta(2x+2iy-3(x-iy)^2) + \gamma(x-iy) + \frac\delta{z^2},\nonumber \\
V_{VII} &=& \alpha(x+iy) + \beta(3(x+iy)^2+z)
+ \gamma(16(x+iy)^3+x-iy+12z(x+iy))\nonumber \\
& & {}\qquad + \delta(5(x+iy)^4+x^2+y^2+z^2+6(x+iy)^2z),\nonumber \\
V_{O} &=& \alpha(x^2+y^2+z^2)+\beta x + \gamma y + \delta z,\nonumber \\
V_{OO} &=& \alpha(4x^2+4y^2+z^2) + \beta x + \gamma y
+ \frac\delta{z^2},\nonumber \\
V_{A} &=& \alpha((x-iy)^3+6(x^2+y^2+z^2))
+ \beta((x-iy)^2 + 2x+2iy) + \gamma(x-iy) + \delta z.\nonumber
\end{eqnarray}
The correspondence between relative invariant sets and
potentials is given in the accompanying table; an entry $0$ indicates that the corresponding relative invariant set vanishes identically for that potential.
\medskip
\begin{tabular}{c|c|c|c|c|c|c|}
$V$ &$R_1$&$R_2$&$R_3$&$R_4$&$R_5$&$R_6$ \\
\hline
$I$ & & & & & & \\
$II$ & & & & & & \\
$III$ & & & $0$ & & & \\
$IV$ & & & & & & \\
$V$ & & & & $0$ & & \\
$VI$ & & & & & $0$ & \\
$VII$ & $0$ & & $0$ & $0$ & $0$ & \\
$O$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$OO$ & & & & & $0$ & $0$ \\
$A$ & $0$ & & $0$ & $0$ & $0$ & $0$
\end{tabular}
\medskip
The action of the Euclidean translation generators on the 10 basis
monomials can also be written in terms of the irreducible
representations of $so(3,{\bb C})$. (Indeed these equations are much
simpler than when written directly in terms of the
$A^{ij},B^{ij},C^{ij}$.) Using the notation
\begin{equation}
\partial_\pm = i\partial_y\pm\partial_x
\end{equation}
\begin{equation}\label{Zde}
Z^{(5X)}_m = 5Z^{(5b)}_m + 2Z^{(5a)}_m\,, \qquad
Z^{(9Y)}_m = 24Z^{(9b)}_m - 5Z^{(9a)}_m\,.
\end{equation} we obtain the fundamental differential relations:
\begin{eqnarray}\label{diffconds}
\partial_-X_{+1} &=& \frac{1}{30\sqrt6}Z^{(5X)}_0 - \frac19Z^{(1a)}_0,\quad
\partial_+X_{+1} = \frac1{30}Z^{(5X)}_{+2}, \\
\partial_zX_{+1} &=& -\frac1{60}Z^{(5X)}_{+1},\quad
\partial_-X_{0} = \frac1{30\sqrt2}Z^{(5X)}_{-1}, \nonumber\\
\partial_+X_{0} &=& \frac1{30\sqrt2}Z^{(5X)}_{+1},\quad
\partial_zX_{0} = -\frac1{30\sqrt3}Z^{(5X)}_0 - \frac1{9\sqrt2}Z^{(1a)}_0, \nonumber\\
\partial_-X_{-1} &=& \frac1{30}Z^{(5X)}_{-2},\quad
\partial_+X_{-1} = \frac1{30\sqrt6}Z^{(5X)}_0 -\frac19Z^{(1a)}_0,\nonumber \\
\partial_zX_{-1} &=& -\frac1{60}Z^{(5X)}_{-1}, \nonumber\\[5mm]
\partial_-Y_{+3} &=& \frac1{180\sqrt7}Z^{(9Y)}_{+2} + \frac1{35}Z^{(5X)}_{+2},\quad
\partial_+Y_{+3} = \frac1{90}Z^{(9Y)}_{+4},\label{Yde} \\
\partial_zY_{+3} &=& -\frac1{180\sqrt2}Z^{(9Y)}_{+3},\quad
\partial_-Y_{+2} = \frac1{60\sqrt{21}}Z^{(9Y)}_{+1}
+ \frac{\sqrt2}{35\sqrt3}Z^{(5X)}_{+1},\nonumber \\
\partial_+Y_{+2} &=& \frac1{60\sqrt3}Z^{(9Y)}_{+3},\quad
\partial_zY_{+2} = -\frac1{30\sqrt{42}}Z^{(9Y)}_{+2} + \frac1{35\sqrt6}Z^{(5X)}_{+2}, \nonumber \\
\partial_-Y_{+1} &=& \frac1{30\sqrt{42}}Z^{(9Y)}_0 + \frac{\sqrt2}{35\sqrt5}Z^{(5X)}_0 ,\quad
\partial_+Y_{+1} = \frac1{12\sqrt{105}}Z^{(9Y)}_{+2} + \frac1{35\sqrt{15}}Z^{(5X)}_{+2},\nonumber \\
\partial_zY_{+1} &=& -\frac1{12\sqrt{210}}Z^{(9Y)}_{+1} + \frac2{35\sqrt{15}}Z^{(5X)}_{+1},\quad
\partial_-Y_{0} = \frac1{18\sqrt{70}}Z^{(9Y)}_{-1} + \frac1{35\sqrt5}Z^{(5X)}_{-1},\nonumber \\
\partial_+Y_{0} &=& \frac1{18\sqrt{70}}Z^{(9Y)}_{+1} + \frac1{35\sqrt5}Z^{(5X)}_{+1},\quad
\partial_zY_{0} = -\frac1{45\sqrt{14}}Z^{(9Y)}_0 + \frac{\sqrt3}{35\sqrt{10}}Z^{(5X)}_0,\nonumber \\
\partial_-Y_{-1} &=& \frac1{12\sqrt{105}}Z^{(9Y)}_{-2} + \frac1{35\sqrt{15}}Z^{(5X)}_{-2},\quad
\partial_+Y_{-1} = \frac1{30\sqrt{42}}Z^{(9Y)}_0 + \frac{\sqrt2}{35\sqrt{5}}Z^{(5X)}_0 ,\nonumber\\
\partial_zY_{-1} &=& -\frac1{12\sqrt{210}}Z^{(9Y)}_{-1} + \frac2{35\sqrt{15}}Z^{(5X)}_{-1},\quad
\partial_-Y_{-2} = \frac1{60\sqrt3}Z^{(9Y)}_{-3},\nonumber \\
\partial_+Y_{-2} &=& \frac1{60\sqrt{21}}Z^{(9Y)}_{-1} + \frac{\sqrt2}{35\sqrt3}Z^{(5X)}_{-1},\quad
\partial_zY_{-2} = -\frac1{30\sqrt{42}}Z^{(9Y)}_{-2} + \frac1{35\sqrt6}Z^{(5X)}_{-2},\nonumber \\
\partial_-Y_{-3} &=& \frac1{90}Z^{(9Y)}_{-4},\quad
\partial_+Y_{-3} = \frac1{180\sqrt7}Z^{(9Y)}_{-2} +\frac1{35}Z^{(5X)}_{-2},\nonumber \\
\partial_zY_{-3} &=& -\frac1{180\sqrt2}Z^{(9Y)}_{-3}.\nonumber
\end{eqnarray}
In the following table we describe each of the known superintegrable
systems in terms of variables adapted to the rotation group action.
For this it is convenient to choose the 10 constrained variables in
the form $X_i,\ i=1,\ldots,3$ and $Y_j,\ j=1,\ldots,7$, with $d_X$ and
$d_Y$, respectively, denoting the number of
independent variables
on which they depend.
These are defined by
\begin{eqnarray}
X_1 &=& 2A^{13} + B^{23} + C^{33}=-\frac{X_0}{\sqrt{2}},\
X_2 = 2A^{22} - A^{33} - 3B^{12}=\frac{X_{-1}-X_{+1}}{2},\nonumber \\
X_3 &=& 3A^{12} + B^{33} + B^{22}=\frac{X_{-1}+X_{+1}}{2},\ Y_1 = \frac12(Y_{+3}-Y_{-3}),\nonumber
\end{eqnarray}
\begin{eqnarray}\label{Yvariables}
Y_2 &=& \frac1{2i}(Y_{+3}+Y_{-3}), \
Y_3 = \frac1{2i\sqrt6}(Y_{+2}-Y_{-2}), \
Y_4 = \frac1{2\sqrt6}(Y_{+2}+Y_{-2}),\nonumber \\
Y_5 &=& \frac{\sqrt5}{2\sqrt3}(Y_{+1}-Y_{-1}), \
Y_6 = \frac{\sqrt5}{2i\sqrt3}(Y_{+1}+Y_{-1}),\
Y_7 = \frac{\sqrt5}2Y_0.
\end{eqnarray}
\medskip
\begin{tabular}{|c|c|c|c|c|}
& $\sum_{j=1}^3X_j^2$
& $[X_1,X_2,X_3]$
& $d_X$
& $[Y_1,Y_2,Y_3,Y_4,Y_5,Y_6,Y_7]$
\\
&&& $d_Y$&\\
\hline &&&& \\
$V_I$
& $\frac9{x^2}+\frac9{y^2}+\frac9{z^2}$
& $\left[-\frac3x,-\frac3y,\frac3z\right]$
& $3$
& $\left[\frac3x,-\frac3y,0,0,-\frac3x,-\frac3y,-\frac6z\right]$
\\
&
&
& $3$
&
\\[5mm]
$V_{II}$
& $\frac9{z^2}$
& $\left[-\frac6{x+iy},-\frac{6i}{x+iy},\frac3z\right]$
& $2$
& $\left[-\frac{6(x-iy)}{(x+iy)^2},-\frac{6i(x-iy)}{(x+iy)^2},
0,0,-\frac6{x+iy},-\frac{6i}{x+iy},-\frac6z \right]$
\\
&
&
& $3$
&
\\[5mm]
$V_{III}$
& $0$
& $\left[-\frac9{x+iy},-\frac{9i}{x+iy},0\right]$
& $1$
& $\left[-\frac{6(x^2+y^2-2z^2)}{(x+iy)^3},-\frac{6i(x^2+y^2-2z^2)}{(x+iy)^3},
\frac{6iz}{(x+iy)^2},\right.$
\\
&
&
&$3$
& $\left.\frac{6z}{(x+iy)^2},
\frac{6}{x+iy},\frac{6i}{x+iy},0 \right]$
\\[5mm]
$V_{IV}$
& $\frac9{y^2}+\frac9{z^2}$
& $\left[0,-\frac3y,\frac3z\right]$
& $2$
& $\left[0,-\frac3y,0,0,0,-\frac3y,-\frac6z\right]$
\\
&
&
& $2$
&
\\[5mm]
$V_{V}$
& $0$
& $\left[-\frac6{x+iy},-\frac{6i}{x+iy},0\right]$
& $1$
& $\left[-\frac{6(x-iy)}{(x+iy)^2},-\frac{6i(x-iy)}{(x+iy)^2},0,0
-\frac6{x+iy},-\frac{6i}{x+iy},0\right]$
\\
&
&
& $2$
&
\\[5mm]
$V_{VI}$
& $\frac9{z^2}$
& $\left[0,0,\frac3z\right]$
& $1$
& $\left[6,-6i,0,0,0,0,-\frac6z\right]$
\\
&
&
& $1$
&
\\[5mm]
$V_{VII}$
& $0$
& $[0,0,0]$
& $0$
& $[-48(x+iy),-48i(x+iy),12i,12,0,0,0]$
\\
&
&
& $1$
&
\\[5mm]
$V_{O}$
& $0$
& $[0,0,0]$
& $0$
& $[0,0,0,0,0,0,0]$
\\
&
&
& $0$
&
\\[5mm]
$V_{OO}$
& $\frac9{z^2}$
& $\left[0,0,\frac3z\right]$
& $1$
& $\left[0,0,0,0,0,0,-\frac6z\right]$
\\
&
&
& $1$
&
\\[5mm]
$V_{A}$
& $0$
& $[0,0,0]$
& $0$
& $[-2,2i,0,0,0,0,0]$
\\
&
&
& $0$
&
\end{tabular}
\bigskip
In principle one could classify all possibilities by referring to
the distinct cases exhibited in the accompanying table. Here,
however, we use the preceding algebraic and differential conditions, together with the coordinates in which the corresponding nondegenerate system could separate, to demonstrate that our 10 known superintegrable systems are the only ones possible.
\section{Completion of the proof}
We know that in addition to the generic superintegrable systems, the only possible superintegrable systems are those that are multiseparable in nongeneric coordinates. Our strategy is
to consider each nongeneric separable system in a given standard form and use the
integrability conditions associated with the corresponding separable
potential. If a superintegrable system permits separation in these coordinates, then by a suitable Euclidean transformation, we can assume the system permits separation in this standard form. This information is then used together with the six algebraic conditions
$I^{(a)},\cdots, I^{(f)}$, (\ref{ideal}), to deduce all the information available from algebraic
conditions. At that point the differential equations
(\ref{diffconds}) for the $D^{ij}$ can be solved in a
straightforward manner to obtain the final possible superintegrable systems. In some cases the algebraic conditions alone suffice and the differential equations are unnecessary. We proceed on a
case-by-case basis.
\subsection{Cylindrical systems}
For cylindrical-type systems the potential splits off the $z$
variable, i.e., the potential satisfies $V_{13}=0,V_{23}=0$ in equations (\ref{veqn1a}). This implies that
$A^{13}=B^{13}=C^{13}=0$ and $A^{23}=B^{23}=C^{23}=0$. From the equations for
$X_i, (i=1,2,3)$ and $Y_j, (j=1,\cdots,7)$ we can deduce that $Y_7=-2X_3$. It is also easy to
conclude that
$Y_3=Y_4=0$ and $X_1=Y_5,X_2=Y_6$.
If we add the requirement of Cartesian
coordinate separation, then $A^{12}=B^{12}=C^{12}=0$. If $X_3=0$ we
obtain potential $V_O$. If $X_3\ne 0$ then $X_3=3/z$. If $X_1=X_2=0$
then we have potential $V_{OO}$. If one of $X_1,X_2$ is not zero this leads directly to
potential $V_I$.
{}For separation in cylindrical coordinates
$x=r\cos\theta ,\
y=r\sin\theta ,\
z$,
the following conditions must apply:
$$V_{xz}=0,\
V_{yz}=0,$$
$$(x^2-y^2)V_{xy}+xy(V_{yy}-V_{xx})+3xV_y-3yV_x=0.$$
The last condition is equivalent to
$\partial _\theta (r\partial _r(r^2V))=0$
where $r^2=x^2+y^2$. Solving the algebraic conditions that result, we
determine that
$$X_1=Y_5=-G(1+\frac{y^2}{ x^2})-\frac{3}{ x},\
X_2=Y_6=G(\frac{x}{ y}+\frac{y}{ x})-\frac{3}{ y},$$
$$Y_1=G(-3+\frac{y^2}{x^2})+\frac{3}{ x},\
Y_2=G({x\over y}-3{y\over x})-{3\over y},\
Y_3=Y_4=0,$$
where $G$ is an unknown function. In addition we deduce that
$Y_7=-2X_3$. It is
then easy to show from the differential equations that
$X_3=\frac{3}{z}$ or $0$ and that $G=0$. We conclude that separation of this
type occurs in cases $V_I$ and $V_{IV}$.
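These separation conditions are easy to verify mechanically. The following sympy sketch (our own check, not from the paper) confirms that $V_I$ satisfies the three cylindrical-coordinate conditions quoted above:

```python
# Sketch: check that V_I satisfies V_xz = 0, V_yz = 0 and the angular
# condition (x^2-y^2)V_xy + xy(V_yy - V_xx) + 3x V_y - 3y V_x = 0.
import sympy as sp

x, y, z = sp.symbols('x y z')
al, be, ga, de = sp.symbols('alpha beta gamma delta')

V = al*(x**2 + y**2 + z**2) + be/x**2 + ga/y**2 + de/z**2   # V_I

c1 = sp.diff(V, x, z)
c2 = sp.diff(V, y, z)
c3 = ((x**2 - y**2)*sp.diff(V, x, y)
      + x*y*(sp.diff(V, y, 2) - sp.diff(V, x, 2))
      + 3*x*sp.diff(V, y) - 3*y*sp.diff(V, x))

assert all(sp.simplify(c) == 0 for c in (c1, c2, c3))
```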
For parabolic cylinder coordinates
$x=\frac{1}{ 2}(\xi ^2-\eta ^2),\
y=\xi \eta ,\
z$,
the conditions on the potential have the form
$$V_{xz}=0,\ V_{yz}=0,\
2xV_{xy}+y(V_{yy}-V_{xx})+3V_y=0.$$
This implies that
$$X_1=-2F,\
X_2=2\frac{x}{ y}F-\frac{3}{ y},\
X_3=-C,$$
$$Y_1=-2F,\
Y_2=2\frac{x}{ y}F-\frac{3}{ y},\
Y_3=Y_4=0,
$$
$$Y_5=-2F,\
Y_6=2\frac{x}{ y}F-\frac{3}{ y} ,\
Y_7=2C.$$
The remaining differential equations require that $F=0$ and
$C={3\over z}$. This type occurs in case $V_{IV}$.
{}For elliptic cylinder coordinates
$x=\cosh A \cos B, y=\sinh A \sin B, z,$ the integrability conditions for the
potential have the form
$$V_{zx}=0,\ V_{yz}=0,\
(x^2-y^2-1)V_{xy}+xy(V_{yy}-V_{xx})+3(xV_y-yV_x)=0.$$
This and the algebraic conditions imply
$$X_1=(\frac{x}{ y}+\frac{y}{ x}+\frac{1}{ xy})G-\frac{3}{ x},\
X_2=(-1-\frac{x^2}{ y^2}+\frac{1}{ y^2})G-\frac{3}{ y},\
X_3=-C,$$
$$Y_1=(3\frac{x}{ y}-\frac{y}{ x}-\frac{1}{ xy})G+\frac{3}{ x},\
Y_2=(-\frac{x^2}{ y^2}+3+\frac{1}{ y^2})G-\frac{3}{ y},\
Y_3=Y_4=0,$$
$$Y_5=(\frac{x}{ y}+\frac{y}{ x}+\frac{1}{ xy})G-\frac{3}{ x},\
Y_6=(-1-\frac{x^2}{ y^2}+\frac{1}{ y^2})G-\frac{3}{ y},\
Y_7=2C.$$
The remaining differential equations require $G=0$, and $C=-\frac{3}{ z}$ or $0$
corresponding to systems $V_I$ and $V_{IV}$.
In semihyperbolic coordinates $x+iy=4i(u+v),x-iy=2i(u-v)^2$ the extra
integrability condition is
$$(1+ix+y)(V_{xx}-V_{yy})+2(-2i-x+iy)V_{xy}+3iV_x-3V_y=0.$$
The algebraic conditions yield the requirements
$$X_1=Y_5=G,\
X_2=-G,\
X_3=-C,\ Y_3=Y_4=0,\ $$
$$Y_1=\frac{3}{ 2}i+\frac{i}{ 2}(x-iy)G,\
Y_2=-\frac{3}{ 2}+\frac{1}{ 2}(-x+iy)G,\
Y_6=iG,\
Y_7=2C.$$
This leads to potentials $V_A$ and $V_{VI}$.
{}For hyperbolic coordinates $x+iy=rs,x-iy=
(r^2+s^2)/ rs, z$, the integrability condition is
$$(1+ixy)(V_{yy}-V_{xx})+i(x^2-y^2-2)V_{xy}+3i(xV_y-yV_x)=0.$$
The algebraic conditions imply $Y_7=2X_3=2C$ and
$$X_1=Y_5=(xy-iy^2-2i)G-\frac{6}{ x+iy},\
X_2=Y_6=-(x^2-ixy-2)G-\frac{6i}{ x+iy},\
$$
$$Y_1= \frac{3yx^2-2ix-y^3-2y}{ x+iy}G - \frac{6(x-iy)}{ (x+iy)^2},\
Y_2=- \frac{x^3-3xy^2-2x+2iy}{ x+iy}G - i\frac{6(x-iy)}{ (x+iy)^2}.$$
This yields potential $V_{II}$.
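The claim can be confirmed symbolically. A sympy sketch (our own verification) showing that $V_{II}$ satisfies the hyperbolic-coordinate integrability condition quoted above:

```python
# Sketch: check that V_II satisfies
# (1 + i x y)(V_yy - V_xx) + i(x^2 - y^2 - 2) V_xy + 3i(x V_y - y V_x) = 0.
import sympy as sp

x, y, z = sp.symbols('x y z')
al, be, ga, de = sp.symbols('alpha beta gamma delta')

V = (al*(x**2 + y**2 + z**2) + be*(x - sp.I*y)/(x + sp.I*y)**3
     + ga/(x + sp.I*y)**2 + de/z**2)                     # V_II

cond = ((1 + sp.I*x*y)*(sp.diff(V, y, 2) - sp.diff(V, x, 2))
        + sp.I*(x**2 - y**2 - 2)*sp.diff(V, x, y)
        + 3*sp.I*(x*sp.diff(V, y) - y*sp.diff(V, x)))

assert sp.simplify(cond) == 0
```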
\subsection{Radial-type coordinates}
We consider systems that have a radial coordinate $r$ as one of the
separable coordinates. The two other coordinates are separable
on the complex two dimensional sphere. We first consider
spherical coordinates
$x=r\sin\theta \cos\varphi$,
$y=r\sin\theta \sin\varphi$,
$z=r\cos\theta$.
The integrability conditions on the potential have the form
$$(x^2-y^2)V_{xy}+xzV_{yz}-yzV_{xz}+xy(V_{yy}-V_{xx})+3xV_y-3yV_x=0,$$
$$(x^2-z^2)V_{xz}+xz(V_{zz}-V_{xx})+xyV_{yz}-zyV_{xy}+3xV_z-3zV_x=0,$$
$$(y^2-z^2)V_{yz}+yz(V_{zz}-V_{yy})+xyV_{xz}-zxV_{xy}+3yV_z-3zV_y=0,$$
$$xV_{yz}-yV_{xz}=0.$$
Note that the first three conditions are not independent and only two are
required. For any potential that separates in spherical coordinates, one
additional condition is required. Indeed,
if $r,u$ and $v$ are any form of separable spherical-type coordinates then
the potential must have the functional form
\begin{equation}\label{star1}V=f(r)+g(u,v)/r^2,\end{equation}
it being understood that $u$ and $v$ are coordinates on the complex two
dimensional sphere and $r$ is the radius. It is then clear that
$r^2V=r^2f(r)+g(u,v)$. As a consequence there are the conditions
$ \partial _r\partial _\lambda (r^2V)=0$ ,
where $\lambda =u,v$. Noting that
$$x\partial _xF+y\partial _yF+z\partial _zF=DF=r\partial _rF$$
and that
$$J_1F=y\partial _zF-z\partial _yF=a(u,v)\partial _uF+b(u,v)\partial _vF,$$
with similar expressions for $J_2F$ and $J_3F$, we conclude that the conditions
(\ref{star1}) are equivalent to any two of the three conditions
${1\over r^2}J_iD(r^2V)=0$.
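To see the equivalence concretely (a routine verification, added here for orientation): for $V$ of the form (\ref{star1}),
$$D(r^2V)=r\,\partial_r\big(r^2f(r)+g(u,v)\big)=r\,\partial_r\big(r^2f(r)\big),$$
which is a function of $r$ alone. Since each $J_i=a_i(u,v)\partial_u+b_i(u,v)\partial_v$ contains no $\partial_r$, it annihilates any function of $r$ alone, whence ${1\over r^2}J_iD(r^2V)=0$.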
These are indeed the three conditions we have given. If we now solve all the
algebraic conditions, we determine that
$$X_1=Y_5=-\frac{(x^2+y^2)}{ xy} G -\frac{3}{ x},\
X_2=\frac{(x^2+y^2)}{ y^2} G -\frac{3}{y},\
X_3=\frac{3}{ z},\ Y_7=-\frac{6}{z},
$$
$$Y_1=- \frac{3x^2-y^2}{xy} G +\frac{3}{ x},\
Y_2= \frac{x^2-3y^2}{ y^2} G -\frac{3}{ y},\ Y_3=Y_4=0.
$$
{}From this we see that the remaining differential equations give $G=0$
and we obtain solution $V_I$.
We now consider horospherical
coordinates on the complex 2-sphere, viz.
$$x+iy=-i\frac{r}{ v}(u^2+v^2),\
x-iy=i\frac{r}{v},\
z=-ir\frac{u}{ v}.
$$
The extra integrability condition in this case is
$$z(V_{xx}-V_{yy})+2izV_{xy}-(x+iy)(V_{xz}+iV_{yz})=0.$$
Solving the algebraic conditions we conclude that
$$X_1=iX_2=\frac{(x+iy)}{ z} G - \frac{6}{ x+iy},\
X_3= \frac{(x+iy)^2}{ z^2}G +\frac{3}{z},
$$
$$Y_1=iY_2=-4 \frac{z}{ (x+iy)} G -6 \frac{(x-iy)}{ (x+iy)^2},\
Y_3=iY_4=-2iG,
$$
$$Y_5=iY_6=-4 \frac{(x+iy)}{ z} G - \frac{6}{ (x+iy)},\
Y_7=-2 \frac{(x+iy)^2}{ z^2}G -\frac{6}{ z}.
$$
The derivative conditions give $G=0$, so this corresponds to solution $V_{II}$.
Conical coordinates are also radial-type:
$$x^2=r^2\frac{(u-e_1)(v-e_1)}{ (e_1-e_2)(e_1-e_3)},\
y^2=r^2\frac{(u-e_2)(v-e_2)}{ (e_2-e_1)(e_2-e_3)},
$$
$$z^2=r^2\frac{(u-e_3)(v-e_3)}{ (e_3-e_2)(e_3-e_1)}.
$$
The extra integrability condition is
$$3(e_2-e_3)yzV_x+3(e_3-e_1)xzV_y+3(e_1-e_2)xyV_z+
xyz[(e_2-e_3)V_{xx}+(e_3-e_1)V_{yy}+(e_1-e_2)V_{zz}]$$
$$+z[(e_3-e_1)y^2+(e_2-e_3)x^2+(e_2-e_1)z^2]V_{xy}
+y[(e_1-e_2)z^2+(e_2-e_3)x^2+(e_1-e_3)y^2]V_{xz}$$
$$+x[(e_1-e_2)z^2+(e_3-e_2)x^2+(e_1-e_3)y^2]V_{yz}=0.$$
The algebraic conditions yield immediately solution $V_I$ with
$$X_1=-\frac{3}{ x},\ X_2=-\frac{3}{ y},\
X_3=\frac{3}{ z},\ Y_1=\frac{3}{ x},\
Y_2=-\frac{3}{ y},$$
$$Y_3=Y_4=0,\
Y_5=-\frac{3}{ x},\
Y_6=-\frac{3}{ y},\
Y_7=-\frac{6}{ z}.$$
For degenerate type elliptic polar coordinates (type 1) we can write
$$x+iy = \frac{r}{ \cosh A\cosh B},\
2x =r[\frac{\cosh A}{ \cosh B} + \frac{\sinh B}{ \sinh A}],\
z=r\tanh A\tanh B.
$$
The extra integrability condition is
$$3(x+iy)^2V_z-3xzV_x-3i(2x+iy)zV_y-2i(x+iy)(z^2+ixy)V_{yz}-2(y^2+z^2)(x+iy)V_{xz}
$$
$$+2iz(z^2+y^2)V_{xy}+z(x+iy)^2V_{zz}+z(z^2+y^2)V_{xx}-z(x^2+z^2+2ixy)V_{yy}=0.
$$
Solving the algebraic conditions we deduce that
$$X_1=-\frac{2}{ x}(y-ix)G-\frac{6}{ x+iy},\
X_2=-i\frac{2}{ x}(y^2-x^2+z^2-ixy)(y-ix)G-\frac{6i}{ x+iy},
$$
$$X_3=-\frac{2i}{ xz}(z^2+y^2-ixy)(y-ix)^2G+\frac{3}{ z},\
Y_1=-\frac{1}{ x}(-y^3+3x^2y+2z^2y-6iz^2x)G-6\frac{(x-iy)}{(x+iy)^2},
$$
$$Y_2=-\frac{i}{ x}(-3ixy^2+ix^3+2z^2y-6iz^2x)G-6i\frac{x-iy}{ (x+iy)^2},\
Y_3=iY_4=2z \frac{(y-ix)^2}{ x}G,$$
$$
Y_5=-\frac{3}{ x}(-3y^2+5ixy+2z^2)(y-ix)G-\frac{6}{ x+iy},
$$
$$ Y_6=-\frac{i}{ 6}(-8y^2+13ixy+3x^2+2z^2)(y-ix)G-\frac{6i}{ x+iy},\
Y_7=-\frac{2i}{ xz}(-2y^2+2ixy+3z^2)(y-ix)^2G-\frac{6}{ z}.
$$
The differential conditions require $G=0$, leading to a
type $V_{II}$ potential.
For degenerate elliptic coordinates (type 2) on the complex 2
sphere we have
$$x+iy=ruv,\
x-iy=\frac{1}{ 4}r\frac{(u^2+v^2)^2}{ u^3v^3},\
z=-\frac{i}{ 2}r \frac{u^2-v^2}{ uv}.
$$
The corresponding integrability condition is
$$3(z^2+ixy-y^2)V_x+3i(z^2-x^2-ixy)V_y-3iz(y-ix)V_z$$
$$-i(-ixy^2+y^3+iz^2x+yz^2)V_{xx}+i(ix^3-x^2y+iz^2x+yz^2)V_{yy}$$
$$+i(-ix+y)(x^2+y^2)V_{zz}+2(x^2y+yz^2+ixy^2+ixz^2)V_{xy}$$
$$-2iz(x^2+y^2)V_{yz}-2z(x^2+y^2)V_{xz}=0.$$
The solutions to the algebraic conditions are
$$X_1=-2iz(ix+2y)(y-ix)G-\frac{9}{ x+iy},\
X_2=2z(y+ix)(y-ix)G-\frac{9i}{ x+iy},\
$$
$$X_3=2(-ix+y)(x^2+y^2)G,\ Y_3=2i(yz^2+iz^2x-ixy^2-x^2y)G+ \frac{6iz}{ (x+iy)^2},
$$
$$Y_1=-iY_2=i\frac{(-3y^2-3x^2+4z^2)z(ix+y)}{ ix-y} G + 6
\frac{(-x^2-y^2+2z^2)}{ (x+iy)^3},
$$
$$Y_4=(2yz^2+2iz^2x-y^3+ixy^2+x^2y-ix^3)G+\frac{6}{ (x+iy)^2},\
Y_5=iz(y+3ix)(y-ix)G+\frac{6}{ x+iy},$$
$$
Y_6=-z(ix+3y)(y-ix)G+\frac{6i}{ x+iy},\
Y_7=(-ix+y)(x^2+y^2)G.$$
The differential conditions hold only if $G=0$. This is system $V_{III}$.
\subsection{Spheroidal coordinates}
We take these as
$$x=\sinh A\cos B\cos\varphi,\
y=\sinh A\cos B\sin\varphi,\
z=\cosh A\sin B.$$
The integrability conditions for the potential are
$$-3zV_x+3xV_z+zx(V_{zz}-V_{xx})-zyV_{xy}+(1+x^2+y^2-z^2)V_{zx}=0,$$
$$-3zV_y+3yV_z+zy(V_{zz}-V_{yy})-zxV_{xy}+(1+x^2+y^2-z^2)V_{zy}=0,$$
$$yV_{zx}-xV_{zy}=0.$$
The solutions of the algebraic conditions are
$$X_1=Y_5=-\frac{y}{x}(x^2+y^2)G-\frac{3}{ x},\
X_2=Y_6=(x^2+y^2)G-\frac{3}{ y},\
X_3=\frac{3}{ z},
$$
$$Y_1=-\frac{y}{ x}(-y^2+3x^2)G+\frac{3}{ x},\
Y_2=(-3y^2+x^2)G-\frac{3}{ y},\
Y_7=-\frac{6}{ z}.$$
From the differential conditions we see that $G=0$, and obtain potential $V_I$.
\subsection{Horospherical coordinates}
These are
$$x+iy=\sqrt{\rho \nu },\ x-iy= 4 \frac{\rho +\nu -\rho \nu \mu }{ \sqrt{\rho \nu }},\
z=2\sqrt{\rho \nu \mu }.$$
The corresponding integrability conditions for the potential are
$$(x^2-ixy-z^2)V_{zx}+(yx-iy^2+iz^2)V_{zy}+i(x+iy)zV_{xy}+zx(V_{zz}-V_{xx})+izy(
V_{yy}-V_{zz})=0,$$
$$(x^2-y^2)V_{xy}+xy(V_{yy}-V_{xx})+zxV_{zy}-yzV_{zx}-3yV_x+3xV_y=0,
$$
$$z(V_{xx}-V_{yy})-2izV_{xy}+(ix+y)V_{zy}+(-x+iy)V_{zx}=0.$$
The solutions to all the algebraic conditions are
$$X_1=-iX_2=- \frac{i(x+iy)}{ z} G - \frac{6}{x+iy},\
X_3= \frac{i(x+iy)^2}{ z^2}G+\frac{3}{ z},
$$
$$Y_1=-iY_2=- \frac{4iz}{ x+iy} G - 6 \frac{x-iy}{ (x+iy)^2},\
Y_3=iY_4=2G,
$$
$$Y_5=-iY_6=4\frac{i(x+iy)}{ z} G - \frac{6}{ x+iy},\
Y_7=-2i \frac{(x+iy)^2}{ z^2} G - \frac{6}{ z}.$$
The differential conditions require $G=0$ and this gives potential $V_{II}$.
\subsection{Rotational parabolic coordinates}
For these coordinates
$x=\xi \eta \cos\varphi $ , $y=\xi \eta \sin\varphi$, $ z=\frac{1}{ 2}(\xi ^2-\eta ^2)$.
The required conditions on the potential are
$$xy(V_{yy}-V_{xx})+(x^2-y^2)V_{xy}-yzV_{yz}+xzV_{xz}-3yV_x+3xV_y=0,$$
$$x^2(V_{xx}-V_{zz})+y^2(V_{yy}-V_{zz})+2xyV_{xy}+2zxV_{yz}+2xzV_{zx}+3xV_x+3yV_y=0,$$
$$xV_{zy}-yV_{zx}=0.$$
These integrability conditions directly produce the solution
$$X_1=-\frac{3}{ x} , \
X_2=-\frac{3}{y} , \
X_3=0 ,\
Y_1=\frac{3}{x} , \
Y_2=-\frac{3}{ y} ,
$$
$$Y_3=Y_4=0,\
Y_5=-\frac{3}{ x} , \
Y_6=-\frac{3}{ y} , \
Y_7=0.
$$
This is a permuted version of potential $V_{IV}$.
We have covered all possibilities for separable coordinates and found exactly which superintegrable system separates in each coordinate system. It follows that our list of 10 superintegrable systems is complete. Another interesting consequence of this analysis is
\begin{theorem} For every orthogonal separable coordinate system there is at least one nondegenerate superintegrable system that separates in these coordinates.
\end{theorem}
On the other hand, no nondegenerate superintegrable system permits
separation in nonorthogonal heat-type coordinates. Potential
$V_{VII}$ is the only generic system that separates in generic
coordinates alone.
\section{Outlook} The basic structure and classification problems
for 2D second order superintegrable systems have been solved \cite{KKMP,KKM20042,KMJP2,KMJP3,DASK2005}. For 3D
systems the corresponding problems are much more complicated, but we
have now achieved a verifiably complete
classification of the possible nondegenerate potentials in 3D
Euclidean space. There are 10 such potentials, as compared to 11 in
2D. To finish the classification of nondegenerate potentials for all 3D conformally flat spaces the main task remaining is the classification on the 3-sphere, probably not difficult. This is because all conformally flat systems can be obtained from flat space and the 3-sphere by St\"ackel transforms. The new idea used here that made the complete verifiable classification practical was the association of nondegenerate superintegrable systems with points on an algebraic variety on which the Euclidean group acts to produce foliations. In the future we hope to refine this approach to give a direct classification using only the algebraic variety and group action. Here we had also to rely on basic results from separation of variables theory to simplify the calculations. In distinction to the 2D case, which is special, the 3D classification problem seems to have all of the ingredients that go into the corresponding nondegenerate potential classification problem in $n$ dimensions. The number of nondegenerate potentials grows rapidly with dimension: the number of generic potentials alone is $\sum_{j=0}^np(j)$, where $p(j)$ is the number of partitions of $j$. The algebraic variety approach should be generalizable to this case.
Nondegenerate potentials for 3D superintegrable systems are just the most symmetric. There is also ``fine structure,'' a hierarchy of various classes of degenerate potentials with fewer than 4 parameters. The structure and classification theory for these systems has just gotten underway, with initial results for 3 parameter FLI systems \cite{KKM20071}. Sometimes a quadratic algebra structure exists and sometimes it does not. Extension of these methods to complete the fine structure analysis for 3D systems
appears relatively straightforward. The analysis
can be extended to 2 parameter and 1 parameter potentials
with 5 functionally linearly independent second order symmetries. Here
first order PDEs for the potential appear as well as second order,
and Killing vectors may occur. Another class of 3D superintegrable
systems is that for which the 5 functionally independent symmetries are
functionally linearly dependent. This class is related to the Calogero
potential \cite{CAL1,WOJW,HMS} and necessarily leads to first order PDEs for the
potential, as well as second order \cite{KKM20061}. However, the
integrability
methods discussed here should be able to handle this class with no
special difficulties. On a deeper level, we think that the algebraic
geometry approach can be extended to determine the
possible superintegrable systems in all these cases.
Finally, the algebraic geometry related results that we have described
in this paper suggest strongly that there is an underlying
geometric structure to superintegrable systems that is not apparent
from the usual presentations of these systems.
\medskip
\noindent
{\bf Acknowledgement}: The authors wish to thank Thomas Wolf and Greg Reid for very helpful consultations on Gr\"obner basis techniques and on numerical methods in the study of algebraic varieties.
# Q.1

Take as input a natural number, say n. Output all subsets of {1,2,...,n}.
You cannot use functions or recursion.

# Q.2

Write a program to read a sequence of non-zero integers till the number zero is entered and at the end display the following:
a) the number of even and odd numbers,
b) sum of all the numbers entered,
c) the length and starting index (or position) of a largest subsequence of consecutive non-decreasing integers entered. Assume that the index of the numbers starts from 1.

For example, if the user input is 1 -2 3 17 9 5 -10 -12 0, then the output will be:
No. of even numbers: 3
No. of odd numbers: 5
Sum of the numbers: 11
Length of largest non-decreasing subsequence: 3
Starting index: 2

Note: Do not use arrays.

Note by Rishabh Deep Singh
3 years, 1 month ago

Sort by:

A solution posted for Q.2 (the original post ran the code together on one line; it is reformatted here, with the `<stdio.h>` header restored and the run-start index update repaired from the garbled original):

```c
#include <stdio.h>

int main() {
    int n = 1, e = 0, o = 0, sum = 0, p = 0, i = 0;
    int index = 1, c = 0, gre = 0, first = 1;
    printf("Enter a sequence of numbers, terminal no. of sequence being 0\n");
    while (n != 0) {
        i++;
        scanf("%d", &n);
        if (n == 0) break;
        sum += n;
        if (n % 2 == 0) e++; else o++;
        if (n >= p) {
            c++;
        } else {
            if (c > gre) { gre = c; first = index; }
            index = i;  /* a new non-decreasing run starts at this position */
            c = 1;
        }
        p = n;
    }
    if (c > gre) { gre = c; first = index; }
    printf("No. of even numbers is %d\n", e);
    printf("No. of odd numbers is %d\n", o);
    printf("Sum of the entered numbers is %d\n", sum);
    printf("Length of largest non-decreasing subsequence is %d\n", gre);
    printf("Starting index is %d\n", first);
    return 0;
}
```

- 3 years, 1 month ago

I have increased the scope of your problems from C to all other languages by editing "C" out of the discussion.

Hint about 2: How do functions work? What is the underlying data structure that keeps track of where the execution should return to?

Staff - 3 years, 1 month ago

Thanks @Agnishom Chattopadhyay Can u post a solution to Question no 1.

- 3 years, 1 month ago

Staff - 3 years, 1 month ago

i need to submit my assignment now please post the answer Bro.

- 3 years, 1 month ago

An attempt at Q.1 was also posted; the headers and everything past the first `for` loop were lost in extraction and are not reconstructed here:

```c
#include            /* header names lost in extraction */
#include
int main() {
    int n, i, r = 1, z, b = 0, rem, k = 0, j = 1;
    printf("Enter the number\n");
    scanf("%d", &n);
    for (i = 0; i   /* truncated in the original page */
```

- 3 years, 1 month ago

Nice!

Staff - 3 years, 1 month ago

Thanks

- 3 years, 1 month ago
Aalborg Instruments announced the expansion of its economical TP Peristaltic Pump line. TPV models are designed for low- to high-viscosity liquids and can also accommodate fuels with the appropriate tubing.
TPV Pump Systems help ensure stable performance and accurate, repeatable liquid transfer. They are suitable for laboratory, processing and OEM applications.
TPV pumps include three 316 stainless steel rollers and shafts; controls for prime, brake, rpm, reverse flow direction and a power LED; a safety cover; a 24 VDC brushless motor; and a fixed occlusion wall.
"redpajama_set_name": "RedPajamaC4"
} | 6,009 |
Krakauschatten is a municipality (Gemeinde) in Austria, in the federal state of Styria.

It belongs to the district of Murau. Its population is 308 (as of 31 December 2005). It covers an area of 13.02 km². The official code is 61406.

Political situation

The mayor of the municipality is Otto Esterl (ÖVP, Austrian People's Party), based on the results of the 2005 election.

The municipal council (Gemeinderat) consists of 9 seats.
The ÖVP holds 7 seats.
The SPÖ holds 2 seats.

Links

Official page

Towns of Styria
"redpajama_set_name": "RedPajamaWikipedia"
} | 131 |
{"url":"https:\/\/portrait.gitee.com\/lhp625\/apolloauto\/blob\/master\/docs\/quickstart\/apollo_1_0_hardware_system_installation_guide.md","text":"## lhp625 \/ apollo\n\nforked from \/ apollo\nExplore and code with more than 6 million developers\uff0cFree private repositories \uff01\uff1a\uff09\napollo_1_0_hardware_system_installation_guide.md 31.06 KB\n\n# Apollo 1.0 Hardware and System Installation Guide\n\nThe Apollo 1.0 Hardware and System Installation Guide provides the instructions to install all of the hardware components and system software for the **Apollo Project **. The system installation information included pertains to the procedures to download and install the Apollo Linux Kernel.\n\n## Document Conventions\n\nThe following table lists the conventions that are used in this document:\n\nIcon Description\nBold Emphasis\nMono-space font Code, typed data\nItalic Titles of documents, sections, and headings Terms used\nInfo Contains information that might be useful. Ignoring the Info icon has no negative consequences.\nTip. Includes helpful hints or a shortcut that might assist you in completing a task.\nWarning. Contains information that must not be ignored or you risk failure when you perform a certain task or step.\n\n# Introduction\n\nThe Apollo Project is an initiative that provides an open, complete, and reliable software platform for Apollo partners in the automotive and autonomous driving industries. 
The aim of this project is to enable these entities to develop their own self-driving systems based on Apollo software stack.\n\n## Documentation\n\nThe following set of documentation describes Apollo 1.0:\n\n\u2022 [Apollo Hardware and System Installation Guide] \u2500 Provides the instructions to install the hardware components and the system software for the vehicle:\n\n\u2022 Vehicle:\n\n\u2022 Industrial PC (IPC)\n\u2022 Global Positioning System (GPS)\n\u2022 Inertial Measurement Unit (IMU)\n\u2022 Controller Area Network (CAN) card\n\u2022 Hard drive\n\u2022 GPS Antenna\n\u2022 Software:\n\n\u2022 Ubuntu Linux\n\u2022 Apollo Linux Kernel\n\u2022 [Apollo Quick Start Guide] \u2500 A combination tutorial and roadmap that provide the complete set of end-to-end instructions. The Quick Start Guide also provides links to additional documents that describe the conversion of a regular car to an autonomous-driving vehicle.\n\n# Key Hardware Components\n\nThe key hardware components to install include:\n\n\u2022 Onboard computer system \u2500 Neousys Nuvo-5095GC\n\u2022 Controller Area Network (CAN) Card \u2500 ESD CAN-PCIe\/402-1\n\u2022 General Positioning System (GPS) and Inertial Measurement Unit (IMU) \u2500 You can select one of the following options:\n\u2022 NovAtel SPAN-IGM-A1\n\u2022 NovAtel SPAN\u00ae ProPak6\u2122 and NovAtel IMU-IGM-A1\n\n\u2022 A 4G router for Internet access\n\u2022 A monitor, keyboard, and mouse for debugging at the car onsite\n\u2022 Cables: Video Graphics Array (VGA) connector, a Digital Visual Interface (DVI) cable (optional)\n\u2022 Apple iPad Pro: 9.7-inch, Wi-Fi (optional)\n\nThe features of the key hardware components are presented in the subsequent sections.\n\n## Onboard Computer System - IPC\n\nThe onboard computer system is an industrial PC (IPC) for the autonomous vehicle and uses the NeousysNuvo-5095GC that is powered by a sixth-generation Intel Skylake core i7-6700 CPU.\n\nThe Neousys Nuvo-5095GC is the central unit of the 
autonomous driving system (ADS).\n\n### IPC Configuration\n\nConfigure the IPC as follows:\n\n\u2022 32GB DDR4 RAM\n\u2022 MezIO-V20-EP module (with ignition control for in-vehicle usage)\n\u2022 PO-160W-OW 160W AC\/DC power adapter\n\u2022 CSM2 module (x16 PCIe expansion Gen3 8-lane cassette)\n\n### IPC Front and Rear Views\n\nThe front and rear views of the IPC are shown with the Graphics Processing Unit (GPU) installed in the following pictures:\n\nThe front view of the Nuvo-5095GC:\n\nThe rear view of the Nuvo-5095GC:\n\nNeousys Nuvo-5095GC Product Page:\n\nNeousys Nuvo-5095GC-Manual:\n\nhttp:\/\/www.neousys-tech.com\/en\/support\/resources\/category\/162-manual\n\n## Controller Area Network (CAN) Card\n\nThe CAN card to use with the IPC is ESD CAN-PCIe\/402.\n\nESD CAN-PCIe\/402 Product Page:\n\nhttps:\/\/esd.eu\/en\/products\/can-pcie402\n\n## Global Positioning System (GPS) and Inertial Measurement Unit (IMU)\n\nThere are two GPS-IMU options available,and the choice depends upon the one that most fits your needs:\n\n\u2022 Option 1: NovAtel SPAN-IGM-A1\n\u2022 Option 2: NovAtel SPAN\u00ae ProPak6\u2122 and NovAtel IMU-IGM-A1\n\n### Option 1: The NovAtel SPAN-IGM-A1\n\nThe NovAtel SPAN-IGM-A1 is an integrated, single-box solution that offers tightly coupled Global Navigation Satellite System (GNSS) positioning and inertial navigation featuring the NovAtel OEM615 receiver.\n\nNovAtel SPAN-IGM-A1 Product Page:\n\nhttps:\/\/www.novatel.com\/products\/span-gnss-inertial-systems\/span-combined-systems\/span-igm-a1\/\n\n### Option 2: The NovAtel SPAN ProPak6 and NovAtel IMU-IGM-A1\n\nNovAtel ProPak6 is a standalone GNSS receiver. 
It works with a separate NovAtel- supported IMU (in this case, the NovAtel IMU-IGM-A1)to provide localization.\n\nThe ProPak6 provides the latest and most sophisticated enclosure product manufactured by NovAtel.\n\nThe IMU-IGM-A1 is an IMU that pairs with a SPAN-enabled GNSS receiver such as the SPAN ProPak6.\n\nNovAtel ProPak6 Installation & Operation Manual:\n\nhttps:\/\/www.novatel.com\/assets\/Documents\/Manuals\/OM-20000148.pdf\n\nNovAtel IMU-IGM-A1 Product Page:\n\nhttps:\/\/www.novatel.com\/products\/span-gnss-inertial-systems\/span-imus\/span-mems-imus\/imu-igm-a1\/#overview\n\nThe GPS Receiver\/Antenna used with the GPS-IMU component is the NovAtel GPS-703-GGG-HV.\n\n**NOTE: **The GPS NovAtelGPS-703-GGG-HV works with either model of the two GPS-IMU options that are described in the previous section, Global Positioning System (GPS) and Inertial Measurement Unit (IMU).\n\nNovAtel GPS-703-GGG-HV Product Page:\n\nhttps:\/\/www.novatel.com\/products\/gnss-antennas\/high-performance-gnss-antennas\/gps-703-ggg-hv\/\n\n# Overview of the Installation Tasks\n\nInstalling the hardware and the software components involves these tasks:\n\nAT THE OFFICE:\n\n1. Prepare the IPC: a. Examine the Graphics Processing Unit (GPU) cassette to determine if you need to remove the GPU card (if it was pre-installed). b. Prepare and then install the Controller Area Network (CAN) card by first repositioning the CAN card termination jumper before you insert the card into the slot.\n\n2. Install the hard drive (if none was pre-installed) in the IPC.\n\nYou can also choose to replace a pre-installed hard drive if you prefer.\n\nRecommendations :\n\n\u2022 Install a Solid-State Drive (SSD) for better reliability.\n\u2022 Use a high-capacity drive if you need to collect driving data.\n3. Prepare the IPC for powering up: a. Attach the power cable to the power connector (terminal block). b. Connect the monitor, Ethernet, keyboard, and mouse to the IPC. c. 
Connect the IPC to a power source.\n\n4. Install the software on the IPC (some Linux experience is required): a. Install Ubuntu Linux. b. Install the Apollo Linux kernel.\n\nIN THE VEHICLE:\n\n\u2022 Make sure that all the modifications for the vehicle, which are listed in the section Prerequisites, have been performed.\n\n\u2022 Install the major components (according to the illustrations and the instructions included in this document):\n\n\u2022 GPS Antenna\n\u2022 IPC\n\nThe actual steps to install all of the hardware and software components are detailed in the section, Steps for the Installation Tasks.\n\n# Steps for the Installation Tasks\n\nThis section describes the steps to install:\n\n\u2022 The key hardware and software components\n\u2022 The hardware in the vehicle\n\n## At the Office\n\n\u2022 Prepare the IPC:\n\n\u2022 Install the CAN card\n\u2022 Install or replace the hard drive\n\u2022 Prepare the IPC for powering up\n\u2022 Install the software for the IPC:\n\n\u2022 Ubuntu Linux\n\u2022 Apollo Kernel\n\n### Preparing the IPC\n\n1. In the IPC, examine the GPU cassette to determine if there is a pre-installed GPU card, which you need to remove:\n\na. Turn over the IPC to unscrew the four screws (shown in the purple squares) on the bottom of computer that are holding the GPU cassette in place:\n\nb. Remove the GPU cassette from the IPC:\n\nc. Remove the GPU cassette from the IPC: Unscrew three additional screws (shown in the purple circles) on the bottom of the GPU cassette to open the cover:\n\nd. Remove the GPU card (if installed):\n\n2. Prepare and install the CAN card:\n\na. Set the CAN card termination jumper by removing the red jumper cap (shown in yellow circles) from its default location and placing it at its termination position:\n\nWARNING: The CAN card will not work if the termination jumper is not set correctly.\n\nb. Insert the CAN card into the slot in the IPC:\n\nc. Reinstall the GPU cassette in the IPC:\n\n3. 
Install or replace the hard drive.\n\nYou need to install one or two 2.5\u201d SSD or hard drives if none have been pre-installed. As an alternative, you might want to replace a pre-installed hard drive with one of your own (say, an SSD).\n\nAn SSD drive is highly recommended for better reliability. Also consider using a high-capacity drive if you need to collect driving data.\n\nTo install the hard drive:\n\na. Unscrew the three screws (shown in the purple circles) to open the hard drive cover (caddy):\n\nb. Install the drive in the caddy (as shown with an Intel SSD):\n\nObserve the way the hard drive is situated in the caddy for the installation. The Serial Advanced Technology Attachment (SATA) and the power connectors should be placed in the caddy facing the end that has the two screw holes showing.\n\nThe hard drive in the caddy is now connected:\n\nc. Reinstall the SSD caddy in the IPC:\n\n4. Prepare the IPC for powering up:\n\na. Attach the power cable to the power connector(terminal block) that comes with the IPC:\n\nWARNING: Make sure that the positive(labeled R for red) and the negative(labeled B for black) wires of the power cable are inserted into the correct holes on the power terminal block.\n\nb. Connect the monitor, Ethernet cable, keyboard, and mouse to the IPC:\n\nIt is recommended that you use a Video Graphics Array (VGA) connector for the monitor for these reasons:\n\n\u2022 If you do not see any screen display when the IPC boots up, switch to the VGA input. The Neousys Nuvo-5095GC IPC always outputs to a VGA port even if there is no monitor connected. Consequently, the Linux installer might \u201celect\u201d to output to a VGA port instead of a DVI port.\n\u2022 If you do not see a dialog window during the installation process when using a dual-monitor setup, try switching between VGA and DVI to find it. 
The Linux installer might detect two monitors and use them both.\n\nFor better display quality, you have the option to:\n\n\u2022 Connect to another monitor using a DVI cable, or a High-Definition Multimedia Interface (HMI) with DVI-HMI adapter\n\n\u2022 Use the DVI\/HDMI port on the same monitor\n\nc. Connect the power:\n\n### Installing the Software for the IPC\n\nThis section describes the steps to install:\n\n\u2022 Ubuntu Linux\n\u2022 Apollo Kernel\n\nIt is assumed that you have experience working with Linux to successfully perform the software installation.\n\n#### Installing Ubuntu Linux\n\n1. Create a bootable Ubuntu Linux USB flash drive:\n\nDownload Ubuntu (or a variant such as Xubuntu) and follow the online instructions to create a bootable USB flash drive.\n\nIt is recommended that you use Ubuntu 14.04.3.\n\nYou can type F2 during the system boot process to enter the BIOS settings. It is recommended that you disable Quick Boot and Quiet Boot in the BIOS to make it easier to catch any issues in the boot process.\n\nhttps:\/\/www.ubuntu.com\/desktop\n\n1. Install Ubuntu Linux:\n\na. Insert the Ubuntu installation drive into a USB port and turn on the system. b. Install Linux by following the on-screen instructions.\n\n2. Perform a software update and the installation: a. Reboot into Linux after the installation is done. b. Launch the Software Updater to update to the latest software packages (for the installed distribution) or type the following commands in a terminal program such as GNOME Terminal.\n\nsudo apt-get update; sudo apt-get upgrade\n\nc. Launch a terminal program such as GNOME Terminal and type the following command to install the Linux 4.4 kernel:\n\nsudo apt-get install linux-generic-lts-xenial\n\nThe IPC must have Internet access to update and install software. Make sure that the Ethernet cable is connected to a network with Internet access. 
You might need to configure the network for the IPC if the network that it is connected to is not using the Dynamic Host Configuration Protocol (DHCP).\n\n#### Installing the Apollo Kernel\n\nThe Apollo runtime in the vehicle requires the Apollo Kernel. You are strongly recommended to install the pre-built kernel.\n\n##### Use pre-built Apollo Kernel.\n\nYou get access and install the pre-built kernel with the following commands.\n\nhttps:\/\/github.com\/ApolloAuto\/apollo-kernel\/releases\ntar zxvf linux-4.4.32-apollo-1.0.0.tar.gz\ncd install\nsudo bash install_kernel.sh\n1. Reboot your system by the reboot command\n2. Build the ESD CAN driver source code Now you need to build the ESD CAN driver source code according to ESDCAN-README.md\n\nIf have modified the kernel, or the pre-built kernel is not the best for your platform, you can build your own kernel with the following steps.\n\n1. Clone the code from repository\ngit clone https:\/\/github.com\/ApolloAuto\/apollo-kernel.git\ncd apollo-kernel\n\n2. Build the kernel with the following command.\n\nbash build.sh\n1. Install the kernel the same way as using a pre-built Apollo Kernel.\n##### Optional: Test the ESD CAN device node\n\nAfter rebooting the IPC with the new kernel:\n\na. Create the CAN device node by issuing the following commands in a terminal:\n\ncd \/dev; sudo mknod \u2013-mode=a+rw can0 c 52 0\n\nb. Test the CAN device node using the test program that is part of the ESD CAN software package that you have acquired from ESD Electronics.\n\nThe IPC is now ready to be mounted on the vehicle.\n\n## In the Vehicle\n\n\u2022 Make the necessary modifications to the vehicle as specified in the list of prerequisites\n\u2022 Install the major components:\n\u2022 GPS Antenna\n\u2022 IPC\n\n### Prerequisites\n\nWARNING: Prior to mounting the major components (GPS Antenna, IPC, and GPS Receiver) in the vehicle, certain modifications must be performed as specified in the list of prerequisites. 
The instructions for making the mandatory changes in the list are outside the scope of this document.

The list of prerequisites is as follows:

- The vehicle must be modified for "drive-by-wire" technology by a professional service company. Also, a CAN interface hookup must be provided in the trunk where the IPC will be mounted.
- A power panel must be installed in the trunk to provide power to the IPC and the GPS-IMU. The power panel would also service other devices in the vehicle, such as a 4G LTE router. The power panel should be hooked up to the power system in the vehicle.
- A custom-made rack must be installed to mount the GPS-IMU Antenna on top of the vehicle.
- A custom-made rack must be installed to mount the GPS-IMU in the trunk.
- A 4G LTE router must be mounted in the trunk to provide Internet access for the IPC. The router must have built-in Wi-Fi access point (AP) capability to connect to other devices, such as an iPad, to interface with the autonomous driving (AD) system.
A user would be able to use the mobile device to start AD mode or monitor AD status, for example.

### Diagrams of the Major Component Installations

The following two diagrams indicate the locations where the three major components (GPS Antenna, IPC, and GPS Receiver) should be installed on the vehicle:

### Installing the GPS Receiver and Antenna

This section provides general information about installing one of two choices:

- Option 1: GPS-IMU: NovAtel SPAN-IGM-A1
- Option 2: GPS-IMU: NovAtel SPAN® ProPak6™ and NovAtel IMU-IGM-A1

#### Option 1: Installing the NovAtel SPAN-IGM-A1

The installation instructions describe the procedures to mount, connect, and take the lever arm measurements for the GPS-IMU NovAtel SPAN-IGM-A1.

##### Mounting

You can place the GPS-IMU NovAtel SPAN-IGM-A1 in most places in the vehicle, but it is suggested that you follow these recommendations:

- Place and secure the NovAtel SPAN-IGM-A1 inside the trunk with the Y-axis pointing forward.
- Mount the NovAtel GPS-703-GGG-HV antenna in an unobscured location on top of the vehicle.

##### Wiring

You must connect two cables:

- The antenna cable, which connects the GNSS antenna to the antenna port of the SPAN-IGM-A1
- The main cable, which:
  - Connects its 15-pin end to the SPAN-IGM-A1
  - Connects its power wires to a power supply of 10-to-30 V DC
  - Connects its serial port to the IPC. If the power comes from a vehicle battery, add an auxiliary battery (recommended).

For a detailed diagram of the main cable connections, see the SPAN-IGM™ Quick Start Guide, page 3:

http://www.novatel.com/assets/Documents/Manuals/GM-14915114.pdf

##### Taking the Lever Arm Measurement

When the SPAN-IGM-A1 and the GPS Antenna are in position, the distance from the SPAN-IGM-A1 to the GPS Antenna must be measured.
The distance should be measured as: X offset, Y offset, and Z offset.

The error of each offset must be within one centimeter to achieve high accuracy. For more information, see the SPAN-IGM™ Quick Start Guide, page 5, for a detailed diagram.

SPAN-IGM™ User Manual:

http://www.novatel.com/assets/Documents/Manuals/OM-20000141.pdf

#### Option 2: Installing the NovAtel SPAN® ProPak6™ and NovAtel IMU-IGM-A1

The installation instructions describe the procedures to mount, connect, and take the lever arm measurements for the GPS NovAtel SPAN® ProPak6™ and the NovAtel IMU-IGM-A1.

##### Components for the Installation

The components that are required for the installation include:

- NovAtel GPS SPAN ProPak6
- NovAtel IMU-IGM-A1
- NovAtel GPS-703-GGG-HV Antenna
- NovAtel GPS-C006 Cable (to connect the antenna to the GPS)
- NovAtel 01019014 Main Cable (to connect the GPS to a serial port on the IPC)
- Data Transport Unit (DTU), similar to a 4G router
- Magnetic adapters (for the antenna and the DTU)
- DB9 Straight Through Cable

##### Mounting

You can place the two devices, the ProPak6 and the IMU, in most places in the vehicle, but it is suggested that you follow these recommendations:

- Place and secure the ProPak6 and the IMU side-by-side inside the trunk with the Y-axis pointing forward.
- Mount the NovAtel GPS-703-GGG-HV antenna on top of the vehicle or on top of the trunk lid as shown:
  - Use a magnetic adapter to tightly attach the antenna to the trunk lid.
  - Install the antenna cable in the trunk by opening the trunk and placing the cable in the space between the trunk lid and the body of the car.

##### Wiring

Follow these steps to connect the ProPak6 GNSS Receiver and the IMU to the Apollo system:

1. Use the split cable that comes with the IMU-IGM-A1 to connect the IMU Main port and the ProPak6 COM3/IMU port.
2. Use a USB-A-to-MicroUSB cable to connect the USB port of the IPC and the MicroUSB port of the ProPak6.
3. Connect the other end of the IMU-IGM-A1 split cable to the vehicle power.
4. Connect the GNSS antenna to the ProPak6.
5. Connect the ProPak6 power cable.

NovAtel ProPak6 Installation & Operation Manual:

https://www.novatel.com/assets/Documents/Manuals/OM-20000148.pdf

### Installing the IPC

1. Use a power cable to connect the vehicle power source to the IPC: use its power connector as one end, and connect the other end to the power panel in the vehicle (see the section Prerequisites).

2. Place the onboard computer system, the Nuvo-5095GC, inside the trunk (recommended). For example, Apollo 1.0 uses 4x4 self-tapping screws to bolt the Nuvo-5095GC to the carpeted floor of the trunk.

3. Mount the IPC so that its front and back sides (where all ports are located) face the right side (passenger) and the left side (driver) of the trunk. This positioning makes it easier to connect all of the cables.

   Neousys Nuvo-5095GC Manual:

   http://www.neousys-tech.com/en/support/resources/category/162-manual

4. Connect all cables, which include:

   - Power cable
   - Controller Area Network (CAN) cable
   - Ethernet cable from the 4G router to the IPC
   - GPS Receiver to the IPC
   - (Optional) Monitor, keyboard, mouse

   a. Connect the power cable to the IPC (as shown):
   b. Connect the other end of the power cable to the vehicle battery (as shown):
   c. Connect the DB9 cable to the IPC to talk to the CAN (as shown):
   d. Connect:
      - the Ethernet cable from the 4G router to the IPC (labeled as Router)
      - the GPS Receiver to the IPC (labeled as GPSIMU)
      - (optional) the monitor (labeled as Monitor)

#### Taking the Lever Arm Measurement

1. Before taking the measurement, turn on the IPC.
2. When the IMU and the GPS Antenna are in position, the distance from the IMU to the GPS Antenna must be measured. The distance should be measured as: X offset, Y offset, and Z offset. The error of each offset must be within one centimeter to achieve high accuracy in positioning and localization.

NovAtel ProPak6 Installation & Operation Manual:

https://www.novatel.com/assets/Documents/Manuals/OM-20000148.pdf

NovAtel SPAN-IGM-A1 Product Page:

https://www.novatel.com/products/span-gnss-inertial-systems/span-combined-systems/span-igm-a1/

### Configuring the GPS and IMU

Configure the GPS and IMU as shown below.

For the SPAN-IGM-A1:

    WIFICONFIG STATE OFF
    UNLOGALL THISPORT
    SETIMUTOANTOFFSET 0.00 1.10866 1.14165 0.05 0.05 0.08
    SETINSOFFSET 0 0 0
    LOG COM2 GPRMC ONTIME 1.0 0.25
    EVENTOUTCONTROL MARK2 ENABLE POSITIVE 999999990 10
    EVENTOUTCONTROL MARK1 ENABLE POSITIVE 500000000 500000000
    LOG NCOM1 GPGGA ONTIME 1.0

    log bestgnssposb ontime 0.5
    log bestgnssvelb ontime 0.5
    log bestposb ontime 0.5
    log CORRIMUDATASB ontime 0.01
    log mark1pvab onnew

    log imutoantoffsetsb once
    log vehiclebodyrotationb onchanged

    SAVECONFIG

For the ProPak6:

    WIFICONFIG STATE OFF
    INSCOMMAND ENABLE
    SETIMUORIENTATION 5
    ALIGNMENTMODE AUTOMATIC
    SETIMUTOANTOFFSET 0.00 1.10866 1.14165 0.05 0.05 0.08
    VEHICLEBODYROTATION 0 0 0

    COM COM1 9600 N 8 1 N OFF OFF
    COM COM2 9600 N 8 1 N OFF OFF
    INTERFACEMODE COM1 NOVATEL NOVATEL OFF
    LOG COM2 GPRMC ONTIME 1 0.25
    PPSCONTROL ENABLE POSITIVE 1.0 10000
    MARKCONTROL MARK1 ENABLE POSITIVE
    EVENTINCONTROL MARK1 ENABLE POSITIVE 0 2

    interfacemode usb2 rtcmv3 none off
    rtksource auto any
    psrdiffsource auto any

    SAVECONFIG

WARNING: Modify the SETIMUTOANTOFFSET line based on the actual measurement (of the antenna and the IMU offset). For example:

    SETIMUTOANTOFFSET -0.05 0.5 0.8 0.05 0.05 0.08

# Setting up the Network

This section provides recommendations for setting up the network.

The IPC that is running the Apollo software must access the
Internet to acquire the Real Time Kinematic (RTK) data for accurate localization. A mobile device also needs to connect to the IPC to run the Apollo software.

## Recommendations

It is recommended that you set up your network according to the following diagram:

1. Install and configure a 4G LTE router with Wi-Fi Access Point (AP) capability and Gigabit Ethernet ports.
2. Connect the IPC to the LTE router using an Ethernet cable.
3. Configure the LTE router to access the Internet using the LTE cellular network.
4. Configure the AP capability of the LTE router so that the iPad Pro or another mobile device can connect to the router, and, in turn, connect to the IPC.

It is recommended that you configure a fixed IP instead of using DHCP on the IPC to make it easier to connect to it from a mobile terminal.

You will use the components that you were required to provide to perform the following tasks:

1. Connect a monitor using the DVI or HDMI cables, and connect the keyboard and mouse, to perform debugging tasks at the car onsite.
2. Establish a Wi-Fi connection on the Apple iPad Pro to access the HMI and control the Apollo ADS that is running on the IPC.

# Next Steps

After you complete the hardware installation in the vehicle, see the Apollo Quick Start for the steps to complete the software installation.
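To implement the fixed-IP recommendation from the network section above on Ubuntu 14.04, a static stanza can be added to the IPC's `/etc/network/interfaces`. This is a sketch only: the interface name `eth0` and the `192.168.1.x` addresses are placeholder values, not values from this guide, and must match your LTE router's LAN configuration.

```shell
# Emit a static-IP stanza for the IPC's Ethernet interface.
# eth0 and the 192.168.1.x addresses are example values only.
write_static_ip_stanza() {
    cat <<'EOF'
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
EOF
}

# On the IPC (as root) you would append this to /etc/network/interfaces:
#   write_static_ip_stanza >> /etc/network/interfaces
write_static_ip_stanza
```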
Is Fix Monthly Income A Scam? In My Opinion Yes It Is!
Is Fix Monthly Income a scam or legit? They promise you a whopping $5 every time someone simply clicks on your link and $10 every time someone signs up to the site through you.
If only making money online was as easy as that!
If you're tempted to join this site then I strongly urge you to read this Fix Monthly Income review in its entirety.
I'm not affiliated in any way with this site, so I have nothing to gain except helping people like you avoid scams online.
The truth is the internet is littered with work from home scams and if you're looking to start earning money online it can be tricky to know where to start.
After reviewing hundreds of online programs there's only one program I recommend. Follow the training here and you'll learn how to create a legit and profitable online business for yourself.
With that being said, let's dive into the full Fix Monthly Income review!
What is Fix Monthly Income?
Immediately upon landing on the site I was suspicious.
It claims to make money through advertisers, so the more people visit the site, the more money it makes, and it passes some of those earnings on to you for generating referrals.
This sounds fine except for one thing: there are no ads on the site whatsoever. This should be a big red flag.
We also have no idea who is behind this program. If Fix Monthly Income were legit, they'd have terms and conditions, an earnings disclosure and a working way to reach the owners, but there's none of that.
There's a contact page with an email address (Admin@fixmonthlyincome.com) but try sending an email for support and it comes back undeliverable.
These scam sites disappear from the web once people start complaining about not getting paid and another pops up in its place.
This lazy scammer can't even be bothered to change the logo: each clone uses the same green bars design. It's the same scam throughout, and they all show the same three steps: sign up, refer your friends, and earn money.
And this is just a few examples, there are at least 20 sites out there (that I know of!) that look just like this.
Fix Monthly Income say they'll pay you for each person that clicks onto their site and they'll pay you more when someone signs up.
The way they want you to promote their site is by using a unique affiliate link they give you inside the member's area and start spamming this link across Facebook, Twitter, in emails and any other way you can think of.
So you spam your family and friends on Facebook and wait for the money to roll in? I don't think so.
Firstly, you're just going to annoy everyone who sees you spamming their Facebook timeline and no one will click on it and no one will sign up, because it's quite obviously a scam.
The minimum amount they say you can withdraw is $300 but there are lots of complaints from people angry that they never pay out.
That's right, you can watch your balance go up all you like, but it's just a number in the corner of the screen. This will never translate into cash in your account because the scammer will never pay you.
When you try to cash out, you'll be told you need to sign up for at least 3 cash offers using your credit card. These offers are subscriptions that will charge a monthly fee to your card.
And for each offer you sign up for the owner of this site will take his cut, and this is how he makes his money. Fix Monthly Income is solely designed to put money into the scam artist's pocket, not yours!
One user complains: "FixMonthlyIncome.com is a total ripoff!! They claim to pay you for visits they get from a provided link."
Is Fix Monthly Income a scam?
Yes, Fix Monthly Income is a total scam and just one of many that this guy is running. People who need fast cash fall for the empty promises of high earnings simply by copying and pasting links, and don't find out it's a scam until it's too late.
Because it's free to join, people let their guards down thinking they have nothing to lose and then willingly hand over their emails and passwords without thinking twice.
Where this scam gets really dangerous is when you're told you need to provide your social security number before you can receive that elusive payment.
With your email, password, address and telephone number, you open yourself up to all kinds of fraud and identity theft. I strongly advise you to stay away from this online scam.
If you're looking for a legit way to earn money online but are sick of all the hyped up promises and dodgy scammers, take a look at my TOP-RATED program here and get instant access to the free beginner's course.
It will walk you through how to start a profitable and successful internet business that generates a steady and growing income for you. Yes it takes work, time and patience but it is worth it.
I hope you found this Fix Monthly Income review helpful and if you have any questions at all, do go ahead and leave them below and I'll get back to you personally. If you've ever fallen victim to these types of scams please share your experience with us in the comments below.
\section{Introduction}
Stellar tidal disruption is an unavoidable outcome of collisional orbital dynamics in dense stellar systems \citep{1976MNRAS.176..633F}. The stochastic two--body relaxation of orbital parameters leads stars on a random walk through angular momentum space, eventually delivering them to pericenters close to the supermassive black hole (SMBH). Once a star's orbital pericenter falls within the tidal, or Roche, radius of the SMBH, the star will be destroyed upon pericenter passage
\citep{1975Natur.254..295H, Rees88}. The resulting tidal disruption events (TDEs) were theoretical curiosities for many years, but have been discovered in increasing numbers over the last two decades. There are now dozens of known TDEs discovered as transient nuclear flares, which have been identified primarily through quasi--thermal emission in soft X-ray \citep[e.g.,][]{1996A&A...309L..35B, Greiner+00, 2004ApJ...603L..17K,2014ApJ...781...59D}, UV \citep{2006ApJ...653L..25G,2009ApJ...698.1367G}, and optical \citep{2011ApJ...741...73V,2012Natur.485..217G,Chornock+14, 2014ApJ...793...38A, 2014MNRAS.445.3263H,2016MNRAS.455.2918H,2016MNRAS.463.3813H, vvelzen2019-ztf} wavelengths. A minority of TDEs have been observed to launch relativistic jets detectable (via non--thermal hard X--ray and soft $\gamma$--ray emission) to cosmological distances (e.g.~\citealt{2011Sci...333..203B}; \citealt{2011Sci...333..199L}). However, late--time radio followup of thermally--selected TDEs usually returns upper limits \citep{2013A&A...552A...5V, Bower+13}, suggesting that only a minority of TDEs are accompanied by very high luminosity jets \citep{Generozov+17}.
Astrophysical interest in TDEs is manifold. These flares hold great scientific potential as probes of SMBH demographics, as the mass fallback rate onto the black hole encodes the mass \citep{Rees88, Lodato+09, Guillochon&RamirezRuiz13} of the SMBH. The SMBH spin may be more subtly imprinted into TDE observables \citep{Stone&Loeb12, Guillochon&RamirezRuiz15, Hayasaki+16}. In the subset of TDEs that launch relativistic jets, radio synchrotron emission produced in the jet forward shock can place tight constraints on circumnuclear gas in distant galactic nuclei \citep{Giannios&Metzger11, 2012ApJ...748...36B}. More speculatively, these jets could be responsible for the observed flux of ultra--high energy cosmic rays \citep{2009ApJ...693..329F, Farrar&Piran14}. Exotic TDEs may serve as signposts of unusual SMBH dynamics: truncated
light curves are expected in the vicinity of close SMBH binaries
\citep{2009ApJ...706L.133L}, and off-nuclear TDEs may indicate SMBHs recoiling
after anisotropic gravitational wave emission (\citealt{2011MNRAS.412...75S}; \citealt{Jonker2012}). Finally, TDEs may also serve as natural accretion physics laboratories, as the mass fallback feeding the disk declines from super--Eddington levels to a few percent of Eddington over a period of months to years \citep{Shen&Matzner14}. As TDE accretion rates decline from super--Eddington, to modestly
sub--Eddington, to very sub--Eddington levels, their accretion disks might exhibit state changes analogous to those
of stellar--mass black holes in X--ray binaries (XRBs; \citealt{2004MNRAS.355.1105F}; \citealt{2004ApJ...603L..17K}).
Early models for TDE light curves and spectra assumed that the highly eccentric debris streams from stellar disruption would quickly circularize into a compact accretion disk \citep{Rees88, Cannizzo+90, Ulmer99} that might resemble a scaled--up XRB disk, or the innermost regions of an active galactic nucleus (AGN). A circularized TDE disk would differ from both of these analogues in its radial extent: typically, the tidal radius $R_{\rm t} \lesssim 100 R_{\rm g}$, where $R_{\rm g}$ is the SMBH gravitational radius; a scale much smaller than the typical XRB or AGN disk.
This simple expectation has, however, been strongly challenged. Recent analytic and numerical theory has found that circularization may be very slow if the debris pericenter $R_{\rm p} \gg 10 R_{\rm g}$ \citep{Shiokawa+15, Dai+15, Piran+15} and/or there is strong misalignment between the SMBH spin vector and the debris angular momentum vector \citep{Guillochon&RamirezRuiz15, Hayasaki+16}. In tandem, early--time observations have found four properties characteristic of optical/UV-selected TDEs (\citealt{2011ApJ...741...73V}; \citealt{2014ApJ...793...38A}; \citealt{2018ApJS..238...15H}): \newline {\it (i)}
low blackbody temperatures ($T_{\rm BB} \approx 2 \times 10^4~{\rm K}$) with blackbody radii
$R_{\rm BB} \sim 10^{2-3} R_{\rm g}$, {\it (ii)} little cooling
(${\rm d} \ln(T_{\rm BB})/{\rm d}t<0.01$~day$^{-1}$) over a $\sim$100 day baseline, {\it
(iii)} a steep power--law decay in observed flux $F(t)$ often consistent with
$F\propto t^{-5/3}$, and {\it (iv)} very high optical/UV luminosities, with $L_{\rm BB} \sim 10^{43.5-44.5}~{\rm erg~s}^{-1}$ near peak.
All of these properties are inconsistent with the simplest TDE emission model, which assumes emission from radii $\lesssim R_{\rm t} \sim 10 R_{\rm g}$ \citep{Ulmer99}. In this scenario, the optical/UV emission is far down the Rayleigh--Jeans tail
of the disk spectral energy distribution (SED), and therefore decays slowly in time, $L_{\rm RJ} \propto T_{\rm BB} \propto t^{-5/12}$ \citep{Lodato&Rossi11}. The predicted level of optical/UV luminosity is $L_{\rm opt} \sim 10^{41}~{\rm erg~s}^{-1}$, far lower than observed. These discrepancies have motivated multiple theoretical alternatives for the observed optical/UV emission: photon--driven \citep{2009MNRAS.400.2070S} or line--driven
\citep{Miller15} outflows; emission powered by shocks at debris
stream self-intersections \citep{Piran+15}; or thermal reprocessing of accretion power
by a layer of gas at large radii \citep{Loeb&Ulmer97, 2014ApJ...783...23G}.
Conversely, soft X--ray observations of TDEs are more qualitatively consistent with the simple picture of a compact accretion disk. Most X--ray detections of TDEs find very soft spectra, consistent with the Wien tail of (multi--color) black bodies at temperatures $T \lesssim 0.1\,{\rm keV}$
\citep{Auchettl+17}, like a scaled--up version of a high-soft
state XRB. However, these X-ray spectra are almost always taken in the first one or two years of the
flare, when accretion rates are expected to be, at the very least, at a large fraction of the Eddington limit. Notably, many optically selected TDEs go undetected in X--rays \citep{2012Natur.485..217G}
and, vice versa, X--ray selected TDEs often lack optical variability. For instance, the TDE XMMSL1~J074008.2$-$853927 reported by \citet{2017A&A...598A..29S} does not show a large enhancement in the optical. Some even show no evidence for enhanced optical emission. For instance, the TDE SDSS~J120136.02$+$300305.5 discovered by \citet{2012A&A...541A.106S} had an X--ray luminosity of 3$\times 10^{44}$erg s$^{-1}$\, at discovery while the optical spectrum obtained 12 days after the X--ray discovery shows no spectroscopic features (such as broad emission lines) that are usually associated with TDEs. A recent X--ray discovered source, XMMSL2~J144605.0$+$685735 (Saxton et al.~2019, in prep.), also shows little or no optical emission above the contribution of the nuclear region of the host galaxy.
So far, we have discussed the state of the art in {\it early--time} TDE observations, by which we mean observations taken within two years of the peak of the flare. The behavior of TDE disks at late times is relatively under--explored. We note two differences between the early-- and late--time phases:
\begin{enumerate}
\item The large theoretical uncertainties associated with circularization and disk formation will be less important long after the peak of the mass return rate. A quasi--circular disk is a more reasonable approximation at late times, even if initial circularization was inefficient due to weak apsidal precession \citep{Shiokawa+15} or misaligned SMBH spin \citep{Guillochon&RamirezRuiz15, Hayasaki+16}.
\item The monotonically declining debris fallback rate suggests that at sufficiently late times, TDE disks may pass through the range of sub-Eddington accretion rates that produces a state change in XRB disks (e.g.~\citealt{2011MNRAS.417L..51V}; \citealt{Giannios&Metzger11}; \citealt{Tchekhovskoy+14}). This analogy suggests that once TDE accretion rates decline below a few percent of Eddington, X--ray emission may exhibit features of the XRB low/hard state, such as a primarily non--thermal, hard power--law spectrum. Such ``SMBH state changes'' have not yet been seen in TDEs, although there is one suggestive example: X-ray observations of the TDE in NGC~5905 show a transition from a soft to harder spectrum at late times (\citealt{1999A&A...343..775K}).
\end{enumerate}
The search for late--time TDE X--ray emission is further motivated by the recent {\it Hubble
Space Telescope} discovery of late--time far UV (FUV) emission in six optically--selected TDEs \citep{vanVelzen+18}. In all six cases, the late--time FUV luminosities were well above the levels predicted from extrapolating a naive $\propto t^{-5/3}$ power--law. The observed slower rate of decline hints at a transition from fallback--dominated to disk--dominated accretion rates \citep{Cannizzo+90}, and the small fitted black body radii ($R_{\rm BB} \sim 2-5 R_{\rm t}$)
indicate that if optically thick reprocessing layers once existed, they have since dissipated. It is therefore reasonable to expect that many optically--selected TDEs should, at late times, be emitting relatively unobscured X--rays from their inner disks.
In this paper, we present and analyze {\it Chandra}\, observations of four optically--selected TDEs taken at late times, long after the peak of the optical flare has passed. We have observed
PTF09axc{} and PTF09ge{} 8 years after their discovery, PTF09djl{} 9 years after its discovery, and ASASSN--14ae{} 5 years after its discovery. In \S \ref{sec:obs}, we present our observations and results, and in \S \ref{sec:disc}, we discuss the implications of both detections and non--detections for broader questions in TDE and accretion physics. We adopt $\Omega_m= 0.3$, $\Omega_\Lambda=0.7$, and
$H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ to convert the redshift of each source to
luminosity distances.
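For reference, in this flat cosmology the conversion from redshift $z$ to luminosity distance follows the standard relation
\begin{equation}
d_{\rm L}(z) = \left(1+z\right)\frac{c}{H_0}\int_0^z \frac{{\rm d}z'}{\sqrt{\Omega_m\left(1+z'\right)^3 + \Omega_\Lambda}},
\end{equation}
which is evaluated numerically for each source.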
\section{Observations, analysis and results}
\label{sec:obs}
We obtained 69.19, 34.15, 9.6, 19.08 ksec long on--source {\it Chandra}\,
exposures of PTF09axc, PTF09ge, ASASSN--14ae, and PTF09djl,
respectively. The first two sources were observed under {\it Chandra} Guest Observer program 18700591, and the latter two under 20700515. The observation of PTF09axc\, was split into two parts of
53.66 and 15.53 ksec in length. The observation identification (ID)
numbers for the data presented here are 19532 (53.66 ksec) and 20879
(15.53 ksec) for PTF09axc, 19531 for PTF09ge, 21503 for ASASSN--14ae, and
21504 for PTF09djl\, with observing dates and start times (UTC) of
2017-12-08 at 23:11:32, 2017-12-06 at 18:12:18, 2017-09-28 at
20:19:15, 2018-11-17 at 21:48:37, and 2019-01-06 at 13:08:18,
respectively. A log of the observations can be found in
Table~\ref{tab:log}.
\begin{table*}
\caption{A log of the {\it Chandra}\, late--time X--ray observations of four
optically selected tidal disruption events. The time since the discovery of the optical transient is denoted with $\Delta t$ (delay).}
\label{tab:log}
\begin{center}
\begin{tabular}{ccccc}
\hline
Source & Observing date & Observation ID & Duration & Delay \\
 & MJD (UTC) & & (ksec) & ($\Delta t$; yr) \\
\hline
PTF09axc & 58095.966 & 19532 & 53.66 & 8.5 \\
PTF09axc & 58093.759 & 20879 & 15.53 & 8.5 \\
PTF09ge & 58024.847 & 19531 & 34.15 & 8.4 \\
ASASSN--14ae & 58439.909 & 21503 & 9.6 & 4.8 \\
PTF09djl & 58489.547 & 21504 & 19.08 & 9.5 \\
\end{tabular}
\end{center}
\end{table*}
In all cases, the source position as derived in the initial optical
outburst was covered by the S3 CCD of the ACIS-S detector array
(\citealt{1997AAS...190.3404G}). For the observations of PTF09axc\, and
PTF09ge, 3 CCDs were operational (besides the S3 CCD, S4 and S2 were
operational) and the full CCDs were read out providing a nominal
exposure time per frame of 3.1~sec. For the observations of
ASASSN--14ae\, and PTF09djl\, we chose to use only the S3 CCD. It was
operated in sub--array mode where only a quarter of the CCD is read out. This yields an
exposure time of 0.8~s per CCD frame.
We reprocessed and analyzed the data using the {\sc ciao} 4.10
software developed by the {\it Chandra}\, X--ray Center and employing {\sc
caldb} version 4.8.1. To allow for a thorough rejection of events
unrelated to the source such as cosmic ray hits, the data telemetry
mode was set to {\it very faint}. Using the {\sc ciao} tool {\it
wavdetect} we have detected an X--ray source in an image constructed
from the 0.3--7 keV data. The position of the X-ray source is
consistent with the optical position of the TDE in all three cases
where we detected a source close to the expected position
(see Table \ref{tab:coor}). No X--ray source was detected at the
location of the optical outburst source in the case of PTF09djl.
For the detected sources we calculate the 95\% confidence
uncertainty on the {\it Chandra}\, X--ray position using eq.~12 in \citet{2007ApJS..169..401K}, which
contains the off--axis angle and the detected number of source
counts. All our sources have been detected on--axis and the number of
{\it wavdetect}--detected counts is given in Table
~\ref{tab:coor}. This internal positional uncertainty has to be
supplemented with the external uncertainty, which includes the
uncertainty in the satellite aspect solution, and the knowledge of the
geometry and alignment of the spacecraft and focal
plane. \citet{2010ApJS..189...37E} found this external correction to
be 0.39\hbox{$^{\prime\prime}$}, which was subsequently found to be under--estimated by
0.16\hbox{$^{\prime\prime}$}\, by \citet{2011ApJS..192....8R}. The total external 95\%
confidence uncertainty of 0.55\hbox{$^{\prime\prime}$}\, needs to be added in
quadrature to the internal positional uncertainties given in Table
~\ref{tab:coor}.
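The quoted totals in Table~\ref{tab:coor} thus follow from
\begin{equation}
\sigma_{\rm tot} = \sqrt{\sigma_{\rm int}^2 + \left(0.55^{\prime\prime}\right)^2},
\end{equation}
where $\sigma_{\rm int}$ is the internal uncertainty; for example, for PTF09ge, $\sigma_{\rm tot} = \sqrt{(0.24^{\prime\prime})^2 + (0.55^{\prime\prime})^2} \approx 0.6^{\prime\prime}$.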
We use the {\sc ciao} tool {\it specextract} to extract a source
spectrum for each of the three detected sources separately, using the
best known optical coordinates for the sources (see Table~\ref{tab:coor} for references). We created source and
background regions centered on the optical position of the
sources. The circular source regions have a radius of 2\hbox{$^{\prime\prime}$}. The
background regions for PTF09axc\, and PTF09ge\, are annular with inner
and outer radii of 10\hbox{$^{\prime\prime}$} and 30\hbox{$^{\prime\prime}$}, respectively. For ASASSN--14ae,
the background is drawn from a
source--free, circular region on the same CCD (because of the smaller sky area covered due to the employment of a
sub--array in the read-out). This circular region has
a radius of 30\hbox{$^{\prime\prime}$}. We do not rebin the extracted source spectra,
although we require each channel to have at least one X--ray photon. We
report the 68\% confidence regions for fitted parameters unless
mentioned otherwise.
\begin{table*}
\caption{World Coordinate System information of our sample. }
\label{tab:coor}
\begin{center}
\begin{tabular}{cccccccc}
\hline
Source & Optical position & {\it Chandra}\, X--ray position & 95\% conf.~internal & Total 95\% conf. & Offset & Source & Ref.\\
& & & uncert.~[\hbox{$^{\prime\prime}$}] & uncert. [\hbox{$^{\prime\prime}$}] &[\hbox{$^{\prime\prime}$}] & counts& $\dagger$ \\
\hline
PTF09axc & 14:53:13.06 $+$22:14:32.2 & 14:53:13.08
$+$22:14:32.169 & 0.11 & 0.56 & 0.2 & 381 & [1]\\
PTF09axc & 223.30442 $+$22.24228 & 223.30449 $+$22.24227
& 0.11 & 0.56 &0.2 & 381 & [1]\\
\hline
PTF09ge & 14:57:03.18 $+$49:36:40.97 & 14:57:03.18 $+$49:36:40.865
& 0.24 & 0.6 &0.1 & 43 & [1]\\
PTF09ge & 224.26325 $+$49.61138 & 224.26326 $+$49.61135&
0.24 & 0.6 & 0.1 & 43& [1]\\
\hline
ASASSN--14ae & 11:08:40.12 $+$34:05:52.23 & 11:08:40.13
$+$34:05:53.045 & 0.56 & 0.78 & 0.8 & 8 & [3]\\
ASASSN--14ae & 167.16717 $+$34.09784 & 167.16719
$+$34.09807 & 0.56 &0.78 & 0.8 & 8 & [3]\\
\hline
PTF09djl & 16:33:55.94 $+$30:14:16.3 & -- & -- & -- & -- & -- & [1]\\
PTF09djl & 248.4831 $+$30.23786 & -- & -- & -- & -- & -- & [1]\\
\end{tabular}
\end{center}
\footnotesize{$\dagger$ Reference for the optical coordinates of the sources: [1]~\citet{2014ApJ...793...38A}; [3]~\citet{2014MNRAS.445.3263H}}
\tablecomments{Optical and {\it Chandra}\, X--ray coordinates of the tidal
disruption events in our sample, the offset between the two and
the number of X--ray counts detected in the observation between
0.3--7 keV. The nominal external uncertainty on the {\it Chandra}\, X--ray
coordinates is 0.55\hbox{$^{\prime\prime}$}\, at 95\% confidence. We have chosen to
add this in quadrature to the provided internal uncertainty in the
fourth column. For PTF09axc\, we report the values found in Obs ID
19532 as this is the longer of the two, providing significantly
more source counts. The coordinates found when using Obs ID 20879
are fully consistent with this. }
\end{table*}
We fitted the extracted spectra of each source individually using the
{\sc heasoft} {\sc xspec} tool version 12.10.1. We excluded photons
detected outside the range 0.3--7 keV, as this energy interval is the
best calibrated and most sensitive range for {\it Chandra}. Throughout the
spectral fitting we employ Cash statistics
(\citealt{1979ApJ...228..939C}) unless mentioned otherwise. For each
source we fit the background spectrum separately first. A power law is
an adequate, first order, description of the background spectrum (see
Table~\ref{tab:xfit}). When fitting the source spectrum, the background
is described using the shape and parameters fixed to those derived
from the separate background fit. We scale the normalization of the
power law model (that describes the background) on the
basis of the ratio between the size of the source region and that of
the background region.
\subsection{PTF09axc}
PTF09axc\, has a redshift of $z=0.1146$ ($d_L=532.6$ Mpc) and is
associated with the galaxy SDSS~J145313.07$+$221432.2
(\citealt{2014ApJ...793...38A}). Given the relatively high observed
count rate of PTF09axc\, we investigate if the source spectrum is
affected by pile--up by employing the {\sc ciao} tool {\sc
pileup\_map} on an image created including all photon energies for
both observations of PTF09axc. The count rate per frame in both
observations is less than 0.02, implying a pile--up fraction lower
than 1\%. Therefore, we conclude that pile--up is insignificant for
our observations of PTF09axc\, and by extension, given that the other
sources we observed have a lower count rate per frame, those spectra
are not affected by pile--up either.
In the fit we take the attenuating effect of Galactic foreground
extinction into account. To model this effect we use the {\sc xspec}
{\sc phabs} multiplicative model, where we convert the $A_V=0.098$ for
Galactic foreground extinction obtained through NED
(\citealt{2011ApJ...737..103S}) to an $N_H=1.8\times 10^{20}$
cm$^{-2}$ using the relation $N_H=1.79 A_V \times 10^{21}$ cm$^{-2}$
\citep{1995A&A...293..889P}. The
value of $N_H$ is kept fixed during the fit. We employ the {\sc
xspec} fit--function {\sc pegpwr $+$ phabs $\times$ pegpwr}. Here and below,
we note that in all cases the normalisation of the {\sc pegpwr}
function is equal to the unabsorbed 0.3--7 keV flux.
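The extinction--to--column conversion above is a simple linear scaling. As a quick check, a minimal sketch assuming only the \citet{1995A&A...293..889P} coefficient quoted in the text:

```python
# Sketch of the A_V -> N_H conversion used in the text
# (N_H = 1.79 A_V x 10^21 cm^-2; Predehl & Schmitt 1995).

def nh_from_av(a_v):
    """Galactic hydrogen column density (cm^-2) from visual extinction A_V (mag)."""
    return 1.79e21 * a_v

# PTF09axc: A_V = 0.098 from NED
print(f"N_H = {nh_from_av(0.098):.2e} cm^-2")  # ~1.8e20 cm^-2, as adopted in the fit
```

The same scaling gives $\approx 8\times 10^{19}$ cm$^{-2}$ for the $A_V\approx0.05$ of the other three hosts, consistent with the rounded--off $1\times10^{20}$ cm$^{-2}$ adopted below.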
Fitting the two observations together, the spectrum of PTF09axc\, is
well--fit by a power law with index $\Gamma$=2.5$\pm$0.1, with an
unabsorbed 0.3--7 keV flux of (9.5$\pm$0.6)$\times 10^{-14}$erg cm$^{-2}$ s$^{-1}$\,
translating to a 0.3--7 keV luminosity L$_X=(3.2\pm0.2)\times 10^{42}$erg s$^{-1}$. Here and below, the quoted luminosity uncertainty includes only the uncertainty in the flux measurement, not that in the distance determination. The
observed, absorbed, 0.3--7 keV flux is
(8.5$\pm$0.5)$\times 10^{-13}$erg cm$^{-2}$ s$^{-1}$. The C-statistic of the fit was 226.6 for 223 bins and 221 degrees of freedom. Using the {\sc goodness} command in {\sc xspec} we obtained that 100\% of the realizations yield a lower fit statistic.
For reference, given the observed number of background events
extracted in the background region (1720 for obs ID 19532 and 479 in
obs ID 20503) one expects that out of the 447 detected counts at the
source position (375 and 72 for the two obs IDs, respectively), 11 are
due to the background (8.5 and 2.4 for the two obs IDs, respectively).
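The background expectation quoted above follows from aperture--area scaling of the annular background region onto the source aperture; a minimal sketch with the radii given in the extraction description:

```python
import math

# Sketch: scale counts in the annular background region (10"-30" radii)
# to the 2"-radius source aperture, as described in the text.
def expected_bkg_counts(bkg_counts, r_src=2.0, r_in=10.0, r_out=30.0):
    area_ratio = (math.pi * r_src**2) / (math.pi * (r_out**2 - r_in**2))
    return bkg_counts * area_ratio  # area ratio is 1/200 for these radii

# Obs IDs 19532 and 20503: 1720 and 479 background counts, respectively
print(expected_bkg_counts(1720))  # ~8.6 counts (~8.5 quoted in the text)
print(expected_bkg_counts(479))   # ~2.4 counts
```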
To check our results, we rebinned the data of obs ID 19532 (the longer
of the two observations), requiring that each bin contains at least 30
counts. We subtracted the background and fit the resulting spectrum
with a power law attenuated by the foreground Galactic extinction
employing Chi--squared statistics. The result is fully consistent with
that obtained fitting the unbinned data on both data sets. Given the high number of counts detected, we produced a light curve of the observation with ID 19532 with 1 ksec--long bins to investigate whether
flares are present; none were found.
\subsection{PTF09ge}
PTF09ge\, has a redshift of $z=0.064$ ($d_L=287.4$ Mpc) and is
associated with the galaxy SDSS~J145703.17$+$493640.9
(\citealt{2014ApJ...793...38A}).
The spectrum of PTF09ge\, is relatively soft compared to the spectrum
of PTF09axc: no photons with energies above 2 keV are detected. We
fitted the source spectrum with a redshifted black body including a
power law model for the background using Cash statistics. As for
PTF09axc\, our fit--function includes a factor to model the foreground
extinction, $N_H$. For this we use a rounded--off value of
1$\times 10^{20}$ cm$^{-2}$ given the $A_V=0.046$ from NED
(\citealt{2011ApJ...737..103S}). The value of $N_H$ is kept fixed during the fit.
Fitting the source and background together, we use a fit--function of
an absorbed, redshifted black body for the source plus a power law for
the background ({\sc pegpwr $+$ phabs $\times$ zashift $\times$
bbodyrad} in {\sc xspec}). We find a best-fit value for the black body temperature of
0.18$\pm$0.02 keV. The unabsorbed source flux, subtracting the flux
due to the background power law in the 0.3--7 keV range is
1.9$\times 10^{-14}$erg cm$^{-2}$ s$^{-1}$\, giving a 0.3--7 keV luminosity of
L$_X=2 \times 10^{41}$erg s$^{-1}$. The absorbed 0.3--7 keV flux is
$(1.7^{+0.3}_{-0.5})\times 10^{-14}$erg cm$^{-2}$ s$^{-1}$. The C-statistic of the fit was 34.5 for 29 bins and 27 degrees of freedom. Using the {\sc goodness} command in {\sc xspec} we obtained that 98\% of the realizations yield a lower fit statistic (when all simulations are drawn from the best-fit model). The bolometric source luminosity is 2.7$\times 10^{41}$erg s$^{-1}$.
As the fit shows some notable residuals (it mostly under-predicts the flux at low energies), we also try the simple fit--function used for PTF09axc\, ({\sc pegpwr $+$ phabs $\times$ pegpwr} in {\sc xspec}). For this power law fit we find a best-fit value for the power law index of
3.9$\pm$0.4, and an unabsorbed source flux in the 0.3--7 keV range of
$3.9^{+1.2}_{-0.9}\times 10^{-14}$erg cm$^{-2}$ s$^{-1}$\, giving a 0.3--7 keV luminosity of
L$_X= 3.9^{+1.1}_{-1.0}\times 10^{41}$erg s$^{-1}$. The absorbed 0.3--7 keV flux is
$(3.5\pm0.9)\times 10^{-14}$erg cm$^{-2}$ s$^{-1}$. The C-statistic of the fit was 25.2 for 29 bins and 27 degrees of freedom. Using the {\sc goodness} command in {\sc XSPEC} we obtained that 58\% of the realizations yield a lower fit statistic (again when all simulations are drawn from the best-fit model).
\begin{figure*}[ht]
\centering
\includegraphics[width=.48\linewidth]{ptf09axc.pdf}
\includegraphics[width=.48\linewidth]{spec09gepl.pdf}
\caption{{\it Left panel:} We show the {\it Chandra}\, ACIS--S spectrum of PTF09axc fitted with a power law folded through the detector response. The black data points are from observation ID 19532 and the red (/grey) data points are from observation ID 20879. The best--fit power law index is 2.5$\pm$0.1. {\it Right panel:} The {\it Chandra}\, ACIS--S spectrum of PTF09ge fitted with a power law. The best--fit power law index is 3.9$\pm$0.4, consistent with the slope of the Wien tail of a black body that peaks in the extreme UV.}
\label{plot:xspectra}
\end{figure*}
\subsection{ASASSN--14ae}
ASASSN--14ae\,has a redshift of $z=0.043671$ ($d_L=193.3$ Mpc) and is
associated with the galaxy SDSS~J110840.11$+$340552.2
(\citealt{2014MNRAS.445.3263H}). For foreground extinction, $N_H$, we
use a rounded--off value of 1$\times 10^{20}$ cm$^{-2}$ given the
$A_V=0.048$ from NED (\citealt{2011ApJ...737..103S}). The
value of N$_H$ is kept fixed during the fit.
Eight photons are detected at a position consistent with that of the
optical source in outburst. Owing to the relatively short exposure
compared to the other observations we report on in this manuscript, on average only 0.3
background counts would fall in the source extraction region. Given this very low background event rate, the eight--count detection is highly significant: it occurs by chance in approximately one out of 8$\times 10^8$ cases. For our spectral analysis of these eight photons we do not correct for this expected background. We fitted for the
power law index and normalisation in the fit--function {\sc phabs
$\times$ pegpwr} in {\sc xspec}. The best--fit power law index is
$\Gamma=$ 3.2$\pm$1.0. The unabsorbed 0.3-7 keV flux is
$(2^{+2}_{-1})\times 10^{-14}$erg cm$^{-2}$ s$^{-1}$, giving a 0.3--7 keV luminosity of
$(9^{+9}_{-5})\times 10^{40}$erg s$^{-1}$. The absorbed 0.3--7 keV flux is
(1.8$\pm0.8$)$\times 10^{-14}$erg cm$^{-2}$ s$^{-1}$. The C-statistic of the fit was 17.6 for 8 bins and 6 degrees of freedom. Using the {\sc goodness} command we obtained that 96\% of the realizations yield a lower fit statistic (again when all simulations are drawn from the best-fit model).
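The chance probability quoted for the eight--count detection follows from the Poisson tail of the 0.3--count background expectation; a minimal sketch:

```python
import math

# Sketch: probability of >= 8 background counts when only 0.3 are expected
# (the significance estimate for the ASASSN-14ae detection).
def poisson_tail(k_obs, mu):
    """P(X >= k_obs) for X ~ Poisson(mu)."""
    cdf = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(k_obs))
    return 1.0 - cdf

p = poisson_tail(8, 0.3)
print(f"P(>=8 | mu=0.3) = {p:.1e}")  # ~1.2e-9, i.e. about 1 in 8e8, as quoted
```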
\subsection{PTF09djl}
PTF09djl{} has a redshift of $z=0.184$ ($d_L=893.2$ Mpc) and is associated
with the galaxy SDSS~J163355.96$+$301416.6
(\citealt{2014ApJ...793...38A}). For foreground extinction, $N_H$, we
use a rounded--off value of 1$\times 10^{20}$ cm$^{-2}$ given the
$A_V=0.049$ from NED (\citealt{2011ApJ...737..103S}). The
value of $N_H$ is kept fixed during the fit.
No X--ray photons with energies between 0.3--7 keV have been
detected in a circle with a radius of 1\hbox{$^{\prime\prime}$}\, centered on the
optical outburst position of PTF09djl. We estimate the average
background photon rate in 0.3--7 keV by extracting the detected counts
in a circular region with a radius of 30\hbox{$^{\prime\prime}$}\, close to the source
where no sources were found when using the {\sc wavdetect} tool with
default parameters. A total of 110 background photons are detected in such a region
centered on coordinates RA 16:33:52.17 Dec.~$+$30:13:44.9,
implying that on average 0.12 background counts are expected in a
1\hbox{$^{\prime\prime}$}\, circular region.
Following \citet{1991ApJ...374..344K} and
\citet{1984NIMPA.228..120H}, we derive a 95\% confidence upper limit
on the number of detected source counts in the 0.3--7 keV band of 3.
To convert this to a limit on the flux, we divide the upper limit on
the detected number of source counts by the on--source time of this
observation to obtain an upper limit on the source count rate. Next,
we use two models for the spectral shape of the source: a blackbody
with a temperature of 180 eV similar to that found for PTF09ge, or a power law
with index of 2.5, as was found for PTF09axc. The attenuating effect of the $N_H$ derived above is marginal and therefore ignored. W3PIMMS\footnote{https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl} provides a
95\% upper limit to the (absorbed) 0.3--7 keV X--ray flux of
F$_X \leq 3 \times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$\, both for the power law model and
for the black body model. This yields an upper limit to the source 0.3--7 keV
luminosity of L$_X \leq 3 \times 10^{41}$ erg s$^{-1}$\, for both models.
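The three--count upper limit above can be reproduced with the Bayesian formalism of \citet{1991ApJ...374..344K}, which becomes particularly simple for zero detected counts; a minimal sketch:

```python
import math

# Sketch: with zero detected counts, the Kraft et al. (1991) Bayesian
# posterior on the source counts s reduces to p(s) = exp(-s) for s >= 0,
# independent of the (known) 0.12-count background expectation.
# The 95% upper limit S solves: integral_0^S exp(-s) ds = 0.95.
def poisson_ul_zero_counts(confidence=0.95):
    return -math.log(1.0 - confidence)

print(f"95% upper limit: {poisson_ul_zero_counts():.2f} counts")  # ~3.0, as quoted
```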
\begin{table*}
\caption{X--ray spectral fit--parameters for PTF09axc, PTF09ge\, and ASASSN--14ae. The normalization of the background has been scaled down to match the source photon extraction area (the scaling factor was 200 for PTF09axc\, and PTF09ge\, and 900/4 for the circular background region for ASASSN--14ae).}
\label{tab:xfit}
\begin{center}
\begin{tabular}{cccccc}
\hline
Source & Background & Background & Source model & Source flux (absorbed; 0.3--7 keV) & Luminosity (0.3--7 keV)\\
& Power law index $\Gamma$ & Flux erg cm$^{-2}$ s$^{-1}$ & Power law index $\Gamma$ & Flux erg cm$^{-2}$ s$^{-1}$ & erg s$^{-1}$\\
\hline
PTF09axc & 0.7$\pm$0.1 & 3.2$\times 10^{-16}$& 2.5$\pm$0.1 & $(8.5\pm$0.5)$\times 10^{-13}$ & $(3.2\pm0.2)\times 10^{42}$\\
PTF09ge & 0.3$\pm$0.2 & 3.7$\times 10^{-16}$& 3.9$\pm$0.4 & $(3.5\pm0.9)\times 10^{-14}$ & $3.9^{+1.1}_{-1.0}\times 10^{41}$\\
ASASSN--14ae & -- & -- & 3.2$\pm$1.0& $(1.8\pm0.8)\times 10^{-14}$ & $(9^{+9}_{-5})\times 10^{40}$\\
PTF09djl & -- & -- & 2.5$^*$ & $<3\times 10^{-15}$ & $<3\times 10^{41}$ \\
\end{tabular}
\end{center}
\footnotesize{$^*$ Parameter fixed to this value. When using a 0.18 keV black body to convert the derived upper limit on the number of source photons to flux, the same flux limit as reported for the power law spectral shape is obtained.}
\end{table*}
\section{Discussion}
\label{sec:disc}
We observed four optically selected TDEs in X--rays using the {\it Chandra}\, satellite. One source, ASASSN--14ae, was observed 4.8 yr after its discovery by \citet{2014MNRAS.445.3263H}, while the other three sources were observed $\approx$8--10 yr after their discovery by \citet[][]{2014ApJ...793...38A}. Three of the four sources were detected; only PTF09djl\, remains undetected. The X--ray detections of PTF09axc\, and PTF09ge\,
are especially interesting in conjunction: the X--ray spectrum of PTF09axc{} is well--fit with
a power--law ($\Gamma=2.5\pm0.1$); conversely, our observations of PTF09ge{} are
well--fit by a blackbody Wien tail that manifests itself in the 0.3--7 keV {\it
Chandra} band as a very soft, $\Gamma=3.9\pm0.4$ power law. Finally, for ASASSN--14ae{},
the number of detected X--ray photons is too low for a meaningful spectral fit. Our {\it Chandra} detections are consistent with the 2014 upper limit of L$_X<2.3\times 10^{42}~$erg s$^{-1}$\, for PTF09ge{}, the 2014 detection of L$_X = 7.1_{-3.1}^{+12}\times 10^{42}~$erg s$^{-1}$\, for PTF09axc{} \citep{2014ApJ...793...38A}, and the 2014 upper limit of $L_{\rm X} < 1.3 \times 10^{41}~{\rm erg~s}^{-1}$ for ASASSN--14ae{} \citep{2014MNRAS.445.3263H}.
Our results indicate that optically--selected TDEs may maintain a substantial X--ray luminosity for at least $\sim 5-10~{\rm yr}$ post-peak, long after the optical emission has become undetectable.
Notably, several optically selected TDEs have stringent early--time
X--ray upper limits around or below the luminosities seen in the three sources we detected at late times. For instance, \citet{2012Natur.485..217G} provide a non--detection for the optical/UV selected TDE PS1--10jh, with an upper limit to the 0.2--10 keV X--ray luminosity of $<5.8\times 10^{41}$erg s$^{-1}$. \citet{nadia-iptf} report a marginal detection of the TDE iPTF16fnl in stacked observations with a 0.3--10 keV luminosity of $2.4^{+1.9}_{-1.1}\times 10^{39}$erg s$^{-1}$, and \citet{hung-2017} did not detect the TDE iPTF16axa down to a 0.3--10 keV luminosity limit of $<3 \times 10^{41}$erg s$^{-1}$. As a caveat, we note that these reported upper limits were provided for the 0.2/0.3--10 keV band, whereas in Table~\ref{tab:xfit}, we report 0.3--7 keV luminosities. For spectral shapes with power-law index of 2 (typically assumed for the above cases), these upper limits would be 10--20\% lower when converted to the 0.3--7 keV band.
The late--time detection of X--ray emission in PTF09axc{}, PTF09ge{}, and ASASSN--14ae{} provides further evidence against the alternative hypothesis that most claimed TDE candidates are, in reality, exotic nuclear supernovae \citep{Saxton+18}. Supernova (SNe) explosions are not generally bright in X--ray wavelengths, and even among those that are X--ray bright, none are observed to emit above $\sim 10^{39}~{\rm erg~s}^{-1}$ at times $\gtrsim 10^4$ days post--peak \citep{Dwarkadas&Gruszko12}. This upper limit is far below even the late--time luminosity detected for ASASSN--14ae{}. Our observations complement late--time FUV detections of six TDE candidates (including PTF09ge{}) by \citet{vanVelzen+18}, which also argue against a ``peculiar SNe'' interpretation.
Our results also constrain the hypothesis that PTF09axc{} may represent extreme optical variability in a low-luminosity AGN. This interpretation was first raised in \citet{2014ApJ...793...38A}, who observed a weak [O~III] emission feature with luminosity $L_{\rm [O~III]}= (2.4\pm 0.3)\times 10^{39}~{\rm erg~s}^{-1}$. This feature is not conclusive evidence of an AGN, and could also be produced by star formation, but in conjunction with the 2014 X--ray detection of the host galaxy, it has cast doubt on the TDE status of PTF09axc{} (see e.g. \citealt{Auchettl+17}). Our X--ray luminosity measurement strengthens the case that PTF09axc{} is indeed a bona fide TDE. Using an empirical relationship between the [O~III] and 3--20 keV luminosities in AGN \citep{2005ApJ...634..161H}, we can estimate the range of [O~III] line luminosities expected if our X--ray detection were of AGN origin (the scatter in this relationship is $\sigma=0.51$ dex, i.e.~a factor $\approx$3.24). Converting our 0.3--7 keV luminosity to the 3--20 keV band using W3PIMMS, PTF09axc\, has an $L_X$ (3--20 keV) of $8\times 10^{42}$erg s$^{-1}$, and therefore the predicted AGN luminosity for the [O~III] line would be
$L_{\rm [O~III]}\approx 5.7\times 10^{40}$erg s$^{-1}$, which is a factor $\approx 24$ higher than the actual $L_{\rm [O~III]}$ measured by \citet{2014ApJ...793...38A}. The predicted value of $L_{\rm [O~III]}$ is inconsistent with the observed value at the 2.7$\sigma$ level, making a conventional AGN origin for the X-ray and [O~III] luminosity unlikely.
The detected {\it Chandra}\, luminosities of PTF09ge{} and ASASSN--14ae{} can be compared with the late--time FUV luminosities reported for those sources by \citet{vanVelzen+18}. FUV detections of these sources were used to produce disk models and estimates for a range of quasi--thermal soft X--ray luminosities; the range of modeled X--ray predictions is particularly sensitive to the dimensionless SMBH spin parameter, $\chi_\bullet$.
While our detection of ASASSN--14ae{} is compatible with the lower end (i.e.~retrograde disk and large $|\chi_\bullet|$) of the predicted range $\log_{10}[L_{\rm X}/({\rm erg}~{\rm s}^{-1})] = 41.7^{+1.3}_{-0.9}$, our detection of PTF09ge{} is considerably brighter than the predicted range $\log_{10}[L_{\rm X} /({\rm erg}~{\rm s}^{-1})] = 37.0^{+3.6}_{-2.6}$ (\citealt{vanVelzen+18}, where the fiducial predictions correspond to assuming $\chi_\bullet=0$, and the lower and upper error bars correspond to assuming $\chi_\bullet = -0.9$ and $\chi_\bullet = 0.9$, respectively). This discrepancy could be reconciled by invoking even larger values of prograde SMBH spin and/or a SMBH mass somewhat smaller than the fiducial prediction of the $M_\bullet - \sigma$ relationship \citep{wevers-masses-ii}. Unfortunately, PTF09axc\, was not observed at late times in the FUV.
Interestingly, PTF09djl{}, which went undetected in the X--rays (with a 0.3--7 keV upper limit of $3\times 10^{41}$erg s$^{-1}${}), was detected in the FUV at 3$\times 10^{42}$~erg s$^{-1}${}, leading to a predicted X--ray luminosity range $\log_{10}[L_{\rm X} /({\rm erg}~{\rm s}^{-1})] = 41.5^{+1.6}_{-1.1}$. Our non--detection is compatible with this prediction for any range of retrograde SMBH spin values. While there are a number of important caveats associated with the late--time X--ray luminosity predictions from \citet{vanVelzen+18}, the strong sensitivity of quasi--thermal X--ray emission to $\chi_\bullet$ in late--time TDE disks underlines the value of multiwavelength, late--time observations for constraining SMBH spin. We will return to this subject in Section~\ref{sec:retro}.
\subsection{Disk state changes}
Stellar--mass black holes that accrete from companion stars are visible as X--ray binaries. The X--ray emission from these disks exhibits a wide variety of spectral properties, or ``states'' (e.g.~\citealt{1989A&A...225...79H}; \citealt{2004ARA&A..42..317F})\footnote{Formally, both timing and spectral properties are necessary for the identification of states (\citealt{1989A&A...225...79H}). Regrettably, the low number of detected X-ray photons in our late-time TDE observations precludes us from a meaningful X--ray timing study.}.
Two of the most commonly observed states, the high--soft and low--hard state, are characterized by quasi--thermal and power--law spectra, respectively. Soft states often show sub--dominant power--law X--ray contributions from thermal seed photons up--scattered by an electron corona. One of the important variables controlling the accretion state of an XRB disk is the dimensionless mass accretion rate $\dot{m} \equiv \dot{M} / \dot{M}_{\rm Edd}$, where $\dot{M}$ is the physical accretion rate and $\dot{M}_{\rm Edd}$ is the Eddington--limited accretion rate. Because $\dot{M}$ in X--ray binary disks can vary greatly on humanly observable timescales, state changes are often observed, typically following a hysteresis pattern \citep{Maccarone-hyst2003}. When a source in a high--soft state experiences a persistent decline in $\dot{m}$, it will typically transition to a low--hard state once $\dot{m}$ falls below a threshold value $\sim 0.03$ \citep{Maccarone-hyst2003}. However, some variation in this transition luminosity (as a fractional Eddington luminosity) has been observed: \citet{Kalemci+13} find a soft--to--hard X--ray state change at an Eddington ratio of $\dot{m} = 0.0030 \pm 0.0041$, and, on the extreme end, \citet{2019arXiv190508497C} find a recent outburst of the candidate black hole XRB MAXI~J1535-571 in which the soft--to--hard spectral state change seems to occur at a fraction 1.2--3.3$\times 10^{-5}$ of the Eddington luminosity (see also \citealt{2003A&A...409..697M} for a discussion of variation in Eddington fraction for state changes in XRBs).
There is some evidence that analogous state changes occur in AGN accretion disks around SMBHs (e.g.~\citealt[][and references therein]{maccarone03}). However, as the viscous times in AGN disks are typically much longer than reasonable observational baselines, it is not easy to observe state changes in AGN. A further difficulty is that in the soft X--rays, AGN spectra are generally dominated by power--law or reflection contributions. This is because the peak of the thermal blackbody disk emission occurs in the extreme UV, where observations are hindered by gas and dust extinction (although a soft spectral component can sometimes be discerned, e.g. \citealt{done2012}).
Compared to standard AGN, TDE disks are probably more favorable laboratories for observing ``scaled up'' state changes around SMBHs \citep{Giannios&Metzger11, Tchekhovskoy+14}. The main reason is that the accretion disks expected to form in TDEs are much smaller than AGN disks, implying shorter time scales. If we consider a steady--state Shakura--Sunyaev disk with dimensionless viscosity $\alpha$, constant aspect ratio $H/R$, and an outer edge $R_{\rm d}$, the viscous time scales as $\propto R_{\rm d}^{3/2}$. Late--time TDE disks should be geometrically thin and mostly circularized, and have an outer radius $R_{\rm d} \sim 2R_{\rm p} = 2R_{\rm t}/\beta$, where $\beta\sim 1$ is the penetration parameter of the TDE, and the tidal radius is
\begin{align}
R_{\rm t} =& R_\star \left(\frac{M_\bullet}{M_\star}\right)^{1/3} \\
\approx & 2 \times 10^{-6}~{\rm pc} \left( \frac{M_\bullet}{10^6 M_\odot} \right)^{1/3} \left( \frac{M_\star}{M_\odot} \right)^{-1/3} \left( \frac{R_\star}{R_\odot} \right). \notag
\end{align}
Here $M_\star$ and $R_\star$ are the mass and radius of the victim star, and we see that both $R_{\rm t}$ and $R_{\rm d}$ are far smaller than the typical radius of an AGN accretion disk: for example, an AGN broad line region with 5100$\AA$ luminosity $\lambda L_\lambda$ has a typical scale $R_{\rm BLR} \approx 0.026 \big( \frac{\lambda L_\lambda(5100\AA)}{10^{44}~{\rm erg~s^{-1}}}\big)^{0.7}$ pc \citep{kaspi-2000}, a factor of $\sim 10^{4}$ larger than the typical TDE disk.
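Evaluating the tidal radius above for fiducial parameters is straightforward; a minimal sketch, assuming standard CGS values for the solar radius:

```python
# Sketch: evaluate the tidal radius R_t for a solar-type star and a
# 1e6 Msun SMBH, reproducing the ~2e-6 pc scale quoted in the text.
R_SUN_CM = 6.957e10   # cm
PC_CM = 3.086e18      # cm per parsec

def tidal_radius_pc(m_bh_msun, m_star_msun=1.0, r_star_rsun=1.0):
    r_t_cm = r_star_rsun * R_SUN_CM * (m_bh_msun / m_star_msun) ** (1.0 / 3.0)
    return r_t_cm / PC_CM

print(f"R_t = {tidal_radius_pc(1e6):.1e} pc")  # ~2.3e-6 pc
```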
Shortly after disruption, the peak mass fallback rate onto the SMBH will generally be super--Eddington, with a peak fallback rate $\dot{M}_{\rm peak} = \frac{1}{3}M_\star / t_{\rm fall}$, where
\begin{equation}
t_{\rm fall} \approx 3.5\times10^6~{\rm s}~ \left( \frac{M_\bullet}{10^6 M_\odot} \right)^{1/2} \left( \frac{M_\star}{M_\odot} \right)^{-1} \left( \frac{R_\star}{R_\odot} \right)^{3/2} \label{eq:tFall}
\end{equation}
is the fallback time for the most tightly bound debris. In Eddington units, this is \citep{stone+2013}
\begin{equation}
\frac{\dot{M}_{\rm peak}}{\dot{M}_{\rm Edd}} \approx 130 \left( \frac{M_\bullet}{10^6 M_\odot} \right)^{-3/2} \left( \frac{M_\star}{M_\odot} \right)^{2} \left( \frac{R_\star}{R_\odot} \right)^{-3/2}.
\end{equation}
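The $\approx$130 prefactor in the peak Eddington ratio can be checked numerically; a minimal sketch, where the radiative efficiency $\eta=0.1$ in $\dot{M}_{\rm Edd} = L_{\rm Edd}/(\eta c^2)$ is an assumption made here rather than a value stated in the text:

```python
# Sketch: peak fallback rate in Eddington units for a solar-type star
# and a 1e6 Msun SMBH. The radiative efficiency eta = 0.1 is assumed
# for illustration (not stated explicitly in the text).
M_SUN_G = 1.989e33    # g
C_CM_S = 2.998e10     # cm/s

def mdot_peak_over_edd(m_bh_msun, m_star_msun=1.0, r_star_rsun=1.0, eta=0.1):
    t_fall = 3.5e6 * (m_bh_msun / 1e6)**0.5 / m_star_msun * r_star_rsun**1.5  # s
    mdot_peak = (m_star_msun * M_SUN_G / 3.0) / t_fall    # g/s
    l_edd = 1.26e38 * m_bh_msun                           # erg/s
    mdot_edd = l_edd / (eta * C_CM_S**2)                  # g/s
    return mdot_peak / mdot_edd

print(f"{mdot_peak_over_edd(1e6):.0f}")  # ~135, close to the ~130 prefactor quoted
```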
If circularization is efficient, the disk accretion rate $\dot{M}$ will track the (super--Eddington) mass fallback rate, and therefore the most relevant stellar--mass point of comparison might seem to be ultra--luminous X--ray sources (ULXs), rather than high--soft XRBs (which are generally sub-Eddington). Contrary to this supposition, early--time soft X--ray detections of TDE candidates generally find quasi--thermal spectra that {\it are} analogous to a high--soft state \citep{1999A&A...349L..45K, Greiner+00}, particularly in the best-characterized TDEs \citep{2016MNRAS.455.2918H, gezari-15oi-2017, Wevers+19}, although we note that given the limited pass--band (typically 0.2--10 keV at best) it is difficult to rule out the soft ULX state (cf.~\citealt{gladstone2009}).
However, even in the limiting case of rapid circularization, the super--Eddington phase is expected to last only a fraction of the time TDEs are typically observed. Given the absence of observed state changes from a super--Eddington, ULX--like state to a sub--Eddington, high--soft state, we deem it likely that X--ray bright TDEs are seen mostly in the equivalent of the XRB soft state. As we will discuss in Section~\ref{sec:XvsO}, the absence of super--Eddington emission may be related to a delay before the sources are detected in X--rays. A soft, quasi-thermal spectrum will no longer be a reasonable expectation (i) at late enough times, once $\dot{m}$ becomes very sub--Eddington,
or (ii) if circularization is highly inefficient and $\dot{m} \ll 1$ always. Because $\dot{M}/\dot{M}_{\rm Edd}$ steadily decreases during late stages of a TDE
flare, we may expect a late--time transition to the SMBH equivalent of the XRB low--hard state.
Observationally, TDE candidates with soft spectra containing an additional hard, power--law X--ray spectral
component do exist \citep[e.g.~][]{2016MNRAS.463.3813H, 2017A&A...598A..29S,saxton2017}, much like XRB soft states where a sub--dominant power-law component
also exists. Another example is the X--ray selected TDE 2XMMi~J184725.1$-$631724
(\citealt{2011ApJ...738...52L}). It showed an X--ray spectrum that was
well--fit by a soft thermal component with a temperature of
approximately 60 eV plus a (soft) power law with a photon index of
around 3--4 contributing around 10--15\% to the total 0.2--10 keV
luminosity (at the first detection of the outburst, in Sept 2006). The
temperature of the soft component had risen to around 90 eV nine
months later as measured by {\it XMM--Newton}{}, with a power--law contribution of
5--10\%. The X--ray spectrum in the TDE candidate RX~J1242$-$1119 changed from a power--law with $\Gamma\approx 5$ (so a very soft spectrum that could also be fit with a blackbody with a temperature of 0.06 keV) to $\Gamma\approx2.5$ at late times (\citealt{1999A&A...349L..45K,2004ApJ...603L..17K}), signifying a potential state change.
These exceptions aside, the best--studied TDE X--ray spectra are qualitatively closer to an XRB high--soft state than they are to AGN power laws. The reasons for this are unclear, but likely involve the higher blackbody temperature of TDE disks near the ISCO, due to (i) the smaller SMBH masses in TDEs relative to most AGN \citep[compare the SMBH mass distributions in][]{Woo-Urry-2002,2017MNRAS.471.1694W,wevers-masses-ii}; (ii) the higher early--time Eddington fraction expected for TDEs in comparison to typical AGN \citep{Kauffmann&Heckman09}; (iii) a bias towards prograde spinning SMBHs for X--ray selected TDEs (see $\S$~\ref{sec:rates}) enabling a smaller value for the innermost stable circular orbit (ISCO). Early--time TDE X--ray spectra often appear even more thermally dominated than the typical XRB high--soft state, possibly indicating difficulty in forming a Compton scattering corona.
Our interpretation of the spectral properties of PTF09axc{} and PTF09ge{} follows straightforwardly from the XRB analogy: PTF09axc{} has undergone a state change to the SMBH analogue of the low--hard state, but this type of change has not yet occurred for PTF09ge{}, which likely remains in an analogue of the high--soft state. This hypothesis is complicated by the Eddington ratios we observe. Using literature estimates for the SMBH masses \citep{wevers-masses-ii} and accounting for both the one--sigma scatter of the underlying $M_\bullet - \sigma$ relation and the uncertainty in our X-ray luminosity estimates, we find that PTF09axc{} was observed at an Eddington fraction of $\dot{m} = 5.4_{-3.8}^{+12}\times 10^{-2}$; PTF09ge\, was observed at an Eddington fraction of $\dot{m} = 1.6_{-1.1}^{+3.3}\times 10^{-3}$; and ASASSN--14ae{} at $\dot{m} = 2.8_{-2.4}^{+13}\times 10^{-3}$. The simplest theoretical expectation might be that the TDE disk with the lower Eddington ratio, PTF09ge{}, should have undergone a state change prior to one with a higher Eddington ratio (PTF09axc{}). However, we note that in XRBs, the emergence of a coronal
power--law and the ensuing state change is regulated not only by the accretion rate $\dot{m}$ but also by an additional parameter (cf.~\citealt{2001ApJS..132..377H}, where the second parameter is interpreted as the fractional size of the Comptonizing region). Furthermore, TDEs differ from standard accretion disks in several ways, and there are other plausible ``hidden
variables'' that may be acting to prevent the emergence of a corona in
PTF09ge. For example, the relatively weak magnetic fields of main
sequence stars may mean that TDE disks are born with extremely low
magnetizations\footnote{Indeed, TDE disks may be so starved of
magnetic flux that initial angular momentum transport may be
dominated by exotic processes such as the Papaloizou-Pringle
instability \citep{2018MNRAS.474.1737N} or fallback shocks \citep{Chan+19} rather than the usual
magnetorotational instability.}. Since coronal electron populations
are thought to be accelerated to relativistic energies in magnetic
reconnection events \citep{2001MNRAS.321..549M}, standard low--hard state
coronae may only emerge in TDE disks born with unusually large
magnetizations, or ones where external factors like large and retrograde
SMBH spin \citep{Parfrey2015} favor magnetic field generation {\it in situ} through dynamo processes.
Overall, the X--ray Eddington ratio of PTF09axc{} is broadly compatible with the common range of Eddington ratios where soft--to--hard state changes occur in XRBs. The persistently soft spectrum of PTF09ge{} is more unusual, but as mentioned before, XRB soft states have been observed to persist down to an Eddington ratio of $\sim 10^{-3}$ and in an extreme case even down to a few times $10^{-5}$.
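The Eddington ratios quoted above can be reproduced with a simple Monte Carlo over the SMBH mass uncertainty; in the sketch below the $0.4$ dex log--normal mass scatter and the input X--ray luminosity are illustrative placeholders, not the measured values used in the text:

```python
import numpy as np

def eddington_fraction(l_x, log_m, sigma_log_m=0.4, n=200_000, seed=42):
    # Monte Carlo the Eddington ratio L_X / L_Edd, propagating an assumed
    # log-normal scatter (here 0.4 dex) on the M-sigma black hole mass
    rng = np.random.default_rng(seed)
    m_bh = 10.0**(log_m + sigma_log_m * rng.standard_normal(n))
    mdot = l_x / (1.26e38 * m_bh)        # L_Edd = 1.26e38 (M/Msun) erg/s
    lo, med, hi = np.percentile(mdot, [16, 50, 84])
    return lo, med, hi
```

The asymmetric confidence intervals quoted in the text arise naturally from the log--normal mass scatter in this kind of propagation.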
One testable prediction of our XRB analogy is the predicted radio luminosity using the Fundamental Plane of black hole activity (\citealt{2003MNRAS.345.1057M}; \citealt{2004A&A...414..895F}). Using the calibration of \citet{2003MNRAS.345.1057M}, and given the SMBH mass estimate of $\log {M}_\bullet=5.68$ in PTF09axc\, from \citet{2017MNRAS.471.1694W}, we derive an expected radio luminosity at 5 GHz of $2\times 10^{37}$erg s$^{-1}$. Given the luminosity distance of PTF09axc{}, this translates to a flux density at 5 GHz of 20 $\mu$Jy, a level which is detectable with current radio telescopes, although this flux estimate carries a substantial uncertainty.
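This estimate can be reproduced with the \citet{2003MNRAS.345.1057M} coefficients, $\log L_R = 0.60\,\log L_X + 0.78\,\log M_\bullet + 7.33$ ($L_R = \nu L_\nu$ at 5 GHz); the X--ray luminosity and luminosity distance adopted below are illustrative inputs rather than the measured values, and the K--correction is neglected:

```python
import numpy as np

def fundamental_plane_log_lr(log_lx, log_m):
    # Merloni, Heinz & Di Matteo (2003) fundamental plane of BH activity:
    # log L_R = 0.60 log L_X + 0.78 log M_BH + 7.33  (L_R = nu L_nu at 5 GHz)
    return 0.60 * log_lx + 0.78 * log_m + 7.33

def flux_density_uJy(log_lr, d_l_cm, nu_hz=5e9):
    # convert nu L_nu into a flux density (K-correction neglected)
    l_nu = 10.0**log_lr / nu_hz                  # erg s^-1 Hz^-1
    f_nu = l_nu / (4.0 * np.pi * d_l_cm**2)      # erg s^-1 cm^-2 Hz^-1
    return f_nu / 1.0e-29                        # 1 uJy = 1e-29 cgs

# illustrative inputs (assumptions): L_X ~ 3e42 erg/s, d_L ~ 540 Mpc
log_lr = fundamental_plane_log_lr(np.log10(3e42), 5.68)
f_5ghz = flux_density_uJy(log_lr, 540 * 3.086e24)
```

With these inputs the sketch recovers the $\sim 10^{37}$ erg s$^{-1}$ / tens--of--$\mu$Jy scale quoted in the text; the $\sim 1$ dex scatter of the fundamental plane dominates the uncertainty.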
If the soft X--ray spectra of X--ray bright TDEs imply that those systems accrete in the equivalent of the XRB soft state, the fact that many TDEs have very weak or nonexistent early-time radio emission is unsurprising (cf.~\citealt{maccarone03}; \citealt{2013A&A...552A...5V}). We note that XMMSL1~J074008.2$-$853927, another TDE with an X--ray power--law component (index $\Gamma=2$) was detected in radio (\citealt{2017A&A...598A..29S, Alexander+17}), although XMMSL2~J144605.0$+$685735, which shows a power--law with index $\Gamma=2.5$, was not (Saxton et al.~2019 in prep.).
Finally, we note that our X--ray detections demonstrate that late--time TDE disks do not generally exhibit a different sort of state change: a collapse into a cold, gas pressure--dominated state due to the development of a thermal instability. This type of collapse is predicted by simple applications of the popular $\alpha$--disk model, but would imply that late--time TDE disks have luminosities far below what we observe \citep{Shen&Matzner14}. Our observations further substantiate this point, which was recently made in the context of late--time detections of TDE disks in the FUV \citep{vanVelzen+18}. The evidence against very cold disks in (most) TDEs seen at late times could indicate that the nonlinear development of the thermal instability is suppressed by an iron opacity bump \citep{Jiang+16}, or alternatively magnetic pressure support \citep{Begelman&Pringle07, Sadowski16, Jiang+19}.
\subsection{Optical vs.~X--ray selected TDEs}
\label{sec:XvsO}
Many of the first TDE candidates were detected from their soft X--ray emission, but either lacked contemporaneous searches for optical variability \citep{1999A&A...349L..45K}, or were observed {\it not} to show variable optical behavior (\citealt{Greiner+00}; \citealt{2012A&A...541A.106S}; Saxton et al. 2019 in prep). Later, optical and UV surveys discovered a second class of TDE candidates, which often possessed upper limits on their X--ray emission (\citealt{2012Natur.485..217G}; see also PTF09ge{}, ASASSN--14ae{}, and PTF09djl{}). More recently, a number of TDEs have been observed to exhibit both optical/UV {\it and} X--ray variability \citep{2016MNRAS.455.2918H, 2016MNRAS.463.3813H, Wevers+19}. With such a diversity of X--ray ($L_{\rm X}$) and optical ($L_{\rm opt}$) luminosities, it is fair to ask: do these transients all really stem from the same underlying type of event?
In the context of the reprocessing paradigm, this question has sometimes been answered (theoretically) in the affirmative by introducing a viewing angle dependence, akin to the AGN unification model \citep{Metzger&Stone16, Jane-unifi2018, Lu&Bonnerot19}: edge--on TDEs obscure the X--rays from the inner accretion flow, but face--on TDEs are viewed through a low--density polar region, and thus will be X--ray bright. The complicated three--dimensional geometry of the circularization/shock paradigm \citep{Piran+15, Shiokawa+15} likely suggests a viewing angle dependence as well.
A different -- possibly complementary -- way to unify TDE candidates across a broad range of $L_{\rm X}/L_{\rm opt}$ ratios is to postulate a strong temporal, rather than angular, dependence in $L_{\rm X}/L_{\rm opt}$. Our late--time detections of PTF09axc{}, PTF09ge{}, and ASASSN--14ae{} demonstrate that a substantial fraction of optically selected TDEs are X--ray bright at late times $\approx 5-10~{\rm yr}$ post--peak, signifying the presence of an exposed, compact accretion disk. If the optical emission is caused by circularization shocks, a delay between optical and X--ray would be related to delays in forming the (inner, X--ray emitting) accretion disk, as has been suggested by \citet{Shiokawa+15}. If the optical is instead caused by reprocessing of the inner disk's X--rays and EUV, then an enshrouded inner disk will only become visible in X--rays after the reprocessing screen has diluted enough to permit an ionization breakout \citep{Metzger&Stone16, roth+2016}.
Because the low $L_{\rm X}$ values we observe are compatible with past X--ray non--detections (or, in the case of PTF09axc{}, its 2014 detection), we are unable to say whether this truly represents {\it brightening} of initially X--ray dim TDEs. However, deep limits on the X--ray luminosity in several other optically selected TDEs suggests that brightening is certainly plausible (for references and limits see the first paragraphs of the Discussion). The nature of the X--ray light curve in optically selected TDEs is a crucial observable to constrain with future observations. The offset between the peaks of optical and X--ray emission, $\Delta t_{o-X}$, is a key parameter for testing the idea of unification in {\it time}, rather than (or in addition to) angle. The distributions of $\Delta t_{o-X}$ will depend on the emission mechanism for the optical and X--ray light, as well as on event parameters such as $\beta$, $M_\bullet$, and $\chi_\bullet$.
Depending on the delay between disruption and X--ray observation, an individual TDE could be in the equivalent of the soft X--ray spectral
state, or, as in the case of XMMSL2~J144605.0$+$685735, in a state with a hard, power--law--like spectrum\footnote{A potential selection effect might be at play: massive brightening of an X--ray power law is more difficult to separate from AGN flares, and will thus often be classified not as a TDE but as an AGN flare.}. We hypothesize that the X--ray selected TDEs are, in this scenario, often discovered much longer after the disruption than are optically selected TDEs. This particular unification hypothesis would be falsified if observations months to years before the X--ray turn--on in a TDE candidate did not show signs of an optical enhancement\footnote{In individual host galaxies, there could be reasons why the optical emission should be strongly reduced in these TDEs (such as the presence of a large amount of nuclear dust, e.g.~\citealt{2018Sci...361..482M}).}.
This scenario also implies that all optically selected TDEs will at some point emit X--ray radiation, as is true for three of the four sources we observed in this work. Sources which are detected in both optical and X--ray observations at early times (e.g.~ASASSN--14li, ASASSN--15oi and AT2018fyk; \citealt{2016MNRAS.455.2918H}, \citealt{2016MNRAS.463.3813H}, \citealt{2019arXiv190312203W}, respectively) could be explained in this scenario as sources with efficient circularization due, for instance, to high $\beta$, large $M_\bullet$ (though this is disfavored by $M_\bullet-\sigma$ estimates), or large and retrograde SMBH spin.
The spectral shapes and the lower luminosities that we observed in PTF09axc, PTF09ge\, and ASASSN--14ae\, differ from those of the (soft) X--ray discovered TDEs, which have soft thermal spectra and luminosities of order $L_{\rm X}\approx 10^{43-44}$ erg s$^{-1}$ (\citealt{Auchettl+17}). This implies that our observed sources are, in this scenario, at an even later stage in the evolution of the mass fall--back and accretion rate.
\subsection{Rates of detection in future X--ray surveys}
\label{sec:rates}
Near--future wide--field X--ray surveys are predicted to expand our sample of
X--ray TDEs by $1-2$ orders of magnitude. For example, the Einstein Probe is
expected to find $\sim 100$ new TDEs per year \citep{2015arXiv150607735Y},
while eROSITA is expected to find $\sim 1000$ \citep{2014MNRAS.437..327K}.
In this section, we revisit the latter estimate, making the following
modifications to the model of \citet{2014MNRAS.437..327K}:
\begin{enumerate}
\item We allow (in one of our models) the temperature at the inner edge of the accretion disk to be a function of SMBH spin.
\item We assume the volumetric TDE rate is given by:
\begin{equation}
\dot{N}_{\rm tde}=2.9\times 10^{-5} \left(\frac{M_\bullet}{10^8 M_{\odot}}\right)^{-0.4} {\rm yr}^{-1} \phi(M_\bullet).
\label{eq:tdeRate}
\end{equation}
This assumption takes
a theoretical (per galaxy) TDE rate calibrated from observations of nearby galactic nuclei \citep{stone&metzger2016}, and multiplies this by $\phi(M_\bullet)$, the
$z=0.02$ black hole mass function from \citet{shankar+2009} (their table 3).\footnote{eROSITA would be sensitive to TDEs with $z\lesssim$0.2, and the \citet{shankar+2009} mass function varies little in this redshift range.} We consider black hole masses between $10^5 M_{\odot}$ and $10^8 M_{\odot}$ in
our estimate. The volumetric TDE rate is $\sim 10^{-5}$ Mpc$^{-3}$ yr$^{-1}$
for this range.
\end{enumerate}
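As a sanity check on the quoted volumetric rate, one can integrate Eq.~\eqref{eq:tdeRate} over the mass function; the $\phi$ values below are rough illustrative stand--ins for the \citet{shankar+2009} table, not the actual tabulated numbers:

```python
import numpy as np

def per_galaxy_rate(m_bh):
    # Eq. (tdeRate) without the mass-function factor [yr^-1 per galaxy]
    return 2.9e-5 * (m_bh / 1e8)**-0.4

# illustrative stand-in for the Shankar et al. (2009) mass function
# [Mpc^-3 dex^-1] -- NOT the actual tabulated values
log_m = np.array([5.0, 6.0, 7.0, 8.0])
phi = np.array([1e-2, 1e-2, 1e-2, 1e-3])

def trapezoid(y, x):
    # small helper (avoids the np.trapz / np.trapezoid naming change)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# volumetric rate: integrate (per-galaxy rate) * phi over log10(M)
volumetric_rate = trapezoid(per_galaxy_rate(10.0**log_m) * phi, log_m)
```

Even with these crude stand--in values, the integral lands at the $\sim 10^{-5}$ Mpc$^{-3}$ yr$^{-1}$ order of magnitude quoted above, dominated by the low--mass end.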
We consider two different models for the TDE light curve and spectrum: (I) an optimistic theoretical model based on simple accretion disk theory and (II)
a more pessimistic quasi--empirical model that is calibrated to reproduce the late--time X--ray properties of PTF09ge. In both cases, we only consider disruption of Solar--type stars, for simplicity.
\subsubsection{Model I}
\label{sec:theoryRates}
We assume circularization occurs efficiently, and that the mass accretion rate through the disk is
\begin{align}
&\dot{M}_{\rm acc} (M_\bullet, t)=
\begin{cases}
0 & t< t_{\rm fall}\\
\dot{M}_{\rm max}(M_\bullet) \left[\frac{t}{t_{\rm fall} (M_\bullet)}\right]^{-1.2} & t\geq t_{\rm fall}
\end{cases}
\label{eq:lbol}
\end{align}
where $t_{\rm fall}$ is the fallback time (Eq.~\ref{eq:tFall}).
This power law is shallower than the canonical $t^{-5/3}$ decline of the mass fall--back rate and is motivated by theoretical models for viscously spreading disks \citep{Cannizzo+90}, the late time FUV light curves of TDEs \citep{vanVelzen+18}, and our own
late--time X--ray detections. The maximum accretion rate $\dot{M}_{\rm max}$ is a factor of
$\sim$3 smaller than the peak fall--back rate $\dot{M}_{\rm peak}$. With this normalization, a total of half a solar mass of material is accreted.
The bolometric disk luminosity after one fallback time is
\begin{align}
&L_{\rm bol}(t, M_\bullet, \chi_\bullet)=\min[L_{\rm Edd}(M_\bullet), \eta_\bullet(\chi_\bullet) \dot{M}_{\rm acc}(t) c^2]\nonumber\\
&=\min\Bigl[L_{\rm Edd}(M_\bullet),\nonumber\\ &3\times 10^{45} \left(\frac{\eta_\bullet(\chi_\bullet)}{0.057}\right) \left(\frac{M_\bullet}{10^6 M_{\odot}}\right)^{-1/2} \left(\frac{t}{t_{\rm fall}}\right)^{-1.2} \mathrm{erg\,\, s^{-1}}\Bigr],
\label{eq:lpeak}
\end{align}
where $L_{\rm Edd} (M_\bullet)$ is the Eddington luminosity, and $\eta_\bullet$ is the standard radiative efficiency of a thin, equatorial accretion disk\footnote{The efficiency $\eta_\bullet$ ranges from $0.038$ to $0.42$ as $\chi_\bullet$ goes from $-1$ to $1$, and is computed as in \citet{Bardeen+72}.}. Here we have further assumed that the disk aligns itself into the SMBH equatorial plane after an initial period of misalignment. Typical alignment timescales are $\lesssim 100~{\rm d}$ for large ($\chi_\bullet >0.5$) SMBH spins \citep{Franchini+16}, so alignment is a reasonable approximation for eROSITA observations, which have a typical cadence of 6 months\footnote{In principle, alignment can take longer than 6 months if $\chi_\bullet \lesssim 0.5$, but $\eta_\bullet$ is considerably less sensitive to SMBH spin in this regime.}. Eq.~\eqref{eq:lpeak} is close to the estimated bolometric luminosity of PTF09ge\, near peak ($\sim 8\times 10^{44}$ erg s$^{-1}$, which is comparable to the Eddington limit for this source; see \citealt{vanVelzen+18}).
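In code form, the light curve of Eqs.~\eqref{eq:lbol} and \eqref{eq:lpeak} reads as follows; the fallback--time normalization for a solar--type star is an assumed scaling, since Eq.~\ref{eq:tFall} is defined elsewhere:

```python
def t_fall_yr(m_bh):
    # assumed fallback-time normalization for a solar-type star
    # (~41 d at M_BH = 1e6 Msun); Eq. (tFall) is defined elsewhere
    return 0.11 * (m_bh / 1e6)**0.5

def l_bol_model1(t_yr, m_bh, eta=0.057):
    # Eqs. (lbol)-(lpeak): Eddington-capped t^-1.2 decline after t_fall
    if t_yr < t_fall_yr(m_bh):
        return 0.0
    l_edd = 1.26e38 * m_bh
    l_disk = (3e45 * (eta / 0.057) * (m_bh / 1e6)**-0.5
              * (t_yr / t_fall_yr(m_bh))**-1.2)
    return min(l_edd, l_disk)
```

For a $10^6 M_\odot$ SMBH with $\eta = 0.057$, the light curve is Eddington--capped for the first few tens of fallback times before joining the $t^{-1.2}$ decline.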
Equations~\eqref{eq:lbol} and~\eqref{eq:lpeak} specify the bolometric
luminosity, but here we are interested in soft X--ray observations of TDEs, and many optically selected TDEs (including three sources of our sample) have not been detected in X--rays at early times. Theoretically, TDEs may become X--ray bright when the central engine
ionizes through a surrounding reprocessing layer
\citep{Metzger&Stone16, roth+2016} or, if circularization is inefficient, after repeated
shock interactions near stream apocenter (e.g.~\citealt{Dai+15, Shiokawa+15}). The
precise time when this occurs is uncertain. However, at least the early,
super--Eddington phases of mass fallback are likely to be X--ray
dim.\footnote{At least for most viewing angles: observers aligned with the
poles may see X--ray emission from a jet according to the unification model
of \citealt{Jane-unifi2018}.} If disk formation is inefficient, there is little accretion to produce X--rays \citep{Shiokawa+15}; even if disk formation is efficient, the inner disk can be heavily obscured by bound debris \citep{Loeb&Ulmer97, coughlin&begelman2014} or by outflows \citep{Miller15, Metzger&Stone16, Jane-unifi2018, Lu&Bonnerot19}. However, as our present work shows, a large fraction of TDEs become X--ray bright at later times, when the luminosity becomes sub--Eddington. This occurs after
\begin{align}
t_{\rm Edd} (M_{\bullet}, \chi_\bullet)\approx 1.5\,{\rm yr} \left(\frac{M_\bullet}{10^6 M_\odot}\right)^{-3/4} \left(\frac{\eta_\bullet(\chi_\bullet)}{0.057}\right)^{5/6}.
\end{align}
Here, $t_{\rm Edd}$ is the time after which the accretion rate becomes sub--Eddington.
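A one--function sketch of this transition time (the default efficiency corresponds to zero spin):

```python
def t_edd_yr(m_bh, eta=0.057):
    # time after which the accretion rate becomes sub-Eddington (see text)
    return 1.5 * (m_bh / 1e6)**-0.75 * (eta / 0.057)**(5.0 / 6.0)
```

The steep $M_\bullet^{-3/4}$ scaling means low--mass SMBHs stay super--Eddington (and hence, in this model, X--ray dark) for much longer than massive ones.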
In practice, we consider a TDE at redshift $z$ to be detectable by eROSITA after $t_{\rm
Edd} (M_{\bullet}, \chi_\bullet)$, as long as
\begin{align}
&\frac{L}{4 \pi d^2_L(z) K(z)}\geq f_{\rm lim},
\end{align}
where $d_L(z)$ is the luminosity distance and
\begin{align}
&f_{\rm lim}= \frac{C_{\rm crit}}{t_{\rm int} \int_{\nu_{\rm min}}^{\nu_{\rm max}} \frac{S_{\nu} (\nu) A(\nu) e^{-\xi(\nu)}}{h \nu} {\rm d} \nu}\nonumber\\
&K(z)^{-1} = \frac{(1+z) \int_{\nu_{\rm min}}^{\nu_{\rm max}} \frac{S_\nu (\nu (1+z)) A(\nu) e^{-\xi(\nu)}}{h \nu} {\rm d}\nu} {\int_{\nu_{\rm min}}^{\nu_{\rm max}} \frac{S_\nu (\nu) A(\nu) e^{-\xi(\nu)}}{h \nu} {\rm d}\nu}.
\end{align}
Here $C_{\rm crit}$ and $t_{\rm int}$ are the minimum number of counts
resulting in a detection and the integration time respectively (which we take to be 40 and 240
seconds following \citealt{2014MNRAS.437..327K}), $A(\nu)$ is the effective area as a function of energy\footnote{\url{https://wiki.mpe.mpg.de/eRosita/erocalib\_calibration}},
$e^{-\xi(\nu)}$ accounts for photoelectric absorption\footnote{This is derived
from the XSPEC PHABS multiplicative model with $N_H=5\times 10^{20}$ cm$^{-2}$ following \citet{2014MNRAS.437..327K}.}, and $S_{\nu}$ is
the Spectral Energy Distribution (SED), which we take to be a black--body with an effective temperature corresponding to the temperature near the ISCO as given by Eq.~9 of \citet{Lodato&Rossi11}.\footnote{The effective temperature actually goes to zero at the ISCO in this model. In practice we evaluate $T_{\rm eff,in}$ at 1.36 times the ISCO, where the effective temperature is maximized.} We integrate SEDs between $h\nu_{\rm min} = 0.2~{\rm keV}$ and $h\nu_{\rm max} = 2~{\rm keV}$ (following \citealt{2014MNRAS.437..327K}).
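The limiting flux can be evaluated numerically; the sketch below assumes a flat effective area of $2000$ cm$^2$ and neglects both photoelectric absorption and the K--correction (all simplifications relative to the integrals above), so it only reproduces the order of magnitude of the full calculation:

```python
import numpy as np

H_PLANCK = 6.626e-27   # erg s
KEV = 1.602e-9         # erg

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def f_lim_blackbody(kT_keV, area_cm2=2000.0, c_crit=40.0, t_int=240.0,
                    e_min_keV=0.2, e_max_keV=2.0):
    # limiting 0.2-2 keV flux for c_crit counts in t_int seconds, assuming
    # a blackbody SED, a FLAT effective area, and no absorption
    nu = np.linspace(e_min_keV, e_max_keV, 4000) * KEV / H_PLANCK
    # photon spectrum S_nu/(h nu) propto nu^2 / (exp(h nu / kT) - 1)
    photons = nu**2 / np.expm1(H_PLANCK * nu / (kT_keV * KEV))
    counts_per_unit_flux = (area_cm2 * trapezoid(photons, nu)
                            / trapezoid(photons * H_PLANCK * nu, nu))
    return c_crit / (t_int * counts_per_unit_flux)   # erg cm^-2 s^-1
```

Because $f_{\rm lim}$ scales with the mean photon energy of the band--limited spectrum, hotter disks (which place more counts at higher energies within the band) carry a somewhat higher limiting flux in this simplified treatment.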
The total number of new TDEs detected every year is
\begin{align}
N_{\rm det}=&\int_{0}^{1\,\, {\rm year}} \int_{M_{\rm min}}^{M_{\rm max}} \int_{0}^{z_{\rm lim}(M_\bullet)} \frac{{\rm d}N}{{\rm d}t {\rm d}M_\bullet {\rm d}z} {\rm d}z {\rm d}M_\bullet {\rm d}t\nonumber\\
=&\int_{0}^{1\,\, {\rm year}} \int_{M_{\rm min}}^{M_{\rm max}} \int_{0}^{V_{\rm c}[z_{\rm lim}(M_\bullet)]} \dot{N}_{\rm tde} {\rm d} V_{\rm c}(z) {\rm d}M_\bullet {\rm d}t,
\end{align}
where ${\rm d}N/{\rm d}t {\rm d}M_\bullet {\rm d}z$ is the differential TDE rate per unit SMBH mass per unit redshift, and $z_{\rm lim}$ is the maximum redshift to which a TDE in a given mass bin could be detected. In the second line, $\dot{N}_{\rm tde}$ is the volumetric TDE rate (Eq.~\ref{eq:tdeRate}), while ${\rm d}V_c$ is the co--moving volume element. Conservatively, $z_{\rm lim}$ satisfies
\begin{align}
&\frac{L(t_o+6 \,\,{\rm months})}{4 \pi d_L(z_{\rm lim})^2 K(z_{\rm lim})}=f_{\rm lim},\nonumber\\
&t_o=\max[t_{\rm Edd}(M_\bullet, \chi_\bullet), t_{\rm fall}],
\label{eq:zlim}
\end{align}
where $t_o$ is when the X--rays turn on and six months is the time it takes eROSITA to scan the entire sky.
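Eq.~\eqref{eq:zlim} can be solved for the horizon redshift by bisection; the sketch below adopts an assumed flat $\Lambda$CDM cosmology and again neglects the K--correction:

```python
import numpy as np

C_KM_S, H0 = 2.998e5, 70.0      # km/s, km/s/Mpc (assumed cosmology)
OMEGA_M, OMEGA_L = 0.3, 0.7
MPC_CM = 3.086e24

def d_l_cm(z, n=512):
    # flat LCDM luminosity distance via a comoving-distance integral
    zs = np.linspace(0.0, z, n)
    inv_ez = 1.0 / np.sqrt(OMEGA_M * (1.0 + zs)**3 + OMEGA_L)
    dc_mpc = C_KM_S / H0 * float(
        np.sum((inv_ez[1:] + inv_ez[:-1]) * np.diff(zs)) / 2.0)
    return (1.0 + z) * dc_mpc * MPC_CM

def z_lim(luminosity, flux_limit, z_max=2.0, tol=1e-4):
    # bisect L / (4 pi d_L^2) = f_lim; the K-correction is neglected here
    lo, hi = 1e-6, z_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        flux = luminosity / (4.0 * np.pi * d_l_cm(mid)**2)
        lo, hi = (mid, hi) if flux > flux_limit else (lo, mid)
    return 0.5 * (lo + hi)
```

Bisection is robust here because the flux is strictly decreasing in redshift, so the root is always bracketed.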
The top panel of Fig.~\ref{fig:detRate} shows the eROSITA detection rate
as a function of SMBH mass and spin, assuming all TDE hosts have the same mass and spin combination, and that the total TDE rate is $10^{-5}$
Mpc$^{-3}$ yr$^{-1}$. For a flux--limited sample of TDEs produced by rapidly spinning black holes, there are 1--2 orders of magnitude more detections when the black hole spin is universally prograde (with respect to the accretion disk's rotation) than universally retrograde, irrespective of the SMBH mass bin we consider. In stark contrast to optically selected TDE samples ($\S$~\ref{sec:retro}), an X--ray selected sample would be strongly biased towards prograde black hole spins, though this bias abates if the SMBH spin distribution is very bottom--heavy (with typical $\chi_\bullet \ll 1$).
In the bottom panel of Fig. \ref{fig:detRate}, we use the more realistic, non--uniform (in SMBH mass) TDE rate given by Eq.~\ref{eq:tdeRate}. Smaller SMBH masses are strongly favored in flux--limited X--ray TDE samples, because (i) their disks have higher effective temperatures, increasing the luminosity in the eROSITA band; (ii) they preferentially occur in denser and cuspier galactic nuclei, where two--body relaxation times are shorter and TDE rates are higher; (iii) such SMBHs are more common, given our assumed mass function. Our predictions are closest to those of \citet{2014MNRAS.437..327K} when we set $\chi_\bullet\approx 0.9-0.95$, where the effective temperature at the inner disk edge in our model matches theirs.
The observed black hole mass distribution of soft X--ray selected TDEs (\citealt{wevers-masses-ii}) shows no evidence for an enhanced number of TDEs at smaller SMBH masses, although there is a hint of such an enhancement in hard X--ray selected TDEs.
Table~\ref{tab:rates} shows the mass--integrated eROSITA detection rates for a few different SMBH spin parameters (assuming
equal intrinsic numbers of prograde and retrograde disruptions). The detection rate is a strong function of the uncertain SMBH spin distribution: in our fiducial model (where the SMBH mass function extends down to $M_\bullet = 10^5 M_\odot$) we predict $\approx$1000 detections per year for $\chi_\bullet$=0.99, but only 170 per year for $\chi_\bullet=0$. For large values of $\chi_\bullet$, a flux-limited sample is strongly dominated by the $50\%$ of TDE disks we assume to align into prograde equatorial configurations; depending on the combination of $M_\bullet$ and $\chi_\bullet$, this ``prograde bias'' can range from $\sim 10\%$ to multiple orders of magnitude. Prograde disks and high values of $|\chi_\bullet|$ are favored because of their higher bolometric luminosities and effective temperatures.
The X--ray discovery rate is dominated by the smallest ($M_\bullet \sim 10^5 M_{\odot}$) SMBHs, a part of parameter space where the SMBH occupation fraction is poorly constrained. Interestingly, a flux--limited and X--ray selected TDE sample can be a more sensitive probe of the bottom end of the SMBH mass function than a volume--complete TDE sample would be\footnote{Using Eq.~\ref{eq:tdeRate}, we find that reducing $M_{\rm min}$ from $10^6 M_\odot$ to $10^5 M_\odot$ increases the volumetric TDE rate by a factor $\approx 8.5$.}. In our models, this is true for $\chi_\bullet \lesssim 0.9$, and is due to the fact that (unless most SMBHs are nearly extremal) X--ray emission is typically on the Wien tail of TDE disks, and is thus highly sensitive to populations of smaller SMBHs. Furthermore, the eROSITA TDE detection rate may also be a strong indicator of the SMBH spin distribution, even if the spins of individual TDE--hosting SMBHs cannot be measured. This is analogous to the manner in which statistical samples of TDEs may probe the SMBH spin distribution near the Hills mass \citep{Kesden12}, although not limited to the largest TDE hosts.
Many TDE light curves would be reasonably well--sampled with eROSITA. For example, for a $10^5$ ($10^6$) $M_{\odot}$ SMBH with a spin of 0.99, prograde disruptions would be visible on average for 27 (5.3) years. This would give an average of eight detections per TDE, considering the cadence and nominal duration of the eROSITA all--sky survey (six months and four years respectively).
\begin{table*}[]
\caption{Estimated eROSITA TDE detection rates.}
\centering
\begin{tabular}{c|ccc|ccc}
 & \multicolumn{3}{c|}{$\dot{N} (M_{\bullet}\geq 10^5 M_{\odot})$} & \multicolumn{3}{c}{$\dot{N} (M_{\bullet}\geq 10^6 M_{\odot})$} \\
\hline
$\chi_{\bullet}$ & Total & Retrograde & Prograde & Total &Retrograde&Prograde\\
& [yr$^{-1}$]& [yr$^{-1}$] & [yr$^{-1}$]& [yr$^{-1}$]& [yr$^{-1}$] & [yr$^{-1}$] \\
\hline
0 & 172.3 & --& --& 4.8 &--& --\\
0.1 & 174.3 & 76.0 & 98.3 & 5.0 & 3.1 & 1.9\\
0.5 & 232.3 & 48.3 & 184.0 & 10.8 & 0.8 & 10.0\\
0.9 & 551.3 & 32.6 & 518.7 & 65.8 & 0.4 & 65.4 \\
0.99& 992.9 & 29.9 & 963.0 & 192.6 & 0.3 & 192.3 \\
\end{tabular}
\label{tab:rates}
\tablecomments{
Estimated eROSITA TDE detection rates using the formalism outlined in $\S$~\ref{sec:theoryRates}. The first column is the SMBH spin. Columns 2--4 give the total, retrograde, and prograde detection rates including all SMBHs between $10^5$ $M_{\odot}$ and the Hills mass.
Columns 5--7 give the total, retrograde, and prograde detection rate including SMBHs with masses between $10^6 M_{\odot}$ and the Hills mass. In all cases we assumed an equal intrinsic number of prograde and retrograde disruptions (see the discussion in $\S$~\ref{sec:retro}). For these estimates, we assume the mass--dependent TDE rate from Eq.~\eqref{eq:tdeRate}, but discard TDEs with Galactic latitudes $\leq$30$^{\circ}$, as in \citet{2014MNRAS.437..327K}.}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{grid.pdf}
\includegraphics[width=8.5cm]{Mplot.pdf}
\caption{{\emph{Top panel:}} The rate of eROSITA
detections as a function of SMBH mass and spin,
for a fixed TDE rate of $10^{-5}$ Mpc$^{-3}$ yr$^{-1}$. These are the detection rates assuming
TDEs are distributed as a delta function with a specific mass
and spin (i.e.~each SMBH mass and spin is assumed equally likely). {\emph{Bottom panel:}} The rate of eROSITA detections as a function
of SMBH mass, for a selection of spin parameters. These lines are a convolution
of the rates from the top panel with the SMBH mass function and a theoretical estimate of TDE rates as a function of $M_\bullet$ (Eq.~\ref{eq:tdeRate}). We assume $50\%$ of TDE disks align into prograde equatorial and $50\%$ align into retrograde equatorial configurations by the time of observation (see the discussion in $\S$~\ref{sec:retro}).
}
\label{fig:detRate}
\end{figure}
\subsubsection{Model II}
\label{sec:empiricalRates}
Next, we re--estimate eROSITA detection rates with a quasi--empirical model calibrated to reproduce the observed properties of PTF09ge. While the model from the prior section was arguably an optimistic one (in its assumption that all TDEs will become X--ray bright after the disk accretion rate becomes sub-Eddington), this empirically calibrated model can be viewed as a rather pessimistic scenario, where we impose a long, adjustable period of X--ray darkness on most TDEs. In this model, the bolometric luminosity is
\begin{align}
&L_{\rm bol}=
\begin{cases}
0 &,\,t< t_{\rm o}\\
\min\Bigl[L_{\rm Edd}(M_\bullet),\,2.5\times 10^{43}\, {\rm erg\,s^{-1}} & \\
\quad\times \left(\frac{t}{t_{\rm fall}}\right)^{-1.2} \left(\frac{M_\bullet}{3\times 10^6 M_{\odot}}\right)^{-1/2}\Bigr] &,\, t\geq t_{\rm o}.
\end{cases}\nonumber\\
&t_{\rm o}=\max[t_{\rm br}, t_{\rm Edd}, t_{\rm fall}]
\label{eq:empirical_lum}
\end{align}
This reproduces the inferred late--time bolometric luminosity for PTF09ge\footnote{$2.7\times 10^{41}$ erg s$^{-1}$ derived from the best fit black body spectrum for this event.} for its inferred SMBH mass of $\sim 3\times 10^6 M_{\odot}$ \citep{wevers-masses-ii}. The scalings with SMBH mass and time are the same as in the theoretical model, but SMBH spin is not explicitly included. The X--rays turn on after the brightening time ($t_{\rm br}$), as long as this is greater than the fallback time and the luminosity is sub--Eddington.
We assume, based on our late--time observations of PTF09ge{}, that the spectrum is a blackbody with effective temperature $k T=$ 0.18 keV. The bolometric luminosity is up to two orders of magnitude smaller than in Model I, which would reduce the detection rate. However, this is partially compensated for by the harder spectrum.
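In code form, Eq.~\eqref{eq:empirical_lum} reads as follows; the fallback--time normalization for a solar--type star is an assumed scaling:

```python
def t_fall_yr(m_bh):
    # assumed fallback-time normalization for a solar-type star
    return 0.11 * (m_bh / 1e6)**0.5

def l_bol_model2(t_yr, m_bh, t_br_yr, t_edd_yr=0.0):
    # Eq. (empirical_lum): X-ray dark before t_o, capped t^-1.2 decline after
    t_o = max(t_br_yr, t_edd_yr, t_fall_yr(m_bh))
    if t_yr < t_o:
        return 0.0
    l_edd = 1.26e38 * m_bh
    l_pl = (2.5e43 * (t_yr / t_fall_yr(m_bh))**-1.2
            * (m_bh / 3e6)**-0.5)
    return min(l_edd, l_pl)
```

Relative to Model I, the normalization is up to two orders of magnitude fainter, which is what drives the lower detection rates in Table~\ref{tab:rates2}.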
Table~\ref{tab:rates2} shows the mass--integrated eROSITA detection rates for two different brightening times $t_{\rm br}$. For higher SMBH minimum masses ($M_{\rm min} = 10^6 M_{\odot}$) and small brightening times, the rates are comparable to the
estimates from Model I for moderate spins ($0.5 \lesssim \chi_\bullet \lesssim 0.9$; see Table~\ref{tab:rates}). However, the predicted rates for lower SMBH mass limits ($M_{\rm min} = 10^5 M_{\odot}$) and/or larger brightening times are smaller than the zero spin case in Model I.
\begin{table}[]
\caption{Estimated TDE detection rates with
eROSITA using the quasi--empirical model outlined in $\S$~\ref{sec:empiricalRates}.}
\centering
\begin{tabular}{c|c|c}
$t_{\rm br}$ & $\dot{N} (M_{\bullet}\geq 10^5 M_{\odot})$
& $\dot{N} (M_{\bullet}\geq 10^6 M_{\odot})$
\\
yr & [yr$^{-1}$] & [yr$^{-1}$] \\
\hline
1 & 150 & 24 \\
5 & 16 & 2.6
\end{tabular}
\label{tab:rates2}
\end{table}
In Model II, TDEs are on average observable for 2 (6) years after detection for a brightening time of 1 (5 years). This implies significantly poorer temporal sampling in eROSITA light curves than in Model I.
\subsection{Retrograde and prograde TDE disks}
\label{sec:retro}
In the previous section, we saw that X--ray selected TDE samples are likely to possess a strong bias towards prograde configurations of SMBH spin $\vec{\chi}_\bullet$ and disk angular momentum $\vec{L}_{\rm d}$ (i.e. $\vec{\chi}_\bullet \cdot \vec{L}_{\rm d}>0$), so long as typical SMBH spin magnitudes are reasonably large ($\chi_\bullet \gtrsim 0.5$). In this section, we discuss prospects for observing this prograde preference, and build simple toy models to show how it contrasts with the likely weaker orientation biases in optically selected TDE samples, which may even exhibit a preference for retrograde configurations ($\vec{\chi}_\bullet \cdot \vec{L}_{\rm d}<0$).
Our observations of PTF09ge{} indicate that quasi--thermal soft X--ray emission may remain visible for roughly a decade after the peak of a tidal disruption flare. This raises the prospect of using late--time TDE observations to directly measure SMBH spin through continuum fitting techniques. While continuum fitting is a fruitful method of measuring the spins of stellar--mass black holes in XRBs (\citealt{Jeff2014SSRv}), it has only rarely been applied to SMBHs because {\it (i)} AGN typically produce dusty tori, which complicate the X--ray spectral fitting, and {\it (ii)} the spectral peak of a quasi--thermal AGN disk is usually in observationally inaccessible EUV bands.
The relatively cool temperatures of TDE disks (in contrast to those of XRBs) mean that quasi--thermal soft X--rays will generally be on the Wien tail of emission \citep{Lodato&Rossi11}, and their production will be dominated by the innermost gas annuli of the disk. As a result, quasi--thermal X--rays from TDEs will be exponentially sensitive to the size of the disk inner edge, and therefore will depend strongly on SMBH spin. At early times, the disk inner edge is nontrivial to estimate. Because two--body scattering feeds stars to SMBHs from a roughly isotropic distribution of angles, TDE disks are generically born with order unity tilts. A tilted thin disk will be truncated near the innermost stable spherical orbit (ISSO), but the high early--time accretion rates of TDEs may cause their innermost disk annuli to extend inside the ISSO\footnote{For example, with accretion disks tilted with respect to the black hole spin by an angle $15^\circ$ and a thickness of the order of 0.2, the simulations of \citet{Fragile2009} found the inner edge to be nearly independent of spin.}. A greater problem at early times, however, is the messy hydrodynamical environment of the disk: if the stellar pericenter was sufficiently non--relativistic ($R_{\rm p} \gg R_{\rm g}$), the disk may retain substantial eccentricity \citep{Shiokawa+15}, and if optically thick stellar debris subtends a large solid angle on the sky, the majority of the X--ray flux may be absorbed in a reprocessing layer \citep{Guillochon+14, Metzger&Stone16}.
At late times, however, accretion rates will have dropped to sub--Eddington levels, shifting the disk inner edge close to the test particle value; many fallback times will have passed, enabling more complete circularization \citep{Hayasaki+16, Bonnerot+17}; reprocessing layers will have dissipated, revealing the inner disk \citep{Metzger&Stone16, vanVelzen+18}; and internal torques will have had time to align the TDE disk angular momentum vector with the black hole spin vector \citep{Franchini+16}. Thus, for the quasi--thermal sources we have observed (PTF09ge{}, and perhaps ASASSN--14ae{}), it is reasonable to expect thin disks in the SMBH equatorial plane, with inner edges at the test particle ISCO. We may now ask the question: do we expect an imbalance in the number of prograde and retrograde TDEs? For a volume--complete sample the answer is clearly no. However, for a more practical, flux--limited sample of TDEs, there are strong reasons to suspect an imbalance. We have already predicted that flux--limited, X--ray selected TDE samples can exhibit an enormous prograde bias (e.g. Table~\ref{tab:rates}). In this section, we investigate whether the same bias should be apparent for a flux--limited but optically selected TDE sample. While the origin of TDE optical emission remains contested between ``shock--powered'' and ``reprocessing'' models, both of these scenarios have peak luminosities that will depend strongly on the orbital precession of debris streams, and therefore on SMBH spin.
To leading post-Newtonian (PN) order, both apsidal precession (precession of the debris stream's Runge--Lenz vector within the orbital plane) and nodal precession (precession of the orbital plane's angular momentum vector about the SMBH spin vector) are larger for retrograde than for prograde orbits (\citealt{Merritt2010}). Neglecting for now the possibility that extreme nodal precession may prevent stream self-intersections\footnote{Tidal disruptions of stars in the relativistic regime (e.g.~white dwarfs disrupted by intermediate--mass BHs, or solar mass main sequence stars disrupted by a BH with $M_\bullet=10^{7-8}$\msun) around spinning SMBHs, may lead to stellar debris streams that fail to promptly self--interact, unless the inclination of the stellar orbit is nearly perpendicular to the BH spin axis or if the thickness of the debris streams is large enough such that they always intersect (\citealt{dai2013}; \citealt{Guillochon&RamirezRuiz15}, \citealt{Hayasaki+16}).}, the greater apsidal shifts for debris from retrograde TDEs means that these debris streams will self-intersect and dissipate energy at smaller radii. Smaller stream self-intersection radii $R_{\rm SI}$ will probably yield higher peak optical luminosities, regardless of the dominant optical power source in observed TDEs. In the ``reprocessing paradigm,'' smaller $R_{\rm SI}$ will mean faster disk formation and higher peak accretion rates $\dot{M}$ onto the SMBH, although this must be weighed against the potentially greater radiative efficiency of prograde disks. In the ``circularization paradigm,'' smaller $R_{\rm SI}$ values will thermalize greater amounts of bulk kinetic energy.
The translation between self--intersection radius $R_{\rm SI}$ and optical luminosity is currently an unsolved problem. Under the assumption that most of the observed optical emission is shock-powered, we will use the following toy model for peak luminosity:
\begin{equation}
L_{\rm peak} = \eta_{\rm SI} \frac{GM_\bullet \dot{M}_{\rm peak}}{R_{\rm SI}}, \label{eq:LPeakCirc}
\end{equation}
where, as before, $\dot{M}_{\rm peak}$ is the peak mass fallback rate. The dimensionless number $\eta_{\rm SI}\le 1$ is the fraction of stream kinetic energy thermalized {\it and} radiated at the self-intersection; for simplicity, we take it to be a constant\footnote{As \citet{Lu&Bonnerot19} have noted, a large fraction of the thermalized stream kinetic energy may be lost to adiabatic degradation prior to the time it can be emitted. Because the fractional energy loss to $P{\rm d}V$ work depends on gas optical depth at $R_{\rm SI}$ and therefore on $M_\bullet$ and other parameters, the assumption of constant $\eta_{\rm SI}$ is crude. Deriving a more complete theoretical model is, however, beyond the scope of this work.}. A flux-limited survey will find a differential number of TDEs per bin of pericenter $R_{\rm p}$ and inclination $i$ that scales as ${\rm d}N_{\rm det}/{\rm d}i{\rm d}R_{\rm p} \propto L_{\rm peak}^{3/2}(i, R_{\rm p})({\rm d}\dot{n}/{\rm d}i{\rm d}R_{\rm p})$, where the differential rate ${\rm d}\dot{n}/{\rm d}i{\rm d}R_{\rm p} \propto \sin i$ if we are in the full loss cone (FLC) regime, and ${\rm d}\dot{n}/{\rm d}i{\rm d}R_{\rm p} \propto \sin i\times \delta(R_{\rm p}-R_{\rm t})$ if we are in the empty loss cone (ELC) regime (we have assumed isotropy in stellar arrival directions in both regimes).
The dependence of $L_{\rm peak}$ on $i$ and $R_{\rm p}$ can be computed by defining the self-intersection radius \citep{Dai+15}
\begin{equation}
R_{\rm SI} = \frac{R_{\rm p}(1+e)}{1+e\cos(\pi + \delta \omega / 2)},
\end{equation}
where $e$ is the eccentricity of the elliptical orbit of the stream of material formed by the disrupted star. For convenience, we take the eccentricity of the most tightly bound debris, $e_{\rm min} = 1-2(M_\star / M_\bullet)^{1/3}/\beta$.
Here, we have made use of the per--orbit apsidal shift angle, $\delta\omega$, which, to lowest PN order in dimensionless SMBH spin, $\chi_\bullet$, is \citep{Merritt2010}
\begin{equation}
\delta\omega = A_{\rm S}-2A_{\rm J}\cos i ,
\end{equation}
where
\begin{align}
A_{\rm S} =& \frac{6\pi}{c^2} \left( \frac{GM_\bullet}{R_{\rm p}(1+e)} \right) \\
A_{\rm J} =& \frac{4\pi \chi_\bullet}{c^3} \left( \frac{GM_\bullet}{R_{\rm p}(1+e)} \right)^{3/2}.
\end{align}
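As a purely illustrative numerical sketch (not part of the original analysis), the expressions above for $\delta\omega$ and $R_{\rm SI}$ can be evaluated directly. All parameter choices in the snippet (a $10^6\,M_\odot$ SMBH, a solar-type star, $\beta=1$, $\chi_\bullet=0.9$) are example values only:

```python
import math

# Physical constants in SI units
G, c = 6.674e-11, 2.998e8
MSUN, RSUN = 1.989e30, 6.957e8


def self_intersection_radius(m_bh, chi, inc, m_star=MSUN, r_star=RSUN, beta=1.0):
    """Per-orbit apsidal shift delta-omega and self-intersection radius R_SI."""
    r_t = r_star * (m_bh / m_star) ** (1 / 3)          # tidal radius
    r_p = r_t / beta                                   # pericenter
    e = 1 - 2 * (m_star / m_bh) ** (1 / 3) / beta      # most-bound debris eccentricity
    x = G * m_bh / (r_p * (1 + e))
    a_s = 6 * math.pi * x / c**2                       # Schwarzschild (mass) term
    a_j = 4 * math.pi * chi * x**1.5 / c**3            # spin (Lense-Thirring) term
    d_omega = a_s - 2 * a_j * math.cos(inc)            # apsidal shift per orbit
    r_si = r_p * (1 + e) / (1 + e * math.cos(math.pi + d_omega / 2))
    return d_omega, r_si


m_bh = 1e6 * MSUN
dw_pro, rsi_pro = self_intersection_radius(m_bh, 0.9, 0.0)       # prograde orbit
dw_ret, rsi_ret = self_intersection_radius(m_bh, 0.9, math.pi)   # retrograde orbit
```

Consistent with the leading-order expansion, the retrograde orbit ($i=\pi$) acquires the larger apsidal shift and therefore the smaller self-intersection radius.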
In the empty loss cone regime, for fixed SMBH mass and stellar properties, the retrograde fraction is simply
\begin{equation}
f_{\rm ret}^{\rm ELC} = \frac{\int^\pi_{\pi/2}\sin i[1+e\cos(\pi+\delta\omega/2)]^{3/2}{\rm d}i}{\int_0^\pi\sin i[1+e\cos(\pi+\delta\omega/2)]^{3/2}{\rm d}i}.
\end{equation}
In the full loss cone (FLC) regime, a second integral is necessary:
\begin{equation}
f_{\rm ret}^{\rm FLC} = \frac{\int^\pi_{\pi/2} \sin i \int^{R_{\rm t}}_{R_{\rm min}}R_{\rm p}^{-3/2}[1+e\cos(\pi+\delta\omega/2)]^{3/2}{\rm d}R_{\rm p}{\rm d}i}{\int_0^\pi\sin i \int^{R_{\rm t}}_{R_{\rm min}} R_{\rm p}^{-3/2} [1+e\cos(\pi+\delta\omega/2)]^{3/2}{\rm d}R_{\rm p}{\rm d}i}.
\end{equation}
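The ELC integral above lends itself to a simple quadrature check. The sketch below (illustrative and stdlib-only; the FLC case is omitted because $R_{\rm min}$ requires the Kerr IBSO) evaluates $f_{\rm ret}^{\rm ELC}$ with a composite Simpson rule, pinning $R_{\rm p}=R_{\rm t}$ as appropriate for the empty loss cone:

```python
import math

G, c = 6.674e-11, 2.998e8
MSUN, RSUN = 1.989e30, 6.957e8


def simpson(f, a, b, n=400):
    """Composite Simpson quadrature (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3


def f_ret_elc(m_bh, chi, m_star=MSUN, r_star=RSUN):
    """Retrograde fraction in the empty loss cone regime (R_p pinned at R_t)."""
    r_t = r_star * (m_bh / m_star) ** (1 / 3)
    e = 1 - 2 * (m_star / m_bh) ** (1 / 3)
    x = G * m_bh / (r_t * (1 + e))
    a_s = 6 * math.pi * x / c**2
    a_j = 4 * math.pi * chi * x**1.5 / c**3

    def weight(i):
        d_omega = a_s - 2 * a_j * math.cos(i)
        return math.sin(i) * (1 + e * math.cos(math.pi + d_omega / 2)) ** 1.5

    return simpson(weight, math.pi / 2, math.pi) / simpson(weight, 0, math.pi)
```

By symmetry the fraction is exactly $1/2$ for $\chi_\bullet=0$, and rises above it for rapid spin and large $M_\bullet$, in line with the discussion of the empty loss cone regime below.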
Here $R_{\rm p}$ ranges from a maximum value of $R_{\rm t}$ down to a minimum value of $R_{\rm min}(\chi_\bullet, i)$. This minimum value, the innermost bound spherical orbit (IBSO), is computed from the Kerr geodesic equations \citep{Bardeen+72}. The IBSO is larger for retrograde spins, which (via Eq.~\ref{eq:LPeakCirc}) introduces a prograde bias.
We illustrate the overall orientation bias in flux--limited samples of shock--powered TDEs in Fig.~\ref{fig:retroBiasCirc}. There is no bias when $\chi_\bullet=0$ (as symmetry demands), but the bias becomes notable when $\chi_\bullet \gtrsim 0.5$. Interestingly, the bias is qualitatively different in the two regimes of loss cone repopulation. In the empty loss cone regime, there is almost no bias for $M_\bullet \lesssim 10^6 M_\odot$, but a moderate retrograde bias for larger SMBHs. In the full loss cone regime, there is a moderate prograde bias across all bins of $M_\bullet$. Since the empty loss cone regime predominates for high--mass SMBHs, and the full loss cone regime predominates for low--mass SMBHs \citep{stone&metzger2016}, we expect that flux--limited, shock--powered TDE samples will exhibit a prograde bias for $M_\bullet \lesssim 10^{6.5} M_\odot$, and a retrograde bias at higher masses.
We may also consider a similar sort of toy model for the reprocessing picture of TDE optical luminosity, designed to illustrate the competition between disk formation (which is faster for retrograde orbits) and the radiative efficiency of a circularized accretion disk (which is higher for prograde orbits). Let us say that the peak optical luminosity in a reprocessing model is
\begin{equation}
L_{\rm peak} = \eta_\bullet \eta_{\rm r} \dot{M}_{\rm max}c^2,
\end{equation}
where $\eta_\bullet$ is the standard radiative efficiency of a thin, equatorial accretion disk (see $\S$~\ref{sec:rates}), and the efficiency with which an optically thick reprocessing layer converts X--ray and extreme UV photons to optical emission is assumed (again, for simplicity) to be a constant, $\eta_{\rm r}$. Here $\dot{M}_{\rm max}$ does not represent the peak of the mass fallback rate to pericenter, $\dot{M}_{\rm fall} = \frac{M_\star}{3t_{\rm fall}}(t/t_{\rm fall})^{-5/3}$, but rather the peak accretion rate through the disk, which we parametrize as
\begin{equation}
\dot{M}_{\rm max} = \frac{M_\star}{2t_{\rm circ}},
\end{equation}
where we assume that the ``circularization timescale'', $t_{\rm circ}$, is a function only of the self--intersection radius, and is related to the fallback time for the most tightly bound debris as $t_{\rm circ} = t_{\rm fall}(R_{\rm SI}/R_{\rm g})^\xi$. This power--law parametrization of the disk formation timescale is crude, but will suffice to explore what types of disk orientation biases we expect if reprocessing is responsible for the observed optical emission. We find modified versions of the empty and full loss cone regime retrograde fractions:
\begin{equation}
\tilde{f}_{\rm ret}^{\rm ELC} = \frac{\int^\pi_{\pi/2}\sin i[1+e\cos(\pi+\frac{\delta\omega}{2})]^{3\xi/2}\eta_\bullet^{3/2}{\rm d}i}{\int_0^\pi\sin i[1+e\cos(\pi+\delta\omega/2)]^{3\xi/2}\eta_\bullet^{3/2}{\rm d}i},
\end{equation}
and
\begin{equation}
\tilde{f}_{\rm ret}^{\rm FLC} = \frac{\int^\pi_{\pi/2} \sin i \int^{R_{\rm t}}_{R_{\rm min}}\left(\frac{\eta_\bullet}{R_{\rm p}^\xi}\right)^{3/2}[1+e\cos(\pi+\frac{\delta\omega}{2})]^{3\xi/2}{\rm d}R_{\rm p}{\rm d}i}{\int_0^\pi\sin i \int^{R_{\rm t}}_{R_{\rm min}} \left(\frac{\eta_\bullet}{R_{\rm p}^\xi}\right)^{3/2} [1+e\cos(\pi+\frac{\delta\omega}{2})]^{3\xi/2}{\rm d}R_{\rm p}{\rm d}i}.
\end{equation}
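A similar quadrature sketch can be made for the reprocessing-model fraction $\tilde{f}_{\rm ret}^{\rm ELC}$. Here $\eta_\bullet$ is evaluated from the standard Kerr ISCO formulae of \citet{Bardeen+72}, with the crude simplification that the disk is treated as prograde for $\cos i>0$ and retrograde otherwise (a step at $i=\pi/2$, so the quadrature is approximate there); all numerical choices are illustrative:

```python
import math

G, c = 6.674e-11, 2.998e8
MSUN, RSUN = 1.989e30, 6.957e8


def eta_disk(chi, prograde):
    """Thin-disk radiative efficiency 1 - E_ISCO for dimensionless spin chi."""
    z1 = 1 + (1 - chi**2) ** (1 / 3) * ((1 + chi) ** (1 / 3) + (1 - chi) ** (1 / 3))
    z2 = math.sqrt(3 * chi**2 + z1**2)
    sign = -1 if prograde else 1
    r_isco = 3 + z2 + sign * math.sqrt((3 - z1) * (3 + z1 + 2 * z2))  # in G M / c^2
    return 1 - math.sqrt(1 - 2 / (3 * r_isco))


def simpson(f, a, b, n=400):
    """Composite Simpson quadrature (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3


def f_ret_elc_reprocessing(m_bh, chi, xi=0.5, m_star=MSUN, r_star=RSUN):
    """Reprocessing-model retrograde fraction in the empty loss cone regime."""
    r_t = r_star * (m_bh / m_star) ** (1 / 3)
    e = 1 - 2 * (m_star / m_bh) ** (1 / 3)
    x = G * m_bh / (r_t * (1 + e))
    a_s = 6 * math.pi * x / c**2
    a_j = 4 * math.pi * chi * x**1.5 / c**3

    def weight(i):
        d_omega = a_s - 2 * a_j * math.cos(i)
        circ = (1 + e * math.cos(math.pi + d_omega / 2)) ** (3 * xi / 2)
        eta = eta_disk(chi, prograde=math.cos(i) > 0)
        return math.sin(i) * circ * eta**1.5

    return simpson(weight, math.pi / 2, math.pi) / simpson(weight, 0, math.pi)
```

The large prograde/retrograde contrast in $\eta_\bullet^{3/2}$ dominates the modest circularization weighting at $\xi=0.5$, which is the origin of the prograde bias described below.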
We illustrate the retrograde fractions in a flux--limited, reprocessing-powered TDE sample in Fig. \ref{fig:retroBiasRep}. In contrast to our earlier shock--powered calculations, our toy model for reprocessing power almost always exhibits a {\it prograde} disk bias, as this configuration yields much higher radiative efficiencies. The detailed nature of the orientation bias depends on the power law index $\xi$ encoding the dependence of circularization efficiency on $R_{\rm SI}$ (in this figure, we use $\xi=0.5$). Very high values of $\xi$ ($\gtrsim 2$) can create a retrograde bias in a sample of TDEs in the empty loss cone regime, but this level of sensitivity to $R_{\rm SI}$ is not suggested by existing hydrodynamical simulations of circularization \citep{Hayasaki+16, Bonnerot+16}. The overall level of bias depends on $\chi_\bullet$, but shows little variation with $M_\bullet$.
\begin{figure}
\includegraphics[width=85mm]{retrograde_bias_circ.pdf}
\caption{The fraction of TDEs with retrograde disks, $f_{\rm ret}$, in a flux-limited sample of (i) optically--selected and (ii) shock-powered tidal disruption flares. In the empty loss cone regime (solid lines), there is no preference for retrograde orbits when SMBH spin $\chi_\bullet$ is zero (the Schwarzschild limit), but the preference becomes more notable for higher values of $\chi_\bullet$ (shown and labeled as color-coded curves). Conversely, in the full loss cone regime (dashed lines), the preference is for prograde disks.}
\label{fig:retroBiasCirc}
\end{figure}
\begin{figure}
\includegraphics[width=85mm]{retrograde_bias_rep.pdf}
\caption{Same as Fig. \ref{fig:retroBiasCirc}, but we now compute the retrograde fraction ($\tilde{f}_{\rm ret}$) considering a model for optical emission based on reprocessed X-ray/EUV emission from an inner accretion disk. In contrast to the shock--powered model of Fig. \ref{fig:retroBiasCirc}, reprocessing-powered TDEs almost always show a bias for {\it prograde} disks due to radiative efficiency considerations. This bias is generally strongest for the empty loss cone regime and smaller SMBHs, but depends somewhat on the power law index $\xi$ (assumed to be 0.5 in this plot). If $\xi \gtrsim 2$, a weak retrograde bias may be recovered in the empty loss cone regime.}
\label{fig:retroBiasRep}
\end{figure}
Of the TDEs we have observed, both PTF09ge{} and ASASSN--14ae{} have late--time accretion disks whose FUV properties were modeled in \citet{vanVelzen+18}. Our X--ray detections are compatible with these disk models provided the disk in PTF09ge{} is prograde with respect to a very rapidly spinning SMBH, and the disk in ASASSN--14ae{} is retrograde with respect to a spinning SMBH. While this sample is too small (and our disk models so far too crude) to meaningfully constrain $f_{\rm ret}$, these observations, and the arguments in this section, highlight the potential of future late--time observations and modeling to determine the dependence of peak flare luminosity on the inclination of the disrupted star's orbit. This may also serve as a useful test between different models of optical power sources in TDEs, as it would be hard to explain a pronounced retrograde bias in the reprocessing paradigm.
In this section, we have used simple but illustrative optical emission models to demonstrate that the selection effects operating in flux-limited, {\it optically selected} TDE samples favor a very different $\chi_\bullet$ distribution than do the selection effects in flux-limited, X--ray selected samples (\S \ref{sec:rates}). Specifically, an X--ray selected sample will be biased strongly towards prograde orbits around rapidly spinning SMBHs, while an optically selected sample will still be biased towards high $|\chi_\bullet|$, but much more weakly so, and may possess either a prograde or retrograde bias depending on the specific optical emission mechanism. A consequence of this is that the quasi-thermal X--ray luminosities in optically selected TDE distributions should be systematically lower than the corresponding X--ray luminosities in an X--ray selected sample, since the former will have cooler disk temperatures, on average.
\section{Conclusions}
We have conducted {\it Chandra}~X--ray observations of four optically--selected TDEs long after the peak of their optical flares. In three cases we detected late--time soft X--ray emission: PTF09axc{}, PTF09ge{}, and ASASSN--14ae{} are best--fit with unabsorbed ($0.3-7~{\rm keV}$) luminosities of $(3.2\pm0.2)\times 10^{42}~{\rm erg~s}^{-1}$, $3.9^{+1.1}_{-1.0}\times 10^{41}~{\rm erg~s}^{-1}$, and $9^{+9}_{-5}\times 10^{40}~{\rm erg~s}^{-1}$, respectively. Our fourth target, PTF09djl{}, was undetected by {\it Chandra}{}, yielding an upper limit on its soft X--ray luminosity of $L_{\rm X} < 3 \times 10^{41}~{\rm erg~s}^{-1}$. Three of these observations represent the longest temporal baseline for X--ray observations of optically--selected TDEs to date: PTF09axc{} and PTF09ge{} were observed roughly eight years after peak, while PTF09djl{} was observed roughly ten years post--peak.
These TDEs exhibit a diversity of X--ray behavior at late times. The X--ray spectrum of PTF09ge{} is best fit as the Wien tail of a thermal blackbody spectrum, similar to soft X--ray spectra observed at early times in optically--selected TDEs (and analogous to the high--soft state of XRBs). In contrast, the X--ray spectrum of PTF09axc{} is best fit as a comparatively hard, non--thermal power law, quite unlike most TDEs seen at early times, and more similar to the spectrum of an AGN or the low--hard state of an XRB. ASASSN--14ae{} does not have sufficient X--ray counts to determine the shape of its spectrum.
Our primary conclusions are as follows:
\begin{enumerate}
\item Late--time X--ray detections are further evidence that PTF09axc{}, PTF09ge{}, and ASASSN--14ae{} represent bona fide TDEs and not a peculiar type of nuclear supernova explosion. The persistence of high X--ray luminosities $\approx 5-10$ yr post--peak also argues strongly against the presence of a thermal instability in TDE disks, as would be predicted by simple $\alpha$-disk theory.
\item We hypothesize that the marked spectral differences between PTF09axc{} and PTF09ge{} may have been caused by a late--time state change in PTF09axc{}\,to a low--hard state (in analogy to the state changes regularly observed in black hole X--ray binaries). Radio follow--up observations of PTF09axc\, could test this hypothesis, as could continued X--ray monitoring of PTF09ge{} to investigate if it also exhibits a state change to a power-law spectrum.
\item Assuming that our observations 4--9 years after optical detection are not caused by short--lived flares, we conclude that most TDEs are persistently bright X--ray sources visible for at least a decade, which has implications for detection
rates in near-future, wide-field X--ray surveys. For example, we find the eROSITA instrument planned for imminent launch on the {\it Spectrum R\"ontgen Gamma} satellite could detect up to $1000$ TDE flares per year if most low--mass SMBHs have near--maximal spins. However, the detection rate would be 170 per year if most SMBHs have zero spin, and (in the Schwarzschild limit) would be further reduced to only 5 per year if SMBHs with masses below $10^6 M_{\odot}$ are excluded.
\item We propose that there is often a delay between the peak optical and the X--ray emission in TDEs, such that optical and X--ray selected TDEs are, in many cases, the same type of flare observed at different stages. For example, in X--ray selected TDEs the optical emission (e.g.~from the circularization shock) may have already subsided below the level that can be detected above the nuclear region of the host galaxy.
\item The persistence of a soft X--ray spectrum at late times (such as in PTF09ge{}) opens up the possibility of black hole spin determinations using continuum fitting techniques (Wen et al.~in prep.). These were, in the past, primarily applied to black holes in soft--state X--ray binaries (\citealt{Jeff2014SSRv}). Late--time X--ray observations will avoid, or at least minimize, theoretical uncertainties associated with early--time TDE disk modeling, such as generic disk tilts, significantly non--circular gas flows, and the presence of optically thick stellar debris on larger scales. The number of X--ray photons detected in the current observations of PTF09ge{} are, however, insufficient to attempt this exercise.
\item If the SMBHs responsible for TDEs possess appreciable spins, a flux--limited sample of TDEs will generally be biased towards an excess of prograde or retrograde disks. In an optically selected sample, the sign of this bias depends on the exact emission mechanism. Shock-powered optical emission \citep{Piran+15} will exhibit a mild retrograde bias in the empty loss cone regime, and a mild prograde bias in the full loss cone regime. If instead the optical emission is powered by reprocessed X--rays generated from a veiled inner accretion flow \citep{Guillochon+14, Metzger&Stone16}, then prograde black hole spins are almost always favored, usually by a factor of a few. X--ray selected TDE samples have a very strong (one--to--two orders of magnitude) bias for prograde orbits if most SMBHs are spinning rapidly.
\end{enumerate}
\section*{Acknowledgments} We would like to thank the {\it Chandra X-ray Observatory} for approving and carrying out the observations presented in this paper. \noindent PGJ acknowledges funding from the
European Research Council under ERC Consolidator Grant agreement no
647208 and discussions with Giacomo Cannizzaro. NCS and AG acknowledge funding from {\it Chandra} GO 18700591. NCS acknowledges additional funding from {\it Chandra} GO 20700515. BDM acknowledges support from NASA through the Astrophysics Theory Program (grant number NNX17AK43G). This research has made use of the NASA/IPAC Extragalactic
Database (NED), which is operated by the Jet Propulsion Laboratory,
California Institute of Technology, under contract with the National
Aeronautics and Space Administration.
\bibliographystyle{aasjournal}
from datetime import datetime, timedelta

from django.test import TestCase
from icalendar import Event as VEvent, vDDDTypes

from events.models import Event
from calendars.models import CalendarFeed


def dummy_event(**kwargs):
    default = {
        'UID': 'uid',
        'SUMMARY': 'event name',
        'DESCRIPTION': 'event description',
        'DTSTART': vDDDTypes(datetime.now()),
        'DTEND': vDDDTypes(datetime.now()),
        'url': 'http://www.tripod.com',
    }
    default.update(kwargs)
    return default


class EventTests(TestCase):
    def setUp(self):
        """
        Create three unsaved Event instances: one which occurred in the
        past and so need not be updated, one whose feed entry is
        unchanged, and one with a matching UID whose information should
        be updated. None of them is persisted (no save() call), so the
        object counts asserted below reflect only events created by
        process_events().
        """
        today = datetime.today()
        self.old_event = Event(name="Old event", slug="old-event",
                               start=(today - timedelta(days=1)),
                               uid='event_dxczvgyrnbdc@meetup.com')
        self.unchanged_event = Event(name="Unchanged event",
                                     slug="unchanged-event",
                                     start=(today - timedelta(days=1)),
                                     uid='event_qgfkkgyrnbhc@meetup.com')
        self.update_event = Event(name="Update event", slug="update-event",
                                  start=(today - timedelta(days=1)),
                                  uid='event_dxczvgyrnbmc@meetup.com')

    def test_add_event(self):
        """Ensure a new event is added."""
        events_feed = [VEvent(dummy_event(UID='somenewevent'))]
        c = CalendarFeed.objects.create(name="Test", url="")
        c.process_events(events_feed)
        self.assertEqual(Event.objects.all().count(), 1)

    def test_update_event(self):
        """Event with matching UID should be updated."""
        event = Event.objects.create(
            name="Name",
            slug="slug",
            uid="somenewevent",
        )
        events_feed = [VEvent(dummy_event(UID='somenewevent'))]
        c = CalendarFeed.objects.create(name="Test", url="")
        c.process_events(events_feed)
        self.assertEqual(
            Event.objects.get(pk=event.pk).url,
            "http://www.tripod.com",
        )
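The behaviour asserted by `test_add_event` and `test_update_event` suggests that `CalendarFeed.process_events` performs an update-or-create keyed on the `UID` field. Below is a rough, framework-free sketch of that logic; the real method presumably goes through the Django ORM (e.g. `Event.objects.update_or_create`), and the dict store and field names here are illustrative assumptions only:

```python
def process_events(store, feed):
    """store: dict mapping uid -> event dict; feed: iterable of VEVENT-like mappings."""
    for vevent in feed:
        uid = str(vevent['UID'])
        record = {
            'uid': uid,
            'name': str(vevent.get('SUMMARY', '')),
            'description': str(vevent.get('DESCRIPTION', '')),
            'url': str(vevent.get('url', '')),
        }
        if uid in store:
            store[uid].update(record)   # matching UID: update the existing row
        else:
            store[uid] = record         # unknown UID: create a new row
```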
The attacks on Nord Stream and the elephant in the room (tags)
In an interview with Berliner Zeitung, U.S. economist Jeffrey Sachs said that destroying Nord Stream would be contrary to Russia's interests; the country would lose "income, financial assets & bargaining power." The U.S. would "benefit strategically and financially."
We are witnessing the collapse of diplomacy (tags)
We, who keep this society running, are footing the bill, while the super-rich and big corporations are lining their pockets, making profits from the crises.
Ukraine's decoupling was foolish and dangerous (tags)
Promote peace, don't fuel conflict. There are only two ways to end a war. Either with a negotiated settlement. Or with the annihilation of one side or the other. That's how wars end, unfortunately, when there is no negotiated settlement...There must be a diplomatic solution.
Nuclear deterrence and Negotiate and peace now! (tags)
"I believe the Administration's recommendation to admit new members to NATO at this time is misguided. If adopted by the United States Senate, it could go down in history as the greatest strategic blunder since the end of the Cold War..." Jack F. Matlock Jr, U.S. ambassador
Stop the war! (tags)
At Easter, "a strong signal must be sent against a policy of military confrontation, against a new global arms race and against an increase in the arsenals of weapons of mass destruction," is the reason given for the call to actively participate in the Easter marches.
Democracy = not more than a brand name (tags)
The USA, a hegemonic power threatened with decline, is struggling to maintain and expand a monopolistic world order. This is supposed to be justified by surrounding itself with an aura of freedom and democracy.
The ban on violence (tags)
We can only free ourselves from this spiral of violence if we return to the principles of international law and end both the use and the threat of force. For they both pursue the same goal - the subjugation of the adversary.
War as crisis accelerator (tags)
Opportunistic parts of the German left are already sorting themselves into NATO supporters and cheerleaders of hollow Western values, who are peddling the stale bourgeois-liberal ideology one last time before the Western states also sink into barbarism.
The other side of the truth (tags)
Without U.S. President Obama's breach of international law eight years ago, Putin's illegal military invasion probably would not have happened... It is time to stop settling for half-truths from one side or the other and tell the story of the conflict in a complete and balanced way.
Cuba and the Ukraine crisis (tags)
History repeats itself after all. The Cuban Missile Crisis left the world balancing on a precipice; the Ukraine crisis exemplifies the same danger. Starting in 1959, the USA stationed nuclear-tipped medium-range missiles in Italy and Turkey aimed at the USSR. The latter responded.
Enlargement to the East: How Nato broke its word (tags)
One point of contention, however, should have been settled by the declassified documents. They allow no other conclusion than that the West made a promise to Gorbachev to leave NATO in its former borders, but that this promise was already broken at the end of the 1990s.
Russia is reacting to NATO's policy of expansion (tags)
Russia is accused of calculating power and bellicose behavior. As far as the situation in Ukraine is concerned, however, the political facts speak a different language. Russia is being built as an enemy for peace. With its Ukraine policy, Russia is actually reacting to NATO's expansionist policy.
Because man is a human being (tags)
People don't have to remain victims. Rebellion and resistance against a disease-causing system serve to stabilize psychological existence and social coexistence. The basis for resistance is sensitization: perceiving that society is deprived and disenfranchised by privatization, for example, that people are violated.
How should our economy grow? (tags)
If, on the other hand, the inequality distribution is reduced via increased spending on education, positive growth effects may result. In addition, too much inequality can destabilize the political system through social unrest, creating great uncertainty for investors, for example with regard to the protection of property rights.
The election as a farce: Donald Trump and the rise of the autocrats (tags)
To elect a demagogue is always dangerous, but it does not condemn a country to the collapse of its democracy. Strong institutions can keep corrupt or autocratic leaders in check. That is exactly what the U.S. Constitution is designed to do, and for much of our history it has been successful in doing so.
Behavior and body in the sights of capital (tags)
The proclamation of states of emergency in the spring of 2020, even if not in all countries, helped to consolidate a state crisis management system that overturned constitutions and now completely bypassed democratic decision-making processes.
On the prohibition of the war of aggression (tags)
Happy Anti-War Day Sept 1, 2020! Peace is not everything but without peace, everything is nothing (Willy Brandt). The German Basic Law mentioned in this article is equivalent to our constitution.
Paraphysique de l'hybris pirocratie (tags)
Violence as a mode of thought...
PHILIPPINES: 8,000 killings proof of policy to kill (tags)
Akbayan Partylist condemns President Rodrigo Duterte's denial of his policy to kill under the War on Drugs.
(A-Radio) Anarchist Black Cross Czech: Presentation on Operation Fenix and related issues (tags)
In the end of November of 2016 we had the opportunity of recording a presentation in Berlin by the Anarchist Black Cross in Czech Republic on the topic of Operation Fenix. The talk comprised the following topics: a short review of what had happened, the use of the term "terrorism", the topic of solidarity in Czech Republic and in general, a reflection on mistakes and how to deal with repression and police infiltrators, and finally the current development of the anarchist movement in that country.
April 2016 Honduras coup update (tags)
In April 2016, in Honduras, dam company DESA, its financers and concessioner, having killed Goldman Prize winner and indigenous leader Bertha Caceres, continues to go after all those who fight in her name. A university lecturer survives an assassination attempt, as did three others before him, one of whom was later assassinated.
The advent of the pyrocrat...
Fall 2015 National Immigrant Solidarity Network Monthly News Alert! (tags)
Migrant's Human Rights Violations, Syria Refugee Crisis – Proudly Made by U.S.A. + Dump the Trump!
US Imperial Wars Responsible for Refugee Crisis (tags)
Unwanted Refugees: EU Countries Block Borders (tags)
(A-Radio) Audio documentation: CrimethInc presentation in Prague 2015 - To Change Everythi (tags)
As Anarchist Radio Berlin we documented a CrimethInc presentation on "To Change Everything", recorded at the anarchist bookfair in Prague, Czech Republic. You'll find the audio (to listen online or download in different sizes) here: http://aradio.blogsport.de/2015/08/26/a-radio-auf-englisch-audio-documentation-crimethinc-presentation-in-prague-2015-to-change-everything/ Length: 57:39 min
Israeli Doctors Refuse Force-Feeding Order (tags)
Dreyer's, Another Ice Cream Maker, Is Owned By Nestle (tags)
Breyer's, processor of ice cream, has run a deceptive ad on IHeart (formerly Clear Channel) stations, which cites the FDA and alleges cows' milk with growth hormone is not significantly different from cows' milk without. Dreyer's is one of many companies owned by Nestle's, a company attempting to privatize the world's water, a factor in California drought with its illegal draining of Sacramento's reservoir, and a major abuser of animals.
Irresponsibly Freezing Russian Assets (tags)
US-Dominated NATO Myths v. Russian Hard Truths (tags)
Know The Brand Names of Water Thief Nestle's (tags)
27 years ago Nestle's permit to siphon off the people's water in California expired
Washington Wants War with Russia (tags)
Escalated War on Islam (tags)
Hypocrisy in Paris (tags)
UK Vote on Palestinian Statehood: Hold the Cheers (tags)
Sweden Recognizes Palestinian Statehood (tags)
Circus Elephants Used for Children's Rides Go Out of Control (tags)
Animal Defenders International (ADI) has released film of elephants used for children's rides fighting as workers try to control them with bullhooks, metal bars and stun guns.
Stoking Confrontation with Russia (tags)
East/West Confrontation Looms (tags)
Alternative Trade Mandate, 20pp (tags)
Trade needs a new vision and should serve the public interest, communities, farmers, workers and the poor, and not only corporate profit interests. Communities should have the right to food sovereignty and to protect themselves from the ravages of financial deregulation & GMOs.
Federal Prohibition Against Legalized Medical Cannabis Enables Robbery, Kidnapping (tags)
The kidnapping, torture and castration of a local medical cannabis dispensary owner plays into the hands of the federal prohibition against medical cannabis and prevents the voters of states like CA and CO from being able to put their votes for legalization into practice. By creating a hostile business climate and forcing dispensary owners to deal in cash the federal government directly enables these sorts of kidnappings and robberies. Yet big banks continue to launder drug cartel money? Hypocrisy anyone?
Sharonian Evil Lives (tags)
Washington's Dirty Game in Ukraine (tags)
The Battle for Ukraine (tags)
Ukraine Dodged a Bullet (tags)
EU Shamelessly Declares Hezbollah a Terrorist Organization (tags)
Colombia Takes Another Step Towards Circus Animal Ban (tags)
May 14, 2013, BOGOTA, COLOMBIA – TODAY, the Senate Commission V of Colombia amended and approved the draft law (Law 244/12) banning the use of animals in circuses, allowing Plenary to pass this initiative. The committee vote was seven in favor and none against.
Google Recognizes Palestinian Statehood (tags)
Globalized Torture (tags)
Israel's Man at State (tags)
UN Vote on Palestine (tags)
Support shown by UN, US, maybe NZ for ethical human rights approach to replace neoliberalism (tags)
A new idea, an ethical approach to human rights, development and globalization to replace neoliberalism, gives people a choice. The ethical human rights approach is being supported on social networking sites by the US government and the UN, while the NZ government may be sympathetic.
Obama's Insult to Poland Shows US Ignorance of History (tags)
The world is finally waking up to the profound ignorance of history that exists in the USA. The current president, born in 1961, is so removed from any connection to World War 2 and European history that he referred to the Nazi death camps in Poland as Polish death camps when giving an award to a former Polish resistance fighter, Jan Karski. He has apologized but it should embarrass all Americans as this allegedly educated American, a lawyer, has no concept of history at all.
European Electoral Postmortems (tags)
Obama's War on Iran (tags)
Economic Dictatorship (tags)
In the crisis, there was a paradigm based on the belief in unlimited economic growth on a planet with infinite resources. This paradigm identifies happiness with wealth, well-being with accumulation of material goods and progress with consumerism.
World Week for the Abolition of Meat: 23-30 January 2012 (tags)
The next WWAM will take place from 23 to 30 January 2012 and coincides with the World Day for the Abolition of Meat on 28 January 2012.
Wrecking Europe to Fix It (tags)
class war
Washington Threatens Palestinian Statehood Bid (tags)
Meaner Tougher IMF with Lagarde (tags)
financial terrorism
Encircling Russia with US Bases (tags)
Re-visiting Uncle Ted & A Few FC Targets (tags)
Revisiting writings of Ted K, whether we disagree with his actions or not is irrelevant. The point is there are lessons to be learned in strategy and where the collective eco-activist movement is heading.
DOWNLOAD Carbusters #42 (tags)
Carbusters #42 June 2010-August 2010 from Prague, Czech Republic includes cartoons by Andy Singer, articles "The Traffic Hierarchy" and "High Speed Rail: Green or Mean" and a book review of "Carjacked."
Torture in US Prisons (tags)
homeland prison torture
World Geopolitics and The Battle for the Mediterranean (tags)
"If one were to live in a city where the only form of employment was a coal mine and there was no means to leave the city, then one would have no choice but to work at the coal mine. Control of labour movement is a cornerstone of the socio-economic objectives of the U.S., the E.U., the World Bank, and a league of associated international financial institutions (IFIs). By rendering work forces immobile in any given geographic locality, the rights of employment choice and occupational alternatives are removed and a new form of monopoly is established — a forced acceptance of work on whole pools of individuals. Rising fuel prices are also adding to the erosion of mobility rights.
The security agenda behind controlling movements is heavily tied to economic objectives, as are international disease scares like avian influenza (bird flu) and swine flu that lock up human movement. Control of mobility in the oceans and international waters of the world is also part of this objective. The internationally illegal Proliferation Security Initiative (PSI) was initiated by the U.S. government, with the support of the E.U., in 2003 as part of the "Global War on Terror." The Proliferation Security Initiative is presented as a means to prevent the proliferation of weapons of mass destruction (WMDs); however, it can be applied to bring about a hold over global maritime mobility. The strategy is a threat to international movement on the high seas and maritime trade. There is good reason why it is illegal under international law and the 1982 U.N. Convention on the Law of the Sea.
On industrial de-location in the European Union and the global economic crisis: this process of industrial de-location has already been underway in the E.U. for years, under which industries have been relocated to Eastern Europe and other global regions. Under this neo-liberal paradigm jobs and industries can gradually be removed from wealthier E.U. states to Southern Mediterranean nations, where cheap and immobile labour forces will be awaiting."
Czech Court Gives Record Sentence for Racial Crime (tags)
A court in the northeastern Czech city of Ostrava Wednesday gave the country's longest-ever sentence for a racial crime, finding four Czech neo-Nazi sympathizers guilty of setting fire to a Roma family's house, in which a two-year-old girl suffered severe burns.
The Selling of the Woodrow Wilson Center (tags)
The DC-based Woodrow Wilson Center, established by Congress, has sold out to corporate America and political interests. Its president, former Congressman Lee H. Hamilton, has questionable corporate links.
Obama's Brave Nuke World (tags)
smoke and mirrors, not policy change
Pianeta di Amore Planet of Love Planeta lásky 愛的行星 Planeet van Liefde Planet of Love Плане (tags)
Radical global performative action: how 'bout a hug?
David Swanson Lays Out 16-Point Plan for U.S. Left (tags)
Activist David Swanson spoke at the First Unitarian-Universalist Church in Hillcrest July 15, ostensibly to promote his book "Daybreak: Undoing the Imperial Presidency and Forming a More Perfect Union," but actually to deliver a 16-point plan for revitalizing the American Left. His speech was full of withering scorn not only for President Obama but also for progressives who supported him in the campaign and still believe in his good intentions.
Obama's War on Yemen (tags)
Obama's expanded war agenda
The Great Game: U.S., NATO War In Afghanistan: Is the Entire World the Target? (tags)
"The longest war in American history prior to the current one was that in Vietnam. U.S. military advisers were present in the country from the late 1950s onward and covert operations were carried on in the early 1960s, but only in the year after the contrived Gulf of Tonkin incident - 1965 - did the Pentagon begin major combat operations in the south and regular bombing raids in the north. The last American combat unit left South Vietnam in 1972, seven years later. The U.S. (and Britain) began bombing the Afghan capital of Kabul on October 7, 2001 with Tomahawk cruise missiles launched from warships and submarines and bombs dropped from warplanes and shortly thereafter American special forces began ground operations, a task that has been conducted since by regular Army and Marine units. The bombing and the ground combat operations continue more than eight years later and both will be intensified to record levels in short order."
Czech president signs Lisbon Treaty/EU Dirty Deal regarding human rights (tags)
With the EU guarantee to the Czech president that the Czech state should not be exposed to property claims from Germans expelled after World War II, the EU not only implicitly consents to ethnic cleansing, it is also highly hypocritical, since the EU regularly points a finger at human-rights-violating countries like Sudan, Myanmar and Iran. It is high time the EU looked in its own mirror, not least with regard to police abuse, racism and the violation of the rights of migrants and of real or alleged terror suspects
NV; New Cave Species at Risk from SNWA Pipeline (tags)
Two new cave species were discovered near Great Basin Natn'l Park's Lehman Caves, part of the same cave aquifer system that SNWA plans to pump billions of gallons away from with their proposed pipeline. The species found were a shrimp called an ostracod and a cave millipede. Both depend on groundwater in caves that would be taken to Whittemore's suburban sprawl developments outside of Vegas city limits.
The Shortwave Report 09/25 Listen Globally! (tags)
A weekly 30 minute review of news and opinion, recorded from a shortwave radio. With times and freqs for listening at home. 2 files- broadcast and slow-modem streaming. Free to rebroadcast. China, Netherlands, Cuba, and Russia.
Reviewing F. William Engdahl's "Full Spectrum Dominance:" Part II (tags)
the threat of totalitarian democracy
BTL:U.S. and Russia Move Toward Reducing Number of Nuclear Weapons (tags)
BETWEEN THE LINES Syndicated Radio Newsmagazine
Bioweapons, Dangerous Vaccines, and Threats of a Global Pandemic (tags)
dangerous mandated vaccines must be avoided
Reviewing F. William Engdahl's "Full Spectrum Dominance" (tags)
must-read on US imperial aims
SHIFTING PARADIGM (tags)
In a sweeping speech to international leaders and security experts on February 6 in Munich, US vice president Biden, prima facie, revealed to an international audience the core of the Obama regime's foreign policy program. Although emphasizing diplomacy and cooperation as its center piece, he gave a stern warning to the Imperialist allies and client states that they are expected to share the burdens of fighting extremism (whatever that is).
Russia Backs Away from Kaliningrad Missile Plans (tags)
Russia's de-escalation should prompt the offloading of US Cold Warriors who have had their day.
1/3: Latest Updates from Israel Invade Gaza, Worldwide Protest Against Invasion (tags)
Latest Updates from Israel Invade Gaza: Worldwide Protest Against Israeli Invasion and Killing at Gaza
The Shortwave Report 12/5 Listen Globally! (tags)
Reinventing the Evil Empire (tags)
The risk of serious confrontation looms.
Facts for Working People (tags)
Labors Militant Voice (LMV) publishes a newspaper which aims to talk to the working class about issues effecting us. August issue: Why Gas Prices are so High, How to Apply for a Government Bail Out, Strikers in South Africa chant "Eat the Rich" and Why we Need a Workers Party Now.
Blockades: Acts of War (tags)
Washington threatens a naval blockade against iran.
Why the MSM Can't Tell The Truth About Georgia (tags)
"From Tbilisi to Teheran" Heightens Suspicions of Motive in Georgian Crisis Since the attacks by Georgia which sparked the fighting (which, alone, suggests Georgia had been promised the support of a larger power) took place on the day that the last two US carrier groups started for the Strait of Hormuz, it would appear that one motive (besides the pipelines from the Caspian which rival the new ones the US built in Afghanistan) appears to be tying up and/or demonizing Russia, in order to minimize its response to US/Israeli Aggression towards Iran. The following article from the Jerusalem Post appears to support this, as does the US/Israeli line seen today, as well as the willingness of the bulk of the Western press to so horribly misrepresent this conflict. https://israel.indymedia.org/newswire/display/9471/index.php
Cannon fodder for the market (tags)
The government of Georgia would never have launched its armed forces against the capital of the Autonomous Republic of South Ossetia in the dawn of August 8, engaged in what it called the re-establishing of constitutional order, without previous coordination with Bush
Using Georgia to Target Russia (tags)
Welcome to the new Cold War and Great Game
US, Israeli Roles in Georgian Crisis Raises Alarms (tags)
Since the attacks by Georgia which sparked the fighting (which, alone, suggests Georgia had been promised the support of a larger power) took place on the day that the last two US carrier groups started for the Strait of Hormuz, it would appear that one motive (besides the pipelines from the Caspian which rival the new ones the US built in Afghanistan) appears to be tying up Russia, in order to minimize its response to US/Israeli Aggression towards Iran.
The Shortwave Report 8/8 Listen Globally! (tags)
The Shortwave Report 7/11 Listen Globally! (tags)
The United States, Europe and Human Rights (tags)
The discredited way in which the European Union suspended its sanctions on Cuba on June 19 has been reported in 16 international press dispatches… Such hypocrisy is made all the more evident by the brutal European measure to expel illegal immigrants from Latin American countries.
Hunger Strike Against US Space Shield: 6/22/08 Santa Monica, CA (tags)
Worldwide hunger strike - June 22, 2008: Against the US "Star Wars" project and the installation of a US military base in the Czech Republic. Thousands of People will be participating throughout the world and locally on the Santa Monica 3rd Street Promenade.
Coup E'Etat Rumblings in Venezuela (tags)
The Bush administration aims to crush democracy in Venezuela.
Mexico,CIA,Guantanamo Rendition Plane, Cocaine, Homeland 'Security' (tags)
''Increasing suspicion even more was the suggestion, in a report of a committee of the European Parliament, that in addition to having been used in drug trafficking the Gulfstream II had flown CIA rendition flights to Guantanamo.'' - Daniel Hopsicker , www.madcowprod.com
UN First Committee Passes DU Resolution in Landslide Vote (tags)
US University report: "In a group of 251 soldiers in one study group in Mississippi, all of whom had normally birthed babies prior to their participation in either of the two (Persian) Gulf Wars, sixty-seven percent of their post-war offspring were born with severe deformities."
Why World War 3 is VERY possible (tags)
Following on the heels of President George W. Bush's warning last week that those countries "interested in avoiding World War III" should align themselves with Washington's escalating threats against Iran, a series of unfolding developments point to the danger of armed violence engulfing a broad swath of the Middle East and Central Asia and, indeed, posing the threat of a new world war. Six years after the US invasion of Afghanistan and four-and-a-half years after the invasion of Iraq, the continuation and deepening of the conflicts in both of these countries is setting into motion a political chain reaction of incalculable dimensions.
Bush invokes threat of "World War III" (tags)
The press conference held by President George W. Bush Wednesday was, like all of his press appearances, full of non-sequiturs, evasions and political bullying. Bush called the news conference to present himself as an opponent of excessive federal spending, by which he meant a few billion for children's health insurance in the bill he vetoed last week, not the hundreds of billions his administration has squandered on wars in Iraq and Afghanistan or the trillions in tax cuts for the rich. The routine of his 20th press conference of the year was broken only when Bush was asked about the visit of Russian President Vladimir Putin to Tehran, widely seen as undercutting the Bush administration's campaign to isolate Iran and pave the way for military action against it. Putin took part in a meeting of the five states bordering on the Caspian Sea, each of them pledging not to allow their territory to be used for military action against any of the others.
The Shortwave Report 10/12/07 ¡Listen Globally! (tags)
George Bush Immunized Himself from Nuremberg Prior to Iraq Iran (tags)
An ounce of prevention is worth a pound of cure.
Nostradamus Third Anti Christ Name Revealed by Peru Meteorite Crash (tags)
The Peru meteorite crash was the awaited sign of the advent of Nostradamus' third anti Christ. Learn his real name.
The Shortwave Report 9/7/07 ¡Listen Globally! (tags)
Mexicans find a rough welcome mat in Canada (tags)
Tourists, being denied entry in increasing numbers, report harsh, insensitive, even racist treatment by Canadian border officials
Russia's Joint Radar Gamble (tags)
There are mainly three proposals made by the Russian President - to broaden US missile defense plans in Europe by bringing NATO into the project, which President Bush has not agreed to; to set up an "online information exchange center" in Moscow as part of the system; and similar installations in European cities with a joint radar at Azerbaijan that would protect the whole of Europe, rather than only one part of Europe.
US "Warned" of Glasgow Attack 2 weeks ago.. (tags)
Was it a leak by the planners, lucky guess or good ole psychic precognition? Predictably, Iran & Syria being blamed by MSM. Setting up that casus belli..
Reviewing Michel Chossudovsky's America's War on Terrorism (tags)
Exposing the war on terrorism as a cruel hoax
A weekly 30 minute review of news and opinion, recorded from a shortwave radio. With times and freqs for listening at home. 2 files- broadcast and slow-modem streaming. Free to rebroadcast. Netherlands, Cuba, China, and Russia.
ANOTHER GLOBAL ARMS RACE? (tags)
Why does the U.S. plan to install an anti-missile shield against Iran in Poland and the Czech Republic?
The Shortwave Report 5/25/07 ¡Listen Globally! (tags)
A weekly 30 minute review of news and opinion, recorded from a shortwave radio. With times and freqs for listening at home. 2 files- broadcast and slow-modem streaming. Free to rebroadcast. Netherlands, Cuba, and Russia.
Halliburton's Kellogg, Brown and Root boys suspect Jewish Conspiracy, Holocaust Survivors (tags)
Strange that while Halliburton is relocating to Dubai, another branch of their military and war fraud operations (Kellogg, Brown and Root boys running Cerberus' IAP Worldwide and Cerberus hedge fund and banking operations) are maneuvering in from the west, with Refco's money-laundering Bawag Bank of Austria, into Israeli banking, with anonymous Jewish connections and their spoils from the looting in the war in Iraq.
Dick Cheney Nearly Killed by Pervez Musharraf through Mullah Dadullah (tags)
Castro says "Close but no cigar."
The Russian Bear rouses from Hibernation (tags)
Speaking at the Munich security conference, Vladimir Putin took the opportunity to denounce America for its pursuit of world domination. Putin's verbal attack lacked the clout the old Soviet Union once possessed, however, the message was not lost on the Americans who immediately went into recovery/spin mode regarding their hegemonic pursuits.
What's the Death Toll, Mr. Bush? (tags)
The ever increasing, unmentionable death toll of U.S. military expansionism in the Middle East, Eastern Europe and elsewhere exceeds one million souls. The human casualties have achieved unification and consensus at last! The slogan of death is unutterable, perfectly silent and devastatingly effective! Devoid of tribe, nation, race, religion and every other divisive factor that contributed to their demise, all humans are united in death – they are finished! In death there are no ideologies, 'critical issues' or causes.
Makram Chams of Titan Corp & 9/11 fame sues Daniel Hopsicker (tags)
Makram Majid Chams of Lebanon, Saudi Arabia, Titan Corporation, San Diego, and Venice, Florida during the events leading up to 9/11, and last known to be with W Bush's allies in Saudi Arabia, is suing Daniel Hopsicker of http://www.madcowprod.com !
Los Angeles Times:Why did Osama bin Laden choose Jeb Bush's Venice,Florida flight school ? (tags)
Wall Street Journal, New York Times, Los Angeles Times, Miami Herald, etc. have maintained a self-imposed blackout of the 9/11, WTC, Pentagon, etc. tragedy's connections to Venice, Florida. This indymedia article-post is my protest against established media's allegiance to the Bush war criminals, stock frauds, money launderers and arms, petroleum and drug traffickers.
L A Times:Why did Osama bin Laden choose Jeb Bush's Huffman Aviation terror flight school (tags)
Germany joins US, British, Israeli axis of aggression (tags)
Last Sunday, German government spokesman Ulrich William spoke on behalf of the chancellor, Angela Merkel (Christian Democratic Union—CDU), and expressed Merkel's "great regret and deep sadness over the consequences of the Israeli air raid on Qana." Two days later, German Foreign Minister Frank-Walter Steinmeier (Social Democratic Party—SPD) began an interview in the Süddeutsche Zeitung with the words: "What took place on Sunday in Qana was appalling. The large number of civilian victims of the Israeli air raid is terrible and unacceptable."
Czech Greens enter right-wing government (tags)
Following the Czech parliamentary elections of June 2 and 3, the Green Party is now preparing to join the right-wing government of the arch-conservative Citizens Party (ODS). The Greens have now entered an Eastern European parliament for the first time, with Strana Zelenych (SZ) receiving 6.3 percent of the vote. Clearly, the Green Party in Prague is beginning its political odyssey at the point it left off in Germany—as a governing party; but this time no longer in alliance with the Social Democrats, but instead with politically conservative and right-wing parties.
A weekly 30 minute review of news and opinion, recorded from a shortwave radio. With times and freqs for listening at home. 2 files- broadcast and slow-modem streaming. Free to rebroadcast. China, Netherlands, Spain, Cuba, and Russia.
The Most Dangerous Double Standard in the Middle East (tags)
Iran and Israel: Ambiguous Nuclear Weapons At present, there is only one country in the Middle East with "secret" and ambiguous stockpiles of nuclear weapons. (full references and links at bottom)
Bush administration finalizes military attack on Iran (tags)
Bush administration finalizes military attack on Iran
Liars (tags)
The Lies and theTruth About the U.S. War on Iraq
CHART, TABLE: CIA Secret Prison Planes--What the MSM Won't Tell (tags)
The European Commission said last week it will investigate published reports that the CIA set up secret jails in Eastern Europe to detain high-profile terrorism suspects. The Commission says the governments of the EU's 25 member nations will be informally questioned about possible human rights violations. News media reported also that the group Human Rights Watch "claims records and other evidence point to POLAND and ROMANIA as countries that allowed their territory to be used by the CIA to jail top suspected al-Qaeda captives." We report that HRW knows that from tracing the movements of CIA planes and WE PROVIDE A LIST OF THE 28 PLANES, 8 SHELL COMPANIES, AND SEVERAL CIA-RELATED COMPANIES.
SOSMM? Racist In Arizona Exposed! (tags)
One of these SOSMM hate mongers who was caught brandishing a weapon at Mexicans has been exposed to be an unmitigated liar. What a surprise!
New Pentagon plans to conquer nations, secure oil, advance globalization, militarize space (tags)
Need some motivation to march for peace this Saturday and to keep working for peace every day? Then please read the following. And please forward it far and wide.
Monty Python on Post-Modern Bush (tags)
This Monty Python letter could cheer us in this murky world of denial and myopia. Pyro-maniacs make poor firefighters. Wolves in sheeps' clothing make poor leaders. Enron lawyers (eg Gonzales) make poor Attorney Generals.
Web Server Takedown Called Speech Threat (tags)
Indymedia sites knocked offline- from yahoo.com
A Bush pre-election strike on Iran 'imminent (tags)
eradicate bush
A Bush Pre-Election Strike On Iran 'Imminent' (tags)
According to White House and Washington Beltway insiders, the Bush administration, worried that it could lose the presidential election to Senator John F. Kerry, has initiated plans to launch a military strike on Iran's top Islamic leadership, its nuclear reactor at Bushehr on the Persian Gulf, and key nuclear targets throughout the country, including the main underground research site at Natanz in central Iran and another in Isfahan. Targets of the planned U.S. attack reportedly include mosques in Tehran, Qom, and Isfahan known by the U.S. to headquarter Iran's top mullahs.
DETAILS OF FBI SEIZURE OF GLOBAL IMC SERVER. (tags)
Here are 3 articles posted by Truthout on the Oct. 7 raid, in the UK , on the servers of Global Indy Media and several local IMC's, apparently instigated by one IMC's posting of a photo of an undercover agent in Europe.
European Parliament 2004 elections and new constitution (tags)
The European Parliament elections, held June 10-13, showed a strong trend among voters against ruling parties that sided with Bush's war on Iraq, as well as those that have been imposing pro-business economic policies, as in Germany and France.
European Parliament 2004 elections results: Throw the Bums Out! (tags)
10,000 INTERNATIONAL OBSERVERS NEEDED IN PALESTINE (tags)
FOR IMMEDIATE RELEASE www.P10K.net Ex-US Marine/Gulf War Veteran & Founder of the Human Shield Action to Iraq Ken O'Keefe announces 'P10K FORCE' plan to Mobilize 10,000 International Observers from Western Nations to the Occupied Palestinian Territories (OPT) - Launching September 11, 2004.
Over 800 US fatalities in Iraq so far (tags)
The total US fatalities just passed the 800 mark today. The total for the whole coalition is over 900. For the month of May, there has been an average of 2.78 coalition fatalities per day. The average since the war began is 2.11 per day.
An anti-state communist perspective on the war (tags)
This is a 2003 article about the second Iraq war, written just before it started. It's from Anarchy: A Journal of Desire Armed. It's unusually prescient in its predictions of how the entire war would go. I post this because the journal has been criticized by some anarchists as "primitivist", and anarchism has been derided by the Left as everything from chaos to disruptive and unthinking. Liberals stereotype anarchists as young fashion victims, implying they are shallow. These are simplistic stereotypes at best, and fear mongering at the worst. Articles like the one below, I hope, give people pause to consider the relevance and usefulness of anti-state, anti-capitalist critique. Remember, this article was written before we invaded Iraq. So here goes. -- Just Another Anarchist
Lies the Government Told Us Part 2 (tags)
World cracks down on Big Tobacco (tags)
To slow the spread of smoking, especially in poor nations, where smoking rates are soaring, the World Health Organization in Geneva voted for unprecedented and potentially deep restrictions on tobacco products.
The Rational Destruction of Yugoslavia (tags)
Now that the United States has completed a so-called humanitarian intervention in Iraq, I believe it is very informative to look back at the previous one in Yugoslavia. A brilliant analysis by noted author and political scientist Michael Parenti.
Russian Spy Daily War Reports (tags)
Get the real scoop at http://www.aeronautics.ru
RUSSIAN MILITARY INTELLIGENCE REPORTS ON THE IRAQ WAR (SUMMARY March 22-28). (tags)
Summary of events, IRAQ War, March 22-28, 2003.
Latest Russian Intelligence report (tags)
The following is the English translation of the IRAQWAR.RU report based on the Russian military intelligence reports.
THE IRAQ WAR - MARCH 25, 2003 SUMMARY. (tags)
Summary of events, IRAQ War, March 25, 2003.
Greenpeace: Momentum Builds for New UN Peace Resolution (tags)
Demands for a UN emergency session are on the rise! 32,015 of you have written to UN Ambassadors around the world. You've sent 29,700 E-cards to friends, colleagues, fellow students, and family members. This is an extraordinary response in a very short time, and what do we want??? MORE! Please re-post onother indymedia sites.
Rebuttal to a fighter pilot (tags)
A co-worker forwarded to me a letter written by a fighter pilot stationed in Turkey. I added my comments and sent it back to her.
Refuting The Top Ten Most Annoying Anti-War Cliches (tags)
Refuting The Top Ten Most Annoying Anti-War Cliches
UN will sink into irrelevance -- Good! (tags)
The United Nations is on the verge of demonstrating finally and fatally its moral bankruptcy and its strategic irrelevance: moral bankruptcy, because it will have made a mockery of the very resolution on whose sanctity it insists; strategic irrelevance, because the United States is going to disarm Iraq anyway.
The Split in the Western Alliance-Can Europe's opposition prevent war ? (tags)
A weekly 30 minute review of news and opinion recorded from a shortwave radio. 2 files- broadcast quality (13.3MB) and quick download (3.3MB). With times and freqs for listening at home. Free to rebroadcast. Netherlands, Spain, Germany, Russia, and Cuba.
The European Summit in Copenhagen (tags)
European summit
Prague antiNATO protests (tags)
The NATO summit (November 21-22) in Prague was met by a week of peaceful protests, despite the presence of 2000 CIA Agents and 12,000 Czech police officers.
basic stats for US imperialism (tags)
a reference guide for activists.
This is very Revealing (tags)
Once upon a time the was a land called America, where the people were actually free, they were free to think, free to speak, free from slavery, even economic slavery. But in the shadows of this great land men were plotting against her becasue they had ideas of their own!
urgent: S26 outlawed / open letter to V Havel (tags)
Prague authorities have revoked permits for all street demonstrations on September 26th. This obvious abuse of power probably won't stop the thousands of activists gearing up to disrupt the IMF/World Bank meetings next week. Activists from around the globe can also help out by signing the open protest letter to Czech President Vaclav Havel (once political prisoner himself), and by faxing it to his office and/or sending email. ©copyLEFT
What Next? On to Prague! (tags)
Article describes some history of the poverty situation in LA and points to the next stop for anti-poverty activists: Prague.
Q: insert query with ajax without reloading whole page I want to insert data through AJAX (without reloading the page). I tried, but it does not show the data and the page also reloads.
I have a file first.php (which contains the form), the AJAX code, and firstcall.php, where the query is executed.
My first.php (HTML form) is:
<form class="reservation-form mb-0" action="" method="post" autocomplete="off">
<input name="name1" id="name1" class="form-control" type="text" placeholder="Enter Name" required aria-required="true">
<input name="age" id="age" class="form-control" required type="number" placeholder="Enter Age" aria-required="true">
<input type="checkbox" id="checkbox" class="checkbox1" name="namec[]" value="<?php echo $value['id']; ?>" >
<input type="submit" class="pull-right btn btn-warning" value="Submit" id="submit">
</form>
Here the data should be displayed:
<div class="col-md-5">
<div class="panel panel-primary" id="showdata">
</div>
</div>
AJAX is:
<script type="text/javascript">
$(document).ready(function(){
$("#submit").click(function(){
var name1 = $("#name1").val();
var age = $("#age").val();
var chkArray=[];
$('.checkbox1:checked').each( function() {
chkArray.push($(this).val());
} );
var selected;
selected = chkArray.join(',') ;
if(selected.length > 1){
$.ajax( {
url:'firstcall.php',
type:'POST',
data:{name1: name1,age: age,namec: chkArray},
}).done(function(data){
$("#showdata").html(data);
});
}
else{
alert("Please at least one of the checkbox");
}
   });
});
</script>
firstcall.php is:
<div class="panel panel-primary" id="showdata">
<?php
session_start(); // required before reading or writing $_SESSION
foreach ($_POST['namec'] as $selected) {
echo $selected;
$_SESSION['name1'] = $_POST["name1"];
$_SESSION['age'] = $_POST["age"];
echo $name1 = $_SESSION['name1'];
echo $age = $_SESSION['age'];
// Note: the mysql_* functions are deprecated (removed in PHP 7) and these
// values are not escaped, so this query is open to SQL injection
$query = mysql_query("insert into patient_details (p_name,p_age,g_number) values ('$name1','$age','$selected')") or die(mysql_error());
}
?>
A: After $("#submit").click(function(event){ add the command
event.preventDefault();
and your page will not be reloaded.
A: The submit button automatically submits the form and reloads the page on click, so the solution is either to change the button type to button or to call preventDefault in the click event handler.
$("#submit1,#submit2").click(function() {
  alert('form submitted')
})
$("#submit3").click(function(e) {
e.preventDefault();
  alert('form submitted')
})
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<form>
this will reload the page
<input type="submit" class="pull-right btn btn-warning" value="Submit" id="submit1">
</form>
<form>
this will not reload the page
<input type="button" value="Submit" id="submit2">
</form>
<form>
this will not reload the page
<input type="submit" value="Submit" id="submit3">
</form>
250 watt metal halide fixtures are UL listed for wet location outdoor use and include bulb, body, ballast kit, light bulb socket and mount. Outdoor 250 watt metal halide fixtures are used for lighting parking lots, building facades, playfields, greenhouses, displays, signs, or general security lighting.
Outdoor 250 watt metal halide fixture with heavy duty die cast aluminum housing.
Powder-coated dark bronze, with a tempered glass hinged door. The 250 watt metal halide floodlight has a polished aluminum reflector, and the door seals with a silicone gasket to keep out moisture and dust.
250 watt metal halide fixture installation should only be performed by a qualified electrician. Supply power should be turned off when replacing components or checking connections. Never perform maintenance or cleaning while the fixture is energized. Disconnect power and allow the lamp to cool before replacing.
Man of Tai Chi hits Berlin cinemas on March 13.
Fighting for the upkeep of his master's temple and his own growth as a practitioner of Tai Chi, titular hero Chen is drawn into the underground fight ring of ruthless manipulator Donaka Mark (Reeves), intent upon corrupting Chen's pure approach. Chen trained Reeves for The Matrix and this movie is also choreographed by Matrix action-master Yuen Woo-ping. So far, so good. It's the acting and the script that bring these heroes to their knees.
The Rückert-Lieder are five songs for voice and orchestra composed by Gustav Mahler in 1901 and 1902.
They were premiered in Vienna in 1905. The poems are taken from works by Friedrich Rückert.
List in chronological order
Blicke mir nicht in die Lieder
Ich atmet' einen linden Duft
Ich bin der Welt abhanden gekommen
Um Mitternacht
Liebst du um Schönheit
Composition
Mahler composed four of the five songs during a stay at the Villa Mahler in the summer of 1901. The last song to be composed is a poem that Mahler set to music for his wife, Alma Mahler. He hid the manuscript in the score of Siegfried, which Alma often sight-read. Unfortunately for the composer, Alma did not come to read the score for several days. Gustav therefore invited Alma to a sight-reading session, during which she discovered the manuscript, which nearly moved her to tears.
Premiere and reception
The first four lieder were premiered in Vienna in 1905 by selected members of the Wiener Philharmoniker. The concert, at which the Kindertotenlieder and some of the Wunderhorn lieder were also premiered, was one of the great successes of Mahler's career. Paul Stefan wrote about it.
Details
Title: Fünf Lieder nach Texten von Friedrich Rückert, Sieben Lieder aus letzter Zeit, or Rückert-Lieder
Composition: June 1901 for the first four, July 1902 for the last
Duration: approximately 25 minutes
Premiere:
The four lieder of 1901: in Vienna
Premiere of the complete version in Vienna
Publication: C. F. Kahnt, 1905
Analysis
Blicke mir nicht in die Lieder
Lexical note: die Lieder = die (Augen)lieder = the eyelids. This spelling is admittedly archaic (today one would write das Lid), but it is attested (cf. the Deutsches Wörterbuch of Jacob Grimm and Wilhelm Grimm).
There is a play on the word's double meaning, eyelid / song: heard aloud (more than read), Rückert's first line could be understood either as "do not look at my eyelids" or as "do not look at my songs". It is in fact the second meaning that matters most, as the development of the poem shows: before letting his circle and the world enjoy it, the poet jealously keeps his work hidden from outside eyes until it is finished, as bees do the cells and the honey in their hive: "Bees, when they build their cells / hide them from the eyes of others", then "When they bring to light / the precious honeycomb / then you will be the first to feast on it!" (free translation). What its musical treatment makes seem a relatively light Lied by Mahler in fact conceals a very profound side of the composer; it is a metaphor for his intimate attitude towards composition and, more generally, towards creation.
Ich atmet' einen linden Duft
Suggested translation of the beginning of the Lied:
I breathed a gentle fragrance.
In the room stood a sprig of linden.
The gift of a dear hand.
How sweet it was, the fragrance of the linden...
Ich bin der Welt abhanden gekommen
Suggested translation of the beginning of the Lied:
I am cut off from the world.
On which I have wasted only too much of my time.
It has heard nothing from me for a long while.
It may well think that I am dead!
and the text ends with these words: I am dead to the world's tumult and rest in my quiet realm. I live alone in my heaven, in my love, in my song.
Um Mitternacht
Suggested translation of the beginning of the Lied:
At midnight.
I awoke.
And looked up to the sky.
Among the millions of stars,
Not one smiled at me.
At midnight.
Liebst du um Schönheit
Suggested translation of the beginning of the Lied:
If you love for beauty, then do not love me!
Love the sun with its golden hair!
Discography
Janet Baker (mezzo-soprano), John Barbirolli, Hallé Orchestra - EMI
Kathleen Ferrier (contralto), Bruno Walter, Vienna Philharmonic Orchestra - DECCA
Christa Ludwig (mezzo-soprano), Herbert von Karajan, Berlin Philharmonic Orchestra - DGG
Christa Ludwig (mezzo-soprano), Otto Klemperer, Philharmonia Orchestra - EMI
Dietrich Fischer-Dieskau (baritone), Karl Böhm, Berlin Philharmonic Orchestra - DGG
Brigitte Fassbaender (mezzo-soprano), Riccardo Chailly, Deutsches Symphonie-Orchester Berlin - DECCA
Violeta Urmana (mezzo-soprano), Pierre Boulez, Vienna Philharmonic Orchestra - DGG
Bibliography
Notes and references
External links
5 Rückert-Lieder (MIDI)
Page on the Rückert-Lieder at gustavmahler.net, with a discography and a commentary by Henry-Louis de La Grange
German text and English translation of the Rückert-Lieder on The Lied and Art Song Texts Page
Song cycle
Lied with orchestra
Work by Gustav Mahler
Miss Kentucky Elle Smith Crowned Miss USA 2021
The winner of the 2021 Miss USA pageant is Miss Kentucky Elle Smith. She took home the title during the competition held Monday at the River Spirit Casino Resort in Tulsa, Okla.
Miss North Dakota Caitlyn Vogel was named the runner-up, followed by Miss Florida Ashley Carino and Miss Illinois Sydni Bennett.
Smith, who is a journalist at WHAS11 News in Louisville, succeeds Miss USA 2020 Asya Branch. This December, Smith will compete in the Miss Universe pageant in Israel. Andrea Meza currently holds the title of Miss Universe.
This year marked the 70th anniversary of the Miss USA competition. The event comes two days after the 2021 Miss Teen USA pageant, which Miss Florida Breanna Myles won.
Q: Loop object private array

I am sure my loop code is wrong, but for the life of me, I cannot see what it is.
I have to create a Student(String name, double gpa) object.
I have a class Classroom that initializes a private array students[]. I ask the user how many students to create. Using a loop, I ask for the name and gpa of each student and I add it in the array with the add(Student aStudent) method. My method is supposed to check if the cell is null. If it is, add the object. If not, go to next cell. I cannot create multiple Student objects.
I also have a get method to return the reference of a specific array cell.
Here is the class Classroom, add and get method. The variables are set by the assignment.
public class Classroom {
private boolean hasSpace = false;
int maxClassroomSize;
private Student students[];
public Classroom (int size){
maxClassroomSize = size;
students = new Student[maxClassroomSize];
}
public boolean add (Student aStudent) {
for(int i = 0; i <=(students.length-1); i++)
{
if (students[i] == null) {
students[i] = aStudent;
hasSpace = true;
} else hasSpace = false;
} return hasSpace;
}
public Student getStudent(int position){
return students[position];
}
}
Here is my main method:
import java.util.Scanner;
public class Program {
public static void main(String[] args) {
int classSize;
int numberStudentsInput;
double gpa = 0;
String studentName = null;
Student student1 = new Student();
Scanner sc = new Scanner(System.in);
System.out.println("How big is this class?");
classSize = sc.nextInt();
Classroom classroom = new Classroom(classSize);
do{
System.out.println("How many students are enrolled in this class?");
numberStudentsInput = sc.nextInt();
if (numberStudentsInput>classSize)
System.out.println("Too many students for the class size. Please try again. ");
} while (numberStudentsInput >classSize);
for (int i=0; i<=(numberStudentsInput-1);i++) {
Scanner sc2 = new Scanner(System.in);
System.out.println("What is the student's name: ");
studentName = sc2.nextLine();
student1.setName(studentName);
System.out.println("What is the student's GPA");
gpa = sc2.nextDouble();
student1.setGPA(gpa);
classroom.add(student1);
}
System.out.println(student1.getName(classroom.getStudent(0)));
System.out.println(student1.getName(classroom.getStudent(1)));
}
}
I am outputting the name of array cell 0 and 1 to see the result but it seems to only keep the latest input. So if I enter "John" and "Paul" as the names, my output would be "Paul" for both cells.
I think my add method is correct, but I am most surely wrong... What am I doing wrong?
Thanks for the input!
A: You are adding the same Student object to all cells. So when you change that object, you are changing it both in the old cells that already have it, and in the new cell to which you are adding it. You say you "cannot create multiple Student objects." I don't understand why not, but that is at the heart of your problem.
A: You have to create a new Student object inside the for loop
Student student;
for(...) {
student = new Student();
..
classroom.add(student);
}
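To make the aliasing problem concrete, here is a self-contained sketch (the Student class below is a hypothetical minimal stand-in for the asker's, with only a name field):

```java
// Minimal stand-in for the asker's Student class (name field only).
class Student {
    private String name;

    public void setName(String name) { this.name = name; }

    public String getName() { return name; }
}

class AliasingDemo {
    public static void main(String[] args) {
        Student[] students = new Student[2];

        // Reusing ONE object: both array cells point at the same Student,
        // so the second setName overwrites the first name.
        Student s = new Student();
        s.setName("John");
        students[0] = s;
        s.setName("Paul");
        students[1] = s;
        System.out.println(students[0].getName()); // Paul
        System.out.println(students[1].getName()); // Paul

        // Creating a NEW object per entry keeps the cells independent.
        students[0] = new Student();
        students[0].setName("John");
        students[1] = new Student();
        students[1].setName("Paul");
        System.out.println(students[0].getName()); // John
        System.out.println(students[1].getName()); // Paul
    }
}
```

The same reasoning explains why new Student() must be called inside the input loop: each iteration then fills its array cell with its own object.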
\section{Introduction}\label{intro}
The descent theory for tori was first established by Colliot-Th\'el\`ene and Sansuc in \cite{CTS87} and was extended by Skorobogatov to groups of multiplicative type in \cite{Sk}. In a series of papers \cite{H}, \cite{HSk}, \cite{HSk1}, Harari and Skorobogatov introduced the descent obstruction for a general algebraic group and compared it with the Brauer-Manin obstruction. By various works of Poonen \cite{P}, the second named author \cite{D09}, Stoll \cite{St} and Skorobogatov \cite{Sk1}, it was proved that the descent obstruction is equivalent to the \'etale Brauer-Manin obstruction for smooth projective geometrically integral varieties. In this paper, we study the relation between the descent obstruction and the Brauer-Manin obstruction for open varieties by using new arithmetic tools developed in \cite{BD}, \cite{CT08}, \cite{CTX}, \cite{D}, \cite{Ha08} and \cite{HS05}, and we extend the equivalence between the descent obstruction and the \'etale Brauer-Manin obstruction to smooth \emph{quasi-projective} varieties.
Let $k$ be a number field, $\Omega_k$ the set of all primes of $k$ and ${\bf A}_k$ the adelic ring of $k$. A variety over $k$ is defined to be a separated scheme $X$ of finite type over $k$. Fix an algebraic closure $\bar k$ of $k$. We denote by $X_{\bar k}$ the fibre product $X\times_k \bar{k}$. Let $${\rm Br}(X)=H^2_{\textup{\'et}}(X, \Bbb G_m), \ \ \ {\rm Br}_1(X)= {\rm ker}({\rm Br}(X) \rightarrow {\rm Br}(X_{\bar k})) \ \ \ \text{and} \ \ \ {\rm Br}_0(X)= \rm{Im} ({\rm Br}(k) \xrightarrow{\pi^*} {\rm Br}(X)) $$ where $X\xrightarrow{\pi} Spec(k)$ is the structure morphism, and ${\rm Br}_a(X)={\rm Br}_1(X)/{\rm Br}_0(X)$. For any subgroup $B$ of ${\rm Br}(X)$, one can define the Brauer-Manin set
$$ X({\bf A}_k)^B= \{ (x_v)_{v\in \Omega_k}\in X({\bf A}_k): \ \sum_{v\in \Omega_k} {\rm inv}_v(\xi(x_v))=0 \ \ \text{for all} \ \xi\in B \} $$ with respect to $B$. When $B={\rm Br}(X)$, we simply write this Brauer-Manin set as $ X({\bf A}_k)^{{\rm Br}}$.
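Note that constant classes impose no condition (a standard observation, recalled here for convenience): if $\xi=\pi^*(\alpha)$ with $\alpha\in {\rm Br}(k)$, then $\xi(x_v)$ is the image of $\alpha$ in ${\rm Br}(k_v)$ for every $v\in \Omega_k$, and the fundamental exact sequence of class field theory
$$ 0\rightarrow {\rm Br}(k)\rightarrow \bigoplus_{v\in \Omega_k} {\rm Br}(k_v) \xrightarrow{\sum_v {\rm inv}_v} \Bbb Q/\Bbb Z \rightarrow 0 $$
gives $\sum_{v\in \Omega_k} {\rm inv}_v(\xi(x_v))=0$. In particular, the set $X({\bf A}_k)^B$ only depends on the image of $B$ in ${\rm Br}(X)/{\rm Br}_0(X)$.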
Suppose $Y\xrightarrow{f} X$ is a left torsor under a linear algebraic group $G$ over $k$. The descent obstruction (see \cite{H}, \cite{HSk} and \cite{HSk1}) given by $f$ is defined by the following set
$$ X({\bf A}_k)^f = \{(x_v)\in X({\bf A}_k): ([Y](x_v))\in {\rm{Im}} (H^1(k, G) \rightarrow \prod_{v\in \Omega_k} H^1(k_v, G)) \} = \bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)) $$ where $Y^\sigma \xrightarrow{f_\sigma}X$ is the twist of $Y\xrightarrow{f} X$ by a 1-cocycle representing $\sigma\in H^1(k,G)$. Moreover, one can define
$$ X({\bf A}_k)^{\text{desc}} =\bigcap_{Y\xrightarrow{f} X} X({\bf A}_k)^f$$
following \cite{P}, where $Y\xrightarrow{f} X$ runs through all torsors under all linear algebraic groups over $k$.
The main results in this paper are the following theorems.
\begin{thm}\label{c-a} {\rm (Theorem \ref{main-general}) } Let $k$ be a number field, $G$ a connected linear algebraic group or a group of multiplicative type over $k$, and $X$ a smooth and geometrically integral variety over $k$. Suppose $Y\xrightarrow{f} X$ is a left torsor under $G$. For any subgroup $A \subseteq {\rm Br}(X)$ which contains the kernel of the natural map $f^*: {\rm Br}(X) \rightarrow {\rm Br}(Y)$ we have
$$ X({\bf A}_k)^A = \bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)^{f_\sigma^*(A)}) $$ where $Y^\sigma \xrightarrow{f_\sigma}X$ is the twist of $Y\xrightarrow{f} X$ by $\sigma$ and $ {\rm Br}(X) \xrightarrow{f_\sigma^*} {\rm Br}(Y^\sigma)$ is the associated pull-back map, for each $\sigma\in H^1(k,G)$.
\end{thm}
When $G$ is a torus, this theorem can be refined in order to get Theorem \ref{tor} in \S 4. In particular, we prove:
\begin{thm}\label{intor} {\rm (Corollary \ref{algebraic}) } Under the same assumptions as in Theorem \ref{c-a}, if $G$ is assumed to be a torus, then
$$ X({\bf A}_k)^{{\rm Br}_1(X)} = \bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)^{{\rm Br}_1(Y^\sigma)}) $$
and
$$ X({\bf A}_k)^{{\rm Br}} = \bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)^{{\rm Br}_1(Y^\sigma)+f_\sigma^*({\rm Br}(X))}) . $$
\end{thm}
This result is inspired by some lectures by Yonatan Harpaz. It should be pointed out that the first part in Theorem \ref{intor} was first obtained by Dasheng Wei in \cite{Wei}: his proof uses an argument of Harari and Skorobogatov in \cite{HaSk} together with an exact sequence due to Sansuc (see \cite{BD}, Theorem 2.8). Theorem \ref{intor} can be applied to study strong approximation, as in \cite{Wei}. It should be noted that in general, the image of ${\rm Br} (X)$ in ${\rm Br}(Y^\sigma)$ in Theorem \ref{c-a} and Theorem \ref{intor} is not easy to describe, even under the assumption $\bar k[X]^\times=\bar k^\times$ (see \cite[Theorem 1.7(b)]{HSk03}).
\begin{definition} Let $X$ be a variety over a number field $k$ and let $B$ be a subgroup of ${\rm Br}(X)$. For a finite subset $S$ of $\Omega_k$, we denote by $pr^S: X({\bf A}_k) \to X({\bf A}_k^S)$ the projection map, where ${\bf A}_k^S$ is the set of adeles of $k$ without $S$-components.
We say that $X$ satisfies \emph{strong approximation off $S$} if $X({\bf A}_{k})\neq \emptyset$ and the diagonal image of $X(k)$ is dense in $pr^{S}(X({\bf A}_{k}))$.
We say that $X$ satisfies \emph{strong approximation with respect to $B$ off $S$} if $X({\bf A}_{k})^{B} \neq \emptyset$ and the diagonal image of $X(k)$ is dense in $pr^{S}(X({\bf A}_{k})^{B})$.
\end{definition}
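Two classical examples may help to fix ideas (they are standard and not used in the sequel): the additive group $\Bbb G_a$ satisfies strong approximation off $S$ for every non-empty finite $S$, by strong approximation for adeles. By contrast, $\Bbb G_m$ fails strong approximation off $\infty_k$: for $k=\Bbb Q$, the subgroup $\hat{\Bbb Z}^\times=\prod_p \Bbb Z_p^\times$ is open in $({\bf A}_{\Bbb Q}^{\infty_{\Bbb Q}})^\times$, and a rational number lying in it is a unit at every prime, hence equals $\pm 1$; therefore
$$ \overline{\Bbb Q^\times} \cap \hat{\Bbb Z}^\times =\{\pm 1\} \subsetneq \hat{\Bbb Z}^\times , $$
so that $\Bbb Q^\times$ is not dense in $({\bf A}_{\Bbb Q}^{\infty_{\Bbb Q}})^\times$.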
Corollary 3.20 in \cite{D} provides a sufficient condition for strong approximation with Brauer-Manin obstruction to hold for a connected linear algebraic group. As an application of Theorem \ref{intor}, we prove that this sufficient condition is also a necessary condition:
\begin{thm} \label{application} {\rm (Corollary \ref{iff})} Let $G$ be a connected linear algebraic group over a number field $k$ and let $S$ be a finite subset of $\Omega_k$ containing $\infty_k$. Then $G$ satisfies strong approximation with respect to ${\rm Br}_1(G)$ off $S$ if and only if $\prod_{v\in S} G'(k_v)$ is not compact for any non-trivial simple factor $G'$ of the semi-simple part $G^{ss}$ of $G$.
\end{thm}
For any variety $X$ over a number field $k$, one can define, following \cite{P}:
$$ X({\bf A}_k)^{\textup{\'et}, {\rm Br}}= \bigcap_{Y\xrightarrow{f} X} \bigcup_{\sigma\in H^1(k, F)} f_\sigma (Y^\sigma({\bf A}_k)^{{\rm Br}}) \, ,$$ where $Y\xrightarrow{f} X$ runs through all torsors under all finite group schemes $F$ over $k$. The last two sections of the paper are devoted to the proof of the following generalization of \cite{D09} and \cite{Sk1}:
\begin{thm}\label{inteq} {\rm (Corollary \ref{oneside} and Theorem \ref{main-last}) } If $X$ is a smooth quasi-projective and geometrically integral variety over a number field $k$, then
$$ X({\bf A}_k)^{\rm{desc}} = X({\bf A}_k)^{\textup{\'et}, {\rm Br}} . $$
\end{thm}
Terminology and notations are standard if not explained. For any connected linear algebraic group $G$ over a field $k$ of characteristic zero, the reductive part $G^{\rm{red}}$ of $G$ is defined by the exact sequence
$$ 1\rightarrow R_u(G)\rightarrow G \rightarrow G^{\rm{red}} \rightarrow 1 $$ where $R_u(G)$ is the unipotent radical of $G$. The semi-simple part $G^{ss}$ of $G$ is defined to be the derived subgroup $[G^{\rm{red}}, G^{\rm{red}}]$, which is isogenous to the product of its simple factors, and the maximal toric quotient $G^{tor}$ of $G$ is defined to be $G^{\rm{red}}/[G^{\rm{red}}, G^{\rm{red}}]$. We use $\hat{G}$ for the character group of $G$. For a topological abelian group $A$, the topological dual of $A$ is defined as $A^D=\Hom_{cont} (A, \Bbb Q/\Bbb Z)$ with the compact-open topology. For any ring $R$, $R^\times$ stands for the group of invertible elements of $R$. For a number field $k$, we denote by $\infty_k$ the set of all archimedean primes of $k$ and by $O_S$ the ring of $S$-integers, for any finite subset $S \subset \Omega_k$ containing $\infty_k$. For any $v\in \Omega_k$, $k_v$ is the completion of $k$ with respect to $v$, and if $v\in \Omega_k\setminus \infty_k$, $O_v$ is the integral ring of $k_v$.
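As a routine illustration of these notions (recalled only to fix ideas): for $G={\rm GL}_n$ one has $R_u(G)=1$, so $G^{\rm{red}}={\rm GL}_n$; the semi-simple part is $G^{ss}=[{\rm GL}_n, {\rm GL}_n]={\rm SL}_n$, and the determinant identifies the maximal toric quotient $G^{tor}$ with $\Bbb G_m$, via the exact sequence
$$ 1\rightarrow {\rm SL}_n \rightarrow {\rm GL}_n \xrightarrow{\det} \Bbb G_m \rightarrow 1 . $$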
The paper is organized as follows. In \S \ref{sl}, we establish some algebraic results over an arbitrary field of characteristic zero which we need in the next sections. Then we prove Theorem \ref{c-a} in \S \ref{clag}, Theorem \ref{intor} in \S \ref{rtc}. As an application of those results, we prove Theorem \ref{application} in \S \ref{aa}. Theorem \ref{inteq} is proved in \S \ref{cI} and \S \ref{CII}.
\section{Brauer groups of torsors}\label{sl}
In this section, we assume that $k$ is an arbitrary field of characteristic 0.
\begin{lem} \label{br} Let $H$ be a semi-simple simply connected group or a unipotent group over $k$. Suppose $X$ is a smooth and geometrically integral variety over $k$. If $Z\xrightarrow{\rho} X$ is a torsor under $H$, then the induced map ${\rm Br}(X)\xrightarrow{\rho^*}{\rm Br}(Z)$ is an isomorphism.
\end{lem}
\begin{proof} We first show that ${\rm Br}(X)\xrightarrow{\cong} {\rm Br}(X\times_k H)$, where the map is induced by the natural projection $X \times_k H \to X$. Using the spectral sequence $$ H^p(k, H^q(X_{\bar k}, \Bbb G_m)) \Rightarrow H^{p+q}(X, \Bbb G_m) , $$ one only needs to show that
$$ \bar k [X_{\bar k}]^\times/\bar k^\times\xrightarrow{\cong} {\bar k}[X_{\bar k} \times_{\bar k} H_{\bar k}]^\times /\bar k^\times , \ \ \ {\rm Pic}(X_{\bar k})\xrightarrow{\cong} {\rm Pic}(X_{\bar k} \times_{\bar k} H_{\bar k}) \ \ \ \text{and} \ \ \ {\rm Br}(X_{\bar k})\xrightarrow{\cong} {\rm Br}(X_{\bar k} \times_{\bar k} H_{\bar k}). $$
Since $\bar k[H]^\times=\bar k^\times$ and ${\rm Pic}(H_{\bar k})={\rm Br}(H_{\bar k})=0$ by \cite[Proposition 2.6]{CTX}, the first two parts are true by \cite[Proposition 6.10]{Sansuc}. To prove the last part, the Kummer exact sequence ensures that one only needs to prove that
\begin{equation} \label{pr} H^2_{\textup{\'et}}(X_{\bar k}, \Bbb Z/n) \xrightarrow{\cong} H_{\textup{\'et}}^2(X_{\bar k} \times_{\bar k} H_{\bar k}, \Bbb Z/n ) \end{equation} for all $n \geq 1$. This last isomorphism follows from \cite[Proposition 2.2]{SZ} and \cite[Expos\'e XI, Th\'eor\`eme 4.4]{SGA4} with
$H^i_{\textup{\'et}}(H_{\bar k}, \Bbb Z/n)=0$ for $i=1, 2$. So we proved the required isomorphism ${\rm Br}(X)\xrightarrow{\cong} {\rm Br}(X\times_k H)$.
Let us now deduce Lemma \ref{br}: since ${\rm Pic}(H)=0$, \cite[Proposition 2.4]{BD} gives the following short exact sequence
$$ 0 \rightarrow {\rm Br} (X) \rightarrow {\rm Br} (Z) \xrightarrow{m^*-p_Z^*} {\rm Br}(H\times_{k} Z) \, ,$$
where $m^*$ and $p_Z^*$ are induced by the multiplication map $H\times_k Z\xrightarrow{m} Z$ and the projection map $H\times_k Z\xrightarrow{p_Z} Z$ respectively. Since $m\circ (1_{H}\times \id)=p_Z \circ (1_H\times \id ) = \id $, one concludes that $m^*=p_Z^*$ by the above argument. Therefore ${\rm Br}(X)\xrightarrow{\cong}{\rm Br}(Z)$.
\end{proof}
Let $H$ be a closed subgroup of an algebraic group $G$ over $k$, and $Y\xrightarrow{f} X$ be a left torsor under $H$. Let $Z\xrightarrow{\rho} X$ be the left torsor under $G$ defined by the contracted product $Z=G\times^H Y$ (see \cite[Example 3 in p.21]{Sko}): the torsor $Z$ is the push-forward of $Y$ by the homomorphism $H \to G$. The projection map $G\times_k Y \xrightarrow{pr_G} G$ induces the following commutative diagram
\begin{equation} \label{d} \begin{CD}
G\times_k Y @>>> Z=G\times^H Y \\
@V{pr_G}VV @VV{\theta}V \\
G @>{\pi}>> G/H \, , \end{CD} \end{equation}
where $\theta$ is induced by $pr_G$ via the quotient by $H$.
\begin{lem} \label{q} With the above notations, for any $\gamma\in (G/H)(k)$, the composite map $\theta^{-1}(\gamma) \to Z \xrightarrow{\rho} X$ is naturally a left torsor under the inner twist $H^\sigma$ of $H$, where $\sigma\in Z^1(k, H)$ is a cocycle representing the $k$-torsor $\pi^{-1}(\gamma)$ under $H$; this torsor is canonically isomorphic to the twist of $Y\xrightarrow{f} X$ by $\pi^{-1}(\gamma)$.
\end{lem}
\begin{proof} It follows from diagram (\ref{d}) and \cite[Example 2 in p.20]{Sko}. \end{proof}
Let $G$ be a connected linear algebraic group over $k$, and $Y$ be a smooth variety over $k$. Since $G_{\bar k}$ is rational over $\bar k$ by Bruhat decomposition, the projections $G\times_k Y\to G$ and $G\times_k Y\to Y$ induce an isomorphism
$$ {\rm Br}_a(G)\oplus {\rm Br}_a(Y) \xrightarrow{\sim} {\rm Br}_a(G\times_k Y) $$
by \cite[Lemma 6.6]{Sansuc}. If $P$ is a (left) torsor under $G$ over $k$ and $H^3(k, \bar{k}^\times)=0$, the previous result generalizes to an isomorphism
\begin{equation} \label{iso-sansuc}
{\rm Br}_a(P) \oplus {\rm Br}_a(Y) \xrightarrow{\sim} {\rm Br}_a(P \times Y)
\end{equation}
by \cite[Lemma 5.1]{BvH}.
Let $G$ be a connected linear algebraic group over $k$ and let $X$ be a smooth variety over $k$ with $H^3(k, \bar{k}^\times)=0$. Suppose that $Y\xrightarrow{f} X$ is a left torsor under $G$ and $P$ is a left $k$-torsor under $G$, associated to a cocycle $\sigma \in Z^1(k,G)$. One can consider $P$ as a right torsor under $G$ by defining a right action $ x\circ g:= g^{-1} x$ (see \cite[Example 2 in p.20]{Sko}). This right torsor is called the inverse right torsor of $P$ under $G$, and is denoted by $P'$.
One can now consider the map given by the quotient of $P \times_k Y$ by the diagonal action of $G$ given by $g \cdot (p,y) := (p \circ g^{-1}, g \cdot y) = (g \cdot p, g \cdot y)$:
$$ \chi_P: P\times_k Y\rightarrow Y^\sigma:=P'\times^G Y \, .$$
\begin{definition} \label{brauer-twist} With the above notation, assuming that $H^3(k, \bar{k}^\times) = 0$, consider the map
$$ \psi_\sigma = \psi_P : {\rm Br}_a(Y^\sigma) \xrightarrow{\chi_P^*} {\rm Br}_a (P\times_k Y) \xleftarrow{\sim} {\rm Br}_a(P) \oplus {\rm Br}_a(Y) \rightarrow {\rm Br}_a(Y) \, .$$
\end{definition}
The following lemma, which compares the algebraic Brauer groups of twists of a given torsor, can be regarded as an extension of \cite[Lemma 1.3]{Wei} to torsors under connected linear algebraic groups.
\begin{lem} \label{twist-isom} The morphism $\psi_\sigma$ in Definition \ref{brauer-twist} is an isomorphism.
\end{lem}
\begin{proof}
The natural morphism $(pr_P,\chi_P) : P \times_k Y \to P \times_k Y^\sigma$ is an isomorphism, and we have a commutative diagram:
$$ \begin{CD}
P\times_k Y @>{(pr_P,\chi_P)}>> P\times_k Y^\sigma \\
@V{pr_P}VV @VV{pr_P}V \\
P @>>{\id}> P \, .
\end{CD}$$
Therefore $(pr_P,\chi_P)^* : {\rm Br}_a(Y^\sigma \times_k P) \to {\rm Br}_a(Y \times_k P)$ induces the identity map on the subgroups ${\rm Br}_a(P) \subset {\rm Br}_a(Y^\sigma \times_k P)$ and ${\rm Br}_a(P) \subset {\rm Br}_a(Y \times_k P)$, hence
$$\psi_\sigma : {\rm Br}_a(Y^\sigma) \to {\rm Br}_a(Y^\sigma \times_k P) \xrightarrow{(pr_P,\chi_P)^*} {\rm Br}_a(Y \times_k P) \to {\rm Br}_a(Y)$$
is an isomorphism (using the isomorphism \eqref{iso-sansuc}).
\end{proof}
Let $f: Y\rightarrow X$ be a torsor under a connected linear algebraic group $G$ over $k$ and let $$ a_Y: \ G \times_k Y\rightarrow Y$$ be the action of $G$. There is a canonical map $\lambda : {\rm Br}_1(Y)\rightarrow {\rm Br}_a(G)$ by \cite[Lemma 6.4]{Sansuc}. Let $e: {\rm Br}_a(G)\rightarrow {\rm Br}_1(G)$ be the section of ${\rm Br}_1(G)\rightarrow {\rm Br}_a(G)$ such that $1_G^*\circ e=0$. If $X$ is smooth and geometrically integral, then the following diagram
\begin{equation} \label{diag torsor}
\begin{CD}
{\rm Br}_1(Y) @>{\lambda}>> {\rm Br}_a(G) \\
@VVV @VV{p_G^* \circ e}V \\
{\rm Br}(Y) @>>{a_Y^*-p_Y^*}> {\rm Br}(G\times_k Y)
\end{CD}
\end{equation}
commutes by \cite[Theorem 2.8]{BD}, where
$G\times_k Y\xrightarrow{p_G} G$ and $G\times_k Y\xrightarrow{p_Y} Y$ are the projections. One can reformulate the commutative diagram \eqref{diag torsor} in the following proposition:
\begin{prop}\label{torsorbraueraction} With the above notation, one has $$ b(t \cdot x)=\lambda (b)(t)+b(x)$$ for any $x\in Y(k)$, $t\in G(k)$ and $b\in {\rm Br}_1(Y)$.
\end{prop}
\begin{proof} The commutativity of diagram \eqref{diag torsor} implies that
$$a_Y^*-p_Y^*=p_G^*\circ e\circ \lambda: \ {\rm Br}_1(Y)\rightarrow {\rm Br}_1(G\times Y) \, ,$$
therefore one has
$$b(t \cdot x)=a_Y^*(b)(t,x)=p_Y^*(b)(t,x)+p_G^*\circ e\circ \lambda (b) (t,x)=b(x)+\lambda (b) (t)$$ as required.
\end{proof}
\section{Connected linear algebraic groups or groups of multiplicative type}\label{clag}
In this section, we study the relation between the descent obstruction and the Brauer-Manin obstruction for a general connected linear algebraic group or a group of multiplicative type.
First we need the following fact concerning topological groups:
\begin{lem}\label{top} Let $f: M\rightarrow N$ be an open homomorphism of topological groups. If $K$ is a closed subgroup of $M$ containing ${\rm ker}(f)$, then $f(K)$ is a closed subgroup of $N$.
\end{lem}
\begin{proof} Since $K$ is a closed subgroup containing ${\rm ker}(f)$, one has $$f(K)=f(M)\setminus f(M\setminus K) . $$ Since $f$ is an open homomorphism, $f(M)$ is an open subgroup of $N$. This implies that $f(M)$ is closed in $N$. Since $f(M\setminus K)$ is open in $N$, one concludes that $f(K)$ is closed in $N$.
\end{proof}
\begin{rem} The assumption $K \supseteq {\rm ker}(f)$ in Lemma \ref{top} cannot be removed. For example, the projection map $pr^S: {\bf A}_k \to {\bf A}_k^S$ is open, where ${\bf A}_k^S$ is the set of adeles of $k$ without $S$-components. By the product formula, $k$ is a discrete (hence closed) subgroup of ${\bf A}_k$; however, when $S$ is not empty, $k$ is dense in ${\bf A}_k^S$ by strong approximation for $\Bbb G_a$.
\end{rem}
For a short exact sequence of connected linear algebraic groups, one has the following result.
\begin{prop}\label{connected-groups} Let $$1\rightarrow G_1 \xrightarrow{\psi} G_2 \xrightarrow{\phi} G_3 \rightarrow 1$$ be a short exact sequence of connected linear algebraic groups over a number field $k$. Then
(1) $\phi\left(G_2({\bf A}_k)^{{\rm Br}_1(G_2)}\right)$ is a closed subgroup of $G_3({\bf A}_k)$.
(2) If $G'(k_\infty)$ is not compact for each simple factor $G'$ of the semi-simple part of $G_3$, then one has
$$ G_3({\bf A}_k)^{{\rm Br}_1(G_3)} = G_3(k) \cdot \phi\left(G_2({\bf A}_k)^{{\rm Br}_1(G_2)}\right) . $$
\end{prop}
\begin{proof} Let $S$ be a sufficiently large finite set of primes of $\Omega_k$ containing $\infty_k$ and let
${\bf G}_1$ (resp. ${\bf G}_2$, resp. ${\bf G}_3$) be a smooth group scheme model of $G_1$ (resp. $G_2$, resp. $G_3$) over $O_S$ with connected fibres, such that
the short exact sequence of smooth group schemes
$$ 1 \rightarrow {\bf G}_1 \xrightarrow{\psi} {\bf G}_2 \xrightarrow{\phi} {\bf G}_3 \rightarrow 1$$ extends the given short exact sequence of their generic fibres. For every $v\notin S$, the set $H_{\textup{\'et}}^1(O_v, {\bf G}_1)$ is trivial by Hensel's lemma together with Lang's theorem, and the following diagram
$$ \begin{CD} {\bf G}_3(O_v) @>{\partial_v}>> H_{\textup{\'et}}^1(O_v, {\bf G}_1) \\
@VVV @VVV \\
G_3(k_v) @>>{\partial_v}> H^1(k_v, G_1) \end{CD} $$ commutes, hence we deduce the following commutative diagram of exact sequences in Galois cohomology:
$$\begin{CD}
G_1(k) @>{\psi}>> G_2(k) @>{\phi}>> G_3(k) @>{\partial}>> H^1(k, G_1) \\
@VVV @VVV @VVV @VVV \\
G_1({\bf A}_k) @>>{(\psi_v)}> G_2({\bf A}_k) @>>{(\phi_v)}> G_3({\bf A}_k) @>>{(\partial_v)}> \bigoplus_{v\in \Omega_k} H^1(k_v, G_1) \, . \end{CD} $$
In addition, \cite[Theorem 5.1]{D} and \cite[Corollary 6.11]{Sansuc} gives the following commutative diagram of exact sequences of topological groups and pointed topological spaces:
\begin{equation} \label{diag adelic}
\begin{CD}
@. @. G_1({\bf A}_k) @>{\theta_1}>> {\rm Br}_a(G_1)^D @>>> {\mbox{\textcyr{Sh}}}^1(k, G_1) \\
@. @. @VV{(\psi_v)}V @VV{(\psi^*)^D}V @.\\
1 @>>> {\rm ker}(\theta_2) @>>> G_2({\bf A}_k) @>{\theta_2}>> {\rm Br}_a(G_2)^D \\
@. @VVV @VV{(\phi_v)}V @VV{(\phi^*)^D}V \\
1 @>>> {\rm ker}(\theta_3) @>>> G_3({\bf A}_k) @>{\theta_3}>> {\rm Br}_a(G_3)^D \\
@. @. @VV(\partial_v)V @. \\
@. @. \bigoplus_{v\in \Omega_k} H^1(k_v, G_1) \, , \end{CD}
\end{equation}
where ${\rm Br}_a(G_i)^D$ is the topological dual of the discrete group ${\rm Br}_a(G_i)$, for $1\leq i\leq 3$. Since $\theta_1(G_1({\bf A}_k))$ is the kernel of the continuous map ${\rm Br}_a(G_1)^D \to {\mbox{\textcyr{Sh}}}^1(k, G_1)$, it is a closed subgroup of ${\rm Br}_a(G_1)^D$. Since $(\psi^*)^D$ is a closed map, one obtains that $(\psi^*)^D(\theta_1(G_1({\bf A}_k)))$ is a closed subgroup of ${\rm Br}_a(G_2)^D$. It implies that
$$ {\rm ker}(\theta_2) \cdot \psi (G_1({\bf A}_k)) = \theta_2^{-1}\left[(\psi^*)^D(\theta_1(G_1({\bf A}_k)))\right] $$ is a closed subgroup of $G_2({\bf A}_k)$ by diagram \eqref{diag adelic}. Proposition 6.5 in Chapter 6 of \cite{PR} ensures that $\phi: G_2({\bf A}_k)\rightarrow G_3({\bf A}_k)$ is an open homomorphism of topological groups. Then $\phi({\rm ker}(\theta_2))= \phi\left(G_2({\bf A}_k)^{{\rm Br}_1(G_2)}\right)$ is closed by Lemma \ref{top}, and property (1) follows.
Let us now prove statement (2): Corollary 3.20 in \cite{D} (see also the proof of Proposition 4.5 in \cite{CX2}) implies that
$${\rm ker}(\theta_3)=G_3({\bf A}_k)^{{\rm Br}_1(G_3)}=\overline{G_3(k) \cdot G_3(k_\infty)^0}\, ,$$
where $G_3(k_\infty)^0$ is the connected component of identity with respect to the topology of $k_\infty$. One only needs to show that
$$ G_3({\bf A}_k)^{{\rm Br}_1(G_3)} \subseteq G_3(k) \cdot \phi\left(G_2({\bf A}_k)^{{\rm Br}_1(G_2)}\right) \, . $$
For any $(x_v) \in \overline{G_3(k) \cdot G_3(k_\infty)^0}$, there are $h\in G_3(k)$ and $h_\infty \in G_3(k_\infty)^0$ such that $$(\partial_v)(h\cdot h_\infty)= (\partial_v)(x_v) \, ,$$
because $(\partial_v)$ is a continuous map with respect to the discrete topology of $\bigoplus_{v\in \Omega_k} H^1(k_v, G_1)$. Since $\phi_\infty (G_2(k_\infty)^0)$ is open and connected, the finiteness of $H^1 (k_\infty, G_1)$ gives
$$G_3(k_\infty)^0=\phi_\infty (G_2(k_\infty)^0) \, .$$
Therefore
$$(h\cdot h_\infty) \in G_3(k) \cdot \phi\left(G_2({\bf A}_k)^{{\rm Br}_1(G_2)}\right) $$ and one can replace $(x_v)$ by $(h\cdot h_{\infty})^{-1} \cdot (x_v)$. Without loss of generality,
one can therefore assume $(\partial_v)(x_v)$ is the trivial element in $\bigoplus_{v\in \Omega_k} H^1(k_v, G_1)$.
Since ${\mbox{\textcyr{Sh}}}^1 (k, G_1)$ is finite, one can fix $\xi_1, \cdots, \xi_n$ in $G_3(k)$ such that each element of ${\mbox{\textcyr{Sh}}}^1(k,G_1) \cap \partial (G_3(k))$ is of the form $\partial(\xi_i)$ for some $1\leq i\leq n$.
As $\partial_\infty (h_\infty)$ is trivial for any $h_\infty \in G_3(k_\infty)^0$, one concludes that
$$(x_v) \in \overline{\bigcup_{i=1}^n \xi_i\phi({\rm ker}(\theta_2))}= \bigcup_{i=1}^n \xi_i \cdot \overline{\phi({\rm ker}(\theta_2))} \subseteq G_3(k) \cdot \phi\left(G_2({\bf A}_k)^{{\rm Br}_1(G_2)}\right) $$ by Corollary 1 in Page 50 of \cite{Ser} and assertion (1).
\end{proof}
The main result of this section is the following theorem:
\begin{thm}\label{main-general} Let $X$ be a smooth and geometrically integral variety and let $G$ be a connected linear algebraic group or a group of multiplicative type over a number field $k$. Suppose that $f: Y\rightarrow X$ is a left torsor under $G$. If $A$ is a subgroup of ${\rm Br}(X)$ which contains the kernel of the natural map $f^*: {\rm Br}(X) \rightarrow {\rm Br}(Y)$, then
$$ X({\bf A}_k)^A = \bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{ f_\sigma^*(A)}\right) \, ,$$
where $Y^\sigma \xrightarrow{f_\sigma}X$ is the twist of $f$ by $\sigma$ and $ {\rm Br}(X)\xrightarrow{f_\sigma^*} {\rm Br}(Y^\sigma)$ is the associated pull-back morphism, for each $\sigma\in H^1(k,G)$.
\end{thm}
\begin{proof} By the functoriality of Brauer-Manin pairing, one only needs to show that
$$ X({\bf A}_k)^{A}\subseteq \bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{ f_\sigma^*(A)}\right) \, .$$
It is clear that
\begin{equation}\label{equiv}
(x_v)\in \bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)) \ \ \ \Leftrightarrow \ \ \
([Y](x_v))\in {\rm {Im}} \left[H^1(k,G) \rightarrow \prod_{v\in \Omega_k} H^1(k_v, G) \right] \, . \end{equation}
(1) Assume that $G$ is connected.
Recall first that Hensel's lemma together with Lang's theorem ensures that $H^1(k, G)$ maps to $\bigoplus_{v\in \Omega_k} H^1(k_v, G)$. Since any element $P \in {\rm Pic} (G)$ can be given the structure of a central extension of algebraic groups
\begin{equation} \label{ses} 1 \to \Bbb G_m \to P \to G \to 1 \end{equation} by \cite[Corollary 5.7]{CT08}, one obtains a coboundary map
$$ \partial_P: \ \ \ H^1(X, G) \rightarrow H^2(X, \Bbb G_m)={\rm Br}(X)$$
associated to $P$ (see \cite[IV.4.4.2]{Gi}). Then the map defined by
$$ \Delta_{Y/X}: {\rm Pic}(G) \rightarrow {\rm Br}(X), \ \ P \mapsto \partial_P([Y]) $$ appears in the following exact sequence (see \cite[Theorem 2.8]{BD})
\begin{equation} \label{pic-br} {\rm Pic}(G) \xrightarrow{\Delta_{Y/X}} {\rm Br}(X) \xrightarrow{f^*} {\rm Br}(Y) \, . \end{equation}
For any $v\in \Omega_k$, the exact sequence \eqref{ses} defines a coboundary map
$$\partial_P^{k_v}: \ \ \ H^1(k_v, G)\rightarrow H^2(k_v, \Bbb G_m)={\rm Br}(k_v) \, .$$
One can therefore define a pairing
$$ \delta_v: \ H^1(k_v, G) \times {\rm Pic} (G) \to {\rm Br}(k_v) \subseteq \Bbb Q/\Bbb Z, \ \ (\sigma_v, P)\mapsto \partial_P^{k_v}(\sigma_v) $$
such that the following diagram
\begin{equation} \label{diag pairings}
\xymatrix{
X(k_v) \ar[d]_{[Y]} & \times & {\rm Br}(X) & \ar[r]^(.35){ev} & {\rm Br}(k_v) \ar[d]^{\id} \\
H^1(k_v, G) & \times & {\rm Pic}(G) \ar[u]_{\Delta_{Y/X}} & \ar[r]^(.4){\delta_v} & {\rm Br}(k_v)
}
\end{equation}
commutes (see Proposition 2.9 in \cite{CTX}). These pairings induce a pairing
$$ (\delta_v)_{v\in \Omega_k} : \ \ \ \bigoplus_{v\in \Omega_k} H^1(k_v, G) \times {\rm Pic}(G) \rightarrow \Bbb Q/\Bbb Z, \ \ ((\sigma_v)_{v\in \Omega_k}, P)\mapsto \sum_{v\in \Omega_k} \delta_v(\sigma_v, P)\in \Bbb Q/\Bbb Z $$ and a natural exact sequence of pointed sets
$$ H^1(k, G) \to \bigoplus_{v\in \Omega_k} H^1(k_v, G) \to {\rm Hom} ({\rm Pic}(G), \Bbb Q/\Bbb Z) $$
by \cite[Theorem 3.1]{CTX}. Therefore the condition on the right-hand side of (\ref{equiv}) is equivalent to the fact that $ ([Y](x_v))\in \bigoplus_{v\in \Omega_k} H^1(k_v, G)$ is orthogonal to ${\rm Pic}(G)$ for the pairing $(\delta_v)_{v\in \Omega_k}$. The commutative diagram \eqref{diag pairings}, together with \eqref{pic-br}, gives
$$X({\bf A}_k)^{{\rm ker}(f^*)}=\bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)) . $$ Since ${\rm ker}(f^*)\subseteq A$, one has
$$ X({\bf A}_k)^{A}\subseteq X({\bf A}_k)^{{\rm ker}(f^*)}=\bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)). $$
Then the functoriality of the Brauer-Manin pairing implies that
$$ X({\bf A}_k)^{A}\subseteq \bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{ f_\sigma^*(A)}\right). $$
(2) When $G$ is a group of multiplicative type, one obtains that (\ref{equiv}) is equivalent to
$$ \sum_{v\in \Omega_k} {\rm inv}_v (\chi \cup [Y]) (x_v) = 0 $$ for all $\chi \in H^1(k, \hat{G})$ by \cite[Theorem 6.3]{D0}.
Let
$$ \mathcal{K}_f=\langle \{ \chi\cup [Y] : \ \chi \in H^1(k, \hat{G}) \} \rangle $$
be the subgroup of ${\rm Br}(X)$ generated by elements $\chi\cup [Y]$, where $\cup$ is the cup product
$$\cup: H^1(k, \hat{G}) \times H^1(X, G) \rightarrow H^2(X, \Bbb G_m)={\rm Br}(X) .$$
Then $$X({\bf A}_k)^{\mathcal{K}_f}=\bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k))$$ by \cite[Proposition 3.1]{HaSk}.
Functoriality of the cup product proves that the following diagram
$$ \begin{CD}
H^1(k, \hat{G}) \times H^1(X, G) @>{\cup}>> H^2(X, \Bbb G_m)={\rm Br}(X) \\
@V{\id\times f^*}VV @VV{f^*}V \\
H^1(k, \hat{G}) \times H^1(Y, G) @>{\cup}>> H^2(Y, \Bbb G_m)={\rm Br}(Y)
\end{CD}$$
is commutative. Since $Y\xrightarrow{f} X$ becomes a trivial torsor over $Y$, the above diagram gives $\mathcal{K}_f \subseteq {\rm ker}(f^*) $.
Since $\mathcal{K}_f \subseteq {\rm ker}(f^*) \subseteq A$, one has
$$ X({\bf A}_k)^{A}\subseteq X({\bf A}_k)^{\mathcal{K}_f}=\bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)). $$
Then the functoriality of the Brauer-Manin pairing implies that
$$ X({\bf A}_k)^{A}\subseteq \bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{ f_\sigma^*(A)}\right). $$
\end{proof}
\section{Refinement in the toric case}\label{rtc}
In this section, we will refine Theorem \ref{main-general} for torsors under tori.
\begin{thm}\label{tor} Let $f: Y\rightarrow X$ be a torsor under a torus $G$ over a number field $k$. Assume that $X$ is smooth and geometrically integral. Let $ {\rm ker}(f^*) \subseteq A \subseteq {\rm Br}(X)$ be a subgroup, and for all $\sigma \in H^1(k,G)$, let $B_\sigma \subseteq {\rm Br}_1(Y^\sigma) $ be a subgroup such that $$ {f^*}^{-1}\left(\sum_{\sigma\in H^1(k,G)} \psi_\sigma (\widetilde{B_\sigma})\right) \subseteq A \, ,$$ where $ {\rm Br}_a(Y^\sigma)\xrightarrow{\psi_\sigma} {\rm Br}_a(Y)$ is the morphism of Definition \ref{brauer-twist} and $\widetilde{B_\sigma}$ is the image of $B_\sigma$ in ${\rm Br}_a(Y^\sigma)$.
Then one has
$$ X({\bf A}_k)^A = \bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{ B_\sigma + f_\sigma^*(A)}\right) $$ where $Y^\sigma \xrightarrow{f_\sigma}X$ is the twist of $Y\xrightarrow{f} X$ by $\sigma$.
\end{thm}
\begin{proof} Since
$$\bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{ B_\sigma + f_\sigma^*(A)}\right) \subseteq \bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{f_\sigma^*(A)}\right) \subseteq X({\bf A}_k)^A $$ by the functoriality of the Brauer-Manin pairing, one only needs to prove the converse inclusion.
Step 1. We first prove the result when $\hat{G}$ is a permutation Galois module. In this case, Shapiro's lemma and Hilbert's Theorem 90 give $H^1(K, G)=\{1\}$ for any field extension $K/k$. This implies that
$$ X({\bf A}_k)^A= f \left(Y({\bf A}_k)^{f^*(A)}\right)$$ by the functoriality of the Brauer-Manin pairing.
Let $(x_v)\in X({\bf A}_k)^A$. Then there is $(y_v)\in Y({\bf A}_k)^{f^*(A)}$ such that $(x_v)=f((y_v))$.
By Proposition 6.10 (6.10.3) in \cite{Sansuc}, the natural sequence
$$ {\rm Br}_1(X) \xrightarrow{f^*} {\rm Br}_1(Y) \xrightarrow{\lambda} {\rm Br}_a(G) $$
is exact, and it induces the exact sequence
$$ (f^*)^{-1}(B) \xrightarrow{f^*} B \xrightarrow{\lambda} {\rm Br}_a(G) $$ for any subgroup $B\subseteq {\rm Br}_1(Y)$. Therefore the following sequence
$$ {\rm Br}_a(G)^D \xrightarrow{\lambda^D} B^D \xrightarrow{(f^*)^D} ((f^*)^{-1}(B))^D $$ is exact. Assuming $(f^*)^{-1}(B) \subseteq A$, one has $(f^*)^D((y_v))=0$, where we (abusively) identify $(y_v)$ with its image in $B^D$ via the Brauer-Manin pairing. By the aforementioned exactness, there is $\xi \in {\rm Br}_a(G)^D$ such that $\lambda^D(\xi)=(y_v)$. Since ${\mbox{\textcyr{Sh}}}^1(k,G)=\{1\}$, Theorem 2 in \cite{Ha08} implies that every element in ${\rm Br}_a(G)^D$ is given by an element in $G({\bf A}_k)$ via the Brauer-Manin pairing. Namely, there is $(g_v)\in G({\bf A}_k)$ such that
$$b(y_v)=\lambda(b) (g_v)$$ for all $b\in B$. Then $(g_v)^{-1}\cdot (y_v) \in Y({\bf A}_k)^{B+ f^*(A)}$ by Proposition \ref{torsorbraueraction}, and $(x_v)=f((g_v)^{-1}\cdot (y_v))$.
Step 2. We now prove the case of an arbitrary torus $G$. By Proposition-Definition 3.1 in \cite{CT08}, there is a short exact sequence of tori
$$ 1\rightarrow G \rightarrow T_0 \xrightarrow{q} T_1 \rightarrow 1 \, ,$$
such that $\hat{T_0}$ is a permutation Galois module and $\hat{T_1}$ is a coflasque Galois module. Since
$$ H^3(k, \hat{T_1}) \cong \prod_{v\in \infty_k} H^3(k_v, \hat{T_1}) \cong \prod_{v\in \infty_k} H^1(k_v, \hat{T_1}) =\{1\}$$
(see for instance Proposition 5.9 in \cite{HS05}), the map ${\rm Br}_1(T_0) \rightarrow {\rm Br}_1(G)$ is surjective.
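For instance, for the norm-one torus $G=R^1_{K/k}\Bbb G_m$ of a finite separable extension $K/k$, one may take the resolution
$$ 1\rightarrow R^1_{K/k}\Bbb G_m \rightarrow R_{K/k}\Bbb G_m \xrightarrow{N_{K/k}} \Bbb G_m \rightarrow 1 \, :$$
here $\hat{T_0}=\Bbb Z[{\rm Hom}_k(K, \bar k)]$ is a permutation Galois module, and $\hat{T_1}=\Bbb Z$ with trivial action is coflasque, since $H^1(H, \Bbb Z)={\rm Hom}(H, \Bbb Z)=0$ for any profinite group $H$.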
Let $Z\xrightarrow{\rho} X$ be the torsor under $T_0$ defined by $Z := T_0\times^G Y$. We have a morphism of torsors under $G$:
$$ \begin{CD}
Y @>{e_0 \times \id_Y}>> T_0\times_k Y @>{\chi}>> Z=T_0 \times^G Y \\
@. @V{p_0}VV @VV{\theta}V \\
@. T_0 @>>{q}> T_1 \\
\end{CD} $$
where $e_0 \in T_0(k)$ is the unit element, $p_0$ is the projection map and $\theta$ is given as in (\ref{d}). For simplicity, denote by $i := \chi \circ (e_0 \times \id_Y) : Y \to Z$ the composite morphism defined in the previous diagram.
Then Proposition 6.10 (6.10.3) in \cite{Sansuc} gives the following commutative diagram of exact sequences:
$$ \begin{CD}
{\rm Br}_1(T_1) @>{q^*}>> {\rm Br}_1(T_0)@>>> {\rm Br}_a(G) \\
@V{\theta^*}VV @VV{p_0^*}V @VV{\id}V \\
{\rm Br}_1(Z) @>>{\chi^*}> {\rm Br}_1(T_0\times_k Y) @>>> {\rm Br}_a(G) \, . \\
\end{CD} $$
Since the following sequence
$$ {\rm Br}_1(T_0) \xrightarrow{p_0^*} {\rm Br}_1(T_0\times_k Y) \xrightarrow{(e_0\times \id_{Y})^*} {\rm Br}_a(Y) \rightarrow 1 $$ is exact by Lemma 6.6 in \cite{Sansuc}, the surjectivity of the map ${\rm Br}_1(T_0)\rightarrow {\rm Br}_1(G)$ implies that the morphism
$$i^* : \ \ {\rm Br}_1(Z)\rightarrow {\rm Br}_1(Y) $$ is surjective, by a simple diagram chase.
Lemma \ref{q} implies that for any $t\in T_1 (k)$, the composite morphism $\theta^{-1}(t) \to Z\xrightarrow{\rho} X$ is canonically isomorphic to the twist $f_t: Y^{q^{-1}(t)} \rightarrow X$ of $f: Y\rightarrow X$ by the $\Spec(k)$-torsor $q^{-1}(t)$ under $G$.
Denote by $i_t: \theta^{-1}(t) \rightarrow Z$ the closed immersion. Then $f_t=\rho\circ i_t$ for any $t\in T_1(k)$.
Let $\chi_t$ be the restriction of $\chi$ to $q^{-1}(t) \times_k Y$ for any $t\in T_1(k)$. Then the following diagram
$$ \begin{CD}
@. q^{-1}(t) \times_k Y @>{\chi_t}>> Y^{q^{-1}(t)} \\
@. @VV{j_t\times \id_Y}V @VV{i_t}V \\
Y @>{e_0 \times \id_Y}>> T_0\times_k Y @>{\chi}>> Z \\
@. @VV{p_0}V @VV{\theta}V \\
G @>>> T_0 @>{q}>> T_1
\end{CD} $$ is commutative, where $j_t: q^{-1}(t)\rightarrow T_0$ is the closed immersion of the fiber of $q$ at $t$.
Therefore Definition \ref{brauer-twist} implies that we have a commutative triangle:
\[
\xymatrix{
{\rm Br}_a(Z) \ar[r]^(.4){i_t^*} \ar[rd]_{i^*} & {\rm Br}_a(Y^{q^{-1}(t)}) \ar[d]_{\sim}^{\psi_{q^{-1}(t)}} \\
& {\rm Br}_a(Y) \, ,
}
\]
i.e. that
$\psi_{q^{-1}(t)} \circ i_{t}^* = i^*$.
Let $$B= {i^*}^{-1}\left(\sum_{t\in T_1(k)} \psi_{q^{-1}(t)} \left(\widetilde{ B_{q^{-1}(t)}}\right)\right) \subseteq {\rm Br}_1(Z)$$
where $\widetilde{ B_{q^{-1}(t)}}$ is the image of $B_{q^{-1}(t)}$ in ${\rm Br}_a(Y^{q^{-1}(t)})$ and $\psi_{q^{-1}(t)}$ is given by Definition \ref{brauer-twist} for all $t\in T_1(k)$.
Since $i^*\circ \rho^* = f^*$, we have
$${\rho^*}^{-1} (B) ={f^*}^{-1}\left(\sum_{t\in T_1(k)} \psi_{q^{-1}(t)} \left(\widetilde{ B_{q^{-1}(t)}}\right)\right) \subseteq A \, ,$$
hence step 1 applied to the torsor $Z\xrightarrow{\rho} X$ under $T_0$ implies that
\begin{equation} \label{desc qt}
X({\bf A}_k)^A= \rho\left(Z({\bf A}_k)^{B+\rho^*(A)}\right) \, .
\end{equation}
Let $(x_v)\in X({\bf A}_k)^A$. By \eqref{desc qt}, there is $(z_v)\in Z({\bf A}_k)^{B+ \rho^*(A)}$ such that $(x_v)=\rho((z_v))$.
Since $$i^*\circ \theta^*({\rm Br}_1(T_1))=(e_0\times \id_{Y})^*\circ p_0^* \circ q^* ({\rm Br}_1(T_1))={\rm Br}_0(Y) $$ and $i^*({\rm Br}_0(Z))={\rm Br}_0(Y)$, one gets $\theta^*({\rm Br}_1(T_1))\subseteq {\rm Br}_0(Z) + B$ (by construction, $B$ contains ${\rm ker}(i^* : {\rm Br}_1(Z) \to {\rm Br}_1(Y))$). Functoriality of the Brauer-Manin pairing now gives
$$\theta ((z_v))\in T_1({\bf A}_k)^{{\rm Br}_1(T_1)}\, .$$
By Proposition \ref{connected-groups}, there are $\alpha\in T_1(k)$ and $(\beta_v)\in T_0({\bf A}_k)^{{\rm Br}_1(T_0)}$ such that $\theta ((z_v))= \alpha \cdot q(\beta_v)$.
Therefore $ (\beta_v)^{-1} \cdot (z_v) \in \theta^{-1}(\alpha)$, hence $(\beta_v)^{-1} \cdot (z_v) \in Z({\bf A}_k)^{B+ \rho^*(A)}$.
Since $i^* : {\rm Br}_1(Z) \to {\rm Br}_1(Y)$ is surjective, one has
$$\psi_{q^{-1}(\alpha)} \circ i_\alpha^*(\widetilde{B}) = i^* (\widetilde{B}) = \sum_{t\in T_1(k)} \psi_{q^{-1}(t)} \left(\widetilde{ B_{q^{-1}(t)}}\right) \supseteq \psi_{q^{-1}(\alpha)}\left(\widetilde{B_{q^{-1}(\alpha)}}\right) \, ,$$ where $\widetilde{B}$ is the image of $B$ in ${\rm Br}_a(Z)$. This implies that $i_\alpha^*(B)+ {\rm Br}_0(\theta^{-1}(\alpha)) \supseteq B_{q^{-1}(\alpha)}$ by Lemma \ref{twist-isom}, and
$$ (\beta_v)^{-1} \cdot (z_v) \in \left[\theta^{-1}(\alpha) ({\bf A}_k)\right]^{i_\alpha^*(B)+ (i_\alpha^*\circ \rho^*)(A)} \subseteq \left[\theta^{-1}(\alpha) ({\bf A}_k)\right]^{B_{q^{-1}(\alpha)}+ (i_\alpha^*\circ \rho^*)(A)} $$ as desired.
\end{proof}
The first part of the following result is also proved in Theorem 1.7 of \cite{Wei}.
\begin{cor} \label{algebraic} Let $X$ be a smooth and geometrically integral variety. If $f: Y\rightarrow X$ is a torsor under a torus $G$ over a number field $k$, then
$$ X({\bf A}_k)^{{\rm Br}_1(X)} = \bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{{\rm Br}_1(Y^\sigma)}\right) $$
and
$$ X({\bf A}_k)^{{\rm Br}} = \bigcup_{\sigma\in H^1(k, G)} f_\sigma \left(Y^\sigma ({\bf A}_k)^{{\rm Br}_1(Y^\sigma)+f_\sigma^*({\rm Br}(X))}\right) . $$
\end{cor}
\begin{proof} To get the first equality, apply Theorem \ref{tor} to $A={\rm Br}_1(X)$ and $B_\sigma={\rm Br}_1(Y^\sigma)$ for each $\sigma \in H^1(k, G)$. Since ${\rm Pic} (G_{\bar k})=0$, Proposition 6.10 in \cite{Sansuc} gives
$${f^*}^{-1}\left(\sum_{\sigma\in H^1(k,G)} \psi_\sigma \left(\widetilde{B_\sigma}\right)\right) \subseteq {f^*}^{-1}({\rm Br}_a(Y)) \subseteq {\rm Br}_1(X)=A \, ,$$
as required.
The second equality follows from Theorem \ref{tor} by taking $A={\rm Br}(X)$ and $B_\sigma={\rm Br}_1(Y^\sigma)$ for each $\sigma \in H^1(k, G)$.
\end{proof}
\section{An application} \label{aa}
In this section, we apply the previous results to study the necessary conditions for a connected linear algebraic group to satisfy strong approximation with Brauer-Manin obstruction.
When $X$ is affine, the set $X(k)$ is discrete in $X({\bf A}_k)$ by the product formula. Therefore, if such an $X$ satisfies strong approximation off $S$, then $\prod_{v\in S} X(k_v)$ is not compact. However, this necessary condition is no longer valid for strong approximation with Brauer-Manin obstruction when ${\rm Br}(X)/{\rm Br}(k)$ is infinite. For example, a torus $X$ always satisfies strong approximation with Brauer-Manin obstruction off $\infty_k$, whether or not $\prod_{v\in \infty_k} X(k_v)$ is compact: see \cite[Theorem 2]{Ha08}. When $X$ is a semi-simple linear algebraic group, a necessary and sufficient condition for $X$ to satisfy strong approximation with Brauer-Manin obstruction is given by Proposition 6.1 in \cite{CX2}. In this section, we extend this result to a general connected linear algebraic group.
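A concrete (and standard) instance of the torus phenomenon: take $k=\Bbb Q$ and let $X=R^1_{K/\Bbb Q}\Bbb G_m$ be the norm-one torus of an imaginary quadratic field $K$. Then $X(\Bbb R)$ is the circle group $\{z\in \Bbb C^\times : z\bar z =1\}$, hence compact, so $X$ cannot satisfy strong approximation off $\infty_\Bbb Q$ in the usual sense; nevertheless $X$ does satisfy strong approximation with Brauer-Manin obstruction off $\infty_\Bbb Q$ by \cite[Theorem 2]{Ha08}.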
The following lemma explains that strong approximation with Brauer-Manin obstruction for a general connected linear algebraic group can be reduced to the reductive case.
\begin{lem} \label{unip-equ} Let $G$ be a connected linear algebraic group over a number field $k$.
If $\pi: G\to G^{\rm{red}}$ is the quotient map, then $G^{\rm{red}}({\bf A}_k)^{{\rm Br}_1(G^{\rm{red}})} = \pi \left(G({\bf A}_k)^{{\rm Br}_1(G)} \right)$.
In particular, for any finite subset $S$ of $\Omega_k$, $G$ satisfies strong approximation with respect to ${\rm Br}_1(G)$ off $S$ if and only if $G^{\rm{red}}$ satisfies strong approximation with respect to ${\rm Br}_1(G^{\rm{red}})$ off $S$.
\end{lem}
\begin{proof} By applying Lemma \ref{br} for $k$ and $\bar{k}$, one obtains that $ \pi^*({\rm Br}_1(G^{\rm{red}}))={\rm Br}_1(G)$.
The first part follows from Theorem \ref{main-general} and Proposition 6 of \S 2.1 of Chapter III in \cite{Ser}.
Suppose $G$ satisfies strong approximation with respect to ${\rm Br}_1(G)$ off $S$. For any open subset $$M=\prod_{v\in S} G^{\rm{red}}(k_v) \times \prod_{v\not\in S} M_v $$ of $G^{\rm{red}}({\bf A}_k)$ such that $M \cap \left[G^{\rm{red}}({\bf A}_k)^{{\rm Br}_1(G^{\rm{red}})}\right] \neq \emptyset$, one has that
$$ \pi^{-1}(M) = \prod_{v\in S} G(k_v) \times \prod_{v\not\in S} \pi^{-1}(M_v) $$ with
$\pi^{-1}(M) \cap G({\bf A}_k)^{{\rm Br}_1(G)} \neq \emptyset$ by the first part. Then by assumption there is $x\in G(k) \cap \pi^{-1}(M)$. It implies that $\pi(x)\in M\cap G^{\rm{red}}(k)$, as required.
Conversely, suppose $G^{\rm{red}}$ satisfies strong approximation with respect to ${\rm Br}_1(G^{\rm{red}})$ off $S$. For any open subset $$N=\prod_{v\in S} G(k_v) \times \prod_{v\not\in S} N_v $$ of $G({\bf A}_k)$ such that $N\cap G({\bf A}_k)^{{\rm Br}_1(G)} \neq \emptyset$, we have
$$ \pi (N) = \prod_{v\in S} G^{\rm{red}}(k_v) \times \prod_{v\not\in S} \pi (N_v) $$
and this set is an open subset of $G^{\rm{red}}({\bf A}_k)$, with $\pi(N) \cap \left[G^{\rm{red}}({\bf A}_k)^{{\rm Br}_1(G^{\rm{red}})}\right] \neq \emptyset$: here we use Proposition 6 of \S 2.1 of Chapter III in \cite{Ser}, Proposition 6.5 in Chapter 6 of \cite{PR} and the functoriality of the Brauer-Manin pairing. Then by assumption there is $y\in G^{\rm{red}}(k) \cap \pi(N)$. Using Proposition 6 of \S 2.1 of Chapter III in \cite{Ser} one more time, one concludes that $\pi^{-1}(y)$ is isomorphic to $R_u(G)$ as an algebraic variety, hence it satisfies strong approximation off $S$. Since
$$ \pi^{-1}(y)\cap N= \prod_{v\in S} \pi^{-1}(y)(k_v) \times \prod_{v\not\in S} (\pi^{-1}(y)(k_v) \cap N) \neq \emptyset , $$
there is $z\in \pi^{-1}(y)(k) \cap N \subset G(k)\cap N$, as desired.
\end{proof}
The main result of this section is the following statement:
\begin{thm} \label{red-semi} Let $G$ be a connected linear algebraic group over a number field $k$ and let $G^{\rm{qs}}:=G/R(G)$, where $R(G)$ is the solvable radical of $G$. If $\pi: G\to G^{\rm{qs}}$ is the quotient map, then $$G^{\rm{qs}}({\bf A}_k)^{{\rm Br}_1(G^{\rm{qs}})} = \pi \left(G({\bf A}_k)^{{\rm Br}_1(G)} \right) \cdot G^{\rm{qs}}(k) \, . $$
In particular, if $G$ satisfies strong approximation with respect to ${\rm Br}_1(G)$ off a finite subset $S$ of $\Omega_k$, then $G^{\rm{qs}}$ satisfies strong approximation with respect to ${\rm Br}_1(G^{\rm{qs}})$ off $S$.
\end{thm}
\begin{proof} For the first part, by functoriality of the Brauer-Manin pairing, one only needs to prove that
$$G^{\rm{qs}}({\bf A}_k)^{{\rm Br}_1(G^{\rm{qs}})} \subseteq \pi \left(G({\bf A}_k)^{{\rm Br}_1(G)} \right) \cdot G^{\rm{qs}}(k) \, .$$
By Lemma \ref{unip-equ}, we can assume that $G$ is reductive. Then $R(G)$ is a torus contained in the center of $G$ (see Theorem 2.4 in Chapter 2 of \cite{PR}) and $\pi: G\to G^{\rm{qs}}$ is a torsor under $R(G)$. By Corollary \ref{algebraic}, for any $(x_v)\in G^{\rm{qs}}({\bf A}_k)^{{\rm Br}_1(G^{\rm{qs}})}$, there are $\sigma\in H^1(k, R(G))$ and $(y_v) \in G^{\sigma}({\bf A}_k)^{{\rm Br}_1(G^\sigma)}$ such that $(x_v)=\pi_\sigma ((y_v))$. Since $G^\sigma(k)\neq \emptyset $ by Corollary 8.7 in \cite{Sansuc} (see also Theorem 5.2.1 in \cite{Sko}), there is $\gamma\in G^{\rm{qs}}(k)$ such that $\partial(\gamma)=\sigma$, where $\partial$ is the coboundary map in the following exact sequence in Galois cohomology:
$$ 1\to R(G)(k) \to G(k) \to G^{\rm{qs}}(k) \xrightarrow{\partial} H^1(k, R(G)) \to H^1(k, G) \, .$$
In addition, the choice of an element $\bar{\gamma} \in G(\bar{k})$ such that $\pi(\bar{\gamma}) = \gamma$ yields a commutative diagram of $k$-varieties:
$$ \begin{CD}
G^\sigma @>{\bar{\gamma} \, \cdot}>{\sim}> G \\
@V{\pi_\sigma}VV @VV{\pi}V \\
G^{\rm{qs}} @>{\gamma \, \cdot}>{\sim}> G^{\rm{qs}} \\
\end{CD} $$
(see for instance Example 2 on p.~20 in \cite{Sko}). This implies that $$ \pi_\sigma \left(G^\sigma({\bf A}_k)^{{\rm Br}_1(G^\sigma)} \right) = \pi \left(G({\bf A}_k)^{{\rm Br}_1(G)} \right) \cdot \gamma \, ,$$ as desired.
Suppose now that $G$ satisfies strong approximation with respect to ${\rm Br}_1(G)$ off $S$. For any open subset $$M=\prod_{v\in S} G^{\rm{qs}}(k_v) \times \prod_{v\not\in S} M_v $$ of $G^{\rm{qs}}({\bf A}_k)$ such that $M \cap G^{\rm{qs}}({\bf A}_k)^{{\rm Br}_1(G^{\rm{qs}})} \neq \emptyset$, the first part implies that there is $g \in G^{\rm{qs}}(k)$ such that
$$ \pi^{-1}(M\cdot g) = \prod_{v\in S} G(k_v) \times \prod_{v\not\in S} \pi^{-1}(M_v\cdot g)\, ,$$
with $\pi^{-1}(M\cdot g) \cap G({\bf A}_k)^{{\rm Br}_1(G)} \neq \emptyset$. Since $G$ satisfies strong approximation with algebraic Brauer-Manin obstruction off $S$, there exists $x\in G(k) \cap \pi^{-1}(M \cdot g)$. This implies that $\pi(x)\cdot g^{-1}\in M\cap G^{\rm{qs}}(k)$ as required. \end{proof}
\begin{cor} \label{iff} Let $G$ be a connected linear algebraic group over a number field $k$ and let $S$ be a finite subset of $\Omega_k$ containing $\infty_k$. Then $G$ satisfies strong approximation with respect to ${\rm Br}_1(G)$ off $S$ if and only if $\prod_{v\in S} G'(k_v)$ is not compact for any non-trivial simple factor $G'$ of the semi-simple part $G^{ss}$ of $G$.
\end{cor}
\begin{proof} By Theorem 2.3 and Theorem 2.4 of Chapter 2 in \cite{PR}, the quotient map $$G^{\rm{red}}\to G/R(G)=G^{\rm{qs}}$$ induces an isogeny $G^{ss} \to G^{\rm{qs}}$.
One implication follows from Corollary 3.20 in \cite{D}; the converse follows from Theorem \ref{red-semi} and Proposition 6.1 in \cite{CX2}.
\end{proof}
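Two standard illustrations of Corollary \ref{iff}, stated for $k=\Bbb Q$ and $S=\infty_\Bbb Q$: for $G={\rm SL}_n$, the group ${\rm SL}_n(\Bbb R)$ is not compact, so the criterion is satisfied and $G$ satisfies strong approximation with respect to ${\rm Br}_1(G)$ off $\infty_\Bbb Q$; since $G$ is semi-simple simply connected, one has ${\rm Br}_1(G)={\rm Br}_0(G)$, and one recovers the classical strong approximation theorem. By contrast, for $G={\rm Spin}(q)$ with $q$ a positive-definite quadratic form over $\Bbb Q$ in $n\geq 3$ variables, the group $G(\Bbb R)$ is compact, the criterion fails, and strong approximation with respect to ${\rm Br}_1(G)$ fails off $\infty_\Bbb Q$.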
\begin{rem} All the results in this section involve the group ${\rm Br}_1(G)$, and they remain true with ${\rm Br}_1(G)$ replaced by ${\rm Br}(G)$. Indeed, there is a sufficiently large subset $S$ of $\Omega_k$ containing $\infty_k$ such that $\prod_{v\in S} G'(k_v)$ is not compact for any non-trivial simple factor $G'$ of $G^{ss}$, therefore Corollary 3.20 in \cite{D}, Proposition 2.6 in \cite{CTX} and the functoriality of the Brauer-Manin pairing give the following inclusions:
$$ G({\bf A}_k)^{{\rm Br}_1(G)} = \overline{G(k) \cdot \rho (\prod_{v\in S} G^{scu}(k_v))} \subseteq G({\bf A}_k)^{{\rm Br}(G)} \subseteq G({\bf A}_k)^{{\rm Br}_1(G)} \, , $$
where $G^{scu}=G^{sc}\times_{G^{\rm{red}}} G$ with the projection map $G^{scu}\xrightarrow{\rho} G$ and $G^{sc}$ is the simply connected covering of $G^{ss}$. In particular, we have $G({\bf A}_k)^{{\rm Br}(G)} = G({\bf A}_k)^{{\rm Br}_1(G)}$.
\end{rem}
\section{Comparison I, $X({\bf A}_k)^{\rm{desc}} \subseteq X({\bf A}_k)^{\textup{\'et}, {\rm Br}}$ }\label{cI}
Let $Y\xrightarrow{f} X$ be a left torsor under a linear algebraic group $G$ over a number field $k$. The fundamental problem in defining the descent obstruction for strong approximation with respect to $Y\xrightarrow{f} X$ is to decide whether or not the set
$$ X({\bf A}_k)^f = \left\{(x_v)\in X({\bf A}_k): ([Y](x_v))\in {\rm{Im}} \left(H^1(k, G) \rightarrow \prod_v H^1(k_v, G)\right) \right\} = \bigcup_{\sigma\in H^1(k, G)} f_\sigma (Y^\sigma ({\bf A}_k)) $$ is closed in $X({\bf A}_k)$. We already know that this holds when $G$ is either connected or a group of multiplicative type, by Theorem \ref{main-general}. For a general linear algebraic group $G$, this result is proved by Skorobogatov in Corollary 2.7 of \cite{Sk1}, when $X$ is assumed to be proper over $k$. The proof depends on Proposition 5.3.2 in \cite{Sko} or Proposition 4.4 in \cite{HSk}, which fail for open varieties, as explained in the following example.
\begin{exa} The short exact sequence of linear algebraic groups
$$ 1 \rightarrow \mu_2 \rightarrow \Bbb G_m \xrightarrow{f} \Bbb G_m \rightarrow 1 \, ,$$
where $f(x)=x^2$, can be viewed as a torsor over $\Bbb G_m$ under $\mu_2$. For any $\sigma \in H^1(k, \mu_2)\cong k^\times/(k^\times)^2$, the twist $\Bbb G_m^\sigma$ of $\Bbb G_m$ by $\sigma$ is given by the equation $x=a_\sigma y^2$ in $\Bbb G_m\times_k \Bbb G_m$, where $a_\sigma$ is an element of $k^\times$ representing the class $\sigma$ under the above isomorphism. It is clear that $\Bbb G_m^\sigma \cong \Bbb G_m$ as varieties over $k$, hence it always contains adelic points. In particular, infinitely many twists of this torsor contain adelic points.
\end{exa}
We use the same definition of an integral model as in \cite{LX}.
\begin{definition} Let $X$ be a variety over a number field $k$ and let $S$ be a finite subset of $\Omega_k$ containing $\infty_k$. An integral model of $X$ over $O_S$ is a faithfully flat separated $O_S$-scheme $\mathcal{X}_S$ of finite type such that $\mathcal{X}_S\times_{O_S} k \cong X$.
\end{definition}
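To fix ideas, here is the standard spreading-out construction of such a model in the affine case. If $X=\Spec\, k[x_1, \dots , x_n]/(f_1, \dots , f_m)$ with, after enlarging $S$, all $f_i\in O_S[x_1, \dots , x_n]$, one may take for $\mathcal{X}_S$ the schematic closure of $X$ in $\Bbb A^n_{O_S}$: it is separated of finite type over $O_S$, it is flat because its coordinate ring is $O_S$-torsion-free, and after possibly enlarging $S$ once more the structure morphism becomes surjective, hence faithfully flat.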
The replacement for Proposition 5.3.2 in \cite{Sko} or Proposition 4.4 in \cite{HSk} is the following proposition:
\begin{prop}\label{finite} Let $X$ be a variety over a number field $k$ and let $S$ be a finite subset of $\Omega_k$ containing $\infty_k$. Fix an integral model $\mathcal{X}_S$ of $X$ over $O_S$. If $Y\xrightarrow{f} X$ is a left torsor under a linear algebraic group $G$ over $k$, then the set
$$ \left\{ [\sigma] \in H^1(k, G): \ f_{\sigma} (Y^\sigma ({\bf A}_k))\cap \left[ \prod_{v\in S}X(k_v) \times \prod_{v\not\in S} \mathcal{X}_S(O_v) \right]\neq \emptyset \right\} $$ is finite.
\end{prop}
\begin{proof} This follows from the same argument as in the proof of Proposition 4.4 in \cite{HSk}.
\end{proof}
One can now extend Corollary 2.7 in \cite{Sk1} to open varieties by using the above replacement for Proposition 4.4 in \cite{HSk}.
\begin{prop}\label{closed} Let $X$ be a (not necessarily proper) variety over a number field $k$. If $Y\xrightarrow{f} X$ is a left torsor under a linear algebraic group $G$ over $k$, then the set
$X({\bf A}_k)^f$ is closed in $X({\bf A}_k)$.
\end{prop}
\begin{proof} Take an integral model $\mathcal{X}_{S_0}$ of $X$ over $O_{S_0}$, where $S_0$ is a finite subset of $\Omega_k$ containing $\infty_k$. Then
$$ \left\{ \prod_{v\in S} X(k_v) \times \prod_{v\in \Omega_k\setminus S} \mathcal{X}_{S_0} (O_v) \right\}_{S} $$ is an open covering of $X({\bf A}_k)$ (see Theorem 3.6 in \cite{Conrad}), where $S$ runs through all finite subsets of $\Omega_k$ containing $S_0$. By Proposition \ref{finite} and Corollary 2.5 in \cite{Sk1}, the set
$$ X({\bf A}_k)^f \cap \left[\prod_{v\in S} X(k_v) \times \prod_{v\in \Omega_k\setminus S} \mathcal{X}_{S_0} (O_v)\right] $$
is closed in $\prod_{v\in S} X(k_v) \times \prod_{v\in \Omega_k\setminus S} \mathcal{X}_{S_0} (O_v)$, therefore the set $X({\bf A}_k)^f$ is closed in $X({\bf A}_k)$.
\end{proof}
Applying Proposition \ref{finite}, one can also extend Lemma 2.2 and Theorem 1.1 in \cite{Sk1} to open varieties.
For any variety $X$ over a number field $k$, following \cite{Sk1}, we write
$$ X({\bf A}_k)^{\text{desc}} =\bigcap_{Y\xrightarrow{f} X} X({\bf A}_k)^f \, ,$$
where $Y\xrightarrow{f} X$ runs through all torsors under all linear algebraic groups over $k$ (see also \S \ref{intro}).
\begin{lem} \label{rep2.2} Let $X$ be a (not necessarily proper) variety and let $Y\rightarrow X$ be a torsor over a number field $k$. For any $(P_v)\in X({\bf A}_k)^{\rm{desc}}$, there is a twist $Y'\rightarrow X$ of $Y\rightarrow X$ such that the following property holds:
For any surjective $X$-torsor morphism $Z\rightarrow Y'$ (see Definition 2.1 in \cite{Sk1}), there is a twist $Z'\rightarrow Y'$ of $Z\rightarrow Y'$ such that $(P_v)$ lies in the image of $Z'({\bf A}_k)$. \end{lem}
\begin{proof} There are a finite subset $S_0$ of $\Omega_k$ containing $\infty_k$ and an integral model $\mathcal{X}_{S_0}$ over $O_{S_0}$ such that
$$ (P_v) \in \prod_{v\in S_0} X(k_v) \times \prod_{v\in \Omega_k\setminus S_0} \mathcal{X}_{S_0} (O_v) $$ (see for instance Theorem 3.6 in \cite{Conrad}), hence Proposition \ref{finite} implies that there are only finitely many twists of a given torsor over $X$ such that $(P_v)$ lifts to an adelic point of this torsor. As pointed out in the proof of Lemma 2.2 in \cite{Sk1}, the finite combinatorics in the first part of the proof of Proposition 5.17 in \cite{St} are still valid. This concludes the proof.
\end{proof}
\begin{prop} \label{dd} Let $X$ be a (not necessarily proper) variety over a number field $k$. If $Y\xrightarrow{f} X$ is a left torsor under a finite group scheme $F$ over $k$, then
$$ X({\bf A}_k)^{\rm{desc}} =\bigcup_{\sigma\in H^1(k, F)} f_{\sigma} \left(Y^{\sigma}({\bf A}_k)^{\rm{desc}}\right) . $$
\end{prop}
\begin{proof} One only needs to modify the proof of Theorem 1.1 in \cite{Sk1} by replacing Lemma 2.2 in \cite{Sk1} with Lemma \ref{rep2.2}, and Corollary 2.7 in \cite{Sk1} with Proposition \ref{closed}. Moreover, since $f$ is finite, the induced map $Y({\bf A}_k) \xrightarrow{f} X({\bf A}_k)$ is topologically proper by Proposition 4.4 in \cite{Conrad}. This implies that $f^{-1}((P_v))$ is compact. \end{proof}
Recall that, following \cite{P}, one can define, for any variety $X$ over a number field $k$, the set
$$ X({\bf A}_k)^{\textup{\'et}, {\rm Br}}= \bigcap_{Y\xrightarrow{f} X} \bigcup_{\sigma\in H^1(k, F)} f_\sigma (Y^\sigma({\bf A}_k)^{{\rm Br}}) \, ,$$
where $Y\xrightarrow{f} X$ runs over all torsors under all finite groups $F$ over $k$ (see \S \ref{intro}). Since the induced map $Y({\bf A}_k) \xrightarrow{f} X({\bf A}_k)$ is topologically closed for any finite morphism $Y\xrightarrow{f} X$ by Proposition 4.4 in \cite{Conrad}, one concludes that $X({\bf A}_k)^{\textup{\'et}, {\rm Br}}$ is closed in $ X({\bf A}_k)$ by the same argument as in Proposition \ref{closed}.
\begin{cor} \label{oneside} If $X$ is a smooth quasi-projective variety over a number field $k$, then
$$ X({\bf A}_k)^{\rm{desc}} \subseteq X({\bf A}_k)^{\textup{\'et}, {\rm Br}} \subseteq X({\bf A}_k)^{{\rm Br}} . $$
\end{cor}
\begin{proof} One only needs to show that $ X({\bf A}_k)^{\text{desc}} \subseteq X({\bf A}_k)^{\textup{\'et}, {\rm Br}}$. For any torsor $Y\xrightarrow{f} X$ under a finite group scheme $F$, Proposition \ref{dd} gives the equality
$$ X({\bf A}_k)^{\text{desc}} =\bigcup_{\sigma\in H^1(k, F)} f_{\sigma} \left(Y^{\sigma}({\bf A}_k)^{\text{desc}}\right) \, .$$
Since $X$ is quasi-projective, $Y^\sigma$ is quasi-projective as well. By a theorem of Gabber (see \cite{de}), one has
$$Y^{\sigma}({\bf A}_k)^{\text{desc}}\subseteq Y^{\sigma}({\bf A}_k)^{{\rm Br}} $$ (see the proof of Lemma 2.8 in \cite{Sk1}) and the result follows.
\end{proof}
\section{Comparison II, $X({\bf A}_k)^{\textup{\'et}, {\rm Br}}\subseteq X({\bf A}_k)^{\rm{desc}}$} \label{CII}
In this section, we prove the inclusion $X({\bf A}_k)^{\textup{\'et}, {\rm Br}}\subseteq X({\bf A}_k)^{\rm{desc}}$ for open varieties, which implies Theorem \ref{inteq}. The strategy of proof is the same as in \cite{D09}.
The second named author would like to thank Laurent Moret-Bailly warmly for finding a mistake and for suggesting the following alternative proof of Lemma 4 in \cite{D09} (this proof already appeared in \cite{Dth}). The statement of this lemma is correct, but the proof in \cite{D09} uses a result of Stoll (see \cite{St}) that is not. Note that, in contrast with \cite{D09}, all torsors (unless explicitly mentioned) are assumed to be left torsors.
\begin{lem} \label{Stoll}
Let $X$ be a smooth geometrically connected $k$-variety. Let $(P_v) \in X({\bf A}_k)^{\textup{\'et}, {\rm Br}}$ and let $Z \xrightarrow{g} X$ be a torsor under a finite $k$-group $F$.
Then there are a cocycle $\sigma \in Z^1(k, F)$ and a connected component $X'$ of $Z^\sigma$ over $k$ such that the restriction of $g_\sigma$ to $X'$ is a torsor $X' \rightarrow X$ under the stabilizer $F'$ of $X'$ for the action of $F^\sigma$, and the point $(P_v)$ lifts to a point $(Q'_v) \in X'({\bf A}_k)^{\textup{Br}}$.
In particular, $X'$ is geometrically integral.
\end{lem}
\begin{proof}
By assumption, the point
$(P_v)$ lifts to some point $(Q_v) \in Z^{\sigma}({\bf A}_k)^{\textup{Br}}$ for some cocycle $\sigma$ with values in $F$. Since $Z^{\sigma}$ is smooth, $Z^\sigma$ is a disjoint union of connected components over $k$. By Proposition 3.3 in \cite{LX}, there is a $k$-connected component $X'$ of $Z^{\sigma}$ such that $(Q_v)_{v\not\in \Xi} \in P_{\Xi} (X'({\bf A}_k)^{\textup{Br}})$, where $\Xi$ is the set of all complex places of $k$, ${\bf A}_k^{\Xi}$ is the ring of adeles without $\Xi$-components and $P_{\Xi}$ is the projection from $X'({\bf A}_k)$ to $X'({\bf A}_k^{\Xi})$. Since for $v\in \Xi$, $Z^\sigma \times_k k_v$ is a trivial torsor under the finite constant group scheme $F^\sigma\times_k k_v$, we have $g_\sigma (X'(k_v))=X(k_v)$ for all $v\in\Xi$. Hence one can assume that $Q_v\in X'(k_v)$ for $v\in \Xi$, so that we have $(Q_v) \in X'({\bf A}_k)^{\textup{Br}}$.
Since $X'$ is connected and $X'({\bf A}_k) \neq \emptyset$, the proof of Lemma 5.5 in \cite{St} implies that $X'$ is geometrically connected. Finally, since $X'$ is geometrically connected, the variety $X'$ is an $X$-torsor under the stabilizer $F'$ of $X'$ in $F^{\sigma}$.
\end{proof}
Let us continue the proof of the aforementioned inclusion. Let $X$ be a smooth and geometrically integral $k$-variety, and $(P_v) \in X({\bf A}_k)^{\textup{\'et}, {\rm Br}}$. We need to prove that $(P_v) \in X({\bf A}_k)^{\rm{desc}}$.
For a linear algebraic group $G$ over $k$, one has the following short exact sequence of algebraic groups over $k$:
$$ 1\rightarrow H\rightarrow G \rightarrow F \rightarrow 1 \, ,$$
where $H$ is the connected component of $G$ and $F$ is finite over $k$. This induces the following diagram of short exact sequences
\begin{displaymath}
\xymatrix{
1 \ar[r] & H \ar[r] \ar[d] & G \ar[r] \ar[d] & F \ar[r] \ar[d] & 1 \\
1 \ar[r] & T \ar[r] & G' \ar[r] & F \ar[r] & 1
}\end{displaymath}
where $T$ denotes the maximal toric quotient of $H$ and $G'$ is the quotient of $G$ by the kernel of $H\rightarrow T$.
Let $Y \rightarrow X$ be a torsor under $G$ and let $Z \rightarrow X$ be the push-forward of $Y \rightarrow X$ by the morphism $G \rightarrow F$, which is a torsor under $F$. If $\sigma \in Z^1(k, F)$ is a $1$-cocycle given by Lemma \ref{Stoll} applied to the torsor $Z \rightarrow X$ and to the point $(P_v)$, we want to show that the cocycle $\sigma \in Z^1(k, F)$ lifts to a cocycle $\tau \in Z^1(k, G)$, as in Proposition 5 in \cite{D09}. The obstruction to lift $\sigma$ to a cocycle in $Z^1(k,G)$ gives a natural cohomology class $\eta_\sigma \in H^2(k,\kappa_\sigma)$ by (5.1) in \cite{FSS} (see also (7.7) in \cite{B}), where $\kappa_\sigma$ is a natural $k$-kernel on $H_{\bar{k}}$ associated to $\sigma$. Lemma 6 in \cite{D09} implies that there is a canonical map $H^2(k, \kappa_\sigma) \to H^2(k, T^\sigma)$ such that the class $\eta_\sigma$ is neutral if and only if its image $\eta'_\sigma \in H^2(k, T^\sigma)$ is zero.
We now apply the open descent theory and the extended type developed by Harari and Skorobogatov in \cite{HaSk} to establish the analogue of Lemma 7 in \cite{D09} for open varieties.
As in the proof of \cite{D09}, the torsor $Y \to Z$ under $H$ induces a torsor $ W \xrightarrow{\varpi} Z$ under $T$ via the natural map $H^1(Z,H)\to H^1(Z,T)$. Instead of the type of the torsor $\varpi$ used in \cite{D09}, we consider the so-called ``extended type'' of $\varpi$ introduced by Harari and Skorobogatov (see Definition 8.2 in \cite{HaSk}). For a variety $Z$ over $k$, let $KD'(Z)$ denote the complex of Galois modules $[\overline{k}(Z)^* / \overline{k}^* \to \Div(Z_{\bar k})]$ in the derived category $D^b_\et(k)$ of bounded complexes of \'etale sheaves over $\Spec(k)$. One can associate to the torsor $W \xrightarrow{\varpi} Z$ under $T$ a canonical morphism in this derived category
$$\lambda_W : \widehat{T} \to KD'(Z) \, , $$
called the extended type of $\varpi$. This induces a morphism in the derived category of bounded complexes of abelian groups
$$\lambda_W^\sigma : \widehat{T}^\sigma \to KD'(Z^\sigma) $$ for the above $\sigma \in Z^1(k,F)$.
\begin{lem} \label{twist}
The morphism $\lambda_W^\sigma : \widehat{T}^\sigma \to KD'(Z^\sigma)$ is a morphism in the derived category of bounded complexes of \'etale sheaves over $\Spec(k)$.
\end{lem}
\begin{proof}
The natural left actions of $F$ on both $T$ and $Z$ induces right actions of $F$ on $\widehat{T}$ and on $KD'(Z)$.
We first prove that the morphism $\lambda_W$ is $F$-equivariant for those actions.
Let $f \in F(\overline{k})$. We denote by $f_{Z} : Z_{\bar k} \to Z_{\bar k}$ the morphism of $\bar{k}$-varieties defined by $z \mapsto f \cdot z$. This morphism induces a natural morphism in the derived category $f_{Z}^* : KD'(Z_{\bar k}) \to KD'(Z_{\bar k})$. Similarly, the element $f$ defines a natural morphism of $\bar{k}$-tori $f_T : T_{\bar k} \to T_{\bar k}$ such that $f_T(t) := g t g^{-1}$, where $g \in G'(\overline{k})$ is any point lifting $f \in F(\overline{k})$. This morphism $f_T$ induces a morphism of abelian groups $\widehat{f_T} : \widehat{T} \to \widehat{T}$ such that $\widehat{f_T}(\chi) := \chi \circ f_T$.
One needs to prove that the following diagram
\begin{displaymath}
\xymatrix{
\widehat{T} \ar[r]^{\lambda_{W_{\bar k}} \ \ \ \ } \ar[d]_{\widehat{f_T}} & KD'(Z_{\bar k}) \ar[d]^{f_{Z}^*} \\
\widehat{T} \ar[r]_{\lambda_{W_{\bar k}} \ \ \ \ } & KD'(Z_{\bar k})
}
\end{displaymath}
is commutative.
Let $f_{T, *} W_{\bar k}$ be the push-forward of the torsor $W_{\bar k} \to Z_{\bar k}$ under $T_{\bar k}$ by the $\bar k$-morphism $ T_{\bar k} \xrightarrow{f_T} T_{\bar k}$ and let $f_{Z}^* W_{\bar{k}}$ be the pullback of the torsor $W_{\bar{k}} \to Z_{\bar{k}}$ under $T_{\bar{k}}$ by the $\bar{k}$-morphism $f_Z : Z_{\bar{k}} \to Z_{\bar{k}}$.
Then functoriality of the extended type gives:
$$f_Z^* \circ \lambda_{W_{\bar{k}}} = \lambda_{f_Z^* W_{\bar{k}}} \, \, \ \ \textup{and} \ \ \, \, \lambda_{f_{T,*} W_{\bar{k}}} = \lambda_{W_{\bar{k}}} \circ \widehat{f_T} \, .$$
To prove the required commutativity $f_Z^* \circ \lambda_{W_{\bar{k}}} = \lambda_{W_{\bar{k}}} \circ \widehat{f_T}$, it is enough to show that the torsors $f_Z^* W_{\bar{k}} \to Z_{\bar{k}}$ and $f_{T, *} W_{\bar{k}} \to Z_{\bar{k}}$ under $T_{\bar{k}}$ are isomorphic. Indeed, we have the following commutative diagram
\begin{displaymath}
\xymatrix{
T_{\bar{k}} \times W_{\bar{k}} \ar[d]_{\varpi \circ p_W} \ar[r]^{ \ \ \ g} & W_{\bar{k}} \ar[d]^{\varpi} \\
Z_{\bar{k}}\ar[r]_{f_Z} & Z_{\bar{k}} \, ,
}
\end{displaymath}
where $p_W$ denotes the projection on $W_{\bar{k}}$ and the morphism $g$ is defined by $(t,w) \mapsto (t g) \cdot w$. This diagram induces a natural $Z_{\bar{k}}$-morphism $\phi : T_{\bar{k}} \times W_{\bar{k}} \to f_Z^* W_{\bar{k}}$. Consider now the right action of $T_{\bar{k}}$ on $T_{\bar{k}} \times W_{\bar{k}}$ defined by $(s,w) \cdot t := (s f_T(t),t^{-1} \cdot w) = (s g t g^{-1},t^{-1} \cdot w)$. Then the morphism $\phi$ is $T_{\bar{k}}$-invariant under this action, hence it induces a $Z_{\bar{k}}$-morphism $\psi : f_{T,*} W_{\bar{k}} \to f_Z^* W_{\bar{k}}$. One can check by a simple computation that $\psi$ is $T_{\bar{k}}$-equivariant, i.e. that $\psi$ is a morphism of (left) torsors over $Z_{\bar{k}}$ under $T_{\bar{k}}$. It concludes the proof of the required commutativity, hence the morphism $\lambda_W$ is $F$-equivariant.
By definition of the twists $T^\sigma$ and $Z^\sigma$, the fact that $\lambda_W$ is $F$-equivariant implies that the morphism $\lambda_W^\sigma$ is Galois equivariant, i.e. that $\lambda_W^\sigma$ is a morphism in the derived category of bounded complexes of \'etale sheaves over $\Spec(k)$.
\end{proof}
By Proposition 8.1 in \cite{HaSk}, there is a natural exact sequence of abelian groups
$$H^1(k, T^\sigma) \to H^1(X', T^\sigma) \xrightarrow{\lambda} \Hom_k(\widehat{T^\sigma}, KD'(X')) \xrightarrow{\partial} H^2(k, T^\sigma) $$
where the map $\lambda$ is the extended type. Let $\lambda_\sigma '=\psi^* \circ \lambda_W^\sigma$, where $\psi : X' \to W$ is the inclusion of the $k$-connected component given by Lemma \ref{Stoll}, and $KD'(Z^\sigma)\xrightarrow{\psi^*} KD'(X')$ is the map induced by $\psi$.
The following lemma, which is an analogue of Lemma 8 in \cite{D09}, is a crucial step for proving the main result of this section. We give here a more conceptual proof than that in \cite{D09}, where a similar statement was proven by cocycle computations under the assumption that $\bar{k}[X]^\times = \bar{k}^\times$.
\begin{lem} \label{comparison H2} With the above notation, one has
$$\partial(\lambda_\sigma ') = 0 \, \, \textup{if and only if} \, \, \eta'_\sigma = 0 \, .$$
\end{lem}
\begin{proof} In the following proof, we work over the small \'etale site of $\Spec(k)$.
Recall that we are given a cocycle $\sigma \in Z^1(k, F)$ as in Lemma \ref{Stoll}: one can associate to $\sigma$ a $\Spec(k)$-torsor $U$ under $F$ with a point $u_0 \in U(\overline{k})$. This torsor $U$ is naturally a homogeneous space of the group $G'$ with geometric stabilizer isomorphic to $T_{\bar{k}}$.
Section IV.5.1 in \cite{Gi} implies that the element $\eta'_\sigma \in H^2(k, T^\sigma)$ is the class of the $\Spec(k)$-gerbe $\mathcal{E}_\sigma$ banded by $T^\sigma$ such that for all \'etale schemes $S$ over $\Spec(k)$, the category $\mathcal{E}_\sigma(S)$ is defined as follows: the objects of $\mathcal{E}_\sigma(S)$ are triples $(P,p, \alpha)$ where $P \to S$ is a torsor under $G'$, $p \in P(S_{\bar{k}})$ and $\alpha : P \to U_S$ is a $G'$-equivariant $S$-morphism. The morphisms of $\mathcal{E}_\sigma(S)$ between triples $(P,p,\alpha)$ and $(P',p',\alpha')$ are given by morphisms of torsors $P \to P'$ over $S$ under $G'$ that commute with $\alpha$ and $\alpha'$.
Similarly, one can associate to the morphism $\lambda_\sigma'$ a $\Spec(k)$-gerbe banded by $T^\sigma$ that will be the obstruction for the morphism $\lambda_\sigma'$ to be the extended type of a torsor over $X'$ under $T^\sigma$. The morphism $\lambda_\sigma '$ induces a morphism $\overline{\lambda_\sigma '} : \widehat{T^\sigma_{\bar{k}}} \to KD'(X'_{\bar{k}})$ in $D^b_\et(\bar{k})$.
By construction, $\overline{\lambda_\sigma '}$ is the extended type of the torsor $Y_0 := W_{\bar{k}} \times_{Z_{\bar{k}}} X'_{\bar{k}}$ over $X'_{\bar{k}}$ under $T^\sigma_{\bar{k}} = T_{\bar{k}}$.
We now define $\mathcal{L}_\sigma$ to be the fibered category defined as follows: for all \'etale schemes $S$ over $\Spec(k)$, the objects of the category $\mathcal{L}_\sigma(S)$ are pairs $(V,\varphi)$, where $V \to X'_S$ is a torsor under $T^\sigma_S$ of extended type $\lambda_V$ compatible with $\lambda_\sigma '$ and $\varphi : V_{\bar{k}} \to Y_0 \times_{\bar{k}} S_{\bar{k}}$ is an isomorphism of torsors over $X' \times_k S_{\bar{k}}$ under $T^\sigma_{S_{\bar{k}}}$.
Given two such objects $(V, \varphi)$ and $(V', \varphi')$, a morphism between $(V, \varphi)$ and $(V', \varphi')$ in the category $\mathcal{L}_\sigma(S)$ is a pair $(\alpha, t)$, where $\alpha : V \to V'$ is a morphism of torsors over $X'_S$ under $T^\sigma_S$ and $t \in T^\sigma(S_{\bar{k}})$ such that the diagram
\begin{displaymath}
\xymatrix{
V_{\bar{k}} \ar[r]^{\overline{\alpha}} \ar[d]_\varphi & V'_{\bar{k}} \ar[d]^{\varphi'} \\
Y_0 \times_{\bar{k}} S_{\bar{k}} \ar[r]_t & Y_0 \times_{\bar{k}} S_{\bar{k}}
}
\end{displaymath}
commutes.
One can check that $\mathcal{L}_\sigma$ is a stack for the \'etale topology over $\Spec(k)$, and the fact that this is a gerbe is a consequence of the exact sequence of Proposition 8.1 in \cite{HaSk}
$$H^1(S, T^\sigma) \to H^1(X'_S, T^\sigma) \xrightarrow{\lambda} \Hom_S(\widehat{T^\sigma}, KD'(X'_S)) \xrightarrow{\partial} H^2(S, T^\sigma) $$
(which holds provided that $S$ is integral, regular and noetherian).
The band of this gerbe is the abelian band represented by $T^\sigma$.
In addition, it is clear that $\mathcal{L}_\sigma$ is neutral if and only if $\mathcal{L}_\sigma(k) \neq \emptyset$ if and only if there exists a torsor over $X'$ under $T^\sigma$ of type $\lambda_\sigma '$ if and only if $\partial(\lambda_\sigma ') = 0$.
Let us now construct an equivalence of gerbes between $\mathcal{E}_\sigma$ and $\mathcal{L}_\sigma$.
For all \'etale $\Spec(k)$-schemes $S$, consider the functor
$$m_S : \mathcal{E}_\sigma(S) \to \mathcal{L}_\sigma(S)$$
that maps an object $(P,p, \alpha)$ to the object $(V, \varphi)$, where $V$ is defined to be the contracted product $V := (P \times_S^{G'} W_S) \times_{Z^\sigma_S} X'_S$ and $\varphi : V_{\bar{k}} \to Y_0 \times_{\bar{k}} S_{\bar{k}} = (W_{\bar{k}} \times_{Z_{\bar{k}}} X_{\bar{k}}') \times_{\bar{k}} S_{\bar{k}}$ is induced by the point $p \in P(S_{\bar{k}})$. Indeed, by construction, we have a natural map $P \times_S^{G'} W_S \to U_S \times_S^F Z_S = Z^\sigma_S$, and a simple computation proves that this map is a torsor under $T^\sigma$ of extended type compatible with $\lambda_W^\sigma$.
By definition, the functor $m_S$ sends a morphism $\varphi : (P,p, \alpha) \to (P', p', \alpha')$ to the morphism $(\widetilde{\varphi}, t_0)$ such that $\widetilde{\varphi} : (P \times_S^{G'} W_S) \times_{Z^\sigma_S} X'_S \to (P' \times_S^{G'} W_S) \times_{Z^\sigma_S} X'_S$ is the morphism induced by the morphism of torsors $\varphi : P \to P'$, and $t_0 \in T^{\sigma}(S_{\overline{k}})$ is the element such that $p' = t_0 \cdot \varphi(p)$ as $S_{\overline{k}}$-points in $(P' \times_S^{G'} W_S) \times_{Z^\sigma_S} X'_S$.
Finally, one checks that the collection of functors $m_S$ defines a morphism of gerbes $m : \mathcal{E}_\sigma \to \mathcal{L}_\sigma$ banded by the identity of $T^\sigma$, which implies that $\eta'_\sigma := [\mathcal{E}_\sigma] = [\mathcal{L}_\sigma] \in H^2(k, T^\sigma)$.
Therefore, $\eta'_\sigma = 0$ if and only if $\mathcal{E}_\sigma(k) \neq \emptyset$ if and only if $\mathcal{L}_\sigma(k) \neq \emptyset$ if and only if $\partial(\lambda_\sigma ') = 0$.
\end{proof}
The immediate consequence of Lemma \ref{comparison H2} is the following result which extends Proposition 5 in \cite{D09} to open varieties.
\begin{prop}
\label{prop}
Let $X$ be a smooth geometrically integral $k$-variety. Let $(P_v) \in X({\bf A}_k)^{\textup{\'et, Br}}$ and let $Y \rightarrow X$ be a torsor under a linear $k$-group $G$.
Let
$$1 \rightarrow H \rightarrow G \rightarrow F \rightarrow 1$$
be an exact sequence of linear $k$-groups, where $H$ is connected and $F$ finite. Let $Z \rightarrow X$ be the push-forward of $Y \rightarrow X$ by the morphism $G \rightarrow F$, which is a torsor under $F$. Let $\sigma \in Z^1(k, F)$ be a $1$-cocycle given by Lemma \ref{Stoll} applied to the torsor $Z \rightarrow X$ and the point $(P_v)$.
Then the cocycle $\sigma \in Z^1(k, F)$ lifts to a cocycle $\tau \in Z^1(k, G)$.
\end{prop}
\begin{proof} As mentioned above, Construction (5.1) in \cite{FSS} (see also (7.7) in \cite{B}) gives a class $\eta_\sigma$ of $H^2(k, \kappa_\sigma)$ such that $\sigma$ can be lifted to $Z^1(k,G)$ if and only if $\eta_\sigma$ is neutral, where $\kappa_\sigma$ is a $k$-kernel on $H_{\bar{k}}$. By (6.1.2) of \cite{B} and Lemma 6 in \cite{D09}, there is a canonical map $H^2(k, \kappa_\sigma) \to H^2(k, T^\sigma)$ such that the class $\eta_\sigma$ is neutral if and only if its image $\eta'_\sigma \in H^2(k, T^\sigma)$ is zero. By Lemma \ref{comparison H2}, one only needs to show that $\partial(\lambda_\sigma ') = 0$ where $\lambda_\sigma '=\psi^* \circ \lambda_W^\sigma$, with $KD'(Z^\sigma)\xrightarrow{\psi^*} KD'(X')$ given by Lemma \ref{Stoll} and $\lambda_W^\sigma$ defined by Lemma \ref{twist}.
By Lemma \ref{Stoll}, we know that $X'({\bf A}_k)^{\rm Br} \neq \emptyset$. Therefore the map $\lambda$ in the exact sequence (see Proposition 8.1 in \cite{HaSk})
$$H^1(X', T^\sigma) \xrightarrow{\lambda} \Hom_k(\widehat{T^\sigma}, KD'(X')) \xrightarrow{\partial} H^2(k, T^\sigma) $$
is surjective by Corollary 8.17 in \cite{HaSk}. Hence the map $\partial$ is the zero map and $\partial(\lambda_\sigma ') = 0$, which concludes the proof. \end{proof}
\begin{rem} The proof of Proposition \ref{prop} also gives the following result:
Let $X$ be a smooth geometrically integral $k$-variety and let $Y \rightarrow X$ be a torsor under a linear algebraic $k$-group $G$. Let
$$1 \rightarrow H \rightarrow G \rightarrow F \rightarrow 1$$
be an exact sequence of linear $k$-groups, where $H$ is connected and $F$ finite. Let $Z \rightarrow X$ be the push-forward of $Y \rightarrow X$ by the morphism $G \rightarrow F$.
If $\sigma\in H^1(k, F)$ satisfies $Z^{\sigma}({\bf A}_k)^{{\rm Br}_1(Z^\sigma)}\neq \emptyset$, then $\sigma$ can be lifted to $H^1(k, G)$.
\end{rem}
One can now prove the main result of this section:
\begin{thm}\label{main-last} If $X$ is a smooth and geometrically integral variety over a number field $k$, then
$$ X({\bf A}_k)^{\textup{\'et}, {\rm Br}} \subseteq X({\bf A}_k)^{\rm{desc}} \, . $$
\end{thm}
\begin{proof}
Since statement 2 of Theorem 2 in \cite{H} (which we apply to $X'$) holds for any geometrically integral variety (without any assumption on $\bar{k}[X']^\times$), the proof of this theorem using Proposition \ref{prop} is exactly the same as the proof of Theorem 1 using Proposition 5 in \cite{D09} (see in particular \cite{D09}, pp. 244-245). \end{proof}
\bigskip
\noindent{\bf Acknowledgements.} We would like to thank Jean-Louis Colliot-Th\'el\`ene for several comments on an early version of this paper. We would also like to thank the referee for pointing out a mistake in the previous version.
The first named author acknowledges the support of the French Agence Nationale de la Recherche (ANR)
under reference ANR-12-BL01-0005, the second named author acknowledges the support of the French Agence Nationale de la Recherche (ANR)
under references ANR-12-BL01-0005 and ANR-15-CE40-0002-01, and the third named author acknowledges the support of NSFC grant no.11471219 and 11631009.
\begin{bibdiv}
\begin{biblist}
\bib{B}{article}{
author={Borovoi, M.}
title={Abelianization of the second nonabelian Galois cohomology}
journal={Duke Math. J.}
volume={72}
date={1993}
pages={217-239}
}
\bib{BD} {article} {
author={Borovoi, M.},
author={Demarche, C.},
title={Manin obstruction to strong approximation for homogeneous spaces},
journal={Comment. Math. Helv.},
volume={88},
date={2013},
pages={1-54},
}
\bib{BvH}{article}{
AUTHOR = {Borovoi, M.},
author = {van Hamel, J.},
TITLE = {Extended {P}icard complexes and linear algebraic groups},
JOURNAL = {J. Reine Angew. Math.},
VOLUME = {627},
date = {2009},
PAGES = {53--82},
}
\bib{CX1} {article} {
author={Cao, Y.},
author={Xu, F.}
title={Strong approximation with Brauer-Manin obstruction for toric varieties},
journal={arXiv:1311.7655},
volume={},
date={2013},
Pages={},
}
\bib{CX2} {article} {
author={Cao, Y.},
author={Xu, F.}
title={Strong approximation with Brauer-Manin obstruction for groupic varieties},
journal={arXiv:1507.04340v4},
volume={},
date={2015},
Pages={},
}
\bib{CT08}{article}{
author={Colliot-Th\'el\`ene, J.-L.},
title={R\'esolutions flasques des groupes lin\'eaires connexes}
journal={ J. reine angew. Math.}
volume={618}
date={2008}
pages={77-133}
}
\bib{CTH} {article} {
author={Colliot-Th\'el\`ene, J.-L.},
author={Harari, D.},
title={Approximation forte en famille},
journal={to appear in J. reine angew. Math.},
volume={},
date={},
Pages={},
}
\bib{CTS87} {article} {
author={Colliot-Th\'el\`ene, J.-L.},
author={Sansuc, J.-J.},
title={La descente sur les vari\'et\'es rationnelles, II,},
journal={Duke Math. J.},
volume={54},
date={1987},
Pages={375-492},
}
\bib{CTX} {article} {
author={Colliot-Th\'el\`ene, J.-L.},
author={Xu, F.},
title={Brauer-Manin obstruction for integral points of homogeneous spaces and
representations by integral quadratic forms},
journal={Compositio Math.},
volume={145},
date={2009},
Pages={309-363},
}
\bib{CTX1} {article} {
author={Colliot-Th\'el\`ene, J.-L.},
author={Xu, F.},
title={Strong approximation for the total space of certain quadric fibrations},
journal={Acta Arithmetica},
volume={157},
date={2013},
Pages={169-199},
}
\bib{Conrad} {article} {
author={Conrad, B.},
title={Weil and Grothendieck approaches to adelic points},
journal={Enseign. Math.},
volume={58},
date={2012},
Pages={61-97},
}
\bib{de} {article} {
author={de Jong, A.J.},
title={A result of Gabber},
journal={},
volume={},
date={},
Pages={Available at \texttt{http://www.math.columbia.edu/\~{}dejong/papers}},
}
\bib{SGA4} {book} {
author={Artin, M.}
title={Comparaison avec la cohomologie classique: cas d'un pr\'esch\'ema lisse}
series={SGA 4, Lecture Notes in Mathematics 305}
publisher={Springer-Verlag}
date={1973}
}
\bib{D09}{article}{
author={Demarche, C.}
title={Obstruction de descente et obstruction de Brauer-Manin \'etale}
journal={Algebra Number Theory}
volume={3}
date={2009}
pages={237-254}
}
\bib{Dth}{article}{
author={Demarche, C.}
title={M\'ethodes cohomologiques pour l'\'etude des points rationnels sur les espaces homog\`enes}
journal={PhD thesis, University Paris-Sud XI}
date={2009}
pages={Available at \texttt{https://webusers.imj-prg.fr/\~{}cyril.demarche/these/these.pdf} }
}
\bib{D0} {article} {
author={Demarche, C.},
title={Suites de Poitou-Tate pour les complexes de tores \`a deux termes},
journal={Int. Math. Res. Not.},
volume={},
date={2011},
Pages={135-174},
}
\bib{D} {article} {
author={Demarche, C.},
title={Le d\'efaut d'approximation forte dans les groupes lin\'eaires connexes},
journal={Proc.London Math.Soc.},
volume={102},
date={2011},
pages={563-597},
}
\bib{FSS}{article}{
author={Flicker, Y. Z.}
author={Scheiderer, C.}
author={Sujatha, R.}
title={Grothendieck's theorem on non-abelian $H^2$ and local-global principles}
journal={J. Amer. Math. Soc.}
volume={11}
date={1998}
pages={731-750}
}
\bib{Gi}{book}{
author={Giraud, J.}
title={Cohomologie non-ab\'elienne}
series={Die Grundlehren der mathematischen Wissenschaften}
publisher={Springer-Verlag}
volume={179}
date={1971}
}
\bib{Gr}{book} {
author={Grothendieck, A.}
title={Le groupe de Brauer, I,II,III}
series={Dix expos\'es sur la cohomologie des sch\'emas}
publisher={North-Holland}
date={1968}
}
\bib{H}{article}{
author={Harari, D.}
title={Groupes alg\'ebriques et points rationnels}
journal={Math. Ann.}
volume={322}
date={2002}
pages={811-826}
}
\bib{Ha08} {article} {
author={Harari, D.}
title={Le d\'efaut d'approximation forte pour les groupes alg\'ebriques commutatifs}
journal={Algebra \& Number Theory}
volume={2}
date={2008}
pages={595-611}
}
\bib{HSk}{article}{
author={Harari, D.}
author={Skorobogatov, A. N.}
title={Non-abelian cohomology and rational points}
journal={Compos. Math.}
volume={130}
date={2002}
pages={241-273}
}
\bib{HSk03}{article}{
author={Harari, D.}
author={Skorobogatov, A. N.}
title={The Brauer group of torsors and its arithmetic applications}
journal={Ann. Inst. Fourier, Grenoble}
volume={53}
date={2003}
pages={1987-2019}
}
\bib{HSk1}{article}{
author={Harari, D.}
author={Skorobogatov, A. N.}
title={Non-abelian descent and the arithmetic of Enriques surfaces}
journal={Intern. Math. Res. Notices}
volume={52}
date={2005}
pages={3203-3228}
}
\bib{HaSk}{article}{
author={Harari, D.}
author={Skorobogatov, A. N.}
title={Descent theory for open varieties}
journal={London Mathematical Society Lecture Note Series}
volume={405}
date={2013}
pages={250-279}
number={}
}
\bib{HS05} {article}{
author={ Harari, D.}
author={ Szamuely, T.}
title={ Arithmetic duality theorem for 1-motives}
journal={J. reine angew. Math.}
volume={578}
date={2005}
pages={93-128}
}
\bib{LX} {article}{
author={Liu, Q.}
author={Xu, F.}
title={Very strong approximation for certain algebraic varieties}
journal={Math. Ann.}
volume={363}
date={2015}
pages={701-731}
}
\bib{Milne}{book}{
author={Milne, J. S.}
title={\'Etale cohomology}
publisher={Princeton University Press}
date={1980}
}
\bib {PR}{book}{
author={Platonov, V. P.},
author={Rapinchuk, A. S.}
title={Algebraic groups and number theory},
publisher={Academic Press},
place={},
journal={ },
series={},
volume={},
date={1994},
number={ },
pages={},
}
\bib{P}{article}{
author={Poonen, B.}
title={Insufficiency of the Brauer-Manin obstruction applied to \'etale covers}
journal={Ann. of Math.}
volume={171}
date={2010}
pages={2157-2169}
}
\bib{Sansuc} {article} {
author={Sansuc, J.-J.},
title={Groupe de Brauer et arithm\'etique des groupes alg\'ebriques lin\'eaires sur un corps
de nombres},
journal={J. reine angew. Math.},
volume={327},
date={1981},
pages={12-80},
}
\bib {Ser}{book}{
author={Serre, J.-P.},
title={Cohomologie Galoisienne},
publisher={Springer},
place={Berlin},
journal={ },
series={Lecture Notes in Mathematics},
volume={5},
date={1965},
number={ },
pages={},
}
\bib{Sk} {article} {
author={Skorobogatov, A. N.},
title={Beyond the Manin obstruction},
journal={Invent. Math.},
volume={135},
date={1999},
pages={399-424},
}
\bib{Sk1}{article}{
author={Skorobogatov, A. N.}
title={Descent obstruction is equivalent to \'etale Brauer-Manin obstruction}
journal={Math. Ann.}
volume={344}
date={2009}
pages={501-510}
}
\bib {Sko}{book}{
author={Skorobogatov, A. N.},
title={Torsors and rational points},
publisher={Cambridge University Press},
place={},
journal={ },
series={Cambridge Tracts in Mathematics},
volume={144},
date={2001},
number={ },
pages={},
}
\bib{SZ}{article}{
author={Skorobogatov, A. N.},
author={Zarhin, Y. G.}
title={The Brauer group and the Brauer-Manin set of products of varieties},
journal={J. Eur. Math. Soc.}
volume={16}
date={2014}
pages={749-768}
}
\bib{St}{article}{
author={Stoll, M.}
title={Finite descent obstructions and rational points on curves}
journal={Algebra Number Theory}
volume={1}
date={2007}
pages={349-391}
}
\bib{Wei}{article}{
author={Wei, D.}
title={Open descent and strong approximation}
journal={arXiv:1604.00610v2}
date={2016}
}
\end{biblist}
\end{bibdiv}
\end{document}
Recommendations for Athletes. The Academy of Nutrition and Dietetics and the American College of Sports Medicine both recommend that athletes eat 1.2 to 2 grams of protein per kilogram of body weight. For a 150-pound person, that boosts the 55-gram RDA to between 82 and 136 grams.
Film posters were created mostly by unknown artists working in-house for the major studios. These posters played their part in turning actors such as Marilyn Monroe, James Dean, Humphrey Bogart, John Wayne, Elizabeth Taylor, Clint Eastwood, Audrey Hepburn and Steve McQueen into icons.
The film poster, which was never seen as art and was meant to be thrown away, has now been taken by Stephen to a new level and turned into a true piece of fine art.
The paintings are full of texture, painted as though on a wall that is peeling and rotting away, with some unique surprises.
What is your favourite film? Commissions available.
All paintings are oil on canvas, framed with a large high gloss black wooden frame.
http://mathhelpforum.com/calculus/18367-using-cases-proofs-print.html

# Using cases in proofs

• September 2nd 2007, 07:01 AM
fruitcakelover

Hi, does anybody know how to prove the following statement

For each real number $r$, $-|r| \leq r \leq |r|$

by using two cases in the proof?

• September 2nd 2007, 07:38 AM
ThePerfectHacker

Quote:

Originally Posted by fruitcakelover
Hi, does anybody know how to prove the following statement: for each real number $r$, $-|r| \leq r \leq |r|$, by using two cases in the proof?

Given $x\in \mathbb{R}$ we define $|x| = x \mbox{ if }x\geq 0 \mbox{ and }|x| = -x\mbox{ if }x<0$.

Case 1, when $x\geq 0$. Thus $|x|=x$. And so we need to prove $-x\leq x\leq x$, which is true since $x\leq x$ and $-x\leq 0\leq x$.

Case 2, when $x<0$. You continue.
\section{Introduction}
Hidden Markov models with parametric emission distributions and a discrete, known state space, which we shorten as HMMs, have been widely used in diverse areas for analyzing dependent discrete-time data \citep{fru06, elliott2008hidden,zucchini2017hidden}. Our focus is on studying asymptotic properties of the posterior distribution of parameters in an HMM \citep{cappe2006inference,de2008asymptotic}. We prove a Bernstein-von Mises theorem for HMMs using a testing condition. Informally, this theorem says that the posterior distribution of parameters in an HMM converges in distribution to a normal distribution centered at the maximum likelihood estimator of the parameters, with covariance matrix equal to the inverse of the Fisher information matrix.
Consider the setup of a general hidden Markov model as a discrete-time stochastic process $(X_{t},Y_{t})_{t \geq 1}$, where the $(Y_{t})_{t\geq 1}$ process is observed and the $(X_{t})_{t\geq 1}$ process is unobserved and Markov. Its state space consists of all possible values of $(X_{t})_{t\geq 1}$, and its emission distribution is defined as the conditional distribution of $Y_{t}$ given $X_{t}$. Our HMMs are a special case of this setup in that the emission distributions are parametric and the state space is discrete and known. There are many theoretical results about frequentist estimation in hidden Markov models with discrete, continuous, or unknown state spaces and parametric or non-parametric emission distributions \citep{baum1966statistical, leroux1992maximum,bickel1998asymptotic,jensen1999asymptotic, douc2001asymptotics,douc2011consistency}. The equivalent asymptotic Bayesian results have started to emerge recently \citep{de2008asymptotic,gassiat2014posterior,vernet2015posterior,gassiat2018efficient,douc2020posterior}.
Our theoretical results are closest to those of \citet{de2008asymptotic}. They prove a Bernstein-von Mises theorem for HMMs using a Taylor expansion of the log likelihood function. Our proof differs from theirs in that we adopt the setup of \citet{ghosal2000convergence} for deriving posterior contraction rates in general Bayesian models. Using this setup, we adapt the proof of Theorem 10.1 in \cite{Van00} to show that the posterior distribution of parameters in an HMM is asymptotically normal. Compared to \cite{de2008asymptotic}, our assumption on the log likelihood function is milder in that our proof assumes only first-order smoothness of the log likelihood function. More importantly, our proof relies on the local asymptotic normality condition for HMMs, which is satisfied under the assumptions of \citet{bickel1998asymptotic,de2008asymptotic}.
Our proof techniques are similar to those used for investigating asymptotic properties of the posterior distribution in extensions of HMMs with non-parametric emission distributions \citep{gassiat2014posterior,vernet2015posterior}. A key step in our proof is to establish a testing condition based on a sequence of test functions. We are not aware of any existing test function that is tuned for such applications in HMMs.
Our main contribution is to establish the testing condition using a sequence of test functions obtained from an optimal transportation cost inequality. The testing condition measures the complexity of the HMM family, and establishing it is a key step in showing the consistency of the posterior distribution \citep{ghosal2000convergence}. Our construction of the sequence of test functions is based on the $L^1$-transportation cost information inequality for HMMs \citep{djellout2004transportation, kontorovich2008concentration} and Lipschitz concentration of the log likelihood function of HMMs \citep{le2000exponential}. \citet{hu2011transportation} constructed a likelihood ratio test with exponentially decaying error probabilities for testing simple hypotheses. Extending this work, we construct a sequence of test functions with exponentially decaying error probabilities for testing general hypotheses using the $L^1$-transportation cost information inequality. This sequence of test functions satisfies the testing condition in \cite{ghosal2000convergence,Van00} and is used to prove the Bernstein-von Mises theorem for HMMs.
This paper is structured in four sections. In Section \ref{sec:pre}, we state the HMM setup, define the prediction filter, and restructure the results of \citet{kontorovich2008concentration} and \citet{hu2011transportation}
in the context of this paper. The assumptions and main results about the testing condition are presented in Section \ref{sec:main}. The contraction rate and asymptotic normality of the posterior distribution of parameters in an HMM follow immediately from the testing condition. Finally, Section \ref{sec:proof} contains proofs of the main results.
\section{Setup and Background}
\label{sec:pre}
Consider the HMM setup in greater detail. Recall that the HMM is a discrete-time stochastic process $(X_{t},Y_{t})_{t \geq 1}$ and that $(Y_{t})_{t\geq 1}$ process is observed and $(X_{t})_{t\geq 1}$ process is unobserved and Markov.
The hidden Markov chain $(X_{t})_{t \geq 1}$ has state space $\mathcal{X} = \{1,\ldots,S\}$ with a known $S \in \mathbb{N}$. The $(X_{t})_{t \geq 1}$ process has an initial stationary distribution $r(a) = \mathbb{P}(X_{1} = a)$ for $a \in \mathcal{X}$ and is time homogeneous with transition kernel
\begin{align}
\label{eq:1}
Q_{ab} = q(a,b) = \mathbb{P}(X_{t+1} = b | X_{t} = a),\quad t \geq 1,\;a,b\in\mathcal{X}.
\end{align}
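For concreteness, the stationary initial distribution $r$ can be recovered from a given transition kernel $Q$ as a left eigenvector of $Q$ for eigenvalue $1$. The following minimal sketch illustrates this; the helper name and the toy two-state kernel are our own choices, not part of the model above.

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary law r of a transition kernel Q, i.e. the probability
    vector solving r Q = r: a left eigenvector of Q for eigenvalue 1."""
    eigvals, eigvecs = np.linalg.eig(Q.T)
    # pick the (right) eigenvector of Q^T for the eigenvalue closest to 1
    r = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return r / r.sum()   # normalize to a probability vector

# toy two-state kernel: Q[a, b] = P(X_{t+1} = b | X_t = a), rows sum to 1
Q = np.array([[0.9, 0.1],
              [0.3, 0.7]])
r = stationary_distribution(Q)
assert np.allclose(r @ Q, r)   # stationarity: r Q = r
```

For an irreducible aperiodic kernel this eigenvector is unique up to scale, so the normalization pins down $r$.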
Given $(X_{t})_{t\geq 1}$, $(Y_{t})_{t \geq 1}$ is a sequence of independent random variables on the metric space $(\mathcal{Y},d_{\mathcal{Y}})$ and the conditional distribution of $Y_{t}$ depends only on $X_{t}$. Furthermore, the conditional distribution of $Y_{t}$ given $X_{t} = x_{t}$ does not depend on $t$ and has a density function $g(\cdot \mid x_{t})$ with respect to the $\sigma$-finite measure $\mu$ on $\mathcal{Y}$. Let $\Theta\subseteq \mathbb{R}^{d}$ denote the parameter space with fixed dimension $d \in \mathbb{N}$. The density functions $r(\cdot)$, $q(\cdot,\cdot)$, and $g(\cdot\mid \cdot)$ belong to a parametric family indexed by $\theta$, which is denoted as $\{r_\theta(\cdot), q_\theta(\cdot,\cdot), g_\theta(\cdot\mid \cdot), \theta \in \Theta \subseteq \mathbb{R}^{d}\}$. Our focus is Bayesian inference on $\theta$ given the observed data.
Consider the joint and marginal distributions of the augmented and observed data. Let $n$ be the number of observations, $(Y_{1},\ldots,Y_{n})$ be the observed data, $(X_{1},\ldots,X_{n},Y_{1},\ldots,Y_{n})$ be the augmented data, and $c_{i}^{j}$ be shorthand for the sequence $c_i, \ldots, c_j$ for $i \leq j$. Then, the joint and marginal densities of the augmented and observed data under
$(\text{counting measure})^{n}\otimes \mu^{n}$ and $\mu^{n}$, respectively, are
\begin{align}
p_{\theta}(x_{1}^n, y_{1}^n) &= r_{\theta}(x_{1})\prod_{k=1}^{n-1}q_{\theta}(x_{k},x_{k+1})g_{\theta}(y_{k}|x_{k}) g_{\theta}(y_{n}|x_{n}), \nonumber\\
p_{\theta}(y_{1}^n) &= \sum_{(x_{1}^{n})\in \mathcal{X}^{n}} p_{\theta}(x_{1}^n,y_{1}^n),
\label{eq:lik:f}
\end{align}
where $x_{1}^n \in \mathcal{X}^n$, $y_{1}^n \in \mathcal{Y}^n$, and $\theta \in \Theta$. Denote the distribution of $Y_1^n$ as $\mathbb{P}_{\theta}^{(n)}$, density $p_{\theta}(y_{1}^n)$ as $p_{\theta}^{(n)}$, and the expectation under $\PP^{(n)}_{\theta}$ as $\mathbb{E}_{\theta}^{(n)}$. If $\theta_0 \in \Theta$ is the true parameter, then $\PP^{(n)}_{\theta_0}$ is the true distribution of $Y_{1}^{n}$.
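As a concrete illustration of \eqref{eq:lik:f}, the sketch below evaluates the joint and marginal densities of a toy two-state HMM with binary emissions by brute-force enumeration of hidden paths; all parameter values, and the choice $S = 2$, are illustrative assumptions rather than quantities from the paper.

```python
import itertools
import numpy as np

# Toy two-state HMM with binary emissions; all values are illustrative.
r = np.array([0.5, 0.5])                  # initial distribution r_theta
Q = np.array([[0.7, 0.3], [0.4, 0.6]])    # transition kernel q_theta(a, b)
G = np.array([[0.9, 0.1], [0.2, 0.8]])    # emission density G[a, y] = g_theta(y | a)

def joint_density(x, y):
    """p_theta(x_1^n, y_1^n) as in the first display of eq. (lik:f)."""
    p = r[x[0]] * G[x[0], y[0]]
    for k in range(len(x) - 1):
        p *= Q[x[k], x[k + 1]] * G[x[k + 1], y[k + 1]]
    return p

def marginal_density(y):
    """p_theta(y_1^n): sum of the joint density over all S^n hidden paths."""
    return sum(joint_density(x, y)
               for x in itertools.product(range(2), repeat=len(y)))
```

Summing `marginal_density` over all $2^{n}$ observation sequences returns $1$, a quick sanity check that the two displays in \eqref{eq:lik:f} are consistent.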
We now define related conditional distributions and the \emph{extended} HMM sequence. For $2 \leq t \leq n$ and $\theta \in \Theta$, the conditional distribution of $X_{t}$ given $Y_{1}^{t-1}$ is called the \emph{prediction filter} at time $t$ and is defined as
\begin{align}
\label{eq:filter}
p_{t}^{\theta}\vcentcolon = \left\{ \mathbb{P}_{\theta}(X_{t}= 1 \mid Y_{1}^{t-1}), \ldots, \mathbb{P}_{\theta}(X_{t}= S \mid Y_{1}^{t-1}) \right\}^\mathsf{T} \in \mathcal{E} ,
\end{align}
where $\mathcal{E}$ is the space of all probability distributions on $\mathcal{X} $ equipped with total variation distance $\|\cdot\|_{\text{TV}}$. Baum's forward equation further implies that for $t = 1, \ldots, n-1$,
\begin{align}
\label{eq:fb}
p_{t+1}^{\theta} (\cdot) &= \frac{\PP_{\theta}(X_{t+1} = \cdot, Y_{t}\mid Y_{1}^{t-1})}{\PP_{\theta}(Y_{t}\mid Y_{1}^{t-1})}
= \frac{\sum_{x_{t}}q_{\theta}(x_{t}, \cdot) g_{\theta}(Y_{t} \mid X_{t} = x_{t}) p_{t}^{\theta}(x_{t}) }{\sum_{x_{t}}g_{\theta}(Y_{t} \mid X_{t} = x_{t}) p_{t}^{\theta}(x_{t})} \nonumber\\
&\vcentcolon = f^{\theta}(Y_{t},p_{t}^{\theta}),
\end{align}
where $f^{\theta}$ is a measurable function on $\mathcal{Z} = \mathcal{Y} \times \mathcal{E}$ equipped with the metric $d_{\mathcal{Z}} = d_{\mathcal{Y}} + \|\cdot\|_{\text{TV}}$. Extending the definition of $f^{\theta}(Y_{t},p_{t}^{\theta})$ to lags $s = 0, 1, \ldots, t - 1$, we recursively define $p_{t+1}^{\theta}$ as
\begin{align}
\label{eq:funcf}
p_{t+1}^{\theta} = f_0^\theta(Y_t^t, p^{\theta}_{t}) = f_{1}^{\theta}(Y_{t-1}^{t},p^{\theta}_{t-1}) = \cdots = f_{s}^{\theta}(Y_{t-s}^{t},p^{\theta}_{t-s}) = \cdots = f_{t-1}^{\theta}(Y_{1}^{t},p^{\theta}_{1}),
\end{align}
where $f_{0}^{\theta} =f^{\theta}$ and $p_{1}^{\theta} = r_{\theta}$.
The \emph{extended HMM} sequence is defined as $(Y_{t},p^{\theta}_{t})_{t = 1}^{n} \in (\mathcal{Y}\times \mathcal{E})^{n} = \mathcal{Z}^{n}$, where $(p^{\theta}_{t})_{t=1}^{n}$ is defined in \eqref{eq:funcf}. Let $l_{n}(\theta,Y_{1}^{n}) = \log p_{\theta}(Y_{1}^n)$ be the log likelihood function of $\theta$, where $p_{\theta}(y_{1}^n)$ is defined in \eqref{eq:lik:f}. Using the extended HMM sequence, $l_{n}(\theta,Y_{1}^{n})$ is expressed as
\begin{align}
l_{n}(\theta,Y_{1}^{n}) &= \log p_{\theta}(Y_{1})+\log \prod_{t=2}^{n}p_{\theta}(Y_{t}|Y_{t-1},\ldots,Y_{1})
\nonumber\\
& =
\log \left\{\sum_{x_1}g_{\theta}(Y_{1}|x_{1})r_{\theta}(x_1)\right\} +\sum_{t=2}^{n}\log\left\{ \sum_{x_t}g_{\theta}(Y_{t}|x_{t})\PP_{\theta}(X_{t} = x_{t}| Y_{t-1},\ldots,Y_{1})\right\}
\nonumber\\
& = \sum_{t=1}^{n} \log \sum_{x_{t}}p_{t}^{\theta}(x_{t})g_{\theta}(Y_{t}|x_{t}).
\label{eq:lik:a}
\end{align}
The log likelihood function in \eqref{eq:lik:a} depends on $(Y_{t},p^{\theta}_{t})_{t = 1}^{n}$. The main advantage of
\eqref{eq:lik:a} is that the log likelihood function $l_{n}(\theta,Y_{1}^{n})$ is expressed in an additive form, which is easier to manipulate. Such an additive form is not available if we use $p_{\theta}(y_{1}^n)$ in \eqref{eq:lik:f} to define $l_{n}(\theta,Y_{1}^{n})$ directly.
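Numerically, the additive form \eqref{eq:lik:a} is how HMM likelihoods are evaluated in practice: one pass of the forward recursion \eqref{eq:fb} yields both the prediction filters and the log likelihood in $O(nS^{2})$ time, versus $O(S^{n})$ for the sum in \eqref{eq:lik:f}. A sketch for a toy two-state chain with binary emissions (illustrative parameter values, not from the paper):

```python
import itertools
import numpy as np

r = np.array([0.5, 0.5])                  # p_1^theta = r_theta (illustrative)
Q = np.array([[0.7, 0.3], [0.4, 0.6]])
G = np.array([[0.9, 0.1], [0.2, 0.8]])    # G[a, y] = g_theta(y | a)

def filter_step(y_t, p_t):
    """One application of f^theta in eq. (fb): maps (Y_t, p_t) to p_{t+1}."""
    w = G[:, y_t] * p_t                   # w(a) = g_theta(y_t | a) p_t(a)
    return Q.T @ w / w.sum()              # sum_a q(a, .) w(a) / sum_a w(a)

def log_lik(y):
    """l_n(theta, y_1^n) via the additive form of eq. (lik:a)."""
    p, ll = r.copy(), 0.0
    for y_t in y:
        ll += np.log(G[:, y_t] @ p)       # log sum_x p_t(x) g_theta(y_t | x)
        p = filter_step(y_t, p)
    return ll

def log_lik_brute(y):
    """log p_theta(y_1^n) by enumerating hidden paths, for cross-checking."""
    total = 0.0
    for x in itertools.product(range(2), repeat=len(y)):
        p = r[x[0]] * G[x[0], y[0]]
        for k in range(len(y) - 1):
            p *= Q[x[k], x[k + 1]] * G[x[k + 1], y[k + 1]]
        total += p
    return np.log(total)
```

The two evaluations agree up to floating-point error, which is exactly the identity between \eqref{eq:lik:f} and \eqref{eq:lik:a}.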
We now state the optimal transportation inequality and related results that are required for constructing the sequence of test functions.
\begin{definition}
\label{def:t1}
Let $(E,d)$ be a metric space. A probability measure $\mu$ on $(E,d)$ satisfies the $L^{1}$-transportation cost information inequality if there exists a constant $C>0$ such that for any probability measure $\nu$ on $(E,d)$,
\begin{align}
\label{eq:t1}
W_{1}^{d}(\mu,\nu) \leq \sqrt{2CH(\nu\mid \mu)},
\end{align}
where $W_{1}^{d}$ is the optimal transportation cost with cost function $d(\cdot,\cdot)$ and $H(\cdot\mid \cdot)$ denotes the Kullback-Leibler divergence. If a measure $\mu$ satisfies \eqref{eq:t1}, we denote this relation as $\mu \in T_{1}(C)$.
\end{definition}
\noindent The $L^{1}$-transportation cost information inequality is important because it is an equivalent characterization of sub-Gaussian measure concentration.
\begin{theorem}\cite[Theorem 3.1]{bobkov1999exponential}
\label{thm:ti}
A measure $\mu$ defined on a metric space $(E,d)$ satisfies the $T_{1}(C)$-inequality if and only if for any $\mu$-integrable Lipschitz function $F: E\rightarrow \mathbb{R}$ and any $\lambda \in \mathbb{R}$,
$$\int_{E} e^{\lambda(F - \langle F\rangle_{\mu})} d\mu \leq e^{\tfrac{\lambda^{2}}{2}C\|F\|^{2}_{\text{Lip}}},$$
where $\langle F\rangle_{\mu} = \int_{E}F d\mu$ and $\|F\|_{\text{Lip}}$ is the Lipschitz constant of $F$. In this case, for any $r > 0$,
\begin{align}
\label{eq:conc-ineq}
\mu(F - \langle F\rangle_{\mu}>r) \leq e^{-\frac{r^{2}}{2C\|F\|^{2}_{\text{Lip}}}}.
\end{align}
\end{theorem}
An important property of $L^{1}$-transportation cost information inequality is called the \emph{tensorization principle}. For $i=1,\dots,n$, let $\mu_{i}$ be a probability measure on metric space $(E_{i},d_{i})$ such that $\mu_{i} \in T_{1}(C)$. Consider the product measure $\mu_1\otimes \cdots\otimes \mu_{n}$ on $(E_{1}\times \cdots \times E_{n}, d_{l_1})$, where the $l_{1}$-metric is defined as
\begin{align}
\label{eq:metric1}
d_{l_{1}}(u,v) = \sum_{i=1}^{n} d_{i}(u_{i},v_{i}), \;
u = (u_{1},\ldots,u_{n}),\; v = (v_{1},\ldots,v_{n}),\; u_i, v_i \in E_i.
\end{align}
Then, the tensorization principle for $L^{1}$-transportation cost information inequality states that
\begin{align}
\label{eq:tensor}
\mu_1 \otimes \cdots \otimes \mu_{n} \in T_{1}(nC);
\end{align}
see \cite[Chapter 2]{van2014probability} for more details. For a product measure, this property is called \emph{independent tensorization}.
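A case where all quantities in \eqref{eq:t1} and \eqref{eq:tensor} are explicit is the Gaussian one: it is classical that $N(0,\sigma^{2}) \in T_{1}(\sigma^{2})$, with mean shifts attaining equality, and the $n$-fold product then lies in $T_{1}(n\sigma^{2})$ under $d_{l_{1}}$. The numerical check below uses these closed forms; the specific variance and shifts are arbitrary illustrative choices.

```python
import math

# Classical closed forms for Gaussian mean shifts:
#   mu = N(0, s2) satisfies T_1(s2);  nu_i = N(m_i, s2) gives
#   W_1(mu, nu_i) = |m_i|  and  H(nu_i | mu) = m_i^2 / (2 s2).
# By tensorization, the n-fold product of mu satisfies T_1(n * s2) on the
# l1 metric, where W_1 = sum_i |m_i| and H = sum_i m_i^2 / (2 s2).
s2 = 2.0                      # illustrative variance
shifts = [0.5, -1.0, 3.0]     # illustrative mean shifts
n = len(shifts)

w1 = sum(abs(m) for m in shifts)                  # W_1 under the l1 cost
kl = sum(m * m / (2.0 * s2) for m in shifts)      # H(nu | mu)

# Tensorized T_1(n * s2) bound of eq. (t1); the slack here comes from the
# Cauchy-Schwarz step sum |m_i| <= sqrt(n * sum m_i^2).
assert w1 <= math.sqrt(2.0 * n * s2 * kl)
```

In one dimension the bound is tight: $W_{1} = |m| = \sqrt{2\sigma^{2}\cdot m^{2}/(2\sigma^{2})}$, so mean shifts are the extremal case of \eqref{eq:t1}.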
Under mild conditions, the probability measure $\mathbb{P}^{(n)}_\theta$ induced by $(Y_{t})_{t =1}^{n}$ satisfies the $T_{1}(nC_{H}^{\theta})$-inequality on the metric space $(\mathcal{Y}^{n},d_{l_{1}})$ for some constant $C_{H}^{\theta}$. If $\mathcal{Y}$ is countable, then the constant $C_{H}^{\theta}$ is determined by the mixing property of the hidden Markov chain $(X_{t})_{t \geq 1}$ \citep{kontorovich2008concentration}. Furthermore, $C_{H}^{\theta}$ is also related to the measure concentration of the emission distribution if $\mathcal{Y}$ is uncountable \citep{hu2011transportation}. We end this section by extending these two results in the following theorem.
\begin{theorem}
\label{thm:hmm:t1}
If the $r$-step transition matrix $Q^{r}_{\theta}$ of $(X_{t})_{t\geq 1}$ is positive entrywise for some integer $r \geq 1$ and $\theta \in \Theta$, then
\begin{align}
\label{mixing}
D_{\theta} = \frac{1}{2} \sum_{t=1}^{\infty}\sup_{a,a'} \sum_{b \in \mathcal{X}}|Q_{\theta}^{t}(a,b)-Q_{\theta}^{t}(a',b)| < \infty,
\end{align}
and $\mathbb{P}_{\theta}^{(n)}(Y_{1}^{n} \in \cdot) \in T_{1}(n C_{H}^{\theta})$ with respect to $(\mathcal{Y}^{n},d_{l_{1}})$, where $C_{H}^{\theta}$ is given as:
\begin{enumerate}
\item In the case $(\mathcal{Y},d_{\mathcal{Y}})$ is countable, $C_{H}^{\theta} = (D_{\theta}+1)^{2};$
\item In the case $(\mathcal{Y},d_{\mathcal{Y}})$ is uncountable, if for any $a \in \mathcal{X}$, the emission distribution $\mathbb{P}_{\theta}(Y_{1}\in \cdot |X_{1} = a) \in T_{1}(C_{\mathcal{Y}})$ for some constant $C_{\mathcal{Y}}$,
and there exists a constant $L>0$ such that for any $a,b \in \mathcal{X}$,
$$W_{1}^{d_{\mathcal{Y}}}\{ \mathbb{P}_{\theta}(Y_{1} \in \cdot \mid X_{1} = a) ,\mathbb{P}_{\theta}(Y_{1} \in \cdot \mid X_{1} = b) \} \leq L 1_{a\neq b},$$
then $C_{H}^{\theta} = C_{\mathcal{Y}} + L^{2}(D_{\theta}+1)^{2}.$
\end{enumerate}
\end{theorem}
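The series $D_{\theta}$ in \eqref{mixing} is easy to evaluate numerically: each term is the Dobrushin contraction coefficient of $Q_{\theta}^{t}$, and the terms decay geometrically once some power of $Q_{\theta}$ is positive entrywise. A sketch for an illustrative two-state chain, for which $D_{\theta}$ has the closed form $|\lambda_{2}|/(1-|\lambda_{2}|)$ with $\lambda_{2}$ the second eigenvalue of $Q_{\theta}$:

```python
import numpy as np

# Illustrative two-state transition kernel; its second eigenvalue is
# lambda_2 = 1 - 0.3 - 0.4 = 0.3.
Q = np.array([[0.7, 0.3], [0.4, 0.6]])

def dobrushin(P):
    """(1/2) sup_{a,a'} sum_b |P(a,b) - P(a',b)|: one term of eq. (mixing)."""
    S = P.shape[0]
    return 0.5 * max(np.abs(P[a] - P[b]).sum()
                     for a in range(S) for b in range(S))

def mixing_D(Q, tol=1e-12):
    """Truncate the series D_theta of eq. (mixing) once terms fall below tol."""
    D, P = 0.0, Q.copy()
    while True:
        term = dobrushin(P)
        D += term
        if term < tol:
            return D
        P = P @ Q
```

Here $D_{\theta} = 0.3/0.7 = 3/7$, so the countable-$\mathcal{Y}$ constant of Theorem \ref{thm:hmm:t1} is $C_{H}^{\theta} = (D_{\theta}+1)^{2} = (10/7)^{2}$.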
\section{Main results}
\label{sec:main}
Consider the setup for studying asymptotic properties of the posterior distribution of $\theta$. Assume that the observed data $Y_{1}^{n}$ have $\PP_{\theta_0}^{(n)}$ as their distribution. Given a prior distribution $\Pi_{n}$ on the parameter space $\Theta$ with respect to the Borel $\sigma$-field $\mathcal{F}$, the posterior distribution of $\theta$ conditional on $Y_1^n$ is
\begin{align}
\label{eq:post}
\Pi_{n}(B \mid Y_1^n) = \frac{\int_{B} d\PP^{(n)}_{\theta}(Y_1^n) \, d\Pi_{n}(\theta)}{\int_{\Theta} d\PP^{(n)}_{\theta}(Y_{1}^{n}) \, d\Pi_{n}(\theta)},\quad B \in \mathcal{F}.
\end{align}
The posterior distribution is consistent if it concentrates on arbitrary small neighborhoods of $\theta_0$ with $\PP_{\theta_0}^{(n)}$ probability tending to 1 as $n$ tends to infinity. Let $\| \cdot \|_2$ be the Euclidean distance. Then, the contraction rate in metric space $(\Theta, \|\cdot\|_2)$ is at least $\epsilon_n$ if
\begin{align*}
\Pi_{n}\{\theta \in \Theta: \|\theta - \theta_0\|_2 > M_n \epsilon_n \mid Y_1^n \}\rightarrow 0\text{ in $\PP_{\theta_0}^{(n)}$-probability for every $M_n \rightarrow \infty$}.
\end{align*}
The contraction rate measures the size of small neighborhoods of $\theta_0$ on which the posterior distribution puts almost all mass.
A general technique for deriving posterior contraction rates is based on proving the following testing condition \citep[Theorem 7.3]{ghosal2000convergence}. Let $\epsilon_n $ be a sequence satisfying $\epsilon_{n} \rightarrow 0$ and $(n\epsilon_{n}^2)^{-1} = O(1)$. The testing condition states that for a universal constant $ K >0$ and every sufficiently large $j>0$, there exists a sequence of test functions $\phi_{n}$ such that
\begin{align}
\label{eq:opt:test}
\EE_{\theta_0}^{(n)}(\phi_n) \leq e^{-Kj^{2}n\epsilon_{n}^2},\quad \sup_{\theta \in \Theta:j \epsilon_{n} <\|\theta - \theta_0\|_{2}\leq 2j \epsilon_{n}} \EE_\theta^{(n)}(1 - \phi_n) \leq e^{-Kj^{2}n\epsilon_{n}^2}.
\end{align}
Once the testing condition with rate $\epsilon_{n}$ in \eqref{eq:opt:test} is established, the posterior distribution is consistent with the same rate $\epsilon_n$ for any prior distribution $\Pi_{n}$ that puts a sufficient amount of mass near the true parameter $\theta_0$ \citep{ghosal2000convergence,GhoVan07}. Furthermore, if the local asymptotic normality (LAN) property of $\{\PP_{\theta}^{(n)}:\theta \in \Theta\}$ also holds, then the Bernstein-von Mises theorem follows immediately \citep{Van00}.
Our main contribution is to construct $\phi_{n}$ in an HMM and prove that it satisfies the testing condition in \eqref{eq:opt:test}. Starting from the simple hypothesis, we construct the likelihood ratio test with exponentially decaying error based on \cite{hu2011transportation}. Applying the $L^1$-transportation cost information inequality for HMMs, we show that the log likelihood function is a Lipschitz function of the extended HMM sequence $(Y_{t},p^{\theta}_{t})_{t = 1}^{n}$, and it concentrates on its mean with sub-Gaussian tail behavior.
Then, for any $\epsilon >0$, some $0<\xi <1$, and any $\theta_1$ with $\|\theta_1 - \theta_0\|_{2} > \epsilon$, we consider testing $\PP_{\theta_0}^{(n)}$ against the composite alternative $\{\PP_{\theta}^{(n)}: \theta \in \Theta,\|\theta - \theta_1\|_{2} \leq \xi \epsilon\}$.
By taking the likelihood ratio test of $\PP_{\theta_0}^{(n)}$ versus $\PP_{\theta'}^{(n)}$, where $\theta'$ is the center of $\{ \theta \in \Theta: \|\theta - \theta_1\|_{2} \leq \xi \epsilon\}$, the constructed test has exponentially decaying errors. Finally, we prove the testing condition \eqref{eq:opt:test} by an entropy bound of $\{\PP_{\theta}^{(n)}: \theta \in \Theta,j \epsilon_{n} <\|\theta - \theta_0\|_{2}\leq 2j \epsilon_{n}\}$.
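The first step of this construction can be mimicked numerically. The sketch below builds a likelihood ratio test $\phi_{n} = 1\{l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_1,Y_{1}^{n}) < nr\}$ for a toy two-state HMM in which $\theta_0$ and $\theta_1$ differ only through the emission matrix; the parameter values, the threshold $r$, and the sample size are illustrative assumptions rather than quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

pi = np.array([4.0, 3.0]) / 7.0               # stationary distribution of Q below
Q = np.array([[0.7, 0.3], [0.4, 0.6]])
G0 = np.array([[0.9, 0.1], [0.2, 0.8]])       # emissions under theta_0
G1 = np.array([[0.6, 0.4], [0.4, 0.6]])       # emissions under theta_1

def simulate(n, G):
    """Draw y_1^n from the stationary HMM with emission matrix G."""
    x, ys = rng.choice(2, p=pi), []
    for _ in range(n):
        ys.append(rng.choice(2, p=G[x]))
        x = rng.choice(2, p=Q[x])
    return ys

def log_lik(y, G):
    """l_n via the forward filter recursion (additive form of the likelihood)."""
    p, ll = pi.copy(), 0.0
    for y_t in y:
        w = G[:, y_t] * p
        ll += np.log(w.sum())
        p = Q.T @ w / w.sum()
    return ll

def phi(y, r):
    """Reject H_0: theta = theta_0 when the log likelihood ratio is small."""
    return int(log_lik(y, G0) - log_lik(y, G1) < len(y) * r)
```

Under data from $\theta_0$, the normalized log likelihood ratio concentrates near $J(\theta_0\mid\theta_1) > 0$, so $\phi = 0$ with overwhelming probability; under $\theta_1$ it concentrates near $-J(\theta_1\mid\theta_0) < 0$ and $\phi = 1$. The sub-Gaussian concentration behind this behavior is what the transportation inequality supplies.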
We require the following assumptions to ensure that a sequence of test functions $\phi_{n}$ exists that satisfies \eqref{eq:opt:test}.
\begin{enumerate}[label=(\subscript{A}{{\arabic*}})]
\item
\label{a1}
For any $\theta \in \Theta$, $\mathbb{P}_{\theta}^{(n)}(Y_{1}^{n} \in \cdot) \in T_{1}( nC_{H})$ on the metric space $(\mathcal{Y}^{n},d_{l_{1}})$ for some constant $C_{H}>0$.
\item
\label{a2}
For any $\theta \in \Theta$, there exist constants $\delta_1,\delta_2 >0$ such that the hidden Markov chain $(X_{t})_{t\geq 1}$ is ergodic, $\|\partial_{y}f^{\theta}_{t}\|_{\infty} < \delta_1$ for all $t \geq 0$, and $\sum_{t=0}^{\infty}\|\partial_{p}f^{\theta}_{t}\|_{\infty} < \delta_2$, where
\begin{align}
&\|\partial_{y}f^{\theta}_{t}\|_{\infty} = \sup_{y \neq y',p \in \mathcal{E}}\frac{\| f_{t}^{\theta}(y,p)-f_{t}^{\theta}(y',p)\|_{\text{TV}}}{d_{\mathcal{Y}}(y,y')},
\nonumber
\\
&
\|\partial_{p}f_{t}^{\theta}\|_{\infty} = \sup_{p_1\neq p_1' \in \mathcal{E},y_{1}^{t+1} \in \mathcal{Y}^{t+1}} \frac{\|f^{\theta}_{t}(y_{1}^{t+1},p_{1})-f_{t}^{\theta}(y_{1}^{t+1},p_{1}')\|_{\text{TV}}} {\|p_1-p_1'\|_{\text{TV}}}.
\end{align}
\item
\label{a3}
For any $\theta \in \Theta$, the function $\log \left\{\sum_{x_{1}}p_{1}^{\theta}(x_{1})g_{\theta}(Y_{1}|x_{1})\right\}$ is Lipschitz with norm $L/2$ on metric space $(\mathcal{Z} = \mathcal{Y} \times \mathcal{E},d_{\mathcal{Z}} = d_{\mathcal{Y}} + \|\cdot\|_{\text{TV}})$.
\item
\label{a4}
$\Theta \subset \mathbb{R}^{d}$ is compact and $\theta_0$ lies in its interior.
\item
\label{a6}
As $n \rightarrow \infty$, $\frac{1}{n}\mathbb{E}^{(n)}_{\theta_1}\{l_{n}(\theta_1,Y_{1}^{n}) - l_{n}(\theta_{2},Y_{1}^{n})\} \rightarrow J(\theta_1\mid\theta_2)$ uniformly for $\theta_1,\theta_2 \in \Theta$, where
\begin{align*}
J(\theta_1\mid\theta_2) = \int_{\mathcal{Y}\times \mathcal{E}} \log \frac{\sum_{x \in \mathcal{X}}p(x)g_{\theta_1}(y\mid x) }{\sum_{x \in \mathcal{X}}p(x)g_{\theta_2}(y\mid x)} R_{\theta_{1}}(dy,dp),
\end{align*}
and $ R_{\theta_{1}}$ is the stationary distribution of $(Y_{t},p_{t})$ under $\theta_1$.
\item
\label{a7}
There exist constants $\kappa_1, \kappa_2$ such that $0 < \kappa_1 \leq \kappa_2 \leq 2\kappa_1$ and
for any $\theta_1,\theta_2 \in \Theta$, $$ \kappa_1 \|\theta_1 - \theta_2\|_{2} \leq J(\theta_1|\theta_2) \leq \kappa_2 \|\theta_1 - \theta_2\|_{2}.$$
\end{enumerate}
Assumptions \ref{a1}-\ref{a3} are based on the conditions in \citet[Theorem 3.1]{le2000exponential} and are global over $\Theta$. Assumption \ref{a1} states that the probability measure $\PP_{\theta}^{(n)}$ satisfies the $L^{1}$-transportation cost information inequality on $\Theta$ with constant $nC_{H}$. Theorem \ref{thm:hmm:t1} shows that the constant $C_{H}$ can be expressed in terms of the emission distribution and the mixing property of the hidden Markov chain. Assumption \ref{a2} implies that the extended HMM sequence $(Y_{t},p_{t})_{t=1}^{n}$ is stationary and satisfies the $L^{1}$-transportation cost information inequality on $\Theta$ \citep{hu2011transportation}. Assumption \ref{a3} states that the log likelihood function is Lipschitz on the metric space $(\mathcal{Z} ,d_{\mathcal{Z}})$, which is mild and satisfied in many cases \citep[Example 3.1]{le2000exponential}. Assumption \ref{a6} implies the uniform convergence of the normalized log likelihood ratio function, which is slightly stronger than \citet[Proposition 4]{leroux1992maximum}. Assumption \ref{a7} specifies the equivalence of $J(\theta_1 \mid \theta_2)$ and $\|\theta_1 - \theta_2\|_{2}$, which is a regularity condition in parametric models \citep[Example 7.1]{ghosal2000convergence}.
\begin{theorem}[Testing Condition]
\label{thm:opt:test}
Let $\epsilon_n >0$ with $\epsilon_{n} \rightarrow 0$ such that $(n\epsilon_{n}^{2})^{-1} = O(1)$. Under Assumptions \ref{a1}-\ref{a7}, there exist positive constants $C', K$ and a sequence of test functions $\phi_{n}$ such that for every sufficiently large $j$,
\begin{align}
\label{eq:thm:test}
\mathbb{E}_{\theta_0}^{(n)}(\phi_{n}) \leq C' e^{- Kn \epsilon_{n}^{2}j^{2} },\quad \sup_{\theta \in \Theta: j \epsilon_{n} <\|\theta - \theta_0\|_{2}\leq 2j \epsilon_{n}}\mathbb{E}_{\theta}^{(n)}(1-\phi_{n}) \leq C'e^{-Kn\epsilon_{n}^{2}j^{2}}.
\end{align}
\end{theorem}
Theorem \ref{thm:opt:test} shows that under certain regularity conditions, there exists a sequence of test functions $\phi_{n}$ satisfying the testing condition in \eqref{eq:opt:test} for HMMs. The existence of such tests plays a crucial role in establishing the convergence of posterior distributions \citep{ghosal2000convergence, GhoVan07,ghosal2017fundamentals}. We can directly apply the testing condition in \eqref{eq:thm:test} to establish the contraction rate and asymptotic normality of the posterior distribution.
\begin{corollary}[Posterior Convergence]
\label{thm:conver}
Let $\epsilon_n >0$ with $\epsilon_{n} \rightarrow 0$ such that $(n\epsilon_{n}^{2})^{-1} = O(1)$. For $k >1$, define
\begin{align} \label{eq:setB}
K(p_{\theta_0}^{(n)},p_{\theta}^{(n)}) &= \int p_{\theta_0}^{(n)} \log(p_{\theta_0}^{(n)}/p_{\theta}^{(n)}) d\mu^{n}, \nonumber\\
V_{k,0}(p_{\theta_0}^{(n)},p_{\theta}^{(n)}) &= \int p_{\theta_0}^{(n)}| \log(p_{\theta_0}^{(n)}/p_{\theta}^{(n)}) - K(p_{\theta_0}^{(n)},p_{\theta}^{(n)}) |^{k}d\mu^{n}, \nonumber\\
B_{n}(\theta_0,\epsilon;k) &= \{\theta \in \Theta: K(p_{\theta_0}^{(n)},p_{\theta}^{(n)}) \leq n\epsilon^{2}, V_{k,0}(p_{\theta_0}^{(n)},p_{\theta}^{(n)}) \leq n^{k/2}\epsilon^{k} \}.
\end{align}
Assume that for some $k>1$, the prior distribution $\Pi_{n}$ of $\theta$ satisfies
\begin{align}
\label{cor:2.5}
\frac{\Pi_{n}(\theta \in \Theta: j\epsilon_{n}< \|\theta - \theta_0\|_{2}\leq 2j \epsilon_{n})}{\Pi_{n}(B_{n}(\theta_0,\epsilon_{n};k))} \leq e^{Kn\epsilon_{n}^{2}j^{2}/2}.
\end{align}
Then under the condition of Theorem \ref{thm:opt:test}, for every $M_{n} \rightarrow \infty$, we have that
\begin{align}
\label{thm:conver:1}
\mathbb{E}_{\theta_0}^{(n)}\Pi_{n}(\theta \in \Theta: \|\theta - \theta_0\|_{2} \geq M_{n}\epsilon_{n} \mid Y_{1}^{n}) \rightarrow 0.
\end{align}
\end{corollary}
\begin{corollary}[Bernstein-von Mises theorem]
\label{thm:bvm}
Given observations $Y_{1}^{n}$, denote the log likelihood by $l_{n}(\theta)$ and its derivative with respect to $\theta$ by $l_{n}'(\theta)$. Assume that the HMM model $\{\mathbb{P}_{\theta}^{(n)}:\theta \in \Theta\}$ satisfies the local asymptotic normality (LAN) condition:
\begin{align}
\label{eq:lan}
l_{n}\left(\theta_0 + \frac{h}{\sqrt{n}} \right) - l_{n}(\theta_0) = \frac{1}{\sqrt{n}} h^{\top} l'_{n}(\theta_0) - \frac{1}{2} h^{\top}I_{\theta_0}h + o_{\PP_{\theta_0}^{(n)}}(1),
\end{align}
where $h \in \{h\in \mathbb{R}^{d}: \theta_{0} + \frac{h}{\sqrt{n}} \in \Theta\}$ and $\frac{1}{\sqrt{n}}l_{n}'(\theta_0)$ converges weakly to $N_{d}(0,I_{\theta_0})$ under $\PP_{\theta_0}^{(n)}$.
Assume the prior distribution has continuous density $\pi(\theta) > 0 $ on $\Theta$ and the Fisher information matrix $I_{\theta_0}$ at $\theta_0$ is nonsingular. Then under the condition of Theorem \ref{thm:opt:test},
\begin{align}
\label{thm:bvm:1}
\mathbb{E}_{\theta_0}^{(n)}\|\Pi_{n}\left\{ \sqrt{n}(\theta - \theta_0) \mid Y_{1}^{n}\right\} - N_{d}(\Delta_{n,0},I_{\theta_0}^{-1})\|_{TV} \rightarrow 0,
\end{align}
where $\Delta_{n,0} = \frac{1}{\sqrt{n}} I_{\theta_0}^{-1} l'_{n}(\theta_0).$
\end{corollary}
\citet{bickel1998asymptotic} have developed the first set of results showing the consistency and asymptotic normality of maximum likelihood estimators of the parameters in an HMM.
Corollary \ref{thm:bvm} provides similar theoretical guarantees for Bayesian estimation in HMMs. Specifically, it states that the posterior distribution of $\theta$ given the observations $Y_{1}^{n}$ in an HMM is asymptotically normal with random mean vector $\theta_0 + \frac{1}{\sqrt{n}}\Delta_{n,0}$ and covariance matrix $(nI_{\theta_0})^{-1}$. Compared to the similar results in \cite{de2008asymptotic}, which are based on a Taylor expansion of the log likelihood function, Corollary \ref{thm:bvm} provides an alternative approach to proving the Bernstein-von Mises theorem using the LAN condition \eqref{eq:lan} and the testing condition \eqref{eq:thm:test}. The LAN condition is a general regularity assumption on the class of parametric models $\{\PP_{\theta}^{(n)}:\theta \in \Theta\}$, which is satisfied under the theoretical setup in \cite{bickel1998asymptotic,de2008asymptotic}. In our proof, we construct the sequence of test functions based on a likelihood ratio test and prove the testing condition using the optimal transportation inequality for HMMs.
\section{Proofs}
\label{sec:proof}
In this section, for simplicity, we denote $\mathbb{P}_{\theta}^{(n)}$ and $\mathbb{E}_{\theta}^{(n)}$ as $\mathbb{P}_{\theta}$ and $\mathbb{E}_{\theta}$, respectively.
\subsection*{Proof of Theorem \ref{thm:hmm:t1}}
In the case $\mathcal{Y}$ is countable, the proof is based on \citet[Theorem 1.3]{kontorovich2008concentration}.
\begin{theorem}\cite[Theorem 1.3]{kontorovich2008concentration}
\label{thm:kc1.1}
Suppose $\mathcal{X}$ is a countable space, $\mathcal{F}$ is the collection of all subsets of $\mathcal{X}^{n}$, $\mathbb{P}$ is a probability measure on $(\mathcal{X}^{n},\mathcal{F})$, and $\mathbb{E}$ is the expectation under $\mathbb{P}$. Suppose $\phi: \mathcal{X}^{n} \mapsto \mathbb{R}$ is a $c$-Lipschitz function (with respect to the Hamming metric) for some $c>0$. Then, for any $r >0$,
$$\mathbb{P}\{|\phi - \mathbb{E}\phi| > r\} \leq 2 \exp\left(-\frac{r^{2}}{2nc^{2}\|\Delta_{n}\|_{\infty}^{2}}\right).$$
\end{theorem}
Under $\PP_\theta$, $(X_{t})_{t\geq 1}$ is a Markov chain with transition matrix $Q_{\theta}$, and the mixing coefficient $\|\Delta_{n}\|_{\infty}$ is bounded by the Markov contraction coefficients \citep[Lemma 7.1]{kontorovich2008concentration}:
\begin{align}
\label{Martingale}
\|\Delta_n\|_{\infty} \leq \frac{1}{2} \sum_{t=1}^{\infty}\sup_{a,a'} \sum_{b \in \mathcal{X}}|Q_{\theta}^{t}(a,b)-Q_{\theta}^{t}(a',b)|+1 = D_{\theta}+1.
\end{align}
The transportation inequality of an HMM with countable $\mathcal{Y}$ is given as an example in \cite[Section 7.2]{kontorovich2008concentration}, and $C_{H}^{\theta} = (D_{\theta}+1)^{2}$.
In the case $\mathcal{Y}$ is uncountable, Theorem \ref{thm:ti} implies that proving the second part of Theorem \ref{thm:hmm:t1} is equivalent to showing that for any Lipschitz function $F: \mathcal{Y}^{n} \mapsto \mathbb{R}$ with $\|F\|_{\text{Lip}} \leq \alpha$ (with respect to $d_{l_1}$),
$$\mathbb{E}_{\theta} e^{\lambda (F - \mathbb{E}_{\theta}F)} \leq \exp \left\{ \frac{(\lambda\alpha)^{2}}{2} n(C_{\mathcal{Y}} + L^{2}(D_{\theta} + 1)^{2}) \right\},\quad \forall \lambda \in \mathbb{R}.$$
Let $G_{n}(X_{1}^{n}) = \mathbb{E}_{\theta}[F \mid X_{1}^{n}]$ for $n \geq 1$. Then, for $x_{1}^{n} \neq \tilde{x}_{1}^{n}$,
\begin{align*}
|G_{n}(x_{1}^{n}) - G_{n}(\tilde{x}_{1}^{n})| & = |\int F(y_{1}^{n}) d\mathbb{P}_\theta(Y_{1}^{n} = y_{1}^{n}\mid X_{1}^{n} = x_{1}^{n}) - \int F(y_{1}^{n}) d\mathbb{P}_{\theta}(Y_{1}^{n} = y_{1}^{n}\mid X_{1}^{n} = \tilde x_{1}^{n}) |
\\&
=|\int F(y_{1}^{n}) \left\{ d\mathbb{P}_\theta(Y_{1}^{n} = y_{1}^{n}\mid X_{1}^{n} = x_{1}^{n})- d\mathbb{P}_{\theta}(Y_{1}^{n} = y_{1}^{n}\mid X_{1}^{n} = \tilde x_{1}^{n})\right\} |
\\&
\overset{(i)}{\leq}
\sup_{\|\Phi\|_{\text{Lip}}\leq \alpha} \int \Phi(y_{1}^{n}) \left\{ d\mathbb{P}_\theta(Y_{1}^{n} = y_{1}^{n}\mid X_{1}^{n} = x_{1}^{n})- d\mathbb{P}_{\theta}(Y_{1}^{n} = y_{1}^{n}\mid X_{1}^{n} = \tilde x_{1}^{n})\right\}
\\
& \overset{(ii)}{=} \alpha W_{1}^{d_{l_1}} \{ \mathbb{P}_{\theta}(Y_{1}^{n} \in \cdot \mid X_{1}^{n} = x_{1}^{n}) , \mathbb{P}_{\theta}(Y_{1}^{n} \in \cdot \mid X_{1}^{n} = \tilde x_{1}^{n})\} \\
&\overset{(iii)}{=} \alpha \inf _{\Pi} \int \sum_{t=1}^{n} d_{\mathcal{Y}}(y_{t},\tilde{y}_{t}) d \Pi(y_{1}^{n},\tilde y_{1}^{n})
\\
& = \alpha\sum_{t=1}^{n} 1_{x_{t} \neq \tilde{x}_{t}} W_{1}^{d_{\mathcal{Y}}} \{ \mathbb{P}_\theta(Y_{t} \in \cdot | X_{t} = x_t) ,\mathbb{P}_\theta(Y_{t} \in \cdot | X_{t} = \tilde{x}_t) \} \\
&\overset{(iv)}{ \leq} \alpha L \sum_{t=1}^{n}1_{x_{t} \neq \tilde{x}_{t}} = \alpha Ld_{H}(x_{1}^{n}, \tilde x_{1}^{n}),
\end{align*}
where $\Phi$ on the RHS of $(i)$ is an $\alpha$-Lipschitz function on $(\mathcal{Y}^{n},d_{l_{1}})$, $(ii)$ follows from the duality form of $W_{1}$ distance, $(iii)$ follows from the definition of $W_{1}^{d_{l_1}}$ where $\Pi$ is the joint distribution with marginal distributions $\mathbb{P}_{\theta}(Y_{1}^{n} \in \cdot |x_{1}^{n})$ and $\mathbb{P}_{\theta}(Y_{1}^{n} \in \cdot |\tilde{x}_{1}^{n})$, and $(iv)$ follows from assumption that $W_{1}^{d_{\mathcal{Y}}}\{ \mathbb{P}_{\theta}(Y_{1} \in \cdot \mid X_{1} = a) ,\mathbb{P}_{\theta}(Y_{1} \in \cdot \mid X_{1} = b) \} \leq L 1_{a\neq b}$;
therefore, $G_{n}$ is a Lipschitz function on $\mathcal{X}^{n}$ with constant $\alpha L$ with respect to the Hamming metric $d_{H}$. Using (\ref{Martingale}), we have that $\mathbb{P}_{\theta}(X_{1}^{n} \in \cdot) \in T_{1}(n(D_{\theta}+1)^{2})$ on $(\mathcal{X}^{n},d_{H})$. By Theorem \ref{thm:ti}, for any $\lambda \in \mathbb{R}$,
$$\mathbb{E}_\theta e^{\lambda(G_{n} - \mathbb{E}_\theta G_{n})} \leq \exp \left(\frac{(\lambda\alpha L)^{2}}{2} n(D_{\theta} + 1)^{2} \right).$$
Using independent tensorization, we have $\mathbb{P}_{\theta}(Y_{1}^{n} \in \cdot |X_{1}^{n}) \in T_{1}(nC_{\mathcal{Y}} )$ on $(\mathcal{Y}^{n},d_{l_1})$, which implies that
$$\mathbb{E}_{\theta} \{ e^{\lambda (F - \mathbb{E}_{\theta}[F \mid X_{1}^{n}])}\mid X_{1}^{n}\} = \mathbb{E}_{\theta} \{ e^{\lambda (F - G_{n})}\mid X_{1}^{n}\}\leq \exp\left(\frac{(\lambda\alpha)^{2}}{2}nC_{\mathcal{Y}} \right). $$
Therefore,
\begin{align*}
\mathbb{E}_{\theta} e^{\lambda (F - \mathbb{E}_{\theta}F)} & = \mathbb{E}_{\theta}\left[ \mathbb{E}_{\theta} \{ e^{\lambda (F - G_{n})}\mid X_{1}^{n}\}\, e^{\lambda (G_{n} - \mathbb{E}_{\theta}G_{n})} \right] \\
& \leq \exp\left\{\frac{(\lambda\alpha)^{2}}{2}nC_{\mathcal{Y}}\right\} \mathbb{E}_{\theta} e^{\lambda (G_{n} - \mathbb{E}_{\theta}G_{n})} \\
& \leq \exp \left \{\frac{(\lambda\alpha)^{2}}{2} n(C_{\mathcal{Y}} + L^{2}(D_{\theta}+1)^{2}) \right \},
\end{align*}
where the first equality uses $\mathbb{E}_{\theta}F = \mathbb{E}_{\theta}G_{n}$.
\hfill $\square$
\subsection*{Proof of Theorem \ref{thm:opt:test}}
\noindent We start by proving the following lemma that is based on \citet[Theorem 3.3]{hu2011transportation}.
\begin{lemma}
\label{lemma:simply}
If Assumptions \ref{a1}-\ref{a3} hold, then for $\theta_1 \neq \theta_2 \in \Theta$, $$F_{n}(\theta_1,\theta_2) = l_{n}(\theta_1,Y_{1}^{n}) - l_{n}(\theta_2,Y_{1}^{n}) $$ is a Lipschitz function on $(\mathcal{Z}^{n},d_{l_{1}})$ with $\|F_{n}(\theta_1,\theta_2)\|_{\text{Lip}} \leq L$, and for any $\theta \in \Theta$,
$$\mathbb{E}_{\theta} e^{\lambda \left\{F_{n}(\theta_1,\theta_2) - \mathbb{E}_{\theta}F_{n}(\theta_1,\theta_2) \right\}}
\leq
\exp\left\{\frac{\lambda^{2}}{2}L^{2}nC_{H}(1 + \delta)^{4}\right\},$$
where $\delta = \max(\delta_1,\delta_2)$ in Assumption \ref{a2}.
\end{lemma}
\subsection*{Proof of Lemma \ref{lemma:simply}}
First we show that for any $\theta \in \Theta$, there exists a constant $C_{E}>0$ such that the probability measure induced by the extended HMM sequence $(Y_{t},p_{t}^{\theta})_{t=1}^{n}$ satisfies the $T_{1}(nC_{E})$-inequality on the metric space $(\mathcal{Z}^{n} = (\mathcal{Y}\times \mathcal{E})^{n}, d_{l_{1}})$, where the $l_1$-metric on $\mathcal{Z}^{n}$ is defined as
$d_{l_{1}}(z_{1}^{n},\tilde{z}_{1}^{n}) = \sum _{k=1}^{n}d_{\mathcal{Z}}(z_{k},\tilde{z}_{k})$ and
$$d_{\mathcal{Z}}(z_{k},\tilde{z}_{k}) = d_{\mathcal{Y}}(y_{k}, \tilde y_{k}) + \|p_{k}^{\theta} - \tilde{p}_{k}^{\theta}\|_{\text{TV}}, \quad z_{k} = (y_{k},p_{k}^{\theta}),\;\tilde z_{k} = (\tilde y_{k},\tilde p_{k}^{\theta}) \in \mathcal{Z}.$$
That is, given $p_{1}^{\theta} = r_{\theta} \in \mathcal{E}$ and for any $\theta \in \Theta$, we want to show that the probability measure $\mathbb{P}_{\theta}((Y_{i},p_{i}^{\theta})_{i=1}^{n} \in \cdot)\in T_{1}(nC_{E})$ on $(\mathcal{Z}^{n},d_{l_{1}})$, where $C_{E}$ is given as
$$C_{E} = C_{H}(1 + \delta )^{4},\quad \delta = \max(\delta_1,\delta_2).$$
Let $F$ be a Lipschitz function on $(\mathcal{Z}^{n},d_{l_{1}})$ with $\|F\|_{\text{Lip}} \leq 1$. By the Lipschitz property and definition of $d_{l_{1}}$, for $1\leq k \leq n$ we have that,
\begin{align}
\label{eq:lipF}
& \|\partial_{y_{k}}F\|_{\infty} := \sup_{z_{1}^{k-1},y_{k}\neq \tilde{y}_{k},p_{k},z_{k+1}^{n}} \frac{|F(z_{1}^{k-1},(y_{k},p_{k}),z_{k+1}^{n}) - F(z_{1}^{k-1},(\tilde{y}_{k},p_{k}),z_{k+1}^{n})|}{d_{\mathcal{Y}}(y_{k},\tilde{y}_{k})} \leq 1,
\nonumber
\\
& \|\partial_{p_{k}}F\|_{\infty} := \sup_{z_{1}^{k-1},p_{k}\neq \tilde{p}_{k},y_{k},z_{k+1}^{n}} \frac{|F(z_{1}^{k-1},(y_{k},p_{k}),z_{k+1}^{n}) - F(z_{1}^{k-1},(y_{k},\tilde{p}_{k}),z_{k+1}^{n})|}{\|p_{k} - \tilde{p}_{k}\|_{\text{TV}}} \leq 1.
\end{align}
For $k = 2,\ldots,n$, \eqref{eq:funcf} implies that the prediction filters $p_{k}^{\theta}$ only depend on $y_{1}^{k-1}$, so $F(z_{1}^{n})$ can be written as a function $G$ on $\mathcal{Y}^{n}$.
Next, we derive a bound for $\|\partial_{y_{k}}G\|_{\infty}$. Suppose $y_{1}^{n}$ and $\tilde{y}_{1}^{n}$ differ only at the $k$th coordinate, $y_{k} \neq \tilde{y}_{k}$. Then,
$$G(y_{1}^{n}) - G(\tilde{y}_{1}^{n}) =
F(y_{1}^{n},p_{1}^{n}) - F(\tilde{y}_{1}^{n},\tilde{p}_{1}^{n}).$$
For $j \leq k$, $p_{j} = \tilde{p}_{j}$ because $y_{1}^{j-1} = \tilde y_{1}^{j-1}$. When $j = k+1$, Assumption \ref{a2} implies that
\begin{align}
\label{eq:k+1}
\| p_{k+1} - \tilde{p}_{k+1}\|_{\text{TV}} &= \| f^{\theta}(y_{k},p_{k}) - f^{\theta}(\tilde{y}_{k},p_{k})\|_{\text{TV}} \nonumber\\
&\leq \delta_1 d_{\mathcal{Y}}(y_{k},\tilde{y}_{k});
\end{align}
Finally, for $j \geq k+2$,
\begin{align}
\label{eq:k+2}
\|p_{j} - \tilde{p}_{j}\|_{\text{TV}} &= \|f^{\theta}_{j-k-2}(y_{k+1}^{j-1},p_{k+1}) - f^{\theta}_{j-k-2}(y_{k+1}^{j-1},\tilde{p}_{k+1})\|_{\text{TV}} \nonumber\\
&\leq \|\partial_{p}f^{\theta}_{j-k-2}\|_{\infty} \cdot \|p_{k+1} - \tilde{p}_{k+1}\|_{\text{TV}};
\end{align}
therefore, if $y_{1}^{n} , \tilde{y}_{1}^{n}$ differ only at the $k$th coordinates, then
\begin{align}
|G(y_{1}^{n}) - G(\tilde{y}_{1}^{n})| &= | F(y_{1}^{n},p_{1}^{n}) - F(\tilde{y}_{1}^{n},\tilde{p}_{1}^{n})| \nonumber\\
&\leq \| \partial_{y_k} F \|_{\infty}d_{\mathcal{Y}}(y_{k},\tilde{y}_{k}) + \sum_{j=k+1}^{n} \|\partial_{p_j}F\|_{\infty}\|p_{j} - \tilde{p}_{j}\|_{\text{TV}}\nonumber\\
&\overset{(i)}{\leq} d_{\mathcal{Y}}(y_{k},\tilde{y}_{k}) + \left(\|\partial_{p_{k+1}}F\|_{\infty} + \sum_{j=k+2}^{n}\|\partial_{p}f^{\theta}_{j-k-2}\|_{\infty}\right)\|p_{k+1} - \tilde{p}_{k+1}\|_{\text{TV}}\nonumber \\
&\overset{(ii)}{\leq} d_{\mathcal{Y}}(y_{k},\tilde{y}_{k})\left(1 + \delta_1 + \delta_1 \delta_2\right) < d_{\mathcal{Y}}(y_{k},\tilde{y}_{k})\left(1 + \delta \right)^{2},
\end{align}
where $(i)$ follows from \eqref{eq:lipF} and $(ii)$ follows from \eqref{eq:k+1}, \eqref{eq:k+2}, and Assumption \ref{a2}. Therefore, $G$ is a Lipschitz function on $(\mathcal{Y}^{n},d_{l_1})$ with $\|G\|_{\text{Lip}} \leq (1 + \delta)^{2}$.
Using Assumption \ref{a1}, $\mathbb{P}_{\theta}(Y_{1}^{n} \in \cdot) \in T_{1}( nC_{H})$; Theorem \ref{thm:ti} implies that for any $\lambda \in \mathbb{R}$,
\begin{align}
\label{eq:lG}
\mathbb{E}_{\theta} e^{\lambda \left\{F(Y_{1}^{n},p_{1}^{n}) - \mathbb{E}_{\theta}F(Y_{1}^{n},p_{1}^{n})\right\}} &= \mathbb{E}_{\theta} e^{\lambda\{G(Y_{1}^{n}) - \mathbb{E}_{\theta}G(Y_{1}^{n})\} }\nonumber \\
& \leq \exp\left\{\frac{\lambda^{2}}{2}nC_{H}\left(1 + \delta \right)^{4}\right\}.
\end{align}
Hence, $\mathbb{P}_{\theta}((Y_{i},p_{i}^{\theta})_{i=1}^{n} \in \cdot)\in T_{1}(nC_{E})$ with $C_{E} = C_{H}(1 + \delta )^{4}.$
Concluding the proof of the lemma, we show that $F_{n}(\theta_1,\theta_2)$ is a Lipschitz function on $(\mathcal{Z}^{n},d_{l_1})$. Expressing the log likelihood additively as in \eqref{eq:lik:a}, we have
\begin{align}
F_{n}(\theta_1,\theta_2) & = l_{n}(\theta_1,Y_{1}^{n}) - l_{n}(\theta_2,Y_{1}^{n})
\nonumber
\\&
= \sum_{t=1}^{n}\left( \log \sum_{x_{t}}p_{t}^{\theta_1}(x_{t})g_{\theta_1}(Y_{t}|x_{t}) -
\log \sum_{x_{t}}p_{t}^{\theta_2}(x_{t})g_{\theta_2}(Y_{t}|x_{t}) \right).
\end{align}
By Assumption \ref{a3}, for $i = 1,2$, the function $\log \sum_{x_{t}}p_{t}^{\theta_i}(x_{t})g_{\theta_i}(Y_{t}|x_{t})$ is Lipschitz with norm $L/2$ on the metric space $(\mathcal{Z} = \mathcal{Y} \times \mathcal{E},d_{\mathcal{Z}} = d_{\mathcal{Y}} + \|\cdot\|_{\text{TV}})$. So $F_{n}(\theta_1,\theta_2)$ is a Lipschitz function on $(\mathcal{Z}^{n},d_{l_1})$ with $\|F_{n}\|_{\text{Lip}} \leq L$. Using Theorem \ref{thm:ti}, we have that
$$\mathbb{E}^{n}_{\theta} e^{\lambda \left\{F_{n}(\theta_1,\theta_2) - \mathbb{E}_{\theta}^{n}F_{n}(\theta_1,\theta_2) \right\}}
\leq
\exp\left\{\frac{\lambda^{2}}{2}L^{2}nC_{H}(1 + \delta )^{4}\right\}.$$
\hfill $\square$
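In the sequel, sub-Gaussian moment bounds of this type are converted into tail bounds by the standard Chernoff argument; for completeness, writing $b = L^{2}C_{H}(1 + \delta)^{4}$ and letting $X$ denote the centered statistic, if $\mathbb{E}_{\theta} e^{\lambda X} \leq e^{\lambda^{2}nb/2}$ for all $\lambda \in \mathbb{R}$, then for any $t>0$,

```latex
\begin{align*}
\mathbb{P}_{\theta}(X \leq -nt)
\leq \inf_{\lambda > 0} e^{-\lambda n t}\,\mathbb{E}_{\theta} e^{-\lambda X}
\leq \inf_{\lambda > 0} \exp\left(-\lambda n t + \frac{\lambda^{2}}{2}nb\right)
= \exp\left(-\frac{n t^{2}}{2b}\right),
\end{align*}
```

with the infimum attained at $\lambda = t/b$; the same bound holds for the upper tail upon replacing $X$ with $-X$.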
As a special case of Lemma \ref{lemma:simply}, we have
\begin{align}
\label{lem:spec}
\mathbb{E}^{n}_{\theta_1} e^{\lambda \left\{F_{n}(\theta_1,\theta_2) - nH_{n}(\theta_1|\theta_2) \right\}}
\leq
\exp\left\{\frac{\lambda^{2}}{2}L^{2}nC_{H}(1 + \delta )^{4}\right\}.
\end{align}
Now we prove Theorem \ref{thm:opt:test} in three steps. First, we show the existence of a test for $\PP_{\theta_0}$ versus $\PP_{\theta_1}$ with exponentially decaying error probabilities. Second, we show the existence of a test for $\PP_{\theta_0}$ versus the local alternative $\{\PP_{\theta}:\theta \in \Theta, \|\theta - \theta_1\|_{2} \leq \xi \epsilon\}$ with exponentially decaying error probabilities for any $\epsilon >0$, some $\xi <1$, and any $\theta_1 \in \Theta$ satisfying $\|\theta_1 - \theta_0\|_{2} > \epsilon$. Finally, the proof is completed by showing the existence of a test for $\PP_{\theta_0}$ versus the complement $\{\PP_{\theta}:\theta \in \Theta,\|\theta - \theta_0\|_{2} \geq M \epsilon\}$ with exponentially decaying error probabilities for any $\epsilon >0$ and sufficiently large $M>0$.
\vspace{3mm}
\noindent
\emph{Step 1: Show the existence of a test with exponentially decaying error probabilities for simple hypotheses.}
For $\theta_1 \neq \theta_0 \in \Theta$, we consider the simple hypotheses
\begin{align}
\label{hy:sim}
H_{0}: \theta = \theta_0 \quad \text{vs} \quad H_{1}: \theta = \theta_1.
\end{align}
Based on Lemma \ref{lemma:simply}, we show the existence of a test with Type \Romannum{1} and \Romannum{2} error probabilities decaying exponentially to $0$ as $n\rightarrow \infty$. For $\theta \neq \theta' \in \Theta$, define $$H_{n}(\theta|\theta') = \frac{1}{n}\mathbb{E}_{\theta}\{l_{n}(\theta,Y_{1}^{n}) - l_{n}(\theta',Y_{1}^{n})\}.$$
Using Jensen's inequality, we have $H_{n}(\theta|\theta') \geq 0$. By Assumption \ref{a6}, $H_{n}(\theta | \theta') \rightarrow J(\theta | \theta') >0$ as $n \rightarrow \infty$; so there exists an integer $N = N(\theta,\theta')$ depending on $(\theta, \theta')$ such that $H_{n}(\theta | \theta') \geq \frac{1}{2}J(\theta| \theta')$ for any $n > N(\theta,\theta')$. Let the critical value $r \in (0, \frac{1}{4}J(\theta_0|\theta_1)).$ We construct the likelihood ratio test for the hypotheses in \eqref{hy:sim} as
\begin{align}
\label{hy:sim:test}
\phi_{\theta_1} = 1\{l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_1,Y_{1}^{n}) \leq nr\} \equiv 1\{F_{n}(\theta_0,\theta_1) \leq nr\},
\end{align}
and reject $H_0$ if $\phi_{\theta_1} = 1$.
For any $n > N(\theta_0,\theta_1)$, we have $ H_{n}(\theta_0 | \theta_1) - r >\frac{1}{4}J(\theta_0 | \theta_1)$ and the probability of Type \Romannum{1} error of the test in \eqref{hy:sim:test} is
\begin{align}
\label{eq:sim:e1}
\mathbb{P}_{\theta_0}(\text{reject} \;H_0) &= \mathbb{P}_{\theta_0}(\phi_{\theta_1}=1)= \PP_{\theta_0}[l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_1,Y_{1}^{n}) \leq n r ]
\nonumber
\\&
= \PP_{\theta_0}[l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_1,Y_{1}^{n}) - nH_{n}(\theta_0|\theta_1) \leq n\{r - H_{n}(\theta_0|\theta_1)\}]
\nonumber
\\&
= \PP_{\theta_0}[F_{n}(\theta_0,\theta_1) - \mathbb{E}_{\theta_0}F_{n}(\theta_0,\theta_1) \leq n\{r - H_{n}(\theta_0|\theta_1)\}]
\nonumber
\\&
\overset{(i)}{\leq} \exp\left(-\frac{n(r - H_{n}(\theta_0|\theta_1))^{2}}{\tilde{C}}\right)
\overset{(ii)}\leq \exp\left(-\frac{nJ(\theta_0 |\theta_1)^{2}}{16 \tilde{C}}\right),
\end{align}
where $(i)$ follows from \eqref{lem:spec} and the Chernoff argument with $\tilde{C} = 2L^{2}C_{H}(1 + \delta )^{4}$, and $(ii)$ follows from $ H_{n}(\theta_0 | \theta_1) - r >\frac{1}{4}J(\theta_0 | \theta_1) > 0$.
The exponentially decaying bound on the probability of Type \Romannum{2} error follows similarly by switching the roles of $\theta_0$ and $\theta_1$. There exists $N(\theta_1,\theta_0) \in \mathbb{N}$ such that for any $n > N(\theta_1,\theta_0)$, $H_{n}(\theta_1|\theta_0) \geq \frac{1}{2}J(\theta_1|\theta_0)$, and the probability of Type \Romannum{2} error of $\phi_{\theta_1}$ in \eqref{hy:sim:test} is
\begin{align}
\label{eq:sim:e2}
\mathbb{P}_{\theta_1}(\text{fail to reject}\; H_0) &= \mathbb{P}_{\theta_1}(\phi_{\theta_1}=0)=\mathbb{P}_{\theta_1}[l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_1,Y_{1}^{n}) > nr ]
\nonumber
\\&
= \mathbb{P}_{\theta_1}[l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_1,Y_{1}^{n}) + nH_{n}(\theta_1|\theta_0) > n\{H_{n}(\theta_1|\theta_0) + r \} ]
\nonumber
\\&
= \mathbb{P}_{\theta_1}[F_{n}(\theta_1,\theta_0) - \mathbb{E}_{\theta_1} F_{n}(\theta_1,\theta_0)< -n\{ H_{n}(\theta_1|\theta_0) + r \} ]
\nonumber
\\&
\leq \exp\left(-\frac{n(H_{n}(\theta_1|\theta_0) + r )^{2}}{\tilde{C}}\right)
\leq \exp\left(-\frac{nJ(\theta_1|\theta_0)^{2}}{4\tilde{C}}\right) .
\end{align}
By choosing $n > \max\{ N(\theta_0,\theta_1), N(\theta_1,\theta_0)\},$ we can see that the probabilities of Type \Romannum{1} and Type \Romannum{2} error of the test \eqref{hy:sim:test} decay exponentially in $n$.
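To see the two exponential rates in \eqref{eq:sim:e1}--\eqref{eq:sim:e2} at work, one can run a toy Monte Carlo. The sketch below is purely illustrative: it uses an i.i.d. Gaussian location model rather than the hidden Markov setting of this paper, with hypothetical values $\theta_0 = 0$, $\theta_1 = 1$, and $r = 0.1 \in (0, \frac{1}{4}J(\theta_0|\theta_1))$, where $J(\theta_0|\theta_1) = \frac{1}{2}$ is the Kullback--Leibler divergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def lrt_errors(n, theta0=0.0, theta1=1.0, r=0.1, reps=20000):
    """Monte Carlo Type I/II error rates of the likelihood-ratio test
    1{ l_n(theta0) - l_n(theta1) <= n r } for i.i.d. N(theta, 1) data.
    For theta0 = 0, theta1 = 1 the statistic simplifies to
    l_n(theta0) - l_n(theta1) = n/2 - sum(y)."""
    y0 = rng.normal(theta0, 1.0, size=(reps, n))  # data generated under H0
    y1 = rng.normal(theta1, 1.0, size=(reps, n))  # data generated under H1
    stat0 = n / 2 - y0.sum(axis=1)
    stat1 = n / 2 - y1.sum(axis=1)
    type1 = np.mean(stat0 <= n * r)  # reject H0 although it is true
    type2 = np.mean(stat1 > n * r)   # fail to reject H0 although H1 is true
    return type1, type2

for n in (10, 20, 40):
    t1, t2 = lrt_errors(n)
    print(n, round(t1, 4), round(t2, 4))
```

Both estimated error probabilities shrink roughly geometrically as $n$ doubles, in line with the $e^{-cn}$ bounds of Step 1.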
\vspace{3mm}
\noindent
\emph{Step 2: For any $\epsilon >0$, some $\xi <1$, and any $\theta_1 \in \Theta$ satisfying $\|\theta_1 - \theta_0\|_{2} > \epsilon$, show the existence of a test for $\PP_{\theta_0}$ versus the local alternative $\{\PP_{\theta}: \theta \in \Theta, \|\theta - \theta_1\|_{2} \leq \xi \epsilon\}$ with exponentially decaying error probabilities.}
For any $\epsilon >0$, let $\theta' \in \Theta$ satisfy $\epsilon<\|\theta' - \theta_0\|_{2} < 2\epsilon$, let $U \subset \{\theta \in \Theta: \epsilon<\|\theta - \theta_0\|_{2} < 2\epsilon\}$ be an open neighborhood of $\theta'$ with diameter $\frac{\epsilon}{2}$, and let $\theta_U \in U$ be its center, so that $\|\theta - \theta_U\|_{2}<\frac{\epsilon}{4}$ for any $\theta \in U$. We consider the following hypotheses:
\begin{align}
\label{hy:ball}
H_{0}: \theta = \theta_0 \quad \text{vs} \quad H_{1}: \theta \in U.
\end{align}
We want to show the existence of a test $\phi_{U}^{\epsilon}$ with exponentially decaying Type \Romannum{1} and \Romannum{2} error probabilities; that is,
$$ \mathbb{E}_{\theta_0}(\phi^{\epsilon}_{U}) \lesssim e^{-n\tilde{c}\epsilon^{2}},\quad\sup_{\theta \in U} \, \mathbb{E}_{\theta}(1-\phi_{U}^{\epsilon}) \lesssim e^{-n\tilde{c}\epsilon^{2}},$$
where $\lesssim$ denotes inequality up to a fixed constant. Let $r >0$ be the critical value to be chosen later. Then, define
\begin{align}
\label{hy:ball:test}\phi_{U}^{\epsilon} =1\{l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_U,Y_{1}^{n}) \leq nr\},
\end{align}
and reject $H_0$ in \eqref{hy:ball} if $\phi_{U}^{\epsilon} =1$.
The corresponding probability of Type \Romannum{1} error of $\phi_{U}^{\epsilon} $ is
\begin{align}
\label{er:b:1}
\mathbb{P}_{\theta_0}(\text{reject $H_0$}) &= \mathbb{E}_{\theta_0}(\phi_{U}^{\epsilon})
\nonumber
\\&
= \mathbb{P}_{\theta_0}[l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_U,Y_{1}^{n}) \leq nr]
\nonumber
\\&
\overset{(i)}{\leq}\exp\left(-\frac{n(r - H_{n}(\theta_0|\theta_U))^{2}}{\tilde{C}}\right),
\end{align}
where $(i)$ follows from Step 1 provided $r - H_{n}(\theta_0|\theta_U) <0$.
The corresponding probability of Type \Romannum{2} error of $\phi_{U}^{\epsilon} $ is
\begin{align}
\label{er:b:2}
\sup_{\theta \in U} \, & \mathbb{P}_{\theta}(\text{fail to reject $H_0$})
=\sup_{\theta \in U} \mathbb{E}_{\theta}(1 - \phi_{U}^{\epsilon})
\nonumber
\\
&=
\sup_{\theta \in U} \mathbb{P}_{\theta}[l_{n}(\theta_0,Y_{1}^{n}) - l_{n}(\theta_U,Y_{1}^{n}) > nr ]
\nonumber
\\&
=
\sup_{\theta \in U} \mathbb{P}_{\theta}[l_{n}(\theta_0,Y_{1}^{n}) -l_{n}(\theta,Y_{1}^{n}) + l_{n}(\theta,Y_{1}^{n}) - l_{n}(\theta_U,Y_{1}^{n}) > nr]
\nonumber
\\&
=
\sup_{\theta \in U} \mathbb{P}_{\theta}[l_{n}(\theta_0,Y_{1}^{n}) -l_{n}(\theta,Y_{1}^{n}) + nH_{n}(\theta | \theta_0)
\nonumber
\\&
\quad
+ l_{n}(\theta,Y_{1}^{n}) - l_{n}(\theta_U,Y_{1}^{n}) - nH_{n}(\theta|\theta_U) > n(r + H_{n}(\theta | \theta_0) - H_{n}(\theta|\theta_U)) ]
\nonumber
\\&
\leq \sup_{\theta \in U}\{ \mathbb{P}_{\theta}[l_{n}(\theta,Y_{1}^{n}) -l_{n}(\theta_0,Y_{1}^{n}) - nH_{n}(\theta | \theta_0) < -\frac{n}{2}(r + H_{n}(\theta | \theta_0) - H_{n}(\theta|\theta_U)) ]
\nonumber
\\&
\quad
+ \mathbb{P}_{\theta} [l_{n}(\theta,Y_{1}^{n}) - l_{n}(\theta_U,Y_{1}^{n}) - nH_{n}(\theta|\theta_U) > \frac{n}{2}(r + H_{n}(\theta | \theta_0) - H_{n}(\theta|\theta_U)) ]\}
\nonumber
\\&
\overset{(i)}{\leq} 2\sup_{\theta \in U}\left\{ \exp\left(-\frac{n(r + H_{n}(\theta | \theta_0) - H_{n}(\theta|\theta_U) )^{2}}{4\tilde{C}}\right) \right\},
\end{align}
where $(i)$ follows from Lemma \ref{lemma:simply} provided $r + H_{n}(\theta | \theta_0) - H_{n}(\theta|\theta_U) >0$.
Now we choose the critical value $r$ such that
$$\frac{1}{4}J(\theta_0|\theta_U) <r < \frac{1}{2}J(\theta_0|\theta_U).$$
Since $H_{n}(\theta_0|\theta_U) \rightarrow J(\theta_0|\theta_U)$ by Assumption \ref{a6}, there exists $N(\theta_0,\theta_U) \in \mathbb{N}$ such that for any $n > N(\theta_0,\theta_U)$, $H_{n}(\theta_0|\theta_U) \geq \frac{3}{4}J(\theta_0|\theta_U)$; so we have that
\begin{align}
\label{er:b:21}
r - H_{n}(\theta_0|\theta_U) \leq -\frac{1}{4} J(\theta_0|\theta_U)
\overset{(i)}{\leq} -\frac{\kappa_1}{4}\|\theta_0 - \theta_U\|_{2}
\leq -\frac{3\kappa_1}{16}\epsilon,
\end{align}
where $(i)$ follows from Assumption \ref{a7}. Combining the results in \eqref{er:b:1} and \eqref{er:b:21} gives the exponentially decaying bounds for the Type \Romannum{1} error:
$$\mathbb{E}_{\theta_0}(\phi_{U}^{\epsilon}) \leq
\exp\left[- n\epsilon^{2} \frac{9\kappa_1^{2}}{256\tilde{C}} \right ] .$$
Assumption \ref{a6} implies that $H_{n}(\theta |\theta_0) \rightarrow J(\theta | \theta_0)$ uniformly over $\theta \in U$; so there exists $N'(\theta_0) \in \mathbb{N}$ such that $H_{n}(\theta |\theta_0) \geq \frac{15}{16}J(\theta |\theta_0)$ for any $n > N'(\theta_0)$ and any $\theta \in U$. Furthermore, by choosing $n > \max\{N'(\theta_0), N'(\theta_U)\}$, where $N'(\theta_U)$ is defined analogously with $\theta_0$ replaced by $\theta_U$, we also have $\frac{15}{16}J(\theta |\theta_U)\leq H_{n}(\theta |\theta_U) \leq \frac{17}{16}J(\theta |\theta_U)$ for any $\theta \in U$. Therefore,
\begin{align}
\label{er:b:22}
\inf_{\theta \in U}\{r &+ H_{n}(\theta | \theta_0) - H_{n}(\theta|\theta_U)\}
\geq \frac{1}{4}J(\theta_0|\theta_U) + \inf_{\theta \in U}\{ \frac{15}{16}J (\theta | \theta_0) - \frac{17}{16}J(\theta |\theta_U)\}
\nonumber
\\
&
\overset{(i)}{\geq} \frac{\kappa_1}{4}\|\theta_0 - \theta_U\|_{2} + \frac{15\kappa_{1} }{16}\inf_{\theta \in U}\|\theta - \theta_0\|_{2} - \frac{17\kappa_{2} }{16}\sup_{\theta \in U} \|\theta - \theta_U\|_{2}
\nonumber
\\
&
\overset{(ii)}{\geq} \frac{42\kappa_{1} - 17\kappa_2}{64}\epsilon >0,
\end{align}
where $(i)$ follows from Assumption \ref{a7} and $(ii)$ follows from the facts that, for $\theta \in U$, $\|\theta_0 - \theta_U\|_{2} \geq \frac{3\epsilon}{4}$, $\|\theta_0 - \theta\|_{2} \geq \frac{\epsilon}{2} $, and $\|\theta - \theta_U\|_{2} \leq \frac{\epsilon}{4}.$
Combining the results in \eqref{er:b:2} and \eqref{er:b:22} gives the exponentially decaying bounds for the Type \Romannum{2} error:
$$\sup_{\theta \in U} \mathbb{E}_{\theta}(1 - \phi_{U}^{\epsilon}) \leq
2\exp\left[- n\epsilon^{2} \left(\frac{42\kappa_{1} - 17\kappa_2}{64}\right)^{2}\frac{1}{4\tilde{C}} \right ] .$$
By choosing $n > \max\{ N(\theta_0,\theta_U), N'(\theta_0), N'(\theta_U)\},$ the probabilities of Type \Romannum{1} and Type \Romannum{2} error of the test \eqref{hy:ball:test} are dominated by $\exp(-\tilde c n\epsilon^{2})$, where $\tilde c = \min\{\frac{9\kappa_1^{2}}{256\tilde{C}} ,\left(\frac{42\kappa_{1} - 17\kappa_2}{64}\right)^{2}\frac{1}{4\tilde{C}}\}$. Furthermore, this result holds for any open ball $U_{\xi}$ of radius $\xi\epsilon$ such that $ U_{\xi} \subset \{\theta \in \Theta, \epsilon<\|\theta - \theta_0\|_{2} < 2\epsilon\}$.
\vspace{3mm}
\noindent
\emph{Step 3: For any $\epsilon >0$ and sufficiently large $M>0$, show the existence of a test for testing $\PP_{\theta_0}$ versus the complement $\{\PP_{\theta}: \theta \in \Theta, \|\theta - \theta_0\|_{2} \geq M \epsilon\}$ with exponentially decaying error probabilities.}
We want to show the existence of a test $\phi^{\epsilon}_{M}$ with exponentially decaying Type \Romannum{1} and \Romannum{2} error probabilities; that is,
$$ \mathbb{E}_{\theta_0}(\phi^{\epsilon}_{M}) \lesssim e^{-c_{1}n\epsilon^{2}M^{2}},\quad\sup_{\theta \in \Theta:\|\theta - \theta_0\|_{2} > M\epsilon}\mathbb{E}_{\theta}(1-\phi^{\epsilon}_{M}) \lesssim e^{-c_{2}n\epsilon^{2}M^{2}}.$$
We start by defining a collection of sets that are used to construct our test later.
For any $\epsilon >0$ and $\xi \in (0,1)$, the compactness of the closure of $\{\theta'\in \Theta, \epsilon < \|\theta' - \theta_0\|_{2} \leq 2\epsilon\}$ implies that there exist a finite number of open balls $\{U_{l,1}\}_{l=1}^{N(\xi, \epsilon)}$ with radius $\xi\epsilon$ such that
$$ \{\theta'\in \Theta, \epsilon < \|\theta' - \theta_0\|_{2} < 2\epsilon\} \subseteq \bigcup_{l=1}^{N(\xi, \epsilon)} U_{l,1},$$
where $N(\xi, \epsilon) = N(\xi\epsilon, \{\theta'\in \Theta, \epsilon < \|\theta' - \theta_0\|_{2} \leq 2\epsilon\},\|\cdot\|_{2})$ is the minimum number of balls with radius $\xi\epsilon$ under metric $\|\cdot\|_{2}$ needed to cover the set $\{\theta'\in \Theta, \epsilon < \|\theta' - \theta_0\|_{2} \leq 2\epsilon\}$. Extending this argument further, we have that for a large number $M$ and any integer $j \geq M$, there are $N(\xi j\epsilon, \{\theta'\in \Theta, j\epsilon < \|\theta' - \theta_0\|_{2} \leq 2j\epsilon\},\|\cdot\|_{2})$ many open balls with radius $\xi j \epsilon$ under metric $\|\cdot\|_{2}$ that cover the set $\{\theta'\in \Theta, j\epsilon < \|\theta' - \theta_0\|_{2} \leq 2j\epsilon\}$.
For ease of notation and following $N(\xi, \epsilon)$ defined earlier, denote $$N(\xi,j\epsilon) = N(\xi j \epsilon, \{\theta'\in \Theta, j\epsilon < \|\theta' - \theta_0\|_{2} \leq 2j\epsilon\},\|\cdot\|_{2})$$
for any $\epsilon >0$, $0< \xi<1$, and $j \geq M$. The compactness of the closure of $ \{\theta'\in \Theta, j\epsilon < \|\theta' - \theta_0\|_{2} \leq 2j\epsilon\}$ for every integer $j \geq M$ implies that we have a collection of open balls $\{U_{l,j}\}_{l=1}^{N(\xi,j\epsilon)}$ with radius $\xi j\epsilon$ satisfying
\begin{align*}
\{\theta'\in \Theta, j\epsilon < \|\theta' - \theta_0\|_{2} \leq 2j\epsilon\} \subseteq \bigcup_{l=1}^{N(\xi, j\epsilon)} U_{l,j}, \quad j \geq M.
\end{align*}
We define the test $\phi^{\epsilon}_{M}$ using the tests constructed in Step 2 for different choices of the open ball $U$. For the balls $\{U_{l,j}\}_{l=1}^{N(\xi, j\epsilon)}$, Step 2 yields a collection of tests $\{\psi_{U_{l,j}}^{n}\}_{l=1}^{N(\xi, j\epsilon)}$ with Type \Romannum{1} and \Romannum{2} error probabilities dominated by $\exp(-\tilde{c}nj^{2}\epsilon^{2})$; see Step 2 for the definition of $\tilde c$. Let $\phi_{M}^{\epsilon}$ be the supremum of the countably many tests obtained this way:
$$ \phi_{M}^{\epsilon} = \sup_{j\geq M} \sup_{1\leq l \leq N(\xi,j\epsilon)} \psi_{U_{j,l}}^{n}.$$
Then, for a sufficiently large $n$,
\begin{align}
\mathbb{E}_{\theta_0}\phi_{M}^{\epsilon} &\leq \sum_{j=M}^{\infty}\sum_{l=1}^{N(\xi,j\epsilon)} \mathbb{E}_{\theta_0}\psi^{n}_{U_{j,l}}
\overset{(i)}{\leq} \sum_{j=M}^{\infty} N(\xi,j\epsilon) e^{-\tilde{c}nj^{2}\epsilon^{2}}
\leq \sup_{j\geq 1}N(\xi, j\epsilon) \sum_{j=M}^{\infty} e^{-\tilde{c}nj^{2}\epsilon^{2}}
\nonumber \\&
\leq \frac{\sup_{j\geq 1}N(\xi, j\epsilon)}{1 - e^{-\tilde c n\epsilon^{2}}} e^{-\tilde{c}nM^{2}\epsilon^{2}},
\label{eq:test:1}
\end{align}
where $(i)$ follows from the upper bound on the Type \Romannum{1} error probability obtained in Step 2.
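The last inequality in \eqref{eq:test:1} uses an elementary estimate: since $j^{2} \geq M^{2} + (j - M)$ for integers $j \geq M \geq 1$,

```latex
\begin{align*}
\sum_{j=M}^{\infty} e^{-\tilde{c}nj^{2}\epsilon^{2}}
\leq e^{-\tilde{c}nM^{2}\epsilon^{2}} \sum_{k=0}^{\infty} e^{-\tilde{c}n\epsilon^{2}k}
= \frac{e^{-\tilde{c}nM^{2}\epsilon^{2}}}{1 - e^{-\tilde{c}n\epsilon^{2}}}.
\end{align*}
```

Here the first step uses $j^{2} - M^{2} = (j-M)(j+M) \geq j-M$, and the second sums the geometric series.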
Similarly, the upper bound on the Type \Romannum{2} error probability obtained in Step 2 implies that, for a sufficiently large $n$,
\begin{align}
\sup_{\theta \in \Theta:\|\theta - \theta_0\|_{2}> M\epsilon}\mathbb{E}_{\theta}(1-\phi_{M}^{\epsilon})
&\overset{(i)}{\leq} \sup_{U_{l,j},\;\;j \geq M} \mathbb{E}_{\theta}(1-\psi_{U_{l,j}}^{n}) \nonumber \\
&\leq \sup_{j \geq M} 2e^{-\tilde{c}nj^{2}\epsilon^{2} } \leq 2 e^{-\tilde{c}nM^{2}\epsilon^{2}},
\label{eq:test:2}
\end{align}
where $(i)$ follows from the construction of $\phi_{M}^{\epsilon}$: for each $\theta$ with $\|\theta - \theta_0\|_{2}> M\epsilon$, there exist $j \geq M$ and $l$ such that $\theta \in U_{l,j}$, and the corresponding test satisfies $\phi_{M}^{\epsilon} \geq \psi_{U_{l,j}}^{n}$ and $\mathbb{E}_{\theta}( 1- \psi_{U_{l,j}}^{n}) \leq
2e^{-\tilde{c}nj^{2}\epsilon^{2} }.$
Let $\epsilon_{n}>0$, $\epsilon_{n} \rightarrow 0$ and $(n\epsilon_{n}^{2})^{-1} = O(1)$. Then, \citet[Example 7.1]{ghosal2000convergence} implies that
$$\sup_{j\geq 1}N(\xi, j\epsilon) = \sup_{j\geq 1}N(\xi j \epsilon, \{\theta'\in \Theta, j\epsilon < \|\theta' - \theta_0\|_{2} \leq 2j\epsilon\},\|\cdot\|_{2}) \leq \left(\frac{12}{\xi}\right)^{d} ;$$
therefore, let $\xi = \frac{1}{4}$; \eqref{eq:test:1} and \eqref{eq:test:2} imply that there exist a constant $$\tilde{c} = \min\left\{\frac{9\kappa_1^{2}}{512L^{2}C_{H}(1 + \delta )^{4}} ,\left(\frac{42\kappa_{1} - 17\kappa_2}{64}\right)^{2}\frac{1}{8L^{2}C_{H}(1 + \delta )^{4}}\right\} $$ and a sequence of test functions $\phi_{n} = \phi_{M}^{\epsilon_{n}}$ for large enough $M$ such that for every sufficiently large $n$,
$$ \mathbb{E}_{\theta_0}(\phi_{n}) \leq \frac{\left(48\right)^{d} }{1 - e^{-\tilde{c}}} e^{-\tilde{c}n M^{2}\epsilon_{n}^{2}},\quad\sup_{\theta \in \Theta:\|\theta - \theta_0\| > M\epsilon_{n}}\mathbb{E}_{\theta}(1-\phi_{n}) \leq 2e^{-\tilde{c}nM^{2}\epsilon_{n}^{2}}.$$
The proof is complete by observing that
$$\sup_{\theta \in \Theta: M\epsilon_{n}< \|\theta - \theta_0\|_{2}\leq 2M \epsilon_{n}}\mathbb{E}_{\theta}(1-\phi_{n}) \leq \sup_{\theta \in \Theta:\|\theta - \theta_0\|_{2} > M\epsilon_{n}}\mathbb{E}_{\theta}(1-\phi_{n}) .$$
\hfill $\square$
\section*{Proof of Corollary \ref{thm:conver}}
\noindent
The statement \eqref{thm:conver:1} follows by verifying the conditions in \cite[Theorem 3]{GhoVan07}.
\hfill $\square$
\section*{Proof of Corollary \ref{thm:bvm}}
\noindent The proof is based on \cite[Theorem 10.1]{Van00}. First we verify the contiguity property of $\PP_{\theta}$ and the existence of tests as specified in \citet[Theorem 10.1]{Van00}.
\emph{Contiguity property:} From local asymptotic normality condition \eqref{eq:lan}, under $\PP_0$-probability, we have that for any $h \in \sqrt{n}(\Theta - \theta_0)$
\begin{align}
\label{eq:pf:lan}
l_{n}(\theta_0 + \frac{h}{\sqrt{n}}) - l_{n}(\theta_0) = \frac{1}{\sqrt{n}}h^{\top}l'_{n}(\theta_0) - \frac{1}{2}h^{\top}I_{\theta_0}h + o_{\PP_{\theta_0}}(1),
\end{align}
where $l_{n}'(\theta_0)$ weakly converges to $N_{d}(0,I_{\theta_0})$ under $\PP_{\theta_0}$-probability.
Let $\PP_{\theta_n}$ denote the distribution of $(Y_{1},\ldots,Y_{n})$ under parameter $\theta_n = \theta_0 +\frac{h}{\sqrt{n}}$ for some $h \in \sqrt{n}(\Theta - \theta_0)$.
LeCam's first lemma \citep{lecam1960locally} implies that $\PP_{\theta_n}$ is mutually contiguous to $\PP_{\theta_0}$.
\emph{Existence of tests:} If the assumptions of Theorem \ref{thm:opt:test} hold, then for any sequence $(\epsilon_{n})$ satisfying $\epsilon_{n}>0$, $\epsilon_{n} \rightarrow 0$, and $(n\epsilon_{n}^{2})^{-1} = O(1)$, there exist a constant $K$ and a sequence of tests $\phi_{n}$ such that for every sufficiently large $n$,
$$ \mathbb{E}_{\theta_0}(\phi_{n}) \lesssim e^{-KM^{2}n\epsilon_{n}^{2}},\;\quad\sup_{\theta \in \Theta:\|\theta - \theta_0\| > M\epsilon_{n}}\mathbb{E}_{\theta}(1-\phi_{n}) \leq 2e^{-KM^{2}n\epsilon_{n}^{2}}.$$
By setting $\epsilon_{n} = n^{-1/2}$ and letting $r_{n}$ be any sequence tending to infinity, we can find a sequence of tests $\phi_{n}$ satisfying, as $n \rightarrow \infty$,
\begin{align}
\label{eq:bvm:test}
\mathbb{E}_{\theta_0}(\phi_{n}) \lesssim e^{-Kr_{n}^{2}}\rightarrow 0,\quad \sup_{\theta \in \Theta:\|\theta - \theta_0\| > r_{n}/\sqrt{n}}\mathbb{E}_{\theta}(1-\phi_{n}) \leq 2e^{-Kr_{n}^{2}} \rightarrow 0.
\end{align}
Now we prove Corollary \ref{thm:bvm} in two steps. In the first step, we show that the total variation distance between the posterior distributions of $h = \sqrt{n}(\theta - \theta_0)$ under the prior $\Pi_{n}$ and under the restricted prior $\Pi_{n}^{C_{n}}$, where $C_{n}$ is the ball with radius $r_{n}$, is asymptotically negligible as $r_{n} \rightarrow \infty.$
By \eqref{eq:bvm:test}, for every $r_{n} \rightarrow \infty$, we can find a constant $K$ and a sequence of tests $\phi_{n}$, such that as $n \rightarrow \infty$,
\begin{align}
\label{test}
\mathbb{E}_{\theta_0}\phi_{n} \rightarrow 0,\quad\sup_{\theta \in \Theta:\|\theta - \theta_0\| > r_{n}/\sqrt{n}}\mathbb{E}_{\theta}(1 - \phi_{n}) \leq 2e^{-Kr_{n}^{2}}.
\end{align}
Let $\mathcal{F}$ denote the Borel $\sigma$-field on $\sqrt{n}(\Theta - \theta_0)$ and $\Pi_{h|Y_{1}^{n}}^{C_{n}}$ denote the posterior distribution of $h$ conditional on $Y_{1}^{n}$ with respect to the restricted prior $\Pi_{n}^{C_{n}}$. We have that
\begin{align}
\label{eq:bvm:1}
&\|\Pi_{h|Y_{1}^{n}} - \Pi_{h|Y_{1}^{n}}^{C_{n}}\|_{\text{TV}} = \sup_{B \in \mathcal{F}}\left| \Pi_{h|Y_{1}^{n}}(B) - \Pi_{h|Y_{1}^{n}}^{C_{n}}(B)\right|
\nonumber
\\&
= \sup_{B \in \mathcal{F}}\left| \frac{\int 1_{B}(h)\pi(h)p_{\theta + h/\sqrt{n}}(Y_{1}^{n})dh}{\int \pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds} - \frac{\int 1_{C_n}(h) 1_{B}(h)\pi(h)p_{\theta + h/\sqrt{n}}(Y_{1}^{n})dh}{\int 1_{C_n}(s)\pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds} \right|
\nonumber
\\&
= \sup_{B \in \mathcal{F}}\left|\frac{\int \{1_{B}(h) - 1_{B}(h)1_{C_{n}}(h)\}\pi(h)p_{\theta + h/\sqrt{n}}(Y_{1}^{n})dh}{\int \pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds} - \int 1_{C_n}(h) 1_{B}(h)\pi(h)p_{\theta + h/\sqrt{n}}(Y_{1}^{n})dh \times \right.
\nonumber
\\
&
\left.\left[\{\int 1_{C_n}(s)\pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds\}^{-1} - \{\int\pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds\}^{-1}\right]
\right|
\nonumber
\\&
=
\sup_{B \in \mathcal{F}}\left|\frac{\int \{1_{B}(h) - 1_{B}(h)1_{C_{n}}(h)\}\pi(h)p_{\theta + h/\sqrt{n}}(Y_{1}^{n})dh}{\int \pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds} - \right.
\nonumber
\\
&\left.
\frac{\int 1_{C_n}(h) 1_{B}(h)\pi(h)p_{\theta + h/\sqrt{n}}(Y_{1}^{n})dh}{\int 1_{C_n}(s)\pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds }\times \frac{\int\{1 - 1_{C_n}(s)\}\pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds}{\int\pi(s)p_{\theta + s/\sqrt{n}}(Y_{1}^{n})ds} \right|
\nonumber
\\&
= \sup_{B \in \mathcal{F}}| \Pi_{h|Y_{1}^{n}}(B\cap C_{n}^{c}) - \Pi_{h|Y_{1}^{n}}^{C_{n}}(B) \Pi_{h|Y_{1}^{n}}( C_{n}^{c}) |
\nonumber
\\&
\leq 2\Pi_{h|Y_{1}^{n}}(C_{n}^{c}).
\end{align}
Let $U$ be a ball with fixed radius. Define $\PP_{n,U}= \int \PP_{\theta + h/\sqrt{n}} d\Pi^{U}_{n}(h)$ and the associated expectation as $\EE_{n,U}$. Using \eqref{eq:lan} and boundedness of $\pi(\cdot)$, $\PP_{n,U}$ is mutually contiguous with $\PP_{\theta_0}$;
so using \eqref{eq:bvm:test}, we have that
\begin{align*}
\mathbb{E}_{n,U} \Pi_{h|Y_{1}^{n}}(C_{n}^{c}) &=\mathbb{E}_{n,U} \Pi_{h|Y_{1}^{n}}(C_{n}^{c})(1 - \phi_{n}) + \mathbb{E}_{n,U}\Pi_{h|Y_{1}^{n}}(C_{n}^{c})\phi_n\\
&= \mathbb{E}_{n,U} \Pi_{h|Y_{1}^{n}}(C_{n}^{c})(1 - \phi_{n}) + o(1),
\end{align*}
where the $o(1)$ term follows from $\mathbb{E}_{n,U}\Pi_{h|Y_{1}^{n}}(C_{n}^{c})\phi_n \leq \mathbb{E}_{n,U}\phi_{n} \rightarrow 0$, by the contiguity of $\PP_{n,U}$ and $\PP_{\theta_0}$ together with $\mathbb{E}_{\theta_0}\phi_{n} \rightarrow 0$, and $\mathbb{E}_{n,U} \Pi_{h|Y_{1}^{n}}(C_{n}^{c})(1 - \phi_{n})$ can be expressed as
\begin{align}
\label{eq:bvm:2}
\mathbb{E}_{n,U} \Pi_{h|Y_{1}^{n}}(C_{n}^{c})(1 - \phi_{n})
&= \int \PP_{\theta + h/\sqrt{n}}\Pi_{h|Y_{1}^{n}} (C_{n}^{c})(1 - \phi_{n}) d\Pi^{U}_{n}(h)
\nonumber
\\&
= \frac{1}{\Pi_{n}(U)}\int 1_{U}(h)\pi(h) \PP_{\theta + h/\sqrt{n}}\Pi_{h|Y_{1}^{n}} (C_{n}^{c})(1 - \phi_{n}) dh
\nonumber
\\&
= \frac{\Pi_{n}(C_{n}^{c})}{\Pi_{n}(U)}\int \PP_{\theta + h/\sqrt{n}}\Pi_{h|Y_{1}^{n}} (U)(1 - \phi_{n}) d\Pi^{C_{n}^{c}}_{n}(h)
\nonumber
\\& \leq \frac{1}{\Pi_{n}(U)} \int_{C_{n}^{c}}\PP_{\theta + h/\sqrt{n}}(1 - \phi_{n})d\Pi_{n}(h)
\nonumber
\\&
\leq
\frac{1}{\Pi_{n}(U)} \int_{\|h\|\geq r_{n}} e^{-Kr_{n}^{2}}d\Pi_{n}(h),
\end{align}
where the last inequality follows from \eqref{eq:bvm:test} and $\frac{1}{\Pi_{n}(U)}$ is bounded above by order $n^{d/2}$ due to the continuity of the density $\pi$ at $\theta_0$; therefore, \eqref{eq:bvm:2} is bounded by
$\int_{\|h\|\geq r_{n}} \exp(-Kr_{n}^{2})dh$, which converges to zero as $n, r_{n} \rightarrow \infty.$
Let $N^{C}(\mu,\Sigma)$ denote the restriction of normal distribution with mean $\mu$ and covariance matrix $\Sigma$ on a set $C$. In the second step of the proof, we want to show that for a ball $C$ with fixed radius $M$,
$$\mathbb{E}_{n,C} \| \Pi^{C}_{h|Y_{1}^{n}} - N^{C}(\Delta_{n,0},I_{\theta_0}^{-1}) \|_{\text{TV}} \rightarrow 0,$$
where $\Delta_{n,0} = \frac{1}{\sqrt{n}} I_{\theta_0}^{-1} l'_{n}(\theta_0).$
We have that
\begin{align}
\label{eq:fix1}
&\mathbb{E}_{n,C} \| \Pi^{C}_{h|Y_{1}^{n}} - N^{C}(\Delta_{n,0},I_{\theta_0}^{-1}) \|_{\text{TV}}
\nonumber
\\
&
= 2 \int \int \left\{1 - \frac{dN^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(h)}{1_{C}(h) d\PP_{\theta + h/\sqrt{n}}(Y_{1}^{n})\pi_{n}(h)/\int_{C} d\PP_{\theta + g/\sqrt{n}}(Y_{1}^{n})\pi_{n}(g)dg} \right\}^{+} d\Pi_{h|Y_{1}^{n}}^{C}(dh)\PP_{n,C}(dY_{1}^{n})
\nonumber\\
&
\overset{(i)}{\leq}
2 \int \left\{1 - \frac{1_{C}(h) d\PP_{\theta + g/\sqrt{n}}(Y_{1}^{n})\pi_{n}(g)dN^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(h)}{1_{C}(h) d\PP_{\theta + h/\sqrt{n}}(Y_{1}^{n})\pi_{n}(h)N^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(g)} \right\}^{+}
dN^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(g) d\Pi_{h|Y_{1}^{n}}^{C}(dh)\PP_{n,C}(dY_{1}^{n}),
\end{align}
where $(i)$ follows from Jensen's inequality applied to the convex function $f(x) = (1-x)^{+}$.
By the dominated convergence theorem and the bound $dN^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(g) \leq c_{0}\,\lambda^{C}(dg)$ for some constant $c_{0}>0$, it suffices to show that, under the measure $\PP_{n,C}(dy_{1}^{n})\Pi_{h|Y_{1}^{n}}^{C}(dh) \lambda^{C}(dg)$,
\begin{align}
\label{conv:p}
\frac{d\PP_{\theta + g/\sqrt{n}}(Y_{1}^{n}) \pi_{n}(g) dN^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(h)}{d\PP_{\theta + h/\sqrt{n}}(Y_{1}^{n})\pi_{n}(h) dN^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(g)} \longrightarrow 1 \;\text{in probability},
\end{align}
where $\lambda^{C}$ is the Lebesgue measure on $C$. From the definition of posterior distribution, we have
$$\PP_{n,C}(dy_{1}^{n})\Pi_{h|Y_{1}^{n}}^{C}(dh) \lambda^{C}(dg) = \Pi_{n}^{C}(dh) \PP_{\theta + h/\sqrt{n}}(dy_{1}^{n})\lambda^{C}(dg).$$
Using local asymptotic normality condition \eqref{eq:lan} and continuity of $\pi$, we know that $\PP_{n,C}(dy_{1}^{n})\Pi_{h|Y_{1}^{n}}^{C}(dh) \lambda^{C}(dg)$ is contiguous to $ \lambda^{C}(dh) P_{\theta_0}(dy_{1}^{n})\lambda^{C}(dg);$
therefore, it suffices to show that, under the measure $\lambda^{C}(dh) \PP_{\theta_0}(dy_{1}^{n}) \lambda^{C}(dg) $,
$$\frac{d\PP_{\theta + g/\sqrt{n}}(Y_{1}^{n}) \pi_{n}(g) dN^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(h)}{d\PP_{\theta + h/\sqrt{n}}(Y_{1}^{n})\pi_{n}(h) dN^{C}(\Delta_{n,0},I_{\theta_0}^{-1})(g)} \longrightarrow 1 \;\text{in probability},$$
which is true under the local asymptotic normality condition \eqref{eq:lan}.
We have, for fixed $C$, $$\mathbb{E}_{\theta_0} \| \Pi^{C}_{h|Y_{1}^{n}} - N^{C}(\Delta_{n,0},I_{\theta_0}^{-1}) \|_{\text{TV}} \rightarrow 0.$$ Combining this with the first step, we have, for $n, r_{n} \rightarrow \infty,$
\begin{align}
&\mathbb{E}_{\theta_0} \|\Pi_{h|Y_{1}^{n}}^{C_{n}} - \Pi_{h|Y_{1}^{n}}\|_{\text{TV}} \rightarrow 0,
\nonumber\\&
\|N^{C_{n}}(\Delta_{n,0},I_{\theta_0}^{-1}) - N(\Delta_{n,0},I_{\theta_0}^{-1})\|_{\text{TV}} \rightarrow 0.
\end{align}
By a diagonal argument, $$\mathbb{E}_{\theta_0} \| \Pi_{h|Y_{1}^{n}} - N(\Delta_{n,0},I_{\theta_0}^{-1}) \|_{\text{TV}} \rightarrow 0,$$
which completes the proof.
\hfill $\square$
\section*{Acknowledgements}
Chunlei Wang and Sanvesh Srivastava are partially supported by grants from the Office of Naval Research (ONR-BAA N000141812741) and the National Science Foundation (DMS-1854667/1854662).
\bibliographystyle{Chicago}
\section{Introduction}
The basis of the modern theory of strong interactions is Quantum Chromodynamics, a gauge quantum theory of quark and gluon fields to which Professor A. A. Slavnov has made fundamental contributions \cite{Sla}.
Our present view of high energy scattering of hadrons is dominated by the idea of a leading Regge trajectory, the Pomeron, which embodies
colourless gluon exchanges and leads asymptotically to the hypothesis (going back to the celebrated Pomeranchuk theorem and later pushed forward by V. N. Gribov in the early 1970s) of a universal C-even ("C" means "crossing") behaviour of cross-sections, independent of the flavours of the colliding hadrons. At low energy this universality is violated by "usual" quarkic reggeons which, however, die off with energy.
Afterwards, it was argued that besides quarkic C-odd reggeons one can admit a C-odd partner of the Pomeron, "the Odderon", which can, potentially, violate the aforesaid universality even at high energies \cite{Luk}.
Recent measurements by the LHC TOTEM Collaboration at 13 TeV \cite{TOT} caused a vivid discussion (more than 60 publications by now) of the strikingly small value of the parameter $ \rho = {Re T_{N}(s,0)}/{Im T_{N}(s,0)}$ (here $T_{N}(s,t)$ stands for the $ pp $ scattering amplitude), which lies (with some variations) near $ 0.10 $. It was interpreted in Ref.\,\cite{Nic} as a manifestation of the so-called "maximal Odderon", which would violate the strong-interaction universality in a maximal possible way.
The extraction of this $ \rho $-parameter (which, let us recall, is inherently model dependent) from the data depends decisively on how the Coulomb contributions are taken into account in the full scattering amplitude.
\section{From Bethe to CKL}
During quite a long time the Bethe formula \cite{Be} for the total amplitude $ T_{C+N} $ has been widely applied for the extraction of the parameter $ \rho $ from the data on the differential cross-section (which is defined by $\mid T_{C+N}\mid^{2}$, see Eq.(2)):
\begin{equation}
T_{C+N} = \frac{8\pi s \alpha \mathcal{F}^{2}(t)}{t} + e^{i\alpha\Phi (s,t)} T_{N}(t)
\end{equation}
where $ \mathcal{F} $ is the proton e.m. form factor and $ \Phi (s,t) $ is the Bethe phase, usually in the form given to it by West and Yennie \cite{We} (or some later modifications of it).
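To get a feeling for the interplay of the two terms in Eq.(1), one may evaluate a toy version of it numerically. The sketch below is purely illustrative, not a fit to data: it assumes a dipole form factor, a purely exponential nuclear amplitude normalized by the optical theorem, and a constant phase; all parameter values ($\sigma_{tot}$, the slope $B$, $\rho$, $\Phi$) are hypothetical.

```python
import numpy as np

ALPHA = 1 / 137.036   # fine-structure constant
HBARC2 = 0.3894       # (hbar c)^2 in mb GeV^2

def dsigma_dt(t, s, sigma_tot_mb=110.0, B=20.0, rho=0.10, phase=0.0):
    """Toy Bethe-type cross-section |T_C + e^{i alpha Phi} T_N|^2 for t < 0.
    sigma_tot_mb, B (slope, GeV^-2), rho and the constant phase are
    illustrative values only."""
    sigma_tot = sigma_tot_mb / HBARC2                   # GeV^-2
    F = (1 + abs(t) / 0.71) ** -2                       # dipole e.m. form factor
    T_C = 8 * np.pi * s * ALPHA * F**2 / t              # one-photon amplitude
    T_N = s * sigma_tot * (rho + 1j) * np.exp(-B * abs(t) / 2)  # Im T_N(0) = s sigma_tot
    T = T_C + np.exp(1j * ALPHA * phase) * T_N
    return HBARC2 / (16 * np.pi * s**2) * np.abs(T) ** 2  # mb GeV^-2

s = 13000.0**2                                  # (13 TeV)^2 in GeV^2
t_star = -8 * np.pi * ALPHA * HBARC2 / 110.0    # |T_C| ~ |T_N| around here
print(dsigma_dt(t_star, s, rho=0.10), dsigma_dt(t_star, s, rho=0.14))
```

Near $|t| \approx 6.5\cdot10^{-4}$ GeV$^{2}$, where the Coulomb and nuclear moduli are comparable, shifting $\rho$ from $0.10$ to $0.14$ changes $d\sigma/dt$ at the level of a few per cent, which illustrates why the treatment of the Coulomb contribution is decisive for the extraction of $\rho$.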
However, over recent years the general practice in the TOTEM publications on this subject is based, instead of Eq.(1), on the use of the Cahn-Kundr\'{a}t-Lokaj\'{i}\v{c}ek (CKL) formula \cite{Cahn} for the account of the Coulomb-nuclear interference (CNI), which is more general (e.g., it does not imply the $ t $ independence \footnote{Problems with $ t $ dependence of the nuclear phase were analyzed in \cite{Pet}. } of the nuclear phase $ Arg T_{N} (s,t) $ ) than the Bethe formula.
The CKL approximation used in \cite{TOT} (in a bit different normalization) has the form \footnote{The damping factors due to the soft and virtual photons are well known but negligible in the region of CNI.}
\begin{equation}
\frac{d\sigma_{C+N}}{dt}= \frac{(\hbar c)^{2}}{16\pi s^{2}} \mid T_{C+N}\mid^{2}= \frac{(\hbar c)^{2}}{16\pi s^{2}} \mid \frac{8\pi\alpha s}{t}\mathcal{F}^{2}(t) + T_{N} [1-i\alpha G(t)]\mid^{2}
\end{equation}
with
\begin{equation}
G(t)= \int dt^{'} log (\frac{t^{'}}{t}) \frac{d}{dt^{'}} \mathcal{F}^{2}(t^{'})-\int dt^{'}(\frac{T_{N}(t^{'})}{T_{N}(t)} - 1) \frac{I(t,t^{'})}{2\pi}
\end{equation}
where $ \mathcal{F}(t) $ is the proton electric form factor and
\begin{center}
$ I(t,t^{'})= \int_{0}^{2\pi}d\phi\: \mathcal{F}^{2}(t^{''})/t^{''},\: t^{''}=t+t^{'} + 2\sqrt{t t^{'}} \cos\phi . $
\end{center}
It is clear, however, that for proper accounting of the powers of $ \alpha $ in the perturbative QED expansion used in Eq.(2) one has to retain not only the order $ \alpha^{1} $ terms but also the terms $\sim \alpha^{2} $. Otherwise, we will miss some terms $\sim \alpha^{2} $ in the differential cross-section.
Eqs. $(2)\:-\:(3)$ were obtained as a result of rather questionable manipulations \cite{Cahn} with the IR regulator mass before it could be finally eliminated. Eq.(2) was criticized in Ref. \cite{Petr} where it was argued, in particular, that the term $ \int dt^{'} log (\frac{t^{'}}{t}) \frac{d}{dt^{'}} \mathcal{F}^{2}(t^{'}) $ is superfluous.
\section{Modified form of the CNI account}
To proceed further, we have to notice that many problems can be overcome much more easily if we realize that the square of the amplitude is \textit{free from Coulombic IR divergences}.
Below we will use, instead of $ t $, a more convenient variable
\begin{center}
$ q^{2} \equiv q^{2}_{\perp} = ut/4k^{2} = k^{2}sin^{2}\theta, \; s= 4k^{2}+ 4m^{2}, $
\end{center}
which reflects the $ t-u $ symmetry of the $ pp $ scattering.
At $ \theta\rightarrow 0\;\quad q^{2} \approx -t $ while at $ \theta\rightarrow \pi\;\;\quad q^{2} \approx -u $.
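As a quick sanity check of this kinematic identity (an illustration, not part of the paper), the following Python sketch verifies numerically that $ ut/4k^{2} = k^{2}\sin^{2}\theta $ for equal-mass elastic kinematics, together with the stated forward limit $ q^{2} \approx -t $:

```python
import math

# Numerical check of the identity q^2 = u*t/(4 k^2) = k^2 sin^2(theta)
# for equal-mass elastic kinematics in the CM frame, where
#   t = -2 k^2 (1 - cos(theta)),  u = -2 k^2 (1 + cos(theta)).
def q2_from_mandelstam(k2, theta):
    t = -2.0 * k2 * (1.0 - math.cos(theta))
    u = -2.0 * k2 * (1.0 + math.cos(theta))
    return u * t / (4.0 * k2)

def q2_direct(k2, theta):
    return k2 * math.sin(theta) ** 2

k2 = 4.2  # GeV^2, an arbitrary illustrative CM momentum squared
for theta in (0.01, 0.5, math.pi / 2, math.pi - 0.01):
    assert abs(q2_from_mandelstam(k2, theta) - q2_direct(k2, theta)) < 1e-12

# forward limit: q^2 -> -t (and, by the t <-> u symmetry, q^2 -> -u at theta -> pi)
theta = 1e-3
t = -2.0 * k2 * (1.0 - math.cos(theta))
assert abs(q2_direct(k2, theta) + t) < 1e-6 * abs(t)
```

The check is exact algebraically, since $ ut = 4k^{4}(1-\cos^{2}\theta) $; the assertions only guard against floating-point noise.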
We will use the same notation $ q $ both for the 2-dimensional vectors $ \textbf{q} $ and for their moduli $ \vert \textbf{q} \vert\ $. In the latter case, the limits of integration are indicated explicitly. As we deal with high energies and have fast-decreasing nuclear amplitudes and form factors in the integrands, we can (modulo vanishingly small corrections) extend the integration
in $ \textbf{q} $ (kinematically limited by $ \mid \textbf{q} \mid \leq \sqrt{s}/2 $ ) over the whole 2D space. The benefit is the possibility to freely use direct and inverse 2D Fourier transforms.
Thus, based on the same premises as CKL (the additivity of the eikonal w.r.t. the strong and electromagnetic interactions), we have obtained the following expression for the \textit{modulus squared} of the full amplitude (i.e., for the \textit{observed} quantity) which from the very beginning is free from IR regulators (e.g., a "photon mass", $ 2\rightarrow 2+\varepsilon $ regularization, or the like) and is well defined mathematically:
\begin{equation}
\mid T_{C+N}\mid_{q\neq0}^{2} = 4s^{2} S^{C} (q,q) + \int\frac{d^{2}q^{'}}{(2\pi)^{2}}\frac{d^{2}q^{''}}{(2\pi)^{2}} S^{C} (q^{'},q^{''})T_{N} (q-q^{'})T_{N}^{\ast} (q-q^{''})
\end{equation}
\[+4s \int\frac{d^{2}q^{'}}{(2\pi)^{2}} Im[S^{C} (q,q^{'})T_{N}^{\ast} (q-q^{'})]\]
where
\begin{equation}
S^{C} (q^{'},q^{''})= \int d^{2}b^{'}d^{2}b^{''} e^{i{q}^{'}{b}^{'}-i{q}^{''}{b}^{''}} e^{2i\alpha \Delta_{C} ( b^{'},\, b^{''})}
\end{equation}
and
\begin{equation}
\Delta_{C} ( b^{'},\, b^{''})= \frac{1}{2\pi}\int d^{2}k \frac{\mathcal{F}^{2}(k^{2})}{k^{2}}(e^{-ib^{''}k} - e^{-ib^{'}k})=
\end{equation}
\[=\int_{0}^{\infty}\frac{dk}{k}\mathcal{F}^{2}(k^{2})[J_{0}(b^{''}k)-J_{0}(b^{'}k)].\]
In Eq.(4) we explicitly indicate the condition $ q\neq0 $ which corresponds to real experimental conditions (the scattered proton cannot be detected arbitrarily close to the beam axis). The "forward" observables, e.g., $ \sigma_{tot}(s) = Im T_{N}(s,0)/s $, are understood as the result of the extrapolation $ t\rightarrow 0 $. However, this does not concern expressions appearing as integrands, which may contain terms like $ \delta (\textbf{q}) $.
In Eq.(6) the Coulomb singularity at $ k \rightarrow 0 $ is safely cured by the exponential (Bessel function) difference.
Note that
\begin{center}
$ S^{C} (q^{'},q^{''})\mid_{\alpha=0} = (2\pi)^{2} \delta (\textbf{q}^{'})(2\pi)^{2} \delta (\textbf{q}^{''})$
\end{center}
while
\begin{center}
$\int S^{C} (q^{'},q^{''}) d^{2}q^{'}d^{2}q^{''}/(2\pi)^{4} = 1, \; \forall \alpha .$
\end{center}
In principle, when applying Eq.(4) to the data analysis, one could deal directly with Eq.(5), which is an all-order (in $ \alpha $) exact expression free of singularities.
In the unrealistic case of "electrically point-like" nucleons, i.e., if $ \mathcal{F} = 1 $, we would have a compact explicit expression for the Coulomb function $ S^{C} (q^{'},q^{''}) $ in terms of the well-known generalized functions described, e.g., in \cite{Vla}:
\begin{equation}
S^{C} (q^{'},q^{''}) = (4\pi\alpha)^{2} \frac{(q^{''2}/q^{'2})^{i\alpha}}{q^{'2}q^{''2}}.
\end{equation}
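Two structural properties of this point-like kernel are easy to confirm numerically (an illustrative check, not part of the paper): the factor $ (q^{''2}/q^{'2})^{i\alpha} $ is a pure phase, so the modulus of $ S^{C} $ does not feel $ \alpha $ beyond the real prefactor, and interchanging $ q^{'} \leftrightarrow q^{''} $ simply conjugates the kernel:

```python
import cmath
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def SC_pointlike(q1sq, q2sq):
    # Eq. (7): (4*pi*alpha)^2 * (q''^2/q'^2)^{i*alpha} / (q'^2 * q''^2)
    phase = cmath.exp(1j * ALPHA * math.log(q2sq / q1sq))
    return (4.0 * math.pi * ALPHA) ** 2 * phase / (q1sq * q2sq)

for q1sq, q2sq in ((1e-3, 2e-3), (0.5, 1e-4)):
    v = SC_pointlike(q1sq, q2sq)
    # the alpha-dependent factor is a pure phase: the modulus does not depend on it
    assert abs(abs(v) - (4.0 * math.pi * ALPHA) ** 2 / (q1sq * q2sq)) < 1e-12 * abs(v)
    # interchanging q' <-> q'' conjugates the kernel
    assert abs(v - SC_pointlike(q2sq, q1sq).conjugate()) < 1e-12 * abs(v)
```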
However, it is hardly possible to obtain explicit and "user-friendly" expressions for arbitrary $ T_{N} $ and $ \mathcal{F} $.
Thus, in practice we have to use perturbative expansions in $ \alpha $. Let us notice, however, that it would be a bit rash to restrict ourselves to the zeroth and first orders in $ \alpha $ because, e.g., the pure Coulomb contribution ($ \sim\alpha^{2} $) to the observed $ d\sigma^{C+N}/dt $ at $ -t = \mathcal{O}(10^{-3}\, GeV^{2}) $ and $ \sqrt{s} = 13\, TeV $ reaches nearly 30\%. We notice that in the relevant publications (see, e.g., Refs.\cite{Cahn}) only the terms up to first order in $ \alpha $ are retained in the amplitude, so when passing to the cross-section some terms are missing. This can lead to a wrong estimation of parameters like $ \rho $ and thus to wrong physical conclusions.
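As a rough numerical illustration of the size of the pure Coulomb term, the following Python sketch reproduces a Coulomb fraction of roughly 30\% at $ -t = 10^{-3}\, GeV^{2} $. All inputs are assumptions introduced here for illustration, not values taken from the paper: a dipole form factor $ \mathcal{F}(t) = (1 - t/0.71\,GeV^{2})^{-2} $, TOTEM-like parameters $ \sigma_{tot} \approx 110.6 $ mb, $ B \approx 20.4\, GeV^{-2} $, $ \rho \approx 0.1 $, and a very crude interference estimate $ -2\rho\sqrt{d\sigma_{C}\, d\sigma_{N}} $ that neglects the Coulomb phase:

```python
import math

HBARC2 = 0.3894          # (hbar*c)^2 in mb GeV^2
ALPHA = 1.0 / 137.036    # fine-structure constant

def coulomb_fraction(t, sigma_tot=110.6, B=20.4, rho=0.1):
    """Crude estimate of dsigma_C / dsigma_{C+N} at momentum transfer t (GeV^2, t < 0)."""
    F2 = (1.0 - t / 0.71) ** -4                              # dipole form factor squared
    dC = 4.0 * math.pi * ALPHA ** 2 * F2 ** 2 * HBARC2 / t ** 2   # pure Coulomb, mb/GeV^2
    dN = sigma_tot ** 2 * (1 + rho ** 2) / (16 * math.pi * HBARC2) * math.exp(B * t)
    dInt = -2.0 * rho * math.sqrt(dC * dN)                   # rough magnitude, destructive for pp
    return dC / (dC + dN + dInt)

frac = coulomb_fraction(-1e-3)
assert 0.2 < frac < 0.35  # "nearly 30%" with these illustrative inputs
```

With these inputs $ d\sigma_{C} \approx 258 $ mb/GeV$^{2}$ against $ d\sigma_{N} \approx 618 $ mb/GeV$^{2}$, so the fraction lands around 0.27--0.32 depending on the interference sign convention; the point is only the order of magnitude, not a precise fit.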
The basic kernel $ S^{C} (q^{'},q^{''}) $ has the following expansion in $ \alpha $ up to $ \alpha^{2} $ inclusively:
\begin{equation}
S^{C} (q^{'},q^{''})= (2\pi)^{2}\delta (\textbf{q}')(2\pi)^{2}\delta (\textbf{q}'')+2i\alpha (2\pi)^{3} [\hat{{\delta}_{C}}(q')\delta (\textbf{q}'')+ \hat{{\delta}_{C}}(q'')\delta (\textbf{q}')]+
\end{equation}
\[+ 2\alpha^{2}\pi^{2}\lbrace 2\hat{{\delta}_{C}}(q')\hat{{\delta}_{C}}(q'')-\delta (\textbf{q}')X(q'')- \delta (\textbf{q}'')X(q'))\rbrace + ...\]
where
\begin{equation}
\hat{{\delta}_{C}}(q)\doteq \int \frac{d\textbf{k}}{k^{2}}\mathcal{F}^{2}(k^{2})[\delta (\textbf{q})-\delta(\textbf{q}-\textbf{k})]
\end{equation}
and
\begin{equation}
X(q) = \int\frac{d\textbf{k}}{k^{2}}\mathcal{F}^{2}(k^{2})\int \frac{d\textbf{p}}{p^{2}}\mathcal{F}^{2}(p^{2})[\delta (\textbf{q}-\textbf{k}-\textbf{p}) -\delta (\textbf{q}-\textbf{k})-\delta (\textbf{q}-\textbf{p})+\delta(\textbf{q})].
\end{equation}
Quantities (9) and (10) are generalized functions which are defined on the space of appropriate test functions $ \phi (\textbf{q}) $. Normally, infinitely differentiable functions decreasing at infinity faster than any inverse power are used (Schwartz class $ S $), though in our case functions that are merely differentiable and bounded at infinity would suffice.
Generalized functions (9) and (10) are defined as linear functionals $ (...,\phi) $ with
\begin{equation}
(\hat{{\delta}_{C}},\phi) =\int \frac{d\textbf {k}}{k^{2}}\mathcal{F}^{2}(k^{2})( \phi(\textbf {k})-\phi(0)),
\end{equation}
and
\begin{equation}
(X,\phi) = \int\frac{d\textbf{k}}{k^{2}}\mathcal{F}^{2}(k^{2})\int \frac{d\textbf{p}}{p^{2}}\mathcal{F}^{2}(p^{2})[\phi(\textbf{k}+\textbf{p}) -\phi(\textbf{k})-\phi (\textbf{p})+\phi(\textbf{0})].
\end{equation}
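The subtraction in Eq.(11) indeed removes the IR divergence: near $ k = 0 $ the combination $ \phi(\textbf{k}) - \phi(0) $ vanishes fast enough for the $ 1/k^{2} $ singularity to be integrable. A small Python sketch evaluates the functional for a Gaussian test function and a dipole form factor (both are illustrative assumptions, not choices made in the paper):

```python
import math

def F2(k2, L2=0.71):
    # dipole form factor squared, F^2 = (1 + k^2/L2)^(-4)  (assumption)
    return (1.0 + k2 / L2) ** -4

def phi(k):
    # a radially symmetric Schwartz-class test function
    return math.exp(-k * k)

def functional(n):
    # (delta_C_hat, phi) = 2*pi * int_0^inf dk/k * F^2(k^2) * (phi(k) - phi(0)),
    # after passing to radial coordinates in the 2D integral of Eq. (11).
    # Midpoint rule; the integrand behaves like -k near 0, so there is no IR divergence.
    a, b = 0.0, 40.0
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        k = a + (i + 0.5) * h
        s += F2(k * k) * (phi(k) - 1.0) / k
    return 2.0 * math.pi * s * h

v1, v2 = functional(20000), functional(40000)
assert v1 < 0.0                        # phi(k) < phi(0) for all k > 0
assert abs(v1 - v2) < 1e-3 * abs(v1)   # grid-converged: the integral is finite
```

The grid-refinement check is the numerical counterpart of the convergence statement: without the $ \phi(0) $ subtraction the same integral would diverge logarithmically at the lower limit.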
The distribution $ X $ can be expressed as a convolution of the distribution $ \hat{{\delta}_{C}} $ with itself:
\begin{equation}
X(\textbf{q}) = (\hat{{\delta}_{C}}\star\hat{{\delta}_{C}})(\textbf{q})
\end{equation}
and in terms of local values we get
\begin{equation}
X(q)\mid _{q\neq 0} = \frac{1}{q^{2}}\int \frac{dk^{2}dp^{2}}{k^{2}p^{2}} (-\lambda (q^{2},k^{2},p^{2}))_{+}^{-1/2}\times
\end{equation}
\[\times[q^{2}\mathcal{F}^{2}(k^{2})\mathcal{F}^{2}(p^{2})- (k^{2}\mathcal{F}^{2}(p^{2}) + p^{2}\mathcal{F}^{2}(k^{2}))\mathcal{F}^{2}(q^{2}) ]\]
where
\begin{center}
$ \lambda (q^{2},k^{2},p^{2})=q^{4}+k^{4}+p^{4} -2q^{2}k^{2} -2q^{2}p^{2} -2k^{2}p^{2} .$
\end{center}
and $ x_{+}^{\nu} \doteq x^{\nu},x\geq 0; = 0, x <0. $
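The kinematic meaning of the weight $ (-\lambda)_{+}^{-1/2} $ can be checked numerically (an illustration only): for 2D vectors with $ \textbf{q} = \textbf{k} + \textbf{p} $ one has the identity $ \lambda(q^{2},k^{2},p^{2}) = -4k^{2}p^{2}\sin^{2}\phi \leq 0 $, while outside the triangle region $ \mid k - p \mid \leq q \leq k + p $ the subscript $ + $ makes the weight vanish:

```python
import math
import random

def kallen(a, b, c):
    # the triangle (Kallen) function lambda(a, b, c)
    return a * a + b * b + c * c - 2 * a * b - 2 * a * c - 2 * b * c

random.seed(1)
for _ in range(1000):
    k = random.uniform(0.01, 2.0)
    p = random.uniform(0.01, 2.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    q2 = k * k + p * p + 2 * k * p * math.cos(phi)   # q = k + p as 2D vectors
    lam = kallen(q2, k * k, p * p)
    # identity: lam = -4 k^2 p^2 sin^2(phi), so -lam >= 0 on the physical region
    assert abs(lam + 4 * k * k * p * p * math.sin(phi) ** 2) < 1e-9
    assert lam <= 1e-9

# outside the triangle region |k - p| <= q <= k + p the step (-lam)_+ vanishes
q = 3.5  # > k + p for k = p = 1
assert kallen(q * q, 1.0, 1.0) > 0
```

This is why the angular integrations in Eqs.(14)--(15) can be traded for $ dk^{2}dp^{2} $ integrations weighted by $ (-\lambda)_{+}^{-1/2} $.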
One can readily see that the integrals in Eqs. (12) and (14) are well convergent at $ {k}^{2},{p}^{2} \rightarrow 0 $. UV convergence is provided by the form factors, as $ \mathcal{F}^{2}(k^{2})\sim k^{-8}$ at $ k^{2}\rightarrow \infty $.
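For instance, with the standard dipole parameterization $ \mathcal{F}(k^{2}) = (1 + k^{2}/0.71\,GeV^{2})^{-2} $ (an assumption used here only for illustration), the asymptotics $ \mathcal{F}^{2}(k^{2}) \sim k^{-8} $ is easy to confirm numerically:

```python
# Dipole parameterization (an assumption for illustration):
#   F(k^2) = (1 + k^2/0.71 GeV^2)^(-2)  =>  F^2(k^2) = (1 + k^2/0.71)^(-4)
L2 = 0.71  # GeV^2

def F2(k2):
    return (1.0 + k2 / L2) ** -4

# Large-k^2 behavior: k^8 * F^2(k^2) -> L2^4, i.e. F^2 ~ k^-8 in k.
ratio = (1e6 ** 4) * F2(1e6) / L2 ** 4
assert abs(ratio - 1.0) < 1e-3

# Equivalently: doubling k^2 suppresses F^2 by ~2^4 = 16 (doubling k, by 2^8).
assert abs(F2(2e6) / F2(1e6) * 16.0 - 1.0) < 1e-3
```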
Now we are able to write down the approximate (up to $ \sim \alpha^{2} $ inclusively) expression (in units of $ GeV^{-4}$) for the observed cross-section for $pp$ scattering with account of the Coulomb-nuclear interference ($ t\approx -\textbf{q}^{2} $; we do not explicitly indicate the $ s $-dependence in the amplitude):
\begin{equation}
16\pi s^{2} \frac{d\sigma_{C+N}^{pp}}{dt} = \mid T_{N}(q^{2})\mid^{2} + \alpha J_{1}+\alpha^{2} J_{2} + \mathcal{O}(\alpha^{3}).
\end{equation}
Here
\[J_{1} = \lbrace \frac{16\pi s\mathcal{F}^{2}(q^{2})}{q^{2}} ReT_{N}({q}^{2})+\frac{2}{\pi}\int\frac{dk^{2}\mathcal{F}^{2}(k^{2})}{k^{2}} dq'^{2}(- \lambda (q^{2}, q'^{2}, k^{2}))^{-1/2}_{+} Im [T_{N}(q^{2})T^{\ast}_{N}(q'^{2})]\rbrace,\]
and then we break $ J_{2} $, in its turn, into three terms:
\[J_{2} = J^{CC} _{2} + J^{CN}_{2} + J^{CNN}_{2},\]
where
$ J^{CC}_{2} $ is the term independent of the nuclear amplitude, $ J^{CN}_{2} $ is the term linear in the nuclear amplitude, and $ J^{CNN}_{2}$ is the term quadratic in the nuclear amplitude:
\[J^{CC} _{2} = [\frac{8\pi s\mathcal{F}^{2}({q}^{2})}{q^{2}}]^{2},\]
\[ J^{CN} _{2} = \frac{2sImT_{N}({q}^{2})}{q^{2}}\int \frac{dk^{2}dp^{2}}{k^{2}p^{2}}[q^{2}\mathcal{F}^{2}(k^{2})\mathcal{F}^{2}(p^{2})- (k^{2}\mathcal{F}^{2}(p^{2}) + p^{2}\mathcal{F}^{2}(k^{2}))\mathcal{F}^{2}(q^{2}) ]\times\]
\[(-\lambda (q^{2},k^{2},p^{2}))_{+}^{-1/2}+\frac{4 s\mathcal{F}^{2}({q}^{2})}{q^{2}}\int\frac{dk^{2}dq'^{2}\mathcal{F}^{2}(k^{2})}{k^{2}} \times\]
\[\times(-\lambda (q^{2},k^{2},q'^{2}))_{+}^{-1/2}Im(T_{N} (q'^{2}) -T_{N} (q^{2})),\]
\[J^{CNN}_{2} = \: \mid\int \frac{d{k}^{2}\mathcal{F}^{2}(k^{2})}{2\pi k^{2}} dq'^{2}(-\lambda (q^{2},k^{2},q'^{2}))_{+}^{-1/2}[T_{N} (q'^{2}) -T_{N} ({q^{2}})] \mid^{2}\]
\[-\frac{1}{(2\pi)^{2}}\int\frac{d\textbf{k}\mathcal{F}^{2}(k^{2})}{k^{2}}\frac{d\textbf{p}\mathcal{F}^{2}(p^{2})}{p^{2}}[ReT_{N}(\textbf{q})( ReT_{N}(\textbf{q}-\textbf{p}-\textbf{k})\]
\[-ReT_{N}(\textbf{q}-\textbf{p})-ReT_{N}(\textbf{q}-\textbf{k})+ReT_{N}(\textbf{q}))+ ImT_{N}(\textbf{q})(ImT_{N}(\textbf{q}-\textbf{p}-\textbf{k})\]
-ImT_{N}(\textbf{q}-\textbf{p})-ImT_{N}(\textbf{q}-\textbf{k})+ImT_{N}(\textbf{q}))]. \]
In order not to make Eq.(15) too unwieldy, we have kept vector arguments in the integration and in the scattering amplitudes in the last expression of the $ \alpha^{2} $ term.
To pass to invariant variables the integration measure $ d\textbf{k} d\textbf{p} $ is to be changed for
$ dk^{2}dp^{2}dq'^{2}dq''^{2} (-\lambda (q^{2},q'^{2},k^{2} ))_{+}^{-1/2}(-\lambda (q^{2},q"^{2},p^{2} ))_{+}^{-1/2}$ and the following substitutions should be made:
\[T_{N}(\textbf{q})\rightarrow T_{N}(q^{2}),T_{N}(\textbf{q}-\textbf{p}-\textbf{k})\]
\[\rightarrow T_{N} (\frac{(q'^{2}-k^{2}-q^{2})(q''^{2}-p^{2}-q^{2})+(-\lambda (q^{2},q'^{2},k^{2} ))_{+}^{1/2}(-\lambda (q^{2},q''^{2},p^{2} ))_{+}^{1/2}}{2q^{2}} +\] \[+ \: q'^{2}+q''^{2}-q^{2}); \: T_{N}(\textbf{q}-\textbf{k})\rightarrow T_{N}(q'^{2}),\: T_{N}(\textbf{q}-\textbf{p})\rightarrow T_{N}(q''^{2}).\]
This expression is certainly quite bulky, but we cannot avoid it if we keep the $ \mathcal{O}(\alpha^{2}) $ terms, which are important at the low $ q^{2} $ characteristic of the CNI region.
The plain fact is that it significantly differs from the expression one obtains by taking the squared modulus of the CKL amplitude (2),(3) used in Ref.\cite{TOT} for the extraction of the $ \rho $-parameter from the data. We believe that the application of our expression (15) given above can lead to essentially different values of $ \rho $ and, consequently, to different numerical and conceptual conclusions.
\section{ Conclusion and outlook}
In this note we have exhibited a new, relatively simple but mathematically consistent formula to deal with the Coulomb-nuclear interference, which minimizes the use of IR regularizations and modifies the previously applied formula for $ T_{C+N} $.
We have also shown that the usual practice of retaining only the $ \mathcal{O}(\alpha) $ terms in the QED perturbative expansion of the amplitude $ T_{C+N} $ leads to the loss of terms which can be important when passing to the cross-section, and we have explicitly calculated these terms. Their influence is potentially capable of changing the value of the parameter $ \rho $ and, hence, the physical interpretation of elastic proton-proton scattering at the LHC.
The phenomenological application of the results presented here is the subject of a separate publication \cite{Ezh}.
\section{Acknowledgements}
I am grateful to Vladimir Ezhela, Anatolii Likhoded, Jan Ka\v{s}par, Vojtech Kundr\'{a}t, Per Grafstr\"{o}m, Roman Ryutin and Nikolai Tkachenko for their interest in this work and for inspiring conversations and correspondence. I am particularly indebted to Anatolii Samokhin for very fruitful discussions of some peculiar details of the paper, as well as to the reviewer, whose comments were helpful in improving the presentation.
This work is supported by the RFBR Grant 17-02-00120.
Malabon (officially City of Malabon) is a city on the island of Luzon in the Philippines. It is located in Metro Manila and has 338,855 inhabitants (census of May 1, 2000).
The city is divided into 21 smaller districts, barangays, all of which are classified as urban districts.
References
National Statistical Coordination Board, Philippines
Localities in Metro Manila
# Building Tools with GitHub
Customize Your Workflow
Chris Dawson with Ben Straub
# Building Tools with GitHub
by Chris Dawson and Ben Straub
Copyright © 2016 Chris Dawson, Ben Straub. All rights reserved.
Printed in the United States of America.
Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (<http://safaribooksonline.com>). For more information, contact our corporate/institutional sales department: 800-998-9938 or _corporate@oreilly.com_.
* Editors: Brian MacDonald and Meghan Blanchette
* Production Editor: Nicholas Adams
* Copyeditor: Christina Edwards
* Proofreader: Kim Cofer
* Indexer: WordCo Indexing Services, Inc.
* Interior Designer: David Futato
* Cover Designer: Randy Comer
* Illustrator: Rebecca Demarest
* February 2016: First Edition
# Revision History for the First Edition
* 2016-02-05: First Release
See <http://oreilly.com/catalog/errata.csp?isbn=9781491933503> for release details.
The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. _Building Tools with GitHub_ , the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.
While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
978-1-491-93350-3
[LSI]
# Preface
This book contains stories about building software tools.
If you write software on a daily basis, you realize the act of writing software is the craft of creating tools. Software is nothing more than a tool. A spreadsheet is fundamentally a tool to add and subtract numbers. A video game is fundamentally a tool to alleviate boredom. Almost immediately after people started writing software tools we then discovered we needed more tools to permit us to write the tools we set out to build in the first place. Let's call these tools that are strictly to support writing software (rather than software tools for the general population) meta-tools.
One of the most important meta-tools in the software development world is Git. Git is a meta-tool that helps software developers manage the complexity that comes from writing software. Git allows software developers to store snapshots of their programs (and then easily restore those snapshots if necessary) and to easily collaborate with other programmers (a surprisingly complicated problem). Git is called a source code management (SCM) tool and though there were many other SCMs before Git, Git has taken the software world by storm like no other before it and now dominates the SCM landscape.
GitHub is a company that saw the immense potential of Git early on and built a layer of web services on top of the existing features found in Git. Not surprisingly, one of the factors behind its success was that GitHub employees embraced the ethos of writing meta-tools from the beginning. Building meta-tools requires the courage to take a little extra time to build a meta-tool rather than taking the easy route to get the public-facing software out the door. GitHub employees are proud of this prioritization and have written extensively about the benefits, which include easy on-boarding of new hires and a transparent workflow visible to all employees.
This book looks at the tools GitHub uses internally. The GitHub.com website is itself a meta-tool, and we discuss the many facets of the GitHub service. Specifically these technologies are the GitHub API and related GitHub technologies, Gollum wiki, Jekyll static page generator, and the chat robot called Hubot (if you are not familiar with any of these, we'll explain them fully in their respective chapters).
To reiterate, this book is not a reference of those technologies. This book is a story-book, a book that relates the process of building software meta-tools, explaining not only the technology specifics, but also the compromises, the realities of refactoring, and the challenges inherent to writing meta-tools in long narrative story form.
Meta-tools require a different mindset than what comes from building software available to the general population. Meta-tools are generally open source, which requires a different level of responsibility and usage. One could argue that software engineers are more demanding of quality than general users because software developers know they can take action to improve or fork software that does not work for them. Meta-tools enforce a higher level of contributory involvement, which makes automated tests almost a requirement. All of these concepts constitute the background story behind meta-tools, and we show you how they play out when building your own.
# Why APIs and Why the GitHub API?
Using an API to back an application is a common practice today: this is the future of application development. APIs provide a great pattern for making data accessible to the multiscreen world. If your application is backed by a remote service API, the first application could be a mobile app running on Apple's iOS operating system. Critically, if that business model does not turn out to be correct, you can respond quickly to changing requirements and iterate to build another application for an Android wearable. Or, perhaps you'll build an integrated car application, or any other console (or even nonconsole) application. As long as your applications can send and receive data using calls to a remote API you are free to build whatever user interface you want on whatever platform you want.
As an author, you could write and host your own API. Many frameworks for popular languages like Ruby, Go, or Java support building APIs using standard architectural styles like REST. Or, you could use a third-party API. In this book we'll focus on a third-party API: the GitHub API.
Why the GitHub API? The GitHub API is exceedingly relevant if you are building software because you are probably using GitHub to manage your software code. For those that aren't, you might be using Git without GitHub, and the GitHub API is useful to know there as well, as it layers the functionality of Git into a networked programming interface.
The GitHub API is perhaps the best designed API I've ever used. It is a Hypermedia API, which is an arguably successful attempt to make API clients resilient to API changes—a tricky problem. The API is well versioned. It is comprehensive, mapping closely to most features of Git. It is consistent across sections and well organized. The GitHub API is a great API on which to build applications, serving as a case study for a well-designed API.
# Structure of This Book
The GitHub API is extremely comprehensive, permitting access and modification of almost all data and metadata stored or associated with a Git repository. Here is a grouped summary of the sections of the API ordered alphabetically as they are on the GitHub API documentation site:
* Activity: notifications of interesting events in your developer life
* Gists: programmatically create and share code snippets
* Git Data: raw access to Git data over a remote API
* Issues: add and modify issues
* Miscellaneous: whatever does not fit into the general API categorization
* Organizations: access and retrieve organizational membership data
* Pull Requests: a powerful API layer on the popular merge process
* Repositories: modify everything and anything related to repositories
* Search: code-driven search within the entire GitHub database
* Users: access user data
* Enterprise: specifics about using the API when using the private corporate GitHub
In addition, though not a part of the API, there are other important technologies you should know about when using GitHub that are not covered in the API documentation:
* Jekyll and "gh-pages": hosting blogs and static documentation
* Gollum: wikis tied to a repository
* Hubot: a programmable chat robot used extensively at GitHub
Each of these sections of the GitHub technology stack are covered in various chapters (with two exceptions, which we explain next). The GitHub API documentation is a stellar reference you will use constantly when writing any application that talks to the API, but the chapters in this book serve a different purpose: these chapters are stories about building applications on top of the technologies provided by GitHub. Within these stories you will learn the trade-offs and considerations you will face when you use the GitHub API. Chapters in this book often cover multiple pieces of the API when appropriate for the story we are telling. We've generally tried to focus on a major API section and limit exposure to other pieces as much as possible, but most chapters do need to bring in small pieces of more than one section.
Here is a short synopsis of each chapter:
Chapter 1
This chapter covers a first look at the API through the command-line HTTP client called cURL. We talk a bit about the response format and how to parse it within the command line, and also document authentication. This is the only chapter that does not build an application from the technologies presented.

Chapter 2

This chapter covers the Gist API, as well as command-line tools and the Ruby language "Octokit" API client. We then take this API and build a simple Ruby server that is stored as a gist and displays gists.
Chapter 3
This chapter explains usage of the Gollum command-line tool and associated Ruby library (gem), which is backed by Grit, the C-language bindings for accessing Git repositories. We also document some details of the Git storage format and how it applies to storing large files inside of a Git repository, and show how to use the Git command-line tools to play with this information. We use Gollum and the Grit libraries to build an image management tool that also functions as a regular Gollum wiki, which can be published to GitHub.
Chapter 4
In this chapter we explore the Search API and build a GUI tool to search repositories on GitHub using Python.
Chapter 5
This chapter covers a relatively new part of the API that documents the interactions between third-party tools and your code. This chapter builds an application using C# and the Nancy .NET GitHub API libraries.
Chapter 6
If you push a specifically organized repository into GitHub, GitHub will host a fully featured blog, equivalent in most ways to a Wordpress site (well, except for the complexity part). This chapter documents how to format your repository, how to use Markdown within Jekyll, how to use programmatic looping constructs provided by Liquid Templates, and then shows how to import an entire website from the Internet Archive into the Jekyll format using Ruby. We show how to respectfully spider a site using caching, a valuable technique when using APIs or third-party public information.
Chapter 7
In this chapter we create a mobile application targeting the Android OS. Our application reads and writes information into a Jekyll repository from the Git Data section of the API. We show how to create user interface tests for Android that verify GitHub API responses using the Calabash UI testing tool.
Chapter 8
Hubot is a JavaScript (NodeJS) chat robot enabling technologists to go beyond developer operations ("DevOps") to a new frontier called "ChatOps." This chapter illustrates using the Activity and Pull Requests section of the API. In addition, we show how you can simulate GitHub notifications and how to write testable Hubot extensions (which is often a challenge when writing JavaScript code). We string all these pieces together and build a robot that automates assigning pull request review requests.
Chapter 9
Did you know you can host an entire "single-page application" on GitHub? We show how you can build a coffee shop information app backed by a flat file database hosted on GitHub written in the JavaScript language. Importantly, we show how you can write a testable JavaScript application that mocks out the GitHub API when needed.
We don't cover the Organizations API: this is a small facet of the API with only the ability to list organizations and modify metadata about your organization; once you have used other parts of the API this nook of the API will be very intuitive.
We also don't cover the Users section of the API. While you might expect it to be an important part of the API, the Users API is really nothing more than an endpoint to list information about users, add or remove SSH keys, adjust email addresses, and modify your list of followers.
There is not a specific chapter on issues. GitHub originally grouped issues and pull requests into the same API section, but with the growing importance of pull requests GitHub has separated them in the API documentation. In fact, they are still internally stored in the same database and pull requests are, at least for now, just another type of issue. Chapter 8 documents using pull requests and is a good reference for issues in that way.
The Enterprise API works almost exactly the same as the GitHub.com site API. We don't have a chapter telling a story about an Enterprise version of the API, but we do provide an appendix that contains a few notes about how the examples work when using an Enterprise server. We also provide the specific syntax for each of the languages used in the chapters that will make any of the examples provided work with an Enterprise server.
Through these stories about the technologies behind GitHub we hope to give you an inside look at the inner workings of the brain of a developer building on top of the GitHub API.
# Who You Are
This book should be an interesting source of information for people who have used Git or GitHub and want to "level-up" their skills related to these technologies. People without any experience using GitHub or Git should start with an introductory book on these technologies.
You should have good familiarity with at least one imperative modern programming language. You don't need to be an expert programmer to read this book, but having some programming experience and familiarity with at least one language is essential.
You should understand the basics of the HTTP protocol. The GitHub team uses a very standard RESTful approach for its API. You should understand the difference between a GET request and POST request and what HTTP status codes mean at the very least.
Familiarity with other web APIs will make traversing these chapters easier, although this book simultaneously aspires to provide a guide showing how a well-thought-out, well-designed, and well-tested web API creates a foundation for building fun and powerful tools. If you have not used web APIs extensively, but have experience using other types of APIs, you will be in good company.
# What You Will Learn
Much of the book focuses on the technical capabilities exposed by GitHub and the powerful GitHub API. Perhaps you feel constrained by using Git only from within a certain toolset; for example, if you are an Android developer using Git to manage your app source code and want to unlock Git in other places in your life as a developer, this book provides a wider vista to learn about the power of Git and GitHub. If you have fallen into using Git for your own projects and are now interested in using Git within a larger community, this book can teach you all about the "social coding" style pioneered and dogfooded by the GitHub team. This book provides a stepping stone for software developers who have used other distributed version control systems and are looking for a bridge to using their skills with Git and within a web service like GitHub.
Like any seasoned developer, automation of your tools is important to you. This book provides examples of mundane tasks converted into automated and repeatable processes. We show how to do this using a variety of languages talking to the GitHub API.
To make this book accessible to everyone, regardless of their editor or operating system, many of the programming samples work within the command line. If you are unfamiliar with the "command line" this book will give you a firm understanding of how to use it, and we bet you will find great power there. If you have hated the command line since your father forced you to use it when you were five, this is the perfect book to rekindle a loving relationship with the bash shell.
If you absorb not only the technical facets of using GitHub but also pay attention to the cultural and ideological changes offered behind the tools, you'll very likely see a new way of working in the modern age. We focus on these "meta" viewpoints as we discuss the tools themselves to help you see these extra opportunities.
Almost every chapter has an associated repository hosted on GitHub where you can review the code discussed. Fork away and take these samples into your own projects and tools!
Finally, we help you write testable API-backed code. Even the most experienced developers often find that writing tests for their code is a challenge, despite the massive body of literature connecting quality code with tests. Testing can be especially challenging when you are testing something backed by an API; it requires a different level of thinking than is found in strict unit testing. To help you get past this roadblock, whenever possible, this book shows you how to write code that interacts with the GitHub API and is testable.
# GitHub "First Class" Languages
There are two languages that are so fundamentally linked to GitHub that you do need to install and use them in order to get the most out of this book.
Ruby
A simple, readable programming language the founders of GitHub used extensively early in the life of the company.
JavaScript
The only ubiquitous browser-side programming language; its importance has grown to new heights with the introduction of NodeJS, rivaling even the popularity of Ruby on Rails as a server-side toolkit for web applications, especially for independent developers.
Undoubtedly, many of you picking up this book already have familiarity with Ruby or JavaScript/NodeJS. So, the basics and installation of them are in appendices in the back of the book. The appendices don't cover syntax of these languages; we expect you have experience with other languages as a prerequisite and can read code from any imperative language regardless of the syntax. Later chapters discuss facets of the API and go into language details at times, but the code is readable regardless of your familiarity with that particular language. These explanatory appendices discuss the history of these tools within the GitHub story as well as important usage notes like special files and installation options.
Your time will not be wasted if you install and play with these two tools. Between them you will have a solid toolset to begin exploration of the GitHub API. Several chapters in this book use Ruby or JavaScript, so putting in some time to learn at least a little bit will make the journey through this book richer for you.
# Operating System Prerequisites
We, the authors, wrote this book using MacBook Pros. MacBooks have a ubiquitous shell ("BASH") that works almost identically to the one found on any Linux machine. If you use either of these two operating systems, you will be able to run the code from any chapter.
If you use a Windows machine (or an OS that does not include the BASH shell) then some of the commands and code examples may not work without installing additional software.
An easy remedy is to use VirtualBox and Vagrant. VirtualBox is a freely available virtualization system for x86 hardware. Vagrant is a tool for managing development environments: using VirtualBox and Vagrant you can quickly install a Linux virtual machine. To do this, visit the downloads page for VirtualBox and Vagrant. Once you have installed these two tools, you can then install an Ubuntu Linux virtual machine with these two commands:
$ vagrant init hashicorp/precise32
$ vagrant up
# Who This Book Is Not For
If you are looking for a discussion of the GitHub API that focuses on a single language, you should know that we look at the API through many different languages. We do this to describe the API not only in the way the GitHub team designed it to work, but also in the aspirational way that client library authors made it work within diverse programming languages and communities. We think there is a lot to learn from this approach, but if you are interested in only a specific language and how it works with the GitHub API, this is not the book for you.
This book strives to prove that API-driven code is testable and that there is a benefit to doing so. This book does not intend to provide a manual on how to write perfectly tested code. We cover too many languages to end the healthy debates happening within each community about the right test frameworks. Instead, given our contention that most software projects have zero test coverage, this book tries to help you get past this significant roadblock. There is something transformational about writing tests if you have never done so before, and having these examples in hand, we hope, will allow you to transition to writing testable code for APIs. Some of the associated repositories have much greater test suites than are documented in this book, but we don't cover the entire set of edge cases in every situation.
# Conventions Used in This Book
The following typographical conventions are used in this book:
_Italic_
Indicates new terms, URLs, email addresses, filenames, and file extensions.
`Constant width`
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.
_`Constant width italic`_
Shows text that should be replaced with user-supplied values or by values determined by context.
This icon signifies a general note.
This icon indicates a warning or caution.
# Using Code Examples
Supplemental material (code examples, exercises, etc.) is available for download at _https://github.com/xrd/building-tools-with-github_.
This book is here to help you get your job done. In general, if example code is offered with this book, you may use it in your programs and documentation. You do not need to contact us for permission unless you're reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing a CD-ROM of examples from O'Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product's documentation does require permission.
We appreciate, but do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: " _Building Tools with GitHub_ by Chris Dawson and Ben Straub (O'Reilly). Copyright 2016 Chris Dawson and Ben Straub, 978-1-491-93350-3."
If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at _permissions@oreilly.com_.
# Safari® Books Online
_Safari Books Online_ is an on-demand digital library that delivers expert content in both book and video form from the world's leading authors in technology and business.
Technology professionals, software developers, web designers, and business and creative professionals use Safari Books Online as their primary resource for research, problem solving, learning, and certification training.
Safari Books Online offers a range of plans and pricing for enterprise, government, education, and individuals.
Members have access to thousands of books, training videos, and prepublication manuscripts in one fully searchable database from publishers like O'Reilly Media, Prentice Hall Professional, Addison-Wesley Professional, Microsoft Press, Sams, Que, Peachpit Press, Focal Press, Cisco Press, John Wiley & Sons, Syngress, Morgan Kaufmann, IBM Redbooks, Packt, Adobe Press, FT Press, Apress, Manning, New Riders, McGraw-Hill, Jones & Bartlett, Course Technology, and hundreds more. For more information about Safari Books Online, please visit us online.
# How to Contact Us
Please address comments and questions concerning this book to the publisher:
* O'Reilly Media, Inc.
* 1005 Gravenstein Highway North
* Sebastopol, CA 95472
* 800-998-9938 (in the United States or Canada)
* 707-829-0515 (international or local)
* 707-829-0104 (fax)
We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at _http://bit.ly/building-tools-with-github_.
To comment or ask technical questions about this book, send email to _bookquestions@oreilly.com_.
For more information about our books, courses, conferences, and news, see our website at _http://www.oreilly.com_.
Find us on Facebook: _http://facebook.com/oreilly_
Follow us on Twitter: _http://twitter.com/oreillymedia_
Watch us on YouTube: _http://www.youtube.com/oreillymedia_
# Acknowledgments
Chris wants to thank his lovely wife, Nicole. I hope that I have added to this book even a tiny bit of the wit and wisdom you provide to me and our family every day. My son Roosevelt's energy continues to inspire me and keep me going even when I am at my limits. To my daughter Charlotte, you are my little smiling Buddha. To my mother, who showed me how to write and, most importantly, why to write, which is something we need more of in the technology world. To Tim O'Brien who invited me into this project, thank you, and I hope we can collaborate again. To Bradley Horowitz, who demonstrates how small acts of kindness can have immeasurable impact. And, to David J. Groom, though we have never met face to face, your suggestions and excitement about the book early on came at a critical moment in the life of this book, and I thank you for channeling the excitement I hoped to cultivate with people who would one day pick up this book.
Ben would like to thank his wife, Becky, for her ongoing support and (when needed) push from behind. None of this would have happened without you.
# Chapter 1. The Unclad GitHub API
This chapter eases us into reading and writing data from the GitHub API. Successive chapters show you how to access information from the GitHub API using a variety of client libraries. These client libraries, by design, hide the nuts and bolts of the API from you, providing streamlined and idiomatic methods to view and modify data inside a Git repository hosted on GitHub. This chapter, however, gives you a naked viewpoint of the GitHub API and documents the details of the raw HTTP requests and responses. It also discusses the different ways to access public and private data inside of GitHub and where limitations exist. And it gives you an overview of the options for accessing GitHub data when running inside a web browser context where network access is restricted.
# cURL
There will be times when you want to quickly access information from the GitHub API without writing a formal program. Or when you want to quickly get access to the raw HTTP request headers and content. Or when you might even question the implementation of a client library and need confirmation it is doing the right thing from another vantage point. In these situations, cURL, a simple command-line HTTP tool, is the perfect fit. cURL, like the best Unix tools, is a small program with a very specific and purposefully limited set of features for accessing HTTP servers.
cURL, like the HTTP protocol it speaks intimately, is stateless: we will explore solutions to this limitation in a later chapter, but note that cURL works best with one-off requests.
# Installing cURL
cURL comes preinstalled on most OS X machines, and can easily be installed using Linux package managers (probably one of `apt-get install curl` or `yum install curl`). If you are using Windows or want to manually install it, go to _http://curl.haxx.se/download.html_.
Let's make a request. We'll start with the most basic GitHub API endpoint found at _https://api.github.com_:
$ curl https://api.github.com
{
"current_user_url": "https://api.github.com/user",
"current_user_authorizations_html_url":
"https://github.com/settings/connections/applications{/client_id}",
"authorizations_url": "https://api.github.com/authorizations",
"code_search_url":
"https://api.github.com/search/code?q={query}{&page,per_page,sort,order}",
"emails_url": "https://api.github.com/user/emails",
"emojis_url": "https://api.github.com/emojis",
...
}
We've abbreviated the response to make it more readable. A few salient things to notice: there are a lot of URLs pointing to secondary information, parameters are included in the URLs, and the response format is JSON.
What can we learn from this API response?
# Breadcrumbs to Successive API Paths
The GitHub API is a hypermedia API. Though a discussion on what constitutes hypermedia deserves an entire book of its own (check out O'Reilly's _Building Hypermedia APIs with HTML5 and Node_), you can absorb much of what makes hypermedia interesting by just looking at a response. First, you can see from the API response that each response contains a map with directions for the next responses you might make. Not all clients use this information, of course, but one goal behind hypermedia APIs is that clients can dynamically adjust their endpoints without recoding the client code. If the thought of GitHub changing an API because clients _should_ be written to handle new endpoints automatically sounds worrisome, don't fret too much: GitHub is very diligent about maintaining and supporting its API in a way that most companies would do well to emulate. But you should know that you can rely on having an API reference inside the API itself, rather than hosted externally in documentation, which very easily could turn out to be out of date with the API itself.
These API maps are rich with data. For example, they are not just URLs to content, but also information about how to provide parameters to the URLs. Looking at the previous example, the `code_search_url` key references a URL that obviously allows you to search within code on GitHub, but also tells you how to structure the parameters passed to this URL. If you have an intelligent client that can follow this programmatic format, you could dynamically generate the query without involving a developer who can read API documentation. At least that is the dream hypermedia points us to; if you are skeptical, at least know that APIs such as GitHub's encode documentation into themselves, and you can bet GitHub has test coverage to prove that this documentation matches the information delivered by the API endpoints. That's a strong guarantee that is sadly missing from many other APIs.
Now let's briefly discuss the format of all GitHub API responses: JSON.
# The JavaScript Object Notation (JSON) Format
Every response you get back from the GitHub API will be in the JSON (JavaScript Object Notation) format. JSON is a "lightweight data interchange format" (read more on the JSON.org website). There are other competing and effective formats, such as XML (Extensible Markup Language) or YAML (YAML Ain't Markup Language), but JSON is quickly becoming the de facto standard for web services.
A few of the reasons JSON is so popular:
* JSON is readable. JSON has a nice balance of human readability when compared to serialization formats like XML.
* JSON can be used within JavaScript with very little modification (and cognitive processing on the part of the programmer). A data format that works equally well on both the client and server side was bound to be victorious, as JSON has been.
You might expect that a site like GitHub, originally built on the Ruby on Rails stack (and some of that code is still live), would support specifying an alternative format like XML, but XML is no longer supported. Long live JSON.
JSON is very straightforward if you have used any other text-based interchange format. One note about JSON that is not always obvious or expected to people new to JSON is that the format only supports using double quotes, not single quotes.
We are using a command-line tool, cURL, to retrieve data from the API. It would be handy to have a simple command-line tool that also processes that JSON. Let's talk about one such tool next.
## Parsing JSON from the Command Line
JSON is a text format, so you could use any command-line text processing tool, such as the venerable AWK, to process JSON responses. There is one fantastic JSON-specific parsing tool that complements cURL that is worth knowing: _jq_. If you pipe JSON content (using the `|` character for most shells) into jq, you can then easily extract pieces of the JSON using _filters_.
# Installing jq
jq can be installed from source, using package managers like `brew` or `apt-get`, and there are binaries on the downloads page for OS X, Linux, Windows, and Solaris.
Going deeper into the prior example, let's pull out something interesting from the API map that we receive when we access _api.github.com_ :
$ curl https://api.github.com | jq '.current_user_url'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2004 100 2004 0 0 4496 0 --:--:-- --:--:-- --:--:-- 4493
"https://api.github.com/user"
What just happened? The jq tool parsed the JSON, and using the `.current_user_url` filter, it retrieved content from the JSON response. If you look at the response again, you'll notice it has key/value pairs inside an associative array. It uses the `current_user_url` as a key into that associative array and prints out the value there.
You will also notice that cURL printed out transfer time information. cURL printed this information to _standard error_, a stream that shell convention reserves for error and status messages and one that jq will correctly ignore (in other words, the JSON stream will not be corrupted by error messages). If we want to suppress that information and clean up the output we should use the `-s` switch, which runs cURL in "silent" mode.
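The separation of the two streams is easy to demonstrate without the network. In this sketch (plain shell, no GitHub-specific commands), the JSON goes to standard output and a progress-style message goes to standard error, so a consumer reading the pipe never sees the noise:

```shell
# Emit JSON on stdout and a progress-style message on stderr,
# mimicking what cURL does when run without the -s switch.
emit() {
  printf '%s\n' '{"current_user_url":"https://api.github.com/user"}'
  printf '%s\n' '100  2004  100  2004 ... progress meter ...' >&2
}

# The pipe carries only stdout; stderr is silenced here with 2>/dev/null.
# A JSON consumer (jq, or sed as a stand-in) sees clean input either way.
url=$(emit 2>/dev/null | sed 's/.*"current_user_url":"\([^"]*\)".*/\1/')
echo "$url"   # https://api.github.com/user
```

This is why piping cURL into jq works even when the progress meter is visible on your terminal: the meter travels on a different stream.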
It should be easy to understand how the jq filter is applied to the response JSON. For a more complicated request (for example, we might want to obtain a list of public repositories for a user), we can see the pattern for the jq pattern parameter emerging. Let's get a more complicated set of information, a user's list of repositories, and see how we can extract information from the response using jq:
$ curl -s https://api.github.com/users/xrd/repos
[
{
"id": 19551182,
"name": "a-gollum-test",
"full_name": "xrd/a-gollum-test",
"owner": {
"login": "xrd",
"id": 17064,
"avatar_url":
"https://avatars.githubusercontent.com/u/17064?v=3",
...
}
]
$ curl -s https://api.github.com/users/xrd/repos | jq '.[0].owner.id'
17064
This response is different structurally: instead of an associative array, we now have an array (multiple items). To get the first one, we specify a numeric index, and then key into the successive associative arrays inside of it to reach the desired content: the owner id.
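The same filter syntax scales up: `.[]` iterates over every element of an array, so you can pull one field from each repository in a single pass. A sketch using a canned two-repository response (the field names mirror the real API; no network access needed):

```shell
# A canned response shaped like /users/:user/repos (two repositories)
repos='[{"name":"a-gollum-test","owner":{"id":17064}},
        {"name":"another-repo","owner":{"id":17064}}]'

# .[] iterates over every array element; .name keys into each one.
echo "$repos" | jq -r '.[].name'   # prints a-gollum-test, then another-repo
```

The `-r` switch tells jq to print raw strings rather than JSON-quoted ones, which is usually what you want when feeding the output to other shell tools.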
jq is a great tool for checking the validity of JSON. As mentioned before, JSON key/values are stored only with double quotes, not single quotes. You can verify that JSON is valid and satisfies this requirement using jq:
$ echo '{ "a" : "b" }' | jq '.'
{
"a": "b"
}
$ echo "{ 'no' : 'bueno' }" | jq "."
parse error: Invalid numeric literal at line 1, column 7
The first JSON we pass into jq works, while the second, because it uses invalid single-quote characters, fails with an error. jq filters are strings passed as arguments, and the shell that provides the string to jq does not care if you use single quotes or double quotes, as you can see in the preceding code. The `echo` command, if you didn't already know, prints out whatever string you provide to it; when we combine this with the pipe character we can easily provide that string to jq through standard input.
jq is a powerful tool for quickly retrieving content from an arbitrary JSON request. jq has many other powerful features, documented at _https://stedolan.github.io/jq/_.
We now know how to retrieve some interesting information from the GitHub API and parse out bits of information from that response, all in a single line. But there will be times when you incorrectly specify parameters to cURL or the API, and the data is not what you expect. Now we'll learn about how to debug the cURL tool and the API service itself to provide more context when things go wrong.
## Debugging Switches for cURL
As mentioned, cURL is a great tool when you are verifying that a response is what you expect it to be. The response body is important, but often you'll want access to the headers as well. cURL makes getting these easy with the `-i` and `-v` switches. The `-i` switch prints out request headers, and the `-v` switch prints out both request and response headers (the `>` character indicates request data, and the `<` character indicates response data):
$ curl -i https://api.github.com
HTTP/1.1 200 OK
Server: GitHub.com
Date: Wed, 03 Jun 2015 19:39:03 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 2004
Status: 200 OK
X-RateLimit-Limit: 60
...
{
"current_user_url": "https://api.github.com/user",
...
}
$ curl -v https://api.github.com
* Rebuilt URL to: https://api.github.com/
* Hostname was NOT found in DNS cache
* Trying 192.30.252.137...
* Connected to api.github.com (192.30.252.137) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
...
* CN=DigiCert SHA2 High Assurance Server CA
* SSL certificate verify ok.
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: api.github.com
> Accept: */*
>
< HTTP/1.1 200 OK
* Server GitHub.com is not blacklisted
...
With the `-v` switch you get everything: DNS lookups, information on the SSL chain, and the full request and response information.
Be aware that if you print out headers, a tool like jq will get confused because you are no longer providing it with pure JSON.
This section shows us that there is interesting information not only in the body (the JSON data) but also in the headers. It is important to understand what headers are here and which ones are important. The HTTP specification requires a lot of these headers, and we can often ignore those, but there are a few that are vital when you start making more than just a few isolated requests.
# Important Headers
Three headers are present in every GitHub API response that tell you about the GitHub API rate limits: `X-RateLimit-Limit`, `X-RateLimit-Remaining`, and `X-RateLimit-Reset`. These limits are explained in detail in "GitHub API Rate Limits".
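Because these are ordinary response headers, you can pick them out of a saved `curl -i` response with standard text tools. A sketch over captured headers (the values below are illustrative, not live):

```shell
# Headers captured from a previous `curl -i` call (values illustrative)
headers='HTTP/1.1 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 57
X-RateLimit-Reset: 1376251941'

# Pull out the number of requests left in the current window:
# -F': ' splits each line on ": ", so $2 is the header value.
remaining=$(printf '%s\n' "$headers" | \
  awk -F': ' '/^X-RateLimit-Remaining:/ { print $2 }')
echo "$remaining"   # 57
```

A script can compare this value against a threshold and back off before GitHub starts rejecting its requests.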
The `X-GitHub-Media-Type` header contains information that will come in handy when you are starting to retrieve text or blob content from the API. When you make a request to the GitHub API you can specify the format you want to work with by sending an `Accept` header with your request.
Now, let's use a response to build another response.
# Following a Hypermedia API
We'll use the "map" of the API by hitting the base endpoint, and then use the response to manually generate another request:
$ curl -i https://api.github.com/
HTTP/1.1 200 OK
Server: GitHub.com
Date: Sat, 25 Apr 2015 05:36:16 GMT
...
{
"current_user_url": "https://api.github.com/user",
...
"organization_url": "https://api.github.com/orgs/{org}",
...
}
We can use the organizational URL and substitute `"github"` in the placeholder:
$ curl https://api.github.com/orgs/github
{
"login": "github",
"id": 9919,
"url": "https://api.github.com/orgs/github",
...
"description": "GitHub, the company.",
"name": "GitHub",
"company": null,
"blog": "https://github.com/about",
"location": "San Francisco, CA",
"email": "support@github.com",
...
"created_at": "2008-05-11T04:37:31Z",
"updated_at": "2015-04-25T05:17:01Z",
"type": "Organization"
}
This information allows us to do some forensics on GitHub itself. We get the company blog _https://github.com/about_. We see that GitHub is located in San Francisco, and we see that the creation date of the organization is May 11th, 2008. Reviewing the blog, we see a blog post from April that indicates GitHub launched as a company a month earlier. Perhaps organizations were not added to the GitHub site features until a month after the company launched?
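The manual substitution above is easy to script. A minimal sketch, assuming the `organization_url` template has already been read from the API map (the template string below is copied from the earlier response):

```shell
# Template as returned in the API map at https://api.github.com
template='https://api.github.com/orgs/{org}'

# Fill in the {org} placeholder; a hypermedia-aware client would do
# this for every templated URL it chooses to follow.
org_url=$(printf '%s' "$template" | sed 's/{org}/github/')
echo "$org_url"   # https://api.github.com/orgs/github

# The result can then be fed straight back to cURL:
#   curl -s "$org_url" | jq '.name'
```

This is the essence of a hypermedia client: read the map, expand a template, follow the link, without hardcoding the URL structure anywhere.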
So far all of our requests have retrieved publicly available information. But the GitHub API has a much richer set of information that is available only once we authenticate and access private information and publicly inaccessible services. For example, if you are using the API to write data into GitHub, you need to know about authentication.
# Authentication
There are two ways to authenticate when making a request to the GitHub API: username and passwords (HTTP Basic) and OAuth tokens.
## Username and Password Authentication
You can access protected content inside GitHub using a username and password combination. Username authentication uses HTTP Basic authentication, which cURL supports with the `-u` flag. HTTP Basic Authentication is synonymous with username and password authentication:
$ curl -u xrd https://api.github.com/rate_limit
Enter host password for user 'xrd': xxxxxxxx
{
"rate": {
"limit": 5000,
"remaining": 4995,
"reset": 1376251941
}
}
This cURL command authenticates into the GitHub API and then retrieves information about our own specific rate limits for our user account, protected information only available as a logged-in user.
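Under the hood, the `-u` switch just adds an `Authorization` header whose value is the base64-encoded `username:password` pair. You can reproduce it by hand (the credentials below are obviously fake):

```shell
# HTTP Basic is nothing more than base64("username:password") in a header;
# printf (not echo) keeps a trailing newline out of the encoded value.
credentials=$(printf '%s' 'user:pass' | base64)
echo "$credentials"   # dXNlcjpwYXNz

# Equivalent to `curl -u user:pass ...`:
#   curl -H "Authorization: Basic $credentials" https://api.github.com/rate_limit
```

Seeing this makes one of the downsides discussed next concrete: base64 is an encoding, not encryption, so anything holding this header effectively holds your password.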
### Benefits of username authentication
Almost any client library you use will support HTTP Basic authentication. All the GitHub API clients we looked at support usernames and passwords. And writing your own specific client is easy, as this is a core feature of the HTTP standard; if you use any standard HTTP library when building your own client, you will be able to access content inside the GitHub API.
### Downsides to username authentication
There are many reasons username and password authentication is the wrong way to manage your GitHub API access:
* HTTP Basic is an old protocol that never anticipated the granularity of web services. It is not possible to specify only certain features of a web service if you ask users to authenticate with username/passwords.
* If you use a username and password to access GitHub API content from your cell phone, and then access API content from your laptop, you have no way to block access to one without blocking the other.
* HTTP Basic authentication does not support extensions to the authentication flow. Many modern services now support two-factor authentication and there is no way to inject this into the process without changing the HTTP clients (web browsers, for example) or at least the flow they expect (making the browser repeat the request).
All of these problems are solved (or at least supported) with OAuth flows. Given all these concerns, the only time you will want to use username and password authentication is when convenience trumps all other considerations.
## OAuth
OAuth is an authentication mechanism where tokens are tied to functionality or clients. In other words, you can specify what features of a service you want to permit an OAuth token to carry with it, and you can issue multiple tokens and tie those to specific clients: a cell phone app, a laptop, a smart watch, or even an Internet of Things toaster. And, importantly, you can revoke tokens without impacting other tokens.
The main downside to OAuth tokens is that they introduce a level of complexity that you may not be familiar with if you have only used HTTP Basic. HTTP Basic requests generally only require adding an extra header to the HTTP request, or an extra flag to a client tool like cURL.
OAuth solves the problems just described by linking tokens to scopes (specified subsets of functionality inside a web service) and issuing as many tokens as you need to multiple clients.
### Scopes: specified actions tied to authentication tokens
When you generate an OAuth token, you specify the access rights you require. Though our examples create the token using HTTP Basic, once you have the token, you no longer need to use HTTP Basic in successive requests. If this token is properly issued, the OAuth token will have permissions to read and write to public repositories owned by that user.
The following cURL command uses HTTP Basic to initiate the token request process:
$ curl -u username -d '{"scopes":["public_repo"]}' \
https://api.github.com/authorizations
{
"id": 1234567,
"url": "https://api.github.com/authorizations/1234567",
"app": {
"name": "My app",
"url": "https://developer.github.com/v3/oauth_authorizations/",
"client_id": "00000000000000000000"
},
"token": "abcdef87654321",
...
}
The JSON response, upon success, has a token you can extract and use for applications that need access to the GitHub API.
If you are using two-factor authentication, this flow requires additional steps, all of which are documented within Chapter 8.
To use this token, you specify the token inside an authorization header:
$ curl -H "Authorization: token abcdef87654321" ...
Scopes clarify how a service or application will use data inside the GitHub API. This makes it easy to audit how you are using the information if this was a token issued for your own personal use. But, most importantly, this provides valuable clarity and protection for those times when a third-party application wants to access your information: you can be assured the application is limited in what data it can access, and you can revoke access easily.
### Scope limitations
There is one major limitation of scopes to be aware of: you cannot grant fine-grained access to only certain repositories. If you provide access to any of your private repositories, you are providing access to all of them.
It is likely that GitHub will change the way scopes work and address some of these issues. The great thing about the way OAuth works is that to support these changes you will simply need to request a new token with the scope modified, but otherwise, the application authentication flow will be unchanged.
Be very careful about the scopes you request when building a service or application. Users are (rightly) paranoid about the data they are handing over to you, and will evaluate your application based on the scopes requested. If they don't think you need that scope, be sure to remove it from the list you provide to GitHub when authorizing and consider escalation to a higher scope after you have developed some trust with your users.
### Scope escalation
You can initially ask for a very limited scope and later request a greater one. For example, when a user first accesses your application, you could request only the user scope in order to create a user object inside your service, and escalate privileges only when your application actually needs repository information for that user. At that point the user will need to approve or deny your request, but asking for everything up front (before you have a relationship with the user) often results in a user abandoning the login.
Now let's get into the specifics of authentication using OAuth.
### Simplified OAuth flow
OAuth has many variants, but GitHub uses OAuth2. OAuth2 specifies a flow where:
1. The application requests access
2. The service provider (GitHub) requests authentication: username and password usually
3. If two-factor authentication is enabled, ask for the OTP (one-time password) code
4. GitHub responds with a token inside a JSON payload
5. The application uses the OAuth token to make requests of the API
A real-world flow is described in full in Chapter 8.
Now let's look at the variety of HTTP status codes GitHub uses to communicate feedback when using the API.
# Status Codes
The GitHub API uses HTTP status codes to tell you definitive information about how your request was processed. If you are using a basic client like cURL, it will be important to validate the status code before you look at any of the data retrieved. If you are writing your own API client, pay close attention to the status code before anything else. If you are new to the GitHub API, it is worth reviewing the response codes thoroughly until you are familiar with the various conditions that can cause errors when making a request.
## Success (200 or 201)
If you have worked with any HTTP clients whatsoever, you know that the HTTP status code "200" means success. GitHub will respond with a 200 status code when your request destination URL and associated parameters are correct. If your request creates content on the server, then you will get a 201 status code, indicating successful creation on the server.
$ curl -s -i https://api.github.com | grep Status
Status: 200 OK
## Naughty JSON (400)
If your payload (the JSON you send to a request) is invalid, the GitHub API will respond with a 400 error, as shown here:
$ curl -i -u xrd -d 'yaml: true' -X POST https://api.github.com/gists
Enter host password for user 'xrd':
HTTP/1.1 400 Bad Request
Server: GitHub.com
Date: Thu, 04 Jun 2015 20:33:49 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 148
Status: 400 Bad Request
...
{
"message": "Problems parsing JSON",
"documentation_url":
"https://developer.github.com/v3/oauth_authorizations/#create...authorization"
}
Here we attempt to generate a new gist by using the endpoint described at the Gist API documentation. We'll discuss gists in more detail in a later chapter. This request fails because we are not sending JSON (the payload looks like it could be YAML, which we will discuss in Chapter 6). The payload is sent using the `-d` switch. GitHub responds with advice on where to find the documentation for the correct format at the `documentation_url` key inside the JSON response. Notice that we use the `-X POST` switch and value to tell cURL to make a POST request to GitHub.
## Improper JSON (422)
If any of the fields in your request are invalid, GitHub will respond with a 422 error. Let's attempt to fix the previous request. The documentation indicates the JSON payload should look like this:
{
"description": "the description for this gist",
"public": true,
"files": {
"file1.txt": {
"content": "String file contents"
}
}
}
What happens if the JSON is valid, but the fields are incorrect?
$ curl -i -u chris@burningon.com -d '{ "a" : "b" }' -X POST \
https://api.github.com/gists
Enter host password for user 'chris@burningon.com':
HTTP/1.1 422 Unprocessable Entity
...
{
"message": "Invalid request.\n\n\"files\" wasn't supplied.",
"documentation_url": "https://developer.github.com/v3"
}
There are two important things to note: first, we get a 422 error, which indicates the JSON was valid, but the fields were incorrect. We also get a response that indicates why: we are missing the `files` key inside the request payload.
## Successful Creation (201)
We've seen what happens when the JSON is invalid, but what happens when the JSON is valid for our request?
$ curl -i -u xrd \
-d '{"description":"A","public":true,"files":{"a.txt":{"content":"B"}}}' \
https://api.github.com/gists
Enter host password for user 'xrd':
HTTP/1.1 201 Created
...
{
"url": "https://api.github.com/gists/4a86ed1ca6f289d0f6a4",
"forks_url":
"https://api.github.com/gists/4a86ed1ca6f289d0f6a4/forks",
"commits_url":
"https://api.github.com/gists/4a86ed1ca6f289d0f6a4/commits",
"id": "4a86ed1ca6f289d0f6a4",
"git_pull_url": "https://gist.github.com/4a86ed1ca6f289d0f6a4.git",
...
}
Success! We created a gist and got a 201 status code indicating things worked properly. To make our command more readable we used the backslash character to allow parameters to span across lines. Also, notice the JSON does not require whitespace, which we have completely removed from the string passed to the `-d` switch (in order to save space and make this command a little bit more readable).
## Nothing Has Changed (304)
304s are like 200s in that they say to the client: yes, your request succeeded. They give a little bit of extra information, however, in that they tell the client that the data has not changed since the last time the same request was made. This is valuable information if you are concerned about your usage limits (and in most cases you will be). We have not yet explained how rate limits work, so let's discuss that and then return to demonstrate triggering a 304 response code by using conditional headers.
## GitHub API Rate Limits
GitHub tries to limit the rate at which users can make requests to the API. Anonymous requests (requests that haven't authenticated with either a username/password or OAuth information) are limited to 60 requests an hour. If you are developing a system to integrate with the GitHub API on behalf of users, clearly 60 requests per hour isn't going to be sufficient.
This rate limit is increased to 5000 requests per hour if you are making an authenticated request to the GitHub API, and while this rate is two orders of magnitude larger than the anonymous rate limit, it still presents problems if you intend to use your own GitHub credentials when making requests on behalf of many users.
For this reason, if your website or service requests information from the GitHub API on behalf of your users, you should consider using OAuth and making those requests with each user's own authorization token. Requests made with a token connected to another user's GitHub account count against that user's rate limit, not your own.
There are actually two rate limits: the _core_ rate limit and the _search_ rate limit. The rate limits explained in the previous paragraphs were for the core rate limit. For search, requests are limited to 20 requests per minute for authenticated user requests and 10 requests per minute for anonymous requests (as the sample `/rate_limit` response shown later illustrates). The assumption here is that search is a more infrastructure-intensive request to satisfy, so tighter limits are placed on its usage.
Note that GitHub tracks anonymous requests by IP address. This means that if you are behind a firewall with other users making anonymous requests, all those requests will be grouped together.
## Reading Your Rate Limits
Reading your rate limit is straightforward—just make a GET request to `/rate_limit`. This will return a JSON document that tells you the limit you are subject to, the number of requests you have remaining, and the timestamp (in seconds since 1970). Note that this timestamp is in the Coordinated Universal Time (UTC) time zone.
The following command listing uses cURL to retrieve the rate limit for an anonymous request. This response is abbreviated to save space in this book, but you'll notice that the quota information is supplied twice: once in the HTTP response headers and again in the JSON response. The rate limit headers are returned with every request to the GitHub API, so there is little need to make a direct call to the /rate_limit API:
$ curl https://api.github.com/rate_limit
{
"resources": {
"core": {
"limit": 60,
"remaining": 48,
"reset": 1433398160
},
"search": {
"limit": 10,
"remaining": 10,
"reset": 1433395543
}
},
"rate": {
"limit": 60,
"remaining": 48,
"reset": 1433398160
}
}
Sixty requests over the course of an hour isn't very much, and if you plan on doing anything interesting, you will likely exceed this limit quickly. If you are hitting up against the 60-requests-per-hour limit, you will want to investigate making authenticated requests to the GitHub API. We'll show how when we discuss authenticated requests.
Calls to the /rate_limit API are not deducted from your rate limits. And remember, rate limits reset every hour, at the timestamp given in the `reset` field.
# Conditional Requests to Avoid Rate Limitations
If you are querying the GitHub APIs to obtain activity data for a user or a repository, there's a good chance that many of your requests won't return much activity. If you check for new activity once every few minutes, there will be time periods over which no activity has occurred. These constant polls still use up requests in your rate limit even though there's no new activity to be delivered.
In these cases, you can send the conditional HTTP headers `If-Modified-Since` and `If-None-Match` to tell GitHub to return an HTTP 304 response code telling you that nothing has been modified. When you send a request with a conditional header and the GitHub API responds with an HTTP 304 response code, this request is not deducted from your rate limit.
The following command listing is an example of passing in the `If-Modified-Since` HTTP header to the GitHub API. Here we've specified that we're only interested in receiving content if the Twitter Bootstrap repositories have been altered after 7:49 PM GMT on Sunday, August 11, 2013. The GitHub API responds with an HTTP 304 response code that also tells us that the last time this repository changed was a minute earlier than our cutoff date:
$ curl -i https://api.github.com/repos/twbs/bootstrap \
-H "If-Modified-Since: Sun, 11 Aug 2013 19:48:59 GMT"
HTTP/1.1 304 Not Modified
Server: GitHub.com
Date: Sun, 11 Aug 2013 20:11:26 GMT
Status: 304 Not Modified
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 46
X-RateLimit-Reset: 1376255215
Cache-Control: public, max-age=60, s-maxage=60
Last-Modified: Sun, 11 Aug 2013 19:48:39 GMT
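The `If-Modified-Since` value is just an HTTP date. In Ruby, for example, `Time#httpdate` produces exactly this format, so a polling client can remember when it last checked and hand that time back on the next request (a sketch; the cutoff value mirrors the cURL example above):

```ruby
require 'time'

# Timestamp of our last successful poll (hypothetical bookkeeping value).
last_poll = Time.utc(2013, 8, 11, 19, 48, 59)

# Time#httpdate renders the RFC-compliant HTTP date the header expects.
header = "If-Modified-Since: #{last_poll.httpdate}"
puts header   # => "If-Modified-Since: Sun, 11 Aug 2013 19:48:59 GMT"
```

On a 304 response the client keeps its cached data and its rate limit untouched; on a 200 it stores the new data and updates `last_poll`.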
The GitHub API also understands HTTP caching tags. An ETag, or Entity Tag, is an HTTP header that is used to control whether or not content you have previously cached is the most recent version. Here's how your systems would use an ETag:
* Your server requests information from an HTTP server.
* Server returns an ETag header for a version of a content item.
* Your server includes this ETag in all subsequent requests:
* If the server has a newer version it returns new content + a new ETag.
* If the server doesn't have a newer version it returns an HTTP 304.
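The steps above can be sketched as a tiny caching client. Everything here is hypothetical (the server is simulated in-process), but it shows the shape of the logic: store the ETag alongside the body, echo it back as `If-None-Match`, and reuse the cached body on a 304:

```ruby
# A toy in-process "server": returns 304 when the client already holds
# the current version, otherwise 200 with a body and a fresh ETag.
CURRENT_ETAG = '"462c74009317cf64560b8e395b9d0cdd"'

def fake_server(if_none_match)
  if if_none_match == CURRENT_ETAG
    { status: 304 }
  else
    { status: 200, etag: CURRENT_ETAG, body: '{"name":"bootstrap"}' }
  end
end

# Client-side fetch with a cache keyed by URL: { etag:, body: }.
def fetch(url, cache)
  cached   = cache[url]
  response = fake_server(cached && cached[:etag])
  if response[:status] == 304
    cached[:body]                    # nothing changed; reuse cached copy
  else
    cache[url] = { etag: response[:etag], body: response[:body] }
    response[:body]
  end
end

cache  = {}
url    = "https://api.github.com/repos/twbs/bootstrap"
first  = fetch(url, cache)   # 200: body cached along with its ETag
second = fetch(url, cache)   # 304: served from cache, no quota spent
```

With a real HTTP client the only change is that `fake_server` becomes a network request carrying the `If-None-Match` header.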
The following command listing demonstrates two commands. The first cURL call to the GitHub API generates an ETag value, and the second passes this ETag value as an `If-None-Match` header. You'll note that the second response is an HTTP 304, which tells the caller that there is no new content available:
$ curl -i https://api.github.com/repos/twbs/bootstrap
HTTP/1.1 200 OK
Cache-Control: public, max-age=60, s-maxage=60
Last-Modified: Sun, 11 Aug 2013 20:25:37 GMT
ETag: "462c74009317cf64560b8e395b9d0cdd"
{
"id": 2126244,
"name": "bootstrap",
"full_name": "twbs/bootstrap",
....
}
$ curl -i https://api.github.com/repos/twbs/bootstrap \
-H 'If-None-Match: "462c74009317cf64560b8e395b9d0cdd"'
HTTP/1.1 304 Not Modified
Status: 304 Not Modified
Cache-Control: public, max-age=60, s-maxage=60
Last-Modified: Sun, 11 Aug 2013 20:25:37 GMT
ETag: "462c74009317cf64560b8e395b9d0cdd"
Use of conditional request headers is encouraged to conserve resources and make sure that the infrastructure that supports GitHub's API isn't asked to generate content unnecessarily.
At this point we have been accessing the GitHub API from a cURL client, and as long as our network permits it, we can do whatever we want. The GitHub API is accessible in other situations as well, like from within a browser context, and certain restrictions apply there, so let's discuss that next.
# Accessing Content from the Web
If you are using the GitHub API from a server-side program or the command line, then you are free to issue any network calls as long as your network permits it. If you are attempting to access the GitHub API from within a browser using JavaScript and the XHR (XMLHttpRequest) object, then you should be aware of limitations imposed by the browser's same-origin policy. In a nutshell, standard XHR requests made from JavaScript cannot reach domains other than the one from which the original page was retrieved. There are two options for getting around this restriction, one clever (JSON-P) and one fully supported but slightly more onerous (CORS).
## JSON-P
JSON-P is a browser hack, more or less, that allows retrieval of information from servers outside of the same-origin policy. JSON-P works because `<script>` tags are not checked against the same-origin policy; in other words, your page can include references to content on servers other than the one from which the page originated. With JSON-P, you load a JavaScript file that resolves to a specially encoded data payload wrapped in a callback function you implement. The GitHub API supports this syntax: you request a script with a parameter on the URL indicating what callback you want the script to execute once loaded.
We can simulate this request in cURL:
$ curl https://api.github.com/?callback=myCallback
/**/myCallback({
"meta": {
"X-RateLimit-Limit": "60",
"X-RateLimit-Remaining": "52",
"X-RateLimit-Reset": "1433461950",
"Cache-Control": "public, max-age=60, s-maxage=60",
"Vary": "Accept",
"ETag": "\"a5c656a9399ccd6b44e2f9a4291c8289\"",
"X-GitHub-Media-Type": "github.v3",
"status": 200
},
"data": {
"current_user_url": "https://api.github.com/user",
"current_user_authorizations_html_url":
"https://github.com/settings/connections/applications{/client_id}",
"authorizations_url": "https://api.github.com/authorizations",
...
}
})
If you used the same URL we used in the preceding code inside a script tag on a web page (`<script src="https://api.github.com/?callback=myCallback" type= "text/javascript"></script>`), your browser would load the content displayed in the preceding code, and then a JavaScript function you defined called `myCallback` would be executed with the data shown. This function could be implemented like this inside your web page:
<script>
function myCallback( payload ) {
if( 200 == payload.status ) {
document.getElementById("success").innerHTML =
payload.data.current_user_url;
} else {
document.getElementById("error").innerHTML =
"An error occurred";
}
}
</script>
This example demonstrates taking the `current_user_url` from the data inside the payload and putting it into a DIV, one that might look like `<div id="success"> </div>`.
Because JSON-P works via `<script>` tags, only GET requests to the API are supported. If you only need read-only access to the API, JSON-P can fulfill that need in many cases, and it is easy to configure.
If JSON-P seems too limiting or hackish, CORS is a more complicated but official way to access external services from within a web page.
## CORS Support
CORS is the W3C (a web standards body) approved way to access content from a different domain than the original host. CORS requires that the server be properly configured in advance; the server must indicate when queried that it allows cross-domain requests. If the server effectively says "yes, you can access my content from a different domain," then CORS requests are permitted. The HTML5Rocks website has a great tutorial explaining many details of CORS.
Because CORS allows the same types of XHR requests you can make against the originating domain, you can make requests beyond GET to the GitHub API: POST, PUT, PATCH, and DELETE. Between JSON-P and CORS you have two options for accessing content from the GitHub API inside of web browsers. The choice is between the simplicity of JSON-P and the power and extra configuration of CORS.
We can prove using cURL that the GitHub API server is responding correctly for CORS requests. In this case we only care about the headers, so we use the `-I` switch, which tells cURL to make a HEAD request, telling the server not to respond with body content:
$ curl -I https://api.github.com
HTTP/1.1 200 OK
Server: GitHub.com
...
X-Frame-Options: deny
Content-Security-Policy: default-src 'none'
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: ETag, Link, X-GitHub-OTP,
X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset,
X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval
Access-Control-Allow-Origin: *
X-GitHub-Request-Id: C0F1CF9E:07AD:3C493B:557107C7
Strict-Transport-Security: max-age=31536000; includeSubdomains;
preload
We can see the `Access-Control-Allow-Credentials` header is set to true. For requests beyond simple GETs and POSTs, the browser automatically makes a _preflight_ request to verify that this header (and others, like `Access-Control-Allow-Origin`) is set correctly and permits requests from your page's origin to proceed. Once the browser has used the headers to confirm that CORS is permitted, you can make XHR requests to the GitHub API domain as you would any other XHR request going into the same domain.
We've covered much of the details of connecting and dissecting the GitHub API, but there are a few other options to know about when using it. One of them is that you can use the GitHub API service to provide rendered content when you need it.
## Specifying Response Content Format
When you send a request to the GitHub API, you have some ability to specify the format of the response you expect. For example, if you are requesting content that contains text from a commit's comment thread, you can use the `Accept` header to ask for the raw Markdown or for the HTML this Markdown generates. You also have the ability to specify the version of the GitHub API you are using. At this point, you can specify either version 3 or beta of the API.
### Retrieving formatted content
The `Accept` header you send with a request can affect the format of text returned by the GitHub API. As an example, let's assume you wanted to read the body of a GitHub Issue. An issue's body is stored in Markdown and will be sent back in the request by default. If we wanted to render the response as HTML instead of Markdown, we could do this by sending a different `Accept` header, as the following cURL commands demonstrate:
$ URL='https://api.github.com/repos/rails/rails/issues/11819'
$ curl -s $URL | jq '.body'
"Hi, \r\n\r\nI have a problem with strong...." 
$ curl -s $URL | jq '.body_html'
null 
$ curl -s $URL \
-H "Accept: application/vnd.github.html+json" | jq '.body_html'
"<p>Hi, </p>\n\n<p>I have a problem with..." 
Without specifying an extra header, we get the internal representation of the data, sent as Markdown.
Note that if we don't request the HTML representation, we don't see it in the JSON by default.
If we use a customized `Accept` header like in the third instance, then our JSON is populated with a rendered version of the body in HTML.
Besides "raw" and "html" there are two other format options that influence how Markdown content is delivered via the GitHub API. If you specify "text" as a format, the issue body would have been returned as plaintext. If you specify "full" then the content will be rendered multiple times including the raw Markdown, rendered HTML, and rendered plaintext.
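The four format options map onto `Accept` media types following the same pattern as the `application/vnd.github.html+json` value used above. A quick sketch of building them (the generalization of the pattern to the other three formats is an assumption drawn from that example):

```ruby
# Media types for the Markdown format options; the pattern generalizes
# the "html" example used earlier in this section.
FORMATS = %w[raw text html full]

accept_headers = FORMATS.map do |fmt|
  [fmt, "application/vnd.github.#{fmt}+json"]
end.to_h

puts accept_headers["html"]   # => "application/vnd.github.html+json"
```

A client can then pick a format by name and set the header once, instead of scattering media-type strings through the code.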
In addition to controlling the format of text content, you can also retrieve GitHub blobs either as raw binary or as Base64-encoded text. When retrieving commits, you can also specify that the content be returned either as a diff or as a patch. For more information about these fine-grained controls for formatting, see the GitHub API documentation.
The GitHub team has already provided very thorough documentation on their API with examples using cURL. Bookmark this URL: _https://developer.github.com/v3/_. You'll use it often. Do note that this URL is tied, obviously, to the current API "version 3," so this URL will change when a new version is released.
# Summary
In this chapter we learned how to access the GitHub API from the simplest client available: the command-line cURL HTTP tool. We also explored the API by looking at the JSON and played with a command-line tool (jq) that when paired with cURL gives us the ability to quickly find information in the often large body of data the GitHub API provides. We learned about the different authentication schemes supported by GitHub, and also learned about the possibilities and trade-offs when accessing the GitHub API from within a browser context.
In the next chapter we will look at gists and the Gist API. We'll use Ruby to build a gist display program, and host all source files for the application as a gist itself.
# Chapter 2. Gists and the Gist API
GitHub revolutionized software development by responding to a deep desire to share information. But calling it just "sharing" does a disservice to the tools GitHub provides: these tools remove barriers to communication and streamline workflows. These tools also arose at exactly the moment when the information technology revolution forced companies to adopt more open technologies that assisted an emerging remote workforce.
Gists serve part of this need: they permit intimate code sharing and reuse, refactoring, and experimentation in a way not served by the heavyweight tools predating it. In this chapter we will explore using gists to share code, and then build an application hosted as a gist that uses the Gist API.
# Easy Code Sharing
Gists are straightforward to create. You copy a snippet of code into the large text box in the center, optionally enter a description or filename, and then choose between a public or secret gist. Once your gist has been created you are presented with a URL to share. Gists autodetect the language in most cases and are syntax highlighted accordingly when displayed, as in Figure 2-1.
###### Figure 2-1. Documenting JSON using a gist
There are other services that do this: pastebin was the first, and there are many others that offer variations on code sharing. But gists are not simply a pasting service. Gists are first-class repositories: forkable, editable, and expansive. We'll go over the basics of what gists are and how to create them, and then show how they allow you to share code that is also a live application.
# Gists Are Repositories
Every gist created is a tiny repository. You can update gists and see the history using `git log`. You can download gists, hack on the repository, and `git push` them back into the repository on _gist.github.com_ (which will republish them onto the publicly facing web page). And, you can "fork" gists, just like any other repository.
You are allowed to branch within gist repositories; however, branches are not displayed inside of _gist.github.com/_. But if you need the benefits of branching when using gists you can branch normally inside a repository and the branch information is retained on the upstream repository after you push it up.
You can have an unlimited number of public and secret gists. Secret gists can, in many cases, replace private repositories, and these secret gists don't count against the limited amount of private repositories you have with paid GitHub accounts. Or, you can make a gist public, and share that URL to mailing lists or anywhere you need public feedback.
As there are two types of gists (public and secret), it is important to understand the differences between them. Public gists are searchable. Secret gists are not searchable, but they are accessible to anyone who knows the URL. Don't post any code to a gist you need to keep secret as once you put it there, it is only as safe as the URL is secret.
Most people share gists through the URL, but you can embed gists inside of other contexts (like blogs) and get a simple and pretty snippet of code.
## Embedding Gists Inside HTML
To embed inside of an HTML page look for the "Embed this gist" box to the left of a gist. Copy the code listed there (which will look something like `<script src="https://gist.github.com/xrd/8923697.js"></script>`) and paste it into your HTML.
If you wish to include only a particular file from the gist (if it contains multiple files), then add `?file=hi.rb` to the end of the URL specified in the `src` attribute.
## Embedding Inside Jekyll Blogs
Jekyll blogs (explained in Chapter 6) can easily host gists using a special syntax. The shortcut `{% gist 8138797 %}` will embed a public gist, which would be found at __http://gist.github.com/8138797__. If you want to use a specific file within the gist, add a filename to the gist code like `{% gist 8138797 hi.rb %}`. Secret gists can also be embedded. If you use a secret gist, prefix the username of the account holder in the gist like so: `{% gist xrd/8138797 hi.rb %}`.
Now let's look at creating gists from outside the GitHub.com site, using the command-line.
# Gist from the Command Line
`gem install gist` will install a command line tool that helps create gists. You can use it simply by typing the command, and then entering the data you want to post as a gist:
$ gist
(type a gist. <ctrl-c> to cancel, <ctrl-d> when done)
{ "foo" : "bar" }
https://gist.github.com/9106765
The `gist` command will return the link to the gist just created. Gists are created anonymously by default. You can log in using the `--login` switch. Once you do this, your gists will be linked to your account:
$ gist --login
Obtaining OAuth2 access_token from github.
GitHub username: xrd
GitHub password:
2-factor auth code: 787878
Success! https://github.com/settings/applications
You can pipe text to the `gist` command to use the contents of that file:
$ echo '{ "foo" : "bar" }' | gist
https://gist.github.com/9106799
You can also `cat` a file to gist:
$ cat MyJavaFile.java | gist
https://gist.github.com/9345609
Gists are often used to show interesting or troublesome code, and there are times when you don't want to display the entirety of a file. In this case the command-line `grep` tool can be useful; `grep` searches for a specific piece of code and with the right switches can include several lines of context around that code inside a gist. This command looks for the function `myFunction` inside the _MyJavaFile.java_ file and then prints the next 20 lines of context and stores it as a gist:
$ grep -A 20 myFunction MyJavaFile.java | gist
https://gist.github.com/9453069
Adding the `-o` switch automatically opens the gist inside your default web browser. You can also copy the gist URL to the clipboard using the `-c` switch. Or, you can copy the contents of your clipboard into a gist using the `-P` switch.
There are many other fun features of the `gist` command. To learn more run the `gist` command with the `--help` switch.
As gists are themselves repositories, you can use them for dual purposes: for hosting code samples, and for code samples that are themselves fully working and packaged applications inside a gist repository.
# Gists as Fully Functioning Apps
Let's build a simple Sinatra application to showcase how code hosted as a gist can also be a living application. Sinatra is a Ruby library for creating dead-simple web servers. A Sinatra program can be as simple as this:
require 'sinatra'
get '/hi' do
"Hello World!"
end
Create a gist for this by visiting _gist.github.com_. Enter in the text exactly as shown and then choose public gist.
You now have a share-friendly gist of code anyone can use to review. More importantly, this is a repository with executable code. To clone it, look for the Clone URL to the right of the gist itself. You will likely see a Git protocol URL and an HTTPS URL. If you are cloning the URL and intend only to read the gist, you can use the HTTPS URL. You technically can push changes once you have cloned a repository using the HTTPS URL but not if you have two-factor authentication enabled. In most cases it is easier and more flexible to use the Git protocol URL.
Let's clone it now:
$ git clone git@gist.github.com:8138797.git
Once you have cloned the repository, go inside it. You'll see a list of files, a list that right now includes only one file:
$ cd 8138797
$ ls
hi.rb
This code is executable: to run it enter `ruby hi.rb`.
If you have not used Sinatra with Ruby before, this will cause an error. This program requires a library called "sinatra," which you have not yet installed. We could write a README file, or add documentation into this file itself. A better way to guarantee the user installs the proper libraries is a _Gemfile_ , which declares which libraries are required and from where to install them. That sounds like the best way:
$ printf "source 'https://rubygems.org'\ngem 'sinatra'" > Gemfile
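For reference, the `printf` command writes this two-line _Gemfile_ :

```ruby
source 'https://rubygems.org'
gem 'sinatra'
```

Anyone who clones the gist can now run `bundle` to install exactly the libraries the code needs.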
The `bundle` command (from the bundler gem) will install Sinatra and the associated dependencies:
$ bundle
Using rack (1.5.2)
Using rack-protection (1.5.1)
Using tilt (1.4.1)
Using sinatra (1.4.4)
Using bundler (1.3.5)
Your bundle is complete!
Use `bundle show [gemname]` to see where a bundled gem is installed.
Why did we do things this way? Because now we can add the Gemfile to our repository locally, and then publish into our gist for sharing on the Web. Our repository now not only has the code, but a well-known manifest file that explains the necessary components when running the code.
# Gists that Render Gists
Let's add to our application and use the Octokit Ruby gem to pull all public gists for any user we specify. The Octokit library is the official Ruby library for accessing the GitHub API. Why would we want to make a gist that displays other gists? Self-referential meta code is all the rage, the modern-day response to René Magritte's famous work: "Ceci n'est pas une pipe."
Add a view _index.erb_ at the root of our directory:
<html>
<body>
User has <%= count %> public gists
</body>
</html>
Add the Octokit gem to our Gemfile:
gem "octokit"
Run `bundle` to install Octokit. Then, modify our _hi.rb_ app to look like this:
require 'sinatra'
require 'octokit'
set :views, "."
get '/:username' do |username|
user = Octokit.user username
count = user.public_gists
erb :index, locals: { :count => count }
end
Our filesystem should look like this, with three files:
$ ls -1
Gemfile
hi.rb
index.erb
Restart Sinatra by running Ctrl-C and then `ruby hi.rb`. If you visit __http://localhost:4567/xrd__ in your browser, you will see the count of public gists for user `xrd` (Figure 2-2); modify the username in the URL to see the public gist count for any GitHub user.
###### Figure 2-2. Displaying the gist count
## Going Deeper into the Gist API
The GitHub API uses hypermedia instead of basic resource-driven APIs. If you use a client like Octokit, the hypermedia details are hidden behind an elegant Ruby client. But there is a benefit to understanding how hypermedia works when you need to retrieve deeper information from the GitHub API.
Most RESTful APIs come with a "sitemap," generally an API reference document that tells a user which endpoints to use. You view the resources available from that API and then apply some HTTP verb to do something to them. Hypermedia thinks of an API differently. Hypermedia APIs describe themselves inside their responses using "affordances." What this means is that the API might respond like this:
{
"_links": {
"self": {
"href": "http://shop.oreilly.com/product/0636920030300.do"
}
},
"id": "xrd",
"name": "Chris Dawson"
}
In this payload, you can see that there is an id (`"xrd"`) and a name (`"Chris Dawson"`). This particular payload was forked from the HAL explanation at the HAL Primer document, and you can find a more detailed explanation of these concepts there.
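A hypermedia-aware client treats the `_links` section as data to navigate rather than URLs to hardcode. A minimal Ruby sketch against the payload above (using only the standard `json` library; the `link` helper is hypothetical):

```ruby
require 'json'

payload = <<~HAL
  {
    "_links": { "self": { "href": "http://shop.oreilly.com/product/0636920030300.do" } },
    "id": "xrd",
    "name": "Chris Dawson"
  }
HAL

doc = JSON.parse(payload)

# Look the link up by its relation name at runtime; if the server moves
# the resource, the href changes but this client code does not.
def link(doc, rel)
  doc.dig("_links", rel, "href")
end

puts link(doc, "self")   # => "http://shop.oreilly.com/product/0636920030300.do"
```

This is the loose coupling described below: the client depends on relation names ("self"), not on any particular URL layout.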
The important thing to note about hypermedia APIs is that payloads contain metadata about data itself and metadata about the possible options of operating on the data. RESTful APIs typically provide a mapping outside of the payload. You have to join the API sitemap with the data in an ad hoc way when using RESTful APIs; with hypermedia APIs your client can react to the payload itself correctly and intelligently without knowing anything about a sitemap stored in human-readable documentation.
This loose coupling makes APIs and their clients flexible. In theory, a hypermedia API works intuitively with a hypermedia-aware client. If you change the API, the client, as it understands hypermedia, can react and still work as expected. Using a RESTful API means that clients must be updated (a newer version of the client must be installed) or the client code must be upgraded. Hypermedia APIs can alter their backend, and then the client, as long as it is hypermedia-aware, can automatically and dynamically determine the right way to access information from the response itself. In other words, with a hypermedia client the API backend can change and your client code should not need to.
This is explained in great detail in the book _Building Hypermedia APIs with HTML5 and Node_ (O'Reilly).
## Using Hypermedia Data from Octokit
Now that you know a little about hypermedia, let's navigate it using Octokit:
* Start at a resource, with code like `user = Octokit.user "xrd"`. This begins the initialization of the client.
* `user` now is an object filled with the actual data of the resource. In this case, you could call a method like `user.followers` to see a meager follower count.
* `user` also has hypermedia references. You can see these by calling `user.rels`. This retrieves the relationships described in the hypermedia links.
* Relationships (found by calling `user.rels`) include avatar, self, followers, etc.
* Use a relationship by calling the `get.data` method to retrieve and access the data from the GitHub API (`followers = user.rels[:followers].get.data`).
* Calling `.get.data` populates an array of the followers (paged if it exceeds 100 items).
Let's extend our Sinatra app to retrieve actual data about the user's gists by using hypermedia references:
require 'sinatra'
require 'octokit'
set :views, "."
helpers do
def h(text)
Rack::Utils.escape_html(text)
end
end
get '/:username' do |username|
gists = Octokit.gists username, :per_page => 5
erb :index, locals: { :gists => gists, username: username }
end
The _index.erb_ file contains code to iterate over each gist and pull its content. Our response object is an array of gists, and each gist's `files` attribute responds to a method called `fields`, which lists the filenames available in that gist. Looking up each filename in `files` yields an object whose hypermedia `raw` relationship points at the file's contents; we retrieve that content using the Octokit method `.get.data`:
<html>
<body>
<h2>User <%= username %>'s last five gists</h2>
<% gists.each do |g| %>
<% g[:files].fields.each do |f| %>
<b><%= f %></b>:
<%= h g[:files][f.to_sym].rels[:raw].get.data %>
<br/>
<br/>
<% end %>
<% end %>
</body>
</html>
Now we see the gists and the contents, as in Figure 2-3.
###### Figure 2-3. Last five gists, with details
# Summary
In this chapter we looked at gists and learned how they can be used to share code snippets. We built a simple application and stored it as a gist. This application retrieves data from the GitHub API using our first higher-level client library (the Octokit library for Ruby). We also went deeper into how hypermedia works and how a client library makes use of hypermedia metadata.
In the next chapter we will look at Gollum, the GitHub wiki. This chapter provides an introduction to the Rugged Ruby library for accessing Git repositories and the Ruby library for accessing GitHub.
# Chapter 3. GitHub Wikis with Gollum
Wikis have revolutionized the way we create and digest information. It turns out that they are a great complement to technical projects (code repositories) because they allow nontechnical users to contribute in ways other than adding code. Gollum is an open source wiki created by GitHub. Just as Git has revolutionized collaborative editing of code, Gollum wikis layer the benefits of Git onto the widely used wiki publishing workflow. Gollum wikis are themselves repositories that generally annotate other typically code-centric repositories. GitHub makes it easy to associate a wiki with any repository.
In this chapter we'll explore the basics of using Gollum, creating a wiki on GitHub and then learning how to edit it on GitHub and as a repository on our local machine. We will then create a Gollum wiki by hand from the command line, and show the bare minimum set of files to call something a Gollum repository. Finally, we will build a simple image organization tool that allows us to edit a Gollum wiki in an entirely different way, but still publishes information into GitHub as a regular Gollum wiki, exploring a little bit of the internals of Git along the way.
This chapter demonstrates code that modifies a Git repository programmatically. You will be able to follow along without possessing a deep understanding of the internals of Git. And, a good supplement to this chapter (and later chapters as well) is the _Version Control with Git_ book from O'Reilly.
# "The Story of Smeagol..."
At its most basic form, a Gollum wiki is a Git repository with a single file, _Home.ext_ ( _ext_ would be any of the supported wiki markup formats, which we will talk about later).
## Repository Linked Wikis
Any repository on GitHub, public or private, can have an associated Gollum wiki. To create a wiki linked to your repository, visit the repository page and then look in the rightmost column. You'll see an icon that looks like a book, next to which will be the word "Wiki," as in Figure 3-1.
###### Figure 3-1. Accessing the associated wiki from the sidebar
Clicking this link will bring you to a page where you are asked to create a wiki for the first time. GitHub will ask you to create the "Home" page, which is the starting point in a Gollum wiki (Figure 3-2). GitHub will automatically create a page template with the project name; you can customize this information to suit your own needs. Clicking "Save Page" will save your first page and create the wiki for you.
###### Figure 3-2. Genesis of a new wiki, creating the home page
Your wiki is now as public as your repository is public. Public repositories have public wikis, accessible to anyone. Private repositories have private wikis, accessible only to those users or organizations that have rights to edit the repository data.
Let's review the markup options for Gollum wikis now.
## Markup and Structure
Gollum files can be written in any of the supported "Github Markup" formats, which include ASCIIdoc, Creole, Markdown, Org Mode, Pod, RDoc, ReStructuredText, Textile, and MediaWiki. The variety of markup languages brings flexibility but it can be difficult to know which one to use. Markdown (and its variant cousins) is the most popular markup language on GitHub, and is well liked on other popular sites like Stack Overflow. If you are unsure about which language to use, Markdown is a safe bet because it is ubiquitous across GitHub. Chapter 6 has a much deeper overview of Markdown.
If you do choose Markdown, in addition to the standard vanilla Markdown language tags, Gollum adds its own set of wiki-specific tags. There are often subtle (or conflicting) differences from other wiki markup so it is worth reviewing the Gollum repository documentation page. We'll go over the most important ones here.
### Links
Links obviously convert into the `<a>` HTML tag. Each format has its own linking syntax: in Markdown you write `[link text](http://example.com)`. Gollum adds its own link tag: `[[Link]]`.
In addition:
* You can add a link title using the bar character: `[[http://foobar.com|A link to foobar]]`.
* Links can be either external or internal links.
* A link like `[[Review Images]]` will be converted to a relative link to the page _review-images.ext_ (where _.ext_ is the preferred extension you are using with your wiki, most likely Markdown).
Wikis are generally a collection of pages linked together in myriad ways, and this assumption about the structure of links makes writing pages easier.
As we mentioned, there are differences between Gollum wiki tags and other wikis, despite their having similar syntax. One such example is MediaWiki, where links with titles use the opposite ordering `[[A link to foobar|http://foobar.com]]`, so caveat emptor.
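As a quick illustration of the page-name convention described above, here is a hypothetical helper (`wiki_page_filename` is our own name, not part of Gollum) that maps a `[[Bracketed Link]]` title to the relative filename the text says Gollum resolves it to:

```ruby
# Sketch of the convention described above: a [[Bracketed Link]] maps to a
# lowercased, hyphenated filename using the wiki's preferred extension.
# (Gollum's real resolution logic handles more cases than this.)
def wiki_page_filename(link_text, ext = "md")
  link_text.downcase.gsub(/\s+/, "-") + ".#{ext}"
end

wiki_page_filename("Review Images")  # => "review-images.md"
```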
### Code snippets
Gollum (the wiki) was invented at GitHub, a company dedicated to improving the lives of software developers, so it stands to reason Gollum wikis would support insertion of code snippets. To include a snippet of code, use three backticks, followed by an optional language name, and close the block of code using three more backticks. If you use the language name, Gollum will do proper syntax highlighting for most languages:
```ruby
def hello
puts "hello"
end
```
Gollum at one point supported inclusion of files from any GitHub repository (and any branch!) using a syntax like this:
```ruby:github:xrd/TeddyHyde/blob/master/Gemfile```
Unfortunately, this no longer works. According to current documentation for Gollum, this tag allows inclusion of files from the parent repository:
```ruby:/lib/gollum/app.rb```
But I found this to be broken as well. At the time of writing, it tragically appears that there is no way to insert code from the parent repository (or any other repository) into your wiki content.
### Structural components
Gollum includes capabilities to add sidebars, headers, and footers. If you include a file called __Sidebar.ext_ inside your repository, you'll see it as a sidebar for every file rendered. Sidebars are automatically added to any file and any file from subdirectories that do not have their own sidebar files. If you wanted to add sidebars specific to a subdirectory, add another sidebar file in the subdirectory and this file will override the top-level sidebar file.
### No styling or JavaScript
For security reasons, Gollum strips out all CSS and JavaScript from raw markup files. You can include your own JavaScript or CSS file when running Gollum from the command line (discussed momentarily) using the `--custom-css` or `--custom-js` switches, but there is no way to include these files on a wiki when your Gollum wiki is hosted on GitHub.
### Inserting images
Images are inserted into your document using the same tag format `[[ceo.png]]`: this adds the correct HTML tags to include an image named _ceo.png_ inside your page. This basic syntax can be extended for additional functionality. For example, to add a frame and an `alt` tag, you could use syntax like `[[ceo.png|frame|alt=Our CEO relaxing on the beach]]`. This creates the proper HTML tags for the same image, and also adds a frame and alt text (helpful for better context; the extra information is used by screen readers for visually impaired users as well). Review the documentation on the Gollum repository for more details about the breadth of the image options.
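To make the option syntax concrete, here is a hypothetical parser for the `[[file|option|key=value]]` form described above; it is a sketch for illustration only, not Gollum's actual implementation:

```ruby
# Illustrative parser for the [[file|option|key=value]] tag syntax.
# Bare options (like "frame") become true; key=value pairs are kept as strings.
def parse_image_tag(tag)
  inner = tag[/\A\[\[(.*)\]\]\z/, 1]
  file, *rest = inner.split("|")
  options = {}
  rest.each do |part|
    key, value = part.split("=", 2)
    options[key] = value || true
  end
  [file, options]
end

parse_image_tag("[[ceo.png|frame|alt=Our CEO relaxing on the beach]]")
# => ["ceo.png", {"frame"=>true, "alt"=>"Our CEO relaxing on the beach"}]
```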
You can also add images using the editor on GitHub. But you'll notice that either way you are adding a link to an image and that there is no way to upload images into GitHub from the editor (Figure 3-3).
###### Figure 3-3. No image upload, only image URLs
For nontechnical users, this makes Gollum wikis on GitHub almost unusable if they need to add images. Let's address this problem by building our own customized image-centric Gollum editor that still interoperates with regular Gollum wikis. We can put this editor in front of nontechnical users, allowing them to add images, and then publish the wiki into GitHub as is.
# Hacking Gollum
Would an image editor based on Gollum be of general use? On many software teams there is tension between the design team and the software team stemming from the fact that designers generally don't like using source-code tools to manage images. This causes issues when software developers rely on designs that are rapidly changing: coders quickly get out of sync with the latest designs. As a wiki, Gollum is the perfect tool to bridge this gap between designers and coders: wikis are easy to read and modify by nontechnical users. Since Gollum is a hackable wiki, we can build our own workflow tool that allows designers to manage images and coders to easily see those changes in a source-code repository.
This will be a dual-purpose repository. We can use the repository with Gollum as a standard wiki, and we can use it with our application to enter data in a more powerful way than Gollum permits from its default interface. The data will still be compatible with Gollum and will be hosted on GitHub.
To begin, install the Gollum Ruby gem and then initialize our repository:
$ gem install gollum
$ mkdir images
$ cd images
$ git init .
$ printf "### Our home" > Home.md
$ git add Home.md
$ git commit -m "Initial commit"
We've just created a wiki compatible with Gollum. Let's see what it looks like inside Gollum. Run the `gollum` command then open __http://localhost:4567/__ in your browser, as shown in Figure 3-4.
###### Figure 3-4. Viewing the wiki home page running on our laptop
As you can see, this tiny set of commands was enough to create the basics of the Gollum wiki structure.
If you edit a Gollum wiki from the command line, be aware that Gollum only looks inside the repository data for files. If you have added something to the working directory, or have staged files in your index but not yet committed them, they will not be visible to Gollum.
Now let's begin creating the web app that will help us store images inside a Gollum wiki.
# The Starting Point of a Gollum Editor
Now we will create our custom editor. We'll use Sinatra, a Ruby library that provides a simple DSL (domain-specific language) for building web applications. First, create a file called _image.rb_ and put the following contents inside it:
require 'sinatra'
require 'gollum-lib'
wiki = Gollum::Wiki.new(".")
get '/pages' do
"All pages: \n" + wiki.pages.collect { |p| p.path }.join( "\n" )
end
Then, create the Gemfile, install the dependencies, and run the web application:
$ echo "source 'https://rubygems.org'
gem 'sinatra', '1.4.5'
gem 'gollum-lib', '4.1.0'" >> Gemfile
$ bundle install
Fetching gem metadata from https://rubygems.org/..........
Resolving dependencies...
Installing charlock_holmes (0.7.3)
Using diff-lcs (1.2.5)
Installing github-markup (1.3.3)
Using mime-types (1.25.1)
...
$ bundle exec ruby image.rb
$ open http://localhost:4567/pages
We specify at least version 4.1.0 of `gollum-lib` because its interface and list of supporting libraries have changed between releases. We then run within the bundler context (using gems installed from this Gemfile rather than system gems) using the `bundle exec ruby image.rb` command.
You'll see a report of the files that exist in our Gollum wiki right now. We've only added one file, the _Home.md_ file.
# Programmatically Handling Images
Let's add to our server. We want to support uploading ZIP files into our system that we will then unpack and add to our repository, as well as add a list of these files to our wiki. Modify our _image.rb_ script to look like this:
require 'sinatra'
require 'gollum-lib'
require 'tempfile'
require 'zip'
require 'rugged'
def index( message=nil )
response = File.read(File.join('.', 'index.html'))
response.gsub!( "<!-- message -->\n",
"<h2>Received and unpacked #{message}</h2>" ) if message
response
end
wiki = Gollum::Wiki.new(".")
get '/' do
index()
end
post '/unpack' do
@repo = Rugged::Repository.new('.')
@index = Rugged::Index.new
zip = params[:zip][:tempfile]
Zip::File.open( zip ) { |zipfile|
zipfile.each do |f|
contents = zipfile.read( f.name )
filename = f.name.split( File::SEPARATOR ).pop
if contents and filename and filename =~ /(png|jpe?g|gif)$/i
puts "Writing out: #{filename}"
end
end
}
index( params[:zip][:filename] )
end
We'll need an _index.html_ file as well, so add that:
<html>
<body>
<!-- message -->
<form method='POST' enctype='multipart/form-data' action='/unpack'>
Choose a zip file:
<input type='file' name='zip'/>
<input type='submit' name='submit'>
</form>
</body>
</html>
This server script receives a POST request at the `/unpack` mount point and retrieves a ZIP file from the parameters passed into the script. It then opens the ZIP file (stored as a temp file on the server side), iterates over each file in the ZIP, strips the full path from the filename, and then prints out that filename (if it looks like an image) to our console. Regardless of whether we are accessing the root of our server, or have just posted to the `/unpack` mount point, we always need to render our index page. When we do render it after unzipping, we replace a comment stored in the index file with a status message indicating the script received the correct file we posted.
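The filename handling in the `/unpack` route can be exercised in isolation. This sketch (using a hypothetical `image_filename` helper, with the extension regex broadened to also accept `.jpeg`) shows the same strip-the-path-then-filter logic:

```ruby
# Strip any directory components from a ZIP entry name and keep the
# result only if it looks like an image file; otherwise return nil.
def image_filename(entry_name)
  filename = entry_name.split(File::SEPARATOR).pop
  filename if filename =~ /(png|jpe?g|gif)\z/i
end

image_filename("photos/2014/IMG_1234.png")  # => "IMG_1234.png"
image_filename("photos/notes.txt")          # => nil
```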
We need to add the new Ruby libraries (RubyZip and Rugged) to our Gemfile: update the required gems using the following commands, and then rerun our Sinatra server script:
$ echo "gem 'rubyzip', '1.1.7'
gem 'rugged', '0.23.2'" >> Gemfile
$ bundle install
$ bundle exec ruby image.rb
Rugged requires the libgit2 libraries (the pure C libraries for accessing Git repositories). Rugged gives you access to modification of Git repositories in the elegance of the Ruby language but with the speed of C. However, as this library is based on libgit2, and libgit2 requires a C compiler, you will need to install this toolset first to install Rugged. On OS X this can look like `brew install cmake` or `apt-get install cmake` for Linux.
Then, we can open __http://localhost:4567/__ and test uploading a ZIP file full of images. You'll see output similar to this in your console after uploading a ZIP file:
...
[2014-05-07 10:08:49] INFO WEBrick 1.3.1
[2014-05-07 10:08:49] INFO ruby 2.0.0 (2013-05-14)
[x86_64-darwin13.0.0]
== Sinatra/1.4.5 has taken the stage on 4567 for development with
backup from WEBrick
[2014-05-07 10:08:49] INFO WEBrick::HTTPServer#start: pid=46370
port=4567
Writing out: IMG1234.png
Writing out: IMG5678.png
Writing out: IMG5678.png
...
We are not doing anything beyond printing out the names of the images in the ZIP. We'll actually insert them into our Git repository in the next section.
# Using the Rugged Library
Our end goal for this script is to add files to our Gollum wiki, which means adding files to the repository that backs our Gollum wiki. The Rugged library handles the grunt work of this type of task easily. Rugged is the successor to the original Ruby library for Git (called Grit). Gollum, at the time of writing, uses the Grit libraries, which also provide a binding to the libgit2 library, a "portable, pure C implementation of the Git core methods." Grit has been abandoned (though there are unofficial maintainers) and the Gollum team intends to use Rugged as the long-term library backing Gollum. Rugged is written in Ruby and (provided you like Ruby) is a more elegant way to interface with a Git repository than raw Git commands. As you might expect, Rugged is maintained by several employees of GitHub.
To have our script modify the Git repository, we'll no longer print the filename (using the `puts` method inside the ZIP decode block) and will instead call a new method named `write_file_to_repo`. And, at the end of the ZIP block, we'll call a method named `build_commit`, which builds the commit from our new files. Our new file (omitting the unchanged code at the head of the file) looks like this:
post '/unpack' do
@repo = Rugged::Repository.new('.')
@index = Rugged::Index.new
zip = params[:zip][:tempfile]
Zip::File.open( zip ) { |zipfile|
zipfile.each do |f|
contents = zipfile.read( f.name )
filename = f.name.split( File::SEPARATOR ).pop
if contents and filename and filename =~ /(png|jpe?g|gif)$/i
write_file_to_repo contents, filename # Write the file
end
end
build_commit() # Build a commit from the new files
}
index( params[:zip][:filename] )
end
def get_credentials
contents = File.read File.join( ENV['HOME'], ".gitconfig" )
@email = $1 if contents =~ /email = (.+)$/
@name = $1 if contents =~ /name = (.+)$/
end
def build_commit
get_credentials()
options = {}
options[:tree] = @index.write_tree(@repo)
options[:author] = { :email => @email, :name => @name, :time => Time.now }
options[:committer] = { :email => @email, :name => @name, :time => Time.now }
options[:message] ||= "Adding new images"
options[:parents] = @repo.empty? ? [] : [ @repo.head.target ].compact
options[:update_ref] = 'HEAD'
Rugged::Commit.create(@repo, options)
end
def write_file_to_repo( contents, filename )
oid = @repo.write( contents, :blob )
@index.add(:path => filename, :oid => oid, :mode => 0100644)
end
As you can see from the code, Rugged handles a lot of the grunt work required when creating a commit inside a Git repository. Rugged has a simple interface to creating a blob inside your Git repository (`write`), and adding files to the index (the `add` method), and also has a simple and clean interface to build the tree object (`write_tree`) and then build the commit (`Rugged::Commit.create`).
To ease the burden of hardcoding our commit credentials, we implement a method called `get_credentials` that loads up your credentials from a file called _.gitconfig_ located in your home directory. You probably have this if you have used Git for anything at all on your machine, but if this file is missing, this method will fail. On my machine this file looks like the following code snippet. The `get_credentials` method simply loads up this file and parses it for the name and email address. If you wanted to load the credentials using another method, or even hardcode them, you can just modify this method to suit your needs. The instance variables `@email` and `@name` are then used in the `build_commit()` method:
[user]
name = Chris Dawson
email = xrdawson@gmail.com
[credential]
helper = cache --timeout=3600
...
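The parsing inside `get_credentials` is easy to test in isolation; here is a sketch with a hypothetical `parse_gitconfig` helper that applies the same regular expressions to a config string:

```ruby
# Extract the name and email from gitconfig-style text. Ruby's ^ and $
# are line anchors by default, so these regexes match within a multiline
# string without any extra flags.
def parse_gitconfig(contents)
  email = $1.strip if contents =~ /email = (.+)$/
  name  = $1.strip if contents =~ /name = (.+)$/
  { :name => name, :email => email }
end

config = "[user]\n\tname = Chris Dawson\n\temail = xrdawson@gmail.com\n"
parse_gitconfig(config)
# => {:name=>"Chris Dawson", :email=>"xrdawson@gmail.com"}
```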
Let's verify that things are working correctly after uploading a ZIP file. Jump into a terminal window after uploading a new file and run this command:
$ git status
To our surprise, we will see something like this:
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
deleted: images/3190a7759f7f668.../IMG_20120825_164703.jpg
deleted: images/3190a7759f7f668.../IMG_20130704_151522.jpg
deleted: images/3190a7759f7f668.../IMG_20130704_174217.jpg
We just added those files; why is Git reporting them as deleted?
To understand why this happens, remember that in Git there are three places files can reside: the working directory, the staging area or index, and the repository itself. Your working directory is the set of local files you are working on. The `git status` command describes itself as "show the working tree status." Rugged operates on the repository itself, and the Rugged calls in the preceding code operated on the index and then built a commit. This is important to note because our files will not exist in our working directory if we only write them using the Rugged calls, and if we do this, we cannot reference them inside our wiki page when we are running Gollum locally. We'll fix this in the next section.
We've now added the files to our repository, but we have not exposed these files inside our wiki. Let's modify our server script to write out each file to a wiki page for review. As we mentioned in the previous section, we need to make sure we write the files to both the working directory and the repository (using the Rugged library `write` call). Then we can generate a Review file that details all the images uploaded.
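The Review page itself is straightforward string building. As a sketch (using a hypothetical `review_page_contents` helper; a complete method would also need to write the file and commit it via Rugged), each uploaded file gets a heading and a Gollum image tag pointing at its path:

```ruby
# Build the Markdown body of a review page: one heading plus one Gollum
# image tag ([[dir/file]]) per uploaded file.
def review_page_contents(dir, files)
  contents = "## Uploaded images\n\n"
  files.each do |f|
    contents += "### #{f} \n[[#{dir}/#{f}]]\n\n"
  end
  contents
end

puts review_page_contents("images/7507409", ["IMG_1.jpg", "IMG_2.jpg"])
```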
# Optimizing for Image Storage
If a designer uploads the same image twice, what happens? Our code writes the uploaded image to a path on disk that is based on the parent SHA hash of the repository (and this means we will always write the file to a different path, even when the file is the same as a previous uploaded file). It would look to an untrained eye like we are adding the file multiple times. However, the nature of Git permits us to add the same file multiple times without incurring any additional storage cost beyond the first addition (and the minimal cost of a tree structure). When a file is added to a Git repository, an SHA hash is generated from the file contents. For example, generating the SHA hash from an empty file will always return the same SHA hash:
$ echo -en "blob 0\0" | shasum
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
$ printf '' | git hash-object -w --stdin
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
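The same computation can be reproduced in Ruby, which makes it clear that a blob's SHA-1 depends only on its contents (plus a small header), never on its path:

```ruby
require 'digest/sha1'

# Git derives a blob's id by hashing the header "blob <size>\0" followed
# by the raw contents. The empty blob therefore always has the same id.
def blob_sha(contents)
  Digest::SHA1.hexdigest("blob #{contents.bytesize}\0#{contents}")
end

blob_sha("")  # => "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"
```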
Adding a ZIP file with a bunch of files where only one or two differ from the prior ZIP file means that Git will properly reference the same file multiple times. Unfortunately, GitHub does not provide an interface for reviewing the statistics of wikis in the same way it does for regular repositories. We can, however, review our repository size from within the local repository by running the `count-objects` Git subcommand. As an example, I uploaded a ZIP file with two images inside of it. I then use the `count-objects` command and see this:
$ git gc
...
$ git count-objects -v
count: 0
size: 0
in-pack: 11
packs: 1
size-pack: 2029
prune-packable: 0
garbage: 0
size-garbage: 0
Inspecting the first ZIP file, I see these statistics about it:
$ unzip -l ~/Downloads/Photos\ \(4\).zip
Archive: /Users/xrdawson/Downloads/Photos (4).zip
Length Date Time Name
-------- ---- ---- ----
1189130 01-01-12 00:00 IMG_20130704_151522.jpg
889061 01-01-12 00:00 IMG_20130704_174217.jpg
-------- -------
2078191 2 files
Now let's use another ZIP file with the same two files present but with an additional image file added:
$ unzip -l ~/Downloads/Photos\ \(5\).zip
Archive: /Users/xrdawson/Downloads/Photos (5).zip
Length Date Time Name
-------- ---- ---- ----
1189130 01-01-12 00:00 IMG_20130704_151522.jpg
566713 01-01-12 00:00 IMG_20120825_164703.jpg
889061 01-01-12 00:00 IMG_20130704_174217.jpg
-------- -------
2644904 3 files
Then, I upload the second ZIP file. If I rerun the `count-objects` command (after running `git gc`, a command that packs files efficiently and makes our output more human readable), I see this:
$ git gc
...
$ git count-objects -v
count: 0
size: 0
in-pack: 17
packs: 1
size-pack: 2578
prune-packable: 0
garbage: 0
size-garbage: 0
Notice that our packed size has only changed by about half a MB, which is the compressed size of the additional third file, but more importantly, there was no impact from the other two files on our repository size, even though they were added at different paths.
If we upload the second ZIP file yet again, we will regenerate and commit a new version of the _Review.md_ file, but no new files will need to be created inside our Git repository object store for the images (even though their paths have changed), so our impact on the repository will be minimal:
$ git gc
...
$ git count-objects -v
count: 0
size: 0
in-pack: 21
packs: 1
size-pack: 2578
prune-packable: 0
garbage: 0
size-garbage: 0
As you can see, our packed size has barely changed, an indication that the only changes were a new Git tree object and commit object. We still have the files located in our repository at a variety of paths so our review pages will work no matter what revision we are accessing:
$ find images
images
images/7507409915d00ad33d03c78af0a4004797eec4b4
images/7507409915d00ad33d03c78af0a4004797eec4b4/IMG_20120825_164703.jpg
images/7507409915d00ad33d03c78af0a4004797eec4b4/IMG_20130704_151522.jpg
images/7507409915d00ad33d03c78af0a4004797eec4b4/IMG_20130704_174217.jpg
images/7f9505a4bafe8c8f654e22ea3fd4dab8b4075f75
images/7f9505a4bafe8c8f654e22ea3fd4dab8b4075f75/IMG_20120825_164703.jpg
images/7f9505a4bafe8c8f654e22ea3fd4dab8b4075f75/IMG_20130704_151522.jpg
images/7f9505a4bafe8c8f654e22ea3fd4dab8b4075f75/IMG_20130704_174217.jpg
images/b4be28e5b24bfa46c4942d756a3a07efd24bc234
images/b4be28e5b24bfa46c4942d756a3a07efd24bc234/IMG_20130704_151522.jpg
images/b4be28e5b24bfa46c4942d756a3a07efd24bc234/IMG_20130704_174217.jpg
Git and Gollum can efficiently store the same file at different paths without overloading the repository.
# Reviewing on GitHub
The raison d'etre for this wiki is to annotate a development project. If you follow the instructions and create a new wiki for a repository, you'll then be able to push up the changes we've made using our `image.rb` script. Once you have created a new wiki, look for a box on the right that says "Clone this wiki locally," as seen in Figure 3-5.
###### Figure 3-5. Getting the clone URL for our wiki
Copy that link, and then enter a terminal window where we can add a remote URL to our local repository, allowing us to synchronize our repositories and publish our images into GitHub. Gollum wikis have a simple URL structure based on the original clone URL: insert the word `.wiki` just before the final `.git` extension. So, if the original clone URL of the repository is `git@github.com:xrd/webiphany.com.git`, our clone URL for the associated wiki will be `git@github.com:xrd/webiphany.com.wiki.git`. Once we have the URL, we can add it as a remote to our local repository using the following commands:
$ git remote add origin git@github.com:xrd/webiphany.com.wiki.git
$ git pull # This will require us to merge the changes...
$ git push
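The URL transformation is mechanical; a one-line sketch (with a hypothetical `wiki_clone_url` helper):

```ruby
# Derive a Gollum wiki's clone URL from its repository's clone URL by
# inserting ".wiki" before the trailing ".git" extension.
def wiki_clone_url(repo_clone_url)
  repo_clone_url.sub(/\.git\z/, ".wiki.git")
end

wiki_clone_url("git@github.com:xrd/webiphany.com.git")
# => "git@github.com:xrd/webiphany.com.wiki.git"
```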
When we pull, we will be asked to merge our changes since GitHub created a _Home.md_ file that did not exist in our local repository. We can just accept the merge as is. The `git push` publishes our changes. If we then visit the wiki, we'll see an additional file listed under the pages sidebar to the right. Clicking the Review page, as in Figure 3-6, we can see the images we've added most recently.
###### Figure 3-6. An image review page
Not sure why our designer is providing us with an image of a couch, but I am sure he has his reasons.
Once we have published the file, we can click the Review link in the sidebar to see the most current version of the Review page. We also can review the revisions of this file by clicking the "3 Commits" (or whatever number of commits have occurred with this file) link right underneath the page title. Jumping onto that page shows us the full history of this file, as shown in Figure 3-7.
###### Figure 3-7. Wiki history review via the Commit Log
Clicking any of the SHA hashes will display the page at that revision in our history and show us the state of the document at any given moment. Unfortunately, jumping back and forth between revisions requires two clicks, one from the Review page to the list of revisions, and then another to jump into the revision we want, but this permits us to review changes between the comps provided by our designer.
It would be nice if GitHub provided a simple way to jump from a revision to the parent (older) revision, but it doesn't expose this in its site at this point. We can fix this, however, by generating our own special link inside the Review page itself, which will magically know how to navigate to a previous version of the page.
# Improving Revision Navigation
In our example, we only have three revisions right now, and all share the same commit message ("Adding new images"). This is not very descriptive and makes it challenging to understand the differences between revisions, which is critical when we are trying to understand how things have changed between comps. We can improve this easily.
First, let's add a commit message field to our upload form:
<html>
<body>
<!-- message -->
<form method='POST' enctype='multipart/form-data' action='/unpack'>
Choose a zip file:
<input type='file' name='zip'/>
<input type='text' name='message' placeholder='Enter commit message'/>
<input type='submit' name='submit'>
</form>
</body>
</html>
Then, let's adjust the commit message inside our _image.rb_ script, which is a one-line change to the options hash, setting the value of it to the parameter we are now passing in for commit:
...
options[:committer] = { :email => @email, :name => @name, :time => Time.now }
options[:message] = params[:message]
options[:parents] = @repo.empty? ? [] : [ @repo.head.target ].compact
...
Now, if our designer posts a new version of the UI comps, they can specify what changes were made, and we have a record of that in our change log, which is exposed on the revisions section of our wiki hosted on GitHub.
# Fixing Linking Between Comp Pages
As noted, there is no quick way to jump between comps once we are inside a review revision. However, recall that we used the parent SHA hash to build out our image links. We can use this to build navigation links inside our comp page for use when viewing a revision in the history.
Again, it is a simple change: one line within the `write_review_file` method. After the block that creates each link to the image files, add a line that builds a link to the parent document via its SHA hash using the parent SHA found in our Rugged object under `@repo.head.target`. This link will allow us to navigate to prior revisions in our history:
...
files.each do |f|
contents += "### #{f} \n[[#{dir}/#{f}]]\n\n"
end
contents += "[Prior revision (only when viewing history)]" +
"(#{@repo.head.target})\n\n"
File.write review_filename, contents
oid = @repo.write( contents, :blob )
...
Now, when we view the Review file history, we see a link to each prior version. Is it possible to provide a link to the next version in our history? Unfortunately, we have no way to predict the SHA hash of the next commit made to the repository, so we cannot build this link inside our _Review.md_ file with our Ruby script. However, we do get something just as good for free because we can simply use the back button to jump back to the prior page in the history stack of our browser. We might try to get clever and use a link with JavaScript to call `window.history.back()` but Gollum will foil this attempt by stripping JavaScript from rendered markup files. This is generally a good thing, as we don't want to permit rogue markup inside our wiki pages, but it does limit our options in this situation.
Unfortunately, these links do not work when you are viewing the Review file itself (clicking them brings you to a page that asks you to create a new page). Gollum, unlike Jekyll, does not support Liquid tags, which would permit building an absolute link using the username and repository name. Since we don't have access to those variables, our link needs to be relative, which works in the history review but not in the normal view. It does not affect viewing the files themselves, so this just requires educating your stakeholders about the limitations of the link.
# Summary
In this chapter we learned how to create a Gollum wiki from scratch, both on GitHub and as a fresh repository from the command line. We then looked at the different ways to use the Gollum command-line tool and learned why this is a nice option when we want to run our own Gollum server. Finally, we built a customized Gollum image-centric editor using the Rugged and Sinatra Ruby libraries.
In the next chapter we'll switch gears completely and build a GUI application for searching GitHub issues. And we'll do it in Python.
This is explained beautifully in the blog _http://alblue.bandlem.com/2011/08/git-tip-of-week-objects.html_.
# Chapter 4. Python and the Search API
Once you have enough data, no amount of organization will make everything easy to find. As Google has taught us, the only system that works at this scale is a search box. When you use GitHub, you're exposed to both sides of this phenomenon: the repositories you have direct access to—which are relatively small in number—are given a single level of hierarchy, so you can keep them straight in your head. For the rest, the uncountable millions of public repositories that belong to other people, there's a search box, with powerful features to help you find what you're looking for.
Helpfully, GitHub also exposes this capability as an API you can consume from your own applications. GitHub's Search API gives you access to the full power of the built-in search function. This includes the use of logical and scoping operators, like `"or"` and `"user"`. By integrating this feature with your application, you can provide your users a very powerful way of finding what they're looking for.
In this chapter we'll take a close look at this API, and try building a useful application with it. We'll see how the Search API is structured, what kind of results come back, and how it can help us create a feature for someone on our team.
# Search API General Principles
The Search API is split into four separate parts: repositories, code, issues, and users. These APIs all have different subject matter, and have different formats for their results, but they all behave the same in a few key ways. We're covering these first, because they'll help you understand the results coming back from the specific API calls that we cover later. There are four major areas of commonality.
## Authentication
Your identity as a user can determine the result set from a search query, so it's important to know about authentication. We cover GitHub authentication fully in "Authentication", but this API is also available without logging in. However, there are a few limitations to this approach.
First, you'll only be able to search public repositories. This is probably fine if you're primarily working with open source software, but users of your application will probably expect to have access to their private code, as well as that of any organizations they belong to. Also, since _all_ Enterprise repositories are private, anonymous search is completely useless there.
Secondly, authenticating opens up your rate limit. The limits on search are stricter than other APIs anyway, because search is computationally expensive, but anonymous searches are stricter still. As of this writing, and according to the documentation, anonymous searches are limited to 5 per minute, and you can do 20 authenticated queries per minute. Take a look at "GitHub API Rate Limits" for more on how to work with rate limits.
Here's that same information in tabular form:
| | Anonymous | Authenticated |
|---|---|---|
| Results include private repositories | No | Yes |
| Use with Enterprise | No | Yes |
| Rate limit | 5/minute | 20/minute |
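Every response also carries `X-RateLimit-Remaining` and `X-RateLimit-Reset` headers, so a client can back off gracefully instead of hammering the API. A minimal sketch of the arithmetic (the helper function is ours, not part of any library):

```python
import time

def seconds_until_reset(headers, now=None):
    """How long to wait before retrying a rate-limited search.

    A sketch: the X-RateLimit-* header names come from GitHub's API
    documentation, but this helper and its behavior are our own.
    """
    remaining = int(headers.get('X-RateLimit-Remaining', 1))
    if remaining > 0:
        return 0  # still have budget; no need to wait
    reset = int(headers.get('X-RateLimit-Reset', 0))  # epoch seconds
    if now is None:
        now = time.time()
    return max(0, reset - now)

# With no requests remaining and a reset 60 seconds in the future:
print(seconds_until_reset({'X-RateLimit-Remaining': '0',
                           'X-RateLimit-Reset': '1000'}, now=940))  # 60
```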
## Result Format
No matter what you're searching for, the return value from the API follows a certain format. Here's a sample result from a query, which has been heavily edited to focus only on the parts you'll always see:
{
  "total_count": 824,
  "incomplete_results": false,
  "items": [
    {
      ...
      "score": 3.357718
    }
  ]
}
Starting from the top: the `total_count` field represents the total number of search results that turned up from this query. It's not uncommon for a fairly specific search to turn up thousands of results—remember, there are millions of repositories on GitHub. By default, only the first 30 are returned, but you can customize this with `page` and `per_page` query parameters in the URL. For example, a GET request to this URL will return 45 items, starting with the 46th result:
search/repositories?q=foobar&page=2&per_page=45
Page sizes are generally limited to 100.
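The arithmetic for turning an absolute result index into these parameters is easy to fumble; here's a small sketch (the helper function is ours, not part of the API):

```python
def page_params(start_index, page_size=30):
    """Build `page`/`per_page` query parameters so that the page
    returned begins at `start_index` (0-based). Page sizes are capped
    at 100 server-side, so we clamp to that here as well."""
    page_size = min(page_size, 100)
    return {'per_page': page_size, 'page': start_index // page_size + 1}

# The 46th result (index 45), with 45 results per page, is on page 2:
print(page_params(45, 45))  # {'per_page': 45, 'page': 2}
```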
The `incomplete_results` field refers to a computational limit placed on the Search API. If your search takes too long, the GitHub API will stop it partway through executing, return the results that did complete, and set this flag to `true`. For most queries this won't be a problem, and the `total_count` will represent all the results from the search, but if your query is complicated, you might only get a partial result set.
Search results are returned in the `items` array, and each item always has a `score` field. This field is numeric, but it's only a relative measure of how well a result matches the query, and is used for the default sort order—highest score first. If you do pay attention to it, remember it only has meaning when compared to other results from the same query; a result with a score of 50 isn't necessarily ten times "better" than a result scored 5.
Here's a summary of the important fields:
Field | Meaning
---|---
`total_count` | Total search result count
`incomplete_results` | `true` if search was halted before finishing
`items` | List of search results
`(item).score` | Relevance of this item as a search result
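Because every endpoint shares this envelope, client code can handle it in one common step. A sketch, using a hand-built stand-in for the decoded JSON:

```python
# `response` stands in for the decoded JSON of any Search API call.
response = {
    'total_count': 824,
    'incomplete_results': False,
    'items': [{'score': 3.36}, {'score': 1.02}],
}

if response['incomplete_results']:
    print('warning: the search timed out; results are partial')
print('%d total matches, %d on this page'
      % (response['total_count'], len(response['items'])))
# Scores only rank results within this one query:
best = max(response['items'], key=lambda item: item['score'])
print(best)  # {'score': 3.36}
```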
## Search Operators and Qualifiers
Of course, it's always better if you can avoid pagination altogether, or at least get the best results in the first page. Qualifiers and operators can help narrow your search results to fewer pages, hopefully allowing the right result to float to the top.
With the Search API, all searches are done through a search query, which is encoded and passed in the URL as the `q` parameter. Most of the query will be free text, but the API also supports some powerful syntax, such as these forms:
* `x AND y`, as well as `OR` and `NOT`
* `user:<name>`, where `<name>` is a user or organization
* `repo:<name>`
* `language:<name>`
* `created:<date(s)>`
* `extension:<pattern>` matches file extensions (like _py_ or _ini_)
Numerical values and dates can have ranges:
* `2015-02-01` will match only the given date
* `<2015-02-01` will match any date previous to the one given
* `2015-02-01..2015-03-01` will match dates within the given range, including the endpoints
For example, to find code written by the user tpope during July of 2012, you would write `"user:tpope created:2012-07-01..2012-07-31"` for the query parameter. That would be encoded in a URL like so:

search/repositories?q=user%3Atpope+created%3A2012-07-01..2012-07-31

To constrain this search to only Python code, we could add `language:python`, URL encoded as `+language%3Apython`, to the end of the URL.
There are many other options. Check out _https://github.com/search/advanced_ for a UI that can help you construct a query.
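In code, the query string is easiest to build unencoded and then encode in one step. A sketch using the standard library (written, like the rest of this chapter's code, to run under Python 2 or 3):

```python
try:
    from urllib import urlencode        # Python 2
except ImportError:
    from urllib.parse import urlencode  # Python 3

# Operators are just text; urlencode's quote_plus turns ':' into '%3A'
# and spaces into '+', matching the URLs shown above.
query = 'user:tpope created:2012-07-01..2012-07-31 language:python'
print('search/repositories?' + urlencode({'q': query}))
```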
## Sorting
If search query operators can't narrow down a result set to just the most important items, perhaps sorting them can. Search results are returned in a definite order, never at random. The default order is "best match," which sorts your results based on their search score, best score first. If you want to override this, you can pass `stars`, `forks`, or `updated` in the `sort` query parameter, as in `search/repositories?q=foobar&sort=stars`.
You can also reverse the sort order using the `order` parameter, as in `search/repositories?q=foobar&sort=stars&order=asc`. The default is `desc` ("descending"); passing `asc` ("ascending") reverses the order.
# Search APIs in Detail
Now that we've covered how all these APIs behave the same, let's discuss their specifics. The Search API is compartmentalized into four categories: repositories, code, issues, and users. The basic mechanism is the same for all four: send a GET request to the endpoint, and provide a URL-encoded search term as the `q` parameter. We'll show an abridged response from each of the four, along with some discussion of what to expect.
## Repository Search
The `search/repositories` endpoint looks in the repository metadata to match your query. This includes the project's name and description by default, though you can also search the read me file by specifying `in:readme` in the query. Other qualifiers are documented at _https://developer.github.com/v3/search/#search-repositories_.
A query such as `search/repositories?q=foobar` might result in a response that looks something like this:
{
  "total_count": 824,
  "incomplete_results": false,
  "items": [
    {
      "id": 10869370,
      "name": "foobar",
      "full_name": "iwhitcomb/foobar",
      "owner": {
        "login": "iwhitcomb",
        "id": 887528,
        "avatar_url": "https://avatars.githubusercontent.com/u/887528?v=3",
        ...
      },
      "private": false,
      "html_url": "https://github.com/iwhitcomb/foobar",
      "description": "Drupal 8 Module Example",
      "fork": false,
      ...
      "score": 59.32314
    },
    ...
  ]
}
Each item in `items` is the description of a repository. All sorts of useful information is included, such as a URL to the UI for this repository (`html_url`), the owner's avatar (`owner.avatar_url`), and a URL suitable for cloning the repository using Git (`git_url`).
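For instance, a client that wants to offer a "clone this" action needs only a couple of those fields. A sketch, using a hand-built stand-in for a decoded response:

```python
# `response` stands in for the decoded JSON of a repository search.
response = {
    'items': [
        {'full_name': 'iwhitcomb/foobar',
         'git_url': 'git://github.com/iwhitcomb/foobar.git',
         'html_url': 'https://github.com/iwhitcomb/foobar'},
    ],
}

# Map each repository's full name to a URL suitable for `git clone`.
clone_urls = {repo['full_name']: repo['git_url']
              for repo in response['items']}
print(clone_urls['iwhitcomb/foobar'])
```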
## Code Search
The `search/code` endpoint is for searching the contents of a repository. You can try matching the contents of the files themselves, or their paths (using `in:path`). (For complete documentation on the other available qualifiers, check out _https://developer.github.com/v3/search/#search-code_.)
This API is subject to several limits that don't affect the other search endpoints, because of the sheer amount of data the server must sort through to find matches. First, it requires that you provide a general search term (a phrase to match); specifying a query with _only_ operators (like `language:python`) is valid with other APIs, but not here. Second, any wildcard characters in the query will be ignored. Third, files above a certain size will not be searched. Fourth, it only searches the default branch of any given project, which is usually `master`. Fifth, and possibly most importantly, you _must_ specify a repository owner using the `user:<name>` qualifier; you cannot search all repositories with one query.
The JSON returned looks something like this:
{
  "total_count": 9246,
  "incomplete_results": false,
  "items": [
    {
      "name": "migrated_0000.js",
      "path": "test/fixtures/ES6/class/migrated_0000.js",
      "sha": "37bdd2221a71b58576da9d3c2dc0ef0998263652",
      "url": "...",
      "git_url": "...",
      "html_url": "...",
      "repository": {
        "id": 2833537,
        "name": "esprima",
        "full_name": "jquery/esprima",
        "owner": {
          "login": "jquery",
          "id": 70142,
          "avatar_url": "https://avatars.githubusercontent.com/u/70142?v=3",
          ...
        },
        "private": false,
        ...
      },
      "score": 2.3529532
    },
    ...
  ]
}
Each item has some data about the file that turned up, including its name and URLs for a couple of representations of it. Then there's the blob of data about its repository, followed by a score, which is used for the default "best match" sorting.
## Issue Search
Repositories contain more than just code. The `search/issues` endpoint looks for matches in the issues and pull requests attached to a project. This endpoint responds to a wide variety of search qualifiers, such as:
`type`
Either "pr" for pull requests, or "issue" for issues (the default is both).
`team`
Match issues whose discussions mention a specific team (only works for organizations you belong to).
`no`
Match issues that are missing a piece of data (as in "no:label").
There are many more; see _https://developer.github.com/v3/search/#search-issues_ for complete documentation.
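These qualifiers are plain text appended to the query, which makes them easy to compose programmatically. A sketch of the kind of query string our example application will build later in this chapter (the function and organization names here are ours):

```python
def org_issue_query(text, org):
    """Scope a free-text search to one organization's issues.
    Note that the `user:` qualifier accepts organization names too."""
    return '{} user:{} type:issue'.format(text, org)

print(org_issue_query('crash on login', 'myorg'))
# crash on login user:myorg type:issue
```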
The result of a call to this endpoint looks like this:
{
  "total_count": 1278397,
  "incomplete_results": false,
  "items": [
    {
      "url": "...",
      "labels_url": "...",
      "comments_url": "...",
      "events_url": "...",
      "html_url": "...",
      "id": 69671218,
      "number": 1,
      "title": "Classes",
      "user": {
        "login": "reubeningber",
        "id": 2552792,
        "avatar_url": "...",
        ...
      },
      "labels": [
        ...
      ],
      "state": "open",
      "locked": false,
      "assignee": null,
      "milestone": null,
      "comments": 0,
      "created_at": "2015-04-20T20:18:56Z",
      "updated_at": "2015-04-20T20:18:56Z",
      "closed_at": null,
      "body": "There should be an option to add classes to the ul and li...",
      "score": 22.575937
    },
    ...
  ]
}
Again, each item in the list looks like the result of a call to the Issues API. There are a lot of useful bits of data here, such as the issue's title (`title`), labels (`labels`), and a link to the pull-request data (`pull_request.url`), which won't be present if the result isn't a pull request.
## User Search
All the other Search APIs are centered around repositories, but this endpoint searches a different namespace: GitHub users. By default, only a user's login name and public email address are searched; the `in` qualifier can extend this to include the user's full name as well, with `in:fullname,login,email`. There are several other useful qualifiers available; see _https://developer.github.com/v3/search/#search-users_ for complete documentation.
Querying the `search/users` endpoint gives you this kind of response:
{
  "total_count": 26873,
  "incomplete_results": false,
  "items": [
    {
      "login": "ben",
      "id": 39902,
      "avatar_url": "...",
      "gravatar_id": "",
      "url": "...",
      "html_url": "...",
      ...
      "score": 98.24275
    },
    {
      "login": "bengottlieb",
      "id": 53162,
      "avatar_url": "...",
      "gravatar_id": "",
      "url": "...",
      "html_url": "...",
      ...
      "score": 35.834213
    },
    ...
  ]
}
The list of items in this case looks like the results of a query to the `users/<name>` endpoint. Useful items here are the user's avatar (`avatar_url`), several links to other API endpoints (`repos_url`, `url`), and the type of result (user or organization, in `type`).
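Since both users and organizations come back from this endpoint, a client often wants to split them apart by `type`. A quick sketch with a hand-built stand-in response:

```python
# `response` stands in for the decoded JSON of a user search.
response = {'items': [
    {'login': 'ben', 'type': 'User'},
    {'login': 'github', 'type': 'Organization'},
]}

orgs = [i['login'] for i in response['items'] if i['type'] == 'Organization']
users = [i['login'] for i in response['items'] if i['type'] == 'User']
print(orgs)   # ['github']
print(users)  # ['ben']
```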
# Our Example Application
Now that we know a bit about how this API behaves, let's do something useful with it.
Imagine that your development team uses GitHub to store their Git repositories, and that there are lots of little repositories for parts of the application that work together at runtime. This kind of situation ends up being fairly difficult to work with for your nontechnical colleagues; if they want to report an issue, they don't know where to go, and they don't know how to find issues that already exist.
Search can make this possible, but doing a search across an entire organization's repositories involves using the `user:<organization>` operator, which is obtusely named, and kind of scary for nonprogrammers. Plus, the user would have to remember to add that option every single time they wanted to search for issues.
The Search API can make this a bit easier. Let's make a GUI application with just a single search box, which makes it dead simple for a nontechnical user to search all the issues in all the repositories in a single organization. It'll end up looking a bit like Figures 4-1, 4-2, and 4-3.
###### Figure 4-1. GitHub search application on Windows
###### Figure 4-2. GitHub search application on Mac
###### Figure 4-3. GitHub search application on Linux
## User Flow
That's the overall goal, but let's dig in to more detail about how the user experiences the application.
The first thing we'll do is require the user to log in with GitHub credentials. Why? Partly because the Search API is throttled pretty aggressively, and the rate limits are higher with authenticated access. But also because our user is going to need the ability to search issues in private repositories. To make this easier, our program will try to get GitHub credentials from Git's credential store, but it'll fall back to a login form, which looks like Figure 4-4.
###### Figure 4-4. Login UI
Once the user logs in, they'll be shown a search box. Typing in a search query and hitting Enter will result in a scrollable list of search results, with titles and the first line of the description. Clicking a search result opens the issue in the user's browser.
That's about it. This application only has two main screens from the user's point of view. It's a simple, focused tool to solve a very tightly defined problem, so the code shouldn't be too hard.
# Python
Now that we know how the program should act, let's decide how it should _work_.
We'll use Python for our implementation language, for several reasons. First, because we haven't yet seen it in this book, and one of our goals is to expose you to a wide variety of languages and technologies you might not have seen before.
Secondly, there's a Python library for building GUI applications that run without modification on Mac OS X, Linux, and Windows. Surprisingly, this is a fairly unique feature among modern high-level programming languages. If you want this capability elsewhere, you usually have to use a high-complexity framework, a lower-level language like C++, or both.
Thirdly, this will help make it easy to distribute. There is a Python package that bundles an entire Python program and all of its dependencies into a single file (or _.app_ bundle on OS X). So giving this program to a colleague is as easy as emailing her a ZIP file, which will help with our use case: a nontechnical user might not be totally comfortable clicking through an installer (or even have permissions to do so on their machine).
Let's take a quick look at the libraries we'll be using in our application's code. We'll see them in action later on, but a quick overview will help you understand what each one is trying to do. As is unfortunately typical with Python development, installation methods vary from package to package, so we'll also tell you how to get each one onto your machine.
## AGitHub
The first thing we should mention is the library we'll use to talk to the GitHub API, which is called `agithub`. `agithub` is a very thin layer that converts GitHub's REST API into method calls on objects, resulting in delightfully readable code.
`agithub` can be found at _https://github.com/jpaugh/agithub_, and the "installation" is simply to download a copy of the _agithub.py_ source file and place it alongside your project files.
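The mapping agithub performs (URL path segments become attribute accesses, and HTTP verbs become method calls) is easy to illustrate with a toy stand-in. This is our own illustration of the idea, not agithub's implementation; the real library actually issues the HTTP request and returns a `(status, data)` pair:

```python
class Route(object):
    """Toy illustration of agithub-style call chaining (not the real thing)."""
    def __init__(self, segments=None):
        self.segments = segments or []

    def __getattr__(self, name):
        # Each attribute access extends the URL path by one segment.
        return Route(self.segments + [name])

    def get(self, **params):
        # The real library would issue a GET request here; we just
        # return the path and query parameters it would use.
        return '/' + '/'.join(self.segments), params

g = Route()
print(g.user.orgs.get())                      # ('/user/orgs', {})
print(g.search.repositories.get(q='foobar'))  # ('/search/repositories', {'q': 'foobar'})
```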
## WxPython
WxPython is how we'll create the graphical interface for our application. It's an object-oriented Python layer over the top of a toolkit called WxWidgets, which is itself a common-code adapter for native UI toolkits. WxWidgets supports Linux, Mac, and Windows operating systems with native controls, so you can access all of those platforms with the same Python code.
Information about the WxPython project can be found at _http://www.wxpython.org_, and you'll find a download link for your platform on the lefthand side of the page. The next version of WxPython (code-named "Phoenix") will be installable via PIP, but at the time of this writing Phoenix is still prerelease software, so it's probably safer to use the stable version.
A bit of background on Python: it's undergoing a transition. Currently there are two actively used versions: Python 2.7 and Python 3 (3.5 at the time of this writing). Most of the details are unimportant, but in order to follow along with this example, you'll have to be running Python 2.7, because WxPython doesn't currently support Python 3. Support for Python 3 is planned for the upcoming Phoenix release, and most of the following code is written in a "polyglot" fashion, so if Phoenix has arrived by the time you read this, you shouldn't run into any trouble running it under Python 3.
## PyInstaller
PyInstaller will be our distribution tool. Its main function is to read your Python code, analyze it to discover all its dependencies, then collect all these files (including the Python interpreter) and put them in one directory. It can even wrap all of that up in a single package that, when double-clicked, runs your program. It does all this without needing much input from you, and there are only a few configuration options. If you've written GUI applications before, you'll know how hard each of these problems are.
For information on this project, you can visit _http://pythonhosted.org/PyInstaller_. You can install it using Python's package manager by running `pip install pyinstaller`.
# The Code
Alright, now you have an idea of which parts of the Python ecosystem will be helping us on our journey. Let's get started looking at the code that brings them all together. We'll start with this skeleton file:
#!/usr/bin/env python

import os, subprocess
import wx
from agithub import Github

class SearchFrame(wx.Frame):
    pass

if __name__ == '__main__':
    app = wx.App()
    SearchFrame(None)
    app.MainLoop()
Let's take a look at a few key things:
1. The "shebang" specifies that this is a Python 2.7 program.
2. Here we import our handy libraries. We import WxPython (`wx`) wholesale, but from `agithub` we only need the `Github` (note the capitalization) class. `os` and `subprocess` come from the Python standard library.
3. This is the class for our main window. We'll walk through the particulars later on when we discuss the real implementation.
4. In Python, you create the main entry point of an application using this syntax.
5. And this is how you write a "main" function in WxPython. We instantiate an `App` instance, create an instance of our top-level frame, and run the app's main loop.
If you run this program right now, your command line will appear to hang, but the app is actually running its event loop, waiting for GUI input; nothing appears on screen because our empty skeleton frame is never shown. Let's correct that, but first a quick diversion into Git internals to make our experience a bit nicer.
## Git Credential Helper
That's how most of the UI code is going to be structured, but before we go any further, we should define a function to help us get the user's GitHub credentials. We'll be cheating a bit, by asking Git if it has the user's login and password.
We'll leverage the `git credential fill` command. This is used internally by Git to avoid having to ask the user for their GitHub password every time they interact with a GitHub remote. The way it works is by accepting all the known facts about a connection as text lines through `stdin`, in the format `<key>=<value>`. Once the caller has supplied all the facts it knows, it can close the `stdin` stream (or supply an empty line), and Git will respond with all the facts _it_ knows about this connection. With any luck, this will include the user's login and password. The whole interaction looks a bit like this:
$ echo "host=github.com" | git credential fill 
host=github.com
username=ben 
password=(redacted)
1. This passes a single line to `git credential` and closes `stdin`, which Git will recognize as the end of input.
2. Git responds with all the facts it knows about the connection. This includes the input values, as well as the username and password if Git knows them.
One other thing you should know about `git-credential` is that by default, if it doesn't know anything about the host, it'll ask the user at the terminal. That's bad for a GUI app, so we're going to be disabling that feature through the use of the `GIT_ASKPASS` environment variable.
Here's what our helper looks like:
GITHUB_HOST = 'github.com'

def git_credentials():
    os.environ['GIT_ASKPASS'] = 'true'
    p = subprocess.Popen(['git', 'credential', 'fill'],
                         stdout=subprocess.PIPE,
                         stdin=subprocess.PIPE)
    stdout,stderr = p.communicate('host={}\n\n'.format(GITHUB_HOST))
    creds = {}
    for line in stdout.split('\n')[:-1]:
        k,v = line.split('=', 1)  # split only once; a password may contain '='
        creds[k] = v
    return creds
1. Here we set `GIT_ASKPASS` to `'true'`, the name of a UNIX program that always succeeds, which in turn causes `git-credential` to stop trying to get credentials when it reaches the "ask the user" stage.
2. `subprocess.Popen` is the way you run a program with access to its `stdin` and `stdout` in Python. The first argument is a list of arguments for the new program, and we also specify that we want `stdin` and `stdout` to be captured.
3. `p.communicate` does the work of writing to `stdin` and returning the contents of `stdout`. It also returns the contents of `stderr`, which we ignore in this program.
4. Here we process the `stdout` contents by splitting each line at the `=` character and slurping it into a dictionary.
So the return value from this call should be a dictionary with `'username'` and `'password'` values. Handy!
## Windowing and Interface
Okay, so now we have something that can help us skip a login screen, but we don't have a way of showing that login screen to the user. Let's get closer to that goal by filling in the main frame's implementation:
class SearchFrame(wx.Frame):
    def __init__(self, *args, **kwargs):
        kwargs.setdefault('size', (600,500))
        wx.Frame.__init__(self, *args, **kwargs)
        self.credentials = {}
        self.orgs = []
        self.create_controls()
        self.do_layout()
        self.SetTitle('GitHub Issue Search')
        # Try to pre-load credentials from Git's cache
        self.credentials = git_credentials()
        if self.test_credentials():
            self.switch_to_search_panel()
        self.Show()
There's a bit of syntax here that might be confusing. The `*args` and `**kwargs` entries here are ways of capturing multiple arguments into one parameter. For now, just know that we're only capturing them here so we can pass them to the parent class constructor two lines down.
The `__init__` method is the constructor, so this is where we start when the main function calls `SearchFrame()`. Here's what's happening at a high level—we'll dig into the details in a bit:
1. Set up some layout dimensions and pass to the parent class's constructor
2. Create the UI controls
3. Retrieve the credentials from the user using the credential helper we described earlier
4. Change the title and display the application to the user
Before we get to _how_ all those things are done, let's step back a bit and talk about this class's job. It's responsible for maintaining the top-level _frame_ (a window with a title bar, a menu, and so on), and deciding what's displayed in that frame. In this case, we want to show a login UI first, and when we get valid credentials (either from Git or the user), we'll switch to a searching UI.
Alright, enough background. Let's walk through the code for getting and checking credentials:
    def login_accepted(self, username, password):
        self.credentials['username'] = username
        self.credentials['password'] = password
        if self.test_credentials():
            self.switch_to_search_panel()

    def test_credentials(self):
        if any(k not in self.credentials for k in ['username', 'password']):
            return False
        g = Github(self.credentials['username'], self.credentials['password'])
        status,data = g.user.orgs.get()
        if status != 200:
            print('bad credentials in store')
            return False
        self.orgs = [o['login'] for o in data]
        return True

    def switch_to_search_panel(self):
        self.login_panel.Destroy()
        self.search_panel = SearchPanel(self,
                                        orgs=self.orgs,
                                        credentials=self.credentials)
        self.sizer.Add(self.search_panel, 1, flag=wx.EXPAND | wx.ALL, border=10)
        self.sizer.Layout()
1. The `agithub` library always returns two values from every function call. Python lets us bind these directly to variables with this `a,b = <expr>` syntax.
2. `agithub` decodes the JSON from the API call into a Python dictionary. Here we're only really interested in the names of the organizations, so we use a _list comprehension_, where we tell Python to keep only the value of the `"login"` field from each dictionary in the `data` list.
Each of these three methods comes in at a different point during our program's execution. If our credentials are coming from Git, we proceed straight to `test_credentials`; if they're coming from the login panel (see "GitHub Login"), they go through the `login_accepted` callback first, which then calls `test_credentials`.
Either way, what we do is try to fetch a list of the user's organizations, to see if they work. Here you can see the usage pattern for `agithub`—the URL path is mapped to object-property notation on an instance of the `Github` class, and the HTTP verb is mapped to a method call. The return values are a status code and the data, which has been decoded into a dictionary object. If it fails—meaning the returned status is not `200`—we send the user to the login panel. If it succeeds, we call `switch_to_search_panel`.
We're doing a synchronous network call on the UI thread. This is usually a bad idea, because the UI will become unresponsive until the network call completes. Ideally we'd move this out onto another thread, and get the return value with a message. However, this would add length and complexity to a chapter already rife with both, so we've decided not to include this advanced topic here. We hope you'll forgive us this small simplification; for this use case, the synchronous code will be just fine.
The last method handles the UI switch. The login panel is referenced by two things: the `SearchFrame` instance (the parent window), and the sizer that's controlling its layout. Fortunately, calling the `Destroy()` method cleans both of those up, so we can then create the `SearchPanel` instance and add it to our sizer. Doing this requires a specific call to the sizer's `Layout()` method; otherwise, the sizer won't know that it needs to adjust the position and size of the new panel:
    def create_controls(self):
        # Set up a menu. This is mainly for "Cmd-Q" behavior on OSX
        filemenu = wx.Menu()
        filemenu.Append(wx.ID_EXIT, '&Exit')
        menuBar = wx.MenuBar()
        menuBar.Append(filemenu, '&File')
        self.SetMenuBar(menuBar)
        # Start with a login UI
        self.login_panel = LoginPanel(self, onlogin=self.login_accepted)

    def do_layout(self):
        self.sizer = wx.BoxSizer(wx.VERTICAL)
        self.sizer.Add(self.login_panel, 1, flag=wx.EXPAND | wx.ALL, border=10)
        self.SetSizer(self.sizer)
`create_controls` is fairly straightforward. It instantiates a menu that only contains File→Exit, and a login panel, whose implementation we'll cover a bit later on. Note that when we create a visible control, we pass `self` as the first parameter to the constructor. That's because the `SearchFrame` instance we're constructing is the parent window of that control.
`do_layout` uses a WxWidgets feature called _sizers_ to do some automated layout. Sizers are a complex topic, but here's all you need to know about this snippet:
* A `BoxSizer` stacks widgets in a single direction, in this case vertically.
* The second parameter to `sizer.Add` is a scaling factor. If it's zero, the widget you're adding will always stay the same size if the parent window resizes; if it's anything else, all the things the sizer is controlling will adjust to fill their container. There's only one control in this sizer, but we still want it to take up the full area of the window, so we pass `1`.
* The `border` parameter tells the sizer how much area to leave around the widget as padding.
* The `wx.EXPAND` flag tells the sizer that we want the widget to expand in the direction the sizer isn't stacking. In this case, we're stacking vertically, but we also want this widget to expand horizontally.
* The `wx.ALL` flag specifies which edges of the widget should have the border area.
Let's make sure we're following good practices, and write some tests. There isn't a lot here we can verify automatically, but what there is should be covered:
from nose.tools import eq_, ok_, assert_raises

class TestApp:
    def setUp(self):
        self.f = None
        self.app = wx.App()

    def tearDown(self):
        if self.f:
            self.f.Destroy()
        self.app.Destroy()

    def test_switching_panels(self):
        self.f = SearchFrame(None, id=-1)
        # Sub-panels should exist, and be of the right type
        ok_(isinstance(self.f.login_panel, LoginPanel))
        ok_(isinstance(self.f.search_panel, SearchPanel))
        # Already destroyed
        assert_raises(RuntimeError, self.f.login_panel.Destroy)
        # Not already destroyed
        ok_(self.f.search_panel.Destroy())
1. Here we're using a testing tool called Nose. Install it with `pip install nose`, and invoke it at the command line by typing `nosetests app.py`. It uses naming conventions to identify tests and fixtures, and is generally nice to work with.
2. Nose will automatically find these `setUp` and `tearDown` methods, and call them before and after each test method is run. In this case, we're just managing the frames we want to test, as well as an `App` instance for all of them to belong to.
3. Here's a test method that Nose will find and run. We ensure the subpanels are the right type, and that we've auto-transitioned to the `SearchPanel` by finding credentials in Git's storage.
That's it! Aside from managing a couple of fields, most of this code is managing the UI, which is almost exactly what we'd want from a UI class. Let's write the first of the two panels we swap in and out.
## GitHub Login
The `LoginPanel` class is similar in structure to the `SearchFrame` class, with a couple of key differences, which we'll describe after the wall of code:
class LoginPanel(wx.Panel):
    def __init__(self, *args, **kwargs):
        self.callback = kwargs.pop('onlogin', None)
        wx.Panel.__init__(self, *args, **kwargs)
        self.create_controls()
        self.do_layout()

    def create_controls(self):
        self.userLabel = wx.StaticText(self, label='Username:')
        self.userBox = wx.TextCtrl(self, style=wx.TE_PROCESS_ENTER)
        self.passLabel = wx.StaticText(self, label='Password (or token):')
        self.passBox = wx.TextCtrl(self, style=wx.TE_PROCESS_ENTER)
        self.login = wx.Button(self, label='Login')
        self.error = wx.StaticText(self, label='')
        self.error.SetForegroundColour((200,0,0))
        # Bind events
        self.login.Bind(wx.EVT_BUTTON, self.do_login)
        self.userBox.Bind(wx.EVT_TEXT_ENTER, self.do_login)
        self.passBox.Bind(wx.EVT_TEXT_ENTER, self.do_login)

    def do_layout(self):
        # Grid arrangement for controls
        grid = wx.GridBagSizer(3,3)
        grid.Add(self.userLabel, pos=(0,0),
                 flag=wx.TOP | wx.LEFT | wx.BOTTOM, border=5)
        grid.Add(self.userBox, pos=(0,1),
                 flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=5)
        grid.Add(self.passLabel, pos=(1,0),
                 flag=wx.TOP | wx.LEFT | wx.BOTTOM, border=5)
        grid.Add(self.passBox, pos=(1,1),
                 flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=5)
        grid.Add(self.login, pos=(2,0), span=(1,2),
                 flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=5)
        grid.Add(self.error, pos=(3,0), span=(1,2),
                 flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=5)
        grid.AddGrowableCol(1)
        # Center the grid vertically
        vbox = wx.BoxSizer(wx.VERTICAL)
        vbox.Add((0,0), 1)
        vbox.Add(grid, 0, wx.EXPAND)
        vbox.Add((0,0), 2)
        self.SetSizer(vbox)

    def do_login(self, _):
        u = self.userBox.GetValue()
        p = self.passBox.GetValue()
        g = Github(u, p)
        status,data = g.issues.get()
        if status != 200:
            self.error.SetLabel('ERROR: ' + data['message'])
        elif callable(self.callback):
            self.callback(u, p)
There's some structure that's similar to the preceding code. We'll start with the constructor.
Recall that this panel is created with a keyword argument in `SearchFrame`'s `create_controls` method, like `LoginPanel(self, onlogin=self.login_accepted)`. In the constructor definition, we pull that callback out and store it for later. Afterward, we just call the two other construction functions and return.
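Popping custom keyword arguments before delegating to the parent constructor is a general Python pattern, not a wx one. Here's a minimal sketch of the same idea with plain classes (the names are illustrative):

```python
class Base(object):
    def __init__(self, *args, **kwargs):
        # Stands in for wx.Panel: a base class that rejects
        # keyword arguments it doesn't recognize.
        if kwargs:
            raise TypeError('unexpected kwargs: {}'.format(sorted(kwargs)))

class Child(Base):
    def __init__(self, *args, **kwargs):
        # Pull our custom argument out *before* delegating, so the
        # base class never sees a keyword it doesn't understand.
        self.callback = kwargs.pop('onlogin', None)
        Base.__init__(self, *args, **kwargs)
```

If you forget the `pop`, the unknown keyword reaches the base constructor and blows up; with it, all the remaining arguments pass through untouched.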
`create_controls` has more to it than `SearchFrame`'s version, because this panel has more controls. Every static-text, text-input, and button control gets its own line of code. The `wx.TE_PROCESS_ENTER` style tells the library we want an event to be triggered if the user presses the Enter key while the cursor is inside that text box.
The next block binds control events to method calls. Every event in WxPython will call the handler with a single argument, an object that contains information about the event. That means we can use the same function to handle any number of different kinds of events, so we do—the `ENTER` handlers for both text boxes and the `BUTTON` handler for the button all go through `self.do_login`.
`do_layout` uses a different kind of sizer—a `GridBagSizer`. Again, the topic of sizers is _way_ outside the scope of this chapter, but just know that this kind arranges things in a grid, and you can allow some of the rows or columns to stretch to fill the container. Here we drop all of the controls into their positions with the `pos=(r,c)` notation (here "rows" come first, which isn't like most coordinate systems), and cause one control to span two columns with the `span` parameter. The `flags` and `border` parameters mostly mean the same things as before, and the `AddGrowableCol` function tells the layout engine which parts of the grid should be allowed to stretch.
Then we do something curious: we put the `GridBagSizer` _into another sizer_. Sizer nesting is a powerful feature, and allows almost any window layout to be possible—although perhaps not easy or simple. The vertical box sizer also contains some bare tuples; this special form is called "adding a spacer." In this case, we sandwich the sizer with all the controls between two spacers with different weights, making it float about a third of the way down the window. The effect is like Figure 4-5.
###### Figure 4-5. Resizing behavior of login UI
Then comes the `do_login` method, which tests out the given credentials, and if they work, passes them back through the callback set at construction time. If they don't work, it sets the text of a label, whose foreground color has been set to a nice, alarming shade of red.
Let's make sure this behavior is tested at least a little bit. Again, there's not much that it's doing other than setting up WxPython stuff, but we can validate that a login error is displayed by adding this method to the test class:
def test_login_panel(self):
    self.f = wx.Frame(None)
    lp = LoginPanel(self.f)
    eq_(lp.error.GetLabelText(), '')
    lp.do_login(None)
    ok_(lp.error.GetLabelText().startswith('ERROR'))
## GitHub Search
Once the user has successfully logged in, we destroy the `LoginPanel` instance and show the `SearchPanel`:
class SearchPanel(wx.Panel):
    def __init__(self, *args, **kwargs):
        self.orgs = kwargs.pop('orgs', [])
        self.credentials = kwargs.pop('credentials', {})
        wx.Panel.__init__(self, *args, **kwargs)
        self.create_controls()
        self.do_layout()

    def create_controls(self):
        self.results_panel = None
        self.orgChoice = wx.Choice(self, choices=self.orgs, style=wx.CB_SORT)
        self.searchTerm = wx.TextCtrl(self, style=wx.TE_PROCESS_ENTER)
        self.searchTerm.SetFocus()
        self.searchButton = wx.Button(self, label="Search")
        # Bind events
        self.searchButton.Bind(wx.EVT_BUTTON, self.do_search)
        self.searchTerm.Bind(wx.EVT_TEXT_ENTER, self.do_search)

    def do_layout(self):
        # Arrange choice, query box, and button horizontally
        hbox = wx.BoxSizer(wx.HORIZONTAL)
        hbox.Add(self.orgChoice, 0, wx.EXPAND)
        hbox.Add(self.searchTerm, 1, wx.EXPAND | wx.LEFT, 5)
        hbox.Add(self.searchButton, 0, wx.EXPAND | wx.LEFT, 5)
        # Dock everything to the top, leaving room for the results
        self.vbox = wx.BoxSizer(wx.VERTICAL)
        self.vbox.Add(hbox, 0, wx.EXPAND)
        self.SetSizer(self.vbox)

    def do_search(self, event):
        term = self.searchTerm.GetValue()
        org = self.orgChoice.GetString(self.orgChoice.GetCurrentSelection())
        g = Github(self.credentials['username'], self.credentials['password'])
        code, data = g.search.issues.get(q="user:{} {}".format(org, term))
        if code != 200:
            self.display_error(code, data)
        else:
            self.display_results(data['items'])

    def display_results(self, results):
        if self.results_panel:
            self.results_panel.Destroy()
        self.results_panel = SearchResultsPanel(self, -1, results=results)
        self.vbox.Add(self.results_panel, 1, wx.EXPAND | wx.TOP, 5)
        self.vbox.Layout()

    def display_error(self, code, data):
        if self.results_panel:
            self.results_panel.Destroy()
        if 'errors' in data:
            msg = ''.join('\n\n{}'.format(e['message']) for e in data['errors'])
        else:
            msg = data['message']
        self.results_panel = wx.StaticText(self, label=msg)
        self.results_panel.SetForegroundColour((200,0,0))
        self.vbox.Add(self.results_panel, 1, wx.EXPAND | wx.TOP, 5)
        self.vbox.Layout()
        width = self.results_panel.GetSize().x
        self.results_panel.Wrap(width)
There's quite a bit here, but some of it is familiar. We'll skip the usual walkthrough to point out a couple of interesting features:
When creating the panel, we pass in the user's credentials and list of organizations as keyword arguments, so they show up in the `kwargs` dictionary. Here we use `pop` to make sure the parent class's constructor doesn't get confused by them.
Here we capture both the search button's "click" event, as well as the text box's "enter key" event. Both should cause the search to be performed.
When we add the search bar to the sizer, we use `0` as a scale factor. This means it shouldn't expand to fit the available size, but keep its own size instead, to leave room to add a results panel later on.
Here's where the actual search is being done. We get the search term and organization, and send them to the `agithub` instance, which returns our results and an HTTP result code.
We pass the search results into another class, then add it to the main sizer with parameters to fill the remaining available space.
If an error is returned from the search call instead, we display it here. There's some code to adjust the wrap width of the text, based on the laid-out width of the control. This isn't a great approach, but doing it better is left as an exercise for the reader.
Again, there's a fair amount of code here, but most of it should look familiar. Here's the test code that covers the previous code:
def test_search_panel(self):
    self.f = wx.Frame(None)
    sp = SearchPanel(self.f, orgs=['a', 'b', 'c'])
    eq_(0, sp.orgChoice.GetCurrentSelection())
    eq_('a', sp.orgChoice.GetString(0))
    sp.display_error(400, {'errors': [{'message': 'xyz'}]})
    ok_(isinstance(sp.results_panel, wx.StaticText))
    eq_('xyz', sp.results_panel.GetLabelText().strip())
## Displaying Results
So now we have our login panel, and a way for the user to enter a search query, but no way to display results. Let's fix that.
Whenever search results are retrieved, we create a new instance of `SearchResultsPanel`, which then creates a series of `SearchResult` instances. Let's look at both of them together:
class SearchResultsPanel(wx.ScrolledWindow):
    def __init__(self, *args, **kwargs):
        results = kwargs.pop('results', [])
        wx.ScrolledWindow.__init__(self, *args, **kwargs)
        # Layout search result controls inside scrollable area
        vbox = wx.BoxSizer(wx.VERTICAL)
        if not results:
            vbox.Add(wx.StaticText(self, label="(no results)"), 0, wx.EXPAND)
        for r in results:
            vbox.Add(SearchResult(self, result=r),
                     flag=wx.TOP | wx.BOTTOM, border=8)
        self.SetSizer(vbox)
        self.SetScrollbars(0, 4, 0, 0)

class SearchResult(wx.Panel):
    def __init__(self, *args, **kwargs):
        self.result = kwargs.pop('result', {})
        wx.Panel.__init__(self, *args, **kwargs)
        self.create_controls()
        self.do_layout()

    def create_controls(self):
        titlestr = self.result['title']
        if self.result['state'] != 'open':
            titlestr += ' ({})'.format(self.result['state'])
        textstr = self.first_line(self.result['body'])
        self.title = wx.StaticText(self, label=titlestr)
        self.text = wx.StaticText(self, label=textstr)
        # Adjust the title font
        titleFont = wx.Font(16, wx.FONTFAMILY_DEFAULT,
                            wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_BOLD)
        self.title.SetFont(titleFont)
        # Bind click and hover events on this whole control
        self.Bind(wx.EVT_LEFT_UP, self.on_click)
        self.Bind(wx.EVT_ENTER_WINDOW, self.enter)
        self.Bind(wx.EVT_LEAVE_WINDOW, self.leave)

    def do_layout(self):
        vbox = wx.BoxSizer(wx.VERTICAL)
        vbox.Add(self.title, flag=wx.EXPAND | wx.BOTTOM, border=2)
        vbox.Add(self.text, flag=wx.EXPAND)
        self.SetSizer(vbox)

    def enter(self, _):
        self.title.SetForegroundColour(wx.BLUE)
        self.text.SetForegroundColour(wx.BLUE)

    def leave(self, _):
        self.title.SetForegroundColour(wx.BLACK)
        self.text.SetForegroundColour(wx.BLACK)

    def on_click(self, event):
        import webbrowser
        webbrowser.open(self.result['html_url'])

    def first_line(self, body):
        return body.split('\n')[0].strip() or '(no body)'
The containing panel is simple enough that it only consists of a constructor. This class's job is to contain the results and present them in a scroll window.
A `SearchResult` consists of two static text controls, which contain the issue's title and the first line of its body.
We're not only binding the click handler for this entire panel, but also the mouse-enter and mouse-leave events, so we can make it behave more like a link in a browser.
Here's how you open the default browser to a URL in Python.
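The `first_line` helper deserves a second look: the trailing `or` supplies a fallback whenever the first line strips down to an empty string, because an empty string is falsy in Python:

```python
def first_line(body):
    # Take everything before the first newline; if that's empty
    # (or just whitespace), fall back to a placeholder string.
    return body.split('\n')[0].strip() or '(no body)'

print(first_line('Steps to reproduce:\n1. ...'))  # Steps to reproduce:
print(first_line('\nDetails below'))              # (no body)
```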
So now you've seen the code for a simple WxPython application. Using this library tends to produce code of a certain style, which is kind of verbose. The positive side of this is that nothing is hidden; all the layout for your app is done right in the code, with no "magic," and the fact that it can run without modification on just about anybody's computer is a huge plus. WxPython may lack some facilities of newer frameworks, but there's nothing better for getting a basic cross-platform UI out the door quickly.
That's all of the code! If you've been following along and typing all this code into a file, you can run that file and do issue searches. However, our use case has a nontechnical user running this; let's see what can be done to make it easier for them to get started.
# Packaging
What we're not going to do is require anyone to install Python 2.7 and a bunch of packages. We'll use PyInstaller to bundle our application into something that's easy to distribute and run.
Let's assume you wrote all the preceding code into a file called _search.py_ , and _agithub.py_ is sitting in the same directory. Here's how to tell PyInstaller to generate a single application for you:
$ pyinstaller -w search.py
That's it! The `-w` flag tells PyInstaller to create a "windowed" build of your application, rather than the default console build. On OS X, this generates a _search.app_ application bundle, and on Windows this generates a _search.exe_ file. You can take either of these to a computer with no Python installed, and they'll run perfectly.
That's because PyInstaller has copied everything necessary for your program to run, from the Python interpreter on up, inside that file. The one I just generated is 67 MB, which seems large for such a simple program, but that number is more reasonable when you consider what's inside the package.
# Summary
Whew! This chapter was quite a journey. Let's take a breath, and look at what we've learned.
The main bulk of the code in this chapter had to do with defining a graphical interface. Code for this task is always pretty verbose, because of the sheer complexity of the task. With WxPython in your tool belt, however, you can now write GUI applications using Python, with code that's no harder to write than with other toolkits, and get the ability to run on every major platform for free.
We saw how to ask Git for credentials to a Git server using `git credential`. This feature is quite capable, and includes the ability to write a custom credential storage backend, but we at least saw a peek into how it works. Using this knowledge, you can piggyback on your users' existing habits to avoid having to ask them for the same things over and over again.
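As a refresher, `git credential fill` reads `key=value` pairs on stdin, terminated by a blank line, and writes the stored credentials back in the same format. Here's a hedged sketch of driving it from Python (the wrapper assumes `git` is on your `PATH`; the parser is plain string handling):

```python
import subprocess

def parse_credential(text):
    # git credential output is newline-separated key=value pairs.
    result = {}
    for line in text.strip().splitlines():
        if '=' in line:
            key, _, value = line.partition('=')
            result[key] = value
    return result

def fill_credential(protocol='https', host='github.com'):
    # Ask git's credential system for a username/password for this host.
    query = 'protocol={}\nhost={}\n\n'.format(protocol, host)
    p = subprocess.Popen(['git', 'credential', 'fill'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(query.encode('utf-8'))
    return parse_credential(out.decode('utf-8'))
```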
We also saw a rather nice HTTP API abstraction with `agithub`. We authenticated and queried the issue search API endpoint using what looked like object-method notation. `agithub` is a great example of how a library package can be both future-proof and idiomatic—the library constructs a query URL by looking at the chain of properties and methods used in the call. This is a great jumping-off point for querying other REST APIs using the same pattern.
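To see how attribute access can build URLs, here's a stripped-down sketch of the pattern (this is not agithub's actual implementation, just the core idea):

```python
class Route(object):
    def __init__(self, path=''):
        self._path = path

    def __getattr__(self, name):
        # Each attribute access extends the URL path by one segment.
        return Route(self._path + '/' + name)

    def __getitem__(self, name):
        # Index notation handles segments that aren't valid identifiers.
        return Route(self._path + '/' + str(name))

    def get(self, **params):
        # A real client would issue the HTTP request here; we just
        # return the URL it would have used.
        query = '&'.join('{}={}'.format(k, v) for k, v in sorted(params.items()))
        return self._path + ('?' + query if query else '')

api = Route()
print(api.search.issues.get(q='user:github widget'))
# -> /search/issues?q=user:github widget
```

Because `__getattr__` only fires for attributes that don't already exist, `get` acts as the terminal call that ends the chain, just as in agithub.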
Finally, the main thrust of this chapter was using the GitHub Search API. You've learned about its general behavior, the different categories of search, how to interpret and sort results, and ways of focusing a search to reduce the number of uninteresting results. Using this knowledge you should be able to find anything you're looking for on GitHub or GitHub Enterprise. You also know that the search UI on GitHub is just a thin layer over the Search API, so the same tricks and techniques will serve you whether you're writing code or using a browser.
Time to switch gears a bit. The next chapter introduces the Commit Status API, which is a way of annotating individual commits in a Git repository with a "good" or "bad" flag. We'll be using what only a few years ago would have been a polarizing choice: C# and the CLR.
# Chapter 5. .NET and the Commit Status API
At the risk of oversimplifying things too much, one way to look at a Git repository is as just a long series of commits. Each commit contains quite a bit of information: the contents of the source files, who created the commit and when, the author's comments on what changes the commit introduces, and so on. This is all good stuff, and works very well for Git's main use case: controlling the history of a software project.
GitHub's Commit Status API adds another layer of metadata to a commit: what various services _say_ about that commit. This capability primarily shows itself in the pull request UI, as shown in Figure 5-1. Each commit in the pull request is annotated with a symbol indicating its status—a red "×" for failure or error, a green "✓" for success, or an amber "•" to indicate that a decision is in the process of being made. This feature also surfaces at the bottom of the pull request; if the last commit in the branch is not marked as successful, you get a warning about merging the request.
###### Figure 5-1. Commit status in the pull request UI
The most obvious application for this feature is a continuous-integration service. A program like Jenkins will get a notification when new commits are pushed to a branch, run a build/test cycle using the new code, and post the results through the Commit Status API. An application like this can even include a link back to the build results, so the user can find out which tests failed. This is a great way to bring together everything needed to make a decision about a proposal: what code has changed, what do people think about it, and does this change break anything? The answer to all of these questions is available on the same page: the pull-request conversation view.
But building and testing is only the beginning; the status of a commit can be used for other purposes as well. For example, open source projects often have a license agreement you must sign in order to submit a contribution. These are called "contributor license agreements," and usually contain language about licensing the contribution to the maintainers of the project. But it's tedious to manually check every incoming pull request to see if the author has signed the CLA, so a continuous-integration-style service can be used for this. CLAHub is one such example: it checks to see if all of the authors of the contained commits have signed the CLA, and marks the latest commit as "error" if not.
So now that we know what the feature is, and what its intended use is, let's take a look at how a program can interact with it.
# The API
First, let's talk about access control. The Commit Status API exposes the need for OAuth as few others do. Making a repository private means you want complete control of what people or applications can access it. Naturally you trust GitHub's internal code to do the right thing with your data, but what about some random application from the Internet? OAuth gives you a way to grant private-repository access to an application _with limits_ —the use of OAuth scopes allows an application to ask for a specific set of permissions, but it won't be able to do just any old thing with your data. Plus, this way you're always in control of these permissions; you can revoke an application's access at any time.
The OAuth system includes the concept of scopes, which can be requested by and granted to an application, each of which allows a certain set of actions. The Commit Status API requires the `repo:status` OAuth scope, which allows an application read and write access to _just_ commit statuses; there is no access granted to the actual contents of the repository. This might seem strange: how can you judge the status of a commit without being able to inspect its contents? Just remember that this feature has use cases beyond continuous integration, and an application may not need full access to make a decision. For services that do need to be able to look at the repository contents, you can request the `repo` scope, which grants read _and_ write access to the entire contents of a repository, including commit statuses. As of this writing, there's no way to request read-only access to repositories, so if a service needs access to your data, you have to trust it with write access.
You can also use this API in anonymous mode, without using OAuth at all. However, in that case you're limited to reading statuses from public repositories; there is no writing, and private repositories are off-limits.
Just to summarize:
OAuth scope | Access to statuses | Access to repo data
---|---|---
None (anonymous) | Read-only on public repos | Read-only on public repos
`repo:status` | Read/write | None
`repo` | Read/write | Read/write
## Raw Statuses
Now that we know how we get access to commit statuses, let's see what they look like. Commit statuses exist as atomic entities, and each commit can have a practically unlimited number of them (the actual limit is in the thousands). You can query for existing statuses by doing a GET request to the API server at `/repos/<user>/<repo>/commits/<ref>/statuses`, and it will return a list of them that looks like this:
[
  {
    "url": "https://api.github.com/repos/...",
    "id": 224503786,
    "state": "success",
    "description": "The Travis CI build passed",
    "target_url": "https://travis-ci.org/libgit2/libgit2/builds/63428108",
    "context": "continuous-integration/travis-ci/push",
    "created_at": "2015-05-21T03:11:02Z",
    "updated_at": "2015-05-21T03:11:02Z"
  },
  ...
]
Most of this is self-explanatory, but a couple of fields need explaining:
Field | Description
---|---
`state` | One of `success`, `failure`, `error`, or `pending`.
`target_url` | A URL for the specific decision made for this commit (in this case a build/test log), which helps the user figure out why a particular decision was reached.
`context` | Used for correlating multiple status updates to a single service; each application sets this according to its own rules, but any process that creates statuses should post the `pending` status and the result status using the same context value.
This API is useful for getting the raw data involved, but it gets complicated quickly. How do you decide if a given commit is "good?" What if there are three pending statuses, one success, another pending, two failures, and another success, in that order? The `context` field can help you correlate a single service's updates, and you can order them by `created_at` to see how each one turned out, but that's a lot of work. Fortunately, the API server can do it for you.
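That correlate-and-order logic, keeping only the newest status for each context, is short enough to sketch in Python (ISO 8601 timestamps sort correctly as plain strings):

```python
def latest_per_context(statuses):
    # Sort oldest-first so later entries overwrite earlier ones,
    # leaving each context's newest status in the dict.
    latest = {}
    for s in sorted(statuses, key=lambda s: s['created_at']):
        latest[s['context']] = s
    return latest

raw = [
    {'context': 'ci/build', 'state': 'pending', 'created_at': '2015-05-21T03:01:00Z'},
    {'context': 'ci/build', 'state': 'success', 'created_at': '2015-05-21T03:11:02Z'},
    {'context': 'cla',      'state': 'failure', 'created_at': '2015-05-21T03:05:00Z'},
]
print({c: s['state'] for c, s in latest_per_context(raw).items()})
# -> {'ci/build': 'success', 'cla': 'failure'}
```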
## Combined Status
If you instead do a GET to `/repos/<user>/<repo>/commits/<ref>/status` (note that the last word is singular), you'll get a response that looks like this:
{
  "state": "success",
  "statuses": [
    {
      "url": "https://api.github.com/repos/...",
      ...
    },
    { ... }
  ],
  "sha": "6675aaba883952a1c1b28390866301ee5c281d37",
  "total_count": 2,
  "repository": { ... },
  "commit_url": "https://api.github.com/repos/...",
  "url": "https://api.github.com/repos/..."
}
The `statuses` array is the result of the logic you'd probably write if you had to: it collapses the statuses by context, keeping only the last one. The `state` field contains an overall status that takes into account all of the contexts, providing a final value based on these rules:
Status | Cause
---|---
`failure` | Any of the contexts posted a `failure` or `error` state
`pending` | Any of the contexts' latest state is `pending`, or there are no statuses
`success` | Latest status for every context is `success`
This is probably exactly what you want, but if you find that your use case calls for different rules, you can always use the `statuses` endpoint to get the raw data and calculate your own combined status.
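If your rules do differ, the table above translates directly into code. This sketch mirrors the API's defaults, taking the latest state for each context as input:

```python
def combined_state(context_states):
    # context_states: the latest state for each context,
    # e.g. ['success', 'pending'].
    if any(s in ('failure', 'error') for s in context_states):
        return 'failure'
    if not context_states or 'pending' in context_states:
        return 'pending'
    return 'success'
```

Swapping in your own rules, say, treating `error` differently from `failure`, is just a matter of editing these conditions.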
## Creating a Status
Now obviously these statuses have to come from somewhere. This API also includes a facility for creating them. To do this, you simply make a POST request to `/repos/<user>/<repo>/statuses/<sha>`, and supply a JSON object for the fields you want to include with your status:
Field | Description
---|---
`state` | Must be one of `pending`, `success`, `error`, or `failure` (required).
`target_url` | A link to detailed information on the process of deciding what the state is or will be.
`description` | A short string describing what the service is doing to make a decision.
`context` | An application-specific string to allow the API to manage multiple services contributing to a single commit's status.
Notice how the last component in that URL is `<sha>`. While you can query for statuses or a combined status using a ref name (like `master`), creating a status requires you to know the full SHA-1 hash of the commit you want to annotate. This is to avoid race conditions: if you were targeting a ref, it may have moved between when your process started and when it finished, but the SHA of a commit will never change.
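Composing that request is mostly URL and JSON assembly; here's a sketch (any HTTP client can send the result, and the repository names below are placeholders):

```python
import json

def build_status_request(owner, repo, sha, state,
                         target_url=None, description=None, context=None):
    # Compose the URL and JSON body for the create-status call;
    # actually POSTing it is left to whatever HTTP client you prefer.
    url = 'https://api.github.com/repos/{}/{}/statuses/{}'.format(owner, repo, sha)
    body = {'state': state}
    for key, value in (('target_url', target_url),
                       ('description', description),
                       ('context', context)):
        if value is not None:
            body[key] = value
    return url, json.dumps(body, sort_keys=True)

url, payload = build_status_request(
    'octocat', 'hello-world',
    '6675aaba883952a1c1b28390866301ee5c281d37', 'pending',
    description='Build queued', context='ci/example')
```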
# Let's Write an App
Alright, now that we know how to read and write statuses, let's put this API to work. In this chapter, we'll build a simple HTTP service that lets you create commit statuses for repositories you have access to using the OAuth web flow for authorization. The system we'll build will be fairly limited in scope, but it's a great starting point to customize for your specific needs.
The language this time is C#, running on the CLR (Common Language Runtime). At one point in the history of computing this wouldn't have been a good choice for a book like this, since it was only available on Windows, the development tools cost quite a bit of money, and the language and libraries were fairly limited. However, with the advent of Mono (an open source implementation of the .NET runtime), the open sourcing of the CLR core, and the availability of free tools, C# is now a completely valid and rather nice option for open source or hobby developers. Plus, it has a vibrant ecosystem of packages we can leverage to make our jobs easier.
## Libraries
You'll be happy to know we won't be writing an entire HTTP server from scratch in this chapter. There are a number of open source packages that do this work for us, and in this project we'll be using Nancy. Nancy is a project that started as a CLR port of the Sinatra framework for Ruby (it takes its name from Frank Sinatra's daughter, Nancy). It's very capable, but also very succinct, as you'll see.
We also won't be directly implementing access to the GitHub API, because GitHub provides a CLR library for that. It's called octokit.net, and it does all the right things with regard to asynchrony and type safety. This is the same library used by the GitHub client for Windows, so it'll definitely do the job for our little application. It is, however, the source of a constraint on how we set up our example project: it requires a rather new version of the CLR (4.5) in order to function. If you want some guidance on how to avoid this pitfall and follow along, continue reading the next section. If you've worked with Nancy before, and have installed NuGet packages in the past, you might be able to skip to the section labeled "Sending the Request".
## Development Environment
If you'd like to follow along with the code examples, here's how to set up a development environment with all the necessary elements. The process is different on Windows (using Visual Studio) and any other platforms (using Xamarin tools).
### Visual Studio
If you're running Windows, you'll want to visit _https://www.visualstudio.com/_ and download the Community edition of Visual Studio. The installer will present you with lots of options; for this example, we'll only need the "web developer" components, but feel free to check all the boxes that look interesting to you. (If you have access to a higher tier of Visual Studio, or already have it installed with the web-development packages, you're all set.)
In order to make things just a little smoother, you'll want to install a plug-in: the Nancy project templates. Visit _https://visualstudiogallery.msdn.microsoft.com/_ and search for "nancy.templates." Choose the search result "Nancy.Templates," which belongs to the NancyFx organization, and click "Get Now." This should download a _.vsix_ file that you can double-click to install the templates into Visual Studio.
The next step is to create a new project using one of the newly installed templates. Go to "File→New Project" and select "Visual C#→Web→Nancy Application with ASP.NET Hosting" from the template list (as shown in Figure 5-2). Make sure the path and name settings at the bottom are to your liking, and click OK.
###### Figure 5-2. Creating a Nancy application in Visual Studio
Next, change the target CLR framework version to something that will work with Octokit. Right-click the project's node in the Solution Explorer, and select "Properties." In the Application section, set Target Framework to be .NET 4.5 (or later), and save. You may be prompted to reload the solution.
The very last step is to add NuGet packages for Octokit and Nancy. Right-click the project node in Solution Explorer, and select "Manage NuGet Packages." Do a search for "Nancy," and upgrade it if necessary—there's a chance the Nancy project template specifies an out-of-date version. Then do a search for "Octokit," and install that. At this point, you should have an empty solution, configured and ready for our example code. To run it with debugging, go to "Debug→Start Debugging," or hit F5. Visual Studio will start the server under a debugger, and open an IE instance on _http://localhost:12008/_ (the port might be different), which should serve you the default Nancy "404 Not Found" page.
### Xamarin Studio
On OS X and Linux, as of this writing the easiest way forward is to visit _http://www.monodevelop.com/_ and install MonoDevelop. Mono is an open source implementation of Microsoft's CLR specification, and MonoDevelop is a development environment that works much like Visual Studio, but is built on Mono, and is completely open source. If you try to download MonoDevelop on a Windows or OS X machine, you'll be prompted to install Xamarin Studio instead; this is a newer version of MonoDevelop with more capabilities, and will work just as well for these examples.
There are no Nancy-specific project templates for these IDEs, so you'll just start with an empty web project. Go to "File→New→Solution," and choose "ASP.NET→Empty ASP.NET Project" from the template chooser, as shown in Figure 5-3.
###### Figure 5-3. Creating an empty ASP.NET application in Xamarin Studio
The rest of the wizard steps are about the project name and location; feel free to name and locate this project however you like.
Next, update the target framework setting. Control- or right-click the node in the solution explorer that corresponds with your project ( _not_ your solution), and select Options from the menu. Under "Build→General," set the Target Framework to "Mono / .NET 4.5" (or later) and click OK.
Lastly, install the Nancy and Octokit NuGet packages. Go to "Project→Add NuGet Packages" in the menu to open the package manager. Search for Nancy, check the box next to it, search for Octokit, check its box, and click "Add Packages" at the bottom right. Once the process is complete, your project is ready for our example code. To run it under the debugger, go to "Run→Start Debugging," or type ⌘-Enter. Xamarin will start the server and open a browser window to _http://127.0.0.1:8080_ (possibly with a different port), which at this point will just show the default "404 Not Found" page.
## Sending the Request
Alright, now that we have a project ready for some code, let's get our Nancy application up and running. Let's be good engineers, and write our tests first. In order to do this, generate a new unit-test project alongside your existing application project, and add a NuGet reference to the `Nancy.Testing` package. You can then copy and paste the test examples over the top of the default test module that comes with that template.
The first thing we're going to write is an endpoint that reports how many followers a user has. In order to test it, we'll choose a well-known user and make sure their real name is fetched. Here's what the test code looks like:
using NUnit.Framework;
using Nancy;
using Nancy.Testing;
using Nancy.Bootstrapper;
using System.Collections.Generic;
using Nancy.Session;

namespace NancyApplication1.Tests
{
    [TestFixture ()]
    public class Test
    {
        private ConfigurableBootstrapper bootstrapper;
        private Browser browser;

        [SetUp]
        public void Setup(){
            this.bootstrapper =
                new ConfigurableBootstrapper(with => {
                    with.Module<Handler>();
                });
            this.browser = new Browser (bootstrapper);
        }

        [Test ()]
        public void FetchesUserDetails ()
        {
            var result = this.browser.Get ("/mojombo",
                with => with.HttpRequest ());
            Assert.AreEqual (HttpStatusCode.OK, result.StatusCode);
            Assert.IsTrue (result.Body.AsString()
                .Contains("Tom Preston-Werner"));
        }
    }
}
Here we're using the `Browser` class provided by `Nancy.Testing` to make a request to `/mojombo`, which should return the follower count (and real name) for that GitHub user.
Here we're asserting that mojombo's real name is fetched by the endpoint.
Now that we have a failing test, let's write the code to implement that endpoint in Nancy. Here's what the initial version of that file will look like:
using Nancy;
using Octokit;
using System;
using System.Collections.Generic;
using System.Linq;
namespace NancyApp
{
public class Handler : NancyModule 
{
private readonly GitHubClient client =
new GitHubClient(new ProductHeaderValue("MyHello")); 
public Handler()
{
Get["/{user}", true] = async (parms, ct) => 
{
var user = await client.User.Get(parms.user.ToString()); 
return String.Format("{0} people love {1}!",
user.Followers, user.Name); 
};
}
}
}
Here we derive a class from `NancyModule`, which is all you have to do to start receiving and processing HTTP requests in Nancy.
The `GitHubClient` class is the entry point for Octokit. Here we create an instance we'll use later on, using a placeholder product name—this name will not be used for the APIs we'll be accessing.
The module's constructor needs to set up route mappings. We map `/{user}` to a lambda function using the `Get` dictionary that comes with `NancyModule`. The second parameter to the index operator says that the handler will be asynchronous.
Here we see how to get the `{user}` part of the request URL (it comes as a property on the `parms` parameter), and how to query the GitHub User API using Octokit. Note that we have to `await` the result of the network query, since it may take some time.
Nancy request handlers can simply return a text string, which will be marked as HTML for the viewing browser. Here we return a simple string with the user's name and number of followers.
The `async` and `await` keywords bear special mention. They are a syntactic nicety that encapsulates a series of callbacks running on an event loop. The code looks like it runs in order, but when the `await` keyword is reached, the system starts an asynchronous request and returns control to the main event loop. Once the request has finished and its task completes, the event loop calls back into the code that's expecting the return value of the `await` expression, with all the scope variables intact. This feature was introduced with C# 5.0 and .NET 4.5 (released in 2012), and it lets you write asynchronous code almost as though it were synchronous. This is but one of the features that make C# a favorite of many developers.
This example is a bit more complicated than "hello, world," but it's still fairly succinct and clear. This bodes well, because we're about to introduce some complexity, in the form of OAuth.
## OAuth Flow
In order to post a status update for a commit, we're going to have to ask the user for permission. Apart from asking for their username and password (which gives way too much control, and if two-factor authentication is enabled may not even be enough), the only way to do this is OAuth, which isn't entirely straightforward.
Here's a simple outline of the OAuth process, from our little server's point of view:
1. We need an authorization token, either because we don't have one, or because the one we have is expired. This is just a string of characters, but we can't generate it ourselves, so we ask GitHub for one. This involves redirecting the user's browser to a GitHub API endpoint, with the kind of permission we're asking for and some other details as query parameters.
2. GitHub tells the user (through their browser) that an application is requesting some permissions, and they can either allow or deny them.
3. If the user allows this access, their browser is redirected to a URL we specified in step 1. A "code" is passed as a query parameter; this is not the access token we want, but a time-limited key to get one.
4. From inside the handler for this request, we can use a REST API to get the actual OAuth access token, which we can store somewhere safe. We do this because if we already have a token, we can skip all the way to the last step of this process.
5. Now we have permission, and we can use the GitHub API in authenticated mode.
This might seem overly complicated, but its design achieves several goals. First, permission can be scoped—an application is almost never given full access to the user's account and data. Second, the whole exchange is secure; at least one part of this has to go through the user, and cannot be automated. Third, the access token is never transmitted to the user's browser, which avoids an entire class of security vulnerabilities.
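To make step 1 concrete, here's a small sketch (in Ruby, the language we return to in the next chapter) of building the authorization redirect URL. The endpoint path and the `client_id`, `scope`, and `state` parameter names match GitHub's documented OAuth web flow; the client ID value is a placeholder:

```ruby
require 'uri'
require 'securerandom'

# Build the URL the user's browser is redirected to in step 1.
def authorize_url(client_id, state, scope = 'repo:status')
  params = { client_id: client_id, scope: scope, state: state }
  "https://github.com/login/oauth/authorize?#{URI.encode_www_form(params)}"
end

state = SecureRandom.hex(16)  # random token, verified again in step 3
puts authorize_url('myclientid', state)
```

The `state` value generated here is the same anti-forgery token the C# code below stores in the session cookie before redirecting.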
Let's walk through the code for our tiny little server's implementation of this flow. First, once we have a token, we should store it so we're not going through the entire redirect cycle for every user request. We're going to store it in a cookie (though since this goes back and forth to the user's browser, a production application would probably use a database). Nancy can help us with this, but first we have to enable it, and the way this is accomplished is by using a bootstrapper. We're going to add this class to our application:
using Nancy;
using Nancy.Bootstrapper;
using Nancy.Session;
using Nancy.TinyIoc;
namespace NancyApp
{
public class Bootstrapper : DefaultNancyBootstrapper
{
protected override void ApplicationStartup(TinyIoCContainer container,
IPipelines pipelines)
{
CookieBasedSessions.Enable(pipelines);
}
}
}
Nancy will automatically detect a bootstrapper class, and use it to initialize our server. Now, from within a `NancyModule`, we can use the `Session` property to store and retrieve values that are transmitted as cookies.
Next, we have to include our application's ID and secret in some of the requests, so we embed them in the code by adding these fields to the `Handler` class. If you don't have an application, visit _https://github.com/settings/developers_ to create one and use `http://localhost:8080/authorize` (depending on your environment, the port number might be slightly different) for the callback URL—we'll see why in a bit:
private const string clientId = "<clientId>";
private const string clientSecret = "<clientSecret>";
Obviously, you should use values from your own API application if you're following along.
After that, we'll need a helper method that kicks off the OAuth process:
private Response RedirectToOAuth()
{
var csrf = Guid.NewGuid().ToString();
Session["CSRF:State"] = csrf; 
Session["OrigUrl"] = this.Request.Path; 
var request = new OauthLoginRequest(clientId)
{
Scopes = { "repo:status" }, 
State = csrf,
};
var oauthLoginUrl = client.Oauth.GetGitHubLoginUrl(request);
return Response.AsRedirect(oauthLoginUrl.ToString()); 
}
CSRF stands for _cross-site request forgery_. This is a mechanism by which we can be sure the OAuth request process really did originate from our site. The GitHub OAuth API will pass this value back to us when the user authorizes access, so we store it in the cookie for later reference.
Storing the original URL in the session cookie is a UX feature; once the OAuth process has completed, we want to send the user back to what they were trying to do in the first place.
`repo:status` is the permission set we're asking for. Note that we're also including our CSRF token in this object; this is so GitHub can give it back to us later for verification.
Here we use Octokit to generate the redirect URL, and send the user's browser there.
`RedirectToOAuth` is a method that can be called from any route handler in our module, if it's discovered that the token is missing or invalid. We'll see how it's called a bit later, but for now let's follow the rest of the OAuth process.
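The CSRF check itself is simple to express outside any framework. Here's an illustrative Ruby sketch of the generate, store, and compare cycle that `RedirectToOAuth` and the `/authorize` handler perform between them (the hash is a stand-in for the cookie-backed session store):

```ruby
require 'securerandom'

# Step 1: mint a token and remember it in the session before redirecting.
def begin_oauth(session)
  session['csrf_state'] = SecureRandom.uuid
end

# Step 3: the token GitHub echoes back must match, and it is single use.
def state_valid?(session, returned_state)
  expected = session.delete('csrf_state')
  !expected.nil? && expected == returned_state
end

session = {}  # stand-in for the cookie-backed session store
token = begin_oauth(session)
puts state_valid?(session, token)  # => true
puts state_valid?(session, token)  # => false (already consumed)
```

Deleting the token on first use means a replayed callback fails the check, just as the C# handler deletes `CSRF:State` before comparing.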
In our GitHub application settings, we specified an authorization URL. In this case, we've specified `http://localhost:8080/authorize`, and that's where GitHub will redirect the user's browser if they decide to grant our application the permissions it's asking for. Here's the handler for that endpoint, which has been inserted into the module constructor:
Get["/authorize", true] = async (parms, ct) => 
{
var csrf = Session["CSRF:State"] as string;
Session.Delete("CSRF:State");
if (csrf != Request.Query["state"]) 
{
return HttpStatusCode.Unauthorized;
}
var queryCode = Request.Query["code"].ToString();
var tokenReq = new OauthTokenRequest(clientId, 
clientSecret,
queryCode);
var token = await client.Oauth.CreateAccessToken(tokenReq);
Session["accessToken"] = token.AccessToken; 
var origUrl = Session["OrigUrl"].ToString();
Session.Delete("OrigUrl");
return Response.AsRedirect(origUrl); 
};
This is how you map paths to handler functions in Nancy. Any class that derives from `NancyModule` has an indexable object for every HTTP verb, and you can attach a synchronous or asynchronous handler to any one of them. There are also ways to include dynamic portions of URLs, which we'll see later on.
Here we verify the CSRF token we generated before. If it doesn't match, something shady is happening, so we return a 401.
This is the REST call that converts our OAuth code to an access token. In order to verify that this really is our application asking for the token, we pass in both the client ID and secret, as well as the code given to us by GitHub.
This is where we store the resulting token in the session cookie. Again, this wouldn't be a good idea for a real application, but for our purposes it'll do.
Here we redirect the user back to what they were originally trying to do, with as little disruption as possible.
This last endpoint is something we can test, but we'll need to be able to handle sessions. In order to do that, we'll add this snippet to our test project's namespace:
public static class BootstrapperExtensions
{
public static void WithSession(this IPipelines pipeline,
IDictionary<string, object> session)
{
pipeline.BeforeRequest.AddItemToEndOfPipeline(ctx =>
{
ctx.Request.Session = new Session(session);
return null;
});
}
}
This is an _extension method_ that allows us to provide a `Session` object for a request, something the CSRF handling uses. Now that that exists, we can add a test method to our test-suite class:
[Test]
public void HandlesAuthorization()
{
// Mismatched CSRF token
bootstrapper.WithSession(new Dictionary<string, object> {
{ "CSRF:State", "sometoken" },
});
var result = this.browser.Get ("/authorize", (with) => {
with.HttpRequest();
with.Query("state", "someothertoken");
});
Assert.AreEqual (HttpStatusCode.Unauthorized, result.StatusCode);
// Matching CSRF token
bootstrapper.WithSession(new Dictionary<string, object> {
{ "CSRF:State", "sometoken" },
{ "OrigUrl", "http://success" },
});
result = this.browser.Get ("/authorize", (with) => {
with.HttpRequest();
with.Query("state", "sometoken");
});
result.ShouldHaveRedirectedTo ("http://success");
}
The first part sets up a mismatched CSRF token; it's `"sometoken"` in the session (which is set before the API call is made), and `"someothertoken"` in the request (which should be sent from GitHub), so we assert that the status code is 401. The second part has matching tokens, so we assert that the response is a redirect to the URL we stored in the session.
Once all that is done, we've got our token and are able to continue on our merry way. All our handlers have to do to trigger an OAuth sequence is to call `RedirectToOAuth()` if it's necessary, and we'll automatically return the user to where they were when the process completes.
## Status Handler
Having gone through all that OAuth business, we should now have a token that grants us permission to create commit statuses. We're going to add this handler to our Nancy module constructor:
Get["/{user}/{repo}/{sha}/{status}", true] = async (parms, ct) => 
{
var accessToken = Session["accessToken"] as string;
if (string.IsNullOrEmpty(accessToken))
return RedirectToOAuth(); 
client.Credentials = new Credentials(accessToken);
CommitState newState = Enum.Parse(typeof(CommitState), 
parms.status,
true);
try
{
var newStatus = new NewCommitStatus 
{
State = newState,
Context = "example-api-app",
TargetUrl = new Uri(Request.Url.SiteBase),
};
await client.Repository.CommitStatus.Create(parms.user, 
parms.repo,
parms.sha,
newStatus);
}
catch (NotFoundException) 
{
return HttpStatusCode.NotFound;
}
var template = @"Done! Go to <a href=""https://" 
+ @"api.github.com/repos/{0}/{1}/commits/{2}/status"
+ @""">this API endpiont</a>";
return String.Format(template,
parms.user, parms.repo, parms.sha);
};
Note the request path for this handler: a GET request to `localhost:8080/<user>/<repo>/<sha>/<status>` will create a new status. This is easy to test with the browser, but also makes it easy for web crawlers to unknowingly trigger this API. For this example it's okay, but for a real application you'd probably want to require this to be a POST request.
Here's where our OAuth helper comes in. We redirect through the OAuth flow if the session cookie doesn't have an authorization token. It's not shown here, but we'd also want to do this if we get an authorization exception from any of the Octokit APIs.
Here we're trying to parse the last segment of the request URL into a member of the `CommitState` enumeration. Octokit tries to maintain type safety for all of its APIs, so we can't just use the raw string.
The `NewCommitStatus` object encapsulates all the things you can set when creating a new status. Here we set the state we parsed earlier, a (hopefully) unique context value that identifies our service, and a not-very-useful target URL (which should really go to an explanation of how the result was derived).
This is the REST call to create the new status. It's an `async` method, which means we have to `await` the result before we can do anything with it.
There are a number of exceptions that could be thrown from the API, but the biggest one we want to handle is the `NotFoundException`, which has been translated from the HTTP 404 status. Here we translate it back to make for a nice(r) experience for the user.
If we succeed, we render a snippet of HTML and return it from our handler. Nancy sets the response's `content-type` to `text/html` by default, so the user will get a nice clickable link.
That's it! If you've typed all this into a project of your own, you should be able to run it under the debugger, or host it in an ASP.NET server, and create commit statuses for your projects by opening URLs in your browser.
We noted this a bit earlier, but it bears repeating: this particular example responds to GET requests for ease of testing, but for a real service like this you'd probably want creation of statuses to use a POST request.
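The state-parsing step has a direct analogue in any language. As a sketch, here's how you might validate the raw URL segment against the four commit states the Status API accepts (the set of valid states comes from the API documentation; the rest is illustrative Ruby, not part of any library):

```ruby
# The four states the Commit Status API accepts.
VALID_STATES = %w[error failure pending success].freeze

def parse_state(raw)
  state = raw.to_s.downcase
  unless VALID_STATES.include?(state)
    raise ArgumentError, "unknown commit state: #{raw.inspect}"
  end
  state
end

puts parse_state('Success')  # => success
```

This mirrors what `Enum.Parse` does in the C# handler: a case-insensitive match against a closed set, with an exception for anything else.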
# Summary
Even if you haven't written a lot of code during this chapter, you've learned a lot of concepts.
You've seen the Commit Status API, and you've seen how it's used by continuous integration software, but you know that it can be used for much more. You can read and write statuses, and you know how the API server coalesces many statuses into a single pass/fail value, and you also know how to write your own multistatus calculation if the default one doesn't meet your needs. You also know what's behind the green checkmarks and red Xs you see in your pull requests.
You've learned how the OAuth web flow works, and why it's designed the way it is. OAuth is the key to many other capabilities of the GitHub API, and it's the right thing to do with regards to trust and permissions. This will allow you to write truly world-class GitHub-interfacing applications, whether running on the Web or on a user's device.
You've gained a passing knowledge of C#, including its package system, at least one IDE, lambda functions, object initializers, and more. C# really is a nice language, and if you use it for a while, you'll probably miss some of its features if you write in anything else.
You've seen NuGet, the .NET package manager, and had a peek at the multitudes of packages in this ecosystem. The capability you have here is astounding; libraries exist for many common activities, and lots of uncommon ones too, so no matter what you need to do, you're likely to find a NuGet package to help you do it.
You've learned about Nancy, with which you can quickly build any HTTP service, from a REST API to an HTML-based interface, and all with a compact syntax and intuitive object model. If you've never been exposed to the Sinatra view of the world, this probably makes you think about web servers a bit differently, and if you have, you'll have a new appreciation for how this model can be idiomatically implemented.
And you've had an introduction to Octokit, a type-safe implementation of a REST API, with built-in asynchrony and OAuth helpers. This toolkit really does make working with the GitHub API as simple and straightforward as using any .NET library, including the ability to explore it using Intellisense.
Now it's time to switch back to Ruby. In our next chapter, we'll take a look at Jekyll (which is what really runs GitHub Pages), and how to use it to write a blog.
# Chapter 6. Ruby and Jekyll
The Jekyll project calls itself a "blog-aware, static site generator in Ruby." At its core, Jekyll is a very simple set of technologies for building websites. Simplicity is what gives Jekyll its power: using Jekyll you will never have to learn about database backends, complicated server installations, or any of the myriad processes involved with most monolithic website technologies. Many prominent technical bloggers use Jekyll as their blogging platform.
Like many of the open source technologies in heavy usage at GitHub, Jekyll was originally developed by Tom Preston-Werner, one of the cofounders of GitHub, and Nick Quaranto, of 37signals, though there are now thousands of contributors to the Jekyll codebase. Unsurprisingly, the strength of the Jekyll tool comes not from the brilliance of the original developers or of the idea itself, but from the way those developers cultivated community and involvement among their users.
# Learning and Building with Jekyll
In this chapter we will investigate the structure of a Jekyll blog, illustrating the few major technology pieces involved. Once we have familiarized ourselves with Jekyll, we will then create a Jekyll blog from scratch using the command-line tools. Then we will write a Ruby program that scrapes a blog-like website and converts the scraped information into a new Jekyll blog.
# What Is Jekyll?
Jekyll specifies a file structure format: conform to this format and Jekyll will compile your files into HTML. Jekyll builds on top of two proven tools: Markdown, a markup language that is surprisingly readable and expressive, and Liquid Markup, a simple programming language that gives you just enough components to build modern web pages requiring conditionals and loops, but safe enough that you can run untrusted pages on public servers. With these two technologies and agreement on a layout structure, Jekyll can build very complicated websites paradoxically without requiring a complicated structure of files and technologies.
Jekyll works natively with GitHub because a Jekyll blog is stored as a Git repository. When you push files into GitHub from a repository GitHub recognizes as a Jekyll site, GitHub automatically rebuilds the site for you. Jekyll is an open source generator and defines a format for your source files, a format other tools can easily understand and operate upon. This means you can build your own tools to interact with a Jekyll blog. Combining an open source tool like Jekyll with a well-written API like the GitHub API makes for some powerful publishing tools.
## Operating Jekyll Locally
To really use Jekyll, you'll need the `jekyll` gem. As we explain in Appendix B, we can install a Ruby gem with a command like this:
$ gem install jekyll
There are two issues with installing this way. The first is that any commands we run inside the command line are lost to us and the world (other than in our private shell history file). The second is that if we are going to publish any of our sites to GitHub, we will want to make sure we are matching the exact versions of Jekyll and its dependencies so that a site that works on our local laptop also works when published into GitHub. If you don't take care of this, you'll occasionally get an email like this from GitHub:
The page build failed with the following error:
page build failed
For information on troubleshooting Jekyll see
https://help.github.com/articles/using-jekyll-with-pages#troubleshooting
If you have any questions please contact GitHub Support.
The fix for both issues is simple. You've probably seen other chapters using a `Gemfile` to install Ruby libraries. Instead of installing the gem manually from the command line, put the dependency into a Gemfile; then anyone else using the repository can run `bundle install` and get the correct dependencies. And instead of depending on the `jekyll` gem directly, use the `github-pages` gem, which pins your Jekyll gem versions to the ones running on GitHub. If you do get the preceding email, run `bundle update` to make sure everything is properly synchronized; generally this will reproduce the issue on your local setup, which is a much faster place to fix it:
$ printf "gem 'github-pages' >> Gemfile
$ bundle install
Creating and managing your dependencies inside a Gemfile is the smart way to get your Jekyll tool synced with the version running on GitHub.
Now we are ready to create a Jekyll blog.
# Jekyll Blog Quick Start
We have our required tools installed, so let's create a simple blog. Run these commands:
$ jekyll new myblog
$ cd myblog
The `jekyll new` command creates the necessary structure for a minimal Jekyll blog. Taking a look inside the directory, you'll see a few files that comprise the structure of a basic Jekyll blog.
The `jekyll new` command installs two CSS files: one for the blog ( _main.css_ ) and one for syntax highlighting ( _syntax.css_ ). Remember, you are in full control of this site; the _main.css_ file is simply boilerplate, which you can completely throw away if it does not suit your needs. The syntax file helps when including code snippets and contains syntax highlighting CSS that prettifies many programming languages.
Installation of a new blog comes with a _.gitignore_ file as well that contains one entry: __site_. When you use the Jekyll library to build your site locally, all files are by default built into the __site_ directory. This _.gitignore_ file prevents those files from being included inside your repository as they are overwritten by the Jekyll command on GitHub when your files are pushed up to GitHub.
The `jekyll new` command does not create or initialize a new Git repository for you; if you want one, run `git init`. The initialization command does, however, create the proper structure for adding everything to a Git repository: just run `git add .` and `git commit`, and your _.gitignore_ file will be committed along with the rest, configuring the repository to ignore generated files like the __site_ directory.
All your blog posts are stored in the __posts_ directory. Jekyll sites are not required to have a __posts_ directory (you can use Jekyll with any kind of static site), but if you do include files in this directory, Jekyll handles them in a special way. If you look in the __posts_ directory now, you'll see that the Jekyll initialization command has created your first post for you, something like __posts/2014-03-03-welcome-to-jekyll.markdown_. These posts follow a special naming format: the date, then the title of the post (with any whitespace replaced by hyphens), and then an extension (either _.markdown_ or _.md_ for Markdown files, or _.textile_ for Textile).
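Because the convention is purely mechanical, generating a conforming filename is a few lines of code. Here's an illustrative Ruby helper; the slug rules shown are a plausible simplification, not Jekyll's exact algorithm:

```ruby
require 'date'

# Turn a title and a date into a filename for the _posts directory.
def post_filename(title, date = Date.today, ext = 'markdown')
  slug = title.downcase.gsub(/[^a-z0-9\s-]/, '').strip.gsub(/\s+/, '-')
  "#{date.strftime('%Y-%m-%d')}-#{slug}.#{ext}"
end

puts post_filename('Welcome to Jekyll!', Date.new(2014, 3, 3))
# => 2014-03-03-welcome-to-jekyll.markdown
```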
Your new Jekyll blog also comes with a few HTML files: an _index.html_ file, which is the starting point for your blog, and several layout files, which are used as wrappers when generating your content. If you look in the __layouts_ directory, notice there is a file named _default.html_ and another named _post.html_. These files are the layout files, files that are wrapped around all generated content, like those from your Markdown-formatted blog posts. For example, the _post.html_ file is wrapped around the generated content of each file stored inside the __posts_ directory. First, the markup content is turned into HTML and then the layout wrapper is applied. If you look inside each of the files inside the __layouts_ directory, you will see that each contains a placeholder with `{{ content }}`. This placeholder is replaced with the generated content from other files.
These placeholders are actually a markup language on their own: _Liquid Markup_. Liquid Markup was developed and open sourced by Shopify.com. Liquid Markup arose from a desire to have a safe way to host programmatic constructs (like loops and variables) inside a template, without exposing the rendering context to a full-fledged programming environment. Shopify wanted to create a way for untrusted users of its public-facing systems to upload dynamic content but not worry that the markup language would permit malicious activity; for example, given a full-fledged embedded programming language, Shopify would open itself to attack if a user wrote code to open network connections to sites on its internal networks. Templating languages like PHP or ERB (embedded Ruby templates, popular with the Ruby on Rails framework) allow fully embedded code snippets, and while this is very powerful when you have full control over your source documents, it can be dangerous to provide a mechanism where that embedded code could look like `system("rm -rf /")`. Liquid Markup provides many of the benefits of embedded programming templates, without the dangers. We will show several examples of Liquid Markup and how they work later in the chapter.
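To see what a layout's `{{ content }}` placeholder does, here is a deliberately tiny stand-in for Liquid's variable substitution. The real Liquid engine, which Jekyll uses, also supports filters, loops, and conditionals; this sketch handles only simple variables:

```ruby
# Replace {{ name }} placeholders with values; unknown names become "".
def render_placeholders(template, vars)
  template.gsub(/\{\{\s*([\w.]+)\s*\}\}/) { vars.fetch($1, '') }
end

layout = '<html><body>{{ content }}</body></html>'
puts render_placeholders(layout, 'content' => '<h1>Hello, Jekyll</h1>')
# => <html><body><h1>Hello, Jekyll</h1></body></html>
```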
Lastly, your Jekyll directory has a special file called __config.yml_. This is the Jekyll configuration file. Peering into it, you'll see it is very basic:
name: Your New Jekyll Site
markdown: redcarpet
highlighter: pygments
We have only three lines to contend with, and they are simple to understand: the name of our site, the Markdown parser used by Jekyll, and the syntax highlighter to use (`pygments`).
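Since __config.yml_ is ordinary YAML, Ruby's standard library can read it directly. A quick sketch of what Jekyll sees when it loads this file:

```ruby
require 'yaml'

config = YAML.load(<<~YML)
  name: Your New Jekyll Site
  markdown: redcarpet
  highlighter: pygments
YML

puts config['name']  # => Your New Jekyll Site
```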
To view this site locally run this command:
$ jekyll serve
This command builds the entirety of your Jekyll directory, and then starts a mini web server to serve the files up to you. If you then visit _http://localhost:4000_ in your web browser, you will see the front page of your site and a single blog post listed in the index, as shown in Figure 6-1.
###### Figure 6-1. A bare Jekyll site
Clicking into the link inside the "Blog Posts" section, you will then see your first post, as in Figure 6-2.
###### Figure 6-2. A sample post
Our Jekyll initialization command created this new post for us. This page is backed by the Markdown file inside the __posts_ directory we saw earlier:
---
layout: post
title: "Welcome to Jekyll!"
date: 2014-03-03 12:56:40
categories: jekyll update
---
You'll find this post in your __posts_ directory—edit this post and rebuild (or run with the `-w` switch) to see your changes! To add new posts, simply add a file in the __posts_ directory that follows the convention: YYYY-MM-DD-name-of-post.ext.
Jekyll also offers powerful support for code snippets:
{% highlight ruby %}
def print_hi(name)
puts "Hi, #{name}"
end
print_hi('Tom')
#=> prints 'Hi, Tom' to STDOUT.
{% endhighlight %}
Check out the Jekyll docs for more info on how to get the most out of Jekyll. File all bugs/feature requests at Jekyll's GitHub repo.
Hopefully you agree that this is a fairly intuitive and readable alternative to raw HTML. This simplicity and readability is one of the major benefits of using Jekyll. Your source files maintain a readability that allows you to focus on the content itself, not on the technology that will eventually make them beautiful. Let's go over this file and investigate some of the important pieces.
## YFM: YAML Front Matter
The first thing we see in a Jekyll file is the YAML Front Matter (YFM):
---
layout: post
title: "Welcome to Jekyll!"
date: 2014-03-03 12:56:40
categories: jekyll update
---
YFM is a snippet of YAML ("YAML Ain't Markup Language") delimited by three hyphens on both the top and bottom. YAML is a simple structured data serialization language used by many open source projects instead of XML. Many people find it more readable and editable by humans than XML. The YFM in this file shows a few configuration options: a layout, the title, the date, and a list of categories.
The layout specified references one of the files in our __layouts_ directory. If you don't specify a layout file in the YFM, then Jekyll assumes you want to use a file called _default.html_ to wrap your content. You can easily imagine adding your own custom layout files to this directory and then overriding them in the YFM. If you look at this file, you see that it manually specifies the `post` layout.
The title is used to generate the `<title>` tag and can be used anywhere else you need it inside your template using the double-braces syntax from Liquid Markup: `{{ page.title }}`. Notice that any variable from the __config.yml_ file is prefixed with the `site.` namespace, while variables from your YFM are prefixed with `page.` Though the title matches the filename (after replacing spaces with hyphens), changing the title in the YFM does not affect the name of the URL generated by Jekyll. If you want to change the URL, you need to rename the file itself. This is a nice benefit if you need to slightly modify the title and don't want to damage preexisting URLs.
The date and categories are two other variables included in the YFM. They are completely optional and, strangely, unused by the structure and templates created by default by the Jekyll initializer. They do provide additional context to the post, but are only stored in the Markdown file and not included inside the generated content itself. The categories list is often used to generate an index file of categories with a list of each post included in a category. If you come from a WordPress background, you'll likely have used categories; in WordPress, category listings are generated dynamically from the MySQL database each time they're requested, but in Jekyll this file is statically generated. If you wanted something more dynamic, you could imagine generating a JSON file with these categories and files, and then building a JavaScript widget that requests this file and does something more interactive on the client side. Jekyll can take any template file and convert it to JSON (or any other format)—you are not limited to generating HTML files.
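The category-index idea in the last paragraph is easy to prototype outside of Jekyll. This sketch (plain Ruby, with made-up post data) inverts a list of posts into the category-to-titles mapping you'd serialize as JSON:

```ruby
require 'json'

# Made-up post data standing in for files in _posts.
posts = [
  { 'title' => 'Welcome to Jekyll!', 'categories' => %w[jekyll update] },
  { 'title' => 'A second post',      'categories' => %w[jekyll] },
]

# Invert posts into category => [titles].
by_category = Hash.new { |h, k| h[k] = [] }
posts.each do |post|
  post['categories'].each { |c| by_category[c] << post['title'] }
end

puts JSON.pretty_generate(by_category)
```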
YFM is completely optional. A post or page can be rendered into your Jekyll site without any YFM inside it. Without YFM, your page is rendered using the defaults for those variables, so make sure the default template, at the very least, is what you expect will wrap around all pages left with unspecified layouts.
One important default variable for YFM is the `published` variable. This variable is set to `true` by default. This means that if you create a file in your Jekyll repository and do not manually specify the `published` setting, it will be published automatically. If you set the variable to `false`, the post will not be published. With private repositories you can keep the contents of draft posts entirely private until writing has completed by making sure `published` is set to `false`. Unfortunately, not all tools that help you create Jekyll Markdown files remember to set the `published` variable explicitly inside the YFM, so make sure you check before committing the file to your repository if there is something you don't yet want published.
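The default-to-true behavior is worth internalizing, because it is what makes an unmarked draft leak. In Ruby terms, the decision Jekyll makes for each post is roughly this (the post data is made up for illustration):

```ruby
posts = [
  { 'title' => 'Finished post' },                        # no published key
  { 'title' => 'Secret draft',  'published' => false },
  { 'title' => 'Explicit post', 'published' => true },
]

# A missing published key counts as true, which is exactly the trap
# described above.
visible = posts.select { |p| p.fetch('published', true) }
puts visible.map { |p| p['title'] }.inspect
# => ["Finished post", "Explicit post"]
```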
## Jekyll Markup
Going past the YFM, we can start to see the structure of Markdown files. Markdown files can be, at their simplest, just textual information without any formatting characters. In fact, if your layout files are well done, you can definitely create great blog posts without any fancy formatting, just pure textual content.
But with a few small Markdown additions, you can really make posts shine. One of the first Markdown components we notice is the backtick character, which is used to wrap small spans of code (or code-ish information, like filenames in this case). As you use more and more Markdown, you'll find Markdown to be insidiously clever in the way it provides formatting characters without the onerous weight that HTML requires to offer the same explicit formatting.
Links can be specified inline using `[link text](url)`, where `url` is a fully qualified URL (like "http://example.com"), or by using a reference to a link defined at the bottom of the page. In our page we have two references, keyed as `jekyll-gh` and `jekyll`; we can then use these inside our page with syntax like `[Jekyll's GitHub repo][jekyll-gh]`. Using references has an additional benefit in that you can use the link more than once by its short name.
Though not offered in the sample, Markdown provides an easy way to generate headers of varying degrees. To add a header, use the `#` character, and repeat the `#` character to build smaller headers. These delimiters simply map to the `H` tag; two hash characters (`##`) turns into an `<h2>` tag. Building text enclosed by `<h3>` tags looks like `### Some Text`. You can optionally match the same number of hash symbols at the end of the line if you find it more expressive (`### Some Text ###`), but you don't have to.
Markdown offers easy shortcuts for most HTML elements: numbered and unordered lists, emphasis, and more. And, if you cannot find a Markdown equivalent, you can embed normal HTML right next to Markdown formatting characters. The best way to write Markdown is to keep a Markdown cheat sheet near you when writing. John Gruber from Daring Fireball invented Markdown, and his site has a more in-depth description of the how and why of Markdown.
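For intuition about the heading rule, here is a toy converter we wrote for illustration (this is not Jekyll's actual Markdown engine, which handles far more than headers):

```ruby
# Toy illustration of the Markdown heading rule: N leading hash
# characters become an <hN> tag; optional trailing hashes are allowed.
def heading_to_html( line )
  if line =~ /\A(\#{1,6})\s+(.*?)\s*\#*\s*\z/
    level = $1.length
    "<h#{level}>#{$2}</h#{level}>"
  else
    line  # not a heading; pass through unchanged
  end
end

puts heading_to_html( "## Jekyll Markup" )   # <h2>Jekyll Markup</h2>
puts heading_to_html( "### Some Text ###" )  # <h3>Some Text</h3>
```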
## Using the Jekyll Command
Running `jekyll --help` will show you the options for running Jekyll. You already saw the `jekyll serve` command, which builds the files into the __site_ directory and then starts a web server with its root at that directory. If you start to use this mechanism to build your Jekyll sites then there are a few other switches you'll want to learn about.
If you are authoring and adjusting a page often, and switching back into your browser to see what it looks like, you'll find utility in the `-w` switch ("watch"). This can be used to automatically regenerate the entire site if you make changes to any of the source files. If you edit a post file and save it, that file will be regenerated automatically. Without the `-w` switch you would need to kill the Jekyll server, and then restart it.
The Jekyll watch switch does reload all HTML and markup files, but does not reload the __config.yml_ file. If you make changes to it, you will need to stop and restart the server.
If you are running multiple Jekyll sites on the same laptop, you'll quickly find that the second instance of `jekyll serve` fails because it cannot open port 4000. In this case, use `jekyll serve --port 4010` to serve on port 4010 (or whatever port you wish to use instead).
## Privacy Levels with Jekyll
Jekyll repositories on GitHub can be either public or private. With a private repository you can host the publicly generated content without publishing the Jekyll source files themselves. Remember, as noted previously, that any file without `published: false` inside the YFM will be made public the moment you push it into your repository.
## Themes
Jekyll does not support theming internally, but it is trivial to add any CSS files or entire CSS frameworks. You can also fork an existing Jekyll blog that has the theming you like. We will show how and where to add your own customized CSS later in the chapter.
## Publishing on GitHub
Once you have your blog created, you can easily publish it to GitHub. There are two ways you can publish Jekyll blogs:
* As a github.io site
* On a domain you own
GitHub offers free personal blogs that are hosted on the github.io domain. And you can host any site with your own domain name with a little bit of configuration.
### Using a GitHub.io Jekyll blog
To create a github.io personal blog site, your Jekyll blog should be on the master branch of your Git repository. The repository should be named `username.github.io` on GitHub. If everything is set up correctly you can then publish your Jekyll blog by adding a remote for GitHub and pushing your files up. If you use the `hub` tool (a command for interacting with Git and GitHub), you can go from start to finish with a few simple commands. Make sure to change the first line to reflect your username.
The hub tool was originally written in Ruby and as such could be easily installed using only `gem install hub`, but hub was recently rewritten in Go. Go has a somewhat more complicated installation process, so we won't document it here. If you have the `brew` command installed for OS X, you can install hub with the `brew install hub` command. Other platforms vary, so check _http://github.com/github/hub_ to determine the best way for your system.
Use these commands to install your github.io hosted Jekyll blog:
$ export USERNAME=xrd
$ jekyll new $USERNAME.github.io
$ cd $USERNAME.github.io
$ git init
$ git commit -m "Initial checkin" -a
$ hub create # You'll need to login here...
$ sleep $((10*60)) && open $USERNAME.github.io
The second to the last line creates a repository on GitHub for you with the same name as the directory. That last line sleeps for 10 minutes while your github.io site is provisioned on GitHub, and then opens the site in your browser for you. It can take ten minutes for GitHub to configure your site the first time, but subsequent content pushes will be reflected immediately.
## Hosting On Your Own Domain
To host a blog on your own domain name, you need to use the `gh-pages` branch inside your repository. You need to create a CNAME file in your repository, and then finally establish DNS settings to point your domain to the GitHub servers.
### The gh-pages branch
To work on the `gh-pages` branch, check it out and create the branch inside your repository:
$ git checkout -b gh-pages
$ rake post title="My next big blog post"
$ git add _posts
$ git commit -m "Added my next big blog post"
$ git push -u origin gh-pages
You will need to always remember to work on the `gh-pages` branch; if this repository is only used as a blog, then this probably is not an issue. Adding the `-u` switch will make sure that Git always pushes up the `gh-pages` branch whenever you do a push.
### The CNAME file
The CNAME file is a simple text file with the domain name inside of it:
$ echo 'mydomain.com' > CNAME
$ git add CNAME
$ git commit -m "Added CNAME"
$ git push
Once you have pushed the CNAME file to your repository, you can verify that GitHub thinks the blog is established correctly by visiting the admin page of your repository. An easy way to get there is using the `github` gem, which is no longer actively maintained but is still a useful command-line tool:
$ gem install github
$ github admin # Opens up https://github.com/username/repo/settings
The `github` gem is a useful command-line tool, but unfortunately it is tied to an older version of the GitHub API, which means the documented functionality is often incorrect.
If your blog is correctly set up, you will see something like Figure 6-3 in the middle of your settings page.
###### Figure 6-3. Settings for a Jekyll blog
GitHub has properly recognized the CNAME file and will accept requests made to that host on its servers. We are still not yet complete, however, in that we need to make sure the DNS is established for our site.
### DNS settings
Generally, establishing DNS settings for your site is straightforward. It is easiest if you are setting up DNS with a _subdomain_ as opposed to an _apex domain_. To be more concrete, an apex domain is a site like _mypersonaldomain.com_ , while a subdomain would be _blog.mypersonaldomain.com_.
Setting up a blog on a subdomain is simple: create a CNAME record in DNS that points to _username.github.io_.
For an apex domain, things are slightly more complicated. You must create DNS A records to point to these IP addresses: `192.30.252.153` and `192.30.252.154`. These are the IP addresses right now; there is always the possibility that GitHub could change these at some point in the future. For this reason, hosting on apex domains is risky. If GitHub needed to change its IP addresses (say during a denial-of-service attack), you would need to respond to this, and deal with the DNS propagation issues. If you instead use a subdomain, the CNAME record will automatically redirect to the correct IP even if it is changed by GitHub.
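The apex-versus-subdomain distinction can be summed up in a small decision helper (hypothetical code for illustration; the IPs are the GitHub A records quoted above, and the two-label test is a simplification that ignores multi-part TLDs like `co.uk`):

```ruby
# GitHub Pages A-record IPs as quoted in the text above.
GITHUB_PAGES_IPS = %w[192.30.252.153 192.30.252.154]

# Hypothetical helper: recommend DNS records for a custom domain.
# A bare two-label name ("example.com") is treated as an apex domain
# and needs A records; anything deeper can use the safer CNAME.
def dns_record_for( domain, username )
  if domain.count( "." ) == 1
    GITHUB_PAGES_IPS.map { |ip| "#{domain}. A #{ip}" }
  else
    ["#{domain}. CNAME #{username}.github.io."]
  end
end

puts dns_record_for( "mypersonaldomain.com", "xrd" )
puts dns_record_for( "blog.mypersonaldomain.com", "xrd" )
```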
# Importing from Other Blogs
There are many tools that can be used to import an existing blog into Jekyll. As Jekyll is really nothing more than a file-layout convention, you just need to pull the relevant pieces (the post itself, and associated metadata like the post title, publishing date, etc.) and then write out a file with those contents. Jekyll blogs prefer Markdown, but they work fine with HTML content, so you can often convert a blog with minimal effort, and there are good tools that automate things for you.
## From Wordpress
The most popular importer is the Wordpress importer. You will need the _jekyll-import_ gem. This gem is distributed separately from the core Jekyll gem, but will be installed if you use the `github-pages` gem inside your Gemfile and use the `bundle` command.
### Importing with direct database access
Once you have the `jekyll-import` gem, you can convert a Wordpress blog using a command like this:
$ ruby -rubygems -e 'require "jekyll-import";
    JekyllImport::Importers::WordPress.run({
      "dbname"   => "wordpress",
      "user"     => "hastie",
      "password" => "lanyon",
      "host"     => "localhost",
      "status"   => ["publish"]
    })'
This command will import from an existing Wordpress installation, provided that your Ruby code can access your database. This will work if you can log in to the server itself and run the command on the server, or if the database is accessible across the network (which is generally bad practice when hosting Wordpress!).
Note the status option: this specifies that imported pages and posts are published automatically. More specifically, the YAML for each file will specify `published: true`, which will publish the page or post into your blog. If you want to review each item individually, you can specify a status of `private`, which will export the pages into Jekyll but leave them unpublished. Remember that if your repository is public, posts marked as unpublished will not be displayed in the blog but can still be seen if someone visits the repository for your blog on GitHub.
There are many more options than listed here. For example, by default, the Wordpress-Jekyll importer imports categories from your Wordpress database, but you can turn this off by specifying `"categories" => false`.
### Importing from the Wordpress XML
Another alternative is to export the entire database as an XML file. Then, you can run the importer on that file:
$ ruby -rubygems -e 'require "jekyll-import";
    JekyllImport::Importers::WordpressDotCom.run({
      "source"          => "wordpress.xml",
      "no_fetch_images" => false,
      "assets_folder"   => "assets"
    })'
This can be used to export files from a server you don't maintain, but works with sites you do maintain and might be a more plausible option than running against a database.
To export the XML file, visit the export page on your Wordpress site. This is usually mapped to _/wp-admin/export.php_, so it will be something like _https://blogname.com/wp-admin/export.php_ (replacing "blogname.com" with your blog's name).
Like many free tools, there are definitely limitations to using this method of export. If your Wordpress site is anything beyond the simplest of Wordpress sites, then using this tool to import from Wordpress means you will lose much of the metadata stored inside your blog. This metadata can include pages, tags, custom fields, and image attachments.
If you want to keep this metadata, then you might consider another import option like Exitwp. Exitwp is a Python tool that provides a much higher level of fidelity between the original Wordpress site and the final Jekyll site, but has a longer learning curve and option set.
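To get a feel for what an XML importer is parsing, here is a hedged sketch that pulls post titles out of a minimal Wordpress-export-shaped document using Ruby's bundled REXML library (the sample XML is ours; a real WXR file also carries authors, categories, attachments, and more):

```ruby
require 'rexml/document'

# Minimal stand-in for a Wordpress WXR export file.
xml = <<~WXR
  <rss version="2.0">
    <channel>
      <item><title>First post</title></item>
      <item><title>Second post</title></item>
    </channel>
  </rss>
WXR

doc = REXML::Document.new( xml )
# Each exported post lives in an <item>; grab the titles via XPath.
titles = REXML::XPath.match( doc, "//item/title" ).map( &:text )
puts titles
```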
## Exporting from Wordpress Alternatives
If you use another blog format other than Wordpress, chances are there is a Jekyll importer for it. Jekyll has dozens of importers, well documented on the Jekyll importer site.
For example, this command-line example from the importer site exports from Tumblr blogs:
$ ruby -rubygems -e 'require "jekyll-import";
    JekyllImport::Importers::Tumblr.run({
      "url"            => "http://myblog.tumblr.com",
      "format"         => "html",
      "grab_images"    => false,
      "add_highlights" => false,
      "rewrite_urls"   => false
    })'
The Tumblr import plug-in has a few interesting options:

* `format`: write out HTML; if you prefer to use Markdown, use `md`.
* `grab_images`: grab images from the blog if this is set to true.
* `add_highlights`: wrap code blocks (indented four spaces) in a Liquid Markup "highlight" tag if this is set to true.
* `rewrite_urls`: write pages that redirect from the old Tumblr paths to the new Jekyll paths.
Exporting from Tumblr is considerably easier than Wordpress. The Tumblr exporter scrapes all public posts from the blog, and then converts to a Jekyll-compatible post format.
We've seen how we can use the importers available on _import.jekyllrb.com_ to import. What if we have a nonstandard site we need to import?
# Scraping Sites into Jekyll
Jekyll provides various importers that make it easy to convert an existing blog into a Jekyll blog. But if you have a nonstandard blog, or a site that is not a blog, you still have options for migrating it to Jekyll. The first option is to write your own importer by perusing the source of the Jekyll importers on GitHub. This is probably the right way to build an importer if you plan on letting others use it, as it will extend several Jekyll importer classes already available to make importing standard for other contributors.
Another option is to simply write out files in the simple format that is a Jekyll blog. This is much lazier than reading through the Jekyll tools and their libraries, of course. I started as a Perl programmer and always loved this quote from Larry Wall, the creator of Perl: "We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris." Let's accept our inherent laziness and choose the second route. We'll write some code to scrape a site and make a new Jekyll site from scratch, learning about the structure of a Jekyll blog through trial and error.
While living in Brazil in 2000 I built a site called ByTravelers.com, an early travel blog. At some point, I sadly lost the database and thought the site contents were completely gone. Almost by accident, I happened upon ByTravelers on Archive.org, the Internet Archive. I found that almost all of the articles were listed there and available. Though the actual database is long gone, could we recover the data from the site using Archive.org?
## Jekyll Scraping Tactics
We can start by looking at the structure of the archive presented on Archive.org. Go to Archive.org, enter "bytravelers.com" into the search box in the middle of the page, and then click "BROWSE HISTORY." You will see a calendar view that shows all the pages scraped by the Internet Archive for this site as shown in Figure 6-4.
###### Figure 6-4. Calendar view of Archive.org
In the middle of 2003 I took down the server, intending to upgrade it to another set of technologies, and never got around to completing this migration, and then lost the data. If we click the calendar item on June 6th, 2003, we will see a view of the data that was more or less complete at the height of the site's functionality and data. There are a few broken links to images, but otherwise the site is functionally archived inside Archive.org (Figure 6-5).
###### Figure 6-5. Archive of ByTravelers.com on Archive.org
Taking the URL from our browser, we can use this as our starting point for scraping. Clicking around throughout the site, it becomes evident that each URL to a journal entry uses a standard format; in other words, _http://www.bytravelers.com/journal/entry/56_ indicates the 56th journal item stored on the site. With this knowledge in hand, we can iterate over the first hundred or so URLs easily.
## Setting Up
A naive implementation of a scraper would be a single Ruby file in which the execution and functionality were contained all in one. However, if we expose the functionality as a class, and then instantiate the class in a separate file, we can also write tests that utilize and validate the same steps as the runner script. So, let's take this smarter approach and create three files: the scraper class, the runner class (which instantiates and "runs" our scraper), and the test file (which instantiates and validates the functionality of our scraper).
First, the runner script:
#!/usr/bin/env ruby
require './scraper'
scraper = Scraper.new()
scraper.run()
Our barebones scraper class just looks like this:
class Scraper
  def run
  end
end
We also need to have a manifest file, the Gemfile, where we will document our library dependencies:
source "https://rubygems.org"
gem "github-pages"
gem "rspec"
Then, install our gems using the command `bundle`. That installs the `rspec` tool, the Jekyll tool, and associated libraries.
Finally, we can create our test harness:
require './scraper'
describe "#run" do
  it "should run" do
    scraper = Scraper.new
    scraper.run()
  end
end
Remember to run using the `bundle exec rspec scraper_spec.rb` command, which makes everything run inside the bundler context (and load our libraries from the Gemfile, instead of the default system gems):
$ bundle exec rspec scraper_spec.rb
.
Finished in 0.00125 seconds (files took 0.12399 seconds to load)
1 example, 0 failures
There is nothing we are explicitly testing yet, but our test harness displays that our code inside our tests will match closely the code we write inside our runner wrapper.
## Scraping Titles
Let's start with something simple: scraping the titles from the site. We'll use Ruby to scrape the site; Ruby has some intuitive gems like `mechanize` that simplify building web clients. There is an API for the Internet Archive, but I found it flaky and unreliable, so we'll just scrape the site. Add the `mechanize` gem to the Gemfile using this command and then install the libraries:
$ echo "gem 'mechanize'" >> Gemfile
$ bundle
Now we can modify our scraper to use the `mechanize` gem and retrieve content from Archive.org:
require 'mechanize' # (1)

class Scraper
  attr_accessor :root # (2)
  attr_accessor :agent

  def initialize # (3)
    @root = "http://web.archive.org/web/20030820233527/" +
            "http://bytravelers.com/journal/entry/" # (4)
    @agent = Mechanize.new
  end

  def run
    100.times do |i| # (5)
      url = "#{@root}#{i}" # (6)
      @agent.get( url ) do |page|
        puts "#{i} #{page.title}"
      end
    end
  end
end

1. Require the `mechanize` library.
2. We use a Ruby method called `attr_accessor`, which creates a public instance variable. We can use variables created using `attr_accessor` by prefixing the variable name with an `@` character. Instance variables are accessible outside the class as well.
3. When a method named `initialize` is defined for a class, this method is called right after object creation, so this is the appropriate place for us to initialize the member variables.
4. Initialize the variables to default values. We store the root of the URL to the cached copy of ByTravelers.com here.
5. Our `run` method runs the block inside 100 times.
6. Our block starts by generating a URL to the specific page, retrieves the page, and then prints out the index in our loop plus the title of the page object.
Let's run our scraper and see what happens now:
$ bundle exec ./run.rb
...
53 Read Journal Entries
54 Read Journal Entries
55 Read Journal Entries
56 Read Journal Entries
57 Internet Archive Wayback Machine
58 Internet Archive Wayback Machine
...
You can see that some of the entries have a generic "Internet Archive Wayback Machine" while some have "Read Journal Entries." Archive.org will respond with a placeholder title when it does not have content from the site (as is the case with item #58, for example). We should ignore those pages that don't have the string "Read Journal Entries" as the title (which tells us Archive.org does have cached content from our site).
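The filter described above boils down to a one-line predicate. A small self-contained sketch of the idea (using a `Struct` as a stand-in for the Mechanize page objects):

```ruby
# Keep only pages whose title shows Archive.org actually has cached
# content; a placeholder title means the page was missing.
ARCHIVED_TITLE = "Read Journal Entries"

Page = Struct.new( :title )  # stand-in for a Mechanize page object
pages = [
  Page.new( ARCHIVED_TITLE ),
  Page.new( "Internet Archive Wayback Machine" )
]

archived = pages.select { |p| p.title == ARCHIVED_TITLE }
puts archived.length  # 1
```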
Now that we have all the content, we can start finding the important pieces inside and putting them into our Jekyll posts.
## Refining with Interactive Ruby
There are two things that make Mechanize immensely powerful as the foundation for a scraping tool: easy access to making HTTP calls, and a powerful searching syntax once you have a remote document. You've seen how Mechanize makes it simple to make a GET request. Let's explore sifting through a massive document to get the important pieces of textual content. We can manually explore scraping using the Ruby IRB (interactive Ruby shell):
$ irb -r./scraper
2.0.0-p481 :001 > scraper = Scraper.new
=> #<Scraper:0x00000001e37ca8...>
2.0.0-p481 :002 > page = scraper.agent.get "#{scraper.root}#{56}"
=> #<Mechanize::Page {url #<URI::HTTP:0x00000001a85218...>
The first line invokes IRB and uses the `-r` switch to load the scraper library in the current directory. If you have not used IRB before, there are a few things to know that will make life easier. The IRB has a prompt, which indicates the version of Ruby you are using, and the index of the command you are running. IRB has a lot of features beyond what we will discuss here, but those indexes can be used to replay history and for job control, like many other types of shells. At the IRB prompt you can enter Ruby and IRB executes the command immediately. Once the command executes, IRB prints the result; the characters `=>` indicate the return value. When you are playing with Ruby, return values will often be complex objects: the return value when you use `scraper.agent.get` is a Mechanize Ruby object. This is a very large object, so printing it out takes a lot of real estate. We've abbreviated the majority of it here, and will do that for many complex objects to save space when discussing IRB.
The last command in IRB saves the HTTP GET request as a page object. Once we have the page, how do we extract information from it? Mechanize has a nice piece of syntactic sugar that makes it easy to search the DOM structure: the "/" operator. Let's try it:
2.0.0-p481 :003 > page / "tr"
=> []
If our query path had found anything, we would have seen a return value with an array of Mechanize objects, but in this case we got back an empty array (which indicates nothing was found). Unfortunately, the paths vary when the document is loaded into a browser (the browser can customize the DOM or the server can send slightly different data to the client). But if we experiment with similar paths inside IRB, we will find what we need. It helps to jump back and forth between Chrome and IRB, examining the structure of the HTML inside Chrome and then testing a search path using IRB. Eventually, we come across this search path:
2.0.0-p481 :004 > items = page / "table[valign=top] tr"
=> [#<Nokogiri::XML::Element:0xc05670 name="font"
attributes=[#<Nokogiri::XML::Attr:0xc05328 name="size"
value="-2">]...
2.0.0-p481 :005 > items.length
=> 5
2.0.0-p481 :006 > items[0].text()
=> "\n\n\n\n\n\n\n\n\n\nBeautiful Belize\n\n\n\n\n\n\n"
2.0.0-p481 :007 > items[0].text().strip
=> "Beautiful Belize"
Eureka, we found the pattern that gives us our title. We had to jump around inside the results from the query, but we can correlate the text on the page inside the browser with different structures found using the query inside IRB. It is important to note that we have to strip whitespace from the title to make it presentable. We can incorporate this into our scraper code, but this is a good moment to think about how we can write tests to verify this works properly. And when we start writing tests, we open the door for another opportunity: caching to our HTTP requests.
## Writing Tests and Caching
Were we to run our `run.rb` script again, we would notice that it prints the document title, then halts as it retrieves the content from the server, and then prints again, stopping and starting until complete. The content from Archive.org does not change at all since the original site was scraped years ago, so there is no reason we need to get the latest content; content even several months stale will be the same as content retrieved a few moments ago. It seems like a good opportunity to put a caching layer between us and the code, reducing impact on Archive.org and making our script run faster. In addition, if we structure our code to make retrieval and processing happen independently, we can write tests to verify the processing:
require 'mechanize'
require 'vcr' # (1)

VCR.configure do |c| # (2)
  c.cassette_library_dir = 'cached'
  c.hook_into :webmock
end

class Scraper
  attr_accessor :root
  attr_accessor :agent
  attr_accessor :pages # (3)

  def initialize
    @root = "http://web.archive.org/web/20030820233527/" +
            "http://bytravelers.com/journal/entry/"
    @agent = Mechanize.new
    @pages = [] # (4)
  end

  def scrape
    100.times do |i|
      begin
        VCR.use_cassette("bt_#{i}") do # (5)
          url = "#{@root}#{i}"
          @agent.get( url ) do |page|
            if page.title.eql? "Read Journal Entries" # (6)
              pages << page
            end
          end
        end
      rescue Exception => e
        STDERR.puts "Unable to scrape this file (#{i})"
      end
    end
  end

  def process_title( row )
    row.strip # (7)
  end

  def run
    scrape()
    @pages.each do |page| # (8)
      rows = ( page / "table[valign=top] tr" )
      puts process_title( rows[0].text() )
    end
  end
end

1. We require the VCR gem: this gem intercepts HTTP requests, sending them out normally the first time, and caching all successive calls, completely transparent to the user.
2. VCR must be configured when you use it: in this case we specify a directory where results will be cached, and tell it what mocking library we should use to store the cached results.
3. We establish a new variable called `pages`. We will scrape all the pages into this array (and get them for free once the information is cached).
4. Initialize the `pages` array here.
5. To use the VCR recording feature, we wrap any code that makes HTTP requests inside a VCR block with a name specifying the `cassette` to save it under. In this case, we use a cassette named `bt` (for ByTravelers) with the index of the page. The first time we use the scraper to request the page, it is retrieved and stored inside the cache. Successive calls to the scraper `get` method are retrieved from the cached responses.
6. We then look for any titles that look like pages archived into Archive.org (using the title to differentiate) and if we find one, store that page into our pages array for later processing.
7. We move the title processing into its own method called `process_title`. Here we use the information and remove any whitespace.
8. Inside of `run` we now call `scrape` to load the pages, and then iterate over each page, searching inside them and processing the titles.
We need to install the VCR and webmock libraries, so add them to the Gemfile:
$ echo "gem 'vcr'" >> Gemfile
$ echo "gem 'webmock'" >> Gemfile
$ bundle
If we run our script using `bundle exec ruby ./run.rb`, we will see it print out the titles:
$ bundle exec ruby ./run.rb
Unable to scrape this file (14)
Unable to scrape this file (43)
Unable to scrape this file (47)
Unable to scrape this file (71)
Unable to scrape this file (94)
Unable to scrape this file (96)
Third day in Salvador
The Hill-Tribes of Northern Thailand
Passion Play of Oberammergau
"Angrezis in Bharat"
Cuba - the good and bad
Nemaste
Mexico/Belize/Guatemala
South Africa
...
We print out the errors (when Archive.org does not have a page for a particular URL). Note that as a side effect of caching, things work much faster. If we analyze the time we save using the `time` command, we see these results:
$ time bundle exec ruby ./run.rb # before VCR
real 0m29.907s
user 0m2.220s
sys 0m0.170s
$ time bundle exec ruby ./run.rb # after VCR
real 0m3.750s
user 0m3.474s
sys 0m0.194s
So, it takes an order of magnitude more time without caching. And, we get these cached responses for free, and inside our IRB sessions as well.
The titles look good, but the fourth one is a little worrisome. Looks like one of the users decided to enclose their title in double quotes. To control the formatting, it would be nice to clean that up. Let's do that, and write tests to verify things work:
require './scraper'

describe "#run" do
  before :each do
    @scraper = Scraper.new
  end

  describe "#process_titles" do
    it "should correct titles with double quotes" do
      str = ' something " with a double quote'
      expect( @scraper.process_title( str ) ).to_not match( /"/ )
    end

    it "should strip whitespace from titles" do
      str = '\n\n something between newlines \n\n'
      expect( @scraper.process_title( str ) ).to_not match( /^\n\n/ )
    end
  end
end
If we run this, we see one test pass and one test fail:
$ bundle exec rspec scraper_spec.rb
F.
Failures:
1) #run #process_titles should correct titles with double quotes
Failure/Error: expect( @scraper.process_title( ' something " with
a double quote' ) ).to_not match( /"/ )
expected "something \" with a double quote" not to match /"/
Diff:
@@ -1,2 +1,2 @@
-/"/
+"something \" with a double quote"
# ./scraper_spec.rb:10:in `block (3 levels) in <top (required)>'
Finished in 0.01359 seconds (files took 0.83765 seconds to load)
2 examples, 1 failure
Failed examples:
rspec ./scraper_spec.rb:9 # #run #process_titles should correct titles
with double quotes
To fix this test, let's strip out the double quotes by changing one line in the _scraper.rb_ file:
...
def process_title( row )
  row.strip.gsub( /"/, '' )
end
...
Now both tests pass. That line of code might be worrisome if you believe in defensive coding. If this function were called with a nil value, for example, it would crash. Even if we could guarantee that this situation would never occur from our calling context, it is better to make our method safe. Let's make sure it works and write a test to prove it.
Add a test that asserts there is not an error when the argument to `process_title` is nil:
...
it "should not crash if the title is nil" do
  expect{ @scraper.process_title( nil ) }.to_not raise_error()
end
...
Running `rspec scraper_spec.rb` results in the following error, which we expect since we have not yet fixed the code:
..F..
Failures:
1) #run #process_titles should not crash if the title is nil
Failure/Error: expect{ @scraper.process_title( nil ) }.to_not raise_error()
expected no Exception, got #<NoMethodError: undefined method
`strip' for nil:NilClass> with backtrace:
# ./scraper.rb:38:in `process_title'
# ./scraper_spec.rb:20:in `block (4 levels) in <top (required)>'
# ./scraper_spec.rb:20:in `block (3 levels) in <top (required)>'
# ./scraper_spec.rb:20:in `block (3 levels) in <top (required)>'
Finished in 0.00701 seconds
5 examples, 1 failure
Failed examples:
rspec ./scraper_spec.rb:19 # #run #process_titles should not crash if the title
# is nil
We can fix it with this one simple change:
...
def process_title( row )
  row.strip.gsub( /"/, '' ) if row
end
...
Now we are in a position to write out the files for our actual posts.
## Writing Jekyll Posts
With our titles in hand, we can generate an actual Jekyll post. To keep things simple each post will contain nothing beyond the titles for now, but we will quickly add other content. Getting the skeleton of a post established allows us to use the Jekyll command-line tools to troubleshoot our setup.
First, create a Git repository for our files. When the Jekyll tool runs, it generates all the files into a directory called __site_ so we should add a _.gitignore_ file, which ignores this directory:
$ git init
$ mkdir _posts
$ echo "_site" >> .gitignore
$ git add .gitignore
$ git commit -m "Initial checkin"
Jekyll Markdown files are very simple: just a bit of YAML at the beginning, with text content following, formatted as Markdown. To generate Markdown posts, add a method called `write` to our scraper that writes out the processed information after we have retrieved and parsed the pages from Archive.org.
Jekyll posts are stored inside the __posts_ directory. As a convention, filenames are generated with the date and title, lowercased, converted to a string without any characters beyond a-z and the hyphen, and terminated by the extension (usually _.md_ for Markdown). In order to properly generate the filename, we will need to scrape the date, so we will do that as well.
As a more concrete example, we want to take something like `Cuba - the good and bad` that happened on January 12th, 2001, and make a filename like `2001-01-12-cuba-the-good-and-bad.md`. Or, `Mexico/Belize/Guatemala` from the same date, and make it into the filename `2001-01-12-mexico-belize-guatemala.md`. These conversions look like good places to write tests, so we can start there:
describe "#get_filename" do
it "should take 'Cuba - the good and bad' on January 12th, 2001" +
" and get a proper filename" do
input = 'Cuba - the good and bad'
date = "January 12th, 2001"
output = "2001-01-12-cuba-the-good-and-bad.md"
expect( @scraper.get_filename( input, date ) ).to eq( output )
end
it "should take 'Mexico/Belize/Guatemala' and get a proper filename" do
input = "Mexico/Belize/Guatemala"
date = "2001-01-12"
output = "2001-01-12-mexico-belize-guatemala.md"
expect( @scraper.get_filename( input, date ) ).to eq( output )
end
end
Let's build the `get_filename` method. This method uses the handy Ruby `DateTime.parse` method to convert a string representation of a date into a date object, and then uses the `strftime` method to format that date into the format we want in our filename:
...
def get_filename( title, date )
processed_date = DateTime.parse( date )
processed_title = title.downcase.gsub( /[^a-z]+/, '-' )
"#{processed_date.strftime('%Y-%m-%d')}-#{processed_title}.md"
end
...
If we run our tests now, we will see them both pass.
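`DateTime.parse` is doing the heavy lifting here: it accepts many human-readable formats, which is why `get_filename` handles both date styles seen in the tests above. A quick sketch:

```ruby
require 'date'

# DateTime.parse accepts both prose-style and ISO dates, so get_filename
# does not need to know which format the scraped page used.
[ 'January 12th, 2001', '2001-01-12' ].each do |raw|
  puts DateTime.parse( raw ).strftime( '%Y-%m-%d' )
end
```

Both lines print `2001-01-12`.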
Now we can add to our scraper so that it can write out the posts:
def render( processed ) # (1)
  processed['layout'] = 'post'
  rendered = "#{processed.to_yaml}---\n\n" # (2)
  rendered
end

def write( rendered, processed ) # (3)
  Dir.mkdir( "_posts" ) unless File.exists?( "_posts" )
  filename = get_filename( processed['title'], processed['creation_date'] )
  File.open( "_posts/#{filename}", "w+" ) do |f|
    f.write rendered
  end
end

def process_creation_date( date )
  tuple = date.split( /last updated on:/ ) # (4)
  rv = tuple[1].strip if tuple and tuple.length > 1
  rv
end

def run
  scrape()
  @pages.each do |page| # (5)
    rows = ( page / "table[valign=top] tr" )
    processed = {}
    processed['title'] = process_title( rows[0].text() )
    processed['creation_date'] = process_creation_date( rows[3].text() )
    rendered = render( processed ) # (6)
    write( rendered, processed )
  end
end
1. We define a `render` method. This takes the processed information (which arrives as a hash) and renders it into the proper format: the YAML Front Matter (YFM) followed by the body (which we don't have yet). We then return the rendered string.
2. We use the `to_yaml` method on our hash. This method appears when we include the YAML library using `require 'yaml'` (not displayed here, but easy to add to the _scraper.rb_ file and present in the samples on GitHub).
3. The `write` method writes the rendered content to disk. It makes sure the __posts_ directory is available, and if not, creates it. It then writes out the file, using our `get_filename` method to get the path, prefixed with the __posts_ directory.
4. `process_creation_date` takes a piece of the scraped page, splits it on the string `last updated on:`, and uses the second item in the resultant array.
5. Inside our `run` method we now build out the processed hash, finding the date and title using rows from the query path we used before.
6. Once we have our processed hash, we can "render" it and then write the rendered string to our filesystem.
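Putting `render` and `process_creation_date` together on a hand-built hash shows the exact YFM we expect in the generated files; this sketch runs without any scraping:

```ruby
require 'yaml'

# Standalone copies of the two methods, exercised on a hand-built hash.
def process_creation_date( date )
  tuple = date.split( /last updated on:/ )
  rv = tuple[1].strip if tuple and tuple.length > 1
  rv
end

def render( processed )
  processed['layout'] = 'post'
  "#{processed.to_yaml}---\n\n"
end

processed = {}
processed['title'] = 'Beautiful Belize'
processed['creation_date'] = process_creation_date( 'last updated on: 2003-03-23' )
puts render( processed )
```

The output is the same YFM-only post shown a little later in this section: a `---` delimiter, the title, the creation date, the layout, and a closing `---`.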
If we generate the posts by calling `bundle exec ruby ./run.rb` we will see our posts generated into the __posts_ directory. Choosing a random one, they look like this:
---
title: Beautiful Belize
creation_date: '2003-03-23'
layout: post
---
As you can see, for now, posts are nothing more than the YFM, but this is still a perfectly valid Jekyll post.
Now let's use the `jekyll` command-line tool to start looking at our posts and to troubleshoot any issues with our Jekyll repository.
## Using the Jekyll Command-Line Tool
Taking a moment to add our files to the Git repository, we can then take a look at our site using the `jekyll` command-line tool. Running the tool locally lets us spot-check our new content, since we see errors immediately (rather than getting notification emails from GitHub after publishing there). Errors can occur if our scraper does not correctly process the HTML retrieved from Archive.org and subsequently generates invalid Markdown content, for example.
$ git add .
$ git commit -m "Make this into a Jekyll site"
...
$ jekyll serve --watch
Configuration file: none
Source: /home/xrdawson/bytravelers
Destination: /home/xrdawson/bytravelers/_site
Generating...
Build Warning: Layout 'post' requested in _posts/2000-05-23-third-day-in...
Build Warning: Layout 'post' requested in _posts/2000-08-28-the-hill-tri...
...
done.
Auto-regeneration: enabled for '/home/xrdawson/bytravelers'
Configuration file: none
Server address: http://0.0.0.0:4000/
Server running... press ctrl-c to stop.
So, we see a few problems already. First, we don't have a layout for "post." And, there is no configuration file. Let's fix these problems.
Add a file called __config.yml_ to the root directory:
name: "ByTravelers.com: Online travel information"
markdown: redcarpet
highlighter: pygments
Remember, the `jekyll` tool does not reload the configuration file automatically, so we should stop it with Ctrl-C and start it again.
Then, create a directory called __layouts_ , and place a file called _post.html_ inside it with these contents:
---
layout: default
---
<h1>{{ page.title }}</h1>
{{ content }}
The _post.html_ layout file is very simple: we use Liquid Markup tags to write out the title of the site (contained in an object called `page`, which our template has access to) and then the content itself, which is the rendered output from the post page.
We also need to create a "default" layout, so create this inside the __layouts_ directory with the filename _default.html_ :
<html>
<head>
<title>ByTravelers.com</title>
</head>
<body>
{{ content }}
</body>
</html>
This file is almost pure HTML, with only the `{{ content }}` tag. When we specify `default` as the layout inside YAML for a Markdown file, the Markdown text is converted to HTML, and then this layout file is wrapped around it. You can see that the initial post files specify the `post` layout, which is wrapped around the content, then the _post.html_ layout file specifies the _default.html_ layout, which is wrapped around the entire contents.
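The nesting of layouts can be simulated in a few lines of Ruby. This is a conceptual sketch only, not Jekyll's real implementation: each layout names its parent, and `{{ content }}` marks the insertion point, just as in the files above.

```ruby
# Conceptual sketch of Jekyll layout nesting (illustrative, not Jekyll's code).
LAYOUTS = {
  'post'    => { 'parent' => 'default',
                 'body'   => "<h1>{{ page.title }}</h1>\n{{ content }}" },
  'default' => { 'parent' => nil,
                 'body'   => "<html><body>{{ content }}</body></html>" }
}

def wrap( content, layout_name, title )
  while layout_name
    layout = LAYOUTS[layout_name]
    content = layout['body']
                .gsub( '{{ page.title }}', title )
                .gsub( '{{ content }}', content )
    layout_name = layout['parent']
  end
  content
end

puts wrap( "A post body.", 'post', 'Beautiful Belize' )
```

The rendered post body is wrapped by the `post` layout, and that result is in turn wrapped by the `default` layout, producing a complete HTML page.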
When we add these files, the Jekyll tool will notice the filesystem has changed and regenerate files. We now have generated posts, but we don't have a master index file, so let's add this now.
## Master Index File with Liquid Markup
We now have the posts generated properly, but we don't have an entry page into the blog. We can create an _index.md_ file, which just displays an index of all the blog posts:
---
layout: default
---
# ByTravelers.com
Crowd sourced travel information.
<br/>
<div>
{% for post in site.posts %}
<a href="{{ post.url }}"><h2> {{ post.title }} </h2></a>
{{ post.content | strip_html | truncatewords: 40 }}
<br/>
<em>Posted on {{ post.date | date_to_string }}</em>
<br/>
{% endfor %}
</div>
Notice that the file combines Markdown (the single `#` character converts into an H1 tag) with regular HTML. You are free to mix regular HTML inside of Markdown files when there is not a Markdown equivalent.
Output tags use double braces surrounding the content (`{{ site.title }}`) while logic tags use a brace and percent symbol (`{% if site.title %}`). As you might expect, output tags place some type of visible output into the page, and logic tags perform some logic operation, like conditionals or loops.
The preceding template has both output and logic tags. We see a logic tag in the form of `{% for ... %}`, which loops over each post. Jekyll will process the entire posts directory and provide it to pages inside the `site.posts` variable, and the `for` logic tag allows us to iterate over them. If we use a `{% for ... %}` tag we need to "close" the tag with a matching `{% endfor %}` tag. Inside of our for loop we have several output tags: `{{ post.url }}` outputs the URL associated with a post, for example. We also have _filters_ , which are methods defined to process data. One such filter is the `strip_html` filter, which, as you might guess, strips HTML tags out of the text. This is necessary when your text could include HTML tags. You'll also notice that filters can be "chained": we process the body with the `strip_html` filter and then truncate the text to 40 words using the `truncatewords: 40` filter.
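To make the filter behavior concrete, here are rough Ruby approximations of the two filters used in the template. The real implementations live inside Liquid; these are illustrative stand-ins only:

```ruby
# Rough Ruby approximations of Liquid's strip_html and truncatewords
# filters (illustrative only; Liquid's real implementations differ).
def strip_html( text )
  text.gsub( /<[^>]+>/, '' )
end

def truncatewords( text, count )
  words = text.split
  words.length > count ? words.first( count ).join( ' ' ) + '...' : text
end

html = "<p>Crowd sourced <em>travel</em> information.</p>"
puts truncatewords( strip_html( html ), 3 )  # => Crowd sourced travel...
```

Chaining in Liquid (`{{ post.content | strip_html | truncatewords: 40 }}`) is equivalent to this nested call: the output of one filter becomes the input of the next.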
If we open __http://localhost:4000__ in our browser, we will see a simple index page with the titles of our posts, like Figure 6-6.
###### Figure 6-6. The Index Page, for a naked Jekyll blog
This index page lists every post; let's make it display only the 10 most recent. Copy the _index.md_ file to a file named _archive.md_ so the full list remains available. Then, in _index.md_, change the `{% for post in site.posts %}` tag to `{% for post in site.posts limit:10 %}`.
Each post has an associated page that is generated by Jekyll. Clicking any of the links displays the post, which is right now just the title. We can now add the rest of the pages from our scraper.
## Scraping Body and Author
Use IRB to find the author and body content. Start by searching for the author information:
2.0.0-p481 :037 > rows[2].to_s
=> "<tr>\n<td align=\"center\">\n\n\n\n<font size=\"+1\">author:..."
2.0.0-p481 :038 > ( rows[2] / "td font" )[0].text()
=> "author: \n\nMD \n\n\nread more from this author | \nsee maps from this..."
2.0.0-p481 :039 > author = ( rows[2] / "td font" )[0].text()
=> "author: \n\nMD \n\n\nread more from this author | \nsee maps from this..."
2.0.0-p481 :040 > author =~ /author:\s+\n\n([^\s]+)\n\n/
=> 0
2.0.0-p481 :041 > $1
=> "MD"
We start by looking at the third row (index 2) and converting it to raw HTML. We see there is a string `author:`, which is a likely place to find the author. This string is wrapped by a `font` tag and a `td` tag, so we can use these in a search query to eliminate extra information. Then, we convert the HTML to text using the `text()` method and use a regular expression to pull out the text after the `author:` string. If a regular expression matches and contains a captured group, the captured text is held in the global variable `$1`. There is more than one way to get this information, of course.
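The `$1` behavior is easy to try outside of IRB. This sketch uses a simplified sample string (the real scraped text has extra trailing whitespace):

```ruby
# Simplified sample of the scraped author row; after =~ succeeds, the
# first captured group lands in the $1 global.
text = "author: \n\nMD\n\nread more from this author"
if text =~ /author:\s+\n\n([^\s]+)\n\n/
  puts $1  # => MD
end
```

`=~` returns the index of the match (here `0`), or `nil` if there is no match, which is why it works cleanly as an `if` condition.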
Next, we retrieve our body from the scraped page. Add a method called `process_body` and insert this into our processed hash:
def render( processed )
  processed['layout'] = 'post'
  filtered = processed.reject{ |k,v| k.eql?('body') } # (1)
  rendered = "#{filtered.to_yaml}---\n\n" + # (2)
    "### Written by: #{processed['author']}\n\n" +
    processed['body']
  rendered
end

def process_body( paragraphs ) # (3)
  paragraphs.map { |p| p.text() }.join "\n\n"
end

def run
  scrape()
  @pages.each do |page|
    rows = ( page / "table[valign=top] tr" )
    processed = {}
    processed['title'] = process_title( rows[0].text() )
    processed['creation_date'] = process_creation_date( rows[3].text() )
    processed['body'] = process_body( rows[4] / "p" ) # (4)
    author_text = ( rows[2] / "td font" )[0].text() # (5)
    processed['author'] = $1.strip if author_text =~ /author:\s+\n\n+(.+)\n\n+/
    rendered = render( processed )
    write( rendered, processed )
  end
end
1. We need to rewrite `render` slightly. There is no need for the entire body content of a post to be included in the YFM, so we filter it out using the `reject` method.
2. Then, we append the author and body content to generate the new rendered output.
3. Our `process_body` method is straightforward: we convert each node passed in to text (using the `text()` method) and then rejoin them with double newlines. Markdown will properly format paragraphs if they are separated by two newlines.
4. We then just need to invoke the `process_body` method and insert the results into our processed hash.
5. Next, we use the query path we found in our IRB session to retrieve the author information and insert it into our processed hash. The author name will then be inserted into our YFM automatically within the `render` method, and we will insert it into the post.
We can then run `bundle exec ./run.rb` to rewrite our post files.
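Because `process_body` only depends on each node responding to `text()`, we can exercise it with simple stand-in objects instead of real Nokogiri nodes (the `Para` struct below is purely illustrative):

```ruby
# Stand-in for a Nokogiri node: all process_body needs is a text() method.
Para = Struct.new(:content) do
  def text
    content
  end
end

def process_body( paragraphs )
  paragraphs.map { |p| p.text() }.join "\n\n"
end

paras = [ Para.new("First paragraph."), Para.new("Second paragraph.") ]
puts process_body( paras )
```

This prints the two paragraphs separated by a blank line, exactly the double-newline separation Markdown needs.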
## Adding Images to Jekyll
Jekyll can host any binary files as well, and Markdown files can host the proper markup to include these assets. Let's add the images from the original site:
def process_image( title )
img = ( title / "img" )
src = img.attr('src').text()
filename = src.split( "/" ).pop
output = "assets/images/"
FileUtils.mkdir_p output unless File.exists? output
full = File.join( output, filename )
if not File.exists? full or not File.size? full
root = "https://web.archive.org"
remote = root + src
# puts "Downloading #{full} from #{remote}"
`curl -L #{remote} -o #{full}`
end
filename
end
We use the venerable cURL to download our images. Our code downloads each file only the first time it is needed. We use the `-L` switch to tell cURL to follow redirects, because these image URLs are transparently redirected inside the browser.
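The download-once caching guard can be isolated from cURL and Nokogiri entirely. In this sketch the download is replaced with a stub write so it runs offline (`fetch_once` and its return symbols are illustrative names, not part of the scraper):

```ruby
require 'fileutils'
require 'tmpdir'

# Illustrates process_image's "download only once" guard, with the cURL
# call replaced by a stub write so the sketch runs offline.
def fetch_once( src, output_dir )
  filename = src.split( "/" ).pop        # e.g. "belize.jpg"
  FileUtils.mkdir_p( output_dir )
  full = File.join( output_dir, filename )
  if !File.exist?( full ) || !File.size?( full )
    # Stand-in for: `curl -L https://web.archive.org#{src} -o #{full}`
    File.write( full, "fake image bytes" )
    :downloaded
  else
    :cached
  end
end

dir = Dir.mktmpdir
puts fetch_once( "/images/belize.jpg", dir )  # first call writes the file
puts fetch_once( "/images/belize.jpg", dir )  # second call finds it cached
```

Note that `File.size?` returns `nil` for a zero-byte file, so a failed, empty download is retried on the next run rather than being treated as cached.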
We need to customize our run method to invoke the `process_image` call: add `processed['image'] = process_image( rows[0] )` after any of the other process methods.
I paid an artist for the images used on the original ByTravelers.com. If you are using this technique to scrape images or text content from another site, make sure you are abiding by all local and international copyright laws.
Then, modify our post layout to include the image:
---
layout: default
---
<h1>{{ page.title }}</h1>
<img src="/assets/images/{{ page.image }}">
{{ content }}
Regenerating this page shows us a white background with an awkwardly juxtaposed colored image. Adding background colors to the entire site will help, so let's now modify the CSS for our site.
## Customizing Styling (CSS)
We used Bootstrap in Chapter 9 and will use it again here. We will also layer another CSS file on top of Bootstrap to customize the colors.
First, add a reference to Bootstrap and our custom CSS inside of the master layout file, _default.html_ :
<html>
<head>
<title>ByTravelers.com</title>
<link href="/assets/css/bootstrap.min.css" rel="stylesheet">
<link href="/assets/css/site.css" rel="stylesheet">
</head>
<body>
{{ content }}
</body>
</html>
Then, download the Bootstrap CSS file into the proper folder:
$ mkdir assets/css
$ curl \
https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css \
-o assets/css/bootstrap.min.css
Adding a CSS framework like Bootstrap helps things considerably, but we should match the original colors as well. Add a file called _site.css_ into the _assets/css_ directory:
body {
color: #000000;
background-color: #CCCC99;
}
a {
color: #603;
}
.jumbotron {
background-color: #FFFFCC;
}
With the Bootstrap library installed, we can slightly modify our _default.html_ layout to make the site really stand out. Many Jekyll blogs are quite minimalistic and stark, but you are limited only by your imagination:
<html>
<head>
<title>ByTravelers.com</title>
<link href="/assets/css/bootstrap.min.css" rel="stylesheet">
<link href="/assets/css/site.css" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="jumbotron">
<h1>ByTravelers.com</h1>
Alternative travel information
</div>
<div class='row'>
<div class='span12'>
<div class="container">
{{ content }}
</div>
</div>
</div>
</div>
</body>
</html>
If we reload, we will see a much prettier version of the site (Figure 6-7).
###### Figure 6-7. Restoring the original colors and images
We've now entirely scraped an old site and built a new Jekyll blog, so there is just one thing left to do: encourage and permit collaboration, which GitHub makes particularly easy.
## Inviting Contributions with GitHub "Fork"
When you publish a Jekyll blog, the fact that it is a repository on GitHub makes it simple to manage and track changes. In addition, because forking is a button click away, you can ask people to contribute or make changes with very little friction. You might have seen the banner saying "Fork me on GitHub" on many a software project page hosted on GitHub. We can motivate others to participate in our blog using pull requests. Let's add that as a final touch and invite people to make contributions the GitHub way. The GitHub blog first posted these banners, and we'll use its code almost as is inside our _default.html_ page, just changing the reference to our repository in the link tag:
...
<body>
<a href="https://github.com/xrd/bytravelers.com">
<img style="position: absolute; top: 0; right: 0; border: 0;"
src="https://..."
alt="Fork me on GitHub"
data-canonical-src="https://.../forkme_right_gray_6d6d6d.png"></a>
<div class="container">
<div class="jumbotron">
<h1>ByTravelers.com</h1>
Alternative travel information
...
Now anyone can fork our repository, add their own post to the __posts_ directory, and then issue a pull request asking us to incorporate the new post into our Jekyll blog.
## Publishing Our Blog to GitHub
Like any other GitHub repository, we can then publish our blog using the same commands we saw with earlier repositories. Obviously you should change the username and blog name to suit your own needs:
$ export BLOG_NAME=xrd/bytravelers.com
$ gem install hub
$ hub create $BLOG_NAME # You might need to login here
$ sleep $((10*60)) && open http://bytravelers.com
And, don't forget to set up DNS records and give yourself appropriate time to let those records propagate out.
# Summary
We've explored the details of Jekyll, looking at the structure of a Jekyll blog. Liquid Markup is a powerful way to use programmatic constructs inside a Markdown file, and we documented the most important concepts around using this templating language. By investigating the internals of a Jekyll post, we explained the intricacies of YAML Front Matter (YFM) and how seamlessly you can mix and match HTML with Markdown syntax. Jekyll blogs can utilize their own custom CSS, and we've shown how easy it is to use a powerful complete library like Bootstrap layered underneath a site-specific small CSS file. And, we built a scraper application that retrieves a remote site in its entirety and converts it into the correct structure of a Jekyll blog. Even though this scraper application was built specifically for a particular site, by adding testing and properly structuring the components it should be evident how to reuse much of the scraper for anything else you want to quickly convert into a Jekyll blog.
In the next chapter we will continue looking at Jekyll by building an Android application that uses the Java GitHub API bindings and allows you to create Jekyll blog posts with the Git Data API.
# Chapter 7. Android and the Git Data API
You might not use your phone right now as a developer tool, but the odds are that you will soon. At the moment, phones and tablets can be great for reading code, but the editors we developers use on our laptops have not yet been reimagined for mobile devices. We are getting close though: the GitHub API is accessible through the well-written EGit client library for Java, and this library supports both reading data stored on GitHub and writing data back into it. These are a perfect set of building blocks to develop applications for the Android platform, currently the world's most popular mobile OS.
In this chapter, we'll use the Java EGit libraries to develop a small Android application that posts to our blog hosted on GitHub. Our blogging application will allow us to log in to GitHub, and then ask us for a quick note describing how we are feeling. The application will then compose a Jekyll blog post for us and push the post into our blog on GitHub.
# Setting Up
To build this application, we need to create a Jekyll blog and then install the necessary Android build tools.
## Creating a Jekyll Blog
We are writing an application that adds Jekyll blog entries, and we are writing tests to verify our application works as advertised, so we need a sandbox blog against which we can run commands. There are various ways to create a new Jekyll blog. The simplest is to run a series of Ruby commands documented here; if you want to know more about Jekyll, it is covered in more depth in Chapter 6. There are a few items of note when establishing a Jekyll blog that have some complexity, things like mapping a hostname properly and using the correct branch inside Git. For our purposes here, however, we won't need to make sure all that is established. All we need is to make sure we have a sandbox repository that has the structure of a Jekyll blog:
$ echo "source 'https://rubygems.org'" >> Gemfile
$ echo "gem 'github-pages'" >> Gemfile
$ echo "gem 'hub'" >> Gemfile
$ export BLOG_NAME=mytestblog
$ bundle
$ jekyll new $BLOG_NAME
$ cd $BLOG_NAME
$ hub create
$ git push -u origin master
These commands install the correct libraries for using Jekyll (and one for our tests as well), generate a new blog using the Jekyll command-line tool, and then create a blog on GitHub with those files. The `export BLOG_NAME` line specifies the name of the blog; you are welcome to change this to any name you'd like, just make sure the tests match the name.
When you have finished running these commands, you should close the terminal window. There are other commands later in this chapter that should occur in a fresh directory and as such it is best not to run those commands from within the same directory where you created your Jekyll blog. You've pushed all those files into GitHub, so you could safely delete the local repository in this directory.
## Android Development Tools
If you don't have a physical Android device, don't fret. You can follow along with this chapter without having an actual Android device by doing development and testing on a virtual device.
### Installing the Java SDK
Unfortunately there is no simple shell command to install Java in the same way as there is for Ruby and NodeJS using RVM or NVM. Oracle controls the Java language and distribution of official SDKs, and it restricts access to downloads other than from _java.oracle.com_. Java is freely available, but you need to visit _java.oracle.com_ and find the correct download for your needs. Android works with the 1.7 versions of Java or better.
### Installing Android Studio
We will use Android Studio, the Google IDE for developing Android applications. To install it, go to _https://developer.android.com/sdk/index.html_ and you will see a download button for your platform (OS X, Linux, and Windows supported). Android Studio bundles all the important tools for building Android applications.
# Creating a New Project
Let's now create our Android project. When you first open Android Studio, you will see an option in the right pane inviting you to create a new project. Click the "Start a new Android Studio project" option. In the next step, you will see a screen for configuring your new project. Enter GhRU ("GitHub R U?") into the Application Name and use _example.com_ as the Company Domain (or use your own domain, but be aware that this will make the directory structure presented in this chapter different than yours). Android Studio should automatically generate the "package name" for you as `com.example.ghru`.
You will then need to choose a target SDK. The higher the target, the better access to newer Android APIs, but the fewer number of devices that can run the application. The code in this chapter will work with older SDKs, so let's make a balanced choice and use Android 4.4 (KitKat), which runs on phones and tablets. At the moment this means, according to Android Studio, that our application will run on 49.5% of Android devices in the world as shown in Figure 7-1.
###### Figure 7-1. Choose an Android SDK
You will then be presented with a choice of activities. Choose "Blank Activity." You will be taken to a screen that allows you to customize the activity. Accept the defaults of "MainActivity" as the Activity Name and the associated files for the layout, title, and menu resource name. Then click the "Finish" button to generate the project.
After completing these steps, Android Studio will create Gradle configuration files and generate the structure of your application. Once this has completed, you can review the file tree of your project by clicking the lefthand vertical tab labeled "Project" as shown in Figure 7-2.
###### Figure 7-2. Reviewing the Android project structure for the first time
If you have never seen an Android project before, this screen deserves some explanation. The _app_ directory contains your application code and resources (layout files, images, and strings). Inside the _app_ directory you will see several other directories: the _java_ directory contains, as you would expect, the project's Java code, which includes both the application itself and the test code, which exercises the app but is not bundled with it when it is published to the app store. The _res_ directory contains the resources we mentioned. Android Studio lists all build files under the Gradle Scripts section, and groups them regardless of their directory placement. You can see two _build.gradle_ files; the first can generally be ignored, though we will need to adjust the second.
Now we are ready to start editing our project.
## Editing the Gradle Build File
First, we need to add to our Gradle build file and specify the dependent libraries. Gradle is a build system for Java and has become the offical build system for the Android platform. Open the _build.gradle_ within the `app` module (the second of the two _build.gradle_ files):
apply plugin: 'com.android.application' // (1)

android {
    compileSdkVersion 23 // (2)
    buildToolsVersion "23.0.1"

    defaultConfig {
        applicationId "com.example.ghru"
        minSdkVersion 21
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" // (3)
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'),
                'proguard-rules.pro'
        }
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar']) // (4)
    compile 'com.android.support:appcompat-v7:23.0.1'
    compile 'org.eclipse.mylyn.github:org.eclipse.egit.github.core:2.1.5'
    compile( 'commons-codec:commons-codec:1.9' )

    testCompile 'junit:junit:4.12' // (5)
    testCompile 'com.squareup.okhttp:okhttp:2.5.0'

    androidTestCompile 'com.android.support.test:runner:0.4' // (6)
    androidTestCompile 'com.android.support.test:rules:0.4'
    androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.1'
}
1. First, we load the Android Gradle plug-in. This extends our project to allow an `android` block, which we specify next.
2. Next, we configure our `android` block, with things like the target version (which we chose when setting up our project) and the actual SDK we are using to compile the application.
3. In order to run UI tests, we need to specify a test runner called the `AndroidJUnitRunner`.
4. Android Studio automatically adds a configuration to our build file that loads any JARs (Java libraries) from the _libs_ directory. We also install the support compatibility library for older Android devices, and most importantly, the EGit library that manages connections to GitHub for us. The Commons Codec library from the Apache Foundation provides tools that help to encode content into Base64, one of the options for storing data inside a GitHub repository using the API.
5. Next, we install libraries that are only used when we run unit tests. `testCompile` libraries are compiled only when the code is run on the local development machine; here we need the JUnit library, and the OkHttp library from Square, which helps us validate that our request for a new commit has made it all the way into the GitHub API.
6. Lastly, we install the Espresso libraries, the Google UI testing framework. The first of the three lines installs the test runner we configured earlier. We use `androidTestCompile`, which compiles against these libraries when the code runs on Android in test mode.
### Creating AVDs for development
Android Studio makes creating AVDs (Android Virtual Devices) simple. To start, under the "Tools" menu, click "Android" and then select "AVD Manager." To create a new AVD, click the "Create Virtual Device" button and follow the prompts. You are generally free to choose whatever settings you like. Google produces a real device called the Nexus 5. This is the Android reference device, and a good option for a generic device with solid support across all features; choose it if you are unsure which to use, as shown in Figure 7-3.
###### Figure 7-3. Creating a new AVD
Once you have created an AVD, start it up. It will take a few minutes to boot; AVDs emulate the chipset in software, so startup is unfortunately slow. There are alternative tools that speed up AVD boot time (Genymotion is one of them), but there are complexities if you stray from the stock Android tools, so we will stick with AVDs.
## Default Android Main
When we use the preceding commands to create a new Android application, it creates a sample entry point that is the starting point of our Android application. All Android applications have a file called _AndroidManifest.xml_ , which specifies this activity and also supplies a list of permissions to the apps. Open the _AndroidManifest.xml_ file from within the _app/src/main_ directory. We need to make one change: to add a line that specifies that this app will use the Internet permission (required if our app will be talking to the GitHub API). Note that when viewing this file inside Android Studio the IDE can interpolate strings from resources, so you might see the `android:label` attribute displayed as `GhRU` with a grey tinge, when in fact the XML file itself has the value displayed here (`@string/app_name`):
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.ghru">
<uses-permission android:name="android.permission.INTERNET" />
<application android:allowBackup="true" android:label="@string/app_name"
android:icon="@mipmap/ic_launcher" android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name="MainActivity"
android:label="@string/app_name">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
When the application is launched, the Android OS will launch this activity and then call the `onCreate` function for us. Inside this function, our application calls our parent's implementation of `onCreate`, and then inflates the layout for our application. Layouts are XML files in which the UI of an Android application is declaratively described.
Android Studio created a default layout for us (called _activity_main.xml_ ), but let's ignore that and create our own layout. To do so, right-click (Ctrl-click on OS X) on the _layouts_ directory, and then choose "New" and then "Layout resource file" at the very top of the list (Android Studio nicely chooses the most likely candidate given the context of the click). Enter "main.xml" as the filename, and accept the other defaults.
This application requires that we log in, so we know we at least need a field and a descriptive label for the username, a password field (and associated descriptive label) for the password, a button to click that tells our app to attempt to log in, and a status field that indicates success or failure of the login. So, let's modify the generated _main.xml_ to specify this user interface. To edit this file as text, click the tab labeled Text next to the tab labeled Design at the very bottom of the _main.xml_ pane to switch to text view. Then, edit the file to look like the following:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent"
>
<TextView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="GitHub Username:"
/>
<EditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/username"
/>
<TextView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="GitHub Password:"
/>
<EditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/password"
android:inputType="textWebPassword"
/>
<Button
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Login"
android:id="@+id/login"
/>
<TextView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/login_status"
/>
</LinearLayout>
You may have complicated feelings about XML files (I know I do), but the Android layout XML files are a straightforward way to design layouts declaratively, and there is a great ecosystem of GUI tools that provide sophisticated ways to manage them. Scanning this XML file, it should be relatively easy to understand what is happening here.
The entire layout is wrapped in a `LinearLayout`, which simply positions each element stacked vertically inside it. We set the height and width layout attributes to `match_parent`, which means this layout occupies the entire space of the screen.
We then add the elements we described previously: pairs of `TextView` and `EditText` for the labels and entry fields for the username and password.
The password field customizes the type to be a password field, which means the entry is hidden when we enter it.
Some elements in the XML have an ID attribute, which allows us to access the items within our Java code, such as when we need to assign a handler to a button or retrieve text entered by the user from an entry field. We will demonstrate this in a moment.
You can review the visual structure of this XML file by clicking the "Design" tab to switch back to design mode.
We also need a layout for once we have logged in. Create a file called _logged_in.xml_ using the same set of steps. Once logged in, the user is presented with a layout asking them to choose which repository to save into, to enter their blog post into a large text field, and then to click a button to submit that blog post. We also leave an empty status box beneath the button to provide context while saving the post:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent"
>
<TextView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Logged into GitHub"
android:layout_weight="0"
android:id="@+id/status" />
<EditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:hint="Enter the blog repository"
android:id="@+id/repository"
android:layout_weight="0"
/>
<EditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:hint="Enter the blog title"
android:id="@+id/title"
android:layout_weight="0" />
<EditText
android:gravity="top"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:hint="Enter your blog post"
android:id="@+id/post"
android:layout_weight="1"
/>
<Button
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_weight="0"
android:id="@+id/submit"
android:text="Send blog post"/>
</LinearLayout>
Most of this should be familiar once you have reviewed the _main.xml_ file (and be sure to copy this from the associated sample repository on GitHub if you don't want to copy it in yourself).
Now that we have our XML established, we can ready our application for testing.
# Android Automated Testing
Android supports three types of tests: unit tests, integration tests, and user interface (UI) tests. Unit tests validate very tightly defined and isolated pieces of code, while integration tests and UI tests test larger pieces of the whole. On Android, integration tests generally mean instantiation of data managers or code that interacts with multiple components inside the app, while UI testing permits testing of user-facing elements like buttons or text fields. In this chapter we will create a unit test and a UI test.
One important note: Unit tests run on your development machine, not the Android device itself. UI tests run on the Android device (or emulator). There can be subtle differences between the Java interpreter running on your development machine and the Dalvik interpreter running on your Android device, so it is worthwhile to use a mixture of the three types of tests. Stated another way, write at least one test that runs on the device or emulator itself!
## Unit Tests for Our GitHub Client
Let's start by defining a unit test. Since the unit test runs on our development machine, our test and implementation code should be written such that they do not need to load any Android classes. This forces us to constrain functionality to only the GitHub API. We will define a helper class that will handle all the interaction with the GitHub API but does not know about Android whatsoever. Then, we can write a test harness that takes that class, instantiates it, and validates our calls to GitHub produce the right results.
You might legitimately ask: is a unit test the right place to verify an API call? Will this type of test be fast, given that slow-running unit tests are quickly ignored by software developers? Would it be better to mock out the response data inside our unit tests? These are all good questions!
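One way to address the speed and determinism concerns is to put a seam between the helper and the network: hide the HTTP call behind an interface, and substitute a canned response in unit tests. The following sketch illustrates the idea; `EventSource` and `FakeEventSource` are hypothetical names, not part of the app's code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical seam: code under test asks this interface for the raw
// event JSON instead of calling OkHttp directly.
interface EventSource {
    String eventsFor(String owner, String repo);
}

// A test double that returns canned JSON -- no network access, so tests
// built on it are fast and deterministic.
class FakeEventSource implements EventSource {
    private final Map<String, String> canned = new HashMap<String, String>();

    void stub(String owner, String repo, String json) {
        canned.put(owner + "/" + repo, json);
    }

    @Override
    public String eventsFor(String owner, String repo) {
        String json = canned.get(owner + "/" + repo);
        return json != null ? json : "[]"; // empty event list by default
    }
}
```

A unit test would stub the event feed with a known commit message and assert against it, while a smaller number of slower integration tests (like the one we are about to write) still exercise the real API.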
To set up unit tests, we need to switch the build variant to unit tests. Look for a vertical tab on the lefthand side of Android Studio. Click this, and then where it says "Test Artifact" switch to "Unit Tests." From the project view (click the "Project" vertical tab if project view is not already selected) you can expand the "java" directory, and you should then see a directory with "(test)" in parentheses indicating this is where tests go. If this directory is not there, create a directory using the command line (this command would work: `mkdir -p app/src/test/java/com/example/ghru`).
Then, create a test file called _GitHubHelperTest.java_ that looks like the following:
package com.example.ghru;
import com.squareup.okhttp.OkHttpClient; // (1)
import com.squareup.okhttp.Request;
import com.squareup.okhttp.Response;
import org.junit.Test; // (2)
import java.util.Date;
import static org.junit.Assert.assertTrue;
/**
* To work on unit tests, switch the Test Artifact in the Build Variants view.
*/
public class GitHubHelperTest { // (3)
@Test
public void testClient() throws Exception {
String login = System.getenv("GITHUB_HELPER_USERNAME"); // (4)
String password = System.getenv("GITHUB_HELPER_PASSWORD");
String repoName = login + ".github.io";
int randomNumber = (int)(Math.random() * 10000000);
String randomString = String.valueOf( randomNumber );
String randomAndDate = randomString + " " +
(new Date()).toString(); // (5)
GitHubHelper ghh = new GitHubHelper( login, password ); // (6)
ghh.SaveFile(repoName,
"Some random title",
"Some random body text",
randomAndDate );
Thread.sleep(3000); // (7)
String url = "https://api.github.com/repos/" + // (8)
login + "/" + repoName + "/events";
OkHttpClient ok = new OkHttpClient();
Request request = new Request.Builder()
.url( url )
.build();
Response response = ok.newCall( request ).execute();
String body = response.body().string();
assertTrue( "Body does not have: " + randomAndDate, // (9)
body.contains( randomAndDate ) );
}
}
First, we import the OkHttp library, a library for making HTTP calls. We will verify that our GitHub API calls made it all the way into GitHub by looking at the event log for our repository, a log accessible via HTTP.
Next, we import JUnit, which provides us with an annotation `@Test` we can use to indicate to a test runner that certain methods are test functions (and should be executed as tests when in test mode).
We create a class called `GitHubHelperTest`. In it, we define a sole test case `testClient`. We use the `@Test` annotation to indicate to JUnit that this is a test case.
Now we specify our login information and the repository we want to test against. In order to keep the password out of our source code, we use an environment variable we can specify when we run the tests.
Next, we build a random string. This unique string will be our commit message, a beacon that allows us to verify that our commit made it all the way through and was stored on GitHub, and to differentiate it from other commits made recently by other tests.
Now, to the meat of the test: we instantiate our GitHub helper class with login credentials, then use the `SaveFile` function to save the file. The last parameter is our commit message, which we will verify later.
There can be times when the GitHub API has registered the commit but the event is not yet displayed in results coming back from the API; sleeping for a few seconds fixes this.
Next, we go through the steps to make an HTTP call with the OkHttp library. We load a URL that provides us with the events for a specified repository, events that will have the commit message when it is a push type event. This repository happens to be public so we don't require authentication against the GitHub API to see this data.
Once we have the body of the HTTP call, we can scan it to verify the commit message is there.
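As an aside, the fixed `Thread.sleep(3000)` is a blunt instrument: it always costs three seconds even when GitHub registers the event sooner, and it can still be too short on a slow day. A common alternative is to poll until the condition holds or a deadline passes. This helper is a sketch, not part of the book's code:

```java
import java.util.function.Supplier;

class Poller {
    // Re-check a condition at a fixed interval until it returns true or
    // the timeout expires; returns whether the condition ever held.
    static boolean waitFor(Supplier<Boolean> condition,
                           long timeoutMillis,
                           long intervalMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            Thread.sleep(intervalMillis);
        }
        return condition.get(); // one final check at the deadline
    }
}
```

The test could then replace the sleep with something like `Poller.waitFor(() -> fetchEventBody().contains(randomAndDate), 10000, 500)`, where `fetchEventBody()` is a hypothetical helper wrapping the OkHttp call.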
The final steps deserve a bit more investigation. If we load the event URL from cURL, we see data like this:
$ curl https://api.github.com/repos/burningonup/burningonup.github.io/events
[
{
"id": "3244787408",
"type": "PushEvent",
...
"repo": {
"id": 44361330,
"name": "BurningOnUp/BurningOnUp.github.io",
"url":
"https://api.github.com/repos/BurningOnUp/BurningOnUp.github.io"
},
"payload": {
...
"commits": [
{
"sha": "28f247973e73e3128737cab33e1000a7c281ff4b",
"author": {
"email": "unknown@example.com",
"name": "Unknown"
},
"message": "207925 Thu Oct 15 23:06:09 PDT 2015",
"distinct": true,
"url":
"https://api.github.com/repos/BurningOnUp/BurningOnUp.github.io/..."
}
]
}
...
]
This is obviously JSON. We see the type is `PushEvent` for this event, and it has a commit message that matches our random string format. We could reconstitute this into a complex object structure, but scanning the JSON as a string works for our test.
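Scanning with `contains` is fine for an assertion, but if you ever need the commit messages themselves without pulling in a JSON library, a small regular expression over the body can extract them. This helper is illustrative only; production code should use a real JSON parser:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class CommitMessages {
    // Matches every "message": "..." value in the raw JSON body.
    private static final Pattern MESSAGE =
        Pattern.compile("\"message\"\\s*:\\s*\"([^\"]*)\"");

    static List<String> extract(String json) {
        List<String> messages = new ArrayList<String>();
        Matcher m = MESSAGE.matcher(json);
        while (m.find()) {
            messages.add(m.group(1));
        }
        return messages;
    }
}
```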
## Android UI Tests
Let's now write a UI test. Our test will start our app, find the username and password fields, enter in the proper username and password text, then click the login button, and finally verify that we have logged in by checking for the text "Logged into GitHub" in our UI.
Android uses the Espresso framework to support UI testing. We already installed Espresso with our Gradle configuration, so we can now write a test. Tests are written by deriving from a generic test base class (`ActivityInstrumentationTestCase2`). Any public function defined inside the test class is run as a test.
In Android Studio, from the "Build Variant" window, select "Android Instrumentation Test," which will then display a test directory called "androidTest." These are tests that will run on the emulator or actual device. Inside the directory, make a new file called _MainActivityTest.java_ :
package com.example.ghru;
import android.support.test.InstrumentationRegistry; // (1)
import android.test.ActivityInstrumentationTestCase2;
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.*;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.*;
public class MainActivityTest // (2)
extends ActivityInstrumentationTestCase2<MainActivity> {
public MainActivityTest() {
super( MainActivity.class ); // (3)
}
public void testLogin() { // (4)
injectInstrumentation( InstrumentationRegistry.
getInstrumentation() ); // (5)
MainActivity mainActivity = getActivity();
String username = mainActivity // (6)
.getString( R.string.github_helper_username );
onView( withId( R.id.username ) ) // (7)
.perform( typeText( username ) ); // (8)
String password = mainActivity
.getString( R.string.github_helper_password );
onView( withId( R.id.password ) )
.perform( typeText( password ) );
onView( withId( R.id.login ) )
.perform( click() );
onView( withId( R.id.status ) ) // (9)
.check( matches( withText( "Logged into GitHub" ) ) );
}
}
We import the instrumentation registry (for instrumenting the tests of our app), the base class, and matchers that will be used to make assertions in our tests.
We create a test class that derives from the `ActivityInstrumentationTestCase2` generic.
The constructor of an Espresso test implementation needs to call the parent constructor with the class of the activity for test, in this case `MainActivity`.
Our test verifies that we can log in to GitHub, so we name it accordingly.
We then load the instrumentation registry, and also call `getActivity`, which actually instantiates and starts the activity. In many Espresso tests these two steps occur in a function annotated with `@Before` when they are shared across multiple tests (in which case they run before each test). Here, with only one test function, we simply call them inside it.
It is never a good idea to store credentials inside a code repository, so we retrieve the username and password from a resource XML file using the `getString` function available on the activity. We will show shortly what the contents of this secret file could look like.
Once we have the username, we can enter it in the text field in our UI. With the `onView` function we can interact with a view (for example: a button or text field). `withId` finds the view using the resource identifier inside the XML layout files. Once we have the view, we can then perform an action (using the `perform` function) like typing in text. This chain of calls enters the GitHub username into the first text field.
We then complete our interaction with the UI, entering in the password and then clicking the login button.
If all is successful, we should see the text "Logged into GitHub." Under the hood, this test will verify that we are logged in to GitHub and display the successful result.
To provide a username and password to our test and to keep these credentials out of our source code, create a file called _secrets.xml_ inside the _res/values_ directory. This file should look like this:
<?xml version="1.0" encoding="utf-8"?>
<resources>
<string name="github_helper_username">MyUsername</string>
<string name="github_helper_password">MyPwd123</string>
</resources>
Make sure this is not checked into your source code by adding an exception to _.gitignore_ (the command `echo "secrets.xml" >> .gitignore` is a quick way to add this to your _.gitignore_ file).
Our tests will not even compile yet because we have not yet written the other parts of the application. As such, we will skip the setup required to run our tests within Android Studio for now.
Let's now build the application itself to pass these tests.
# Application Implementation
Now we can start writing some Java code for our application. Let's make it so our `MainActivity` class will inflate the layouts we defined earlier:
package com.example.ghru;
import android.app.Activity;
import android.os.Bundle;
import android.widget.Button;
import android.widget.LinearLayout;
import android.widget.EditText;
import android.widget.TextView;
import android.view.View;
public class MainActivity extends Activity
{
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView( R.layout.main);
Button login = (Button)findViewById( R.id.login );
login.setOnClickListener(new View.OnClickListener() { // (1)
public void onClick(View v) {
login(); // (2)
}
});
}
private void login() {
setContentView(R.layout.logged_in); // (3)
Button submit = (Button)findViewById( R.id.submit );
submit.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
doPost(); // (4)
}
});
}
private void doPost() {
TextView tv = (TextView)findViewById( R.id.status ); // (5)
tv.setText( "Successful jekyll post" );
}
}
This code mocks out the functionality we will be building and shows us exactly what the UI will look like once that code is completed.
We register a click handler for our login button.
When the login button is clicked, we call the `login()` function that triggers a login flow.
Once we have logged in, we inflate the logged-in layout, suitable for making a blog post.
We then set up another click handler for the submit button; when clicked, we call the `doPost()` function.
Our `doPost()` function updates the status message at the bottom of our application.
Even though our code is not functionally complete, the application will compile. This is a good time to run it and verify that the UI looks appropriate. Our login form looks like Figure 7-4.
###### Figure 7-4. A simple UI for making blog post entries
## Code to Log In to GitHub
Now we can wire in the GitHub API. Let's first work on the `login()` function. Poking into the EGit library reference, we can write GitHub login code, which is as simple as the following:
GitHubClient client = new GitHubClient();
client.setCredentials("us3r", "passw0rd");
The context in which the code runs matters as much as the code itself. The Android OS disallows network connections from any code unless it runs inside a background thread. If you are not already a Java developer, and the thought of using threads with Java sounds daunting, dispel your worries. The Android SDK provides a great class for managing background tasks called `AsyncTask`. This class provides several entry points into the lifecycle of a thread that is managed by the Android OS. We implement a class and then override two functions provided by `AsyncTask`: the first is `doInBackground()`, which handles operations off the main thread (our background thread code), and the second is `onPostExecute()`, which runs on the UI thread and allows us to update the UI with the results of the code that ran inside `doInBackground()`.
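If `AsyncTask` feels opaque, the shape it manages can be sketched in plain Java: a worker thread runs the background function, and a callback receives its result when the work finishes. The class below is an illustration of the pattern only — real Android code should use `AsyncTask` itself, which additionally marshals `onPostExecute` back onto the UI thread:

```java
// Plain-Java sketch of the AsyncTask shape: doInBackground runs on a
// worker thread, onPostExecute is handed its result.
abstract class SimpleTask<Param, Result> {
    protected abstract Result doInBackground(Param param);
    protected abstract void onPostExecute(Result result);

    public Thread execute(final Param param) {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                Result result = doInBackground(param);
                // Android would re-post this call onto the UI thread;
                // here it simply runs on the worker.
                onPostExecute(result);
            }
        });
        worker.start();
        return worker;
    }
}
```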
Before we implement the login, we need to update our `onCreate` function of the `MainActivity`. Our login button handles logging in, so let's register a click handler on the login button that will call the login task we will define inside our class based off `AsyncTask`:
...
@Override
public void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
Button login = (Button)findViewById( R.id.login );
login.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
EditText utv = (EditText)findViewById( R.id.username );
EditText ptv = (EditText)findViewById( R.id.password );
username = utv.getText().toString();
password = ptv.getText().toString(); // (1)
TextView status =
(TextView)findViewById( R.id.login_status );
status.setText( "Logging in, please wait..." ); // (2)
new LoginTask().execute( username, password ); // (3)
}
});
}
...
We retrieve the username and password from our UI elements.
Our UI should notify the user that a login is occurring in a background task, so we grab the status text element and update the text in it.
We then start the background thread process to do our login. This syntax creates a new thread for us with the username and password as parameters. Android will manage the lifecycle of this thread for us, including starting the new thread separate from the main UI thread.
Now we can implement `LoginTask`:
...
class LoginTask extends AsyncTask<String, Void, Boolean> { // (1)
@Override
protected Boolean doInBackground(String... credentials) { // (2)
boolean rv = false;
UserService us = new UserService();
us.getClient().setCredentials( credentials[0], credentials[1] );
try {
User user = us.getUser( credentials[0] ); // (3)
rv = null != user;
}
catch( IOException ioe ) {}
return rv;
}
@Override
protected void onPostExecute(Boolean result) {
if( result ) {
loggedIn(); // (4)
}
else { // (5)
TextView status = (TextView)findViewById( R.id.login_status );
status.setText( "Invalid login, please check credentials" );
}
}
}
...
Here we define our class derived from `AsyncTask`. You see three types in the generics signature: `String`, `Void`, and `Boolean`. These are, respectively, the parameter type for our entry point, the type for intermediate progress callbacks (unused here, hence `Void`), and the result type handed to the final callback that returns control to the calling thread. The first type allows us to parameterize our instantiated task; we need to provide a username and password to the background task, and the first type in the signature lets us pass in Strings. In the actual function definition, the ellipsis notation declares a function taking a variable number of arguments (called varargs). Inside the function we expect two Strings, and we make sure to pass exactly two in our call.
Once inside the `doInBackground()` function, we instantiate a `UserService` class, a wrapper around the GitHub API, which interacts with the user service API call. In order to access this information, we have to retrieve the client for this service call and provide the client with the username and password credentials. This is the syntax to do that.
We wrap the call to `getUser()` in a try block as the function signature can throw an error (if the network were down, for example). We don't really need to retrieve information about the user using the `User` object, but this call verifies that our username and password are correct, and we store this result in our return value. GitHub will not use the credentials you set until you make an API call, so we need to use our credentials to access something in order to verify that those credentials work.
Let's call our function `loggedIn()` instead of `login()` to more accurately reflect the fact that when we call this, we are already logged in to GitHub.
If our login was a failure, either because of network failure, or because our credentials were incorrect, we indicate this in the status message. A user can retry if they wish.
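If the varargs notation in `doInBackground` is new to you, here is the behavior in isolation; `join` is just a throwaway example method:

```java
class VarargsDemo {
    // A String... parameter arrives inside the method as a String[],
    // exactly like the credentials parameter of doInBackground.
    static String join(String... parts) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                sb.append(":");
            }
            sb.append(parts[i]);
        }
        return sb.toString();
    }
}
```

Calling `join("us3r", "passw0rd")` packs the two arguments into `parts[0]` and `parts[1]`, just as `execute(username, password)` delivers `credentials[0]` and `credentials[1]` to `doInBackground`.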
`loggedIn` updates the UI once logging in has completed and then initiates the post on GitHub:
...
private void loggedIn() {
setContentView(R.layout.logged_in); // (1)
Button submit = (Button)findViewById( R.id.submit );
submit.setOnClickListener(new View.OnClickListener() { // (2)
public void onClick(View v) {
TextView status = (TextView) findViewById(R.id.status);
status.setText("Posting, please wait...");
EditText post = (EditText) findViewById(R.id.post); // (3)
String postContents = post.getText().toString();
EditText repo = (EditText) findViewById(R.id.repository);
String repoName = repo.getText().toString();
EditText title = (EditText) findViewById(R.id.title);
String titleText = title.getText().toString();
doPost(repoName, titleText, postContents); // (4)
}
});
}
...
Inflate the logged-in layout to reflect the fact we are now logged in.
Then, install a click handler on the submit button so that when we submit our post information, we can start the process to create the post on GitHub.
We need to gather up three details the user provides: the post body, the post title, and the repository name.
Using these three pieces of data, we can then call into `doPost` and initiate the asynchronous task.
Building out `doPost()` should be more familiar now that we have experience with `AsyncTask`. `doPost()` makes the commit inside GitHub, and because it performs network activity it needs to run on a background thread:
...
private void doPost( String repoName, String title, String post ) {
new PostTask().execute( username, password, repoName, title, post );
}
class PostTask extends AsyncTask<String, Void, Boolean> {
@Override
protected Boolean doInBackground(String... information) { // (1)
String login = information[0];
String password = information[1];
String repoName = information[2];
String titleText = information[3];
String postContents = information[4];
Boolean rv = false; // (2)
GitHubHelper ghh = new GitHubHelper(login, password); // (3)
try {
rv = ghh.SaveFile(repoName, titleText,
postContents, "GhRu Update"); // (4)
} catch (IOException ioe) { // (5)
Log.d("GhRu", Log.getStackTraceString(ioe));
}
return rv;
}
@Override
protected void onPostExecute(Boolean result) {
TextView status = (TextView) findViewById(R.id.status);
if (result) { // (6)
status.setText("Successful jekyll post");
EditText post = (EditText) findViewById(R.id.post);
post.setText("");
EditText repo = (EditText) findViewById(R.id.repository);
repo.setText("");
EditText title = (EditText) findViewById(R.id.title);
title.setText("");
} else {
status.setText("Post failed.");
}
}
}
...
First, we retrieve the parameters we need to send off to the GitHub API. Notice that we don't attempt to retrieve these from the UI. Background threads don't have access to the Android UI functions.
This function returns a true or false value indicating success or failure (using the variable `rv` for "return value"). We assume that it fails unless everything we need to do inside our function works exactly as expected, so set the expectation to false to start. The value of our return statement is passed to the next stage in the lifecycle of the thread, a function called `onPostExecute` (an optional stage in the thread lifecycle we will use to report status of the operation back to the user).
Now, we instantiate the `GitHubHelper` class. This instantiation and usage should look very familiar as it is the same thing we did inside our unit test.
Our helper class returns success or failure. If we have reached this point, this is our final return value.
We will wrap the call to `SaveFile` inside a try/catch block to make sure we handle errors; these will most likely be network errors.
`onPostExecute()` is the function we (optionally) return to once our background task has completed. It receives the return value from our previous function. If we have a true value returned from `doInBackground()`, then our save file succeeded and we can update the UI of our application.
We need to import the support classes. The JARs and classes for EGit have already been added to our project automatically using Gradle. Make sure you add these `import` statements to the top of the file, under the other imports:
...
import android.view.View;
import android.os.AsyncTask;
import org.eclipse.egit.github.core.service.UserService;
import org.eclipse.egit.github.core.User;
import java.io.IOException;
...
Now we are ready to write the code to write data into GitHub.
## Code to Talk to GitHub
Our last step is to write the code that handles putting content into GitHub. This is not a simple function, because the GitHub API requires you to build out the structure used internally by Git. A great reference for learning more about this structure is the free and open source book _Pro Git_ , specifically its final chapter, "Git Internals."
In a nutshell, the GitHub API expects you to create a Git "tree" and then place a "blob" object into that tree. You then wrap the tree in a "commit" object and then create that commit on GitHub using a data service wrapper. In addition, writing a tree into GitHub requires knowing the base SHA identifier, so you'll see code that retrieves the last SHA in the tree associated with our current branch. This code will work regardless of whether we are pushing code into the master branch, or into the `gh-pages` branch, so this utility class works with real Jekyll blogs.
We'll write a helper class called `GitHubHelper` and add a single function that writes a file to our repository.
The GitHub API requires that files stored in repositories be either Base64 encoded or UTF-8. The Apache Foundation provides a suite of tools published to Maven (the same software repository where we grabbed the EGit libraries), which can do this encoding for us, and which were already installed in our Gradle file previously (the "commons-codec" declaration).
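As a point of comparison, on Java 8+ the standard library ships an equivalent encoder in `java.util.Base64` (note that on Android this class requires API level 26; the commons-codec approach works everywhere, which is likely why the book uses it). A sketch of the same transformation without commons-codec:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class Encoding {
    // Encode post contents the way the GitHub Git Data API expects,
    // using only the standard library.
    static String toBase64(String content) {
        return Base64.getEncoder()
                     .encodeToString(content.getBytes(StandardCharsets.UTF_8));
    }

    static String fromBase64(String encoded) {
        return new String(Base64.getDecoder().decode(encoded),
                          StandardCharsets.UTF_8);
    }
}
```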
We will start by defining a series of high-level functions inside `SaveFile` to get through building a commit inside of GitHub. Each function itself contains some complexity so let's look first at the overview of what it takes to put data into GitHub using the Git Data API:
package com.example.ghru;
import android.util.Log;
import org.eclipse.egit.github.core.*;
import org.eclipse.egit.github.core.client.GitHubClient;
import org.eclipse.egit.github.core.service.CommitService;
import org.eclipse.egit.github.core.service.DataService;
import org.eclipse.egit.github.core.service.RepositoryService;
import org.eclipse.egit.github.core.service.UserService;
import org.apache.commons.codec.binary.Base64;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.io.IOException;
import java.util.*;
class GitHubHelper {
String login;
String password;
GitHubHelper( String _login, String _password ) {
login = _login;
password = _password;
}
public boolean SaveFile( String _repoName,
String _title,
String _post,
String _commitMessage ) throws IOException {
post = _post;
repoName = _repoName;
title = _title;
commitMessage = _commitMessage;
boolean rv = false;
generateContent();
createServices();
retrieveBaseSha();
if( null != baseCommitSha && !baseCommitSha.isEmpty() ) {
createBlob();
generateTree();
createCommitUser();
createCommit();
createResource();
updateMasterResource();
rv = true;
}
return rv;
}
...
The `SaveFile` function goes through each step of writing data into a repository using the GitHub API. We will walk through each of these functions. As you can see, the `SaveFile` function has the same signature as the function we call inside our unit test.
Let's implement each of the functions specified in the `GitHubHelper` class.
## Writing the Blog Content
First, we implement `generateContent()`. The following code snippet shows the functions defined to generate the content we will place into our remote Git repository stored on GitHub:
...
String commitMessage; // (1)
String postContentsWithYfm;
String contentsBase64;
String filename;
String post;
String title;
String repoName;
private void generateContent() { // (2)
postContentsWithYfm = // (3)
"---\n" +
"layout: post\n" +
"published: true\n" +
"title: '" + title + "'\n---\n\n" +
post;
contentsBase64 = // (4)
new String( Base64.encodeBase64( postContentsWithYfm.getBytes() ) );
filename = getFilename();
}
private String getFilename() {
String titleSub = title.substring( 0, // (5)
title.length() > 30 ?
30 :
title.length() );
String jekyllfied = titleSub.toLowerCase() // (6)
.replaceAll( "\\W+", "-")
.replaceAll( "\\W+$", "" );
SimpleDateFormat sdf = new SimpleDateFormat( "yyyy-MM-dd-" ); // (7)
String prefix = sdf.format( new Date() );
return "_posts/" + prefix + jekyllfied + ".md"; // (8)
}
String blobSha;
Blob blob;
...
You will notice many similarities between this Java code and the Ruby code we used in Chapter 6 when generating filenames and escaping whitespace.
1. First, we set up several instance variables we will use when storing the data into GitHub: the commit message, the full post including the YAML Front Matter (YFM), the post contents encoded as Base64, the filename, and the three parameters we saved from the call to `SaveFile()`: the post itself, the title, and the repository name.
2. The `generateContent` function creates the necessary components for our new post: the full content, Base64 encoded, and the filename we will use to store the content.
3. Here we create the YAML Front Matter (see Chapter 6 for more details on YFM). This YAML specifies the "post" layout and sets publishing to "true." We need to terminate the YAML with two newlines.
4. Base64 encode the contents of the blog post itself using a utility class found inside the Apache Commons library. Contents inside a Git repository are stored either as UTF-8 content or Base64; we could have used UTF-8 since this is text content, but Base64 works losslessly, and you can always safely use it without concerning yourself about the content.
5. Next, inside `getFilename()`, create the slug from (at most) the first 30 characters of the title.
6. Convert it to lowercase, and replace the whitespace with hyphens to get the Jekyll post title format.
7. Jekyll expects the date to be formatted as `yyyy-MM-dd`, so use the Java `SimpleDateFormat` class to help create a string of that format.
8. Finally, create the filename from all these pieces, prepending `_posts/` to the filename, where Jekyll expects posts to reside.
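Before wiring this into GitHub, it can be handy to sanity-check the filename logic on its own. The following standalone sketch mirrors `getFilename` using only the JDK; the class name and `main` harness are illustrative and not part of the app:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Standalone sketch of the Jekyll filename logic above (JDK only).
public class FilenameSketch {
    static String jekyllFilename(String title, Date date) {
        // Take at most the first 30 characters of the title
        String titleSub = title.substring(0, Math.min(30, title.length()));
        // Lowercase, collapse non-word runs to hyphens, strip trailing hyphens
        String jekyllfied = titleSub.toLowerCase()
                .replaceAll("\\W+", "-")
                .replaceAll("\\W+$", "");
        // Jekyll expects posts named _posts/yyyy-MM-dd-title.md
        String prefix = new SimpleDateFormat("yyyy-MM-dd-").format(date);
        return "_posts/" + prefix + jekyllfied + ".md";
    }

    public static void main(String[] args) {
        System.out.println(jekyllFilename("Hello, GitHub API!", new Date()));
    }
}
```

Running it prints something like `_posts/2015-06-01-hello-github-api.md`, depending on the current date.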
Now we will set up the services necessary to store a commit inside GitHub.
## GitHub Services
Next, we implement `createServices()`. There are several services (wrappers around Git protocols) we need to instantiate. We don't use them all immediately, but we will need them at various steps during the file save process. The `createServices` call manages these for us:
...
RepositoryService repositoryService;
CommitService commitService;
DataService dataService;
private void createServices() throws IOException {
GitHubClient ghc = new GitHubClient();
ghc.setCredentials( login, password );
repositoryService = new RepositoryService( ghc );
commitService = new CommitService( ghc );
dataService = new DataService( ghc );
}
...
As a side note, writing things this way would allow us to specify an enterprise endpoint instead of GitHub.com. Refer to Appendix A for the specific syntax.
## The Base SHA from the Repository and Branch
Now we implement `retrieveBaseSha()`. A Git repository is a directed acyclic graph (DAG), and as such, every node in the graph (except the very first commit) points to another commit, or potentially two if it is a merge commit. When we append content to our graph, we need to determine the prior node in that graph and attach the new node to it. `retrieveBaseSha` does this: it finds the SHA hash for our last commit, a hash that functions as an address inside our graph. To determine this address, our application needs a reference to the repository, and we use the repository service we instantiated earlier to get it. Once we have the repository, we need to look inside the correct branch; `getBranch` does this for us:
...
Repository repository;
RepositoryBranch theBranch;
String baseCommitSha;
private void retrieveBaseSha() throws IOException {
// get some sha's from current state in git
repository = repositoryService.getRepository(login, repoName);
theBranch = getBranch();
baseCommitSha = theBranch.getCommit().getSha();
}
public RepositoryBranch getBranch() throws IOException {
List<RepositoryBranch> branches =
repositoryService.getBranches(repository);
RepositoryBranch master = null;
// Iterate over the branches and find gh-pages or master
for( RepositoryBranch i : branches ) {
String theName = i.getName().toString();
if( theName.equalsIgnoreCase("gh-pages") ) {
theBranch = i;
}
else if( theName.equalsIgnoreCase("master") ) {
master = i;
}
}
if( null == theBranch ) {
theBranch = master;
}
return theBranch;
}
...
This commit SHA is very important. Without it, we cannot create a new commit that links into our existing commit graph. In our starting-point function `SaveFile()`, we skip the remaining commit steps if the SHA hash is not retrieved properly.
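To see why the base SHA is essential, here is a toy model of a commit graph in plain Java; `ToyCommit` and `append` are illustrative stand-ins, not egit-github classes:

```java
import java.util.ArrayList;
import java.util.List;

// A toy model of a commit graph -- illustrative only, not the GitHub API.
// It shows why retrieveBaseSha matters: a new commit must name the current
// branch tip as its parent, or it cannot be reached from the branch.
public class CommitGraphSketch {
    static class ToyCommit {
        final String sha;
        final List<String> parentShas = new ArrayList<>();
        ToyCommit(String sha) { this.sha = sha; }
    }

    // Append a commit on top of the current tip, linking it by parent SHA,
    // analogous to what createCommit() does with baseCommitSha.
    static ToyCommit append(ToyCommit tip, String newSha) {
        ToyCommit commit = new ToyCommit(newSha);
        commit.parentShas.add(tip.sha);
        return commit;
    }

    public static void main(String[] args) {
        ToyCommit base = new ToyCommit("a1b2c3"); // the tip retrieveBaseSha finds
        ToyCommit next = append(base, "d4e5f6");  // our new commit
        System.out.println(next.sha + " -> parent " + next.parentShas.get(0));
    }
}
```

Without the `append` step's parent link, `next` would be a dangling node: stored, but invisible to anyone walking the branch history backward from the tip.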
## Creating the Blob
Contents inside a Git repository are stored as blobs. `createBlob` manages storing our content as a blob object, and then uses the `dataService` to store this blob into a repository. Until we have called `dataService.createBlob`, we have not actually placed the object inside GitHub. Also, remember that blobs are not linked into our DAG by themselves; they need to be associated with the DAG via a tree and a commit object, which we do next:
...
String blobSha;
Blob blob;
private void createBlob() throws IOException {
blob = new Blob();
blob.setContent(contentsBase64);
blob.setEncoding(Blob.ENCODING_BASE64);
blobSha = dataService.createBlob(repository, blob);
}
...
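As an aside, if you would rather avoid the Apache Commons dependency, the JDK's built-in `java.util.Base64` performs the same lossless encoding (it requires Java 8+, or API level 26+ on Android). A quick round trip demonstrates this:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Lossless Base64 round trip using the JDK instead of Apache Commons.
public class Base64Sketch {
    static String encode(String contents) {
        return Base64.getEncoder()
                .encodeToString(contents.getBytes(StandardCharsets.UTF_8));
    }

    static String decode(String base64) {
        return new String(Base64.getDecoder().decode(base64),
                StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String post = "---\nlayout: post\n---\n\nHello, blobs!";
        String encoded = encode(post);
        System.out.println(encoded);
        System.out.println(decode(encoded).equals(post)); // prints true
    }
}
```

The round trip prints `true`, confirming that nothing is lost, which is exactly the property we rely on when handing blob content to the Git Data API.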
## Generating a Tree
Next, we generate a tree by implementing `generateTree()`. A tree wraps a blob object and essentially provides a path to our object: if you were designing an operating system, the tree would be the file path and the blob would be an inode. Our data service uses the repository and the base SHA address we retrieved earlier to validate that this is a valid starting point inside our repository. Once we have a tree, we fill out the necessary tree attributes, such as the tree type (blob) and tree mode (blob), and set the SHA from the previously created blob object along with its size. Then we store the tree into our GitHub account using the data service object:
...
Tree baseTree;
Tree newTree;
private void generateTree() throws IOException {
baseTree = dataService.getTree(repository, baseCommitSha);
TreeEntry treeEntry = new TreeEntry();
treeEntry.setPath( filename );
treeEntry.setMode( TreeEntry.MODE_BLOB );
treeEntry.setType( TreeEntry.TYPE_BLOB );
treeEntry.setSha(blobSha);
treeEntry.setSize(blob.getContent().length());
Collection<TreeEntry> entries = new ArrayList<TreeEntry>();
entries.add(treeEntry);
newTree = dataService.createTree( repository, entries,
baseTree.getSha() );
}
...
## Creating the Commit
We are getting close to finalizing the creation of content: next, implement `createCommitUser()` and `createCommit()`. We have created a blob that stores the actual content and a tree that stores the path to the content (more or less), but since Git is a version control system, we also need to store information about who wrote this object and why. A commit object stores this information. The process should look familiar from the previous steps: we create the commit and then add relevant metadata, in this case the commit message. We also need to attach a commit user to the commit. We then use the data service to create the commit inside our repository on GitHub at the correct SHA address:
...
CommitUser commitUser;
private void createCommitUser() throws IOException {
    UserService us = new UserService();                            // (1)
    us.getClient().setCredentials( login, password );
    commitUser = new CommitUser();                                 // (2)
    User user = us.getUser();                                      // (3)
    commitUser.setDate(new Date());
    String name = user.getName();
    if( null == name || name.isEmpty() ) {                         // (4)
        name = "Unknown";
    }
    commitUser.setName( name );
    String email = user.getEmail();                                // (5)
    if( null == email || email.isEmpty() ) {
        email = "unknown@example.com";
    }
    commitUser.setEmail( email );
}
Commit newCommit;
private void createCommit() throws IOException {
    // create commit
    Commit commit = new Commit();                                  // (6)
    commit.setMessage( commitMessage );
    commit.setAuthor( commitUser );                                // (7)
    commit.setCommitter( commitUser );
    commit.setTree( newTree );
    List<Commit> listOfCommits = new ArrayList<Commit>();          // (8)
    Commit parentCommit = new Commit();
    parentCommit.setSha(baseCommitSha);
    listOfCommits.add(parentCommit);
    commit.setParents(listOfCommits);
    newCommit = dataService.createCommit(repository, commit);      // (9)
}
...
1. Create a user service object. We will use this to get back user data for the logged-in user from GitHub.
2. Create a commit user. This will be used to annotate the commit object (twice, in fact, as we will use it for both the author and committer).
3. Retrieve the user from the service, loading it from GitHub.
4. Attempt to get the name for the logged-in user. If the name does not exist (the user has not set a name in their GitHub profile), set the name to "Unknown." Then store the name in the commit user object.
5. Follow the same process to establish the email for the commit user.
6. Back in the `createCommit` function, create a commit object.
7. We need an author and a committer, so pass in the commit user we created in the `createCommitUser` function.
8. Generate the list of parent commits. We will only use one, but you might recall commits can have multiple parents (a merge, for example), so the API takes a list. We create the list, create a parent commit, set on it the base SHA we determined earlier, and add it as our new commit's parent.
9. Finally, create the commit using our data service object.
## Updating the Master Resource
Our final step is to take the new commit SHA and update our branch reference to point to it:
...
TypedResource commitResource;
private void createResource() {
    commitResource = new TypedResource();                          // (1)
    commitResource.setSha(newCommit.getSha());
    commitResource.setType(TypedResource.TYPE_COMMIT);
    commitResource.setUrl(newCommit.getUrl());
}
private void updateMasterResource() throws IOException {
    Reference reference =
        dataService.getReference(repository,
            "heads/" + theBranch.getName() );                      // (2)
    reference.setObject(commitResource);
    dataService.editReference(repository, reference, true);        // (3)
}
...
1. First, we create the new commit resource, associate the new commit SHA with it, indicate it is a resource of commit type, and link it to our commit using its URL.
2. We use the data service object to get the current branch reference from GitHub. Branch references are retrieved by prepending "heads/" to the branch name (we determined the branch in a previous step).
3. Finally, we update the branch reference to point at our new commit resource.
This is the complete code to add data to GitHub using the Git Data API. Good work!
## Passing All Our Tests
Our code is complete. Let's make sure our tests run successfully.
We need to set up our test configuration to run within Android Studio. Select the "Build Variants" vertical tab on the left, and in Test Artifact select Unit Tests. Then, open the Run menu, and select "Edit configurations". Click the plus symbol, and choose JUnit. You will be presented with space to create a unit test run configuration. First, click "Use classpath of module" and select "app". Make sure the Test Kind is set to class, and then click the selector to the right of the class field. It should display your test class "GitHubHelperTest.java". We will need to store the username and password as environment variables, so click to add these. Your final configuration should look like Figure 7-5.
###### Figure 7-5. Creating a unit test configuration
Now, create the UI tests configuration: switch to "Android Instrumentation Tests" in the "Test Artifact" of the "Build Variants" tab. Then, click the "Run" menu, and again go to "Edit configurations". Click the plus symbol, and this time choose "Android Tests." Choose "app" as the module, and then select "android.support.test.runner.AndroidJUnitRunner" as the specific instrumentation runner. You can choose whichever target device you prefer, an emulator, or a physical device if you have one. Give the configuration a name like "Android Test."
To run your tests, switch to the appropriate test artifact and then from the "Run" menu, select "Debug" and choose the proper test configuration. You can set breakpoints and step through code in your test or implementation from within Android Studio.
I personally find it annoying to switch between build variants when I want to run my tests, so if you prefer, you can use the command line instead (and ignore the need to change build variants):
$ GITHUB_HELPER_USERNAME=MyUsername \
GITHUB_HELPER_PASSWORD=MyPwd123 \
./gradlew testDebugUnitTest
...
:app:mockableAndroidJar UP-TO-DATE
:app:assembleDebugUnitTest UP-TO-DATE
:app:testDebugUnitTest UP-TO-DATE
BUILD SUCCESSFUL
$ ./gradlew connectedAndroidTest
...
:app:compileDebugAndroidTestNdk UP-TO-DATE
:app:compileDebugAndroidTestSources
:app:preDexDebugAndroidTest
:app:dexDebugAndroidTest
:app:packageDebugAndroidTest
:app:assembleDebugAndroidTest
:app:connectedDebugAndroidTest
BUILD SUCCESSFUL
You will see similar results with the Android Studio test runner windows. Our tests pass and our application is complete.
If you want to see a more sophisticated use of the GitHub API on Android, take a look at Teddy Hyde (also available on the Google Play Store). Teddy Hyde uses OAuth to log in to GitHub, and has a much richer set of features for editing Jekyll blogs.
# Summary
This application lets you write into a real Jekyll blog: when you add a post, GitHub regenerates your site. This little application manages quite a few things, among them formatting the filename correctly and encoding the data for submission to GitHub, and we have a unit test and a UI test that help verify the functionality.
In the next chapter we will use CoffeeScript to create our own chat robot that requests pull request reviews from chat room members using the Activity API.
# Chapter 8. CoffeeScript, Hubot, and the Activity API
Though the phrase has now been removed from its marketing materials, GitHub used to call itself a tool for "social coding." This idea is still central to the services GitHub provides—intimate access to the social layer inside of GitHub through the Activity API.
In this chapter we'll investigate the Activity API by extending a chat robot. You might find it odd that a robot, generally considered an antisocial invention despite all best attempts, would play nicely with a social API, but this is a social robot. GitHubbers use an extensible chat robot called Hubot to record and automate their tasks, and to have fun on the Internet. If any robot is suited to interacting with the GitHub Activity API, it's Hubot, described on the site _https://hubot.github.com/_ as "a customizable, kegerator-powered life embetterment robot."
# The Activity API
The Activity API includes:
* Notifications (comments issued to users through various events)
* Stargazing tools (Facebook has "likes" while GitHub has "stars" to indicate approval or interest)
* Watching (a way to track GitHub data)
* Events (a higher-level activity stream useful for following actions of users)
The Activity API section also includes _feeds_. While feeds are grouped within the Activity API, they are not programmatic in the same way the rest of the API is, and we won't cover them in depth here. Feeds are Atom feeds, similar to RSS feeds: static feeds you can subscribe to with an Atom client, and not interactive beyond that.
# Planning for PR Satisfaction Guaranteed
We are going to build an extension to Hubot. When we are done, Hubot will be transformed into a robot that...
* listens for pull request events from GitHub by subscribing to notifications using the GitHub Activity API;
* invites people in the chat room to comment on those pull requests;
* guarantees that communication between it and GitHub is securely delivered (with an unfortunate bug as caveat);
* retrieves vital information from an external service (the Slack API);
* has functionality fully described by automated tests;
* allows easy simulation of inputs and outputs that map to the inputs and outputs it gets from APIs and services; and
* runs with ease on a major PaaS (Heroku).
Hubot provides the skeleton for our chat robot. We'll add the preceding functionality to Hubot and see how easy it is to combine these features into a coherent whole that solves a real problem.
## Considerations and Limitations
If you want stability with your Hubot, you need to host it on a server. Hubot is written in NodeJS and requires a hosting service that supports NodeJS. Our Hubot needs to sit on a public IP address (not inside the firewall) because we receive notifications from GitHub. It is not strictly required that you host Hubot on a public server; if your Hubot does not need to receive requests from the outside world, you can host on a private internal server as well.
The simplest and cheapest hosting service for Hubot is Heroku. Once we generate our Hubot, we can simply do a git-push into Heroku to publish our chat robot for free. We'll show these steps later in the chapter.
Hubot works with many chat endpoints. Your Hubot can connect to almost any popular chat service or protocol: IRC, XMPP, and many commercial services like Gchat, Basecamp, and even Twitter. Slack is a relatively new entrant into the world of chat services, but despite its youth, the Slack API is solid and connecting third-party clients to the Slack service is simple and straightforward. We'll use Slack as our chat endpoint.
Now let's create our Hubot and configure it to use Slack.
## Creating a Vanilla Hubot
To build a Hubot you will need a working NodeJS installation, as specified in Appendix B. The following commands create a directory with a barebones Hubot:
$ npm install -g generator-hubot   # (1)
$ mkdir slacker-hubot              # (2)
$ cd slacker-hubot/
$ yo hubot                         # (3)
$ npm install hubot-slack --save   # (4)
You may not be familiar with these commands, so let's go over the important ones.
1. `npm` is the tool that installs packages for NodeJS (documented in Appendix B). The `npm install -g generator-hubot` command installs a command-line tool called Yeoman and a plug-in for Yeoman that scaffolds Hubot.
2. Create a new directory and enter it so that your Hubot is stored entirely in its own space.
3. Run the generator using the `yo hubot` command. This builds out the set of files for a minimal Hubot.
4. Finally, install the Slack adapter and save the package to the _package.json_ file.
Now that we have a simple Hubot created we need to create the Slack site where our Hubot will live.
## Creating a Slack Account
Going to _https://slack.com/_ starts the process of creating your own Slack site. You'll need to step through creating an account. Slack sites are segmented by organization, and you'll want to establish a URL prefix for your Slack site. Typically this is the name of your organization.
### Naming the channel
Once you have your slack site created, you need to create a channel as in Figure 8-1.
###### Figure 8-1. Creating a channel from the Slack sidebar
You can name the channel anything you want, but it is often a good mnemonic to use a name that suggests this is a channel where more serious work gets done. You could use a name like "PR Discussion" to indicate this is the channel where PRs are discussed. To keep things simple, we will use the name "#general." Once you click the link to create a channel, you'll see a popup asking for the name and an optional description. After you have created the channel, you will see a link to "Add a service integration" as shown in Figure 8-2.
###### Figure 8-2. Adding service integrations to Slack
Slack supports many different service integrations, and one of them is Hubot as shown in Figure 8-3.
###### Figure 8-3. Service integration options for Slack
Choosing Hubot takes you to a settings screen for your Hubot integration.
Slack automatically generates an authentication token for you, which is used to verify the connection from your Hubot. The token can be revoked; in fact, the token from Figure 8-4 has been revoked and can no longer be used to authenticate into Slack. If you ever accidentally publicize your token, you can easily revoke it and assign a new one to your Hubot on this screen.
You will also need to specify a name. Use "probot" and if you'd like, change the avatar associated with the Hubot (these options are shown in Figure 8-4).
###### Figure 8-4. Hubot configuration page for Slack
Make sure you save your integration before continuing.
## Running Hubot Locally
Eventually you will want to run your Hubot on a server, but Hubot can run from a laptop behind a firewall as well. At the beginning of development, while testing and developing your bot and the changes are fast and furious, you probably want to run Hubot locally. In fact, Hubot behind a firewall is almost identical in its feature set with one major exception: anything behind the firewall is inaccessible, obviously, to external services. We are eventually going to be configuring GitHub to send events to us when a pull request is created, and Hubot behind the firewall cannot receive those events. But, for almost all other functionality, running Hubot locally speeds up development cadence.
To run your bot locally, make sure you specify the variables on the command line:
$ HUBOT_SLACK_TOKEN=xoxb-3295776784-nZxl1H3nyLsVcgdD29r1PZCq \
./bin/hubot -a slack
This command runs the Hubot script with the Slack adapter. The Slack adapter knows how to interact with the Slack.com service. It requires an authentication token, and this is provided via the environment variable at the beginning of the line.
### A first conversation
Your bot should be set up and waiting in the #general room inside your Slack site. Go to the #general room. You can then test that Hubot is properly connected by typing the name of your Hubot followed by a command like `the rules`. For example, if our Hubot is named probot, we would type `probot the rules`, which displays the conversation shown in Figure 8-5.
###### Figure 8-5. Hubot's built-in repartee
We see that our Hubot printed out the rules it abides by (published originally by Isaac Asimov in his "Runaround" short story in 1942).
### Exploring the Hubot vocabulary
Hubot out-of-the-box supports many commands. To get a list, type `help` to see a list like that shown in Figure 8-6.
###### Figure 8-6. Listing the Hubot vocabulary
The `pug me` command is a favorite. Many people new to Hubot quickly get sucked into spending hours looking at cute pictures of pugs. Beware!
# Installation on Heroku
Now that we've successfully started our Hubot locally, we can move it to Heroku and keep it running even when our laptop is turned off.
## Setting Up Heroku
Heroku requires registration before using it. Heroku offers free plans and everything we'll do here can be done using one of them. Once you have created an account, install the Heroku toolbelt found here: _https://toolbelt.heroku.com/_. The toolbelt provides a set of tools useful for managing Heroku applications. You will need to have Ruby set up as explained in Chapter 1.
If your chatbot is working per the instructions given in the previous section, then it is almost ready to deploy to Heroku. You'll need to add the same environment variable using the Heroku tools. In addition to the authentication token for Slack, you will need to configure a URL for your site. Heroku will generate a URL for you from the name of your project (in this case `inqry-chatbot`); so as long as the name has not been claimed already by someone else, you can name it as you will:
$ heroku create inqry-chatbot
$ heroku config:add HEROKU_URL=https://inqry-chatbot.herokuapp.com/
$ heroku config:add HUBOT_SLACK_TOKEN=xxbo-3957767284-ZnxlH1n3ysLVgcD2dr1PZ9Cq
$ git push heroku master
Fetching repository, done.
Counting objects: 5, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 317 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
-----> Node.js app detected
-----> Requested node range: 0.10.x
...
-----> Compressing... done, 6.8MB
-----> Launching... done, v9
https://inqry-chatbot.herokuapp.com/ deployed to Heroku
To git@heroku.com:inqry-chatbot.git
d32e2db..3627218 master -> master
If you need to troubleshoot issues with your Hubot, you can always view your application's logs with the `heroku logs` command; add the `-t` flag to tail them:
$ heroku logs -t
2014-11-18T07:07:18.716943+00:00 app[web.1]: Successfully 'connected'
as hubot
2014-11-18T07:07:18.576287+00:00 app[web.1]: Tue, 18 Nov 2014 07:07:18
GMT connect deprecated limit: Restrict request size at location of
read at
node_modules/hubot/.../express/.../connect/.../middleware/multipart.js:86:15
...
When you send commands into your chat room you will notice events inside of Heroku. This is a good way to verify that your bot is wired into Slack properly.
You might also want to publish this repository into GitHub. Heroku, as a part of hosting your live application, also hosts the full Git repository of your Hubot (Hubot, as friendly as it tries to be, is just another NodeJS application in the end). Heroku can host the entirety of the source code for your Hubot for you, but does not have the additional tools, like user management, that GitHub does. For this reason, use your GitHub account as your code repository, the place where team members develop new features of your chatbot. Build and test locally, and then push into Heroku using the ease of the Git workflow as a deployment layer.
Now that we have created and installed Hubot, let's look at the Activity API and determine how we want to code our extension.
# Activity API Overview
The Activity API centers around notifications: notifications are similar to the notifications you see on social networking sites, events that occur that document important points of interest inside a timeline of activity. GitHub activity events are often tied to important milestones inside of a developer's day, activities like pushing commits into the main remote repository, asking questions on discussion threads associated with a repository, or assigning issues to a developer for review.
These notifications are accessible to team members without programmatically accessing the GitHub API. Team members are notified of events inside their workflow by email, based on several rules. GitHub will automatically send out notification emails when a user has watched a repository and issues or comments are added, a pull request is made, or comments are made on a commit. In addition, even if users have not watched a repository, they will be notified if they are _@mentioned_ (someone prefixes the @ character to their name inside a comment), when an issue is assigned to them, or when they participate in a discussion associated with any repository.
The GitHub policy for notification is definitely to err on the side of being overly verbose. Many people live in their email, and making sure that all important activities are distributed to the right people involved makes sense. GitHub has a good set of rules for making sure the correct notifications get to the right parties.
Email does falter as a to-do list, however, and at times the ease in which email can be delivered breeds a secondary problem: overwhelm. It can be very easy to lose focus (vital to building software) when you are constantly context switching by checking email, and notifications can often fly by. In addition, email is privately directed and prevents easy collaboration; generally people don't share email inboxes. Let's extend our Hubot to help us resolve these problems by taking our GitHub notifications into a shared and "opt-in when you are logged-in" communication channel.
## Writing a Hubot Extension
Hubot extensions are written in either JavaScript or CoffeeScript. CoffeeScript is an intermediate language that compiles directly to JavaScript. Many people prefer writing in CoffeeScript because it has a cleaner syntax and writes "safer" JavaScript (the syntax helps you avoid common tricky pitfalls in the JavaScript language, like what "this" refers to). CoffeeScript is an indentation-based language (much like Python) and, after the initial learning curve, can feel easier to read than JavaScript, especially when you have many nested function callbacks (common in JavaScript programming); it is easier to see where a function begins and ends given the indentation levels. Hubot is itself written in CoffeeScript, and we'll write our extension in CoffeeScript as well.
CoffeeScript is a language where indentation is important. For readability purposes, when we display a snippet of code from a longer file, there are times where we have changed the indentation of that snippet and removed the initial indentation. If you were to copy the code without realignment, the snippet would not work until you reindented it to fit the context into which it sits.
The Hubot extension module format is exceedingly simple. You write JavaScript (or CoffeeScript) modules using the `module.exports` syntax, and Hubot passes in a robot object that you program using several API methods.
There are a few concepts useful to programming Hubot. You can find an example of each of these methods inside the _example.coffee_ file inside the _scripts_ directory:
* Hubot has a "brain." This is an internal state object, which means these values persist across chat messages. This state is not persisted into a database by default, so this state is not restored if you restart Hubot. However, a persistence mechanism is exposed via Redis, though this is optional and requires configuration. The brain is the way you set and get values that are saved across discrete messages.
* Hubot has different response mechanisms. It can respond only when it hears exact phrases, or whenever keywords appear anywhere in a message, and you don't need to do the grunt work inside your code to distinguish between these communication types.
* Hubot includes an HTTP server. You might need your Hubot to accept requests from additional services beyond the chat service, and Hubot makes it easy to accept these kinds of requests.
* Hubot has a built-in HTTP client. You can easily access HTTP resources within Hubot; many popular extensions to Hubot access a web service when Hubot receives a request.
* Hubot commands can include parameters. You can tell a Hubot to do something multiple times and write a generic function that accepts options.
* Hubot can handle events. Each chat service has a generalized set of events that are normalized to a common API. Hubot can be programmed to interact with these events. For example, Hubot can perform actions when a room topic changes or when users leave rooms.
* Hubot can handle generic errors at the top level. Hubot can be programmed with a catch-all error handler so that no matter where your code failed, you can catch it without crashing your bot.
Hubot will use the first five of these features:
* We will use the Hubot brain to store a PR review request. If Hubot asks a user to review a PR, it needs to keep track of this so that when the user responds it has some context of the request.
* We will use the `respond` method to program our Hubot to handle a request when a user accepts or declines the review request.
* We will use the HTTP server to accept PR notifications from GitHub webhooks.
* We will use the HTTP client to get a list of users from Slack.
* We will use the parameterization of requests to Hubot to retrieve the specific pull request ID from a chat user message.
There are examples of the other two features (events and generic errors) inside the examples script that ship with the Hubot source code but we won't use those APIs in our Hubot.
## Code Reviews via Pull Requests
As we've seen in other chapters, pull requests are the mechanism used on GitHub to easily integrate code changes into a project. Contributors either fork the master repository and then issue a pull request against that repository, or, if they have write permission to the main repository, make a "feature" branch and then issue a pull request against the "master" branch.
Pull requests often come with a chat message indicating several people who should review the request. This tribal knowledge about who should be involved exists only in the head of the developer who created the code. It could be that they invited the correct people, or it could be that they invited the people they prefer to review their code, for various (and completely rational) reasons. This can be an effective way to engage the right people around a new piece of code.
Inviting reviewers this way has downsides as well: if the person is otherwise engaged, a pull request can linger while the notification email goes unread. There is also good research indicating that the best-performing teams are those who share all tasks and responsibilities equally, yet it rarely scales to ask everyone to participate in every code review associated with a pull request. It might be the case that randomly selecting developers involved in a project is a better (and more efficient) way to assign code reviews than asking the developer who created the code to determine these people.
Hubot will assign active chat room users to do code reviews when a new pull request is created. We will use the GitHub Activity API to subscribe to pull request events. When Hubot becomes aware that a pull request needs review, it will randomly assign a user in the chat room to do the review and then ask that user if they want to accept the challenge. If they accept, we will note that in the pull request comments.
### Extension boilerplate
We will start writing our extension by defining the high-level communication format we expect from our users. Our script has a simple vocabulary: look for responses indicating acceptance or refusal of our review requests. Our extension script should be in the _scripts_ directory and named _pr-delegator.coffee_. This is just the back and forth we will be having with users; we are not yet writing any code to handle the pull request notifications:
module.exports = (robot) -> 
robot.respond /accept/i, (res) -> 
accept( res )
robot.respond /decline/i, (res) -> 
decline( res )
accept = ( res ) -> 
res.reply "Thanks, you got it!"
console.log "Accepted!" 
decline = ( res ) -> 
res.reply "OK, I'll find someone else"
console.log "Declined!"
This is a dense piece of code and can be confusing if you are new to CoffeeScript. At the same time, hopefully you will agree that this is amazingly powerful code for such a small snippet after reading these notes.
NodeJS modules define their entry points using the `exports` syntax. This code defines a function that expects a single parameter; when the function is executed, the Hubot framework passes in a `robot` object, which we program further down.
The Hubot API defines a method on the `robot` object called `respond`, which we use here. It takes two parameters: a regular expression to match against and a function that receives an instance of the chat response object (called `res` here). The second line uses the API for this response object to call a method `accept` with the response object. We define `accept` in a moment.
We set up a response matcher for a `decline` response in the same way.
Now we define the `accept` method. The `accept` method receives the response object generated by the Hubot framework and calls the `reply` method, which, you guessed it, sends a message back into the chat channel with the text "Thanks, you got it!"
The `accept` method then also calls `console.log` with information that is displayed on the console from which we started Hubot. This is a simple way for us to assure everything worked correctly; if we don't see this message, our code before this was broken. The `console.log` is not visible to any users in the channel. It is good practice to remove this code when you finalize your production code, but if you forget, it won't affect anything happening in the channel.
We then define the `decline` method using the same APIs as for the `accept` method.
If Hubot is running, you will need to restart it to reload any scripts: kill Hubot (using Ctrl-C), restart it, and then play with commands inside your Slack site. Enter the commands `probot accept` and `probot decline` and you'll see Hubot responding inside the channel. You'll also see the message `Accepted!` or `Declined!` printed to the console on which Hubot is running.
### Writing tests for Hubot extensions
Now that we have the basics of our Hubot working, let's make sure we certify our code with some tests. We'll use the Jasmine testing framework for NodeJS. It offers an elegant behavior-driven testing syntax where you specify a behavior as the first parameter to an `it` function and, as a second parameter, a function that is run as the test itself. Jasmine manages running each `it` call and displays a nice summary of passing and failed tests at the end of your run. Jasmine tests are typically written in JavaScript, but recent versions of Jasmine also support tests written in CoffeeScript. Hubot is written in CoffeeScript, so let's write our tests in CoffeeScript as well. We need to put our tests inside a directory called _spec_ and make sure our filename ends with _.spec.coffee_; let's use _spec/pr-delegator.spec.coffee_ as the complete filename. Jasmine expects spec files to have _.spec._ at the end of the filename (before the extension, either _.js_ or _.coffee_); if your filename does not match this pattern, Jasmine won't recognize it as a test.
Probot = require "../scripts/pr-delegator"
Handler = require "../lib/handler"
pr = undefined
robot = undefined
describe "#probot", ->
beforeEach () ->
robot = {
respond: jasmine.createSpy( 'respond' )
router: {
post: jasmine.createSpy( 'post' )
}
}
it "should verify our calls to respond", (done) ->
pr = Probot robot
expect( robot.respond.calls.count() ).toEqual( 2 )
done()
The first line in our test requires, or loads, the Hubot extension module into our test script, giving us a function we save as a `Probot` variable. We then create a `describe` function, which is an organizing function to group tests. `describe` functions take an identifier (in this case `#probot`) and a function that contains multiple `it` calls. In addition, a `describe` function can also contain a `beforeEach` function that configures common elements used inside our `it` calls; in this case we create a faked `robot` object we will pass into our `Probot` function call. When we are running Hubot itself, Hubot creates the `robot` and passes it into the `Probot` function, but when we run our tests, we generate a fake one and query it to make sure it is receiving the proper configuration. If we make a change inside our actual Hubot code and forget to update our tests to verify those changes, our tests will fail, telling us either to augment our tests or that something broke inside our `robot`: a good automated sanity check for when we are feverishly coding away, animating our helpful Hubot.
You should see some similarities between the calls made to our `robot` (`robot.respond` and `robot.router.post`) and the tests. We set up "spies" using Jasmine that generate fake function calls capable of recording any interaction from outside sources (either our production code or the test code harness). Inside our `it` call, we then verify that those calls were made: we use the `expect` function to verify that we made two calls to the `respond` function defined on the `robot`. We will add a similar check for `robot.router.post` shortly.
We need to install Jasmine, and we do this by adding to our _package.json_ file. Append `"jasmine-node": "^2.0.0"` to the file, and make sure to add a comma to the entry above it. Adding this specifies that the minimum version of jasmine-node we will use is "2.0.0".
...
"hubot-shipit": "^0.1.1",
"hubot-slack": "^3.2.1",
"hubot-youtube": "^0.1.2",
"jasmine-node": "^2.0.0"
},
"engines": {
...
Running the following commands will then install Jasmine (the library and a test runner command-line tool) and run our tests. We abbreviate some of the installation output to save space:
$ npm install
...
hubot-slack@3.2.1 node_modules/hubot-slack
└── slack-client@1.2.2 (log@1.4.0, coffee-script@1.6.3, ws@0.4.31)
...
$ ./node_modules/.bin/jasmine-node --coffee spec/
.
Finished in 0.009 seconds
1 test, 1 assertions, 0 failures, 0 skipped
Our tests pass and we now have a way to document and verify that our code does what we think it does.
### Setting up our webhook
We are now in a position to start adding the actual functionality to our Hubot. Our first requirement is to register for pull request events. We could do this from within the GitHub website, but another way is to use the cURL tool to create the webhook from the command line. In order to do this, we need to first create an authorization token, and then we can use that token to create a webhook.
To create the token, run this command, setting the proper variables for your username instead of mine ("xrd"):
$ export USERNAME=xrd
$ curl https://api.github.com/authorizations --user $USERNAME --data
'{"scopes":["repo"], "note": "Probot access to PRs" }' -X POST
This call can return in one of three ways. If your username or password is incorrect, you will get an error response message like this:
{
"message": "Bad credentials",
"documentation_url": "https://developer.github.com/v3"
}
If your username and password are correct and you don't have two-factor authentication turned on, the request will succeed and you will get back a token inside the JSON response:
{
"id": 238749874,
"url": "https://api.github.com/authorizations/9876533",
"app": {
"name": "Probot access to PRs",
"url": "https://developer.github.com/v3/oauth_authorizations/",
"client_id": "00000000000000000000"
},
"token": "fakedtoken1234",
"hashed_token": "fakedhashedtoken7654",
...
If you are using two-factor authentication then you will see a response message like this:
{
"message": "Must specify two-factor authentication OTP code.",
"documentation_url":
"https://developer.github.com/v3/auth#working-with-two-factor-authentication"
}
If you get this message in response to the prior cURL command, you will receive a one-time password via your configured two-factor method: a text message, an authenticator app like Google Authenticator, or recovery codes you printed out earlier. If you use text messaging, check your text messages and then resend the request, appending a header to the cURL invocation:
$ curl https://api.github.com/authorizations --user $USERNAME --data
'{"scopes":["repo"], "note": "Probot access to PRs" }' -X POST
--header "X-GitHub-OTP: 423584"
Enter host password for user 'xrd':
If all these steps complete successfully (regardless of whether you are using two-factor authentication or not) you will then receive an OAuth token:
{
"id": 1234567,
"url": "https://api.github.com/authorizations/1234567",
"app": {
"name": "Probot access to PRs (API)",
"url": "https://developer.github.com/v3/oauth_authorizations/",
"client_id": "00000000000000000000"
},
"token": "ad5a36c3b7322c4ae8bb9069d4f20fdf2e454266",
"note": "Probot access to PRs",
"note_url": null,
"created_at": "2015-01-13T06:23:53Z",
"updated_at": "2015-01-13T06:23:53Z",
"scopes": [
"notifications"
]
}
## Using the OAuth Token to Register for Events
Once this is completed, we have a token we can use to create a webhook. Make sure to use the correct repository name and access token before running the cURL command. We will also need the endpoint we created when we published into Heroku (in our case `https://inqry-chatbot.herokuapp.com`):
$ REPOSITORY=testing_repostory
$ TOKEN=ad5a36c3b7322c4ae8bb9069d4f20fdf2e454266
$ WEBHOOK_URL=https://inqry-chatbot.herokuapp.com/pr
$ CONFIG=$(echo '{
"name": "web",
"active": true,
"events": [
"push",
"pull_request"
],
"config": {
"url": "'$WEBHOOK_URL'",
"content_type": "form",
"secret" : "XYZABC"
}
}')
$ curl -H "Authorization: token $TOKEN" \
-H "Content-Type: application/json" -X POST \
-d "$CONFIG" https://api.github.com/repos/$USERNAME/$REPOSITORY/hooks
{
"url": "https://api.github.com/repos/xrd/testing_repostory/hooks/3846063",
"test_url":
"https://api.github.com/repos/xrd/testing_repostory/hooks/3846063/test",
"ping_url":
"https://api.github.com/repos/xrd/testing_repostory/hooks/3846063/pings",
"id": 3846063,
"name": "web",
"active": true,
"events": [
"push",
"pull_request"
],
"config": {
"url": "https://inqry-chatbot.herokuapp.com/pr",
"content_type": "json"
},
"last_response": {
"code": null,
"status": "unused",
"message": null
},
"updated_at": "2015-01-14T06:23:59Z",
"created_at": "2015-01-14T06:23:59Z"
}
There is a bit of bash cleverness here, but nothing to be overly disturbed by. We create a few variables we use in the final command. Since the `$CONFIG` variable is particularly long, we use `echo` to print out a bunch of information with the webhook URL in the middle. If you want to see the result of that variable, type `echo $CONFIG` and you'll notice the snippet `... "url": "https://inqry-chatbot.herokuapp.com/pr" ...` properly interpolated.
Here we use the Heroku API URL as our webhook endpoint. This means we need to have things hosted on Heroku for the webhook to talk to our HTTP server properly. We can do some things (like connecting the Hubot to the Slack service) from behind a firewall and have it talk with other chat room participants, but any webhook request will fail unless the chat client is running on a publicly available server.
Be careful to set the `content_type` to `"form"` (which is the default, so you could also leave it blank). Setting it to `json` makes it difficult to retrieve the raw body inside your Hubot when the POST request is received and to validate the request using a secure digest. We want to make sure all requests are genuine requests from GitHub and not a cracker attempting to maliciously inject themselves into our conversations. To protect against this, we verify each request using the secret generated when we created the webhook. We'll discuss this in detail later in this chapter, but for now, establish a secret when you create the hook. A cracker might be able to guess where our endpoint exists, but unless Heroku or GitHub is compromised, they won't know our webhook secret.
We should update our tests to make sure we anticipate this new functionality. We will be using the Hubot HTTP server, which piggybacks on the built-in express server running inside of Hubot. Our new test should reflect that we use the `router.post` method exposed to our Hubot, and that it is called once. We add this next test to the end of our spec file:
it "should verify our calls to router.post", (done) ->
pr = Probot robot
expect( robot.router.post ).toHaveBeenCalled()
done()
This additional test will fail should we run it. Now we can add to our Hubot and have it handle webhook callbacks from GitHub. Add this to the end of the file:
robot.router.post '/pr', ( req, res ) ->
console.log "We received a pull request"
Now if we run our tests, they all pass. If they do, publish our new version of the app into Heroku. We'll omit this step in the future, but if you want to receive pull requests on the router you have set up, remember that you need to publish your files into Heroku so the endpoint is public.
$ ./node_modules/.bin/jasmine-node --coffee spec/
..
Finished in 0.009 seconds
2 tests, 2 assertions, 0 failures, 0 skipped
$ git commit -m "Working tests and associated code" -a
...
$ git push heroku master
Fetching repository, done.
Counting objects: 5, done.
Delta compression using up to 8 threads.
...
We now have an end-to-end Hubot setup, ready to receive webhook notifications.
## Triggering Real Pull Requests
We can now start testing our Hubot with real GitHub notifications. First, let's set up a repository we can use for testing. Creating the new repository on GitHub is a quick task if we use the `hub` tool described in Chapter 6:
$ mkdir testing_repository
$ cd testing_repository
$ git init
$ touch test.txt
$ git add .
$ git commit -m "Initial checkin"
$ hub create
...
Now we can create a real pull request for our repository from the command line and test our Hubot. A typical pull request flow looks like the following:
1. Create a new branch
2. Add new content
3. Commit the content
4. Push the new branch into GitHub
5. Issue a pull request
All of this can be automated using a combination of Git commands and cURL. We've seen some of these commands before and can reuse the previous command-line invocations and variables we used when generating our webhook using the API via cURL. Our config variable is similar, but the required fields in this case are: the title and body for the pull request, the `"head"` key that matches the name of the branch, and where to merge it to using the `"base"` key.
Creating a new branch, adding some content, and then issuing a pull request against the branch might be something we need to do several (or more) times as we experiment and learn about the Hubot extension API. The examples here work right out of the box, but don't be fooled into thinking that it all went exactly as we expected the first time. Given that, these are commands you might want to perform multiple times as you are experimenting, so let's put the commands described in the previous paragraph into a bash script that is generic and can be run multiple times. We can call it `issue-pull-request.sh` and place the script inside the test directory:
#!/bin/bash
# Modify these three variables
AUTH_TOKEN=b2ac1f43aeb8d73b69754d2fe337de7035ec9df7
USERNAME=xrd
REPOSITORY=test_repository
DATE=$(date "+%s")
NEW_BRANCH=$DATE
git checkout -b $NEW_BRANCH
echo "Adding some content" >> test-$DATE.txt
git add test-$DATE.txt
git commit -m "Adding test file to test branch at $DATE" -a
git push origin $NEW_BRANCH
CONFIG=$(echo '
{ "title": "PR on '$DATE'",
"body" : "Pull this PR '$DATE'",
"head": "'$NEW_BRANCH'",
"base": "master"
}' )
URL=https://api.github.com/repos/$USERNAME/$REPOSITORY/pulls
curl -H "Authorization: token $AUTH_TOKEN" \
-H "Content-Type: application/json" -X POST -d "$CONFIG" "$URL"
This script generates a unique string based on the current time. It then creates and checks out a new branch with that name, adds some content to a unique file, stages and commits it (the `git add` is needed because `git commit -a` only picks up changes to files Git already tracks, and our test file is new), pushes the branch into GitHub, and generates a pull request using the API. All you need to do is make a one-time update to the three variables at the top of the script to match your information. The script is resilient: even if your auth token were incorrect (or had expired), running it would do nothing other than add testing data to your test repository, so you can experiment safely. Just be sure to pay attention to whether you see a successful JSON response, as shown in the following output, or an error message. And, since we are going to run this script as a command, make it executable using the `chmod` command.
Now, let's run it and see what happens:
$ chmod +x ./issue-pull-request.sh
$ ./issue-pull-request.sh
{
"url": "https://api.github.com/repos/xrd/testing_repostory/pulls/1",
"id": 27330198,
"html_url": "https://github.com/xrd/testing_repostory/pull/1",
"diff_url": "https://github.com/xrd/testing_repostory/pull/1.diff",
"patch_url": "https://github.com/xrd/testing_repostory/pull/1.patch",
"issue_url": "https://api.github.com/repos/xrd/testing_repostory/issues/1",
"number": 1,
"state": "open",
"locked": false,
"title": "A PR test",
"open_issues_count": 1,
...
This returns a huge JSON response (abbreviated here), but you can see the first item is a link to the pull request. For a human-readable link, we should use the link called `html_url`. Were we to visit this link, we could merge the pull request from within the GitHub web UI.
To see more context on what is happening with this pull request, once we are looking at it inside of GitHub, we can navigate to the settings for the repository and follow the "Webhooks and Services" link in the left navigation bar; at the very bottom of the page we will find a list of recent deliveries to our webhook, as in Figure 8-7.
###### Figure 8-7. Recent failed deliveries from our webhook
These requests all failed; our Hubot is not correctly configured to handle real HTTP requests from GitHub. This does show that GitHub is trying to do something when a pull request is received. We'll work on getting our handler code written and pushed into Heroku, and then issue another PR.
## Handling PR Notifications as Post Requests over HTTP
Let's build our HTTP handler when PR notifications arrive from GitHub. At first glance, we might take the easy route, adding it directly into the top-level script. But given the fact that JavaScript handles events inside of callbacks and the fact that Hubot extensions only export a single constructor (using the `module.exports` syntax), it is easier to create, and more importantly test, a separate module, which we require in our main extension script.
We start by writing our tests. We've already created a test that verifies the call to `robot.router.post`. Our new functionality will actually handle the PR notification, so let's add a new grouping using the describe syntax and call it `"#pr"`. The new functionality is simple: if the Hubot receives the proper parameters (most importantly that the internal secret matches the secret sent on the request) then we accept the PR as valid and message our room with further instructions, namely inviting some user to review this pull request. Our handler then needs to expose two methods: `prHandler`, which is where we delegate any information coming from an HTTP request to the `/pr` route, and a method where we can configure the secret, which we call `setSecret`. Once we have established this internal signature for our handler library, we can add two simple tests and then our library.
We have two tests: one that handles the correct flow and one that handles the incorrect flow. In a before block (this happens before each test) we set up a fake `robot`, and set the secret on our handler module. Our faked `robot` implements the same methods a real Hubot `robot` does (the `messageRoom` and `send` methods), but we create Jasmine spies to verify these functions are called inside our implementation code:
describe "#pr", ->
secret = "ABCDEF"
robot = undefined
res = undefined
beforeEach ->
robot = {
messageRoom: jasmine.createSpy()
}
res = { send: jasmine.createSpy() }
Handler.setSecret secret
it "should disallow calls without the secret", (done) ->
req = {}
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).not.toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
it "should allow calls with the secret", (done) ->
req = { body: { secret: secret } }
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
Now, add a file called _./lib/handler.coffee_ :
_SECRET = undefined

exports.prHandler = ( robot, req, res ) ->
  secret = req.body?.secret
  if secret == _SECRET
    console.log "Secret verified, let's notify our channel"
    room = "general"
    robot.messageRoom room, "OMG, GitHub is on my caller-id!?!"
  else
    console.log "Invalid secret"
  res.send "OK\n"

exports.setSecret = (secret) ->
  _SECRET = secret
As you can see, the Hubot API does a lot of work for us: it processes the JSON POST request to the `/pr` endpoint and provides the parsed parameters inside the `body` object, which we use to retrieve the secret from the request. Even if you have used CoffeeScript before, you may not be familiar with the `?.` syntax: it accesses `secret` only if `body` is defined, returning `undefined` otherwise, which prevents a crash if no secret is sent with the request. If the secret from the request matches the configured secret, we message the room; otherwise we ignore the request. In either case, we respond to the calling server using the `send` method (`send` is provided by the built-in _express_ server Hubot uses to provide an HTTP server). For debugging purposes we log to the console whether the secret was validated, but the response to the calling client is the same regardless of whether the secret was correct; we don't want to give an attacker anything extra when they pass in an incorrect secret.
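For readers coming from JavaScript, the existential access can be sketched with a plain short-circuit guard (modern JavaScript now has an equivalent `?.` optional chaining operator of its own; `secretFrom` is a hypothetical helper name used only for illustration):

```javascript
// CoffeeScript's `req.body?.secret` guards against a missing body.
// In plain JavaScript, the same guard can be written with &&:
function secretFrom(req) {
  return req.body && req.body.secret;
}

console.log(secretFrom({}));                             // undefined
console.log(secretFrom({ body: { secret: 'XYZABC' } })); // XYZABC
```

Without the guard, `req.body.secret` on a request with no body would throw a `TypeError` and crash the handler.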
If we run our tests we will see them all pass:
$ node_modules/jasmine-node/bin/jasmine-node \
--coffee spec/pr-delegator.spec.coffee
....
Finished in 0.01 seconds
4 tests, 6 assertions, 0 failures, 0 skipped
Hubot spawns its HTTP server wherever it runs, so we can talk to it on our local machine and test it locally using cURL (even though the machine is likely behind a firewall and inaccessible to GitHub). Remember that our robot router accepts commands as HTTP POST requests, so we need to make a POST request (using the `--data` switch with cURL):
$ ( HUBOT_SLACK_TOKEN=xoxb-3295776784-nZxl1H3nyLsVcgdD29r1PZCq \
./bin/hubot -a slack 2> /dev/null | grep -i secret & )
$ curl --data '' http://localhost:8080/pr
Invalid secret
OK
$ curl --data 'secret=XYZABC' http://localhost:8080/pr
Secret verified
OK
$ kill `ps a | grep node | grep -v grep | awk -F ' ' '{ print $1 }'`
These commands verify that things are working properly. First, we start the server, piping the output to `grep` to constrain output related to our secret processing (we also background the entire chain using an ampersand and parentheses, a bash trick). Then, we hit the server running locally without the secret: the server (as it is running in the same shell) prints out the message "Invalid secret" using `console.log`, and then cURL prints out "OK," which is what was returned from our server. If we run the command again, this time including the secret as a POST parameter, we see that Hubot verified the secret internally against its own secret, and cURL again prints "OK," which is what the express server inside of Hubot returned to the calling client. The final line quits Hubot: it finds the PID for the Hubot client (which runs as a node process) and sends it a termination signal (`kill` sends SIGTERM by default), telling Hubot to quit.
Provided you connected correctly to your Slack site, you'll also see a message inside your #general channel, which says "OMG, GitHub is on my caller-id!?!" We now have a simple way to trigger a pull request notification without going through the formality of actually generating a pull request. Between our script, which issues real pull requests through the GitHub API, and this one that fakes a webhook notification, we have the ability to test our code externally as we develop it. Of course, our tests are valuable, but sometimes it is impossible to understand what is happening inside of our Hubot without running against the real Hubot and not a test harness.
### Assigning an active chat room user
Now that we have an incoming pull request (albeit one we are faking), we need to write the code to find a random user and assign them to the pull request.
This next section is redundant; our Hubot will function exactly as we need it to if you were to disregard any code from this section. As I was writing this book, I mistakenly missed the fact that the Hubot `brain` contains a list of users and found another avenue to get that data, the Slack API. I wrote the chapter using the Slack API, and then discovered my mistake.
Initially I planned to remove this entire section. However, it does demonstrate the ease of using an external service through the built-in HTTP client, which is a powerful feature of Hubot. And it also demonstrates how powerful tests aid you when developing a Hubot extension; I was able to refactor to use a radically different internal code path for getting the list of users and maintain faith that the end-to-end process of my code works by refactoring and then fixing broken tests. And, though not important for this section per se, the Slack API provides much richer data on the users logged in to a room, which could be valuable in other situations. If you want to skip to the next section, you will have all the code to build our Hubot as we described earlier. But I think it is a worthwhile read for general Hubot understanding.
To find a user in the room, one option is to go outside the Hubot API and use the Slack API to query for a list of users. The Slack API provides an endpoint, giving you all users currently in a room. To access the Slack API, we will use the built-in Hubot HTTP client. Once we have the list of members in the room we can look over this list and randomly choose a member and deliver the PR request to them:
_SECRET = undefined
anyoneButProbot = (members) -> 
user = undefined
while not user
user = members[ parseInt( Math.random() * \
members.length ) ].name
user = undefined if "probot" == user
user
sendPrRequest = ( robot, body, room, url ) -> 
parsed = JSON.parse( body )
user = anyoneButProbot( parsed.members )
robot.messageRoom room, "#{user}: Hey, want a PR? #{url}"
exports.prHandler = ( robot, req, res ) ->
slack_users_url = 
"https://slack.com/api/users.list?token=" +
process.env.HUBOT_SLACK_TOKEN
secret = req.body?.secret 
url = req.body?.url
if secret == _SECRET and url
room = "general"
robot.http( slack_users_url ) 
.get() (err, response, body) ->
sendPrRequest( robot, body, \
room, url ) unless err
else
console.log "Invalid secret or no URL specified"
res.send "OK\n"
exports.setSecret = (secret) ->
_SECRET = secret
We define a method called `anyoneButProbot` that takes a list of users and finds a random one, as long as it is not the Hubot.
The `sendPrRequest` method parses the JSON returned from the Slack API and then sends the members inside of the object into the `anyoneButProbot` call. It then uses the Hubot API to send a message to the room asking if that user will accept the pull request review invitation.
We build the URL to the Slack service by tacking on the Slack API token to the base Slack API URL.
As we did before, we pull out the secret and the PR URL, and then make sure they both exist.
We use the built-in HTTP client to make a GET request to the Slack API. Unless we receive an error in the response callback, we use the data provided by the Slack API to initiate the PR review request.
To test this using our cURL command, we need to modify the invocation slightly:
$ curl --data 'secret=XYZABC&url=http://pr/1' \
http://localhost:8080/pr
Our randomly selected user will see the text `username: Hey, want a PR? http://pr/1` (and the Slack client will format that link as a clickable URL).
Unfortunately, our tests are now broken: we now have the failure `TypeError: Object #<Object> has no method 'http'`. Our mocked `robot` object that we pass into our tests does not have the HTTP interface that comes with Hubot, so we should add it to our custom Robot. The method signature for the HTTP client (which comes from the `node-scoped-http-client` NodeJS package) is hairy: you chain calls together to build up an HTTP client request and end up with a function returned into which you pass a callback where you handle the response body. This module makes you write code that is not particularly testable (said another way, it was challenging for me to understand what the faked test implementation should look like), but the setup code does work and the test itself documents an interface to our robot, which is easily understandable. We simulate the same chain, defining an `http` attribute on the mocked `robot` object, an attribute that resolves to a function call itself. Calling that function returns an object that has a `get` method, and calling that function returns a function callback that when called executes that function with three parameters. In real life that function callback would contain the error code, the response object, and the JSON. In our case, as long as the error code is empty, our implementation will parse the JSON for members, and then issue the PR request:
json = '{ "members" : [ { "name" : "bar" } , { "name" : "foo" } ] }'
httpSpy = jasmine.createSpy( 'http' ).and.returnValue(
{ get: () -> ( func ) ->
func( undefined, undefined, json ) } )
beforeEach ->
robot = {
messageRoom: jasmine.createSpy( 'messageRoom' )
http: httpSpy
}
res = { send: jasmine.createSpy( 'send' ) }
Handler.setSecret secret
it "should disallow calls without the secret", (done) ->
req = {}
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).not.toHaveBeenCalled()
expect( httpSpy ).not.toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
it "should disallow calls without the url", (done) ->
req = { body: { secret: secret } }
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).not.toHaveBeenCalled()
expect( httpSpy ).not.toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
it "should allow calls with the secret", (done) ->
req = { body: { secret: secret, url: "http://pr/1" } }
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).toHaveBeenCalled()
expect( httpSpy ).toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
The code we write here was definitely not a piece of code where testing came easy; I refactored this multiple times to find a balance between an easy-to-read test and easy-to-read code. Writing test code takes effort, but when both your tests and code are readable and minimal, you generally can be sure you have a good implementation.
We now have a functional and complete implementation of the code to retrieve a list of users and assign an incoming pull request out to a randomly selected user from that list.
### The user list from the Hubot brain
Instead of using the Slack API, we can replace the code with a much simpler call to `robot.brain.users`. Calling into the Slack users API takes a callback, but the `brain.users` call does not, which simplifies our code. We do verify inside our tests that we make a call to the HTTP Jasmine spy on the `get` function, so we will want to remove that inside our tests. We will need to provide a new function called `users` to the Hubot inside the faked `brain` we created.
Unfortunately, things don't just work when we change our code to this:
...
users = robot.brain.users()
sendPrRequest( robot, users, room, url, number )
...
It is likely that what we got back from the Slack API and what Hubot stores inside its brain for users are functionally the same information, but structurally stored very differently. How can we investigate whether this assumption is correct? NodeJS has a standard library module called `util`, which includes useful utility functions, as you might expect from the name. One of them is `inspect`, which will dig into an object and create a pretty printed view. If we use this module and `console.log` we can see the full contents of a live response object passed into our `accept` function. A line like `console.log( require( 'util' ).inspect( users ) )` displays the following:
{ U04FVFE97:
{ id: 'U04FVFE97',
name: 'ben',
real_name: 'Ben Straub',
email_address: 'xxx' },
U038PNUP2:
{ id: 'U038PNUP2',
name: 'probot',
real_name: '',
email_address: undefined },
U04624M1A:
{ id: 'U04624M1A',
name: 'teddyhyde',
real_name: 'Teddy Hyde',
email_address: 'xxx' },
U030YMBJY:
{ id: 'U030YMBJY',
name: 'xrd',
real_name: 'Chris Dawson',
email_address: 'xxx' },
USLACKBOT:
{ id: 'USLACKBOT',
name: 'slackbot',
real_name: 'Slack Bot',
email_address: null } }
Ah, we were right: the Slack API returns an array while this is an associative array (called a hash in other languages). So, we need to refactor our inputs to the test to take an associative array instead of an array, and then we need a function to flatten it out (after that our code will work the same as before). We will return that when the user calls `robot.brain.users` so add a new spy as the `users` key inside our fake `robot`:
...
users = { CDAWSON: { name: "Chris Dawson" }, BSTRAUB: { name: "Ben Straub" } }
brainSpy = {
users: jasmine.createSpy( 'getUsers' ).and.returnValue( users ),
set: jasmine.createSpy( 'setBrain' ),
...
Inside our implementation code, flatten out the user associative array and find the user inside the new flattened array:
...
flattenUsers = (users) ->
rv = []
for x in Object.keys( users )
rv.push users[x]
rv
anyoneButProbot = ( users ) ->
user = undefined
flattened = flattenUsers( users )
while not user
user = flattened[ parseInt( Math.random() * \
flattened.length ) ].name
user = undefined if "probot" == user
user
...
### Sending PR data via webhook
Our wiring is almost complete, so let's actually send real pull request information. If we run our script `issue-pull-request.sh` we will see it sending data out to our Hubot. Once we have deployed to Heroku, our Hubot is listening on a public hostname. GitHub will accept the pull request and then send a JSON inside the body of a POST request made to our Hubot. This JSON looks very different from the URL-encoded parameters we provide in our cURL script, so we need to modify our code to fit.
If we retrieve the JSON from a POST, it will look something like this (reformatted for clarity and brevity):
{
"action":"opened",
"number":13,
"pull_request": {
"locked" : false,
"comments_url" :
"https://api.github.com/repos/xrd/test_repository/issues/13/comments",
"url" : "https://api.github.com/repos/xrd/test_repository/pulls/13",
"html_url" : "https://github.com/xrd/test_repository/pulls/13",
}
...
}
Most importantly, you see a URL (the `html_url` more specifically) we will use inside our Hubot message to the user. Retrieving the JSON and parsing it is trivial inside our Hubot:
...
exports.prHandler = ( robot, req, res ) ->
body = req.body
pr = JSON.parse body if body
url = pr.pull_request.html_url if pr
secret = pr.secret if pr
if secret == _SECRET and url
room = "general"
...
Here you see we pull out the body contents, process them as JSON, extract the secret and the URL from the parsed JSON, and then go through our normal routine.
Our tests are simple, and require that we send in JSON:
...
it "should disallow calls without the secret and url", (done) ->
req = {}
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).not.toHaveBeenCalled()
expect( httpSpy ).not.toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
it "should allow calls with the secret and url", (done) ->
req = { body: '{ "pull_request" : { "html_url" : "http://pr/1" },
"secret": "ABCDEF" }' }
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).toHaveBeenCalled()
expect( httpSpy ).toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
...
We are putting the secret inside the JSON as a convenience. The secret will not come in with the JSON when GitHub sends us JSON via the webhook, but this is an easy way to provide it to our handler for the moment. If we run our tests, they should pass now.
### Securing the webhook
Our Hubot is now in a position where it will operate correctly if the secret passes validation and the webhook data is passed properly. Now we need to secure the webhook. GitHub signs your data inside the webhook payload, which provides you with a way to verify the data really came from an authorized host. We need to decode it inside our handler. To do this, we will need to retrieve the secure hash GitHub provides inside the request headers. Then, we will need to calculate the hash ourselves using the secret we maintain internally. If these hashes match, then we know the incoming request and JSON is truly from GitHub and not an attacker:
...
getSecureHash = (body, secret) ->
hash = crypto.
createHmac( 'sha1', secret ).
update( "sha1=" + body ).
digest('hex')
console.log "Hash: #{hash}"
hash
exports.prHandler = ( robot, req, res ) ->
slack_users_url =
"https://slack.com/api/users.list?token=" +
process.env.HUBOT_SLACK_TOKEN
body = req.body
pr = JSON.parse body if body
url = pr.pull_request.html_url if pr
secureHash = getSecureHash( body, _SECRET ) if body
webhookProvidedHash = req.headers['HTTP_X_HUB_SIGNATURE' ] \
if req?.headers
secureCompare = require 'secure-compare'
if secureCompare( secureHash, webhookProvidedHash ) and url
room = "general"
robot.http( slack_users_url ) ->
.get() (err, response, body) ->
sendPrRequest( robot, body, \
room, url ) unless err
else
...
The signature is a _hash message authentication code_ (HMAC). Verifying an HMAC naively is vulnerable to timing attacks: the time it takes to compare the computed hash against the sent hash can be the starting point for an attacker trying to force access to a server. More specifically to JavaScript, naive comparison operators like `==` leak this timing information. To eliminate the risk that this information could be used to compromise the host system, we use a module called `secure-compare` that obscures the timing when making a comparison. To load this module, we need to add it to our _package.json_ manifest file with the command `npm install secure-compare --save`.
Now we can adjust our tests to fit the new reality of our handler:
...
it "should disallow calls without the secret and url", (done) ->
req = {}
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).not.toHaveBeenCalled()
expect( httpSpy ).not.toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
it "should allow calls with the secret and url", (done) ->
req = { body: '{ "pull_request" : { "html_url" : "http://pr/1" }}',
headers: { "HTTP_X_HUB_SIGNATURE" :
"cd970490d83c01b678fa9af55f3c7854b5d22918" } }
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).toHaveBeenCalled()
expect( httpSpy ).toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
...
You'll notice we moved the secret out of the JSON and into the headers. This is the same structure our Hubot will see when the GitHub webhook encodes the content of the JSON and provides us with a secure hash in the `HTTP_X_HUB_SIGNATURE` key. Inside our test we will need to provide the same signature inside our mocked request object. We could duplicate our secure hash generation code from the handler implementation, or we could be lazy and just run our tests once (knowing they will fail this time), watch for the `console.log` output that says "Hash: cd970490d83c..." and copy this hash into our mocked request object. Once we do this, our tests will pass.
Now, after reloading our Hubot, if we issue a pull request using our `issue-pull-request.sh` script, we should see the matching hashes. But we won't (at least if you used the same _package.json_ file as we specified earlier) because of a critical bug inside of Hubot at the time of this writing.
As we mentioned earlier, Hubot bundles Express.js, a high-performance web framework for NodeJS. Express.js has a modular architecture, where middleware is inserted into a request and response chain. This approach to building functionality and the wide array of middleware allows web developers to string together various standardized middleware components to use only those features needed for the problem at hand. Common middleware includes static file handlers (for serving static files), cookie handlers, session handlers, and body parsers. You can imagine circumstances where you would not need all of these (or you might need others) and this flexibility makes Express.js a popular choice for building NodeJS web applications.
The body parser middleware is of particular interest to us here: it converts the "body" of an HTTP request into a structured JavaScript object attached to the request object (previously you saw us access the request inside a variable we called `req` in our callback; obviously this stands for request). If the body is URL encoded (as the PR information is encoded if we create the webhook with the `content_type` set to `form`), then the body parser URL decodes the content, parses it as JSON, and then sets the inflated object to the `body` attribute on our `request` object. Normally, this is a very handy process that removes a lot of grunt work for web application authors.
Unfortunately, because the `express` object is bundled and configured for us long before our extension is loaded, we cannot interrupt the load order of the body parser middleware inside our extension, which means we cannot get access to the raw body content. The body parser middleware processes the stream of data by registering for events inside the HTTP request flow. NodeJS made a mark on web application development by providing a network application toolkit centered around one of the most controversial features of JavaScript: the asynchronous callback. In NodeJS, processes register for events and then return control to the host program. In other languages, like Ruby, for example, when building services that receive data from clients, by default, you listen for incoming data, and the moment you tell your program to listen, you have blocked other processing. Asynchronous programming is by no means a new concept (threading in many languages, for example), but NodeJS offers a simple way to interact with asynchronous functions through event registration. In the case of express middleware, however, this event registration process bites us, because middleware loaded first gets first access to incoming data, and once the body parser has processed our body content, we can no longer access the original content. We need access to the raw body content, and there is no way to install our own middleware that would provide it inside our Hubot extension when a PR request is received on the router.
What options do we have then? Well, fortunately, every bit of our stack here is open source, and we can modify the code inside Hubot that sets up our express server to fit our needs. This code is installed by the `npm` tool in the _node_modules_ directory, and we can easily find where express is configured inside of Hubot. There are issues with doing it this way: if we rerun `npm install` we will blow away our _node_modules_ directory, and this is something Heroku will do if it is not told otherwise. A better way might be to fork Hubot and store our own copy of Hubot inside of GitHub and then specify our forked copy inside of the _package.json_ file. This has issues too; if Hubot gets updated with a critical security flaw, we need to merge those changes into our fork, a maintenance issue we would avoid if we use tagged releases from the main repository. There is, unfortunately, no perfect way to resolve this problem that does not itself create other problems.
If you do choose to modify the built-in Hubot code, modify the file _robot.coffee_ inside the _node_modules/hubot/src/_ directory. The _node_modules_ directory, in case memory fails, is where the NodeJS package manager (npm) builds out the local dependency tree for libraries, and this is the file Hubot uses internally to build the `robot` object and set up the express HTTP server. If we add the following code at line 288 (this line number might vary if you are not using the same version of Hubot we specify in our _package.json_ ), we can install a custom middleware callback that will provide us with the raw body we can use when verifying the HMAC signature:
...
app.use (req, res, next) =>
res.setHeader "X-Powered-By", "hubot/#{@name}"
next()
app.use (req, res, next) =>
req.rawBody = ''
req.on 'data', (chunk) ->
req.rawBody += chunk
next()
app.use express.basicAuth user, pass if user and pass
app.use express.query()
...
Express middleware have a very simple interface: they are nothing more than a JavaScript function callback that receives a request, response, and continuation function passed as parameters. We register a listener when data content (the body) is propagated, and then add the body content to a variable on the request object. When the request object is passed in to our handler for pull requests within our Hubot, we have the raw data prefilled. The `next()` function is used to indicate to the middleware host that the next middleware can proceed.
We now need to adjust our tests to fit this new requirement. We prime the pump with a request object that has this `rawBody` inside it, and we should properly encode the content using `encodeURIComponent` to match the format in which it will be appearing from GitHub:
...
it "should allow calls with the secret and url", (done) ->
payload = '{ "pull_request" : { "html_url" : "http://pr/1" } }'
bodyPayload = "payload=#{encodeURIComponent(payload)}"
req = { rawBody: bodyPayload,
headers: { "x-hub-signature" : \
"sha1=dc827de09c5b57da3ee54dcfc8c5d09a3d3e6109" } }
Handler.prHandler( robot, req, res )
expect( robot.messageRoom ).toHaveBeenCalled()
expect( httpSpy ).toHaveBeenCalled()
expect( res.send ).toHaveBeenCalled()
done()
...
Our implementation breaks our tests, so we will need to modify the code to use the `rawBody` attribute on the request object, split apart the `payload` key/value pair, URI decode it, and then if all that works, parse the JSON and start the verification process. Our tests describe all this for us. The new `prHandler` method looks like this:
...
exports.prHandler = ( robot, req, res ) ->
rawBody = req.rawBody
body = rawBody.split( '=' ) if rawBody
payloadData = body[1] if body and body.length == 2
if payloadData
decodedJson = decodeURIComponent payloadData
pr = JSON.parse decodedJson
if pr and pr.pull_request
url = pr.pull_request.html_url
secureHash = getSecureHash( rawBody, _SECRET )
signatureKey = "x-hub-signature"
if req?.headers
webhookProvidedHash =
req.headers[ signatureKey ]
secureCompare = require 'secure-compare'
if url and secureCompare( "sha1=#{secureHash}",
webhookProvidedHash )
room = "general"
users = robot.brain.users()
sendPrRequest( robot, users, room, url )
else
console.log "Invalid secret or no URL specified"
else
console.log "No pull request in here"
res.send "OK\n"
_GITHUB = undefined
...
When all is said and done, is verifying the signature even worth it? If we are not hosting our Hubot on a service that handles our router requests over HTTPS, this HMAC verification could be compromised. And, given the issues with maintaining our own copy of the Hubot code in order to permit the validation inside our Hubot extension, it might be best to ignore the validation header. The worst case, as our extension is written now, would be that an attacker could fake a pull request notification, and falsely engage chat room users around it. If the PR the attacker used was fake, it might confuse our Hubot, but no real harm would be done. If they used an existing real PR, an attacker could trick our Hubot into adding data to the PR, adding confusion in the comments about who accepted the review request. We won't solve that potential problem with this code, but you can imagine adding code to our Hubot that handles a case like this (for example, by checking first to see if someone was already tagged on the PR, and ignoring successive incoming webhooks associated with that PR).
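As one illustration of that last idea, a hypothetical duplicate-suppression guard might look like the following plain-JavaScript sketch (an in-memory `Set` stands in for the Hubot brain, and `shouldAnnounce` is an invented name):

```javascript
// Remember which PR URLs we have already announced so a replayed
// webhook cannot trigger a second invitation.
const announced = new Set();

function shouldAnnounce(prUrl) {
  if (announced.has(prUrl)) return false; // duplicate webhook: ignore
  announced.add(prUrl);
  return true;
}

console.log(shouldAnnounce('http://pr/1')); // true: first delivery
console.log(shouldAnnounce('http://pr/1')); // false: replayed webhook
```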
### Responding to the PR request
Our Hubot is now programmed to generate a pull request review message and send it to a random user. What happens when they respond? They can respond in two ways obviously: accepting the request or declining the request. We put placeholders in our Hubot extension to notify us with a debugging message when the user responds and send a message back to whoever sent us a message, but now we can actually wire up handling the response and adding to the pull request on GitHub based on the user we are interacting with (provided they accepted).
There are multiple ways in which a Hubot can interact with chat room messages. We chose the `respond` method, but there is another method called `hear` we could have used. `respond` is used when the message is preceded by the Hubot name, so only messages that look like `probot: accept` or `@probot decline` or `/ accept` (if the Hubot name alias is enabled) will be processed by our Hubot. We could have used `hear` but in our case we are processing a simple response, and without a clear direction for the message, it would be difficult to always make sure we were interpreting the message in the correct context. `respond` makes more sense here.
If they decline the request, let's just graciously note that the offer was declined:
...
exports.decline = ( res ) ->
res.reply "No problem, we'll go through this PR in a bug scrub"
...
We are asking someone to accept a pull request, and there is a possible situation where two could come in within a very short period of time. For this reason, it probably makes sense for us to indicate the pull request identifier in the communication with the target user. And, users should be told to reply with a string like `accept 112`. The Hubot can then interpret this to mean they are accepting PR #112 and not the other pull request the Hubot invited John to respond to 10 seconds later.
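The capture itself is ordinary regular-expression work. Here is a plain-JavaScript sketch of the pattern such a `respond` handler could register (the helper name `parseAccept` is ours, not Hubot's):

```javascript
// A capture group pulls the PR number out of a reply like "accept 112".
const pattern = /accept (\d+)/i;

function parseAccept(text) {
  const match = text.match(pattern);
  return match ? match[1] : null;
}

console.log(parseAccept('accept 112')); // 112
console.log(parseAccept('decline'));    // null
```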
If we do this, our Hubot does need to save the state of pull request invitations. Fortunately, there is an extremely easy way to do this using the "brain" of our Hubot. The brain is a persistent store, typically backed by Redis, into which you can keep any type of information. You simply reference the `robot.brain` and use methods like `get` or `set` to retrieve and store information. The `set` method takes any key and any value but note that the Hubot brain does not do much with your value if that value happens to be a complex object; if you want to properly serialize something beyond a flat value, you should probably call `JSON.stringify` on the object to maintain full control over the roundtrip storing and retrieval.
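The round trip through the brain can be sketched in plain JavaScript like this (the two-method `brain` object is a stand-in for `robot.brain`, and the `112` key is an invented PR number):

```javascript
// A flat string store and a two-method wrapper stand in for robot.brain.
const store = {};
const brain = {
  set: (key, value) => { store[key] = value; },
  get: (key) => store[key],
};

// Serialize the invitation so the whole object survives the round trip.
const invitation = { url: 'http://pr/112', user: 'xrd' };
brain.set('112', JSON.stringify(invitation));

const restored = JSON.parse(brain.get('112'));
console.log(restored.user); // xrd
```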
Let's modify our Hubot handler to deal with accepting or declining responses (and change our extension file to deal with this new interface). Of course, we will need to add to our tests. Finally, we will need to set up a way to provide the GitHub API key to our Hubot handler, so we'll add a method to do that that looks almost exactly like the one for setting our secret key.
We'll use a GitHub API NodeJS module called `node-github`, found on GitHub at _https://github.com/mikedeboer/node-github_. If we look at the API documentation, we see that it supports authentication using an OAuth token (using the `github.authenticate( { 'type' : 'oauth', 'token' : '...' } )` syntax), and has methods we can use to add a comment to an issue or pull request associated with a repository (using the `github.issues.createComment` method).
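To see how those two calls fit together before we wire them into the handler, here is a plain-JavaScript sketch that exercises the same interface against a hand-rolled stub rather than the real `node-github` binding (the stub and its logging are our invention; the shapes of `authenticate` and `issues.createComment` follow the binding's documentation as described above):

```javascript
// A hand-rolled stub matching the documented interface; in real code
// this would be the authenticated node-github binding.
function makeGithubStub(log) {
  return {
    authenticate: (opts) => log.push('authenticate:' + opts.type),
    issues: {
      createComment: (msg, cb) => {
        log.push('createComment:' + msg.number);
        cb(null, { id: 1 }); // pretend GitHub accepted the comment
      },
    },
  };
}

const log = [];
const github = makeGithubStub(log);
github.authenticate({ type: 'oauth', token: '12345ABCDEF' });
github.issues.createComment(
  { user: 'xrd', repo: 'test_repository', number: '13',
    body: '@xrd will review this.' },
  (err, data) => { if (!err) log.push('replied'); }
);
console.log(log.join(','));
```

Swapping the stub for the real binding changes none of the calling code, which is exactly why our tests pass a mocked binding into `setApiToken`.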
Knowing that this module handles most of the work for us between these two methods, we can start by writing our tests. We'll create a new describe block called `#response` that groups our tests together. As we noted earlier, our Hubot can take affirmative and negative responses, so our tests should reflect these two code paths. Our setup block (the `beforeEach` section) in both cases should do the same thing for each response: make the pull request invitation to a random user. This all happens inside our `prHandler` code, and we don't need to verify the expectations of this method since prior tests already cover it. After we get our handler to the right state, we need to test that the handler works correctly with an `accept` and `decline` method (they don't yet exist in our handler code so we'll add them next).
Our accept request handler triggers our Hubot to contact GitHub and add a comment to the pull request noting our targeted chat user accepted the request. The network connection to the GitHub API uses the GitHub API bindings from within the `node-github` module. We want to make this testable, so we should pass in the GitHub binding object inside our interface, and during the test, pass in a mocked object. If we review the documentation for the `createComment` in the GitHub API binding, we see it requires information about the repository such as the user or organization that owns the repository, the repository name, the issue number (pull requests are also referenced by issue numbers), and the comment itself. To get this information we simply need to decode it from the Hubot handler that receives the pull request information, and we will add code that does this (and is exposed in our handler for testing). We saw that a pull request comes in through a large JSON response, and we can use the URL we used earlier as the way we decode this information. So, we'll need to have two more tests inside our `#response` block, one for the decoding of the URL into a message object, and another to retrieve the username we insert into the comment stored in the pull request on the repository. We know what our test URL looks like since we saw it in our PR webhook message, but we don't yet have the structure of the chat message from which we can pull out our username, so our test will need to be adjusted when we know what it really looks like.
Declining the request means nothing happens. If we mock out our GitHub API binding, acceptance should log in (using the `authenticate` method) and then call `createComment`. These are directly pulled from the GitHub API NodeJS documentation. Finally, we should record the result of this operation inside the chat room, which happens using the reply method on our response object:
...
describe "#response", ->
createComment = jasmine.createSpy( 'createComment' ).and.
callFake( ( msg, cb ) -> cb( false, "some data" ) )
issues = { createComment: createComment }
authenticate = jasmine.createSpy( 'ghAuthenticate' )
responder = { reply: jasmine.createSpy( 'reply' ),
send: jasmine.createSpy( 'send' ) }
beforeEach ->
githubBinding = { authenticate: authenticate, \
issues: issues }
github = Handler.setApiToken( githubBinding, \
"ABCDEF" )
req = { body: '{ "pull_request" : \
{ url : "http://pr/1" } }', \
headers: { "HTTP_X_HUB_SIGNATURE" : \
"cd970490d83c01b678fa9af55f3c7854b5d22918" } }
Handler.prHandler( robot, req, responder )
it "should tag the PR on GitHub if the user accepts", (done) ->
Handler.accept( responder )
expect( authenticate ).toHaveBeenCalled()
expect( createComment ).toHaveBeenCalled()
expect( responder.reply ).toHaveBeenCalled()
done()
it "should not tag the PR on GitHub if the user declines", \
(done) ->
Handler.decline( responder )
expect( authenticate ).toHaveBeenCalled()
expect( createComment ).not.toHaveBeenCalledWith()
expect( responder.reply ).toHaveBeenCalled()
done()
it "should decode the URL into a proper message object " + \
"for the createMessage call", (done) ->
url = "https://github.com/xrd/testing_repository/pull/1"
msg = Handler.decodePullRequest( url )
expect( msg.user ).toEqual( "xrd" )
expect( msg.repository ).toEqual( "testing_repository" )
expect( msg.number ).toEqual( "1" )
done()
it "should get the username from the response object", (done) ->
res = { username: { name: "Chris Dawson" } }
expect( Handler.getUsernameFromResponse( res ) ).toEqual \
"Chris Dawson"
done()
Note that the indentation of this code was flattened to save space, but yours will be nested in several deeper levels of indentation. Refer to the sample repository for the exact code if there is confusion.
Our tests will fail if we run them now. So, let's write the code at the end of our delegator extension. We need code that parses the URL into the appropriate structured message object, code to put the reminder into the pull request comment on GitHub, and code that pulls the user out of the response object passed to us. The first two of these are within reach; basic JavaScript and reading the GitHub API binding documentation will get us to these two. The third one requires a little more investigation, so we will leave this as a placeholder for now.
To convert the URL into the object necessary for the `createComment` call, we just need to split the message into pieces by the slash character, and then retrieve the correct items by index. We probably could add some additional tests that cover passing in empty strings, or other edge cases, but we'll leave that as an exercise for the reader. Our code does not crash in these cases, but it would be nice to have coverage of our expectations represented in our tests:
...
_GITHUB = undefined
_PR_URL = undefined
exports.decodePullRequest = (url) ->
rv = {}
if url
chunks = url.split "/"
if chunks.length == 7
rv.user = chunks[3]
rv.repository = chunks[4]
rv.number = chunks[6]
rv
exports.getUsernameFromResponse = ( res ) ->
"username"
exports.accept = ( res ) ->
msg = exports.decodePullRequest( _PR_URL )
username = exports.getUsernameFromResponse( res )
msg.body = "@#{username} will review this (via Probot)."
_GITHUB.issues.createComment msg, ( err, data ) ->
unless err
res.reply "Thanks, I've noted that in a PR comment!"
else
res.reply "Something went wrong, " + \
"I could not tag you on the PR comment."
exports.decline = ( res ) ->
res.reply "OK, I'll find someone else."
console.log "Declined!"
exports.setApiToken = (github, token) ->
_API_TOKEN = token
_GITHUB = github
_GITHUB.authenticate type: "oauth", token: token
exports.setSecret = (secret) ->
_SECRET = secret
To summarize, we added an internal variable called `_GITHUB` where we will store a reference to our instantiation of the GitHub API binding. Our interface to the `setApiToken` call passes in the instantiation; this method takes our OAuth token and the binding because using an interface like this means we can pass in a mocked binding inside our tests. When we are not running inside a test, this method call authenticates against the GitHub API, readying the API binding to make connections to the GitHub API itself.
Our top-level extension script looks like this now:
handler = require '../lib/handler'
handler.setSecret "XYZABC"
github = require 'node-github'
handler.setApiToken github, "12345ABCDEF"
module.exports = (robot) ->
robot.respond /accept/i, ( res ) ->
handler.accept( res )
robot.respond /decline/i, ( res ) ->
handler.decline( res )
robot.router.post '/pr', ( req, res ) ->
handler.prHandler( robot, req, res )
If you were to look only at this code, the interface is clean, and the bulk of the work is handled by our very testable handler.
### Peering into the response object
We need to get the username, and it stands to reason that the object passed to us when we get a respond callback might have it in there. The `respond` method provided by the Hubot API is documented mostly by way of the example scripts that come with Hubot. There is very little information on what the parameter passed to your callback looks like. Let's use the `util` library to inspect the data and print it to the console. We abbreviate the full output here, and show you that it contains information on the user who sent the message to our Hubot. We can access this information by using `response.message.user.name` if, for example, we wanted to retrieve the name of the user:
{ robot:
{ name: 'probot',
brain:
{ data: [Object],
...
message:
{ user:
{ id: '...',
name: 'xrd',
real_name: 'Chris Dawson',
email: 'chrisdawson@example.com'
...
text: 'probot accept',
rawText: 'accept',
rawMessage:
{ _client: [Object],
...
match: [ 'probot accept', index: 0, input: 'probot accept' ],
...
}
Inside it all we can find information we need, specifically the username and email. So, let's update our test and our handler code. The last test in our spec file can be modified to look like this:
...
it "should get the username from the response object", (done) ->
res = { message: { user: { name: "Chris Dawson" } } }
expect( Handler.getUsernameFromResponse( res ) ).toEqual "Chris Dawson"
done()
...
And, our handler code defining `getUsernameFromResponse` simply turns into this:
...
exports.getUsernameFromResponse = ( res ) ->
res.message.user.name
...
With this information in hand, we can properly comment on the pull request. Well, almost.
### Unifying usernames via the Collaborators API
If the Slack username for the person who accepted the pull request is an exact match with their GitHub username, then we can assume they are the same person in real life and create a comment inside the pull request reminding them (and anyone else) that they will be reviewing the PR. We can use the collaborator subsection of the Repository API to look up their name on GitHub.
If we don't find them inside the list of users and there is not an exact match with their Slack name then we have at least one problem, maybe two. First, we could just have a mismatch in their identities (their usernames are different on each site). If this is the case, we could ask them to clarify this inside the Slack room. We do have another case: the user is not a collaborator on the repository hosted on GitHub. If this is the case, clarifying their username is not going to help. The Repository API does support adding a user to the list of collaborators so we could do that here, but this arguably is a moment where a larger discussion should happen (write access to a repository is a big responsibility in a way that being inside a chat room is not). Adding a user as a repository collaborator should not be automated inside a chat room. Because of the complexity here, we will write code to unify a username inside the chat room, but we won't handle the case where there is no clarification to be made because they are not in the repository collaborator list.
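The decision logic just described can be sketched in a few lines of plain JavaScript (the function name and return values are invented for this sketch; a stubbed collaborator list stands in for the Repository API lookup):

```javascript
// Decide what to do with an accepted review based on whether the
// chat username matches a repository collaborator exactly.
function reviewDecision(slackName, collaborators) {
  return collaborators.includes(slackName) ? 'tag-pr' : 'ask-to-clarify';
}

console.log(reviewDecision('xrd', ['xrd', 'ben']));     // tag-pr
console.log(reviewDecision('cdawson', ['xrd', 'ben'])); // ask-to-clarify
```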
Using the GitHub API binding we passed into our `setApiToken` call we will verify the user exists as a collaborator on the repository. The API binding provides a method called `getCollaborator` inside the `repos` namespace we can use to verify that a username is on the list of collaborators. It takes as the first parameter a message that is used to specify the repository and owner, and then an attribute called `collabuser`, which is the name you want to ensure is a collaborator. The second parameter to the function is a callback that is executed once the request has completed. If the callback returns without an error code, then our Hubot should tag the pull request with a comment confirming and message the room.
Our new test reflects usage of the `repos.getCollaborator` call. In our test setup block we mock out the call to `getCollaborator` and use Jasmine to "spy on" it so we can assure it was called later in our actual test. Our setup is more beefy than before, but we are following the same patterns of generating spies to watch methods, and implementing our fake callbacks when necessary. We can also move our message inside the response object into the one created in our setup block so that we can use it inside all of our subtests, rather than creating a new object for each test inside the test body:
    ...
          send: jasmine.createSpy( 'send' ),
          message: { user: { name: "Chris Dawson" } } }
        getCollaborator = jasmine.createSpy( 'getCollaborator' ).and.
          callFake( ( msg, cb ) -> cb( false, true ) )
        repos = { getCollaborator: getCollaborator }
    ...
    it "should tag the PR on GitHub if the user accepts", (done) ->
      Handler.accept( robot, responder )
      expect( authenticate ).toHaveBeenCalled()
      expect( createComment ).toHaveBeenCalled()
      expect( responder.reply ).toHaveBeenCalled()
      expect( repos.getCollaborator ).toHaveBeenCalled()
      done()
Our handler can then implement the `accept` and `decline` methods in full:
    ...
    exports.accept = ( robot, res ) ->
      prNumber = res.match[1]
      url = robot.brain.get( prNumber )
      msg = exports.decodePullRequest( url )
      username = exports.getUsernameFromResponse( res )
      msg.collabuser = username
      _GITHUB.repos.getCollaborator msg, ( err, collaborator ) ->
        # Only proceed when the user is a confirmed collaborator
        unless err
          msg.body = "@#{username} will review this (via Probot)."
          _GITHUB.issues.createComment msg, ( err, data ) ->
            unless err
              res.reply "Thanks, I've noted that " + \
                        "in a PR comment. " + \
                        "Review the PR here: #{url}"
            else
              res.reply "Something went wrong. " + \
                        "I could not tag you " + \
                        "on the PR comment: " + \
                        "#{require('util').inspect( err )}"

    exports.decline = ( res ) ->
      res.reply "No problem, we'll go through this PR in a bug scrub"
    ...
We now have a full implementation of both the `accept` and `decline` methods inside our Hubot.
### Sanitizing our source code
It is typically bad form to save passwords (or other access credentials, like OAuth tokens or secrets) inside of source code. Right now we have hardcoded them into our application inside of the _pr-delegator.coffee_ file. We could instead retrieve them from the environment of the running process:
...
handler.setSecret process.env.PROBOT_SECRET
github = require 'github'
ginst = new github version: '3.0.0'
handler.setApiToken ginst, process.env.PROBOT_API_TOKEN
...
When we launch our Hubot from the command line, we will need to use a command like this as we are testing locally from our laptop:
$ PROBOT_SECRET=XYZABC \
PROBOT_API_TOKEN=926a701550d4dfae93250dbdc068cce887531 \
HUBOT_SLACK_TOKEN=xoxb-3295776784-nZxl1H3nyLsVcgdD29r1PZCq \
./bin/hubot -a slack
When we publish into Heroku, we will want to set these as environment variables using the appropriate Heroku commands:
$ heroku config:set PROBOT_API_TOKEN=926a701550d4dfae93250dbdc068cce887531
Adding config vars and restarting myapp... done, v12
PROBOT_API_TOKEN=926a701550d4dfae93250dbdc068cce887531
$ heroku config:set PROBOT_SECRET=XYZABC
Adding config vars and restarting myapp... done, v12
PROBOT_SECRET=XYZABC
Don't forget that when we run our tests, we will need to specify the environment variables on the command line as well:
$ PROBOT_SECRET=XYZABC \
PROBOT_API_TOKEN=926a701550d4dfae93250dbdc068cce887531 \
node_modules/jasmine-node/bin/jasmine-node --coffee \
spec/pr-delegator.spec.coffee
# Summary
Our Hubot is alive! We went through building a robot that can interact with us inside a chat room, then refactored the robot so that its functionality is contained in a highly testable module. Along the way, we got intimate with the Hubot API, and even discussed how to modify the source code to Hubot itself (and the drawbacks of doing so). Finally, we demonstrated how to use the Activity API, receiving (and faking) data coming from a GitHub webhook.
In the next chapter we will look at building a single-page application that edits information inside a GitHub repository using JavaScript and the GitHub.js library talking to the Pull Request API.
# Chapter 9. JavaScript and the Git Data API
Applications utilizing the GitHub API will typically reside inside a server. You are not limited, however, to accessing the API from within server-side programming languages exclusively. The GitHub API works perfectly well from within a web browser context as well, and the UI to your application comes for free if you know a little HTML. In this chapter we discuss how to use the unofficial JavaScript client library to access the GitHub API and build a single-page application (SPA), which we host entirely on GitHub.
The main weakness of JavaScript has always been testability. Mainly due to the asynchronous nature of JavaScript, writing tests has never been easy; polling for changes when a callback returns was until recently the best way to test nonlinear code. But recent toolkits like AngularJS and promise-based libraries have made testing not only easy, but elegant as well. Building applications on top of third-party services makes testing even more important than it already was, and we'll make sure to add testing to our application to verify the functionality works as we expect.
JavaScript should be generally accessible to most people who know other imperative programming languages. There is one feature, however, that can be challenging: the callback function. In JavaScript, functions are first-class objects, meaning they can be passed as arguments to other functions and stored as the value of a variable. You will find callbacks everywhere in JavaScript programming. Callbacks make debugging and understanding JavaScript code more challenging at times. As we stated earlier, writing code that includes tests makes understanding the entire picture easier, and we will do that in this chapter to further explain sections where necessary function callbacks may initially look confusing.
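To make this concrete, here is a tiny standalone example (the function names are invented purely for illustration) showing a function being stored in a variable and passed as an argument:

```javascript
// Callbacks are just function values: they can be stored in
// variables and passed as arguments like any other object.
function withEach(items, callback) {
  // Invoke the supplied function once per item.
  for (var i = 0; i < items.length; i++) {
    callback(items[i], i);
  }
}

var collected = [];
var collect = function (item) {
  collected.push(item.toUpperCase());
};

// The function itself is the argument, not its return value.
withEach(["err", "repo"], collect);
// collected is now ["ERR", "REPO"]
```

Every GitHub.js call in this chapter follows this shape: you hand the library a function, and the library calls it back once the network request completes.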
# Building a Coffee Shop Database on GitHub
Like many software developers, I suffer from an almost disturbing obsession with coffee. Perhaps it is really my family that suffers: when we travel to a new city, I drag my wife and children through questionable neighborhoods just to find the perfect brew and complementary gluten-free desserts.
Google Maps is a great help on these quests, in that it will find me a coffee shop and reviews, but the granularity of information about that coffee shop is often poor and limited in scope. Do they offer rice milk as a dairy-free alternative? What special details should I know when considering a place? Many guidance and mapping applications exist, but if they don't fit my own personalized informational niche, I might miss a unique experience. With such a pressing and dire problem in front of us, let's use the GitHub API to solve it.
We'll build a coffee shop single-page web app that allows anyone to add information on coffee shops, information that is flexible and dynamic, and search and filter through that information about a coffee shop. All files, such as the HTML, images, and JavaScript will be hosted on GitHub. And we'll be using the GitHub API to allow contributors to add data to our database, a database we will also host on GitHub. And as GitHub developers write code with tests, we will write tests to validate our JavaScript code as well as the expectations we have of the GitHub API.
More specifically, we'll use these technologies:
* An (unofficial) GitHub API JavaScript library
* AngularJS, a "superpowered framework" for writing JS applications that are testable
* Bootstrap, a CSS library that simplifies building beautiful webapps
You don't need to know these technologies in advance of working on this chapter.
# Set Up
To create our app, let's first create our main web page and push it into our repository:
    $ mkdir coffeete.ch
    $ cd coffeete.ch
    $ git init
    $ git checkout -b gh-pages
    $ printf "<html>\n<body>Hello from CoffeeTe.ch</body>\n</html>\n" > index.html
    $ git add index.html
    $ git commit -m "Add starting point index.html"
    $ git config push.default current
Notice that we created a new repository, then created and checked out the `gh-pages` branch, where we'll do all our work. The `git config` command sets Git's default push behavior so that a plain `git push` pushes the branch we are currently on (`gh-pages`), instead of requiring the longer `git push origin gh-pages`.
## Mapping Hostnames
Once we publish these files into GitHub inside a repository we can connect the repository to a real hostname. There are two steps to take to do this:
* Add a CNAME file that tells GitHub under which server name this service should resolve.
* Set up DNS records so that the hostname maps to the correct IP address at GitHub.
Imagine you have the hostname _myspecialhostname.com_. If you map this repository to a subdomain called _coffeetech_ , then you would do something like this:
$ echo 'coffeetech.myspecialhostname.com' > CNAME
$ git commit -m "Added CNAME mapping" -a
$ git push
Remember that you need to wait about 10 minutes before GitHub regenerates its database to establish the connection between your `gh-pages` site and the mapping on its frontend servers. This delay applies only the first time you connect a repository to a hostname; subsequent changes appear almost instantaneously.
Generally it takes anywhere from several hours to a few days to propagate DNS settings out into the wild, so make sure you choose and set up a hostname far in advance if your site has to be live by a certain point.
Now we can install the libraries needed for this application.
## Adding the Support Libraries
As we mentioned, we will use the GitHub.js library, AngularJS, and Bootstrap. Let's add those to our project now. Using whatever editor you prefer, edit the _index.html_ file to look like this:
    <html>
      <head>
        <title>CoffeeTe.ch</title>
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <link rel="stylesheet" type="text/css" href="bootstrap.min.css"></link>
      </head>
      <body ng-app>
        <div class="container">
          {{'Welcome to Coffeete.ch'}}
        </div>
        <script src="angular.js"></script>
        <script src="github.js"></script>
      </body>
    </html>
I am assuming you have a firm grasp on most HTML concepts, but a few of the advanced topics are included here:
The `meta` tag makes our page work well with mobile browsers and enables the responsive features of Bootstrap.
The `ng-app` attribute in the body tag tells AngularJS to initialize and compile our page from the body tag downward.
The `{{ }}` (double brackets) are an AngularJS two-way data binding directive. You'll see two-way data binding in action very soon if it is not already familiar. Adding this code here sanity checks whether AngularJS is working for us; if we see "Welcome to Coffeete.ch" without the braces then we know AngularJS is loading and working properly. If we see the braces, then there is some error in our setup to resolve. Two-way data binding solves a significant pain point when building JS apps: marshalling data back and forth between network events, into HTML and out of HTML forms. AngularJS does all this heavy lifting for you. In a moment we'll show how to use two-way data binding as it was intended by defining a variable on the AngularJS scope. We then access the variable using the same `{{ }}` data binding directives.
Then, download the necessary files locally using these commands. We include AngularJS, GitHub.js, and Bootstrap CSS:
$ wget https://ajax.googleapis.com/ajax/libs/angularjs/1.2.10/angular.js
$ wget https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css
$ wget https://github.com/michael/github/raw/master/github.js
Now we are ready to use the GitHub library inside our SPA.
# An AngularJS Application Using GitHub.js
Now let's implement a _coffeetech.js_ file, which is where we will build our single-page application functionality. Create a new file called _coffeetech.js_ in the root of your repository:
    var mod = angular.module( 'coffeetech', [] );
    mod.controller( 'GithubCtrl', function( $scope ) {
      var github = new Github( {} );
      var repo = github.getRepo( "gollum", "gollum" );
      repo.show( function( err, repo ) {
        $scope.repo = repo;
        $scope.$apply();
      });
    })
Define a module named "coffeetech" and save a reference to it; we use this reference next to define a controller, a smaller bundle of functions. Modules are an AngularJS feature for grouping related functionality, and we will keep all our code for this application inside this module.
We define a controller called `GithubCtrl` that bundles up functions and data. When we use the controller syntax, we name the controller, and then define a function with at least a single parameter: the `scope` object. I think of `scope` as the "world" available to the controller. The controller knows only of data and functions defined on its `scope`, and AngularJS does its magic as long as your functions or variables are defined on the `scope`.
We create a new `Github()` object using the constructor. This constructor can take user credentials, but for now, we can just create it without those since we are accessing a public repository.
Once we have our `github` object, we call the method `getRepo()` with an owner and a name. This returns our repository object.
To actually load the data for this repository object, we call the `show` method and pass it a callback that uses the two parameters `err` and `repo` to handle errors or otherwise provide us with details of the repository specified. In this case we are using the Gollum wiki public repository to display some sample data.
Once we have loaded the repository data, we need to call `$apply` to tell AngularJS a change has occurred to data stored within the scope variable. As we mentioned before, AngularJS knows only about functions and data defined on its `scope`. The `show` function is defined on the GitHub object, and any changes are not tracked by AngularJS, so we need to use `$apply()`.
GitHub.js handles making the proper request to GitHub for us, and AngularJS handles putting the results into our web page. To modify our HTML to use this data, we change _index.html_ to look like the following:
    <html>
      <head>
        <title>CoffeeTe.ch</title>
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <link rel="stylesheet" type="text/css" href="bootstrap.min.css"></link>
      </head>
      <body ng-app="coffeetech">
        <div class="container" ng-controller="GithubCtrl">
          {{ repo }}
        </div>
        <script src="angular.js"></script>
        <script src="github.js"></script>
        <script src="coffeetech.js"></script>
      </body>
    </html>
Change the `ng-app` reference to use the module we defined in our _coffeetech.js_ file.
Remove our data binding to the `Welcome to CoffeeTech` string and replace it with a binding to the variable `repo` (by default AngularJS will filter complex objects and convert them to JSON).
Add a reference to our _coffeetech.js_ file beneath our other JS references.
If you load this up in your browser, you will see something like Figure 9-1.
###### Figure 9-1. The whole messy JSON
That is a lot of data. AngularJS's JSON filter pretty-printed it for us, but this is a bit too much. Let's change the HTML to reduce some noise:
    <html>
      <head>
        <title>CoffeeTe.ch</title>
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <link rel="stylesheet" type="text/css" href="bootstrap.min.css"></link>
      </head>
      <body ng-app="coffeetech">
        <div class="container" ng-controller="GithubCtrl">
          <div>Subscriber count: {{ repo.subscribers_count }}</div>
          <div>Network count: {{ repo.network_count }}</div>
        </div>
        <script src="angular.js"></script>
        <script src="github.js"></script>
        <script src="coffeetech.js"></script>
      </body>
    </html>
We can filter this information by modifying the HTML to show just a few vital pieces of information from the repository JSON. Let's display the `subscribers_count` and the `network_count`. Now we see something more palatable (Figure 9-2).
###### Figure 9-2. Pulling out what we want
We've just extracted the subscriber and network count from the Gollum repository hosted on GitHub using the GitHub API and placed it into our single-page app.
## Visualize Application Data Structure
We are going to be building a coffee shop database. We want to use Git as our datastore, but Git and its associated tools (either command-line tools or GitHub) don't offer the same features as a standard relational database. So, we need to think and plan how we will structure our data inside our repository to make it easily searchable.
This application allows us to search coffee shops, which will be, for the most part, in larger cities. If we store the data as one JSON file named after each city, we can then either use geolocation on the client side to choose the right file, or ask the user to select their city manually.
If we look at the GitHub.js JavaScript documentation on GitHub, we can see that there are some options for pulling content from a repository. We'll store a JSON data file named after each city inside our repository and retrieve it from there. It looks like the calls we need to use are `github.getRepo( username, reponame )` and, once we have retrieved the repository, `repo.read( branch, path, callback )`.
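To make the plan concrete, here is a sketch of the two kinds of files we will store on the `gh-pages` branch. The coordinates and shop name here are placeholders for illustration; the field names match the fixtures we create later in this chapter:

```javascript
// cities.json: an index of supported cities with rough coordinates.
var citiesJson = JSON.stringify([
  { "name": "portland", "latitude": 45.5231, "longitude": -122.6765 }
]);

// portland.json: one entry per coffee shop in that city.
var portlandJson = JSON.stringify([
  { "name": "Example Coffee Shop",
    "latitude": 45.52292, "longitude": -122.643074 }
]);

// The client reads cities.json first, picks a nearby city, and
// then fetches "<city>.json" to get the shops for that city.
var cities = JSON.parse(citiesJson);
var shops = JSON.parse(portlandJson);
```

Because each city lives in its own file, the client only ever downloads the data it actually needs.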
Now that we have a barebones application let's pause and make sure we are building something we can refactor and maintain long term. This means adding tests to our project.
## Making Our App Testable
Testing not only builds better code by making us think clearly about how our code will be used from the outside, but makes it easier for an outsider (meaning other team members) to use our code. Testing facilitates "social coding."
We'll use a JavaScript testing tool called "Karma." Karma simplifies writing JavaScript unit tests. We need to first install the tool, then write a test or two. Karma can easily be installed using `npm` (installation of which is documented in Appendix B):
$ npm install karma -g
    $ wget https://ajax.googleapis.com/ajax/libs/angularjs/1.2.10/angular-mocks.js
The _angular-mocks.js_ file makes it easy to mock out Angular dependencies in our tests.
Then, create a file called _karma.conf.js_ and enter the following contents:
    module.exports = function(config) {
      config.set({
        basePath: '',
        frameworks: ['jasmine'],
        files: [
          'angular.js',
          'fixtures-*.js',
          'angular-mocks.js',
          'firebase-mock.js',
          'github.js',
          '*.js'
        ],
        reporters: ['progress'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['Chrome'],
        captureTimeout: 60000,
        singleRun: false
      });
    };
This is more or less a default Karma configuration file.
The `files` section specifies the load order of our JavaScript implementations and test scripts. You can see a few of the files we've added specified directly, plus wildcards to cover the remaining files.
Note also that we've specified Chrome as our test browser (so you should have it installed), which is a safe bet because it works on just about any desktop platform you might be running. Know that you can always choose Safari or Firefox if you want Karma to test inside those as well. Karma will start a new instance of each browser specified and run your tests inside a test harness in those browsers.
To write the test, let's clarify what we want our code to do:
* When a user first visits the application, we should use the geolocation features of their browser to determine their location.
* Pull a file from our repository that contains general latitude and longitude locations of different cities.
* Iterate over the list of cities and see if we are within 25 kilometers of any of the cities. If so, set the current city to the first match.
* If we found a city, load the JSON data file from GitHub.
Concretely, let's assert that we load the list of cities and have two of them, then load a matching city named "portland," a city that has three shops available.
We'll use an `ng-init` directive, which is the mechanism to tell AngularJS we want to call the function specified when the controller has finished loading. We'll call this function `init` so let's test it.
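Wired into our markup, the directive would look something like this (a sketch of the relevant attribute only; `init()` is the scope function we are about to test):

```
<div class="container" ng-controller="GithubCtrl" ng-init="init()">
  ...
</div>
```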
First, we will write the setup code for an AngularJS test written using the Jasmine test framework. Jasmine is a "behavior-driven JavaScript" library that provides functions to group and create expectation-based tests. Within the Jasmine framework are "matchers" that allow for the most common assertions (comparing a variety of expected types to the resultant types from function calls) and the ability to define your own custom matchers. Jasmine also gives you the ability to "spy" on functions, which is another way of saying Jasmine can intercept function calls to validate that those calls were made in the way you anticipate. It is easiest to explain the power of Jasmine by showing the elegance of the tests themselves, so let's do that now:
    describe( "GithubCtrl", function() {
      var scope = undefined;
      var ctrl = undefined;
      var gh = undefined;
      var repo = undefined;
      var geo = undefined;

      beforeEach( module( "coffeetech" ) );

      beforeEach( inject( function( $controller, $rootScope ) {
        generateMockGeolocationSupport();
        generateMockRepositorySupport();
        scope = $rootScope.$new();
        ctrl = $controller( "GithubCtrl",
                            { $scope: scope, Github: gh, Geo: geo } );
      } ) );
    ...
...
We declare our variables at the top of the function. If we did not do this, JavaScript would silently define them inside the functions the first time the variable is used. Then our variables would be different inside our setup code and the actual tests.
We load our `coffeetech` module into our tests using the `module` method inside a `beforeEach` call, code that is executed before our tests run.
`inject` is the AngularJS way to provide our before functions with the `$controller` and `$rootScope` objects, which we use to set up our tests.
We will be creating two functions that generate the mock objects required for our tests. We'll discuss these two functions in a bit.
`scope` is the AngularJS convention for the object into which all functionality and state is stored. We create a new `scope` using the AngularJS utility function `$rootScope.$new()` and store a reference to this `scope` so we can test functionality we've implemented in our actual code.
We pass in the mocked objects (created by the mocked function calls) as well as the `scope` object and instantiate a controller object. This controller uses the `scope` to define functions and data, and since we have a reference to it, we can call those functions and inspect that data and assert our implementation is correct.
Now, let's write an actual test:
      describe( "#init", function() {
        it( "should initialize, grabbing current city", function() {
          scope.init();
          expect( geo.getCurrentPosition ).toHaveBeenCalled();
          expect( gh.getRepo ).toHaveBeenCalled();
          expect( repo.read ).toHaveBeenCalled();
          expect( scope.cities.length ).toEqual( 2 );
          expect( scope.city.name ).toEqual( "portland" );
          expect( scope.shops.length ).toEqual( 3 );
        });
      });
    });
Describe functions are used to group tests defined inside `it` functions. Since we are testing the `init` function, it seems logical to use an identifier called `#init`.
`describe` blocks group tests while `it` blocks actually specify code that is run as a test.
Our controller code begins with an `init` call, so we mimic that inside our test to set up the controller state.
We assert that our code uses the various interfaces we defined on our injected objects: `getCurrentPosition` on the `geo` object, and `read` on the repository object.
Then we assert that the data is properly loaded. Our test verifies that there are two cities, that a default city has been loaded and the name of the default city is equal to the string `"portland"`. In addition, the test verifies there are three shops loaded for the default city. Behind the scenes in our implementation we will load these via JSON, but all we care about is that the interface and data matches our expectations.
This syntax can initially look confusing if you have never written Jasmine tests for JavaScript, but it actually solves a lot of problems in an elegant way. Most importantly, Jasmine provides a `spyOn` function that intercepts calls to a function and then allows you to assert that the function was called. Any place in our tests you see `toHaveBeenCalled()` is an assertion, provided by `spyOn`, proving that a call was made.
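If the spy machinery feels magical, note that a spy is conceptually just a wrapper function that records its calls. The following hand-rolled sketch (not Jasmine's actual implementation) shows the idea:

```javascript
// A minimal, hand-rolled "spy": wrap a function and record each
// invocation so a test can later assert that it was called.
function makeSpy(fn) {
  var spy = function () {
    spy.calls.push(Array.prototype.slice.call(arguments));
    return fn.apply(this, arguments);
  };
  spy.calls = [];
  spy.wasCalled = function () { return spy.calls.length > 0; };
  return spy;
}

var geo = {
  getCurrentPosition: function (success) { success({ coords: {} }); }
};
geo.getCurrentPosition = makeSpy(geo.getCurrentPosition);

geo.getCurrentPosition(function (position) { /* use position */ });
// geo.getCurrentPosition.wasCalled() now returns true, and the
// original implementation still ran ("call through" behavior).
```

Jasmine's `spyOn( obj, "name" )` does essentially this replacement for you, plus bookkeeping such as restoring the original function between tests.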
Now we can implement the two mocking functions vital for the test. Put them in between the `beforeEach( module( "coffeetech" ) )` line and the `beforeEach( inject( ... ) )` functions to provide proper visibility to Karma:
    ...
    beforeEach( module( "coffeetech" ) );

    function generateMockGeolocationSupport( lat, lng ) {
      var response = ( lat && lng ) ?
        { coords: { lat: lat, lng: lng } } :
        { coords: CITIES[0] };
      geo = { getCurrentPosition: function( success, failure ) {
        success( response );
      } };
      spyOn( geo, "getCurrentPosition" ).andCallThrough();
    }

    function generateMockRepositorySupport() {
      repo = { read: function( branch, filename, cb ) {
        cb( undefined,
            JSON.stringify( filename == "cities.json" ?
                            CITIES : PORTLAND ) );
      } };
      spyOn( repo, "read" ).andCallThrough();
      gh = new Github({});
      spyOn( gh, "getRepo" ).andCallFake( function() {
        return repo;
      } );
    }

    beforeEach( inject( function( $controller, $rootScope ) {
    ...
We first implement the `generateMockGeolocationSupport` function.
Mock location involves creating a `geo` object that has a single function `getCurrentPosition`, which is a function that calls back into a success callback function provided. This exactly matches the native browser support for Geolocation, which has the same function defined.
We then `spyOn` the function so we can assert that it was called in our actual tests.
Next, we implement `generateMockRepositorySupport`.
Again, we implement a mock object: this one to provide a method called `read`. This function matches the function of the same name contained in the API provided by the JavaScript GitHub.js library. Just like in the previous mock, we `spyOn` the function so we can validate it was called. However, this is not the "top-level" repository object—this is the object returned from the call to `getRepo`. We will take this mock object and return it from the `getRepo` call.
We spy on the `getRepo` call, and then return our next mock object, the repository object. This object is used to retrieve the actual information using the `read` call.
Now that we have a set of tests, run the test suite from the command line and watch them fail:
$ karma start karma.conf.js
Chrome 32.0.1700 (Mac OS X 10.9.1) GithubCtrl #init should initialize,
grabbing current city FAILED
Error: [$injector:modulerr] Failed to instantiate module...:
Error: [$injector:nomod] Module 'coffeetech' is not available!
You either misspelled the module name or forgot to load it.
If registering a module ensure that you specify the
dependencies as the second argument.
...
We now need to provide some test fixtures.
## Test Data
We need to build our support fixtures, data files that have test data. Add the _fixtures-cities.js_ file into the same directory as your other code:
    var CITIES = [{
      name: "portland",
      latitude: 45,
      longitude: 45
    }, {
      name: "seattle",
      latitude: 47.662613,
      longitude: -122.323837
    }]
And the _fixtures-portland.js_ file:
    var PORTLAND = [{
      "name": "Very Good Coffee Shop",
      "latitude": 45.52292,
      "longitude": -122.643074
    }, {
      "name": "Very Bad Coffee Shop",
      "latitude": 45.522181,
      "longitude": -122.63709
    }, {
      "name": "Mediocre Coffee Shop",
      "latitude": 45.520437,
      "longitude": -122.67846
    }]
## CoffeeTech.js
Then, add the _coffeetech.js_ file. We'll focus just on the setup code and the changes to the `init` function for now:
    var mod = angular.module( 'coffeetech', [] );

    mod.factory( 'Github', function() {
      return new Github({});
    });

    mod.factory( 'Geo', [ '$window', function( $window ) {
      return $window.navigator.geolocation;
    } ] );

    mod.factory( 'Prompt', [ '$window', function( $window ) {
      return $window.prompt;
    } ] );

    mod.controller( 'GithubCtrl', [ '$scope', 'Github', 'Geo', 'Prompt',
      function( $scope, ghs, Geo, Prompt ) {
        $scope.messages = [];

        $scope.init = function() {
          $scope.getCurrentLocation( function( position ) {
            $scope.latitude = position.coords.latitude;
            $scope.longitude = position.coords.longitude;
            $scope.repo = ghs.getRepo( "xrd", "spa.coffeete.ch" );
            $scope.repo.read( "gh-pages", "cities.json",
                              function( err, data ) {
              $scope.cities = JSON.parse( data );
              // Determine our current city
              $scope.detectCurrentCity();
              // If we have a city, get it
              if( $scope.city ) {
                $scope.retrieveCity();
              }
              $scope.$apply();
            });
          });
    ...
We extract the GitHub library into an AngularJS factory. This allows us to inject our mocked GitHub object inside our tests; if we had placed the GitHub instance-creation code inside our controller, we would not have been able to easily mock it out in our tests.
We extract the geolocation support into an AngularJS factory. As we did with the GitHub library mock, we can now inject a fake one into our tests.
Our new controller "injects" the various objects we need. We have extracted the GitHub API object and a `Geo` object into dependencies, and this syntax finds the proper objects and provides them to our controller. You'll also notice a slightly different syntax for creating the controller: `controller( "CtrlName", [ 'dependency1', 'dependency2', function( dependency1, dependency2 ) {} ] );`. This style works even if JavaScript minification were to occur; the previous incarnation we saw would not have survived this process because AngularJS would not have known the dependency name after it had been mangled by a minimizer.
We extract the functionality into a function called `init`, which we can explicitly call from within our tests.
Load the repository by specifying its owner and name. If you are hosting this in your own repository, modify these arguments appropriately; until then, you can use the arguments shown here.
We use the `read` method to pull file contents from the repository. Notice that we use the `gh-pages` branch since we are storing our single-page app and all the data there.
Once our data is returned to us, it is simply a string. We need to reconstitute this data to a JavaScript object using the `JSON.parse` method.
After we retrieve our data from the repository, we can use the data inside the cities array to determine our current city.
Since we are calling outside of AngularJS and returning inside a callback, we need to call `scope.$apply()` like we showed in prior examples.
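To see why the array syntax survives minification, recall that with the shorthand form AngularJS recovers dependency names by parsing the function's source. A simplified sketch of that idea (not Angular's actual implementation):

```javascript
// Simplified sketch: recover injectable names from a function's
// parameter list, as AngularJS does for the shorthand syntax.
function inferDependencies(fn) {
  var source = fn.toString();
  var params = source.match(/\(([^)]*)\)/)[1];
  return params.split(",")
               .map(function (p) { return p.trim(); })
               .filter(function (p) { return p.length > 0; });
}

var ctrl = function ($scope, Github) { /* controller body */ };
// Before minification the names are recoverable:
// inferDependencies(ctrl) returns ["$scope", "Github"]
```

A minifier would rename `$scope` and `Github` to something like `a` and `b`, so the parsed names would no longer match any registered service; the array form carries the names as strings, which minifiers leave alone.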
We are now ready to write our geocoding implementation.
# Geocoding Support
We'll build functions to:

  * Retrieve the data for a city from the GitHub API
  * Find the location of the user using their browser's Geolocation feature
  * Use the user's current location to determine which cities they are close to
  * Implement a distance calculation function
  * Load the city once nearby cities are determined
  * Finally, add a function to query the user for their GitHub credentials and annotation data
First, we can implement the city-loading functions:
$scope.retrieveCity = function() { 
$scope.repo.read( "gh-pages", $scope.city.name + ".json",
function(err, data) {
$scope.shops = JSON.parse( data );
$scope.$apply();
});
}
$scope.loadCity = function( city ) { 
$scope.repo.read( "gh-pages", city + ".json", function(err, data) {
$scope.shops = JSON.parse( data );
$scope.$apply();
});
...
`retrieveCity` retrieves a list of shops in the same way we retrieved the list of cities by reading from the repository object. After loading the data into the scope, we need to call `$apply()` to notify Angular.
`loadCity` uses the city name to load city data.
Next, we can implement the functionality to calculate distances between the current user and available cities:
$scope.getCurrentLocation = function( cb ) { 
if( undefined != Geo ) {
Geo.getCurrentPosition( cb, $scope.geolocationError );
} else {
console.error('not supported');
}
};
$scope.geolocationError = function( error ) { 
console.log( "Geolocation failed: ", error );
};
$scope.detectCurrentCity = function() { 
// Calculate the distance from our current position and use
// this to determine which city we are closest to and within
// 25 miles
for( var i = 0; i < $scope.cities.length; i++ ) {
var dist = $scope.calculateDistance( $scope.latitude, 
$scope.longitude,
$scope.cities[i].latitude,
$scope.cities[i].longitude );
if( dist < 25 ) {
$scope.city = $scope.cities[i];
break;
}
}
}
var toRad = function( value ) { 
return value * Math.PI / 180;
};
$scope.calculateDistance = function( latitude1, 
longitude1,
latitude2,
longitude2 ) {
var R = 6371; // Earth's mean radius, in kilometers
var dLatitude = toRad(latitude2 - latitude1);
var dLongitude = toRad(longitude2 - longitude1);
latitude1 = toRad(latitude1);
latitude2 = toRad(latitude2);
var a = Math.sin(dLatitude / 2) * Math.sin(dLatitude / 2) +
Math.sin(dLongitude / 2) * Math.sin(dLongitude / 2) *
Math.cos(latitude1) * Math.cos(latitude2);
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
var d = R * c;
return d;
...
We build a `getCurrentLocation` function we will call within our code. We use the injected `Geo` object that has our `getCurrentPosition` function (which inside our tests will be the mocked function, and inside our real code just layers an abstraction on top of the native browser interface).
We need to provide an error callback to the `getCurrentPosition` call, so we implement that, which logs it to the console.
Then we build `detectCurrentCity`; we will look over the list of cities and see if we are in one.
We iterate over the list of cities and calculate whether they are within 25 miles of our current location. Each city is stored with its own latitude and longitude data. When we find a city, we store that in the scope as the official current city and exit the loop.
To calculate distance, we need to build a radian conversion function.
Finally, we build our distance calculation function.
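Since the controller injects `Geo` rather than calling the browser directly, it helps to picture what that wrapper looks like. One plausible shape (an assumption; the chapter does not show the factory) is a thin layer over the browser's geolocation API that takes the window object as a parameter, so tests can substitute a fake:

```javascript
// Plausible sketch of the Geo wrapper (an assumption, not the chapter's
// code): delegate to the browser's geolocation API when present,
// otherwise invoke the error callback.
function makeGeo(win) {
  return {
    getCurrentPosition: function(success, failure) {
      if (win.navigator && win.navigator.geolocation) {
        win.navigator.geolocation.getCurrentPosition(success, failure);
      } else {
        failure(new Error("Geolocation unsupported"));
      }
    }
  };
}

// Exercising it with a fake window object, no browser required:
var fakeWin = {
  navigator: {
    geolocation: {
      getCurrentPosition: function(ok) {
        ok({ coords: { latitude: 45.52, longitude: -122.68 } });
      }
    }
  }
};
var got = null;
makeGeo(fakeWin).getCurrentPosition(
  function(pos) { got = pos.coords; },
  function() {});
console.log(got.latitude); // 45.52
```

Passing the window in explicitly is the same dependency-injection idea the controller uses; it keeps the browser boundary in one swappable place.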
At first glance, the calculate distance function looks confusing, no? This was code I developed after reading a post on geocoding using a stored procedure within the PostgreSQL database, and I converted the code to JavaScript. Unless you are a geocoding geek, how do we know this works as advertised? Well, let's write some tests to prove it. Add these lines to the bottom of your _coffeetech.spec.js_ , just within the last `});` closing braces:
describe( "#calculateDistance", function() {
it( "should find distance between two points", function() {
expect( parseInt(
scope.calculateDistance( 14.599512,
120.98422,
10.315699,
123.885437 ) * 0.621371 ) ).
toEqual( 354 );
});
});
To build this test, I searched for "distance between Manila" and Google autocompleted my search to "Cebu." It says they are 338 miles apart. I then grabbed latitude and longitudes for those cities and built the preceding test. I expected my test to fail as my coordinates were going to be off by a few miles here or there. But the test showed that our distance was 571. Hmm, perhaps we calculated in kilometers, not miles? Indeed, I had forgotten this algorithm actually calculated the distance in kilometers, not miles. So, we need to multiply the result by 0.621371 to get the value in miles, which ends up being close enough to what Google reports the distance to be.
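If you want to experiment with the math outside the controller, here is a standalone restatement. It is a sketch mirroring the code above, with the kilometers-to-miles constant made explicit, plus two easy sanity checks:

```javascript
// Standalone haversine distance in kilometers (mirrors the controller code).
var MILES_PER_KM = 0.621371;

function toRadians(deg) { return deg * Math.PI / 180; }

function haversineKm(lat1, lon1, lat2, lon2) {
  var R = 6371; // mean Earth radius, in kilometers
  var dLat = toRadians(lat2 - lat1);
  var dLon = toRadians(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.sin(dLon / 2) * Math.sin(dLon / 2) *
          Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2));
  var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
  return R * c;
}

// Sanity checks: identical points are 0 km apart, and one degree of
// latitude along a meridian is roughly 111 km (about 69 miles).
console.log(haversineKm(45.5, -122.6, 45.5, -122.6)); // 0
console.log(Math.round(haversineKm(0, 0, 1, 0)));     // 111
console.log(Math.round(haversineKm(0, 0, 1, 0) * MILES_PER_KM)); // 69
```

The one-degree-of-latitude check is a handy rule of thumb for catching unit mistakes like the kilometers-versus-miles mixup described above.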
## City Data
Let's seed our application with some starting data and write out the _cities.json_ file:
[
{
"longitude": -122.67620699999999,
"latitude": 45.523452,
"name": "portland"
},
{
"longitude": -122.323837,
"latitude": 47.662613,
"name": "seattle"
}
]
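With this data in hand, the proximity check in `detectCurrentCity` boils down to a small, testable function. The sketch below is illustrative (the `findNearbyCity` name and the fake distance function are ours, not the chapter's):

```javascript
// Return the first city within `radius` of a point, using any distance
// function with calculateDistance's (lat1, lng1, lat2, lng2) signature.
function findNearbyCity(cities, lat, lng, radius, distanceFn) {
  for (var i = 0; i < cities.length; i++) {
    var d = distanceFn(lat, lng, cities[i].latitude, cities[i].longitude);
    if (d < radius) return cities[i];
  }
  return null; // no city close enough
}

// A fake distance function keyed on exact coordinate match, so we can
// demonstrate the lookup without real geodesy:
function fakeDistance(lat1, lng1, lat2, lng2) {
  return (lat1 === lat2 && lng1 === lng2) ? 0 : 999;
}
var cities = [
  { name: "portland", latitude: 45.523452, longitude: -122.676207 },
  { name: "seattle", latitude: 47.662613, longitude: -122.323837 }
];
console.log(findNearbyCity(cities, 47.662613, -122.323837, 25, fakeDistance).name);
// "seattle"
```

Separating the search loop from the distance math is also what makes the controller's 25-mile threshold easy to unit test.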
Now that we have our geocoding implementation complete and sample data in place, we can move on to acquiring credentials from the user.
# Adding Login
If we want people to fork a repository on GitHub, we need to have them log in to GitHub. So, we need to ask for credentials:
...
$scope.annotate = function() {
var user = prompt( "Enter your github username" );
var password = prompt( "Enter your github password" );
var data = prompt( "Enter data to add" );
};
...
We can now expose the new data inside the _index.html_ file like so (omitting the obvious from the HTML):
<body ng-app="coffeetech">
<div class="container" ng-controller="GithubCtrl" ng-init="init()">
<h1>CoffeeTe.ch</h1>
<h3 ng-show="city">Current city: {{city.name}}</h3>
<div class="row">
<div class="col-md-6"><h4>Shop Name</h4> </div>
<div class="col-md-6"><h4>Lat/Lng</h4> </div>
</div>
<div class="row" ng-repeat="shop in shops"> 
<div class="col-md-6"> 
{{ shop.name }} 
</div>
<div class="col-md-6"> {{ shop.latitude }} / {{ shop.longitude }} </div>
</div>
</div>
`ng-repeat` is an AngularJS directive that iterates over an array of items. Here we use it to iterate over the items in our _portland.json_ file and insert a snippet of HTML with our data interpolated from each item in the iteration.
Bootstrap makes it easy to establish structure in our HTML. The `col-md-6` class tells Bootstrap to build a column sized at 50% of our 12-column layout (the default for Bootstrap layouts). We set up two adjacent columns this way. And if we are inside a mobile device, it properly stacks these columns.
Using AngularJS two-way data binding we insert the name of the shop.
## Errors Already?
If you run this in your browser, you will not see the shops for our city displayed. Something is broken, so let's investigate. I recommend using the Chrome browser to debug this, but you can use any browser and set of developer tools you like. For Chrome, right-clicking the page anywhere and selecting "Inspect Element" at the bottom (or by the keyboard shortcut "F12" or "Ctrl-Shift-I" on Windows or Linux or "Cmd-Opt-I" on Mac) will bring up the developer console. Then select the console window. Refresh the browser window, and you'll see this in the console:
Uncaught TypeError: Cannot call method 'select' of undefined
If you click the link to the right for GitHub.js, you'll see something like Figure 9-3.
###### Figure 9-3. An unexpected error
You see at the point of error that we are calling `select` on the tree. `select` appears to be a method defined on an underscore character. If you use JavaScript frequently, you'll recognize that the underscore variable comes from the Underscore library, and `select` is a method that detects the first matching instance inside an array. Under the hood, the GitHub.js library is pulling the entire tree from the repository, then iterating over each item in the tree, then selecting the item from the tree that matches the name of the file we have requested. This is an important performance implication to consider; the GitHub API does not provide a way to directly request content by the path name. Instead, you pull a list of files and then request the file by the SHA hash, a two-step process that makes two (potentially lengthy) calls to the API.
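The two-step lookup described above can be sketched as a small function. Here `api`, `getTree`, and `getBlob` are stand-ins for the underlying HTTP calls, not the real GitHub.js internals:

```javascript
// Toy model of the two-step read: list the tree, locate the entry by
// path, then fetch the blob by its SHA (the second round trip).
function readByPath(api, branch, path, cb) {
  api.getTree(branch, function(err, tree) {
    if (err) return cb(err);
    var entry = tree.filter(function(e) { return e.path === path; })[0];
    if (!entry) return cb(new Error("not found: " + path));
    api.getBlob(entry.sha, cb);
  });
}

// A fake API object demonstrating the flow without network access:
var api = {
  getTree: function(branch, cb) {
    cb(null, [{ path: "cities.json", sha: "abc123" }]);
  },
  getBlob: function(sha, cb) {
    cb(null, '[{"name":"portland"}]');
  }
};
readByPath(api, "gh-pages", "cities.json", function(err, data) {
  console.log(data); // '[{"name":"portland"}]'
});
```

Seeing the two round trips spelled out makes the performance implication concrete: every file read pays for a tree listing first.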
How do we fix the error telling us `select` is undefined? Did we forget to include underscore.js? Reviewing the documentation on GitHub.js, we see that it states underscore.js and base64.js are required. We forgot to include them. Oops! To include these, run these commands from the console:
$ wget http://underscorejs.org/underscore-min.js
$ wget https://raw.github.com/dankogai/js-base64/master/base64.js
Then, add the libraries to your _index.html_ so that the JavaScript includes look like this:
...
<script src="angular.js"></script>
<script src="underscore-min.js"></script>
<script src="base64.js"></script>
<script src="github.js"></script>
<script src="coffeetech.js"></script>
...
Now we can build out some faked data and start envisioning the structure of our data that will eventually come from our users.
# Displaying (Soon-to-Be) User-Reported Data
So far we have built a database of cities and coffee shops in those cities. Google Maps or Apple Maps already provide this information. If we layer additional information on top of this data (like quirky information about the coffee shop), however, then we might have something that someone might find useful once they have found the coffee shop on their favorite mapping application.
So, to start, let's add some fake data to our coffee shop information. Add a file called _portland.json_ that looks like this:
[
{
"information" : [
"offers gluten free desserts",
"free wifi",
"accepts dogs"
],
"longitude" : -122.643074,
"latitude" : 45.52292,
"name" : "Very Good Coffee Shop"
},
{
"latitude" : 45.522181,
"name" : "Very Bad Coffee Shop",
"longitude" : -122.63709
},
{
"name" : "Mediocre Coffee Shop",
"latitude" : 45.520437,
"longitude" : -122.67846
}
]
Notice that we added an array called `information` to our data set. We'll use this to allow simple search. Add the search feature to our _index.html_ :
...
<div class="container" ng-controller="GithubCtrl" ng-init="init()">
<h1>CoffeeTe.ch</h1>
<input style="width: 20em;" ng-model="search"
placeholder="Enter search parameters..."/> 
<h3 ng-show="city">Current city: {{city.name}}</h3>
<div class="row">
<div class="col-md-6"><h4>Shop Name</h4> </div>
<div class="col-md-6"><h4>Lat/Lng</h4> </div>
</div>
<div class="row" ng-repeat="shop in shops | filter:search"> 
<div class="col-md-6">
{{ shop.name }}
<div ng-show="search"> 
<span ng-repeat="info in shop.information">
<span class="label label-default">{{info}}</span>
</span>
</div>
</div>
<div class="col-md-6">
<a target="_map" 
href="http://maps.google.com/?q={{shop.latitude}},{{shop.longitude}}">
Open in map ({{shop.latitude}},{{shop.longitude}})
</a>
</div>
...
We add a search box that binds to the `search` model in our scope.
We add a filter on the data to display that searches through all data inside each item in our `shops` array.
If we are searching (the model variable `search` is defined) then we show the extra information.
We alter our lat/lng information to point to a Google Maps page.
Now if we type the word "gluten" in our search box, we filter out anything except shops that match that, and we see the information pieces formatted as labels underneath the shop name (Figure 9-4).
###### Figure 9-4. Filtering coffee shops using the term gluten
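Under the hood, Angular's default `filter` behaves roughly like the following sketch (simplified; the real filter also supports object and predicate queries): keep items whose serialized form contains the query, case-insensitively.

```javascript
// Simplified model of the `shops | filter:search` expression.
function filterShops(shops, query) {
  if (!query) return shops; // empty search shows everything
  var q = query.toLowerCase();
  return shops.filter(function(shop) {
    return JSON.stringify(shop).toLowerCase().indexOf(q) !== -1;
  });
}

var shops = [
  { name: "Very Good Coffee Shop",
    information: ["offers gluten free desserts", "free wifi"] },
  { name: "Very Bad Coffee Shop" }
];
console.log(filterShops(shops, "gluten").length); // 1
console.log(filterShops(shops, "wifi")[0].name);  // "Very Good Coffee Shop"
```

Because the match runs over the whole serialized item, the search finds terms inside the nested `information` array without any extra code on our part.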
## User-Contributed Data
Now that we have a functioning application, let's allow people to add information themselves and help build our database. Just beneath the link to the map link, add a button that will allow us to annotate a coffee shop with extra information.
To make a contribution, users will fork the repository, make a change, and then issue a pull request from the fork to the original repository. Forking means we create a copy of the original repository in our GitHub account. All these steps are possible from within our webapp using the GitHub.js library. Of course, if someone is going to fork a repository into their account, we must ask the user to log in, so we will prompt them for their username and password. If you are grimacing at the thought of a webapp asking for GitHub credentials, don't fret—we'll find a safe way to achieve the same thing shortly.
The implementation we will use starts with adding an `annotate` button to our HTML:
<button ng-click="annotate(shop)">Add factoid</button>
Let's add some tests. Add another file called _coffeetech.annotate.spec.js_ with these contents:
describe( "GithubCtrl", function() {
var scope = undefined, gh = undefined,
repo = undefined, prompter = undefined;
function generateMockPrompt() {
prompter = { prompt: function() { return "ABC" } }; 
spyOn( prompter, "prompt" ).andCallThrough();
}
var PR_ID = 12345;
function generateMockRepositorySupport() { 
repo = {
fork: function( cb ) {
cb( false );
},
write: function( branch, filename, data, commit_msg, cb ) {
cb( false );
},
createPullRequest: function( pull, cb ) {
cb( false, PR_ID );
},
read: function( branch, filename, cb ) {
cb( undefined,
JSON.stringify( filename == "cities.json" ?
CITIES : PORTLAND ) );
}
};
spyOn( repo, "fork" ).andCallThrough();
spyOn( repo, "write" ).andCallThrough();
spyOn( repo, "createPullRequest" ).andCallThrough();
spyOn( repo, "read" ).andCallThrough();
gh = { getRepo: function() {} }; 
spyOn( gh, "getRepo" ).andCallFake( function() {
return repo;
} );
ghs = { create: function() { return gh; } };
}
...
It looks similar to our previous tests where we mock out a bunch of items from the GitHub.js library.
We added a mock prompt. We will be prompting the user for username, password, and the annotating data, and we will use the native browser prompt mechanism to do this.
We added three new methods to our mock GitHub object: `fork`, `write`, and `createPullRequest`. We verify that these are called.
When we call the `getRepo` function we want to spy on it so we can assure it is called, but we also want to return the fake repository we provide inside our test, and this syntax does that.
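If you have not used Jasmine spies before, `spyOn(...).andCallThrough()` behaves roughly like this toy wrapper (illustrative only; Jasmine's real spies also support faked return values, argument matchers, and resets):

```javascript
// Toy spy: wrap a method so every call is recorded, then delegate to
// the original implementation ("call through").
function spyOnMethod(obj, name) {
  var original = obj[name];
  var calls = [];
  obj[name] = function() {
    calls.push(Array.prototype.slice.call(arguments)); // record arguments
    return original.apply(this, arguments);            // call through
  };
  obj[name].calls = calls;
}

var repo = { fork: function(cb) { cb(false); } };
spyOnMethod(repo, "fork");
repo.fork(function() {});
console.log(repo.fork.calls.length); // 1
```

This is what makes assertions like `expect( repo.fork ).toHaveBeenCalled()` possible: the spy keeps a call log while the mock behavior still runs.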
We have some setup code that is called in a before function to load the mock objects and establish a controller and scope for testing:
...
var $timeout;
beforeEach( inject( function ($controller, $rootScope, $injector ) {
generateMockRepositorySupport();
generateMockPrompt();
$timeout = $injector.get( '$timeout' );
scope = $rootScope.$new();
ctrl = $controller( "GithubCtrl",
{ $scope: scope,
Github: ghs,
'$timeout': $timeout,
'$window': prompter } );
} ) );
...
According to the documentation for `fork` in the GitHub.js library, this method can take a little time to return (as long as it takes for GitHub to complete our fork request, which is nondeterministic), so we need to set a timeout in our app and query for the new repository. If we are using AngularJS, we can ask it for a mocked and programmatic timeout interface, which we can control inside our tests.
We generate our mocked GitHub method calls and spies, and we follow that by mocking our prompt calls.
As mentioned earlier, we need to get `$timeout`, and we can use the injector to retrieve the mocked one AngularJS provides for testing using this call.
Now we can write our tests for the `annotate` function:
...
describe( "#annotate", function() { 
it( "should annotate a shop", function() {
scope.city = PORTLAND
var shop = { name: "A coffeeshop" }
scope.annotate( shop ); 
expect( scope.shopToAnnotate ).toBeTruthy();
expect( prompter.prompt.calls.length ).toEqual( 3 );
expect( scope.username ).not.toBeFalsy();
expect( scope.annotation ).not.toBeFalsy();
expect( repo.fork ).toHaveBeenCalled(); 
expect( scope.waiting.state ).toEqual( "forking" ); 
$timeout.flush();
expect( scope.forkedRepo ).toBeTruthy(); 
expect( repo.read ).toHaveBeenCalled();
expect( repo.write ).toHaveBeenCalled();
expect( repo.createPullRequest ).toHaveBeenCalled();
expect( scope.waiting.state ).toEqual( "annotated" );
$timeout.flush();
expect( scope.waiting ).toBeFalsy();
});
});
...
We create a new `describe` block to organize our tests, calling it `#annotate`. We then implement one `it` function, which is the single test we are creating: "annotate a shop."
After setting up the preconditions that our `scope` object should have a city selected, and creating a shop to annotate, we then call our `annotate` method.
Once we have called `annotate`, our code should request our credentials for the GitHub API, and then ask us for the information to use in annotating the shop. If this were happening in the browser, we would get three prompts. Our test mocks out the `prompt` object here, and we should therefore see three calls made to our mocked `prompt` object. We also validate some state we should see on the `scope` object like holding a username and annotation for usage later.
We should then see the first of our GitHub API calls being made: GitHub.js should issue a request to `fork` the repository.
We should then enter in our waiting state; we will tell the user we are waiting and our UI will use the `scope.waiting.state` to notify them of that.
Once we have flushed the timeout that simulates completion of the fork, we will then see our code storing the result of the forked repo into the scope.
Next, we can observe the other GitHub API calls that perform the annotation.
We flush again to resolve the timeouts, and then finally, after everything is done, we should no longer be telling the user they are in a waiting state.
If you are still running Karma in the background, you'll see the tests fail with:
Chrome 32.0.1700 (Mac OS X 10.9.1) GithubCtrl #annotate should
annotate a shop FAILED
TypeError: Object #<Scope> has no method 'annotate'
at null.<anonymous> (/.../coffeetech.spec.js:80:19)
Now, let's implement this functionality in our _coffeetech.js_ file. Add these lines to the bottom of the file, but before the last closing braces. The function `annotate` actually does two things: makes a fork of the repository for the user, and then adds annotation information to that repository using the GitHub API once the fork has completed:
...
$scope.annotate = function( shop ) { 
$scope.shopToAnnotate = shop;
$scope.username = $window.prompt( "Enter your github username (not email!)" );
var pass = $window.prompt( "Enter your github password" );
$scope.annotation = $window.prompt( "Enter data to add" ); 
gh = ghs.create( $scope.username, pass ); // intentionally not var: reused later 
var toFork = gh.getRepo( "xrd", "spa.coffeete.ch" ); 
toFork.fork( function( err ) {
if( !err ) { 
$scope.notifyWaiting( "forking",
"Forking in progress on GitHub, please wait" );
$timeout( $scope.annotateAfterForkCompletes, 10000 );
$scope.$apply();
}
} );
};
...
We start by creating our annotation function. As we specified in our tests, this function takes a `shop` object, an object into which annotations about the shop are added.
We prompt the user three times: username and password on GitHub, and the text they want to annotate. If this seems like a really bad way to do things, don't worry, we'll fix it in a moment.
We create a new GitHub object with the username and password provided. We leave it as an exercise for the reader to contend with mistyped or incorrect credentials.
The GitHub.js library allows you to create a repository object (meaning create a local reference to an existing repository) using the `getRepo` function. Once we have this, we can issue a `fork` to the repository.
If we did not get an error, we still need to contend with the fact that forking takes a nondeterministic amount of time. So, we schedule a timeout in 10 seconds, which will check to make sure our request completed. As this operation is happening inside the browser, we have no way of registering for a notification, so we must poll GitHub to determine whether our fork has completed. In the real world, we would probably need to retry this request if it fails, since a failure could simply mean the fork was still pending on GitHub.
We register a message using a key called `"forking"` which we can use inside our HTML template to display to the user that our fork has completed. We'll build this function out soon; it basically stores the value and a string for display, and allows us to clear it when the message is no longer valid.
Finally, we call the method `annotateAfterForkCompletes`, which adds data to our new forked repository once the process is fully complete.
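The schedule-and-recheck approach generalizes to a small polling helper. This is a sketch of the pattern, not the chapter's code; `pollUntil` and `checkFork` are illustrative names:

```javascript
// Generic poll-until-ready helper: call `check`, and on failure schedule
// another try until we succeed or exhaust our attempts.
function pollUntil(check, delayMs, attempts, done) {
  check(function(err) {
    if (!err) return done(null);
    if (attempts <= 1) return done(new Error("gave up waiting"));
    setTimeout(function() {
      pollUntil(check, delayMs, attempts - 1, done);
    }, delayMs);
  });
}

// Simulate a fork that completes on the third status check:
var attempts = 0;
function checkFork(cb) {
  attempts++;
  cb(attempts < 3 ? new Error("still pending") : null);
}
pollUntil(checkFork, 10, 5, function(err) {
  console.log(err ? "gave up" : "fork ready after " + attempts + " checks");
});
```

Capping the attempt count matters: without it, a fork that never completes would leave the app polling GitHub forever.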
Let's now build the code to annotate our repository after the fork has completed:
...
$scope.annotateAfterForkCompletes = function() {
$scope.forkedRepo = gh.getRepo( $scope.username, "spa.coffeete.ch" );
$scope.forkedRepo.read( "gh-pages", "cities.json", function(err, data) {
if( err ) {
$timeout( $scope.annotateAfterForkCompletes, 10000 );
}
else {
$scope.notifyWaiting( "annotating",
"Annotating data on GitHub" );
// Write the new data into our repository
$scope.appendQuirkToShop();
var newData = JSON.stringify( $scope.shops, stripHashKey, 2 ); 
$scope.forkedRepo.write('gh-pages', $scope.city.name + '.json', 
newData,
'Added my quirky information',
function(err) {
if( !err ) {
// Annotate our data using a pull request
var pull = { 
title: "Adding quirky information to " +
$scope.shopToAnnotate.name,
body: "Created by :" + $scope.username,
base: "gh-pages",
head: $scope.username + ":" + "gh-pages"
};
var target = gh.getRepo( "xrd", "spa.coffeete.ch" ); 
target.createPullRequest( pull,
function( err, pullRequest ) {
if( !err ) {
$scope.notifyWaiting( "annotated",
"Successfully sent annotation request" );
$timeout(
function() {
$scope.notifyWaiting( undefined )
}, 5000 );
$scope.$apply(); 
}
} );
}
$scope.$apply();
});
}
$scope.$apply();
} );
...
Once we have verified the fork has completed, we need to get the new forked repository. We use the username provided to our code when the user logs in to build the repository object. We then read the _cities.json_ file from the repository; if we retrieve this file successfully (we don't see the `err` object evaluating to true) then we know we are ready to start editing data.
We notify the UI that we are annotating and tell the user they will need to wait while the annotation request is in progress.
`JSON.stringify` converts our annotated shop object into a JSON object. If you have used `JSON.stringify` before, you might not know about the other two parameters (beyond just the object you want to serialize) you can provide to this function. These two extra parameters allow us to filter the object and specify certain elements to ignore when serializing and how and if to indent the resultant JSON. So, we provide the `stripHashKey` function to remove the `$$hashKey` Angular tracking data, and an indentation count. The indentation count makes it much easier to read a pull request, because the diff'ing algorithm can diff line by line rather than as a long JSON string, which is how `JSON.stringify` serializes by default.
We then write data back to the forked repository using the `write` function. If this succeeds, the error value will be undefined inside the callback function as the last parameter.
If our error was undefined, we are in a position where we can make a pull request back to the original repository. To make a pull request, we create a pull request object we need to provide to the pull request method inside of GitHub.js.
We then get a reference to the target of the pull request, the original repository.
We then issue the pull request against the target. This takes the pull request specification object we created earlier, and a callback function that has an error code if the request failed, and otherwise, a pull request object.
Once the request has succeeded, we can notify the UI that the annotation process has completed, and then issue a timeout to remove that from the UI after 5000 milliseconds, or 5 seconds.
Any time we are inside a callback in a third-party library (like GitHub.js) we, as mentioned before, need to use `$apply()` to notify Angular that our scope object has changed.
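The replacer and indentation arguments to `JSON.stringify` discussed above behave like this:

```javascript
// The replacer drops Angular's $$hashKey tracking attribute, and the
// numeric third argument requests two-space indentation so pull request
// diffs read line by line instead of as one long string.
function stripHashKey(key, value) {
  return key === "$$hashKey" ? undefined : value;
}

var shop = { name: "Very Good Coffee Shop", $$hashKey: "object:3" };
var out = JSON.stringify(shop, stripHashKey, 2);
console.log(out);
// {
//   "name": "Very Good Coffee Shop"
// }
```

Returning `undefined` from a replacer is how `JSON.stringify` is told to omit a key entirely.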
We have three convenience methods to implement:
...
$scope.appendQuirkToShop = function() { 
if( undefined == $scope.shopToAnnotate.information ) {
$scope.shopToAnnotate.information = [];
}
$scope.shopToAnnotate.information.push( $scope.annotation );
};
function stripHashKey( key, value ) { 
if( key == "$$hashKey" ) {
return undefined;
}
return value;
}
$scope.notifyWaiting = function( state, msg ) { 
if( state ) {
$scope.waiting = {};
$scope.waiting.state = state;
$scope.waiting.msg = msg;
}
else {
$scope.waiting = undefined;
}
}
...
The `appendQuirkToShop` function creates an empty array if it is not yet defined and then adds the annotation to the list of annotations. We don't want our code to crash if we try to add an annotation to an object for which there is an undefined array reference.
We define a transformation function that we used with the `JSON.stringify` function. AngularJS adds a tracking attribute (`$$hashKey`) to our objects when we use the `ng-repeat` directive, and this function filters that out so that our pull request data is clean.
`notifyWaiting` (obviously) notifies users. We create a waiting object, and then update the state (which our app will use to hide or display messages) and then a message itself. If we provide an empty message, we will clear the object, effectively removing the message from the UI.
Now we need to expose the status message in our UI by modifying the HTML:
...
<input class="ctinput" ng-model="search"
placeholder="Enter search parameters..."/>
<h3 ng-show="city">Current city: {{city.name}}</h3>
<div ng-show="waiting">
{{waiting.msg}}
</div>
...
# Accepting Pull Requests
When someone makes an annotation to a shop, the owner of the original repository gets a pull request notification on GitHub (Figure 9-5).
###### Figure 9-5. Adding information through a pull request
Now we can review changes through GitHub's integrated online diff tool (Figure 9-6).
###### Figure 9-6. Reviewing annotation pull request diffs from within GitHub
Here we see a clear "diff" of the changes our contributor made: they added an annotation that tells us "no turtles allowed." We might want to consider a different location the next time we have a date with Morla. The diff is clear in that the green information is easy to read, which is a benefit we get when we use the `JSON.stringify` function with the third parameter set to something other than undefined. Unfortunately, the first line differs only by the extra comma, but this is still a very readable diff.
# Toward a Safe Login Implementation
If I saw this app in the wild I would never use it to submit data. The app asks for my GitHub username and password, which implicitly asks me to trust its authors: trust that they will not use my credentials for nefarious purposes, and trust that they have not done something careless that would allow an attacker to insert themselves into the middle of the authentication process and steal my credentials. GitHub is a large part of my online identity, and I would never provide these credentials to a web application.
Fortunately, we have an alternative to asking for passwords: OAuth.
When we use OAuth, our users enter their credentials directly into GitHub. If our users have turned on two-factor authentication, GitHub can still authenticate them (while our naive implementation could not be modified to accept this type of authentication process). Once we have entered our credentials, GitHub decides whether we are who we say we are, and then returns us to the application that requested access.
There are many benefits to using OAuth. GitHub provides the application with what is called an OAuth token that encapsulates exactly what services on GitHub we have access to, and whether that access is read-only or whether we can add data in a read-write manner. This means our requesting service can ask to modify only parts of our data within GitHub; this provides a much higher level of trust to users as they know the application cannot touch the more private parts within GitHub. Specifically, this means we could ask for access only to gists and not request access to our repositories. One important point about OAuth tokens is that they can be revoked. So, once a specific action has been taken, we can destroy the token and revoke access. With simple username and password access, the only way to revoke access is to change the password, which means any place you have saved that password (password managers or other applications that log in via username and password) need to update their settings as well. With OAuth we can revoke a single token at any time (and GitHub makes it easy to do this) without affecting access to other services.
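For reference, the first leg of GitHub's OAuth web flow is just a redirect to a documented GitHub URL; the client ID below is a placeholder. The second leg, exchanging the returned code for a token at `https://github.com/login/oauth/access_token`, is the part that requires a server, which is what we delegate shortly:

```javascript
// Build the authorize URL for GitHub's OAuth web flow. CLIENT_ID comes
// from your registered application; the scope limits what the token
// can touch (e.g., public_repo instead of full repo access).
function buildAuthorizeUrl(clientId, scope) {
  return "https://github.com/login/oauth/authorize" +
         "?client_id=" + encodeURIComponent(clientId) +
         "&scope=" + encodeURIComponent(scope);
}

console.log(buildAuthorizeUrl("abc123", "public_repo"));
// https://github.com/login/oauth/authorize?client_id=abc123&scope=public_repo
```

The scope parameter is where the "ask only for what you need" benefit shows up: a token scoped to `gist` simply cannot modify repositories.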
Let's modify our application to use OAuth.
## Authentication Requires a Server
Up until now we have been able to publish all our files into GitHub, and they are hosted for us by GitHub. Sadly the authentication component cannot be hosted on GitHub. Somehow we need to safely authenticate our user into GitHub and retrieve an OAuth token. There is currently no way to do this strictly client side (using only static HTML and JavaScript running in the browser). Other authentication providers like Facebook do provide pure JavaScript login functionality in their SDKs, but GitHub, citing security concerns, has not released anything that does authentication purely on the client side as of yet.
Somehow we have to involve a server in our authentication process. The most obvious choice is to run a small authentication server, delegate authentication to it, and once authentication is complete, jump back into our application hosted on GitHub. We provide code (written in NodeJS, JavaScript for the server side) to do this in the associated repository for this chapter. But even a simple authentication system has a baseline of complexity that seems like overkill. If we could instead delegate this authentication to a third party, we could remove a massive amount of code and complexity from our system.
## Fixing Authentication with Firebase
Instead of writing our own server to manage authentication and talk to the GitHub API, we will delegate that authentication to Firebase. Firebase is a real-time communication toolset that integrates well with our choice of AngularJS. By far the simplest and safest option, Firebase offers AngularJS bindings (called "AngularFire") and an integrated GitHub authentication component (called "Simple Login"). Together they resolve the authentication issue for us, and keep all our code hosted on GitHub. Delegation of our authentication component is easy with Firebase: we just modify our existing GitHub application, provide the credentials and GitHub OAuth scope to Firebase, and then our application offloads user management to Firebase.
First, we need to create a new GitHub application. In the top-right corner on GitHub.com, click on the "Account settings" link, and then navigate to the "Applications" link toward the bottom. Click the "Developer Applications" tab in the right center column and then click the "Register new application" button. Make sure "Authorization callback URL" is set to __https://auth.firebase.com/auth/github/callback__. Then save the application by clicking the "Register application" button as shown in Figure 9-7.
###### Figure 9-7. A new GitHub application for OAuth
Now, create an account on Firebase. Once you have done this, create a new app called "CoffeeTech" inside Firebase. The APP URL needs to be unique, so use "coffeetech-<USERNAME>", replacing USERNAME with your GitHub username. Once you have created the app, click the "View Firebase" button. You'll then see a settings screen; click "Simple Login" and then "GitHub" as shown in Figure 9-8.
###### Figure 9-8. Creating the Firebase hosted login
Then, copy your GitHub client ID and secret to the sections inside the Firebase Simple Login settings for the GitHub provider. Make sure the "enabled" checkbox is checked to enable the provider.
We've now established a login application on GitHub, configured it to use the Firebase service, and have properly configured Firebase to use that GitHub application. We want all functionality, especially external services, to be covered by tests, so we'll write that test coverage next.
## Testing Firebase
Since we load Firebase from its CDN, we first need to mock out the `Firebase` constructor using a simple shim. Put the following into a file called _firebase-mock.js_ :
var Firebase = function (url) {
}
angular.module( 'firebase', [] );
To test our code, we make the following changes to our _coffeetech-annotate.spec.js_ :
beforeEach( module( "coffeetech" ) );
var mockFirebase = mockSimpleLogin = undefined;
function generateMockFirebaseSupport() { 
mockFirebase = function() {};
mockSimpleLogin = function() {
return {
'$login': function() {
return { then: function( cb ) {
cb( { name: "someUser",
accessToken: "abcdefghi" } );
} };
}
}
};
}
var $timeout;
beforeEach( inject( function ($controller, $rootScope, $injector ) {
generateMockRepositorySupport();
generateMockPrompt();
generateMockFirebaseSupport(); 
$timeout = $injector.get( '$timeout' );
scope = $rootScope.$new();
ctrl = $controller( "GithubCtrl",
{ $scope: scope,
Github: ghs,
'$timeout': $timeout,
'$window': prompter,
'$firebase': mockFirebase,
'$firebaseSimpleLogin': mockSimpleLogin } ); 
} ) );
describe( "#annotate", function() {
it( "should annotate a shop", function() {
scope.auth = mockSimpleLogin( mockFirebase() ); 
scope.city = PORTLAND
var shop = { name: "A coffeeshop" }
scope.annotate( shop );
expect( prompter.prompt.calls.length ).toEqual( 1 ); 
expect( scope.shopToAnnotate ).toBeTruthy();
expect( scope.username ).not.toBeFalsy();
expect( scope.annotation ).not.toBeFalsy();
expect( repo.fork ).toHaveBeenCalled();
expect( scope.waiting.state ).toEqual( "forking" );
$timeout.flush();
expect( scope.forkedRepo ).toBeTruthy();
expect( repo.read ).toHaveBeenCalled();
expect( repo.write ).toHaveBeenCalled();
expect( repo.createPullRequest ).toHaveBeenCalled();
expect( scope.waiting.state ).toEqual( "annotated" );
$timeout.flush();
expect( scope.waiting ).toBeFalsy();
We add a `generateMockFirebaseSupport()` function that creates the mock firebase and simple login objects.
We call this method to initialize the mocks.
In our test we use the `$controller` method instantiator to inject these mock objects instead of letting AngularJS inject the real ones. We should modify our other spec file as well now that we are changing the required injections for any controller.
We change our `#annotate` test and create the `auth` object (normally created inside the initialization).
We prompt only once for the data to annotate (we don't need to prompt for username and password any longer).
## Implementing Firebase Login
Now, add Firebase support to our AngularJS application. Add the references to the Firebase support libraries right after AngularJS is loaded:
<script src="angular.js"></script>
<script src='https://cdn.firebase.com/v0/firebase.js'></script>
<script
src='https://cdn.firebase.com/libs/angularfire/0.6.0/angularfire.min.js'>
</script>
<script
src='https://cdn.firebase.com/js/simple-login/1.2.5/firebase-simple-login.js'>
</script>
We need to adjust our _coffeetech.js_ file in a few ways. First, import the Firebase module into our AngularJS application. Also, our original GitHub service expected username and password as parameters, but we now use a slightly different signature for OAuth tokens:
var mod = angular.module( 'coffeetech', [ 'firebase' ] );
mod.factory( 'Github', function() {
return {
create: function(token) {
return new Github( { token: token, auth: 'oauth' } );
}
};
});
When we instantiate our controller, we need to inject `Firebase` and `FirebaseSimpleLogin` and initialize them inside our `init` method:
mod.controller( 'GithubCtrl', [ '$scope', 'Github', 'Geo', '$window', '$timeout',
'$firebase', '$firebaseSimpleLogin',
function( $scope, ghs, Geo, $window, $timeout,
$firebase, $firebaseSimpleLogin ) {
$scope.init = function() {
var ref = new Firebase( 'https://coffeetech.firebaseio.com' );
$scope.auth = $firebaseSimpleLogin( ref );
$scope.getCurrentLocation( function( position ) {
$scope.latitude = position.coords.latitude;
Then, when we annotate, we need to provide the `auth` token returned from Firebase. But it is gratifying to see that little else needs to change in our flow:
$scope.annotate = function( shop ) {
$scope.shopToAnnotate = shop;
$scope.auth.$login( 'github', { scope: 'repo' } ).then(
function( user ) { 
$scope.me = user;
$scope.username = user.name;
$scope.annotation = $window.prompt( "Enter data to add" ); 
if( $scope.annotation ) {
gh = ghs.create( $scope.me.accessToken ); 
toFork = gh.getRepo( "xrd", "spa.coffeete.ch" );
toFork.fork( function( err ) {
We call the `$login` method on our `auth` object created using the Firebase SimpleLogin service. It returns a "promise," which is an interface that has a `then()` method that will be called if the `$login()` succeeds. `then()` calls our callback function, giving us a user object.
We still need to prompt the user for one piece of information—the data to annotate. You can imagine other ways to get this information, such as modal HTML5 dialogs, but this will work for now. At least we are only prompting once instead of three times!
Once we are ready to fork we need to create our user object using the token.
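The "then-able" promise interface that `$login()` returns can be sketched in isolation. This is an illustrative stand-in, not Firebase's real implementation: `fakeLogin` and the user fields it passes are hypothetical, but the shape matches the flow above, where `then()` hands our callback a user object once login succeeds.

```javascript
// A tiny, self-contained sketch of the "then-able" interface described above.
// fakeLogin and its user fields are illustrative stand-ins, not Firebase's API.
function fakeLogin() {
  return {
    then: function (onSuccess) {
      // Invoke the callback as if the login had just succeeded.
      onSuccess({ name: "someUser", accessToken: "abcdefghi" });
    }
  };
}

var loggedInName = null;
fakeLogin().then(function (user) {
  loggedInName = user.name; // runs once the "login" resolves
});
```

This is the same shape our mock `$firebaseSimpleLogin` service used in the spec file, which is why the tests could drive the flow without a network connection.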
After we make these changes, we can click the "Add factoid" button and we'll get a dialog like Figure 9-9 indicating we are logging in to GitHub (via the Firebase SimpleLogin).
###### Figure 9-9. The final step in the permission flow for GitHub access using Firebase
After you authorize the application, the execution flow is identical to the prior authentication flow (using username and password). As an optimization we could check for previous logins before calling `$login()` again, but we don't do that here, meaning the login dialog momentarily pops up each time we click the button.
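The "check for previous logins" optimization could be sketched like this. Everything here is hypothetical (the `makeLoginOnce` wrapper and `stubAuth` are not part of Firebase or our app); only the `$login()`/`then()` shape from the chapter is assumed.

```javascript
// Hypothetical sketch of that optimization: remember the user from the first
// $login() call and skip the dialog afterward. stubAuth stands in for the
// Firebase SimpleLogin object; only its $login()/then() shape matters here.
function makeLoginOnce(auth) {
  var cachedUser = null;
  var loginCalls = 0;
  return {
    login: function (provider, options, done) {
      if (cachedUser) { done(cachedUser); return; } // no dialog on later clicks
      loginCalls += 1;
      auth.$login(provider, options).then(function (user) {
        cachedUser = user;
        done(user);
      });
    },
    calls: function () { return loginCalls; }
  };
}

// Demo with a stubbed auth object:
var stubAuth = {
  $login: function () {
    return { then: function (cb) { cb({ name: "someUser" }); } };
  }
};
var once = makeLoginOnce(stubAuth);
var names = [];
once.login("github", { scope: "repo" }, function (u) { names.push(u.name); });
once.login("github", { scope: "repo" }, function (u) { names.push(u.name); });
// Both clicks get a user, but $login() ran only once.
```

A real version would also need to handle login failures and expired tokens, which is why we kept the simpler always-login flow in the chapter.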
Once users have logged in, they will be redirected to the application, and we'll notify them that they have submitted a pull request with their contribution. Since their contribution is associated with their GitHub account, they will receive standard pull request notifications when their contribution is accepted, so we don't need to implement that ourselves.
# Summary
We've built an application in JavaScript that requires no server and provides users with a searchable coffee shop database that accepts contributions in a very safe and secure way using the Pull Request API. We were able to completely ignore all the administrative features of a data entry system, delegating all these to GitHub. Our single-page app permits us to focus on one thing: making a powerful and useful application.
# Appendix A. GitHub Enterprise
Most people understandably equate GitHub (the company) with GitHub.com (the website), but it's interesting to note that they're not one and the same.
The GitHub website, as important as it is to modern open and closed source software development, is not the only product that GitHub (the company) produces. The single largest other product from that team is called GitHub Enterprise, and it's a version of the GitHub software that can be deployed inside a corporate firewall—like having your own private GitHub.com.
The two products are very similar from a user's point of view, but there are some important differences. It can sometimes be hard to imagine the kinds of difficulties that Enterprise is designed to solve, but keep in mind that it's for large teams.
# Installation
Using GitHub Enterprise isn't as easy as signing up for an account. You're responsible for all the infrastructure and maintenance, including installation, updates, system maintenance, keeping the machine running, and so on. However, if your company is considering Enterprise, it's likely you already have specialists who are already doing this for other services.
The GitHub team has also made it pretty easy for them. The software comes as a pre-packaged virtual machine in a variety of formats, so you'll likely find something that fits into your infrastructure. Once the machine is running, most of the configuration can be done with a web interface, but there are some tricky bits like network configuration and port forwarding that aren't easy for the layperson to get right.
# Administration
Since you're in control of the environment in which Enterprise runs, you now have a lot of concerns that the typical GitHub.com user does not. GitHub Enterprise has an administration interface for dealing with these issues, which doesn't exist on GitHub.com. It allows management of things like system resources, reports, search, and many others.
Also, while GitHub.com has its own user system, GitHub Enterprise can optionally plug in to your organization's existing authentication system. This allows a company's IT organization to manage user identities in one single place, rather than having to duplicate a lot of effort when a new team member hires on. It also eases the initial transition, when perhaps thousands of people will need new accounts. Several systems are supported, including LDAP and SAML, as well as plain old email and password.
# Endpoints
The complete GitHub API is also available on an Enterprise instance; you just need to send your requests to _https://<hostname>/api/v3_ instead of _https://api.github.com/_. You can imagine that some users have accounts on both an Enterprise instance as well as GitHub.com, and many applications have started supporting this scenario.
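An application supporting both hosts might select the API root with a small helper. This is a hypothetical sketch (`apiRoot` is not part of any GitHub library); it just encodes the two URL patterns described above.

```javascript
// Hypothetical helper that picks the right API root for a given host:
// the public site uses api.github.com, while Enterprise serves the API
// under /api/v3 on the instance's own hostname.
function apiRoot(host) {
  if (host === "github.com") {
    return "https://api.github.com";
  }
  return "https://" + host + "/api/v3";
}

var publicRoot = apiRoot("github.com");
var enterpriseRoot = apiRoot("github.bigdevcorp.example.com");
```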
# Full Hostnames Versus Mount Points
One of the main differences between GitHub.com and an Enterprise setup is often in the way that hostnames are set up. GitHub.com has several hostnames for various content served. An incomplete list includes:
_github.io_
Hosting Jekyll blogs for users and project pages
_gist.github.com_
Hosting gists
_raw.githubusercontent.com_
Hosting raw pages (unprocessed files)
For a variety of reasons, Enterprise GitHub installations often don't retain the same mapping. An Enterprise installation might look like:
_github.bigdevcorp.example.com/pages/xrd/somerepo_
Hosting gh-pages sites
_github.bigdevcorp.example.com/gists_
Hosting gists
As you can see, Enterprise installations often map the subdomains to a subdirectory rather than a different hostname. This simplifies the setup of the Enterprise installation. But it means that some tools require reconfiguration.
For the command-line Gist tool, you need to export an environment variable that specifies the Gist URL:
$ export GITHUB_URL=http://github.bigdevcorp.example.com/
For the command-line Hub tool, you need to use a different variable—`GITHUB_HOST`:
$ GITHUB_HOST=github.bigdevcorp.example.com hub clone myproject
# Command-Line Client Tools: cURL
We show in Chapter 1 how to use cURL to make a request against the API on the main GitHub.com site. If you wanted to do this against an Enterprise site, your request would look a little different:
$ curl -i https://github.bigdevcorp.example.com/api/v3/search/repositories?q=@ben
# Example Request Using a Client Library
If you use a client library, most provide a way to configure the library to use a different endpoint, as is required when you are using an Enterprise GitHub instance.
This book documents connecting to GitHub using five different languages: Ruby, Java, JavaScript, Python, and C#. Here are examples in each language. With these snippets in hand, any example in the book can be converted to work against a GitHub Enterprise server.
## Ruby Client Configuration
For the github_api Ruby gem (whose top-level constructor is `Github.new`), use code like this:
github = Github.new(
  basic_auth: 'login:password',
  endpoint: 'https://github.bigdevcorp.example.com/api/v3/'
)
puts github.repos.list
## Java
For the EGit Java library, this code specifies an Enterprise endpoint:
GitHubClient client = new GitHubClient("github.bigcorpdev.example.com");
UserService us = new UserService(client);
us.getUser("internaluser");
When you create a new GitHub-backed service object of any type, you parameterize the service constructor with the customized client object.
Also, note that this library is specifically configured for version 3 (v3) of the API (you cannot specify another version). If you need to use a newer version of the API, you will need to make sure you are using the correct version of the EGit libraries. And, unfortunately, there is no way to use an older version of the API with this Java client if you have an outdated Enterprise server that for some reason cannot be upgraded.
## JavaScript
The JavaScript library we write about in this book (GitHub.js) uses the following syntax to specify a GitHub Enterprise backend:
var github = new Github({
apiUrl: "https://github.bigdevcorp.example.com/api/v3"
...
});
## Python
The agithub client we use in Chapter 4 does not permit parameterizing an Enterprise endpoint when creating the GitHub client. To use an Enterprise endpoint you need to define a new class that takes the place of the built-in `agithub.GitHub` client and then use it instead:
from agithub.base import API, Client, ConnectionProperties

class GitHubEnterprise(API):
    def __init__(self, api_url, *args, **kwargs):
        props = ConnectionProperties(
            api_url = api_url,
            secure_http = True,
            extra_headers = {
                'accept' : 'application/vnd.github.v3+json'
            }
        )
        self.setClient(Client(*args, **kwargs))
        self.setConnectionProperties(props)

g = GitHubEnterprise('github.mycorp.com', 'myusername', 'mypassword')
## C#
The default behavior of the Octokit library is to connect to GitHub.com, but it's relatively straightforward to give it another API host instead. Simply replace the instantiation of the `GitHubClient` object with something like this:
var ghe = new Uri("https://github.myenterprise.com/");
var client = new GitHubClient(new ProductHeaderValue("my-cool-app"), ghe);
# Management API
Enterprise servers have a special additional API section that isn't available on GitHub.com, called the Management Console API. It allows you to do things like check settings, maintain SSH keys, manage your license, and so on. Nearly anything you can do from the web management console, you can do through the API (so you can script management tasks when desirable).
# Documentation
Documentation for the Enterprise API is available at _https://developer.github.com/v3/enterprise_.
# Appendix B. Ruby, NodeJS, (and the Shell) at GitHub
The founders of GitHub all had deep ties and contributions to the Ruby programming language, so we cover it more than other languages in this book.
In recent years, NodeJS (JavaScript for the server side) has grown in popularity, and JavaScript has always been an interesting language because it works on both the client side and server side. GitHub has offered several popular open source projects written in NodeJS.
For these reasons, this appendix gives a little more detail on using these two languages.
In addition, some fluency with the shell is beneficial. There are many GUI programs that hide the command line from you, but to truly dive deep into the GitHub API, it is worthwhile to use the command line inside a shell. These examples all work with bash (the Bourne Again Shell), but are careful not to use any advanced features of bash (so they should convert to other shells if you strongly favor another shell).
# GitHub and Ruby
When the history of GitHub is documented, the Ruby language will take its place as a major character. Tom Preston-Werner (one of the three founders of GitHub) built Grit, the initial library for using Git with Ruby. You can host blogs on GitHub for free using a tool called Jekyll, which is built with Ruby. Gollum, the technology that powers GitHub wikis, is built using Grit and runs on Ruby.
To understand GitHub, it is best to understand a little bit of Ruby. You can use many of the tools used at GitHub by simply installing Ruby, and not knowing any Ruby syntax. This book will not require you to become an expert in Ruby, but will ask you to read through Ruby code. We write in a literal, readable way, so that anyone with basic software developer skills and mastery of the English language should be able to understand the tools we are using. Ruby is not a perfect language, but is a useful addition to a developer's toolkit because of its focus on developer productivity.
## Installing Ruby
There are many ways to get Ruby but not all of them are created equal. As a long-time user I have experienced the pain of using a preinstalled Ruby or one from a package manager, and generally these installation methods provide a suboptimal experience. If you are not familiar with Ruby, use this appendix to get through installation with the least friction and trouble.
You might already have a version of Ruby installed. Mac OS X comes bundled with Ruby and various flavors of Linux do as well (or provide a quick and easy installation through the built-in package manager like "apt-get"). However, I recommend that you use the method of installation described here rather than using the stock installed version of Ruby you might already have on your system. Often Ruby packages require a specific version of Ruby, though they may work with other versions. The problem is that you might encounter subtle bugs that have never been seen before, and using the methods described here will make it trivial to install any version of Ruby that you need side by side with any other version. You can guarantee you are using the correct version, and the method described here will not interfere with any other previously installed version of Ruby you already have on your system.
To install Ruby, use RVM. RVM stands for Ruby Version Manager. RVM allows you to install multiple versions of Ruby on your machine and have them interoperate without conflict. You will probably only need to install a single version of Ruby to use the examples in the book. And RVM makes it so that if you choose to install another version, you will not have to reconfigure any applications that relied on the other versions installed.
Installation of Ruby using RVM depends on your operating system. If you are using Mac OS X or Linux, your installation will probably be as simple as running these commands from a shell:
$ \curl -sSL https://get.rvm.io | bash -s stable
This will install RVM and Ruby.
If you are running Windows, you can use RVM to install Ruby, but the instructions are more complicated; refer to the RVM documentation to do so. A better option is to install something like VirtualBox (a virtual machine manager) and run RVM inside a Linux virtual machine (VM). Windows is, sadly, a second-class citizen with Ruby and RVM, and a Linux host has a far wider community to support you. Many native gems for Ruby don't compile properly on Windows, so you can save yourself considerable time by running everything inside a free (as in beer and as in speech) Linux VM on VirtualBox instead of fighting with running everything directly on Windows.
## Important Ruby and RVM Concepts
Here are a few tips when using Ruby and RVM:
Gemfile
Ruby packages libraries in a format called a gem. A gemfile is a manifest that describes which gems your application needs. Gemfiles make it simple to install all the required libraries: run the `bundle` command from a shell prompt and all libraries will be installed, which can include downloading from the network and building from source if compilation is required.
_.ruby-version_ or _.rvmrc_
These two files tell your application (or shell) which version of Ruby to use. Often applications will include this file as a part of their package. If you use RVM, it will either switch to that version of Ruby or prompt you to install that version. Imagine that you have an application that only runs on Ruby 2.1.3. You can create a file called _.ruby-version_ , which contains the string `ruby-2.1.3` and when your application starts, it will automatically use that version of Ruby. There are other Ruby-based tools (like the zero-configuration web server Pow) that are aware of files like _.ruby-version_ and will properly use the correct version of Ruby if they see this file.
_config.ru_
This is a file used to run Ruby applications using Rack. Rack is a web server interface, compatible with many application servers. If you see a _config.ru_ file, you can run this application with many different servers. These can be powerful frontends used in production on many large sites on the Internet, or they can be minimal servers used just on a single laptop; Rack makes it easy to set up a server.
## Potential Problems Installing Ruby
Missing system tools
If you are running Mac OS X, you need to install Xcode and the command-line tools. If you are a software developer, you probably have these already installed. If not, review online documentation to install these. If you are running Linux, you might not have installed the compiler chain; you can install all the build tools you will ever need using this command: `sudo apt-get install build-essential`. This can take a while, but will ensure you have all the tools necessary for building RVM and any binary gems.
Missing developer libraries
There are some libraries that support Ruby (readline support, as an example, which allows you to use command-line history inside of an interactive Ruby shell) that are not always installed or available to the RVM tool. RVM has greatly improved in detecting the correct libraries, and there are often notes that tell you how to properly configure these libraries. Make sure to read the output printed to the screen as you install Ruby using RVM for special instructions specific to your platform.
# GitHub Is Excited about NodeJS
NodeJS is the server-side version of JavaScript. JavaScript is the only ubiquitous client-side programming language for the Web. Between Ruby and JavaScript, you can build any web application you need. Tools like Hubot show the benefits of a language like JavaScript running on the NodeJS platform, which facilitates building "fast, scalable networked applications."
## NodeJS Installation
The nodejs.org web page offers various binary installers. These are generally the best way to install the most recent version of NodeJS.
## Node Version Manager
NVM stands for "Node Version Manager" and is a direct analogue of RVM. Like RVM, NVM allows you to install multiple versions of NodeJS on a single machine and switch between them seamlessly. This is very useful with a tool like NodeJS that iterates rapidly (and whose modules are often tested against only a very new version of NodeJS). NVM runs on OS X or Linux. To install, run these commands from a shell prompt:
$ curl -o- \
https://raw.githubusercontent.com/creationix/nvm/v0.25.3/install.sh | \
bash
This will install NVM for you. You then might need to run `source ~/.bash_profile` to load the NVM scripts. Once this is completed, you are able to run NVM commands:
$ nvm install 0.10 # Install version 0.10
$ nvm use 0.10 # Use version 0.10
There are many more commands available with NVM, all of which can be found at the repository where the tool is hosted.
## package.json
Much like Ruby has a Gemfile that lists required libraries, NodeJS has an equivalent file. In NodeJS, this file is called _package.json_. To install all required libraries for any project, use the `npm` tool (installed by default when you install NodeJS using NVM). Running `npm install` without any arguments will install all libraries specified in the project's _package.json_ file. If you want to add a package to an existing _package.json_ file, you can append `--save` to the `npm install` command and `npm` will update _package.json_ for you once the installation of the package has completed.
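A minimal _package.json_ might look like this (the project name and the single dependency shown here are illustrative, not from any project in the book):

```json
{
  "name": "my-node-app",
  "version": "0.1.0",
  "dependencies": {
    "express": "^4.13.0"
  }
}
```

Running `npm install --save express` in a project directory would add an `express` entry like the one above to `dependencies` for you.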
# Command-Line Basics and the Shell
Though most chapters have focused on a specific programming language (aside from Chapter 1), all of the chapters contain command-line invocations. There are a few intricacies when using the shell you might not be familiar with that we will explain here, with an actual example of each.
## Shell Comments
If you type a hash character (`#`) into a shell command, the rest of the line is considered a comment. This makes it easy to document commands on the same line:
$ cat file.txt # This prints out the file "file.txt"
This command ends after the `file.txt` string. We use this often throughout the appendix to document shell commands.
## Providing Variables to Commands
When a process runs in the shell, it runs within an environment, and this environment can be configured with key/value pairs called environment variables. A common use is keeping secrets out of source code: a program reads its password from an environment variable that you supply at runtime rather than storing it in the source. You specify environment variables either as key/value pairs joined by an equals sign in front of a command, or by using the `export` command to persist them across commands:
$ PASSWORD=MyPwd123 myProgram # myProgram retrieves the variable PASSWORD
$ export PASSWORD=MyPwd123
$ myProgram # PASSWORD is now a persisted key value
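On the program side, reading that variable is a one-liner. This sketch assumes the hypothetical `PASSWORD` variable from the shell example above; in a real Node program you would pass `process.env` instead of a literal object.

```javascript
// Sketch of a program that reads its secret from the environment rather than
// from source code. PASSWORD is the hypothetical variable name from the shell
// example above; in a real Node program you would pass process.env.
function getPassword(env) {
  return env.PASSWORD || null; // null when the variable was never exported
}

var supplied = getPassword({ PASSWORD: "MyPwd123" }); // variable was set
var missing = getPassword({});                        // variable was not set
```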
## Splitting Commands into Multiple Lines
The shell invokes commands when you hit the Enter key. But there are times when you want to break a command into multiple lines for readability. In this case, break each line up using the backslash character:
$ git log -S http
...
$ git \
log \
-S \
http
...
Though not the most compelling command to break into multiple lines, this example shows two commands that do exactly the same thing.
## Piping Output to Successive Commands
Shell commands were written in an era when each program fulfilled a small, focused set of functionality, in stark contrast to today's monolithic GUI programs. Each program generally did a few simple things and then passed information to another program for further processing. Programs needed an elegant way to pass data between each other, and the pipe was born. Pipes facilitate communication between processes: one command's output becomes another command's input.
$ cat /etc/mime.types | grep http
application/http
application/vnd.httphone
application/x-httpd-eruby rhtml
application/x-httpd-php
phtml pht php
application/x-httpd-php-source phps
This invocation uses the `cat` program to output the file _/etc/mime.types_ , and then passes this information to the `grep` program, which looks inside the input to find all lines that contain the string `http`.
## Redirection
Similar to the pipe, shells support redirecting output to files using the `>` and `>>` characters. `>` will overwrite an existing file (or create a new file if it does not exist) while the double `>>` string will append to a file:
$ cat /etc/mime.types | grep http > saved-output.txt
After running this command, the file _saved-output.txt_ will contain the same text as was produced in the prior example for the pipe. The file will be overwritten if it existed already.
# Index
### Symbols
* .ruby–version, Important Ruby and RVM Concepts
* .rvmrc, Important Ruby and RVM Concepts
* 200 status code, Success (200 or 201)
* 201 status code, Success (200 or 201), Successful Creation (201)
* 304 status code, Nothing Has Changed (304)
* 400 status code, Naughty JSON (400)
* 422 status code, Improper JSON (422)
* –c switch, Gist from the Command Line
* –i switch, Debugging Switches for cURL
* –o switch, Gist from the Command Line
* –P switch, Gist from the Command Line
* –s switch, Parsing JSON from the Command Line
* –v switch, Debugging Switches for cURL
* ––help switch, Gist from the Command Line
### A
* access control, The API
* Activity API
* and pull requests, Code Reviews via Pull Requests
* contents of, The Activity API
* overview, Activity API Overview
* affordances, Going Deeper into the Gist API
* AGitHub library, AGitHub
* Android, Android and the Git Data API-Summary
* and Git Data API, Android and the Git Data API-Summary
* Android Studio installation, Installing Android Studio
* development tools for, Android Development Tools
* Java SDK installation, Installing the Java SDK
* Android application example
* and GitHub services, GitHub Services
* automated testing for, Android Automated Testing-Android UI Tests
* base SHA implementation, The Base SHA from the Repository and Branch
* blob creation for, Creating the Blob
* code for logging in to GitHub, Code to Log In to GitHub-Code to Log In to GitHub
* code for putting content into GitHub, Code to Talk to GitHub-Code to Talk to GitHub
* creating commit for, Creating the Commit
* creating new project, Creating a New Project-Default Android Main
* default main for, Default Android Main-Default Android Main
* Gradle build file editing, Editing the Gradle Build File-Editing the Gradle Build File
* implementation, Application Implementation-Passing All Our Tests
* master resource updating, Updating the Master Resource
* setup for, Setting Up-Installing Android Studio
* testing, Passing All Our Tests-Passing All Our Tests
* tree generation, Generating a Tree
* UI tests for, Android UI Tests-Android UI Tests
* unit tests for, Unit Tests for Our GitHub Client-Unit Tests for Our GitHub Client
* writing blog content, Writing the Blog Content
* Android Studio, Creating a New Project
* Android Virtual Devices (AVDs), Creating AVDs for development
* AndroidManifest.xml file, Default Android Main
* AngularJS, Adding the Support Libraries
* application using GitHub.js, An AngularJS Application Using GitHub.js-CoffeeTech.js
* Jasmine test framework for, Making Our App Testable
* apex domains, DNS settings
* APIs, reasons for using, Why APIs and Why the GitHub API?
* async keyword, Sending the Request
* authentication
* for coffee shop database app, Authentication Requires a Server-Fixing Authentication with Firebase
* GitHub API, Authentication-Simplified OAuth flow
* OAuth for, OAuth-Simplified OAuth flow
* of search API user, Authentication
* username and password, Username and Password Authentication
* authorization token
* for Hubot, Setting up our webhook
* registering for events with, Using the OAuth Token to Register for Events-Using the OAuth Token to Register for Events
* await keyword, Sending the Request
### B
* BASH, Operating System Prerequisites
* body parser middleware, Securing the webhook
* Booleans, Code to Log In to GitHub
* Bootstrap, Customizing Styling (CSS), Adding the Support Libraries, Adding Login
* build_commit() method, Using the Rugged Library
### C
* C#, Let's Write an App, C#
* caching
* and scraping, Writing Tests and Caching-Writing Tests and Caching
* tags, Conditional Requests to Avoid Rate Limitations
* callback, Peering into the response object, JavaScript and the Git Data API
* chat robot (see Hubot)
* CLR (Common Language Runtime), Let's Write an App
* CNAME file, The CNAME file
* code reviews, Code Reviews via Pull Requests-Setting up our webhook
* code search, Code Search
* code snippets, Code snippets
* coffee shop database app, Building a Coffee Shop Database on GitHub-Summary
* accepting pull requests, Accepting Pull Requests
* and coffeetech.js. file, CoffeeTech.js-CoffeeTech.js
* AngularJS application using GitHub.js, An AngularJS Application Using GitHub.js-CoffeeTech.js
* application database structure visualization, Visualize Application Data Structure
* city data for, City Data
* displaying data, Displaying (Soon-to-Be) User-Reported Data-User-Contributed Data
* error handling, Errors Already?-Errors Already?
* geocoding support, Geocoding Support-Geocoding Support
* login for, Adding Login
* mapping hostnames, Mapping Hostnames
* safe login implementation for, Toward a Safe Login Implementation-Implementing Firebase Login
* setup, Set Up-Adding the Support Libraries
* support libraries for, Adding the Support Libraries
* test data for, Test Data
* testability of app, Making Our App Testable
* user–contributed data for, User-Contributed Data-User-Contributed Data
* CoffeeScript
* characteristics, Writing a Hubot Extension
* extension boilerplate, Extension boilerplate
* indentation in, Writing a Hubot Extension
* Jasmine support tests, Writing tests for Hubot extensions
* Collaborators API, unifying usernames via, Unifying usernames via the Collaborators API
* combined status, Combined Status
* command line
* basics, Command-Line Basics and the Shell
* editing Gollum from, Hacking Gollum
* gists from, Gist from the Command Line
* Jekyll command line tool, Using the Jekyll Command-Line Tool-Using the Jekyll Command-Line Tool
* launching Hubot from, Sanitizing our source code
* parsing JSON from, Parsing JSON from the Command Line
* piping output to successive commands, Piping Output to Successive Commands
* providing variables to commands, Providing Variables to Commands
* redirection, Redirection
* shell comments, Shell Comments
* splitting commands into multiple lines, Splitting Commands into Multiple Lines
* commit (Android app example), Creating the Commit
* Commit Status
* example app, Let's Write an App-Status Handler
* Commit Status API, .NET and the Commit Status API-Summary
* and Visual Studio, Visual Studio
* combined status, Combined Status
* creating a status, Creating a Status
* development environment for app, Development Environment-Xamarin Studio
* libraries for, Libraries
* OAuth flow, OAuth Flow-OAuth Flow
* raw status, Raw Statuses
* sending request, Sending the Request-Sending the Request
* status handler, Status Handler-Status Handler
* statuses in, The API-Creating a Status
* Xamarin Studio, Xamarin Studio-Xamarin Studio
* comp pages, fixing linking between, Fixing Linking Between Comp Pages
* conditional HTTP headers, Conditional Requests to Avoid Rate Limitations
* conditional requests, Conditional Requests to Avoid Rate Limitations
* config.ru, Important Ruby and RVM Concepts
* continuous–integration service, .NET and the Commit Status API
* core rate limits, GitHub API Rate Limits
* CORS, CORS Support
* createBlob function, Creating the Blob
* createCommit() function, Creating the Commit
* createServices() function, GitHub Services
* create_controls method, Windowing and Interface, GitHub Login
* credentials, Windowing and Interface
* CSS
* for Jekyll blogs, Customizing Styling (CSS)-Customizing Styling (CSS)
* Gollum limitations with, No styling or JavaScript
* cURL, cURL
* and GitHub Enterprise, Command-Line Client Tools: cURL
* and rate limit retrieval, Reading Your Rate Limits
* debugging switches for, Debugging Switches for cURL-Debugging Switches for cURL
* installing, cURL
### D
* debugging, cURL switches for, Debugging Switches for cURL-Debugging Switches for cURL
* describe functions, Writing tests for Hubot extensions
* Destroy() method, Windowing and Interface
* directed acyclic graphs (DAG), The Base SHA from the Repository and Branch
* DNS settings, DNS settings
* doInBackground() function, Code to Log In to GitHub
* doPost() function, Code to Log In to GitHub
* do_layout method, GitHub Login
* do_login method, GitHub Login
### E
* Enterprise (see GitHub Enterprise)
* Entity Tag, Conditional Requests to Avoid Rate Limitations
* error handling, Errors Already?-Errors Already?
* Espresso, Android UI Tests
* ETag, Conditional Requests to Avoid Rate Limitations
* Exitwp, Importing from the Wordpress XML
* express middleware, Securing the webhook
* Express.js, Securing the webhook
* extension methods, OAuth Flow
* extensions
* boilerplate, Extension boilerplate
* Hubot, Writing a Hubot Extension
* writing tests for, Writing tests for Hubot extensions-Writing tests for Hubot extensions
### F
* feeds, The Activity API
* fields, invalid, Improper JSON (422)
* filters, Parsing JSON from the Command Line
* Firebase
* fixing authentication with, Fixing Authentication with Firebase-Fixing Authentication with Firebase
* implementing login with, Implementing Firebase Login-Implementing Firebase Login
* testing, Testing Firebase-Testing Firebase
* footers, in Gollum wikis, Structural components
* forking, Inviting Contributions with GitHub "Fork"
* full format, Retrieving formatted content
### G
* gem install gist command, Gist from the Command Line
* Gemfile, Gists as Fully Functioning Apps, Important Ruby and RVM Concepts
* generateContent() function, Writing the Blog Content
* generateTree() function, Generating a Tree
* geocoding, Geocoding Support-Geocoding Support
* get_credentials method, Using the Rugged Library
* get_filename method, Writing Jekyll Posts
* gh-pages branch, The gh-pages branch
* gists, Gists and the Gist API-Summary
* and hypermedia, Going Deeper into the Gist API
* and RESTful APIs, Going Deeper into the Gist API
* as fully functioning apps, Gists as Fully Functioning Apps
* as repositories, Gists Are Repositories
* creating, Easy Code Sharing
* for rendering other gists, Gists that Render Gists-Using Hypermedia Data from Octokit
* from command line, Gist from the Command Line
* in HTML, Embedding Gists Inside HTML
* in Jekyll blogs, Embedding Inside Jekyll Blogs
* using hypermedia data from Octokit, Using Hypermedia Data from Octokit
* git credential fill command, Git Credential Helper
* Git credential helper, Git Credential Helper
* git log, Gists Are Repositories
* git push, Gists Are Repositories
* GitHub API
* accessing content from Web, Accessing Content from the Web-Retrieving formatted content
* and JSON-P, JSON-P
* as hypermedia API, Breadcrumbs to Successive API Paths
* authentication, Authentication-Simplified OAuth flow
* conditional requests to avoid rate limitations, Conditional Requests to Avoid Rate Limitations
* CORS support, CORS Support
* cURL and, cURL
* following a Hypermedia API, Following a Hypermedia API
* important headers, Important Headers
* JSON and, The JavaScript Object Notation (JSON) Format-Debugging Switches for cURL
* rate limits, GitHub API Rate Limits
* reading and writing data from, The Unclad GitHub API-Summary
* reasons for using, Why APIs and Why the GitHub API?
* status codes, Status Codes-Reading Your Rate Limits
* GitHub Enterprise, GitHub Enterprise-Documentation
* administration, Administration
* and C#, C#
* and cURL, Command-Line Client Tools: cURL
* and Java, Java
* and JavaScript library, JavaScript
* and Management Console API, Management API
* and Python, Python
* API, Structure of This Book
* documentation, Documentation
* endpoints, Endpoints
* example request using a client library, Example Request Using a Client Library
* full hostnames vs. mount points, Full Hostnames Versus Mount Points
* installation, Installation
* Ruby client configuration, Ruby Client Configuration
* GitHub login (search API GUI application), GitHub Login-GitHub Login
* GitHub Markup formats, Markup and Structure
* GitHub search (search API GUI application), GitHub Search-GitHub Search
* GitHub wikis, GitHub Wikis with Gollum-Summary
* GitHub.io personal blog site, Using a GitHub.io Jekyll blog
* GitHub.js library, Adding the Support Libraries, An AngularJS Application Using GitHub.js-CoffeeTech.js
* gitignore file, Jekyll Blog Quick Start
* Gollum
* adding structural components, Structural components
* and repository–linked wikis, Repository Linked Wikis
* as hackable wiki, Hacking Gollum
* basics, "The Story of Smeagol..."-Inserting images
* code snippets in wikis, Code snippets
* fixing linking between comp pages, Fixing Linking Between Comp Pages
* GitHub wikis with, GitHub Wikis with Gollum-Summary
* image editor construction, The Starting Point of a Gollum Editor
* improving revision navigation, Improving Revision Navigation
* inserting images, Inserting images
* link tag, Links
* markup options, Markup and Structure-Inserting images
* optimizing for image storage, Optimizing for Image Storage-Optimizing for Image Storage
* programmatically handling images, Programmatically Handling Images-Programmatically Handling Images
* reviewing wiki on GitHub, Reviewing on GitHub-Reviewing on GitHub
* Rugged library for adding files to wiki, Using the Rugged Library-Using the Rugged Library
* styling limitations, No styling or JavaScript
* Google Maps, Building a Coffee Shop Database on GitHub, Displaying (Soon-to-Be) User-Reported Data, Displaying (Soon-to-Be) User-Reported Data
* Gradle, Editing the Gradle Build File-Editing the Gradle Build File
* grep tool, Gist from the Command Line
* Grit
* origins of, GitHub and Ruby
* Rugged as successor to, Using the Rugged Library
* GUI search API application
* code for, The Code-Displaying Results
* displaying results, Displaying Results-Displaying Results
* Git credential helper, Git Credential Helper
* GitHub login, GitHub Login-GitHub Login
* GitHub search panel, GitHub Search-GitHub Search
* packaging, Packaging
* Python as implementation language for, Python-PyInstaller
* search API, Our Example Application-Summary
* windowing and interface, Windowing and Interface-Windowing and Interface
### H
* hash-based message authentication code (HMAC), Securing the webhook
* headers
* in GitHub API responses, Important Headers
* in Gollum wikis, Structural components
* Markdown tags for, Jekyll Markup
* hear method, Responding to the PR request
* Heroku
* API as webhook endpoint, Using the OAuth Token to Register for Events
* Hubot and, Considerations and Limitations
* Hubot installation on, Installation on Heroku-Setting Up Heroku
* publishing into, Sanitizing our source code
* setup, Setting Up Heroku
* hostnames, mapping, Mapping Hostnames
* HTML, Links
* gists in, Embedding Gists Inside HTML
* Markdown shortcuts for, Jekyll Markup
* HTTP Basic authentication, Username and Password Authentication
* HTTP caching tags, Conditional Requests to Avoid Rate Limitations
* HTTP handler, handling PR notifications as post requests over, Handling PR Notifications as Post Requests over HTTP-Sanitizing our source code
* HTTP server app example
* and Visual Studio, Visual Studio
* Commit Status API, Let's Write an App-Status Handler
* development environment for, Development Environment-Xamarin Studio
* libraries for, Libraries
* sending request, Sending the Request-Sending the Request
* Xamarin Studio, Xamarin Studio-Xamarin Studio
* HTTP status codes, Status Codes-Reading Your Rate Limits
* hub tool, Using a GitHub.io Jekyll blog, Triggering Real Pull Requests
* Hubot, CoffeeScript, Hubot, and the Activity API-Summary
* and pull request response object, Peering into the response object
* basic, Creating a Vanilla Hubot
* capabilities of, Planning for PR Satisfaction Guaranteed
* channel creation, Naming the channel-Naming the channel
* code reviews via pull requests, Code Reviews via Pull Requests-Setting up our webhook
* considerations and limitations, Considerations and Limitations
* exploring vocabulary of, Exploring the Hubot vocabulary
* extensions, Writing a Hubot Extension
* first conversation, A first conversation
* handling PR notifications as post requests over HTTP, Handling PR Notifications as Post Requests over HTTP-Sanitizing our source code
* installation on Heroku, Installation on Heroku-Setting Up Heroku
* programming concepts, Writing a Hubot Extension
* responding to pull requests, Responding to the PR request-Responding to the PR request
* running locally, Running Hubot Locally
* sanitizing source code, Sanitizing our source code
* securing webhook, Securing the webhook-Securing the webhook
* sending PR data via webhook, Sending PR data via webhook-Sending PR data via webhook
* Slack account for, Creating a Slack Account-Naming the channel
* triggering real pull requests, Triggering Real Pull Requests-Triggering Real Pull Requests
* unifying usernames via Collaborators API, Unifying usernames via the Collaborators API
* using OAuth token to register for events, Using the OAuth Token to Register for Events-Using the OAuth Token to Register for Events
* webhook setup, Setting up our webhook
* writing tests for extensions, Writing tests for Hubot extensions-Writing tests for Hubot extensions
* Hubot brain
* and pull request state, Responding to the PR request
* user list from, The user list from the Hubot brain
* Hypermedia API, Why APIs and Why the GitHub API?
* following, Following a Hypermedia API
* gist and, Going Deeper into the Gist API
* hypermedia API, Breadcrumbs to Successive API Paths
* hypermedia data, Octokit, Using Hypermedia Data from Octokit
### I
* If-Modified-Since header, Conditional Requests to Avoid Rate Limitations
* If-None-Match header, Conditional Requests to Avoid Rate Limitations
* images
* adding to Jekyll, Adding Images to Jekyll
* Gollum tag format for, Inserting images
* Gollum–based editor for, Hacking Gollum-Fixing Linking Between Comp Pages
* handling programmatically, Programmatically Handling Images-Programmatically Handling Images
* optimizing repository for storage of, Optimizing for Image Storage-Optimizing for Image Storage
* incomplete_results field, Result Format
* indentation, CoffeeScript, Writing a Hubot Extension
* index.md file, Master Index File with Liquid Markup
* instrumentation registry, Android UI Tests
* integration tests, Android Automated Testing
* interactive Ruby shell (IRB), Refining with Interactive Ruby, Scraping Body and Author
* issue search, Issue Search
* issues, Structure of This Book
* items array, Result Format
### J
* Jasmine
* installation, Writing tests for Hubot extensions
* test framework for coffee shop database app, Making Our App Testable
* testing framework, Writing tests for Hubot extensions
* Java
* and GitHub Enterprise, Java
* SDK installation, Installing the Java SDK
* JavaScript, GitHub "First Class" Languages, JavaScript and the Git Data API-Summary
* coffee shop database app (see coffee shop database app)
* GitHub Enterprise and, JavaScript
* Gollum limitations with, No styling or JavaScript
* Jekyll, Ruby and Jekyll-Summary
* basics, What Is Jekyll?-Operating Jekyll Locally
* command line tool, Using the Jekyll Command-Line Tool-Using the Jekyll Command-Line Tool
* operating locally, Operating Jekyll Locally
* privacy levels with, Privacy Levels with Jekyll
* themes, Themes
* using the Jekyll command, Using the Jekyll Command
* watch switch, Using the Jekyll Command
* Jekyll blogs, Jekyll Blog Quick Start-Summary
* adding images to, Adding Images to Jekyll
* and CNAME file, The CNAME file
* command line tool, Using the Jekyll Command-Line Tool-Using the Jekyll Command-Line Tool
* custom CSS for, Customizing Styling (CSS)-Customizing Styling (CSS)
* DNS settings, DNS settings
* for Android app, Creating a Jekyll Blog
* gists in, Embedding Inside Jekyll Blogs
* GitHub.io site creation, Using a GitHub.io Jekyll blog
* hosting via gh-pages branch, The gh-pages branch
* hosting your own domain, Hosting On Your Own Domain-DNS settings
* importing from other blogs into, Importing from Other Blogs-Importing from Other Blogs
* importing from Tumblr, Exporting from Wordpress Alternatives
* markup, Jekyll Markup
* master index file creation with Liquid Markup, Master Index File with Liquid Markup
* publishing on GitHub, Publishing on GitHub, Publishing Our Blog to GitHub
* scraper setup, Setting Up-Setting Up
* scraping sites into, Scraping Sites into Jekyll-Publishing Our Blog to GitHub
* scraping tactics, Jekyll Scraping Tactics
* simple blog creation, Jekyll Blog Quick Start-Exporting from Wordpress Alternatives
* writing posts, Writing Jekyll Posts-Writing Jekyll Posts
* YAML Front Matter, YFM: YAML Front Matter
* jekyll gem, Operating Jekyll Locally
* jekyll new command, Jekyll Blog Quick Start-Jekyll Blog Quick Start
* jekyll --help command, Using the Jekyll Command
* jq, Parsing JSON from the Command Line-Parsing JSON from the Command Line
* JSON (JavaScript Object Notation), The JavaScript Object Notation (JSON) Format-Debugging Switches for cURL
* JSON.stringify, User-Contributed Data
* JSON-P, JSON-P
* JUnit library, Editing the Gradle Build File
### K
* Karma, Making Our App Testable
### L
* links
* Gollum tag, Links
* Markdown tags for, Jekyll Markup
* Liquid Markup
* Jekyll and, What Is Jekyll?
* master index file creation with, Master Index File with Liquid Markup
* origins, Jekyll Blog Quick Start
* Liquid tags, Fixing Linking Between Comp Pages
* list comprehension, Windowing and Interface
* logic tags, Master Index File with Liquid Markup
* login
* for Android app, Code to Log In to GitHub
* for coffee shop database app, Adding Login, Toward a Safe Login Implementation-Implementing Firebase Login
* in search API, GitHub Login-GitHub Login
* with Firebase, Implementing Firebase Login-Implementing Firebase Login
* LoginPanel class, GitHub Login
### M
* MacBooks, Operating System Prerequisites
* Management Console API, Management API
* Markdown, Markup and Structure
* and Jekyll, What Is Jekyll?
* and Jekyll markup, Jekyll Markup
* link tag, Links
* Maven, Code to Talk to GitHub
* Mechanize, Scraping Titles-Refinining with Interactive Ruby
* meta-tools, Preface
* Mono, Let's Write an App
* MonoDevelop, Xamarin Studio
### N
* Nancy library, Libraries-Status Handler
* network calls, synchronous, Windowing and Interface
* NodeJS, GitHub "First Class" Languages
* and Express.js, Securing the webhook
* and Hubot, Creating a Vanilla Hubot
* and package.json, package.json
* GitHub and, GitHub Is Excited about NodeJS
* installation, NodeJS Installation
* version manager, Node Version Manager
* node-github module, Responding to the PR request
* notifications, Activity API Overview
* NuGet, Xamarin Studio
* numerical values, in search queries, Search Operators and Qualifiers
* NVM (Node version manager), Node Version Manager
### O
* OAuth, OAuth-Simplified OAuth flow
* and Commit Status API, The API
* authorization process outline, OAuth Flow
* benefits of, Toward a Safe Login Implementation
* flow for Commit Status API, OAuth Flow-OAuth Flow
* for coffee shop database app login, Toward a Safe Login Implementation-Implementing Firebase Login, Implementing Firebase Login
* scopes, OAuth-Scope escalation
* simplified flow, Simplified OAuth flow
* tokens, OAuth-Simplified OAuth flow, Scopes: specified actions tied to authentication tokens, Using the OAuth Token to Register for Events-Using the OAuth Token to Register for Events
* OAuth2, Simplified OAuth flow
* Octokit, Gists that Render Gists
* and GitHub Enterprise client configuration, Ruby Client Configuration
* using hypermedia data from, Using Hypermedia Data from Octokit
* Octokit NuGet, Xamarin Studio
* Octokit Ruby, Gists that Render Gists
* OkHttp library, Editing the Gradle Build File
* onCreate function, Code to Log In to GitHub
* onPostExecute() function, Code to Log In to GitHub
* onView function, Android UI Tests
* operating system prerequisites, Operating System Prerequisites
* operators, search API, Search Operators and Qualifiers
* Organizations API, Structure of This Book
* output tags, Master Index File with Liquid Markup
### P
* package.json, package.json
* password authentication, Username and Password Authentication
* pipes, Piping Output to Successive Commands
* post requests, handling PR notifications as, Handling PR Notifications as Post Requests over HTTP-Sanitizing our source code
* posts, blog, Writing Jekyll Posts-Writing Jekyll Posts
* privacy, Jekyll, Privacy Levels with Jekyll
* public gists, Gists Are Repositories
* published variable, YFM: YAML Front Matter
* publishing Jekyll blogs, Publishing Our Blog to GitHub
* pull requests
* and response object, Peering into the response object
* and user list from Hubot brain, The user list from the Hubot brain
* assigning an active chat room user to, Assigning an active chat room user-Assigning an active chat room user
* code reviews via, Code Reviews via Pull Requests-Setting up our webhook
* handling notifications as post requests over HTTP, Handling PR Notifications as Post Requests over HTTP-Sanitizing our source code
* responding to, Responding to the PR request-Responding to the PR request
* securing webhook, Securing the webhook-Securing the webhook
* sending data via webhook, Sending PR data via webhook-Sending PR data via webhook
* testing Hubot with, Triggering Real Pull Requests-Triggering Real Pull Requests
* unifying usernames via Collaborators API, Unifying usernames via the Collaborators API
* with coffee shop database app, Accepting Pull Requests
* PyInstaller, PyInstaller, Packaging
* Python
* 2.7 vs. 3, WxPython
* AGitHub library, AGitHub
* and code for search API application, The Code-Displaying Results
* and Git credential helper, Git Credential Helper
* and GitHub Enterprise, Python
* as implementation language for search API application, Python-PyInstaller
* PyInstaller, PyInstaller
* WxPython project, WxPython
### Q
* qualifiers, search API, Search Operators and Qualifiers
### R
* rate limits
* and authenticated requests, GitHub API Rate Limits
* authentication and, Authentication
* conditional requests to avoid, Conditional Requests to Avoid Rate Limitations
* headers for, Important Headers
* reading, Reading Your Rate Limits
* redirection, Redirection
* RedirectToOAuth method, OAuth Flow
* repo scope, The API
* repositories
* and associated Gollum wikis, Repository Linked Wikis
* gists as, Gists Are Repositories
* search API, Repository Search
* respond callback, Peering into the response object
* respond method, Responding to the PR request
* RESTful APIs, Going Deeper into the Gist API
* retrieveBaseSha() function, The Base SHA from the Repository and Branch
* return value, search API, Result Format
* revisions, improving navigation of, Improving Revision Navigation
* Ruby, GitHub "First Class" Languages
* client configuration with GitHub Enterprise, Ruby Client Configuration
* for scraping titles, Scraping Titles
* gem installation, Hacking Gollum
* GitHub and, GitHub and Ruby-Potential Problems Installing Ruby
* installation, Installing Ruby
* installation problems, Potential Problems Installing Ruby
* libraries (see Nancy) (see Octokit) (see Rugged) (see Sinatra)
* tips for using, Important Ruby and RVM Concepts
* Ruby IRB, Refinining with Interactive Ruby
* RubyZip, Programmatically Handling Images
* Rugged library, Programmatically Handling Images, Using the Rugged Library-Using the Rugged Library
* RVM (Ruby Version Manager), Installing Ruby
### S
* SaveFile function, Code to Talk to GitHub
* scopes, The API
* and OAuth tokens, Scopes: specified actions tied to authentication tokens
* escalation, Scope escalation
* limitations of, Scope limitations
* OAuth and, OAuth-Scope escalation
* score field, Result Format
* scraping
* author and body content, Scraping Body and Author
* into Jekyll, Scraping Sites into Jekyll-Publishing Our Blog to GitHub
* setting up a scraper, Setting Up-Setting Up
* tactics, Jekyll Scraping Tactics
* titles, Scraping Titles
* with Ruby IRB, Refining with Interactive Ruby
* writing tests and caching, Writing Tests and Caching-Writing Tests and Caching
* SDK (software development kit), Creating a New Project
* search API, Python and the Search API-Summary
* authentication, Authentication
* code search, Code Search
* displaying results, Displaying Results-Displaying Results
* example GUI application, Our Example Application-Summary
* general principles, Search API General Principles-Sorting
* Git credential helper, Git Credential Helper
* GitHub login, GitHub Login-GitHub Login
* GitHub search panel, GitHub Search-GitHub Search
* issue search, Issue Search
* operators and qualifiers, Search Operators and Qualifiers
* packaging, Packaging
* Python as implementation language for GUI application, Python-PyInstaller
* repository search, Repository Search
* result format, Result Format
* sorting of results, Sorting
* user flow, User Flow
* user search, User Search
* windowing and interface, Windowing and Interface-Windowing and Interface
* search query, Search Operators and Qualifiers
* search rate limits, GitHub API Rate Limits
* search results, displaying, Displaying Results-Displaying Results
* secret gists, Gists Are Repositories
* secure-compare module, Securing the webhook
* Session property, OAuth Flow
* SHA (secure hash algorithm), The Base SHA from the Repository and Branch
* shell, Operating System Prerequisites
* comments, Shell Comments
* piping output to successive commands, Piping Output to Successive Commands
* providing variables to commands, Providing Variables to Commands
* redirection, Redirection
* splitting commands into multiple lines, Splitting Commands into Multiple Lines
* sidebars, in Gollum wikis, Structural components
* silent mode, Parsing JSON from the Command Line
* Sinatra, Gists as Fully Functioning Apps
* and Nancy library, Libraries
* for Gollum image editor construction, The Starting Point of a Gollum Editor
* sitemaps, Going Deeper into the Gist API
* sizers, Windowing and Interface, GitHub Login
* Slack, Creating a Slack Account-Naming the channel
* Slack API, Assigning an active chat room user-Assigning an active chat room user
* sorting, search query results, Sorting
* source code management (SCM), Preface
* spyOn function, Making Our App Testable
* standard error, Parsing JSON from the Command Line
* status codes, Status Codes-Reading Your Rate Limits
* improper JSON (422), Improper JSON (422)
* invalid payload (400), Naughty JSON (400)
* no change (304), Nothing Has Changed (304)
* successful creation (201), Successful Creation (201)
* status handler, Status Handler-Status Handler
* String type, Code to Log In to GitHub
* subdomain, DNS setup with, DNS settings
* switches, cURL, Debugging Switches for cURL-Debugging Switches for cURL
* synchronous network calls, Windowing and Interface
### T
* target SDK, Creating a New Project
* testing
* Android app, Android Automated Testing-Android UI Tests, Passing All Our Tests-Passing All Our Tests
* coffee shop database app, Making Our App Testable
* Firebase, Testing Firebase-Testing Firebase
* text format, Retrieving formatted content
* themes, Jekyll, Themes
* titles, scraping, Scraping Titles
* tokens, OAuth, OAuth-Simplified OAuth flow, Scopes: specified actions tied to authentication tokens, Using the OAuth Token to Register for Events-Using the OAuth Token to Register for Events
* total_count field, Result Format
* tree (for Android app), Generating a Tree
* Tumblr, Exporting from Wordpress Alternatives
### U
* UI tests, Android UI Tests-Android UI Tests
* Ubuntu Linux virtual machine, Operating System Prerequisites
* unit tests, Unit Tests for Our GitHub Client-Unit Tests for Our GitHub Client
* uploading ZIP files, Programmatically Handling Images
* user search, User Search
* username authentication, Username and Password Authentication
* benefits of, Benefits of username authentication
* downsides to, Downsides to username authentication
* usernames, unifying via Collaborators API, Unifying usernames via the Collaborators API
* Users API, Structure of This Book
### V
* Vagrant, Operating System Prerequisites
* variables, providing to commands, Providing Variables to Commands
* VirtualBox, Operating System Prerequisites, Installing Ruby
* Visual Studio, Visual Studio
* Void type, Code to Log In to GitHub
### W
* watch switch, Using the Jekyll Command
* Web content
* accessing, Accessing Content from the Web-Retrieving formatted content
* accessing with JSON-P, JSON-P
* CORS requests, CORS Support
* response format specification, Specifying Response Content Format
* retrieving formatted content, Retrieving formatted content
* webhook
* for Hubot, Setting up our webhook
* securing, Securing the webhook-Securing the webhook
* sending PR data via, Sending PR data via webhook-Sending PR data via webhook
* wikis (see under Gollum)
* withId function, Android UI Tests
* Wordpress
* importing database as XML file, Importing from the Wordpress XML
* importing into Jekyll blogs from, From Wordpress
* importing with direct database access, Importing with direct database access
* write_review_file method, Fixing Linking Between Comp Pages
* WxPython project, WxPython
* WxWidgets, Windowing and Interface
### X
* Xamarin Studio, Xamarin Studio-Xamarin Studio
* XHR (XMLHttpRequest), Accessing Content from the Web
* X-GitHub-Media-Type header, Important Headers
* X-RateLimit-Limit, Important Headers
* X-RateLimit-Remaining, Important Headers
* X-RateLimit-Reset, Important Headers
### Y
* YFM (YAML Front Matter), YFM: YAML Front Matter
# About the Authors
**Chris Dawson** comes from a family of public school teachers. From an early age, computers provided an always fascinating and often frustrating complement to learning and teaching for Chris. Notably inconspicuous at several notable startups and technology companies like Apple, Virage and RealNetworks, Chris gratefully had the opportunity to live on three continents and experience the power and dynamism of diverse communities. As such, it is with great relish that Chris has been participating in and documenting one of the most exciting learning communities of the 21st century: GitHub.
**Ben Straub** is a lifelong developer, and enthusiast of the craft of making great software. He's written software for over 15 years, has authored several books, and has recorded educational software training videos. He enjoys reading, taking his kids on bike rides, chocolate, dogs, those little notebooks you carry around with you, photography, a good weekend hack, traveling, writing, food, craftsmanship, a great pen, Markdown, music, movies, and talking to amazing people.
# Colophon
The animal on the cover of _Building Tools with GitHub_ is a beagle, a small- to medium-sized breed of dog ( _Canis familiaris_ ). The modern beagle breed was developed in Great Britain in the 1830s, and was originally created to track small game animals, such as rabbits. Hunting by using beagles to track prey is known as "beagling."
Beagles are part of the hound family of dog breeds, but compared to other hounds beagles are small, with shorter legs and snouts. Beagles are most commonly tricolored (white, black, and brown), but can occasionally be found with only two of the three colors.
Beagles are well-regarded as household pets because of their even demeanor and high intelligence. They have made appearances in popular culture since Elizabethan times, from the works of Shakespeare to modern cartoon strips.
Many of the animals on O'Reilly covers are endangered; all of them are important to the world. To learn more about how you can help, go to _animals.oreilly.com_.
The cover image is from _Lydekker's Royal Natural History, Vol. 1_. The cover fonts are URW Typewriter and Guardian Sans. The text font is Adobe Minion Pro; the heading font is Adobe Myriad Condensed; and the code font is Dalton Maag's Ubuntu Mono.
1. Preface
1. Why APIs and Why the GitHub API?
2. Structure of This Book
3. Who You Are
4. What You Will Learn
5. GitHub "First Class" Languages
6. Operating System Prerequisites
7. Who This Book Is Not For
8. Conventions Used in This Book
9. Using Code Examples
10. Safari® Books Online
11. How to Contact Us
12. Acknowledgments
2. 1. The Unclad GitHub API
1. cURL
2. Breadcrumbs to Successive API Paths
3. The JavaScript Object Notation (JSON) Format
1. Parsing JSON from the Command Line
2. Debugging Switches for cURL
4. Important Headers
5. Following a Hypermedia API
6. Authentication
1. Username and Password Authentication
2. OAuth
7. Status Codes
1. Success (200 or 201)
2. Naughty JSON (400)
3. Improper JSON (422)
4. Successful Creation (201)
5. Nothing Has Changed (304)
6. GitHub API Rate Limits
7. Reading Your Rate Limits
8. Conditional Requests to Avoid Rate Limitations
9. Accessing Content from the Web
1. JSON-P
2. CORS Support
3. Specifying Response Content Format
10. Summary
3. 2. Gists and the Gist API
1. Easy Code Sharing
2. Gists Are Repositories
1. Embedding Gists Inside HTML
2. Embedding Inside Jekyll Blogs
3. Gist from the Command Line
4. Gists as Fully Functioning Apps
5. Gists that Render Gists
1. Going Deeper into the Gist API
2. Using Hypermedia Data from Octokit
6. Summary
4. 3. GitHub Wikis with Gollum
1. "The Story of Smeagol..."
1. Repository Linked Wikis
2. Markup and Structure
2. Hacking Gollum
3. The Starting Point of a Gollum Editor
4. Programmatically Handling Images
5. Using the Rugged Library
6. Optimizing for Image Storage
7. Reviewing on GitHub
8. Improving Revision Navigation
9. Fixing Linking Between Comp Pages
10. Summary
5. 4. Python and the Search API
1. Search API General Principles
1. Authentication
2. Result Format
3. Search Operators and Qualifiers
4. Sorting
2. Search APIs in Detail
1. Repository Search
2. Code Search
3. Issue Search
4. User Search
3. Our Example Application
1. User Flow
4. Python
1. AGitHub
2. WxPython
3. PyInstaller
5. The Code
1. Git Credential Helper
2. Windowing and Interface
3. GitHub Login
4. GitHub Search
5. Displaying Results
6. Packaging
7. Summary
6. 5. .NET and the Commit Status API
1. The API
1. Raw Statuses
2. Combined Status
3. Creating a Status
2. Let's Write an App
1. Libraries
2. Development Environment
3. Sending the Request
4. OAuth Flow
5. Status Handler
3. Summary
7. 6. Ruby and Jekyll
1. Learning and Building with Jekyll
2. What Is Jekyll?
1. Operating Jekyll Locally
3. Jekyll Blog Quick Start
1. YFM: YAML Front Matter
2. Jekyll Markup
3. Using the Jekyll Command
4. Privacy Levels with Jekyll
5. Themes
6. Publishing on GitHub
7. Hosting On Your Own Domain
4. Importing from Other Blogs
1. From Wordpress
2. Exporting from Wordpress Alternatives
5. Scraping Sites into Jekyll
1. Jekyll Scraping Tactics
2. Setting Up
3. Scraping Titles
4. Refining with Interactive Ruby
5. Writing Tests and Caching
6. Writing Jekyll Posts
7. Using the Jekyll Command-Line Tool
8. Master Index File with Liquid Markup
9. Scraping Body and Author
10. Adding Images to Jekyll
11. Customizing Styling (CSS)
12. Inviting Contributions with GitHub "Fork"
13. Publishing Our Blog to GitHub
6. Summary
8. 7. Android and the Git Data API
1. Setting Up
1. Creating a Jekyll Blog
2. Android Development Tools
2. Creating a New Project
1. Editing the Gradle Build File
2. Default Android Main
3. Android Automated Testing
1. Unit Tests for Our GitHub Client
2. Android UI Tests
4. Application Implementation
1. Code to Log In to GitHub
2. Code to Talk to GitHub
3. Writing the Blog Content
4. GitHub Services
5. The Base SHA from the Repository and Branch
6. Creating the Blob
7. Generating a Tree
8. Creating the Commit
9. Updating the Master Resource
10. Passing All Our Tests
5. Summary
9. 8. CoffeeScript, Hubot, and the Activity API
1. The Activity API
2. Planning for PR Satisfaction Guaranteed
1. Considerations and Limitations
2. Creating a Vanilla Hubot
3. Creating a Slack Account
4. Running Hubot Locally
3. Installation on Heroku
1. Setting Up Heroku
4. Activity API Overview
1. Writing a Hubot Extension
2. Code Reviews via Pull Requests
3. Using the OAuth Token to Register for Events
4. Triggering Real Pull Requests
5. Handling PR Notifications as Post Requests over HTTP
5. Summary
10. 9. JavaScript and the Git Data API
1. Building a Coffee Shop Database on GitHub
2. Set Up
1. Mapping Hostnames
2. Adding the Support Libraries
3. An AngularJS Application Using GitHub.js
1. Visualize Application Data Structure
2. Making Our App Testable
3. Test Data
4. CoffeeTech.js
4. Geocoding Support
1. City Data
5. Adding Login
1. Errors Already?
6. Displaying (Soon-to-Be) User-Reported Data
1. User-Contributed Data
7. Accepting Pull Requests
8. Toward a Safe Login Implementation
1. Authentication Requires a Server
2. Fixing Authentication with Firebase
3. Testing Firebase
4. Implementing Firebase Login
9. Summary
11. A. GitHub Enterprise
1. Installation
2. Administration
3. Endpoints
4. Full Hostnames Versus Mount Points
5. Command-Line Client Tools: cURL
6. Example Request Using a Client Library
1. Ruby Client Configuration
2. Java
3. JavaScript
4. Python
5. C#
7. Management API
8. Documentation
12. B. Ruby, NodeJS, (and the Shell) at GitHub
1. GitHub and Ruby
1. Installing Ruby
2. Important Ruby and RVM Concepts
3. Potential Problems Installing Ruby
2. GitHub Is Excited about NodeJS
1. NodeJS Installation
2. Node Version Manager
3. package.json
3. Command-Line Basics and the Shell
1. Shell Comments
2. Providing Variables to Commands
3. Splitting Commands into Multiple Lines
4. Piping Output to Successive Commands
5. Redirection
13. Index
| {
"redpajama_set_name": "RedPajamaBook"
} | 7,450 |
What would be done at my health checkup?
Blood pressure check every two years. If BP > 140/90 mmHg, this is hypertension and needs adequate treatment.
Height and weight check, with dietary counseling to avoid obesity.
Colorectal cancer screening every year.
Annual dental and eye exams.
Most men over 50 years old should have screening for prostate cancer.
Counseling for alcohol and tobacco use.
"redpajama_set_name": "RedPajamaC4"
} | 8,724 |
package io.katharsis.legacy.registry;

import io.katharsis.resource.information.ResourceInformationBuilder;
import io.katharsis.resource.information.ResourceInformationBuilderContext;

/**
 * Default {@link ResourceInformationBuilderContext} implementation that
 * simply delegates all calls to the wrapped {@link ResourceInformationBuilder}.
 */
public class DefaultResourceInformationBuilderContext implements ResourceInformationBuilderContext {

	private final ResourceInformationBuilder builder;

	public DefaultResourceInformationBuilderContext(ResourceInformationBuilder builder) {
		this.builder = builder;
	}

	@Override
	public String getResourceType(Class<?> clazz) {
		// Delegate resource-type resolution to the wrapped builder.
		return builder.getResourceType(clazz);
	}

	@Override
	public boolean accept(Class<?> type) {
		// Delegate the decision of whether a class is a resource.
		return builder.accept(type);
	}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,125 |
Enterprise Bill — 28 Oct 2002 at 15:25
Read a third time.
After Clause 16, insert the following new clause-
"TRIBUNAL: REGULATIONS
(1) The Lord Chancellor and the Secretary of State may together make regulations-
(a) empowering the courts to transfer to the Tribunal for determination by it any issue arising in any civil proceedings the determination of which depends on whether provisions of Chapter I or II of the 1998 Act or of Article 81 or 82 of the Treaty have been infringed where, in the opinion of the court making the transfer, the transfer would be conducive to the efficient conduct of the proceedings;
(b) making any rules that the Lord Chancellor and the Secretary of State may deem to be appropriate as ancillary to the power to make such transfers or to be reasonably required in connection therewith and in particular, but without prejudice to the generality of the foregoing, to the effect that-
(i) on making such a transfer, the court making the transfer may state facts that the Tribunal shall then treat as established for the purposes of determining the issues transferred to it;
(ii) after having made its determination, the Tribunal shall remit the matter to the court that made the transfer to it, declaring the determination of that issue by the Tribunal, which, subject to any clarification or amplification by the Tribunal of its determination that may be requested by the court that made the transfer, shall then be treated as a determination of that issue by that court;
(iii) enabling courts that have made, or have in contemplation the making of, such transfers and the Tribunal to co-operate together in any way that they deem to be appropriate to enable issues arising in the proceedings before them to be determined as efficiently as possible.
(2) The Lord Chancellor may appoint as president and as chairman of the Tribunal judges of any of the courts provided that, before appointing a judge of the Court of Session or sheriff courts under this subsection, the Lord Chancellor shall first consult the Lord President of the Court of Session.
(3) In this section references to "the courts" are to the High Court of Justice and the county courts in England and Wales and Northern Ireland and the Court of Session and the sheriff courts in Scotland.
(4) The power to make regulations under this section is exercisable by statutory instrument subject to annulment in pursuance of a resolution of either House of Parliament."
Their Lordships divided: Contents, 92; Not-Contents, 86.
Party Majority (Content) Minority (Not-Content) Turnout
Con 59 (+2 tell) 0 27.7%
Crossbench 4 7 6.5%
Lab 0 76 (+2 tell) 40.8%
Other 0 1 7.1%
Total: 90 84 26.9%
Viscount Bledisloe Crossbench aye
Lord Rees-Mogg Crossbench aye
Lord Williamson of Horton Crossbench (front bench) aye | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,833 |
RY's Blog

A glimpse of iOS Memory Deep Dive

2020-05-03
Source: https://suelan.github.io/2020/05/03/An-glimpse-of-iOS-Memory-Deep-Dive/

This is a pretty good session about iOS memory: iOS Memory Deep Dive - WWDC 2018 - Videos - Apple Developer. I watched it and took some notes here.

Not all memory is created equal. There are dirty memory, clean memory, and compressed memory in the iOS system. We have to know the differences between them.

Page

A page is typically 16KB in size, and the operating system gives one to you when your app requests memory. Some pages can hold multiple objects, and some objects can span multiple pages.

Page Types

• Clean
  • Data that can be paged out of memory
  • Memory-mapped files
  • Frameworks
• Dirty
  • Memory written by an app
  • All heap allocations
  • Decoded image buffers
• Compressed

There is no traditional disk swap in iOS.

Memory compressor

The system does the compression and decompression for you via the memory compressor.

What does the memory compressor do?

• Compresses unaccessed pages
• Decompresses pages upon access

When you get a memory warning, your app is not always the cause. It may be the compressor freeing memory — for example, when you receive a phone call while using the app. Removing objects in didReceiveMemoryWarning frees memory after decompression.

Caching

• Trade-offs between CPU and memory. Caching can reduce CPU usage and time complexity, but it costs memory.
• Remember the compressor: when decompressing, the used memory will increase.
• Prefer NSCache over dictionary.

Memory Profile

It is the dirty size plus the compressed size that the system uses to determine how much memory the app is really using. We should mainly focus on these two parts, dirty and compressed memory, when analyzing the memory profile.

Tools for Profiling Footprint

1. Xcode memory gauge
2. Instruments
   • Allocations
   • Leaks
   • VM Tracker — provides profiles for dirty and compressed memory
   • Virtual memory trace
3. Debugger
4. Memory graph (can be worked with from the command line)

vmmap

vmmap helps to show the dirty memory info of your app. In general, we should look for the big numbers in the size columns. There are virtual size, resident size, dirty size, and swapped size columns here. According to this session, we can ignore the virtual size, because it is memory requested by the app that is not necessarily used. Swapped size is related to compressed memory. So we should care more about dirty size and swapped size.

An example of using vmmap to debug a memory issue:

First, we can use the summary info to look for big numbers in the virtual size and swapped size columns. Here, we find CG Image takes much more memory than the others. Then, we use grep to get more info about CG Image. There are two regions here; the last row is summary info. The second CG Image region takes more dirty and swapped memory, so we have to see more information about this region by using the --verbose option. All these commands can work with other shell commands, like redirecting the output stream to an output.txt file.

And we will see more regions. It turns out that vmmap, by default, collapses contiguous regions when it finds them. A general rule: the later a region was created, the later in my app's life cycle it happened. Chances are this later region is more closely tied to whatever caused that memory spike.

So, we start to look at the last region. We can use the start memory address of the last region and search for it in the memory graph in Xcode, or use leaks to get the trace tree. By scanning this info, we will find more clues. Here, using malloc_history to see the backtrace for this object, we found the related code creating this particular VM memory.

vmmap can also be combined with other commands such as AWK.

leaks

It not only shows the cycle, but also the root object of the cycle:

• leak cycle
• root object

heap

• Shows objects allocated on the heap
• Useful for identifying large objects in memory and what allocated them

The heap command shows the class name in the CLASS_NAME column, the number of instances of the class in the COUNT column, the average size of the objects in the AVG column, and the total size in the BYTES column.

malloc_history

In some cases, we not only want to know the memory size, but also how it was created. So, here comes the malloc_history command.

• Enable malloc stack logging

Which tool to pick

Use vmmap and heap to find objects or regions with big numbers; use leaks to see references between objects, like finding circular references; use malloc_history to see how something was created. | null | null |
\section{Introduction}
The astrophysical site(s) for the synthesis of heavy elements via the rapid neutron capture process (\textsl{r}-process) has been a long-standing puzzle.
The electromagnetic counterpart
following the detection of gravitational waves from the first-ever binary neutron star merger (BNSM) event GW170817 has confirmed the presence of heavy elements in the merger ejecta, thereby establishing BNSMs as an \textsl{r}-process source \citep{abbott2017a,abbott2017b,Kasen+17,Cowperthwaite+17,Tanaka+2017}.
The additional sources may be needed to explain the evolution of \textsl{r}-process elements in the Galaxy as well as in neighbouring dwarf galaxies. In particular, the origin of the \textsl{r}-process in the early Galaxy, as observed in very metal-poor stars (see, e.g., the recent review by~\citet{Cowan2019} and the references therein), as well as the evolution of \textsl{r}-process elements in the disk at late times, may require additional sources~\citep{cote2019a}.
A number of sites have been proposed as candidates for additional \textsl{r}-process sources that include NS -- black hole (BH) mergers~\citep{Lattimer1974,Rosswog2005,Kyutoku2013,Foucart2014}, neutrino-driven winds from core-collapse supernovae (CCSNe)~\citep{Woosley1994,Takahashi1994,Qian1996,Arcones2013}, magneto-rotational supernovae (MRSNe)~\citep{Nishimura2006,Winteler+12,Mosta+17}, collapsars~\citep{Siegel+18,Miller2020}, accretion disk outflows during the common envelope phase of NS--massive star system \citep{Aldana2019}, CCSNe triggered by the hadron-quark phase transitions~\citep{FischerWu2020}, etc.
In this regard, measurement of the abundances of short-lived radioactive isotopes (SLRIs) that are exclusively produced by \textsl{r}-process in the early solar system (ESS), as determined from meteorites as well as Earth's deep-sea sediments, can be used to infer the properties of \textsl{r}-process events that occurred in the Solar neighbourhood~\citep{Wallner_2015,Hotokezaka:2015zea,Bartos:2019cec,Cote2021,Wallner:2021}.
In a recent study \citet{Cote2021} showed
that the observed ratio of the abundances of \textsl{r}-process SLRIs $^{129}$I and $^{247}$Cm present in the ESS
can be used to constrain the ``last'' \textsl{r}-process event that contributed to the solar abundance before the formation of the SS.
In particular, the authors showed that due to the almost identical lifetimes of the SLRIs $^{129}$I and $^{247}$Cm, the value of their ratio is insensitive to the uncertainties associated with Galactic chemical evolution, and that the observed ratio of $438\pm 184$ corresponds to the value produced by the ``last'' \textsl{r}-process event. Additionally, using the fact that $^{129}$I/$^{247}$Cm is extremely sensitive to the neutron richness, they concluded that the observed value is consistent with moderately neutron-rich conditions that are most likely associated with the disk ejecta following BNSMs~\citep{Fernandez2013,Just:2014fka,Fujibayashi2018,Miller2019}.
Tidally disrupted ejecta from NS--BH mergers or BNSMs~\citep{Freiburghaus1999,Goriely2011,Korobkin2012} produce $^{129}$I/$^{247}$Cm ratios that are too low, whereas those from MRSNe result in values that are too high to be compatible with the measurement\footnote{We note that recent BNSM simulations that include the effect of weak interactions generally predict less neutron-rich conditions in the early-time ejecta, particularly for the component that originates from the collisional interface of the two NSs~\citep{Wanajo:2014wha,Radice2016,Bovard2017,Vincent2020,George2020,Kullmann:2021gvo}.
However, the exact electron fraction ($Y_e$) distribution of this component still vary substantially in different works and require further improved treatment of weak interactions in simulations.}.
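The insensitivity to the decay time can be quantified with a short numerical sketch (an illustration, not taken from the paper; it assumes the standard half-lives of 15.7 Myr for $^{129}$I and 15.6 Myr for $^{247}$Cm, which are not quoted in this excerpt):

```python
import math

# Mean lifetimes (Myr) from assumed half-lives of 15.7 Myr (129I) and 15.6 Myr (247Cm).
TAU_I129 = 15.7 / math.log(2)
TAU_CM247 = 15.6 / math.log(2)

def ratio_drift(t_myr):
    """Factor by which the 129I/247Cm number ratio changes after free decay
    for t_myr, relative to its value at production."""
    return math.exp(-t_myr * (1.0 / TAU_I129 - 1.0 / TAU_CM247))

# Because the two lifetimes nearly cancel, even 100 Myr of free decay
# shifts the ratio by only a few percent, so the measured value closely
# tracks the production ratio of the contributing event(s).
drift = ratio_drift(100.0)
```

This is why the ratio constrains the production conditions rather than the (uncertain) time since the last event.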
The above conclusion reached by \cite{Cote2021} rests on the result, obtained with a stochastic chemical evolution model, that for typical frequencies of the various \textsl{r}-process sources, the probability of more than one source contributing to the $^{129}$I and $^{247}$Cm in the ESS is negligible.
In this paper we use a turbulent gas diffusion formalism similar to that of \citet{hotokezaka2015} to simulate the evolution of \textsl{r}-process isotopes at the birth location of the Sun.
We find that the ``last'' major \textsl{r}-process event only accounts for $\sim 50$--$75\%$ of the total $^{129}$I and $^{247}$Cm measured in the ESS and at least three events are required to account for $\gtrsim 95\%$.
The minor contributing events can dramatically affect the $^{129}$I/$^{247}$Cm ratio when there are more than one \textsl{r}-process source with distinct production ratios for $^{129}$I/$^{247}$Cm (corresponding to different levels of neutron richness).
Consequently, we find that the measured meteoritic value may not
correspond to the value due to a single ``last'' \textsl{r}-process event. Surprisingly, we find that the $^{129}$I/$^{247}$Cm ratio can nevertheless provide important constraints on the neutron richness of \textsl{r}-process when it is combined with the
meteoritic measurements of the $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U ratios in the ESS.
\section{Method}
In order to calculate the abundance evolution of \textsl{r}-process elements, the frequency of \textsl{r}-process events as well as their locations are
required. The frequency of an \textsl{r}-process source depends on the star formation history (SFH) of the Milky Way. We use the \textsc{omega+} code along with the parameters for the best-fit Milky Way model of \cite{cote2019} to calculate the SFH and the resulting CCSN rate and mass loss factor $f(t)$ (see Appendix~\ref{appA}).
The rate of \textsl{r}-process events like BNSMs as well as the rate of formation of neutron star binaries (NSBs) are derived by assuming a fraction $f_r$ of CCSNs result in the formation of NSBs.
We adopt a fiducial value of $f_r=8\times 10^{-4}$ that corresponds to
a merger frequency $\nu_0\sim 10$~Myr$^{-1}$ at the present time.
We simulate multiple realisations of \textsl{r}-process events in the Milky Way.
For each realisation, the birth times for binaries, which lead to BNSM-like events, are generated from a probability distribution
proportional to the NSB birth rate calculated using \textsc{omega+}.
The cylindrical radial coordinates for the birth locations in the MW are generated according to a distribution $\propto R\exp(-R/R_d)$, where $R_d$ is the radial scale length for the surface SFR density. In order to account for the inside-out formation of the MW, we adopt a time-dependent $R_d$ from \citet{Schonrich17}. The corresponding vertical heights are generated according to the distribution of the estimated molecular and atomic gas density in the local solar neighbourhood by \citet{mckee2015}. The merger time $t_{\rm merge}$ for each NSB is sampled from a DTD $\propto t^{-1}$ with minimum and maximum delay times of 10 Myr and 10 Gyr, respectively. Each NSB is assigned a kick velocity $\vec v_{\rm kick}$ at the time of its birth. The magnitude of $\vec v_{\rm kick}$ is generated from a distribution
$\propto \exp(-v_{\rm kick}/v_0)$ with a fiducial value of $v_0=30\,{\rm km\,s}^{-1}$, whereas the direction of the kick is generated from a uniform isotropic distribution. In order to find the final location of the NSB at the time of its merger, we use \textsc{galpy} \citep{galpy} to trace the motion of the NSB under the influence of the Galactic potential as described in \citet{bwy2020}. The birth times and locations for MRSN-
or collapsar-like events are generated similarly to those of the NSBs, as described above. In this case, however, there are no delays or offsets due to natal kicks.
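The delay-time and kick-speed sampling described above can be sketched with inverse-transform sampling (a hypothetical illustration, not the actual simulation code; the seed and sample size are arbitrary): for $p(t)\propto t^{-1}$ on $[t_{\rm min}, t_{\rm max}]$ one has $t = t_{\rm min}(t_{\rm max}/t_{\rm min})^{u}$ with $u$ uniform in $[0,1)$, while $p(v)\propto \exp(-v/v_0)$ is an ordinary exponential distribution.

```python
import random

T_MIN_MYR, T_MAX_MYR = 10.0, 1.0e4  # DTD bounds: 10 Myr to 10 Gyr
V0_KMS = 30.0                       # fiducial kick-speed scale v0

def sample_merger_delay(rng):
    # Inverse transform for p(t) proportional to 1/t on [T_MIN, T_MAX]:
    # CDF(t) = ln(t/T_MIN) / ln(T_MAX/T_MIN)  =>  t = T_MIN * (T_MAX/T_MIN)**u
    u = rng.random()
    return T_MIN_MYR * (T_MAX_MYR / T_MIN_MYR) ** u

def sample_kick_speed(rng):
    # p(v) proportional to exp(-v/v0) is an exponential with scale v0.
    return rng.expovariate(1.0 / V0_KMS)

rng = random.Random(42)
delays = [sample_merger_delay(rng) for _ in range(100_000)]
```

Note that the log-uniform delays imply a median delay of $\sqrt{t_{\rm min} t_{\rm max}} \approx 316$ Myr.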
In order to calculate the evolution of \textsl{r}-process elements at the Solar location, we use the turbulent gas diffusion formalism by \citet{hotokezaka2015}. The number density of an isotope at a location $\vec r$ and time $t$ is given by
\begin{equation}
n(\vec r,t)=\sum_{t_j<t} \frac{Y_{r,j} e^{-f(t)\Delta t_j}}{K_j(\Delta t_j)} \exp\left[-\frac{\left| \vec r-\vec r_j\right|^2}{4D\Delta t_j} -\frac{\Delta t_j}{\tau}\right],
\end{equation}
where $t_j$ is the occurrence time for the $j$th \textsl{r}-process event, $\Delta t_j=t-t_j$, $Y_{r,j}$ is the number of atoms of the isotope produced by the $j$th
\textsl{r}-process event, $f(t)$ is the time-dependent loss factor due to star formation and galactic outflows calculated from Milky Way model using \textsc{omega+}, $\tau$ is the lifetime of the isotope. $D$ is the turbulent diffusion coefficient, and $K_j(\Delta t_j)$ is given by
\begin{equation}
K_j(\Delta t_j)=\mathrm{min}[(4\pi D \Delta t_j)^{3/2},8\pi h_z D \Delta t_j],
\end{equation}
where $h_z =0.3$~kpc is the vertical scale height.
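A minimal numerical sketch of the two expressions above follows (hedged: it replaces the time-dependent loss factor $f(t)$ with a constant, and uses illustrative units of kpc and Gyr; it is not the paper's code):

```python
import math

H_Z = 0.3  # vertical scale height in kpc

def k_norm(d_coeff, dt):
    """Normalisation K_j(dt): 3D spreading at early times, switching to
    2D spreading once the diffusion front exceeds the disk scale height."""
    return min((4.0 * math.pi * d_coeff * dt) ** 1.5,
               8.0 * math.pi * H_Z * d_coeff * dt)

def number_density(r, t, events, d_coeff, tau, f_loss=0.0):
    """Sum over past events, as in the density equation above.
    events: list of (t_j, (x, y, z), yield_j); lengths in kpc, times in Gyr,
    d_coeff in kpc^2/Gyr, tau the isotope lifetime in Gyr."""
    n = 0.0
    for t_j, pos_j, y_j in events:
        dt = t - t_j
        if dt <= 0.0:
            continue  # only events before time t contribute
        dist2 = sum((a - b) ** 2 for a, b in zip(r, pos_j))
        n += (y_j * math.exp(-f_loss * dt) / k_norm(d_coeff, dt)
              * math.exp(-dist2 / (4.0 * d_coeff * dt) - dt / tau))
    return n
```

As a sanity check, a single event at the origin yields a density that falls off with distance from the event and vanishes before the event occurs.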
The overall mixing of metals depend on the parameters $D$ and the frequency $\nu$ of \textsl{r}-process events with typical mixing timescale $\tau_{\rm mix}$ given by \citep{hotokezaka2015}
\begin{equation}
\tau_{\rm mix}\approx 200 \left(\frac{\nu}{30~{\rm Myr}^{-1}} \right)^{-2/5} \left(\frac{D}{0.1~{\rm kpc^2~Gyr^{-1}}} \right)^{-3/5}~{\rm Myr}
\label{eq:taumix}
\end{equation}
A recent study by \citet{beniamini2020}
found that in order to satisfy various independent observational constraints such as scatter of stable \textsl{r}-process elements, highest observed \textsl{r}-process enrichment in the solar neighbourhood, as well as constraints from radioactivity in the ESS, it requires that $D\gtrsim 0.1~{\rm kpc^2~Gyr^{-1}}$ and $\nu\lesssim 40~{\rm Myr}^{-1} $ with typical mixing timescale of $ \tau_{\rm mix}\approx 200$ Myr. This also means that $D$ and $\nu$ are related to each other by
\begin{equation}
D\approx 0.3 \left (\frac{\nu}{10~{\rm Myr}^{-1}}\right)^{-2/3}~{\rm kpc^2~Gyr^{-1}}.
\label{eq:D}
\end{equation}
Because $ \tau_{\rm mix}$ is a fixed parameter, varying the value of $\nu$ is always associated with a corresponding change in $D$. Thus, it is sufficient to consider a single value of $\nu$.
In our Milky Way model, the current rate of $\nu_0=10~{\rm Myr^{-1}}$ corresponds to a rate of \textsl{r}-process events of $\nu \approx 15~{\rm Myr^{-1}}$ at the time of SS formation of $t_\odot \sim 9.2$~Gyr. We adopt values of $D=0.1$ and $0.3~{\rm kpc^2~Gyr^{-1}}$ that cover values of $D$
slightly above and below the corresponding value given by Eq.~\ref{eq:D} for $\nu \approx 15~{\rm Myr^{-1}}$ and correspond to $\tau_{\rm mix}\approx 140$--$260$ Myr.
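As a quick check of these numbers (an illustration, not from the paper), evaluating the mixing-timescale scaling above for $\nu=15~{\rm Myr}^{-1}$ reproduces the quoted range, and inserting the $D\propto\nu^{-2/3}$ relation indeed holds $\tau_{\rm mix}$ fixed:

```python
def tau_mix_myr(nu_per_myr, d_kpc2_gyr):
    """Typical mixing timescale: 200 Myr * (nu/30)^(-2/5) * (D/0.1)^(-3/5)."""
    return (200.0 * (nu_per_myr / 30.0) ** (-2.0 / 5.0)
            * (d_kpc2_gyr / 0.1) ** (-3.0 / 5.0))

def d_of_nu(nu_per_myr):
    """D(nu) = 0.3 (nu/10)^(-2/3) kpc^2/Gyr, which keeps tau_mix constant."""
    return 0.3 * (nu_per_myr / 10.0) ** (-2.0 / 3.0)

# For nu ~ 15 / Myr at the time of SS formation:
t_fast = tau_mix_myr(15.0, 0.3)   # ~140 Myr
t_slow = tau_mix_myr(15.0, 0.1)   # ~260 Myr
```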
\begin{table*}
\caption{List of important parameters used in this work.}
\centering
\begin{tabular}{l c c c}
\hline
Parameter & Definition & Values &Unit\\
\hline
$D$ & Turbulent diffusion coefficient& 0.1, 0.3 &${\rm kpc^2~Gyr^{-1}}$\\
$\nu$ &Total \textsl{r}-process frequency during SS formation&$\sim 15$ & ${\rm Myr}^{-1}$\\
$\nu_{LM}$ &Ratio of frequency of sources $S_L$ to $S_M$ &1&--\\
$\nu_{LH}$ &Ratio of frequency of sources $S_L$ to $S_H$ &1&--\\
$M_{LM}^{\rm ej}$ &Ratio of ejecta masses of sources $S_L$ to $S_M$ &1/3,1,3&--\\
$M_{LH}^{\rm ej}$ &Ratio of ejecta masses of sources $S_L$ to $S_H$ &1/3,1,3&--\\
$\lambda_{L}$ &$^{129}$I/$^{247}$Cm production ratio for $S_L$ &10&--\\
$\lambda_{M}$ &$^{129}$I/$^{247}$Cm production ratio for $S_M$ &100&--\\
$\lambda_{H}$ &$^{129}$I/$^{247}$Cm production ratio for $S_H$ &1000&--\\
\hline
\end{tabular}
\label{tab:parameters}
\end{table*}
We compute the number abundance of \textsl{r}-process isotopes using the diffusion prescription discussed above at the solar radius $R_\odot$ at the time when the gas decoupled from the interstellar medium (ISM) at $t=t_{\rm iso}$ to form the SS. In this work, we consider three different types of \textsl{r}-process sources $S_L$, $S_M$, and $S_H$ defined by distinct $^{129}$I/$^{247}$Cm production ratios of $\lambda_L$, $\lambda_M$, and $\lambda_H$. The subscripts $L$, $M$, and $H$ refer to low, medium, and high values, respectively, for the $^{129}$I/$^{247}$Cm production ratio, corresponding to different astrophysical sites with varying neutron richness. We adopt fiducial values of $\lambda_L=10$, $\lambda_M=100$, and $\lambda_H=1000$. The adopted values roughly correspond to the values expected from neutron-rich dynamical ejecta during a BNSM or NS--BH merger ($\lambda_L$), moderately neutron-rich disk ejecta following BNSMs ($\lambda_M$), and less neutron-rich ejecta from MRSN events ($\lambda_H$) from theoretical models reported in \citet{Cote2021}. For each \textsl{r}-process event the isotopic production ratios are taken to be $^{129}$I/$^{127}$I= 1.46, $^{247}$Cm/$^{235}$U= 0.20, $^{244}$Pu/$^{238}$U= 0.40, and $^{235}$U/$^{238}$U= 1.05.
These values for the actinide ratios are generally consistent with the predictions from \citet{Mendoza2015} and
\citet{Wu:2016pnw}, which computed the \textsl{r}-process nucleosynthesis in the BNSM dynamical ejecta and in the BH--accretion disk outflows using different nuclear physics inputs. The value for the $^{129}$I/$^{127}$I ratio corresponds to the solar \textsl{r}-process value adapted from \citet{Sneden2008}. We also account for the contribution of the \textsl{s}-process to the ESS value of $^{127}$I by multiplying by a factor of 1.06, which is consistent with the solar abundance decomposition by \citet{Sneden2008}.
We consider three scenarios, where only two out of the three different types of \textsl{r}-process sites contribute. Additionally, we consider the scenario where all three types of sources contribute.
The relevant parameters that impact the evolution of isotopic ratios of SLRs produced by \textsl{r}-process are the frequency $\nu_i$ of each \textsl{r}-process source, the corresponding production ratio $\lambda_i$, the relative ratio of the ejected mass $M^{\rm ej}_{ij}=M^{\rm ej}_i/M^{\rm ej}_j$ for sources $S_i$ and $S_j$, and the value of the diffusion coefficient $D$. We list the definitions and the adopted values of relevant model parameters in Table~\ref{tab:parameters}.
\begin{figure*}
\centerline{\includegraphics[width=\textwidth]{I129_Cm247_Dp1.pdf}}
\caption{Evolution of $^{129}$I/$^{247}$Cm at the solar location for models with two equally frequent ($\nu_{LM}=1$) sources $S_L$ and $S_M$ with $\lambda_L=10$ and $\lambda_M=100$ with $D=0.1$~kpc$^2$ Gyr$^{-1}$ for three different values of $M^{\rm ej}_{LM}$.}
\label{fig:I129_Cm247_Dp1}
\end{figure*}
\section{Results}
We first consider two \textsl{r}-process sources $S_L$ and $S_M$ that are BNSM like.
We assume that both of them are equally frequent, i.e., $\nu_{LM}=\nu_L/\nu_M=1$ with $\nu_{L}+\nu_{M}=\nu$, and have the same kick velocity distribution.
For their relative ejecta mass ratios, we consider three different values of
$M^{\rm ej}_{LM}=1/3,1,$ and $3$.
$M^{\rm ej}_{LM}=1$ corresponds to the scenario where both $S_L$ and $S_M$ contribute equally to the main \textsl{r}-process. The other two values of $M^{\rm ej}_{LM}$ correspond to scenarios where one of the sources is the dominant site for the main \textsl{r}-process.
Figure~\ref{fig:I129_Cm247_Dp1} shows the evolution of the $^{129}$I/$^{247}$Cm ratio at the solar location for $D=0.1$~kpc$^2$ Gyr$^{-1}$ with three different values of $M^{\rm ej}_{LM}$ for a single representative realisation. As expected, the value of the ratio varies between $\lambda_L$ and $\lambda_M$. However, the ratio not only takes extremal values, but also assumes values of $\sim 30$--$70$ for substantial lengths of time. This occurs even when the ejecta mass is higher for one of the sources, i.e., $M^{\rm ej}_{LM}=1/3$ or $3$.
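The intermediate values can be understood as a simple mixing effect (an illustrative sketch with hypothetical weights, not the simulation itself): if a fraction $w_L$ of the local $^{247}$Cm inventory comes from $S_L$ and the rest from $S_M$, the observed ratio is the $^{247}$Cm-weighted mean of the two production ratios.

```python
def mixed_ratio(w_L, lam_L=10.0, lam_M=100.0):
    """129I/247Cm of gas whose 247Cm inventory is a fraction w_L from
    source S_L and (1 - w_L) from S_M:
    (lam_L*Cm_L + lam_M*Cm_M) / (Cm_L + Cm_M) = w_L*lam_L + (1-w_L)*lam_M."""
    return w_L * lam_L + (1.0 - w_L) * lam_M

# Equal 247Cm contributions from the two sources give a ratio of 55,
# well inside the intermediate 30-70 range seen in the realisations.
r_equal = mixed_ratio(0.5)
```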
\begin{table*}
\caption{Probability distribution of $^{129}$I/$^{247}$Cm ratio in the ESS for criteria T0 (see text) for two equally frequent \textsl{r}-process sources $S_L$ and $S_M$ with $\lambda_{L}=10$, $\lambda_{M}=100$.
Unit for $D$ is kpc$^2$ Gyr$^{-1}$.}
\centering
\begin{tabular}{c c c c c c c c c c}
\hline\hline
Model Parameters&\multicolumn{9}{c}{Probability of ESS $^{129}$I/$^{247}$Cm Ratio within an interval}\\ [0.5ex]
\hline
$(D,M^{\rm ej}_{LM})$ & 10--20 & 20--30 & 30--40 & 40--50 & 50--60 & 60--70 & 70--80 & 80--90 & 90--110 \\ [0.5ex]
\hline
(0.3,1/3) &0.41 &0.11 &0.07 &0.04 &0.04 &0.04 &0.05&0.08& 0.18\\
(0.1,1/3) &0.45 &0.06 &0.04 &0.03 &0.03 &0.03 &0.03&0.04& 0.30\\
\hline
(0.3,1)& 0.56 &0.09 &0.06 &0.05 &0.04 &0.03 &0.03&0.04& 0.09\\
(0.1,1)& 0.63 &0.06 &0.04 &0.03 &0.01 &0.02 &0.02&0.03& 0.17\\
\hline
(0.3,3)& 0.75 &0.07&0.04 &0.03 &0.02 &0.02 &0.02&0.02& 0.03\\
(0.1,3)& 0.75 &0.04&0.03 &0.02 &0.02 &0.01 &0.02&0.03& 0.10\\
\hline
\end{tabular}
\label{tab:minimum}
\end{table*}
\subsection{Results with Minimum Criteria}
The formation of meteorites in the ESS is always preceded by some isolation time. This is because there is always an interval $\Delta_{\rm iso}$ between the time when the molecular cloud, from which the SS formed, decouples from the ISM at $t_{\rm iso}$ and the formation of the SS at $t_\odot=t_{\rm iso}+\Delta_{\rm iso}$. During this time, the gas from which the SS was formed does not receive any contribution of \textsl{r}-process isotopes, and the SLRs decay freely for a period of $\Delta_{\rm iso}$.
Thus, in order for the isotope ratios $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U to be viable candidates for the measured values at the ESS formation time,
the SLR isotopic ratios resulting from the chemical evolution model have to be greater than the ESS meteoritic values. Because the SLR $^{244}$Pu is also produced exclusively by the \textsl{r}-process, this also applies to the $^{244}$Pu/$^{238}$U ratio.
We define this as the minimum criteria and
refer to it as T0, with the adopted
observed ESS abundances of $^{129}$I/$^{127}$I$= 1.28\times 10^{-4}$ \citep{Ott2016}, $^{247}$Cm/$^{235}$U$= 5.6\times 10^{-5}$ \citep{Tang+2017}, and $^{244}$Pu/$^{238}$U$= 7\times 10^{-3}$ \citep{Hudson1989}.
We simulate 1000 realisations of \textsl{r}-process enrichment in the Milky Way that satisfy criteria T0. This gives us a distribution of ESS value of the $^{129}$I/$^{247}$Cm ratio that ranges roughly from $\lambda_L$ to $\sim \lambda_M$.
The distribution allows us to directly calculate the probability of finding the ESS $^{129}$I/$^{247}$Cm ratio within a given interval between $\lambda_L$ and $\sim \lambda_M$.
Table~\ref{tab:minimum} shows the results using $\lambda_M=100$ and $\lambda_L=10$ for three different choices of $M^{\rm ej}_{LM}$ and two different values of $D$. Interestingly, in all cases the probability peaks at values close to $\lambda_L$.
In all cases with $D=0.3$ kpc$^2$ Gyr$^{-1}$, there is a $\gtrsim 12$--$20\%$ probability of getting intermediate $^{129}$I/$^{247}$Cm values of $30$--$70$, comparable to the probability of getting values close to $\lambda_M$. When $D=0.1$ kpc$^2$ Gyr$^{-1}$, the probability for intermediate values is somewhat lower but still comparable to the probability of getting values close to $\lambda_M$.
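As a concrete illustration of how such interval probabilities are obtained from the set of realisations, the sketch below bins per-realisation ESS ratios into the intervals used in Table~\ref{tab:minimum}; the uniform samples are only placeholders standing in for the actual simulation output:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder: in the actual analysis these would be the 1000 simulated ESS
# 129I/247Cm values, one per Milky Way realisation satisfying criteria T0.
samples = rng.uniform(10.0, 100.0, 1000)

edges = [10, 20, 30, 40, 50, 60, 70, 80, 90, 110]  # interval edges of the table
counts, _ = np.histogram(samples, bins=edges)
prob = counts / len(samples)                       # probability per interval
print(dict(zip(zip(edges[:-1], edges[1:]), prob.round(3))))
```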
In order to understand the probability distribution of the $^{129}$I/$^{247}$Cm ratio,
the history of enrichment of \textsl{r}-process SLRs at the solar location needs to be analysed. In general, the abundance of \textsl{r}-process isotopes at the solar location $R_\odot$ at $t_\odot$ receives contributions from all \textsl{r}-process events that occurred before $t_{\rm iso}$. For SLRs, however, only the handful of events that occurred at times $t'$ such that $t_{\rm iso}-t'$ is within a few SLR lifetimes, and at locations relatively close to the Sun, can contribute. For any isotope, including SLRs, the contributions from all past events at a particular $(R_\odot,t)$ can be sorted in terms of the fraction contributed to the total amount. It is useful to define the quantity $f^{\rm iso}_{\rm h1}$ as the highest fraction of the ESS value contributed by a single event for a particular isotope. This is the same as $f_{\rm last}$ defined by \citet{beniamini2020}. We can further define $f^{\rm iso}_{\rm h2}$ and $f^{\rm iso}_{\rm h3}$ as the fractions contributed by the second and third highest contributors for a particular isotope, respectively. Table~\ref{tab:h1h2h3_mimimum} shows the values of $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$ for $^{129}$I corresponding to the models listed in Table~\ref{tab:minimum}. As can be seen from the table, on average, the single major contributor accounts for only $70$--$84\%$, and at least three events are required to account for $\gtrsim 95\%$ of the total $^{129}$I in the ESS.
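The quantities $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$ follow directly from ranking the per-event contributions in any realisation. The toy Monte Carlo below sketches this for a diffusion-type prescription in the spirit of the turbulent mixing models discussed here; the event rate, sampling box, kernel, and all numerical values are illustrative assumptions, not the parameters of the actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative toy parameters (NOT the values of the full model).
D = 0.1                        # diffusion coefficient [kpc^2 Gyr^-1]
tau = 0.0157 / np.log(2)       # ~mean lifetime of 129I (and ~247Cm) [Gyr]
rate = 300.0                   # r-process events per Gyr in the sampled box
lam = {"L": 10.0, "M": 100.0}  # 129I/247Cm production ratio of each source
t_span, box = 1.0, 3.0         # lookback window [Gyr], half-size of box [kpc]

n_ev = rng.poisson(rate * t_span)
dt = rng.uniform(1e-4, t_span, n_ev)      # lookback times (avoid dt = 0)
x, y = rng.uniform(-box, box, (2, n_ev))  # positions relative to the Sun
kind = rng.choice(["L", "M"], n_ev)       # equally frequent source types

# Diffusion kernel times radioactive decay: relative 129I contribution of
# each event at the solar location (common normalisation cancels in ratios).
w_I = np.exp(-(x**2 + y**2) / (4 * D * dt)) / (4 * np.pi * D * dt) ** 1.5
w_I *= np.exp(-dt / tau)
w_Cm = w_I / np.array([lam[k] for k in kind])  # identical lifetimes assumed

ratio = w_I.sum() / w_Cm.sum()          # predicted ESS 129I/247Cm
f = np.sort(w_I / w_I.sum())[::-1]      # ranked fractional 129I contributions
print(f"129I/247Cm = {ratio:.1f}, f_h1/h2/h3 = {f[0]:.2f} {f[1]:.2f} {f[2]:.2f}")
```

Because every event produces the same amount of $^{129}$I per unit weight while the $^{247}$Cm yield depends on the source type, the resulting ratio is always bracketed by $\lambda_L$ and $\lambda_M$, and the sorted weights give $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$ directly.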
\begin{table*}
\caption{The mean and median values of $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$ along with the 95th percentile range for $^{129}$I for the models listed in Table~\ref{tab:minimum}. Unit for $D$ is kpc$^2$ Gyr$^{-1}$.}
\centering
\begin{tabular}{c c c c c c}
\hline
$(D,M^{\rm ej}_{LM})$ & &$f^{\rm ^{129}I}_{\rm h1}$ & $f^{\rm ^{129}I}_{\rm h2}$ & $f^{\rm ^{129}I}_{\rm h3}$ & $f^{\rm ^{129}I}_{\rm h1+h2+h3}$\T \B\\
\hline
\multirow{3}{*}{(0.3,1/3)} & mean&0.702 &0.167 &0.062 &0.931\\
&median&0.714 &0.158 &0.039 &0.961\\
&95th percentile &0.350--0.988 & 0.006--0.394 & 0.002--0.191 & 0.753--0.999 \\
\hline
\multirow{3}{*}{(0.1,1/3)} & mean&0.838 &0.119 &0.028 &0.985\\
&median&0.908 &0.062 &0.006 &0.997\\
&95th percentile&0.488--0.999 & 0.000--0.400&0.000--0.128&0.923--1.000\\
\hline
\hline
\multirow{3}{*}{(0.3,1)} & mean&0.701 &0.164 &0.062 &0.927\\
&median&0.705 &0.155 &0.044 &0.959\\
&95th percentile&0.349--0.9991 & 0.005--0.374 & 0.001--0.189 & 0.746--0.999\\
\hline
\multirow{3}{*}{(0.1,1)} & mean&0.823 & 0.127 &0.032 &0.982\\
&median&0.893 & 0.076 &0.008 &0.996\\
&95th percentile&0.497--0.999 & 0.000--0.398 & 0.000--0.142&0.913--1.000\\
\hline
\hline
\multirow{3}{*}{(0.3,3)} & mean&0.719 &0.158 &0.058 &0.936\\
&median&0.753 &0.140 &0.033 &0.970\\
&95th percentile&0.348--0.990 & 0.005--0.384 & 0.001--0.190 &0.772--0.999\\
\hline
\multirow{3}{*}{(0.1,3)} & mean&0.826 &0.124 &0.031 &0.982\\
&median&0.906 &0.068 &0.007 &0.997\\
&95th percentile&0.459--0.999 & 0.000--0.395 & 0.000--0.148 &0.905--1.000\\
\hline
\end{tabular}
\label{tab:h1h2h3_mimimum}
\end{table*}
The contributions from the second and third highest contributors have a critical impact on the $^{129}$I/$^{247}$Cm ratio. To illustrate this, let us consider the case with $D=0.1$ kpc$^2$ Gyr$^{-1}$ and $M^{\rm ej}_{LM}=1$, where $\approx 99\%$ of the total $^{129}$I is produced by three events with mean values of $f_{\rm h1}=0.84$, $f_{\rm h2}=0.12$, and $f_{\rm h3}=0.03$. In this case, both $S_L$ and $S_M$ produce the same amount of $^{129}$I, whereas the former produces a factor of 10 more $^{247}$Cm. If $N_0$ denotes the number of atoms of $^{129}$I produced by each event, then the corresponding yield of $^{247}$Cm is $0.1N_0$ and $0.01N_0$ for $S_L$ and $S_M$, respectively.
For simplicity, let us assume that the three highest contributing events account for all the $^{129}$I. In this case, for these three events $({\rm h1,h2,h3})$, the 8 different possible combinations are $(S_L,S_L,S_L)$, $(S_L,S_L,S_M)$, $(S_L,S_M,S_L)$, $(S_L,S_M,S_M)$, $(S_M,S_M,S_M)$, $(S_M,S_L,S_M)$, $(S_M,S_L,S_L)$, and $(S_M,S_M,S_L)$, which all have equal occurrence probability of $12.5\%$ and produce equal amounts of $^{129}$I ($0.99 N_0$ atoms).
Summing over the produced $^{247}$Cm atoms gives rise to values of $^{129}$I$/^{247}$Cm equal to 10.1, 10.4, 11.3, 11.7, 101.0, 48.3, 42.7, 79.4 for these 8 scenarios, respectively.
Clearly, in the first four combinations ($50\%$ total probability), where the dominant $^{129}$I contributor is ${\rm h1}=S_L$, the total number of $^{247}$Cm atoms is dominated by the $S_L$ source, leading to $^{129}$I/$^{247}$Cm ratios close to $\lambda_L=10$.
In contrast, in the latter four combinations, where ${\rm h1}=S_M$, the total $^{247}$Cm receives a substantial contribution from $S_L$ sources through the ${\rm h2}$ and ${\rm h3}$ events, except for the case $(S_M,S_M,S_M)$, since in the other three at least one of them is of type $S_L$.
Thus, among the latter four cases, the $^{129}$I/$^{247}$Cm ratio is close to $\lambda_M$ only for $(S_M,S_M,S_M)$, with a probability of $12.5\%$; for the other three combinations, the $^{129}$I/$^{247}$Cm ratio takes intermediate values.
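The eight combinations and their ratios quoted above can be reproduced with a few lines of code (total $^{129}$I normalised to $N_0=1$, as in the estimate above):

```python
from itertools import product

# Mean 129I fractions of the three highest contributors (values from the text).
f = [0.84, 0.12, 0.03]
# 247Cm atoms produced per 129I atom: 1/lambda_L = 0.1 and 1/lambda_M = 0.01.
cm_per_I = {"L": 0.1, "M": 0.01}

# Each (h1, h2, h3) type assignment is equally likely (12.5%).
ratios = {c: round(1.0 / sum(fi * cm_per_I[s] for fi, s in zip(f, c)), 1)
          for c in product("LM", repeat=3)}
for combo, r in ratios.items():
    print("".join(combo), r)
```

Running this yields 10.1 for $(S_L,S_L,S_L)$, 101.0 for $(S_M,S_M,S_M)$, and the intermediate values 42.7, 48.3, and 79.4 for the mixed cases with ${\rm h1}=S_M$, matching the list above.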
\begin{table*}
\caption{Probability distribution of $^{129}$I/$^{247}$Cm ratio in the ESS for concordance criteria T10 and T20 (see text) for the models listed in Table~\ref{tab:minimum}.
Values with the minimal criteria T0 are the same as in Table~\ref{tab:minimum}. Unit for $D$ is kpc$^2$ Gyr$^{-1}$.}
\centering
\begin{tabular}{c c c c c c c c c c c}
\hline\hline
Model& &\multicolumn{9}{c|}{Probability of $^{129}$I/$^{247}$Cm within an interval }\\ [0.5ex]
\hline
$(D,M^{\rm ej}_{LM})$ &Criteria & 10--20 & 20--30 & 30--40& 40--50 & 50--60& 60--70 & 70--80 & 80--90 & 90--110 \\ [0.5ex]
\hline
\multirow{3}{*}{(0.3,1/3)} &T0 &0.41 &0.11 &0.07 &0.04 &0.04 &0.04 &0.05&0.08& 0.18\\
&T10 & 0.00 & 0.10 &0.56 &0.32 &0.03 &0.00 & 0.00&0.00&0.00\\
&T20 & 0.00 & 0.24 &0.35 &0.28 &0.13 &0.00 & 0.00&0.00&0.00\\
\hline
\multirow{3}{*}{(0.1,1/3)} & T0 &0.45 &0.06 &0.04 &0.03 &0.03 &0.03 &0.03&0.04& 0.30\\
&T10& 0.00 & 0.10&0.59 &0.29 &0.03 &0.00 & 0.00& 0.00& 0.00\\
&T20& 0.00 &0.22 &0.34 &0.27 &0.15 &0.02 & 0.00&0.00 & 0.00\\
\hline
\hline
\multirow{3}{*}{(0.3,1)}& T0 &0.56 &0.09 &0.06 &0.05 &0.04 &0.03 &0.03&0.04& 0.09\\
&T10& 0.32 &0.68&0.00 &0.00 &0.00 &0.00 &0.00&0.00& 0.00\\
&T20& 0.46 &0.47&0.07 &0.00 &0.00 &0.00 &0.00&0.00& 0.00\\
\hline
\multirow{3}{*}{(0.1,1)}&T0 &0.63 &0.06 &0.04 &0.03 &0.01 &0.02 &0.02&0.03& 0.17\\
&T10 & 0.35 & 0.63&0.02 &0.00 &0.00 &0.00 & 0.00& 0.00& 0.00\\
&T20 & 0.50 &0.39 &0.11 &0.00 &0.00 &0.00 & 0.00&0.00& 0.00\\
\hline
\hline
\multirow{3}{*}{(0.3,3)}& T0& 0.75 &0.07&0.04 &0.03 &0.02 &0.02 &0.02&0.02& 0.03\\
&T10& 0.98 &0.02&0.00 &0.00 &0.00 &0.00 &0.00&0.00& 0.00\\
&T20& 0.93 &0.07&0.00 &0.00 &0.00 &0.00 &0.00&0.00& 0.00\\
\hline
\multirow{3}{*}{(0.1,3)}&T0& 0.75 &0.04&0.03 &0.02 &0.02 &0.01 &0.02&0.03& 0.10\\
&T10& 0.98 &0.02&0.00 &0.00 &0.00 &0.00 &0.00&0.00& 0.00\\
&T20& 0.95 &0.05&0.00 &0.00 &0.00 &0.00 &0.00&0.00& 0.00\\
\hline
\end{tabular}
\label{tab:2sourcesLM}
\end{table*}
\begin{figure*}
\centerline{\includegraphics[width=\textwidth]{Dp1_LM_hist.pdf}}
\caption{Probability distribution of the $^{129}$I/$^{247}$Cm ratio in the ESS for two equally frequent \textsl{r}-process sources $S_L$ and $S_M$ with $\lambda_L=10$ and $\lambda_M=100$ for $M_{LM}^{\rm ej}=1$ (black), 3 (red), and 1/3 (blue) and tolerances T0 (left vertical panel), T10 (middle vertical panel), and T20 (right vertical panel) corresponding to the values listed in Table~\ref{tab:2sourcesLM}. All models have $D=0.1$ kpc$^2$ Gyr$^{-1}$.}
\label{fig:Dp1_LM_hist}
\end{figure*}
The probabilities estimated from the above simple illustration using the mean values of $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$ agree qualitatively with the values listed in Table~\ref{tab:minimum}. The quantitative differences are due to the fact that the mean values do not fully represent the distributions of $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$. This is particularly true for models with the lower value of $D=0.1$ kpc$^2$ Gyr$^{-1}$, whose distributions have long tails, as evident from the large 95th percentile ranges along with the large differences between the mean and median values.
Nevertheless, the simple analysis clearly illustrates the fact that $^{247}$Cm can receive a substantial contribution from the subdominant ${\rm h2}$ and ${\rm h3}$ events, which directly impacts the $^{129}$I/$^{247}$Cm ratio. In particular, this shows why the probability distribution peaks at values closer to $\lambda_L$ rather than $\lambda_M$ and why it also takes
intermediate values.
\begin{table}
\caption{The mean values of $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$ for $^{129}$I for the models listed in Table~\ref{tab:minimum}.
Values with the minimal criteria T0 are the same as in Table~\ref{tab:h1h2h3_mimimum}. Unit for $D$ is kpc$^2$ Gyr$^{-1}$.}
\centering
\begin{tabular}{c c c c c c}
\hline
$(D,M_{LM}^{ej})$& Criteria &$f^{\rm ^{129}I}_{\rm h1}$ & $f^{\rm ^{129}I}_{\rm h2}$ & $f^{\rm ^{129}I}_{\rm h3}$ & $f^{\rm ^{129}I}_{\rm h1+h2+h3}$\T \B\\
\hline
\multirow{3}{*}{ (0.3,1/3)} & T0 &0.702 &0.167 &0.062 &0.931\\
&T10 &0.693 &0.196 &0.071 &0.960\\
&T20 &0.591 &0.196 &0.094 &0.882\\
\hline
\multirow{3}{*}{ (0.1,1/3)} & T0 & 0.838&0.119 & 0.028&0.985\\
&T10 &0.690 &0.198 & 0.073&0.961\\
&T20 &0.691 &0.198 &0.070&0.960\\
\hline
\hline
\multirow{3}{*}{ (0.3,1)} & T0 &0.701 &0.164 &0.062 &0.927\\
&T10 &0.458 &0.292 &0.116 &0.865\\
&T20 &0.483 &0.273 &0.111 &0.868\\
\hline
\multirow{3}{*}{ (0.1,1)} & T0 &0.823 &0.127 &0.032 &0.982\\
&T10 &0.525 & 0.340& 0.087&0.952\\
&T20 &0.561 &0.311& 0.085&0.957\\
\hline
\hline
\multirow{3}{*}{ (0.3,3)} & T0 &0.719 &0.158 &0.058 &0.936\\
&T10 & 0.639&0.249 &0.072 &0.959\\
&T20 & 0.626&0.200 &0.079 &0.905\\
\hline
\multirow{3}{*}{ (0.1,3)} & T0 &0.826 &0.124 &0.031 &0.982\\
&T10 &0.656 &0.246 & 0.063&0.965\\
&T20 &0.768 &0.168 &0.040 &0.977\\
\hline
\hline
\end{tabular}
\label{tab:h1h2h3_concordance}
\end{table}
\subsection{Results with Concordant Decay Interval}\label{sec:concordant}
So far, we have considered Milky Way realisations that satisfy the minimum criteria T0, namely, that the ratios of $^{129}$I/$^{127}$I, $^{247}$Cm/$^{235}$U, and $^{244}$Pu/$^{238}$U are higher than the mean measured values in the ESS. This, however, does not guarantee that the ratios are actually compatible with the measurements. As mentioned before, after the star forming gas in the molecular or giant molecular cloud decouples from the ISM, each SLR undergoes free decay for the same length of isolation time $\Delta_{\rm iso}$ before the formation of meteorites in the ESS \citep{wasserburg2006,lugaro2018}. The value of $\Delta_{\rm iso}$ can be directly calculated for each SLR separately from the relation
\begin{equation}
\left( \frac{N_R}{N_I}\right )_{\rm ESS}\simeq\left( \frac{N_R}{N_I}\right )_{\rm iso}e^{-\Delta_{\rm iso}/\tau_R},
\label{eq:Delta}
\end{equation}
where $(N_R/N_I)_{\rm ESS}$ is the meteoritic ratio of the SLR $R$ with respect to the stable or long-lived isotope $I$ and $(N_R/N_I)_{\rm iso}$ is the corresponding ratio when the star forming gas decouples from the ISM at $t_{\rm iso}$\footnote{The result in Eq.~\ref{eq:Delta} holds exactly when the long-lived isotope $I$ is stable or when the SLR $R$ does not decay to $I$.}. In order for the ratios to be compatible with the ESS measurements, the values of $\Delta_{\rm iso}$ calculated for $^{129}$I/$^{127}$I, $^{247}$Cm/$^{235}$U, and $^{244}$Pu/$^{238}$U need to match. An exact match, however, is not a meaningful requirement, as the probability for the three different values of $\Delta_{\rm iso}$ to be exactly identical is zero. We can, however, define compatibility within some tolerance range about the mean measured value of $(N_R/N_I)_{\rm ESS}$. In this case, for each of the three SLR to stable isotope ratios, we get a range of $\Delta_{\rm iso}=(\Delta^{\rm min}_{\rm iso},\Delta^{\rm max}_{\rm iso})$.
We consider a realisation to be compatible (concordant) with the ESS measurements if the range of the three $\Delta_{\rm iso}$ values overlap.
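As an illustration of this overlap test, the sketch below inverts the free-decay relation of Eq.~\ref{eq:Delta} for each ratio and checks whether the resulting $\Delta_{\rm iso}$ intervals intersect. The ratios at $t_{\rm iso}$ are hypothetical placeholders standing in for one simulated realisation, and the lifetimes are approximate (mean lifetime $=$ half-life$/\ln 2$):

```python
import math

TOL = 0.10   # tolerance of criteria T10 (use 0.20 for T20)

# ESS ratio and approximate mean lifetime tau = t_half/ln2 [Myr] per chronometer.
ess = {
    "129I/127I":  (1.28e-4, 15.7 / math.log(2)),
    "247Cm/235U": (5.6e-5,  15.6 / math.log(2)),
    "244Pu/238U": (7.0e-3,  80.0 / math.log(2)),
}
# Hypothetical ratios at t_iso, standing in for one simulated realisation.
iso = {"129I/127I": 9.0e-4, "247Cm/235U": 3.6e-4, "244Pu/238U": 1.05e-2}

# Invert r_ESS = r_iso * exp(-Delta/tau):  Delta = tau * ln(r_iso / r_ESS),
# with the tolerance applied to the measured ESS value.
intervals = {k: (tau * math.log(iso[k] / (r * (1 + TOL))),
                 tau * math.log(iso[k] / (r * (1 - TOL))))
             for k, (r, tau) in ess.items()}

lo = max(a for a, _ in intervals.values())  # latest allowed lower edge
hi = min(b for _, b in intervals.values())  # earliest allowed upper edge
print(intervals, "concordant:", lo <= hi)
```

A realisation is accepted when `lo <= hi`, i.e., when all three $(\Delta^{\rm min}_{\rm iso},\Delta^{\rm max}_{\rm iso})$ intervals share a common value.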
\begin{table*}
\caption{Probability distribution of $^{129}$I/$^{247}$Cm ratio in the ESS for criteria T0, T10 and T20 (see text) for two equally frequent \textsl{r}-process sources: one MRSN-like $S_H$ source with $\lambda_H=1000$ and one BNSM-like $S_L$ source with $\lambda_L=10$. Unit for $D$ is kpc$^2$ Gyr$^{-1}$.}
\centering
\begin{tabular}{c c c c c c c c c c c c}
\hline\hline
Model& &\multicolumn{10}{c|}{Probability of $^{129}$I/$^{247}$Cm within an interval }\\ [0.5ex]
\hline
$(D,M^{\rm ej}_{LH})$ &Criteria & 10--20 & 20--30 & 30--50 &50--70 & 70--80 & 80--90 & 90--110 &110--130&130--150& $>150$\\ [0.5ex]
\hline
\multirow{3}{*}{(0.1,1)} &T0 & 0.63& 0.06& 0.05& 0.03& 0.01& 0.01& 0.02& 0.01& 0.01& 0.17\\
&T10 & 0.04& 0.69& 0.27& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 \\
&T20 & 0.16& 0.49& 0.34& 0.01& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 \\
\hline
\multirow{3}{*}{(0.3,1)} &T0 &0.53& 0.09& 0.08& 0.04& 0.02& 0.01& 0.02& 0.01& 0.02& 0.17\\
&T10 & 0.03& 0.84& 0.13& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 \\
&T20 & 0.21& 0.50& 0.29& 0.01& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 \\
\hline
\hline
\multirow{3}{*}{(0.1,1/3)} &T0 & 0.47& 0.05& 0.06& 0.03& 0.02& 0.01& 0.02& 0.01& 0.01& 0.32\\
&T10 &0.00 &0.00& 0.25& 0.60& 0.10& 0.04& 0.01& 0.00& 0.00& 0.00\\
&T20 &0.00 &0.00& 0.33& 0.42& 0.12& 0.07& 0.05& 0.00& 0.00& 0.00\\
\hline
\multirow{3}{*}{(0.3,1/3)} &T0 & 0.37& 0.08& 0.09& 0.06& 0.02& 0.01& 0.03& 0.02& 0.01& 0.31 \\
&T10 & 0.00& 0.10& 0.88& 0.03& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 \\
&T20 & 0.00& 0.00& 0.45& 0.41& 0.10& 0.03& 0.00& 0.00& 0.00& 0.00 \\
\hline
\hline
\multirow{3}{*}{(0.1,3)} &T0 & 0.76& 0.04& 0.05& 0.02& 0.01& 0.00& 0.01& 0.01& 0.00& 0.10\\
&T10 &0.98& 0.02& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
&T20 &0.88& 0.12& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
\hline
\multirow{3}{*}{(0.3,3)} &T0 &0.70& 0.07& 0.06& 0.03& 0.01& 0.01& 0.01& 0.01& 0.01& 0.09\\
&T10 &0.96& 0.04& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
&T20 &0.87& 0.13& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
\hline
\end{tabular}
\label{tab:2sourcesLH}
\end{table*}
We adopt tolerance levels of $10\%$ (criteria T10) and $20\%$ (criteria T20) about the mean measured meteoritic value in the ESS for each of the three ratios. Table~\ref{tab:2sourcesLM} shows the
resulting probabilities for the ESS $^{129}$I/$^{247}$Cm ratio in different intervals between $\lambda_L$ and $\sim\lambda_M$.
When concordance is imposed with the adopted tolerances, the probability distributions are dramatically different compared to the case where only the minimum criteria (T0) is imposed (see Fig.~\ref{fig:Dp1_LM_hist}). For both concordance criteria, the probability distribution for the $^{129}$I/$^{247}$Cm ratio is confined to values $\lesssim 6\lambda_L$ for all models. In fact, for $M^{\rm ej}_{LM}=1$ and $3$, the $^{129}$I/$^{247}$Cm ratio is limited to values $\leq 4\lambda_L$.
Only the models with $M^{\rm ej}_{LM}=1/3$ have a non-negligible probability, of up to 17\%, for the $^{129}$I/$^{247}$Cm ratio to be larger than $5 \lambda_L$.
Thus, remarkably, even in the case where $S_M$ is the major contributor to the \textsl{r}-process, i.e., $M^{\rm ej}_{LM}=1/3$, the probability for the $^{129}$I/$^{247}$Cm ratio to be close to $\lambda_M$ is zero. As we can see from Table~\ref{tab:2sourcesLM}, in all models with the T10 and T20 criteria, the probability is strongly peaked at values either close to $\lambda_L$ or at intermediate values of $\sim 3$--$5\lambda_L$. Models with $M^{\rm ej}_{LM}=3$, i.e., with $S_L$ as the dominant \textsl{r}-process source, where the ratio is almost entirely limited to values close to $\lambda_L$, are the only ones where the measured $^{129}$I/$^{247}$Cm ratio corresponds to the true production ratio of one of the sources with almost $100\%$ probability. In all other cases, the ESS value of $^{129}$I/$^{247}$Cm can only constrain the range spanned by $\lambda_L$ and $\lambda_M$ but cannot directly constrain their exact values.
\begin{figure*}
\centerline{\includegraphics[width=\textwidth]{Dp1_LH_hist.pdf}}
\caption{Probability distribution of the $^{129}$I/$^{247}$Cm ratio in the ESS for two equally frequent \textsl{r}-process sources $S_L$ and $S_H$ with $\lambda_L=10$ and $\lambda_H=1000$ for $M_{LM}^{\rm ej}=1$ (black), 3 (red), and 1/3 (blue) and tolerances T0 (left vertical panel), T10 (middle vertical panel), and T20 (right vertical panel) corresponding to the values listed in Table~\ref{tab:2sourcesLH}. All models have $D=0.1$ kpc$^2$ Gyr$^{-1}$.}
\label{fig:Dp1_LH_hist}
\end{figure*}
The dramatic change in the probability distribution when we apply the T10 or T20 criteria is primarily due to the requirement of concordance for the $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U ratios.
Because $^{129}$I and $^{247}$Cm have almost identical lifetimes, and $^{127}$I and $^{235}$U are stable or relatively long-lived, the relative ratio of $^{129}$I/$^{127}$I to $^{247}$Cm/$^{235}$U
is essentially unaffected by the free decay during the typical isolation time $\Delta_{\rm iso}\lesssim 50$ Myr.
Therefore, concordance requires the $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U ratios to already be compatible, within the tolerance level, at $t_{\rm iso}$. In the scenario where $\nu_L=\nu_M$, both types of sources contribute equally to $^{127}$I, whereas $^{235}$U comes mostly from the $S_L$ source, as it produces ten times more $^{235}$U per event than $S_M$. In addition, if we consider the simplified approximation where the SLRs $^{129}$I and $^{247}$Cm mostly come from one major event, the contribution of this event to the total amounts of $^{127}$I and $^{235}$U accumulated from all past events is negligible.
Thus, the $^{129}$I/$^{127}$I ratio after the last major event is the same whether that event is of type $S_M$ or $S_L$, as both sources produce identical amounts of $^{129}$I.
However, the $^{247}$Cm/$^{235}$U ratio is a factor of $\sim 10$ lower if the last major event is $S_M$ rather than $S_L$, as the former produces 10 times less $^{247}$Cm.
With the adopted production factors, only the scenario where the last major event is $S_L$ can result in a $^{129}$I/$^{127}$I ratio comparable to that of $^{247}$Cm/$^{235}$U at $(R_\odot,t_\odot)$.
If the last major event is $S_M$, the $^{247}$Cm/$^{235}$U ratio is about an order of magnitude lower, which substantially lowers the chances of concordance.
This leads to a probability distribution of the $^{129}$I/$^{247}$Cm ratio that is mostly limited to low and intermediate values for criteria T10 and T20.
The criteria for concordance also affect the fractions contributed by the three highest contributing events. Table~\ref{tab:h1h2h3_concordance} shows the mean values of $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$ for $^{129}$I with the concordance criteria T10 and T20 along with the minimum criteria T0. Overall, compared to T0, for T10 and T20 the fraction contributed by the highest contributing event h1 decreases while the fractions contributed by h2 and h3 increase. With the concordance criteria, the mean values of $f_{\rm h1}$, $f_{\rm h2}$, and $f_{\rm h3}$ range from $\sim 0.53$--0.76, 0.17--0.34, and 0.04--0.12, compared to $0.70$--0.84, 0.12--0.17, and 0.03--0.06, respectively, for the minimum criteria.
\begin{table*}
\caption{Probability distribution of $^{129}$I/$^{247}$Cm ratio in the ESS for criteria T0, T10 and T20 (see text) for three equally frequent \textsl{r}-process sources: one MRSN-like $S_H$ source with $\lambda_H=1000$, one BNSM-like $S_M$ source with $\lambda_M=100$, and one BNSM-like $S_L$ source with $\lambda_L=10$. Unit for $D$ is kpc$^2$ Gyr$^{-1}$.
}
\centering
\begin{tabular}{c c c c c c c c c c c c}
\hline\hline
Model& &\multicolumn{10}{c|}{Probability of $^{129}$I/$^{247}$Cm within an interval }\\ [0.5ex]
\hline
$(D,M^{\rm ej}_{LH},M^{\rm ej}_{LM})$ &Criteria & 10--20 & 20--30 & 30--50 &50--70 & 70--80 & 80--90 & 90--110 &110--130&130--150& $>150$\\ [0.5ex]
\hline
\multirow{3}{*}{(0.1,1,1)} &T0 & 0.44& 0.05 & 0.06 & 0.03 & 0.02 & 0.04 & 0.18 & 0.02 & 0.02&0.14\\
&T10 & 0.00 & 0.19 &0.76 &0.05 &0.00 &0.00 & 0.00 &0.00 &0.00&0.00\\
&T20 & 0.02 & 0.25 &0.61 &0.12 &0.01 &0.00 & 0.00 &0.00 &0.00&0.00\\
\hline
\multirow{3}{*}{(0.3,1,1)} &T0 & 0.40 &0.10& 0.11& 0.06& 0.03& 0.04& 0.10& 0.02& 0.01& 0.13 \\
&T10 & 0.00& 0.16& 0.84& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 \\
&T20 & 0.00& 0.32& 0.61& 0.07& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 \\
\hline
\hline
\multirow{3}{*}{(0.1,3,3)} &T0 & 0.69 &0.04 &0.05 &0.03 &0.01 &0.02 &0.11 &0.01 & 0.01&0.05\\
&T10 &0.87 &0.13 & 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 & 0.00\\
&T20 &0.86 &0.14 & 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 & 0.00\\
\hline
\multirow{3}{*}{(0.3,3,3)} &T0 & 0.60& 0.09& 0.08& 0.05& 0.03& 0.03& 0.05& 0.01& 0.00& 0.05\\
&T10 &0.91 &0.09 & 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 & 0.00\\
&T20 &0.84 &0.16 & 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00 & 0.00\\
\hline
\hline
\multirow{3}{*}{(0.1,1,1/3)} &T0 & 0.31& 0.05& 0.04& 0.05& 0.03& 0.02& 0.22& 0.03& 0.01&0.23\\
&T10 & 0.00& 0.00 &0.01& 0.25 & 0.18& 0.16& 0.40& 0.01& 0.00 &0.00\\
&T20 & 0.00& 0.00 &0.03& 0.13 & 0.09& 0.08& 0.61& 0.05& 0.01 &0.00\\
\hline
\multirow{3}{*}{(0.3,1,1/3)} &T0 &0.21 & 0.08 &0.09& 0.07& 0.03& 0.04& 0.17& 0.04& 0.02& 0.26\\
&T10 & 0.00& 0.00 &0.00& 0.50& 0.31& 0.16& 0.04 &0.00& 0.00& 0.00 \\
&T20 & 0.00& 0.00 &0.06& 0.25& 0.14& 0.15& 0.38 &0.02& 0.00& 0.00 \\
\hline
\hline
\multirow{3}{*}{(0.1,1/3,1)} &T0& 0.36& 0.06& 0.06& 0.03& 0.02& 0.02& 0.16& 0.02& 0.01&0.25\\
&T10 & 0.00 &0.00& 0.09& 0.46 &0.15 &0.13 &0.17& 0.01& 0.00&0.00 \\
&T20 & 0.00 &0.00& 0.11& 0.20 &0.12 &0.12 &0.42& 0.04& 0.00&0.00 \\
\hline
\multirow{3}{*}{(0.3,1/3,1)} &T0& 0.28 &0.09& 0.10& 0.06& 0.03& 0.03 & 0.09 & 0.02& 0.03& 0.27\\
&T10 & 0.00 &0.00& 0.12& 0.75& 0.11& 0.02& 0.00& 0.00& 0.00& 0.00 \\
&T20 & 0.00 &0.00& 0.21& 0.39& 0.19& 0.12& 0.09& 0.00& 0.00& 0.00 \\
\hline
\end{tabular}
\label{tab:3sources}
\end{table*}
\subsection{Results with Two Sources of Type $S_L$ and $S_H$}
We also explored the scenario with two sources $S_L$ and $S_H$, where the former is a BNSM-like source and the latter is an MRSN-like source, with $\lambda_L=10$ and $\lambda_H=1000$.
As before, we assume that both sources are equally frequent and consider three different relative ejecta mass ratios of $M_{LH}^{\rm ej}=1/3$, $1$, and $3$. In the case of the minimum criteria T0, the probability distribution has a peak at values close to $\lambda_L$ with a very long tail that extends up to $\lambda_H$. The reason is similar to the scenario discussed before with $S_L$ and $S_M$: even if the main contributing h1 event is of type $S_H$, a minor contribution from $S_L$, either as h2 or h3, produces a significant amount of $^{247}$Cm, which decreases the $^{129}$I/$^{247}$Cm ratio and brings it closer to $\lambda_L$. When the criteria for concordance, i.e., T10 or T20, are imposed, the probability distribution for $^{129}$I/$^{247}$Cm again changes drastically (see Fig.~\ref{fig:Dp1_LH_hist}). In this case, the probability distribution is primarily limited to values $\lesssim 7\lambda_L$ and is negligible for values $\gtrsim \lambda_M$ in all cases. When $M_{LH}^{\rm ej}=3$, the distribution
strongly peaks at $\sim \lambda_L$, whereas for $M_{LH}^{\rm ej}=1$, the peak of the probability is at $\sim 2$--$5\lambda_L$. Even when $S_H$ is the dominant source, i.e., $M_{LH}^{\rm ej}=1/3$, the peak of the probability distribution is at $\sim 5$--$7\lambda_L$, with zero probability for values $>11\lambda_L$.
The reason for the dramatic change in the probability distribution when T10 or T20 is imposed is again similar to the scenario discussed before with $S_L$ and $S_M$. The allowed values for the $^{129}$I/$^{247}$Cm ratio are governed by the requirement of concordance for $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U. As before, both sources contribute equally to the total $^{127}$I, but almost all of the $^{235}$U comes from $S_L$. Thus, if the last few events are dominated by $S_H$, the $^{247}$Cm/$^{235}$U ratio is suppressed by a factor of $\sim 100$ relative to $^{129}$I/$^{127}$I, which makes concordance impossible.
\subsection{Results with Two Sources of Type $S_M$ and $S_H$}
Because we are dealing with isotopic ratios, the results for the evolution of the $^{129}$I/$^{247}$Cm ratio for fixed values of $\lambda_i/\lambda_j$ and $\nu_i/\nu_j$ can be used to obtain the corresponding results for $\kappa\lambda_i$ and $\kappa\lambda_j$ by simply multiplying the results for $^{129}$I/$^{247}$Cm by the corresponding factor $\kappa$. Thus, the results for the case of two types of sources $S_L$ and $S_M$ with $\lambda_L=10$ and $\lambda_M=100$ can be used to obtain the results for the case with two sources $S_M$ and $S_H$ with $\lambda_M=100$ and $\lambda_H=1000$ and $\nu_{MH}=1$ by scaling up the probability distribution of the $^{129}$I/$^{247}$Cm ratio by a factor of 10. In this case, with the criteria T10 and T20, the probability distribution is mostly limited to values of $\lesssim 5\lambda_M$ and strongly peaks at values of $\sim 300$--500.
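This scaling follows from the fact that the predicted ratio is a weighted harmonic mean of the per-event production ratios, so multiplying all $\lambda_i$ by $\kappa$ multiplies the result by $\kappa$. A minimal numerical check, with hypothetical weights:

```python
# Hypothetical 129I fractions of three contributing events and their
# per-event 129I/247Cm production ratios (h1 of type S_M, h2 and h3 of S_L).
w = [0.84, 0.12, 0.03]
lam = [100.0, 10.0, 10.0]

def ess_ratio(w, lam):
    # Weighted harmonic mean: total 129I over total 247Cm.
    return sum(w) / sum(wi / li for wi, li in zip(w, lam))

kappa = 10.0
r_LM = ess_ratio(w, lam)                       # sources (lambda_L, lambda_M)
r_MH = ess_ratio(w, [kappa * l for l in lam])  # sources (lambda_M, lambda_H)
print(r_LM, r_MH / r_LM)  # the second number equals kappa (up to rounding)
```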
\begin{figure*}
\centerline{\includegraphics[width=\textwidth]{Dp1_LMH_hist.pdf}}
\caption{Probability distribution of the $^{129}$I/$^{247}$Cm ratio in the ESS for three equally frequent \textsl{r}-process sources $S_L$, $S_M$, and $S_H$ with $\lambda_L=10$, $\lambda_M=100$, and $\lambda_H=1000$, respectively, for $(M_{LM}^{\rm ej},M_{LH}^{\rm ej})=(1,1)$ (black), $(3,3)$ (red), $(1/3,1)$ (blue), and $(1,1/3)$ (green) and tolerances T0 (left vertical panel), T10 (middle vertical panel), and T20 (right vertical panel) corresponding to the values listed in Table~\ref{tab:3sources}. All models have $D=0.1$ kpc$^2$ Gyr$^{-1}$.
}
\label{fig:Dp1_LMH_hist}
\end{figure*}
\subsection{Result with Three Sources}
Finally, we consider the scenario with three different sources, $S_L$, $S_M$, and $S_H$, with distinct values of the $^{129}$I/$^{247}$Cm production ratio covering two orders of magnitude, ranging from low ($\lambda_L=10$) to medium ($\lambda_M=100$) to high ($\lambda_H=1000$).
The frequencies of all sources are taken to be equal. In this case, there are two parameters for the ratio of the mass of \textsl{r}-process ejecta, denoted by $M_{LH}^{\rm ej}$ and $M_{LM}^{\rm ej}$ and corresponding to the ejecta mass ratio of $S_L$ relative to $S_H$ and of $S_L$ relative to $S_M$, respectively. We simulate four different scenarios. In the first, all sources contribute equally to the main \textsl{r}-process, i.e., $M_{LH}^{\rm ej}=M_{LM}^{\rm ej}=1$. In the other three scenarios, one of the sources is the dominant main \textsl{r}-process source, with three times more ejecta mass than each of the other two sources.
As before, for each scenario we consider two different values of $D$. The probability distributions of the $^{129}$I/$^{247}$Cm ratio for the three different criteria T0, T10, and T20 are listed in Table~\ref{tab:3sources}; the corresponding distributions for $D=0.1$ kpc$^2$ Gyr$^{-1}$ are shown in Fig.~\ref{fig:Dp1_LMH_hist}.
The overall results are roughly an average of the results derived for the two-source scenarios ($S_L$, $S_M$) and ($S_L$, $S_H$).
When the minimum criteria T0 is applied, the probability distribution for the $^{129}$I/$^{247}$Cm ratio has its highest peak close to $\lambda_L$, with values ranging from $\sim 0.2$--$0.7$, and a smaller peak at $\sim \lambda_M$, with values ranging from $\sim 0.09$--$0.22$, along with a probability for intermediate values between $\lambda_L$ and $\lambda_M$ ranging from $\sim 0.09$--$0.17$. The probability distribution drops sharply for values $\gtrsim \lambda_M$ and has a long and relatively flat tail that extends to values up to $\sim \lambda_H$.
When the criteria for concordance are imposed, the probability distribution changes drastically, similarly to the cases with two sources. Firstly, the long tail of the probability distribution above $\gtrsim \lambda_M$ vanishes completely. As with the two-source models, this change is due to the requirement of concordance for $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U, such that if the last major contributors are events of the $S_H$ type, concordance is impossible.
There is, however, a significant difference in the probability distribution for three sources when compared to the results for two sources. In the case of two sources, for all models, the probability distribution peaks at values $\sim \lambda_L=10$ or at intermediate values of $\sim 50$, with negligible or very low probability for values close to $\lambda_M$. In contrast, for three sources, the probability distribution has either peaks or relatively high values close to $\lambda_M$ for models where $S_M$ or $S_H$ is the dominant \textsl{r}-process source. However, in all such models, there is a substantial probability, ranging from 0.16--0.60, for the ratio to have intermediate values between 30--70. In the model with equal ejecta mass for all sources, the probability distribution peaks at intermediate values. Only for the model where $S_L$ is the dominant \textsl{r}-process source is the entire probability distribution limited to values close to $\lambda_L$, with negligible probability for values $\gtrsim 30$.
\subsection{Isolation Time and Time of ``Last" Event}
It is interesting to consider the typical isolation time $\Delta_{\rm iso}$ in our models. In addition, we can also consider the time interval $\delta_{\rm h1}$ between the highest $^{129}$I contributing event and the beginning of isolation at $t_{\rm iso}$.
Among the three highest contributors h1, h2, and h3, h1 is not necessarily the most recent contributing event. We therefore additionally define $\delta_{\rm last}$ as the time interval between the most recent event among h1, h2, and h3 and $t_{\rm iso}$. The effective isolation times corresponding to h1 and to the last event can be defined as $\Delta_{\rm iso}^{\rm h1}= \Delta_{\rm iso}+\delta_{\rm h1}$ and $\Delta_{\rm iso}^{\rm last}= \Delta_{\rm iso}+\delta_{\rm last}$, respectively. We find that the distributions of $\Delta_{\rm iso}$, $\Delta_{\rm iso}^{\rm h1}$, and $\Delta_{\rm iso}^{\rm last}$ are similar for the four different scenarios considered in this work. For the purpose of illustration, the distributions of $\Delta_{\rm iso}$, $\Delta_{\rm iso}^{\rm h1}$, and $\Delta_{\rm iso}^{\rm last}$ are shown in Fig.~\ref{fig:Delta_LM_hist} for the scenario with two sources $S_L$ and $S_M$, for models with $D=0.1$ kpc$^2$ Gyr$^{-1}$ and criteria T20.
The distribution of $\Delta_{\rm iso}$ peaks at values $\lesssim 50$~Myr, which is consistent with the typical lifetimes associated with molecular and giant molecular clouds \citep{hartmann2001,murray2011}. The 68th percentile range for $\Delta_{\rm iso}^{\rm h1}$ is $\sim 100$--140 Myr, whereas the corresponding range for $\Delta_{\rm iso}^{\rm last}$ is $\sim 80$--115 Myr. The central values are broadly consistent with recent calculations by \citet{cote2019}, where the total isolation time for $^{129}$I was found to be $\sim 85$--116 Myr. Interestingly, in our case, the probability for the effective isolation time to be $\lesssim 50$ Myr is low, with typical values $\lesssim 5\%$, but not zero.
\begin{figure*}
\centerline{\includegraphics[width=\textwidth]{Delta_LM_hist.pdf}}
\caption{Probability distribution of $\Delta_{\rm iso}$, $\Delta_{\rm iso}^{\rm h1}$, and $\Delta_{\rm iso}^{\rm last}$ for two equally frequent \textsl{r}-process sources $S_L$ and $S_M$ with $\lambda_L=10$ and $\lambda_M=100$ for $M_{LM}^{\rm ej}=1$ (black), 3 (red), and 1/3 (blue). All models have $D=0.1$ kpc$^2$ Gyr$^{-1}$ and tolerance T20.}
\label{fig:Delta_LM_hist}
\end{figure*}
\section{Discussion \& Conclusions}
In this work we explored the evolution of the SLRs produced by \textsl{r}-process at the solar location due to two or three different \textsl{r}-process sources that have distinct $^{129}$I/$^{247}$Cm production ratios. In contrast to the conclusion reached in \citet{Cote2021}, we find that in general, the observed ESS ratio for $^{129}$I/$^{247}$Cm does not correspond to a single ``last'' event. Although there is a major contributing last event, it accounts for only $\sim 50$--75\% of all the $^{129}$I in the ESS and at least two more minor contributing events are required to account for $\gtrsim 95\%$ of the observed $^{129}$I. This has a large impact on the probability distribution for the $^{129}$I/$^{247}$Cm ratio in the ESS
that depends on the particular choice of parameters such as the ratio of ejecta masses, and the relative frequency of the sources.
One of the reasons for the difference between our conclusion and the one reached by \citet{Cote2021} is
related to the
prescription for modelling the evolution of \textsl{r}-process elements.
The turbulent gas diffusion prescription used in this work is very different from the one used in \citet{Cote2021} where a stochastic one-zone model was used. Although both prescriptions are able to model the stochasticity of occurrence time of \textsl{r}-process events, the diffusion prescription can also capture the stochasticity associated with the spatial location, distance of the event from the solar location as well as the corresponding dilution. The other important reason is the application of the criteria for concordance used in our work, which
involves using the ESS ratio of $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U that is different from what was considered in \citet{Cote2021}.
As mentioned earlier, the concordance imposes additional constraints that change the ESS $^{129}$I/$^{247}$Cm ratio substantially.
Although we find that the ESS $^{129}$I/$^{247}$Cm ratio measured in meteorites cannot be used to directly constrain the ``last'' \textsl{r}-process event, the concordance criteria used in our work still allows us to put interesting constraints on the nature of \textsl{r}-process sources when combined with theoretical nucleosynthetic calculations for various astrophysical sources.
Below, we take the results of \textsl{r}-process nucleosynthesis calculations for the various astrophysical scenarios and nuclear physics inputs by \citet{Cote2021} as examples for illustration and discussion.
The astrophysical models considered in \citet{Cote2021} are, i) dynamical ejecta from NS-NS and NS-BH mergers that are the most neutron rich and have the lowest values of $^{129}$I/$^{247}$Cm, ii) three different NS-NS merger disk ejecta numbered 1, 2 and 3 with varying levels of neutron richness resulting in higher values of $^{129}$I/$^{247}$Cm compared to the dynamical ejecta, and iii) MRSN ejecta that have the highest values of $^{129}$I/$^{247}$Cm. The values of $^{129}$I/$^{247}$Cm in all models are sensitive to the nuclear physics inputs where three different nuclear reaction rates and three different fission fragment distributions were considered for each model.
The ranges of the $^{129}$I/$^{247}$Cm ratio in these models are roughly 10--100 for the neutron-rich dynamical ejecta, 250--1000 for disk ejecta 1, 50--250 for disk ejecta 2, 25--150 for disk ejecta 3, and 1000--6000 for MRSN ejecta.
We identify
the dynamical ejecta from NS-NS and NS-BH mergers as the $S_L$ source, one of the three different disk ejecta as the $S_M$ source, and the MRSN ejecta as the $S_H$ source.
Considering two equally frequent \textsl{r}-process sources $S_L$ and $S_M$ with
$\lambda_M/\lambda_{L}\approx 10$,
we can draw the following conclusions when the concordance criteria (T10 or T20) are imposed:
\begin{itemize}
\item Because the probability distribution can have a maximum value of $\sim \lambda_M$, the possibility of NS-NS merger disk ejecta 3 as $S_M$ is ruled out.
This is simply due to the fact that for all NS-NS merger disk ejecta 3 models, $\lambda \lesssim 150$, making it incompatible with the observed value of $438\pm 184$.
\item The $S_L$ source as the NS-NS or NS-BH dynamical ejecta being the dominant \textsl{r}-process source (i.e., $M_{LM}^{\rm ej}=3$) is highly disfavoured for all models. This is because in this case the values for the $^{129}$I/$^{247}$Cm ratio are limited to $< 3\lambda_L$ whereas the $^{129}$I/$^{247}$Cm production ratio $\lesssim 90$ for all dynamical ejecta models considered by \citet{Cote2021}\footnote{Other than the TF(D3C*) model which is an outlier.}. This is inconsistent with the measured value of $438\pm 184$.
\item If the $S_M$ source is the dominant contributor to the \textsl{r}-process (i.e., $M_{LM}^{\rm ej}=1/3$), NS-NS merger disk ejecta 1 is favoured while NS-NS merger disk ejecta 2 is marginally consistent with the observed values.
\item The $S_L$ source as NS-NS or NS-BH dynamical ejecta as an equal contributor to the \textsl{r}-process (i.e., $M_{LM}^{\rm ej}=1$) requires the $S_M$ source to be NS-NS merger disk ejecta 1. This is because when $S_L$ is an equal contributor, the probability distribution for the $^{129}$I/$^{247}$Cm ratio is limited to values of $\lesssim \lambda_M/2$. Among all the disk models, this can only be satisfied by the NS-NS merger disk ejecta 1 where the $^{129}$I/$^{247}$Cm production ratio can reach values of up to $\sim 900$ such that the probability distribution for the $^{129}$I/$^{247}$Cm ratio is non-negligible at the observed value of $438\pm 184$.
\item Overall, the most favoured scenario is the NS-NS merger disk ejecta 1 as the dominant $S_M$ source which gives the highest probability for the $^{129}$I/$^{247}$Cm ratio to be in the range consistent with the observed value of $438\pm 184$.
\end{itemize}
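The bound on the ratio when $S_L$ dominates can be made plausible with a simple back-of-the-envelope mixing estimate (a deliberate simplification that ignores the decay between events and the spatial dilution captured by the full diffusion model). If $S_L$ supplies a fraction $f$ of the total $^{129}$I, the mixed abundance ratio is the abundance-weighted harmonic mean of the production ratios,
\begin{equation*}
    \frac{^{129}\mathrm{I}}{^{247}\mathrm{Cm}}
    =\left(\frac{f}{\lambda_L}+\frac{1-f}{\lambda_M}\right)^{-1}
    \leq\frac{\lambda_L}{f},
\end{equation*}
so that taking, for illustration, $f\approx 3/4$ with $\lambda_L=10$ and $\lambda_M=100$ gives a ratio of $\approx 13$, i.e., pinned close to $\lambda_L$, far below the observed value of $438\pm 184$ and consistent with the limit of $\lesssim 3\lambda_L$ quoted above.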
For the case with two equally frequent \textsl{r}-process sources $S_M$ and $S_H$ with $\lambda_H/\lambda_M\approx 10$ and MRSN ejecta as the $S_H$ source, all the three NS-NS merger disk ejecta are possible as they result in a probability distribution for $^{129}$I/$^{247}$Cm that is compatible with the observed value of $438\pm 184$.
On the other hand, with MRSN ejecta as the $S_H$ source and NS-NS or NS-BH dynamical ejecta as the $S_L$ source with $\lambda_H/\lambda_L\approx 100$ we conclude that:
\begin{itemize}
\item NS-NS or NS-BH dynamical ejecta ($S_L$) as the dominant \textsl{r}-process source is highly disfavoured. Again, in this case the $^{129}$I/$^{247}$Cm ratio is limited to $\lesssim 3\lambda_L$ whereas the $^{129}$I/$^{247}$Cm production ratio $\lesssim 90$ for all dynamical ejecta models considered by \citet{Cote2021}. This makes it incompatible with the observed value of $438\pm 184$.
\item NS-NS or NS-BH dynamical ejecta ($S_L$) as either an equal or a subdominant contributor to the total \textsl{r}-process is consistent with the observed value.
\end{itemize}
Finally, with three equally frequent \textsl{r}-process sources with NS-NS or NS-BH dynamical ejecta as $S_L$, NS-NS merger disk ejecta as $S_M$ and MRSN ejecta as $S_H$, the conclusions we draw are:
\begin{itemize}
\item The probability distribution for the $^{129}$I/$^{247}$Cm ratio is limited to values between $\lambda_L$ to $\sim \lambda_M$. Thus, the ESS value of $438\pm 184$ disfavours NS-NS merger disk ejecta 3 and is marginally consistent with NS-NS merger disk ejecta 2 as the $S_M$ source whereas NS-NS merger disk ejecta 1 is favoured.
\item Similar to the scenario with two sources, the $S_L$ source as the NS-NS or NS-BH dynamical ejecta being the dominant \textsl{r}-process source (i.e., $M_{LH}^{\rm ej},M_{LM}^{\rm ej}=3$) is highly disfavoured for all models.
\item Overall, the most favoured scenario involves NS-NS merger disk ejecta 1 as the $S_M$ source, with either the $S_H$ (MRSN ejecta) or the disk ejecta as the dominant \textsl{r}-process source (i.e., $M_{LH}^{\rm ej},M_{LM}^{\rm ej}=1/3,1$ or $M_{LH}^{\rm ej},M_{LM}^{\rm ej}=1,1/3$).
\end{itemize}
\section{Summary and Outlook}
We studied the evolution of \textsl{r}-process isotopes including SLRs at the birth location of the sun and the prospect of using the $^{129}$I/$^{247}$Cm ratio to constrain the \textsl{r}-process sources. We find that the measured meteoritic value of the $^{129}$I/$^{247}$Cm ratio does not correspond to a single ``last'' \textsl{r}-process event when there are multiple sources with distinct $^{129}$I/$^{247}$Cm production ratios.
Instead, we find that the $^{129}$I/$^{247}$Cm ratio can be used to put important constraints on \textsl{r}-process sources when the ESS data of $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U is taken into account. In particular, based on the nucleosynthesis calculation by \citet{Cote2021} for various astrophysical sites, we find that models of NS-NS or NS-BH dynamical ejecta that are neutron rich and have low $^{129}$I/$^{247}$Cm ratio cannot be the dominant source of \textsl{r}-process. This statement holds both in the case where there are just two sources as well as when all three sources are considered.
Interestingly, this is consistent with current detailed BNSM merger simulations which predict the merger disk ejecta to be a more dominant source of \textsl{r}-process than the dynamical ejecta~\citep{Shibata:2019wef,Metzger:2019zeh}.
If there is a MRSN-like source that has a high value for the production ratio of $^{129}$I/$^{247}$Cm and is as frequent as BNSMs, then it can mix with the dynamical ejecta to produce values that are compatible with observations without the need for disk ejecta.
However, from theoretical expectations, this is unlikely as a substantial amount of \textsl{r}-process is expected from NS-NS merger disk ejecta.
In a realistic scenario where the disk ejecta is as frequent as the dynamical ejecta, NS-NS merger disk ejecta,
which has medium values of the $^{129}$I/$^{247}$Cm production ratio, is highly favoured when there is no MRSN-like source that is similarly
frequent. However, if there is a MRSN-like source, any of the current merger disk models calculated in \citet{Cote2021} are in fact possible.
Our analysis is based on simplifying assumptions such as equal frequency for all sources and fixed values of $\lambda$ for the three types of \textsl{r}-process sources and limited values of ejecta masses. In principle, future studies can explore a larger parameter space to identify allowed regions that are consistent with the ESS data.
However, for some realistic scenarios which have parameters that are not directly covered in this work, the results presented here can be easily extrapolated. For example, in the scenario with all the three sources, if the frequency of the MRSN-like events is lower, then it would effectively reduce to the scenario with two sources of type $S_L$ and $S_M$ and the conclusion drawn for such a scenario can be applied in this case. Similarly, if there are two different $S_M$ sources that have similar values of $\lambda$, then they can effectively be treated as a single $S_M$ source and the results for two sources with $S_L$ and $S_M$ can be applied in this case.
In future, with better simulations of astrophysical sites with improved nuclear physics inputs and accurate nucleosynthetic models, the ESS ratio of $^{129}$I/$^{247}$Cm along with $^{129}$I/$^{127}$I and $^{247}$Cm/$^{235}$U could be used to provide strong constraint that could help identify the main source of \textsl{r}-process and even rule out certain astrophysical sites.
\section*{Acknowledgements}
M.-R.~W. acknowledges support from the Ministry of Science and Technology, Taiwan under Grant No.~110-2112-M-001-050, the Academia Sinica under Project No.~AS-CDA-109-M11, and the Physics Division, National Center for Theoretical Sciences, Taiwan.
\section*{Data Availability}
Data is available upon request.
The two main types of basketball courts are wood flooring or an acrylic paint finish. Wood flooring is most commonly used in indoor play areas such as gyms, while acrylic paint is mostly used on outdoor basketball courts.
Amateur basketball courts vary widely in size.
Apart from the size, the recommended ceiling height is 27 feet (8.23 meters), and you can choose your favorite color for the basketball court you want to build from a wide number of color combinations.
Choose the suitable basketball court dimensions for your needs.
Level the playing surface by transferring dirt from high areas to low areas.
Make sure the concrete or asphalt is installed per ASBA specifications.
When installing basketball goal footers, make sure they are the proper size for the goal.
Paint the principal lines on your court with a line tape machine.
For safety, ensure you have a minimum runoff of 3 feet on the sideline and 5 feet on the baseline.
The basketball court should be oriented from north to south in order to minimize the effect of sun glare while playing.
Make sure that there are no low tree branches nearby that could interfere with shots or passes.
The condition of the land is a crucial factor when choosing a site for a basketball court. Choose land that is geologically stable and flat to limit earth movement and support work as much as possible.
Avoid highly reactive clays and prefer compactible soil.
Hire a qualified geotechnical engineer to examine the soil and check for problems such as expansive soils, high organic material content, and high groundwater conditions.
Building permit requirements vary by location, as different jurisdictions have different rules and regulations. Contact Ace Surfaces for more information on building permits for your exact location.
An appropriate drainage system is necessary to keep water away from your court surface. If uphill water drains into the soil beneath your court, it can damage the court extensively.
Proper slope (max 1%) is needed to allow water to drain away from the court.
Choose a suitable water drainage system from a number of drainage types according to your court location.
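As a quick sanity check on what the "max 1%" slope above means in practice, the sketch below computes the end-to-end drop for a given court length. The 94 ft and 50 ft lengths are illustrative assumptions (a full court and a half court), not figures from the text.

```python
# Rough check of what the "max 1%" drainage slope implies.
# Court lengths here are illustrative assumptions.
def drainage_drop_inches(length_ft, slope_percent):
    """Vertical drop in inches over `length_ft` at `slope_percent` grade."""
    drop_ft = length_ft * slope_percent / 100.0
    return drop_ft * 12.0  # feet -> inches

print(drainage_drop_inches(94, 1.0))  # ~11.3 in end-to-end on a full court
print(drainage_drop_inches(50, 1.0))  # 6.0 in on a half court
```

In other words, even the maximum recommended slope amounts to less than a foot of fall across a full-size court, which is imperceptible during play but enough to shed water.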
All acrylic surfaces are prone to damage under harsh temperatures and weather.
Weather that is too cold (below 50°F) or too hot adversely affects the drying process.
Overcast conditions also play a big role in preventing the surface materials from drying properly.
If you are pouring a concrete slab, check the weather to make sure the concrete will be dry before inclement weather approaches. After the concrete has been poured, wait 36 hours before using the court, and let the court settle a minimum of 30 days before applying acrylic coatings.
Asphalt is an affordable option and comes with a lesser price range than the concrete construction.
The installation process for asphalt is easier than the concrete.
Material costs for asphalt is less expensive than concrete.
If asphalt is improperly built, it will not last as long as concrete.
Post Tension concrete, while more expensive, is typically going to outlast an asphalt surface.
Concrete play courts are more durable, low maintenance, and crack resistant than asphalt.
The warranty for the standard acrylic surface is 1 year, whereas the premium surface comes with a warranty of 5 years.
Life expectancy for the standard surface is 3-5 years, but for the premium surface it is 20-30 years.
A premium surface provides less down time and more vibrant, longer-lasting colors.
Cushioned basketball courts are the premium hard courts and are considered the ultimate hard court.
Cushioned courts are easier on the body than a standard hard court.
Cushioned courts also play slower than a standard hard court.
Ace Surfaces helps to build your dream basketball court from a wide variety of court options such as an indoor basketball court, an outdoor basketball court or a cushioned basketball court. If you are looking to build a basketball court or need a resurfacing and are unsure what to do or how to answer the questions in your mind, we are ready to give you our best solution for your basketball court project. Ace Surfaces has partnered with Advanced Polymer Technology and Laykold creating Laykold Masters basketball court surfacing. Laykold Masters provides indoor and outdoor basketball court surfacing materials that provide durability, resistance to weather and resistance to fading from ultraviolet rays, with a 5-year warranty which is remarkable in the industry. You may also consider HARO Sports flooring for hardwood flooring. Whatever you want, we are the best at it.
As a provider of solutions to the EU's strategic dependencies, the pulp & paper sector must be allowed to 'keep the lights on'
Cepi, the European paper industry association, and its members across Europe have responded decisively in the face of the unjustified military aggression against Ukraine and in a spirit of solidarity with the people in Ukraine at this difficult time.
"We support EU leaders in their defence of international law, human rights and democratic values. Europeans see our products as essential goods. But to provide them and maintain business as usual has proven difficult. Our sector is particularly hit by the spike in energy prices. At the time of writing, many paper mills across of Europe were forced to stop production or to introduce temporary downtimes. This situation puts at risk the jobs of over 4 million people who are employed in the European forest-based value chain."
The supply chain shortages could have major implications for the pulp and paper industry. Products of all kinds made of pulp and paper, including packaging and essential hygiene products, are in danger of disruptions. This includes the transport and delivery of food and pharmaceuticals, including to the populations which need it the most in the face of the multiple current crises.
"The Versailles Declaration has called for the protection of critical infrastructure, we are now calling for our sectors (NACE codes 17, 18 and 02.10) to be recognised as essential suppliers in several critical European value chains and to be eligible for state aid and preferential gas deliveries. This will ensure continuity of vital supplies to society during the double energetic and security crisis, and as the EU is still recovering from the Covid-19 health crisis."
"Reasons for further diversification of gas supplies, as called for by the EU Commission and several Member States are well understood, and supported, by our sector. Over the past decade, we have been leading in terms of investments in the transition to a greener and more efficient energy model. But it is important that new restrictions are applied in a pragmatic and fair way, and with deep respect to the fundamental role of different economic sectors."
"A plan to wean ourselves off Russian gas and oil as a feedstock for manufacturing should be backed by the necessary national and European resources. Wood-based materials clearly are such a resource, and many connected sectors are already starting to tap into our readily available substitutes. In addressing Europe's strategic dependencies, Heads of State and Government have also underlined the importance of critical raw materials; wood-based materials are a key enabling raw material. Pulp and paper and their derivatives hold potential as circular, sustainable and home-grown substitutes for those fossil materials impacted by the crisis. This includes a number of plastic-based products, but also textiles, packaging, and even electric car batteries. By replacing CO2-intensive materials with wood-based products we reduce EU emissions by 10%, or 410 Mt CO2 equivalent per year."
Q: How do I hide the x-axis serifs (tick marks) in chart.js 2?
And remove all padding around the chart?
var barChartEl = document.getElementById("barChart");
var barChart = new Chart(barChartEl, {
type: 'bar',
data: {
labels: ["2012 г.", "2013 г.", "2014 г.", "2015 г."],
datasets: [{
data: [542, 34127, 39797, 51450]
}]
},
options: {
scales: {
yAxes: [{
ticks: {
display: false
}
}]
}
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.bundle.js"></script>
<canvas id="barChart" width="400" height="200"></canvas>
A: Resolved with the following:
scales: {
xAxes: [{
gridLines: {
tickMarkLength: 0
}
}]
}
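Putting the two pieces together — the y-axis label hiding from the question and the x-axis fix from the answer — a combined options object might look like the sketch below. This is only the plain options object (property names follow the chart.js 2.x schema already used above; `layout.padding` is the chart.js 2.x option for outer padding); pass it as `options` when constructing the chart.

```javascript
// Combined chart.js 2.x options (a sketch, not a full chart): hides the
// y-axis labels as in the question and removes the x-axis tick marks as
// in the answer.  Pass this object as `options` to `new Chart(...)`.
const options = {
  layout: {
    padding: 0              // no extra padding around the chart area
  },
  scales: {
    xAxes: [{
      gridLines: {
        tickMarkLength: 0   // no small tick ("serif") marks on the x-axis
      }
    }],
    yAxes: [{
      ticks: {
        display: false      // no numeric labels on the y-axis
      }
    }]
  }
};

console.log(JSON.stringify(options, null, 2));
```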
Q: How to sum on a complicated condition in a pandas dataframe? I have the dataframe below:
df=pd.DataFrame(np.arange(1,19).reshape(6,3),columns=list('ABC'),index=list('acbabc'))
A B C
a 1 2 3
c 4 5 6
b 7 8 9
a 10 11 12
b 13 14 15
c 16 17 18
I would like to compute the conditionally summed dataframe shown below:
A B C
a 11 13 15
b 20 22 24
c 20 22 24
Each element is a conditional sum over df. For instance (I am not confident about this expression):
result.loc[0,0]=df.loc[df.A=="a"].sum()
How can I get this dataframe?
A: Groupby index and sum the columns should give you what you need:
df.groupby(df.index).sum()
# A B C
#a 11 13 15
#b 20 22 24
#c 20 22 24
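For intuition, what the groupby-and-sum does here can be sketched without pandas: collect rows by their index label and add them column-wise. The snippet below is a plain-Python illustration using the same numbers (not how pandas implements it internally):

```python
from collections import defaultdict

# Same data as the question: rows labelled a, c, b, a, b, c
# with columns A, B, C.
rows = [
    ("a", (1, 2, 3)),
    ("c", (4, 5, 6)),
    ("b", (7, 8, 9)),
    ("a", (10, 11, 12)),
    ("b", (13, 14, 15)),
    ("c", (16, 17, 18)),
]

totals = defaultdict(lambda: [0, 0, 0])
for label, values in rows:
    for i, v in enumerate(values):
        totals[label][i] += v

for label in sorted(totals):
    print(label, totals[label])
# a [11, 13, 15]
# b [20, 22, 24]
# c [20, 22, 24]
```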
Every time I see a pattern I absolutely love and must make, I save it to bookmarks then rarely get to it for all the gorgeous patterns I already have on file. So, this page's purpose is twofold: organising my favourite pattern links, and sharing them with others.
http://beautifulcrochetstuff.com/crochet-summer-top-free-pattern/#comment-2865 A gorgeous pink top made by attaching tapes to a beautiful yoke. I'll make mine longer in front and back as I'm 48 and not in shape enough for the designer's daring look. I'm picturing the curved hemline reaching mid-thigh in back and slightly higher in the front. Did I mention I want one for each day of the week? Colours? Yellow, brown, mint green (loving 'this season's' colour, despite not liking green usually), purple, white, royal blue and red…? I better get cracking – summer's almost here!
https://knottingnoodles.wordpress.com/2012/10/15/little-cupcake-newborn-hat/#comment-967 I've a new grandchild (#3) on the way, so I was wrapped to find this Cupcake Newborn Hat pattern – and a freebie too!
https://www.youtube.com/watch?v=xwHlQ7Y0_ug Hairpin lace tutorial with cardigan pattern – gorgeous!
http://www.shareapattern.com/knitting/cancun-boxy-lace-top/ This one's not only to share a link to a gorgeous top, it's also to share a wonderful site!
http://www.itsalwaysautumn.com/2015/10/21/stuffed-animal-teddy-bear-robe-free-sewing-pattern.html#comment-491067 It's Always Autumn: Teddy bear robe – too cute!
http://littlemonkeyscrochet.com/the-half-n-half-slouch/ A perfectly simple to make but stunning slouch!
{"url":"https:\/\/socratic.org\/questions\/equal-volumes-of-0-25-m-hno2-and-0-25-m-hno3-are-titrated-separately-with-0-25-m","text":"# Equal volumes of \"0.25 M\" \"HNO\"_2 and \"0.25 M\" \"HNO\"_3 are titrated separately with \"0.25 M\" \"KOH\". Which would be the same for both titrations?\n\n## a) initial pH b) pH halfway to the equivalence point c) pH at the equivalence point d) pH when 5 mL excess KOH has been added I know the answer is D but I don't understand why, could someone explain? Thanks a million ! (:\n\nDec 30, 2017\n\n#### Explanation:\n\nAs you know, you're dealing with nitrous acid, ${\\text{HNO}}_{2}$, a weak acid, and nitric acid, ${\\text{HNO}}_{3}$, a strong acid.\n\nRight from the start, this tells you that point (a) cannot be true because the weak acid does not dissociate completely in aqueous solution to produce hydronium cations.\n\nThis implies that the concentration of hydronium cations will be greater in the solution that contains the strong acid, which, in turn, means that the $\\text{pH}$ of the strong acid will be lower than the $\\text{pH}$ of the weak acid.\n\nSo in terms of initial $\\text{pH}$ values, you have\n\n\"pH\"_ (\"0 HNO\"_ 2) > \"pH\"_ (\"0 HNO\"_3)\n\nNow, point (b) cannot be true because, at the half-equivalence point, the $\\text{pH}$ of the nitrous acid solution will be equal to the 'p\"K_a of the weak acid.\n\nThis is the case because the half-equivalence point, the nitrous acid solution will contain equal concentrations of nitrous acid and nitrite anions, ${\\text{NO}}_{2}^{-}$, the conjugate base of nitrous acid, i.e. 
you are now in the buffer region.\n\nBy comparison, at the half-equivalence point, the $\\text{pH}$ of the strong acid solution will be lower than the $\\text{pH}$ of the weak acid solution because the strong acid is fully dissociated in aqueous solution.\n\nThis implies that adding the strong base will only reduce the concentration of the hydronium cations produced by the full ionization of the strong acid.\n\nKeep in mind that unlike the nitrite anion, which acts as a weak base, you can assume that the nitrate anion does not act as a base in aqueous solution, i.e. that it will not react with water to reform nitric acid.\n\nPoint (c) cannot be true because at the equivalence point, the $\\text{pH}$ of the weak acid solution will actually be $> 7$ while the $\\text{pH}$ of the strong acid solution will be equal to $7$ at ${25}^{\\circ} \\text{C}$.\n\nThis is the case because once the neutralization is complete, the weak acid solution will still contain nitrite anions, ${\\text{NO}}_{2}^{-}$, which will react with water to produce hydroxide anions and reform some of the nitrous acid.\n\n${\\text{NO\"_ (2(aq))^(-) + \"H\"_ 2\"O\"_ ((l)) rightleftharpoons \"HNO\"_ (2(aq)) + \"OH}}_{\\left(a q\\right)}^{-}$\n\nThis will cause the $\\text{pH}$ of the solution to be $> 7$, i.e. 
the resulting solution will be basic.\n\nSince the nitrate anion does not react with water to produce hydroxide anions and reform some of the nitric acid, the resulting solution will be neutral and it will have $\\text{pH} = 7$.\n\nFinally, point (d) is correct because adding $\\text{5 mL}$ of excess potassium hydroxide, a strong base, will cause the $\\text{pH}$ of the two solutions to reach the same level.\n\nThis happens because even though the $\\text{pH}$ of the weak acid solution is already $> 7$ and the equivalence point, the concentration of hydroxide anions it already contains will not be high enough to affect the excess hydroxide anions coming from the strong base.\n\nIn other words, for the weak acid solution, you can use the approximation\n\n[\"OH\"^(-)] = [\"OH\"^(-)]_ \"already present\" + [\"OH\"^(-)]_ \"from excess strong base\"\n\n[\"OH\"^(-)] ~~ [\"OH\"^(-)]_ \"from excees strong base\"\n\nBy comparison, once you add the excess strong base, the strong acid solution, which at the equivalence point contains the very small concentration of hydroxide anions produced by the auto-ionization of water\n\n[\"OH\"^(-)] = 1 * 10^(-7)color(white)(.)\"M\" -> a neutral aqueous solution at ${25}^{\\circ} \\text{C}$\n\nwill have\n\n[\"OH\"^(-)] = 1 * 10^(-7)color(white)(.)\"M\" + [\"OH\"^(-)]_ \"from excess strong base\"\n\n[\"OH\"^(-)] ~~ [\"OH\"^(-)]_ \"from excess strong base\"\n\nThis is why the $\\text{pH}$ of the two solutions will be equal after $\\text{5 mL}$ excess strong base is added. 
In both cases, you have\n\n\"pH\" = 14 - [-log([\"OH\"^(-)]_ \"from excess strong base\")]\n\nhence why point (d) is true.","date":"2020-02-28 09:25:06","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 34, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7255808711051941, \"perplexity\": 1822.6316303633823}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-10\/segments\/1581875147116.85\/warc\/CC-MAIN-20200228073640-20200228103640-00064.warc.gz\"}"} | null | null |
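To make point (d) concrete, here is a small numerical sketch. The starting volume (25 mL of each acid, hence 25 mL of KOH to reach equivalence) is an assumption for illustration, since the problem does not specify it; the conclusion that both titrations end at the same pH does not depend on that choice.

```python
import math

# Assumed setup: 25 mL of 0.25 M acid, neutralized by 25 mL of 0.25 M KOH,
# then 5 mL of extra 0.25 M KOH added.  The 25 mL figure is hypothetical.
c_koh = 0.25                          # mol/L
v_excess = 5.0 / 1000                 # L of excess KOH
v_total = (25.0 + 25.0 + 5.0) / 1000  # L, total final volume

n_oh_excess = c_koh * v_excess        # mol of excess OH-
conc_oh = n_oh_excess / v_total       # ~0.023 M; swamps both the 1e-7 M from
                                      # water autoionization and the small
                                      # amount from nitrite hydrolysis
ph = 14 + math.log10(conc_oh)         # pH = 14 - pOH, pOH = -log10[OH-]
print(round(ph, 2))                   # ~12.36 for BOTH titrations
```

Because the excess hydroxide concentration dominates every other source of OH⁻, the same number comes out whether the acid was HNO2 or HNO3.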
{"url":"https:\/\/www.lernapehlivan.com\/publication\/2021-integers-spinor-regular-forms\/","text":"# Representation Numbers of Spinor Regular Ternary Quadratic Forms\n\n### Abstract\n\nRecently Earnest and Haensch (2019) established that there are exactly twenty-nine (classes of) spinor regular primitive positive-definite integral ternary quadratic forms, which are not regular. In this paper we determine explicit formulas for the representation numbers of the twenty-seven of these ternary quadratic forms, which are alone in their spinor genus. For the remaining two spinor regular forms, which are not alone in their genus, we determine their representation numbers for even positive integers. As a consequence of our formulas we are able to determine exactly which positive integers are represented by the twenty-seven ternary quadratic forms alone in their spinor genus. The integers represented by six of these forms had been found by Lomadze in 1977 and three of them by Berkovich in 2015, one form of which had already been treated by Lomadze. 
Our method is a new approach and quite different from the methods of said authors.\n\nPublication\nINTEGERS 21 (2021), A99","date":"2022-09-24 21:59:20","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8393281698226929, \"perplexity\": 469.4384714881782}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030333541.98\/warc\/CC-MAIN-20220924213650-20220925003650-00138.warc.gz\"}"} | null | null |
The Imiut is a fetish made from the skin of a decapitated animal, most often a bull or a cat. The skin was tied by its tail to a pole topped with a lotus bud, and the pole was set into a special stand. The fetish was used in funerary rites from the earliest periods of ancient Egyptian history; some of the earliest finds date to the First Dynasty. Despite this ancient origin, the purpose of the fetish remains a mystery to this day.
History
From the earliest times of ancient Egyptian mythology, the deity Imiut (meaning "he who is in his skin") was probably a god of the afterlife, although there is no reliable evidence today to confirm this. One of the first examples of the Imiut was found by a Metropolitan Museum expedition in a shrine near the pyramid of Senusret I.
Because it was associated with Anubis in later periods, it is sometimes called the Anubis fetish. According to one theory, the Imiut may have been a symbol of embalming, like Anubis, though this is doubtful, since Anubis was originally a god of the dead and only came to be associated with embalming in later times. Depictions of the Imiut fetish appear on the walls of ancient Egyptian temples and tombs, and they were sometimes included among funerary equipment. Two such funerary fetishes were found by Howard Carter in the tomb of Tutankhamun.
See also
Fetish
Ancient Egyptian religion
Notes
External links
Imiut fetish article from Ancient Egypt Online
Gods by alphabet
Ancient Egyptian deities
Deities of death and the underworld
Ancient Egyptian mythology | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,750 |
{"url":"https:\/\/es.mathworks.com\/help\/simulink\/slref\/simulink.codeimporter.customcode-class.html","text":"Specify custom code settings for Simulink.CodeImporter and sltest.CodeImporter classes\n\n## Description\n\nThe Simulink.CodeImporter.CustomCode class is a handle class.\n\n## Creation\n\nWhen you create an object of class Simulink.CodeImporter, an object of class Simulink.CodeImporter.CustomCode is automatically created as the CustomCode property of that object. Do not create an object of class Simulink.CodeImporter.CustomCode directly.\n\n## Properties\n\nexpand all\n\nNote\n\nThe first four properties listed below (SourceFiles, InterfaceHeaders, IncludePaths, and Libraries) let you specify file path information about the location of your custom code. To enable portability, specify this information as a file path relative to the folder specified in the OutputFolder property of the relevant Simulink.CodeImporter object rather than as an absolute path.\n\nSource files to be imported, specified as a cell array of character vector or a string array. Supported files include .c and .cpp files. Each file name can be specified as a path relative to the folder specified in the OutputFolder property of the relevant Simulink.CodeImporter object or as an absolute path.\n\nProviding a value for SourceFiles is optional for Simulink.CodeImporter and optional for sltest.CodeImporter when the TestType is IntegrationTest.\n\nExample: {'foo.c', 'bar.c'}\n\nExample: [\".\\foo.c\", \"..\\bar.c\"]\n\nExample: fullfile(pwd, 'Src', 'foo.c')\n\nData Types: cell array of character vectors | string array\n\nInterface headers to be imported, specified as a cell array of character vectors or a string array. Supported files include .h and .hpp files. Each file name can be specified as a path relative to the folder specified in the OutputFolder property of the relevant Simulink.CodeImporter object or as an absolute path. 
Interface headers should contain the function declarations and type definitions that you want to bring into Simulink\u00ae. These declarations and definitions are usually contained in the export header of your C code library.\n\nExample: {'foo.h', 'bar.h'}\n\nExample: [\".\\foo.h\", \"..\\bar.h\"]\n\nExample: fullfile(pwd, 'Hdr', 'foo.h')\n\nData Types: cell array of character vectors | string array\n\nFolders containing included header files for the parser to find, specified as a cell array of character vectors or a string array. Each folder path can be specified as a path relative to the folder specified in the OutputFolder property of the relevant Simulink.CodeImporter object or as an absolute path.\n\nExample: {'.', '..\\..'}\n\nExample: [\".\\Include1\", \"..\\Include2\"]\n\nExample: fullfile(pwd, 'Include1')\n\nData Types: cell array of character vectors | string array\n\nLibraries that contain custom object code to link, specified as a cell array of character vectors or a string array. Supported files include .obj, .dll, .lib, .so, .o, .a, and .dylib files. Each file name can be specified as a path relative to the folder specified in the OutputFolder property of the relevant Simulink.CodeImporter object or as an absolute path.\n\nProviding libraries is optional.\n\nExample: {'foo.lib', 'foo.dll'}\n\nExample: [\".\\foo.so\", \"..\\bar.so\"]\n\nData Types: cell array of character vectors | string array\n\nPreprocessor macro definitions to be added to the compiler command line, specified as a cell array of character vectors or a string array. '-D' is optional in defines.\n\nExample: {'-D DEF1', '-D DEF2'}\n\nExample: [\"DEF1\", \"DEF2\"]\n\nData Types: cell array of character vectors | string array\n\nCustom code language, specified as 'C' or 'C++'. 
C and C++ are the only supported languages.\n\nData Types: character vector | string scalar\n\nAdditional complier flags to be added to the compiler command line, specified as a cell array of character vectors or a string array.\n\nExample: {'\/O2' , '\/Og'}\n\nExample: \"-g\"\n\nData Types: cell array of character vectors | string array\n\nAdditional linker flags to be added to the linker command line, specified as a cell array of character vectors or a string array.\n\nExample: {'\/WX'}\n\nData Types: cell array of character vectors | string array\n\nOption to enable global variables as function interfaces, specified as a logical scalar. If set to true, global variables accessed by the custom code functions will be treated as function interfaces in the generated Simulink library. See Call C Caller Block and Specify Ports and Enable global variables as function interfaces.\n\nData Types: logical scalar\n\nDefault array layout for custom code functions to use to access input argument arrays, specified as NotSpecified, RowMajor, ColumnMajor, or Any. You can override the default for an individual function by using the ArrayLayout property of the Simulink.CodeImporter.Function object corresponding to that function. Matrix data passed to and from your C functions is converted to the function array layout you specify. See Integrate C Code Using C Caller Blocks and Default function array layout.\n\nData Types: enum\n\n## Examples\n\ncollapse all\n\nCreate an object of class Simulink.CodeImporter. 
Set the properties of its CustomCode property to specify custom code to import into Simulink.\n\nobj = Simulink.CodeImporter(\"pumpController\"); obj.OutputFolder = \".\"; obj.CustomCode.InterfaceHeaders = [\"pumpController.h\"]; obj.CustomCode.IncludePaths = [\".\/include\"]; obj.CustomCode.SourceFiles = [\"src\/pumpController.c\" \"src\/utils.c\"];\n\nIntroduced in R2021a","date":"2021-07-31 11:09:42","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2307310700416565, \"perplexity\": 8184.027407596688}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046154089.6\/warc\/CC-MAIN-20210731105716-20210731135716-00327.warc.gz\"}"} | null | null |
Plans for quarantine-free travel for fully vaccinated Brits being 'worked on'
Updated Thursday, 24th June 2021, 8:23 am
Ministers are looking to scrap the self-isolation requirement for travellers returning from an amber list country (Photo: Getty Images)
Plans to introduce quarantine-free travel for fully vaccinated Brits are being worked on by the UK government, Matt Hancock has confirmed.
The Health Secretary said that ministers are looking to scrap the requirement to self-isolate for 10 days after returning to the UK from an amber list country, but warned international travel remains a "difficult" area.
Plans are still in progress
The UK government is looking to replace the quarantine requirement with daily Covid-19 testing instead, although the approach has not yet been signed off by clinical advisers.
A date for the change to quarantine rules has not yet been confirmed, but ministers are currently "working with clinicians" to ensure the plan is safe and secure to introduce.
Mr Hancock said the government hopes to relax restrictions on foreign travel soon following the continued success of the vaccination programme, but warned the approach will be cautious so as not to undo the progress being made in driving down cases in the UK.
Speaking on LBC radio, he said: "The whole point of the vaccine programme is to be able to remove restrictions, and for people to be able to be kept safe by the vaccine rather than by these rules.
"So we are working on a plan for the double-vaccinated people, using tests, and to have that testing regime in place, instead of having to have the quarantine in some circumstances.
"We're working with the clinicians, because we want to make sure the plan is safe and secure, so I can't give you a date but what I can tell you is that I'm in favour of moving forward in this area."
Brits will need to show vaccination status
The Health Secretary said the main NHS app, which is different from the Covid-19 app and records vaccination status, is "important" as countries are likely to need proof that Britons travelling abroad have had their jabs.
Six million people have now downloaded the app and it is expected that Brits will need to use it to show proof they have been vaccinated when entering other countries.
Mr Hancock said: "We can now, all of us, see our vaccine status, see your testing status, on the NHS app.
"Six million people have now downloaded the main NHS app and on that you can show whether you have had the jabs.
"It's important because we know other countries are going to say that they want proof that you have been vaccinated before you go.
"So, when travel is opened up, we are going to make sure people have got that ability to prove it."
A 'difficult year for travel'
While plans to lift all lockdown restrictions in England on 19 July remain on track, Prime Minister Boris Johnson has played down suggestions that restrictions on foreign travel could be lifted soon.
Mr Johnson said on Monday (21 June) that it will be a "difficult year for travel" as the priority is on keeping the UK safe, and preventing the virus coming back in.
The next travel update is due to take place on 28 June, after which it is hoped more countries will be added to the green list.
The current limited ability to travel abroad has prompted Ryanair and Manchester Airports Group, which owns Manchester, Stansted and East Midlands Airports, to prepare to take legal action, with both calling for greater transparency on how Whitehall decides which countries are added to the green, amber and red lists.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,857 |
Miklós Gábor (April 1919, Zalaegerszeg – July 1998, Budapest) was a Hungarian actor.
Selected filmography
1961: Alba Regia (French title: Les cigognes s'envolent à l'aube), directed by Mihály Szemes
External links
Born in April 1919
Born in Zalaegerszeg
Hungarian film actor
Hungarian stage actor
Kossuth Prize laureate
Died in July 1998
Died in Budapest
Died at age 79
Buried at Farkasrét Cemetery | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 647 |
San Antonio Spurs at Miami Heat NBA Finals Game 7 Recap
By Dustin Kent
NBA Finals Game 7 Recap (Miami, FL) — They say Game 7's are where legends of the game are made. After Thursday night's Game 7 performance, the legend of LeBron James will continue to grow.
The NBA's four-time league Most Valuable Player earned his second straight Finals MVP with a dominant 37-point, 12-rebound performance in leading the Miami Heat to a 95-88 victory over the San Antonio Spurs and the franchise's second consecutive NBA title.
(Photo Credit: Reuters) The Miami Heat celebrate their second straight NBA title after defeating the San Antonio Spurs 95-88 in Thursday night's Game 7.
It's a performance that hit back at many of the biggest criticisms James has faced through his career: not being a clutch player, not being able to knock down shots from the perimeter in big moments.
After Thursday night, those criticisms can be officially retired.
"Listen, for me, I can't worry about what everybody says about me," James said after the game. "I'm LeBron James from Akron, Ohio, from the inner city. I'm not even supposed to be here."
James came through with perhaps the biggest performance of his career, making a playoff-high five three-pointers and peppered the sinking Spurs defense with jumpers all night, the biggest of which coming with 27 seconds left to put Miami up 92-88 with the Heat clinging to a two-point lead.
"I work on my game a lot. Throughout the off-season, I put a lot of work into it," James said of his improved shooting. "To be able to come out here and see the results happen, it's the ultimate. I'm at a loss for words."
James came up with a steal off an errant Manu Ginobili pass on the ensuing possession, then made two free throws with 23 seconds to play to put the Heat up six and all but seal the victory.
The win gave the Heat's Big 3 a second straight title, and gave the Heat organization its third overall, with coach Erik Spoelstra saying that this victory was the hardest earned of all.
"This is the toughest series we've ever been in," he said. "Our absolute respect and hats go off to the San Antonio Spurs. It's a class organization, a championship organization. We have as much respect for them as anybody in the league."
Said James: "The second one was way harder than the first one. After I won my first one, people started telling me it would get easier and easier. That's not true. This was the hardest by far."
Miami got to Game 7 with a miraculous late-game comeback and finish in Game 6, and had to work every bit as hard to win Thursday.
The Heat never led by more than six points in the first half and needed a buzzer-beating bank shot three-pointer to take a 72-71 lead into the fourth period.
Consecutive mid-range jumpers by James gave the Heat an 85-79 lead with 4:32 to play, but just as Miami appeared destined to pull away, the Spurs responded by trimming the margin back to two on a three-pointer by Kawhi Leonard to make it 90-88 with two minutes left.
Two possessions later, the Spurs had a golden opportunity to tie the game with just under 50 seconds to play, but Tim Duncan missed two consecutive shots at point blank range, with James' jumper on the next possession putting Miami up four.
It was a huge missed opportunity for the future Hall of Famer, who finished with 24 points and 12 rebounds, but was dejected in the post-game press conference.
"The obvious word is disappointing," Duncan said. "I made some bad decisions, missed some tough shots. I don't know what to say. Credit to the Miami Heat. LeBron was unbelievable, Dwyane was great. I just think they found a way to get it done. We gave ourselves opportunities to win the game, but we couldn't turn the corner. They made more plays down the stretch."
James and Wade combined for 50 points on 23-of-44 shooting, including 29 of the team's 46 first half points, as well as 36 of 41 Miami points over a stretch of the second and third periods.
"They played Hall of Fame basketball tonight," Popovich said of the Heat duo.
Like James, Wade made the Spurs pay for their defensive strategy of playing off and enticing the duo to take jump shots, making several mid-range shots en route to a 14-point first half, with James taking over in the third period by knocking in 3-of-4 threes for 13 points.
Miami made 12 three-pointers overall, with Shane Battier knocking down 6-of-8 from long distance after coming into the night shooting 25 percent from long distance in the postseason.
"It's better to be timely than good," Battier said after the game.
James said he wasn't surprised to see the veteran forward come through for the team in such a big moment.
"He's a true professional . He never got down on himself," he said. "Every time Shane is open, we tell him to shoot the ball. For him to come through with six threes was big-time."
Wade was also able to bounce back, shrugging off a pedestrian Game 6 performance to make 11-of-21 shots, grab 10 rebounds, and play perhaps his sharpest defensive game of the series to help lead the Heat to another title, the third of his career.
"This is sweet. This is the sweetest one by far because of everything we've been through and everything I've been through individually," Wade said, noting his prior struggles through the playoffs due in part to a deep bone bruise that limited his explosiveness. "To get here to this point and have this kind of performance to be able to help my team is so special."
But while Wade's contribution was key, the game and the series ultimately belonged to James, whose performance at both ends of the court while playing a combined 95 minutes in the final two games left Spoelstra flummoxed for where he found the energy.
"I'm not sure. He's the best conditioned athlete in this game, and he takes pride and puts in so much time into it," he said. "He was guarding Parker, Ginobili, whoever was the most dangerous threat, and he had to create so much offense for us. He probably lost 12-15 pounds in the playoff run expending so much energy. He always rises to the occasion when it matters most and when the competition is fiercest."
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,406 |
The annual Speech Day and Prize-Giving was held on Saturday 25 May 2019. A highlight of the Llandovery College year, this was an opportunity as always to mark and celebrate the many achievements of our pupils.
The College was honoured to welcome two distinguished guests, The Venerable Dorrien Davies, Archdeacon of Carmarthen and Kirsty Williams CBE, Assembly Member and Minister for Education.
The day began with the traditional Service of Praise and Thanksgiving in a packed Llandingat Church during which The Venerable Dorrien Davies delivered his wonderful address which was both meaningful and witty.
At the Speech and Prize-Giving Ceremony, Mr Michael Morgan, Chair of the Governing Body, and Ms Anna Sandford MA, Interim Warden addressed the audience. Guest of Honour, Kirsty Williams CBE then spoke to the pupils and presented the prizes. The ceremony concluded with a Vote of Thanks and heartfelt speeches from both Head Girl Eve Hancock and Head Boy Griffydd Evans.
With the official business of the day concluded there followed a sumptuous lunch in the Great Hall. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 8,596 |